16:30:22 #startmeeting fedora_coreos_meeting
16:30:22 Meeting started Wed Jan 20 16:30:22 2021 UTC.
16:30:22 This meeting is logged and archived in a public location.
16:30:22 The chair is lucab. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:22 Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:30:22 The meeting name has been set to 'fedora_coreos_meeting'
16:30:32 #topic roll call
16:30:36 .hello2
16:30:37 .hello2
16:30:37 dustymabe: dustymabe 'Dusty Mabe'
16:30:38 .hello2
16:30:39 slowrie: slowrie 'Stephen Lowrie'
16:30:40 .hello2
16:30:42 lucab: lucab 'Luca Bruno'
16:30:43 .hello2
16:30:45 olem: olem 'Olivier Lemasle'
16:30:45 .hello2
16:30:48 lorbus: lorbus 'Christian Glombek'
16:30:51 PanGoat: Sorry, but you don't exist
16:30:53 .hello2
16:30:57 cyberpear: cyberpear 'James Cassell'
16:31:01 :)
16:31:25 #chair lorbus PanGoat slowrie dustymabe jlebon cyberpear
16:31:25 Current chairs: PanGoat cyberpear dustymabe jlebon lorbus lucab slowrie
16:31:30 You can do '.hello ' if it differs from your IRC nick
16:32:20 #chair olem
16:32:20 Current chairs: PanGoat cyberpear dustymabe jlebon lorbus lucab olem slowrie
16:32:58 #topic Action items from last meeting
16:33:49 I think the last one was pretty quick and no action items were added
16:33:52 slowrie: thanks
16:34:22 Only docs for two items and the twitter survey
16:34:33 redbeard's action items were there for a while, so i stopped re-actioning them
16:34:45 I finished one doc item last night (#205) and will now move to the survey
16:34:55 then the remaining doc
16:34:58 PanGoat++
16:35:25 I'll start from the oldest 'meeting' ticket from https://github.com/coreos/fedora-coreos-tracker/labels/meeting
16:36:01 .hello2 siosm
16:36:02 travier: Sorry, but you don't exist
16:36:09 .hello siosm
16:36:10 travier: siosm 'Timothée Ravier'
16:36:11 well, the council status is not for today
16:36:18 .hello jberkus
16:36:19 jberkus: jberkus 'Josh Berkus'
16:36:33 .hello2 jaimelm
16:36:34 PanGoat:
Sorry, but you don't exist
16:36:34 #topic Platform Request: CloudStack
16:36:41 weird
16:36:46 #link https://github.com/coreos/fedora-coreos-tracker/issues/716
16:36:58 I think this is from olem
16:37:02 PanGoat: .hello jaimelm
16:37:16 #chair travier jberkus
16:37:16 Current chairs: PanGoat cyberpear dustymabe jberkus jlebon lorbus lucab olem slowrie travier
16:37:34 .hello jaimelm
16:37:35 PanGoat: jaimelm 'Jaime Magiera'
16:37:40 There we go. thanks.
16:37:43 lucab: I have an item for the other stuff section of the meeting
16:37:47 olem: are you there for a quick summary?
16:37:51 Yes. I'm a CloudStack user and Fedora packager. I started using fcos quite recently, and I'm trying to make fcos work on CloudStack.
16:38:14 There's already some kind of support in Ignition and Afterburn from CoreOS Container Linux
16:38:53 but it is currently broken because it relies on systemd-networkd and fcos switched to NetworkManager
16:38:59 wow, nice work olem filling out all the required info in the request template
16:39:01 .hello2 jasonbrooks
16:39:01 jbrooks: Sorry, but you don't exist
16:39:09 .hello jasonbrooks
16:39:10 jbrooks: jasonbrooks 'Jason Brooks'
16:39:36 Also, CloudStack supports multiple hypervisors, hence requires multiple image formats.
16:40:18 olem: it looks like a single VHD may cover most of them, other than vmware, right?
16:40:42 olem: I'm trying to understand the requirement on systemd-networkd
16:40:57 is that something we can easily fix?
16:41:11 dustymabe: afterburn does parse the DHCP lease out of systemd-networkd
16:41:38 so we could change that code?
16:41:39 lucab: I guess so.
But I also think that vmware is the most used hypervisor in the CloudStack community
16:41:43 dustymabe: I still haven't found how to do the same with NM, but it may be possible
16:42:05 lucab: Ignition seems to do the same :\
16:42:21 slowrie: ouch, I didn't know
16:42:21 i'm a little concerned about the ever-growing list of cloud images -- it's a good problem to have, but it makes me want to revisit smarter ways of doing this
16:42:33 lucab: we just need to more or less find the IP handed out?
16:43:22 dustymabe: IIRC they used a custom DHCP option to signal where the metadata endpoint is
16:43:46 yes. Actually there are multiple ways to find the metadata server address (virtual router). E.g. cloud-init tries to get the virtual router address via DNS, then systemd-networkd, then dhcpd leases, then the default route
16:43:53 https://github.com/canonical/cloud-init/blob/bd76d5cfbde9c0801cefa851db82887e0bab34c1/cloudinit/sources/DataSourceCloudStack.py#L228
16:43:55 jlebon++
16:44:12 However, most of these methods seem to be unavailable when Ignition runs...
16:44:21 maybe `coreos-installer` could learn to restamp VM images or something
16:44:37 jlebon: I like this idea
16:44:40 nice
16:45:21 olem: How broken are current images on those platforms? If you boot an OpenStack image on KVM CloudStack, what happens?
16:45:34 And similarly for VMware
16:46:29 travier: At least for the Ignition side, OpenStack assumes a static metadata endpoint (169.254.169.254), so it'd be the wrong metadata URL. It'd work for config-drive-based metadata but not for an http metadata service
16:47:01 The VMware ova image works on CloudStack but needs the ignition file passed as an OVF parameter, not userdata. For KVM, the Ignition file cannot be found.
16:48:01 olem: hmm, so if we published only one image, would it make more sense to have just a vhd to cover the !vmware cases?
16:49:21 As a "degraded solution", yes. But a cloudstack-specific image is still required to get afterburn to work
16:49:29 e.g.
to get ssh keys, etc
16:50:08 I mean for vmware
16:50:09 right, gotcha
16:51:00 it sounds like we want a VHD plus an OVA as final artifacts
16:51:12 is there a shared repository of cloudstack images?
16:51:30 but the real blocker is getting the metadata out of the DHCP lease from NM
16:51:55 There's http://dl.openvm.eu/cloudstack/ which is maintained by a member of the cloudstack community
16:52:12 for the DHCP options.. could we query NetworkManager for that information? It has it if you run `nmcli c show `
16:52:27 alternatively we could write a dispatcher script that saved the info off somewhere
16:53:01 dustymabe: if that works without dbus
16:53:15 ahh, so this is in the initramfs?
16:53:21 Yes. This is Ignition
16:53:26 ahh
16:54:04 i'm pretty sure the journal has the option information output, but grabbing it from there isn't fun either
16:54:13 olem: is the metadata endpoint always on the default gateway?
16:54:47 reading from http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/virtual_machines/user-data.html it seems so
16:54:52 dustymabe: this is the code in question https://github.com/coreos/ignition/blob/master/internal/providers/cloudstack/cloudstack.go#L135-L180
16:56:02 if that is the case, something akin to iproute may be enough, without introspecting the DHCP lease
16:56:23 It's available in initramfs?
16:57:06 I tried parsing /proc/net/route as cloud-init does, and did not manage to get the default route.
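[Editor's note: the nmcli idea above can be sketched as follows. This is a hedged sketch, not the actual Afterburn/Ignition fix: it assumes NetworkManager's `nmcli -g DHCP4.OPTION connection show <conn>` output format of one `name = value` pair per line, and the connection name and option names are placeholders.]

```shell
#!/bin/sh
# Sketch: extract a single DHCP option value from nmcli output.
# Assumed input format (one option per line): "routers = 10.0.2.1"
get_dhcp_option() {
  # $1: option name; reads nmcli DHCP4.OPTION lines on stdin
  awk -F' = ' -v opt="$1" '$1 == opt { print $2 }'
}

# Demo with canned output; on a live system this might be:
#   nmcli -g DHCP4.OPTION connection show "$CONN" | get_dhcp_option routers
printf '%s\n' \
  'dhcp_server_identifier = 10.0.2.2' \
  'routers = 10.0.2.1' | get_dhcp_option routers
# prints "10.0.2.1"
```

Note that, as pointed out in the discussion, this only helps where nmcli (and D-Bus) is available, which is not a given in the initramfs where Ignition runs.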
16:57:38 olem: I think so, but even if not it could be added, or we can query via netlink, I guess
16:58:35 ok, so it could be a solution
16:58:49 i feel like `ip` is in the initramfs
16:58:52 if not we could add it
16:59:07 it definitely is, i used it yesterday :)
16:59:24 hmm, that might've been RHCOS :|
16:59:36 `ip route show default` should get us there
17:00:01 while we work on adding NM support for it, i'll file a ticket as well so we can discuss ways to dedupe these images better
17:00:45 even if not 100% correct, that at least should buy us time for the proper NM way
17:01:08 ok
17:01:43 anything else on this topic?
17:02:20 should we push to openvm.eu directly from our pipeline?
17:03:21 can we hold the actual artifact discussion for now?
17:03:30 Oh, I suppose getfedora.org is the place where people expect to get fcos images?
17:03:38 yup
17:04:20 olem: normally yes, but we also upload to some cloud platforms, if that makes it easier to consume
17:04:45 ok, I'll move to the next ticket
17:05:00 #topic Enabling DNF Count Me support in Fedora CoreOS
17:05:08 #link https://github.com/coreos/fedora-coreos-tracker/issues/717
17:05:42 travier: do you want to introduce this?
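[Editor's note: a minimal sketch of the `ip route show default` approach discussed above — treat the default gateway as the CloudStack virtual router hosting the metadata endpoint, per the CloudStack user-data docs linked earlier. The endpoint URL is an assumption for illustration; this is not Ignition's actual implementation.]

```shell
#!/bin/sh
# Extract the gateway from `ip route show default` output, which looks like:
#   default via 10.0.2.1 dev eth0 proto dhcp metric 100
parse_gateway() {
  awk '/^default via/ { print $3; exit }'
}

# Demo with canned output; on a real host one might run:
#   gw="$(ip route show default | parse_gateway)"
#   curl -sf "http://${gw}/latest/user-data"   # assumed virtual-router URL
echo 'default via 10.0.2.1 dev eth0 proto dhcp metric 100' | parse_gateway
# prints "10.0.2.1"
```

As noted in the meeting, this may not be 100% correct (the metadata endpoint is not guaranteed to sit on the default gateway in every topology), but it avoids introspecting the DHCP lease entirely.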
17:06:15 lucab: sure
17:06:38 So this is a first step to discuss planning for enabling DNF Count Me by default on Fedora CoreOS
17:07:03 DNF Count Me support has been added to rpm-ostree, mirroring the support in libdnf in classic Fedora
17:07:52 With this, Fedora CoreOS machines will anonymously report to the Fedora mirror infrastructure that they are running, to enable statistics
17:08:18 The details of the implementation that make this OK from a privacy perspective are in the ticket
17:08:39 This can also be disabled if users do not want to be counted
17:09:09 So if we are to enable that by default, we need to announce it to let people opt out of it
17:09:28 Note that currently, if you overlay any RPM on top of the base image, this triggers counting already
17:09:45 (EOF)
17:10:02 travier: i guess we could or should maybe follow what other Fedora editions are already doing
17:10:18 they have count me enabled by default
17:10:19 Counting is on by default already in other editions, right?
17:10:23 yes
17:10:42 and IIUC, it was also enabled by default on upgrade, right?
17:10:47 And this will have to be discussed for other rpm-ostree editions but I don't expect resistance
17:10:57 yes I think so
17:10:59 I'd say we match, but a fedora magazine article explaining it would be really nice to have
17:11:12 in theory we prepared the pinger config exactly for this case
17:11:22 This is really hard to weaponize for tracking as far as I know
17:11:25 in practice I don't know whether we want/need to link the two
17:11:41 the plan you have in https://github.com/coreos/fedora-coreos-tracker/issues/717#issue-789135655 looks good to me
17:12:38 yep sounds good to me
17:13:03 what do the mechanics look like if I want to opt out of counting.. what would I do today?
17:13:12 yes, at some point we discussed doing that in a separate program (pinger) vs rpm-ostree
17:13:29 I'm just catching up on the reading. In the implementations completed, is there notice that it's enabled when DNF is run?
but it felt easier doing it in rpm-ostree in the beginning due to libdnf support & repo parsing. Maybe this could be moved
17:14:23 https://coreos.github.io/rpm-ostree/countme/ > to disable
17:14:34 travier: I mean, just the config, as we told people upfront how to tweak pinger
17:15:11 hum
17:15:24 but I don't have big concerns about enabling this for new installs and updates
17:16:09 it's nice that the countme=0 bit is shared with how it's done on dnf-based Fedora
17:17:03 jlebon: the problem with `sed -i 's/countme=1/countme=0/g' /etc/yum.repos.d/*.repo` is that those repo files will never get updated again
17:17:07 I want to thank travier for helping work on this
17:17:18 lucab: we could indeed add a check for that to skip counting
17:17:29 smooge: thanks
17:17:30 dustymabe: i think that's a problem shared with traditional too
17:17:37 they were very open to suggestions and items
17:17:47 smooge++
17:17:47 dustymabe: Karma for smooge changed to 10 (for the current release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:18:11 travier: yeah that was my reasoning, but I don't think we really want it
17:18:19 jlebon: i wonder if we should prefer disabling the timer
17:18:43 dustymabe: that won't disable counting via pkglayering though i think
17:18:46 well I guess it says "also"
17:18:49 ahh, ok
17:19:09 yes, that's the current issue.
17:19:22 Disabling only the timer will not stop libdnf from trying to count
17:19:23 so let's say in 2 months I start a new node and I layer a package
17:19:27 will it get counted twice
17:19:32 sort of
17:19:47 because the User Agent is different, the infra can distinguish the two
17:19:53 rpm-ostree vs libdnf
17:20:13 There is an open issue about that in libdnf and I could work on that if needed
17:20:29 because I agree that disabling the timer feels better as a UX
17:20:47 hmm, yeah.
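[Editor's note: the two opt-out mechanics discussed above can be sketched together as below. The rpm-ostree countme page linked earlier is the authoritative reference; the timer unit name `rpm-ostree-countme.timer` is an assumption here, and the sed step is demonstrated on a temp copy since editing the real `/etc/yum.repos.d/*.repo` files means they will no longer be refreshed by package updates (the caveat raised at 17:17:03).]

```shell
#!/bin/sh
set -eu

# 1. Stop the periodic rpm-ostree counting (run as root on a real host;
#    assumed unit name):
#      systemctl mask --now rpm-ostree-countme.timer

# 2. Set countme=0 so libdnf-driven paths (e.g. package layering) stop
#    counting too. Shown on a temp copy rather than the live repo files.
repo="$(mktemp)"
printf '[fedora]\nname=Fedora\ncountme=1\n' > "$repo"
sed -i 's/countme=1/countme=0/g' "$repo"
grep countme "$repo"   # prints "countme=0"
rm -f "$repo"
```

Masking only the timer leaves the libdnf path counting (the double-counting issue discussed above), which is why both steps are shown.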
if we can tell libdnf to turn that off, that'd be cleaner
17:21:03 because then it's just the timer, and there's no double counting under different UAs
17:21:23 https://github.com/rpm-software-management/libdnf/issues/1068
17:21:26 travier: thanks so much for working on this.. and answering my questions to help me understand!
17:21:51 dustymabe: 👍
17:21:53 yeah, this is going to be really useful!
17:22:04 anyone know if the stats are available publicly somewhere?
17:22:23 yes
17:22:26 give me a moment
17:22:42 https://data-analysis.fedoraproject.org/csv-reports/countme/
17:23:01 nice
17:23:05 sweet, thanks smooge!
17:23:33 older compilations of stats are at https://data-analysis.fedoraproject.org/csv-reports/mirrors/
17:25:20 ok, I think we are done on this topic
17:25:57 I'll go to open floor for the last 5 minutes of the meeting
17:25:57 so all good on the currently proposed plan?
17:25:59 #topic Open Floor
17:26:16 I'd like to ask about shirts/swag
17:26:26 jberkus: :)
17:26:45 jberkus++
17:26:46 woah hey, didn't know jberkus was here :)
17:26:47 dustymabe: Karma for jberkus changed to 1 (for the current release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:26:57 dustymabe: I didn't see any objections, it doesn't feel like we need voting
17:27:12 We really ought to have some, both for the contributors and the users. However, I'm not clear that Fedora CoreOS has an official logo I can use to produce those?
17:27:32 lucab: +1
17:27:38 Please add objections and remarks to the countme issue :)
17:27:56 jberkus: we have an official logo !
17:28:07 jberkus: there are logos in the official SVG
17:28:25 oh, good. it wasn't clear that that was official. link?
17:28:43 https://getfedora.org/static/images/fedora-coreos-logo.png
17:28:56 https://pagure.io/fedora-web/websites/blob/master/f/sites/asset_sources/assets.svg
17:28:57 I want FCOS swag too :)
17:29:39 +1 for being able to send swag to contributors
17:30:26 seems like countme "done" status should be stored in /var -- does it?
17:30:49 if we need to tweak that logo for design reasons, what's the approval process?
17:30:59 also, does someone want to work with me on picking out swag?
17:31:07 * dustymabe me me
17:31:07 cyberpear: it is stored in a private directory created by systemd in /var, yes
17:31:15 thanks!
17:31:25 jberkus: sent you a DM
17:31:38 ok, good, Dusty to work with me on it
17:31:46 last thing I wanted folks to know about: http://containerplumbing.org/
17:31:55 look for the CfP next week
17:32:02 https://github.com/coreos/rpm-ostree/blob/master/src/app/rpm-ostree-countme.service#L8-L11
17:32:07 thanks jberkus
17:32:18 I have a topic for open floor - cgroups v2
17:32:25 ok, done, thanks
17:32:33 we should probably open this up as a real topic in the next meeting
17:32:42 ^^
17:32:50 but in general, we should try really hard to make cgroups v2 happen for f34
17:32:58 olem: is updating docker which supports v2
17:33:09 podman is good already
17:33:19 anyone know the status of upstream kube?
17:33:20 👍
17:33:31 1.20 should be OK I think
17:34:14 dustymabe: mid-release I guess? I think we want to let the new docker soak a bit first, as we can't go back after the cgroup switch
17:35:40 lucab: I was hoping to get the new docker into `next` soonish along with the cgroups v2 change
17:35:58 of course, people can choose to configure v1, right?
17:36:22 yep, I'm only talking about our defaults
17:37:11 do you think targeting f34 day 1 is too early?
17:37:17 ok, anything else for today?
otherwise I'm going to close this here in a few seconds
17:37:32 lucab: +1 - can discuss later
17:37:35 https://v1-19.docs.kubernetes.io/docs/setup/release/notes/ says: "Support for running on a host that uses cgroups v2 unified mode", so it sounds like it's good to go!
17:37:41 let's chat in the ticket?
17:37:46 dustymabe: maybe not, I'm just scared of major docker changes
17:38:03 https://github.com/kubernetes/enhancements/issues/2254
17:38:27 jlebon: nice!
17:39:07 ok, let's move the rest of the discussion to the ticket, I'm closing now
17:39:18 #endmeeting