15:59:18 <dustymabe> #startmeeting fedora_cloud_meeting
15:59:18 <zodbot> Meeting started Tue Jun 22 15:59:18 2021 UTC.
15:59:18 <zodbot> This meeting is logged and archived in a public location.
15:59:18 <zodbot> The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:18 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:59:18 <zodbot> The meeting name has been set to 'fedora_cloud_meeting'
15:59:24 <dustymabe> #topic roll call
15:59:27 <dustymabe> .hi
15:59:28 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
16:00:41 <dcavalca> .hi
16:00:42 <zodbot> dcavalca: dcavalca 'Davide Cavalca' <dcavalca@fb.com>
16:00:52 <dustymabe> #chair dcavalca
16:00:52 <zodbot> Current chairs: dcavalca dustymabe
16:01:05 <dustymabe> davdunc: around today?
16:02:36 <Eighth_Doctor> .hello ngompa
16:02:37 <zodbot> Eighth_Doctor: ngompa 'Neal Gompa' <ngompa13@gmail.com>
16:03:38 <dustymabe> #chair Eighth_Doctor
16:03:38 <zodbot> Current chairs: Eighth_Doctor dcavalca dustymabe
16:04:00 <davdunc> .hello2
16:04:01 <zodbot> davdunc: davdunc 'David Duncan' <davdunc@amazon.com>
16:04:09 <Eighth_Doctor> ah there you are :D
16:04:13 <Eighth_Doctor> I just tried to ping you :D
16:04:40 <dustymabe> ok let's get started
16:04:55 <dustymabe> #chair davdunc
16:04:55 <zodbot> Current chairs: Eighth_Doctor davdunc dcavalca dustymabe
16:04:58 <dustymabe> #topic Action items from last meeting
16:05:18 <dustymabe> I don't see any action items in the minutes from last meeting 🎉
16:05:32 <dustymabe> #topic new meeting time
16:05:37 <dustymabe> #link https://pagure.io/cloud-sig/issue/333
16:05:49 <dustymabe> davdunc: has a standing conflict at this time
16:05:56 <dustymabe> can we sort out a new time?
16:06:11 <davdunc> i have an hour long meeting at 9:00 PDT
16:06:22 * dustymabe notes that we stick with UTC so we'd need to consider daylight savings time adjustments when choosing the new time
16:06:33 <Eighth_Doctor> The OKD WG meeting is right after this one
16:06:40 <Eighth_Doctor> so it'd be kind of crappy to conflict with that
16:06:52 <dustymabe> earlier?
16:07:02 <dustymabe> would earlier work?
16:07:13 <dcavalca> I could do an hour earlier
16:07:16 <Eighth_Doctor> yes
16:07:24 <dustymabe> davdunc: ?
16:07:29 <Eighth_Doctor> I can do 10:30am-11:30am EDT
16:07:40 <davdunc> I can do an hour earlier.
16:07:58 <dustymabe> Eighth_Doctor: the meetings sometimes run longer than 30
16:08:01 <dustymabe> I assume that's OK
16:08:18 <Eighth_Doctor> 11:45am-12pm is daily standup with my team
16:08:20 <cmurf[m]> sorry
16:08:20 <Eighth_Doctor> other than that, it's fine
16:08:28 <dustymabe> #chair cmurf[m]
16:08:28 <zodbot> Current chairs: Eighth_Doctor cmurf[m] davdunc dcavalca dustymabe
16:08:54 <dustymabe> #proposed we'll move the bi-weekly meeting earlier by one hour to accommodate a conflict for davdunc
16:09:02 <dustymabe> hmm. ok let's think about this
16:09:14 <dustymabe> my brain hurts when it comes to daylight savings
16:09:27 <dustymabe> when the time changes back in the fall, will we be back in this situation again?
16:09:36 <Eighth_Doctor> yeah
16:09:47 <Eighth_Doctor> because we set times in UTC, we'll have this collision again and have to fix it again
16:10:03 <dcavalca> having to fix this twice a year isn't too bad though
16:10:05 <dustymabe> davdunc: does your meeting move with UTC (I doubt it)
16:10:24 <dustymabe> sorry I should have phrased that "stick with UTC"
16:10:29 <davdunc> :D I am good with UTC. just talking to the engineering team at the same time.
16:10:43 <davdunc> couldn't convert and talk at the same time. :(
16:10:52 <Eighth_Doctor> all of my meetings are EDT, which makes this painful
16:11:02 <Eighth_Doctor> err US/Eastern
16:11:07 <dustymabe> davdunc: yeah but I think we're saying if we only move the meeting an hour then we'll have the same problem in a few months
16:11:07 <Eighth_Doctor> so they float across EDT and EST
16:11:37 <davdunc> I see your point.
16:11:47 <dustymabe> let's maybe consider a new option.. how about the same current meeting time, but on thursday
16:11:55 <Eighth_Doctor> I'd push it even earlier, but the Workstation WG meeting is at 9:30am EDT
16:11:58 <davdunc> that would work for me.
16:11:59 * cmurf[m] bans daylight saving time by global edict, problem solved
16:12:06 <dustymabe> cmurf[m]: +1
16:12:11 <cmurf[m]> oops, small problem is cmurf isn't the benevolent ruler
16:12:13 <davdunc> this is the one immovable meeting I have.
16:12:19 <Eighth_Doctor> can't do Thursday, because FPC meeting at that time
16:12:46 <dustymabe> Eighth_Doctor: Thursday + one hour earlier?
16:12:48 <Eighth_Doctor> can't do Wednesday either because of other meetings
16:12:48 <cmurf[m]> i'm flexible btw
16:12:52 <dcavalca> Thu an hour earlier?
16:12:56 <Eighth_Doctor> dustymabe: let me check that
16:12:57 <dcavalca> yeah I can't do Wed either
16:13:16 <dustymabe> Eighth_Doctor: /me hopes FPC meeting sticks with UTC :)
16:13:20 <Eighth_Doctor> dustymabe: standup conflict
16:13:37 <dustymabe> Eighth_Doctor: same standup conflict as tuesday - which you could live with
16:13:43 <Eighth_Doctor> dustymabe: actually most of my Fedora meetings are increasingly no longer using UTC
16:13:45 <Eighth_Doctor> dustymabe: yep
16:13:52 <davdunc> yea. UTC++
16:14:02 <dustymabe> Eighth_Doctor: does the FPC one use UTC?
16:14:07 <Eighth_Doctor> let me check
16:14:49 <Eighth_Doctor> yes
16:15:30 <dustymabe> #proposed we'll move the meeting one hour earlier and to thursday
16:15:43 <dustymabe> fedocal is running nice and slow this morning
16:16:08 <dustymabe> anyone want to weigh in ? ^^
16:16:13 <Eighth_Doctor> I wonder what day is going to be my meeting heavy day
16:16:29 <dustymabe> The answer is - the day we want to move the meeting to
16:16:31 <Eighth_Doctor> I used to be able to just "screw Tuesday", but it may change to Thursday now...
16:17:38 <davdunc> Eighth_Doctor: that's a plan I support.
16:17:53 <dustymabe> #proposed we'll move the meeting one hour earlier and to thursday (bi-weekly on thursday at 1500 UTC)
16:17:57 <dustymabe> please vote ^^
16:18:11 <dcavalca> +1
16:18:16 <Eighth_Doctor> If we could move it to 9am EDT on Thursday, I'd be good with that
16:18:24 <Eighth_Doctor> no conflicts and no manager complaints
16:18:35 <Eighth_Doctor> but that might be hard for some other folks
16:19:04 <Eighth_Doctor> (9am EDT ~ 1pm UTC)
16:19:10 <dcavalca> yeah that's too early for the west coast
16:19:12 <dustymabe> let's try for current proposed if possible
16:19:18 <dcavalca> I could make it as a one off, but not reliably
16:19:44 <Eighth_Doctor> what about 17:00 UTC (1pm EDT)?
16:19:47 <dustymabe> davdunc: good with ^^
16:20:01 <Eighth_Doctor> but otherwise, current proposal is fine
16:20:12 <davdunc> +1
16:20:22 <Eighth_Doctor> as long as we're okay with me not being there for part of it consistently
16:20:43 <dustymabe> Eighth_Doctor: hopefully the meetings are shortish
16:20:45 <dcavalca> Eighth_Doctor: I have a conflict for 17-1730 UTC :(
16:20:58 <dustymabe> #agreed we'll move the meeting one hour earlier and to thursday (bi-weekly on thursday at 1500 UTC)
16:20:59 <dcavalca> though I could make that work if needed
16:21:04 <Eighth_Doctor> (which is better than when I was triple booked with FESCo + OKD + work)
16:21:05 <davdunc> You are important enough that we can work with what time you do have.
16:21:11 <dustymabe> let's see how the current new time works and then adjust as needed
16:21:15 <davdunc> thanks!
16:21:16 <Eighth_Doctor> aww
16:21:32 <Eighth_Doctor> well, davdunc is more important than I am in this
16:21:34 <Eighth_Doctor> I'm cloud-dumb after all :)
16:21:44 <dustymabe> #topic Fedora-34 nightly images in AWS do not boot
16:21:50 <dustymabe> #link https://pagure.io/cloud-sig/issue/331
16:22:22 <cmurf[m]> this is an odd duck for sure
16:22:49 <dustymabe> cmurf[m]: do you want to give a summary
16:22:54 <cmurf[m]> adamw is going to look into it but hasn't had time yet; maybe mboddu has an idea what controls the firmware selection for an image build VM
16:23:13 <dustymabe> cmurf[m]: that should be in imagefactory
16:23:39 <cmurf[m]> the gist is that only f34 x86_64 nightly cloud base images are being created in a UEFI VM, therefore have a UEFI partition layout and UEFI GRUB
16:23:58 <cmurf[m]> but the image is tested in openqa using a BIOS VM so all the tests fail, it'll actually work in a UEFI VM
16:24:24 <cmurf[m]> but f34 GA is built for BIOS VM, so are the nightlies for rawhide and f33
16:24:33 <cmurf[m]> confused yet? :D
16:24:49 <dustymabe> cmurf[m]: do we know if rawhide works?
16:24:55 <cmurf[m]> it does
16:25:00 <cmurf[m]> it's passing openqa tests
16:25:01 <dustymabe> and f33 does too
16:25:04 <dustymabe> weird
16:26:00 <dustymabe> any other ideas for debugging?
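For context on the debugging question above, here is a minimal sketch of how a raw cloud image's firmware target can be checked from its partition table; the filename is a placeholder, not the actual nightly artifact name.

    # Hedged sketch: a GPT label with an EFI System Partition points at a UEFI
    # build, while a DOS/MBR label (or GPT plus a BIOS boot partition) points
    # at a BIOS-bootable build.
    IMG=Fedora-Cloud-Base-34.x86_64.raw
    fdisk -l "$IMG"

    # With libguestfs installed, the layout can be listed without mounting:
    virt-filesystems --long --parts --blkdevs -a "$IMG"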
16:26:13 <cmurf[m]> https://pagure.io/pungi-fedora/blob/f34/f/fedora-final.conf#_374
16:26:20 <cmurf[m]> that comment looks like our problem, but it's the wrong image
16:26:27 <cmurf[m]> that's for GCP
16:27:03 <dustymabe> cmurf[m]: yeah but let me check the actual config that gets used for the nightlies
16:27:38 <dustymabe> https://pagure.io/pungi-fedora/blob/f34/f/fedora-cloud.conf
16:28:47 <dustymabe> it's got a different distro entry
16:29:21 <dustymabe> I honestly think this was just bad branching
16:29:39 <dustymabe> if you compare the config to what's in rawhide it never got set to `Fedora-34`
16:29:58 <dustymabe> I think we can fix it by setting it back
16:30:12 <dustymabe> but honestly I question whether we should be building nightlies IMHO
16:30:24 <dustymabe> we have no bandwidth for testing them or fixing issues
16:30:38 <Eighth_Doctor> we need them because we can't build the images locally
16:30:48 <Eighth_Doctor> at least, I can't
16:30:56 <dustymabe> :)
16:30:58 <dustymabe> sure you can
16:31:06 <dustymabe> it's just complicated - which sucks
16:31:17 <Eighth_Doctor> nope, the imgfac tooling crashes when I try to use it
16:31:39 <Eighth_Doctor> and when davdunc and I paired on it, we got different results than what the koji builders got
16:32:10 <Eighth_Doctor> unless davdunc made progress on that, it's basically the only way I have to see how changes get implemented
16:32:47 <cmurf[m]> so this is the opposite of reproducible
16:32:59 <Eighth_Doctor> yup
16:33:13 <Eighth_Doctor> I'm this close to proposing we switch to kiwi for building images
16:33:20 <Eighth_Doctor> because of this insanity
16:33:47 <Eighth_Doctor> the amount of effort it'd take to make the switch is what stops me
16:33:53 <dustymabe> ok this should fix it: https://paste.centos.org/view/29d7cb42
16:33:56 <cmurf[m]> ok this is odd..
16:33:56 <Eighth_Doctor> but I'm tempted
16:33:57 <cmurf[m]> https://pagure.io/pungi-fedora/blob/f34/f/fedora-cloud.conf#_225
16:34:04 <cmurf[m]> https://pagure.io/pungi-fedora/blob/main/f/fedora-cloud.conf#_226
16:34:15 <dustymabe> cmurf[m]: see fpaste
16:34:37 <cmurf[m]> haha ok
16:34:39 <dustymabe> i'll post up the patch
16:34:45 <cmurf[m]> but yeah why does main say Fedora-22?
16:34:48 <cmurf[m]> and yet it works as expected
16:34:56 <dustymabe> cmurf[m]: it's a "profile"
16:35:15 <dustymabe> it's like a common set of characteristics when starting the VM
16:35:30 <dustymabe> it looks odd, I agree
16:35:38 <dustymabe> but that's more or less what it is
16:35:42 <cmurf[m]> ok and these profiles are elsewhere which is why i couldn't figure it out
16:35:45 <adamw> yeah, that's normal
16:35:48 <Eighth_Doctor> why isn't it just Fedora-BIOS and Fedora-UEFI?
16:35:55 <dustymabe> cmurf[m]: yeah they're in the imagefactory source code I think
16:35:55 <cmurf[m]> +1
16:36:24 <dustymabe> #info we think we should be able to fix this problem with a patch to pungi-fedora
16:36:25 <cmurf[m]> and Fedora-hybrid now
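A hedged sketch of the comparison being described here: diffing the cloud config between the f34 branch and main to spot where the branching diverged. The actual fix lives in the fpaste above and is not reproduced; branch and file names are taken from the links in the discussion.

    # Illustrative only
    git clone https://pagure.io/pungi-fedora.git
    cd pungi-fedora
    git diff origin/main origin/f34 -- fedora-cloud.conf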
16:36:26 <adamw> anyhow, on stopping the nightlies - wasn't the point of doing nightlies that some other thing uses them as inputs? they are useful as just 'updated base fedora disk images' for some testing workflow or something
16:36:42 <dustymabe> adamw: yeah I think some CI processes use them
16:36:46 <Eighth_Doctor> adamw: well, QA stuff uses them, and we use them for test days
16:36:53 <Eighth_Doctor> and there are some CI thingies too
16:37:07 <dustymabe> and at some point we did want to release updated images to the public, but we haven't made any progress on that
16:37:52 <dustymabe> anybody want to submit that PR (with the diff from the fpaste) to pungi-fedora repo?
16:38:20 <Eighth_Doctor> I can do it
16:39:26 <dustymabe> #action Eighth_Doctor to submit PR to pungi-fedora to fix BIOS booting issues for f34 cloud nightlights
16:39:36 <dustymabe> sigh
16:39:41 <dustymabe> at least it's a cool typo
16:39:51 <Eighth_Doctor> haha
16:40:03 <dustymabe> #topic updates on f35 changes
16:40:04 <davdunc> hah.
16:40:20 <dustymabe> davdunc: Eighth_Doctor: do you want to summarize and highlight the f35 change proposals and status?
16:40:23 <Eighth_Doctor> that's one way to title this topic
16:40:36 <Eighth_Doctor> yeah sure
16:40:49 <Eighth_Doctor> so... we've implemented the change to Btrfs for the Cloud images
16:41:03 <Eighth_Doctor> they've been spinning and booting quite nicely as Rawhide nightlies for a couple of days now
16:41:40 <Eighth_Doctor> we've run into *issues* trying to implement hybrid GPT (which is where davdunc and I attempted to pair program it and couldn't get imgfac to work right)
16:41:53 <dustymabe> #info change to btrfs has been implemented in rawhide cloud image builds - working as expected
16:42:10 <Eighth_Doctor> well, hybrid boot, not hybrid GPT
16:42:17 <Eighth_Doctor> it's GPT with hybrid boot
16:43:19 <dustymabe> Eighth_Doctor: if you want to schedule some time on Friday I can try to walk the beaten path I've traveled in the past and work with you to get you unblocked
16:43:42 <cmurf[m]> note one thing is "off" that we haven't firmly decided: ext4 images are "spare" files as a result of anaconda running e2fsck -E discard following umount. The btrfs images are mostly allocated. davdunc says it's probably preferred images aren't sparse, basically as a bug avoidance strategy. I guess some things don't like sparse files.
16:43:59 <Eighth_Doctor> dustymabe: let's work out something with davdunc and we can schedule a call on Friday
16:44:02 <cmurf[m]> sigh the typos :\
16:44:32 <Eighth_Doctor> cmurf: I wasn't aware that we were sparsifying the ext4 based images
16:44:48 <Eighth_Doctor> if that's the case, we probably should fix this in anaconda to be fs-agnostic
16:45:10 <cmurf[m]> yeah i've got a bug/rfe for anaconda for using fstrim if it should be used
16:45:13 <dustymabe> Eighth_Doctor: in the past I used to set up everything in vagrant
16:45:24 <dustymabe> here's my vagrantfile: https://paste.centos.org/view/8a44a7dd
16:45:37 <dustymabe> should be able to run the steps on any VM (with nested virt enabled)
16:45:48 <cmurf[m]> the question here is whether they should be sparse or preallocated, then we can make sure the right thing is happening
16:46:12 <dustymabe> #info hitting some issues getting hybrid boot + GPT working - still working through the issues
16:46:28 <dustymabe> #topic sparse disk images or not
16:46:41 <dustymabe> hmm. why on earth would you not want the disk image to be sparse ?
16:47:04 <dustymabe> my mind interprets that as you downloading multiple GB of zeroes when you download a cloud image
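As a quick illustration of the sparse-versus-preallocated distinction being discussed; the filename and size below are arbitrary placeholders.

    # A sparse file reports its full apparent size but allocates almost nothing.
    truncate -s 5G sparse-disk.img
    du -h --apparent-size sparse-disk.img   # ~5.0G apparent size
    du -h sparse-disk.img                   # ~0 blocks actually allocated

    # A copy or transfer that isn't sparse-aware materializes the holes as real
    # zeroes, which is the "multiple GB of zeroes" concern; sparse-aware copies
    # or compressing the image avoid that.
    cp --sparse=always sparse-disk.img copy.img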
16:47:50 <cmurf[m]> what currently happens by i guess imagefactory (?) is after it deletes all the rpms in /home that were downloaded and installed, it writes a big file of zeros using dd if=/dev/zero
16:47:52 <cmurf[m]> then deletes that
16:48:02 <cmurf[m]> so it in effect zeros all the garbage that was deleted
16:48:43 <cmurf[m]> in my testing it's about a 10MiB difference between xz compressed zeros, and xz compressed holes (sparse)
16:48:53 <cmurf[m]> favoring holes
16:49:00 <dustymabe> cmurf[m]: https://pagure.io/fedora-kickstarts/blob/main/f/fedora-cloud-base.ks#_109
16:49:40 <cmurf[m]> haha yes that's it, what I saw in the oz log was "(Don't worry -- that out-of-space error was expected.)"
16:49:43 <cmurf[m]> and had a laugh
16:50:27 <dustymabe> so what's the proposal?
16:50:39 <cmurf[m]> followed by facepalm; a better way to do this is fstrim the file system just before umount; umount it; then fallocate the image file if you don't want it to be sparse.
16:50:57 <cmurf[m]> well my proposal would be, decide whether they should be sparse or preallocated.
16:50:58 <cmurf[m]> i don't know the answer to that question
16:51:15 <Eighth_Doctor> the files should be as small as possible for network transfer
16:51:26 <Eighth_Doctor> I guess that means sparsing them and compressing the resulting file
16:51:31 <dustymabe> yeah, honestly fstrim is tricky - my understanding is that it only works if you configure things correctly
16:51:57 <dustymabe> https://dustymabe.com/2013/06/11/recover-space-from-vm-disk-images-by-using-discardfstrim/
16:51:59 <cmurf[m]> it'll pass through loop driver by default
16:52:15 <Eighth_Doctor> dcavalca: do you think we could get offline discard added to btrfs-progs?
16:52:23 <Eighth_Doctor> similar to what exists for ext4?
16:52:38 <dustymabe> loop? i thought we were mounting emulated scsi disks
16:52:43 <cmurf[m]> i don't really want offline discard added, and esandeen mentioned to me a while ago it probably shouldn't be in e2fsck either
16:52:54 <Eighth_Doctor> so then what?
16:52:56 <Eighth_Doctor> what do?
16:52:59 <dcavalca> Eighth_Doctor: I can certainly ask if needed
16:53:00 <cmurf[m]> it should just be done on a mounted fs, using fstrim, which calls a kernel ioctl for it and works on all file systems
16:53:17 <Eighth_Doctor> could we call fstrim from the kickstart?
16:53:29 <dustymabe> I think it has more to do with the backing storage
16:53:32 <dcavalca> calling fstrim from %post should work
16:53:47 <dustymabe> for example in the blog post I linked I needed to add `discard='unmap'` to the disk XML description
16:54:12 <cmurf[m]> oh that's a good point since this is happening in a VM
16:54:49 <dustymabe> cmurf[m]: and now you get to go dig through imagefactory/oz to see if you can wire everything through
16:54:56 <dustymabe> :)
16:54:56 <cmurf[m]> ok so it's not on loop, it's a qemu device?
16:55:15 <cmurf[m]> i thought i saw the logs showing the image on a loop device, hmmm
16:55:22 <dustymabe> yeah I mean it's just a disk image I assume
16:55:37 * dustymabe can't confirm 100% - that's just what I assume
16:56:10 <cmurf[m]> well my idea would be (a) fallocate the image instead of using truncate and (b) do not use that file's filesystem, i.e. /home, as a staging area for rpms that create a lot of garbage to clean up later on
16:56:33 <cmurf[m]> if we did that, we don't have to worry about fstrim, fallocate, dd zeros, etc. later on
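A minimal sketch of the fstrim-then-fallocate flow cmurf describes above, assuming the image's filesystem is mounted at /mnt/image and backed by disk.img; both names are placeholders.

    # Discard unused blocks while the filesystem is still mounted. For a
    # VM-based build the discards only reach the image file if the virtual
    # disk passes them through (e.g. discard='unmap' on the libvirt disk),
    # as noted with the blog post linked above.
    fstrim -v /mnt/image
    umount /mnt/image

    # Then either leave the file sparse, or preallocate it if a fully
    # allocated image is preferred:
    fallocate -l 5G disk.img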
16:56:44 <dustymabe> hmm. i'm not familiar with the "/home, as a staging area for rpms that create a lot of garbage" issue
16:56:56 <dustymabe> Is that specific to anaconda?
16:56:58 <cmurf[m]> i don't know why the image itself is used for staging rpms, seems kinda sloppy honestly
16:57:01 <cmurf[m]> yeah it's also in the log
16:57:05 <cmurf[m]> yes
16:57:09 <cmurf[m]> netinstallers do it too
16:57:27 <dustymabe> ahh, ok. I leave that issue to the anaconda team then :)
16:57:37 <cmurf[m]> i think it's based on netinstalls on baremetal
16:57:47 <cmurf[m]> where else to stage them?
16:57:57 <dustymabe> the image itself is probably used to stage rpms because systems with low memory wouldn't be able to do it otherwise
16:58:15 <cmurf[m]> it doesn't distinguish between the netinstall on baremetal case, and netinstall to create an image case
16:58:25 <Eighth_Doctor> it's an anaconda thing that can't be fixed
16:58:29 <cmurf[m]> right a build system could just use /var/tmp
16:58:44 * Eighth_Doctor has had this argument before
16:58:48 <dustymabe> cmurf[m]: correct. a system with lots of RAM could just use that
16:58:52 <cmurf[m]> you'd need a "build image" mode for anaconda
16:59:16 <dustymabe> ok I have to run - cmurf[m] maybe create an issue for this with some relevant details
16:59:20 <dustymabe> #topic open floor
16:59:21 <Eighth_Doctor> they don't want to build one and Red Hat is generally reducing investment in Anaconda's ability to do this stuff outside of the absolutely required effort
16:59:24 <cmurf[m]> k
16:59:33 <dustymabe> Eighth_Doctor: can you close the meeting out when open floor discussion has finished
16:59:37 <dustymabe> I have to go
16:59:41 <Eighth_Doctor> dustymabe: sure
16:59:48 <cmurf[m]> yeah so in that case i'd say, we need to focus this functionality on osbuild
16:59:58 <cmurf[m]> and not worry about what we think will be legacy ways of building images
17:00:10 <Eighth_Doctor> that's going to be very hard
17:00:21 <cmurf[m]> which part
17:00:23 <Eighth_Doctor> because there's no infrastructure to build images with osbuild
17:00:37 <Eighth_Doctor> and osbuild does not support most of the customizations we do today for images
17:01:00 <davdunc> but that's something I would like to explore with the osbuild team.
17:01:10 <cmurf[m]> ok well we might have to have a higher level discussion about all those customizations and if they are even really needed and then which ones osbuild needs to learn
17:01:27 <cmurf[m]> i prefer standardization whenever possible
17:01:36 <Eighth_Doctor> there's a larger conversation to have about image building
17:01:46 <cmurf[m]> the customizations are why we have 8 image building tools, or whatever the count is up to
17:01:51 <Eighth_Doctor> but that's hugely out of scope for now
17:01:54 <Eighth_Doctor> (9 tools)
17:02:00 <cmurf[m]> i was close! :D
17:02:10 <cmurf[m]> i thought i was exaggerating too :\
17:02:38 <cmurf[m]> anyway let's fix this when it's a problem
17:03:48 <cmurf[m]> right now the btrfs raw image is about 4.1G/5G (about 1G of holes) compared to ext4 625M/5G. But xz smashes them down to ~180M and ~300M respectively.
17:04:09 <cmurf[m]> The big difference there isn't the sparseness issue either. I can explain it if anyone cares. But it's just the way it is.
17:04:43 <Eighth_Doctor> certainly something we should figure out if we can shrink
17:04:43 <cmurf[m]> I'd consider it not a problem but room for optimization.
17:04:56 <cmurf[m]> Not by much.
17:05:10 <cmurf[m]> the issue is that btrfs is compressing with zstd:1 and xz can't compress it further.
17:05:52 <Eighth_Doctor> ahhh
17:06:03 <cmurf[m]> we'd need a way to create images with a different (higher) zstd level than the fstab that's in it
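A hedged sketch of what that could look like: mount the image's filesystem with a heavier zstd level while its contents are being written at build time, independent of the compress= option shipped in the image's own fstab. The device, mount point, and level below are illustrative.

    # btrfs accepts compression levels zstd:1 through zstd:15 as a mount option.
    mount -o compress=zstd:15 /dev/loop0p2 /mnt/image   # build-time mount
    # ...while the fstab inside the image can keep compress=zstd:1 for runtime.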
17:07:23 <Eighth_Doctor> I think we can go on and on for this topic, so let's table it
17:07:36 <Eighth_Doctor> cmurf: can you make a ticket about this?
17:08:09 <cmurf[m]> a cloud-sig ticket for tracking?
17:08:10 <cmurf[m]> yeah
17:08:20 <Eighth_Doctor> #action cmurf to make a ticket about image creation optimizations
17:08:50 <Eighth_Doctor> So finally, one last thing for me to bring up...
17:09:04 <Eighth_Doctor> #topic New lead for Cloud WG
17:09:23 <Eighth_Doctor> I'm bringing this up because it's increasingly clear dustymabe is being stretched too thin on this
17:09:49 <Eighth_Doctor> his efforts and job around Fedora CoreOS are making it harder for him to drive Fedora Cloud and that's not fair to him or everyone else
17:10:23 <Eighth_Doctor> recently though, davdunc has been stepping up to do amazing work on Fedora Cloud, and he clearly cares *a lot* about it
17:10:49 <davdunc> yes. I love what Dusty has done over the years! I also care a lot.
17:11:12 <Eighth_Doctor> so I'd like to ask davdunc if he'd like to consider taking over leading the Fedora Cloud WG to help steer us forward
17:11:30 <Eighth_Doctor> (as for why not me? I'm stretched even thinner than Dusty, if that's even possible...)
17:12:47 <Eighth_Doctor> so what does the group think on this?
17:13:08 <cmurf[m]> no objection
17:13:09 <Eighth_Doctor> (and davdunc too! after all, he's the subject of this!)
17:13:14 <dustymabe> I'm all for more involvement and I think davdunc is an excellent member of our community
17:13:45 <dustymabe> +1
17:13:53 <Eighth_Doctor> jdoss, dcavalca, ?
17:14:05 <dcavalca> +1
17:14:18 <davdunc> I would definitely do it.
17:14:29 <dustymabe> I know mattdm is going to schedule some time to talk with all of us soon about fedora cloud - so this certainly plays into that discussion
17:14:59 <Eighth_Doctor> I think it makes sense to have davdunc leading by that point if he wants to
17:15:03 <dustymabe> davdunc: can you reach out to mattdm and let him know your interest as well?
17:15:53 <davdunc> dustymabe: will do.
17:15:55 <dustymabe> Eighth_Doctor: WFM - let's have a ticket in our tracker anyway - to draw more attention to the proposed change
17:16:02 <Eighth_Doctor> sure
17:16:10 <dustymabe> then we can follow that with a ML announcement and such
17:16:10 <Eighth_Doctor> dustymabe: do you want to make the ticket?
17:16:18 <Eighth_Doctor> you are the leader atm :)
17:16:22 <Eighth_Doctor> so it makes sense
17:16:23 <dustymabe> :)
17:16:25 <dustymabe> sure can do
17:16:49 <Eighth_Doctor> #action dustymabe to file ticket and announce change to davdunc as Cloud WG lead
17:17:04 <dustymabe> thanks Eighth_Doctor
17:17:08 <dustymabe> have to leave keyboard again
17:17:10 <dustymabe> bbiab
17:17:18 <davdunc> :D thanks for the vote of confidence everyone. Thanks dustymabe for everything!
17:17:19 <Eighth_Doctor> alright, that's pretty much everything I had
17:17:29 <cmurf[m]> +1
17:17:29 <Eighth_Doctor> if there's nothing else, we can end this meeting
17:17:43 <Eighth_Doctor> counting down...
17:17:56 <Eighth_Doctor> 3.............
17:17:57 <cmurf[m]> dustymabe++
17:17:57 <Eighth_Doctor> 2........
17:18:03 <cmurf[m]> not sure if that works via matrix
17:18:03 <Eighth_Doctor> 1...
17:18:19 <Eighth_Doctor> cmurf: it does if zodbot recognizes the IRC nick from Matrix
17:18:26 <Eighth_Doctor> #endmeeting