16:30:04 <nasirhm> #startmeeting fedora_coreos_meeting
16:30:04 <zodbot> Meeting started Wed Aug 12 16:30:04 2020 UTC.
16:30:04 <zodbot> This meeting is logged and archived in a public location.
16:30:04 <zodbot> The chair is nasirhm. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:04 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:30:04 <zodbot> The meeting name has been set to 'fedora_coreos_meeting'
16:30:16 <nasirhm> #topic roll call
16:30:21 <nasirhm> .hello2
16:30:22 <zodbot> nasirhm: nasirhm 'Nasir Hussain' <nasirhussainm14@gmail.com>
16:30:26 <bgilbert> .hello2
16:30:27 <zodbot> bgilbert: bgilbert 'Benjamin Gilbert' <bgilbert@backtick.net>
16:30:30 <darkmuggle> .hello2
16:30:31 <zodbot> darkmuggle: darkmuggle 'None' <me@muggle.dev>
16:30:37 <dustymabe> .hello2
16:30:38 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
16:30:39 <slowrie> .hello2
16:30:41 <zodbot> slowrie: slowrie 'Stephen Lowrie' <slowrie@redhat.com>
16:30:42 <jlebon> .hello2
16:30:47 <zodbot> jlebon: jlebon 'None' <jonathan@jlebon.com>
16:30:49 * cyberpear listens in the background, busy w/ other tasks
16:30:54 <davdunc> .hello2
16:30:55 <zodbot> davdunc: davdunc 'David Duncan' <davdunc@amazon.com>
16:31:44 <nasirhm> #char davdunc jlebon slowrie dustymabe darkmuggle bgilbert
16:32:03 <nasirhm> #chair davdunc jlebon slowrie dustymabe darkmuggle bgilbert
16:32:03 <zodbot> Current chairs: bgilbert darkmuggle davdunc dustymabe jlebon nasirhm slowrie
16:32:40 <nasirhm> #topic Action items from last meeting
16:32:50 <lucab> .hello2
16:32:52 <zodbot> lucab: lucab 'Luca Bruno' <lucab@redhat.com>
16:33:06 <nasirhm> #info bgilbert to summarize discussion from last week's meeting in ticket
16:33:06 <nasirhm> https://github.com/coreos/fedora-coreos-tracker/issues/586
16:33:19 <nasirhm> #link https://github.com/coreos/fedora-coreos-tracker/issues/586
16:33:21 <bgilbert> #action bgilbert to summarize discussion in #586
16:33:23 <bgilbert> :-(
16:33:25 <nasirhm> #chair lucab
16:33:25 <zodbot> Current chairs: bgilbert darkmuggle davdunc dustymabe jlebon lucab nasirhm slowrie
16:33:53 <nasirhm> bgilbert: No worries :)
16:34:07 <nasirhm> #info dustymabe to send an email to coreos@ about the coreos/enhancements repo
16:34:21 <dustymabe> email sent - let me grab a link and #info it
16:34:55 <dustymabe> #info dusty sent email to coreos@ list about coreos enhancements repo https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/message/2UDDDU2M32DG6P52MPIVOAFA6YPIRVQT/
16:35:03 <dustymabe> #undo
16:35:03 <zodbot> Removing item from minutes: INFO by dustymabe at 16:34:55 : dusty sent email to coreos@ list about coreos enhancements repo https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/message/2UDDDU2M32DG6P52MPIVOAFA6YPIRVQT/
16:35:10 <dustymabe> #info dusty sent email to coreos@ list about coreos enhancements repo
16:35:13 <dustymabe> #link https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/message/2UDDDU2M32DG6P52MPIVOAFA6YPIRVQT/
16:35:28 <nasirhm> dustymabe: Am I using the right meetbot command to #info the action items from the previous meeting?
16:35:54 <dustymabe> nasirhm: for action items from previous meeting i don't usually #info them
16:36:07 <dustymabe> i just bring it up and then usually people will bring information to the table about what they've done
16:36:10 <dustymabe> and then we #info that
16:36:35 <dustymabe> we're good for today, though. :)
16:36:48 <nasirhm> Ah, thank you. Will do that for the next meeting.
16:37:29 <nasirhm> #topic New Ignition provider for OpenNebula
16:37:50 <nasirhm> #link https://github.com/coreos/fedora-coreos-tracker/issues/166
16:38:19 <lucab> ah, I should have retitled the ticket
16:38:30 <lucab> anyway
16:38:39 <nasirhm> lucab: We can undo and rewrite it if needed.
16:38:49 <lucab> nasirhm: no worries
16:39:18 <lucab> context is: we got an external PR to Afterburn adding some support for OpenNebula
16:39:25 <lucab> #link https://github.com/coreos/afterburn/pull/478
16:39:57 <lucab> that flow is a bit different than usual, as Afterburn is usually the last step to be done
16:40:43 <lucab> from what I can see in the FCOS ticket, the platform is not really straightforward as it even lacks DHCP
16:41:22 <lucab> so my feeling is that we won't reach a point anytime soon where we can claim full OpenNebula support
16:41:53 <jlebon> huh, hadn't heard of OpenNebula before.  anyone have a sense of how popular it is?
16:42:06 <lucab> but at the same time I don't want to stall the Afterburn PR, as I guess the submitter wants to use it outside of FCOS context
16:42:10 <dustymabe> i've heard of it before, but never used it
16:42:25 <darkmuggle> In a prior life, I used it :)
16:42:29 <dustymabe> :)
16:42:37 <dustymabe> darkmuggle: with knowledge of all the cloudx
16:42:40 <dustymabe> clouds*
16:43:23 <lucab> my gut feeling at this point is that we could maybe just agree on the platform ID, even without doing the rest of the integration work in the ticket
16:43:33 <dustymabe> lucab: +1
16:43:50 <nasirhm> lucab: I have similar feelings, +1
16:43:57 <jlebon> i wonder if at this point, we should just add support for new cloud platforms in ignition and afterburn, but not build another image, and instead just document instructions for how to stamp the qemu image
16:44:04 <lucab> that would unblock Afterburn without committing to the whole work on FCOS side
16:44:16 <darkmuggle> +1 to jlebon's idea
16:44:25 <jlebon> it's not a great UX, but also adding every cloud in the sky is not scalable
16:44:45 <darkmuggle> As upstream projects, Ignition and Afterburn can support more Clouds than FCOS.
16:45:27 <bgilbert> it's also possible to use the install flow.  coreos-installer knows how to override the platform ID.
16:45:57 <jlebon> yeah, that's a nicer UX for the bare metal platforms
16:46:18 <lucab> it's somewhat unrelated, and also depends on whether the specific platform needs the image in some peculiar format
16:46:40 <dustymabe> our strategy for additional cloud images is probably a separate topic
16:46:41 <darkmuggle> to bgilbert's point, some cloud providers could self-publish by taking a published image and then using the installer to override to a supported platform
16:47:14 <bgilbert> we opted not to ship a Packet image for example
16:47:15 <darkmuggle> s/supported/{Ignition/Afterburn}
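(A minimal sketch of the install-flow approach bgilbert and darkmuggle describe, assuming the current coreos-installer CLI; the device, platform name, and Ignition URL below are illustrative, not from the meeting:)

    # write the metal image to disk, overriding the Ignition platform ID it reports
    sudo coreos-installer install /dev/sda \
        --platform openstack \
        --ignition-url http://example.com/config.ign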
16:47:19 <lucab> ok, it looks like I can proceed with just reserving an ID. The submitter picked up `one`, but I'm not very happy with that
16:47:38 <dustymabe> lucab: should you do a proposed? and then we can bikeshed ?
16:48:12 <lucab> darkmuggle: do you maybe know if `one` is some preferred nickname in OpenNebula context?
16:48:33 <jlebon> lucab: the repo at least is named one: https://github.com/OpenNebula/one
16:48:35 <darkmuggle> I reserve the right to be wrong, but I think "one" is OpenNebula Enterprise.
16:49:04 <bgilbert> there's precedent for not using the provider's internal abbreviation for itself
16:49:31 <nasirhm> darkmuggle: I think you're right, the link shared by jlebon says the same thing https://github.com/OpenNebula/one
16:49:32 <lucab> #proposed proceed with reserving a platform ID for OpenNebula, without committing to full support nor producing images at this time
16:49:45 <jlebon> the CLI is called `onehost`
16:50:15 <dustymabe> ack
16:50:37 <bgilbert> lucab: I guess you don't want to bikeshed the name here? :-)
16:50:43 <bgilbert> +1 to proposal if not
16:50:52 <jlebon> +1 to proposal
16:51:09 <lucab> we can if we want
16:51:15 <slowrie> +1
16:51:22 <darkmuggle> +1
16:51:27 <davdunc> +1
16:51:34 <lucab> I was just going to propose `nebula` or `opennebula` to the reporter
16:51:46 <lucab> #agreed proceed with reserving a platform ID for OpenNebula, without committing to full support nor producing images at this time
16:52:37 <lucab> how do you folks feel about `one`?
16:52:50 <nasirhm> We don't have other tickets marked as meeting. Does anyone have a ticket they would like to discuss?
16:52:51 <jlebon> not great either
16:52:54 <lucab> (I may be the only one not happy with it)
16:53:03 <dustymabe> originally I didn't like it, but it at least has an actual meaning
16:53:32 <jlebon> i like the two counter-proposals
16:53:34 <bgilbert> lucab: not great
16:53:34 <dustymabe> `opennebula` works, `one` seems OK. We'd just document it on our platforms page eventually
16:53:53 <bgilbert> `one` seems likely to lead to perpetual confusion by anyone who doesn't know the name
16:54:11 <darkmuggle> On one hand it's no worse than `aws`, but `aws` has universal understanding. I'd vote `nebula`
16:54:15 <lucab> bgilbert: which one? that one!
16:54:26 <bgilbert> hard to Google
16:55:01 <bgilbert> in CL times, we opted for `oracle-oci` rather than just `oci`
16:55:02 <nasirhm> nebula seems like a better idea.
16:55:03 <lucab> bgilbert, jlebon: any preference between the two?
16:55:10 <dustymabe> is there a difference between open nebula enterprise and something else ?
16:55:22 <dustymabe> i.e. could there be two things ?
16:55:50 <dustymabe> i.e. would someone really want the term "open" in front of nebula if we were to go with `nebula`
16:56:13 <lucab> (I'm ok with the full `opennebula`)
16:56:23 <davdunc> dustymabe: I would go with one. Better not to give yourself too much to keep track of in alias tables.
16:56:33 <bgilbert> any relationship to the defunct Nebula company?
16:57:00 <lucab> dustymabe: it looks like it's exactly like openstack :)
16:57:00 <bgilbert> they built a private cloud appliance
16:57:10 <davdunc> they get sedimented  into internal systems.
16:57:56 <lucab> #proposed propose `opennebula` as the actual platform ID for OpenNebula
16:58:01 <bgilbert> +1
16:58:12 <davdunc> +1 lucab
16:58:15 <dustymabe> +1
16:58:28 <nasirhm> +1
16:58:35 <jlebon> +1
16:58:42 <darkmuggle> +1
16:58:47 <lucab> ack
16:58:48 <lucab> #agreed propose `opennebula` as the actual platform ID for OpenNebula
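(For context: in practice a reserved platform ID surfaces as the platform kernel argument that Ignition and Afterburn read at boot. A hypothetical sketch of how `opennebula` would appear once the integration work actually lands — nothing beyond the ID reservation was agreed here:)

    ignition.platform.id=opennebula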
16:59:07 * bgilbert should have said "+one"
16:59:27 <lucab> (that's all on this topic from my side)
16:59:31 <lucab> bgilbert: ah!
16:59:53 <dustymabe> 🤣
16:59:55 <nasirhm> Would anyone like to discuss any other tickets for today's meeting?
17:00:00 <bgilbert> nasirhm: could mention https://github.com/coreos/fedora-coreos-tracker/issues/390 briefly
17:00:34 <nasirhm> #topic Publish the initrd and rootfs images separately for PXE booting
17:00:41 <nasirhm> #link https://github.com/coreos/fedora-coreos-tracker/issues/390
17:00:55 <bgilbert> this is starting to land in today's releases
17:01:00 <bgilbert> a coreos-status post just went out
17:01:14 <bgilbert> all three channels now have three PXE artifacts: kernel, initramfs, rootfs
17:01:45 <bgilbert> anyone who's booting the PXE live image will have to start specifying the rootfs in their PXE config.
17:01:45 <jlebon> very nice work on this bgilbert!
17:01:48 <bgilbert> :-)
17:01:58 <nasirhm> bgilbert++
17:02:00 <bgilbert> currently, in all three channels, the rootfs doesn't do anything yet
17:02:12 <bgilbert> FCOS will complain if you don't specify it, but will still work
17:02:19 <miabbott> huge thanks to bgilbert for seeing this through
17:02:24 <miabbott> bgilbert++
17:02:24 <zodbot> miabbott: Karma for bgilbert changed to 7 (for the current release cycle):  https://badges.fedoraproject.org/tags/cookie/any
17:02:44 <bgilbert> but over the next couple months we're going to switch channels over, so the initramfs returns to being a normal-sized initramfs, and most of the bits are in the rootfs instead
17:03:22 <bgilbert> the change provides more flexibility in how PXE systems are provisioned
17:03:37 <bgilbert> in particular, not having to fetch 700-800 MB from the PXE preboot environment
17:04:03 <bgilbert> for more info and migration advice, see
17:04:05 <bgilbert> #link https://docs.fedoraproject.org/en-US/fedora-coreos/bare-metal/#_pxe_rootfs_image
17:04:21 <dustymabe> +1 very nice work
17:04:50 <bgilbert> (also, the ISO images in next and testing now contain bit-for-bit copies of the PXE artifacts, for those who find that useful)
17:05:34 <bgilbert> s/channels/streams/.  wow, old habits.
17:05:48 <bgilbert> that's all I have
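(A minimal PXE config sketch of the new three-artifact layout, adapted from the docs linked above; version strings and the HTTP server address are placeholders, not from the meeting:)

    DEFAULT fcos
    LABEL fcos
        KERNEL fedora-coreos-<version>-live-kernel-x86_64
        APPEND initrd=fedora-coreos-<version>-live-initramfs.x86_64.img coreos.live.rootfs_url=http://192.168.1.1/fedora-coreos-<version>-live-rootfs.x86_64.img ignition.firstboot ignition.platform.id=metal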
17:07:00 <nasirhm> Thank you, bgilbert. We've still got time; would anyone like to discuss another ticket?
17:07:46 <dustymabe> None from me that I can think of, other than open floor
17:07:50 <lucab> for anyone looking for low-hanging stuff, this one just got in: https://github.com/coreos/fedora-coreos-tracker/issues/601#issuecomment-672997972
17:08:13 * nasirhm is going through the ticket
17:08:39 <dustymabe> nasirhm: let's switch to open floor
17:08:55 <davdunc> lucab do you want to reuse the code in ec2-utils there?
17:08:58 <nasirhm> #topic Open Floor
17:09:12 <nasirhm> #info for anyone looking for low-hanging stuff, this one just got in.
17:09:23 <nasirhm> #undo
17:09:23 <zodbot> Removing item from minutes: INFO by nasirhm at 17:09:12 : for anyone looking for low-hanging stuff, this one just got in.
17:09:46 <lucab> davdunc: do you have a pointer to that? I guess it still uses `nvme` at some point no?
17:09:57 <nasirhm> #info AWS Fedora CoreOS missing /dev/xvd* symlinks
17:10:12 <nasirhm> #link https://github.com/coreos/fedora-coreos-tracker/issues/601#issuecomment-672997972
17:10:22 <nasirhm> I would like to work on it.
17:10:23 <davdunc> I don't have a real link; it's in the src.rpm for Amazon Linux.
17:10:40 <davdunc> https://github.com/davdunc/ec2-utils
17:10:51 <lucab> nasirhm: it was just about "we forgot to ship the nvme utility", to be clear
17:11:20 <jlebon> hmm, probably cleaner to just make it part of the existing PR which adds the dependency on it (https://github.com/coreos/fedora-coreos-config/pull/476)
17:11:36 <nasirhm> lucab: Ah, I'm trying to find some good first issues to get started. :)
17:12:28 <dustymabe> jlebon: we put a hold on that PR to see what davdunc could come up with
17:13:37 <jlebon> dustymabe: ahh ok. so if it makes it into systemd, then we can close that PR?
17:13:48 <dustymabe> yeah, that was the idea
17:13:51 <davdunc> dustymabe: I am pushing for the systemd changes.
17:13:58 <jlebon> ack
17:14:14 <dustymabe> lucab: so is the real question "do all sets of existing udev rules use the nvme utility?"
17:14:20 <dustymabe> If so then we should ship the utility?
17:14:39 <dustymabe> or maybe we should even ship it independent of that
17:14:55 <lucab> I think the answer is yes
17:15:14 <lucab> (or they are python programs)
17:16:42 <dustymabe> Added:
17:16:45 <dustymabe> nvme-cli-1.10.1-1.fc32.x86_64
17:16:51 <davdunc> lucab yea. they are python applications.
17:16:52 <dustymabe> seems like only one pkg was added when I layered it
17:17:27 <dustymabe> so at least `nvme-cli` doesn't have a dep on python
17:17:50 <lucab> yes, I'd be for shipping it regardless of the AWS discussion
17:18:27 <dustymabe> any opposed?
17:19:39 <jlebon> what does it do exactly?
17:20:58 <lucab> jlebon: it's a small CLI utility to interact with NVMe disk (e.g. detaching them). The udev rules in CL used it to get the aws-friendly-disk-name embedded in the vendor-metadata area
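(A rough illustration of that usage, assuming an EBS NVMe volume at /dev/nvme1n1; the exact udev wiring in Container Linux differed and isn't reproduced here:)

    # dump the controller identify data, including the vendor-specific area
    # where EBS embeds the original device name (e.g. /dev/sdf)
    sudo nvme id-ctrl -v /dev/nvme1n1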
17:22:01 <jlebon> ahh ok. so it's something that sysadmins would find useful on its own
17:22:25 <nasirhm> I found this link to learn more about it: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_storage_devices/overview-of-nvme-over-fabric-devicesmanaging-storage-devices
17:22:27 <lucab> jlebon: that'd be my feedback, yes
17:22:33 <davdunc> and in setting up the volumes at boot.
17:23:20 <jlebon> sanity-checked size in koji. it's tiny
17:23:53 <jlebon> SGTM re. including it
17:24:12 <nasirhm> So, it's a +1?
17:24:22 <davdunc> with multi-attach support in EBS, 1.4 is a necessity
17:24:52 <dustymabe> haha I read "multi-attach" as "multi-arch" - I need a break
17:25:14 <dustymabe> so I don't hear anyone opposed to including it
17:25:42 <dustymabe> nasirhm: would you want to open a PR to add it? (I can help show you how)
17:26:01 <nasirhm> I would love to.
17:27:02 <davdunc> nasirhm: I am around to help you with anything you need from the AWS side.
17:27:05 <dustymabe> cool. when we do that let's add the context from this meeting about its usefulness
17:27:30 <nasirhm> #action nasirhm to add nvme-cli for udev rules.
17:27:35 <nasirhm> davdunc: Thank You.
17:28:10 <nasirhm> dustymabe: Sure, I will summarize things from here and add it on the PR.
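(A sketch of what that PR might add, assuming nvme-cli goes into one of the shared package lists in fedora-coreos-config; the exact manifest file and placement are up to the PR:)

    # manifests/fedora-coreos-base.yaml (location assumed)
    packages:
      - nvme-cli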
17:29:09 <nasirhm> Good to close?
17:29:17 <dustymabe> +1 from my side
17:29:29 <bgilbert> +1
17:29:36 <jlebon> +1
17:29:38 <davdunc> +1
17:29:52 <nasirhm> Thank you, everyone, for joining today.
17:29:56 <nasirhm> #endmeeting