16:29:47 <dustymabe> #startmeeting fedora_coreos_meeting
16:29:47 <zodbot> Meeting started Wed Nov 11 16:29:47 2020 UTC.
16:29:47 <zodbot> This meeting is logged and archived in a public location.
16:29:47 <zodbot> The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:29:47 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:29:47 <zodbot> The meeting name has been set to 'fedora_coreos_meeting'
16:29:51 <dustymabe> #topic roll call
16:29:59 <slowrie> .hello2
16:30:00 <zodbot> slowrie: slowrie 'Stephen Lowrie' <slowrie@redhat.com>
16:30:05 <travier> .hello2
16:30:06 <zodbot> travier: Sorry, but you don't exist
16:30:13 <travier> 🙁
16:30:14 <skunkerk_> .hello sohank2602
16:30:15 <zodbot> skunkerk_: sohank2602 'Sohan Kunkerkar' <skunkerk@redhat.com>
16:30:24 <travier> .hello2 siosm
16:30:25 <zodbot> travier: Sorry, but you don't exist
16:30:28 <travier> .hello siosm
16:30:29 <zodbot> travier: siosm 'Timothée Ravier' <travier@redhat.com>
16:30:36 <travier> 🙂
16:30:36 <lucab> .hello2
16:30:37 <zodbot> lucab: lucab 'Luca Bruno' <lucab@redhat.com>
16:30:42 <dustymabe> travier: third time is the charm!
16:30:50 <dustymabe> .hello2
16:30:51 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
16:31:25 <red_beard> .hello redbeard
16:31:26 <zodbot> red_beard: redbeard 'Brian 'redbeard' Harrington' <bharring@redhat.com>
16:31:34 <travier> 👋
16:33:07 <dustymabe> #chair slowrie travier skunkerk_ lucab red_beard
16:33:07 <zodbot> Current chairs: dustymabe lucab red_beard skunkerk_ slowrie travier
16:33:17 <dustymabe> #topic Action items from last meeting
16:33:25 <dustymabe> * lucab to follow up with NM team about hostname length and run this issue to ground
16:33:47 <dustymabe> #info lucab opened up an issue to start discussion with the NM team https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/572
16:34:00 <dustymabe> I also just started a mailing list post asking them to look at it soon
16:34:25 <miabbott> .hello miabbott
16:34:25 <cyberpear> .hello2
16:34:25 <zodbot> miabbott: miabbott 'Micah Abbott' <miabbott@redhat.com>
16:34:28 <zodbot> cyberpear: cyberpear 'James Cassell' <fedoraproject@cyberpear.com>
16:34:30 <dustymabe> #chair miabbott cyberpear
16:34:30 <zodbot> Current chairs: cyberpear dustymabe lucab miabbott red_beard skunkerk_ slowrie travier
16:34:43 <lucab> I was unsure whether they look closer at gitlab tickets or fedora BZ
16:34:44 <lorbus> .hello2
16:34:45 <zodbot> lorbus: lorbus 'Christian Glombek' <cglombek@redhat.com>
16:34:57 <dustymabe> lucab: when in doubt, ask :)
16:35:18 <lorbus> that's currently a blocker for the OKD 4.6 release :/
16:36:02 <dustymabe> lorbus: seems like the MCO could just carry that (ugly) workaround for now (checking for localhost and fedora)
16:36:08 <lorbus> we could hack around it by accepting the "fedora" fallback hostname that we get now
16:37:01 <lorbus> yeah exactly, might be the only option to get it to work right now
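(For context on the workaround being floated here: the idea is to treat NetworkManager's "fedora" fallback hostname the same as an unset/localhost hostname when deciding whether a node still needs a real hostname. A minimal sketch of that check in Python follows — the actual MCO is written in Go, and the exact set of fallback names is an assumption:)

    # Hypothetical sketch of the workaround discussed above: treat the
    # "fedora" fallback hostname (and the usual localhost defaults) as
    # "no hostname was actually set".
    FALLBACK_HOSTNAMES = {"", "localhost", "localhost.localdomain", "fedora"}

    def hostname_is_set(hostname: str) -> bool:
        return hostname.strip() not in FALLBACK_HOSTNAMES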
16:37:22 <dustymabe> hopefully we'll get some traction on the upstream NM issue soon and we'll know in a few days what our path is
16:38:20 <dustymabe> #topic Add support for OVHcloud (formerly named OVH)
16:38:27 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/666
16:39:04 <travier> So this one is more of a cooperation issue than development work
16:39:25 * red_beard is reviewing the comments
16:39:35 <travier> OVHcloud is a collection of platforms and we already support them (VMware, OpenStack, BareMetal)
16:40:08 <red_beard> travier: clarifying, this is a hosting provider you use where we'd like to see them offer Fedora?
16:40:20 <travier> What we can improve here is to have them show FCOS as an option in their automated installer
16:40:32 <travier> red_beard: Yes
16:40:37 <red_beard> :thumbs_up:
16:40:38 <lucab> travier: if you can spin up a ContainerLinux instance there, can you check which OEM/platform-id they use? I didn't see anything specific to OVH in CL, so I guess they are plain openstack ones (or I may have missed some bits)
16:41:18 <travier> I have seen the CL option there for BareMetal and KVM-based VPS but never tried it so far, and I'm afraid there is no support for Ignition
16:41:38 <red_beard> travier: it's more based on how they're providing a meta-data service
16:41:58 <red_beard> which is to say ignition itself is just a client to a meta-data service
16:42:35 <travier> Which is the main issue here. For OpenStack there isn't much to do besides having them provide FCOS images, but for BM I don't think they will support Ignition (but we can try)
16:42:44 <travier> I don't think there is any for BM
16:43:07 <red_beard> how are they providing bare metal services?  OpenStack Ironic?
16:44:06 <red_beard> my understanding (from a number of years ago looking at their auction site) was that they were manually provisioned... but that doesn't seem tenable in 2020.
16:44:24 <travier> You have remote KVM console access, pre-installed/prepared OS options and a Debian-based rescue system as a boot option
16:44:44 <red_beard> then again.... they only seem to be available by the month (not by the hour) so it's likely limiting.
16:45:03 <red_beard> hmmm, no it is auto-provisioned _somehow_
16:45:05 <travier> That's for the BM part. The OpenStack & VMware offerings are regular OpenStack & VMware as far as I know
16:45:35 <red_beard> most of the systems say "Delivery from 120s"
16:45:42 <travier> They send you a password by mail to connect to your instance
16:46:07 <dustymabe> circa DigitalOcean 2015
16:46:12 <travier> Or you can set "users" with SSH keys
16:46:30 <red_beard> ok.  it's ConfigDrive based
16:46:41 <red_beard> lucab: do we still have support for ConfigDrive?
16:46:48 <red_beard> https://docs.ovh.com/us/en/dedicated/bringyourownimage/
16:46:59 <red_beard> "Meta data used by CloudInit when booting."
16:47:16 <red_beard> so they're using a meta-data service (likely the openstack one)
16:47:39 <travier> Oh nice, I didn't know about that one
16:47:58 <lucab> red_beard: yes, but it still needs some relevant OEM/platform-id to know about that (and use it)
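(Background on the config-drive mechanism under discussion: on OpenStack-style platforms, Ignition's openstack provider looks for user data either on a config drive with the filesystem label config-2 or via the metadata service, following the standard OpenStack config-drive layout. A minimal sketch of reading that user data — the mount point /mnt/config-2 is an assumption for illustration:)

    # Minimal sketch: read the user data, which is where Ignition's
    # openstack provider finds its config, from an already-mounted
    # config drive.
    from pathlib import Path

    user_data = Path("/mnt/config-2/openstack/latest/user_data")
    if user_data.exists():
        print(user_data.read_text())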
16:48:30 <dustymabe> IIUC, we can just use our already created artifacts for OpenStack/VMWare ?
16:48:32 <red_beard> to be fair, i was just assessing how _much_ work it would be.
16:49:08 <lucab> dustymabe: `openstack`, that would be my understanding
16:49:08 <red_beard> dustymabe: that's more to say that step one *isn't* "re-do how things are built for a single provider"
16:49:39 <dustymabe> lucab: not VMWare?
16:49:47 <red_beard> definitely not VMware
16:49:56 <red_beard> again: meta-data service
16:50:17 <dustymabe> i'm confused.. travier said "The OpenStack & VMware offerings are regular OpenStack & VMware as far as I know"
16:50:34 <travier> dustymabe: there are 3 platforms in the OVHcloud offering
16:50:44 <travier> BM, OpenStack & VMware
16:50:49 <dustymabe> yep, got that
16:51:10 <travier> red_beard: yes, why not reuse our images for VMware?
16:51:10 <dustymabe> and our VMWare artifacts could/should work with the VMWare offering?
16:51:27 <travier> I think so. I have no reason to believe they would not but I have not tried
16:51:40 <red_beard> travier: let me send you a Google Doc i wrote years ago
16:51:43 <lucab> dustymabe: I think they will, you are right. red_beard was going through the openstack offering I think
16:51:49 <dustymabe> k
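(To make the VMware/OpenStack distinction concrete: each FCOS artifact carries a platform ID on its kernel command line, and that ID selects where Ignition fetches its config — the metadata service/config drive for the openstack image, VMware guestinfo variables for the vmware image. A minimal sketch of checking the platform ID on a booted node:)

    # Minimal sketch: read the Ignition platform ID from the kernel
    # command line of a booted FCOS node.
    platform = None
    with open("/proc/cmdline") as f:
        for arg in f.read().split():
            if arg.startswith("ignition.platform.id="):
                platform = arg.split("=", 1)[1]
    print(platform)  # e.g. "openstack" or "vmware"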
16:52:10 <dustymabe> the reason I bring it up.. is.. we have no automated test coverage of our VMWare artifacts right now
16:52:28 <dustymabe> if we could forge a relationship with OVH similar to the relationships we have with other clouds, we might be able to change that
16:52:45 <dustymabe> I see that as a win
16:54:26 <dustymabe> regarding OpenStack, I think it would be great to have FCOS offered there as a named image (we could enhance our metadata and website to mention the image name too).
16:54:27 <travier> Yes, I think it's easier to start by working with them to provide FCOS as an official image in OpenStack and BareMetal
16:54:46 <dustymabe> BM sounds a bit more open-ended, but works for me too
16:55:03 <dustymabe> I guess it all depends on how much effort we have to expend and how much they are willing to work with us and do part of the work too
16:55:54 <red_beard> dustymabe: starting with openstack is the right thing to do.  based on their docs, I'd bet $1 that they're managing bare metal hosts with Ironic, which means openstack support will handle both traditional VMs and bare metal.
16:56:20 <lucab> silly question related to openstack cloud providers: do they share a common source-list of images? or is everyone redefining their own images for basic distros?
16:56:53 <red_beard> lucab: wild west
16:56:54 <dustymabe> lucab: not sure on that one. until today I didn't realize there was more than one openstack public cloud (vexxhost is the one I knew about)
16:56:59 <dustymabe> other than maybe rackspace
16:57:06 <red_beard> hence - https://github.com/coreos/scripts/tree/master/oem/openstack
16:57:23 <red_beard> it's a matter of loading the image into Glance
16:57:36 <red_beard> but there is no master catalog
16:57:43 <lucab> red_beard: :sadpuppy:
16:58:12 <red_beard> that's why i was always pushing for a JSON metadata blob in the CL version directories ;)
16:58:42 <red_beard> even that openstack script had to be used with https://github.com/coreos/scripts/blob/master/oem/generic/check_etag.sh
16:58:54 <red_beard> (which is why i first wrote both of them) ;)
16:59:16 <lucab> ack
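(Since the scoped-down action item is getting FCOS images loaded into their Glance catalog, here is a minimal sketch of what that upload looks like with openstacksdk; the clouds.yaml entry name, image name, and file path are placeholders, not a tested recipe for OVHcloud:)

    # Minimal sketch: upload an FCOS qcow2 into an OpenStack Glance image
    # catalog via openstacksdk. The cloud entry "ovh", the image name, and
    # the local file path are all assumptions for illustration.
    import openstack

    conn = openstack.connect(cloud="ovh")
    image = conn.image.create_image(
        name="fedora-coreos-stable-x86_64",
        filename="fedora-coreos-openstack.x86_64.qcow2",
        disk_format="qcow2",
        container_format="bare",
    )
    print(image.id)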
16:59:18 <dustymabe> ok let me see what you all think of this:
16:59:26 <dustymabe> #info provided it doesn't take a huge amount of time and effort, we'd like to work with OVHcloud to get Fedora CoreOS as a named image for their various cloud offerings. The VMware offering is particularly interesting because we currently don't have any automated test coverage of VMware.
17:00:09 <red_beard> dustymabe: clarifying further, so the goal of having VMware availability would be to use it for testing?
17:00:13 <lucab> I think we scoped down the first action item to "try to have them load FCOS images into their public openstack-glance(?) catalog"
17:00:35 * red_beard agrees
17:00:50 <dustymabe> red_beard: yeah, it would have multiple benefits: 1. customers see FCOS as an option, 2. we get automated test coverage
17:01:00 <red_beard> well, i guess step one is "someone" should try the BYO image mechanism
17:01:15 <red_beard> they provide IPMI access so working through the boot should be possible
17:01:19 <dustymabe> red_beard: and step two is seeing if OVH is open to working with us
17:01:25 <red_beard> i'll commit to taking a stab at that
17:01:42 <dustymabe> we have some other clouds that we'd like to work with too (linode, etc)
17:01:43 <red_beard> (and make RH sponsor the exploration) :D
17:02:34 <dustymabe> travier: would you want to take on trying to find a contact there?
17:02:37 <lucab> red_beard: I guess you have seen https://github.com/squat/coreos-ovh and already nagged the author? ;)
17:03:05 <travier> dustymabe: sure, I can try to reach out to them to find a technical contact
17:03:09 <red_beard> i haven't, but it's a good excuse to shoot the shit with squat
17:03:44 <dustymabe> #action travier to try to find a contact at OVH we can build a relationship with
17:04:24 <dustymabe> travier: typically we need multiple relationships, one with someone who can get us an account (usually with credits or unbilled), and another with a technical team who can help if there are issues
17:04:27 <travier> lucab: this is mostly what I did to install FCOS on an OVH VPS
17:04:57 <dustymabe> #action red_beard to investigate forklifting FCOS artifacts into OVH to see if they work
17:05:11 <dustymabe> anything else for this topic?
17:06:07 <red_beard> not from me
17:06:19 <travier> lgtm
17:06:43 <dustymabe> #topic tracker: Fedora 33 rebase work
17:06:49 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/609
17:07:16 <dustymabe> #info we had an amazing turnout at our test day for the f33 rebase: https://testdays.fedoraproject.org/events/98
17:07:26 <dustymabe> almost 30 people turned out
17:07:40 <dustymabe> looks like most everything is looking good. A few smallish issues found
17:08:13 <dustymabe> To keep us on schedule with shipping f33 in `testing` next week I've opened https://github.com/coreos/fedora-coreos-config/pull/735
17:08:28 <lucab> #link https://github.com/coreos/fedora-coreos-tracker/issues?q=is%3Aissue+label%3Afallout%2Ff33+
17:09:03 <dustymabe> yep, 664 was found by jlebon during test day
17:09:19 <dustymabe> 651 is being worked on upstream, not much FCOS can do
17:09:36 <dustymabe> 649 we discussed earlier, pending discussion with NM team
17:10:26 <dustymabe> lucab: any more you wanted to add?
17:11:17 <lucab> https://github.com/coreos/fedora-coreos-tracker/issues/663 is still unclear
17:12:00 <dustymabe> indeed. though I would think that ~30 people who participated in test day would have hit it too
17:12:58 <dustymabe> we'll continue to work with skycastlelily in the ticket
17:13:15 <dustymabe> anything else before I move to open floor?
17:13:30 <lucab> I also have some inner bias towards "user mistake somewhere else"
17:14:47 <dustymabe> #topic open floor
17:14:54 <dustymabe> anyone with anything for open floor today?
17:15:17 * dustymabe notes we have quite a few tickets where we've decided on the path forward but haven't started working on them yet
17:15:27 <dustymabe> would be great if we could start picking some of those up
17:16:37 <travier> dustymabe: which ones do you have in mind?
17:17:00 <dustymabe> travier: let me see if I can find some
17:17:12 <dustymabe> https://github.com/coreos/fedora-coreos-tracker/issues/653
17:17:21 <dustymabe> https://github.com/coreos/fedora-coreos-tracker/issues/648
17:18:07 <dustymabe> https://github.com/coreos/fedora-coreos-tracker/issues/567
17:18:30 <dustymabe> so those are just a few
17:18:53 * dustymabe needs to try to go through and do some review of our existing tickets
17:20:05 <dustymabe> anyone with anything else for open floor?
17:20:14 <dustymabe> Thanks to everyone who participated in the test day
17:21:55 <dustymabe> #info we have some plans to re-organize the partition layout a bit. See https://github.com/coreos/fedora-coreos-tracker/issues/669
17:23:28 <dustymabe> #endmeeting