16:29:47 #startmeeting fedora_coreos_meeting
16:29:47 Meeting started Wed Nov 11 16:29:47 2020 UTC.
16:29:47 This meeting is logged and archived in a public location.
16:29:47 The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:29:47 Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:29:47 The meeting name has been set to 'fedora_coreos_meeting'
16:29:51 #topic roll call
16:29:59 .hello2
16:30:00 slowrie: slowrie 'Stephen Lowrie'
16:30:05 .hello2
16:30:06 travier: Sorry, but you don't exist
16:30:13 🙁
16:30:14 .hello sohank2602
16:30:15 skunkerk_: sohank2602 'Sohan Kunkerkar'
16:30:24 .hello2 siosm
16:30:25 travier: Sorry, but you don't exist
16:30:28 .hello siosm
16:30:29 travier: siosm 'Timothée Ravier'
16:30:36 🙂
16:30:36 .hello2
16:30:37 lucab: lucab 'Luca Bruno'
16:30:42 travier: third time is the charm!
16:30:50 .hello2
16:30:51 dustymabe: dustymabe 'Dusty Mabe'
16:31:25 .hello redbeard
16:31:26 red_beard: redbeard 'Brian 'redbeard' Harrington'
16:31:34 👋
16:33:07 #chair slowrie travier skunkerk_ lucab red_beard
16:33:07 Current chairs: dustymabe lucab red_beard skunkerk_ slowrie travier
16:33:17 #topic Action items from last meeting
16:33:25 * lucab to follow up with NM team about hostname length and run this issue to ground
16:33:47 #info lucab opened up an issue to start discussion with the NM team https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/572
16:34:00 I also just started a mailing list post asking them to look at it soon
16:34:25 .hello miabbott
16:34:25 .hello2
16:34:25 miabbott: miabbott 'Micah Abbott'
16:34:28 cyberpear: cyberpear 'James Cassell'
16:34:30 #chair miabbott cyberpear
16:34:30 Current chairs: cyberpear dustymabe lucab miabbott red_beard skunkerk_ slowrie travier
16:34:43 I was unsure whether they look closer at gitlab tickets or fedora BZ
16:34:44 .hello2
16:34:45 lorbus: lorbus 'Christian Glombek'
16:34:57 lucab: when in doubt, ask :)
16:35:18 that's currently a blocker for the OKD 4.6 release :/
16:36:02 lorbus: seems like the MCO could just carry that (ugly) workaround for now (checking for localhost and fedora)
16:36:08 we could hack around it by accepting the "fedora" fallback hostname that we get now
16:37:01 yeah exactly, might be the only option to get it to work right now
16:37:22 hopefully we'll get some traction on the upstream NM issue soon and we'll know in a few days what our path is
16:38:20 #topic Add support for OVHcloud (formerly named OVH)
16:38:27 #link https://github.com/coreos/fedora-coreos-tracker/issues/666
16:39:04 So this one is more of a cooperation issue than development work
16:39:25 * red_beard is reviewing the comments
16:39:35 OVHcloud is a collection of platforms and we already support them (VMware, OpenStack, BareMetal)
16:40:08 travier: clarifying, this is a hosting provider you use where we'd like to see them offer Fedora?
16:40:20 What we can improve here is to have them show FCOS as an option in their automated installer
16:40:32 red_beard: Yes
16:40:37 :thumbs_up:
16:40:38 travier: if you can spin up a ContainerLinux instance there, can you check which OEM/platform-id they use?
I didn't see anything specific to OVH in CL, so I guess they are plain openstack ones (or I may have missed some bits)
16:41:18 I have seen the CL option there for BareMetal and KVM based VPS but never tried so far and I'm afraid there is no support for Ignition
16:41:38 travier: it's more based on how they're providing a meta-data service
16:41:58 which is to say ignition itself is just a client to a meta-data service
16:42:35 Which is the main issue here. For OpenStack there isn't much to do besides having them provide FCOS images but for BM, I don't think they will support Ignition (but we can try)
16:42:44 I don't think there is any for BM
16:43:07 how are they providing bare metal services? OpenStack Ironic?
16:44:06 my understanding (from a number of years ago looking at their auction site) was that they were manually provisioned... but that doesn't seem tenable in 2020.
16:44:24 You have remote KVM console access, pre-installed/prepared OS options and a Debian based rescue system as a boot option
16:44:44 then again.... they only seem to be available by the month (not by the hour) so it's likely limiting.
16:45:03 hmmm, no it is auto-provisioned _somehow_
16:45:05 That's for the BM part. The OpenStack & VMware is regular OpenStack & VMware as far as I know
16:45:35 most of the systems say "Delivery from 120s"
16:45:42 They send you a password by mail to connect to your instance
16:46:07 circa DigitalOcean 2015
16:46:12 Or you can set "users" with SSH keys
16:46:30 ok. it's ConfigDrive based
16:46:41 lucab: do we still have support for ConfigDrive?
16:46:48 https://docs.ovh.com/us/en/dedicated/bringyourownimage/
16:46:59 "Meta data used by CloudInit when booting."
16:47:16 so they're using a meta-data service (likely the openstack one)
16:47:39 Oh nice, I didn't know about that one
16:47:58 red_beard: yes, but it still needs some relevant OEM-id to know about that (and use it)
16:48:30 IIUC, we can just use our already created artifacts for OpenStack/VMWare ?
16:48:32 to be fair, i was just assessing how _much_ work it would be.
16:49:08 dustymabe: `openstack`, that would be my understanding
16:49:08 dustymabe: that's more to say that step one *isn't* "re-do how things are built for a single provider"
16:49:39 lucab: not VMWare?
16:49:47 definitely not VMware
16:49:56 again: meta-data service
16:50:17 i'm confused.. travier said "The OpenStack & VMware is regular OpenStack & VMware as far as I know"
16:50:34 dustymabe: there are 3 platforms in the OVHcloud offering
16:50:44 BM, OpenStack & VMware
16:50:49 yep, got that
16:51:10 red_beard: yes, why not reuse our images for VMware?
16:51:10 and our VMWare artifacts could/should work with the VMWare offering?
16:51:27 I think so. I have no reason to believe they would not but I have not tried
16:51:40 travier: let me send you a Google Doc i wrote years ago
16:51:43 dustymabe: I think they will, you are right. red_beard was going through the openstack offering I think
16:51:49 k
16:52:10 the reason I bring it up.. is.. we have no automated test coverage of our VMWare artifacts right now
16:52:28 if we could forge a relationship with OVH similar to the relationships we have with other clouds, we might be able to change that
16:52:45 I see that as a win
16:54:26 regarding OpenStack, I think it would be great to have FCOS offered there as a named image (we could enhance our metadata and website to mention the image name too).
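
To make the "named image" idea above concrete, here is a minimal sketch (in Python, assuming the openstacksdk library) of how an FCOS OpenStack disk image could be loaded into a provider's Glance catalog. The cloud name "ovh", the image name, and the local filename are illustrative placeholders, not anything OVH actually documents.

    # Minimal sketch: upload a Fedora CoreOS OpenStack qcow2 into Glance so it
    # shows up as a named image in the provider's catalog. Assumes openstacksdk
    # is installed and clouds.yaml has an entry named "ovh" (hypothetical).
    import openstack

    conn = openstack.connect(cloud="ovh")

    # The filename below stands in for a downloaded FCOS OpenStack artifact;
    # qcow2/bare match how the FCOS OpenStack image is normally published.
    image = conn.create_image(
        name="fedora-coreos-stable",
        filename="fedora-coreos-openstack.x86_64.qcow2",
        disk_format="qcow2",
        container_format="bare",
        wait=True,
    )
    print("uploaded image:", image.id)

Once an image like this is in the catalog, instances booted from it get their Ignition config through the regular OpenStack metadata service or config drive, which is the "Ignition is just a client to a meta-data service" point made above.
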
16:54:27 Yes, I think it's easier to start by working with them to provide FCOS as an official image in OpenStack and BareMetal
16:54:46 BM sounds a bit more open ended, but works for me too
16:55:03 I guess it all depends on how much effort we have to expend and how much they are willing to work with us and do part of the work too
16:55:54 dustymabe: starting with openstack is the right thing to do. based on their docs, I'd bet $1 that they're managing bare metal hosts with Ironic, which means openstack support will handle both traditional VMs and bare metal.
16:56:20 silly question related to openstack cloud providers: do they share a common source-list of images? or is everyone redefining their own images for basic distros?
16:56:53 lucab: wild west
16:56:54 lucab: not sure on that one. until today I didn't realize there was more than one openstack public cloud (vexxhost is the one I knew about)
16:56:59 other than maybe rackspace
16:57:06 hence - https://github.com/coreos/scripts/tree/master/oem/openstack
16:57:23 it's a matter of loading the image into Glance
16:57:36 but there is no master catalog
16:57:43 red_beard: :sadpuppy:
16:58:12 that's why i was always pushing for a JSON metadata blob in the CL version directories ;)
16:58:42 even that openstack script had to be used with https://github.com/coreos/scripts/blob/master/oem/generic/check_etag.sh
16:58:54 (which is why i first wrote both of them) ;)
16:59:16 ack
16:59:18 ok let me see what you all think of this:
16:59:26 #info provided it doesn't take a huge amount of time and effort we'd like to work with OVH cloud to get Fedora CoreOS as a named image for their various cloud offerings. The VMWare offering is particularly interesting because we currently don't have any automated test coverage of VMWare.
17:00:09 dustymabe: clarifying further, so the goal of having VMware availability would be to use it for testing?
17:00:13 I think we scoped down the first action item to "try to have them load FCOS images into their public openstack-glance(?) catalog"
17:00:35 * red_beard agrees
17:00:50 red_beard: yeah, it would have multiple benefits, 1. customers see FCOS as an option 2. we get automated test coverage
17:01:00 well, i guess step one is "someone" should try the BYO image mechanism
17:01:15 they provide IPMI access so working through the boot should be possible
17:01:19 red_beard: and step two is seeing if OVH is open to working with us
17:01:25 i'll commit to taking a stab at that
17:01:42 we have some other clouds that we'd like to work with too (linode, etc)
17:01:43 (and make RH sponsor the exploration) :D
17:02:34 travier: would you want to take on trying to find a contact there?
17:02:37 red_beard: I guess you have seen https://github.com/squat/coreos-ovh and already nagged the author? ;)
17:03:05 dustymabe: sure, I can try to reach out to them to find a technical contact
17:03:09 i haven't, but it's a good excuse to shoot the shit with squat
17:03:44 #action travier to try to find a contact at OVH we can build a relationship with
17:04:24 travier: typically we need multiple relationships, one with someone who can get us an account (usually with credits or unbilled), and another with a technical team who can help if there are issues
17:04:27 lucab: this is mostly what I did to install FCOS on an OVH VPS
17:04:57 #action red_beard to investigate forklifting FCOS artifacts into OVH to see if they work
17:05:11 anything else for this topic?
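
As a footnote to the platform-id discussion in this topic: FCOS images carry the platform on the kernel command line as ignition.platform.id (the OpenStack image uses "openstack"), which is how Ignition decides which metadata source to talk to. The small Python helper below is only a sketch of checking that value from a booted instance; it is not part of any FCOS or OVH tooling.

    # Rough illustration: read the Ignition platform ID from the kernel command
    # line of a running Fedora CoreOS instance. On the OpenStack image this
    # should print "openstack", which tells Ignition to fetch its config from
    # the OpenStack metadata service / config drive.
    def ignition_platform_id(cmdline_path="/proc/cmdline"):
        with open(cmdline_path) as f:
            for arg in f.read().split():
                if arg.startswith("ignition.platform.id="):
                    return arg.split("=", 1)[1]
        return None

    if __name__ == "__main__":
        print(ignition_platform_id())
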
17:06:07 not from me
17:06:19 lgtm
17:06:43 #topic tracker: Fedora 33 rebase work
17:06:49 #link https://github.com/coreos/fedora-coreos-tracker/issues/609
17:07:16 #info we had an amazing turnout at our test day for the f33 rebase: https://testdays.fedoraproject.org/events/98
17:07:26 almost 30 people turned out
17:07:40 looks like almost everything is looking good. A few smallish issues were found
17:08:13 To keep us on schedule with shipping f33 in `testing` next week I've opened https://github.com/coreos/fedora-coreos-config/pull/735
17:08:28 #link https://github.com/coreos/fedora-coreos-tracker/issues?q=is%3Aissue+label%3Afallout%2Ff33+
17:09:03 yep, 664 was found by jlebon during test day
17:09:19 651 is being worked on upstream, not much FCOS can do
17:09:36 649 we discussed earlier, pending discussion with NM team
17:10:26 lucab: any more you wanted to add?
17:11:17 https://github.com/coreos/fedora-coreos-tracker/issues/663 is still unclear
17:12:00 indeed. though I would think that the ~30 people who participated in test day would have hit it too
17:12:58 we'll continue to work with skycastlelily in the ticket
17:13:15 anything else before I move to open floor?
17:13:30 I also have some inner bias towards "user mistake somewhere else"
17:14:47 #topic open floor
17:14:54 anyone with anything for open floor today?
17:15:17 * dustymabe notes we have quite a few tickets where we've decided on the path forward but haven't started working on them yet
17:15:27 would be great if we could start picking some of those up
17:16:37 dustymabe: which ones do you have in mind?
17:17:00 travier: let me see if I can find some
17:17:12 https://github.com/coreos/fedora-coreos-tracker/issues/653
17:17:21 https://github.com/coreos/fedora-coreos-tracker/issues/648
17:18:07 https://github.com/coreos/fedora-coreos-tracker/issues/567
17:18:30 so those are just a few
17:18:53 * dustymabe needs to try to go through and do some review of our existing tickets
17:20:05 anyone with anything else for open floor?
17:20:14 Thanks to everyone who participated in the test day
17:21:55 #info we have some plans to re-organize the partition layout a bit. See https://github.com/coreos/fedora-coreos-tracker/issues/669
17:23:28 #endmeeting