16:30:52 <dustymabe> #startmeeting fedora_coreos_meeting
16:30:52 <zodbot> Meeting started Wed Nov  7 16:30:52 2018 UTC.
16:30:52 <zodbot> This meeting is logged and archived in a public location.
16:30:52 <zodbot> The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:52 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:30:52 <zodbot> The meeting name has been set to 'fedora_coreos_meeting'
16:31:00 <dustymabe> #topic roll call
16:31:07 <slowrie> .hello2
16:31:08 <zodbot> slowrie: slowrie 'Stephen Lowrie' <slowrie@redhat.com>
16:31:14 <mskarbek> .hello2
16:31:15 <zodbot> mskarbek: mskarbek 'None' <redhat@skarbek.name>
16:31:17 <yzhang> .hello2
16:31:18 <zodbot> yzhang: yzhang 'Yu Qi Zhang' <jzehrarnyg@gmail.com>
16:31:24 <ajeddeloh> .hello2
16:31:25 <zodbot> ajeddeloh: ajeddeloh 'Andrew Jeddeloh' <andrew.jeddeloh@redhat.com>
16:31:36 <jbrooks> .fas jasonbrooks
16:31:37 <zodbot> jbrooks: jasonbrooks 'Jason Brooks' <jbrooks@redhat.com>
16:31:54 <dustymabe> .hello2
16:31:55 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
16:32:02 <dustymabe> #chair slowrie mskarbek yzhang ajeddeloh jbrooks
16:32:02 <zodbot> Current chairs: ajeddeloh dustymabe jbrooks mskarbek slowrie yzhang
16:32:10 <d_code> .hello2
16:32:11 <zodbot> d_code: Sorry, but you don't exist
16:32:24 <d_code> I don't exist. nbd
16:32:31 <ajeddeloh> ^ that's my favorite error message
16:32:57 <yzhang> should probably say: that FAS ID doesn't exist
16:33:00 <yzhang> less confusing
16:33:24 <dustymabe> #chair d_code ajeddeloh yzhang
16:33:24 <zodbot> Current chairs: ajeddeloh d_code dustymabe jbrooks mskarbek slowrie yzhang
16:33:27 <dustymabe> welcome d_code
16:33:28 <d_code> yeah...my FAS ID is `dcode` but that's owned by a bot
16:33:30 <kumarmn> This is Manoj, representing ppc64le
16:33:31 <ajeddeloh> it's a reverse turing test, where the bot tries to convince you that you're a robot
16:33:38 <d_code> lol
16:33:41 <dustymabe> welcome kumarmn!
16:33:55 <slowrie> d_code: you can do .hello <fas_id>
16:34:04 <yzhang> or better yet, .hellomynameis
16:34:15 <d_code> .hellomynameis dcode
16:34:15 <slowrie> the .hello2 alias just substitutes your IRC nick
16:34:16 <zodbot> d_code: dcode 'Derek Ditch' <derek@rocknsm.io>
16:34:21 <d_code> \o/
16:34:24 <dustymabe> #chair d_code kumarmn
16:34:24 <zodbot> Current chairs: ajeddeloh d_code dustymabe jbrooks kumarmn mskarbek slowrie yzhang
16:34:25 <yzhang> \o/
16:34:45 <dustymabe> woot
16:34:50 <dustymabe> ok we'll get started
16:35:04 <dustymabe> #topic Action items from last meeting
16:35:16 <dustymabe> * ajeddeloh and lorbus to experiment on grubenv + static grub config
16:35:18 <dustymabe> * slowrie to open separate issues for gsutil/awscli cloud clients
16:35:20 <dustymabe> * lorbus to review gce-oslogin rpm
16:35:22 <dustymabe> https://pagure.io/fedora-server/issue/5#comment-538460
16:35:24 <dustymabe> * lorbus to add moby-engine package to FCOS configs
16:35:39 <slowrie> I opened https://github.com/coreos/fedora-coreos-tracker/issues/75
16:35:39 <dustymabe> lorbus is sick today so I'm going to re-action his individual items
16:35:55 <dustymabe> #action lorbus to review gce-oslogin rpm https://pagure.io/fedora-server/issue/5#comment-538460
16:36:12 <dustymabe> actually for one of them i know the answer
16:36:40 <dustymabe> #info mnguyen_ added the moby-engine package to FCOS: https://github.com/coreos/fedora-coreos-config/pull/26
16:36:49 <dustymabe> so that takes care of one of lorbus' action items
16:36:56 <ajeddeloh> lorbus and I are still punting on the grub thing
16:37:10 <dustymabe> #info slowrie opened issue for gsutil/awscli cloud clients: https://github.com/coreos/fedora-coreos-tracker/issues/75
16:37:17 <dustymabe> ajeddeloh: I'll re-action
16:37:21 <ajeddeloh> +1
16:37:24 <dustymabe> #action ajeddeloh and lorbus to experiment on grubenv + static grub config
16:37:35 <dustymabe> ajeddeloh: any blockers there or is it just ENOTIME ?
16:37:43 <ajeddeloh> ENOTIME
16:37:47 <dustymabe> k
16:38:15 <ajeddeloh> it was blocked on the bottlecap stuff, but that works well enough even without being merged that we could run with it
16:38:15 <dustymabe> kumarmn: FYI, we usually go through meeting items and then go to open floor, but I want to make sure we get to your topic so I'll just make a separate section for it
16:38:35 <dustymabe> ajeddeloh: ah yes. please remind me later today to go through the bottlecap CR again
16:38:43 <ajeddeloh> +1
16:38:55 <dustymabe> #topic friends interested in ppc64le
16:38:57 <kumarmn> dustymabe: Thanks. Just watching for now.
16:39:07 <dustymabe> kumarmn: want to state your interests ?
16:39:26 <dustymabe> we usually have ksinny with us, but I think it is a public holiday in India where she is, so she is AFK today
16:39:39 <dustymabe> she also represents multi-arch interests at times
16:39:45 <kumarmn> Want to make sure that ppc64le is a first tier architecture supported with CoreOS
16:39:56 <bgilbert> .hello2
16:39:57 <zodbot> bgilbert: bgilbert 'Benjamin Gilbert' <bgilbert@backtick.net>
16:40:15 <dustymabe> kumarmn: it's good we have interest from the community for this
16:40:20 <geoff-> geoff-: Geoff Levand <geoff@infradead.org>
16:40:24 <kumarmn> Multi-arch has a negative connotation at Red Hat. It really means that amd64 is the primary arch, and everything else is secondary.
16:40:47 <dustymabe> for Atomic Host we do have ppc64le today, which is nice. we know it can work
16:41:48 <kumarmn> Great. Let me know how I can help.
16:41:50 <mnguyen_> .hello mnguyen
16:41:51 <zodbot> mnguyen_: mnguyen 'Michael Nguyen' <mnguyen@redhat.com>
16:42:15 <dustymabe> kumarmn: one thing we are doing is changing how we build things slightly
16:42:30 <dustymabe> i've already seen you active in the github.com/coreos/coreos-assembler repo
16:42:35 <dustymabe> that is a good start
16:42:45 <kumarmn> OK, should I press ahead on that front?
16:43:09 <dustymabe> kumarmn: it definitely helps, and where you get stuck we should be able to help or we'll identify things as roadblocks that need to be dealt with
16:43:30 <dustymabe> for example, right now we are attempting to be able to run our build pipeline in kubernetes
16:44:01 <dustymabe> which makes for a nicer setup and workflow experience, but multi-arch in kubernetes/openshift has lagged
16:44:18 <dustymabe> so identifying places where things need to be addressed would be useful
16:44:21 <kumarmn> should be there as of a year ago at least.
16:45:18 <dustymabe> right, but most environments (for example the openshift in centos CI) don't have ppc64le hardware hooked into the openshift instance
16:45:37 <dustymabe> so let's identify that as a barrier we need to explore knocking down
16:45:41 <walters> .hello2
16:45:42 <zodbot> walters: walters 'Colin Walters' <walters@redhat.com>
16:45:59 <kumarmn> ok, when you say build pipeline do you mean coreos-assembler build?
16:46:03 <dustymabe> kumarmn: can you open a ticket for "enabling ppc64le for Fedora CoreOS" in the top level issue tracker ?
16:46:13 <dustymabe> it would help if you also state who you are and your interest in the ticket
16:46:27 <kumarmn> Sure. will do. dustymabe
16:46:31 <dustymabe> https://github.com/coreos/fedora-coreos-tracker/issues
16:46:36 <dustymabe> ^^ that is the place
16:47:12 <dustymabe> #action kumarmn to open a ticket representing ppc64le multi-arch interests in Fedora CoreOS
16:47:36 <dustymabe> kumarmn: one final question before we switch topics. Does a "cloud" exist where we can easily spin up PPC64le nodes?
16:47:58 <kumarmn> there is one at Oregon State University, OSU.
16:48:01 <dustymabe> obviously being able to spin up/down hardware easily for a community is important for enablement and bug investigation
16:48:25 <ajeddeloh> kumarmn: just 1?
16:48:26 <dustymabe> I know packet has aarch64 instances now
16:49:17 <kumarmn> https://osuosl.org/services/powerdev/request_powerci/
16:49:30 <dustymabe> kumarmn: cool. if you find any more it would be useful
16:49:39 <dustymabe> i didn't know if IBM had any offering for that or not
16:50:02 <dustymabe> we do have some hardware in fedora, but it's not quite a "cloud experience" where you just make an API call and get a machine
16:50:07 <ajeddeloh> we have some tests that need 3 machines, so while we can do things with just 1, having multiple will give us faster tests and more tests
16:50:15 <kumarmn> that is available/free for open source projects. The others are at https://developer.ibm.com/linuxonpower/cloud-resources/
16:50:44 <dustymabe> kumarmn: ahh, so is the latter a paid-for offering, but nice and reliable?
16:52:05 <kumarmn> Nimbix is paid, but only for people who need systems with GPUs
16:52:25 <dustymabe> ok cool. i'll follow up with you on this in IRC sometime
16:52:30 <dustymabe> will switch topics for now
16:52:37 <kumarmn> thanks dustymabe
16:52:51 <dustymabe> #topic Docker/Moby configuration
16:52:56 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/76
16:53:12 <dustymabe> mskarbek: can you suggest some sane configuration settings for us to set ?
16:53:22 <dustymabe> see my last comment: https://github.com/coreos/fedora-coreos-tracker/issues/76#issuecomment-436663280
16:53:50 <mskarbek> just follow docker package configuration (journald for logging, systemd for cgroup)
16:53:56 <dustymabe> also some background: we just added moby-engine to FCOS and are now discussing some sane config settings for it
16:54:15 <dustymabe> mskarbek: so the current moby-engine package in fedora doesn't do that?
16:54:20 <dustymabe> and we just need to change it to do that?
16:54:22 <mskarbek> nope
16:54:57 <dustymabe> mskarbek: ok, do you mind adding that note to the ticket ?
16:55:04 <dustymabe> anybody object to those suggested changes ?
16:55:05 <mskarbek> the current package has no configuration, so the docker daemon sets defaults for each driver - logging, storage, cgroup
16:56:31 <mskarbek> more interestingly, moby-engine also does not ship any seccomp profile
16:57:11 <walters> hm, that is odd
16:57:29 <mskarbek> so either we will have to include that or configure it to use one from cri-o/podman
16:57:57 <dustymabe> all good information to have
16:58:00 <dustymabe> thanks mskarbek
16:58:24 <dustymabe> #action mskarbek to follow up in #76 with more information on suggested configuration changes for the moby-engine package
16:58:53 <dustymabe> mskarbek: are you interested in reviewing the changes to the rpm when we make them (submitting a patch is also an option) :)
16:59:33 <mskarbek> ok, I'll grab that package and submit my proposal
17:00:07 <dustymabe> cool, will look for comments in #76 - thanks mskarbek
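[Editor's note: as a rough sketch of the settings discussed above (journald for logging, systemd as the cgroup driver, plus an explicit seccomp profile), a minimal /etc/docker/daemon.json could look like the block below. This is illustrative only, not what the Fedora moby-engine package ships; the seccomp profile path in particular is an assumption, reusing the profile that cri-o/podman ship on Fedora, as mskarbek suggested.]

    {
      "log-driver": "journald",
      "exec-opts": ["native.cgroupdriver=systemd"],
      "seccomp-profile": "/usr/share/containers/seccomp.json"
    }

(daemon.json must be strict JSON, so the assumption about the seccomp path is called out here rather than in an inline comment.)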
17:00:22 <dustymabe> anyone else have something before we move on?
17:01:23 <dustymabe> ok will move to the cloud agents discussion we didn't finish last week
17:01:49 <dustymabe> I think the next one was
17:02:03 <dustymabe> #topic Cloud Agents: Openstack
17:02:09 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/68
17:02:23 <dustymabe> anything we need to discuss regarding openstack ?
17:03:13 <dustymabe> bgilbert: slowrie: ^^
17:03:42 <slowrie> I don't have anything directly on openstack specifically
17:04:00 <slowrie> I don't really know if they want an agent in newer openstack deployments
17:04:19 <dustymabe> slowrie: does that mean we are relatively safe with saying openstack won't need any specific cloud-agent and we can close the topic out ?
17:04:32 <bgilbert> I don't really know openstack
17:04:42 <dustymabe> I can try to ping someone who does have more info
17:04:53 <slowrie> dustymabe: we should probably track down someone who knows the platform better; to my knowledge that would be fine
17:04:59 <bgilbert> CL uses the EC2 OEM, which doesn't have an agent
17:05:06 <slowrie> But I don't know if there are any potentially weird things on, say, Ironic
17:05:33 <dustymabe> ok will ask an expert and see if we hear back
17:05:51 <dustymabe> #topic no cloud agents: packet
17:05:58 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/69
17:06:30 <slowrie> For packet in CL we currently have a systemd unit
17:06:44 <slowrie> but we have an issue filed in coreos-metadata to switch that functionality there
17:07:58 <dustymabe> #info for packet we are going to try to implement some functionality in coreos-metadata: see issue https://github.com/coreos/coreos-metadata/issues/120
17:08:06 <dustymabe> ack/nack ^^
17:08:12 <slowrie> ack
17:08:17 <bgilbert> ack
17:08:21 <walters> ack
17:08:25 <dustymabe> cool
17:08:27 <d_code> ack
17:08:37 <dustymabe> anything else to add before we move on?
17:08:53 <dustymabe> We'll continue with cloud agents for another 8 minutes and then open up for open floor
17:09:27 <slowrie> That should be it for packet
17:09:34 <slowrie> It's one of the lightest weight requirements for an agent
17:09:37 <dustymabe> that's good
17:09:47 <dustymabe> we discussed VMWare last time so i'll skip that one for now
17:09:53 <dustymabe> #topic no cloud agents: digitalocean
17:09:59 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/71
17:10:13 <bgilbert> so, no agents exactly
17:10:30 <bgilbert> but there's some networking code needed in the initramfs
17:11:08 <ajeddeloh> Ben should have some insight into that too when he starts
17:11:15 <bgilbert> or, more precisely: to set up networking, instances have to query parameters from a metadata service
17:11:17 <dustymabe> bgilbert: is that code upstream somewhere ?
17:11:27 <bgilbert> it's in coreos-metadata right now
17:11:35 <bgilbert> which writes networkd configs, so might need changes
17:11:37 * dustymabe wonders if we should add a digitalocean dracut module somewhere ?
17:11:46 <bgilbert> CL currently configures the real root from the initramfs, and fails to configure networking *in* the initramfs
17:12:04 <bgilbert> so on DO on CL, Ignition can't fetch external resources
17:12:42 <dustymabe> bgilbert: it looks like DHCP in DO is in limited preview right now
17:13:03 <dustymabe> but if that works and they open it up more broadly then we might be able to only use DHCP with no special bits
17:13:22 <bgilbert> I'm guessing that still won't help with private IPs
17:13:42 <bgilbert> since I think those are on the same NIC
17:13:46 <dustymabe> i.e. the DHCP server won't hand out a private IP ever ?
17:14:06 <dustymabe> which would make sense unless you are giving the instance multiple IPs I guess
17:14:17 <bgilbert> pretty sure DHCP can't hand out multiple IPs at a time?
17:14:17 <dustymabe> bgilbert: could you ask that question in the ticket ?
17:14:34 <bgilbert> sure
17:14:43 <dustymabe> right. so here's a question: in all the clouds where we use DHCP, how is this problem solved ?
17:14:59 <dustymabe> i.e. why would DHCP on DO be different?
17:14:59 <bgilbert> Packet has the same problem
17:15:15 <bgilbert> AWS has external NAT for the public IP
17:15:47 <dustymabe> I see
17:15:49 <bgilbert> not sure about the others offhand, but the clouds with remappable public IPs I'd guess are all external NAT
17:16:08 <dustymabe> but because DO gives the public IP directly to the instance, this is where the problem comes up
17:16:13 <bgilbert> yup, confirmed DO has private IP on same NIC
17:17:00 <bgilbert> just tried; GCE also has external NAT
17:17:25 <dustymabe> ok bgilbert do you mind adding this information to the ticket ?
17:17:38 <bgilbert> will do
17:17:55 <dustymabe> ryanq has been nice enough to answer our questions so far, so he'll probably be forthcoming with info
17:18:01 <walters> I'd also say, more broadly, that the larger-scale IaaS providers are kind of expecting you to use their load balancers
17:18:22 <dustymabe> #action bgilbert to followup on DHCP + private IP addresses problem in Digital Ocean card
17:18:32 <dustymabe> one solution could be to give the instance more than one NIC :)
17:18:40 <dustymabe> but that would have to be done by DO and not us
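[Editor's note: to illustrate the "private IP on the same NIC" point above, the systemd-networkd config that coreos-metadata writes on DO would look roughly like the sketch below, with the public and private addresses both assigned statically to one interface. The interface name and addresses here are made up for the example, not real DO values.]

    # hypothetical DO interface; names and addresses are illustrative only
    [Match]
    Name=eth0

    [Network]
    DNS=8.8.8.8

    # public address handed out by the metadata service
    [Address]
    Address=203.0.113.10/24

    # private address on the same NIC -- the part plain DHCP wouldn't cover
    [Address]
    Address=10.132.0.5/16

    [Route]
    Gateway=203.0.113.1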
17:18:51 <dustymabe> ok I'm going to move to open floor since we are nearing time
17:19:07 <dustymabe> will move to open floor in 30 seconds.. prepare any topic you want to bring up :)
17:19:36 <dustymabe> #topic open floor
17:19:57 <dustymabe> again I'd like to say welcome to d_code and kumarmn
17:20:18 <d_code> /wave
17:20:31 <dustymabe> d_code: want to tell us about yourself?
17:20:58 <walters> 👋
17:21:00 <d_code> sure
17:21:43 <d_code> I don't have any specific items to bring to vote or anything yet. I'm the lead developer for RockNSM, a network security monitoring platform currently built upon CentOS 7, SELinux enabled, and we try to harden as much as possible, because sensors have to be 100% trustworthy (or as close as possible).
17:22:13 <d_code> I'm looking to move to FCOS for the added security of read-only systems, segmented security of containers and ease of maintenance of a task-specific distribution.
17:22:33 <dustymabe> d_code: nice. welcome
17:22:34 <d_code> we also want to support a single-node, non-k8s version, and a version that scales to multiple hosts
17:23:12 <d_code> so...I'm looking forward to trying things and breaking them and hopefully submitting patches. I love the ignition provisioning approach for ease of maintenance and such.
17:23:43 <walters> cool, supporting those use cases is also important to me
17:23:44 <dustymabe> perfect
17:24:08 <dustymabe> nice to meet you d_code - look forward to seeing you around more
17:24:16 * ajeddeloh waves hello
17:24:18 <dustymabe> anyone else with anything for open floor ?
17:24:49 <d_code> :wave: ajeddeloh
17:24:58 <dustymabe> #info we did a v0.3.1 release of coreos-assembler: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/thread/NEQIQIA4BRSU7EWTB4Q6DM3KO5VAGPVM/
17:25:48 <dustymabe> #info we also landed a PR in master of coreos-assembler that allows us to run unprivileged in a kvm VM https://github.com/coreos/coreos-assembler/pull/190
17:26:09 <dustymabe> ^^ this enables building in unprivileged kubernetes/openshift pods
17:26:28 <dustymabe> ajeddeloh: any news from the ignition side of things?
17:26:52 <ajeddeloh> spec 2.3.0 will be coming out soon
17:27:08 <ajeddeloh> I think jlebon and I are close on the /var thing
17:27:21 <dustymabe> +1
17:27:37 <ajeddeloh> really excited for spec 3.0.0
17:27:43 <bgilbert> +1
17:27:53 <walters> cool
17:28:11 <dustymabe> nice work ajeddeloh pushing that along!
17:28:49 <kaeso> .hello lucab
17:28:50 <zodbot> kaeso: lucab 'Luca Bruno' <lucab@redhat.com>
17:28:56 <dustymabe> ok cool real quick before I close the meeting
17:29:03 * bgilbert waves at kaeso
17:29:06 <kaeso> (it turns out my calendar event is wrong)
17:29:14 <jdoss> .hello2
17:29:15 <zodbot> jdoss: jdoss 'Joe Doss' <joe@solidadmin.com>
17:29:23 <jdoss> kaeso: same here hah
17:29:25 <dustymabe> if anyone is interested in investigating the firewall topic please comment in the ticket https://github.com/coreos/fedora-coreos-tracker/issues/26
17:29:26 <rfairley> .hello rfairleyredhat
17:29:27 <zodbot> rfairley: rfairleyredhat 'Robert Fairley' <rfairley@redhat.com>
17:29:38 <bgilbert> :-P
17:29:58 * dustymabe welcomes all the wrong calendar event peeps
17:30:13 * jdoss waves
17:30:16 <dustymabe> FYI we stick to UTC, so the local meeting time moved
17:30:29 <dustymabe> that is, if your locale observes daylight saving time changes
17:30:42 <dustymabe> https://github.com/coreos/fedora-coreos-tracker/issues/57
17:30:49 <walters> everyone just joining please express your thoughts on the last hour in one minute before the meeting closes starting...*now*
17:30:59 <dustymabe> :)
17:31:04 <ajeddeloh> in related news: CA passed prop 7
17:31:06 <dustymabe> for anyone that needs to leave, please do so
17:31:17 <dustymabe> for anyone that just joined, we are in open floor
17:31:22 <dustymabe> anything you wanted to bring up today?
17:31:24 <ajeddeloh> so us californians might be getting rid of DST!
17:31:33 <dustymabe> ajeddeloh: nice!
17:31:39 <bgilbert> thanks all!
17:31:52 <jdoss> oh boy, I hope that happens in IL
17:32:04 <dustymabe> jdoss: rfairley kaeso - any topics for open floor?
17:32:17 <jdoss> Just caught up. I am good.
17:32:48 <kaeso> dustymabe: nothing in particular
17:33:03 <dustymabe> ok cool. will close out the meeting in one minute
17:33:57 <rfairley> ahh, sorry about that. I didn't update my event
17:34:10 <dustymabe> #endmeeting