16:31:00 <dustymabe> #startmeeting fedora_coreos_meeting
16:31:00 <zodbot> Meeting started Wed Jun 23 16:31:00 2021 UTC.
16:31:00 <zodbot> This meeting is logged and archived in a public location.
16:31:00 <zodbot> The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:31:00 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:31:00 <zodbot> The meeting name has been set to 'fedora_coreos_meeting'
16:31:05 <dustymabe> #topic roll call
16:31:08 <dustymabe> .hi
16:31:09 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
16:31:11 <skunkerk> .hello sohank2602
16:31:12 <zodbot> skunkerk: sohank2602 'Sohan Kunkerkar' <skunkerk@redhat.com>
16:31:32 <jbrooks> .hello jasonbrooks
16:31:33 <zodbot> jbrooks: jasonbrooks 'Jason Brooks' <jbrooks@redhat.com>
16:31:35 <cyberpear> .hi
16:31:36 <zodbot> cyberpear: cyberpear 'James Cassell' <fedoraproject@cyberpear.com>
16:31:50 <darkmuggle> .hello2
16:31:54 <zodbot> darkmuggle: darkmuggle 'None' <me@muggle.dev>
16:32:02 <lorbus> .hi
16:32:02 <zodbot> lorbus: lorbus 'Christian Glombek' <cglombek@redhat.com>
16:32:17 <travier> .hello siosm
16:32:18 <zodbot> travier: siosm 'Timothée Ravier' <travier@redhat.com>
16:32:31 <jaimelm> .hello2
16:32:32 <zodbot> jaimelm: jaimelm 'Jaime Magiera' <jaimelm@umich.edu>
16:32:34 <jlebon> .hello2
16:32:35 <zodbot> jlebon: jlebon 'None' <jonathan@jlebon.com>
16:32:48 <bgilbert> .hi
16:32:49 <zodbot> bgilbert: bgilbert 'Benjamin Gilbert' <bgilbert@backtick.net>
16:33:10 <miabbott_> .hello miabbott
16:33:11 <zodbot> miabbott_: miabbott 'Micah Abbott' <miabbott@redhat.com>
16:33:56 <jdoss> .hi
16:33:57 <zodbot> jdoss: jdoss 'Joe Doss' <joe@solidadmin.com>
16:34:01 <saqali> .hi
16:34:02 <zodbot> saqali: Sorry, but you don't exist
16:34:17 <dustymabe> #chair skunkerk jbrooks cyberpear darkmuggle lorbus travier jaimelm jlebon bgilbert miabbott jdoss saqali
16:34:17 <zodbot> Current chairs: bgilbert cyberpear darkmuggle dustymabe jaimelm jbrooks jdoss jlebon lorbus miabbott saqali skunkerk travier
16:34:19 <dustymabe> nice turnout!
16:34:22 <dustymabe> woot woot
16:34:34 <dustymabe> let me know if I missed anyone ^^
16:34:40 <dustymabe> #topic Action items from last meeting
16:34:46 <dustymabe> #link https://meetbot-raw.fedoraproject.org/teams/fedora_coreos_meeting/fedora_coreos_meeting.2021-06-16-16.30.txt
16:34:56 <dustymabe> #info no action items from last meeting
16:35:08 * dustymabe reminds himself to make jlebon assign more work to people
16:36:01 <dustymabe> we'll pick up where we left off last week
16:36:08 <dustymabe> #topic Support converting the live ISO into a minimal live ISO
16:36:10 <jlebon> :)
16:36:12 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/868
16:37:02 <jlebon> we chatted a bit more about this after the meeting
16:37:22 <jlebon> I think overall there is rough consensus to go ahead with that approach
16:37:47 <bgilbert> +1
16:38:11 <dustymabe> "that approach" being ship tooling to generate minimal iso and not actually ship it as a separate artifact ?
16:38:20 <jlebon> yes
16:38:56 <travier> +1
16:39:01 <dustymabe> #proposed we will ship tooling to generate a minimal ISO image for those who need it, but we won't ship it as a separate artifact to try to reduce confusion for users who might not understand the differences
16:39:08 <travier> +1
16:39:29 <jlebon> +1
16:39:44 <darkmuggle> +1
16:39:44 <dustymabe> +1 from me
16:39:47 <bgilbert> +1
16:40:00 <jdoss> +1
16:40:07 <dustymabe> #agreed we will ship tooling to generate a minimal ISO image for those who need it, but we won't ship it as a separate artifact to try to reduce confusion for users who might not understand the differences
16:40:23 <jlebon> great, now I get to clean up all the hacks in there :)
16:40:41 <jdoss> lucky you
16:40:47 <dustymabe> will move on to next topic soon unless discussion continues
16:40:53 <jlebon> +1
16:41:29 <dustymabe> #topic systemd-oomd for Fedora CoreOS
16:41:33 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/840
16:41:45 <dustymabe> jdoss: any luck testing out oomd + swaponzram this week?
16:42:02 <jdoss> We are testing ZRAM and systemd-oomd this week. Getting a few test cases.
16:42:40 <dustymabe> cool, so still waiting on results to come in
16:43:17 <travier> jdoss: thanks!
16:43:26 <jdoss> 1) base line RAM usage on our workload run for 24hr
16:43:33 <jdoss> 2) add in ZRAM and systemd-oomd and run for 24hr
16:43:35 <jdoss> 3) We have some code that has some bad memory leaks so we are going to load that and see what happens with systemd-oomd
16:44:02 <jdoss> all graphing into datadog. We should have some results for next week's meeting.
16:44:10 <dustymabe> haha. "we have some code that has some bad memory leaks" :)
16:44:32 <dustymabe> they've been letting you write too much code
16:44:45 <dustymabe> 🤓
16:44:55 <travier> /troll
16:44:55 <jdoss> https://usercontent.irccloud-cdn.com/file/FfinmpMG/image.png
16:45:03 <jdoss> This is not my code!
16:45:14 <dustymabe> should we try to make any preliminary decisions here? "if testing looks good, we enable in next stream for a period of time".. etc..
16:45:55 <dustymabe> this overlaps with the swaponzram discussion from last week as well, where the discussion started to turn towards whether we should do things for the single-node use case
16:46:06 <travier> Maybe we should couple the oomd change with the zram change
16:46:12 <dustymabe> travier: probably
16:46:19 <jdoss> If we leave this running for a couple days the server goes unresponsive. The test is "does systemd-oomd save us"
16:46:26 <dustymabe> jdoss: nice
16:46:44 <dustymabe> jdoss: without systemd-oomd+zram - does the kernel OOM ever kick in?
16:47:07 <jlebon> dustymabe: yeah, maybe let's sort out the "single node vs k8s" discussion first?
16:47:22 <dustymabe> jlebon: fair
16:47:22 <jlebon> feels like the output from that will naturally lead to easier decisions for the other ones
16:47:26 <jdoss> dustymabe: no, it ends up in this state where the system is still running but you can't do anything.
16:47:35 <dustymabe> jdoss: +1
16:47:44 <dustymabe> ok let's try something
16:48:25 <dustymabe> #proposed since oomd works better with swap, let's tie the swaponzram proposal and the oomd proposals together. If we do one, we do the other.
16:48:29 <dustymabe> travier: ^^
16:48:54 <jdoss> +1
16:49:34 <travier> +1
16:50:24 <dustymabe> anybody else have thoughts or votes on this one ^^
16:50:45 <jlebon> WFM
16:50:47 <darkmuggle> +1
16:50:49 <bgilbert> +1
16:50:58 <dustymabe> #agreed since oomd works better with swap, let's tie the swaponzram proposal and the oomd proposals together. If we do one, we do the other.
16:51:12 <dustymabe> ok jlebon let's try to tackle that higher level topic
16:51:38 <dustymabe> #topic make single node optimizations that don't enhance kubernetes
16:52:25 <dustymabe> so.. the oomd and swaponzram proposals don't really help k8s clusters because they (currently) don't support swap and, in the case of oomd, already have their own memory management schemes
16:52:40 <dustymabe> the question is.. do we still make the changes?
16:52:59 <jlebon> wrote my 2c on this in https://github.com/coreos/fedora-coreos-tracker/issues/859#issuecomment-862583609
16:53:08 <travier> We should probably only enable it by default on new installs, to not disturb existing k8s deployments
16:53:20 <dustymabe> The oomd and swaponzram stuff is easy to disable. I'm thinking we just write up docs for "integrating FCOS with k8s" and add this there
16:53:38 <jlebon> my TL;DR is: we should default to the single node case, and otherwise make sure we're easily configurable for k8s
16:53:43 <travier> +1 for k8s specific bit of docs, would help multiple projects
16:53:57 <dustymabe> jlebon: but with good docs for that? <- I think that part is key
16:54:11 <jdoss> jlebon: I deeply agree with what you wrote on that comment.
16:54:20 <travier> We definitely need the docs before the change happens
16:54:27 <jlebon> dustymabe: yeah, makes sense
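To make the "easy to disable" point above concrete, here is a minimal Butane sketch (illustrative only, not an agreed-upon or shipped config; it assumes the swap piece is implemented via zram-generator and the OOM piece via systemd-oomd, per the Fedora change proposals) that a k8s-focused deployment could use to opt out of both:

    # Illustrative opt-out sketch, not a shipped or recommended config.
    variant: fcos
    version: 1.3.0
    systemd:
      units:
        # Mask the userspace OOM killer on this node.
        - name: systemd-oomd.service
          mask: true
    storage:
      files:
        # zram-generator reads this file; a config defining no [zramN] section
        # means no zram swap device is created.
        - path: /etc/systemd/zram-generator.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # intentionally empty: disable swap-on-zram on this node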
16:54:31 <bgilbert> the mission statement (https://docs.fedoraproject.org/en-US/fedora-coreos/) says "optimized for Kubernetes but also great without it".
16:54:46 <jdoss> FCOS in the single node usecase is fantastic.
16:55:01 <bgilbert> I agree that configuring defaults for single node is closer to current reality though
16:55:02 <travier> bgilbert: good point
16:55:45 <dustymabe> bgilbert: yeah, if we do something like this we'd probably want to update that wording a bit
16:55:53 <bgilbert> dustymabe: yeah
16:56:13 <dustymabe> we *could* add some butane sugar for "kubernetes: true"
16:56:36 <travier> We will still be a heavily optimized OS for Kubernetes
16:56:43 <dustymabe> but then things get more "magical" and less transparent
16:56:47 <jbrooks> I agree w/ what jlebon wrote as well
16:57:13 <bgilbert> dustymabe: we'll then get into issues of which version of k8s
16:57:16 <jlebon> yeah, I don't see too much conflict with that statement, but probably worth rewording a bit
16:57:36 <dustymabe> probably can just remove the optimized word
16:57:45 <dustymabe> "great for Kubernetes but also great without it"
16:57:46 <bgilbert> dustymabe: i.e. "kubernetes: true" needs clear semantics
16:57:54 <jdoss> heavily optimized OS for "Container Workloads"
16:57:56 <dustymabe> bgilbert: yeah, probably a bad idea
16:58:15 <miabbott> +1 to jlebon's comment on the issue
16:58:23 <bgilbert> I do think the thing we're discussing is a meaningful policy shift, here
16:58:35 <walters> I said this last time but I think part of the problem here is we are not rigorously/reliably intersecting with kubernetes CI yet
16:58:35 <bgilbert> so far, our position has been that if k8s needs it, we do it
16:58:40 <darkmuggle> I think we have consensus following jlebon's comment
16:58:48 <bgilbert> darkmuggle: not entirely
16:59:22 <walters> (well not CI, but deployment)  A good micro example of this is https://github.com/coreos/afterburn/issues/509#issuecomment-866295230 where we could ship disabled-by-default code that most K8S would want, but is hard to say we should always do by default
16:59:49 <jlebon> bgilbert: we've never strictly done that though. e.g. we don't ship cri-o on purpose
17:00:13 <bgilbert> we don't ship cri-o because we _can't_, without knowing the k8s version
17:00:16 <bgilbert> otherwise we certainly would
17:00:21 <walters> (or, maybe we *do* do it by default...but that one takes us a bit farther from current Fedora baseline too, so another instance of the tension)
17:01:26 <jlebon> bgilbert: right, but that's clearly something one would expect if we were to strictly optimize for k8s. and e.g. have a clear stance on supported k8s versions etc
17:01:39 <jbrooks> It seems to me that to really do kubernetes well, there will have to be conf changes specific to the version and kube distro
17:01:55 <bgilbert> right.  I think it could make sense to reconsider our stance here, since the cri-o issue means we don't have a path to working with k8s out of the box
17:02:02 <jbrooks> And that's hard to do out of the box unless we're blessing a certain set of choices
17:02:17 <jlebon> bgilbert: +1
17:02:20 <dustymabe> right, I think part of the argument here (from my side) is that in order to run k8s on FCOS it has to be deployed on top anyway. So these config changes are just a few more knobs to turn
17:02:32 <bgilbert> I just want to make sure we're treating that change with the gravity it's due
17:02:59 <dustymabe> bgilbert: we could make these changes more "integrated" and respect k8s
17:03:08 <dustymabe> i.e. conflicting with kubelet.service or something
17:03:23 <bgilbert> dustymabe: oh, that's interesting
17:03:35 <miabbott> bgilbert: you make a good point and i think the reality of how we have been approaching FCOS development has been more focused towards single node use case.  a revision to the mission statement is probably overdue.
17:03:52 <travier> I agree with bgilbert which is why I think that we can not enable that for existing installs, only for new ones after a warning period
17:05:24 * dustymabe tries to figure out how to push forward :)
17:05:29 <jlebon> hmm, maybe part of the docs is publishing a butane config we maintain?
17:05:33 <bgilbert> I'd say we should have an explicit tracker ticket at least, maybe a coreos@l.fp.o post
17:05:42 <dustymabe> bgilbert: fair
17:05:52 <bgilbert> jlebon: well, that was essentially Dusty's point about sugar
17:06:15 <jlebon> bgilbert: right yeah, just without the sugar to make it more explicit :)
17:06:22 <dustymabe> bgilbert: slightly different - I think he was suggesting just having it in the docs
17:06:27 <travier> Hum, I don't think having an example Butane config as part of the docs and sugar are the same commitment
17:06:31 <bgilbert> but the sugar would evolve over time, which means there's a version-skew problem with Butane.  likewise with people copypasting configs.
17:06:43 <bgilbert> what about conditioning units on -e /etc/kubernetes or similar?
17:06:57 <bgilbert> then the docs are "create this file in /etc"
17:07:21 <bgilbert> (or "ensure this path exists", really)
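A hypothetical Butane sketch of that idea (the drop-in name and the /etc/kubernetes path are illustrative; if the condition shipped in the OS itself, only the drop-in contents would be needed):

    # Hypothetical: skip the single-node OOM-killer default when a k8s
    # installer has marked the node by creating /etc/kubernetes.
    variant: fcos
    version: 1.3.0
    systemd:
      units:
        - name: systemd-oomd.service
          dropins:
            - name: 10-skip-on-kubernetes.conf
              contents: |
                [Unit]
                # ConditionPathExists=! passes only when the path does NOT exist,
                # so the unit runs on single-node hosts and is skipped on k8s nodes.
                ConditionPathExists=!/etc/kubernetes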
17:07:24 <jlebon> hmm, I think it should just be driven by the k8s installation directly
17:07:33 <jlebon> what happens when k8s supports swap?
17:08:15 <jlebon> if it's in e.g. OKD, then they can opt into it by just flipping the bit in their config
17:08:29 <jlebon> at the same time they rebase to 1.22 or whatever supports swap
17:08:29 <bgilbert> is it possible for us to condition behavior on the k8s version?
17:08:55 <dustymabe> bgilbert: probably as long as it's consistent across k8s distros
17:09:00 <bgilbert> I guess we could always have users write 1.22 to /etc/k8s-version
17:09:03 <dustymabe> there is a ConditionExec or something like that
17:09:10 <bgilbert> "please give me behavior compatible with 1.22"
17:09:15 * dustymabe learned about it recently
17:09:23 <travier> I don't think we should introduce additional conditionals based on software we don't ship nor control
17:09:34 <bgilbert> dustymabe: k8s binaries might not exist on first boot, right?
17:09:45 <dustymabe> bgilbert: correct, they might get written later
17:09:51 <darkmuggle> I think that trying too hard to make FCOS work with unknown K8S stacks is driving in the wrong direction. The specific configuration that may be desirable is largely unknowable.
17:09:57 <jlebon> travier: yeah, agreed
17:10:15 <bgilbert> travier: right, but we could introduce our own API for "compatible with X"
17:10:29 <travier> bgilbert: but then do we test it?
17:10:34 <travier> because right now we don't
17:10:48 <bgilbert> if we don't do that, we're going to keep having one-off problems where we want to enable feature X but k8s doesn't support it before k8s version Y
17:10:49 <dustymabe> ok so where do we leave this discussion.. I at least need to create a ticket and probably a ML thread - do we pick up the discussion later? do we draw any conclusions from what we've discussed today?
17:10:59 <travier> and this feels like OKD's / Typhoon's job
17:11:11 <bgilbert> the lack of a way to do that was a major problem in the CL days btw
17:11:52 <jbrooks> +1 to OKD / typhoon's job
17:12:14 <bgilbert> dustymabe: ticket makes sense, and maybe we should move the versioning discussion there
17:12:33 <bgilbert> we can delegate the problem to OKD/typhoon but then we need a policy for how we make changes that gives them time to adapt
17:12:36 <dustymabe> bgilbert: any chance you want to create the ticket?
17:12:49 <travier> bgilbert: agree regarding the policy
17:13:00 <jaimelm> bgilbert: that's possible.
17:13:22 <darkmuggle> radical idea: have we considered working with OKD and Typhoon for them to "own" specific Butane sugar?
17:13:37 <travier> which is why I think we need a warning period and can not enable it on update
17:13:43 <bgilbert> dustymabe: I'd rather not be the one formally proposing this change but I'll participate
17:13:54 <jaimelm> It would involve us (OKD WG) rounding up devs committed to maintaining that.
17:14:27 <dustymabe> #action dustymabe to create ticket about making single node optimizations that don't enhance kubernetes and possible ways to integrate better with k8s distributions
17:14:35 <jaimelm> Or at the very least, folks knowledgeable and interested enough to participate at that level.
17:14:45 <bgilbert> darkmuggle: I think it's fine to do that, but I'm not sure Butane sugar is the best technical approach here
17:15:01 <jaimelm> create a ticket, I'll bring it to the OKD group for feedback.
17:15:06 <darkmuggle> I'm not sure it is either, tbh
17:15:25 <darkmuggle> But my argument is that sugar would give the upstreams an easy hook in
17:15:33 <dustymabe> moving on to next topic soon
17:15:41 <darkmuggle> we can't make FCOS work magically for all the Kube workloads
17:16:00 <dustymabe> #topic Differing behavior for aarch64 vs x86_64 disk images
17:16:04 <bgilbert> darkmuggle: OS PRs would actually be better for that, though.  Butane sugar doesn't evolve well because of Butane version skew
17:16:04 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/855
17:16:30 <dustymabe> it's been a few weeks. and there was some discussion in the ticket here
17:17:04 <dustymabe> do we want to try to come up with a proposal for how to move forward (or not move forward)?
17:17:06 <jlebon> not butane sugar, but I like the idea of a maintained butane config which also serves as documentation for modifications needed for k8s
17:17:41 <bgilbert> dustymabe: fwiw my view is still that this is a non-problem and we shouldn't take on compat constraints to solve it :-)
17:17:50 <jlebon> dustymabe: can you summarize the outcome so far?
17:18:10 <dustymabe> jlebon: I can try
17:19:05 <dustymabe> more or less I noticed that the aarch64 disk image isn't similar enough to the x86_64 disk image, so I ended up with different results when I applied my Ignition config (with partitioning specifics)
17:19:37 <dustymabe> as a dumb user I'm going to look at my new partition getting created as sda1 and worry that I messed something up
17:20:03 <dustymabe> bgilbert's points are that sda1 is a valid partition and partition numbers aren't linear on disk
17:20:16 <dustymabe> so it's not an invalid configuration
17:20:18 <bgilbert> and that partition numbers shouldn't actually matter
17:20:25 <dustymabe> correct
17:20:59 <dustymabe> my only other point is that it's much easier for me as a dev to remember things if they are similar across arches (barring s390x, the exception)
17:21:06 <travier> How about adding an empty sda1 partition for aarch64 only (leaving ppc & s390 as is)?
17:21:23 <dustymabe> at the very least we should create some documentation on the differences
17:21:31 <jaimelm> +1 documentation
17:21:38 <jaimelm> at least in the short term
17:22:03 <bgilbert> +1 docs, we could add the other arches to the storage page
17:22:04 <jaimelm> that would go a long way in helping the user in your scenario
17:22:09 <jlebon> dustymabe: i didn't know either that partition numbers could be out of order with the partition blocks, so i probably also would've been surprised :)
17:22:10 <bgilbert> looks like we correctly say "x86_64 partition table" there right now
17:22:22 <dustymabe> does ppc have an sda1 ?
17:23:06 <bgilbert> dustymabe: yes, but not a 2
17:23:10 <dustymabe> bgilbert: regarding the "partition numbers shouldn't actually matter" argument.. I think that same argument could be applied to the "why do we match sda4 on aarch64 and x86_64?"
17:23:37 <dustymabe> if they don't matter then why skip at all?
17:23:47 <bgilbert> dustymabe: unfortunately the root partition is a special case right now because of https://github.com/coreos/ignition/issues/1219
17:23:49 <dustymabe> I'm not suggesting we do otherwise, just playing the other side of the argument
17:24:20 <bgilbert> ideally, yes, we would never skip partition numbers
17:24:29 <jaimelm> timecheck: 6 minutes
17:24:50 <dustymabe> ok, let me flesh one other idea out that I proposed
17:24:53 <jlebon> sadly, there's a bunch of other scripts which also assume e.g. boot on 3 and root on 4
17:25:10 <dustymabe> "I guess one option could be to do away with number: 0. You read the docs, you tell it exactly what to do. No surprises."
17:25:33 <dustymabe> i.e. we no longer dynamically select partition number
17:25:59 <bgilbert> dustymabe: that's an option today for anyone who cares about the partition number
17:26:04 <dustymabe> correct
17:26:12 <dustymabe> ok so let me try to tie this off then
17:26:13 <bgilbert> I don't think it makes sense to force everyone down that path though
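For anyone who does care about the numbering, a sketch of that explicit approach in Butane (the device alias, partition number, and label are illustrative, not a recommended layout):

    # Sketch: pin the partition number and label explicitly so the result is
    # identical across architectures, then mount by label as the docs recommend.
    variant: fcos
    version: 1.3.0
    storage:
      disks:
        - device: /dev/disk/by-id/coreos-boot-disk   # boot-disk alias; substitute the actual device if needed
          wipe_table: false
          partitions:
            - number: 5        # explicit number instead of the dynamic "number: 0"
              label: var
              size_mib: 0      # 0 means use all remaining space
      filesystems:
        - device: /dev/disk/by-partlabel/var
          format: xfs
          path: /var
          with_mount_unit: true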
17:26:47 <dustymabe> is the consensus that we shouldn't try to add partition numbers for missing partitions on aarch64 and ppc64le ?
17:27:00 <dustymabe> and that we should just do documentation instead?
17:27:23 <walters> Bearing in mind that a good chunk of people are going to try this and fail and only then read the docs =)
17:27:53 <travier> I think we need documentation in any case but is there any particular reason we can not add an empty partition on aarch64 only?
17:27:57 <bgilbert> walters: I'm not so sure about "fail".  our docs examples all mount by label
17:28:09 <dustymabe> travier: if we add it on aarch64 I would want to do it for ppc64le
17:28:10 <travier> other arches are snowflakes anyway and "less used"
17:28:11 <bgilbert> travier: how is aarch64 special?
17:28:12 <jlebon> no strong opinions personally. i do agree with bgilbert we shouldn't add more constraints on ourselves without good enough reason
17:28:22 <travier> much more widespread?
17:28:29 <dustymabe> the only reason we can't on s390x is technical
17:28:43 <dustymabe> i.e. I'm suggesting that we do it everywhere we can
17:28:47 <bgilbert> travier: I'm not sure how to construct a coherent policy out of that, though?
17:29:12 <dustymabe> jlebon: is it a constraint?
17:29:15 <travier> making x86 & aarch64 the same is a bonus, and document the special arches which are special anyway?
17:29:32 <bgilbert> btw, any partition table changes should probably be reflected in Butane's templates, which introduces some versioning issues
17:29:33 <jlebon> dustymabe: we're solidifying the table, when ideally we'd be going the other way
17:30:02 <dustymabe> ehh, it's already that way
17:30:18 <jlebon> i.e. ideally we'd just use labels everywhere
17:30:19 <travier> Would adding an empty sda1 only on aarch64 change the Butane template?
17:30:27 <bgilbert> travier: yes
17:30:33 <travier> hum
17:30:39 <dustymabe> hmm, yeah I don't understand that
17:30:58 <bgilbert> otherwise, if the user enables boot RAID, they'll end up with a table without sda1
17:31:25 <bgilbert> and if they also ask for a separate /var, it'll go in sda1
17:31:56 <travier> But that won't break existing working configs, so that should be fine
17:32:14 <travier> adding an empty sda1 would not break previous template?
17:32:27 <dustymabe> FTR we don't ship any other arch than x86_64
17:32:28 <bgilbert> if we change the template in place, and someone does actually care about partition numbers, it'll change their numbering
17:32:32 <jlebon> another way to look at this is that Ignition is just wrapping sgdisk here. users would hit the same "issue" if they ran sgdisk directly
17:32:37 <travier> (I don't know if we can add an empty partition with GPT on aarch64)
17:32:41 <dustymabe> so now is the time to change this stuff
17:32:44 <bgilbert> if we add a new template, then we have to get people to type "aarch64v2" or something
17:33:06 <bgilbert> dustymabe: RHCOS does
17:33:31 <bgilbert> (not aarch64 though)
17:33:52 <dustymabe> ok we're over time
17:34:02 <dustymabe> I guess we'll go with docs?
17:34:04 <travier> We're at the right time where we can still change the aarch64 template but not the other ones
17:34:14 <bgilbert> travier: that's true
17:34:21 <jaimelm> docs
17:34:47 <travier> I think we need to document snowflakes (ppc & s390) and try to "fix" aarch64 with a free workaround (empty sda1 partition)
17:35:02 <jaimelm> heh
17:35:30 <dustymabe> yeah. maybe we can tie this off at our next video meeting
17:35:39 <jlebon> +1
17:35:41 <dustymabe> sorry it's dragging out
17:35:49 <dustymabe> #topic open floor
17:36:01 <dustymabe> sorry this ran long and no real time for open floor
17:36:12 <dustymabe> anything real quick?
17:36:22 <dustymabe> #info cgroups v2 is now fully in FCOS
17:36:40 <jlebon> 🎉
17:36:55 <dustymabe> Please do think about the other changes we have staged (nftables, dnfcountme) and make sure they are on track for going in on their respective schedules
17:37:05 <dustymabe> any others maybe I missed?
17:37:24 <dustymabe> #info i've started making one off tickets for f35 changes that needed more discussion
17:37:46 <jlebon> i think bgilbert mentioned also we're ready to error out re. rootfs spacing
17:37:52 <bgilbert> yup
17:38:02 <dustymabe> will end the meeting in 30s unless discussion continues
17:38:04 <dustymabe> jlebon: +1
17:38:32 <dustymabe> #endmeeting