15:00:04 <pknirsch> #startmeeting Fedora Base Design Working Group (2014-10-10)
15:00:04 <zodbot> Meeting started Fri Oct 10 15:00:04 2014 UTC.  The chair is pknirsch. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:12 <pknirsch> #meetingname  Fedora Base Design Working Group
15:00:12 <zodbot> The meeting name has been set to 'fedora_base_design_working_group'
15:00:18 <pknirsch> hello and welcome everyone!
15:00:28 <nphilipp> Hi
15:00:29 <dazo> hi!
15:00:39 <jreznik> hi pknirsch & all :)
15:01:11 <haraldh> <-
15:01:21 <pknirsch> #chair jreznik dgilmore-bne masta haraldh vpavlin msekleta
15:01:21 <zodbot> Current chairs: dgilmore-bne haraldh jreznik masta msekleta pknirsch vpavlin
15:01:55 <msekleta> hi everyone!
15:02:28 * jreznik has to leave in about half hour today
15:02:34 <pknirsch> okidokie, let's jump into the first topic
15:03:00 <pknirsch> #topic Update buildrequires cleanup work (davids & nils!)
15:03:37 <pknirsch> So one nice thing: May I introduce nphilipp? Some of you might already know him, but he's going to help us with the buildrequires cleanup work together with davids
15:04:24 <pknirsch> I had a chat with him about it a few days ago and asked him to work with you, dazo, on looking at the whole project, splitting up the work and seeing where to take it
15:05:15 <dazo> wonderful!
15:05:20 <nphilipp> dazo: hey
15:05:33 <dazo> hey, nphilipp!
15:05:41 <pknirsch> the more the merrier i always say :)
15:07:04 <pknirsch> dazo, did you have any chance this week to take a look at moben's work yet?
15:09:17 <dazo> A little bit ... I'm pondering who will maintain the code, and how we will include that code in rpmdevtools .... as additional tarball/files, or should things be merged completely into rpmdevtools and such
15:09:29 * pknirsch nods
15:09:54 <pknirsch> a git clone is easy to do, but i agree, we need to find a place where several people can contribute etc.
15:09:55 <dazo> does anyone know if moben will still be involved?
15:10:01 <pknirsch> maybe a clone on github?
15:10:04 * pknirsch isn't sure
15:10:32 <dazo> I don't want to "steal" his project if he wants to still drive it
15:11:04 <dazo> I can mail him and get some clarifications here
15:11:27 <pknirsch> well, moben is definitely busy with his studies these days. We talked about it before he left and he was perfectly fine with someone doing a clone and continuing, but yeah, an email to him as an introduction might be a good idea :)
15:11:52 <nphilipp> I figured that (studies)
15:12:11 * dazo mails
15:12:23 <pknirsch> cheers!
15:12:55 <pknirsch> I'll let you guys figure it out then how to proceed :)
15:13:05 <dazo> perfect :)
15:13:55 <nphilipp> So, I've pondered a bit about how to use moben's tool, and had a call yesterday with Marcella and a few others (stacks & Envs WG) who are interested in this as well
15:15:24 <nphilipp> The idea was to conduct a first "run" of it on e.g. the base or minimal package set to get a result set, so we can see how many resources it eats up and what we get out of it.
15:16:29 <nphilipp> I'd need to scare up a machine internally for that, should be a beefier one so it doesn't take longer than necessary.
15:17:17 <dazo> beefier hardware .... that's something I discovered was needed too when running some smaller tests .... those iterations take some time
15:17:29 * pknirsch nods
15:18:57 <pknirsch> we do have some nice machines in house, but using them on a permanent basis might be tricky.
15:19:17 <nphilipp> nice is good, permanent is better
15:19:18 <pknirsch> One idea i had was to use one of the P8s we get sometime soon here in STR
15:19:45 <pknirsch> That one should be beefy enough for the job (160 cpus with 128g iirc)
15:19:50 <dazo> get openstack on them? so we can push them to the limits when we need ... and others can do the same when they're idle ;-)
15:19:56 <pknirsch> hm
15:20:10 <pknirsch> that's actually a very good idea too. don't we have an internal openstack instance?
15:20:30 <pknirsch> That would also get around the global yum lock problem
15:20:46 <pknirsch> just spin up X guests instead of running the script multiple times on the same host
15:20:53 <dazo> most likely ... but I've not played with it .... just swallowed the sales propaganda from my colleagues at the office here in Oslo ;-)
15:21:02 <nphilipp> hah!
15:21:10 <pknirsch> :)
15:21:33 <nphilipp> would need some tooling to spin up a container to run the tool inside, rather than only the tool
15:21:43 <pknirsch> hm
15:21:49 <pknirsch> DOCKER DOCKER DOCKER!
15:21:50 <pknirsch> :)
15:21:53 <nphilipp> the question is if we need to use mock then
15:22:01 <dazo> hehe
15:22:08 * nphilipp whacks pknirsch with a limp trout
15:22:15 <pknirsch> :)
15:22:26 <dazo> mock does some nice caching of downloaded packages ... saves a base image on a regular basis to speed things up
15:22:43 <dazo> so that helps on the iterations for each package
15:22:51 <nphilipp> doesn't help if we spin up containers and tear them down after using them
15:23:11 <dazo> so it would be one container per package, or something like that?
15:23:11 <pknirsch> maybe build a container with a full cache?
15:23:18 <dazo> yeah
15:23:20 <nphilipp> yep
15:23:53 <nphilipp> ideally we don't want to pull packages over the network more than once, and probably (for the initial run) don't want to operate on a "moving" set of packages
15:24:13 <dazo> agreed
15:24:19 * pknirsch nods
15:24:39 <nphilipp> so perhaps download everything once, and use that per invocation (regardless of whether that's in a container, or mock using a custom yum repo)
15:24:57 <nphilipp> for the initial, "bulk" run
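[Editorial aside: the "download everything once, reuse it per invocation" idea discussed above could be expressed as a mock chroot config pointing at a local, frozen repo. This is only a sketch; the paths, repo id and chroot name are hypothetical, and `config_opts` is stubbed here so the fragment is self-contained (mock normally predefines it in its config files):]

```python
# Sketch of a mock chroot config (normally /etc/mock/<name>.cfg) that
# resolves all packages from a pre-downloaded local snapshot, so nothing
# is fetched over the network more than once between iterations.
config_opts = {}  # mock predefines this dict; stubbed for self-containment

config_opts['root'] = 'fedora-21-local'          # hypothetical chroot name
config_opts['target_arch'] = 'x86_64'
config_opts['yum.conf'] = """\
[main]
cachedir=/var/cache/yum/

[local-f21]
name=Local Fedora 21 snapshot
baseurl=file:///srv/repos/f21-snapshot/
enabled=1
"""
```

Each per-package iteration would then reuse the same frozen package set instead of operating on a "moving" set of packages.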
15:25:46 <pknirsch> mhm
15:25:57 <pknirsch> or have a repo container that all others access?
15:26:10 <pknirsch> anyway, i'll leave the details to you guys :)
15:26:11 <dazo> that's what I was thinking
15:26:15 <dazo> :)
15:27:39 <nphilipp> If we containerize the stuff early (docker, openstack, what have you), we'll be able to move it from one test machine to another more easily. We'd want some space to store image configurations and actual images though.
15:27:50 <pknirsch> aye
15:28:03 <dazo> yupp
15:28:16 <nphilipp> Not in "much space", but rather somewhere accessible.
15:28:29 <nphilipp> So we're back to needing something permanently :)
15:28:40 <nphilipp> Even if that needn't be so beefy.
15:28:55 <pknirsch> that should be easier
15:29:08 <dazo> that just needs to be some drives on a storage solution close enough to the beefy boxes
15:29:11 <pknirsch> "worst" case we can always use the storage here in STR
15:29:46 <nphilipp> As soon as we talk production, this needs to go somewhere it can be reached from Fedora Infra.
15:29:53 <pknirsch> yea
15:29:58 <dazo> yup
15:30:29 <nphilipp> So right now -- as it's only dazo and I working on it -- RH internal is okay
15:31:05 <dazo> let's shape up a PoC ... and we can move that to Fedora infra as soon as it works decently and provides the expected results
15:31:13 <nphilipp> yep
15:31:33 <nphilipp> when we have results, that'll probably be the point
15:31:39 <dazo> +1
15:32:05 <nphilipp> so we can tell people: here's what we did (results) and how we did it (tool(s), recipes and images)
15:32:45 * dazo nods
15:33:06 <nphilipp> By then we should also have some idea of if and how we want to do this on an ongoing basis. I have some ideas about that, but this would go too far into detail for this meeting.
15:33:17 <nphilipp> (we already used up half the time)
15:33:19 <pknirsch> :)
15:33:23 <nphilipp> :D
15:33:30 <pknirsch> ok, i'll leave you guys to figure it out offline then
15:33:40 <pknirsch> connection made, mission accomplished ;)
15:33:56 * jreznik has to leave today
15:34:26 <pknirsch> so let's move on to the Docker topic quickly; not sure though if vpavlin is here, or whether dgilmore-bne or msekleta have any news there.
15:34:47 <pknirsch> #topic Update Alpha base image
15:35:08 <msekleta> pknirsch, it seems that for F21 we will not be able to run systemd inside the docker containers
15:35:26 <pknirsch> msekleta: darn :/
15:35:53 <msekleta> pknirsch, I tried to hack around the issues, but Lennart and I figured we can't do that and we'd better fix things properly
15:36:34 <msekleta> the major showstopper is that docker doesn't mount a tmpfs on /run, which is a requirement for running systemd inside
15:37:49 <msekleta> there is a pull request for docker on GH to make it possible to configure docker so that it will mount a tmpfs on /run inside the container
15:38:22 <pknirsch> oki, cool
15:38:30 <pknirsch> but that won't make it in time for F21?
15:39:06 <msekleta> #link https://github.com/docker/docker/pull/8478
15:39:44 <msekleta> pknirsch, well that depends on docker upstream; as soon as the patch is in, we can proceed to the next steps
15:40:29 <msekleta> but there was an attempt to merge such a patch previously and it failed
15:40:34 <msekleta> so we'll see now
15:40:38 <pknirsch> ok
15:42:04 <msekleta> other than that there are a couple more issues
15:42:47 <haraldh> commented on https://github.com/docker/docker/pull/8478   :-))
15:43:05 <pknirsch> haha haraldh :)
15:43:18 <msekleta> I mentioned earlier that there is a problem with dbus inside the container; I figured out what is wrong.
15:43:55 <msekleta> there is a problem in general with options (man 5 systemd.exec) for services
15:45:38 <msekleta> the two most visible so far are PrivateTmp and, in the case of dbus, OOMScoreAdjust; if I disable those with a drop-in I can boot up a "non-privileged" container with apache and mariadb inside without issues
15:46:39 <msekleta> well, I also had to add conditions to a couple of .mount units so we don't try to mount them if we don't have the capability CAP_SYS_ADMIN
15:47:40 <msekleta> so to summarize my findings
15:48:15 <msekleta> a) get the "/run on tmpfs" patch into docker upstream, and backport it to Fedora
15:49:07 <msekleta> b) add ConditionCapability=CAP_SYS_ADMIN to .mount units we ship by default in systemd package (this needs to be discussed on systemd-devel first)
15:50:04 <msekleta> c) disable OOMScoreAdjust and PrivateTmp options in unit files inside the container
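[Editorial aside: to make items b) and c) concrete, a rough sketch of what the unit changes could look like. The file names are illustrative, not the actual patches; note that in systemd unit syntax an empty assignment resets an option like OOMScoreAdjust to its default:]

```ini
# b) condition added to a .mount unit shipped in the systemd package
#    (e.g. tmp.mount), so it is skipped without CAP_SYS_ADMIN:
[Unit]
ConditionCapability=CAP_SYS_ADMIN

# c) a drop-in baked into the container image, e.g.
#    /etc/systemd/system/dbus.service.d/container.conf (hypothetical path):
[Service]
OOMScoreAdjust=
PrivateTmp=no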
15:50:42 <pknirsch> Thanks for the investigation and summary, msekleta
15:51:09 <pknirsch> So what's the plan for the next weeks until F21 development and changes end, then? Get all 3 done if possible, I guess?
15:51:10 <msekleta> as for the last point, I don't know exactly how we should achieve this; we could have a special generator which just disables those options, or patch systemd itself
15:51:55 <msekleta> however I think the latter will not be received well by Lennart
15:52:02 <pknirsch> mhm
15:52:14 <pknirsch> have you talked with him yet about these issues?
15:53:10 <msekleta> not yet; there is a systemd hackfest next week so I was hoping to get this fixed then
15:53:57 <msekleta> so to answer your question, b and c should hopefully be fixed next week
15:54:08 <msekleta> a) I have no idea
15:54:44 <msekleta> also I don't think that patching docker downstream is viable
15:55:41 <msekleta> that would break the promise that you can run a docker container everywhere, because containers running systemd would require a patched docker
15:56:21 <haraldh> why not have global config options "OOMScoreAdjust=no" and "PrivateTmp=no"?
15:57:59 <msekleta> haraldh, that is an option as well; we already introduced a couple of global options, mostly related to timers, which can be overridden by unit configuration
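[Editorial aside: the existing timer-related global options msekleta mentions live in /etc/systemd/system.conf. haraldh's proposal would follow the same pattern; the two "Default" options named below did not exist at the time and are purely hypothetical:]

```ini
# /etc/systemd/system.conf
[Manager]
# an existing global default, overridable per unit:
DefaultTimerAccuracySec=1min

# the proposed additions might look something like this (hypothetical):
DefaultOOMScoreAdjust=
DefaultPrivateTmp=no
```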
15:58:50 <msekleta> we can discuss this next week
15:59:27 <pknirsch> sounds good. keep us posted on the hackfest results and what came out of that msekleta
16:00:22 <msekleta> sure
16:01:26 <pknirsch> alright. anything else on Docker for this week?
16:01:39 <pknirsch> (not that that wasn't plenty already ;)
16:02:03 <msekleta> yes, one more thing, but that is something for F22 really
16:02:22 <msekleta> split of udev's hardware database
16:03:40 <msekleta> I played with weak dependencies and dnf, and I managed to move hwdb to a subpackage so it can be uninstalled when building a container image
16:04:37 <pknirsch> cool :)
16:04:38 <msekleta> but there is a problem: DNF doesn't consult its own history, and an update of systemd will pull the hwdb subpackage back in
16:04:50 <pknirsch> ha :)
16:05:35 <msekleta> because the main systemd package has Recommends: systemd-hwdb and the depsolver (libsolv) tries to fulfill Recommends: dependencies by default
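[Editorial aside: in spec and config terms, the mechanics msekleta describes look roughly like the following. The `install_weak_deps` switch shown below arrived in later dnf releases, so treating it as available here is an assumption:]

```ini
# In systemd.spec, the weak dependency is just a tag; the solver
# fulfills it by default at install time:
#   Recommends: systemd-hwdb

# dnf.conf switch that tells the solver to skip weak deps entirely
# (an assumption here -- it appeared in later dnf releases):
[main]
install_weak_deps=False
```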
16:06:50 <msekleta> I discussed this with the DNF people and they haven't made up their minds yet about what DNF should do on package update when a recommended package was explicitly removed
16:07:04 <pknirsch> hm
16:07:34 <pknirsch> recommended was the weak form of the new weak deps, right? (aka, the optional one)
16:07:39 <msekleta> so they now default to what libsolv does, which in turn is what zypp on SUSE does
16:07:49 <pknirsch> hm
16:08:13 <pknirsch> that's probably a good idea, as otherwise we'd have different semantics for Fedora and the new tags than SUSE has
16:08:18 <pknirsch> which would be really awful
16:08:43 <msekleta> and ffesti said that the SUSE guys told him their users are satisfied with libsolv's current behavior, which I don't like
16:10:03 <haraldh> hmm, I don't think "Recommends" should be acted on during updates
16:10:34 <haraldh> If I install the minimal set of packages, I don't want it to grow
16:10:54 <msekleta> exactly
16:10:58 <pknirsch> well, there are 2 types of weak deps, right?
16:11:08 <haraldh> Even though I never removed any package in the history, because I installed the minimal set in the first place
16:11:08 <pknirsch> one was recommends, the other one was, er, suggests?
16:11:34 <haraldh> hmm
16:11:36 <pknirsch> and one of them was supposed to be installed by default at the yum/dnf/zypper level
16:11:46 <pknirsch> but i can't remember which one it was
16:11:50 <msekleta> recommends
16:11:56 <pknirsch> yea
16:11:59 <haraldh> whatever, but it should not be pulled in on updates
16:12:16 <pknirsch> so you'd either need to change the dep then to the weaker version or figure out something else imho
16:12:36 <haraldh> It's fine for me to do it on the initial install
16:12:47 <haraldh> if I do e.g. yum install foo
16:12:57 <haraldh> but not on yum update foo
16:12:58 <pknirsch> ah, yea, now i get what you mean
16:13:13 <msekleta> well, the problem with the weaker one is that it is too weak
16:13:16 <pknirsch> is the recommends a new dep in the new package?
16:13:32 <msekleta> so on systemd installation hwdb will not be pulled in
16:13:52 <msekleta> so we'll have to hack around this in comps, which is awful
16:13:59 <pknirsch> well
16:14:40 <pknirsch> semantically definitely both behaviors can be considered correct
16:14:44 <pknirsch> depending on the use case
16:14:58 <pknirsch> (pulling in recommends on updates or not)
16:16:55 <pknirsch> though i suspect that should be configurable, too. Maybe that would be something to bring up with the dnf team
16:17:24 <msekleta> anyway, currently we have hwdb in the container for udev, which doesn't even run there, and there is no nice way to get rid of it even when using weak deps
16:17:35 <pknirsch> right
16:17:46 <pknirsch> It'll need to be a weak dep, one way or the other :)
16:18:06 <msekleta> I will file a bug against DNF requesting that Recommends is not honored on updates
16:18:20 <pknirsch> sounds good.
16:19:08 <rdieter> (peanut gallery comment) it could be argued that Recommends should be treated differently on upgrades, depending on whether this is the first occurrence of the Recommends or not
16:19:35 <rdieter> foo-1 (without recommends) upgrades to foo-2 (with recommends)
16:19:42 * pknirsch nods
16:19:46 <rdieter> imo, the recommends should install in that case
16:19:50 <pknirsch> thats what i've been pondering too, rdieter
16:20:07 <pknirsch> if it first appears, treat it like an install
16:20:33 <msekleta> rdieter, that was my initial plan to propose this
16:20:35 <rdieter> pknirsch: good, sounds like we agree on that.
16:20:36 <pknirsch> otherwise, if the recommended package wasn't installed, don't install it on the update
16:21:16 <pknirsch> as that was then a conscious decision of the user to not install that package despite it being recommended
16:21:17 <msekleta> but if foo-1 recommends something I removed and foo-2 recommends the same thing, then dnf should not pull that thing in again
16:21:25 <pknirsch> aye
16:22:11 <msekleta> however what haraldh was proposing (a minimal, non-growing package set) makes sense too
16:23:05 <pknirsch> that should be doable with a global flag though, and iirc SUSE has something like that too, where for both weak deps you can specify what the default action should be, and where you can disable installing them entirely.
16:24:20 <msekleta> pknirsch, ok, so I will propose the other variant then, so that explicitly removed packages will not be pulled back in by Recommends:
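[Editorial aside: the update semantics rdieter, pknirsch and msekleta converge on above can be sketched as a small decision function. This is not dnf code, just a model of the proposed rule: a recommended package is pulled in on update only when the Recommends is newly introduced, and never when the user explicitly removed it:]

```python
# Model of the proposed Recommends-on-update rule (not dnf's actual logic):
# treat a *newly appearing* Recommends like a fresh install, but respect
# the user's explicit removals.

def should_install_recommend(pkg, old_recommends, new_recommends,
                             user_removed, installed):
    """Decide whether recommended package `pkg` is installed on update."""
    if pkg in user_removed:      # conscious user decision wins
        return False
    if pkg in installed:         # already present, nothing to do
        return False
    # install only if the Recommends first appears in the updated package
    return pkg in new_recommends and pkg not in old_recommends
```

So foo-1 (without the Recommends) upgrading to foo-2 (with it) pulls the package in, while foo-1 → foo-2 both recommending something the user removed leaves it out.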
16:24:32 * pknirsch nods
16:25:53 <pknirsch> Alright, but let's wrap up for today; great discussions on buildrequires and the docker stuff today!
16:27:04 <haraldh> cool
16:27:09 <haraldh> have a nice weekend
16:27:14 * haraldh goes running now
16:27:18 <pknirsch> thanks everyone for joining today! weekend time for the EMEA folks :)
16:27:22 <pknirsch> have fun haraldh :)
16:27:24 <haraldh> it's getting dark :-/
16:27:27 <pknirsch> yea
16:27:31 <pknirsch> Winter is coming...
16:27:34 <pknirsch> :)
16:27:49 <msekleta> have a great weekend everyone
16:28:06 <pknirsch> yep, have a great weekend everyone, cya next week again
16:28:10 <pknirsch> #endmeeting