15:00:04 #startmeeting Fedora Base Design Working Group (2014-10-10)
15:00:04 Meeting started Fri Oct 10 15:00:04 2014 UTC. The chair is pknirsch. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:12 #meetingname Fedora Base Design Working Group
15:00:12 The meeting name has been set to 'fedora_base_design_working_group'
15:00:18 hello and welcome everyone!
15:00:28 Hi
15:00:29 hi!
15:00:39 hi pknirsch & all :)
15:01:11 <-
15:01:21 #chair jreznik dgilmore-bne masta haraldh vpavlin msekleta
15:01:21 Current chairs: dgilmore-bne haraldh jreznik masta msekleta pknirsch vpavlin
15:01:55 hi everyone!
15:02:28 * jreznik has to leave in about half an hour today
15:02:34 okidokie, lets jump into the first topic
15:03:00 #topic Update buildrequires cleanup work (davids & nils!)
15:03:37 So one nice thing: may i introduce nphilipp? Some might already know him, but he's going to help us with the buildrequires cleanup work together with davids
15:04:24 I had a chat with him a few days ago about it and asked him to work with you, dazo, on looking at the whole project, splitting up the work and seeing where to take it
15:05:15 wonderful!
15:05:20 dazo: hey
15:05:33 hey, nphilipp!
15:05:41 the more the merrier i always say :)
15:07:04 dazo, did you have any chance this week to take a look at moben's work yet?
15:09:17 A little bit ... I'm pondering who will maintain the code, and how we will include that code into rpmdevtools .... as an additional tarball/files, or should things be merged completely into rpmdevtools and such
15:09:29 * pknirsch nods
15:09:54 a git clone is easy to do, but i agree, we need to find a place where several people can contribute etc.
15:09:55 does anyone know if moben will still be involved?
15:10:01 maybe a clone on github?
15:10:04 * pknirsch isn't sure
15:10:32 I don't want to "steal" his project if he wants to still drive it
15:11:04 I can mail him and get some clarifications here
15:11:27 well, moben is definitely busy with studies these days and we've talked about it before he left and he was perfectly fine with someone doing a clone and continuing, but yea, an email to him as an introduction might be a good idea :)
15:11:52 I figured that (studies)
15:12:11 * dazo mails
15:12:23 cheers!
15:12:55 I'll let you guys figure out how to proceed then :)
15:13:05 perfect :)
15:13:55 So, I've pondered a bit about how to use moben's tool, and had a call yesterday with Marcella and a few others (Stacks & Envs WG) who are interested in this as well
15:15:24 The idea was to conduct a first "run" of it on e.g. the base or minimal package set to get a result set, so we can see how much resources it eats up and what we get out of it.
15:16:29 I'd need to scare up a machine internally for that, should be a beefier one so it doesn't take longer than necessary.
15:17:17 beefier hardware .... that's something I discovered was needed too when running some smaller tests .... those iterations take some time
15:17:29 * pknirsch nods
15:18:57 we do have some nice machines in house, but using them on a permanent basis might be tricky.
15:19:17 nice is good, permanent is better
15:19:18 One idea i had was to use one of the P8s we get sometime soon here in STR
15:19:45 That one should be beefy enough for the job (160 cpus with 128g iirc)
15:19:50 get openstack on them? so we can push them to the limits when we need ... and others can do the same when they're idle ;-)
15:19:56 hm
15:20:10 thats actually a very good idea too. don't we have an internal openstack instance?
15:20:30 That would also get around the global yum lock problem
15:20:46 just spin up X guests instead of running the script multiple times on the same host
15:20:53 most likely ... but I've not played with it .... just swallowed the sales propaganda from my colleagues at the office here in Oslo ;-)
15:21:02 hah!
15:21:10 :)
15:21:33 would need some tooling to spin up a container to run the tool inside, rather than only the tool
15:21:43 hm
15:21:49 DOCKER DOCKER DOCKER!
15:21:50 :)
15:21:53 the question is whether we need to use mock then
15:22:01 hehe
15:22:08 * nphilipp whacks pknirsch with a limp trout
15:22:15 :)
15:22:26 mock does some nice caching of downloaded packages ... saves a base image on a regular basis to speed things up
15:22:43 so that helps on the iterations for each package
15:22:51 doesn't help if we spin up containers and tear them down after using them
15:23:11 so it would be one container per package, or something like that?
15:23:11 maybe build a container with a full cache?
15:23:18 yeah
15:23:20 yep
15:23:53 ideally we don't want to pull packages over the network more than once, and probably (for the initial run) don't want to operate on a "moving" set of packages
15:24:13 agreed
15:24:19 * pknirsch nods
15:24:39 so perhaps download everything once, and use that per invocation (regardless of whether that's in a container, or mock using a custom yum repo)
15:24:57 for the initial, "bulk" run
15:25:46 mhm
15:25:57 or have a repo container that all others access?
15:26:10 anyway, i'll leave the details to you guys :)
15:26:11 that's what I was thinking
15:26:15 :)
15:27:39 If we containerize the stuff early (docker, openstack, what have you), we'll be able to move it from one test machine to another more easily. We'd want some space to store image configurations and actual images though.
15:27:50 aye
15:28:03 yupp
15:28:16 Not "much space", but rather somewhere accessible.
15:28:29 So we're back to needing something permanently :)
15:28:40 Even if that needn't be so beefy.
15:28:55 that should be easier
15:29:08 that just needs to be some drives on a storage solution close enough to the beefy boxes
15:29:11 "worst" case we can always use the storage here in STR
15:29:46 As soon as we talk production, this needs to go somewhere it can be reached from Fedora Infra.
15:29:53 yea
15:29:58 yup
15:30:29 So right now -- as it's only dazo and I working on it -- RH internal is okay
15:31:05 lets shape up a PoC ... and we can move that to Fedora infra as soon as it works decently and provides some expected results
15:31:13 yep
15:31:33 when we have results, that'll probably be the point
15:31:39 +1
15:32:05 so we can tell people: here's what we did (results) and how we did it (tool(s), recipes and images)
15:32:45 * dazo nods
15:33:06 By then we should also have some idea if and how we want to do this on an ongoing basis. I have some ideas about that, but this would go too far into detail for this meeting.
15:33:17 (we already used up half the time)
15:33:19 :)
15:33:23 :D
15:33:30 ok, i'll leave you guys to figure it out offline then
15:33:40 connection made, mission accomplished ;)
15:33:56 * jreznik has to leave today
15:34:26 so lets move on to the Docker topic quickly, not sure though if vpavlin is here or dgilmore-bne or msekleta has any news there.
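A rough sketch of the "download once, run per invocation" idea discussed above. The paths, the repo id, the package list and the use of mock --rebuild as a stand-in for moben's analysis tool are all illustrative assumptions, not the actual setup:

    # snapshot the package set once, so every iteration runs against
    # a frozen, local mirror instead of the network
    reposync --repoid=fedora --download_path=/srv/snapshot
    createrepo /srv/snapshot/fedora

    # a mock config ("local-snapshot", assumed here) pointing only at the
    # local repo; then one invocation per package from a prepared list
    while read -r srpm; do
        mock -r local-snapshot --rebuild "$srpm"
    done < srpm-list.txt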
15:34:47 #topic Update Alpha base image
15:35:08 pknirsch, it seems that for F21 we will not be able to run systemd inside the docker containers
15:35:26 msekleta: darn :/
15:35:53 pknirsch, I tried to hack around the issues but with lennart we figured we can't do that and we'd better fix things properly
15:36:34 the major showstopper is that docker doesn't mount a tmpfs on /run, which is a requirement for running systemd inside
15:37:49 there is a pull request for docker on GH to make it possible to configure docker to mount a tmpfs on /run inside the container
15:38:22 oki, cool
15:38:30 but that won't make it in time for F21?
15:39:06 #link https://github.com/docker/docker/pull/8478
15:39:44 pknirsch, well that depends on docker upstream, as soon as the patch is in, we can proceed to the next steps
15:40:29 but there was an attempt to merge such a patch previously and it failed
15:40:34 so we'll see now
15:40:38 ok
15:42:04 other than that there are a couple more issues
15:42:47 commented on https://github.com/docker/docker/pull/8478 :-))
15:43:05 haha haraldh :)
15:43:18 I mentioned earlier that there is a problem with dbus inside the container, I figured out what is wrong.
15:43:55 there is a problem in general with options (man 5 systemd.exec) for services
15:45:38 the two most visible so far are PrivateTmp and, in the case of dbus, OOMScoreAdjust; if I disable those with a drop-in I can boot up a "non-privileged" container with apache and mariadb inside without issues
15:46:39 well, I also had to add conditions to a couple of .mount units so we don't try to mount them if we don't have the capability CAP_SYS_ADMIN
15:47:40 so to summarize my findings
15:48:15 a) get the "/run on tmpfs" patch into docker upstream, and backport it to Fedora
15:49:07 b) add ConditionCapability=CAP_SYS_ADMIN to the .mount units we ship by default in the systemd package (this needs to be discussed on systemd-devel first)
15:50:04 c) disable the OOMScoreAdjust and PrivateTmp options in unit files inside the container
15:50:42 Thanks for the investigation and summary, msekleta
15:51:09 So what's the plan for the next weeks until F21 development and changes end then? get all 3 done if possible i guess?
15:51:10 as for the last point I don't know how exactly we should achieve this, we can have a special generator which just disables those options, or patch systemd itself
15:51:55 however I think that the latter will not be received well by Lennart
15:52:02 mhm
15:52:14 have you talked with him yet about these issues?
15:53:10 not yet, there is a systemd hackfest next week so I was hoping to get this fixed then
15:53:57 so to answer your question, b and c should hopefully be fixed next week
15:54:08 a) I have no idea
15:54:44 also I don't think that patching docker downstream is viable
15:55:41 that would break the promise that you can run a docker container everywhere, because containers running systemd would require a patched docker
15:56:21 why not have a global config option "OOMScoreAdjust=no" "PrivateTmp=no"
15:57:59 haraldh, that is an option as well, we already introduced a couple of global options, mostly related to timers, which can be overridden by unit configuration
15:58:50 we can discuss this next week
15:59:27 sounds good. keep us posted on the hackfest results and what came out of that msekleta
16:00:22 sure
16:01:26 alright. anything else on Docker for this week?
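For reference, findings b) and c) above map to unit-file snippets roughly like these. This is a sketch: tmp.mount, httpd.service and the drop-in path are illustrative choices, and it assumes an empty assignment resets OOMScoreAdjust to its default:

    # b) make a shipped .mount unit a no-op when CAP_SYS_ADMIN is missing,
    # e.g. in tmp.mount:
    [Unit]
    ConditionCapability=CAP_SYS_ADMIN

    # c) a per-service drop-in clearing the problematic execution options
    # inside the container, e.g.
    # /etc/systemd/system/httpd.service.d/container.conf:
    [Service]
    PrivateTmp=no
    OOMScoreAdjust=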
16:01:39 (not that it was plenty already ;)
16:02:03 yes one more thing, but that is something for F22 really
16:02:22 split of udev's hardware database
16:03:40 I played with weak dependencies and dnf and I managed to move hwdb to a subpackage so it can be uninstalled when building a container image
16:04:37 cool :)
16:04:38 but there is a problem: DNF doesn't consult its own history and an update of systemd will pull in the hwdb subpackage again
16:04:50 ha :)
16:05:35 because the main systemd package has Recommends: systemd-hwdb and the depsolver (libsolv) tries to fulfill Recommends: dependencies by default
16:06:50 I discussed it with the DNF people and they haven't made up their minds yet what DNF should do on a package update when a recommended package was explicitly removed
16:07:04 hm
16:07:34 recommends was the weak form of the new weak deps, right? (aka, the optional one)
16:07:39 so they now default to what libsolv does, which in turn is what zypp on SUSE does
16:07:49 hm
16:08:13 thats probably a good idea, as otherwise we'll have different semantics for Fedora and the new tags than SUSE has
16:08:18 which would be really awful
16:08:43 and ffesti said that the SUSE guys told him that their users are satisfied with libsolv's current behavior, which I don't like
16:10:03 hmm, I don't think "Recommends" should be executed on updates
16:10:34 If I install the minimal set of packages, I don't want it to grow
16:10:54 exactly
16:10:58 well, there are 2 types of weak deps, right?
16:11:08 Even though I never removed any package in the history, because I installed the minimal set in the first place
16:11:08 one was recommends, the other one was, erh, suggests?
16:11:34 hmm
16:11:36 and one of them was, on a yum/dnf/zypper level, supposed to be installed by default
16:11:46 but i can't remember which one it was
16:11:50 recommends
16:11:56 yea
16:11:59 whatever, but if should not be pulled in on updates
16:12:06 s/if/it
16:12:16 so you'd either need to change the dep then to the weaker version or figure out something else imho
16:12:36 It's fine for me to do it on the initial install
16:12:47 if I do e.g. yum install foo
16:12:57 but not on yum update foo
16:12:58 ah, yea, now i get what you mean
16:13:13 well, the problem with the weaker one is that it is too weak
16:13:16 is the recommends a new dep in the new package?
16:13:32 so on systemd installation hwdb would not be pulled in
16:13:52 so we'll have to hack around this in comps, which is awful
16:13:59 well
16:14:40 semantically definitely both behaviors can be considered correct
16:14:44 depending on the use case
16:14:58 (pulling in recommends on updates or not)
16:16:55 though i suspect that should be configurable, too. Maybe that would be something to bring up with the dnf team
16:17:24 anyway, currently we have hwdb in the container for udev, which doesn't even run there, and there is no nice way to get rid of it even when using weak deps
16:17:35 right
16:17:46 It'll need to be a weak dep, one way or the other :)
16:18:06 I will file a bug against DNF requesting that Recommends is not honored on updates
16:18:20 sounds good.
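To make the hwdb case concrete, the setup and the behavior being debated look roughly like this. The spec fragment and subpackage name are illustrative sketches of the split, not the actual systemd packaging, and the install_weak_deps knob assumes a dnf version that exposes that option:

    # illustrative systemd.spec fragment: hwdb split into a subpackage,
    # wired up from the main package via a weak dependency
    Recommends: systemd-hwdb

    %package hwdb
    Summary: Hardware database files for udev

    # the behavior discussed above, as commands:
    dnf remove systemd-hwdb    # trim the container image
    dnf update systemd         # ...but the update pulls systemd-hwdb back in

    # weak deps can also be ignored wholesale, e.g. when composing an image:
    dnf --setopt=install_weak_deps=False install systemd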
16:19:08 (peanut gallery comment) it could be argued that Recommends should be treated differently on upgrades, depending on whether this is the first occurrence of the Recommends or not
16:19:35 foo-1 (without recommends) upgrades to foo-2 (with recommends)
16:19:42 * pknirsch nods
16:19:46 imo, the recommends should install in that case
16:19:50 thats what i've been pondering too, rdieter
16:20:07 if it first appears, treat it like an install
16:20:33 rdieter, that was my initial plan, to propose this
16:20:35 pknirsch: good, sounds like we agree on that.
16:20:36 otherwise, if the recommended package wasn't installed, don't install it on the update
16:21:16 as that was then a conscious decision of the user to not install that package despite it being recommended
16:21:17 but if foo-1 recommends something I removed and foo-2 recommends the same thing too, then dnf should not pull that thing in again
16:21:25 aye
16:22:11 however what haraldh was proposing (a minimal, non-growing package set) makes sense too
16:23:05 that should be doable with a global flag though, and iirc SUSE has something like that too, where for both weak deps you can specify what the default action should be and where you can disable installation of them for both.
16:24:20 pknirsch, ok, so I will propose the other variant then, so that explicitly removed packages will not be pulled back by Recommends:
16:24:32 * pknirsch nods
16:25:53 Alright, but lets wrap up for today, great discussions on buildrequires and docker stuff today!
16:27:04 cool
16:27:09 have a nice weekend
16:27:14 * haraldh goes running now
16:27:18 thanks everyone for joining today! weekend time for the EMEA folks :)
16:27:22 have fun haraldh :)
16:27:24 it's getting dark :-/
16:27:27 yea
16:27:31 Winter is coming...
16:27:34 :)
16:27:49 have a great weekend everyone
16:28:06 yep, have a great weekend everyone, cya next week again
16:28:10 #endmeeting