15:02:04 #startmeeting RDO meeting (2016-01-13)
15:02:04 Meeting started Wed Jan 13 15:02:04 2016 UTC. The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:04 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:02:04 The meeting name has been set to 'rdo_meeting_(2016-01-13)'
15:02:10 \o/
15:02:15 Yo
15:02:15 o/
15:02:20 #chair apevec trown chandankumar jschlueter rbowen imcsk8 jruzicka
15:02:20 Current chairs: apevec chandankumar imcsk8 jruzicka jschlueter number80 rbowen trown
15:02:22 o/
15:02:31 o/
15:02:43 excellent
15:02:47 agenda:
15:02:50 https://etherpad.openstack.org/p/RDO-Packaging
15:02:53 o/
15:02:57 o/
15:03:16 #chair dmsimard
15:03:16 Current chairs: apevec chandankumar dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:03:17 #chair dmsimard trown
15:03:17 Current chairs: apevec chandankumar dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:03:32 I moved the rdoinfo item to later to leave time for Jakub to finish his call
15:03:46 ack
15:03:52 #topic murano/mistral unresponsive maintainer
15:04:23 degorenko: ^
15:04:25 Daniil has not answered for 6 weeks, so I suggest we continue the reviews and have him as a co-maintainer if he pings back
15:04:42 number80, i can help with client package reviews.
15:04:44 that's a good question - do we need a formal unresponsive-maintainer procedure like in Fedora?
15:05:02 apevec: why not, as long as it's lightweight :)
15:05:10 for murano/mistral
15:05:22 I just want to make sure that we have murano/mistral ready for the third test day
15:05:30 yeah, email to rdo-list CC maintainer ?
15:05:39 i can help testing it
15:05:39 number80, chandankumar, I have completed the magnum package installations, and am now trying to configure them. We have created the database for magnum, and granted all privileges for root on the IP address that RDO is linked to. What should be the next steps?
15:05:49 and a timeout of 1 week - also the maintainer can always come back!
15:05:56 sidx64_Cern_, a meeting is going on.
15:06:02 #action number80 email RDO list + Daniil in CC about murano/mistral w/ 1 week timeout
15:06:10 ack
15:06:15 oops sorry.
15:06:19 sidx64_Cern_: np
15:06:22 sorry guys, i am in another meeting, what's wrong with murano?
15:06:52 degorenko: Daniil hasn't answered package reviews for 6 weeks, so we're thinking about allowing someone else to take over
15:07:05 of course, he'd always be welcome to jump in later
15:07:06 number80, Daniil Trishkin?
15:07:09 yup
15:07:24 i will ping him :)
15:07:33 thx degorenko
15:07:41 degorenko: excellent
15:07:53 are there any problems here?
15:07:58 or just review comments?
15:08:09 degorenko: fixing the packages and moving on w/ the reviews
15:08:22 ok, thank you for your attention
15:08:51 degorenko: could you give me (or Daniil) feedback within a week?
15:08:58 #undo
15:08:58 Removing item from minutes: ACTION by number80 at 15:06:02 : number80 email RDO list + Daniil in CC about murano/mistral w/ 1 week timeout
15:09:21 #chair degorenko
15:09:21 Current chairs: apevec chandankumar degorenko dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:09:33 number80, sure, i will pay attention to this activity
15:09:38 #action degorenko contact Daniil about murano/mistral
15:09:47 thanks, much appreciated
15:10:08 I think we can move on to the next topic
15:10:22 #topic dynamic UID/GID for openstack services
15:10:41 EmilienM reported an issue w/ epmd stealing cinder's UID/GID
15:10:46 #chair EmilienM
15:10:46 Current chairs: EmilienM apevec chandankumar degorenko dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:10:58 yeah that's a race
15:11:14 in RPM installation order
15:11:24 but cinder doesn't have a reserved uid/gid right ?
15:11:29 it does
15:11:43 but it's not actually reserved until cinder is installed
15:11:44 hmm, I ctrl+f'd cinder in the list of reserved uids and didn't see it
15:11:45 yeah, like all openstack services
15:11:58 Tricky topic, because the erlang-erts package is also correct packaging-wise (it shouldn't have taken a UID/GID under 201) but some services also require packages external to RDO/CentOS/Fedora, like Hadoop
15:12:02 nova and glance and others had one, not cinder, I'll re-check
15:12:28 number80, wait, how come it took cinder's id, which is <200 ??
15:12:33 apevec: theoretically the -r switch is supposed to ensure that we don't take a UID under 201
15:12:50 apevec: I dunno, I considered it a bug in coreutils
15:13:10 uhoh
15:13:19 Having cinder (and other services) get their ID automatically can mean broken multi-node setups when different controller nodes get different cinder UIDs
15:13:24 #link https://bugzilla.redhat.com/show_bug.cgi?id=1297580
15:13:35 it also raises the question of UID/GID reservation; RDO is neither in Fedora nor RHEL, so there's no reference database of UIDs for new services
15:13:54 yeah, we should go dynamic
15:14:08 static ids are actively discouraged in Fedora
15:14:30 jpena, that should be fixed otherwise
15:14:33 not on the RPM level
15:14:41 like a central account system
15:14:48 or pre-create users via the installation tool
15:14:56 why is there an issue if cinder has a different uid/gid on different nodes?
15:15:06 jschlueter, if you have NFS shares
15:15:16 jschlueter: NFS, or when you use LDAP to hold UID/GID for you
15:15:20 number80: what is that page for the reserved uid/gid ? Can't seem to find it
15:15:28 dmsimard: just a sec
15:15:51 https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:15:58 apevec: ahh, if so then reserved ids are required to keep things in sync ... otherwise you are relying on arbitrary ordering to keep the house of cards together
15:16:17 Okay, my bad, cinder is indeed in there. Thought it wasn't.
15:16:19 the list used by Fedora (RHEL uses this w/ some tweaks too)
15:16:27 #link > https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:16:30 #link https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:16:48 #undo
15:16:49 Removing item from minutes:
15:16:49 #undo
15:16:49 Removing item from minutes:
15:16:54 dmsimard: not required, meetbot does this w/ lines starting w/ links :)
15:17:12 #info reserved UIDs in Fedora https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:17:15 number80: oh really, didn't know that
15:17:34 smart bot
15:17:41 so what do we do?
15:17:43 trown: what's your opinion from the RDO Manager PoV?
15:18:14 I am pretty unopinionated... it seems like both options have flaws
15:18:20 apevec: I know that some component maintainers are reluctant about the idea of dynamic UID/GID
15:18:50 but if that means we keep static UID/GID, we still have to decide how to reserve them
15:19:03 I think dynamic would require some pretty major patch to tripleo
15:19:10 no, let's go dynamic for all new packages
15:19:24 for currently allocated ones, we keep that
15:19:32 trown, why?
15:19:33 wfm
15:19:52 apevec: ensuring that services get the same UID across multiple nodes
15:19:58 number80, but what about the cinder bug at hand?
15:20:17 trown, I think neutron has already been dynamic for some time
15:20:29 after the quantum rename iirc
15:20:32 apevec: it's a heisenbug, it could be documented for the very rare occasions it happens
15:20:36 yup
15:20:46 (for neutron)
15:20:51 apevec: ya, probably not an issue except for things using shared filesystems
15:21:41 EmilienM, jpena, can't users be pre-created by puppet?
15:21:42 apevec: if dynamic does not cause the need for some new installer machinery to coordinate it, then I am all for it
15:21:51 no
15:22:00 we don't want to manage POSIX resources
15:22:05 because it overlaps with packaging
15:22:24 and some distros don't have the same UID/GID
15:22:30 right
15:22:31 apevec: if we want them to have the same ID on different machines, we are back to square one. Either we use a pre-assigned UID, or we do it on one machine, then get the UID and use it elsewhere (ugly)
15:22:47 we'll never manage users/gid/uid with puppet
15:22:49 Unlike Debian and derivatives, we don't have a single authority to manage that
15:23:08 we don't ?
15:23:10 i'm sorry, in cinder's case which package is taking the cinder user's UID?
15:23:14 number80, what's that authority in Debian?
15:23:22 jpena: apevec, maybe the compromise is dynamic, unless there is some need for static (cinder)
15:23:28 apevec: Debian itself
15:23:30 some modules create the user/group but we are in the process of dropping that
15:23:31 imcsk8, yeah see https://bugzilla.redhat.com/show_bug.cgi?id=1297580
15:23:44 trown: but that doesn't prevent packages doing it dynamically from taking the static UIDs
15:23:55 ugh... yeah
15:24:06 jpena, uids under 200 should be reserved
15:24:16 as number80 said, that might be a coreutils bug
15:24:47 I think we should stick to apevec's proposal for the moment and try pinging the Fedora and RHEL setup maintainers to see if they can't coordinate on UID/GID
15:24:50 mmm that's right
15:25:12 (the latter being best effort, as I'm not optimistic we'll reach agreement)
15:25:48 number80, maybe CC the setup maintainer on that bz, maybe he's had similar bugs reported?
15:25:58 apevec: good idea
15:26:10 #action number80 ping RHEL setup maintainer
15:26:34 .whoowns setup
15:26:34 apevec: ovasik
15:26:39 so do we agree on status quo + dynamic UID/GID by default?
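[Editor's note] The static-vs-dynamic trade-off being discussed follows the standard Fedora users-and-groups packaging pattern. A minimal sketch of both variants as RPM %pre scriptlets — this is a hypothetical fragment for illustration, not an actual RDO spec file; `newservice` is an invented name, while cinder's 165 is its reserved UID/GID from the setup package's uidgid file linked above:

```spec
# Dynamic allocation (the default agreed for new services):
# -r picks a free UID/GID from the system range, so the number
# may differ from node to node.
%pre
getent group newservice >/dev/null || groupadd -r newservice
getent passwd newservice >/dev/null || \
    useradd -r -g newservice -d /var/lib/newservice -s /sbin/nologin \
    -c "OpenStack newservice Daemon" newservice

# Static allocation (kept for existing services such as cinder,
# UID/GID 165 reserved in the setup package's uidgid file):
%pre
getent group cinder >/dev/null || groupadd -r -g 165 cinder
getent passwd cinder >/dev/null || \
    useradd -r -u 165 -g cinder -d /var/lib/cinder -s /sbin/nologin \
    -c "OpenStack Cinder Daemon" cinder
```

The race EmilienM hit arises because a static ID is only claimed once that package's scriptlet runs: if another package's `-r` user unexpectedly lands on 165 first, cinder's `useradd -u 165` then fails.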
15:26:53 *for new services
15:27:08 ack
15:27:09 I don't disagree with jpena's approach
15:27:09 ++ for all new things being dynamic
15:27:22 of uid/gid being static only for services that logically require it
15:27:28 rest dynamic
15:27:50 #agreed keep using reserved UID/GID for existing services and use dynamic UID/GID for new services by default
15:28:29 #info Fedora uid guidelines https://fedoraproject.org/wiki/Packaging:UsersAndGroups
15:28:38 dmsimard: I know that the Nova and Cinder devs are reluctant to make the change, so basically it'll be the same as apevec's proposal in practice :)
15:29:40 sorry guys for being lost, Denis pinged me :) I just had a lot of work and forgot about the reviews, tomorrow I'll have time for that
15:29:55 dtrishkin: excellent :)
15:30:47 action item resolved during the meeting, that's agile!
15:30:50 I suggest that we move to the next topic, we have a lot of items today (my fault /o\)
15:31:06 yeah, move on
15:31:13 #topic trim changelog entries that are older than 2 years
15:31:19 raised by kashyap
15:31:25 https://review.gerrithub.io/#/c/258590/
15:31:33 I abandoned that one :)
15:31:48 see review comments for details
15:32:01 I'm fine w/ having it in rdo-rpm-macros but I'm unopinionated
15:32:17 BTW one more link re. uids, it's in the upstream docs http://docs.openstack.org/icehouse/install-guide/install/yum/content/reserved_uids.html
15:32:25 #undo
15:32:25 Removing item from minutes:
15:32:36 http://docs.openstack.org/icehouse/install-guide/install/yum/content/reserved_uids.html
15:32:54 oops, icehouse :)
15:32:54 (to keep the summary consistent, restoring current topic)
15:33:42 #topic trim changelog entries that are older than 2 years
15:33:50 https://review.gerrithub.io/#/c/258590/
15:33:58 so, your call?
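[Editor's note] The trimming being discussed can lean on rpm's built-in `%_changelog_trimtime` macro, which makes rpmbuild drop %changelog entries older than a given Unix timestamp at build time. A hypothetical sketch of what the rdo-rpm-macros addition might look like — this is not the actual content of review 258590:

```spec
# Sketch for an rdo-rpm-macros macro file: trim %changelog entries
# older than two years at build time. The cut-off is computed
# relative to the build date.
%_changelog_trimtime %(date +%s -d "2 years ago")
```

Undefined single-character sequences like `%s` pass through rpm's macro expansion untouched, so the shell expansion receives the intended `date` format string.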
15:34:32 if nobody disagrees, by default I'll follow kashyap's suggestion and update rdo-rpm-macros
15:34:58 so new packages will have their changelogs trimmed (no rebuilds for existing packages)
15:35:05 ack, although with the current archiving to ChangeLog.old we'll never reach those 2 years :)
15:35:23 if openstack releases keep coming more frequently, that is
15:35:32 yes, but it's a small change so I don't mind
15:36:01 kashyap, BTW is there any other package using that macro?
15:37:08 nm, let's just add it and move on
15:37:18 #agreed add to rdo-rpm-macros: trim changelog entries that are older than 2 years
15:37:30 #topic missing deps from EPEL required by RDO Manager
15:37:39 https://trello.com/c/6uXVuQcC/115-rebuild-epel-packages-required-for-rdo-manager
15:37:41 trown, what's left?
15:37:49 apevec: just ceph at this point
15:37:59 #info good news, we don't need dkms in RDO repositories :)
15:38:26 uhm, ceph is a big one and I wouldn't like to maintain it in RDO
15:38:37 number80, any replies from the Storage SIG ?
15:38:45 images being built in CI do not have EPEL installed on them either
15:38:50 I contacted fcami who will maintain ceph for the Storage SIG, he was looking for cloud instances to do more testing (pointed him to OS1)
15:39:03 so it's work in progress
15:39:06 Why is pulling Ceph from its official repositories a problem ?
15:39:17 I'll add a trello card to follow up on the effort w/ fcami
15:39:41 Ceph is already mirrored, why do we want to mirror it elsewhere ?
15:39:50 dmsimard: is it?
15:39:52 (Trying to understand the problem we're trying to solve)
15:40:12 http://docs.ceph.com/docs/master/install/get-packages/
15:41:04 dmsimard: our release package can't have a dependency on a release package that is not in centos-extras (or any other blessed repo)
15:41:45 apevec: number80, while we wait on the Storage SIG, is it better to pull packages from the ceph repos or EPEL for the CI images?
15:41:51 the alternative would indeed be to have a ceph release package in extras
15:42:09 trown: good question
15:42:17 number80: ok, well, that's a lot of work duplication for something that already exists. We should find a way to work around that.
15:42:27 I'm not sure about the ceph repos
15:42:38 dmsimard: well, if the Storage SIG maintains the package, little work for us
15:42:39 how are they built?
15:42:55 apevec: what do you mean, how are they built ?
15:43:28 I mean, are those reliable production builds, not done under someone's desk :)
15:43:44 also not hacked :)
15:43:48 all I know is they gloriously use rdopkg to manage their packages
15:43:58 sweet
15:44:01 ah, that's a good sign
15:44:03 Hum, those are the official binaries, I don't know - we could ask the ceph folks
15:44:57 I think I will use EPEL just for the ceph packages for now... since that is how things have been done thus far
15:45:01 dmsimard: tried to ping ktdreyer but no answer (well, he could be on PTO)
15:46:17 proposal: continue the discussion w/ the ceph folks and see what's our best option
15:46:30 trown, ack
15:46:53 EPEL for now, until the centos Storage SIG has released ceph builds
15:46:56 #agreed temporarily use the EPEL ceph package until we find a better alternative w/ the ceph folks or the Storage SIG
15:47:43 #topic CI promote job - To Khaleesi or not to Khaleesi
15:47:49 this is poetry, this must be dmsimard
15:48:08 no, this is not from me even though I have a strong opinion on it
15:48:10 :)
15:48:29 who added it?
15:48:33 dunno
15:48:38 trown: ?
15:48:45 trown: is that you? ^
15:48:54 my take: for packstack, let's move tempest to the upstream gate and include that in RDO CI via weirdo !
15:49:02 ah yes, that is me
15:49:03 trown, YOU!?
15:49:09 apevec: yeah
15:49:12 as far as packstack is concerned
15:49:31 packstack jobs for RDO will eventually no longer use khaleesi
15:49:47 trown, so what's the alternative for rdo-m CI ?
15:49:48 as we've made the necessary changes in packstack to make it able to gate against itself
15:49:59 good news
15:50:01 background: I have been fighting with the liberty promote job since getting back from break... and the feedback time from khaleesi is too long
15:50:20 once packstack gates against itself upstream, it'll land far more stable in delorean, and weirdo will pick up the gate jobs, improving stability further
15:50:34 trown, use tripleoci for rdo-m ?
15:50:55 apevec: I would propose using my quickstart for rdo-m... tripleoci is a bit unworkable outside of their environment
15:51:27 trown, sounds good
15:51:47 I have some PoC jobs up for liberty and mitaka using it, but I need to get it moved to redhat-openstack, and set up gerrit and jjb
15:51:48 trown: I'm not super familiar with the differences between tripleo and rdo-m. Would picking up the tripleo CI jobs through weirdo be relevant ?
15:51:53 trown: wfm
15:52:07 but I wanted to propose it before doing all of that
15:52:17 trown, speaking of that, can you restart the discussion w/ the centos infra folks about mirroring your quickstart images somewhere with more BW ?
15:52:37 apevec: ya, saw that thread, I will send something to ci-users today
15:52:41 thanks
15:52:45 * number80 reminds everyone that the clock is ticking and we still have the rdoinfo discussion
15:52:53 trown: an #agreed ?
15:53:05 +1 for quickstart
15:53:13 if there are no objections to putting tripleo-quickstart on redhat-openstack, I think we can move on
15:53:17 at this point we're troubleshooting khaleesi more than we are rdo-m
15:53:32 I'm always for fewer layers, except for cakes
15:54:02 We can move on to the next topic, I've prepared my speech so we can just run past it
15:54:04 #agreed put tripleo-quickstart on redhat-openstack
15:54:10 number80, you can't comprehend more than 7 :)
15:54:18 *nods*
15:54:29 #topic an update on WeIRDO
15:54:33 dmsimard: the stage is yours
15:54:35 So, just an update on WeIRDO for the record and for the benefit of everyone :)
15:54:40 For those not yet aware of what WeIRDO is about, I've posted about it on the mailing list
15:54:40 https://github.com/redhat-openstack/weirdo
15:54:45 #link https://www.redhat.com/archives/rdo-list/2015-December/msg00043.html
15:54:54 So far I've been working on getting the necessary upstream commits into puppet-openstack and kolla in order to consume their CI jobs outside of the gate, with different repositories and other required things
15:55:02 The general idea being that WeIRDO runs their CI jobs on the delorean repositories (as part of the promotion pipeline) and we shift both puppet-openstack and kolla CI to delorean current-passed-ci.
15:55:11 I am almost ready to merge a review to vastly improve debugging and logging to troubleshoot potential failures, and after that I'll hook the repository customization into weirdo.
15:55:18 Once repository customization is done, we can consider putting the weirdo jobs in the promotion pipeline.
15:55:22 done :P
15:55:25 excellent
15:55:41 great stuff!
15:55:49 At some point, can we get some content on the website about it too, so that people can jump in? Thanks.
15:55:52 great
15:56:00 rbowen: sure thing
15:56:11 next topic
15:56:17 weirdo is already set up to use gerrithub, and weirdo already gates against itself (though non-voting)
15:56:31 awesome, +1 to weirdo in the promote pipeline
15:56:37 reviews are here: https://review.gerrithub.io/#/q/project:redhat-openstack/weirdo
15:56:45 #info weirdo is already set up to use gerrithub and weirdo already gates against itself (though non-voting)
15:56:51 * rbowen contemplated WeIRDO t-shirts for summit ...
15:57:01 ok, next topic :)
15:57:06 #topic Upcoming events
15:57:08 rbowen:
15:57:27 rbowen, why not underwear?
15:57:28 With Mitaka 2 due this week, we have two events coming
15:57:49 Doc day, Jan 20-21. See the open issues list at https://github.com/redhat-openstack/website/issues
15:58:12 #info Doc day, Jan 20-21
15:58:12 Test day - http://rdoproject.org/testday - we can still use a lot of help getting the test cases consumable by people who aren't already experts.
15:58:33 #info test day Mitaka-2 - Jan 27-28
15:58:34 And then, of course, there's the RDO day at FOSDEM, which is apparently well oversold.
15:58:50 awesome news
15:58:50 Looking forward to seeing some of you there.
15:58:58 And some/others of you at DevConf.cz
15:59:09 Oh, and I'm planning to get an RDO meetup room at DevConf
15:59:18 I posted to the mailing list about that, and no response so far.
15:59:41 I'm probably going to reserve the hour right after jruzicka's rdopkg talk.
15:59:51 rbowen: haven't answered yet, but count me in
16:00:02 Unless I hear that people prefer a morning slot.
16:00:17 rbowen: I prefer afternoon, we're speaking BeerNo
16:00:24 Yes, that was exactly my thought
16:00:31 I don't (want to) do mornings.
16:00:35 oh, as for RDO day
16:00:41 I didn't manage to register in time
16:00:43 is all hope lost? :)
16:00:47 We'll sneak you in.
16:00:52 \o/
16:00:58 jruzicka: we need you
16:01:05 I expect to be there early, and I can vouch for people who aren't on The List.
16:01:10 jruzicka, but you'll have to buy your own drinks and sandwiches :)
16:01:19 expense them!
16:01:23 I can live with that :)
16:01:38 so jruzicka is back, time for the last topic!
16:01:42 yes
16:01:45 yup
16:01:45 an important one
16:01:57 #topic rdoinfo - any tweaks needed to prevent forks and have all releases in one?
16:02:09 right, I've added new thoughts in https://trello.com/c/KxKtVkTz/62-restructure-rdoinfo-for-new-needs
16:02:29 what about this:
16:02:30 - project: gnocchi
16:02:30   conf: core
16:02:30   tags:
16:02:30     mitaka:
16:02:30     liberty:
16:02:32       source-branch: stable/1.2
16:02:34   maintainers:
16:02:50 basically some tweaking in parsing
16:02:55 + filtering by tag
16:03:08 jruzicka, ^ what say ya?
16:03:14 wfm, but if we don't use a specific branch, no tag value (i.e. mitaka: in your example)
16:03:38 yes, tweaking in parsing for sure
16:03:50 right, no tag means the project is in all releases
16:04:00 *nods*
16:04:01 there are multiple approaches however
16:04:10 this makes sense to me
16:04:13 this one (looks pretty fine)
16:04:32 and also having the whole project dict overridable at the per-release level
16:04:54 I'd keep generic "tags" vs releases so we could add different filters in the future
16:05:09 jruzicka: would it work with the funny ceilometer subprojects branching scheme?
16:05:10 release is just one possible dimension
16:05:16 and who knows what the future brings
16:05:24 (funny as in different from the other projects)
16:05:28 yup, tags are simple yet flexible
16:05:29 maybe releases will be gone as a concept
16:05:36 it's already disintegrating upstream
16:05:41 wfm
16:05:49 +1
16:06:16 * dmsimard needs one minute in open floor
16:06:17 #agreed use apevec's proposal as the foundation for the next-gen rdoinfo database format
16:06:37 jruzicka, eta? :)
16:06:38 #action jruzicka to lead rdoinfo reformat
16:06:53 we need it, like, yesterday, so no pressure :)
16:07:37 I'll work on it immediately, so in the next N weeks?
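[Editor's note] The tag-filtering idea agreed above could work roughly as follows. This is a hypothetical Python sketch, not actual rdoinfo/rdopkg code; the field names follow apevec's gnocchi example, and `DEFAULT_BRANCH` is an invented placeholder for whatever branch applies when a tag carries no override:

```python
# Sketch of tag-based release filtering for rdoinfo-style entries.
# A project with no "tags" key belongs to every release; a tag with a
# value overrides project fields (e.g. source-branch) for that release.

DEFAULT_BRANCH = "master"  # assumed default when no override is given

packages = [
    {"project": "gnocchi", "conf": "core",
     "tags": {"mitaka": None, "liberty": {"source-branch": "stable/1.2"}}},
    {"project": "nova", "conf": "core"},  # no tags: included everywhere
]

def filter_release(packages, release):
    """Return (project, source-branch) pairs included in a release."""
    result = []
    for pkg in packages:
        tags = pkg.get("tags")
        if tags is not None and release not in tags:
            continue  # tagged, but not for this release
        override = (tags or {}).get(release) or {}
        result.append((pkg["project"],
                       override.get("source-branch", DEFAULT_BRANCH)))
    return result
```

With this model, `filter_release(packages, "liberty")` picks gnocchi's `stable/1.2` override while nova falls back to the default branch, and a release absent from a project's tags simply excludes it.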
:)
16:07:44 :-p
16:07:46 wfm
16:07:49 if N=1, ack :)
16:08:22 since we're out of time, let's skip to open floor + next week's chair
16:08:24 for all epsilon greater than zero there exists...
16:08:33 I'll take the chair
16:08:39 #topic open floor
16:08:40 and try not to schedule a meeting over it :)
16:08:46 yep, sorry to be late, just one thing about the Murano topic: I continued with the openstack-murano SPEC file, you can check here https://bugzilla.redhat.com/show_bug.cgi?id=1272513 Also, I'm gonna take over the muranoclient and murano-dashboard RPMs ASAP unless other people are already working on them
16:08:50 #agreed jruzicka chairing next week's meeting
16:09:07 the delorean instance hasn't been performing so well recently, I've worked with the support team for the cloud environment it's on and isolated the issue to the compute node the virtual machine is on being quite crowded
16:09:08 mflobo: i'll keep you updated about it
16:09:12 (post meeting)
16:09:14 ack
16:09:30 dmsimard, ah, thanks for debugging that!
16:09:39 just wanted to say that I'll be planning a migration of the instance to a quieter node to keep delorean fast and without issues
16:09:41 +1
16:10:10 it's CLOUD
16:10:16 #action dmsimard will migrate the delorean instance to a less crowded compute node
16:10:18 thanks
16:10:24 thanks dmsimard
16:10:30 state of the art noop cloud
16:10:33 err
16:10:39 I just found that really funny.
16:10:49 well, I'll move the remaining topics to next week (like RDO DoD)
16:10:50 dmsimard: that's cool :)
16:11:26 anything else?
16:11:33 jpena: I'll need to poke you to understand some things before we do the migration
16:11:40 dmsimard: np
16:11:43 I know you leave early (for me) so will ping you soon
16:12:29 dmsimard, jpena => notes please :)
16:12:54 yeah, trello it
16:13:31 then, I suggest we close the meeting
16:13:49 thank you, gentlemen, for attending
16:13:53 3
16:13:55 2
16:13:57 1
16:13:58 0
16:14:00 #endmeeting