15:02:04 <number80> #startmeeting RDO meeting (2016-01-13)
15:02:04 <zodbot> Meeting started Wed Jan 13 15:02:04 2016 UTC.  The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:04 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:02:04 <zodbot> The meeting name has been set to 'rdo_meeting_(2016-01-13)'
15:02:10 <chandankumar> \o/
15:02:15 <rbowen> Yo
15:02:15 <imcsk8> o/
15:02:20 <number80> #chair apevec trown chandankumar jschlueter rbowen imcsk8 jruzicka
15:02:20 <zodbot> Current chairs: apevec chandankumar imcsk8 jruzicka jschlueter number80 rbowen trown
15:02:22 <apevec> o/
15:02:31 <jschlueter> o/
15:02:43 <number80> excellent
15:02:47 <number80> agenda:
15:02:50 <number80> https://etherpad.openstack.org/p/RDO-Packaging
15:02:53 <dmsimard> o/
15:02:57 <trown> o/
15:03:16 <number80> #chair dmsimard
15:03:16 <zodbot> Current chairs: apevec chandankumar dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:03:17 <chandankumar> #chair dmsimard trown
15:03:17 <zodbot> Current chairs: apevec chandankumar dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:03:32 <number80> I moved the rdoinfo item to later to leave time for Jakub to finish his call
15:03:46 <apevec> ack
15:03:52 <number80> #topic murano/mistral unresponsive maintainer
15:04:23 <EmilienM> degorenko: ^
15:04:25 <number80> Daniil has not answered for 6 weeks, so I suggest we continue the reviews and have him as a co-maintainer if he pings back
15:04:42 <chandankumar> number80, i can help in client packages reviews.
15:04:44 <apevec> that's a good question - do we need a formal unresponsive-maintainer procedure like in Fedora?
15:05:02 <number80> apevec: why not, as long as it's lightweight :)
15:05:10 <chandankumar> for murano/mistral
15:05:22 <number80> I just want to make sure that we have murano/mistral ready for the third test day
15:05:30 <apevec> yeah, email to rdo-list CC maintainer ?
15:05:39 <imcsk8> i can help testing it
15:05:39 <sidx64_Cern_> number80, chandankumar, I have completed magnum package installations, and am now trying to configure them. we have created the database for magnum, and granted all privileges for root on the ip address that RDO is linked to. What should be the next steps?
15:05:49 <apevec> and timeout 1week - also maintainer can always come back!
15:05:56 <chandankumar> sidx64_Cern_, meeting is going on.
15:06:02 <number80> #action number80 email RDO list + Daniil in CC about murano/mistral w/ 1 week timeout
15:06:10 <number80> ack
15:06:15 <sidx64_Cern_> oops sorry.
15:06:19 <number80> sidx64_Cern_: np
15:06:22 <degorenko> sorry guys, i am on another meeting, what's wrong with murano?
15:06:52 <number80> degorenko: Daniil hasn't answered packages reviews for 6 weeks, so we're thinking about allowing someone else to take over
15:07:05 <number80> of course, he'd always be welcome to jump in later
15:07:06 <degorenko> number80, Daniil Trishkin?
15:07:09 <number80> yup
15:07:24 <degorenko> i will ping him :)
15:07:33 <EmilienM> thx degorenko
15:07:41 <number80> degorenko: excellent
15:07:53 <degorenko> are there any problems?
15:07:58 <degorenko> or just review comments?
15:08:09 <number80> degorenko: fixing the packages and moving on w/ reviews
15:08:22 <degorenko> ok, thank you for your attention
15:08:51 <number80> degorenko: could you give me (or Daniil) feedback within a week?
15:08:58 <number80> #undo
15:08:58 <zodbot> Removing item from minutes: ACTION by number80 at 15:06:02 : number80 email RDO list + Daniil in CC about murano/mistral w/ 1 week timeout
15:09:21 <number80> #chair degorenko
15:09:21 <zodbot> Current chairs: apevec chandankumar degorenko dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:09:33 <degorenko> number80, sure, i will pay attention to this activity
15:09:38 <number80> #action degorenko contact Daniil about murano/mistral
15:09:47 <number80> thanks, much appreciated
15:10:08 <number80> I think we can move on to the next topic
15:10:22 <number80> #topic dynamic UID/GID for openstack services
15:10:41 <number80> EmilienM reported an issue w/ epmd stealing cinder UID/GID
15:10:46 <number80> #chair EmilienM
15:10:46 <zodbot> Current chairs: EmilienM apevec chandankumar degorenko dmsimard imcsk8 jruzicka jschlueter number80 rbowen trown
15:10:58 <apevec> yeah that's a race
15:11:14 <apevec> in RPM installation order
15:11:24 <dmsimard> but cinder doesn't have a reserved uid/gid right ?
15:11:29 <apevec> it does
15:11:43 <apevec> but it's not actually reserved until cinder is installed
15:11:44 <dmsimard> hmm, I ctrl+f'd cinder in the list of reserved uids and didn't see it
15:11:45 <EmilienM> yeah like all openstack services
15:11:58 <number80> Tricky topic, because the erlang-erts package is also correct packaging-wise (it shouldn't have taken a UID/GID under 201) but some services also require packages external to RDO/CentOS/Fedora like Hadoop
15:12:02 <dmsimard> nova and glance and others had one, not cinder, I'll re-check
15:12:28 <apevec> number80, wait, how come it took cinder id which is <200 ??
15:12:33 <number80> apevec: theoretically the -r switch is supposed to enforce that we don't take 201
15:12:50 <number80> apevec: I dunno, I'd consider it a bug in coreutils
15:13:10 <apevec> uhoh
15:13:19 <jpena> Having cinder (and other services) get their ID automatically can mean broken multi-node setups when different controller nodes get different cinder UIDs
15:13:24 <dmsimard> #link https://bugzilla.redhat.com/show_bug.cgi?id=1297580
15:13:35 <number80> it also raises the question of UID/GID reservation, RDO is neither in Fedora nor RHEL, so there's no reference database for UID for new services
15:13:54 <apevec> yeah, we shold go dynamic
15:14:08 <apevec> static ids are actively discouraged in Fedora
15:14:30 <apevec> jpena, that should be fixed otherwise
15:14:33 <apevec> not on on RPM level
15:14:41 <apevec> like central account system
15:14:48 <apevec> or pre-create users by the installation tool
15:14:56 <jschlueter> why is there an issue if cinder has different uid/gid on different nodes?
15:15:06 <apevec> jschlueter, if you have NFS shares
15:15:16 <number80> jschlueter: NFS or when you use LDAP for holding UID/GID for you
15:15:20 <dmsimard> number80: what is that page for the reserved uid/gid ? Can't seem to find it
15:15:28 <number80> dmsimard: just a sec
15:15:51 <number80> https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:15:58 <jschlueter> apevec: ahh if so then reserved ids are required to keep things in sync ... otherwise you are relying on arbitrary ordering to keep the house of cards together
15:16:17 <dmsimard> Okay, my bad, cinder is indeed in there. Thought it wasn't.
15:16:19 <number80> the list used by Fedora (RHEL uses this w/ some tweaks too)
15:16:27 <dmsimard> #link >	https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:16:30 <dmsimard> #link https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:16:48 <apevec> #undo
15:16:49 <zodbot> Removing item from minutes: <MeetBot.items.Link object at 0x7f474321efd0>
15:16:49 <apevec> #undo
15:16:49 <zodbot> Removing item from minutes: <MeetBot.items.Link object at 0x7f4749f0fad0>
15:16:54 <number80> dmsimard: not required, meetbot does this w/ lines starting w/ links :)
15:17:12 <apevec> #info reserved UIDs in Fedora https://git.fedorahosted.org/cgit/setup.git/tree/uidgid
15:17:15 <dmsimard> number80: oh really, didn't know that
15:17:34 <apevec> smart bot
15:17:41 <apevec> so what do we do?
15:17:43 <number80> trown: what's your opinion from RDO manager PoV?
15:18:14 <trown> I am pretty unopinionated... it seems like both options have flaws
15:18:20 <number80> apevec: I know that some components maintainers are reluctant about the idea of dynamic UID/GID
15:18:50 <number80> but if we keep static UID/GID, there's still the question of how to reserve them
15:19:03 <trown> I think dynamic would require some pretty major patch to tripleo
15:19:10 <apevec> no, let's go dynamic for all new packages
15:19:24 <apevec> for currently allocated, we keep that
15:19:32 <apevec> trown, why?
15:19:33 <number80> wfm
15:19:52 <trown> apevec: ensuring that services get the same UID across multiple nodes
15:19:58 <apevec> number80, but what about cinder bug at hand?
15:20:17 <apevec> trown, I think neutron is already dynamic for some time
15:20:29 <apevec> after quantum rename iirc
15:20:32 <number80> apevec: it's a heisenbug, it could be documented for the very rare occasions it happens
15:20:36 <number80> yup
15:20:46 <number80> (for neutron)
15:20:51 <trown> apevec: ya probably not an issue except for things using shared filesystems
15:21:41 <apevec> EmilienM, jpena, can't users be pre-created by puppet?
15:21:42 <trown> apevec: if dynamic does not cause the need for some new installer machinery to coordinate it, then I am all for it
15:21:51 <EmilienM> no
15:22:00 <EmilienM> we don't want to manage POSIX resources
15:22:05 <EmilienM> because it overlaps with packaging
15:22:24 <EmilienM> and some distros don't have the same UID/GID
15:22:30 <apevec> right
15:22:31 <jpena> apevec: if we want them to have the same ID in different machines, we are back to square one. Either we use a pre-assigned UID, or we do it in one machine then get the UID and use it elsewhere (ugly)
15:22:47 <EmilienM> we'll never manage users/gid/uid with puppet
15:22:49 <number80> Unlike Debian and derivatives, we don't have single authority to manage that
15:23:08 <dmsimard> we don't ?
15:23:10 <imcsk8> i'm sorry, in cinder's case which package is taking the cinder user UID?
15:23:14 <apevec> number80, what's that authority in Debian?
15:23:22 <trown> jpena: apevec, maybe the compromise, is dynamic, unless there is some need for static (cinder)
15:23:28 <number80> apevec: Debian namely
15:23:30 <EmilienM> some modules create the user/group but we are in the process to drop that
15:23:31 <apevec> imcsk8, yeah see https://bugzilla.redhat.com/show_bug.cgi?id=1297580
15:23:44 <jpena> trown: but that doesn't prevent packages doing it dynamically from taking the static UIDs
15:23:55 <trown> ugh... yeah
15:24:06 <apevec> jpena, uids under 200 should be reserved
15:24:16 <apevec> as number80 that might be coreutils bug
15:24:28 <apevec> ....as number80 said...
15:24:47 <number80> I think we should stick to apevec proposal for the moment and try pinging fedora and rhel setup maintainer to see if they can't coordinate on UID/GID
15:24:50 <jpena> mmm that's right
15:25:12 <number80> (the latter being best effort as I'm not optimistic to reach agreement)
15:25:48 <apevec> number80, maybe CC setup maintainer on that bz, maybe he had similar bugs reported?
15:25:58 <number80> apevec: good idea
15:26:10 <number80> #action number80 ping RHEL setup maintainer
15:26:34 <apevec> .whoowns setup
15:26:34 <zodbot> apevec: ovasik
15:26:39 <number80> so we agree on status quo + dynamic UID/GID by default?
15:26:53 <number80> *for new services
15:27:08 <apevec> ack
15:27:09 <dmsimard> I don't disagree with jpena's approach
15:27:09 <trown> ++ for all new things being dynamic
15:27:22 <dmsimard> of static uid/gid being static only for services that logically require it
15:27:28 <dmsimard> rest dynamic
15:27:50 <number80> #agreed keep using reserved UID/GID for existing services and use dynamic UID/GID for new services by default
15:28:29 <apevec> #info Fedora uid guidelines https://fedoraproject.org/wiki/Packaging:UsersAndGroups
15:28:38 <number80> dmsimard: I know that Nova and Cinder devs are reluctant to do the change, so basically, it'll be the same as apevec proposal in practice :)
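[editor's note] The race discussed above can be sketched as follows: a statically reserved ID like cinder's 165 is only protected once the owning package has actually created the user, so any scriptlet that allocates a UID first can collide with it. A minimal illustration in Python, with hypothetical helper names and the 201..999 system range as an assumption (real values come from /etc/login.defs):

```python
# Simplified model of shadow-utils' `useradd -r` dynamic allocation vs.
# a package %pre doing `useradd -u 165 cinder` (static allocation).
# SYS_UID_MIN/MAX values are assumptions matching common Fedora defaults.

SYS_UID_MIN, SYS_UID_MAX = 201, 999

def alloc_system_uid(used_uids):
    """Dynamic allocation: highest free UID in the system range."""
    for uid in range(SYS_UID_MAX, SYS_UID_MIN - 1, -1):
        if uid not in used_uids:
            return uid
    raise RuntimeError("no free system UID")

def add_static_user(used_uids, uid):
    """Static allocation: fails if another package already took the UID."""
    if uid in used_uids:
        raise RuntimeError("reserved UID %d already taken" % uid)
    used_uids.add(uid)
```

In this model dynamic allocation never dips below 201, which is why the epmd/cinder collision looked like an allocator bug; going dynamic for new services avoids the collision entirely, but, per jpena's point, shifts the multi-node UID-consistency problem to the installation tool.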
15:29:40 <dtrishkin> sorry guys for being lost, Denis ping me) I just had a lot of work, and forgot about reviews, tomorrow I'll have time for that
15:29:55 <number80> dtrishkin: excellent :)
15:30:47 <apevec> action item resolved during the meeting, that's agile!
15:30:50 <number80> I suggest that we move the next topic, we have a lot of items today (my fault /o\)
15:31:06 <apevec> yeah move on
15:31:13 <number80> #topic trim changelog entries that are older than 2 years
15:31:19 <number80> raised by kashyap
15:31:25 <number80> https://review.gerrithub.io/#/c/258590/
15:31:33 <apevec> I abandoned that one :)
15:31:48 <apevec> see review comments for details
15:32:01 <number80> I'm fine w/ having it in rdo-rpm-macros but I'm unopinionated
15:32:17 <apevec> BTW one more link re. uids, it's in upstream docs http://docs.openstack.org/icehouse/install-guide/install/yum/content/reserved_uids.html
15:32:25 <number80> #undo
15:32:25 <zodbot> Removing item from minutes: <MeetBot.items.Link object at 0x7f4741e65290>
15:32:36 <number80> http://docs.openstack.org/icehouse/install-guide/install/yum/content/reserved_uids.html
15:32:54 <apevec> oops icehouse :)
15:32:54 <number80> (to keep summary consistent, restoring current topic)
15:33:42 <number80> #topic trim changelog entries that are older than 2 years
15:33:50 <number80> https://review.gerrithub.io/#/c/258590/
15:33:58 <number80> so your call?
15:34:32 <number80> if nobody disagrees, by default, I'll follow kashyap suggestion and update rdo-rpm-macros
15:34:58 <number80> so new packages will have their changelogs trimmed (no rebuilds for existing packages)
15:35:05 <apevec> ack, although with current archiving to ChangeLog.old we'll never reach that 2 years :)
15:35:23 <apevec> if openstack releases keep coming this frequently, that is
15:35:32 <number80> yes, but it's small change so I don't mind
15:36:01 <apevec> kashyap, BTW is there any other package using that macro_
15:36:05 <apevec> ?
15:37:08 <apevec> nm, let's just add it and move on
15:37:18 <number80> #agreed add in rdo-rpm-macros trim changelog entries that are older than 2 years
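[editor's note] The agreed policy can be illustrated as a post-processing step over a spec's %changelog text: keep entries newer than two years, drop the rest. This is a hypothetical sketch for the record only; rdo-rpm-macros implements the trimming at the RPM macro level, not like this, and the date regex assumes the conventional `* Day Mon DD YYYY Author` header line:

```python
import re
from datetime import datetime, timedelta

# Matches the date at the start of a %changelog entry header,
# e.g. "* Wed Jan 13 2016 Someone <someone@example.com> - 1.0-1"
ENTRY_RE = re.compile(r'^\* (\w{3} \w{3} +\d{1,2} \d{4}) ', re.M)

def trim_changelog(text, now, max_age_days=730):
    """Return the changelog with entries older than ~2 years removed."""
    parts = ENTRY_RE.split(text)  # ['', date1, body1, date2, body2, ...]
    kept = []
    for date_str, body in zip(parts[1::2], parts[2::2]):
        when = datetime.strptime(date_str, '%a %b %d %Y')
        if now - when <= timedelta(days=max_age_days):
            kept.append('* %s %s' % (date_str, body))
    return ''.join(kept)
```

As apevec notes, with ChangeLog.old archiving and ~6-month release cycles, few packages will ever accumulate two years of in-spec entries anyway.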
15:37:30 <number80> #topic missing deps from EPEL required by RDO Manager
15:37:39 <number80> https://trello.com/c/6uXVuQcC/115-rebuild-epel-packages-required-for-rdo-manager
15:37:41 <apevec> trown, what's left?
15:37:49 <trown> apevec: just ceph at this point
15:37:59 <number80> #info good news we don't need dkms in RDO repositories :)
15:38:26 <apevec> uhm ceph is big one and I wouldn't like to maintain it in RDO
15:38:37 <apevec> number80, any replies from Storage SIG ?
15:38:45 <trown> images being built in CI do not have EPEL installed on them also
15:38:50 <number80> I contacted fcami who will maintain ceph for Storage SIG, he was looking for cloud instances to do more testing (pointed him to OS1)
15:39:03 <number80> so it's work in progress
15:39:06 <dmsimard> Why is pulling Ceph from it's official repositories a problem ?
15:39:17 <number80> I'll add a trello card to follow up the effort w/ fcami
15:39:41 <dmsimard> Ceph is already mirrored, why do we want to mirror it elsewhere ?
15:39:50 <number80> dmsimard: is it?
15:39:52 <dmsimard> (Trying to understand the problem we're trying to solve)
15:40:12 <dmsimard> http://docs.ceph.com/docs/master/install/get-packages/
15:41:04 <number80> dmsimard: our release package can't have a dependency on a release package that is not in centos-extras (or any other blessed repo)
15:41:45 <trown> apevec: number80, while we wait on the storage SIG, is it better to pull packages from ceph repos or EPEL for the CI images?
15:41:51 <number80> the alternative would be indeed to have ceph release package in extras
15:42:09 <number80> trown: good question
15:42:17 <dmsimard> number80: ok, well, that's a lot of work duplication for something that already exists. We should find a way to work around that.
15:42:27 <apevec> I'm not sure about ceph repos
15:42:38 <number80> dmsimard: well, if storage SIG maintain the package, little work for us
15:42:39 <apevec> how are they built?
15:42:55 <dmsimard> apevec: what do you mean how are they built ?
15:43:28 <apevec> I mean, are those reliable production builds, not done under someones desk :)
15:43:44 <apevec> also not hacked :)
15:43:48 <number80> all I know is they gloriously use rdopkg to manage their packages
15:43:58 <trown> sweet
15:44:01 <apevec> ah that's a good sign
15:44:03 <dmsimard> Hum, those are the official binaries, I don't know - we could ask ceph folks
15:44:57 <trown> I think I will use EPEL just for the ceph packages for now... since that is how things have been done thus far
15:45:01 <number80> dmsimard: tried to ping ktdreyer but no answer (well, he could be on PTO)
15:46:17 <number80> proposal: continue discussion w/ ceph folks and see what's our best option
15:46:30 <apevec> trown, ack
15:46:53 <apevec> EPEL for now until centos storage sig has release ceph builds
15:46:56 <number80> #agreed temporarily use EPEL ceph package until we find a better alternative w/ ceph folks or storage SIG
15:47:43 <number80> #topic CI promote job - To Khaleesi or not to Khaleesi
15:47:49 <number80> this is poetry, this must dmsimard
15:47:53 <number80> *be
15:48:08 <dmsimard> no, this is not from me even though I have a strong opinion on it
15:48:10 <dmsimard> :)
15:48:29 <number80> who added it?
15:48:33 <apevec> dunno
15:48:38 <number80> trown: ?
15:48:45 <dmsimard> trown: is that you? ^
15:48:54 <apevec> my take: for packstack let's move tempest to upstream gate and include that in RDO CI via weirdo !
15:49:02 <trown> ah yess that is me
15:49:03 <apevec> trown, YOU!?
15:49:09 <dmsimard> apevec: yeah
15:49:12 <dmsimard> as far as packstack is concerned
15:49:31 <dmsimard> packstack jobs for RDO will no longer use khaleesi eventually
15:49:47 <apevec> trown, so what's alternative for rdo-m CI ?
15:49:48 <dmsimard> as we've done the necessary changes in packstack to make it able to gate against itself
15:49:59 <number80> good news
15:50:01 <trown> background: I have been fighting with the liberty promote job since getting back from break... and the feedback time from khaleesi is too long
15:50:20 <dmsimard> once packstack gates against itself upstream, it'll land far more stable in delorean and weirdo will pick up the gate jobs, improving further stability
15:50:34 <apevec> trown, use tripleoci for rdo-m ?
15:50:55 <trown> apevec: I would propose using my quickstart for rdo-m... tripleoci is a bit unworkable outside of their environment
15:51:27 <apevec> trown, sounds good
15:51:47 <trown> I have some POC jobs up for liberty and mitaka using it, but I need to get it moved to redhat-openstack, and setup gerrit and jjb
15:51:48 <dmsimard> trown: I'm not super familiar with the differences between tripleo and rdo-m. Would picking up the triple o CI jobs through weirdo be relevant ?
15:51:53 <number80> trown: wfm
15:52:07 <trown> but I wanted to propose it before doing all of that
15:52:17 <apevec> trown, speaking of that, can you restart discussion w/ centos infra folks about mirroring your quickstart images somewhere with more BW ?
15:52:37 <trown> apevec: ya, saw that thread, I will send something to ci-users today
15:52:41 <apevec> thanks
15:52:45 * number80 reminds that the clock is ticking and we still have rdoinfo discussion
15:52:53 <number80> trown: an #agreed ?
15:53:05 <dmsimard> +1 for quickstart
15:53:13 <trown> if there are no objections to putting tripleo-quickstart on redhat-openstack, I think we can move on
15:53:17 <dmsimard> at this point we're troubleshooting khaleesi more than we are rdo-m
15:53:32 <number80> I'm always for fewer layers, except for cakes
15:54:02 <dmsimard> We can move on to next topic, I've prepared my speech so we can just run past it
15:54:04 <number80> #agreed put tripleo-quickstart on redhat-openstack
15:54:10 <apevec> number80, you can't comprehend more than 7 :)
15:54:18 <number80> *nods*
15:54:29 <number80> #topic an update on WeiRDO
15:54:33 <number80> dmsimard: stage is yours
15:54:35 <dmsimard> So just an update on WeIRDO for the record and for the benefit of everyone :)
15:54:40 <dmsimard> For those not aware of what WeIRDO is about yet, I've posted on the mailing list about it
15:54:40 <number80> https://github.com/redhat-openstack/weirdo)
15:54:45 <dmsimard> #link https://www.redhat.com/archives/rdo-list/2015-December/msg00043.html
15:54:54 <dmsimard> So far I've been working on getting necessary upstream commits in puppet-openstack and kolla in order to consume their CI jobs outside of the gate with different repositories and other required things
15:55:02 <dmsimard> The general idea being that WeIRDO runs their CI jobs on the delorean repositories (part of the promotion pipeline) and we shift both puppet-openstack and kolla CI to delorean current-passed-ci.
15:55:11 <dmsimard> I am almost ready to merge a review to vastly improve debugging and logging to troubleshoot potential failures and after that I'll hook the repository customization into weirdo.
15:55:18 <dmsimard> Once repository customization is done, we can consider putting the weirdo jobs in the promotion pipeline.
15:55:22 <dmsimard> done :P
15:55:25 <number80> excellent
15:55:41 <apevec> great stuff!
15:55:49 <rbowen> At some point, can we get some content on the website about it, too, so that people can jump in? Thanks.
15:55:52 <chandankumar> great
15:56:00 <dmsimard> rbowen: sure thing
15:56:11 <number80> next topic
15:56:17 <dmsimard> weirdo is already setup to use gerrithub and weirdo already gates against itself (though non-voting)
15:56:31 <trown> awesome +1 to weirdo in promote pipeline
15:56:37 <dmsimard> reviews are here: https://review.gerrithub.io/#/q/project:redhat-openstack/weirdo
15:56:45 <number80> #info weirdo is already setup to use gerrithub and weirdo already gates against itself (though non-voting)
15:56:51 * rbowen contemplated WeIRDO tshirts for summit ...
15:57:01 <dmsimard> ok next topic :)
15:57:06 <number80> #topic Upcoming events
15:57:08 <number80> rbowen:
15:57:27 <apevec> rbowen, why not underwear?
15:57:28 <rbowen> With Mitaka 2 due this week, we have two events coming
15:57:49 <rbowen> Doc day, Jan 20-21. See the open issues list at https://github.com/redhat-openstack/website/issues
15:58:12 <number80> #info Doc day, Jan 20-21
15:58:12 <rbowen> Test day - http://rdoproject.org/testday - we can still use a lot of help getting the test cases consumable by people that aren't already experts.
15:58:33 <number80> #info test day Mitaka-2 - Jan 27-28
15:58:34 <rbowen> And then, of course, there's the RDO day at FOSDEM, which is apparently well oversold.
15:58:50 <dmsimard> awesome news
15:58:50 <rbowen> Looking forward to seeing some of you there.
15:58:58 <rbowen> And some/others of you at DevConf.cz
15:59:09 <rbowen> Oh, and I'm planning to get a RDO meetup room at DevConf
15:59:18 <rbowen> I posted to the mlist about that, and no response so far.
15:59:41 <rbowen> I'm probably going to reserve the hour right after jruzicka's rdopkg talk.
15:59:51 <number80> rbowen: haven't yet answered but count me in
16:00:02 <rbowen> Unless I hear that people prefer a morning slot.
16:00:06 <rbowen> </EOL>
16:00:17 <number80> rbowen: I prefer afternoon, we're speaking BeerNo
16:00:24 <rbowen> Yes, that was exactly my thought
16:00:31 <rbowen> I don't (want to) do mornings.
16:00:35 <jruzicka> oh as of RDO day
16:00:41 <jruzicka> I didn't manage to register in time
16:00:43 <jruzicka> is all hope lost? :)
16:00:47 <rbowen> We'll sneak you in.
16:00:52 <jruzicka> \o/
16:00:58 <number80> jruzicka: we need you
16:01:05 <rbowen> I expect to be there early, and I can vouch for people that aren't on The List.
16:01:10 <apevec> jruzicka, but you'll have to buy your drink and sandwiches :)
16:01:19 <number80> expense them!
16:01:23 <jruzicka> I can live with that :)
16:01:38 <apevec> so jruzicka is back, time for the last topic!
16:01:42 <number80> yes
16:01:45 <jruzicka> yup
16:01:45 <number80> important one
16:01:57 <number80> #topic     rdoinfo - any tweaks needed to prevent forks and have all releases in one?
16:02:09 <apevec> right, I've add new thoughts in https://trello.com/c/KxKtVkTz/62-restructure-rdoinfo-for-new-needs
16:02:29 <apevec> what about this:
16:02:30 <apevec> - project: gnocchi
16:02:30 <apevec>     conf: core
16:02:30 <apevec>     tags:
16:02:30 <apevec>       mitaka:
16:02:30 <apevec>       liberty:
16:02:32 <apevec>         source-branch: stable/1.2
16:02:34 <apevec>     maintainers:
16:02:50 <apevec> basically some tweaking in parsing
16:02:55 <apevec> + filtering by tag
16:03:08 <apevec> jruzicka, ^ what say ya?
16:03:14 <number80> wfm, but if we don't use a specific branch, no tag (ie. mitaka: in your example)
16:03:38 <jruzicka> yes, tweaking in parsing for sure
16:03:50 <apevec> right, no tag means project is for all releases
16:04:00 <number80> *nods*
16:04:01 <jruzicka> there are multiple approaches however
16:04:10 <trown> this makes sense to me
16:04:13 <jruzicka> this one (looks pretty fine)
16:04:32 <jruzicka> and also having the whole project dict overridable on per-release level
16:04:54 <apevec> I'd keep generic "tags" vs releases so we could add different filters in the future
16:05:09 <number80> jruzicka: would it work with funny ceilometer subprojects branching scheme?
16:05:10 <apevec> release is just one possible dimension
16:05:16 <apevec> and who knows what future brings
16:05:24 <number80> (funny as in different from the other projects)
16:05:28 <jruzicka> yup, tags are simple yet flexible
16:05:29 <apevec> maybe releases will be gone as a concept
16:05:36 <apevec> it's already disintegrating upstream
16:05:41 <number80> wfm
16:05:49 <trown> +1
16:06:16 * dmsimard needs one minute in open floor
16:06:17 <number80> #agreed use apevec proposal as foundation for the next-gen rdoinfo database format
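[editor's note] The scheme pasted above reads as: an untagged project applies to every release, a release tag with no value simply enables the project for that release, and a tag carrying a mapping overrides per-release fields such as `source-branch`. A rough sketch of the filtering logic (names are illustrative; the real parsing lives in rdoinfo/rdopkg):

```python
# Resolve one rdoinfo-style project entry for a given release.
# Returns the (possibly overridden) project dict, or None when the
# project is filtered out for that release.

def project_for_release(project, release):
    tags = project.get('tags')
    if tags is None:
        return dict(project)            # untagged: valid for all releases
    if release not in tags:
        return None                     # not built for this release
    resolved = dict(project)
    resolved.update(tags[release] or {})  # e.g. {'source-branch': 'stable/1.2'}
    return resolved

# The gnocchi example from the meeting, as a parsed YAML mapping:
gnocchi = {
    'project': 'gnocchi',
    'conf': 'core',
    'tags': {'mitaka': None,
             'liberty': {'source-branch': 'stable/1.2'}},
}
```

Keeping the key generic ("tags" rather than "releases") matches apevec's point that releases are just one possible filtering dimension.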
16:06:37 <apevec> jruzicka, eta? :)
16:06:38 <number80> #action jruzicka to lead rdoinfo reformat
16:06:53 <apevec> we need it like, yesterday so no pressure :)
16:07:37 <jruzicka> I'll work on it immediately, so in next N weeks? :)
16:07:44 <jruzicka> :-p
16:07:46 <number80> wfm
16:07:49 <apevec> if N=1, ack :)
16:08:22 <number80> since we're out of time, let's skip to open floor + next week chair
16:08:24 <trown> for all epsilon greater than zero there exists...
16:08:33 <jruzicka> I'll take the chair
16:08:39 <number80> #topic open floor
16:08:40 <jruzicka> and try not to schedule meeting over :)
16:08:46 <mflobo> yep, sorry to be late, just one thing about the Murano topic, I continued with the openstack-murano SPEC file, you can check here https://bugzilla.redhat.com/show_bug.cgi?id=1272513 Also, I'm gonna take over the muranoclient and the murano-dashboard RPM ASAP unless other people are working on that
16:08:50 <number80> #agreed jruzicka chairing next week meeting
16:09:07 <dmsimard> the delorean instance hasn't been performing so well recently, I've worked with the support for the cloud environment it's on and isolated the issue to the compute node the virtual machine is on being quite a bit crowded
16:09:08 <number80> mflobo: i'll keep you updated about it
16:09:12 <number80> (post meeting)
16:09:14 <mflobo> ack
16:09:30 <apevec> dmsimard, ah thanks for debugging that!
16:09:39 <dmsimard> just wanted to say that I'll be planning a migration of the instance to a quieter node to keep delorean fast and without issues
16:09:41 <number80> +1
16:10:10 <apevec> it's CLOUD
16:10:16 <number80> #action dmsimard will migrate delorean instance to a less crowded compute node
16:10:18 <number80> thanks
16:10:24 <trown> thanks dmsimard
16:10:30 <jruzicka> state of the art noop cloud
16:10:33 <jruzicka> err
16:10:39 <jruzicka> I just found that really funny.
16:10:49 <number80> well, I'll move the remaining topics to next week (like RDO DoD)
16:10:50 <jpena> dmsimard: that's cool :)
16:11:26 <number80> anything else?
16:11:33 <dmsimard> jpena: I'll need to poke you to understand some things before we do the migration
16:11:40 <jpena> dmsimard: np
16:11:43 <dmsimard> I know you leave early (for me) so will ping you soon
16:12:29 <number80> dmsimard, jpena => notes please :)
16:12:54 <apevec> yeah, trello it
16:13:31 <number80> then, I suggest we close the meeting
16:13:49 <number80> thank you gentlemen for attending
16:13:53 <number80> 3
16:13:55 <number80> 2
16:13:57 <number80> 1
16:13:58 <chandankumar> 0
16:14:00 <number80> #endmeeting