15:00:41 #startmeeting RDO meeting (2016-02-17)
15:00:41 Meeting started Wed Feb 17 15:00:41 2016 UTC. The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:41 The meeting name has been set to 'rdo_meeting_(2016-02-17)'
15:00:42 Geez. Is it that time already?
15:00:44 larsks: I think if curl can't do the resume trick, we will need some conditional that parses the url
15:00:50 #topic roll call
15:00:53 larsks: whoops, continue after meeting
15:00:53 here
15:00:55 o/
15:00:56 rbowen: yes :)
15:00:57 o/
15:01:08 agenda is here:
15:01:13 https://etherpad.openstack.org/p/RDO-Meeting
15:01:20 o/
15:01:28 RDO meeting (2016-02-17) ! Delorean works!
15:01:38 :)
15:01:42 o/
15:01:55 \o
15:01:58 #topic Delorean Trunk status/readiness for monthly testday Feb18/19
15:02:10 #chair apevec mflobo rbowen chandankumar dmsimard jpena trown
15:02:10 Current chairs: apevec chandankumar dmsimard jpena mflobo number80 rbowen trown
15:02:18 o/
15:02:30 o/
15:02:43 we are still using the same etherpad for tracking trunk status
15:02:46 #chair EmilienM
15:02:46 Current chairs: EmilienM apevec chandankumar dmsimard jpena mflobo number80 rbowen trown
15:02:50 #info trunk status etherpad https://etherpad.openstack.org/p/delorean_master_current_issues
15:03:38 trown: is it possible to have a test day, considering the nova breakage?
15:03:47 keystone reverted the admin_token_auth change, but we need either an opm update with new puppet-keystone, or revert our keystone change
15:03:48 so quite a few issues, and current-ci is rather old
15:04:06 trown: I'm working on it
15:04:14 I added it to the etherpad
15:04:25 number80: I do not think it will be possible to have a test day testing delorean current, given one of the issues is totally unidentified
15:04:37 number80: however, we do have stable current-passed-ci
15:04:50 it is just only a few days newer than the previous test day
15:05:00 yeah Jan 25
15:05:08 so not much
15:05:13 yeah, but it's 3 weeks old ...
15:05:34 let's skip it
15:05:43 I was really optimistic that we could have gotten a promotion by now
15:05:49 I am +1 to skipping, just wanted to say it is not the only option
15:06:00 but there have really been too many issues that are still in the process of being fixed
15:06:05 I mean, we'll still be on IRC helping if someone wants to play on the edge
15:06:13 tripleo has been having a hard time with their CI and merging things is taking a long time
15:06:25 We had a test day scheduled for tomorrow, but we (I) haven't done any promotion or gotten the test day page ready.
15:06:34 I completely fumbled that due to travel and PTO.
15:06:49 rbowen, yeah, it was mostly to keep testday monthly
15:07:03 So, it would be reasonable to skip the test day, if we need to, just because I don't know that anybody is expecting it.
15:07:04 m3/rc is coming in a few weeks
15:07:17 m3 is due March 1, right?
15:07:25 yes, that week
15:07:32 and testday is the week after
15:07:41 Mar 10/11
15:08:03 http://releases.openstack.org/mitaka/schedule.html
15:08:29 #link http://releases.openstack.org/mitaka/schedule.html
15:08:48 I'd say that we'll be in continuous testdays starting Mar 10, as we enter RC period
15:09:04 so come GA, we'll be ready
15:09:06 That's certainly one perspective. :-)
15:09:10 (famous last words)
15:09:34 The CI coverage for Mitaka is much better than what we had for Liberty, I'm pretty confident.
15:09:35 Having a scheduled day is of course also about drawing in folks from "outside" to come see what's new.
15:09:39 * trown is always in a continuous testday :)
15:10:02 but humans are really good at breaking things and finding issues
15:10:06 so test days are cool.
15:10:07 Eliska also has talked about doing an on-site test day in Brno. I don't know if that'll be for this time, or for the GA. We haven't worked out details yet.
15:10:08 rbowen: ya that was my logic for not needing to cancel based on delorean current not working
15:10:24 rbowen, yeah, I wanted to ask about that
15:10:36 if new users can have a reasonable chance of success, that is worthwhile I think
15:10:42 apevec: how about starting to rebuild stuff in CBS for M3?
15:10:43 rbowen, can you ping her on that?
15:10:51 number80, yes
15:11:02 Yes, I need to talk with her about what she has in mind. Still digging out from travels.
15:11:09 number80, scripted
15:11:18 apevec: you have a script ready?
15:11:31 no, we need one
15:11:36 trello it
15:11:38 ack
15:11:47 Looks like she's in another meeting.
15:11:47 I think we have a card already for that
15:12:03 number80, probably
15:12:12 #action number80 trim M3 automated rebuild plan
15:12:24 and agreed on skipping Feb testday?
15:12:32 for minutes
15:12:42 +1 from me
15:12:43 I agree. Let's focus on the next one.
15:12:51 Advertising for the next test day +++
15:12:56 +1 from me too, we have too many blockers imho
15:13:01 +1
15:13:06 number80, write it down!
15:13:08 +1
15:13:10 +1 for me also
15:13:23 #agreed skip February test day
15:13:30 Yes, I'm sorry, I completely dropped the ball on this month's test day. :(
15:13:36 #info next test day will be M3, early March
15:13:48 rbowen: turned out to be a good one to drop :)
15:13:49 rbowen: it's less about you than it is about trunk being broken
15:13:54 And next test day, I'll be traveling again, but I'll be sure to get the word out before I go.
15:14:15 next topic?
15:14:17 Also, we need (as always) to focus on brushing up testing instructions so that even a beginner can play along.
15:14:42 #info help rbowen in brushing up test cases for beginners
15:14:55 apevec: sort of related question, do we have some sort of "drill" or practice for stable releases on milestones ? like, do we try to package from tagged tarballs ?
15:15:36 we package from upstream tarballs
15:16:00 that's what I discussed w/ number80 before: we'll start scripted CBS rebuilds for M3
15:16:00 we don't generate them like with delorean (if that's the question)
15:16:03 I thought the process was different for a stable release, if it's not, ignore me :)
15:16:22 apevec: ack
15:16:31 but yes, once we have those builds, I'd like to have CI jobs running against -testing repos
15:16:34 next topic :)
15:16:44 apevec: easy enough
15:17:12 #topic Delorean instance status
15:17:16 jpena: ^
15:17:18 jpena: I think you have good news
15:17:27 bad news first :)
15:17:37 * number80 closes his eyes
15:17:51 so the bad news is that the current instance running on OS1 is having severe performance issues
15:18:21 good news is we have another instance running on the RCIP cloud, and it's performing much better
15:18:38 let's switch!
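
For illustration, the scripted CBS rebuild discussed above (number80/apevec agreed the M3 rebuilds should be scripted, but the script did not exist yet at meeting time) might look roughly like this minimal sketch. The target name and the SRPM list file are assumptions, not the real tooling:

    #!/bin/bash
    # Hypothetical sketch of an automated M3 rebuild loop against CBS.
    set -euo pipefail

    TARGET="cloud7-openstack-mitaka-el7"   # assumed CBS build target name

    # Assume a plain-text file listing one source RPM path per line.
    while read -r srpm; do
        echo "Submitting ${srpm} to ${TARGET}"
        # cbs is the CentOS Community Build System koji client;
        # --wait blocks until the build finishes so a failure stops the loop.
        cbs build --wait "${TARGET}" "${srpm}"
    done < srpms-m3.txt
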
15:18:41 like now
15:18:42 so
15:18:45 you can see in http://trunk.rdoproject.org/centos7/report.html vs http://46.231.133.114/centos7/report.html that the old instance is lagging behind like 12 hours
15:18:50 *nods*
15:18:51 I did want to raise a concern with the RCIP delorean
15:19:03 so, assuming we switch, there are a couple of topics to keep in mind
15:19:28 That instance is in Paris and *all* of the RDO CI is in North America. I have a concern about latency/throughput for using it as a mirror.
15:19:30 a) the new instance does not have all the history the old one has, so old commits are just not there (no current-passed-ci or any other)
15:20:01 ad a) let's rsync passed-ci
15:20:03 b) rpm-kilo is still not fully working, at least until we drop rdoinfo-Kilo
15:20:32 #action apevec merge rdoinfo-Kilo to main rdoinfo
15:20:53 dmsimard, good point re. latency but can we quantify it first?
15:20:58 maybe it's not that bad?
15:21:05 apevec: would you propose to rsync just the symlinked dirs (current-passed-ci, current-packstack, etc.) ?
15:21:19 apevec: yeah, I have no clue, just bringing it up. There are a lot of jobs pulling from delorean.
15:21:55 jpena, just symlinked
15:22:02 dmsimard: cpu would be today's biggest source of problems, and the current instance is doing quite badly at that
15:22:05 apevec: ack
15:22:18 NB those contain symlinked builds so it's more than one tree
15:22:30 dmsimard: if it's a problem, we can get a mirror in NA
15:22:44 jpena: yeah I'm not blocking the move at all and I'm in favor of it - I'm just thinking we might need to build a mirror in NA if latency/throughput is bad
15:22:58 number80, let's not go there
15:23:01 dmsimard: ok, let's keep an eye on that
15:23:30 although a local mirror inside ci.centos would make sense
15:23:33 #agreed switch to the RCIP cloud delorean instance as the main instance
15:23:43 but not sure we have a place for that
15:23:46 I'll create a trello card with the details
15:23:57 jpena: can you add an action?
15:24:05 we didn't get one for hosting 3o quickstart images
15:24:16 apevec: I don't think khaleesi jobs could pull from a mirror located inside the ci.centos infrastructure, it needs to be public
15:24:24 #action jpena will create a Trello card to plan the delorean instance switch to RCIP
15:24:29 but let's discuss it if it becomes a problem
15:24:32 ack
15:24:52 as an added bonus, the new instance is deployed using puppet-delorean
15:25:04 and on centos right ?
15:25:05 jpena++
15:25:08 yep
15:25:19 which fixes a couple of build issues in rawhide (!?)
15:25:27 also latency is not an issue for the image-based approach, since it would all be on the image build side
15:25:29 whatever...
15:25:34 maybe it has to do with mock or something *shrugs*
15:25:58 awesome
15:26:08 great work jpena, thanks
15:26:18 #info new delorean instance is deployed using puppet-delorean
15:26:26 trown: yeah but what I'm saying is that all of the CI using delorean (not just us, right: puppet-openstack, kolla, packstack from the openstack-infra gate), khaleesi jobs on the various jenkins instances, our jobs on ci.centos
15:26:46 ah right... tripleoci uses squid
15:26:51 next topic?
15:26:56 trown: might be affected to some extent by the fact that we're moving the mirror a couple thousand miles away
15:27:05 next
15:27:09 thanks
15:27:16 #topic RDO Manager branding vs TripleO
15:27:37 apevec?
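
A minimal sketch of the rsync apevec and jpena discussed in the previous topic (copying only the symlinked dirs such as current-passed-ci from the old OS1 instance to the new RCIP one). The host name and paths below are placeholders, not the real instance layout:

    # -a preserve attributes, -v verbose,
    # -L follow symlinks so the per-commit trees behind the symlinked
    #    builds ("more than one tree") are copied as real content.
    rsync -avL old-delorean:/var/www/html/centos7/current-passed-ci/ \
               /var/www/html/centos7/current-passed-ci/
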
15:27:41 no, dmsimard :)
15:27:46 oh hi
15:27:48 hum
15:27:53 RDO Manager really is TripleO today
15:27:54 I'll be in a supporting role
15:28:21 There were historical reasons why RDO Manager was called that way, it wasn't the same thing as TripleO
15:28:39 We feel that the branding "RDO Manager" confuses the community, the developers and the users into thinking it's something other than TripleO
15:29:06 Also, that it's a separate thing from RDO itself.
15:29:08 it did confuse me
15:29:21 I'm convinced it'd be a good thing to streamline that and just call it like it is: TripleO
15:29:32 it has confused literally everyone I have explained it to
15:29:38 yes, same as we say RDO Nova
15:29:39 TripleO is an OpenStack installer, just like any other such as Puppet OpenStack, Kolla or Packstack
15:29:45 it's OpenStack Nova in RDO
15:30:01 and luckily the RPM was always openstack-tripleo
15:30:11 so no package renames, just docs and other visible places
15:30:16 It gives the wrong message that we ship downstream-only patches in TripleO
15:30:19 I've put some of those in the etherpad
15:30:22 number80: indeed
15:30:36 (for the record, we don't anymore)
15:30:44 rbowen, ^ check etherpad if I missed something
15:30:49 I'll bite anyone trying to sneak in a downstream-only feature
15:30:53 So we need to fix/rename/redirect https://www.rdoproject.org/rdo-manager/ and also the front page of rdoproject.org
15:30:54 number80: if you want to be technical, it does carry downstream through OPM
15:31:19 Also, we need to determine if we'll continue doing https://www.rdoproject.org/rdo-manager/ or if we just point at upstream docs.
15:31:19 dmsimard, but that's also for packstack
15:31:27 dmsimard: yes, but I hope by N release, there won't be any more
15:31:28 apevec: indeed
15:31:37 number80: *nod*
15:31:48 https://www.rdoproject.org/rdo-manager/ says "based on TripleO"
15:31:50 rbowen: we have the tripleo-quickstart
15:32:01 rbowen, we should keep the old url and redirect
15:32:23 rbowen: maybe we should just have rdoproject.org/tripleo-quickstart and link to that from the front page
15:32:23 trown, ^ would we just redirect to your 3o-quickstart readme?
15:32:26 perhaps https://www.rdoproject.org/install/quickstart/ should be updated to include tripleo-quickstart
15:32:36 or perhaps that ^
15:32:37 leanderthal: Are you here?
15:33:19 it would be better to link to the README in the git repo though, as it is a bit of a pain to sync the two
15:33:42 So, for the record, does anyone object to dropping the "RDO Manager" branding in favor of calling it "TripleO" ? Can we open a vote ? I'm a meetbot noob.
15:33:59 All opposed?
15:34:01 trown, so link in https://www.rdoproject.org/install/quickstart/ to the 3o-quickstart README
15:34:08 +2
15:34:09 apevec: ya
15:34:10 question is should we do the vote right now?
15:34:22 perhaps a mailing list thread is in order
15:34:30 not perhaps, it *is* in order
15:34:31 So, for the sake of openness, I think we should vote, but we should also take it back to the mailing list.
15:34:32 ya, +1 to ML thread
15:34:43 rbowen++
15:34:43 Yes, what dmsimard said
15:34:45 this is not an overnight change anyway
15:34:46 +1 for m-l and vote next week
15:34:54 ok
15:35:12 it would be nice to have it in place for Mitaka release though
15:35:13 action anyone?
15:35:14 I don't think anybody will complain but it has to go there
15:35:25 trown: that's a good idea
15:35:26 who wants to start that thread?
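
For the "keep the old url and redirect" idea above, a hypothetical example only — the meeting did not specify how rdoproject.org handles redirects, and the destination URL is an assumption based on the later agreement to point at upstream TripleO docs:

    # An Apache-style rule (if the site supports .htaccess-style config) could be:
    #
    #   Redirect 301 /rdo-manager/ http://docs.openstack.org/developer/tripleo-docs/
    #
    # Once in place, the redirect can be checked from the command line:
    curl -sIL https://www.rdoproject.org/rdo-manager/ | grep -i '^location'
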
15:35:34 let's try to finish the rename by M release
15:35:40 * trown nominates dmsimard
15:35:42 trown, yes, for m3/rc
15:35:47 I can do the ML thread
15:36:03 #action dmsimard to start thread about RDO Manager branding
15:36:11 amen
15:36:36 do we agree on including 3o quickstart in the main quickstart too?
15:36:39 rbowen, i'm here
15:36:57 leanderthal: Just wanted to be sure you were aware of the discussion. :-)
15:36:58 I am +1 to that :)
15:37:07 rbowen, lemme catch up
15:37:33 leanderthal, line 33 in https://etherpad.openstack.org/p/RDO-Meeting
15:37:41 apevec, yeah, caught up
15:37:49 i'm so totally PRO the name change.
15:38:20 ok, watch for rdo-list thread coming from dmsimard
15:38:21 #agreed include TripleO quickstart in the main quickstart + redirect RDO Manager doc to upstream TripleO doc
15:38:23 and using redirect
15:39:02 should we go through the test days topics?
15:39:08 I think we covered them earlier
15:39:13 I think we've mostly covered that
15:39:16 yes
15:39:39 dmsimard++ i'll reply / support when you start the tripleo thread
15:39:39 leanderthal: Karma for dmsimard changed to 4 (for the f23 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:40:00 Well, eliska is here now.
15:40:00 #topic test days
15:40:19 From my side, yes, we have test days scheduled for m3, and a doc day around then too.
15:40:21 Do you guys think I should cross-post the thread to openstack-dev with a [tripleo] tag ?
15:40:44 #info test and doc days are scheduled around M3
15:40:44 dmsimard: That might be a good idea.
15:41:11 eliska said elsewhere that she has some details to work out, including which date we're going to do an in-person test day for.
15:41:16 Whether that's m3 or GA.
15:41:27 dmsimard, it was sent to rh openstack dev with an [rhos-dev] tag
15:41:34 ( everyone was keen )
15:41:44 leanderthal: different thread
15:41:51 ah, sorry
15:41:55 one is internal, one is public
15:42:07 aha
15:42:11 yes, public then
15:42:12 definitely.
15:42:23 k sorry for sidetracking
15:42:40 Anyways, test day. Yes. We need to get the word out and get the page up for that. I'll do that asap.
15:43:12 so we're having an in-person test day in Brno?
15:43:23 number80, TBD
15:43:23 number80: It's still tentative, but, yes.
15:43:50 #info tentatively hold an in-person test day in Brno (Czech Republic)
15:43:54 number80, do you need an excuse to come back to Brno? :)
15:44:09 apevec: in the summer maybe, I caught another cold :)
15:44:31 ok so N-2 testday then :)
15:44:39 \o/
15:44:55 so i guess we can move to the next topic now
15:45:08 #topic openstack summit talks vote
15:45:31 #info today is the last day for voting for openstack summit talks
15:45:35 Voting closes tonight at midnight, US Pacific time.
15:45:42 So, 8am UTC
15:45:54 Vote now to affect the schedule for summit.
15:45:54 would love a vote if you're keen https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/6821
15:46:08 it's going to include the name change / docs change and a live demo
15:46:16 On a related note, the proposal for an RDO Community BoF (Birds of a Feather) is *not* included in the vote.
15:46:27 ack
15:46:30 ooh, i'll beg for votes too. https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7797
15:46:31 leanderthal: yeah, got to rename that rdo manager from the talk name :P
15:46:37 I presume that means that those tracks are handled separately, but I don't know yet.
15:46:39 dmsimard, EXACTLY
15:46:45 leanderthal: will do, though it has no influence on the result (the same goes for elmiko)
15:46:58 number80, fair enough!
15:47:00 number80: appreciated nonetheless ;)
15:47:05 exactly ^
15:47:27 leanderthal: gave you a vote as well =D
15:47:38 elmiko++
15:47:38 leanderthal: Karma for mimccune changed to 2 (for the f23 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:48:14 leanderthal: that bio is epic
15:48:19 ok let's move on :)
15:48:22 almost out of time
15:48:23 *flex* all true!
15:48:25 yup
15:48:32 lol number80, I was not going to dash the hopes
15:48:42 I voted too, nonetheless
15:48:44 #topic Cleanup CI jobs
15:48:53 trown: this is tough love :)
15:49:07 yeah so we still have some CI jobs on an OpenShift Jenkins
15:49:09 we're almost done
15:49:17 that use packstack
15:49:27 these jobs run on the trystack cloud with khaleesi
15:49:42 let's remove them to avoid confusion
15:49:45 We could probably delete the jobs that test delorean trunk right away
15:50:02 However there is value in keeping the ones that test the stable releases until we move them to ci.centos
15:50:09 maybe
15:50:29 I need to ping the ci.centos guys, I have been asking for a second slave to cope with the increasing number of jobs but have not received it yet
15:50:30 I'd rather have new jobs running against -testing CBS repos asap
15:50:45 dmsimard: who's looking after these jobs?
15:50:51 using the same new JJB as the delorean jobs
15:51:08 number80: what jobs, the trystack ones ? tbh I don't look at them often :(
15:51:15 dmsimard: yup, these ones
15:51:24 apevec: yes, sure, just need the repo config .. can you send it to me ?
15:51:30 number80, that's the problem, I don't want to debug those old jobs
15:51:52 #action apevec to send repo config to dmsimard for new stable repo jobs on ci.centos
15:51:56 so that gives us our answer: we can just drop these jobs
15:52:02 apevec: we can set up the same promotion pipeline (minus the promotion part) to test the testing repo
15:52:06 yes, drop them
15:52:08 dmsimard, ack
15:52:10 apevec: ok
15:52:12 if nobody looks at them nor wants to debug them, they're useless
15:52:24 #action dmsimard to remove the trystack CI jobs
15:52:29 thanks
15:52:33 dmsimard++
15:52:33 number80: Karma for dmsimard changed to 5 (for the f23 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:52:54 what jobs are left?
15:53:00 on trystack ?
15:53:01 delorean-ci ?
15:53:08 probably delorean-ci and the khaleesi syntax gate
15:53:28 syntax gate?
15:53:30 ok, let's keep those there for now
15:53:35 yeah like lint and stuff
15:53:41 ok
15:53:44 number80, for khaleesi changes
15:54:10 ah ok, we should not touch khaleesi jobs
15:54:19 (I mean their own jobs)
15:54:31 well I can reach out to weshay, the khaleesi syntax gate can easily be moved to ci.centos
15:54:48 ack
15:54:49 no, let's keep the scope right
15:54:54 it's delorean-ci that's a bit touchy without a dedicated/static slave
15:55:32 let's keep it as-is for now
15:55:35 ok
15:55:55 #info all TripleO jobs are now hosted on ci.centos.org
15:56:05 trown: ?
15:56:11 yep
15:56:14 facts
15:56:16 anything to add?
15:56:31 nope
15:56:33 ok, let's move to our favorite topic
15:56:42 #topic open floor
15:56:53 we have a chair for next week
15:56:59 if someone has to shout something, this is the right time :)
15:57:09 chandankumar ^ still up for chairing ?
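
On the "repo config" apevec agreed to send dmsimard earlier in the CI jobs topic: a rough sketch of what a -testing CBS repo file could look like for the new ci.centos jobs. The exact baseurl is an assumption (Cloud SIG -testing content from CBS is normally published on buildlogs.centos.org), so treat it as illustrative only:

    cat > /etc/yum.repos.d/rdo-mitaka-testing.repo <<'EOF'
    [rdo-mitaka-testing]
    name=RDO Mitaka CBS -testing builds
    baseurl=https://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-mitaka/
    gpgcheck=0
    enabled=1
    EOF
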
15:57:12 i am chairing for the next meeting
15:57:41 #info chandankumar will be chairing next week
15:57:45 I talked about CI and RDO yesterday at the Montreal OpenStack meetup, was fun. Slides are here if you want to look: https://t.co/FxqrxU3Waq
15:57:55 apevec: i *think* i
15:58:05 i've transferred all the sahara packages to you
15:58:05 The talk was recorded and should be available in a couple days/weeks
15:58:06 #info dmsimard spoke about RDO CI at Montreal OpenStack meetup
15:58:08 slides
15:58:21 https://t.co/FxqrxU3Waq
15:58:26 On the topic of meetups, if you're planning to speak at a meetup about RDO (or just about OpenStack), please give me a little warning, and I can get you RDO swag (stickers, bookmarks) for that event.
15:58:27 #link https://t.co/FxqrxU3Waq
15:58:29 elmiko, thanks
15:58:31 apevec: did i need to take action on the barbican stuff too?
15:58:52 elmiko, barbican is done, xaeth transferred it
15:59:12 k, sorry for all the troubles :/
15:59:14 The cost for this swag is a blog post and/or photos and/or video of the event.
15:59:33 reasonably priced
15:59:39 elmiko, it wasn't trouble, sorry for automated spam using pkgdb :)
16:00:02 ok, so I assume we're done?
16:00:12 countdown
16:00:12 yes, right on time!
16:00:14 3
16:00:16 4
16:00:18 2
16:00:20 1
16:00:21 cheating
16:00:22 #endmeeting