15:05:39 #startmeeting RDO meeting - 2017-12-20
15:05:39 Meeting started Wed Dec 20 15:05:39 2017 UTC. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:05:39 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:05:39 The meeting name has been set to 'rdo_meeting_-_2017-12-20'
15:05:40 Meeting started Wed Dec 20 15:05:39 2017 UTC and is due to finish in 60 minutes. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:05:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:05:43 The meeting name has been set to 'rdo_meeting___2017_12_20'
15:05:52 #topic rollcall
15:05:57 o/
15:06:01 \o
15:06:02 o/
15:06:10 #chair dmsimard jpena ykarel mary_grace
15:06:10 Current chairs: amoralej dmsimard jpena mary_grace ykarel
15:06:49 o/
15:06:59 #chair rbowen
15:06:59 Current chairs: amoralej dmsimard jpena mary_grace rbowen ykarel
15:07:14 let's start with the first topic
15:07:30 #topic CBS certificate renewal on Dec, 31/Jan, 1
15:08:04 I'm not sure who wrote the note "I'll be taking care of that"
15:08:09 apevec?
15:08:10 number80 ^
15:08:23 number80 is on pto i think
15:08:30 oh
15:08:43 we had been using number80's cert
15:08:49 yes
15:08:49 so I'm not sure how someone else can take care of that
15:08:52 it needs to be number80
15:08:54 unless we use someone else's cert
15:09:07 or yeah we change cert
15:09:23 apevec: it's either him or you :P
15:09:30 no one else has build privileges in cloud sig I think
15:09:31 #chair apevec
15:09:31 Current chairs: amoralej apevec dmsimard jpena mary_grace rbowen ykarel
15:09:36 i have cert also
15:09:39 so iirc it was Haikel who wrote that
15:09:46 iiuc*
15:10:01 so I'd assume he is taking care
15:10:05 so the note means he will take care while in pto?
15:10:12 oook
15:10:22 yeah, that's not good and we need to fix that :)
15:10:42 there is discussion ongoing about creating SIG bot cert
15:10:51 apevec, any progress on the discussion?
15:10:57 i saw the mails about it
15:11:07 but no conclusion, right?
15:11:23 yeah no progress yet
15:11:25 apevec: why do we need to wait until the certificate expires ? can't we renew it before expiration and just switch it ?
15:11:49 that's also a good question
15:11:57 cert can be regen at any time
15:12:07 yeah, that's what i was thinking, maybe i could install mine as secret
15:12:29 is there anything special with the change of year?
15:12:36 well I just mean that number80 can renew it at any time
15:12:45 number80 is on pto
15:12:47 :)
15:12:53 ok, let me rephrase that
15:12:56 let's see if we can take care
15:13:00 number80 should have been able to renew it at any time
15:13:02 Merged openstack/networking-l2gw-distgit rpm-master: Workaround due to Tempest plugin split https://review.rdoproject.org/r/11039
15:13:09 amoralej: do you have build privileges on cbs ?
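[Editor's note: the renewal being discussed above can be checked locally at any time. A minimal sketch, assuming the client certificate lives at the default `~/.centos.cert` path used by the centos-packager tooling; the `CBS_CERT` override is an illustrative convention, not part of that tooling.]

```shell
# Sketch (assumed path): show when the current CBS client certificate
# expires, so renewal can be planned ahead instead of pinned to Jan 1.
# ~/.centos.cert is assumed as the default install location; override
# with CBS_CERT if the certificate lives elsewhere.
CERT="${CBS_CERT:-$HOME/.centos.cert}"
if [ -f "$CERT" ]; then
  openssl x509 -enddate -noout -in "$CERT"
else
  echo "no client certificate found at $CERT"
fi
```

[Renewal itself goes through the CentOS account tooling (the `centos-cert` helper shipped with centos-packager, if memory serves), which only the certificate owner can run — hence the discussion about a shared service account.]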
15:13:12 dmsimard, i got it now, i think so
15:13:12 yes
15:13:18 for cloudsig
15:13:34 ok, I think the real solution is to have a "service" account like paas sig has
15:13:40 but it might be short notice to do that
15:13:46 we can reach out to #centos-devel
15:13:58 dmsimard, there have been some conversations about it
15:14:19 does paas sig have a service accunt?
15:14:22 yeah
15:14:22 account?
15:14:47 amoralej: see for example http://cbs.centos.org/koji/packageinfo?packageID=3175
15:14:50 "paas"
15:14:57 i thought there is no solution for service accounts in CBS
15:15:00 it's used by their CI afaik
15:15:02 amoralej: there is
15:15:20 amoralej: they tend to stray away from that in *brew* because of a policy where each package must be reviewed by a human
15:15:25 or something like that
15:15:30 then, i don't understand well the discussion about the generic account some time ago in centos-devel
15:15:38 yeah paas is example I brought up,
15:15:44 but unsure how they got it
15:15:45 it's not documented in official SIG guide
15:16:10 maybe it's just a regular account
15:16:13 with regular expiration
15:16:16 but with paas name
15:16:16 ok i'll take the action to chat with them
15:16:20 maybe
15:16:28 #action dmsimard to reach out to #centos-devel about getting a service account like paas
15:16:48 it was topic on centos contributor meetup at cern
15:17:04 where conclusion was yes and TBD
15:17:25 ok, let's see what centos guys tell us
15:17:37 maybe there is a chance to get it before end of year?
15:17:49 amoralej, I'd go with the plan to put yours today
15:17:53 and see how that goes
15:18:00 ok
15:18:02 I'm asking now, let's catch up after the meeting
15:18:09 i.e. do we know all places where it is used
15:18:21 dropping by, reason to do it on Jan, 1 is that it's easy to remember
15:18:23 #action amoralej to install new certificate for CBS task
15:18:46 number80 but it's holiday!, not a good day for anything :)
15:19:03 random date means that nobody knows but the person who set it up when to renew it
15:19:16 that's also true
15:19:19 best date to do that, nobody is impacted :)
15:19:33 just the one that has to do it
15:19:37 that's not fair
15:19:47 :)
15:20:11 let's see if we can get the service account
15:20:34 There's already one created, pending validations from KB, we also need email alias
15:20:51 apevec: I think it's a manager thing to get a team alias ^
15:21:29 KB is on PTO already so if we need him that means it'll be after the holidays
15:22:03 Yeah, he's the one who has to review that
15:22:40 number80, what if we install new cert today?, i guess we'll have service account before expiration
15:22:44 in 6 months
15:23:09 works for me, just wanted to notify this was under radar, anyway
15:23:18 yeah, thanks for that
15:23:34 ok, i'll follow up after the meeting
15:23:37 next topic
15:24:07 #topic Upgrading review.rdoproject.org to software factory 2.7
15:24:10 dmsimard,
15:24:14 all yours
15:24:19 ohai
15:24:28 we have this thing on review.rdoproject.org, software factory
15:24:59 we'd like to upgrade from 2.6 to 2.7 which will enable us to do things like zuul v3, nodepool v3 and generally allow us to update things and resolve issues
15:25:14 the upgrade and the implementation of zuul v3 are two distinct things
15:25:33 initially, we'll want to upgrade to 2.7 and then later, sometime after the holidays, enable zuul v3
15:25:47 we can have both in parallel?
15:26:08 the definitive plan for the zuul v3 migration hasn't been discussed yet and it's something we'll want to talk about before we do it
15:26:31 I've reached out and briefly discussed this with mrunge, for example -- opstools doesn't have a lot of knowledge about zuulv3 so it is likely we will need to help
15:26:58 ok
15:27:30 and we had a meeting yesterday with the TripleO CI team (weshay, EmilienM, sshnaidm|afk, pabelanger, mwhahaha) to discuss what it means to migrate to zuulv3 in review.rdo (see summary here: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125735.html )
15:27:42 0/
15:28:08 for example, we agreed that the job configuration, roles and playbooks should live in openstack-infra/tripleo-ci
15:28:15 #chair weshay
15:28:16 Current chairs: amoralej apevec dmsimard jpena mary_grace rbowen weshay ykarel
15:28:26 so we can include things as-is from review.rdo with zuul v3
15:28:54 the specifics/details have not been ironed out yet but we started talking about it so there's that
15:29:16 so the .zuul in openstack-infra/tripleo-ci will be used by review.o.o and review.r.o ?
15:29:33 amoralej: yes, zuul v3 allows us to selectively include components from different projects
15:29:51 we'll be including the configuration from openstack-infra/zuul-jobs, openstack-infra/openstack-zuul-jobs, etc.
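[Editor's note: as an illustration of the cross-repo inclusion described above — a sketch only, not the actual review.rdo configuration; the tenant name and connection name are assumptions — a zuul v3 tenant config pulls job definitions from other repositories like this:]

```yaml
# Hypothetical zuul v3 tenant configuration (main.yaml) sketch.
# Job definitions, roles and playbooks living in openstack-infra/tripleo-ci
# and the shared job libraries can be loaded into the tenant directly.
- tenant:
    name: rdoproject.org
    source:
      openstack.org:              # assumed connection name
        untrusted-projects:
          - openstack-infra/zuul-jobs
          - openstack-infra/openstack-zuul-jobs
          - openstack-infra/tripleo-ci
```

[This is the mechanism that would let review.o.o and review.r.o consume the same job configuration from one place.]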
15:30:26 the general idea being to centralize everything in one place (tripleo-ci) so that it can be used across not just review.o.o and review.r.o but also further down the stream eventually
15:30:54 ok, sounds reasonable
15:31:05 so, circling back to the upgrade of software factory from 2.6 to 2.7
15:31:28 review.rdoproject.org hasn't been particularly productive recently so it's a shame we hadn't thought of that before, but we'd like to upgrade it before the holidays if we can
15:31:51 that means either doing the upgrade tonight or tomorrow evening (to have tristanC present)
15:32:03 otherwise it means we're pushing back the upgrade to january
15:32:19 for upgrade, iiuc, the options are to have parallel zuulv2 and v3 and automatic creation of legacy-* jobs as intermediary until all jobs are ported to native v3?
15:32:31 other options?
15:32:34 amoralej: again, the upgrade will not enable zuulv3 immediately
15:32:53 yes, but the day it's enabled we'll have the problem
15:33:12 yes, but this can be another meeting topic on its own :D
15:33:18 that's the migration that i'm concerned about
15:33:19 ok
15:33:22 good for me
15:34:10 Formal question: Would we be okay with updating review.rdoproject.org tonight at 23:00 UTC ? We'd plan for a one hour window with fairly brief disturbances.
15:34:27 i'm ok
15:34:28 apevec_, jpena, amoralej, weshay, rlandy|rover ^
15:34:29 +1
15:34:32 +1
15:34:38 +1
15:34:45 +1
15:34:46 rlandy|rover, ^
15:34:48 +1
15:35:02 ok, thanks
15:35:05 just a second, I'm checking if the latest backup is there
15:35:20 jpena: we would likely manually run a backup before running the upgrade
15:35:38 #agreed review.rdoproject.org to be updated tonight december 20th at 23:00UTC
15:35:45 #action dmsimard to send an email to notify about the upgrade
15:36:03 anyway, the backup is there, so +1
15:36:09 ok
15:36:26 let's move to next topic
15:36:28 \o/
15:36:37 #topic Update about status of infra
15:36:40 #chair chandankumar
15:36:40 Current chairs: amoralej apevec chandankumar dmsimard jpena mary_grace rbowen weshay ykarel
15:36:46 i added that topic
15:37:06 dmsimard, jpena so what's the status of infra?, is rdo cloud migration finished?
15:37:22 s/migration/upgrade/ ?
15:37:26 sorry
15:37:27 yeah
15:37:44 is it fully functional now?
15:37:55 and about nodepool issue?, it's finally fixed?
15:38:01 the RDO Cloud controller upgrade was finished yesterday, and compute nodes were updated today. I think the final details are being ironed out (the rdo-nodepool tenant quota is not correct, but it's not hurting us)
15:38:25 nodepool is not 100% fixed yet, we had another hang early today and we're investigating it
15:38:33 so we can expect it to work normally?
15:38:59 apart from the nodepool hangs, i mean
15:39:01 yes, unless nodepool hangs (or there's a new issue, of course)
15:39:11 so, something to be wary about
15:39:41 it took me about 1hr+ last night to properly clean up everything that had leaked throughout the upgrade
15:40:09 we had a large backlog of jobs in review.rdo and nodepool was started which actually killed RDO cloud
15:40:33 there were 75+ heat stacks to clean up, 100's of networks/orphaned ports
15:40:37 VMs in error everywhere
15:40:54 today i still had the impression that jobs were slow to dequeue
15:41:01 but maybe it's just a perception
15:41:07 nodepool got stuck
15:41:18 I'm not sure if it's been restarted yet, I haven't been involved in troubleshooting that (yet)
15:41:41 ok
15:41:45 what I want to say, though, is to be careful if there is a large zuul backlog and we're starting or restarting nodepool
15:41:50 because nodepool can and will kill RDO cloud
15:41:52 hopefully we can find the root cause soon
15:42:34 ok, i think we are done with this topic
15:42:42 jpena: you have a test instance on the openstack-nodepool tenant - are you still using that?
15:42:47 #topic Test day
15:42:54 rbowen, ^ is it yours?
15:42:58 I just wanted to thank everyone who worked so hard to put together the test cloud
15:43:03 and the people that showed up to test.
15:43:18 I expect that our turnout for this will improve as word gets out that we're doing it.
15:43:31 We had great help promoting it from all through the community.
15:43:44 For next time, we need to continue to improve our testing instructions/suggestions.
15:44:01 Anyways, thanks so much to everyone that helped with that.
15:44:06 yes, and to provide a better network option
15:44:30 Especially dmsimard jpena apevec_ and amoralej
15:44:45 rbowen: actually we got some feedback from the openstack technical committee
15:44:47 but i think it proves it can be useful to involve new people in test day
15:44:54 dmsimard: Yeah? Tell us more.
15:44:56 Carlos Goncalves proposed openstack/octavia-dashboard-distgit rpm-master: Initial commit https://review.rdoproject.org/r/11033
15:44:57 I forgot how or why we ended up discussing it there
15:45:12 but they thought it was a great idea
15:45:29 said we should have advertised it more than that
15:45:32 If we could get that rolled into the Trystack project, and get Trystack rescued in the process, that would be cool too.
15:45:34 not like we didn't try :/
15:45:52 ok, we'll saturate the various openstack lists next time. :-)
15:46:35 I'll formally reach out to them about the idea in general, maybe we could get more or less of an endorsement for the initiative
15:47:00 #action dmsimard to discuss the test day cloud idea with the TC
15:47:38 anything else about this topic?
15:48:40 i guess, no, i'll move on
15:48:41 #topic Cancelling 27th Dec and 03rd Jan meeting due to holidays
15:48:56 last week we talked about cancelling on 27th
15:49:14 on 3rd Jan i and ykarel are available
15:49:16 in my case i'm on PTO on january, 3rd
15:49:29 I am not sure how many will be there?
15:50:03 I'll still be on PTO on January 3
15:50:07 chandankumar, i think you can run the meeting on 3rd unless there are no topics
15:50:21 to avoid two weeks without meeting
15:50:26 amoralej: ack sir
15:51:04 [sensu] NEW: master.monitoring.rdoproject.org - check-delorean-newton-current @ http://tinyurl.com/y8g8m46f |#| Last check execution was 10977 seconds ago
15:51:12 unless the rest of the team prefers to cancel it
15:51:56 #agreed the meeting on december 27th will be cancelled
15:52:01 #info 27th Dec, 2017 RDO meeting is cancelled, Happy Holidays :-)
15:52:10 #agreed the meeting on january 3rd is kept
15:52:27 #action chandankumar will chair the RDO meeting on January the 3rd
15:52:47 #topic open floor
15:52:57 amoralej: one more thing, the 26th Dec office hour is also cancelled
15:53:02 due to same
15:53:09 #undo
15:53:09 Removing item from minutes:
15:53:10 Removing item from minutes: #topic open floor
15:53:37 #info the office hour on december 26th is cancelled
15:53:50 amoralej: i will send the cancellation email to the list
15:53:58 thanks chandankumar
15:54:06 #topic open floor
15:54:23 any topic not in the agenda you'd like to bring up?
15:54:46 one question related to repoexplorer
15:55:09 repoexplorer currently generates stats from git repositories
15:55:32 is there any plan to extend it to include open reviews and bugs as well as github prs and issues?
15:55:36 jpena: ^^
15:56:06 chandankumar: I'm not aware of plans to do that in the short term. fbo_ ^^ ?
15:56:23 jpena: not short term but in roadmap
15:57:15 I haven't seen anything in the backlog
15:57:29 i will open an issue on that
15:57:55 that's it
15:58:02 from my side
15:58:09 ok
15:58:17 i think we can close the meeting
15:58:29 thank you everyone
15:58:48 Happy new year to all RDOers!!
15:59:10 yay \o/
15:59:12 Merry Christmas to all RDOers/Stackers and their families :-)
15:59:48 #endmeeting