15:04:10 #startmeeting RDO meeting - 2017-01-11
15:04:10 Meeting started Wed Jan 11 15:04:10 2017 UTC. The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:04:10 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:04:10 The meeting name has been set to 'rdo_meeting_-_2017-01-11'
15:04:16 #topic roll call
15:04:18 o/
15:04:18 Thanks, number80
15:04:20 o/
15:04:23 o/
15:04:24 o/
15:04:27 agenda is here: https://etherpad.openstack.org/p/RDO-Meeting
15:04:29 \o
15:04:30 o/
15:04:37 #chair leifmadsen rbowen jschlueter jpena dmsimard jruzicka
15:04:37 Current chairs: dmsimard jpena jruzicka jschlueter leifmadsen number80 rbowen
15:04:41 #chair apevec
15:04:41 Current chairs: apevec dmsimard jpena jruzicka jschlueter leifmadsen number80 rbowen
15:05:04 please update agenda if you have any items that is not there
15:05:19 I forgot I've conflict, can please someone take over chairing today?
15:05:22 dmsimard: ah he updated the spec i need to recheck
15:05:26 \o/
15:05:29 apevec: taking it
15:05:32 #chair chandankumar
15:05:32 Current chairs: apevec chandankumar dmsimard jpena jruzicka jschlueter leifmadsen number80 rbowen
15:05:33 thanks
15:05:47 let's move on, then
15:06:00 #topic DLRN instance running out of space
15:06:06 that's mine
15:06:06 jpena?
15:06:26 o/
15:06:27 o/
15:06:49 we are having space issues every few days with the DLRN instances. We keep adding patches, but today most of the allocated space is used by the centos-master and centos-newton workers, which are hard to stop to run a purge
15:07:07 I'd like to have a more permanent fix, which requires two things:
15:07:31 o/
15:07:35 a) Stop the workers for some time (maybe a weekend?) and run a long purge to keep just the latest 60 days of commits
15:08:04 b) Add a cron job to purge old commits every day, so we avoid a long stop and keep storage use (mostly) consistent
15:08:23 for b) I have a puppet-dlrn review going on, but I'd need agreement on a)
15:08:31 Is that really a permanent fix? Or does that just put off the crisis a little longer?
15:08:52 rbowen: a cron would ensure that we keep no longer than x days worth of packages which is sane
15:08:57 rbowen: it's quite permanent. We are keeping 4 months of commits today
15:08:59 60 days of commits is something we can provide storage size estimate so yes
15:09:10 we can modify x as need be, make it more or less
15:09:12 ok, cool. Thanks for that clarification.
15:09:13 jpena: ack for a) and b)
15:09:23 let's vote then
15:09:28 basically sounds like you'd have to double the number of packages available to run into the issue again?
15:09:46 jpena: I'm okay for a and b, I sort of forgot why it wasn't that way in the first place -- lack of confidence ?
15:09:49 leifmadsen: yes.
That, or double the number of active workers
15:09:49 * number80 suggests that people puts their name on the etherpad color box
15:10:01 #chair flepied1 trown dougbtv
15:10:01 Current chairs: apevec chandankumar dmsimard dougbtv flepied1 jpena jruzicka jschlueter leifmadsen number80 rbowen trown
15:10:05 dmsimard: yes, we were not too confident, but now the code is quite tested
15:10:10 okay
15:10:25 +1
15:10:26 also, it's worth mentioning that there will be quite a bigger overlap between the release of ocata, start of pike and EOL of mitaka
15:10:31 due to the ocata short cycle
15:10:53 +1 for getting to b in the long run
15:10:54 this will worsen the storage issue -- perhaps the RDO cloud will come to finally save us all by then but who knows
15:11:07 for stable releases, we can even keep less commits if we need to save somespace
15:11:20 should we just account for that now?
15:11:27 if we're going to do a long down time anyways
15:11:30 number80: not really needed. If we just keep x days of commits, stable branches tend to have fewer commits over time
15:11:34 stable == 30, unstable == 60?
15:11:41 ack
15:11:46 works for me
15:11:48 what jpena said, there are less commits on stable releases
15:11:54 e.g. mitaka is only using 5 GB today
15:11:59 vs 80+
15:12:07 Awesome.
15:12:09 so keeping 60 days of mitaka and 60 days of master is totally different
15:12:15 makes sense
15:12:25 well, +1 from me :)
15:12:29 sounds like the right approach
15:12:32 jpena: was the only thing that fedora-review reported about the python-pulp package the license?
15:12:35 I think even the docs mention 60 days somewhere
15:12:58 jpena: what's the current backlog ... how many days out are we currently?
15:13:10 jschlueter: around 120 days
15:13:36 apevec: was the only thing that fedora-review reported about the python-pulp package the license?
15:13:59 jpena: so need 60 days of pruning 2 days each to get caught up with lower blockage ... is that right?
15:14:43 jschlueter: or one long purge where we catch up, although that means stopping the workers during a weekend
15:15:19 jpena: so the thing about stopping the workers for the weekend I worry about
15:15:22 jpena: with 60 days to catch up it might make sense to do a weekend stoppage in my mind ... so +1
15:15:24 radez: I'll update you on bugzilla (Alan's on another meeting and we have one running right now)
15:15:27 is catching up to the builds we missed
15:15:34 although there may be less commits during the weekend
15:15:50 dmsimard: we can run with --head-only right after the purge
15:16:01 ... and we are coming up on critical points right now?
15:16:10 true
15:16:46 jschlueter: yes, we keep getting alarms of storage going over 85%
15:17:35 c) run a purge to get us below alarms threshold, put in place cron job to purge X per day up to the point we get down to 60 days left?
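
For context, a minimal sketch of the daily purge job being discussed. Only the dlrn-purge command name and the 60-day retention figure come from the log; the cron schedule, user, paths and flag names (--config-file, --older-than, --dry-run, -y) are assumptions about the DLRN tooling, not details confirmed in this meeting.

    # /etc/cron.d/dlrn-purge -- hypothetical daily cleanup keeping ~60 days of commits
    # (schedule, user, paths and flag names are assumptions; check the deployed DLRN version)
    0 3 * * * dlrn /usr/bin/dlrn-purge --config-file /home/dlrn/dlrn/projects.ini --older-than 60 -y >> /var/log/dlrn-purge.log 2>&1

    # One-off catch-up purge with the workers stopped (proposal a), dry-run first
    # to gauge how much would be removed before committing to it
    dlrn-purge --config-file projects.ini --older-than 60 --dry-run
    dlrn-purge --config-file projects.ini --older-than 60 -y

Either way, running the workers with --head-only right after the purge, as suggested below, would limit the backlog of missed builds.
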
15:17:58 that might be what is already on table but trying to understand
15:19:05 purge also implies a slowdown, that's why jpena suggested a) a big purge on week-end and b) smaller purges daily
15:19:15 (if I understood correctly)
15:20:18 c) is doable, although we'd have to adjust the command-line for the purge commands every day
15:20:22 o/
15:20:29 we specify how many days to keep
15:21:02 * trown thinks we should go for the full purge now, and set up cron
15:21:11 now being on the weekend of course :)
15:21:31 dlrn-purge --days $(random)
15:21:49 yes, we definitely can't stop the dlrn workers on weekdays for any extended period of time
15:22:01 * number80 says that we reached 1/3 of the meeting so now time to decide
15:22:36 I'd say let's stick to initial proposal and then optimize if needed
15:22:59 * jschlueter votes for getting us under alarms, and keeping it, but not disrupting the current cycle too much
15:23:14 +1 original proposal
15:23:39 * jschlueter goes back to being quiet
15:23:41 yes, the plan is not to put DLRN instances down for too long
15:23:47 +1 original proposal
15:23:48 jschlueter: you're discouraged to do that!
15:24:03 :-)
15:24:52 ok
15:25:32 #agreed jpena proposal of purging DLRN instance using cron tasks is approved
15:25:58 ok, I'll send an email to rdo-list to communicate
15:26:22 ack, maybe copying openstack-dev since it may impact some upstream CI
15:26:30 not really
15:26:39 the repos won't down
15:26:48 dmsimard: are they all using mirrors,now?
15:26:51 there just won't be any new repos for some duration
15:26:57 trunk.rdoproject.org availability is not affected
15:26:58 but no new packages, that might affect tripleo for example
15:27:21 anyway, it's good to inform
15:27:45 well, since it's temporary (reasonable temporary), it's fine to proceed anyway (well, IMHO)
15:27:45 ya, if it happened during the week it would affect tripleo
15:28:20 let's move to the next topic and let jpena do as he sees fit :)
15:28:35 thanks jpena
15:28:41 #topic Add boost159 in CBS
15:29:08 Ok, review https://bugzilla.redhat.com/show_bug.cgi?id=1391444 has been stuck for a while, and it's blocking facter3 and mongodb3 update
15:29:41 proposal: add boost159 in -candidate tag in CBS as prereview
15:29:55 number80: did you check my comment (#9)?
15:30:00 It won't be in any shippable repo until review is done
15:30:53 jpena: ah yes, I need to fix it (Xmas vacations made me forgot that or the booze)
15:31:48 Dan Radez created rdoinfo: Adding Congress https://review.rdoproject.org/r/4270
15:33:55 anyone ok with a preview CBS builds (or against it) ?
15:34:14 abstain
15:34:19 facter3 should be in RDO ocata, and we're still blocked
15:34:47 I'm fine with it. In my tests it didn't conflict with the existing packages
15:34:51 (the bjam issue is not blocking us since we're not using it but will fix it as it may impact later packages)
15:37:04 this was also discussed with Alan earlier.
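
For reference, a hedged sketch of what the boost159 preview build proposed above might look like with the koji-based cbs client. Only CBS, the -candidate tag idea and bug 1391444 come from the log; the build target, tag name and package NVR below are illustrative assumptions.

    # Build the SRPM against a CloudSIG target (target name and NVR are assumptions)
    cbs build cloud7-openstack-ocata-el7 boost159-1.59.0-1.el7.src.rpm
    # If the target's destination tag is not already the -candidate tag, keep the
    # build there explicitly so nothing ships until the review in bug 1391444 is done
    cbs tag-build cloud7-openstack-ocata-candidate boost159-1.59.0-1.el7
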
15:37:15 I guess we can move forward
15:37:37 #agreed add preview build of boost159 in CBS under -candidate tag
15:38:03 #topic Enable automated build on mitaka-rdo
15:38:20 quick info point, I'll enable automated build on mitaka-rdo
15:38:37 #info automated builds will be enabled for mitaka-rdo branch this week
15:38:50 so more people could help maintaining mitaka builds
15:39:02 cool
15:39:21 so far, no big issue with newton automation, except that I have to renew certificates every 6 months :)
15:39:56 (which I did last week)
15:40:18 next
15:40:30 #topic alternate arch status
15:41:10 we need to update nodejs and phantomjs as they don't build in aarch64, I had some connectivity issues that prevented me working on this
15:41:40 xmengd also told me that he's working with CentOS core team to get ppc64le builders operational
15:41:43 EOF
15:41:58 #topic OVS 2.6 pending update
15:42:03 ok, big chunk here
15:42:43 dmsimard: any update from CI about OVS 2.6?
15:43:05 packstack and p-o-i jobs that runs from 2.6 from scratch work fine
15:43:16 the big question mark is tripleo and upgrades from newton to ocata
15:43:36 trown: ^
15:43:45 there's currently efforts in tripleo upstream to test upgrades from newton to ocata in the gate
15:44:06 https://review.openstack.org/#/c/404831/
15:44:54 The concerns here are around time, priority and available resources to get this done well and in time considering everyone already has their hands full with the impending release of ocata
15:44:58 ya, I have not had time to look at it
15:45:08 I've raised some flags to reach out for help
15:45:21 ok, so we need to keep an eye on that
15:45:33 should we add it to next week meeting to follow progress?
15:45:35 number80: I think you also told me there was a mariadb update right ?
15:45:52 dmsimard: yes, but patch is not merged yet
15:46:01 from what version to what version are we going, do you know ?
15:46:05 10.1.20
15:46:07 is it significant ?
15:46:51 last we have is 10.1.18 and it should not be a big deal but it'll go through pending first
15:46:58 okay
15:47:05 also, just making sure
15:47:09 ovs 2.6.1 is ocata-only, right ?
15:47:16 I assume so
15:47:25 ok
15:48:00 ok, I'll add OVS 2.6 follow-up for next week
15:48:07 #topic 16 days since last promotion of RDO Master Trunk
15:48:12 jschlueter?
15:48:58 We are on the verge of promotion, infrastructure problems have prevented us from promoting
15:49:03 as is my understanding
15:49:16 we are currently re-running the same hash in the pipeline to get that done
15:49:20 weshay: correct? ^
15:50:22 yes, "almost there" !
15:50:33 https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/1029/ has one last job running
15:50:43 on hash from Friday
15:50:56 ok, that's excellent
15:51:19 Lars Kellogg-Stedman created puppet/puppet-collectd-distgit: fix installed directory name + upstream sync https://review.rdoproject.org/r/4271
15:51:46 btw, for people who wants to follow RDO master statuses:
15:51:47 https://dashboards.rdoproject.org/rdo-dev
15:52:08 Mitaka does not look good btw :)
15:52:34 I ran it yesterday
15:52:53 anything I can fix?
15:53:18 * number80 will be giving some time on mitaka
15:53:28 s/on/to/
15:53:34 I want to make some changes to the puppet-collectd spec file. I used 'rdopkg clone' to get the spec file, modified it, and used 'rdopkg review-spec' to submit the changes for review. Was that the correct workflow?
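
A sketch of the rdopkg workflow described in the question above. The commands rdopkg clone and rdopkg review-spec are quoted from the question itself; the package name comes from the log, while the editor step and commit message are only filled in for illustration, assuming review-spec submits the committed distgit change to review.rdoproject.org as the questioner describes.

    rdopkg clone puppet-collectd            # fetch the distgit repository from RDO
    cd puppet-collectd
    $EDITOR puppet-collectd.spec            # make the spec changes
    git commit -a -m "Fix installed directory name, sync with upstream"
    rdopkg review-spec                      # submit the distgit change for review

The puppet-collectd-distgit review linked earlier in the log (https://review.rdoproject.org/r/4271) looks like the result of exactly this workflow.
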
15:53:36 I think part of the problem is that we're inconsistent in mitaka
15:53:49 Tim Rozet created puppet/puppet-systemd-distgit: Initial spec for puppet-systemd https://review.rdoproject.org/r/4272
15:53:53 number80: https://trunk.rdoproject.org/centos7-mitaka/status_report.html
15:54:04 there is mitaka ooo issue https://bugs.launchpad.net/tripleo/+bug/1654615
15:54:22 ok
15:54:34 looks like there is an issue with seabios from 7.3
15:54:43 I'll look over the FTBFS
15:54:44 last consistent in mitaka is jan 4th
15:54:46 fyi, last job for master promotion passed
15:54:47 details unclear esp. why it works for > mitaka
15:54:55 only image uploading is pending
15:55:07 YES!
15:55:13 amoralej: god I hope the image upload doesn't fail due to rsync issue
15:55:13 good
15:55:19 weshay, you can do your dance :)
15:55:50 seems that we're still on the green side for this cycle
15:55:50 and cross fingers for the image u/l
15:57:10 Awesome.
15:57:16 number80: sorry am in second meeting but wanted to bring topic up and see how we were doing for it...
15:57:34 jschlueter: we're almost having a master promotion as you may read
15:57:55 ok, little time left so no open floor
15:58:01 #topic next week chair
15:58:08 Juan Antonio Osorio created openstack/openstack-puppet-modules-distgit: Add puppet-ipaclient https://review.rdoproject.org/r/4273
15:58:09 who wants to chair next week?
15:59:07 I can do it
15:59:17 thanks :)
15:59:39 Thanks for everyone who joined today and see you next week!
15:59:40 promoted \o/ https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/1029/
15:59:46 #endmeeting