15:01:53 #startmeeting RDO meeting (2016-05-11)
15:01:53 Meeting started Wed May 11 15:01:53 2016 UTC. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:53 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:01:53 The meeting name has been set to 'rdo_meeting_(2016-05-11)'
15:02:03 o/
15:02:05 apevec: jjb is in openstack-infra :)
15:02:06 #topic roll call
15:02:08 o/
15:02:11 o/
15:02:13 \o/
15:02:15 o/
15:02:17 o/
15:02:17 o/ (sorta)
15:02:23 \m/ \m/
15:02:31 rock on!!
15:02:33 dmsimard, well, not sure why
15:02:39 apevec: thanks
15:02:54 #chair imcsk8 dmsimard EmilienM jpena apevec elmiko eggmaster
15:02:54 Current chairs: EmilienM amoralej apevec dmsimard eggmaster elmiko imcsk8 jpena
15:02:56 infra team was against adding more namespaces :)
15:03:11 o/
15:03:17 * dmsimard shrugs, lots of stuff in openstack-infra https://github.com/openstack-infra/
15:03:18 openstack/ seems more inclusive I guess
15:03:22 #chair jruzicka
15:03:22 Current chairs: EmilienM amoralej apevec dmsimard eggmaster elmiko imcsk8 jpena jruzicka
15:03:33 o/
15:03:33 let's go with first topic
15:03:49 #chair chandankumar
15:03:49 Current chairs: EmilienM amoralej apevec chandankumar dmsimard eggmaster elmiko imcsk8 jpena jruzicka
15:04:07 #topic dlrn instance migration - status
15:04:24 jpena, dmsimard, i think you were the ones with actions about this
15:04:28 Merged openstack/ironic-python-agent-distgit: Remove Babel and oslo.i18n dependencies http://review.rdoproject.org/r/1068
15:04:37 last week we were stuck on triggering promotion on ci.centos DLRN?
15:04:41 o/
15:04:42 yes
15:04:48 #chair jschuleter
15:04:48 Current chairs: EmilienM amoralej apevec chandankumar dmsimard eggmaster elmiko imcsk8 jpena jruzicka jschuleter
15:04:50 trown figured it out
15:05:05 cool, how will it work?
15:05:18 * dmsimard remembers
15:05:21 esp. for tripleoci from outside
15:05:49 internally for jobs in our promotion pipeline shouldn't be a problem, since it's all inside ci.centos
15:05:59 basically there is a cron from tripleo CI that checks if all jobs have succeeded and if it is the case, that script connects to DLRN to promote the symlink
15:06:18 trown figured a way to create a job with a security token so that it can be triggered remotely
15:06:42 is there a review in progress for that?
15:06:42 so we will keep that tripleo ci script but s/ssh/curl -x POST/ and that job will be the one to do the promotion
15:07:08 this will be for current-tripleo-ci
15:07:23 what about rdo promotion ?
15:07:54 We agreed that the job should *not* be stored in JJB due to security concerns so we repurposed the poc job into a production job :)
15:08:07 I'm not sure if trown discussed this further with derekh
15:08:17 ok, so that's still a blocker
15:08:26 for rdo promotion everything is backwards compatible, we rely on DNS to promote
15:08:35 we're connecting to trunk-primary.rdoproject.org
15:08:47 ack, so that stays ssh
15:08:50 so when we change that to the private IP, the job will follow
15:08:51 yeah
15:09:15 so we change it in public DNS to a private IP ?
15:09:31 well, I mean, that's how it is, right
15:09:35 there's no public equivalent
15:09:40 right
15:10:08 kilo is still in http://buildlogs.centos.org/centos/7/cloud/x86_64/, I'll follow up with KB and we'll also stop syncing the consistent repos and keep only current-passed-ci
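The promotion flow discussed above comes down to replacing the ssh step in the tripleo CI script with an HTTP call to a token-protected Jenkins job on ci.centos.org. Below is a minimal sketch of that call in Python; the job name and token are placeholders (the real values live in the repurposed proof-of-concept job and are deliberately not stored in JJB).

```python
import requests

# Hypothetical values only: the actual job name and token are kept out of
# JJB on purpose, so these placeholders do not reflect the real setup.
JENKINS_URL = "https://ci.centos.org"
JOB_NAME = "rdo-promote-current-tripleo"   # assumed job name
TOKEN = "shared-secret-from-the-poc-job"   # assumed trigger token

# Jenkins allows remote triggering of jobs that define a build token; this is
# the "s/ssh/curl -x POST/" change mentioned above, done from Python instead
# of curl.
resp = requests.post(
    "{0}/job/{1}/build".format(JENKINS_URL, JOB_NAME),
    params={"token": TOKEN},
)
resp.raise_for_status()
print("Promotion job queued, HTTP {0}".format(resp.status_code))
```

The existing tripleo CI cron would keep doing the "did every job pass" check and only make this call when the whole set succeeds.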
15:10:16 dmsimard: more questions!
Right now, /host/ says "Playbook run for host ", but in fact this is aggregating together all the tasks from multiple playbooks and multiple playbook runs. It seems like maybe this should be a list of playbook runs that include the host, and then /host// is the list of tasks *from that run*. Does that make sense?
15:10:23 I'll update the centos bug for getting rid of all the consistent repos
15:10:31 just somehow weird to have private IP, but -primary is not what is advertised, it's internals
15:10:41 apevec: yeah, this is just for internal usage.
15:11:22 #info DLRN inside ci.centos tl;dr still blocked, need to wait for trown to get back or we check with derekh
15:11:32 I don't think we need trown
15:11:39 well, I mean, we do.. :)
15:11:43 we DO need trown! :)
15:11:44 but not to finish that task
15:11:46 ack
15:11:50 I'll follow up with derekh
15:11:50 I've put or :)
15:11:57 thanks, action it :)
15:12:03 ok, so next topic?
15:12:12 amoralej, not before we have action written
15:12:26 #action dmsimard to follow up with derekh regarding passive trunk.rdoproject usage and promotion through the ci.centos.org job
15:12:28 yeap, sorry
15:12:41 now next topic
15:12:50 #topic separate common repo? BoF topic from pabelanger
15:13:03 pabelanger, ^ have you opened that topic on rdo-list ?
15:13:44 if not, we can move on to the next one
15:14:21 I didn't see it
15:14:39 we'll follow up that thread when it shows up
15:14:47 ok
15:14:57 #topic review.rdoproject.org disaster recovery plans?
15:15:27 I'm not the one owning that topic but I guess it stems from a relative concern about the cloud where the instance is hosted
15:15:37 that was for fbo or tristanC , but I think number80 checked and they should have backups
15:15:45 +1 on that topic
15:15:46 Which is the same concern that is leading us to migrate DLRN away
15:15:54 but I wonder if that's documented esp. recovery
15:16:03 Backup and recovery should definitely be documented
15:16:34 I'm concerned that we do now some customizations on the machine directly e.g. gerrit config for replication
15:16:58 it should be really well backed up or we use config mgmt for that
15:17:04 vs directly editing config
15:17:08 +1
15:17:32 I manually keep /etc/gerritbot fairly updated with projects (though I have git init'd the directory at least for keeping track of it)
15:17:39 ok, I'll ping softwarefactory folks
15:17:50 dmsimard, good idea
15:18:03 but that doesn't help if machine dies :)
15:18:26 #action apevec to followup with SF team about review.rdoproject.org backup/recovery plans
15:18:30 next!
15:18:49 #topic RDO Bug Triage Day on 18th and 19th May, 2016
15:19:06 we discussed last week about what to do with bugs on EOL versions
15:19:06 date was set last week, I think we stick to it
15:19:19 ah, Mail will be sent just after the meeting.
15:19:31 right, idea was to send warning week before i.e. today
15:19:44 chandankumar, excellent, do you have message text for review?
15:19:53 in etherpad or something
15:19:57 apevec, i will pass you after the meeting
15:20:10 just link it here for the record
15:20:52 apevec, i am still drafting it
15:21:18 ok, then post it on #rdo when ready
15:21:43 apevec: amoralej: not yet, what do you need me to do
15:21:49 anything else on the topic?
15:22:05 pabelanger, just open the topic on rdo-list, explaining the use case
15:22:23 then we discuss details there
15:22:57 #action chandankumar will send a mail about the bug triage day
15:23:22 #topic Update oslo libraries in fedora with python3
15:23:31 chandankumar, this is yours
15:23:36 yes
15:24:03 Actually we need to sync our rdo pkg oslo specs with the fedora ones
15:24:11 http://fedora.portingdb.xyz/grp/openstack/
15:24:20 because bugs have started being filed against the component
15:24:29 chandankumar, what is that site?
15:24:40 so just a request from the package maintainer, please sync the spec
15:25:02 apevec, it is the website maintained by fedora-infra team to track python 3 porting effort
15:25:35 apevec, https://github.com/fedora-python/portingdb
15:25:35 #info fedora.portingdb.xyz is the website maintained by fedora-infra team to track python 3 porting effort
15:25:54 how was "OpenStack Group" defined?
15:26:04 it includes PyQt4 ?!
15:26:12 apevec, no idea need to check the code.
15:26:41 ah might be dep
15:27:00 but still, weird
15:27:10 anyway, what would be the action here?
15:27:26 first sync py3 work from fedora to rpm-master ?
15:27:34 apevec, yes
15:27:42 or vice-versa
15:27:55 all oslo libraries have python3 support in rdo
15:27:56 jruzicka, ^ do you fancy cleaning up that mess, clients are your old love? :)
15:28:12 oh, some actual packaging?
15:28:36 yes, not exactly fun work, but still s* to be done
15:29:09 sure, I can do that. it's just a matter of which s* will get done first ;)
15:29:15 need to check py3 status and sync up fedora rawhide and rpm-master
15:29:32 jruzicka, where are you with docs update?
15:29:44 docs update is definitely high priority
15:29:50 here is latest draft: https://github.com/yac/website/blob/new-pkgdoc/source/documentation/rdo-packaging-draft.html.md
15:29:56 about half way there I guess
15:30:19 https://trello.com/c/ReowuP4z/105-python3
15:30:36 ok, wrap that up asap then check py3 ^ see that trello
15:30:50 ok
15:30:51 aye, aye, captain ;)
15:31:07 jruzicka, action yourself
15:32:06 #action jruzicka to py3 clients
15:32:19 apevec: I confirm review.rdoproject.org is backed up daily to an external swift container, last one is 1.2G made 2016-05-11 00:32:54
15:32:25 Merged openstack/cinder-distgit: Add dependencies for Google Cloud Storage backup driver http://review.rdoproject.org/r/1094
15:32:36 tristanC: external, as in, outside of the RCIP cloud ?
15:33:16 dmsimard: as in the same cloud... though sf just references a swift endpoint so it could easily be moved somewhere else
15:33:34 tristanC: it'd make sense to back up sf on another cloud than it is hosted on :)
15:33:54 tristanC, is the recovery procedure written down and tested ?
15:34:15 we should back up outside rcip cloud
15:34:33 and should we consider doing all changes on review.rdo config files via config mgmt?
15:34:42 apevec: the backup/restore feature does have functional tests run in CI, and the restoration process is documented here: http://softwarefactory-project.io/docs/sfmanager.html#backup-and-restore
15:35:26 "backup of all the user data store in your SF installation" - what does that include?
15:35:45 is e.g. gerritbot and gerrit replication config included?
15:36:12 git repository, replication configuration, gerrit database, etherpads, pastes, ...
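The daily backup mentioned above lands in a swift container on the same RCIP cloud; copying the newest archive to a container in a second cloud would address the concern about backing up outside the rcip cloud. Here is a rough sketch with python-swiftclient, where every endpoint, credential and container name is a placeholder rather than the real SF configuration.

```python
from swiftclient.client import Connection

# Placeholder credentials and endpoints: none of these reflect the actual
# Software Factory setup, they only illustrate the idea.
src = Connection(authurl="https://rcip.example.org:5000/v2.0",
                 user="sf-backup", key="secret", tenant_name="sf",
                 auth_version="2")
dst = Connection(authurl="https://other-cloud.example.org:5000/v2.0",
                 user="sf-backup", key="secret", tenant_name="sf-offsite",
                 auth_version="2")

container = "sf-backups"  # assumed container name

# Pick the most recent object in the source container by last_modified.
_, objects = src.get_container(container)
latest = max(objects, key=lambda obj: obj["last_modified"])

# Download the archive and write it to the offsite container. A ~1.2G archive
# fits in memory for simplicity; resp_chunk_size could be used to stream it.
headers, body = src.get_object(container, latest["name"])
dst.put_object(container, latest["name"], contents=body)
print("Copied {0} ({1} bytes) offsite".format(latest["name"], len(body)))
```

A full disaster recovery exercise would then restore that offsite copy onto a fresh instance with sfmanager, following the documentation linked above.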
15:36:20 gerritbot channels.yaml may not be included yet
15:36:56 ok, so we need to check what might be missing
15:37:14 and also look for outside swift for backups
15:37:22 might be a good exercise to simulate a disaster indeed
15:37:28 a recovery test in a different cloud would be great
15:37:49 sounds good
15:38:29 alright, so I'm expanding my action to follow up w/ SF about doing this disaster exercise
15:38:53 #action apevec followup w/ SF team about doing disaster recovery exercise for review.rdo
15:39:05 apevec++ :)
15:39:48 so, i think we can move to the next topic, right?
15:39:51 py3 was actually last topic, we should have opened open fllor
15:39:53 floor
15:39:55 #topic Chair for next meeting
15:40:07 ah yes that one :)
15:40:13 any volunteers?
15:40:33 I can do it
15:40:37 apevec, RDO triage email draft https://etherpad.openstack.org/p/rdobugtraigeemail
15:40:49 *RDO Bug
15:40:57 #info jpena will chair next week
15:41:04 #topic open floor
15:41:43 I have something to share for Open Floor
15:42:04 go for it
15:42:19 If you haven't seen it yet, take a look at ARA: http://lists.openstack.org/pipermail/openstack-infra/2016-May/004257.html
15:42:31 It's something that'll help us scale the CI that we do by making it easier to troubleshoot
15:43:12 There are only so many hours in the day; if we shave time every time we have to troubleshoot CI (and god knows we do that all the time), we'll end up more efficient
15:43:33 But it also means that troubleshooting CI runs will be simpler, more accessible and thus we will be able to onboard volunteers more easily
15:43:59 chandankumar, ah that's the announcement for the bug triage itself
15:44:28 I was thinking you'd have a draft for the EOL warning to put as a BZ comment?
15:44:32 brb
15:44:34 Merged openstack/packstack: Change name for Red Hat OpenStack when using rhsm https://review.openstack.org/315010
15:44:35 dmsimard++
15:44:45 Merged openstack/packstack: Change name for Red Hat OpenStack when using rhsm https://review.openstack.org/314473
15:45:05 Merged openstack/packstack: Add sudo to easy_install pip https://review.openstack.org/314495
15:45:30 apevec, i will be drafting the second one soon.
15:46:04 ok, so i guess we can close, right?
15:46:10 anything else for open floor?
15:46:34 3
15:46:35 2
15:46:35 1
15:46:45 #endmeeting