15:01:19 #startmeeting RDO meeting - 2018-01-31
15:01:20 Meeting started Wed Jan 31 15:01:19 2018 UTC and is due to finish in 60 minutes. The chair is jpena. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:24 The meeting name has been set to 'rdo_meeting___2018_01_31'
15:01:47 Remember you can add last-minute topics at https://etherpad.openstack.org/p/RDO-Meeting
15:01:49 #topic roll call
15:02:34 o/
15:02:40 o/
15:02:46 o/
15:02:48 o/
15:02:49 o/
15:03:09 #chair mjturek amoralej ykarel mary_grace number80
15:03:09 Current chairs: amoralej jpena mary_grace mjturek number80 ykarel
15:03:10 o/
15:03:14 #chair bcafarel
15:03:14 Current chairs: amoralej bcafarel jpena mary_grace mjturek number80 ykarel
15:05:02 v\o/
15:05:42 #chair chandankumar
15:05:42 Current chairs: amoralej bcafarel chandankumar jpena mary_grace mjturek number80 ykarel
15:05:47 let's start with the agenda
15:05:56 #topic Plan to provide VM images for Octavia
15:06:33 there have been some discussions about how we could provide VM images for Octavia (and probably others like Manila in the future)
15:06:52 bcafarel ^ do you want to summarize the current status?
15:07:23 jpena: sure
15:08:06 for octavia, patches to use distribution packages are merged now
15:08:49 so the script (provided in the openstack-octavia-diskimage-create package) can generate a VM image in a few commands (summarized in the rdo-dev thread)
15:10:01 current suggestion: create a periodic job to rebuild the images (daily), then upload them to images.rdoproject.org
15:10:54 that means, for example, packages in the image are guaranteed to be up to date (security etc.), as one of the DIB steps runs yum update
15:11:22 some nice fellow that I will not name tested the image generation and it did work fine :)
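(For context, the few commands bcafarel mentions boil down to something like the sketch below. The script name, path, and flags come from upstream Octavia's diskimage-create tooling and can vary by version, so treat this as illustrative and check the package contents first.)

    # Install the packaged image-build script discussed above
    yum -y install openstack-octavia-diskimage-create
    # Locate the script shipped by the package (exact path may vary)
    rpm -ql openstack-octavia-diskimage-create | grep diskimage-create
    # Build a CentOS-based amphora image; one of the DIB steps runs
    # 'yum update', so packages in the image are current at build time
    diskimage-create.sh -i centos -o amphora-x64-haproxy.qcow2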
15:12:35 from the octavia/tripleo point of view, that sounds good, and as mentioned before it would help Octavia adoption (generating the image can be tricky sometimes)
15:13:13 would that image be used in tripleo jobs? During some informal discussions, I remember someone mentioned a potential issue if a new image breaks the job
15:13:53 until that happens, I think publishing the image should be ok, but I'd like to hear more feedback
15:14:39 we could test the images after creating them
15:15:18 I think the original mention was in tripleo CI (beagles would know but I don't think he's around right now)
15:15:20 in the same job, i'm not sure if it's possible
15:16:27 but yes, the initial goal is more to enable octavia usage with a pre-built image
15:16:59 Yatin Karel proposed openstack/oslo-db-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11681
15:18:39 i think it's good to start by creating images and pushing them in a periodic job
15:18:53 yes, it's a good start
15:19:15 we'll need to monitor disk usage in images.rdo, but it seems manageable
15:19:59 jpena, you are thinking of a periodic job in review.r.o, right? or ci.centos.org?
15:20:06 amoralej: review.r.o
15:21:31 if nobody complains, I'd propose to send an e-mail to dev@lists with the agreed proposal, then roll up our sleeves and implement it.
15:21:38 +1
15:21:50 +1
15:22:10 I certainly won't complain about that :)
15:22:42 bcafarel: tsk tsk, you're not French enough if you can't complain about it :)
15:22:58 #agreed Build images for Octavia using a periodic job on review.rdoproject.org, store on images.rdoproject.org
15:23:02 ahah
15:23:11 number80: I'm keeping my complaints in reserve for later
15:23:31 #action jpena to send email to dev@lists with results of Octavia image discussion
15:23:45 I think we can move on
15:24:16 #topic Discuss Power CI for RDO
15:24:23 mjturek ^
15:24:24 hey!
15:24:38 welcome :)
15:24:49 thanks :) so, we're working on testing the ppc64le queens build of RDO available on RDO Trunk (starting with current-passed-ci).
15:25:01 Basically we're installing with packstack and running tempest against it.
15:25:18 all-in-one on a CentOS Power guest
15:25:34 I've listed a couple of questions in the agenda that I think would really help us better understand what the RDO community would like from us CI-wise.
15:25:54 could we go through them here? or would you rather I start this conversation on the ML?
15:26:52 let's start discussing here, if some topic needs more time we can send it to the ML
15:27:07 cool!
15:27:47 so the first question is: what does a normal RDO CI scenario look like? Is it a packstack deployment with tempest like we have? or different?
15:28:20 any reference we can look at for this?
15:28:45 mjturek: do you mean the scenarios we use in the promotion pipeline?
15:29:27 jpena: yeah I think so
15:29:51 amoralej: you know the current pipeline better than I do :)
15:30:18 i was thinking about the best way to test
15:30:19 Yatin Karel created openstack/oslo-i18n-distgit rpm-master: Adjust python2 requirements for Fedora https://review.rdoproject.org/r/11696
15:30:48 it may be interesting to report jobs to dlrn-api
15:31:17 mjturek, i'd start by running all the packstack and p-o-i scenarios
15:31:53 amoralej: p-o-i?
15:32:03 puppet-openstack-integration
15:32:26 amoralej awesome - are these scenarios defined in weirdo?
15:32:28 mjturek, look at https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/
15:32:33 TIL yum -y install python2-shade is a thing, thanks :D
15:32:38 oh cool
15:32:49 mjturek, weirdo uses the scenarios defined in the packstack and p-o-i repos
15:32:54 so, currently
15:33:19 what we do is run all those + 2 tripleo jobs after there is an upstream promotion in the tripleo pipeline
15:33:27 mjturek, do you use weirdo?
15:33:53 amoralej - we have our own setup actually
15:34:34 i'd recommend re-using the tooling used in RDO CI as much as you can
15:34:42 and improving it if needed
15:35:07 makes sense! so try to migrate over to weirdo
15:35:16 Yatin Karel proposed openstack/oslo-log-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11682
15:35:36 mjturek, do you test any tripleo deployment using tripleo-quickstart?
15:35:54 amoralej - we don't have any tripleo jobs yet actually
15:36:13 that would also be good
15:36:26 but it'd probably require more work
15:36:39 fair enough - are the tripleo jobs virtualized?
15:36:51 does ppc64le virtualization support libvirt?
15:37:12 yep, we support libvirt
15:37:26 (see the PowerKVM CI job on nova for more detail)
15:37:43 tripleo-quickstart (aka oooq) can create the required virtual machines on a host to test a tripleo deployment
15:37:46 using libvirt
15:37:48 yes, from what I remember the biggest blocker was mongodb for telemetry, but it's gone :)
15:37:54 *on ppc64le
15:38:02 awesome!
15:38:48 I don't want to take up too much time here, I know there's another topic coming up. But the gist is: start moving to weirdo and start with the packstack/p-o-i scenarios
15:38:54 ok
15:39:20 mjturek, my advice is to start checking weirdo and try to mimic the scenarios in https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/
15:39:39 amoralej - perfect, I'll come here as I hit blockers
15:39:42 you should use the same rdo trunk hash repo
15:39:47 that we use
15:39:53 you can check current-tripleo
15:39:55 link
15:40:12 http://trunk.rdoproject.org/centos7-master/current-tripleo
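(A sketch of amoralej's advice: pin to the same hashed RDO Trunk repo that the promotion pipeline uses (the current-tripleo symlink resolves to a specific hash), then run weirdo scenarios against it. The playbook name and inventory below are assumptions for illustration; check the weirdo repository for the actual scenario playbooks.)

    # Fetch the pinned trunk repo used by the promotion pipeline
    curl -o /etc/yum.repos.d/delorean.repo \
        http://trunk.rdoproject.org/centos7-master/current-tripleo/delorean.repo
    # Run a weirdo scenario (playbook name is illustrative; see the repo
    # for the packstack and puppet-openstack-integration scenario list)
    git clone https://github.com/rdo-infra/weirdo
    cd weirdo
    ansible-playbook -i hosts playbooks/packstack-scenario001.yml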
15:41:15 cool, great info
15:41:19 thanks amoralej
15:41:33 ok, poke me in #rdo if you need something
15:41:42 will do!
15:41:50 ok
15:42:16 let's move to the next topic, then
15:42:22 #topic Preparation for queens release
15:43:34 we are starting the activities for queens GA preparation
15:43:39 Yatin Karel proposed openstack/oslo-messaging-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11683
15:44:09 I've created a trello card at https://trello.com/c/4hiSJdKq/656-queens-release-preparation
15:44:28 libraries and most clients are already released
15:44:44 and requirements is frozen, iiuc
15:45:07 so it's time to start doing the final spec adjustments, especially requirements updates
15:45:55 #action maintainers to send final updates for specs with requirements updates
15:46:25 additionally, we'll use it to adjust specs to Fedora policies
15:46:44 especially moving requirements to python2- instead of python- names
15:47:07 so, expect a bunch of reviews in the queens-branching topic
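(As an illustration of the python2- renames amoralej describes; the package and spec names below are hypothetical examples, not taken from any specific review.)

    # Hypothetical example: switch a requirement in a distgit spec from
    # the python- name to the python2- name, per Fedora packaging policy
    sed -i 's/^Requires:\( *\)python-oslo-utils/Requires:\1python2-oslo-utils/' \
        python-example.spec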
15:47:37 we also need to freeze the non-openstack puppet modules for queens soon
15:47:53 EmilienM, mwhahaha ^ let me know when you think we can do it
15:48:03 if you prefer to wait until RC1, no problem
15:48:04 yeah
15:48:06 I think we can do it now
15:48:12 I don't see any ongoing work in these modules
15:48:26 amoralej: do you do it manually or do you have a script?
15:48:41 ok, if we get a promotion soon, i'll take the builds from the new promotion
15:48:44 EmilienM, manually
15:48:46 perfect
15:49:28 #info we will pin non-OpenStack puppet modules for queens with the builds in the next promotion
15:50:01 i think that was my update on the topic
15:50:18 ykarel, anything else to add?
15:50:29 amoralej, no
15:51:18 one concern i have is about synchronization to buildlogs and mirror.r.o
15:51:37 from queens on, we need to ensure that we don't synchronize common tags to the queens repos
15:51:43 Yatin Karel proposed openstack/oslo-db-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11681
15:51:45 only queens-testing and queens-release
15:51:56 number80, what's the best way to manage this?
15:52:05 file a ticket with centos?
15:52:09 in advance?
15:53:09 i guess so :)
15:53:15 jpena, i think we can move on
15:53:19 cool
15:53:25 #topic Chair for the next meeting
15:53:30 Any volunteer?
15:53:33 i can do it
15:53:42 thx amoralej :)
15:53:49 #action amoralej to chair the next meeting
15:53:54 #topic open floor
15:54:14 anything else to discuss?
15:54:46 nodepool-builder is slow, how can we fix the I/O issues there?
15:54:51 and who could maybe do that?
15:55:02 amoralej: yes, a ticket, but that will be when we set up the repos
15:55:08 which is not happening yet
15:55:16 number80, ok
15:55:32 I'm finalizing the February newsletter. If there's anything you want included, drop me a line in here or email: mthengva@redhat.com
15:55:32 we just need to remark about not syncing -common- tags
15:55:57 Yatin Karel proposed openstack/oslo-log-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11682
15:56:25 pabelanger: about nodepool-builder, I/O on ceph volumes is not superfast in RDO Cloud. The team is working on getting more SSD disks to make it faster, but that takes time
15:57:19 jpena: maybe we could PoC mounting a local HDD in the compute node too, if not SSD
15:57:36 local HDDs were not faster than ceph the last time we tested
15:57:40 or consider moving nodepool-builder to another cloud
15:58:52 (not of general interest, but I have a review for sahara which I'd really like to have merged before queens: https://review.rdoproject.org/r/#/c/10843/ )
16:00:45 ok, it's time to end the meeting, let's continue discussions after it
16:00:48 #endmeeting