15:01:37 #startmeeting RDO meeting - 2017-11-29
15:01:37 Meeting started Wed Nov 29 15:01:37 2017 UTC. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:37 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:01:37 The meeting name has been set to 'rdo_meeting_-_2017-11-29'
15:01:38 Meeting started Wed Nov 29 15:01:37 2017 UTC and is due to finish in 60 minutes. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:42 The meeting name has been set to 'rdo_meeting___2017_11_29'
15:01:57 #topic roll call
15:02:07 o/
15:02:10 o/
15:02:25 o/
15:02:44 #chair dmsimard ykarel mary_grace jpena
15:02:44 Current chairs: amoralej dmsimard jpena mary_grace ykarel
15:02:45 Current chairs: amoralej dmsimard jpena mary_grace ykarel
15:03:34 o/
15:03:42 o/
15:04:03 #chair number80 rbowen
15:04:03 Current chairs: amoralej dmsimard jpena mary_grace number80 rbowen ykarel
15:04:04 Current chairs: amoralej dmsimard jpena mary_grace number80 rbowen ykarel
15:04:50 o/
15:05:12 #chair jruzicka
15:05:12 Current chairs: amoralej dmsimard jpena jruzicka mary_grace number80 rbowen ykarel
15:05:13 Current chairs: amoralej dmsimard jpena jruzicka mary_grace number80 rbowen ykarel
15:05:19 ok, let's start with the first topic
15:05:42 #topic Test day: "trystack" for the duration of the test day
15:05:49 who sent that?
15:05:55 dmsimard and I, I believe.
15:05:59 hi
15:06:17 Just to confirm that this is indeed what we're planning to do, and that folks are on board with this.
15:06:26 So for the next test day we'll be doing an experimental thing that we hope will be successful
15:06:44 and I'd like everyone to make a lot of buzz around this, tell your friends about it, blog about it, etc.
15:07:05 dmsimard, let me know how i can help
15:07:17 so do we have a proper howto for that?
15:07:17 to deploy it or whatever
15:07:19 We also need to come up with test scenarios that we encourage people to go through on this test cloud.
15:07:30 For the next test day, in addition to the usual test scenarios where we encourage people to try to install openstack on their own, RDO will provide a test cloud so that people can test it from a user perspective
15:08:11 This is cool for a lot of reasons: 1) installing openstack is boring, 2) they can test things within minutes instead of potentially hours, 3) people don't necessarily have the hardware to install it on
15:08:28 > True
15:08:54 #info in the next test day, RDO will provide a test cloud so that people can perform user test cases in addition to RDO deployment tests
15:08:56 We're planning to do this first installation with the Packstack installer with as many projects as possible (magnum, etc.), on one single bare metal node with appropriate specifications
15:09:28 If this experiment is successful, it would be great to try different installers throughout the cycle for each test day
15:09:55 dmsimard, so what about tenants
15:10:06 we'll create tenants in advance for users that request it?
15:10:11 or self-service?
15:10:12 However this means involving the core teams of the installers not only to deploy the server for the duration of the test day but also to help troubleshoot issues throughout the test days
15:10:44 I think we'll want to create a tenant on-demand to avoid abuse, people interested in testing would let us know in advance if possible
15:10:57 ok
15:11:16 We can probably write a small playbook that'd create the tenant/users and then just a list of user/passwords to iterate through
15:11:40 self-service + approval would be ideal, but i'm not aware of any out-of-the-box tool to do that
15:11:41 Then, we need a simple procedure to be written
15:11:47 Since the first installer is Packstack, I'd like to propose dmsimard, amoralej and jpena as core folks for this first experiment :)
15:12:01 +1
15:12:02 amoralej: right
15:12:04 otherwise, people will show up and by the time we create tenants, they're gone
15:12:14 number80: creating a tenant/user is easy..
15:12:16 (and I know this time can be very short)
15:12:43 if people register in advance, we could pre-create the users/tenants for them, then send the credentials on the test day
15:12:45 dmsimard: yeah, but it's better to ask people to register in advance (and fix for others on-demand)
15:12:56 yes
15:13:14 What mechanism do we want to use for folks to register?
15:13:38 rbowen_: I'd probably centralize everything in an etherpad
15:13:58 rbowen_: in the header you'd have a list of people participating (name, email), we'd create access and send them the credentials by email
15:14:14 *nods*
15:14:15 rbowen_: and the pad would also contain feedback for the experiment (issues from an operational perspective, bugs found by users, etc.)
15:14:33 because let's not forget that this is also a good opportunity to test the installer in question
15:15:13 rbowen_: I can draft the etherpad format
15:15:24 ok.
15:15:24 if we're okay with that approach
15:15:27 Pradeep Kilambi created openstack/gnocchi-distgit rpm-master: Rename openstack-gnocchi to gnocchi https://review.rdoproject.org/r/10736
15:15:41 And I'll turn the above transcript into a blog post draft, and run that by you.
15:15:55 ok, so we'll advertise it on the users and devs mailing lists and ask people to add themselves to the etherpad
15:16:09 rbowen_: we can use that same text to put a brief paragraph in the newsletter as well.
15:16:17 amoralej: right, I'd try to be as loud as I can everywhere
15:16:25 yeah, ok
15:16:26 twitter, blogs, mailing lists
15:16:37 I'll even post something to openstack-dev/openstack-operators
15:16:56 The scheduled date is Dec 13/14, so we have a few days to get the word out.
15:17:02 how much capacity will we have?
15:17:15 amoralej: looking at a 64GB RAM machine with sufficient disk space
15:17:17 could users deploy aio packstack in an instance in the tenant?
15:17:21 mmm
15:17:26 dmsimard, packstack is still not deprecated, right? last sunday in a meetup a guy was telling me that he heard packstack is being deprecated
15:17:26 Dec 13/14? or 14/15?
15:17:33 Um ... checking
15:17:37 that's not much for users to deploy packstack in a tenant
15:17:44 14/15. Thanks.
15:17:49 I'm not sure the point is to allow people to deploy tripleo/packstack in that environment
15:18:08 It's not a bad idea by any means, but it has an impact on the budget
15:18:16 I'll update http://rdoproject.org/testday/ today, too.
15:18:29 dmsimard: Let's start small, see how it goes, and then work up.
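The "small playbook" to pre-create tenants/users mentioned above was only sketched verbally during the meeting; as a rough, non-authoritative illustration of the idea (not the tooling the team actually built), it could look something like this with the openstacksdk Python library. The cloud name, the testers file, the role name and the quota values below are assumptions made for the example.

```python
# Rough sketch only. Assumptions: a clouds.yaml entry named "testday" with admin
# credentials, and a "testers.txt" file with one "name password" pair per line.
# Pre-creates a project and a user per registered tester, grants the member role,
# and prints the credentials so they can be emailed before the test day starts.
import openstack

conn = openstack.connect(cloud="testday")
role = conn.identity.find_role("member")  # may be "_member_" on older releases

with open("testers.txt") as testers:
    for line in testers:
        name, password = line.split()
        project = conn.identity.create_project(
            name="testday-" + name, description="RDO test day tenant")
        user = conn.identity.create_user(
            name=name, password=password, default_project_id=project.id)
        conn.identity.assign_project_role_to_user(project, user, role)
        # Keep the single all-in-one node usable for everyone by capping quotas
        # per project (illustrative values).
        conn.set_compute_quotas(project.id, instances=2, cores=4, ram=8192)
        print(name, project.name, password)
```

Whether this ends up as a playbook, a script, or manual steps is up to whoever builds the test cloud; the point of the discussion above is simply to have credentials ready to send out before the test day begins.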
15:18:43 well, a single 64 GB machine with an all-in-one setup will leave enough room for about 10-20 VMs with 4 GB of RAM, not more
15:18:44 dmsimard, it's a way to allow testing both user and admin use cases in the same environment
15:18:45 like, I even considered limiting the flavors available
15:18:50 but only aio
15:18:55 so that there's no abuse and room for everyone
15:19:07 and with 64GB, there are not many options
15:19:39 Let's start with 64GB, if we're too successful we can consider bumping it up to 128GB for the next one :)
15:19:49 ok, so it sounds like a plan
15:19:57 dmsimard: let me know how I can help amplify via RTs, blog round-up, etc. I'll keep an eye out, but if I miss anything, don't hesitate to reach out.
15:20:04 mary_grace: ack
15:20:32 that wraps up this topic
15:20:35 ykarel, it's not deprecated
15:20:50 but it's recommended only for PoC use cases
15:21:13 ykarel: people tend to confuse the state of packstack upstream and in OSP
15:21:23 yeah
15:21:33 amoralej, ok, that's the same thing i told the guy :)
15:21:34 ykarel: I'm not sure what the state of packstack in OSP is anymore, but it's probably not supported for production use case scenarios
15:22:01 for sure not, it's in a separate channel, iirc
15:22:01 dmsimard, ack
15:22:25 ok, so are we done with this topic?
15:22:26 Packstack upstream is not "owned" by anyone other than the openstack community -- there's definitely not a lot of new features going in but it's very much kept in working condition
15:22:38 I think so
15:23:01 dmsimard, we could create another etherpad to coordinate the deployment of the test cloud
15:23:26 amoralej: sure
15:24:28 #action dmsimard will create an etherpad for user registration to the "test day cloud"
15:24:47 #undo
15:24:47 Removing item from minutes: ACTION by amoralej at 15:24:28 : dmsimard will create an etherpad for user registration to the "test day cloud"
15:24:48 Removing item from minutes: #action dmsimard will create an etherpad for user registration to the "test day cloud"
15:24:58 #action dmsimard will create an etherpad for user registration and feedback reports for the "test day cloud"
15:24:58 dmsimard, go ahead :)
15:25:16 #action dmsimard to advertise the experiment on openstack-dev and openstack-operators
15:25:38 #action everyone to be loud about this, it's very cool and it would be nice if it's successful so we can keep doing it in the future
15:25:49 #action rbowen to draft blog post with information from this discussion, re test day cloud.
15:26:17 #action amoralej jpena dmsimard to build test cloud
15:26:25 ok, i think we are done
15:26:29 let's move on
15:26:54 #topic Revisiting RDO Technical Definition of Done
15:26:59 #info https://lists.rdoproject.org/pipermail/dev/2017-November/008412.html
15:27:06 I've already posted my opinion on the thread
15:27:32 just to summarize, since queens tripleo can only deploy openstack in containers
15:27:58 Beyond installers picking up problems with *packaging* such as missing packages (which sometimes happens due to missing tags, etc.) or bad mirrors, I would not block the release of RDO
15:28:21 so we are reconsidering what the definition of done should be, as we are not
15:28:59 Well, I'm fine with that
15:28:59 dmsimard, but by "installers" you mean packstack/puppet/tripleo/kolla/...
15:29:08 amoralej: yes
15:29:19 but we need to run tripleo and other installers' tests *and* fix issues during the cycle
15:29:21 i mean, with the current proposal packstack/p-o-i are blocking
15:29:35 amoralej: the likelihood of issues being related to packaging beyond RC status is very very low
15:29:48 amoralej: therefore if the mirror is good, we should release
15:30:01 e.g: tripleo containers using packages not supported by RDO can cause issues later
15:30:24 number80: we're always testing all installers throughout the whole cycle
15:30:52 makes sense to me. Also, installers tend to release their final versions after the OpenStack release, so it makes sense not to block on TripleO
15:30:55 Yeah, but what I don't want to see is TripleO folks running their tests on their side, not reporting issues to us
15:31:19 Nightmare would be them setting up a copr to build packages there to work around issues
15:31:45 lol ?
15:31:52 It should be clear that's a no-no for us, if there are issues => fix must land upstream, then in RDO, then in installer X
15:31:59 but a user can create their own containers and deploy from them
15:32:04 i'd like to validate that case
15:32:08 +1
15:32:45 number80: the circumstances of a patch being carried in RDO before it lands upstream are probably not explicit but there must be a clear consensus and an exceptional situation for that to happen
15:32:47 in the past, both kolla and tripleo RC releases at GA have been good enough to validate
15:33:18 number80: tripleo needing a patch in RDO because it hasn't been landed and tagged upstream would be quite exceptional and it's a case-by-case basis
15:33:47 but it's not like we can't do hotfixes
15:34:11 if this kind of issue shows up, let's fix them ad-hoc as hotfixes
15:34:12 dmsimard: I was thinking about all kinds of fixes, but TripleO should use our packages, not forks, not packages that are not provided by us
15:34:23 at that point, RDO has done its job of shipping the signed tarballs as released upstream
15:34:39 number80: if tripleo ends up using packaged forks, then we have other problems
15:34:51 and I very much hope they don't do that
15:35:19 i don't think they will, i have not seen anything that can lead us to consider that, tbh
15:35:39 dmsimard: yes, and I'd like to state in the DoD that installers are not blockers for GA, but they will be run during the whole cycle and issues will get fixed
15:36:12 number80: yes, let's formalize that in the DoD.
15:36:20 DoD is not just GA criteria but also the process to reach GA
15:36:20 "installer are not blockers" is to ambiguous, IMHO
15:36:29 s/to/too/
15:36:46 amoralej: I'm open to any better wording, I didn't give much thought to that :)
15:36:57 Bugfixes can be released as ad-hoc hotfixes after release, not unlike how upstream freezes the branches and eventually allows backports to stable branches
15:37:04 i mean, if we are saying we need to pass packstack/p-o-i to promote, we are assuming they are blocking
15:37:06 Yeah
15:37:22 and they are installers
15:37:32 promotion of the trunk repositories during the cycle != release
15:37:40 the promotion is really a process to allow themselves to not break their gates
15:37:57 dmsimard, the cloudsig promotion
15:38:06 oh, hmm
15:38:06 from -testing to -release is also gated
15:38:14 using weirdo
15:38:28 and tripleo (non-voting right now)
15:38:34 I think that is for stable releases post-GA
15:38:42 and yes, we should keep that clear
15:38:56 in pike we used it for initial tagging also
15:39:11 so, we need to validate that packages are sane and work
15:39:12 amoralej: the process of gating updates to the stable mirrors allows us to prevent regressions and avoid breaking stable users, and it is required to do that
15:39:30 amoralej: yes, we used installers for pike, and you saw how long it took us to release it
15:39:42 dmsimard, but it was a tooling issue
15:39:49 part was automation issues, but there were a lot of things like timeouts and installer bugs
15:40:01 we've always used installers to validate the repos before release, haven't we?
15:40:07 in previous releases we also validated it using the pipeline in ci.centos.org
15:40:12 yes
15:40:32 yes, and they've been useful in finding those odd packages that didn't get tagged and things like that
15:40:41 ok, so let's refocus a bit
15:41:03 it's not like we don't want to use installers as criteria for GA
15:41:17 it's just that we want to use a more limited set, that allows us to do basic checks
15:41:28 other than a full-fledged, multi-node, HA installation
15:41:38 especially because tripleo requires containers which are not an RDO deliverable
15:41:51 exactly
15:42:17 so, IMO, i'd keep non-container based installers as criteria
15:42:39 + a job to validate the container-build process from the packages in the cloudsig repo
15:43:11 I would specify the criteria as "basic installers" (specifying which ones they are) + the container-build process you mention. We can always change what basic installers means to us in future releases
15:43:45 jpena: even a limited set is awkward
15:43:56 this limited set could be blocking due to issues unrelated to RDO and that's what's concerning me
15:44:27 dmsimard, that has always been the case in previous releases
15:44:30 that would be very rare
15:44:30 It's almost.. like, we'll run these installer jobs and we'll, at our discretion, decide whether to release or not, depending on the results
15:45:25 Were we not in agreement that using deployment projects that have the "cycle-with-trailing" model to vet the release was an issue earlier? I'm confused
15:45:44 dmsimard, not afaik
15:45:54 OpenStack releases on day 0, deployment projects can release on day 14, which means they are not ready
15:46:18 then, what's the point of releasing RDO on day 0
15:46:20 If we use deployment projects to block the release of RDO, it means we can no longer guarantee a day 0 release
15:46:28 i agree
15:46:39 but i prefer to provide repos that can be deployable
15:46:52 at least in default use cases, even if we have to wait
15:47:04 What would be the point in releasing a set of repos that nobody can use?
15:47:04 in fact, it has not been an issue before
15:47:11 It's not a bad thing if we want to do that, but one of the objectives (was it an OKR? I forget) was to release RDO within two hours of upstream
15:47:13 although, potentially, it may be
15:47:31 but with an acceptable quality
15:47:39 jpena: Does OpenStack delay its release if deployment projects can't deploy it?
15:47:52 We package OpenStack and the tarballs aren't going to change
15:47:54 yes if it doesn't work on devstack(tm)
15:48:18 jpena: right, but day 0 for us starts when they tag the release
15:48:37 I'd rather we push this requirement upstream if we're going to block the release
15:48:49 Worth discussing with the community/TC
15:49:02 (sorry, must be off immediately for family matters)
15:49:09 people don't deploy devstack in production
15:49:11 number80, ok
15:49:42 ok I'll take the action to ask the community about it
15:50:03 dmsimard: my point (besides the pun) was: upstream won't release something that can't be at least deployed with devstack. Once we release, it should at least be deployable somehow
15:51:04 jpena: I don't know, I wonder if that's actually documented somewhere
15:51:11 jpena: do they have a definition of done?
15:51:25 dmsimard: I guess not, but devstack is part of the gate
15:51:44 ok, I'll reach out and find out
15:52:09 #action dmsimard to reach out to the community about definition of done and discuss cycle-with-trailing
15:52:15 #undo
15:52:15 Removing item from minutes: ACTION by dmsimard at 15:52:09 : dmsimard to reach out to the community about definition of done and discuss cycle-with-trailing
15:52:16 Removing item from minutes: #action dmsimard to reach out to the community about definition of done and discuss cycle-with-trailing
15:52:20 #action dmsimard to reach out to the OpenStack community about definition of done and discuss cycle-with-trailing
15:52:34 we can keep the discussion in the mailing list thread
15:52:53 so, another thing to consider in the DoD
15:53:05 which is again awkward due to cycle-with-trailing
15:53:12 is that we actually carry packages for tripleo
15:53:26 so if we release before tripleo, we're actually releasing unreleased software
15:53:35 very chicken-and-egg
15:53:36 we release RC releases
15:53:47 they always release before GA
15:53:51 amoralej: right, but otherwise everything else is released
15:53:57 amoralej: nova, keystone, etc
15:54:00 yes
15:54:38 so, there is a time gap between "main GA" and "cycle-with-trailing" where we are in a weird situation
15:54:45 with GA + RC builds
15:54:50 that's right
15:55:04 but RDO includes all of them
15:55:19 we can't do a partial release with only GA, IMO
15:55:28 so we have two options
15:55:50 1. Create and validate GA + RC (for cycle-with-trailing)
15:56:01 2. Wait for cycle-with-trailing
15:56:19 currently we are doing 1
15:56:51 and i'd say it has been working pretty well in the last releases
15:56:57 correct me if i'm wrong
15:58:03 we're running out of time, shall we move to the last item in the agenda?
15:58:06 we are almost out of time
15:58:07 yes
15:58:27 #action everyone to keep discussion about DoD in the mailing list thread
15:58:45 #topic Dependencies automation status
15:58:57 #info https://review.rdoproject.org/etherpad/p/deps-automation
15:59:20 so, this is just to make everyone aware of the activities we are doing to automate dependencies management
15:59:21 Merged openstack/tempest-distgit pike-rdo: Disable warning-is-error for sphinx https://review.rdoproject.org/r/10734
15:59:59 the goal is to implement some automation for the processes to update dependencies in the RDO repo
16:00:33 in the etherpad there are links to trello cards and reviews
16:00:55 so if anyone is interested in the topic, don't hesitate to join #rdo and ask
16:01:16 that's it for this topic
16:01:26 #topic open floor
16:01:34 who wants to chair next week?
16:01:37 any volunteers?
16:02:12 dmsimard: ping, hey could you make me a pike tree under the aarch64 directory?
16:02:23 note next Wednesday is a bank holiday in Spain, so amoralej and I will be off
16:02:25 radez: yeah.
16:02:30 dmsimard: thx!
16:02:38 in fact, i'll be out the whole week :)
16:02:43 I can chair
16:02:58 #action dmsimard will chair the next weekly meeting
16:03:03 dmsimard: I'm going to work on queens and master next too, so if you wanted to throw them in there too while you're at it
16:03:06 any other topic that you want to bring up?
16:03:35 ok, i'm closing the meeting
16:03:43 thank you all for joining!
16:03:45 #endmeeting