15:00:37 #startmeeting RDO meeting - 2016-08-17
15:00:37 Meeting started Wed Aug 17 15:00:37 2016 UTC. The chair is jpena. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:37 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:37 The meeting name has been set to 'rdo_meeting_-_2016-08-17'
15:00:46 #topic roll call
15:00:52 o/
15:00:54 o/
15:01:02 #chair rbowen dmellado
15:01:02 Current chairs: dmellado jpena rbowen
15:01:25 o/
15:01:27 o/
15:01:28 hi
15:01:40 #chair tosky coolsvap myoung
15:01:40 Current chairs: coolsvap dmellado jpena myoung rbowen tosky
15:01:48 o/
15:01:57 #chair amoralej
15:01:57 Current chairs: amoralej coolsvap dmellado jpena myoung rbowen tosky
15:02:02 o/
15:02:05 dmsimard, jruzicka, around?
15:02:10 #chair jschlueter
15:02:10 Current chairs: amoralej coolsvap dmellado jpena jschlueter myoung rbowen tosky
15:03:01 o/
15:03:05 o/
15:03:11 #chair weshay number80
15:03:11 Current chairs: amoralej coolsvap dmellado jpena jschlueter myoung number80 rbowen tosky weshay
15:03:26 o/
15:03:41 #chair jjoyce
15:03:41 Current chairs: amoralej coolsvap dmellado jjoyce jpena jschlueter myoung number80 rbowen tosky weshay
15:03:51 o/
15:04:06 #chair dmsimard
15:04:06 Current chairs: amoralej coolsvap dmellado dmsimard jjoyce jpena jschlueter myoung number80 rbowen tosky weshay
15:04:18 I guess we can start
15:04:22 o/
15:04:29 #chair dhellmann
15:04:29 Current chairs: amoralej coolsvap dhellmann dmellado dmsimard jjoyce jpena jschlueter myoung number80 rbowen tosky weshay
15:04:31 ok, let's go
15:04:36 #topic RDO Stable Mitaka is broken, please can it be given some love
15:04:59 o/
15:05:00 WTF?
15:05:12 Mitaka and galera?
15:05:18 we should be using only mariadb
15:05:19 #link https://bugzilla.redhat.com/show_bug.cgi?id=1365884
15:05:23 that is the referenced bug
15:05:29 galera is deprecated
15:05:48 o/
15:06:08 number80: mysql is deprecated, but not Galera surely?
15:06:28 (we should be using PostgreSQL, but that's a whole other argument)
15:06:39 hewbrocca: galera is now included in mariadb 10.1, galera standalone is deprecated
15:07:00 ci has the to-do to add a job that exercises overcloud-image-build afaict
15:07:11 http://cbs.centos.org/koji/buildinfo?buildID=10246
15:07:25 o/
15:07:31 jpena, my conn dropped, I was really confused :D
15:07:36 sending messages into the void
15:07:56 based on the bug comments, it looks like the issue could be related to having EPEL enabled
15:08:03 #chair jruzicka hrybacki
15:08:03 Current chairs: amoralej coolsvap dhellmann dmellado dmsimard hrybacki jjoyce jpena jruzicka jschlueter myoung number80 rbowen tosky weshay
15:08:22 on fire
15:08:54 jpena: o/
15:08:56 postgres ftw :-p
15:09:06 #chair imcsk8
15:09:06 Current chairs: amoralej coolsvap dhellmann dmellado dmsimard hrybacki imcsk8 jjoyce jpena jruzicka jschlueter myoung number80 rbowen tosky weshay
15:09:44 so, who could have a look at this? I see ggillies was also affected
15:10:20 "This is a problem because python-openstackclient forces epel to be enabled, even though it shouldn't be (reason being something to do with diskimage-builder)."
15:10:38 I'm curious why EPEL is required for python-openstackclient
15:10:46 hmm I wonder :)
15:10:48 * jpena too
15:10:48 it should not depend on dib
15:10:54 * number80 looking
15:10:54 it requires a Yggdrasil of dependencies
15:11:49 *nods*
15:12:18 is the reporter around?
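(For reference, the kind of check being discussed here can be reproduced with repoquery; the output obviously depends on which repositories are enabled, so treat this as a sketch of the investigation rather than its result.)

    # what python-openstackclient declares as requirements
    dnf repoquery --requires python-openstackclient
    # resolve those requirements to concrete packages
    dnf repoquery --requires --resolve python-openstackclient
    # and the reverse question: what would drag in diskimage-builder?
    dnf repoquery --whatrequires diskimage-builder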
15:12:34 * number80 guesses that ggillies is likely sleeping at this hour
15:12:48 most probably
15:13:44 I can't find how dib gets pulled in by osc
15:14:18 I don't see it as a dependency upstream. Is that something to do with the way the RPM packages are defined?
15:14:34 I don't want to sidetrack the meeting too much, but that topic reminds me that RDO still has a branding issue - I've posted about that on the ML before. I hate to see "Mitaka RDO stable is broken" when really it's an issue isolated to a specific installer or use case.
15:15:01 /rant
15:15:05 +1
15:15:19 +1
15:15:40 dhellmann: likely, but dnf repoquery --requires python-openstackclient shows nothing special (only oslo and openstack clients)
15:16:11 the reporter mentions this: "This is using current mitaka stable images. Nightly delorean images do not exhibit this problem.". Maybe we just need to update the images?
15:16:16 number80 : yeah, that's more or less what I would expect to see
15:17:41 jpena: I agree, we should reassign it to the TripleO folks
15:18:12 * number80 asks who's trown's backup?
15:18:32 actually, the bug is assigned to the openstack-tripleo component, we just need someone to take the ball
15:19:09 So the TripleO image is broken, not Mitaka RDO, if I understand it correctly?
15:19:17 jruzicka: correct
15:20:04 to avoid spending more time on this topic, I propose we poke someone at #tripleo after the meeting and ask for help
15:20:08 #info TripleO image is broken, not Mitaka RDO
15:20:09 that has been rebuilt several times over the last few days, we'll have to recheck it
15:21:42 ack
15:21:49 action?
15:21:52 number80: btw I'm trown's backup for the RDO promotion jobs. (/me just started reading)
15:21:58 oh cool
15:22:42 #action adarazs will investigate 1365884
15:22:52 let's move on then!
15:22:54 yay
15:22:59 #topic DLRN, symlinks and CDN
15:23:05 dmsimard, the stage is yours
15:23:09 oh hi
15:23:30 let me gather my thoughts for a sec
15:23:49 right
15:24:11 so, this is mostly about CI purposes -- right now we have two custom symlinks that are managed by CI: current-tripleo and current-passed-ci
15:24:36 These CIs have effectively different purposes and also different criteria (coverage/testing) for promotion
15:25:02 afazekas: this might be interesting for you ^
15:25:13 There are different initiatives under way to have extended coverage, although with different criteria
15:25:29 An example is around testing quickstart from whatever tripleo-ci has promoted last
15:25:56 Another example would be to run puppet-openstack CI (which has statistically been the most successful CI so far) and hold a symlinked reference for its promotion
15:26:31 Right now current-passed-ci and current-tripleo do not really exist on the trunk.rdo instance, they are redirects to buildlogs
15:27:47 I don't want to get into a debate around the usefulness (or lack thereof) of the CDN *right now* but I want to discuss how we could make it possible to maintain different symlinks in a way that wouldn't be hacky or awkward.
15:28:06 +1
15:28:15 +1
15:28:27 from the DLRN point of view, we could simply synchronize additional links if needed
15:28:36 dmsimard, you have some idea I assume?
15:28:37 just like we do with consistent or current
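(Background for readers: DLRN publishes each build under a hashed directory and keeps a few well-known symlinks on top -- roughly, current for the newest build and consistent for the newest build with no build failures. A CI promotion is just one more symlink repointed at whatever hash last passed that pipeline. The layout below is illustrative, not the actual trunk.rdoproject.org tree.)

    centos7/ab/cd/abcd1234.../            # one repo per DLRN build (hashed dirs)
    centos7/current           -> ab/cd/abcd1234...
    centos7/consistent        -> ab/cd/abcd1234...
    centos7/current-passed-ci -> ab/cd/abcd1234...   # what a CI promotion would manage
    # a promotion boils down to re-pointing the link:
    ln -sfn ab/cd/abcd1234... centos7/current-passed-ci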
15:29:05 I've argued before that repository promotions could be done client-side (like p-o-i is doing right now https://review.openstack.org/#/q/project:openstack/puppet-openstack-integration+status:merged+rdo )
15:29:07 if we do so, they'll show up at the public server, and won't be synced to the CDN unless we think it's needed
15:29:29 and in fact, client-side promotions are really cool because then it means we don't have to maintain anything on our end and each CI is self-sufficient
15:30:03 however, projects like tripleo-ci and puppet-openstack are limited in how/when they can run such jobs (upstream imposes a limit of once per 24hrs on periodic jobs)
15:30:10 are those links only useful for CIs?
15:30:29 the links are used to track whatever repository has worked last for a particular CI pipeline, yes
15:30:44 jpena: I really don't want to see this in the dlrn code
15:30:46 dmsimard: that could change with 3rd-party CI now, it wouldn't be hard to add a new pipeline into zuul.rdoproject.org to do it
15:32:33 jpena: can you imagine maintaining ['current-passed-ci', 'current-tripleo', 'current-something'] right in the dlrn code? Ugly! A standalone cron, maybe? I don't know..
15:32:47 pabelanger: what do you mean, the periodic limit ?
15:32:54 definitely don't want that in dlrn
15:32:58 dmsimard: we could make it a configurable option in projects.ini, but yes, it's still hard to maintain
15:33:00 * jschlueter agrees dlrn is not the place to maintain that
15:33:31 standalone config repo a la rdoinfo?
15:33:46 pabelanger: I proposed a dirty, dirty cheat.. which would be to have a periodic job (on the RDO side) that would create a review on review.o.o with a repository promotion (like the one the proposal bot is doing)
15:34:05 pabelanger: every 6 hours, for example.. which would override the periodic limit
15:34:45 dmsimard: the Puppet CI is doing something similar, isn't it?
15:34:52 dmsimard: I'd need to better understand why you are using periodic, but the promotion could be driven by zuul.rdoproject.org whenever puppet-openstack merges code upstream
15:35:01 we can talk more after the meeting
15:35:04 jpena: it's a periodic upstream job that runs once every 24hrs
15:35:16 jpena: see the proposal bot reviews in https://review.openstack.org/#/q/project:openstack/puppet-openstack-integration+status:merged+rdo
15:36:12 any action then?
15:36:19 but anyway, we probably don't need to choose a definitive solution right here in the meeting, but we're going to need a way to expose symlinks.. I'm not a fan of symlinks fwiw, I turned down EmilienM when he asked for one and he came up with his client-side promotion
15:36:28 +1
15:36:44 sshnaidm, ^
15:37:29 jpena: if you're okay with it, I can take an action to craft an rsync cron that'd just synchronize the symlink targets
15:37:40 can submit it through puppet-dlrn or something
15:37:43 pabelanger, dmsimard please invite me to that discussion
15:37:53 dmsimard: ok, that's an option
15:38:07 please rope me in as well, I have downstream jobs that are interested in the links...
15:38:18 dmsimard: FWIW: that promotion could happen in the post pipeline on every commit for puppet-openstack, other projects do that, e.g. openstack/requirements
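(A minimal sketch of what the rsync cron proposed above could look like, run from cron every so often; the hostnames and paths are placeholders, not the real trunk.rdoproject.org layout. The -L flag makes rsync ship the directory a symlink points at, so the public server ends up with browsable content rather than a dangling link.)

    #!/bin/bash
    # sync-promotion-links.sh -- sketch only; hosts and paths are placeholders
    set -eu
    for link in current-passed-ci current-tripleo; do
        # -L (--copy-links) follows the symlink and copies its target
        rsync -avz -L "/home/dlrn/data/repos/${link}/" \
              "mirror@public-host:/var/www/html/centos7/${link}/"
    done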
15:38:24 #action dmsimard to explore a solution to expose custom symlinks on the public trunk.rdoproject.org instance
15:38:45 let's move then to the next topic, still a lot to discuss
15:38:46 that will help things be more transparent
15:39:02 #topic Tempest packaging
15:39:22 this was discussed last week, but we postponed any action until dmellado was back
15:39:30 o/ I'm here
15:39:40 There's a couple of things wrong with openstack-tempest
15:39:43 so I was reading the option about having tempest-downstream and tempest-vanilla
15:40:04 from my perspective, 1) it installs every plugin (it shouldn't) 2) it's packaged from downstream
15:40:15 first of all, tbh, I wouldn't package it at all, but counting that as a given
15:40:16 eggmaster, ^
15:40:42 dmsimard: but if the entry points for the tests are installed by the XXX package
15:40:49 how would you then make it work?
15:41:12 upstream, we do isolate that behavior by using venvs
15:41:20 but that's not a way to go when rpm'ing
15:41:28 dmsimard: 1. can be solved by adding a tempest-all subpackage that will require all plugins (for people who want the old behaviour)
15:41:33 dmsimard: unfortunately we can't use Recommends in RPM
15:41:39 number80: the old behavior is stupid
15:41:45 You can't install all the plugins
15:41:52 You just can't
15:41:58 because it will yield a broken install out of the box
15:41:59 they have incompatible dependencies?
15:42:04 dmsimard: if you don't install them, tempest will explode if you have the base package for some service
15:42:20 what happens is that different projects behave in different ways
15:42:26 dmsimard: well, people request it (I agree this should not be the default behaviour though)
15:42:28 i.e. manila requires a specific commit of tempest
15:42:32 dhellmann: ^^
15:42:35 For example, some tempest plugins are enabled by default as soon as they are installed unless explicitly disabled in tempest.conf
15:43:11 dmellado : does manila require exactly that commit, or at least that commit?
15:43:11 dmsimard: there was an upstream patch to skip loading plugins, -2ed by mtreinish for now
15:43:13 This is the case for mistral and some others, I don't have the references nearby
15:43:14 I'll take a look
15:43:14 dmsimard: all plugins are listed iirc; then the tests are run or not depending on tempest.conf
15:43:21 dhellmann: *that* commit, sadly
15:43:36 as the api is changing, hopefully for a more stable one
15:43:36 ok, well, that's clearly a mistral bug
15:43:44 you could get some errors
15:43:44 manila is more the exception than the rule here...
15:43:46 sorry, manila
15:43:48 tosky: you'd be surprised
15:44:11 dmsimard: another point is how to keep track of every plugin
15:44:19 well, packaging here is easy to fix, but I'd like us to agree on a roadmap
15:44:20 tosky: if python-aodh-tests (the aodh plugin) is installed, and aodh is *NOT* installed (and aodh=False in tempest.conf!), it will run the aodh tests and error out with "aodh endpoint not found"
15:44:37 * number80 suggests moving that discussion to the list now that dmellado is back
15:44:43 tosky: it turns out the aodh plugin relies on another parameter, which is in fact aodh_plugin=False
15:44:53 dmsimard: but then it's a tempest configuration issue
15:45:01 tosky: but anyway, the fact is that as soon as the plugin is installed, it'll run
15:45:22 tosky: so how do you ship an openstack-tempest package with all plugins installed, then ?
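(For readers following along: tempest discovers plugins through Python entry points, so any plugin package present on the system is picked up automatically; whether its tests then run depends on whatever switches the plugin chooses to honour in tempest.conf. A rough illustration, assuming a tempest recent enough to ship the list-plugins command:)

    # show every tempest plugin discovered on this system
    tempest list-plugins

    # tempest.conf excerpt -- the conventional per-service switch;
    # as noted above, the aodh plugin of the time keyed off its own
    # aodh_plugin option instead, which is exactly the inconsistency
    # being complained about here
    [service_available]
    aodh = False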
15:45:24 dmsimard: right, but if you have python-aodh installed and not python-aodh-tests, you will get an error
15:45:32 tosky: every plugin is different, there's no consistency
15:45:42 then we need to fix them
15:45:45 exactly
15:45:51 dmsimard: the recommended behavior was to have one out-of-tree repo for the plugin
15:45:53 did you try to address this with the QA community?
15:45:55 that would make things much easier
15:45:58 tosky: yes
15:46:03 but it doesn't depend on tempest/qa
15:46:04 we need to take this issue upstream, because if plugins are treating tempest like a library they need to be using released versions of tempest and we need to be tagging things more often
15:46:06 it depends on every project
15:46:22 dhellmann: within tempest there's tempest.lib
15:46:28 *nods*
15:46:29 which we're on track to stabilize
15:46:39 where on track == newton
15:46:40 until that's done, those things might happen...
15:46:49 well, can someone take the action of raising this topic upstream?
15:46:52 tosky: http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2016-08-09.log.html#t2016-08-09T14:25:58
15:46:53 yes, but if different plugins from the same release series need different versions of tempest, then we're not testing correctly upstream
15:47:05 (and also define our downstream roadmap)
15:47:07 tosky: hopefully, I'll bring back more details from the mid-cycle
15:47:09 tosky: tl;dr tempest has no control over plugin behavior
15:47:24 dmellado: when is the mid-cycle?
15:47:27 dmsimard: nothing prevents us from writing more guidelines in the tempest documentation and proposing a patch for the plugin
15:47:35 number80: Sept 18-22
15:47:44 dmsimard: it's not an infinite number, we are talking about probably less than 10 items
15:47:57 a totally countable set, that can be handled
15:47:58 dmellado: ack
15:48:23 so we need to do two things if I got this correctly
15:48:37 we still haven't talked about the tempest config tool that we bundle
15:48:45 a) Propose a roadmap for tempest packaging
15:48:52 and I'd say to bring it to its own repo
15:48:59 in order not to depend on our fork
15:49:06 b) Identify "rogue" plugins, work with upstream to fix them
15:49:37 dmellado, maybe it's the tempest deployment or configuration tool that should be able to selectively install plugins as required
15:49:44 would that make sense?
15:49:45 amoralej: that's not possible
15:49:48 not at all
15:50:04 So this all started from a review which, in the meantime, is still blocked https://review.rdoproject.org/r/#/c/1820/ -- it prevents me from integrating puppet-designate testing in puppet-openstack-integration (and therefore adding designate to RDO testing coverage)
15:50:21 the tempest plugin gets installed, if in-tree
15:50:25 jpena: yes!
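(To make the entry-point mechanism concrete: an in-tree plugin advertises itself under the tempest.test_plugins entry-point group in the project's setup.cfg, so installing the corresponding -tests package is enough for tempest to see it. The snippet below is modeled on the neutron setup.cfg linked a few lines further down; the exact module and class names are illustrative rather than copied verbatim.)

    # setup.cfg of a project that carries an in-tree tempest plugin (illustrative)
    [entry_points]
    tempest.test_plugins =
        neutron_tests = neutron.tests.tempest.plugin:NeutronTempestPlugin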
15:50:26 It'd be great if I could finish fixing designate newton, which is still completely broken
15:50:26 when the package gets installed
15:51:03 amoralej: https://github.com/openstack/neutron/blob/master/setup.cfg#L166-L167
15:51:21 dmsimard: about that review, the change you propose is a global one, but applied to a single repo
15:51:35 tosky: explain
15:52:03 tosky: openstack-tempest is not currently installed or required by any of the *-tests packages right now
15:52:07 dmsimard: you are removing the dependency because some consumer could use the source repository; this is the global change we are discussing
15:52:46 dmsimard: tosky: basically the change there is because designate is an out-of-tree plugin
15:52:47 ultimately the problem is not the source repository, it's the fact that openstack-tempest installs every plugin, of which some are hardcoded to be enabled by default
15:52:51 and it would fetch tempest for the run
15:53:15 running out of time
15:53:16 openstack-tempest by itself would likely not break anything
15:53:17 dmsimard : can we change the plugins upstream to all use a config option to enable them?
15:53:17 trying to be time-conscious, we've got 7 minutes and we should get some action, then complete the agenda
15:53:27 jpena: should we move this to the ML
15:53:29 ?
15:53:32 yes
15:53:36 we're still far from completion
15:53:38 dmellado: agreed
15:53:41 dhellmann: It's in the realm of possibilities, eys
15:53:43 yes*
15:53:55 dmellado: will you restart the email thread?
15:54:05 so that seems like a good thing to start working on, at the same time as dealing with the inverted dependency issue
15:54:20 I'll reply to the last one and bring it back, yes
15:54:29 jpena:
15:54:31 #action dmellado to restart the Tempest packaging thread
15:54:38 \o/
15:54:39 thx :)
15:54:44 #topic reviews needing eyes
15:54:56 jruzicka needs eyes to review a couple of rdopkg changes
15:55:05 * number80 is looking at #1847
15:55:06 #link https://review.rdoproject.org/r/#/c/1847/
15:55:10 yeah, just look at the etherpad and review if you feel like it
15:55:16 #link https://review.rdoproject.org/r/#/c/1275/
15:55:26 as for rpm-python, I'm out of ideas :)
15:55:32 with the second one I don't know how to proceed
15:55:40 a kick in any direction would be welcome
15:55:44 except using a virtualenv with access to system packages
15:56:00 or writing a pure python rpm module
15:56:07 (that wouldn't suck)
15:56:11 jruzicka: that's something that would be great
15:56:12 a man can dream...
15:57:07 #topic CentOS Cloud SIG meetings (Thursday, 15:00 UTC)
15:57:10 #link https://etherpad.openstack.org/p/centos-cloud-sig
15:57:11 We have basically not had a CentOS Cloud SIG meeting for 2 months.
15:57:11 Part of this is because of travel/summer/whatever. I also have a standing conflict with that meeting, which I'm trying to fix.
15:57:11 But more than that, I think it's because the Cloud SIG is just a rehash of this meeting, because only RDO is participating.
15:57:11 On the one hand, we don't really accomplish much in that meeting when we do have it.
15:57:13 On the other hand, it's a way to get RDO in front of another audience.
15:57:15 So, I guess I'm asking if people think it's valuable, and will try to carve out a little time for this each week.
15:57:18
15:57:50 (Hoping I don't get paste-flood-banned! ;-)
15:57:53 my bad too, but I think we should work with other groups to make it more lively
15:58:35 Without other groups participating, it feels like a waste of time.
15:58:49 would it make sense to merge both meetings?
15:59:13 jpena: It might. There's a handful of people that sometimes attend the CentOS meeting, that we could invite here.
15:59:32 kbsingh feels, of course, that we should continue that meeting.
15:59:42 He's on this channel, as are several of those people
16:00:07 Anyways, no specific action here, but I'll bring it up on the mailing list again.
16:00:12 ok
16:00:20 #topic OpenStack Summit
16:00:22 #link https://etherpad.openstack.org/p/rdo-barcelona-summit-booth
16:00:23 OpenStack Summit is now roughly 2 months out.
16:00:23 If you want to do a demo in the RDO booth, it's time to start thinking about what you want to show.
16:00:23 Videos are ok. Live demos are ok, if they're not too network-dependent.
16:00:23 Or if you want to hang out in the booth and answer questions, that's also very welcome.
16:00:23 The link above is the etherpad. I don't have specific time slots yet. We should have the event schedule in the next 2 weeks.
16:00:26 You can just indicate that you're interested, and I'll get back in touch once we have a specific schedule.
16:00:28
16:00:43 rbowen: I have volunteers, I need to ping them again
16:00:46 Thanks to Mathieu and Wes for signing up already!
16:00:53 I'll sign up, too
16:01:39 I can probably do some booth time too
16:01:56 #action everyone interested sign up at https://etherpad.openstack.org/p/rdo-barcelona-summit-booth
16:02:19 we're over schedule, so quickly
16:02:25 #topic Chair for next meeting
16:02:26 anyone?
16:02:43 o/
16:02:55 #action number80 chair next week
16:03:03 we're already past the hour :)
16:03:16 ok, since it's past time, let's leave the open floor for after the meeting
16:03:32 thx number80 for volunteering :)
16:03:35 #endmeeting