15:00:37 <jpena> #startmeeting RDO meeting - 2016-08-17
15:00:37 <zodbot> Meeting started Wed Aug 17 15:00:37 2016 UTC.  The chair is jpena. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:37 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:37 <zodbot> The meeting name has been set to 'rdo_meeting_-_2016-08-17'
15:00:46 <jpena> #topic roll call
15:00:52 <rbowen> o/
15:00:54 <dmellado> o/
15:01:02 <jpena> #chair rbowen dmellado
15:01:02 <zodbot> Current chairs: dmellado jpena rbowen
15:01:25 <coolsvap> o/
15:01:27 <myoung> o/
15:01:28 <tosky> hi
15:01:40 <jpena> #chair tosky coolsvap myoung
15:01:40 <zodbot> Current chairs: coolsvap dmellado jpena myoung rbowen tosky
15:01:48 <amoralej> o/
15:01:57 <jpena> #chair amoralej
15:01:57 <zodbot> Current chairs: amoralej coolsvap dmellado jpena myoung rbowen tosky
15:02:02 <jschlueter> o/
15:02:05 <jpena> dmsimard, jruzicka, around?
15:02:10 <jpena> #chair jschlueter
15:02:10 <zodbot> Current chairs: amoralej coolsvap dmellado jpena jschlueter myoung rbowen tosky
15:03:01 <weshay> o/
15:03:05 <number80> o/
15:03:11 <jpena> #chair weshay number80
15:03:11 <zodbot> Current chairs: amoralej coolsvap dmellado jpena jschlueter myoung number80 rbowen tosky weshay
15:03:26 <jjoyce> o/
15:03:41 <jpena> #chair jjoyce
15:03:41 <zodbot> Current chairs: amoralej coolsvap dmellado jjoyce jpena jschlueter myoung number80 rbowen tosky weshay
15:03:51 <dmsimard> o/
15:04:06 <jpena> #chair dmsimard
15:04:06 <zodbot> Current chairs: amoralej coolsvap dmellado dmsimard jjoyce jpena jschlueter myoung number80 rbowen tosky weshay
15:04:18 <number80> I guess we can start
15:04:22 <dhellmann> o/
15:04:29 <jpena> #chair dhellmann
15:04:29 <zodbot> Current chairs: amoralej coolsvap dhellmann dmellado dmsimard jjoyce jpena jschlueter myoung number80 rbowen tosky weshay
15:04:31 <jpena> ok, let's go
15:04:36 <jpena> #topic RDO Stable Mitaka is broken, please can it be given some love
15:04:59 <imcsk8> o/
15:05:00 <number80> WTF?
15:05:12 <number80> Mitaka and galera?
15:05:18 <number80> we should be using only mariadb
15:05:19 <jpena> #link https://bugzilla.redhat.com/show_bug.cgi?id=1365884
15:05:23 <jpena> that is the referenced bug
15:05:29 <number80> galera is deprecated
15:05:48 <hrybacki> o/
15:06:08 <hewbrocca> number80: mysql is deprecated, but not Galera surely?
15:06:28 <hewbrocca> (we should be using PostgreSQL, but that's a whole other argument)
15:06:39 <number80> hewbrocca: galera is now included in mariadb 10.1, galera standalone is deprecated
15:07:00 <weshay> CI has a to-do to add a job that exercises overcloud-image-build, afaict
15:07:11 <number80> http://cbs.centos.org/koji/buildinfo?buildID=10246
15:07:25 <jruzicka> o/
15:07:31 <jruzicka> jpena, my conn dropped, I was really confused :D
15:07:36 <jruzicka> sending messages into the void
15:07:56 <jpena> based on the bug comments, it looks like the issue can be related to having EPEL enabled
15:08:03 <jpena> #chair jruzicka hrybacki
15:08:03 <zodbot> Current chairs: amoralej coolsvap dhellmann dmellado dmsimard hrybacki jjoyce jpena jruzicka jschlueter myoung number80 rbowen tosky weshay
15:08:22 <weshay> on fire
15:08:54 <imcsk8> jpena: o/
15:08:56 <jruzicka> postgres ftw :-p
15:09:06 <jpena> #chair imcsk8
15:09:06 <zodbot> Current chairs: amoralej coolsvap dhellmann dmellado dmsimard hrybacki imcsk8 jjoyce jpena jruzicka jschlueter myoung number80 rbowen tosky weshay
15:09:44 <jpena> so, who could have a look at this? I see ggillies was also affected
15:10:20 <number80> "This is a problem because python-openstackclient forces epel to be enabled, even though it shouldn't be (reason being something to do with diskimage-builder)."
15:10:38 <number80> I'm curious why EPEL is required for python-openstackclient
15:10:46 <jruzicka> hmm I wonder :)
15:10:48 * jpena too
15:10:48 <number80> it should not depend on dib
15:10:54 * number80 looking
15:10:54 <jruzicka> it requires a Yggdrasil of dependencies
15:11:49 <number80> *nods*
15:12:18 <number80> is the reporter around?
15:12:34 * number80 guesses that ggillies is likely sleeping at this hour
15:12:48 <dmellado> most probably
15:13:44 <number80> I can't find how dib gets pulled by osc
15:14:18 <dhellmann> I don't see it as a dependency upstream. Is that something to do with the way the RPM packages are defined?
15:14:34 <dmsimard> I don't want to sidetrack the meeting too much, but that topic reminds me that RDO still has a branding issue - I've posted about that on the ML before. I hate to see "Mitaka RDO stable is broken" when it's really an issue isolated to a specific installer or use case.
15:15:01 <dmsimard> /rant
15:15:05 <rbowen> +1
15:15:19 <imcsk8> +1
15:15:40 <number80> dhellmann: likely, but dnf repoquery --requires python-openstackclient shows nothing special (only oslo and openstack clients)
15:16:11 <jpena> the reporter mentions this: "This is using current mitaka stable images. Nightly delorean images do not exhibit this problem.". Maybe we just need to update the images?
15:16:16 <dhellmann> number80 : yeah, that's more or less what I would expect to see
15:17:41 <number80> jpena: I agree, we should reassign it to the TripleO folks
15:18:12 * number80 asks who trown's backup is
15:18:32 <jpena> actually, the bug is assigned to the openstack-tripleo component, we just need someone to take the ball
15:19:09 <jruzicka> So the TripleO image is broken, not Mitaka RDO, if I understand it correctly?
15:19:17 <jpena> jruzicka: correct
15:20:04 <jpena> to avoid spending more time on this topic, I propose we poke someone at #tripleo after the meeting and ask for help
15:20:08 <jruzicka> #info TripleO image is broken, not Mitaka RDO
15:20:09 <weshay> that image has been rebuilt several times over the last few days; we'll have to recheck it
15:21:42 <number80> ack
15:21:49 <jpena> action?
15:21:52 <adarazs> number80: btw I'm trown's backup for the RDO promotion jobs. (/me just started reading)
15:21:58 <number80> oh cool
15:22:42 <number80> #action adarazs will investigate 1365884
15:22:52 <jpena> let's move on then!
15:22:54 <jruzicka> yay
15:22:59 <jpena> #topic DLRN, symlinks and CDN
15:23:05 <jpena> dmsimard, the stage is yours
15:23:09 <dmsimard> oh hi
15:23:30 <dmsimard> let me gather my thoughts for a sec
15:23:49 <dmsimard> right
15:24:11 <dmsimard> so, this is mostly about CI purposes -- right now we have two custom symlinks that are managed by CI: current-tripleo and current-passed-ci
15:24:36 <dmsimard> These CIs have effectively different purposes and also different criteria (coverage/testing) for promotion
15:25:02 <adarazs> afazekas: this might be interesting for you ^
15:25:13 <dmsimard> There are different initiatives under way to have extended coverage, although with different criteria
15:25:29 <dmsimard> An example is around testing quickstart from whatever tripleo-ci has promoted last
15:25:56 <dmsimard> Another example would be to run puppet-openstack CI (which has statistically been the most successful CI so far) and hold a symlink'd reference for its promotion
15:26:31 <dmsimard> Right now current-passed-ci and current-tripleo do not really exist on the trunk.rdo instance, they are redirects to buildlogs
15:27:47 <dmsimard> I don't want to get into a debate around the usefulness (or lack thereof) of the CDN *right now* but I want to discuss how we could make it possible to maintain different symlinks in a way that wouldn't be hacky or awkward.
15:28:06 <myoung> +1
15:28:15 <jruzicka> +1
15:28:27 <jpena> from the DLRN point of view, we could simply synchronize additional links if needed
15:28:36 <jruzicka> dmsimard, you have some idea I assume?
15:28:37 <jpena> just like we do with consistent or current
15:29:05 <dmsimard> I've argued before that repository promotions could be done client-side (like p-o-i is doing right now https://review.openstack.org/#/q/project:openstack/puppet-openstack-integration+status:merged+rdo )
15:29:07 <jpena> if we do so, they'll show up at the public server, and won't be synced to the CDN unless we think it's needed
15:29:29 <dmsimard> and in fact, client-side promotions are really cool because then it means we don't have to maintain anything on our end and each CI is self sufficient
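For context, client-side promotion here just means each CI resolves the hash-pinned DLRN repo it tested and records that URL on its own side, instead of asking trunk.rdoproject.org to move a symlink. A minimal sketch of that resolution step, assuming the trunk.rdoproject.org URL layout of the time (this is not the actual p-o-i job):

```python
# Hedged sketch: resolve the "consistent" DLRN repo to its hash-pinned baseurl
# so a CI job can record exactly which repo it tested (client-side promotion).
import configparser
import urllib.request

CONSISTENT = "https://trunk.rdoproject.org/centos7/consistent/delorean.repo"

def resolve_tested_repo(url=CONSISTENT):
    """Return the hash-pinned baseurl behind the 'consistent' symlink."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode()
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    section = cfg.sections()[0]      # usually 'delorean'
    return cfg[section]["baseurl"]   # points at the hashed repo directory

if __name__ == "__main__":
    # On success, a CI would commit this URL back into its own config,
    # which is what the p-o-i proposal-bot reviews mentioned here do.
    print(resolve_tested_repo())
```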
15:30:03 <dmsimard> however, projects like tripleoci and puppet-openstack are limited in how/when they can run such jobs (upstream imposes a limit of once per 24hrs on periodic jobs)
15:30:10 <amoralej> are those links only useful for CIs?
15:30:29 <dmsimard> the links are used to track whatever repository has worked last for a particular CI pipeline, yes
15:30:44 <dmsimard> jpena: I really don't want to see this in the dlrn code
15:30:46 <pabelanger> dmsimard: that could change with 3rdparty CI now, it wouldn't be hard to add a new pipeline into zuul.rdoproject.org to do it
15:32:33 <dmsimard> jpena: can you imagine maintaining ['current-passed-ci', 'current-tripleo', 'current-something'] right in the dlrn code? Ugly! A standalone cron, maybe ? I don't know..
15:32:47 <dmsimard> pabelanger: what do you mean, the periodic limit ?
15:32:54 <jruzicka> definitely don't want that in dlrn
15:32:58 <jpena> dmsimard: we could make it a configurable option in projects.ini, but yes, it's still hard to maintain
15:33:00 * jschlueter agrees dlrn is not the place to maintain that
15:33:31 <jruzicka> standalone config repo a la rdoinfo?
15:33:46 <dmsimard> pabelanger: I proposed a dirty, dirty cheat.. which would be to have a periodic job (on the RDO side) that would create a review on review.o.o with a repository promotion (like the one the proposal bot is doing)
15:34:05 <dmsimard> pabelanger: every 6 hours, for example.. which would override the periodic limit
15:34:45 <jpena> dmsimard: the Puppet CI is doing something similar, isn't it?
15:34:52 <pabelanger> dmsimard: I'd need to better understand why you are using periodic jobs, but the promotion could be driven by zuul.rdoproject.org when puppet-openstack merges code upstream
15:35:01 <pabelanger> we can talk more after the meeting
15:35:04 <dmsimard> jpena: it's a periodic upstream job that runs once every 24hrs
15:35:16 <dmsimard> jpena: see proposal bot reviews in https://review.openstack.org/#/q/project:openstack/puppet-openstack-integration+status:merged+rdo
15:36:12 <jpena> any action then?
15:36:19 <dmsimard> but anyway, we probably don't need to choose a definitive solution right here in the meeting, but we're going to need a way to expose symlinks. I'm not a fan of symlinks fwiw; I turned down EmilienM when he asked for one and he came up with his client-side promotion instead
15:36:28 <weshay> +1
15:36:44 <weshay> sshnaidm, ^
15:37:29 <dmsimard> jpena: if you're okay with it, I can take an action to craft an rsync cron that'd just synchronize the symlink targets
15:37:40 <dmsimard> can submit it through puppet-dlrn or something
15:37:43 <weshay> pabelanger, dmsimard please invite me to that discussion
15:37:53 <jpena> dmsimard: ok, that's an option
15:38:07 <myoung> please rope me in as well, I have downstream jobs that are interested in the links...
15:38:18 <pabelanger> dmsimard: FWIW: that promotion could happen in the post pipeline on every commit for puppet-openstack, other projects do that. eg: openstack/requirements
15:38:24 <dmsimard> #action dmsimard to explore a solution to expose custom symlinks on the public trunk.rdoproject.org instance
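One shape the "standalone cron" option could take, kept outside the DLRN codebase as argued above: resolve each CI-owned symlink to its hashed target and rsync both to the public server. This is only a hedged sketch; the paths, host, and symlink list are placeholders, not the real puppet-dlrn change:

```python
# Hedged sketch of the standalone-cron idea: sync a configurable set of
# CI-owned symlinks (and their target directories) to the public server
# without teaching DLRN itself about them.
# Paths, host and symlink names below are hypothetical placeholders.
import os
import subprocess

DATA_DIR = "/home/dlrn/data/repos"                      # hypothetical DLRN repo dir
REMOTE = "trunk.rdoproject.org:/var/www/html/centos7/"  # hypothetical rsync target
SYMLINKS = ["current-passed-ci", "current-tripleo"]     # CI-managed links

def sync_symlink(name):
    link = os.path.join(DATA_DIR, name)
    if not os.path.islink(link):
        return
    target = os.path.realpath(link)
    rel_target = os.path.relpath(target, DATA_DIR)
    # Push the hashed target directory first, then the symlink itself,
    # so the link never points at a directory that isn't there yet.
    subprocess.check_call(
        ["rsync", "-a", "--relative", "-e", "ssh", rel_target + "/", REMOTE],
        cwd=DATA_DIR)
    subprocess.check_call(
        ["rsync", "-a", "-e", "ssh", name, REMOTE], cwd=DATA_DIR)

if __name__ == "__main__":
    for name in SYMLINKS:
        sync_symlink(name)
```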
15:38:45 <jpena> let's move then to the next topic, still a lot to discuss
15:38:46 <weshay> that will help make things more transparent
15:39:02 <jpena> #topic Tempest packaging
15:39:22 <jpena> this was discussed last week, but we postponed any action until dmellado was back
15:39:30 <dmellado> o/ I'm here
15:39:40 <dmsimard> There's a couple things wrong with openstack-tempest
15:39:43 <dmellado> so I was reading the option about having tempest-downstream and tempest-vanilla
15:40:04 <dmsimard> from my perspective, 1) it installs every plugins (it shouldn't) 2) it's packaged from downstream
15:40:15 <dmellado> first of all, tbh, I wouldn't package it at all, but taking that as a given
15:40:16 <weshay> eggmaster, ^
15:40:42 <dmellado> dmsimard: but if the entry points for the tests are installed by the XXX package
15:40:49 <dmellado> how would you then make it work?
15:41:12 <dmellado> upstream, we do isolate that behavior by using venvs
15:41:20 <dmellado> but that's not a way to go when rpm'ing
15:41:28 <number80> dmsimard: 1. can be solved by adding a tempest-all subpackage that will require all plugins (for people who want the old behaviour)
15:41:33 <tosky> dmsimard: unfortunately we can't use Recommends in RPM
15:41:39 <dmsimard> number80: the old behavior is stupid
15:41:45 <dmsimard> You can't install all the plugins
15:41:52 <dmsimard> You just can't
15:41:58 <dmsimard> because it will yield a broken install out of the box
15:41:59 <dhellmann> they have incompatible dependencies?
15:42:04 <tosky> dmsimard: if you don't install them, tempest will explode if you have the base package for some service
15:42:20 <dmellado> what happens is that plugins from different projects behave in different ways
15:42:26 <number80> dmsimard: well, people requests it (I agree this should not be default behaviour though)
15:42:28 <dmellado> e.g. manila requires a specific commit of tempest
15:42:32 <dmellado> dhellmann: ^^
15:42:35 <dmsimard> For example, some tempest plugins are enabled by default as soon as they are installed unless explicitly disabled in tempest.conf
15:43:11 <dhellmann> dmellado : does manila require exactly that commit, or at least that commit?
15:43:11 <dmellado> dmsimard: there was an upstream patch to skip loading plugins, -2'd by mtreinish for now
15:43:13 <dmsimard> This is the case for mistral and some others, I don't have the references nearby
15:43:14 <dmellado> I'll take a look
15:43:14 <tosky> dmsimard: all plugins are listed iirc; then the tests are run or not depending on tempest.conf
15:43:21 <dmellado> dhellmann: *that* commit, sadly
15:43:36 <dmellado> as the API is changing, hopefully towards a more stable one
15:43:36 <dhellmann> ok, well, that's clearly a mistral bug
15:43:44 <dmellado> you could get some errors
15:43:44 <tosky> manila is more the exception than the rule here...
15:43:46 <dhellmann> sorry, manila
15:43:48 <dmsimard> tosky: you'd be surprised
15:44:11 <dmellado> dmsimard: another point is how to keep track of every plugin
15:44:19 <number80> well, packaging here is easy to fix, but I'd like that we agree on one roadmap
15:44:20 <dmsimard> tosky: if python-aodh-tests (aodh plugin) is installed, and aodh is *NOT* installed (and aodh=False in tempest.conf!) it will run the aodh tests and error out with aodh endpoint not found
15:44:37 * number80 suggests moving that discussion on the list as now dmellado is back
15:44:43 <dmsimard> tosky: turns out aodh plugin relies on another parameter which is in fact aodh_plugin=False
15:44:53 <tosky> dmsimard: but then it's a tempest configuration issue
15:45:01 <dmsimard> tosky: but anyway, the fact is that as soon as the plugin is installed, it'll run
15:45:22 <dmsimard> tosky: so how do you ship an openstack-tempest package with all plugins installed, then ?
15:45:24 <tosky> dmsimard: right, but if you have python-aodh installed and not python-aodh-tests, you will get an error
15:45:32 <dmsimard> tosky: every plugin is different, there's no consistency
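The mechanics behind "as soon as the plugin is installed, it'll run": tempest discovers plugins through the tempest.test_plugins entry point namespace that each *-tests package registers in its setup.cfg (the neutron setup.cfg linked later in the discussion is one example), so installing the RPM is enough to make the plugin visible to the next run. A small, hedged sketch to list what the current environment would load:

```python
# Hedged illustration: enumerate the tempest plugins visible in this
# environment via the 'tempest.test_plugins' entry point namespace.
# Installing a *-tests package adds an entry here, which is why tempest
# picks the plugin up without any tempest.conf change.
import pkg_resources

def installed_tempest_plugins():
    """Return 'plugin_name (providing package)' for every visible plugin."""
    return sorted(
        "{} ({})".format(ep.name, ep.dist.project_name)
        for ep in pkg_resources.iter_entry_points("tempest.test_plugins")
    )

if __name__ == "__main__":
    for plugin in installed_tempest_plugins():
        print(plugin)
```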
15:45:42 <number80> then, we need to fix them
15:45:45 <tosky> exactly
15:45:51 <dmellado> dmsimard: the recommended behavior was to have one out-of-tree repo for the plugin
15:45:53 <tosky> did you try to address this with the QA community?
15:45:55 <dmellado> that would make things much easier
15:45:58 <dmsimard> tosky: yes
15:46:03 <dmellado> but it doesn't depend on tempest/qa
15:46:04 <dhellmann> we need to take this issue upstream, because if plugins are treating tempest like a library they need to be using released versions of tempest and we need to be tagging things more often
15:46:06 <dmellado> it depends on every project
15:46:22 <dmellado> dhellmann: within tempest there's tempest.lib
15:46:28 <number80> *nods*
15:46:29 <dmellado> which we're on track to stabilize
15:46:39 <tosky> where on track == newton
15:46:40 <dmellado> until that's done, those things might happen...
15:46:49 <number80> well, can someone take the action of raising this topic upstream?
15:46:52 <dmsimard> tosky: http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2016-08-09.log.html#t2016-08-09T14:25:58
15:46:53 <dhellmann> yes, but if different plugins from the same release series need different versions of tempest, then we're not testing correctly upstream
15:47:05 <number80> (and also define our downstream roadmap)
15:47:07 <dmellado> tosky: hopefully, I'll bring back more details from mid-cycle
15:47:09 <dmsimard> tosky: tl;dr tempest has no control over plugin behavior
15:47:24 <number80> dmellado: when is mid-cycle?
15:47:27 <tosky> dmsimard: nothing prevents writing more guidelines in the tempest documentation and proposing patches for the plugins
15:47:35 <dmellado> number80: Sept 18-22
15:47:44 <tosky> dmsimard: it's not an infinite number, we are talking probably about less than 10 items
15:47:57 <tosky> a totally countable set, that can be handled
15:47:58 <number80> dmellado: ack
15:48:23 <jpena> so we need to do two things if I got this correctly
15:48:37 <dmellado> we still haven't talked about the tempest config tool that we bundle
15:48:45 <jpena> a) Propose a roadmap for tempest packaging
15:48:52 <dmellado> and I'd say we should move it to its own repo
15:48:59 <dmellado> in order not to depend on our fork
15:49:06 <jpena> b) Identify "rogue" plugins, work with upstream to fix them
15:49:37 <amoralej> dmellado, maybe it's the tempest deployment or configuration tool that should be able to selectively install plugins as required
15:49:44 <amoralej> would that make sense?
15:49:45 <dmellado> amoralej: that's not possible
15:49:48 <dmellado> not at all
15:50:04 <dmsimard> So this all started from a review which, in the meantime, is still blocked https://review.rdoproject.org/r/#/c/1820/ -- it prevents me from integrating puppet-designate testing in puppet-openstack-integration (and therefore adding designate to RDO testing coverage)
15:50:21 <dmellado> the tempest plugin gets installed, if in-tree
15:50:25 <number80> jpena: yes!
15:50:26 <dmsimard> It'd be great if I could finish fixing designate newton which is still completely broken
15:50:26 <dmellado> when the package gets installed
15:51:03 <dmellado> amoralej: https://github.com/openstack/neutron/blob/master/setup.cfg#L166-L167
15:51:21 <tosky> dmsimard: about that review, the change you propose is a global one, but applied to a single repo
15:51:35 <dmsimard> tosky: explain
15:52:03 <dmsimard> tosky: openstack-tempest is not currently installed by, or a requirement of, any of the *-tests packages
15:52:07 <tosky> dmsimard: you are removing the dependency because some consumer could use the source repository; this is the global change we are discussing
15:52:46 <dmellado> dmsimard: tosky: basically the change there is because designate is an out-of-tree plugin
15:52:47 <dmsimard> ultimately the problem is not the source repository, it's the fact that openstack-tempest installs every plugin, some of which are hardcoded to be enabled by default
15:52:51 <dmellado> and it would fetch tempest to run
15:53:15 <jruzicka> running out of time
15:53:16 <dmsimard> openstack-tempest by itself would likely not break anything
15:53:17 <dhellmann> dmsimard : can we change the plugins upstream to all use a config option to enable them?
15:53:17 <jpena> trying to be time-conscious: we've got 7 minutes, we should agree on an action and then complete the agenda
15:53:27 <dmellado> jpena: should we move this to the ML
15:53:29 <dmellado> ?
15:53:32 <number80> yes
15:53:36 <dmellado> we're still far from completion
15:53:38 <jpena> dmellado: agreed
15:53:41 <dmsimard> dhellmann: It's in the realm of possibilities, yes
15:53:55 <jpena> dmellado: will you restart the email thread?
15:54:05 <dhellmann> so that seems like a good thing to start working on, at the same time as dealing with the inverted dependency issue
15:54:20 <dmellado> I'll reply to the last one and bring it back, yes
15:54:29 <dmellado> jpena:
15:54:31 <jpena> #action dmellado to restart the Tempest packaging thread
15:54:38 <number80> \o/
15:54:39 <jpena> thx :)
15:54:44 <jpena> #topic reviews needing eyes
15:54:56 <jpena> jruzicka needs eyes to review a couple of rdopkg changes
15:55:05 * number80 is looking at #1847
15:55:06 <jpena> #link https://review.rdoproject.org/r/#/c/1847/
15:55:10 <jruzicka> yeah just look at the etherpad and review if you feel like it
15:55:16 <jpena> #link https://review.rdoproject.org/r/#/c/1275/
15:55:26 <number80> as for rpm-python, I'm out of ideas :)
15:55:32 <jruzicka> with the second one I don't know how to proceed
15:55:40 <jruzicka> a kick in any direction would be welcome
15:55:44 <number80> except using a virtualenv with access to system packages
15:56:00 <jruzicka> or writing a pure python rpm module
15:56:07 <jruzicka> (that wouldn't suck)
15:56:11 <number80> jruzicka: that's something that would be great
15:56:12 <jruzicka> a man can dream...
15:57:07 <jpena> #topic CentOS Cloud SIG meetings (Thursday, 15:00 UTC)
15:57:10 <rbowen> #link https://etherpad.openstack.org/p/centos-cloud-sig
15:57:11 <rbowen> We have basically not had a CentOS Cloud SIG meeting for 2 months.
15:57:11 <rbowen> Part of this is because of travel/summer/whatever. I also have a standing conflict with that meeting, which I'm trying to fix.
15:57:11 <rbowen> But more than that, I think it's because the Cloud SIG is just a rehash of this meeting, because only RDO is participating.
15:57:11 <rbowen> On the one hand, we don't really accomplish much in that meeting when we do have it.
15:57:13 <rbowen> On the other hand, it's a way to get RDO in front of another audience.
15:57:15 <rbowen> So, I guess I'm asking whether people think it's valuable, and whether they'll try to carve out a little time for it each week.
15:57:18 <rbowen> </end>
15:57:50 <rbowen> (Hoping I don't get paste-flood-banned! ;-)
15:57:53 <number80> my bad too, but I think we should work with other groups to make it more lively
15:58:35 <rbowen> Without other groups participating, it feels like a waste of time.
15:58:49 <jpena> would it make sense to merge both meetings?
15:59:13 <rbowen> jpena: It might. There's a handful of people that sometimes attend the CentOS meeting, that we could invite here.
15:59:32 <rbowen> kbsingh feels, of course, that we should continue that meeting.
15:59:42 <rbowen> He's on this channel, as are several of those people
16:00:07 <rbowen> Anyways, no specific action here, but I'll bring it up on the mailing list again.
16:00:12 <jpena> ok
16:00:20 <jpena> #topic OpenStack Summit
16:00:22 <rbowen> #link https://etherpad.openstack.org/p/rdo-barcelona-summit-booth
16:00:23 <rbowen> OpenStack Summit is now roughly 2 months out.
16:00:23 <rbowen> If you want to do a demo in the RDO booth, it's time to start thinking about what you want to show.
16:00:23 <rbowen> Videos are ok. Live demos are ok, if they're not too network-dependent.
16:00:23 <rbowen> Or if you want to hang out in the booth and answer questions, that's also very welcomed.
16:00:23 <rbowen> The link above is the etherpad. I don't have specific time slots yet. We should have the event schedule in the next 2 weeks.
16:00:26 <rbowen> You can just indicate that you're interested, and I'll get back in touch once we have a specific schedule.
16:00:28 <rbowen> </end>
16:00:43 <number80> rbowen: I have volunteers, I need to ping them again
16:00:46 <rbowen> Thanks to Mathieu and Wes for signing up already!
16:00:53 <jpena> I'll sign up, too
16:01:39 <dmsimard> I can probably do some booth time too
16:01:56 <jpena> #action everyone interested sign up at https://etherpad.openstack.org/p/rdo-barcelona-summit-booth
16:02:19 <jpena> we're out of schedule, so quickly
16:02:25 <jpena> #topic Chair for next meeting
16:02:26 <jpena> anyone?
16:02:43 <number80> o/
16:02:55 <number80> #action number80 chair next week
16:03:03 <number80> we're already past the hour :)
16:03:16 <jpena> ok, since it's past time, let's leave the open floor for after the meeting
16:03:32 <jpena> thx number80 for volunteering :)
16:03:35 <jpena> #endmeeting