01:51:29 <apevec> #startmeeting https://www.rdoproject.org/RDO_test_day_Kilo
01:51:30 <zodbot> Meeting started Tue May  5 01:51:29 2015 UTC.  The chair is apevec. Information about MeetBot at http://wiki.debian.org/MeetBot.
01:51:30 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
01:52:42 <apevec> #chair mburned aortega rbowen number80
01:52:42 <zodbot> Current chairs: aortega apevec mburned number80 rbowen
01:53:01 <apevec> #topic https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test
01:53:12 <apevec> ^ please note new http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
01:54:04 <apevec> I've started early to log questions and results from the other side of the globe, see you in a few hours!
06:48:19 <number80> o/
06:51:30 <mrunge> *\o/*
07:17:44 <gchamoul> o/
07:19:16 <number80> #chair mrung gchamoul
07:19:16 <zodbot> Current chairs: aortega apevec gchamoul mburned mrung number80 rbowen
07:19:29 <number80> #chair mrunge gchamoul
07:19:29 <zodbot> Current chairs: aortega apevec gchamoul mburned mrung mrunge number80 rbowen
07:19:40 <number80> oops, thanks for volunteering :>
07:58:59 <itzikb> When installing RDO with the repo http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm I get an error regarding openstack-ceilometer-compute http://pastebin.test.redhat.com/280693
08:02:50 <mabrams> packstack errored out on python-cinder-2015.1.0-2.el7.noarch (not available)
08:03:01 <hewbrocca> Morning all
08:03:58 <gchamoul> itzikb: CentOS7.x or RHEL7.x?
08:04:21 <itzikb> gchamoul:  You can check both
08:04:34 <itzikb> mabrams: Can you send the output?
08:04:42 <nyechiel> morning all, and happy testing!
08:05:02 <mabrams> 10.35.161.149_glance.pp:                             [ DONE ]
08:05:03 <mabrams> 10.35.161.149_cinder.pp:                          [ ERROR ]
08:05:03 <mabrams> Applying Puppet manifests                         [ ERROR ]
08:05:03 <mabrams> ERROR : Error appeared during Puppet run: 10.35.161.149_cinder.pp
08:05:03 <mabrams> Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-cinder' returned 1: Error: Package: python-cinder-2015.1.0-2.el7.noarch (openstack-kilo-testing)
08:05:03 <mabrams> You will find full trace in log /var/tmp/packstack/20150505-104642-Qf_MwW/manifests/10.35.161.149_cinder.pp.log
08:05:03 <mabrams> Please check log file /var/tmp/packstack/20150505-104642-Qf_MwW/openstack-setup.log for more information
08:06:42 <mabrams> Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-cinder' returned 1: Error: Package: python-cinder-2015.1.0-2.el7.noarch (openstack-kilo-testing)
08:08:53 <gchamoul> itzikb: didn't have that issue on CentOS 7.1
08:09:22 <mabrams> i'm on rhel71
08:09:45 <gchamoul> mabrams: did you enable the optional repo?
08:09:51 <itzikb> gchamoul: great! If you can update the test cases page -it'll be great
08:10:20 <mabrams> CONFIG_RH_OPTIONAL=y
08:11:00 <mabrams> and is "1" in the repo file
08:11:44 <itzikb> mabrams: Can you list your repositories?
08:12:05 <mabrams> [root@mabrams-svr021 yum.repos.d]# ls
08:12:05 <mabrams> rdo-testing.repo  redhat.repo  rhel-optional.repo  rhel-server.repo
08:12:05 <mabrams> [root@mabrams-svr021 yum.repos.d]# grep enabled *
08:12:05 <mabrams> rdo-testing.repo:enabled=1
08:12:05 <mabrams> rhel-optional.repo:enabled=1
08:12:06 <mabrams> rhel-server.repo:enabled=1
08:12:06 <mabrams> [root@mabrams-svr021 yum.repos.d]#
08:18:06 <mabrams> retrying with CONFIG_USE_EPEL=y
08:21:00 <mrunge> itzikb, please give us the output of just your /usr/bin/yum -d 0 -e 0 -y install openstack-ceilometer-compute
08:23:09 <itzikb> mrunge: Seems like python-werkzeug is missing
08:23:10 <itzikb> a sec
08:23:44 <ekuris> hi all, after enabling FWaaS the l3-agent failed: error log - ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent [req-642f4abc-2705-42a9-b25c-d8a4621e64e2 ] FWaaS plugin is configured in the server side, but FWaaS is disabled in L3-agent.
08:23:52 <ekuris> anyone else seen it?
08:26:10 <mrunge> itzikb, it's something with your repos. I think you'll need server-extras, too
08:26:12 <itzikb> mrunge: http://pastebin.com/pjCU8xu4
08:29:38 <gchamoul> itzikb: do you have extras repo enabled?
08:29:58 <itzikb> gchamoul: No. I'll add it. We need to add it to the instructions
08:30:46 <hewbrocca> jcoufal: how's your test run looking
08:30:59 <jcoufal> hewbrocca: I just built virt-env
08:31:02 <mrunge> itzikb, yes, that needs to be added. are you doing this?
08:31:09 <hewbrocca> so far so good
08:31:23 <gchamoul> itzikb: you have it here https://www.rdoproject.org/Repositories
08:31:40 <gchamoul> itzikb: but I agree that we need to add the link into the test day page
08:32:30 <itzikb> gchamoul: In RDO there are no subscriptions
08:33:27 <gchamoul> itzikb: well, but if you use RHEL7 to play with RDO ... you will need them
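A hedged sketch of the RHEL 7 repo setup gchamoul is describing (the repo IDs are the standard RHEL 7 Server ones; verify with subscription-manager repos --list):

    # enable the optional and extras channels that RDO pulls dependencies from
    sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
    sudo subscription-manager repos --enable=rhel-7-server-extras-rpms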
08:33:34 <hewbrocca> social: I think that damn IPA we had last night had caffeine in it
08:33:48 * hewbrocca was up all night
08:35:47 * mrunge sends hewbrocca another IPA
08:35:53 <mrunge> cheers
08:36:26 <jcoufal> hewbrocca: ImportError: No module named oslo_utils
08:36:39 <jcoufal> during undercloud installation
08:36:47 <jcoufal> actually right at the start
08:37:07 <hewbrocca> management repo fail?
08:37:27 <hewbrocca> I don't understand how this was working 2 days ago and now it is a flaming pile of crap
08:38:00 <jcoufal> hewbrocca: I don't know where the package should come from
08:38:08 <jcoufal> if from management or rdo
08:38:15 <jcoufal> Well if we use delorean rdo it works
08:38:29 <hewbrocca> OK, so it's all just repo breakage then
08:42:15 <hewbrocca> Update: We are trying again to build rdo-manager images, this time with the delorean repo enabled (will pull in some liberty stuff)
08:42:33 <hewbrocca> We will still be deploying a pure RDO Kilo overcloud
08:47:24 <itzikb> mrunge: About the horizon 'permission denied' error after (re)install using packstack that we saw yesterday. Do we have a solution?
08:47:47 <mrunge> itzikb, it's still unclear how to produce this
08:48:07 <mrunge> itzikb, I tried to reproduce it several times, but failed
08:48:46 <mrunge> itzikb, it seems httpd is not restarted after horizon installation, for some reason
08:49:25 <itzikb> mrunge: For example, I had a problem with ceilometer - didn't have the RPMs - then reinstalled, and it happened. Restarted httpd and it works, as you say
08:49:58 <itzikb> mrunge: Do you want me to update the bug?
08:50:05 <mrunge> itzikb, yes please
08:50:11 <itzikb> mrunge: ok
08:50:24 <mrunge> itzikb, and please re-assign it to openstack-puppet-modules
08:50:49 <itzikb> mrunge: ok
08:51:41 <gchamoul> itzikb: please assign it to me gchamoul@redhat.com!
08:56:49 <ekuris> itzikb, https://bugzilla.redhat.com/show_bug.cgi?id=1218543
08:57:26 <itzikb> mrunge: gchamoul: done
08:57:35 <mrunge> itzikb, thanks!
08:58:09 * number80 remembers to reassign annoying tickets to gchamoul
08:58:11 <number80> thanks :)
08:58:36 <gchamoul> :D
09:02:26 <social> itzikb: mrunge: when using packstack please use the -d parameter and send the /var/tmp/packstack content
09:03:06 <mrunge> social, thank you for the hint
09:03:38 <social> though you could still send the dir content it'll just contain less info
09:07:43 <itzikb> social: thanks
09:08:30 <social> itzikb: so if you have an issue pls fpaste the horizon.pp.log files and I'll have a look
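The debugging flow social asks for, as one hedged sequence (fpaste ships in the fpaste package; the log path matches mabrams' paste above):

    packstack --allinone -d        # -d keeps extra debug info under /var/tmp/packstack
    fpaste /var/tmp/packstack/*/manifests/*horizon.pp.log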
09:12:47 <apevec> jcoufal, hewbrocca - oslo_utils is provided by python-oslo-utils-1.4.0-1.el7 which we have in kilo/testing repo
09:12:56 <apevec> jcoufal, please send me full backtrace for ImportError: No module named oslo_utils
09:13:10 <hewbrocca> apevec: morning!
09:13:14 <itzikb> apevec: Regarding https://bugzilla.redhat.com/show_bug.cgi?id=1218543 - when adding the config file /etc/neutron/fwaas_driver.ini to /usr/lib/systemd/system/neutron-l3-agent.service and restarting the l3 agent, the agent starts
09:13:35 <hewbrocca> apevec: it's entirely possible jcoufal is not enabling kilo/testing repo...
09:13:44 <itzikb> When adding it to /etc/neutron/conf.d/neutron-l3-agent/ it didn't work
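A sketch of the unit-file workaround itzikb describes (the exact ExecStart line may differ on your build):

    # 1. In /usr/lib/systemd/system/neutron-l3-agent.service, append
    #    "--config-file /etc/neutron/fwaas_driver.ini" to the ExecStart= line.
    # 2. Reload systemd and restart the agent:
    sudo systemctl daemon-reload
    sudo systemctl restart neutron-l3-agent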
09:14:10 <jcoufal> apevec: I was told that this is enough: sudo yum install http://rdoproject.org/repos/openstack/openstack-kilo/rdo-release-kilo.rpm
09:14:15 <jcoufal> apevec: not sure what it enables
09:14:36 <apevec> jcoufal, only testing/kilo - but turns out that's enough only on centos7
09:14:40 <jcoufal> apevec: I mean what repo exactly
09:14:43 <apevec> rhel7 again requires optional and extras :(
09:14:56 <jcoufal> apevec: I am trying on centos
09:15:09 <apevec> this one https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/
09:15:29 <apevec> jcoufal, ok, if you can capture full backtrace that would be useful
09:16:01 <jcoufal> apevec: I will run it on my second machine and report it back to you
09:16:29 <apevec> thanks, in the meantime if you can workaround with Trunk image for undercloud, that's also good
09:16:37 <ajeain> apevec: according to the RDO test page, packstack should enable epel by default, but unless I modify the answerfile to CONFIG_USE_EPEL=y, it can't install openstack-cinder
09:17:31 <apevec> ajeain, that's on rhel?
09:17:37 <jcoufal> apevec: is this the same repo? sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo-0.noarch.rpm
09:17:58 <apevec> jcoufal, no
09:18:17 <ajeain> apevec: yes...RHEL 7.1
09:18:27 <apevec> jcoufal, https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test
09:18:33 <apevec> yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
09:18:43 <jcoufal> apevec: yeah I know, the first one is just hardcoded in instack script :(
09:21:13 <apevec> ajeain, pastebin me please output of yumdb info from_repo epel
09:21:14 <jcoufal> apevec: but md5sum show it is the same
09:21:24 <jcoufal> apevec: a3f87ae2fee9e21bb742aa195780cd17  rdo-release-kilo.rpm
09:21:24 <jcoufal> a3f87ae2fee9e21bb742aa195780cd17  rdo-release-kilo-0.noarch.rpm
09:21:50 <apevec> jcoufal, http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
09:21:58 <apevec> note *testing*
09:22:08 <jcoufal> dammit
09:22:42 <apevec> but otherwise, yes, rdo-release-kilo.rpm is a redirect; for release use that redirect
09:22:56 <jcoufal> apevec: in https://www.rdoproject.org/RDO_test_day_Kilo is "release"
09:23:24 <jcoufal> apevec: ah, i just refreshed
09:23:24 <jcoufal> somebody changed it in time
09:23:28 <jcoufal> ok
09:23:30 <jcoufal> here we go
09:23:31 <apevec> damn caching
09:23:40 <apevec> I did change it last night
09:24:44 <ajeain> apevec: # yumdb info from_repo epel
09:24:45 <ajeain> Loaded plugins: langpacks, product-id
09:25:28 <jcoufal> apevec: thanks
09:29:21 <apevec> ajeain, ok, so it's not epel but optional or extras
09:32:40 <tenaglia> hi all
09:34:02 <jcoufal> apevec: Error: Package: python-hardware-0.11-post25.el7.centos.noarch (delorean-rdo-management)
09:34:02 <jcoufal> Requires: python-pandas
09:34:17 <jcoufal> also
09:34:18 <jcoufal> Error: Package: python-hardware-0.11-post25.el7.centos.noarch (delorean-rdo-management)
09:34:18 <jcoufal> Requires: python-ptyprocess
09:35:13 <tenaglia> I'm using redhat-openstack/puppet-pacemaker and have a set of patches ready to improve the handling of resources. I've just noticed this new pull request: https://github.com/redhat-openstack/puppet-pacemaker/pull/45 from another member of RDO. How likely is that the PR will be accepted? And in general what is the state of the puppet-pacemaker module maintenance?
09:42:21 <hewbrocca> tenaglia: we are actively using puppet-pacemaker, so certainly patches are welcome
09:50:14 <tenaglia> hewbrocca: cool. I have a few patches to make it work with RHEL6, mainly affecting "pcs mode".
09:54:43 <apevec> jcoufal, I missed those python-hardware deps - is python-hardware stable enough that we can move it to the regular repo?
09:55:21 <apevec> delorean-rdo-management is http://trunk-mgt.rdoproject.org/repos/current-passed-ci/ ?
09:55:29 <apevec> jcoufal, ^
09:59:15 <derekh> apevec: I've restarted both deloreans, didn't notice anything in the sar output; the only thing I have noticed is that of the 2 times I looked after a failure, both times we lost the box during an ironic build
10:00:31 <apevec> derekh, can you tell whether it was in the rpmbuild or rpm install phase?
10:01:21 <derekh> apevec: not sure, I'll see if I can find out
10:03:52 <apevec> jcoufal, ok, here's repoclosure report for trunk-mgt repo http://paste.openstack.org/show/214946/
10:04:26 <jcoufal> apevec: I don't know anything about python-hardware packages really
10:04:35 <jcoufal> apevec: and as for the delorean-rdo-management, yes it is that one
10:05:32 <jcoufal> apevec: those dependencies were taken from delorean before?
10:06:00 <apevec> jcoufal, I'm tracking them down now
10:06:09 <jcoufal> ok
10:07:22 <apevec> there's python-hardware-0.14-1.fc23 in Rawhide, I can rebuild it in CBS
10:08:22 <apevec> built by flepied@redhat.com
10:08:48 <apevec> hewbrocca, ^ is Frederic Lepied in mgt team?
10:10:05 <jcoufal> apevec: he is leading our field team I think
10:10:20 <jcoufal> apevec: or consulting team to be more specific
10:10:38 <apevec> ok, I think we can trust his build then :)
10:16:44 <berendt> according to https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test I should install openstack-tempest-kilo and python-tempest-lib to be able to run tempest
10:16:55 <berendt> but there is no tempest package in the repository
10:18:36 <apevec> berendt, there are links to fedora builds in the wiki
10:18:38 <derekh> apevec: rpmbuild had failed, but the docker container hadn't been deleted, I'm thinking maybe the delete of the container didn't return http://trunk.rdoproject.org/centos70/45/1b/451bf7bb0dddf949000698993f784d4b5442f3da_3b5540c5/rpmbuild.log
10:18:57 <berendt> yes. so should I really use them for el7?
10:19:46 <derekh> apevec: I'll try and figure out a way to keep a console, am also adding a cgi-script to output some debug info, will let you know if I find anything out
10:19:57 <apevec> berendt, it should work, but I think eggmaster has el7 build in Koji, just a sec
10:20:40 <apevec> derekh, I propose as a quickfix/workaround to remove ironic from Trunk rdoinfo; mgt has a forked version in the trunk-mgt Delorean instance
10:21:56 <berendt> apevec: openstack-tempest-kilo requires python-junitxml and this is not available, I think it would be nice to directly have openstack-tempest in the el7 repo
10:22:00 <hewbrocca> Guys, we're throwing in the towel on rdo-manager test day -- we need more time for all the repos to stabilize
10:22:04 <hewbrocca> We will try again next week
10:22:11 <derekh> apevec: ok, that assumes the ironic thing wasn't just a coincidence, but I suppose we'll find out if the problem still persists
10:22:15 <hewbrocca> jcoufal: is going to send some mail about it
10:23:04 <apevec> derekh, yep
10:23:26 <ekuris> Alex_Stef, https://bugzilla.redhat.com/show_bug.cgi?id=1218543
10:24:00 <apevec> hewbrocca, ack - I'm resolving missing deps, I need to talk to trown/mburns about those remaining few
10:25:47 <hewbrocca> Yeah
10:26:10 <hewbrocca> In the future we just need to wait a bit longer before we try to use RDO to deploy itself :)
10:26:41 <hewbrocca> Or -- we need to explicitly separate "test deploying new RDO" from "test the manager's ability to deploy something"
10:29:13 <apevec> hewbrocca, yeah, we need separate test days obviously, mgt builds on RDO "core" so we need to test foundations first
10:29:29 <hewbrocca> Yes
10:30:03 <apevec> it's like building a house, you don't start with the roof :)
10:31:46 <kashyap> apevec: Yeah, that sounds like a good idea - having the 'base' RDO test day first, followed by management.
10:36:49 <apevec> ajo, can you have a look at https://bugzilla.redhat.com/show_bug.cgi?id=1218543 ? I lost track of which config goes where after the *aas split ...
10:37:14 <apevec> ajo, but fwaas in l3 seems wrong
10:38:05 <ajo> apevec, ack, I need to iterate a neutron spec, and then I'll look into that (probably this afternoon)
10:38:35 <ajo> apevec, yes, looks like there is an issue on the .service or the directory structure for the l3-agent
10:38:42 <ajo> apevec, from eran comments
10:38:50 <ajo> I'll tune the .service file or .spec as needed
10:38:55 <apevec> thanks
10:39:51 <apevec> ajo, when you get it, propose a fix in rpm-master and I'll cherry-pick to Rawhide/RDO Kilo builds
10:39:57 <derekh> apevec: https://github.com/redhat-openstack/rdoinfo/pull/44
10:40:27 <ajo> apevec, I wonder if having FWaaS disabled in the server and enabled in the l3-agent works
10:40:37 <ajo> I'm not sure we ever enabled fwaas in RDO
10:40:38 <apevec> derekh, merged
10:41:19 <apevec> ajo, what does "enabled fwaas in RDO" mean?
10:41:28 <apevec> it's separate package now
10:41:29 <ajo> apevec, enabling it on the l3-agent
10:41:52 <ajo> apevec, yes, I know, it's separated as a service plugin
10:42:16 <apevec> bz says "6.enable in answer file FWAAS= y"
10:42:20 <ajo> apevec, afaik it had to be enabled in the l3-agent before too.. not 100% sure, I'll check if there was any relevant change before that
10:42:26 <apevec> ekuris, ^ what does that mean ?
10:42:45 <ajo> apevec, ekuris , I guess he's talking about packstack setting,
10:43:13 <derekh> apevec: ack, server updated
10:44:02 <apevec> ajo, yeah, I'm just not sure what CONFIG_NEUTRON_FWAAS=y implies
10:44:20 <apevec> social, gchamoul ^
10:44:34 <apevec> did that ever work?
10:44:45 <apevec> I mean, before *aas split
10:45:06 <itzikb> ajo: Regarding l3 and fwaas
10:45:55 <ajo> apevec, I responded on the bz asking for /etc/neutron
10:46:00 <ajo> to see how packstack does in that case
10:46:10 <social> apevec: yes it did work
10:46:23 <ajo> and also, a second check: disabling server FWaaS and enabling FWaaS on the l3-agent to see if they're happy
10:46:34 <ajo> otherwise we may need to have some sort of extra configuration in packstack
10:47:01 <ajo> like adding the fwaas config file to /etc/neutron/conf.d/neutron-l3-agent
10:47:09 <ajo> probably that's the most reasonable solution apevec ^
10:47:39 <apevec> yep, that's why Ihar introduced conf.d
10:47:44 <ekuris> apevec, ajo    FWAAS=y   is the option in answer file
10:45:45 <ajo> or ...
10:45:47 <ajo> to the shared side of things
10:48:09 <ajo> ekuris, check the bz when you have a moment, I asked you for a bunch of checks
10:48:37 <apevec> ekuris, ack, it wasn't the full option name, but grep answered me :)
10:48:41 <ajo> I'll come back later to it, I need to finish something else now
10:51:18 <ekuris> ajo,  I saw your comment in BZ I will do it and ping you
10:53:25 <ajo> ekuris, thanks a lot
10:53:35 <itzikb> Regarding tempest: yesterday there were dependencies that were not installed after installing openstack-tempest-kilo and python-tempest-lib. Is there an update?
11:01:43 <apevec> itzikb, junitxml  or more?
11:02:07 <apevec> I'm rebuilding it in CBS now, will test repoclosure for missing deps
11:02:08 <itzikb> apevec: I'll check again
11:10:33 <avico> hey folks is packstack ready for kilo?
11:12:17 <mrunge> avico, at least, it's worth a try
11:12:25 <apevec> avico, it is in testing/kilo repo, see https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test
11:12:34 <avico> thanks!
11:12:42 <avico> I'll give it a run
11:18:34 <apevec> berendt, eggmaster, itzikb - yum install openstack-tempest works now with testing/kilo, I've updated https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test
11:19:39 <apevec> section about venv shouldn't be necessary, but it's fine if it helps you isolate tempest from rest of openstack installation
11:19:53 <apevec> i.e. if you're testing on allinone node
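The install step apevec just added to the wiki, with a quick sanity check (package names from the discussion):

    sudo yum install -y openstack-tempest
    rpm -q openstack-tempest python-tempest-lib    # both should now resolve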
11:23:09 <panbalag> Hi, has anyone seen this error with an RDO install on RHEL 7.1? "Error: Invalid parameter create_mysql_resource on Class[Galera::Server] at /var/tmp/packstack/3bf3e9c71344425aa247923ffb36f308/manifests/<ip-address>_mariadb.pp:21"
11:23:27 <itzikb> apevec: thanks!
11:25:15 <panbalag> apevec, any idea how to resolve this error?
11:26:33 <apevec> panbalag, didn't see this reported before, can you provide all steps and describe your testing environment?
11:26:58 <panbalag> apevec, it's a fresh install on RHEL 7.1. I did packstack --allinone
11:27:38 <apevec> panbalag, which repos do you have enabled? please pastebin yum repolist
11:28:07 <apevec> jpena, social - can you tell me what could go wrong on mariadb.pp:21 line?
11:28:37 <social> apevec: missing/wrong version of galera puppet module
11:28:40 <apevec> panbalag, there should be also corresponding mariadb.pp.log
11:28:57 <apevec> ah, how could that happen
11:28:57 <panbalag> the logs show the same message ..let me copy that too in the pastebin
11:29:34 <apevec> panbalag, rpm -q openstack-puppet-modules
11:30:00 <panbalag> http://pastebin.test.redhat.com/280758
11:30:01 <social> apevec: https://github.com/redhat-openstack/puppet-galera/pull/10 missing from opm package
11:30:04 <panbalag> apevec, http://pastebin.test.redhat.com/280758
11:30:29 <panbalag> apevec, [root@lynx13 yum.repos.d]# rpm -q openstack-puppet-modules
11:30:29 <panbalag> openstack-puppet-modules-2014.2.15-1.el7.noarch
11:31:02 <apevec> panbalag, ah, so you're installing RDO Juno, not testing Kilo?
11:31:38 <panbalag> apevec, I'm testing kilo only...the rdo-repo shows kilo
11:31:42 <social> apevec: panbalag: looks like packstack kilo with opm juno
11:31:58 <apevec> social, yeah, that won't work
11:32:12 <panbalag> I used the latest repo link you had sent in reply to my email.
11:32:23 <apevec> panbalag, let me try to understand how did you get into this situation
11:32:25 <tfreger> itzikb: are you familiar with the issue in neutron-server?
11:32:33 <panbalag> apevec, here is the base url i used baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/
11:33:07 <aortega> apevec, https://redhat.bluejeans.com/6388798212/
11:33:08 <apevec> panbalag, right, but this repos doesn't have OPM 2014.2
11:33:18 <apevec> aortega, clicking in a sec
11:33:48 <panbalag> apevec, I installed openstack-packstack yesterday and found it was installing juno release..so I uninstalled it and sent you an email regarding the repos. So today I updated the repo and installed again
11:34:01 <panbalag> apevec, probably it did not remove everything completely
11:34:08 <apevec> panbalag, ah that explains
11:34:14 <apevec> yeah, best to start from scratch
11:34:21 <panbalag> apevec, ok..let me do that
11:34:42 <apevec> puppet just ensures packages are present and doesn't try to update them
11:35:08 <apevec> panbalag, BTW yumdb info openstack-puppet-modules should tell you which repo RPM came from, in from_repo line
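A hedged example of the check apevec describes (the expected repo name comes from the earlier pastes):

    yumdb info openstack-puppet-modules | grep from_repo
    # from_repo = openstack-kilo-testing    <- anything Juno here explains the failure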
11:35:41 <verdurin> Is it a Packstack-only installation for the test days, until the rdo-manager testing next week?
11:35:57 <panbalag> apevec, yes looks like it came from juno "origin_url = https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-puppet-modules-2014.2.15-1.el7.noarch.rpm"
11:37:16 <panbalag> apevec, I will do a fresh install of rhel and then try again.
11:38:49 <verdurin> I.e. no Foreman-based installation?
11:40:22 <hewbrocca> verdurin: correct
11:40:57 <verdurin> hewbrocca: Okay - thanks for the confirmation.
11:42:57 <social> do we have some tempest config generator?
11:48:46 <ekuris> ajo, I uploaded all the info that you wanted to the bug. After I made the changes in the neutron conf file as you wrote, I got an error in the l3 agent about iptables qrouter. It looks like the VM cannot access the external network. I can access the external net from the qrouter
11:48:59 <ekuris> ajo, can you advise?
11:51:45 <ajo> ekuris, so, if you disable fwaas in neutron-server and enable it in the l3-agent
11:51:45 <ajo> it doesn't work
11:51:51 <ajo> they need to be correctly coordinated, right?
11:52:00 <ajo> enabled or disabled on both sides
11:53:20 <ajeain> mrunge: hey Matthias, did Ido talk to you about the collapse icon issue in the tree?
11:53:53 <ajeain> mrunge: right now, you see a small square, which I believe should be an arrow, right?
11:54:02 <pixelbeat> interesting. a 1GB RAM VM is no longer big enough for a packstack --allinone
11:55:48 <ekuris> ajo, I disabled it in neutron server; in the l3-agent I didn't change anything
11:56:06 <ajo> ok,
11:56:16 <mrunge> ajeain, no, he did not
11:56:17 <ajo> ekuris, so that means we need good coordination between both
11:56:38 <ajo> ekuris, do you know if, when fwaas is disabled, packstack will install the firewall package?
11:56:48 <mrunge> ajeain, where is it? could you please screenshot it?
11:57:00 <ajeain> mrunge: sure....
11:57:05 <ajeain> iovadia: ^^
11:57:08 <ajo> apevec, ekuris, I'm considering adding a new conf.d on the "shared" side...
11:57:29 <ajo> and getting the fwaas config installed in there (for the l3-agent)
11:57:46 <ajo> I need to check that what I'm thinking makes sense
11:58:06 <ekuris> ajo, in the l3 agent I don't have anything about FW.
11:58:10 <ajo> apevec or the "shared" side should only be used to override default configurations on our side (I guess that's the use)
11:58:29 <ajo> ekuris, but you added the config on the .service file, right?
11:59:06 <ajeain> mrunge: sent it via email
11:59:16 <ajo> ekuris, the "Permission denied: u'/var/lib/neutron/lock/neutron-iptables-qrouter-7c738e45-cdc0-4f26-9e97-5993946b1031'"
11:59:19 <ajo> should be something different
11:59:25 <ekuris> ajo, I added this one: /etc/neutron/fwaas_driver.ini
11:59:55 <ekuris> ajo,  what is this error ?
12:00:10 <ajo> ekuris, I'd guess it's related to selinux
12:00:18 <ekuris> ajo, what is the reason
12:00:19 <ajo> ekuris, could you check if setenforce 0
12:00:23 <ajo> fixes the error?
12:00:26 <ekuris> sec
12:00:30 <ajo> it could be a directory permission error
12:00:38 <ajo> like the running agent as user neutron
12:00:45 <ajo> can't write to /var/lib/neutron/lock
12:00:52 <ajo> can you check the permissions of /var/lib/neutron
12:00:57 <ajo> and /var/lib/neutron/lock ?
12:01:51 <mrunge> ajeain, thanks!
12:03:18 <ekuris> ajo, it really is set to enforcing. Permissions of /var/lib/neutron: drwxr-xr-x.  5 neutron  neutron  4096 May  5 14:30 neutron
12:03:50 <ekuris> ajo, same with /var/lib/neutron/lock
12:04:09 <ajo> ok, those settings seem ok
12:04:35 <ajo> I'd check the audit log because then it looks more probably like an selinux issue
12:04:50 <ajo> if enforcing is disabled, and the error disappears, then it's selinux
12:04:55 <ajo> but we need to update the selinux policies
12:05:08 <ajo> http://www.ajo.es/post/100147773734/how-to-debug-se-linux-errors-and-write-custom
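The checks ajo walks through, collected as one hedged sequence:

    getenforce                                    # current SELinux mode
    sudo setenforce 0                             # permissive: does the error go away?
    ls -ld /var/lib/neutron /var/lib/neutron/lock
    sudo grep -i avc /var/log/audit/audit.log     # AVC denials point at SELinux policy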
12:05:29 <apevec> pixelbeat, it is not, price of progress! 3GB RAM VM is minimum afaict
12:05:35 <apevec> 2GB was oom-ing
12:06:06 <apevec> pixelbeat, and on current idle allinone, ps_mem shows 2.3GiB used
12:06:14 <ajo> apevec, pixelbeat  ouch
12:06:37 <pixelbeat> yep it's a bit mad
12:06:53 <apevec> ajo, that's with all crap: ceilo, swift...
12:07:51 <pixelbeat> wiki updated with that 3GB min RAM suggestion
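For reference, the measurement tool mentioned above (ps_mem is pixelbeat's utility; installing via pip is one option, an assumption here):

    sudo pip install ps_mem    # or grab it from https://github.com/pixelb/ps_mem
    sudo ps_mem | tail -3      # the last lines summarize total RAM in use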
12:11:47 <ajeain> mrunge: let me know if this has been reported already
12:12:15 <mrunge> ajeain, it's not
12:12:21 <mrunge> but it should
12:12:41 <mrunge> well, with -theme the world looks completely different
12:19:27 <ajeain> mrunge: after installing the theme (and restarting httpd), I got "something went wrong", and I couldn't get into Horizon anymore, only after clearing all cookies/history
12:19:56 <ajeain> mrunge: and now, I don't see the tree anymore... it now uses tabs
12:20:22 <mrunge> ajeain, yes. we're still working on that
12:20:37 <mrunge> ajeain, that's due to -theme being deprecated in RDO
12:21:17 <ajeain> mrunge: OK, so I'm reporting a bug in launchpad, and link it to downstream bug (the icon issue), and another bug about the trace I am getting (while something went wrong)
12:22:09 <mrunge> ajeain, yes please: icon issue. please mention, your dashboard is under /dashboard, not under /
12:22:55 <ajeain> mrunge: it was always under dashboard, wasn't it?
12:22:56 <mrunge> ajeain, and the traceback for -theme package: idk if you should report that. that's too obvious to ignore
12:23:10 <mrunge> ajeain, yes, but now upstream made it configurable
12:23:10 <ajeain> mrunge: I see...
12:23:25 <mrunge> ajeain, but apparently, nobody tests that
12:23:42 <ajeain> mrunge: so, why is it an issue? is it related to the icon issue?
12:23:54 <ajeain> mrunge: or should I just add it into the icon bug?
12:24:45 <mrunge> ajeain, the icon issue is: webfonts. for me: http://<server-ip>/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.woff?v=4.1.0
12:25:00 <mrunge> and it should read /dashboard/static
12:25:27 <mrunge> ajeain, you don't even see that a font is missing, if you have it already installed
12:27:14 <ajeain> mrunge: http://10.35.117.2/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.woff?v=4.1.0 brings "not found" error
12:27:31 <mrunge> ajeain, exactly
12:27:50 <kashyap> pixelbeat: apevec: `ps_mem` reports 2.6 G for a single node DevStack instance built yesterday:  https://kashyapc.fedorapeople.org/virt/openstack/ps_mem_no_Nova_instances_5MAY2015.txt
12:27:54 <mrunge> ajeain, but it should present you something on http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.woff?v=4.1.0
12:28:06 <mrunge> (see added /dashboard in path?)
12:28:07 <ajeain> mrunge: still not found
12:28:47 <ajeain> mrunge: although http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts exists
12:29:00 <apevec> kashyap, yeah, upstream craze nothing specific to RDO or RHEL...
12:29:24 <kashyap> apevec: By "single node", I mean just these services - Keystone, Glance, Nova, Neutron and Cinder
12:30:01 <mrunge> ajeain, http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.ttf?v=4.1.0 exists
12:30:01 <apevec> so that's even w/o swift and ceilo which we have in RDO allinone...
12:30:22 <kashyap> apevec: Although, looking at my history of ps_mem profiles of DevStack, some months ago, _without_ Cinder alone, DevStack reported 2.6 GB. Seems to have improved slightly now.
12:31:39 <weshay> apevec, multi node passed and ran tempest fyi
12:32:00 <apevec> weshay, thanks for the report
12:32:38 <apevec> weshay, can we move those rdo-kilo jobs to public http://prod-rdojenkins.rhcloud.com/ now that trystack seems stable enough?
12:32:52 <apevec> weshay, aortega suggested to run them in parallel w/ internal for a while
12:32:54 <ajeain> mrunge: yes....webfonts.ttf exists but not webfonts.woff
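One hedged way to check all the font variants at once (host and version string come from the paste above):

    for ext in woff ttf eot svg; do
      curl -s -o /dev/null -w "%{http_code} $ext\n" \
        "http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.$ext?v=4.1.0"
    done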
12:33:08 <apevec> weshay, then leave them public only
12:33:21 <apevec> so we can send job links to rdo-list
12:33:27 <mrunge> ajeain, yes. I'm still looking into that
12:33:33 <weshay> apevec, as long as the infra is stable
12:34:37 <apevec> sine qua non
12:34:55 <weshay> apevec++ on epel tempest
12:34:55 <zodbot> weshay: Karma for apevec changed to 4:  https://badges.fedoraproject.org/badge/macaron-cookie-i
12:34:57 <ekuris> ajo, I disabled selinux and rebooted my host but I still have errors in the l3-agent: IOError: [Errno 13] Permission denied: u'/var/lib/neutron/lock/neutron-iptables-qrouter-7c738e45-cdc0-4f26-9e97-5993946b1031'
12:35:35 <ekuris> ajo, my router has access to the external net but it cannot reach the internal net
12:36:17 <ajo> ekuris, yes, it's not able to configure iptables to do the forwarding or other stuff probably
12:36:35 <ekuris> ajo, I mean the VM cannot access the external net. Looks like the router does not do NAT
12:36:36 <ajo> we may want to loop in jlibosva or somebody else if we need this looked at quickly
12:36:51 <ajo> I'm still behind the work I need to submit for today :(
12:37:18 <ekuris> open a bug  ?
12:37:20 <ajo> ekuris, we need to investigate why the agent can't create the lock file
12:37:24 <ajo> ekuris, yes
12:37:28 <ajo> I'd open a separate bug for that
12:38:10 <panbalag> apevec, this time installation fails because of a missing package "Requires: python-cheetah"... any idea what repo I need to add for this?
12:38:39 <social> apevec: do we have sample tempest config or some generator?
12:38:48 <jlibosva> ekuris: hi
12:39:08 <apevec> social, there should be a conf generator in the tempest RPM
12:39:14 <apevec> weshay, eggmaster ^
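Nobody names the generator here, so a safe way to locate whatever config tooling the RPM ships:

    rpm -ql openstack-tempest | grep -iE 'config|tools'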
12:39:15 <jlibosva> ekuris: what is output of 'ls -l /var/lib/neutron' ?
12:39:54 <jlibosva> ekuris: also can you please paste the whole traceback from l3 agent to pastebin?
12:40:50 <apevec> panbalag, that's in RHEL7 extras repo
12:41:02 <panbalag> apevec, ok.
12:41:04 <weshay> apevec, I've asked dkranz for some initial doc.. I keep hearing no
12:41:40 <apevec> weshay, what are CI jobs doing?
12:41:49 <ajo> jlibosva: [14:03:18]  <ekuris>	ajo,  it really set to enforce . about permission of var/lib/neutron  =drwxr-xr-x.  5 neutron  neutron  4096 May  5 14:30 neutron
12:41:49 <ajo> [14:03:51]  <ekuris>	ajo, same with /var/lib/neutron/lock
12:42:04 <ajo> jlibosva, and he posted logs, here: https://bugzilla.redhat.com/show_bug.cgi?id=1218543
12:42:32 <jlibosva> ajo: k, thanks :)
12:42:34 <ekuris> jlibosva, http://pastebin.test.redhat.com/280780
12:42:45 <ajo> jlibosva, thank *you*
12:43:35 <ekuris> ajo, I don't know if the bug is valid here because it appeared after I disabled FW in the neutron conf
12:44:01 <weshay> social, dkranz can help w/ tempest
12:46:51 <jlibosva> ekuris: when you said you disabled selinux, did you do it via /etc/selinux?
12:47:09 <jlibosva> ekuris: if you do 'grep -i avc /var/log/audit/audit.log', does it print something?
12:47:54 <ekuris> yes I do
12:48:32 <ekuris> jlibosva, yes I do, there is no output
12:50:52 <panbalag> apevec, is this the repo "http://download.devel.redhat.com/rel-eng/EXTRAS-7.1-RHEL-7-20150205.0/compose/Server/x86_64/os/" ?
12:52:55 <apevec> panbalag, yes, but please don't share internal URLs on public channel...
12:53:07 <apevec> (too late it's in meeting minutes :)
12:54:36 <mrunge> the same is true for using internal pastes. fpaste does an excellent job
12:54:55 <mrunge> but everyone can have an eye on the paste
13:02:26 <hewbrocca> apevec: groan... it looks like both "python-hardware" owners are OOTO for the week
13:03:12 <hewbrocca> is there any way around that or do we just have to wait for them to get back?
13:04:07 <apevec> hewbrocca, ok, maybe trown can have a look at merging to Rawhide and then send patch to number80 who is ueberpackager
13:04:26 <itzikb> Hi, can someone write some words about tempest configuration here https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test ?
13:04:50 <apevec> trown, or email it to me and I'll build from a local branch
13:05:30 <trown> apevec: what am I emailing?
13:05:46 <number80> apevec, hewbrocca: read the thread already, so should I rebase to latest git in rdo-management?
13:06:12 <trown> I am totally down to help with the hardware library. I have no idea what I need to do though.
13:06:55 * number80 pestering against the CentOS 7.1 cloud image as it breaks all the nice scripts to quickly instantiate a VM
13:07:13 <number80> it makes virt-install hang :/
13:09:29 <hewbrocca> so IIUC the package python-hardware in Delorean is different from the one in Fedora Rawhide (right apevec ?)
13:09:46 <hewbrocca> and we need to resolve that so that we can get that package into RDO kilo
13:10:15 <hewbrocca> I guess we have been using the one in Delorean trunk, so that is probably what we should get into RDO
13:10:20 <hewbrocca> but that is about all I know at this point
13:10:22 <number80> ack
13:10:27 <trown> apevec: hewbrocca can we not just take exactly what is in delorean-mgt?
13:12:03 * number80 likes RDO manager logo
13:12:27 <itzikb> Does it make sense to add http://docs.openstack.org/developer/tempest/configuration.html to https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test?
13:13:20 <number80> trown: problem is that dtantsur built a newer python-hardware (0.14 vs 0.11)
13:13:57 <number80> I can downgrade but that means introducing Epoch (and I'd rather avoid it if possible)
13:14:33 <trown> ah...I can fix that...that is just the tags not getting updated in the delorean repo
13:14:52 <hewbrocca> xlnt
13:14:54 <trown> number80: what is in the delorean-mgt repo is actually .14
13:15:02 <hewbrocca> trown: many thanks
13:15:20 <number80> trown: so we just need to push this build in RDO then ?
13:15:44 <number80> http://koji.fedoraproject.org/koji/buildinfo?buildID=625251 (btw, it was fred, not dtantsur)
13:15:46 <trown> number80 can you point me to the dtantsur build
13:15:48 <trown> ah thanks
13:16:16 <number80> if it works then it's just a matter of submitting to gerrithub
13:16:57 * number80 notices that fedora notification system failed to notify him of flepied builds
13:19:52 <trown> number80: on the link above am I just checking that the spec is correct? I am a total noob at real packaging, I have only done delorean up to this point.
13:20:22 <apevec> trown, spec in Fedora is missing some deps you've added, please compare git history
13:20:43 <apevec> trown, ideally, we would update Rawhide so we can drop fork
13:20:44 <number80> oh, it's missing deps
13:20:59 <apevec> yeah, I think Fedora spec needs fixes from rdo-mgt
13:21:24 <number80> http://pkgs.fedoraproject.org/cgit/python-hardware.git/tree/python-hardware.spec (URLs are pretty predictable)
13:21:34 <apevec> but would also like to confirm there aren't non-upstream patches in https://github.com/rdo-management/hardware
13:21:51 <number80> apevec: ack, spinning a new build (and I already checked that there's no new patches in rdo-management package)
13:22:23 <apevec> number80, be careful, rdo-mgt is using own source
13:22:39 <number80> ok missing R: python-{ptyprocess,pandas,pbr}
13:22:47 <number80> apevec: ouch
13:22:49 <apevec> https://github.com/rdo-management/rdoinfo/blob/master/rdo.yml#L123
13:23:09 <trown> apevec: the rdo-mgt/hardware fork is strictly behind enovance/hardware
13:23:19 <apevec> number80, yeah those, but also need to double-check upstream requirements.txt is really correct
13:23:37 <number80> trown: according to these, you advise us to use the enovance/hardware release?
13:23:38 <apevec> trown, any reason not to use latest upstream?
13:23:52 <number80> apevec: thanks to rdopkg reqcheck it's already part of my routine :)
13:24:06 <trown> apevec: number80, ya enovance/hardware master would be best
13:24:31 <number80> trown, apevec: then, it should be settled, i'll check requirements and update the package accordingly
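A hedged sketch of verifying that a respun package carries the deps number80 lists (the mock target is the usual EPEL 7 config, an assumption):

    mock -r epel-7-x86_64 --rebuild python-hardware-*.src.rpm
    rpm -qp --requires /var/lib/mock/epel-7-x86_64/result/python-hardware-*.noarch.rpm \
      | grep -E 'ptyprocess|pandas|pbr'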
13:24:45 <trown> https://github.com/rdo-management/python-hardware-packaging/blob/packaging/python-hardware.spec
13:24:51 <trown> that is our delorean packaging
13:25:10 <trown> it works, what I am working on totally relies on that package so I am confident in that
13:25:38 <number80> trown: if you'll be working on packaging, when the kilo craze'll be over, feel free to ping me for sponsoring :)
13:25:45 <trown> we would need to change the {upstream_version} back to an actual version though
13:26:03 <number80> trown: apevec has improved that part on kilo spec
13:26:06 <trown> number80: awesome, I need to learn how to do the real packaging
13:26:25 <number80> more packagers, less SPOF :)
13:26:38 <rbowen> +1
13:27:03 <trown> +1 to less SPOF
13:27:10 <trown> lots of those it feels like
13:27:20 <apevec> number80, yeah, please add that ?upstream_version thing in Rawhide, so we can keep delorean/rawhide specs closer
13:27:33 <rbowen> Speaking of which, I got email from the guy from CloudBaseSolutions about packaging hyper-v stuff for RDO.
13:27:39 <number80> apevec: +2
13:27:47 <hewbrocca> number80: thanks for pulling that together
13:27:55 <rbowen> I encouraged him to drop by here, and have a look at the packaging guide. Hopefully we'll hear more from him soon.
13:28:04 <alexpilotti> rbowen: cloudbase guy o/ :-)
13:28:04 <rbowen> You may remember talking with him in Paris and Atlanta.
13:28:10 <rbowen> Oh, there you are!
13:28:13 <rbowen> Yay.
13:28:43 <trown> oh hi alexpilotti, I have seen you on #openstack-ironic as well
13:28:50 <number80> alexpilotti: hi o/ # you may not remember but i was trolling you at Paris :>
13:29:05 <alexpilotti> lol
13:29:41 <rook> I will be playing in the test day today! ;)
13:30:32 <alexpilotti> rbowen: the main idea is that we’d like to add Hyper-V support in RDO
13:31:17 <alexpilotti> until kilo we didn't need anything particular on the RDO side, simply adding additional compute nodes to an RDO deployment
13:31:49 <alexpilotti> … and disabling the libvirt nova-compute that RDO requires to install, ahem
13:32:35 <alexpilotti> we also have a tool that does an automated RDO deployment on anything with Hyper-V, starting with a Windows 8 laptop: http://www.cloudbase.it/v-magine/
13:32:38 <trown> number80, in the hardware spec it is using pypi release as source, but the last release in pypi is missing a few commits
13:33:02 <alexpilotti> with kilo there’s a new requirement: we need to add a rpm package for networking-hyperv
13:33:22 <number80> trown: so I should just keep the current release in fedora rawhide and fix the spec
13:33:23 <alexpilotti> so I asked rbowen about how to proceed and I got the pointers
13:33:46 <rbowen> And of course if you need anything else, you can always ask here. We span most timezones.
13:34:49 <trown> number80: what do we do in the case we have one commit we need that is not yet in a pypi release? add a patch?
13:35:00 <number80> trown: yes, sir !
13:35:33 <number80> alexpilotti: I'm EMEA based (you're in Italy, right), so you could ping me for packaging stuff (and apevec too, if he agrees)
13:35:53 <alexpilotti> I’m actually in Romania (EEST, GMT+3 now)
13:36:11 <number80> not a big difference, but my mistake :)
13:36:21 <alexpilotti> np :-)
13:36:25 <number80> mail is hguemar AT redhat DOT com
13:36:35 <alexpilotti> funny thing is that everybody thinks we are based in Italy
13:37:09 <alexpilotti> due to my name and the .it in the domain, while I’m actually the only italian in the company :-)
13:37:31 <alexpilotti> number80: thanks! going to send you some intro spam right now
13:37:33 <number80> lol
13:45:15 <alexpilotti> number80: email sent
13:47:54 <trown> number80: I am actually not sure if the missing commit is worthy of a patch, I would say we just fix the spec and go with the release in rawhide
14:08:24 <ayoung> It is way too quiet in here
14:08:32 <ayoung> Something has to be on fire somewhere.
14:08:41 <ricrocha1> hi. testing openstack-kilo on rdo, with this repo: https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/  i get python-requests 1.1.0  (2.3.0 seems to be there for the juno el7 repo), and i get this: Error: Package: 1:python-keystoneclient-1.3.0-1.el7.noarch (openstack-kilo-testing) Requires: python-requests >= 2.2.0
14:09:03 <ricrocha1> should i also enable the openstack-juno repo to get those pkgs?
14:10:02 <ayoung> ricrocha1, is that package installed but the wrong version?
14:10:09 <ayoung> python-request
14:10:28 <ricrocha1> it is
14:11:04 <ricrocha1> but i don't see a 2.2.0 candidate with the el7 kilo testing
14:24:25 <yrabl> Is there a reason that the cinderclient has an older version, created on the 21st of December 2014?
14:36:40 <lon> note:
14:36:49 <lon> mariadb-server is required by openstack-trove-api
14:36:55 <lon> conflicts with mariadb-galera-server
14:37:11 <lon> mariadb-galera-server provides mysql-server, but not mariadb-server
14:37:20 <lon> thus, explosions.
14:37:24 <tosky> uh
14:37:33 <lon> suggestion: openstack-trove-api require: mysql-server
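A hedged way to see the provides tangle lon is describing (repoquery comes from yum-utils; run against the testing/kilo repo):

    repoquery --whatprovides mysql-server                      # expect both mariadb flavors
    repoquery --requires openstack-trove-api | grep -iE 'mariadb|mysql'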
14:37:50 <tosky> I thought it was fixed
14:37:59 <lon> was testing: https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/
14:38:04 <tosky> or at least I verified it was fixed
14:38:13 <tosky> ah, it was fixed only on rhel-osp
14:38:20 <lon> yeah
14:38:29 <lon> I can work around it.
14:38:46 <tosky> lon: the question is (and let me ping vkmc and sgotliv as well): do we want to fix it this way in RDO too?
14:42:48 <vkmc> why is mariadb-galera-server providing mysql-server?
14:43:31 <vkmc> and... is mysql-server == mariadb-server? are we keeping mysql-server as a link for mariadb-server?
14:45:36 <vkmc> lon, tosky ^
14:45:44 <vkmc> sgotliv, ^
14:48:56 <sgotliv> vkmc, why is mariadb-server required by trove api?
14:48:57 <number80> vkmc: due to history, mariadb is supposed to be a drop-in replacement to mysql
14:49:37 <vkmc> number80, yeah, that's why it feels odd that mariadb-galera-server conflicts with mariadb-server but not with mysql-server
14:50:00 <number80> lon, vkmc, sgotliv, tosky: I need to sync RHOS/RDO
14:50:02 <vkmc> sgordon, to store data related to datastores, datastores versions and such
14:50:04 <tosky> sgotliv: because of me nagging for a solution to the problem "trove services die if they start before the underlying database server"
14:50:15 <lon> vkmc: you get to pick between mariadb-server and mariadb-galera-server.
14:50:16 <vkmc> sorry sgordon
14:50:18 <vkmc> sgotliv, ^
14:50:23 <lon> if we made one obsolete the other, it would mean you don't have a choice
14:50:36 <lon> so, they should generally both provide/obsolete mysql-server
14:51:04 <sgotliv> number80, I wonder if Cinder has the same dependency?
14:51:37 <number80> sgotliv: no deps on mysql/mariadb/galera :)
14:51:37 <lon> the intent is to allow the admin to pick "do I want clustered db or not"
14:51:55 <lon> rohara: ^ I think that's right?
14:51:58 <vkmc> sgotliv, other services manage the dependency server side... not in the package
14:52:27 <number80> lon: if we rely on deployment tools, I would just drop the "hard" requirements and switch to soft requirement
14:52:40 <number80> and let the deployment tool pop the question
14:52:58 <lon> sure, but you need some sort of mysql flavor to run trove
14:53:02 <lon> hence, mysql-server.
14:53:10 <lon> depend on that, no problem.
14:53:10 <sgotliv> lon, its different
14:53:37 <tosky> number80, lon, sgotliv, vkmc: the dependency is a quick hack, but otherwise structural changes to trove services are needed, that's why maybe it shouldn't go in RDO
14:53:42 <number80> lon: yes but what happens if you want to use a remote mysql server :(
14:53:51 <number80> tosky: +1
14:53:55 <lon> number80: makes sense.
14:54:35 <lon> one thing to avoid however is
14:54:56 <lon> and, this isn't an example of that happening, since remote-mysql is valid...
14:54:56 <tosky> number80: this means I will have to add some workaround on the CI, because I 100% hit the issue whenever I restart the controller on the CI environment
14:55:16 <lon> but removing dependencies for things that really are needed, because you want the deployment tool to do it.
14:55:20 <lon> ^ that would be bad.
14:55:57 <number80> lon: dnf/yum by default installs Recommends:
14:56:13 <lon> yeah, that's fine
14:56:22 <number80> it wouldn't change much end-user experience, and allow for more flexibility
14:56:25 <rohara> lon: oh my favorite topic! :)
14:56:34 <number80> but not introducing that in Kilo :)
14:56:46 <lon> rohara: nevermind, i forgot the whole remote-sql thing.  It's a trove problem
14:56:59 <lon> not sure, why, but there it is :]
14:57:06 <lon> (not sure why I forgot)
14:58:00 <rohara> yeah so mariadb-server and mariadb-galera-server needed to have the same provides but NOT use an obsolete
14:58:09 <rohara> i am trying to recall the details
14:58:32 <rohara> originally i had an obsolete statement in the spec but had to remove it
14:59:33 <lon> tosky: CI would just need to install mariadb-whatever when doing trove tests (probably mariadb-server)
15:00:08 <lon> (I don't see CI setting up a galera cluster to do mariadb-galera based trove tests in the short term?)
15:00:13 <tosky> lon: that's not the problem: the problem is that whenever you restart the system, on the CI environment, trove services start before mysql/mariadb/mariadb-galera/whatever
15:00:20 <tosky> and they die
15:00:21 <tosky> and boom
15:00:25 <lon> gah
15:00:40 <lon> so bad units or something
15:01:35 <mrunge> sounds like an issue in trove unit files
15:01:44 <lon> mrunge: yeah
15:01:49 <mrunge> well, wait
15:02:07 <tosky> no, no
15:02:16 <mrunge> you don't need to have trove and mysql/mariadb on the same host, do you?
15:02:26 <tosky> yes, that's the point: what is the supported configuration?
15:02:45 <mrunge> in that case, we don't have a hint if the database was already started
15:02:47 <lon> mrunge: no, that's what number80 pointed out.  but, if on same system, it should work :]
15:02:49 <tosky> in the product it's a different story; in RDO, do we want to tie trove to the controller/database host?
15:03:09 <lon> so trove
15:03:35 <mrunge> that would create a lovely bottleneck
15:03:37 <tosky> yes, trove dies etc etc, I filed an upstream bug for this
15:04:02 <tosky> note: while testing this, I realized that also neutron-server does not start if the underlying db can't be reached
15:04:03 <lon> guestagent should perhaps require a database
15:04:13 <tosky> noooo, that's not about guestagent
15:04:14 <lon> https://wiki.openstack.org/wiki/Trove
15:04:22 <tosky> forget guestagent
15:04:32 <lon> ok
15:06:24 <sgotliv> lon, number80, tosky, vkmc: let's take it on our weekly Trove call today
15:07:13 <lon> ok
15:07:15 <number80> ok
15:13:18 <trown> number80: I figured out how to make the patch for hardware...if you have not already done a build, this should be what we want: https://github.com/rdo-management/python-hardware-packaging
15:14:08 <trown> crap...I just did that on the wrong repo...that is correct, but I am moving it to my own repo
15:15:25 <jpena> tosky: for the Trove startup, I think you could apply a similar fix to what I proposed for Neutron in https://bugzilla.redhat.com/show_bug.cgi?id=1188198
15:16:22 <tosky> jpena: I file bugs, others usually fix them :) vkmc, lon, sgotliv ^^^
15:16:32 <number80> trown: mock building currently (had to attend other emergencies), I'll check before pushing, thanks :)
15:18:20 <vkmc> jpena, thanks! I think that could work for us as well
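A minimal sketch of the kind of unit-ordering fix jpena's neutron patch suggests (unit and file names here are assumptions; the real fix is in the linked bug):

    sudo mkdir -p /etc/systemd/system/openstack-trove-api.service.d
    sudo tee /etc/systemd/system/openstack-trove-api.service.d/order.conf <<'EOF'
    [Unit]
    After=mariadb.service
    EOF
    sudo systemctl daemon-reload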
15:21:09 <egafford> Could someone go to https://rsaebs.corp.redhat.com/ and tell me if they're seeing sec issues? I want to do my Compass but I'm having access troubles.
15:21:11 <lon> mburned: btw
15:21:19 <lon> python-ironicclient will need to be in core :/
15:21:20 <egafford> Sorry, wrong room.
15:21:28 <egafford> Deeply dumb; apologies.
15:22:42 <lon> mburned: perhaps not - we'll see
15:23:12 <yrabl> lon: ping
15:23:12 <zodbot> yrabl: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:24:31 <lon> yrabl: pong
15:24:53 <lon> zodbot: yes, I get it, jcm wrote you.
15:25:22 <yrabl> lon, who's responsible for the packaging? The Cinder client package hasn't been updated since December...
15:25:33 <lon> cinder client in rdo?
15:25:37 <yrabl> yep
15:25:40 <number80> egafford: internal links => internal irc ;)
15:25:45 <yrabl> lon^^
15:25:49 <lon> hguemar or jruzicka
15:25:55 <lon> I'd think
15:25:56 <number80> cinder
15:26:01 <yrabl> lon, thanks!
15:26:12 <lon> I could be wrong
15:26:20 <number80> yrabl: jruzicka is responsible for clients and I'm his backup :)
15:26:55 <yrabl> number80: can you update it? :) *nice please*
15:27:01 <lon> yrabl: AH NUMBER80, SEE?
15:27:04 * lon ducks
15:27:32 <lon> my guess is that python-cinderclient hasn't been updated because it's not broken
15:27:38 <number80> yrabl: is there a ticket, which release are we speaking of (juno/kilo) ?
15:28:18 <number80> (theoretically, we're still maintaining icehouse currently)
15:29:55 <jruzicka> yrabl, number80, lon: https://jruzicka.fedorapeople.org/pkgs/cinderclient.jpg
15:30:01 <trown> number80: the link I sent in the email is the correct one:
15:30:01 <trown> https://github.com/trown/python-hardware-packaging/
15:30:48 <number80> yep, I also noticed the patch :)
15:32:03 <number80> (that was the missing piece to get the correct build)
15:36:01 <rook> number80 which repo should provide openstack-ironic-conductor ?
15:36:15 <rook> openstack-kilo-testing                                                        OpenStack Kilo Testing                                                                                      746 ?
15:38:30 <number80> rook: it's absent from there, I'm checking if it has been built
15:39:04 <mburned> lon: i'd expect it to be in core along with openstack-ironic
15:39:07 <rook> rgr..
15:39:16 <number80> yup, it's absent from CBS :(
15:39:18 <number80> http://cbs.centos.org/koji/search?match=glob&type=package&terms=openstack-ironic*
15:39:23 <lon> cool
15:39:29 <lon> mburned: right we talked about that
15:39:34 <number80> #action hguemar rebuild openstack-ironic for kilo
15:39:44 <rook> for the time being, disable ironic to get around this i suppose.
15:46:03 <apevec> rook, ironic is in mgt repo, I didn't build it since they've non-upstrem patches
15:46:30 <number80> #undo
15:46:30 <zodbot> Removing item from minutes: ACTION by number80 at 15:39:34 : hguemar rebuild openstack-ironic for kilo
15:46:43 <rook> apevec all good - I just had it enabled in my answer file and wanted to let yall know it failed.
15:46:54 <apevec> yrabl, jruzicka, number80 - I was looking at cinderclient for GA, but there simply weren't updates on stable/kilo
15:46:57 <apevec> lemme check again
15:46:57 <rook> i can add the other repo.
15:47:22 <number80> apevec: there were none according to git
15:48:27 <number80> for juno
15:48:35 <number80> kilo has a new 1.2.0
15:48:55 <apevec> when was that released...
15:49:17 <number80> apevec: nevermind, it's liberty
15:49:27 <number80> my eyes are tricking me
15:49:32 <apevec> yeah, nothing new on stable/kilo
15:49:48 <apevec> yrabl, ^ why did you expect a cinderclient update?
15:50:06 <apevec> if there's something missing for Kilo, ask upstream to release it on stable/kilo
15:50:22 <number80> people wants 2015 builds :)
15:50:44 <jruzicka> zomg this build is sooo last year
15:50:51 <number80> jruzicka: \o/
15:51:00 <jruzicka> shall I rebuild it just to get newer stamp? :)
15:51:35 <apevec> RPMs model year 2015
15:51:56 <number80> if we had a test suite, it wouldn't be a bad idea to have mass rebuilds once in a while
15:53:06 <apevec> number80, why, do bits decay? :)
15:53:49 <number80> apevec: mostly thinking about updated dependencies breaking stuff at runtime
15:54:06 <number80> CI is supposed to catch them though
15:54:09 <apevec> yeah
15:59:01 <number80> so many things, so little time :(
16:06:40 <yrabl> apevec: I expected to see some of the new features ready from the client side
16:07:07 <apevec> yrabl, as I said, talk to upstream, I'm not following cinder development
16:07:20 <apevec> I just see there aren't any updates on the stable/kilo branch
16:07:40 <yrabl> apevec: ok - thanks for the help :)
16:18:13 <number80> trown: I've left you a few comments in github to fix the packaging in rdo-mgmt for python-hardware
16:18:56 <trown> number80: awesome
16:20:13 <weshay> apevec, testing new jobs on prod-rdojenkins
16:21:57 <crobertsrh> vkmc:  I did retry creating my trove image with the master of diskimage-builder.  I was able to build mysql for centos.  Building for fedora still failed for me (chown: cannot access '/etc/trove': No such file or directory), but I should be ok for what I need it for.  Thanks for your help.
16:28:53 <trown> number80: updated our rdo-mgt packaging with those changes...thanks a ton
16:29:18 <number80> trown: np, you'll save me a lot of time, later ;)
16:30:26 <weshay> apevec, https://prod-rdojenkins.rhcloud.com/view/RDO-Kilo-Test-Day/
16:30:51 <apevec> all sunshine!
16:30:55 <apevec> weshay, thanks!
16:31:35 <number80> weshay++
16:31:35 <zodbot> number80: Karma for whayutin changed to 1:  https://badges.fedoraproject.org/badge/macaron-cookie-i
16:31:36 <eggmaster> sunshine and bunnies!
16:32:08 <number80> btw, if you have a FAS account, you can also gives cookies on freenode too ;)
16:32:51 <trown> number80++
16:33:11 <number80> trown: FAS is hguemar
16:33:30 <trown> hguemar++
16:33:45 <number80> (the number80 is a long story but if you're a physicist, it's easy to understand when you see my initials)
16:34:36 <jruzicka> Hah, mercury
16:34:58 <jruzicka> cool story, bro
16:35:04 <eggmaster> number80: nice
16:35:07 <number80> jruzicka++
17:01:09 <eggmaster> apevec: ping
17:01:09 <zodbot> eggmaster: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/
17:01:14 <eggmaster> oh snap
17:01:28 <weshay> lol
17:01:31 <eggmaster> that ping was not just naked, man
17:01:33 <weshay> eggmaster, ping
17:01:33 <zodbot> weshay: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/
17:01:34 <eggmaster> you should see it.
17:01:45 <eggmaster> it's doing something really wrong. ANYway
17:02:04 <eggmaster> apevec: so we just got a pass on the openstack-tempest-kilo running on centos7.1 tempest node
17:02:19 <eggmaster> weshay: pong
17:02:27 <eggmaster> no warning about naked pong ;)
17:02:28 <apevec> eggmaster, yes please!!
17:02:49 <apevec> I mean, please use centos7.1 tempest nodes instead of f20
17:03:18 <apevec> jpena noticed f20 nodes are sometimes not getting IP so job fails
17:03:22 <eggmaster> heh, let's not get ahead of ourselves ;) (I'm not opposed to that..)
17:03:39 <eggmaster> we just switched everything to f21 last week
17:03:46 <apevec> should be more stable I'd hope
17:03:53 <apevec> ah
17:04:26 <eggmaster> but really, once we get testing synced to production (I assume that is the plan) there's afaik no big reason not to switch.
17:12:54 <kambiz> bswartz: radez: if you don't mind trying out ssh from a couple of different locations, make sure 1) you can get in, and 2) if you move your ~/.ssh/id_dsa or id_rsa (whichever we used), that you will _not_ be able to get in. I'm pretty sure it's set up correctly, but it doesn't hurt to double-check
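A minimal sketch of that double-check (host name is a placeholder; BatchMode stops ssh from falling back to a password prompt):

    ssh user@host true                    # 1) should succeed via the key
    mv ~/.ssh/id_rsa ~/.ssh/id_rsa.away   # or id_dsa, whichever was used
    ssh -o BatchMode=yes user@host true   # 2) should now be refused
    mv ~/.ssh/id_rsa.away ~/.ssh/id_rsa   # restore the key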
17:34:54 <ajo> I was looking for it, but my computer is dying
17:35:07 <ajo> does anybody know where we put the "passed CI" master repo for delorean?
17:35:42 <ajo> abregman was looking for something similar :)
17:38:03 <abregman> ajo, will he survive? =)
17:38:37 <ajo> abregman, not sure XDD
17:39:22 <ajo> mouse cursor moving slow
17:39:30 <ajo> fans blowing hard...
17:44:29 <marcin___> good morning guys
17:55:20 <ayoung> zodbot ++
18:19:51 <rook> number80 where should i pick up python-werkzeug ?
18:20:18 <rook> ceilometer needs it.
18:35:58 <pixelbeat> rook, what OS? https://apps.fedoraproject.org/packages/s/werkzeug
18:36:12 <rook> pixelbeat: RHEL71
18:36:13 <pixelbeat> It's available in base centos 7 at least
18:36:23 <pixelbeat> should be in rhel 7.1 so?
18:38:08 <rook> pixelbeat: nope - not in the 71 repos I have.
18:38:24 <pixelbeat> rook I had a quick look. is it in the rhel 7.1 extras repo?
18:42:52 <pixelbeat> rook, that's needed for EPEL7 anyway as per https://www.rdoproject.org/Repositories
18:45:01 <rook> pixelbeat: yup - had it enabled for the controller, not the compute.. whoops
18:45:54 <rook> thanks! :)
19:01:29 <ayoung> OK, doesn't sound like there is much need for Keystone support. I have to log off for a while. Email me if there is something burning. I'll be back on in a couple hours
20:21:00 <arif-ali> I am presuming the test day is still on for today
21:38:02 <dosnibbles> anyone here who can help with packstack installation questions?
21:38:24 <social> dosnibbles: what do you need?
21:40:10 <dosnibbles> I have two HP servers and a netapp. I want to run compute on both servers and make use of the netapp. I also want to be able to access the instances from my lan.
21:41:43 <dosnibbles> I'm not sure if access is possible without configuring floating IPs.
21:43:42 <imcsk8> dosnibbles: you need the floating ip to access external network
21:45:18 <dosnibbles> imcsk8: ok.
21:46:59 <imcsk8> dosnibbles: I think it might be possible not to use them if you configure neutron to use your LAN, but it's a bit of a hassle
21:59:08 <dosnibbles> imcsk8: I'll make use of FIPs. Any idea about the netapp (nfs) setup?
22:36:43 <rook> hm, is it known that all style sheets seem to be missing from horizon?
23:01:19 <arif-ali> Anyone seen this issue yet, http://paste.openstack.org/show/215113/
23:01:38 <arif-ali> can't see anything in the workaround, if anyone's found the issue
23:31:27 <arif-ali> ok, found my issue with packstack. I am using a local mirror of epel, and the mongodb puppet module for packstack doesn't like the fact that I am using use_epel=n, as it then defaults to using mongodb.conf and only changes that file. It should use mongod.conf
23:33:50 <kfox1111> just installed horizon from: https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2
23:34:02 <kfox1111> getting: OfflineGenerationError: You have offline compression enabled but key "e5f9a607a797ba75cf310cc38c137e50" is missing from offline manifest. You may need to run "python manage.py compress".
23:34:08 <kfox1111> known issue?
00:08:03 <kfox1111> getting: OfflineGenerationError: You have offline compression enabled but key "e5f9a607a797ba75cf310cc38c137e50" is missing from offline manifest. You may need to run "python manage.py compress".
00:08:07 <kfox1111> known issue?
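The error message itself names the workaround; a minimal sketch, assuming the usual RDO layout where manage.py lives under /usr/share/openstack-dashboard:

    cd /usr/share/openstack-dashboard
    python manage.py compress --force     # regenerate the offline manifest
    systemctl restart httpd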
00:13:54 <arif-ali> kfox1111, you may want to use the latest RPMs that are being used for testing
00:14:05 <kfox1111> is there a newer repo somewhere?
00:14:24 <kfox1111> I'd be happy to test. :)
00:14:50 <kfox1111> heh. sorry. :)
00:15:08 <arif-ali> the new repo is https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7
00:15:32 <arif-ali> check-out https://www.rdoproject.org/RDO_test_day_Kilo
00:15:58 <kfox1111> I could have sworn that rpm wasn't here earlier today. :)
00:16:35 <kfox1111> ah... perfect. thanks. :)
00:16:46 <arif-ali> yeah, it isn't, but try https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-testing-kilo-0.11.noarch.rpm instead
00:17:58 <arif-ali> well, it's there actually, but not listed, when you enumerate the files via the web services
00:18:27 <kfox1111> yeah. I grabbed it ok. the repo file shows up ok. I think it will work.
00:18:44 <arif-ali> cool, good luck
00:19:20 <kfox1111> thanks. :)
08:05:36 <hewbrocca> morning all
08:19:10 <kodoku> Hi
08:19:26 <kodoku> Any idea why python-oslo-vmware is not updated in the juno packages?
08:19:42 <kodoku> because current version is 0.12 and in juno rdo it's 0.6
08:31:27 <itzikb> Hi, I changed dhcp_delete_namespaces=True in /etc/neutron/dhcp_agent.ini and restarted neutron-dhcp-agent, but existing namespaces are not deleted
08:33:03 <kodoku> itzikb try to restart network
08:37:48 <itzikb> kodoku: What do you mean ? systemctl restart network ?
08:41:20 <kodoku> itzikb yes
08:41:35 <kodoku> itzikb I use service network restart, but that's the old command
08:42:12 <itzikb> kodoku: Did. There are still namespaces not belonging to existing networks
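dhcp_delete_namespaces only governs deletions the agent performs from then on; namespaces orphaned earlier have to be removed by hand. A sketch (the network id is a placeholder; make sure a namespace really is unused before deleting it):

    ip netns list | grep '^qdhcp-'        # stale DHCP namespaces
    ip netns delete qdhcp-<network-id>    # placeholder id
    # neutron also ships neutron-netns-cleanup for this; options vary by setup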
09:02:07 <number80> .whoowns MySQL-python
09:02:07 <zodbot> number80: jdornak
09:02:14 <number80> .fasinfo jdornak
09:02:15 <zodbot> number80: User: jdornak, Name: Jakub QB Dorňák, email: jdornak@redhat.com, Creation: 2012-02-02, IRC Nick: , Timezone: Europe/Prague, Locale: cs, GPG key ID: , Status: active
09:02:18 <zodbot> number80: Approved Groups: cla_fpca cla_done packager fedorabugs
09:05:56 <apevec> ah zodbot knows more tricks than just taking minutes
09:06:55 <apevec> kodoku, stable/juno requirements is: oslo.vmware>=0.6.0,<0.9.0
09:06:57 <number80> apevec: https://fedoraproject.org/wiki/Zodbot a lot more commands available :)
09:07:05 <apevec> 0.12 is master (liberty) release
09:07:38 <number80> apevec: yes, received a mail about oslo.vmware (i keep asking them to create a ticket but mail seems easier :( )
09:07:59 <apevec> number80,  there is 0.7.0 on oslo.vmware stable/juno branch, we can update to that
09:08:00 <number80> stable/juno upstream is stuck at 0.7.0 so I will push it today
09:08:08 <number80> \o/
09:08:10 <apevec> ack
09:08:48 <number80> though newer may work; without feedback on bugzilla, it's not right to push oslo.vmware > 0.7.0
09:09:59 <number80> (an actual ticket about these issues may encourage updating it in RHOS too)
09:10:14 <apevec> yeah, we'll stick to upstream reqs for now, but can backport for specific issues
09:10:47 <number80> +1
10:04:11 * cdent is stuck right at the start
10:14:52 <cdent> what repos are required? I'm starting from a bare rhel 7.1 with no repos configured
10:38:52 <berendt> the novnc proxy service is not starting with the kilo testing repository. is this already a known issue?
10:54:51 <berendt> the novnc issue is only fixed for juno, I added a note in https://bugzilla.redhat.com/show_bug.cgi?id=1200701
11:04:35 <verdurin> berendt: I saw that yesterday, too.
11:29:22 <number80> cdent: link in the headline https://www.rdoproject.org/RDO_test_day_Kilo (setup + workaround for known issues)
11:30:56 <cdent> number80 I put slightly more details about my issue on the etherpad: https://etherpad.openstack.org/p/rdo_kilo_test_day
11:31:30 <cdent> I've got the kilo-testing repo but it appears to want some packages my system can't reach
11:34:29 <berendt> cdent: have you registered the RHEL 7.1 image? I think this is necessary to gain access to the official Red Hat repositories.
11:37:09 <number80> berendt: yes, or use centos repo
11:37:30 <cdent> berendt: No, it's not registered. I guess that's part of my question: How do I turn it on. I usually use fedora for all my testing, but it appears fedora is not a part of this round of testing.
11:37:36 <cdent> How can I add a centos repo?
11:38:15 <apevec> cdent, you can start with centos cloud image
11:38:54 <rbowen> I'm presuming we probably won't do the packaging meeting this morning, due to test day?
11:39:42 <apevec> rbowen, yeah, I still have testday meeting open, I guess we could prepare summary around pkg meeting time
11:39:51 <cdent> It's _hilarious_ that testing rhel is so hard to do that people just don't bother ;)
11:40:09 <rbowen> Sounds good
11:40:19 <apevec> rbowen, you have that BZ query from last time, to see BZs reported during test day?
11:40:26 <apevec> cdent, isn't it?
11:41:09 * cdent sucks down a centos image
11:41:33 <number80> cdent: +1
11:41:40 <number80> (for the RHEL thing)
11:42:18 * cdent weeps briefly
11:42:33 <apevec> not to mention extras, optional annoyances
11:43:48 <apevec> I'm adding special handling in rdo-release for that; it's been an issue everybody hits with RDO on RHEL
11:45:38 <number80> apevec++
11:53:28 * cdent is in business on centos
11:53:33 <cdent> thanks number80 and apevec
11:54:10 <weshay> apevec, FYI.. jenkins is failing when it's collecting the test results, so the red jobs atm are not an indication of the rdo build
11:54:40 <weshay> will be looking into why the xunit result file is blowing things up
11:55:42 <weshay> actual results are:
11:55:43 <weshay> <testsuite errors="0" failures="48" name="" tests="1459" time="11450.195">
11:56:55 <weshay> hrm.. 05:31:43.563 Caused by: java.lang.OutOfMemoryError: Java heap space
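If the heap really is the limit, it can be raised on the Jenkins master; a sketch for a typical EL install (the sysconfig knob shown is the stock one, and hosted setups like rhcloud expose it differently):

    # /etc/sysconfig/jenkins
    JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Xmx2048m"
    # then: systemctl restart jenkins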
11:58:13 <number80> I still wonder why they added these memory options in the JVM, it just means that Java will run out of memory faster
11:58:41 <apevec> yay, java...
11:58:49 <apevec> weshay, thanks!
11:58:55 <number80> weshay++
11:59:24 <mrunge> java is about 20 years old, and still we're struggling with memory issues
11:59:42 <mrunge> how did we use it 20 years ago?
12:00:38 <number80> mrunge: Sun used to sell hardware
12:00:54 <social> mrunge: on printers, washing machines and other small devices that didn't really have jvm
12:01:46 <berendt> I created https://bugzilla.redhat.com/show_bug.cgi?id=1219006 (Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard) because at the moment it is not possible to access horizon after the installation with packstack.
12:02:08 <apevec> mrunge, that's why my mobile phone has RAM/CPU like '99 server ;)
12:02:24 <apevec> mrunge, ^ again??
12:02:32 <apevec> berendt, did httpd restart help?
12:02:50 <rbowen> Is anyone keeping a comprehensive list of issues somewhere? There's not much in either Workarounds or the etherpad?
12:03:06 <mrunge> apevec, berendt I still wasn't able to properly reproduce that
12:03:10 <berendt> apevec: yes. I noted this in the bug report.
12:03:23 <mrunge> berendt, how did you install?
12:03:27 <apevec> rbowen, yeah, let's start with the list of BZs reported y-day and today
12:03:48 <berendt> mrunge: with packstack, multi node environment
12:03:50 <apevec> rbowen, maybe not much means we're so good
12:04:07 <mrunge> berendt, did it work in first run?
12:04:10 <rbowen> bugzilla isn't being very friendly to me this morning
12:04:19 <berendt> mrunge: no. i had to manually restart httpd
12:04:26 <berendt> after manually restarting httpd it worked for me.
12:04:39 <apevec> rbowen, no such thing, ever
12:04:39 <mrunge> berendt, no, I mean: did the packstack install work on the first try?
12:04:42 <berendt> mrunge: yes
12:04:54 <mrunge> on fresh machines?
12:05:01 <berendt> mrunge: of course
12:05:14 <mrunge> and httpd was already installed/running there?
12:05:23 <berendt> no.. a fresh centos 71 installation
12:05:29 <mrunge> ok, thanks
12:05:39 <apevec> berendt, can you share kickstart or was it manual install?
12:05:48 <berendt> and not using httpd as frontend service for keystone
12:06:21 <berendt> apevec: a tarball of /var/tmp/packstack is sufficient?
12:06:40 <rbowen> 19 issues tagged 'RDO' that have been opened in the last two days.
12:06:41 <apevec> I mean for the base OS installation
12:07:19 <berendt> apevec: https://github.com/boxcutter/centos/blob/master/http/ks7.cfg
12:07:26 <apevec> berendt, kickstart (or steps used to install OS from scratch) + packstack answer file should be enough
12:07:56 <mrunge> berendt, any info you can offer is greatly appreciated.
12:08:13 <mrunge> berendt, actually, your bug is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1150678
12:12:32 <apevec> mrunge, is it really a duplicate, that BZ is Juno
12:12:35 <apevec> ?
12:13:27 <rbowen> Oh, I see, I searched for tickets *changed* since yesterday. That adds a few more. That's why it's different.
12:14:22 <mrunge> apevec, well, it's probably not
12:14:53 <mrunge> apevec, itzikb_ just captured that bug and added his info
12:14:56 <berendt> apevec: http://paste.openstack.org/show/215445/
12:15:04 <berendt> this is the used packstack.answers file
12:16:54 <itzikb_> apevec: Sorry - not following
12:18:12 <mrunge> itzikb_, this was about the horizon permission issue bug. you added your kilo info to the juno bug
12:18:21 <mrunge> apevec, asked, if it's the same bug
12:18:25 <mrunge> (it's not)
12:18:46 <itzikb_> mrunge: ok .thanks
12:20:48 <itzikb_> mrunge: So it's not a duplicate
12:21:39 <itzikb_> mrunge: Should it be opened again? And if yes - do you want me to add my comments in the Kilo bug?
12:22:46 <mrunge> itzikb_, I reopened it, and please add your info there
12:24:13 <itzikb_> mrunge: ok
12:26:48 <itzikb_> mrunge: Done. I'll comment in the Juno bug that it was a mistake and it's Kilo
12:27:13 <mrunge> itzikb_, well...
12:27:36 <mrunge> itzikb_, maybe you should just point to the kilo bug
12:27:48 <itzikb_> mrunge: ok
12:32:43 <apevec> vkmc, number80, tosky - ok, so you need openstack-trove-2015.1.0-2 in kilo/testing ?
12:32:55 <tosky> oh, right
12:32:59 <tosky> here is the place
12:32:59 <apevec> starting rebuils in CBS
12:33:19 <apevec> rebuild even
12:33:45 <number80> apevec: vkmc was working on an update this morning
12:34:01 <tosky> yes, because it can't be installed otherwise (conflict between the trove requirements in 2015.1.0-1 and mariadb-galera-server)
12:35:03 <apevec> number80, -2 is it then, built 11UTC today
12:35:24 <number80> apevec: then it's all good
12:35:35 <number80> I need to fix notifications with gerrithub
12:35:55 <number80> it seems to use $RANDOM emails from github
12:36:06 <tosky> apevec: thanks for the imported build!
12:36:10 <tosky> vkmc: and thanks for the update
12:53:25 <apevec> tosky, done: Installing:
12:53:25 <apevec> openstack-trove           noarch 2015.1.0-2.el7   openstack-kilo-testing
12:55:07 <tosky> apevec: thanks
12:55:28 <ajo> gchamoul++
12:55:28 <zodbot> ajo: Karma for gchamoul changed to 2:  https://badges.fedoraproject.org/badge/macaron-cookie-i
12:55:49 <gchamoul> ajo: ???
12:55:52 <gchamoul> :D
12:55:59 <ajo> :-) the -fwaas cherry pick of the missing commits
12:56:16 <ajo> I had that on my todo list, but I'm a bit swamped lately on the QoS API definition
12:56:43 <gchamoul> ajo: I just migrated openstack-packages/neutron-fwaas to rpm-master branch
12:56:55 <ajo> hmm
12:56:56 <ajo> ah
12:57:03 <ajo> I thought it was something else
12:57:22 <ajo> gchamoul, the rpm-kilo is missing some patches from f20-
12:57:23 <gchamoul> nope just migration from f20-master to rpm-master branch
12:57:27 <ajo> which are necessary for kilo
12:57:30 <ajo> all the last 4
12:57:52 <gchamoul> ajo: can do it!
12:57:56 <gchamoul> if you want
12:58:00 <ajo> gchamoul, I'd really really thank you
12:58:06 <apevec> ajo, hmm, you mean upstream stable/kilo is missing patches?
12:58:11 <ajo> otherwise I'd need at least until tomorrow to do it
12:58:12 <ajo> apevec, yup!
12:58:14 <ajo> 4 patches
12:58:28 <ajo> let me look for the bz
12:58:28 <apevec> rpm-kilo rpm-master are just dist-gits, for spec
12:58:36 <ajo> apevec, yes, it's in the spec
12:58:46 * apevec looks
12:58:49 <ajo> 1 sec...
12:58:57 <apevec> there shouldn't be patches in rpm-* specs
12:59:13 <apevec> Delorean builds from vanilla upstream branches
12:59:29 <ajo> apevec, gchamoul : https://bugzilla.redhat.com/show_bug.cgi?id=1218543
12:59:32 <apevec> so ideally, we need those proposed on upstream stable/kilo
12:59:37 <ajo> apevec, sorry, I was meaning commits
12:59:47 <ajo> commits to the spec repository
12:59:48 <ajo> :)
13:00:02 <apevec> ah so spec not source changes
13:00:06 * apevec clicks bz
13:00:14 <ajo> ahh, this is missing a last comment I made
13:00:16 <ajo> bad bugzilla
13:00:17 <ajo> 1 sec.
13:00:37 <ajo> https://github.com/openstack-packages/neutron-fwaas/compare/rpm-kilo...f20-master
13:00:37 <ajo> We need to cherry pick the missing commits into rpm-kilo, it seems we branched it out from a wrong commit id.
13:00:56 <ajo> apevec  ^
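A sketch of the fix being asked for here: cherry-pick the spec commits that rpm-kilo is missing from f20-master (the shas are placeholders; the compare link above lists the real ones):

    git clone https://github.com/openstack-packages/neutron-fwaas
    cd neutron-fwaas
    git checkout rpm-kilo
    git log --oneline rpm-kilo..origin/f20-master   # shows the 4 missing commits
    git cherry-pick <sha1> <sha2> <sha3> <sha4>     # placeholder shas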
13:00:58 <apevec> oh has needinfo on me
13:01:25 <cdent> Has there ever been talk of optimizing --allinone so it doesn't have to ssh to do its puppet applies?
13:02:21 <cdent> (or rather getting the manifests in place?)
13:02:22 <apevec> ajo, crap, so I messed it up when branch :(
13:02:43 <ajo> apevec, who does nothing breaks nothing..
13:02:53 <apevec> ajo, omg, I had an outdated git checkout :)
13:03:16 <ajo> :)
13:04:04 <apevec> gchamoul, ajo, ok, I'll fix rpmkilo
13:04:17 <apevec> and rebuild in CBS for kilo/testing
13:04:23 <social> cdent: I'd prefer same code path for all cases
13:04:28 <ajo> thanks a lot apevec
13:04:31 <ajo> apevec++
13:04:32 <zodbot> ajo: Karma for apevec changed to 5:  https://badges.fedoraproject.org/badge/macaron-cookie-i
13:04:46 <apevec> that should've been -- rookie mistake!
13:05:55 <cdent> social: I assumed that was the case, but packstack is so slow, and for people who really do want to use it to try things out and see if they like it, it would be nice to be able to eke out any possible speedups
13:06:18 <social> cdent: packstack is slow because it waits for yum
13:06:30 <social> the ssh part isn't taking that much time
13:06:30 <gchamoul> apevec: ack
13:07:13 <apevec> social, an idea: is there a way to do dry-run and get a list of packages which will be installed by packstack?
13:07:38 <apevec> then feed that list into single yum command, that should be faster than separate yum install for each pkg?
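A sketch of that idea as stated, resolving the set once and installing in a single transaction (the package names and list file are illustrative; social explains below why packstack doesn't simply do this):

    yum --assumeno install openstack-nova openstack-cinder   # dry run: prints the full transaction
    yum -y install $(cat resolved-package-list.txt)          # hypothetical list fed to one transaction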
13:07:52 <social> apevec: it's not that; packstack hit issues with packages several times, so now we check their versions in the repos and so on
13:08:09 <social> e.g. we especially verify each time whether there's an update for facter
13:08:10 <social> and so on
13:08:39 <cdent> is this another case of bandaid on top of bandaid on top of bandaid?
13:08:54 <social> there is some plan to make this parallel in multinode and clean it up a bit, but packstack isn't that high prio, if I may point out
13:09:18 <cdent> yet it is still the thing that we keep using for testing
13:09:36 <social> cdent: because try using tripleo
13:09:46 <cdent> (not just today, but also when doing post rebasing backports testing etc)
13:10:33 <social> cdent: apevec: so the issue is that we have to ensure things, which requires us to clean metadata each run
13:10:37 <social> see packstack/plugins/prescript_000.py:    server.append('yum clean metadata')
13:10:39 <social> packstack/plugins/prescript_000.py:    server.append("yum clean all")
13:10:52 <social> packstack/plugins/prescript_000.py:        server.append('yum update -y %s' % packages)
13:10:58 <social> that is the slowest
13:11:58 <apevec> ok, but that's only once, at the beginning
13:12:41 <social> apevec: for example packstack run on my machine for allinone takes 9 minutes
13:12:51 <social> but that's because of fast repos and good disk
13:13:56 <social> for puppet the issue is that it calls yum once per package, but any hacks there are pointless as that should be fixed in puppet
13:14:33 <tosky> apevec: and now packstack installs trove, good
13:16:09 <tosky> I see some strange characters in horizon (kilo) and the firebug console says that the font from font-family: "FontAwesome" couldn't be loaded
13:16:10 <apevec> social, what would be fix in puppet: collect all packages and run single yum install?
13:16:24 <tosky> is it a known issue, or something in my environment?
13:16:27 <social> apevec: you can't do that
13:16:29 <apevec> tosky, I think mrunge was looking at that y-day?
13:16:43 <social> apevec: what puppet can do is install packages that can be installed in parallel
13:16:46 <tosky> apevec: I was partially on vacation, sorry, I didn't know
13:16:55 <social> apevec: eg make a tree and install parts of the tree
13:17:15 <apevec> tosky, not sure if there was resolution though
13:17:34 <apevec> mrunge, ^ was there bz filed for missing fonts?
13:17:35 <social> but that wouldn't fix much for the packstack case, because atm we don't do a single puppet run (that would also speed things up) but a puppet run per component
13:17:36 <tosky> apevec: luckily it's "just" a graphical glitch, so not blocking for now
13:18:17 <apevec> social, yeah, in theory it could all run on separate hosts
13:21:20 <gchamoul> apevec, number80: some openstack-packages repos have a rpm-kilo branch, some don't! Are they supposed to have a rpm-kilo branch as well?
13:22:15 <apevec> gchamoul, only those which have L-only changes on rpm-master
13:22:28 <apevec> I branch as needed
13:23:07 <gchamoul> apevec: ack, so I am just doing the migration from f20-master to rpm-master, and I will see that after
13:36:25 <kiran-r> Hello all, I am getting this error: http://paste.openstack.org/show/215457/
13:38:26 <tosky> apevec, mrunge: it could be a simple Alias issue; if you modify /etc/httpd/conf.d/15-horizon_vhost.conf and add
13:38:29 <apevec> vkmc, number80 - I've cleaned up github.com/redhat-openstack/trove (removed master-patches since we don't have any in Rawhide atm and rebase f22-patches to 2014.2.3 to match f22 spec)
13:38:38 <tosky> - Alias /static "/usr/share/openstack-dashboard/static"
13:38:42 <number80> apevec: ack
13:39:01 <tosky> then the font is found and the arrow is shown
13:39:13 <tosky> there is already a line Alias /dashboard/static "/usr/share/openstack-dashboard/static"
13:39:24 <tosky> not sure if it can be replaced (I would bet "yes") or both are needed
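Spelled out, the suggestion is to carry both aliases in /etc/httpd/conf.d/15-horizon_vhost.conf until it's clear the old one can go (whether both are needed is exactly the open question above):

    Alias /dashboard/static "/usr/share/openstack-dashboard/static"
    Alias /static "/usr/share/openstack-dashboard/static"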
13:39:50 <vkmc> apevec, thx
14:07:17 <apevec> tosky, that's what mrunge was looking at, afaik everything should be under /dashboard/
14:10:11 <ayoung> ping
14:10:25 <ayoung> heh...trying to trigger the bot
14:15:37 <kashyap> ayoung: If you want to trigger the Fedora Zodbot, you can do: .localtime ayoung (and a bunch of other useful commands) :-)
14:17:08 <ayoung> kashyap, ping
14:17:08 <zodbot> ayoung: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/
14:17:12 <ayoung> there we go
14:17:40 <jruzicka> haha, zodbot the Enforcer
14:17:48 <kashyap> ayoung: :-0
14:18:03 <kashyap> ayoung: Ah, someone in Fedora land has configured the bot :-)
14:18:28 <kashyap> jruzicka: Yep, Ralf Bean (IIRC) & co from Fedora Infra did that
14:18:29 <ayoung> kashyap, let me see if I can get that working in #openstack-keystone
14:19:03 <kashyap> ayoung: It won't work.
14:19:23 <kashyap> ayoung: Because, Zodbot (A flavor of Meetbot) is run by Fedora.
14:19:52 <jruzicka> kashyap, I'm all for sane conventions ;)
14:20:49 <kashyap> Yup :)
14:21:00 <ayoung> kashyap, how can I add zodbot to another room?
14:21:42 <kashyap> ayoung: Zodbot is run by Fedora infra. Let me see if there's a plugin that OpenStack infra folks can use
14:22:04 <kashyap> ayoung: I.e. you can't add that to channels run by OpenStack folks, as the bots there are run by OpenStack Infra folks.
14:27:03 <mburned> apevec: did you ever grant me permissions on ironic?
14:27:17 <mburned> or do i need to bug athomas ?
14:27:23 <apevec> mburned, hmm, I thought I did
14:27:29 <mburned> apevec: you might have
14:27:36 <mburned> i just don't remember
14:27:42 <apevec> or  not...
14:27:58 <apevec> mburned, your FAS is?
14:27:58 <ayoung> kashyap, but I am both
14:28:05 <mburned> apevec: mburns72h
14:29:00 <apevec> mburned, ok, gave you all acls, athomas can revoke if he doesn't like it :)
14:29:16 <apevec> mburned, or he can hand over it to you
14:29:16 <mburned> apevec: i feel like he would be more likely to remove his own...
14:29:31 * athomas hovers his finger over the "revoke" button
14:29:32 <apevec> athomas, it's easy: https://admin.fedoraproject.org/pkgdb/package/openstack-ironic/
14:29:54 <apevec> athomas, and click +Give Package
14:30:02 <athomas> apevec, Thanks
14:36:24 <kashyap> ayoung: There we go, the plugin for that -- https://github.com/fedora-infra/supybot-fedora/commit/1ba62ced08487fe4dcc8b5040c8fc64ae3b8ce0f
14:45:09 <hewbrocca> you guys
14:46:05 <number80> number80: ping
14:46:05 <zodbot> number80: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/
14:46:06 <hewbrocca> regebro: is telling me we need to update katello-agent
14:46:11 <number80> works \o/
14:46:55 <hewbrocca> to get satellite integration to work in the manager
14:47:34 <regebro> Yeah, it needs to be katello-agent-1.5.3-6.el7sat or later. (And there are later versions)
14:47:52 <regebro> But they also need other packages updated. I don't know if that matters.
14:49:02 <apevec> regebro, is that in centos base?
14:49:53 <apevec> regebro, hewbrocca, wait, this is for RHEL-only?
14:50:05 <regebro> apevec: I have no idea what I'm doing, so I don't understand that question.
14:50:21 <regebro> apevec: I have no idea if this is RHEL only or not.
14:50:34 <regebro> But it is RHEL, yes.
14:50:36 <apevec> hewbrocca said "satellite integration to work in the manager"
14:51:28 <apevec> hewbrocca, regebro for RDO that means make RDO work with whatever upstream is for Sat6, right?
14:51:45 <apevec> so is there such thing?
14:52:42 <regebro> apevec: Your questions swoosh above my had. Such a thing as what?
14:52:51 <regebro> s/had/head
14:53:33 <apevec> upstream project for Sat6
14:54:25 <regebro> I'm not sure that matters. The bug is not one of communicating with Satellite.
14:55:01 <regebro> It's that another package has been updated so katello-agents package upload fails with an import failure traceback.
14:55:19 <regebro> So we need, on the RHEL/RDO side, to have a later version.
14:55:44 <hewbrocca> slagle: ^^^
14:55:47 <regebro> apevec: Bug in question https://bugzilla.redhat.com/show_bug.cgi?id=1146292
14:56:04 <slagle> hewbrocca: katello-agent is part of rdo
14:56:09 <slagle> err, isn't part of rdo
14:56:13 <slagle> we don't update that
14:56:39 <slagle> is the update already pushed live to rhel?
14:56:44 <hewbrocca> ahh, of course
14:56:46 <regebro> slagle: no.
14:56:47 <slagle> and is that sync'd to the satellite in question?
14:57:08 <slagle> that's what would be needed i'd think
14:57:11 <regebro> at least, I don't think so.
14:59:04 <regebro> slagle: yum claims that 1.5.3-5.el7sat is the latest version in rhel-7-server-rh-common-rpms
15:00:04 <slagle> ok
15:02:30 <regebro> But I don't know how to double check that.
15:04:52 <regebro> slagle: Hm. Actually not. And yes. This is confusing.
15:05:46 <cdent> apevec, social: I finally managed to get a working rhel based packstack going. Took less time than it seems: 11m3.616s real time with only about 2.5 minutes using the cpu. The subscription trouble I was having was because of an incomplete account.
15:05:50 <apevec> regebro, that is indeed latest published NVR ftp://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RH-COMMON/SRPMS/
15:06:24 <regebro> apevec: OK, cool, thanks.
15:07:27 <regebro> apevec: Ah, no look, there's two: 1.5.3-5 and 1.5.3-7.
15:08:10 <apevec> ah, yes, need to scroll down :)
15:08:22 <apevec> so -7 should be the latest
15:08:43 <regebro> apevec: So for some reason on my overcloud node, it only finds 1.5.3-5.
15:08:45 <apevec> looks like your satellite is not up to date?
15:09:15 <regebro> apevec: Ah, ok, maybe that's it?
15:09:30 <apevec> I don't see another explanation
15:09:54 <apevec> but I also won't claim I know anything about Sat6 esp. not your instance
15:10:13 <regebro> OK, let's see if I can figure out how to make it up to date. :)
15:13:19 <ayoung> morganfainberg, ping
15:13:19 <zodbot> ayoung: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:13:59 <cdent> feh, wrong, that install failed, back with more numbers in a bit
15:15:57 <apevec> cdent, if on RHEL, make sure to enable optional and extras repos
15:16:12 <cdent> apevec: yeah, did that
15:16:20 <cdent> I had forgotten to kill NetworkManager
15:16:29 <apevec> was that 7.1 ?
15:16:39 <cdent> yeah
15:16:50 <apevec> hmm, NM issues are supposedly fixed in 7.1
15:16:52 <apevec> ajo, ^
15:17:18 <cdent> i'll dig deeper, it was a cinder problem but the failure moaned about NM
15:17:24 * cdent gets into logs
15:18:32 <cdent> apevec: it's not finding python-cheetah
15:24:49 <ayoung> cdent, I thought that killing NetworkManager was old advice. We should not need to do that. nmcli can do everything we need.
15:25:22 <ayoung> I was surprised to see we still post that on the setup page
15:26:45 <cdent> ayoung: it also shows up in the summary message
15:29:32 <ajo> apevec, I believe 7.1 should be good, as far as I know
15:29:55 <ajo> it has passed automation testing for the issues we knew about
15:32:46 <apevec> cdent, which summary message?
15:33:02 <apevec> ayoung, yeah, lots of obsolete info in wiki ...
15:33:16 <ayoung> apevec, need to clean that up before the next test day
15:33:23 <cdent> apevec: at the end of packstack
15:33:41 <ayoung> https://www.rdoproject.org/QuickStart  has it
15:33:47 <ayoung> and that is linked from all the test cases
15:33:51 <cdent> "Warning: NetworkManager is active on 10.0.0.3. OpenStack networking currently does not work on systems that have the Network Manager service enabled."
15:34:00 <ayoung> It also says Fedora 20
15:34:14 <ayoung> we need to stop doing test days, and start doing continuous testing...hmmmm
15:34:40 <ayoung> cdent, what log was that in?
15:34:45 <apevec> social, ajo ^ shall we drop that in packstack?
15:34:50 <apevec> cdent, I never read that :)
15:34:58 <cdent> :)
15:35:12 <apevec> ayoung, console output from packstack
15:35:20 <cdent> ayoung: that was in the output at the end of a packstack --allinone that failed (and for which I had not disabled NM)
15:35:30 <apevec> ayoung, Fedora repos are not ready for Kilo, test day is EL7 only
15:35:46 <social> NM is just an warning
15:35:47 <apevec> cdent, what's the failure?
15:35:54 <social> cdent: what failure you had?
15:36:00 <apevec> social, ajo says it should work on 7.1
15:36:08 <apevec> so let's remove warning
15:36:18 <apevec> and if NM still doesn't work, file a bug
15:36:22 <apevec> against NM
15:36:26 <social> apevec: no no no no, we already removed it once and got serious backlash
15:36:28 <cdent> the failure had nothing to do with NM, it was from installing openstack-cinder, python-cheetah could not be found:
15:36:34 <social> apevec: I will keep the warning for a while
15:36:35 <ayoung> apevec, now that we have delorean, we should be doing an automated build and install for F21/F22.
15:36:43 <ayoung> I'll talk with infra about that
15:37:09 <regebro> apevec: Well, I think Satellite doesn't update katello-agent, because to update katello-agent katello-agent needs to work. :)
15:37:50 <cdent> apevec I've got optional and extras enabled and I can see python-cheetah in the ftp.redhat.com SRPMS but doing a yum search I got nothing
15:37:53 <regebro> but I'm not sure
15:38:12 <apevec> cdent, hmm, lemme see where it should be from
15:54:24 <apevec> cdent, on centos7 python-cheetah is in extras
15:55:58 <cdent> apevec I've rhel-7-server-extras-rpms and rhel-7-server-optional-rpms enabled and no python-cheetah available
15:59:00 <apevec> cdent, bah, so it went to rh-common
16:00:02 <cdent> jeebus, how are we supposed to keep track of all this, and how can we search for things we know we should have but can't find?
16:01:44 <cdent> and is there a way to do a sort of --enable="the good stuff"? :)
16:02:52 <apevec> cdent, yeah, that's now next, try grepping for common in /etc/yum.repos.d/redhat.repo
16:03:06 <apevec> not sure what's the exact CDN repo name
16:03:22 <cdent> I've got it now
16:03:33 <cdent> subscription-manager repos --enable=rhel-7-server-common-rpms
16:04:20 <apevec> yep, that one, so I'll add that one too in rdo-release update
16:05:03 <cdent> sorry apevec, that's not quite the right line
16:05:19 <cdent> needs an 'rh-' in there
16:05:36 <apevec> ok
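For reference, the full set of RHEL 7.1 channels this thread ended up enabling (names as they appear in the discussion):

    subscription-manager repos --enable=rhel-7-server-rpms \
        --enable=rhel-7-server-optional-rpms \
        --enable=rhel-7-server-extras-rpms \
        --enable=rhel-7-server-rh-common-rpms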
16:06:12 * cdent packs his stack one more time
16:06:38 <lon> apevec: do you know what needs python-rdflib ?  I think I have evreything installed, but nothing's requiring it
16:07:39 <apevec> lon, lemme check my notes
16:07:44 <lon> apevec: cool
16:08:07 <apevec> ayoung, which infra you mean re. automated build/install w/ Delorean ? We do have CI jobs
16:11:50 <ayoung> apevec, I mean Fedora, doing automated installs and builds of the RDO packages, so we don't have a long lag
16:12:15 <apevec> ayoung, that's what we have Delorean for
16:12:30 <apevec> but not sure this first Fedora infra
16:12:40 <ayoung> apevec, the test day should not be RHEL/Centos specific.F22 should be tested already
16:12:41 <apevec> s/first/fits/
16:12:56 <ayoung> sure...we'll see what we can figure out
16:13:08 <apevec> ayoung, sure, I just didn't have time to prepare Fedora properly,
16:13:12 <ayoung> we need some additional checks for Federation and SAML
16:13:15 <apevec> few package reviews are pending
16:13:19 <ayoung> post install config
16:13:38 <ayoung> that mirror what people are doing "for real"  when tying in with existing systems
16:13:50 <apevec> ayoung, that pysaml is one of them; I tried to package it and hit an upstream issue, so the current build has federation disabled
16:14:01 <ayoung> yeah....
16:14:04 <ayoung> that is a PITA
16:14:26 <apevec> https://bugzilla.redhat.com/show_bug.cgi?id=1168314#c2
16:14:37 <apevec> I need to get back to this
16:15:00 <apevec> last I checked, upstream just acknowledged it but no action yet
16:16:32 <apevec> lon, so I'm not sure where is rdflib coming from...
16:17:02 <lon> ok
16:17:11 <lon> it was in the cbs, but it didn't get pulled in by anything
16:18:46 <apevec> lon, ah found it: https://docs.google.com/spreadsheets/d/1ZckiJtvca2-GRvnCVMPz4uNjHuUFvRFdKZxIwY6Glec/view#gid=0
16:19:05 <apevec> lon, notes says " for python-selenium"
16:19:25 <apevec> lon, but we're not running tests requiring that, so we can skip it
16:19:40 <lon> ok cool
16:19:50 <apevec> it's BR for horizon and django
16:20:06 <lon> weird
16:20:07 <lon> django built
16:20:32 <apevec> yeah, it's BR in  if 0%{fedora}
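An illustrative spec conditional of the kind apevec mentions, not the actual horizon/django spec:

    %if 0%{?fedora}
    BuildRequires: python-rdflib
    %endif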
16:21:59 <lon> that's right I had to fix it
16:22:02 <lon> cool thanks
16:28:59 <rook> apevec is it known that horizon is missing stylesheets? or is it just my install of kilo?
16:29:21 <apevec> it should not be missing stylesheets
16:30:27 <apevec> rook, but you might've hit https://bugzilla.redhat.com/show_bug.cgi?id=1219006 - does httpd restart fix it?
16:30:38 <rook> apevec trying
16:35:00 <rook> apevec: yup, hit that bz.
16:35:03 <ayoung> rook, easy to fix
16:35:18 <apevec> rook, please add comment, with steps you did
16:35:38 <apevec> rook, mrunge was not able to reproduce, so more cases are needed to track it down
16:35:44 <ayoung> rook, I think you need to regen the compressed stylesheets with WEBROOT set, if it is the thing I remember
16:37:29 <rook> apevec: done.
16:37:42 <rook> thanks for the tip!
16:37:49 <apevec> ayoung, it shouldn't be, css is now compressed on httpd startup, with your systemd suggestion
16:38:04 <apevec> ayoung, so the issue is why httpd is not restarted in some cases
17:18:27 <mrunge> rook, are you still there?
17:18:35 <rook> yessir
17:19:27 <mrunge> did you install keystone on httpd?
17:20:08 <mrunge> rook, or, let's collect data in a different way
17:20:13 <mrunge> rook, you used packstack=
17:20:16 <mrunge> ?
17:20:29 <rook> yes packstack on rhel71
17:20:49 <mrunge> and it's a all in one install?
17:21:07 <rook> negative, one controller, one compute.
17:21:07 <mrunge> or multi node?
17:21:12 <mrunge> ok
17:21:28 <mrunge> and keystone and horizon both sharing httpd?
17:21:30 <rook> mrunge: keystone kicked off htpd
17:21:32 <rook> yes
17:21:48 <mrunge> ok,
17:22:00 <mrunge> packstack ran fine, or did you have issues there?
17:22:31 <rook> ran fine - hit one issue but it was due to my repos. once i got the repo fixed, good to go.
17:22:41 <rook> (said ran fine, then remembered :) )
17:22:44 <mrunge> so, you had to run it twice?
17:22:47 <rook> yes
17:23:25 <mrunge> ok, thank you
17:23:36 <rook> np, that it?
17:23:59 <mrunge> unfortunately, I don't see a pattern yet
17:24:10 <mrunge> but that's probably it
17:24:12 <rook> ah
17:25:10 <mrunge> one thing: rook, you installed from scratch, or was httpd already running?
17:34:23 <rook> clean host
17:38:16 <mrunge> thanks again
18:31:52 <rook> mrunge: to run tempest against the deployment, it seems that I can install via yum, but then I need to go to /usr/share/openstack-tempest-kilo/ and install tempest ?
18:58:45 <mrunge> rook, sorry, I can't answer that
18:58:56 <rook> mrunge - nm - ran through the wiki
20:16:44 <kfox1111> Trying the kilo test horizon. It's mostly working, but no theme bits... I did install openstack-dashboard-theme. Any ideas?
20:26:12 <mrunge> kfox1111, from where did you install?
20:29:11 <kfox1111> http://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/
20:29:38 <mrunge> kfox1111, did you restart httpd after installing openstack-dashboard-theme ?
20:29:50 <kfox1111> yes.
20:30:12 <kfox1111> not seeing much javascript/css in the rendered html.
20:30:29 <mrunge> hmm, wait
20:30:35 <kfox1111> just did a yum clean all; yum upgrade. So it's the newest stuff from the repo.
20:30:52 <mrunge> and no systemctl restart httpd?
20:31:11 <mrunge> or did you have delorean installed earlier
20:31:14 <mrunge> ?
20:31:25 <kfox1111> trying to build a docker image... running /usr/sbin/httpd -DFOREGROUND
20:31:40 <kfox1111> nothing interesting in access/error logs.
20:32:28 <mrunge> kfox1111, seriously, there is a systemd snippet hooked into httpd
20:32:35 <mrunge> which should be executed
20:33:03 <kfox1111> for the dashboard?
20:34:25 <mrunge> sure
20:35:00 <mrunge> I still wanted to blog about that, but didn't find the time
20:35:42 <kfox1111> in /usr/lib/systemd/system/httpd.service, I only see ExecStart, ExecReload and ExecStop. ExecStart's basically what I'm running.
20:35:58 <kfox1111> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
20:36:59 <mrunge> it's in /usr/lib/systemd/system/httpd.service.d
20:37:18 <kfox1111> oh. interesting...
20:37:50 <kfox1111> thanks. :)
20:38:52 <rbowen> Hmm. Keystone is failing to start back up on reboot.
20:39:08 <kfox1111> Yup. that did the trick. :)
20:39:20 <mrunge> kfox1111, ok, great
20:39:44 <mrunge> did I mention systemctl restart httpd ?
20:39:50 <kfox1111> I learned a new thing about systemd today. :)
20:39:55 <mrunge> once or twice? ;-)
20:40:13 <kfox1111> I had looked at httpd.service. I didn't know httpd.service.d was a thing. :)
20:40:42 <mrunge> yes, that's totally cool
20:40:49 <mrunge> because it's pluggable
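An illustrative drop-in of the kind mrunge describes; the file name and the command it runs are assumptions, not the actual RDO snippet:

    # /usr/lib/systemd/system/httpd.service.d/openstack-dashboard.conf
    [Service]
    ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py compress --force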
20:40:53 <kfox1111> I'm still very curious how the docker/systemd thing will play out...
20:40:55 <kfox1111> yeah.
20:41:26 <kfox1111> I'm kind of surprised the default centos7 container doesn't just have systemd running. then you could just yum install httpd and launch the container and it would all "just work"
20:41:34 <kfox1111> it's kind of a pain as is. :(
20:42:10 <mrunge> I would expect lots of pain with docker etc. coming in
20:42:27 <mrunge> esp. when you have socket activated applications
20:42:39 <mrunge> not running as long as you don't talk to their socket
20:43:05 <kfox1111> shoudl still work though, since systemd would just launch it as needed in the same container?
20:43:25 <kfox1111> It's when you want to do everything in microservices that it becomes hard again. :/
20:45:23 <kfox1111> hmm... where is the region selector supposed to show up in the kilo theme?
20:45:43 <mrunge> yes it should work, but you may not necessarily discover containers
20:45:53 <mrunge> ah the region selector is broken currently
20:46:01 <kfox1111> :( ok.
20:46:09 <mrunge> I fixed it for juno and icehouse recently
20:46:45 <mrunge> please file a bug, that I don't forget it
20:47:09 <kfox1111> and the tenant selector too? I'm not seeing that either (probably the same thing these days?)
20:47:12 <kfox1111> k.
20:48:06 <mrunge> ugh, tenant selector too? I haven't seen that yet
20:48:22 <mrunge> I don't need much text, it's more a reminder kfox1111
20:48:42 <kfox1111> k.
20:48:54 <mrunge> thanks!
20:48:58 <kfox1111> np.
20:48:59 <kfox1111> https://bugzilla.redhat.com/show_bug.cgi?id=1219221
20:49:11 <kfox1111> yeah. not seeing a tenant selector.
20:52:51 <kfox1111> have you deprecated the redhat theme and are just making it the default?
20:53:09 <kfox1111> or should I try and swith the theme back to the stock horizon one?
20:53:35 <mrunge> kfox1111, stock theme is default in rdo
20:53:51 <mrunge> the red hat theme is default in red hat openstack
20:54:11 <kfox1111> odd. cause I'm seeing the "Red Hat Enterprise Linux Openstack Platform" one.
20:54:32 <mrunge> kfox1111, where do you see that?
20:54:37 <kfox1111> all I did was yum install openstack-dashboard openstack-dashboard-themes and run the couple of hooks.
20:54:49 <mrunge> yes, sure
20:55:02 <mrunge> it's all open source
20:55:16 <mrunge> so you *can* install it
20:55:30 <kfox1111> yeah. but I haven't specified it anywhere.
20:55:39 <kfox1111> unless openstack-dashboard-themes is the redhat one.
20:56:04 <mrunge> openstack-dashboard-theme is the red hat one
20:56:27 <mrunge> remove it, restart httpd and you have plain upstream
20:56:29 <kfox1111> ah. ok.
20:56:31 <kfox1111> thanks.
20:56:45 <mrunge> you're welcome
20:59:19 <kfox1111> ok. I see the stock login now...
20:59:28 <kfox1111> logging in gets me: ValidationError: [u"'90ece9f251f9956738a973970812a66543805fa1ec3adc7dedf8b34ea0ae9c7d' value must be an integer."]
20:59:37 <mrunge> on kilo?
20:59:57 <kfox1111> kilo horizon, juno everything else.
21:00:11 <kfox1111> is that no longer supported?
21:00:11 <mrunge> yes, known issue, I'm working on that
21:00:15 <kfox1111> ok.
21:01:04 <mrunge> kfox1111, that's https://bugzilla.redhat.com/show_bug.cgi?id=1218894
21:01:54 <kfox1111> hmm... k. let me try that work around.
21:06:10 <kfox1111> ah. that workaround did work. thanks. :)
21:06:34 <kfox1111> interesting theme tweaks. :)
21:07:05 <kfox1111> wow. documentatin on heat resources. sweet. :)
21:13:22 <kfox1111> hmm... with the stock theme, I see the tenant selector, but if I try and switch, I just get an unauthorized: unable to retrieve usage info
21:14:13 <kfox1111> oh... wait..  I think I know what that is.
21:15:00 <kfox1111> hmm.... something new... "Project switch successful for user "kfox".
21:15:05 <kfox1111> "Could not delete token"
21:15:31 <kfox1111> it may be the "my user's token is too big" thing though...
21:30:50 <mrunge> ugh token too big?
21:31:04 <mrunge> kfox1111, are you storing your session somewhere?
21:35:34 <kfox1111> haven't changed it from the defaults. I think on the production system I had to switch to memcache because of the session being too big.
21:36:27 <kfox1111> hmm... "admin -> Metadata Definitions"
21:36:32 <kfox1111> not really clear what that's for....
21:36:51 <kfox1111> must be a glance thing...
21:37:09 <mrunge> yes it is
21:37:44 <mrunge> kfox1111, if you're seing session size issues, you might want to switch do database cached sessions
21:38:25 <mrunge> kfox1111, https://docs.djangoproject.com/en/1.8/topics/http/sessions/#using-database-backed-sessions
21:39:12 <kfox1111> ah. cool. thanks.
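A sketch of that switch, assuming the usual RDO local_settings path; database-backed sessions also need the session table to exist (e.g. via manage.py syncdb), which depends on your database setup:

    echo "SESSION_ENGINE = 'django.contrib.sessions.backends.db'" \
        >> /etc/openstack-dashboard/local_settings
    systemctl restart httpd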
21:47:41 <kfox1111> have you tried the new instance wizard yet?
21:47:49 <kfox1111> I just get a blank window.
21:50:30 <mrunge> kfox1111, known issue
21:50:42 <mrunge> I just discussed that with upstream
21:51:41 <kfox1111> ok. thanks.
21:53:13 <kfox1111> are there any plans to slide the designate plugin into horizon?
21:53:52 <kfox1111> One of the main reasons I'm trying to build a container for horizon is so I can slide in some of the plugins I may need. Some of which aren't packaged yet. :/
21:55:49 <kfox1111> some of the bits I need to slide in only work with kilo though, and don't want to have problems with kilo/juno on the same controller. :/
21:56:35 <mrunge> kfox1111, that's to be discussed in Vancouver
21:57:09 <mrunge> but for reference, I have a juno environment here, but using master horizon
21:57:15 <mrunge> and this just works
21:57:29 <kfox1111> :)
21:57:37 <kfox1111> I'm planning on being there.
21:57:48 <kfox1111> are you using venv's then?
21:57:56 <mrunge> nope
21:58:01 <mrunge> plain packages
21:58:16 <mrunge> something reproducible :D
21:58:44 <kfox1111> using all the juno python clients then?
21:59:03 <mrunge> it depends.
21:59:15 <kfox1111> or build a whole parallel set of rpms?
21:59:34 <mrunge> oh noo! no parallel rpms. that's a mess.
22:00:13 <kfox1111> yeah.
22:00:17 <mrunge> for my horizon environment: nearly kilo python-...clients
22:00:25 <mrunge> and horizon, of course
22:00:48 <mrunge> but horizon is not critical at all with interoperability
22:00:53 <kfox1111> so you're semi-upgrading the clients but not breaking juno servers?
22:01:16 <mrunge> you can separate horizon from the rest
22:01:53 <mrunge> so you can have horizon from kilo (including deps) and the rest from juno
22:02:05 <mrunge> of course, nobody will support that
22:02:22 <mrunge> in fact, it works quite well
22:02:24 <kfox1111> but doesn't that require a whole parallel set of rpms? or are you just bundling all the horizon deps with horizon?
22:02:40 <mrunge> uhm, neither nor
22:02:44 <mrunge> two machines
22:02:53 <mrunge> two separate installs
22:02:53 <kfox1111> ahhhh.. ok.
22:03:13 <mrunge> but you can install your horizon in a vm in your openstack cloud
22:03:21 <mrunge> ;-)
22:03:43 <kfox1111> For testing, I have been just putting horizon in its own vm, so I can launch as many test horizons as I want, and they won't stomp on each other.
22:03:43 <kfox1111> exactly. :)
22:03:51 <kfox1111> was just trying out docker to see if I can make it a little closer to the hardware.
22:04:37 <kfox1111> for a lot of things, docker seems... overly complicated. but running a web server is kind of what docker was written for.
22:06:47 <mrunge> worst thing I heard about a docker image was 17 versions of glibc bundled. now look for a glibc related issue. which service is using which glibc?
22:07:49 <kfox1111> yeah. I was hoping the docker hub would help since it can rebuild containers.
22:07:59 <kfox1111> but it doesn't track dependencies below the source repo.
22:08:05 <kfox1111> so you never know if it's out of date. :/
22:08:11 <kfox1111> it's a really big problem. :/
22:09:17 <mrunge> sure! I'm kind-of old school. I would like my images updated automatically
22:09:20 <kfox1111> I'm fairly convinced that docker doesn't make app deployment any easier; it just pushes the hard parts off to other places.
22:09:27 <kfox1111> security for one.
22:09:32 <mrunge> yupp
22:09:58 <mrunge> well, it delivers pre-configured *something* directly to the user
22:10:04 <mrunge> to try things out
22:10:19 <kfox1111> I do really like the idea of being able to go back to an older image if the new one breaks for some reason. But you gotta pay for that by needing infrastructure for building/updating your containers.
22:11:27 <kfox1111> if docker hub would provide a way to periodically rebuild images, and diff the contents and not replace the old with the new if nothing changed, then it might get much much closer.
22:13:03 <kfox1111> I had the same issue with disk-image-builder and making openstack images.
22:13:32 <kfox1111> had to write a jenkins job that built them, and a way to fingerprint all the rpm's that went in, and only update if the fingerprint changed.
22:13:53 <kfox1111> couldn't fully automate it though without the glance artifactory stuff, which still isn't done. :/
22:15:32 <mrunge> sounds useful to have
22:15:53 <mrunge> would you share your scripts?
22:16:21 <kfox1111> a bit too rough at this point, but yeah. I can look at cleaning them up and getting them out there.
22:16:52 <mrunge> thank you, that's awesome!
22:17:12 <mrunge> I think, others will benefit as well
22:17:32 <kfox1111> I'm going to try and do the same for docker images. periodically pull a docker image, do a yum update, see if anything changes, and if it does, trigger a rebuild.
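A sketch of that rebuild check (the image name is hypothetical; yum check-update exits 100 when updates are pending, 0 when there are none):

    docker pull example/horizon:latest
    if docker run --rm example/horizon:latest \
           sh -c 'yum -q check-update >/dev/null; test $? -eq 100'; then
        docker build -t example/horizon:latest .   # updates pending: rebuild
    fi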
22:48:09 <kfox1111> oh... this rdo packaging document is nice. :)
22:59:48 <ccrouch> (05:07:51 PM) kfox1111: yeah. I was hoping the docker hub would help since it can rebuild containers.
22:59:48 <ccrouch> does this help ?
22:59:48 <ccrouch> https://docs.docker.com/docker-hub/builds/#repository-links
23:08:58 <ryansb> ccrouch: didn't know about that, neat!
23:10:16 <ccrouch> it's obviously not going to help with your rpm dependencies getting out of date, but it can at least track your base docker image
23:33:38 <shmcfarl> My AIO setup worked great. When I add a new CentOS7.1 compute node into the mix, Packstack dies at “PuppetError: Error appeared during Puppet run: 192.168.80.31_prescript.pp
23:33:38 <shmcfarl> Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-selinux' returned 1: Error: No matching Packages to list”
23:33:55 <shmcfarl> I can’t manually install it either as it says no packages are available
23:56:42 <shmcfarl> Resolved it for now: Copy the /etc/yum.repos.d/rdo-testing.repo file from the AIO to the 2nd node and re-run Packstack
00:21:28 <kfox1111> trying to load the murano horizon into the kilo test rpms.
00:21:39 <kfox1111> ran into from django.contrib.formtools.wizard import views as wizard_views, ImportError: No module named formtools.wizard
00:22:00 <kfox1111> formtools.wizard's not shipped by python-django-1.8.1-1.el7.noarch
00:22:21 <kfox1111> is this intentional?
00:23:48 <kfox1111> hmm.. interestingly, python-django-1.6.11-1.el7.noarch provides formtools. so it seems like it was removed.
00:32:18 <kfox1111> ah... https://docs.djangoproject.com/en/1.8/ref/contrib/formtools/#formtools-how-to-migrate
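Per that migration doc, formtools became a separate package in django 1.8 and the import path changed; a sketch:

    pip install django-formtools
    # old import (django <= 1.7):
    #   from django.contrib.formtools.wizard import views as wizard_views
    # new import (django 1.8 + django-formtools):
    #   from formtools.wizard import views as wizard_views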
05:04:37 <skullone> heh, I'm pleased with my ghetto OpenStack dynamic DNS method
05:05:17 <skullone> on instance boot, a script pulls the hostname from the metadata service and constructs an 'nsupdate' command to update A/PTR records on our internal BIND zone
05:05:39 <skullone> somewhat insecure, but it's only our own internal DNS zone
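A sketch of the boot-time script described here; the metadata URLs are the standard EC2-style ones, and the zone and server names are placeholders (an unsigned nsupdate, matching the "somewhat insecure" caveat):

    HOST=$(curl -s http://169.254.169.254/latest/meta-data/hostname)
    ADDR=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
    PTR=$(echo "$ADDR" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
    nsupdate <<EOF
    server ns.example.internal
    update add ${HOST}. 300 A ${ADDR}
    update add ${PTR}. 300 PTR ${HOST}.
    send
    EOF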
06:54:39 <gchamoul> morning all!
07:57:28 <apevec> #endmeeting