01:51:29 #startmeeting https://www.rdoproject.org/RDO_test_day_Kilo 01:51:30 Meeting started Tue May 5 01:51:29 2015 UTC. The chair is apevec. Information about MeetBot at http://wiki.debian.org/MeetBot. 01:51:30 Useful Commands: #action #agreed #halp #info #idea #link #topic. 01:52:42 #chair mburned aortega rbowen number80 01:52:42 Current chairs: aortega apevec mburned number80 rbowen 01:53:01 #topic https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test 01:53:12 ^ please note new http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm 01:54:04 I've started early to log questions and results from the other side of the globe, see you in a few hours! 06:48:19 o/ 06:51:30 *\o/* 07:17:44 o/ 07:19:16 #chair mrung gchamoul 07:19:16 Current chairs: aortega apevec gchamoul mburned mrung number80 rbowen 07:19:29 #chair mrunge gchamoul 07:19:29 Current chairs: aortega apevec gchamoul mburned mrung mrunge number80 rbowen 07:19:40 oops, thanks for volunteering :> 07:58:59 When installing RDO with the repo http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm I get an error regarding openstack-ceilometer-compute http://pastebin.test.redhat.com/280693 08:02:50 packstack errored out on python-cinder-2015.1.0-2.el7.noarch (not available) 08:03:01 Morning all 08:03:58 itzikb: CentOS7.x or RHEL7.x? 08:04:21 gchamoul: You can check both 08:04:34 mabrams: Can you send the output? 08:04:42 morning all, and happy testing! 08:05:02 10.35.161.149_glance.pp: [ DONE ] 08:05:03 10.35.161.149_cinder.pp: [ ERROR ] 08:05:03 Applying Puppet manifests [ ERROR ] 08:05:03 ERROR : Error appeared during Puppet run: 10.35.161.149_cinder.pp 08:05:03 Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-cinder' returned 1: Error: Package: python-cinder-2015.1.0-2.el7.noarch (openstack-kilo-testing) 08:05:03 You will find full trace in log /var/tmp/packstack/20150505-104642-Qf_MwW/manifests/10.35.161.149_cinder.pp.log 08:05:03 Please check log file /var/tmp/packstack/20150505-104642-Qf_MwW/openstack-setup.log for more information 08:06:42 Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-cinder' returned 1: Error: Package: python-cinder-2015.1.0-2.el7.noarch (openstack-kilo-testing) 08:08:53 itzikb: didn't have that issue on CentOS 7.1 08:09:22 i'm on rhel71 08:09:45 mabrams: did you enable the optional repo? 08:09:51 gchamoul: great! If you can update the test cases page - it'll be great 08:10:20 CONFIG_RH_OPTIONAL=y 08:11:00 and is "1" in the repo file 08:11:44 mabrams: Can you list your repositories? 08:12:05 [root@mabrams-svr021 yum.repos.d]# ls 08:12:05 rdo-testing.repo redhat.repo rhel-optional.repo rhel-server.repo 08:12:05 [root@mabrams-svr021 yum.repos.d]# grep enabled * 08:12:05 rdo-testing.repo:enabled=1 08:12:05 rhel-optional.repo:enabled=1 08:12:06 rhel-server.repo:enabled=1 08:12:06 [root@mabrams-svr021 yum.repos.d]# 08:18:06 retrying with CONFIG_USE_EPEL=y 08:21:00 itzikb, please give us the output of just your /usr/bin/yum -d 0 -e 0 -y install openstack-ceilometer-compute 08:23:09 mrunge: Seems like python-werkzeug is missing 08:23:10 a sec 08:23:44 hi all, after enabling FWAAS the l3-agent failed: error log - ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent [req-642f4abc-2705-42a9-b25c-d8a4621e64e2 ] FWaaS plugin is configured in the server side, but FWaaS is disabled in L3-agent. 08:23:52 someone else saw it? 08:26:10 itzikb, it's something with your repos.
I think you'll need server-extras, too 08:26:12 mrunge: http://pastebin.com/pjCU8xu4 08:29:38 itzikb: do you have the extras repo enabled? 08:29:58 gchamoul: No. I'll add it. We need to add it to the instructions 08:30:46 jcoufal: how's your test run looking 08:30:59 hewbrocca: I just built virt-env 08:31:02 itzikb, yes, that needs to be added. are you doing this? 08:31:09 so far so good 08:31:23 itzikb: you have it here https://www.rdoproject.org/Repositories 08:31:40 itzikb: but I agree that we need to add the link into the test day page 08:32:30 gchamoul: In RDO there are no subscriptions 08:33:27 itzikb: well, but if you use RHEL7 to play with RDO ... you will need them 08:33:34 social: I think that damn IPA we had last night had caffeine in it 08:33:48 * hewbrocca was up all night 08:35:47 * mrunge sends hewbrocca another IPA 08:35:53 cheers 08:36:26 hewbrocca: ImportError: No module named oslo_utils 08:36:39 during undercloud installation 08:36:47 actually right at the start 08:37:07 management repo fail? 08:37:27 I don't understand how this was working 2 days ago and now it is a flaming pile of crap 08:38:00 hewbrocca: I don't know where the package should come from 08:38:08 if from management or rdo 08:38:15 Well if we use delorean rdo it works 08:38:29 OK, so it's all just repo breakage then 08:42:15 Update: We are trying again to build rdo-manager images, this time with the delorean repo enabled (will pull in some liberty stuff) 08:42:33 We will still be deploying a pure RDO Kilo overcloud 08:47:24 mrunge: About the horizon 'permission denied' error after (re)install using packstack that we saw yesterday. Do we have a solution? 08:47:47 itzikb, it's still unclear how to reproduce this 08:48:07 itzikb, I tried to reproduce it several times, but failed 08:48:46 itzikb, it seems, httpd is not restarted after horizon installation, for some reason 08:49:25 mrunge: For example I had a problem with ceilometer - didn't have the RPMs and then reinstalled and it happened. Restarted httpd and it works as you say 08:49:58 mrunge: Do you want me to update the bug? 08:50:05 itzikb, yes please 08:50:11 mrunge: ok 08:50:24 itzikb, and please re-assign it to openstack-puppet-modules 08:50:49 mrunge: ok 08:51:41 itzikb: please assign it to me gchamoul@redhat.com! 08:56:49 itzikb, https://bugzilla.redhat.com/show_bug.cgi?id=1218543 08:57:26 mrunge: gchamoul: done 08:57:35 itzikb, thanks! 08:58:09 * number80 remembers to reassign annoying tickets to gchamoul 08:58:11 thanks :) 08:58:36 :D 09:02:26 itzikb: mrunge: when using packstack please use the -d parameter and send the /var/tmp/packstack content 09:03:06 social, thank you for the hint 09:03:38 though you could still send the dir content, it'll just contain less info 09:07:43 social: thanks 09:08:30 itzikb: so if you have an issue pls fpaste the horizon.pp.log files and I'll have a look 09:12:47 jcoufal, hewbrocca - oslo_utils is provided by python-oslo-utils-1.4.0-1.el7 which we have in the kilo/testing repo 09:12:56 jcoufal, please send me the full backtrace for ImportError: No module named oslo_utils 09:13:10 apevec: morning! 09:13:14 apevec: Regarding https://bugzilla.redhat.com/show_bug.cgi?id=1218543 when adding the config file /etc/neutron/fwaas_driver.ini to /usr/lib/systemd/system/neutron-l3-agent.service and restarting the l3 agent, the l3 agent is started 09:13:35 apevec: it's entirely possible jcoufal is not enabling the kilo/testing repo...
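[Editor's note: the repo setup being debugged above boils down to the following shell sketch. The subscription-manager repo IDs are the usual RHEL 7 ones and are an assumption here; they may differ per subscription. On CentOS 7 only the first step is needed.]

    # Install the RDO Kilo *testing* release package (URL from the test day page)
    yum install -y http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm

    # On RHEL 7, also enable optional and extras, otherwise deps such as
    # python-werkzeug and python-cheetah cannot be resolved:
    subscription-manager repos --enable=rhel-7-server-optional-rpms
    subscription-manager repos --enable=rhel-7-server-extras-rpms

    # Later, to check which repo an already-installed RPM came from (from_repo line):
    yumdb info openstack-puppet-modules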
09:13:44 When adding it to /etc/neutron/conf.d/neutron-l3-agent/ it didn't work 09:14:10 apevec: I was told that this is enough: sudo yum install http://rdoproject.org/repos/openstack/openstack-kilo/rdo-release-kilo.rpm 09:14:15 apevec: not sure what it enables 09:14:36 jcoufal, only testing/kilo - but turns out that's enough only on centos7 09:14:40 apevec: I mean what repo exactly 09:14:43 rhel7 again requires optional and extras :( 09:14:56 apevec: I am trying on centos 09:15:09 this one https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/ 09:15:29 jcoufal, ok, if you can capture the full backtrace that would be useful 09:16:01 apevec: I will run it on my second machine and report it back to you 09:16:29 thanks, in the meantime if you can workaround with the Trunk image for undercloud, that's also good 09:16:37 apevec: according to the RDO test page, packstack should enable epel by default, but unless I modify the answerfile to CONFIG_USE_EPEL=y, it can't install openstack-cinder 09:17:31 ajeain, that's on rhel? 09:17:37 apevec: is this the same repo? sudo yum install -y https://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo-0.noarch.rpm 09:17:58 jcoufal, no 09:18:17 apevec: yes...RHEL 7.1 09:18:27 jcoufal, https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test 09:18:33 yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm 09:18:43 apevec: yeah I know, the first one is just hardcoded in the instack script :( 09:21:13 ajeain, pastebin me please output of yumdb info from_repo epel 09:21:14 apevec: but md5sum shows it is the same 09:21:24 apevec: a3f87ae2fee9e21bb742aa195780cd17 rdo-release-kilo.rpm 09:21:24 a3f87ae2fee9e21bb742aa195780cd17 rdo-release-kilo-0.noarch.rpm 09:21:50 jcoufal, http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm 09:21:58 note *testing* 09:22:08 dammit 09:22:42 but otherwise, yes, rdo-release-kilo.rpm is a redirect, for release use that redirect 09:22:56 apevec: in https://www.rdoproject.org/RDO_test_day_Kilo is "release" 09:23:24 apevec: ah, i just refreshed 09:23:24 somebody changed it in time 09:23:28 ok 09:23:30 here we go 09:23:31 damn caching 09:23:40 I did change it last night 09:24:44 apevec: # yumdb info from_repo epel 09:24:45 Loaded plugins: langpacks, product-id 09:25:28 apevec: thanks 09:29:21 ajeain, ok, so it's not epel but optional or extras 09:32:40 hi all 09:34:02 apevec: Error: Package: python-hardware-0.11-post25.el7.centos.noarch (delorean-rdo-management) 09:34:02 Requires: python-pandas 09:34:17 also 09:34:18 Error: Package: python-hardware-0.11-post25.el7.centos.noarch (delorean-rdo-management) 09:34:18 Requires: python-ptyprocess 09:35:13 I'm using redhat-openstack/puppet-pacemaker and have a set of patches ready to improve the handling of resources. I've just noticed this new pull request: https://github.com/redhat-openstack/puppet-pacemaker/pull/45 from another member of RDO. How likely is it that the PR will be accepted? And in general what is the state of the puppet-pacemaker module maintenance? 09:42:21 tenaglia: we are actively using puppet-pacemaker, so certainly patches are welcome 09:50:14 hewbrocca: cool. I have a few patches to make it work with RHEL6, mainly affecting "pcs mode". 09:54:43 jcoufal, I missed those python-hardware deps, is python-hardware stable enough so we can move it to the regular repo? 09:55:21 delorean-rdo-management is http://trunk-mgt.rdoproject.org/repos/current-passed-ci/ ?
09:55:29 jcoufal, ^ 09:59:15 apevec: I've restarted both deloreans, didn't notice anything in sar output, the only thing I have noticed is that of the 2 times I looked after a failure, both times we lost the box during an ironic build 10:00:31 derekh, can you tell whether it was in the rpmbuild or rpm install phase? 10:01:21 apevec: not sure, I'll see if I can find out 10:03:52 jcoufal, ok, here's the repoclosure report for the trunk-mgt repo http://paste.openstack.org/show/214946/ 10:04:26 apevec: I don't know anything about python-hardware packages really 10:04:35 apevec: and as for the delorean-rdo-management, yes it is that one 10:05:32 apevec: those dependencies were taken from delorean before? 10:06:00 jcoufal, I'm tracking them down now 10:06:09 ok 10:07:22 there's python-hardware-0.14-1.fc23 in Rawhide, I can rebuild it in CBS 10:08:22 built by flepied@redhat.com 10:08:48 hewbrocca, ^ is Frederic Lepied in the mgt team? 10:10:05 apevec: he is leading our field team I think 10:10:20 apevec: or consulting team to be more specific 10:10:38 ok, I think we can trust his build then :) 10:16:44 according to https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test I should install openstack-tempest-kilo and python-tempest-lib to be able to run tempest 10:16:55 but there is no tempest package in the repository 10:18:36 berendt, there are links to fedora builds in the wiki 10:18:38 apevec: rpmbuild had failed, but the docker container hadn't been deleted, I'm thinking maybe the delete of the container didn't return http://trunk.rdoproject.org/centos70/45/1b/451bf7bb0dddf949000698993f784d4b5442f3da_3b5540c5/rpmbuild.log 10:18:57 yes. so really use them for el7? 10:19:46 apevec: I'll try and figure out a way to keep a console, am also adding a cgi-script to output some debug info, will let you know if I find anything out 10:19:57 berendt, it should work, but I think eggmaster has an el7 build in Koji, just a sec 10:20:40 derekh, I propose as a quickfix/workaround to remove ironic from Trunk rdoinfo, mgt has a forked version in the trunk-mgt Delorean instance 10:21:56 apevec: openstack-tempest-kilo requires python-junitxml and this is not available, I think it would be nice to directly have openstack-tempest in the el7 repo 10:22:00 Guys, we're throwing in the towel on rdo-manager test day -- we need more time for all the repos to stabilize 10:22:04 We will try again next week 10:22:11 apevec: ok, that assumes the ironic thing wasn't just a coincidence, but I suppose we'll find out if the problem still persists 10:22:15 jcoufal: is going to send some mail about it 10:23:04 derekh, yep 10:23:26 Alex_Stef, https://bugzilla.redhat.com/show_bug.cgi?id=1218543 10:24:00 hewbrocca, ack - I'm resolving missing deps, I need to talk to trown/mburns about those remaining few 10:25:47 Yeah 10:26:10 In the future we just need to wait a bit longer before we try to use RDO to deploy itself :) 10:26:41 Or -- we need to explicitly separate "test deploying new RDO" from "test the manager's ability to deploy something" 10:29:13 hewbrocca, yeah, we need separate test days obviously, mgt builds on RDO "core" so we need to test foundations first 10:29:29 Yes 10:30:03 it's like building a house, you don't start with the roof :) 10:31:46 apevec: Yeah, that sounds like a good idea - having the 'base' RDO test day first, followed by management. 10:36:49 ajo, can you have a look at https://bugzilla.redhat.com/show_bug.cgi?id=1218543 ? I lost all details which config goes where after the *aas split ...
10:37:14 ajo, but fwaas in l3 seems wrong 10:38:05 apevec, ack, I need to iterate a neutron spec, and then I'll look into that (probably this afternoon) 10:38:35 apevec, yes, looks like there is an issue on the .service or the directory structure for the l3-agent 10:38:42 apevec, from eran's comments 10:38:50 I'll tune the .service file or .spec as needed 10:38:55 thanks 10:39:51 ajo, when you get it, propose a fix in rpm-master and I'll cherry pick to Rawhide/RDO Kilo builds 10:39:57 apevec: https://github.com/redhat-openstack/rdoinfo/pull/44 10:40:27 apevec, I wonder if, having FWaaS disabled in server, and enabled in l3-agent works 10:40:37 I'm not sure we ever enabled fwaas in RDO 10:40:38 derekh, merged 10:41:19 ajo, what does "enabled fwaas in RDO" mean? 10:41:28 it's a separate package now 10:41:29 apevec, enabling it on the l3-agent 10:41:52 apevec, yes, I know, it's separated as a service plugin 10:42:16 bz says "6.enable in answer file FWAAS= y" 10:42:20 apevec, afaik it had to be enabled in the l3-agent before too.. not 100% sure, I'll check if there was any relevant change before that 10:42:26 ekuris, ^ what does that mean ? 10:42:45 apevec, ekuris, I guess he's talking about a packstack setting, 10:43:13 apevec: ack, server updated 10:44:02 ajo, yeah, I'm just not sure what CONFIG_NEUTRON_FWAAS=y implies 10:44:20 social, gchamoul ^ 10:44:34 did that ever work? 10:44:45 I mean, before the *aas split 10:45:06 ajo: Regarding l3 and fwaas 10:45:55 apevec, I responded on the bz asking for /etc/neutron 10:46:00 to see how packstack does in that case 10:46:10 apevec: yes it did work 10:46:23 and also, a second check, disabling server FwAAS and, enabling FwAAS on the l3-agent to see if they're happy 10:46:34 otherwise we may need to have some sort of extra configuration in packstack 10:47:01 like adding the fwaas config file to /etc/neutron/conf.d/neutron-l3-agent 10:47:09 probably that's the most reasonable solution apevec ^ 10:47:39 yep, that's why Ihar introduced conf.d 10:47:44 apevec, ajo FWAAS=y is the option in the answer file 10:47:45 or ... 10:47:47 to the shared side of things 10:48:09 ekuris, check the bz when you have a moment, I asked you for a bunch of checks 10:48:37 ekuris, ack it wasn't the full option name, but grep answered me :) 10:48:41 I'll come back later to it, I need to finish something else now 10:51:18 ajo, I saw your comment in the BZ, I will do it and ping you 10:53:25 ekuris, thanks a lot 10:53:35 Regarding tempest: yesterday there were dependencies that were not installed after installing openstack-tempest-kilo and python-tempest-lib. Is there an update? 11:01:43 itzikb, junitxml or more? 11:02:07 I'm rebuilding it in CBS now, will test repoclosure for missing deps 11:02:08 apevec: I'll check again 11:10:33 hey folks is packstack ready for kilo? 11:12:17 avico, at least, it's worth a try 11:12:25 avico, it is in the testing/kilo repo, see https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test 11:12:34 thanks! 11:12:42 I'll give it a run 11:18:34 berendt, eggmaster, itzikb - yum install openstack-tempest works now with testing/kilo, I've updated https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test 11:19:39 the section about venv shouldn't be necessary, but it's fine if it helps you isolate tempest from the rest of the openstack installation 11:19:53 i.e. if you're testing on an allinone node 11:23:09 Hi, has anyone seen this error with RDO install on RHEL 7.1 ?
"Error: Invalid parameter create_mysql_resource on Class[Galera::Server] at /var/tmp/packstack/3bf3e9c71344425aa247923ffb36f308/manifests/_mariadb.pp:21" 11:23:27 apevec: thanks! 11:25:15 apevec, any idea how to resolve this error? 11:26:33 panbalag, didn't see this reported before, can you provide all steps and describe your testing environment? 11:26:58 apevec, its a fresh install on RHEl 7.1. did packstack --allinone 11:27:38 panbalag, which repos do you have enabled? please pastebin yum repolist 11:28:07 jpena, social - can you tell me what could go wrong on mariadb.pp:21 line? 11:28:37 apevec: missing/wrong version of galera puppet module 11:28:40 panbalag, there should be also corresponding mariadb.pp.log 11:28:57 ah, how could that happen 11:28:57 the logs show the same message ..let me copy that too in the pastebin 11:29:34 panbalag, rpm -q openstack-puppet-modules 11:30:00 http://pastebin.test.redhat.com/280758 11:30:01 apevec: https://github.com/redhat-openstack/puppet-galera/pull/10 missing from opm package 11:30:04 apevec, http://pastebin.test.redhat.com/280758 11:30:29 apevec, [root@lynx13 yum.repos.d]# rpm -q openstack-puppet-modules 11:30:29 openstack-puppet-modules-2014.2.15-1.el7.noarch 11:31:02 panbalag, ah so you're install RDO Juno, not testing Kilo ? 11:31:38 apevec, I'm testing kilo only...the rdo-repo shows kilo 11:31:42 apevec: panbalag: looks like packstack kilo with opm juno 11:31:58 social, yeah, that won't work 11:32:12 I used the lastest repo link you had sent in reply to my email. 11:32:23 panbalag, let me try to understand how did you get into this situation 11:32:25 itzikb: do you familiar with issue in neutron-server? 11:32:33 apevec, here is the base url i used baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/ 11:33:07 apevec, https://redhat.bluejeans.com/6388798212/ 11:33:08 panbalag, right, but this repos doesn't have OPM 2014.2 11:33:18 aortega, clicking in a sec 11:33:48 apevec, I installed openstack-packstack yesterday and found it was installing juno release..so I uninstalled it and sent you an email regarding the repos. So today I updated the repo and installed again 11:34:01 apevec, probably it did not remove everything completely 11:34:08 panbalag, ah that explains 11:34:14 yeah, best to start from scratch 11:34:21 apevec, ok..let me do that 11:34:42 puppet just ensures packages are preset and doesn't try to update them 11:35:08 panbalag, BTW yumdb info openstack-puppet-modules should tell you which repo RPM came from, in from_repo line 11:35:41 Is it Packstack-only installation for the test days, until the rdo-maanger testing next week? 11:35:57 apevec, yes looks like it came from juno "origin_url = https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/openstack-puppet-modules-2014.2.15-1.el7.noarch.rpm" 11:37:16 apevec, I will do a fresh install of rhel and then try again. 11:38:49 I.e. no Foreman-based installation? 11:40:22 verdurin: correct 11:40:57 hewbrocca: Okay - thanks for the confirmation. 11:42:57 do we have some tempest config generator? 11:48:46 ajo, I upload all info that you want to the bug . after I make changes in neutron conf file as you wrote me Igot error in l3 agent about iptables qrouter . it look like that VM cannot access to external network . I can access to external net from qrouter 11:48:59 ajo, can you advice ? 
11:51:45 ekuris, so, if you disable fwaas in neutron-server, and enable it in l3-agent 11:51:45 it doesn't work 11:51:51 they need to be correctly coordinated, right? 11:52:00 enabled or disabled on both sides 11:53:20 mrunge: hey Matthias, did Ido talk to you about the collapse icon issue in the tree? 11:53:53 mrunge: right now, you see a small square, which i believe should be like an arrow, right? 11:54:02 interesting. a 1GB RAM VM is no longer big enough for a packstack --allinone 11:55:48 ajo, I disabled it in neutron server, in the l3 agent I didn't change anything 11:56:06 ok, 11:56:16 ajeain, no, he did not 11:56:17 ekuris, so that means we need good coordination between both 11:56:38 ekuris, do you know, if, when fwaas is disabled, packstack will install the firewall package? 11:56:48 ajeain, where is it? could you please screenshot it? 11:57:00 mrunge: sure.... 11:57:05 iovadia: ^^ 11:57:08 apevec, ekuris, I'm considering adding a new conf.d on the "shared" side... 11:57:29 and getting the fwaas config installed in there (for the l3-agent) 11:57:46 I need to check that what I'm thinking makes sense 11:58:06 ajo, in the l3 agent I don't have anything about FW . 11:58:10 apevec or the "shared" side should only be used to override default configurations on our side (I guess that's the use) 11:58:29 ekuris, but you added the config on the .service file, right? 11:59:06 mrunge: sent it via email 11:59:16 ekuris, the "Permission denied: u'/var/lib/neutron/lock/neutron-iptables-qrouter-7c738e45-cdc0-4f26-9e97-5993946b1031'" 11:59:19 should be something different 11:59:25 ajo, I added this one /etc/neutron/fwaas_driver.ini 11:59:55 ajo, what is this error ? 12:00:10 ekuris, I'd guess it's related to selinux 12:00:18 ajo, what is the reason 12:00:19 ekuris, could you check if setenforce 0 12:00:23 fixes the error? 12:00:26 sec 12:00:30 it could be a directory permission error 12:00:38 like running the agent as user neutron 12:00:45 can't write to /var/lib/neutron/lock 12:00:52 can you check the permissions of /var/lib/neutron 12:00:57 and /var/lib/neutron/lock ? 12:01:51 ajeain, thanks! 12:03:18 ajo, it's really set to enforcing . about permissions of /var/lib/neutron = drwxr-xr-x. 5 neutron neutron 4096 May 5 14:30 neutron 12:03:50 ajo, same with /var/lib/neutron/lock 12:04:09 ok, those settings seem ok 12:04:35 I'd check the audit log, because then it looks more like an selinux issue 12:04:50 if enforcing is disabled, and the error disappears, then it's selinux 12:04:55 but we need to update the selinux policies 12:05:08 http://www.ajo.es/post/100147773734/how-to-debug-se-linux-errors-and-write-custom 12:05:29 pixelbeat, it is not, price of progress! a 3GB RAM VM is the minimum afaict 12:05:35 2GB was oom-ing 12:06:06 pixelbeat, and on a current idle allinone, ps_mem shows 2.3GiB used 12:06:14 apevec, pixelbeat ouch 12:06:37 yep it's a bit mad 12:06:53 ajo, that's with all the crap: ceilo, swift... 12:07:51 wiki updated with that 3GB min RAM suggestion 12:11:47 mrunge: let me know if this has been reported already 12:12:15 ajeain, it's not 12:12:21 but it should 12:12:41 well, with -theme the world looks completely different 12:19:27 mrunge: after installing the theme (and restarting httpd), I got "something went wrong", and I couldn't get into Horizon anymore, only after clearing all cookies/history 12:19:56 mrunge: and now, I don't see the tree anymore... it now uses tabs 12:20:22 ajeain, yes.
we're still working on that 12:20:37 ajeain, that's due to -theme being deprecated in rdo 12:21:17 mrunge: OK, so I'm reporting a bug in launchpad, linking it to the downstream bug (the icon issue), and another bug about the trace I am getting (with "something went wrong") 12:22:09 ajeain, yes please: icon issue. please mention, your dashboard is under /dashboard, not under / 12:22:55 mrunge: it was always under dashboard, wasn't it? 12:22:56 ajeain, and the traceback for the -theme package: idk if you should report that. that's too obvious to ignore 12:23:10 ajeain, yes, but now upstream made it configurable 12:23:10 mrunge: I see.... 12:23:25 ajeain, but apparently, nobody tests that 12:23:42 mrunge: so, why is it an issue? is it related to the icon issue? 12:23:54 mrunge: or should I just add it into the icon bug? 12:24:45 ajeain, the icon issue is: webfonts. for me: http:///static/horizon/lib/font-awesome/fonts/fontawesome-webfont.woff?v=4.1.0 12:25:00 and it should read /dashboard/static 12:25:27 ajeain, you don't even see that a font is missing, if you have it already installed 12:27:14 mrunge: http://10.35.117.2/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.woff?v=4.1.0 brings a "not found" error 12:27:31 ajeain, exactly 12:27:50 pixelbeat: apevec: `ps_mem` reports 2.6 G for a single node DevStack instance built yesterday: https://kashyapc.fedorapeople.org/virt/openstack/ps_mem_no_Nova_instances_5MAY2015.txt 12:27:54 ajeain, but it should present you something on http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.woff?v=4.1.0 12:28:06 (see the added /dashboard in the path?) 12:28:07 mrunge: still not found 12:28:47 mrunge: although http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts exists 12:29:00 kashyap, yeah, upstream craze, nothing specific to RDO or RHEL... 12:29:24 apevec: By "single node", I mean just these services - Keystone, Glance, Nova, Neutron and Cinder 12:30:01 ajeain, http://10.35.117.2/dashboard/static/horizon/lib/font-awesome/fonts/fontawesome-webfont.ttf?v=4.1.0 exists 12:30:01 so that's even w/o swift and ceilo which we have in the RDO allinone... 12:30:22 apevec: Although, looking at my history of ps_mem profiles of DevStack, some months ago, _without_ Cinder alone, DevStack reported 2.6 GB. Seems to have improved slightly now. 12:31:39 apevec, multi node passed and ran tempest fyi 12:32:00 weshay, thanks for the report 12:32:38 weshay, can we move those rdo-kilo jobs to the public http://prod-rdojenkins.rhcloud.com/ now that trystack seems stable enough? 12:32:52 weshay, aortega suggested to run them in parallel w/ internal for a while 12:32:54 mrunge: yes....webfonts.ttf exists but not webfonts.woff 12:33:08 weshay, then leave them public only 12:33:21 so we can send job links to rdo-list 12:33:27 ajeain, yes. I'm still looking into that 12:33:33 apevec, as long as the infra is stable 12:34:37 sine qua non 12:34:55 apevec++ on epel tempest 12:34:55 weshay: Karma for apevec changed to 4: https://badges.fedoraproject.org/badge/macaron-cookie-i 12:34:57 ajo, I disabled selinux and rebooted my host but I still have errors in the l3-agent : IOError: [Errno 13] Permission denied: u'/var/lib/neutron/lock/neutron-iptables-qrouter-7c738e45-cdc0-4f26-9e97-5993946b1031' 12:35:35 ajo, my router has access to the external net but it cannot get to the internal net 12:36:17 ekuris, yes, it's not able to configure iptables to do the forwarding or other stuff probably 12:36:35 ajo, I mean the VM cannot access the external net .
looks like the router does not run NAT 12:36:36 we may want to loop in jlibosva or somebody else if we need this looked at quickly 12:36:51 I'm still behind the work I need to submit for today :( 12:37:18 open a bug ? 12:37:20 ekuris, we need to investigate why the agent can't create the lock file 12:37:24 ekuris, yes 12:37:28 I'd open a separate bug for that 12:38:10 apevec, this time installation fails because of a missing package "Requires: python-cheetah"... any idea what repo I need to add for this? 12:38:39 apevec: do we have a sample tempest config or some generator? 12:38:48 ekuris: hi 12:39:08 social, there should be a conf generator in the tempest RPM 12:39:14 weshay, eggmaster ^ 12:39:15 ekuris: what is the output of 'ls -l /var/lib/neutron' ? 12:39:54 ekuris: also can you please paste the whole traceback from the l3 agent to pastebin? 12:40:50 panbalag, that's in the RHEL7 extras repo 12:41:02 apevec, ok. 12:41:04 apevec, I've asked dkranz for some initial doc.. I keep hearing no 12:41:40 weshay, what are the CI jobs doing? 12:41:49 jlibosva: [14:03:18] ajo, it really set to enforce . about permission of var/lib/neutron =drwxr-xr-x. 5 neutron neutron 4096 May 5 14:30 neutron 12:41:49 [14:03:51] ajo, same with /var/lib/neutron/lock 12:42:04 jlibosva, and he posted logs, here: https://bugzilla.redhat.com/show_bug.cgi?id=1218543 12:42:32 ajo: k, thanks :) 12:42:34 jlibosva, http://pastebin.test.redhat.com/280780 12:42:45 jlibosva, thank *you* 12:43:35 ajo, I don't know if the bug is valid here because it appeared after I disabled FW in the neutron conf 12:44:01 social, dkranz can help w/ tempest 12:46:51 ekuris: when you said you disabled selinux, did you do it via /etc/selinux? 12:47:09 ekuris: if you do 'grep -i avc /var/log/audit/audit.log', does it print something? 12:47:54 yes I do 12:48:32 jlabocki, Yes I do, there is no output 12:50:52 apevec, is this the repo "http://download.devel.redhat.com/rel-eng/EXTRAS-7.1-RHEL-7-20150205.0/compose/Server/x86_64/os/" ? 12:52:55 panbalag, yes, but please don't share internal URLs on a public channel... 12:53:07 (too late it's in the meeting minutes :) 12:54:36 the same is true for using internal pastes. fpaste does an excellent job 12:54:55 but everyone can have an eye on the paste 13:02:26 apevec: groan... it looks like both "python-hardware" owners are OOTO for the week 13:03:12 is there any way around that or do we just have to wait for them to get back? 13:04:07 hewbrocca, ok, maybe trown can have a look at merging to Rawhide and then send a patch to number80 who is ueberpackager 13:04:26 Hi, can someone write some words about tempest configuration here https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test ? 13:04:50 trown, or email it to me and I'll build from a local branch 13:05:30 apevec: what am I emailing? 13:05:46 apevec, hewbrocca: read the thread already, so I should rebase to the latest git in rdo-management ? 13:05:58 -d 13:06:12 I am totally down to help with the hardware library. I have no idea what I need to do though. 13:06:55 * number80 pestering against CentOS 7.1 cloud image as it breaks all the nice scripts to quickly instantiate a VM 13:07:13 it makes virt-install hang :/ 13:09:29 so IIUC the package python-hardware in Delorean is different from the one in Fedora Rawhide (right apevec ?)
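[Editor's note: the repoclosure checks apevec and number80 mention can be reproduced with yum-utils; a sketch, using the trunk-mgt repo URL from earlier in the log - the missing python-pandas / python-ptyprocess deps show up exactly this way.]

    yum install -y yum-utils
    # --tempcache avoids touching the system yum cache when run as non-root;
    # lookaside repos can be added for base channels whose packages should satisfy deps
    repoclosure --tempcache \
        --repofrompath=trunk-mgt,http://trunk-mgt.rdoproject.org/repos/current-passed-ci/ \
        --repoid=trunk-mgt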
13:09:46 and we need to resolve that so that we can get that package into RDO kilo 13:10:15 I guess we have been using the one in Delorean trunk, so that is probably what we should get into RDO 13:10:20 but that is about all I know at this point 13:10:22 ack 13:10:27 apevec: hewbrocca can we not just take exactly what is in delorean-mgt? 13:12:03 * number80 likes the RDO manager logo 13:12:27 Does it make sense to add http://docs.openstack.org/developer/tempest/configuration.html to https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test? 13:13:20 trown: problem is that dtantsur built a newer python-hardware (0.14 vs 0.11) 13:13:57 I can downgrade but that means introducing Epoch (and I'd rather avoid it if possible) 13:14:33 ah...I can fix that...that is just the tags not getting updated in the delorean repo 13:14:52 xlnt 13:14:54 number80: what is in the delorean-mgt repo is actually .14 13:15:02 trown: many thanks 13:15:20 trown: so we just need to push this build into RDO then ? 13:15:44 http://koji.fedoraproject.org/koji/buildinfo?buildID=625251 (btw, it was fred, not dtantsur) 13:15:46 number80 can you point me to the dtantsur build 13:15:48 ah thanks 13:16:16 if it works then it's just a matter of submitting to gerrithub 13:16:57 * number80 notices that the fedora notification system failed to notify him of flepied builds 13:19:52 number80: on the link above am I just checking that the spec is correct? I am a total noob at real packaging, I have only done delorean up to this point. 13:20:22 trown, the spec in Fedora is missing some deps you've added, please compare git history 13:20:43 trown, ideally, we would update Rawhide so we can drop the fork 13:20:44 oh, it's missing deps 13:20:59 yeah, I think the Fedora spec needs fixes from rdo-mgt 13:21:24 http://pkgs.fedoraproject.org/cgit/python-hardware.git/tree/python-hardware.spec (urls are pretty predictable) 13:21:34 but would also like to confirm there aren't non-upstream patches in https://github.com/rdo-management/hardware 13:21:51 apevec: ack, spinning a new build (and I already checked that there are no new patches in the rdo-management package) 13:22:23 number80, be careful, rdo-mgt is using its own source 13:22:39 ok missing R: python-{ptyprocess,pandas,pbr} 13:22:47 apevec: ouch 13:22:49 https://github.com/rdo-management/rdoinfo/blob/master/rdo.yml#L123 13:23:09 apevec: the rdo-mgt/hardware fork is strictly behind enovance/hardware 13:23:19 number80, yeah those, but also need to double-check upstream requirements.txt is really correct 13:23:37 trown: according to these, you advise us to use the enovance/hardware release ? 13:23:38 trown, any reason not to use latest upstream?
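[Editor's note: the requirements.txt vs .spec double-check discussed here is what rdopkg's reqcheck subcommand automates (as noted just below); a sketch, run from a dist-git checkout:]

    # compare Requires: in the spec against upstream requirements.txt
    rdopkg reqcheck
    # or eyeball it manually:
    grep '^Requires:' python-hardware.spec
    cat requirements.txt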
13:23:52 apevec: thanks to rdopkg reqcheck it's already part of my routine :) 13:24:06 apevec: number80, ya enovance/hardware master would be best 13:24:31 trown, apevec: then, it should be settled, i'll check requirements and update the package accordingly 13:24:45 https://github.com/rdo-management/python-hardware-packaging/blob/packaging/python-hardware.spec 13:24:51 that is our delorean packaging 13:25:10 it works, what I am working on totally relies on that package so I am confident in that 13:25:38 trown: if you'll be working on packaging, when the kilo craze'll be over, feel free to ping me for sponsoring :) 13:25:45 we would need to change the {upstream_version} back to an actual version though 13:26:03 trown: apevec has improved that part on the kilo spec 13:26:06 number80: awesome, I need to learn how to do the real packaging 13:26:25 more packagers, less SPOF :) 13:26:38 +1 13:27:03 +1 to less SPOF 13:27:10 lots of those it feels like 13:27:20 number80, yeah, please add that ?upstream_version thing in Rawhide, so we can keep delorean/rawhide specs closer 13:27:33 Speaking of which, I got an email from the guy from CloudBaseSolutions about packaging hyper-v stuff for RDO. 13:27:39 apevec: +2 13:27:47 number80: thanks for pulling that together 13:27:55 I encouraged him to drop by here, and have a look at the packaging guide. Hopefully we'll hear more from him soon. 13:28:04 rbowen: cloudbase guy o/ :-) 13:28:04 You may remember talking with him in Paris and Atlanta. 13:28:10 Oh, there you are! 13:28:13 Yay. 13:28:43 oh hi alexpilotti, I have seen you on #openstack-ironic as well 13:28:50 alexpilotti: hi o/ # you may not remember but i was trolling you in Paris :> 13:29:05 lol 13:29:41 I will be playing in the test day today! ;) 13:30:32 rbowen: the main idea is that we'd like to add Hyper-V support in RDO 13:31:17 until kilo we didn't need anything particular on the RDO side, simply adding additional compute nodes to a RDO deployment 13:31:49 … and disabling the libvirt nova-compute that RDO requires to install, ahem 13:32:35 we also have a tool that does an automated RDO deployment on anything with Hyper-V, starting with a Windows 8 laptop: http://www.cloudbase.it/v-magine/ 13:32:38 number80, in the hardware spec it is using the pypi release as source, but the last release in pypi is missing a few commits 13:33:02 with kilo there's a new requirement: we need to add an rpm package for networking-hyperv 13:33:22 trown: so I should just keep the current release in fedora rawhide and fix the spec 13:33:23 so I asked rbowen about how to proceed and I got the pointers 13:33:46 And of course if you need anything else, you can always ask here. We span most timezones. 13:34:49 number80: what do we do in the case we have one commit we need that is not yet in a pypi release? add a patch? 13:35:00 trown: yes, sir ! 13:35:33 alexpilotti: I'm EMEA based (you're in Italy, right), so you could ping me for packaging stuff (and apevec too, if he agrees) 13:35:53 I'm actually in Romania (EEST, GMT+3 now) 13:36:11 not a big difference, but my mistake :) 13:36:21 np :-) 13:36:25 mail is hguemar AT redhat DOT com 13:36:35 funny thing is that everybody thinks we are based in Italy 13:37:09 due to my name and the .it in the domain, while I'm actually the only italian in the company :-) 13:37:31 number80: thanks!
going to send you some intro spam right now 13:37:33 lol 13:45:15 number80: email sent 13:47:54 number80: I am actually not sure if the missing commit is worthy of a patch, I would say we just fix the spec and go with the release in rawhide 14:08:24 It is way too quiet in here 14:08:32 Something has to be on fire somewhere. 14:08:41 hi. testing openstack-kilo on rdo, with this repo: https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/ i get python-requests 1.1.0 (2.3.0 seems to be there for the juno el7 repo), and i get this: Error: Package: 1:python-keystoneclient-1.3.0-1.el7.noarch (openstack-kilo-testing) Requires: python-requests >= 2.2.0 14:09:03 should i also enable the openstack-juno repo to get those pkgs? 14:10:02 ricrocha1, is that package installed but wrong version? 14:10:09 python-requests 14:10:28 it is 14:11:04 but i don't see a 2.2.0 candidate with the el7 kilo testing 14:24:25 Is there a reason that the cinderclient has an older version, created on the 21st of December 2014? 14:36:40 note: 14:36:49 mariadb-server is required by openstack-trove-api 14:36:55 conflicts with mariadb-galera-server 14:37:11 mariadb-galera-server provides mysql-server, but not mariadb-server 14:37:20 thus, explosions. 14:37:24 uh 14:37:33 suggestion: openstack-trove-api require: mysql-server 14:37:50 I thought it was fixed 14:37:59 was testing: https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/ 14:38:04 or at least I verified it was fixed 14:38:13 ah, it was fixed only on rhel-osp 14:38:20 yeah 14:38:29 I can work around it. 14:38:46 lon: the question is (and let me ping vkmc and sgotliv as well): do we want to fix it this way in RDO too? 14:42:48 why is mariadb-galera-server providing mysql-server? 14:43:31 and... is mysql-server == mariadb-server? are we keeping mysql-server as a link for mariadb-server? 14:45:36 lon, tosky ^ 14:45:44 sgotliv, ^ 14:48:56 vkmc, why is mariadb-server required by the trove api? 14:48:57 vkmc: due to history, mariadb is supposed to be a drop-in replacement for mysql 14:49:37 number80, yeah, that's why it feels odd that mariadb-galera-server conflicts with mariadb-server but not with mysql-server 14:50:00 lon, vkmc, sgotliv, tosky: I need to sync RHOS/RDO 14:50:02 sgordon, to store data related to datastores, datastore versions and such 14:50:04 sgotliv: because of me nagging for a solution to the problem "trove services die if they start before the underlying database server" 14:50:15 vkmc: you get to pick between mariadb-galera and mariadb-galera-server. 14:50:16 sorry sgordon 14:50:18 sgotliv, ^ 14:50:23 if we made one obsolete the other, it would mean you don't have a choice 14:50:36 so, they should generally both provide/obsolete mysql-server 14:51:04 number80, I wonder if Cinder has the same dependency? 14:51:37 sgotliv: no deps on mysql/mariadb/galera :) 14:51:37 the intent is to allow the admin to pick "do I want a clustered db or not" 14:51:55 rohara: ^ I think that's right? 14:51:58 sgotliv, other services manage the dependency server side... not in the package 14:52:27 lon: if we rely on deployment tools, I would just drop the "hard" requirements and switch to soft requirements 14:52:40 and let the deployment tool pop the question 14:52:58 sure, but you need some sort of mysql flavor to run trove 14:53:02 hence, mysql-server. 14:53:10 depend on that, no problem.
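[Editor's note: a sketch of the packaging relationship lon is proposing here. Per the log, both server packages already carry the mysql-server virtual provide, so trove can depend on that instead of a concrete server package:]

    # in mariadb-server and mariadb-galera-server (already the case):
    Provides: mysql-server

    # proposed for openstack-trove-api, so either server package satisfies it
    # and the admin keeps the clustered/non-clustered choice:
    Requires: mysql-server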
14:53:10 lon, it's different 14:53:37 number80, lon, sgotliv, vkmc: the dependency is a quick hack, but otherwise structural changes to trove services are needed, that's why maybe it shouldn't go in RDO 14:53:42 lon: yes but what happens if you want to use a remote mysql server :( 14:53:51 tosky: +1 14:53:55 number80: makes sense. 14:54:35 one thing to avoid however is 14:54:56 and, this isn't an example of that happening, since remote-mysql is valid... 14:54:56 number80: this means I will have to add some workaround on the CI, because I 100% hit the issue whenever I restart the controller on the CI environment 14:55:16 but removing dependencies for things that really are needed, because you want the deployment tool to do it. 14:55:20 ^ that would be bad. 14:55:57 lon: dnf/yum by default installs Recommends: 14:56:13 yeah, that's fine 14:56:22 it wouldn't change much end-user experience, and allows for more flexibility 14:56:25 lon: oh my favorite topic! :) 14:56:34 but not introducing that in Kilo :) 14:56:46 rohara: nevermind, i forgot the whole remote-sql thing. It's a trove problem 14:56:59 not sure, why, but there it is :] 14:57:06 (not sure why I forgot) 14:58:00 yeah so mariadb-server and mariadb-galera-server needed to have the same provides but NOT use an obsolete 14:58:09 i am trying to recall the details 14:58:32 originally i had an obsolete statement in the spec but had to remove it 14:59:33 tosky: CI would just need to install mariadb-whatever when doing trove tests (probably mariadb-server) 15:00:08 (I don't see CI setting up a galera cluster to do mariadb-galera based trove tests in the short term?) 15:00:13 lon: that's not the problem: the problem is that whenever you restart the system, on the CI environment, trove services start before mysql/mariadb/mariadb-galera/whatever 15:00:20 and they die 15:00:21 and boom 15:00:25 gah 15:00:40 so bad units or something 15:01:35 sounds like an issue in trove unit files 15:01:44 mrunge: yeah 15:01:49 well, wait 15:02:07 no, no 15:02:16 you don't need to have trove and mysql/mariadb on the same host, do you? 15:02:26 yes, that's the point: what is the supported configuration? 15:02:45 in that case, we don't have a hint if the database was already started 15:02:47 mrunge: no, that's what number80 pointed out. but, if on the same system, it should work :] 15:02:49 in the product it's a different story; in RDO, do we want to tie trove to the controller/database host?
15:03:09 so trove 15:03:35 that would create a lovely bottleneck 15:03:37 yes, trove dies etc etc, I filed an upstream bug for this 15:04:02 note: while testing this, I realized that also neutron-server does not start if the underlying db can't be reached 15:04:03 guestagent should perhaps require a database 15:04:13 noooo, that's not about guestagent 15:04:14 https://wiki.openstack.org/wiki/Trove 15:04:22 forget guestagent 15:04:32 ok 15:06:24 lon, number80, tosky vkmc let's take it on our weekly Trove call today 15:07:13 ok 15:07:15 ok 15:13:18 number80: I figured out how to make the patch for hardware...if you have not already done a build, this should be what we want: https://github.com/rdo-management/python-hardware-packaging 15:14:08 crap...I just did that on the wrong repo...that is correct, but I am moving it to my own repo 15:15:25 tosky: for the Trove startup, I think you could apply a similar fix to what I proposed for Neutron in https://bugzilla.redhat.com/show_bug.cgi?id=1188198 15:16:22 jpena: I file bugs, others usually fix them :) vkmc, lon, sgotliv ^^^ 15:16:32 trown: mock building currently (had to attend to other emergencies), I'll check before pushing, thanks :) 15:18:20 jpena, thanks! I think that could work for us as well 15:21:09 Could someone go to https://rsaebs.corp.redhat.com/ and tell me if they're seeing sec issues? I want to do my Compass but I'm having access troubles. 15:21:11 mburned: btw 15:21:19 python-ironicclient will need to be in core :/ 15:21:20 Sorry, wrong room. 15:21:28 Deeply dumb; apologies. 15:22:42 mburned: perhaps not - we'll see 15:23:12 lon: ping 15:23:12 yrabl: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/ 15:24:31 yrabl: pong 15:24:53 zodbot: yes, I get it, jcm wrote you. 15:25:22 lon who's responsible for the packaging? the Cinder client package hasn't been updated since December... 15:25:33 cinder client in rdo? 15:25:37 yep 15:25:40 egafford: internal links => internal irc ;) 15:25:45 lon^^ 15:25:49 hguemar or jruzicka 15:25:55 I'd think 15:25:56 cinder 15:26:01 lon, thanks! 15:26:12 I could be wrong 15:26:20 yrabl: jruzicka is responsible for clients and I'm his backup :) 15:26:55 number80: can you update it? :) *nice please* 15:27:01 yrabl: AH NUMBER80, SEE? 15:27:04 * lon ducks 15:27:32 my guess is that python-cinderclient hasn't been updated because it's not broken 15:27:38 yrabl: is there a ticket, which release are we speaking of (juno/kilo) ? 15:28:18 (theoretically, we're still maintaining icehouse currently) 15:29:55 yrabl, number80, lon: https://jruzicka.fedorapeople.org/pkgs/cinderclient.jpg 15:30:01 number80: the link I sent in the email is the correct one: 15:30:01 https://github.com/trown/python-hardware-packaging/ 15:30:48 yep, I also noticed the patch :) 15:32:03 (that was the missing piece to get the correct build) 15:36:01 number80 which repo should provide openstack-ironic-conductor ? 15:36:15 openstack-kilo-testing OpenStack Kilo Testing 746 ? 15:38:30 rook: it's absent from there, I'm checking if it has been built 15:39:04 lon: i'd expect it to be in core along with openstack-ironic 15:39:07 rgr.. 15:39:16 yup, it's absent from CBS :( 15:39:18 http://cbs.centos.org/koji/search?match=glob&type=package&terms=openstack-ironic* 15:39:23 cool 15:39:29 mburned: right we talked about that 15:39:34 #action hguemar rebuild openstack-ironic for kilo 15:39:44 for the time being, disable ironic to get around this i suppose.
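[Editor's note: the "trove dies if it starts before the database" problem discussed above is a unit-ordering issue; a minimal sketch of the kind of fix jpena proposed for Neutron in BZ#1188198, done as a systemd drop-in. The unit and database service names are illustrative, and After= only helps when the DB runs on the same host - which is exactly the remote-mysql objection raised in the thread.]

    # /etc/systemd/system/openstack-trove-api.service.d/db-ordering.conf
    [Unit]
    # start after the local DB, if there is one on this host
    After=mariadb.service

    [Service]
    # retry instead of dying if the DB is not reachable yet
    Restart=on-failure
    RestartSec=5

    # then: systemctl daemon-reload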
15:46:03 rook, ironic is in the mgt repo, I didn't build it since they have non-upstream patches 15:46:30 #undo 15:46:30 Removing item from minutes: ACTION by number80 at 15:39:34 : hguemar rebuild openstack-ironic for kilo 15:46:43 apevec all good - I just had it enabled in my answer file and wanted to let yall know it failed. 15:46:54 yrabl, jruzicka, number80 - I was looking at cinderclient for GA, but there simply weren't updates on stable/kilo 15:46:57 lemme check again 15:46:57 i can add the other repo. 15:47:22 apevec: there was none according to git 15:48:27 for juno 15:48:35 kilo has a new 1.2.0 15:48:55 when was that released... 15:49:17 apevec: nevermind, it's liberty 15:49:27 my eyes are tricking me 15:49:32 yeah, nothing new on stable/kilo 15:49:48 yrabl, ^ why did you expect a cinderclient update? 15:50:06 if there's something missing for Kilo, ask upstream to release it on stable/kilo 15:50:22 people want 2015 builds :) 15:50:44 zomg this build is sooo last year 15:50:51 jruzicka: \o/ 15:51:00 shall I rebuild it just to get a newer stamp? :) 15:51:35 RPMs model year 2015 15:51:56 if we had a test suite, it wouldn't be a bad idea to have mass rebuilds once in a while 15:53:06 number80, why, do bits decay? :) 15:53:49 apevec: mostly thinking about updated dependencies breaking stuff at runtime 15:54:06 CI is supposed to catch them though 15:54:09 yeah 15:59:01 so many things, so little time :( 16:06:40 apevec: I expected to see some of the new features ready from the client side 16:07:07 yrabl, as I said, talk to upstream, I'm not following cinder development 16:07:20 I just see there aren't any updates on the stable/kilo branch 16:07:40 apevec: ok - thanks for the help :) 16:18:13 trown: I've left you a few comments in github to fix the packaging in rdo-mgmt for python-hardware 16:18:56 number80: awesome 16:20:13 apevec, testing new jobs on prod-rdojenkins 16:21:57 vkmc: I did retry creating my trove image with the master of diskimage-builder. I was able to build mysql for centos. Building for fedora still failed for me (chown: cannot access '/etc/trove': No such file or directory), but I should be ok for what I need it for. Thanks for your help. 16:28:53 number80: updated our rdo-mgt packaging with those changes...thanks a ton 16:29:18 trown: np, you'll save me a lot of time, later ;) 16:30:26 apevec, https://prod-rdojenkins.rhcloud.com/view/RDO-Kilo-Test-Day/ 16:30:51 all sunshine! 16:30:55 weshay, thanks! 16:31:35 weshay++ 16:31:35 number80: Karma for whayutin changed to 1: https://badges.fedoraproject.org/badge/macaron-cookie-i 16:31:36 sunshine and bunnies! 16:32:08 btw, if you have a FAS account, you can also give cookies on freenode too ;) 16:32:51 number80++ 16:33:11 trown: FAS is hguemar 16:33:30 hguemar++ 16:33:45 (the number80 is a long story but if you're a physicist, easy to understand when you see my initials) 16:34:36 Hah, mercury 16:34:58 cool story, bro 16:35:04 number80: nice 16:35:07 jruzicka++ 17:01:09 apevec: ping 17:01:09 eggmaster: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/ 17:01:14 oh snap 17:01:28 lol 17:01:31 that ping was not just naked, man 17:01:33 eggmaster, ping 17:01:33 weshay: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/ 17:01:34 you should see it. 17:01:45 it's doing something really wrong. ANYway 17:02:04 apevec: so we just got a pass on the openstack-tempest-kilo running on a centos7.1 tempest node 17:02:19 weshay: pong 17:02:27 no warning about naked pong ;) 17:02:28 eggmaster, yes please!!
17:02:49 I mean, please use centos7.1 tempest nodes instead of f20 17:03:18 jpena noticed f20 nodes are sometimes not getting an IP so the job fails 17:03:22 heh, let's not get ahead of ourselves ;) (I'm not opposed to that..) 17:03:39 we just switched everything to f21 last week 17:03:46 should be more stable I'd hope 17:03:53 ah 17:04:26 but really, once we get testing synced to production (I assume that is the plan) there's afaik no big reason not to switch. 17:12:54 bswartz: radez : if you don't mind trying out ssh from a couple of different locations, and make sure you can 1) get in, 2) if you move your ~/.ssh/id_dsa or id_rsa (whichever we used), that you will _not_ be able to get in. I'm pretty sure it's set up correctly, but it doesn't hurt to double-check 17:34:54 I was looking for it, but my computer is dying 17:35:07 does anybody know where we put the "passed CI" master repo for delorean? 17:35:42 abregman was looking for something similar :) 17:38:03 ajo, will he survive? =) 17:38:37 abregman, not sure XDD 17:39:22 mouse cursor moving slow 17:39:30 fans blowing hard... 17:44:29 good morning guys 17:55:20 zodbot ++ 18:19:51 number80 where should i pick up python-werkzeug ? 18:20:18 ceilometer needs it. 18:35:58 rook, what OS? https://apps.fedoraproject.org/packages/s/werkzeug 18:36:12 pixelbeat: RHEL71 18:36:13 It's available in base centos 7 at least 18:36:23 should be in rhel 7.1 so? 18:38:08 pixelbeat: nope - not in the 71 repos i have. 18:38:24 rook I had a quick look. is it in the rhel 7.1 extras repo? 18:42:52 rook, that's needed for EPEL7 anyway as per https://www.rdoproject.org/Repositories 18:45:01 pixelbeat: yup - had it enabled for the controller, not the compute.. whoops 18:45:54 thanks! :) 19:01:29 OK, doesn't sound like there is much need for Keystone support. I have to log off for a while. Email me if there is something burning. I'll be back on in a couple of hours 20:21:00 I am presuming the test day is still on for today 21:38:02 anyone here who can help with packstack installation questions? 21:38:24 dosnibbles: what do you need? 21:40:10 I have two HP servers and a netapp. I want to run compute on both servers and make use of the netapp. I also want to be able to access the instances from my lan. 21:41:43 I'm not sure if access is possible without configuring floating IPs. 21:43:42 dosnibbles: you need the floating ip to access the external network 21:45:18 imcsk8: ok. 21:46:59 dosnibbles: i think it might be possible not to use them if you configure neutron for your LAN but it's a bit of a hassle 21:59:08 imcsk8: I'll make use of FIPs. Any idea about the netapp (nfs) setup? 22:36:43 hm, is it known that all style sheets seem to be missing from horizon? 23:01:19 Anyone seen this issue yet, http://paste.openstack.org/show/215113/ 23:01:38 can't see anything in the workaround, if anyone's found the issue 23:31:27 ok, found my issue with packstack, I am using a local mirror of epel, and the mongodb puppet module for packstack doesn't like the fact I am using use_epel=n, as that then defaults to using mongodb.conf, and only changes that file. It should use mongod.conf 23:33:50 just installed horizon from: https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2 23:34:02 getting: OfflineGenerationError: You have offline compression enabled but key "e5f9a607a797ba75cf310cc38c137e50" is missing from offline manifest. You may need to run "python manage.py compress". 23:34:08 known issue?
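[Editor's note: for the OfflineGenerationError above, the error text itself names the usual workaround; with RDO's packaging layout (manage.py lives under /usr/share/openstack-dashboard) that is roughly:]

    python /usr/share/openstack-dashboard/manage.py compress
    systemctl restart httpd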
00:08:03 getting: OfflineGenerationError: You have offline compression enabled but key "e5f9a607a797ba75cf310cc38c137e50" is missing from offline manifest. You may need to run "python manage.py compress". 00:08:07 known issue? 00:13:54 kfox1111, you may want to use the latest RPMs that are being used for testing 00:14:05 is there a newer repo somewhere? 00:14:24 I'd be happy to test. :) 00:14:50 heh. sorry. :) 00:15:08 the new repo is https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7 00:15:32 check out https://www.rdoproject.org/RDO_test_day_Kilo 00:15:58 I could have sworn that rpm wasn't here earlier today. :) 00:16:35 ah... perfect. thanks. :) 00:16:46 yeah, it isn't, but try https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-testing-kilo-0.11.noarch.rpm instead 00:17:58 well, it's there actually, but not listed, when you enumerate the files via the web services 00:18:27 yeah. I grabbed it ok. the repo file shows up ok. I think it will work. 00:18:44 cool, good luck 00:19:20 thanks. :) 08:05:36 morning all 08:19:10 Hi 08:19:26 Any idea why python-oslo-vmware is not updated in the juno package ? 08:19:42 because the current version is 0.12 and in juno rdo it's 0.6 08:31:27 Hi, I changed dhcp_delete_namespaces=True in /etc/neutron/dhcp_agent.ini and restarted neutron-dhcp-agent but existing namespaces are not deleted 08:33:03 itzikb try to restart the network 08:37:48 kodoku: What do you mean ? systemctl restart network ? 08:41:20 itzikb yes 08:41:35 itzikb i use service network restart but it's the old command 08:42:12 kodoku: Did. There are still namespaces not belonging to existing networks 09:02:07 .whoowns MySQL-python 09:02:07 number80: jdornak 09:02:14 .fasinfo jdornak 09:02:15 number80: User: jdornak, Name: Jakub QB Dorňák, email: jdornak@redhat.com, Creation: 2012-02-02, IRC Nick: , Timezone: Europe/Prague, Locale: cs, GPG key ID: , Status: active 09:02:18 number80: Approved Groups: cla_fpca cla_done packager fedorabugs 09:05:56 ah zodbot knows more tricks than just taking minutes 09:06:55 kodoku, stable/juno requirements is: oslo.vmware>=0.6.0,<0.9.0 09:06:57 apevec: https://fedoraproject.org/wiki/Zodbot a lot more commands available :) 09:07:05 0.12 is the master (liberty) release 09:07:38 apevec: yes, received a mail about oslo.vmware (i keep asking them to create a ticket but mail seems easier :( ) 09:07:59 number80, there is 0.7.0 on the oslo.vmware stable/juno branch, we can update to that 09:08:00 stable/juno upstream is stuck at 0.7.0 so I will push it today 09:08:08 \o/ 09:08:10 ack 09:08:48 though newer may work, without feedback on bugzilla, it's not right to push oslo.vmware > 0.7.0 09:09:59 (an actual ticket about these issues may encourage updating it in RHOS too) 09:10:14 yeah, we'll stick to upstream reqs for now, but can backport for specific issues 09:10:47 +1 10:04:11 * cdent is stuck right at the start 10:14:52 what repos are required? I'm starting from a bare rhel 7.1 with no repos configured 10:38:52 the novnc proxy service is not starting with the kilo testing repository. is this already a known issue? 10:54:51 the novnc issue is only fixed for juno, I added a note in https://bugzilla.redhat.com/show_bug.cgi?id=1200701 11:04:35 berendt: I saw that yesterday, too.
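[Editor's note: on itzikb's namespace question - dhcp_delete_namespaces=True only affects namespaces torn down after the change; namespaces orphaned earlier need manual cleanup. A sketch using neutron's stock cleanup tool (flags may vary by release; --force also kills processes still running inside the namespaces):]

    neutron-netns-cleanup --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/dhcp_agent.ini --force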
11:29:22 cdent: link in the headline https://www.rdoproject.org/RDO_test_day_Kilo (setup + workarounds for known issues) 11:30:56 number80 I put slightly more details about my issue on the etherpad: https://etherpad.openstack.org/p/rdo_kilo_test_day 11:31:30 I've got the kilo-testing repo but it appears to want some packages my system can't reach 11:34:29 cdent: have you registered the RHEL 7.1 image? I think this is necessary to gain access to the official Red Hat repositories. 11:37:09 berendt: yes, or use the centos repo 11:37:30 berendt: No, it's not registered. I guess that's part of my question: How do I turn it on. I usually use fedora for all my testing, but it appears fedora is not a part of this round of testing. 11:37:36 How can I add a centos repo? 11:38:15 cdent, you can start with the centos cloud image 11:38:54 I'm presuming we probably won't do the packaging meeting this morning, due to test day? 11:39:42 rbowen, yeah, I still have the testday meeting open, I guess we could prepare a summary around pkg meeting time 11:39:51 It's _hilarious_ that testing rhel is so hard to do that people just don't bother ;) 11:40:09 Sounds good 11:40:19 rbowen, you have that BZ query from last time, to see BZs reported during test day? 11:40:26 cdent, isn't it? 11:41:09 * cdent sucks down a centos image 11:41:33 cdent: +1 11:41:40 (for the RHEL thing) 11:42:18 * cdent weeps briefly 11:42:33 not to mention extras, optional annoyances 11:43:48 I'm adding special handling in rdo-release for that, it's been an issue everybody's hitting w/ rdo on rhel 11:45:38 apevec++ 11:53:28 * cdent is in business on centos 11:53:33 thanks number80 and apevec 11:54:10 apevec, FYI.. jenkins is failing when it's collecting the test results, so the red jobs atm are not an indication of the rdo build 11:54:40 will be looking into why the xunit result file is blowing things up 11:55:42 actual results are: 11:55:43 11:56:55 hrm.. 05:31:43.563 Caused by: java.lang.OutOfMemoryError: Java heap space 11:58:13 I still wonder why they added these memory options in the JVM, it just means that Java will run out of memory faster 11:58:41 yay, java... 11:58:49 weshay, thanks! 11:58:55 weshay++ 11:59:24 java is about 20 years old, and still we're struggling with memory issues 11:59:42 how did we use it 20 years ago? 12:00:38 mrunge: Sun used to sell hardware 12:00:54 mrunge: on printers, washing machines and other small devices that didn't really have a jvm 12:01:46 I created https://bugzilla.redhat.com/show_bug.cgi?id=1219006 (Wrong permissions for directory /usr/share/openstack-dashboard/static/dashboard) because at the moment it is not possible to access horizon after the installation with packstack. 12:02:08 mrunge, that's why my mobile phone has RAM/CPU like a '99 server ;) 12:02:24 mrunge, ^ again?? 12:02:32 berendt, did the httpd restart help? 12:02:50 Is anyone keeping a comprehensive list of issues somewhere? There's not much in either Workarounds or the etherpad? 12:03:06 apevec, berendt I still wasn't able to properly reproduce that 12:03:10 apevec: yes. I noted this in the bug report. 12:03:23 berendt, how did you install? 12:03:27 rbowen, yeah, let's start with the list of BZs reported y-day and today 12:03:48 mrunge: with packstack, multi node environment 12:03:50 rbowen, maybe not much means we're so good 12:04:07 berendt, did it work in the first run? 12:04:10 bugzilla isn't being very friendly to me this morning 12:04:19 mrunge: no. i had to manually restart httpd 12:04:26 after manually restarting httpd it worked for me.
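[Editor's note: for the horizon-unreachable-after-packstack reports above (BZ#1219006 / BZ#1150678), the checks and workaround used during the test day amount to:]

    # verify the static dir is readable by httpd (the bug is about its permissions)
    ls -ld /usr/share/openstack-dashboard/static/dashboard
    # the workaround that worked for both reporters:
    systemctl restart httpd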
12:04:39 rbowen, no such thing, ever 12:04:39 berendt, no, I mean: did the packstack install work at first try? 12:04:42 mrunge: yes 12:04:54 on fresh machines? 12:05:01 mrunge: of course 12:05:14 and httpd was already installed/running there? 12:05:23 no.. a fresh centos 71 installation 12:05:29 ok, thanks 12:05:39 berendt, can you share kickstart or was it manual install? 12:05:48 and not using httpd as frontend service for keystone 12:06:21 apevec: a tarball of /var/tmp/packstack is sufficient? 12:06:40 19 issues tagged 'RDO' that have been opened in the last two days. 12:06:41 I mean for base OS installation 12:07:19 apevec: https://github.com/boxcutter/centos/blob/master/http/ks7.cfg 12:07:26 berendt, kickstart (or steps used to install OS from scratch) + packstack answer file should be enough 12:07:56 berendt, any info you can offer is greatly appreciated. 12:08:13 berendt, actually, your bug is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1150678 12:12:32 mrunge, is it really a duplicate, that BZ is Juno 12:12:35 ? 12:13:27 Oh, I see, I searched for tickets *changed* since yesterday. That adds a few more. That's why it's different. 12:14:22 apevec, well, it's probably not 12:14:53 apevec, itzikb_ just captured that bug and added his info 12:14:56 apevec: http://paste.openstack.org/show/215445/ 12:15:04 this is the packstack.answers file used 12:16:54 apevec: Sorry - not following 12:18:12 itzikb_, this was about the horizon permission issue bug. you added your kilo info to the juno bug 12:18:21 apevec, asked, if it's the same bug 12:18:25 (it's not) 12:18:46 mrunge: ok. thanks 12:20:48 mrunge: So it's not a duplicate 12:21:39 mrunge: Should it be opened again? And if yes - do you want me to add my comments in the Kilo bug? 12:22:46 itzikb_, I reopened it, and please add your info there 12:24:13 mrunge: ok 12:26:48 mrunge: Done. I'll comment in the Juno bug that it was a mistake and it's Kilo 12:27:13 itzikb_, well... 12:27:36 itzikb_, maybe you should just point to the kilo bug 12:27:48 mrunge: ok 12:32:43 vkmc, number80, tosky - ok, so you need openstack-trove-2015.1.0-2 in kilo/testing ? 12:32:55 oh, right 12:32:59 here is the place 12:32:59 starting rebuils in CBS 12:33:19 rebuild even 12:33:45 apevec: vkmc was working on an update this morning 12:34:01 yes, because it can't be installed otherwise (conflict between the trove requirements in 2015.1.0-1 and mariadb-galera-server) 12:35:03 number80, -2 is it then, built 11UTC today 12:35:24 apevec: then it's all good 12:35:35 I need to fix notifications with gerrithub 12:35:55 it seems to use $RANDOM emails from github 12:36:06 apevec: thanks for the imported build! 12:36:10 vkmc: and thanks for the update 12:53:25 tosky, done: Installing: 12:53:25 openstack-trove noarch 2015.1.0-2.el7 openstack-kilo-testing 12:55:07 apevec: thanks 12:55:28 gchamoul++ 12:55:28 ajo: Karma for gchamoul changed to 2: https://badges.fedoraproject.org/badge/macaron-cookie-i 12:55:49 ajo: ??? 12:55:52 :D 12:55:59 :-) the -fwaas cherry pick of the missing commits 12:56:16 I had that on my todo list, but I'm a bit swamped lately on the QoS API definition 12:56:43 ajo: I just migrated openstack-packages/neutron-fwaas to rpm-master branch 12:56:55 hmm 12:56:56 ah 12:57:03 I thought it was something else 12:57:22 gchamoul, the rpm-kilo is missing some patches from f20- 12:57:23 nope just migration from f20-master to rpm-master branch 12:57:27 which are necessary for kilo 12:57:30 all the last 4 12:57:52 ajo: can do it! 12:57:56 if you want
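(Roughly what the volunteered fix amounts to: pulling the last four spec commits from f20-master onto rpm-kilo. The commit IDs are placeholders for whatever git log shows.)
    git clone https://github.com/openstack-packages/neutron-fwaas
    cd neutron-fwaas && git checkout rpm-kilo
    # what f20-master has that rpm-kilo lacks
    git log --oneline rpm-kilo..f20-master
    # bring over the missing commits, oldest first (IDs illustrative)
    git cherry-pick aaaa111 bbbb222 cccc333 dddd444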
12:58:00 gchamoul, I'd really really thank you 12:58:06 ajo, hmm, you mean upstream stable/kilo is missing patches? 12:58:11 otherwise it'd have to wait at least until tomorrow 12:58:12 apevec, yup! 12:58:14 4 patches 12:58:28 let me look for the bz 12:58:28 rpm-kilo and rpm-master are just dist-gits, for the spec 12:58:36 apevec, yes, it's in the spec 12:58:46 * apevec looks 12:58:49 1 sec... 12:58:57 there shouldn't be patches in rpm-* specs 12:59:13 Delorean builds from vanilla upstream branches 12:59:29 apevec, gchamoul : https://bugzilla.redhat.com/show_bug.cgi?id=1218543 12:59:32 so ideally, we need those proposed on upstream stable/kilo 12:59:37 apevec, sorry, I meant commits 12:59:47 commits to the spec repository 12:59:48 :) 13:00:02 ah so spec not source changes 13:00:06 * apevec clicks bz 13:00:14 ahh, this is missing a last comment I made 13:00:16 bad bugzilla 13:00:17 1 sec. 13:00:37 https://github.com/openstack-packages/neutron-fwaas/compare/rpm-kilo...f20-master 13:00:37 We need to cherry-pick the missing commits into rpm-kilo, it seems we branched it out from a wrong commit id. 13:00:56 apevec ^ 13:00:58 oh has needinfo on me 13:01:25 Has there ever been talk of optimizing --allinone so it doesn't have to ssh to do its puppet applies? 13:02:21 (or rather getting the manifests in place?) 13:02:22 ajo, crap, so I messed it up when branching :( 13:02:43 apevec, who does nothing breaks nothing.. 13:02:53 ajo, omg, I had an outdated git checkout :) 13:03:16 :) 13:04:04 gchamoul, ajo, ok, I'll fix rpm-kilo 13:04:17 and rebuild in CBS for kilo/testing 13:04:23 cdent: I'd prefer the same code path for all cases 13:04:28 thanks a lot apevec 13:04:31 apevec++ 13:04:32 ajo: Karma for apevec changed to 5: https://badges.fedoraproject.org/badge/macaron-cookie-i 13:04:46 that should've been -- rookie mistake! 13:05:55 social: I assumed that was the case, but packstack is so slow, and for people who really do want to use it to try it out and see if they like it, it would be nice to be able to eke out any possible increases 13:06:18 cdent: packstack is slow because it waits for yum 13:06:30 the ssh part isn't taking that much time 13:06:30 apevec: ack 13:07:13 social, an idea: is there a way to do a dry-run and get a list of packages which will be installed by packstack? 13:07:38 then feed that list into a single yum command, that should be faster than a separate yum install for each pkg? 13:07:52 apevec: it's not that, packstack hit issues with packages several times, so now we check their versions in the repos and so on 13:08:09 eg we especially verify each time whether there's an update for facter 13:08:10 and so on 13:08:39 is this another case of bandaid on top of bandaid on top of bandaid? 13:08:54 there is some plan to make this parallel in multinode and clean it up a bit, but packstack isn't that high prio, if I may point out
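(A sketch of apevec's dry-run idea above, purely illustrative since packstack does not work this way today: resolve the whole package set first, then run one yum transaction instead of one install per package.)
    # --assumeno answers "no" at the prompt, so yum prints the resolved
    # transaction without installing anything
    yum install --assumeno openstack-cinder openstack-nova openstack-glance
    # then a single transaction for the full set
    yum -y install openstack-cinder openstack-nova openstack-glance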
13:09:18 yet it is still the thing that we keep using for testing 13:09:36 cdent: because... try using tripleo 13:09:46 (not just today, but also when doing post-rebase backport testing etc) 13:10:33 cdent: apevec: so the issue is that we have to ensure things, which requires us to clean metadata each run 13:10:37 see packstack/plugins/prescript_000.py: server.append('yum clean metadata') 13:10:39 packstack/plugins/prescript_000.py: server.append("yum clean all") 13:10:52 packstack/plugins/prescript_000.py: server.append('yum update -y %s' % packages) 13:10:58 that is the slowest 13:11:58 ok, but that's only once, at the beginning 13:12:41 apevec: for example a packstack run on my machine for allinone takes 9 minutes 13:12:51 but that's because of fast repos and a good disk 13:13:56 for puppet the issue is that it calls yum for each package, but any hacks there are pointless as that should be fixed in puppet 13:14:33 apevec: and now packstack installs trove, good 13:16:09 I see some strange characters in horizon (kilo) and the firebug console says that the font from font-family: "FontAwesome" couldn't be loaded 13:16:10 social, what would be the fix in puppet: collect all packages and run a single yum install? 13:16:24 is it a known issue, or something in my environment? 13:16:27 apevec: you can't do that 13:16:29 tosky, I think mrunge was looking at that y-day? 13:16:43 apevec: what puppet can do is install packages that can be installed in parallel 13:16:46 apevec: I was partially on vacation, sorry, I didn't know 13:16:55 apevec: eg make a tree and install parts of the tree 13:17:15 tosky, not sure if there was a resolution though 13:17:34 mrunge, ^ was there a bz filed for missing fonts? 13:17:35 but that wouldn't fix much for the packstack case because atm we don't do a single puppet run (that would also speed things up) but a puppet run per component 13:17:36 apevec: luckily it's "just" a graphical glitch, so not blocking for now 13:18:17 social, yea, in theory it could all run on separate hosts 13:21:20 apevec, number80: some openstack-packages repos have a rpm-kilo branch, some don't! are they supposed to have a rpm-kilo branch as well?
13:22:15 gchamoul, only those which have L-only changes on rpm-master 13:22:28 I branch as needed 13:23:07 apevec: ack, so I am just doing the migration from f20-master to rpm-master, and I will look at that afterwards 13:36:25 Hello all, I am getting this error http://paste.openstack.org/show/215457/ 13:38:26 apevec, mrunge: it could be a simple Alias issue; if you modify /etc/httpd/conf.d/15-horizon_vhost.conf and add 13:38:29 vkmc, number80 - I've cleaned up github.com/redhat-openstack/trove (removed master-patches since we don't have any in Rawhide atm and rebased f22-patches to 2014.2.3 to match the f22 spec) 13:38:38 - Alias /static "/usr/share/openstack-dashboard/static" 13:38:42 apevec: ack 13:39:01 then the font is found and the arrow is shown 13:39:13 there is already a line Alias /dashboard/static "/usr/share/openstack-dashboard/static" 13:39:24 not sure if it can be replaced (I would bet "yes") or both are needed 13:39:50 apevec, thx 14:07:17 tosky, that's what mrunge was looking at, afaik everything should be under /dashboard/ 14:10:11 ping 14:10:25 heh...trying to trigger the bot 14:15:37 ayoung: If you want to trigger the Fedora Zodbot, you can do: .localtime ayoung (and a bunch of other useful commands) :-) 14:17:08 kashyap, ping 14:17:08 ayoung: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/ 14:17:12 there we go 14:17:40 haha, zodbot the Enforcer 14:17:48 ayoung: :-0 14:18:03 ayoung: Ah, someone in Fedora land has configured the bot :-) 14:18:28 jruzicka: Yep, Ralph Bean (IIRC) & co from Fedora Infra did that 14:18:29 kashyap, let me see if I can get that working in #openstack-keystone 14:19:03 ayoung: It won't work. 14:19:23 ayoung: Because, Zodbot (a flavor of Meetbot) is run by Fedora. 14:19:52 kashyap, I'm all for sane conventions ;) 14:20:49 Yup :) 14:21:00 kashyap, how can I add zodbot to another room? 14:21:42 ayoung: Zodbot is run by Fedora infra. Let me see if there's a plugin that OpenStack infra folks can use 14:22:04 ayoung: I.e. you can't add that to channels run by OpenStack folks, as the bots there are run by OpenStack Infra folks. 14:27:03 apevec: did you ever grant me permissions on ironic? 14:27:17 or do i need to bug athomas ? 14:27:23 mburned, hmm, I thought I did 14:27:29 apevec: you might have 14:27:36 i just don't remember 14:27:42 or not... 14:27:58 mburned, your FAS is? 14:27:58 kashyap, but I am both 14:28:05 apevec: mburns72h 14:29:00 mburned, ok, gave you all acls, athomas can revoke if he doesn't like it :) 14:29:16 mburned, or he can hand it over to you 14:29:16 apevec: i feel like he would be more likely to remove his own... 14:29:31 * athomas hovers his finger over the "revoke" button 14:29:32 athomas, it's easy: https://admin.fedoraproject.org/pkgdb/package/openstack-ironic/ 14:29:54 athomas, and click +Give Package 14:30:02 apevec, Thanks 14:36:24 ayoung: There we go, the plugin for that -- https://github.com/fedora-infra/supybot-fedora/commit/1ba62ced08487fe4dcc8b5040c8fc64ae3b8ce0f 14:45:09 you guys 14:46:05 number80: ping 14:46:05 number80: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/ 14:46:06 regebro: is telling me we need to update katello-agent 14:46:11 works \o/ 14:46:55 to get satellite integration to work in the manager 14:47:34 Yeah, it needs to be katello-agent-1.5.3-6.el7sat or later. (And there are later versions) 14:47:52 But they also need other packages updated. I don't know if that matters. 14:49:02 regebro, is that in centos base? 14:49:53 regebro, hewbrocca, wait, this is for RHEL-only? 14:50:05 apevec: I have no idea what I'm doing, so I don't understand that question. 14:50:21 apevec: I have no idea if this is RHEL only or not. 14:50:34 But it is RHEL, yes.
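(Circling back to the FontAwesome glitch: tosky's vhost tweak as a shell sketch. Alias is valid at server scope, so a plain append works, though placing the directive inside the VirtualHost block is tidier; whether the existing /dashboard/static alias can simply be replaced is still open.)
    cat >> /etc/httpd/conf.d/15-horizon_vhost.conf <<'EOF'
    Alias /static "/usr/share/openstack-dashboard/static"
    EOF
    systemctl restart httpd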
14:50:36 hewbrocca said "satellite integration to work in the manager" 14:51:28 hewbrocca, regebro for RDO that means make RDO work with whatever the upstream is for Sat6, right? 14:51:45 so is there such a thing? 14:52:42 apevec: Your questions swoosh above my had. Such a thing as what? 14:52:51 s/had/head 14:53:33 upstream project for Sat6 14:54:25 I'm not sure that matters. The bug is not one of communicating with Satellite. 14:55:01 It's that another package has been updated so katello-agent's package upload fails with an import failure traceback. 14:55:19 So we need, on the RHEL/RDO side, to have a later version. 14:55:44 slagle: ^^^ 14:55:47 apevec: Bug in question https://bugzilla.redhat.com/show_bug.cgi?id=1146292 14:56:04 hewbrocca: katello-agent is part of rdo 14:56:09 err, isn't part of rdo 14:56:13 we don't update that 14:56:39 is the update already pushed live to rhel? 14:56:44 ahh, of course 14:56:46 slagle: no. 14:56:47 and is that sync'd to the satellite in question? 14:57:08 that's what would be needed i'd think 14:57:11 at least, I don't think so. 14:59:04 slagle: yum claims that 1.5.3-5.el7sat is the latest version in rhel-7-server-rh-common-rpms 15:00:04 ok 15:02:30 But I don't know how to double check that. 15:04:52 slagle: Hm. Actually not. And yes. This is confusing. 15:05:46 apevec, social: I finally managed to get a working rhel based packstack going. Took less time than it seems: 11m3.616s real time with only about 2.5 minutes using the cpu. The subscription trouble I was having was because of an incomplete account. 15:05:50 regebro, that is indeed the latest published NVR ftp://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RH-COMMON/SRPMS/ 15:06:24 apevec: OK, cool, thanks. 15:07:27 apevec: Ah, no look, there are two: 1.5.3-5 and 1.5.3-7. 15:08:10 ah, yes, need to scroll down :) 15:08:22 so -7 should be the latest 15:08:43 apevec: So for some reason on my overcloud node, it only finds 1.5.3-5. 15:08:45 looks like your satellite is not up to date? 15:09:15 apevec: Ah, ok, maybe that's it? 15:09:30 I don't see another explanation 15:09:54 but I also won't claim I know anything about Sat6, esp. not your instance 15:10:13 OK, let's see if I can figure out how to make it up to date. :) 15:13:19 morganfainberg, ping 15:13:19 ayoung: https://blogs.gnome.org/markmc/2014/02/20/naked-pings/ 15:13:59 feh, wrong, that install failed, back with more numbers in a bit 15:15:57 cdent, if on RHEL, make sure to enable optional and extras repos 15:16:12 apevec: yeah, did that 15:16:20 I had forgotten to kill NetworkManager 15:16:29 was that 7.1 ? 15:16:39 yeah 15:16:50 hmm, NM issues are supposedly fixed in 7.1 15:16:52 ajo, ^ 15:17:18 i'll dig deeper, it was a cinder problem but the failure moaned about NM 15:17:24 * cdent gets into logs 15:18:32 apevec: it's not finding python-cheetah 15:24:49 cdent, I thought that killing net manager was old advice. We should not need to do that. nmcli can do everything we need. 15:25:22 I was surprised to see we still post that on the setup page 15:26:45 ayoung: it also shows up in the summary message 15:29:32 apevec, I believe 7.1 should be good, from what I know 15:29:55 it has passed automation testing for the issues we knew about 15:32:46 cdent, which summary message? 15:33:02 ayoung, yeah, lots of obsolete info in the wiki ...
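(For the record, the "kill NetworkManager" step cdent applied, which per the discussion above is old QuickStart advice that should no longer be needed on 7.1, is essentially:)
    systemctl stop NetworkManager
    systemctl disable NetworkManager
    systemctl enable network
    systemctl start network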
15:33:16 apevec, need to clean that up before the next test day 15:33:23 apevec: at the end of packstack 15:33:41 https://www.rdoproject.org/QuickStart has it 15:33:47 and that is linked from all the test cases 15:33:51 "Warning: NetworkManager is active on 10.0.0.3. OpenStack networking currently does not work on systems that have the Network Manager service enabled." 15:34:00 It also says Fedora 20 15:34:14 we need to stop doing test days, and start doing continuous testing...hmmmm 15:34:40 cdent, what log was that in? 15:34:45 social, ajo ^ shall we drop that in packstack? 15:34:50 cdent, I never read that :) 15:34:58 :) 15:35:12 ayoung, console output from packstack 15:35:20 ayoung: that was in the output at the end of a packstack --allinone that failed (and for which I had not disabled NM) 15:35:30 ayoung, Fedora repos are not ready for Kilo, test day is EL7 only 15:35:46 NM is just a warning 15:35:47 cdent, what's the failure? 15:35:54 cdent: what failure did you have? 15:36:00 social, ajo says it should work on 7.1 15:36:08 so let's remove warning 15:36:18 and if NM still doesn't work, file a bug 15:36:22 against NM 15:36:26 apevec: no no no no, we already removed it once and got serious backlash 15:36:28 the failure had nothing to do with NM, it was from installing openstack-cinder, python-cheetah could not be found: 15:36:34 apevec: I will keep the warning for a while 15:36:35 apevec, now that we have delorean, we should be doing an automated build and install for F21/F22. 15:36:43 I'll talk with infra about that 15:37:09 apevec: Well, I think Satellite doesn't update katello-agent, because to update katello-agent katello-agent needs to work. :) 15:37:50 apevec I've got optional and extras enabled and I can see python-cheetah in the ftp.redhat.com SRPMS but doing a yum search I got nothing 15:37:53 but I'm not sure 15:38:12 cdent, hmm, lemme see where it should be from 15:54:24 cdent, on centos7 python-cheetah is in extras 15:55:58 apevec I've got rhel-7-server-extras-rpms and rhel-7-server-optional-rpms enabled and no python-cheetah available 15:59:00 cdent, bah, so it went to rh-common 16:00:02 jeebus, how are we supposed to keep track of all this, and how can we search for things we know we should have but can't find? 16:01:44 and is there a way to do a sort of --enable="the good stuff"? :) 16:02:52 cdent, yeah, that's now next, try grepping for common in /etc/yum.repos.d/redhat.repo 16:03:06 not sure what's the exact CDN repo name 16:03:22 I've got it now 16:03:33 subscription-manager repos --enable=rhel-7-server-common-rpms 16:04:20 yep, that one, so I'll add that one too in the rdo-release update 16:05:03 sorry apevec, that's not quite the right line 16:05:19 needs an 'rh-' in there 16:05:36 ok 16:06:12 * cdent packs his stack one more time 16:06:38 apevec: do you know what needs python-rdflib ? I think I have everything installed, but nothing's requiring it 16:07:39 lon, lemme check my notes 16:07:44 apevec: cool
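(Summing up the repo hunt: on a subscribed RHEL 7.1 host, the set that finally covered everything was roughly the following, rh-common being the corrected name from the exchange above.)
    subscription-manager repos \
      --enable=rhel-7-server-optional-rpms \
      --enable=rhel-7-server-extras-rpms \
      --enable=rhel-7-server-rh-common-rpms
    # verify which repo actually carries a package
    yum provides python-cheetah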
16:08:07 ayoung, which infra do you mean re. automated build/install w/ Delorean ? We do have CI jobs 16:11:50 apevec, I mean Fedora, doing automated installs and builds of the RDO packages, so we don't have a long lag 16:12:15 ayoung, that's what we have Delorean for 16:12:30 but not sure this first Fedora infra 16:12:40 apevec, the test day should not be RHEL/Centos specific. F22 should be tested already 16:12:41 s/first/fits/ 16:12:56 sure...we'll see what we can figure out 16:13:08 ayoung, sure, I just didn't have time to prepare Fedora properly, 16:13:12 we need some additional checks for Federation and SAML 16:13:15 a few package reviews are pending 16:13:19 post install config 16:13:38 that mirror what people are doing "for real" when tying in with existing systems 16:13:50 ayoung, that pysaml is one of them, I tried to package it and hit an upstream issue, so the current build has federation disabled 16:14:01 yeah.... 16:14:04 that is a PITA 16:14:26 https://bugzilla.redhat.com/show_bug.cgi?id=1168314#c2 16:14:37 I need to get back to this 16:15:00 last I checked, upstream just acknowledged it but no action yet 16:16:32 lon, so I'm not sure where rdflib is coming from... 16:17:02 ok 16:17:11 it was in the cbs, but it didn't get pulled in by anything 16:18:46 lon, ah found it: https://docs.google.com/spreadsheets/d/1ZckiJtvca2-GRvnCVMPz4uNjHuUFvRFdKZxIwY6Glec/view#gid=0 16:19:05 lon, notes say "for python-selenium" 16:19:25 lon, but we're not running tests requiring that, so we can skip it 16:19:40 ok cool 16:19:50 it's a BR for horizon and django 16:20:06 weird 16:20:07 django built 16:20:32 yeah, it's a BR inside if 0%{fedora} 16:21:59 that's right I had to fix it 16:22:02 cool thanks 16:28:59 apevec is it known that horizon is missing stylesheets? or is it just my install of kilo? 16:29:21 it should not be missing stylesheets 16:30:27 rook, but you might've hit https://bugzilla.redhat.com/show_bug.cgi?id=1219006 - does httpd restart fix it? 16:30:38 apevec trying 16:35:00 apevec: yup, hit that bz. 16:35:03 rook, easy to fix 16:35:18 rook, please add a comment with the steps you did 16:35:38 rook, mrunge was not able to reproduce, so more cases are needed to track it down 16:35:44 rook, I think you need to regen the compressed stylesheets with WEBROOT set, if it is the thing I remember 16:37:29 apevec: done. 16:37:42 thanks for the tip! 16:37:49 ayoung, it shouldn't be, css is now compressed on httpd startup, with your systemd suggestion 16:38:04 ayoung, so the issue is why httpd is not restarted in some cases 17:18:27 rook, are you still there? 17:18:35 yessir 17:19:27 did you install keystone on httpd? 17:20:08 rook, or, let's collect data in a different way 17:20:13 rook, you used packstack= 17:20:16 ? 17:20:29 yes packstack on rhel71 17:20:49 and it's an all in one install? 17:21:07 negative, one controller, one compute. 17:21:07 or multi node? 17:21:12 ok 17:21:28 and keystone and horizon both sharing httpd? 17:21:30 mrunge: keystone kicked off httpd 17:21:32 yes 17:21:48 ok, 17:22:00 packstack ran fine, or did you have issues there? 17:22:31 ran fine - hit one issue but it was due to my repos. once i got the repo fixed, good to go. 17:22:41 (said ran fine, then remembered :) ) 17:22:44 so, you had to run it twice? 17:22:47 yes 17:23:25 ok, thank you 17:23:36 np, that it? 17:23:59 unfortunately, I don't see a pattern yet 17:24:10 but that's probably it 17:24:12 ah
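(An aside on the python-rdflib question: repoquery from yum-utils answers "what pulls this in" directly, with the caveat that a build-time requirement, as rdflib turned out to be, only shows up when querying source packages.)
    yum -y install yum-utils
    # runtime reverse dependencies
    repoquery --whatrequires python-rdflib
    # include indirect requirers too (slower)
    repoquery --whatrequires --alldeps python-rdflib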
17:25:10 one thing: rook, you installed from scratch, or was httpd already running? 17:34:23 clean host 17:38:16 thanks again 18:31:52 mrunge: to run tempest against the deployment, it seems that I can install via yum, but then I need to go to /usr/share/openstack-tempest-kilo/ and install tempest ? 18:58:45 rook, sorry, I can't answer that 18:58:56 mrunge - nm - ran through the wiki 20:16:44 Trying kilo test horizon. It's mostly working, but no theme bits... I did install openstack-dashboard-theme. any ideas? 20:26:12 kfox1111, from where did you install? 20:29:11 http://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/el7/ 20:29:38 kfox1111, did you restart httpd after installing openstack-dashboard-theme ? 20:29:50 yes. 20:30:12 not seeing much javascript/css in the rendered html. 20:30:29 hmm, wait 20:30:35 just did a yum clean all; yum upgrade. so it's the newest stuff from the repo. 20:30:52 and no systemctl restart httpd? 20:31:11 or did you have delorean installed earlier ß 20:31:14 ? 20:31:25 trying to build a docker image... running /usr/sbin/httpd -DFOREGROUND 20:31:40 nothing interesting in access/error logs. 20:32:28 kfox1111, seriously, there is a systemd snippet hooked into httpd 20:32:35 which should be executed 20:33:03 for the dashboard? 20:34:25 sure 20:35:00 I still wanted to blog about that, but didn't find the time 20:35:42 in /usr/lib/systemd/system/httpd.service, I only see ExecStart, ExecReload and ExecStop. ExecStart's basically what I'm running. 20:35:58 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND 20:36:59 it's in /usr/lib/systemd/system/httpd.service.d 20:37:18 oh. interesting... 20:37:50 thanks. :) 20:38:52 Hmm. Keystone is failing to start back up on reboot. 20:39:08 Yup. that did the trick. :) 20:39:20 kfox1111, ok, great 20:39:44 did I mention systemctl restart httpd ? 20:39:50 I learned a new thing about systemd today. :) 20:39:55 once or twice? ;-) 20:40:13 I had looked at httpd.service. I didn't know httpd.service.d was a thing. :) 20:40:42 yes, that's totally cool 20:40:49 because it's pluggable 20:40:53 I'm still very curious how the docker/systemd thing will play out... 20:40:55 yeah. 20:41:26 I'm kind of surprised the default centos7 container doesn't just have systemd running. then you could just yum install httpd and launch the container and it would all "just work" 20:41:34 it's kind of a pain as is. :( 20:42:10 I would expect lots of pain with docker etc. coming in 20:42:27 esp. when you have socket activated applications 20:42:39 not running as long as you don't talk to their socket 20:43:05 should still work though, since systemd would just launch it as needed in the same container? 20:43:25 It's when you want to do everything in microservices that it becomes hard again. :/ 20:45:23 hmm... where is the region selector supposed to show up in the kilo theme? 20:45:43 yes it should work, but you may not necessarily discover containers 20:45:53 ah the region selector is broken currently 20:46:01 :( ok. 20:46:09 I fixed it for juno and icehouse recently 20:46:45 please file a bug, so that I don't forget it 20:47:09 and the tenant selector too? I'm not seeing that either (probably the same thing these days?) 20:47:12 k. 20:48:06 ugh, tenant selector too? I didn't see that yet 20:48:22 I don't need much text, it's more a reminder kfox1111 20:48:42 k. 20:48:54 thanks! 20:48:58 np. 20:48:59 https://bugzilla.redhat.com/show_bug.cgi?id=1219221 20:49:11 yeah. not seeing a tenant selector.
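(On the httpd.service.d discovery above: drop-in directories let a package extend a unit without editing httpd.service itself, which is why the dashboard's compress hook runs under systemctl restart httpd but not under a bare httpd -DFOREGROUND in a container.)
    # list and read the drop-ins extending httpd on this box
    ls /usr/lib/systemd/system/httpd.service.d/
    cat /usr/lib/systemd/system/httpd.service.d/*.conf
    # a container that bypasses systemd has to run whatever those snippets
    # do (e.g. an ExecStartPre step) before exec'ing httpd itself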
20:52:51 have you deprecated the redhat theme and are just making it the default? 20:53:09 or should I try and switch the theme back to the stock horizon one? 20:53:35 kfox1111, the stock theme is the default in rdo 20:53:51 the red hat theme is the default in red hat openstack 20:54:11 odd. cause I'm seeing the "Red Hat Enterprise Linux Openstack Platform" one. 20:54:32 kfox1111, where do you see that? 20:54:37 all I did was yum install openstack-dashboard openstack-dashboard-themes and run the couple of hooks. 20:54:49 yes, sure 20:55:02 it's all open source 20:55:16 so you *can* install it 20:55:30 yeah. but I haven't specified it anywhere. 20:55:39 unless openstack-dashboard-themes is the redhat one. 20:56:04 openstack-dashboard-theme is the red hat one 20:56:27 remove it, restart httpd and you have plain upstream 20:56:29 ah. ok. 20:56:31 thanks. 20:56:45 you're welcome 20:59:19 ok. I see the stock login now... 20:59:28 logging in gets me: ValidationError: [u"'90ece9f251f9956738a973970812a66543805fa1ec3adc7dedf8b34ea0ae9c7d' value must be an integer."] 20:59:37 on kilo? 20:59:57 kilo horizon, juno everything else. 21:00:11 is that no longer supported? 21:00:11 yes, known issue, I'm working on that 21:00:15 ok. 21:01:04 kfox1111, that's https://bugzilla.redhat.com/show_bug.cgi?id=1218894 21:01:54 hmm... k. let me try that workaround. 21:06:10 ah. that workaround did work. thanks. :) 21:06:34 interesting theme tweaks. :) 21:07:05 wow. documentation on heat resources. sweet. :) 21:13:22 hmm... with the stock theme, I see the tenant selector, but if I try and switch, I just get an unauthorized: unable to retrieve usage info 21:14:13 oh... wait.. I think I know what that is. 21:15:00 hmm.... something new... "Project switch successful for user "kfox". 21:15:05 "Could not delete token" 21:15:31 it may be the "my user's token is too big" thing though... 21:30:50 ugh token too big? 21:31:04 kfox1111, are you storing your session somewhere? 21:35:34 haven't changed it from the defaults. I think on the production system I had to switch to memcache because of the session being too big. 21:36:27 hmm... "admin -> Metadata Definitions" 21:36:32 not really clear what that's for.... 21:36:51 must be a glance thing... 21:37:09 yes it is 21:37:44 kfox1111, if you're seeing session size issues, you might want to switch to database-backed sessions 21:38:25 kfox1111, https://docs.djangoproject.com/en/1.8/topics/http/sessions/#using-database-backed-sessions 21:39:12 ah. cool. thanks. 21:47:41 have you tried the new instance wizard yet? 21:47:49 I just get a blank window. 21:50:30 kfox1111, known issue 21:50:42 I just discussed that with upstream 21:51:41 ok. thanks. 21:53:13 are there any plans to slide in the designate plugin to horizon? 21:53:52 One of the main reasons I'm trying to build a container for horizon is so I can slide in some of the plugins I may need. Some of which aren't packaged yet. :/ 21:55:49 some of the bits I need to slide in only work with kilo though, and I don't want to have problems with kilo/juno on the same controller. :/ 21:56:35 kfox1111, that's to be discussed in Vancouver 21:57:09 but for reference, I have a juno environment here, but using master horizon 21:57:15 and this just works 21:57:29 :) 21:57:37 I'm planning on being there. 21:57:48 are you using venv's then? 21:57:56 nope 21:58:01 plain packages 21:58:16 something reproducible :D 21:58:44 using all the juno python clients then? 21:59:03 it depends. 21:59:15 or build a whole parallel set of rpms? 21:59:34 oh noo! no parallel rpms. that's a mess. 22:00:13 yeah.
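(A sketch of the database-backed-sessions switch mrunge suggests above, using the usual RDO paths; memcached, as kfox1111 used in production, is the other common option.)
    # point horizon at Django's db session engine
    echo "SESSION_ENGINE = 'django.contrib.sessions.backends.db'" \
        >> /etc/openstack-dashboard/local_settings
    # create the session table (on Django 1.8 'manage.py migrate' also works)
    python /usr/share/openstack-dashboard/manage.py syncdb
    systemctl restart httpd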
22:00:17 for my horizon environment: nearly kilo python-...clients 22:00:25 and horizon, of course 22:00:48 but horizon is not critical at all for interoperability 22:00:53 so you're semi-upgrading the clients but not breaking juno servers? 22:01:16 you can separate horizon from the rest 22:01:53 so you can have horizon from kilo (including deps) and the rest from juno 22:02:05 of course, nobody will support that 22:02:22 in fact, it works quite well 22:02:24 but doesn't that require a whole parallel set of rpms? or are you just bundling all the horizon deps with horizon? 22:02:40 uhm, neither nor 22:02:44 two machines 22:02:53 two separate installs 22:02:53 ahhhh.. ok. 22:03:13 but you can install your horizon in a vm in your openstack cloud 22:03:21 ;-) 22:03:43 For testing, I have been just putting horizon in its own vm, so I can launch as many test horizons as I want, and they won't stomp on each other. 22:03:43 exactly. :) 22:03:51 was just trying out docker to see if I can make it a little closer to the hardware. 22:04:37 for a lot of things, docker seems... overly complicated. but running a web server is kind of what docker was written for. 22:06:47 worst thing I heard about a docker image was 17 versions of glibc bundled. now look for a glibc-related issue. which service is using which glibc? 22:07:49 yeah. I was hoping the docker hub would help since it can rebuild containers. 22:07:59 but it doesn't track dependencies below the source repo. 22:08:05 so you never know if it's out of date. :/ 22:08:11 it's a really big problem. :/ 22:09:17 sure! I'm kind-of old school. I would like my images updated automatically 22:09:20 I'm fairly convinced that docker doesn't make app deployment any easier. it just pushes the hard parts off to other places. 22:09:27 security for one. 22:09:32 yupp 22:09:58 well, it delivers pre-configured *something* directly to the user 22:10:04 to try things out 22:10:19 I do really like the idea of being able to go back to an older image if the new one breaks for some reason. But you gotta pay for that by needing infrastructure for building/updating your containers. 22:11:27 if docker hub would provide a way to periodically rebuild images, and diff the contents and not replace the old with the new if nothing changed, then it might get much much closer. 22:13:03 I had the same issue with disk-image-builder and making openstack images. 22:13:32 had to write a jenkins job that built them, and a way to fingerprint all the rpm's that went in, and only update if the fingerprint changed. 22:13:53 couldn't fully automate it though without the glance artifactory stuff, which still isn't done. :/ 22:15:32 sounds useful to have 22:15:53 would you share your scripts? 22:16:21 a bit too rough at this point, but yeah. I can look at cleaning them up and getting them out there. 22:16:52 thank you, that's awesome! 22:17:12 I think others will benefit as well 22:17:32 I'm going to try and do the same for docker images. periodically pull a docker image, do a yum update, see if anything changes, and if it does, trigger a rebuild. 22:48:09 oh... this rdo packaging document is nice. :) 22:59:48 (05:07:51 PM) kfox1111: yeah. I was hoping the docker hub would help since it can rebuild containers. 22:59:48 does this help ? 22:59:48 https://docs.docker.com/docker-hub/builds/#repository-links 23:08:58 ccrouch: didn't know about that, neat! 23:10:16 it's obviously not going to help with your rpm dependencies getting out of date, but it can at least track your base docker image
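(The rebuild-when-packages-change loop kfox1111 describes might look roughly like this; the image name and rebuild trigger are illustrative, and a real version would run from cron or jenkins.)
    #!/bin/bash
    # ask yum inside the current image whether updates exist;
    # 'yum check-update' exits 0 when none, 100 when updates are pending
    docker pull myorg/horizon:latest
    if docker run --rm myorg/horizon:latest yum -q check-update; then
        echo "image is current, nothing to do"
    else
        echo "updates available, triggering a rebuild"
        # e.g. hit the docker hub build trigger URL or kick a jenkins job
    fi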
23:33:38 My AIO setup worked great. When I add a new CentOS7.1 compute node into the mix, Packstack dies at “PuppetError: Error appeared during Puppet run: 192.168.80.31_prescript.pp 23:33:38 Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list openstack-selinux' returned 1: Error: No matching Packages to list” 23:33:55 I can’t manually install it either as it says no packages are available 23:56:42 Resolved it for now: Copy the /etc/yum.repos.d/rdo-testing.repo file from the AIO to the 2nd node and re-run Packstack 00:21:28 trying to load the murano horizon into the kilo test rpms. 00:21:39 ran into from django.contrib.formtools.wizard import views as wizard_views, ImportError: No module named formtools.wizard 00:22:00 formtools.wizard's not shipped by python-django-1.8.1-1.el7.noarch 00:22:21 is this intentional? 00:23:48 hmm.. interestingly, python-django-1.6.11-1.el7.noarch provides formtools. so it seems like it was removed. 00:32:18 ah... https://docs.djangoproject.com/en/1.8/ref/contrib/formtools/#formtools-how-to-migrate 05:04:37 heh, I'm pleased with my ghetto OpenStack dynamic DNS method 05:05:17 on instance boot, a script pulls the hostname from the metadata service, constructs an 'nsupdate' command to update A/PTR records on our internal BIND zone 05:05:39 somewhat insecure, but it's only our own internal DNS zone 06:54:39 morning all! 07:57:28 #endmeeting
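(For reference, the dynamic DNS trick described just before the close boils down to something like this on instance boot; the zone, nameserver, and lack of a TSIG key are illustrative, and as noted above an unauthenticated update is only sane on a private zone.)
    #!/bin/bash
    # identity from the EC2-style metadata service (assuming the
    # returned hostname is unqualified)
    HOST=$(curl -s http://169.254.169.254/latest/meta-data/hostname)
    IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
    # push the A record to the internal BIND server; the PTR record is
    # updated the same way against the reverse zone
    nsupdate <<EOF
    server ns1.example.internal
    zone example.internal
    update delete ${HOST}.example.internal A
    update add ${HOST}.example.internal 300 A ${IP}
    send
    EOF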