20:59:08 <rbowen> #startmeeting
20:59:08 <zodbot> Meeting started Tue Sep 30 20:59:08 2014 UTC.  The chair is rbowen. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:59:08 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:59:28 <rbowen> Starting logging before I head out for the day, so that any testing activity is captured across all participating timezones.
20:59:47 <rbowen> #topic RDO Juno Milestone 3 Test day
20:59:58 <rbowen> #link https://openstack.redhat.com/RDO_test_day_Juno_milestone_3
21:00:30 <rbowen> #link https://openstack.redhat.com/Workarounds
21:21:03 <rook> Veers: so, by default the compute nodes will use local storage.
21:21:12 <rook> /var/lib/nova
21:37:05 <Veers> rook: awesome thanks.. FYI that parameter in the install wizard does work but the puppet run errors out bringing down the interface
21:37:11 <Veers> so I manually have to ifup br-ex and eth1
21:37:42 <Veers> when I've appropriately ruled out operator error I'll file a BZ
21:38:33 <rook> roger, good to know.
21:38:55 * rook is curious if it is because eth1 is being used for both tenant networks and br-ex
21:38:58 <rook> would be good to understand
21:39:45 <Veers> it's possible
21:39:58 <Veers> I want to test on a "best practices" configuration
21:40:43 <rook> IMHO having a nic for tenant (tunneling) and br-ex would be a good step.
21:40:48 <Veers> woah I just kernel panic'd a networker
21:40:53 <rook> oh boy
21:41:19 <Veers> hahaha
21:41:33 <Veers> was going to say "this guy works but I'm getting DUP!" on a ping
21:44:08 <Veers> and should I expect to have to disable selinux on my networker?
22:09:19 <jidar> rook!
22:09:22 <jidar> ROOK!!!
22:12:57 <rook-roll> jidar yo
07:01:48 <apevec> mrunge, hi, I've added some more comments in bz 1148167 please have a look
07:03:12 <mrunge> apevec, yes, I just saw them, thanks
07:03:45 <apevec> manage.py compress seems to work in  horizon-2014.1.2/ checkout
07:04:01 <bkopilov> Hi, when installing packstack with neutron enabled, we got a warning: Warning: NetworkManager is active on 10.35.167.34, 192.168.2.22. OpenStack networking currently does not work on systems that have the Network Manager service enabled.
07:04:31 <bkopilov> is it harmless ?
07:04:37 <mrunge> apevec, could you please do: yum install python-XStatic-Font-Awesome
07:04:41 <apevec> bkopilov, mostly :)
07:05:00 <apevec> mrunge, I have that
07:05:07 <mrunge> sigh
07:05:10 <bkopilov> apevec, i see an issue with cinder LVM, we can not delete snapshots
07:05:27 <apevec> mrunge, python-XStatic-Font-Awesome-4.1.0.0-1.fc21.noarch on my F20 laptop
07:06:06 <apevec> bkopilov, re. NM warning - is that from packstack or neutron?
07:06:12 <mrunge> apevec, and rpm -q fontawesome-fonts-web ?
07:06:15 <apevec> I thought packstack just disables it
07:06:24 <mrunge> apevec, that warning is from packstack
07:06:27 <apevec> fontawesome-fonts-web-4.0.3-1.fc20.noarch
07:06:31 <bkopilov> apevec, its from packstack install
07:07:02 <mrunge> and packstack fails, if you also have firewall-config installed
07:07:07 <mrunge> (on f20)
07:08:39 <apevec> bkopilov, I'd file that as a bug against packstack to error out instead of warning
07:14:21 <mrunge> apevec, it looks like we'd need fontawesome-fonts from f21, and python-django-pyscss as well from f21
07:22:38 <apevec> mrunge, getting further with fontawesome-fonts-web-4.1.0-1.fc21 and python-django-pyscss-1.0.3-1.fc21:
07:23:12 <mrunge> apevec, you'll need fontawesome-fonts from f21 as well
07:23:29 <apevec> http://fpaste.org/138023/
07:23:36 * mrunge clicks
07:23:39 <apevec> mrunge, yeah, it's a hard dep
07:24:12 <apevec> mrunge, don't, old settings.py
07:25:16 <mrunge> apevec, I was able to restart httpd after installing newer deps and it worked
07:25:18 <apevec> mrunge, ok, do click, the same with refreshed local_settings.py
07:25:26 <mrunge> darn
07:25:36 <mrunge> reinstalling here
07:26:05 <mrunge> at least graphviz issues with packstack are gone now.
07:26:09 <apevec> mrunge, maybe more than  COMPRESS_OFFLINE=True needs to be changed in local_settings.py ?
07:26:24 <apevec> mrunge, ah, where was that coming from??
07:26:43 <mrunge> apevec, if COMPRESS_OFFLINE = False, compression will be run in the background
07:26:59 <mrunge> about graphviz: I have no clue
07:33:46 <mrunge> apevec, you need to call ./manage.py collectstatic first, then ./manage.py compress in your master-patches checkout
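[editor's note] The ordering mrunge describes can be sketched as a short shell sequence (the checkout path is hypothetical; the point is that collectstatic must run before compress):

```shell
# run from a horizon checkout (path is illustrative)
cd ~/horizon
./manage.py collectstatic --noinput   # gather static assets first
./manage.py compress                  # offline compression only works once assets are collected
```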
07:34:06 <apevec> mrunge, aah, thanks
07:35:16 <mrunge> apevec, could you please copy the newer deps to the appropriate dir?
07:36:04 <apevec> mrunge, ok, manage.py compress worked after collectstatic
07:36:13 <mrunge> great, glad to hear
07:36:49 <apevec> mrunge, so we need fontawesome 4.1.0 + django-pyscss 1.0.3 for f20?
07:36:58 <mrunge> apevec, exactly
07:37:17 <mrunge> and I assume, the same is true for el7
07:37:32 * mrunge checks, what's available there
07:37:42 <apevec> el7 is good
07:37:49 <mrunge> fine
07:38:14 <apevec> django-pyscss is 1.0.2 but that seems enough
07:38:30 <mrunge> yes, it was, when I tested that
07:38:35 <apevec> mrunge, btw rc1 adds one new xstatic
07:38:43 <mrunge> yes, I saw
07:38:57 <mrunge> it's already packaged, but not submitted for review yet
07:39:01 <apevec> mrunge, will that stream of xstatic never end? :)
07:39:18 <mrunge> apevec, well, that's good and bad...
07:39:59 <mrunge> at least we removed all bundled stuff (nearly finished)
07:45:29 <apevec> mrunge, re. IOError: [Errno 21] Is a directory: u'/usr/share/openstack-dashboard/static/bootstrap/scss/bootstrap' - is that known issue?
07:45:55 <mrunge> apevec, it's known upstream
07:46:31 <mrunge> apevec, https://review.openstack.org/#/c/123166/
07:48:00 <mrunge> apevec, that error was produced using python-django-pyscss-1.0.2 or 1.0.3?
07:52:03 <apevec> mrunge, not sure anymore, but now it works with 1.0.3
07:52:26 <mrunge> apevec, thanks. let's have an eye on that
07:52:36 * mrunge will have an eye on that
07:57:54 <armaan01> Hello, Anyone working on quickstack modules ?
08:05:16 <arif-ali> just looking at packstack allinone on juno, I presume even in centos7 we need to go into selinux permissive mode?
08:05:46 <mrunge> arif-ali, that's my understanding
08:06:18 <tshefi> anyone run into mirror issues with: yum -d 0 -e 0 -y install erlang
08:07:18 <arif-ali> no, just checking; making sure that the test scenario is there for centos7, as the permissive mode only mentions fedora
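[editor's note] The permissive-mode workaround being confirmed here is typically applied like this (a sketch; needs root, and the config path is the stock location):

```shell
# switch SELinux to permissive for the current boot
setenforce 0
# make it persistent across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```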
08:08:21 <ihrachyshka> nmagnezi: hey. back in the office and starting to look at your mysql connection failure issue
08:08:34 <nmagnezi> ihrachyshka, thank you ihar
08:09:09 <nmagnezi> ihrachyshka, it currently works (I restarted neutron-server), let me know if you wish to reproduce
08:10:04 <ihrachyshka> nmagnezi: ehm... I expected to log in and inspect the failure state :)
08:10:39 <nmagnezi> ihrachyshka, you can check the logs.. and can trigger that failure in a few minutes
08:16:37 <mflobo> Is this the channel to use for the Test Day Juno Milestone 3?
08:17:09 <mrunge> mflobo, yupp
08:17:31 <mflobo> mrunge, thanks, I'll start in a few hours
08:22:24 <arif-ali> first issue of the day from me, with allinone install
08:22:35 <arif-ali> glance: error: argument <subcommand>: invalid choice: u'services' (choose from 'image-create', 'image-delete', 'image-download', 'image-list', 'image-show', 'image-update', 'member-create', 'member-delete', 'member-list', 'help')
08:26:24 <nmagnezi> ihrachyshka, hi
08:26:34 <nmagnezi> ihrachyshka, issue reproduced
08:26:42 <nmagnezi> ihrachyshka, you may connect now
08:30:21 <ihrachyshka> nmagnezi: on my way
08:31:47 <apevec> ihrachyshka, nmagnezi - what are steps to reproduce? Is it after reboot? If so, maybe systemd service dependencies could be a workaround?
08:32:06 <apevec> although, services should be robust and keep trying to reconnect...
08:33:43 <ihrachyshka> apevec: nmagnezi: in systemd logs: neutron-server.service: main process exited, code=exited, status=1/FAILURE
08:34:12 <nmagnezi> apevec, yes, it's after the reboot
08:34:15 <ihrachyshka> seems like trace goes further till sys.exit(main())
08:34:22 <ihrachyshka> so it just exits on any mysql connection failure
08:34:57 <arif-ali> The provision_glance_pp.log pasted in http://pastebin.com/P4Br3Vgd
08:36:21 <mrunge> arif-ali, same here. could you please file a bug?
08:38:35 <nmagnezi> ihrachyshka, actually, before the reboot i noticed the same issue in cinder
08:38:37 <nmagnezi> apevec, ^^
08:39:07 <vaneldik> Good morning. The nova-api issue also applies to CentOS7. Workaround works as well.
08:39:45 <ihrachyshka> nmagnezi: hm, interesting. so it seems that neutron starts before mysql is ready, and crashes without retrying
08:40:33 <nmagnezi> ihrachyshka, I presume same goes for cinder..
08:40:48 <ihrachyshka> is it on the same machine? can I check logs there?
08:41:11 <nmagnezi> ihrachyshka, yes it is
08:41:19 <nmagnezi> ihrachyshka, check volume.log
08:41:40 <nmagnezi> ihrachyshka, it does not happen at the moment, but you'll find the trace in that log
08:41:51 <ihrachyshka> nmagnezi: yeah, looks similar
08:42:00 <ihrachyshka> though there is also another traceback there, cinder guys may be interested
08:42:05 <ihrachyshka> 'NoneType' object has no attribute 'split'
08:43:20 <ihrachyshka> nmagnezi: mariadb log is weird
08:43:28 <ihrachyshka> 11:24:13 - Starting MariaDB database server...
08:43:30 <ihrachyshka> then:
08:43:38 <ihrachyshka> 11:25:20 - mysqld_safe Logging to '/var/log/mariadb/mariadb.log'
08:43:46 <ihrachyshka> 11:25:21 - mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
08:43:55 <ihrachyshka> what was it doing for that 1 minute?
08:44:03 <ihrachyshka> neutron crashed during that one minute
08:44:05 <nyechiel> apevec: ping
08:44:10 <ihrachyshka> apevec: ^^
08:46:58 <ihrachyshka> yeah, it seems it took mariadb > 1 minute to start, and neutron and cinder didn't wait for it to complete
08:47:08 <ihrachyshka> now, is it usual for mariadb to take so much time to start?
08:47:29 <arif-ali> Bug submitted, https://bugzilla.redhat.com/show_bug.cgi?id=1148346
08:47:34 <ihrachyshka> though whatever is the answer, neutron should wait for mariadb startup completion
08:47:49 <ihrachyshka> and it also should not crash on the first attempt failure but be more resilient
08:47:58 <ihrachyshka> so there are multiple issues I guess
08:48:46 <ajeain> mrunge: hey Matthias, did u get the same glance error?
08:48:46 <armaan01> jistr: Hi :)
08:49:07 <jistr> armaan01: hello :)
08:50:05 <mrunge> ajeain, yes, I did
08:50:19 <mrunge> ajeain, workaround was to disable demo provision
08:50:23 <ihrachyshka> nmagnezi: yeah, with neutron, we don't wait for mysql to start...
08:50:29 <ihrachyshka> the only rules are: After=syslog.target network.target
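[editor's note] A hedged sketch of the systemd-dependency workaround apevec floated earlier: a drop-in that makes neutron-server wait for mariadb (unit and path names are assumptions based on stock packaging; this only orders startup and is no substitute for retry logic in the service itself):

```ini
# /etc/systemd/system/neutron-server.service.d/mariadb.conf (hypothetical drop-in)
[Unit]
After=mariadb.service
Wants=mariadb.service
```

Run `systemctl daemon-reload` after creating the drop-in for it to take effect.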
08:51:36 <armaan01> jistr: I followed up on your suggestion yesterday. It turns out that OpenStack services and haproxy are trying to bind on the same ports. I checked the code base of the quickstack modules, but I don't see where I should make changes to make haproxy bind to a different port. https://github.com/redhat-openstack/astapor/blob/d4e87911fb25509557a2f2dbacd3dbabdef68ab5/puppet/modules/quickstack/manifests/load_balancer/amqp.pp
08:51:37 <mrunge> ajeain, Ami, are you going to file a bug?
08:51:51 <ajeain> mrunge: it is already filled: https://bugzilla.redhat.com/show_bug.cgi?id=1148346
08:51:58 <armaan01> jistr: https://github.com/redhat-openstack/astapor/blob/d4e87911fb25509557a2f2dbacd3dbabdef68ab5/puppet/modules/quickstack/manifests/load_balancer/amqp.pp
08:52:00 <mrunge> awesome!
08:54:10 <yfried|afk> nmagnezi: wassup
08:54:30 <nmagnezi> yfried|afk, it's very odd that you got this error, but i did not
08:54:50 <armaan01> jistr: # service haproxy start => Starting haproxy: [ALERT] 273/085406 (11037) : Starting proxy amqp: cannot bind socket
08:54:54 <nmagnezi> ihrachyshka, why won't it recover?
08:54:56 <yfried|afk> nmagnezi: reprovisioning. waste of time
08:54:59 <jistr> armaan01: haproxy should bind on the same port, but different IP, so there should be no collision in the end. What have you set as "amqp_vip" parameter?
08:56:01 <apevec> ihrachyshka, initial startup probably takes longer as it runs prepare-db-dir ?
08:56:19 <apevec> ihrachyshka, but yes, oslo.db should just keep retrying
08:58:48 <nmagnezi> yfried|afk, let me know how it  goes
09:01:04 <armaan01> jistr: i have set it to "192.168.122.98" , http://fpaste.org/138045/12154045/
09:01:41 <apevec> jruzicka, looks like we have too new glanceclient? http://pastebin.com/P4Br3Vgd
09:02:00 <ihrachyshka> nmagnezi: that's a question for oslo.db I guess :)
09:02:06 <jistr> armaan01: is that a physical IP or a virtual IP? it should be virtual. I'll send you a full example.
09:02:10 <apevec> jruzicka, let packstack folks know, I guess we need opm update?
09:02:46 <nmagnezi> ihrachyshka, who handles this subject in RH?
09:03:37 <jistr> armaan01: here's an example config of HA controller node http://fpaste.org/138046/12154172/
09:03:52 <arif-ali> mrunge, ajeain, added entry for the glance issue in the workarounds page
09:04:08 <mrunge> arif-ali, awesome, thank you!
09:04:19 <jistr> armaan01: the physical IPs are "pacemaker_cluster_members: 192.168.122.15 192.168.122.16 192.168.122.17"
09:04:55 <ihrachyshka> nmagnezi: I would go straight to zzzeek (aka Michael Bayer)
09:05:11 <ihrachyshka> he is the author of sqlalchemy, alembic, and a core for oslo.db :)
09:05:56 <ajeain1> arif-ali: thanks
09:06:08 <nmagnezi> ihrachyshka, well.. he's offline atm
09:06:31 <jistr> armaan01: and all the "..._vip" parameters must be different than the physical IPs, and they must also be different from each other. The only exception is that public, private and admin VIPs of *the same service* can be the same.
09:06:55 <ihrachyshka> nmagnezi: sure. he's Aussie :)
09:07:09 <ukalifon> bkopilov: did you install RDO on CentOS 7 successfully?
09:07:10 <armaan01> jistr: Here is my config: http://fpaste.org/138047/12154382/
09:07:18 <ihrachyshka> I'm going to ask on the matter in #openstack-oslo channel
09:07:40 <bkopilov> ukalifon, yes
09:08:06 <armaan01> jistr: In my setup physical ip are 192.168.122.111 and 192.168.122.112
09:08:09 <ukalifon> bkopilov: why am I getting this: https://bugzilla.redhat.com/show_bug.cgi?id=1148348
09:08:15 <bkopilov> ukalifon, i used this repo : yum localinstall -y https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
09:08:40 <ukalifon> bkopilov: thanks, I will try it
09:10:35 <armaan01> jistr: I changed the amqp port to see if it will have any effect.
09:12:18 <armaan01> jistr: After i changed it to a different port, binding happened successfully.
09:12:33 <jistr> armaan01: your ceilometer_private_vip and db_vip are the same, which shouldn't be, but it's not the cause of the problem you had
09:14:05 <nyechiel> bkopilov, ukalifon: I am not able to install all-in-one w/ CentOS. Looks like galera is missing
09:14:29 <jistr> armaan01: ok, good to know :)
09:15:41 <bkopilov> nyechiel, ok i will boot this centos and copy all repo
09:15:58 <armaan01> jistr: But there is no parameter to change other ports, like for keystone, glance-api, glance-registry, and hence binding is failing for these services.
09:16:12 <armaan01> jistr: Aha, thanks for pointing it out :)
09:16:49 <pnavarro> hi, I'm installing a JUNO packstack all-in-one, and I'm having issues when glance client is using deprecated parameters
09:18:01 <jistr> armaan01: can you paste "netstat -vltnp" and your haproxy.conf?
09:18:27 <pnavarro> for example, glance -T services -I glance -K 27f90cc042284f6a instead of glance --os-tenant-name services --os-username glance --os-password 27f90cc042284f6a
09:19:27 <arif-ali> pnavarro, it's a bug, check workarounds at https://openstack.redhat.com/Workarounds#provisioning_of_glance_does_not_work_for_demo_environment
09:20:25 <pnavarro> arif-ali, should I file the bug?
09:20:57 <pnavarro> arif-ali, sorry I just showed the bug
09:20:59 <pnavarro> thanks
09:23:06 <armaan01> jistr: haproxy.cfg http://fpaste.org/138057/14121553/
09:24:15 <armaan01> jistr: netstat -ntlp http://fpaste.org/138058/12155446/
09:27:05 <nyechiel> bkopilov: think I found the issue. wrong epel config on the host
09:30:17 <jistr> armaan01: hmm yeah the problem is that the services themselves listen on 0.0.0.0 rather than on ...111 and ...112
09:31:57 * jistr searches for clues
09:38:14 <jistr> armaan01: https://github.com/redhat-openstack/astapor/blob/456f4e32941299794b31d04f03ab672ce638f8ca/puppet/modules/quickstack/manifests/pacemaker/params.pp#L96-L98
09:40:35 <jistr> armaan01: i think private_network should be set to "192.168.122.0" for your setup, but i always used "private_iface". If you set it to the interface name which is on the "192.168.122.0" network, you should be fine. Assumption is that the interface name is the same on both hosts.
09:41:59 <jistr> armaan01: see here how the find_ip function works. If you provide "private_network", then "private_iface" will not be used. https://github.com/redhat-openstack/astapor/blob/3dae7c237bf4a9fd21a70a4dbd2b2e76842052e6/puppet/modules/quickstack/lib/puppet/parser/functions/find_ip.rb
09:42:47 * jistr -> lunch, bbl
09:46:19 <nyechiel> sgotliv: ping
09:54:42 <pnavarro> I launched a VM in Juno after packstack and It worked
09:55:40 <jistr|mobi> armaan01: i didn't have enough time to check the proper formatting of "private_network", it might be that it shouldn't have the "0" or ".0" at the end
09:57:46 <armaan01> jistr|mobi: After setting up the iface parameter, it is working :)
09:58:03 <bdossant> Hi! i installed packstack using this command "packstack --allinone --os-neutron-install=n --os-heat-cfn-install=y --os-heat-install=y" and when i do "nova network-list" i get a huge list of all the ips on the network range
09:58:37 <jistr|mobi> armaan01: \o/
09:59:01 <bdossant> if i do "nova net-list" the output is displayed correctly
09:59:13 <armaan01> jistr|mobi: Puppet agent is still running, but i can see with netstat -ntlp output, correct ip addresses are showing up. :)
09:59:25 <nyechiel> Does anyone encountered glance issues with all-in-one on CentOS? I am not able to create an image
09:59:56 <nyechiel> Error: Could not prefetch glance_image provider 'glance': Execution of '/usr/bin/glance -T services -I glance -K 77f9c630d31e41a0 -N http://10.35.160.29:35357/v2.0/ index' returned 2: usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT]
10:00:14 <arif-ali> nyechiel, that's a bug, please check workarounds page
10:00:29 <pnavarro> nyechiel, https://openstack.redhat.com/Workarounds#provisioning_of_glance_does_not_work_for_demo_environment
10:00:30 <nyechiel> arif-ali: thanks
10:00:44 <arif-ali> you need to disable the DEMO_PROVISION
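[editor's note] The workaround arif-ali points at amounts to flipping one setting before running packstack (key name as it appears later in this log; the answer-file path is whatever you generated):

```ini
# in your packstack answer file
CONFIG_PROVISION_DEMO=n
```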
10:01:09 <armaan01> jistr|mobi: Will you be there in Paris for OpenStack Summit ?
10:06:36 <yfried|afk> nmagnezi: I think I have a FW issue
10:09:25 <ihrachyshka> nmagnezi: reading thru oslo.db code, it seems to me that they have retry mechanism
10:09:32 <ihrachyshka> though it was probably broken recently
10:09:52 <ihrachyshka> they test connection with retries AFTER they execute some statements against engine
10:10:24 <ihrachyshka> so those statements bail out BEFORE we are in try: _test_connection() except: repeat() block
10:11:36 <apevec> ihrachyshka, thanks for root causing this! Please move bz to python-oslo-db
10:11:56 <apevec> actually, should be moved upstream, but let's track it in RDO too
10:12:33 <ihrachyshka> apevec: btw I suspect it's fixed in latest oslo.db
10:13:10 <ihrachyshka> apevec: the code there looks like not affected by the issue since they seem to postpone those statements till .connect() is called (inside _test_connection())
10:14:00 <ihrachyshka> apevec: and in openstack/requirements, it's oslo.db>=1.0.0
10:14:25 <ihrachyshka> apevec: so I would retest with new version before creating an u/s bug
10:14:50 <apevec> ihrachyshka, lemme check versions
10:15:12 <ihrachyshka> 0.4.0 on that machine
10:16:06 <ihrachyshka> apevec: version bumped in u/s Sep 18
10:16:13 <ihrachyshka> probably after M3
10:16:18 <apevec> yep, that's currently in rdo juno
10:16:30 <apevec> final is oslo.db - 1.0.0 (same as 0.5.0)
10:16:58 <gszasz> tosky: is https://rdo.fedorapeople.org/rdo-release.rpm repo pointing to Juno or Icehouse?
10:17:45 <ihrachyshka> apevec: ok, then let's just move the bug to python-oslo-db with explanation
10:19:34 <tosky> gszasz: good question, I don't remember :/
10:19:45 <apevec> ihrachyshka, I've new builds in copr, we can push quickly, please test with http://copr-be.cloud.fedoraproject.org/results/jruzicka/rdo-juno-epel-7/epel-7-x86_64/python-oslo-db-1.0.1-1.fc22/
10:20:11 <ihrachyshka> apevec: roger
10:20:43 <gszasz> tosky: okay, I will use https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
10:21:38 <apevec> gszasz, please use redirect link as documented in https://openstack.redhat.com/RDO_test_day_Juno_milestone_3#How_To_Test
10:22:30 <gszasz> apevec: thanks
10:24:45 <ihrachyshka> nmagnezi: so what's the right way to reproduce that? restart?
10:26:32 <nmagnezi> ihrachyshka, reboot
10:27:06 <ihrachyshka> nmagnezi: 100% repro?
10:27:16 <nmagnezi> ihrachyshka, maybe it's possible to try to switch off the db and restart neutron
10:27:23 <nmagnezi> ihrachyshka, and only then switch on the db
10:27:25 <ihrachyshka> nmagnezi: yeah, I thought so too
10:27:34 <nmagnezi> ihrachyshka, i didn't try that..
10:27:45 <nmagnezi> ihrachyshka, reboot so far gave me 100% repro..
10:30:58 <ihrachyshka> apevec: nmagnezi: nope, new oslo.db doesn't help
10:31:15 <apevec> ihrachyshka, ah so upstream bug
10:31:16 <ihrachyshka> I read their new magic context manager code wrong then :)
10:31:35 <ihrachyshka> yeah, should be
10:31:47 <apevec> ihrachyshka, can you make a small reproducer using oslo.db directly?
10:32:11 <ihrachyshka> hm, let's see. I guess I need to create an engine with mysql down and see that it does not retry
10:32:22 <apevec> yeah
10:39:03 <ihrachyshka> apevec: yeah, 3-line script to get the trace
10:39:09 <ihrachyshka> import oslo.db.sqlalchemy.session
10:39:09 <ihrachyshka> url = 'mysql://neutron:123456@10.35.161.235/neutron'
10:39:09 <ihrachyshka> engine = oslo.db.sqlalchemy.session.EngineFacade(url)
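[editor's note] The resilience ihrachyshka is asking for boils down to retrying the initial connection instead of exiting. A minimal, generic sketch of that pattern (not oslo.db's actual API; the function and parameter names here are invented for illustration):

```python
import time


def connect_with_retries(connect, max_retries=10, interval=1.0):
    """Call a connection factory until it succeeds or retries run out.

    `connect` is any zero-argument callable that raises while the
    database is still starting up (e.g. a lambda wrapping engine
    creation). Real code would catch only the DB driver's error class.
    """
    last_exc = None
    for _ in range(max_retries):
        try:
            return connect()
        except Exception as exc:
            last_exc = exc
            time.sleep(interval)
    raise last_exc
```

With something like this wrapping engine creation, a service started during mariadb's slow first boot would keep polling instead of crashing straight out of sys.exit(main()).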
10:39:13 <apevec> cool
10:39:20 <apevec> I mean not cool :)
10:39:43 <ihrachyshka> well, the cool thing is that we've found the root
10:39:53 <apevec> yeah, that
10:40:04 <apevec> not cool goes to oslo.db :)
10:40:06 <ihrachyshka> now time to communicate it to u/s, screaming around and running with a crazy face
10:40:16 <apevec> ACK
10:41:02 <jistr|mobi> armaan01: yeah i'll be there
10:46:29 <armaan01> jistr|mobi: Great, then dinner is on me :)
10:47:05 <nmagnezi> ihrachyshka, are you filing upstream bug?
10:49:54 <ihrachyshka> nmagnezi: yes
10:49:58 <ihrachyshka> apevec: nmagnezi: https://bugs.launchpad.net/oslo.db/+bug/1376211
10:50:44 <nmagnezi> ihrachyshka, thanks a bunch!
10:51:03 <ihrachyshka> apevec: I don't see python-oslo-db component in bugzilla...
10:53:21 <nmagnezi> ihrachyshka, i see python-oslo-config
10:54:08 <ihrachyshka> nmagnezi: yeah, that's all I see
10:54:21 <nmagnezi> ihrachyshka, shall i change to that?
10:54:29 <ihrachyshka> nmagnezi: no, it's another component
10:54:36 <nmagnezi> ihrachyshka, got it
10:54:40 <ihrachyshka> nmagnezi: we should have separate components for each oslo lib
10:55:07 <ihrachyshka> we have more for RHOS
10:55:11 <ihrachyshka> though no oslo.db there too
10:55:18 <ihrachyshka> (maybe because it's Juno thing)
10:58:23 <jistr> armaan01: ack :D
10:59:28 <nmagnezi> ihrachyshka, who can add that component?
11:00:45 <nmagnezi> ihrachyshka, did you stop the database?
11:01:51 <ihrachyshka> nmagnezi: maybe. well, I don't need the machine anymore, so you're free to do whatever you like with it
11:02:14 <nmagnezi> ihrachyshka, ack. just making sure.. thanks!
11:07:58 <ukalifon> nyechiel: how did you solve the mariadb-galera-server install on CentOS 7 ? I already fixed the link in the epel repo
11:10:44 <nmagnezi> ukalifon, check your epel.repo file
11:11:07 <nmagnezi> ukalifon, in the mirrors url, change the number 6 to 7
11:11:15 <nmagnezi> ukalifon, clean the yum cache and try again
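[editor's note] nmagnezi's fix can be done in one pass with sed (the exact token to replace depends on how the mirrorlist URLs in your epel.repo are written, so treat the pattern as an assumption and check the file first):

```shell
# flip the release number in the mirror URLs, then refresh metadata
sed -i 's/=6/=7/g' /etc/yum.repos.d/epel.repo   # pattern is illustrative
yum clean all
```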
11:14:12 <apevec> ihrachyshka, yeah, I need to request new Juno components to be added to BZ
11:14:24 <apevec> ihrachyshka, please move to Fedora Rawhide python-oslo-db
11:28:07 <apevec> ihrachyshka, ok, I changed component using CLI, loading Fedora component list in webui takes ages
11:29:48 <ihrachyshka> apevec: exactly!
11:30:29 <Dafna> nlevinki, thanks :)
11:33:17 <cdent> I'm trying the test day with fedora 21 and packstack, and I'm stuck at the preparing servers stage:
11:33:28 <cdent> ERROR : Failed to set RDO repo on host 10.0.0.2
11:34:00 <cdent> I'm assuming I missed some setup step but it's not clear what that was
11:43:09 <apevec> cdent, anything more in puppet logs?
11:43:20 <cdent> yeah, will paste, just a sec
11:43:31 <apevec> is that AIO or multinode?
11:43:45 <cdent> actually, I'll paste in a few minutes, I decided to start over with a freshly created vm
11:43:47 <cdent> it's allinone
11:44:43 <cdent> from memory: it was giving a SequenceError when trying to run yum-config-manager
11:45:00 <cdent> i'll paste it up if/when it happens again on this fresh go
11:51:32 <cdent> apevec: same error: http://paste.openstack.org/show/117438/
11:51:45 <cdent> brb
11:56:41 <bdossant> cannot boot a cirros instance on packstack juno. "No valid host was found. " error. And also cirros was not uploaded to glance by default
11:57:06 <arif-ali> bdossant, how did you install using packstack
11:57:16 <arif-ali> did you do it without DEMO_PROVISION?
11:57:36 <bdossant> y
11:57:40 <bdossant> after
11:57:42 <bdossant> packstack --allinone --os-neutron-install=n --os-heat-cfn-install=y --os-heat-install=y
11:58:07 <arif-ali> then you need to download the cirros image, and add to glance manually
11:58:18 <bdossant> yeah i did that
11:58:28 <bdossant> the error comes after
11:59:26 <bdossant> "NovaException: Failed to add interface: can't add lo to bridge br100: Invalid argument"
11:59:34 <bdossant> i have this error on nova-compute log
12:00:15 <bdossant> Unable to force TCG mode, libguestfs too old? 'GuestFS' object has no attribute 'set_backend_settings'
12:01:10 <bdossant> thats another warning
12:02:39 <apevec> cdent, do you have rdo-release-juno RPM installed?
12:03:16 <cdent> I did one of these: sudo yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
12:17:06 <arif-ali> here's my quick dump of allinone install with the DEMO_PROVISION tweaks, if anyone is interested; http://tinyurl.com/knvt4on
12:18:11 <apevec> has bz been filed for CONFIG_PROVISION_DEMO=n ?
12:18:49 <arif-ali> yes
12:19:22 <arif-ali> https://bugzilla.redhat.com/show_bug.cgi?id=1148346
12:26:49 <apevec> mmagr, could someone from packstack team look at ^
12:27:00 <apevec> bz 1148346
12:29:05 <ajeain1> arif-ali: did the change of provision mode help you get past the glance error? I changed it to N, and I got "ERROR : Cinder's volume group 'cinder-volumes' could not be created"
12:29:47 <arif-ali> yes it did, see my log at the URL I pointed to http://tinyurl.com/knvt4on
12:31:37 <arif-ali> to make sure, I am using CentOS 7
12:31:57 <mflobo> On CentOS 7: ERROR : Error appeared during Puppet run: 128.142.143.49_api_nova.pp
12:32:06 <arif-ali> with updates, extras, epel, puppet, and rdo-openstack repos
12:32:47 <mflobo> I think it is this: https://bugzilla.redhat.com/show_bug.cgi?id=1139771
12:33:31 <arif-ali> mflobo, set selinux to permissive, and try again
12:33:56 <mflobo> arif-ali, ok
12:37:52 <rbowen> Also, did you see this workaround: https://openstack.redhat.com/Workarounds#Packstack_fails.3B_unable_to_start_nova-api
12:38:13 <rbowen> Is it that? Or something different?
12:38:39 <rbowen> Oh, I see that that's what arif-ali already recommended. :-)
12:39:46 <mflobo> rbowen, yes it is, thanks
12:42:21 <apevec> arif-ali, I wonder why you enabled the puppet repo, are there issues with puppet from EPEL7?
12:43:03 <arif-ali> apevec, I've had that from a few months back, when I had issues, and i have that synced locally
12:43:15 <arif-ali> Do I need to take that out of the equation?
12:44:09 <apevec> for Juno I wanted us to try the epel puppet first
12:44:24 <apevec> but good that upstream puppet works, it's a newer version
12:44:39 <arif-ali> ok, no worries, I will re-provision my machines
12:44:50 <arif-ali> and re-test
12:50:45 <mflobo> I had also this problem about glance: https://openstack.redhat.com/Workarounds#provisioning_of_glance_does_not_work_for_demo_environment
12:51:12 <mflobo> next try!
12:53:49 <larsks> Morning all...
12:54:02 <rook-roll> yo
12:54:07 <rbowen> Morning.
12:55:13 <larsks> arif-ali: for that glance error, were there any other details in either /var/tmp/packstack/20141001-091112-m6iWqu/manifests/10.0.0.1_provision_glance.pp.log
12:55:18 <larsks> or in the glance server log?
13:01:46 <arif-ali> larsks, 5 mins late, just re-provisioned my machine
13:01:54 <arif-ali> I can re-test if you want
13:02:07 <larsks> arif-ali: Nah, not right now.
13:02:18 <larsks> Thanks.
13:02:19 <danielfr> I followed this (https://openstack.redhat.com/Workarounds#Packstack_fails.3B_unable_to_start_nova-api) and built the selinux module but it didn't fix the problem. Still getting Error: Could not start Service[nova-api]: Execution of '/usr/bin/systemctl start openstack-nova-api' returned 1. Anyone else?
13:02:36 <larsks> danielfr: anything else in your audit log?
13:02:53 <larsks> any errors in your nova api log? /var/log/nova/nova-api.log
13:03:34 <arif-ali> I have 3 physical machines, So I will be able to easily re-test, without hampering other testing
13:03:53 <danielfr> 2014-10-01 12:58:00.317 5153 INFO nova.wsgi [-] Stopping WSGI server.
13:03:53 <danielfr> 2014-10-01 12:58:00.318 5153 INFO nova.wsgi [-] WSGI server has stopped.
13:03:53 <danielfr> 2014-10-01 12:58:00.318 5153 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
13:03:54 <larsks> arif-ali: if you have the time and inclination, go for it! :)  I do not wish to make more work for you...
13:04:28 <larsks> danielfr: if you put selinux in permissive mode, do you still get the error?
13:05:09 <danielfr> Umm probably not
13:05:20 <danielfr> I'm gonna change it then
13:05:30 <laudo> When running openstack in active-active mode with pcs, do the nova-scheduler_monitor and keystone_monitor need to run on the failover nodes? Let's say I have a 3 node cluster and the services would be active on node 1
13:07:04 <laudo> This enables me to login to the dashboard etc. So I assume services like the scheduler and keystone monitor only have to run on the active node?
13:07:19 <arif-ali> larsks, the first attachment in the bug is 10.0.0.1_provision_glance.pp.log from the machine
13:07:30 <arif-ali> maybe I should have left the filename as it was ;)
13:07:41 <larsks> arif-ali: I thought the first attachment was openstack-setup.log.
13:07:50 <larsks> arif-ali: Did I misread?  Checking...
13:08:05 <arif-ali> I added an attachment when I created the bug
13:08:24 <larsks> arif-ali: Ahaha, I did not even see that.
13:09:20 <larsks> arif-ali: can you confirm what version of the glance client you are running?  "rpm -q python-glanceclient"
13:09:48 <arif-ali> python-glanceclient-0.14.1-1.el7.centos.noarch
13:10:25 <larsks> arif-ali: Thanks.  Need to go check a few things...
13:12:29 <mflobo> If you enable TEMPEST following this https://openstack.redhat.com/Testing_IceHouse_using_Tempest#Enable_tempest_and_demo you have the problem related to glance https://openstack.redhat.com/Workarounds#provisioning_of_glance_does_not_work_for_demo_environment
13:12:29 <arif-ali> larsks, no worries, btw, I have been allocated these 2 days to test, so am happy to test anything
13:13:02 <mflobo> so, it is not possible to test
13:14:30 <mflobo> In packstack answer file you can see:
13:14:30 <mflobo> # Whether to provision for demo usage and testing. Note that provisioning is only supported for all-in-one installations.
13:14:30 <mflobo> CONFIG_PROVISION_DEMO=y
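An editorial aside for readers hitting the same wall: one way around the demo-provisioning failure is to flip the flag off in the generated answer file before re-running packstack. The sed below operates on a tiny stand-in file built on the spot; on a real host you would point it at the file produced by `packstack --gen-answer-file=...`.

```shell
# Stand-in answer file; a real one comes from: packstack --gen-answer-file=...
f=$(mktemp)
printf 'CONFIG_PROVISION_DEMO=y\nCONFIG_PROVISION_TEMPEST=n\n' > "$f"

# Disable demo provisioning so the broken glance provisioning step is skipped
sed -i 's/^CONFIG_PROVISION_DEMO=y$/CONFIG_PROVISION_DEMO=n/' "$f"
cat "$f"
```

Re-running `packstack --answer-file="$f"` (with your real file) would then skip the demo/glance provisioning step entirely.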
13:14:58 <danielfr> This is what nova-api.log says (selinux enforcing mode)
13:15:01 <danielfr> 2014-10-01 12:58:00.166 5141 CRITICAL nova [-] PermissionsError: Permission denied
13:15:01 <danielfr> 2014-10-01 12:58:00.166 5141 TRACE nova Traceback (most recent call last):
13:15:02 <danielfr> 2014-10-01 12:58:00.166 5141 TRACE nova   File "/usr/bin/nova-api", line 10, in <module>
13:15:02 <danielfr> 2014-10-01 12:58:00.166 5141 TRACE nova     sys.exit(main())
13:15:02 <danielfr> 2014-10-01 12:58:00.166 5141 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 55, in main
13:16:00 <ihrachyshka> nmagnezi: hey. are you open to trying out a hotfix?
13:16:06 <ihrachyshka> nmagnezi: for the mysql issue
13:17:16 <nmagnezi> ihrachyshka, yes, but can it wait for tomorrow? I'm in the middle of other tests
13:17:24 <ihrachyshka> nmagnezi: np. just ping me
13:17:33 <danielfr> Sorry for the flooding before... this is the error trace: http://pastebin.com/0mzjJ39c
13:17:53 <nmagnezi> ihrachyshka, np
13:19:38 <nmagnezi> ihrachyshka, btw, if you are interested. i'm having issues with ipv6
13:19:48 <ihrachyshka> nmagnezi: yeah, whynot :)
13:19:49 <nmagnezi> ihrachyshka, instances won't get an ip address
13:20:08 <ihrachyshka> nmagnezi: what's your address management setup?
13:20:10 <nmagnezi> ihrachyshka, same setup, checkout vm named cirros
13:31:14 <larsks> Hey, packaging peeps, python-glanceclient in RDO for F20 is at 0.13, while for Centos 7 it's at 0.14. Should these packages be in sync?
13:31:38 <bdossant> I'm having an error on nova-compute booting an instance "/dev/sda is write-protected, mounting read-only" ps: centos7
13:38:06 <larsks> Packaging update: F20 repository is currently behind epel-7 repository due to CI problems, so there are going to be distribution related differences right now.
13:40:40 <apevec> larsks, rsync coming :)
13:42:50 <arif-ali> btw, is the openvswitch bug at https://bugzilla.redhat.com/show_bug.cgi?id=1120326 going to be fixed for Juno release, or do we know where it is at?
13:44:18 <ihrachyshka> arif-ali: looks like a closed bug for RHOSP5
13:44:24 <ihrachyshka> arif-ali: was it raised for RDO?
13:44:26 <apevec> arif-ali, I need to check w/ openvswitch folks
13:44:43 <larsks> arif-ali: that's not a juno thing, that's a base distribution thing.  What version of OVS is on your system?
13:45:01 <apevec> larsks, we ship openvswitch in RDO
13:45:05 <apevec> it's not in base RHEL
13:45:19 <arif-ali> ah, ok, as I've had the same issue in icehouse, and have to restart networking after reboot to re-enable the OVS bridges/ports
13:45:41 <larsks> apevec: See, this is what I get for using fedora all the time...
13:45:48 <apevec> for juno I took what we had in rdo icehouse, need to get ack to reship rhos5 rpm
13:46:19 <apevec> arif-ali, is there RDO bz requesting ovs rebase?
13:46:30 <arif-ali> I have this problem in centos7
13:46:32 <larsks> apevec: Uuuggggg, we're shipping 2.0.0.  Yeah, we totally need to update that.
13:46:39 <arif-ali> apevec, I don't think so
13:46:56 <apevec> arif-ali, ok, I'll clone rhos5 bz then
13:47:19 <arif-ali> apevec, thanks, that will help a lot in my PoC environment
13:49:08 <larsks> arif-ali: this bz has more information about what's going on: https://bugzilla.redhat.com/show_bug.cgi?id=1051593#c3
13:49:43 <ihrachyshka> nmagnezi: what's the pass for rhel machines?
13:50:04 <ihrachyshka> nmagnezi: ehm, in private I guess...
13:50:07 <apevec> ihrachyshka, you don't want this here :)
13:50:19 <ihrachyshka> yeah, wrong tab :)
13:50:31 <arif-ali> larsks, thanks for that, I will test that when my packstack install for multinode is done
13:50:53 <arif-ali> we will prob need to add that to the workarounds as well
13:50:54 <apevec> ihrachyshka, but probably close to https://www.youtube.com/watch?v=_JNGI1dI-e8
13:51:38 <ihrachyshka> apevec: almost :D
13:53:14 <mflobo> https://bugzilla.redhat.com/show_bug.cgi?id=1148459
13:53:57 <arif-ali> mflobo, that's the same problem we opened earlier
13:56:33 <apevec> arif-ali, mflobo - try yum downgrade python-glanceclient as a workaround
13:58:02 <arif-ali> apevec, thanks, will try that in my next round
13:58:07 <rbowen> Is that on the workarounds page yet?
13:58:24 <rbowen> Looks like it's not. Is that the recommended solution, or just an interim?
13:58:57 <danielfr> Packstack finished successfully after setting selinux to permissive...
13:59:16 <apevec> rbowen, interim until we get o-p-m update
13:59:33 <apevec> rbowen, but you can put it on wiki
13:59:58 <apevec> not sure how long it will take to get o-p-m published
14:05:41 <vaneldik> Two logrotate issues on my CentOS 7 box, for openstack-dashboard and rabbitmq-server
14:05:53 <vaneldik> # logrotate -f /etc/logrotate.conf
14:05:53 <vaneldik> error: openstack-dashboard:1 duplicate log entry for /var/log/httpd/horizon_access.log
14:05:53 <vaneldik> The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
14:05:53 <vaneldik> error: error running shared postrotate script for '/var/log/rabbitmq/*.log '
14:06:29 <vaneldik> Opened https://bugzilla.redhat.com/show_bug.cgi?id=1148451 and https://bugzilla.redhat.com/show_bug.cgi?id=1148444
14:06:37 <vaneldik> Can people confirm?
14:06:54 <larsks> vaneldik: confirmed on F20.
14:07:05 <larsks> The culprit here is /etc/logrotate.d/rabbitmq-server
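For reference, the failing postrotate in /etc/logrotate.d/rabbitmq-server calls `service rabbitmq-server rotate-logs`, which is not one of the basic LSB actions systemd's service shim supports. A sketch of a systemd-friendly replacement follows; the rotation policy directives here are illustrative, and the BZs above have the authoritative fix:

```
/var/log/rabbitmq/*.log {
    weekly
    missingok
    notifempty
    compress
    sharedscripts
    postrotate
        # "service rabbitmq-server rotate-logs" is not a basic LSB action;
        # ask the broker to reopen its logs directly instead
        /usr/sbin/rabbitmqctl rotate_logs > /dev/null 2>&1 || true
    endscript
}
```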
14:07:07 <apevec> Fedora RDO Juno repo update, please refresh your local yum caches
14:07:14 <apevec> s/update/updated/
14:12:47 <vaneldik> BTW: what are all those directories /tmp/keystone-signing-* on my CentOS 7 box? More than 100 by now.
14:18:51 <mflobo> Ok, Cirros 0.3.3 up and running in Juno box :)
14:19:41 <larsks> vaneldik: used by Nova "to cache files related to PKI tokens".  Hard to find documentation on it, or why the directories accumulate...
14:20:46 <danielfr> Same here using an Ubuntu image
14:20:49 <vaneldik> larsks: seems there have been sightings of this on Icehouse installs as well
14:20:59 <arif-ali> packstack, doing multi node installs, with CONFIG_NEUTRON_OVS_BRIDGE_IFACES defined, should it automatically create the relevant bridges in /etc/sysconfig/network-scripts on the compute nova hosts?
14:21:14 <arif-ali> as it does on the node we run packstack on
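For readers following arif-ali's question: on nodes where packstack does lay down bridge config, the network-scripts pair typically looks roughly like this (a sketch with example addresses, not the exact files packstack writes):

```
# /etc/sysconfig/network-scripts/ifcfg-br-ex (illustrative values)
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (the uplink attached to the bridge)
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
```

The OVSBridge/OVSPort keys come from the openvswitch initscripts integration; whether packstack writes them on the compute nodes too is exactly what was being debugged here.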
14:21:48 <apevec> larsks, vaneldik - https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L650
14:22:20 <linux-uzer> hello, does anybody know the status of Foreman support by RDO? Did RHEL completely move to StayPuft? I see that RHEL OSP 4 was based on Foreman / Astapor and OSP 5 is nice StayPuft version. The docs on RDO site (~ OSP 4) devoted to Foreman are somewhat inconsistent to follow :( Is Astapor abandoned?
14:22:38 <apevec> larsks, if not set, secure tempdir is created but since it's inside middleware, there isn't a place to remove it on exit...
14:22:52 <larsks> apevec: yuck :)
14:23:02 <apevec> you could set signing_dir if you don't like it :)
14:23:11 <larsks> Yup. I wonder if we should set one by default.
14:23:38 <apevec> larsks, what would you set as a default?
14:23:48 <larsks> /var/lib/nova/keystone-signing, maybe?
14:23:50 <apevec> it was set previously and there were issues
14:23:57 <larsks> That seems to be what some other projects are doing.
14:23:58 <apevec> w/ permissions
14:24:11 <apevec> larsks, what other projects?
14:24:39 <apevec> previous default was homedir iirc
14:24:51 <apevec> tempdir is guaranteed to work
14:24:59 <apevec> there's just this cleanup issue
14:25:44 <larsks> apevec: Yeah, actually I lied.  I was looking at some trove related changes, but they basically stopped setting it explicitly to a directory in /tmp and just use the middleware defaults.  So never mind.
14:26:08 <larsks> What was wrong with using the service home directory?
14:26:50 <apevec> can't remember now, iirc permissions were not correct, breaking auth m/w
14:27:05 <apevec> git history probably has some clues
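If the /tmp clutter bothers you, the option apevec mentions can be pinned in nova.conf; the path below is just an example, and the directory must exist and be writable by the nova user (e.g. `install -d -o nova -g nova /var/lib/nova/keystone-signing`):

```
[keystone_authtoken]
# pin the PKI token cache so a fresh tempdir is not created on every start
signing_dir = /var/lib/nova/keystone-signing
```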
14:27:14 <ihrachyshka> apevec: hey. can you help me with https://bugzilla.redhat.com/show_bug.cgi?id=1022725 ? it seems it was marked as a blocker for no real reason due to Regression tag. Should I remove that? (it's not clear why it's marked as regression at all)
14:27:15 <larsks> Yup.  Will take a look later, I guess.
14:30:39 <larsks> I have updated the workarounds page with the 'yum downgrade python-glanceclient' suggestion, which seems to work in my local testing.
14:37:49 <arif-ali> hmm, I don't have an old copy of python-glanceclient synced
14:38:15 <larsks> arif-ali: On my centos 7 system, "yum downgrade python-glanceclient" Just Worked...
14:38:31 <arif-ali> yeah, I have a local reposync of the rdo-openstack
14:38:36 <larsks> Ah.
14:38:41 <arif-ali> and I automatically remove the old rpms
14:38:53 <arif-ali> through the -d param of reposync
14:40:03 <arif-ali> I thought reposync was going to be similar to rsync, looks like it's not
14:41:26 <jpena> larsks: I have checked the openstack-nova-api SELinux issue, and have come up with a module that seems to do the trick
14:41:39 <jpena> just documented it in the bz
14:42:05 <arif-ali> ah, the -n option (--newest-only) is what I need to remove from my reposync
14:44:04 <vaneldik> jpena: your module Works For Me. Thanks.
14:54:55 <vaneldik> jpena: I updated https://openstack.redhat.com/Workarounds#workaround with your module for RHEL7/CentOS7
14:56:48 <alexpilotti> hi guys, we’re testing RDO on CentOS 7. Is there any particular configuration that needs extra attention?
14:57:44 <mflobo> Small bugs in Horizon related to javascript and CSS https://bugzilla.redhat.com/show_bug.cgi?id=1148494
15:01:04 <apevec> alexpilotti, probably only you can test hyperv :)
15:02:42 <mflobo> alexpilotti, these two things could be interesting for you: https://openstack.redhat.com/index.php?title=Workarounds#Packstack_fails.3B_unable_to_start_nova-api and https://openstack.redhat.com/index.php?title=Workarounds#provisioning_of_glance_does_not_work_for_demo_environment
15:05:55 <alexpilotti> apevec mflobo: ok tx!
15:05:59 <alexpilotti> avladu: ^
15:45:59 <arif-ali> The Network Topology in horizon seems to think it is loading the topology forever
15:48:15 <rbowen> On CentOS7, allinone, I'm getting a failure in prescript.pp which says:
15:48:16 <rbowen> Error: Execution of '/usr/bin/rpm -e firewalld-0.3.9-7.el7.noarch' returned 1: error: Failed dependencies:
15:48:46 <rbowen> firewalld >= 0.3.5-1 is needed by (installed) anaconda-19.31.79-1.el7.centos.4.x86_64
15:48:46 <rbowen> firewalld = 0.3.9-7.el7 is needed by (installed) firewall-config-0.3.9-7.el7.noarch
15:49:15 <rbowen> I haven't seen this mentioned yet this morning. Should I just try upgrading firewalld, or is this part of a known problem?
15:49:35 <bdossant> so nova-network doesnt work after packstack installation. Error: NovaException: Failed to add interface: can't add lo to bridge br100: Invalid argument
15:49:41 <arif-ali> rbowen, I have the updates repo available, as well as extras, in the repo
15:51:00 <arif-ali> rbowen, interesting, I don't have that installed on my machines
15:51:39 <rbowen> This is a fresh install of CentOS7 as of last night, with `yum update` and nothing else.
15:53:31 <arif-ali> rbowen, my package list in my kickstart file http://pastebin.com/HEUQa5nw
15:54:07 <arif-ali> maybe there's something in @Core or @Base that may be installing it
15:56:13 <ToM> Hi, I'm new around here, but just wondering if anyone is testing those IPv6 things from neutron? And how to contribute overall? :)
16:01:45 <arif-ali> maybe the prescript.pp needs to do a yum remove instead, which prob will remove firewall-config as well as firewalld and anaconda
16:02:36 <rbowen> https://bugzilla.redhat.com/show_bug.cgi?id=1148426
16:02:55 <rbowen> Looks like Sean already logged this a couple of hours ago.
16:03:02 <rbowen> Just found it while I was logging mine.
16:04:46 * arif-ali is so glad that I have SOL on IPMI, the only way I can restart networking on the nodes
16:06:36 <hogepodge> Hi. How usable are the juno packages? I'm running into dependency problems with ceilometer trying to grab epel 6 packages
16:06:40 <larsks> rbowen:  You should just remove firewalld.
16:07:13 <larsks> rbowen: I was curious why I didn't run into that problem...but I'm starting with the centos7 cloud image, which does not include firewalld by default.
16:07:28 <rbowen> ok, adding workaround
16:07:47 <larsks> hogepodge: well, we're here testing the juno packages today :)
16:08:21 <larsks> hogepodge: please log bugs for any dependency problems you encounter.
16:08:56 <hogepodge> larsks ok, I'll do that.
16:10:21 <sean-toner> has anyone seen a problem installing packstack with glance?
16:10:43 <rbowen> There's a workaround listed about that.
16:10:51 <sean-toner> "glance: error: argument <subcommand>: invalid choice: u'services' (choose from 'image-create', 'image-delete', 'image-download', 'image-list', 'image-show', 'image-update', 'member-create', 'member-delete', 'member-list', 'help')"
16:10:52 <hogepodge> larsks which bug tracker?
16:11:13 <larsks> hogepodge: https://bugzilla.redhat.com/enter_bug.cgi?product=RDO
16:11:19 <rbowen> sean-toner: https://openstack.redhat.com/Workarounds#Provisioning_of_glance_does_not_work_for_demo_environment
16:11:55 <sean-toner> rbowen: ah yup, thanks :)
16:12:14 <sean-toner> rbowen: what's strange is that I only run into that problem on baremetal
16:12:19 <sean-toner> I had a VM, and I didn't hit that
16:20:02 <rbowen> Grabbing some lunch. Back soon.
16:20:55 <hogepodge> thanks larsks. Just checked my deployment code, turns out I had a bug. Rechecking, but I expect it will turn out ok.
16:30:38 <cdent> I have some status report and stuckness with juno packstack on fedora21:
16:30:47 <cdent> I was getting this: http://paste.openstack.org/show/117438/
16:31:16 <cdent> And followed the hacking instructions here: https://ask.openstack.org/en/question/37028/how-to-install-rdo-packstack-on-a-rhel-system-subscribed-to-satellite/ (commenting out manage_rdo) to get past it
16:31:51 <cdent> now the puppet apply is being tested to see if it has finished, and it is looping
16:32:10 <cdent> an strace of the process shows that it is seeing: Please login as the user "fedora"
16:32:30 <cdent> when trying to ssh
16:34:44 <cdent> this is because I'm in a "cloud" image of fedora21
16:35:08 <cdent> and root's authorized_keys accept a connection and then kick out with that message
16:35:29 <cdent> changing the authorized_keys to match what the fedora user is using makes it move on
16:35:48 <cdent> Is this something I should write up somewhere? If so, where/how?
16:37:14 <avladu> hello, does anybody know what the current rdo juno is based on? Which is the latest commit?
16:38:28 <avladu> as I wanted to know when or if it will include the current master, especially on nova
16:42:58 <cdent> wrote it up here: https://tank.peermore.com/tanks/cdent-rhat/20141001
16:52:40 <larsks> cdent: are you running packstack on a host that you've connected to using ssh with agent forwarding?
16:53:28 <cdent> it's messier than that:
16:54:50 <cdent> i'm ssh'd into a linux box where devstack is running a fedora21 image, i'm ssh'd into that fedora guest via forwarded agent, and there I'm running packstack (as the fedora user)
16:55:35 <cdent> I would guess for testers this is a relatively common scenario, less so for people actually doing something
16:56:20 <cdent> (the fedora guest does not have a floating ip so is not visible from the host I'm actually sitting at)
16:56:42 <larsks> ...I ask because packstack creates an id_rsa locally and should configure authorized_keys to match, but if you have an agent-accessible key you will hit the authorized_keys entry created by cloud-init, rather than the one for root's key.
16:57:25 <cdent> it sounds like that's what happened, is that something that can be forced?
17:01:54 <larsks> You could...uh, unset SSH_AUTH_SOCK, maybe, before running packstack.
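larsks' suggestion as a runnable one-liner, for anyone replaying this later: with the agent socket unset, ssh falls back to local keys only, so the root key packstack generates is the one that matches authorized_keys. The packstack invocation is shown commented out; substitute your own answer file.

```shell
# Hide the forwarded ssh agent from everything started in this shell
unset SSH_AUTH_SOCK

# packstack --answer-file=packstack-answers.txt   # then run packstack as usual
echo "agent socket is now: '${SSH_AUTH_SOCK:-}'"
```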
17:02:58 <cdent> I'll give it a try after this current run (which is taking forever) completes. Is this something we should try to document (somewhere)?
17:08:03 <weshay> cdent, did you by chance add some yum repos by hand?
17:08:36 <cdent> I just followed the instructions on the test day page, weshay
17:09:13 <weshay> I've seen that error for instance when the epel rpm is installed by I removed the repo file from /etc/yum.repos.d
17:09:28 <weshay> s/by/but
18:06:30 <cdent> weshay, larsks: I got a bit further running as root but then had issues with rabbitmq startup (described after the hr here: https://tank.peermore.com/tanks/cdent-rhat/20141001 )
18:06:59 <cdent> I'm about out of brain and time so will probably have to defer until tomorrow or later
18:07:33 <larsks> cdent: no worries.  Refresh my memory, on what platform are you deploying?
18:07:38 <cdent> fedora21
18:08:10 <larsks> cdent: Okay.  I may try on that this afternoon.  I've had successful deployments on f20 and centos7 so far, but have not tried f21.
18:08:14 <weshay> I tried on f21 w/o much success a few weeks ago
18:08:49 <cdent> For sake of reference I might try on f20 later or tomorrow to see if I can narrow down my issues to f21 itself
18:08:51 <weshay> I'll add it to ci
18:09:13 <weshay> f20 will work w/ selinux permissive
18:09:25 <cdent> that's what I figured
18:09:34 <cdent> I tried 21 simply because I thought it would be "fun"
18:09:44 <weshay> lol.. I hear ya
18:09:47 <weshay> was it fun?
18:10:09 <cdent> I shall call it a learning experience which I guess is fun.
18:10:15 <cdent> But I seem to have a bit of a headache now.
18:10:16 <cdent> :)
18:10:20 <weshay> heh
18:13:33 <cdent> I think I'll eat and see what happens next.
18:22:43 <alexpilotti> mflobo: we’re testing on CentOS 7 with Hyper-V. Is there a plan to avoid installing Nova compute in RDO?
18:23:14 <alexpilotti> mflobo: ATM we have to enable a QEMU nova-compute instance just to disable it afterwards, since we add Hyper-V compute nodes
18:31:37 <rbowen> What's the status of Kite? Is it still a thing, or was it absorbed into Barbican ... or something else?
18:35:44 <nkinder> rbowen: it's still a thing
18:35:58 <nkinder> rbowen: it's under the same program as barbican, but it's a separate service
18:36:36 <rbowen> Ok. I was looking at http://gitlab.ustack.com/openstack/kite/blob/master/README.rst and the link to the docs there is 404.
18:36:41 <nkinder> rbowen: it's still at the POC stage though, and work is needed in oslo.messaging to allow for signing/encryption
18:37:04 <rbowen> ok. So it's still early days. Got it. Thanks.
18:37:40 <nkinder> rbowen: I wonder if that came from cookiecutter
18:38:02 <nkinder> rbowen: developer docs probably still need to be created.  I'll mention it to jamielennox
18:38:29 <rbowen> Thanks.
18:38:29 <laudo> wondering, did you guys ever run into a scenario where, when looking at the network diagram in openstack, the router's gateway said "no info"?
19:12:34 <armaan01> Hello folks, Any idea why this warning, "[Warning] Could not increase number of max_open_files to more than 1024 (request: 1835)"
19:12:47 <armaan01> And mysqld refuse to start after that
19:14:03 <armaan01> I also see this in logs, "[ERROR] WSREP: could not use private key file '/etc/pki/galera/galera.key': Invalid argument"
19:17:23 <ToM> because the ulimit hit you? :)
19:52:44 <arif-ali> multinode packstack, are we expecting to use "CONFIG_COMPUTE_HOSTS=" or do we do the config manually on the nova nodes?
19:57:14 <larsks> arif-ali: You are expected to use CONFIG_COMPUTE_HOSTS.
19:57:20 <larsks> ...and it's expected to work. :)
19:58:05 <arif-ali> larsks, thanks, will continue debugging then
20:52:52 <cdent> larsks, weshay: Fedora20 has worked fine for me, so apparently f21 has some problems, at least with starting up rabbit properly
20:53:22 <larsks> cdent: yup, opened a bz on that: https://bugzilla.redhat.com/show_bug.cgi?id=1148604
20:53:32 <cdent> ah excellent
21:07:52 <Veers> anyone ever dare to integrate openstack with VMware or am I going to be the first guy in here to make the attempt? haha
21:09:43 <larsks> Veers: vmware is I think fairly well supported.  It's listed as a "group B" hypervisor here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix#Group_B
21:16:14 <Veers> any gotchas/major issues you've heard/seen any scuttlebutt about?
21:20:20 <larsks> Veers: Don't know of any; not something that I've come into contact with...
21:20:53 <Veers> fair enough!
21:20:59 <Veers> diving into https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Configuration_Reference_Guide/section_compute-hypervisors.html#vmware now
21:21:36 <rook_> https://www.youtube.com/watch?v=1zNdw4DaUM8
21:24:36 <Veers> you know.. I sometimes say "good luck..." like Marko
21:24:54 <Veers> usually when I know someone's got good odds of succeeding though
21:24:57 <Veers> so I guess that makes sense
21:25:13 <rook_> ;)
21:27:55 <Veers> thinking of rebuilding my lab environment again with a physical networker haha
21:28:20 <Veers> any sizing guidelines out there for something like a networker node?
21:34:57 <Veers> so far based on documentation I kind of like how this appears to work
21:36:22 <Veers> scope it down to a cluster and set of datastores.. nsx integration (no shocker there) and even supports linked clones
21:37:08 <Veers> actually when I boot say cirros right now, does it just boot over the network and send writes to a file on whatever compute node thats running the instance? or does it fully copy it to the compute node's local disk and boot from there?
21:53:06 <arif-ali> I don't really know openvswitch that much, should packstack automatically run "ovs-vsctl br-set-external-id br-ex bridge-id br-ex"
21:53:28 <arif-ali> from the multi-node aspect, everything only works once I run this command?
22:23:04 <rook> larsks: does RDO have RC1?
22:23:42 <larsks> rook: I'm not sure.  My openstack-nova-compute package is openstack-nova-compute-2014.2-0.4.b3.fc22.noarch...so, maybe not?
22:24:33 <rook> alright, yeah i suppose it was just released....
00:55:28 <arif-ali> is ironic going to be available as a package in RDO?
08:29:28 <armaan01> hello folks, I have a foreman + quickstack setup. All of a sudden mysqld is refusing to start. Has anyone seen this error before? http://fpaste.org/138451/14122382/
08:30:37 <armaan01> [ERROR] WSREP: could not use private key file '/etc/pki/galera/galera.key': Invalid argument
08:35:51 <cmyster> armaan: have you tried it without setting up keys ?
08:41:09 <armaan> cmyster: trying it now.
08:43:59 <armaan> cmyster: Ahan, it works now.
08:44:38 <cmyster> armaan: is this a predefined config file that you were using ?
08:45:15 <cmyster> sorry, let me rephrase, is this a configuration filer that came with the installation ?
08:45:20 <cmyster> file*
08:45:30 <armaan> cmyster: I am using quickstack puppet modules, so i guess yes.
08:45:52 <armaan> cmyster: But I am seeing a different problem now: Debug: Exec[all-galera-nodes-are-up](provider=posix): Executing '/tmp/ha-all-in-one-util.bash all_members_include galera' | Debug: Executing '/tmp/ha-all-in-one-util.bash all_members_include galera' | Debug: /Stage[main]/Quickstack::Pacemaker::Galera/Exec[all-galera-nodes-are-up]/returns: Sleeping for 10.0 seconds between tries
08:47:11 <cmyster> no idea :) I am not using galera, but you could try and see what that script in /tmp/ha-all-in-one-util.bash is trying to do
08:50:08 <armaan> cmyster: checking it now :)
10:04:02 <sahid> hello all, about the test day: how can I help?
10:21:13 <mflobo> sahid, you can start from here https://openstack.redhat.com/RDO_test_day_Juno_milestone_3 ;)
10:30:15 <sahid> mflobo: hum... will do the post installation tests for rhel7
11:30:57 <ihrachyshka> nmagnezi: looks like ipv6 stateful is broken, currently reading the code, looks like we should advertise prefix for stateful too?
11:31:16 <nmagnezi> ihrachyshka, will be right with you.
11:42:32 <ihrachyshka> nmagnezi: can we try a hotfix on one of l3 agents serving a ipv6 network for ?
11:42:46 <ihrachyshka> nmagnezi: for e.g. that cirros instance?
11:43:20 <nmagnezi> ihrachyshka, it will have to wait 20-30 minutes, we are currently running some additional ipv6 related tests (provider network)
11:43:47 <nmagnezi> ihrachyshka, btw cirros seems to be having issues with ipv6, i managed to get ipv6 address on a rhel instance, but not on cirros
11:43:54 <nmagnezi> ihrachyshka, (same subnet)
11:44:23 <ihrachyshka> nmagnezi: ok. how have you managed it? which type of setup?
12:03:35 <ihrachyshka> nmagnezi: anyway, ping me once you're done
12:03:46 <nmagnezi> ihrachyshka, will do
12:03:53 <ihrachyshka> nmagnezi: we have two patches to test - one for ipv6, another one for mysql connection failure
12:04:08 <nmagnezi> ihrachyshka, i managed to get an ip with both ra_mode and address_mode set to dhcp stateless
12:04:21 <nmagnezi> ihrachyshka, but instances did not get default gw for some reason
12:04:43 <nmagnezi> ihrachyshka, in my setup check the instance rhel64_nir2 and you'll see
12:06:13 <ihrachyshka> nmagnezi: probably because we don't provide it via RA?
12:06:53 <ihrachyshka> nmagnezi: radvd is assumed to provide the route, but it seems the way we configure radvd is wrong (not specifying prefix)
12:07:50 <nmagnezi> ihrachyshka, sounds interesting. i've yet to check the radvd configuration. i left it as is post installation, so maybe packstack should change some params ..
12:09:51 <ihrachyshka> nmagnezi: packstack has nothing to do with in-namespace radvd conf
12:09:58 <ihrachyshka> nmagnezi: it's generated by l3 agent
12:16:21 <arif-ali> are we likely to get RDO packages for ironic in juno, or are we waiting for someone to package it?
12:39:01 <ihrachyshka> nmagnezi: whats up? :)
12:41:07 <apevec> arif-ali, it is packaged, athomas is working on openstack-ironic
12:41:57 <apevec> athomas, do you have Juno in Rawhide? Once there, I can push it through rdo update process
12:42:16 <arif-ali> apevec, thanks, it's one of those things I want to test, along with the xCAT driver that's already built
12:53:54 <athomas> apevec, I've got an updated rpm. I was waiting for the Ironic RC release, which is likely tonight/tomorrow
12:54:26 <athomas> arif-ali Ironic will be there.
12:55:42 <arif-ali> athomas, thanks, I'll eagerly wait for its release ;)
13:19:20 <ihrachyshka> nmagnezi: weird. dnsmasq seems not to respond to dhcp clients. I've fired tcpdump in the namespace, I see client solicitations, but no responses.
13:21:05 <nmagnezi> ihrachyshka, in my setup?
13:21:56 <ihrachyshka> nmagnezi: yes
13:22:33 <nmagnezi> ihrachyshka, i have created additional subnet
13:22:51 <nmagnezi> ihrachyshka, in which the ra comes from provider router
13:23:06 <nmagnezi> i see the RAs in the host interface
13:23:19 <nmagnezi> but not on the instanves
13:23:22 <nmagnezi> instances*
13:23:32 <nmagnezi> not sure why
13:24:24 <nmagnezi> ihrachyshka, i pinged the instance ipv6 address from within the switch and i see who-has messages in the instance, but since it did not get the RAs it has no ip address to respond with
13:24:43 <nmagnezi> ihrachyshka, i think i'll file bugs for both cases
13:25:14 <nmagnezi> ihrachyshka, btw if you wish to check the fix, now is a good time for that
13:26:23 <ihrachyshka> nmagnezi: I would check the one for mysql connection
13:26:52 <nmagnezi> ihrachyshka, check.. the hotfix?
13:27:03 <ihrachyshka> yeah
13:27:13 <ihrachyshka> I'll check it now, stay tuned
13:27:29 <nmagnezi> fingers crossed..
13:40:01 <ihrachyshka> nmagnezi: yeah, traces in the log, but server not dying. instead, writing:
13:40:01 <ihrachyshka> 2014-10-02 16:33:20.428 77836 WARNING oslo.db.sqlalchemy.session [-] SQL connection failed. 10 attempts left.
13:40:01 <ihrachyshka> 2014-10-02 16:33:30.899 77836 WARNING oslo.db.sqlalchemy.session [-] SQL connection failed. 9 attempts left.
13:40:01 <ihrachyshka> 2014-10-02 16:33:40.915 77836 WARNING oslo.db.sqlalchemy.session [-] SQL connection failed. 8 attempts left.
13:40:16 <ihrachyshka> until mysql is finally up and it connects to it
13:40:39 <ihrachyshka> also, seems like neutron won't report startup success until it connects to SQL server, which is also nice
13:40:40 <nmagnezi> sounds great, but why traces?..
13:41:10 <nmagnezi> isn't it just supposed to indicate an error and keep trying? (like the lines you pasted)
13:43:14 <ihrachyshka> nmagnezi: yeah, that's a fair question
13:44:27 <ihrachyshka> nmagnezi: seems like neutron fires up greenthreads to handle incoming RPC messages before connection to SQL server is established, so each new RPC message results in attempt to read something from db => trace
13:45:16 <ihrachyshka> hm, wait, let me clean the logs and repeat
13:48:05 <rbowen> oops
13:48:50 <nmagnezi> ihrachyshka, waiting.. in the meantime, filing bugs against both ipv6 issues i have found
13:48:53 <ihrachyshka> nmagnezi: ah, wrong logs. so it seems that now no traces are shown during initial connection attempt
13:49:13 <nmagnezi> ihrachyshka, looks like we have a fix :)
13:51:12 <ihrachyshka> nmagnezi: yeah, though: mariadb took more than 100 secs to start, so neutron died anyway
13:51:26 <ihrachyshka> nmagnezi: also, log messages are weird - the last one I see is:
13:51:36 <ihrachyshka> 2014-10-02 16:47:56.066 1027 WARNING oslo.db.sqlalchemy.session [-] SQL connection failed. 6 attempts left.
13:51:43 <ihrachyshka> so I wonder where are remaining attempts
13:52:05 <ihrachyshka> nmagnezi: I think we should ask mariadb team why it takes so long to start
13:52:16 <nmagnezi> yup
13:52:28 <ihrachyshka> nmagnezi: and maybe consider raising retry attempts number and interval for all our services?..
13:52:35 <nmagnezi> ihrachyshka, btw what happens if it will try 6 more times? will it halt?
13:54:27 <athomas> Is there a schedule somewhere with the deadlines for getting packages into the Juno release?
13:55:09 <ihrachyshka> nmagnezi: ah, I know why the log messages - systemd killed neutron before it ended all attempts
13:55:17 <ihrachyshka> nmagnezi: yes, it will
13:55:25 <nmagnezi> athomas, that would be a good question for the mailing list
13:55:27 <ihrachyshka> nmagnezi: we may set max_retries = -1
13:55:34 <ihrachyshka> meaning, retry indefinitely
13:55:43 <nmagnezi> ihrachyshka, sounds right to me..
13:55:54 <ihrachyshka> nmagnezi: it should be done for all projects then
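The knob being discussed lives in each service's [database] section (oslo.db); the values below are the ones from the conversation, shown for illustration rather than as a recommendation:

```
[database]
# default is 10 attempts at a 10 s interval (~100 s total, the window being hit above);
# -1 means retry indefinitely
max_retries = -1
retry_interval = 10
```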
13:56:23 <nmagnezi> ihrachyshka, what is the default value?
13:58:16 <ihrachyshka> nmagnezi: yeah, default timeout for systemd units is 90secs. can be disabled.
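The 90-second unit start timeout can also be raised per service with a systemd drop-in; the path and value here are examples, not a tested recommendation:

```
# /etc/systemd/system/neutron-server.service.d/timeout.conf
[Service]
TimeoutStartSec=300
```

A `systemctl daemon-reload` is needed afterwards for the drop-in to take effect.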
13:58:29 <ihrachyshka> nmagnezi: default is 10 attempts, 10 sec interval
13:58:35 <ihrachyshka> ~= 100 secs
13:58:57 <ihrachyshka> probably more, because some time is spent accessing the sql server...
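The arithmetic above (10 attempts at a 10 s interval, roughly 100 s plus per-attempt connect time) can be sketched as a plain retry loop. `check_db` is a stand-in for a real probe such as `mysqladmin ping`, and the interval is shortened so the sketch finishes quickly; the echo lines mimic the oslo.db log format quoted earlier.

```shell
max_retries=10
retry_interval=1   # the real default interval is 10 s; shortened for the sketch
tries=0

# Stand-in probe: pretend the database only comes up on the 4th check
check_db() { [ "$tries" -ge 3 ]; }

until check_db; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_retries" ]; then
        echo "SQL connection failed after $max_retries attempts" >&2
        exit 1
    fi
    echo "SQL connection failed. $((max_retries - tries)) attempts left."
    sleep "$retry_interval"
done
echo "connected after $tries failed attempts"
```

If the stand-in never succeeded, the loop would give up after max_retries, which is exactly the bounded-failure behaviour the conversation weighs against max_retries = -1.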
14:01:32 <ihrachyshka> nmagnezi: honestly, I would start with beating mariadb team ;)
14:01:45 <nmagnezi> lol
14:01:56 <nmagnezi> ihrachyshka, upstream or in rh?
14:02:08 <ihrachyshka> nmagnezi: no idea whether long startup problem is u/s
14:02:14 <ihrachyshka> I would allow rh team to decide
14:02:34 <ihrachyshka> if it really is meant to take that long to start, then we may consider changes in openstack
14:03:10 <ihrachyshka> but again, I wouldn't go with indefinite retries: we probably want to fail after some time, otherwise services will hang there with no definite state
14:04:20 <nmagnezi> ihrachyshka, ack. and btw i see (in my setup) that nova does not function. is that because you are checking something?
14:04:44 <ihrachyshka> hm, not really. which service?
14:04:57 <ihrachyshka> maybe it was also shot due to mariadb
14:05:39 <nmagnezi> the controller server
14:05:46 <nmagnezi> nmagnezi-os-cont1
14:05:49 <ihrachyshka> it's not nova, it's neutron
14:05:50 <nmagnezi> ihrachyshka, ^^
14:05:54 <ihrachyshka> (nova tries to access it)
14:05:56 <nmagnezi> oh
14:05:58 <ihrachyshka> I've restarted neutron
14:10:42 <ihrachyshka> nmagnezi: do you want me to revert the hotfix?
14:23:44 <nmagnezi> ihrachyshka, no need to, thanks
14:23:55 <bdossant> hi! anyone tried the openstack client?
14:26:56 <bdossant> i have an error starting openstack cmd
14:27:00 <bdossant> on centos7
14:27:02 <bdossant> ImportError: cannot import name complete
14:27:24 <mrunge> bdossant, what did you type?
14:27:27 <avladu> hello, has anyone tried/is trying to execute the https://openstack.redhat.com/Post_Installation_Tests on centos7?
14:27:40 <bdossant> openstack server list
14:27:47 <bdossant> but even only openstack doesnt work
14:28:12 <bdossant> Traceback (most recent call last):
14:28:12 <bdossant> File "/bin/openstack", line 6, in <module>
14:28:13 <bdossant> from openstackclient.shell import main
14:28:15 <bdossant> File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 27, in <module>
14:28:17 <bdossant> from cliff import complete
14:28:19 <bdossant> ImportError: cannot import name complete
14:29:04 <mrunge> never used openstack as a command....
14:29:16 <mrunge> bdossant, could you please file a bug?
14:29:28 <bdossant> sure
14:29:29 <apevec> it needs cliff update, I'm working on it
14:29:45 <bdossant> ah good apevec
14:29:53 <bdossant> yes i was looking to cliff version
14:30:11 <bdossant> i have 1.4.4-1.el7
14:30:44 <avladu> I will start executing https://openstack.redhat.com/Post_Installation_Tests  on centos7, just wanted to know if someone has already handled this
14:30:52 <apevec> ihrachyshka, where is the hotfix for oslo.db ?
14:30:59 <bdossant> apevec: so a bug report is not necessary?
14:31:11 <ihrachyshka> apevec: https://review.openstack.org/#/c/125347/
14:31:21 <apevec> bdossant, file it still, I need reminder
14:31:23 <ihrachyshka> apevec: I've linked to it in external tracker list in bugzilla
14:31:37 <apevec> need to sort some dep updates (some are in el7 base)
14:31:55 <apevec> ihrachyshka, thanks
14:32:54 <apevec> ihrachyshka, which bz, it's not in 1144181 ?
14:33:30 <apevec> ah via LP
14:33:34 <bdossant> avladu: im not following the test plan for centos7, im just testing my use cases, but if you need help let me know
14:42:00 <rbowen> When in doubt, open a ticket. :-)
15:09:38 <bdossant> apevec: https://bugzilla.redhat.com/show_bug.cgi?id=1148885
15:13:31 <apevec> thanks
15:14:07 <apevec> btw it is not juno-1, rdo-release-juno-1 is unrelated to the milestone, it's just sequence :)
15:18:22 <dustymabe> hi all. I'm trying to take part in the testing a bit and give a sanity check for Juno on F21.
15:18:37 <dustymabe> Just wondering if we should update some of the workarounds to include F21 ?
15:18:50 <dustymabe> one of them in particular is: https://openstack.redhat.com/Workarounds#Packstack_--allinone_fails_to_remove_firewalld
15:23:08 <rbowen> dustymabe: Yes, please update the workarounds where appropriate.
15:23:37 <rbowen> dustymabe: What's it doing on F21?
15:23:56 <dustymabe> rbowen: one sec
15:24:38 <dustymabe> rbowen: http://ur1.ca/iadgh
15:24:47 <dustymabe> rbowen: same thing
15:25:17 <rbowen> Yeah, looks very much the same. So I guess put F21 in the list of affected platforms. That would be helpful.
15:25:23 <rbowen> Same workaround help?
15:25:27 <rbowen> I would assume so.
15:25:30 <dustymabe> yep..
15:25:48 <dustymabe> well at least I got past that error :)
15:26:00 <dustymabe> investigating another one at the moment
15:27:44 <rbowen> Hmm. The firewalld thing is listed twice.
15:27:51 <rbowen> That's weird. Fixing.
15:28:23 <dustymabe> yeah i was going to ask about that.. seemed pretty much like the same thing and linked to the same bug
15:29:38 <dustymabe> rbowen: do you know of anyone who uses their fedora project account (i.e. openid) to sign in to the rdo web site?
15:30:14 <rbowen> No, I don't. And the openID login appears to be broken right now.
15:30:18 <dustymabe> https://fedoraproject.org/wiki/OpenID
15:30:25 <rbowen> I haven't yet figured out what the problem is
15:30:58 <dustymabe> got it.. yeah my level of account sprawl on the internet is ridiculous these days
15:31:32 <dustymabe> I really wish openid would catch on.. and when I say openid I'm not talking about using google and letting them track absolutely everything I do
15:31:33 <rbowen> I've combined those two workarounds and added F21 to the affected list.
15:31:42 <dustymabe> rbowen: thanks!
15:40:52 <dustymabe> rbowen: does this one look familiar? http://ur1.ca/iadj2
15:57:15 <rbowen> Sorry, stepped away for a moment. Looking
15:57:30 <rbowen> Yeah, but from a long time ago ...
15:57:42 <rbowen> So it's probably a different thing. Looking for that.
15:58:06 <rcrit> I'm trying the test day on RHEL-7 and get a dependency error on python-greenlet when trying to install python-eventlet. Where should that come from?
15:58:25 <rbowen> dustymabe: There's this page - https://openstack.redhat.com/Qpid_errors - but it's really old, and I don't expect it's the same issue.
15:59:06 <rbowen> dustymabe: I guess we'd need to look at the contents of the log file it references.
15:59:22 <dustymabe> rbowen: i'm working on it
15:59:30 <rbowen> cool
16:07:06 <dustymabe> rbowen: ok. so I think it has something to do with https://bugzilla.redhat.com/show_bug.cgi?id=1103524
16:07:22 <dustymabe> rbowen: a recent change to rabbitmq
16:09:17 <arif-ali> chaps, first time doing gre networks through RDO, should I be able to create a network
16:09:22 <arif-ali> I get "Unable to create the network. No tenant network is available for allocation."
16:10:07 <arif-ali> I don't have issues if I have the tenant networks to be vxlan or vlan
16:10:10 <dustymabe> rcrit: do you have any more info? do you have error messages or anything you can paste using fpaste.org and copy link here?
16:11:15 <rcrit> http://fpaste.org/138574/14122662/ but I think it's just that I'm missing a repo somewhere
16:12:23 <dustymabe> what does yum repolist show?
16:14:45 <rcrit> http://fpaste.org/138582/41226648/
16:14:46 <dustymabe> rbowen: here is the log file "/var/log/rabbitmq/rabbit@localhost.log" http://ur1.ca/iadpj
16:15:10 <dustymabe> rbowen: this is the part that is concerning: "Failed to load NIF library: '/usr/lib64/erlang/lib/sd_notify-0.1/priv/sd_notify_drv.so: undefined symbol: sd_notify'"
16:15:59 <arif-ali> or do I need to create a gre network only as an admin?
16:18:44 <apevec> rcrit, RHEL7 Extras
16:18:55 <rcrit> 10-4, thanks
16:23:42 <rbowen> dustymabe: Yeah, that's ... strange. apevec, any ideas about dustymabe's problem?
16:25:25 <apevec> hmm, sd_notify was added recently
16:25:44 <apevec> dustymabe, is that fedora or EL7 ?
16:25:55 <dustymabe> apevec: f21
16:26:29 <apevec> need to check w/ eck
16:27:28 <dustymabe> apevec: ok.. let me know if there is any info I can gather and report
16:27:44 <apevec> <eck> apevec: https://bugzilla.redhat.com/show_bug.cgi?id=1148604
16:28:42 <dustymabe> apevec: thanks
16:28:47 <apevec> larsks, ^ eck says you hit it, care to add to Workarounds?
16:29:33 <larsks> apevec: I don't have a workaround for that.
16:29:43 <larsks> apevec: The workaround would seem to be "rebuild rabbitmq-server".
16:30:04 <jeckersb> i pushed an update to testing this morning
16:30:10 <dustymabe> larsks: apevec: according to the bug a new version of erlang-sd_notify has been pushed
16:30:17 <dustymabe> https://admin.fedoraproject.org/updates/erlang-sd_notify-0.1-4.fc21
16:30:17 <larsks> dustymabe: Spiffy!
16:30:18 <jeckersb> workaround is to grab the new build manually until it makes it through the workflow
16:30:28 <apevec> jeckersb, ah there you're in disguise :)
16:30:31 <jeckersb> :)
16:30:34 <dustymabe> rbowen: should we add ot workarounds?
16:30:41 <jeckersb> somebody owns eck on freenode
16:30:42 <larsks> dustymabe: if it works, yes...
16:30:43 <jeckersb> it is a shame
16:31:23 <dustymabe> larsks: rbowen: is there a common bugs page as well?
16:31:33 <dustymabe> I wish I would have seen this sooner
16:32:41 <dustymabe> googles probably would have helped if the bug wasn't just found yesterday
16:38:32 <apevec> dustymabe, Workarounds is common bugs page
16:38:58 <apevec> when there isn't workaround currently, we could put: none yet :)
16:39:06 <apevec> and update when there is
16:39:13 <dustymabe> apevec: ok sounds great.. wish this one would have made it on there
16:39:20 <dustymabe> apevec: im going to add it
16:39:51 <rbowen> dustymabe: Well, the workarounds page is supposed to be that.
16:40:03 <rbowen> There's also the page that it links to which are presumed resolved issues
16:40:07 <rbowen> That would be a second place to look.
16:40:14 <dustymabe> rbowen: got it.
16:40:39 <dustymabe> apevec jeckersb larsks rbowen: I was able to move past my issue with the newer rpm
16:41:02 <dustymabe> i did hit another issue, but it's different so I'll continue on :)
16:41:02 <jeckersb> excellent
16:41:12 <jeckersb> and boo re: 2nd issue :(
16:41:38 <dustymabe> jeckersb: testing = useful
16:46:09 <puiterwijk> hi, since yesterday, some rhcloud.com apps have been hit by a bug where curl was unable to find the ca cert list. I fixed this in two apps by restarting the app, but right now openstack.redhat.com is hit by this bug
16:48:29 <hogepodge> So is RDO neutron still using the swan driver for vpnaas, but using a different package?
16:53:02 <rbowen> openstack.redhat.com is affected by what?
16:53:16 <rbowen> puiterwijk: What are the symptoms?
16:53:25 <dustymabe> rbowen: I contacted puiterwijk
16:53:44 <dustymabe> rbowen: I didn't know if the error I was seeing earlier was on the fedora side of things or the openstack side of things
16:53:56 <puiterwijk> rbowen: well, try to sign in with id.fedoraproject.org (OpenID). "
16:53:57 <puiterwijk> Problem with the SSL CA cert (path? access rights?)
16:53:58 <dustymabe> rbowen: this is about openid
16:53:59 <puiterwijk> "
16:54:44 <dustymabe> puiterwijk: do you observe an error when trying to log in using openid?
16:54:58 <puiterwijk> rbowen: basically, because it can't check the certificate, it can't do OpenId discovery, and as such, login fails
16:55:11 <puiterwijk> dustymabe: yes, the exact error I just gave. which is on openstack.rh.c's side
16:55:12 <rbowen> Can't check which certificate?
16:55:23 <puiterwijk> rbowen: of id.fedoraproject.org (the OpenID provider)
16:56:02 <puiterwijk> so as I said, two other rhcloud.com apps hit this as well yesterday, and it was fixed by restarting the application (rhc app restart)
16:56:10 <rbowen> So how would I go about fixing this?
16:56:28 <puiterwijk> rbowen: restarting the openshift app fixes it
16:56:38 <rbowen> This isn't running on openshift, or on rhcloud.com
16:56:56 <puiterwijk> oh, fun. the other two apps did, and they were running there
16:57:14 <puiterwijk> so I guess restarting the web application (httpd restart) might do the trick, but I did not have a more root cause analysis
16:57:40 <dustymabe> puiterwijk: is this the first time we have seen this?
16:57:45 <rbowen> ok, restarted httpd, although I can't imagine what that would have to do with it.
16:57:57 <puiterwijk> dustymabe: yesterday was the first time with those two apps on rhcloud.com
16:58:13 <puiterwijk> rbowen: it
16:58:16 <puiterwijk> it's fixed now
16:58:23 <rbowen> That's truly bizarre.
16:58:24 <puiterwijk> and I don't have any clue why it fixes it either, but it did
16:58:42 <rbowen> It makes me grumpy when a restart fixes things.
16:58:46 <puiterwijk> yeah, and if I had more logs I could diagnose. maybe you can contact me internally to help diagnose?
16:59:10 <rbowen> Sure
16:59:13 <puiterwijk> (or feel free to look yourself of course, but in openshift I couldn't get more info)
16:59:28 <dustymabe> puiterwijk: rbowen: yay I'm logged in
17:00:12 <dustymabe> man I love openid.. I wish all sites accepted it (and i'm not talking about letting you log in with google)
17:01:24 <dustymabe> rbowen: ok I am going to try to add an entry to the workarounds page
17:01:35 <rbowen> ok, cool
17:01:45 <dustymabe> rbowen: do you mind if I also move the example entry to either the very bottom or the very top of the list?
17:01:54 <dustymabe> rbowen: doesn't really make sense to have it in the middle
17:07:20 <puiterwijk> dustymabe: cool. thanks for emailing me.
17:07:49 <dustymabe> puiterwijk: thanks for being helpful.. I hope we can track down the root cause at some point
17:08:30 <puiterwijk> dustymabe: yeah, I'll work with rbowen on that. I hope that since this is a native server, we have some more debugging info
17:11:06 <roshi> is there a key somewhere to what each 'config name' means in the results table?
17:11:53 <roshi> hey dustymabe :)
17:14:27 <dustymabe> roshi: hey
17:18:29 <cdent> dustymabe: did I hear earlier that you are testing with f21?
17:19:07 <cdent> after getting past rabbit, did you have trouble with mariadb not starting because of missing ha_connect.so?
17:19:20 * roshi is also testing on F21
17:19:26 * cdent waves
17:19:31 <roshi> o/
17:19:43 <roshi> I haven't gotten past the rabbitmq bug
17:19:48 <dustymabe> cdent: that's right
17:20:00 <dustymabe> roshi: look at workaround in https://bugzilla.redhat.com/show_bug.cgi?id=1148604
17:20:12 <dustymabe> roshi: i'm adding to workarounds page as we speak
17:20:15 <cdent> I did dustymabe's workaround and it worked fine
17:20:27 <dustymabe> cdent: yes I am hitting mariadb bug
17:20:36 <dustymabe> I decided to write this other one up first
17:20:39 <roshi> I was just looking into that log file to see what the issue was
17:20:43 <dustymabe> before I started investigating too hard
17:20:44 <cdent> i have to comment out the line in /etc/my.cnf.d/connect.cnf
17:20:53 <cdent> (for now)
17:21:12 <cdent> I want to see what other exciting things raise their heads ...
17:24:06 <dustymabe> rbowen: page should be updated now
17:24:21 <rbowen> Good deal. Thanks.
17:24:26 <dustymabe> I also went into a few of the other issues and added <pre></pre> tags
17:25:57 <dustymabe> cdent: "141002 12:38:59 [ERROR] /usr/libexec/mysqld: unknown variable 'plugin-load-add=ha_connect.so'" ?
17:26:03 <cdent> yup
17:26:23 <dustymabe> anyone know if this is already written up?
17:26:24 <roshi> well, now I'm to mariadb issues too :)
17:26:37 <dustymabe> roshi: at least it's consistent
17:26:41 <cdent> i haven't had a chance to investigate yet
17:26:48 <cdent> investigate -> search for writeup
17:27:40 <dustymabe> cdent: maybe https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1141458
17:28:40 <dustymabe> cdent: yep.. that looks like the issue
17:28:44 * cdent nods
17:29:02 <roshi> fair point dustymabe
17:29:10 <dustymabe> rbowen: should we add this to the workarounds page?
17:29:47 <rbowen> Yes please.
17:30:14 <cdent> I have a working fedora21 now
17:30:31 <rbowen> awesome
17:30:52 <cdent> Basically had to do all the workarounds (as expected) and then the new one with mariadb.
17:30:53 <dustymabe> I'll add this to the bug and the workarounds page: sed -i s/plugin-load-add=ha_connect.so/#plugin-load-add=ha_connect.so/ /etc/my.cnf.d/connect.cnf
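The sed one-liner above can be exercised on a copy before touching the real file. A sketch, using a locally written stand-in for the file; on an affected system the target is /etc/my.cnf.d/connect.cnf, followed by a mariadb restart:

```shell
# Stand-in for /etc/my.cnf.d/connect.cnf with the offending directive.
cat > ./connect.cnf <<'EOF'
[mariadb]
plugin-load-add=ha_connect.so
EOF

# Comment the directive out, as in the workaround above ('&' re-inserts
# the matched text after the '#').
sed -i 's/^plugin-load-add=ha_connect.so/#&/' ./connect.cnf

cat ./connect.cnf
```

After applying this against the real file, `systemctl restart mariadb` should come up clean; removing the line entirely would also work, but commenting it keeps the original config recoverable.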
17:32:42 <cdent> Random passes over nova, keystone, glance, ceilometer operate appropriately so no super obvious gross errors yet
17:32:45 * cdent checks process lists
17:33:55 * dustymabe notices that other people are probably running this on a server and not on a random old laptop
17:34:07 * dustymabe is still waiting for it to finish
17:34:14 <roshi> mines on a tower, bare metal
17:34:21 <roshi> running it in a VM next
17:37:21 <roshi> earlier this year I attempted the rdo quickstart and ran into similar things I think
17:37:25 <cdent> looks like ceilometer-compute didn't start because no tooz, this is the same problem you saw on f20 isn't it eglynn ?
17:37:27 <roshi> then I just quit trying
17:39:17 <eglynn> cdent: I saw ceilometer-compute not start initially from packstack on f20 ... but python-tooz is installed, and the compute agent successfully starts manually
17:39:51 <cdent> I'm seeing it not start, no module tooz.coordination
17:40:00 <cdent> which would make me think the central agent should not start either, but it does
17:40:40 <cdent> and it does start manually
17:40:52 <cdent> which suggests that tooz is being installed later than the compute agent
17:41:10 <dustymabe> rbowen: cdent: roshi: https://openstack.redhat.com/Workarounds#mariadb_fails_to_start
17:41:59 <eglynn> cdent: hmmm, interesting ... the central agent has an explicit Requires on python-tooz, whereas the compute agent doesn't
17:42:09 <cdent> it probably needs it?
17:42:21 <cdent> (where did you look for that by the way)
17:42:24 <eglynn> cdent: yep, suggests an ordering issue, central agent RPM installed after compute
17:42:45 <eglynn> cdent: yeah, so IIRC
17:43:13 <eglynn> cdent: ... what happened is we decided to "use" the same coordination mechanism in the compute agent
17:43:18 <eglynn> cdent: ... as a kind of after-thought
17:43:53 <cdent> If you recall my changes to devstack: the compute agent needs to start after the notification agent on a single node setup...
17:43:54 <eglynn> cdent: ... as in "wouldn't it be cool if we could also run multiple compute agents on a single compute node and share the local instances between them"
17:44:04 <cdent> so installing it first seems potentially bad mojo
17:45:02 <eglynn> cdent: yeah, I think there's a further curveball here, in that the puppet class for the compute agent depends on the one for the nova compute service
17:45:27 <eglynn> cdent: so it's a little more complex than simply the ordering within the ceilo services themselves
17:45:41 * cdent blinks twice
17:45:41 <eglynn> cdent: I'll have a dig to see if we change the ordering
17:45:51 <eglynn> cdent: blinks?
17:45:59 <cdent> one moment
17:46:33 <cdent> https://www.youtube.com/watch?v=S3OlpcO0aMs
17:46:53 <cdent> that cartoon character has a tendency to blink, audibly, when confronted with things that might make one sigh
17:47:50 <cdent> but zorak has nothing on brak: https://www.youtube.com/watch?v=kcPSmFhmhJ4
17:48:43 * cdent suffers from too much internet bandwidth
17:50:09 <dustymabe> cdent: roshi: did you have to workaround https://openstack.redhat.com/Workarounds#Provisioning_of_glance_does_not_work_for_demo_environment
17:50:15 <cdent> yes
17:50:19 <cdent> and the nova api startup too
17:50:37 <dustymabe> I'll update to include Fedora 21 in list of OS's
17:50:43 <cdent> (but that's just changing selinux)
17:51:12 <cdent> biab
17:54:26 <cdent> I'm going to have to come back later
17:57:09 * dustymabe waves
18:31:37 <dustymabe> roshi: any luck starting instances?
18:32:36 <roshi> couldn't start openstack-nova-api.service
18:33:03 <dustymabe> mine is up and running
18:33:18 <dustymabe> roshi: is selinux enabled?
18:33:20 <rbowen> There was a workaround aobut that, I believe.
18:33:27 <dustymabe> https://openstack.redhat.com/Workarounds#Unable_to_start_nova-api
18:33:30 <roshi> it is
18:33:38 * roshi reads
18:34:20 <roshi> turning off for the time being, rerunning allinone
18:35:21 <dustymabe> im having issues booting.. gonna go grab lunch and investigate after
18:46:11 <rbowen> flaper87: I was looking for an image to go along with 'zaqar' for a slide for a presentation. It appears, based on what images come up, that Zaqar means "sun tan" or "sun burn" in Azerbaijani.
18:46:31 <rbowen> flaper87: I like the definition on Wikipedia better. :-)
18:49:30 <flaper87> rbowen: LOL, indeed
18:49:47 <flaper87> rbowen: I guess some random evil god would be good
18:53:05 <roshi> hrm, error on _provision_glance.pp: returned 2
18:53:15 <roshi> could not prefetch glance_image provider
18:53:41 <arif-ali> roshi, check the workarounds
18:54:01 <roshi> doing it now
18:54:05 <arif-ali> number 2 on the list
18:54:07 <roshi> there's just so many of them
18:54:49 <arif-ali> roshi: https://openstack.redhat.com/Workarounds#Provisioning_of_glance_does_not_work_for_demo_environment
18:54:55 <roshi> yeah
18:54:59 <roshi> applying the workaround nod
18:55:03 <roshi> s/nod/now
19:12:37 <dustymabe> roshi: yeah I hit a lot of them
19:12:58 <dustymabe> roshi: I made it through though.. now working on getting an instance to boot
19:13:31 <roshi> I got a working instance, going to be doing the same
19:13:44 <dustymabe> working instance?
19:13:59 <dustymabe> running on openstack.. or just a running openstack setup
19:14:09 <roshi> well, packstack finished - gotta launch an image next
19:14:13 <roshi> sorry to be confusing :)
19:14:23 * roshi has many talents and that's one of them :p
19:14:36 <dustymabe> roshi: is that on your resume?
19:14:41 <dustymabe> don't leave it out
19:15:47 <roshi> Special skills: Conversational Obfuscation (both written and verbal)
19:16:07 <roshi> does that read vaguely enough?
19:16:29 <dustymabe> roshi: certainly sounds desirable
19:17:42 <roshi> I assume I can use you as a reference?
19:18:06 <dustymabe> certainly
19:18:50 <roshi> sweet
19:19:14 <roshi> I'll be your reference for "Obfuscation Certification (Written and Verbal)"
19:26:36 <dustymabe> we got any openstack networking gurus in here?
19:27:17 <arif-ali> dustymabe, ask the question, and maybe someone will answer
19:27:45 <dustymabe> so this is the answer file that packstack generated for me: http://ur1.ca/iaeqj
19:28:30 <dustymabe> im just wondering if having CONFIG_NEUTRON_L2_PLUGIN=ml2 and CONFIG_NEUTRON_L2_AGENT=openvswitch is a bad thing
19:28:56 <arif-ali> no it isn't
19:29:06 <dustymabe> arif-ali: ok
19:29:33 <arif-ali> all the tests I am doing, these 2 values don't change
19:29:34 <dustymabe> I don't know enough about it but I am getting weird errors in nova-compute.log and trying to find answers
19:30:03 <dustymabe> http://ur1.ca/iaeqp
19:30:55 <dustymabe> 'Error: argument "qvb58b14674-60" is wrong: Unknown device\n'
19:31:37 <dustymabe> and 'interface qvbdba8d3eb-84 does not exist!\n'
19:31:57 <dustymabe> I don't know why but I am trying to google search things
19:36:47 <arif-ali> I don't know much, but I would look at what "ovs-vsctl show" gives you, that will show you some of the interfaces
19:37:32 <arif-ali> when I hit networking problems I do a "openstack-service restart", which is what I am trying to do now, as I am hitting issues with networking as well
19:43:59 <arif-ali> what's the best workaround for the openvswitch issue wrt OVSBridge/OVSPort not coming up after reboot?
19:49:34 <rbowen> That's not one that I've seen yet.
19:50:43 <arif-ali> also found another issue with openvswitch, where once it creates the interfaces, and restarts the network, the IP does not disappear from the physical port, but the bridge does take the IP, the only way to resolve is to reboot machine
19:52:13 <arif-ali> rbowen, it's in BZ, larsks mentioned it yesterday, but he didn't see the problem in FC
19:52:28 <arif-ali> my history isn't long enough to find it
19:52:52 <rbowen> ok, I'll look in xchat logs ...
19:53:18 <dustymabe> rbowen: have you ever seen neutron-server not be able to start because mysql/mariadb isn't up yet?
19:53:32 <rbowen> There was some discussion of that earlier today, but I haven't seen it myself
19:54:09 <larsks> arif-ali: I think I was mentioning something else (ovs bridges not getting assigned addresses at boot).
19:54:16 <rbowen> https://bugzilla.redhat.com/show_bug.cgi?id=1051593#c3 this one?
19:55:08 <arif-ali> yes, that's the one
19:55:21 <rbowen> Oh, and here's the one about rabbitmq - https://bugzilla.redhat.com/show_bug.cgi?id=1148604
19:57:55 <dustymabe> is there a way to tell when a process finished "starting" using systemd
20:01:35 <roshi> how long should importing an image from local disk take for creating an image?
20:01:41 <roshi> 201MB
20:01:52 <dustymabe> roshi: blink an eye?
20:02:03 <roshi> so not the 30 minutes it's been then
20:02:09 <dustymabe> certainly not
20:02:17 <roshi> status has been "saving" for a long while now
20:02:35 <dustymabe> roshi: ahh
20:02:40 <dustymabe> its probably waiting for stdin
20:02:57 <roshi> where do I poke stdin?
20:03:00 <dustymabe> try typing and then end a line with a .
20:03:07 <dustymabe> so:
20:03:10 <roshi> in the browser?
20:03:11 <dustymabe> type some stuff
20:03:13 <dustymabe> .
20:03:26 <dustymabe> oops.. sorry. I thought you were using the command line
20:03:43 <roshi> nah - new to all this, so sticking with the GUI
20:04:02 * roshi looks up the command to create an image via cmdline
20:04:30 <dustymabe> roshi: make sure you have the env loaded properly for admin
20:04:32 <dustymabe> and then:
20:04:34 <dustymabe> glance image-create --name F21 --disk-format qcow2 --container-format bare --is-public True --progress --location http://download.fedoraproject.org/pub/fedora/linux/releases/test/21-Alpha/Cloud/Images/x86_64/Fedora-Cloud-Base-20140915-21_Alpha.x86_64.qcow2
20:04:52 <roshi> no idea if I'll have the env setup right
20:05:13 <dustymabe> [root@localhost ~]# . ~/keystonerc_admin
20:05:33 <dustymabe> and then you will see:
20:05:34 <dustymabe> [root@localhost ~(keystone_admin)]#
20:05:53 <roshi> aha
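For anyone following along, the keystonerc_admin file being sourced above is roughly this shape (the variable values here are placeholders, not the credentials packstack actually generates):

```shell
# Stand-in for ~/keystonerc_admin as written by packstack; values are
# illustrative placeholders.
cat > ./keystonerc_demo <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=CHANGEME
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
EOF

# Sourcing it puts the credentials in the environment, which is what the
# glance/nova/keystone CLIs read.
. ./keystonerc_demo
echo "$OS_USERNAME"
```

Once sourced, commands like the `glance image-create` invocation above authenticate without extra flags; the `(keystone_admin)` tag in the prompt is just packstack's visual reminder that the env is loaded.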
20:07:54 <dustymabe> rbowen: can you take a look at this: http://paste.fedoraproject.org/138670/14122804/raw/
20:08:20 <roshi> ok, building now
20:08:27 <roshi> cmdline worked quick
20:08:31 <rbowen> dustymabe: How about if you start mysql and try again?
20:08:42 <dustymabe> rbowen: yes I am going to try that
20:09:03 <dustymabe> rbowen: I just want to build a case for some sort of resolution
20:09:20 <dustymabe> rbowen: if it depends on the db shouldn't that be spelled out in the systemd unit file?
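Spelling the dependency out in the unit file, as suggested above, would look something like this systemd drop-in. A sketch only: the unit and file names are illustrative, and it is written to a scratch directory here rather than /etc/systemd/system/neutron-server.service.d:

```shell
# Scratch stand-in for /etc/systemd/system/neutron-server.service.d
dropin=./neutron-server.service.d
mkdir -p "$dropin"

# After= gives ordering (don't start until mariadb has started);
# Wants= additionally pulls mariadb in if it wasn't going to start.
cat > "$dropin/db-ordering.conf" <<'EOF'
[Unit]
After=mariadb.service
Wants=mariadb.service
EOF

grep After "$dropin/db-ordering.conf"
```

On a real host this would be followed by `systemctl daemon-reload` and a service restart. Note that After= only waits for mariadb's start job to finish, not for the database to accept connections, which is why the in-service retry settings discussed earlier still matter.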
20:16:13 <dustymabe> roshi: can you confirm fi/when you get an instance up and running
20:16:21 <dustymabe> s/fi/if/
20:17:36 <roshi> yeah
20:18:02 <roshi> now it's spawning an image, but I feel like it's taking too long
20:18:48 <dustymabe> grep 'does not exist' /var/log/nova/nova-compute.log
20:18:53 <dustymabe> what does that show?
20:19:27 <roshi> nothing
20:20:02 <dustymabe> ok.. might not be the same thing
20:26:57 <dustymabe> roshi: can you grab the instance id and then cat /var/log/nova/nova-compute.log | grep instance-id
20:28:02 <roshi> sure
20:36:39 <roshi> GUI shows no valid host found
20:36:56 <roshi> this box only has 4 gb of ram
20:37:03 <roshi> it's probably the issue
20:37:30 <arif-ali> roshi, have you tried the cirros image
20:37:50 <roshi> not yet, I will though
20:38:05 <dustymabe> could you ever find the instance id?
20:38:44 <roshi> yeah, got distracted with a phone call
20:38:46 <roshi> sorry
20:41:44 <roshi> http://ur1.ca/iaeyt
20:42:26 <dustymabe> roshi: is glance up
20:42:37 <dustymabe> run openstack-status and make sure all services that should be up are up
20:42:50 <roshi> lemme check - switching to a lighter weight DE
20:44:58 <roshi> ur1.ca/iaeza
20:45:57 <dustymabe> hmm seems like glance is up..
20:46:06 <dustymabe> maybe checkout /var/log/glance/ and see what is in there
20:47:18 <roshi> checking api.log
20:48:33 <dustymabe> rbowen: i've got a doozy
20:48:43 <roshi> http://ur1.ca/iaezo
20:49:16 <dustymabe> roshi: weird.. is that from earlier of is that a recent message?
20:49:27 <roshi> recent
20:49:48 <rbowen> dustymabe: What's up?
20:49:48 <dustymabe> can you confirm that every time you start an instance a new message comes in the log?
20:50:00 <dustymabe> so I have in a way hit https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1364828
20:50:07 <roshi> I'm currently using 89% of my memory - so I think it's finding no nodes that can support the image
20:50:09 <dustymabe> ^^ rbowen
20:51:59 <roshi> still getting no valid host found
20:52:07 <roshi> I think this machine is too weak for it
20:52:41 <dustymabe> rbowen: have you run into something similar before?
20:52:53 <dustymabe> I patched /usr/lib/python2.7/site-packages/nova/network/linux_net.py and it worked
20:55:55 <rbowen> No, I haven't seen this. I was kind of hoping someone else would pipe up.
20:56:13 <dustymabe> rbowen: ok. where do I start on writing this up?
20:57:56 <dustymabe> ironically enough i went through a gamut of random names for test instances and when I finally got everything to work the instance name I had chosen was 'poop'
20:58:33 <rbowen> heh
20:59:06 <rbowen> I guess open a BZ on it. Pointing to the launchpad one.
20:59:24 <dustymabe> rbowen: will do
21:02:07 <dustymabe> rbowen: since it is technically an issue in the iproute package where do I open it?
21:02:31 <dustymabe> how do we track that openstack is observing an issue? by just linking to it from the workarounds page?
21:03:59 <rbowen> I'm not sure ... it seems that if the problem is how it affects RDO, open it in RDO, so that we can track the upstream stuff, and know when/how it gets fixed?
21:04:42 <dustymabe> rbowen: ok. so open against rdo and a record will get propagated to the iproute people?
21:05:25 <rbowen> I guess, ideally, we also open a ticket in iproute's ticket tracking, and link them to each other.
21:06:08 <rbowen> I tend to want to err on the side of too many tickets rather than too few. Someone can always close it.
21:08:36 <arif-ali> do I need to do anything specific to get the tenant network to be vlan
21:08:55 <dustymabe> rbowen: I will do this later and update the workarounds page
21:09:25 <arif-ali> for some reason I can't get to the network on the nova nodes
21:14:52 <dustymabe> rbowen: if you are interested here is the workaround (I'll add to page later)
21:15:07 <rbowen> ok, good deal
21:15:28 <dustymabe> http://ur1.ca/iaf1w
21:15:59 <dustymabe> rbowen: this was fun.. hope to do it again sometime
21:16:21 <rbowen> well, you're welcome to drop by any day. we're always here. :-)
21:16:52 <dustymabe> rbowen: i'm usually lurking but can't really participate much because of dayjob
21:17:03 <rbowen> Yeah.
21:17:07 <rbowen> Thanks so much for your help today
21:17:12 <dustymabe> rbowen: no worries
21:17:46 <dustymabe> rbowen: i'll end up concentrating on dev/testing open source software in my dayjob at some point
21:17:59 <dustymabe> rbowen: thank you
21:20:21 <number80> dustymabe: if we met afk, reminds me to pay you a drink :)
21:20:28 <number80> s/met/meet/
21:22:44 * dustymabe gladly accepts drinks
21:30:46 <number80> I gladly pay drinks :)
22:22:05 <arif-ali> Has anyone seen a bug where you are not able to remove the floating IPs from horizon
22:24:20 <arif-ali> http://snag.gy/GpBWX.jpg
01:47:03 <arif-ali> trying to delete a volume from horizon, I have some errors, such as "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 24: ordinal not in range(128)"
05:20:56 <avladu> hello, while testing cinder on Cento7
05:21:08 <avladu> I get an error when I try to delete a volume
05:21:09 <avladu> https://bugzilla.redhat.com/show_bug.cgi?id=1149064
05:26:29 <avladu> anyone with same issue?
05:31:36 <karimb> avladu, could be a wrong password in your conf ?
05:32:04 <karimb> ascii' codec can't decode byte 0xe2 in position 24: ordinal not in range(128) suggests a non ascii character somewhere
05:32:18 <karimb> either in your conf or maybe the name of the volume you tried to create
05:32:32 <avladu> I ll take a look
06:39:17 <avladu> this bug https://bugzilla.redhat.com/show_bug.cgi?id=1149064, which appears in the current rdo juno, has been already solved in master, by this commit: https://github.com/openstack/cinder/commit/cb5bcca025f9a34ecb9bc72b8fe570e6a63e442d
06:52:47 <markmc> avladu, thanks! you should mention that in the bug report
06:53:22 <markmc> avladu, 2014.2.rc1 release of cinder was tagged on wednesday, so I assume it will show up in RDO Juno very soon
06:53:34 <markmc> avladu, 2014.2.rc1 does contain that commit
06:53:38 <avladu> I already did
06:53:53 <avladu> but I haven't closed it yet
06:54:00 <markmc> avladu, ah, great - thanks; wasn't there when I looked
06:54:10 <markmc> yep, you're right not to close it until it appears in the repos
08:32:46 <arif-ali> avladu, markmc, I saw the cinder issue last night as well
12:53:42 <rcb_afk> #endmeeting
12:54:34 <rcb_afk> Hmm. Did someone beat me to it?
12:56:23 <rbowen> #endmeeting