11:24:50 #startmeeting 11:24:50 Meeting started Tue Jan 7 11:24:50 2014 UTC. The chair is kashyap. Information about MeetBot at http://wiki.debian.org/MeetBot. 11:24:50 Useful Commands: #action #agreed #halp #info #idea #link #topic. 11:24:57 #meetingname RDO-test-day-JAN2014 11:24:57 The meeting name has been set to 'rdo-test-day-jan2014' 11:25:11 panda: why would it need internet connection in order to install it locally? 11:25:30 giulivo, yep that's correct 11:27:15 mbourvin: I installed with packstack (all in one) yesterday - hit some workarounds along the way - but it was quicker. Today I am doing the same install over again and it is taking a *long* time 11:27:27 ie: it's not done yet 11:28:17 rlandy: ok thanks, I'll wait a bit more 11:28:24 pixelb: can you check on : yum install http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm' 11:28:46 pixelb: it was just one occurrence, testing now: https://github.com/redhat-openstack/astapor/pull/95 11:29:44 mbourvin: yay - install just finished 11:30:16 ndipanov, However: after restart of OpenStack Compute service, still I don't see Compute node in $ nova-manage service list, and that's the fresh api.log : http://paste.fedoraproject.org/66362/09410513 11:30:24 rlandy: good for you! mine is still running :( 11:30:28 kashyap: jistr: does it work for you ? ' yum install http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm ' 11:30:53 and compute.log kashyap ? 11:31:11 ohochman, I used: http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-1.noarch.rpm 11:31:29 kashyap: is it the same ? 11:31:40 ndipanov, Libvirt traces: 11:31:41 2012-12-10 21:57:27.580 1356 TRACE nova.openstack.common.threadgroup File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 76, in tworker 11:31:41 2012-12-10 21:57:27.580 1356 TRACE nova.openstack.common.threadgroup rv = meth(*args,**kwargs) 11:31:42 2012-12-10 21:57:27.580 1356 TRACE nova.openstack.common.threadgroup File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3622, in baselineCPU 11:31:42 2012-12-10 21:57:27.580 1356 TRACE nova.openstack.common.threadgroup if ret is None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self) 11:31:45 2012-12-10 21:57:27.580 1356 TRACE nova.openstack.common.threadgroup libvirtError: internal error: CPU feature `avx' specified more than once 11:31:48 2012-12-10 21:57:27.580 1356 TRACE nova.openstack.common.threadgroup 11:31:50 * kashyap uses pastebin. Sorry. 11:32:26 anyone managed to get ovs running? 11:32:45 ohochman, I haven't checked, but you can just do a "$ less foo.rpm" & can check the changelog, it should tell you. 11:33:00 ukalifon: yum install http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm 11:34:01 ohochman: the command did work for me, although i didn't start installing anything else after that, i'm on the package array issue now 11:34:17 mbourvin: on the upside, it seems to be working now - my previous install was a just an error-generator 11:34:44 nmagnezi, I have it running, : http://paste.openstack.org/show/60611/ ; Also, I stumbled across: https://bugzilla.redhat.com/show_bug.cgi?id=1049235 11:34:59 (for some definitions of "running" :-) ) 11:36:02 ohochman: 'yum install http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm' works for me so far 11:36:45 was anyone able to get external network connectivity? (ping 8.8.8.8) 11:37:26 or neutron-openvswitch-agent to come alive? 
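A minimal sketch of the install flow being exercised above (the release RPM quoted in the channel, then a packstack all-in-one run); the host, the answers, and the final service check are assumptions based on what rlandy and kashyap describe:

    # Enable the RDO Icehouse test-day repository, then install packstack
    # and run an all-in-one deployment on a fresh RHEL 6.5 / Fedora 20 host.
    sudo yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
    sudo yum install -y openstack-packstack
    packstack --allinone

    # Afterwards, verify the compute node registered itself (the check kashyap
    # is running above when the node does not show up).
    sudo nova-manage service list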
11:37:48 I'm getting: abrtd: Package 'openstack-neutron-openvswitch' isn't signed with proper key 11:38:49 ndipanov, The compute service restarts successfully (at-least systemd reports); however, there's stack traces in compute.log -- http://paste.fedoraproject.org/66364/38909468. I'll write a proper report 11:39:41 rlandy: thanks It fixed for me after I've done : 'yum --disablerepo=* --enablerepo=epel clean metadat' 11:39:45 kashyap, nice 11:39:48 very nice 11:40:01 ndipanov, I can't parse it. Is it sarcasm? :-) 11:40:09 kashyap, yes 11:40:31 I like the blunt honesty! 11:41:00 jistr: I'm testing the new puppet-3.4.2-1.el6.noarch.rpm and will let you know where it will stuck . 11:43:23 need link to download fedora20 11:44:04 hi ohochman 11:44:11 how did it go with foreman & rhel65 ? 11:44:19 I'm trying the same on centos65 :) 11:44:36 majopela: not too good so far. 11:45:23 nlevinki: check out this: http://download.eng.brq.redhat.com/pub/fedora/linux/releases/20/Fedora/x86_64/iso/ 11:45:57 ohochman, I saw this note too late: --> before: running foreman-server.sh / foreman-client.sh --> 'yum install -y puppet-3.3.2 puppet-server-3.3.2' 11:46:06 good that I made an snapshot ':) 11:46:32 majopela: keep me up-to-date. On my side (foreman on rhel6.5) There are two bugs that I cannot workaround so far : 11:46:33 #1047353 [RDO]: Foreman-server installation failed cause: undefined method `mode=' for # majopela: you can try with this puppet version and let me know if it works for you. 11:48:15 right now it's running with 3.4.2 centos default... 11:48:17 I suppose it will fail 11:48:28 oh no 11:48:32 it worked :? :) 11:49:03 majopela: it did? I'm doing it myself right now. 11:49:09 with 3.4.2 11:49:23 it did, for the server side 11:49:26 let's see the clients.. 11:49:35 nlevinki, Live desktop ISO - http://download.fedoraproject.org/pub/fedora/linux/releases/20/Live/x86_64/Fedora-Live-Desktop-x86_64-20-1.iso 11:49:53 majopela: good news 11:50:20 nlevinki, If you need a Fedora 20 (raw) cloud image -- http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.raw.xz 11:50:58 thanks 11:51:54 majopela: yhep, it looks like the foreman-server-side indeed works on 6.5 with puppet 3.4.2 11:52:16 majopela: let's check the clients. 11:52:23 I'm lucky today :) 11:52:37 majopela: which host group are you trying ? 11:52:48 mbourvin, http://openstack.redhat.com/Docs/Storage 11:52:49 none yet, I didn't get to this part 11:52:54 I started this morning :) 11:53:17 jistr, cool. I'll build a new openstack-foreman-installer with that patch 11:54:52 kashyap, Your avx issue reminds me of: https://review.openstack.org/#/c/61310/ though that's lxc related 11:55:38 pixelb, avx? 11:55:57 libvirtError: internal error: CPU feature `avx' specified more than once 11:56:29 Ah, right. This is the latest I see in my Compute log: http://paste.fedoraproject.org/66362/09410513 11:57:47 pixelb, Thanks for the review link, I'll see if I can get to root-cause of this issue. 
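The "isn't signed with proper key" error at the top of this exchange was cleared by flushing stale repository metadata; a sketch of that workaround with the repo names used in the channel (the quoting around '*' is only to keep the shell from globbing it):

    # Drop cached metadata for the repo that served the stale package, then retry.
    yum --disablerepo='*' --enablerepo=epel clean metadata
    # ...or, for the RDO repo itself (the variant used later in the log
    # for the same symptom):
    yum --disablerepo='*' --enablerepo=openstack-icehouse clean metadata
    yum install openstack-neutron-openvswitch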
11:59:45 And, yes, I still see the 'avx' trace: 11:59:45 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup libvirtError: internal error: CPU feature `avx' specified more than once 12:00:32 I bet danpb can point the fix top off his head, he knows a boat load of stuff on CPU baseline aspects in libvirt/qemu 12:01:22 pixelb, This looks legit too, filing a report to track it 12:01:40 pixelb: just fyi - the patch fixes the issue, puppet manifests compile fine with Puppet 3.4.2 12:01:57 ohochman: ^ https://github.com/redhat-openstack/astapor/pull/95 12:01:59 jistr, OK I notice the patched changed. I'll take the latests 12:02:28 pixelb: yeah there should be no practical difference, but it's probably better to be in sync 12:02:51 * pixelb learned some ruby today 12:03:09 :-) pixelb 12:04:59 libvirtError: internal error: CPU feature `rdtscp' specified more than once 12:06:21 http://www.fpaste.org/66374/90963731/ 12:07:00 jistr: will we get new packstack-modules-puppet soon ? 12:07:13 jistr: rhat contain this fix https://github.com/redhat-openstack/astapor/pull/95 12:07:29 ohochman, Error: Could not request certificate: No route to host - connect(2) 12:07:35 hmm, but they can ping each other.. 12:07:56 majopela: is it when you run foreman_client.sh ? 12:09:31 majopela: check that you can ping them by FQDN 12:09:43 majopela: are you using VMs ? 12:10:27 majopela: I've managed to pass the foreman_client.sh stage with rhel6.5 . now trying to deploy according the host groups. 12:10:40 grrr, how long does PackageKit hold onto the yum lock for in a fresh f20 install? 12:10:55 ohochman: yeah i think pixelb is doing a build with the fix included (it's not in packstack-modules-puppet, but in openstack-foreman-installer) 12:11:06 ... longer than a few mins it seems 12:11:17 * eglynn taps fingers impatiently ... 12:11:22 jistr: that means that I will have to install the foreman-server again :( 12:11:32 yes, vms ':) 12:11:51 ohochman: well i think that shouldn't be necessary 12:12:08 majopela: you'll have to check your networking . I think that jistr wrote the guide for how to make it work with VM's. 12:12:42 majopela: but check their FQDN first 12:13:25 ohochman, it's my iptables 12:13:36 jistr: any way to get this fixed without waiting for pixelb to create the new openstack-foreman-installer rpm 12:13:37 the foreman hosts seems quite locked down only for ssh 12:13:40 majopela: oh 12:13:51 ohochman, 2 minutes 12:14:03 pixelb: Ok /me waiting then.. 12:14:12 ohochman, that doesn't happen on RHEL65? 12:14:23 I'm not sure if centos introduced any difference when building their distro 12:14:29 ohochman: you can just apply these few lines of change https://github.com/jistr/astapor/commit/a3d554d6c2a6039f60984432bf94acadf606d7c7 12:14:53 afazekas ping 12:14:59 btw, lunch!! :-) 12:15:03 see you later 12:15:18 majopela: :) 12:15:34 majopela: I set the following iptables roles on the foreman-server side : 12:15:38 http://pastebin.com/qQXgL7hF 12:15:40 majopela: no route to host indicates iptables is blocking traffic, probably port 8140/tcp on the foreman server 12:15:51 ^ ohochman's on it :) 12:16:13 *rules 12:17:10 yfried: pong 12:19:37 thanks Dominic :) 12:19:45 yes, it matches with what I found 12:21:53 ohochman, majopela jistr updated openstack-foreman-installer now in RDO 12:22:07 pixelb: thanks! 12:24:21 afazekas you are defining the repo in your yum init and then install rdo? 12:26:27 mbourvin, thank you for the bug report. could you please add the status of openstack-status? 
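For the "Could not request certificate: No route to host" failure above, a quick check along the lines Dominic suggests: confirm the client can resolve and reach the Foreman server, and that 8140/tcp is open on it. The FQDN is a placeholder and the iptables rule is an assumption about how the port was being blocked:

    # On the client: make sure the Foreman/puppetmaster host resolves and answers.
    ping -c 3 foreman.example.com      # placeholder FQDN

    # On the Foreman server: allow the puppetmaster port through iptables.
    iptables -I INPUT -p tcp --dport 8140 -j ACCEPT
    service iptables save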
12:26:54 pixelb: thanks 12:27:04 mrunge: what do you mean by openstack-status? 12:27:05 pixelb: starting over... 12:27:18 majopela: ^^ 12:27:24 mbourvin, issue the command openstack-status 12:28:09 I added rdo-ichouse to the usual packstack install script, it created a repo entry like this: http://www.fpaste.org/66379/09698413/ 12:28:10 mbourvin, it would help, if you could check nova.api log for issues 12:29:08 mbourvin, neutron-openvswitch-agent: dead does not look good for me 12:29:29 yfried: ping 12:29:41 ohochman: pong 12:29:56 yfried: you and ofer have epel enabled on your setups/ 12:29:58 ? 12:30:05 ohochman: I do 12:30:10 ohochman: don't know about ofer 12:30:17 ohochman: did you solve the issue? 12:30:17 kashyap: are you using the updates-testing repo with f20 ? 12:30:30 and you having this problem Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-keystone' returned 1: Error: Package: python-dogpile-cache-0.5.0-1.el6.noarch (openstack-icehouse) 12:30:37 yfried: ?^^ 12:30:37 mrunge: neutron-openvswitch-agent dead but pid file exists 12:30:45 mrunge: how come it's still working 12:30:46 ? 12:31:16 mbourvin, I can't say anything about neutron here 12:31:41 mrunge: lol 12:32:03 mbourvin, the api logs would help, definitely 12:32:12 yfried: I'm not. Ofer did 12:32:23 yfried: Ok 12:32:53 mbourvin, when trying to show the LaunchInstance dialog, it collects quotas first (e.g) 12:33:05 pixelb, could you confirm that novnc is broken? 12:33:13 pixelb, just go to console in horizon 12:33:37 ndipanov, not in a position to do that at present 12:33:48 pixelb, k 12:33:51 kashyap, ^ 12:38:22 mbourvin, the neutron-openvswitch-agent issue is already logged. You need to `yum install python-psutil` before starting that service 12:39:46 That is a temp workaround of course until the bug is fixed. kashyap have to logged that bug and workaround on RDO wiki? 12:39:55 http://www.fpaste.org/66388/90983821/ 12:42:38 I see similar errors in the n-cpu logs 12:43:13 afazekas, kashyap had a very similar issue, but with "avx" rather than "rdtscp". Something dodgy is going on with these cpu flags 12:44:28 As I remember the baseLine related code can be commented out 12:49:04 tshefi giulivo i'm back, was it the same bug from ystday? 12:49:34 Dafna, nope, icehouse is setting that appropriately 12:50:08 giulivo: what was it? iptables? :) 12:50:23 Dafna, giulivo , but the name of qpid server was blank 12:50:29 no the parameter was blanked but on my deployment, using same version of packstack, it was set 12:50:43 so let us know if it works for you 12:51:14 and see if we can find the root cause, for now I moved on a different issue with the emc backend driver 12:51:16 giulivo: I am configuring gluster as my glance backend but will try to create an image right after... 12:51:49 Dafna, yeah just check if in glance-api.conf you have qpid_hostname set 12:52:26 tshefi can you ping flavio? fpercoco is his nick and he should be on rhev-dev 12:53:54 Dafna, sec, I'm not sure how to reproduce this first 12:54:09 that's why I'm investigating it a bit 12:54:25 it is eventually an issue with packstack not glance 12:55:05 giulivo: lets find what the issue is first and than worry about reproducing it... 12:55:23 yeah so how is qpid_hostname for your glance-api.conf ? 13:01:21 kashyap: http://www.fpaste.org/66395/13890996/ 13:02:31 yrabl: ping 13:07:36 giulivo: did we end up opening the bug on /etc/cinder/shares.conf being created as root:root? 13:09:27 no I haven't 13:09:30 Dafna, ^^ 13:10:13 giulivo: cool. 
it append to me and yrabl - yogev, open ok? 13:10:40 Dafna, giulivo cook 13:10:42 cool 13:12:40 giulivo: cinder.service ProgrammingError: (ProgrammingError) (1146, "Table 'cinder.services' doesn't exist") 13:12:51 giulivo: that is after I fixed the permission issue... 13:13:14 wow :) 13:14:06 Dafna, but it doesn't seem to be related to gluster, you should have that table 13:14:17 or any driver won't work 13:14:18 giulivo: this is odd... 13:14:36 panda, rlandy how are the installs going 13:14:43 sql_connection=mysql://cinder:a597d1d2bb9245da@10.35.xx.xx/cinder 13:14:49 pixelb: do you know why I'm getting this "GPG key retrieval failed: [Errno 14] Could not open/read file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs" 13:14:59 giulivo: it should connect to the correct machine... 13:15:21 ohochman, that key should be in the rdo-release.rpm Have you that installed? 13:15:30 Dafna, yeah but apparently it doesn't find the "services" table in the "cinder" db 13:15:35 pixelb: right. 13:15:38 so it is connecting, but the db seems broken 13:15:41 pixelb: thanks 13:15:53 giulivo: yeah, I'm logging in to the db now to check 13:16:50 weshay: mixed results 13:17:11 partly cloudy 13:17:11 giulivo: yaykes... 13:17:15 giulivo: mysql> use cinder; 13:17:15 Database changed 13:17:15 mysql> show tables; 13:17:15 Empty set (0.00 sec) 13:17:34 Dafna, I think something went wrong during install though 13:17:37 yeah ... install did a lot better 13:17:48 no workarounds required although process took a lot longer 13:18:02 giulivo: I think so too... 13:18:23 giulivo: but all was reported as ok 13:18:32 are you trying this on rhel6.5? 13:18:41 yrabl: can you please check your db? 13:18:48 giulivo: of course 13:19:09 giulivo: but I think I know what happened... and it's a bug... 13:19:39 weshay: hit this error when running an instance ... can see from the comments that packstack does not setup all the config params that may be needed 13:19:44 https://bugzilla.redhat.com/show_bug.cgi?id=1020954 13:20:08 Dafna, mine is ok 13:20:10 giulivo: however, I am not sure why it only happened to cinder... 13:20:16 yrabl: thanks 13:20:28 yrabl: you are using the ice-house pacakges right? 13:20:49 Dafna, yes. I was briefed :) 13:20:50 Dafna, I have it installed too on rhel and the db is okay 13:21:07 yrabl: :) 13:21:25 Dafna, and the db is on a different server :) 13:21:49 weshay: RHEL 6.5 distributed install OK, working on ML2 plugins: installs fine (workarounded psutils bug) now creating some network, subnet ... 13:21:55 yrabl: thanks. 13:22:12 bah.. you have to hate it when those bugs are just closed.. should be reassigned if the devel thinks its a packstack issue 13:22:15 giulivo: I think its one of those things that can only happen to me... 13:22:43 giulivo: still a bug... 13:22:48 panda, cool.. /me checks on the multinode job test results 13:23:06 guess it's a design choice ... either way, that was my "get yourself acquainted' install .. moving on the to F20 one I actually signed up for noe 13:23:07 now 13:24:01 giulivo: I just re-installed my controller and ran packstack... my guess is that packstack assumed that tables exists... I am just not sure why it did not do the same for glance 13:24:24 giulivo: actually... it did... 13:25:33 rlandy, hi :) I got an issue with mysqld on f20 which I'm uncertain about; it is not the problem mentioned in the workaround, do you plan to install via packstack or foreman? 13:26:22 weshay: how do I publish tempest results for ML2 ? 
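A sketch of how the empty cinder schema above can be inspected and, if it really has no tables, rebuilt. The database host and password are whatever packstack wrote into /etc/cinder/cinder.conf, and "cinder-manage db sync" is the repair step suggested later in the log:

    # Inspect the cinder database directly; credentials come from the
    # sql_connection line in /etc/cinder/cinder.conf
    # (mysql://cinder:<password>@<db-host>/cinder).
    mysql -u cinder -p -h <db-host> -e 'USE cinder; SHOW TABLES;'

    # If the table list is empty, recreate the schema and restart the services.
    cinder-manage db sync
    for svc in api scheduler volume; do
        service openstack-cinder-$svc restart
    done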
13:27:26 panda, I think fpaste may be an option 13:27:42 giulivo: hi ... long time .. how are you? I am going to follow the Using_GRE_Tenant_Networks - so packstack 13:30:35 Dafna, did you tried to create a volume from an image? 13:31:47 yrabl: no. I did not have a db created :) 13:32:13 RHEL 6.5 works for me without workarounds. 13:33:55 majopela: jistr pixelb : good news - foreman in rdo (!) Using the: latest & greatest: - openstack-foreman-installer noarch 1.0.1-2.el6 + puppet-3.4.2-1.el6.noarch.rpm , the first scenario: "2 Node install with Nova Networking" - works good (tested with : rhel6.5). 13:34:10 ohochman, excellent 13:34:22 ohochman: cool :) 13:34:23 fedora 20 works for me with the mysql to mariadb packstack workaround 13:34:36 :-D 13:34:47 ohochman, coool! :D 13:35:10 I had a meeting, and in just back into it :) 13:35:17 pixelb: jistr majopela: I'll try the neutron host groups now. 13:40:45 any of you guys seeing this? http://paste.openstack.org/show/60634/ 13:41:11 flaper87: yes, you need to edit a file 13:41:13 sec 13:41:28 flaper87: https://bugzilla.redhat.com/show_bug.cgi?id=1012786#c4 13:41:35 I didn't find anything in the workarounds page 13:41:37 lemme add that 13:41:51 flaper87, thanks for adding that 13:42:04 Workarounds page is convenient 13:42:34 jruzicka: if you give me $1000 I promisse I wont make something up 13:42:43 * flaper87 giggles 13:43:27 flaper87, hmm, I can live with that... 13:49:08 http://openstack.redhat.com/Workarounds#Could_not_enable_mysqld 13:49:11 there you go 13:49:50 pixelb, looks like we have a wrong version of websockify in the repo 13:50:07 ndipanov, checking 13:50:17 4.1 13:50:20 and we need 5.1 13:50:32 pixelb, checking fedora now 13:50:54 kwhitney: hi 13:51:22 ndipanov, I've 0.5.1 in el6. So f20 needs an update ? 13:51:43 likely pixelb will do it now 13:51:49 kwhitney: if you're testing ceilometer on RDO/icehouse, note the following issue https://bugzilla.redhat.com/1049369 13:52:01 will you handle the repo updates (how does it go now since jruzicka was working on it) 13:52:19 eglynn: OK will check 13:52:33 kwhitney: ... and the corresponding workaround: http://openstack.redhat.com/Workarounds#ceilometer:_notifications_from_openstack_services_not_processed 13:52:45 pixelb, ^ ? 13:53:00 ndipanov, will do 13:53:10 pixelb, thanks 13:54:37 ndipanov, Also not one should be using updates-testing really, which does contain the right version 13:55:18 pixelb: the neuron controller deployment ended with the client in "E" status 13:55:33 but I don;t see any errors in the puppet agent outputs 13:55:33 E for Excellent? 13:55:40 Yes :D 13:57:24 morning 13:57:26 pixelb: I had this tinny error : http://pastebin.com/8ErMErTf (from /var/log/messages 13:57:47 how come the rdo test day isnt on planet fedora? 13:58:03 pixelb: not sure that's the reason for the "E" 13:59:37 Egyptian[Laptop], fair point. It has diverged a bit from Fedora test days, but it would have been useful to mention it there 14:01:56 eglynn: I don't see the agent-notifier in "chkconfig" list of auto-start services. SHouldn't it be there as well? 14:03:42 kwhitney: yes, as per the usual pattern for required services, i.e. ... service foobar start ; chkconfig foobar on 14:04:03 kwhitney: (so both steps are missing for the new notification agent) 14:04:32 kwhitney: in the BZ I didn't explicitly mention chkconfig (took it as implicit) 14:04:47 OK. I wondered because that was not mentioned in the work-around or the bug. 
Just needing to start a service is different than not being listed to auto start 14:04:49 kwhitney: ... so feel free to append a comment to the bug report 14:05:12 eglynn: Thx. Will do. 14:05:26 giulivo: ok ... I have a mysql error on F20 14:05:48 pasting error for you to review 14:05:54 rlandy, so this is probably the one mentioned in the workaround 14:06:07 http://openstack.redhat.com/Workarounds_2014_01 14:06:11 ndipanov, pixelb: /me went out for a walk (weather was too good), now reading the scroll 14:06:37 rlandy, if that is the case, try to go trough the suggested steps and launch again packstack using the *same* answer file 14:06:38 kwhitney: thank you sir! 14:06:47 kashyap, I fixed my issue so dissregard me 14:07:42 ndipanov, Ok. And, I see that the CPU flags issue is legit. . . 14:07:50 pixelb: what are you saying about the mongodb error (during neutron controller deployment) ? 14:08:00 giulivo: not the one in the workaround but the one in the BZ .. yeah doing that 14:08:14 pixelb: http://pastebin.com/8ErMErTf 14:08:26 ohochman, that looks like an epel access issue? can you `yum install v8` on that node manually? 14:08:30 ohochman, pixelb 14:08:34 it fails for me on the clients 14:08:39 kashyap, bug? 14:08:39 with the string / array bug 14:08:48 majopela: with nova-network ? 14:08:48 ndipanov, On its way 14:08:52 neutron 14:09:03 kashyap, thanks!! 14:09:45 majopela: can you check the puppet-modules version on your foreman-server ? 14:10:04 ohochman, how do I do that? 14:10:19 majopela: should be :packstack-modules-puppet-2013.2.1-0.27.dev936.el6.noarch 14:10:29 rpm -qa | grep packstack-modules-puppet 14:10:39 rpm -q packstack-modules-puppet 14:10:40 oh, ok ok, the rpms 14:10:54 I was thinking on querying puppet sorry, I need more coffee :) 14:11:05 packstack-modules-puppet-2013.2.1-0.27.dev936.el6.noarch 14:11:58 majopela: have you started the foreman-server installation with this version ? 14:11:58 majopela, note also that openstack-foreman-installer needs to be 1.0.1-2 14:12:06 and that ^^ 14:12:10 hmm rechecking 14:12:30 openstack-foreman-installer-1.0.1-1.el6.noarch 14:12:31 foreman-installer-1.3.1-1.el6.noarch 14:12:38 I'm outdated! 14:12:39 :-) 14:12:42 |'m using ^^ 14:12:44 foreman-installer-1.3.1-1.el6.noarch 14:13:03 the foreman installer seems fine 14:13:24 ohochman, foreman-installer hasn't been updated. majopela you need to `yum install openstack-foreman-installer` to get the latest 14:13:25 majopela: seems - but it's not ! (in this version) 14:13:43 btw is it a feature or a bug that I don't get the demo rc file created in my home dir after packstack? 14:13:48 mmagr, ^ 14:13:59 Package openstack-foreman-installer-1.0.1-1.el6.noarch already installed and latest version 14:14:09 weird... caching? 14:14:24 majopela, yum --disablerepo=* --enablerepo=openstack-icehouse clean metadata 14:14:44 majopela: as pixelb said --> you need to have : openstack-foreman-installer-1.0.1-2.el6.noarch 14:14:48 majopela, actually different repo name, so 14:15:08 got it updated :-) 14:15:17 it won't do it ^ 14:15:23 you'll have to reinstall 14:15:50 pixelb: regarding the neutron .. 14:15:59 ohochman, nice work on the first foreman scenario! iirc there might be one outstanding packstack-modules issues that may require a manual tweak. lemme see if that pastebin is what I"m thinking of... 14:16:05 hmm, I need to reinstall? :) 14:16:17 ok, back to previous snapshot ... 
sniff :'( :) 14:16:28 majopela: indeed :D 14:17:30 morazi: there's a problem I cannot nail with the neutron hostgroup, It fails to install mongodb ? - http://pastebin.com/8ErMErTf 14:18:02 morazi: not sure yet what's the issue. but the neutron controller is in "E" after deployment. 14:18:14 ndipanov, it depends ... it is creating rc files in homedir only in all-in-one installations 14:18:46 pixelb: v8 can be installed manually. 14:18:48 mmagr, yeah - I did a allinone that failed but reused the answer file 14:18:53 mmagr, so that's expected then 14:19:17 ohochman, I don't know what puppet would be doing differently TBH 14:19:22 mmagr, ? 14:19:22 ndipanov, pixelb: There we go - https://bugzilla.redhat.com/show_bug.cgi?id=1049391 14:19:37 ndipanov, Logs coming in attachment 14:19:57 ohochman, maybe transient net error? 14:19:59 pixelb: maybe it just got temporary hang - No more mirrors to try. 14:20:05 kashyap, thanks 14:20:11 pixelb: yheppp 14:20:40 pixelb: I wonder if running the puppet agent again will fix it. 14:23:26 ohochman, probably worth giving it a shot to re-run and see. 14:24:38 ohochman, out of curiosity, what did you list what repos you have enabled? 14:25:14 morazi: epel.repo foreman.repo puppetlabs.repo rdo-release.repo rhel6_update.repo rhel-optional.repo rhel-server.repo rhel-source.repo rhel-updates.repo 14:25:18 ukalifon: hi .. how far did you get with the 'allinone' rhel6.5 test? I hit some errors on running on instance 14:25:54 aio rhel6.5 should work w/ I think three workarounds 14:26:17 flaper87, You edited the "havana" workarounds page I think. Also note there is an existing workaround for the mysqld issue on the test day workarounds page: http://openstack.redhat.com/Workarounds_2014_01 14:26:36 rlandy: I never succeeded in getting an instance to run 14:26:40 Failed to parse /etc/nova/nova.conf (RHEL) 14:26:54 pixelb: mmh, sorry, too many tabs! Let me move that 14:27:13 flaper87, s/move/remove/ possibly? 14:27:24 pixelb: move havana -> icehouse 14:27:53 flaper87, OK if you think the existing icehouse workaround isn't appropriate 14:28:17 weshay, Those three bugs associated with "Failed to parse /etc/nova/nova.conf" should be fixed I think 14:28:32 ok.. will remove the wrkaround 14:29:45 yrabl: ping 14:29:51 pixelb: I kept the existing workaround but added one more step 14:29:54 flaper87: pixelb I'd leave this workaround because that's the way it should be done in packstack, moving service files seems much dirtier 14:29:55 ohochman, I'd be curious to see if it is just a blip and/or what happens if you try to manually yum install mongodb-server on that box 14:29:56 morazi: I"ve added the repo list to : https://etherpad.openstack.org/p/rdo_test_day_jan_2014 14:30:01 flaper87, great thanks 14:30:26 imho :) 14:30:32 vladan: yeah, I kept the old steps because there's an "If you hit this when running packstack..." section 14:30:44 pixelb, Added the psutil note to the Workarounds page. 14:30:45 vladan, Moving service files is to avoid a systemd bug. IMHO I think packstack should be able to use mysqld directly 14:30:54 aha k 14:32:27 pixelb: but if mysqld doesn't exist as a service (which is the case now) it shouldn't try to start it at all 14:32:41 pixelb: which systemd bug are you referring to? 14:33:18 morazi: jistr pixelb : I ran puppet agent second time on the neutron controller (and it failed on keystone token-get ? http://pastebin.com/6FBdqQt1 14:33:52 maybe I'll try neutron-controller again on clean machine 14:34:04 FYI.. 
execute tempest on icehouse: 14:34:06 http://openstack.redhat.com/Testing_IceHouse_using_Tempest#configure_and_run_nosetest_.28_RHEL.2FCentOS_.29 14:34:20 vladan, mariadb tried to provide compat so that one could still support using the mysqld service name. This was done with symlinks. systemd changed recently to have issues with that. The service copy is just to convert a symlink to a real copy. There is no systemd bug yet 14:34:40 giulivo: https://bugzilla.redhat.com/show_bug.cgi?id=1049382 can you please copy the error in the bug comments? 14:34:46 giulivo: second packstack run with the same answer file completed w/o error 14:35:10 pixelb: DO you have any links with more information about the systemd/symlink thing? 14:35:52 ohochman, that seems reasonable. I suspect you got wedged halfway through due to the package blip and it left the data store behind keystone in an inconsistent state (or simply foreman didn't know how to resume against an existing keystone db) 14:35:57 pixelb: I see, thanks for calrifying 14:36:39 larsks, All details I have are in: https://bugzilla.redhat.com/981116 I'll log a systemd bug at some stage 14:36:54 morazi: ok , I'll try it again with clean node 14:37:12 rlandy, great, thanks 14:37:45 pixelb: thanks! 14:37:47 Dafna, yep but that is failing because of an issue with the SMI-S 14:37:56 I'm checking that too 14:38:09 morazi: BTW - I do have many AVC here. 14:38:35 giulivo: not saying it's not, it just makes it easy to search BZ's later when the trace is in the bug 14:38:43 Dafna, I will 14:38:53 giulivo: thanks 14:40:46 ohochman, same as: https://bugzilla.redhat.com/show_bug.cgi?id=1043887 ? 14:40:47 Kashyap: did you tried the http://www.fpaste.org/66395/13890996/ ? 14:41:24 afazekas, No, I was investigating the avx CPU flags bug - https://bugzilla.redhat.com/show_bug.cgi?id=1049391 14:41:37 afazekas, What is this patch for? 14:42:02 kashyap: probbaly a workaround for that bug 14:42:37 "probably" :-) 14:43:39 kashyap, eglynn I've moved your workarounds from the havana to icehouse page in case you're wondering where they went: http://openstack.redhat.com/Workarounds_2014_01 14:44:09 kashyap: my novanet instance had network issue after that, the neutron was already ran out from the ports, and I had to be away, now I am resintaling and testing the workaround 14:44:12 pixelb, Sure, thank you. I'd have done that myself had I noticed this page 14:44:29 afazekas, You using devstack? 14:45:36 afazekas, kashyap I'll add that to the workarounds page 14:45:40 kashyap: https://review.openstack.org/#/c/63647/ 14:46:23 Now with devstack you would need to use the FORCE option and installing some extra packages before starting 14:47:34 BTW it was the n-net issue: http://www.fpaste.org/66420/10603913/ 14:48:08 morazi: it's : avc: denied { read } for pid=26034 comm="ip" path="/proc/25941/status" dev=proc ino=42683 scontext=unconfined_u:system_r:ifconfig_t:s0 tcontext=unconfined_u:system_r:initrc_t:s0 tclass=file 14:48:22 pixelb: ok 14:48:35 morazi: I don't see it in this bug : https://bugzilla.redhat.com/show_bug.cgi?id=1043887 14:48:36 afazekas, Noted; pixelb - thanks 14:50:31 morazi: I think It's becasue the AVCs in this bug are from the foreman-server installation stage and this AVC appears on the foreman-client while deploying neutron-controller. 14:51:07 ohochman, pixelb , (neutron) controller installed :-) 14:51:41 majopela, woot! 14:52:23 kashyap, the workaround for ovs agent (the one with gpgcheck=0) did not work for me. 14:52:31 kashyap, any other ideas? 
:-) 14:54:35 nmagnezi, Are you referring to this? - https://bugzilla.redhat.com/show_bug.cgi?id=1049235 14:55:39 majopela, nice one! 14:55:42 kashyap, actually no, I was reffering to another workaround. but i'll try this one now 14:55:54 majopela: does it's status "A" 14:55:55 ? 14:56:13 on the foreman-server gui 14:56:16 nmagnezi, If it's related to ImportError: No module named psutil ; the above note I mentioned should work 14:56:44 ohochman, yes :) 14:57:02 majopela: Ok :D 14:57:20 kashyap, it does! :) 14:57:27 kashyap, and it worked 14:58:07 morazi: majopela : worked for me too. 14:58:48 Just logging in now. Any Keystone issues cropped up thus far? 14:59:02 * ayoung asssumes test day chatter is happening here? 14:59:10 nmagnezi, Cool. 14:59:44 hmm 14:59:48 on compute node I found this: 15:00:02 majopela: morazi : now we are having the same neutron havana bugs - all neutron services are down + /etc/init.d/neutron-openvswitch-agent cannot be started :D 15:00:12 but that's "NORMAL" 15:00:14 Error: failure: repodata/44b20c52ea5f27ef34b267d98b79cafd52ca9533340d8fb94377506a380f3d3d-filelists.sqlite.bz2 from openstack-icehouse: [Errno 256] No more mirrors to try. 15:00:14 You could try using --skip-broken to work around the problem 15:00:14 You could try running: rpm -Va --nofiles --nodigest 15:00:51 yes, ohochman , what you say is because of python-psutil, right? 15:01:10 majopela: how do you know that 15:01:11 ? 15:01:31 I think I read it in some BZ 15:02:02 well, I don't think so. when I'm tying to : /etc/init.d/neutron-openvswitch-agent start 15:02:17 https://bugzilla.redhat.com/show_bug.cgi?id=1049235 15:02:21 is not that one? 15:02:22 Package 'openstack-neutron-openvswitch' isn't signed with proper key 15:02:38 huh 15:02:38 woops :) 15:03:01 majopela: from F20 ? 15:03:08 no, centos65 15:03:24 ayoung, there was something that hasn't been reported. stack trace was like: http://pastebin.com/6FBdqQt1 (from ohochman about 30 minutes ago) 15:03:36 morazi, OK, looking 15:03:57 ayoung: if you need to check on the machine let me know. 15:04:19 ayoung, but it is an odd-ball -- the foreman install failed for other reasons and on the second run of puppet we run into an auth issue. My presumption is either the keystone datastore was partially set up or that foreman doesn't have enough info to resume properly 15:04:22 ohochman, is there a keystone log? 15:04:32 kashyap: is there a workaround for this ? https://bugzilla.redhat.com/show_bug.cgi?id=1049235 15:04:44 kashyap: happened to me on rhel6.5 15:05:01 morazi, is Keystone running on that machine? 15:05:24 morazi, ayoung FYI -This channel is logged w/ meetbot for today & tomorrow; Later you can retroactively grep for keystone or other keywords from the meeting file. 15:05:31 ohochman, Looking 15:05:52 ohochman, Yes 15:06:02 ohochman, $ yum install python-psutil -y 15:06:14 ayoung: most of it is : WARNING keystone.common.utils [-] Deprecated: v2 API is deprecated as of Icehouse in favor of v3 API and may be removed in K. 15:06:27 ayoung: no errors 15:06:37 kashyap: thanks.. 15:06:38 ohochman, yeah, give me access 15:07:20 I get this error while trying to create a disk that is bigger then 20G: No valid host was found 15:07:29 did anyone see this error before 15:08:34 mbourvin1, Obvious check: probably none of your compute hosts have enough space? 15:08:50 mbourvin1, You can check your Nova scheduler & api logs for more clues. 
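Two of the checks traded above, condensed: the python-psutil workaround for the dead neutron-openvswitch-agent (bug 1049235), and where to look when a boot fails with "No valid host was found". The log path is the default RDO location:

    # Workaround for neutron-openvswitch-agent dying with
    # "ImportError: No module named psutil":
    yum install -y python-psutil
    service neutron-openvswitch-agent restart
    service neutron-openvswitch-agent status

    # "No valid host was found" means the scheduler filtered out every compute
    # node (disk, RAM, state); the reason is recorded here:
    grep -iE 'no valid host|filter' /var/log/nova/scheduler.log | tail -20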
15:08:51 ok the yum --disablerepo=* --enablerepo=openstack-icehouse clean metadata did the trick to get the openstack-nova-compute installing 15:14:27 ohochman, pixelb , compute is up!!, going for the network server 15:14:35 Hey all, is it necessary to make the em1 interface not controlled by network manager when enabling neutron for RDO installs on F20? 15:15:07 I have one box w/nova-network and network-manager works fine, but the box with neutron drops the network and it never comes back 15:15:24 you should disable network manager, right 15:16:11 majopela: Ok, thanks - we should probably mention that in the quickstart then, now that neutron is used by default 15:19:17 strange I do not have any net namspace in neutron install.. 15:20:34 Dominic: morazi: I think I just encountered a problem : I have several hosts associated with neutron-controller host group. When I changed the General Host Group parameters It changed the parameters of all the hosts that were already associated with that host group. 15:20:41 tenant_network_type=local and I have 2 network and router.. 15:21:05 ohochman: if you change a host group parameter then it will affect all hosts with that group, yes 15:21:19 Dominic: moraziIt ran over the values I had for each specific host. 15:21:21 you can override a host group parameter when you edit an individual host though 15:21:39 ayoung, sorry no visiblity on that machine at all. Just saw you pop on and was thinking when I originally saw the stack trace I'd really love someone who knows keystone to have a peak and weigh in on it. ;-) 15:21:56 I do not have missing net ns issue with RHEL 6.5 15:22:12 Dominic: I thought It's a default value when adding host with that host group - but not change already exist hosts. 15:22:29 morazi, I'm getting the moring barrage of questions right now, but I'm looking into it. Keystone seems to be healthy. Need a keystonerc file 15:23:58 Dominic: I think it's too dangerous to allow such operation. 15:24:17 ohochman: you've lost me tbh, I don't understand the problem 15:26:23 afazekas, Hey, were you able to test that patch for the CPU baseline problem/ 15:27:04 Dominic: If you have in foreman 100 computes-nodes managed by 2 controllers and the user change the controller IP on the compute host group parameter it will change the parameters for 100 up running hosts. 15:27:11 I just think it's too dangerous to allow such operation. 15:27:57 Dominic, maybe we need to make sure it is clear the precedence/order of operations. how does the param on a host group get overriden by a value on an individual host. 15:27:57 ohochman: that's a feature.. if you have junior admins, you can create users with restricted permissions so they can't change these settings 15:27:58 pbrady, ohochman , I have the networking node!! woah :) 15:28:54 morazi: sure, docs or something else? They're shown as inherited parameters when creating/editing hosts with an override there. 15:29:57 Dominic, I think I was speaking more to my own ignorance of the details more than anything else. 
;-) I bet it is already documented up somewhere in foreman land 15:30:10 maybe :) 15:30:17 kashyap: Now I am failing on the f20 networking tests 15:30:51 ohochman, yea, the flip is that you would right bespoke scripts to changes things on 100 hosts (or worse yet, assign some poor sucker to ssh into each host and make the change -- and sure fat-finger that value at least once) 15:31:00 ohochman, morazi I need a keystonerc for that machine, or at least admin/password 15:31:47 morazi: Dominic not in case those parameters are being the default for new hosts . 15:32:14 kashyap: http://www.fpaste.org/66439/38910871/ 15:32:28 Dominic: morazi : and btw , I think that in foreman (the full version) you can click select and edit few hosts at once.. 15:32:50 ohochman, ^^ ayoung request was to dig into that keystone problem you were seeing -- is that host still active or did you recycle that to run a different test? 15:32:58 morazi, I'm on ity 15:33:10 ayoung, cool 15:33:14 afazekas, As a side note - there's a similar CPU baseline bug, for LXC, pointed by pixelb here - https://review.openstack.org/#/c/61310/ 15:33:51 giulivo: did you also open a launchpad for the bug? :) 15:33:57 and features != -1 was libvirt bug 15:34:18 afazekas, You have a URL? 15:34:47 ohochman, yea, so it is potentially an order of operations thing perhaps? I suspect how things would have to work under the covers for foreman is that it would apply changes to the host group, and then re-override values for each individual host. 15:34:58 Dafna, nope let me update the bug so you'll know why 15:35:14 I'm looking into the SMI-S , this is basically blocking the tests 15:35:29 afazekas, That tempest n/w test failure - is that a single-node test? I presume so 15:35:35 giulivo: ok. Glad to hear ;) 15:35:42 http://www.redhat.com/archives/libvir-list/2013-November/msg00967.html , https://bugzilla.redhat.com/show_bug.cgi?id=1033039 15:35:48 kashyap: did you try this workaround? 15:35:59 "this"? 15:36:03 ohochman, so the questions/things we should confirm (I think) are -- does the over-ride get re-applied to an individual host? What is the contract around atomicity there? Dominic may know the answers there and can explain if my wild conjecture above ^^ has any basis in reality 15:36:14 kashyap: https://bugzilla.redhat.com/show_bug.cgi?id=1049235 15:36:15 morazi, ohochman the problem might be the endpoint definition. They all have urls like this: http://10.35.161.188:5000/v2.0 but your command was attempting to contact them using 127.0.0.1 15:36:26 kashyap: not a fast copy/past -er 15:36:43 morazi: I would guess that the host-groups parameters are the default for new hosts but changes to the host groups won't effect all hosts already within that host group (which the user might already manually changed parameters for each one of them). 15:37:05 yfried, Yes - that's what fixed that prob. nmagnezi confirmed it too. I noted the root-cause of it in Comment#1 15:37:23 morazi: sorry, I don't really understand 15:37:28 ayoung: the machine IP 10.35.161.188 and the 127.0.0.1 is local host ? 15:37:38 ohochman: changes to a host group will affect all hosts with that group assigned, next time they run Puppet 15:37:44 yfried, kashyap , yup - the ovs agent is up and running 15:37:52 Dominic: very bad IMHO 15:37:57 ohochman, right 15:38:01 http://www.fpaste.org/66440/13891090/ n-net f20 15:38:05 ohochman, morazi: if you override a value on a host then it will pick up that value instead next time Puppet checks in 15:38:14 ayoung: so what's the problem ?? 
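For the mismatch ayoung describes above (endpoints registered as http://10.35.161.188:5000/v2.0 while the client talks to 127.0.0.1), one way to compare the two sides, assuming the usual keystonerc_admin written by the installer:

    # Which auth URL is the client actually using?
    source ~/keystonerc_admin
    echo "$OS_AUTH_URL"

    # Which URLs are registered in the service catalog?
    keystone endpoint-list

    # If they disagree, point OS_AUTH_URL (or re-register the endpoint) at the
    # same address the services were configured with.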
15:38:22 ohochman: it's a feature, you're meant to be able to change configuration for all hosts in a group 15:38:22 ohochman, yea, I can also see it being implemented in that way -- however that would likely orphan things if you say added a new parameter to the host group. All the old individual hosts would would not contain the new param 15:38:36 ohochman, not sure. I need a keystonerc file to move ahead 15:38:44 nmagnezi: I had another issue - after the fix secgroup db was damaged 15:38:53 nmagnezi: or secgroup api 15:39:32 morazi, ohochman: we've loosely talked about versioning these things so you can do more gradual rollouts, but that's not how it works today 15:39:33 yfried: could be because i ran /etc/init.d/neutron-ovs-cleanup accidentally 15:40:07 yfried, i did not encounter this. but i do not focus on sec-groups as much as you do. 15:40:20 * afazekas the command is working when manually running: http://www.fpaste.org/66443/10920413/ 15:41:32 Dominic, struggling with how to rephrase. 15:43:38 Dominic, I think maybe the question is -- if you have a host group with some members and a particular host overrides one of the params and runs puppet we would get the overridden value landing on the first puppet run. If we subsquently update the host group, does the overridden param get erased for all intents and purposes? 15:44:03 morazi: ah ok, no, the host's overridden value will always take precedence over the value in the host group 15:44:05 Dafna, so it looks like cinder is implementing a broken workflow 15:44:18 giulivo: how do you mean? 15:44:36 ohochman, I think fundamentally that's what you are asking and observing that you are losing an overridden value somewhere, correct? 15:44:38 in the way it performs the operation on the SMI-S 15:45:05 this should definitely be open upstream and most probably needs an eye from EMC people 15:45:52 morazi: think of a scenario when user is changing the admin_password parameter in the compute hostgroup , when he have 100 compute-nodes running and register on different controller - if puppet will be trigger his cloud might collapse. 15:46:21 morazi: I woudn't want to manage my cloud with foreman allowing such action . 15:46:40 morazi, ohochman 2014-01-07 17:44:06.386 31855 TRACE keystone.common.wsgi OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '10.35.160.66' (110)") None None 15:46:48 flaper87, Heya, I remember discussing this previously, but I forget. Do you know where can I find the list of arbitrary key/value pairs to be used w/ "glance image-update" --property 15:46:49 so the server is up and running 15:46:56 but Keystone can't talk to it 15:47:03 morazi: you can say the administrator should know better . but really how can he ? 15:48:17 kashyap: Not sure, but http://docs.openstack.org/user-guide/content/glance-image-list.html shows several of them. 15:49:03 ohochman: that's a bug in the puppet manifests then, if they change a property and the service no longer works 15:49:20 larsks, Thanks, so much of docs! I'm looking for qemu guest agent in specific, I can't find it in the above URL 15:50:22 larsks, That page doesn't even list elementary hardware properties for Glance, like: hw_disk_bus, hw_cdrom_bus 15:52:26 f20+rdo has MULTIPLE selinux related issues, one of them causing the root-wrap is not working 15:54:39 Dominic: In openstack you have some key parameters such - mysql_ip, qpid_ip , etc.. 
that by changing them to the entire group belong on the same hostgroup will make problems for sure, not saying collapse the cloud .. because even if it does change the services .conf parameters , the services might still need restart to change the actual configuration. 15:55:05 ohochman: puppet's entirely capable of restarting services 15:56:29 Dominic: I can see why it's consider a "feature" I still think there should be other safe way to change multiple hosts params. 15:58:35 ohochman, morazi OK, so that machine is misconfigured, I think. There is a working ,mysql DB populated on it, but the keystone server is looking for a mysql server on a different IP address 15:58:51 kashyap, hey u there? 15:58:57 ndipanov, Yes. 15:59:19 mysql://keystone:@10.35.160.66/keystone 15:59:53 kashyap, hey - my networking seems busted on fedora... am I missing something obvious? 16:00:13 oblaut: cross tenant passes on rhel! No SecGroup Bug 16:00:31 oblaut: where do I log this? 16:00:38 ndipanov, Ok, several possible reasons. I don't know what all you're using 16:01:21 ndipanov, However, if you'd like me to take a fuller look, you have to provide some info by running these two scripts: 16:01:33 kashyap, hit me 16:01:37 does anyone know where I can find the .src.rpm files for icehouse? 16:01:48 ndipanov, $ wget https://raw.github.com/larsks/neutron-diag/master/gather-network-info https://raw.github.com/larsks/neutron-diag/master/gather-neutron-info 16:01:49 yfried: what do you mean ? is it fixed in RDO ? 16:01:57 I guess it didn't port 16:02:00 don't know 16:02:03 oblaut: ^ 16:02:12 ndipanov, Instructions I noted here: lines 34-40 -- https://etherpad.openstack.org/p/rdo-testday-JAN2014 16:02:30 vnice 16:02:37 yfried: does it mean RDO is not synced with Ice-House ? 16:02:41 ndipanov, I presume you're using Neutron? (Maybe no, you use nova-networking) 16:02:49 oblaut: also reassociate FLIP and dns update works 16:02:54 oblaut: no idea 16:03:20 kashyap, neutron of course 16:03:55 ndipanov, Cool! 16:04:15 oblaut: where should I log this? 16:04:40 ndipanov, That script will generate tar balls, you have to untar & SCP them somewhere. (If you have FAS account, fedorapeople.org would be easy) 16:06:03 ndipanov, Obvious things: $ systemctl | grep neutron -- you see all of them ACTIVE? 16:06:46 kashyap, lol no 16:06:53 :-) 16:06:54 I told you it was something stupid 16:07:02 morazi, I'm done with that machine 16:07:40 ayoung, ya that leads me to believe that either foreman got confused somewhere along the way due to the restart or there is something gnarly going on in the underlying puppet modules 16:07:47 ayoung, thx! 16:08:03 morazi, NP. feel free to point other people at me that have similar problems 16:08:21 ndipanov, Note: if it's something related to psutil, you can take a look at the workaround I mentioned in the Description here: https://bugzilla.redhat.com/show_bug.cgi?id=1049235 16:08:41 ayoung, great, thx! I think we got there by a really loopy means, and think it would be hard to reproduce it but I appreciate the offer 16:08:55 morazi, keystone/auth problems in general. 16:09:12 ayoung, Heya, since you're here, unverified question: is PKI default now in Keystone? 16:09:21 kashyap, it better be. 16:09:23 You tell me. 16:09:35 what does your keystone config file say kashyap ? 16:10:02 Hm, this returns 1 on my minimal setup: $ grep -i pki /etc/keystone/keystone.conf | grep -v ^$ | grep -v ^# 16:10:58 ayoung, I'm not using PKI on my setup. And, this is hand-configured. 
So, I can go to hell & not ask you any questions on this, I guess. 16:11:30 ayoung, My next new setup & all further will be PKI! 16:12:10 kashyap, let me look at what that old machine said 16:13:24 kashyap, so, there are 2 answers. 1st, yes, upstream PKI has been the default for a long time. Second, for RDO, it looks like PKI is the default. Tyhe machine I just looked at has 16:13:34 token_format =PKI 16:13:59 which is, I think, the deprecated way to indicate that the token provider should be the PKI provider.... 16:14:51 kashyap, and , if you look in the [token] section, you will see that there is no explicit value for provider 16:15:21 ayoung, You're right. 16:16:21 ayoung, Last interruption: do we have useage of Mozilla NSS here or is this all OpenSSL/something else? 16:16:33 still OpenSSL for now. We' 16:16:37 re working on it 16:18:09 Cool, just being selfish - so I can dig my memory for NSS related tribal info :-) I recall jdennis is also working on it? Thx for the response 16:19:16 jdennis is who I meant be "we" 16:19:36 :-) 16:25:36 larsks, Found it here - http://docs.openstack.org/trunk/config-reference/content/kvm.html 16:29:06 Hi , what does packstack does when EPEL=n ? 16:29:42 Just guessing: It won't use/add any EPEL repositories 16:45:42 vladan, if you tested Fedora 20 allinone, why not put it http://openstack.redhat.com/TestedSetups_2014_01 ? 16:49:58 pixelb, Can I move this bug to RDO component - https://bugzilla.redhat.com/show_bug.cgi?id=981116 16:50:39 pixelb, But sometimes I like the usefulness of Fedora Updates System, which closes the bug automatically for us once the update goes to stable (comment #30). 16:50:43 kashyap, it really is a Fedora/systemd issue 16:51:31 pixelb, Gah, I'm looking too much at screen. I totally forgot that I filed the bz myself. Ignore me! 16:51:50 I'll log systemd bug anyway 16:53:19 Heh already done: https://bugzilla.redhat.com/1014311 16:57:06 larsks, That particular systemd symlink issue you were asking about is https://bugzilla.redhat.com/1014311 16:57:23 pixelb, Should I add "Depends On" field for 981116 pointing to 1014311? 16:57:32 kashyap, done 16:58:05 Ah, I didn't refresh my browser :-) Thx 17:01:28 ndipanov, Able to progress on Neutron problems? 17:01:37 kashyap, no 17:01:47 still can't get network to work 17:02:08 ndipanov, SELinux permissive? (For now) 17:02:40 Also, there are a couple of issues lurking around (rootwrap SELinux issues, etc). Once I have something more concrete, I'll post you details. 17:03:06 kashyap, yeah 17:03:23 kashyap, hit another bug with volumes 17:03:28 but want to sort this out first 17:04:50 Sure. 17:05:11 Ah. AMQP isn't running. Bah. 17:05:42 AMQP, that is 17:05:58 * DrBacchus squints 17:26:27 kashyap,ndipanov audit log added to the https://bugzilla.redhat.com/show_bug.cgi?id=1049503 17:28:43 afazekas, I saw the same thing when booting with volumes 17:28:51 wel similar 17:29:19 pixelb: Thanks for the link... 17:30:03 afazekas, I saw that, added a comment there. 17:30:20 ndipanov: the selinux policies has multiple issues more than 3 IMHO 17:30:49 kashyap, did you see any neutron issues with selinux? 17:31:25 ndipanov, I don't see anything when I tried to generate a reference policy: $ cat /var/log/audit/audit.log | audit2allow -R 17:31:31 on f20 I saw, on RHEL 6.x it seams to be ok 17:31:47 afazekas, with neutron? 17:33:04 With neutron the first visible thing , you do not have namspaces 17:33:24 almost all ip related command fails with permission denied 17:34:18 afazekas, due to selinux? 
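The SELinux checks being passed around here, condensed into one block for gathering the rootwrap/neutron denials into a report. The sepolgen-ifgen step covers the "could not open interface info" error hit just below; which subpackage ships that tool varies by release, so the package name is an assumption:

    # Summarise the denials in policy form and in human-readable form.
    cat /var/log/audit/audit.log | audit2allow -R
    audit2allow -w -a

    # If audit2allow -R complains about /var/lib/sepolgen/interface_info,
    # regenerate the interface cache (the tool lives in the policycoreutils
    # python/devel subpackage, depending on the distro release):
    yum install -y policycoreutils-python
    sepolgen-ifgen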
17:34:19 ndipanov, Do you see anything with: $ cat /var/log/audit/audit.log | audit2allow -R 17:34:58 ndipanov: rootwrap related avc denials are in the audit.log, so yes 17:35:23 afazekas, yes silly me 17:44:20 afazekas, kashyap are you guys gonna report those 17:45:00 ndipanov, I don't have the SELinux errors handy as of now, I just turned SELinux to Enforcing (from permissive), cleared my logs, & trying to capture these. 17:45:19 kashyap, will do the same 17:45:19 (Permissive should still have captured these things in audit.log, don't know why it didn't) 17:45:29 but I had a ton of them 17:45:52 ndipanov: I am not sure we really want to report all selinux issue individually 17:46:22 $ cat /var/log/audit/audit.log | audit2allow -R 17:46:23 could not open interface info [/var/lib/sepolgen/interface_info] 17:46:25 afazekas, that's why I was asking 17:46:27 * kashyap first tries to resolve that 17:46:43 afazekas, You can report them all together by running the above audit2allow -R command 17:48:10 kashyap: last time when I reported selinux issue the audit2allow was not considered as enough detail 17:49:13 afazekas, I mean, full audit.log is always useful, but just noting my previous experience of resolving an issue w/ it :-) 17:50:23 yes, the audit2allow output is usually more human readable ;-) 17:51:24 :-) 17:57:14 if anyone is seeing strange output with 'cinder rate-limits', this bug (https://bugzilla.redhat.com/show_bug.cgi?id=1037728) is still present 18:04:58 jbernard, I'm not a Cinder guy; but, an obvious question: are you hitting this w/ all latest cinder packages? 18:08:10 kashyap, btw - I had to enable both 80 and 6080 in firewalld by hand to get to horizon/vnc 18:08:28 kashyap, that's a bug right? 18:08:54 ndipanov, I'm afraid, I don't have Horizon in my setup; so I can't confirm. 18:08:58 mrunge, ^^ ? 18:09:28 or jpich; if you have a moment (for the above question) 18:09:32 kashyap, well it's not really a mrunge bug 18:09:52 well it is a bit actually :) 18:11:03 Yeah, just that since they're more in the trenches with these; maybe they (he/jpich) know such stuff top off their heads 18:13:12 pixelb, ya.. the jobs are now completing on their first try of the packstack install.. so the wrkarounds are not used :) 18:13:51 Anyone know if this ceilometer-api failure has an existing bug? 18:13:54 http://fpaste.org/66476/13891181/ 18:14:08 weshay, great stuff. The CI speeded things up hugely for me last night. I had multiple things testing in parallel 18:14:21 * shardy couldn't find one 18:14:24 great!! that is what I want to hear 18:14:40 kashyap: I'm afraid I'm unable to check anything right now, sorry 18:15:09 shardy, not seen that. Do you have python-wsme-0.5b6 installed? 18:16:15 pixelb: No, I this is RDO Havana, installed via F20 repos, so I ended up with python-wsme-0.5b2-2 18:16:29 pixelb: Do I need to enable updates-testing? 18:16:35 kashyap: icehouse-1 18:16:47 shardy, Ah I thought you were on icehouse testday stuff 18:17:09 pixelb: I wanted to test Havana first, then Icehouse on a second machine 18:17:16 kashyap: the 'prettytable' package is too old 18:17:31 kashyap: or, the cinder client is too new, depending on how you look at it 18:18:00 shardy, python-wsme-0.5b5 is in updates-testing, so it would be worth testing with that 18:18:00 pixelb: unfortunately the Havana testing has not gone as smoothly as I'd hoped :( 18:18:40 pixelb: Ok, thanks, will give it a try 18:20:13 jpich, No worries. 18:21:06 giulivo, You have any hints for the bug jbernard is encountering? 
(I see you were part of the bz he referenced) 18:22:32 pixelb: Thanks, that fixed it 18:22:47 shardy, OK cool. That's already sumitted for stable 18:22:57 kashyap, giulivo: if it's possible to bump the version of 'python-prettytable' package from 0.6.1 to ≥ 0.7, that would be ideal (assuming it doesn't break anything else) 18:24:26 pixelb, kashyap I see this iscsiadm: Could not execute operation on all records: no available memory when booting with volume even with permissive on - anyone else seen it? 18:24:30 jbernard, Reason? New upstream release? Or other bug-fix? 18:24:59 ndipanov, not seen it sorry 18:25:12 the current version of 'python-cinderclient' (1.0.7) requires ≥ 0.7 of python-prettytable 18:25:30 sorry, I was on a call 18:25:39 kashyap, ndipanov what was(is the issue? 18:25:50 shardy, A qucik note: If you want to quickly find the latest build available in Koji for a specific package: koji latest-build rawhide python-wsme 18:26:06 mrunge, I had to manually open 80/6080 ports after neutron restart 18:26:11 shardy, Couple related stuff here: http://openstack.redhat.com/DeveloperTips 18:26:22 jbernard, prettytable has prettyterrible backwards compat history, so we'll need to be careful 18:26:25 ndipanov, Haven't seen your issue (yet). 18:26:48 pixelb: im hearing the same thing from a few folks 18:26:56 kashyap: Ok, thanks :) 18:26:58 gmorning -- somewhat new to RDO/Openstack but was going to try to install icehouse after my failed attempt with FC19 andGrizzly 18:27:10 japerry, good luck 18:27:10 jbernard, are you getting a runtime issue? or did you just observe this in requirements.txt? 18:27:22 ndipanov, I have seen so many strange things with neutron on fedora 18:27:31 shardy, Np; just noting them explicitly as people are already buried in a gazillion URLs 18:27:37 mrunge, thanks that makes me feel better 18:27:37 ndipanov, I'm not sure where to start there.... 18:27:47 pixelb: installed with packstack, the current pairing of cinderclient-to-prettytable is mismatched 18:27:53 thanks ndipanov! I'll report back any issues.. its a brand new updated FC20 install 18:27:54 mrunge, but yeah - I could not get it to work afaicr 18:28:23 pixelb: pip install prettytable fixes the cinderclient issue, but it may cause regressions elsewhere 18:28:47 jbernard, so you get a runtime failure without doing that? 18:29:02 pixelb: yes, the expected output is missing 18:29:19 heh ok, jruzicka ^^ 18:29:42 pixelb: i just prints nothing, so the it not entirely obvious that it's failing 18:29:51 pixelb: unless you know the expected behaviour 18:30:32 im happy to help test different versions of either package, or anything else that would help 18:31:06 afazekas, For later, this also gives a human-readable error: $ audit2allow -w -a 18:33:58 jbernard, could you rpm -e python-cinderclient; yum install http://rdo.fedorapeople.org/openstack-havana/epel-6/python-cinderclient-1.0.6-2.el6.noarch.rpm 18:34:07 (to see if previous version is OK) 18:37:19 pixelb: same behaviour with 1.0.6-2 18:40:51 jbernard, Hmm, I reviewed the upstream 0.6 -> 0.7 changes and they look sensible enough, so I'll see if I can provide an update quickly... 18:42:17 pixelb: awesome, im happy to test it 18:46:30 "No package opnstack-packstack available." grrrr 18:47:34 zaitcev: spelling mistake? 18:47:43 zaitcev: ahh, you knew that ;) 18:48:30 Sorry for the noise, but I was this close to tweet it. Very silly, yes. 
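A condensed version of the python-prettytable test pixelb and jbernard walk through here; the 0.7.2 build is the koji package pixelb points at just below, and the cinder commands are simply used as a before/after check (admin credentials assumed in ~/keystonerc_admin):

    # Check which client/library pairing is installed.
    rpm -q python-cinderclient python-prettytable

    # Pull in the prettytable 0.7.2 build that restores cinderclient's output.
    yum install -y http://kojipkgs.fedoraproject.org/packages/python-prettytable/0.7.2/1.el6/noarch/python-prettytable-0.7.2-1.el6.noarch.rpm

    # Confirm the client prints its tables again instead of empty output.
    source ~/keystonerc_admin
    cinder list
    cinder rate-limits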
18:53:11 it does not seems good: http://www.fpaste.org/66492/13891207/ 19:12:40 http://www.fpaste.org/66498/91217481/ do I need to enable something to get the dhcp agent bindings ? 19:12:52 jbernard, Could you try: yum install http://kojipkgs.fedoraproject.org/packages/python-prettytable/0.7.2/1.el6/noarch/python-prettytable-0.7.2-1.el6.noarch.rpm 19:14:43 pixelb: same behaviour 19:15:29 hrm. That's surprising since that's the version pip would be installing 19:16:02 pixelb: oh hang on, i had a typo 19:16:05 pixelb: one sec 19:17:19 I'm just catching up on things. Has anyone else seen a "Could not parse for environment production" when applying xxx_neutron.pp? 19:17:40 afazekas, What are you running? F20? 19:18:23 pixelb: that works 19:18:30 Hmmm, might be a bad config. Checking. 19:18:30 pixelb: output is now correct from cinderclient 19:18:43 jbernard, OK thanks for testing! Copying to repos now.. 19:18:51 pixelb: awesome! 19:19:20 pixelb: i dont think it should cause any troubles with other packages, but ill keep an eye out 19:20:06 Different from "Failed to parse /etc/nova/nova.conf" ? 19:22:45 kashyap: tempest-packstack scenario test 19:23:03 I'm getting "No valid host was found" when I try to launch an instance. Where should I be looking? 19:24:56 kashyap: on the f20 cloud image, disabled selinux, iptables, firewalld, applied nova-libvirt patch, mysql-mariadb workaround with neutron 19:25:49 afazekas, Ok, I'll check in a few, taking a break - been staring at a screen for a while. 19:32:28 http://www.fpaste.org/66503/12312313/ 19:33:18 cinder backup.log 19:34:00 afazekas, something wrong with the db, probably empty 19:34:11 I think Dafna had that earlier today but we couldn't reproduce 19:35:29 May be I got the error t install time, now it is there 19:37:02 eharney, ^^ you know anyone who could take a look there? 19:41:48 afazekas: does 'cinder-manage db sync' help? 19:42:13 afazekas: probably not… 19:46:07 jbernard: I do not see issue with service yet, It just added it to the log probbaly at the startup time 19:47:48 cinder backup-create b54fa509-dfa8-4550-b5a8-5f3df1132d54 19:47:48 ERROR: Service cinder-backup could not be found. (HTTP 500) (Request-ID: req-eb54fecf-e9bf-40a0-94f3-d857773fa9d1) 19:49:11 afazekas: i don't think the packstack cinder is configured to start cinder-backup 19:49:26 afazekas: i would guess you'll not see 'cinder-backup' in your process list 19:49:31 jbernard: the server was not ran 19:49:46 after starting it the request was accepted 19:50:20 would be nice if the openstack-status would include this service 19:51:44 is it possible the packstack tried to start this service before the db sync ? 19:52:26 i agree 19:52:34 on that, i am not sure 19:53:11 the backup will fail if the service is not available to handle the request (as I understand) 19:53:21 that could be a friendlier message 19:53:40 and i think the exception you're seeing is probably a different issue 19:53:51 I think the service should be enabled and running after the packstack install, so it is bug anyway 19:54:03 i agree with you there 19:54:31 It is time to live , bye 19:54:46 afazekas: Thanks for helping out! 20:37:08 hello 20:37:35 i have a probel with RDO and openvswitch, anyone can help ? 20:37:53 Hi, paul__ 20:37:59 Can you elaborate? 
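Based on afazekas's findings above, a hedged workaround sketch for the missing backup service; openstack-cinder-backup is the service name shipped in the Havana-era packages, and the systemctl calls assume a systemd host (use service/chkconfig on EL6):

    systemctl enable openstack-cinder-backup
    systemctl start openstack-cinder-backup
    systemctl status openstack-cinder-backup
    # Retry once the service is up; note openstack-status may still not list it
    cinder backup-create <volume-id>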
20:39:20 sure 20:39:34 I can't join my vm on openstack even from compute node itself 20:39:59 I assume it is a problem with openvswitch config but nor sure 20:40:12 ovs-vsctl show 20:40:14 ed54db06-7cb7-415a-97ea-2bc2ef85e085 20:40:15 Bridge br-int 20:40:17 Port "qvo4f65df07-8e" 20:40:18 tag: 3 20:40:20 Interface "qvo4f65df07-8e" 20:40:21 Port br-int 20:40:23 Interface br-int 20:40:24 type: internal 20:40:26 Bridge br-ex 20:40:27 Port br-ex 20:40:29 Interface br-ex 20:40:30 type: internal 20:40:32 ovs_version: "1.11.0" 20:40:33 the vm is interface qvo4f... 20:42:02 And you can't ssh to it? ping it? 20:43:34 nothing 20:43:45 (even if rules added in security group) 20:45:57 Can anyone help paul__ out with a networking-related problem? 20:46:34 paul__: Are you running an all-in-one configuration? Or do you have separate compute and network hosts? 20:47:12 all-in-one 20:53:35 paul__: Can you post somewhere the outputs of "nova list", "neutron net-list", "neutron router-list", and "neutron router-port-list " for each router? 20:53:42 what should be a working configuration ? ovs-vsctl result ? 20:53:43 ...and also "neutron subnet-list" 20:54:45 Also, if there are any rdo test day people still around, was there any traffic regarding neutron and selinux? because icehouse neutron on my f20 system is super unhappy. 20:55:05 paul__: what is your br-ex interface config like? 20:55:51 I didn't see any selinux traffic today. 20:58:15 Looks like kashyap and ndipanov discussed some selinux/neutron issues a few hours ago. 20:58:43 But it doesn't look like there was much detail. 21:00:45 DrBacchus: Thanks. Bug time I guess. 21:09:24 is there a workaround for this? Error: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Failed to call refresh: ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf returned 1 instead of one of [0] 21:09:47 ah its a mongodb thing 21:13:08 i used the following parameters : packstack --allinone --nagios-install=n --mysql-pw=password --ntp-servers=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org --os-swift-install=y --provision-demo-floatrange=192.168.0.0/24 --keystone-demo-passwd=password ... and packstack configured my br-ex to take the same ip as my gateway 22:19:52 yay, finally progressed deep enough to hit known workarounds 01:50:37 ping, anyone here testing Fedora 20 w/ Neutron and packstack? 01:50:53 with the icehouse install, from stock I cannot seem to remove the network included 06:34:16 good morning 06:34:37 I was wondering - why does it take so much time to delete an empty volume that I just created? 06:43:02 abaron: ewarszaw: ^^ 06:44:02 mbourvin: if it is 'post zero' then it is because we write zeros on it 06:44:42 abaron: how do I know if it's `post zero` or not? 06:45:10 simplest way of knowing whether the delete was post zero is to look at the vdsm log 06:45:14 at the delete operation 06:45:21 and check to see if postZero param is true 06:45:44 abaron: vdsm? is there a vdsm in openstack that I don't know about? ;) 06:46:02 mbourvin: lol 06:46:09 mbourvin: same issue though 06:46:17 mbourvin: safe delete writes zeros 06:46:49 mbourvin: ping flaper87|afk to know what exactly to look for 06:47:08 abaron: so what is the max time that you think that it should take for deleting a 15G volume for example? 06:47:35 mbourvin: as long as it takes to write 15G of zeros on your storage (it depends on the storage) 06:47:47 abaron: local lvm? 06:48:41 mbourvin: test it. run 'time dd if=/dev/zero of=/dev/VGNAME/LVNAME' 06:49:03 mbourvin: are you using the thin lvm? 
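While waiting for paul__'s output, these are the checks usually asked for in this situation; the security-group rules are only a sketch (paul__ says rules were already added), and "default" is an assumed group name:

    nova list; neutron net-list; neutron subnet-list; neutron router-list
    # Allow ping and ssh into instances in the default security group
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
    # Test from inside the dhcp/router namespace rather than the bare host
    ip netns list
    ip netns exec <qdhcp-or-qrouter-namespace> ping <instance-ip>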
06:49:09 abaron: nope 06:49:19 then test the above command 06:49:27 or create smaller volumes 06:50:33 note that here too, the only place in automatic tests that you want to call delete volume from the API is in the 'delete volume' test. Everwhere else you want to have your teardown do it in a much faster and dirtier way 06:51:34 abaron: ok, but now I'm running manual tests, and creating big volumes is a part of it :) 06:53:10 then cinder has a configuration option to disable safe delete 06:53:11 https://github.com/openstack/cinder/commit/bb06ebd0f6a75a6ba55a7c022de96a91e3750d20 06:53:19 mbourvin: ^^^ 06:54:19 abaron: ok, thanks, anyway I see the dd commend that it runs in top, and you were right - this is why it takes so much time 06:54:31 *command 07:08:50 abaron: I'm thinking - maybe the default should safe delete - disabled 07:09:19 abaron: It's weird that it takes so much time to delete a volume that was just created (from the user point of view) 07:10:20 the default should be as it is. with thin target it will be a lot faster, but with thick it is raw and we have no knowledge of where user information resides 07:10:39 this means that the only safe option is to zero it all before exposing the same disk to another user 07:10:47 otherwise data leakage will occur 07:11:17 users need to consciously decide that they don't care about their data falling into other users' hands 07:11:37 users also don't care how long it takes to delete 07:12:03 the only ones who care are admins / and quality engineers who rely on it for their next tests ;) 07:13:18 abaron: lol ok :) 07:42:54 I have a new issue - I created a big volume and tried to delete it - but there is no space left on device now. Horizon is dead, and when I run `cinder list` "I got this error: ERROR: An unexpected error prevented the server from fulfilling your request. (OperationalError) (2003, "Can't connect to MySQL server on '10.35.64.87' (111)") None None (HTTP 500)" 07:43:42 shouldn't it prevent creating such a big disk if there is not enough space? maybe like it works in ovirt? 07:43:48 abaron: yrabl: ^^ 07:48:56 Morning 08:28:10 ndipanov, Heya, Have you filed the issue for the rootwrap issues?; If not, I'm just drafting a report 08:28:21 I presume it's this: 08:28:23 2014-01-08 03:01:34.484 25113 TRACE neutron.agent.dhcp_agent Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 'set', 'tap2816cd25-73', 'address', 'fa:16:3e:7b:f8:9e'] 08:32:57 Hm, I see something else - an IOError of some sort: 08:32:57 2014-01-08 03:31:24.172 1713 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 44, in register 08:32:57 2014-01-08 03:31:24.172 1713 TRACE neutron.agent.dhcp_agent self.poll.register(fileno, mask) 08:32:57 2014-01-08 03:31:24.172 1713 TRACE neutron.agent.dhcp_agent IOError: [Errno 22] Invalid argument 08:33:02 * kashyap investigates further 08:33:25 kashyap, yes - plus I got one with iscsiadm 08:33:35 kashyap, but that one seems to be not related to selinux 08:33:45 so I'm investigating that 08:34:07 ndipanov, Ok, if you filed, can you please point me to bz URL? 08:34:37 kashyap, not yet - want to make sure it's not my system 08:38:57 Ok, I'm certain of a bunch of SELinux issues, just getting the relevant logs to dig further. Thanks. 08:40:12 ayoung: ping 08:40:35 ohochman, ayoung is in Boston, and asleep. 
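A sketch of disabling the zeroing behaviour via the option referenced in the cinder commit linked above; the exact option name depends on the release (secure_delete in older trees, volume_clear in newer ones), so treat both lines below as assumptions and check cinder.conf.sample first:

    openstack-config --set /etc/cinder/cinder.conf DEFAULT secure_delete false
    # Newer releases use volume_clear instead (none, zero or shred)
    openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_clear none
    # Restart the volume service for the change to take effect (systemd host assumed)
    systemctl restart openstack-cinder-volume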
08:41:07 kashyap: oh, 08:47:18 If you have a Keystone question, just post here, someone who's more familiar might take a look 08:51:06 * flaper87 is here 08:56:04 ndipanov, When you get a sec, can you please post this -- $ cat /var/log/audit/audit.log | audit2allow -R 08:56:08 I just wanted to compare 08:56:21 I have installed openstack using rdo in a vm in esx, I need to add a compute node to this setup.how to do that 08:56:44 http://openstack.redhat.com/Adding_a_compute_node 08:56:52 kashyap, sure will do - let me clean it first and start over 09:21:48 ndipanov, FYI, I filed this - https://bugzilla.redhat.com/show_bug.cgi?id=1049807 09:23:07 kashyap, very nice - want to maybe add those critical (ish) bugs to notes? 09:23:21 ndipanov, You mean - workaround? 09:23:42 well more like - have them there in one place 09:23:44 (s/workaround/workarounds wiki page) 09:23:54 Yes, adding there. 09:26:06 kashyap, we already have setenforce 0 in workarounds 09:26:09 :) 09:27:40 Yeah, next time, we (I) should post more details so folks can try to post relevant SELinux failures (with *useful* investigative info) 09:40:03 kashyap, but the crux of the issue is selinux policy is broken for neutron 09:40:23 ndipanov, Yes, I'm talking to SELinux Fedora maintainers & debugging w/ him 09:40:52 kashyap, comparing your audit2allow to mine now 09:52:47 kashyap, might be good to add fedora 20 to that bug (though clear from the nvrs) 09:53:16 ndipanov, Done, sir. It's for selinux-policy component - https://bugzilla.redhat.com/show_bug.cgi?id=1049817 09:53:23 Upstream is looking into it 09:53:59 Changed the version from rawhide to F20 09:54:40 kashyap, fwiw your denials are more comprehensive than mine 09:54:43 but still comparing 09:55:43 kashyap, btw is that output more useful then doing just audit2allow -a ? 09:56:35 ndipanov, Well, -R just allows us to generate a reference policy to make things working. 09:56:54 ndipanov, $ audit2allow -a -w gives a more descriptive/human-readable detail 09:57:05 kashyap, hence my question 09:57:07 :) 09:57:50 Yes, that'd be useful. There'll be a build later today in Koji, mgrepl (Fedora SELinux maintainer) just confirmed 09:58:29 kashyap, awesome 10:04:51 tshefi_: hows it going? any bugs this morning? 10:04:59 giulivo: ^^ 10:07:48 Dafna, none yet but now on my cc im getting such syslogs cougar01 iscsid: conn 0 login rejected: initiator error - target not found (02/03) 10:11:13 Dafna, not sure why as my cinder and glance are on other hosts. 10:12:27 tshefi_: are you sure that you installed the ice-house packages? 10:12:46 tshefi_: what is your packstack? 10:13:01 Dafna, one sec 10:13:36 Dafna, which component do you want to know? 10:13:52 tshefi_: rpm -qa |grep packstack 10:14:34 Dafna, openstack-packstack-2013.2.1-0.25.dev936.el6.noarch 10:16:23 tshefi_, that should be 0.27... ? 10:16:27 Dafna, I was having troubles with F20 but abandoned it so I'm moving on thinlvm on rhel 10:16:55 tshefi_, that might suggest you have a havana repo rather than icehouse? 10:17:26 giulivo: what happened with the emc? 10:17:42 got a couple of bugs open 10:17:51 pixelb, how could that be? Ran setup from RDO quick guide step by step? 10:18:02 but I'm done with it 10:18:10 pixelb, check my repo folder now 10:20:11 Dafna, one thing though, we don't configure cinder for backups ... it's not just about starting the service, it's unconfigured 10:20:20 did you try to get volume backups? 10:21:27 pixelb, correct checking rdo-release.repo says Havana grrrrrr!!! 
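The Adding_a_compute_node page linked above boils down to re-running packstack with the new host appended; a rough sketch, where the answer-file path, IP addresses and even the exact key names (they vary between packstack releases) are assumptions:

    # In the original answer file, append the new node and skip already-deployed hosts:
    #   CONFIG_NOVA_COMPUTE_HOSTS=10.0.0.10,10.0.0.20
    #   EXCLUDE_SERVERS=10.0.0.10
    vi /root/packstack-answers.txt
    packstack --answer-file=/root/packstack-answers.txt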
but i used the RPM from RDO site after it was corrected how come i still got Havana? 10:22:04 tshefi_, You followed step 1 at http://openstack.redhat.com/RDO_test_day_January_2014#How_To_Test 10:22:39 ? There should be no other instructions anywhere to confuse things for icehouse testing 10:23:02 pixelb, yeah sure my hosts were blank no repos from before, used /rdo-release-icehouse.rpm for sure 10:25:00 pixelb, i'll test on a new host using icehouse-rpm (again) and check the rdo. repo for icehouse version, be back soon thanks. 10:25:13 tshefi_: lets take this off line 10:25:49 tshefi_, It should all be recorded. yum history list rdo-release; yum history info $id 10:26:19 giulivo: so emc is blocked for testing in ice-house? 10:26:30 no it's not blocked, it's done 10:26:47 giulivo: so it worked? 10:26:59 yeah part of the features work, others don't 10:27:17 I'm now drafting a bug for the backup functionalities which doesn't affect only the emc driver 10:27:21 and then move to something else 10:28:14 giulivo: open bugs for everything that doesn't work 10:28:28 giulivo: do it before you move on... 10:28:42 sure 10:31:29 kashyap, thanks for checking that SElinux issue. plz let me know whan it's resolved and i'll re-run my tests with Enforcing mode. 10:32:59 nmagnezi, Np. It should be in build system later today. You may have to pull it from build system. 10:33:15 giulivo: thank you 11:09:39 giulivo: ping 11:09:47 Dafna, pong 11:10:08 giulivo: should a volume be stateless when I boot an instance from it? 11:10:14 no in-use 11:11:02 giulivo: no... I mean, when I boot an instance from a volume -> create a file -> shut down the instance -> boot a second instance from the volume -> should I see the file? 11:11:22 Dafna, yes, definitely 11:11:29 giulivo: mmm 11:11:44 you should see the file, it's not stateless, the whole point of volumes is in data *not* being stateless 11:11:45 giulivo: maybe because it was on /tmp? 11:11:51 maybe yes 11:12:11 giulivo: bug still... but I will try to create a dir under root 11:12:39 oh tmp is probably cleaned up by the OS, cinder isn't touching the data in the volumes 11:20:04 error in adding multinode. it stops while running packstack-answerfile in host1 11:25:16 Is anyone else seeing -ECONNREFUSED qpid errors after a reboot (F20, Havana)? 11:26:23 nova, ceilometer, heat and neutron can't connect until I restart the qpidd 11:27:45 Where is the epel-7 repo ? 11:28:51 http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found 11:40:15 afazekas, As near as I know, EPEL7 doesn't have all the packages generated 11:40:32 Compare that $ koji list-pkgs --tag=epel7 | wc -l with $ koji list-pkgs --tag=f20 | wc -l 11:40:55 shardy, Nope, haven't seen it on my Havana F20 setup (it's now IceHouse). 11:41:25 shardy, What does that say -- $ netstat -lnptu | grep qpid 11:41:45 I had to always restart qpidd though after reboot: $ systemctl restart qpidd 11:42:29 kashyap: That's what I'm finding, restarting qpidd should not be necessary IMO, must be a bug somewhere 11:43:47 I raised bug #1049488 yesterday, but I see dneary also raised a similar bug back in July.. 11:45:56 kashyap: I'll try the f-20 packages.. 11:47:37 shardy, Yeah, I missed to file it, thanks. 11:47:57 is there a log collector? 11:49:17 shardy, Can you please also add dneary's bug in your bz? just as a reference 11:49:48 ohochman: nmagnezi: kashyap: is there a log collector? 
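When the repo turns out to be Havana instead of Icehouse, as tshefi_ found, this is one way to verify and correct it; the release-rpm URL is the one already used earlier in the day, the rest is a sketch:

    rpm -q rdo-release
    grep -E 'name|baseurl' /etc/yum.repos.d/rdo-release.repo
    # yum history shows exactly which rdo-release rpm was installed and when
    yum history list rdo-release
    # If it still points at openstack-havana, swap it for the icehouse release rpm
    yum remove -y rdo-release
    yum install http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm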
11:49:52 mbourvin, For Neutron, you can follow-- https://etherpad.openstack.org/p/rdo-testday-JAN2014 lines 34 to 40 11:50:03 kashyap: and for others? 11:51:04 giulivo, ping 11:51:07 mbourvin, Needs to be made. Can you be more specific? "log collector" is a vague phrase 11:51:08 * kashyap off to lunch 11:51:19 kashyap: where the foreman-section on : https://etherpad.openstack.org/p/rdo-testday-JAN2014 11:52:49 kashyap: done 11:56:18 kashyap, My bug? 11:56:56 dneary: https://bugzilla.redhat.com/show_bug.cgi?id=984968 11:57:14 Which I think I duped yesterday with bug #1049488 11:57:33 dneary: Did you get any feedback, at all, as to why that is happening? 11:58:19 seems like folks have all just been bouncing qpidd after reboot, which seems, uh, suboptimal 11:58:31 shardy, No more than in the forum threads & wiki page I pointed at 11:58:47 Definitely suboptimal 11:58:53 Esp if it's occurring a lot 11:59:02 Seems to be a Fedora thing, not CentOS/RHEL 11:59:07 dneary: Ok, thanks, I'm trying to dig into it a bit as it's happening repeatably for me every reboot (fresh F20 with Havana) 11:59:40 dneary: Yeah I don't think I saw it on EL6.5, but that was RHOS not RDO so too many variables 12:02:52 dneary, You got the context above. 12:04:30 pong yrabl 12:10:41 Does anyone have any tips for what needs to be done to enable instances to access the nova metadata service with neutron networking? 12:11:02 giulivo, how is the smi-s? 12:11:11 it worked 12:12:15 yrabl, I have the RDO deployment still configured if you want to run further stuff there 12:12:56 giulivo, no, it's ok - I still have the glusterfs to handle :). just checking 12:16:38 i have RDO deployment. and bare metal provision always fails. 12:17:09 find multiple road blocks. Any one successfull with bare metal with RDO? 12:32:13 pixelb: ping 12:39:14 * afazekas can't find ruby(selinux) libselinux-ruby for epel-7 12:41:41 in EL6 it was part of the optional channel, not EPEL 12:41:41 Dominic: ping 12:41:41 hi 12:41:44 hi 12:42:10 Dominic: I have some new AVCs on neutron-controller deployed by foreman 12:42:26 avc: denied { read } for pid=18313 comm="ip" path="/proc/18223/status" dev=proc ino=222926 scontext=unconfined_u:system_r:ifconfig_t:s0 tcontext=unconfined_u:system_r:initrc_t:s0 tclass=file 12:42:33 avc: denied { read write } for pid=18527 comm="rsync" path="/tmp/puppet20140108-18223-5dcjp8-0" dev=dm-0 ino=524356 scontext=unconfined_u:system_r:rsync_t:s0 tcontext=unconfined_u:object_r:initrc_tmp_t:s0 tclass=file 12:43:16 file BZs then I guess, but it's not against Foreman 12:43:27 foreman-selinux / 12:43:28 ? 12:43:43 no, this isn't a Foreman host, not a Foreman bug 12:43:58 openstack-selinux 12:44:01 ? 12:44:09 yeah, probably 12:44:11 ok 12:44:16 thanks, 12:52:53 Dominic: I should have openstack-selinux installed on the machine right. if it's there none it's probably because the neutron-controller host group not including those puppet manifest that installing it 12:52:56 ? 12:54:19 pixelb: ? 12:59:42 pixelb, hey - I seem to be hitting https://bugzilla.redhat.com/show_bug.cgi?id=1001705 13:20:44 ndipanov, I was expecting some upstream fix to be found in the RDO pkgs but the change is dated 6th of Dec, the RDOs are based on the 5th of Dec tarballs right? 13:21:15 giulivo, you talkin bout the iscsi thing 13:21:16 ? 
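For the qpid ECONNREFUSED-after-reboot symptom shardy and dneary describe, the checks below only confirm and work around the problem (bouncing qpidd); they do not fix the underlying startup-ordering bug:

    systemctl status qpidd
    netstat -lnptu | grep qpid      # should show qpidd listening on 5672
    systemctl enable qpidd          # make sure it comes up at boot at all
    systemctl restart qpidd
    # Then bounce the services that gave up on the broker
    openstack-service restart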
13:21:29 no this one https://review.openstack.org/#/c/54833/ 13:21:37 morazi: ping 13:21:45 ohochman, pong 13:22:34 morazi: I notice that there's no openstack-selinux installed on the neutron-controller (and some AVCs remains after deployment) 13:22:40 giulivo, so not related to my comment form a few mins back? 13:23:24 from* 13:23:46 ndipanov, no it's not 13:23:48 btw we base our packages on milestone releases 13:23:56 it's not just a random commit 13:24:00 morazi: I've opened a bug on openstack-selinux but I actually think it might be the puppet-modules that should deploy that package on the client. https://bugzilla.redhat.com/show_bug.cgi?id=1049895 13:24:03 there is an upstream process for this 13:24:10 creating a milestone branch etc 13:24:11 ndipanov, yeah so current RDO is based on 5th of Dec tarballs right? 13:24:27 on 2014.1.b1 tarballs 13:24:38 check out that tag in git and see what's there 13:24:51 heh, thanks! 13:32:21 kashyap: bare metal provisioning is expected to work on RDO? is it tested? 13:33:49 ohochman, one sec. I think there is a related bug that implies that we aren't expecting openstack-selinux to be required... 13:35:10 ohochman, lemme ask someone (https://bugzilla.redhat.com/show_bug.cgi?id=1015625 is the one I'm talking about the seems to at least imply openstack-selinux should not be a hard dep) 13:35:22 morazi: if so.. what will be dealing with the AVCs ? 13:35:41 kashyap, btw that isciinitator bug is legit 13:35:48 will report it now 13:36:52 ndipanov: hi 13:37:29 Dafna, ohai 13:38:07 ndipanov: :) can i ask a silly question? 13:38:23 Dafna, allways 13:38:28 there are no silly questions 13:38:46 ndipanov: you should say, how is that different from what you usually ask ;) 13:39:03 I really shouldn't ;) 13:39:21 ndipanov: lol 13:39:39 ndipanov: should I be able to run rebuild on an instance booted from volume? 13:39:39 ohochman, yea, that's a completely fair point. 13:40:03 Dafna, not sure need to check 13:40:15 btw what are you testing this on? rhel? 13:40:22 ndipanov: yes. 13:40:34 Dafna, that's why you can boot from volume 13:40:41 morazi: OK, I've updated the bug - the AVC's will be found the compute side as well. 13:40:43 Dafna, gimme a sec 13:40:43 ndipanov: what do you mean? 13:40:52 it's borked on f20 afaict 13:41:06 ndipanov: can it be selinux? 13:41:16 ndipanov: or is it something else? 13:41:49 nah smth else - a regression fixed in rhel but not in fedora 13:42:03 ohochman, the foreman related AVCs or are you talking about different new AVCs? 13:43:27 Dafna, should work yeah 13:43:37 Dafna, if it's not it's an upstream bug likely 13:43:49 morazi: well the thing is. that foreman do not deploy openstack-selinux-0.1.3-2.el6ost.noarch on the neutron-controller / neutron-compute machines as opposed to packstack 13:45:32 ndipanov: thanks. and when I asked for a password when rebuild from horizon, is that a password change for the rebuild image or is that approval that I am allowed to rebuild? 13:46:05 Dafna, you sure you're not talking about rescue? 13:46:14 ndipanov: i'm sure :) 13:46:33 then I'd have to check Dafna 13:46:41 ndipanov: unless horizon use rescue and call it rebuild... 13:46:48 ndipanov: thanks :) sorry... 13:46:56 ohochman, ah, perhaps that differentiation was not clear to everyone. 13:47:02 yrabl: ping 13:47:26 Dafna, that would also be a bug 13:47:34 ohochman, so a packstack install always puts down openstack-selinux even though there isn't an rpm dep there? 
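To see exactly what the 2014.1.b1 milestone contains, as ndipanov suggests, something like the following works; nova is just an example project, any of the others can be checked the same way:

    git clone https://git.openstack.org/openstack/nova
    cd nova
    git checkout 2014.1.b1      # the icehouse-1 milestone tag the RDO packages are built from
    git log --oneline -5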
13:47:36 ndipanov: yes :) 13:47:51 morazi: I just check neutron machine that was deployed with packstack it has openstack-selinux installed and no AVC's in /var/log/messages. 13:48:08 yrabl: this is cool... create an empty volume -> boot instance from it -> rebuild the instance from an image 13:48:14 morazi: I guess it does . 13:51:25 bcrochet|bbl, do you happen to know the answer there off the top of your head w/r/t the behavior of foreman vs packstack in terms of the presence of openstack-selinux -- it likely impacts https://bugzilla.redhat.com/show_bug.cgi?id=1015625 at least slightly 13:55:00 giulivo: I think volumes are treated like images when we boot from one. I can boot two instances from the same volume 13:56:33 Dafna, cool 13:56:37 bcrochet|bbl: ping 13:57:18 Dafna, wait when booted from volume the volume should be marked as in use and not be usable to boot another instance 13:57:24 is it marked as "in-use" ? 13:57:25 ohochman, actually I may be misparsing that bz at least slightly, so yea let us talk a bit and see what we come up to. I may be that we just need to ensure a package is pulled in to all host groups. 13:57:58 giulivo: I know... wanna look? 13:58:12 I can try that too, just asking if it is marked as in-use or not 13:58:24 as you shouldn't be allowed from _any_ volume marked as in-use 13:59:02 giulivo: yes 13:59:19 morazi: I agree. I think the packstack should be there and if it blocks the user to perform any action that's a bug that need to be resolve within the openstack-selinux. 14:00:02 Dafna, uh that's bad ... the can you just attach a volume to an instance and, after it is marked in-use, see if you can use it also to boot a new server? 14:00:17 in that case, the "in-use" filtering is probably just not working 14:01:17 giulivo: actually, because of the bug that I found before where the volume seem to be stateless (as in data is not saved once I destroy the instance) I think they treat the volume like an image and are creating some sort of snapshot 14:03:13 Dafna, let me do some tests around that too, are you doing all this on guster? 14:04:20 giulivo: yep 14:06:10 ohochman, can't talk now, got a meeting in a minute. I'll check in once I'm back 14:07:56 giulivo: I found what is wrong... 14:08:22 giulivo: do this... boot an instance from a volume -> rebuild the volume from an image (its no longer attached) 14:09:10 Dafna, rebuild the server you mean, right? 14:09:22 giulivo: so once we rebuild the volume, it's marked with an image and it seems that the active image is no longer the volume 14:09:27 giulivo: yes 14:09:36 giulivo: rebuild the server 14:10:03 Dafna, so that is correct, when you rebuild it basically "frees" the volume 14:10:10 how long should it take to delete a F19 instance. looks to be hanging in 'Deleting' state 14:10:24 Dafna, and it can be used by other instances 14:10:33 but the data in it should be _persistent_ 14:10:39 that's one more thing I want to check 14:10:50 no way you should lose data from volumes 14:10:58 giulivo: so I think that the volume is detached and the vm is started again from an image 14:11:04 giulivo: not sure its a bug... 14:11:15 Dafna, it's the correct behaviour if you rebuild from image 14:11:29 giulivo: not related to the data bug - that happens 100% no rebuild or anything 14:11:33 giulivo: yes 14:11:48 yeah how do I reproduce that? just attach write some data , detach and the attach somewhere else? 
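The check giulivo asks for, written out; the flavor name and the Havana-era block-device syntax are assumptions:

    cinder list                       # volume should read "available" before the first boot
    nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 vm1
    cinder list | grep <volume-id>    # expected: in-use
    # If the in-use filtering works, a second boot from the same volume should be rejected
    nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 vm2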
14:11:58 for my own curiosity 14:12:04 ohochman: pong 14:12:24 as cinder should touch the data at all so I wonder why it's not there afterwards and what is actually attached to the second instance 14:12:33 *shoult NOT touch 14:12:34 giulivo: boot an instance from volume -> write data -> destroy instance -> boot a second instance from the same volume -> check if the data is there 14:12:43 Dafna, I see, will try! 14:12:46 thanks! 14:12:51 giulivo: no prob 14:13:42 shardy, Thanks (late response) 14:14:03 bcrochet: I wondered about the "Med" in the neutron 4 nodes scenario ? http://openstack.redhat.com/TestedSetups_2014_01#Advanced_Installs_.28Foreman_Based.29_--_Work_in_Progress 14:14:03 ndipanov, You have a URL about the iscsi bz? 14:14:31 kashyap, https://bugzilla.redhat.com/show_bug.cgi?id=1049922 14:14:41 pradeep, I haven't tested bare metal myself. If you have spare cycles, please try it & let us update on the list. 14:16:25 kashyap: Did anyone else try out icehouse on F20 yesterday that you know of? I ran into all sorts of exciting neutron/selinux issues and curious if I was just lucky or if this is an actual thing. 14:17:17 pradeep: i have RDO running fine with KVM nodes. But bare metal provisioning, i feel not at all stable. 14:17:21 kashyap: ^ 14:17:22 larsks, /me is on Icehouse F20. All the bugs I'm filing are related to that. 14:17:56 kashyap: Are you running with selinux enabled? I ended up needing at least this to get neutron running: https://gist.github.com/larsks/8306690 14:17:56 kashyap: community had plans to move bare metal away from nova. probably it might be stable later. 14:18:03 pradeep, "not all stable" is very vague, if you can, please post detailed observations to rdo-list@redhat.com 14:18:07 kashyap: any way, i am trying now. will let you 14:18:15 larsks, Yes, that was my next point (SELinux) 14:18:21 larsks, This is what you need :-) -- https://bugzilla.redhat.com/show_bug.cgi?id=1049817 14:18:30 pradeep: have been updating/sending mails to community. i dint get any responce :) 14:18:34 kashyap: ^ 14:18:42 kashyap: let me try again. :) 14:18:57 pradeep, Patience is your friend. 14:19:11 kashyap: :) hehe excellent. 14:19:12 nmagnezi, There you go, SELinux issues are fixed in this -- selinux-policy-3.12.1-113.fc20 14:19:17 kashyap: Thanks. 14:19:50 kashyap: Sort of wish selinux policies weren't so monolithic (so that we could package the appropriate module with neutron). Ah well. 14:20:18 larsks, Indeed. That was mail in drafts! I had to follow up & bug SELinux maintainers to get this fixes in 14:20:59 yrabl: did you test boot from volume (create new volume)> 14:21:25 Dafna, not yet/ 14:21:37 #action Kashyap/larsks to bring up the issue of modularized SELinux policies for OpenStack on the list 14:21:46 yrabl: :) ok... we have bugs there. several 14:21:51 kashyap, how can I use this in RHEL? 14:22:55 nmagnezi, I assumed you also had some env. based on F20 (no worries). I don't think mgrepl backported these fixes to EPEL. 14:23:07 ohochman: It just signifies that I have some tempest failures. 14:23:17 (He was kind enough to quickly make this build despite a deluge of other issues he was fixing) 14:24:05 kashyap, any chance he will port this to epel for rhel? 14:27:53 nmagnezi, I doubt he has time to do it today. 14:28:22 I posted a message, waiting for his response 14:28:50 ohochman, ok, thx for filing the bug. I've asked bcrochet to start looking at it. Shouldn't be hard as much as requires confirmation that it doesn't break things somewhere else. 
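One way to pull the fixed policy build mentioned above before it reaches updates-testing; the NVR is copied from the conversation, so verify it is still the current build in Koji:

    koji download-build --arch=noarch selinux-policy-3.12.1-113.fc20
    yum localinstall ./selinux-policy-3.12.1-113.fc20.noarch.rpm \
                     ./selinux-policy-targeted-3.12.1-113.fc20.noarch.rpm
    setenforce 1    # re-enable Enforcing once the new policy is in place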
14:36:36 bcrochet: morazi: I've manage to set that setup neutron-controller + neutron-networker + 2X neutron-compute. and I booted 7 instances on it . 14:37:09 bcrochet: morazi : If you want we can run tempest on that setup as well. 14:37:35 ohochman: morazi That would be ideal I would think, to run tempest. 14:51:37 any special reason why I can't create a new volume from an existing volume from horizon? 14:53:53 Dafna, I'm not able to try that at the moment - the keystone isn't working properly :) 14:54:37 yrabl: I'm doing it. what do you mean keystone isnt working properly? 14:55:42 when I sent a command it got stuck for like 20-30 seconds on keystone alone 14:55:45 Dafna, 14:55:56 Dafna, I'm checking it at the moment 14:56:08 yrabl: who's helping you? 14:57:48 Dafna, no one, right now, I'm trying to understand what is happening:1. the keystone is very slow. 2. the CC fail to delete an instance 14:58:50 yrabl: ajeain1 can you please help yogev with the issue he found for keystone? 15:00:22 Dafna, it was a problem with Neutron 15:00:56 yrabl: call me 15:31:10 Dafna, did you open a bug for the volumes data loss? 15:31:29 giulivo: yes 15:31:39 giulivo: did it happen for you? 15:31:44 can you add me on CC there? I just reproduced yes 15:31:55 giulivo: sure... I have to find it :) 15:41:15 giulivo: you would want to see this too https://bugzilla.redhat.com/show_bug.cgi?id=1049995 15:41:25 giulivo: I liked this bug... 15:42:12 doh 15:43:24 uff RDO si quite hungry... VM with 2GB of memory isn't enough for it... 15:46:26 mpavlase: 2GB for a compute node should work, as long as you're only starting a single m1.tiny or smaller. I use 2048 for my compute and 2300 MB for my controller. 15:47:28 larsks: neutron used a lot of memory 15:48:01 My neutron controller has 1GB and seems happy. 15:48:22 I'm only creating a single network and booting a Cirros instance for testing, though. 15:49:41 larsks: hmm... my VM (as host) wasn't able to start mysqld service due to lack of memory 15:50:42 Note that I'm running neutron on a separate host from the other controller services. 15:50:48 larsks: so it will not me allow to do almost nothing in openstack 15:50:58 larsks: that would be it 15:51:20 That's what I meant when I said 15:51:25 ...ooops... 15:51:38 "my neutron controller has 1GB" <-- separate from other services 15:51:43 kashyap, you there? 15:52:17 ndipanov, Yes, severely multi-tasking (should reduce it!) 15:52:49 kashyap, so when you come around to this - I am getting the ECONNREFUSED only from neutron services 15:53:05 you had that some time ago... remember what the solution was? 15:53:13 if I remember correctly 15:54:28 ndipanov: if you've confirmed that the services are running, have you checked for appropriate firewall rules? Port 9696 needs to be open. 15:55:20 larsks, was about to - but I just restarted the node - no reason why it shoud change... 15:56:06 Mmmmmaybe. Are you running Fedora? 15:56:19 larsks, yes 15:56:26 Do you have firewalld installed? 15:56:31 (...and running...) 15:56:32 of course 15:56:35 ndipanov, I lost track of what resolved it, I've been chasing SELinux issues, 1 sec. 15:56:46 kashyap, thanks no rush 15:57:20 Because my recollection is that packstack doesn'tk now about firewalld, so it doesn't do the right thing to make persistent firewall rules. 15:57:33 My F20 system is currently installed right now, so I can't check for certain. 15:57:48 I could be wrong. 15:59:00 pixelb, Thanks for the typo spotting, that was meant to be still selinux-policy-3.12.1-113.fc20. 
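For the neutron ECONNREFUSED that ndipanov hits after reboot, the first things worth checking; 9696 is the standard neutron API port larsks mentions, and firewall-cmd only applies if firewalld is the active firewall:

    systemctl status neutron-server
    netstat -lnpt | grep 9696       # is the API actually listening?
    iptables -S | grep 9696         # and is the port open?
    firewall-cmd --list-all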
15:59:30 mgrepl mentioned that's for testing only, and he plans to re-spin & submit an update to Bodhi again today with some more issues he's fixing 16:01:47 Dafna, so I couldn't reproduce that consistently, but I had it with thinlvm ... seems something related to buffering of data not synced 16:01:57 (well, not synced on time) 16:02:03 larsks, yeah it's that most likely - the port is disabled 16:02:26 giulivo: reproduce what? the create vol issue? 16:02:36 Dafna, the "data loss" 16:02:55 giulivo: It reproduced for me 100% :) 16:03:11 kashyap, thanks. I hope https://bugzilla.redhat.com/1048946 is fixed by that too 16:05:21 giulivo: when we rebuild a server from horizon we are asked for a password. is that for setting the password for the server we are building or is it authentication for the user running this action? 16:06:26 Dafna, it's the server password 16:07:05 giulivo: :) thanks 16:07:34 it should be hijacking that by loopmounting the dick 16:07:37 disk 16:07:40 :( 16:07:52 giulivo: ? 16:08:27 the server password I mean, nova should be injecting it by mounting the disk image 16:08:44 giulivo: ah... and its not what the api will call rescue? 16:09:17 kashyap, hey what happened to that db bug you had 16:09:20 giulivo: I want to understand if we actually rebuilding the image or if we are changing root pass 16:09:30 I think it's not a real bug - I've not seen it yet 16:09:43 pixelb, Yes, it will be, that's the AVC, right - allow glance_api_t amqp_port_t:tcp_socket name_connect; 16:10:08 y 16:10:10 ndipanov, Hey, I still have that environment intact, just have to find 15-20 quiet minutes to do proper investigation & make a report 16:10:24 k sure no rush 16:11:40 Dafna, no rebuild and rescue are two different things, rebuild should be rebooting the instance with a different image 16:13:44 larsks, yeah firewalld was messed up 16:13:54 giulivo: good, so why would horizon ask for password? I mean... lets say I am a user who does not know what the root password to the image is (perhaps on purpose) if I have permissions to rebuild an instance I can have access to an image which I was not given permissions for... 16:13:58 pixelb, are we reporting that against packstack or? 16:14:41 giulivo: what do we ask if the image was a windows os? do we know how to change Administrator password? 16:14:47 puppet module and package question: is there any work out there to make puppet modules know about firewalld? Also, what about firewalld service definitions in the various packages? 16:14:57 ...well, that was meant for a different window. 16:15:02 Dafna, you can set the root password also when creating a new image so I think the idea is, if you have permissions to launch instances (and inject a password) you should be able to do it when rebuild too 16:15:04 ndipanov, I was reporting AVCs against selinux-policy or are you talking about something else? 16:15:25 pixelb, I think smth else 16:15:38 when I reboot hosts - firewall rules get messed up 16:15:41 on f20 16:15:48 I assume due to firewalld 16:15:52 ah ok 16:15:59 pixelb, I'd say it's a bug 16:16:21 I vaguely remember that being already reported at least. checking.. 16:16:27 pixelb, thanks 16:16:41 also did you see my bug against iscsiadmin-initiator? 16:16:59 Dafna, no that won't work for windows... this is actually called password injection and I think it is planned to be removed completely as one is expected to set the root password (if needed) by passing metadata to cloud-utils at launch time ... ndipanov, makes sense? 
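If the glance_api_t / amqp_port_t denial quoted above needs to be silenced before the selinux-policy fix lands, a local module can be generated from the audit log; this is only a stop-gap sketch, and local_glance_amqp is a made-up module name:

    grep glance_api_t /var/log/audit/audit.log | audit2allow -M local_glance_amqp
    semodule -i local_glance_amqp.pp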
16:17:27 ndipanov, larsks logged this last Nov: https://bugzilla.redhat.com/1029929 16:18:00 ...which is CLOSED DUPLICATE of https://bugzilla.redhat.com/show_bug.cgi?id=981583' 16:18:21 giulivo: so it's not only not removed but part of the rebuild process through horizon... 16:18:26 ...which smells really really stale. 16:18:29 There was also looking from the other side: https://bugzilla.redhat.com/981652 16:18:31 giulivo, without knowing exactly what you are talking about - I'd say yes - injection should go away I think - not sure if there are patches already but there was talk about ut 16:19:16 thanks ndipanov for helping there 16:19:53 Dafna, currently it's there yes, you can also set the password when creating a new instance though 16:19:54 pixelb, thanks - so won't spam bugzilla more 16:20:37 Well, you may want to comment on one of those bugs indicating that you're still seeing the problem with F20 and recent RDO packages. 16:20:58 ndipanov: remember when I asked about rebuild? so doing server rebuild through horizon will ask for a new root password on the instance you are rebuilding. do you know when will it go away? 16:21:28 Dafna, yeah no idea really bit let me try it now that I have the setup working 16:21:32 ndipanov, If you're still around for a couple of mins, I can start taking a look at it now 16:21:32 but* 16:22:08 ndipanov: thanks :) 16:22:20 I am sorry to ask... but anyone here an expert or knowledgeable with ceph as a backend for RDO openstack? 16:22:36 kashyap, yeah I'll be here till 19:00 or something 16:22:58 kashyap, I think we're on the same time these days :) 16:24:06 ndipanov, Yeah, I'll leave in ~10 days though (visa expiration) 16:25:40 we'll if it's any consolation, I won't make it to fosdem so you won't miss me there :) 16:26:27 :-) LOL. We'll get some productive work done instead. 16:27:38 I was at FOSDEM 2012, I presented on PKI, etc & couldn't attend many of the 5000 sessions in the 2 days :-). Whole 2 days I spent in the virt-dev room & met some folks 16:32:17 larsks, ndipanov I bumped up priority of 981583 FWIW 16:32:40 pixelb: Writing an email about that right now :) 16:33:54 pixelb, yeah - should really see some love - maybe move it to 20 as well 16:34:07 pixelb, also https://bugzilla.redhat.com/show_bug.cgi?id=1049922 16:35:04 pixelb, from what I can see in iscsi-initiator dist git - this is solved for rhel6.5 but will likely bite us for rhel 7 so good to solve it 16:35:52 ndipanov, ugh only rhel was patched :( was the fix sent upstream? Fedora should be patched in any case 16:37:03 pixelb, not sure tbh - seems like a regression from a backport in rhel 16:37:10 but keeps happening on fedora 16:37:23 pixelb, but i didn't dig too deep 16:38:03 btw 2 node cluster on my poor laptop working nicely now on f20 - just no volumes due to that :) 16:39:01 ndipanov, cool! make sure to update tested setups on the RDO wiki 16:39:19 I did I think but still marked it as fail 16:58:19 ndipanov, Heya, so, the root-cause still is - first my compute-service is not up & running due to that libvirt 'avx' issue 16:58:33 I'm updating to openstack-nova-2014.1-0.5.b1.fc21 & trying 16:59:26 kashyap, cool 17:01:32 Yay, it comes up. Now off to find out what's up with that nova instance hung in "deleting" state 17:03:05 anyone seen "SequenceError: execute() got an unexpected keyword argument 'maskList'" raised from /usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py ? 
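For the instance stuck in "deleting" that kashyap mentions, the usual sequence is to reset its state and retry; this assumes admin credentials and a Havana/Icehouse-era novaclient:

    nova list --all-tenants
    nova reset-state --active <instance-id>
    nova delete <instance-id>
    tail -n 50 /var/log/nova/compute.log    # if it sticks again, the reason is usually here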
17:07:56 kashyap, I also noticed that I lose routing information on my hosts when I restart them and hence can't seem to ping instances 17:08:04 kashyap, although that might not be the only reason 17:08:18 kashyap, have you seen that before 17:09:05 ndipanov, Host as in Controller/Compute hosts? I didn't see it on my Havana F20 setup at-least 17:09:14 yeah 17:09:34 ip route gives me only my networks 17:09:42 not the public one 17:09:44 Ice-house: I've been resolving issues one by one. Now pkg update on my Controller node is in progress; will let you know if I can get to it 17:09:54 ndipanov, GRE + OVS? 17:10:11 the basic allinone so vlans 17:11:00 Ok: I don't have a VLAN setup, so - I cannot definitively answer. Let's invoke the spirit of larsks 17:14:14 The what now? 17:14:35 larsks, you are a neutron guy? 17:14:53 I play one on TV. Depends on your issues :) 17:15:16 Reading through scrollback... 17:15:32 ndipanov, So, I see something strange: compute service is "active" for a brief second or so, then it goes into inactive (dead) 17:15:50 larsks, thanks 17:15:56 kashyap, log? 17:16:04 Up-coming. . . 17:16:21 It's still the libvirt error; but there must be something else too 17:16:49 ndipanov: I'm not sure I understand your issue exactly. What routes are disappearing? 17:17:43 larsks, so on my "controler" node 17:17:43 ndipanov, That' the Compute log - http://paste.fedoraproject.org/66759/01418138/ 17:17:51 where all the neutron services are running 17:18:06 I don't have a route to the "public" network 17:18:28 larsks, 17:18:29 ^ 17:19:23 Is your public network associated with an actual interface on your controller? 17:20:33 Can you post "neutron net-list", "neutron subnet-list", "neutron router-list", and "ip route" and "ip addr show" somewhere? 17:20:54 larsks, sure 17:21:08 but I'd say it's not 17:24:21 larsks, http://paste.openstack.org/show/60814/ 17:24:35 Looking... 17:25:15 larsks, but iiuc that thing should be taken care of by the l3 agent 17:25:31 What does subnet a513992a-9795-4325-ba4a-93cdf3da45d5 look like? It doesn't show up in the output of subnet-list. 17:25:58 Also, can you add "ovs-vsctl show"? 17:26:18 yeah sorry 17:27:18 larsks, http://paste.openstack.org/show/60815/ 17:31:07 larsks, also dhcp not working for my instances... 17:31:47 :/. So, regarding your external network, do you recall what the route to it looked like before it disappeared? 17:31:49 pixelb, Isn't this a blocker? Compute service just dies after briefly coming up -- https://bugzilla.redhat.com/show_bug.cgi?id=1049391 17:32:20 larsks, no, but it was there for sure as I was pinging and sshing like a boss 17:32:39 larsks, am I right to assume that l3-agent service is supposed to configure this? 17:32:47 larsks, I can dig from there? 17:33:21 kashyap, yep, that's a bad one, but have you tried the workaround on the wiki? 17:34:24 ndipanov: I don't think the l3-agent is supposed to set this up. Generally, one would have an interface part of br-ex in the host network context that would provide the necessary connectivity (or an address on br-ex itself). 17:34:27 pixelb, Ah, that works? I know afazekas pointed me to it; let me quickly try 17:35:12 kashyap, it might only stop compute from dying rather than actually fixing the issue 17:35:23 pixelb, Should I clone it to launchpad? Or you know if it's already there 17:35:40 I've not looked upstream, so I would clone to lp 17:35:52 ndipanov: I'd like to try to replicate your setup. Did you just run "packstack --allinone", or did you specify additional options? 
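The setting larsks asks about can be checked directly; /etc/neutron/plugin.ini is the symlink packstack lays down, which is an assumption about this particular install:

    grep -E 'tenant_network_type|enable_tunneling|local_ip' /etc/neutron/plugin.ini
    neutron agent-list      # confirms which agents are alive on which hosts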
17:35:54 pixelb, Right; I wanted to debug another nova related issue, just at-least want to see the service up, let me try 17:36:23 larsks, I ran --alinone 17:36:30 and it worked fine 17:36:43 then I rebooted the node and added another one 17:37:02 You mean you added a compute node? 17:37:04 and re-ran packstack with the old answer file, but chnged networks 17:37:08 yes 17:37:19 and also I added an interface to the first node 17:37:35 Okay. Just out of question, what is tenant_network_type in /etc/neutron/plugin.ini? 17:39:30 local 17:39:34 larsks, ^ 17:40:27 So for anything other than an all-in-one install, that *can't* be local -- it needs to be gre or vlan or vxlan. 17:40:46 When you add a compute node to an allinone install, you need to update your tenant network type. 17:41:03 larsks, makes sense 17:41:16 larsks, just for my information - what does that setting do? 17:42:00 That controls the type of networks used for instance network traffic. When set to "gre" (or possibly "vxlan"), compute nodes are connected to your neutron controller via a tunnel. When set to "vlan", connectivity is via vlans. Tunnels are generally easier to work with. 17:42:35 Make sure that you also have enable_tunneling=True. 17:42:37 larsks, which would create an additional bridge iiuc? 17:42:40 See http://openstack.redhat.com/Using_GRE_tenant_networks for some docs. 17:42:55 That would create a new bridge br-tun on your neutron controller and compute nodes. 17:43:26 yeah makes sense larsks 17:43:32 See e.g. this picture: http://blog.oddbit.com/assets/quantum-gre.svg 17:43:54 larsks, yeah got it 17:45:10 larsks, thanks 17:46:06 ndipanov: I've started an allinone install. Going to go grab some lunch. 17:46:15 larsks, enjoy 18:10:02 pixelb, That workaround doesn't fix it -- the service still just does after momentarily coming up. 18:10:09 s/does/dies 18:22:36 Anyhow, a libvirt dev confirms that something is passing the XML with 'avx' enabled twice. 18:39:22 Bummer, looks like ndipanov has taken off. 19:14:38 larsks, still there? 19:32:05 Yup. Do you have an /etc/sysconfig/network-scripts/ifcfg-br-ex? 19:49:27 larsks, sorry missed your mst 19:51:25 larsks, yes I do 19:52:52 That should set up br-ex with an address on your public network (which should also result in creating the appropriate network route). If you just run "ifup br-ex" do things look like you expect? I.e., does br-ex have an address and does the route exist? 19:54:27 larsks, I'll try 19:54:46 but after reviewing docs some more 19:54:55 I think setting the route up by hand 19:55:25 to route public ips to br-ex should work too 19:55:34 You shouldn't need to set up the route by hand. It should get created as a byproduct of setting the ip on br-ex. 19:55:51 yes you are right 19:57:47 I see that running "ifup br-ex" on my allinone, after rebooting, does not properly configure the interface. I'm looking into why right now. 19:58:06 larsks, ifdown and ifup worked for me 19:59:01 That works, but something isn't working at boot. There may be a race condition or something with OVS getting set up. 20:00:24 larsks, race condition between bringing up the interfaces and OVS? 20:00:45 larsks, is that what you meant? 20:01:07 Yes, something like that. 20:01:25 that;s not really an openstack bug tho :) 20:01:51 but yeah network host should make sure that the interfaces are up 20:03:26 Sure, but it's still a bug that results in a non-functioning openstack deployment if you reboot your system. 20:03:34 That's no fun. 20:03:35 which service creates that bridge? 
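Converting the all-in-one from local to GRE tenant networks, as larsks describes, roughly comes down to the following on each node; the [OVS] section and option names match the Havana openvswitch plugin, and the restart list is a sketch rather than the exhaustive one from the linked doc:

    openstack-config --set /etc/neutron/plugin.ini OVS tenant_network_type gre
    openstack-config --set /etc/neutron/plugin.ini OVS enable_tunneling True
    openstack-config --set /etc/neutron/plugin.ini OVS tunnel_id_ranges 1:1000
    openstack-config --set /etc/neutron/plugin.ini OVS local_ip <this-node-mgmt-ip>
    systemctl restart neutron-openvswitch-agent    # plus neutron-server on the controller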
I assume l3 agent since it's the otherside of the namespace that des the routing 20:03:46 I may have a related issue, or I may not. But basically I followed test day scenario with packstack, installed everything, but instance networking is dead for some reason. 20:04:01 setenforce 0 zaitcev ? 20:04:07 zaitcev: For an all-in-one install? On RHEL/F20/something else? 20:04:14 policy is totaly broken on fed20 20:04:37 anyway larsks - should l3-agent make sure it's up maybe? 20:06:07 I don't think so. External connectivity is sort of the province of the local administrator. Depending on your configuration, you may not want an address on br-ex -- maybe you're using a real router somewhere else and you just need another interface in the bridge. 20:06:19 ndipanov, larsks: More specifically - http://paste.fedoraproject.org/66823/ 20:06:45 larsks, well in that case you will disable the l3 agent no? 20:07:30 I don't think so, no. The l3 agent will still be responsible for hooking up routers to this network. 20:07:46 The l3 agent handles the openstack side of things, and you handle the "local environment" side of things. 20:07:57 larsks, hmmm ok 20:08:18 zaitcev, ip netns list? 20:08:30 * ndipanov thinks he knows neutron now... :) 20:08:41 [root@guren ~(keystone_admin)]# ip netns list 20:08:41 qdhcp-5e9e313e-f13c-4b7e-b522-668cde803bba 20:08:59 zaitcev: Run "ip netns exec qdhcp-5e9e313e-f13c-4b7e-b522-668cde803bba ip addr" 20:09:12 zaitcev, also - you won't be able to ping them on their private IPs 20:09:24 that network is not routed form the node 20:09:25 Well, you can, by running "ip netns exec ... ping ..." 20:09:35 larsks, yeah ok 20:09:49 Okay. How do I fix this? 20:10:06 It's not clear there's anything to fix from what you've shown us. WHat's not working, exactly? 20:10:25 I want my instances to have network connectivity. 20:10:50 zaitcev, do they get ips 20:10:52 ? 20:11:08 private ips I mean 20:11:23 If you want them to have external connectivity ("external" meaning "outside of openstack"), you'll need to create an external network, create a router, add an interface on localnet to the router, set up a gateway for the router, etc... 20:11:50 Nice. Some all-in-one this turned out to be. 20:12:00 Did you install by running "packstack --allinone"? 20:12:15 Actually, wait. No, I modified the answer file. 20:12:44 It had CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local 20:13:05 That's a no-op if you're using OVS (you'd want CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE). 20:13:49 sorry, CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local 20:14:40 Yeah, that's what is used by default for an all-in-one install (but won't work if you add additional nodes). 20:17:15 The --allinone install creates networks named "public" and "private" as well as the necessary router to connect things. 20:21:22 larsks, got networking working wooohooo 20:21:37 \o/ 20:40:52 hey larsks 20:41:02 shouldn't br-tun have an IP on both nodes? 20:41:39 No, br-tun doesn't have an address. It just bridges br-int to the GRE or VXLAN tunnel. 20:42:53 larsks, even with two nodes 20:43:00 I mean it has to be on the management network,, no? 20:44:18 Not directly, no. Br-tun has a GRE endpoint that connects to an ip address on your network controller. Your compute node needs to have a route to that address, but (a) that address has nothing to do with br-tun, and (b) it doesn't even need to be a directly connected network. 20:44:36 Starting up a GRE tunnel is no different from any other IP connection (e.g., ssh). 
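Spelling out the steps larsks lists for external connectivity; the names, CIDR and allocation pool below are placeholders, not values from this deployment:

    neutron net-create public --router:external=True
    neutron subnet-create public 172.24.4.0/24 --name public-subnet --disable-dhcp \
            --allocation-pool start=172.24.4.10,end=172.24.4.100
    neutron router-create router1
    neutron router-gateway-set router1 public
    neutron router-interface-add router1 <private-subnet-id>
    # Then hand an instance a floating IP from the public pool
    neutron floatingip-create public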
20:45:36 So typically, a compute node will have an interface dedicated to instance traffic (we'll call it "eth2"), and that *interface* will have an address on a network dedicated to instance traffic (or maybe just the management network). 20:45:38 * ndipanov parsing 20:45:53 larsks, OK 20:46:05 ...but that interface is not itself connected to br-tun. It's just used as the local endpoint for the GRE tunnel. 20:46:20 Is that making sense? 20:46:32 larsks, yeah 20:47:29 GRE tunnelled frame is basically in an IP packet that gets routed like any other IP packer orriginatig from a process on the machine 20:47:33 is that correct 20:47:47 That's correct. 20:48:01 makes sense 20:48:43 and it figures out the to part bcs that's how the tunnels are set up 20:48:56 and then the vlan tagging untagging is done by br-int 20:48:59 right? 20:49:49 ...maybe? That is, it depends on what direction you're talking about. br-tun translates between vlan tags and the tunnel id equivalents. br-int applies vlan tags to traffic emitted by an instance, and removes vlan tags from traffic going to an instance. 20:50:40 larsks, that is what I meant (not really but clear now :) ) 20:51:17 so I think that firewalld still messes with a bunch of stuff 20:51:46 but will try it tomorrow again on a fresh set up 20:52:03 If you start with the F20 cloud images, they do not have firewalld installed. 20:53:15 but killing firewalld or disanling it should also work right? 20:54:58 If you want the firewall rules to show up automatically at boot, you'll also need to install iptables-services and "systemctl enable iptables". 20:56:54 Also, both stop and disable the firewalld service (because you don't want it coming back next time you reboot) 20:59:14 larsks, yeah that's what I meant sorry - and enable iptables service 20:59:42 k - I'm off thanks for the help!! 21:00:00 No problem! 21:49:52 * kashyap is going to end the meetbot IRC meeting tomorrow morning (CET) which he started yesterday morning. Folks in US/other geos might still be testing, etc. 01:22:27 ? - in the case where the cinder-volumes VG is linked to a loopback device (all-in-one installation), and I now want to use a real 100GB LVM partition , do I simply create a new VG (e.g. - cinder-volumes-lvm), update the volume_group parameter in cinder.conf and restart? 05:46:06 rdo 2 node installtion. how to make it highly available? 07:22:47 Morning; /me is going to end the meetbot IRC meeting now. 07:22:49 #endmeeting