13:08:47 #startmeeting openstack_test_day
13:08:47 Meeting started Thu Mar 8 13:08:47 2012 UTC. The chair is rbergeron. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:08:47 Useful Commands: #action #agreed #halp #info #idea #link #topic.
13:08:53 #meetingname openstack_test_day
13:08:53 The meeting name has been set to 'openstack_test_day'
13:10:26 has it added a bridge that screwed up your interface?
13:10:26 great! I've been able to connect to the deployed instance :D
13:12:15 will keep testing after lunch :D good work guys ;)
13:12:24 #chair danpb pixelbeat markmc
13:12:24 Current chairs: danpb markmc pixelbeat rbergeron
13:12:28 feel free to share those around
13:17:12 eglynn_: found my problem, a bit obvious when I looked at nova.conf (instead of grepping for public_interface)
13:17:30 it's public_interface=em1 not --public_interface=em1
13:17:36 fixed wiki
13:18:19 removed floating IPs and recreated them and now back on track
13:20:29 eglynn_: If you are running libvirtd inside a VM, I think it needs to use a different address range for virbr0 than in its host.
13:21:37 * derekh goes for lunch
13:22:20 #chair derekh russellb rkukura
13:22:20 Current chairs: danpb derekh markmc pixelbeat rbergeron rkukura russellb
13:32:52 rkukura: yep, I think that's the nub of it ... 192.168.122.1 attached to virbr0 on host, also associated with nova instance on VM
13:33:52 derekh: them pesky hyphens, I tripped up on something similar with --allow_resize_to_same_host yesterday
13:35:00 eglynn_: I hit the same issue last night without even running nova. It appeared when rebooting my VM after I had enabled libvirtd.
13:42:11 disassociating 192.168.122.1 from the running instance and then releasing the address did the trick
13:43:22 eglynn: kudos!
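[Editor's aside: the gotcha derekh hit above is that nova.conf entries take no leading dashes (public_interface=em1, not --public_interface=em1), even though the same names appear as --flags on the command line. A minimal sketch of normalizing a legacy-style entry; the helper name is illustrative, not part of nova.]

```python
import re

def normalize_flag(line: str) -> str:
    """Drop a legacy '--' prefix from a nova.conf flagfile entry, if present."""
    return re.sub(r"^--", "", line.strip())

print(normalize_flag("--public_interface=em1"))  # public_interface=em1
print(normalize_flag("public_interface=em1"))    # unchanged
```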
13:45:42 updating http://fedoraproject.org/wiki/QA:Testcase_add_SSH_keypair_to_OpenStack to indicate that the /etc/nova/api-paste.ini should use keystone
13:48:02 eglynn_, you've 192.168.122.1 on virbr0 in the host
13:48:15 eglynn_, do you also have 192.168.122.1 on virbr0 in the guest?
13:48:15 markmc: yep, that was the prob
13:48:29 * eglynn_ more conservative this time with address range for floaters
13:48:35 sudo nova-manage floating create 192.168.122.150/30
13:48:49 2 is enough for any man ...
13:48:59 eglynn_, do you also have 192.168.122.1 on virbr0 in the guest?
13:49:12 eglynn_, is there a virbr0 in the guest at all?
13:49:22 eglynn_, i.e. your Fedora 17 VM where you're running openstack ?
13:50:01 markmc: 192.168.122.1 is the virbr0 on the *host* as opposed to my f17 VM
13:50:17 eglynn_, ok, so there's no virbr0 in the f17 VM ?
13:50:29 markmc: nope
13:50:46 eglynn_, good, just wanted to make sure there wasn't a regression of http://libvirt.org/git/?p=libvirt.git;a=commit;h=a83fe2c23
13:51:11 markmc: cool
13:54:33 * eglynn_ turning hypoglycemic ... time for lunch ;)
13:55:17 markmc: gone from a loose ruck to a scrum - i have an instance running!
13:55:29 gkotton, awesome :)
13:55:38 gkotton, just noticed "Select the keystone option in [pipeline:ec2cloud]"
13:55:51 gkotton, think that's covered in
13:55:52 https://fedoraproject.org/wiki/QA:Testcase_start_OpenStack_Nova_services
13:56:02 gkotton, sudo sed -i -e 's/# \(pipeline = .*\keystone\)/\1/g' /etc/nova/api-paste.ini
13:56:05 markmc: eglynn found this last night :)
13:56:07 markmc: Don't be so sure there isn't a regression - I definitely had a virbr0 on 192.168.122.1 in my F17 VM last night.
13:56:10 gkotton, could you double check that?
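[Editor's aside: the `nova-manage floating create 192.168.122.150/30` above carves out a deliberately tiny floating-IP range; a /30 leaves just two usable host addresses, hence "2 is enough for any man". This can be checked with Python's stdlib ipaddress module (illustrative only; nova-manage does its own parsing).]

```python
import ipaddress

# strict=False lets us pass a host address (…150) rather than
# the network base address (…148).
net = ipaddress.ip_network("192.168.122.150/30", strict=False)

usable = [str(h) for h in net.hosts()]
print(net)     # 192.168.122.148/30
print(usable)  # ['192.168.122.149', '192.168.122.150']
```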
13:56:23 gkotton, yeah, we changed the instructions slightly to cover the ec2 case too
13:56:26 gkotton, it used to be
13:56:32 markmc: will do
13:56:32 gkotton, sudo sed -i -e 's/# \(pipeline = .*\keystonecontext\)/\1/g' /etc/nova/api-paste.ini
13:57:07 rkukura, cool; if you can confirm, you should definitely bugzilla it and ping crobinso or danpb about it
13:57:55 gkotton, so, have you played rugby in your time? :)
14:01:18 markmc: a little - sadly not any good. i watch it well
14:01:39 gkotton, similar here :)
14:07:54 markmc: sudo sed -i -e 's/ec2noauth/ec2keystoneauth/' api-paste.ini
14:08:27 markmc: sudo sed -i -e 's/ec2noauth/ec2keystoneauth/' /etc/nova/api-paste.ini
14:08:50 gkotton, after this: https://fedoraproject.org/wiki/QA:Testcase_start_OpenStack_Nova_services
14:08:54 gkotton, I'm left with:
14:09:00 [pipeline:ec2cloud]
14:09:00 pipeline = ec2faultwrap logrequest ec2noauth cloudrequest authorizer validator ec2executor
14:09:06 pipeline = ec2faultwrap logrequest ec2keystoneauth cloudrequest authorizer validator ec2executor
14:09:17 gkotton, I think the idea is that the second pipeline wins?
14:09:37 gkotton, i.e. the current sed in the wiki should uncomment the ec2keystoneauth line
14:09:42 markmc, true
14:11:39 markmc: on trunk (as of this week) the pipeline editing is no longer needed. For Essex RC1 you'll just need to set --auth_strategy=keystone in your nova.conf.
14:11:58 markmc: not true for the E4 milestones however.
14:12:08 dprince, yep, cool
14:12:24 dprince, pity it's totally different from the way glance does it, but ...
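[Editor's aside: the sed one-liners above flip /etc/nova/api-paste.ini from the noauth to the keystone pipeline by uncommenting the keystone variant. An illustrative Python equivalent of that uncomment step, run against a hypothetical two-line excerpt (the real file and regex may differ slightly):]

```python
import re

# Hypothetical excerpt of the [pipeline:ec2cloud] section: the keystone
# variant ships commented out while the noauth variant is active.
snippet = """\
[pipeline:ec2cloud]
pipeline = ec2faultwrap logrequest ec2noauth cloudrequest authorizer validator ec2executor
# pipeline = ec2faultwrap logrequest ec2keystoneauth cloudrequest authorizer validator ec2executor
"""

# Uncomment any 'pipeline = ... keystone ...' line, mirroring the sed above.
fixed = re.sub(r"(?m)^# (pipeline = .*keystone.*)$", r"\1", snippet)
print(fixed)
```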
:)
14:12:43 dprince/markmc: intertwining the auth strategy with pipeline selection struck me as odd
14:13:04 given that non-auth middleware like caching or ratelimiting may also be involved
14:13:17 eglynn_, I like that it avoids the combinatorial explosion thing
14:13:33 eglynn_, with non-auth, you could have all of them in the pipeline by default with config flags to make them no-op
14:14:04 eglynn_, but I still think it would have been more sane to just copy glance :)
14:14:04 markmc: yep vish suggested a similar solution, seems inconsistent to me though
14:14:45 as in, should I disable filter X by removing it from the pipeline or leaving it in and disabling via a separate flag
14:15:03 they're slightly different filters, though
14:15:07 i.e. for auth, you pick one
14:15:08 eglynn_/markmc: Glance and Nova have always done things differently... why would we want to change that now? :)
14:15:13 for the others, you enable/disable them
14:16:03 yeah I guess if I squint I can see it ...
14:16:41 dprince, not to mention keystone...
14:26:12 eglynn_, I have solved the MYSTERY
14:26:32 eglynn_, https://fedoraproject.org/w/index.php?title=QA%3ATestcase_download_and_register_guest_images_with_OpenStack&action=history&year=&month=-1
14:27:11 eglynn_, https://fedoraproject.org/w/index.php?title=QA:Testcase_register_images_with_OpenStack&action=history
14:27:26 eglynn_, looks like pbrady created a duplicate page and you modified the duplicate
14:27:41 pixelbeat, there's a "move" button at the top of the page
14:28:50 markmc: good sleuthing ...
14:29:25 eglynn_: would you be opposed to making glance match how nova does it?
14:29:40 * markmc sets up a redirect
14:29:46 markmc, so latest version is: QA:Testcase register images with OpenStack ?
14:29:53 apevec, yep
14:30:04 eglynn_: with regard to the paste pipelines....
14:30:52 dprince: well, it seems wrong to me to always include say the caching filters and then turn them off via a separate flag
14:31:41 eglynn_: I would just leave caching on actually.
14:33:23 speaking of which, we should test caching
14:33:58 i.e. switch to flavor = keystone+caching
14:34:14 markmc: yes we should. And if it was always on by default... well then how great would that be (you'd always be testing it).
14:34:27 dprince: so, the rule would be ...
14:34:32 1. if two filters are mutually exclusion, switch pipeline to include/exclude, but
14:34:34 2. if a filter is optional, leave it in and disable/enable via a flag
14:34:54 s/mutually exclusion/mutually exclusive/
14:35:11 I'm suggesting that our default pipelines just always use caching.
14:35:51 The glance-config file option (whatever we call it... flavor, auth_strategy, etc) is used to choose the pipeline based on the auth strategy.
14:36:12 If I am an advanced user and want to disable caching... I can. But I have to edit the paste file myself.
14:36:17 caching is a valid thing to allow people to easily disable
14:36:30 dprince: but isn't one major win of implementing that logic as a filter that it's easy to include/exclude
14:36:43 dprince: for ex, if the backend store was S3
14:37:09 dprince: not sure I'd want to have to allocate loads of diskspace locally for unpredictable caching requirements
14:37:47 eglynn_: Okay. I suppose I can go for leaving it in. Just trying to DRY it up a bit I guess.
14:38:38 markmc: to your comment.... ratelimiting is also a valid thing to want to disable...
14:38:52 dprince, right
14:38:56 markmc: And I can easily disable it. By editing the paste file.
14:39:26 dprince, the idea with eglynn_'s flavor stuff was to handle most common cases without requiring editing paste.ini
14:39:48 dprince, granted, doesn't work for keystone admin password/token
14:40:59 markmc: Sure. And I like it. I do.
Just *playing the devil's advocate* a bit here to see if perhaps we could DRY up the implementation a bit... cover the most common case... and in the end users can still always edit the paste file for advanced things (disabling the cache) anyway.
14:41:48 dprince, right, just saying that disabling rate limiting or caching is a valid thing to try and cover in glance-api.conf
14:42:12 dprince, and I think vish's suggestion of having flags to disable some filters by making them a no-op works
14:42:14 markmc, eglynn_ I mentioned the page move issue to eglynn_ in a private msg, though on consideration I've not received any responses to any private messages today so perhaps my irc client is broken :)
14:42:24 dprince, while still DRYing up the api-paste.ini file
14:42:36 markmc: well, the rate limiting example was from Nova. And there currently isn't a config option (that I know of) to disable it.
14:43:40 markmc: I like covering it in the config file. I suppose having separate configs for them is desirable however.
14:49:41 gkotton: did you sort out your issue with the yum lock being hogged by another process?
14:49:57 * eglynn seeing the same thing when installing oz
14:50:56 eglynn, PackageKit again?
14:51:22 Anybody hit problems with the oz config? http://fpaste.org/B9sz/
14:52:36 nice
14:52:39 "libvirt.libvirtError: unsupported configuration: Only the first console can be a serial port"
14:53:17 markmc: turned out to be another stalled yum process from earlier on today, just killed it ...
14:53:49 eglynn, ah, ok
14:53:50 any ideas, should I try ?
14:54:13 danpb, you've tried oz lately, right? ^^
14:54:28 in smoketest do we want to change > sudo pip-python install nova-adminclient
14:54:33 yp use yum instead?
14:54:41 derekh, yes, defo
14:54:49 s/yp/to/
14:54:51 k
14:55:00 eglynn, gkotton, for PackageKit thing, try 'yum erase PackageKit-yum-plugin'
14:56:04 markmc: that's a sign of an outdated oz install
14:56:13 derekh: what version of oz ?
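[Editor's aside: the "flavor = keystone+caching" idea discussed above composes a paste pipeline from a short config value instead of hand-editing api-paste.ini. A hypothetical sketch of how a flavor string could select filters; the mapping and names are illustrative, not glance's actual implementation.]

```python
# Hypothetical mapping from flavor fragments to paste filters; glance's
# real "flavor" mechanism works differently in detail.
FILTERS = {
    "keystone": ["authtoken", "context"],
    "caching": ["cache"],
}

def build_pipeline(flavor: str, app: str = "apiv1app") -> list:
    """Compose a filter pipeline from a '+'-separated flavor string."""
    filters = []
    for part in flavor.split("+"):
        filters.extend(FILTERS.get(part, []))
    return filters + [app]

print(build_pipeline("keystone+caching"))
# ['authtoken', 'context', 'cache', 'apiv1app']
```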
14:56:21 eglynn: tempest 5 errors is that expected?
14:56:44 danpb: oz-0.7.0-4.fc17.noarch
14:57:00 derekh: yep that's the right ballpark, I only got a clean run on an EC2 large instance
14:57:27 derekh: (as opposed to a puny VM running on my laptop)
14:57:33 hmm, how big is that? I'm running on bare metal
14:57:45 eglynn, markmc: yes - sorted, thanks!
14:57:47 derekh: timeout failures and quota exceeded on floating IPs
14:58:05 ahh ok, I'll move forward so
14:58:13 cool
14:58:35 you could try the latest oz like:
14:58:37 host=repos.fedorapeople.org
14:58:38 when i deploy the vm it does not get an ip address. any ideas?
14:58:44 file=oz-0.8.0-1.fc15.noarch.rpm
14:58:49 yum install http://$host/repos/aeolus/oz/0.8.0/packages/fedora-15/x86_64/$file
14:59:06 yeah the 0.7.0 version of oz is fscked
14:59:07 gkotton: you created the testnet network?
14:59:13 it creates multiple tags
14:59:31 * danpb wonders why fedora doesn't have a newer build
15:00:04 danpb, good question
15:00:05 eglynn: yes, followed the instructions ...
15:00:21 gkotton, you mean there's no IP address listed by euca-describe-instances ?
15:00:30 gkotton, I'm seeing that too, but the VM has 10.0.0.2
15:01:05 ok, I'll give it a go once I get through some other tests
15:01:06 gkotton, 'nova list' shows the IP address
15:01:08 gkotton: no IP reported in nova list?
15:01:13 k
15:01:16 oz that is
15:01:18 markmc, ok. nova list shows the ip
15:01:36 * markmc files a "no IP address listed by euca-describe-instances" bug
15:01:58 danpb, markmc I'll look into updating oz
15:02:05 pixelbeat, great
15:03:40 ah, you know
15:03:50 it's listing a host name
15:04:02 but it doesn't resolve
15:04:05 "server-1"
15:04:42 markmc, eglynn: please look at http://fpaste.org/mVXH/
15:05:30 correction - http://fpaste.org/y27F/
15:05:47 gkotton: fpaste.org says: Error 500: Sorry, you broke our server. You might have reached the 512KiB limit!
15:05:50 gkotton, what's the problem?
15:06:50 markmc - network with deployed vm - http://fpaste.org/y27F/
15:07:17 latest oz-install failed with: "oz.OzException.OzException: Could not find a viable libvirt NAT bridge, install cannot continue"
15:07:19 eglynn: http://fpaste.org/y27F/
15:07:28 k
15:07:46 eglynn, yeah, that's because virbr0 is disabled
15:08:05 eglynn, try 'virsh net-edit default' and give it a 192.168.123.0 range
15:08:11 eglynn, then 'virsh net-start default'
15:08:13 eglynn, AFAIR
15:08:23 * eglynn trying now, thx ...
15:08:43 gkotton, don't understand what your question is?
15:09:14 markmc - i am unable to ping the deployed VM nor access via ssh. sorry for being cryptic
15:09:27 gkotton, can't ping 10.0.0.2 ?
15:09:29 markmc - i am following the test day instructions
15:09:38 markmc: nope
15:10:05 gkotton, hmm, it's working for me
15:10:19 markmc: ok
15:16:31 markmc: was that 'virsh net-edit default' to be run on the f17 vm or the underlying host?
15:16:35 /me ran it on the VM but net-start failed with:
15:16:49 [eglynn@f17alpha ~]$ sudo virsh net-start default
15:16:56 error: Failed to start network default
15:17:06 error: internal error Network is already in use by interface eth0
15:17:32 $> sudo virsh net-dumpxml default
15:17:34 should have:
15:17:40 [nova@localhost novacreds]$ euca-allocate-address i-00000002
15:17:41 NoMoreFloatingIps: Zero floating ips available.
15:17:43
15:17:54
15:18:30 markmc: your end address is less than your start address there
15:18:31 gkotton: I saw that, try a different IP range, e.g. /30
15:18:42 danpb, point
15:18:44
15:18:53 eglynn: ok
15:19:51 markmc: cool, I'd the right range but the wrong IP addr (still 122.1 ...)
15:22:39 gkotton: run: sudo nova-manage floating list, before euca-allocate-address to ensure the floaters have been created
15:24:27 I am still unable to start parts of Nova. nova-cert, nova-compute, nova-network, nova-scheduler are ok, nova-api and nova-volume fail to start. I'm going to have breakfast and be back at it.
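[Editor's aside: the 'virsh net-edit default' advice above moves the guest's default NAT network onto 192.168.123.0/24 so it doesn't collide with the host's 192.168.122.0/24 virbr0. A sketch of that edit applied to a minimal stand-in for libvirt's network XML; a real `virsh net-dumpxml default` output has more elements.]

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for `virsh net-dumpxml default`.
xml = """\
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
"""

root = ET.fromstring(xml)
# Shift every 192.168.122.x address to 192.168.123.x, as suggested above.
for elem in root.iter():
    for attr, value in list(elem.attrib.items()):
        elem.set(attr, value.replace("192.168.122.", "192.168.123."))

print(root.find("ip").get("address"))  # 192.168.123.1
```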
15:28:08 eglynn: sadly this did not help. [nova@localhost novacreds]$ euca-allocate-address i-00000003
15:28:08 NoMoreFloatingIps: Zero floating ips available.
15:28:10 zaitcev, It might be quicker to nuke from orbit and follow the instructions again. There have been a few updates in the meantime.
15:30:29 is someone getting these errors: /var/log/nova/compute.log:(nova.manager): TRACE: AttributeError: 'dict' object has no attribute 'state'
15:30:32 ?
15:32:07 TripleDES: https://review.openstack.org/#change,4714
15:33:08 dprince: thanks
15:33:47 at the floating ip test page I found a typo in a command
15:33:48 sudo systemctl restart openstack-nova-network
15:33:58 syntax error...
15:35:08 thanks, fixed now
15:35:17 * markmc broke it like 30 seconds ago :)
15:41:23 thanks markmc :)
15:42:51 TripleDES, np :)
15:44:31 vaneldik, sorry about my messing with your results, I was just confused :)
15:56:59 Has anybody tried the smoketests yet? I'm getting a lot of the ImageTests failing
15:57:50 derekh: still stuck on oz
15:58:16 derekh: are the failures auth related?
15:59:57 dprince, I'll include that 'state' fix in any package update
16:00:02 eglynn: I got a whole load of these
16:00:13 (nova.manager): TRACE: Stderr: "qemu-img: Could not open '/var/lib/nova/instances/instance-00000016/disk': No such file or directory\n"
16:00:27 I'm gonna clear logs and try again so I know I'm not looking at old logs
16:00:38 pixelbeat: ack
16:00:45 derekh, have you enough space?
16:01:18 that 'disk' file is created by nova downloading the image from glance to a staging area and then updating that file
16:01:27 If you ran out of space then ...
16:01:40 A distinct possibility if using oz generated (large) images
16:02:07 markmc, can you set the test day URL for the status?
16:02:46 pixelbeat: loads of space on the host filesystem, not sure about the nova-volumes volume
16:03:07 that shouldn't matter I think
16:03:52 markmc, thanks
16:04:03 ayoung, np
16:04:09 derekh, maybe the image didn't register with glance correctly. does `glance index` list it
16:06:00 pixelbeat: it returns 7 images, so far I've tried 3 of them with python ./run_tests.py --test_image=ami-000XXX
16:06:30 pixelbeat: and also tried with no --test_image option
16:11:27 If anyone is on to the keystone step, make sure you yum install python-keystoneclient
16:12:04 ayoung, if you get the latest openstack-keystone package, it pulls in the client
16:12:12 openstack-keystone-2012.1-0.10.e4.fc17.noarch
16:12:27 markmc, which version? I ran the yum line from http://fedoraproject.org/wiki/QA:Testcase_install_OpenStack_packages
16:12:45 I've got openstack-keystone-2012.1-0.9.e4.fc17.noarch
16:13:17 did you run the `yum clean all` command from the first line?
16:13:40 pixelbeat, no, but this was a brand new install
16:13:41 ayoung: isn't 'yum install python-keystoneclient' already in the instructions?
I added it yesterday IIRC
16:14:01 eglynn, we deleted it because we don't need it anymore
16:14:07 k
16:14:45 ayoung, try https://fedoraproject.org/wiki/QA:Testcase_install_OpenStack_packages again
16:14:52 markmc, will do
16:14:55 ayoung, if 'yum clean all' doesn't work
16:27:26 ayoung, your mirror might be out of date, grab keystone-0.10 from koji
16:27:53 apevec, yeah, I just ran a yum update, and it still got .9
16:28:58 I have also put the latest glance/keystone at http://repos.fedorapeople.org/repos/apevec/openstack-preview/fedora-17/noarch/
16:30:03 and if somebody wants to try on Fedora 16: https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17#Preview_Repository_for_Fedora_16
16:31:25 apevec, ah, sweet
16:31:32 apevec, that might be worth posting to the list about
16:31:39 apevec, "look you can test on F-16 too"
16:32:00 restarted everything, cleared logs and reran smoketests, got even more errors now http://fpaste.org/JqjL/
16:32:41 The one I'm wondering about is "Failed to schedule_run_instance: No valid host was found. Is the appropriate service running?"
16:32:54 anybody seen that one before?
16:33:39 We should have a script along with the common group of scripts (setup_db etc.) that checks the status of all openstack services installed
16:33:43 derekh: Hmmm. Is nova compute running?
16:33:56 openstack_status or something
16:34:30 yup
16:34:34 derekh: That usually means the nova scheduler (probably Chance for E4) can't find a nova compute instance.
16:35:44 derekh: restart nova-scheduler?
16:35:45 systemctl status openstack-nova-compute # < is reporting it as running
16:35:59 derekh: then try to create an instance again
16:36:07 ok
16:38:16 Any ideas: RESERVATION r-r0sjkgce d530a84aadf04f1b92e888addc708b3a default
16:38:17 INSTANCE i-00000004 ami-00000004 server-4 server-4 error nova_key (d530a84aadf04f1b92e888addc708b3a, localhost.localdomain) 0 m1.small 2012-03-08T16:36:59Z nova aki-00000002 ari-00000003
16:38:31 There are errors when I deploy the image
16:38:47 gkotton, should be something useful in nova-compute.log
16:39:30 what's the default limit for the number of running instances?
16:39:32 10?
16:39:46 derekh: per compute host... yes
16:39:52 * dprince I *think*
16:40:07 2012-03-08 11:39:47 ERROR nova.manager [-] Error during ComputeManager.update_available_resource: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
16:40:07 that's how many I have, and euca-terminate-instances isn't removing them
16:40:21 dprince, that's per project, right ?
16:40:36 markmc: could be that too.
16:40:41 set quota_instances in nova.conf to change it
16:40:43 There are limits on both.
16:40:47 cfg.IntOpt('quota_instances',
16:40:47 default=10,
16:40:48 help='number of instances allowed per project'),
16:40:53 derekh: i have a similar problem - all in error state
16:41:00 right.
16:41:20 Does 'nova delete' work?
16:41:26 I got a mixture of running and error but I guess that's why no more will start
16:41:43 hi everyone! Decided to join the test day, got stuck at the "add ssh keypair" stage
16:42:09 euca-add-keypair just says Unauthorised
16:42:43 You may need to `source .keystonerc`
16:42:43 jekader, do you have your access/secret keys in novarc? and sourced that file?
16:42:45 dprince: nope
16:43:34 derekh: You might check the nova DB and look at the states. It sounds like there may be an issue where our state checks in the compute manager aren't allowing you to delete an instance...
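[Editor's aside: the cfg.IntOpt snippet quoted above is nova's per-project instance quota, default 10, which is why derekh's 10 stuck instances blocked any further launches. A toy sketch of the check that quota effectively imposes; names here are illustrative, not nova's actual code (the real logic lives in nova's quota module and involves reservations).]

```python
QUOTA_INSTANCES = 10  # mirrors cfg.IntOpt('quota_instances', default=10)

def can_launch(running: int, requested: int, quota: int = QUOTA_INSTANCES) -> bool:
    """Would launching `requested` more instances stay within the project quota?"""
    return running + requested <= quota

print(can_launch(running=9, requested=1))   # True
print(can_launch(running=10, requested=1))  # False: further launches are refused
```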
16:43:47 markmc - I don't have a novarc file
16:43:52 derekh: I'd check 'task_state' first.
16:44:01 derekh: It's in the instances table.
16:44:07 jekader, https://fedoraproject.org/wiki/QA:Testcase_add_SSH_keypair_to_OpenStack
16:44:08 on which step should that get created?
16:44:12 jekader, it shows you how to create it
16:44:37 jekader, oh, and hi ! :)
16:44:46 markmc hi
16:45:20 markmc - that wasn't there when I last visited this page ;)
16:45:21 will try now
16:45:23 jekader, ah, ok - we've been doing a lot of work on the instructions today
16:46:50 markmc - and I also don't have keystonerc for some reason
16:47:07 dprince: some are active/deleted and error one building
16:47:14 jekader, http://fedoraproject.org/wiki/QA:Testcase_setup_OpenStack_Keystone
16:47:54 thanks!
16:51:50 repeated the whole step, now it works!
16:51:57 derekh: I'm also seeing lots of the smoketests image tests fail
16:52:04 libvirt was not running :(
16:52:42 derekh: but in a different way to you I think, euca-upload-bundle and the likes failing with calls on the S3 API
16:53:24 eglynn: I think I had that error on my first runs, but I have a different problem now
16:53:25 * eglynn digging some more ...
16:53:55 eglynn: mainly I think because of failed smoketests leaving behind instances and now I can't delete them
16:55:00 markmc - now glance tells me I am unauthorized :(
16:55:51 jekader: did you follow the instructions to make glance keystone aware?
16:56:26 eglynn - probably not
16:56:33 jekader: http://fedoraproject.org/wiki/QA:Testcase_start_OpenStack_Glance_services
16:56:40 Change glance configuration to use keystone: ... etc
16:56:43 the testcases have all changed since I started :)
16:56:53 O THANKS< WILL DO|
16:57:03 sorry for the caps
16:57:48 np, fairly heavy edits to those pages this morn GMT, your cache may have been out of date
16:59:09 Glance packages need to have cache and scrubber config files included...
16:59:12 https://github.com/fedora-openstack/openstack-glance/commit/c02a66341e0f44a687465726861ddef9ce818d3d
17:00:00 How do I go about getting that into the "Fedora" master?
17:01:41 dprince, easiest to ask pixelbeat or russellb to cherry-pick it
17:02:12 will do
17:02:48 thanks pixelbeat
17:03:43 dprince, FYI the process is http://pkgs.fedoraproject.org/gitweb/?p=openstack-nova.git;a=blob;f=update-patches.txt;hb=HEAD
17:04:10 pixelbeat, it's only a packaging change
17:04:19 * markmc back later
17:05:58 markmc: thanks
17:06:05 * dprince is a nube
17:06:51 pushed to fedora master and f17
17:07:04 want me to do builds? or just want to see if you need to make more changes?
17:07:38 russellb: You can if others are interested. I'm building my own packages today so I'm Okay for now.
17:07:41 russellb, the spec versions are wrong there
17:08:06 russellb, thanks for taking it :)
17:08:15 ah will fix
17:08:43 * pixelbeat off for a while
17:14:16 sometimes my host manages to get into a state where it says this whenever trying to start a guest:
17:14:16 2012-03-08 12:13:51 WARNING nova.scheduler.manager [-] Failed to schedule_run_instance: No valid host was found.
17:14:26 anyone know what sort of thing causes that ?
17:14:26 looks like I have found an uncaught exception, nova-compute start does a few things, spits this out http://fpaste.org/OTGn/ and dies
17:14:37 danpb: I was getting that too
17:15:25 danpb: is your nova-compute server still running?
17:16:32 if it is i guess it could be a qpid related problem
17:16:44 but i don't think i've hit that
17:17:11 it claims to be running still
17:17:20 and i restarted it for good measure
17:17:20 2012-03-08 12:16:45 INFO nova.rpc.impl_qpid [-] Connected to AMQP server on localhost:5672
17:18:33 I think mine was too when I first hit the problem, then it went downhill from there
17:18:33 derekh: re.
those smoketests image test failures
17:18:52 derekh: in my case, the nova objectstore wasn't even running
17:18:59 derekh: sudo systemctl start openstack-nova-objectstore.service
17:19:38 derekh: so there hadn't been anything listening on the port 3333 specified in $S3_URL
17:20:24 dprince, russellb do scrubber and cache need a systemd service too?
17:21:00 mine wasn't either, started it now but I have gotten into a mess that that no longer can fix
17:21:16 k
17:21:40 * eglynn updating the wiki instructions in any case
17:21:46 derekh: yeah same here, even restarting every service doesn't fix it
17:21:55 there must be some piece of state somewhere that's confusing things
17:22:11 danpb: yup agreed
17:23:03 hmm, and now it just threw a fatal qpid timeout error and exited
17:23:24 http://fpaste.org/ZT1p/
17:25:36 danpb: I haven't seen that one
17:25:54 apevec: I think we'd want them to run periodically. Perhaps a cron job template or something.
17:26:41 hm, wish i knew what API call triggered that timeout. I'll look at updating the log to include that ...
17:26:52 apevec: prefetching and scrubbing the caches are background tasks...
17:28:09 apevec: the glance scrubber also has a non-daemon mode (the default IIRC) where it just blows away images pending deletion then exits
17:28:27 apevec: expecting to be run from cron or summat I guess
17:29:06 derekh: all image tests passing now for me, but a rake of fails & errors in the volume tests ...
17:30:16 derekh: ooooh, it seems i hit some resource limit
17:30:22 is tgtd running?
Also I've hit volume limits before when running smoketests
17:30:27 derekh: i had 3 vms running, and running a 4th failed
17:30:38 derekh: shutting down an existing VM lets me start another one
17:31:02 all nonsense, because this host has plenty of free resources
17:32:44 danpb: I can now shut them down, but I had to stick an if statement into nova/virt/libvirt/connection.py to instruct it to skip the instance that was causing it to fail
17:35:08 danpb: presumably you didn't change the default instance quota? (10)
17:35:52 derekh: yeah tgtd running all right, and the main volume test case earlier went fine
17:36:12 eglynn: nope, what do i use for that
17:38:40 danpb: check the quota with: sudo nova-manage project quota testproject
17:39:08 should see: instances: 10 (not 3)
17:40:27 eglynn: yes the quota limits are all above what i have used
17:40:37 derekh: it's just a space issue ... Volume group "nova-volumes" has insufficient free space (255 extents): 256 required.
17:41:08 derekh: I was mean with diskspace earlier and only allocated 2Gb to /var/lib/nova/nova-volumes.img
17:41:16 ahh ok
17:41:44 eglynn: oh, what gets stored in that file ?
17:42:59 danpb: the nova-volumes volume group, right?
17:43:18 sudo vgcreate nova-volumes $(sudo losetup --show -f /var/lib/nova/nova-volumes.img)
17:44:01 eglynn: yeah, i mean what is that vol group used for
17:44:30 aaahhhhh, i'm hitting a host mem limit - the guest size is 2 GB, and my host only has 8 GB, so nova refuses to run a 4th VM, not allowing any kind of overcommit
17:45:05 danpb: the smoke tests do a bunch of volume create/attach/detach
17:45:21 danpb: a-ha, nice catch
17:54:16 gotta scoot off for a couple hours, back online later ...
18:06:52 gotta go too, back later
18:50:53 Hah! I've got Nova to run, it needed those catalog updates.
19:25:09 anyone got problems removing a project/user on dashboard?
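[Editor's aside: the "insufficient free space (255 extents): 256 required" error above is plain extent arithmetic. Assuming LVM's default 4 MiB physical extent size, a 1 GiB test volume needs 256 extents, one more than the under-provisioned 2 GB nova-volumes.img had free. A quick sketch of the arithmetic:]

```python
import math

EXTENT_MIB = 4  # LVM's default physical extent size

def extents_needed(size_mib: int) -> int:
    """Number of physical extents a volume of size_mib requires."""
    return math.ceil(size_mib / EXTENT_MIB)

# A 1 GiB volume needs 256 extents ...
print(extents_needed(1024))  # 256
# ... but only 255 were free, hence the lvcreate failure above.
print(extents_needed(1024) <= 255)  # False
```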
19:41:08 hm, test cases seem to be broken at https://fedoraproject.org/wiki/QA:Testcase_add_SSH_keypair_to_OpenStack ... the instructions for creating the project/user the old way were removed, but nothing added
19:48:55 vi /etc/nova/api-paste.ini
19:49:07 er
19:52:31 ok, I switched the pipeline
20:13:25 russellb, you create a user in the keystone setup test case
20:14:12 zaitcev, which catalog updates? All catalogs are now in sample-data (catalog backend is sql)
20:14:21 do you have -0.10 keystone?
20:14:48 apevec: Exactly those updates that you added.
20:15:11 I ran yum --enablerepo=updates-testing update and received those updates
20:15:40 ok, b/c some mirrors are not up to date
20:52:44 almost succeeded launching an instance using the quantum linuxbridge plugin
20:53:07 cdub: need to run a quick errand, but can you help me debug in about 20 minutes?
20:53:32 rkukura, that sounds like progress! :)
20:54:43 I'm hopeful we'll get it working
20:54:50 back in a few
21:19:33 * rbergeron peeks in
21:34:08 Can someone literate with libvirt look at the following logs: http://fpaste.org/iP4q/
21:36:30 rkukura, selinux in permissive mode I assume?
21:37:26 could not open /dev/net/tun: Operation not permitted
21:37:34 that really looks like selinux
21:37:42 can't think offhand what else would cause it
21:37:44 cdub, you?
21:38:52 I did set it to permissive
21:43:40 rkukura, https://www.redhat.com/archives/libvirt-users/2012-January/msg00089.html
21:44:21 rkukura, https://bugzilla.redhat.com/show_bug.cgi?id=770020#c13
21:44:47 http://wiki.libvirt.org/page/Guest_won%27t_start_-_warning:_could_not_open_/dev/net/tun_%28%27generic_ethernet%27_interface%29
21:57:11 markmc: that looks relevant!
21:57:28 rkukura, doesn't it just :)
21:58:19 jfGit ?
22:04:38 markmc: The workaround is worth trying, but a better solution is needed.
22:05:26 rkukura, yep
22:08:59 how do you enable logging at debug level in nova now?
22:10:04 sudo openstack-config-set /etc/nova/nova.conf DEFAULT verbose True
22:37:43 Instance launched successfully with the quantum linuxbridge plugin by just adding "/dev/net/tun" to cgroup_device_acl in /etc/libvirt/qemu.conf!
22:39:58 eglynn: there?
22:40:05 rkukura: for linuxbridge? hmm, seems surprising, i could see it for ovs
22:42:19 I'm wondering why 10.0.0.0 is /24 and not /16 or /8 :-)
22:43:12 zaitcev: if you are creating a network, nova creates a row in a table for each IP, so you'd be creating lots more entries.
22:43:51 cdub: Both linuxbridge and openvswitch use type ethernet for the network device in the libvirt xml.
22:44:52 rkukura: sounds broken to me
22:45:09 My instance launched but did not get an IP via DHCP :-(
22:46:52 Does anyone know what to do about this: "KeypairNotFound: Keypair nova_key not found for user 89524bec14384dc6bfe211c6f853da6a"
22:47:37 euca-describe-keypairs now shows nothing
22:47:46 I swear I added the blasted thing
00:23:10 crobinso: quick question about https://bugzilla.redhat.com/show_bug.cgi?id=801208
00:23:41 crobinso: Do you think these could be related? http://fpaste.org/Ztp4/
00:24:59 crobinso: The traceback comes up in the apache error log on each click in the dashboard, but is coming from keystone.
19:17:58 #endmeeting
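[Editor's aside: rkukura's workaround above is to add "/dev/net/tun" to the cgroup_device_acl list in /etc/libvirt/qemu.conf. A sketch of that edit applied to a trimmed stand-in for the setting; the shipped default lists more device nodes, and this text munging is illustrative only.]

```python
import re

# Trimmed stand-in for the cgroup_device_acl setting in /etc/libvirt/qemu.conf.
conf = """\
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/kvm", "/dev/rtc", "/dev/hpet"
]
"""

def allow_tun(text: str) -> str:
    """Add /dev/net/tun to cgroup_device_acl if it isn't listed yet."""
    if '"/dev/net/tun"' in text:
        return text
    # Insert just before the closing bracket of the list.
    return re.sub(r"\n\]", ',\n    "/dev/net/tun"\n]', text, count=1)

print(allow_tun(conf))
```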