19:00:12 #startmeeting Cloud SIG
19:00:12 Meeting started Fri Mar 23 19:00:12 2012 UTC. The chair is rbergeron. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:12 Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:00:18 #meetingname Cloud SIG
19:00:18 The meeting name has been set to 'cloud_sig'
19:00:33 oh wait, i see multiple rackers now
19:00:41 * mdomsch is here
19:00:48 * rackerjoe Joe Breu is here
19:00:52 * skvidal is here
19:00:55 #topic Gathering of the Peeps
19:01:01 * rackerhacker might work at rackspace, can't remember
19:01:30 * rackerhacker could go for a cold breu about this time on a friday
19:01:34 rackerhacker: who? never heard of 'em
19:01:35 ;)
19:01:41 Peeps!
19:01:45 * gholms noms
19:01:50 yes, it's that time of the year
19:01:53 rbergeron: http://i3.kym-cdn.com/entries/icons/original/000/003/617/okayguy.jpg
19:01:54 * tdawson is here.
19:02:10 hey mr. dawson :)
19:02:24 mull: Just in time!
19:02:45 mull, coming to us live from a plane?
19:02:55 dgilmore: you about by chance?
19:03:03 rbergeron: si amiga
19:03:05 I'm in Rochester
19:03:09 word, okay
19:03:15 mull: ROC...'in
19:03:18 :)
19:03:20 with dokken
19:03:43 * rbergeron puts a blanket over her friday sillies
19:03:45 wokkin' actual (chinese food for lunch)
19:03:55 well played
19:04:00 mull: ohhhhhhhhhhh, nice
19:04:13 i will have to eventually "wien" you off the punniness as well
19:04:27 * rackerhacker gets out a shovel
19:04:33 * rbergeron waits for gholms to link the sad trombone sound
19:04:42 okay, shall we dance?
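MeetBot assembles the meeting minutes from command lines like `#topic`, `#info`, and `#action` embedded in the log above. As a hedged illustration (this is not MeetBot's actual implementation, just a minimal sketch of the same idea), extracting those commands from timestamped log lines could look like:

```python
import re

# The minutes commands MeetBot advertises in its banner above.
COMMANDS = ("action", "agreed", "halp", "info", "idea", "link", "topic")

# Matches e.g. "19:08:33 #info ec2 images are booting"
LINE_RE = re.compile(
    r"^(?P<time>\d\d:\d\d:\d\d)\s+#(?P<cmd>" + "|".join(COMMANDS) + r")\s+(?P<text>.*)$"
)

def extract_minutes(log_lines):
    """Return (time, command, text) tuples for every minutes command found."""
    items = []
    for line in log_lines:
        m = LINE_RE.match(line.strip())
        if m:
            items.append((m.group("time"), m.group("cmd"), m.group("text")))
    return items
```

For example, `extract_minutes(["19:08:33 #info ec2 images are booting", "19:08:40 hi"])` yields one tuple for the `#info` line and skips the chatter.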
19:04:50 #topic EC2, images, and so forth
19:05:13 dgilmore: heya, can you tell us what's shakin with the whole beta thing and other image types :)
19:05:33 rbergeron: sure
19:05:45 rbergeron: earlier this week i got ec2 images that booted
19:05:46 #chair gholms
19:05:46 Current chairs: gholms rbergeron
19:06:02 but for whatever reason i couldn't ssh in
19:06:37 Hiya, gregdek
19:06:40 i'm as we speak spinning up some f17 Beta rc images
19:06:43 * gregdek hullos.
19:06:52 gregdek: yo
19:06:54 so i'll see how they go
19:07:08 dgilmore: any thoughts yet on why not ssh'ing in?
19:07:23 if i can still boot but not login i'll take some more drastic measures to see what's going on
19:07:38 rbergeron: server side says client disconnected
19:07:42 * rbergeron wonders if error messages might give anyone else here an idea
19:07:46 client side says key was rejected
19:07:53 * gholms is still incredibly confused about that one
19:07:56 * rbergeron wonders if she could get faster freaking internet that isn't lagged to all hell
19:07:57 rbergeron: really no error messages
19:08:06 gholms: have you seen the same thing or just in discussion?
19:08:14 Discussion with dgilmore
19:08:33 #info ec2 images are booting; we can boot but not log in
19:08:50 #info server side says client disconnected; client side says key was rejected; not sure wtf is going on at the moment, but still plugging away
19:09:01 #info thank you to dgilmore for plugging away at this :)
19:09:10 It's almost like sshd doesn't like who it's talking to.
19:09:25 dgilmore: this is making from scratch or making from... the... um.. normal process?
19:10:07 rbergeron: same process as was used for f16
19:10:25 okay
19:10:37 well, perhaps meeting-minutes viewing will prod people into having some miraculous idea
19:10:44 how about $otherimages that have been requested?
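For the "server says disconnect, client says key rejected" symptom discussed above, a hedged debugging sketch (the instance ID, key path, and login user are hypothetical; the commands are standard euca2ools/OpenSSH ones):

```shell
# Check sshd/cloud-init messages on the instance console -- a missing or
# mangled injected public key usually shows up here.
euca-get-console-output i-0123abcd | grep -iE 'ssh|key|cloud-init'

# Verbose client-side view of exactly which keys are offered and where
# the server drops the connection.
ssh -vvv -i mykey.pem ec2-user@ec2-host.example.com
```

If the console output shows no public key being installed, the problem is key injection in the image, not sshd itself.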
19:10:46 oh and to make the ec2 images work they only support pv-grub
19:11:04 so we would need to make a completely different set for use in kvm etc
19:11:09 dgilmore: i can lend a hand on xen, not on ec2 specifically
19:11:38 dgilmore: the whole image? or just the kernel + ramdisk?
19:11:50 rackerhacker: ok. well the current images will boot if using pv-grub
19:12:07 skvidal: to work with our upload scripts i had to exclude grub2
19:12:07 dgilmore: that's the only way we've done it since F16 (w/pv-grub)
19:12:16 there is no bootloader installed in the images
19:14:04 rbergeron: so if we want otherimages it's fine
19:14:11 we just need to do them separately
19:14:31 okay, have we come to a concrete list of what those are or the one is, iirc?
19:14:51 and are we going to post that for beta for sanity check or not really planning on it
19:15:13 we should post for Beta
19:15:22 okay
19:15:54 can you drop a note to the mailing list letting everyone know what it is or i guess if nothing else "that what it is is available" when we hit beta?
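Context for the pv-grub point above: pv-grub boots by reading a GRUB-legacy menu from inside the guest image, so an image with no bootloader installed still needs a `/boot/grub/menu.lst`. A hedged sketch of what such a menu might contain (the kernel/initramfs version strings and root device are hypothetical; paths assume `/` and `/boot` on a single partition):

```
default 0
timeout 0

title Fedora 17 (pv-grub)
    root (hd0)
    kernel /boot/vmlinuz-3.3.0-1.fc17.x86_64 ro root=/dev/xvda1 console=hvc0
    initrd /boot/initramfs-3.3.0-1.fc17.x86_64.img
```

This is also why the same image can't boot under plain KVM as-is: there pv-grub isn't available, so a real bootloader would have to be installed, hence the separate image set.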
19:16:13 sure
19:16:21 #action dgilmore to post info about additional non-ec2 images being "ready to try" at beta (or beforehand with a heads-up of what is coming)
19:16:28 coolio, i'm gonna move on, thanks for being here :)
19:16:32 * rbergeron looks at clock
19:16:44 #topic Special Guest Star skvidal and The Cloud Stuff He's Doing
19:16:48 hi
19:16:48 #chair skvidal
19:16:48 Current chairs: gholms rbergeron skvidal
19:16:57 HI SETH
19:17:01 so fedora infrastructure has a plan
19:17:12 we're going to be building a eucalyptus cluster
19:17:20 for unicorns and magical ponies
19:17:28 and for random builders
19:17:32 and random testing
19:17:44 we've set up a test cluster using some out-of-warranty hw
19:18:07 and we have the structure/concept under our belts (we think :) and now we're in get-real-hw mode
19:18:30 right now we're hoping to have a blade center to devote to it with some goodly sized machines
19:18:32 #info Fedora Infra has a plan to build a eucalyptus cluster (for unicorns and magical ponies, and for builders and random testing)
19:18:46 the cluster will be eucalyptus 3.
19:18:51 Heh
19:19:03 #info test cluster is set up using out-of-warranty hw currently; structure/concept is under our belts, waiting on actual hardware to arrive
19:19:08 and I've been keeping track of everything I'm doing so, hopefully, others can duplicate this effort
19:19:13 #info will be eucalyptus 3.
19:19:23 skvidal: as in "how i did it so you can too" type of documentation?
19:19:23 the timeline for the hw and for the networking changes we have to get made is.... unknown at this point
19:19:44 rbergeron: well, to be fair it is more "how I did it so when I get eaten by a grue someone else can figure this out" documentation
19:19:49 but that's more or less the same thing
19:19:52 skvidal: is this all gonna be hooked up with puppet and whatnot too?
19:19:56 no
19:20:02 skvidal: raptor/bus
19:20:05 I doubt that.
19:20:18 the guest instances won't be puppeted
19:20:26 and I wouldn't want to force that on people I didn't like
19:20:42 the cluster itself may or may not be w/the rest of infrastructure
19:20:57 mainly b/c of network isolation
19:21:13 and infra will be the point of requesting "i need SOME CLOUD PLZ"?
19:21:23 the thing about this cluster is that we'd like to make it easier for fedora contributors to take advantage of
19:21:39 but we'd rather keep them separated from our protected infrastructure
19:21:42 for obvious reasons
19:22:19 * rbergeron nods
19:22:51 so i guess that part of the plan is probably still being worked out (how to request / how much you get / what you can use it for)
19:23:04 that's all a bit up in the air, I think
19:23:09 okay
19:23:24 up and running first? :) lol
19:23:25 * mdomsch would love a contributor to pick up the FTBFS work and run it on there
19:23:34 Ooh, there's an idea.
19:23:35 mdomsch: so would we all, I think.
19:24:06 but let's not get too far ahead of ourselves :)
19:24:15 I just wanted to update people on the general plan
19:24:17 * nirik wondered if that would be a good GSoC... but perhaps not.
19:24:19 1. test out euca - done
19:24:29 2. get hw/network/etc
19:24:38 3. get stuff in 2 set up into production
19:24:42 4. profit?
19:25:08 #info the hope is that this cloudy space will be easier for contributors to take advantage of, as it ideally will be separated from the more protected pieces of infrastructure
19:25:09 the euca folks have been helping me with issues
19:25:22 as I've gotten it set up on the junk boxes
19:25:23 underpants?
19:25:33 in case anyone is wondering we're using kvm - not xen
19:25:47 and I cannot, personally, imagine that changing.
19:25:54 #info it is using KVM, in case anyone is wondering
19:26:13 I think that's all?
19:26:22 cool. anyone have questions?
/me thanks seth for coming into the meeting today :)
19:27:27 skvidal: are we going to vote on the name of the cloud
19:27:43 #topic Events incoming
19:27:51 righty-o
19:27:55 #info OpenStack Summit/Conf is coming, see email I sent to list
19:27:58 Thanks, skvidal!
19:28:04 gholms: thank you
19:28:12 https://fedoraproject.org/wiki/OpenStackSummitConf_April2012
19:28:31 #info if you're going, plz sign up so I can harass you into sitting at the booth for at least a wee bit of time on Thurs/Fri
19:28:39 #info err, sweetly ask you
19:28:55 #link http://lists.fedoraproject.org/pipermail/cloud/2012-March/001333.html
19:29:16 I'm also kind of hoping for some help with general cloud sig collateral-type stuff; wouldn't it be fun to have a flyer, etc.
19:29:56 So if you are willing and able, or at least able to put down some ideas that i can transform into nicer english (because I'm clearly qualified with my great grammar and sentence structure), feel free to add in (links in mail link above)
19:30:01 So that's the first one.
19:30:03 Second one is
19:30:46 #info OpenCloudConf - April 30 - May 3
19:30:52 #link http://www.opencloudconf.com/
19:31:36 johnmark is wrangling this with dave nielsen at the silicon valley cloud center
19:31:51 I am guessing that at some point, given how soon it is, there will be information about how to actually participate
19:32:33 But there are opportunities for cloudiness of all types, so keep it in mind, i guess.
19:32:38 And that's about all i have on that.
19:32:46 More as that unfolds :)
19:32:52 #topic Feature Funj
19:33:00 #undo
19:33:00 Removing item from minutes:
19:33:01 #topic Feature Fun
19:33:06 That's better.
19:33:25 russellb, ayoung, i am not sure who else i saw roll in from the openstack crowd: wassup?
19:33:34 dprince perhaps
19:33:46 um
19:33:59 we've been doing a bunch of updates in the last week or so to RC1 of the various projects for the Essex release
19:34:10 Biggest thing in the OpenStack world is the incipience of Essex
19:34:18 And the planning stage for Folsome
19:34:18 so, continued testing to help make sure we don't break the world is good
19:34:27 Folsom
19:34:46 We will break the world, though
19:35:10 So really, we are asking you to help figure out early in which particular way we've broken it
19:35:13 * rbergeron hands ayoung a foot-long hot dog for his use of the word "incipience" in a sentence
19:35:50 \m/ _ _ \M/
19:35:54 Rackspace is working on a set of chef recipes for installation and testing of the fedora packages. any bugs will be reported upstream
19:36:31 rbergeron, one thing that has come up, and is a Fedora-wide issue, not just cloud, is the means by which we deploy web apps
19:36:32 rackerjoe: chef recipes for the fedora packages... on your own infrastructure or to go with openstack or ... ?
19:37:11 rackerjoe: also upstream to... chef or openstack or ?
19:37:11 This will primarily be used for our deployments but it is on github.com/rcbops for anyone to utilize
19:37:53 Our plan is to report openstack-specific bugs upstream to the openstack project. Any packaging bugs we'll file a bugzilla for ATM
19:37:54 rackerjoe: that is super cool, would you be willing to be connived into dropping a line to the mailing list about that? i am sure there are people who would be interested who may not actually slog through reading meeting minutes :)
19:38:24 #info rackerjoe is working on chef recipes for fedora packages, mostly for rax infrastructure deployments but you can find it....
19:38:26 Once it is in a state where it.. ahem.. always works.
Will do
19:38:28 #link github.com/rcbops
19:38:39 and dprince and derekh have been working on puppet stuff
19:38:42 rackerjoe: some people may be willing to help get it to that state :)
19:38:47 it's used with smokestack.openstack.org
19:39:12 correction: the chef recipes are for RCB Deployments of OpenStack (not the RAX public cloud infrastructure)
19:39:13 #info OpenStack: "the incipience of essex" - lots of updates in the last week or so to RC1 for various essex projects
19:39:21 #info plz test, wtb help there
19:39:46 #info correction: the chef recipes are for rcb deployments of openstack, not rax public cloud infrastructure
19:39:50 rackerjoe: gotcha
19:40:01 #info dprince and derekh are working on puppet-y things
19:40:10 russellb: what's this whole devstack business about
19:40:27 devstack is a script primarily used for upstream development
19:40:39 gets the whole stack up and running quickly from git checkouts
19:40:47 it should never be used for operational deployments
19:40:54 right
19:41:05 i'm currently hacking on some automated kickstarts to get openstack set up within VMs/servers (devstack-ish approach, but more for production)
19:41:19 but I find it handy while hacking on OpenStack code ... it previously was Ubuntu-only, now it's getting Fedora support
19:42:49 #info devstack is a script primarily used for upstream dev; gets the whole stack up and running quickly from git checkouts; not designed for operational deployments...
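For reference, devstack's typical developer workflow at the time looked like the sketch below (the repo URL reflects its 2012 home; treat the exact paths as an assumption, and remember the warning above: development only, never operational deployments):

```shell
# Fetch devstack and bring up the whole OpenStack stack from git checkouts.
git clone git://github.com/openstack-dev/devstack.git
cd devstack
# stack.sh reads settings (passwords, which services to enable) from a
# localrc file in the same directory before cloning and starting everything.
./stack.sh
```

The appeal is exactly what russellb describes: one script that turns a fresh VM into a running stack for hacking on OpenStack code.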
19:43:06 #info was previously ubuntu-only, now getting fedora support
19:43:29 rackerhacker: anyplace where people can look, or is it still in the toying-with-it-on-your-own phase
19:43:48 rbergeron: i'm sidelined right now by BZ801650
19:43:55 #info rackerhacker is working on automated kickstarts to get openstack set up within vms/servers (devstack-ish approach, but more for production)
19:43:58 law has a fix but it's not yet pushed
19:44:40 err well, it's still in testing, but can be pushed to f17-stable if he so desires
19:45:16 and since we're frozen, probably not until after beta i am guessing, unless it's flagged as an NTH fix?
19:45:40 rackerjoe: Are your Chef changes public?
19:45:49 rackerhacker: well, cool, i guess same to you as rackerjoe: mails are nice :)
19:45:56 i'm not 100% sure... jeff law seemed to say that the first change was relatively trivial, but i'm not sure about the second :/
19:46:06 rackerhacker: ahhh
19:46:14 rbergeron: i'll email jeff to find out
19:46:20 dprince: yes they are. Don't let the name of the branch fool you.. https://github.com/rcbops/chef-cookbooks/tree/ubuntu-precise
19:46:29 LOL
19:46:32 precisely
19:46:49 rackerjoe: I like to obfuscate my branch names as well ;)
19:46:51 * rbergeron puts on her hot pangolin costume
19:47:09 hrm, it's just not me
19:47:19 I haven't tested the recipes on the latest cut of the f17 packages yet but it is on my todo list
19:47:42 okay, i think that's about all things openstacky, unless you guys want to go on; i'll just start yelping at ke4qqq or sparks, or spstarr for cloudstack / opennebula stuff if they're here
19:48:16 or mgoldmann about as7, or mull/gregdek/gholms/obino if any of them want to give any mini-updates on euca progress for f18 and beyond
19:48:32 * rbergeron notes as7 is more java-ish but that it has plenty of overlap with some of the packages we do over here
19:49:00 Sorry, was distracted
19:49:11 rbergeron, jhernandez and mgoldmann did a few more reviews for me.
My dep list has about 4 things left on it
19:49:31 gholms: yeah yeah
19:49:34 mull did my last review.
19:49:54 #info Euca still plugs along; dep list has about 4 things left
19:50:56 not much else from us
19:50:57 * rbergeron takes other silence as things being golden (like a nice toasty bun)
19:51:04 mull: gotcha
19:51:29 #info anyone in Rochester area, head over to CloudCampRoc and harass mull and gregdek about how ncsu is gonna implode tonight
19:51:48 rbergeron, ncsu is not my team ... ou is
19:51:55 mull doesn't care about NCSU. Harass him about Ohio.
19:51:56 oh
19:52:00 :)
19:52:04 Even better, harass him about Ohio State.
19:52:06 Hehe
19:52:11 * mull kicks gregdek
19:52:13 well, we want them to win
19:52:24 http://ianweller.fedorapeople.org/brackets/rbergero.html
19:52:34 for the love of god, save my bracket
19:52:41 okay, moving on :)
19:52:41 Haha
19:52:56 #topic S3 & Mirrors
19:53:00 mdomsch: HI!
19:53:07 IT'S ALIVE!
19:53:09 it's the moment you've been waiting a long long time for :)
19:53:28 I flipped the switch in MM a couple days ago
19:53:32 random stats:
19:53:39 Unique IP checkins since 3/21/2012 to the S3 mirror:
19:53:39 EL5: 5296
19:53:39 EL6: 1683
19:53:39 Fedora: 170
19:53:52 * mdomsch is disappointed in the fedora numbers
19:53:56 but there you have it
19:54:01 Fedora ... is that F8, F16, ?
19:54:06 or too hard to parse that out
19:54:07 it's nearly all f16
19:54:17 interesting
19:54:17 it's easily parsable
19:54:43 though note - I only have F15-16-17 in the mirror
19:54:59 so anyone still running f8 won't get directed there
19:55:17 aside from my script spamming sysadmin-main (which skvidal has offered to fix)
19:55:28 it's working as expected, absent any complaints
19:56:17 any questions?
19:56:29 * rbergeron notes that in the past month or so, in wiki/Statistics ... F8 has gone from 7,288,234 to 7,320,091 unique connections to the repository
19:56:40 mdomsch: when did you turn it on?
19:56:40 hmmm
19:56:55 3/21
19:57:13 in the past week from 7312439 to 7320091
19:57:33 * rbergeron notes we've always speculated that that large # was from amazon (if you compare the numbers on the wiki page, it's totally out of whack with other releases)
19:57:45 Fedora 7 || Fedora 8 || Fedora 9 || Fedora 10 || Fedora 11 || Fedora 12 || Fedora 13 || Fedora 14 || Fedora 15 || Fedora 16
19:57:48 |-
19:57:51 | 4409781 || 7312439 || 4119425 || 4791180 || 5021611 || 5554257 || 4596972 || 5253673 || 2771852 || 1631638
19:58:03 (those are last week's numbers)
19:58:26 rbergeron, so you think we should sink F8 content into there?
19:58:26 mdomsch: i assume that there's no way someone could have hardcoded in an ami where to connect to for a repository, right?
19:58:38 rbergeron, sure they could have
19:58:48 /etc/yum.repos.d/*.repo
19:58:53 mdomsch: I am just curious about how many people are still using F8, really
19:58:59 and if a huge number of that is from ec2
19:59:03 (mostly the latter point)
19:59:23 I suppose we could upload F8 content to the mirror, and then watch the mirror logs
19:59:27 there are tons of various amis out there that are like... built for hadoop-specific things that hadoop folks/cloudera have put out
19:59:41 remixes if you will based off f8 :)
20:00:16 so you want me to put f8 there too?
20:00:25 mdomsch: i think it might be interesting. i am guessing that finding out what ami people are actually using is asking too much :)
20:00:36 yes, we have no way to get that
20:00:38 mdomsch: maybe for a bit? just to satisfy curiosity? does that seem reasonable to people?
20:01:53 * rbergeron doesn't think it would kill us
20:01:59 k
20:02:32 mdomsch: is there a way we could see those numbers of updates ... regularly-ish?
20:02:32 now, we're in US-East-1 only
20:02:42 any reason to think we should upload to other regions too?
20:02:44 or is it an "on-demand ask mdomsch when he's not busy with $rest of life" thing :)
20:03:09 rbergeron, great question.
I've started plumbing the S3 logs into FI's awstats tool, but it's not done yet
20:03:12 again, i guess it's hard for us to know what usage we have in other regions; we're just doing this as we all know it's the most used?
20:03:36 maybe spevack can give us some insight
20:03:47 #action mdomsch to upload F8 into the S3 mirror
20:03:51 O MIGHTY SPEVACK
20:04:28 that's all then
20:04:39 #info S3 / mirrors is now ON! unique ip checkins to s3 mirror since 3/21/2012: EL5: 5296; EL6: 1683; Fedora (15-16-16): 170
20:04:50 mdomsch: you're awesome, thank you :)
20:04:54 17
20:04:58 rawhide
20:05:25 #info S3 / mirrors is now ON! unique ip checkins to s3 mirror since 3/21/2012: EL5: 5296; EL6: 1683; Fedora (15-16-17): 170
20:05:57 rawhide also? how does that wind up getting aggregated over time... do we just have an always ever-growing rawhide # or ...
20:06:07 #info SUPER HUGE MIRACULOUS HUGS to mdomsch for all his work on this, thank you!
20:06:28 no, it deletes content that's been removed from the master mirror
20:06:43 i guess the devil is in the details and probably not going to assess it deeply right this second :)
20:06:44 that's one somewhat painful point - the mirrors use hardlinks - S3 doesn't
20:06:53 so content moving from rawhide to f17 gets copied up again
20:07:03 * rbergeron nods
20:07:08 and content moving from updates-testing to updates gets copied again
20:07:56 okeedokee
20:08:15 well, i guess some info is better than no info
20:08:33 #action rbergeron to harass spevack to read these logs and give input to above thoughts on other zones
20:08:49 #topic Open Floor / Your topic here
20:08:59 * rbergeron yields the floor to others, since she is a floor-hog
20:09:31 Just want to say that OpenShift is still working towards being open sourced in time for F18.
20:10:25 I'd like to ask if there is any interest in Diskless booting with Ramdisk RootFS out there?
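As background for ayoung's diskless question above: one common approach is to PXE-boot a kernel plus an initramfs and pull the whole root filesystem into RAM, so the node needs no local disk at all. A hedged, hypothetical pxelinux.cfg entry (the label, filenames, URL, and dracut live options are illustrative assumptions, not a tested configuration):

```
# hypothetical pxelinux.cfg/default entry: boot with the rootfs held in RAM
label fedora-diskless
    kernel vmlinuz0
    # dracut's live module fetches the root image over http and, with
    # rd.live.ram, copies it into RAM so no backing store is needed.
    append initrd=initrd0.img root=live:http://boot.example.com/images/rootfs.img rd.live.image rd.live.ram=1
```

The trade-off is memory: the root image lives in RAM for the life of the node, which is why this fits stateless cloud/compute nodes better than general-purpose hosts.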
20:10:54 #info openshift is still working towards being open sourced in time for F18 (YAY)
20:11:12 tdawson: just let us know when you are ready to get on the train of packagership if you're not already
20:11:13 I mean, besides my obvious interest...
20:11:28 There has been a lot of finger slapping and "you can't make a spec file that way" ... so I guess we're making progress.
20:11:44 LOL
20:11:57 tdawson: who's gonna be the lucky packager people in fedoraland?
20:13:41 I'm not positive who will be doing what, but me, rharrison, J5, and at least two others ... I'm terrible with names.
20:13:48 ayoung: i'm not seeing a lot of ... feedback; i wonder if the mailing list might have more opinions (perhaps with use cases or something)
20:14:17 rbergeron, I'll write it up... probably a blog post
20:14:45 the idea is that cloud nodes are ideally diskless, at least in certain usages
20:14:50 tdawson: awesome, so you have at least 2 packagers already to help you out
20:15:07 one a sponsor even :)
20:15:35 #info ayoung curious about interest in diskless booting with ramdisk rootFS; will blog
20:15:45 #info idea is that cloud nodes are ideally diskless, at least in certain usages
20:15:52 Yep ... but there are a few packages that still scare me when I look at their spec files. I'll be cleaning one of those up next week.
20:16:03 tdawson: well, we are happy to welcome that stuff (FINALLY OMG)
20:16:26 anyone else? :)
20:16:30 ayoung: sounds like a plan
20:16:31 :)
20:17:14 * rbergeron holds the meeting for a minute before closing out and thanks everyone from the bottom of her cold little heart for coming
20:17:20 :)
20:18:14 [cue gholms witty comment here]
20:19:07 mestery :)
20:19:35 alrighty folks, thanks for coming :)
20:19:41 #endmeeting