19:01:14 #startmeeting cloud sig
19:01:14 Meeting started Fri Aug 5 19:01:14 2011 UTC. The chair is rbergeron. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:14 Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:01:21 * jforbes is here
19:01:22 #meetingname cloud sig
19:01:22 The meeting name has been set to 'cloud_sig'
19:01:26 #roll call
19:01:28 no
19:01:31 #topic roll call
19:01:37 #chair jforbes
19:01:37 Current chairs: jforbes rbergeron
19:01:45 * tflink is lurking until the blocker bug review meeting is over
19:01:52 * kkeithley is here
19:02:00 * rbergeron was in the car driving to dr. appointment so she can run the meeting for 45 min before her appointment
19:02:06 and is here now
19:02:28 okay
19:02:30 * jsmith is here
19:02:38 #topic ec2 status and test day output
19:02:46 #chair tflink
19:02:46 Current chairs: jforbes rbergeron tflink
19:03:00 jforbes/tflink: whats the word
19:03:28 There were a few testers yesterday, and it seemed that most of the issues were amazon config related, though tflink might have more insight
19:03:52 My testing went well
19:03:55 there was some confusion in the test cases but I think things went well overall
19:04:31 there was one issue brought up that wasn't finished since only one tester could reproduce it
19:04:36 and not reliably
19:04:59 other than that, there were no new issues brought up that I know of
19:05:08 okay
19:05:14 one existing sendmail/systemd bug in the S3 images
19:05:38 but that gets back to the difficulty in updating those images without respinning them
19:06:03 yum update :)
19:06:28 true but how do you test something working on boot with a machine that is terminated on reboot?
19:07:30 point
19:07:49 but it sounds like there isn't much interest in updating the S3 images
19:08:02 updating them on a regular basis, rather
19:08:08 I can do it, will only take me an hour, but I am about to disappear for 2 weeks
19:08:11 yeah
19:08:26 Well, disappear for 1 week, half disappear for the second
19:08:43 note that there are issues in BG right now that prevent S3 AMI composition
19:09:20 What sort of issues?
19:09:25 I haven't run into a problem yet
19:09:28 the AMIs don't work
19:09:39 Hmm, that would be an issue
19:09:42 jforbes: i think its a known issue atm
19:09:47 to the bg folks
19:09:47 I was working with the BG guys yesterday on it
19:09:57 Entirely possible I haven't updated recently
19:09:57 https://issues.jboss.org/browse/BGBUILD-289
19:10:01 * rbergeron looks for the jira
19:10:05 oh there you go
19:10:41 do we want to start shooting for regular S3 AMI builds?
19:10:56 That seems inefficient
19:11:17 yeah, I can see it both ways
19:11:39 * tflink wonders if this conversation would be better continued on cloud@ where he started the conversation
19:11:44 We provide everything users need to build new ones themselves
19:12:03 But I suppose critical bugs could trigger one-off respins
19:12:18 is sendmail not starting on boot a critical bug?
19:12:41 any more than other critical bugs (not sure what those are for F15 ATM)?
19:12:58 Umm, depends on how you look at it I guess
19:13:50 Good discussion thread for the list I think
19:14:07 yeah, we can continue it there
19:14:13 sooo
19:16:05 ec2 ami's
19:16:36 are we kosher to call them official, and wait on ebs ones?
19:16:43 I guess it comes down to whether or not we respin S3
19:16:46 why wait on the EBS ones?
19:16:47 EBS are already there
19:16:49 oh
19:16:51 did I miss something
19:16:52 ?
19:17:10 no
19:17:12 im confused
19:17:13 I'm of the opinion that the sendmail bug I hit isn't enough of an issue to respin the S3 AMIs
19:17:37 I would agree, the people using S3 will be using them as a basis for new images, not using them as is too much
19:17:39 can we just document it on the ami page
19:17:40 ?
19:17:41 if we aren't going to be respinning them regularly, anyone using them would have to respin anyways and that would get the systemd fix for that
19:18:13 okay
19:18:13 Sure, we can list it as a known issue
19:18:14 so
19:18:16 official?
19:18:21 for the love of god say yes
19:18:21 yeah, I think so
19:18:30 jforbes: can you do the honors
19:18:33 :)
19:18:42 I will, and mirror them out to the other zones
19:19:49 #action jforbes to mark amis as official and get this puppy out the door
19:20:02 What date is alpha spin?
19:20:07 #info the mustard indicates progress
19:20:29 jforbes: we're hoping for F16 alpha RC1 later today
19:20:47 Okay, "what date is alpha release" would be better?
19:20:53 jforbes: a week from tuesday
19:20:57 theoretically
19:21:10 Cool, so I will be in Vancouver, but can get images spun
19:21:41 Was just trying to figure out if I had to get the appliance definition done this weekend or could do it after vacation
19:21:52 is BG working with F16 right now? I thought it wasn't
19:22:02 unless the koji process is working
19:22:08 or you were going to do it manually
19:22:21 It will be BG, just have to make a mod to do alpha
19:22:27 not sure
19:22:33 okay
19:22:50 do we want to see if koji is working at some point before we decide?
19:22:51 It doesnt really know how to look for alpha/beta repos, so we modify
19:23:03 or just keep up with our existing rel-eng ticket?
19:23:11 Well, I have to do the appliance definition anyway
19:23:19 but yes, check with them before I push images
19:23:58 okay
19:24:28 #info tentatively planning on bg for f16 alpha images, will check in with releng ticket before doing so
19:24:33 we good?
19:24:37 any other notes?
19:24:47 jforbes: can I get the mods you do for alpha if you don't have them posted somewhere?
19:24:50 I think that's good
19:25:10 tflink: will send them when I make them for F16, I did it for F15 last
19:25:21 cool, thanks
19:26:08 :)
19:26:12 thanks, guys.
19:26:29 * rbergeron appreciates the mountains of help from everyone involved
19:26:40 #topic HekaFS
19:26:42 np, you had to do that one first, though didn't you? :-P
19:26:56 jdarcy, kkeithley
19:26:58 :)
19:27:01 tflink: :)
19:27:24 Lots of talking this week, less coding.
19:27:41 I've been testing glusterfs/cloudfs with valgrind. spun a new Release this morning. Just poked jsteffan again about an update to glusterfs
19:27:43 Conversations about getting GlusterFS/CloudFS into RHEL.
19:27:43 Starting to look into in-kernel NFS (including NFSv4) over GlusterFS/CloudFS.
19:27:43 Some folks are looking at hooks to let Hadoop get location info so they can optimize map/reduce job placement.
19:27:43 Turns out to be the exact same problem we need to solve for pNFS over GlusterFS.
19:27:44 Also need to check out Gluster's new S3 interface with Aeolus's iwhd.
19:27:46 Got things building with Gluster's trunk (which keeps moving) again.
19:27:48 Name change ready to go: HekaFS it is. Suggestions on how to proceed are welcome.
19:27:53 There, got paste working. ;)
19:28:35 i have no ideas offhand.
19:28:40 ummmm
19:28:43 * rbergeron looks around
19:29:19 nirik, spot, if youre about: jdarcy/kkeithley need to change CloudFS name to HekaFS. whats the process there?
19:29:35 fesco (it's a feature)? just change the name? thoughts?
19:29:41 Ideally we'd need to change the git repo, the package, the wiki. Some of those might not be strictly necessary, but still.
19:29:47 or anyone else... notting, anyone :)
19:30:31 I can do a new repo request, that's not too daunting.
19:30:36 * rbergeron is happy to hear speculation
19:30:40 well
19:30:43 That might be easiest, actually.
19:30:47 you'll likely need that anyway
19:31:04 on hosted?
19:31:11 so id file that request and maybe fwd it to the cloud sig list
19:31:20 nirik: yeah, and i guess in F16
19:31:27 everywhere
19:31:28 has to go thru re-review for the rename...
19:31:29 lol
19:31:42 so fesco ticket then
19:31:51 and package rereview as well?
19:32:09 seriously, just to s/cloudfs/hekafs in the spec?
19:32:09 yeah, and infra ticket to rename the hosted parts.
19:32:22 Is there any reason to think the re-review would be complicated or difficult? I mean, I don't expect a rubber stamp, but it would be mildly frustrating to have issues brought up as blockers that hadn't been blockers before the rename.
19:32:23 kkeithley: are you obsoleting the old named package?
19:32:32 I suppose
19:32:44 jdarcy: shouldn't be. The main thing is checking Obsoletes/Provides, as thats very easy to mess up.
19:32:56 OK, that sounds entirely appropriate to me.
19:33:08 Diligence is good. :)
19:33:14 #action jdarcy kkeithley to file infra ticket, fesco ticket, and new bz review request for cloudfs --> hekafs change
19:33:54 what does it take to obsolete cloudfs?
19:34:00 is it better to have details done before hitting up fesco (meeting monday morning)?
19:34:11 I suppose it's in the wiki somewhere. I'll look
19:34:12 kkeithley: I think it's basically just an Obsoletes: line in the spec.
19:34:42 https://fedoraproject.org/wiki/Packaging:Guidelines#Renaming.2FReplacing_Existing_Packages
19:34:56 restart version at 0.1?
19:35:11 I'd say no, continue the current numbering.
19:35:51 So 0.7-7
19:35:58 Something like that.
19:37:05 okay.
19:37:12 you guys set then? :)
19:37:21 * jdarcy nods.
19:37:22 seems so, yes
19:37:31 okay.
19:37:39 #topic aeolus
19:37:47 mmorsi: hi
19:37:55 rbergeron: hey
19:38:22 how's things going?
19:38:27 ah not too bad
19:38:28 srry haven't been around last couple weeks, turns out friday afternoons aren't ideal for me
19:38:44 but as far as aeolus, ya'll probably already caught this but it is in fedora
19:38:52 * rbergeron nods
19:38:58 so the installation / configuration process is straightforward
19:38:59 Yay!
19:39:07 i saw a few last packages needed from clalance earlier this week
19:39:14 yes those made it in
19:39:24 we are working on making the experience even simpler too, via an interactive configuration utility
19:39:41 which will give everyone one simple command to create images and deploy them to any cloud
19:41:24 hrm other than that, we've begun our next development cycle, some features for that include a lot more support for shared identity and secured connections, additional cloud provider support, simpler / better tooling, ipv6 support, expanded cloud monitoring and status reporting, etc
19:42:25 actually there is quite a bit of discussion pertaining to what we're working on, so if anyone has a new feature that they would like to see happen, send it to the group
19:42:50 you can see all our discussions pertaining to features on list https://fedorahosted.org/mailman/listinfo/aeolus-devel
19:42:55 its a very open process
19:43:01 * jdarcy mumbles something about provisioning distributed filesystems automatically. ;)
19:43:45 ah cool, would love to hear more about that
19:43:47 * rbergeron grins
19:43:51 indeed
19:44:01 but in any case, thats about it for me
19:44:05 okay. well, glad you guys made it in. very good stuff :)
19:44:11 unless i'm forgetting something (which i wouldn't put it past me :-p)
19:44:16 #topic any other business
19:45:15 anyone, anyone?
19:45:54 alrighty then.
19:46:17 thanks for coming - i'll send out notes when i get home from dr., unless someone beats me
19:46:19 its 5 o'clock somewhere
19:46:21 to it
19:46:23 that is
19:46:27 yes it is.
19:46:40 in about 15 minutes by my calculations
19:46:41 :)
19:46:47 #endmeeting
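
For reference, a minimal sketch of the spec-file change discussed under the HekaFS topic (19:32-19:35), following the Renaming/Replacing guidelines linked in the log. This is illustrative, not the actual hekafs package spec: the version-release values are assumptions taken from the "0.7-7" mentioned in the discussion, and the exact Obsoletes bound should be checked against the guidelines and the last shipped cloudfs build before committing.

    # hekafs.spec (hypothetical fragment; continues the old cloudfs numbering
    # rather than restarting at 0.1, as agreed in the meeting)
    Name:      hekafs
    Version:   0.7
    Release:   7%{?dist}

    # Rename handling per the packaging guidelines: Obsoletes removes the
    # old-named package on upgrade, Provides keeps any existing dependencies
    # on "cloudfs" resolvable. These two lines are the easy-to-mess-up part
    # nirik flagged for re-review.
    Obsoletes: cloudfs < 0.7-7
    Provides:  cloudfs = %{version}-%{release}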