17:02:54 #startmeeting Cloud WG
17:02:54 Meeting started Fri Aug 15 17:02:54 2014 UTC. The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:54 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:03:07 #chair jzb agrimm imcleod
17:03:07 Current chairs: agrimm imcleod jzb number80
17:03:15 #topic roll call
17:03:23 .hellomyname is hguemar
17:03:30 .hellomynameis hguemar
17:03:31 number80: hguemar 'Haïkel Guémar'
17:03:35 .hellomynameis jzb
17:03:36 jzb: jzb 'Joe Brockmeier'
17:03:40 .hellomynameis arg
17:03:42 agrimm: arg 'Andy Grimm'
17:03:43 Whoops, now in the right channel.
17:03:47 .hellomyname is imcleod
17:03:50 .hellomynameis pfrields
17:03:50 stickster: pfrields 'Paul W. Frields'
17:03:55 #chair stickster
17:03:55 Current chairs: agrimm imcleod jzb number80 stickster
17:04:01 .hellowmynameis imcleod
17:04:02 hello!
17:04:09 try again, imcleod ! :)
17:04:14 #chair oddshocks
17:04:14 Current chairs: agrimm imcleod jzb number80 oddshocks stickster
17:04:16 sorry many people are speaking to me at the same time
17:04:21 I can't seem to type. Perhaps I should remain silent.
17:04:29 I have that problem too, but the doctor gave me pills.
17:04:32 no quorum but at least, we could hold a proper meeting
17:04:33 .hellomynameis imcleod
17:04:34 imcleod: imcleod 'Ian McLeod'
17:04:36 yay
17:04:55 nobody from atomic ? walters ?
17:05:01 number80: ahem
17:05:22 walters: you around?
17:05:27 sorry, I know you're the lead for the atomic image :)
17:05:40 number80, I'm involved w/ atomic
17:05:48 awesome
17:05:49 here
17:05:49 number80: I'm just trying to herd cats.
17:06:00 maybe we could start now
17:06:01 * dustymabe here
17:06:14 (Herding cats is really a weird expression if you know cats. Mostly, they sleep a lot.)
17:06:16 If I weren't in this meeting, I might finally be making our vagrant builds work again. :)
17:06:29 * jzb boots agrimm out of the meeting.
17:06:36 jzb, I can guarantee walters is one cat who's not sleeping much these days!
17:06:43 #topic automatic smoketests on image build
17:06:46 https://fedorahosted.org/cloud/ticket/38
17:06:55 I bet this is a fast one as roshi is away
17:07:21 dustymabe noticed that there is no cloud testing days in QA agenda
17:07:30 Also, images aren't quite building yet.... But we are close.
17:07:39 it was supposed to be held in the same day as RDO (but same as us)
17:08:32 #info define a testing day when images will build
17:09:01 imcleod: define close? :-)
17:09:48 imcleod, are we still having the weird /dev/root issue?
17:09:52 or what?
17:10:09 * number80 hopes we didn't lose imcleod
17:10:13 Yes. I added the ability to log copious amounts of early boot output.
17:10:35 Which gave us root cause behind that fairly cryptic dracut error. The network is not coming up, so no stage2 installer, no install, etc.
17:10:46 Believe this is down to a change in the device name in F21. Working it.
17:10:59 Also believe we may have seen this before but I cannot recall details or what the solution was. Sigh.
17:11:21 device naming always sucks, and changing it sucks more
17:11:36 imcleod: do you have a ticket number ?
17:11:42 number80: I do not.
17:11:44 ok
17:11:51 imcleod: do we somehow specify ksdevice and give the device name?
17:12:00 jzb: I will define "copious" as https://kojipkgs.fedoraproject.org//work/tasks/475/7310475/oz-x86_64.log
17:12:15 imcleod: and that device name is now wrong?
17:12:32 dustymabe: Oz injects the ks file into the ramdisk itself. Original ks file explicitly tries to bring up eth0.
17:12:54 imcleod: got it. thanks
17:13:05 dustymabe: Anaconda folk suggested removing explicit name reference. That doesn't work either I'm afraid. I need to spin up a test VM myself and poke around to get the name correct.
17:13:13 I'd prefer that we not have to hardcode a name into the kickstart.
17:13:20 *nods*
17:13:23 imcleod, so do you get the same error if you try to build with imagefactory elsewhere (e.g., on your laptop) where you can get a vnc access?
17:13:34 Oddly enough, no. And I'm not sure why.
17:13:38 imcleod: I know in the past we could use something like ksdevice=link but since we are injecting the kickstart that doesn't apply
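[Editor's sketch, for the eth0/ksdevice discussion above: the kickstart fragments below are hypothetical, not the actual fedora-kickstarts content, and the correct F21 device name was still an open question at this point in the meeting. They only illustrate the options mentioned -- a hardcoded device name, anaconda's "link" keyword for the first interface with a carrier, or omitting the device so anaconda picks one.]

    # hypothetical cloud-image kickstart fragments -- device names are examples only
    network --bootproto=dhcp --device=eth0 --onboot=on   # original approach: hardcoded name, breaks if F21 renames the device
    network --bootproto=dhcp --device=link --onboot=on   # alternative: first device with its link up, no hardcoded name
    network --bootproto=dhcp --onboot=on                 # alternative: no explicit device; anaconda chooses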
17:15:02 agrimm: I just noticed that this ticket was assigned to red_trela, do you want ownership of this one ?
17:16:27 number80, I guess I could. It might fit in well with smoke testing that walters and I are going to have to do for atomic images...
17:16:33 ok
17:16:59 I guess we could move to the next topic ?
17:17:08 I don't know anything about the tools I should be using for it, though. probably will need advice on that (from roshi?)
17:17:41 yeah, go ahead and move on
17:17:52 roshi isn't around but it will be helping him on defining test cases for taskotron
17:18:27 #topic start communication/collaboration on cloud image updates
17:18:29 https://fedorahosted.org/cloud/ticket/51
17:19:53 I don't think we moved forward on this matter
17:21:21 number80: call it out on the mailing list, perhaps?
17:21:33 This seems quite a bit wider than just a few people in this meeting.
17:21:42 jzb: do you want to do it ?
17:21:51 number80: I can bring it up on list, sure.
17:22:01 Do we know that relevant QE people are on the list?
17:22:13 stickster: we do not, but I'll CC Adam
17:22:19 stickster: per the discussion at Flock
17:22:20 #action jzb start discussion about cloud image updates policy on the cloud/releng list
17:22:30 Even if we're looking for people to raise a hand and say, "Yeah, I'll own that"... makes sense. thanks jzb
17:23:12 so next topic ?
17:23:24 number80: sounds good
17:23:43 #topic Process for determining when and why Docker trusted images need to be rebuilt
17:23:48 https://fedorahosted.org/cloud/ticket/59
17:23:58 that would be me
17:24:03 yup
17:24:09 I will bang out a draft this weekend and send to the list
17:24:48 I don't expect my first cut to be the perfect criteria, but having a proposal will let people have something to hack on.
17:24:57 number80: EOF
17:25:00 #action jzb send a draft for Docker trusted images updates policy on the list
17:25:04 thanks jzb
17:25:34 #topic Next step on the test plan
17:25:40 https://fedorahosted.org/cloud/ticket/61
17:26:10 I should assign this to myself and check with roshi when he's back to finalize this point
17:26:15 any comments ?
17:26:24 number80: is this very much like #51?
17:26:49 it is similar but not the same
17:26:59 if you want ownership, no problem with that :)
17:27:22 number80: nope, just wondering if they should be two tickets or not.
17:28:14 the former is about when and why we should trigger updates to the cloud images, the other is related to the generic QA release policy
17:28:44 It's close enough to be managed by the same person
17:29:10 number80: OK
17:29:22 number80: feel free to assign to me then
17:29:26 ok
17:30:08 #topic Project Atomic weekly status
17:30:15 https://fedorahosted.org/cloud/ticket/64
17:30:23 jzb again :)
17:31:17 number80: I think we're still at the same place as last time. I haven't gotten any updates from walters on progress. I think he may be a wee bit busy this week.
17:31:23 ok
17:31:43 stickster: any info on your end there?
17:31:56 jzb, he's in all next week, so we should make some progress. but then he's out for two
17:32:06 * stickster looks to oddshocks and dgilmore
17:32:07 so we _really_ need to make progress next week. :)
17:32:35 agrimm: agreed. I am at LinuxCon next week but will try to make sure to sync with the folks on this...
17:32:38 agrimm: One thing we have going for us is we have some more Fedora rel-eng help now, in the form of pbrobinson
17:32:50 here
17:33:05 stickster, oh, good to know
17:33:34 awesome
17:34:10 should we move to the next topic (docker image deliverable)
17:34:18 He's in UK TZ though, so this particular meeting might be tougher for him.
17:34:25 hola stickster
17:34:48 jzb: it's worth noting that internal Fedora and RH openstack instance image uploading is in the near future, to join with EC2 images
17:35:27 oddshocks: we can't officially do internal red hat things
17:35:44 since we generally can't push from fedora to them
17:35:51 #info upcoming support of internal Fedora openstack instance image uploading in fedimg
17:36:16 dgilmore: This is news to me. Good to know
17:36:18 unless the Red Hat openstack is public
17:36:24 dgilmore: In that case,j
17:36:29 One is public, one is internal.
17:36:31 In that case, just Fedora OpenStack ;)
17:36:35 * stickster notes https://fedorahosted.org/cloud/ticket/64 is not being updated, and that makes it not as useful as a "tracking" item.
17:36:44 oddshocks: most internal Red Hat things are firewalled off, and we have no access from Fedora infra
17:36:53 oddshocks++
17:37:09 stickster: ack
17:37:36 jzb: What would be a better way to summarize that ticket so we can answer the right questions in these meetings?
17:37:44 dgilmore: OK, cool. That makes sense to me. In that case we'll do Fedora openstack, and from there move on to GCE, HP, and Rackspace, once legal clears.
17:37:45 #action Cloud WG updating #64 (Atomic tracker) every week
17:38:05 jzb: I think there was a bug that walters pointed to which is relevant for OStree
17:38:17 oddshocks: Do you have a reference to that bug? ^
17:38:23 oddshocks: you're still waiting clearance from legal ?
17:39:46 stickster: just a sec
17:39:56 stickster: I see the bug, do you think that's something that I could work on generating weekly?
17:40:30 atm, really, summarizing the weekly infra/atomic meetings is probably the most useful.
17:40:33 stickster: ^^
17:40:41 number80: AFAIK no one has been given clearance or credentials on Rackspace, GCE, or HP, as legal has -- to my knowledge -- been working with them to secure accounts to host our images with.
17:40:58 I don't think there's anything outside those issues right now that we are tracking for Atomic.
17:41:31 sorry, pinged elsewhere, back now.
17:41:47 * oddshocks is being pinged like crazy today
17:41:59 jzb: Yeah, agreed, if we could do that in the ticket it would be easier to tell what's blocked :-)
17:42:09 number80: HP is the only one I am actively working on with legal. (Which does not mean nobody else is doing GCE or Rackspace)
17:42:18 stickster: ack.
17:42:36 #info imcleod is working with legal on HP
17:42:49 is Rackspace still a consideration?
17:42:51 I know that skottler had some discussions with legal about GCE
17:42:59 but no follow-up since he left
17:43:03 IIRC they're kind of shuttering their public cloud.
17:43:22 jzb: AFAIK yes. It's important for OpenStack testing. Also, internal sources indicate that they are not, in fact, shuttering the public cloud.
17:43:45 imcleod: this might be a weird question but is anyone working with digital ocean?
17:43:53 jzb: They are trying to emphasize their value add as a provider and this has been incorrectly interpreted as a shutdown of the public cloud.
17:43:56 hmmm. OK. I must have gotten a wrong impression.
17:44:02 what do the legal hurdles tend to be? I'm on two RH projects where we are moving forward with GCE, but I understand Fedora's needs are different
17:44:11 dustymabe: good point, since we already have a friendly contact there :)
17:44:18 jzb: You may have better info.
17:44:36 agrimm: For HP it was indemnification in their standard agreement.
17:44:37 number80: :) - I also happen to work in NYC in the same building as DO
17:44:39 hm,,,
17:45:10 dustymabe: do you want to check with them ? but we still someone in touch with legal (aka spot)
17:45:16 *need
17:45:18 FWIW, any of these providers are super-easy to add support for in Fedimg: http://libcloud.readthedocs.org/en/latest/supported_providers.html
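[Editor's sketch, for the fedimg/libcloud point above: a minimal Apache Libcloud example of why adding a compute provider is considered easy -- the driver API is uniform across providers. The provider constant, credentials, and region below are placeholders, not fedimg's actual configuration, and as the discussion notes the harder part (uploading and registering images) is not this uniform everywhere.]

    # Minimal Apache Libcloud sketch; credentials and region are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Swapping providers is mostly a matter of changing this constant,
    # e.g. Provider.RACKSPACE, Provider.GCE, Provider.EC2, ...
    cls = get_driver(Provider.RACKSPACE)
    driver = cls('example-user', 'example-api-key', region='iad')

    # The generic NodeDriver API (list_images, list_sizes, create_node, ...)
    # looks the same regardless of which provider is behind it.
    for image in driver.list_images()[:5]:
        print(image.id, image.name)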
17:45:43 number80: sure.. I am new though so I might need some guidance on what needs to be done/coordinated
17:46:02 imcleod: in this case, almost certainly not.
17:46:03 new to innerworkings of fedora that is
17:46:07 dustymabe: I could help you
17:46:15 number80: awesome
17:46:37 oddshocks: Can we upload and register to all of those? My understanding, based on the experience with Rackspace, was no.
17:46:41 #action dustymabe/number80 working on Digital Ocean support
17:46:45 sorry connection issues
17:46:46 I am in
17:46:52 how far are we ?
17:46:54 #chair frankieonuonga
17:46:54 Current chairs: agrimm frankieonuonga imcleod jzb number80 oddshocks stickster
17:47:09 am I late ?
17:47:19 we're currently speaking about fedimg
17:47:32 aaah ok
17:48:01 imcleod: Right. That's the challenge we're soon to face (or already facing) -- how to host images on providers that aren't as easy to register images with as EC2
17:48:10 btw, if you're discussing with a cloud platform and with legal, please log it in the minutes using info
17:48:32 imcleod: I think that's part of the reason we have legal we have to take with these providers if we want them to serve our latest cloud images
17:48:38 I already added imcleod with HP
17:49:51 oddshocks: Roger. I think we're going to be stuck with a diverse collection of upload/registration techniques, even if our initial builds can be more or less unified.
17:50:00 number80: All I was told (by multiple folks on different days) is that Rackspace, GCE, and HP are all on hold until we clear things with legal, and figure out how things will work with them
17:50:02 * agrimm is happy that GCE is trivial for registering images
17:50:17 oddshocks: do you know the point of contacts ?
17:50:30 agrimm: :)
17:50:54 #info agrimm working on clearance with legal on GCE, Rackspace
17:51:11 blame oddshocks for that ^^
17:51:14 oddshocks: number80: I'd be happy to talk to the right legal person, whether that's spot or someone else. I'd like to know before bugging spot that it's something he's involved with.
17:51:16 number80: No, unfortunately. I've just heard about this in #fedora-cloud or #fedora-meeting.
17:52:03 It was said a few times that the ball wasn't in my field for those 3 providers, which were the ones people were hoping for next.
17:52:33 stickster: agreed, we need to know who is working with legal on which matters
17:52:54 dgilmore: Do you know anything about this? ^
17:52:58 s/matters/issues/
17:53:05 dgilmore: Regarding legal/accounts with Rackspace, GCE, and HP?
17:53:37 number80: Typically spot has been main point of contact for all things related to both Fedora and legal matters. However, I have a long history of working with attorneys too (Red Hat and others) and would be able to assist if needed.
17:53:37 walters: ^
17:53:56 oddshocks: i know HP is underway, the rest I do not know
17:54:00 stickster: that sounds good to me.
17:54:04 dgilmore: OK, thanks.
17:54:08 stickster: great, we need to identify also the POC in the Cloud SIG side
17:54:38 * stickster is getting the feeling that this is splaying out into "too many roles" for legal stuff.
17:55:06 stickster: It might be worth pining spot at least and letting him know that we are looking toward some sort of connection with Rackspace, GCE, and HP that would allow us to programmatically upload our latest cloud images, if possible.
17:55:07 #action number80 find and identify all our current requests with legal and their owners
17:55:19 /s/pining/pinging
17:55:29 I'll start a thread in the mailing list
17:55:44 number80: I think we're making this more complicated than needed. What we probably need is simply (1) to determine if spot is working on this; (2) if not, who is; (3) identify someone responsible for poking regularly to check progress and update somewhere central.
17:56:06 stickster++
17:56:11 *nods*
17:56:14 I am working HP.
17:56:16 Full stop.
17:56:19 #undo
17:56:19 Removing item from minutes: ACTION by number80 at 17:55:07 : number80 find and identify all our current requests with legal and their owners
17:56:41 It's not a huge time suck and I have the internal legal contact at this point. I could try to take on GCE and RackSpace as well.
17:56:43 stickster: I could be wrong, but I don't think that this is on spot's plate atm.
17:56:53 jzb: I'm confident it is not.
17:57:09 I'd ask, but I think he's traveling today
17:57:11 imcleod: OK, that's helpful, thanks.
17:57:16 jzb: I don't think he's aware of it either. IDK who would have brought it to him
17:57:35 OK
17:57:36 * stickster leaves another 'spot' callout here to tell him: never mind
17:57:37 I think we need to identify a single person who can figure out stickster's 3 points about Rackspace and GCE
17:57:38 anything else ?
17:57:40 As long as we avoid duplicate efforts and be able to track properly progress on legal requests, I'm fine
17:57:50 we're nearly at the end of the hour.
17:57:53 yup
17:57:58 number80++
17:58:01 I have Rax contacts. I will try to start that as well.
17:58:02 #topic open floor
17:58:08 Do we have any tie in to Google at all?
17:58:10 imcleod: OK
17:58:12 Any starting contact?
17:58:30 If imcleod has Rackspace and HP, I can try and investigate GCE with any legal folks
17:58:36 imcleod: you should ask mattdm about it
17:58:49 I don't know about anyone with direct Google contacts unfortunately
17:58:50 he had some contacts at GCE
17:58:54 oddshocks: I wonder if Matt Hicks can help
17:58:55 imcleod, I can connect you to the person who owns the multipurpose GCE account that we use for atomic. they may know
17:59:05 jzb: Haven't met him, but any leads are good
17:59:18 agrimm: good idea.
17:59:28 agrimm: That'd be a start. I also know several former VA Linux co-workers who are now at Google. May try to work through them.
17:59:29 oddshocks: can you shoot me a note specifically with what you need? I'll bridge the conversation.
17:59:39 oddshocks: Are you waiting on anything from people here (or sometimes here) to do e.g. further OStree work in MirrorManager?
17:59:45 I know Matt has offered to help us with other GCE things.
17:59:48 oddshocks: Apols. Do you want to own GCE? If so I'll bow out and focus on HP and Rax.
17:59:58 jzb: Yeah, I'll send you the list of any creds we need to access the services and particularly what we're trying to do
18:00:07 oddshocks: groovy, thansk
18:00:09 er, thanks
18:00:10 great
18:00:13 * jzb cannot type this week
18:00:40 before we end this meeting, anyone has another topic to bring out ?
18:01:03 imcleod: I'm just trying to ensure I have my share of the work by taking something off your shoulders, but if you're cool with being the initial POI with all 3 providers, that's fine with me
18:01:28 oddshocks: I'm happy to try. Having a single Fedora person talking to internal legal about it might be helpful.
18:01:33 stickster: I'm listening for any further clear tasks or action items. So far I just have those two tickets.
18:01:36 I expect the issues will be similar.
18:01:49 imcleod: cool, feel free to let me know if I can help at all.
18:02:13 :)
18:02:13 oddshocks: Cheers.
18:02:17 #action oddshocks Send out email to folks about what we need from Rackspace, GCE, and HP
18:02:56 May I close the meeting ?
18:03:03 number80: +1
18:03:07 the prosecution rests
18:03:22 oddshocks: *nod
18:03:34 Thank you gentlemen for attending this meeting and see you next week !
18:03:38 number80: Thank you for being here and for moderating :-)
18:03:39 sure number80 +1
18:03:45 #endmeeting