17:05:35 <roshi> #startmeeting Cloud WG
17:05:35 <zodbot> Meeting started Wed Aug 19 17:05:35 2015 UTC. The chair is roshi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:05:35 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:05:38 <imcleod_> langdon: Who appears to me primarily as a BlueJeans silhouette with pulsing sound-circle.
17:05:40 <langdon> adimania, is that LaaS? is it similar to LARP?
17:05:43 <roshi> #topic Roll Call
17:05:46 <roshi> who's around?
17:05:49 <roshi> .hello roshi
17:05:50 <zodbot> roshi: roshi 'Mike Ruckman' <mruckman@redhat.com>
17:05:55 <maxamillion> langdon: LARP like HAARP? ... >.>
17:05:56 <lalatenduM> .hello lalatendu
17:05:57 <imcleod_> .hello imcleod
17:05:57 <zodbot> lalatenduM: lalatendu 'Lalatendu Mohanty' <lmohanty@redhat.com>
17:05:59 <maxamillion> .helo maxamillion
17:05:59 <sayan> .hello sayanchowdhury
17:06:00 <adimania> .hello adimania
17:06:00 <zodbot> imcleod_: imcleod 'Ian McLeod' <imcleod@redhat.com>
17:06:03 <zodbot> sayan: sayanchowdhury 'Sayan Chowdhury' <sayan.chowdhury2012@gmail.com>
17:06:06 <zodbot> adimania: adimania 'Aditya Patawari' <adimania@gmail.com>
17:06:07 <dustymabe> .hello dustymabe
17:06:09 <maxamillion> .hello maxamillion
17:06:11 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
17:06:13 <langdon> .hello langdon
17:06:14 <zodbot> maxamillion: maxamillion 'Adam Miller' <maxamillion@gmail.com>
17:06:17 <zodbot> langdon: langdon 'Langdon White' <langdon@fishjump.com>
17:06:27 <jbrooks> .hello jasonbrooks
17:06:28 <zodbot> jbrooks: jasonbrooks 'Jason Brooks' <JBROOKS@REDHAT.COM>
17:06:35 * langdon wonders if we can kill zodbot
17:06:38 <roshi> quite the turnout today :)
17:06:41 * dustymabe goes to fishjump.com
17:06:54 <roshi> .moar pings zodbot
17:06:55 <zodbot> here zodbot, have some more pings
17:07:12 <roshi> #topic Previous meeting followup
17:07:17 * langdon wonders if the fj redirect is working..
17:07:36 <roshi> we didn't really have any action items I recall from the meeting at flock
17:07:43 <roshi> aside from the notes jzb pushed out to the list
17:07:53 <roshi> from the meeting before we have:
17:07:58 <lalatenduM> roshi: I have one pending on me :)
17:07:59 <roshi> kushal will create cloud meeting agenda page for flock and post it to the list. rtnpro to test docker-storage as mentioned in the cloud list
17:08:06 <rtnpro> .fas rtnpro
17:08:11 <roshi> go for it lalatenduM :)
17:08:12 <zodbot> rtnpro: rtnpro 'Ratnadeep Debnath' <rtnpro@gmail.com>
17:08:28 <roshi> kushal posted his to the list and I think I recall seeing something from rtnpro as well
17:08:58 <lalatenduM> roshi: I tried libtaskotron, but nothing more apart from that
17:09:08 * rtnpro nods
17:09:42 <roshi> lalatenduM: which bits did you try?
17:10:33 <lalatenduM> roshi: http://fpaste.org/256770/39996833/
17:10:46 <lalatenduM> roshi: I need to look into the error
17:12:00 <roshi> lalatenduM: link up with tflink or kparal and they can help you troubleshoot
17:12:05 <dustymabe> roshi: should we go through the items from the meeting two meetings ago?
17:12:12 <dustymabe> or two weeks ago?
17:12:18 <roshi> though, things are a bit crazy on the qa-devel side of things now with the bodhi2 migration
17:12:25 <roshi> dustymabe: already did
17:12:28 <lalatenduM> roshi: yeah
17:12:41 <dustymabe> roshi: oops.. ok
17:12:46 <roshi> kushal sent out his mail and rtnpro did as well, iirc
17:12:57 <roshi> thanks lalatenduM
17:13:00 <dustymabe> ok.. the one from two weeks ago I had an item I think
17:13:07 <roshi> anything else others want to follow up on?
17:13:13 <dustymabe> https://fedorahosted.org/cloud/ticket/114
17:13:13 <roshi> http://meetbot-raw.fedoraproject.org/fedora-meeting-1/2015-08-05/fedora-meeting-1.2015-08-05-17.00.html
17:13:20 <roshi> for the logs of last meeting we had on IRC
17:13:23 <dustymabe> ^^ this got fixed
17:14:05 <dustymabe> also I now have some limited access to docker hub so I may be able to work on related items
17:14:10 <dustymabe> so keep that in mind for the future
17:14:16 <roshi> awesome
17:14:52 <roshi> ok, onto the tickets
17:15:17 <roshi> #topic Missing Cockpit RPMs in Fedora Atomic 22
17:15:20 <roshi> #link https://fedorahosted.org/cloud/ticket/105
17:15:53 <roshi> jbrooks: got anything here?
17:16:46 <jbrooks> roshi, I didn't update the ticket as I said I would -- the cloud-init part works, the kickstart part I haven't worked on yet
17:16:58 <jbrooks> I *will* update the ticket now though
17:17:03 <roshi> sweet, thanks :)
17:17:03 <dustymabe> jbrooks: sounds good
17:17:16 <dustymabe> jbrooks: have we already released the information for cloud-init?
17:17:33 <dustymabe> is there a reason to keep it until we have them both or not?
17:17:41 <jbrooks> dustymabe, in what way? it's in the gist linked there
17:18:07 <jbrooks> Where should that info live "for real"?
17:18:09 <dustymabe> jbrooks: I was thinking blog form or something more broadcastable/searchable
17:18:20 <dustymabe> probably on project atomic blog?
17:18:24 <jbrooks> dustymabe, I'll do a blog post, yeah
17:18:32 <roshi> cloud wiki would be good as well
17:18:43 <dustymabe> it might have some use in official documentation but I don't know
17:18:44 <jbrooks> roshi, OK
17:18:52 <dustymabe> I always get confused about where the line should be drawn
17:19:01 * roshi dreams of a day where our wiki is as complete as the Arch wiki (those peeps do some serious wiki work)
17:19:17 <dustymabe> jbrooks: you can convert to wiki format using pandoc I think
17:19:27 <jbrooks> dustymabe, cool
17:19:49 <lalatenduM> +1 for Arch wiki :)
17:20:00 <jbrooks> :)
17:20:14 <roshi> thanks jbrooks
17:20:24 <roshi> #topic systemd-networkd
17:20:30 <roshi> #link https://fedorahosted.org/cloud/ticket/14
17:21:26 <roshi> this is one of our oldest tickets
17:21:40 <dustymabe> I guess kushal isn't around
17:21:44 <roshi> as we discussed last meeting - we need to come up with a list of requirements for a network stack
17:21:51 <dustymabe> indeed
17:21:52 <maxamillion> I don't have any personal frame of reference but Major Hayden spoke very highly of his experience using systemd-networkd at Flock ... I might see if he'd be willing to add some notes to the ticket
17:21:53 <roshi> then we have something to compare the two against
17:22:05 <roshi> that would be great
17:22:06 <dustymabe> maxamillion: ++
17:22:17 <roshi> there's a couple of things I have concerns about
17:22:32 <dustymabe> we also might be able to take what notes are given to us and formalize it if we can get a FAD put together like we talked about
17:22:35 <roshi> 1) Testing/Documentation burden of having a different stack than the rest of Fedora
17:22:55 <maxamillion> dustymabe: +1
17:23:08 <roshi> 2) Providers that don't abstract away networking (like Digital Ocean)
17:23:32 <roshi> 3) Running a "comparison" w/o knowing what our requirements are for a networking stack
17:23:45 * roshi is in favor of a FAD to handle some of this stuff, for sure :)
17:23:58 <dustymabe> roshi: so here is another concern
17:24:17 <roshi> so I think at this point we need some people who can kind of go heads down and dig into documenting the network stack, then doing the comparison
17:24:21 <dustymabe> if we are going to move to atomic as our primary focus (it has been proposed that we do so)
17:24:21 <roshi> go for it dustymabe
17:24:33 <dustymabe> and atomic has NM in it
17:24:41 <maxamillion> dustymabe: for now
17:24:44 <dustymabe> and cloud base is *less important*
17:24:58 <maxamillion> I think cloud base and atomic should share a networking stack, but that's just me
17:25:11 <adimania> maxamillion, +1
17:25:13 <dustymabe> maxamillion: in that case you would be voting against networkd?
17:25:19 <maxamillion> dustymabe: not at all
17:25:44 <maxamillion> dustymabe: but I think if the evaluation of networkd proves it to be advantageous then we should consider switching atomic to use it as well
17:25:50 <roshi> I'm a fan of having the same stack everywhere, but it's in our best interest to do a good comparison
17:26:00 <roshi> because then we can back up our decision with data
17:26:04 <dustymabe> maxamillion: the problem is we use the same tree for atomic in both cloud and bare metal
17:26:18 <maxamillion> dustymabe: we don't have to, it's just a json file
17:26:51 <dustymabe> true.. but it does simplify things to not have them be completely separate things
17:26:54 <maxamillion> dustymabe: I'm not saying we should or shouldn't diverge but that the possibility is there if there's enough motivation
17:27:05 <dustymabe> yeah. I approached walters about this in the past
17:27:06 <roshi> so who wants to work on coming up with the specifications we care about for our network stack?
17:27:24 <dustymabe> and he really wants to keep them the same for now.. maybe in the future that doesn't make sense
17:27:56 <dustymabe> roshi: well kushal would be a good candidate :)
17:28:25 <roshi> for sure - as he's the one who got networkd into an image
17:28:45 <dustymabe> I agree there needs to be more parties involved
17:28:46 <maxamillion> dustymabe: that's fair
17:29:06 <maxamillion> dustymabe: also good to know his thoughts on the matter since he's the one doing the bulk of the work
17:29:11 <roshi> who has cycles to get the draft started?
17:29:20 <roshi> a roadmap and whatnot for when kushal gets back
17:29:44 <dustymabe> maxamillion: those were his thoughts on it a few months ago - I actually thought it was a bug that the cloud base and cloud atomic images weren't using the same networking
17:30:04 <dustymabe> roshi: since I'm tasked with kinda planning the FAD then I'll bow out on this one
17:30:19 <roshi> we're to the planning phase on that?
17:30:30 <dustymabe> roshi: planning to plan ??
17:30:32 <dustymabe> :)
17:30:35 <roshi> or is this a "write a proposal and submit for review" kinda thing?
17:30:46 <dustymabe> I was going to make a ticket and draft a proposal
17:30:55 <maxamillion> dustymabe: rgr
17:30:57 <roshi> sounds good
17:31:03 <dustymabe> ticket = Fedora cloud trac ticket
17:31:04 <roshi> anyone have cycles?
17:31:25 * roshi has cycles, but isn't the most familiar with cloud providers out of the group
17:31:43 <dustymabe> if gholms were around he might have some good input
17:31:53 <roshi> for sure
17:32:00 <dustymabe> he tends to have strong opinions when we have had those discussions in the past
17:32:10 <roshi> true
17:32:21 <dustymabe> how about this. let's send an email to the list asking for volunteers to help draft the requirements
17:32:28 <roshi> maxamillion: jbrooks adimania ?
17:32:48 <maxamillion> dustymabe: +1
17:32:55 <roshi> that works, just figured if we had people here now, then we could start now
17:32:57 <jbrooks> roshi, sounds good, I need to understand more about it
17:33:07 <roshi> I think we all do :)
17:33:13 <jbrooks> I don't yet understand the benefit of any change
17:33:22 <langdon> roshi, why not strawman it then let people add .. if you have "something" it will be easier to add/change it
17:33:23 <maxamillion> if only we all had spare time ;)
17:34:03 <roshi> I wouldn't say "spare" time :p this is just a priority for me
17:34:24 <roshi> especially since it could have a project-wide impact and impacts testing considerably
17:34:45 <maxamillion> oh, it's not quite that high on my list at the moment unfortunately
17:34:52 <maxamillion> I just wish I had time for "all the things"
17:35:05 <dustymabe> maxamillion: me 2
17:35:07 <roshi> you and me both :)
17:35:34 <dustymabe> ok so we'll do email for now
17:35:37 <dustymabe> I can send that
17:35:41 <roshi> I think we could all clear our calendars for the next several years and not make it through all the stuff we want to poke at :p
17:35:44 <roshi> thanks dustymabe
17:35:55 <roshi> can you update the ticket as well?
17:36:03 <dustymabe> roshi: will do
17:36:06 <roshi> thanks
17:36:18 <roshi> #topic Care and Feeding, Fedora Dockerfiles
17:36:24 <roshi> #link https://fedorahosted.org/cloud/ticket/84
17:36:25 * maxamillion runs
17:36:46 <roshi> o/ maxamillion
17:37:02 <roshi> looks like we just need some testing of the dnf migration on all the branches
17:37:14 <roshi> Master = rawhide, in this context, right?
17:37:17 <maxamillion> dnf has an issue where it doesn't work with overlayfs
17:37:24 <maxamillion> just like yum used to
17:37:27 <roshi> ooh, fun
17:37:40 <jbrooks> That's an rpm issue, right?
17:37:52 * adimania_ got disconnected.
17:38:07 <maxamillion> jbrooks: I thought it was yum --> https://bugzilla.redhat.com/show_bug.cgi?id=1213602
17:38:18 <dustymabe> adimania: welcome back
17:38:19 <maxamillion> jbrooks: it very well could also be rpm
17:38:49 * adimania is willing to help on systemd-networkd but has almost no experience with it. So things might be slow.
17:39:10 <roshi> adimania: taking things slow tends to yield more solid results :) no worries
17:39:13 <langdon> really the problem is.. you open one read-handle, then you open a write handle, oops.. you invalidated your read-handle cause it is on a different fs (in the overlay sense)
17:39:26 <dustymabe> langdon: yep
17:39:33 <dustymabe> no longer the same fd
17:39:37 <roshi> langdon: just so I have my head on straight with this
17:39:56 <roshi> so, if you keep all rpm actions in the same step in a Dockerfile, you won't have this issue?
17:40:05 <roshi> if the handles get open/closed at the same time?
17:40:13 <roshi> since that would be at the same level in the fs?
17:41:36 <langdon> roshi, i am not sure.. i have experienced it myself in docker run->yum install->etc .. but i haven't played too much with it.. just heard about the issue
17:41:48 <roshi> ah
17:41:49 <langdon> and all my dockerfiles have worked..
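[Editor's note: the read-handle/write-handle pattern langdon describes can be sketched in a few lines of Python. This is a hedged illustration of the access pattern, not rpm's actual code; the file path is a stand-in. On a regular filesystem, as run here, both descriptors share an inode; on overlayfs the write-open can trigger a copy-up into the upper layer, leaving the earlier read descriptor attached to the stale lower-layer file.]

```python
import os
import tempfile

# Stand-in for the file rpm reopens (e.g. its package database).
path = os.path.join(tempfile.mkdtemp(), "Packages")
with open(path, "wb") as f:
    f.write(b"header v1")

# The problematic pattern: hold a read descriptor, then reopen for write.
f_read = open(path, "rb")    # on overlayfs: refers to the lower layer
f_write = open(path, "r+b")  # on overlayfs: copy-up makes an upper-layer copy

# On a normal filesystem both descriptors reference the same inode,
# so writes through one are visible through the other.
same_inode = os.fstat(f_read.fileno()).st_ino == os.fstat(f_write.fileno()).st_ino
print(same_inode)  # True here; on overlayfs the inodes can differ,
                   # so f_read keeps reading a stale copy of the file

f_read.close()
f_write.close()
```

Note that, per dustymabe below, the double-open happens inside rpm itself, so collapsing everything into one Dockerfile step would not avoid it.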
17:41:53 <roshi> I'll have to do some digging
17:42:00 <dustymabe> roshi: I don't think that would fix it. it's purely an rpm issue I think
17:42:02 * roshi hasn't seen this in his dockerfiles
17:42:09 <dustymabe> rpm itself opens a file twice
17:42:39 <langdon> i feel like the "overlays being live" is only when the docker image is running.. not during docker image creation.. but that seems crazy to me
17:42:45 <dustymabe> and then uses the file handles interchangeably (at least that is what it looks like from the original description)
17:43:03 <roshi> interesting
17:43:07 <langdon> dustymabe, yeah.. that's the "crazy" part in my line above
17:43:26 <roshi> but for this ticket, we just have to go through and test the dockerfiles we have in each branch and note errors, right?
17:43:32 * langdon thinks "meh, worked on my machine" ;)
17:43:38 <roshi> haha
17:43:59 <adimania> langdon, use docker .... oh wait! :P
17:44:06 <dustymabe> anywho. is this super related to current ticket?
17:44:27 <roshi> that's what I just asked :)
17:44:30 <roshi> but for this ticket, we just have to go through and test the dockerfiles we have in each branch and note errors, right?
17:44:58 <dustymabe> roshi: I guess. but do we assume which storage backend is being used?
17:45:21 <dustymabe> I think we should care less about that for these example dockerfiles
17:46:20 <roshi> aren't dockerfiles just examples to work from?
17:46:51 <dustymabe> yeah. I guess what we were discussing is if the bug is in 'dnf' then we shouldn't move to it
17:47:16 <dustymabe> looks like the bug is in rpm itself though
17:48:03 <roshi> aiui, nothing more for this ticket though, right?
17:48:10 <dustymabe> so.. I say move to dnf and test
17:48:17 <dustymabe> if it works it works and move on
17:48:32 <roshi> wfm
17:48:42 <roshi> #topic Flyer text
17:48:48 <roshi> #link https://fedorahosted.org/cloud/ticket/108
17:49:01 <roshi> still waiting on number80 for this one
17:49:10 <roshi> moving on...
17:49:14 <dustymabe> roshi: one sec
17:49:17 <roshi> kk
17:49:21 <dustymabe> was this for Flock?
17:49:25 <dustymabe> which has passed?
17:49:48 <roshi> I think this is for anything we pass out flyers for during the year
17:49:52 <dustymabe> ahh ok
17:49:55 <dustymabe> sounds good
17:50:00 <dustymabe> just looked through the ticket and I agree
17:50:38 <roshi> #topic Updated Cloud/Atomic images
17:50:45 <roshi> #link https://fedorahosted.org/cloud/ticket/94
17:50:54 <roshi> waiting for more input from kushal on this one
17:51:10 <roshi> on this note - I'm going to be working on getting local cloud builds working reliably
17:51:27 <roshi> try to suss out what the requirements are and help kushal with docs for it
17:51:46 <roshi> for QA, we spin custom lives often, and I want to be able to do that with cloud
17:52:31 <adimania> I was looking at automating certain tests. I think kushal already has some tests done.
17:52:37 <roshi> it'd also be good to have more people up to speed with the release process for this to reduce the bus factor
17:52:51 <roshi> adimania: our testing needs a lot of work
17:53:10 <roshi> I'm attempting to port the centos tests to fedora so we can run those in taskotron once it's ready
17:53:26 <adimania> yes and it would be needed for the bi-weekly release.
17:53:29 <roshi> taskotron could always use more devs and pull requests if you have interest and cycles
17:53:32 <roshi> for sure
17:55:28 <roshi> ok, we're about out of time
17:55:40 <roshi> open floor for now - then update tickets on your own time?
17:55:49 <roshi> discussion to be ongoing in fedora-cloud?
17:55:54 <dustymabe> ok any more items?
17:55:58 <roshi> 3 more
17:56:27 <roshi> 32 bit image (mostly decided), PRD Discussion (not needed anymore?) and Shipping with firewall on by default
17:56:36 <lalatenduM> Fedora Vagrant images in atlas.hashicorp https://lists.fedoraproject.org/pipermail/cloud/2015-July/005619.html
17:56:57 <roshi> #topic Open Floor
17:56:59 <lalatenduM> Is the Fedora cloud SIG the right forum for this?
17:57:23 <dustymabe> lalatenduM: I think so
17:57:31 <dustymabe> I'm +1 for having our images in the index
17:57:43 <jbrooks> +1
17:57:55 <dustymabe> the question is what image? would we only put the virtualbox image in there?
17:58:07 <jbrooks> No, libvirt, too
17:58:10 <dustymabe> or is the client smart enough to know
17:58:15 <dustymabe> and pull down the right image?
17:58:15 <lalatenduM> I think both libvirt and virtualbox
17:58:22 <jbrooks> Yeah, it's smart enough
17:58:24 <lalatenduM> +1 jbrooks
17:58:27 <jbrooks> It works w/ centos
17:58:39 <dustymabe> yeah. in that case what are the next steps there.
17:58:45 <roshi> who's testing the vagrant image?
17:58:47 <dustymabe> do you want someone from the working group to maintain that
17:59:14 <jbrooks> Maybe connect w/ the person from centos who got theirs in place? lalatenduM, do you know who that is?
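[Editor's note: for context, this is roughly the consumption flow that publishing boxes to Atlas would enable, and what "smart enough" means: Vagrant resolves one box name to the image matching the chosen provider. The box name below is a hypothetical placeholder, not a box the group had registered at the time.]

```shell
# Initialize a Vagrantfile pointing at an Atlas-hosted box
# (box name is a hypothetical placeholder):
vagrant init fedora/cloud-base

# Vagrant pulls the provider-specific image for the same box name,
# so one listing can serve both VirtualBox and libvirt users:
vagrant up --provider=virtualbox
# or:
vagrant up --provider=libvirt
```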
17:59:19 <lalatenduM> dustymabe: yeah that would be god
17:59:22 <lalatenduM> good*
17:59:42 <lalatenduM> jbrooks: for CentOS KB puts it up
17:59:54 <lalatenduM> and for ADB I maintain it
18:00:03 <dustymabe> lalatenduM: I'll create a ticket for it
18:00:11 <lalatenduM> ADB is the Atomic Developer Bundle
18:00:20 <dustymabe> I should be able to look into managing it
18:00:32 * jbrooks has become a big fan of vagrant since it's been avail in fedora
18:00:38 <lalatenduM> dustymabe: I can co-maintain it with you
18:00:42 <langdon> jbrooks, +1
18:00:43 * maxamillion wants vagrant to die a fiery death
18:00:45 <dustymabe> langdon: thanks
18:00:49 <dustymabe> maxamillion: :)
18:01:02 * langdon still wants all the other WGs to support it too
18:01:03 <maxamillion> no really, can't stand it, I'm waiting for the fad to die
18:01:07 <jbrooks> maxamillion, heh, I felt that way, until I started using it :)
18:01:16 <maxamillion> jbrooks: I did use it, for about 6 months, still hate it
18:01:19 <langdon> maxamillion, what do you actually have against it?
18:01:32 <langdon> or what is the better alternative?
18:02:10 * langdon endmeeting probably worthwhile before a pro/con on vagrant
18:02:15 <dustymabe> maxamillion: I didn't like the idea of it at first.. but I think it serves a purpose (esp for developer focused people)
18:02:16 <langdon> *says
18:02:21 <maxamillion> langdon: it's a long story, it's a combination of everything that is wrong with the rubygems community bundled up with an upstream that doesn't care about source distribution in a consumable fashion but instead wants to ship a giant vendored blob
18:02:56 <dustymabe> roshi: ^^ want to do endmeeting?
18:02:56 <jbrooks> I've only gotten into it since it's been in rpms
18:02:59 * lalatenduM will keep quiet about Vagrant :)
18:03:04 <langdon> maxamillion, the latter is somewhat reflective of all ISVs today.. and is what we are trying to resolve with the fed-mod stuff ;)
18:03:08 <jbrooks> but this sort of thing is EPIC: https://github.com/kubernetes/contrib/tree/master/ansible/vagrant
18:03:18 <maxamillion> jbrooks: I tried to get it into rpms and gave up after needing an adjustment in blood pressure meds
18:03:24 <roshi> sure thing, was letting the convo finish as I was curious :p
18:03:33 <jbrooks> maxamillion, It's packaged now! ;)
18:03:34 <maxamillion> langdon: fed-mod?
18:03:46 <roshi> thanks for coming folks!
18:03:50 <roshi> #endmeeting