17:06:07 <mattdm> #startmeeting Cloud (2014-07-25)
17:06:07 <zodbot> Meeting started Fri Jul 25 17:06:07 2014 UTC.  The chair is mattdm. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:06:07 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:06:15 <mattdm> #meetingname cloud
17:06:15 <zodbot> The meeting name has been set to 'cloud'
17:06:26 <mattdm> #topic gigantic unprepared open floor!
17:06:32 <roshi> though trying out a kinesis, so slow typing :-/
17:06:39 <gholms> Hai
17:06:48 <mattdm> hello all peoples!
17:06:51 <agrimm> o/
17:07:15 <mattdm> this has been one of those weeks where I have no idea what's going on and felt busy but got nothing done.
17:07:22 <agrimm> :)
17:07:25 <mattdm> hopefully not that way for everyone :)
17:07:43 <agrimm> I have actually been somewhat productive this week
17:07:48 <mattdm> agrimm++ !
17:07:59 <mattdm> let's start there then :)
17:08:27 <gholms> Woot
17:08:44 <agrimm> so, worked with cgwalters on getting imagefactory set up on atomic01.qa.fp.org, where I'm now testing image builds (since we can't do them in koji yet because $reasons)
17:09:33 <agrimm> I also confirmed after mailing list discussions that ttyS0 should work fine for console output in EC2 w/ HVM
17:09:57 <agrimm> which means we should not need a "special" image (no conflict w/ openstack/GCE, etc.)
17:10:30 <mattdm> agrimm cool. so that uses the normal extlinux config like everything else?
17:10:39 <agrimm> mattdm, correct
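(For reference, a serial console with extlinux is enabled via kernel arguments in extlinux.conf; a minimal sketch follows — the kernel/initrd paths and root device are placeholders, not the actual Fedora image config.)

```
# /boot/extlinux/extlinux.conf -- illustrative sketch only
default linux
timeout 1

label linux
  kernel /boot/vmlinuz
  # console=ttyS0 sends kernel/boot output to the serial console,
  # which is what EC2's get-console-output reads on HVM instances
  append root=/dev/vda1 ro console=tty0 console=ttyS0,115200n8
  initrd /boot/initramfs.img
```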
17:11:03 <agrimm> turns out that when it didn't work, I just wasn't waiting long enough for euca-get-console-output to have data
17:11:23 <mattdm> so, should we do atomic as hvm only?
17:11:29 <agrimm> it takes at least 3 or 4 minutes after the instance is accessible via ssh, which is just mind-boggling
17:11:47 <agrimm> mattdm, that is my feeling
17:11:55 <mattdm> agrimm some sort of weird buffering?
17:12:02 <gholms> It is.
17:12:17 <imcleod> agrimm: It has always been this way I'm afraid....
17:12:19 <gholms> The back end only sends updates to the front end every five minutes or so.
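(The multi-minute delay described above can be worked around by polling until output appears; a rough sketch — poll_console and POLL_INTERVAL are made-up names here, and in practice the command argument would be something like "euca-get-console-output i-xxxxxxxx".)

```shell
# Poll a console-output command until it returns non-empty output.
# EC2's back end only pushes console updates every few minutes, so
# retry with a delay rather than giving up on the first empty result.
poll_console() {
    # $1: command that prints console output; $2: max attempts
    cmd="$1"
    tries="${2:-10}"
    i=1
    while [ "$i" -le "$tries" ]; do
        out="$($cmd)"
        if [ -n "$out" ]; then
            printf '%s\n' "$out"
            return 0
        fi
        sleep "${POLL_INTERVAL:-30}"
        i=$((i + 1))
    done
    return 1   # no output after all attempts
}
```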
17:12:47 <mattdm> gholms does hvm only sound okay for you? (presumably we'd keep doing the non-Atomic Cloud Base images both ways)
17:13:43 <gholms> mattdm: Only half the instance types work with hvm.
17:13:53 <gholms> Is that really okay?
17:14:18 <agrimm> gholms, for atomic, probably.  the non-HVM instance types are mostly 1) legacy, or 2) very small
17:14:27 <agrimm> not things you'd run a bunch of containers on
17:15:10 <gholms> I wouldn't be running a large number of containers on anything on a cloud, but you know more about what we're shooting for than I do.
17:15:56 <mattdm> isn't openshift ALL ABOUT running a large number of containers in cloud?
17:16:13 <agrimm> yes, that. ^^
17:16:15 <agrimm> :)
17:16:16 <gholms> Mhm
17:17:05 <gholms> Strictly speaking, it's about making it easy for people to run their stuff on RHEL, but I suspect that changed over time.  ;)
17:17:19 <agrimm> gholms, it looks to me like mostly what doesn't support HVM is m1,m2,c1,t1 (all legacy) and t2
17:17:53 <agrimm> I _do_ still think we should have PV cloud-base
17:18:13 <gholms> Yeah, so only the most commonly-used instance types for entry-level use.
17:18:16 <agrimm> but I don't see demand for both for atomic, especially given that somebody has to do the grub support work
17:18:21 <gholms> t2 is actually hvm-only.
17:18:35 <agrimm> oh, right, I misread.
17:18:55 <gholms> Hey, if you're fairly sure no one will use it then let's go without it and see what happens.
17:19:01 <agrimm> gholms, why would someone deploy m1 when they can deploy m3 cheaper (last I checked)
17:19:04 <mattdm> lol :)
17:19:09 <mattdm> okay, so....
17:19:16 <gholms> agrimm: Ten times the instance storage
17:19:57 <mattdm> #info Atomic image will be HVM only unless someone wants to work on the bootloader support work for pvgrub (and figure out something else for the console hacks)
17:19:58 <agrimm> gholms, I suppose.  slower instance storage, though
17:20:18 <mattdm> also, someone should tell oddshocks and dgilmore / rel-eng about this decision
17:20:38 <mattdm> #action mattdm tell oddshocks and dgilmore / rel-eng about atomic hvm only
17:21:08 <gholms> I'm behind on mailing list traffic.  What exactly is broken?
17:21:31 <gholms> ...and why is it only broken on atomic?
17:21:46 * gholms searches his mailbox
17:21:47 <mattdm> gholms the stuff in ostree doesn't deal with the pvgrub boot config
17:22:09 <mattdm> #info where "the bootloader support work" is in rpm-ostree -- talk to colin walters
17:22:34 <mattdm> #info Cloud Base image will be both HVM and PVM
17:22:54 <gholms> Oh, right.  Ostree.
17:22:57 * gholms grumbles
17:23:28 <mattdm> #info this is primarily Amazon EC2 we are talking about here, although PVM support would impact anyone using our images with PVM in a Xen-based private cloud too
17:23:35 <mattdm> (just for the record)
17:24:03 <agrimm> ... for all you cloudstack folks.  :)
17:24:11 <roshi> that's what I was going to ask :)
17:25:12 <agrimm> mattdm, so, I know we have a problem with people not having HVM privileges on EC2.  I have them, but only because of OpenShift Online.  so, I'm happy to register things there for testing until we have another way ...
17:25:31 <agrimm> *but* I shouldn't be using it for testing (separation of billing and all that)
17:26:15 <gholms> agrimm: Recent events have led me to believe that may no longer be the case.  It might be worth more testing.
17:26:21 <mattdm> I *think* the official fedora account owned by releng has the required privs
17:26:22 <agrimm> are Fedora folks just using their personal accounts and getting reimbursed these days?  or do we have some special account
17:26:37 <mattdm> agrimm we have two special accounts -- the official one and a "community" one
17:26:56 <gholms> Yes, Fedora's official account has special unicorn powers.
17:26:57 <imcleod> agrimm: So mere mortals cannot register HVM AMIs?
17:27:16 <agrimm> imcleod, gholms said no, but now it looks like he's backpedaling. :)
17:27:26 <gholms> imcleod: They couldn't up until recently, but that may have changed.
17:27:53 <gholms> (Dang hosted services)  :P
17:28:34 <imcleod> OpenStack clouds let anyone upload full virt images :-).
17:28:43 * imcleod gets back on topic... sorry....
17:28:49 <agrimm> imcleod, yes, as does GCE
17:29:36 <gholms> Well, don't forget the reason for that restriction was Windows.  ;)
17:31:22 <gholms> What's next?
17:31:22 <agrimm> anyway ... I'm happy to help get us moving forward on both HVM images in general and the partitioned/LVM stuff (there's a task for that somewhere).  just let me know what I can do
17:32:44 <mattdm> agrimm probably talk to oddshocks about this?
17:32:50 <agrimm> ok
17:33:26 <mattdm> oooh!
17:33:37 <mattdm> #action agrimm to talk to oddshocks about HVM and partitioned images
17:34:03 <agrimm> well, the next thing I was going to bring up was the failing cloud image builds (due to the anaconda bug), but I see there's a successful build now
17:34:20 <mattdm> #info WHOOO SUCCESSFUL IMAGE BUILDS NOW
17:34:24 <imcleod> Yay!
17:34:39 <agrimm> that's rawhide, though (just looking at the rel-eng dashboard)
17:34:54 <agrimm> are we trying to build F21 branched images yet?
17:34:54 <mattdm> http://koji.fedoraproject.org/koji/tasks?state=all&view=tree&method=image&order=-id
17:35:02 <gholms> Yay
17:35:19 <mattdm> #info aw. f21 is still failing.
17:35:27 <imcleod> Yup.  Boo.
17:35:28 <gholms> :(
17:36:01 <mattdm> I wonder if the rawhide ones boot and allow sudo.
17:36:17 <imcleod> mattdm: My impression is that relaying these issues to the Anaconda team is still a manual process.  True?  Should we be doing fancy organizational workflow type things to ensure they always know when there is an Anaconda related failure?
17:36:39 <mattdm> imcleod: yes that would be nice.
17:37:05 <agrimm> EARLY_SWAP_RAM ... yay
17:38:27 <agrimm> imcleod, mattdm : by the way, do you guys happen to know if imagefactory has support for passing an update.img ?  that would make it easier to work past anaconda stuff (at least in our atomic environment)
17:39:24 <mattdm> imcleod: I guess some manual process might be needed anyway to clarify whether it's anaconda breaking or whether one of us did something (typo in kickstart, etc) that broke it
17:41:27 <mattdm> well that killed the discussion. is everyone off testing images now? :)
17:41:37 <imcleod> Hey.  Sorry.
17:41:54 <mattdm> heh no problem.
17:42:13 <imcleod> agrimm: I don't believe we support it at the moment but I'm happy to look into adding it to Oz.
17:42:14 <mattdm> so I think the failing builds at this point are still due to releng's general compose issues
17:42:40 <mattdm> imcleod if you're bored, making those screenshots be png wouldn't hurt. :)
17:43:03 <imcleod> mattdm: OK.  And I agree there's manual analysis needed at this point to determine where the failure is actually happening.
17:43:14 <agrimm> imcleod, that would be extremely useful, since I am pointing at install URLs that I don't control, so I can't put it in the tree
17:43:59 <agrimm> mattdm, Oz actually supports png output ... no idea why we use ppm. wonder if rel-eng can change that for us?
17:44:14 <agrimm> imcleod, I'm looking at code right now. :)
17:44:22 <imcleod> agrimm: I'm not entirely familiar with the function of update.img so I'll likely seek some offline discussion with you.
17:44:25 <imcleod> agrimm: Ahh.  Cool!
17:44:56 <agrimm> imcleod, http://fedoraproject.org/wiki/Anaconda/Updates
17:45:20 <imcleod> agrimm: There's also the ability to source the initrd, kernel and stage2 from one location but the root install tree from another, via the "stage2=" kernel command line.
17:45:21 <mattdm> meh rawhide still asking for sudo password
17:45:24 <agrimm> basically allows you to override python files (and other stuff)
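(For context, anaconda picks up an updates image via a boot argument appended to the installer's kernel command line; a sketch — the URL is a placeholder.)

```
# Appended to the installer's kernel command line; the URL is a
# placeholder for wherever the updates image is hosted.
# Older anaconda versions also accepted plain updates=.
inst.updates=http://example.com/path/updates.img
```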
17:45:28 <mattdm> gholms do you know what's going on there?
17:45:36 <imcleod> agrimm: Though Oz doesn't support "stage2=" either.
17:45:46 * mattdm will boot up new one with userdata for direct root access to look
17:46:09 <gholms> I haven't had a chance to test it this week.  :/
17:47:20 <mattdm> gholms Imma do that now and I'll tell you what I see.
17:48:19 <jzb> hi all - poking my head in a few minutes before my flight.
17:48:24 <mattdm> hi jzb!
17:48:28 <jzb> mattdm: howdy!
17:49:16 * gholms waves
17:49:37 * jzb waves to gholms
17:49:53 <agrimm> imcleod, so here's where it got removed:  https://git.fedorahosted.org/cgit/anaconda.git/commit/?h=f21-branch&id=eb05ade5787f70b8ddef3659fbdd4cd6d6ff655f
17:51:16 <agrimm> so this would appear to be a blivet bug
17:54:49 * agrimm is on a long tangent now
17:55:02 <agrimm> mattdm, anything else to talk about today?
17:55:17 <mattdm> agrimm: as noted at the beginning, I don't know. :)
17:55:43 <mattdm> Ooh -- Base WG is interested in taking the Docker Base image, since apparently Server and others also want to use that
17:55:50 <mattdm> and, I guess, both contain the word Base :)
17:56:17 <mattdm> this sounds good to me as long as someone is doing it :)
17:56:53 <mattdm> but I guess that's all for meeting. more chatting in #fedora-cloud as needed :)
17:56:55 <mattdm> #endmeeting