17:06:07 #startmeeting Cloud (2014-07-25)
17:06:07 Meeting started Fri Jul 25 17:06:07 2014 UTC. The chair is mattdm. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:06:07 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:06:15 #meetingname cloud
17:06:15 The meeting name has been set to 'cloud'
17:06:26 #topic gigantic unprepared open floor!
17:06:32 though trying out a Kinesis, so slow typing :-/
17:06:39 Hai
17:06:48 hello all peoples!
17:06:51 o/
17:07:15 this has been one of those weeks where I have no idea what's going on and felt busy but got nothing done.
17:07:22 :)
17:07:25 hopefully not that way for everyone :)
17:07:43 I have actually been somewhat productive this week
17:07:48 agrimm++ !
17:07:59 let's start there then :)
17:08:27 Woot
17:08:44 so, worked with cgwalters on getting imagefactory set up on atomic01.qa.fp.org, where I'm now testing image builds (since we can't do them in koji yet because $reasons)
17:09:33 I also confirmed after mailing list discussions that ttyS0 should work fine for console output in EC2 w/ HVM
17:09:57 which means we should not need a "special" image (no conflict w/ OpenStack/GCE, etc.)
17:10:30 agrimm cool. so that uses the normal extlinux config like everything else?
17:10:39 mattdm, correct
17:11:03 turns out that when it didn't work, I just wasn't waiting long enough for euca-get-console-output to have data
17:11:23 so, should we do atomic as HVM only?
17:11:29 it takes at least 3 or 4 minutes after the instance is accessible via ssh, which is just mind-boggling
17:11:47 mattdm, that is my feeling
17:11:55 agrimm some sort of weird buffering?
17:12:02 It is.
17:12:17 agrimm: It has always been this way, I'm afraid....
17:12:19 The back end only sends updates to the front end every five minutes or so.
17:12:47 gholms does HVM only sound okay for you? (presumably we'd keep doing the non-Atomic Cloud Base images both ways)
17:13:43 mattdm: Only half the instance types work with HVM.
17:13:53 Is that really okay?
17:14:18 gholms, for atomic, probably. the non-HVM instance types are mostly 1) legacy, or 2) very small
17:14:27 not things you'd run a bunch of containers on
17:15:10 I wouldn't be running a large number of containers on anything on a cloud, but you know more about what we're shooting for than I do.
17:15:56 isn't OpenShift ALL ABOUT running a large number of containers in the cloud?
17:16:13 yes, that. ^^
17:16:15 :)
17:16:16 Mhm
17:17:05 Strictly speaking, it's about making it easy for people to run their stuff on RHEL, but I suspect that changed over time. ;)
17:17:19 gholms, it looks to me like mostly what doesn't support HVM is m1, m2, c1, t1 (all legacy) and t2
17:17:53 I _do_ still think we should have a PV cloud-base
17:18:13 Yeah, so only the most commonly-used instance types for entry-level use.
17:18:16 but I don't see demand for both for atomic, especially given that somebody has to do the grub support work
17:18:21 t2 is actually HVM-only.
17:18:35 oh, right, I misread.
17:18:55 Hey, if you're fairly sure no one will use it then let's go without it and see what happens.
17:19:01 gholms, why would someone deploy m1 when they can deploy m3 cheaper (last I checked)
17:19:04 lol :)
17:19:09 okay, so....
17:19:16 agrimm: Ten times the instance storage
17:19:57 #info Atomic image will be HVM only unless someone wants to do the bootloader support work for pvgrub (and figure out something else for the console hacks)
17:19:58 gholms, I suppose. slower instance storage, though
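
A rough illustration of the extlinux setup discussed above (17:09-17:10): getting console output on ttyS0 in an EC2 HVM instance only requires the serial console on the kernel command line in the image's extlinux config. This is a hedged sketch -- the kernel version, root device, and label are placeholders, not the exact values the Fedora image ships:

    # /boot/extlinux/extlinux.conf -- illustrative sketch only
    serial 0 115200                  # mirror the bootloader prompt to serial too
    default linux
    timeout 1

    label linux
        kernel /vmlinuz-3.16.0-1.fc21.x86_64
        append root=UUID=<root-uuid> ro console=tty0 console=ttyS0,115200n8
        initrd /initramfs-3.16.0-1.fc21.x86_64.img

With console=ttyS0 listed last, kernel messages land on the serial port, which is what EC2's console log captures on HVM instances.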
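And for reference, the console check agrimm describes (17:11) is a one-liner with euca2ools -- the instance ID here is a placeholder, and per gholms the back end only posts console updates every five minutes or so, so expect to re-run it:

    $ euca-get-console-output i-abcd1234   # placeholder instance ID
    # empty for the first several minutes after boot; retry until
    # the buffered console text appears
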
17:20:18 also, someone should tell oddshocks and dgilmore / rel-eng about this decision
17:20:38 #action mattdm to tell oddshocks and dgilmore / rel-eng about the Atomic HVM-only decision
17:21:08 I'm behind on mailing list traffic. What exactly is broken?
17:21:31 ...and why is it only broken on atomic?
17:21:46 * gholms searches his mailbox
17:21:47 gholms the stuff in ostree doesn't deal with the pvgrub boot config
17:22:09 #info "the bootloader support work" mentioned above lives in rpm-ostree -- talk to Colin Walters
17:22:34 #info Cloud Base image will be both HVM and PVM
17:22:54 Oh, right. Ostree.
17:22:57 * gholms grumbles
17:23:28 #info this is primarily Amazon EC2 we are talking about here, although PVM support would impact anyone using our images with PVM in a Xen-based private cloud too
17:23:35 (just for the record)
17:24:03 ... for all you CloudStack folks. :)
17:24:11 that's what I was going to ask :)
17:25:12 mattdm, so, I know we have a problem with people not having HVM privileges on EC2. I have them, but only because of OpenShift Online. so, I'm happy to register things there for testing until we have another way ...
17:25:31 *but* I shouldn't be using it for testing (separation of billing and all that)
17:26:15 agrimm: Recent events have led me to believe that may no longer be the case. It might be worth more testing.
17:26:21 I *think* the official Fedora account owned by releng has the required privs
17:26:22 are Fedora folks just using their personal accounts and getting reimbursed these days? or do we have some special account
17:26:37 agrimm we have two special accounts -- the official one and a "community" one
17:26:56 Yes, Fedora's official account has special unicorn powers.
17:26:57 agrimm: So mere mortals cannot register HVM AMIs?
17:27:16 imcleod, gholms said no, but now it looks like he's backpedaling. :)
17:27:26 imcleod: They couldn't up until recently, but that may have changed.
17:27:53 (Dang hosted services) :P
17:28:34 OpenStack clouds let anyone upload full virt images :-).
17:28:43 * imcleod gets back on topic... sorry....
17:28:49 imcleod, yes, as does GCE
17:29:36 Well, don't forget the reason for that restriction was Windows. ;)
17:31:22 What's next?
17:31:22 anyway ... I'm happy to help get us moving forward on both HVM images in general and the partitioned/LVM stuff (there's a task for that somewhere). just let me know what I can do
17:32:44 agrimm probably talk to oddshocks about this?
17:32:50 ok
17:33:26 oooh!
17:33:37 #action agrimm to talk to oddshocks about HVM and partitioned images
17:34:03 well, the next thing I was going to bring up was the failing cloud image builds (due to the anaconda bug), but I see there's a successful build now
17:34:20 #info WHOOO SUCCESSFUL IMAGE BUILDS NOW
17:34:24 Yay!
17:34:39 that's rawhide, though (just looking at the rel-eng dashboard)
17:34:54 are we trying to build F21 branched images yet?
17:34:54 http://koji.fedoraproject.org/koji/tasks?state=all&view=tree&method=image&order=-id
17:35:02 Yay
17:35:19 #info aw. f21 is still failing.
17:35:27 Yup. Boo.
17:35:28 :(
17:36:01 I wonder if the rawhide ones boot and allow sudo.
17:36:17 mattdm: My impression is that relaying these issues to the Anaconda team is still a manual process. True? Should we be doing fancy organizational workflow type things to ensure they always know when there is an Anaconda-related failure?
17:36:39 imcleod: yes that would be nice.
17:37:05 EARLY_SWAP_RAM ... yay
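
For context on the pvgrub work #info'd above (17:19:57, 17:22:09): a PV instance boots through pvgrub, which reads a legacy-GRUB-style config from inside the guest image -- the piece rpm-ostree doesn't currently generate. A hedged sketch of roughly what pvgrub expects (file names and devices below are assumptions); note the console also moves to hvc0 under PV, which is the "console hacks" part:

    # /boot/grub/menu.lst -- what pvgrub looks for inside a PV image;
    # kernel/initramfs names and root device are placeholders
    default 0
    timeout 0

    title Fedora
        root (hd0)
        kernel /boot/vmlinuz-3.16.0-1.fc21.x86_64 root=/dev/xvda1 ro console=hvc0
        initrd /boot/initramfs-3.16.0-1.fc21.x86_64.img
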
17:38:27 imcleod, mattdm: by the way, do you guys happen to know if imagefactory has support for passing an update.img? that would make it easier to work past anaconda stuff (at least in our atomic environment)
17:39:24 imcleod: I guess some manual process might be needed anyway to clarify whether it's anaconda breaking or whether one of us did something (typo in kickstart, etc.) that broke it
17:41:27 well that killed the discussion. is everyone off testing images now? :)
17:41:37 Hey. Sorry.
17:41:54 heh no problem.
17:42:13 agrimm: I don't believe we support it at the moment but I'm happy to look into adding it to Oz.
17:42:14 so I think the failing builds at this point are still due to releng's general compose issues
17:42:40 imcleod if you're bored, making those screenshots be PNG wouldn't hurt. :)
17:43:03 mattdm: OK. And I agree there's manual analysis needed at this point to determine where the failure is actually happening.
17:43:14 imcleod, that would be extremely useful, since I am pointing at install URLs that I don't control, so I can't put it in the tree
17:43:59 mattdm, Oz actually supports PNG output ... no idea why we use PPM. wonder if rel-eng can change that for us?
17:44:14 imcleod, I'm looking at code right now. :)
17:44:22 agrimm: I'm not entirely familiar with the function of update.img so I'll likely seek some offline discussion with you.
17:44:25 agrimm: Ahh. Cool!
17:44:56 imcleod, http://fedoraproject.org/wiki/Anaconda/Updates
17:45:20 agrimm: There's also the ability to source the initrd, kernel, and stage2 from one location but the root install tree from another, via the "stage2=" kernel command line.
17:45:21 meh rawhide still asking for sudo password
17:45:24 basically allows you to override Python files (and other stuff)
17:45:28 gholms do you know what's going on there?
17:45:36 agrimm: Though Oz doesn't support "stage2=" either.
17:45:46 * mattdm will boot up new one with userdata for direct root access to look
17:46:09 I haven't had a chance to test it this week. :/
17:47:20 gholms Imma do that now and I'll tell you what I see.
17:48:19 hi all - poking my head in a few minutes before my flight.
17:48:24 hi jzb!
17:48:28 mattdm: howdy!
17:49:16 * gholms waves
17:49:37 * jzb waves to gholms
17:49:53 imcleod, so here's where it got removed: https://git.fedorahosted.org/cgit/anaconda.git/commit/?h=f21-branch&id=eb05ade5787f70b8ddef3659fbdd4cd6d6ff655f
17:51:16 so this would appear to be a blivet bug
17:54:49 * agrimm is on a long tangent now
17:55:02 mattdm, anything else to talk about today?
17:55:17 agrimm: as noted at the beginning, I don't know. :)
17:55:43 Ooh -- Base WG is interested in taking the Docker Base image, since apparently Server and others also want to use that
17:55:50 and, I guess, both contain the word Base :)
17:56:17 this sounds good to me as long as someone is doing it :)
17:56:53 but I guess that's all for the meeting. more chatting in #fedora-cloud as needed :)
17:56:55 #endmeeting
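
On the update.img question (17:38): per the Anaconda/Updates wiki page imcleod was pointed at, an updates image is passed on the installer's kernel command line, so Oz support would presumably amount to appending options along these lines (the URLs are placeholders):

    # anaconda boot options -- illustrative only
    inst.updates=https://example.com/fixes/updates.img     # override installer files at boot
    inst.stage2=https://example.com/f21/x86_64/os/         # the "stage2=" trick imcleod mentions:
                                                           # fetch the installer runtime here while
                                                           # the package tree lives elsewhere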
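The "userdata for direct root access" mattdm mentions (17:45:46) is presumably a cloud-config snippet along these lines -- a minimal sketch, assuming the image's cloud-init honors these standard keys; the ssh key is a placeholder:

    #cloud-config
    disable_root: false          # allow direct root login over ssh
    ssh_authorized_keys:
      - ssh-rsa AAAAB3...placeholder-key... someone@example.com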
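Finally, on the PPM screenshots (17:42-17:44): until Oz or rel-eng switch the output format, the files view fine after a one-line ImageMagick conversion (filenames are placeholders):

    $ convert screenshot.ppm screenshot.png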