17:00:08 #startmeeting fedora_cloud_wg
17:00:08 Meeting started Wed Jul 20 17:00:08 2016 UTC. The chair is jbrooks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:08 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:00:08 The meeting name has been set to 'fedora_cloud_wg'
17:00:17 #topic Roll Call
17:00:29 !here
17:00:32 .hello scollier
17:00:33 scollier: scollier 'Scott Collier'
17:00:41 .hello bowlofeggs
17:00:42 bowlofeggs: bowlofeggs 'Randy Barlow'
17:00:46 .hello trishnag
17:00:47 trishnag: trishnag 'Trishna Guha'
17:00:49 .hello mmicene
17:00:54 nzwulfin: mmicene 'Matt Micene'
17:00:57 .hello jberkus
17:00:58 jberkus: jberkus 'Josh Berkus'
17:01:24 .hellomynameis kushal
17:01:26 kushal: kushal 'Kushal Das'
17:01:28 .hellomynameis dustymabe
17:01:31 dustymabe: dustymabe 'Dusty Mabe'
17:01:50 * kushal has a few questions related to Flock etc, maybe at first or for open floor.
17:02:15 .hello tflink
17:02:16 tflink: tflink 'Tim Flink'
17:02:20 #chair bowlofeggs scollier trishnag nzwulfin jberkus kushal dustymabe tflink
17:02:20 Current chairs: bowlofeggs dustymabe jberkus jbrooks kushal nzwulfin scollier tflink trishnag
17:02:39 #topic discuss Post-GA Cadence
17:02:49 Last time we resolved to talk about this first
17:03:01 https://fedorahosted.org/cloud/ticket/155
17:03:50 2WA atomic is good, but we are still missing info to release the base image.
17:03:59 .hello sayanchowdhury
17:04:00 sayan: sayanchowdhury 'Sayan Chowdhury'
17:04:07 #chair sayan
17:04:07 Current chairs: bowlofeggs dustymabe jberkus jbrooks kushal nzwulfin sayan scollier tflink trishnag
17:04:11 jbrooks: we still need maxamillion for that
17:04:26 jberkus, I just pinged him in fedora-cloud
17:04:48 doesn't he have another meeting in this time slot?
17:05:12 he's coming
17:05:15 .hello maxamillion
17:05:16 maxamillion: maxamillion 'Adam Miller'
17:05:19 sorry all
17:05:19 #chair maxamillion
17:05:19 Current chairs: bowlofeggs dustymabe jberkus jbrooks kushal maxamillion nzwulfin sayan scollier tflink trishnag
17:05:30 I was sitting in #fedora-meeting waiting for the meeting to kick off ... which of course wasn't going to
17:05:37 haha
17:05:42 maxamillion, we're kicking off w/ https://fedorahosted.org/cloud/ticket/155
17:06:07 cool cool
17:06:43 maxamillion, :)
17:06:47 I see the only question mark at https://fedoraproject.org/wiki/Fedora_Program_Management/Updating_deliverables/Fedora24#updating_deliverables is updating the atomic repos
17:07:11 maxamillion, jbrooks we also have the other ticket to release updated base image.
17:07:17 Which sadly we could not before.
17:08:15 kushal, do you have a link for that one?
17:09:31 jbrooks, https://fedorahosted.org/cloud/ticket/138
17:09:32 https://fedorahosted.org/cloud/ticket/138
17:10:26 OK, so, on 155, what's blocking us / what needs discussing?
17:11:03 It seems to make sense to me to update the testing repo daily, and the 2-week one each two weeks
17:11:42 jbrooks: seems reasonable
17:12:02 jbrooks, sounds good.
17:12:07 maxamillion, what say you?
17:12:30 makes sense
17:12:34 jbrooks: do we even have to limit the testing repo at all
17:12:41 just have it update whenever a compose is done?
17:12:44 like is done now?
17:12:57 Oh, is that what we do now? That's better
17:13:08 i think.. it basically updates all the time
17:13:22 dustymabe, OK, so, right now, is that how the stable repo works, too?
17:13:30 yeah
17:13:35 I believe so
17:13:43 basically one pulls from updates-testing
17:13:46 and one pulls from updates
17:13:54 both updated continuously
17:14:17 OK, what's your take on continuing continuous for the stable repo?
17:14:36 let me go look at the ticket before I make an ass of myself
17:14:47 Cool
17:15:38 ok here is what I think
17:15:42 which doesn't matter much
17:15:59 there is a stable which updates every two weeks along with the release of atomic host
17:16:21 there is a "continuous" which updates all the time, just like the stable branch does now
17:16:37 there is a "testing" which is the same as testing today - updates all the time, etc.
17:16:48 "continuing continuous"? ... Fedora doesn't have a continuous yet
17:16:53 see also: https://fedorahosted.org/rel-eng/ticket/6313
17:17:04 maxamillion, is the current stable repo not continuous?
17:17:06 yeah. continuous might be the wrong word
17:17:18 jbrooks: not the way CentOS continuous is
17:17:19 and that word means a lot of different things to different people
17:17:22 Continuous from the stable repos
17:17:24 jbrooks: it gets updated nightly
17:17:32 got it
17:17:38 so we may use a word like nightly or daily
17:17:43 or wait
17:17:44 maxamillion, is that the same for testing?
17:17:45 instead of continuous
17:17:59 the ostree gets updated constantly from bodhi actually ... I think
17:18:05 #chair walters
17:18:05 Current chairs: bowlofeggs dustymabe jberkus jbrooks kushal maxamillion nzwulfin sayan scollier tflink trishnag walters
17:18:53 we need this on a web page somewhere ... once we figure out what the truth is
17:19:04 dustymabe, OK, so you're saying, we add a new one, and the new one looks like it'd be the two-week one
17:19:12 And have that become the default
17:19:28 jbrooks: right
17:20:07 so users could choose to test latest stable if they want to - and that would probably be a good way for us to try out the next two week release before it gets released
17:20:15 The other thing to consider is when / whether / how to make exceptions for critical security issues
17:20:57 We don't have to let that block us from adding this third stream, though
17:21:26 maxamillion, What's your take on adding this two-week-only branch (is branch the right term)
17:21:33 Is that doable
17:21:34 jbrooks: I'll be ready to tackle critical security issues once we have a track record of getting out the regular releases without a lot of trauma
17:21:47 jberkus, good point
17:22:07 jbrooks: I honestly don't know, this is off in the weeds of ostree creation that lmacken and dgilmore have done in the past, I'm not well versed in it enough to properly comment
17:22:36 maxamillion, OK, I'll take an action to update the ticket and run this down some more before next mtg
17:22:41 so I don't know how difficult it is to make the tooling around it
17:22:44 jbrooks: +1
17:22:53 If there's nothing else for now on this, we can move ahead
17:23:00 but at the highest level it is as simple as updating the file in the repo that points to what commit
17:23:22 #action jbrooks to update / run down issues around two-week repo
17:24:29 jbrooks: can you also capture the discussion we had here
17:24:35 in one of the tickets?
17:24:38 dustymabe, will do
17:25:01 #topic issue 136 https://fedorahosted.org/cloud/ticket/136
17:25:25 A few of us had an action item to look further into this, the vagrant fixups, I didn't look at it myself
17:25:42 Was it decided that this was solved in the blog post thread?
17:26:20 dustymabe, did you look at this any more?
17:28:12 #topic other open items
17:28:31 jbrooks: I did not - I thought imcleod was going to
17:28:42 i'm not sure how much it makes sense to talk about unless there's someone with both commit access, time and motivation
17:29:02 jberkus, you have a few in here https://fedorahosted.org/cloud/report/9 -- any updates on #125, #153, #154
17:29:06 walters, indeed
17:29:11 for reference i have all of those in centos CI so have been working on some of this there
17:29:26 i don't have commit access to fedora
17:29:34 walters, which repos?
17:29:39 walters, you're talking about the vagrant fixups?
17:29:49 no sorry, was talking about the release cadence ticket
17:30:17 walters, OK, right, we'll get it straightened
17:30:37 lemme update those tickets
17:31:05 fwiw the reason i'm working in centos devel is that we really need testing on e.g. python2 before doing any downstream releases etc., and i see that as a gap
17:31:27 there is a larger picture issue that the centos base release cycle is better in general for servers
17:31:50 walters: i.e. less moving parts
17:32:14 but if we find someone who can spend most of their time on this in fedora happy to help
17:33:31 jberkus, This kickstart docs ticket, #156, can you own it?
17:33:58 jbrooks: sure.
17:34:27 jbrooks: I was hoping to doc an all-in-one-USB-key install, but I'm stonewalled by my lack of knowledge about EFIboot
17:35:32 The only other unowned ticket in the list is #147, about overwriting the two-week dl location: https://fedorahosted.org/cloud/ticket/147
17:36:00 is it unowned? I think maxamillion has that one
17:36:14 dustymabe, You reported that one, what do we need to do to put this one to bed -- ah, there isn't a listed owner
17:37:03 Do we know an EFIboot guru who can help jberkus?
17:37:13 dustymabe: yeah, it's "on the list" but if someone has time to look at the code it'd be welcomed ... because of OSBS getting out the door later than it was supposed to I'm kind of knee deep playing catch up on other things that were supposed to be done by now
17:37:29 maxamillion: understood
17:37:44 jbrooks: did you ever figure out a way to fix iptables using ansible?
17:38:10 jbrooks: if not, I really need to open a bug somewhere
17:38:21 jberkus, I haven't looked at it since before summit -- yes, it seems like a bug
17:38:49 We should mark it as open floor.
17:38:50 maxamillion, I'll take a close look at #147 and see if I can help
17:38:57 #topic open floor
17:38:58 Okay, who all are coming to flock from here?
17:39:02 yo
17:39:14 * kushal and sayan will be there.
17:39:16 kushal: I'll be there
17:39:33 Who else?
17:39:38 maxamillion, jberkus jbrooks ?
17:39:50 kushal, not me
17:39:51 jzb, We should have the cloud WG meeting this time too.
17:39:59 jbrooks, Okay, you can join in over hangout :)
17:40:00 imcleod will be there
17:40:02 :)
17:40:03 and I think scollier
17:40:04 * walters can't make it (new child process in september)
17:40:10 awesome
17:40:11 walters: no!!!
17:40:14 walters, Ah, congratulations :)
17:40:18 walters: you will be missed!
17:40:22 kushal, thanks =)
17:40:29 I'll be at Flock
17:40:32 congrats
17:40:34 * dustymabe was hoping to elbow bump with walters some more
17:40:39 :)
17:40:41 I'll be at flock
17:40:42 walters: congratulations!!!!
17:41:02 dustymabe, yup, i'm booked.
17:41:03 walters: you will definitely be missed indeed, but what a great reason to skip :D :D :D
17:41:17 I need to talk on https://fedorahosted.org/cloud/ticket/99
17:41:36 Do we delete the released AMIs?
17:41:53 sayan: I would like us to. It's way confusing right now to pick the right AMI for a new instance
17:41:55 sayan, NO
17:42:09 kushal: ?
17:42:18 sayan, We do not delete anything formally GA released.
17:42:34 i'll be at flock
17:42:44 jberkus, After we release something, we are not supposed to delete any of those.
17:42:48 bowlofeggs, :)
17:42:56 kushal: well, we have a serious usability problem then
17:43:07 kushal: try creating a new Atomic EC2 instance
17:43:12 ☺
17:43:13 and try to pick the latest image
17:43:20 you'll see the problem
17:43:28 jberkus, okay, I will try that tomorrow morning.
17:43:42 dustymabe: have you convinced maxamillion to be your sponsor yet? i think he said he'd do the rest if i reviewed your vagrant-sshfs package, and that's done ☺
17:43:43 jberkus, problem is someone may be depending on the released AMI
17:43:51 jberkus, it seems pretty easy from https://getfedora.org/en/cloud/download/atomic.html
17:43:58 kushal: also, given that we do releases every 2 weeks, we can't keep them forever
17:43:59 bowlofeggs: yeah, the problem is me.
17:44:01 though i bet you have to do some reviews too
17:44:03 I have to review other packages
17:44:03 are those not the latest?
17:44:06 jbrooks: try it from inside AWS
17:44:23 jberkus, I always start w/ an ami-ID
17:44:31 wha?
17:44:31 and search for that, and get it
17:44:33 jberkus, we will have to ask FESCo on this then.
17:45:06 jberkus: I'd prefer to not have them deleted
17:45:08 but that is me
17:45:13 AWS needs something like AMI streams
17:45:20 so there can be versions of images
17:45:32 or something like that
17:46:16 actually, I have an item on ticket 153
17:46:18 or related
17:47:23 jberkus, Go ahead
17:47:26 dustymabe: imagine 2 years down the line, when we have another 80 images
17:47:38 so the CNCF has a cluster for member projects
17:47:56 we've been looking for hardware for FedAtomOpenShift testing
17:48:16 dustymabe: our AWS bill is ... impressive right now :-)
17:49:05 jzb: yeah. thats another part of the problem
17:49:06 or FOSP
17:49:22 jzb, correct, but we always had the idea not to delete the released ones, but delete all the nightlies and many other snapshots etc.
17:49:30 that seems like a good way to have hosting for that
17:49:31 We should reevaluate the whole story there.
17:49:32 i think after a release has passed, we could then delete all but first and last
17:49:41 and then for the current release, we keep all
17:49:54 dustymabe, last time people release screamed at us.
17:49:57 jberkus +1
17:50:19 kushal: "people" or "person"?
17:50:20 kushal: yeah but you deleted something you shouldn't have
17:50:26 dustymabe: that would make things considerably better, although I still think we should look at changing our naming scheme somehow, or at least documenting it
17:50:29 oops
17:50:30 I just remember there was one person who complained.
17:50:33 an actual real release of an image
17:50:37 dustymabe, last time people really screamed at us.
17:50:50 jzb, One person on mail.
17:50:52 I knew of one complaint
17:50:58 Then gholms too
17:51:52 CNCF is still looking at how to set up access for non-employees of member companies; it might not be possible
17:52:26 so, first question: assuming I get a bunch of servers from CNCF, is there anything we'd want to use them for *other than* FOSP?
17:52:45 On a side note, we have an opening for an intern to help us with atomic/cloud docs+examples+testing here in RH Pune.
17:52:52 jberkus, FOSP seems like the most on-message use
17:53:01 jberkus: so, a thoguht
17:53:02 er, thought
17:53:04 We are in the process of interviewing for the same.
17:53:14 jberkus: the idea is to do testing of the entire stack
17:53:30 well the complaint was because we removed a 21 AMI, a release one I guess
17:53:35 there's no reason that non-RHT folks would need direct access to servers if we were pushing changes via git/Ansible, or whatnot, right?
17:53:46 jzb: true
17:53:51 IOW we could architect this so non-RHT folks can make changes
17:53:56 they just can't touch the servers directly
17:53:58 perhaps
17:54:02 sayan: right but that was "the" release AMI for 21
17:54:04 jberkus, btw, I want to sync up with you about FOSP.
17:54:05 for that region
17:54:10 there wasn't another option
17:54:16 kushal: sure. how much later will you be up?
17:54:22 I think if we announce we are going to be deleting all but first/last for a release
17:54:25 I need food, see you folks later
17:54:27 then that would be ok
17:54:31 maxamillion: +1
17:54:33 jberkus, Not today, maybe on Friday or next week.
17:54:33 jbrooks: thanks for hosting!
17:54:35 * maxamillion &
17:54:38 jberkus, I will drop you a mail.
17:54:42 ok
17:54:56 All right, are we all set for this week?
17:55:01 Hi
17:55:02 I just would like to check whether there are any plans to do changes in PRD ?
17:55:04 or whether the WG is fine with the current version of it ?
17:55:16 jkurik, we will meet during Flock
17:55:19 jbrooks: I think so
17:55:26 and will get a chance to discuss it.
17:55:32 dustymabe: yes, but 21 was EOL around that time. Now, the thing is 22 is EOL so should we remove the AMIs?
17:55:36 second question is who else wants to be involved in setting this up (FOSP/CNCF)
17:55:48 jberkus, count me in.
17:55:48 particularly, I could use some help on CI/automation
17:55:49 kushal: ok, thanks for letting me know
17:56:21 jberkus, as it is a cloud image, autocloud can help in CI part.
17:57:09 #endmeeting