17:00:23 #startmeeting Fedora Atomic WG
17:00:23 Meeting started Wed Feb 15 17:00:23 2017 UTC. The chair is roshi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:23 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:00:23 The meeting name has been set to 'fedora_atomic_wg'
17:00:39 #topic Roll Call
17:00:41 .hello bowlofeggs
17:00:42 bowlofeggs: bowlofeggs 'Randy Barlow'
17:00:43 .hello trishnag
17:00:45 .hello jberkus
17:00:46 more like bowlofdocker
17:00:46 trishnag: trishnag 'Trishna Guha'
17:00:49 jberkus: jberkus 'Josh Berkus'
17:00:52 .hello roshi
17:00:55 roshi: roshi 'Mike Ruckman'
17:00:59 hi everyone
17:01:04 bowlofeggs: heh!
17:01:07 ☺
17:01:09 welcome marc84 :)
17:01:37 marc84: hiya, welcome
17:01:42 .hello dustymabe
17:01:43 dustymabe: dustymabe 'Dusty Mabe'
17:01:45 .fas jasonbrooks
17:01:47 jbrooks: jasonbrooks 'Jason Brooks'
17:01:55 #chair trishnag jberkus dustymabe bowlofeggs jbrooks
17:01:55 Current chairs: bowlofeggs dustymabe jberkus jbrooks roshi trishnag
17:01:59 * tflink is lurking
17:02:24 .hello maxamillion
17:02:25 maxamillion: maxamillion 'Adam Miller'
17:02:26 #chair tflink maxamillion
17:02:26 Current chairs: bowlofeggs dustymabe jberkus jbrooks maxamillion roshi tflink trishnag
17:02:39 have a chair tflink - I find it easier to lurk while seated :p
17:02:52 #unchair tflink
17:02:52 Current chairs: bowlofeggs dustymabe jberkus jbrooks maxamillion roshi trishnag
17:02:56 no
17:02:58 :)
17:03:08 #topic Previous Meeting Follow-up
17:03:14 (fine, be that way :p )
17:03:34 #info according to the logs, we have no previous items to follow up on
17:03:39 #link https://meetbot-raw.fedoraproject.org/teams/fedora_atomic_wg/fedora_atomic_wg.2017-02-08-17.00.txt
17:04:05 roshi: want to give us an overview of how the VFAD went?
17:04:08 tflink prefers to stand
17:04:19 it makes me feel taller
17:04:26 ☺
17:04:36 Sure
17:05:05 #topic recap of VFAD
17:05:09 we had a VFAD last week to work on moving the Fedora-Dockerfiles from a github repo to pagure
17:05:49 several of us pestered maxamillion with questions and we assigned maintainers to different Dockerfiles
17:05:56 roshi++ and maxamillion++ and gholms++
17:05:57 dustymabe: Karma for roshi changed to 3 (for the f25 release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:06:00 dustymabe: Karma for gholms changed to 1 (for the f25 release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:06:08 several BZ review tickets were started
17:06:11 who else was there? jberkus for a little while?
17:06:17 work is still ongoing on several fronts
17:06:23 \o/
17:06:26 1) Updating our docs and practices
17:06:28 nope, it was during FOSDEM
17:06:33 there's a lot more to be done, but it wasn't bad for a first run at the VFAD
17:06:39 2) Getting all containers to review
17:06:41 thank you guys for taking initiative on that, I haven't been able to dig into the containerization efforts yet
17:07:06 * bowlofeggs containerizes dusty, but isn't sure if he should be a privileged container or not
17:07:12 :)
17:07:22 3) Pruning our list of Dockerfiles that we actually want/need to have in the build service
17:07:22 :)
17:07:28 one thing that I've noticed is that since we started this "effort" a lot of package maintainers have come forward
17:07:29 roshi++
17:07:29 maxamillion: Karma for roshi changed to 4 (for the f25 release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:07:33 maxamillion: does that about cover it?
17:07:35 to want to containerize their "thing"
17:07:43 roshi: yessir
17:07:46 the more we foster that the better
17:07:50 that's true dustymabe
17:08:02 dustymabe: +1
17:08:11 hhorak stepped up to take over some of the containers he's already been working on
17:08:15 reviews are ongoing
17:08:45 next time we need to advertise it a bit more in advance
17:08:51 dustymabe: I also hope to do a commops blog post at some point about a lot of this stuff now that we're getting it more formalized with better guidelines and such, I think we're better equipped to handle new users than before
17:09:06 suggested topic for 2nd VFAD: FDLIBS container review
17:09:20 a whole vfad for one container?
17:09:26 jberkus: that was the topic of the first one
17:09:39 roshi: FDLIBS = Fedora Docker Layered Image Build Service
17:09:45 lol
17:09:45 oh
17:09:51 * roshi knows things too
17:09:52 is there a search I can do for all of the pending, proposed containers?
17:09:53 :p
17:09:54 though I prefer FLIBS because we're not just targeting Docker, basically anything OCI runtime
17:10:14 maxamillion: well, just change the name then ...
17:10:25 jberkus: https://pagure.io/atomic-wg/roadmap?status=Open&milestone=2017-02-09+VFAD
17:10:27 FLIBS is also easier to say
17:10:51 #info to look at the work done, look here: https://pagure.io/atomic-wg/roadmap?status=Open&milestone=2017-02-09+VFAD
17:11:02 jberkus: I have in a lot of places, if you see Docker in the name anywhere, let me know ... I probably missed it
17:11:15 thanks
17:11:20 going to move on to the items we have for the meeting (there are 10)
17:11:45 starting with the PRD
17:11:54 #topic Atomic PRD work
17:11:56 +1
17:11:57 #link https://pagure.io/atomic-wg/issue/170
17:12:10 I added some comments to the PRD
17:12:21 in the last comment, I added some items we needed to look at
17:12:22 I do agree the thing is way too long (at least for my liking)
17:12:30 on the wiki dustymabe ?
17:12:38 roshi: no, in the etherpad
17:12:42 or whatever that thing is
17:12:53 ah
17:12:55 the draft that jberkus made
17:13:12 it's an etherpad
17:13:15 that's where I pulled the content from, then made some edits to the wiki syntax and threw it on there
17:13:29 I'd like to pretty much cut everything after the use cases
17:13:35 are we not all using the same source
17:13:36 #link https://fedoraproject.org/wiki/User:Roshi/QA/Atomic_PRD
17:13:37 https://public.etherpad-mozilla.org/p/fedora_cloud_PRD
17:13:39 ?
17:14:08 the goal, as I saw it, was for me to take the info out of the etherpad and put it in the wiki, and from there we make edits and work on it
17:14:19 we can keep discussion of it on the Talk page for it
17:14:19 it was?
17:14:23 aha, ok
17:14:32 before we move it to the actual cloud namespace on the wiki
17:14:47 I don't have objections to using the wiki
17:14:52 I figured it made sense to have it on the wiki since that's where it will live
17:15:11 roshi: could it live in our pagure?
17:15:15 for atomic-wg?
17:15:25 it'll end up on the wiki, but there's no reason we can't work on it somewhere else
17:15:40 it could be copied there, but every other thing for the other WGs is in the wiki
17:15:58 ok, jberkus roshi, I made some comments in the etherpad, can you look at those and respond to the questions?
17:16:10 and I can try to get the changes back into wherever we put this thing
17:16:13 the wiki is/should be the place for information on the WG
17:16:38 roshi: that's fine, it's just that nobody else knew it was moving to the wiki so we were still editing the etherpad
17:16:46 roshi: yeah, i think that used to be true. I wouldn't mind seeing us move to storing things in the atomic-wg pagure instance though
17:17:38 ok so let's decide where to update things while we work on the drafts?
17:17:45 etherpad or wiki?
17:18:14 well, the only reason to keep the etherpad version is if we think several people will be editing at once
17:18:15 etherpad allows multiple people working at once
17:18:32 otherwise, since we need it wiki-formatted in the end, it makes sense to continue with the wiki version
17:18:35 the question is: how close do we think we are ?
17:18:41 jberkus: right
17:18:49 if we think we are pretty close then let's stick with what roshi already did
17:18:50 very close, I'd say
17:18:53 ok then
17:18:55 that settles it
17:19:08 Well, it depends on how important your comments are, I guess
17:19:10 heh
17:19:11 really, the PRD needs 2 things
17:19:17 dustymabe: the wiki is kinda the focal point for people looking for information on Fedora, all the other WGs use it for their PRDs, so I'd like to see the information stay there
17:19:20 1) we need to re-work the use cases to reflect Atomic
17:19:30 2) We need to create new requirements
17:19:48 jberkus: ok
17:19:58 3) the issues I raised in the last pagure comment
17:20:05 should we make a VFAD tomorrow to just "do it"
17:20:24 I'd like some more lead time on VFADs in the future, so I can better schedule my time
17:20:43 roshi: indeed.
17:20:50 dustymabe: Thursdays are terrible for me
17:20:55 the rest of this week is pretty packed for me
17:20:56 that's my meeting hell day
17:21:04 maybe we should just sit down after this meeting and knock it out?
17:21:13 if anyone can?
17:21:26 Yes
17:21:26 how about this:
17:21:42 who wants to take a first pass at 1) re-work use cases to reflect Atomic?
17:21:54 and so on for 2 & 3
17:22:13 at least leave comments on the issue for 3
17:22:16 it might take longer to "rework use cases" than it would to just throw away the existing use cases and create new ones
17:22:24 that's fine too
17:22:25 anyway, let's do this after the meeting
17:22:42 dustymabe: the use cases are mostly still valid
17:22:44 for this context, re-work means "edit that section of the wiki to make sense" :p
17:22:48 k
17:22:52 OK, let's stick w/ the etherpad for now then
17:22:52 they just need some container content
17:22:53 make sense for atomic, that is
17:23:07 the requirements get thrown away and replaced
17:23:10 it'd really be easier to edit it right in the wiki
17:23:17 IMO
17:23:19 for requirements, we need to hash them out before we start trying to write text
17:23:37 ok, next topic
17:24:40 #topic Do we need/want an SSH container?
17:24:43 #link https://pagure.io/atomic-wg/issue/204
17:25:04 I don't see the point of having this, and maxamillion and I were kinda on the fence about what the point was
17:25:17 roshi: I don't see the point either
17:25:29 I was wondering why it existed, I assumed just as a demo
17:25:29 vote to stop having it?
17:25:29 I'm going to vote "no" ... I don't think we need or want an ssh container
17:25:30 +1
17:25:38 +1
17:25:45 kill it with fire
17:25:57 +1
17:26:01 proposed #agreed - We will no longer have or maintain an SSH container.
17:26:17 +1, until someone comes and asks us for it with a valid use case
17:26:20 #agreed - We will no longer have or maintain an SSH container.
17:26:38 #topic Syncing with Dockerhub
17:26:40 #link https://pagure.io/atomic-wg/issue/202
17:27:02 oh, I thought we covered that last week
17:27:05 i think we mostly already agreed in the ticket
17:27:15 seems like there's consensus to keep the old namespace
17:27:20 maxamillion: want to comment and close it?
17:27:22 who's going to drive this one forward?
17:27:25 wfm
17:27:26 dustymabe: please do
17:27:30 dustymabe: +1
17:27:32 roshi: I am
17:27:36 roshi: that's on my TODO list
17:27:48 sounds good
17:28:22 #action maxamillion to close pagure issue 202 and complete the sync
17:28:50 #topic Interim container releases
17:28:53 #link https://pagure.io/atomic-wg/issue/200
17:29:50 how long do the builds take?
17:30:45 roshi: depends, normally 5 minutes or less
17:31:10 roshi: also, for a point of reference ... there are currently only 4 container images ready for release so in the short-term this isn't going to be a big issue
17:31:12 this is to rebuild all the packages included in the container?
17:31:22 roshi: no, not the packages, just the container
17:31:26 ah
17:31:36 +1 then
17:31:37 roshi: this one was also agreed upon last week ... I thought it was taken off the meeting roster
17:31:53 I'm actually going to start writing the code for this today after lunch
17:31:56 want to comment and close?
17:32:01 will do
17:32:12 we need to have the #agreed in the logs so I see that things were agreed :p
17:32:25 * roshi needs more memory :p
17:32:45 #action maxamillion to comment and close this ticket as it had already been agreed on
17:32:58 #agreed to rebuild the containers, as noted in the ticket
17:33:12 +1
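
(For readers following along: the "just the container" rebuild discussed above amounts to resubmitting the image build to the layered image build service without rebuilding any RPMs. A minimal sketch, assuming the dist-git namespace, repo name, and koji target shown here; none of these names are confirmed in this meeting.)

    # Rebuild only the container image, not its packages. The repo name and
    # build target below are illustrative assumptions.
    fedpkg clone container/cockpit && cd cockpit
    fedpkg container-build --target f25-container-candidate
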
17:33:16 don't need to do 180
17:33:26 or 201
17:33:44 which others do we need to do?
17:33:50 fedora-motd?
17:34:06 anyone have updates for the on-hold tickets
17:34:06 ?
17:34:34 can we just kill the IRC bot ticket?
17:34:35 not I
17:34:58 if nobody is working on it, that means no one is going to work on it
17:35:13 I just now saw the IRC bot ticket
17:35:47 lemme take a look at this sometime
17:35:55 I think we can get this with FMN
17:36:01 ok
17:36:29 fedora-motd is something else that is not really on a high priority list, but it is something we'd like to do, just needs some love that we don't have to give right now
17:36:46 what's next?
17:37:15 jberkus: this is an on hold ticket
17:37:17 https://public.etherpad-mozilla.org/p/fedora_cloud_PRD
17:37:19 sigh
17:37:21 https://pagure.io/atomic-wg/issue/154
17:37:36 download page clearer and openshift playground are the last two I see
17:37:37 not clear on why it's on hold
17:37:47 given that the download page has been completely revamped
17:37:52 OK to mark it complete?
17:37:58 jberkus: yes please
17:38:35 sweet
17:38:38 #topic Open Floor
17:38:48 I have one important item
17:38:54 go for it
17:38:55 if you all don't mind me going first
17:39:25 #topic "support for N-1 Fedora in Atomic Land"
17:39:57 so it has been my understanding for a while (although may not be a valid understanding) that we do not necessarily support F24 Atomic Host once F25 is released
17:40:20 maybe it is only my understanding
17:40:32 but I think we need to clarify our stance on this
17:40:41 and also try to communicate that with our community
17:41:05 I know this can also lead into discussion on whether we should have any "number based" versions of Atomic Host
17:41:10 as it was released under the traditional fedora release ideology, I think we have to support it until F24 EOL
17:41:33 point
17:41:53 i'd rather not have the discussion about "rolling atomic release" right now
17:42:00 and with 2wk releases, do those have a support window?
17:42:03 so let's just try to limit it to our current policy
17:42:15 dustymabe: my perspective is that, until we can shift to "continuous upgrade" (which we can't until at least F27), we need to support one release back
17:42:30 +1 jberkus, mine as well
17:42:39 jberkus: but the problem is that we aren't even doing "releases" for f24 atomic
17:42:52 dustymabe: well, then let's define "support"
17:42:55 stay consistent with the rest of Fedora until we're actually not doing what the rest of Fedora is doing
17:43:10 from my perspective, "support" means "we keep updating the ostree"
17:43:12 roshi: we're already not doing what the rest of Fedora is doing
17:43:12 If there's going to be only one Fedora Atomic, there should be only one
17:43:40 support == putting out new releases
17:43:42 typically it means critpath updates
17:44:01 I seriously don't get what's so weird/disruptive about rolling right into the next release
17:44:09 it's what ostree is all about
17:44:29 jbrooks: I don't think we can do that until we're over the hump of removing the Kube binaries
17:44:31 if it wasn't clearly delineated at F24 release time that this Atomic release wasn't going to be the same, then we've kinda painted ourselves into a corner...
17:44:47 roshi: i'm talking about the future
17:44:48 jberkus, that hump exists either way
17:45:13 we could include kube forever, or remove it whenever, totally separate from what version of fedora we're based on
17:45:28 is there a hurdle to just doing os-tree upgrade on F24 and ending up on F25 at GA?
17:45:37 nope
17:45:39 no
17:45:41 that's kinda what I'd expect from an Atomic host, aiui
17:45:42 just... feels
17:45:44 or something
17:45:49 jbrooks: not really. if we release F26 with no Kube, but we keep updating the F25 OSTree, then users of F25 have 6 months to plan their upgrade to containerized kube
17:45:53 well.. let me qualify
17:46:03 currently people "rebase" to 25 from 24
17:46:17 there is a technical limitation of why people can't "upgrade" to 25
17:46:19 from 24
17:46:25 we use different ostree repos
17:46:50 dustymabe: that's fixable if we want to fix it
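
(A minimal sketch of the rebase-versus-upgrade distinction described above; the remote and ref names are assumptions for illustration, not values confirmed in this meeting.)

    # Staying on the same release stream is a plain upgrade:
    sudo rpm-ostree upgrade

    # Moving from the F24 tree to the F25 tree means pointing at a different
    # ref (and, today, a different repo), i.e. a rebase:
    sudo ostree remote list
    sudo rpm-ostree rebase fedora-atomic-25:fedora-atomic/25/x86_64/docker-host
    sudo systemctl reboot
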
17:47:10 anyway - let's try to boil down our thoughts into a couple of statements
17:47:12 the hard part is: what happens when we release a feature change which breaks stuff?
17:47:13 I want to propose that we eventually decouple Atomic from traditional fedora release numbers
17:47:19 jberkus, this would be in a world where we support two releases at a time, which we don't
17:47:31 containerized kube isn't such a big deal
17:47:52 maxamillion: I feel like that could be a whole VFAD in itself
17:47:54 I'd like Atomic to have its own build target in koji so that packages aren't bound to the same release cadence as Fedora releases and the release version of the whole darn thing just be date based ... so 2017.02.15.whatever
17:47:56 it seems to me that we need to do os-tree updates for F24 until F26 ships
17:47:58 jbrooks: moving to it when you have a production installation using non-containerized kube is a big deal
17:48:04 dustymabe: oh no, that's like months' worth of work
17:48:12 dustymabe: we could have a whole VFAD to plan it though
17:48:13 and urge people to rebase if they can
17:48:29 how involved is it to just update the F24 trees?
17:48:35 maxamillion: yes, we'd have to answer a lot of questions
17:48:44 I mean, that's just pulling updated packages from the F24 repos, right?
17:48:49 jberkus, is it? I admit I don't have a production setup, but in my tests, built-in to container via upgrade has been trivial
17:49:10 sigh
17:49:20 Anyway, we don't currently support/pay attn to two releases
17:49:22 * dustymabe wanted to limit the scope of this discussion
17:49:29 Does f24 atomic work right now?
17:49:29 dustymabe: yah, good luck with that
17:49:33 and for $future when we just have Atomic with no tie to Fedora numbers, we maintain that last one for the same window and then it becomes a non-issue
17:49:41 ^^ that make enough sense?
17:50:02 dustymabe: +1
17:50:13 roshi: no, we need a whole day to talk about what a non-numbered release of FAH means
17:50:20 right
17:50:21 i'm not going to speculate
17:50:25 that's not what I'm saying
17:50:26 roshi, the thing is, we're not testing f24 right now, at all
17:50:26 ok, PROPOSAL: that, until we go to rolling releases, we commit to keeping the ostree for N-1 updated until the N+1 release.
17:50:36 that's our fault though
17:50:36 jberkus: also, remind me to ask you something about registry web UIs once we're done with this topic :)
17:50:38 we've never tested two
17:50:49 and why I asked how hard it was to update the f24 ostree
17:50:57 jbrooks: oh, point. my personal experience is that f24 Atomic works, but I'm not doing formal testing
17:51:04 jberkus: right
17:51:13 +1 to jberkus since that's exactly what I was also saying :p
17:51:16 i'll call it life-support
17:51:25 OK, so, start testing f24 atomic?
17:51:42 ok, let me make a few statements
17:51:46 And plan to do the same at least through the full life span of f26 at this point?
17:51:52 when we say f24 atomic, we just mean the updated trees with new packages, right?
17:52:33 right now our 'unclarified policy' is that "support" follows the traditional N-1 + N Fedora support structure
17:52:49 this also means that N-1 has been on "life support" because we don't really test it
17:52:57 but it *is* getting updates
17:53:11 I'd like to update our policy for the f26 release
17:53:24 to explicitly not "life-support" the N-1 release
17:53:48 clarification question
17:53:53 sure
17:54:00 so F26 would stop getting support when F27 comes out
17:54:11 but F25 would get support for the N-1 period?
17:54:14 right, that would be the "policy"
17:54:25 roshi: yeah, because we haven't clarified it
17:54:27 +1 for that
17:54:30 we would have to do that
17:54:52 jbrooks: jberkus maxamillion - thoughts?
17:55:20 roshi: seems like we need some minimal tests for N-1
17:55:40 Fedora Two-Week Atomic Host since inception has never catered to N-1
17:55:43 install an F24 host and do an os-tree upgrade, right?
17:55:49 fine with me, better clarified than not. and of course, this seems like the absolute perfect time to transition into just one atomic but I get that you don't want to get into that (though I don't get why ;) )
17:55:50 there's not even a compose to create those images anymore
17:55:58 heh
17:56:11 we can save that for atomic 47
17:56:27 the whole point of 2WA was to move faster and not be bound to the legacy style of releasing the OS
17:56:27 we don't need to rebuild images and test them, just the os-tree packages, right?
17:56:30 jbrooks: :) - i just don't want to talk about it as a side conversation - i'd want to really brainstorm what it would look like
17:56:34 now we want to tie it to N and N-1 support?
17:56:58 maxamillion: jbrooks: schedule the VFAD and let's have a proper discussion on it
17:57:14 +1
17:57:40 all right
17:57:51 ok I'll open a ticket with this information in it if you guys don't mind
17:58:05 sounds great
17:58:05 re: clarifying what support is for N and N-1 releases
17:58:10 k
17:58:10 we're low on time
17:58:14 #topic open floor
17:58:20 ok, one more thing
17:58:25 #action dustymabe to create a ticket with this discussion in it
17:58:32 I want to add more metadata requirements to the FLIBS dockerfiles
17:58:37 like install/run/description
17:58:39 as proposed
17:58:44 ok
17:58:56 however, I've been trying to discuss this on email and not getting responses from the team
17:59:17 is there somewhere we can hash this out and get it done? the longer we wait, the more dockerfiles *we* need to update
17:59:21 email to cloud list?
17:59:29 musta glossed right over those jberkus, don't recall them
17:59:35 dustymabe: that's exactly what I did. AFAICT, nobody reads the cloud list
17:59:51 I think I responded .... I meant to
17:59:59 maxamillion: you did, nobody else did
18:00:02 <3
18:00:22 jberkus: and yes, it's also been my experience that getting anything done on the mailing list is a fool's errand
18:00:29 jberkus: depends on the topic. I've kinda been on the sidelines for the container stuff
18:00:53 which sucks, but I've been trying to get a lot of other stuff done related to the host
18:00:54 the OwnCloud container is an excellent case of how the existing metadata requirements aren't adequate
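
(A sketch of what the proposed install/run/description metadata could look like in a layered-image Dockerfile. The label names come from the discussion above; the image name and command values are invented for illustration and are not the guideline being proposed here.)

    # Append the proposed metadata labels to a Dockerfile (values are
    # placeholders, not agreed requirements):
    cat >> Dockerfile <<'EOF'
    LABEL description="ownCloud file-sharing service on a Fedora base image" \
          install="docker run --rm --privileged -v /:/host IMAGE /usr/libexec/container-install.sh" \
          run="docker run -d -p 80:80 --name NAME IMAGE"
    EOF

    # The labels can then be read back from the built image:
    docker build -t fedora/owncloud .
    docker inspect --format '{{ json .Config.Labels }}' fedora/owncloud
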
18:01:21 right, what I'm saying is that the mailing list is a bad channel, so how can I get this done?
18:01:22 I find it easier to follow discussions on pagure
18:02:11 yeah the owncloud ticket on pagure maybe if it is specific to that container, but the mailing list if it is an issue related to all containers?
18:02:32 dustymabe: I'm not using the mailing list. it's a waste of time
18:03:17 i have one more open item: i created a cloud-sig pagure issue tracker
18:03:28 dustymabe: I have that exact problem, but from the other side ... I'm sidelined on the host but in the weeds with the container stuff
18:03:35 I'll try opening a series of pagure issues, one for each proposed requirement.
18:03:48 that way we at least have clear decisions
18:03:52 https://pagure.io/cloud-sig
18:04:03 jberkus: +1
18:04:57 jberkus: +1
18:05:19 sounds good
18:05:27 * roshi sets the fuse
18:05:30 3...
18:05:34 thanks for coming folks
18:05:36 2...
18:05:38 #action jberkus to open issues around each proposed dockerfile requirement change
18:05:49 roshi: thanks
18:05:56 np
18:05:59 1....
18:06:02 #endmeeting