17:00:34 <jberkus> #startmeeting fedora_atomic_wg
17:00:34 <zodbot> Meeting started Wed Jan 18 17:00:34 2017 UTC.  The chair is jberkus. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:34 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:00:34 <zodbot> The meeting name has been set to 'fedora_atomic_wg'
17:00:38 <jbrooks> .hello jasonbrooks
17:00:41 <zodbot> jbrooks: jasonbrooks 'Jason Brooks' <JBROOKS@REDHAT.COM>
17:00:44 <jberkus> .hello jberkus
17:00:45 <zodbot> jberkus: jberkus 'Josh Berkus' <josh@agliodbs.com>
17:00:50 <trishnag> .hello trishnag
17:00:51 <zodbot> trishnag: trishnag 'Trishna Guha' <trishnaguha17@gmail.com>
17:00:54 <kushal> jberkus, change the topic to roll call :)
17:01:07 <jberkus> #topic roll call
17:01:09 <kushal> .hellomynameis kushal
17:01:10 <zodbot> kushal: kushal 'Kushal Das' <mail@kushaldas.in>
17:01:48 <dustymabe> .hellomynameis dustymabe
17:01:49 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dustymabe@redhat.com>
17:02:12 <bowlofeggs> .hello bowlofwaitingforthemeetingtostart
17:02:13 <zodbot> bowlofeggs: Sorry, but you don't exist
17:02:14 <kushal> number80, hello there :)
17:02:16 <bowlofeggs> .hello bowlofeggs
17:02:16 <zodbot> bowlofeggs: bowlofeggs 'Randy Barlow' <randy@electronsweatshop.com>
17:02:26 <trishnag> .hello trishnag
17:02:27 <zodbot> trishnag: trishnag 'Trishna Guha' <trishnaguha17@gmail.com>
17:02:31 <jberkus> #chair jberkus dustymabe kushal trishnag jasonbrooks bowlofeggs
17:02:31 <zodbot> Current chairs: bowlofeggs dustymabe jasonbrooks jberkus kushal trishnag
17:02:37 <trishnag> bowlofeggs: hehe
17:03:02 <jberkus> #topic action items
17:03:26 <number80> o/
17:03:46 <scollier> .hello scollier
17:03:47 <zodbot> scollier: scollier 'Scott Collier' <emailscottcollier@gmail.com>
17:04:06 <jberkus> 1. OverlayFS partitioning decision: dustymabe, I feel like we've made a decision in favor of having a docker partition, at least for Fedora 26.
17:04:26 <jberkus> is there any more discussion that needs to happen on this?
17:05:14 <dustymabe> jberkus: I think if we are good with having the separate partition then we are good to go
17:05:25 <kushal> jberkus, I think we are okay to go ahead with this.
17:05:38 <dustymabe> we need to "find" the change request that dan's team filed
17:05:42 <dustymabe> and make sure nothing is missing
17:05:44 <jberkus> OK.  we do want to fix the sizing of the root partition, though
17:05:54 <dustymabe> jberkus: is there a separate ticket for that?
17:06:06 <dustymabe> we should probably make sure it is something we don't forget about
17:06:11 <jberkus> no, there isn't, and there probably should be.  Bugzilla?
17:06:22 <dustymabe> jberkus: I would start with atomic-wg ticket
17:06:25 <jberkus> ok
17:06:32 <dustymabe> and then periodically make sure progress is being made on it
17:06:44 <dustymabe> unfortunately just opening a ticket isn't good enough
17:06:48 <jberkus> #action jberkus to file ticket for fixing size of root partition
17:06:53 <jberkus> dustymabe: when is it ever?
17:07:05 <jberkus> also:
17:07:31 <jberkus> we need someone to commit to writing migration docs for Devicemapper-->Overlay-->Back
17:07:52 <jberkus> probably someone on dan's team
17:08:05 <jberkus> #action find someone to write overlay/DM migration docs
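(For context: the separate docker partition discussed above is normally driven by docker-storage-setup reading /etc/sysconfig/docker-storage-setup. The values below are an illustrative sketch only, not something agreed in the meeting.)

```shell
# /etc/sysconfig/docker-storage-setup -- hypothetical overlay config.
# Driver and size values are assumptions for illustration.
STORAGE_DRIVER=overlay2
# Carve the container storage out of remaining free space in the VG,
# leaving the rest for the root partition sizing fix discussed above.
CONTAINER_ROOT_LV_NAME=docker-root-lv
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
CONTAINER_ROOT_LV_SIZE=40%FREE
```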
17:08:48 <jberkus> 2. jbrooks to update compose your own doc
17:08:56 <jberkus> jbrooks: do you remember what this meant?
17:09:20 <jbrooks> jberkus, yes, http://www.projectatomic.io/docs/compose-your-own-tree/ -- can we carry this one over
17:09:23 <kushal> :)
17:09:27 <jbrooks> I haven't updated it yet
17:09:41 <jberkus> ok
17:09:51 <jberkus> #action jbrooks still to update compose your own tree doc
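(For context: the compose-your-own-tree doc jbrooks is updating covers roughly this workflow. Repo path and treefile name below are assumptions for illustration; see the linked doc for the authoritative steps.)

```shell
# Sketch of composing a custom Atomic tree with rpm-ostree.
# Paths and the treefile name are illustrative, not from the meeting.
git clone https://pagure.io/fedora-atomic.git
cd fedora-atomic
mkdir -p /srv/repo
ostree --repo=/srv/repo init --mode=archive-z2
rpm-ostree compose tree --repo=/srv/repo fedora-atomic-docker-host.json
```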
17:10:09 <jberkus> 3. jbrooks dustymabe jberkus to collab on testing new docker-storage-setup patches
17:10:11 <kushal> This reminds me that I will have to do the same.
17:10:24 <jberkus> is there more testing which needs to be done here?
17:10:38 <jbrooks> jberkus, I think we're waiting on dusty for that one
17:11:04 <jberkus> dustymabe: ?
17:12:03 <dustymabe> jberkus: yep
17:12:07 <dustymabe> re-action
17:12:22 <dustymabe> I didn't get around to it because of the kubernetes issue and the kernel issue that are blocking 2wk release
17:12:28 <dustymabe> :(
17:12:28 <jberkus> #action more testing on docker-storage-setup patches jbrooks dustymabe jberkus
17:12:43 <jberkus> ok, jumping to (5) because
17:12:58 <jberkus> 5. plan for virtual FADs
17:13:12 <jberkus> I feel like maxamillion did this via email
17:13:38 <kushal> yes
17:14:50 <jberkus> ok, that's resolved. anybody who didn't read maxamillion's plan probably needs to
17:14:59 <jberkus> ok, now 4. Update PRD
17:15:13 <jberkus> so ... AFAIK, nobody has looked at the PRD other than me and jasonbrooks
17:15:15 <jbrooks> https://lists.fedoraproject.org/archives/list/cloud@lists.fedoraproject.org/message/AEX2WRUKCQYDHL4YGQU4OS5W7KSCAFQX/
17:15:28 <jbrooks> that's the vfad plan thread ^
17:15:46 <dustymabe> jbrooks: sigh.. yeah I said I would last week
17:15:50 <jberkus> https://public.etherpad-mozilla.org/p/fedora_cloud_PRD
17:15:52 <dustymabe> and I promise I planned to
17:16:09 <jberkus> however, I'd like to start with a 10,000m view instead
17:16:19 <kushal> jberkus, I did in the morning :)
17:16:27 <dustymabe> it is in my task list
17:16:30 <jberkus> namely, "what are our goals as a WG/project"?
17:16:31 <jberkus> https://public.etherpad-mozilla.org/p/Fedora_Atomic_Project_Goals
17:16:42 <jberkus> both short and long-term
17:17:18 <kushal> or maybe I opened the wrong link
17:17:18 <jberkus> there's a couple other questions, like that the old Cloud PRD is way longer than either of the other PRDs
17:17:26 <jberkus> which suggests that we can trim it down significantly
17:18:13 <jberkus> so, see the four goals for FY18
17:18:26 <jberkus> (that's Red Hat FY18 for any non-RH folks on the channel)
17:18:36 <jberkus> (meaning March 2017 to March 2018)
17:19:06 <jberkus> are those goals everyone can buy into?  things we can reasonably get accomplished in a year?
17:20:24 <jberkus> tap tap tap is this thing on?
17:20:30 <kushal> jberkus, Yes.
17:20:38 <kushal> I guess everyone is reading the etherpads ;)
17:20:40 <jbrooks> jberkus, Looking at the FAO goal, I wonder where we want to place kubernetes in our goals
17:20:55 <jbrooks> It's a separate thing from openshift, and it needs ongoing work
17:20:57 <jberkus> jbrooks: I think that goes under "stability"
17:21:04 <jbrooks> OK, right
17:21:13 <jberkus> as in, it *ought* to work
17:21:19 <jbrooks> Yep
17:21:22 <jberkus> but any "advanced" stuff we do is going to be Origin
17:21:30 <jbrooks> These goals look good to me
17:21:37 <jberkus> hmmm
17:21:41 <kushal> jbrooks, +1
17:21:46 <jberkus> now that I think of it, I want to add one non-technical goal
17:22:00 <dustymabe> I don't know about FAO
17:22:18 <dustymabe> I think it sounds good but I don't know how we are going to pull it off
17:22:45 <jbrooks> Hmm, I think FAO needs a champion
17:23:06 <jbrooks> I'm honestly most interested in the stability+kube part
17:23:23 <sayan> .hello sayanchowdhury
17:23:24 <zodbot> sayan: sayanchowdhury 'Sayan Chowdhury' <sayan.chowdhury2012@gmail.com>
17:23:29 <dustymabe> yeah, I'm just trying to think about what is maintainable for the existing WG
17:23:42 <sayan> sorry for dropping in late
17:23:47 <kushal> jbrooks, in my flock16 talk, I demoed how we can test kube using the upstream ansible repo.
17:23:53 <dustymabe> i think we would need to focus on CI/testing/stability before we branch out
17:23:53 <kushal> sayan, welcome :)
17:24:18 <jberkus> #chair jasonbrooks kushal sayan dustymabe bowlofeggs scollier
17:24:18 <zodbot> Current chairs: bowlofeggs dustymabe jasonbrooks jberkus kushal sayan scollier trishnag
17:24:38 <jberkus> dustymabe: so FAO as a longer-term goal?
17:24:58 <jbrooks> jberkus, unless someone wants to own it
17:25:09 <jbrooks> Not do all the work, but stay on it
17:25:22 <jberkus> yah, and reasonably we really need someone on the OpenShift Origin project participating actively
17:25:27 <jbrooks> And what about the FOSP or FOOT or whatever?
17:25:33 <jbrooks> Is that a goal?
17:25:35 <dustymabe> jberkus: yeah, I mean basically it's a question of time
17:25:37 <dustymabe> and resources
17:26:18 <jberkus> jbrooks: AFAICT, that's pretty much in limbo because there's no hosting for it.  it was based on the idea that we had hosting from CNCF, which turned out not to exist
17:26:50 <jbrooks> ok
17:27:17 <dustymabe> i thought at one point we had an "angel" who was going to give us budget for hardware
17:27:29 <dustymabe> but I don't know anything
17:27:40 <jberkus> we thought we did, apparently we don't.  scollier ?
17:27:57 <scollier> reading
17:27:59 <jberkus> mind you, if we *want* a FOSP, we can always just allocate money for AWS hosting
17:28:19 <jberkus> but unless we're actively pursuing FAO, I don't see why we need one
17:28:30 <scollier> it's being hosted on the fedora cloud for now, the OSP environment
17:28:41 <scollier> i am not aware of any aws funding, thought we wanted to keep it in house
17:28:54 <dustymabe> jberkus: well my personal preference is that we start to host some of our fedora infra on openshift
17:29:00 <kushal> We will also need manpower to admin the systems.
17:29:02 <scollier> dustymabe, ack
17:29:04 <dustymabe> but..
17:29:18 <kushal> dustymabe, for that we need budget allocation from openshift teams.
17:29:23 <kushal> s/teams/team.
17:29:50 <dustymabe> kushal: good luck
17:30:21 <kushal> dustymabe, I am just reminding the major pain point ;)
17:31:01 <kushal> dustymabe, Also for things related to Fedora Infra, we first should talk to puiterwijk nirik smooge and rest of the Infra team.
17:31:03 <dustymabe> kushal: well I think we should be able to "supply ourselves" with some resources on this front, but that is above where I sit on the totem pole
17:31:23 <jberkus> ok
17:31:24 <dustymabe> kushal: indeed, i was just talking about a personal preference, and was not saying it is what they should be doing now
17:31:33 <kushal> dustymabe, Understood.
17:31:39 <puiterwijk> What is going on here?
17:31:46 <jberkus> so is there anything we should be doing here?  or should I just move FAO/FOSP/etc. to "long term goals"?
17:31:58 <kushal> puiterwijk, discussing the future of Fedora Atomic, and things we want etc.
17:32:05 <puiterwijk> aha
17:32:07 <dustymabe> jberkus: long term goals and mark some blockers
17:32:14 <dustymabe> hardware, people, etc..
17:32:20 <dustymabe> "scoping" maybe
17:32:22 <kushal> jberkus, I thought it was important from the first Cloud FAD.
17:32:31 <jberkus> kushal: it seemed like it at the time
17:32:42 <jberkus> kushal: but then we also had the impression that the OpenShift project would be helping
17:32:50 <dustymabe> yeah, it is important as far as "we'd like to do it" but there are challenges
17:32:52 <kushal> jberkus, :)
17:33:08 <dustymabe> jberkus: i'm not saying the openshift team "won't help"
17:33:16 <dustymabe> i think we need to present to them the benefits
17:33:36 <dustymabe> that's the only way you ever get someone to do something
17:33:51 <jberkus> dustymabe: yah, but I think realistically that moves it to "more than a year"
17:34:02 <jbrooks> And again, does someone want to own this, to take the lead
17:34:18 <kushal> Do we have anyone from Openshift team here?
17:34:19 <roshi> sorry, phone call
17:34:23 <dustymabe> jbrooks: everyone is staying away from it with a 10 foot pole
17:34:26 <roshi> will read backscroll :)
17:35:19 <jberkus> I'll go as far as trying to set up some meetings, but I'm not sure we want to do that this month
17:35:36 <jberkus> how do I schedule an action item for May?
17:35:48 <misc> dustymabe: we are getting hardware for fosp, no ?
17:35:51 <roshi> I don't think you can do that with meetbot
17:36:03 <roshi> (schedule an action item for may)
17:36:08 <jberkus> misc: you would know better than any of us
17:36:14 <dustymabe> yeah I don't know
17:36:21 <misc> jberkus: well, we are in the process of buying hardware
17:36:30 <dustymabe> jzb might know, but we'd have to track him down to ask
17:36:37 <jberkus> misc: can you provide (via email) a brief on the HW situation?
17:36:41 <misc> jberkus: sure
17:36:49 <jberkus> misc: the cloud team here has no idea what's going on with that
17:36:53 <jberkus> thanks
17:37:01 <kushal> misc, do you know which team is supposed to pay for the new hardware?
17:37:07 <misc> jberkus: it got stalled until last week due to budget dance
17:37:07 <jberkus> #action misc to brief cloud team on FOSP HW situation
17:37:18 <kushal> jberkus, Atomic team :)
17:37:23 <misc> kushal: OSAS
17:37:24 <jberkus> point
17:37:29 <kushal> misc, nice :)
17:37:33 <kushal> Thanks :)
17:38:00 <misc> kushal: however, for the disk, we might need to ask you to give a kidney
17:38:15 <kushal> misc, only one?
17:39:02 <jberkus> #topic open issues
17:39:27 <jberkus> UEFI boot issue: https://pagure.io/atomic-wg/issue/185
17:39:48 <jberkus> dustymabe, I believe this is related to the kernel issues which are holding up the release ... as in it's blocked by them
17:40:04 <jberkus> dustymabe: what's the current situation with that?  anything change in the last 24 hours?
17:40:18 <dustymabe> jberkus: with what part exactly?
17:40:24 <dustymabe> UEFI or kernel issues?
17:40:43 <jberkus> first, is the UEFI bug fixed *if* we could do a release?
17:41:34 <dustymabe> jberkus: they are putting out the new pungi to the builders today/tomorrow
17:41:42 <jberkus> ah, ok
17:41:46 <dustymabe> once that is in place we should be able to build media with the new anaconda
17:41:54 <jberkus> so, lemme derail, because this isn't in pagure
17:42:16 <jberkus> dustymabe, jbrooks: where are we with the kernel issue with dns?
17:42:30 <dustymabe> jberkus: https://lists.fedoraproject.org/archives/list/cloud@lists.fedoraproject.org/message/X7KTUTP2ITJQEJUUJFFJL2L4JGGFJ42K/
17:42:57 <jberkus> ok, thanks
17:43:11 <jberkus> so theoretically we can have a delayed release with everything fixed
17:43:29 <dustymabe> theoretically
17:43:35 <dustymabe> we'll get as much in as we can
17:43:52 <dustymabe> as soon as the kernel is in and we test we are going to release though
17:44:12 <dustymabe> going to try to keep the same dates.. i.e. we release next week and the following week
17:44:15 <dustymabe> then two weeks after that
17:44:22 <jberkus> https://pagure.io/atomic-wg/issue/180 Fedora-Dockerfiles
17:44:36 <jberkus> so we were talking about a workshop for porting these to FDLIBS at DevConf
17:44:51 <jberkus> but it seemed like maxamillion and I would be the only ones from the WG there
17:46:43 <dustymabe> jberkus: I can come to it, but don't know how much I can contribute outside of the face 2 face session time
17:47:22 <jberkus> dustymabe: building containers is pretty easy ;-)
17:47:45 <dustymabe> jberkus: yeah, what worries me is that it's not just building containers
17:47:51 <jberkus> my personal issue is that I have a conflict
17:47:55 <dustymabe> it's more about the application
17:48:00 <dustymabe> did you configure it right
17:48:02 <dustymabe> is it insecure
17:48:08 <dustymabe> did you expose enough knobs to end users
17:48:21 <dustymabe> are you pulling from an rpm or from upstream
17:48:26 <jberkus> well, initially we port the stuff in Fedora-Dockerfiles *exactly* the way it is
17:48:32 <jberkus> *then* make adjustments
17:48:33 <dustymabe> ssl or not?
17:48:58 <jberkus> kushal: any update on this? https://pagure.io/atomic-wg/issue/160
17:49:22 <kushal> jbrooks, you are not going?
17:49:28 <kushal> jberkus, not that I know of, rtnpro is still out for his honeymoon.
17:49:40 <kushal> jberkus, I will ping him after he comes back from holidays.
17:49:51 <jbrooks> kushal, to devconf? No
17:49:56 <jberkus> ok, can you mark that one "on-hold" then
17:49:57 <jbrooks> I probably should have
17:50:16 <kushal> jberkus, I will.
17:50:28 <jberkus> ok
17:50:32 <jberkus> #topic open floor
17:50:39 <jberkus> so ... anything else?
17:51:16 <kushal> dustymabe, jberkus, I finally found the cause behind the recent qcow2 image failures on autocloud.
17:51:25 <jberkus> yay!
17:51:31 <kushal> Booting the image with 1 VCPU is triggering the issue.
17:51:38 <dustymabe> kushal: well you found a workaround
17:51:40 <dustymabe> not the cause
17:51:51 <kushal> dustymabe, The high level cause.
17:52:09 <kushal> dustymabe, Not the exact kernel code which is causing this.
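(For context: a minimal way to reproduce the 1-VCPU boot failure kushal describes would look roughly like this. The image path, VM name, and os-variant are assumptions for illustration.)

```shell
# Boot the qcow2 under libvirt pinned to a single VCPU -- per the
# discussion above, this is what triggers the autocloud failures.
virt-install --name fah-1vcpu-test \
  --memory 2048 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/Fedora-Atomic-25.qcow2 \
  --import --os-variant fedora25 --noautoconsole
```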
17:53:19 <dustymabe> kushal: indeed
17:53:36 <dustymabe> ok next topic?
17:53:41 <dustymabe> open floor?
17:53:46 <dustymabe> runcom_: has something
17:54:07 <runcom_> dustymabe: so yes
17:54:26 <runcom_> the topic is docker in Fedora stable (F25 at this point)
17:54:50 <runcom_> we're currently shipping docker 1.12.6 which was the latest upstream release, till now, because 1.13.0 went out just now
17:55:01 <runcom_> my thought was to bring in 1.13.0 in F25
17:55:12 <jberkus> as docker-latest, presumably?
17:55:13 <kushal> runcom_, what is stopping us to do so?
17:55:14 <runcom_> and start early testing of docker + openshift (or whatever)
17:55:51 <runcom_> jberkus: there's an ongoing discussion in the ml about it..I would really prefer to drop it, nobody is actually using it (nobody gives karma etc etc)
17:56:16 <runcom_> kushal: mm not sure, docker usually breaks stuff when working with k8s/openshift
17:56:23 <dustymabe> jberkus: did you see the mail list discussion about docker-latest?
17:56:24 <runcom_> people might not be comfortable with that
17:56:27 <kushal> runcom_, that is True :)
17:56:36 <jberkus> dustymabe: yeah, but right now docker-latest and docker are both 1.12
17:56:41 <runcom_> but my point is, that would allow us to battle test it for openshift/kube early
17:56:42 <jbrooks> Well, if it breaks kube and openshift, we don't give karma
17:57:03 <jbrooks> I'm +1 to fedora defaulting to a fresh docker, provided it works
17:57:30 <runcom_> jbrooks: that might be a path, but keeping something in updates-testing is non-trivial, since bugs in docker 1.12.6 might also happen
17:57:44 <runcom_> and that's a maintenance burden on me to downgrade to 1.12.6 again
17:58:07 <jbrooks> So you prefer sticking w/ docker-latest and docker-current?
17:58:15 <jberkus> jbrooks: hmmm.  as long as we have FAH "versions", I'm not that comfortable with upgrading docker without warning
17:58:21 <runcom_> jbrooks: right now, I manually test (run docker's upstream tests) for each new docker before building it in Fedora, that doesn't mean "it'll work" but ....
17:58:25 <jberkus> given that docker is NOT good about backwards-compat and stability
17:58:27 <misc> why not use copr ?
17:58:43 <misc> because if it start to break, nobody will use it if we ship that by default
17:58:56 <runcom_> jberkus: nope, just pointing out "situations", my preference would be to push 1.13.0 and fix bugs as they come and maybe forget about 1.12.6 bugs
17:58:57 <jberkus> misc: we can't use copr to layer over an existing package in FAH, can we?
17:59:03 <jbrooks> So we withhold karma and fix things if it doesn't work, right?
17:59:23 <jbrooks> jberkus, if we do ostree admin unlock
17:59:45 <jberkus> jbrooks: yeah, but that's not a production solution for someone who wants 1.13 ...
18:00:03 <jbrooks> right
18:00:07 <runcom_> jberkus: and also for people willing to test it out w/o it being in updates-testing
18:00:07 <jberkus> so
18:00:09 <misc> someone might not want 1.13 in production if that was not tested enough
18:00:22 <jberkus> yah
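(For context: the "ostree admin unlock" path jbrooks mentions for trying docker 1.13 without it being in updates-testing is roughly the workflow below. The docker package version is a hypothetical example; changes made this way are discarded on reboot, which is why jberkus notes it is not a production solution.)

```shell
# Temporarily unlock the deployed tree to test a newer docker build.
sudo ostree admin unlock            # overlays a writable /usr until reboot
sudo rpm -Uvh docker-1.13.0-1.fc25.x86_64.rpm   # hypothetical test build
sudo systemctl restart docker
# Rebooting returns the host to the unmodified, signed tree.
```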
18:00:26 <jberkus> so we need to schedule this
18:00:26 <runcom_> I mean, my thought would be to be as much as we can aligned with upstream releases
18:00:42 <jberkus> #action runcom_ and everyone to schedule 1.13 testing week
18:00:57 <jberkus> because we're out of time
18:00:59 <misc> and from having tried to run atomic in prod, I think we had enough breakage to not add more :/
18:01:07 <jberkus> any other discussion on #fedora-cloud please
18:01:17 <jberkus> #endmeeting