18:09:05 #startmeeting Fedora Atomic and Infrastructure
18:09:05 Meeting started Tue Oct 7 18:09:05 2014 UTC. The chair is jzb. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:09:05 Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:09:18 * stickster here lurking too
18:10:12 OK
18:10:18 if I have the right minutes in front of me
18:10:25 #link http://meetbot.fedoraproject.org/atomic/2014-09-30/atomic.2014-09-30-18.14.txt
18:10:33 we have a few items from last week
18:10:44 #topic Atomic Runbook
18:10:52 oddshocks: you had an action item on the runbook?
18:11:44 jzb: regarding the docs? some have been added: https://fedoraproject.org/wiki/Cloud/Atomic_Runbooks
18:12:02 oddshocks: awesome - is there more to come, do you need help, etc.?
18:12:52 jzb: i can continuously add more as I learn stuff. I have to admit though I'm spinning wheels on the releng scripts that we need ostree support in
18:13:09 oddshocks: how so?
18:13:25 oddshocks: you need more info from someone, the scripts are hard to understand, or...?
18:13:31 oddshocks: how can we unblock you?
18:14:09 Mostly the second one. I don't know what needs to be done on a micro level and I don't have a way to test if I did the right thing or not
18:14:20 sorry im in the middle of packing
18:14:38 next week this time will be 4am for me
18:14:42 I've been poking around but I feel like I'm just taking random guesses and I don't know how to tell if anything I'm doing is right
18:14:55 dgilmore: where ya goin?
18:15:15 oddshocks: because you don't have a test environment?
18:15:22 I don't really know what most of the script(s) are doing, I don't have experience with the programs they're calling, and I don't have any familiarity with how the machines they're using are set up with regards to directory structure
18:15:33 jzb: yeah that's a big part of it
18:15:34 jzb: Before this meeting is over, let's figure out a mutual time with dgilmore for next week and beyond. He will be in AUS soon
18:15:42 stickster: OK
18:15:54 stickster: any thoughts on that situation (lack of test env)
18:15:56 ?
18:15:58 jzb: And if I had a test environment, I wouldn't know if the outputs I get are right
18:16:01 or dgilmore ^^?
18:16:08 oddshocks: jzb: That's the kind of info I think dgilmore should be providing information for
18:16:16 um, sorry for that circular sentence.
18:16:20 oddshocks: jzb: That's the kind of info I think dgilmore should be providing
18:16:25 :-)
18:16:45 stickster: the info that dgilmore should be providing is of that sort?
18:16:49 ha
18:16:54 Like, I know I need to generate an ostree summary file and put it somewhere that mirrormanager could get it. but i dont know where the file should go or from where i should call the ostree command or even what repo(s) it needs to be run on, or even which directories ARE repos
18:17:31 oddshocks: have you sent those Q's to dgilmore ?
18:17:33 and the scripts are pretty barren as far as comments go and AFAIK there are no docs so it's quite hard to pick up what's happening
18:17:41 we can try to run some of that down here, but maybe email would be faster?
18:17:59 jzb: alright i'll send out an email
18:18:02 Yeah, we need to unlock some of this knowledge.
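[Aside: a minimal sketch of the step oddshocks is describing, assuming the nightly compose writes an OSTree repo somewhere. The real path on the releng boxes is exactly what is still unknown here, so the one below is a placeholder; walters points at the same command just below.]

    # Placeholder location for the compose's OSTree repo -- not the real releng path.
    OSTREE_REPO=/srv/compose/atomic/repo

    # Regenerate the repo's summary file (it lands at $OSTREE_REPO/summary and
    # lists the refs the repo carries) so MirrorManager and clients can
    # discover what is available.
    ostree --repo="$OSTREE_REPO" summary -u

    # Whatever directory tree contains $OSTREE_REPO is then what gets synced
    # out to the download servers/mirrors, summary file included.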
18:18:17 oddshocks: I'm going to be evil and give you two action items if I may
18:18:20 oddshocks: 1) run this down
18:18:28 and 2) document what you learn
18:18:33 I will do both of those
18:18:39 jzb: Australia
18:18:50 if it's OK for me to send out a long sprawling email explaining exactly what I don't understand and what isn't clear, I'll do that by all means
18:18:56 dgilmore: permanent move or is it vacation?
18:19:05 dgilmore: If there is specific rel-eng access that is needed for someone to be able to accurately test the scripts -- can you (1) provide that to dgay, and (2) give him any specific instructions on things to do/not to do with that access that are critical to test
18:19:08 dustymabe: 2 months, need to sort some things out
18:19:18 dgilmore: cool. good luck!
18:19:31 oddshocks, ostree --repo=repo summary -u
18:19:37 #action oddshocks Compile questions and get with dgilmore to figure out mirrormanager / releng / build systems info.
18:19:51 stickster: kinda. the nightly compose boxes have zero shell accounts that can ssh in
18:19:57 #action oddshocks start compiling documentation based on information about mirrormanager / build systems.
18:20:09 walters: yeah i understand that but i have a thousand questions regarding what to point that at, where to run it from, where the ostree summary file should end up, etf
18:20:11 etc*
18:20:20 I have ZERO experience with these releng scripts and ZERO experience with the machines
18:20:31 I'm sort of diving straight into this unaware of my surroundings
18:20:38 oddshocks: i will try to get it sorted on my flight home
18:20:44 the file ends up in the repo
18:20:46 dgilmore: awesome thanks
18:20:52 dgilmore: What would be the best way for oddshocks to be able to test?
18:21:14 stickster: run it locally
18:21:36 stickster: some parts might puke when run outside of phx and need some config tweaks
18:21:46 but realistically most of it should run anywhere
18:22:04 dgilmore: OK, if there are other things that need to be set up locally hopefully you can help oddshocks debug and fix.
18:22:21 I know the file is supposed to go in the repo but i don't know where the repo is. all i have to work with are the buildbranched buildrawhide and run-pungi scripts and there isn't anything labeled (or clearly labeled) a repo there
18:22:23 Ideally the scripts should be written in a way that makes it clear what's environment-specific, so it's easy to tune the script.
18:22:25 there's the line `mock -r $MOCKCONFIG --uniqueext=$DATE --unpriv --chroot "/usr/bin/repodiff -s -q --new=file://${MASHDIR}/$BRANCHED$EXPANDARCH/source/SRPMS --old=file://$TREEPREFIX/development/$BRANCHED/source/SRPMS > $logdir/repodiff"
18:22:38 o_O
18:22:39 but.. wat
18:22:57 that's the only line that seems to do something with a repo
18:23:04 oddshocks: well, that's clear as a bell.
18:23:10 and that's the sort of stuff that's in these scripts and it's quite hard for me to wrap my head around
18:23:18 oddshocks: the repo doesn't exist
18:23:29 oddshocks: we will be making a new repo from scratch nightly
18:23:50 we need to make a random directory in /tmp and make it there
18:24:22 oddshocks: that line is really simple to understand
18:24:25 dgilmore: what goes in the repo? what do we base it off of? aka what files should be in there
18:24:48 uh
18:24:51 dgilmore: Obviously not to everyone.
18:24:52 oddshocks: that is defined in the atomic repo files we have to pull from git
18:25:13 "mock -r $MOCKCONFIG --uniqueext=$DATE --unpriv --chroot "
18:25:30 that is: run a command as an unprivileged user in a mock chroot
18:25:48 "/usr/bin/repodiff -s -q --new=file://${MASHDIR}/$BRANCHED$EXPANDARCH/source/SRPMS --old=file://$TREEPREFIX/development/$BRANCHED/source/SRPMS >$logdir/repodiff"
18:25:52 If I get it... this is doing a repodiff in mock, between the existing development SRPMS repo and a new mashed SRPMS repo
18:25:55 that is the command we run
18:26:12 stickster: exactly
18:26:15 i.e. what changed in this mash?
18:26:15 exactly
18:26:40 it is part of what's in the nightly branched rawhide compose email
18:26:59 But I expect there are other lines to be decoded similarly... oddshocks, maybe I can use some SWAG to help you with some of these, although I'm not a rel-eng expert
18:27:14 stickster: it's all along the same lines
18:27:22 * stickster suspects that particular line is not really the exact issue, but yeah
18:27:27 almost everything is run in mock chroots
18:28:01 I'm having a hard time coming up with things to say that don't make me feel like I sound underqualified
18:28:04 :/
18:28:06 :P *
18:28:34 I get the general premise.
18:28:47 oddshocks: No worries... part of the point of doing this is to get you exposed to rel-eng processes, and precisely so you can help us discover/document what's becoming "institutional knowledge" so we can fix that
18:28:55 Alright.
18:29:07 I mean I don't know what a mash is. And I haven't used mock outside of python-mock.
18:29:32 oddshocks: Yeah, some of this I can definitely help with :-) Don't be shy about asking, I'm good about saying "I don't know" and finding people who do
18:29:34 That line might be obvious to people used to these things but I personally have to scroll to each of those referenced $variables and see what they are
18:29:35 :-)
18:29:46 oddshocks: have you trolled the wiki to see if anything is there?
18:29:55 trolled or trawled?
18:30:03 "This wiki sucks gumballs"
18:30:20 stickster: you do it your way...
18:30:23 ha
18:31:02 jzb: if you're asking if i've found any documentation on these scripts, the answer is no
18:31:24 oddshocks: there is a very large use of variables in the scripts that makes them a bit harder to follow than they possibly should be
18:32:12 oddshocks: I did try and pick variable names that make sense
18:32:16 *nod... dgilmore: oddshocks: Maybe as we go we could patch in some comments to make them easier to follow
18:32:36 dgilmore: I assume we could file those as tickets against the rel-eng repo?
18:32:45 stickster: sure
18:32:45 stickster: sounds good to me
18:33:17 * stickster not trying to turn this primarily into a cleanup exercise -- we do have a goal to achieve here -- but we might as well make easyfixes as we go
18:33:24 stickster: +1
18:33:27 * dgilmore really needs to run, I do not have long to finish packing. and I have not decided what computers to take with me
18:33:34 dgilmore: all the computers
18:33:40 Gumballs are delicious.
18:33:44 dgilmore: ok before you leave
18:33:44 jzb: I do not have room for all 50 or so
18:33:45 dgilmore: Make sure you bring several Sun machines. That's critical.
18:33:46 my biggest problem overall is that I have no way of knowing if things I add 1) work 2) are what we actually want
18:33:57 dgilmore: what time next week will work for you?
18:34:14 stickster: the only Sun machine I have left is the E3000, it's now a printer stand.
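[Aside: the line quoted above, unpacked per stickster's and dgilmore's reading. The variables are set elsewhere in buildbranched; the values below are illustrative placeholders, not the real PHX paths or config names.]

    # Placeholder values -- the real ones live elsewhere in the script.
    MOCKCONFIG=fedora-branched-compose-x86_64
    DATE=20141007
    MASHDIR=/mnt/koji/mash/branched-$DATE
    TREEPREFIX=/pub/fedora/linux
    BRANCHED=21
    EXPANDARCH=""
    logdir=/tmp/compose-logs

    # Run repodiff as an unprivileged user inside a mock chroot, comparing the
    # freshly mashed source-RPM repo (--new) against the already-published
    # development repo (--old). The output is the "what changed in this mash"
    # report that goes into the nightly branched compose email.
    mock -r "$MOCKCONFIG" --uniqueext="$DATE" --unpriv --chroot \
      "/usr/bin/repodiff -s -q \
         --new=file://${MASHDIR}/${BRANCHED}${EXPANDARCH}/source/SRPMS \
         --old=file://${TREEPREFIX}/development/${BRANCHED}/source/SRPMS \
         > $logdir/repodiff"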
the rest I gave to Ricky
18:34:15 so even if I take a shot in the dark, they're still just lines that i'd send via email as a patch and people'd have to go over line by line
18:34:24 dgilmore: is there a window of sanity that we can hit for AUS and US?
18:34:41 oddshocks: I think I would like to go on this voyage of discovery with you. Let's set some time aside to work together on it, maybe after 4pm EDT today?
18:34:42 jzb: any time after 3 hours from now
18:34:50 as 3 hours from now is 7:30am
18:34:55 oddshocks: I think we can probably figure out how to run these locally without too big a problem
18:35:00 stickster: sounds good to me.
18:35:04 stickster: i'll be here
18:35:12 OK
18:35:14 Finally, something technical I can do that's worthwhile and maybe not completely over my head ;-)
18:36:11 #action jzb send new meeting time for 16:30 CDT
18:36:32 dgilmore: also, you had an action to pull in lmacken?
18:36:51 jzb: can we make sure the calendar gets updated?
18:37:01 dustymabe: yeah
18:37:08 #action jzb update calendar
18:37:10 I live and die by that calendar
18:37:15 dustymabe: ouch
18:37:46 jzb: more or less it just helps me make sure I don't miss it and it has a nice time converter on the side so I don't have to worry about timezones
18:39:14 jzb: i don't think we are at that point yet
18:39:16 but soon
18:40:10 OK
18:41:06 walters: anything new this week?
18:41:12 #topic new business
18:41:18 i would like to blog about the atomic f21 image
18:41:28 but it's not very interesting without online updates
18:41:37 * roshi comes in late to lurk
18:41:48 walters: right. You are referring to the alpha, yes?
18:41:51 right
18:42:04 walters: IIRC it has some fairly significant bugs
18:42:13 so yeah, without updates...
18:43:23 now i could blog with the regular tree compose that's happening from atomic01.qa
18:43:36 which is really just like a cron job running rpm-ostree compose tree
18:43:51 however, there's no mirroring beyond the small set of servers, no metalink
18:43:51 walters: how does that relate to the alpha/beta?
18:44:12 it relates in that you could get regular updates
18:45:37 walters: so I'd point at that tree and then switch back to beta, etc.?
18:45:40 so code wise there's more work on rpm-ostree-toolbox, still interested in a discussion about using that for rel-eng work
18:46:11 hm, yes that's possible
18:46:42 walters: it would sort of point up one of the benefits of rpm-ostree, too
18:46:48 lose some people each time with the "tree is here, now it's there, now it's over here" but better than nothing
18:46:49 "I'm using this tree, now I'm using this tree"
18:47:03 right
18:47:12 walters: and we don't have signing going on yet, do we?
18:47:15 nope
18:47:27 walters: so we're not pointing users away from signed content to unsigned
18:47:43 walters: I'd go ahead and blog that, then.
18:47:51 walters: as long as the package list is the same/or very similar
18:47:57 https://dl.fedoraproject.org/pub/alt/fedora-atomic/ is fetched over HTTPS
18:48:28 (again, this is really not significantly more interesting than the "while true; do rpm-ostree compose tree" loop i'd like to have in rel-eng)
18:49:06 two people today tried the old outdated image from projectatomic.io
18:49:17 i had to tell them "oh don't use that, there's no updates, we're trying to do it in Fedora 21"
18:49:28 and that's just people who asked me directly
18:49:36 walters: yeah... please feel free to blog.
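[Aside: roughly what "point at that tree and then switch back" looks like from an installed Atomic host. Only the dl.fedoraproject.org URL comes from the discussion; the repo subpath, remote name, and ref names are assumptions, and GPG verification is disabled because nothing is signed yet.]

    # Add a remote for the interim, unsigned tree (HTTPS transport only).
    ostree remote add --set=gpg-verify=false fedora-atomic-interim \
        https://dl.fedoraproject.org/pub/alt/fedora-atomic/repo

    # Rebase onto the interim tree and reboot into the new deployment.
    rpm-ostree rebase fedora-atomic-interim:fedora-atomic/f21/x86_64/docker-host
    systemctl reboot

    # Later, the same mechanism switches back to the official release ref on
    # its original remote (ref name again assumed):
    #   rpm-ostree rebase fedora-atomic:fedora-atomic/f21/x86_64/docker-host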
:-)
18:49:52 so i could point them at fedora 21 but...there's no updates
18:49:54 walters: we're finally unblocking stuff on the CentOS side, so I hope to have something there very soon.
18:50:21 but yeah
18:50:46 walters: on the rpm-ostree-toolbox - what do you need there?
18:52:58 it's ok we can come back to it
18:53:16 i could type stuff here but the regular compose is far more important
18:54:10 walters: we have a few chicken and egg problems
18:54:24 we need to just get something out and then sort through the issues
18:55:23 i know fedora.next is taking up a lot of people's time
18:55:27 walters: just looping like you suggested is completely inefficient, and we really do not want to just do that
18:55:49 walters: we need to tie making trees into the processes that push out updates and packages
18:56:10 * stickster suggests that maybe we need some sort of higher-bandwidth discussion than this weekly IRC meeting to get things moving faster ;-)
18:56:14 i.e. have bodhi make and push updates trees
18:56:20 stickster: +1
18:56:27 have buildbranched make updates trees
18:56:29 remember rpm-ostree will not do a new commit if the tree would be unchanged (i.e. the rpmdb has the same package set and ENVRs)
18:56:37 so yes, you spend some CPU time
18:56:38 stickster: +1
18:56:45 but think about how often koji does exactly the same thing
18:56:52 walters: doesn't really matter, it's wasting resources on the server side
18:56:58 we know when we push builds
18:57:06 walters: it doesn't
18:57:27 walters: koji is not a delivery mechanism
18:57:55 * stickster has to desert for another call but plans to walk through some of the scripts with oddshocks in about an hour or so
18:57:56 relatedly, I have gnome-continuous now making buildroots in ~3 seconds
18:58:04 via ostree hardlinking
18:58:08 ooo
18:58:14 stickster: if you organize a BlueJeans or conf call, please invite osas-staff. I want to give Jim, Johnny, and KB a chance to lurk to see what is happening in parallel in Fedora Infra around Atomic.
18:58:23 walters: what gnome does doesn't really translate into anything in fedora
18:58:55 walters: perhaps you and I have very different ideas over some of these things and how they should be used
18:58:58 quaid: oddshocks and I are going to get on a G+ Hangout in all likelihood, happy for others to join in though I can't promise a great education if I'm the one at the lectern ;-)
18:59:06 * stickster flees
18:59:23 walters: we could make some process to get koji builds and make trees, but that is only suitable for running build QA and not pushing to users
19:00:29 walters: we can not push out updates to users that have builds that have not gone through the updates process
19:01:03 stickster: I meant in reference to "higher-bandwidth discussion than this weekly IRC"
19:01:32 dgilmore, what about the rawhide stream?
19:01:47 walters: we only push rawhide out once a day
19:02:03 walters: we can't do more than that
19:02:23 mirrormanager takes about an hour or so to update when things change
19:02:32 mirrors can take a few hours to update and catch up
19:03:09 walters: there are very good reasons why we do things as we do
19:04:22 but we don't even have an official tree for rawhide, right?
19:04:59 walters: we will soon, between oddshocks, stickster and myself we will have something soon
19:05:09 so
19:05:12 ok
19:08:14 If we want to do things more than once a day we will have to work out how to do it without mirrors
19:09:19 walters: do we need to do this more than once per day?
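[Aside: the "loop" under debate, in its most naive form. The repo path is a placeholder and the treefile name follows the fedora-atomic git repo mentioned later in the meeting. It leans on the behavior walters notes above: rpm-ostree makes no new commit when the package set is unchanged, though the compose work itself still costs server resources, which is dgilmore's objection.]

    REPO=/srv/atomic/repo                       # placeholder repo location
    TREEFILE=fedora-atomic-docker-host.json     # treefile from the fedora-atomic git repo

    while true; do
        # Compose a tree from the current package set; if nothing changed,
        # no new commit (and therefore nothing new for clients) is produced.
        rpm-ostree compose tree --repo="$REPO" "$TREEFILE"

        # Refresh the summary file so clients/MirrorManager see the new state.
        ostree --repo="$REPO" summary -u

        sleep 86400   # or drive this from cron once a day instead of looping
    done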
19:09:37 take the docker packager
19:09:44 builds in koji
19:09:51 says "hey try this new version of docker"
19:09:56 right now people can pull the RPM down directly
19:10:13 in the current rpm-ostree, they cannot install the RPM locally (however, see prototype posted to atomic-devel)
19:10:35 walters: yes and we plain do not have the resources to build a bazillion trees all the time.
19:10:41 for this small subset of users, i don't think extensive mirroring is needed, and i'm actually happy with the alt.fp.org space
19:10:55 walters: in this case the answer would be make your own tree applying that update and deploy it
19:11:07 anyways, i'd be fine with once a day
19:11:11 walters: it's not a supportable, viable option
19:11:18 interesting
19:11:22 can you elaborate why?
19:11:28 server disk usage?
19:11:29 CPU?
19:11:41 bandwidth?
19:11:49 we do not have the manpower, nor the computing resources to do that
19:11:55 all of the above
19:12:32 walters: how would we know to make a tree pulling in this one update and not others? we have to make sure the rpms are signed
19:12:41 we have to have ways to kick off and push the tree
19:13:00 walters: do you think it would be useful/feasible to have a mechanism to easily do a treecompose from within an atomic host? for development purposes?
19:13:34 dustymabe: I think jbrooks has poked at that.
19:13:43 we need to have this tree differentiated from the others
19:14:01 walters: maybe in the future we can get to a utopian place and be able to do it
19:14:04 i.e. I have an atomic host and I think it has a bug in a particular rpm so I run a command (could be in a docker container) to build the tree again with one rpm changed.. etc
19:14:11 with what we have today it's not possible
19:14:19 dustymabe, yeah, I've built and hosted tree updates for an atomic image, in that atomic instance, in docker
19:14:35 jbrooks: ahh. cool
19:14:51 * dustymabe goes back to the sidelines and lets the important people talk
19:15:08 * jbrooks realizes he totally forgot about this meeting
19:15:53 dustymabe, https://github.com/jasonbrooks/byo-atomic (but I need to update/recheck that)
19:16:31 walters: how do you actually see this all working?
19:16:51 yes, let's go back to a high level here
19:17:11 one thing to bring up is that a proposed model for atomic is to be a single stream
19:17:21 so when fedora 22 comes out, the tree cuts over
19:17:30 and users get it as an atomic upgrade
19:17:40 walters: that is not going to work
19:17:54 when you make statements like that, can you include reasons?
19:18:28 walters: the processes that we will be setting up are release specific
19:18:46 walters: people will want to continue to use Fedora 21 for the life of Fedora 21
19:19:10 something in their apps will break going to fedora 22
19:19:10 having a way to move from 21 to 22 is awesome
19:19:18 dgilmore: well, no - the Atomic model is continuous updates.
19:19:20 but cutting off the release is a non starter
19:19:38 jzb: and?
19:19:46 dgilmore: if there's a policy in Fedora that says we can't do that, OK, but the model is just a stream of updates.
19:20:12 people on f22 atomic could continue running apps in f21-based containers, doesn't that solve the problem?
19:20:21 dgilmore: https://gist.github.com/jzb/0f336c6f23a0ba145b0a
19:20:36 jzb: we support everything released in the distro from day 1 to the end of the release
19:20:41 that's the discussion draft for the Atomic Host definition, that CentOS, Fedora, and RHEL are going to roughly follow
19:21:03 it explicitly calls out that once a new release (e.g. F22) is out that users move to that.
19:21:17 jzb: that's not our place to make that call
19:21:22 dgilmore: that's also roughly what the cloud WG has mapped out
19:21:28 dgilmore, i think everyone here understands that's the way things have worked up until now
19:21:31 dgilmore: whose place would it be?
19:21:39 we wouldn't be having a conversation if we were just doing the same thing we've always done
19:21:46 dgilmore: again if it's a question of policy I'm happy to take it up there
19:21:54 wherever "there" would be appropriate
19:22:19 just as we will not go and add official f20 docker images after f20 is GA, we will support fedora in the delivered way for the life of the release
19:23:37 dgilmore: so - let's say you're on F21 Atomic
19:23:38 jzb: it's up to the users to decide
19:23:48 do they want to run f21 or switch to f22
19:23:49 dgilmore: there's no upgrade mechanism
19:24:01 dgilmore: the users are deciding by picking the Atomic model
19:24:12 jzb: there should be a way to move from f21 to f22
19:24:16 jzb, well there *could* be, you can "atomic rebase"
19:24:20 dgilmore: that's a part of the appeal
19:24:28 but users should be able to choose to follow f21 and skip to f23
19:25:13 dgilmore: I'm going to disagree there. It's baked into the product definition/idea that you don't do that.
19:25:22 dustymabe, so actually...you can actually today run "rpm-ostree compose tree" on a client system
19:25:40 dustymabe, if you think about it the way I do, up until now everyone has been composing trees client side
19:25:40 dgilmore: we're going on a new path with Atomic that doesn't neatly fit into the old upgrade model
19:25:45 dgilmore: that's part of the point
19:26:14 dgilmore: if users want F21 -> F23 then they'd be better suited sticking with the traditional Fedora Cloud release or Server release
19:26:15 jzb: I guess we will have to agree to disagree then.
19:26:20 walters: true. I just was wondering if it could be automated such that information from the current system could be fed into a new compose
19:26:21 you will have to take it up with FESCo
19:26:27 dgilmore: OK
19:26:53 jzb: we have baked in everywhere the assumption that things will be supported for the life of the release
19:26:57 #action jzb Take Atomic update model to FESCo
19:27:03 dustymabe, yes, clearly if going down this route, we would want to e.g. optimize it to avoid redownloading the packages every time =) Lots of things are possible, but at the moment I am more focused on the server side, supporting docker, and looking at package layering
19:27:19 dgilmore: I get that, I realize it may take some adjustments.
19:27:36 jzb: I see what dgilmore is saying. As a user I could understand wanting to be able to choose if I get just updates for the current release or if I want to rebase on the next release automatically when it is available
19:27:55 dustymabe: but the point is you shouldn't care.
19:28:03 dustymabe: your applications should be in containers
19:28:07 jzb: we need to leave the choice up to the users
19:28:08 dustymabe, try it: git clone https://fedorahosted.org/fedora-atomic/ ; cd fedora-atomic; rpm-ostree compose tree --repo=/ostree/repo fedora-atomic-docker-host.json
19:28:12 jzb: yes. I agree
19:28:13 I strongly believe that
19:28:19 jzb: i'm just saying i see both sides
19:28:28 dgilmore: again, the choice is up to the users
19:28:28 ostree admin deploy fedora-atomic/rawhide/x86_64/docker_host
19:28:37 jzb: no it's not
19:28:39 dgilmore: if they want the old model, they should use server or cloud
19:28:49 you're taking that away
19:29:12 jzb: anyway there is no point arguing cases here
19:29:14 dgilmore: I disagree. There's not much point in adopting Atomic if you want the old upgrade/support model.
19:29:23 we will not get it resolved
19:29:47 fair enough
19:30:00 dustymabe, (note the ref name doesn't have a remote, because it's just in your *local* ostree repo (/ostree/repo))
19:30:08 dgilmore & walters were you still in the midst of discussion?
19:30:19 jzb: I really need to run
19:30:26 I have 30 minutes to finish packing
19:30:27 or do we have a next topic or are we at the end of our topics?
19:30:29 walters: thanks.
19:30:46 dgilmore: don't pack any computers that have optical drives.
19:30:51 dgilmore: that'll save time ;-)
19:31:09 walters: any other topics?
19:31:18 I kinda think we made progress?
19:31:45 at the moment we've mostly been discussing the ostree part of atomic - is the base WG handling the Docker base image OK?
19:32:00 walters: as far as I know.
19:32:04 there are potential similarities in the necessary architecture
19:32:15 e.g. are we also going to gate docker image updates on bodhi?
19:33:04 walters: we are likely only going to do monthly update images
19:33:21 with no provisioning for async updates?
19:33:23 walters: and docker upstream are highly against frequent image updating
19:33:35 i guess a major mitigating factor is you *can* run yum update inside a docker container
19:34:23 walters: I talked to them about automating the process to update the docker registry as I would like to have rawhide images updated daily as part of the nightly compose
19:34:23 it is a bit ironic that ostree was born to do high performance continuous delivery, 80+ times a day, and docker is all about baking golden images updated infrequently, and we're mashing them together
19:34:32 and they strongly did not want that to happen
19:34:56 we may wind up needing our own registry I fear
19:34:59 yes
19:35:07 there are also trust reasons
19:35:35 that's something I really want to avoid, but it's looking more like a thing that has to happen.
19:36:12 there will be processes to update the docker image for critical security updates, cloud image update/creation and docker base images will be tied together
19:39:31 walters: jzb: I have one thing I'd like to bring up with regards to the cloud/atomic images
19:39:51 dustymabe: OK
19:40:06 walters: also, I recognize the irony :-)
19:40:09 Currently I don't believe there is a way to catch the syslinux prompt during bootup to be able to "select" different entries
19:40:32 i.e. different trees for atomic
19:40:39 in a public cloud?
19:40:55 walters: it depends.
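[Aside: walters' "try it" suggestion from above, spelled out. Run as root on an existing OSTree-based host; the compose lands in the host's own repo at /ostree/repo, which is why the deployed ref has no "remote:" prefix. The final reboot is an addition for completeness.]

    git clone https://fedorahosted.org/fedora-atomic/
    cd fedora-atomic

    # Compose the tree described by the treefile straight into the local repo.
    rpm-ostree compose tree --repo=/ostree/repo fedora-atomic-docker-host.json

    # Deploy the freshly composed, local-only ref, then reboot into it.
    ostree admin deploy fedora-atomic/rawhide/x86_64/docker_host
    systemctl reboot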
so for amazon there isn't really a way to do this so it's moot
19:41:09 right now we have --timeout=1 which is admittedly short
19:41:16 but if you are running on openstack or something that gives you access to tty0 or some sort of interactive console
19:41:31 then ideally if you have trouble you would like to select between different trees
19:41:40 dustymabe: would you do this in production?
19:41:51 dustymabe: correct me if I'm wrong, but...
19:42:08 dustymabe: in a production environment, ideally, I'd just use rpm-ostree rollback?
19:42:25 dustymabe: or even just spin up an earlier image, assuming a Netflix-type model.
19:42:29 jzb: assuming you can boot
19:42:35 what if you can't
19:42:56 dustymabe: are we talking production or test environment?
19:43:07 jzb: either I guess
19:43:19 I'm just saying we added that functionality for a reason
19:43:30 dustymabe: well, in production I would still have a pristine image of the earlier known-good image
19:43:41 dustymabe: test environment (e.g., laptop)
19:43:45 dustymabe: I'm hosed :-)
19:43:58 if you can get into cloud-init, you could add a bootcmd for "atomic rollback && systemctl reboot"
19:44:25 I understand.
19:44:35 I guess this might help if you answer this question:
19:44:55 why do we currently have multiple options to select during bootup for the different ostrees that are installed?
19:45:14 if the new tree won't boot at all
19:45:26 right.
19:45:29 for cases where you do have access to the physical console
19:45:55 which many baremetal servers do via remote serial, openstack has VNC console as you noted, and on laptops/workstations you obviously do
19:45:59 I just don't see why we wouldn't have at least an opportunity to access that for our cloud images (in clouds that support console access)
19:46:07 walters: well, and b/c initially rpm-ostree/OStree were really meant for switching trees, right?
19:46:22 walters: whereas the Atomic use case is not the same thing as the original OStree use case.
19:47:01 jzb, mmm...you're talking about its support for parallel booting (e.g. have both fedora and centos tree content)? in that case yes you need to interact with the bootloader
19:47:13 walters: right
19:47:31 but that's not a cloud production use case
19:47:37 right
19:48:16 ok. just thought I would throw that out there. I didn't think a few seconds of prompt would hurt :)
19:48:16 dustymabe, i think this ask is for the "try-booting-once" behavior that coreos/chromeos have?
19:49:12 walters: I guess. I'm not super familiar with them
19:49:35 I just noticed that I liked that feature of atomic but I don't get it with the cloud images we are creating
19:50:06 jzb: walters: thanks for hearing me out
19:50:18 I understand it's probably not needed in most cases
19:50:21 dustymabe: of course
19:50:42 dustymabe: we probably do need to keep it for troubleshooting
19:50:50 dustymabe: so maybe file a bug on that or a ticket with the cloud WG?
19:51:11 jzb: depending on what you guys thought, I was going to bring it up in the cloud meeting later this week
19:51:11 OK, we're way way over time.
19:51:24 should I bring it up with them?
19:51:27 dustymabe: please do - though I don't think I'll be in the meeting this week
19:51:53 jzb: ok. i may start an email thread on it to include you
19:52:20 thanks guys
19:52:47 walters: any last items?
19:52:55 nope, thanks
19:54:05 OK, thanks all
19:54:13 Watch for new meeting time invite next week.
19:54:20 #endmeeting
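[Aside: the cloud-init rescue walters suggests at 19:43:58, sketched as user-data for clouds where you can still edit the instance's user-data but cannot reach a console. It uses rpm-ostree rollback (jzb's phrasing; "atomic rollback" is the wrapper walters names) and guards the command with cloud-init-per, since bootcmd runs on every boot.]

    # Write the user-data you would attach to the instance before (re)starting it.
    cat > user-data <<'EOF'
    #cloud-config
    bootcmd:
      # Roll back to the previous deployment exactly once, then reboot into it.
      - cloud-init-per once atomic-rollback sh -c "rpm-ostree rollback && systemctl reboot"
    EOF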