16:04:24 #startmeeting F21-blocker-review
16:04:24 Meeting started Wed Dec 3 16:04:24 2014 UTC. The chair is roshi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:24 Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:04:24 #meetingname F21-blocker-review
16:04:24 The meeting name has been set to 'f21-blocker-review'
16:04:25 #topic Roll Call
16:04:36 who's around to knock out these blockers?
16:04:36 * pschindl is here
16:04:40 * kparal is here
16:04:43 * satellit listening
16:04:50 #chair kparal pschindl satellit adamw
16:04:50 Current chairs: adamw kparal pschindl roshi satellit
16:05:25 adamw: you around?
16:06:48 * nirik is lurking around
16:06:54 well, we can move forward
16:06:58 welcome nirik :)
16:06:59 #topic Introduction
16:07:00 Why are we here?
16:07:00 #info Our purpose in this meeting is to review proposed blocker and nice-to-have bugs and decide whether to accept them, and to monitor the progress of fixing existing accepted blocker and nice-to-have bugs.
16:07:04 #info We'll be following the process outlined at:
16:07:06 #link https://fedoraproject.org/wiki/QA:SOP_Blocker_Bug_Meeting
16:07:09 #info The bugs up for review today are available at:
16:07:11 #link http://qa.fedoraproject.org/blockerbugs/current
16:07:14 #info The criteria for release blocking bugs can be found at:
16:07:16 #link https://fedoraproject.org/wiki/Fedora_21_Alpha_Release_Criteria
16:07:19 #link https://fedoraproject.org/wiki/Fedora_21_Beta_Release_Criteria
16:07:22 #link https://fedoraproject.org/wiki/Fedora_21_Final_Release_Criteria
16:07:25 we've got 2 proposed blockers and one proposed FE
16:07:27 I think we could wait for adamw. I suppose it will be very interesting today
16:07:28 #topic (1170153) anaconda gets stuck during creating a partition, when there is some existing partition after that one
16:07:31 #link https://bugzilla.redhat.com/show_bug.cgi?id=1170153
16:07:33 #info Proposed Blocker, anaconda, NEW
16:07:34 * jreznik is here but has only one hour...
16:07:38 sounds good to me kparal
16:07:39 kparal: yeah, likely to be a bit heated. ;)
16:07:51 also, we need anaconda representatives here
16:07:56 yeah
16:08:05 think you can get some here?
16:08:09 I'm here, and I've just come from a conversation with dcantrell about this.
16:08:25 it would still be nice if they joined
16:08:29 sgallagh: any details before we start?
16:09:04 Yes, David's position is that we revert the change that caused this blocker and document the original bug as a known issue in F21
16:09:10 I agree with this.
16:09:40 there were several suggestions on #anaconda
16:09:42 He does not want his team spending their time on this.
16:09:58 today it seems they don't want to spend time on any of the blockers
16:10:05 putting it mildly
16:10:19 * nirik re-reads the bug.
16:10:44 kparal: The blocker process has worn them out, they're not going to work on a firedrill schedule and I can't blame them.
16:10:49 are they going to come to the meeting?
16:11:03 the ideas currently include: a) do nothing and risk user data loss b) restrict users from visiting the spoke again c) rescan all disks on every visit, thus throwing away all pending changes
16:11:19 so, isn't this a pretty corner case? where would someone hit it? and it was possible in f20?
16:11:37 sgallagh: I understand that, on the other hand we are in exactly the same position. and this cycle there has been the lowest number of blockers ever
16:11:42 or, in a long time
16:11:48 sgallagh: well, everyone invests a lot of time into the release and prolonging makes it more work for everyone :(
16:12:10 kparal: we don't want to spend time on blockers today because tomorrow it would be the same thing all over again
16:12:33 and again, and again, ad infinitum
16:12:48 nirik: this one is very easy to hit. you just need to be creating a partition before some other existing partition - therefore in the middle of the disk, or e.g. replacing one partition with a different one (while there is some persistent partition after that one)
16:12:57 yes, this is our lowest number of blockers this release -- you can say that all you want. that does *not* mean it has been less stressful
16:13:15 so, should we stop releasing fedora?
16:13:18 don't really understand
16:13:30 you don't understand that you have to draw the line somewhere and not drive developers into the ground?
16:13:41 you cannot fix every bug. that is just impossible
16:13:45 let's stop testing anaconda and release it as it is. Best solution. No work needed.
16:13:48 well, if we fix something and break something else, we can't just ignore it
16:14:00 sbueno: I don't want to argue, but then we could ship just some snapshot of the release and completely resign from any testing... and as kparal said, this time it was really a nice release compared to others and it still is, the blockers were all real blockers (except those high contrast icons, and we raised that with the team)
16:14:10 we need to fix blockers = major bugs, not every bug
16:14:44 but you're nominating something as a blocker every day
16:14:47 data loss is pretty high on my list
16:15:02 well, sorry, it's still broken
16:15:07 it's my job
16:15:17 over and over again, ad infinitum
16:16:19 well, let's go back to the topic - and find a solution to make it a lower burden for everyone: QE, anaconda, all the great folks doing releases
16:16:28 +1
16:16:32 if we're really in a state where development teams can't keep up with the pressure anymore, maybe we should strike a lot from our criteria. but data loss should still be there, imho
16:16:46 data loss is a big deal
16:17:03 well, there's lots of things that can cause data loss on an install... especially if you have lots of other stuff on the disks you are manipulating.
16:17:06 kparal: with this bug, does manual fiddling with the disk allow things to start working?
16:17:52 this bug was possible/there in f20? or only with multiple disks?
16:18:00 roshi: anaconda freezes during partition creation, therefore the system is not installed at all. my existing systems were somehow made unbootable during the process on one of the machines, but I haven't had time to investigate why
16:18:14 ok, that's what I was wondering
16:18:21 nirik: that would be 1166598, not this one
16:18:27 the fix for 1166598 caused this one
16:18:30 this one is worse
16:18:43 that's the reason why the anaconda devs proposed to revert that fix
16:18:58 and release with bug 1166598 present
16:19:05 (and 1170153 fixed)
16:19:12 right.
16:19:44 correction, it's quite hard to define which one is worse. each of them has a specific use case where things might go wrong
16:19:48 potentially leading to data loss
16:20:13 but if we revert the fix, we can still mitigate 1166598 somehow
16:20:24 I'll re-paste the suggestions
16:20:32 the ideas currently include: a) do nothing and risk user data loss b) restrict users from visiting the spoke again c) rescan all disks on every visit, thus throwing away all pending changes
16:20:49 I think b and c are off the table...
16:20:53 my impression is that the anaconda devs strongly support a)
16:20:57 d) slip and give the anaconda team time to fix both bugs properly
16:20:58 nirik: why is that?
16:21:09 jreznik: they say it's not that simple
16:21:18 but I'd like them to talk about it
16:21:33 I think it's a) mark this not a blocker somehow and ship, or b) revert 1166598 and ship, or c) slip and try to do something else.
16:22:08 kparal: my understanding is that they wish to do b) and don't want to spend time working on your other options at this time.
16:22:13 b) is the most appealing choice, honestly.
16:22:21 nirik: yes
16:22:31 for c) it would mean getting a fix in less than a week, pretty unlikely, so it would probably lead to January
16:22:37 in that case, we could potentially still ship tomorrow
16:22:41 jreznik: yes
16:22:45 if the choices are just those three, b would be best afaict
16:22:46 The revert puts us in a better situation than we have now
16:22:51 * mattdm drops in to +1 to shipping tomorrow :)
16:22:58 The remaining issue can be documented
16:23:06 if we revert the patch *and* try to mitigate (not fix) 1166598, it's going to take more time probably
16:23:11 there's one more bug... so it's still not tomorrow :)
16:23:13 .fasinfo Corey84-
16:23:14 Corey84-: User "Corey84-" doesn't exist
16:23:28 jreznik: sure, we need to vote on it as well
16:23:28 jreznik: I'm going to push for FESCo to vote to remove the dual-boot criterion
16:23:40 note that the revert is actually in anaconda, blivet and pyparted I think... so it's not just a simple one-liner
16:23:46 (As a blocker, rather than a very-nice-to-have)
16:23:57 +1 sgallagh
16:24:00 nirik: It's just blivet, according to dcantrell
16:24:10 sgallagh: on the other hand, we have a lot of users who ask for even more dual boot support... it will definitely shrink our user base
16:24:17 oh? there were updates for the others in there... possibly because it was the same bodhi update I guess.
16:24:32 let's not discuss dual boot now, it's off topic
16:24:33 jreznik: This is just a case of "we don't yet support dual boot with a certain os"
16:24:37 jreznik: I'm not saying we don't try to do it. I'm saying we don't block on it
16:24:41 this is a different bug
16:24:41 1166598 is annoying to read due to the cloning. ;(
16:24:44 * mattdm shuts up
16:24:57 * satellit_e Dual Boot : Use a usb ext disk...boot from it
16:25:29 mattdm: you mean with both mac and windows? :) not speaking about other linuxes
16:25:39 jreznik: later :)
16:26:16 but I understand trying to support too much if we don't have the resources to do it - it's better to admit it
16:26:18 dcantrell: can you tell us what you think about reverting the fix for 1166598 and then rescanning all disks every time you enter the storage spoke, to avoid 1166598?
16:26:21 ok, I am in favor of reverting the fix for 1166598 and doing another rc I guess.
16:27:03 another rc ? :( but im with your logic
16:27:03 I'm for reverting the fix for 1166598 provided we at least somehow protect the users from it
16:27:06 For a formal vote: Proposal: Revert the fix for 1166598 and spin RC5. No additional changes.
16:27:07 kparal: reverting the fix is the least risky option at this point. introducing new code introduces an unknown amount of risk. in discussion the solution sounds nice and it may even work, but now is not the time to be introducing new things
16:27:15 +1
16:27:29 reverting the patch for 1166598 is the safest. document the limitation as a known issue and move on
16:27:31 dcantrell: what about disallowing users from entering the spoke again?
16:27:34 dcantrell: if we slip and give the team enough time to properly fix these issues, how much time do you expect we would need (and would you be willing to do it)?
16:27:48 kparal: departure from established workflow, which I would argue is an RFE
16:27:49 that would be a pretty easy fix
16:27:52 we don't need to be doing that either
16:28:06 well, it would be to protect users from sounds not complicated
16:28:10 kparal, not all users are aware to just reboot if they actually bugger it up first time tho
16:28:12 from 1166598
16:28:21 sorry, bad ctrl+v
16:28:29 sgallagh: we shouldn't vote for rc5 yet, we have another blocker proposed after this one. ;)
16:28:35 jreznik: unknown amount of time required, extremely risky given that we are in December. I am not prepared to commit the team to that work
16:28:54 nirik: If we revert this, RC5 is a given
16:29:01 dcantrell: I understand it would very likely lead to January
16:29:02 ^
16:29:05 I didn't say "instantly"
16:29:22 sgallagh, when then ?
16:29:26 what about a big fat warning displayed the second time you enter the spoke?
16:29:31 still too complex?
16:29:31 jreznik: the proper solution for this is to bake it in rawhide and fix it in f22. it's a complex problem with no quick fix
16:29:41 kparal: UI changes now? not a good idea
16:29:48 kparal, not likely but confusing likely
16:29:51 Corey84-: I just mean that one will have to happen as soon as possible. Don't read too much into it
16:29:55 sgallagh: I'd rather say: +1 to this being a blocker, fix is to revert the fix for 1166598 and document that issue / -1 blocker it.
16:29:59 dcantrell: better than partitioning changes, don't you think?
16:30:09 kparal: no, I'm not sure it's what we want now... common bugs and document it
16:30:18 nirik: I'm fine with however it is phrased
16:30:25 ugh
16:30:26 kparal, sure
16:30:40 kparal: no
16:30:49 kparal: We can't fix every bug all the time. It's not ideal, but it's reality.
16:31:10 sgallagh: agreed. that's why we have release criteria, to distinguish them.
16:31:45 so it's just the reentry into the spoke that bugs out, yes? (don't have the bz in front of me atm)
16:31:51 Yes
16:32:28 kparal: Sure, but the criteria are written somewhat ambiguously, and at times like this I think it's perfectly reasonable to play the "Okay, let's document that and move on" card
16:32:49 * nirik wonders if anyone has a way to wake the adamw.
16:33:14 we have done this many times in the past. but those were minor bugs. this is potential data loss
16:33:27 I'm not happy about it
16:33:34 can you get the data off manually?
16:33:35 kparal: It's an OS installer. That's *always* a risk.
16:33:48 is the data "lost" or "hard to get?"
16:34:22 kparal: if it makes you feel better, there's probably plenty of other data loss bugs we just didn't happen to discover yet.
16:34:23 sgallagh: there's always risk but that doesn't mean we should make that risk too high
16:34:46 roshi: anaconda can delete the wrong partition, other than the one you selected
16:34:52 do we have any actual numbers on the people this problem affects (if 1166598 is reverted)? what is the likelihood of users hitting this problem? or are we all just speculating?
16:34:54 well, definitely this bug sounds worse than the original one, so
16:34:57 wait, are we talking about just documenting this one and -1'ing it?
16:34:59 jreznik: Of course, but I'm making the personal judgement that it's not too high in this case
16:35:00 from comment 0: "3. There is a high risk of removing partition which was supposed to be kept."
16:35:43 dcantrell: that's hard to tell. only a fraction of affected people will report it
16:35:57 nirik: I think it's revert this one as it's worse and document the original one as less likely to happen? or maybe I'm already lost :)
16:36:15 jreznik: Yes, that's my take
16:36:55 it's a fudge but it seems like the only way to release this year, for me
16:36:56 sgallagh, how can it delete a non-declared partition tho
16:37:26 Corey84-: We don't need to investigate the code here. We just need to decide how to proceed.
16:37:37 FWIW, I have seen 0 reports of 1166598 in the wild, but it could be most people just give up and wipe everything instead of reporting or seeking help.
16:37:46 sgallagh, wasn't suggesting a code review lol
16:38:42 nirik: so with 0 reports of it in the wild, I find it hard to believe that it should get the status that it has received
16:38:52 nirik: I don't have numbers to back this up, but I think most people try Fedora out with VMs or Lives these days and don't generally install locally until they're ready for it to take over completely.
16:39:06 my position is still to revert 1166598 and document the problem as a known issue and how to not hit it
16:39:11 So if they hit this, they probably just try another VM
16:39:20 dcantrell: I still completely agree.
16:39:28 sure, it's all speculation really. ;)
16:39:47 * nirik is also with dcantrell and sgallagh.
16:39:55 im with dcantrell on that common bug and doc
16:40:22 sgallagh: I can't agree with that VM/Live (and to be honest, I recommend it to many folks who want to try Fedora, as the best/safest way)
16:40:51 jreznik: Sorry, I couldn't parse that. What do you recommend to people?
16:40:51 I'm for reverting, but I'd like to see *some* improvement to protect users from de-fixed 1166598
16:41:19 kparal: I just don't think that's likely to happen in this release. Certainly not if we want to ship in 2014
16:41:21 but without commitment from the anaconda team, I don't think we have many options here, so revert
16:41:32 ok, so to stick with the order of the meeting
16:41:41 votes on this bug as a blocker for F21?
16:41:42 sgallagh: to install in a vm or use a live
16:42:05 jreznik: It sounds like you were agreeing with me, then.
16:42:57 Proposal (again): 1170153 is a blocker. Agreed resolution is to revert the fix for 1166598.
16:43:21 sgallagh +1. It sucks, but let's document it, ship it, move on
16:43:28 +1
16:43:28 +1
16:43:36 we also need to agree that 1166598 not a blocker, or discuss it separately
16:43:41 *is
16:43:53 just patching the proposal
16:43:59 +1 on 1166598
16:44:06 +1
16:44:20 +1
16:44:36 we can discuss the other bug next
16:44:49 +1
16:45:21 roshi: so let's do a proposed #agreed
16:45:28 working on it :)
16:45:31 ok
16:46:09 proposed #agreed - 1170153 - AcceptedBlocker - This bug is a clear violation of the Windows dual boot criterion and can lead to data loss.
16:46:44 roshi: Uh, what?
16:46:52 I think you may have jumped ahead...
16:47:02 what do you mean?
16:47:07 Sorry, I misread
16:47:11 ack
16:47:13 we're discussing 1170153 and if we should block on it
16:47:13 ack
16:47:15 ack
16:47:20 ack
16:47:26 then discussing 1166598
16:47:28 roshi: Sorry, I got confused with the *other* dual-boot bug that got opened.
16:47:35 did I miss something?
16:47:40 ah
16:47:41 No, I did. Carry on
16:47:42 ok :)
16:47:53 #agreed - 1170153 - AcceptedBlocker - This bug is a clear violation of the Windows dual boot criterion and can lead to data loss.
16:48:25 now we can talk about the one proposed to revert and document
16:48:26 #topic (1166598) going back to installation destination picker swaps partitions on disks
16:48:29 #link https://bugzilla.redhat.com/show_bug.cgi?id=1166598
16:48:31 #info Accepted Blocker, anaconda, VERIFIED
16:48:44 Proposal: Remove blocker status, revert this fix.
16:49:55 propose as f22 blocker ;)
16:49:55 yeah... 'current fix causes worse issues and no better fix is available in the near term, so remove blocker status and document'
16:49:59 actually, 1170153 was not really about windows, and I've found that it works with windows in most cases.
16:50:07 votes on removing blocker status for this bug? we'll also have to provide a really clear justification in the bug itself
16:50:21 but let's discuss 1166598 now
16:50:22 * adamw wakes up, reads back
16:50:31 adamw: Trust me, go back to bed.
16:50:31 +1 remove blocker status with nirik's justification
16:50:32 hey adamw. welcome to the fun. ;)
16:50:39 proposal: give adam 5 minutes to catch up?
16:50:43 sure
16:50:47 +1
16:50:52 eh, i'm not that important
16:50:53 to 5 minutes
16:50:58 what *exactly* is the data loss scenario for this bug?
16:51:05 * adamw never actually hit it himself
16:51:15 you may remove a partition you don't want to
16:51:16 adamw: visit the storage spoke twice. partitions can get swapped numbers
16:51:26 adamw: If you have existing partitions, you might remove the wrong one without being aware of it
16:51:50 I'm not sure if it affects custom part. definitely guided part
16:52:13 comment 10 has a reproducer
16:52:15 not seen it in custom (my forte)
16:52:25 kparal: does the confirm dialog show the right info?
16:52:51 mattdm: the reclaim dialog shows partitions, but labels like 'vda1' and 'vda2' are swapped
16:52:57 it is hardly noticeable
16:52:57 you don't get a confirm dialog on guided
16:53:11 ah. (I apparently never do "guided")
16:53:19 yeah, no confirm dialog, just the reclaim dialog
16:53:36 adamw: in the bug long ago, you said "My inclination is to vote -1 blocker on a bug which involves running through the spoke multiple times and changing your mind, at this point."
16:53:51 is that still basically true?
16:54:01 mattdm: at that point i think the impact was believed lower
16:54:08 has anyone checked if this happened in f20?
16:54:26 my last comment never arrived to bugzilla for some reason. this happens in F20, but only in multi-disk scenarios. I couldn't reproduce it with a single-disk scenario
16:54:26 kparal says that it is new in f21
16:54:44 ah, I added this comment to a wrong bug
16:55:02 imo if you have to enter the spoke more than twice you need to preplan deployment better
16:55:10 fixed now
16:55:34 Corey84-: there's that, but then there's also the fact that, well, we built this whole hub and spoke thing which expressly allows you to do that
16:55:43 Right, I think the likelihood of re-entering the spoke is sufficiently small as to not be worth blocking on.
16:55:53 I often do that
16:56:05 just checking whether I set everything right
16:56:06 it seems impolite to say "we built something that's clearly designed to allow you to go through spokes multiple times, but we're going to say any data-eating bugs that happen when you do aren't important". kind of a dissonance there.
16:56:07 kparal: yeah but you're trying to break it :)
16:56:08 kparal: Yeah, but you're schizophrenic ;-)
16:56:11 adamw, not discounting that at all but when is too many times tho
16:56:17 thanks guys
16:56:23 kparal: You're welcome!
16:56:42 but this time I meant a real life scenario. installing on your home machine, alongside your precious data
16:56:44 by design, there shouldn't be a "too many times"
16:56:49 you want to be sure you set everything right
16:56:52 on a custom or guided i can see a second reentry but more than that is iffy imo
16:56:54 adamw: I agree with you, but on the other hand, I don't know that it's a strong enough reason to block
16:57:11 the second time through of kparal's single-disk reproducer is a *bit* pathological, though i guess you can do it by mistake
16:57:14 cautious people are more likely to hit it by triple-checking and retrying the configuration several times, so in the end they are more likely to lose data :D
16:57:15 And anyway, the discussion is somewhat moot, since the anaconda folks don't want to engineer a solution at this point.
16:57:25 So it's rather academic, IMHO
16:57:28 adamw: to summarize, anaconda devs say they can't fix this and 1170153 at the same time
16:57:30 fair nuff adamw
16:57:38 i guess i'd say that in a perfect world with perfect adherence to our policies i'd want us to block on this and slip for however long anaconda wanted to be happy they could fix both bugs properly
16:57:45 This is the lesser of the two bugs here
16:57:45 I would for sure propose this as an F22 blocker even if we revert it now
16:57:57 sgallagh: switch to calamares in f21? :)
16:57:59 sgallagh: agreed.
16:58:10 in an imperfect world where anaconda folks are sick to death and everyone else wants to be out of here for christmas i can be ok with not fixing it, i guess.
16:58:17 .fire jreznik
16:58:17 adamw fires jreznik
16:59:06 it sounds like reverting this and documenting it is the best course of action we have
16:59:08 +1 imperfect world. It's not just the anaconda team -- a lot of people want to get the release into the hands of users
16:59:26 i'd like that too, but i'd also like it to be good =)
16:59:28 yes
16:59:33 even if we had someone who could patch both *now* we'd still be pressed for time to test thoroughly
16:59:50 +1 revert and doc again here is fine with me
16:59:57 roshi: yep, a fix for this issue definitely means January
16:59:59 roshi: well, if we're reverting something we need to re-test thoroughly.
17:00:07 jreznik: If not February, yes.
17:00:09 jreznik: jokes about calamares and other projects in fedora won't get us to take this entire process seriously like you ask us to. I ask that you recognize the amount of work that we put into the installer, which everyone has continually and always badmouthed
17:00:18 yeah, but it takes less time to revert and test than to code, build and test
17:00:40 so, i'm gonna say +/-0 on this, but i'm ok with an overall -1.
17:00:49 dcantrell: well, I appreciate your work on anaconda, and yesterday I repeated several times that you did a great job
17:00:57 I'm not, but if there's no one to fix it, what can we do
17:01:13 jreznik: thank you
17:01:30 we're all friends here and we all want a solid release
17:01:35 im for a fix if we can test in time but not to block on it
17:01:36 -1 after reverting the fix, document as best we can and try and fix it better for f22.
17:01:39 assume good faith and all that :)
17:01:40 I don't think a warning dialog would be that hard to code
17:01:50 We're not friends, we're family. Families fight sometimes :)
17:01:51 nirik, +1
17:01:58 kparal: what part of code freeze don't you understand?
17:02:00 +1 sgallagh :)
17:02:08 roshi: solitaire release? ;p
17:02:08 sgallagh: nice :)
17:02:45 dcantrell: I probably don't understand your reply
17:02:53 * nirik asks sgallagh: "are we there yet!?"
17:03:12 nirik: No, and stop teasing your sister.
17:03:13 kparal: 12:00 < kparal> I don't think a warning dialog would be that hard to code
17:03:20 So, back to the problem at hand.
17:03:35 Are we agreed on reverting, documenting and fixing *early* in F22?
17:03:41 it's not a fix, but it would at least help a bit. just a fraction of people will read commonbugs
17:03:42 are we ready for votes on this? or could people still use some convincing?
17:03:42 right, I think we are in broad agreement here, someone craft a proposal?
17:03:52 sgallagh: +1
17:03:58 I'll do it nirik
17:04:08 votes first though :)
17:04:22 * nirik goes to get coffee.
17:04:23 +/- 0, ok with a general -1 if that's how people vote
17:04:42 kparal: not disagreeing, but we either have a code freeze or not. which means problem solving after a code freeze means working with the tools we have limited ourselves to. such as reverting or documenting problems
17:04:47 +/- 0 from me too. I don't like it.
17:05:06 +1 to revert, document (to be clear) - I don't see a way to get a fix anytime soon and it seems like even trying to mitigate it could lead to more issues (code changes, anaconda team burnout)
17:05:14 +1 for proposing revert, doc and early fix
17:05:38 freeze means not adding new things unless something is broken - that's the point of freeze aiui
17:06:01 ok, 2 +1 and 2 +/-0
17:06:08 dcantrell: i don't know if you mean anaconda or fedora, but fedora doesn't have a 'code freeze' of that nature
17:06:32 that is clearly evident, but it would be nice
17:06:38 dcantrell: the codification of fedora's milestone freezes is 'only changes to fix blocker and freeze exception bugs will be accepted during these times'
17:06:47 I'm abstaining and looking for some spirits
17:06:53 I'm going to recommend that this is neither the time nor the place for a discussion of the freeze policy.
17:07:04 true sgallagh
17:07:07 votes?
17:07:10 yeah, it was just a clarification, if we're going to discuss changing it that should happen elsewhere
17:07:22 i'm assuming we're counting dcantrell as -1 ?
17:07:33 well, +1 to revert this one
17:07:37 aiui
17:07:59 my position is still to revert 1166598 and document the problem and workaround
17:08:09 however that works on the voting number line
17:08:12 +1 for dcantrell :)
17:08:29 ok, 3 +1 and 2 +/-0, one abstain
17:09:02 +1 dcantrell
17:09:13 If I wasn't counted, I'm +1 to my own proposal.
17:09:15 wait, what's the proposal?
17:09:16 still +1 rather
17:09:26 oh, sgallagh's. gotcha.
17:09:28 (12:03:22 PM) sgallagh: Are we agreed on reverting, documenting and fixing *early* in F22?
17:09:36 * nirik is +1
17:09:45 * Corey84- +1
17:09:59 assuming a vote of -1 on the blockeriness of the bug, yes.
17:10:05 proposed #agreed - 1166598 - RejectedBlocker - The provided fix for this bug caused a larger issue. At this point in the release it's better to revert and document the problem clearly. Repropose this as an F22 Alpha blocker to get a fix early in the next release.
17:10:10 +1
17:10:14 +1
17:10:16 roshi: ack
17:10:16 ack
17:10:20 ack
17:10:37 #agreed - 1166598 - RejectedBlocker - The provided fix for this bug caused a larger issue. At this point in the release it's better to revert and document the problem clearly. Repropose this as an F22 Alpha blocker to get a fix early in the next release.
17:10:51 ok, next proposed blocker
17:10:51 #topic (1170245) Win 8 UEFI don't start from grub: "error: cannot load image"
17:10:55 #link https://bugzilla.redhat.com/show_bug.cgi?id=1170245
17:10:57 #info Proposed Blocker, grub, NEW
17:11:00 I updated the title
17:11:07 the problem seems to be in secure boot
17:11:10 man, i thought someone tested this.
17:11:12 oh, SB.
17:11:13 if I turn it off, everything works
17:11:18 * roshi will update all these bugs when the meeting ends
17:11:31 and I'll repeat what pjones said on #anaconda
17:11:47 still unlikely to be fixed in F21 at all.
17:11:54 trouble is, if it worked before on a different machine, that would seem to imply that either a) that machine did not, in fact, have SB enabled, or b) the machine you're testing this on can't actually boot windows correctly
17:11:54 pjones: Chainloading to something in the system db should work
17:11:54 Anything else has no chance
17:11:54 I think Suse have a patch that adds shim support to chainload
17:11:55 yeah
17:11:57 hence my last statement.
17:12:15 I'm failing to find any criteria for SB
17:12:24 I don't see one either
17:12:24 I think we have one...
17:12:27 it'd be a conditional violation of the windows dual boot install
17:12:27 adamw: is it hidden under some generic term?
17:12:31 the condition being 'sb enabled'
17:12:39 * nirik looks
17:12:43 there's no explicit sb criterion iirc
17:12:58 I don't see any
17:13:06 https://fedoraproject.org/wiki/Fedora_21_Final_Release_Criteria#Windows_dual_boot
17:13:07 me either
17:13:15 proposal: Reject as blocker, document that installation with secure boot enabled may not work on all systems yet.
17:13:17 Suggestion: document this as "Dual boot not yet working with Win 8 UEFI with SecureBoot enabled", move on.
17:13:22 "The installer must be able to install into free space alongside an existing clean Windows installation and install a bootloader which can boot into both Windows and Fedora.", "The expected scenario is a cleanly installed or OEM-deployed Windows installation.", "This criterion is considered to cover both BIOS and UEFI cases."
17:13:39 +1 sgallagh, when something can't be done it can't be done.
17:13:40 do OEM installs have sb enabled?
17:13:47 yes, usually.
17:13:53 yep
17:14:00 we can adjust the criterion, it's not evil to adjust the criteria in the face of harsh reality
17:14:01 in 8 or 8.1 it is
17:14:04 this particular machine had it off by default
17:14:08 I enabled it before installation
17:14:20 kparal: win8 oem boxes are required to have it on by default
17:14:21 I've seen 2 more machines, all of them had SB off
17:14:28 OTOH, none of them had Win8 preinstalled
17:14:31 right
17:14:33 newer machines with OEM 8.1 are default SB on
17:14:35 pre-win8 usually wouldn't
17:14:36 I'm not sure how it looks when win8 is preinstalled
17:14:40 but anyhow, it seems academic
17:14:56 adamw: yeah we shouldn't hang ourselves on new external factors
17:14:56 well, like adamw said, if it can't be done, it can't be done
17:15:05 votes?
17:15:14 we should try and narrow down the docs to the actual affected cases if we can.
17:15:16 W8+ it's an MS-pushed requirement on clean OEM iirc
17:15:20 I'm +1 to my proposal
17:15:26 +1 (although yes to narrowing down the docs as suggested)
17:15:27 +1 to sgallagh's
17:15:34 sgallagh, +1 too
17:15:46 and you can boot from efi ok still?
17:16:03 nirik: yes I can
17:16:07 efi iirc isnt the issue, its sb that buggers it
17:16:12 but not all machines have a uefi boot menu
17:16:25 kparal: just for the docs' sake, if you turn off SB *after installing*, does the dual boot start working right away?
17:16:30 right, but that should be in any documenting. ;)
17:16:30 adamw: yes
17:16:33 k.
17:16:43 CSM mode is FINE yes
17:16:47 post or pre install
17:17:04 if it works post install, I see less push to have this fixed
17:17:06 even legacy first with sb on SHOULD work
17:17:13 proposed #agreed - 1170245 - RejectedBlocker - This doesn't violate any specific release criterion. Document on common bugs that SB-enabled dual boots might not work at this point. Workaround is to turn it off.
17:17:24 roshi: ack
17:17:27 +1
17:17:28 ack
17:17:30 ack
17:17:35 ack
17:17:45 #agreed - 1170245 - RejectedBlocker - This doesn't violate any specific release criterion. Document on common bugs that SB-enabled dual boots might not work at this point. Workaround is to turn it off.
17:17:55 i'd phrase it as 'not a serious enough violation of the windows criterion', but np.
17:18:08 a fair point
17:18:20 well, since we're rolling another RC regardless
17:18:26 let's look at this FE
17:18:27 ack
17:18:41 #topic (1169151) docker run fails with 'finalize namespace setup user setgid operation not supported'
17:18:41 * nirik hasn't looked, but is probably -1.
17:18:43 #link https://bugzilla.redhat.com/show_bug.cgi?id=1169151
17:18:46 #info Proposed Freeze Exception, docker-io, ON_QA
17:19:17 not a docker guy but looks -1 to me
17:20:07 this sounds like it's all 'nicer' in -4... but I see no reason that can't just be a 0-day
17:20:17 honestly i have no clue what's going on here
17:20:30 i keep asking for someone to just test -2 and -4 and tell me which one's better
17:20:42 it seems like a simple request, but for some reason, no-one's done it
17:20:49 nirik: yep, for me it looks like a 0-day
17:20:50 -1 to an FE at this point.
17:20:50 so, -1 on the basis of insufficient information.
17:20:53 ugh this is the first I've seen it
17:20:55 jzb: you have any insight on this one?
17:20:58 -1 FE
17:20:58 If there's no reason it can't be fixed in an update, leave it alone
17:21:00 larsks: ^^
17:21:26 -1
17:21:38 -1 FE barring further info
17:21:58 I'm -1 FE, period. No potentially destabilizing changes now, please.
17:22:06 looks like it can be fixed with an update
17:22:44 sgallagh: well, if someone said -2 was completely non-functional i'd consider it, but no-one has, so.
17:23:04 if it's an easy 0-day, -1 FE for sure
17:23:20 Off to a meeting. Bye folks.
17:23:25 proposed #agreed - 1169151 - RejectedFreezeException - Based on the information we have on hand this looks like it can be fixed with an update. No need for an exception to the freeze.
17:23:29 later sgallagh
17:24:07 ack
17:24:18 ack
17:24:26 thanks sgallagh, /me is supposed to be in another meeting but priorities are priorities :)
17:24:29 ack
17:24:41 #agreed - 1169151 - RejectedFreezeException - Based on the information we have on hand this looks like it can be fixed with an update. No need for an exception to the freeze.
17:24:42 jreznik, fesco ?
17:24:50 well, that's all we have for now
17:24:53 I'm _super_ confused by this one because the last I saw on the cloud list was colin noting that everything with docker in the atomic image looked okay.
17:24:53 fesco is in 35min.
17:24:55 Corey84-: nope, internal
17:25:05 ah
17:25:15 adamw: are you going to put in the RC request?
17:25:30 roshi: we need a build with that fix reverted...
17:25:42 ^
17:25:55 who's going to handle that?
17:26:16 yep, we need a build and then request rc5
17:26:16 * roshi thought we just built the RC with the older package and didn't need to rebuild that package
17:26:52 the other question is how confident we will be taking older results over to rc5, as there's not much time now
17:26:54 that's a possibility I guess. I was thinking it was multiple places, but if it's just one package we might be able to use the older one.
17:27:15 but hey, I just test them :) I don't know much about building them :)
17:27:34 reverting something like this, reusing results is sketchy at best
17:27:36 it's possible but we have to double check it not to miss anything
17:28:00 If we can get rc5 out by morning I'm down to test all day tomorrow
17:28:03 roshi: I agree, but we don't have much time... go/no-go is tomorrow
17:28:05 roshi: can do once we have another anaconda.
17:28:11 sounds good
17:28:16 blivet apparently.
17:28:21 adamw: thoughts on reusing results?
17:28:21 whichever
17:28:26 I don't think we can for this
17:28:31 roshi: we can transfer stuff beyond the installer
17:28:40 base, server, desktop
17:28:43 * satellit will also help test
17:28:53 there's no changes to the installed package set or package deployment code so that should be safe
17:29:00 cant do any efi on this box but the rest of it im down
17:29:02 i'd want to re-run the whole installation page, i guess
17:29:03 there's that 's' word :p
17:29:04 we need a new build.
17:29:10 yeah, tflink's favourite
17:29:11 lives?
17:29:21 there's changes after the one that had the fix, and other changes mixed in.
17:29:24 yeah, gotta redo all the install stuff for sure
17:29:30 nirik: right, that's what i figured.
17:29:33 fedup?
17:29:35 * nirik just checked
17:29:49 satellit: fedup should be transferable, i guess.
17:29:50 the s-word is always fun - usually an indication of something that needs to be tested :)
17:30:01 tflink: I SEE A VOLUNTEER
17:30:18 so, tflink has volunteered to re-run all the server, desktop and base tests, thanks tflink
17:30:21 * tflink runs away, isn't sure from what but runs anyways
17:30:52 dude, you have to wait until he's actually *in* the net before you spring it
17:30:58 i can pull down desktop tests for sure
17:31:02 and base i guess
17:31:03 you'll never catch people like that
17:31:06 :p
17:31:26 hehe
17:31:27 lol
17:31:34 Corey84-: it's ok, we were just giving tflink a hard time.
17:31:41 roshi: just saw your notify earlier... what were you pointing at?
17:31:57 the bug in the topic
17:32:01 adamw, i dont mind tho lol
17:32:13 Ah, okay. Thanks.
17:32:33 helps me learn the deeper stuff faster than some college course in OSes
17:32:55 true that :)
17:33:09 just a heads-up for go/no-go tomorrow - I'm travelling to the FAD tomorrow; the best way, I think, is to move to PRG tomorrow and have Go/No-Go there, as we depart early Friday... but with the current weather situation, I may be trapped somewhere in the middle of nowhere on a train
17:33:29 #topic Open Floor
17:33:40 the forecast promises better weather, but they've been promising that for the last two days
17:34:10 so g/ng is tomorrow or friday am now ?
17:34:11 just in case I'll be offline, someone can help and start it :)
17:34:20 Corey84-: Thursday
17:34:23 tomorrow
17:34:27 k
17:34:34 17:00 UTC
17:34:41 jreznik: can you find someone for that?
17:34:42 I'll be here
17:34:46 volunteers?
17:34:52 and the readiness meeting at 19:00 UTC
17:35:01 * nirik will not be around for go/no-go either... might make readiness...
17:35:04 roshi: I hope I'll make it, just looking for a backup
17:35:09 might be late to readiness
17:35:36 where are the docs on running that?
17:35:42 * Corey84- is too new to all that otherwise wouldnt mind
17:35:46 * roshi can be your backup if you don't find someone more suitable :)
17:36:39 roshi: thanks, LTE coverage is better now, so I hope I'll be able to connect even on the train :)
17:36:55 * adamw will be around all day.
17:37:08 I heard about folks being trapped for 17 hours on a train yesterday
17:37:12 oof
17:37:15 that's not fun
17:37:35 well, we'll make sure things get started tomorrow for the meeting
17:37:42 anyone have anything else for this meeting?
17:37:42 * Corey84- really needs to get his replacement wwan card lol
17:37:47 17 hrs --- refund ?
17:38:06 * roshi lights the fuse
17:38:21 3...
17:38:24 Corey84-: I read about refunds... but in this case it wasn't the railways' fault, just the weather
17:38:25 45 secs fuse ?
17:38:37 depends on the day
17:38:45 that's bs, even airlines will refund on that long a delay
17:38:59 ACME Fuse Company doesn't do good QA - you can never tell how long it'll burn
17:39:02 2...
17:39:45 1...
17:39:51 roshi: so it's you who caused the ammunition storage explosions today? :D with your fuse
17:39:59 thanks for coming folks!
17:40:19 nah, it's our supplier :p
17:40:20 http://bit.ly/11UI4vL
17:41:24 thanks everyone!
17:41:25 sheesh
17:41:38 #endmeeting