17:01:33 #startmeeting F22 Final Go/No-Go meeting - 2
17:01:33 Meeting started Fri May 22 17:01:33 2015 UTC. The chair is jreznik. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:33 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:01:34 #meetingname F22 Final Go/No-Go meeting - 2
17:01:34 The meeting name has been set to 'f22_final_go/no-go_meeting_-_2'
17:01:48 #topic Roll Call
17:01:49 morning
17:01:50 .hello sgallagh
17:01:51 sgallagh: sgallagh 'Stephen Gallagher'
17:02:00 hola dgilmore and everyone!
17:02:04 * satellit listening
17:02:12 hello
17:02:23 hello
17:02:31 .hello mattdm
17:02:32 mattdm: mattdm 'Matthew Miller'
17:02:49 .hello pwhalen
17:02:50 pwhalen: pwhalen 'Paul Whalen'
17:02:50 #chair dgilmore sgallagh nirik mattdm roshi
17:02:50 Current chairs: dgilmore jreznik mattdm nirik roshi sgallagh
17:02:56 .hello dmossor
17:02:57 danofsatx: dmossor 'Dan Mossor'
17:03:04 * kparal is here
17:03:33 ok, let's start then!
17:03:42 .hello roshi
17:03:43 roshi: roshi 'Mike Ruckman'
17:03:51 * nirik has a sense of deja-vu
17:03:53 #topic Purpose of this meeting
17:03:54 #info The purpose of this meeting is to see whether or not F22 Final is ready for shipment, according to the release criteria.
17:03:56 #info This is determined in a few ways:
17:03:57 #info No remaining blocker bugs
17:03:59 #info Release candidate compose is available
17:04:00 #info Test matrices for Final are fully completed
17:04:02 #link http://qa.fedoraproject.org/blockerbugs/milestone/22/final/buglist
17:04:03 #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Installation
17:04:05 #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Base
17:04:06 #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Desktop
17:04:08 #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Server
17:04:15 jreznik: you're missing a cloud link
17:04:30 https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Cloud
17:04:31 #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Cloud
17:04:47 kparal: thanks, I knew something was missing but I was just blind as...
17:05:10 #topic Current status
17:05:53 so today we have a follow-up go/no-go after yesterday's unsuccessful go/no-go
17:06:05 there's 1 proposed blocker we should discuss.
17:06:22 RC3 is being validated and, as nirik pointed out, there's one proposed blocker bug
17:06:23 and another that needs discussing, I haven't proposed it yet
17:06:40 #info 1 Proposed Blocker
17:06:45 #undo
17:06:45 Removing item from minutes: INFO by jreznik at 17:06:40 : 1 Proposed Blocker
17:06:58 jreznik: if it is not proposed it does not exist
17:07:18 dgilmore: are we going to be so fast we won't get to it? :)
17:07:44 jreznik: maybe
17:07:49 #topic Mini blocker review
17:07:54 it's being proposed right now
17:07:57 so 2
17:08:10 #info 2 Proposed Blockers
17:08:25 roshi: do you want to go through it or do you want me to?
17:08:30 sure, I can
17:08:36 just waiting on blocker bugs to load...
17:08:56 #topic (1224048) anaconda does not include package download and filesystem metadata size in minimal partition size computation and hard reboots during installation
17:08:59 #link https://bugzilla.redhat.com/show_bug.cgi?id=1224048
17:09:01 #info Proposed Blocker, anaconda, NEW
17:09:03 note that it updates only every 30 min, so it won't have any just-proposed ones.
17:09:24 yup, I'll manually write it out
17:09:42 this bug is a direct violation of the criteria, as written
17:09:53 anyhow. On this one I am -1 blocker for F22. It's annoying, but it's a corner case I don't think many people will hit.
17:09:57 is it?
17:10:13 only people with netinst and very small partition sizes
17:10:38 as in vms, private clouds
17:10:41 I think it's a direct violation of "When using the custom partitioning flow, the installer must be able to: Reject or disallow invalid disk and volume configurations without crashing."
17:10:49 you can hit it with the DVD, but you need to be extremely unlucky - specifying a partition just a bit bigger than the minimum size
17:10:59 danofsatx: Even in VMs and private clouds, most people expect their content to be larger than the installed packages.
17:11:05 They usually build in room for, say, data.
17:11:08 but the times where you *would* hit this seem small
17:11:19 understood.
17:11:26 yeah, based on how much of a corner case it seems to be, I'm -1 blocker
17:11:34 I agree that this is a criterion violation, but it happens only on certain occasions, so it's a judgement call, I believe
17:11:41 yeah, same here
17:11:46 common bug?
17:11:55 danofsatx: if rejected, definitely a common bug
17:11:57 for sure
17:11:58 and fix for f23 for sure.
17:11:58 I'm also -1 as a blocker for F22, though as discussed in #fedora-qa last night, we may want to pre-emptively vote it an F23 blocker
17:12:19 definitely violates the criteria
17:12:34 if we reject something at the last minute, I think it's quite a good idea to already accept it for the next Fedora release, to make sure it's not forgotten and gets fixed eventually
17:12:55 but a system that has such a small amount of free space post-install seems silly
17:13:18 if this had been discovered earlier in the cycle, we could have been more strict, but I think it makes sense here to waive it and accept it for F23, probably Beta or Final
17:13:30 proposed #agreed - 1224048 - RejectedBlocker - This bug does violate the criteria, but the cases where a user can hit this are slim. Rejected as a blocker for F22, please document in Common Bugs and propose for F23 so it doesn't get forgotten.
17:13:31 kparal: sure, let's accept it for F23 - it's more about disallowing such a config
17:13:31 * nirik nods.
17:13:42 ack
17:13:45 roshi: maybe we can agree on it for F23
17:13:47 roshi: patch
17:13:47 though I guess if you build cloud images with the intention that the filesystems will be resized afterwards, it is less silly
17:13:48 ack
17:13:56 roshi: please also accept for F23 Final
17:14:03 right away
17:14:33 ok
17:14:51 Beta?
17:15:06 I would say f23 alpha
17:15:06 I'm fine with either, not sure if it is serious enough for Beta
17:15:16 proposed #agreed - 1224048 - RejectedBlocker F22Final AcceptedBlocker F23Final - This bug does violate the criteria, but the cases where a user can hit this are slim. Rejected as a blocker for F22, please document in Common Bugs. This has been accepted as a blocker for F23 so it doesn't get forgotten.
17:15:19 dgilmore: it does not violate any alpha criteria
17:15:21 dgilmore: It's not violating an Alpha criterion.
17:15:23 just because it is known now
17:15:34 well, we now go through all milestones for blocker reviews
17:15:38 dgilmore: Nothing prevents it from landing earlier :)
17:15:38 I wouldn't go down that path, anaconda devs would hate us
17:15:41 so it'll get looked at during alpha for sure
17:15:42 more than they do now
17:16:05 dgilmore: I tried the same logic for other F23 bugs, but still, something is better than nothing
17:16:33 jreznik: :)
17:16:36 just someone has to take a look at all blockers ahead of that milestone
17:17:01 ack
17:17:12 roshi: I'll do the bug secretary work
17:17:17 thanks
17:17:41 ack
17:17:49 ack
17:17:59 #agreed - 1224048 - RejectedBlocker F22Final AcceptedBlocker F23Final - This bug does violate the criteria, but the cases where a user can hit this are slim. Rejected as a blocker for F22, please document in Common Bugs. This has been accepted as a blocker for F23 so it doesn't get forgotten.
17:18:29 #topic (1224045) DeviceCreateError: ('Process reported exit code 1280: A volume group called fedora already exists.\n', 'fedora')
17:18:35 #link https://bugzilla.redhat.com/show_bug.cgi?id=1224045
17:19:26 roshi: What does "set up a raid array" mean in this bug?
17:19:27 we think that this is connected with some leftover data after deleting/re-creating the raid
17:19:43 bios raid? dmraid in a live image before starting the installer?
17:19:43 well, it failed for me the first time
17:19:55 fwiw, i just did a x86_64 server netinstall on a similar system
17:20:06 intel bios raid, set up a RAID0 array, started the install
17:20:13 cleaning the partitioning tables on the raid disks seems to resolve that issue. right, roshi?
17:20:16 * tflink is going to try workstation live after this install finishes and change everything to RAID1
17:20:17 the disks were previously installed to on a non-raid setup
17:20:52 kparal: I'm not sure
17:21:01 I went to sleep before I figured that out
17:21:21 i think I am +1 blocker here
17:21:40 I'm not a raid expert - so there could be user error here
17:21:42 though it seems it can be worked around
17:21:54 I believe the experience pschindl had was that after clearing the old partitioning tables it started working ok
17:22:08 I'd like someone with more raid experience to double-check my results
17:22:13 a further complication was that he had old gpt tables on the disks, but booted an i386 image
17:22:21 it's known that that confuses anaconda a lot
17:22:29 and they rejected such bugs in the past, iirc
17:23:11 we would need dlehman here, it seems
17:23:34 so I'll defer to you all on the blockeryness of this bug - but I didn't want to *not* say something about it
17:23:51 honestly, our original impression with pschindl was that this was not blocker material
17:24:10 * nirik isn't sure yet, still pondering and trying to work out how common this might be
17:24:29 I'm leaning towards -1. This doesn't appear to be a "normal" workflow.
17:24:42 I understand this is bad, but you have to try to hit it.
17:25:06 the original workflow was "take two discs that had normal installs on them, make an array, get a crash on installation start"
17:25:10 to me it seems reinstalling triggers it
17:25:16 roshi: was the orig install x86_64 and the second one i386?
17:25:19 that's not really trying
17:25:33 jreznik invited dlehman over here
17:25:38 tbh, I'm not sure - haven't installed to this box in a while
17:25:52 I'm wiping the disks now to retest with clean disks
17:26:10 the problem is that anaconda sees a new device and tries to create the fedora VG on it, but that VG already exists as part of the underlying data structure on the old device.
17:26:36 pschindl received a crash right on anaconda start, with the leftover partitioning tables (probably)
17:27:06 * danofsatx really needs a hardware budget for Fedora QA ;)
17:27:24 pschindl_wfh: right on time, we're debating roshi's raid bug
17:27:26 * pschindl_wfh is here. Everything is -1.
17:27:29 :)
17:27:38 danofsatx: Right, that would be my guess too: you'd only hit this if you tried to create a RAID array from at least one disk that previously had a Fedora install on *just* it
17:27:47 (outside of an array)
17:28:00 I also strongly suspect this has been this way for a long time.
17:28:16 sgallagh: that's exactly how I'm reading it.
17:28:21 I have the same impression as sgallagh in his last comment
17:28:54 As I wrote in the bug, I think that it is caused by the firmware, by the way it handles creation and deletion of raid volumes.
17:29:41 * nirik loathes firmware raid, but oh well, we support it.
17:30:08 we're not sure if the firmware itself should remove old partitioning tables and such from the disk when destroying the raid volume. it seems it didn't, and it confused anaconda afterwards
17:30:53 well, I think it might also be lvm's fault.
17:31:14 * kparal is not sure if he should repost dlehman's comment here
17:31:17 boot, see old lvm, start wipe to do new install, but don't properly deactivate/destroy the old lv's
17:31:20 you know, it's logged
17:31:51 he's -1 blocker
17:31:53 if you're not in #anaconda - dlehman confirmed sgallagh's theory
17:31:56 anyhow, I am leaning toward -1 blocker as it's such a corner case.
17:31:59 -1 blocker
17:32:07 -1 blocker
17:32:11 -1 from me too
17:32:27 -1 (already stated above)
17:32:33 jreznik: In fairness, it was equal parts danofsatx's theory. (Don't want to steal credit)
17:32:47 ah, sorry
17:32:52 danofsatx++
17:32:52 jreznik: Karma for dmossor changed to 2: https://badges.fedoraproject.org/tags/cookie/any
17:32:57 yeah, sgallagh just wrapped it in a better wrapper
17:32:57 cookie :)
17:33:09 for you, as my apology
17:33:22 proposed #agreed - 1224045 - RejectedBlocker - This bug is a corner case, and while it does violate the criteria it's not severe enough to block the release for Fedora 22.
17:33:24 heh...
17:33:30 ack
17:33:33 ack
17:33:34 ack
17:33:43 #agreed - 1224045 - RejectedBlocker - This bug is a corner case, and while it does violate the criteria it's not severe enough to block the release for Fedora 22.
17:34:05 that's it for blockers, I think
17:34:13 * tflink has a question before we move on
17:34:27 how common are promise raid controllers and dmraid?
17:34:46 they were pretty common long ago, but I have not seen/heard much of them in the last few years.
17:34:52 and are they worth blocking the release over if they don't work
17:35:05 Same; they seem to have fallen out of favor
17:35:15 * danofsatx checks newegg
17:35:18 fwiw, my AMD bios raid seems to be a promise variant
17:35:26 and that's a new-ish system
17:36:27 * tflink hasn't been able to get arrays on either of his promise-ish boxes to show up as installation targets with f22
17:36:47 only 2 Promise controllers on Newegg, zero reviews
17:36:48 huh.
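[Editor's note: the workaround discussed above for bug 1224045 - clearing stale LVM and partition-table metadata left over from a previous non-RAID install before re-creating the firmware RAID array - would look roughly like the sketch below. This is a hedged illustration, not a procedure given in the meeting; the device names and the partition carrying the PV are assumptions, and every command here is destructive, so verify targets first.]

    # Deactivate and remove the leftover "fedora" VG that an earlier
    # non-RAID install left on the member disks:
    vgchange -an fedora          # deactivate any active logical volumes
    vgremove -ff fedora          # remove the stale volume group
    pvremove -ff /dev/sda2       # drop the stale PV label (partition is an example)
    # Then clear remaining filesystem and partition-table signatures from
    # each disk that will join the new array (example device names):
    wipefs -a /dev/sda
    wipefs -a /dev/sdb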
17:37:25 I'd say not worth blocking over, especially when the bug is that they don't see them...
17:37:28 I haven't filed a bug yet because I just hit the second one and I figured that the first one was just wonky hardware, but I wanted to mention it
17:37:58 I'm with nirik; the worst case here is that the drives are unseen (and therefore we have no effect and cause no changes to their contents)
17:37:59 nirik: yep
17:38:00 nirik: i'd say that not showing up as installation targets is almost as bad as not working well with the arrays
17:38:03 not many reviews on Amazon, either
17:38:24 danofsatx: they're usually embedded onto motherboards
17:38:32 tflink: well, worse would be if it saw them and corrupted data on them.
17:38:34 at least that's been my experience
17:38:36 understood.
17:38:41 tflink: do you have f21 on those machines currently?
17:38:42 nirik: that's why i said almost as bad :)
17:38:47 /me used to have a highpoint controller. Those never worked reliably either.
17:39:17 nirik: kind of. f21 at least saw the array and installed to it
17:40:01 ok. I was going to suggest trying the f21 4.0.4 updates-testing kernel on it and seeing if it sees them... that would tell us if it's a kernel driver issue or not.
17:41:22 I can certainly try it
17:42:08 but i think that the bigger question is whether or not this would be worth slipping over
17:42:17 Right now we need to make a decision, though, on whether this is a blocking issue
17:42:18 yeah
17:42:54 jwb / jforbes: have you seen any reports of issues with promise raid and 4.x kernels?
17:42:56 if I'm the first person to even try this for f22, I'm not sure it's all that common, but then again, I suspect most folks have intel stuff
17:43:02 Considering a web search on "linux promise raid" returns results primarily of the form "don't use it"
17:43:14 I'm inclined to suggest that it's not worth slipping over
17:43:32 * kparal lost context, what's the topic now? that bug has already been #agreed
17:43:32 * nirik is with sgallagh
17:43:45 sgallagh: my amd SB950 bios raid is also detected as promise and has similar symptoms
17:43:45 kparal: There's no associated BZ
17:44:02 kparal: it's tflink's promise raid systems. They don't see the raid at all.
17:44:05 tflink: It's probably an OEM promise controller
17:44:16 #topic tflink's promise raid systems
17:44:30 #topic tflink's promise raid systems
17:44:35 ;)
17:44:40 sgallagh: yeah, but my point is that it could be more common than we think if it shows up as AMD bios raid instead of promise
17:44:45 tflink: I'm not saying it isn't a bug. I'm just saying that cargo-cult internet wisdom supports the "you shouldn't be attempting this" argument
17:44:50 ah
17:45:27 please... not for this bug...
17:45:29 I'm -1 to blocking on this.
17:46:04 I am also -1 to blocking on this. I'm also -1 to trying to decide on the future of promise RAID criteria in this meeting where everyone is sleep-deprived.
17:46:08 * tflink isn't trying to make this out as more than it is, but didn't want to not mention it
17:46:11 -1
17:46:24 tflink, it's mentioned ;)
17:46:36 -1
17:46:44 so, all blockers cleared?
17:46:48 -1 but yeah, let's sort out all the RAIDs later
17:47:03 * dgilmore wonders if this would have been a blocker a week ago
17:47:04 (and I understand that stuff, I agree it could be tricky)
17:47:26 dgilmore: A week ago I'd have been +1 FE, but I think still -1 blocker
17:47:27 dgilmore: it would likely be punt, get more data
17:47:51 well, this wasn't proposed yet - so nothing to do here
17:47:58 jreznik: back to you :)
17:48:21 okay, I just think we need to take the context of the push to get it out the door out of the way
17:48:27 jreznik: Stage cleared. Advance to the next level.
17:48:32 thanks roshi
17:48:39 np :)
17:48:42 it should be a blocker or not, and when in the cycle it hits should not change that
17:49:05 #topic Test Matrices coverage
17:49:09 I concur, dgilmore
17:49:19 but I'll bounce it back to QA
17:49:24 * nirik agrees too.
17:49:40 the coverage is very good, many thanks to everyone who contributed
17:49:48 dgilmore: I mostly agree, but in this case I'd have opposed it as a blocker earlier too
17:49:52 * jreznik is not sure - we have only so much energy to solve all the issues of this HW world...
17:49:55 yeah, the matrices got torn through really well :)
17:49:57 * danofsatx apologizes for not representing during this period
17:50:13 sgallagh: confirms my point
17:50:40 we have a few blank spots in the matrices, let me print them out here
17:50:58 xen is not tested
17:51:00 * danofsatx has to bow out of the meeting now.
17:51:06 hardware raid is missing
17:51:16 y'all have the con, danofsatx out.
17:51:16 kparal: is ec2 pvm tested?
17:51:16 and fcoe
17:51:39 mattdm: going through the installation matrix now. what's pvm?
17:51:52 xen :)
17:52:11 we have a few blank spots for arm, but I think the number of results is comfortable enough
17:52:23 pwhalen, do you have any concerns about the few missing fields for arm?
17:52:42 mattdm: I can test the EC2 images when I know what the AMIs are for RC3
17:52:46 we're missing local and ec2 cloud tests
17:53:09 I've performed some local tests according to kushal's instructions, but I'm not clear how much of that they covered
17:53:17 roshi: I pasted them, I thought? or you aren't sure those are right?
17:53:37 I have no way to know if those are right or not :)
17:54:00 they *look* right, but I don't really know if they are or not
17:54:09 roshi: should be what you get when searching for Fedora-Cloud-Base-22-20150521
17:54:15 kparal, there are a couple i'm working on, but also some we need to filter out for arm. i think it's well covered.
17:54:23 there are also missing results for server on arm
17:54:30 apart from that mentioned above, we're very much covered
17:54:45 roshi: in US East, ami-76dfc41e for 64-bit HVM, ami-9ed8c3f6 for PV
17:54:57 mattdm: nothing shows for me in that search
17:55:18 search never seems to work for me, so I stopped trusting it
17:55:32 I have no reason to believe they *don't* work
17:55:49 do we have someone to test xen, hwraid and fcoe while the ec2 tests are executed?
17:56:03 roshi: do you see those ami ids?
17:56:07 oddshocks: ping
17:56:07 mattdm: Ping with data, please: https://fedoraproject.org/wiki/No_naked_pings
17:56:17 * jwb high-fives zodbot
17:56:26 lol
17:56:30 when I search for the actual AMI id, yeah
17:56:33 oddshocks: ^
17:56:41 ha, take that zodbot
17:58:04 testing 64bit hvm now
17:58:53 ok, so we want to pause here for ami testing?
17:59:13 how much time will it take?
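[Editor's note: a sketch of the kind of EC2 smoke test run during the break that follows, assuming a configured aws CLI. The AMI IDs, image name, region and default "fedora" login are the ones quoted above; the key-pair name, security-group name, and the exact checks are hypothetical placeholders.]

    # Confirm the RC3 cloud AMIs by name, since the console search was
    # unreliable (region and image name as stated in the meeting):
    aws ec2 describe-images --region us-east-1 \
        --filters "Name=name,Values=Fedora-Cloud-Base-22-20150521*" \
        --query "Images[].[ImageId,Name]" --output text

    # Boot the 64-bit HVM AMI mentioned above; key pair "mykey" and
    # security group "allow-ssh" are placeholders:
    aws ec2 run-instances --region us-east-1 --image-id ami-76dfc41e \
        --instance-type t2.micro --key-name mykey --security-groups allow-ssh

    # Once the instance has a public IP, log in as the default cloud
    # image user and run a few basic checks:
    ssh fedora@<public-ip> "cat /etc/fedora-release && systemctl is-active sshd && sudo dnf -y install nginx"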
17:59:25 but yeah, we can have a coffee/tea break
18:00:01 /me considers Irish Coffee
18:00:09 both hvm and pv amis pass basic smoke tests
18:00:14 sgallagh: i suggest irish whiskey
18:00:16 it takes me 5-8 minutes to run through some smoketests
18:00:18 skip the coffee
18:00:26 dgilmore: +1
18:00:48 dgilmore: I like the cut of your jib
18:00:53 :)
18:07:46 well, with ec2 things take longer...
18:08:34 ami-76dfc41e seems to be working fine
18:08:46 let me test ami-9ed8c3f6 next...
18:09:01 * mattdm thinks that sshd is maybe not the best test service for the cloud :)
18:09:14 yeah, that needs changing IMHO.
18:09:50 it does
18:09:55 and I keep meaning to change that
18:09:59 I use nginx
18:10:12 lets me test services and installation all in one pass
18:11:29 * mattdm is old-school, uses apache httpd
18:13:49 Also, I believe 32-bit ec2 images are no longer a thing. (they aren't linked on the web page, at least, in any case)
18:14:13 that jives with what I last heard
18:14:27 we might even drop it completely for F23
18:14:34 things are looking good here
18:14:39 yeah, same here
18:14:46 great!
18:14:48 though I'd love to have more time to leave it running
18:14:56 push it a little
18:15:38 roshi: a minute costs the same as an hour :)
18:16:04 true
18:16:24 During a meeting, they feel like the same thing
18:16:40 I mean more that "I'd like cloud testing to be longer-running processes that get *used* instead of simple tests."
18:17:26 what's the average time a cloud instance lasts? I read somewhere it's a minute or do
18:17:28 so
18:17:48 depends on the use case
18:18:02 * jreznik understands what roshi means, but then we should have the same for everything - realistically, though...
18:18:22 when I was doing web development, I'd run a cloud instance in DO or something and it ran until they no longer wanted their website
18:18:25 if only we could test years of uptime for everything :P
18:18:38 well, when you put it like that...
18:18:38 we can all run a 2.0 kernel
18:19:00 dgilmore: and find a bug and try to reproduce it, waiting for years!
18:19:10 jreznik: exactly
18:19:38 ok, but time is running on - it's getting late here on a Friday... so where are we now?
18:19:49 cloud seems okay?
18:19:53 on to the decision?
18:19:54 yep
18:20:17 from what I can tell
18:20:25 ok
18:20:53 #topic Go/No-Go decision
18:21:48 I should mention that technically we're still blocked on the liveusb-creator bug, which we decided we'd remove from the docs if the fix doesn't make it
18:21:49 * mattdm gets ready to delete the pending magazine article about a delay
18:21:59 the good news is - a new fix was published and it seems to work
18:22:05 http://giphy.com/gifs/dr6toZX3D1O8
18:22:08 so it just needs a package and an update
18:22:17 kparal: ah, I forgot
18:22:29 I'm +1 to go. :) Let's ship those bits.
18:22:30 kparal: we need either a) an upstream release to package, or b) to patch it and build it
18:22:32 Suggestion: remove from docs until the update is live, then revert once it goes live?
18:22:39 we also need new spin-kickstarts, but that's just a technicality, as I understand it
18:22:41 mattdm: +1
18:22:51 same here kparal
18:22:54 http://bit.ly/1Hzjocd
18:23:04 kparal: yep. As soon as we are go we can make that... but karma would be helpful after it's submitted.
18:23:22 ok
18:23:34 nirik: I can also help with karma if you're going to do the package build
18:23:43 it would be good to push that, kernel and libblockdev all stable today.
18:23:46 * maxamillion thought he was on the hook for spin-kickstarts
18:23:55 maxamillion: you can do the package, that's fine with me. ;)
18:23:55 nirik: +1
18:24:20 nirik: indeed, then I can prepare the Everything repo and disable branched tomorrow
18:24:30 nirik: either way, I don't have any desire to attempt to lay claim to it... :)
18:24:39 nirik: really, it is a must that they go stable today
18:24:57 well, bruno usually does them, but I think there's a SOP
18:25:12 anyhow, we can coordinate all that once we are go. ;)
18:25:53 For the record, I vote "Go"
18:25:53 * mattdm is not opposed to early coordination :)
18:26:16 dgilmore: for releng? and QA too, kparal, roshi
18:26:38 looks like all the boxes are checked
18:26:47 I's dotted and T's crossed
18:26:48 except for fcoe and xen
18:26:51 releng is go
18:27:00 I believe we're fine with that
18:27:11 QA is Go
18:27:19 nirik: +1
18:27:30 proposal #agreed Fedora 22 Final status is Go by Release Engineering, QA and Development
18:27:36 ack
18:27:40 ack
18:27:56 ack
18:28:00 The FPL casts an honorary figurehead "Go" vote too :)
18:28:01 ack
18:28:11 ack
18:28:26 #agreed Fedora 22 Final status is Go by Release Engineering, QA and Development
18:28:44 hurray. Thanks everyone for all the hard work
18:28:51 * jreznik should cast the last honorary Go vote too, but it's too late!
18:29:02 everyone++
18:29:13 yep, thanks, and I'd say good night to many of us :)
18:29:37 #action jreznik to announce Go decision
18:29:44 #topic Open floor
18:30:06 /me passes out cigars
18:30:14 bad for your health
18:30:26 * roshi passes out the cigar cutter
18:30:28 possible to get f22 cds in time for SELF?
18:30:32 so are more meetings
18:30:40 *all testers simply pass out*
18:31:01 striker: When is SELF? And what do you need?
18:31:12 June 12-14
18:31:12 The ISOs are all available; they'll be unchanged from the RC3 content
18:31:25 * jreznik is not going to prolong it here longer than needed
18:31:29 ack - was looking for some nice branded ones :)
18:31:30 striker: Contact the regional ambassador
18:31:31 3...
18:31:35 sgallagh: ok
18:32:17 you are all rockstars, thank you so much for all the amazing work! :D
18:32:44 * lkiesow would like to thank all of you for the hard work!
18:33:02 * jwb notes we should start working on f23 now
18:33:03 yes! thank you so much everyone!
18:33:07 jwb++
18:33:15 * mattdm notes that we already are!
18:33:19 on to f23!
18:33:21 (yes, i am a terrible person and taskmaster)
18:33:22 jwb: f23 already started!
18:33:35 (for many people)
18:34:01 2...
18:34:16 f23 started at f22 branching
18:35:06 anyway, advance thanks to all of you working the weekend and Monday to make sure the bits get where they need to go!
18:35:09 thanks everyone for doing a stellar job testing
18:35:23 mattdm: +1!
18:35:59 as I said, I'll try to help as much as I can on Monday - if something is still needed for the announcement etc.
18:36:04 1...
18:36:17 thanks again!
18:36:22 #endmeeting