15:00:12 #startmeeting Fedora QA Meeting
15:00:12 Meeting started Mon May 17 15:00:12 2010 UTC. The chair is jlaska. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:16 #meetingname fedora-qa
15:00:16 The meeting name has been set to 'fedora-qa'
15:00:29 #topic Gathering crit-mass
15:01:06 * kparal jumps
15:01:40 kparal: howdy
15:01:50 anyone else lurking for the QA meeting?
15:01:54 * maxamillion is here-ish
15:02:02 * wwoods appears in a puff of smoke
15:02:10 we re-changed $dayjob meeting and now I have internets
15:02:12 maxamillion: wwoods: heyo!
15:02:40 adamw Viking-Ice robatino?
15:02:50 I'll wait another minute or two before starting
15:03:15 * Viking-Ice rides in on ash cloud..
15:03:28 Viking-Ice: heh
15:04:15 okay, let's get started
15:04:25 #topic Previous meeting follow-up
15:04:43 This is easy, I've got nothing listed from last week for follow-up
15:04:48 #info no action items from last week
15:04:52 anything I missed?
15:05:32 well, if so ... raise it during open discussion
15:05:39 #topic Fedora 13 RC#3 test status
15:05:56 No doubt you've all read/heard about the 1 week slip
15:06:09 #info Fedora 13 RC#3 is the current release candidate
15:06:29 Test results available at http://fedoraproject.org/wiki/Category:Fedora_13_Final_RC_Test_Results
15:07:12 assuming I'm counting correctly ... we have 5 tests remaining on the installation matrix (4 - i386, 1 - x86_64)
15:07:33 The desktop matrix is empty, which means 32 outstanding tests
15:07:58 I'm unclear whether test results from RC#2 can apply against RC#3
15:08:13 nonetheless, I'll try to pick off a few GNOME tests later today
15:09:09 #info Because of all this testing, we have a queue of 14 CommonBugs? that need to be addressed
15:09:18 #link https://bugzilla.redhat.com/buglist.cgi?cmdtype=dorem&remaction=run&namedcmd=CommonBugs%3F&sharer_id=141215
15:09:45 #action jlaska and adamw to empty the CommonBugs? queue
15:09:53 * jeff_hann says hello
15:09:57 jeff_hann: welcome :)
15:10:05 that's all I have on my radar for F-13-RC3 testing
15:10:06 :)
15:10:54 bug#580226 was moved from CLOSED/ERRATA -> ASSIGNED earlier today. But the reporter's problem was different from the failure originally reported.
15:11:04 Any other issues that need to be on the radar?
15:11:49 The go/no_go meeting is scheduled for tomorrow. However, there is no reason we can't give the "GO" sooner if test results are completed and all good
15:12:08 if no other F-13-RC3 issues, I'll move on ...
15:12:52 okay, next topic
15:12:54 #topic Fedora 13 QA Retrospective
15:13:30 You've heard me talk about it, perhaps 5 million times already ...
15:13:53 #info Please contribute your thoughts/concerns on Fedora 13 testing to https://fedoraproject.org/wiki/Fedora_13_QA_Retrospective
15:14:21 Once F-13 is behind us, I plan to organize the feedback and work with folks to come up with some tangible improvements for Fedora 14 testing
15:14:35 * jskladan attends the meeting from bed :)
15:14:52 jskladan: ouch :( welcome!
15:15:07 okay everyone quiet now, no more talking about jskladan :)
15:15:25 :-D i knew so :))
15:15:36 hehe
15:15:45 alright, the next topic was a carry-over from last week
15:15:52 #topic QA Summer of Code
15:16:08 we already missed the deadline
15:16:14 Last week, kparal put out a request for feedback from anyone with experience or interest in a QA summer of code
15:16:21 kparal: the deadline was May 13, right?
15:16:25 right
15:16:51 kparal: I take it no feedback was received?
15:17:03 no
15:17:07 booo :(
15:17:54 well, the students have until this week to propose
15:17:58 well, if nothing else, we have a landing place for QA project ideas
15:18:13 #link https://fedoraproject.org/wiki/QA:Summer_of_Code_Ideas
15:18:15 so if there were a student to do the proposal, it could still be on any of the idea(s)
15:18:35 we are looking strong for a bigger, better run this Sep to Feb
15:18:43 (southern hemisphere summer)
15:18:45 well, we are short of mentors I'm afraid
15:19:17 quaid: oh, so there's another round planned Sep - Feb
15:19:18 ok, you can watch how this summer goes and see if it helps you attract mentors
15:19:23 jlaska: yes
15:19:25 but we can take it as a test run this time, and have something better prepared for the next time
15:19:54 and I am working with RHT partner marketing people to put our proposal in front of many partners who may be interested in being sponsors, so more bigger betterness :)
15:20:01 oooh
15:20:01 kparal: +1
15:20:05 quaid: maybe Dec - Feb?
15:20:11 that's about what the _whole_ project is like
15:20:15 kparal: for coding, yes
15:20:21 I see
15:20:39 I think the preparations -- open for ideas, etc. -- have to start in Sep so we aren't so rushed this time :)
15:20:44 we haven't worked on the schedule yet, though.
15:21:12 jlaska: so, if we have friends who work Fedora QA and work for companies that benefit from QA advancements ...
15:21:40 let's see if we can work that company as a sponsor from multiple angles.
15:21:57 * quaid *cough* s/390 *cough*
15:22:10 quaid: hah! I think you coughed up a mainframe
15:22:45 quaid: I can start asking around in that space and connect you with the right people?
15:23:28 kparal: anything we need to track here, or anything that you'd like to see completed in advance of this next round?
15:23:29 sure
15:23:42 and I'll be talking through all the partner level at some point ~August
15:23:44 perhaps I can complete the idea stubs I posted but haven't fleshed out yet
15:23:50 good call
15:24:15 jlaska: I think that's too far ahead, to create an action point several months forward :)
15:24:42 kparal: hah, no kidding :)
15:24:58 okay then ... we'll drop this topic for now ... and hopefully pick it back up again at the end of the summer
15:25:02 * jlaska makes a note
15:25:45 kparal: quaid: thanks
15:26:24 alright ... it's that time again folks
15:26:27 #topic Open discussion -
15:26:35 the problem is making the QA tasks seem interesting, when QA is boring by design
15:26:38 heh
15:27:02 wwoods: exactly, I don't think we're looking for people to push buttons and run through pre-planned test execution
15:27:20 this should be fun project ideas that will either grow or facilitate our QA community in some way
15:27:29 yeah obviously. there's some interesting stuff but none of it is *sexy* really
15:27:46 that's the Curse of QA: the people are sexy, the code isn't
15:27:53 it is our burden to bear.
15:27:56 well, let's make it sexy!
15:28:04 heh
15:28:06 there's a blog title in the making here
15:28:16 anyway, so the bodhi watcher
15:28:16 let's give away hot chicks instead of money as a reward
15:28:35 I've thought of ways to bring in more testers and I've talked a little with jlaska about it, but now is a good time to bring it up
15:28:38 kparal: I assume you're talking about this http://letthemeatlentils.files.wordpress.com/2009/12/20_kent_chicks.jpg
15:28:48 #topic Open discussion - make QA sexy
15:28:50 jlaska: almost D:
15:29:09 I've thought about working with the design team to come up with "special" swag that we make and give away to testers who devote their time
15:29:19 http://git.fedorahosted.org/git/?p=autoqa.git;a=blob;f=hooks/post-bodhi-update/watch-bodhi-requests.py;hb=wwoods
15:29:27 err oops
15:29:32 disregard for the moment!
15:29:41 one thing I was thinking of is a modification to the Fedora splat t-shirt that says something like "construction crew" on the back
15:29:47 maxamillion: hah nice
15:29:50 * jlaska notes wwoods is next in queue for the microphone
15:30:27 maxamillion: do we have a way to gather data on bodhi karma feedback?
15:30:28 we should have some small rewards, yes
15:30:32 or maybe a t-shirt that has something to do with the Fedora logo squishing a bug
15:30:39 jlaska: I'm not entirely sure
15:30:44 #idea small rewards process for QA volunteers
15:30:58 there are other measures of QA awesomeness
15:31:02 maybe show the most active people in some "top statistics page" of QA contributors?
15:31:03 #idea work with design team for Fedora branded QA swag
15:31:16 * ianweller starts lurking at the sound of statistics
15:31:17 wwoods: definitely, that was just one I've been missing recently
15:31:19 I'd love it if we had some way for devs to hit a button for "the person who reported this bug did an awesome job"
15:31:29 ianweller: you have a unique fascination with statistics
15:31:46 wwoods: that'd be nice
15:31:52 wwoods: ah, right ... so a way for maintainers to say "this person did a nice job"
15:32:11 collecting stars?
15:32:13 yeah. I mean it's a little nervous-making because it introduces a gameable element
15:32:15 #idea maintainer-driven kudos -- a way for developers/maintainers to recognize a quality job
15:32:27 wwoods: that's fine ... all stats are like that
15:32:32 oh, true, good point
15:32:39 which I think is why we'd need multiple vectors on this stuff
15:32:51 anyway: in my view, doing a good job of actually tracing a bug to its cause
15:32:53 bugs, QA wiki namespace contributions, mailing list assistance ...
15:32:58 wwoods: right on
15:33:06 is possibly the most valuable thing a QA contributor can do
15:33:39 I think we've got some of this captured on the retrospective already ... so this is good
15:33:40 wwoods: I think that depends on the situation and what the bug is
15:34:08 (note that "QA contributor" here is a slightly broader audience than, say, "QA developer", which includes those of us working on test frameworks and other craziness)
15:34:31 maxamillion: well, yeah. I mean there's plenty of other very, very valuable contributions and not every bug can be traced like that
15:34:40 but that is definitely a skill we want to encourage and reward
15:35:05 if there's one thing I hear from developers *constantly* it's: "god I wish the people reporting these bugs could actually explain what was going on"
15:35:06 I'll update the retrospective after this meeting with some of these ideas
15:35:09 wwoods: agreed
15:35:14 but feel free to note things yourself (all)
15:35:45 * jlaska looking at a new wwoods entry around improving bug reporting... yay!
15:36:04 #topic Open discussion -
15:36:10 who is next?
15:36:28 me!
15:36:37 wwoods: autoqa?
15:36:50 #topic Open discussion - autoqa update from wwoods
15:36:50 yeah, specifically the post-bodhi-update hook
15:36:58 so here's that link again
15:37:01 wwoods: take it away!
15:37:03 http://git.fedorahosted.org/git/?p=autoqa.git;a=blob;f=hooks/post-bodhi-update/watch-bodhi-requests.py;hb=wwoods
15:37:26 that's the current post-bodhi-update watcher. It's not quite finished, but it's 95% there
15:38:02 The basic purpose of this thing is to fire autoqa tests whenever a maintainer uses bodhi to request an update be moved into the testing or stable repos
15:38:23 unfortunately current bodhi has some limitations that we need to work around, so there are a couple things to keep in mind
15:38:48 this watcher *can* miss an update, if rel-eng pushes it into testing/stable before the watcher notices it
15:39:12 but if rel-eng force-pushes the update without waiting for test results, they probably have a good reason for it
15:39:31 so they'll be responsible for its safety in those cases anyway
15:39:47 * jlaska goes #info crazy
15:39:51 lol
15:40:10 #info The basic purpose of [post-bodhi-update] is to fire autoqa tests whenever a maintainer uses bodhi to request an update be moved into the testing or stable repos
15:40:48 #info Some limitations exist for the watcher (see minutes and link above for details)
15:40:57 second problem: unfortunately critpath updates will appear *without* their target repo listed
15:41:25 originally (as the comments say) I was going to just fire the tests anyway, and we'd just have to test against both sets of repos
15:41:36 wwoods: what about the future changes to bodhi that mean that all updates should be accepted "in sets"? will that require heavy changes in your script?
15:42:04 kparal: not sure what you mean by "in sets", but it shouldn't require changes to the watcher
15:42:14 the watcher's job is to notice when the contents of that set change
15:42:33 the tests themselves can choose to either a) test all the updates in the set of proposed updates,
15:42:40 or b) test the individual new update(s)
15:43:13 that sounds cool, I will have to check it out
15:43:26 e.g. depcheck needs to test the entire set of proposed updates
15:43:34 same for package sanity
15:43:45 right. but a simpler test that only needs to inspect the package
15:43:53 e.g. a test that checked to make sure the package is signed with the correct key
15:44:05 could just test the individual package
15:44:08 * jlaska would like that test
15:44:15 * jlaska package signature test
15:44:52 so, when can we expect the final release of the watcher? :)
15:44:54 anyway, to finish my thought: critpath is a problem, and I'm not sure how to solve it
15:45:20 do we have to wait for some bodhi changes?
15:45:38 there are three choices:
15:45:42 1) fire the test with the target repo unset, and let the tests decide what to do
15:46:07 2) fire two tests, one for each target repo, and let the users/testers decide which results they care about
15:46:33 3) ignore the update until it has a target repo (i.e. when it gets the required karma), and then fire the test as normal
15:47:02 my question is why doesn't it have the target repo set from the beginning?
15:47:15 is this a bodhi-1.0 design decision?
15:47:19 long story short: because current bodhi is kind of dumb, and they're rewriting it, and that's gonna take a while
15:47:35 so this watcher is similarly kind of dumb, and will be rewritten when bodhi2 appears
15:47:48 I don't like either 1) or 2), and I'm not sure how 3) works
15:48:06 this is basically just supposed to be a "good enough" solution to get us something to work with until the stuff we need goes into bodhi 2
15:48:16 As a proventester, I'd want to see these test results before applying my karma. So I'm not sure about option #3
15:48:44 especially for critpath packages, right
15:48:44 kparal: we just keep a list of updates that appeared with 'request == None', and keep checking until they have a valid 'request' value
15:48:54 yeah, that's the problem
15:49:12 so basically, this watcher is supposed to be temporary, so we just need to recognize and accept its limitations
15:49:14 wwoods: is there an option that leaves the tests unchanged when we move to the new bodhi watcher?
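[Minutes note] The pending-update tracking wwoods describes above could be sketched roughly as follows. This is an illustrative sketch only, not the actual watch-bodhi-requests.py code: the function name, the dict shape, and the return values are all assumptions made for the example.

```python
# Sketch of the watcher's tracking logic: critpath updates can show up
# with request == None, so they go on a watch list and are re-checked
# until bodhi reports a target repo. Pending updates that vanish from
# the feed are the "missed update" case (e.g. a rel-eng force-push).
# All names here are illustrative, not the real autoqa API.

def triage_updates(updates, pending):
    """Split a bodhi update feed into test-ready and still-pending sets.

    updates -- iterable of dicts like
               {'title': ..., 'request': 'testing' | 'stable' | None}
    pending -- set of titles previously seen with request == None
    Returns (ready, still_pending, missed): ready maps title -> target
    repo, still_pending is the new watch list, and missed holds
    previously pending titles that disappeared before a target repo
    was ever seen.
    """
    ready = {}
    still_pending = set()
    for upd in updates:
        if upd['request'] in ('testing', 'stable'):
            # target repo known: fire the autoqa tests for this update
            ready[upd['title']] = upd['request']
        else:
            # no target repo yet (critpath case): keep watching it
            still_pending.add(upd['title'])
    # pending updates no longer in the feed were pushed or dropped
    # before the watcher noticed -- rel-eng owns those
    missed = pending - still_pending - set(ready)
    return ready, still_pending, missed
```

Under choice #2 discussed below, the `ready` branch for a critpath update would instead fire one test per candidate repo rather than a single test against a known target.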
15:49:21 and try to choose whatever is the least bad option
15:49:28 and work toward making it work correctly later
15:49:38 yes, choice #2
15:49:45 or #3, actually
15:49:57 but I don't like delaying testing until after karma is applied
15:50:10 which is why #2 was my proposed solution, even though I don't like it
15:50:38 +1 on #2 -- noting your same regrets
15:50:52 I think #2 is the least bad option
15:51:09 if anyone has a better idea I'd love to hear it, but this is what we have to work with right now
15:51:38 anyway, given that decision, I've got a couple more things to finish up and we should have the first working version in a day or two
15:51:51 ok, great
15:52:03 so give it the rest of the week to shake some bugs out, and I'd say we'll have a functioning hook next week
15:52:13 wahooo!
15:52:18 still need to write the autoqa hook code and the test templates
15:52:30 and a hello-bodhi proof-of-concept test
15:52:40 but yeah, end of next week is my guess
15:52:42 Our watchers need to have icons associated with them. Post-bodhi-update would be a hydra
15:53:05 wwoods: good stuff, thanks for the update
15:53:16 anything else on autoqa post-bodhi-update?
15:54:07 #topic Open discussion -
15:54:16 anything not already covered?
15:54:31 If not, I'll close out the meeting in 2 minutes
15:56:08 30 seconds ...
15:56:18 * jlaska queues the Jeopardy theme music
15:56:47 Okay gang, thanks for a great meeting!
15:57:02 As always, I'll follow up to the list with minutes
15:57:10 feel free to raise additional topics not discussed there as well
15:57:13 #endmeeting