18:00:38 #startmeeting
18:00:38 Meeting started Fri May 11 18:00:38 2012 UTC. The chair is davej. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:38 Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:00:39 #meetingname Fedora Kernel meeting
18:00:39 The meeting name has been set to 'fedora_kernel_meeting'
18:00:39 #meetingtopic Fedora Kernel meeting
18:00:39 #addchair jforbes jwb
18:01:15 ok
18:01:53 going to devote most of this week's meeting to discussion about the autoqa stuff we've been talking about for a while.
18:02:26 w00t
18:02:27 jforbes: want to start this off ?
18:02:38 Sure
18:02:52 #topic auto-testing initiative.
18:02:53 There is actually quite a bit to discuss here
18:03:45 I need to check on the status of getting the "install guest kernel from koji" patch into autotest so that we can post documentation on how to set that piece up
18:04:41 At the moment though, it is of very limited use. It does a "does it boot" test, and that's about it. Frankly, we would have much more interesting results booting it on actual hardware. It will be more useful once the basic regression test suite exists, which it can call in addition to the does-it-boot test
18:05:17 To that end, I have been working on the framework for the regression tests.
18:05:43 For those wanting a bit more background on the whole project, please check out https://fedoraproject.org/wiki/KernelTestingInitiative
18:05:45 * nirik thinks that sounds pretty cool.
18:06:30 So for the regression suite, I have made some design choices which can be easily changed at this point, but I wanted to get things going quickly
18:06:59 I don't think we decided yet where the regression tests are going to live.
18:07:24 First, it is a series of shell scripts.
There is no requirement that the individual tests be written as shell scripts, but when I started with python, it just seemed unnecessarily complex for what we were doing
18:08:10 davej: no, and I had originally thought that a package would be the best place for it, kernel-tests as a subpackage of the kernel. I have since changed my mind, and think the best place would be in a git repository on fedorahosted.org
18:08:28 yeah, I see pros and cons of both.
18:08:36 adding more subpackages is a PITA
18:08:45 could add it to kernel-tools eventually
18:09:08 My reason for that preference: it isn't much more difficult to tell a user to clone a git repository than to have them install a package.
18:09:38 New tests are still relevant to older releases, so we want users running the current version of an evolving test suite
18:09:50 yeah, good call
18:10:03 given we rebase older releases, yeah
18:10:32 #link https://fedoraproject.org/wiki/KernelTestingInitiative
18:10:44 hmm, iirc fedorahosted only allows one committer though ?
18:10:51 I could be swayed away from it, but for now that is my thinking; in fact I would really like to get that repository started if everyone thinks it is a good idea
18:11:00 Umm, no, fedorahosted allows multiple committers
18:11:04 ok
18:11:08 not sure what I was thinking of
18:11:25 fedorapeople needs hoops for more than 1 committer
18:11:27 It also sets up a commit list and such
18:11:34 ah yeah, probably that
18:11:43 Right, that's why fedorahosted was a better choice than just a people page
18:12:42 anyway, the project has an scm-project commit group through accounts, and you would apply to that just like any other group
18:12:44 yeah, you can have a commit list if you like, and a trac instance, etc.
18:12:51 #action create kernel-tests git repo on fedorahosted
18:13:24 Right, so I can get that started today, just needed to see what we specifically wanted to call it since renaming doesn't work so well
18:14:01 kernel-tests gets my vote
18:14:05 works for me
18:14:26 I was rather uncreative in naming my local git tree as well, so I won't have to change it
18:14:39 jwb: any opposition?
18:14:48 if you can check in some simple test, so we can see the structure, I suspect we can start adding to that pretty quickly.
18:14:48 none at all
18:15:04 Right, so I was going to go into a bit of that right now
18:15:49 At the moment my git tree is documentation.txt, testutils.sh, runtests.sh and two directories: default and destructive
18:17:10 runtests gathers a bit of machine info (uname -r and fedora release), and starts running any test in the default directory.
18:17:43 I suspect in each of those dirs, we'd then have one subdir per test ?
18:18:02 I haven't set it up that way, though I suppose we could
18:19:01 In fact that might be a good idea, because we could be a bit more descriptive in our output without overwhelming the user
18:19:08 by destructive, are you meaning "this will destroy data just to run" or "this might crash your kernel"
18:19:51 destructive is more of a "this might destroy data". Things a regular user might be a bit more hesitant to run on their personal laptop or something
18:20:44 ok, so things like fsx/fsstress may oops ext3 or btrfs etc, and may need a fsck, but I'm wondering if that counts as destructive.
18:20:53 so if you call runtests.sh destructive, it will run everything in the default and destructive directories, but it prompts to make sure you really want to do that
18:21:12 I don't think that would count as destructive so much
18:21:25 maybe a third category for stress testing ?
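[Editor's note] The flow described above — gather machine info, iterate the tests in a category directory, keep test output off the terminal, and tally results — can be sketched roughly as follows. This is a sketch of the idea, not the actual runtests.sh; the per-test entry-point name runtest.sh and the log naming are assumptions.

```shell
#!/bin/sh
# Sketch of the runtests.sh flow: one subdirectory per test under the
# category directory, all test output redirected to a log file.

run_category() {
    dir="$1"
    logfile="test-$(date +%Y%m%d).log"
    # Gather basic machine info, as described in the meeting
    uname -r >> "$logfile"
    cat /etc/redhat-release >> "$logfile" 2>/dev/null || echo "unknown" >> "$logfile"
    pass=0; fail=0; skip=0
    for t in "$dir"/*/runtest.sh; do
        [ -f "$t" ] || continue
        # Everything the test prints goes to the log, not the screen
        sh "$t" >> "$logfile" 2>&1
        case $? in
            0) pass=$((pass + 1)) ;;   # success
            3) skip=$((skip + 1)) ;;   # test skipped itself
            *) fail=$((fail + 1)) ;;   # anything else is a failure
        esac
    done
    echo "$pass passed, $fail failed, $skip skipped"
}
```

A run of the default category would then just be `run_category default`.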
18:21:36 sure, we can do that easily
18:21:57 I kinda figured in the end, we might have quick, default, destructive and possibly a few more
18:22:13 ok, let's get default up and running first, and we can decide from there where to go
18:22:34 perhaps before we commit tests, we make sure we have consensus among us which category something belongs in.
18:22:50 Right, so each test is run, and all output from the test is redirected to test-$date.log
18:23:10 the test returns 0 (success), 3 (skipped) or anything else (fail)
18:23:31 ok
18:24:03 The expectation there is that if your test requires certain hardware, it can skip the test if the module is not loaded. I even threw in a quick function to verify a module is loaded, or exit 3 if not
18:24:27 Anything your test outputs will never be seen on stdout or stderr
18:24:54 can we add a --verbose mode?
18:24:57 those go to the log, so the user can quickly see the list of results
18:25:04 Sure, we can do that
18:25:14 by which i mean, i'll submit a patch to do that eventually
18:25:37 Well, it's not any difficulty to add it quickly
18:26:42 My only real concern with this setup is that the tests are executed right now by iterating the directory. Which means order can change.
18:27:11 so if people are scripting things to scrape the log for regressions, it can be thrown off?
18:27:37 perhaps that randomness is desirable ? I can't think why two tests would be dependent on each other.
18:27:39 It could be, but less likely
18:28:08 jforbes, then what's the concern?
18:28:27 Certainly they should not be dependent. I wasn't really sure if there was one or not, which is why I brought it up
18:28:50 if tests don't clean up after themselves, there can be dependency issues
18:29:06 jwb: why wouldn't they?
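[Editor's note] The "verify a module is loaded, or exit 3" idea above could look like the sketch below. The helper name module_loaded is hypothetical — the real function lives in testutils.sh — but the exit-code convention (0 pass, 3 skip, anything else fail) is the one described in the meeting.

```shell
#!/bin/sh
# Sketch of a skip helper for hardware-dependent tests.

module_loaded() {
    # True if the named module appears in the modules list
    # (second argument defaults to /proc/modules)
    grep -q "^$1 " "${2:-/proc/modules}"
}

# A test script would use it like this:
#   module_loaded kvm || exit 3     # skip: module/hardware not present
#   do_the_actual_check || exit 1   # fail
#   exit 0                          # pass
```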
18:29:20 i'm just saying people can be sloppy :) had a loopback test running after a vlan one once that failed because it left the vlan up
18:29:51 Well, that's why commits get reviewed
18:29:54 just something to watch for if we get weird failures, that's all :)
18:29:58 Indeed
18:31:01 So the logfile is basically enough information for someone to be able to post. Eventually it might even be good to have options for users to be able to post the log directly to a bug or similar
18:31:56 And I do not have it collecting any hardware information at this point, though it might be very beneficial. That gets into the squidgy area
18:32:29 yeah, I think that might be a whole separate project tbh
18:32:59 nathanm is working on it i think
18:33:01 Right, the log bit was mainly to keep the screen clean.
18:33:12 But it does make it very easy to auto-attach to a bug
18:35:10 It is good to note that because we exec each test and depend on return codes for results, it really doesn't matter what the individual tests are written in, though it might be good if we didn't expect users to have to compile something, since some people may not have development tools installed
18:35:48 Right now we have some proposed tests on https://fedoraproject.org/wiki/KernelRegressionTests though there are many more that we need.
18:36:04 hmm, that kinda limits the tests though, unless we have every test separately packaged to be yum installable.
18:36:19 or we ship binaries
18:37:04 And I will update that page as well as create the KernelRegressionTestGuidelines page as soon as the git repo is set up. They are pretty quick on that
18:38:21 Hmm, yeah. I suppose so. Well, it is certainly not out of the question as long as we make it very simple for users
18:39:09 perhaps we only do a subset of tests if no gcc is found, for eg.
18:39:36 Sure, we can return a skip on that one too
18:40:17 this probably isn't going to be perfect on the first pass, so we can refine it as it evolves.
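[Editor's note] The "skip if no gcc" idea above fits the same exit-code convention: a test that must compile something can return the skip code (3) when development tools are missing. A minimal sketch — the function name require_cmd is an assumption, not part of the real testutils.sh:

```shell
#!/bin/sh
# Sketch: skip (exit 3) when a required tool is not installed.

require_cmd() {
    command -v "$1" > /dev/null 2>&1 || exit 3
}

# A compile-needing test would start with:
#   require_cmd gcc
#   require_cmd make
```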
18:40:36 And we want to do a subdir for each test, right?
18:40:37 like it's open source or something
18:40:40 heh
18:40:53 jforbes, think so.
18:40:56 jforbes: yeah I think that will make it easier to keep them separated.
18:41:14 it's only a matter of time otherwise before we get two upstream tests using the same filenames
18:41:14 can use symlinks if we want a test run in both 'default' and 'stress' or something?
18:41:34 Sure, makes sense
18:41:48 jwb: Actually, I had set it up as supersets
18:42:10 so destructive actually runs default + destructive, stress would run default + stress
18:42:32 though again, not tied to it, just seemed simpler than managing links and such
18:42:53 probably good enough. running destructive + stress just sounds bad
18:44:08 So that's pretty much the summary of where we are. Like I said, I will have it documented on the wiki as soon as I get the repos set up
18:44:18 And I will make the request for the repos as soon as we are done here
18:44:38 * nirik can get that created in a bit after it's filed. ;)
18:44:46 nirik: thank you sir!
18:44:50 nice.
18:45:08 Any questions, suggestions, feedback?
18:45:50 random down-the-road thought...
18:46:03 is this upstreamable? or would they not be interested in something like this?
18:46:35 as in the kernel proper?
18:46:40 yeah
18:46:57 or even just a separate project hosted on kernel.org that we can get other distros involved in
18:46:59 there's a tests/ directory now, but i imagine a lot of the stuff we'd run would be grabbed from existing tests
18:47:05 I have thought about that, and not sure what the reception would be
18:47:10 pjones, that might certainly be doable
18:47:15 my big fear is that it turns into ltp.
18:47:17 I do have a kernel.org account, and could put it there instead
18:47:43 just something to think about...
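[Editor's note] The superset behaviour described above (destructive runs default + destructive; stress would run default + stress) is simple to express without symlinks. A sketch — categories_for is a hypothetical helper name, and "stress" was only proposed, not yet implemented:

```shell
#!/bin/sh
# Sketch: every non-default category run includes the default tests.

categories_for() {
    case "$1" in
        default)     echo "default" ;;
        destructive) echo "default destructive" ;;
        stress)      echo "default stress" ;;
        *)           echo "default" ;;
    esac
}

# runtests.sh would then iterate:
#   for cat in $(categories_for "$1"); do
#       ... run every test under "$cat"/ ...
#   done
```

This is the design choice mentioned in the discussion: supersets keep the directory layout flat, at the cost of not supporting arbitrary category combinations.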
I'm sure it may be more known after you have stuff working
18:47:48 jforbes, i do as well, so no big deal either way to me
18:47:55 I know ubuntu have their own infrastructure like this, but all the tests are very hardcoded with ubuntuisms
18:48:12 stuff like "if running on wonkywombat run this test"
18:48:27 yeah. eventually we'll tie into AutoQA, but i think our stuff will be mostly stand-alone
18:48:29 At this point, the only thing specifically tying it to Fedora is a check of redhat-release. If that fails, it just reports unknown in the logs. Very easy to change
18:48:50 this stuff will always be stand-alone though, autoqa/autotest will just have a test which runs it
18:49:54 I don't see anything that would keep other distros or random developers from using it. In fact it might be very nice to get developers adding their own regression tests to it
18:51:22 In the very worst case, if we need to add some specific fedoraisms to it, we could always add them on fedorahosted, and just keep kernel.org as the main dev target
18:52:27 All future bits to think about though. At this point it wouldn't be good to do anything to discourage it, but we also need to focus on getting this moving for Fedora
18:53:22 One other concern is keeping runtime as low as possible. The longer it takes to run the default test suite, the less likely we are to see a critical mass of testers
18:54:28 Any other questions/concerns/ideas?
18:54:37 maybe making a list of concerns like this on the wiki page is a thing to do (kinda like the feature process), and we'll see if we can brainstorm solutions
18:55:01 also, need to do some PR on this i think
18:55:16 blog, tweet, whatever to get people to at least run it
18:55:31 email to the devel list would be good
18:55:32 jwb: Yes, often even
18:56:36 ok, call this done ?
18:56:49 sounds good
18:57:19 nice
18:57:25 #endmeeting
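[Editor's note] The redhat-release check discussed above — the only Fedora-specific piece, with an "unknown" fallback so other distros still work — is only a line or two. A sketch; the function name fedora_release is an assumption:

```shell
#!/bin/sh
# Sketch: report the Fedora release string if present, otherwise
# "unknown", so the suite degrades gracefully on other distributions.

fedora_release() {
    cat /etc/redhat-release 2>/dev/null || echo "unknown"
}
```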