17:00:25 #startmeeting Testing Working Group
17:00:25 Meeting started Thu Sep 22 17:00:25 2016 UTC. The chair is gundalow. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:25 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:00:25 The meeting name has been set to 'testing_working_group'
17:00:31 #chair gundalow mattclay
17:00:31 Current chairs: gundalow mattclay
17:00:37 #topic Roll call
17:00:45 Who is here?
17:00:46 Who is new this week?
17:00:47 * caphrim007 is here
17:00:50 * mattclay waves
17:00:51 * misc is
17:01:08 * mmaglana is here
17:01:21 i'm new
17:01:22 lots of people, excellent :)
17:01:52 mmaglana: Welcome :)
17:01:59 thanks!
17:03:01 #topic Writing Fully-Tested Ansible Modules for Fun and Profit
17:03:27 mmaglana: Could you please introduce yourself, then we can go into your review?
17:04:01 thanks gundalow. i'm Mark Maglana, QA/Release Engineer from Dimension Data.
17:04:51 The current channel topic is my talk for AnsibleFest Brooklyn and i'm submitting it for everyone's scrutiny.
17:05:18 \o/
17:06:20 Which can be found here: https://www.relaxdiego.com/2016/06/writing-ansible-modules-with-tests.html
17:06:31 beat me to the punch :)
17:06:37 hehe
17:07:42 So if people could cast their expert eyes over it I know Mark would find that useful
17:08:13 * MichaelBaydoun is now present
17:08:22 I think it's good that you've included the git basics, and that it's a full end-to-end walk-through
17:09:01 yesterday, i added something near the end about getting additional insights from a code coverage report.
17:09:37 mattclay and I were talking about it earlier and he made the good point that once the three git repos are merged it will need some tweaking, though that will be post AnsibleFest NYC
17:10:08 gundalow: true. the prepping and PR part will be a lot simpler then.
17:12:39 yup :)
17:13:51 Have other people taken a look, what do they think?
17:14:19 it is long :)
17:14:24 heh.
17:14:30 mmaglana: too long
17:14:46 mmaglana: Have you timed the talk?
17:14:48 but that's quite good, it covers everything to know
17:15:15 i'm going to skim through certain parts to keep it under 30
17:15:21 30 mins. to be specific ;)
17:15:50 for example, the workspace prep will have to be slideware and i will just refer the audience to the blog post as their reference.
17:15:56 cool, well you can direct people to the article online for extra info
17:16:05 exactly
17:16:33 would giving a plug for the module checklist be relevant? or is that more specific to the developing part and not the testing part?
17:16:44 caphrim007: Good point
17:16:56 The checklist is for *both* the creator and the tester
17:17:01 oh, good point caphrim007
17:17:09 i can put that in
17:17:17 or at least reference it
17:17:26 yeah you dont need to go into detail on it
17:17:28 _shaps_: I'm not sure the unittest for argument_spec is a good thing to promote.
17:17:30 but at least make mention of it
17:17:34 err sorry _shaps_.
17:17:40 that was just a general comment.
17:19:11 mmaglana: I'd say just reference it
17:19:20 yup. noted. :)
17:19:53 It's on our list to completely redo developing_modules.html
17:21:14 Cool, thanks everyone
17:21:29 Might want to have a little side-bar about different types of testing.
17:21:33 If anyone has any other thoughts there is a link at the end of the post to submit feedback
17:21:51 abadger1999: such as?
17:21:53 also hi :)
17:22:00 even though we use the unittest framework, the example test is more an acceptance test than a unittest.
17:22:16 (not wrong -- I'm not enamoured with unittests for everything ;-)
17:23:31 abadger1999: i might have to avoid test taxonomy though as that's a whole topic in itself. i'll leave it to the audience what kind of test they want to do and just show them the tools instead.
17:23:54 abadger1999: i mean, knowing the difference is important, certainly. but 30 mins won't be enough to cover that too. :-)
17:24:41 mmaglana: well... right now we don't really have any acceptance tests.
17:25:07 abadger1999: Anything else we should add py3 wise?
17:25:08 It's pretty much just unittests or full integration tests.
17:25:10 but i'll be happy to share links about the different kinds of testing if there's one you can recommend.
17:25:33 Another option would be not to give that example?
17:26:17 abadger1999: problem with not sharing the example though is that it won't give the audience much in terms of how to really get started.
17:26:32 For most modules, testing the main() function is likely less valuable than testing individual functions (and splitting poorly written main functions into multiple helper functions)
17:26:42 mmaglana: right -- but give a different example.
17:27:09 mmaglana: write two test cases for test_fetch_data and test_write_data
17:27:11 abadger1999: oh actually, the main() tests are just the beginning. you'll see further down actual unit tests of individual functions
17:27:14 skip main
17:27:20 abadger1999: i have those two tests
17:27:26 mmaglana: Yeah -- I'm just saying testing main confuses the issue.
17:27:39 abadger1999: ah. ok. i see your point now. :)
17:27:57 Cool.
17:27:58 in a well written module main should be limited to just option parsing and validation
17:28:35 yeah, lighter weight main() is good... it's really the dispatcher.
17:28:39 got it. alright, i can do that :)
17:29:11 abadger1999: I wonder if we can get some complexity testing on the main function
17:29:18 in the modules that take state, it should usually just be if state == 'Foo': Foo(with_proper_arguments)
17:29:32 minimal main FTW
17:29:44 linuxdynasty: Hello you :)
17:30:06 hello (in and out on my phone as I am in a company wide meeting)
17:30:06 mmaglana: might be worth making that point in the presentation
17:30:14 linuxdynasty: cool :)
17:30:16 mmaglana: Cool. One other thing to change. Separate test methods for testing fetch_data and write_data.
17:30:39 gundalow: certainly. i'm going to modify that part to make sure main() is minimal and also point out why it should be minimal
17:31:02 abadger1999: you mean separate them into their own test*.py files?
17:31:50 mmaglana: Oh wait, sorry -- I see you broke them out later in the document.
17:32:03 also maybe a bullet point that functions/methods should be small so that each function can be tested easily
17:32:14 linuxdynasty: +1
17:32:16 mmaglana: Just to separate methods.
17:32:31 abadger1999: got it. :)
17:32:43 In your document, you did do that under: "Let’s Implement fetch_data()" so you're good there.
17:33:14 and "Let’s Implement write_data()" too
17:34:11 Gary Bernhardt has some good talks on testing that are good for distilling how to write testable functions... The Boundaries one is a good one for ansible module writers to learn: https://www.destroyallsoftware.com/talks/boundaries
17:34:33 abadger1999: nice!
17:34:34 That might be the ruby version... Let me find the pycon version
17:34:42 i don't mind ruby ;)
17:34:56 I'll link you to it in #ansible-devel later
17:35:08 thanks!
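To illustrate the suggestion above of writing separate test methods per helper rather than driving main(), here is a minimal, hypothetical sketch. The module name (my_module), the helper signatures, and the mocked open_url transport are assumptions for illustration, not code from the blog post or from Ansible itself.

    # test_my_module.py -- hypothetical example of one test method per helper,
    # skipping main() entirely. Names are placeholders.
    import unittest
    from unittest import mock

    import my_module  # hypothetical module under test


    class FetchDataTestCase(unittest.TestCase):

        def test_fetch_data_returns_parsed_payload(self):
            # Mock the transport so only fetch_data()'s own logic is exercised.
            with mock.patch.object(my_module, 'open_url') as open_url:
                open_url.return_value.read.return_value = b'{"answer": 42}'
                result = my_module.fetch_data('https://example.com/api')
            self.assertEqual(result, {'answer': 42})


    class WriteDataTestCase(unittest.TestCase):

        def test_write_data_sends_payload_once(self):
            with mock.patch.object(my_module, 'open_url') as open_url:
                my_module.write_data('https://example.com/api', {'answer': 42})
            # Assert on the interaction rather than on main()'s exit behaviour.
            self.assertEqual(open_url.call_count, 1)


    if __name__ == '__main__':
        unittest.main()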
17:35:26 so with regards to main(), just to make sure i got it...
17:35:54 it should just initialize AnsibleModule and then dispatch another function to do the module user's bidding?
17:36:07 +1
17:36:10 +20
17:36:11 cool
17:36:12 :P
17:36:52 alright. in that case, there really is no point unit testing main() as the unit test will just _mostly_ look like main itself.
17:37:02 correct
17:37:10 also this is my personal opinion, but I do not like using module.fail_json() inside of functions/methods; I would rather return it to main and raise it there. (I am unsure how the core team feels about that though)
17:38:49 linuxdynasty: i tend to go that route too
17:39:01 Core team likes good code
17:39:13 I have a love-hate relationship with fail_json() inside of functions/methods.
17:39:25 Sounds like a sensible guideline, but I guess it really depends on the case
17:39:27 fail_json() is really equivalent to python's raise Exception().
17:39:45 But a vastly inferior version of raising an Exception.
17:40:03 no, it's equivalent to sys.exit(json.dumps(msg))
17:40:15 so would the accepted practice be for the functions to raise an exception and for main to catch and fail_json it?
17:40:29 bcoca: You're coming at it from a functional point of view... I'm coming at it from a use point of view.
17:40:36 So to clarify:
17:40:42 abadger1999: programmer's point, user doesn't see anything
17:41:19 mmaglana: my issue with catchalls is that you tend to get generic unhelpful messages; i'm fine as long as you get a specific message that can help the user determine what is wrong as clearly as possible
17:41:36 Yes, that's a very good point
17:41:39 i can second bcoca's point
17:41:40 It is often used in place of (and thus shows we have a need for) raising an exception. What we have is a method that unconditionally exits the program with an error message.
17:41:49 - action you were attempting, - cause of error, if possible, - possible solutions
17:42:18 abadger1999: not the same, exceptions can be captured, exit_json is meant for 'termination'
17:42:29 So at some point we need to design something that fits the needs better without the problems that we have now.
17:42:46 i'm still not clear on what the problem is other than 'distaste'
17:43:00 bcoca: exit_json() performs termination. People are using it in place of raising an exception.
17:43:16 what I try to do when I am writing modules is to catch the exception, check if an exception occurred, and return it back to main and raise it there.
17:43:27 What it is meant for is gray, as "de facto" usage is fighting with "what it can actually do"
17:43:32 abadger1999: i dont know what the issue is there, a) they are using it wrong? b) why not use exceptions?
17:43:39 linuxdynasty: That sounds pretty decent.
17:43:53 linuxdynasty: The only problem is losing information.
17:44:03 Which happens quite a lot as well.
17:44:30 toplevel does:
17:44:31 try:
17:44:35 do_everything()
17:44:40 except Exception as e:
17:44:43 i still dont see that as an issue with fail_json, if we NEED more info we should make it available, fail_json should ONLY be used to indicate 'failure i cannot recover from'
17:44:49 return module.fail_json(msg=str(e))
17:45:06 ^ abadger1999, my problem with that is what i stated above, user error messages will be crap
17:45:21 That loses the fine grained information about what is really going wrong and how it should be reported to the user.
17:45:25 bcoca: yep.
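A minimal sketch of the pattern being discussed: main() limited to option parsing and dispatch, with helpers raising a specific exception that main() converts into a descriptive fail_json() message (what was attempted, why it failed, a possible fix). This is an assumption-laden illustration, not agreed core-team policy; the option names, helpers, and FetchError exception are placeholders.

    #!/usr/bin/python
    # Hypothetical module skeleton showing a minimal main() as dispatcher.
    from ansible.module_utils.basic import AnsibleModule


    class FetchError(Exception):
        """Placeholder exception raised by fetch_data() with a specific cause."""


    def fetch_data(url):
        """Fetch and parse remote data; raise FetchError on any problem."""
        # real work goes here, e.g. raise FetchError('connection refused')


    def write_data(dest, data):
        """Write data to dest; return True if anything changed."""
        return False


    def main():
        # main() only parses options, validates, dispatches, and reports.
        module = AnsibleModule(
            argument_spec=dict(
                url=dict(required=True),
                dest=dict(required=True, type='path'),
                state=dict(default='present', choices=['present', 'absent']),
            ),
            supports_check_mode=True,
        )

        try:
            data = fetch_data(module.params['url'])
            changed = write_data(module.params['dest'], data)
        except FetchError as e:
            # Specific failure, specific message: action, cause, possible fix.
            module.fail_json(msg="Failed to fetch %s: %s. Check the URL and your "
                                 "network connectivity." % (module.params['url'], e))

        module.exit_json(changed=changed)


    if __name__ == '__main__':
        main()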
17:45:39 I prefer to err on the side of being helpful to the user than to the programmer trying to fix the code (he can add exceptions/logging/debugging)
17:45:50 well the try do everything is just bad design, try for a, if fail return
17:45:56 +1 it bugs me when i see that method of handling exceptions
17:46:12 So we need to figure out the right way to address the problems raised by both bad designs and get people using that instead.
17:46:12 for me it is a red flag of really bad error handling
17:46:29 I think we should have a catchall.
17:46:31 smaller methods will promote better code design which in turn will promote better error handling :D
17:46:35 abadger1999: i reject modules with that pattern, not sure that is a problem
17:46:50 ^ it's in the developer guidelines: NO catchall exceptions
17:46:56 but the catchall should be built into basic.py
17:47:05 abadger1999: i dont understand why
17:47:17 And the developer should make an effort to do the right thing.
17:47:30 i dont think we need a catchall (we actually do have one, the part that reads module output)
17:47:41 bcoca: to get rid of some of the crap handling of stdout and stderr that we have to do controller side.
17:47:54 we need that crap anyways
17:48:02 bcoca: and we could probably make the fail_json stuff better as well.
17:48:04 ^ not all modules use basic.py, so there is no avoiding it
17:48:19 bcoca: there is no avoiding keeping it... but we can make fewer people run into it.
17:48:21 not opposed to making it better, opposed to making it a parallel exception mech
17:48:39 abadger1999: you are just moving it, not avoiding it
17:48:56 right now you get different exceptions in output depending on how the module was written.
17:49:05 and i dont see the problem with IT as is, what does it do that you don't like? should we just not fix that?
17:49:27 that will always be true?
17:49:27 bcoca: but you can make it better when you have structured data (the exception in the module) as opposed to screenscraping (from stdout and stderr)
17:49:50 We can make it so that all the modules that we ship are better.
17:50:04 because we mandate that all the modules we ship will use the module_utils boilerplate.
17:50:09 er .. yes and no, you would have to wrap the whole module in try/except
17:50:23 we can already do that w/o a catchall
17:50:26 Definition of catchall.
17:50:33 No we can't.
17:50:37 depends on which level you put it
17:50:50 def main is in the module, not in the AnsibleModule.
17:50:51 if __main__ is one thing, what you are implying is around that also
17:51:26 can you show me a concrete example of what the problem is? i'm still not sure about the issue aside from vague 'output disparity'
17:51:36 anyhow... we have this argument a lot... we don't need to take up the new module meeting having it.
17:51:39 and those are ALWAYS possible
17:51:52 i thought we cancelled the meetings until freeze is over
17:52:01 ^ jimi posted a message on the last one
17:52:30 bcoca: Err sorry, the testing meeting.
17:52:32 This is the Testing Working Group. New Modules meetings are on hold till after the freeze
17:52:37 bcoca: which is where we are now :-)
17:52:39 abadger1999: i think we need a proposal, i just dont 'see' what the issue is nor what the proposed solution will give us aside from more boilerplate and duplicate code
17:52:49 bcoca: +1 to proposal.
17:52:51 gundalow: sorry
17:52:59 did not mean to hijack
17:53:07 bcoca: It's cool. There is useful discussion going on
17:53:22 I don't have anything that is close to addressing all the problems yet, so no proposal yet.
17:53:32 mmaglana: Not sure if we confused the matter a lot there :P
17:53:48 not at all. it was an interesting discussion to me :)
17:53:59 i think the 'actions' return key might be better suited to give us more context info (nitzmahone was writing that proposal)
17:54:13 ^ which would also help with testing
17:54:19 it's kind of 'module debugging'
17:54:23 mmaglana: Hopefully you got some good suggestions. If you want to go round this again feel free to add the 2nd version onto the meeting agenda and we will all have another look
17:54:39 though we already have module debugging, but some don't like that it goes to syslog
17:54:57 gundalow: i did. thanks and i will certainly submit the second version!
17:55:08 :)
17:55:14 Right, we have 5 minutes left
17:55:27 Did anyone have anything else?
17:55:40 yeah, gundalow asked about my tests for f5 modules
17:55:52 the tests i have right now are in playbooks
17:56:00 ah, yes
17:56:05 but the coverage here is per module
17:56:06 https://coveralls.io/builds/7958671/source?filename=library%2Fbigip_selfip.py
17:56:21 which is partially why i was asking abadger1999 about the ansiballz thing the other day
17:56:41 so my tests could be construed as "functional" tests specifically of the module
17:57:07 so thanks abadger1999 for not cramming all that module code together! (whether that was on purpose or not)
17:57:15 :-)
17:58:03 Making things more modular, structured, and organized is one of my personality quirks ;-)
17:58:46 i fought with coverage.py for a day and then i had my moment of enlightenment
17:59:03 and it resulted in a bug found, so hooray!
17:59:14 caphrim007: What was the bit of magic you were missing?
18:00:05 i originally thought the entire ansiballz thing was executed as one part, but instead the module specifically is its own file
18:00:16 so i massaged the way coverage reports on that
18:00:31 i think i cc'd the commit in my message to the testing group's github issue
18:02:52 long story short, i create a .pth file at runtime that turns on coverage for everything, then during reporting i change the name of the file that is reported on
18:03:02 ansiballz calls it ansible_module_bigip_selfip
18:03:12 i just change that to bigip_selfip so that coverage can find my code
18:03:31 because ansible_module_bigip_selfip is transient
18:03:52 Cool.
18:06:34 caphrim007: One thing to test your code on is a module that mirrors a name from the stdlib. That's the reason I had to start prefixing with ansible_module_ in the first place.
18:07:19 caphrim007: It probably will be okay because it sounds like you are just changing the name in the report, not in actually running the code, but thought I'd mention it as a possible caveat.
18:07:19 yeah, in my .coveragerc i turn off the pylib coverage, but that is a valid point
18:07:51 yeah i change it in the report
18:07:57 under the "include =" section
18:11:24 gundalow: that's it from me
18:11:54 cool
18:12:53 I added a "Tasks/Status" section to the top of https://public.etherpad-mozilla.org/p/ansible-testing-working-group
18:13:23 Feedback on that would be welcome
18:13:57 I need to step away from the computer now
18:14:32 Thanks as always for all your time
18:14:42 One last question
18:14:52 Is anyone going to Velocity in Amsterdam?
18:15:05 I should be there
18:15:18 #endmeeting
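For readers who want to reproduce the coverage trick caphrim007 outlines above (a .pth file created at runtime so coverage is collected inside the AnsiballZ-extracted module), here is a rough sketch using coverage.py's documented subprocess support. The paths, playbook name, and .coveragerc location are assumptions for illustration and are not his actual setup; his real commit is referenced in the testing group's GitHub issue.

    # Hypothetical sketch: enable coverage for a module executed via AnsiballZ.
    import os
    import site
    import subprocess

    site_packages = site.getsitepackages()[0]
    pth_file = os.path.join(site_packages, 'turn_on_coverage.pth')
    coveragerc = os.path.abspath('.coveragerc')  # should set parallel = True under [run]

    # A .pth file is evaluated by every Python process that imports site, so this
    # turns collection on inside the transient module file AnsiballZ writes out.
    with open(pth_file, 'w') as f:
        f.write('import coverage; coverage.process_startup()\n')

    # coverage.process_startup() only activates when this variable points at a config.
    env = dict(os.environ, COVERAGE_PROCESS_START=coveragerc)
    subprocess.check_call(['ansible-playbook', 'tests/bigip_selfip.yaml'], env=env)
    subprocess.check_call(['coverage', 'combine'])

    # The collected data still refers to the transient ansible_module_bigip_selfip
    # file; caphrim007 maps that name back to bigip_selfip during reporting so
    # coverage.py can locate the real source under library/.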