20:00:26 #startmeeting Ansible Windows Working Group
20:00:26 Meeting started Tue Jul 16 20:00:26 2019 UTC.
20:00:26 This meeting is logged and archived in a public location.
20:00:26 The chair is jborean93. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:26 Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:00:26 The meeting name has been set to 'ansible_windows_working_group'
20:00:34 so implementing timeout for windows tasks ...
20:00:37 hey
20:00:41 ;-p
20:00:42 #chair nitzmahone jhawkesworth
20:00:42 Current chairs: jborean93 jhawkesworth nitzmahone
20:00:47 #chair bcoca
20:00:47 Current chairs: bcoca jborean93 jhawkesworth nitzmahone
20:00:50 :-)
20:01:00 hola
20:01:05 yo
20:01:20 hey
20:01:21 hey
20:01:33 #chair Shachaf92
20:01:33 Current chairs: Shachaf92 bcoca jborean93 jhawkesworth nitzmahone
20:01:33 hey
20:01:36 yay big turnout today
20:02:27 * jhawkesworth wonders how far back on the agenda we need to go
20:02:38 here: https://github.com/ansible/community/issues/420#issuecomment-507493257
20:03:07 if agowa338's comment is finished
20:03:17 damn keyboard
20:04:09 I believe so, most of those PRs are awaiting a rebase or review conversations
20:04:39 mine is really just a request for review, can't remember if I mentioned it last week but everyone here has already looked at it so let's move on from there
20:04:57 #topic https://github.com/ansible/community/issues/420#issuecomment-508959447 wait_for_connection credentials
20:05:47 This is a fun one, I wouldn't want a behaviour change but wouldn't be too against a specific flag
20:05:58 the biggest issue is how we detect whether there was a credential failure
20:06:00 That's a slippery slope- wait_for_connection is designed to be resilient to things like "the DC is unavailable right now", but that would be an auth failure
20:06:05 right now we don't have an exception for it
20:06:27 i made a possible implementation but haven't tested it yet
20:06:39 plus each connection plugin has a big mix of exception handling levels
20:07:38 should i pursue this? or leave it be?
20:07:43 yeah I guess different connections will behave differently when presented with invalid credentials.
20:07:54 It'd be good to have specific errors for some of those kinds of things, but a lot of connections aren't able to distinguish those failures anyway
20:08:03 this issue seems to focus on the winrm connection
20:08:18 nitzmahone: ignore_unreachable would be a way to 'ignore auth issues'
20:08:23 wait_for_connection is not generic
20:08:26 It absolutely shouldn't be the default behavior, and if you start getting into lots of flags...
20:08:29 bcoca: he doesn't want to ignore auth issues
20:08:38 he wants to ignore connection failures but still fail on auth issues
20:08:46 yes, but `wait_for_connection` is (deliberately) useful against win and non windows hosts
20:08:58 honestly I have half a mind to close it saying we aren't aiming to do this but feel free to open a PR that covers all the bases
20:09:06 that would cover both .. since we consider 'auth failures' connection issues
20:09:15 I know
20:09:28 he wants it to cover only unreachable failures, which it doesn't right now
20:09:36 https://github.com/ansible/proposals/issues/141
20:09:51 ^ an expansion on that idea might cover his want .. possibly/mebbe/in future
20:10:20 ok, any objections to me closing that, saying it requires better error detection and support beyond just WinRM, and linking to that proposal?
20:10:31 works4me
20:10:47 +1
20:10:52 no objection from me
20:10:53 (and also can't be the default)
20:11:01 workaround: use ignore_unreachable and then failed_when: 'auth' in error (whatever the auth error is)
20:11:05 nitzmahone: of course, I rely on this feature right now :)
20:11:06 yes, cannot be default
20:11:14 cool moving on
20:11:15 agree
20:11:18 AGREE
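To make bcoca's workaround concrete, a minimal sketch follows. This is hedged: it assumes the registered result exposes the failure text in `msg`, and the `'auth'` substring is a placeholder for whatever the real authentication error contains, which varies per connection plugin.

```yaml
# Hedged sketch only: the 'auth' match string and result fields are
# placeholders; actual error text differs per connection plugin.
- name: Wait for the host, tolerating unreachable but surfacing auth errors
  wait_for_connection:
    timeout: 600
  register: wfc
  ignore_unreachable: true
  failed_when: "'auth' in (wfc.msg | default('') | lower)"
```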
20:11:32 #topic https://github.com/ansible/community/issues/420#issuecomment-508960375 win_pagefile
20:12:57 so what's the issue here, it sounds like a bug where it's not reporting a page file?
20:12:59 sort of wishing pagefile info was in facts, which doesn't really help the specific issue
20:13:13 ditto
20:13:14 yea, this came out at about the same time we did a few `state: query` modules
20:13:36 in the case of auto managed at the system level there is no pagefile in the usual classes the module uses
20:14:06 so you'll have to put in a specific fix for that case just to return the paths used
20:14:19 i wanted to bring it up here to see if this is desirable
20:14:21 so it's a problem where it's not reporting the path to the auto managed pagefile?
20:14:28 yeah
20:14:31 ah ok
20:14:49 well one option is to create a `win_pagefile_info/facts` module and deprecate the query option for this one
20:15:01 I don't have a problem with a specific case but it's not something I'd use so don't know what this would be useful for
20:15:17 that way we can build the return values properly with something like this in mind
20:15:21 me neither
20:15:26 I think that's the right longer-term direction, assuming someone needs to know the actual details beyond "it's system managed" (why?)
20:15:37 they want a BSOD?
20:15:55 we can maybe ask in the issue what the actual use case is
20:15:59 they fixed that with the xbox .. they made it green
20:16:08 bcoca: more like red
20:16:23 * jborean93 had a few Red Rings of Death
20:16:25 mine is GSOD .. have not seen a red one (yet)
20:17:00 yeah let's ask why this is useful.
20:17:06 i'll ask
20:17:23 ok we can ask for the reasoning but I would be reluctant to make `state: query` better; I'd just bite the bullet and create a separate module for that like we do today
20:17:36 -10 to state: query|list|info
20:17:43 facts/info module
20:17:46 instead
20:17:54 #topic https://github.com/ansible/ansible/pull/58483 setup failure handling
20:18:22 I'm personally happy to do the easy route here and then retroactively add nitzmahone's failure mechanism if they try to use this once that feature is in
20:18:40 right now having the setup module fail on some use cases is not a good situation to be in
20:19:09 Which one is "the easy route"?
20:19:25 try catch and return $null
20:19:37 for one, knowing the machine sid isn't really that useful
20:19:43 true
20:19:52 setup module on 'posix' should not fail on facts, it skips them and adds either N/A or a msg about the fact being unavailable
20:19:55 Seems like setup should never fail
20:19:58 not sure how the windows side works
20:20:19 N/A and a msg about the fact
20:20:28 bcoca: it's a mix, before it would just ignore them but a change in 2.8 made error handling consistent across the board, i.e. it would actually report powershell errors
20:20:45 can you just return them as warnings?
20:21:04 i.e. 'could not read sid: '
20:21:10 but then continue?
20:21:17 we can, but I liked nitzmahone's idea of flagging the value so if the user tries to use it then it will fail with the error
20:21:37 that is 100x better, but the feature needs to exist first
20:21:40 but that requires his deprecation/flagging work which is ongoing
20:21:43 yep
20:21:43 We can refactor to do that generally once it's in, but I'm ok with this more-or-less as-is right now
20:21:45 jinx!
20:22:03 cool, I'll review the PR to try to get it in soon
20:22:06 flagging things sounds great.
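From the consumer side, the 'try catch and return $null' route would leave plays free to guard on the missing value. A minimal sketch, assuming a hypothetical `ansible_machine_sid` fact name (the actual fact name used by the PR may differ):

```yaml
# Hedged sketch: 'ansible_machine_sid' is a placeholder fact name. If
# setup swallows the error and returns $null, a play can guard like this.
- name: Use the machine SID only when setup could gather it
  debug:
    msg: "Machine SID is {{ ansible_machine_sid }}"
  when: ansible_machine_sid is defined and ansible_machine_sid is not none
```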
20:22:36 #topic https://github.com/ansible/ansible/pull/58790 win_pester
20:22:38 I can't think of a case where I'd use the machine sid anyway, so happy to tip the balance in favour of having setup work without retrieving it for now
20:22:39 That way you don't have to care about how much noise there is from setup failures unless you actually care about a given value that couldn't be fetched
20:22:47 yep
20:23:08 I think the win_pester one is just a request for review
20:24:01 will attempt to get to it at some point but my plate is quite full right now
20:24:23 it has a test - but I'm not using pester so it's hard to comment
20:24:43 * jborean93 feels like Mr. Creosote sometimes
20:25:06 just one more PR. It is only 'waffer thin'
20:25:54 * jborean93 explodes
20:26:17 #topic https://github.com/ansible/community/issues/420#issuecomment-511918582 win_service credentials
20:26:27 I think I know what the problem is here and commented this morning
20:26:40 oh wait wrong one
20:26:44 this is win_dns_client
20:26:51 i think i deleted the comment
20:26:59 i saw your response
20:27:14 #topic https://github.com/ansible/community/issues/420#issuecomment-511918582 win_dns_client disconnected interfaces
20:27:42 Would need to verify the behaviour with server 2012+, but if they don't fail on these types of interfaces then 2008/R2 should act the same
20:28:00 i'll look into it
20:28:08 cool
20:28:34 weird, I can't see that issue
20:28:37 maybe give it a final go-over
20:28:39 https://github.com/ansible/community/issues/420#issuecomment-502393874
20:28:49 * jborean93 still can't wait for 2008 EOL
20:29:09 I deleted the comment after i saw jborean93 commented on the issue
20:29:24 yea I was confused as I swear I saw that on the agenda :)
20:29:37 you did, and then you didn't
20:29:46 magic man
20:29:48 a magician
20:31:03 I think I looked through all those PRs
20:31:22 I think so too and most if not all are commentless
20:31:50 well I closed https://github.com/ansible/ansible/pull/38356
20:32:00 Shachaf92: did you not fix https://github.com/ansible/ansible/pull/40535 in another PR?
20:32:40 yeah in setup.ps1
20:32:45 this is the damn config script
20:33:06 there are like 10 issues and some PRs about it
20:33:21 * jborean93 wants to flamethrower that one as well
20:33:31 ah ok
20:34:55 I'm working on the side on a report to start cutting down on old issues / multiple problems in a single file
20:35:14 I feel like I'm just going to have to close the majority of these. They are quite old and most not relevant anymore, just need to wordsmith a nice way of saying that
20:35:52 well... you can always say "Seems this one has been inactive for a while, closing; feel free to reopen if relevant"
20:37:25 yep, nothing in there really stands out for more discussion
20:37:42 Unless you disagree or want to talk about anything else, I'll end the meeting shortly
20:37:46 #topic open floor
20:37:46 I'll ping the contributor on https://github.com/ansible/ansible/pull/40535
20:38:17 I did a quick review on the pester one
20:39:02 thanks
20:39:52 i have a noob question about IRC and not ansible for a sec - how do you do the "* @jborean93 wants to flamethrower that one as well" messages?
20:40:08 type in `/me <message>`
20:40:32 thx
20:40:43 you're welcome
20:41:04 cool so sounds like we are all good
20:41:08 closing unless anything else
20:41:13 5
20:41:13 4
20:41:14 3
20:41:18 oh hang on
20:41:19 forever hold your peace
20:41:25 :)
20:41:42 has anyone ever had to solve a problem where you need to wait till the cpu is quiet before running a task
20:42:04 ah? like not busy? or actually silent?
20:42:05 I've seen tasks get slower if something is pegging the CPU but never had to wait
20:42:06 I have 2 playbooks hitting a single cpu box and they fight for runtime and one or the other loses
20:42:31 if it's a single CPU it might make sense
20:42:49 a single cpu windows box is a terrible idea
20:42:54 yea, I've had those cases where a single CPU machine was blocked by TiWorker doing its stuff in the background
20:43:13 yep. test environment runs a minimal footprint
20:43:33 Yeah, it's nearly impossible to do intelligently- having a task to wait for idle or something would likely not work too well, esp if you have more than one waiting, since they'll thundering-herd it
20:43:48 we are at the mercy of Windows thread and memory management here :(
20:44:09 There's actually a window message that you can subscribe to just for that, but it wouldn't really solve the problem here even if you could get at it from WinRM (which I'm not sure you can)
20:44:37 This is based on it IIRC: https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.process.waitforinputidle?view=netframework-4.8
20:44:40 fair enough. It's going to be easier to get a second cpu or move one app onto a different box.
20:44:53 oh wait, that's a different one
20:44:54 but still
20:45:32 good to know.
20:45:41 thanks for chatting it through, that was it
20:45:49 so ... nothing else from me
20:46:32 cool
20:46:43 thanks everyone for joining today
20:46:47 #endmeeting
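For reference, the "wait until the CPU is quiet" idea from open floor could be crudely approximated by polling a performance counter with `until`/`retries`. This is a hedged sketch only: the counter path, 20% threshold, and retry timings are illustrative assumptions, and as noted in the discussion it does nothing to stop multiple waiters from waking at once.

```yaml
# Hedged sketch: poll total CPU usage and continue once it drops below an
# arbitrary 20% threshold. Counter path and numbers are illustrative only.
- name: Wait until the CPU is (relatively) quiet
  win_shell: |
    (Get-Counter '\Processor(_Total)\% Processor Time').CounterSamples[0].CookedValue
  register: cpu
  until: (cpu.stdout | trim | float) < 20
  retries: 30
  delay: 10
  changed_when: false
```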