16:04:13 <akasurde> #startmeeting Ansible VMware Working Group Meeting
16:04:13 <zodbot> Meeting started Mon Jun 17 16:04:13 2019 UTC.
16:04:13 <zodbot> This meeting is logged and archived in a public location.
16:04:13 <zodbot> The chair is akasurde. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:13 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:04:13 <zodbot> The meeting name has been set to 'ansible_vmware_working_group_meeting'
16:04:18 <akasurde> #chair Goneri
16:04:18 <zodbot> Current chairs: Goneri akasurde
16:04:34 <akasurde> hi Goneri
16:04:59 <Goneri> I can start with an update regarding the CI.
16:05:41 <Goneri> We can now do a full run of the CI on a regular lab.
16:06:24 <Goneri> The integration with our Worldstream.nl environment still depends on a couple of patches:
16:06:40 <Goneri> - https://github.com/ansible/ansible/pull/52936 (I just pushed a fix for the test-suite)
16:06:53 <Goneri> - https://github.com/vmware/pyvmomi/pull/799
16:07:16 <Goneri> - and a last one that I'm preparing
16:07:49 <Goneri> a run of all the tests takes a bit less than 6h
16:08:27 <akasurde> Wow,
16:08:38 <Goneri> yes, it's a bit slow
16:08:40 <akasurde> That's huge progress
16:11:50 <akasurde> Goneri, would you be interested in reviewing https://github.com/ansible/ansible/pull/57832 and https://github.com/ansible/community/wiki/VMware:-HTTPAPI-connection-plugin ?
16:14:05 <n3pjk_> I have a substantial amount to add to the specified wiki page. I didn't have the link, so I had kept it under separate cover, but I will add it in after the meeting.
16:15:10 <akasurde> #chair n3pjk_
16:15:10 <zodbot> Current chairs: Goneri akasurde n3pjk_
16:15:13 <akasurde> hi n3pjk_
16:15:30 <akasurde> no issues, I forgot to paste the Wiki link previously
16:16:10 * agowa338 I'd like to add another topic (if it isn't too late): vmware_tools handle connection issue while dcpromo/win_domain #57661
16:17:11 <akasurde> #chair agowa338
16:17:11 <zodbot> Current chairs: Goneri agowa338 akasurde n3pjk_
16:17:46 <akasurde> agowa338, I will check #57661 and add my reviews
16:17:52 <akasurde> agowa338, thanks for the PR
16:18:33 <agowa338> Regarding this PR, it does not fully solve the issue of credential swapping when promoting a Windows VM to a domain controller. It just mitigates it, but it's not perfect. Does anyone know how we could achieve this without re-entering the VM?
16:19:09 <akasurde> agowa338, I am not sure since my Windows knowledge is very limited
16:19:18 <agowa338> Basically the issue is that the connector tries to keep a hand on the module after invoking it and therefore requests another token from Windows, but because of *foo* within Windows this fails...
16:20:15 <agowa338> I'm kinda stuck here on how to properly approach this, as the only way I could imagine is fire and forget...
16:20:27 <akasurde> ah !
16:21:01 <akasurde> jborean93 and nitzmahone might have an idea here ^^
16:22:17 <Goneri> (I have to go)
16:22:27 <akasurde> Goneri, np
16:22:58 <akasurde> agowa338, I will try to find something about the given issue but no promises
16:23:25 <agowa338> thanks ;-)
16:23:54 <n3pjk_> Re: ReST design spec, shall I edit the above-specified page with the design content?
16:24:33 <akasurde> n3pjk_, yup
16:24:33 <nitzmahone> agowa338: you should be able to do this.
16:24:45 <akasurde> #chair nitzmahone
16:24:45 <zodbot> Current chairs: Goneri agowa338 akasurde n3pjk_ nitzmahone
16:25:21 <nitzmahone> You need `async` w/ `poll: 0`, then an `until` loop with `async_status` in another task after you've used task/block/play vars to switch the credentials.
16:26:19 <nitzmahone> So it kicks the task off on one set of credentials, then you change the credentials to come back and poll it. You'd probably also need to force override the async status dir, since that's normally a per-user location
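A minimal playbook sketch of the async pattern nitzmahone describes above, added for illustration only; the module options are real win_domain_controller/async_status parameters, but the account names, timeouts, and the async directory are placeholders, and `ansible_user`/`ansible_password` stand in for whatever connection variables the vmware_tools plugin actually uses for guest credentials:

```yaml
# Sketch only: kick off the promotion under the first account, then switch
# credentials and poll the async job with the second account.
- name: Promote guest to domain controller (fire and forget)
  win_domain_controller:
    dns_domain_name: example.local                  # placeholder values
    domain_admin_user: EXAMPLE\setup_admin
    domain_admin_password: "{{ setup_admin_password }}"
    safe_mode_password: "{{ dsrm_password }}"
    state: domain_controller
  async: 3600                                       # allow up to an hour
  poll: 0                                           # return immediately with a job id
  register: promo_job

- name: Poll the job after switching to the post-promotion credentials
  async_status:
    jid: "{{ promo_job.ansible_job_id }}"
  register: promo_result
  until: promo_result.finished
  retries: 120
  delay: 30
  vars:
    ansible_user: EXAMPLE\domain_admin              # assumed new credentials; actual
    ansible_password: "{{ domain_admin_password }}" # var names depend on the connection plugin
    ansible_async_dir: C:\Windows\Temp\.ansible_async  # shared location instead of the per-user default
```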
16:27:14 <agowa338> nitzmahone: That works with ssh and winrm, but not with the vmware_tools connector, as the connector itself requests multiple tokens.
16:27:54 <nitzmahone> You should be able to do a `meta: reset_connection` in between to clear that. If not, it's a bug in the connection.
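A one-task sketch of the reset nitzmahone suggests, placed between the two tasks from the previous example; `meta: reset_connection` is a standard meta task, though whether the vmware_tools plugin honors it is exactly the open question here:

```yaml
# Sketch: force the next task to open a fresh connection with the new credentials.
- name: Drop the cached vmware_tools connection
  meta: reset_connection
```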
16:28:48 <agowa338> ok, I'll have a look into that. Currently it fails within a module, as the win_domain module includes a workaround to restart winrm and that workaround clashes...
16:29:28 <agowa338> maybe also some documentation needs to be added regarding that trick.
16:30:26 <nitzmahone> Hrm, that *shouldn't* matter, since it's all happening within the same session. I'm not familiar with whatever the vmware_tools connection is doing WRT its execution environment or tokens, but unless it's doing something really weird, one of those things should take care of it.
16:33:04 <agowa338> nitzmahone: The problem is it is not the same session, as vmware_tools is one-way and opens multiple sessions consecutively, and sometimes also opens more sessions than needed... It is a limitation within the connector for sure, but what is the best mitigation?
16:34:21 <nitzmahone> Easiest thing for now might be to try and drop a customized copy of the module in your `library/` dir that omits the WinRM service restart. We might consider a PR that makes that optional...
16:35:38 <agowa338> Some weeks ago I had that plugin in an API monitor and it calls LogonUserW too many times, but that is proprietary code. It is a limitation within the connector for sure, but maybe your trick is enough to mitigate it.
16:36:06 <agowa338> Dropping the restart also does not solve it; it just fails in another way.
16:45:36 <nitzmahone> Sorry, I have no known way to test against VMware, so I'm not able to debug that particular issue.
16:51:04 <agowa338> nitzmahone: No worries, you already helped me. I think with that async trick in mind it should be possible to work around that issue. And as a kind of last resort, adding an "and reboot" option to the win_domain and win_domain_controller modules could also work.
16:52:04 <nitzmahone> We can't do internal reboots in any of those; the only way to do it is to wrap them with an action that does the reboots (like win_updates does), which is something we've considered but just haven't gotten around to.
16:52:52 <nitzmahone> Module-internal reboots are an instant recipe for a race condition: did the module result make it back to the controller before the rug got ripped out from underneath the transport by rebooting? We'll never know...
16:54:02 <agowa338> nitzmahone: Whether it made it back to the controller is exactly the problem here, so just probing the connector from the module and forcing it async and rebooting could help, but I have to check if that is possible.
16:55:09 <agowa338> We basically don't get the result in a sane manner.
17:07:41 <akasurde> We are a little over time
17:07:46 <akasurde> #endmeeting