16:02:24 #startmeeting Ansible VMware Working group meeting
16:02:24 Meeting started Mon Dec 2 16:02:24 2019 UTC.
16:02:24 This meeting is logged and archived in a public location.
16:02:24 The chair is akasurde. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:24 Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:02:24 The meeting name has been set to 'ansible_vmware_working_group_meeting'
16:02:35 #chair jtyr
16:02:35 Current chairs: akasurde jtyr
16:02:40 Hi jtyr
16:03:19 Hi
16:03:33 Just created this: https://github.com/ansible/community/issues/423#issuecomment-560460206
16:06:19 jtyr, in PR 63741, any reason why we are removing the find_vmdk function?
16:07:01 because it's not used anywhere else after I implemented the changes discussed at the last meeting ...
16:07:57 Oh, I missed the last meeting
16:08:31 I know ;o)
16:08:36 So if I understand correctly, you are directly assigning user input for vmdk files to the device
16:08:52 Correct
16:08:55 don't you think we should validate first before proceeding
16:09:04 Like the function does ?
16:09:37 Ideally yes, but the validation as currently implemented can take hours to complete ... at least that was my experience, as described in the PR description ...
16:10:09 Oh, I seriously missed the last meeting
16:10:21 I know ;o)
16:12:21 jtyr, do you think we can add a flag to override the current behavior?
16:12:40 This is what I did originally
16:12:53 but we agreed to remove the flag at the last meeting ... ;o)
16:13:11 facepalm
16:13:15 OK
16:13:23 I will go ahead and merge this
16:13:29 Cool
16:14:38 #action akasurde merge 63741
16:15:49 For PR 63740, I need to review, is that OK with you ?
16:16:02 sure
16:16:05 #action akasurde review 63740
16:18:54 #chair Enphuego
16:18:54 Current chairs: Enphuego akasurde jtyr
16:19:20 thanks?
16:20:37 jtyr, 43435 I am OK with the change, it just needs a heads-up from Goneri for the test part
16:22:23 Sure.
He has advised me how to solve the testing issue I had - to skip the test for govmomi ...
16:22:38 jtyr, Cool
16:23:17 #chair mariolenz
16:23:17 Current chairs: Enphuego akasurde jtyr mariolenz
16:23:25 mariolenz, Enphuego Welcome
16:23:36 hi
16:24:11 mariolenz, we are discussing PRs in https://github.com/ansible/community/issues/423#issuecomment-560460206
16:27:51 jtyr, "number of CPUs" at line 44 is not right, since we have an option like hotadd_cpu which can add CPUs while the VM is powered on
16:28:06 jtyr I am talking about https://github.com/ansible/ansible/pull/65429
16:30:26 43435 looks pretty good to me, does it break when there is local host storage, or does it ignore that?
16:34:39 akasurde are you sure that the vmware_guest module actually hot adds CPUs? I thought that just set the option on the VM
16:34:41 Enphuego, how is storage shown in case it is local host storage ?
16:35:10 same as shared storage, it's just only visible to one host
16:35:34 https://github.com/ansible/ansible/blob/a8ef5d5034c89dfe35b641ce6309c4ef9812254b/lib/ansible/modules/cloud/vmware/vmware_guest.py#L131
16:35:44 Enphuego, ^^
16:36:07 Enphuego, I will check how it shows when attached as local host storage
16:36:08 ?
16:36:34 it doesn't... it's just not available to all hosts in the cluster
16:37:54 Ok
16:38:07 in my environment at least it's a LUN on a SAN or similar storage, but only made available to one host. I don't know how you'd detect that in code, I just set group vars in my cluster vars
16:38:47 jtyr what do you think about Enphuego's opinion ?
16:39:35 sorry I can't be more helpful, I've just seen real-life examples where people have a physical server with a bunch of disks attached to the host that's not really meant to be used. Deploying there would not be what I expect
16:41:08 Enphuego point noted, we need to explore more combinations
16:41:23 sorry, been busy for a minute ...
16:42:29 i think that vmware_guest is hot-adding cpus https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_guest.py#L1033
16:43:16 mariolenz, yes, it is possible to add CPUs while the VM is on with CPU hot add enabled
16:43:19 What's the concern?
16:43:43 regarding 43435?
16:44:30 The concern is that this line https://github.com/ansible/ansible/pull/65429/files#diff-ed147163551d1eaa084b21f6bab29f96R44 is not true where it says "number of CPUs"
16:44:44 regarding 65429, I can add: "unless CPU hot add is enabled" ... or something like that
16:44:49 jtyr, we will come back to 43435
16:44:55 The concern for 43435 is what happens when I have a big local LUN attached to one host in the cluster
16:45:27 on 65429, will the module actually hot add CPUs?
16:45:31 Enphuego, only attached to one host
16:45:59 I don't have hot adding enabled so I cannot check ...
16:46:19 yes, only one host. I won't do it as a practice in my environment, but I've seen it happen plenty
16:46:38 jtyr, the best way is to use an example which is just like "amount of RAM" or something
16:47:00 ok, I will remove the note about CPUs ...
16:47:14 jtyr cool
16:47:36 #action jtyr Change example wording in 65429
16:47:42 let me give hot add a whirl in my environment
16:47:57 Now, PR 43435
16:47:59 akasurde: change done
16:48:33 jtyr lgtm
16:48:47 * akasurde waiting for CI to finish
16:49:07 jtyr for PR 43435 Enphuego has a valid concern
16:49:11 does this line only pull storage available to the whole cluster? 'cluster = self.find_cluster_by_name(self.params['cluster'], self.content)'
16:49:36 Enphuego, it will only pull cluster objects
16:49:54 the next line gets all hosts and their mount points
16:50:01 oh, the next line loops through the hosts pulling all mount points
16:50:13 so yes, it will definitely pick up local storage
16:50:45 Does the cluster have the intelligence to use that exclusive storage ?
16:50:53 is that a bad thing?
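[Editor's note] The hot add point above can be made concrete with a small sketch. This is a hypothetical helper, not vmware_guest's actual code: it only illustrates why the "number of CPUs cannot change while powered on" wording in 65429 was inaccurate once the hotadd_cpu (and hotremove_cpu) options are taken into account.

```python
# Hypothetical helper, NOT vmware_guest's real implementation: decide whether
# a CPU count change requires powering the VM off first. With CPU hot add
# enabled, raising the count is allowed while the VM is powered on; with CPU
# hot remove enabled, lowering it is too.
def cpu_change_needs_poweroff(current, desired, hotadd_cpu=False,
                              hotremove_cpu=False, powered_on=True):
    if not powered_on or desired == current:
        return False          # powered off, or nothing to change
    if desired > current:
        return not hotadd_cpu     # growing is fine only with hot add
    return not hotremove_cpu      # shrinking is fine only with hot remove

# Raising CPUs on a running VM with hot add enabled needs no power-off:
print(cpu_change_needs_poweroff(2, 4, hotadd_cpu=True))   # False
# Without hot add, the same change would require a power cycle:
print(cpu_change_needs_poweroff(2, 4))                    # True
```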
16:51:21 jtyr frankly speaking, I never tried such a setup
16:51:46 the cluster doesn't have any intelligence to use the local storage, just the host. No guarantees that the host picked for the build will be the host with the storage
16:51:55 I had ESXis with local storage only in the past
16:52:18 I've seen it especially when they repurposed hardware
16:52:38 and I was using the vmware_guest module to provision directly on ESXi when there was no vCenter available ...
16:52:38 you could check the storage from two hosts in the cluster and only use the storage common to both
16:53:00 and that patch is from that time, I think ...
16:54:28 Enphuego, can you test if 43435 works for exclusive storage in a cluster ?
16:54:50 Since I am not in the lab I can't test now
16:54:56 let me give it a try
16:55:01 thanks
16:56:39 I don't have 2.9, will it work properly if I load the changed files into a 2.8 install?
16:56:45 even if the host picked for the build is the host with the storage, that's probably not what people will want. if i deploy a vm on a cluster, i don't want it to end up on a local datastore.
16:57:33 if you are not using ESXi only ... ;o)
16:57:58 can we check somehow if the host is in a cluster?
16:58:13 if you are targeting a cluster, hopefully it's in a cluster?
16:59:26 well ... the module works well with ESXi as well ...
16:59:37 this is what I was using it for ...
16:59:41 right, but there are two sets of logic there... one for a cluster, one for a host
17:00:36 akasurde: CI finished: https://github.com/ansible/ansible/pull/65429
17:03:43 I think for a host this logic will work, I am not too sure about a cluster though
17:04:03 Enphuego: if you use the module against a single host, local storage should be ok imho.
17:04:05 First I thought it would, but with the case of exclusive storage I am confused
17:04:15 but it seems to work only with vmfs datastores? some people use nfs.
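[Editor's note] The suggestion above (only use storage common to the hosts in the cluster) can be sketched as a small datastore filter. This is an illustrative sketch with made-up host and datastore names, not 43435's actual code: instead of collecting every mount point from every host, keep only the datastores that all hosts can see, so host-local LUNs drop out.

```python
# Illustrative sketch, NOT the PR's real code: keep only datastores that
# every host in the cluster can see, so host-local storage is excluded.
def cluster_wide_datastores(host_datastores):
    """host_datastores: dict mapping host name -> set of datastore names."""
    views = list(host_datastores.values())
    if not views:
        return set()
    common = set(views[0])
    for view in views[1:]:
        common &= view          # intersect: drop anything not on this host
    return common

# Made-up example: esx01 has an extra local LUN that esx02 cannot see.
hosts = {
    "esx01": {"san-lun-01", "san-lun-02", "esx01-local"},
    "esx02": {"san-lun-01", "san-lun-02"},
}
print(sorted(cluster_wide_datastores(hosts)))  # ['san-lun-01', 'san-lun-02']
```

Intersecting across all hosts (rather than just two) also covers clusters where different hosts have different local disks attached.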
17:04:45 yes, if you are using it against a single host then local storage would be an expected outcome
17:05:24 I'm trying to find a host in my environment that has a large local disk...
17:09:03 I'm making a host that has a 200TB LUN, will be just a minute
17:17:22 Enphuego, jtyr, mariolenz what do you suggest for this situation ?
17:22:37 I'd suggest checking two hosts in the cluster and selecting only storage that's on both
17:23:49 that will get rid of most edge cases
17:24:17 akasurde: i'm not sure, i guess i'll have to think about this. and try one or two things ;-) maybe i can have a closer look into this pr tomorrow and comment.
17:24:42 OK
17:24:58 I feel the same, I will test in my environment too and comment again
17:24:58 or maybe even today, but then without testing anything.
17:25:15 mariolenz, take your time
17:25:26 I really wonder how many people let it auto-pick the storage in the first place
17:25:43 Enphuego, a lot, I guess
17:26:18 Since we are a little over time, I will end this meeting now
17:26:32 See you soon people, thanks for joining in
17:26:49 #endmeeting