16:02:24 <akasurde> #startmeeting Ansible VMware Working group meeting
16:02:24 <zodbot> Meeting started Mon Dec 2 16:02:24 2019 UTC.
16:02:24 <zodbot> This meeting is logged and archived in a public location.
16:02:24 <zodbot> The chair is akasurde. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:24 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:02:24 <zodbot> The meeting name has been set to 'ansible_vmware_working_group_meeting'
16:02:35 <akasurde> #chair jtyr
16:02:35 <zodbot> Current chairs: akasurde jtyr
16:02:40 <akasurde> Hi jtyr
16:03:19 <jtyr> Hi
16:03:33 <jtyr> Just created this: https://github.com/ansible/community/issues/423#issuecomment-560460206
16:06:19 <akasurde> jtyr, in PR 63741, any reason why we are removing the find_vmdk function?
16:07:01 <jtyr> because it's not used anywhere else after I implemented the changes discussed at the last meeting ...
16:07:57 <akasurde> Oh, I missed the last meeting
16:08:31 <jtyr> I know ;o)
16:08:36 <akasurde> So if I understand correctly, you are directly assigning the user input for vmdk files to the device
16:08:52 <jtyr> Correct
16:08:55 <akasurde> don't you think we should validate first before proceeding?
16:09:04 <akasurde> Like the function does?
16:09:37 <jtyr> Ideally yes, but the validation as currently implemented can take hours to complete ... at least that was my experience, as described in the PR description ...
16:10:09 <akasurde> Oh, I seriously regret missing the last meeting
16:10:21 <jtyr> I know ;o)
16:12:21 <akasurde> jtyr, do you think we can add a flag to override the current behavior?
16:12:40 <jtyr> This is what I did originally
16:12:53 <jtyr> but we agreed to remove the flag at the last meeting ... ;o)
16:13:11 <akasurde> facepalm
16:13:15 <akasurde> OK
16:13:23 <akasurde> I will go ahead and merge this
16:13:29 <jtyr> Cool
16:14:38 <akasurde> #action akasurde merge 63741
16:15:49 <akasurde> For PR 63740, I need to review, is that OK with you?
16:16:02 <jtyr> sure
16:16:05 <akasurde> #action akasurde review 63740
16:18:54 <akasurde> #chair Enphuego
16:18:54 <zodbot> Current chairs: Enphuego akasurde jtyr
16:19:20 <Enphuego> thanks?
16:20:37 <akasurde> jtyr, for 43435 I am OK with the change, it just needs a heads-up from Goneri for the test part
16:22:23 <jtyr> Sure. He advised me how to solve the testing issue I had - to skip the test for govmomi ...
16:22:38 <akasurde> jtyr, Cool
16:23:17 <akasurde> #chair mariolenz
16:23:17 <zodbot> Current chairs: Enphuego akasurde jtyr mariolenz
16:23:25 <akasurde> mariolenz, Enphuego, welcome
16:23:36 <mariolenz> hi
16:24:11 <akasurde> mariolenz, we are discussing PRs in https://github.com/ansible/community/issues/423#issuecomment-560460206
16:27:51 <akasurde> jtyr, "number of CPUs" at line 44 is not right, since we have an option like hotadd_cpu which can add CPUs while the VM is powered on
16:28:06 <akasurde> jtyr, I am talking about https://github.com/ansible/ansible/pull/65429
16:30:26 <Enphuego> 43435 looks pretty good to me, does it break when there is local host storage or does it ignore that?
16:34:39 <Enphuego> akasurde, are you sure that the vmware_guest module actually hot adds CPUs? I thought that just set the option on the VM
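Background for the hot-add question above: at the vSphere API level, hot-adding CPUs is just a reconfigure of a powered-on VM, and it only succeeds when CPU hot add was enabled beforehand. A minimal pyVmomi sketch, assuming an already-authenticated session and a vim.VirtualMachine object named vm; the helper name hot_add_cpus is hypothetical and not part of vmware_guest:

    from pyVmomi import vim

    def hot_add_cpus(vm, new_count):
        # Hypothetical helper; vm is assumed to be a vim.VirtualMachine.
        # Reconfiguring numCPUs on a powered-on VM only succeeds if the
        # VM was last powered on with CPU hot add enabled.
        if (vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn
                and not vm.config.cpuHotAddEnabled):
            raise RuntimeError("CPU hot add is not enabled on %s" % vm.name)
        spec = vim.vm.ConfigSpec(numCPUs=new_count)
        return vm.ReconfigVM_Task(spec=spec)

This also matches Enphuego's suspicion: setting hotadd_cpu on the VM only flips cpuHotAddEnabled for future reconfigures; the actual hot add is the separate numCPUs reconfigure.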
16:34:41 <akasurde> Enphuego, how is storage shown in case it is local host storage?
16:35:10 <Enphuego> same as shared storage, it's just only visible to one host
16:35:34 <akasurde> https://github.com/ansible/ansible/blob/a8ef5d5034c89dfe35b641ce6309c4ef9812254b/lib/ansible/modules/cloud/vmware/vmware_guest.py#L131
16:35:44 <akasurde> Enphuego, ^^
16:36:07 <akasurde> Enphuego, how does it show up when it is attached as local host storage?
16:36:34 <Enphuego> it doesn't... it's just not available to all hosts in the cluster
16:37:54 <akasurde> Ok
16:38:07 <Enphuego> in my environment at least it's a LUN on a SAN or similar storage, but only made available to one host. I don't know how you'd detect that in code, I just set group vars in my cluster vars
16:38:47 <akasurde> jtyr, what do you think about Enphuego's opinion?
16:39:35 <Enphuego> sorry I can't be more helpful, I've just seen real-life examples where people have a physical server with a bunch of disks attached to the host that's not really meant to be used. Deploying there would not be what I expect
16:41:08 <akasurde> Enphuego, point noted, we need to explore more combinations
16:41:23 <jtyr> sorry, been busy for a minute ...
16:42:29 <mariolenz> i think that vmware_guest hot-adds cpus: https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_guest.py#L1033
16:43:16 <akasurde> mariolenz, yes, it is possible to add CPUs while the VM is on with CPU hot add enabled
16:43:19 <jtyr> What's the concern?
16:43:43 <jtyr> regarding 43435?
16:44:30 <akasurde> The concern is that the line https://github.com/ansible/ansible/pull/65429/files#diff-ed147163551d1eaa084b21f6bab29f96R44 is not accurate when it says "number of CPUs"
16:44:44 <jtyr> regarding 65429, I can add: unless hotcpu is enabled ... or something like that
16:44:49 <akasurde> jtyr, we will come back to 43435
16:44:55 <Enphuego> The concern for 43435 is what happens when I have a big local LUN attached to one host in the cluster
16:45:27 <Enphuego> on 65429, the module will actually hot add CPUs?
16:45:31 <akasurde> Enphuego, only attached to one host?
16:45:59 <jtyr> I don't have hot adding enabled, so I cannot check ...
16:46:19 <Enphuego> yes, only one host. I won't do it as a practice in my environment, but I've seen it happen plenty
16:46:38 <akasurde> jtyr, the best way is to use an example like the amount of RAM or something
16:47:00 <jtyr> ok, I will remove the note about CPUs ...
16:47:14 <akasurde> jtyr, cool
16:47:36 <akasurde> #action jtyr Change example wording in 65429
16:47:42 <Enphuego> let me give hot add a whirl in my environment
16:47:57 <akasurde> Now, PR 43435
16:47:59 <jtyr> akasurde: change done
16:48:33 <akasurde> jtyr, lgtm
16:48:47 * akasurde waiting for CI to finish
16:49:07 <akasurde> jtyr, for PR 43435 Enphuego has a valid concern
16:49:11 <Enphuego> does this line only pull storage available to the whole cluster? 'cluster = self.find_cluster_by_name(self.params['cluster'], self.content)'
16:49:36 <akasurde> Enphuego, it will only pull cluster objects
16:49:54 <akasurde> the next line gets all hosts and their mount points
16:50:01 <Enphuego> oh, the next line loops through the hosts pulling all mount points
16:50:13 <Enphuego> so yes, it will definitely pick up local storage
16:50:45 <akasurde> Does the cluster have the intelligence to use that exclusive storage?
16:50:53 <jtyr> is that a bad thing?
16:51:21 <akasurde> jtyr, frankly speaking, I never tried such a setup
16:51:46 <Enphuego> the cluster doesn't have any intelligence to use the local storage, just the host. No guarantees that the host picked for the build will be the host with the storage
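To make the loop under discussion concrete: a minimal pyVmomi sketch of the gathering step, assuming cluster is a vim.ClusterComputeResource returned by something like find_cluster_by_name. This illustrates the approach rather than the PR's exact code, and shows why host-local volumes end up in the candidate list:

    # cluster is assumed to be a vim.ClusterComputeResource.
    candidates = []
    for host in cluster.host:
        if host.config is None:  # skip disconnected hosts
            continue
        for mount in host.config.fileSystemVolume.mountInfo:
            # A volume local to this host looks exactly the same here as
            # a shared one; both appear in the host's mountInfo, which is
            # why local storage gets picked up too.
            if mount.volume.type == 'VMFS':  # NFS datastores would need 'NFS' as well
                candidates.append(mount.volume.name)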
16:51:55 <jtyr> I had ESXis with local storage only in the past
16:52:18 <Enphuego> I've seen it especially when they repurposed hardware
16:52:38 <jtyr> and I was using the vmware_guest module to provision directly on ESXi when there was no vCenter available ...
16:52:38 <Enphuego> you could check the storage from two hosts in the cluster and only use the storage common to both
16:53:00 <jtyr> and that patch is from that time, I think ...
16:54:28 <akasurde> Enphuego, can you test whether 43435 works for exclusive storage in a cluster?
16:54:50 <akasurde> Since I am not in the lab, I can't test now
16:54:56 <Enphuego> let me give it a try
16:55:01 <akasurde> thanks
16:56:39 <Enphuego> I don't have 2.9, will it work properly if I load the changed files into a 2.8 install?
16:56:45 <mariolenz> even if the host picked for the build is the host with the storage, that's probably not what people want. if i deploy a vm on a cluster, i don't want it to end up on a local datastore.
16:57:33 <jtyr> if you are not using ESXi only ... ;o)
16:57:58 <jtyr> can we check somehow if the host is in a cluster?
16:58:13 <Enphuego> if you are targeting a cluster, hopefully it's in a cluster?
16:59:26 <jtyr> well ... the module works well with ESXi as well ...
16:59:37 <jtyr> this is what I was using it for ...
16:59:41 <Enphuego> right, but there are two sets of logic there... one for a cluster, one for a host
17:00:36 <jtyr> akasurde: CI finished: https://github.com/ansible/ansible/pull/65429
17:03:43 <akasurde> I think for a host this logic will work, I am not too sure about a cluster though
17:04:03 <mariolenz> Enphuego: if you use the module against a single host, local storage should be ok imho.
17:04:05 <akasurde> At first I thought it would, but with the case of exclusive storage I am confused
17:04:15 <mariolenz> but it seems to work only with vmfs datastores? some people use nfs.
17:04:45 <Enphuego> yes, if you are using it against a single host then local storage would be an expected outcome
17:05:24 <Enphuego> I'm trying to find a host in my environment that has a large local disk...
17:09:03 <Enphuego> I'm making a host that has a 200TB lun, will be just a minute
17:17:22 <akasurde> Enphuego, jtyr, mariolenz, what do you suggest for this situation?
17:22:37 <Enphuego> I'd suggest checking two hosts in the cluster and selecting only storage that's on both
17:23:49 <Enphuego> that will get rid of most edge cases
17:24:17 <mariolenz> akasurde: i'm not sure, i guess i'll have to think about this. and try one or two things ;-) maybe i can have a closer look into this pr tomorrow and comment.
17:24:42 <akasurde> OK
17:24:58 <akasurde> I feel the same, I will test in my environment too and comment again
17:24:58 <mariolenz> or maybe even today, but then without testing anything.
17:25:15 <akasurde> mariolenz, take your time
17:25:26 <Enphuego> I really wonder how many people let it auto-pick the storage in the first place
17:25:43 <akasurde> Enphuego, a lot, I guess
17:26:18 <akasurde> Since we are a little over time, I will end this meeting now
17:26:32 <akasurde> See you soon, people, thanks for joining in
17:26:49 <akasurde> #endmeeting
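For reference, a minimal, hypothetical pyVmomi sketch of the fix Enphuego suggests in the discussion above: intersect the datastore sets of the cluster's hosts so only storage visible to every host is eligible for auto-selection. The helper name shared_datastores is invented for illustration, and cluster is again assumed to be a vim.ClusterComputeResource:

    def shared_datastores(cluster):
        # Hypothetical helper illustrating the "common to all hosts" idea:
        # host-local storage drops out of the intersection because it is
        # mounted on only one host.
        shared = None
        for host in cluster.host:
            names = {ds.name for ds in host.datastore}
            shared = names if shared is None else shared & names
        return shared or set()

Intersecting across every host rather than just two also covers the case where two different hosts each have their own local LUN.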