15:58:47 <dustymabe> #startmeeting fedora_cloud_meeting
15:58:47 <zodbot> Meeting started Tue Sep 29 15:58:47 2020 UTC.
15:58:47 <zodbot> This meeting is logged and archived in a public location.
15:58:47 <zodbot> The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:58:47 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:58:47 <zodbot> The meeting name has been set to 'fedora_cloud_meeting'
15:58:52 <dustymabe> #topic roll call
15:58:54 <dustymabe> .hello2
15:58:55 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
15:59:22 <jdoss> .hello2
15:59:23 <zodbot> jdoss: jdoss 'Joe Doss' <joe@solidadmin.com>
15:59:24 <cyberpear> .hello2
15:59:26 <zodbot> cyberpear: cyberpear 'James Cassell' <fedoraproject@cyberpear.com>
15:59:30 <michaelgugino> .hello2
15:59:31 <zodbot> michaelgugino: michaelgugino 'Michael Gugino' <gugino.michael@yahoo.com>
15:59:32 * jdoss waves
16:00:03 * dustymabe waves
16:00:33 <darkmuggle> .hello2
16:00:34 <zodbot> darkmuggle: darkmuggle 'None' <me@muggle.dev>
16:00:48 <dustymabe> #chair jdoss cyberpear michaelgugino darkmuggle
16:00:48 <zodbot> Current chairs: cyberpear darkmuggle dustymabe jdoss michaelgugino
16:01:01 <dustymabe> FYI I'm double booked so might be a bit distracted
16:01:10 <michaelgugino> I have some updates
16:01:31 <darkmuggle> likewise, I too am distracted by a conflicting meeting
16:01:43 <dustymabe> #topic Action items from last meeting
16:01:52 <dustymabe> we had one action item last meeting
16:01:57 <dustymabe> * Eighth_Doctor to work with ericedens on getting the google cloud
16:02:00 <dustymabe> agents packaged for a GCP image
16:02:05 <King_InuYasha> .hello ngompa
16:02:06 <zodbot> King_InuYasha: ngompa 'Neal Gompa' <ngompa13@gmail.com>
16:02:08 <dustymabe> welcome King_InuYasha - we were just talking about you :)
16:02:08 <King_InuYasha> hey all!
16:02:12 <dustymabe> * Eighth_Doctor to work with ericedens on getting the google cloud
16:02:14 <dustymabe> agents packaged for a GCP image
16:02:18 <dustymabe> ^^ action item from last meeting
16:02:38 <King_InuYasha> yeah, I think we started a conversation, but then it didn't really lead anywhere
16:03:01 <King_InuYasha> basically, since it uses Go modules, we just need to check with the Go SIG to see what we need to do to adapt for that
16:03:24 <King_InuYasha> then we need to package up the Go dependencies accordingly, but go2rpm should make that easy
16:03:28 <dustymabe> King_InuYasha: do none of our go packages use go modules?
16:03:37 <King_InuYasha> currently, none that I know of
16:03:58 <King_InuYasha> and again, I don't have a problem with Go modules, just that I don't know how we handle them
16:04:05 * jdoss groans about golang packaging
16:04:32 <dustymabe> King_InuYasha: do you mind reaching out to that group in this next cycle?
16:04:38 <King_InuYasha> sure
16:04:40 <King_InuYasha> I can do that
16:05:05 <dustymabe> #action King_InuYasha to discuss the options for packaging go applications that use go modules with the golang fedora SIG to see if there are any implications there.
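[Editor's note: a minimal sketch of the go2rpm workflow King_InuYasha refers to above, under the assumption that go2rpm is driven from the upstream Go import path. The guest-agent import path is only an illustrative guess at one of the GCP agents being discussed; nothing here was agreed in the meeting, and whether Go modules need extra handling is exactly the open question for the Go SIG.]

  # install the spec generator maintained by the Fedora Go SIG
  sudo dnf install -y go2rpm

  # generate a spec skeleton from the upstream Go import path
  # (hypothetical example of one of the GCP agents)
  go2rpm github.com/GoogleCloudPlatform/guest-agent

  # inspect/adjust the generated spec, then repeat for each dependency
  # listed in the project's go.mod before submitting package reviews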
16:05:06 <King_InuYasha> jdoss: it's better than finding out that we're vulnerable to a CVE and can't fix it because there's 100 copies of the bloody thing
16:05:21 <dustymabe> RATHOLE :)
16:05:28 <King_InuYasha> hah
16:05:38 <jdoss> I have my tinfoil hat on. CVEs do not apply to me TYVM
16:05:45 * King_InuYasha snorts
16:05:55 <King_InuYasha> there are obviously things we can do to shortcut for expediency, but I'd like to figure out what our options are first
16:06:16 <dustymabe> #topic regular releases: uploading to clouds
16:06:24 <dustymabe> #link https://pagure.io/cloud-sig/issue/301
16:06:30 <dustymabe> michaelgugino: did you say you had updates?
16:06:41 <michaelgugino> yes
16:07:18 <michaelgugino> So, I spent some more time working on this this past weekend. Uploading images to AWS is a lot of tedious work. fedimg is the text-book example of how to do it on the web; it's such a pain.
16:07:33 * King_InuYasha is not surprised
16:08:24 <dustymabe> michaelgugino: well i don't know if getting all the way to the point of doing the uploads was totally on you
16:08:26 <michaelgugino> So, I dug a little deeper into what fedimg was doing and how I might replicate that behavior. We could continue to use fedimg for the aws upload parts, maybe make the upload part more portable to be used with a different listener. Or we could re-implement the aws bits. I'm kinda on the fence.
16:08:46 <dustymabe> michaelgugino: we were hopefully going to use a tool for this
16:09:01 <dustymabe> either the mantle codebase (with ore) or mash (from opensuse)
16:09:13 <michaelgugino> Yeah, I know it wasn't all on me. I took a look at the mash thing, couldn't really make sense of how it might be utilized as there are like no docs.
16:09:15 <dustymabe> michaelgugino: ideally your code just calls out to a binary that does all the work for you
16:09:33 <jdoss> IMO Mantle is prob a better long term solution.
16:09:34 <michaelgugino> So, right, but that gave me some insight on the overall arch of what we need to do.
16:09:41 <King_InuYasha> michaelgugino: if there's missing info, let me know, I can ask them to flesh out
16:09:47 <King_InuYasha> or file issues so that they know
16:10:05 <King_InuYasha> at least one of the members working on mash is someone I know personally, and he's interested in seeing us use it
16:10:08 <jdoss> or really, if we need to get this done use Bash or Ansible and get it out the door. Something other than fedimg
16:10:16 <jdoss> or another tool.
16:10:49 <michaelgugino> So, we need our own task queue IMO, because there are a series of steps that need to be performed for each provider, and having our own queue would allow us to be more durable. Right now, fedimg uses a local thread pool to do the upload bits, so if it fails half way through, nothing will restart the upload because the pungi task has already been ack'd from fed-msg.
16:11:03 <King_InuYasha> we don't already have that in pungi?
16:11:06 <King_InuYasha> I thought we did
16:11:11 <dustymabe> that works for me jdoss as an MVP if we want
16:11:47 <dustymabe> michaelgugino: you might be right. the good news is that we get new uploads every night so if it fails it's not too long until a new set comes out
16:11:55 <michaelgugino> I took note of the steps needed for aws if we want to do it in bash or port to boto if we're going to take on that logic. Right now, fedimg uses libcloud (bleh) and euca-tools, which looks abandoned upstream.
16:12:10 <dustymabe> yeah it pretty much is IIUC
16:12:32 <dustymabe> michaelgugino: what are the steps you wrote down.. I did this again recently I think and I also wrote down some steps
16:13:08 <dustymabe> these are the steps I wrote down
16:13:17 <dustymabe> aws s3 cp ./test5.vmdk s3://dustymabe-image-import/test5.vmdk
16:13:22 <dustymabe> aws ec2 import-snapshot --disk-container Format=vmdk,UserBucket="{S3Bucket=dustymabe-image-import,S3Key=test5.vmdk}"
16:13:30 <dustymabe> aws ec2 register-image --name test5.vmdk --virtualization-type hvm --architecture x86_64 --root-device-name /dev/xvda --block-device-mappings 'DeviceName=/dev/xvda,VirtualName=string,Ebs={SnapshotId=snap-04b2e952d0debabd7,VolumeSize=4,VolumeType=gp2}'
16:14:20 <michaelgugino> We download the image locally onto the container, unxz, upload the raw (4g) image to s3, run a disk import tool (api/cli only), need to setup a special role and policy for the aforementioned (probably setup already in our case), then run another call to create the AMI, then run a series of calls to clone that AMI to different regions. Each step results in an async task ID (other than the upload, which is obviously a sync operation) that we need to monitor for completion before proceeding to the next step.
16:14:21 * King_InuYasha needs to find some time to package up img-proof and img-mash
16:14:32 <King_InuYasha> it shouldn't be that hard, it's just... time
16:15:23 <dustymabe> right. so the good news is that michaelgugino looks like he is at the point where we decide the next step
16:15:26 <dustymabe> so we've made progress
16:15:34 <dustymabe> thanks michaelgugino for working on this!
16:15:44 <jdoss> We used an Ansible playbook for our image uploads. You get all the async and checking stuff bolted into everyone's fav programming language... YAML!
16:15:54 <King_InuYasha> michaelgugino: if you could file an issue on mash about how to execute it, that'd help
16:15:57 <michaelgugino> So, there's a lot to it. I think we should use the new codebase to upload to the other platforms, we'll let fedimg keep the lights on for aws until we get something working on GCP, and then we'll pivot.
16:16:06 <King_InuYasha> otherwise, I'll bug someone the next time I talk to them
16:16:15 <michaelgugino> King_InuYasha: on the mash repo itself?
16:16:17 <King_InuYasha> yes
16:16:20 <michaelgugino> will do
16:16:39 <King_InuYasha> awesome
16:16:58 <michaelgugino> I considered going down the ansible route, I'm sure there's tooling to do this there.
16:17:27 <michaelgugino> More or less though, need to sit down and flesh out a design for another queue and how we want tasks to proceed. I think we can use a local thread pool as fedimg does today for the time being, though.
16:17:37 <King_InuYasha> sounds good
16:17:41 <dustymabe> michaelgugino: do you mind posting an update in the ticket?
16:17:58 <dustymabe> michaelgugino: do you want to schedule some time for a few of us to get together and chat on that design?
16:18:01 <michaelgugino> I will do this. Wasn't sure what I wanted to write yet, just wanted to talk through it.
16:18:02 <jdoss> IMO ansible would be the fast path to getting it done, but I am down for whatever.
16:18:45 <dustymabe> jdoss: does ansible support creating images on various cloud platforms?
16:18:50 <dustymabe> maybe that's the move?
16:18:53 <dustymabe> not sure
16:19:05 * dustymabe wonders how flexible it is
16:19:06 <jdoss> just call the CLI tools and use it to glue everything together.
16:19:18 <michaelgugino> yeah, a meeting might be useful to flesh out the design. The now design and the future design. I don't think either is going to be super complicated, but I'm not sure about how to get new queues and stuff on fed-msg, etc.
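[Editor's note: a hedged sketch of the AWS flow that dustymabe's commands and michaelgugino's summary above describe, glued together with the aws CLI along the lines jdoss suggests. The bucket name comes from dustymabe's example; the file name, volume size, source region, and target regions are placeholders, and the IAM role/policy setup and the initial download/unxz of the image are omitted. This is not the SIG's agreed tooling, just an illustration of the import, register, poll, and copy steps.]

  #!/bin/bash
  # Sketch: upload a disk image, import it as a snapshot, register an AMI,
  # and copy the AMI to other regions, polling each async task as needed.
  set -euo pipefail

  BUCKET=dustymabe-image-import        # from dustymabe's example above
  IMAGE=test5.vmdk                     # placeholder image file
  REGIONS="us-west-2 eu-west-1"        # placeholder target regions

  # 1. upload the disk image to S3 (synchronous)
  aws s3 cp "./${IMAGE}" "s3://${BUCKET}/${IMAGE}"

  # 2. start the snapshot import (asynchronous; returns a task id)
  TASK_ID=$(aws ec2 import-snapshot \
    --disk-container "Format=vmdk,UserBucket={S3Bucket=${BUCKET},S3Key=${IMAGE}}" \
    --query 'ImportTaskId' --output text)

  # 3. poll until the import finishes, then grab the snapshot id
  while true; do
    STATUS=$(aws ec2 describe-import-snapshot-tasks --import-task-ids "${TASK_ID}" \
      --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' --output text)
    if [ "${STATUS}" = "completed" ]; then break; fi
    sleep 30
  done
  SNAP_ID=$(aws ec2 describe-import-snapshot-tasks --import-task-ids "${TASK_ID}" \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text)

  # 4. register the AMI from the imported snapshot
  AMI_ID=$(aws ec2 register-image --name "${IMAGE}" \
    --virtualization-type hvm --architecture x86_64 --root-device-name /dev/xvda \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=${SNAP_ID},VolumeSize=4,VolumeType=gp2}" \
    --query 'ImageId' --output text)

  # 5. clone the AMI out to the other regions (each copy is itself async
  #    and would need the same kind of polling; us-east-1 is an assumed
  #    source region)
  for region in ${REGIONS}; do
    aws ec2 copy-image --source-image-id "${AMI_ID}" --source-region us-east-1 \
      --region "${region}" --name "${IMAGE}"
  done

[The same steps could just as well be wrapped as command/shell tasks in an Ansible playbook or ported to boto, per the discussion above; the sequencing and polling are the part any implementation has to carry.]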
16:19:32 <michaelgugino> Seems like it would be half design, half create tickets.
16:19:41 <dustymabe> michaelgugino: +1
16:20:04 <dustymabe> #action michaelgugino to schedule some time for a few of us to get together to flesh out design and next steps for cloud image upload bits
16:20:12 <michaelgugino> The nice thing about ansible is if the modules are useful, it's trivial to just scrape those bits and incorporate into another program if we don't want to run a full ansible.
16:20:57 <dustymabe> shall we move on to the next topic?
16:21:16 <michaelgugino> how do I schedule the meeting?
16:22:08 <dustymabe> michaelgugino: I would just send a calendar invite to the few of us who are interested
16:22:22 <dustymabe> if you're interested can you share your email addr with michaelgugino ? ^^
16:23:06 <King_InuYasha> I think michaelgugino already has mine ;)
16:23:20 <dustymabe> michaelgugino: sound good?
16:23:54 <jdoss> sent
16:24:35 <dustymabe> next topic....
16:24:44 <dustymabe> #topic publish a cloud image for GCP
16:24:59 <dustymabe> #link https://pagure.io/cloud-sig/issue/310
16:25:41 <dustymabe> good news the first f33 images were created in last night's run
16:25:49 <dustymabe> https://pagure.io/cloud-sig/issue/310#comment-688932
16:26:02 <dustymabe> anybody want to give it a spin?
16:27:01 <King_InuYasha> I'm not rich enough to randomly spend money in the public cloud :P
16:27:24 <King_InuYasha> but our GCP image is a UEFI-native one, right?
16:27:32 <dustymabe> King_InuYasha: I can give you a project in GCP temporarily if you're willing to test it out
16:27:41 <King_InuYasha> dustymabe: then yeah, I can test it
16:27:46 <dustymabe> +1
16:27:50 <King_InuYasha> I like playing with clouds, I just can't afford them :)
16:28:13 <dustymabe> King_InuYasha: yeah it's UEFI
16:28:45 <dustymabe> #action dustymabe to give King_InuYasha access to a project in GCP to do testing
16:29:05 <dustymabe> #action King_InuYasha to test out f33 fedora cloud base GCP images
16:29:06 <King_InuYasha> dustymabe: how hard would it be to generate an Azure/Hyper-V compatible image too?
16:29:15 <King_InuYasha> we still have that ask for such an image
16:29:56 <dustymabe> King_InuYasha: i'm not sure. I don't have much experience with Azure image specifics
16:30:44 <King_InuYasha> do we know anyone who knows anything about how Azure images are formatted?
16:30:46 <dustymabe> it takes an interested party to kind of drive it a bit
16:30:48 <King_InuYasha> maybe in RH cloud team?
16:31:02 <dustymabe> I think darkmuggle knows
16:31:23 <dustymabe> part of the reason I've been spearheading GCP is because we have involvement from GCP engineers who are doing at least half of the work
16:31:27 <King_InuYasha> right
16:31:29 <jdoss> I have purged all my Azure understanding.
16:31:31 <dustymabe> so I figured we should take advantage of that opportunity
16:31:34 <King_InuYasha> that's fair
16:31:49 <King_InuYasha> I wonder if Evolution might be able to help us get that from Azure folks
16:31:53 <dustymabe> King_InuYasha: the biggest problem with Azure is we don't have an account so I can't give out access :)
16:32:02 <King_InuYasha> haha
16:32:13 <King_InuYasha> that sounds like a thing worth talking to Evolution about
16:32:24 <dustymabe> maybe :)
16:32:45 <dustymabe> moving on :)
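[Editor's note: a rough sketch of how the f33 GCP image might be smoke-tested from the loaned project dustymabe offers above. The project, image, zone, and machine-type names are invented placeholders, not values from the meeting.]

  # point gcloud at the loaned project (hypothetical name)
  gcloud config set project fedora-cloud-testing

  # confirm the f33 Cloud Base image is visible in the project
  gcloud compute images list --project fedora-cloud-testing --no-standard-images

  # boot a test instance from it and do a quick sanity check over SSH
  gcloud compute instances create f33-cloud-test \
    --zone us-central1-a \
    --image fedora-cloud-base-33-test \
    --image-project fedora-cloud-testing \
    --machine-type e2-small
  gcloud compute ssh f33-cloud-test --zone us-central1-a -- cat /etc/os-release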
16:32:48 <dustymabe> #topic open floor
16:32:53 <dustymabe> anyone with anything for open floor?
16:33:11 <King_InuYasha> yeah, I've got something
16:33:20 <dustymabe> \o/
16:33:36 <King_InuYasha> so... davdunc filed a request with Fedora Council last week: https://pagure.io/Fedora-Council/tickets/issue/332
16:34:03 <King_InuYasha> the request is to work with AWS to get them promoted into the AWS Marketplace
16:34:08 <King_InuYasha> similar to how CentOS images are
16:34:54 <King_InuYasha> I think this is a good idea, and I'm thinking of creating a fedora-cloud group where we can have namespaced projects for the different cloud providers
16:35:14 <King_InuYasha> where we and the cloud provider representatives have ticket and code access to manage those relationships
16:35:30 <King_InuYasha> so, what does everyone think of this?
16:35:46 <dustymabe> I think I'm generally +1
16:36:06 <dustymabe> though I don't know if we want to create a bunch of new repos/trackers just yet
16:36:19 <King_InuYasha> well, we need one at least for aws
16:36:29 <King_InuYasha> so I figured I might as well create the scheme correctly for this
16:36:35 <King_InuYasha> and be slightly forward thinking
16:36:44 <dustymabe> is it a tracker or is there actual code/files we need to keep up with
16:36:47 <King_InuYasha> both
16:36:56 <jdoss> I am still very interested in using Ignition on Fedora Cloud
16:36:59 <King_InuYasha> there's code that davdunc is going to be managing in there
16:38:21 <michaelgugino> getting into marketplace would be great.
16:38:37 <dustymabe> King_InuYasha: +1
16:38:49 <King_InuYasha> I believe davdunc is working on writing code and Jenkins pipeline stuff for managing that aspect as an AWS guy in the open, and he wanted a repo for that
16:39:00 <King_InuYasha> I figured it'd make sense to combine that with a proper tracker everyone knows
16:39:14 <King_InuYasha> and if we need this for other clouds in the future, having a decent way to handle that makes sense to me
16:39:18 <dustymabe> King_InuYasha: would managing the files/issues in the top level repo be useful (namespaced sub directories)
16:39:29 <dustymabe> or is there a lot of churn in the repo?
16:39:33 <King_InuYasha> I don't know
16:39:34 <dustymabe> i.e. files change very often
16:39:45 <King_InuYasha> that might be a question for davdunc since I don't know too much of what he's doing there
16:39:55 <King_InuYasha> I would potentially lean towards churning more often than not
16:40:04 <King_InuYasha> but again, I have no idea
16:40:27 <dustymabe> King_InuYasha: yeah. at least right now our repo isn't high volume so I'd lean more towards trying to keep just one until we get to the point it becomes too active and we want to split it out
16:40:42 <dustymabe> but I'm flexible :)
16:40:49 <King_InuYasha> the main reason I'm going for splitting out is because AWS people need to co-own the repo
16:41:02 <King_InuYasha> and that scales kinda badly if we have AWS, GCP, Azure, OCI, etc.
16:41:28 <dustymabe> yeah, at least for AWS I don't mind @davdunc co-owning
16:41:56 <King_InuYasha> we could also be clever and use branches in the main repo
16:42:07 <King_InuYasha> but that's an undiscoverable abstraction imo
16:42:11 <dustymabe> yeah I agree
16:42:26 <dustymabe> let's *try* to keep one repo and if it doesn't work it doesn't work
16:42:34 <King_InuYasha> okay
16:42:48 <King_InuYasha> so we add davdunc to cloud-sig repo and make a folder for him to put stuff in?
16:42:52 <dustymabe> and at the end of the day, I trust your judgement on if it is working or not
16:43:00 <King_InuYasha> and if we need to split it out, I can do a filter branch easily enough :)
16:43:01 <dustymabe> yep, King_InuYasha that works for me
16:43:29 <King_InuYasha> so throw that as an action for me and I'll get that set up
16:44:20 <dustymabe> #action King_InuYasha to work with davdunc on structure for files to add to the cloud-sig repo for AWS marketplace enablement
16:44:31 <King_InuYasha> \o/
16:45:04 <dustymabe> jdoss: yeah I think darkmuggle is going to look at Ignition for the cloud images in the f34 cycle
16:45:11 <King_InuYasha> that's going to be exciting
16:45:30 <jdoss> I am looking at doing Ignition in a Rootfs right now
16:45:36 <jdoss> so that prob lines up here.
16:45:59 <King_InuYasha> I'm toying with the idea of an experimental transactional-update-style cloud image
16:46:17 <dustymabe> with BTRFS?
16:46:19 <King_InuYasha> yes
16:46:26 <jdoss> King_InuYasha: sign me up for that newsletter.
16:46:58 <King_InuYasha> I've been in contact with fos from the openSUSE Kubic guys about porting the technology to run with DNF
16:47:09 <King_InuYasha> and adapting it properly for Fedora
16:47:19 <dustymabe> sounds interesting
16:47:51 <King_InuYasha> I like the idea as a halfway point between CoreOS and traditional fedora
16:48:16 <King_InuYasha> more compatibility with traditional software and way easier to do fancy things like mass deployments and synchronized updates
16:48:49 <dustymabe> anything else before we close?
16:49:35 <King_InuYasha> nothing from me
16:49:48 <dustymabe> #endmeeting