15:58:47 #startmeeting fedora_cloud_meeting
15:58:47 Meeting started Tue Sep 29 15:58:47 2020 UTC.
15:58:47 This meeting is logged and archived in a public location.
15:58:47 The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:58:47 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:58:47 The meeting name has been set to 'fedora_cloud_meeting'
15:58:52 #topic roll call
15:58:54 .hello2
15:58:55 dustymabe: dustymabe 'Dusty Mabe'
15:59:22 .hello2
15:59:23 jdoss: jdoss 'Joe Doss'
15:59:24 .hello2
15:59:26 cyberpear: cyberpear 'James Cassell'
15:59:30 .hello2
15:59:31 michaelgugino: michaelgugino 'Michael Gugino'
15:59:32 * jdoss waves
16:00:03 * dustymabe waves
16:00:33 .hello2
16:00:34 darkmuggle: darkmuggle 'None'
16:00:48 #chair jdoss cyberpear michaelgugino darkmuggle
16:00:48 Current chairs: cyberpear darkmuggle dustymabe jdoss michaelgugino
16:01:01 FYI I'm double booked so might be a bit distracted
16:01:10 I have some updates
16:01:31 likewise, I too am distracted by a conflicting meeting
16:01:43 #topic Action items from last meeting
16:01:52 we had one action item last meeting
16:01:57 * Eighth_Doctor to work with ericedens on getting the google cloud
16:02:00 agents packaged for a GCP image
16:02:05 .hello ngompa
16:02:06 King_InuYasha: ngompa 'Neal Gompa'
16:02:08 welcome King_InuYasha - we were just talking about you :)
16:02:08 hey all!
16:02:12 * Eighth_Doctor to work with ericedens on getting the google cloud
16:02:14 agents packaged for a GCP image
16:02:18 ^^ action item from last meeting
16:02:38 yeah, I think we started a conversation, but then it didn't really lead anywhere
16:03:01 basically, since it uses Go modules, we just need to check with the Go SIG to see what we need to do to adapt for that
16:03:24 then we need to package up the Go dependencies accordingly, but go2rpm should make that easy
16:03:28 King_InuYasha: do none of our go packages use go modules?
16:03:37 currently, none that I know of
16:03:58 and again, I don't have a problem with Go modules, just that I don't know how we handle them
16:04:05 * jdoss groans about golang packaging
16:04:32 King_InuYasha: do you mind reaching out to that group in this next cycle?
16:04:38 sure
16:04:40 I can do that
16:05:05 #action King_InuYasha to discuss the options for packaging go applications that use go modules with the golang fedora SIG to see if there are any implications there.
16:05:06 jdoss: it's better than finding out that we're vulnerable to a CVE and can't fix it because there's 100 copies of the bloody thing
16:05:21 RATHOLE :)
16:05:28 hah
16:05:38 I have my tinfoil hat on. CVEs do not apply to me TYVM
16:05:45 * King_InuYasha snorts
16:05:55 there are obviously things we can do to shortcut for expediency, but I'd like to figure out what our options are first
16:06:16 #topic regular releases: uploading to clouds
16:06:24 #link https://pagure.io/cloud-sig/issue/301
16:06:30 michaelgugino: did you say you had updates?
16:06:41 yes
16:07:18 So, I spent some more time working on this this past weekend. Uploading images to AWS is a lot of tedious work. fedimg is the textbook example on the web of how to do it; it's such a pain.
16:07:33 * King_InuYasha is not surprised
16:08:24 michaelgugino: well I don't know if getting all the way to the point of doing the uploads was totally on you
16:08:26 So, I dug a little deeper into what fedimg was doing and how I might replicate that behavior. We could continue to use fedimg for the aws upload parts, maybe make the upload part more portable to be used with a different listener. Or we could re-implement the aws bits. I'm kinda on the fence.
16:08:46 michaelgugino: we were hopefully going to use a tool for this
16:09:01 either the mantle codebase (with ore) or mash (from openSUSE)
16:09:13 Yeah, I know it wasn't all on me. I took a look at the mash thing, couldn't really make sense of how it might be utilized as there are like no docs.
16:09:15 michaelgugino: ideally your code just calls out to a binary that does all the work for you
16:09:33 IMO Mantle is prob a better long term solution.
16:09:34 So, right, but that gave me some insight on the overall arch of what we need to do.
16:09:41 michaelgugino: if there's missing info, let me know, I can ask them to flesh it out
16:09:47 or file issues so that they know
16:10:05 at least one of the members working on mash is someone I know personally, and he's interested in seeing us use it
16:10:08 or really, if we need to get this done use Bash or Ansible and get it out the door. Something other than fedimg
16:10:16 or another tool.
16:10:49 So, we need our own task queue IMO, because there are a series of steps that need to be performed for each provider, and having our own queue would allow us to be more durable. Right now, fedimg uses a local thread pool to do the upload bits, so if it fails halfway through, nothing will restart the upload because the pungi task has already been ack'd from fed-msg.
16:11:03 we don't already have that in pungi?
16:11:06 I thought we did
16:11:11 that works for me jdoss as an MVP if we want
16:11:47 michaelgugino: you might be right. the good news is that we get new uploads every night so if it fails it's not too long until a new set comes out
16:11:55 I took note of the steps needed for aws if we want to do it in bash or port to boto if we're going to take on that logic. Right now, fedimg uses libcloud (bleh) and euca-tools which looks abandoned upstream.
16:12:10 yeah it pretty much is IIUC
16:12:32 michaelgugino: what are the steps you wrote down.. I did this again recently I think and I also wrote down some steps
16:13:08 these are the steps I wrote down
16:13:17 aws s3 cp ./test5.vmdk s3://dustymabe-image-import/test5.vmdk
16:13:22 aws ec2 import-snapshot --disk-container Format=vmdk,UserBucket="{S3Bucket=dustymabe-image-import,S3Key=test5.vmdk}"
16:13:30 aws ec2 register-image --name test5.vmdk --virtualization-type hvm --architecture x86_64 --root-device-name /dev/xvda --block-device-mappings 'DeviceName=/dev/xvda,VirtualName=string,Ebs={SnapshotId=snap-04b2e952d0debabd7,VolumeSize=4,VolumeType=gp2}'
16:14:20 We download the image locally onto the container, unxz it, upload the raw (4G) image to S3, run a disk import tool (API/CLI only), need to set up a special role and policy for the aforementioned (probably set up already in our case), then run another call to create the AMI, then run a series of calls to clone that AMI to different regions. Each step results in an async task ID (other than the upload, which is obviously a sync operation) that we need to monitor for completion before proceeding to the next step.
16:14:21 * King_InuYasha needs to find some time to package up img-proof and img-mash
16:14:32 it shouldn't be that hard, it's just... time
16:15:23 right. so the good news is that michaelgugino looks like he is at the point where we decide the next step
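
[Editor's note] For reference, here is a minimal sketch of the flow michaelgugino walks through above, using only the AWS CLI. The bucket, image name, volume size, and target regions are placeholders carried over from the example commands, and the polling and copy steps shown are just one way to handle the async task IDs he mentions; in practice a tool like ore, mash, or fedimg would own this logic, including retries.

  #!/usr/bin/env bash
  # Sketch only -- bucket, image name, size, and regions are placeholders.
  set -euo pipefail
  BUCKET=dustymabe-image-import
  IMAGE=test5.vmdk
  REGIONS="us-west-2 eu-west-1"

  # 1. Upload the disk image to S3 (the only synchronous step).
  aws s3 cp "./${IMAGE}" "s3://${BUCKET}/${IMAGE}"

  # 2. Start the snapshot import; this returns an async task ID.
  task_id=$(aws ec2 import-snapshot \
    --disk-container "Format=vmdk,UserBucket={S3Bucket=${BUCKET},S3Key=${IMAGE}}" \
    --query ImportTaskId --output text)

  # 3. Poll the import task until the snapshot exists.
  while true; do
    status=$(aws ec2 describe-import-snapshot-tasks --import-task-ids "${task_id}" \
      --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' --output text)
    if [ "${status}" = "completed" ]; then
      break
    elif [ "${status}" = "deleted" ] || [ "${status}" = "deleting" ]; then
      echo "snapshot import failed" >&2
      exit 1
    fi
    sleep 30
  done
  snapshot_id=$(aws ec2 describe-import-snapshot-tasks --import-task-ids "${task_id}" \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text)

  # 4. Register an AMI on top of the imported snapshot.
  ami_id=$(aws ec2 register-image --name "${IMAGE}" \
    --virtualization-type hvm --architecture x86_64 --root-device-name /dev/xvda \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=${snapshot_id},VolumeSize=4,VolumeType=gp2}" \
    --query ImageId --output text)

  # 5. Clone the AMI to the other regions (each copy is async as well).
  for region in ${REGIONS}; do
    aws ec2 copy-image --region "${region}" \
      --source-region "$(aws configure get region)" \
      --source-image-id "${ami_id}" --name "${IMAGE}"
  done

The wait-and-check wrapping around each step is the part fedimg's local thread pool currently handles; a durable queue, as discussed below, would let a failed step be retried instead of losing the whole upload.
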
16:15:26 so we've made progress
16:15:34 thanks michaelgugino for working on this!
16:15:44 We used an Ansible playbook for our image uploads. You get all the async and checking stuff bolted into everyone's fav programming language... YAML!
16:15:54 michaelgugino: if you could file an issue on mash about how to execute it, that'd help
16:15:57 So, there's a lot to it. I think we should use the new codebase to upload to the other platforms, we'll let fedimg keep the lights on for aws until we get something working on GCP, and then we'll pivot.
16:16:06 otherwise, I'll bug someone the next time I talk to them
16:16:15 King_InuYasha: on the mash repo itself?
16:16:17 yes
16:16:20 will do
16:16:39 awesome
16:16:58 I considered going down the ansible route, I'm sure there's tooling to do this there.
16:17:27 More or less though, we need to sit down and flesh out a design for another queue and how we want tasks to proceed. I think we can use a local thread pool as fedimg does today for the time being, though.
16:17:37 sounds good
16:17:41 michaelgugino: do you mind posting an update in the ticket?
16:17:58 michaelgugino: do you want to schedule some time for a few of us to get together and chat on that design?
16:18:01 I will do this. Wasn't sure what I wanted to write yet, just wanted to talk through it.
16:18:02 IMO ansible would be the fast path to getting it done, but I am down for whatever.
16:18:45 jdoss: does ansible support creating images on various cloud platforms?
16:18:50 maybe that's the move?
16:18:53 not sure
16:19:05 * dustymabe wonders how flexible it is
16:19:06 just call the CLI tools and use it to glue everything together.
16:19:18 yeah, a meeting might be useful to flesh out the design. The now design and the future design. I don't think either is going to be super complicated, but I'm not sure about how to get new queues and stuff on fed-msg, etc.
16:19:32 Seems like it would be half design, half create tickets.
16:19:41 michaelgugino: +1
16:20:04 #action michaelgugino to schedule some time for a few of us to get together to flesh out design and next steps for cloud image upload bits
16:20:12 The nice thing about ansible is if the modules are useful, it's trivial to just scrape those bits and incorporate them into another program if we don't want to run a full ansible.
16:20:57 shall we move on to the next topic?
16:21:16 how do I schedule the meeting?
16:22:08 michaelgugino: I would just send a calendar invite to the few of us who are interested
16:22:22 if you're interested can you share your email addr with michaelgugino? ^^
16:23:06 I think michaelgugino already has mine ;)
16:23:20 michaelgugino: sound good?
16:23:54 sent
16:24:35 next topic....
16:24:44 #topic publish a cloud image for GCP
16:24:59 #link https://pagure.io/cloud-sig/issue/310
16:25:41 good news: the first f33 images were created in last night's run
16:25:49 https://pagure.io/cloud-sig/issue/310#comment-688932
16:26:02 anybody want to give it a spin?
16:27:01 I'm not rich enough to randomly spend money in the public cloud :P
16:27:24 but our GCP image is a UEFI-native one, right?
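
[Editor's note] For anyone picking up the testing ask here, a hedged sketch of what "giving it a spin" could look like with the gcloud CLI. The GCS path, image name, zone, and machine type below are placeholders, not the actual locations produced by the nightly composes; only the gcloud subcommands and flags themselves are standard.

  # Import the compose's GCP tarball from a GCS bucket as a UEFI-capable image.
  gcloud compute images create fedora-cloud-base-33-test \
    --source-uri gs://example-bucket/Fedora-Cloud-Base-GCP-33.tar.gz \
    --guest-os-features UEFI_COMPATIBLE

  # Boot an instance from it and do a basic smoke test.
  gcloud compute instances create fedora-33-test \
    --image fedora-cloud-base-33-test \
    --zone us-central1-a \
    --machine-type e2-small
  gcloud compute ssh fedora-33-test --zone us-central1-a --command 'cat /etc/os-release'

Whether UEFI_COMPATIBLE needs to be set explicitly depends on how the image tarball was produced, so treat the first command as a starting point rather than a recipe.
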
16:27:32 King_InuYasha: I can give you a project in GCP temporarily if you're willing to test it out
16:27:41 dustymabe: then yeah, I can test it
16:27:46 +1
16:27:50 I like playing with clouds, I just can't afford them :)
16:28:13 King_InuYasha: yeah it's UEFI
16:28:45 #action dustymabe to give King_InuYasha access to a project in GCP to do testing
16:29:05 #action King_InuYasha to test out f33 fedora cloud base GCP images
16:29:06 dustymabe: how hard would it be to generate an Azure/Hyper-V compatible image too?
16:29:15 we still have that ask for such an image
16:29:56 King_InuYasha: I'm not sure. I don't have much experience with Azure image specifics
16:30:44 do we know anyone who knows anything about how Azure images are formatted?
16:30:46 it takes an interested party to kind of drive it a bit
16:30:48 maybe in the RH cloud team?
16:31:02 I think darkmuggle knows
16:31:23 part of the reason I've been spearheading GCP is because we have involvement from GCP engineers who are doing at least half of the work
16:31:27 right
16:31:29 I have purged all my Azure understanding.
16:31:31 so I figured we should take advantage of that opportunity
16:31:34 that's fair
16:31:49 I wonder if Evolution might be able to help us get that from the Azure folks
16:31:53 King_InuYasha: the biggest problem with Azure is we don't have an account so I can't give out access :)
16:32:02 haha
16:32:13 that sounds like a thing worth talking to Evolution about
16:32:24 maybe :)
16:32:45 moving on :)
16:32:48 #topic open floor
16:32:53 anyone with anything for open floor?
16:33:11 yeah, I've got something
16:33:20 \o/
16:33:36 so... davdunc filed a request with Fedora Council last week: https://pagure.io/Fedora-Council/tickets/issue/332
16:34:03 the request is to work with AWS to get Fedora images promoted into the AWS Marketplace
16:34:08 similar to how CentOS images are
16:34:54 I think this is a good idea, and I'm thinking of creating a fedora-cloud group where we can have namespaced projects for the different cloud providers
16:35:14 where we and the cloud provider representatives have ticket and code access to manage those relationships
16:35:30 so, what does everyone think of this?
16:35:46 I think I'm generally +1
16:36:06 though I don't know if we want to create a bunch of new repos/trackers just yet
16:36:19 well, we need one at least for aws
16:36:29 so I figured I might as well create the scheme correctly for this
16:36:35 and be slightly forward thinking
16:36:44 is it a tracker or is there actual code/files we need to keep up with
16:36:47 both
16:36:56 I am still very interested in using Ignition on Fedora Cloud
16:36:59 there's code that davdunc is going to be managing in there
16:38:21 getting into the marketplace would be great.
16:38:37 King_InuYasha: +1
16:38:49 I believe davdunc is working on writing code and Jenkins pipeline stuff for managing that aspect as an AWS guy in the open, and he wanted a repo for that
16:39:00 I figured it'd make sense to combine that with a proper tracker everyone knows
16:39:14 and if we need this for other clouds in the future, having a decent way to handle that makes sense to me
16:39:18 King_InuYasha: would managing the files/issues in the top level repo be useful (namespaced subdirectories)?
16:39:29 or is there a lot of churn in the repo?
16:39:33 I don't know
16:39:34 i.e. files change very often
16:39:45 that might be a question for davdunc since I don't know too much of what he's doing there
16:39:55 I would potentially lean towards churning more often than not
16:40:04 but again, I have no idea
16:40:27 King_InuYasha: yeah. at least right now our repo isn't high volume so I'd lean more towards trying to keep just one until we get to the point it becomes too active and we want to split it out
16:40:42 but I'm flexible :)
16:40:49 the main reason I'm going for splitting out is because AWS people need to co-own the repo
16:41:02 and that scales kinda badly if we have AWS, GCP, Azure, OCI, etc.
16:41:28 yeah, at least for AWS I don't mind @davdunc co-owning
16:41:56 we could also be clever and use branches in the main repo
16:42:07 but that's an undiscoverable abstraction imo
16:42:11 yeah I agree
16:42:26 let's *try* to keep one repo and if it doesn't work it doesn't work
16:42:34 okay
16:42:48 so we add davdunc to the cloud-sig repo and make a folder for him to put stuff in?
16:42:52 and at the end of the day, I trust your judgement on whether it is working or not
16:43:00 and if we need to split it out, I can do a filter-branch easily enough :)
16:43:01 yep, King_InuYasha that works for me
16:43:29 so throw that as an action for me and I'll get that set up
16:44:20 #action King_InuYasha to work with davdunc on structure for files to add to the cloud-sig repo for AWS marketplace enablement
16:44:31 \o/
16:45:04 jdoss: yeah I think darkmuggle is going to look at Ignition for the cloud images in the f34 cycle
16:45:11 that's going to be exciting
16:45:30 I am looking at doing Ignition in a rootfs right now
16:45:36 so that prob lines up here.
16:45:59 I'm toying with the idea of an experimental transactional-update-style cloud image
16:46:17 with BTRFS?
16:46:19 yes
16:46:26 King_InuYasha: sign me up for that newsletter.
16:46:58 I've been in contact with fos from the openSUSE Kubic guys about porting the technology to run with DNF
16:47:09 and adapting it properly for Fedora
16:47:19 sounds interesting
16:47:51 I like the idea as a halfway point between CoreOS and traditional Fedora
16:48:16 more compatibility with traditional software and way easier to do fancy things like mass deployments and synchronized updates
16:48:49 anything else before we close?
16:49:35 nothing from me
16:49:48 #endmeeting