19:01:19 <nirik> #startmeeting Fedora Infrastructure Ops Daily Standup Meeting
19:01:19 <zodbot> Meeting started Mon Feb 20 19:01:19 2023 UTC.
19:01:19 <zodbot> This meeting is logged and archived in a public location.
19:01:19 <zodbot> The chair is nirik. Information about MeetBot at https://fedoraproject.org/wiki/Zodbot#Meeting_Functions.
19:01:19 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:01:19 <zodbot> The meeting name has been set to 'fedora_infrastructure_ops_daily_standup_meeting'
19:01:29 <nirik> #chair  nirik nb
19:01:29 <zodbot> Current chairs: nb nirik
19:01:34 <nirik> #meetingname fedora_infrastructure_ops_daily_standup_meeting
19:01:34 <zodbot> The meeting name has been set to 'fedora_infrastructure_ops_daily_standup_meeting'
19:01:40 <nirik> #info meeting is 30 minutes MAX. At the end of 30, it stops
19:01:48 <nirik> #info agenda is at https://board.net/p/fedora-infra-daily
19:01:53 <nirik> #info reminder: speak up if you want to work on a ticket!
19:01:58 <nirik> #topic Tickets needing review
19:02:05 <nirik> #info https://pagure.io/fedora-infrastructure/issues?status=Open&priority=1
19:02:30 <nirik> .ticket 11129
19:02:32 <zodbot> nirik: Issue #11129: Remove cwickert as a sponsor of the packager group - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/11129
19:02:54 <nirik> low low ops. needs some investigation.
19:03:17 <nirik> .ticket 11135
19:03:18 <zodbot> nirik: Issue #11135: Create new fas group 'clamav' - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/11135
19:03:36 <nirik> I am just going to do this one right now.
19:07:17 <nirik> .ticket 11136
19:07:18 <zodbot> nirik: Issue #11136: Pulp instance for Copr - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/11136
19:07:39 <nirik> I don't actually think we need to do anything here...
19:07:59 <ssmoogen> oops sorry
19:08:21 <nirik> that's it for infra
19:08:38 <nirik> .releng 11294
19:08:39 <zodbot> nirik: Issue #11294: Unretire tss2, tss2-devel - releng - Pagure.io - https://pagure.io/releng/issue/11294
19:08:46 <nirik> low low ops...
19:09:05 <nirik> .ticket 11295
19:09:09 <zodbot> nirik: An error has occurred and has been logged. Check the logs for more information.
19:09:20 <ssmoogen> you broke zod
19:09:44 <nirik> .ticket 11295
19:09:46 <zodbot> nirik: An error has occurred and has been logged. Check the logs for more information.
19:09:47 <nirik> oh, right.
19:09:50 <nirik> .releng 111295
19:09:51 <zodbot> nirik: An error has occurred and has been logged. Check the logs for more information.
19:10:01 <nirik> * .releng 11295
19:10:07 <nirik> I can type... sometimes.
19:10:12 <nirik> .releng 11295
19:10:13 <zodbot> nirik: Issue #11295: File F38 mass rebuild FTBFS tickets - releng - Pagure.io - https://pagure.io/releng/issue/11295
19:10:39 <nirik> jednorozec is already doing this one.
19:10:54 <jednorozec> yup
19:10:59 <nirik> #topic work plans / upcoming tasks
19:11:01 <jednorozec> they are done
19:11:05 <ssmoogen> speaking of disk space cleanup.. how is koji currently?
19:11:35 <nirik> I've been pinged on various things this morning about 10,000 times. So, hopefully that will quiet down.
19:12:29 <nirik> ssmoogen: well, it's ok right now, but not great. I ran the build archiver Thursday last week and... it blew up and removed some packages entirely. I had to restore them from snapshot. So, I need to look at that script closely before we ever run it again.
19:12:40 <nirik> tomorrow starts f38 beta freeze tho.
19:12:59 <nirik> so I hope to not do much on it over the freeze... but if it hits limits again I will have to do something.
19:13:01 <ssmoogen> lovely
19:14:10 <nirik> we have about 7tb right now... and things should go down as old stuff from before branching rolls out
19:14:32 <ssmoogen> i know what we should do.. archive everything to S3 and then set up a completely new build system
19:14:54 <nirik> jednorozec: on the beta freeze, when we set 'frozen' in bodhi, we should test right after that... pretend to do a push and see what bodhi says it would push (should be not f38 stable, only f38-testing)
19:15:01 * ssmoogen is kickbanned from fedora
19:15:09 <nirik> /kickban smooge
19:15:36 <nirik> I am thinking I might want to schedule a koji outage for between f38 and f39...
19:15:56 <jednorozec> nirik, right
19:16:10 <nirik> like a day or two outage. During that time we can move the db to rhel9/newer postgres and try and clean up/move more disk/change the way we do the mounts.
19:17:24 <nirik> ie, move repos/ off entirely (which means that needs to be mounted all over the place) and scratch builds and perhaps images, etc
19:17:25 <nirik> but that's for later
19:17:29 <ssmoogen> do it first in staging?
19:17:41 <ssmoogen> or is that done already
19:18:01 <nirik> I did get all the bvmhost-p09 machines booted on the newer kernel and the guests were tweaked to hopefully be faster on nested virt.
19:18:14 <nirik> staging uses a complex setup... it mounts the prod volume as another readonly volume...
19:18:25 <ssmoogen> ooooh I forgot that
19:18:32 <nirik> so its local storage is limited to only things built in staging, which is... not much
19:18:59 <ssmoogen> i wonder if there is any way to do this in amazon cloud first
19:19:14 <nirik> not that I can think of
19:19:30 <ssmoogen> that way you don't end up with my 'suggestion' of starting from scratch actually happening
19:19:36 <ssmoogen> me either.
19:19:39 <nirik> fun fact: aws volumes are limited to 16TB
19:19:57 <ssmoogen> well who needs more than that?
19:20:11 <nirik> I don't think we need to start from scratch, we just need to change the way it's set up... but that's hard while people are using it.
19:21:07 <nirik> anyhow... anything else anyone would like to discuss today?
19:21:15 <nirik> or shall we close out?
19:21:29 <ssmoogen> not me. I will leave my existential dread of db upgrades failing for elsewhere
19:22:18 <nirik> I'm oddly less afraid of the db than the files... ;)
19:22:39 <nirik> anyhow, thanks for coming everyone... back to the salt mines!
19:22:40 <nirik> #endmeeting