19:00:14 <nirik> #startmeeting Fedora Infrastructure Ops Daily Standup Meeting
19:00:14 <zodbot> Meeting started Mon Nov 28 19:00:14 2022 UTC.
19:00:14 <zodbot> This meeting is logged and archived in a public location.
19:00:14 <zodbot> The chair is nirik. Information about MeetBot at https://fedoraproject.org/wiki/Zodbot#Meeting_Functions.
19:00:14 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:00:14 <zodbot> The meeting name has been set to 'fedora_infrastructure_ops_daily_standup_meeting'
19:00:15 <nirik> #chair  nirik nb
19:00:15 <zodbot> Current chairs: nb nirik
19:00:15 <nirik> #meetingname fedora_infrastructure_ops_daily_standup_meeting
19:00:15 <zodbot> The meeting name has been set to 'fedora_infrastructure_ops_daily_standup_meeting'
19:00:15 <nirik> #info meeting is 30 minutes MAX. At the end of 30, it stops
19:00:15 <nirik> #info agenda is at https://board.net/p/fedora-infra-daily
19:00:15 <nirik> #info reminder: speak up if you want to work on a ticket!
19:00:17 <nirik> #topic Tickets needing review
19:00:19 <nirik> #info https://pagure.io/fedora-infrastructure/issues?status=Open&priority=1
19:00:22 <smooge> here
19:00:42 * smooge was going to get some coffee
19:01:43 <nirik> yeah, I need to do that too... but after meeting I guess.
19:01:48 <smooge> same here
19:02:18 <nirik> .ticket 11013
19:02:19 <zodbot> nirik: Issue #11013: Redeploy openQA workers with consistent storage configuration - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/11013
19:02:34 <nirik> low/low ops? or low trouble, med gain?
19:03:28 <smooge> from my past dealing with openqa and installing for it: med trouble, med gain ops
19:04:29 <smooge> if adamw can help work out a kickstart or similar setup it will make it easier... but the networking and some other things tend to make it a 'try this; ok, no, reinstall; try this' cycle
19:05:11 <smooge> and the PPC boxes I would put at high trouble :)
19:05:22 <nirik> ok
19:05:22 <nirik> .ticket 11014
19:05:22 <nirik> zodbot: ping
19:05:22 <zodbot> pong
19:05:22 <nirik> this is my outage ticket... med/med/ops/outage
19:05:25 <zodbot> nirik: Issue #11014: Planned Outage - Updates / Reboots - 2022-11-30 21:00 UTC - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/11014
19:05:40 <smooge> +1 on med/med/ops/outage
19:05:57 <nirik> smooge: I have a kickstart that works now, I think... but it needs tweaking for whatever storage layout we finally decide on
19:06:06 <smooge> ah ok cool.
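[For context: a minimal sketch of the kind of kickstart storage section under discussion. The drive name, sizes, and volume layout below are placeholders, not the actual openQA worker configuration.]

    # hypothetical storage layout for an openQA worker kickstart;
    # the drive name and sizes are guesses, not the real config
    ignoredisk --only-use=sda
    clearpart --all --initlabel --drives=sda
    part /boot --fstype=xfs --size=1024
    part pv.01 --size=1 --grow
    volgroup vg_openqa pv.01
    logvol / --vgname=vg_openqa --name=root --fstype=xfs --size=51200
    logvol /var/lib/openqa --vgname=vg_openqa --name=openqa --fstype=xfs --size=1 --grow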
19:06:19 <smooge> I think you or I got lagged nirik
19:06:31 <nirik> I think it was me... :(
19:06:47 <nirik> ok, on to releng... there's some unretires. All low/low I think
19:07:53 <smooge> the unretires are low/low.
19:08:16 <smooge> the epel9 branch may be low/low also?
19:08:33 <nirik> I think it's solved now and that can be closed.
19:09:10 <smooge> ok
19:09:36 <nirik> .releng 11154
19:09:37 <smooge> I took a naked ping as an oncall earlier today and asked for 11154 to be opened
19:09:37 <zodbot> nirik: Issue #11154: bodhi haven't sent some packages to stable - releng - Pagure.io - https://pagure.io/releng/issue/11154
19:09:55 * nirik looks
19:10:03 <smooge> I was guessing it might be something from the bodhi code updates?
19:10:06 <nirik> ah..
19:10:14 <nirik> nope. All those are failing gating tests.
19:10:28 <nirik> The submitter needs to either waive them or fix them.
19:10:40 <smooge> ah ok
19:11:46 <nirik> I can close it with an explanation
19:12:53 <smooge> thanks. the only other oncall thing I had was some spam on lists this weekend
19:13:28 <smooge> banhammered the addresses. they had been created over a year ago, it seemed, but finally got used by whoever
19:13:28 <nirik> ah spam, so fun
19:13:53 <nirik> #topic work plans / upcoming tasks
19:13:54 <nirik> #info everyone should note what things they are hoping to work on over the next day / week
19:14:30 <aheath1992-mobil> Still working on Splunk work
19:14:58 <nirik> So, gonna be a busy week. ;) I'm planning on updating staging hosts today, a bunch of non-outage-causing ones tomorrow, and then the outage on Wednesday. Around that I want to upgrade some hosts from f36 to f37 (koji and builders for sure), catch up on PRs, and fight down tickets.
19:15:15 <nirik> aheath1992-mobil: thanks for working on that. will be interesting to see how it looks. ;)
19:15:20 <smooge> ah yeah, that (splunk) came up last week, but I didn't want to make any changes while you were out, and I didn't feel capable enough to volunteer
19:15:48 <smooge> I think a firewall change is needed, and possibly there's a routing issue, but aheath1992-mobil put in a ticket for that
19:16:18 <smooge> nirik, I got log01 updated last week, so it should be a quick update/reboot for that box
19:16:21 <saibug[m]> aheath1992-mobil: Any update??
19:16:24 <aheath1992-mobil> Yep working with IT Networking on the ticket
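[For context: a rough illustration of the kind of firewall change being discussed. 9997/tcp is Splunk's default receiving port for forwarders; the zone, and whether firewalld is even the tool involved here, are assumptions.]

    # hypothetical firewalld change to let forwarders reach the indexer;
    # port and zone are guesses, not the actual ticket contents
    firewall-cmd --zone=internal --add-port=9997/tcp --permanent
    firewall-cmd --reload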
19:16:39 <smooge> I also went through and got rkhunter dealt with on all but a couple of boxes
19:16:48 <nirik> smooge: awesome, thanks.
19:16:48 <saibug[m]> Okeyy
19:17:39 <smooge> the openqa-ppc box went into some sort of weird lock when I tried to run rkhunter --propupd but I was able to get the other ones
19:17:55 <smooge> the others look like they need a config update to deal with containers
19:17:58 <nirik> right before the holiday I also cleaned up a bunch of old composes and stuff on the fedora_koji volume. It's down a good deal now (it was heading for the 100T limit)
19:17:59 <smooge> and that was it
19:18:22 <nirik> yeah, there's a new .containersomething man page. ;(
19:18:25 <smooge> nirik, cool
19:18:30 <nirik> we just need to allow it.
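[For context: a sketch of the allow-listing nirik describes. The exact man page path is not given in the log (the ".containersomething" above), so the filename below is a placeholder for whatever the real page is.]

    # in /etc/rkhunter.conf: whitelist the new hidden man page so the
    # hidden-file check stops warning (placeholder path)
    ALLOWHIDDENFILE=/usr/share/man/man5/.containerenv.5.gz

    # then refresh rkhunter's file-properties database
    rkhunter --propupd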
19:18:42 <nirik> down to 80T now: ntap-iad2-c02-fedora01-nfs01a:/fedora_koji   80T   78T  1.7T  98% /mnt/fedora_koji
19:19:15 <smooge> I was not sure how much of the space was 'real' or .snapshots
19:19:27 <smooge> sounds like a lot of real
19:19:41 <nirik> it's mostly real.
19:20:24 <nirik> Snapshot Spill                           2.57TB         3%
19:20:27 <smooge> well time for more archiving I guess. F35 will finally be gone
19:20:33 <nirik> so, 2.5TB are snapshots currently
19:20:38 <saibug[m]> I saw some ssl cert alerts expiration..
19:20:40 <smooge> dang, that's not a lot
19:21:04 <nirik> yeah, I want to move composes/ over to another volume, but that's lower pri
19:22:05 <nirik> saibug[m]: yeah, smooge fixed those. Basically it just needed a playbook run and ansible renewed them via letsencrypt.
19:22:41 <nirik> We do need to renew getfedora.org soon... it's a digicert cert currently. We could switch it to letsencrypt
19:22:46 <smooge> yeah, I thought the tag httpd/certificates would grab them, but the coreos ones use a completely different tag for each one
19:22:57 <saibug[m]> Ohh yes, with -t httpd
19:23:24 <nirik> smooge: perhaps we should add a common tag to all the certs so we can just get them all if needed?
19:23:34 <smooge> so for the coreos ones, you need to do -t status.updates.coreos.fedoraproject.org,raw-updates.coreos.fedoraproject.org,status.raw-updates.coreos.fedoraproject.org
19:23:50 <saibug[m]> much easier...
19:23:54 <smooge> nirik, I was going to ask because it seemed they had been done differently for a reason
19:24:06 <smooge> and I didn't know the reason :)
19:24:21 <saibug[m]> :/
19:24:52 <nirik> I don't know... I've slept since then? I would say keep those in case you want to target a specific one, but have a higher-level tag for all certs?
19:24:54 <smooge> saibug[m], the reason could have been "It's 2am and I need to commit this." or it could be "openshift needs a different thing"
19:25:24 <smooge> so if there isn't a "coreos-stuff needs special care" reason, then I am ok with adding the tag :)
19:25:46 <smooge> and can do that shortly
19:25:55 <smooge> or have someone else who is wanting an easyfix
19:26:03 <saibug[m]> It makes sense
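[For context: a minimal sketch of what the proposed common tag could look like. The role name, variable names, and playbook paths are assumptions about the infra ansible repo, not its actual layout.]

    # hypothetical task: give each cert a shared "certificates" tag
    # alongside its existing per-site tag, so both invocations work
    - name: install cert for status.updates.coreos.fedoraproject.org
      include_role:
        name: httpd/certificate
      vars:
        certname: status.updates.coreos.fedoraproject.org
      tags:
        - certificates
        - status.updates.coreos.fedoraproject.org

    # then either of these would pick it up:
    #   ansible-playbook playbooks/groups/proxies.yml -t certificates
    #   ansible-playbook playbooks/groups/proxies.yml -t status.updates.coreos.fedoraproject.org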
19:27:12 <nirik> ok, anything else to discuss? or shall we call it a standup?
19:27:40 <smooge> nope I have said more than my share =)
19:28:07 <aheath1992-mobil> I'm good
19:28:44 <nirik> ok, thanks for coming everyone!
19:28:48 <nirik> #endmeeting