19:00:04 #startmeeting Fedora Infrastructure Ops Daily Standup Meeting
19:00:04 Meeting started Mon Feb 17 19:00:04 2020 UTC.
19:00:04 This meeting is logged and archived in a public location.
19:00:04 The chair is smooge. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:04 Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:00:04 The meeting name has been set to 'fedora_infrastructure_ops_daily_standup_meeting'
19:00:05 #chair cverna mboddu nirik relrod smooge
19:00:05 Current chairs: cverna mboddu nirik relrod smooge
19:00:05 #info meeting is 30 minutes MAX. At the end of 30, it stops
19:00:05 #info agenda is at https://board.net/p/fedora-infra-daily
19:00:06 #topic Tickets needing review
19:00:07 #info https://pagure.io/fedora-infrastructure/issues?status=Open&priority=1
19:00:22 morning all
19:00:42 .ticket 8654
19:00:43 nirik: Issue #8654: New F32 test machine - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/8654
19:00:54 I can do this one as part of moving the maintainer tests...
19:01:02 I guess I was just gonna move them to aws.
19:01:20 yeah that sounds about right
19:01:24 unless someone objects? we will have to figure out the ppc64le one tho
19:01:58 pagure.issue.assigned.added -- smooge assigned ticket fedora-infrastructure#8654 to kevin https://pagure.io/fedora-infrastructure/issue/8654
19:01:59 pagure.issue.edit -- smooge edited the priority fields of ticket fedora-infrastructure#8654 https://pagure.io/fedora-infrastructure/issue/8654
19:02:18 .ticket 8656
19:02:19 nirik: Issue #8656: Module build error: "Package not in list for f33-modular-updates-candidate" - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/8656
19:02:31 this seems like a releng one. ;) mboddu: did modules get rebuilt?
19:03:25 or we missed adding them at branching or something.
19:03:56 I wasn't sure if it was that or some other mbs issue
19:04:05 ➜ ~ koji list-pkgs --tag=f32-modular-updates-candidate | wc -l
19:04:06 265
19:04:06 ➜ ~ koji list-pkgs --tag=f33-modular-updates-candidate | wc -l
19:04:06 5
19:04:10 no, it's koji.
19:04:36 so not sure if all those need added or what.
19:04:38 pagure.issue.tag.added -- smooge tagged ticket fedora-infrastructure#8656: mbs and releng https://pagure.io/fedora-infrastructure/issue/8656
19:04:39 pagure.issue.edit -- smooge edited the priority fields of ticket fedora-infrastructure#8656 https://pagure.io/fedora-infrastructure/issue/8656
19:05:09 .ticket 8657
19:05:10 nirik: Issue #8657: `fedpkg module-build` fails with "The modulemd libreoffice.yaml is invalid. Please verify the syntax is correct." upon buildopts: arches: [x86_64] - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/8657
19:05:19 this one is mbs. ;)
19:05:37 I guess we will need to check what they mention and check with mbs developers.
19:05:44 pagure.issue.tag.added -- smooge tagged ticket fedora-infrastructure#8656: koji https://pagure.io/fedora-infrastructure/issue/8656
19:05:45 pagure.issue.tag.removed -- smooge removed the mbs tags from ticket fedora-infrastructure#8656 https://pagure.io/fedora-infrastructure/issue/8656
19:05:58 .ticket 8658
19:05:59 nirik: Issue #8658: cantata missing from https://apps.fedoraproject.org/packages/ - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/8658
19:06:09 this is that packages is just broken.
19:06:11 pagure.issue.tag.added -- smooge tagged ticket fedora-infrastructure#8657: mbs and releng https://pagure.io/fedora-infrastructure/issue/8657
19:06:12 pagure.issue.edit -- smooge edited the priority fields of ticket fedora-infrastructure#8657 https://pagure.io/fedora-infrastructure/issue/8657
19:06:20 we should close in favor of the upstream thing...
19:06:22 * nirik looks for it
19:07:11 https://github.com/fedora-infra/fedora-packages/issues/410 I guess
19:08:05 I am not sure exactly what to say in the ticket to close upstream
19:08:17 without being completely wrong
19:09:01 I can do it...
19:10:06 pagure.issue.edit -- kevin edited the close_status and status fields of ticket fedora-infrastructure#8658 https://pagure.io/fedora-infrastructure/issue/8658
19:10:07 pagure.issue.comment.added -- kevin commented on ticket fedora-infrastructure#8658: "cantata missing from https://apps.fedoraproject.org/packages/" https://pagure.io/fedora-infrastructure/issue/8658#comment-626930
19:10:23 .ticket 8659
19:10:24 nirik: Issue #8659: openshift: allow for users to be able to start a rollout of a deployment - fedora-infrastructure - Pagure.io - https://pagure.io/fedora-infrastructure/issue/8659
19:10:42 I guess this is fine... the playbooks already do this I think.
19:11:54 pagure.issue.tag.added -- kevin tagged ticket fedora-infrastructure#8659: OpenShift https://pagure.io/fedora-infrastructure/issue/8659
19:11:55 pagure.issue.edit -- kevin edited the priority fields of ticket fedora-infrastructure#8659 https://pagure.io/fedora-infrastructure/issue/8659
19:11:56 pagure.issue.comment.added -- kevin commented on ticket fedora-infrastructure#8659: "openshift: allow for users to be able to start a rollout of a deployment" https://pagure.io/fedora-infrastructure/issue/8659#comment-626932
19:12:17 ok, that's all the needs review
19:12:20 pagure.issue.comment.added -- mohanboddu commented on ticket fedora-infrastructure#8656: "Module build error: "Package not in list for f33-modular-updates-candidate"" https://pagure.io/fedora-infrastructure/issue/8656#comment-626934
19:12:51 ^ I will take a look
19:13:35 smooge: ok, any others you want to talk about?
19:13:44 seems people have dumped a pile on us again...
19:13:48 ok for this week, I intend to finish up the last of the aarch64 boxes, install the virthost-comm server replacements, get the COPR box to sort of work
19:14:12 so, on the aarch64's... the ones left only have 1 drive?
19:14:43 well I need to readjust them. the 1 disk boxes are the qa boxes but I installed some of the virthost ones as that
19:15:01 so it is fix dns, reinstall and turn over
19:15:05 I used the 17/18 ones you setup... do you mean you need to redo them?
19:15:21 no openqa-aarch64-01 needs to be virthost19
19:15:53 openqa-aarch64-02 needs to be cloud box
19:16:08 vh19? so it's an x86_64 one?
19:16:26 no sorry vmhost-aarch64-19
19:16:36 trying to type quickly
19:16:45 ah. ok. so wait, how many are left now?
19:16:52 should be 5? or is there 6?
19:17:09 or more?
19:17:49 * nirik has lost track, or forgotten or something
19:18:16 by my count I have 7
19:18:34 3 with 1 drive, 4 with 2 drives
19:19:20 ah ha.
19:19:29 but 1 box dropped off over the weekend and I need to figure out why
19:20:10 oh because I moved it so I could get it on a network
19:20:14 so... let's see. how were you gonna allocate them?
19:20:34 adamw: how much space does an aarch64 openqa worker need?
19:21:54 so for the openqa boxes, I was going to see what kind of drives they have in them and if they have spare caddies. if they did I was going to see if I had 'spare' disks in the boxes that could be used to make them 2 disk systems. Then I was going to reinstall them, move their eth1 cable over to the QA network and go from there
19:22:46 ok, but we should ask adamw how much space they need... because I think they just mostly use nfs from the master node...
19:22:54 so, 1 disk might be ok for them
19:23:08 for the 4 systems with 2 drives.. I was thinking 2 of them would become virthost-aarch64 boxes and 2 would be sent to IAD2
19:23:37 so, we are actually all caught up for aarch64/armv7 builders (ie, all of them are now off the moonshot)
19:23:44 of course more is always nice.
19:24:11 I would send all of them to IAD2 but -ENOROOMINRACK
19:24:20 right.
19:24:49 anyhow, we should find out disk usage needs and go based on that...
19:24:57 anything else for meeting?
19:25:08 I'm slowly making progress on stuff moving out of cloud.
19:25:16 PPC builders to send to IAD2 was my next thing to talk about
19:25:24 will try and do maintainer-test this week.
19:25:53 well, we need at least 1 power9 box.
19:26:11 we should go over the shipping lists
19:26:12 i need to get the serial numbers to Shaun RSN
19:26:23 so that was my next 'meeting' with you
19:26:30 will end this one
19:26:34 #endmeeting
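
The `koji list-pkgs | wc -l` comparison in the ticket 8656 discussion (265 packages in f32-modular-updates-candidate vs. 5 in f33) only counts lines; the follow-up question of *which* packages are missing is a set difference between the two listings. A minimal sketch, assuming each listing has been saved to a text file with `koji list-pkgs --tag=... > file` (the helper name and file names here are hypothetical, not part of koji):

```python
def missing_packages(old_listing: str, new_listing: str) -> list[str]:
    """Return package names present in old_listing but not in new_listing.

    Each listing is the raw text of `koji list-pkgs --tag=...` output,
    assumed here to have one package name per line (first column).
    """
    old = {line.split()[0] for line in old_listing.splitlines() if line.strip()}
    new = {line.split()[0] for line in new_listing.splitlines() if line.strip()}
    return sorted(old - new)
```

Usage would be something like `missing_packages(open("f32.txt").read(), open("f33.txt").read())`, giving a candidate list of packages to add to the f33 tag for a human to review before any `koji add-pkg` calls.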