18:00:42 #startmeeting EPEL (2018-07-18)
18:00:42 Meeting started Wed Jul 18 18:00:42 2018 UTC.
18:00:42 This meeting is logged and archived in a public location.
18:00:42 The chair is smooge. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:42 Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:00:42 The meeting name has been set to 'epel_(2018-07-18)'
18:00:42 #meetingname epel
18:00:42 The meeting name has been set to 'epel'
18:00:42 #topic Chair and Introductions
18:00:42 #chair avij bstinson Evolution nirik smooge
18:00:42 Current chairs: Evolution avij bstinson nirik smooge
18:01:17 morning
18:01:37 hello
18:02:04 hi all
18:02:09 hello bstinson
18:02:18 waiting to see if we can get avij and Evolution
18:05:07 ok so let's move on
18:05:47 First off, I have added a meeting agenda to the Fedora Gobby like the Infrastructure meeting. This is so if I am unavailable someone else can run the meeting
18:05:53 #topic EL-6 end of life
18:05:53 #info EL-6 will be reaching its end of life in late 2020.
18:06:25 Red Hat Enterprise 6 will be moving to a stage where we are no longer getting updates for it in late 2020
18:06:48 So still nearly 2.5 years away.
18:06:59 At that time we will archive it off to /pub/archive/epel and EOL it in mirrormanager
18:07:06 1.5
18:07:15 Can't math today.
18:07:32 Well, one of us can't math.
18:07:37 and it ain't you
18:08:15 Was there pressure to continue EPEL5 past the "first stage" of RHEL5's retirement?
18:08:18 so yes we have 2.5 years to deal with this... however we are talking about enterprise systems so this is short notice to them
18:08:29 from users? yes.
18:08:41 Since Red Hat does continue to maintain those releases in some way.
18:09:03 "Extended Support", they call it.
18:09:07 I expect it will be more with 6... epel5 wasn't nearly as popular as epel6 is
18:09:42 Most of my mirror traffic (by hit count, not bytes) is still the EPEL5 repomd.
18:10:09 You won't hear any complaints from me if we kill it, but the messaging needs to be extremely clear.
18:11:08 but the EL5 deployment was about 1/4 of EL6 so I don't know if it will be more
18:11:49 I think that it will be up to other parties to work on what to do about it then
18:12:00 #topic clamav.i386 in EL-6 problems
18:12:01 #info clamdb is using a new zlib compression which i386 does not work with.
18:12:01 #info need to rebuild and fix somehow
18:12:20 so I was wondering why I haven't gotten any email on my smoogespace.com system this week
18:12:21 My recommendation was to bundle a newer zlib on that platform.
18:12:22 fun
18:12:29 There's a bugzilla....
18:12:38 if someone maintains it...
18:13:00 https://bugzilla.redhat.com/show_bug.cgi?id=1600458
18:13:18 Future clamav will simply fail to build if it doesn't see a "new enough" zlib.
18:13:29 We're not completely sure what counts as "new enough".
18:14:13 Might even show up on 32-bit RHEL7, if it existed.
18:14:49 I don't believe anyone has done the work to figure out what needs to change in zlib; it could be a simple patch or not.
18:15:00 And then no idea if Red Hat would ever push an update.
18:15:20 So EPEL could package a parallel-installable zlib, or clamav could bundle it.
18:15:52 who is the current clamav maintainer?
18:16:15 serbiomb has been doing the bulk of the work lately.
18:16:49 He's trying hard but things like this are tough to handle well.
18:17:06 That package has such a terrible history.
18:17:12 * nirik nods
18:17:13 so I will see about trying to make a parallel-installable zlib using the RHEL package.
18:17:18 RHEL-7 one.
18:17:30 Like I said, RHEL7 might not be new enough.
18:17:42 might be easier to bundle in clamav... then other things aren't using it at least
18:17:44 No reason not to try it, though.
18:18:00 well I want to start there.
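A minimal sketch of the version check being debated above: deciding whether an installed zlib is "new enough" for clamav 0.100. The 1.2.7 threshold is a placeholder assumption (the meeting notes say the real minimum is unknown), and 1.2.3 is what RHEL6 ships; on a live host the installed version would come from something like `rpm -q --qf '%{VERSION}' zlib`.

```shell
required=1.2.7          # placeholder; the actual minimum has not been determined
zlib_version=1.2.3      # the zlib version RHEL6 ships

# sort -V orders version strings numerically; if the required version sorts
# first (or equal), the installed zlib is at least that new.
if [ "$(printf '%s\n%s\n' "$required" "$zlib_version" | sort -V | head -n1)" = "$required" ]; then
    echo "zlib $zlib_version is new enough"
else
    echo "zlib $zlib_version is too old; bundle or parallel-install a newer one"
fi
```

With these sample values the script reports the RHEL6 zlib as too old, which is the situation driving the bundle-vs-parallel-install discussion.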
I could try building a clamav in CentOS-i386 and find out
18:18:24 I am not averse to an embedded one
18:18:29 soversion hasn't changed between what's in EL6 and current Fedora, so it might be possible (for basic testing) to just update the system zlib.
18:18:32 sorry, bundled
18:19:39 bstinson, is there the equivalent of doing a fedpkg scratch-build in the centos infrastructure?
18:21:09 smooge: not exactly, but we have some tools that download tarballs from our lookaside and stuff
18:22:57 The C6 is built using a different system so I didn't expect so either.. I will see if I can get mockbuild to do my work for me
18:23:36 so I would like to get this working mainly because clam is, as best I can see, a highly downloaded package in EL6
18:24:08 though the fact that people aren't setting things on fire.. has me wondering how many systems are like mine, sitting with giant queues
18:24:21 smooge: if you need a hand (or want a trial build in CBS) let me know
18:24:36 ClamAV is not maintained in a manner which makes it easy to run on "old" systems.
18:24:55 Simply not upgrading to 0.100 would have been an option, if anyone knew about this beforehand.
18:25:11 But then you get clamav yelling at you about it being out of date.
18:25:35 And we can't update to 0.100 on 64-bit EL6 while leaving 32-bit on 0.99.
18:27:04 yeah I was getting tired of those messages
18:27:13 so rock, hard place
18:27:58 ok so I will work with bstinson to get a trial build in CBS
18:28:07 Anyway, I don't think a fix will be terribly difficult, but it needs someone with experience and the ability to test.
18:28:26 so I can test :)..
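The mock-based trial build mentioned above might look like this sketch, standing in for the missing fedpkg scratch-build equivalent. The epel-6-i386 config name comes from the stock mock configs; the SRPM filename is hypothetical. The commands are echoed rather than executed here, since mock needs a configured build host (and mock group membership) to run for real.

```shell
srpm=clamav-0.100.0-1.el6.src.rpm   # hypothetical SRPM name
cfg=epel-6-i386                     # stock mock config for 32-bit EL6

# Drop the leading `echo` to actually run these on a mock-capable host:
echo mock -r "$cfg" --rebuild "$srpm"   # local scratch build in a clean chroot
echo mock -r "$cfg" --shell             # poke around the chroot afterwards
```

A clean `--rebuild` here would confirm whether clamav 0.100's zlib check trips on the i386 EL6 buildroot before asking for a trial build in CBS.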
since I can't seem to get my mail otherwise
18:29:22 removing clamsmtpd from postfix still has 10k messages sitting with (connect to 127.0.0.1[127.0.0.1]:10025: Connection refused)
18:29:43 thanks tibbs for bringing it up
18:29:45 * orc_fedo appears, having been fighting a network outage issue
18:29:47 and working on it
18:30:16 #topic EL-6.11/EL-7.5 package reaping
18:30:33 I didn't know there was a 6.11.
18:30:39 #info CentOS got 6.10 out the door in record time
18:31:10 \o/
18:31:12 << mis-topic >>
18:31:30 I guess it helps when nothing is getting rebased
18:32:14 but I would like to get a list of packages we have conflicts/duplicates in between EL6/EL7 and put those as blocks in koji
18:33:11 we need a list of the limited-arch ones
18:33:45 well one thing we need to make sure of is that the limited-arch ones are 'updated'
18:34:02 I am not sure how we should do that..
18:34:59 well, we can't even really block things until we know what subset of them are the limited-arch ones. Or I guess we can, but only deal with the new stuff that came in in the last minor
18:35:24 smooge: I have a beta candidate 6.10 SRPM archive which should be queryable as to ExclusiveArch type details
18:35:29 use that as a hunting list
18:35:56 yes, that would be useful.
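The ExclusiveArch hunt over the SRPM archive could be sketched like this. On a real tree the per-package data would come from something like `rpm -qp --qf '%{NAME}|%{EXCLUSIVEARCH}\n' *.src.rpm` (rpm prints "(none)" when the tag is unset); here canned output with hypothetical package names stands in so the filtering step is runnable anywhere.

```shell
# Stand-in for the rpm query over the SRPM archive (package names invented):
rpm_output() {
cat <<'EOF'
somepkg|(none)
ppc-only-pkg|ppc64
thirdpkg|(none)
wine-pkg|i686 x86_64
EOF
}

# Keep only packages that restrict their arches -- the "limited arch" list.
rpm_output | awk -F'|' '$2 != "(none)" { print $1 ": " $2 }'
```

Against the sample data this leaves only ppc-only-pkg and wine-pkg, i.e. the candidates that need checking before anything gets blocked in koji.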
In general the blocked arch is ppc64
18:36:57 ok so I will work on getting a list like that for next week
18:37:42 sorry, 2 weeks from now
18:37:52 which leads to my last topic for the meeting
18:38:02 #topic Next Meeting
18:38:02 #info Scheduled for 2018-08-01
18:38:10 * orc_fedo sees a unresolved forward ference in the agenda
18:38:19 s//reference/
18:38:59 We don't have a lot of things in meetings, so we are moving this to every other week versus just seeing we have no agenda and cancelling every other meeting
18:39:13 +1
18:39:22 +1 also
18:39:24 hi, sorry I'm late, I just came back home
18:40:14 it is in my diary weekly, because the loss of information from reading in arrears exceeds the cost of simply looking in and moving on ... adding a persistent notice of cancellation in #epel would help as well, when it happens
18:40:19 Schedule an additional next-week meeting only if you have a lot of business. The Packaging Committee used to do this.
18:40:24 either in the channel, or via the /topic ...
18:40:41 but +1 to having the meeting every other week (I'll be 50 km away from civilization next week anyway)
18:41:02 Just make sure the fedocal is correct and then you can just get the ical file from there into whatever calendar you use.
18:41:38 I added it to fedocal as every other week
18:41:46 local manually maintained cron, and the calendar package are pre-ical ;)
18:42:01 and I will make sure that I cancel on list and in channel in the future
18:42:06 ty
18:42:21 BTW, I have a question for open floor.
18:42:28 ok that is where we are at
18:42:32 #topic Open Floor
18:42:57 tibbs, you are up
18:43:06 If you have epel7 and epel7-testing enabled, do you see complaints about broken update notices?
18:43:11 Like this: https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:44:07 I have no idea what's supposed to happen here but I'm pretty sure my local mirror has the correct data.
18:44:42 tibbs, confirmed
18:44:45 >>>>>>> epel:updated
18:44:45 Update notice FEDORA-EPEL-2018-a2b0c16057 (from epel-testing) is broken, or a bad duplicate, skipping.
18:44:46 Something like this was brought up on the epel list a while back.
18:44:48 Ah, cool.
18:45:03 Just to be sure, did you hit either of my mirrors for that?
18:45:28 (list of mirrors it used is down near the bottom of the output).
18:45:48 Mine are pubmirror*.math.uh.edu.
18:45:56 * epel-testing: ftp.cse.buffalo.edu
18:46:13 OK, that would tend to absolve the mirrors, at least.
18:46:53 I'm guessing this is a releng issue but I'm really not sure what's supposed to happen, or why I just started seeing this within the past couple of months.
18:47:23 so it looks like something in epel-testing is saying it is still there.. even though it is in stable
18:47:56 Right, either the update info should be removed from testing, or the update metadata shouldn't be changed when something gets pushed.
18:48:26 I could understand seeing this for a brief time between the main push and the testing push, but for me it's continuous.
18:48:42 Sometimes deleting /var/cache/yum will shut it up for a bit but it always seems to come back.
18:49:00 and it takes a delete and not a yum clean all?
18:49:07 I think it's a bodhi issue.
18:49:21 Yes, yum clean all doesn't seem to help.
18:49:47 is that a security update by chance?
18:50:08 why is https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:50:11 oops
18:50:23 why is https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:50:37 being diffed against https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:50:49 there is a local copy problem it seems
18:51:08 why is 'epel-testing:issued' being diffed against 'epel:issued' ??
18:51:45 I would expect that they vary
18:52:15 I don't know enough about how updateinfo works to know how it matches up "information"
18:52:16 I guess the RHEL yum maintainers decided it should do that.
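The heavier cache reset described above (a delete, since `yum clean all` doesn't shut the warning up) could be sketched as follows. To stay runnable anywhere, $cachedir stands in for the real /var/cache/yum; on an actual EL6/EL7 host you would use the real path, as root, and follow up with `yum makecache`.

```shell
cachedir=$(mktemp -d)                        # stand-in for /var/cache/yum
mkdir -p "$cachedir/x86_64/7/epel-testing"   # fake some cached repo metadata

# `yum clean all` is supposed to clear this, but the stale updateinfo keeps
# coming back, so remove the cached contents outright and let yum refetch.
rm -rf "$cachedir"/*
ls "$cachedir" | wc -l                       # 0 entries left: cache is empty
```

Even this only quiets the complaint temporarily per the discussion above, which points at the metadata itself (bodhi/releng) rather than the client cache.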
18:52:28 perhaps I am not understanding the report
18:52:43 yum updateinfo info won't even show me any of the advisories it complains about.
18:53:06 It ignores them completely, which could be kind of bad if anyone relies on that information.
18:53:10 nirik, the ones I looked at are marked in bodhi as bugfix
18:53:31 ok. no idea. I'd say gather all the info you can and file an upstream bodhi issue
18:53:57 ok thanks
18:54:15 tibbs, do you have any more requests on this one?
18:55:20 No, I'm good.
18:55:50 ok well then, any other open floor items? I need to fix my i386 EL6 system
18:55:54 I'll work on a bodhi issue.
18:56:15 Thank you all for coming this week. I will see you all on the 1st
18:56:21 #endmeeting