18:00:42 <smooge> #startmeeting EPEL (2018-07-18)
18:00:42 <zodbot> Meeting started Wed Jul 18 18:00:42 2018 UTC.
18:00:42 <zodbot> This meeting is logged and archived in a public location.
18:00:42 <zodbot> The chair is smooge. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:42 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:00:42 <zodbot> The meeting name has been set to 'epel_(2018-07-18)'
18:00:42 <smooge> #meetingname epel
18:00:42 <zodbot> The meeting name has been set to 'epel'
18:00:42 <smooge> #topic Chair and Introductions
18:00:42 <smooge> #chair avij bstinson Evolution nirik smooge
18:00:42 <zodbot> Current chairs: Evolution avij bstinson nirik smooge
18:01:17 <nirik> morning
18:01:37 <smooge> hello
18:02:04 <bstinson> hi all
18:02:09 <smooge> hello bstinson
18:02:18 <smooge> waiting to see if we can get avij and Evolution
18:05:07 <smooge> ok so lets move on
18:05:47 <smooge> First off I have added a meeting agenda to the Fedora Gobby like the Infrastructure meeting. This is so if I am unavailable someone else can run the meeting
18:05:53 <smooge> #topic EL-6 end of life
18:05:53 <smooge> #info EL-6 will be reaching its end of life in late 2020.
18:06:25 <smooge> Red Hat Enterprise 6 will be moving to a stage where we are no longer getting updates for it in late 2020
18:06:48 <tibbs> So still nearly 2.5 years away.
18:06:59 <smooge> At that time we will archive it off to /pub/archive/epel and EOL it in mirrormanager
18:07:06 <smooge> 1.5
18:07:15 <tibbs> Can't math today.
18:07:32 <tibbs> Well, one of us can't math.
18:07:37 <smooge> and it aint you
18:08:15 <tibbs> Was there pressure to continue EPEL5 past the "first stage" of RHEL5's retirement?
18:08:18 <smooge> so yes we have 2.5 years to deal with this... however we are talking about enterprise systems so this is short notice to them
18:08:29 <smooge> from users? yes.
18:08:41 <tibbs> Since Red Hat does continue to maintain those releases in some way.
18:09:03 <tibbs> "Extended Support", they call it.
18:09:07 <nirik> I expect it will be more with 6... epel5 wasn't nearly as popular as epel6 is
18:09:42 <tibbs> Most of my mirror traffic (by hit count, not bytes) is still the EPEL5 repomd.
18:10:09 <tibbs> You won't hear any complaints from me if we kill it, but the messaging needs to be extremely clear.
18:11:08 <smooge> but the EL5 deployment was about 1/4 of EL6 so I don't know if it will be more
18:11:49 <smooge> I think that it will be up to other parties to work on what to do about it then
18:12:00 <smooge> #topic clamav.i386 in EL-6 problems
18:12:01 <smooge> #info clamdb is using a new zlib compression which does not work on i386.
18:12:01 <smooge> #info need to rebuild and fix somehow
18:12:20 <smooge> so I was wondering why I haven't gotten any email on my smoogespace.com system this week
18:12:21 <tibbs> My recommendation was to bundle a newer zlib on that platform.
18:12:22 <nirik> fun
18:12:29 <tibbs> There's a bugzilla....
18:12:38 <nirik> if someone maintains it...
18:13:00 <tibbs> https://bugzilla.redhat.com/show_bug.cgi?id=1600458
18:13:18 <tibbs> Future clamav will simply fail to build if it doesn't see a "new enough" zlib.
18:13:29 <tibbs> We're not completely sure what counts as "new enough".
18:14:13 <tibbs> Might even show up on 32-bit RHEL7, if it existed.
18:14:49 <tibbs> I don't believe anyone has done the work to figure out what needs to change in zlib; it could be a simple patch or not.
18:15:00 <tibbs> And then no idea if Red Hat would ever push an update.
18:15:20 <tibbs> So EPEL could package a parallel-installable zlib, or clamav could bundle it.
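One option mentioned above is a parallel-installable zlib. A minimal sketch of what such a compat package's spec might look like — the zlib12x name, version, and private libdir are all illustrative assumptions, not an existing package:

```spec
# Hypothetical compat package -- name, version, and paths are illustrative.
Name:           zlib12x
Version:        1.2.11
Release:        1%{?dist}
Summary:        Parallel-installable newer zlib for EL6
License:        zlib and Boost

%build
# Install into a private libdir so the system zlib is untouched; consumers
# (e.g. a clamav build) would point at it with -I/-L flags and an RPATH.
./configure --prefix=%{_prefix} --libdir=%{_libdir}/zlib12x
make %{?_smp_mflags}
```

The key design point is keeping the library out of the default linker path so nothing else on the system accidentally picks it up — the bundling-in-clamav alternative avoids even that risk.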
18:15:52 <smooge> who is the current clamav maintainer?
18:16:15 <tibbs> serbiomb has been doing the bulk of the work lately.
18:16:49 <tibbs> He's trying hard but things like this are tough to handle well.
18:17:06 <tibbs> That package has such a terrible history.
18:17:12 * nirik nods
18:17:13 <smooge> so I will see about trying to make a parallel-installable zlib using the RHEL package.
18:17:18 <smooge> RHEL-7 one.
18:17:30 <tibbs> Like I said, RHEL7 might not be new enough.
18:17:42 <nirik> might be easier to bundle in clamav... then other things aren't using it at least
18:17:44 <tibbs> No reason not to try it, though.
18:18:00 <smooge> well I want to start there. I could try building a clamav in CentOS-i386 and find out
18:18:24 <smooge> I am not averse to an embedded one
18:18:29 <tibbs> soversion hasn't changed between what's in EL6 and current Fedora, so it might be possible (for basic testing)  to just update the system zlib.
18:18:32 <smooge> sorry bundled
18:19:39 <smooge> bstinson, is there the equivalent of doing a fedpkg scratch-build in the centos infrastructure?
18:21:09 <bstinson> smooge: not exactly, but we have some tools that download tarballs from our lookaside and stuff
18:22:57 <smooge> The C6 is built using a different system so I didn't expect so either.. I will see if I can get mockbuild to do my work for me
18:23:36 <smooge> so I would like to get this working mainly because clam is as best I can see a highly downloaded package in EL6
18:24:08 <smooge> though the fact that people aren't setting things on fire.. has me wondering how many systems are like mine sitting with giant queues
18:24:21 <bstinson> smooge: if you need a hand (or want a trial build in CBS) let me know
18:24:36 <tibbs> ClamAV is not maintained in a manner which makes it easy to run on "old" systems.
18:24:55 <tibbs> Simply not upgrading to 0.100 would have been an option, if anyone knew about this beforehand.
18:25:11 <tibbs> But then you get clamav yelling at you about it being out of date.
18:25:35 <tibbs> And we can't update to 0.100 on 64-bit EL6 while leaving 32-bit on 0.99.
18:27:04 <smooge> yeah I was getting tired of those messages
18:27:13 <smooge> so rock hard-place
18:27:58 <smooge> ok so I will work with bstinson to get a trial build in CBS
18:28:07 <tibbs> Anyway, I don't think a fix will be terribly difficult, but needs someone with experience and the ability to test.
18:28:26 <smooge> so I can test :).. since I can't seem to get my mail otherwise
18:29:22 <smooge> removing clamsmtpd from postfix still has 10k messages sitting with (connect to 127.0.0.1[127.0.0.1]:10025: Connection refused)
18:29:43 <smooge> thanks tibbs for bringing it up
18:29:45 * orc_fedo appears, having been fighting a network outage issue
18:29:47 <smooge> and working on it
18:30:16 <smooge> #topic EL-6.11/EL-7.5 package reaping
18:30:33 <tibbs> I didn't know there was a 6.11.
18:30:39 <smooge> #info CentOS got 6.10 out the door in record time
18:31:10 <bstinson> \o/
18:31:12 <smooge> << mis topic >>
18:31:30 <smooge> I guess it helps when nothing is getting rebased
18:32:14 <smooge> but I would like to get a list of packages we have conflicts/duplicated in between EL6/EL7 and put those as blocks in koji
18:33:11 <nirik> we need a list of the limited arch ones
18:33:45 <smooge> well one thing we need to make sure of is that the limited arch ones are 'updated'
18:34:02 <smooge> I am not sure how we should do that..
18:34:59 <nirik> well, we can't even really block things until we know what subset of them are the limited arch ones. Or I guess we can, but only deal with the new stuff that came in in the last minor
18:35:24 <orc_fedo> smooge: I have a beta candidate 6.10 SRPM archive which should be queryable as to ExclusiveArch type details
18:35:29 <orc_fedo> use that as a hunting list
18:35:56 <smooge> yes that would be useful. In general the blocked arch is ppc64
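To build that hunting list, the spec files from an SRPM archive can be scanned for ExclusiveArch/ExcludeArch tags. A minimal sketch in Python — the helper name is made up, and real SRPMs would first need their specs extracted (e.g. via rpm2cpio):

```python
import re

# Match ExclusiveArch/ExcludeArch tags in RPM spec text.  Case-insensitive,
# since rpmbuild accepts either case for tag names.
ARCH_TAG = re.compile(r"^(ExclusiveArch|ExcludeArch)\s*:\s*(.+)$",
                      re.IGNORECASE | re.MULTILINE)

def limited_arch_tags(spec_text):
    """Return (tag, [arches]) pairs found in one spec file's text."""
    return [(m.group(1), m.group(2).split())
            for m in ARCH_TAG.finditer(spec_text)]
```

Arch macros like %{ix86} come back unexpanded, so a real pass would still need rpm's macro expansion to decide which packages are genuinely ppc64-blocked versus merely 32-bit-only.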
18:36:57 <smooge> ok so I will work on getting a list like that for next week
18:37:42 <smooge> sorry 2 weeks from now
18:37:52 <smooge> which leads to my last topic for the meeting
18:38:02 <smooge> #topic Next Meeting
18:38:02 <smooge> #info  Scheduled for 2018-08-01
18:38:10 * orc_fedo sees an unresolved forward reference in the agenda
18:38:59 <smooge> We don't have a lot of agenda items for meetings, so I'm moving this to every other week versus just seeing we have no agenda and cancelling every other meeting
18:39:13 <nirik> +1
18:39:22 <bstinson> +1 also
18:39:24 <avij> hi, sorry I'm late, I just came back home
18:40:14 <orc_fedo> it is in my diary weekly, because the loss of information from reading in arrears exceeds the cost of simply looking in and moving on ... adding a persistent notice of cancellation in #epel would help as well, when it happens
18:40:19 <tibbs> Schedule an additional next-week meeting only if you have a lot of business.  The Packaging Committee used to do this.
18:40:24 <orc_fedo> either in the channel, or via the /topic ...
18:40:41 <avij> but +1 to having the meeting every other week (I'll be 50 km away from civilization next week anyway)
18:41:02 <tibbs> Just make sure the fedocal is correct and then you can just get the ical file from there into whatever calendar you use.
18:41:38 <smooge> I added it to fedocal with a 2 week every other
18:41:46 <orc_fedo> local manually maintained cron, and the calendar package are pre-ical ;)
18:42:01 <smooge> and I will make sure that I cancel on list and in channel in the future
18:42:06 <orc_fedo> ty
18:42:21 <tibbs> BTW, I have a question for open floor.
18:42:28 <smooge> ok that is where we are at
18:42:32 <smooge> #topic Open Floor
18:42:57 <smooge> tibbs, you are up
18:43:06 <tibbs> If you have epel7 and epel7-testing enabled, do you see complaints about broken update notices?
18:43:11 <tibbs> Like this: https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:44:07 <tibbs> I have no idea what's supposed to happen here but I'm pretty sure my local mirror has the correct data.
18:44:42 <smooge> tibbs, confirmed
18:44:45 <smooge> >>>>>>> epel:updated
18:44:45 <smooge> Update notice FEDORA-EPEL-2018-a2b0c16057 (from epel-testing) is broken, or a bad duplicate, skipping.
18:44:46 <tibbs> Something like this was brought up on the epel list a while back.
18:44:48 <tibbs> Ah, cool.
18:45:03 <tibbs> Just to be sure, did you hit either of my mirrors for that?
18:45:28 <tibbs> (list of mirrors it used is down near the bottom of the output).
18:45:48 <tibbs> Mine are pubmirror*.math.uh.edu.
18:45:56 <smooge> * epel-testing: ftp.cse.buffalo.edu
18:46:13 <tibbs> OK, that would tend to absolve the mirrors, at least.
18:46:53 <tibbs> I'm guessing this is a releng issue but I'm really not sure what's supposed to happen, or why I just started seeing this within the past couple of months.
18:47:23 <smooge> so it looks like something in epel-testing is saying it is still there.. even though it is in stable
18:47:56 <tibbs> Right, either the update info should be removed from testing, or the update metadata shouldn't be changed when something gets pushed.
18:48:26 <tibbs> I could understand seeing this for a brief time between the main push and the testing push, but for me it's continuous.
18:48:42 <tibbs> Sometimes deleting /var/cache/yum will shut it up for a bit but it always seems to come back.
18:49:00 <smooge> and it takes a delete and not a yum clean all?
18:49:07 <nirik> I think it's a bodhi issue.
18:49:21 <tibbs> Yes, yum clean all doesn't seem to help.
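For reference, the warning above fires when the same update id shows up in both epel and epel-testing with disagreeing metadata. A simplified sketch of that consistency check against updateinfo.xml fragments — this is an illustration, not yum's actual code:

```python
import xml.etree.ElementTree as ET

def notices_by_id(updateinfo_xml):
    """Map update id -> the 'updated' date attribute in an updateinfo.xml blob."""
    root = ET.fromstring(updateinfo_xml)
    out = {}
    for update in root.findall("update"):
        updated = update.find("updated")
        out[update.findtext("id")] = (updated.get("date")
                                      if updated is not None else None)
    return out

def bad_duplicates(stable_xml, testing_xml):
    """Ids present in both repos whose metadata disagrees -- roughly the
    condition behind 'broken, or a bad duplicate, skipping'."""
    stable, testing = notices_by_id(stable_xml), notices_by_id(testing_xml)
    return sorted(uid for uid in stable.keys() & testing.keys()
                  if stable[uid] != testing[uid])
```

If bodhi leaves stale notices in epel-testing after a stable push (or rewrites the stable copy's timestamps), every client would see this continuously, which matches the symptom described.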
18:49:47 <nirik> is that a security update by chance?
18:50:08 <orc_fedo> why is https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:50:11 <orc_fedo> oops
18:50:23 <orc_fedo> why is https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:50:37 <orc_fedo> being diffed against https://paste.fedoraproject.org/paste/gHGKasKs~SlVx1zsPmiLmw
18:50:49 <orc_fedo> there is a local copy problem it seems
18:51:08 <orc_fedo> why is 'epel-testing:issued' being diffed against 'epel:issued' ??
18:51:45 <orc_fedo> I would expect that they vary
18:52:15 <smooge> I don't know enough about how updateinfo works to know how it matches up "information"
18:52:16 <tibbs> I guess the RHEL yum maintainers decided it should do that.
18:52:28 <orc_fedo> perhaps I am not understanding the report
18:52:43 <tibbs> yum updateinfo info won't even show me any of the advisories it complains about.
18:53:06 <tibbs> It ignores them completely, which could be kind of bad if anyone relies on that information.
18:53:10 <smooge> nirik, the ones I looked at are marked in bodhi as bugfix
18:53:31 <nirik> ok. no idea. I'd say gather all the info you can and file a upstream bodhi issue
18:53:57 <smooge> ok thanks
18:54:15 <smooge> tibbs, do you have any more requests on this one?
18:55:20 <tibbs> No, I'm good.
18:55:50 <smooge> ok well then any other open floor items? I need to fix my i386 EL6 system
18:55:54 <tibbs> I'll work on a bodhi issue.
18:56:15 <smooge> Thank you all for coming this week. I will see you all on the 1st
18:56:21 <smooge> #endmeeting