17:30:40 #startmeeting gluster
17:30:47 Let's begin!
17:30:53 Welcome everyone.
17:30:58 #topic Roll Call
17:31:34 * kshlm _o/
17:31:40 * kkeithley is here
17:31:58 * anoopcs is here
17:32:09 o/
17:33:08 * rastar is here
17:33:34 * msvbhat is here
17:33:43 o/
17:33:47 Let's start.
17:33:53 Welcome again everyone.
17:34:10 The agenda is always available at https://public.pad.fsfe.org/p/gluster-community-meetings
17:34:26 * ira is here.
17:34:40 Add any topics you want to discuss that aren't already in the agenda under Open-floor.
17:34:53 #topic Next week's host
17:35:15 First up, do we have volunteers to host the next meeting?
17:35:27 * jiffin is here
17:36:11 No one?
17:37:13 I could try doing that next week. Haven't done it before though.
17:37:28 samikshan, Cool!
17:37:36 It's not too hard.
17:37:59 Just follow the agenda and what we're doing today.
17:38:03 Yep, will just follow the process.
17:38:09 There should be people around to help.
17:38:18 #info samikshan is next week's host
17:38:33 Thanks samikshan++
17:38:34 kshlm: Karma for samxan changed to 1 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:38:42 :)
17:38:52 #topic GlusterFS-4.0
17:39:10 I don't have much to report with regards to GD2 for this week.
17:39:15 I've not progressed a lot.
17:39:46 And I don't see anyone else around to provide updates on DHT2 and JBR.
17:40:00 I'm assuming no progress there either.
17:40:28 I'll move on to the next topic unless we have anything else related to 4.0 to discuss.
17:41:35 Okay.
17:41:41 #topic GlusterFS-3.9
17:42:01 aravindavk, Any updates for us?
17:42:38 He doesn't seem to be around.
17:42:41 kshlm: 3.9 branch created, we are working on stabilizing the branch
17:42:48 Oh. There you are.
17:42:58 * ndevos _o/
17:43:06 Pranith is collecting details of the tests we can run for 3.9
17:43:31 That's cool. I'm just adding tests for GlusterD.
17:43:45 Or was adding. I'll continue after the meeting.
17:44:36 aravindavk, Please make sure to announce the branching on the mailing lists. (If you'd already done it, I must have missed it. Sorry)
17:44:48 kshlm: I did
17:44:57 Link?
17:45:29 kshlm: http://www.gluster.org/pipermail/gluster-devel/2016-September/050741.html
17:45:39 #link https://www.gluster.org/pipermail/gluster-devel/2016-September/050741.html
17:45:42 Thanks aravindavk
17:46:23 aravindavk, Could you also update the mail with the dates you're targeting?
17:46:35 For the rc, beta and final release?
17:47:09 kshlm: sure
17:47:13 We cannot have patches being accepted right up till the release date.
17:47:34 I'll add an AI for this on you.
17:48:06 #action aravindavk to update the lists with the target dates for the 3.9 release
17:48:43 aravindavk, Do you have anything else to share?
17:48:55 kshlm: thanks. that's all from my side
17:49:10 aravindavk, Okay.
17:49:29 Everyone, make sure to fill in pranithk's list of tests for your components soon.
17:49:34 Let's move on.
17:49:39 #topic GlusterFS-3.8
17:49:55 ndevos, You're up.
17:50:06 Everything fine in release-3.8 land?
17:50:07 nothing special to note, all goes according to plan
17:50:39 I'll probably start the release tomorrow or Friday, and will ask the maintainers to refrain from merging patches
17:50:44 So the next release will happen on the 10th?
17:51:04 yes, I plan to do that
17:51:12 ndevos, Sounds good.
17:51:20 You've got it under control.
17:51:33 (might be another airport release, we should give the releases useless names)
17:51:34 If there's nothing else, I'll move on.
17:51:46 nope, move on :)
17:51:51 Thank you.
17:51:56 #topic GlusterFS-3.7
17:52:16 3.7.15 has been released.
17:52:35 bug fixes? or release notes?
17:53:04 (we're seeing "random" insane load on 3.7.14, specifically with many, many small files)
17:53:11 jkroon, Notes are available at https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.15.md
17:53:32 The release happened on time, and was smooth.
17:53:38 Thanks to everyone who helped.
17:54:10 jkroon, Have you discussed the issue before?
17:54:19 I mean on the mailing lists or on IRC?
17:54:50 kshlm, only saw it yesterday.
17:55:17 we had a VERY long discussion between JoeJulian and myself a while back - that one was resolved by a kernel downgrade from 4.6.4 to 3.19.5 (the older version).
17:55:31 seems that was caused by underlying mdadm problems. in this case however that did not help.
17:56:12 3.8.4 will be the Humberto Delgado release.
17:56:56 jkroon, Okay. Let's discuss your issue further after this meeting.
17:57:13 It can continue on #gluster{,-dev} or on the mailing lists.
17:57:32 kshlm, i'd really appreciate that, i'm in #gluster.
17:57:38 We should be able to help fix it.
17:58:13 That's it for 3.7.
17:58:27 I'll move on if we have nothing further to discuss. I don't have anything else.
17:59:35 I'll skip 3.6. Nothing to add there, other than the pending EOL.
18:00:02 #topic Project Infrastructure
18:00:15 nigelb, misc, ^
18:00:23 * post-factum is late
18:00:55 Hello!
18:01:09 nigelb, Hi :)
18:01:19 I've done one round of cleaning up the gluster-maintainers team as I discussed on the mailing list.
18:01:19 Hey post-factum!
18:01:28 kshlm: o/
18:01:39 So if you suddenly lost access to merge, file a bug (if you lost access, you should have gotten an email from me though)
18:02:01 I'm working with Shyam to get some upstream performance numbers.
18:02:20 nigelb, Was there a lot to clean up?
18:02:28 About 5 or 6.
18:02:54 The rough strategy is that we'll publish the scripts used to get the performance numbers, our machine specs, and the numbers we get on our infra.
18:03:21 nigelb, Where will these be running?
18:03:34 It may be running on internal hardware.
18:03:37 On centos-ci? Or do we have new infra available?
18:03:43 This is why we'll publish the specs.
18:04:07 I'm still working out the details (in fact, I have a call after this)
18:04:09 So, it's still unknown then.
18:04:16 Okay.
18:04:29 We all look forward to seeing this done.
18:04:33 I've been putting some work into the failure dashboard.
18:04:41 The latest is here -> https://2586f5f0.ngrok.io/
18:04:49 I think it's ready to be our single source of truth.
18:04:55 It has information from the last 2 weeks.
18:05:07 (you're hitting my laptop, so go easy there)
18:05:55 Oh. I thought it was static.
18:06:01 There's a bit more clean up I need to do so it doesn't get DDoSed. And I'll get it on a public URL.
18:06:10 Nope, there's a good amount of db queries.
18:06:44 Rather than sending emails, I wanted one place that would record the information.
18:06:53 And you could do an archive search if you wanted.
18:07:21 nigelb, I'd still want emails.
18:07:43 It's a good way to nag people to fix whatever needs fixing.
18:07:51 Sure, but now we can do targeted emails. Or bugs.
18:07:52 yeah, emails are easier to follow up and check progress
18:08:02 Like that list has 3 failures which need looking into
18:08:08 because they've happened 10+ times.
18:08:47 which job results are these?
18:08:50 Targeted is good.
18:08:54 centos and netbsd
18:08:58 As long as we don't swamp -devel or maintainers as we do now.
18:09:14 no, I mean, is it running git/HEAD, or patches that are posted?
18:09:34 in the 2nd case, many patches need to update/correct the .t files, and that is done only after it fails a couple of times
18:09:51 Yeah, this is from patches submitted.
18:09:55 so that sort of failure is not excluded.
18:10:05 (of course developers should run the tests before posting patches, but that rarely happens)
18:10:13 from next week on, we'll have enough data from regression-test-burn-in as well
18:10:22 which should give us the status of master.
18:10:35 We fixed regression-test-burn-in yesterday!
18:10:38 ok, results from regression-test-burn-in would definitely be more useful
18:10:50 indeed.
18:10:58 And we also have a working strfmt_errors job.
18:11:08 master is green, and a few release branches need to be green.
18:11:11 then I can turn it on.
18:11:22 (turn on voting for it, rather)
18:11:34 ah, yes, but strfmt is not correct for release-3.7 and 3.8 yet, I've sent backports to get those addressed
18:11:46 ndevos: can you link me to the patches?
18:11:57 I'll watch them, so I can turn on voting after those are merged.
18:12:09 #link http://review.gluster.org/#/q/I6f57b5e8ea174dd9e3056aff5da685e497894ccf
18:12:09 misc is working on getting VMs on the new hardware.
18:12:15 strfmt fixes for 3.7 is/are already merged
18:12:24 thanks kkeithley!
18:12:24 We're targeting moving the rpm jobs to our hardware in the near future.
18:12:30 And then centos regression.
18:12:47 hopefully, in a few months, we shouldn't have any rackspace machines.
18:12:57 \o/
18:13:08 and we may have machines that we can use and rebuild for tests.
18:13:30 The next round of cleanup is going to target jenkins users.
18:13:43 I'll send an email to devel once I have the energy to fix it up.
18:13:50 we want to drop all rackspace machines? or did their sponsoring completely stop?
18:14:02 the sponsoring situation hasn't changed.
18:14:11 I do not think we should drop them, but reduce for sure
18:14:24 drop all rackspace machines -> for CI.
18:14:30 we'll still have some machines there.
18:14:34 oh ... we could have a build farm and a test farm
18:14:45 But sort of as burstable capacity.
18:14:55 mchangir: we only have 4 machines, not enough to run a test farm.
18:15:06 Our test farm will have to be centos CI.
18:15:22 That's all from infra, unless you have questions for me.
18:15:24 nigelb, Awesome news!
18:15:33 we have other machines (internal) for running test farms on
18:15:36 But I'd like to move on to the next topic now.
18:15:51 Thanks nigelb
18:15:59 #topic NFS-Ganesha
18:16:01 and the CentOS CI has many machines we can use for testing too - it just needs someone to write/maintain tests
18:16:03 *** rastar is now known as rtalur_afk
18:16:12 *** rtalur_afk is now known as rastar_afk
18:16:43 kkeithley, ndevos ?
18:16:52 ?
18:16:56 oh
18:17:16 kkeithley, #topic NFS-Ganesha
18:17:35 2.4rc3 was tagged yesterday. expect GA in two weeks if we can figure out the perf regressions from dev29 by then
18:18:11 (dev29 was the last -dev release before rc1)
18:18:35 kkeithley, Thanks.
18:18:40 I believe skoduri has some patches out for review to address some of the perf regressions. so we're optimistic
18:18:58 The regression was in gluster?
18:19:18 no, in nfs-ganesha
18:19:33 all testing has been on top of glusterfs-3.8.2
18:20:21 or 3.8.1
18:20:39 kkeithley, I hope the regressions get fixed :)
18:20:51 I'll move on if there is nothing else.
18:20:52 you're not the only one. ;-)
18:21:51 Thanks again kkeithley
18:21:55 #topic Samba
18:21:57 * obnox is summoned
18:22:07 obnox, Yes you are.
18:22:13 it seems samba 4.5.0 is going to be released today
18:22:22 (instead of 4.5.0 rc4)
18:22:52 rastar_afk has proposed a patch to samba upstream to teach the gluster vfs module about multiple volfile servers
18:23:01 (as supported by libgfapi)
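For context, a vfs_glusterfs share definition along those lines might look roughly like the sketch below. The share name, volume name and hostnames are placeholders, and the space-separated multi-server value for glusterfs:volfile_server is an assumption about the proposed patch, which was still under review at the time.

```
# Hypothetical smb.conf share exporting a Gluster volume via vfs_glusterfs.
# Listing several hosts in glusterfs:volfile_server is the behaviour the
# proposed patch is meant to add; the exact syntax here is an assumption.
[gluster-share]
    path = /
    read only = no
    vfs objects = glusterfs
    glusterfs:volume = myvol
    glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
    glusterfs:volfile_server = node1.example.com node2.example.com node3.example.com
    kernel share modes = no
```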
18:23:19 performance testing of samba with the md-cache improvements is ongoing
18:23:30 that's all i can think of right now
18:23:49 ah, on a related note:
18:24:10 obnox, How is the performance? All good?
18:24:19 there are requests to extend CTDB to provide better HA management for gluster-nfs
18:24:31 kshlm: poornima would have the latest info on that
18:24:47 i think it is going well, nice improvements
18:25:05 we're also currently testing that no general functional regressions are introduced when using samba on top of those patches
18:25:59 obnox, I hope there are none.
18:26:02 Thanks obnox.
18:26:12 I'll move on if you're done.
18:26:16 yep, please
18:26:40 Thanks again.
18:26:49 #topic Last week's AIs
18:26:57 #topic improve cleanup to control the processes that test starts
18:27:05 Doesn't have an assignee.
18:27:19 Who was this AI on?
18:27:47 either nigelb, jdarcy or me, I guess?
18:28:01 or maybe rastar_afk
18:28:16 kshlm: I guess it was on rastar
18:28:32 I'm looking at the logs,
18:28:34 I've not seen any progress there, but I might have missed it...
18:28:43 and ndevos, jdarcy and rastar_afk are involved.
18:28:54 ndevos, Keep it open for next week?
18:28:54 it was about killing processes that were started in the background
18:29:12 yes, that's fine
18:29:41 * ndevos doesn't know how urgent it is anyway
18:29:49 #action rastar_afk/ndevos/jdarcy to improve cleanup to control the processes that test starts
18:30:02 No more AIs. So...
18:30:07 #topic Open Floor
18:30:17 memory leaks! i guess we need a dedicated topic for them
18:30:20 Looking at the list, it looks like just announcements.
18:30:30 From me and kkeithley
18:30:41 well, info, not announcements.
18:31:01 yeah.
18:31:01 also, no hot sauce, no Ghirardelli chocolate until I get my reviews.
18:31:22 ;-)
18:31:54 post-factum, You're the force wiping out the leaks.
18:32:03 kshlm, i've figured out how to deal with the massif valgrind profiler and updated the BZ about the FUSE client leak. Nithya should take a look at that.
18:32:05 Just keep going.
18:32:11 :)
18:32:35 Awesome! So you finally got it figured out.
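For reference, one way to profile a FUSE client's heap with valgrind's massif tool is sketched below. The server, volume name and mount point are placeholders, and this is not necessarily the exact procedure post-factum used for the BZ.

```
# Start the gluster FUSE client in the foreground (-N) under massif so
# valgrind can track its heap allocations (hypothetical server/volume).
valgrind --tool=massif --massif-out-file=massif.out.fuse \
    glusterfs -N --volfile-server=server1.example.com --volfile-id=myvol /mnt/myvol

# In another shell: reproduce the small-file workload, then unmount,
# which makes the client exit and massif write its output file.
#   umount /mnt/myvol

# Inspect the recorded heap snapshots.
ms_print massif.out.fuse | less
```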
18:33:36 post-factum, I'm okay with adding a mem-leak topic, if you agree to provide updates on your adventures wiping out leaks.
18:33:57 I should set up some VMs to run longevity on 3.7.x. (current longevity is on bare metal.) I'd also like to test other things besides a DHT+AFR volume, e.g. tiering
18:34:25 kshlm: unfortunately, no. those are non-systematic, and my plan is to move to RH in the next 2 weeks, so I'm not sure I'll be able to do much with Gluster for the next couple of months
18:35:14 post-factum, That's okay. Just pop the topic in under open-floor whenever you have anything.
18:35:27 And nbalacha has just shown up.
18:35:30 kshlm: okay, sure
18:35:35 nbalacha: just in time :)
18:35:43 If you need to discuss leaks, do it on #gluster-dev
18:35:51 I want to end this meeting soon.
18:36:02 okay
18:36:15 I'll dump all the Open-floor topics here.
18:36:34 More review needed for https://github.com/gluster/glusterdocs/pull/139, this is the update to the release process doc.
18:36:38 It needs to merge.
18:36:51 From kkeithley:
18:37:00 'please review the nine remaining 'unused variable' patches, otherwise I'm bringing my laser eyes to BLR next week to glare at you.
18:37:00 reviewers, you know who you are! (And so do I)'
18:37:16 chocolate and hot sauce are the reward!
18:37:31 kkeithley again, on the longevity cluster,
18:37:34 for everyone
18:37:44 '''longevity cluster (36 days running) https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
18:37:44 glusterd: RSZ 22132 -> 22132, VSZ 673160 -> 673160 (small change in VSZ from last week)
18:37:44 glusterfs (fuse client): RSZ 59628 -> 61460, VSZ 800044 -> 947508 (small change in RSZ, no change in VSZ from last week)
18:37:44 glusterfs (shd): RSZ 58928 -> 59388, VSZ 747476 -> 813012 (tiny change in RSZ, no change in VSZ from last week)
18:37:44 glusterfsd: RSZ 47452 -> 47832, VSZ 1319908 -> 1584108 (small change in RSZ, no change in VSZ from last week)
18:37:44 (no, I haven't added state dumps yet. still running 3.8.1)
18:37:44 '''
18:37:46 #link http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1369124
18:38:05 And finally, the regular announcements.
18:38:17 If you're attending any event/conference, please add the event and yourselves to the Gluster attendance of events page: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
18:38:24 Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
18:38:37 This should be fortnightly news now.
18:38:57 hmm, I thought I sent a pull request for the events page last week...
18:39:02 Anyway, I'm ending this meeting.
18:39:03 *** skoduri|training is now known as skoduri
18:39:08 Thanks everyone.
18:39:13 thanks kshlm!
18:39:20 Let's meet next week.
18:39:24 samikshan will be hosting.
18:39:27 #endmeeting