12:02:13 #startmeeting
12:02:13 Meeting started Wed Nov 26 12:02:13 2014 UTC. The chair is davemc. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:13 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:02:25 Etherpad is at https://public.pad.fsfe.org/p/gluster-community-meetings
12:02:37 * kkeithley is ready
12:02:47 #link https://public.pad.fsfe.org/p/gluster-community-meetings
12:02:57 Roll call
12:03:01 Who's here?
12:03:08 * davemc is almost here
12:03:12 * hagarth is here
12:03:16 * kkeithley is here
12:03:16 * atinmu is here
12:03:17 * overclk is here
12:03:49 * raghu` is here
12:04:03 First, apologies for yesterday's Hangouts issues. That hasn't happened to me before.
12:04:46 #item Debloper to send email on web revamp
12:04:48 davemc: no worries, we will always have teething problems
12:05:23 Debloper, I know you sent messages
12:05:24 * kshlm is here
12:05:28 davemc: have already
12:05:31 yeah :)
12:05:38 anything to add, or request?
12:05:52 should we broaden the review of the revamped gluster.org to the gluster-users ML?
12:05:57 Thoumas has been busy with the temp fix
12:05:57 not immediate, will post on list, in that case
12:06:30 hagarth, I think a bit of wiring to try an A-B test would be best before broadening it
12:06:53 davemc: ok
12:07:12 #item davemc to run a "what should it say?" through users ML
12:07:26 Not done, will try to queue up.
12:07:31 Been a busy week.
12:07:51 #item davemc Announcement about GlusterFS releases and their EOL
12:08:21 Not done on email: it will be mentioned in the GHO video releasing next week.
12:08:28 * lalatenduM is here
12:08:36 davemc: cool
12:08:39 * krishnan_p is here
12:08:49 #item hagarth to update maintainers file
12:08:53 * Debloper is here
12:09:13 davemc: tbd, I will be sending a note about new maintainers for other components this week
12:09:19 k
12:09:31 plan to update the maintainers file after that
12:09:52 #item TBD ndevos, lalatenduM, kkeithley and Humble to propose a long term solution to the packaging problem in CentOS/RHEL
12:09:59 Not sure where we ar with this
12:10:15 s/ar/are/
12:10:16 davemc, not done yet
12:10:24 k
12:10:36 let's carry it to next meeting
12:10:41 yeah, nowhere, so far
12:11:01 #item pointer to wiki from site
12:11:29 I'll go add one to the current site while we finish up the revamp
12:11:53 #item JustinClift to respond to Soumya Deb on gluster-infra about website
12:12:40 *silence* I'll ping Justin
12:12:56 #item Gluster 3.6
12:13:10 updates?
12:13:17 raghu` has been added to the gerrit admin group
12:13:31 3.6.2?
12:13:39 he will be managing patches for release-3.6 from now on in conjunction with component maintainers
12:14:06 cool
12:14:23 3.6.2 beta1 is expected this week
12:14:24 what's our next release target?
12:14:30 Yes. I am going through the patches. Will try to release 3.6.2 AAP
12:14:34 beat me to it
12:14:35 oops...ASAP
12:15:00 next week would be good. I mention 3.6.2 on the video as well
12:15:05 often and early .. way to go
12:15:15 other 3.6 items?
12:15:24 raghu, there are a few spurious-failure-related patches from me, can u pls consider them in your merge list
12:15:55 sure. I am going through them. Will merge them soon.
12:16:18 other 3.6 items?
12:16:30 raghu`, thanks
12:16:43 #item 3.5
12:16:53 Any 3.5 items?
12:17:37 hearing none...
12:17:43 ndevos is MIA today
12:17:56 k
12:17:57 we will have 3.5.4 sometime
12:18:05 as patches are being merged to release-3.5 :)
12:18:17 ndevos would be able to indicate timelines for 3.5.4
12:18:31 I'll ping him later
12:18:44 #item 3.4
12:18:45 ndevos is on a conf call with the nfs-ganesha meeting
12:18:57 kkeithley: ok
12:19:01 Anything on 3.4?
12:19:13 there's one show-stopper bug
12:19:16 https://bugzilla.redhat.com/show_bug.cgi?id=1127140
12:19:19 memory leak.
12:19:28 I suspect it applies to 3.5 as well
12:19:44 the memory leak stuff is not good
12:20:03 I think pranith was looking at it. I've been looking at it too, but been busy with ganesha work
12:20:11 kkeithley: do we know what workload triggers this?
12:20:13 davemc: +1
12:20:34 just I/O I think. See the longevity cluster
12:20:49 it's the client-side fuse bridge glusterfs daemon that's leaking
12:20:52 I've had a couple of side conversations with users complaining about stability, mentioning memory leaks
12:21:11 kkeithley: is there any impact by dropping caches in the longevity cluster?
12:21:11 several bugs have come in about client-side memory consumption
12:21:28 I tried disabling the caches in longevity
12:21:36 I suspect we also need to educate about how the VFS's caching can cause memory spikes in the glusterfs client
12:21:39 it didn't seem to change the long-term memory consumption
12:21:57 kkeithley: is the problem still happening in the longevity cluster?
12:22:08 eventually it OOM-kills the daemon. Haven't seen that on longevity yet, but people are reporting it
12:22:21 davemc: we need to review our strategy for stability
12:22:24 yes, it happens on longevity
12:22:33 which is running 3.6.1 atm
12:22:38 hagarth, +1
12:22:55 davemc: add an agenda item for next week on stability?
12:23:03 kkeithley: cool, I will review that
12:23:05 sure.
12:23:25 #action Discuss stability issues and directions
12:23:28 kkeithley: may need access details, will sync up with you offline on that one.
12:23:45 sure
12:23:59 anything else for 3.4?
12:24:11 going...
12:24:18 nope, I'll do 3.4.7beta once we fix the leak
12:24:27 sounds good
12:24:42 #item gluster next
12:25:01 We need to reschedule the bitrot session
12:25:11 And I'm not easily available next week
12:25:18 someone else want to run it?
12:25:40 overclk: ^^ ?
12:26:07 (travel and conference talks)
12:26:11 hagarth, davemc, I can run it. no probs.
12:26:19 great
12:26:37 let me know when and I'll promote it
12:27:03 davemc, sure. Most probably Tuesday but I'll confirm with you tomorrow.
12:27:25 k. It's an American holiday, but I'll check for it
12:27:50 #action reschedule bitrot session
12:27:56 davemc, ah ok. Wednesday then.
12:28:15 nope, tomorrow, not next week, sorry
12:28:18 Tuesday works
12:28:30 tomorrow and Friday are holidays.
12:28:54 happy Thanksgiving to all folks out there!
12:29:13 same from me!
12:29:19 okay
12:29:23 davemc, cool then. Tuesday it is. will confirm tomorrow.
12:29:35 overclk, done deal
12:29:42 #item small file performance
12:30:11 right
12:30:27 we have had some discussions about improving small file performance on the mailing lists.
12:31:14 I think we need more focus on dealing with this.. planning to convene a meeting to see how we can improve this in 3.7
12:31:23 that would be good
12:31:45 most probably will set it up for Thursday of next week
12:32:01 #action hagarth to set up small file performance meeting
12:32:32 I would love to do whatever we can to address this for good!
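
[Editor's note: for readers following the small file performance thread above, the sketch below is one hypothetical way to put a number on small-file throughput before the planned meeting: it times the creation of many small files on a mounted volume and reports files/sec. The mount path, file count, and file size are illustrative assumptions, not anything agreed in the meeting, and the function name is invented; this is a naive baseline, not the project's benchmark.]

```python
import os
import time

def smallfile_create_bench(mountpoint, nfiles=10000, size=4096):
    """Time creation + fsync of `nfiles` files of `size` bytes under `mountpoint`.

    Returns files/sec, a crude proxy for small-file create throughput.
    """
    workdir = os.path.join(mountpoint, "smallfile-bench")
    os.makedirs(workdir, exist_ok=True)
    payload = b"x" * size
    start = time.monotonic()
    for i in range(nfiles):
        path = os.path.join(workdir, f"f{i:06d}")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each write through the client stack
    elapsed = time.monotonic() - start
    return nfiles / elapsed

# Example (the path is an assumption -- point it at a FUSE-mounted volume):
# print(f"{smallfile_create_bench('/mnt/glustervol'):.0f} files/sec")
```

Comparing this number against the same loop run on a local filesystem gives a rough sense of the per-file overhead the discussion is about.
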
12:32:48 it's definitely an issue for GlusterFS
12:33:21 davemc: yes
12:33:24 Any more on this?
12:33:41 if all of you listening/reading this have great ideas for improving small file perf, please route them to the feature page or MLs :)
12:34:14 hagarth, +1
12:34:24 any more gluster next topics?
12:34:39 if not,
12:34:42 hagarth, I am interested in understanding how we plan to characterize good vs bad performance for small file workloads
12:35:02 It would be great if we got feedback on the glusterd management volume proposal
12:35:03 krishnan_p: cool, that can be a very good item for discussion next week
12:35:14 krishnan_p: on my list for this week
12:35:22 hagarth, thanks
12:35:38 any more gluster next topics?
12:35:40 hagarth, thanks
12:35:55 #item other agenda items
12:36:13 We didn't really resolve the Ubuntu issue last time
12:36:13 any thought process for stabilizing the rackspace regressions?
12:36:30 progress there (Ubuntu cppcheck)
12:36:41 kkeithley, great
12:36:43 kkeithley: should we attempt inclusion again?
12:36:45 I saw hagarth merged the fixes in master. ndevos merged in 3.5
12:37:01 attempt inclusion again?
12:37:15 inclusion in Ubuntu. Yes
12:37:26 kkeithley, +1
12:37:37 if we're ready
12:37:49 which release to pick?
12:37:52 still need the merge to 3.6
12:37:52 3.5.x?
12:38:07 maybe 3.6 would be better as it is the latest
12:38:16 we have 3.5.x in Fedora to f21, 3.6 in f22
12:38:50 kkeithley: would we still have to wait for April for Ubuntu to carry us? (considering their 6-month cycles)
12:38:56 For Ubuntu LTS I'd argue for 3.6
12:39:32 when is the next LTS planned? 15.04?
12:39:37 I don't know the Ubuntu protocol, but we can engage semiosis who is pretty well connected
12:40:04 kkeithley: right .. looks like the next LTS is 16.04
12:40:33 davemc: can you follow this up with semiosis?
12:41:17 #action davemc to chat with semiosis on Ubuntu release
12:41:48 done with Ubuntu for now?
12:41:57 seems like
12:42:04 atinmu: what in rackspace bothers us?
12:42:13 atinmu, rackspace
12:42:52 hagarth, we are seeing lots of spurious failures these days
12:43:13 atinmu: are there specific tests that cause these failures?
12:43:35 hagarth, and sometimes it's becoming difficult to debug as well
12:43:52 hagarth, unfortunately I could see a few new test cases which are failing sometimes
12:44:07 * kkeithley wishes we could get some action on putting the big server+RAID into the DC, either RDU2 with the CentOS or Ceph gear, or in PHX2
12:44:31 kkeithley, plans are being talked about to do that
12:44:35 atinmu: might be good to identify the ones which are failing more
12:44:39 and fix them first
12:44:51 but lots of moving parts
12:44:54 atinmu: would it be possible for you to open a discussion on gluster-devel regarding this topic?
12:45:19 hagarth, I did a few days back, probably I will follow up on that mail
12:45:38 atinmu: ok, let us act on that thread
12:46:07 hagarth, sure
12:46:09 #action follow up on Rackspace regressions
12:46:28 okay on moving on?
12:46:34 yep
12:46:43 #item gerrit upgrade
12:47:26 we need to upgrade gerrit
12:47:27 Yes please.
12:47:42 so that we can integrate with static analyzers like sonar
12:47:57 Humble has started a thread with JustinClift and misc on this one
12:48:07 I think we need a little more attention to make this happen
12:48:12 that seems the best path
12:48:25 I can also ping them
12:48:37 Humble: can you please also copy gluster-infra?
12:48:45 hagarth, sure..
12:48:48 davemc, thanks!
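
[Editor's note: while the Gerrit-side analyzer integration discussed here is pending, the sketch below shows one way a contributor could run static analysis locally before pushing a patch, in the spirit of the cppcheck cleanups mentioned above. The flags are standard cppcheck options; the target directories and threshold are illustrative assumptions, not the project's actual CI configuration.]

```python
import subprocess
import sys

def run_cppcheck(paths=("xlators", "libglusterfs")):
    """Run cppcheck over source subtrees and fail on any finding.

    --quiet suppresses progress noise, --enable adds the warning and
    performance checkers, and --error-exitcode makes findings fatal.
    """
    cmd = ["cppcheck", "--quiet", "--enable=warning,performance",
           "--error-exitcode=1", *paths]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Run from the top of a glusterfs checkout; exits nonzero on findings.
    sys.exit(run_cppcheck())
```
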
12:48:54 We need to get at least 2.10, it has a github plugin that can be helpful in bringing in more contributors.
12:48:57 I know misc has a bunch of stuff he's dealing with
12:49:05 hagarth, the gerrit upgrade is stopping us from using sonar
12:49:16 I would like sonar to vet every patch that comes into gerrit
12:49:41 after gerrit, we can upgrade Jenkins too :)
12:50:13 There is no other way to stop static code analysis issues
12:50:16 we'll need to figure out how best to upgrade without too much continual impact
12:50:43 davemc: yes
12:50:51 lalatenduM: can we start re-triggering coverity scans?
12:51:05 hagarth, yes, will do
12:51:25 do you want me to send the results to the MLs?
12:51:46 lalatenduM: yes please
12:51:53 hagarth, ok
12:52:53 k
12:53:06 anything else?
12:53:30 One thing, I am not available to run next week's meeting. Any volunteers?
12:53:45 I'll be speaking on a panel at this time
12:53:58 * lalatenduM too on PTO the whole of next week
12:54:06 davemc: will do
12:54:14 great, and thanks
12:54:19 hagarth++
12:54:44 anything else for this week?
12:55:03 going..
12:55:13 ..
12:55:25 ..
12:55:33 and gone
12:55:40 #endmeeting
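
[Editor's note: on the client-side memory leak discussed under the 3.4 item (bug 1127140), the sketch below is one hypothetical way to separate a real glusterfs FUSE-client leak from ordinary VFS cache growth, as raised in the "dropping caches" exchange: drop the kernel caches, which makes the kernel send FORGETs to the FUSE daemon, then check whether the daemon's resident memory falls back. Linux-only, needs root for the cache drop; the function names, PID lookup, and settle interval are assumptions for illustration.]

```python
import time

def rss_kb(pid):
    """Resident set size of `pid` in kB, read from /proc (Linux)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

def drop_caches():
    """Equivalent of `echo 3 > /proc/sys/vm/drop_caches`; requires root."""
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

def check_leak(glusterfs_pid, settle=5):
    """Compare the FUSE client's RSS before and after a kernel cache drop."""
    before = rss_kb(glusterfs_pid)
    drop_caches()
    time.sleep(settle)  # give the daemon time to process FORGETs and free memory
    after = rss_kb(glusterfs_pid)
    # If RSS barely moves after the cache drop, the memory is held by the
    # daemon itself (a leak candidate), not on behalf of the kernel's caches.
    print(f"RSS before: {before} kB, after drop_caches: {after} kB")
```
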