12:02:13 <davemc> #startmeeting
12:02:13 <zodbot> Meeting started Wed Nov 26 12:02:13 2014 UTC.  The chair is davemc. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:13 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:02:25 <davemc> Etherpad is at https://public.pad.fsfe.org/p/gluster-community-meetings
12:02:37 * kkeithley is ready
12:02:47 <davemc> #link https://public.pad.fsfe.org/p/gluster-community-meetings
12:02:57 <davemc> Roll call
12:03:01 <davemc> Who's here
12:03:08 * davemc is almost here
12:03:12 * hagarth is here
12:03:16 * kkeithley is here
12:03:16 * atinmu is here
12:03:17 * overclk is here
12:03:49 * raghu` is here
12:04:03 <davemc> First, apologies for yesterday's Hangouts issues. That hasn't happened to me before
12:04:46 <davemc> #item Debloper to send email on web revamp
12:04:48 <hagarth> davemc: no worries, we will always have teething problems
12:05:23 <davemc> Debloper, I know you sent messages
12:05:24 * kshlm is here
12:05:28 <Debloper> davemc: have already
12:05:31 <Debloper> yeah :)
12:05:38 <davemc> anything to add or request?
12:05:52 <hagarth> should we broaden the review of the revamped gluster.org to gluster-users ML?
12:05:57 <davemc> Thoumas has been busy with the temp fix
12:05:57 <Debloper> not immediate, will post on list, in that case
12:06:30 <davemc> hagarth, I think a bit of wiring to try an A-B test would be best before broadening it
12:06:53 <hagarth> davemc: ok
12:07:12 <davemc> #item davemc to run a "what should it say?" through the users ML
12:07:26 <davemc> Not done, will try to queue up
12:07:31 <davemc> Been a busy week
12:07:51 <davemc> #item davemc: announcement about GlusterFS releases and their EOL
12:08:21 <davemc> Not done on email: It will be mentioned in the GHO video releasing next week
12:08:28 * lalatenduM is here
12:08:36 <hagarth> davemc: cool
12:08:39 * krishnan_p is here
12:08:49 <davemc> #item hagarth to update maintainers file
12:08:53 * Debloper is here
12:09:13 <hagarth> davemc: tbd, I will be sending a note about new maintainers for other components this week
12:09:19 <davemc> k
12:09:31 <hagarth> plan to update maintainers file after that
12:09:52 <davemc> #item TBD: ndevos, lalatenduM, kkeithley and Humble to propose a long-term solution to the packaging problem in CentOS/RHEL
12:09:59 <davemc> Not sure where we are with this
12:10:16 <lalatenduM> davemc, not done yet
12:10:24 <davemc> k
12:10:36 <lalatenduM> let's carry it over to the next meeting
12:10:41 <kkeithley> yeah, nowhere, so far
12:11:01 <davemc> #item pointer to wiki from site
12:11:29 <davemc> I'll go add one to the current site while we finish up the revamp
12:11:53 <davemc> #item     JustinClift to respond to Soumya Deb on gluster-infra about website
12:12:40 <davemc> *silence* I'll ping Justin
12:12:56 <davemc> #item Gluster 3.6
12:13:10 <davemc> updates?
12:13:17 <hagarth> raghu` has been added to the gerrit admin group
12:13:31 <davemc> 3.6.2 ?
12:13:39 <hagarth> he will be managing patches for release-3.6 from now on in conjunction with component maintainers
12:14:06 <davemc> cool
12:14:23 <hagarth> 3.6.2 beta1 is expected this week
12:14:24 <davemc> what's our next release target?
12:14:30 <raghu`> Yes. I am going through the patches. Will try to release 3.6.2 ASAP
12:14:34 <davemc> beat me to it
12:15:00 <davemc> next week would be good. I mention 3.6.2 in the video as well
12:15:05 <hagarth> often and early .. way to go
12:15:15 <davemc> other 3.6 items?
12:15:24 <atinmu> raghu`, there are a few spurious-failure-related patches from me; can you please consider them in your merge list?
12:15:55 <raghu`> sure. I am going through them. Will merge them soon
12:16:18 <davemc> other 3.6 items?
12:16:30 <atinmu> raghu`, thanks
12:16:43 <davemc> #item 3.5
12:16:53 <davemc> Any 3.5 items?
12:17:37 <davemc> hearing none...
12:17:43 <hagarth> ndevos is MIA today
12:17:56 <davemc> k
12:17:57 <hagarth> we will have 3.5.4 sometime
12:18:02 <davemc> <grin>
12:18:05 <hagarth> as patches are being merged to release-3.5 :)
12:18:17 <hagarth> ndevos would be able to indicate timelines for 3.5.4
12:18:31 <davemc> I'll ping him later
12:18:44 <davemc> #item 3.4
12:18:45 <kkeithley> ndevos is on a conf call with nfs-ganesha meeting
12:18:57 <hagarth> kkeithley: ok
12:19:01 <davemc> Anything on 3.4?
12:19:13 <kkeithley> there's one show-stopper bug
12:19:16 <kkeithley> https://bugzilla.redhat.com/show_bug.cgi?id=1127140
12:19:19 <kkeithley> memory leak.
12:19:28 <kkeithley> I suspect it applies to 3.5 as well
12:19:44 <davemc> the memory leak stuff is not good
12:20:03 <kkeithley> I think pranith was looking at it.  I've been looking at it too, but been busy with ganesha work
12:20:11 <hagarth> kkeithley: do we know what workload triggers this?
12:20:13 <hagarth> davemc: +1
12:20:34 <kkeithley> just I/O I think.  See the longevity cluster
12:20:49 <kkeithley> it's the client-side fuse-bridge glusterfs daemon that's leaking
12:20:52 <davemc> I've had a couple of side conversations with users complaining about stability, mentioning memory leaks
12:21:11 <hagarth> kkeithley: is there any impact by dropping caches in the longevity cluster?
12:21:11 <kkeithley> several bugs have come in about client-side memory consumption
12:21:28 <kkeithley> I tried disabling the caches in longevity
12:21:36 <hagarth> I suspect we also need to educate users about how VFS caching can cause memory spikes in the glusterfs client
12:21:39 <kkeithley> it didn't seem to change the long term memory consumption
12:21:57 <hagarth> kkeithley: is the problem still happening in the longevity cluster?
12:22:08 <kkeithley> eventually the kernel OOM-kills the daemon. Haven't seen that on longevity yet, but people are reporting it
12:22:21 <hagarth> davemc: we need to review our strategy for stability
12:22:24 <kkeithley> yes, it happens on longevity
12:22:33 <kkeithley> which is running 3.6.1 atm
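(For reference: the cache-drop check kkeithley describes is easy to script. Dropping the kernel's dentry/inode caches causes FORGETs to be sent to the fuse client, so cache-driven memory in the glusterfs daemon should fall afterwards; if RSS stays flat, that points at a genuine leak. Below is a minimal sketch, assuming a Linux client whose fuse daemon shows up as a process named `glusterfs`; the process-name match and the 5-second settle delay are assumptions, not details from the discussion above.)

```python
#!/usr/bin/env python3
# Sample the glusterfs fuse client's RSS before and after dropping VFS
# caches, to separate kernel-cache-driven growth from a daemon-side leak.
# Run as root on the client. Matching the process name 'glusterfs' is an
# assumption about the deployment.
import subprocess
import time


def glusterfs_pids():
    # pgrep -x matches the exact process name of the fuse client daemon.
    out = subprocess.run(["pgrep", "-x", "glusterfs"],
                         capture_output=True, text=True)
    return [int(p) for p in out.stdout.split()]


def rss_kib(pid):
    # VmRSS in /proc/<pid>/status is reported in kB.
    with open(f"/proc/{pid}/status") as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0


def drop_caches():
    # 3 = free page cache + dentries + inodes; needs root. sync first so
    # dirty pages don't keep cache entries pinned.
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")


if __name__ == "__main__":
    before = {pid: rss_kib(pid) for pid in glusterfs_pids()}
    drop_caches()
    time.sleep(5)  # arbitrary settle time for the FORGETs to be processed
    for pid, old in before.items():
        new = rss_kib(pid)
        print(f"pid {pid}: {old} KiB -> {new} KiB ({new - old:+d} KiB)")
```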
12:22:38 <davemc> hagarth, +1
12:22:55 <hagarth> davemc: add an agenda item for next week on stability?
12:23:03 <hagarth> kkeithley: cool, I will review that
12:23:05 <davemc> sure.
12:23:25 <davemc> #action Discuss stability issues and directions
12:23:28 <hagarth> kkeithley: may need access details, will sync up with you offline on that one.
12:23:45 <kkeithley> sure
12:23:59 <davemc> anything else for 3.4?
12:24:11 <davemc> going...
12:24:18 <kkeithley> nope, I'll do 3.4.7beta once we fix the leak
12:24:27 <davemc> sounds good
12:24:42 <davemc> #item gluster next
12:25:01 <davemc> We need to reschedule the bitrot session
12:25:11 <davemc> And I'm not easily available next week
12:25:18 <davemc> someone else want to run it?
12:25:40 <hagarth> overclk: ^^ ?
12:26:07 <davemc> (travel and conference talks)
12:26:11 <overclk> hagarth, davemc, I can run it. no probs.
12:26:19 <davemc> great
12:26:37 <davemc> let me know when and I'll promote it
12:27:03 <overclk> davemc, sure. Most probably Tuesday, but I'll confirm with you tomorrow.
12:27:25 <davemc> k.  It's an American holiday, but I'll check for it
12:27:50 <davemc> #action reschedule bitrot session
12:27:56 <overclk> davemc, ah ok. Wednesday then.
12:28:15 <davemc> nope, tomorrow, not next week, sorry
12:28:18 <davemc> tuesday works
12:28:30 <kkeithley> tomorrow and Friday are holidays.
12:28:54 <hagarth> happy thanksgiving to all folks out there!
12:29:13 <davemc> same from me!
12:29:19 <davemc> okay
12:29:23 <overclk> davemc, cool then. Tuesday it is. will confirm tomorrow.
12:29:35 <davemc> overclk, done deal
12:29:42 <davemc> #item small file performance
12:30:11 <hagarth> right
12:30:27 <hagarth> we have had some discussions about improving small file performance in the mailing lists.
12:31:14 <hagarth> I think we need more focus on dealing with this.. planning to convene a meeting to see how we can improve it in 3.7
12:31:23 <davemc> that would be good
12:31:45 <hagarth> most probably will set it up for Thursday of next week
12:32:01 <davemc> #action hagarth to set up small file performance meeting
12:32:32 <hagarth> I would love to do whatever we can to address this for good!
12:32:48 <davemc> it's definitely an issue for GlusterFS
12:33:21 <hagarth> davemc: yes
12:33:24 <davemc> Any more on this
12:33:27 <davemc> ?
12:33:41 <hagarth> if all of you listening/reading this have great ideas for improving small file perf, please route them to the feature page or MLs :)
12:34:14 <davemc> hagarth, +1
12:34:24 <davemc> any more gluster next topics?
12:34:39 <davemc> if not,
12:34:42 <krishnan_p> hagarth, I am interested in understanding how we plan to characterize good vs bad performance for small-file workloads
12:35:02 <krishnan_p> It would be great if we got feedback on the glusterd management volume proposal
12:35:03 <hagarth> krishnan_p: cool, that can be a very good item for discussion next week
12:35:14 <hagarth> krishnan_p: on my list for this week
12:35:22 <krishnan_p> hagarth, thanks
12:35:38 <davemc> any more gluster next topics?
12:35:40 <atinmu> hagarth, thanks
12:35:55 <davemc> #item other agenda items
12:36:13 <davemc> We didn't really resolve the Ubuntu issue last time
12:36:13 <atinmu> any thoughts on stabilizing the Rackspace regressions?
12:36:30 <kkeithley> progress there (Ubuntu cppcheck)
12:36:41 <davemc> kkeithley, great
12:36:43 <hagarth> kkeithley: should we attempt inclusion again?
12:36:45 <kkeithley> I saw hagarth merged the fixes in master. ndevos merged in 3.5
12:37:01 <kkeithley> attempt inclusion again?
12:37:15 <kkeithley> inclusion in Ubuntu.  Yes
12:37:26 <davemc> kkeithley, +1
12:37:37 <davemc> if we're ready
12:37:49 <hagarth> which release to pick?
12:37:52 <kkeithley> still need the merge to 3.6
12:37:52 <hagarth> 3.5.x?
12:38:07 <hagarth> maybe 3.6 would be better as it is the latest
12:38:16 <kkeithley> we have 3.5.x in Fedora up to f21, 3.6 in f22
12:38:50 <hagarth> kkeithley: would we still have to wait for April for Ubuntu to carry us? (considering their 6 month cycles)
12:38:56 <kkeithley> For Ubuntu LTS I'd argue for 3.6
12:39:32 <hagarth> when is the next LTS planned? 15.04?
12:39:37 <kkeithley> I don't know the Ubuntu protocol, but we can engage semiosis who is pretty well connected
12:40:04 <hagarth> kkeithley: right .. looks like the next LTS is 16.04
12:40:33 <hagarth> davemc: can you follow this up with semiosis?
12:41:17 <davemc> #action davemc to chat with semiosis about the Ubuntu release
12:41:48 <davemc> done with ubuntu for now?
12:41:57 <hagarth> seems like
12:42:04 <hagarth> atinmu: what in rackspace bothers us?
12:42:13 <davemc> atinmu, rackspace
12:42:52 <atinmu> hagarth, we are seeing lots of spurious failures these days
12:43:13 <hagarth> atinmu: are there specific tests that cause these failures?
12:43:35 <atinmu> hagarth, and sometimes it's becoming difficult to debug as well
12:43:52 <atinmu> hagarth, unfortunately I can see a few new test cases that are failing intermittently
* kkeithley wishes we could get some action on putting the big server+RAID into the DC, either RDU2 with CentOS or Ceph gear, or in PHX2
12:44:31 <davemc> kkeithley, plans are being talked about to do that
12:44:35 <hagarth> atinmu: might be good to identify the ones which are failing more
12:44:39 <hagarth> and fix them first
12:44:51 <davemc> but lots of moving parts
12:44:54 <hagarth> atinmu: would it be possible for you to open a discussion on gluster-devel regarding this topic?
12:45:19 <atinmu> hagarth, I did a few days back; I will probably follow up on that mail
12:45:38 <hagarth> atinmu: ok, let us act on that thread
12:46:07 <atinmu> hagarth, sure
12:46:09 <davemc> #action follow up on Rackspace regressions
12:46:28 <davemc> okay on moving on?
12:46:34 <hagarth> yep
12:46:43 <davemc> #item gerrit upgrade
12:47:26 <hagarth> we need to upgrade gerrit
12:47:27 <kshlm> Yes please.
12:47:42 <hagarth> so that we can integrate with static analyzers like sonar
12:47:57 <hagarth> Humble has started a thread with JustinClift and misc on this one
12:48:07 <hagarth> I think we need a little more attention to make this happen
12:48:12 <davemc> that seems the best path
12:48:25 <davemc> I can also ping them
12:48:37 <hagarth> Humble: can you please also copy gluster-infra?
12:48:45 <Humble> hagarth, sure..
12:48:48 <Humble> davemc, thanks!
12:48:54 <kshlm> We need to get at least 2.10; it has a GitHub plugin that can be helpful in bringing in more contributors.
12:48:57 <davemc> I know misc has a bunch of stuff he's dealing with
12:49:05 <lalatenduM> hagarth, the pending gerrit upgrade is stopping us from using sonar
12:49:16 <hagarth> I would like sonar to vet every patch that comes in to gerrit
12:49:41 <hagarth> after gerrit, we can upgrade Jenkins too :)
12:50:13 <lalatenduM> There is no other way to keep static code analysis issues from creeping in
12:50:16 <davemc> we'll need to figure out how best to upgrade without too much disruption
12:50:43 <hagarth> davemc: yes
12:50:51 <hagarth> lalatenduM: can we start re-triggering coverity scans?
12:51:05 <lalatenduM> hagarth, yes, will do
12:51:25 <lalatenduM> do you want me to send the results to the MLs?
12:51:46 <hagarth> lalatenduM: yes please
12:51:53 <lalatenduM> hagarth, ok
12:52:53 <davemc> k
12:53:06 <davemc> anything else?
12:53:30 <davemc> One thing, I am not available to run next week's meeting. Any volunteers?
12:53:45 <davemc> I'll be speaking on a panel at this time
* lalatenduM is also on PTO the whole of next week
12:54:06 <hagarth> davemc: will do
12:54:14 <davemc> great, and thanks
12:54:19 <davemc> hagarth++
12:54:44 <davemc> anything else for this week?
12:55:03 <davemc> going..
12:55:13 <davemc> ..
12:55:25 <davemc> ..
12:55:33 <davemc> and gone
12:55:40 <davemc> #endmeeting