12:00:43 <kshlm> #startmeeting Weekly Community Meeting 24/Aug/2016
12:00:43 <zodbot> Meeting started Wed Aug 24 12:00:43 2016 UTC.  The chair is kshlm. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:43 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:43 <zodbot> The meeting name has been set to 'weekly_community_meeting_24/aug/2016'
12:00:53 <kshlm> #topic Rollcall
12:00:57 <kshlm> Welcome everyone!
12:01:01 <post-factum> o/
12:01:09 <kshlm> \o post-factum
12:01:11 * jiffin is here
12:01:18 * cloph 's just lurking..
12:01:28 <kshlm> Giving a couple of minutes for people to filter in
12:01:33 * skoduri is here
12:01:35 <cloph> pad seems to be sloooow/or even unreachable for me..
12:02:01 <cloph> 504 gw timeout..
12:02:28 * obnox is here
12:02:48 <kshlm> cloph, Me too.
12:02:55 <kshlm> Stuck on loading.
12:03:14 * ankitraj here
12:03:19 <kshlm> Bah refreshed just as it loaded.
12:03:24 <cloph> now it did load
12:04:06 <kshlm> Loaded for me as well.
12:04:12 * rastar is here, late
12:04:15 * anoopcs is here
12:04:17 * kkeithley is late too
12:04:17 <kshlm> Welcome everyone. And welcome cloph!
12:04:21 * karthik_ is here
12:04:22 <kshlm> Let's start
12:04:33 <kshlm> #topic Next week's meeting host
12:04:38 <kshlm> Hey hagarth!
12:04:47 <hagarth> kshlm: hey!
12:04:52 <kshlm> Okay, so who wants to host next week's meeting?
12:05:18 <kshlm> We need a volunteer.
12:05:48 <kshlm> Anyone?
12:05:56 <rastar> I will.
12:06:01 <kshlm> Cool.
12:06:05 <kshlm> Thanks rastar.
12:06:15 <kshlm> #info rastar will host next week's meeting
12:06:19 <rastar> that's a good way to force myself to attend the meeting too :)
12:06:25 <kshlm> Let's move on.
12:06:30 <kshlm> rastar, Agreed.
12:06:42 <kshlm> #topic GlusterFS-4.0
12:06:44 * rjoseph joins late
12:06:56 <kshlm> Do we have updates for 4.0?
12:07:29 <kshlm> I don't see anyone apart from me around. So I'll go.
12:08:01 <kshlm> In GlusterD-2.0 land, last week I hit a problem with the transaction framework.
12:08:21 <kshlm> I was testing out solutions for it, and I've landed on my choice.
12:09:03 <kshlm> I was having trouble sending random data-structs as json and rebuilding the struct from json.
12:09:27 <kshlm> Instead of sending it on wire, I'll just use the etcd store to share these bits of info.
12:09:44 <kshlm> I'm working on cleaning up the change to push it for review.
12:10:05 <kshlm> Also, last week I found that etcd now allows embedding itself in other apps.
12:10:24 <kshlm> I will check out how it's gonna work in the weeks to come.
12:10:38 <kshlm> This should allow very easy deployments.
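For context on the two ideas kshlm describes above (sharing JSON-encoded transaction data through the etcd store instead of sending it over the wire, and embedding etcd inside the daemon), here is a minimal Go sketch. This is not GD2's actual code: the TxnStep struct, the key names, the data directory, and the endpoint are made up for illustration, and the import paths are the github.com/coreos/etcd ones in use around this time (they have since moved under go.etcd.io).

```go
// Hedged sketch, not GD2's actual integration.
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/embed"
)

// TxnStep is a hypothetical stand-in for the transaction data a GD2 node
// would need to share with its peers.
type TxnStep struct {
	Volume string `json:"volume"`
	Action string `json:"action"`
}

func main() {
	// Run etcd inside the daemon instead of managing a separate service.
	cfg := embed.NewConfig()
	cfg.Dir = "gd2.etcd" // assumed data directory
	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
		log.Println("embedded etcd is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop()
		log.Fatal("embedded etcd took too long to start")
	}

	// Share transaction state through the store rather than over the wire.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // etcd's default client URL
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	step := TxnStep{Volume: "testvol", Action: "start"}
	buf, err := json.Marshal(step)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if _, err := cli.Put(ctx, "/transactions/1234/step", string(buf)); err != nil {
		log.Fatal(err)
	}

	// A peer (or another step of the transaction) reads it back and
	// rebuilds the struct from JSON.
	resp, err := cli.Get(ctx, "/transactions/1234/step")
	if err != nil {
		log.Fatal(err)
	}
	var got TxnStep
	if len(resp.Kvs) > 0 {
		json.Unmarshal(resp.Kvs[0].Value, &got)
	}
	log.Printf("read back: %+v", got)
}
```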
12:10:47 <cloph> (FYI: https://www.gluster.org/community/roadmap/4.0/ → glusterD 2.0 features page ( http://www.gluster.org/community/documentation/index.php/Features/thousand-node-glusterd is 404/documentation moved))
12:11:26 <kshlm> cloph, Yeah, we shutdown the old wiki.
12:11:46 <kshlm> This GlusterD-2.0 bit now lives in the gluster-specs repository
12:12:21 <kshlm> https://github.com/gluster/glusterfs-specs
12:12:26 <kshlm> It's glusterfs-specs
12:12:34 <cloph> thx
12:12:56 <kshlm> Does anyone else have stuff to add about 4.0 projects?
12:13:13 <justinclift> Yep
12:13:19 <kshlm> justinclift, Go on.
12:13:35 <justinclift> Is there an ETA for when people will be able to try something out?  even extremely alpha? :D
12:14:06 <kshlm> For GD2 I'm aiming for something to be available at summit.
12:14:20 <kshlm> For DHT2 and JBR, I'm not sure.
12:14:33 <kshlm> Things have been progressing very (very) slowly.
12:14:50 <justinclift> 2016 is unlikely for something other dev's could try out?
12:14:53 <kshlm> hagarth, Do you have anything to say about this?
12:14:55 <justinclift> DHT2 that is
12:15:24 <hagarth> kshlm: not at the moment
12:15:41 <kshlm> justinclift, I honestly don't know what state it is in. I just know that POCs were done.
12:15:43 <hagarth> justinclift: I think JBR is close to being consumable.. DHT2 is still some time away
12:15:53 <justinclift> k
12:16:01 <justinclift> Please continue :)
12:16:07 <kshlm> Thanks hagarth and justinclift
12:16:11 <kshlm> Let's get on.
12:16:23 <kshlm> #topic GlusterFS-3.9
12:16:46 <kshlm> Neither of the release-maintainers are around today.
12:16:58 <kshlm> So just reiterating what was said last week.
12:17:05 <aravindavk_> nothing much, AI on me to send a reminder to the mailing list, not done yet
12:17:19 <kshlm> Oh cool! aravindavk_ is here.
12:17:49 <kshlm> Yup. The mail is to notify that 3.9 will be entering a stability phase at the end of August.
12:18:09 <kshlm> So any new feature that needs to make it in should be merged in before that.
12:18:36 <kshlm> After that, only bug-fixes will be allowed, to allow some time for the release to stabilize.
12:18:45 <kshlm> aravindavk_, Please send that out ASAP.
12:18:47 <hagarth> do we have any patches that need review attention before the end of August?
12:19:01 <aravindavk_> kshlm: will send tomorrow
12:19:13 <kshlm> hagarth, We should get to know once the announcement goes out.
12:19:18 <hagarth> kshlm: right
12:19:39 <kshlm> aravindavk_, As soon as you can. We have a week left.
12:19:46 <kshlm> Shall I move on?
12:19:47 <ndevos> Manikandan: do you think the SELinux xlator can be ready fast enough? otherwise we'll move it to 3.9+1
12:20:19 <kshlm> Manikandan, are you here?
12:20:42 * ndevos thinks the Kerberos work got put down in priority, and doubt it'll be ready :-/
12:20:58 <kshlm> ndevos, He's probably not around. You can take it up later.
12:21:03 <kshlm> I'll move to the next topic.
12:21:06 <ndevos> kshlm: yes, move on :)
12:21:09 <kshlm> #topic GlusterFS-3.8
12:21:11 <Manikandan> ndevos, kshlm nope, I could not finish that soon
12:21:14 <kshlm> And you're on.
12:21:22 <kshlm> ndevos, ^
12:21:25 <ndevos> Manikandan: ok, thanks
12:21:28 <Manikandan> ndevos, we should move it to 3.9+1
12:21:42 <ndevos> we did a 3.8.3 release earlier this week
12:22:02 <ndevos> packages are on download.gluster.org and pushed into many distributions' repositories
12:22:25 <post-factum> Arch still has 3.8.1 :/
12:22:28 <ndevos> the announcement was sent yesterday morning, but some people mentioned delays in receiving the email
12:22:46 <kshlm> post-factum, What! That's not very Arch like.
12:22:53 <post-factum> kshlm: true
12:22:56 <ndevos> post-factum: hmm, the arch maintainer should be aware, iirc he is on the packagers list
12:22:58 <kkeithley> do we know who does the packages for Arch?
12:23:10 <kshlm> ndevos, Yup. Lots of delays for mails on the lists.
12:23:13 <post-factum> kkeithley: yep
12:23:17 <hagarth> ndevos, kshlm: any change in testing strategy planned for future releases?
12:23:34 <post-factum> kkeithley: Sergej Pupykin <pupykin.s+arch@gmail.com>
12:23:45 <kshlm> hagarth, We need to automate a lot of it.
12:23:49 <ndevos> hagarth: yes, at one point we should have distaf/Glusto tests
12:24:16 <kshlm> The proposed method of getting maintainers to give an ACK doesn't seem to have worked well.
12:24:19 <hagarth> ndevos,kshlm: I am getting concerned about the consistent dead-on-arrival releases..
12:24:46 <kshlm> It takes too long to get ACKs when we're on schedule.
12:25:08 <ndevos> hagarth: not only you, it is really embarrassing that we do not catch regressions :-(
12:25:22 <kkeithley> and are those "official" Arch packages? The ones we build on Ubuntu Launchpad and in the SuSE Build System are just our own "community" packages, and have no official standing.
12:25:30 <kshlm> hagarth, Yeah. But at least our current release process is working well, so that we can get releases out quickly when something like that happens.
12:25:35 <hagarth> kshlm, ndevos: how do we change this pattern?
12:25:37 <kshlm> kkeithley, Yup.
12:25:49 <nigelb> kshlm: We should talk later on how to make the ack process better.
12:26:06 <post-factum> kkeithley: https://www.archlinux.org/packages/community/x86_64/glusterfs/ community repo as you may see
12:26:07 <hagarth> kshlm: considering the number of users in production .. any unstable release hurts all of us
12:26:11 <ndevos> hagarth: hopefully by adding more tests and having them run automatically...
12:26:41 <kkeithley> the only packages we build that have any sort of official standing are for Fedora and CentOS Storage SIG.
12:27:03 <ndevos> nigelb: one of the difficulties is that only very few maintainers seem to be interested in acking releases, it should not be a manual thing :-/
12:27:03 <hagarth> ndevos: we should evolve a concrete plan .. we do owe the community that
12:27:19 <kkeithley> Everything else is just (or only) for the convenience of the community
12:27:19 <nigelb> ndevos: yeah, I'm happy to help automate the hell out of that.
12:27:43 * justinclift sighs
12:27:44 <hagarth> ndevos: we really need to update the maintainer lifecycle document and have it out
12:28:03 <ndevos> nigelb: we should, and have a testing framework without tests... automating the framework alone will not help
12:28:26 <nigelb> I suggested we change our release process slightly so we have a period where we are no longer merging things into it. Just testing and fixing bugs from said testing.
12:28:34 <nigelb> (On the bug)
12:29:12 <ndevos> nigelb: right, but we *just* changed the release schedule to allow faster releases
12:29:15 <post-factum> and add "Required-backport-to:" tag
12:29:48 <nigelb> ndevos: Let's talk after the meeting. I still think we can do both, but need to delay one release.
12:29:56 <ndevos> hagarth: we should, and I think you started (or wanted to?) document the maintainer addition/recycling/...
12:30:25 <hagarth> ndevos: I am looking for help in writing that one
12:30:41 <kshlm> ndevos, I had started this sometime way back.
12:30:45 <ndevos> hagarth: sharing the etherpad on the maintainers list would be a start :)
12:30:59 <kshlm> I didn't take it forward.
12:31:04 <kshlm> Let's restart.
12:31:10 <hagarth> ndevos: shared it with multiple interested folks several times already :-/
12:31:12 <nigelb> hagarth: I can possibly help. I've seen policies for other projects.
12:31:17 <kshlm> Let's move on to the other topics.
12:31:26 <hagarth> nigelb: will share it with you
12:31:26 <kshlm> This discussion can be had later.
12:31:32 <kshlm> #topic GlusterFS-3.7
12:31:38 <kshlm> Nothing much to report here.
12:31:50 <kshlm> 3.7.15 is still on track for 30-August.
12:32:01 * post-factum has found another memory leak within 3.7 branch and wants some help :(
12:32:11 <kshlm> I've sent out an announcement about this on the lists.
12:32:40 <kshlm> I'll stop merging patches this weekend, to get some time to do some verification of the release.
12:33:12 <kshlm> post-factum, Have you asked on #gluster-dev or gluster-devel@?
12:33:24 <kshlm> Someone should be able to help.
12:33:48 <post-factum> kshlm: yep, Nithya should take a look at it later, but I know kkeithley is interested as well ;)
12:33:56 <kshlm> Cool then.
12:34:10 <kshlm> Do we need to discuss anything else regarding 3.7?
12:34:24 <post-factum> may I ask again for 2 reviews?
12:35:02 <kshlm> post-factum, Reply to the 3.7.15 update mail I sent.
12:35:07 <post-factum> kshlm: I did
12:35:12 <kshlm> I'll take a look.
12:35:25 <kshlm> When did you do it?
12:35:25 <post-factum> kshlm: let me know if mail didn't hit your inbox
12:35:35 <kshlm> I guess it hasn't yet.
12:35:53 <kshlm> I'll take a look at the archives after the meeting.
12:35:56 <post-factum> kshlm: monday, 15:33 EET
12:36:22 <kshlm> I'll move on to 3.6
12:36:26 <kshlm> #topic GlusterFS-3.6
12:36:40 <kshlm> We had another bug screen this week.
12:37:08 <kshlm> We got lots of help, and we screened about half of the bug list.
12:37:21 <kshlm> We'll have 1 more next week and that should be it.
12:37:44 <kshlm> #topic Project Infrastructure
12:37:57 <nigelb> o/
12:38:02 <kshlm> nigelb, Do you have update?
12:38:06 <nigelb> I've got a hacky fix for the NetBSD hangs, but rastar has confirmed that we're doing something bad in Gluster actually causing the hang.
12:38:10 <kshlm> s/update/updates/
12:38:20 <nigelb> Hoping that we nail it down in the next few weeks.
12:38:48 <rastar> I have not proceeded on that today
12:38:48 <nigelb> misc has been working on the cage migration. The last updates are on the mailing list. We'll need a downtime at some point for jenkins and Gerrit.
12:39:00 <rastar> but it surely seems like a hang in a gluster process rather than a setup issue
12:39:42 <nigelb> I've also been working on a prototype for tracking regression failure trends. I'll share it when I have something that looks nearly functional.
12:39:42 <kshlm> Could it be nfs related? The gluster nfs server crashes and hangs the nfs mount.
12:39:52 <nigelb> NFS is my guess as well.
12:40:00 <rastar> kshlm: this one was a fuse mount though
12:40:01 <nigelb> Because the processes are stuck in D state
12:40:04 <rastar> /mnt/glusterfs/0
12:40:11 <ndevos> hey, dont always blame nfs!
12:40:14 <kshlm> rastar, Oh. Okay.
12:40:34 <nigelb> heh
12:40:43 <nigelb> CentOS CI tests should soon be moving to JJB as well. I'm in the process of cleaning up our existing JJB scripts to follow a standard (and writing down this standard).
12:40:48 <kshlm> ndevos, Only just guessing. I've had my laptop hang when I've mistakenly killed the gluster nfs server.
12:40:57 <kshlm> nigelb, Cool!
12:41:02 <ndevos> kshlm: yes, that'll do it
12:41:13 <nigelb> that's it as far as infra updates go :)
12:41:22 <kshlm> Thanks nigelb.
12:41:31 <kshlm> Let's move on to
12:41:37 <kshlm> #topic Ganesha
12:41:56 <kkeithley> 2.4 RC1 was supposed to be tagged last week, but probably will happen this week.
12:42:14 <kkeithley> planned 2-3 weeks of RCs before GA
12:42:30 <kkeithley> Of course it was originally supposed to GA back in February
12:42:59 <kkeithley> that's all unless skoduri is here and wants to add anything
12:43:02 <kkeithley> or jiffin
12:43:07 <kshlm> kkeithley, Woah! So we're actually better with our releases!
12:43:14 <kkeithley> careful
12:43:17 <kkeithley> ;-)
12:43:27 <skoduri> I do not have anything to add
12:43:51 <kshlm> Better with our release schedules, (probably not so much the actual release).
12:44:00 <kshlm> Thanks kkeithley.
12:44:07 <kshlm> #topic Samba
12:44:18 * obnox waves
12:44:23 <kshlm> Hey obnox!
12:44:41 <obnox> samba upstream is preparing for release of samba 4.5.0
12:45:12 <obnox> our gluster-focused people are working on the performance improvements for samba on gluster
12:45:30 <obnox> most notably the generic md-cache improvements from poornima
12:45:58 <obnox> apart from that, work is ongoing regarding generic SMB3 feature implementation in Samba
12:46:08 <obnox> SMB3 multichannel
12:46:23 <obnox> that's my brief update
12:46:34 <kshlm> Thanks obnox.
12:46:52 <kshlm> The md-cache work is really awesome.
12:46:57 <obnox> \o/
12:47:13 <kkeithley> can't wait to get it (the md-cache work)
12:47:16 <obnox> :-)
12:47:19 <kkeithley> for nfs-ganesha
12:47:26 <kshlm> #topic Last weeks AIs
12:47:50 <kshlm> Of the 4 AIs from last week, 3 have been addressed.
12:47:53 <obnox> i forgot: the other items in samba are related to ssl-support and multi-volfile support via samba-gluster-vfs / libgfapi
12:47:57 <obnox> sry
12:48:20 <kshlm> The one remaining I'm carrying forward.
12:48:30 <kshlm> #action pranithk/aravindavk/dblack to send out a reminder about the feature deadline for 3.9
12:48:33 <kkeithley> obnox will buy beer in Berlin to atone
12:48:47 <obnox> kkeithley: :-)
12:48:57 <nigelb> I totally forgot to say, I finished one of the long-standing AIs from this meeting -> We now have Worker Ant for commenting on bugs.
12:48:59 <kshlm> obnox, Yup. I need to review those gfapi ssl changes.
12:49:10 <kshlm> nigelb, Yey!!!!
12:49:29 <kshlm> workerant++
12:49:30 <glusterbot> kshlm: workerant's karma is now 1
12:49:34 <kshlm> nigelb++
12:49:34 <zodbot> kshlm: Karma for nigelb changed to 1 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
12:49:34 <glusterbot> kshlm: nigelb's karma is now 1
12:49:48 <kshlm> #topic Open Floor
12:49:53 <kshlm> Let me check what we have.
12:50:21 <kshlm> kkeithley, Both the topics are yours?
12:50:33 <kkeithley> no, only longevity is mine
12:50:50 <kshlm> Ah the other one is atinms IIRC.
12:51:21 <kshlm> Since you're here kkeithley
12:51:27 <kshlm> #topic Longevity
12:51:43 <kkeithley> not much to say about mine besides what's in the etherpad
12:52:02 <kkeithley> people need to look into where the memory is going
12:52:06 <kkeithley> kinda "name and shame"
12:52:28 <post-factum> yes please :)
12:52:34 <kshlm> Didn't the last cluster you ran, run fine for a long time?
12:52:45 <kshlm> What was that? 3.6 or previous?
12:52:48 <kkeithley> e.g. shd? why is shd growing? The network is solid, there shouldn't be any healing
12:52:53 <kkeithley> this is 3.8.1
12:53:06 <kkeithley> but old numbers are still there to look at
12:53:28 <kkeithley> it ran for a long time.  "fine" is subjective
12:53:53 <kshlm> kkeithley, 1 request about this.
12:54:05 <kkeithley> the workload I'm running may not be as aggressive or as busy as Real World workloads
12:54:08 <kkeithley> yes?
12:54:14 <kshlm> When you take the memory samples, could you also take statedumps?
12:54:32 <kshlm> Would make it much easier to compare.
12:54:52 <kshlm> BTW, glusterd seems to be doing pretty good. :)
12:54:55 <kkeithley> sure, that shouldn't be too hard
12:55:09 <kkeithley> yes, glusterd is better
12:55:12 <kkeithley> than it was
12:55:36 <kkeithley> but it's not doing anything, and it is still growing a bit
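As a side note on the statedump request above, here is a hedged Go sketch of what pairing the periodic memory samples with statedumps could look like. GlusterFS processes dump their state when they receive SIGUSR1, and `gluster volume statedump <VOLNAME>` triggers dumps for a volume's bricks; the volume name, the glusterd PID-file path, and the sampling interval below are assumptions, not part of kkeithley's actual longevity setup.

```go
// Hedged sketch: trigger GlusterFS statedumps alongside memory sampling.
// Assumes the `gluster` CLI is on PATH and that glusterd writes its PID to
// /var/run/glusterd.pid (the path may differ per distribution).
package main

import (
	"log"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"syscall"
	"time"
)

func dumpOnce(volume string) {
	// Bricks of a volume: dumps land in the run directory
	// (typically /var/run/gluster).
	if out, err := exec.Command("gluster", "volume", "statedump", volume).CombinedOutput(); err != nil {
		log.Printf("volume statedump failed: %v: %s", err, out)
	}

	// glusterd (and other gluster daemons) dump state on SIGUSR1.
	pidBytes, err := os.ReadFile("/var/run/glusterd.pid")
	if err != nil {
		log.Printf("could not read glusterd pid: %v", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(pidBytes)))
	if err != nil {
		log.Printf("bad pid file: %v", err)
		return
	}
	if err := syscall.Kill(pid, syscall.SIGUSR1); err != nil {
		log.Printf("SIGUSR1 to glusterd failed: %v", err)
	}
}

func main() {
	for {
		dumpOnce("testvol")       // hypothetical volume name
		time.Sleep(6 * time.Hour) // match the cadence of the memory samples
	}
}
```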
12:55:44 <kshlm> Thank you everyone who helped get it into this shape!
12:55:51 <kshlm> We still got more work to do.
12:56:37 <kshlm> So people (developers and maintainers mainly) please look into the reports kkeithley's longevity cluster is generating,
12:56:42 <kshlm> and get working.
12:56:53 <kshlm> kkeithley, Shall I move on?
12:56:57 <kkeithley> sure
12:57:02 <obnox> where is 'the etherpad' of kkeithley ?
12:57:02 <kshlm> Thanks
12:57:14 <kkeithley> https://public.pad.fsfe.org/p/gluster-community-meetings
12:57:28 <obnox> kkeithley: thanks
12:57:30 <kshlm> obnox, https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/ has the reports.
12:57:35 <obnox> ah tha
12:57:37 <obnox> kshlm: thx
12:57:45 <kshlm> #topic restructuring bugzilla components/subcomponents
12:57:58 <kshlm> Was this about adding new components into bugzilla?
12:58:27 <kshlm> Anyone?
12:58:31 <kkeithley> doesn't atinm want to sync upstream and downstream components?
12:59:01 <kshlm> Wouldn't that be done downstream then?
12:59:01 <kkeithley> he should make a specific proposal?
12:59:29 <atinm> kkeithley, honestly it's in the pipeline but due to lack of time I haven't got a chance to get someone to propose and work on it
12:59:36 <kshlm> Upstream has more fine grained components, and I think it was requested downstream as well.
12:59:51 <atinm> let's not talk about downstream here
13:00:05 <kshlm> atinm, Cool.
13:00:09 <atinm> I think the proposal here was to make the upstream components more fine-grained
13:00:24 <kshlm> atinm, So do you want to discuss that here?
13:00:49 <kshlm> Or will you send a mail to gluster-devel (or maintainers)?
13:01:12 * kshlm notes we are in overtime now.
13:01:22 <atinm> as I said, I still have to work on it and I don't have any concrete data; the AI is on me, but I will try to get a volunteer to do it
13:01:43 <kshlm> Okay. Find a volunteer soon.
13:01:54 <kshlm> If that's it, I'll end the meeting.
13:02:28 <kshlm> #topic Announcements
13:02:31 <kshlm> If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
13:02:31 <kshlm> Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:02:55 <kshlm> (I'm skipping the backport etherpad, as I don't think anyone is using it).
13:03:04 <kshlm> Thanks for attending the meeting everyone.
13:03:09 <kkeithley> yeah, kill it
13:03:13 <kshlm> See you all next week.
13:03:20 <kshlm> #endmeeting