12:00:43 #startmeeting Weekly Community Meeting 24/Aug/2015
12:00:43 Meeting started Wed Aug 24 12:00:43 2016 UTC. The chair is kshlm. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:43 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:43 The meeting name has been set to 'weekly_community_meeting_24/aug/2015'
12:00:53 #topic Rollcall
12:00:57 Welcome everyone!
12:01:01 o/
12:01:09 \o post-factum
12:01:11 * jiffin is here
12:01:18 * cloph 's just lurking..
12:01:28 Giving a couple of minutes for people to filter in
12:01:33 * skoduri is here
12:01:35 pad seems to be sloooow/or even unreachable for me..
12:02:01 504 gw timeout..
12:02:28 * obnox is here
12:02:48 cloph, Me too.
12:02:55 Stuck on loading.
12:03:14 * ankitraj here
12:03:19 Bah refreshed just as it loaded.
12:03:24 now it did load
12:04:06 Loaded for me as well.
12:04:12 * rastar is here, late
12:04:15 * anoopcs is here
12:04:17 * kkeithley is late too
12:04:17 Welcome everyone. And welcome cloph!
12:04:21 * karthik_ is here
12:04:22 Let's start
12:04:33 #topic Next week's meeting host
12:04:38 Hey hagarth!
12:04:47 kshlm: hey!
12:04:52 Okay, so who wants to host next week's meeting?
12:05:18 We need a volunteer.
12:05:48 Anyone?
12:05:56 I will.
12:06:01 Cool.
12:06:05 Thanks rastar.
12:06:15 #info rastar will host next week's meeting
12:06:19 that's a good way to force myself to attend the meeting too :)
12:06:25 Let's move on.
12:06:30 rastar, Agreed.
12:06:42 #topic GlusterFS-4.0
12:06:44 * rjoseph joins late
12:06:56 Do we have updates for 4.0?
12:07:29 I don't see anyone apart from me around. So I'll go.
12:08:01 In GlusterD-2.0 land, last week I hit a problem with the transaction framework.
12:08:21 I was testing out solutions for it, and I've landed on my choice.
12:09:03 I was having trouble sending random data-structs as json and rebuilding the struct from json.
12:09:27 Instead of sending it on the wire, I'll just use the etcd store to share these bits of info.
12:09:44 I'm working on cleaning up the change to push it for review.
12:10:05 Also, last week I found that etcd now allows embedding itself in other apps.
12:10:24 I will check out how it's gonna work in the weeks to come.
12:10:38 This should allow very easy deployments.
12:10:47 (FYI: https://www.gluster.org/community/roadmap/4.0/ → glusterD 2.0 features page ( http://www.gluster.org/community/documentation/index.php/Features/thousand-node-glusterd is 404/documentation moved))
12:11:26 cloph, Yeah, we shut down the old wiki.
12:11:46 This GlusterD-2.0 bit now lives in the gluster-specs repository
12:12:21 https://github.com/gluster/glusterfs-specs
12:12:26 It's glusterfs-specs
12:12:34 thx
12:12:56 Does anyone else have stuff to add about 4.0 projects?
12:13:13 Yep
12:13:19 justinclift, Go on.
12:13:35 Is there an ETA for when people will be able to try something out? even extremely alpha? :D
12:14:06 For GD2 I'm aiming for something to be available at summit.
12:14:20 For DHT2 and JBR, I'm not sure.
12:14:33 Things have been progressing very (very) slowly.
12:14:50 2016 is unlikely for something other devs could try out?
12:14:53 hagarth, Do you have anything to say about this?
12:14:55 DHT2 that is
12:15:24 kshlm: not at the moment
12:15:41 justinclift, I honestly don't know what state it is in. I just know that POCs were done.
12:15:43 justinclift: I think JBR is close to being consumable.. DHT2 is still some time away
12:15:53 k
12:16:01 Please continue :)
12:16:07 Thanks hagarth and justinclift
12:16:11 Let's get on.
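For context on the embedded-etcd idea mentioned in the GD2 update above: below is a minimal sketch of what embedding etcd inside another Go process can look like, following the shape of the example in the etcd `embed` package (github.com/coreos/etcd/embed). How GlusterD-2.0 actually wires this in, and the exact configuration it would use, are assumptions here and not part of the meeting.

```go
// A minimal sketch of embedding etcd in a Go daemon, based on the etcd
// "embed" package's documented example. The data directory name and the
// use of defaults are illustrative assumptions, not GD2's real config.
package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "gd2-etcd-data" // data directory for the embedded store (hypothetical name)

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
		// The embedded etcd server is up. From here a daemon could use
		// the regular etcd client API against it, e.g. to share
		// transaction state through the store instead of shipping
		// JSON-encoded structs over the wire.
		log.Println("embedded etcd is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop() // trigger a shutdown if startup hangs
		log.Fatal("embedded etcd took too long to start")
	}

	// Block until the embedded server exits with an error.
	log.Fatal(<-e.Err())
}
```

Since the etcd server runs inside the same process, a deployment would not need a separately managed etcd cluster, which is the "very easy deployments" point made above.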
12:16:23 #topic GlusterFS-3.9
12:16:46 Neither of the release-maintainers is around today.
12:16:58 So just reiterating what was said last week.
12:17:05 nothing much, AI on me to send reminder to mailing list, not done yet
12:17:19 Oh cool! aravindavk_ is here.
12:17:49 Yup. The mail is to notify that 3.9 will be entering a stability phase at the end of August.
12:18:09 So any new feature that needs to make it in should be merged in before that.
12:18:36 After that, only bug-fixes will be allowed, to allow some time for the release to stabilize.
12:18:45 aravindavk_, Please send that out ASAP.
12:18:47 do we have any patches that need review attention before the end of August?
12:19:01 kshlm: will send tomorrow
12:19:13 hagarth, We should get to know once the announcement goes out.
12:19:18 kshlm: right
12:19:39 aravindavk_, As soon as you can. We have a week left.
12:19:46 Shall I move on?
12:19:47 Manikandan: do you think the SELinux xlator can be ready fast enough? otherwise we'll move it to 3.9+1
12:20:19 Manikandan, are you here?
12:20:42 * ndevos thinks the Kerberos work got put down in priority, and doubts it'll be ready :-/
12:20:58 ndevos, He's probably not around. You can take it up later.
12:21:03 I'll move to the next topic.
12:21:06 kshlm: yes, move on :)
12:21:09 #topic GlusterFS-3.8
12:21:11 ndevos, kshlm nope, I could not finish that soon
12:21:14 And you're on.
12:21:22 ndevos, ^
12:21:25 Manikandan: ok, thanks
12:21:28 ndevos, we should move it to 3.9+1
12:21:42 we did a 3.8.3 release earlier this week
12:22:02 packages are on download.gluster.org and pushed into many distributions' repositories
12:22:25 Arch still has 3.8.1 :/
12:22:28 the announcement was sent yesterday morning, but some people mentioned delays in receiving the email
12:22:46 post-factum, What! That's not very Arch-like.
12:22:53 kshlm: true
12:22:56 post-factum: hmm, the arch maintainer should be aware, iirc he is on the packagers list
12:22:58 do we know who does the packages for Arch?
12:23:10 ndevos, Yup. Lots of delays for mails on the lists.
12:23:13 kkeithley: yep
12:23:17 ndevos, kshlm: any change in testing strategy planned for future releases?
12:23:34 kkeithley: Sergej Pupykin
12:23:45 hagarth, We need to automate a lot of it.
12:23:49 hagarth: yes, at one point we should have distaf/Glusto tests
12:24:16 The proposed method of getting maintainers to give an ACK doesn't seem to have worked well.
12:24:19 ndevos, kshlm: I am getting concerned about the consistent dead-on-arrival releases..
12:24:46 It takes too long to get ACKs when we're on schedule.
12:25:08 hagarth: not only you, it is really embarrassing that we do not catch regressions :-(
12:25:22 and are those "official" Arch packages? The ones we build in Ubuntu Launchpad and the SuSE Build System are just our own "community" packages, and have no official standing.
12:25:30 hagarth, Yeah. But at least our current release process is working well, so that we can get releases out quickly when something like that happens.
12:25:35 kshlm, ndevos: how do we change this pattern?
12:25:37 kkeithley, Yup.
12:25:49 kshlm: We should talk later on how to make the ack process better.
12:26:06 kkeithley: https://www.archlinux.org/packages/community/x86_64/glusterfs/ community repo as you may see
12:26:07 kshlm: considering the number of users in production .. any unstable release hurts all of us
12:26:11 hagarth: hopefully by adding more tests and having them run automatically...
12:26:41 the only packages we build that have any sort of official standing are for Fedora and the CentOS Storage SIG.
12:27:03 nigelb: one of the difficulties is that only very few maintainers seem to be interested in acking releases, it should not be a manual thing :-/
12:27:03 ndevos: we should evolve a concrete plan .. we do owe the community that
12:27:19 Everything else is just (or only) for the convenience of the community
12:27:19 ndevos: yeah, I'm happy to help automate the hell out of that.
12:27:43 * justinclift sighs
12:27:44 ndevos: we really need to update the maintainer lifecycle document and have it out
12:28:03 nigelb: we should, and have a testing framework without tests... automating the framework alone will not help
12:28:26 I suggested we change our release process slightly so we have a period where we are no longer merging things into it. Just testing and fixing bugs from said testing.
12:28:34 (On the bug)
12:29:12 nigelb: right, but we *just* changed the release schedule to allow faster releases
12:29:15 and add "Required-backport-to:" tag
12:29:48 ndevos: Let's talk after the meeting. I still think we can do both, but need to delay one release.
12:29:56 hagarth: we should, and I think you started (or wanted to?) document the maintainer addition/recycling/...
12:30:25 ndevos: I am looking for help in writing that one
12:30:41 ndevos, I had started this some time back.
12:30:45 hagarth: sharing the etherpad on the maintainers list would be a start :)
12:30:59 I didn't take it forward.
12:31:04 Let's restart.
12:31:10 ndevos: shared it with multiple interested folks several times already :-/
12:31:12 hagarth: I can possibly help. I've seen policies for other projects.
12:31:17 Let's move on to the other topics.
12:31:26 nigelb: will share it with you
12:31:26 This discussion can be had later.
12:31:32 #topic GlusterFS-3.7
12:31:38 Nothing much to report here.
12:31:50 3.7.15 is still on track for 30-August.
12:32:01 * post-factum has found another memory leak within the 3.7 branch and wants some help :(
12:32:11 I've sent out an announcement about this on the lists.
12:32:40 I'll stop merging patches this weekend, to get some time to do some verification of the release.
12:33:12 post-factum, Have you asked on #gluster-dev or gluster-devel@?
12:33:24 Someone should be able to help.
12:33:48 kshlm: yep, Nithya should take a look at it later, but I know kkeithley is interested as well ;)
12:33:56 Cool then.
12:34:10 Do we need to discuss anything else regarding 3.7?
12:34:24 may I ask again for 2 reviews?
12:35:02 post-factum, Reply to the 3.7.15 update mail I sent.
12:35:07 kshlm: I did
12:35:12 I'll take a look.
12:35:25 When did you do it?
12:35:25 kshlm: let me know if mail didn't hit your inbox
12:35:35 I guess it hasn't yet.
12:35:53 I'll take a look at the archives after the meeting.
12:35:56 kshlm: Monday, 15:33 EET
12:36:22 I'll move on to 3.6
12:36:26 #topic GlusterFS-3.6
12:36:40 We had another bug screen this week.
12:37:08 We got lots of help, and we screened about half of the bug list.
12:37:21 We'll have 1 more next week and that should be it.
12:37:44 #topic Project Infrastructure
12:37:57 o/
12:38:02 nigelb, Do you have update?
12:38:06 I've got a hacky fix for the NetBSD hangs, but rastar has confirmed that we're doing something bad in Gluster actually causing the hang.
12:38:10 s/update/updates/
12:38:20 Hoping that we nail it down in the next few weeks.
12:38:48 I have not proceeded on that today
12:38:48 misc has been working on the cage migration.
The last updates are on the mailing list. We'll need a downtime at some point for jenkins and Gerrit.
12:39:00 but it surely seems like a hang in a gluster process rather than a setup issue
12:39:42 I've also been working on a prototype for tracking regression failure trends. I'll share it when I have something that looks nearly functional.
12:39:42 Could it be nfs related? The gluster nfs server crashes and hangs the nfs mount.
12:39:52 NFS is my guess as well.
12:40:00 kshlm: this one was a fuse mount though
12:40:01 Because the processes are stuck in D state
12:40:04 /mnt/glusterfs/0
12:40:11 hey, don't always blame nfs!
12:40:14 rastar, Oh. Okay.
12:40:34 heh
12:40:43 Centos CI tests should soon be moving to JJB as well. I'm in the process of cleaning up our existing JJB scripts to follow a standard (and writing down this standard).
12:40:48 ndevos, Only just guessing. I've had my laptop hang when I've mistakenly killed the gluster nfs server.
12:40:57 nigelb, Cool!
12:41:02 kshlm: yes, that'll do it
12:41:13 that's it as far as infra updates go :)
12:41:22 Thanks nigelb.
12:41:31 Let's move on to
12:41:37 #topic Ganesha
12:41:56 2.4 RC1 was supposed to be tagged last week, but probably will happen this week.
12:42:14 planned 2-3 weeks of RCs before GA
12:42:30 Of course it was originally supposed to GA back in February
12:42:59 that's all unless skoduri is here and wants to add anything
12:43:02 or jiffin
12:43:07 kkeithley, Woah! So we're actually better with our releases!
12:43:14 careful
12:43:17 ;-)
12:43:27 I do not have anything to add
12:43:51 Better with our release schedules (probably not so much the actual release).
12:44:00 Thanks kkeithley.
12:44:07 #topic Samba
12:44:18 * obnox waves
12:44:23 Hey obnox!
12:44:41 samba upstream is preparing for release of samba 4.5.0
12:45:12 our gluster-focused people are working on the performance improvements for samba on gluster
12:45:30 most notably the generic md-cache improvements of poornima
12:45:58 apart from that, work is ongoing regarding generic SMB3 feature implementation in Samba
12:46:08 SMB3 multichannel
12:46:23 that's my brief update
12:46:34 Thanks obnox.
12:46:52 The md-cache work is really awesome.
12:46:57 \o/
12:47:13 can't wait to get it (the md-cache work)
12:47:16 :-)
12:47:19 for nfs-ganesha
12:47:26 #topic Last week's AIs
12:47:50 Of the 4 AIs from last week, 3 have been addressed.
12:47:53 i forgot: the other items in samba are related to ssl-support and multi-volfile support via samba-gluster-vfs / libgfapi
12:47:57 sry
12:48:20 The one remaining I'm carrying forward.
12:48:30 #action pranithk/aravindavk/dblack to send out a reminder about the feature deadline for 3.9
12:48:33 obnox will buy beer in Berlin to atone
12:48:47 kkeithley: :-)
12:48:57 I totally forgot to say, I finished one of the long standing AIs from this meeting -> We now have Worker Ant for commenting on bugs.
12:48:59 obnox, Yup. I need to review those gfapi ssl changes.
12:49:10 nigelb, Yey!!!!
12:49:29 workerant++
12:49:30 kshlm: workerant's karma is now 1
12:49:34 nigelb++
12:49:34 kshlm: Karma for nigelb changed to 1 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
12:49:34 kshlm: nigelb's karma is now 1
12:49:48 #topic Open Floor
12:49:53 Let me check what we have.
12:50:21 kkeithley, Both the topics are yours?
12:50:33 no, only longevity is mine
12:50:50 Ah, the other one is atinm's IIRC.
12:51:21 Since you're here kkeithley
12:51:27 #topic Longevity
12:51:43 not much to say about mine besides what's in the etherpad
12:52:02 people need to look into where the memory is going
12:52:06 kinda "name and shame"
12:52:28 yes please :)
12:52:34 Didn't the last cluster you ran run fine for a long time?
12:52:45 What was that? 3.6 or previous?
12:52:48 e.g. shd? why is shd growing? The network is solid, there shouldn't be any healing
12:52:53 this is 3.8.1
12:53:06 but old numbers are still there to look at
12:53:28 it ran for a long time. "fine" is subjective
12:53:53 kkeithley, 1 request about this.
12:54:05 the workload I'm running may not be as aggressive or as busy as real-world workloads
12:54:08 yes?
12:54:14 When you take the memory samples, could you also take statedumps?
12:54:32 Would make it much easier to compare.
12:54:52 BTW, glusterd seems to be doing pretty good. :)
12:54:55 sure, that shouldn't be too hard
12:55:09 yes, glusterd is better
12:55:12 than it was
12:55:36 but it's not doing anything, and it is still growing a bit
12:55:44 Thank you everyone who helped get it into this shape!
12:55:51 We've still got more work to do.
12:56:37 So people (developers and maintainers mainly) please look into the reports kkeithley's longevity cluster is generating,
12:56:42 and get working.
12:56:53 kkeithley, Shall I move on?
12:56:57 sure
12:57:02 where is 'the etherpad' of kkeithley ?
12:57:02 Thanks
12:57:14 https://public.pad.fsfe.org/p/gluster-community-meetings
12:57:28 kkeithley: thanks
12:57:30 obnox, https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/ has the reports.
12:57:35 ah tha
12:57:37 kshlm: thx
12:57:45 #topic restructuring bugzilla components/subcomponents
12:57:58 Was this about adding new components into bugzilla?
12:58:27 Anyone?
12:58:31 doesn't atinm want to sync upstream and downstream components?
12:59:01 Wouldn't that be done downstream, then?
12:59:01 he should make a specific proposal?
12:59:29 kkeithley, honestly it's in the pipeline but due to lack of time I haven't got a chance to get someone to propose and work on it
12:59:36 Upstream has more fine-grained components, and I think it was requested downstream as well.
12:59:51 let's not talk about downstream here
13:00:05 atinm, Cool.
13:00:09 I think the proposal here was to fine-grain upstream in a better way
13:00:24 atinm, So do you want to discuss that here?
13:00:49 Or will you send a mail to gluster-devel (or maintainers)?
13:01:12 * kshlm notes we are in overtime now.
13:01:22 as I said, I still have to work on it and I don't have any concrete data, although the AI is on me but I will try to get a volunteer to do it
13:01:43 Okay. Find a volunteer soon.
13:01:54 If that's it, I'll end the meeting.
13:02:28 #topic Announcements
13:02:31 If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
13:02:31 Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:02:55 (I'm skipping the backport etherpad, as I don't think anyone is using it).
13:03:04 Thanks for attending the meeting everyone.
13:03:09 yeah, kill it
13:03:13 See you all next week.
13:03:20 #endmeeting
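As a rough illustration of the request above to pair memory samples with statedumps: a minimal sketch, assuming the default GlusterFS behaviour that a process writes a statedump under /var/run/gluster/ when it receives SIGUSR1, and Linux's /proc filesystem for the RSS reading. This is not the tooling the longevity cluster actually uses; names and paths are illustrative.

```go
// Sketch: record a gluster process's resident memory and ask it for a
// statedump at the same moment, so the two can later be compared side by
// side. Assumes Linux and the default SIGUSR1 statedump behaviour.
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
	"syscall"
	"time"
)

// vmRSS returns the "VmRSS: ... kB" line from /proc/<pid>/status.
func vmRSS(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "VmRSS:") {
			return strings.TrimSpace(line), nil
		}
	}
	return "", fmt.Errorf("VmRSS not found for pid %d", pid)
}

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <gluster-process-pid>", os.Args[0])
	}
	pid, err := strconv.Atoi(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}

	rss, err := vmRSS(pid)
	if err != nil {
		log.Fatal(err)
	}
	// Log the memory sample with a timestamp so it can be lined up against
	// the statedump triggered below.
	fmt.Printf("%s pid=%d %s\n", time.Now().Format(time.RFC3339), pid, rss)

	// SIGUSR1 asks the gluster process to write a statedump
	// (by default under /var/run/gluster/).
	if err := syscall.Kill(pid, syscall.SIGUSR1); err != nil {
		log.Fatal(err)
	}
}
```

Run once per sampling interval against each daemon of interest (shd, glusterd, brick processes), and keep the printed samples alongside the generated statedump files.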