12:00:49 #startmeeting Gluster Community
12:00:50 Meeting started Wed Oct 19 12:00:49 2016 UTC. The chair is kkeithley. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:50 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:50 The meeting name has been set to 'gluster_community'
12:01:06 #topic roll call
12:01:09 who is here?
12:01:10 \o/
12:01:19 * kshlm is here
12:01:30 * Saravanakmr is here
12:01:56 * atinm is partially here
12:02:03 I guess it's been 4 weeks since I was here last time.
12:02:13 * samikshan is here
12:02:15 we missed you kshlm
12:02:16 good to have you back
12:02:29 :)
12:02:53 * obnox waves
12:02:58 Before you ask, I'm volunteering to host the next meeting.
12:03:26 * karthik_us is here
12:03:34 * rjoseph is here
12:03:38 * msvbhat is present
12:03:39 kshlm: you're a gentleman and a scholar
12:03:57 quite a lot of ppl this time
12:04:14 one more minute to let people arrive, then we'll start
12:05:13 #topic: host for next week
12:05:16 is kshlm
12:05:19 thanks kshlm
12:05:32 #topic Gluster 4.0
12:05:59 i saw the glusterd 2.0 update on the ml
12:06:10 * jiffin is here
12:06:14 Yup.
12:06:23 got a link?
12:06:26 I did a 2nd preview release.
12:06:47 #link http://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
12:06:47 https://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
12:07:04 This release has gRPC and the ability to start bricks.
12:07:11 #info https://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
12:07:24 Unfortunately our volfile generation isn't correct so the bricks don't start.
12:07:32 hmm
12:07:47 I'll be working on fixing that in the coming week.
12:08:09 I'll also be creating docker images and vagrantfiles to make it easier to test/develop GD2.
12:08:35 ppai is working on embedding etcd. This will make it easier to deploy GD2.
12:08:49 That's all the updates from this week.
12:09:25 jdarcy just joined. any updates jeff?
12:10:36 None really. Been working on some of the underpinnings for brick multiplexing.
12:10:53 any news on memory management?
12:11:49 Yes, but let's defer that until later.
12:12:03 okay, moving on
12:12:09 #topic Gluster 3.9
12:13:06 pranithk, aravindak: ???
12:13:13 this needs a serious discussion
12:13:31 atinm: please proceed
12:13:45 Is there any deadline we have in mind for releasing it? this can't wait forever
12:14:12 30 September, wasn't it?
12:14:31 that's history :)
12:15:13 how about setting a deadline and doing a constant follow-up on the maintainers list for the ack for every component
12:15:39 we need a benevolent dictator for that
12:16:05 or at least a dedicated release-engineer/manager.
12:16:28 We have a benevolent triumvirate: pranithk, aravindak, and dblack.
12:16:43 and none of those are here :)?
12:16:44 All of whom are missing here.
12:17:17 atinm, Would you be willing to raise a storm about this on the mailing lists? :)
12:17:56 kshlm, I have already passed on my concerns to pranithk
12:18:33 fwiw, there's a backport of gfapi/glfs_upcall* that needs to be merged into 3.9 before 3.9 can ship, so gfapi as a component isn't ready
12:18:37 kshlm, having said that I will see what best I can offer from my end
12:18:49 atinm, in private? It would be good if it's out in the open.
12:19:00 give them a poke. Be gentle
12:19:28 but not too gentle
12:19:33 this needs to get done
12:19:53 shall we have an action item on atinm to poke pranithk and aravindak?
12:20:20 kkeithley, go ahead
12:20:51 #action atinm to poke 3.9 release mgrs to finish and release
12:21:03 here is our dictator!
12:21:15 #topic Gluster 3.8
12:21:43 ndevos: Any news
12:21:45 ?
12:21:46 ndevos is away. But he provided an update on the maintainers list.
12:22:05 "The 3.8.5 release is planned to get announced later today. 3.8.6 should be following the normal schedule of approx. 10th of November."
12:22:30 #link https://www.gluster.org/pipermail/maintainers/2016-October/001562.html
12:22:32 #info The 3.8.5 release is planned to get announced later today. 3.8.6 should be following the normal schedule of approx. 10th of November.
12:22:40 #info https://www.gluster.org/pipermail/maintainers/2016-October/001562.html
12:22:44 is #link a thing?
12:22:58 yup, it is
12:22:58 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:23:13 moving on
12:23:18 #topic Gluster 3.7
12:23:27 kshlm: your turn
12:23:33 I finally announced 3.7.16 yesterday.
12:23:54 It was tagged early this month, on time. But the announcement got delayed due to various reasons.
12:24:20 #link https://www.gluster.org/pipermail/gluster-devel/2016-October/051187.html
12:24:23 and we're looking for someone to take the reins for 3.7.17, correct?
12:24:30 Yeah.
12:24:37 So any volunteers?
12:24:58 we'll pause so everyone can look away and shuffle their feet
12:25:45 I'll ask again on the mailing lists.
12:25:59 kshlm: I'll try it. You'd need to mentor me or something. :D
12:26:25 samikshan, I can do that.
12:26:32 excellent, thanks
12:26:47 anything else for 3.7?
12:27:03 Cool. Okay that's settled then at last :)
12:27:06 Just that .17 is still on target for the 30th.
12:27:14 kewl
12:27:20 next....
12:27:26 #topic Gluster 3.6
12:27:26 samikshan or I will send out a reminder early next week.
12:27:46 Nothing here.
12:27:48 #action kshlm or samikshan will send 3.7.17 reminder
12:28:04 Just counting down the days till 3.9 appears.
12:28:07 okay, 3.6? Probably nothing to report
12:28:35 #topic Infrastructure
12:28:42 kkeithley, you can move on.
12:28:44 misc, nigelb: are you in the building?
12:28:51 kkeithley: yes, I am
12:29:15 nothing to announce that would be relevant to the rest of the community
12:29:25 (I mean, i wrote docs, moved and patched stuff)
12:29:45 okay
12:29:54 not sure about nigelb, he may have more
12:30:10 we'll back up if he shows up
12:30:19 #topic: nfs-ganesha
12:31:00 issues firing up in the bakeathon :)
12:31:02 Red Hat is hosting the NFS Bake-a-thon in Westford this week. skoduri and I are testing the bleeding edge bits and have found a couple of bugs.
12:31:20 Actually it was Sun/Oracle that found them.
12:31:36 right.. and even steved (redhat client) as well
12:31:38 2.4.1 will GA after the Bake-a-Thon
12:31:40 issue with ACLs
12:32:23 yup
12:32:26 okay, next
12:32:30 #topic Samba
12:32:40 obnox: are you lurking?
12:33:04 no, guess not
12:33:26 yes
12:33:37 proceed
12:33:48 please proceed
12:33:54 yeah
12:34:16 samba upstream: 4.5.1 will be released next week
12:34:28 along with 4.4.7
12:34:35 gluster related:
12:34:59 the md-cache improvements are almost done in gluster master, one last patch under review.
12:35:30 this speeds up samba metadata-heavy workloads a *lot* on gluster
12:35:40 (readdir and create/small file)
12:35:46 that's excellent news
12:36:06 #info md-cache improvements speed up samba metadata-heavy workloads a *lot* on gluster
12:36:09 will the md-cache improvements affect fuse users?
12:36:13 other scenarios are affected very much as well
12:36:21 post-factum: actually yes
12:36:26 rising tide lifts all boats
12:36:42 i just think that samba benefits most due to its heavy use of xattrs and metadata
12:36:43 will that be backported into stable branches?
12:37:02 not 3.8 probably, but that's not my call
12:37:07 3.9 I can imagine
12:37:32 but currently sanity testing is done with these changes on 3.8 to rule out regressions
12:37:45 because it virtually affects every setup
12:37:49 is 3.8 lts?
12:37:58 yes
12:38:00 poornima could detail more, but she's not here today
12:38:12 but my take is not to backport them to 3.8
12:38:26 already a few issues have been found and they are being RCA'd
12:38:31 that is why i'm asking. 3.9 then
12:38:51 atinm: agreed, some people are testing on 3.8 anyway, without the intention to do an official upstream backport ;-)
12:39:19 other things: multi-volfile support has been merged into the glusterfs vfs-module in samba upstream
12:39:27 by rtalur
12:39:51 and then there is the memory issue
12:39:56 is everybody aware?
12:40:03 which one :)?
12:40:18 i'll attempt a very brief summary:
12:40:44 samba is a multi-process architecture: each tcp connection forks a new dedicated child smbd process
12:40:56 the connection to the gluster volume is done by the child process
12:41:09 oh, i've reported that issue twice here, iirc
12:41:30 Sort of the client-side version of what multiplexing addresses on the server side.
12:41:32 that means when the glusterfs vfs module is used, each process loads libgfapi and thereby the whole client xlator stack
12:41:54 with the memory use for an instance ranging between 200 and 500 MB
12:42:03 so you have a solution?
12:42:08 we can only serve 2-3 clients per GB of free RAM :-/
12:42:19 as opposed to 200 clients / GB with vanilla
12:42:33 the solution would result in a proxy of sorts
12:42:52 like fuse but not fuse
12:42:58 * obnox is thinking about prototyping one from samba's vfs code
12:43:22 I'm trying to remember what's different between this and GFProxy.
12:43:23 post-factum: indeed using fuse instead of libgfapi is such a proxy, but there were reasons originally to move to vfs
12:43:42 obnox, because vfs should introduce much less overhead, obviously
12:43:47 * shyam throws in the thought of AHA + GFProxy presented at the summit
12:43:55 jdarcy: gfproxy is on the server (does not need to be, maybe)
12:44:03 shyam, do you have video/slides?
12:44:07 * shyam and realizes jdarcy has already done that...
12:44:08 so, this is going into a discussion...
12:44:33 i suggest taking the discussion out of the community meeting
12:44:37 unless you want to have it now
12:44:42 this is the status
12:45:08 post-factum: gfproxy is done by fb and not (yet) complete or published
12:45:10 So the AI would be to cue up a separate discussion?
12:45:32 jdarcy: yeah. continuing the discussion from the bof at the summit
12:45:34 post-factum: GDC videos and slides are being (post) processed, slides should land here: http://www.slideshare.net/GlusterCommunity/presentations
12:45:49 #action obnox to start discussion of Samba memory solutions
12:45:56 shyam, thanks!
12:46:20 shall we move on?
12:46:25 i hope Amye will post the links to the ML :)
12:46:27 kkeithley: yes. thanks
12:46:33 #topic Heketi
12:46:50 will lpabon or someone else give status?
12:46:58 is lpabon lurking?
12:47:03 i can, let me think...
12:47:12 we are currently preparing version 3 of heketi
12:47:27 it will be announced this week (currently waiting on finishing documentation)
12:47:39 this carries quite a few bug fixes and enhancements
12:47:57 it is now possible to run heketi directly in kubernetes.
12:48:42 some error reports when creating gluster volumes have been clarified, and it is now possible to create only replicate (w/o distribute) volumes
12:48:56 just some examples.
12:49:41 this version of heketi will be used in conjunction with kubernetes 1.4 and openshift 3.4 as a dynamic provider of persistent storage volumes for the containers
12:50:01 obnox: Are there any Heketi asks/needs from core Gluster? If so, where can I find them?
12:50:11 this is a tremendous enabler of gluster in the cloud
12:50:28 shyam: in the near future, i think.
12:50:58 great news. anything else on Heketi?
12:50:59 shyam: for the next generation, we are thinking about using block storage / iscsi. i think pkalever is working on that
12:51:11 kkeithley: that's it from my end
12:51:23 time check: 9 minutes remaining....
12:51:27 post-factum, yep, I will post links this week.
12:51:27 moving on
12:51:45 #topic last week's action items
12:51:46 shyam: but we should take the AI to create a place to communicate on the intersection between heketi and gluster proper
12:52:09 rastar/ndevos/jdarcy to improve cleanup to control the processes that the test starts.
12:52:21 amye, thanks!
12:52:54 * obnox signs off -- next meeting coming up
12:53:01 obnox: AI in progress, being discussed at the maintainers meeting later in the day (basically planning pages, hope that solves this problem)
12:53:15 guess no, leaving open for next week
12:53:18 Haven't done anything on that front myself, nor heard from rastar/ndevos.
12:53:29 okay
12:53:43 unassigned: document RC tagging guidelines in release steps document
12:53:51 anyone do anything with this?
12:54:35 I guess I'll take this since it affects building Fedora packages.
12:54:45 #action kkeithley to document RC tagging guidelines in release steps document
12:54:54 ditto for the other one
12:54:59 moving on
12:55:17 #topic open floor
12:55:20 dbench in smoke tests - lots of spurious failures, little/no info to debug
12:55:36 jdarcy, that was you I believe
12:55:58 Yeah. Should we keep it as part of smoke, or ditch it?
12:56:13 It doesn't seem to be helping us in any way AFAICT.
12:56:58 It's not in our source tree or packages or vagrant boxes, so developers running it themselves is a pain.
12:57:13 It provides practically no diagnostic output when it fails.
12:57:29 Additional context, bug for the same here: https://bugzilla.redhat.com/show_bug.cgi?id=1379228
12:58:21 doesn't sound very useful to me the way it is now
12:58:39 how's that for a run-on sentence
12:59:16 One of the concerns is why it is failing more now; is there an issue in the code that got introduced?
12:59:28 When it's debugged, it might be useful. Until then, I propose reverting whichever change added it.
12:59:41 is it voting?
12:59:46 Excellent point, shyam. Its addition seems to be rather unilateral.
12:59:51 jdarcy: To my reading this was there from the ancient past...
13:00:02 kkeithley: It causes smoke to fail, changing smoke's vote.
13:00:49 I've had, and seen, patches delayed unnecessarily because of this.
13:01:58 jdarcy: Agreed, it is causing more pain at present
13:01:59 okay, doesn't seem like we can resolve it here. Can it be discussed in IRC or email after the meeting?
13:02:12 I'll start an email thread.
13:02:28 thanks
13:03:26 we're past the hour. Looks like there's an announcement that the Go bindings to gfapi are moving to github/gluster. If there's more to say about that we'll come back to it after jdarcy's memory topic
13:03:34 jdarcy: you have the floor
13:03:44 Nothing else for now.
13:03:55 #action: jdarcy to discuss dbench smoke test failures via email
13:04:00 We can defer the memory stuff to email.
13:04:06 (Kind of already there)
13:04:06 okay
13:04:35 go bindings? anything to say, kshlm?
13:04:57 I just wanted to bring it back to everyone's notice.
13:05:12 #info Go bindings to gfapi are moving to github/gluster
13:05:17 I moved it to the Gluster organization. It was under my name till now.
13:05:18 more good news.
13:05:45 #topic recurring topics
13:05:47 I'll send out a mailing list announcement about it soon.
13:06:04 #action kshlm to send email about go bindings to gfapi
13:06:08 If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
13:06:08 Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:06:18 anything else?
13:06:24 motion to adjourn?
13:06:32 * shyam seconded
13:06:48 going once
13:06:54 going twice!
13:07:01 three.....
13:07:04 #endmeeting
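
For reference on the Samba memory issue obnox summarized in the Samba topic (each forked smbd child loading libgfapi and with it the whole client xlator stack): below is a minimal C sketch of the per-process libgfapi lifecycle that the glusterfs VFS module effectively runs in every child. The volume name, server host, and log path are illustrative placeholders, not values from the meeting; the point is only that glfs_init() builds a full client graph inside each process, which is where the 200-500 MB per connection figure comes from.

    /* Minimal sketch (compile with: cc demo.c -lgfapi) of the libgfapi
     * lifecycle each smbd child goes through when vfs_glusterfs is in use. */
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        /* One glfs instance per process, i.e. per SMB TCP connection. */
        glfs_t *fs = glfs_new("myvol");                   /* placeholder volume name */
        if (fs == NULL)
            return 1;

        glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007);  /* placeholder host */
        glfs_set_logging(fs, "/var/log/samba/gfapi.log", 7);        /* placeholder path */

        /* glfs_init() fetches the volfile and instantiates the entire client
         * xlator stack in this process -- the per-connection memory cost
         * discussed in the meeting. */
        if (glfs_init(fs) != 0) {
            glfs_fini(fs);
            return 1;
        }

        /* ... the child would now serve SMB requests against the volume ... */

        glfs_fini(fs);
        return 0;
    }

A FUSE mount or a proxy of the kind obnox mentioned avoids this by keeping a single client graph shared by all connections instead of one per child.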