12:02:07 #startmeeting Weekly community meeting 25/May/2016
12:02:07 Meeting started Wed May 25 12:02:07 2016 UTC. The chair is rastar. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:07 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:02:07 The meeting name has been set to 'weekly_community_meeting_25/may/2016'
12:02:23 #topic Rollcall
12:02:29 o/
12:02:33 o/
12:02:42 * anoopcs is here
12:02:46 * Saravanakmr here
12:02:57 * jiffin is here
12:03:01 * karthik___ is here
12:03:12 * ndevos _o/
12:03:22 * jdarcy \o
12:03:29 awesome, we have a good number of participants
12:03:53 * partner late :/
12:04:04 partner: just in time
12:04:10 partner: we are doing roll call
12:04:22 ok then, next topic
12:04:24 ✋
12:04:48 #topic Next week's meeting host
12:04:50 jdarcy, That's cool.
12:05:10 jdarcy: I wish you had raised your hand after I asked that question :)
12:05:12 * skoduri is here
12:05:26 rastar: Yeah, dodged a bullet there.
12:05:39 anyway, I am unsure of my availability next week..
12:05:50 kshlm is out next week too
12:05:58 anyone want to volunteer
12:06:00 ?
12:06:56 anoopcs: ?
12:07:35 * shyam is here at times...
12:07:36 no answer
12:08:00 ok, I will put my name down for now
12:08:16 oh, hi shyam and atinm
12:08:40 This time we thought we could do the interesting stuff early in the meeting
12:08:48 changing the order of topics
12:09:09 #topic GlusterFS 4.0
12:09:37 jdarcy: shyam atinm all yours
12:10:25 For JBR, there's a patch out that adds (very) basic reconciliation. Finishing that up, and starting on the "in-memory map" (to speed up reads from the journal).
12:10:49 Shyam, Atin, and I are also discussing some ideas about how to identify subvolumes. Expect -devel mail soon.
12:11:27 Ok, here I go... kickstarted DHT2 upstream with the first commit for xlator init and layout management abstraction
12:12:02 👏
12:12:06 Next tasks are to close the init routines for the server-side DHT2 xlator and add a posix2 xlator shell. At that point we should start with the FOPs (when it gets interesting)
12:12:51 That's cool
12:13:00 Then we can start making some noise in the community I hope ;)
12:13:08 👍
12:13:27 shyam: with respect to FOPs, are we going to merge them on the master branch?
12:14:04 Yup, all work is now on the master branch; the GerritHub instance where we were working is closed (or not being worked on any more, and that was the PoC)
12:14:48 shyam: ok! looking forward to it
12:14:52 * atinm is a little late to the meeting
12:15:11 atinm: we started with Gluster 4.0
12:15:16 From the GlusterD2 side, we have started working on establishing the txn framework support for multiple nodes
12:16:06 I've sent a pull request https://github.com/gluster/glusterd2/pull/89 to get started on that
12:16:27 nice, we should really start having show-and-tell sessions for all this new stuff, to see it in action
12:16:33 Also we are trying to work on a PoC for the flexi volgen idea, which has evolved through iterative discussions
12:17:20 so that makes a good update on 4.0
12:17:31 #topic GlusterFS 3.8
12:17:48 we're making progress there
12:17:55 rastar, yes, we do have a plan on that, I am waiting on hagarth to finalize the plan
12:18:02 release candidate 2 has just been tagged an hour(?) ago
12:18:22 #link https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md
12:18:54 #halp need assistance from feature owners and maintainers to get the release notes in order
12:19:58 there are also patches under review that need more +1/+2s: http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.8
12:20:38 at the moment, packages are being built for the CentOS Storage SIG, and maybe kkeithley is doing the Fedora 24 ones
12:20:47 indeed
12:20:49 I am
12:21:05 I'll move all the bugs to ON_QA later today
12:21:42 I am not aware of any blockers that prevent releasing 3.8.0 next week
12:21:56 ndevos: Thanks a lot for all the work, that changelog is really long and useful
12:22:13 #link https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.8.0&hide_resolved=1
12:22:49 rastar: yeah, let's show what's been done - and the descriptions for the features are still missing
12:23:08 2/3 of those patches failed regression because of tests/bugs/replicate/bug-977797.t
12:23:18 ndevos: yes, all of us have got requests from amye to fill out the feature descriptions
12:23:47 rastar: that's nice, but the etherpad has only very few details :-(
12:24:01 Is anybody actively looking into why that particular test keeps failing in 3.8?
12:24:05 rastar: it would be good if people actually added the descriptions at some point!
12:24:09 * msvbhat arrives late to the meeting and reads the chat log
12:24:47 ndevos: I agree, we should all start poking the feature owners
12:25:08 jdarcy: I have not looked into it.. and am not aware of anyone who is
12:25:45 jdarcy: I did not check that, but want to note the test failure with the "recheck ... because tests/.../...t failed" comments, it would help in seeing the details in Gerrit
12:26:26 rastar: please help jiffin with that if you can :)
12:26:28 ndevos: Good idea. I assume our trigger code will still work if there's extra text like that.
12:26:41 jdarcy: yes it does :)
12:26:49 jdarcy: ndevos yes, it should work
12:27:03 rastar, ndevos: I started to poke some of the feature owners
12:27:15 jiffin: thanks!
12:27:29 any questions for 3.8?
12:28:04 ndevos: Looks like a much better job than previous releases... :) good going
12:28:18 shyam: cool, good to hear!
12:28:33 ndevos: that's true
12:28:38 ... I guess rastar can continue with the next topic
12:28:41 no more questions I guess
12:28:59 #topic GlusterFS 3.7
12:29:48 ndevos++
12:29:48 hagarth will be sending out a mail on this soon
12:29:48 atinm: ndevos's karma is now 12
12:30:42 I must say ndevos has stuck to the schedule
12:30:53 as of now, we are looking at component-based checks by maintainers before a release is done
12:31:15 moving on
12:31:21 #topic GlusterFS 3.6
12:31:58 hi! sorry, it's my first time here... for 3.6, just one question related to the smoke tests on BSD, it appears that they are failing
12:32:34 does anyone have an idea on how to fix this?
12:33:05 vangelis: do you have a link for one such failure?
12:33:15 vangelis: I thought the voting for *BSD on 3.6 was disabled, and the errors would not get propagated to Gerrit?
12:33:23 vangelis, indeed it's failing, http://review.gluster.org/#/c/12710/ tried to fix it, but it didn't
12:33:37 rastar, http://build.gluster.org/job/freebsd-smoke/10844/
12:34:35 ndevos: that was mostly for regressions
12:34:43 ndevos: not for smoke
12:34:50 atinm: thanks!
12:34:50 you were faster :-) actually there are 3 review requests that fail with compile errors
12:34:51 http://review.gluster.org/#/c/14007/ http://review.gluster.org/#/c/14403/ http://review.gluster.org/#/c/14418/
12:35:25 vangelis: thanks for pointing it out, I will have a look
12:35:35 thx a lot!
12:35:40 #action rastar to look at 3.6 build failures on BSD
12:35:41 rastar: oh, but maybe you can disable the smoke job in a similar way?
12:36:19 rastar++
12:36:19 jdarcy: Karma for rastar changed to 3 (for the f23 release cycle): https://badges.fedoraproject.org/tags/cookie/any
12:36:20 jdarcy: rastar's karma is now 6
12:36:27 ndevos: we can; if it is an incompatibility then I will, I just want to see if it is anything genuine
12:36:36 rastar, look at this: http://review.gluster.org/#/c/13633/
12:36:49 rastar, this indicates we don't trigger smoke
12:37:17 rastar, somehow I have a feeling that this got enabled again
12:37:17 rastar: I tried to fix it before, but that didn't work out so well, I *think* it has to do with parallel make or something
12:37:36 atinm: we don't trigger smoke as in?
12:38:04 ndevos: worst case, change smoke.sh to have an if condition around make?
12:38:05 rastar, the patch hasn't taken any vote from smoke, from what I can see
12:38:31 rastar, to rephrase: *BSD smoke
12:39:01 atinm: you are right
12:39:11 rastar: maybe, but that is my guess... and I failed to fix the Makefile.am :-/
12:39:12 atinm: we are hitting the status overwrite error again
12:39:27 rastar: but it's appreciated if you have a look at it :)
12:39:49 * shyam needs to drop the kids at school, but would like it if "gluster design summits" were discussed in the open floor time (like having a virtual one every 4 months etc. to shore up designs for gluster, get more focused attention on things under design/development and seed future needs)
12:39:52 ndevos: :), not very hopeful after you have tried
12:40:20 rastar: you never know!
12:41:03 shyam: would you prefer to send a mail?
12:41:38 ok, point noted, moving on
12:41:48 #topic GlusterFS 3.5
12:42:01 no updates here I guess, waiting for EOL mostly
12:42:11 indeed, nothing of interest to note
12:42:14 Nuke it from orbit, it's the only way to be sure. 3.5, that is.
12:42:24 kkeithley: :)
12:42:51 I would like to skip last week's AIs unless people have updates on them
12:43:01 let's get 3.8.0 out and we'll add a "dead road ahead" commit in 3.5 :)
12:43:27 I don't see anyone here who should respond on the AIs except Saravanakmr
12:44:05 ok, that was a heads-up to save time
12:44:11 I will come back to the AIs later
12:44:19 #topic NFS Ganesha and Gluster
12:45:10 kkeithley, skoduri or jiffin?
12:45:25 heh
12:45:41 nothing much to report, work on 2.4 continues
12:46:29 won't GA until all the FSALs have been ported to ex_api (i.e. the new FSAL interface)
12:46:45 but nobody is asking for 2.4 yet anyway.
12:47:10 Gluster has already submitted patches for FSAL_GLUSTER
12:47:39 yes.. under review & more patches yet to come
12:48:12 cool
12:48:28 #topic Samba and Gluster
12:48:50 not much to report here either
12:49:13 but the work on integrating vfs_shadow_copy2 and Gluster is in the testing stage now
12:49:39 it feels good to see file snapshots in Windows the way they are meant to be seen
12:50:05 rastar: oh, do you know if anyone is looking into adding glfs_lseek() with SEEK_DATA/HOLE to Samba vfs_gluster?
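A minimal sketch of the glfs_lseek() SEEK_DATA/SEEK_HOLE idea raised just above, from the libgfapi side, assuming glfs_lseek() accepts those whence values once the support being discussed lands; the volume name, server, and file path are placeholders, and this is not the Samba vfs_gluster code itself.

    /* Minimal sketch: probe the first data extent and the following hole of a
     * sparse file over libgfapi, assuming glfs_lseek() honours SEEK_DATA and
     * SEEK_HOLE. Volume name, server and path are placeholders. */
    #define _GNU_SOURCE                      /* for SEEK_DATA / SEEK_HOLE */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
            glfs_t *fs = glfs_new("testvol");                /* placeholder volume */
            if (fs == NULL)
                    return 1;
            glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
            if (glfs_init(fs) != 0)
                    return 1;

            glfs_fd_t *fd = glfs_open(fs, "/sparse-file", O_RDONLY);
            if (fd != NULL) {
                    off_t data = glfs_lseek(fd, 0, SEEK_DATA);     /* start of first data extent */
                    off_t hole = glfs_lseek(fd, data, SEEK_HOLE);  /* start of the hole after it */
                    printf("data at %jd, hole at %jd\n", (intmax_t)data, (intmax_t)hole);
                    glfs_close(fd);
            }

            glfs_fini(fs);
            return 0;
    }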
12:50:41 ndevos: no
12:51:03 ok, I'll keep it in the back of my head then
12:51:12 I will put it on our backlog list
12:51:23 next topic
12:51:56 #topic Saravanakmr to add documentation on how to add blogs
12:52:05 this is done #link https://github.com/gluster/glusterdocs/pull/114
12:52:11 Thanks Saravanakmr
12:52:58 #link http://gluster.readthedocs.io/en/latest/Contributors-Guide/Adding-your-blog/
12:53:16 ndevos, Thanks! makes sense :)
12:53:32 I will skip the rest of the AIs
12:53:44 #topic Open Floor
12:53:59 #link http://review.gluster.org/#/c/14399/
12:54:07 I would like to get some reviews, please
12:54:39 that is a nice feature to have
12:54:58 "glusterfsd/main: Add ability to set oom_score_adj"
12:55:31 should be interesting for anyone who wants to control the memory usage of gluster processes
12:55:46 iow, for everyone
12:55:48 speaking of memory..
12:55:48 post-factum: added myself as a reviewer, you should add others to notify them of the patch
12:55:59 post-factum++
12:56:01 atinm: post-factum's karma is now 2
12:56:07 rastar: who do you suggest should be added?
12:56:13 hagarth should do the review, but I do not see him here
12:56:16 post-factum, I liked the commit heading, but didn't have a chance to go through it
12:56:23 I can take this to another channel, but libgfapi seems to make libvirtd leak memory badly and they forwarded me here..
12:56:37 ndevos: considering that all the files are under the core component, all the core component owners?
12:56:59 partner: what version of gluster?
12:57:04 3.6.6
12:57:39 there are some rumours that it's still in 3.7.11 too
12:58:10 but if anybody knows anything on the topic I'd be happy to hear it and perhaps get some help on debugging the issue.
12:58:25 other than that, after a year away, I seem to be using glusterfs here again :)
12:58:39 partner: we fixed many memory leaks in 3.6, so you should have them
12:58:52 partner: do you have a way to reproduce the leak? something scripted maybe?
12:58:52 rastar: rgr, will try out 3.6.9
12:58:59 partner: we know of some areas where there are more leaks
12:59:13 yes, when used with OpenStack a simple attach/detach will make it visible
12:59:23 partner: does your use case involve a lot of VM spin-ups and shutdowns?
12:59:28 yes
12:59:49 rastar: added hagarth explicitly ;)
12:59:56 partner: how about one VM, attaching/detaching a 2nd disk repeatedly?
13:00:19 ndevos: the same thing happens with a single VM
13:00:32 I can try to collect some evidence to make it visible
13:00:35 partner: yes, a little more detail is that whenever a gfapi init/fini cycle is done, some memory is leaked
13:00:48 partner: ok, that should make it scriptable, and much easier than requiring OpenStack :)
13:01:29 you wouldn't see the problem if your VMs just kept running. Not to say there is no problem, this is just the root cause. This certainly needs to be fixed.
13:01:38 yes, and recent 3.7 versions should reduce the leaks a little more too
13:01:45 that's right
13:01:53 but, feel free to close the meeting, I'll get back on the other channel later on, thanks for your comments!
13:02:29 ok, we have run out of time and it does not look good if we cross the time limit in every meeting
13:02:45 we were not able to discuss the virtual design summits that shyam proposed
13:03:00 oh, sorry :o
13:03:46 partner: not a problem at all. Knowing what bothers the community the most is equally important
13:04:01 I hope shyam sends a mail or else we will discuss it in the next meeting
13:04:08 Thanks for attending the meeting, everyone
13:04:14 Until next week
13:04:24 #endmeeting
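A minimal sketch of the scripted reproducer discussed under Open Floor for the libgfapi init/fini leak: repeat a full glfs_new()/glfs_init()/glfs_fini() cycle and watch the resident set size of the process grow (for example via VmRSS in /proc/<pid>/status, or under valgrind). The volume name and server are placeholders; this illustrates the approach, it is not an existing test.

    /* Repeatedly connect to and disconnect from a Gluster volume via libgfapi.
     * If each init/fini cycle leaks memory, the RSS of this process keeps
     * growing across iterations. Volume name and server are placeholders. */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
            for (int i = 0; i < 1000; i++) {
                    glfs_t *fs = glfs_new("testvol");        /* placeholder volume */
                    if (fs == NULL)
                            return 1;
                    glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
                    if (glfs_init(fs) != 0) {
                            fprintf(stderr, "glfs_init failed at cycle %d\n", i);
                            return 1;
                    }
                    glfs_fini(fs);                           /* any per-cycle leak accumulates here */
                    if (i % 100 == 0)
                            fprintf(stderr, "completed %d cycles\n", i);
            }
            return 0;
    }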