12:00:43 <atinm> #startmeeting Gluster community weekly meeting
12:00:43 <zodbot> Meeting started Wed Jan 27 12:00:43 2016 UTC.  The chair is atinm. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:43 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:43 <zodbot> The meeting name has been set to 'gluster_community_weekly_meeting'
12:01:30 <atinm> We didn't have a meeting last time due to low participation, so I expect better attendance this time :)
12:01:53 <atinm> #info The agenda is right here : https://public.pad.fsfe.org/p/gluster-community-meetings
12:01:58 <atinm> #topic Roll call
12:02:05 <atinm> Who all do we have here today?
12:02:12 * Manikandan_ is here
12:02:19 * msvbhat is present
12:02:22 * overclk is around
12:02:24 * hgowtham here
12:03:02 <atinm> I am glad that Manikandan has volunteered to learn the process of hosting, and from next time onwards he is going to help us with hosting
12:03:17 <atinm> We'll wait a couple of minutes for everyone to get settled and then move on
12:03:22 <Manikandan_> atinm, thanks ;-)
12:04:31 * ndevos _o/
12:04:53 <atinm> only 6 people ?
12:05:06 <overclk> atinm: :-/
12:05:40 <ndevos> kkeithley is travelling, I don't expect him to join
12:05:54 <ndevos> not sure if hagarth is awake yet...
12:06:06 <atinm> last time we cancelled it due to low participation
12:06:24 <atinm> hagarth informed me that he has to run through some other meetings
12:06:27 * rastar is here
12:06:39 <atinm> Anyway, let's get started
12:06:40 <ndevos> rafi: ?
12:06:52 <atinm> #topic AI from last week
12:07:01 <atinm> ndevos to send out a reminder to the maintainers about more actively enforcing backports of bugfixes (this year)
12:07:23 <ndevos> oh, someone added "this year", that should be doable
12:07:24 <atinm> ndevos, any updates on this?
12:07:36 <ndevos> no update, still need to do it
12:07:38 <atinm> ndevos, I just carried it forward, so not sure who added ;)
12:07:57 <atinm> #action ndevos to send out a reminder to the maintainers about more actively enforcing backports of bugfixes
12:08:02 <atinm> Moving on
12:08:09 <atinm> kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla, github
12:08:15 <atinm> csim, kshlm : any updates on this?
12:08:47 <atinm> csim, kshlm : Are you guys around?
12:09:30 <atinm> seems like no, so moving on
12:09:46 <atinm> #action kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla, github
12:10:00 <atinm> Next one is rastar and msvbhat to consolidate and publish testing matrix on gluster.org. amye can help post Jan 1
12:10:08 <atinm> Any updates on it msvbhat & rastar ?
12:10:10 <msvbhat> atinm: Not done yet
12:10:24 <atinm> msvbhat, so move to next week?
12:10:26 <msvbhat> atinm: In fact I had conveniently forgotten it
12:10:32 <msvbhat> atinm: Please
12:10:40 <msvbhat> atinm: I'll try to finish it by next week
12:10:48 <rastar> it might take more than a week
12:11:03 <atinm> so two weeks is a realistic time?
12:11:06 <rastar> yes
12:11:22 <atinm> #action rastar and msvbhat to consolidate and publish testing matrix on gluster.org in two weeks' time, amye can help
12:11:45 <atinm> Since kkeithley is not around I am skipping his AI
12:11:49 <atinm> #action kkeithley to send a mail about using sanity checker tools in the codebase
12:11:58 <atinm> Next AI was kshlm to write up a README for glusterfs-specs
12:12:02 <atinm> I think this was done
12:12:15 <ndevos> yes, I think the new README is much clearer
12:12:35 <atinm> #info new README is at http://review.gluster.org/13187 which is merged in glusterfs-specs now
12:12:36 <ndevos> #link https://github.com/gluster/glusterfs-specs/blob/master/README.md
12:12:50 <atinm> Moving on
12:12:52 <atinm> kshlm to announce the availability of 3.6.8
12:13:06 <atinm> I've not seen any mail on this yet
12:13:39 <ndevos> me neither...
12:13:46 <atinm> It's just the announcement that is pending
12:13:57 <atinm> I'll try to poke him and remind him about it :)
12:14:18 <atinm> #action atinm to talk to kshlm on announcing 3.6.8
12:14:34 <atinm> The last AI for today is atinm to move all the design docs of GD2 to glusterfs-specs
12:15:13 <atinm> This is WIP, patch http://review.gluster.org/#/c/13237/ is under review, got a +1 from one of the reviewers, should be able to get it done in a day's time
12:15:26 <atinm> That concludes the AI list
12:15:47 <atinm> Moving on
12:15:56 <atinm> #topic GlusterFS 3.7
12:16:05 <atinm> pranithk seems to be not around
12:16:17 <atinm> does anyone know the current status of the next release?
12:16:20 <rastar> I think he is waiting for regression to pass for some of the patches.
12:16:35 <atinm> hmm
12:17:07 <atinm> I don't think we should wait too long for a release
12:18:00 <atinm> I can understand that a few critical fixes are on that pending list, but not releasing for almost two months also doesn't make a good impression IMO
12:18:11 <atinm> I think we need to be strategic here
12:18:24 <ndevos> pending patches: https://public.pad.fsfe.org/p/glusterfs-3.7.7
12:18:24 <atinm> what you guys think?
12:18:57 <ndevos> I think there was a severe issue that needs to be fixed, and we definitely want to have that included
12:19:42 <rastar> I agree. If there are bugs which are critical and already identified, then we should make a release with only those bugs fixed.
12:19:43 <ndevos> not sure if we need all of them, minor bugs can be acceptable and fixed in the release for next month - assuming those are not regressions
12:20:01 <atinm> #info 3.7.7 is delayed as some critical patches are still not in due to regression failures
12:20:17 <atinm> ndevos, yes we need to filter them
12:20:19 <overclk> atinm: if there's a critical bug/issue, it's good to wait and have that in the next release..
12:20:45 <overclk> atinm: I see a data loss bug in the list..
12:21:23 <atinm> overclk, I am not against it, my point is, have we actually verified that all of them are severe?
12:22:12 <atinm> My point is we shouldn't be waiting for patches which are not absolutely critical :)
12:22:14 <overclk> atinm: data loss == severe (at least 99.99% of the time)
12:22:49 <atinm> anyway, I'd leave it up to pranithk to take that call
12:22:52 <atinm> Moving on
12:22:59 <atinm> #topic GlusterFS 3.6
12:23:17 <atinm> I don't think kshlm is around
12:23:30 <atinm> raghu, are you around? Any plans for 3.6.9?
12:24:11 <ndevos> he was merging patches earlier today
12:24:32 <atinm> ndevos, ok
12:24:40 <atinm> So let's sync up on this next week
12:24:58 <atinm> #topic GlusterFS 3.5
12:25:12 <atinm> ndevos, 3.5.8 is not out yet right?
12:25:19 <ndevos> waiting for some quota changes/feedback from Manikandan_
12:25:38 <Manikandan_> ndevos, ahh I forgot, I will get it done today
12:25:38 <ndevos> no, there were no patches merged, so no need to release anything
12:26:05 <Manikandan_> ndevos, sorry for the delay
12:26:24 <ndevos> if Manikandan_ checks and confirms the upstream status of the reported issues/fixes, the next 3.5 release will get those improvements
12:27:05 <ndevos> the release is planned for around the 10th of Feb.
12:27:40 <atinm> ndevos, so are those the only patches which you are waiting for?
12:27:48 <Manikandan_> ndevos, sure, will update you soon
12:28:10 <ndevos> atinm: yes, 3.5 is pretty stable, and does not receive a lot of backports
12:28:30 <Manikandan_> ndevos, I have sent two patches for 3.5
12:28:37 <atinm> ndevos, ok
12:28:51 <ndevos> Manikandan_: yes, both for quota, right?
12:28:51 <Manikandan_> http://review.gluster.org/#/c/12990/ and http://review.gluster.org/#/c/13215/
12:28:55 <Manikandan_> ndevos, yup
12:29:19 <ndevos> Manikandan_: thanks, just confirm that they are backports, or why the problem is not in other releases
12:29:28 <atinm> ndevos, so do you plan to take them in and release 3.5.8?
12:29:57 <ndevos> atinm: sure, anything that fixes a bug in 3.5 stands a chance of getting merged :)
12:31:09 <Manikandan_> ndevos, I will update that
12:31:28 <atinm> ndevos, anything else on 3.5?
12:31:41 <ndevos> Manikandan_: thanks, you still have some time before I do a release :)
12:31:46 <ndevos> atinm: nope, that's all
12:31:51 <atinm> ndevos, thanks
12:31:53 <atinm> Moving on
12:31:55 <Manikandan_> Manikandan_, thanks too ;-)
12:32:00 <atinm> #topic GlusterFS 3.8
12:32:17 <atinm> hagarth, are you around? Do you want to update anything here?
12:33:09 <atinm> One important announcement about 3.8 was http://www.gluster.org/pipermail/gluster-devel/2016-January/047729.html
12:33:35 <atinm> #info 3.8 will be out by end of May or early June 2016
12:34:00 <atinm> We haven't heard any objections on this proposal so I think the plan is on
12:34:19 <atinm> Any other things on 3.8 which need a discussion?
12:35:04 <ndevos> get those feature pages and designs posted?
12:35:05 <atinm> I take that as a no, and hence moving to the next topic
12:35:39 <atinm> that's a request to all 3.8 feature owners
12:35:51 <atinm> #topic GlusterFS 4.0
12:36:04 <atinm> overclk, want to update about DHTv2?
12:36:12 <overclk> atinm, sure.
12:36:34 <overclk> So, we had planned to complete the POC last week, but that didn't happen for various reasons..
12:36:55 <overclk> reasons == Shyam got busy elsewhere, I got tied up with things
12:37:16 <overclk> So, POC was planned for this week, but I don't think we'll make it.
12:37:45 <overclk> Next week should most likely be our target (also Shyam will be in the BLR office on 1st Feb)
12:37:57 <overclk> That's about it
12:38:01 <atinm> overclk, thanks for the updates
12:38:07 <atinm> I will go with GlusterD2.0
12:38:08 <overclk> small note
12:38:20 <atinm> overclk, sure, go ahead
12:38:28 <overclk> next plan would be to get dhtv2 code into xlators/experimental
12:38:41 <atinm> #info DHTv2 PoC might take another week to get over
12:38:43 <overclk> -- done --
12:39:11 <atinm> In GlusterD 2.0 we are working on etcd bootstrapping part
12:39:44 <atinm> along with that work on txn framework is in progress
12:39:51 <atinm> that's about GD2
12:40:01 <atinm> DO we have any representatives from NSR?
12:40:08 <atinm> asengupt, ??
12:40:43 <atinm> I guess he is not around
12:41:06 <atinm> The update I have from NSR is that they are working on reconciliation and the client/server translators
12:41:20 <atinm> aravindavk, Do you want to update about Eventing?
12:41:48 <overclk> asengupt still busy with finding a name for NSR ? ;)
12:41:55 <atinm> overclk, :)
12:42:15 <atinm> overclk, and he has started a poll and promised to come up with goodies ;)
12:42:31 <overclk> atinm, then he's busy shopping :)
12:42:51 <atinm> seems like aravindavk is also not around
12:42:54 <hagarth> i would propose ORR
12:43:10 <hagarth> One Replication to Rule (them all) ;)
12:43:19 <atinm> hagarth, :)
12:43:22 <overclk> hagarth: Outer Ring Road :P
12:43:31 <hagarth> lol
12:43:39 <overclk> hagarth, atinm: I proposed "Flexi Rep"
12:43:59 <atinm> Add them to the etherpad he shared :)
12:44:14 <overclk> already done
12:44:23 <atinm> anyway, moving on
12:44:46 <atinm> I think the eventing team is going to call for a hangout to discuss the design
12:44:55 <atinm> I haven't heard from them since
12:45:08 <atinm> hagarth, do you have any information on it?
12:45:29 <hagarth> atinm: that is the latest I have too
12:45:39 <atinm> hagarth, ok
12:45:41 <atinm> That concludes 4.0
12:45:48 <atinm> anything else?
12:45:59 <atinm> or I will move on to the last topic
12:46:14 <atinm> I take that as a no
12:46:26 <atinm> #topic Open Floor
12:49:09 <atinm> There is a note about python3 in the open floor section; however, it has already been discussed over email: http://www.gluster.org/pipermail/gluster-devel/2016-January/047808.html
12:49:19 <atinm> Apart from that I don't see anything else
12:49:49 <atinm> Does anyone have anything else to discuss? Otherwise we are done for today
12:50:41 <atinm> I'll take that as a no
12:50:48 <atinm> Thanks everyone for attending
12:50:54 <hagarth> atinm: thank you, good luck for the week ahead!
12:50:54 <atinm> Have a great time!
12:51:13 <atinm> hagarth, I'll be speaking at LCA2016, so won't be available next week
12:51:14 <ndevos> thanks atinm!
12:51:18 <overclk> thanks atinm and all the best for LCA.
12:51:29 <atinm> #endmeeting