12:00:20 <meghanam_> #startmeeting
12:00:20 <zodbot> Meeting started Wed Aug 26 12:00:20 2015 UTC.  The chair is meghanam_. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:20 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:39 <meghanam_> #info agenda is available @ https://public.pad.fsfe.org/p/gluster-community-meetings
12:00:54 <meghanam_> #topic roll call
12:01:05 * ndevos _o/
12:01:25 * asengupt is present
12:01:28 * tigert is here
12:02:02 * saurabh_ is present
12:02:44 <meghanam_> I'll wait for 2 more minutes before I move on to the next topic
12:02:46 * raghu is here
12:03:05 * kkeithley_blr is here
12:03:21 * krishnan_p is here
12:03:57 <meghanam_> #topic  Action items from last week
12:04:24 <meghanam_> #topic  raghu to fill in the contents for release schedule and ask tigert to push the page in gluster.org
12:04:51 <meghanam_> Is raghu here?
12:04:53 <raghu> meghanam_: oops. I have the contents. But I have to send the pull request to tigert. Will do it today
12:05:45 <meghanam_> #action  raghu to fill in the contents for release schedule and ask tigert to push the page in gluster.org
12:06:00 <meghanam_> #topic msvbhat to do 3.7.3 announcement on the gluster blog and social media
12:06:26 * rastar is late for the meeting
12:06:28 * skoduri is here
12:06:48 <meghanam_> Does anyone remember seeing this on social media?
12:06:58 <rastar> yes
12:07:05 <rastar> msvbhat: did that
12:07:05 <tigert> rastar: ok
12:07:12 <meghanam_> Cool.
12:07:21 <tigert> erm, raghu even
12:07:29 * overclk is late
12:07:40 <meghanam_> #info msvbhat and raghu announced 3.7.3 on social media
12:07:55 * anoopcs is late
12:07:59 <rastar> meghanam_: here is the link https://medium.com/@msvbhat/gluster-news-of-the-week-30-2015-30452f44a144
12:08:12 <meghanam_> #topic msvbhat/rtalur to send update mailing list with a DiSTAF how-to and start discussion on enhancements to DiSTAF.
12:08:17 * msvbhat joins the meeting a bit late
12:08:18 * rjoseph is late
12:08:26 <meghanam_> #info https://medium.com/@msvbhat/gluster-news-of-the-week-30-2015-30452f44a144
12:08:33 <meghanam_> rastar?
12:08:48 <msvbhat> That's the announcement, Yes
12:08:52 <msvbhat> meghanam_: ^^
12:09:04 <meghanam_> Yes. I got that. Thanks msvbhat.
12:09:16 <msvbhat> And about the distaf thing, I will create a review in gluster-specs and send it for review
12:09:18 <rastar> meghanam_: about DiSTAF, no announcement yet
12:09:19 <meghanam_> DiSTAF topic.
12:09:22 <msvbhat> as a proposal
12:09:28 <rastar> had more discussion with ndevos
12:09:40 <ndevos> msvbhat: got an ETA for that doc?
12:09:57 <msvbhat> ndevos: This weekend?
12:10:06 <meghanam_> #info Discussion on DiSTAF enhancements started
12:10:28 <ndevos> msvbhat: thats up to you, asap :)
12:10:40 <msvbhat> meghanam_: Create an action item for me to send a proposal request
12:10:47 <msvbhat> in gluster-specs
12:11:11 <meghanam_> #action msvbhat to send a proposal request for DiSTAF
12:11:12 <ndevos> msvbhat: I won't push the packaging too much until there is clear user-facing stability
12:11:21 <msvbhat> ndevos: I will try to complete it tomorrow or the day after. The weekend is my fallback if I can't finish by then :P
12:11:44 <msvbhat> ndevos: Sure
12:11:50 <meghanam_> Okay. Moving on.
12:11:52 <ndevos> msvbhat: sounds good to me, people should then be able to review it starting next week
12:11:59 <meghanam_> #topic kshlm to checkback with misc on the new jenkins slaves.
12:12:34 <meghanam_> kshlm, are you here?
12:12:44 <meghanam_> kshlm?
12:12:56 <ndevos> or, maybe csim has something to report?
12:13:31 <meghanam_> Looks like he is not here
12:13:38 <meghanam_> #action kshlm to checkback with misc on the new jenkins slaves.
12:13:53 <meghanam_> #topic poornimag needs help to backport glfs_fini patches to 3.5
12:14:09 <meghanam_> poornimag, have you found any volunteers yet?
12:14:34 <ndevos> I did not see any patches flow in yet
12:15:15 <meghanam_> Looks like she is not here either
12:15:40 <meghanam_> #action poornimag needs help to backport glfs_fini patches to 3.5
12:16:03 <meghanam_> #topic kshlm to send out an announcement regarding the approaching deadline for Gluster 3.7 next week
12:16:25 <meghanam_> Does someone remember seeing this?
12:16:46 * kshlm is here
12:16:53 <kshlm> I'll send it out tonight.
12:17:00 <meghanam_> #action kshlm to send out an announcement regarding the approaching deadline for Gluster 3.7
12:17:11 <kshlm> And plan the release for monday
12:17:24 <meghanam_> #topic raghu to make 3.6.5 this Friday.
12:17:29 <meghanam_> Okay kshlm.
12:17:36 <raghu> meghanam_: I made the release and the tarball is ready
12:17:48 <hagarth> raghu: what about various packages?
12:17:53 <raghu> http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
12:18:01 <meghanam_> #info raghu released 3.6.5 and the tarball is ready.
12:18:07 <ndevos> nice, thanks raghu!
12:18:10 <meghanam_> #info http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
12:18:15 <meghanam_> thanks raghu
12:18:28 <raghu> hagarth: I think RPMs are ready. hchiramm mentioned that Ubuntu packages still need to be prepared. I am waiting for that
12:18:31 <hagarth> raghu: announced on mailing lists?
12:18:37 <raghu> will make the announcement once that is ready
12:18:54 <hagarth> raghu: ok, sounds good
12:18:58 <meghanam_> #action raghu to announce 3.6.5 on mailing lists
12:19:00 <raghu> hagarth: nope. Not yet. I can make it right now if RPMs are sufficient for that
12:19:06 <ndevos> I think the debian/ubuntu packaging was done today, kkeithley_blr?
12:19:37 <rastar> ndevos: kkeithley did packages for one release of debian and one release of ubuntu today.
12:19:46 <meghanam_> #topic hchiramm_ to improve release documentation
12:19:46 <rastar> a few more to go
12:20:09 <meghanam_> hchiramm_home?
12:20:22 <meghanam_> Do you have any updates?
12:20:50 <kkeithley_blr> ubuntu was done today. debian wheezy is done; jessie and stretch will be done soon. I would not wait for those before announcing
12:20:58 <meghanam_> #action hchiramm_ to improve release documentation
12:21:02 <hagarth> kkeithley_blr: +1 to that
12:21:18 <meghanam_> #topic Saravana_ to send out a mail to gluster-infra
12:21:38 <ndevos> an email about what?
12:21:40 <Saravana_> I have sent out a mail to gluster-infra about that
12:21:50 <Saravana_> FAQ on gluster-users mailing list
12:21:56 <ndevos> ah, right!
12:22:03 <meghanam_> Alright. Thanks.
12:22:03 <raghu> kkeithley_blr: shall I go ahead and announce 3.6.5 in the mailing list?
12:22:10 <hagarth> raghu: go ahead
12:22:23 <ndevos> #info Saravana_ sent FAQ on gluster-users mailing list
12:22:30 <ndevos> Saravana_: did you get any traction to that?
12:22:32 <Saravana_> I need all your inputs on this. :)
12:22:33 <kkeithley_blr> raghu: yes, +1 to announce
12:22:52 <meghanam_> #topic GlusterFS 3.7
12:22:52 <raghu> where is the Ubuntu package link? I want to include that as well in the announcement
12:23:21 <kkeithley_blr> ubuntu dpkgs are in the launchpad.net/~gluster PPA
12:23:26 <ndevos> kshlm: you're up for the 3.7 release?
12:23:45 <kshlm> Yeah.
12:23:55 <krishnan_p> kshlm, ndevos I have a couple of patches targeted for 3.7.4 and they're not yet merged on master :(
12:24:07 <kshlm> I'll send out a mail announcing the approaching deadline.
12:24:08 <krishnan_p> kshlm, I have added the bug to 3.7.4 tracker, if that counts
12:24:12 <krishnan_p> kshlm, OK
12:24:25 <ndevos> krishnan_p: release was scheduled for later this week, would you be able to make that?
12:24:45 <kshlm> Looking at the 3.6.x release raghu is planning to do this week, I'm considering pushing 3.7.5 a little bit.
12:25:02 <krishnan_p> ndevos, I should be. This is the patch adding multi-threaded epoll to glusterd
12:25:03 * ndevos will be travelling tomorrow, and is unlikely to be in a position to review things before Friday
12:25:23 <Saravana_> ndevos, I have sent the idea and a few FAQs (with answers) - I'd prefer to have a link on gluster.org. Please get back with your suggestions.
12:25:24 <kshlm> I previously said I was targeting Monday, but I'd like to push it to next Friday to get a better gap.
12:25:39 <krishnan_p> ndevos, ah!
12:25:42 <hagarth> krishnan_p: what necessitates multi-threaded epoll with glusterd?
12:25:43 <raghu> kshlm: You can release the tarball as per your schedule. By the time the packages are built, I think it will be the right time
12:25:48 <kshlm> Does anyone have objections to this?
12:25:52 <hagarth> kshlm: agree with raghu
12:26:11 <krishnan_p> hagarth, to make snapshot commands work at the limit
12:26:16 <hagarth> kshlm: if some patches miss this train, they will get picked up in the next month's release
12:26:19 <hagarth> krishnan_p: ah ok
12:26:19 <krishnan_p> without the ping-timer between glusterds failing
12:26:36 <krishnan_p> kshlm, hagarth, ndevos it is not a must-have
12:26:37 <kshlm> hagarth, raghu, from the last release experience I had, building the packages didn't take too long.
12:26:48 <krishnan_p> I'd like to have it. Will get necessary things going for it
12:26:49 * kkeithley_blr thought we were doing the jenkins release per the schedule, then the email announcement is made when "enough" packages land on d.g.o
12:27:09 <kkeithley_blr> and doesn't affect the schedule for the next 3.X release
12:27:23 <hagarth> kkeithley_blr: agree with that
12:27:48 <hagarth> kshlm: let us adhere to the announced release schedule. I think it will help us be more predictable.
12:27:55 <raghu> kshlm: yeah. I think it takes at least 1-2 days for all the packages to be available. And I think 1-2 days is ok?
12:28:16 <ndevos> #info krishnan_p will try to get multi-threaded epoll patches for glusterd ready before 3.7.4, but the release will not be blocked if they are missing
12:28:30 <kshlm> Okay.
12:28:31 <krishnan_p> ndevos, the patches are already available for review
12:28:31 <ira> 340858
12:28:53 <ndevos> krishnan_p: with "ready" I mean merged + backport included ;-)
12:28:56 <krishnan_p> It has passed Linux regression, waiting for NetBSD to pass
12:29:02 <kshlm> Monday is the deadline for 3.7.4 then.
12:29:03 <krishnan_p> ndevos, OK
12:29:09 <krishnan_p> kshlm, OK
12:29:15 <ndevos> kshlm: ok :)
12:29:34 <meghanam_> #info Monday is the deadline for 3.7.4
12:29:34 <hagarth> kshlm: cool :)
12:29:48 <meghanam_> #topic GlusterFS 3.6
12:30:06 <ndevos> raghu: ?
12:30:14 <raghu> As I said, glusterfs-3.6.5 is released. I just have to make the announcement
12:30:17 <raghu> kkeithley_blr:  This is the link for Ubuntu packages right?? "https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6"
12:30:39 <raghu> kkeithley_blr: shall I include that link in the announcement mail??
12:30:47 <kkeithley_blr> raghu: yes.  yes, include the link if you like
12:31:02 <raghu> kkeithley_blr: sure. Thanks.
12:31:07 <meghanam_> Moving on.
12:31:11 <raghu> I will be sending out the announcement shortly
12:31:18 <meghanam_> #topic GlusterFS 3.5
12:31:25 <meghanam_> Thanks raghu.
12:31:31 <meghanam_> poornimag?
12:31:35 <meghanam_> Are you here?
12:31:51 <ndevos> bugs that need some work are listed on https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&hide_resolved=1&id=glusterfs-3.5.6
12:32:06 <ndevos> and the main one that gets user requests is the glfs_fini() cleanup
12:32:26 <meghanam_> poornimag had sought help from volunteers last week
12:32:36 <ndevos> poornimag posted links to patches for backporting, and the backporting still needs to get done
12:32:50 <meghanam_> Can someone help with the backports?
12:33:10 <ndevos> #halp Volunteers needed to backport patches listed in bug 1134050
12:33:35 <ndevos> #info bug with patches needed backporting: https://bugzilla.redhat.com/show_bug.cgi?id=1134050
12:33:56 <meghanam_> #topic Glusterfs 3.8
12:34:29 <ndevos> hagarth: maybe you have something for 3.8?
12:34:34 <ndevos> or krishnan_p?
12:34:40 <meghanam_> Are there any specific topics that need discussion?
12:35:26 <ndevos> #info designs for 3.8 features should get posted to the glusterfs-specs repo REALLY SOON NOW
12:35:29 <krishnan_p> none from me
12:35:46 <meghanam_> #topic Gluster 4.0
12:36:12 <ndevos> #info designs for 4.0 features should get posted to the glusterfs-specs repo pretty soon
12:36:20 <hagarth> agree with ndevos
12:36:42 <hagarth> all planned features for Gluster.next should start discussions upstream
12:37:10 <krishnan_p> We have shared our plan for glusterD 2.0 for the next 3 months.
12:37:15 <ndevos> and Gluster.Next is both 3.8 and 4.0
12:37:22 <hagarth> if you are working on a feature for gluster.next and have not reported progress in the last few weeks, please do so asap :)
12:37:31 <krishnan_p> We will be following up with emails discussing progress and open items
12:37:37 <hagarth> krishnan_p: a more granular progress update would be very welcome!
12:37:44 <krishnan_p> hagarth, definitely
12:37:50 <hagarth> krishnan_p: thanks for sharing the plan!
12:38:16 <meghanam_> thanks.
12:38:22 <ndevos> #info discussions in Google Docs design documents need some summary on gluster-devel too
12:38:26 <krishnan_p> hagarth, ndevos would gluster weekly news be a good platform for sharing Gluster.Next progress?
12:38:36 <hagarth> krishnan_p: absolutely!
12:38:39 <ndevos> krishnan_p: sure, that would be great
12:39:05 <krishnan_p> hagarth, ndevos OK :)
12:39:18 <ndevos> #info Update Gluster News with Gluster.next feature progress on https://public.pad.fsfe.org/p/gluster-weekly-news
12:39:49 <meghanam_> #action krishnan_p to update Gluster News about Gluster.next progress
12:40:01 <meghanam_> #topic Open Floor
12:40:20 <krishnan_p> meghanam_, I think it is up to everyone to update their respective progress in the etherpad so that the weekly news can import it
12:40:28 <hagarth> krishnan_p: +1
12:40:54 <hagarth> how are we doing on regression failures?
12:41:06 <meghanam_> Alright. Can you send an email to the devel list about this?
12:41:08 <ndevos> krishnan_p: yeah, that should work, but we're counting on you to show the way :D
12:41:35 <krishnan_p> ndevos, that is encouraging :)
12:41:42 <ndevos> krishnan_p: lead by example!
12:41:51 * krishnan_p ducks
12:42:08 <kshlm> Are there no existing topics for open floor?
12:42:11 <hagarth> should we enforce a ban on patch acceptance for those components that cause regression failures?
12:42:15 <rastar> hagarth: about regressions, we might have fixed the NetBSD failures at last. Keeping fingers crossed. Let's see for a few days
12:42:28 <hagarth> rastar: cool, thanks for the update
12:42:29 <rastar> hagarth: we have around 7 tests in the bad_tests list
12:42:30 <krishnan_p> All, I recently bumped into a messaging library called nanomsg (nanomsg.org).
12:42:47 <krishnan_p> Would open floor be ideal for discussing it and its relevance to the gluster project?
12:42:54 <kshlm> krishnan_p, recently :D
12:42:54 <raghu> tigert: I have created a pull request for the release schedule to be updated on the gluster website. Please have a look at it and give me feedback if any changes are needed.
12:42:55 <ndevos> rastar: do all the tests in bad_tests have a bug associated with them?
12:43:04 <hagarth> krishnan_p: sure
12:43:06 * msvbhat brb
12:43:11 <krishnan_p> kshlm, recently it all dawned on me. I have been looking at it for a while
12:43:16 <rastar> ndevos: no but I agree with you, we should have one
12:43:30 <rastar> ndevos: I will create them by end of this week.
12:43:42 <ndevos> rastar: yes, definitely, could you check and file bugs for the ones missing?
12:43:43 <krishnan_p> This library abstracts messaging for distributed systems software
12:43:55 <raghu> tigert: this is the pull request (https://github.com/gluster/glusterweb/pull/5)
12:44:16 <hagarth> krishnan_p: what all messaging types does nanomsg support?
12:44:19 <rastar> krishnan_p: looks interesting
12:44:26 <krishnan_p> At a high level, it provides a socket-like API to send/recv messages to a topology (read: a group of network endpoints)
12:44:36 <ndevos> krishnan_p: looks interesting
12:45:04 <krishnan_p> hagarth, we need all endpoints to use nanomsg to send/recv messages
12:45:21 <hagarth> krishnan_p: ok..
12:45:22 <krishnan_p> hagarth, the application layer can have its own on-wire format
12:45:37 <hagarth> nice, just looked up the website
12:45:44 <krishnan_p> e.g., there is a topology called SURVEYOR
12:46:10 <krishnan_p> This has two kinds of endpoints, a surveyor (a socket type) and one or more respondents
12:46:21 <krishnan_p> The surveyor binds on a well-known address
12:46:32 <krishnan_p> The respondents connect when they come 'online'
12:46:47 <tigert> raghu: thanks
12:46:48 <krishnan_p> The surveyor sends a survey and sets a deadline for it
12:47:12 <krishnan_p> The surveyor performs as many recvs as it deems fit (think quorum in the case of consensus)
12:47:17 <ndevos> krishnan_p: I was wondering if there is a need for glusterd to speak to bricks? you would still need the current rpc/xdr too
12:47:31 <krishnan_p> Post deadline, messages from respondents are dropped
12:48:00 <kshlm> ndevos, there still is such a need.
12:48:11 <hagarth> krishnan_p: cool
12:48:29 <krishnan_p> This abstracts communication differently from our current approach of asynchronously sending a message over every connection
12:49:04 <krishnan_p> For instance, afr/dht could set a remote endpoint (topology) for all its bricks and issue the same message (when applicable)
12:49:23 <krishnan_p> nanomsg supports the publisher-subscriber pattern too
12:49:50 <krishnan_p> I would appreciate it if interested folks hang out in #gluster-dev and ping me with what you think :)
12:49:55 <hagarth> krishnan_p: seems very interesting
12:50:06 <krishnan_p> hagarth, I am a huge fan of it!
12:50:08 <ndevos> how does it communicate, does it use multicast or what?
12:50:16 <hagarth> thanks for sharing your discovery! :)
12:50:34 <krishnan_p> ndevos, the transport can be chosen. There are TCP, PGM, and so on
12:50:47 <krishnan_p> So, if we choose TCP, there would be one-to-one messages underneath.
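A minimal sketch of the SURVEYOR/RESPONDENT flow described above, using the public nanomsg C API (the addresses, payloads and 1-second deadline are placeholders; error handling is omitted):

    /* surveyor: bind on a well-known address, send a survey, collect answers */
    #include <nanomsg/nn.h>
    #include <nanomsg/survey.h>

    static void run_surveyor (void)
    {
            int sock = nn_socket (AF_SP, NN_SURVEYOR);
            int deadline = 1000;                            /* ms to wait for answers */

            nn_bind (sock, "tcp://0.0.0.0:5555");
            nn_setsockopt (sock, NN_SURVEYOR, NN_SURVEYOR_DEADLINE,
                           &deadline, sizeof (deadline));
            nn_send (sock, "are-you-there", 14, 0);         /* the survey itself */

            for (;;) {
                    char *answer = NULL;
                    /* fails (ETIMEDOUT) once the deadline has passed */
                    if (nn_recv (sock, &answer, NN_MSG, 0) < 0)
                            break;
                    /* process the answer, e.g. count it towards a quorum */
                    nn_freemsg (answer);
            }
            nn_close (sock);
    }

    /* respondent: connect when it comes online and answer each survey */
    static void run_respondent (void)
    {
            int sock = nn_socket (AF_SP, NN_RESPONDENT);
            char *survey = NULL;

            nn_connect (sock, "tcp://surveyor-host:5555");
            if (nn_recv (sock, &survey, NN_MSG, 0) >= 0) {
                    nn_freemsg (survey);
                    nn_send (sock, "i-am-here", 10, 0);
            }
            nn_close (sock);
    }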
12:50:51 <ndevos> krishnan_p: not only on #gluster-dev, send an email with the idea/suggestion to the list too
12:51:04 <krishnan_p> ndevos, I will.
12:51:32 <krishnan_p> ndevos, it is still nascent. I need some more collaborative thinking
12:51:33 <meghanam_> #action krishnan_p to send an email about nanomsg.org to gluster-dev
12:51:55 * krishnan_p what is he getting himself into ...
12:52:13 <ndevos> krishnan_p: sure, and a pointer to nanomsg.org on the list will help in reviewing the options
12:52:25 <krishnan_p> ndevos, OK
12:52:41 <meghanam_> Any other topics?
12:53:28 <meghanam_> #info     Weekly reminder to announce Gluster attendance of events: https://public.pad.fsfe.org/p/gluster-events
12:53:44 <hagarth> we are planning to decommission forge.gluster.org soon
12:53:48 <meghanam_> #info     REMINDER to put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
12:53:54 <hagarth> if you have any concerns, please drop us a note
12:54:06 <hagarth> all projects on forge will mostly move to github's gluster account
12:54:24 <meghanam_> #info forge.gluster.org will be decommissioned soon.
12:54:37 <ndevos> #info projects on forge.gluster.org will move to github.com/gluster/
12:55:15 <meghanam_> If there is nothing else, I'll end the meeting now
12:55:40 <krishnan_p> thank you meghanam_ and ndevos
12:55:41 <meghanam_> Thanks to all the participants! See you next week :)
12:55:50 <meghanam_> #endmeeting