12:00:31 <ndevos> #startmeeting
12:00:31 <zodbot> Meeting started Wed Jun 17 12:00:31 2015 UTC.  The chair is ndevos. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:31 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:37 <ndevos> #info Agenda: https://public.pad.fsfe.org/p/gluster-community-meetings
12:00:44 <ndevos> #topic Roll Call
12:00:45 * partner plop
12:00:48 * JustinClift waves
12:00:52 * jimjag is here
12:00:54 <ndevos> Hi all, who's here today?
12:01:08 <ndevos> #chair atinm hagarth
12:01:08 <zodbot> Current chairs: atinm hagarth ndevos
12:01:28 * atinm is here
12:01:50 * spot is here
12:02:03 * raghu is here
12:02:11 <ndevos> hagarth mentioned he might not be online, but travelling instead
12:03:04 <ndevos> well, lets get started, others that join late should just read the logs
12:03:08 * kshlm is here
12:03:11 <ndevos> #topic Action Items from last week
12:03:26 <ndevos> #topic JustinClift will have something to say about the Gluster Forge v2
12:03:36 * JustinClift nods
12:03:44 <JustinClift> Nothing user visible to show yet
12:03:50 <JustinClift> But I'm focused properly on this now
12:04:06 <ndevos> thats good, tell us the plan?
12:04:10 <JustinClift> I've reached out to 95% of the people with projects on the existing forge, to get them to make GitHub versions
12:04:15 <JustinClift> Many of them have
12:04:26 * msvbhat is here
12:04:43 <JustinClift> Others have let me know of the ones they don't need migrated (eg project uninteresting, or code already merged in gluster)
12:05:00 <JustinClift> I have the code to collect stats mostly written
12:05:09 * hagarth is partially here and will be out soon.
12:05:22 * krishnan_p is here
12:05:37 <JustinClift> And will soon be writing code to present stats, which will then need display/formatting (will ping tigert)
12:05:48 <ndevos> JustinClift: pointer to the repo with the code?
12:05:53 <JustinClift> So, I'm thinking "by next week" to have user visible stuff
12:06:02 <JustinClift> https://github.com/gluster/forge
12:06:11 <JustinClift> "master" is "stable" branch
12:06:23 <JustinClift> "development" is where I'm doing stuff that may or may not get merged. ;)
12:06:41 <ndevos> #info Code for the new Forge based on GitHub is available at https://github.com/gluster/forge
12:07:06 <ndevos> #action JustinClift to demo the new Forge next week
12:07:10 <JustinClift> Individual repo migration stats here if that's useful: https://github.com/gluster/forge/blob/master/old_forge_repos.txt
12:07:44 <JustinClift> Um, that's it for this action point for now :)
12:07:51 <ndevos> okay, cool, any questions?
12:08:18 <JustinClift> Not from me. ;)
12:08:46 <ndevos> also not from others, obviously
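As context for the stats-collection work JustinClift mentions above: the snippet below is only an illustrative sketch and is not the actual forge code (that lives in https://github.com/gluster/forge). It shows one way per-repository statistics could be pulled from the public GitHub API; the repository names in REPOS are made-up examples, not the real migration list.

#!/usr/bin/env python
"""Illustrative sketch only -- not the gluster/forge stats code."""
import requests

REPOS = ["gluster/glusterfs", "gluster/forge"]   # example repos, not the real list


def repo_stats(full_name):
    """Fetch a few basic counters for one repository from api.github.com."""
    resp = requests.get("https://api.github.com/repos/%s" % full_name)
    resp.raise_for_status()
    data = resp.json()
    return {
        "repo": full_name,
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
    }


if __name__ == "__main__":
    for name in REPOS:
        print(repo_stats(name))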
12:08:50 <ndevos> #topic hchiramm_ will send out a call (email and blog?) for volunteers for helping with the packaging
12:09:00 <ndevos> hchiramm_: are you here?
12:09:30 <msvbhat> he is on sick leave today
12:09:36 <msvbhat> ndevos: ^^
12:09:56 <ndevos> #topic tigert summarize options for integrating a calendar on the website, call for feedback
12:10:28 <ndevos> I wonder if tigert sent out an email about this, I might have missed it?
12:11:00 <tigert> oops I did it again
12:11:02 <atinm> ndevos, I've not seen any mail from him
12:11:18 <tigert> sorry was super swamped with doing designs :P
12:11:20 <ndevos> tigert: next week?
12:11:34 <tigert> hopefully
12:11:36 <ndevos> #action tigert summarize options for integrating a calendar on the website, call for feedback
12:11:40 <ndevos> #topic msvbhat to start a discussion about more featured automated testing (distaf + jenkins?)
12:11:55 <ndevos> msvbhat: did that take off?
12:12:03 <msvbhat> ndevos: RaSTar sent it a few hours ago
12:12:40 <msvbhat> It covers most of the things, but I will be adding a few points to that
12:12:46 <ndevos> #info RaSTar started the discussion earlier today
12:13:13 <ndevos> msvbhat: thanks, as long as there is a start, we can follow the plan on the list
12:13:34 * ndevos skips hagarth's AIs
12:13:38 <ndevos> #topic csim/misc will send an email about the idea to migrate to Gerrithub.io, and collect questions for relaying
12:13:52 <ndevos> so, that email was sent, but I did not see any replies
12:14:20 <ndevos> csim: did you get any replies? or can you proceed without them?
12:14:58 <ndevos> ... okay, seems to be busy elsewhere?
12:15:03 <ndevos> #topic csim/misc to pickup the requesting of public IPs and those kind of things for our new servers
12:15:21 <ndevos> still on his TODO list, I think
12:15:33 <ndevos> #topic GlusterFS 3.7
12:15:41 <ndevos> hagarth, atinm: status update?
12:16:05 <atinm> ndevos, I've a plan to release 3.7.2 by this weekend
12:16:19 * hagarth defers it to atinm. need to drop off now.
12:16:26 <atinm> ndevos, I would need to run through the 3.7.2 blocker and see if any patches are pending merge
12:16:27 <ndevos> cya hagarth!
12:16:46 <atinm> ndevos, netbsd is not helping us to move forward
12:16:46 <ndevos> #info atinm is planning to release 3.7.2 this week
12:17:26 <ndevos> atinm: yeah, and the exponential rebasing that is done does not help with that...
12:17:49 <ndevos> #info there is no need to rebase each change after something else got merged
12:17:53 <atinm> ndevos, however the community has reported the glusterd crash in the rebalance path multiple times, so we need to release asap
12:18:16 <ndevos> #info proposed blockers for 3.7.2: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.2
12:18:51 <ndevos> atinm: sure, and if 3.7.2 fixes important bugs, release soon and move the outstanding ones to a new 3.7.3 tracker
12:19:21 <ndevos> atinm: could you create a 3.7.3 tracker already? I guess we should not add more bugs to the 3.7.2 one
12:19:23 <atinm> ndevos, yes, that's the plan
12:19:32 <atinm> ndevos, I will do that
12:19:40 <atinm> ndevos, make an AI on me
12:19:54 <ndevos> #action atinm will create a 3.7.3 tracker and announce it on the list(s)
12:19:54 <atinm> ndevos, such that I don't forget :)
12:20:25 <ndevos> atinm: do you know of any bugs that need work before you can do a 3.7.2 release?
12:20:51 <atinm> ndevos, I need to get the rebalance glusterd crash fix in 3.7, its not yet merged
12:21:06 <atinm> ndevos, some tiering patches are also pending
12:21:32 <atinm> ndevos, as I said I need to go through the list which I haven't done yet
12:21:34 <ndevos> atinm: okay, but dont hesitate to move bugs from the 3.7.2 tracker to the upcoming 3.7.3
12:21:43 <atinm> ndevos, sure
12:22:10 <ndevos> #info only few urgent fixes are pending for 3.7.2, others will get moved out to 3.7.3
12:22:22 <ndevos> atinm: anything else for 3.7?
12:22:29 <ndevos> or, any questions from others?
12:22:45 <atinm> ndevos, no
12:23:00 <ndevos> #topic GlusterFS 3.6
12:23:06 <ndevos> raghu: your turn!
12:23:12 <raghu> I have made 3.6.4beta2 (tarball is ready. RPMs have to be done. Will announce once they are done). There are no more patches pending for the release-3.6 branch. If no issues are reported for 3.6.4beta2, then I will make 3.6.4 at the end of this week.
12:24:14 <itisravi> Thanks to raghu for merging my backports!
12:24:23 <itisravi> raghu++
12:24:28 <raghu> itisravi: :)
12:24:52 <ndevos> #info 3.6.4 will likely get released later this week
12:25:11 <ndevos> raghu: got more to share?
12:25:19 <raghu> ndevos: nope.
12:25:29 <ndevos> #topic GlusterFS 3.5
12:25:43 <ndevos> a few patches (or maybe just one?) have been merged
12:26:08 <ndevos> plan is still to release the next 3.5 later this month
12:26:25 <ndevos> #topic Gluster 4.0
12:26:52 <ndevos> jdarcy: some update on 4.0?
12:27:21 <krishnan_p> jdarcy, thanks for collating discussions around DHTv2 and brick multiplexing
12:27:29 <jdarcy> Some discussion of brick multiplexing and DHT2. Mostly Summit prep this week though.
12:27:47 <jdarcy> krishnan_p, my pleasure.  Seems like a way I can help the people who do the actual work.
12:28:10 <krishnan_p> jdarcy, I want to add sections around how inode and fd tables should be designed to support multi-tenancy, so to speak
12:28:18 <jdarcy> Cool.
12:28:51 <ndevos> anything else?
12:28:55 <krishnan_p> jdarcy, though I'd like to distinguish multiplexing and multi-tenancy. One is one volume, many tenants while the other is many volumes in a single process.
12:29:32 <jdarcy> Should probably start a whole separate doc for multi-tenancy.  I guess that's an AI.
12:29:37 <krishnan_p> jdarcy, I am thinking of coming up with a small list of things that we need to address in epoll layer (for lack of sophisticated terminology)
12:29:56 <krishnan_p> jdarcy, that would be awesome
12:30:07 <ndevos> jdarcy: you're happy to mail about multi-tenancy?
12:30:25 <jdarcy> Oh yeah, do we have anyone here who can talk about Manila?  That should probably be a regular part of this meeting.
12:30:35 <jdarcy> ndevos, sure.
12:30:53 <ndevos> doesnt look like Deepak or Csaba is here...
12:31:18 <ndevos> #action jdarcy to start a new doc about multi-tenancy and announce it on the list(s)
12:31:20 <kshlm> I'll be representing them.
12:31:30 <atinm> ndevos, kshlm can talk
12:31:43 <atinm> oops, he already volunteered
12:32:19 <ndevos> #action ndevos extends the meeting agenda with a regular Manila topic
12:32:48 <kshlm> Manila requires a fully supported access method with all the features they've been targeting for the Liberty release, coming in Oct (I think).
12:32:59 <ndevos> #action kshlm will provide regular updates about Manila (when Deepak and Csaba are missing)
12:33:11 <kshlm> We had two options, sub-directory based or volumes based.
12:33:46 <kshlm> We've currently decided to proceed with the volume based approach.
12:34:32 <ndevos> is that documented somewhere?
12:34:35 <kshlm> For this approach we require an intelligent volume creation method. lpabon announced heketi on the mailing lists, which will attempt to do this.
12:35:28 <kshlm> We as GlusterD maintainers would like to use this to kickstart GlusterD-2.0.
12:35:41 <atinm> ndevos, I believe kshlm will be sending a mail shortly on this
12:35:45 <kshlm> I'll be posting more details regarding this to the mailing list soon.
12:36:25 <kshlm> We currently don't have any documentation on the decision to proceed with volume based approach.
12:36:28 <ndevos> okay, thanks!
12:36:39 <kshlm> I'll update the mailing thread csaba started with this information.
12:37:02 <jdarcy> https://github.com/heketi/heketi
12:37:05 <kshlm> That's all.
12:37:24 <jdarcy> Looks like Luis has been having fun experimenting with a bunch of new (to us) technologies.
12:37:28 <ndevos> #action kshlm will send more details about Manila, GlusterD-2.0 and maybe Heketi to the list
12:37:56 <kshlm> jdarcy, yes. we hope to join in the fun soon.
12:38:06 <krishnan_p> kshlm, +1
12:38:44 <ndevos> any questions about this?
12:39:03 * krishnan_p waiting for kshlm's proposal.
12:39:04 <ndevos> not too complex, wait for kshlm's email ;-)
12:39:22 <kshlm> I have at least 2 mails to send then.
12:39:51 <ndevos> #topic Open Floor / Bring Your Own Topic
12:40:02 * krishnan_p _o/
12:40:02 <atinm> we would have to collaborate such that there are no duplicate efforts between glusterd 2.0 and heketi
12:40:13 <ndevos> #info Weekly reminder to announce Gluster attendance of events: https://public.pad.fsfe.org/p/gluster-events
12:40:21 <ndevos> krishnan_p: yes?
12:40:28 <jdarcy> Some good discussion/collaboration going on w.r.t. command-line GFAPI clients.
12:40:39 <krishnan_p> ndevos, I want to pitch working on epoll a bit.
12:40:42 <atinm> I am talking about using URCU in glusterd at the coming FUDCon
12:40:42 <kshlm> atinm, lpabon is enthusiastic about collaborating.
12:41:01 <atinm> I will publish that in the pad
12:41:09 <ndevos> atinm: do a #info?
12:41:24 <krishnan_p> I am not sure how many here find GlusterFS' epoll based notification layer fun.
12:41:35 <ndevos> #topic work on epoll
12:41:39 <krishnan_p> I have found a few interesting problems that we could solve.
12:41:49 <atinm> #info atinm is talking about using URCU in glusterd at the upcoming FUDCon 2015
12:41:51 <krishnan_p> I am still in the process of coming up with a refined set.
12:42:02 <krishnan_p> Keep looking here for more details - https://public.pad.fsfe.org/p/glusterfs-epoll
12:42:24 <krishnan_p> This is in kp speak at the moment. Hope to make it generally readable soon.
12:42:36 <ndevos> krishnan_p: you're looking for volunteers to help you with what exactly?
12:42:41 <krishnan_p> I would be more than happy to work with anyone who shows the slightest interest
12:43:08 <krishnan_p> ndevos, I am keeping that open. I would prefer to work with someone on this, rather than alone
12:43:23 <kshlm> krishnan_p, howbout checking out libevent?
12:43:28 <krishnan_p> ndevos, no volunteers would mean you will see patches from me.
12:43:41 <krishnan_p> kshlm, I need volunteers for that one for sure.
12:43:55 <ndevos> #info anyone that is interested in doing some epoll development/checking/whatever should talk to krishnan_p
12:43:57 <krishnan_p> kshlm, I am not sure about their SSL and RDMA support.
12:44:02 <kshlm> It's supposed to be a cross-platform eventing library, which automatically uses the best event mechanism for each platform.
12:44:47 <krishnan_p> kshlm, at the moment I have resigned myself to fixing issues in our epoll implementation. I am willing to work with someone who is interested in exploring libuv/libevent etc
12:44:55 <kshlm> I don't expect any problems with either. Anyway, your call!
12:45:30 <ndevos> oh, using an existing library would be my preference too :)
12:45:40 <krishnan_p> ndevos, care to join? :)
12:46:04 <ndevos> krishnan_p: I should be able to find a few spare minutes every now and then... maybe
12:46:17 * ndevos will definitely read that etherpad
12:46:18 <krishnan_p> ndevos, I know :)
12:46:29 <krishnan_p> ndevos, I can translate that for you whenever you ask me :)
12:46:44 <krishnan_p> last call, epoll anyone? :)
12:46:57 <ndevos> #action krishnan_p will send out an email to the devel list about improving our epoll solution
12:47:19 <krishnan_p> ndevos, thanks!
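A side note on the portable-event-library idea kshlm raises above (libevent/libuv): the sketch below is an analogy only, written in Python rather than Gluster's C, and is not Gluster, libevent, or libuv code. It simply illustrates what such an abstraction buys you: you register file descriptors for readiness and the library picks the best backend by itself (epoll on Linux, kqueue on *BSD), which is what Python's standard selectors module does.

#!/usr/bin/env python3
"""Analogy for a portable eventing layer: selectors picks epoll/kqueue/poll per platform."""
import selectors
import socket

sel = selectors.DefaultSelector()   # resolves to EpollSelector on Linux, KqueueSelector on BSD


def accept(server):
    """Accept a new client and register it for read-readiness."""
    conn, _addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, data=echo)


def echo(conn):
    """Echo back whatever the client sent; drop the connection on EOF."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()


server = socket.socket()
server.bind(("127.0.0.1", 9999))
server.listen(16)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, data=accept)

print("backend in use:", sel.__class__.__name__)    # e.g. EpollSelector
while True:
    for key, _events in sel.select(timeout=1.0):
        key.data(key.fileobj)                       # dispatch to the stored handler

A libevent-based C equivalent would be structurally similar: create an event base, register events with callbacks, and hand control to the dispatch loop, with the backend chosen per platform.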
12:47:27 <ndevos> #topic More Open Floor
12:47:31 <atinm> yes
12:47:32 <ndevos> #info maintainers@gluster.org is live, component maintainers have been subscribed
12:47:34 <atinm> NETBSD
12:48:01 <ndevos> so, if someone needs to contact the maintainers of a certain component, send an email to that list
12:48:27 <ndevos> #topic Changes to review workflow wrt regression http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/11486
12:48:33 <kshlm> Wasn't this supposed to be used for reaching out to package maintainers? Or was it something else?
12:48:44 <ndevos> atinm: thats on the list too, the next topic
12:48:51 <atinm> ndevos, thank you
12:49:03 <ndevos> kshlm: we have packaging@gluster.org for package maintainers from different distributions
12:49:13 <kshlm> Anyways, I added the review workflow topic.
12:49:36 <kshlm> I sent out a mail describing what I thought would be a good workflow.
12:49:41 <ndevos> kshlm: maintainers@gluster.org are the maintainers for Gluster components, the people that should review and merge changes
12:49:43 <kshlm> The community seems to agree.
12:50:15 <ndevos> kshlm: short description of the email, in case people did not read it yet?
12:50:20 <kshlm> So now, the question is when do we start implementing the new workflow.
12:51:20 <kshlm> A tl;dr: Run regressions only once a patch gets a +2 review from a maintainer. Possibly use a gating tool like Zuul to automate this, as Gerrit/Jenkins don't seem to be able to do this on their own.
12:52:22 <kshlm> Back to my question, so when does the community think we should do this?
12:52:31 <ndevos> kshlm: I think any of the changes to Gerrit/Jenkins will need to wait until the new servers are available for us?
12:52:57 <kshlm> That's what I was thinking as well.
12:53:36 <kshlm> Maybe we can start a POC for Gerrit+Jenkins+Zuul.
12:53:44 <ndevos> I've also not responded to the email yet... but I would like to have a simple solution, adding Zuul or something else complicates things and may make things even less stable?
12:54:28 <atinm> ndevos, kshlm : but what if it takes time?
12:54:59 <kshlm> ndevos, I'd like that as well. If we can find an easier way, without involving anything external, we should be able to do it with the current servers themselves.
12:55:27 <ndevos> atinm: teaching people to run regression tests before posting patches will take time
12:55:47 <ndevos> when I review a patch and +2 it, I would be EXTREMELY annoyed if regression tests fail
12:56:32 <ndevos> I am not sure if it would speed up the merging process... the maintainers will become the bottleneck earlier in the process
12:58:02 <jdarcy> Might I suggest that this discussion should continue on the ML?  Larger audience etc.
12:58:04 <ndevos> I guess we should continue the discussion on the list
12:58:14 <kshlm> ndevos, sure.
12:58:17 <jdarcy> :)
12:58:18 <ndevos> #topic NetBSD regression improvements
12:58:34 <ndevos> atinm: thats yours?
12:58:50 <atinm> ndevos, yes
12:58:59 <ndevos> atinm: you may :)
12:59:39 <atinm> ndevos, so what's the progress on this? I've seen a few mail exchanges, but when will the actual result be in place?
13:00:22 <ndevos> atinm: best would be to have the new servers available and move Gerrit+Jenkins there - thats an AI for csim
13:00:36 <atinm> this is really blocking us from merging patches; at this rate even major bugs might not be fixed in 3.7.2, or we may have to delay it
13:01:05 <ndevos> atinm: to work around the broken DNS at the hosting provider, there is a local DNS server, and that seems to help a little bit
13:01:22 <atinm> ndevos, we can definitely look for a permanent solution, which might take some time, but does the community feel that we can take this as an exception and merge patches without NetBSD's vote?
13:01:44 <kshlm> I think we should do the rackspace api scraping and populating /etc/hosts idea.
13:02:02 <kshlm> This should at least let us know if the problem was DNS all along.
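For reference, a minimal sketch of the /etc/hosts idea kshlm describes above. The API endpoint, token, and the accessIPv4 field are placeholders and assumptions about a Nova-style cloud API, not details from the meeting; the managed-block handling of /etc/hosts is the substance.

#!/usr/bin/env python
"""Sketch: pull server names/IPs from the cloud API and pin them in /etc/hosts."""
import requests

API_ENDPOINT = "https://example.invalid/v2/TENANT_ID"   # placeholder, not the real endpoint
API_TOKEN = "changeme"                                   # placeholder auth token

MARK_BEGIN = "# BEGIN gluster-ci hosts\n"
MARK_END = "# END gluster-ci hosts\n"


def fetch_servers():
    """Return (name, ip) pairs from a Nova-style 'servers/detail' listing (assumed API)."""
    resp = requests.get(API_ENDPOINT + "/servers/detail",
                        headers={"X-Auth-Token": API_TOKEN})
    resp.raise_for_status()
    return [(srv["name"], srv["accessIPv4"])
            for srv in resp.json()["servers"] if srv.get("accessIPv4")]


def update_hosts(pairs, path="/etc/hosts"):
    """Replace the managed block in /etc/hosts with fresh name/IP entries."""
    with open(path) as f:
        lines = f.readlines()
    if MARK_BEGIN in lines and MARK_END in lines:
        begin, end = lines.index(MARK_BEGIN), lines.index(MARK_END)
        lines = lines[:begin] + lines[end + 1:]          # drop the old managed block
    block = [MARK_BEGIN] + ["%s\t%s\n" % (ip, name) for name, ip in pairs] + [MARK_END]
    with open(path, "w") as f:
        f.writelines(lines + block)


if __name__ == "__main__":
    update_hosts(fetch_servers())

Run periodically on each test machine, something like this would pin the other machines' names locally, so regression runs no longer depend on the hoster's DNS.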
13:02:07 <atinm> later on if we see any bugs we will fix them, that's what we have been doing in the past
13:02:50 <ndevos> atinm: if the regressions are spurious failures, we can probably do that, but we still need to have some testing run on NetBSD
13:03:22 <ndevos> there have been several compile/build issues that were only caught on NetBSD, we should try hard to prevent merging patches that break NetBSD again
13:03:40 <atinm> NetBSD is an upstream platform for gluster and due to its infrastructural problems it should not impact the other side IMO
13:04:11 <ndevos> I think many tests are also passing on NetBSD?
13:04:28 <atinm> ndevos, but not at the rate we want :)
13:04:35 <jdarcy> "Other side"?
13:04:35 <hagarth> we have only a subset of the tests running on NetBSD and the primary problem today seems to be with network issues
13:04:38 <ndevos> my feeling is that there are close to equal failures on the Linux regression tests
13:04:49 <hagarth> jdarcy: the dark side? ;)
13:05:12 <atinm> jdarcy, I meant linux
13:05:38 <ndevos> atinm: there are way too many rebases done for patches that do not need rebasing at all... that seems to overload our infra a little from time to time
13:05:42 <hagarth> the need of the hour to me is to fix the networking issues so that NetBSD tests run and report back seamlessly
13:06:42 <ndevos> the networking issue could be fixed with the /etc/hosts pulling from the Rackspace API?
13:07:15 <ndevos> thats the approach I'll look into later today, it cant be difficult
13:07:17 <hagarth> ndevos: I feel so
13:07:32 <hagarth> half the time I cannot ping a netbsd VM from b.g.o
13:07:36 <atinm> so it's in the right hands :D
13:07:46 <hagarth> whereas I can ping it locally from my laptop
13:08:08 <hagarth> I would also like a way for us to trigger NetBSD regressions as we do it for linux
13:08:15 <ndevos> hagarth: dns issue, or something else networky? can you ping the IP?
13:08:30 <hagarth> ndevos: haven't tried that, will try and update
13:08:32 <kshlm> hagarth, you mean manual triggering?
13:08:37 <kshlm> You can already do it.
13:08:42 * ndevos thinks we can manually trigger?
13:08:52 <hagarth> kshlm: yes, remember reading the email now
13:09:21 <ndevos> good, so that should get us some more stability there?
13:09:33 <ndevos> anything else to discuss while we're gathered?
13:09:35 <hagarth> If everything works fine, the NetBSD tests complete in about half the time as Linux regression tests.
13:09:53 <hagarth> ndevos: do we need a NetBSD infra watchdog team to help with transient issues? :)
13:10:33 <ndevos> hagarth: that might be a good idea, I tend to look at the systems on occasion, but definitely not regularly
13:11:29 <hagarth> ndevos: let us announce it on gluster-devel and the team can help with issues till we gain stability?
13:11:39 <hagarth> any volunteers here to be part of the team?
13:11:41 <ndevos> hagarth: but, I guess we will see soon enough if the /etc/hosts workaround makes a difference
13:12:10 <hagarth> ndevos: right, wait and see how it goes?
13:12:56 <ndevos> hagarth: yeah, we've been in this situation for a while already; if it does not improve, we can go that route
13:13:34 <ndevos> we're over our meeting time quite a bit now, any very quick last additions?
13:13:42 <hagarth> ndevos: sounds good, let us aim to have NetBSD seamlessly operational by this meeting next week.
13:14:00 <hagarth> quite a lot of gluster talks in the offing over the next 3 weeks
13:14:10 <ndevos> #action * will try to make NetBSD work stably by next week's meeting
13:14:45 <ndevos> hagarth: I hope those talks are added to the event etherpad!
13:14:51 <ndevos> #endmeeting