12:00:31 #startmeeting
12:00:31 Meeting started Wed Jun 17 12:00:31 2015 UTC. The chair is ndevos. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:31 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:37 #info Agenda: https://public.pad.fsfe.org/p/gluster-community-meetings
12:00:44 #topic Roll Call
12:00:45 * partner plop
12:00:48 * JustinClift waves
12:00:52 * jimjag is here
12:00:54 Hi all, who's here today?
12:01:08 #chair atinm hagarth
12:01:08 Current chairs: atinm hagarth ndevos
12:01:28 * atinm is here
12:01:50 * spot is here
12:02:03 * raghu is here
12:02:11 hagarth mentioned he might not be online, but travelling instead
12:03:04 well, let's get started, others that join late should just read the logs
12:03:08 * kshlm is here
12:03:11 #topic Action Items from last week
12:03:26 #topic JustinClift will have something to say about the Gluster Forge v2
12:03:36 * JustinClift nods
12:03:44 Nothing user visible to show yet
12:03:50 But I'm focused properly on this now
12:04:06 that's good, tell us the plan?
12:04:10 I've reached out to 95% of the people with projects on the existing forge, to get them to make GitHub versions
12:04:15 Many of them have
12:04:26 * msvbhat is here
12:04:43 Others have let me know of the ones they don't need migrated (e.g. project uninteresting, or code already merged in gluster)
12:05:00 I have the code to collect stats mostly written
12:05:09 * hagarth is partially here and will be out soon.
12:05:22 * krishnan_p is here
12:05:37 And will soon be writing code to present stats, that will then need display/formatting (will ping tigert)
12:05:48 JustinClift: pointer to the repo with the code?
12:05:53 So, I'm thinking "by next week" to have user visible stuff
12:06:02 https://github.com/gluster/forge
12:06:11 "master" is the "stable" branch
12:06:23 "development" is where I'm doing stuff that may or may not get merged. ;)
12:06:41 #info Code for the new Forge based on GitHub is available at https://github.com/gluster/forge
12:07:06 #action JustinClift to demo the new Forge next week
12:07:10 Individual repo migration stats here if that's useful: https://github.com/gluster/forge/blob/master/old_forge_repos.txt
12:07:44 Um, that's it for this action point for now :)
12:07:51 okay, cool, any questions?
12:08:18 Not from me. ;)
12:08:46 also not from others, obviously
12:08:50 #topic hchiramm_ will send out a call (email and blog?) for volunteers for helping with the packaging
12:09:00 hchiramm_: are you here?
12:09:30 he is on sick leave today
12:09:36 ndevos: ^^
12:09:56 #topic tigert summarize options for integrating a calendar on the website, call for feedback
12:10:28 I wonder if tigert sent out an email about this, I might have missed it?
12:11:00 oops I did it again
12:11:02 ndevos, I've not seen any mail from him
12:11:18 sorry, was super swamped with doing designs :P
12:11:20 tigert: next week?
12:11:34 hopefully
12:11:36 #action tigert summarize options for integrating a calendar on the website, call for feedback
12:11:40 #topic msvbhat to start a discussion about more featured automated testing (distaf + jenkins?)
12:11:55 msvbhat: did that take off?
12:12:03 ndevos: RaSTar has sent it a few hours ago
12:12:40 It covers most of the things, but I will be adding a few points to that
12:12:46 #info RaSTar started the discussion earlier today
12:13:13 msvbhat: thanks, as long as there is a start, we can follow the plan on the list
12:13:34 * ndevos skips hagarth's AIs
12:13:38 #topic csim/misc will send an email about the idea to migrate to Gerrithub.io, and collect questions for relaying
12:13:52 so, that email was sent, but I did not see any replies
12:14:20 csim: did you get any replies? or can you proceed without them?
12:14:58 ... okay, seems to be busy elsewhere?
12:15:03 #topic csim/misc to pick up the requesting of public IPs and those kinds of things for our new servers
12:15:21 still on his TODO list, I think
12:15:33 #topic GlusterFS 3.7
12:15:41 hagarth, atinm: status update?
12:16:05 ndevos, I've a plan to release 3.7.2 by this weekend
12:16:19 * hagarth defers it to atinm. need to drop off now.
12:16:26 ndevos, I would need to run through the 3.7.2 blocker and see if any patches are pending merge
12:16:27 cya hagarth!
12:16:46 ndevos, netbsd is not helping us to move forward
12:16:46 #info atinm is planning to release 3.7.2 this week
12:17:26 atinm: yeah, and the exponential rebasing that is done does not help with that...
12:17:49 #info there is no need to rebase each change after something else got merged
12:17:53 ndevos, however we have seen the community reporting the glusterd crash on the rebalance path multiple times, we need to release asap
12:18:16 #info proposed blockers for 3.7.2: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.2
12:18:51 atinm: sure, and if 3.7.2 fixes important bugs, release soon and move the outstanding ones to a new 3.7.3 tracker
12:19:21 atinm: could you create a 3.7.3 tracker already? I guess we should not add more bugs to the 3.7.2 one
12:19:23 ndevos, yes, that's the plan
12:19:32 ndevos, I will do that
12:19:40 ndevos, make an AI on me
12:19:54 #action atinm will create a 3.7.3 tracker and announce it on the list(s)
12:19:54 ndevos, such that I don't forget :)
12:20:25 atinm: do you know of any bugs that need work before you can do a 3.7.2 release?
12:20:51 ndevos, I need to get the rebalance glusterd crash fix into 3.7, it's not yet merged
12:21:06 ndevos, some tiering patches are also pending
12:21:32 ndevos, as I said, I need to go through the list, which I haven't done yet
12:21:34 atinm: okay, but don't hesitate to move bugs from the 3.7.2 tracker to the upcoming 3.7.3
12:21:43 ndevos, sure
12:22:10 #info only a few urgent fixes are pending for 3.7.2, others will get moved out to 3.7.3
12:22:22 atinm: anything else for 3.7?
12:22:29 or, any questions from others?
12:22:45 ndevos, no
12:23:00 #topic GlusterFS 3.6
12:23:06 raghu: your turn!
12:23:12 I have made 3.6.4beta2 (tarball is ready. RPMs have to be done. Will announce once they are done). There are no more patches pending for the release-3.6 branch. If no issues are reported for 3.6.4beta2, then I will make 3.6.4 at the end of this week.
12:24:14 Thanks to raghu for merging my backports!
12:24:23 raghu++
12:24:28 itisravi: :)
12:24:52 #info 3.6.4 will likely get released later this week
12:25:11 raghu: got more to share?
12:25:19 ndevos: nope.
12:25:29 #topic GlusterFS 3.5
12:25:43 a few (or one?) patch has been merged
12:26:08 plan is still to release the next 3.5 later this month
12:26:25 #topic Gluster 4.0
12:26:52 jdarcy: some update on 4.0?
12:27:21 jdarcy, thanks for collating discussions around DHTv2 and brick multiplexing
12:27:29 Some discussion of brick multiplexing and DHT2. Mostly Summit prep this week though.
12:27:47 krishnan_p, my pleasure. Seems like a way I can help the people who do the actual work.
12:28:10 jdarcy, I want to add sections around how inode/fd tables should be designed to support multi-tenancy, so to speak
12:28:18 Cool.
12:28:51 anything else?
12:28:55 jdarcy, though I'd like to distinguish multiplexing and multi-tenancy. One is one volume, many tenants, while the other is many volumes in a single process.
12:29:32 Should probably start a whole separate doc for multi-tenancy. I guess that's an AI.
12:29:37 jdarcy, I am thinking of coming up with a small list of things that we need to address in the epoll layer (for lack of sophisticated terminology)
12:29:56 jdarcy, that would be awesome
12:30:07 jdarcy: you're happy to mail about multi-tenancy?
12:30:25 Oh yeah, do we have anyone here who can talk about Manila? That should probably be a regular part of this meeting.
12:30:35 ndevos, sure.
12:30:53 doesn't look like Deepak or Csaba is here...
12:31:18 #action jdarcy to start a new doc about multi-tenancy and announce it on the list(s)
12:31:20 I'll be representing them.
12:31:30 ndevos, kshlm can talk
12:31:43 oops, he already volunteered
12:32:19 #action ndevos extends the meeting agenda with a regular Manila topic
12:32:48 Manila requires a fully supported access method with all the features they've been targeting for the Liberty release coming in Oct (I think).
12:32:59 #action kshlm will provide regular updates about Manila (when Deepak and Csaba are missing)
12:33:11 We had two options, sub-directory based or volume based.
12:33:46 We've currently decided to proceed with the volume based approach.
12:34:32 is that documented somewhere?
12:34:35 For this approach we require an intelligent volume creation method. lpabon announced heketi on the mailing lists, which will attempt to do this.
12:35:28 We as GlusterD maintainers would like to use this to kickstart GlusterD-2.0.
12:35:41 ndevos, I believe kshlm will be sending a mail shortly on this
12:35:45 I'll be posting more details regarding this to the mailing list soon.
12:36:25 We currently don't have any documentation on the decision to proceed with the volume based approach.
12:36:28 okay, thanks!
12:36:39 I'll update the mailing thread csaba started with this information.
12:37:02 https://github.com/heketi/heketi
12:37:05 That's all.
12:37:24 Looks like Luis has been having fun experimenting with a bunch of new (to us) technologies.
12:37:28 #action kshlm will send more details about Manila, GlusterD-2.0 and maybe Heketi to the list
12:37:56 jdarcy, yes. we hope to join in the fun soon.
12:38:06 kshlm, +1
12:38:44 any questions about this?
12:39:03 * krishnan_p waiting for kshlm's proposal.
12:39:04 not too complex, wait for kshlm's email ;-)
12:39:22 I have at least 2 mails to send then.
12:39:51 #topic Open Floor / Bring Your Own Topic
12:40:02 * krishnan_p _o/
12:40:02 we would have to collaborate such that there are no duplicate efforts between glusterd 2.0 and heketi
12:40:13 #info Weekly reminder to announce Gluster attendance of events: https://public.pad.fsfe.org/p/gluster-events
12:40:21 krishnan_p: yes?
12:40:28 Some good discussion/collaboration going on w.r.t. command-line GFAPI clients.
12:40:39 ndevos, I want to pitch working on epoll a bit.
12:40:42 I am talking about using URCU in glusterd at the upcoming FUDCon
12:40:42 atinm, lpabon is enthusiastic about collaborating.
12:41:01 I will publish that in the pad
12:41:09 atinm: do a #info?
12:41:24 I am not sure how many here find GlusterFS' epoll based notification layer fun.
12:41:35 #topic work on epoll
12:41:39 I have found a few interesting problems that we could solve.
12:41:49 #info atinm is talking about using URCU in glusterd at the upcoming FUDCon 2015
12:41:51 I am still in the process of coming up with a refined set.
12:42:02 Keep looking here for more details - https://public.pad.fsfe.org/p/glusterfs-epoll
12:42:24 This is in kp-speak at the moment. Hope to make it generally readable soon.
12:42:36 krishnan_p: you're looking for volunteers to help you with what exactly?
12:42:41 I would be more than happy to work with anyone who shows the slightest interest
12:43:08 ndevos, I am keeping that open. I would prefer to work with someone on this, rather than alone
12:43:23 krishnan_p, how about checking out libevent?
12:43:28 ndevos, no volunteers would mean you will see patches from me.
12:43:41 kshlm, I need volunteers for that one for sure.
12:43:55 #info anyone that is interested in doing some epoll development/checking/whatever should talk to krishnan_p
12:43:57 kshlm, I am not sure about their SSL and RDMA support.
12:44:02 It's supposed to be a cross-platform eventing library, which is supposed to use the best event mechanism for each platform automatically.
12:44:47 kshlm, at the moment I have resigned myself to fixing issues in our epoll implementation. I am willing to work with someone who is interested in exploring libuv/libevent etc.
12:44:55 I don't expect any problems with either. Anyway, your call!
12:45:30 oh, using an existing library would be my preference too :)
12:45:40 ndevos, care to join? :)
12:46:04 krishnan_p: I should be able to find a few spare minutes every now and then... maybe
12:46:17 * ndevos will definitely read that etherpad
12:46:18 ndevos, I know :)
12:46:29 ndevos, I can translate that for you whenever you ask me :)
12:46:44 last call, epoll anyone? :)
12:46:57 #action krishnan_p will send out an email to the devel list about improving our epoll solution
12:47:19 ndevos, thanks!
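Context for the epoll topic above: Gluster's transport layer gets its I/O readiness notifications from epoll; the real implementation is C code inside libglusterfs, and the etherpad collects the problems krishnan_p wants to fix there. The snippet below is only a minimal conceptual sketch, in Python, of what an epoll-driven notification loop looks like (a tiny echo server that registers sockets and dispatches readiness events); it is not Gluster's code.

    # Minimal conceptual sketch of an epoll-driven notification loop.
    # Python's select.epoll wraps the same Linux epoll API that is being discussed.
    import select
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 9999))   # arbitrary port, just for the example
    listener.listen(16)
    listener.setblocking(False)

    ep = select.epoll()
    ep.register(listener.fileno(), select.EPOLLIN)
    conns = {}

    try:
        while True:
            # Wait for readiness events; this is the "notification" part.
            for fd, events in ep.poll(timeout=1):
                if fd == listener.fileno():
                    conn, _ = listener.accept()
                    conn.setblocking(False)
                    ep.register(conn.fileno(), select.EPOLLIN)
                    conns[conn.fileno()] = conn
                elif events & select.EPOLLIN:
                    data = conns[fd].recv(4096)
                    if data:
                        conns[fd].send(data)   # echo the data back
                    else:                      # peer closed the connection
                        ep.unregister(fd)
                        conns.pop(fd).close()
    finally:
        ep.close()
        listener.close()

A library such as libevent or libuv would hide this loop behind callbacks and pick the best mechanism (epoll, kqueue, ...) per platform, which is the trade-off being weighed in the discussion.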
12:47:27 #topic More Open Floor
12:47:31 yes
12:47:32 #info maintainers@gluster.org is live, component maintainers have been subscribed
12:47:34 NETBSD
12:48:01 so, if someone needs to contact the maintainers of a certain component, send an email to that list
12:48:27 #topic Changes to review workflow wrt regression http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/11486
12:48:33 Wasn't this supposed to be used for reaching out to package maintainers? Or was it something else?
12:48:44 atinm: that's on the list too, the next topic
12:48:51 ndevos, thank you
12:49:03 kshlm: we have packaging@gluster.org for package maintainers from different distributions
12:49:13 Anyways, I added the review workflow topic.
12:49:36 I sent out a mail describing what I thought would be a good workflow.
12:49:41 kshlm: maintainers@gluster.org are the maintainers for Gluster components, the people that should review and merge changes
12:49:43 The community seems to agree.
12:50:15 kshlm: short description of the email, in case people did not read it yet?
12:50:20 So now, the question is when do we start implementing the new workflow.
12:51:20 A tl;dr: Run regressions only once a patch gets a +2 review from a maintainer. Possibly use a gating tool like Zuul, to automate this, as Gerrit/Jenkins don't seem to be able to do this on their own.
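Context for the proposal: the gating idea is that a Gerrit comment-added event carrying a maintainer's Code-Review +2 would start the regression job, instead of regressions running on every push. The sketch below shows one way such a trigger could be glued together; the Gerrit host, Jenkins job URL and token are placeholders rather than the actual Gluster setup, and a proper gating tool like Zuul would replace hand-rolled glue of this kind.

    # Rough sketch: start regressions only after a maintainer +2.
    # Host, job URL and token below are placeholders, not the real infrastructure.
    import json
    import subprocess
    import requests

    GERRIT_SSH = ["ssh", "-p", "29418", "review.example.org", "gerrit", "stream-events"]
    JENKINS_JOB = "https://jenkins.example.org/job/regression-triggered/buildWithParameters"
    JENKINS_TOKEN = "changeme"   # the job's remote-trigger token

    proc = subprocess.Popen(GERRIT_SSH, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        event = json.loads(line)
        if event.get("type") != "comment-added":
            continue
        # Only react to a Code-Review +2 in this comment's approvals.
        approvals = event.get("approvals", [])
        if not any(a.get("type") == "Code-Review" and a.get("value") == "2"
                   for a in approvals):
            continue
        change = event["change"]["number"]
        revision = event["patchSet"]["revision"]
        # Kick off the regression job for exactly this revision.
        requests.post(JENKINS_JOB,
                      params={"token": JENKINS_TOKEN,
                              "GERRIT_CHANGE": change,
                              "GERRIT_REVISION": revision})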
12:52:22 Back to my question, so when does the community think we should do this?
12:52:31 kshlm: I think any of the changes to Gerrit/Jenkins will need to wait until the new servers are available for us?
12:52:57 That was what I was thinking as well.
12:53:36 Maybe we can start a POC for Gerrit+Jenkins+Zuul.
12:53:44 I've also not responded to the email yet... but I would like to have a simple solution, adding Zuul or something else complicates things and may make things even less stable?
12:54:28 ndevos, kshlm: but what if it takes time?
12:54:59 ndevos, I'd like that as well. If we can find an easier way, without involving anything external, we should be able to do it with the current servers itself.
12:55:27 atinm: teaching people to run regression tests before posting patches will take time
12:55:47 when I review a patch and +2 it, I would be EXTREMELY annoyed if regression tests fail
12:56:32 I am not sure if it would speed up the merging process... the maintainers will become the bottleneck earlier in the process
12:58:02 Might I suggest that this discussion should continue on the ML? Larger audience etc.
12:58:04 I guess we should continue the discussion on the list
12:58:14 ndevos, sure.
12:58:17 :)
12:58:18 #topic NetBSD regression improvements
12:58:34 atinm: that's yours?
12:58:50 ndevos, yes
12:58:59 atinm: you may :)
12:59:39 ndevos, so what's the progress on this? I've seen a few mail exchanges, but when will the actual result be in place?
13:00:22 atinm: best would be to have the new servers available and move Gerrit+Jenkins there - that's an AI for csim
13:00:36 this is really blocking us in merging patches, at this rate even major bugs might not be fixed in 3.7.2 or we may have to delay it
13:01:05 atinm: to work around the broken DNS at the hoster, there is a local DNS server, and that seems to help a little bit
13:01:22 ndevos, we can definitely look for a permanent solution which might take some time, but does the community feel that we can take this as an exception and merge patches without NetBSD's vote
13:01:44 I think we should do the Rackspace API scraping and populating /etc/hosts idea.
13:02:02 This should at least let us know if the problem was DNS all along.
13:02:07 later on if we see any bugs we will fix them, that's what we have been doing in the past
13:02:50 atinm: if the regressions are spurious failures, we can probably do that, but we still need to have some testing run on NetBSD
13:03:22 there have been several compile/build issues that were only caught on NetBSD, we should try hard to prevent merging patches that break NetBSD again
13:03:40 NetBSD is an upstream platform for gluster and due to its infrastructural problems it should not impact the other side IMO
13:04:11 I think many tests are also passing on NetBSD?
13:04:28 ndevos, but not at the rate we want :)
13:04:35 "Other side"?
13:04:35 we have only a subset of the tests running on NetBSD and the primary problem today seems to be with network issues
13:04:38 my feeling is that there are close to equal failures on the Linux regression tests
13:04:49 jdarcy: the dark side? ;)
13:05:12 jdarcy, I meant linux
13:05:38 atinm: there are way too many rebases done for patches that do not need rebasing at all... that seems to overflow our infra a little from time to time
13:05:42 the need of the hour to me is to fix the networking issues so that NetBSD tests run and report back seamlessly
13:06:42 the networking issue could be fixed with the /etc/hosts pulling from the Rackspace API?
13:07:15 that's the approach I'll look into later today, it can't be difficult
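The workaround being agreed on here, roughly: ask the cloud provider's API for every build machine's name and public address and rewrite /etc/hosts from the result, so the hoster's flaky DNS is taken out of the loop. Below is a sketch of that idea, assuming the pyrax SDK and placeholder credentials; it is not the script that was eventually written, and the exact SDK calls and attribute names are assumptions.

    # Sketch of the /etc/hosts-from-the-Rackspace-API workaround discussed above.
    # Credentials are placeholders; pyrax is assumed as the SDK (attribute names may differ).
    import pyrax

    MARKER = "# --- managed entries below, do not edit by hand ---"

    pyrax.set_setting("identity_type", "rackspace")
    pyrax.set_credentials("USERNAME", "API_KEY")

    # Collect name -> public IPv4 for every server in the account.
    entries = {s.name: s.accessIPv4
               for s in pyrax.cloudservers.servers.list() if s.accessIPv4}

    # Keep the static part of /etc/hosts, replace everything after the marker.
    with open("/etc/hosts") as f:
        static = f.read().split(MARKER)[0].rstrip("\n")

    with open("/etc/hosts", "w") as f:
        f.write(static + "\n" + MARKER + "\n")
        for name, ip in sorted(entries.items()):
            f.write("%s\t%s\n" % (ip, name))

Run from cron on the machines that need to resolve each other (slaves and build.gluster.org), this would keep name resolution working even when the hoster's DNS does not answer.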
13:07:17 ndevos: I feel so
13:07:32 half the time I cannot ping a netbsd VM from b.g.o
13:07:36 so it's in the right hands :D
13:07:46 whereas I can ping it locally from my laptop
13:08:08 I would also like a way for us to trigger NetBSD regressions as we do it for linux
13:08:15 hagarth: dns issue, or something else networky? can you ping the IP?
13:08:30 ndevos: haven't tried that, will try and update
13:08:32 hagarth, you mean manual triggering?
13:08:37 You can already do it.
13:08:42 * ndevos thinks we can manually trigger?
13:08:52 kshlm: yes, remember reading the email now
13:09:21 good, so that should get us some more stability there?
13:09:33 anything else to discuss while we're gathered?
13:09:35 If everything works fine, the NetBSD tests complete in about half the time of the Linux regression tests.
13:09:53 ndevos: do we need a NetBSD infra watchdog team to help with temporal issues? :)
13:10:33 hagarth: that might be a good idea, I tend to look at the systems on occasion, but definitely not regularly
13:11:29 ndevos: let us announce it on gluster-devel and the team can help with issues till we gain stability?
13:11:39 any volunteers here to be part of the team?
13:11:41 hagarth: but, I guess we will see soon enough if the /etc/hosts workaround makes a different
13:11:46 *difference
13:12:10 ndevos: right, wait and see how it goes?
13:12:56 hagarth: yeah, we've been in this situation for a while already, if it does not improve, we can go that route
13:13:34 we're over our meeting time quite a bit now, any very quick last additions?
13:13:42 ndevos: sounds good, let us aim to have NetBSD seamlessly operational by this meeting next week.
13:14:00 quite a lot of gluster talks in the offing over the next 3 weeks
13:14:10 #action * will try to make NetBSD work stably by next week's meeting
13:14:45 hagarth: I hope those talks are added to the event etherpad!
13:14:51 #endmeeting