12:02:14 #startmeeting
12:02:14 Meeting started Wed Jan 7 12:02:14 2015 UTC. The chair is hagarth. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:14 Useful Commands: #action #agreed #help #info #idea #link #topic.
12:02:29 Welcome everybody to the 1st community meeting of 2015!
12:02:46 meeting agenda is here - https://public.pad.fsfe.org/p/gluster-community-meetings
12:03:00 #topic Roll Call
12:03:07 who do we have here today?
12:03:13 * krishnan_p is present
12:03:31 * spot is here
12:03:40 * ndevos _o/
12:04:09 cool, I think a few others are dormant atm
12:04:17 * lalatenduM is here
12:04:48 #topic Introduction
12:05:03 we have a new arrival today - spot.
12:05:12 spot: would you like to introduce yourself to us?
12:05:40 My name is Tom Callaway, I'm on the OSAS team at Red Hat
12:05:46 welcome spot!
12:06:17 I'm working with Jim Jagielski to take over the community management responsibilities for Gluster
12:06:19 spot, welcome to the party :)
12:06:26 hep, i was off by one channel
12:06:37 Welcome spot!
12:06:38 Jim is going to be focused on the board level and I'll be working with, well, everything else. :)
12:06:54 spot: welcome again, great to have you with us!
12:07:24 that's really great to hear :)
12:08:04 ok, let us move on to action items from the last meeting
12:08:21 #topic AI review
12:08:32 AI - hagarth needs to send a cleanup patch for the MAINTAINERS file to remove inactive maintainers (add an alumni section?)
12:08:52 this needs to be done; any thoughts on how to mark inactive maintainers as alumni?
12:09:12 maybe use "A -"? "A" being for alumni?
12:09:21 +1 for alumni - how much inactivity counts as alumni?
12:09:38 hagarth: perhaps just say at the bottom, "Gluster would not be possible without the contributions of:", then list the inactive folks
12:09:50 3 months? 6 months?
12:09:56 spot: sounds like a good idea
12:09:58 that way, there is no need for "am I inactive or not"
12:10:10 if they come back, hey, welcome back.
:)
12:10:35 lalatenduM: we have folks who have been inactive for more than 6 months now
12:10:46 maybe we could use 6 months as the tipping point
12:10:50 hagarth, yeah, agreed
12:10:58 spot: sounds fantastic! :)
12:11:08 ok, consider this done by next meeting
12:11:18 #action hagarth needs to send a cleanup patch for the MAINTAINERS file to remove inactive maintainers (add an alumni section?)
12:11:30 AI - hchiramm will try to fix the duplicate syndication of posts from blog.nixpanic.net
12:11:43 hchiramm: any update on this one?
12:12:10 hagarth, not yet
12:12:37 ndevos: we cleaned up a few duplicates, but were not able to figure out how some of your posts were being syndicated twice
12:12:46 need to check the changes made and confirm
12:12:58 hchiramm_: let us carry forward this AI
12:13:05 #action hchiramm will try to fix the duplicate syndication of posts from blog.nixpanic.net
12:13:17 hagarth: sorry, but I really have no idea how the syndication works... I only post on my blog and the magic happens
12:13:27 ndevos: do you have any other mirrors?
12:13:34 we were not able to see it in syndication though
12:13:47 hchiramm_: hmm, maybe Fedora Planet?
12:13:52 ndevos: yes, needs some more investigation.
12:14:13 AI - Debloper to write a blog post about the complete website revamp
12:14:26 he should be doing it soon
12:14:36 hagarth, ^^ as we discussed ..
12:14:41 hchiramm_: right, can we target it for next week?
12:14:48 sure.. I will update him
12:14:52 #action Debloper to write a blog post about the complete website revamp
12:15:01 AI - atinmu will look for someone who can fix the spurious fop-sanity test
12:15:23 atinmu: any update on this one? I have not observed more failures with this test of late
12:15:50 hagarth, I have not seen this spurious failure in recent regression runs
12:16:09 atinmu: thought as much, shall we drop this AI for now?
12:16:10 hagarth, I haven't got any volunteers either
12:16:46 or is it worth an investigation?
maybe run the test 100 times and see what happens?
12:16:50 hagarth, I will keep an eye on the regression report; if this spurious failure resurfaces I will look into it
12:17:13 hagarth, or probably find someone who can help debug it
12:17:14 Speaking of spurious regressions...
12:17:18 atinmu: ok thanks, I will drop this AI from regular follow-ups.
12:17:47 We really need to get one of the fixes for the quota-anon-fd-nfs.t failures in.
12:18:02 oh, yes, definitely!
12:18:14 jdarcy, this one is now omitted from the codebase temporarily
12:18:22 atinmu: Yay!
12:18:33 atinmu: ah, workaround :D
12:18:55 the quota folks are working with a vengeance to have it back :D
12:19:09 * jdarcy checks, doesn't see that either of the relevant patches went in since last night.
12:19:10 atinmu: add a conv=fsync to the dd command? that's what I was going to try...
12:19:43 jdarcy: http://review.gluster.org/9390
12:19:45 ndevos, Sachin Pandit is working on it...
12:19:52 atinmu: okay, cool
12:20:05 AI - JustinClift and misc should post an update or plan about upgrading Gerrit
12:20:16 I don't think we have either of them here today
12:20:31 hchiramm_: would it be possible to send a reminder on the gluster-infra ML about this?
12:20:43 hagarth, sure
12:21:01 #action hchiramm_ to send an email on gluster-infra about the gerrit upgrade
12:21:08 * jdarcy looks at the right list this time, and sees it. ;)
12:21:25 All those hard drugs must be affecting my brain.
12:21:26 AI - pointer to the wiki from the site needs to be added
12:21:52 does anybody have more context about this? Justin mentioned that tigert was working on this one.
12:22:05 I thought we have a pointer to the wiki from gluster.org
12:22:12 that's about the links at the top of gluster.org
12:22:33 we do have a wiki link there, right?
12:22:38 well, the link to the wiki was there, but it pointed to the blog (or the other way around?)
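[Editor's note: the suggestion above (around 12:16) of running the spurious fop-sanity test 100 times could be sketched as a small shell loop. This is illustrative only - `run_repeatedly` is a hypothetical helper, and the `prove tests/basic/fop-sanity.t` invocation is an assumption about how a single GlusterFS regression test would be driven; adjust to the local harness.]

```shell
# Run a command (e.g. a single regression .t file) repeatedly and
# report how many of the runs failed, to confirm or rule out a
# spurious (intermittent) failure.
run_repeatedly() {
    local runs=$1; shift
    local fails=0 i
    for i in $(seq 1 "$runs"); do
        "$@" >/dev/null 2>&1 || fails=$((fails + 1))
    done
    echo "$fails"
}

# e.g.: run_repeatedly 100 prove tests/basic/fop-sanity.t
```

A consistently zero failure count over ~100 runs would support dropping the AI, as was decided above.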
12:22:59 ndevos: right, it lands us here - http://www.gluster.org/community/documentation/index.php/Main_Page
12:23:16 isn't that the wiki?
12:23:45 spot: looks like it.
12:23:55 yes, looks like it's been corrected
12:24:08 CLOSED:WORKSFORME
12:24:09 ok, so we're good with this AI.
12:24:31 AI - JustinClift to update the regression.sh script and capture /var/log/messages too
12:24:48 atinmu: do you happen to know if JustinClift was able to get this done?
12:25:17 if not, we can carry forward this AI
12:25:18 hagarth, yes, he has done it, it now captures them
12:25:26 atinmu: cool, thanks!
12:25:40 AI - tigert to respond to Soumya Deb on gluster-infra about the website.
12:25:46 this hasn't happened
12:25:51 #action tigert to respond to Soumya Deb on gluster-infra about the website.
12:26:03 AI - hagarth to contact Rikki Endsley about fixing the twitter glusterfs domain squatter problem
12:26:14 I haven't completed this, will do this week
12:26:19 #action hagarth to contact Rikki Endsley about fixing the twitter glusterfs domain squatter problem
12:26:23 hagarth: if i can help there, let me know
12:26:40 spot: thanks, will loop you in on that thread.
12:27:00 atinmu, I think I got a confirmation from Justin where you are Cc'd?
12:27:12 that finishes our short list of AIs :)
12:27:18 #topic Gluster 3.6
12:27:18 * hchiramm_ cross-checking
12:27:30 raghu: can you update us about 3.6.2?
12:27:50 yeah. I was waiting till now for the RDMA and USS patches
12:28:09 hchiramm, I am sure he has done it, because now I can see /var/log/messages in the regression logs tar
12:28:12 I have pushed all the RDMA-related patches
12:28:18 atinmu, awesome :)
12:28:27 raghu: fantastic, are we now all set to remove tech preview for RDMA in 3.6.2?
12:28:31 JustinClift++
12:29:24 but some USS-related patches are still failing the regression tests. Yeah. I think we can remove the tech preview for RDMA in 3.6.2.
In fact, the code change to remove the warning when a user creates an RDMA volume has also gone in
12:29:53 raghu: you plan to do a 2nd beta later this week?
12:30:17 raghu: cool! any help needed for the failing regression tests?
12:30:20 #info rdma to be fully supported from 3.6.2
12:30:25 I am thinking of making beta2 later this week. Run some tests and make 3.6.2 next week.
12:30:59 raghu: ok, do we plan to merge any more patches after beta2?
12:31:22 raghu: which USS test cases are failing?
12:31:59 I will stick to this plan of making beta2 later this week and making 3.6.2 next week. I will trigger one more regression run. If they pass the tests, then I will accept them for 3.6.2. Otherwise I will consider them for 3.6.3.
12:32:11 raghu: ok
12:32:40 it's not the USS testcases. Some other tests in quota and erasure coding are failing when the USS patches are applied and regression is triggered
12:33:25 raghu: ok
12:33:27 so if the USS patches fail regression again, then I will make 3.6.2 with whatever is there till today.
12:33:36 raghu: ok
12:33:41 * ndevos likes that plan
12:33:50 raghu: +1, a lot of good fixes are waiting to be pushed out in 3.6.2.
12:33:59 +1
12:34:15 anything more on 3.6.x?
12:34:20 I can make a quick 3.6.3beta1 after 3.6.2 is released if the USS cases happen to pass regression after I make beta2
12:34:38 raghu: sounds good
12:35:02 let us move on to 3.5 now
12:35:09 #topic GlusterFS 3.5
12:35:11 ndevos: all yours
12:35:43 there are still only a few bugs that are important to get fixed: https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=1&id=1141158&hide_resolved=1
12:36:03 9 in total, not all have patches yet
12:36:22 rdma fixes have been posted for 3.5 as well, but are lacking reviews
12:36:44 patches that could use some reviews are here: http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.5,n,z
12:36:54 ndevos: should we send a reminder on gluster-devel about bugs needing patches?
12:37:08 I could do that, sure
12:37:19 ndevos: thanks!
12:37:30 #action ndevos will send an email about bugs that need patches (or reviews) for 3.5.x
12:37:59 as long as nobody complains about urgent issues, I am not planning to do a beta release
12:38:08 ndevos: ok
12:38:10 +1
12:38:35 any questions, comments on 3.5.x?
12:38:36 hchiramm, :)
12:38:39 well, I'll do one when all proposed bugs have patches, just so that our users are a little happier :)
12:38:39 :)
12:39:03 move to 3.4?
12:39:04 ndevos: right :)
12:39:06 sure
12:39:11 #topic GlusterFS 3.4
12:39:17 kkeithley_: any updates here?
12:39:26 the memory leak is still the gating factor
12:39:37 I've been working on narrowing down the leak
12:39:44 using the longevity cluster
12:39:57 kkeithley_: darn, let me add this to my todo too.
12:40:12 I turned off all the performance xlators and memory usage is stable
12:40:44 A summary of where I'm at is at http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/README
12:41:26 kkeithley_: what workload is being run on the client?
12:41:59 Ben England's fsstress, doing a mix of things
12:42:10 create, read, write, delete, rename, etc.
12:42:14 kkeithley_: ok
12:42:50 with all the default performance options enabled, it leaks immediately
12:42:58 with everything disabled, no leak
12:43:09 * hagarth suspects stat-prefetch/md-cache
12:43:37 kkeithley_: thanks for the update. Once the leak is fixed, we are good to go with 3.4.7, right?
12:43:42 naturally the order I'm trying has stat-prefetch last. ;-)
12:43:54 yes, I expect so.
12:44:00 kkeithley_: ok, cool.
12:44:07 anything more on 3.4.x?
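[Editor's note: the leak bisection kkeithley_ describes above - disabling the performance xlators one at a time between fsstress runs - could be sketched as below. The volume name "testvol" and the helper function are placeholders; the option names are the stock client-side performance translators, though the exact set varies by release.]

```shell
# Emit "gluster volume set" commands that switch each default
# client-side performance translator to the given state; review the
# output, or pipe it to sh on a live cluster to apply.
gen_perf_toggles() {
    local vol=$1 state=$2 xl
    for xl in write-behind read-ahead io-cache quick-read stat-prefetch; do
        printf 'gluster volume set %s performance.%s %s\n' "$vol" "$xl" "$state"
    done
}

# e.g.: gen_perf_toggles testvol off        # print the commands
#       gen_perf_toggles testvol off | sh   # apply (needs a live cluster)
```

Re-enabling one translator at a time between runs, stat-prefetch last as noted above, narrows the leak to a single xlator.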
12:44:09 There are a couple of other BZs, but they're minor IMO
12:44:14 kkeithley_: ok
12:44:36 moving on to Gluster next
12:44:42 #topic Gluster next - 3.7
12:44:58 The planning page for 3.7 is here - http://www.gluster.org/community/documentation/index.php/Planning37
12:45:38 please link your feature page from the planning page if you are working on anything significant for 3.7
12:45:51 we are about 2 months away from the feature freeze
12:46:15 so, let us try to get our features in soon(er) :)
12:46:30 does anyone know if there was an announcement for NFS exports+netgroups support to the list?
12:46:44 ndevos: don't recollect that
12:46:56 hagarth: okay, I guess I can send one?
12:47:01 hey
12:47:02 ndevos: please do
12:47:13 okay, will do
12:47:14 * tigert got pinged
12:47:26 tigert: hello, we had a couple of action items for you earlier. will ping them to you after the meeting.
12:47:36 any queries on 3.7?
12:47:47 Should "hyperconvergence" have a feature page? There seem to be bugs/work items filed against it.
12:47:51 hagarth: ok
12:48:02 jdarcy: the oVirt folks have been active on that
12:48:06 #action ndevos will send out an announcement about the NFS with exports+netgroups feature (and lobby for reviews)
12:48:29 good idea, I will open a feature page for that
12:48:55 #action hagarth to open a feature page for (k)vm hyperconvergence
12:49:15 moving on to 4.0
12:49:23 #topic GlusterFS 4.0
12:49:30 jdarcy: any updates to share here?
12:50:09 Not a lot. There's a blog post (+ hangout) about to be published.
12:50:20 I've been working on some random bits of infrastructure.
12:50:22 jdarcy: cool
12:50:51 Probably next week we'll have as much of an online "summit" as we can to kick off more serious work.
12:51:08 jdarcy: neat
12:51:22 jdarcy, looking forward to it.
I think it would be good if all of us could propose (BHA) features on the 4.0 planning page
12:51:37 * jdarcy has auto-generation of defaults.[ch] working, with an infrastructure that can also be used to generate NSR code. Stubs and maybe even syncops could be done this way too.
12:51:48 jdarcy: where is the online summit announced?
12:51:56 ndevos: It's not, yet.
12:52:02 * ndevos probably missed it... not
12:52:54 jdarcy: if there is some schedule, I (and others) would probably like to join some sessions
12:53:01 on the topic of 4.0, I plan to share the results of the community survey soon and solicit more features from the community.
12:53:12 That would be cool.
12:53:41 #action hagarth to share results of community survey and solicit feature proposals from the community
12:54:01 anything more on 4.0?
12:54:05 Nope.
12:54:24 we had a topic on small file performance but will skip that since we had a dedicated meeting for it a few days back
12:54:36 #topic Other Agenda items
12:54:44 #topic Patches to release branches
12:54:57 This is about backports to release branches
12:55:25 how can we be more systematic in sending backports to release branches?
12:55:33 like http://www.gluster.org/community/documentation/index.php/Backport_Guidelines ?
12:55:45 I have felt the need for it, and raghu also mentioned this to me recently while managing patches for 3.6.x
12:56:08 ndevos: I think the process of sending a patch is fine, but how do we identify candidates for backports?
12:56:22 or rather, who owns that responsibility?
12:56:56 I think it should be bug fixes only, and mostly blockers
12:57:06 one thought I have is to let maintainers notify release maintainers & patch submitters about the need to backport during a patch merge
12:57:32 +1
12:57:35 the bug reporter gives a version where the bug is present - a developer should fix it in mainline and do the backport for all versions up to and including the version the reporter uses
12:57:46 lalatenduM: do we identify blockers during the bug triage meetings?
12:57:55 ahh, nope
12:58:03 that's what I was thinking
12:58:03 ndevos: most developers seldom do it that way
12:58:18 hagarth: indeed, they need some poking :)
12:58:22 we should mark blockers for release branches and then track them
12:58:35 lalatenduM: should we start doing that in bug triage meetings?
12:59:00 yes, I think if the bugs are linked with the tracker bug, it helps
12:59:00 ndevos: scale-out poking by maintainers :)
12:59:14 I think the release maintainers should check for important bugs for the version they maintain, and add those bugs to the blocker
12:59:45 Let's say we have 10 bugs for backporting - the release maintainer can send a mail to -devel saying he needs the backports
13:00:22 lalatenduM: ok, as ndevos plans to do it for 3.5.x later today
13:00:31 if a backport is trivial, I normally backport it myself and have the original author review it
13:00:37 should we discuss this further on -devel?
13:00:50 I think this can benefit from more participation
13:00:56 hagarth, yes
13:01:01 agreed
13:01:22 hagarth: yes, others need to become more aware of the need for backports
13:01:31 #action hagarth to carry forward discussion around backporting patches on gluster-devel
13:01:49 since we are running out of time, moving on to the next item
13:02:00 #topic Duplication of posts on Facebook, twitter
13:02:24 this is a legit problem, we see blog posts appearing multiple times on social networking media
13:02:41 spot: if you have some ninja skills on these, can you please help here?
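[Editor's note: the per-branch backport flow discussed above - fix in mainline first, then backport to each affected release branch - is commonly done with a `git cherry-pick -x` of the mainline commit. The helper below is a hypothetical sketch, not the project's official tooling; the Gerrit push refspec is commented out since it needs review.gluster.org credentials, and branch/commit names are placeholders.]

```shell
# Backport one mainline commit onto a release branch via cherry-pick.
# -x records "(cherry picked from commit ...)" in the message, and
# keeping the same Change-Id lets Gerrit link backport and original.
backport() {
    local commit=$1 branch=$2
    git checkout -q "$branch" &&
    git cherry-pick -x "$commit"
    # git push origin "HEAD:refs/for/$branch"   # then submit for review
}

# e.g.: backport 0123abc release-3.6
```

Repeating this for each release branch up to and including the reporter's version matches the flow ndevos describes at 12:57.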
13:02:52 one source to rule them all?
13:03:02 as in "twitter shows the same thing three times" or "twitter and facebook have the same content"?
13:03:14 the 1st :)
13:03:17 besides TechWars "we compared gluster to ceph" spam?
13:03:20 spot: usually both show the same content multiple times
13:03:24 hmm, okay. I'll look.
13:03:26 spot: Twitter shows the same thing several times. Sometimes *really old* things reappear.
13:03:38 kkeithley_: thankfully we are not tweeting TechWars stuff :)
13:03:40 i think we're using hootsuite for that, but I'll find out.
13:03:52 spot: thanks!
13:04:08 hagarth: Another thing I noticed some time back was non-gluster content appearing in the gluster twitter feed.
13:04:15 #action spot to investigate repetitive posts on social networking sites
13:04:23 msvbhat: like the trip to bandipur? :)
13:04:30 hagarth: Yes :)
13:04:34 msvbhat, yeah, that's the older post in the gluster blogs :)
13:04:38 that was due to bad syndication, it is fixed now
13:04:45 s/bandipur/bannerghatta/
13:04:48 hagarth: Ah, okay
13:04:55 #topic Announcing events: FOSDEM, devconf.cz, ...
13:05:08 ndevos: is this topic yours?
13:05:14 yes, I just added that
13:05:24 ndevos: go ahead please
13:05:28 we should announce our presence at events more clearly
13:05:33 +1
13:05:33 ndevos: +1
13:05:39 * spot agrees
13:05:47 we should have applied for a table at FOSDEM :(
13:05:55 and, maybe get some users/devs together for some drinks or such
13:05:58 should we publish a gluster events calendar on the website?
13:06:08 yes
13:06:09 yes
13:06:10 yes
13:06:13 i heard this morning that there are some gluster talks on the FOSDEM schedule this year
13:06:14 just by pure luck i happened to notice a local ceph+glusterfs meetup organized by RH..
13:06:15 yes, a calendar, and announcement emails in advance
13:06:17 i'm trying to get details there
13:06:39 spot: yes, lalatenduM, kkeithley_ & ndevos are going to be there I think
13:06:47 Also, the CentOS booth has offered 'office hours' time to Gluster if we want it
13:06:56 basically, time to run demos and be present to talk about Gluster
13:06:58 spot: that would be fantastic.
13:07:12 yes, lalatenduM proposed that on the CentOS list
13:07:18 we have a few talks at devconf.cz too
13:07:20 ndevos: awesome.
13:07:35 spot, yeah that is part of the Storage SIG I think
13:07:58 I mean re: the CentOS booth offering 'office hours' time to Gluster
13:08:22 so, who wants to figure out a tentative schedule for gluster talks/events at the next conferences and send an email about that?
13:08:32 ndevos: i'll do thar
13:08:34 that, rather
13:08:41 spot: cool, thanks!
13:08:47 * spot is already working on a "places we should be in 2015" list. :)
13:08:57 Do we need to decide time(s) for our Office Hours in the CentOS booth and publicize them?
13:09:06 anything else for today?
13:09:13 kkeithley_: i think the CentOS people will do that if lalatenduM is talking to them
13:09:14 #action spot will send out an email with upcoming Gluster events and meetups
13:09:20 okay
13:09:26 but we should make noise once the times are locked in
13:09:35 spot: BIG +1 :)
13:09:41 ndevos, kkeithley_ I will make sure of that
13:09:48 quick question: anyone know how to fix the git sub-repo problem that's plaguing Launchpad builds?
13:09:49 this is in a couple of weeks: http://www.meetup.com/RedHatFinland/events/218774694/
13:10:04 arggh, yes that one too!
13:10:05 kkeithley_: maybe lpabon does
13:10:24 spot, +1
13:10:56 ok folks, I think that's all we have for today. Thanks for being part of the 1st community meeting in 2015 :).
13:10:59 Finland is one week before FOSDEM. I thought about going, but that's too long to be away
13:11:09 #endmeeting