12:01:38 #startmeeting Weekly community meeting 03-Aug-2016
12:01:38 Meeting started Wed Aug 3 12:01:38 2016 UTC. The chair is kkeithley. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:01:38 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:01:38 The meeting name has been set to 'weekly_community_meeting_03-aug-2016'
12:01:38 hagarth: Error: Can't start another meeting, one is in progress.
12:01:41 zod's dead, baby
12:01:46 ah good
12:01:49 #chair ankitraj
12:01:49 Current chairs: ankitraj kkeithley
12:01:56 #topic Roll call
12:02:00 o/
12:02:05 * kkeithley is here
12:02:05 * jiffin is here
12:02:06 o/
12:02:12 * anoopcs is here
12:02:17 * loadtheacc is here
12:02:26 I'll be waiting for a minute for the agenda to open.
12:02:26 hi!
12:02:26 * skoduri is here
12:03:21 Welcome everyone!
12:03:36 #topic Next weeks meeting host
12:03:51 * ira is here.
12:04:11 I'll do it next week.
12:04:14 anyone interested in hosting the meeting next week
12:04:28 ok kshlm I will assign it to you
12:04:29 I'll alternate every other week from now on.
12:04:32 * aravindavk is here
12:05:02 #topic
12:05:02 GlusterFS 4.0
12:05:08 * hagarth is here (on and off)
12:05:09 #topic GlusterFS 4.0
12:05:28 Are the 4.0 developers around?
12:05:32 atinm, jdarcy?
12:05:40 I'll give an update on GD2.
12:05:44 ok
12:05:52 I've nearly got the multi-node txn working.
12:06:08 It's been moving very slowly.
12:06:13 jdarcy is on vacation and there is no movement on JBR as he is occupied with brick multiplexing stuff
12:06:19 Some existing code needed refactoring.
12:06:30 Also, I set up CI for GD2 on centos-ci this week.
12:06:38 Cool
12:06:44 (One AI ticked for this week)
12:06:47 I haven't heard any progress on DHT2 from shyam
12:07:06 But it has some poor github integration causing builds to be marked as failures even when they aren't.
12:07:45 Anyone else around to provide updates on DHT2 and JBR?
12:07:58 * msvbhat arrives a bit late
12:08:55 is there anything more to discuss on this topic
12:09:21 ok now moving to next topic
12:09:25 #topic GlusterFS 3.9
12:09:39 aravindavk, Any updates?
12:09:40 any updates
12:09:56 Where is pranithk? He should attend these meetings.
12:10:49 he is idle in -dev
12:10:50 aravindavk, ?
12:10:50 nothing much from last week
12:10:57 we have a bunch of eventing patches getting into mainline is what I can update from the 3.9 perspective
12:10:58 aravindavk, I added a feature-spec for posix-locks reclaim support
12:11:01 http://review.gluster.org/#/c/15053/
12:11:07 expecting some more pull requests for the roadmap page
12:11:09 aravindavk, I'd appreciate it if you can add it to 3.9
12:11:40 skoduri: edit the roadmap page by clicking the "Edit" link in the footer
12:11:52 aravindavk, okay will do..thanks
12:12:11 pranithk and I have been discussing sub-directory mounts on and off.
12:12:28 Still no idea if we can get it done for 3.9
12:12:28 ... they still work with NFS and Samba ;-)
12:12:36 kshlm: that is a long-awaited killer feature
12:13:01 ndevos, Yeah. They're always there.
12:13:04 post-factum, Yup.
12:13:07 +1 to sub-directory mounts .. would be really cool to have with fuse.
12:13:18 But pranithk is just so busy right now.
12:13:29 kshlm: anything worth discussing on -devel wrt sub-dir mounts?
12:13:33 I can hear him around, but can't see.
12:13:50 kshlm: will call him, I can see
12:13:54 hagarth, Mainly on the UX.
12:14:10 The UX for authentication.
12:14:38 We also need to better understand the patch under review sent by jdarcy.
12:14:46 kshlm: ok
12:15:04 We wanted to sit together to understand it better before starting a mailing list conversation.
12:15:33 I guess that's all for 3.9 this week.
12:15:35 ok now moving to next topic
12:15:39 #topic GlusterFS 3.8
12:15:49 any updates?
12:16:15 nope, nothing special
12:16:32 Still on track for the 10th?
12:16:33 3.8.2 will get done in a week from now, things look good
12:16:40 infra team was planning things for 8-9th.
12:16:42 Awesome.
12:16:59 kkeithley, I forgot that. Thanks for bringing that up.
12:17:02 if their things don't go as planned, then what?
12:17:02 oh, I thought that got rescheduled by amye, kkeithley?
12:17:08 did it?
12:17:17 I'm not clear about it either.
12:18:01 amye did mention that the move would be scheduled after 3.8.2
12:18:08 No specific date was given.
12:18:11 but yes, if that happens, Jenkins/Gerrit might misbehave and the release gets delayed
12:18:41 mh ?
12:19:05 I said in the meeting: after the release
12:19:15 cause I do not want to make any change while a release is going on
12:19:25 thanks misc!
12:19:44 what meeting was that, and is there a chat log?
12:19:53 https://www.gluster.org/pipermail/gluster-infra/2016-July/002523.html was the mail amye sent.
12:20:14 my team meeting, and the cage one (both internal meetings)
12:20:27 ah, ok
12:20:36 mhh so yeah, i did miss that email
12:20:55 but yeah, not gonna do it while a release is out
12:21:11 sounds like we're good then
12:21:16 and so far, we still didn't get the server physically moved, waiting on network
12:21:17 There should be a big enough window between 3.82. and 3.7.15
12:21:36 kshlm: yeah
12:21:37 3.82 is somewhat distant...
12:21:38 3.6.10?
12:21:50 3.8.2 is in 7 days
12:22:05 kkeithley, We're not doing any more regular 3.6.x releases
12:22:19 and no irregular releases :) ?
12:22:24 post-factum, :)
12:22:34 misc, That we're not sure of yet.
12:22:50 kshlm: there were some 3.6.x bugs reported recently, some looked quite severe
12:22:52 there was something on #gluster yesterday about 3.6 that sounded like it was worthy of a 3.6.10 release
12:22:55 when we are sure 3.9 is out, and then we are certainly sure
12:22:56 kshlm: how long before would it be decided ?
12:23:15 ndevos, I didn't know that.
12:23:42 kkeithley, Any links?
12:23:44 and yes, I know we're not doing _regular_ 3.6 releases
12:23:57 so basically, if we want to touch the service, doing that more than 1 week before a release would be ok ?
12:24:15 I'll have to look at the #gluster logs to reconstruct
12:24:21 kkeithley, Thanks.
12:24:27 misc, I think so.
12:24:41 latest 3.6 bugs are on https://bugzilla.redhat.com/buglist.cgi?f1=bug_status&f2=version&list_id=5591260&o1=notequals&o2=regexp&order=changeddate%20DESC%2Ccomponent%2Cbug_status%2Cpriority%2Cassigned_to%2Cbug_id&product=GlusterFS&query_based_on=&query_format=advanced&v1=CLOSED&v2=%5E3.6
12:24:51 well, that's an ugly URL
12:24:52 kshlm: unless there is an emergency security release, but I can imagine that being "exceptional"
12:25:00 and well, if we are unlucky, we are unlucky
12:25:23 ... also, wasn't this the 3.8 topic?
12:25:31 Yeah. We got carried away.
12:25:36 oops sorry :
12:25:37 (
12:25:49 ok we are moving to next topic
12:25:53 #topic GlusterFS 3.7
12:26:02 So 3.7.14 was done.
12:26:04 v3.7.14 is the first release that contains all memleak-related fixes planned after 3.7.6. yay!
12:26:11 post-factum, Yay!!!
12:26:23 so long a road
12:26:23 This was a pretty good release.
12:26:40 This went out nearly on time.
12:26:55 The builds were done really quickly.
12:26:58 hurray for going out (nearly) on time.
12:27:01 Thanks ndevos and kkeithley
12:27:07 ndevos++ kkeithley++
12:27:07 kshlm: Karma for devos changed to 1 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
12:27:26 I haven't heard of any complaints yet.
12:27:44 And I think this has also fixed the problems with proxmox.
12:27:52 A really good release overall.
12:28:02 nice!
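[Editor's note: the "ugly" Bugzilla query URL pasted above is easier to audit when assembled from its parameters. A sketch, using only the field/operator/value pairs taken from that URL (`f1`/`o1`/`v1` etc. are Bugzilla's advanced-search parameters; nothing here is invented beyond the variable names):]

```python
# Rebuild the 3.6 open-bugs query URL from readable parameters.
# Parameter names and values are copied from the URL in the log above.
from urllib.parse import urlencode

params = {
    "product": "GlusterFS",
    "query_format": "advanced",
    "f1": "bug_status", "o1": "notequals", "v1": "CLOSED",  # status != CLOSED
    "f2": "version",    "o2": "regexp",    "v2": "^3.6",    # version =~ ^3.6
    "order": "changeddate DESC,component,bug_status,priority,assigned_to,bug_id",
}
url = "https://bugzilla.redhat.com/buglist.cgi?" + urlencode(params)
print(url)
```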
12:28:06 3.7.15 is now on track for 10th September
12:28:14 let 3.7.15 be as good as 3.7.14
12:28:20 and let's try not to break it with the next release ;-)
12:28:35 The tracker is open https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.15
12:28:36 Bug glusterfs: could not be retrieved: InvalidBugId
12:28:37 3.7.15 would be 30 August?
12:28:54 Oops! Got confused
12:29:03 It's 30 August.
12:29:04 :)
12:29:13 I'll continue as the release-manager.
12:29:24 kkeithley++
12:29:25 kshlm: kkeithley's karma is now 9
12:29:30 You didn't get the karma earlier.
12:29:35 heh
12:29:41 And that's it.
12:30:02 #topic GlusterFS 3.6
12:30:21 goto out;
12:30:30 post-factum, not so fast.
12:30:35 :)
12:30:52 I did not get any reactions on http://www.mail-archive.com/gluster-devel@gluster.org/msg09771.html yet
12:31:10 that is for going through the open 3.6 bugs and closing the non-important ones out
12:31:35 or moving them to mainline, in case it is a feature request
12:31:46 ndevos, I saw a few bug-closed messages from bugzilla in my inbox.
12:31:58 I thought someone was already looking into this.
12:32:18 kshlm: ah, good, maybe people started to look into it without letting anyone know
12:32:32 Do you think it'll be good to set up a time for people to get together on this?
12:33:27 could be the most efficient, I was hoping for suggestions from others ;-)
12:34:36 ndevos: will add needinfo on the reporter for georep bugs. If not expected in 3.6, I will close it
12:35:04 I would join if a time was set up. But I'll not be available for the rest of the week.
12:35:39 Also, if we're planning on doing another 3.6 release, we need to find a new release-manager.
12:35:44 The current manager is MIA.
12:36:07 ndevos, I'll reply to the mail thread.
12:36:17 aravindavk: that's a good approach, mention a deadline for the reply in the comment too, and close it after it has passed
12:36:22 We can continue with the other topics here.
12:36:31 ndevos: ok
12:37:08 #topic Community Infrastructure
12:37:39 misc, nigelb any updates?
12:37:55 I have a complaint though. I'll go after you've given your updates.
12:38:03 kshlm: besides the server stuff, not much
12:38:17 nigelb has maybe more (but right now, i am in another meeting)
12:38:25 misc, Ok.
12:38:39 I wanted to check about planet.gluster.org.
12:38:45 also, if the complaint is not in bugzilla, it doesn't exist :p
12:38:57 It doesn't seem to have synced my post.
12:39:01 To my blog.
12:39:06 mhh, again ?
12:39:14 s/To/from/
12:39:29 I had a 3.7.14 announcement.
12:39:31 it kinda fixed itself last time, so I didn't find the problem, I guess I need to investigate again
12:39:36 It hasn't shown up yet.
12:39:47 kshlm: can you file a bug ?
12:39:53 misc, Okay.
12:40:10 I've got nothing more.
12:40:43 mhh, the build is showing an error
12:40:55 mhh, nope
12:41:13 misc, We can take it up after the meeting.
12:41:21 Let's move on to the next topic.
12:41:46 ok we are moving to next topic
12:42:01 #topic Community NFS Ganesha
12:42:22 nothing, just waiting to wrap up development and GA 2.4
12:43:09 kkeithley: when is that planned?
12:43:26 don't know the exact date yet. Soon. (I hope)
12:43:38 "when it is ready"
12:44:01 ok, that gives a little more time to get the cache invalidation memory corrections in
12:44:04 Was originally scheduled for February.
12:45:03 It was originally....
12:46:40 Anything else to discuss on NFS Ganesha?
12:46:48 or we move to the next topic
12:47:14 #topic Community NFS Samba
12:47:35 ??
12:47:36 nfs samba?
12:47:37 * ira didn't know Samba served NFS.
12:47:47 * ira learned something new today.
12:48:19 I am trying to carve out time to take in Jose's extended HA (storhaug) for 3.9
12:48:26 just Samba
12:48:29 ira: even windows has bash now, so why not
12:48:40 Windows has NFS too these days
12:48:41 Samba is for SMB... not NFS. :)
12:49:00 oh, I thought someone finally did the NFS part for Samba!
12:49:07 Ok...
12:49:08 that was a typo
12:49:13 #topic Community Samba
12:49:18 Ok... Samba 4.5.0rc1 has been cut.
12:49:23 And we now have the required changes for exposing gluster snapshots to Windows clients (as shadow copies, via Volume Shadow Copy Service or VSS) merged into Samba's master branch. Kudos to rjoseph for getting it done in the vfs shadow_copy module inside Samba.
12:49:40 Yep.
12:49:47 Cool!
12:49:56 Is this in the rc?
12:50:04 It should be..
12:50:15 yeah, as is some ACL code refactoring between Ceph and Gluster.
12:50:28 Cool! I need to test out samba and nfs-ganesha sometime.
12:50:47 The gluster integration in these.
12:52:20 anything else on samba or we move to the next one
12:52:24 Please do, BZs and patches accepted ;)
12:52:29 Move along afaik.
12:52:49 #topic Community AI
12:53:09 All done! Yay!
12:53:35 Possibly for the first time ever.
12:53:41 ankitraj: hmm, prepended a space, and in some other #topic commands too, I guess you need to edit the minutes before sending them out
12:54:01 we need to find something to assign to kshlm now...
12:54:20 Running a meeting 2 weeks from now :P
12:54:48 That's not an AI for this week.
12:54:56 maybe he can find the 3.6 crash discussion from yesterday or the day before in #gluster. The one I can't find now.
12:55:05 or #gluster-dev
12:55:22 kkeithley, I could try.
12:55:26 ;-)
12:56:00 hehe, I think kshlm does not sit idly around, there are many things he needs to do :)
12:56:25 when you want something done, ask a busy person
12:56:27 I'll probably set a time to weed out the 3.6 list as ndevos wanted.
12:56:42 That should be my 1 AI for the week.
12:57:21 oh, that would be very good!
12:57:40 #action kshlm to set up a time to go through the 3.6 buglist one last time (everyone should attend).
12:58:05 maybe do that in the bug triage meeting?
12:58:28 kkeithley, That would be good as well.
12:58:46 I'll continue this discussion on the mailing lists.
12:58:57 Let's finish this meeting first.
12:59:01 ok
12:59:38 #topic Open Floor
13:00:00 kkeithley and loadtheacc have things to share.
13:00:26 what do people think about switching to the CentOS Storage SIG for the 3.7 EL packages? (except epel-5 perhaps?)
13:00:28 loadtheacc, Thanks for putting up the videos. I've not gone through them all yet, but I plan to do it soon.
13:00:57 kkeithley: ok from me, and yes, no epel-5 because that is not an option for the SIG at the moment
13:00:59 kkeithley: will it break the upgrade for people using the current 3.7 packages ?
13:00:59 * post-factum does not care since he builds packages himself
13:01:06 kkeithley, +1 from me if there are no problems.
13:01:14 kkeithley: +1
13:01:37 it will certainly break people who have /etc/yum.repos.d/gluster*.repo files pointing at d.g.o
13:01:47 misc: probably, the upgrade could be fixed with a proper redirection
13:01:48 they will have to switch to the Storage SIG repos
13:01:53 misc: maybe we need some redirection on download.gluster.org, or something like that
13:02:11 well, I guess we can try the redirect
13:02:20 that could work, although off the top of my head I'm not sure of the details
13:02:22 provided yum does follow redirects and this kind of stuff
13:02:31 #link https://www.gluster.org/pipermail/gluster-devel/2016-July/050307.html
13:02:37 For the glusto videos.
13:02:39 s/could/might
13:02:42 misc: it should, isn't that how some of the mirrors work?
13:03:05 * ndevos still needs to watch the Glusto videos
13:03:11 ndevos: better double check :)
13:03:19 kkeithley, Could you just sync the SIG packages to d.g.o?
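[Editor's note: the open question above is whether clients would transparently follow a redirect from download.gluster.org to the Storage SIG repos. yum/dnf and `curl -L` follow HTTP redirects; bare `curl` does not. A minimal, self-contained sketch of the behaviour, using a hypothetical local server and made-up repo paths rather than the real d.g.o layout:]

```python
# Demonstrate that an HTTP client which follows redirects (like yum/dnf,
# or curl -L) ends up at the new location transparently.
import http.server
import threading
import urllib.request

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/gluster.repo":          # old d.g.o-style path
            self.send_response(302)
            self.send_header("Location", "/storage-sig.repo")
            self.end_headers()
        else:                                     # new SIG-style path
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"[storage-sig]")
    def log_message(self, *args):                 # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# urllib follows the 302 by default, like yum/dnf would.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/gluster.repo").read()
srv.shutdown()
print(body)
```

A client that does not follow redirects (plain `curl` without `-L`) would instead see the bare 302 response, which is the breakage misc and ndevos are discussing.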
13:03:26 I think mirrors work by getting a list of mirrors, then using that
13:03:32 but yeah, dnf does that too
13:03:50 we originally said we were only going to force people over to the Storage SIG starting with 3.8
13:03:57 misc: on Fedora yes, for CentOS I thought it was just redirection/geo-ip
13:04:38 also, it will break people using curl directly
13:04:46 but I doubt there are a lot of people doing that
13:04:51 * kshlm notes we're over time
13:04:55 -L switch :)?
13:04:59 and FYI, after almost six months of continuous uptime running 3.7.8, running load (https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity378/)
13:05:05 post-factum: not default, hence the breakage :)
13:05:14 I updated the longevity cluster to 3.8.1
13:05:25 kshlm, thanks. looking forward to feedback and next steps.
13:05:26 https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
13:05:41 kkeithley, What load is it serving?
13:06:06 fsstress.py. Ben England's home-grown load generator
13:06:26 kkeithley: it would have leaked about 40G of RAM for us, i guess, over those six months
13:06:42 Cool.
13:06:53 you can see how much it leaked in the logs at the above location
13:07:01 we'll see how 3.8.1 compares
13:07:09 * post-factum should show his zabbix charts somewhen
13:07:38 * kshlm wants to leave, big team dinner tonight!
13:07:43 post-factum should write a blog post, or submit a talk at the upcoming Gluster Summit in Berlin
13:07:54 kshlm: I hope you're not wearing slippers!
13:07:58 kkeithley: let me join RH first ;)
13:08:08 finally, any thoughts on new pkg signing keys for 3.9?
13:08:21 ndevos, Thankfully I read the mail before coming into work today.
13:08:23 are slippers the same as sandals?
13:08:47 kkeithley, What would new signing keys for each release help with?
13:08:49 kkeithley: I was wondering about that too - and also how it would apply to foreigners
13:09:06 kkeithley: yes, what is the advantage of new keys?
13:09:23 OT, slippers in India generally stand for flip-flops.
13:10:02 I think we are done with the meeting
13:10:04 more secure. Even if I'm not paranoid, someone could compromise pkgs if they've managed to break the key.
13:10:18 kkeithley: ok, then go for it :)
13:10:48 kkeithley: and distribute the keys from a different (or just more) server(s) than the one where the packages are
13:11:08 It's possibly more work for you. But I say go for it as well.
13:11:30 the private key(s) are not on d.g.o
13:12:01 no, but what sense does it make to sign packages when the public key used to check the signature comes from the same server?
13:12:19 ankitraj: you may use the force to end this overtime ;)
13:12:20 at least there should be the option to verify the key from a different server
13:13:05 Thanks to all for participating in the community meeting
13:13:09 if someone changes the pub key on d.g.o but the pkgs are signed with the private key on another server, then the verification would fail.
13:13:11 * msvbhat leaves the meeting, goes for his evening run
13:13:15 I am ending it now
13:13:17 thanks ankitraj!
13:13:25 #endmeeting