12:06:54 #startmeeting Weekly community meeting 18/May/2016
12:06:54 Meeting started Wed May 18 12:06:54 2016 UTC. The chair is rastar. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:06:54 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:06:54 The meeting name has been set to 'weekly_community_meeting_18/may/2016'
12:07:08 #topic Rollcall
12:07:12 * post-factum is here
12:07:19 * kkeithley is here
12:07:40 * hagarth is here
12:08:50 I know there are others who joined too
12:09:10 * jdarcy is here
12:09:28 #topic Next week's meeting host
12:10:13 I guess this is on me, given that I skipped the last time.
12:10:37 great admission
12:10:55 #topic kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla, github
12:11:07 * anoopcs is also here..
12:11:34 * ndevos _o/
12:11:54 ndevos: Saravanakmr anoopcs hi :)
12:12:21 we don't have kshlm or csim here
12:12:23 moving on
12:12:40 I guess you can add nigelb to that AI too
12:13:19 #action kshlm/csim/nigelb to set up faux/pseudo user email for gerrit, bugzilla, github
12:13:34 #topic amye to check on some blog posts being distorted on blog.gluster.org, josferna's post in particular
12:13:39 wasn't this resolved?
12:14:00 aravinda offered to look into that too, I've not heard about any outcome
12:14:57 don't see any update..
12:15:09 #action aravinda/amye to check on some blog posts being distorted on blog.gluster.org, josferna's post in particular
12:15:27 #topic pranithk1 sends out a summary of release requirements, with some ideas
12:16:21 is this the same topic as release cycles or something else?
12:16:40 * ira is here.
12:16:54 rastar: I think so
12:17:02 hagarth: ok
12:17:16 rastar: however I think we need to close that discussion soon
12:17:18 I think that was similar to what aravinda wrote, but more with respect to what users and devs expect from releases
12:17:29 rastar: it is pending for a while now
12:17:52 hagarth: yes
12:18:13 it would be good to know what we'll follow for the 3.8 release, we should tell our users what the plan for that version is :)
12:18:22 I think apart from the inputs in the email thread, we won't get it from anyone else who is not in this meeting
12:18:25 ndevos: agree
12:19:03 ok, let us take an action item to close it out this week
12:19:15 I think amye was collecting some information about a LTS release, maybe she can weigh in with that?
12:19:43 ok then, we will have an action item but resolve it in email
12:20:05 who should own that? pranith has done his work
12:20:36 * ndevos points to hagarth
12:20:47 perfect timing :)
12:20:51 hehe
12:21:15 I would still say it should be hagarth
12:21:29 sure, put it on him
12:21:50 #action hagarth to announce release strategy after getting inputs from amye
12:21:54 ndevos apparently has the pointing-finger of death
12:22:07 that is a magic wand
12:22:08 hagarth: we assigned that AI to you
12:22:37 #topic kshlm to check with reporter of 3.6 leaks on backport need
12:22:59 kshlm is not here, moving on..
12:23:12 #action kshlm to check with reporter of 3.6 leaks on backport need
12:23:14 rastar: cool, thanks
12:23:38 #topic GlusterFS 3.7
12:24:15 hagarth: 3.7.12?
12:24:37 we missed the 3.7.12 release date by almost 3 weeks now :-/
12:25:17 post-factum: will take up that topic this week with maintainers
12:25:46 hagarth: I remember, 3.7.12 is what led to the idea of maintainers signoff
12:25:46 ndevos: if there's no urgency, I don't mind missing a scheduled date
12:25:53 well, 3.7.11 was released only about 12 days before 3.7.12 was due
12:26:38 hagarth: yeah, that's fine, I would expect 3.7.12 at the end of this month, just one date skipped
12:26:38 kkeithley: yes, that's true
12:26:44 which itself was three weeks late
12:27:04 ok, I will follow up on 3.7.12
12:27:20 skipping a release should be fine, as long as there are no urgent issues to fix
12:27:36 hello misc
12:27:49 hi
12:28:36 misc: we have covered AIs, nothing to update there I guess
12:28:36 who decides that there are no urgent issues? Is it a community decision?
12:29:03 kkeithley: yes, if there is an urgent issue I would expect to hear about it in community forums
12:29:05 kkeithley: Lack of patches? :)
12:29:25 * kkeithley believes there are plenty of 3.7 patches queued up
12:29:29 ira: there are lots of commits merged for .12 already
12:29:29 ira: sometimes it is the other way around
12:29:49 some commits don't get backported till a bug is filed for 3.7
12:30:28 Then why not release? If there are patches made...
12:30:46 ira: oh :)
12:30:47 kkeithley: I would say it is up to the assigned release engineer, and I suggest that even 5+ minor patches would be sufficient for a release
12:30:52 * kkeithley kind of expects that if someone fixes a bug in master/mainline the dev will automatically file a bug for it in 3.8 and 3.8
12:31:13 3.8 and 3.7
12:31:17 kkeithley: I expect that too, just my observation that it is not always true
12:31:24 at this stage of the game
12:31:40 and even 3.6, if the bug is there too
12:32:37 ok, I guess we have decided on the required things, hagarth will talk to maintainers and we will have a 3.7.12 release by the end of this month
12:32:37 you're correct, it's not always true. But if I keep saying it enough maybe it'll start happening on a more regular basis
12:32:47 kkeithley: :)
12:33:06 I want world peace too.
12:33:06 #topic GlusterFS 3.6
12:34:32 raghu isn't here... and nobody stepped up to help him with 3.6 for all I know
12:34:44 rabhat had requested help in maintaining 3.6
12:34:58 ndevos: yes
12:35:22 anyone?
12:35:38 the patch volume is low atm in release-3.6: http://review.gluster.org/#/q/status:+open+branch:+release-3.6
12:36:31 hagarth: yes, just looking for a backup
12:36:42 ok, I will put my name here
12:36:49 will work with rabhat on this
12:37:32 rastar: great, thank you!
12:37:32 I updated the etherpad
12:37:39 I think that is enough.
12:37:50 #topic GlusterFS 3.5
12:38:18 no ugs have been brought to my attention, and I have not noticed any patches that got submitted
12:38:25 ugs = bugs
12:38:42 ndevos: cool, anyway it is at the end of life
12:38:47 there is no plan to release another 3.5 update
12:39:04 yes, it'll be officially EOL when 3.8.0 ships
12:39:12 next topic, the coolest thing for now
12:39:15 #topic GlusterFS 3.8
12:40:06 maintainers and packagers are already looking into 3.8rc1
12:40:11 #link http://thread.gmane.org/gmane.comp.file-systems.gluster.maintainers/727
12:40:23 we got some feedback from the Debian maintainer
12:40:33 3.8(.0)rc1 is packaged for Fedora 24 and F25. It'll be in the Fedora 24 Updates-Testing repo soon.
12:40:38 ... and that is the only one that gave direct feedback :-/
12:41:18 cool, thanks kkeithley! please remind everyone by sending a reply to the announcement
12:41:19 I'm debating (or wondering) whether to package it for other distributions, e.g. Ubuntu or SuSE.
12:41:41 no el7 packages?
12:41:46 I see two problems so far in my limited testing with 3.8:
12:42:09 1. afr op-version dependency problems. itisravi is aware of this problem.
12:42:11 I believe ndevos is getting el7 and el6 in the CentOS Storage SIG. TBA
12:42:24 post-factum: those are in the CentOS Storage SIG, there is a link in the email for those
12:42:38 ndevos: my bad, already looking at koji
12:42:46 I don't feel good about shipping rc1 to users
12:42:59 el6 isn't ready yet, that is something the CentOS team still needs to set up for us
12:43:04 hagarth: I was unable to mount a 3.7 volume with a master client... probably, it is related
12:43:33 2. rolling upgrades being broken due to additional things we are looking for in the dictionary during handshake
12:43:46 possibly related to 2bfdc30e0e7fba6f97d8829b2618a1c5907dc404
12:44:01 hagarth: kind of "Unable to fetch afr pending changelogs. Is op-version >= 30707? [Invalid argument]"
12:44:13 post-factum: this is problem 1 for me
12:44:21 hagarth: file a bug with steps to reproduce and send a mail (one per issue) to the -devel list?
12:44:25 hagarth: no rolling upgrades O_o?
12:44:43 post-factum: yes, rolling upgrades (with clients online) are broken atm
12:44:45 FYI (reminder) we don't (can't) ship in EPEL because RHS/RHGS client-side pkgs are in RHEL. We took a decision to not provide el[567] pkgs on download.gluster.org because they will be in the CentOS Storage SIG
12:44:50 carp :(
12:44:53 *crap
12:45:17 hagarth: hope it can be fixed
12:45:24 kkeithley: you should #info that :)
12:45:43 We took a decision to not provide el[567] 3.8 pkgs on download.gluster.org because they will be in the CentOS Storage SIG
12:45:44 ndevos: too much overhead with my limited cycles but will try doing that
12:46:22 hagarth: sharing experience is rather important, but yes, we're all quite busy...
12:46:23 #info FYI (reminder) we don't (can't) ship in EPEL because RHS/RHGS client-side pkgs are in RHEL. We took a decision to not provide el[567] 3.8 pkgs on download.gluster.org because they will be in the CentOS Storage SIG
12:47:06 anyway, 3.8 is progressing
12:47:16 kkeithley: would we be able to get download stats from the CentOS SIG?
12:47:36 we still need all feature owners to provide release notes on https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes
12:47:54 hagarth: that's a good question. I'll ask kbsingh
12:47:56 hagarth: not really, it is just like other distributions
12:48:36 ok, is that all on 3.8?
12:49:01 moving on, we are running out of time
12:49:04 #topic GlusterFS 4.0
12:49:06 the release of 3.8.0 is still planned for the end of this month (or the first few days in June)
12:49:22 ... and that's all :)
12:49:27 ndevos: is that even possible given only one rc has been released?
12:49:33 I pushed a big blob of crappy reconciliation code for people to laugh at.
12:49:39 post-factum: more will follow
12:49:53 ndevos: ah (where is my popcorn)
12:49:54 jdarcy: atinm all yours :)
12:50:13 Haven't heard much from the other 4.0 leads, so I'll let them speak for themselves.
12:50:54 post-factum: rolling upgrades can be fixed. release cannot happen till we fix that problem.
12:51:19 hagarth: glad to hear that
12:51:26 hagarth: nice
12:52:02 jdarcy: ok
12:52:19 I don't see any other updates on 4.0
12:52:35 new topics on the agenda :)
12:52:53 #topic NFS Ganesha and Gluster updates
12:53:12 * ndevos points to kkeithley
12:53:28 oh
12:53:57 wingardium leviosa
12:54:08 NFS-Ganesha starts to become the default NFS server in 3.8
12:54:13 * ndevos "avada ka..."
12:54:22 work is progressing on 2.4 with dev-18 posted on Friday
12:54:30 ndevos: with great power comes great responsibility
12:54:32 obliviate
12:54:53 lumos
12:54:58 and we'll start testing the combination more in the CentOS CI soon too, hopefully
12:55:05 Ash nazg durbatuluk . . . oops, wrong canon.
12:55:55 sorry I was afk :(
12:56:17 Sonorus: That is how we refer to harry potter, Gluster community++
12:56:21 kkeithley: had some issues with nfs-ganesha+dovecot storage :((
12:56:40 lease work in gluster is progressing. Leases are the basis for NFS reservations and Samba op-locks, and we will even use the lease framework for pNFS layout recall.
12:57:05 kkeithley: with nfs kernel client blocking in D state. so if it really will replace the builtin nfs server, i need to find out what happens there
12:57:32 post-factum: was the nfs client on the same node as the server?
12:57:37 nope
12:57:44 post-factum: let's follow up in #gluster-dev after the meeting
12:57:52 kkeithley: okay
12:57:55 post-factum: replacement will happen at one point, 3.8 will have Gluster/NFS disabled by default to encourage users to migrate to Ganesha
12:58:22 ndevos: ah ok, so it could be enabled again
12:58:39 post-factum: yes, it's only one volume option away
12:58:58 nfs.disable off will bring back gNFS right with 3.8?
12:59:08 post-factum: but we want to fix any issues that you have with ganesha
12:59:10 just `gluster v set $vol nfs.disable false` to bring it back. (in 3.8)
12:59:15 hagarth: yes
12:59:18 it's not disabled in the build or anything.
12:59:21 ndevos: so want I :)
12:59:33 ok, cool!
12:59:42 ok, next topic
12:59:48 * ndevos points to https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes
13:00:22 #topic Samba and Gluster
13:00:23 ndevos: after kkeithley's obliviate it is hard to remember...
13:00:54 be glad I didn't use sectumsepre
13:01:02 sectumsempre
13:01:06 who wants Samba if you can have Ganesha?
13:01:09 Some updates from my side
13:01:21 ndevos: we want ;)
13:01:25 :)
13:01:31 crazy Windows users
13:01:39 :)
13:01:51 * ndevos ... heh
13:01:54 but it is all about memory consumption for samba+gfapi
13:02:31 ok, the leases xlator was merged in 3.8 and will be the basis for leases in Samba too.
13:02:51 isn't that called op-locks in Samba?
13:03:01 kkeithley: No, leases.
13:03:10 no, leases, and delegations in NFS
13:03:10 post-factum made a good observation, gfapi based access takes a lot of RAM when used with Samba because every connection is a new process with Samba
13:03:32 :(
13:03:39 kkeithley: think of leases in SMB as op-locks done right, that is op-locks2
13:03:48 * kkeithley wonders where he got op-locks from
13:04:05 it used to be oplocks, pre 2.1.
13:04:17 kkeithley shows his age
13:04:24 Only 30
13:04:33 *still 30
13:04:33 hagarth: do we strictly say no to the FUSE reexport method for Samba use cases?
13:04:38 kkeithley: In which base? ;)
13:04:50 rastar: are there any benefits in doing so?
13:04:52 rastar: Yes.
13:05:03 rastar: I do not think we can prevent users from setting that up...
13:05:13 yes, one glusterfs process instead of as many as there are smbd processes
13:05:21 hagarth: ^^
13:05:22 ndevos: We can't stop them from using a POSIX FSAL with Ganesha.
13:05:35 rastar: what is the footprint of samba without libgfapi?
13:05:46 hagarth: around 10 MB
13:05:55 ira: indeed, we cannot prevent them from exporting fuse mounts with kernel-nfs either, but we can strongly recommend against it
13:05:56 hagarth: with Gluster it is around 200 MB
13:06:06 * ndevos *cough*!
13:06:13 gfapi has some memory issues itself (that I'm looking into)
13:06:24 rastar: I'd say 100–120, but that does not change things much
13:06:39 rastar: would that be true if we disable all perf xlators?
13:06:40 other people can look too.
13:06:53 hagarth: that would reduce it to 60 maybe
13:07:08 and then what happens to performance?
13:07:27 kkeithley: For non-metadata ops, it'll probably improve ;)
13:08:10 looks like it deserves a mail thread of its own, I will start it
13:08:14 rastar: agree
13:08:21 we have crossed our time limit
13:08:26 #topic open floor
13:08:44 hagarth: http://review.gluster.org/14399 passed regression tests, needs code review :)
13:09:54 post-factum: will do :)
13:10:05 hagarth: thanks!
13:10:14 did you folks check out lio + tcmu-runner with libgfapi?
13:10:36 oh, that reminds me
13:10:57 thought for the day: devs should occasionally do a build on a 32-bit system. It's a good way to catch log/printf format string mistakes
13:11:04 #help packagers for lio/tcmu-runner with libgfapi wanted to get the packages in the CentOS Storage SIG
13:11:24 kkeithley: just #idea that
13:11:41 #idea thought for the day: devs should occasionally do a build on a 32-bit system. It's a good way to catch log/printf format string mistakes
13:11:52 ndevos: let us send out a note on -devel and -users. actually I want to write up about this integration
13:12:06 as it addresses a long pending request in the community about block storage
13:12:14 hagarth: I assume that with "us" you mean yourself ;-)
13:12:35 ndevos: check the second part of the same sentence ;)
13:12:43 hagarth: I also expect to see it integrated with storhaug (sp?) at one point
13:13:01 ndevos: possibly yes
13:13:42 I have a few things to add: there is SDC happening in BLR next week. We have presentations from rafi on tiering in Gluster, surabhi is presenting on Multichannel in Samba, and I am sure there was a third presentation that I am not remembering now.
13:14:13 rastar: from atinm?
13:14:16 rastar: cool! is that on the events page already?
13:14:35 rastar: is multichannel like multipath-tcp but for poor people :)?
13:14:43 hagarth: I am not aware of one, but atinm might have
13:14:50 #link https://www.gluster.org/events/
13:14:53 ndevos: it has not been updated yet, I will ask them to
13:15:00 I have something to ask about the gluster blog..
13:15:16 Do we have some document which explains how to add a blog on gluster.org?
13:15:38 post-factum: I don't know about multipath-tcp but this enables windows clients to contact the server on as many NICs as it can get a route to
13:15:49 post-factum: No, it is the enabling technology for working SMB Direct, and RDMA ;).
13:15:56 Saravanakmr: add an entry in https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml and it'll get synced on planet.gluster.org
13:16:10 ira: sounds like mature enterprise, ok
13:16:24 post-factum: and the connection dies only when the last tcp route is gone
13:16:43 allows for multiple TCP connections per NIC for better throughput... etc.
13:16:48 Good stuff.
13:16:53 ndevos, thanks! can we have this added as part of the documentation somewhere?
13:17:14 obnox_ can fill you in with lots, and lots of details ;)
13:17:26 Saravanakmr: sure, probably suitable under http://gluster.readthedocs.io/en/latest/Contributors-Guide/Index/
13:17:44 #action Saravanakmr to add documentation on how to add blogs
13:17:47 ndevos, ok.. will check and add it there
13:18:01 another one, related to the blog
13:18:03 what is the difference between planet.gluster.org and blog.gluster.org?
13:18:18 I can see different blogs updated on both pages - can we have one single blog interface please?
13:19:23 Saravanakmr: blogs.gluster.org was supposed to get removed, not sure why it is still around
13:19:23 ndevos: should maintainers merge patches for 3.8 or do you want to manage those?
13:19:27 we have really run out of time. kkeithley changed the topic too
13:19:41 that was subtle, wasn't it. ;-)
13:19:42 kkeithley: yes, relevant question
13:19:56 kkeithley: as subtle as a brick through a window.
13:20:21 kkeithley: I do not need to be the only one who merges backports in 3.8, maintainers are free to merge too
13:20:35 ndevos: cool, thanks!
13:20:37 ndevos, but I can see the "Using LIO with Gluster" blog is updated on blogs.gluster.org and not on planet.gluster.org
13:20:42 Saravanakmr, blog.gluster.org is a WordPress blog -
13:21:04 planet.gluster.org is a feed aggregator
13:21:25 Saravanakmr: well, I don't know, but I thought the infra team didn't want to keep maintaining their own blog instance, and planet.gluster.org was the better approach
13:21:50 kkeithley: ira :)
13:22:03 but, maybe amye decided differently and we'll keep the wordpress application around anyway?
13:22:26 ndevos, blog.gluster.org is how we're able to put out posts that don't need to be on the main webpage -
13:22:38 this is the first I've heard that the infra team didn't want to maintain it anymore
13:22:55 I will end this meeting now, please discuss the rest of the topics in a mail thread or on gluster-devel
13:23:09 amye: oh, well, the plan to drop it caused planet.gluster.org to pop up...
13:23:42 * ndevos isn't much in favour of two sources with the same(?) information
13:23:43 amye, ndevos: I think it is better to have one single link for all blogs related to gluster from the gluster.org TOP page.
13:23:55 ndevos, aha, makes sense, and that feed makes sense to keep around. Blog.gluster.org is something that causes the twitter feed to automagically link. :)
13:24:05 pong
13:24:13 ira: sorry, late pong - what's up
13:24:14 ?
13:24:23 Saravanakmr, so things like the posts for the newsletter?
13:24:39 just the multichannel discussion.
13:24:53 Thank you everyone for attending.
13:24:58 ah
13:25:03 #endmeeting