12:04:47 #startmeeting
12:04:47 Meeting started Wed Nov 19 12:04:47 2014 UTC. The chair is davemc. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:04:47 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:05:05 community meeting
12:05:19 etherpad is at https://public.pad.fsfe.org/p/gluster-community-meetings
12:05:35 which also has the agenda
12:05:42 #chair ndevos hchiramm Debloper lalatenduM
12:05:49 Roll call
12:05:54 who's here
12:05:59 * davemc is here
12:06:01 * overclk is here
12:06:02 * hagarth is here
12:06:03 * ndevos is here
12:06:04 * lalatenduM is here
12:06:05 * Debloper is here
12:06:05 * hchiramm_ is here
12:06:27 from the last meeting
12:06:46 #item hagarth to update maintainers file
12:06:51 davemc: still TBD
12:06:58 I hope to complete it this week
12:07:08 no problem
12:07:38 #item hagarth to send an email about release maintainer for 3.6
12:07:44 davemc: done
12:08:12 cool
12:08:22 #item Humble to create a maintainer wiki page
12:08:25 looks done
12:08:45 page is available ..
12:08:54 I will update the content once hagarth's patch is in
12:09:06 sounds good
12:09:18 #item ndevos, lalatenduM, kkeithley and Humble to propose a long-term solution to the packaging problem in CentOS/RHEL
12:09:39 davemc, not done yet
12:09:51 davemc, sorry about it
12:10:01 let's move it to next week
12:10:10 as we have time till 3.7 :)
12:10:21 no problem. As a master of the last-minute fix myself ...
12:10:33 agree, we have loads of time for this one :)
12:10:43 :)
12:10:50 #item JustinClift to create initial GlusterFS Release Checklist page on the wiki
12:10:59 didn't see Justin join
12:11:10 will ping him later
12:11:32 #item davemc to add pointer to wiki from site
12:11:46 ok
12:11:59 davemc, as Debloper is here, can we talk about the gluster website revamp ?
12:12:06 discussed the web site at length last week
12:12:18 with tigert
12:12:30 * Debloper will check the logs
12:12:37 I think the first step is to just fix things so they work
12:12:50 and get stuff up to date
12:13:01 davemc: agreed on the important+urgent fixes.
12:13:16 longer term is to revamp it, and I'd like input
12:13:30 in parallel, we should mobilize working on a full-fledged revamp. what's the planned timeline for that?
12:13:56 there is a front page revamp at http://glustermm-tigert.rhcloud.com/
12:14:18 checking that out, I have my feedback. should I send a mail, or discuss here/now?
12:14:22 we have two OSAS resources to use for the next few months
12:14:32 (now won't be complete/concise)
12:14:43 either works.
12:14:53 mail it is, then.
12:15:31 tigert will help with design, and a new person is willing to help clean up/clear out documentation
12:16:01 davemc: awesome
12:16:09 awesome!!
12:16:32 we, as a community, just need to help get what we want. else we get what they want
12:16:33 davemc: I'm focusing on the complete revamp then (or any help/input I can offer on that). the urgent fixes are in good hands, it looks like.
12:16:56 if I've missed urgent stuff, please let me know.
12:16:59 I want to get rid of the "Write Once, Read anywhere" theme from the landing page :D
12:17:04 +1
12:17:08 +1
12:17:18 some other caption has to be there :)
12:17:36 hagarth, +1 , yes plz
12:17:43 a short welcome, or a statement of what it is
12:17:58 davemc: +1
12:18:03 again, suggestions welcome
12:18:39 #action Debloper to send email on web revamp
12:18:39 davemc, do we need to pass our thoughts via the gluster-infra mailing list ?
12:19:01 hchiramm, davemc for the caption we can use the gluster-users ML
12:19:05 hchiramm_: should be the ideal scenario.
12:19:10 personal opinion, I'd rather build a target and then pass it through infra
12:19:13 but that's me
12:19:21 lalatenduM++
12:20:17 #action davemc to run a "what should it say?" through the users ML
12:20:30 davemc: that leaves a space for non-transparency - in the long run.
12:20:33 "we, as a community, just need to help get what we want. else we get what they want" -> where will this go ?
12:20:48 is it gluster-infra or the users list ?
12:20:58 users
12:21:02 caption to the users list, revamp to infra
12:21:09 davemc++
12:21:15 davemc, +1
12:21:22 sounds good
12:21:23 +1
12:21:35 davemc: +1
12:21:56 okay, we'll revisit this via email, early and often
12:22:16 moving to next item
12:22:26 #item 3.6
12:22:45 anything to bring up?
12:22:45 a few patches were merged over the last week
12:22:54 aiming to have 3.6.2 sometime next week
12:23:04 "early and often"
12:23:18 yes, and possibly with full rdma support :)
12:23:24 hagarth, http://review.gluster.org/#/c/8988/ is in line to get merged
12:23:30 way cool
12:23:38 lalatenduM: yes, will get that in
12:23:40 hagarth, should I add the bug to the 3.6.2 tracker?
12:23:49 lalatenduM: that would be helpful
12:23:55 hagarth, will do
12:24:39 any other 3.6 stuff?
12:25:00 k, moving on
12:25:18 #item 3.5
12:25:30 I got the announcement out yesterday
12:25:41 on every channel I could think of
12:26:14 davemc++
12:26:23 davemc: awesome, did we get it out on g+ too?
12:26:32 sorry for the delay, last week was rather a mess
12:26:37 hagarth, yes
12:26:53 davemc: fantastic, we've not been very active there lately.
12:26:58 irc, twitter, FB, LinkedIn, g+
12:27:08 EMAIL
12:27:32 blogs
12:27:32 if I missed something, let me know
12:27:40 I'll add it to the list
12:27:46 that completes it, I think
12:27:56 lalatenduM, thanks, yeah there too
12:28:02 yup
12:28:28 so, anything on 3.5 we need to cover?
12:28:50 on to 3.4
12:28:55 #item 3.4
12:29:03 see the announcement scenario above
12:29:11 for 3.4.6
12:29:24 we probably need new trackers for 3.4.7 and 3.5.4
12:29:48 Query: how long do we support 3.4 futures? :)
12:30:04 one question, how long do we support the 3.4 family? was asked by the red hat storage mgmt team last week
12:30:08 kkeithley, ^^
12:30:16 hchiramm_: till we stabilize 3.6
12:30:36 davemc: longer than what red hat storage intends doing ;)
12:30:40 there is a 3.5.4 tracker already :)
12:30:41 we support current - 2, right? So 3.4.x until 3.7
12:30:43 whew, that was my answer
12:30:47 ndevos++
12:30:58 darn, no karma bot here
12:31:45 so 3.4 lives till the 3.7 release
12:32:00 works for me. but I'm not doing the hard stuff
12:32:16 hagarth++
12:32:45 users still report issues on v3.2 and v3.3
12:32:48 anything more on 3.4?
12:33:01 hchiramm_: we need to encourage them to upgrade, methinks
12:33:09 maybe we need an announcement about these releases and their EOL
12:33:11 But we should reduce the frequency of builds in 3.4.x
12:33:12 hagarth, true
12:33:22 lalatenduM, +1
12:33:25 lalatenduM: +1
12:33:27 hchiramm, agreed
12:33:33 kkeithley, ^^
12:33:38 yes
12:33:56 ndevos, ^^ :)
12:34:20 for 3.4.7 I only have the memory leak, libgfapi symbol versions, and I think there is one more BZ open
12:34:28 at least in the tracker
12:35:03 sounds good
12:35:14 any more 3.4?
12:35:27 #item gluster.next
12:35:46 #item jdarcy to convene a meeting to discuss improvements in small-file performance
12:36:14 not sure if Jeff is around today, but we need this
12:36:29 as we plan to address a bunch of small-file performance problems in 3.7
12:37:01 we need to ping Jeff
12:37:35 moving on
12:37:37 #action davemc Announcement about GlusterFS releases and their EOL :)
12:38:19 #item planning pages
12:38:34 thanks for getting these up
12:38:54 yeah, we expect to get more there
12:39:42 much easier to promote community involvement when we can point at them
12:39:51 Sorry about that. Had to get my daughter's breakfast.
12:40:03 no problem jdarcy
12:40:28 Everybody: please read/comment on the GlusterFS 4.0 feature pages - http://www.gluster.org/community/documentation/index.php/Planning40
12:40:34 I've got three kittens who think they should get fed now, because, hey, I'm up
12:41:08 davemc: can I have one too?
12:41:08 jdarcy, +1
12:41:40 Lots of exciting stuff, where "exciting" is pretty close to "scary"
12:42:09 ready to move to "other"?
12:42:18 +1
12:42:21 #item other agenda
12:42:45 #item gluster into ubuntu issue?
12:42:52 davemc: yes, please
12:43:12 as an aside, Ubuntu is the top distro from the survey
12:43:41 we need this one merged: http://review.gluster.org/8064
12:44:15 backports are waiting for that... blocking inclusion in Ubuntu proper
12:44:22 ndevos, +1
12:44:33 indeed
12:44:48 kkeithley, you have some new comments there by kshlm
12:44:54 Where are the full survey results?
12:44:58 oh, okay
12:45:18 davemc: I think we need to open a conversation with the Ubuntu folks to make their criteria for package acceptance public
12:45:23 jdarcy, I'll be finalizing them and setting up a separate meeting
12:45:34 I honestly feel that we are being snubbed
12:45:45 in the BZ I think, and in the nightly static analyses
12:46:06 hagarth, will try to figure that out.
12:46:07 http://download.gluster.org/pub/gluster/glusterfs/static-analysis/
12:47:12 kkeithley, does this ^^^ qualify for some space on gluster.org ?
12:47:51 I think it deserves some level of exposure
12:47:57 indeed
12:48:33 hchiramm, it is present somewhere in the wiki
12:48:34 maybe an easy bug fix section or something similar ?
12:49:01 http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis
12:49:14 lalatenduM, yep .. but if we can attract newcomers who land on gluster.org, it's awesome..
12:49:46 hchiramm, agree
12:50:10 BTW, did anyone else see that Suse is going to package their own version of Ceph?
12:50:11 might be useful to consider "attracting newcomers" during the revamp of the site
12:50:19 yep
12:50:22 davemc, +3
12:50:30 jdarcy, yep
12:50:45 jdarcy, ooh, any link with detailed news?
12:50:56 package their own version, as in a Supported Product?
12:51:20 lalatenduM: Thin on detail, but http://www.theregister.co.uk/2014/11/19/susecon_suse_storage_announcement/
12:51:39 jdarcy, yup just got it, thanks
12:51:45 SUSE Cloud 4, the newest version of SUSE's OpenStack distribution for building Infrastructure-as-a-Service private clouds, is now available. SUSE Cloud 4 is based on the latest OpenStack release (Icehouse) and features full support for the Ceph distributed storage system. Along with Ceph support, the latest SUSE Cloud platform includes advanced VMware capabilities and enhanced scalability, automation and availability features to ease enterprise adoption of OpenStack and help organizations maximize current IT investments.
12:52:06 https://www.suse.com/company/press/2014/8/suse-cloud-4-now-available-featuring-ceph-distributed-storage.html
12:52:07 Are we in Suse?
12:52:09 speaking of which, anyone have a contact at SuSE who can tell me why my requests for a SLES12 eval license have not had a response? (They gave me one for SLES11sp3.)
12:52:41 we have OpenSuSE and SLES rpms on d.g.o
12:52:49 kkeithley, not sure my contacts are still there, but I can check
12:53:09 jdarcy: Suse has been using Ceph for a while
12:53:18 even an unambiguous "no" would be preferable to the dead air I've been getting
12:53:18 jdarcy: don't think we are in Suse
12:53:32 jdarcy, I can't find any Gluster stuff from Suse
12:53:51 Being in Suse would help us get adoption in Europe
12:54:23 unlike the Ceph-OpenStack love affair, we don't have a market pull to be included
12:54:40 davemc: we are the third most sought after cinder driver in OpenStack ;)
12:55:14 GlusterFS is the 3rd most popular block storage for OpenStack,
12:55:16 Is that like being the third most sought after candidate for president?
12:55:25 * lalatenduM waiting for a "truth happens" moment for GlusterFS :)
12:55:26 note the "block"
12:56:06 I think our adoption percentage has increased since the last OpenStack summit survey
12:56:16 I think we're drifting here
12:56:23 4 minutes to go
12:56:57 I'm all out of random digressions.
12:56:59 my last thing is just to mention we still would like to get some folks to talk about gluster
12:57:06 on a short video
12:57:15 on a topic of your choice
12:57:26 or I start drafting people.
12:57:47 davemc: send out a note on devel?
12:57:58 actually, I think I may hit up the users and devel MLs for this as well
12:58:03 davemc: cool
12:58:16 anything else to cover?
12:58:32 if not, let's end 2 minutes early
12:58:40 GOING
12:58:44 going
12:58:47 and
12:58:51 #endmeeting