12:07:33 #startmeeting Weekly GlusterFS Community Meeting
12:07:33 Meeting started Wed Sep 24 12:07:33 2014 UTC. The chair is JustinClift. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:07:33 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:07:42 kshlm: Yeah, I'm way behind some threads too
12:07:47 k, Roll call
12:07:59 kshlm, we discussed that in last meeting iirc
12:08:17 * kshlm is here.
12:08:23 * Humble here
12:08:24 * lalatenduM is here
12:08:32 Humble, I wasn't part of the last meeting as well.
12:08:37 ok :)
12:08:45 * itisravi is here
12:08:45 overclk: Are you "here" for the meeting? ;)
12:08:57 * overclk is here
12:08:58 Cool, at least it's not just me. :D
12:09:05 I think till November we are planning to continue in the same time..
12:09:13 #topic Action items from the last meeting
12:09:19 "ndevos and hagarth to discuss on gluster-devel (by 24th Sept) the outstanding 3.5.3 blocker BZ's, and which ones to move to 3.5.4"
12:09:28 I don't remember seeing that ^
12:09:32 Anyone else know?
12:09:41 neither me
12:09:56 nope :(
12:10:03 * JustinClift has a feeling this will be a short meeting
12:10:14 k, it's marked as still needing tbd
12:10:16 "davemc to get the GlusterFS Consultants and Support Company's page online"
12:10:26 That's not there yet either
12:10:54 Soumya Deb has been discussing overall web strategy thoughts which are useful
12:11:09 yeah, discussions are going on about that
12:11:13 * JustinClift would prefer that to be on gluster-infra instead of in private email
12:11:18 JustinClift: +1
12:11:24 s/prefer/strongly prefer/
12:11:27 JustinClift, it will be in gluster infra soon
12:11:37 just placing a decent draft before reaching there
12:11:44 Humble: I think we should extend invite to Soumya Deb to these meetings
12:11:52 I will let him know
12:12:09 Humble: tx :)
12:12:14 np :)
12:12:28 #action Humble to invite Soumya Deb to the Weekly GlusterFS Community Meetings
12:12:46 hagarth: how did "ndevos and hagarth to discuss on gluster-devel (by 24th Sept) the outstanding 3.5.3 blocker BZ's, and which ones to move to 3.5.4" go?
12:12:52 Still TBD?
12:13:02 JustinClift: yes
12:13:46 k. What's a realistic eta for this? 1 week? 2 weeks? x weeks? :)
12:14:06 JustinClift: I expect this week .. but will check with ndevos
12:14:29 k. Won't attach an eta to it then
12:14:54 The GlusterFS Consultants and Support Company page isn't having much luck is it?
12:15:20 Humble: Should we wait for Soumya Deb for that?
12:15:38 JustinClift: might be a good idea to have davemc and Soumya Deb work together on this one
12:15:40 depends.
12:16:19 k. I'll just mark it as TBD and we can figure it out later
12:16:20 I can pass this to Soumya and discuss..
12:16:34 Humble: Please do
12:16:39 k..
12:16:51 #action Humble to discuss the GlusterFS Consultants and Support Company page with Soumya
12:16:57 "JustinClift to retrigger regression runs for the failed release-3.4 CR's"
12:17:05 Done. They failed too (so, non spurious)
12:17:11 * JustinClift hasn't looked at them since though
12:17:17 "hagarth to work with Raghavendra G on dht bug fixes for 3.4.x"
12:17:24 hagarth: How'd that go?
12:17:37 JustinClift: the patches are landing
12:18:02 JustinClift: http://review.gluster.org/#/q/status:+open+branch:+release-3.4,n,z
12:18:33 we possibly need to overcome a regression failure.. (possibly a spurious one)
12:19:04 hagarth: k. :)
12:19:17 "Humble to email gluster-devel and gluster-users about the upcoming test day, and including thest 3.6_test_day page"
12:19:21 Done.
12:19:23 done :)
12:19:23 Humble: Thanks. :)
12:19:26 ;)
12:19:30 Humble++ :)
12:19:32 #topic 3.4
12:19:41 hagarth++ :)
12:20:02 Also: http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.4,n,z
12:20:29 Looks like lots of dht stuff still needing to get in?
12:20:33 maybe we should re-trigger regression for the 3.4 dht patches
12:20:34 nothing much to report on 3.4. Now that Denali is released maybe....
12:20:44 1. Raghavendra's DHT
12:20:49 2. memory leak
12:20:56 will get some love
12:21:03 Yeah.
12:21:07 * JustinClift hopes so
12:21:20 People seem to have been pretty slammed last few weeks
12:21:30 k, lets see next week how we're tracking
12:21:41 #topic 3.5
12:22:07 As per the above 3.5.3 thought, it's still in progress
12:22:14 So, we'll figure this one out next week too :)
12:22:21 #topic 3.6
12:22:38 hagarth: How's the beta2 preparation looking?
12:22:47 3.6.0beta1 was released over the weekend
12:22:51 hagarth++
12:22:52 Humble: hagarth's karma is now 3
12:23:02 a few bugs were identified and patches have landed since then
12:23:13 plan to do beta2 later today
12:23:15 hagarth: Should we build beta2 rpms and sanity test them before announcing the new tarball + rpms them?
12:23:34 JustinClift: there are a few folks who make use of the tarball too
12:23:35 Or would this stuff now show up in sanity tests, and we need active testers?
12:23:44 s/now/not/
12:23:49 the beta1 RPMs were broken, so yes, we should build beta2 RPMs
12:24:13 yes, testing of RPMs would be good before we announce them.
12:24:16 * JustinClift is just thinking we might not want to announce tarball separately to the rpms
12:24:21 As I have said on the mailing lists, I currently have problems building Qemu on Fedora-20
12:24:27 well, only the -server RPM was broken
12:24:35 kkeithley_, we have updated d.g.o with 3.6.0-0.2.beta1
12:24:37 andersb_: I face that on one of my systems too
12:24:42 and it is not broken anymore
12:24:44 lalatenduM++
12:24:45 kkeithley_: lalatenduM's karma is now 2
12:24:52 yeah , beta1 rpms are avialable
12:24:54 So, maybe we prepare initial tarball, build rpms from that, and do basic sanity testing (eg qemu building on f20), and then announce them if it all passes
12:24:58 and its usable ..
12:25:17 JustinClift: building qemu is a different problem .. maybe I'll hold it for beta3
12:25:26 k
12:25:36 andersb_, hagarth yeah thats an issue
12:25:40 I can hold the announcement for beta2 and notify the rpm packagers about the tarball
12:25:46 not sure how to fix it
12:25:49 hagarth, yeah
12:25:50 hagarth: Lets try that
12:25:54 we can send out an announcement once the rpms are built
12:25:59 ok, cool
12:26:02 indeed thats better
12:26:16 hagarth +1
12:26:18 hagarth: Yeah, that's also more in line with how other projects do it too
12:26:24 In this case, I think it's a good thing :)
12:26:43 at least in beta builds, it is a good idea
12:27:00 It would also be good if we review documentation for 3.6 now
12:27:04 #action hagarth to send beta2 announcement once the tarball and rpms are ready
12:27:10 or at least new features introduced in 3.6
12:27:12 hagarth, kkeithley_ we need http://review.gluster.org/#/c/8836/ for beta2
12:27:14 * kkeithley_ wonders if DPKGs for Debian and Ubuntu are useful.
12:27:16 hagarth: That's a very good idea
12:27:27 lalatenduM: will push that soon
12:27:34 kkeithley_: Yes, is it feasible to have them available at the same time?
12:27:49 hagarth, thanks, the same patch is available for master too
12:27:53 kkeithley_: +1 to that, getting the builds done on time is hard
12:27:58 hagarth, once u push the release, we will try out best to make rpms available asap
12:28:02 putting up the Qemu rpm's as well, would work for me (hopefully the patch in BZ1145993 is enough), machine is still building
12:28:05 If we can have tarball, rpms, and deb's all at the same time, that would be pretty optimal
12:28:06 Humble: cool
12:28:14 I would not hold the announcement waiting for DPKGs
12:28:34 kkeithley_: No worries. :)
12:28:34 maybe we should have one grand script that automates building packages for all distros :)
12:28:35 I agree it'd be a "nice to have"
12:29:06 I'll send a list of documents that we have for new features in 3.6
12:29:24 hagarth, pkg building for EL and Fedora is not difficult :) but the steps after that take time :)
12:29:32 we can start reviewing and polishing those docs
12:29:38 when the specfile is ready
12:29:39 hagarth: maybe we can get an intern to work on that
12:29:47 lalatenduM, above apply only when there is not much change in spec file
12:29:47 hagarth: make an AI for it :)
12:29:56 Humble, right , agree
12:29:59 i.e. the one, grand, build everything script
12:30:01 for first build of 3.6.0 , it was not the case
12:30:10 written in DTRT.
12:30:10 Humble, yup
12:30:15 #action hagarth to send out documentation pointers to new features in 3.6
12:30:39 #action JustinClift to update GlusterFS OSX Homebrew formula for beta2
12:30:57 ^ that will let people on OSX test the client side bits easier
12:31:03 JustinClift: cool
12:31:16 hagarth: If you let me know when the tarball for beta2 is online, I'll do that. It's pretty quick
12:31:27 JustinClift: should we reach out to FreeNAS to see if they have any interest with the FreeBSD port?
12:31:35 JustinClift: will do
12:31:40 Can't see why not. :)
12:31:41 hagarth, by documentation pointers , r u referring 'admin guide' as well ?
12:31:48 Humble: primarily admin-guide
12:31:54 ok.. cool
12:32:10 in the absence of relevant chapters in admin-guide, I'll resort to feature pages :)
12:32:17 #action JustinClift to reach out to the FreeBSD and FreeNAS Communities, asking them to test beta2
12:32:19 that would be good,
12:32:25 we do need more testing of beta2
12:32:28 our 'features' folder looks ok now :)
12:32:46 Corvid Tech may be able to throw a few hundred nodes at it
12:32:47 please drag and involve whomever you can find to test beta2 :D
12:32:59 Probably just using the existing features rather than new ones, but I'm unsure
12:33:21 JustinClift: yes, feedback on both old & new would be great
12:33:48 k. We need to plan out the marketing activities for 3.6.0 as well
12:34:12 hagarth, davemc, johnmark and I have started initial planning discussions off-list
12:34:24 But, I'm not really seeing why we shouldn't do this on-list somewhere
12:34:32 Would it be possible to let new versions include a compatibility libglusterfs.so.0, to avoid having to rebuild everything?
12:34:54 JustinClift, +1 for moving discussion to a list
12:34:58 andersb_: I think we could do that
12:35:14 andersb_, lets disscuss this in gluster-devel, we need to fix this issue
12:35:33 lalatenduM: With PostgreSQL we set up an "Advocacy and Marketing mailing list" and people got involved in that
12:35:42 lalatenduM: http://www.gluster.org/community/documentation/index.php/Planning/GlusterCommunity has some early thoughts
12:35:45 But I think gluster-devel would be suitable for us for now
12:35:53 hagarth, andersb_, else qemu users, samba users cant update glusterfs from 3.5 to 3.6
12:35:56 libglusterfs.so.0? AFAIK we're only bumping the SO_VERSION in libgfapi.so
12:36:10 kkeithley_, yes
12:36:20 for rpm based installation , we need to uninstall glusterfs-api and install glusterfs-api
12:36:34 Humble, thast is not working
12:36:40 Humble, I just tried that
12:36:41 I feel it should work..
12:37:08 Humble, I have uninstalled qemu and samba-vfs-glusterfs, then installed 3.6
12:37:15 let us think through this .. it is an important issue to be fixed for 3.6
12:37:19 OK, probably sloppy interpretation of upgrade failure reasons on my part :-(
12:37:30 and again tried installed qemu and samba-vfs-glusterfs, it is failing
12:37:57 hagarth, yup, else it would stop upgrade of 3.5 to 3.6
12:38:30 I think only glusterfs-api is affected lalatenduM
12:38:49 Humble, yes
12:39:07 which got dependencies through libgfapi versioning, not entire glusterfs packages
12:39:08 Correct, should be libgfapi.so.0 :-(
12:39:49 That's why we need to notify devel@fedoraproject.org, so that those maintainers will rebuild and new versions of qemu, samba, and ganesha land at the same time as the glusterfs-3.6.0
12:39:57 andersb_, nope, if the api is changed the version should be bumped up
12:40:09 kkeithley_, +1
12:40:25 kkeithley_, yep..
12:40:25 kkeithley_, but I am wondering hwo that will work for EL
12:40:50 Not possible to do a compatiblity lib then? Makes upgrading much harder :-(
12:40:55 we don't have glusterfs in EPEL
12:40:55 * lalatenduM thinking of EL5, 6, 7
12:41:05 kkeithley_, CentOS SIG
12:42:09 * lalatenduM thinks we should discuss this in gluster-devel ML
12:42:18 and we (as in you) are the maintainer of samba and ganesha in the CentOS Storage SIG.
12:42:23 lalatenduM: +1
12:42:33 so you'll be on top of rebuilding those.
12:42:34 kkeithley_, yeah, I can rebuild :)
12:42:37 kkeithley_++
12:42:39 lalatenduM: kkeithley_'s karma is now 5
12:43:25 k, is there an action item here?
12:43:30 or items? :)
12:43:35 kkeithley_, but dont have qemu in SIG till now
12:43:48 will discuss this offline with you
12:43:55 who maintains qemu in CentOS SIG?
12:44:10 kkeithley_, not sure , will find out
12:44:18 JustinClift, you can put that on me
12:44:37 lalatenduM: You can make it :)
12:44:44 * JustinClift isn't sure of the details for the ai
12:44:45 #AI notify devel@fedoraproject.org, so that those maintainers will rebuild and new versions of qemu, samba, and ganesha land at the same time as the glusterfs-3.6.0
12:44:46 Michael Tokarev not 100%sure though
12:44:58 lalatenduM: thx
12:45:11 It's more like this though:
12:45:21 #action lalatenduM to notify devel@fedoraproject.org, so that those maintainers will rebuild and new versions of qemu, samba, and ganesha land at the same time as the glusterfs-3.6.0
12:45:36 yeah ;)
12:45:38 (in theory, that should work. we get to find out)
12:45:50 k, anything else for 3.6?
12:46:18 Cool, moving on. :)
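For readers following the libgfapi discussion above: the failure mode being described is that 3.6 bumps the SO_VERSION of libgfapi.so, so packages built against the old libgfapi.so.0 (qemu, samba-vfs-glusterfs, nfs-ganesha) cannot load the library until they are rebuilt against the new soname. The Python sketch below is one minimal, hypothetical way to check which libgfapi sonames a host can actually load before and after an upgrade; the soname candidates listed in it are assumptions for illustration and would need to match whatever SO_VERSION the release actually ships.

```python
# Minimal sketch: probe which libgfapi sonames the dynamic linker can load.
# The candidate names below are assumptions for illustration only; adjust them
# to the SO_VERSIONs actually provided by the installed glusterfs-api packages.
import ctypes

CANDIDATE_SONAMES = ["libgfapi.so.0", "libgfapi.so.7"]  # hypothetical old/new names

def is_loadable(soname):
    """Return True if dlopen() (via ctypes) can resolve and load the library."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for soname in CANDIDATE_SONAMES:
        print(f"{soname}: {'loadable' if is_loadable(soname) else 'not found'}")
    # If only the new soname is loadable, consumers still linked against the
    # old one (e.g. qemu, samba-vfs-glusterfs) will fail until rebuilt.
```

Running this on a host before and after installing the 3.6 packages would show concretely why the qemu/samba rebuilds need to land at the same time as glusterfs-3.6.0.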
12:46:28 #topic Other items to discuss
12:46:43 The etherpad doesn't have any
12:46:48 Anyone got stuff?
12:46:52 yep
12:46:59 overclk: BTRFS stuff
12:47:00 The floor is yours... :)
12:47:02 hagarth, want to discuss about our meeting last week
12:47:31 itisravi, yes, thanks!
12:47:32 overclk: go ahead!
12:48:03 so, last week a handful of folks met and discussed about using BTRFS as bricks and using some of it's "killing" features :)
12:48:13 Cool
12:48:19 btw, the longevity cluster is running 3.6.0beta1
12:48:29 kkeithley++ :)
12:48:30 kkeithley++
12:48:31 Humble: kkeithley's karma is now 2
12:48:33 kkeithley_, awesome :)
12:48:39 features such as data/metadata checksumming, subvolumes, etc...
12:48:47 http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
12:48:50 Yeah. checksumming sounds super useful :)
12:48:52 kkeithley_: Wait
12:48:59 kkeithley_: Let overclk finish first
12:49:05 sry
12:49:09 kkeithley_, np
12:49:45 Form the handful of features, we decided to explore checksumming and subvolumes to start with. checksumming helps with offloading bitrot detection to btrfs
12:49:55 s/Form/From/
12:50:11 overclk: might be worth a check to see if folks are using glusterfs with btrfs in the community already
12:50:26 overclk: Would it be useful to have a vm in rackspace with btrfs attached disk, for running regression tests on?
12:50:44 JustinClift: That would be cool.
12:50:46 * kkeithley_ has used btrfs for bricks in his testing. Hasn't done anything exotic with it though
12:51:08 JustinClift: yes, that would be great!
12:51:09 hagarth, yep! I plan to mail gluster-devel@ asking for inputs and our stratergy
12:51:22 overclk++ :)
12:51:58 itisravi hagarth: I can create a basic slave node in Rackspace pretty easily, but it would be better if someone clueful with btrfs did the btrfs setup bit
12:52:03 Any volunteers?
12:52:14 JustinClift, me
12:52:24 I can pitch in too
12:52:27 overclk: Cool. :)
12:52:28 cool!
12:52:33 itisravi: :)
12:52:46 JustinClift: :)
12:52:51 That about it for BTRFS as of now ... hopefully next week we'd have much to talk about :)
12:53:08 #action JustinClift to setup new slave vm in Rackspace for btrfs testing, and pass it across to overclk + itisravi for btrfs setup
12:53:11 Now, w.r.t. BitRot (without BTRFS :))
12:53:30 overclk itisravi: Which base OS should go on it? CentOS 7?
12:53:46 JustinClift: yes, CentOS 7 would be right
12:53:50 JustinClift, I'm OK with that. Anyone else has another opinion?
12:54:01 JustinClift: what kernel version does CentOS 7 run on?
12:54:07 * JustinClift looks
12:54:16 we can use Fedora as well
12:54:20 overclk: go ahead with bitrot (without btrfs)
12:54:25 it's something like 3.10 AFAIK
12:54:33 Fedora would have latest karnel
12:54:33 Fedora 20 has 3.16X as of now, so maybe that's a better choice
12:54:40 yeah, 3.10 with backported patches
12:54:44 I would like 3.17-rc6 to be used and the latest btrfs-progs (just to minimize data losses :P)
12:55:01 overclk: and the deadlock too ;)
12:55:08 hagarth, :)
12:55:22 Right, we're installing IlluminOS then :p
12:55:53 k, so Fedora that Rackspace has then
12:55:54 Fedora21 alpha seems pretty stable too. You might get a longer run over the life of f21
12:56:04 kkeithley_,+1
12:56:22 Sure. I'll see what Rackspace has, and also if we can force it up to F21alpha
12:56:27 Anyway, moving on...
12:56:42 OK. so to bitor
12:56:43 overclk: You were saying bitrot detection without btrfs
12:56:47 bitrot*
12:57:20 yep.. So, I had sent out a basic approach for BitRot (basically Bitrot daemon) a while ago.
12:57:46 overclk: right..
12:57:50 That was based on a long mail thread started by Shishir...
12:58:08 with a few changes here and there... but the approach is pretty much the same.
12:58:18 yeah..
12:58:41 So, before I send out the task breakup I would appreciate some inputs on the doc.
12:59:13 overclk: will do.
12:59:21 overclk: I will go through it over the weekend
12:59:23 Once we all agree what's expected and the overall approach, things can move forward.
12:59:40 hagarth, itisravi thanks!
12:59:58 Joe Fernandes is working on complance and tiering, which includes bitrot. He should be involved in bitrot
12:59:58 is that http://www.gluster.org/community/documentation/index.php/Features/BitRot
13:00:02 overclk: cool, thanks for the detailed update!
13:00:23 andersb_, yep
13:00:33 Just make sure everything plays well together
13:00:46 kkeithley_, I'll loop in joe
13:00:57 Make an AI for this :)
13:01:08 eg: "#action [name] [stuff to be done]"
13:01:18 dan lambright too
13:01:40 #AI overclk to loop in Joe|Dan regarding BitRot stuffs
13:01:51 #action overclk to loop in Joe|Dan regarding BitRot stuffs
13:01:56 Close ;)
13:01:57 overclk: do sync up with rabhat too :)
13:02:02 JustinClift, Thanks! :)
13:02:08 hagarth, sure
13:02:18 k, we're outta time.
13:02:26 I'm done :)
13:02:29 the cppcheck fixes from kkeithley_ are not merged yet ;(
13:02:36 Thanks for attending everyone. kkeithley_, thanks for the longevity cluster too
13:02:38 lalatenduM: on my list for this week
13:02:43 JustinClift: one sec
13:02:46 hagarth: Make an ai
13:02:48 hagarth, thanks
13:02:48 Sure.
13:02:51 * JustinClift waits
13:03:04 next Wed happens to be the eve of a long weekend in India
13:03:17 Ahhh. Skip next week then?
13:03:29 JustinClift: yeah, that might be better
13:03:32 hagarth, http://review.gluster.org/#/c/8213/ too :)
13:03:34 np
13:03:49 #action hagarth to review cppcheck and http://review.gluster.org/#/c/8213/ this week
13:03:54 :)
13:04:15 k, all done?
13:04:17 that's all from me
13:04:24 Cool. :)
13:04:28 #endmeeting
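The BTRFS-as-bricks idea raised earlier in the meeting (offloading bitrot detection to the filesystem's own data/metadata checksumming) can be exercised today with a plain btrfs scrub on a brick. The sketch below is a minimal illustration, not anything from the meeting: it assumes root privileges, btrfs-progs installed, and a btrfs-formatted brick mounted at a made-up path.

```python
# Minimal sketch: run a foreground btrfs scrub on a (hypothetical) brick mount
# so btrfs re-verifies its data/metadata checksums -- the "offload bitrot
# detection to btrfs" idea from the discussion. Assumes root, btrfs-progs,
# and a btrfs brick; /bricks/brick1 is an invented example path.
import subprocess

BRICK_PATH = "/bricks/brick1"  # hypothetical brick mount point

def scrub(path):
    """Run 'btrfs scrub start -B <path>' (-B waits for completion)."""
    result = subprocess.run(
        ["btrfs", "scrub", "start", "-B", path],
        capture_output=True, text=True,
    )
    return result.returncode, result.stdout + result.stderr

if __name__ == "__main__":
    rc, output = scrub(BRICK_PATH)
    print(output)
    # Checksum ("csum") error counters in the scrub summary would point at
    # silent corruption on that brick; exact exit-status semantics may vary
    # between btrfs-progs versions, so treat the return code as advisory.
```

A bitrot daemon built on top of this could simply schedule scrubs per brick and surface the error counters, rather than checksumming file contents itself.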