15:01:03 #startmeeting Weekly Community Meeting
15:01:03 Meeting started Wed Mar 19 15:01:03 2014 UTC. The chair is jclift. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:01:24 Hi everyone, who's here?
15:01:26 (hi Lala) :)
15:01:36 * msvbhat is present
15:01:48 Hello all
15:01:58 hya!
15:02:09 Etherpad for this meeting is here: http://titanpad.com/gluster-community-meetings
15:02:19 o/
15:02:59 Hey (this is RJ)
15:03:15 Hi RJ, thanks for being part of this. :)
15:03:31 #topic Action Items from previous minutes
15:03:48 * xavih is here
15:03:49 Glad to be here :)
15:04:02 "lalatenduM to set up bug triage process page in wiki"
15:04:12 That was done earlier today, yeah?
15:04:17 jclift, yes
15:04:27 Cool. :)
15:04:29 "jclift_ and johnmark to update guidelines on community standards"
15:04:36 I still haven't done this
15:04:38 Bad me
15:04:45 Punting that to later
15:04:59 "hagarth to send out a note on abandoning patches over 1yo"
15:05:20 Haven't seen that happen, so I'll add an AI to do it this week
15:05:54 #action hagarth to send a note to gluster-devel that patches older than 1 year will be abandoned
15:06:47 Hmmm, has TitanPad dropped out for anyone else just now?
15:07:09 k, it's back
15:07:16 F5 worked for me
15:07:22 :O
15:07:39 :)
15:07:40 * sas_ was late to the party :(
15:07:41 "jclift to get the Rackspace info and credentials from lpabon + johnmark"
15:07:44 That's done
15:07:52 "jclift to give the Rackspace credentials to lalatenduM + purpleidea so they can set up the Gluster puppet rackspace testing stuff"
15:07:56 That's done too
15:08:04 jclift, thanks
15:08:09 "lalatenduM + purpleidea to try setting up Rackspace VMs for automatic testing using puppet-gluster"
15:08:20 lalatenduM: How's that going?
15:08:38 jclift, purpleidea is trying puppet + vagrant on rackspace
15:08:51 Ahhh. Badly written action item?
15:08:54 jclift, once it's ready we will deploy jenkins
15:09:04 Cool
15:09:23 Don't suppose you spun up the new jenkins instance in Rackspace?
15:09:43 jclift, nope
15:09:45 * jclift is just trying to find the owner of the new VM, since purpleidea said it's not him
15:09:53 Heh, it's still a mystery then. ;)
15:09:54 jclift, I am not the one :)
15:09:59 jclift: that's probably lpabon
15:10:02 jclift, I think we should carry this AI to next week
15:10:13 ndevos: Ahhh, cool. I'll ask him.
15:10:28 "jclift will include lpabon in the jenkins testing stuff"
15:10:41 jclift: when you find out who's responsible for jenkins let me know. i'll add projects for the java stuff.
15:10:51 jclift, yup, that would be right
15:11:01 jclift: just need a jenkins login, i'm familiar with it
15:11:03 In progress. He has a Rackspace account, but I haven't pinged him yet to see if he's had time to do stuff
15:11:18 Cool
15:11:50 #action jclift will ping lpabon to find out if he's the owner of the new Jenkins instance in Rackspace
15:12:24 #action jclift will put semiosis in touch with whoever the owner of the new Jenkins instance is, so he can get an account
15:12:30 jclift: do you mean a new jenkins instance different from build.gluster.org?
15:12:37 tdasilva: Yep
15:12:41 #action semiosis will add java projects to jenkins
15:12:47 Good thinking :)
15:13:31 jclift: I'm not aware of a new jenkins instance on rackspace, but lpabon has login credentials to build.gluster.org and the rackspace slave VMs
15:13:38 tdasilva: build.gluster.org is old + having issues, so we're looking to implement a better approach
15:13:41 and he can create new slaves
15:13:59 tdasilva: Cool. He definitely sounds like the right guy then :)
15:14:14 "msvbhat will email Vijay to find out where the geo-replication fixes for beta3 are up to, and try to get them into 3.5.0 beta4 if they're not already"
15:14:22 msvbhat: How's that looking?
15:15:23 msvbhat: ping?
15:15:46 jclift: Done
15:15:52 Cool.
:)
15:16:24 k, does anyone know what this one is about? "several new xlators (encryption, cdc, changelog, prot_client, prot_server, readdir-ahead, dht) have .so.0 and .so.0.0.0 in both release-3.5 and master branch"
15:16:39 * lalatenduM is wondering if we have documentation available for Geo-rep in 3.5
15:16:41 Seems to have been added to the agenda in the wrong spot
15:16:59 sounds like autotools added libtool versioning to the xlators
15:16:59 #action lalatenduM to find out if we have docs available for geo-rep in 3.5
15:17:01 jclift: I think it's about removing some xlators from the spec, so that they don't get built
15:17:09 jclift, haha :)
15:17:11 :)
15:17:26 jclift: lalatenduM: They are not complete :) the geo-rep docs
15:17:43 msvbhat: nah, the .so for an xlator should not be versioned, that is some switch in the Makefile.am
15:17:45 lalatenduM: Well, that's your AI done easily then
15:17:53 msvbhat, can you take the documentation part?
15:18:34 jclift: lalatenduM: Sure, they are already available as part of rhs-2.1. We need to port them to upstream
15:18:47 ndevos: k. That's definitely bug-sounding. There's no mention on the etherpad of an associated BZ for it
15:18:53 msvbhat, cool
15:19:05 msvbhat: Are you ok to do that porting?
15:19:24 jclift, I think I know abt the so file bug
15:19:31 i mean .so file
15:19:41 jclift: xlators with a .so.0.0.0 are surely a bug we want to have fixed in 3.5
15:19:46 jclift: I will do it with some help from the doc team. I need to know the best way to do it
15:20:04 jclift, that is on kaleb actually
15:20:08 lalatenduM: k. Are you ok to find out if there's an appropriate BZ for it, and if so to make sure it's on the 3.5.0 blocker list?
15:20:15 Ahh, cool
15:20:28 jclift, I know of a RHS bug but not sure if we have cloned it
15:20:29 So it's already all in progress, we don't need to do anything about it now?
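[Editor's note on the libtool versioning issue above: a hedged sketch of the kind of Makefile.am switch ndevos mentions. Libtool's `-avoid-version` link flag stops a module from being installed as `name.so.0.0.0` with `.so.0`/`.so` symlinks, which is the usual arrangement for dlopen-able plugins like xlators. The target name and paths below are illustrative, not copied from the actual GlusterFS build files.]

```makefile
# Illustrative Makefile.am fragment for a translator module (names are
# placeholders). Without -avoid-version, libtool installs
# encryption.so.0.0.0 plus .so.0 and .so symlinks; with it, only a
# plain encryption.so module is produced.
xlator_LTLIBRARIES = encryption.la
xlatordir = $(libdir)/glusterfs/$(PACKAGE_VERSION)/xlator/encryption

encryption_la_SOURCES = encryption.c
encryption_la_LDFLAGS = -module -avoid-version
```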
15:20:35 jclift, yes
15:20:44 k, moving on then
15:21:06 msvbhat: For the geo-rep docs to be ported to upstream, any idea if that's already under way?
15:21:07 lalatenduM: can you add me on CC of that bug? I can patch that pretty quickly
15:21:20 ndevos, sure
15:21:44 lalatenduM: Do it now, or should we make an action for that
15:21:45 ?
15:21:46 jclift: Not sure. I will find out and will take it up, if not underway
15:21:59 jclift, yup on it :)
15:22:25 #action msvbhat to find out if the porting of geo-rep docs from RHS to upstream is already underway
15:22:29 lalatenduM: :)
15:22:49 k, moving on
15:22:53 jclift: #action the .so.0.0.0 one too please, kaleb can then read up quicker :)
15:23:00 k
15:23:21 Hmmm, what's a good AI for it?
15:23:33 ndevos: Can you action it, as you'll word it more clearly
15:23:50 jclift, put the AI on me :)
15:23:59 ndevos, it's BZ 1076127
15:24:00 #action lala to find the bug for the xlator .so.0.0.0 and inform ndevos so he can fix it
15:24:18 ndevos, let's talk abt it on #gluster-devel
15:24:21 * jclift hopes MeetBot is working today. Manual note creation took a while last time. ;)
15:24:37 #link https://bugzilla.redhat.com/1076127
15:24:55 #topic Gluster 3.6
15:25:37 We had the Planning Meeting last week, which seemed good. But there's still mention in the Etherpad about a Go/No-Go meeting.
15:25:49 Does anyone know what that's about, because I don't?
15:26:29 k, moving on
15:26:30 * msvbhat thinks the Go/No-Go meeting will be held later when the development is done
15:26:37 #topic 3.5.0
15:26:43 I guess it's about branching from master? But that should only be done when all the core features are in
15:26:57 * jclift shrugs
15:27:07 We can ask Vijay when he's back :)
15:27:35 :)
15:27:56 Asked Vijay about this earlier today, and he said:
15:27:57 11:55 3.5 is in the final lap
15:27:57 11:55 most of the items in the blocker list have been addressed
15:28:00 11:56 excepting quota and glupy possibly ..
15:28:02 11:56 We can provide an update that the release might happen over the next two weeks
15:28:26 It sounds like the .so naming thing above needs to be done too
15:28:42 Atin also has a patch up for review that we'd like in 3.5.0 too:
15:28:54 http://review.gluster.org/#/c/7292/
15:29:04 Does anyone have time to look over that? It seems pretty simple
15:29:23 It's changing the default behaviour of remove-brick so it no longer force-commits
15:29:37 eg stops the data loss risk from a simple mistake
15:30:01 lalatenduM: Your kind of thing?
15:30:13 jclift, yeah I will take a look
15:30:18 Thanks. :)
15:30:35 jclift, somehow missed the patch till now :)
15:30:37 #action lalatenduM to do a review of Atin's patch http://review.gluster.org/#/c/7292/
15:30:46 lalatenduM: No worries. :)
15:31:10 Anyone else have anything to mention for 3.5.0?
15:31:48 k, moving on
15:31:48 * kshlm is here
15:31:58 Cool :)
15:31:59 jclift, I think we still have a mem leak issue
15:32:05 jclift, in 3.5.0
15:32:07 jclift, atin's patch is on master.
15:32:14 Oh, cool
15:32:19 he'll send a separate patch for 3.5 with just a deprecation message.
15:32:37 kshlm: We need it accepted into master first don't we?
15:32:41 eg to cherry-pick back?
15:32:46 * kshlm had forgotten today was Wednesday.
15:32:53 Heh ;>
15:33:05 Ahhh
15:33:07 Sorry
15:33:10 Both patches are gonna be separate.
15:33:12 I understand what you mean
15:33:14 yeah
15:33:21 Different one. Got it.
15:33:31 jclift, the mem leak issue from Emmanuel's mail
15:33:40 We need Atin to write the release-3.5 branch patch for it
15:33:55 Just asked him for it on the review.
15:34:02 Thanks
15:35:12 Anyone know how we should approach this memory leak problem?
15:35:46 jclift, I think hagarth is our answer :)
15:35:58 Does anyone have time to run NetBSD up in a VM, and then isolate the commit that caused the problem?
15:36:34 lalatenduM: He's super busy recently, so we probably shouldn't rely solely on him for this.
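[Editor's note on isolating the offending commit: once a scripted reproducer exists, `git bisect run` can automate this kind of search. The snippet below is a self-contained toy demonstration of the mechanics only — the throwaway repo, `status.txt`, and `check.sh` are stand-ins, not real GlusterFS artifacts; a real run would use the NetBSD leak reproducer as the check script.]

```shell
# Toy demonstration of `git bisect run` in a throwaway repository.
# One commit deliberately "introduces the leak"; the checker script
# stands in for a real reproducer (exit 0 = good, non-zero = bad).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo ok > status.txt
git add status.txt
git commit -qm "baseline (known good)"
git tag known-good

git commit -q --allow-empty -m "harmless change"

echo leaky > status.txt
git commit -qam "introduces the leak"

git commit -q --allow-empty -m "later work on top"

# The checker: exit 0 when the tree is healthy, 1 when it "leaks".
printf '#!/bin/sh\ngrep -q ok status.txt\n' > check.sh
chmod +x check.sh

git bisect start HEAD known-good
result=$(git bisect run ./check.sh)
echo "$result" | grep "is the first bad commit"
git bisect reset
```

With the layout above, bisect converges on the "introduces the leak" commit; exit codes 1-127 from the run script mark a revision bad, and 125 means "skip this revision".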
Emmanuel Dreyfus seems to want assistance.
15:37:18 I'm putting this down as "We're open to ideas on how to solve this one."
15:37:43 lpabon: Cool.
15:38:07 lpabon: sorry i'm late.. took a little while to drive here
15:38:24 lol, i sent a message to myself...i need sleep
15:38:27 lpabon: Np. We had some stuff regarding you before
15:39:14 lpabon: There's a VM in rackspace that has -jenkins in its name. Seems to be new-ish, started within the last month. Is that one of yours?
15:39:56 yes, i am playing with making it a regression system. Once I do, i'll clone it N times to make regressions run in parallel
15:40:05 but it's low priority atm
15:40:31 is the meeting over? i have a suggestion on the build system
15:40:32 Cool. I just wanted to know who the owner is, as it wasn't obvious in the Rackspace GUI
15:40:44 lpabon: No, the meeting is still very much going
15:40:49 http://titanpad.com/gluster-community-meetings
15:41:07 lpabon: We're on the 3.5.0 item. Just finished it mostly.
15:41:21 TitanPad keeps dropping though :/
15:41:47 yeah, i don't see anyone else talking.. i think i'm in the correct channel, no?
15:41:54 You are
15:41:59 I just got busy typing
15:42:10 And TitanPad has dropped out.
15:42:13 Gah
15:42:24 Moving on
15:42:33 #topic 3.4.3
15:43:07 In the now-down etherpad it mentioned hagarth is planning to release it this week
15:43:16 Anyone have objections to that?
15:43:53 jclift, nope
15:43:53 #action jclift to find a more stable Etherpad than TitanPad
15:44:15 GluserPad
15:44:20 GlusterPad ;)
15:44:32 jclift, nice :)
15:44:52 k, it's back
15:45:06 On the etherpad it mentions a few items for 3.4.3
15:45:18 There's https://bugzilla.redhat.com/show_bug.cgi?id=859581
15:45:23 Bug 859581: high, unspecified, ---, vsomyaju, ASSIGNED, self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
15:46:22 Looks like we need reviewers for this change: http://review.gluster.org/#/c/6737/
15:46:42 The patch is in the Posix handling code
15:47:19 lalatenduM kshlm msvbhat ndevos: Are any of you guys familiar with that section of the code, and could take a look? It seems simple code-wise
15:47:25 http://review.gluster.org/#/c/6737/2/xlators/storage/posix/src/posix-handle.h
15:47:51 jclift, nope :(, but will be some day :)
15:48:00 :)
15:48:16 will vote for ndevos :)
15:48:18 jclift: maybe Kaleb?
15:48:19 jclift: I think that's a backport...
15:48:23 I'm not familiar with it but I'll take a look.
15:48:28 kshlm: Thanks
15:48:50 #action kshlm will look into the review of http://review.gluster.org/#/c/6737/2 so we can get it into 3.4.3
15:49:21 ndevos: Yeah, it could be. Still needs reviewers though, etc. ;)
15:49:35 kshlm: that patch is available for review on master, release-3.5 and release-3.4; none seems to have been merged yet (bug 859581)
15:49:37 Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=859581 high, unspecified, ---, vsomyaju, ASSIGNED, self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
15:50:05 On the Etherpad it mentions two other 3.4.3 related items:
15:50:07 pk seems to have reviewed it.
15:50:22 pk has reviewed it.
15:50:35 A BZ about a bug that Susant can't reproduce, so that one's been dropped
15:50:42 yeah, pk +1'd it, it only needs a +2 from the maintainers to merge it
15:51:07 kshlm: maybe you can ask pk to +2 those patches instead?
15:51:13 who's the maintainer for posix?
15:51:20 And an item about the patch that Yang Feng requested on gluster-users, but which can't be merged because the code around it has changed substantially between 3.4 and 3.5
15:51:34 kshlm: Avati I suppose
15:51:38 * jclift nukes both of these old items from the etherpad
15:51:39 ndevos, I will.
15:52:05 the MAINTAINERS file does not have someone specific for the posix xlator
15:52:46 k, do you guys want to continue this on gluster-devel after this meeting?
15:52:58 Moving on...
15:53:00 #action kshlm to talk to pk about +2'ing the patches for bug 859581
15:53:00 so avati or vijay can +2 it and take it in then.
15:53:01 Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=859581 high, unspecified, ---, vsomyaju, ASSIGNED, self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
15:53:12 #topic other items
15:53:50 kshlm: We're in the last few minutes of allotted time. So, do that in gluster-devel. ;)
15:53:55 #action jclift to do a live webcast with johnmark talking about glusterflow!
15:54:02 it's your turn
15:54:10 hehe, he's putting it off
15:54:20 discretestates: Want to intro yourself?
15:54:36 hey all
15:54:40 hi again
15:54:42 I'm RJ
15:54:48 I know Jay Vyas
15:54:49 :)
15:54:55 He sent me your emails about Gluster and GSoC
15:54:59 I'm really interested in working with you
15:55:03 welcome discretestates!
15:55:15 You saw the proposal I sent to the list -- I've edited it more and am looking into people's suggestions
15:55:38 Since Jay has volunteered to mentor, that's great -- everyone seems friendly, so I can ask questions if needed
15:55:51 We will need to talk with Fedora since it's under them
15:55:56 I spoke briefly with John Mark
15:55:56 discretestates: We're happy to have you. You're enthusiastic, positive, and have a clue. That's all good. :)
15:56:25 but I'm not sure if Fedora is expecting a Gluster project or not
15:56:30 Thanks, jclift
15:56:46 So, I'll work on all that and track John Mark and others down :)
15:56:55 discretestates: Cool, was just about to ask.
15:57:09 If you want to introduce me to people, that's always helpful, too :)
15:57:16 I just don't want to catch Fedora off guard
15:57:23 I sent out a mail regarding that to the Fedora GSoC admin. I haven't heard back yet.
15:57:29 Apparently there's not much time left to chase stuff up (3 days?), so if you're trying to get hold of someone but can't, let us know.
15:57:34 Thanks!
15:57:40 discretestates: unclear to me from the ML post, are we talking about a RESTful API for glusterfs control, or data/filesystem access?
15:57:47 data / filesystem access
15:58:00 discretestates: hi, do you mind taking a look at gluster-swift?
15:58:11 lpabon: That was on my list. I'd love to
15:58:11 discretestates: right, we have UFO already
15:58:16 UFO?
15:58:20 ndevos: You might be able to reverse proxy that too. ;)
15:58:25 please don't call it ufo
15:58:31 lol
15:58:44 ufo is definitely a nice goal, but we are far far from that
15:58:45 discretestates: also I'd like to talk to you about the possibility of using my gluster java libs for this
15:59:01 lpabon: whoops, noted
15:59:05 semiosis: Jay mentioned your libs. That'd be great. Someone also mentioned Python libs
15:59:16 python meh
15:59:19 * lalatenduM thinks it is g4s now :)
15:59:19 java woo!
15:59:19 If people could respond to my email on the dev list, I'd appreciate it
15:59:25 g4s? okay
15:59:28 python rules-- java meh
15:59:28 jclift: you can reverse proxy almost anything, but it's the protocol/api the client speaks that you need to support (a web browser doesn't speak Swift)
15:59:35 lpabon: :D
15:59:38 :-D
15:59:46 ndevos: Thanks. :)
15:59:47 I want to make sure I know about the current work in the community so I don't duplicate
15:59:54 I'll look into the swift backend and g4s
16:00:11 yes, it may provide most of what you are looking for...
16:00:14 Jay and I were thinking of emulating the WebHDFS API to allow Gluster to be used with any WebHDFS client (spring, fluentd, others have them)
16:00:37 * jclift points out we're at the end of our time limit
16:00:39 discretestates: the best part is that it uses WSGI, so you can write your own *filter*
16:00:50 oh awesome, okay
16:01:01 discretestates: and easily insert it in the I/O path
16:01:05 very nice
16:01:21 Cool. That sounds like a successful intro, discretestates :)
16:01:23 jclift: final note, let me know if you have logstash-related questions for glusterflow. i'm a logstash dev too
16:01:35 semiosis: Oh sweet
16:01:35 discretestates: you can also subclass a gluster-swift class and write your own app if you need to
16:01:35 thanks, semiosis!
16:01:48 yw
16:01:49 thank you everyone :0
16:01:52 :)
16:01:53 * :)
16:02:00 semiosis, good to know :)
16:02:16 lpabon: now you only need to make g4s location-aware, maybe change the storage-url according to which server hosts a brick with the file?
16:02:20 re GlusterFlow...
I need to get Glupy working in master then 3.5.0 before I do any real promo of it
16:02:22 one thing on REST -- at some point we will need a method to communicate with glusterd for management other than the CLI
16:02:35 ndevos: O.o
16:02:40 Once that's done I'll promo the heck out of it :)
16:02:57 lpabon, I think that's already on the cards for 3.6
16:03:09 k, going to endmeeting in a sec
16:03:16 ndevos: maybe if we send metadata with it, a pipeline filter can redirect
16:03:23 Any objections? eg other topics ppl want to bring up?
16:03:38 lpabon: and you think I can make sense out of that?
16:03:47 fyi, i have to leave.. (i'm running late for my next meeting) .. ttyl
16:03:54 i'm setting up a SonarQube instance for gluster projects. if anyone is interested please ping me later
16:03:56 ndevos: :-D.. we can discuss offline.. ttyl
16:04:01 * ndevos really isn't a swift guy, unless it's on a squash court
16:04:01 :)
16:04:09 #endmeeting