15:00:29 #startmeeting
15:00:29 Meeting started Wed Feb 19 15:00:29 2014 UTC. The chair is hagarth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:29 Useful Commands: #action #agreed #help #info #idea #link #topic.
15:00:41 hagarth, hi
15:00:46 who do we have here today?
15:00:49 * lalatenduM here
15:00:54 * purpleidea ay
15:00:55 Hello
15:00:57 hello!
15:00:59 * sas here
15:01:02 * jclift_ gets coffee
15:01:11 But I'm here. :)
15:01:12 * kkeithley aqui
15:01:22 #topic AI from last week
15:01:34 * xavih here
15:01:47 I think xavih's patches did get reviewed over last week
15:01:54 we need to get that in.
15:01:57 * ira is here.
15:02:04 Here
15:02:28 I have updated the 3.6 schedule in the planning page as per last week's discussion
15:02:41 #link http://www.gluster.org/community/documentation/index.php/Planning36
15:02:48 xavih's patches only have +1 on release-3.4 and release-3.5. Both pass regression. Did it get +2 on master?
15:03:24 kkeithley: good catch, we still need to review it on master
15:03:34 will do that for master
15:03:35 kkeithley: +1 on 3.4 and 3.5, not reviewed on master
15:04:08 lalatenduM managed to send out an email on CentOS SIG, thanks for that!
15:04:29 I think we are fairly covered on AIs, let us move on
15:04:44 #topic 3.5.0
15:05:11 beta3 has got some test coverage
15:05:43 we have encountered issues in encryption and compression - my take is to call them beta features for 3.5.0. any thoughts?
15:06:02 Sounds reasonable.
15:06:13 Have the issues been BZed?
15:06:18 Depends on the issues?
15:06:29 jdarcy: yes, they have been BZed.
15:06:39 hagarth, agree with ira
15:07:04 ira: I encountered one data corruption with both compression + encryption loaded
15:07:10 What if the feature is mostly broken?
15:07:22 hagarth: Do you know which was the culprit?
15:07:37 both xlators cannot work well with most of our performance xlators
15:08:03 Are they even beta quality?
15:08:23 ira: I suspect that it is related to compression - https://bugzilla.redhat.com/show_bug.cgi?id=1065634
15:08:24 Bug 1065634: urgent, unspecified, ---, vbellur, NEW , Enabling compression and encryption translators on the same volume causes data corruption
15:08:43 ira: I think they are ready to be tested out in this release
15:09:07 I also encountered an issue with compression xlator, input/output error https://bugzilla.redhat.com/show_bug.cgi?id=1065644
15:09:09 Bug 1065644: unspecified, unspecified, ---, vraman, NEW , With compression translator for a volume fuse mount I/O is returning input/output error
15:09:30 however needinfo is on me now for the bug
15:09:45 lalatenduM: right
15:09:46 For encryption, I'm a little more worried about 1065639 (Crash in nfs with encryption enabled).
15:10:11 jdarcy: Ironically, crashes worry me less than the straight out data corruptions ;)
15:10:15 If encryption and compression can't be used together, well that's sad, but OK. If encryption can't be used with *NFS* that seems more serious.
15:10:21 Hmmm, do we have an existing way to include new features in a release (eg the network.compression stuff), but state that it's "for preview only". Eg don't use it for production data
15:10:33 ?
15:10:44 jdarcy: encryption cannot be used with NFS - crypt blocks all access fops
15:10:47 ira: In my world, data loss/corruption is way worse than a crash, because it implies *permanent* loss of access to that data.
15:11:08 jdarcy: (nod) We're in agreement.
15:11:16 hagarth: I'd call that a blocker for including encryption in a release.
15:11:17 jclift_: we will obviously release note it
15:11:24 np
15:11:46 +1 on NFS + encryption.
15:11:47 jdarcy: even for getting some testing coverage with other access protocols?
15:11:59 * jclift_ apologises in advance if his communication is a bit crap. Very stressed.
15:12:21 other access protocols being fuse mostly :)
15:12:37 I assume libgfapi also ;)
15:12:58 right, libgfapi as well
15:12:59 fuse=gluster=gfapi
15:13:02 hagarth: My gut says yes, even then. We just don't have enough control over that, and if they *just once* try to access that volume over NFS then they risk screwing it up even for subsequent FUSE access.
15:13:09 versus NFS
15:13:40 I think the crash in bz 1065639 is due to a graph cleanup change which has been added in 3.5
15:13:46 How much of our client access is over NFS?
15:13:58 hagarth: OTOH, if it's just a locking problem, then they wouldn't actually be losing data.
15:14:07 * jdarcy waffles. Send syrup.
15:14:23 jdarcy: canada has syrup. you send waffles
15:14:24 init() failed with encryption and the resulting graph cleanup caused the process to crash.
15:14:52 ira: for most non-linux, non-windows use cases, users fall back to NFS.
15:15:32 I'd feel a bit more comfortable if we could temporarily *enforce* mutual exclusivity of NFS and encryption.
15:15:32 hagarth: 1% 5% 10% 25% 50% 90%?
15:15:33 ok, here's a quick poll - should we not package encryption given that it doesn't work over NFS? options:
15:16:06 hagarth: rephrase to remove the not?
15:16:18 purpleidea: here goes
15:16:34 should we package encryption since it does not work over nfs?
15:16:36 options:
15:16:57 1. yes 2. No, let us get some testing coverage by bundling it in a release
15:17:31 3. package it but document it as broken
15:17:43 jdarcy: crypt blocks access() and sends back an error to the client, hence operations from nfs clients fail when crypt is loaded.
15:17:45 does the 3rd option make sense?
15:17:49 er, where do we want encryption with NFS? We're not talking about NFS+krb5p
15:18:02 lalatenduM: 2 and 3 seem to be related
15:18:10 3
15:18:14 2
15:18:17 2. I don't want to corrupt someone's data.
15:18:32 I vote yes, package it, but let's make sure front-line folks are prepared for calls about NFS access hanging.
15:18:40 hagarth, agree if option 2 includes documentation to declare it broken
15:18:47 Isn't the fix just to require the nfs.disable option on the volume and still go with option 2?
15:18:57 3
15:18:59 One sec... BOTH options result in release.
15:19:05 MU!
15:19:05 ira: OSX users use NFS a fair bit, depending on their environment (can be a pain)
15:19:08 hmm, I think I screwed up on the options :)
15:19:33 social: I would be in favor of that too.
15:20:00 I'd release it if it forced the issue.
15:20:02 but we seem to be trending towards packaging with adequate documentation + warnings & possibly disabling nfs when crypt is enabled
15:20:05 kkeithley: I think like this: client <- NFSv3 -> nfs-server / glusterfs-client <- encrypted -> glusterfs-server
15:20:17 ndevos: that is right
15:20:18 hagarth, yes
15:20:28 hagarth, I mean that's right
15:20:35 isn't it just safer to enforce nfs.disable=on when encryption is enabled?
15:20:43 ndevos: right
15:21:01 I think it would be fairly easy for the crypt translator to check whether it's running in the same graph as NFS.
15:21:17 also you could prepare whole documentation and say it's demo and it might get enabled later if the nfs gets fixed
15:21:43 social, yeah agree
15:21:53 We shouldn't rely on people just reading docs. Just saying.
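
[Context note: the "enforce nfs.disable=on" idea above maps onto an existing, standard Gluster volume option. A minimal sketch of what an admin-side workaround looks like, assuming a volume named "testvol" (the volume name is illustrative; the option and commands are stock gluster CLI):

    # turn off the Gluster NFS server for this volume before enabling crypt
    gluster volume set testvol nfs.disable on
    # confirm the option took effect
    gluster volume info testvol | grep nfs.disable

The discussion that follows is about making the crypt translator itself refuse to coexist with NFS, so users cannot skip this step.]
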
15:21:53 social, jdarcy: agree, let us ensure that we don't give users a chance to have nfs and crypt work together
15:22:00 till issues are addressed
15:22:02 Yeah
15:22:18 What about compression?
15:22:44 I think the same applies to compression
15:22:54 IMO it is not clean enough to be declared GA
15:23:03 As long as there's potential for data corruption *with* encryption, I'll bet there's potential *without* as well.
15:23:41 So I'd say turn it off until we better understand that corruption.
15:23:43 jdarcy: ^^ then it should be nacked
15:24:14 none of these translators should be enabled by default
15:24:36 I will get some more tests going with compression
15:24:40 Not much information in https://bugzilla.redhat.com/show_bug.cgi?id=1065634 unfortunately.
15:24:42 Bug 1065634: urgent, unspecified, ---, vbellur, NEW , Enabling compression and encryption translators on the same volume causes data corruption
15:24:54 Also unknown: how much does compression even help?
15:25:06 yes, don't ship anything (without a BIG warning) where adventurous users suddenly have some corruption
15:25:20 mea culpa - will try to add some more detail to that bz
15:25:31 jdarcy: that needs to be quantified as well
15:25:37 ndevos: +1
15:25:54 jdarcy: with zlib .. not much.
15:26:00 ndevos, +1
15:26:35 ok, let us take a call on compression in our next meeting or over a ML discussion after we get some more detail.
15:27:10 hagarth, how much time do we have for the 3.5 release?
15:27:23 Will there be a possibility to drop the warning and say that it's now safe in a minor release, or will it have to wait in the off mode till the next major?
15:27:29 we also had a 3.5 test week in the BLR Red Hat office last week, lalatenduM - would you want to provide some updates from there?
15:27:41 hagarth, sure
15:28:06 lalatenduM: there is a memory leak reported on gluster-devel, want to understand that better and get convinced myself that geo-rep & quota work fine
15:28:07 We had organised a test day for 3.5beta3 and a hackathon
15:28:20 once those are addressed, we can firm up on a release date for 3.5
15:28:26 around 20 people turned up
15:28:31 my experience years ago with lbx in X11 is that compression was a win only on low speed links and some of the win was probably lost due to the cpu (50Mhz 486 and Pentium Pro days) being a bottleneck.
15:28:33 hagarth, yeah
15:29:18 kkeithley: for a wan link, I can see it, especially if we are using high speed compressors like lz4...
15:29:19 There were 21 bugs logged on the QA day, plus 2 code patches and 1 doc patch
15:29:51 lalatenduM: nobody tested 3.4.3alpha1 on the test day?
15:30:00 ira: for most client - server transfers we normally do not go over a wan link
15:30:13 kkeithley, nope, maybe we missed that one
15:30:15 kkeithley: georep.
15:30:50 kkeithley: Otherwise, I pretty much agree... little use.
15:30:53 kkeithley: we are running most of the 3.4.3 patches already in production (really needed the memleak fix)
15:31:14 lalatenduM: thanks for the update
15:31:22 * lalatenduM not sure if he is interrupting the other discussion
15:31:26 :)
15:31:27 social: which BZ for the memleak?
15:31:31 we are moving to 3.4, so let me update the topic :)
15:31:35 #topic 3.4
15:31:59 kkeithley: sec, I'll just open our git and paste the stuff we have
15:32:07 social: are you referring to the libxlator mem leak?
15:33:00 BUGS: 977497 841617 1057846 971805
15:33:01 seems to be bz 841617
15:33:37 we'd better pull in the stuff that seems urgent, as we won't have another window for some time
15:34:19 kkeithley: I used to use the compression in X11 remotely (eg to data centers in other countries), because it made it workable. Seriously annoyed they dropped it because it made things unworkable. :(
15:34:20 I already have it as a (candidate) blocker in the tracker bz
15:34:55 kkeithley: what would be our preferred mode for tracking - backport wishlist or the tracker bz?
15:35:14 tracker bz I suppose.
15:35:15 I'd note that we'd love to see this fixed 1063832 - it's annoying
15:35:23 * ndevos prefers a tracker, that gets updated automatically
15:35:53 right, we probably should add a pointer to this bz from the backport wishlist wiki page
15:36:34 ok, anything more on 3.4?
15:37:03 social: please feel free to update the tracker bz or let kkeithley know
15:37:09 moving on
15:37:16 #topic rpm packaging changes
15:37:18 looks like the fix is in master, needs backport to 3.4
15:37:35 ndevos, jclift_: do we need any action on rpm packaging?
15:37:44 Yeah
15:37:58 hagarth: no not really ... oh well, I'll leave that to jclift_
15:38:20 jclift_: what are the changes that we need now?
15:38:51 Niels has proposed to split out glupy and other less used translators into a glusterfs-extra-xlators rpm
15:39:02 (glusterfs-server and glusterfs-geo-replication stay as they currently are)
15:39:10 ndevos: right
15:39:28 AFAIK, it's so there no longer needs to be a dependency on Python for the base glusterfs rpm
15:39:40 * jclift_ isn't against it
15:40:02 jclift_: sounds like a good idea, shall we consider this for 3.6?
15:40:13 I'm generally in favor of splitting out parts that add extra dependencies.
15:40:18 I'm more wondering if we want it for 3.5 (Niels is more for this), or for 3.6
15:40:34 jdarcy: +1
15:40:37 * jdarcy hates it when other packages pull in dependencies specific to features he doesn't even use.
15:40:45 I'm kind of thinking it's a bit late in the cycle for 3.5, but I've already written the code to do it, and it's in Gerrit waiting for review
15:41:06 so, glupy in the glusterfs (base) package introduces a dependency on Python - that's the reason to put it in glusterfs-extra-xlators
15:41:07 So we could literally do it today (I need to fix a glupy test first though, I realised earlier)
15:41:21 jclift_: I am open to 3.5 as well. Let us review it and take it further?
15:41:26 Do you want to break out glupy on its own because it hauls in python?
15:41:39 * jclift_ shrugs
15:41:50 #action /me to consider new rpm packaging for 3.5
15:41:59 mark it as a python module...
15:42:03 Honestly I'm not bothered. Glupy isn't the only thing that uses python
15:42:03 ndevos: I'd actually say we should have gluster-python to include the gfapi bindings as well.
15:42:18 kkeithley, should we put the samba hook script pkg also in 3.5?
15:42:35 what machines running gluster don't already have python installed?
15:42:38 jdarcy: that is currently in glusterfs-api, together with libgfapi
15:42:59 You shouldn't get python unless you ask for it here... ;)
15:43:01 purpleidea: ndevos mentioned that some minimal cloud images are able to exclude it as part of their slimming down
15:43:09 jclift_: ah
15:43:09 purpleidea: It's actually not just python, but python-devel as a build dependency too.
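
[Context note: the split being discussed is a standard RPM subpackage. A minimal sketch of what such a spec-file change could look like, assuming the base package is named glusterfs — the summary text and file glob are illustrative, not the actual patch in http://review.gluster.org/#/c/6979/:

    %package extra-xlators
    Summary: Extra Gluster filesystem translators
    Requires: %{name} = %{version}-%{release}
    # glupy is implemented in Python, so only this subpackage pulls it in
    Requires: python

    %description extra-xlators
    GlusterFS translators that are less commonly used and drag in extra
    dependencies (e.g. the glupy Python bindings), split out so the base
    glusterfs package no longer depends on Python.

    %files extra-xlators
    # illustrative path; xlators install under the versioned xlator dir
    %{_libdir}/glusterfs/%{version}/xlator/features/glupy*
]
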
15:43:16 purpleidea: it is intended to keep images for cloud environments small, these don't seem to have python installed
15:43:21 we have samba4 rpms on download.g.o for 3.4.2+
15:44:05 In short, it seems like re-arranging the rpms isn't a problem
15:44:16 We just need to figure out how, and if for 3.5/3.6
15:44:20 jclift_: right, let us get it in soon if we need it for 3.5
15:44:24 I'd agree with that.
15:44:45 ok, anything more on rpm packaging?
15:44:45 If I saw a patch to split that stuff out, I'd +1 it.
15:44:51 +1 it would be great to get this in so people can use it sooner. it's a great project
15:44:58 * jclift_ points out that even for 3.5, we need _something_ for glupy which has to have gluster.py renamed anyway
15:45:15 (eg glupy is currently broken in release-3.5 and master branches)
15:45:16 geo-rep has a lot of python in it. I guess those cloud environments just aren't going to do geo-rep.
15:45:28 kkeithley: Yeah, the syncdaemon
15:45:32 kkeithley: Apparently not.
15:45:43 kkeithley: we probably should re-write syncdaemon in C too ;)
15:45:56 * ndevos has python on all his systems...
15:46:07 erlang
15:46:19 There used to be ways to turn Python blobs into standalone executables, not sure if any of them still work.
15:46:36 jdarcy: Here's a patch to split Glupy into a new glusterfs-extra-xlators rpm: http://review.gluster.org/#/c/6979/
15:46:50 let us move on folks - we seem to have a lot more topics than time permits us today.
15:46:58 #topic 3.6
15:46:59 jdarcy: It's failing on the glupy.t regression test though. /me will fix today.
15:47:02 np
15:47:28 I'm surprised someone hasn't written a python front end for gcc.
15:47:45 as noted earlier, I have moved the 3.6 schedule as per last week's meeting.
15:47:50 kkeithley: http://gcc.gnu.org/wiki/PythonFrontEnd
15:48:07 kkeithley, http://gcc.gnu.org/wiki/PythonFrontEnd
15:48:08 if you have a feature to submit, please do so by the feature proposal deadline
15:48:14 yup, I just googled it myself and found it
15:48:16 purpleidea, you beat me :)
15:48:19 #link http://www.gluster.org/community/documentation/index.php/Planning36
15:48:24 lalatenduM: hehe
15:48:42 we also have had snapshot patches that landed on master for review yesterday
15:49:02 those patches almost DDOS'd our build infra :)
15:49:03 Didn't thousand-node-glusterd get moved to 4.x?
15:49:03 hagarth, cool :)
15:49:25 * lpabon dreams of glusterfs all in python. no memory allocations, buffer overruns, classes...
15:49:26 hagarth: we have geo-rep on cloud >.>
15:49:29 jclift_: it did, we can update the page after we have a meeting in 2 weeks
15:49:42 lpabon: +1
15:49:50 lpabon, haha
15:49:58 rjoseph updated the whole of the snapshot feature into a single patch today
15:50:03 lpabon, what about performance :)
15:50:14 hagarth, that's nice
15:50:22 we need some help in reviewing that patch
15:50:29 Kudos to rjoseph for that. :)
15:50:31 lalatenduM: something, something, only a linear difference + rewrite slow parts in c
15:50:41 np
15:50:42 #link http://review.gluster.org/7128
15:50:58 concurrent python is fast enough (but i don't want to steer the conversation) we can take it offline :-) .. (fyi look at openstack swift)
15:51:09 I will start a mailing list discussion on that - we probably can divide and conquer
15:51:18 * lalatenduM will look for probable Coverity complaints :)
15:51:30 lalatenduM: thanks!
15:51:55 so one more topic that spans both 4.0 and 3.6
15:51:56 lpabon, yeah another perspective :)
15:52:04 hagarth: so no more 50+ patch patchbombs in gerrit?
15:52:35 kkeithley: no thankfully, I just have to figure out the magic gerrit gsql query to abandon those patches
15:52:52 #action hagarth to start a thread on review of snapshot patch
15:52:53 DROP TABLE
15:53:02 jclift_: :)
15:53:05 :)
15:53:15 we have a revamped stripe proposal in 4.0
15:53:36 should we consider getting that into 3.6 as our current implementation is mostly unusable?
15:54:00 Damn. Didn't know that.
15:54:07 this scheme also has the benefit of offloading something from 4.0
15:54:47 If we don't have immediate ideas, I can follow up on this with a broader discussion on gluster-devel
15:55:26 that seems to be it. #action hagarth to start a mailing list discussion on striping for 3.6
15:55:27 hagarth: I'd favor bringing it forward if you think it's feasible
15:55:54 jdarcy: thanks, let us try evolving more details on the ML thread
15:55:55 Prob better to discuss on mailing list. :)
15:56:04 jclift_, +1
15:56:14 #topic open discussion
15:56:16 * jdarcy wasn't aware that the current implementation was mostly unusable. Can someone please send a short email explaining why?
15:56:40 Our good buddies in Ceph-land have erasure coding and tiering as of yesterday.
15:56:43 jdarcy: sure, will do or have a2 follow up on that.
15:56:45 I have a topic: https://bugzilla.redhat.com/show_bug.cgi?id=1067059 - Unit tests
15:56:46 * lalatenduM agrees with jdarcy
15:56:50 Bug 1067059: low, unspecified, ---, lpabon, ASSIGNED , Support for unit tests in GlusterFS
15:57:10 jdarcy: btw, thank you for your email. i haven't had time to go through it in detail yet. of course patches are always welcome too! but now i have some homework.
15:57:17 lpabon: shall we queue it for last?
15:57:21 sure
15:57:33 For discussion. Jenkins.
15:57:46 lpabon: Should the xlator-test framework be considered part of that, or separate?
15:57:54 jclift_: go ahead with your points for discussion on Jenkins
15:57:59 i think of it as "phase 2"
15:58:02 of that project
15:58:11 ^^ @jdarcy
15:58:21 purpleidea: I actually have a much more developed Python script (written on the plane) containing the recursive implementation.
15:58:39 jdarcy: ! totally awesome... can you send it to me?
15:58:43 * lalatenduM looking forward to unittests in GlusterFS
15:58:47 Re: Jenkins. In right-now terms, we seem to be hitting lots of regression failures in master, on rpm.t.
15:59:09 jclift_: right, have we debugged that failure?
15:59:22 Re: Jenkins. Someone that knows rpm.t should be able to figure it out pretty quickly. I don't think anyone's looked into it yet.
15:59:31 I'll take a look
15:59:35 Tx
15:59:42 #action kkeithley to look into rpm.t failure
15:59:43 Thank you, kkeithley.
15:59:56 been meaning to, just haven't gotten there yet
15:59:58 kkeithley: There are two example failing URLs in the Etherpad that might be helpful
16:00:13 we need to scale out jenkins - maybe I'll push this topic for the next meeting.
16:00:13 yup
16:00:29 hagarth: Yeah, it'd be useful to have JM around
16:00:34 topic 2. bug triage guidelines
16:00:38 He can likely do a better call to action, etc.
16:00:44 For scaling Jenkins (kinda related), I'm willing to pay for a couple of extra build servers in DigitalOcean or similar out of my own pocket as long as the setup's not too onerous.
16:00:54 lalatenduM: would you want to update on bug triaging?
16:00:59 hagarth, regarding bug triage, sas is interested in doing bug triage for GlusterFS, I think JM will be happy to hear that
16:01:01 jdarcy: It shouldn't come to that....
16:01:11 hagarth: i'm happy to help scaling jenkins on the sysadmin side if there is funding for it
16:01:17 sas, welcome to the qe team :)
16:01:20 fwiw, eng-ops is starting to rack 20+ servers from the old Sunnyvale lab (although nothing in racks yet.) Once they go in we can add them as jenkins slaves.
16:01:28 jdarcy, purpleidea: thanks, let us figure out more when JM is around.
16:01:29 lalatenduM, thanks :)
16:01:34 kkeithley: Ah, excellent idea.
16:01:37 jdarcy: I have access to Rackspace from johnmark, I can spin up N number of VM nodes
16:01:39 jdarcy: (we should be able to put out a "call to action" and get donated resources. lets discuss next meeting?)
16:01:42 sas: welcome aboard!
16:01:44 add some of them
16:01:51 hagarth, will come up with the doc for bug triage
16:01:51 lpabon's idea seems good ;)
16:01:53 hagarth, yes!!
16:01:54 Can we repurpose some of the heka machines as a temporary stopgap? Should we?
16:02:04 lpabon, cool
16:02:06 lalatenduM: adding an AI for you
16:02:12 hagarth, sure
16:02:17 That is why we use 4 for gluster-swift
16:02:23 #action lalatenduM to set up bug triage process page in wiki
16:02:30 We can also spin up CentOS and Fedora VMs
16:02:38 and other linuxes :-)
16:02:41 I think people are using the heka machines. At least they're reserved in beaker.
16:02:41 lpabon: certainly
16:03:04 I'll take an action item to talk to my friends @RAX about developer/open-source discounts.
16:03:22 jdarcy: we don't need to, afaik, we have an account
16:03:29 since we seem to be running out of time, shall we just have the unit test discussion and carry over the other "open topics" to next meeting?
16:03:35 Just a general note, the machines that get set up need to have a publicly accessible interface.
16:03:36 and we do have one jenkins slave available now, running NetBSD, but it's not in use yet.
16:03:43 lpabon: Cheaper is still cheaper. :)
16:03:51 So stuff behind a corp firewall with no way inside isn't workable for this
16:03:53 kkeithley: will sync up with you on NetBSD later this week
16:04:05 jdarcy: good point
16:04:06 ssh reverse tunnels ftw
16:04:13 I take that as a yes to my question ;)
16:04:22 hagarth: Can we extend the meeting by 20 mins?
16:04:24 kkeithley: I guarantee InfoSec would come down on us for that. Don't ask how I know.
16:04:29 and eng-ops is going to work with IT on an "official" solution to that
16:04:34 jclift_: I am fine
16:04:45 np here either.
16:04:51 Can most of us stay back for 20 more minutes?
16:04:58 eng-ops as much as told me to use reverse tunnels.
16:05:02 yes
16:05:10 kkeithley: That's eng-ops. Different group.
16:05:14 kkeithley: Let's discuss in -devel or somewhere else?
16:05:35 Next topic?
16:05:38 hagarth: i can continue on -devel if that is ok
16:05:46 lpabon: that works too, thanks.
16:05:57 #topic Mailing list vs IRC for "binding" stuff
16:06:01 jclift_: all yours
16:06:05 I don't mind extending a bit BTW, but still need to shower for 9am PST keynote.
16:06:20 Along with this: Can we get all the tests run on build.gluster.org checked into the tree?
16:06:27 jdarcy: I don't need to ask, I can imagine.
16:06:42 Yeah. I'm just a bit confused about whether stuff discussed on IRC but not on the mailing list is considered "good enough" for major changes
16:06:46 eg rpm packaging
16:06:53 By IRC, I mean -devel, not here.
16:07:02 jclift_: my take would be this
16:07:26 Recent example was seeing a patch to merge geo-rep into gluster-server. I'm not against it, but was surprised at it only having been discussed on -devel.
16:07:36 irrespective of where we have discussions
16:07:39 (the IRC channel)
16:07:51 it would be good to communicate the proposal and the rationale on gluster-devel
16:07:58 (mailing list?)
16:08:32 "on gluster-devel", the _mailing list_ yeah?
16:08:35 but the patch was reviewed, and much discussion ensued; though not much was captured anywhere
16:08:37 jclift_: yes
16:08:39 hagarth: Agreed. It's reasonable to assume that people can come to IRC for a discussion *if they're notified on the ML*, but not that they hang out there to see everything.
16:09:07 No worries. Am just clarifying, because I wasn't sure.
16:09:27 This is especially important because of time-zone issues.
16:09:39 should we just lay down this rule somewhere so that newcomers understand better?
16:09:49 We had significant problems with a previous project (Aeolus) having stuff discussed only on IRC, which excluded a lot of people that were affected/involved in things.
16:09:49 Topic in -devel?
16:10:16 jdarcy: right, who can set the topic in IRC -devel?
16:10:35 * sas leaves now, will sync up with recorded chats
16:10:51 sas: thanks
16:10:53 hagarth: yeah, we should probably write this rule/guideline down somewhere
16:11:01 jclift_: AI for you? ;)
16:11:06 Community Standards or Governance or something
16:11:08 * lpabon going to discuss unit tests on -devel
16:11:11 Bleargh
16:11:14 Yeah, I suppose
16:11:16 ;D
16:11:27 Actually, how about we JM it? :D
16:11:44 #action jclift_ and johnmark to update guidelines on community standards
16:11:47 We might need a bot to maintain that.
16:11:49 Heh
16:11:50 ;)
16:12:07 we probably could program glusterbot :)
16:12:11 #topic Policy on build breakers in git
16:12:19 Heh, me again.
16:12:21 jdarcy: someone likes my bot! https://github.com/purpleidea/jmwbot/ free for all to make your own
16:12:49 Again a clarification thing. There's a change in git master that's causing make glusterrpms to fail on F19/F20/EL7, etc.
16:12:59 * jdarcy LOLs @ JMWbot
16:13:03 jclift_: My take would be this -
16:13:03 It's been hanging around for days. A fix has been proposed.
16:13:27 1. Let the build breaker hang around if a patch is available.
16:13:35 2. If not, we revert it.
16:13:50 +1
16:13:51 k, that's good clarity
16:14:07 #topic Patch queue cleanup
16:14:10 hagarth: 3, ask how it got there, so we don't see it again? ;)
16:14:25 ira: +1 :)
16:14:33 ira: No automatic testing of things on stuff other than CentOS 6.x atm.
16:14:34 jclift_: your topic again :D
16:14:52 jclift_: right, scaling jenkins instances will definitely help here
16:14:58 Yeah. It's a concern that we have outstanding open changes for review in Gerrit dating back to May 2012.
16:14:58 jclift_: Doesn't even sound hard to countermeasure ;)
16:15:03 hagarth: I suggest that we send out a call to update or abandon patches over 1yo.
16:15:11 jclift_: +1
16:15:16 +1
16:15:18 jdarcy: right, will do that.
16:15:22 I know the May 2012 one (which is mine) is so stale that I'd have to redo it anyway.
16:15:33 I also have been thinking of auto-abandoning aged patches
16:15:39 Yeah, some of the stuff looks potentially useful, but no idea if it's relevant. etc. Such a call should help cull it.
16:15:42 via a gerrit script
16:15:49 It's not like we totally lose the history. Those patches are still there.
16:15:57 jdarcy: right
16:16:12 IMO we should auto-prune anything that's greater than 1yo *and* rfc.
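
[Context note: the "gerrit script" mentioned above could be driven by Gerrit's stock ssh commands rather than raw gsql. A minimal sketch, assuming ssh access to review.gluster.org on the standard Gerrit port 29418 — the change number 123,4 is purely illustrative:

    # list open changes that have seen no update for over a year
    ssh -p 29418 review.gluster.org gerrit query status:open age:1y
    # abandon a specific change by its change number and patchset
    ssh -p 29418 review.gluster.org gerrit review --abandon 123,4

A wrapper script would iterate over the query output and abandon each match, ideally after the call-to-action discussed here has gone out.]
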
16:16:27 #action hagarth to send out a note on abandoning patches over 1yo
16:16:32 jdarcy: +1
16:16:33 If it has a bug number, perhaps we should treat that differently.
16:16:37 I'm wondering what we want to do with all of Amar's outstanding patches?
16:16:54 * lalatenduM needs to leave now, will catch the log later
16:17:03 Amar is still active in the community - some of us can inherit them if he happens to be busy
16:17:23 Let's ask him if he can kill the ones he's not interested in finishing, etc (as per the note)
16:17:24 Who's the best person to solicit Amar's input?
16:17:37 I still have some thoughts on review discipline - will take that up in a subsequent meeting
16:17:44 jdarcy: I can reach out to Amar
16:18:17 ok, we seem to have come to the end of our listed topics
16:18:24 one more quick topic
16:18:38 I don't think I will be able to attend next week's meeting at this hour
16:18:47 I doubt I will.
16:18:59 (likely the same reason.)
16:19:05 can anybody run the meeting or should we cancel next week's instance?
16:19:07 hagarth: To amend the previous suggestion, let's queue up an IRC meeting to decide disposition of old patches where the authors ignored the call-to-action.
16:19:26 jdarcy: right, noted.
16:19:35 aren't about half the people here now going to be unavailable next week for the same reason?
16:19:41 hagarth: Let's re-schedule it?
16:19:46 That's my guess.
16:19:48 kkeithley: mostly yes
16:19:57 jclift_: most of us will be busy next week
16:20:00 wish I had that excuse
16:20:01 I'll be available, but it might not be worth having the meeting without quorum.
16:20:05 ahhh, k.
16:20:12 yeah
16:20:26 ok, we seem to be trending towards a cancellation. Let us do that and meet again in 2 weeks
16:20:30 Let's punt to the week after then
16:20:44 Last topic? "< lpabon> I have a topic: https://bugzilla.redhat.com/show_bug.cgi?id=1067059 - Unit tests" ?
16:20:47 Bug 1067059: low, unspecified, ---, lpabon, ASSIGNED , Support for unit tests in GlusterFS
16:21:04 I thought lpabon was about to send out a mail on gluster-devel
16:21:09 i can discuss now or in gluster-devel
16:21:11 Oops, missed that
16:21:14 -devel.
16:21:14 I'm ok either way
16:21:17 sure
16:21:20 np :)
16:21:21 -devel it is
16:21:21 lpabon: gluster-devel would be better
16:21:35 thanks everyone for staying back, talk to you all in 2 weeks.
16:21:41 #endmeeting