12:01:29 #startmeeting
12:01:29 Meeting started Tue Jan 13 12:01:29 2015 UTC. The chair is ndevos. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:01:29 Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:01:57 Hello, the agenda for today: https://public.pad.fsfe.org/p/gluster-bug-triage
12:02:13 #topic Roll Call
12:02:20 * lalatenduM is here :)
12:02:31 hey lalatenduM!
12:02:33 * partner plop
12:02:42 i have no idea what you do here but monitoring in the background
12:03:27 * kkeithley1 is here
12:03:28 (or rather what i do here but trying to learn how you work on the issues)
12:03:30 partner: we'll check on bugs that need some attention and things... you'll see
12:03:40 hchiramm_: ?
12:03:53 I think kkeithley1 is also around
12:03:56 * hchiramm_ is here
12:04:00 yeah :)
12:04:04 hi kkeithley1
12:04:07 hi
12:04:31 #topic Status of last week's action items
12:05:12 #topic hchiramm request NetBSD to be added to the hardware list in Bugzilla
12:05:26 ndevos, it's with the bugzilla team
12:05:30 yet to close.
12:05:36 okay, thanks
12:05:45 #topic ndevos will send an email to gluster-devel with some standard bugzilla queries/links to encourage developers to take NEW+Triaged bugs
12:05:58 uh, does anyone remember if I sent out that email?
12:06:19 ndevos, nope,
12:06:36 no, looks like I forgot :-/
12:06:38 * kkeithley_ doesn't remember it
12:07:12 the other action items look untouched, so I'll skip them :)
12:07:22 #topic Group Triage
12:07:43 no needinfo on bugs@gluster.org
12:08:04 and 9 new bugs: http://goo.gl/0IqF2q
12:08:25 we should be able to triage those pretty quickly now
12:08:38 please take an IRC lock when you open a bug
12:08:51 * lalatenduM locks 1180060
12:08:59 partner: we try to triage bugs according to http://www.gluster.org/community/documentation/index.php/Bug_triage
12:09:32 1180231
12:10:11 ndevos: ah that one, thanks
12:11:52 partner: and, if you want to triage a bug, note the BZ number in this chat, so it doesn't collide with others
12:12:49 this meeting should be mainly about bugs that got proposed by triagers, or need some other kind of discussion
12:13:04 but, in the end we tend to triage the new bugs in this meeting too
12:13:43 * ndevos locks 1181500
12:14:26 1180394
12:16:05 1180411
12:17:33 1181500
12:17:48 kkeithley_: do you know about an issue with a 3.4 client, mounting from a 3.6 server?
12:18:01 * ndevos thinks he heard about it before...
12:18:07 not sure
12:18:14 ndevos, kkeithley_ yes
12:18:19 we have a known issue
12:18:23 1181048
12:19:01 okay
12:19:45 Volumes which existed on the Gluster cluster before upgrading to 3.6 would be accessible to older clients, as "readdir-ahead" would not be on for the volume
12:19:46 Also, manually disabling readdir-ahead should allow the older clients to mount the volume.
12:19:56 kkeithley_, ndevos ^^
12:20:49 lalatenduM: oh, interesting
12:20:59 lalatenduM: can you find a bug for that?
12:21:34 Sounds like a feature, not a bug. :-)
12:21:38 ndevos, that's my knowledge from downstream, bugs are here [1] https://bugzilla.redhat.com/show_bug.cgi?id=1112527
12:21:38 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1136838
12:21:45 IOW fix it in the docs?
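[Editor's note: the workaround described above, disabling readdir-ahead so that pre-3.6 clients can mount a 3.6 volume, would look roughly like this on a server node. The volume name `myvol` is a placeholder; verify the option name and behaviour against your gluster version before relying on it.]

```shell
# Turn readdir-ahead off for the volume so older (3.4) clients can mount it
gluster volume set myvol readdir-ahead off

# Confirm the change under "Options Reconfigured" in the volume info output
gluster volume info myvol
```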
12:23:20 https://bugzilla.redhat.com/show_bug.cgi?id=1181500 is filed for RHS builds ??? what should we do
12:23:32 it is not community bits
12:24:23 yeah, that's the bug I was looking at
12:24:40 partner, how are your Coverity fixes coming along? I think you were interested in Coverity fixes
12:24:54 clone the BZ please, if only to document that we know about it.
12:25:47 kkeithley_, did not get it? you meant to clone https://bugzilla.redhat.com/show_bug.cgi?id=1181500 ?
12:26:25 oh, sorry, I meant for the 3.4 client mount from 3.6 server
12:26:34 Or we don't need to clone that
12:26:35 https://bugzilla.redhat.com/show_bug.cgi?id=1136201 seems to address an issue with rebalance when < 3.6 clients are connected
12:26:46 I guess we should include that in 3.6 too
12:26:56 * ndevos clones that one to 3.6
12:27:09 lalatenduM: i think you are confusing me with somebody else, i don't code nor read it too fluently either :o
12:27:39 partner, yeah :) sorry about the confusion
12:28:11 no worries, i wish i was capable of helping there or elsewhere
12:29:58 lalatenduM: disregard my comment about cloning the BZ. I misunderstood.
12:30:04 kkeithley_, np
12:30:23 I haven't had my coffee yet
12:30:35 partner, what's your interest? I am sure we can find something matching your interest
12:30:48 kkeithley_, yeah it is early for you
12:33:12 brb
12:35:28 okay, so all bugs have been triaged now
12:35:56 #topic Open Floor
12:36:17 anyone have some interesting things to discuss?
12:36:41 ah, I have one!
12:36:45 yes, ndevos?
12:36:56 Did anyone use http://bugs.cloud.gluster.org/ already?
12:37:59 okay, I guess *not*
12:38:10 well, lalatenduM wanted to send patches
12:38:23 ndevos, yeah I am :)
12:38:34 partner: do you know Python?
12:39:20 * ndevos -> https://github.com/gluster/gluster-bugs-webui
12:39:45 lalatenduM: any other topics for today?
12:39:54 ndevos, nope
12:40:13 ndevos, wait
12:41:00 ndevos, how do I try out gluster-bugs-webui?
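[Editor's note: a minimal sketch of trying out gluster-bugs-webui, based on the workflow ndevos describes in this meeting (a .sh script fetches bug data into a .json file, which a static .html/.js page renders). The script and file names below are hypothetical; check the repository for the actual ones.]

```shell
# Fetch the web UI sources
git clone https://github.com/gluster/gluster-bugs-webui
cd gluster-bugs-webui

# Run the included shell script; it queries Bugzilla and writes the .json
# data file. Fetching all bugs can take 5-10 minutes depending on the network.
./fetch-bugs.sh

# Open the .html page in a browser; its .js parses the generated .json.
# Changes to only the .js or .css do not require re-running the script.
xdg-open index.html
```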
12:42:36 lalatenduM: run the included .sh file and open the .html in a browser :)
12:43:11 ndevos, ah, the README does not tell it :)
12:43:26 lalatenduM: if you only change the .js or .css file, you would not need to re-run the .sh
12:43:45 the .sh generates the .json file, and the .html/.js parses that
12:44:08 so, if you change the contents of the .json, you would need to run the script again
12:44:18 ndevos, ok
12:44:20 makes sense
12:44:31 * lalatenduM nods
12:44:39 and fetching all bugs might take a little time, 5-10 minutes on my side, depends on the network
12:44:59 ndevos, ok
12:46:05 nothing more?
12:46:08 ndevos, nope
12:46:18 I am running the script
12:46:23 ndevos: some, we mostly do everything with python so i at least have loads of knowledge around
12:46:53 partner: how about packaging things?
12:47:17 lalatenduM: sorry, was away for a moment, still figuring out where i can put my limited time, mostly trying to help on the #gluster channel for now
12:47:35 we have a couple of interesting python projects, like gstatus
12:47:51 ndevos: i know something of that area and possibly there might be some interest amongst my colleagues too
12:48:30 partner: or something in the area of sysadmin? also automatic installation of machines for the regression testing (rackspace hosted)
12:48:31 i've pretty much built our building infra and mirroring
12:49:42 i'll think a bit, superbusy right now as we are moving loads of servers to a new datacenter, just now my dear gluster storage (part of it) is on its way
12:50:15 anything else in python than the gstatus?
12:50:39 geo-replication is mainly written in python
12:50:44 i'll see if we could release our python monitoring plugin also into the wild, IMO it's the best around and the community can always make it better
12:50:46 I think we have a couple of them, but not able to remember now
12:51:13 found gstatus on the forge
12:51:23 partner: plugins for nagios or something else?
12:51:27 partner, what about config tools, i.e.
puppet or Ansible?
12:51:49 ndevos: nagios, sounds like gstatus is almost the same as our plugin :o
12:52:05 :)
12:52:28 partner: oh, but gstatus isn't nagios, so your plugin would be nice to see
12:52:40 lalatenduM: we're using cfengine mostly, and puppet on another project i'm involved in
12:53:11 ndevos: no it's not, but the output it gives sounds pretty much like what our plugin gives out, i.e. various health checks
12:53:16 i'll have a look at it
12:53:17 partner, you use cfengine for gluster?
12:53:44 lalatenduM: do you accept the answer: we use puppet for ceph? :-D
12:54:07 partner: nice! there is a gluster-nagios project somewhere, but I don't know what details they check/display
12:54:09 lalatenduM: we use it partially for gluster yes, but there's more work to be done
12:55:05 partner, np :) Ceph is a brother in the war against proprietary storage :)
12:55:29 partner: but yeah, if you find something where you could assist our Gluster Community, let us know :D
12:55:36 partner, do you know about https://github.com/purpleidea/puppet-gluster
12:56:07 lalatenduM: yeah i've seen that but haven't used it, as i don't want to put competing conf management onto the same boxes
12:56:43 partner, ok
12:57:18 ok, this has been a useful open floor for once :)
12:57:22 #endmeeting