12:02:14 <hagarth> #startmeeting
12:02:14 <zodbot> Meeting started Wed Jan  7 12:02:14 2015 UTC.  The chair is hagarth. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:14 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:02:29 <hagarth> Welcome everybody to the 1st community meeting of 2015!
12:02:46 <hagarth> meeting agenda is here - https://public.pad.fsfe.org/p/gluster-community-meetings
12:03:00 <hagarth> #topic Roll Call
12:03:07 <hagarth> who do we have here today?
12:03:13 * krishnan_p is present
12:03:31 * spot is here
12:03:40 * ndevos _o/
12:04:09 <hagarth> cool, I think a few others are dormant atm
12:04:17 * lalatenduM is here
12:04:48 <hagarth> #topic Introduction
12:05:03 <hagarth> we have a new arrival today - spot.
12:05:12 <hagarth> spot: would you want to introduce yourself to us?
12:05:40 <spot> My name is Tom Callaway, I'm on the OSAS team at Red Hat
12:05:46 <ndevos> welcome spot!
12:06:17 <spot> I'm working with Jim Jagielski to take over the community management responsibilities for Gluster
12:06:19 <lalatenduM> spot, welcome to the party :)
12:06:26 <partner> hep, i was off by one channel
12:06:37 <krishnan_p> Welcome spot!
12:06:38 <spot> Jim is going to be focused on the board level and I'll be working with, well, everything else. :)
12:06:54 <hagarth> spot: welcome again, great to have you with us!
12:07:24 <ndevos> thats really great to hear :)
12:08:04 <hagarth> ok, let us move on to action items from last meeting
12:08:21 <hagarth> #topic AI review
12:08:32 <hagarth> AI - hagarth needs to send a cleanup patch for the MAINTAINERS file to remove inactive maintainers (add an alumni section?)
12:08:52 <hagarth> this needs to be done, any thoughts on how to mark inactive maintainers as alumni?
12:09:12 <hagarth> maybe use A - <inactive-maintainer> ? "A" being for alumni?
12:09:21 <lalatenduM> +1 for alumni, but how long inactive counts as alumni?
12:09:38 <spot> hagarth: perhaps just, at the bottom say, "Gluster would not be possible without the contributions of: ", then list the inactive folks
12:09:50 <lalatenduM> 3 months, 6months?
12:09:56 <hagarth> spot: sounds like a good idea
12:09:58 <spot> that way, there is no need for "am i inactive or not"
12:10:10 <spot> if they come back, hey welcome back. :)
12:10:35 <hagarth> lalatenduM: we have folks who have been inactive for more than 6 months now
12:10:46 <hagarth> maybe we could use 6 months as the tipping point
12:10:50 <lalatenduM> hagarth, yeah agree
12:10:58 <hagarth> spot: sounds fantastic! :)
12:11:08 <hagarth> ok, consider this done by next meeting
12:11:18 <hagarth> #action hagarth needs to send a cleanup patch for the MAINTAINERS file to remove inactive maintainers (add an alumni section?)
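For context, the approach spot describes above might look something like this at the bottom of the MAINTAINERS file (a hypothetical sketch; the names are placeholders, not actual inactive maintainers):

```
Gluster would not be possible without the contributions of:

Jane Doe <jane@example.org>
John Smith <john@example.org>
```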
12:11:30 <hagarth> AI - hchiramm will try to fix the duplicate syndication of posts from blog.nixpanic.net
12:11:43 <hagarth> hchiramm: any update on this one?
12:12:10 <hchiramm_> hagarth, not yet
12:12:37 <hagarth> ndevos: we cleaned up a few duplicates, but were not able to figure out how some of your posts were being syndicated twice
12:12:46 <hchiramm_> need to check the changes made and have to confirm
12:12:58 <hagarth> hchiramm_: let us carry forward this AI
12:13:05 <hagarth> #action hchiramm will try to fix the duplicate syndication of posts from blog.nixpanic.net
12:13:17 <ndevos> hagarth: sorry, but I really have no idea how the syndication works... I only post on my blog and the magic happens
12:13:27 <hchiramm_> ndevos: do you have any other mirrors?
12:13:34 <hchiramm_> we were not able to see it in syndication though
12:13:47 <ndevos> hchiramm_: hmm, maybe Fedora Planet?
12:13:52 <hagarth> ndevos: yes, needs some more investigation.
12:14:13 <hagarth> AI - Debloper to write  a blog post about complete website revamp
12:14:26 <hchiramm_> he should be doing it soon
12:14:36 <hchiramm_> hagarth , ^^ as we discussed ..
12:14:41 <hagarth> hchiramm_: right, can we target it for next week?
12:14:48 <hchiramm_> sure.. I will update him
12:14:52 <hagarth> #action Debloper to write  a blog post about complete website revamp
12:15:01 <hagarth> AI - atinmu will look for someone that can fix the spurious fop-sanity test
12:15:23 <hagarth> atinmu: any update on this one? I have not observed more failures with this test of late
12:15:50 <atinmu> hagarth, I have not seen this spurious failure in recent regression runs
12:16:09 <hagarth> atinmu: thought as much, shall we drop this AI for now?
12:16:10 <atinmu> hagarth, I  haven't got any volunteer as well
12:16:46 <hagarth> or is it worth an investigation? maybe run the test for 100 times and see what happens?
12:16:50 <atinmu> hagarth, I will keep an eye on the regression report, if this spurious failure resurfaces I will look into it
12:17:13 <atinmu> hagarth, or probably find someone who can help debugging it
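A simple way to investigate a spurious test, as hagarth suggests above, is to run it in a loop and count failures. A sketch, assuming the usual `prove` harness for `.t` regression tests (the test path is a placeholder, not the actual fop-sanity test file):

```shell
# Run one regression test 100 times and count how often it fails.
# The path below is a placeholder for the actual fop-sanity test.
fails=0
for i in $(seq 1 100); do
    prove -q ./tests/basic/fop-sanity.t || fails=$((fails + 1))
done
echo "failures: $fails/100"
```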
12:17:14 <jdarcy> Speaking of spurious regressions...
12:17:18 <hagarth> atinmu: ok thanks, I will drop this AI from regular follow ups.
12:17:47 <jdarcy> We really need to get one of the fixes for the quota-anon-fd-nfs.t failures in.
12:18:02 <ndevos> oh, yes, definitely!
12:18:14 <atinmu> jdarcy, this one is now omitted from the codebase temporarily
12:18:22 <jdarcy> atinmu: Yay!
12:18:33 <ndevos> atinmu: ah, workaround :D
12:18:55 <hagarth> the quota folks are working with a vengeance to have it back :D
12:19:09 * jdarcy checks, doesn't see either of the relevant patches went in since last night.
12:19:10 <ndevos> atinmu: add a conv=fsync to the dd command? thats what I was going to try...
12:19:43 <hagarth> jdarcy: http://review.gluster.org/9390
12:19:45 <atinmu> ndevos, Sachin Pandit is working on it...
12:19:52 <ndevos> atinmu: okay, cool
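For context on ndevos's suggestion: `conv=fsync` makes dd call fsync() on the output file before exiting, so the data is on stable storage rather than sitting in the page cache when the next test step runs. A minimal illustration (the file path is a placeholder, not taken from quota-anon-fd-nfs.t):

```shell
# Write 1 MiB and fsync the output file before dd exits;
# without conv=fsync the data may still be in the dirty page cache.
dd if=/dev/zero of=/tmp/ddtest.out bs=1M count=1 conv=fsync
```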
12:20:05 <hagarth> AI - JustinClift and misc should post an update or plan about upgrading Gerrit
12:20:16 <hagarth> I don't think we have either of them here today
12:20:31 <hagarth> hchiramm_: would it be possible to send a reminder on gluster-infra ML about this?
12:20:43 <hchiramm_> hagarth, sure
12:21:01 <hagarth> #action hchiramm_ to send an email on gluster-infra about gerrit upgrade
12:21:08 * jdarcy looks at the right list this time, and sees it.  ;)
12:21:25 <jdarcy> All those hard drugs must be affecting my brain.
12:21:26 <hagarth> AI - pointer to wiki from site needs to be added
12:21:52 <hagarth> does anybody have more context about this? Justin mentioned that tigert was working on this one.
12:22:05 <hagarth> I thought we have a pointer to the wiki from gluster.org
12:22:12 <ndevos> thats about the links on the top of gluster.org
12:22:33 <hagarth> we do have a wiki link there, right?
12:22:38 <ndevos> well, the link to the wiki was there, but it pointed to the blog (or the other way around?)
12:22:59 <hagarth> ndevos: right, it lands us here - http://www.gluster.org/community/documentation/index.php/Main_Page
12:23:16 <spot> isn't that the wiki?
12:23:45 <hagarth> spot: looks like that.
12:23:55 <ndevos> yes, looks as its been corrected
12:24:08 <spot> CLOSED:WORKSFORME
12:24:09 <hagarth> ok, so we're good with this AI.
12:24:31 <hagarth> AI - JustinClift to update the regression.sh script and capture /var/log/messages too
12:24:48 <hagarth> atinmu: do you happen to know if JustinClift was able to get this done?
12:25:17 <hagarth> if not, we can carry forward this AI
12:25:18 <atinmu> hagarth, yes, he has done it, now it captures /var/log/messages too
12:25:26 <hagarth> atinmu: cool, thanks!
12:25:40 <hagarth> AI - tigert to respond to Soumya Deb on gluster-infra about website.
12:25:46 <hagarth> this hasn't happened
12:25:51 <hagarth> #action tigert to respond to Soumya Deb on gluster-infra about website.
12:26:03 <hagarth> AI - hagarth to contact Rikki Endsley about fixing the twitter glusterfs domain squatter problem
12:26:14 <hagarth> I haven't completed this, will do this week
12:26:19 <hagarth> #action hagarth to contact Rikki Endsley about fixing the twitter glusterfs domain squatter problem
12:26:23 <spot> hagarth: if i can help there, let me know
12:26:40 <hagarth> spot: thanks, will loop you in on that thread.
12:27:00 <hchiramm_> atinmu, I think I got a confirmation from Justin where you are CCed?
12:27:12 <hagarth> that finishes our short list of AIs :)
12:27:18 <hagarth> #topic Gluster 3.6
12:27:18 * hchiramm_ cross-checking
12:27:30 <hagarth> raghu: can you update us about 3.6.2?
12:27:50 <raghu> yeah. I was waiting till now for RDMA and USS patches
12:28:09 <atinmu> hchiramm, I am sure he has done it, because now I can see /var/log/messages in the regression logs tar
12:28:12 <raghu> I have pushed all the rdma related patches
12:28:18 <hchiramm_> atinmu, awesome :)
12:28:27 <hagarth> raghu: fantastic, are we now all set to remove tech preview for rdma in 3.6.2?
12:28:31 <hchiramm_> JustinClift++
12:29:24 <raghu> but some USS related patches are still failing the regression tests. Yeah, I think we can remove the tech preview for rdma in 3.6.2. In fact, the code change to remove the warning when a user creates an rdma volume has also gone in
12:29:53 <ndevos> raghu: you plan to do a 2nd beta later this week?
12:30:17 <hagarth> raghu: cool! any help needed for failing regression tests?
12:30:20 <hagarth> #info rdma to be fully supported from 3.6.2
12:30:25 <raghu> I am thinking of making beta2 later this week. Run some tests and make 3.6.2 next week
12:30:59 <hagarth> raghu: ok, do we plan to merge anymore patches after beta2?
12:31:22 <rjoseph> raghu: which uss test cases are failing?
12:31:59 <raghu> I will stick to this plan of making beta2 later this week and making 3.6.2 next week. I will trigger one more regression run. If they pass the tests, then I will accept them for 3.6.2. Otherwise I will consider them for 3.6.3
12:32:11 <hagarth> raghu: ok
12:32:40 <raghu> its not USS testcases. Some other tests in quota and erasure coding are failing when USS patches are applied and regression is triggered
12:33:25 <hagarth> raghu: ok
12:33:27 <raghu> so if uss patches fail regression again, then I will make 3.6.2 with whatever is there till today.
12:33:36 <rjoseph> raghu: ok
12:33:41 * ndevos likes that plan
12:33:50 <hagarth> raghu: +1, lot of good fixes are awaiting to be pushed out in 3.6.2.
12:33:59 <lalatenduM> +1
12:34:15 <hagarth> anything more on 3.6.x?
12:34:20 <raghu> I can make a quick 3.6.3beta1 after 3.6.2 is released if USS cases happen to pass regression after I make beta2
12:34:38 <hagarth> raghu: sounds good
12:35:02 <hagarth> let us move on to 3.5 now
12:35:09 <hagarth> #topic GlusterFS 3.5
12:35:11 <hagarth> ndevos: all yours
12:35:43 <ndevos> there are still only a few bugs that are important to get fixed: https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=1&id=1141158&hide_resolved=1
12:36:03 <ndevos> 9 in total, not all have patches yet
12:36:22 <ndevos> rdma fixes have been posted for 3.5 as well, but are lacking reviews
12:36:44 <ndevos> patches that could use some reviews are here: http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.5,n,z
12:36:54 <hagarth> ndevos: should we send a reminder on gluster-devel about bugs needing patches?
12:37:08 <ndevos> I could do that, sure
12:37:19 <hagarth> ndevos: thanks!
12:37:30 <ndevos> #action ndevos will send an email about bugs that need patches (or reviews) for 3.5.x
12:37:59 <ndevos> as long as nobody complains about urgent issues, I am not planning to do a beta release
12:38:08 <hagarth> ndevos: ok
12:38:10 <hchiramm_> +1
12:38:35 <hagarth> any questions, comments on 3.5.x?
12:38:36 <lalatenduM> hchiramm, :)
12:38:39 <ndevos> well, I'll do one when all proposed bugs have patches, just so that our users are a little more happy :)
12:38:39 <hchiramm_> :)
12:39:03 <ndevos> move to 3.4?
12:39:04 <hagarth> ndevos: right :)
12:39:06 <hagarth> sure
12:39:11 <hagarth> #topic GlusterFS 3.4
12:39:17 <hagarth> kkeithley_: any updates here?
12:39:26 <kkeithley_> memory leak is still the gating factor
12:39:37 <kkeithley_> I've been working on narrowing down the leak
12:39:44 <kkeithley_> using the longevity cluster
12:39:57 <hagarth> kkeithley_: darn, let me add this to my todo too.
12:40:12 <kkeithley_> I turned off all the performance xlators and memory usage is stable
12:40:44 <kkeithley_> Summary of where I'm at is at http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/README
12:41:26 <hagarth> kkeithley_: what workload is being run on the client?
12:41:59 <kkeithley_> Ben England's fsstress, doing a mix of things
12:42:10 <kkeithley_> create, read, write, delete, rename, etc.
12:42:14 <hagarth> kkeithley_: ok
12:42:50 <kkeithley_> with all the default performance  options enabled, it leaks immediately
12:42:58 <kkeithley_> with everything disabled, no leak
12:43:09 * hagarth suspects stat-prefetch/md-cache
12:43:37 <hagarth> kkeithley_: thanks for the update. Once the leak is fixed, we are good to go with 3.4.7 right?
12:43:42 <kkeithley_> naturally the order I'm trying has stat-prefetch last. ;-)
12:43:54 <kkeithley_> yes, I expect so.
12:44:00 <hagarth> kkeithley_: ok, cool.
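For reference, the bisection kkeithley_ describes is done by toggling the client-side performance translators as volume options, e.g. (an illustrative CLI fragment; it assumes an existing volume named `testvol` and a running cluster, so it is not runnable as-is):

```
# Disable client-side performance translators one at a time to
# isolate which one leaks; re-enable with "on" to confirm.
gluster volume set testvol performance.write-behind off
gluster volume set testvol performance.read-ahead off
gluster volume set testvol performance.io-cache off
gluster volume set testvol performance.quick-read off
gluster volume set testvol performance.open-behind off
gluster volume set testvol performance.stat-prefetch off
```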
12:44:07 <hagarth> anything more on 3.4.x?
12:44:09 <kkeithley_> There are a couple other BZs, but they're minor IMO
12:44:14 <hagarth> kkeithley_: ok
12:44:36 <hagarth> moving on to Gluster next
12:44:42 <hagarth> #topic Gluster next - 3.7
12:44:58 <hagarth> Planning page for 3.7 is here - http://www.gluster.org/community/documentation/index.php/Planning37
12:45:38 <hagarth> please link your feature page from the planning page if you are working on anything significant for 3.7
12:45:51 <hagarth> we are about 2 months away from the feature freeze
12:46:15 <hagarth> so, let us try to get our features in soon(er) :)
12:46:30 <ndevos> does anyone know if there was an announcement for NFS exports+netgroups support to the list?
12:46:44 <hagarth> ndevos: don't recollect that
12:46:56 <ndevos> hagarth: okay, I guess I can send one?
12:47:01 <tigert> hey
12:47:02 <hagarth> ndevos: please do
12:47:13 <ndevos> okay, will do
12:47:14 * tigert got pinged
12:47:26 <hagarth> tigert: hello, we had a couple of action items for you earlier. will ping you about them after the meeting.
12:47:36 <hagarth> any queries on 3.7?
12:47:47 <jdarcy> Should "hyperconvergence" have a feature page?  There seem to be bugs/work items filed against it.
12:47:51 <tigert> hagarth: ok
12:48:02 <hagarth> jdarcy: the ovirt folks have been active on that
12:48:06 <ndevos> #action ndevos will send out an announcement about the NFS with exports+netgroups feature (and lobby for reviews)
12:48:29 <hagarth> good idea, I will open a feature page for that
12:48:55 <hagarth> #action hagarth to open a feature page for (k)vm hyperconvergence
12:49:15 <hagarth> moving on to 4.0
12:49:23 <hagarth> #topic GlusterFS 4.0
12:49:30 <hagarth> jdarcy: any updates to share here?
12:50:09 <jdarcy> Not a lot.  There's a blog post (+ hangout) about to be published.
12:50:20 <jdarcy> I've been working on some random bits of infrastructure.
12:50:22 <hagarth> jdarcy: cool
12:50:51 <jdarcy> Probably next week we'll have as much of an online "summit" as we can to kick off more serious work.
12:51:08 <hagarth> jdarcy: neat
12:51:22 <krishnan_p> jdarcy, look forward to it.
12:51:32 <hagarth> I think it would be good if all of us can propose (BHA) features in the 4.0 planning page
12:51:37 * jdarcy has auto-generation of defaults.[ch] working, with an infrastructure that can also be used to generate NSR code.  Stubs and maybe even syncops could be done this way too.
12:51:48 <ndevos> jdarcy: where is the online summit announced?
12:51:56 <jdarcy> ndevos: It's not, yet.
12:52:02 * ndevos probably missed it... not
12:52:54 <ndevos> jdarcy: if there is some schedule, I (and others) would probably like to join some sessions
12:53:01 <hagarth> on the topic of 4.0, I plan to share results of community survey soon and solicit more features from the community.
12:53:12 <jdarcy> That would be cool.
12:53:41 <hagarth> #action hagarth to share results of community survey and solicit feature proposals from the community
12:54:01 <hagarth> anything more on 4.0?
12:54:05 <jdarcy> Nope.
12:54:24 <hagarth> we had a topic on Small file performance but will skip that since we had a dedicated meeting for that a few days back
12:54:36 <hagarth> #topic Other Agenda items
12:54:44 <hagarth> #topic Patches to release branches
12:54:57 <hagarth> This is about backports to release branches
12:55:25 <hagarth> how can we be more systematic in sending backports to release branches?
12:55:33 <ndevos> like http://www.gluster.org/community/documentation/index.php/Backport_Guidelines ?
12:55:45 <hagarth> I have felt the need for it and raghu also mentioned this to me recently while managing patches for 3.6.x
12:56:08 <hagarth> ndevos: I think the process of sending a patch is fine but how do we identify candidates for backports?
12:56:22 <hagarth> or rather who owns that responsibility?
12:56:56 <lalatenduM> I think it should be bug fixes only and mostly blockers
12:57:06 <hagarth> one thought I have is to let maintainers notify release-maintainers & patch submitters about the need to backport during a patch merge
12:57:32 <raghu> +1
12:57:35 <ndevos> the bug reporter gives a version where the bug is present - a developer should fix it in mainline and do the backport for all versions upto+including the version the reporter uses
12:57:46 <hagarth> lalatenduM: do we identify blockers during the bug triage meetings?
12:57:55 <lalatenduM> ahh, nope
12:58:03 <lalatenduM> thats what I was thinking
12:58:03 <hagarth> ndevos: most developers seldom do it that way
12:58:18 <ndevos> hagarth: indeed, they need some poking :)
12:58:22 <lalatenduM> we should mark blockers for release branches and then track it
12:58:35 <hagarth> lalatenduM: should we start doing that in bug triage meetings?
12:59:00 <lalatenduM> yes, I think if the bugs are linked with the tracker bug, it helps
12:59:00 <hagarth> ndevos: scale-out poking by maintainers :)
12:59:14 <ndevos> I think the release maintainers should check for important bugs for the version they maintain, and add those bugs to the blocker
12:59:45 <lalatenduM> Let's say we have 10 bugs for backporting, the release maintainer can send a mail to -devel saying he needs the backports
13:00:22 <hagarth> lalatenduM: ok, as ndevos plans to do it for 3.5.x later today
13:00:31 <ndevos> if a backport is trivial, I normally backport myself and have the original author review it
13:00:37 <hagarth> should we discuss this further on -devel?
13:00:50 <hagarth> I think this can benefit by more participation
13:00:56 <lalatenduM> hagarth, yes
13:01:01 <kkeithley_> agreed
13:01:22 <ndevos> hagarth: yes, others need to become more aware of the need for backports
13:01:31 <hagarth> #action hagarth to carry forward discussion around backporting patches on gluster-devel
13:01:49 <hagarth> since we are running out of time, moving on to the next item
13:02:00 <hagarth> #topic Duplication of posts on Facebook, twitter
13:02:24 <hagarth> this is a legit problem, we see blog posts appearing multiple times on social networking media
13:02:41 <hagarth> spot: if you have some ninja skills on these, can you please help here?
13:02:52 <ndevos> one source to rule them all?
13:03:02 <spot> as in "twitter shows the same thing three times" or "twitter and facebook have the same content" ?
13:03:14 <ndevos> the 1st :)
13:03:17 <kkeithley_> besides TechWars "we compared gluster to ceph" spam?
13:03:20 <hagarth> spot: usually both show the same content multiple times
13:03:24 <spot> hmm, okay. I'll look.
13:03:26 <jdarcy> spot: Twitter shows the same thing several times.  Sometimes *really old* things reappear.
13:03:38 <hagarth> kkeithley_: thankfully we are not tweeting TechWars stuff :)
13:03:40 <spot> i think we're using hootsuite for that, but I'll find out.
13:03:52 <hagarth> spot: thanks!
13:04:08 <msvbhat> hagarth: Other thing I noticed some time back was non-gluster content appearing in the gluster twitter feed.
13:04:15 <hagarth> #action spot to investigate repetitive posts on social networking sites
13:04:23 <hagarth> msvbhat: like the trip to bandipur? :)
13:04:30 <msvbhat> hagarth: Yes :)
13:04:34 <lalatenduM> msvbhat, yeah , thats the older post in gluster blogs :)
13:04:38 <hagarth> that was due to a bad syndication, it is fixed now
13:04:45 <hagarth> s/bandipur/bannerghatta/
13:04:48 <msvbhat> hagarth: AH, Okay
13:04:55 <hagarth> #topic Announcing events: FOSDEM, devconf.cz, ...
13:05:08 <hagarth> ndevos: is this topic yours?
13:05:14 <ndevos> yes, I just added that
13:05:24 <hagarth> ndevos: go ahead please
13:05:28 <ndevos> we should announce our presence at events more clearly
13:05:33 <partner> +1
13:05:33 <hagarth> ndevos: +1
13:05:39 * spot agrees
13:05:47 <lalatenduM> we should have applied for a table at FOSDEM :(
13:05:55 <ndevos> and, maybe get some users/devs together for some drink or such
13:05:58 <hagarth> should we publish a gluster events calendar on the website?
13:06:09 <kkeithley_> yes
13:06:10 <lalatenduM> yes'
13:06:13 <spot> i heard this morning that there are some gluster talks on the FOSDEM schedule this year
13:06:14 <partner> just by pure luck i happened to notice a local ceph+glusterfs meetup organized by RH..
13:06:15 <ndevos> yes, a calendar, and announce emails in advance
13:06:17 <spot> i'm trying to get details there
13:06:39 <hagarth> spot: yes, lalatenduM, kkeithley_ & ndevos are going to be there I think
13:06:47 <spot> Also, the CentOS booth has offered 'office hours' time to Gluster if we want it
13:06:56 <spot> basically, time to run demos and be present to talk about Gluster
13:06:58 <hagarth> spot: that would be fantastic.
13:07:12 <ndevos> yes, lalatenduM proposed that on the CentOS list
13:07:18 <hagarth> we have a few talks in devconf.cz too
13:07:20 <spot> ndevos: awesome.
13:07:35 <lalatenduM> spot, yeah that is part of Storage SIG I think
13:07:58 <lalatenduM> I mean RE: CentOS booth has offered 'office hours' time to Gluster
13:08:22 <ndevos> so, who wants to figure out a tentative schedule for gluster talks/events in the next conferences and send an email about that?
13:08:32 <spot> ndevos: i'll do that
13:08:41 <ndevos> spot: cool, thanks!
13:08:47 * spot is already working on a "places we should be in 2015" list. :)
13:08:57 <kkeithley_> Do we need to decide time(s) for our Office Hours in the CentOS booth and publicize them?
13:09:06 <hagarth> anything else for today?
13:09:13 <spot> kkeithley_: i think the CentOS people will do that if lalatenduM is talking to them
13:09:14 <ndevos> #action spot will send out an email with upcoming Gluster events and meetups
13:09:20 <kkeithley_> okay
13:09:26 <spot> but we should make noise once the times are locked in
13:09:35 <hagarth> spot: BIG +1 :)
13:09:41 <lalatenduM> ndevos, kkeithley_ I will make sure of that
13:09:48 <kkeithley_> quick question: anyone know how to fix the git sub-repo problem that's plaguing Launchpad builds?
13:09:49 <partner> this in couple of weeks: http://www.meetup.com/RedHatFinland/events/218774694/
13:10:04 <ndevos> arggh, yes that one too!
13:10:05 <hagarth> kkeithley_: maybe lpabon does
13:10:24 <lalatenduM> spot, +1
13:10:56 <hagarth> ok folks, I think that's all we have for today. Thanks for being part of the 1st community meeting in 2015 :).
13:10:59 <kkeithley_> Finland is one week before FOSDEM. I thought about going, but that's too long to be away
13:11:09 <hagarth> #endmeeting