15:00:05 #startmeeting
15:00:05 Meeting started Wed Jan 15 15:00:05 2014 UTC. The chair is hagarth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:05 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:05 hello all!
15:00:12 * kkeithley is here
15:00:24 #topic AI follow up
15:00:45 I had an AI on geo-rep for 3.5.0
15:00:55 we seem to be making progress there
15:01:27 3.5beta1 doesn't contain a functional geo-replication, but my expectation is that we will have a working geo-rep in beta2 or beta3
15:01:45 hopefully we don't have to revert to the 3.4 behavior
15:02:01 hrm
15:02:14 cool.
15:02:27 sorry, flaky wifi. may not be able to stay connected :(
15:02:41 we can look forward to more beta releases in the coming days
15:02:50 johnmark: ouch, hope you stay on!
15:03:23 nightly builds - ndevos has done some excellent groundwork. thanks to him!
15:03:30 hagarth: thanks for the beta
15:03:30 on that... can we please use 3.5.0 (not 3.5) in the tags and elsewhere. I don't know what changed, but this time it's wreaking havoc with the build
15:03:40 ndevos: would you want to brief us about the plan?
15:03:42 the fedora koji rpm build
15:03:43 ndevos: thanks for that!
15:04:18 well, the "plan" is to build gluster from several branches every night (00:00 UTC)
15:04:39 kkeithley: sure, we probably should evolve naming guidelines for our releases. I tried to keep it aligned with the Fedora guidelines but got it wrong with qa3.
15:04:53 Fedora provides a COPR service that makes it really easy to rebuild src.rpms for several releases
15:05:11 I don't know what else changed in rpmbuild and koji. qa3 wasn't this bad
15:05:50 kkeithley: probably worth a thread on gluster-devel to finalize naming conventions for our releases.
15:06:34 * ndevos votes for a 3-digit (like x.y.z) release numbering
15:06:36 okay. The vast majority have been major.minor.teeny. Just a few omit the .teeny
15:07:07 I have had complaints about upgrades failing from x.y.zqa releases to the x.y.z GA
15:07:36 so I thought of having an x.yqa release leading up to the x.y.z GA
15:08:09 #link https://fedoraproject.org/wiki/Packaging:NamingGuidelines has an interesting set of options available for naming our pre-release packages
15:08:31 I think that x.yqa is higher than x.y.0... $ rpmdev-vercmp x.yqa x.y.0
15:08:39 we follow the Fedora release number conventions. 3.5.0-0.1beta1 -> 3.5.0-1 should work. I've not had problems with it.
15:09:25 kkeithley's suggestion, if it works, sounds grand :).
15:09:39 yes, for the rpms we can add a 0.1 as the release for the package, that'll work fine
15:09:56 kkeithley: now that we don't build RPMs in the release script, I think having an x.y.zqa release should work for the src tarball.
15:10:28 ok, looks like we have consensus here.
15:10:28 what may not work so well is 3.5.0-0.1qa3 -> 3.5.0-0.1beta1
15:10:49 we'll continue on #gluster-devel
15:10:50 kkeithley: yes, q > b
15:10:52 right
15:10:59 moving on to 3.5.0
15:11:02 #topic 3.5.0
15:11:10 we got the first beta out
15:11:34 quota and geo-rep are not completely functional in beta1
15:11:52 we should get both of them working in the next beta releases.
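(Note: the version ordering kkeithley suggests checking in the naming discussion above can be verified with rpmdev-vercmp from the rpmdevtools package. The version strings below are illustrative only, not the naming decision itself, and the comments state the result expected from rpm's comparison rules: a numeric segment sorts above an alphabetic one, and a version with extra trailing segments sorts above its shorter prefix.)

  $ rpmdev-vercmp 3.5qa1 3.5.0
  # expected: 3.5.0 wins, since the numeric "0" segment sorts above the alphabetic "qa"
  $ rpmdev-vercmp 3.5.0qa1 3.5.0
  # expected: 3.5.0qa1 wins, since the extra trailing "qa1" segment sorts above the bare
  # version; this is the upgrade failure reported for x.y.zqa tarballs
  $ rpmdev-vercmp 3.5.0-0.1.qa3 3.5.0-0.2.beta1
  # expected: the beta package wins, because the bumped release integer (0.1 -> 0.2) decides it
  $ rpmdev-vercmp 3.5.0-0.2.beta1 3.5.0-1
  # expected: the GA package wins, so every 0.x pre-release upgrades cleanly to it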
15:12:31 testing feedback on new features will help in determining if certain features have to be tagged as beta in 3.5.0 GA
15:12:39 so your testing feedback would be very welcome :)
15:12:52 johnmark: we are going ahead with the community test weekend with beta1, right?
15:13:48 not sure if johnmark's flaky wifi is acting up, but the tentative plan is to go ahead with the community test weekend
15:14:03 When is the test weekend?
15:14:09 * lalatenduM thinks it is johnmark's flaky wifi
15:14:15 davidjpeacock: scheduled to start on the 17th
15:14:58 let us note an AI on johnmark for the test weekend.
15:15:10 #action johnmark to send out a confirmation on the test weekend
15:15:40 once we have quota and geo-rep working, it should be a home run for 3.5.0 GA
15:16:13 any questions on 3.5.0?
15:16:27 guess not, let us move on
15:16:31 #topic 3.4.3
15:17:01 kkeithley will be the release wrangler for 3.4.3
15:17:06 indeed
15:17:35 we don't seem to have too many candidates for inclusion (which is a good sign :))
15:17:40 I put 3.4.3 (and 3.3.3) backport requests on the release planning pages. Hopefully everyone saw the email I sent out
15:18:01 yep
15:18:11 yeah, but we might get some nominations in the coming weeks. please feel free to add to the backport wishlist page.
15:18:47 ndevos: do we plan to start the nightly builds soon?
15:19:01 hagarth: YES
15:19:06 pk wants a 3.4.0 nightly to have a bug fix verified.
15:19:52 ndevos: feel free to chime in when you see this.
15:20:02 any more questions on 3.4.3?
15:20:19 moving on
15:20:24 #topic 3.6 planning
15:20:34 I sent out a note on the mailing lists around plans for 3.6
15:21:03 we are again attempting a 4-month release cycle and I think we are in a better position to adhere to the dates this time around.
15:21:40 I believe we want to do Nagios monitoring in 3.6 as well? Need to collaborate with Dusmant (sp?).
15:21:51 hagarth: nightly builds (for testing only) can be scheduled very soon - just need some tools on build.gluster.org
15:22:04 kkeithley: yes, we should create a feature page for Nagios monitoring.
15:22:09 ndevos: cool
15:22:45 if you have comments on the release schedule, please add them to the ML discussion.
15:22:45 hagarth: who should I ping to install some (1, 2) packages?
15:23:06 ndevos: I can install those packages if you let me know what's needed.
15:23:18 ok, will do
15:23:49 any other comments on the 3.6 plan?
15:24:26 guess not .. let us move on to a discussion around volume snapshots slotted for 3.6
15:24:33 rjoseph: do you want to take over?
15:24:43 thanks hagarth
15:25:03 we are planning to introduce the Gluster snapshot feature in 3.6
15:25:38 is that based on lvm?
15:25:43 This feature will be based on thinly provisioned LVM
15:25:58 What's the two-line scope of Gluster snapshots?
15:26:42 A point-in-time copy of an online gluster volume.
15:26:44 Are volume snapshots required for using geo-replication with VM images?
15:27:01 * jclift looks for the 3.6 plan
15:27:15 As of now geo-rep will not make use of snapshots
15:27:23 for geo replication
15:27:41 Found it. http://www.gluster.org/community/documentation/index.php/Planning36
15:27:56 Thanks jclift
15:28:22 can the volume that was snapshotted be accessed easily? Like, would there be a new volume like VOLNAME_snap_$DATE ?
15:28:28 The snapshot of a volume will be a virtual gluster volume
15:28:42 samppah: file snapshots probably would be more useful for geo-replicating VM images
15:28:56 rjoseph: Does the design deal with future filesystems? (btrfs/ZFS)
15:29:23 hagarth: nod, any plans for that? (sorry for the off-topic question)
15:29:24 ira: It's lvm snapshots...
15:29:52 As of now it is lvm based, but we are planning to extend it to btrfs in the future
15:30:36 samppah: file snapshots are going to be available in 3.5 (need to check how we can build a story around this and geo-rep).
15:31:04 I'm still wondering, can I just access a "virtual gluster volume" to make a backup, or how would I use it?
15:31:05 the intention with volume snaps is to make the snapshotting technology pluggable
15:31:54 and have an abstraction so that we could use lvm, zfs, btrfs etc. for snapshots.
15:32:23 ndevos: IIUC, we would need to perform a snap start to make the "virtual gluster volume" available.
15:32:40 ndevos: Yes, you can use the snapshot volume as a regular (read-only) volume, and thus can access the volume for backup
15:32:55 sounds really cool!
15:33:23 samppah: the block device xlator is also much improved in 3.5, though I haven't played around with it as yet. we can trigger snaps there too.
15:33:30 * lalatenduM likes the idea of a read-only volume, i.e. a snapshot
15:33:48 The main challenge with the snapshot feature is to make gluster crash consistent
15:33:58 hagarth: oh, nice.. need to take a look at it :)
15:34:24 All the components and xlators need to be crash consistent.
15:34:44 rjoseph: that is an interesting challenge :)
15:34:50 :)
15:34:52 rjoseph: do you have any write-up on the challenges involved in the crash-consistency requirements?
15:35:35 As of now I don't have a write-up with me.. I am working on it... and will share it with you all
15:35:45 Are these snapshots to support arbitrary tagging?
15:36:28 I'm just trying to wrap my head around whether this is going to be similar to version control in usability
15:37:05 davidjpeacock: yes, in a sense. A snap is going to be the equivalent of a commit id in version control - you can always roll back to that commit id/tag.
15:37:12 Got it - thanks
15:37:56 rjoseph: can you share a link to the evolving codebase for snapshots?
15:39:15 All the code review can be seen @ http://review.gluster.com under the "gluster-snapshot" project
15:39:20 rjoseph: also, can we have early drops of this functionality available? it surely looks like a cool feature to play with!
15:39:45 rjoseph: have you spoken to some of the lvm developers to check with them for best practices?
15:40:02 #link http://review.gluster.org/#/q/project:glusterfs-snapshot,n,z for following snapshot development
15:40:05 hagarth, rjoseph, is there a plan to make the current snapshot code available at https://github.com/gluster
15:40:16 In a day or two we will be sending rpms and code to the mailing list
15:40:25 rjoseph: awesome!
15:40:31 indeed
15:40:34 also we are planning to merge the current work upstream soon
15:40:59 ndevos: we are in discussion with the lvm team on this
15:41:07 rjoseph: very good :)
15:41:29 lalatenduM: we could mirror it on the forge; if you are looking for a git repo, you can see it here - #link git clone git://review.gluster.org/glusterfs-snapshot
15:41:30 So they are providing their input on this
15:41:49 hagarth, cool
15:42:28 I think there will be some general recommendations on how bricks should be laid out on top of logical volumes to get better performance etc.
15:43:19 We have already identified some recommendations on brick layouts and VG layouts.
15:43:33 rjoseph: great!
15:44:00 Also, this feature as of now will only support thinly provisioned lvms, i.e. bricks should be carved out of thinp
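(Note: for readers unfamiliar with thin provisioning, here is a minimal sketch of what "bricks carved out of thinp" looks like using plain LVM commands. The volume group and LV names are made up for illustration, and this is ordinary LVM usage, not the gluster snapshot CLI being designed above.)

  # create a thin pool inside an existing volume group (names are illustrative)
  $ lvcreate --size 100G --thinpool brickpool vg_bricks
  # carve a thinly provisioned LV out of the pool and use it as a brick
  $ lvcreate --virtualsize 200G --thin vg_bricks/brickpool --name brick1
  $ mkfs.xfs /dev/vg_bricks/brick1
  $ mount /dev/vg_bricks/brick1 /bricks/brick1
  # a snapshot of a thin LV needs no preallocated size and is near-instant, which is
  # what makes per-brick snapshots of a whole volume practical (by default it is
  # created with the activation-skip flag, so activate it later with lvchange -ay -K)
  $ lvcreate --snapshot --name brick1_snap vg_bricks/brick1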
15:44:26 We will be publishing these recommendations as well
15:45:12 cool, looking forward to that
15:45:31 looks like I got affected by flaky wifi now :)
15:46:04 anything more on snapshots?
15:46:25 hagarth: What was the last line you saw?
15:46:40 jclift: rjoseph: We have already identified some recommendations on brick layouts and VG layouts.
15:47:01 hagarth: http://fpaste.org/68642/80081513/
15:47:09 jclift: thanks!
15:47:13 np :)
15:47:49 rjoseph: thanks for that useful introduction to snapshots
15:47:56 yes, that was good :-)
15:48:08 moving on
15:48:18 #topic vagrant, puppet & gluster
15:48:49 purpleidea has integrated vagrant, puppet & gluster and it does look like a rapid provisioning scheme
15:49:06 more details can be found here - #link https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
15:49:50 purpleidea is interested in feedback on this .. so if any of us get to play with it, please drop a note to purpleidea here or on his blog post or on the mailing lists.
15:50:42 purpleidea has also been able to re-create an elusive bug using this deployment toolset - https://bugzilla.redhat.com/show_bug.cgi?id=1051992
15:50:45 Bug 1051992: unspecified, unspecified, ---, vraman, NEW, Peer stuck on "accepted peer request"
15:51:10 ok .. moving on to the next topic
15:51:14 #topic open discussion
15:51:27 lalatenduM: do you want to go ahead?
15:51:48 hagarth, sure
15:52:25 I would suggest integrating our users mailing list with a web interface
15:52:47 lalatenduM: using something like hyperkitty?
15:53:03 hagarth, yeah, that's what ndevos suggested last time
15:53:24 hagarth, do we have the hardware needed to deploy this stuff?
15:53:38 lalatenduM: is this to make the mailing list discussions more search-friendly?
15:54:22 hagarth, yeah, it would help others get answers to previously asked questions
15:54:33 using a web search engine
15:54:40 lalatenduM: we do have some. If they are busy, we can spin up a new instance in a public cloud.
15:54:43 johnmark: ^^^
15:54:43 lalatenduM: I believe this is the replacement for the Q&A forum we had long back.
15:54:56 msvbhat, yes
15:55:47 sounds like a good idea, does hyperkitty translate a new post in the forums to an email thread as well?
15:56:01 The idea is to have the Q&A forum linked with emails
15:56:24 hagarth, yeah
15:56:32 lalatenduM: that would be cool to have. let us follow up with johnmark on this one.
15:56:48 #action hagarth and lalatenduM to follow up with johnmark on hyperkitty
15:57:14 any more topics?
15:57:16 real quick: build.gluster.org... is running CentOS 6.3. I have several hosts that can't ssh to it — mainly it's some RHEL6 boxes (but not others) but also the NetBSD vm (running on RHEL6) that's meant to be a jenkins slave. With wireshark I can see that b.g.o isn't ACKing the SYNs. And it's not just ssh SYNs that aren't ACKed, you can see it with http, telnet, etc. We should update the kernel at least (and reboot) and see if ...
15:58:11 kkeithley: might be a firewall rule to block SYN scans too
15:58:22 kkeithley: let us check that out..
15:58:28 iptables are empty.
15:58:36 no selinux
15:58:41 kkeithley: Is build.gluster.org physical or virtual?
15:58:49 jclift: virtual
15:59:23 kkeithley: You're logged into it atm?
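(Note: a possible way to narrow down the unanswered SYNs described above, assuming nothing upstream is filtering the traffic. One classic cause on kernels of that era is net.ipv4.tcp_tw_recycle, which together with TCP timestamps silently drops SYNs from some clients, typically hosts behind NAT or with skewed timestamps, and would match the "some boxes but not others" symptom. This is only a guess, not a diagnosis; the commands below are generic diagnostics.)

  # on build.gluster.org: is timestamp-based connection recycling enabled?
  $ sysctl net.ipv4.tcp_tw_recycle net.ipv4.tcp_timestamps
  # look for SYNs rejected because of timestamps (exact counter wording varies by kernel)
  $ netstat -s | grep -i reject
  # confirm the SYNs reach the box but never get a SYN/ACK back
  $ tcpdump -ni eth0 'tcp port 22 and tcp[tcpflags] & tcp-syn != 0'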
15:59:28 Dan Mons from cuttingedge in Australia will be around in the next community hangout
15:59:44 I was and can be again in about 3 sec
15:59:56 they have a fascinating gluster deployment, so do tune in tomorrow if you are interested.
16:00:25 ok, that should be it for today, folks. Thanks for dropping by, see you all next week!
16:00:28 hagarth, it is a great idea to listen to the community
16:00:44 lalatenduM: yeah!
16:00:46 I mean the gluster implementations
16:00:54 #endmeeting