20:01:09 <mmcgrath> #startmeeting Infrastructure
20:01:09 <zodbot> Meeting started Thu Feb 4 20:01:09 2010 UTC. The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:10 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:01:11 <mmcgrath> who's here?
20:01:15 * lmacken
20:01:22 <Oxf13> I'm here, but distracted with lunch and another meeting
20:01:22 * jaxjax is here
20:01:28 <yawns1> here
20:01:35 <wzzrd> here
20:01:57 * dgilmore is present
20:01:59 * hiemanshu
20:02:00 * skvidal is here
20:02:05 * nirik is around in the cheap seats.
20:02:11 <sheid> is here
20:02:21 * a-k is here
20:02:23 <mmcgrath> no one creates meeting tickets anymore so we can skip that :)
20:02:24 * abadger1999 here
20:02:29 <mmcgrath> #topic /mnt/koji
20:02:35 <mmcgrath> So I'm just making everyone aware what's going on here.
20:02:42 <mmcgrath> 1) we have a try'n buy from Dell on an equallogic
20:02:54 <mmcgrath> 2) /mnt/koji is 91% full
20:02:57 <mmcgrath> so we need to get cracking.
20:03:07 <mmcgrath> the equallogic is in a box in the colo so hopefully it won't take long.
20:03:08 <lmacken> I still have a script running from last night that is cleaning /mnt/koji/mash/updates
20:03:21 <mmcgrath> lmacken: oh that's good to know, any estimate on how much it'll clean up?
20:03:43 <lmacken> mmcgrath: I'm not quite sure...
20:03:48 <mmcgrath> <nod> no worries.
20:03:50 <lmacken> there are a *ton* of mashes to clean up
20:03:53 <Oxf13> hard to say with the hardlinks
20:03:53 <dgilmore> i expect little
20:03:59 * ricky is around
20:04:02 <dgilmore> since it should be mostly hardlinks
20:04:05 <mmcgrath> I'm *hoping* to have that thing installed and pingable by early next week.
20:04:08 <mmcgrath> the plan is going to be this.
20:04:17 <mmcgrath> once it's up and running I'm going to drop everything I'm doing and try to get it up and going.
20:04:26 <mmcgrath> dgilmore has promised a significant portion of his time as well.
20:04:30 * dgilmore will be focusing on it also
20:04:45 <mmcgrath> we're going to be focusing on testing, speed, what works, virtualized, unvirtualized, etc.
20:05:02 <mmcgrath> the equallogic will be exporting an iscsi interface, it's our job to figure out what to do with it.
20:05:05 <Oxf13> I've also promised some of my time to help generate traffic for testing
20:05:12 <mmcgrath> Oxf13: excellent
20:05:30 <mmcgrath> mdomsch: I know you enjoy the equallogics, do you have any interest in being involved in this?
20:06:04 <mdomsch> mmcgrath, I probably shouldn't, just so you feel it's a fair eval
20:06:16 <mdomsch> but if you have questions, hit me and I'll try to help
20:06:25 <mmcgrath> mdomsch: fair enough :)
20:06:36 <mmcgrath> Ok, so that's really all I have on that for the moment, any other questions?
20:06:42 * mdomsch wants to bias the decision, but :-)
20:07:04 <mmcgrath> mdomsch: you just want to make it a 'n buy? :)
20:07:16 <mmcgrath> Ok, moving on.
20:07:23 <mmcgrath> #topic PHX2 network issues
20:07:41 <mmcgrath> so there's been just a lot of strange things at the network layer in PHX2.
20:07:47 <mmcgrath> our data layer traffic has been fine so that's good.
20:07:55 <skvidal> mmcgrath: scooby doo sees odd things - phx2 is downright haunted
20:08:02 <mmcgrath> It seems much of that has been fixed at least as of right now.
20:08:03 <mmcgrath> skvidal: :)
20:08:15 <mmcgrath> Still, the way things are is just too bad for releng and QA to do their work
20:08:22 <smooge> here..
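The EqualLogic mentioned under the /mnt/koji topic above will only export an iSCSI interface, with the team left to decide how to use it. As a rough illustration of the first step of that evaluation, the sketch below discovers and logs in to an iSCSI portal with the standard iscsiadm modes; the portal address is hypothetical, and CHAP authentication and multipath layout are deliberately left out.

```python
#!/usr/bin/env python
# Hypothetical sketch: discover and log in to an iSCSI target such as the
# EqualLogic try-and-buy unit.  The portal address below is made up; CHAP
# auth and multipath configuration are out of scope here.
import subprocess

PORTAL = "10.0.0.50:3260"  # hypothetical EqualLogic group IP

def run(*args):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.check_output(args).decode()

# 1. Ask the portal which targets it exports (sendtargets discovery).
targets = run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
print(targets)

# 2. Log in to each discovered target so its LUNs show up as block devices.
for line in targets.splitlines():
    iqn = line.split()[-1]  # discovery lines look like "ip:port,tpgt iqn"
    run("iscsiadm", "-m", "node", "-T", iqn, "-p", PORTAL, "--login")
```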
20:08:25 <mmcgrath> so I'm working on setting up alternate sites to grab their snapshots and test info.
20:08:43 <mmcgrath> one example is on sb1: http://serverbeach1.fedoraproject.org/pub/alt/stage/
20:09:00 * mmcgrath thanks the websites-team and likely ricky for getting that all properly branded.
20:09:13 * ricky passes the thanks onto sijis :-)
20:09:24 <mmcgrath> The good news is so far this setup hasn't required a change in releng's workflow.
20:09:42 <mmcgrath> the 'eh' news is that we haven't fully tested it yet, but over time as people use it, if it's working we'll keep it and/or add additional sites.
20:09:47 <mmcgrath> Any questions on that?
20:10:24 <mmcgrath> alllrighty
20:10:29 <mmcgrath> #topic Fedora Search Engine
20:10:32 <mmcgrath> a-k: what's the latest?
20:10:43 <a-k> The news this week is that I've made Nutch available at
20:10:50 <a-k> #link http://publictest3.fedoraproject.org/nutch
20:11:06 <a-k> I intend to see how much both Xapian and Nutch can crawl before they break
20:11:15 <a-k> With Nutch, I expect the time it takes will just become unacceptable eventually
20:11:24 <a-k> Nutch takes longer than Xapian to crawl
20:11:36 <a-k> I still intend to keep looking for/at other candidates, too
20:11:36 <nirik> a-k: what content are you pointing it at right now?
20:11:39 <mmcgrath> Does Nutch make any smart decisions about crawling?
20:12:02 <a-k> I point both at just http://fedoraproject.org
20:12:07 <mmcgrath> a-k: FWIW, one of the test things I've been doing is searching for "UTC" - I've found it's a good way to determine a good engine from a bad one on the wiki
20:12:11 <mmcgrath> for example:
20:12:12 <mmcgrath> https://fedoraproject.org/wiki/Special:Search?search=UTC&go=Go
20:12:14 <mmcgrath> CRAP
20:12:16 <a-k> mmcgrath: what do you mean by smart?
20:12:20 <mmcgrath> http://publictest3.fedoraproject.org/nutch/search.jsp?lang=en&query=UTC
20:12:21 <mmcgrath> not bad
20:12:34 <mmcgrath> well, nutch found the UTCHowto
20:12:41 <mmcgrath> instead of all the ones below it.
20:12:53 * mmcgrath just sayin.
20:12:59 <skvidal> cool
20:13:00 <a-k> It's important not to confuse searching with indexing
20:13:00 <mmcgrath> a-k: how long are we talking about for crawling with nutch?
20:13:15 <nirik> a-k: you might also try meetbot.fedoraproject.org and see how it does with irc logs.
20:13:46 <a-k> Nutch crawled in about 16 hours what Xapian crawled in 8
20:14:03 <a-k> Neither crawl covers the complete site yet
20:14:17 <mmcgrath> are there tunables? is this as simple as 'add more processes' ?
20:14:57 <a-k> Nothing is especially tunable. It might be limited by bandwidth.
20:15:02 <mmcgrath> yeah
20:15:03 <smooge> crawler needs more systems badly...
20:15:17 <nirik> don't shoot the url! ;)
20:15:20 <mmcgrath> 16 hours is a lot but might be acceptable.
20:15:40 <a-k> Although part of Nutch's problem could be an inherent inefficiency in its Java code
20:15:50 <a-k> Xapian is compiled C
20:16:12 <mmcgrath> a-k: what did we get with that 16 hours exactly?
20:16:29 <a-k> About 44k documents indexed
20:16:50 <mmcgrath> and Xapian crawled the same thing?
20:17:05 <a-k> Nutch and Xapian crawl differently
20:17:09 <mmcgrath> a-k: where was the Xapian url again?
20:17:41 <a-k> #link http://publictest3.fedoraproject.org/cgi-bin/omega
20:17:56 <a-k> As always, I keep notes on the wiki page
20:18:02 <a-k> #link http://fedoraproject.org/wiki/Infrastructure/Search
20:18:23 <abadger1999> a-k: You also had the unicode thing you posted in #fedora-admin
20:18:32 <abadger1999> Were you able to find a fix for that?
20:19:01 <a-k> No fixes. Non-Latin characters haven't really been something for which there's a requirement yet.
20:19:07 <a-k> I thought Nutch was a little funky with non-Latin characters, e.g., переводу, compared to Xapian
20:19:15 <a-k> But I've found Xapian examples that handle non-Latin just as bizarrely
20:19:28 <a-k> Neither Xapian nor Nutch claim to handle non-Latin characters
20:20:22 <a-k> We briefly mentioned non-Latin (non-UTF8) in a previous meeting
20:20:22 <abadger1999> <nod>
20:20:32 <a-k> Should there be a requirement around it?
20:20:46 <abadger1999> mmcgrath: What do you think?
20:20:53 <a-k> I suspect any requirement would eliminate ALL candidates
20:21:01 <abadger1999> We have a lot of non-native English users.
20:21:26 <mmcgrath> it probably should be a requirement.
20:21:41 <jaxjax> ññ
20:21:42 <mmcgrath> a-k: I'd think most engines have support for it, if not we should contact them and find out why
20:22:15 <a-k> A requirement as opposed to something we take into consideration when choosing finally?
20:22:22 <smooge> actually it's really really really slow to do non-ascii at times
20:22:43 <dgilmore> a-k: handling all languages should be a requirement
20:23:04 <a-k> Both seem to handle searching by expanding DBCS into hex
20:23:16 <a-k> Most of the time it seems to work
20:23:26 <a-k> Some of the time the results look screwed up
20:24:31 <a-k> Anyway I don't think I've got much more to add right now
20:24:42 <mmcgrath> a-k: thanks
20:24:55 <mmcgrath> We'll move on for now
20:25:00 <mmcgrath> a-k: try to find out what the language deal is
20:25:02 <smooge> a-k: I remember old search engines had problems where language formats got combined on the same page.
20:25:16 <a-k> mmcgrath: ok
20:25:18 <mmcgrath> Anyone have anything else on that?
20:25:40 <mmcgrath> k
20:25:44 <mmcgrath> #topic Our 'cloud'
20:25:52 <mmcgrath> so I'm trying to get our cloud hardware back in order.
20:26:02 <mmcgrath> I've been rebuilding the environment and getting it prepared for virt_web
20:26:06 <smooge> yeah
20:26:08 <mmcgrath> which should be at or near usable at this point.
20:26:10 <smooge> what can I do to help
20:26:17 <smooge> oh you already did it
20:26:27 <mmcgrath> smooge: not sure yet, we have a new volunteer working with me, sheid
20:26:34 <mmcgrath> and I'm sure SmootherFrOgZ as well.
20:26:38 <smooge> cool
20:26:38 <mmcgrath> setting things up initially won't take long
20:26:51 <mmcgrath> it's getting them working and coming up with a solid maintenance plan that will be the tricky part.
20:27:04 <dgilmore> mmcgrath: what base are we using?
20:27:14 <mmcgrath> dgilmore: RHEL
20:27:17 <mmcgrath> and xen at first
20:27:23 <dgilmore> mmcgrath: ok
20:27:35 <mmcgrath> though the conversion to kvm should be quick
20:27:41 <dgilmore> did we sort out the libvirt-qpid memory leaks?
20:27:50 <mmcgrath> dgilmore: nope, I've got a ticket submitted upstream
20:27:56 <dgilmore> mmcgrath: any reason not to start with kvm?
20:27:58 * mmcgrath is hoping to find some C coders to submit patches for me.
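One reason the conversion from Xen to KVM mentioned above can be quick is that the management layer speaks libvirt either way. The sketch below is a hypothetical illustration of that point, not Fedora Infrastructure's actual tooling: it inventories guests through the same long-standing libvirt-python calls regardless of which hypervisor URI answers.

```python
#!/usr/bin/env python
# Hypothetical sketch: the same libvirt code can inventory guests whether the
# host runs Xen or KVM, which is part of why a Xen -> KVM conversion can be
# quick from the management side.  The URIs below are the stock ones.
import libvirt

def list_guests(uri):
    conn = libvirt.open(uri)            # e.g. "xen:///" or "qemu:///system"
    print("hypervisor:", conn.getType())
    # Running guests are listed by numeric ID; defined-but-off guests by name.
    for dom_id in conn.listDomainsID():
        print("  running:", conn.lookupByID(dom_id).name())
    for name in conn.listDefinedDomains():
        print("  defined:", name)
    conn.close()

if __name__ == "__main__":
    for uri in ("xen:///", "qemu:///system"):
        try:
            list_guests(uri)
        except libvirt.libvirtError:
            pass  # that hypervisor driver isn't available on this host
```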
20:28:04 <mmcgrath> dgilmore: not really
20:28:21 * dgilmore set up new box in colo with centos 5.4 and kvm
20:28:25 <dgilmore> it's working great
20:28:47 <jokajak> i use kvm with my rhel 5.4 box and it works much better than xen ever did
20:28:48 <mmcgrath> the memory leak *might* be limited only to libvirt-qpid installs that can't contact the broker.
20:29:13 <mmcgrath> jokajak: that's weird, we've had generally the opposite experience. Performance has either been terrible or as good as xen but never better.
20:29:24 <dgilmore> i never got the deps sorted out to get libvirt-qpid running on my new box
20:29:27 <jokajak> i had stability problems with xen
20:29:42 <Oxf13> mmcgrath: I think it depends on what you install into the vm, and whether or not virtio is used
20:29:49 <mmcgrath> dgilmore: yeah, I need to come up with a long term plan for that too.
20:29:51 <Oxf13> without virtio, kvm is going to be slower than paravirt xen
20:30:00 <mmcgrath> Oxf13: yeah for us most issues were cleared with different drivers
20:30:10 <mmcgrath> I think we have most of it figured out now, our app7 is kvm
20:30:16 <nirik> kvm works great here, but I am using fedora hosts. ;)
20:30:23 <mmcgrath> nirik: yeah
20:30:28 <mmcgrath> so anyone have any questions on this for now?
20:30:48 <dgilmore> the most recent rhel kernel fixed some clock issues i was having
20:31:06 <mmcgrath> k
20:31:10 <mmcgrath> #topic Hosted automation
20:31:14 <mmcgrath> jaxjax: you want to talk about this?
20:31:30 <jaxjax> yep
20:32:03 <jaxjax> I'm currently in the process of installing a full environment on a kvm virtual machine
20:32:39 <jaxjax> testing on my desktop was a bit crap and I expect to have it ready by end of this week so I can test properly
20:32:48 <jaxjax> some questions about fas integration
20:32:49 <mmcgrath> <nod>
20:32:58 <mmcgrath> sure, what's up?
20:33:39 <jaxjax> Can I work on the automatic creation of groups when required?
20:33:52 <jaxjax> or would we have to do it manually?
20:33:56 <mmcgrath> jaxjax: yeah, and it'll be required almost every time.
20:34:00 <mmcgrath> ricky: you still around?
20:34:03 <ricky> Yup
20:34:27 <mmcgrath> ricky: would you be interested in writing a CLI based fas client that creates groups?
20:34:31 <ricky> I don't think we have write methods exposed in FAS yet, so that will require FAS extra
20:34:35 <ricky> **extra FAS support
20:34:53 <mmcgrath> once you're logged in couldn't you just post?
20:35:24 <ricky> Well... I guess you can use the normal form and skip past having a JSON function for it
20:35:36 <ricky> You will probably just have hacky error handling in that case.
20:35:50 <jaxjax> Ricky: Do you mind if I contact you 2morrow or Sat for this?
20:36:01 <mmcgrath> ricky: well, should we focus on getting SA0.5 out the door so we can continue working on stuff like that?
20:36:20 <ricky> Yes
20:36:45 <ricky> jaxjax: Sure, either of those is fine
20:36:53 <jaxjax> thx, will do.
20:37:01 <mmcgrath> k
20:37:10 <mmcgrath> we'll have to meet up and figure out exactly what is still busted
20:37:33 <ricky> There's currently a privacy branch in the git repo
20:37:53 <ricky> (privacy filtering is the current main broken thing)
20:38:34 <ricky> There's basically one design decision I'd like to make before we can refactor all privacy stuff :-)
20:38:47 <mmcgrath> ricky: is that something you can work on in the coming week?
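Ricky's suggestion above — log in and post to the normal group-creation form rather than wait for a JSON write method — might look roughly like the sketch below. This is a hypothetical illustration only: the form URL and field names (name, display_name, owner, group_type) are guesses at what the FAS templates expect rather than a documented interface, and, as noted, error handling can only be crude because the response is HTML.

```python
#!/usr/bin/env python
# Hypothetical CLI sketch for creating a FAS group by driving the normal web
# forms, since no JSON write method is exposed.  The URLs and form field
# names below are assumptions, not a documented FAS API.
import getpass
import sys

import requests

FAS_URL = "https://admin.fedoraproject.org/accounts"

def create_group(username, password, group_name):
    session = requests.Session()
    # Log in through the regular identity form (field names assumed).
    resp = session.post(FAS_URL + "/login",
                        data={"user_name": username,
                              "password": password,
                              "login": "Login"})
    resp.raise_for_status()
    # Post the group-creation form (URL and fields assumed for illustration).
    resp = session.post(FAS_URL + "/group/create",
                        data={"name": group_name,
                              "display_name": group_name,
                              "owner": username,
                              "group_type": "user"})
    resp.raise_for_status()
    # No JSON response to inspect, so success checking is necessarily hacky.
    if group_name not in resp.text:
        sys.exit("group creation may have failed; check the returned page")

if __name__ == "__main__":
    user = input("FAS username: ")
    create_group(user, getpass.getpass(), sys.argv[1])
```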
20:39:18 <ricky> Yeah, I'll get started on that this weekend
20:39:26 <mmcgrath> ricky: excellent, happy to hear it
20:39:30 <mmcgrath> anyone have anything else on this topic?
20:39:52 <mmcgrath> k, we'll move on
20:39:53 <mmcgrath> jaxjax: thanks
20:39:58 <mmcgrath> #topic Patch Wed.
20:40:01 <mmcgrath> smooge: want to take this one?
20:40:02 <ricky> Haha
20:40:13 * sijis is here late. sorry
20:40:24 <smooge> yes
20:40:46 <smooge> Ok I would like to make every second Wednesday of the month patch day
20:41:16 <smooge> we would run yum update on the systems and reboot as needed
20:41:33 <smooge> which lately has meant we will be rebooting every 2nd wednesday of the month
20:41:47 <mmcgrath> smooge: do you want to alter when our yum nag mail gets sent to us?
20:41:54 <mmcgrath> right now I think it's on the first day of the month
20:42:12 <smooge> yes. I will change it to the first weekend of the month
20:42:19 <smooge> close enough for government work
20:42:32 <smooge> in the case of emergency security items, we will patch as needed
20:42:43 <mmcgrath> yeah
20:42:47 * mmcgrath is fine with that
20:42:50 <mmcgrath> anyone have any issues there?
20:43:02 <smooge> usually systems will need to be rebooted per xen/kvm server
20:43:06 <mmcgrath> smooge: It'd be good to get this in an SOP
20:43:13 <mmcgrath> now that we're getting some actual structure around it.
20:43:14 <smooge> yes
20:43:43 <smooge> I have two in mind
20:43:56 <smooge> update strategy, server layout strategy
20:44:12 <ricky> Just curious, is this roughly the way big companies, etc. do updates?
20:44:13 <smooge> making sure we have services on different boxes so we don't screw up things too much
20:44:21 <smooge> it depends
20:44:42 <smooge> some big companies will do them at something like 2am every saturday morning
20:44:55 <smooge> some big companies will do them once a month
20:45:10 <smooge> and some will rely on their sub-parts to do it appropriately (eg never)
20:45:12 <ricky> But nothing like "reboot the db server automatically once a month," right?
20:45:23 <jaxjax> nop
20:45:26 <smooge> depends on the db server
20:45:36 <smooge> if it has a memory leak then yes
20:45:43 <mmcgrath> heh
20:45:45 <jaxjax> you don't do the updates for all servers at the same time
20:45:46 <ricky> Hahaa
20:45:47 <jokajak> why not use something like spacewalk to better manage updates?
20:45:57 <mmcgrath> jaxjax: because then we'd be using spacewalk?
20:46:09 <mmcgrath> doesn't that still require oracle anyway?
20:46:17 <smooge> we might when its postgres support is ready
20:46:23 <wzzrd> yes it does
20:47:03 <smooge> jokajak, it is a good idea. we are just having to wait for things we have little knowledge of to help with
20:47:20 <mmcgrath> smooge: got anything else on that?
20:47:28 <skvidal> how does spacewalk help?
20:47:32 <smooge> jaxjax, yeah you usually schedule the servers into classes and do them per 'class' so that services stay up
20:47:38 <jaxjax> sorry, was on the phone
20:47:52 <smooge> skvidal, knowledge of what boxes are in what state.
20:47:59 <mmcgrath> skvidal: it makes it easy to track what servers need updates, send the 'do the update' requirement and see how it went afterward.
20:48:04 <skvidal> smooge: and massive infrastructure to do that
20:48:07 <jaxjax> yes, normally what you want is to avoid downtime, because some patches make the system not work properly
20:48:10 <mmcgrath> I have to say that aspect of satellite did appeal
20:48:10 <ricky> Is it necessary to reboot the xen machines as often as the other ones?
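For the "knowledge of what boxes are in what state" problem raised above, a far lighter-weight alternative to Spacewalk is a small script in the spirit of the existing yum nag mail. The sketch below is a minimal example, assuming plain SSH access to each host; the host names are placeholders, and scheduling it (for the first weekend of the month, ahead of patch Wednesday) would be left to cron.

```python
#!/usr/bin/env python
# Minimal sketch of a "yum nag" style check: ask each host over SSH whether
# it has pending updates.  Host names are placeholders for a real inventory.
import subprocess

HOSTS = ["app01.example.org", "db01.example.org"]  # placeholder inventory

def pending_updates(host):
    """Return yum check-update output for a host, or '' if it is up to date."""
    proc = subprocess.run(["ssh", host, "yum", "-q", "check-update"],
                          capture_output=True, text=True)
    # yum check-update exits 100 when updates are available, 0 when none.
    return proc.stdout if proc.returncode == 100 else ""

if __name__ == "__main__":
    for host in HOSTS:
        updates = pending_updates(host)
        if updates:
            print("=== %s needs updates ===" % host)
            print(updates)
```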
20:48:18 <mmcgrath> skvidal: yeah it does have a cost
20:48:23 <skvidal> mmcgrath: a huge cost
20:48:23 <mmcgrath> ricky: they keep releasing kernel updates.
20:48:28 <ricky> They don't seem to touch as much user data, so it's nice to avoid rebooting them if we can :-)
20:48:31 <smooge> I wouldn't call it a huge cost
20:48:33 <skvidal> mmcgrath: and for more or less 'yum list updates' that's a lot of crap to sift
20:48:42 <smooge> it's pretty minimal compared to some of the beasts I have had to deal with
20:48:44 <ricky> Ah, I was thinking about the value of security updates on those vs. on proxies, etc.
20:48:45 <skvidal> smooge: you have to run an entire infrastructure and communications mechanism
20:48:51 * nirik notes some of the kernel updates lately don't pertain to all machines.
20:48:52 <mmcgrath> skvidal: updating all of our hosts monthly has become expensive though too.
20:49:08 <nirik> ie, driver fixes where the machine doesn't use that driver at all.
20:49:10 <skvidal> mmcgrath: how would spacewalk help that, then?
20:49:21 <mmcgrath> it's just a couple of clicks and it'll go do the rest.
20:49:23 <skvidal> mmcgrath: I'm not arguing against patch wednesday
20:49:31 <skvidal> I'm arguing against spacewalk being the answer
20:49:40 <mmcgrath> yeah I'm not so sold on spacewalk either
20:49:45 <mmcgrath> but the way we do updates now is pretty expensive.
20:49:48 <smooge> skvidal, I didn't say it was the answer. I said it "might" be the answer
20:50:07 <skvidal> smooge: let's talk about other solutions
20:50:21 <smooge> when the time comes it will be evaluated against what other frankenstein we can come up with to do it better
20:50:27 <mmcgrath> skvidal: FWIW, no one's actually said "we should use spacewalk"
20:50:41 <mmcgrath> jaxjax just asked why we don't and we told him :)
20:50:43 <smooge> I am not against frankensteins.. It's the Unix way
20:50:52 <skvidal> smooge: I'm not talking about frankensteins, either
20:50:59 <smooge> oh I am.
20:51:03 <jaxjax> I see.
20:51:05 <jokajak> s/jaxjax/jokajak ;-)
20:51:08 <skvidal> I'm talking about using the tools we have
20:51:17 <mmcgrath> oh jokajak
20:51:23 <mmcgrath> jokajak: jaxjax: wait, you two aren't the same person?
20:51:28 * mmcgrath only just realized that
20:51:28 <skvidal> smooge: do you have a rough set of requirements?
20:51:34 <mmcgrath> I kept thinking jaxjax was changing his nick to jokajak :)
20:51:34 <jaxjax> :D
20:51:43 <jaxjax> not at all
20:51:51 <smooge> skvidal, yes.. and when you assemble them together they become a frankenstein of parts. talk off channel after meeting
20:51:56 <skvidal> smooge: ok
20:52:15 <mmcgrath> Ok, anyone have anything else on that? if not we'll open the floor
20:52:52 <mmcgrath> alrighty
20:52:54 <mmcgrath> #topic Open Floor
20:52:59 <mmcgrath> anyone have anything else they'd like to discuss?
20:53:04 <mmcgrath> any new people around that want to say hello?
20:53:25 <jpwdsm> I think OpenID might (finally) be ready for some testing
20:53:37 <sheid> hello, i'm new ;)
20:53:39 <mmcgrath> jpwdsm: oh that's excellent news.
20:53:52 <mmcgrath> jpwdsm: how far away are we from it being packaged and whatnot?
20:53:59 <jpwdsm> I can log into StackOverflow and LiveJournal with it, but that's all I've done
20:54:01 <mmcgrath> jpwdsm: is it directly tied to FAS or is it its own product?
20:54:10 <mmcgrath> jpwdsm: test opensource.com
20:54:10 <jpwdsm> mmcgrath: own product
20:54:10 <ricky> Nice :-) What publictest are you on again?
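jpwdsm notes below that the OpenID provider only touches FAS through FasProxyClient. A rough sketch of the kind of credential check such a provider might perform is shown here; the exact FasProxyClient.login() behavior and the AuthError exception are assumptions to verify against python-fedora, so treat this as a shape rather than a recipe.

```python
#!/usr/bin/env python
# Rough sketch of how an OpenID provider might validate a user against FAS.
# Assumption: FasProxyClient.login(username, password) succeeds for valid
# credentials and raises AuthError otherwise -- verify against python-fedora
# before relying on this.
from fedora.client import AuthError, FasProxyClient

fas = FasProxyClient(base_url="https://admin.fedoraproject.org/accounts/")

def credentials_valid(username, password):
    """Return True if FAS accepts this username/password pair."""
    try:
        fas.login(username, password)
    except AuthError:
        return False
    return True
```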
20:54:16 <jpwdsm> mmcgrath: will do
20:54:25 <jpwdsm> ricky: pt6.fp.o/id
20:54:33 <ricky> For what it's worth, I haven't had luck with opensource.com and google or livejournal's openid :-(
20:54:37 <mmcgrath> sheid: welcome
20:54:51 <ricky> Good to hear - I look forward to dropping openid out of FAS :-)
20:55:00 <jpwdsm> mmcgrath: I haven't done much packaging, so I'll probably need some help with that
20:55:29 <jpwdsm> ricky: It uses FasProxyClient, but that's it :)
20:55:37 <ricky> abadger1999 is our python/packaging guru, and we're all around if you have any questions on it
20:56:02 <ricky> We'll also want to ask abadger1999 and lmacken about using the FAS identity provider (and if the TG2 one works with pylons)
20:56:21 <mmcgrath> <nod>
20:56:44 <ricky> (disclaimer if you're not aware - this is written in pylons, which is kind of a subset of TG2 I guess)
20:56:57 <mmcgrath> yeah
20:57:12 <mmcgrath> Ok, anyone have anything else they'd like to discuss? If not we can close the meeting.
20:58:16 <Oxf13> I'd like to point out something for no frozen rawhide
20:58:32 <dgilmore> Oxf13: have at it
20:58:36 <G> oh, Infra meeting?
20:58:38 <Oxf13> my initial tests of doing two composes on two machines at once were favorable. there was not a significant increase in the amount of time necessary to compose
20:58:39 <mmcgrath> Oxf13: sure
20:58:43 <mmcgrath> G: hey
20:58:45 <G> damn, I was awake for it too
20:58:48 <mmcgrath> heheh
20:58:49 <dgilmore> Oxf13: :) nice
20:58:54 <Oxf13> this combined with lmacken's testing of bodhi means I think we can move forward with no frozen rawhide
20:58:56 <ricky> Have koji01.stg and releng01.stg been good for you and lmacken's testing?
20:59:01 <Oxf13> which means we will be stressing things more in the near future
20:59:08 <Oxf13> and it's going to cause a lot of confusion amongst the masses
20:59:13 <ricky> (and cvs01.stg)
20:59:18 <mmcgrath> Oxf13: that should be fine and we should have more hardware for you
20:59:19 <G> The good news, guys: when I'm back in NZ I'll be able to attend them more often
21:00:29 <Oxf13> ricky: it was for luke. I wasn't using .stg for my testing
21:00:34 <mmcgrath> G: excellent :)
21:00:44 <Oxf13> ricky: I will be using .stg for dist-git testing soon, but that will require modifications to koji.stg
21:00:48 <mmcgrath> Oxf13: Ok, well I'm glad that's working out for you
21:01:28 <ricky> Cool
21:02:00 <mmcgrath> ok, if that's it I'll close the meeting
21:02:02 <mmcgrath> #endmeeting