18:59:51 #startmeeting Infrastructure (2011-08-18)
18:59:51 Meeting started Thu Aug 18 18:59:51 2011 UTC. The chair is nirik. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:59:51 Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:59:52 #meetingname infrastructure
18:59:52 The meeting name has been set to 'infrastructure'
18:59:52 #topic Robot Roll Call
18:59:52 #chair smooge skvidal codeblock ricky nirik abadger1999
18:59:52 Current chairs: abadger1999 codeblock nirik ricky skvidal smooge
18:59:58 * skvidal is here
19:00:00 * Flare183 is here
19:00:10 * athmane is here
19:00:16 hi, here, etc
19:01:32 ok, I guess let's dive on in.
19:01:43 #topic New folks introductions and apprentice tasks/feedback
19:01:58 any new folks? or apprentice tasks we should discuss?
19:02:39 * nirik listens to crickets.
19:02:49 the ones here at night
19:02:51 as always, feel free to chime in in #fedora-admin/noc.
19:02:52 they've been really loud
19:02:54 How many items have been added to the talk repo
19:02:57 Not new per se, just back after being gone.
19:03:01 the infra-repo
19:03:13 skvidal: same here of late.
19:03:29 smooge: infra-docs? or infra-hosts?
19:03:33 geez I can't type.. infra-docs
19:03:36 rfelsburg: welcome back. ;)
19:03:40 welcome back
19:03:46 smooge: some... lots more need to be done tho
19:03:49 thanks nirik/smooge
19:04:26 I'd be happy to assist anyone wanting to work on that...
19:04:44 me too. email works great if I am not on IRC
19:04:51 smooge: some - not all of them - I've been doing them as I read them
19:05:15 * nirik means to do some too, but hasn't.
19:05:34 ok, moving along...
19:05:37 #topic F16 Alpha Freeze reminder and tickets.
19:05:58 smooge: mainly b/c some of them I have to fix as I migrate
19:06:00 I think sijis is working on getting the alpha website squared away.
19:06:06 smooge: b/c some of our SoPs are badly out of date
19:06:07 (sorry)
19:06:12 np
19:06:17 * pingou hi
19:06:21 yeah, many need updating. ;)
19:06:43 we should have alpha syncing out soon, so we can check perms and space and all that fun stuff.
19:07:10 anyone have any alpha related issues or concerns?
19:07:25 no, this one is pretty quiet on our side
19:07:31 knock on silicon wood
19:08:04 yeah, hopefully it will all go smoothly.
19:08:09 ok, moving on then...
19:08:21 oh, and hi pingou. ;)
19:08:24 #topic Fedora hosted plans
19:08:26 I'll try to deal with the simple ones as needed
19:09:27 we talked about this on the list some. I think we have at least a short term plan hashed out... make new collab/hosted pairs much like they are now, move lists to new collab, and migrate stuff to hosted with an eye for expansion down the road if need be.
19:09:32 Flare183: thank you
19:09:35 smooge: I looked at allura's install doc... I think it would make hosted more heavy-weight than it currently is. Not sure if that's necessarily bad.
19:09:51 Flare183: please do ask if anything looks unclear or out of date. ;)
19:10:08 Flare183: or migrate it and just mark it as OUT OF DATE at the top
19:10:21 * nirik has a dislike of sourceforge... possibly based on the old code, but it used to be really difficult to use.
19:10:21 abadger1999, yeah.. this looks a lot more enterprise class than trac
19:10:29 smooge: It might be hard to package... the download is a 25MB pybundle... whatever that is.
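For context on the "pybundle" mentioned above: as far as I know this was pip's (since removed) bundle format, a single archive holding a package's sdist plus those of all its dependencies, which pip of that era could install directly. Roughly, with a hypothetical filename:

    pip install Allura-1.0.pybundle   # unpacks and installs the bundled sdists in one go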
19:10:35 abadger1999: I thought the same thing when I looked at it - that allura looked like it had a lot of infrastructure we may or may not ever use but will require a lot to keep it running
19:10:48 so
19:10:55 I actually have a ridiculous dream
19:11:01 if we could do redmine in openshift
19:11:06 a git repo for every .pot?
19:11:12 and point it back at git repos on hosted
19:11:31 this is partly b/c I'm lazy and partly b/c I don't think it is a terrible thing to help openshift ;)
19:11:50 yeah, worth exploring...
19:11:50 and b/c they are not bound by the 'need an rpm' rules
19:11:54 b/c of how they are laid out
19:12:06 anyway - I like the plan we've been discussing for hosted
19:12:12 ok, just as long as we say this is an experiment and if openshift goes away for some reason we can roll with it
19:12:18 does everyone agree on the short term plan? and we continue to look at longer term?
19:12:28 nirik: oh and even if we decide that merging hosted lists and collab lists is not a good plan - load-wise
19:12:39 nirik: I still think a single instance that just does lists for hosted saves us a bunch of pain
19:12:46 nirik: would you agree?
19:12:47 yeah.
19:12:52 agreed.
19:13:10 * nirik notes the archiver is still working on the 16th on hosted. Perhaps it will catch up someday.
19:13:21 nirik: right - do you know if it is cpu or io-bound??
19:13:35 nirik: What's the short term plan precisely? spinning new hosted boxes for backend and using dns to make it appear like one from the front?
19:13:52 skvidal: cpu it looks like.
19:13:58 is the splitting just for repos or for repos and trac?
19:14:05 nirik: can we throttle back other processes and nice that one
19:14:28 abadger1999: i think it's for repos and trac?
19:14:40 note _think_
19:15:22 abadger1999: make new collab03/04, and hosted03/04 that are rhel6 and similar to what we have now. Migrate projects over, but add in cname or the like for trac/scm. It's still 1 machine, but we can easily move things around to 2 or more machines if need be down the road...
19:15:53 and all lists migrate to the new collab03
19:16:23 sounds good
19:16:26 what else is on collab?
19:16:29 okay. So actually, the initial change is very small.
19:16:30 some of our load issues are mailman, some of them are our old trac/gitplugin.
19:16:34 the older collab01/02
19:16:39 so, new version might be much better.
19:16:39 I didn't think the archiver normally chewed that much CPU nirik, but perhaps I'm just out of the loop, is this something new?
19:16:43 the only thing splitting off is mailing lists.
19:16:49 abadger1999: yep.
19:16:54 and moving to rhel6.
19:16:57 new trac, etc.
19:16:58 The other portion is just a one-by-one migration to rhel6/new trac
19:17:11 rfelsburg: well, we have this list... autoqa-results.
19:17:12 Sounds good to me.
19:17:15 it gets a LOT of emails.
19:17:17 nirik: question
19:17:28 do we want to go ahead and generate the cnames now?
19:17:31 Say no more nirik
19:17:37 point them all at hosted01
19:17:47 and then move them as we migrate hosts off?
19:17:51 s/hosts/projects/
19:17:52 or
19:17:56 just add them as we migrate?
19:18:06 skvidal: we could, but it might be better to add as migrated, so we have a good way to tell what's moved?
19:18:21 rfelsburg: the 16th (which it's working on) has had 11,000 emails so far.
19:18:25 from a support pov, seems better to have the cnames now.
19:18:40 so that when supporting end users, we don't have to know what's moved.
19:18:42 I guess it could be easy to tell from other ways.
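To make the cname discussion concrete: today a wildcard record answers for every name, and the question is whether to add explicit per-project records up front (all pointing at hosted01) or only as each project migrates. A rough zone-file sketch of the two states; the record names and targets are illustrative, not the real fedorahosted.org zone:

    ; current state: the wildcard makes every name resolve to the existing box
    *.fedorahosted.org.            IN  CNAME  hosted01.fedoraproject.org.

    ; explicit per-project records, added either all at once now or one by one as projects move
    someproject.fedorahosted.org.  IN  CNAME  hosted03.fedoraproject.org.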
19:18:44 abadger1999: b/c then we could tell everyone the same thing
19:18:50 actually, we have the cnames now.
19:18:50 exactly.
19:18:50 * skvidal is willing to do either :)
19:18:55 nirik: we do?
19:19:00 nirik: Geez, exceptionally bad day or has the email count really gone up that much?
19:19:07 nirik: I thought we just had a wildcard
19:19:15 rfelsburg: the latter
19:19:21 damn.
19:19:34 skvidal: right, I was joking about the wildcard. They all resolve already. ;)
19:19:47 due to the wildcard
19:20:05 nirik: right - so we don't have the cnames now
19:20:07 I guess I don't care strongly about making them now or later.
19:21:11 I expect us to have a small pool of interested projects migrate soon...
19:21:22 then move over time, then we set a flag day to move the laggards.
19:22:04 gotta love flag day
19:22:17 Yep. ;)
19:23:15 anyhow, anything more on the short term hosted plans? or long term?
19:23:39 I'll note that I am also waiting to hear from serverbeach about a hardware refresh... would be nice to setup these new ones on new machines.
19:23:53 nirik: question
19:24:03 nirik: do we want to keep all of hosted in a single site colo-wise?
19:24:03 or we could put them somewhere else.
19:24:15 well, we could stick the secondary one somewhere else...
19:24:21 it's a fair bit of BW probably.
19:24:27 but otherwise I don't see an issue...
19:24:28 ie: should we have some of them in ibiblio?
19:24:32 and some in osuosl, maybe?
19:24:50 so, split them out now?
19:25:22 I was thinking maybe so?
19:25:34 do we gain anything by having them be on one site?
19:25:50 well, ability to have the warm spare I suppose.
19:26:04 or we could still do that on other sites, just more machines.
19:26:25 I think osu is pretty low on disk
19:27:09 true
19:27:11 I don't know how important the warm spare is.
19:27:19 one advantage of having lots of slices
19:27:23 for git repos/trac
19:27:29 it means less downtime and less data lost in case of doom.
19:27:37 and fewer resources required to run a slice
19:27:52 once we have the base image
19:27:56 that we build from
19:28:12 I should be able to whip up something in func to select the slice with the least instances but most disk space
19:28:14 to allocate to
19:28:15 we've also used the warm spare as a "staging" for hosted.
19:28:28 which does kinda defeat using it as a warm spare, though :-(
19:28:28 abadger1999: to be fair we've done that kinda badly
19:28:47 abadger1999: the boxes are frequently not the same as one another
19:28:54 and if we want to test something out in the slice model
19:28:56 well, we could still do warm spares/pairs... in more places. Just means more machines and such.
19:29:00 we can just have a slice
19:29:04 so turn it around, what are the benefits of moving it to osuosl or ibiblio
19:29:12 rfelsburg: I'm not suggesting moving it there
19:29:19 I'm suggesting allowing a slice to be there
19:29:23 rfelsburg: diversity... if one site is down others are still up.
19:29:28 less eggs in one basket. ;)
19:29:30 in other words do we need all of it in one colo
19:29:31 we're juggling too many tasks with too few boxes... slices sound nice in that model.
19:29:37 abadger1999: +1
19:29:37 okay, that makes more sense. are we having any problems with reliability at this point?
19:29:48 rfelsburg: nope, just paranoia. ;)
19:29:54 rfelsburg: all of our colos have had outages at one point or another
19:30:09 having it so all of any single resource is bound up behind a spof is, imo, a bad thing
19:30:13 we do backup hosted to phx2, but that's daily...
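A rough sketch of the slice-selection idea skvidal mentions above (19:28): pick whichever slice host has the most free disk for the next project. He planned to build it with func, but plain ssh/df shows the shape of it; the host names and the /srv mount point are illustrative, and the "least instances" half of the check isn't shown:

    # list the candidate slices by free space on /srv, biggest first, and take the winner
    for h in hosted-slice01 hosted-slice02 hosted-slice03; do
        printf '%s\t%s\n' "$(ssh "$h" df -P /srv | awk 'NR==2 {print $4}')" "$h"
    done | sort -rn | head -1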
19:30:25 we are trying to minimize the spofs
19:30:55 gotcha, sorry still trying to catch up to where we're at
19:31:27 rfelsburg: no sweat
19:31:29 I guess the question is: do this now short term, or wait until we perhaps have a more cloudy setup later to better handle this...
19:31:44 nirik: with or without cloud instances
19:31:51 we need to setup the base slice
19:31:55 right.
19:31:58 and figure out if this shit is gonna work at all :)
19:32:07 so, how about this is 1.5?
19:32:18 1.0 -> rhel6 / base instance setup working.
19:32:33 1.5 -> duplicate on another site, and get working overall with the other one.
19:32:38 2.0 -> rainbows
19:33:02 I'm okay w/that
19:33:10 provided you add unicorns to 2.0
19:33:34 * skvidal is a holdout for unicorns
19:33:38 once we have a hosted03 ready, it should be not too bad to duplicate it to a hosted05 at another site and migrate some things to it.
19:33:46 aren't they messy tho?
19:33:48 :)
19:33:51 nirik: they shit rainbows
19:33:53 so... no
19:34:04 ha.
19:34:12 * skvidal is full of class today
19:34:13 okay
19:34:13 ok, anything else on hosted?
19:34:23 do you care where hosted03 happens?
19:34:33 I guess not...
19:34:33 or do we have a space for it at serverbeach?
19:34:44 well, I wanted to wait and see if they were giving us new hardware.
19:34:55 then we could setup the new stuff on the new hardware.
19:35:08 fair enough
19:35:13 * skvidal has no problem with waiting :0
19:35:14 :)
19:35:35 I don't want to wait too long tho. RHEL6/trac 0.12 has been a long time coming.
19:35:57 nirik: agreed
19:36:29 ok, moving along then...
19:36:31 #topic Upcoming Tasks/Items (nirik)
19:36:41 Freeze ends after the release next week.
19:36:52 I have a few minor commits queued up.
19:37:12 I'd like to get a rhel6 app server setup
19:37:24 switch some backups to new backup box
19:37:48 possibly look at re-installing tummy01 or one of the smaller remote hosts.
19:38:12 I think we could look at migrating ibiblio01 stuff to new rhel6 versions on ibiblio02
19:38:34 any folks have other items waiting for after the freeze?
19:38:48 beta freeze starts 09-13
19:40:17 we have some updates piled up... none of them seem to require a reboot.
19:40:45 always nice to hear
19:41:04 although I am sure there will be an os minor release sometime to mess us up. ;)
19:41:48 Of course I'd like to get the new boxes installed and usable hopefully soon too. ;)
19:42:00 anyhow, if no one has any exciting plans, will move on in a min
19:42:41 I've got the ambassadors raffle app about ready -- just suddenly realized that I was using CSS code from fedoracommunity and it's agpl so I have to fix that somehow before putting it someplace public for inode0 to evaluate.
19:42:52 cool.
19:42:55 abadger1999: :(
19:42:58 * abadger1999 sent email to mizmo to see if she can relicense
19:43:09 otherwise, I guess I throw out the css and reimplement clean.
19:43:26 :(
19:43:36 nirik: how's askbot coming?
19:43:45 oh yeah, I was going to discuss that some.
19:43:52 #topic RFR's
19:44:03 so, askbot is looking to move to staging now...
19:44:15 I posted to the list an idea/plan for setting it up.
19:44:31 basically: add to proxy01.stg and make a new ask01.stg instance for backend.
19:44:49 I am not sure if it makes sense to have its db on the main db host or separate on its own instance.
19:45:07 ask does need rhel6.
19:45:34 so, feedback on that welcome.
19:46:10 I think fpaste is still working on their dev instance to get everything aligned and ready...
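A minimal sketch of the ask staging layout described above (a frontend entry on proxy01.stg, backend on a new ask01.stg instance), assuming a plain mod_proxy vhost; the real proxies are puppet-managed and the hostnames and URLs here are illustrative only:

    # on proxy01.stg: forward ask traffic to the ask01.stg backend
    <VirtualHost *:80>
        ServerName  ask.stg.fedoraproject.org
        ProxyPass        / http://ask01.stg.phx2.fedoraproject.org/
        ProxyPassReverse / http://ask01.stg.phx2.fedoraproject.org/
    </VirtualHost>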
19:47:17 Hopefully people will like askbot and use it. :)
19:47:30 #topic Meeting tagged tickets:
19:47:30 https://fedorahosted.org/fedora-infrastructure/query?status=new&status=assigned&status=reopened&group=milestone&keywords=~Meeting&order=priority
19:47:37 any other tickets folks would like to discuss?
19:48:35 not me.. I am still giggling about unicorn farts
19:48:42 smooge: unicorn poop
19:48:44 get it right
19:48:52 just as a side note, I'd like to get us under 200 tickets by the end of the year. ;) We will see...
19:48:55 unicorn farts are just methane like anyone else :)
19:48:57 no I was thinking that if they poop rainbows what do they fart?
19:49:09 skvidal: that's boring. ;)
19:49:10 smooge: it's colorful methane, I admit
19:49:25 nirik: rainbow colored methane is boring?
19:49:27 hrmph
19:49:29 #topic Open Floor
19:49:38 Anyone have anything for open floor?
19:49:41 I figure they farted lasers
19:49:48 methane powered lasers
19:49:52 perhaps ponies?
19:50:02 nirik: they fart ponies?
19:50:07 yeah. ;)
19:50:08 dear god that would hurt
19:50:13 I'd say so.
19:50:21 open floor
19:50:28 I emailed the epylog people
19:50:35 and they said "what? oh that? yah, okay"
19:50:52 so I'll probably rip it away from svn with all of its weirdnesses
19:50:58 and put it into git with all of its weirdnesses
19:51:03 excellent.
19:51:04 but weirdnesses I have to deal w/ more often
19:51:10 would be good to upstream things.
19:51:11 and merge all the modules and patches I have
19:51:39 any further thoughts on the auth issue?
19:51:47 that I hates it?
19:51:47 * nirik didn't come up with any brilliant solution.
19:52:44 if nothing else
19:52:48 I will make an htpasswd file
19:52:51 in the dir
19:52:58 and chown it to sysadmin-logs
19:53:04 and you can add your damned self
19:53:12 sure. simple. I like it.
19:53:25 and to be fair
19:53:28 if you can't add yourself
19:53:33 then you don't need to look at the logs ;
19:53:34 )
19:54:09 sounds fine. might add it to the log sop too then.
19:54:16 indeed
19:54:18 sounded a little jaded there skvidal lol
19:54:24 rfelsburg: a little?
19:54:27 hmm
19:54:31 tiny bit :)
19:54:32 I must be softening in my old age
19:54:36 I meant to sound a lot jaded
19:54:36 :)
19:54:44 lol
19:54:53 just in case it isn't clear
19:54:58 I'm just messing about
19:55:01 * nirik will close out the meeting in a minute or two if nothing else comes up.
19:56:13 If I didn't understand sarcasm I'd have no place in IT
19:56:47 ok, thanks for coming everyone!
19:56:52 #endmeeting
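For the log SOP, a minimal sketch of the htpasswd scheme skvidal describes near the end of the meeting (create the file in the protected directory and hand group ownership to sysadmin-logs so members can add themselves). The path is illustrative and the Apache stanza assumes plain basic auth:

    # one-time setup in the directory being protected
    htpasswd -c /srv/web/logs/.htpasswd skvidal    # -c creates the file with a first user
    chgrp sysadmin-logs /srv/web/logs/.htpasswd
    chmod g+rw /srv/web/logs/.htpasswd

    # any sysadmin-logs member can then add themselves
    htpasswd /srv/web/logs/.htpasswd $USER

    # matching Apache config for the directory
    AuthType Basic
    AuthName "infrastructure logs"
    AuthUserFile /srv/web/logs/.htpasswd
    Require valid-user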