15:00:22 #startmeeting fedora-qadevel
15:00:22 Meeting started Mon Nov 20 15:00:22 2017 UTC. The chair is tflink. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:22 The meeting name has been set to 'fedora-qadevel'
15:00:29 #topic Roll Call
15:00:40 #chair kparal lbrabec
15:00:40 Current chairs: kparal lbrabec tflink
15:00:53 #chair jskladan
15:00:53 Current chairs: jskladan kparal lbrabec tflink
15:01:04 * jskladan rushes in
15:01:40 * kparal is here
15:01:47 * kparal pokes lbrabec frantisekz
15:01:54 * lbrabec is here
15:02:00 * frantisekz is here
15:02:54 dem meetings
15:05:16 cool, let's get started
15:05:25 #topic Announcements and Information
15:05:35 #info got libtaskotron POC using openstack working - tflink
15:05:36 #info more work on using disk-image-builder to build images for Taskotron - tflink
15:05:36 #info added 'pull_request' type to libtaskotron -- kparal
15:05:36 #info built scratch libtaskotron and taskotron-trigger and deployed to dev in order to get github pull-request triggered tasks working -- kparal
15:05:36 #info updated yumrepoinfo in libtaskotron and on servers to reflect F27 being stable -- kparal
15:06:04 anything to add?
15:06:59 comments/questions?
15:07:21 how well does the openstack integration work?
15:07:57 I think it works well. it certainly makes some bits easier
15:08:07 also, have you modified the standard or the ansiblized libtaskotron?
15:08:13 ansiblized
15:08:37 ok, good. another branch to merge would get complicated
15:09:27 will we have a separate topic regarding openstack, or is now the time to delve into details?
15:09:39 I have it set as another topic
15:09:41 yaay, moar branches!
15:09:45 ok
15:09:50 no further questions from me then
15:10:25 I was going to go into the staging topic first as that sets the stage for "why bother with openstack?"
15:11:08 tflink: well, openstack was always kind of cool, but IIRC the "who will take care of the openstack instance" was always the bummer
15:11:08 go ahead
15:11:32 jskladan: infra - they're moving to a more supported openstack instance as I understand it
15:11:33 jskladan: we might have an answer for that soon :)
15:11:40 even better
15:11:57 yeah, I'm not nearly crazy enough to take that on :)
15:12:09 tflink: just checking :)
15:12:26 it's a valid concern :)
15:12:34 anyhow, moving on
15:12:41 #topic What to do about staging?
15:13:22 iirc we wanted to replace it with a fake resultsdb that people can test against
15:13:37 parts of this conversation have been going on for a while, but they now have a deadline: the ancient qa boxes are being turned off and recycled in early December
15:14:08 both beaker and openqa will lose some of their worker boxes
15:14:20 do they *have to* be turned off?
15:14:21 I'm not so worried about beaker right now since it's not running much
15:14:23 yes
15:14:33 why is that?
15:14:35 they need to go off to the big server farm in the sky
15:14:46 kparal: warranty, I guess?
15:14:49 they're ancient, out of warranty and not worth moving to the new racks
15:14:52 euthanasia is not legal here yet
15:15:09 we have new systems to replace them, too
15:15:14 so, what exactly will we lose?
15:15:19 qa01-qa08
15:15:23 and where do we get the replacement boxes from?
15:15:31 we got them back in May
15:15:43 but you're getting ahead of me
15:16:10 hmm, I don't see qa01 in the ansible inventory. what am I doing wrong...
15:16:15 the concern is the loss of throughput for openqa
15:16:28 kparal: qa01 and qa03 are kinda special cases
15:17:12 earlier this year, we got a bladecenter to replace the old boxes that we're losing
15:17:40 while we could just use that for openqa workers, I'm thinking it would be a good time to look into moving to the cloud
15:17:59 that bladecenter is identical to the machines which will make up the new openstack instance
15:18:33 my proposal is to start moving taskotron workers into the cloud, free up some client hosts and use those as openqa workers
15:18:44 so some of these boxes are used for beaker, some for openqa. how is this related to stg?
15:18:56 (I assumed stg means taskotron.stg)
15:19:11 I probably should have said dev/stg
15:19:59 if we decide that we want to move to the cloud, that can't really start en masse until after all the reracking is done in December
15:20:12 which leaves us with a period of time with less openqa throughput
15:20:32 stg comes into this with what kparal was talking about earlier
15:20:46 this might need to be discussed with adamw. we still have F27 modular server to test in openqa for a month or so
15:20:54 tflink: is that such a big deal, though (the slower openqa testing) at the moment?
15:20:56 I've talked to him about this
15:21:12 jskladan: it's not fatal but it's not ideal
15:21:43 about a month ago, kparal and I were talking to infra folk about putting our stg instance on the prod networks
15:22:04 * tflink apologizes for the winding-ness of this topic, hopes it'll be clearer in a moment
15:22:40 what we came up with was to get rid of stg as we currently know it and just have a resultsdb instance since that's what others interface with
15:23:06 possibly some scripts to put some results into that stg instance for the purposes of testing
15:23:26 if we did that, we'd be freeing up one client host which could be used for openqa
15:24:28 we'd be losing one of our testing grounds for taskotron until we started going all cloudy but I think that's a reasonable compromise
15:24:36 is this making sense so far?
15:24:46 tflink: yup
15:24:49 yes
15:25:38 any objections to this plan, assuming that we do end up "embracing the cloud"?
15:25:59 none at the moment
15:26:12 do you intend to set up the fake resultsdb infra in stg soon?
15:26:18 yeah
15:26:28 I'm not sure how trivial or not trivial it's gonna be
15:26:34 it's just a resultsdb instance
15:26:41 kparal: nothing hard about it
15:27:00 * tflink was going to rebuild stg w/o master or other taskotron components
15:27:13 the current dev becomes our stg/testing instance
15:27:14 with a trigger attached that intercepts all new koji builds and bodhi updates etc and submits some random results
15:27:36 I was thinking that could be manual
15:27:38 it doesn't have to be difficult, right. it will just take some effort to keep it in sync with our production setup
15:27:40 thinking/hoping
15:27:43 hmm
15:27:51 yeah, at least a plan of what will happen with it
15:27:55 kparal: would it not be better to intercept resultsdb fedmsgs, and just store the results added to prod?
15:28:23 jskladan: we need to intercept stg fedmsgs
15:28:29 (if the goal is having same-ish data)
15:28:44 the goal is to have any data (pass/fail doesn't have to match)
15:28:50 kparal: me no understand - why exactly? if it is just an instance with random data
15:28:57 what is the difference?
15:29:03 e.g. pingou wants to see some results in stg bodhi as soon as a stg koji build is done
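(A minimal sketch of what such a scripted submission could look like, assuming the ResultsDB 2.0 HTTP API; the staging URL, testcase name and field values below are illustrative placeholders, not the agreed-on setup.)

    import requests

    # Illustrative staging endpoint -- the real URL would come from the eventual deployment
    RESULTSDB_API = "https://taskotron.stg.fedoraproject.org/resultsdb_api/api/v2.0"

    def submit_fake_result(item, testcase="dist.rpmlint", outcome="PASSED"):
        """Post one made-up result so consumers like stg bodhi/greenwave have data to query."""
        payload = {
            "testcase": {"name": testcase},
            "outcome": outcome,  # e.g. PASSED, FAILED, INFO, NEEDS_INSPECTION
            "data": {"item": item, "type": "koji_build"},
            "note": "fake result for staging testing only",
        }
        resp = requests.post(RESULTSDB_API + "/results", json=payload)
        resp.raise_for_status()
        return resp.json()

    # example: submit_fake_result("foo-1.0-1.fc27")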
15:29:30 I suspect that this would be a good conversation to have with folk who interface with resultsdb (greenwave, bodhi, etc.)
15:29:48 manual submission is fine, as long as we abstract it enough - the people won't know the right fields to submit to resultsdb
15:30:01 but at that point we might even make it automated
15:30:22 kparal: depends on what the goal is...
15:30:46 yeah, I can see that making sense but I'm not sure I fully understand what the folks interfacing with resultsdb would need/want
15:30:46 that's what I understood from our past conversation with pingou
15:31:01 ok, anyway, ack in general
15:31:10 in general, yes
15:32:02 #info no immediate objections to the idea of dismantling most of taskotron stg to free up a worker for openqa
15:32:06 does that sound OK?
15:32:44 ack
15:32:46 in practice, it really depends on the use cases, and why we are doing that. especially whether it concerns _us_ directly - pingou could easily set up his own resultsdb instance for the staging bodhi, and fill it with "random" data directly
15:32:52 tflink: ack
15:33:19 jskladan: there are interactions between multiple systems, like bodhi+greenwave
15:33:26 sure, he can set up a custom resultsdb
15:33:30 jskladan: yeah, it'll require some more discussions with the other folk involved
15:33:40 but that's exactly why they want us to set up a proper stg service
15:33:55 kparal: sure, I'm just debating whether our limited pool of HW should be used for "somebody else", and why
15:34:00 so that they don't need to do it for each of their services, and so that the workflows are tested in full
15:34:15 jskladan: one resultsdb instance isn't a big deal in my mind
15:34:26 tflink: it is a precedent, though :)
15:34:26 it's in our best interest to make sure bodhi and greenwave work
15:34:31 and we can move it to other infra machines if it ends up being an issue
15:34:53 also, the traffic in stg is extremely low
15:34:54 ok s/our HW/our resources/ to be more accurate
15:34:54 if I'm remembering the conversation correctly, that was offered
15:34:55 but whatever
15:35:16 let's move on, we can tackle this once it is on the table
15:35:21 sounds good
15:35:45 #topic Moving to the cloud?
15:37:11 I think that we've covered a lot of this in other topics already but for the sake of being complete, I've been working on a libtaskotron POC which uses openstack instances in place of local VMs
15:37:23 tflink: love it!
15:37:37 what are the drawbacks/pitfalls/not implemented parts?
15:37:48 in this otherwise awesome project?
15:38:01 I've not gotten into image discovery/choosing
15:38:10 the code has hardcoded image names right now
15:38:22 do you expect this to be a problem?
15:38:45 no more than what we currently do, no
15:38:49 how exactly do VM images work? you upload them into openstack with some ID and then specify this ID when requesting a new VM instance?
15:39:03 pretty much, yeah
15:39:17 sounds like it could fit our workflow
15:39:30 the idea in the back of my head is to pretty much use what we have been using WRT naming convention
15:39:30 kparal: tflink: and now you broke the spell - I hoped we'd finally get out of the image-building business
15:39:33 * jskladan is sad panda
15:39:51 jskladan: that's another thing I've been working on, actually
15:40:03 jskladan: do you think openstack creates the VMs out of thin air? :)
15:40:22 kparal: it should solve all the problems - it's cloud!
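(For illustration only - a rough sketch of the "upload an image with an ID, then request an instance from it" flow described above, written against the openstacksdk cloud-layer API; the cloud name, flavor and image-name prefix are made-up placeholders and the exact calls would need verification.)

    import openstack  # openstacksdk

    def newest_image(conn, prefix):
        """Pick the newest image following the existing naming convention
        (names end in a YYYYMMDD date, so a plain string sort works)."""
        matching = [img for img in conn.list_images() if img.name.startswith(prefix)]
        return max(matching, key=lambda img: img.name) if matching else None

    def spawn_disposable_client(cloud="fedora-infra-cloud", prefix="taskotron-fedora-26-"):
        conn = openstack.connect(cloud=cloud)  # credentials come from clouds.yaml
        image = newest_image(conn, prefix)
        server = conn.create_server(
            name="taskotron-client-disposable",
            image=image.id,
            flavor="m1.small",
            wait=True,
            auto_ip=True,
        )
        # run the task against the server, then clean up with conn.delete_server(server.id)
        return server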
15:40:31 I forgot
15:40:33 but long story short, I don't think we're going to be completely out of the image creation business anytime soon
15:40:49 kparal: honestly, I'd just hope the infra guys would finally be responsible for that :)
15:41:21 the other complications that I suspect we'd hit would come with implementation
15:41:47 since the openstack instance is more or less isolated from the rest of infra, we'd need to think about how the buildmaster/buildslave communication is set up
15:42:12 btw, openstack requires cloud-init to be installed, right?
15:42:30 tflink: I have a mad idea - we could have a machine in the cloud that would create the base images for said cloud!
15:42:43 my idea there is to move the buildslaves to a persistent cloud instance which ends up requesting more openstack instances and communicates with those instances in a private ip space
15:43:04 jskladan: not sure if you're serious or not but that's kind of what I had in mind :)
15:43:12 * jskladan was serious
15:43:25 but we'll get to that part in a minute
15:43:52 tflink: will the VMs have access to all koji/bodhi/etc services? or in what sense is it isolated?
15:44:02 the other pitfall that I can think of is that we'd be losing some of our "run it on your machine and it's as close to production as we can make it"
15:44:23 kparal: we'd be accessing them using public hostnames as if we weren't in infra
15:44:41 tflink: well, that is a compromise I'm more than willing to make (that run-on-your-machine stuff)
15:44:45 that's much better than dealing with private hostnames honestly
15:45:12 the cloud is in PHX2 but the networking is routed such that it's effectively in another networking space
15:45:37 I figure that we'd probably end up keeping testcloud for local running and testing
15:46:03 so the local use case would continue to be testcloud but the production use case would start using openstack
15:46:07 if we can boot those VMs with testcloud, I think the promise is still kept the same way we spawn VMs now
15:46:20 yeah, it's close but not quite the same thing
15:46:32 I don't have a problem with that
15:46:33 that being said, I'm not sure many people actually use the local execution bits :)
15:46:59 other than the question about images, any other questions/concerns?
15:47:00 but regarding networking, what about our heavy access to koji pkgs? it's still on the local super fast network, right?
15:47:09 it should be, yes
15:47:17 kparal: tflink: that S-word, though :)
15:47:25 oh, another alternative would be to use a vpn
15:47:32 no Swear words in here!
15:47:46 jskladan: true but I think some of that is unavoidable at the moment
15:48:21 from what I understand, newer openstack versions have a kind of vpn-as-a-service that we could utilize
15:48:37 tflink: I'm just cheerfully poking, I absolutely know this is S-kind territory
15:48:39 but that's another of those "cross that bridge when we get there" kind of things
15:48:44 no worries
15:48:45 tflink: agreed
15:49:09 ok, so what's blocking us except for image selection?
15:49:15 the way I see it, the problem could be solved by either a VPN or putting more of our stuff in the cloud
15:49:30 the current cloud is not really production ready
15:49:31 * kparal brb
15:49:52 there are plans in motion to build a new cloud that will be production ready but that won't be ready until mid-December at the earliest
15:50:26 * kparal is back
15:50:50 ok
15:50:57 the remaining bits would be image creation, image selection, sorting out openstack user/tenant/quota bits and a lot of testing
15:51:06 at least we have time to iron out the details
15:51:19 we should be able to use the current cloud for development
15:51:38 more s-words but the python API shouldn't change significantly between openstack releases
15:51:39 great
15:52:25 wow, I didn't think it was this late already
15:52:50 do we have a qa meeting today?
15:52:55 * tflink doesn't remember seeing an announcement
15:53:16 ah, canceled
15:53:40 are folks OK with going past the top of the hour?
15:53:46 fine with me
15:54:03 yup
15:54:18 cool
15:54:38 any objections to what we've talked about so far?
15:54:48 "building" images will be the next topic
15:55:22 no objections. sudo make it happen
15:55:39 no objections
15:55:41 sudo -u kparal make it happen
15:55:48 rather
15:55:53 sudo -u kparal do all the work
15:55:58 :-D
15:56:00 damn, I forgot to cast a protection from evil spell first
15:56:40 #info no immediate objections to the idea of moving taskotron to use openstack instances instead of local VMs on client hosts for our deployments
15:57:02 sound OK?
15:57:28 ack
15:57:46 ack
15:58:07 #topic image "building" monkey business
15:58:40 the base of where I'd like to go with this is diskimage-builder
15:58:42 https://docs.openstack.org/diskimage-builder/latest/
15:59:18 the reason I'm proposing this instead of doing things the way that openqa does them is mostly because diskimage-builder is designed to customize and update released images
15:59:34 so with one command, I can build an updated F26 image that has libtaskotron installed
15:59:57 "DIB_RELEASE=26 disk-image-create -o taskotron-fedora-26-20171114 -p libtaskotron-fedora fedora vm"
16:00:06 tflink: can you run any command, or just something from a pre-defined set of commands?
16:00:20 jskladan: not sure what you mean by command
16:00:21 asking mostly because of the dnf cache voodoo we do
16:00:23 does that rebuild the image from scratch, or modify the existing image by mounting it and installing the package?
16:00:45 it modifies the existing base image by mounting it, updating it and installing packages
16:00:46 jskladan: we no longer do dnf cache voodoo, I believe
16:01:19 I guess it uses libguestfs to do that
16:01:34 so no arbitrary command, but some selected useful things
16:01:39 I'm still trying to understand the mechanism through which you can change individual files
16:01:58 it's there but I'm having trouble understanding exactly how it's invoked
16:02:39 the first time you create the image, does it use anaconda or just install into a chroot?
16:02:47 and if all else fails, we can write elements to do what we need
16:03:03 it starts with the released image
16:03:50 ok, then it doesn't really answer the question, but we might not care. we simply always update the released one
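(A hedged sketch of how that one-command build could be wrapped for automation, mirroring the disk-image-create invocation quoted above; it assumes diskimage-builder is installed and that the output lands in <name>.qcow2, which would need checking.)

    import os
    import subprocess
    from datetime import date

    def build_taskotron_image(release="26"):
        """Build an updated Fedora cloud image with libtaskotron-fedora preinstalled."""
        name = "taskotron-fedora-{}-{}".format(release, date.today().strftime("%Y%m%d"))
        env = dict(os.environ, DIB_RELEASE=release)
        subprocess.run(
            ["disk-image-create", "-o", name,
             "-p", "libtaskotron-fedora",   # extra package, as in the quoted command
             "fedora", "vm"],               # diskimage-builder elements
            env=env,
            check=True,
        )
        return name + ".qcow2"              # assumed default output format/extension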
16:04:03 I haven't tried it yet but we should be able to do rawhide with no real customization by downloading the image first and pointing diskimage-builder at the downloaded image instead of relying on known urls for released images
16:04:09 anyway, sounds interesting, and could make our image building process a lot more reliable
16:04:33 kparal: I think it does answer the question. anaconda is not involved because it starts from a released base cloud image
16:04:37 kparal: well, since we would not be building them :)
16:04:41 so there's no anaconda invoked
16:05:07 tflink: oh, so it can't create an image from scratch, and always needs one as a base. ok, that answers the question, yes
16:05:26 good enough for us, I think
16:05:37 yeah, that's one of the things that appeals to me about this method - we use the images that infra produces instead of trying to replicate their build system
16:05:40 tflink: that is as close to "we are not building the images" as we could get, IMO
16:05:43 unless we want to change the filesystem, or something
16:05:55 that was going to be my question
16:05:58 the downside is that the diskimage-builder package in fedora is out of date
16:05:59 hm, what about partition sizes?
16:06:14 kparal: that's something that can be customized
16:06:21 ok
16:06:40 https://docs.openstack.org/diskimage-builder/latest/user_guide/building_an_image.html#disk-image-layout
16:06:59 I know that there was talk of updating diskimage-builder but I don't think that has happened yet
16:07:27 I've been doing testing with diskimage-builder from pip
16:08:16 we'd still need to have some code/automation around building new images, uploading them to openstack and purging old images
16:08:27 but I don't see that as a deal breaker
16:08:33 tflink: neither do I
16:08:47 as it would be a huge improvement over the current setup
16:09:37 yep
16:09:40 it depends on the APIs, but the basic housekeeping (the algorithm) would IMO be done the same way we do it now
16:10:08 not that most of the code could be reused, but at least we have a known system in place :)
16:10:39 I would expect that the openstack apis for image management are stable and relatively well documented
16:11:06 honestly, I'd like to see that move to a task that's run in response to 'repo push complete' fedmsgs
16:11:56 or compose complete in the case of branched or rawhide
16:12:34 to circle back, the big TODOs in this that I see are:
16:13:05 1. make sure that the "specify a downloaded image" bit works the way I think it will WRT not-yet-released images (rawhide, branched)
16:13:38 2. figure out which, if any, image customization we still need to figure out
16:14:18 3. get the diskimage-builder package updated and figure out who's going to own that going forward.
16:14:19 can we use openstack with our current imagefactory-built images? or is this a requirement for openstack?
16:14:45 we could use the current images that we build but they'd still need to be uploaded to openstack
16:15:01 sure. ok
16:15:05 the code we have for syncing images and pruning old images would likely not work
16:15:23 unless jskladan wants to delve into the bits of imagefactory that upload stuff to openstack :)
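(A rough idea of the upload-and-prune housekeeping mentioned above, again sketched against the openstacksdk cloud layer; the retention count and cloud name are placeholders, and the calls would need verifying against the deployed openstack version.)

    import openstack  # openstacksdk

    def upload_and_prune(image_file, name, prefix, keep=2, cloud="fedora-infra-cloud"):
        """Upload a freshly built qcow2 to glance, then delete all but the
        newest `keep` images sharing the same naming prefix."""
        conn = openstack.connect(cloud=cloud)
        conn.create_image(
            name,
            filename=image_file,
            disk_format="qcow2",
            container_format="bare",
            wait=True,
        )
        # names end in YYYYMMDD, so a reverse string sort puts the newest first
        candidates = sorted(
            (img for img in conn.list_images() if img.name.startswith(prefix)),
            key=lambda img: img.name,
            reverse=True,
        )
        for old in candidates[keep:]:
            conn.delete_image(old.id)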
16:15:30 tflink: kparal: and possibly changed, no? since we disable cloud-init
16:15:42 we do?
16:15:57 we configure it to not connect to online servers
16:16:01 ah
16:16:08 tflink: we did, IMO, make it "all but disabled"
16:16:09 but that might be done by testcloud, not us
16:16:14 since it was causing all kinds of trouble
16:16:16 I don't think that would be needed as much
16:16:30 kparal: nope, it is in the KS
16:16:31 since cloud-init would be partially relying on openstack
16:16:56 actually, I think that would have to go away
16:20:46 but that's a small change, just remember that testcloud was naughty when images were trying to connect to the external server
16:20:46 yeah, cloud-init is loved by all
16:20:46 :)
16:20:46 (also, testcloud should finally die)
16:20:46 jskladan: why's that?
16:20:46 💗
16:20:47 (that was for cloud-init)
16:20:47 I just have an animosity towards it :) always seemed to cause more trouble than anything else for me
16:20:47 jskladan: you're welcome to start using cloud images locally without it
16:20:47 but I'm pretty sure we were talking about keeping it around to facilitate VM spawning in the local case for libtaskotron
16:20:48 tflink: well, that's the thing - the causality was IIRC "we are using cloud images, because we already have testcloud", not "we want to use cloud images for local vm execution, if only we had something like testcloud" :)
16:21:16 whatevs, as long as at least one of imagefactory/testcloud dies in fire, I'm celebrating :)
16:21:30 I thought it was more "we want to use the cloud base images so we're going to use testcloud because that makes the most sense"
16:22:26 if cloud-init was not required for openstack images, we might find easier ways to run the images
16:22:27 as I'm not sure there are any alternatives to the cloud images unless we want to build our own stuff
16:23:07 or we can finally patch up testcloud to not require initial user configuration, which is the biggest hassle here
16:23:09 or use vagrant or something like that
16:23:24 kparal: I'm not sure how that would work
16:23:36 there is no user present in the base cloud image
16:24:16 tflink: I don't really care that much - if testcloud stays for users outside our deployment but we don't use it, I consider it a win
16:24:29 we wouldn't be using it in production, no
16:24:35 let's leave this discussion for another time
16:24:47 but we would still need to be testing it some to make sure it continues to work
16:25:02 but I suspect that would happen as a consequence of development
16:25:39 tflink: yeah, but if it stops, it does not need to bother us as much as if we were dependent on it for our deployment
16:25:49 true
16:25:59 if there are small lags in boot time, it doesn't matter so much
16:26:09 and there are easy workarounds for the "local" user
16:26:16 any other questions/comments/concerns here?
16:26:36 nope, loving the "let's get rid of imagefactory/testcloud in production" movement :)
16:27:34 #info no immediate objections to exploring switching over to diskimage-builder to create images
16:27:38 sound OK?
16:27:48 👍
16:28:01 ack
16:28:10 cool
16:28:22 yikes, already 1.5 hours
16:28:41 #topic initial plan of action
16:29:16 who all has time to work on the various parts of this?
16:29:30 time/interest
16:29:32 I do
16:29:52 what are our plans regarding ansiblize?
16:29:55 but, I'd also like to see the ansiblize branch deployed for testing :) so... I might have a clash of interests
16:29:58 I see dismantling stg as being one of the more urgent bits
16:30:04 fair enough
16:30:32 btw, we might get some resources back if we finally kill phab
16:30:47 does it make sense to get ansiblize deployed to dev at least before moving on to the other parts of this?
16:31:06 kparal: I'm not sure where you're getting that level of resource scarcity from
16:31:13 not that I don't need to kill off phab
16:31:24 what is blocking deploying ansiblize?
16:31:32 upgrading to F26+?
16:31:35 tflink: would make sense to me (deploying ansiblize to dev first)
16:31:53 do we have any tasks left to port?
16:32:10 sure, but that doesn't matter. we need some POC running
16:32:10 but yeah, redeployment is one of those things
16:32:22 oh, I was thinking of production
16:32:34 I was talking about dev
16:32:45 I was under the impression that the ball was in your court regarding ansiblize dev :)
16:33:00 I'm not aware of anything that's really blocking deployment on dev other than the selinux-fixing home dir moving thing that jskladan was/is working on
16:33:29 tflink: for lack of a better place for it, I sent a review request via email
16:33:30 I'll be happy to continue polishing ansiblize and porting tests, but I'd really appreciate seeing something being run in dev
16:33:38 jskladan: did I miss it, then?
16:33:51 kparal: yeah, agreed that needs to happen
16:34:01 it was just today, so you might have (had it done since Wednesday, but.. life)
16:34:55 https://lists.fedoraproject.org/archives/list/qa-devel@lists.fedoraproject.org/thread/ADQQOZ5XZJN7I3HBHAYXONVTQMZFWROK/
16:34:59 oh
16:35:07 * tflink hasn't gotten to checking lists yet today
16:35:31 so, for me, I'd really like to see ansiblize deployed to dev before tackling stg
16:35:38 kparal: +1
16:35:58 but note that dev is on F25 atm
16:36:00 so long as we get both done before openqa loses its old workers, fine by me
16:36:16 kparal: I got stuck on the client host and never got to the master or resultsdb
16:36:22 the client-host is F26
16:36:52 frantisekz was mentioning he would be interested in helping out with the upgrade/deployment stuff
16:37:06 I'm certainly not going to turn down help
16:37:23 he's going through the RH ansible course to learn ansible
16:37:38 so, just a note
16:38:00 he's not in the sysadmin group yet, though
16:38:07 that's pretty easy to fix
16:39:07 it seems reasonable to me to shoot for dev redeployment this week
16:39:18 well, it would if it wasn't a holiday week here
16:39:43 * jskladan is off on Friday, available on Thu, if needed
16:40:07 Thu and Fri are US holidays
16:40:23 and I expect a bunch of folk to be on PTO this week
16:40:49 how about next Monday, then?
16:41:03 as a backup, sure
16:41:13 I still think that this week is possible if nothing goes wrong
16:41:21 so ... next week :)
16:41:36 depends on you
16:41:46 I just won't be available overnight on Wed
16:42:00 frantisekz might be free to help, though
16:42:28 I have PTO on Wed and Fri
16:42:31 yep, you can count on me throughout the entire week :)
16:42:46 would tomorrow work for doing the deployment?
16:43:01 frantisekz: ^^
16:43:03 after the team meeting
16:43:07 yeah, sure
16:43:12 he'll probably need to set up the infra tokens and everything
16:43:13 assuming that doesn't get canceled
16:43:53 frantisekz: do you have a few minutes after this meeting to go over how to get signed up?
16:44:22 tflink: a few minutes should be fine
16:44:25 so, summarizing in the interest of not going past 2 hours:
16:45:10 the two immediate priorities are getting dev ansible-ized and dismantling stg
16:45:38 tflink: ack
16:45:51 ansible-izing mostly needs a redeployment of dev and will hopefully be done this week, or early next week at the latest if things go wrong
16:46:43 dismantling stg mostly needs a talk with folks who interface with resultsdb to figure out what, if any, fake data will need to be fed into the stg instance
16:47:38 jskladan: are you wanting/planning to help with the redeployment?
16:48:05 not really wanting, but I could be present, if help is needed
16:48:25 but from what I remember, it is mostly waiting :)
16:48:48 jskladan: what about finding the resultsdb-using folk and starting to figure out what will be needed there?
16:48:54 dev is less waiting, usually
16:49:14 more poking at stuff and figuring out why something doesn't work anymore :)
16:50:06 I could be there, then, no problem
16:50:44 jskladan: does that mean you're planning to help with the deployment or talking with the folks who use resultsdb?
16:51:14 ad resultsdb-wanting folk - do we have a list? If not, I could ping pingou to find out his side, and hopefully get the list of "others" from him
16:51:34 pingou and threebean would be good to start with
16:51:39 tflink: I can stay at work, and help with the redeployment
16:51:42 tomorrow
16:52:07 ok. hopefully it will be relatively smooth :)
16:52:59 are there remaining questions about what to work on for the next week?
16:53:55 none here
16:53:58 I expect things will be more organized-looking by next week if dev is redeployed and working, so testing and polish work on tasks can keep moving
16:55:06 since we're getting close to the 2 hour mark ...
16:55:10 #topic Open Floor
16:55:18 assuming there were no questions left about the previous topic
16:55:31 nothing from me
16:55:32 any additional topics that folks wanted to bring up?
16:55:55 * jskladan has nothing
16:56:17 ok. thanks for coming and putting up with the long meeting today
16:56:23 * tflink will send out minutes shortly
16:56:29 #endmeeting