14:04:27 #startmeeting
14:04:27 Meeting started Mon Mar 30 14:04:27 2015 UTC. The chair is tflink. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:27 Useful Commands: #action #agreed #halp #info #idea #link #topic.
14:04:27 #meetingname qadevel
14:04:27 #topic Roll Call
14:04:27 The meeting name has been set to 'qadevel'
14:04:34 #chair mkrizek kparal
14:04:34 Current chairs: kparal mkrizek tflink
14:04:38 * mkrizek is here
14:04:49 I remembered to #chair this time :)
14:04:53 * kparal here
14:05:32 who wants to go first for status?
14:05:41 I'll go
14:05:49 #topic mkrizek status report
14:05:56 #info finished research on paramiko
14:06:01 #link https://phab.qadevel.cloud.fedoraproject.org/T436
14:06:07 #info infra playbook fixes
14:06:12 #info found out that the taskotron-dev playbook was run deploying execdb bits on the buildmaster, making the dev machine stop executing jobs
14:06:23 questions?
14:06:44 I suppose that makes getting execdb built so it can be deployed that much more important
14:06:59 yep :)
14:07:16 I can try building it
14:07:23 * tflink knew it was causing problems with the playbook, didn't realize it was causing problems with task execution
14:07:31 but now that you mention it, that makes sense
14:07:49 mkrizek: where's the initial implementation of the paramiko wrapper you mention in T436?
14:07:53 the buildmaster tries to create the job in execdb
14:08:11 kparal: on my laptop :P
14:08:16 ok
14:08:42 I thought I missed something
14:08:57 * danofsatx is here if you need him
14:09:46 anything else? otherwise that's about it from me
14:09:57 i don't think so
14:10:06 nothing from me, anyways
14:10:24 thanks mkrizek
14:10:33 I'll go
14:10:44 #topic kparal status report
14:10:51 #info looking at testCloud and other tools for VM boot and customization (cloud-init, imagefactory, oz, virt-builder)
14:10:56 #link https://phab.qadevel.cloud.fedoraproject.org/T407
14:11:05 I really like virt-builder at the moment, because it seems it could provide us with the exact images we would like to use (configured for our needs), it should be easy to generate updated images very often, and we could have several different images for different use cases (a minimal one, one with preinstalled GNOME, etc.). But I'm still exploring how exactly it works; it seems to boot the image in qemu for certain configuration steps, so this approach might not be easily usable inside a virtualized environment without extra configuration. It depends on whether we would run this on bare metal or in a VM. We could also try to make this into a taskotron task, of course, but it's more work, so probably not initially.
14:11:54 there are some bugs that need to be fixed in virt-builder as well
14:11:56 i think it makes sense to eventually run it as a task
14:12:01 but I managed to work around them for the moment
14:12:10 but yeah, not sure it's required to start with
14:12:12 tflink: I agree, eventually
14:12:23 kparal: is there a list of the stuff that needs to be fixed?
14:12:59 tflink: not at the moment. if we decide to go with virt-builder, I'll start filing bugs against the project. I haven't found a showstopper yet
14:13:14 one question, though
14:13:21 I forget, can virt-builder update images?
14:13:48 tflink: it can create them anew with updated packages, which should be pretty much the same
14:13:56 nvm, I was thinking of something different
14:15:05 Q: does it make sense to think about having multiple VM images for different use cases? e.g. a minimal one for overhead, but a GNOME one for GNOME-related tasks? otherwise we would have to install GNOME packages every time we run this task, and that's a lot of packages
14:15:21 or am I overthinking this?
14:15:42 *a minimal one for minimal overhead
14:15:55 I think it makes sense to use different images for some different things
14:15:59 I guess that would make sense eventually, but I'd start simple
14:16:11 but I'm not sure it'll make sense to use cloud-style images for gnome testing
14:16:31 I suspect that the workstation folks would want to see the live image used
14:16:42 but that'd require livecd support
14:17:04 ok, it's just whether I should take this into account when exploring tools and their possibilities, i.e. whether their ability to create/customize different images with different contents is desired
14:17:44 i don't think it would hurt to keep it in mind when exploring tools
14:18:05 tflink: we would have to install it onto disk anyway; the livecds quickly run out of memory if you start touching the disk
14:18:27 just because live images seem to make more sense right now doesn't mean it'll definitely happen
14:18:48 kparal: hopefully we wouldn't be installing much on top of the livecds, but that's a good point
14:19:00 ok, thanks for the answers
14:19:12 that's it from me
14:19:16 k, thanks
14:19:36 #topic tflink status report
14:19:48 not a spectacularly productive week for me :-/
14:19:59 #info some more work on digging into depcheck
14:19:59 #info looking into a script to reproduce depcheck failures
14:20:17 #info some debugging and new builds for ansible_utils (new rbac-playbook)
14:20:46 tflink: one of the failures is here, just in case: https://admin.fedoraproject.org/updates/FEDORA-2015-4896/mate-control-center-1.8.3-2.fc21,mate-desktop-1.8.2-2.fc21
14:21:03 I did find out that old mash data is archived for a bit
14:21:23 so that'll help with reproducing
14:21:23 kparal: thanks
14:22:01 tflink: is this the archive you talked about, or is there some place where you can find even older composes? http://koji.fedoraproject.org/mash/
14:22:19 that's just branched and rawhide
14:22:52 and updates repos for stable releases
14:23:05 yeah, I spoke too quickly
14:23:10 that's what I was talking about
14:23:19 ok, thanks
14:23:52 but it's kept at least a week, so at least we aren't required to grab repodata for reproducers the day a failure happens
14:24:23 good news, didn't occur to me before
14:25:22 with that archive, I'm thinking about writing a script to reproduce runs using the taskotron.log from failed runs - that lists all the rpms involved in a failure and gives the date that the task was run (to get repodata)
14:25:28 but that's about all from me
14:25:37 any other comments/questions?
14:25:52 not here
14:25:55 nope
14:26:00 ok, moving on
14:26:03 #topic planning
14:26:25 one thing that I'd like to get done is to rebuild stg with f21, keeping history this time
14:26:35 which we didn't do when rebuilding dev
14:26:53 I think beta freeze starts tomorrow, so I'd like to get that done today
14:27:06 since rebuilding would require freeze break exceptions otherwise
14:27:32 FYI, I think the dev playbooks are still not completing...
14:27:38 any objections or concerns with that?
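[A minimal sketch of the reproduction-script idea tflink describes at 14:25:22, for reference. It assumes a taskotron.log that contains the involved NVRs and the run date, plus the koji mash archive mentioned above; the log format, the archive layout, and all the names below are assumptions, not existing tooling:]

    #!/usr/bin/env python
    # Hypothetical sketch: pull NVRs and the run date out of a taskotron.log so a
    # depcheck failure can be reproduced against the archived mash repodata.
    # The log format and the archive layout are guesses, not the real thing.

    import re
    import sys

    MASH_ARCHIVE = 'http://koji.fedoraproject.org/mash/'  # archive mentioned in the meeting

    NVR_RE = re.compile(r'\b(\S+-\d[^-\s]*-\d[^-\s]*\.fc\d+)\b')  # rough NVR pattern
    DATE_RE = re.compile(r'\b(\d{4}-\d{2}-\d{2})\b')              # first date found = run date (assumption)

    def parse_log(path):
        """Return (run_date, sorted NVRs) found in the given taskotron.log."""
        nvrs = set()
        run_date = None
        with open(path) as logfile:
            for line in logfile:
                if run_date is None:
                    match = DATE_RE.search(line)
                    if match:
                        run_date = match.group(1)
                nvrs.update(NVR_RE.findall(line))
        return run_date, sorted(nvrs)

    if __name__ == '__main__':
        date, builds = parse_log(sys.argv[1])
        print('run date: %s' % date)
        print('repodata to fetch (layout is a guess): %sbranched-%s/' % (MASH_ARCHIVE, date))
        for nvr in builds:
            print('  %s' % nvr)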
14:27:49 nirik: I was about to get to that :)
14:28:04 no objections
14:28:10 no objections from me either :)
14:28:21 execdb still isn't built as an rpm, which is why I think the dev playbook is failing
14:28:30 yes
14:28:40 we can add execdb into stg later - that won't require a freeze break since it would only affect stg systems
14:28:53 rebuilding requires host key changes that affect all infra systems, though
14:29:04 yeah, we should detangle that sometime. ;(
14:29:33 #info will be rebuilding taskotron-stg with f21 today (before beta freeze starts)
14:30:18 a hopefully minor task is wrapping up the execdb stuff - building the rpm, getting it into the infra-testing repo and deployed on dev so that it starts functioning
14:30:49 I think that some more playbook changes are going to be required as well
14:31:07 the taskotron-proxy role doesn't handle the frontend for execdb
14:32:35 I'll get tickets filed for that if anyone wants to poke at it
14:32:48 but getting dev functioning is more important :)
14:33:25 * tflink also wants to move the execdb repo from josef's user to fedoraqa so it matches the rest of the projects, but that isn't incredibly urgent
14:33:46 any other questions or thoughts on execdb?
14:33:56 even though josef is not here today
14:34:11 * kparal agrees to moving the repo
14:35:21 * tflink assumes nothing more for execdb
14:35:40 not here
14:35:50 #info execdb to be built as an rpm; some more playbook changes are likely to be needed to complete deployment in dev
14:36:17 #info different playbook changes will be needed for deployment in stg/prod due to the different proxy setup
14:36:43 moving on to disposable clients
14:37:19 mkrizek created a new remote tracking branch for the disposable client work last week, but it hasn't diverged from develop yet
14:37:38 * tflink has been thinking about how to best approach moving forward with code and came up with a couple of approaches
14:38:16 1) submit most of the PoC code for review as a starting point, knowing that it'll need to be changed before it's complete
14:38:45 2) come up with a skeleton for the functionality so that folks can start working on different bits
14:39:01 unless there are other ideas on how to start moving forward
14:39:53 sounds reasonable
14:40:18 any thoughts on which approach is better?
14:40:32 keeping in mind that I don't remember if the test suite passes on the PoC
14:40:34 2) seems cleaner
14:41:03 aha, I thought about having both of them :)
14:41:26 kparal: how would both work?
14:42:29 the skeleton is probably cleaner and easier to look at
14:42:43 provided we have the full picture and can write it
14:43:05 kparal: not sure I understand what you mean by "full picture and can write it"
14:43:32 whether we know how exactly the different pieces fit together
14:44:12 most of it makes sense in my head - not sure if it's been communicated well enough, though
14:44:24 you all would know that better than I would, though :)
14:44:31 maybe I'm just confused a bit, don't mind me and go on, I'll catch up :)
14:46:02 anyway I vote for 2) :)
14:46:12 it sounds like 2) is the winner
14:46:28 * tflink agrees that it's a cleaner start - there's a reason he did the PoC in a different repo
14:46:49 the next question is who works on the skeleton
14:47:12 I suspect that rebuilding stg will keep me busy for most of the day
14:47:45 so it'd be tuesday/wednesday before I could have anything ready for review
14:48:11 unless someone else wants to work on the skeleton
14:48:45 not sure if we have that much insight to work on that
14:49:03 I still have plenty to do. are we in a hurry about this?
14:49:31 i think that mkrizek is mostly blocked until it's done
14:50:36 I can work on infra tickets until then
14:50:41 and I'm not sure how much jskladan has left in his queue without the skeleton, either, but that's a question for him :)
14:51:34 tflink: if you're willing to create that skeleton, that would be great. I'd have to re-read a lot of docs to get back into the picture
14:51:55 ok, I'll work on that as soon as stg is redeployed
14:51:59 since that has to be done today
14:52:27 I'd rather not wait until after beta, so that we have more time to make sure everything's stable
14:53:43 #action tflink to work on disposable client skeleton after stg is redeployed as f21
14:54:08 I think that's about it for planning from me
14:54:12 any other topics?
14:54:25 I have one
14:54:37 do we want to proceed with fedmsg integration?
14:54:52 it's blocked on depcheck
14:55:51 you mean making depcheck report per-build as well as per-update, right?
14:55:55 yes
14:56:42 I have no idea how hard it might be, but I can ask jskladan to have a look at it tomorrow, if he's no longer sick
14:56:49 he was looking for a new ticket, I think
14:57:10 sounds good
14:57:24 does it matter if we proceed with fedmsg integration and have just upgradepath reporting for the moment?
14:57:52 did we finalize a fedmsg schema for results?
14:58:05 afaik yes
14:58:11 * tflink remembers talking about it, doesn't remember the conclusion off the top of his head
14:58:30 on @qadevel
14:59:05 * tflink reads
15:01:46 yeah, the schema was mostly decided on list (org.fedoraproject.taskotron.result)
15:02:05 a ticket for depcheck exists, not taken yet
15:02:07 https://phab.qadevel.cloud.fedoraproject.org/T388
15:03:10 er, the ticket to get depcheck reporting a list of related builds
15:05:08 i think it'd be worth running through the entire plan before we deploy to stg (not sure we can do this in dev since there's no dev fedmsg AFAIK)
15:05:42 I discussed initial steps with ralph some time ago
15:05:54 have most of them ready
15:06:25 yeah, just thinking it would be worth having everything in one place to make sure that everyone's on the same page before we start deploying stuff
15:06:39 * mkrizek agrees
15:08:06 any other thoughts on this?
15:08:16 or questions/concerns that haven't been covered?
15:08:29 * kparal has none
15:08:38 nothing here
15:08:43 #topic Open Floor
15:08:49 anything for open floor?
15:09:02 nope
15:09:06 no
15:09:18 almost an hour :)
15:09:20 I think I got things working for testCloud
15:09:29 roshi: oh, great news
15:09:32 the issues I was running into ended up being https://bugzilla.redhat.com/show_bug.cgi?id=1159823
15:09:36 roshi: got a new laptop? :-P
15:09:41 tldr: nvidia blobs
15:09:42 heh
15:10:14 so if you're not using nvidia blobs (which I use to drive HDMI) it should work fine ootb now
15:10:44 it was a long, annoying yak shave. But at least the yak is finally shaved
15:10:53 while we're on the topic - you finalized 'testcloud' as the name for the project, right?
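[For reference on the fedmsg discussion above: a minimal sketch of what result publication under the schema agreed on-list (org.fedoraproject.taskotron.result) might look like. The fedmsg.publish() call is the real API; the function name and the message fields are illustrative assumptions, not the finalized schema:]

    # Sketch of emitting a Taskotron result on the bus. fedmsg composes the full
    # topic as org.fedoraproject.<env>.<modname>.<topic>, so modname='taskotron'
    # and topic='result' roughly match the schema discussed on the list.

    import fedmsg

    def publish_result(item, checkname, outcome, note='', log_url=None):
        """Publish a single check result (field names are assumptions)."""
        fedmsg.publish(
            modname='taskotron',
            topic='result',
            msg={
                'item': item,            # e.g. an NVR or an update id
                'checkname': checkname,  # e.g. 'depcheck' or 'upgradepath'
                'outcome': outcome,      # e.g. 'PASSED' / 'FAILED'
                'note': note,
                'log_url': log_url,
            },
        )

    # e.g. publish_result('mate-desktop-1.8.2-2.fc21', 'depcheck', 'FAILED')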
15:11:05 yeah, it turns out legal had no issues with it
15:11:14 ie, approved by legal and no plans to change the name anytime in the near future
15:11:17 so figure I'll just stick with it
15:12:31 that's all I had, just wanted to update people
15:12:40 since I knew kparal was poking at things
15:12:53 thanks for info
15:12:55 #info testcloud should be working out of the box now
15:13:01 I'll update my git
15:14:05 if there's nothing else, I'll close the meeting
15:14:33 thanks for leading, tflink
15:14:42 thanks for coming, everyone
15:14:48 * tflink will send out minutes shortly
15:14:52 #endmeeting
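[Appendix, for reference on the paramiko wrapper mentioned in mkrizek's status (T436): the initial implementation wasn't public at meeting time ("on my laptop"), so below is only a minimal sketch of what such a wrapper might look like. The class and method names are hypothetical; the paramiko calls themselves are standard:]

    # Hypothetical thin wrapper around paramiko for running commands on a
    # disposable client VM, in the spirit of the T436 research.

    import paramiko

    class SSHClient(object):
        """Run commands on a remote machine over SSH."""

        def __init__(self, hostname, username='root', key_filename=None, port=22):
            self.client = paramiko.SSHClient()
            # accept unknown host keys; throwaway VMs get new keys on every boot
            self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            self.client.connect(hostname, port=port, username=username,
                                key_filename=key_filename)

        def run(self, command):
            """Run a command, return (exit_code, stdout, stderr)."""
            stdin, stdout, stderr = self.client.exec_command(command)
            retcode = stdout.channel.recv_exit_status()  # blocks until the command exits
            return retcode, stdout.read(), stderr.read()

        def close(self):
            self.client.close()

    # e.g.:
    # ssh = SSHClient('192.168.122.50', key_filename='/path/to/key')
    # code, out, err = ssh.run('rpm -q libtaskotron')
    # ssh.close()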