14:02:19 #startmeeting Cockpit weekly meeting 160222
14:02:19 Meeting started Mon Feb 22 14:02:19 2016 UTC. The chair is andreasn. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:19 Useful Commands: #action #agreed #halp #info #idea #link #topic.
14:02:19 The meeting name has been set to 'cockpit_weekly_meeting_160222'
14:02:53 #topic agenda
14:02:59 .hello mvo
14:03:00 .hello dperpeet
14:03:00 mvollmer: mvo 'Marius Vollmer'
14:03:03 dperpeet: dperpeet 'None'
14:03:05 .hello stefw
14:03:07 * metric tests
14:03:09 stefw: stefw 'Stef Walter'
14:03:09 .hello andreasn
14:03:11 andreasn: andreasn 'Andreas Nilsson'
14:03:16 * storaged / udisks2
14:04:27 anything else?
14:04:35 #topic Metric tests
14:05:07 alright
14:05:17 I made the first careful step
14:05:29 https://github.com/cockpit-project/cockpit/pull/3786
14:05:44 mostly to see what you think about the approach
14:06:01 the tests are quite lax because it's hard to say exactly what the metrics should be
14:06:18 like, we can't get 100% CPU load during the tests
14:06:29 it goes down to 80% sometimes
14:06:51 makes sense, i think you have a clear goal, no?
14:06:59 well...
14:07:20 ideally I'd like to catch things like block devices getting counted twice
14:07:43 so we would need to know what the numbers should be
14:07:49 maybe reading them out from dd
14:07:57 or measuring them independently
14:08:07 I am not sure how to approach that
14:08:44 maybe just checking consistency between the summary plot and the detailed plot
14:08:47 for storage
14:08:55 it would have caught this
14:09:20 now we check for consistency between pcp and internal samplers
14:09:23 which is something
14:09:34 and I found a bug already
14:09:36 nice
14:09:59 so what the tests do
14:10:18 right, so if it's not testing the actual metrics counters, performance impact etc., i think this is a good approach
14:10:19 is to give access to the data that the plots are drawn from
14:10:33 so the actual curve, but as numbers in an array
14:11:00 and then they check whether there is a plateau in it, for example 50% - 105% cpu load for 15 seconds
14:11:31 after the machine was loaded for 20 seconds
14:12:35 our internal samplers are used in the field, but never get tested by us, I guess
14:12:46 so this might be important to test already
14:13:09 of course, ideally we would get rid of them
14:14:03 anything else on that subject?
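The plateau check described above (the plot's underlying curve exposed as an array of numbers, then scanned for a sustained band of values) could look roughly like the following sketch. The function name, the one-sample-per-second assumption, and the API shape are illustrative, not Cockpit's actual test helpers:

```python
def has_plateau(samples, low, high, duration, interval=1.0):
    """Return True if the sampled curve stays within [low, high] for at
    least `duration` seconds, assuming one sample every `interval` seconds.

    Hypothetical sketch, not the real Cockpit test code.
    """
    needed = int(duration / interval)
    run = 0
    for value in samples:
        if low <= value <= high:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0  # plateau interrupted; start counting again
    return False
```

With lax bounds as discussed, `has_plateau(cpu_samples, 50, 105, 15)` would accept a CPU curve that merely holds somewhere between 50% and 105% for 15 seconds, tolerating load dips to 80%.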
14:14:16 not from me
14:14:23 #topic storaged / udisks2
14:14:25 sounds really good to get the metrics stuff more tested
14:14:29 yeah
14:14:48 let's hope it's not doing more damage than good
14:14:52 okay
14:15:14 so phatina has made an experimental version of storaged that uses UDisks2 names in the D-Bus API
14:15:22 and actually everywhere else
14:15:36 so it's a complete replacement for UDisks2
14:15:49 he has tested it against existing UDisks2 clients and all looks well
14:16:04 wow, good news
14:16:13 I am right now changing pkg/storaged to use those names
14:16:24 https://copr.fedorainfracloud.org/coprs/phatina/storaged-udisks2-dropin/
14:16:29 manual testing looks good, I'll run the integration tests next
14:16:31 and then report
14:16:47 does this work include working with the real udisks as well?
14:16:47 https://github.com/cockpit-project/cockpit/pull/3795
14:16:48 cool
14:16:54 stefw, no
14:17:02 i haven't planned that
14:17:18 but I think we will be quite close to that
14:17:37 we will lose features of course
14:17:38 nice
14:17:53 lvm2, iscsi, that's obvious
14:18:03 but also things like cleaning up of fstab
14:18:09 which might be dangerous
14:18:17 or hopefully only annoying
14:18:41 our tests should tell us pretty exactly
14:19:08 phatina, do you want to comment?
14:19:46 the idea is to propose this for f24, right?
14:20:00 this = replace udisks2 with storaged completely
14:20:05 mvollmer: actually, everything has already been said: yes, I have prepared another branch of Storaged, which replaces UDisks2 and extends it with all the plugins.... There is also the copr repo for testing
14:20:35 mvollmer: I think we are late for this change (feature freeze was several days ago, right?)
14:20:49 (i don't know)
14:21:13 perhaps it could "conflict" with udisks for a version before replacing it?
14:21:14 mvollmer: https://fedoraproject.org/wiki/Releases/24/Schedule
14:22:33 phatina, is the udisks2 upstream in the loop?
14:22:42 i think that's martin pitt.
14:22:43 mvollmer: no, it's not
14:22:48 mvollmer: yes, martin
14:23:32 okay
14:23:36 mvollmer: we are in the testing phase (Ondrej Holy agrees to have a look at this replacement; and test it with gvfs, disks, ...)
14:23:56 could it be called UDisks3? :-)
14:26:06 anything else on that?
14:26:14 nope
14:26:27 #topic open floor
14:26:48 ansible for our verify machines?
14:27:10 or something like it, not that I know what I am talking about
14:27:45 yes, i think we may want to have something like that
14:27:55 both the Fedora guys and CentOS guys have asked about it
14:28:09 i see
14:28:10 the Fedora guys would like to be able to reprovision the Openstack instances we have with something like ansible
14:28:16 and the CentOS guys want to start giving us hardware to run testing
14:28:32 but want to be able to provision it when tasks are available ... and bring it down a few hours after it's idle
14:29:13 it seems like alivigni may have already gotten started on that
14:29:22 and built an ansible script for provisioning all the requirements
14:30:05 stefw: Yes, I have a provisioner that sets up resources in Openstack easily
14:30:13 nice
14:30:28 It sets up the slave with all requirements by executing a playbook
14:30:50 cool, something we may want to upstream into our cockpituous repo
14:30:55 and we can drop the docker image stuff we do now
14:30:57 and just use ansible?
14:31:01 stefw: I am working on adding a central mount point for images to the playbook
14:31:13 ah right, that's important
14:31:18 how long does the playbook take to run?
14:31:53 Yes, we could just use ansible. I have changes to make in my provisioner to call ansible directly; we used some other projects, but that shouldn't be too hard to change
14:32:06 The playbook probably takes about a minute
14:32:16 sounds pretty good
14:32:39 Fedora has also offered us a way to mount secrets into the provisioned machines
14:32:48 i guess you didn't run into that requirement alivigni, right?
14:32:49 stefw: I could also throw something together pretty quick to provision using straight ansible and we can use that in cockpituous
14:32:55 because you're not sending the logs anywhere, or updating github status, etc.
14:33:41 stefw: we use Jenkins for that; it can mask all credentials used for github or fedora updating
14:34:23 okay, so that'll be something to make sure we handle correctly for both cases
14:34:24 stefw: we currently can send logs that have been created on any Jenkins for PRs; we could also do this on a fedora people page as well
14:34:43 well it really depends on the use case for why the tests are being run
14:34:57 stefw: understood
14:35:18 I think in the PR case it can just go to github; for release testing I would want this on fedora people
14:35:38 right
14:36:24 stefw: We do this well for openshift-ansible and update github PRs with an Amazon S3 link to the logs to look for issues
14:36:46 cool
14:37:09 so when you think the ansible playbook is ready for more folks to play with
14:37:22 i'd provision a Fedora openstack node with it
14:37:35 and get the playbook upstream
14:39:02 stefw: The provisioner I wrote does that part; the playbook configures the slave in our case. I will put something together that can run ansible to do both, then anyone can use it as they wish
14:39:53 nice
14:44:02 anything else on that?
14:44:16 or are we done with today's meeting?
14:46:23 i think that's it
14:46:38 #endmeeting
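The "just use ansible" flow discussed in the open floor (a provisioner that drives a playbook to configure a test slave) could be wrapped by something as small as the following sketch. The playbook and inventory names are purely illustrative; this is not alivigni's provisioner or the cockpituous tooling, just a minimal way to invoke `ansible-playbook` from Python:

```python
import subprocess

def build_command(playbook, inventory, extra_vars=None):
    """Assemble an ansible-playbook command line (hypothetical helper)."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]
    return cmd

def provision(playbook, inventory, extra_vars=None):
    """Run the playbook against the inventory; raises CalledProcessError
    on failure. Paths such as "slave.yml" are assumptions for illustration."""
    return subprocess.run(build_command(playbook, inventory, extra_vars),
                          check=True)
```

For example, `provision("slave.yml", "openstack-hosts")` would run a playbook that configures a test slave, roughly matching the "playbook takes about a minute" setup described above.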