13:05:18 #startmeeting
13:05:18 Meeting started Mon Aug 3 13:05:18 2015 UTC. The chair is andreasn. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:05:18 Useful Commands: #action #agreed #halp #info #idea #link #topic.
13:05:24 #topic agenda
13:05:28 .hello andreasn
13:05:29 andreasn: andreasn 'Andreas Nilsson'
13:05:33 .hello dperpeet
13:05:33 dperpeet: dperpeet 'Dominik Perpeet'
13:06:14 .hello stefw
13:06:15 stefw: stefw 'Stef Walter'
13:06:39 .hello mvo
13:06:40 mvollmer: mvo 'Marius Vollmer'
13:06:50 ok, what do we have on the agenda today?
13:06:54 * Cockpit info at F23 login screen
13:07:02 * storage tests
13:07:24 * Running shell scripts from javascript
13:08:18 sounds good
13:08:30 #topic Cockpit info at F23 login screen
13:09:10 I did some work on agetty to update its VT display when IP addresses change
13:09:19 and that means we can include a cockpit address in /etc/issue
13:09:26 woohoo!
13:09:29 sgallagh is going to do the work to include it in the fedora-release-server /etc/issue
13:09:36 nice
13:09:45 https://bugzilla.redhat.com/show_bug.cgi?id=1239089
13:09:52 #info https://bugzilla.redhat.com/show_bug.cgi?id=1239089
13:10:57 cool, ok, next
13:11:02 #topic Storage tests
13:11:10 ok, we have a recurring error
13:11:16 I understand mvollmer was looking into it?
13:11:23 yes
13:11:24 do we need a temporary fix?
13:11:33 I have been running the tests for some time now
13:11:46 nothing conclusive, unfortunately
13:12:22 I have seen other failures, for example wipefs in storaged failing after parted because the new partition block device wasn't found
13:12:28 which should be 'impossible'
13:12:40 storaged bug?
13:12:44 because storaged has explicitly waited for it to appear
13:12:49 can't say.
13:13:10 should we add something to ignore that test failure?
13:13:17 but these are all "once in a million times" bugs...
13:13:20 I think we currently don't have a way to run a test but ignore its failure
13:13:27 I haven't seen testHidden fail
13:13:34 the *luks failure crops up quite often
13:13:48 right
13:13:59 these are not once-in-a-million bugs, though
13:13:59 we could make it retry
13:14:06 they fail on the test machine
13:14:11 2/3rds of the time
13:14:17 have you tried with sit on the test machine?
13:14:21 yes
13:14:38 ./verify with sit?
13:14:40 I believe that they fail
13:14:55 parallel fights with -s, stdin is closed
13:14:56 can it be reproduced when running just one test?
13:15:22 no
13:15:33 or rather, I wasn't able to.
13:15:55 do you want to keep at it, or should someone else look, too?
13:16:01 from reading the code, fstab _should_ have been updated when Delete returns.
13:16:14 I'll make some concrete proposals soon, I hope.
13:16:15 we could watch that file
13:16:20 in the test
13:16:25 so let me try a bit longer
13:16:27 ok
13:16:31 let me know if I can help
13:17:00 I'll add some debugging output to the journal, if possible.
13:17:04 I say we only add stopgap measures if we can't fix it in a day or two
13:17:12 until then we can just ignore the error
13:17:24 "manually"
13:17:25 ok
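
A minimal sketch of the "watch that file" idea above, using the cockpit.file() API from cockpit.js. The helper name and the entry string are invented for illustration, and the actual storage tests drive this from the Python harness, so this only shows the shape of the wait:

    /* global cockpit */

    // Hypothetical helper: instead of reading /etc/fstab once and racing
    // against storaged updating it asynchronously, keep watching until
    // the expected entry shows up, then report back.
    function waitForFstabEntry(expected, done) {
        var file = cockpit.file("/etc/fstab");
        var watch = file.watch(function(content) {
            if (content && content.indexOf(expected) >= 0) {
                watch.remove();   // stop watching once we have seen it
                file.close();
                done(content);
            }
        });
    }

    // Usage (entry string invented for illustration):
    // waitForFstabEntry("UUID=0000-TEST", function(content) { console.log(content); });
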
13:17:53 there is also the kubernetes failure, which looks frequent to me, too
13:18:06 which one?
13:18:12 > watching kubernetes events failed: 404
13:18:15 it feels like it's not that frequent anymore
13:18:17 http://files.cockpit-project.org/hubbot/e880705d43e157d9ca54c6a8191c58103880a08f_f22_x86-64.2/hubbot.html
13:18:48 ignore what I said, I'm not aware of having seen *that* bug
13:18:57 interesting, very strange
13:19:28 also on 22 testing: http://files.cockpit-project.org/hubbot/e880705d43e157d9ca54c6a8191c58103880a08f_f22-t_x86-64.2/hubbot.html
13:19:49 and atomic: http://files.cockpit-project.org/hubbot/e880705d43e157d9ca54c6a8191c58103880a08f_f22-atomic_x86-64.2/hubbot.html
13:20:05 all on master, i.e., with recent packages
13:20:05 just saw it on atomic
13:20:17 I guess some new update broke our kubernetes api
13:20:23 likely
13:20:24 is that since yesterday?
13:20:29 maybe with the v1 api change
13:20:38 I didn't see it on Friday
13:20:42 but testHidden etc. are broken in the 'stable' images also, right?
13:20:43 I'll bump the kubernetes requirement
13:20:52 mvollmer, yes
13:20:56 right
13:21:05 dperpeet, I wonder if we should rebuild the atomic image
13:21:07 does that happen?
13:21:12 automatically?
13:21:13 the storage stack really feels brittle
13:21:24 stefw, the base image?
13:21:28 yeah
13:21:43 because I think kubernetes has been updated to a later version in f22 atomic
13:21:51 er, maybe it's the f23 atomic
13:22:05 let me check the script
13:22:26 we use http://mirror2.hs-esslingen.de/fedora/linux/releases/22/Cloud/x86_64/Images/ as base
13:22:42 sorry, that was for me
13:22:44 the script says http://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/
13:22:50 that's the newest base image
13:22:58 from May
13:23:15 yup, we may need to use f23 to have the kubernetes v1 API
13:23:31 * stefw gets it
13:24:04 http://dl.fedoraproject.org/pub/alt/fedora-atomic/images/testing/
13:24:24 also, manual rebuilding via pull request tag is currently blocked on https://github.com/cockpit-project/hubbot/pull/24
13:24:54 mvollmer, if you work on the storage tests, you can cherry-pick https://github.com/cockpit-project/cockpit/pull/2518
13:25:11 or be aware of it
13:25:34 I think that can be merged anyway
13:25:36 yep, thanks
13:25:45 it can't break the tests more than they already are on atomic
13:25:46 :)
13:25:50 yes, forgot about it in the PR...
13:26:32 ok, next topic?
13:26:45 one thing
13:26:53 sure
13:27:00 should we consider not running the storage tests in parallel?
13:27:21 getting rid of the races would be ideal, but then again, they seem to be all over the stack
13:27:34 it's nice to be aware of possible races
13:27:37 what kind of races?
13:27:46 especially with a DBus API we should work to get rid of the races, no?
13:28:07 fstab changes bubbling up through the stack, for example
13:28:24 wipefs not seeing the partition that was just created
13:28:52 with fstab, shouldn't we force it to be updated before returning from a method that modifies it?
13:28:55 but I agree, I think we have them under control, almost.
13:29:20 yes, we can change storaged to do that
13:29:40 but there are also external changes, and a 'no-block' Format
13:29:41 or wait for it in the test
13:29:45 we do
13:29:56 [cockpit] stefwalter opened pull request #2529: Fix the shebang lines in VERIFY and friend (master...fix-shebang-lines) http://git.io/vO4vS
13:30:24 ok, let's keep them running under load.
13:30:24 we don't wait for fstab changes in the test
13:30:32 I think we just read /etc/fstab and expect it to have changed
13:30:35 (which it should have)
13:30:40 yes, in that case
13:30:55 "it's complicated"
13:31:01 [cockpit] stefwalter opened pull request #2530: base: Actually make the waitSeconds change take effect (master...wait-seconds-for-real) http://git.io/vO4vh
13:31:46 the AddConfiguration method adds to fstab or crypttab, but it doesn't guarantee that the D-Bus properties are up to date when it returns.
13:31:58 that can be fixed
13:32:42 but there are cases in the tests where fstab really changes asynchronously, and we need to wait for the change.
13:33:05 and since we need to do that anyway, I didn't want to block on fixing storaged.
13:33:31 so, yeah, let's find that testHidden bug as well, it's just so frustrating to reproduce.
13:34:39 eot. :-)
13:34:50 #topic Running shell scripts from javascript
13:35:50 stefw ^
13:36:06 ah yes
13:36:22 I've often been thinking about how we would run multiple commands together, in the absence of an API, without a lot of round trips and races
13:36:28 and I have a suggestion
13:36:44 * stefw has added a cockpit.script() function
13:36:51 which accepts a shell script to run
13:37:05 sounds like a good way to avoid boilerplate
13:37:20 here: https://github.com/stefwalter/cockpit/commit/bdb991d1ebff052c863f6e23a82cf9000f96c646
13:37:27 and here's a use of it: https://github.com/stefwalter/cockpit/commit/3c2672be6cc4db2910580f791504a2b6fe821f37
13:37:37 it also allows certain things to be tested better
13:37:42 anyway, I wanted to highlight it
13:37:58 may be worth thinking about and seeing if it's a recurring pattern we want to use
13:38:06 especially in situations where there's no solid API to interact with
13:38:41 that's it on that topic
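
A hedged example of the cockpit.script() pattern proposed above. The function comes from stefw's branch, so its exact signature may still change before merging; the script body and the error option are invented here, and the point is only the intent: several related commands in one round trip instead of chained cockpit.spawn() calls:

    /* global cockpit */

    // Run a small shell script on the target in a single round trip,
    // avoiding the races between separate spawn() invocations.
    var script = "set -e\n" +
                 "hostnamectl --static\n" +
                 "cat /etc/os-release\n";

    cockpit.script(script, { err: "message" })
        .done(function(output) {
            console.log("script output:", output);
        })
        .fail(function(ex) {
            console.warn("script failed:", ex.message);
        });
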
13:38:50 #topic Open Floor
13:39:04 on the old topic: thanks for that, I think it's a lot cleaner
13:39:24 I have a question about Internet Explorer
13:39:32 how are we doing with support for IE11, fixes and stuff?
13:39:41 yeah, makes me think of the "batching" problem we have with storage
13:39:57 in progress
13:40:06 sgallagh reported that everything works well in Edge, the new default browser in Windows 10, so that's a positive sign
13:40:14 stefw, are there guarantees about what happens to a running 'spawn' when the browser disconnects?
13:40:30 mvollmer, we could add an option to the bridge
13:40:37 is it killed immediately with the bridge, or is it allowed to finish?
13:40:40 so that it spawns the process in such a way that it's reparented and not killed when we exit
13:40:44 does that make sense even?
13:40:47 we would need to finish that part if it's interesting
13:41:19 right
13:41:41 not automatically killing a job can be pretty dangerous
13:41:49 considering resources
13:41:58 yes
13:42:18 I guess "background batching" needs to be quite explicit.
13:42:27 I think so
13:42:31 is there any urgent need?
13:42:35 no
13:42:39 you can always start a job backgrounded yourself in the terminal
13:43:10 I agree that it would need to be very explicit
13:43:36 in most cases that job is probably better left to services
13:43:42 e.g. storaged
13:43:46 and we just interface with those
13:44:03 yes
13:44:06 certainly
13:44:34 stefw, are those script commits ready to cherry-pick?
13:44:35 it means that storaged needs to implement what we need for our UI, but I guess that's ok.
13:45:06 mvollmer, yeah - I think that suits Cockpit better
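
On the "better left to services" point: one way a page can already hand a long-running job off so it survives the session is to delegate it to systemd rather than keeping it as a child of the bridge. A sketch, assuming systemd-run is available on the target; the unit name and the placeholder command are invented, and the superuser option spelling follows current cockpit.js:

    /* global cockpit */

    // Start the job as a transient systemd unit, so it is owned by
    // systemd and keeps running even if the bridge goes away.
    cockpit.spawn(["systemd-run", "--unit=example-long-job",
                   "sh", "-c", "sleep 600"],
                  { superuser: "require" })
        .fail(function(ex) {
            console.warn("could not start job:", ex.message);
        });
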
13:45:19 dperpeet, yes, you could bring them into your keys branch if you want
13:45:29 then it's all in one place
13:45:37 and only mvollmer is left to review
13:45:52 well, we can review each other's commits
13:45:58 yeah :)
13:51:25 anything else?
13:52:21 no
13:52:28 all right, thanks everyone!
13:52:31 #endmeeting