10:26:41 #startmeeting ssd_cache_test_day_f20
10:26:41 Meeting started Sun Oct 13 10:26:41 2013 UTC. The chair is ignatenkobrain. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:26:41 Useful Commands: #action #agreed #help #info #idea #link #topic.
10:27:50 #topic SSD Cache Test Day: https://fedoraproject.org/wiki/Test_Day:2013-10-13_SSD_Cache
10:36:23 hi guys, I have my home system with HDD /dev/sdb and SSD /dev/sda. On the SSD I have volume group vg_bb, which has about 3 GB of free space. I created LV vg_bb/bcache (2.5 GB).
10:37:22 can I use vg_bb/bcache instead of /dev/sdb1 in the prerequisites?
10:40:51 rolffokkens: bon appetit =)
10:58:52 * adamw sent out a blog post and a forum news post about the test day
10:58:59 may be too late to get people aware of it though :(
10:59:06 i can't help testing sadly as i don't have the hardware around atm
11:06:41 adamw: you can also test in a VM. Not the performance of course, but you can test the other aspects.
11:07:22 rolffokkens: hmm, i can try, not even sure i can squeeze sufficiently sized disks onto this laptop :)
11:08:16 k3olo: I did some testing with bcache on LVs. It worked, so you can try as well. But the test day pages have no docs on that.
11:09:22 rolffokkens: ok, i'll try and tell you the results
11:09:33 k3olo: thx
11:10:18 adamw: you need about 10 GB of free space.
11:10:31 rolffokkens: hum, might be able to scrape that together
11:15:56 I checked in my own VM: all 2GB values on the test day page are double the bare minimum, so if you read 1 GB it'll still work.
11:17:02 my root FS actually needs 917MB.
11:17:23 adamw: so a total of 5GB may be sufficient to test.
11:24:38 k3olo: you may also consider testing in a VM: virtual SSD on vg_bb/bcache and virtual HDD somewhere on your HDD.
11:33:28 so. /me here
11:34:02 k3olo: at what step are you now?
11:38:23 another question - can bcache destroy my data in /home (/dev/sdb3)?
11:38:46 k3olo: destroy in casual use?
11:39:29 if so, it can. Who knows, but some critical issues were fixed
11:39:31 ignatenkobrain: during https://fedoraproject.org/wiki/QA:Testcase_bcache-tools_home_on_bcache_(no_LVM)#Setup or the test
11:41:30 k3olo: yup. it will destroy all data in the "How to test" step
11:42:31 .fire nirik
11:42:31 adamw fires nirik
11:42:33 k3olo: I guess "Setup, item 4" is not clear on this.
11:43:42 okay, you saved me )
11:43:45 k3olo: you need to back up your /home/* files to another place and restore them afterwards
11:44:24 Changed the wiki text: If needed, "backup" all data in /home to another place on your root filesystem, because all data on /dev/sda2 will be destroyed.
11:44:24 ignatenkobrain: unfortunately I don't have another place to store the data
11:44:59 rolffokkens: and make it bold =)
11:45:41 k3olo: you can back it up to an external device or to e.g. /var/tmp
11:45:43 or similar
11:46:25 k3olo: bold it is!
11:46:49 k3olo: I mean, changed it to bold :-)
11:47:09 rolffokkens: make it 72pt font size and paint it white :D
11:49:56 :-)
11:56:56 k3olo: at what step are you?
11:58:25 rolffokkens: going away because I can't take it. :(
11:58:35 *can't test
11:58:50 k3olo: what's the problem?
11:59:25 rolffokkens: I think I don't have space on / for the backup
11:59:35 rolffokkens: the problem is I don't have enough space to back up /home
11:59:44 You have 3 GB available on your SSD. How much space on your HDD?
11:59:56 (but I can test on a mock partition on the HDD)
12:00:10 rolffokkens: I have about 7 GB unallocated on the HDD
12:00:17 k3olo: you may also consider testing in a VM: virtual SSD on vg_bb/bcache and virtual HDD somewhere on your HDD.
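For reference, the bcache-on-LV setup k3olo asks about at the start of the log (an LV on the SSD as the cache device, a partition on the HDD as the backing device) could look roughly like the sketch below. The device names /dev/vg_bb/bcache and /dev/sdb1 are taken from the conversation; the cache-set UUID is a placeholder, and this is only an illustration of the general bcache-tools workflow, not the exact test day procedure.

    # Create the backing device on the HDD and the cache device on the SSD LV
    make-bcache -B /dev/sdb1
    make-bcache -C /dev/vg_bb/bcache

    # If udev did not register the devices automatically, do it by hand
    echo /dev/sdb1         > /sys/fs/bcache/register
    echo /dev/vg_bb/bcache > /sys/fs/bcache/register

    # Attach the cache set to the backing device; <cset-uuid> is a placeholder,
    # the real value comes from bcache-super-show
    bcache-super-show /dev/vg_bb/bcache | grep cset.uuid
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # Put a filesystem on the resulting cached device
    mkfs -t ext4 -L HOME /dev/bcache0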
12:00:18 k3olo: this! test w/ mock
12:00:23 this is a good test
12:01:01 create mock partitions /var/cache/mock and /var/lib/mock on HDD
12:01:05 on bcache
12:01:23 and run package builds
12:01:32 rolffokkens: I think we could add this as a test
12:02:01 rolffokkens: I think building packages should be faster if our koji ran on SSD w/ bcache
12:02:10 nirik: what do you think ?
12:03:03 too much of the build time uses the disk for installing dependencies nad etc.
12:03:08 s/nad/and/
12:03:24 ignatenkobrain: >and run package builds
12:03:24 ?
12:03:52 I don't know how to build something )
12:03:54 ignatenkobrain: you mean mock as a use case also for performance testing?
12:04:06 k3olo: how about a VM?
12:04:16 k3olo: it's easy!
12:04:18 $ fedpkg co gnome-shell
12:04:20 $ cd gnome-shell
12:04:22 $ fedpkg srpm
12:04:24 $ mock blahblahblah
12:04:29 rolffokkens: not exactly
12:04:33 ignatenkobrain: ok
12:04:41 rolffokkens: not / or /home on bcache
12:04:46 mock partitions on bcache I mean
12:05:00 an additional testcase
12:05:22 for cases like this (when there's not enough space for backing up data)
12:05:30 k3olo: I'm not sure all testers are familiar with mock.
12:05:56 k3olo: please respond: have you considered a VM?
12:06:16 rolffokkens: no
12:06:28 k3olo: why not?
12:06:51 k3olo: no issues is losing data, ideal for testing
12:06:58 s/is/in/
12:07:31 rolffokkens: I don't have enough time, I need to go in ~40 min
12:07:45 and I don't have an f20 VM
12:08:04 ignatenkobrain: I'm not sure all testers are familiar with mock
12:08:10 rolffokkens: but I can use my 8 GB of unallocated space on the HDD to test.
12:08:34 if you tell me how
12:08:36 k3olo: OK. Later you could do the test, there's enough space.
12:08:57 k3olo: ever used "virt-manager"?
12:09:09 rolffokkens: yes, only vbox
12:09:15 k3olo: oh no..
12:10:34 vbox has many nice features in the user interface, like easy device integration, shared clipboard, and operations with the VM screen
12:10:39 ignatenkobrain: ^
12:11:00 rolffokkens: let me create testcases for mock partitions
12:11:14 ignatenkobrain: it is very user friendly, if you need a GUI in the virtual machine
12:11:29 k3olo: no.
12:11:30 :P
12:12:04 ignatenkobrain: sure, could be interesting in real life to speed up koji.
12:13:26 bye, have a nice day
12:13:35 nirik: Kevin, cut it to sleep!
12:35:10 adamw: ping
12:55:33 ignatenkobrain: ahoyho
12:55:34 y
12:55:52 adamw: how much do you know about how our koji is run?
12:56:11 adamw: we have a brilliant idea
12:59:54 what do you mean exactly?
13:01:44 adamw: we have a brilliant idea to make another partition with cache etc. for mock (koji uses it), for speeding up koji in IOwait ;)
13:02:33 ah, I don't know at that level of detail, no
13:02:44 might be best to talk to nirik about that I think?
13:03:12 adamw: he's not available now. but generally what do you think about our idea?
13:03:44 I'll send mail to the mailing lists tonight about this too
13:08:30 i really don't know how much of a bottleneck it is for koji or if something like this would help, sorry!
17:01:35 how is the testing going?
17:01:57 roshi: not good :( one participant :(
17:02:06 you could be the second
17:02:33 I'm getting ready to spin up a VM right now :)
17:03:13 part of the issue might be people don't use bcache - or know what it is
17:03:23 * roshi had to look it up
17:18:04 roshi: thanks for your feedback. I added a brief explanation on the Test page.
17:18:31 sweet :)
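The mock-based performance idea discussed earlier (putting /var/lib/mock and /var/cache/mock on the bcache device and timing a package build) could be sketched roughly as below. The mount point, package, and mock config name are only examples; the fedpkg/mock invocations follow the ones quoted in the log, with the anonymous clone variant.

    # Put the mock working directories on the bcache-backed filesystem
    # (assumed mounted at /mnt/bcache for this sketch)
    mkdir -p /mnt/bcache/mock-lib /mnt/bcache/mock-cache
    mount --bind /mnt/bcache/mock-lib   /var/lib/mock
    mount --bind /mnt/bcache/mock-cache /var/cache/mock

    # Build a sample package and time it, for comparison with an uncached run
    fedpkg clone -a gnome-shell
    cd gnome-shell
    fedpkg srpm
    time mock -r fedora-20-x86_64 --rebuild gnome-shell-*.src.rpm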
17:22:57 roshi: are you focusing on a particular test case?
17:23:10 reading :)
17:23:24 want to make sure I know what I'm setting up before I start
17:24:24 In case anything is unclear, there's a lot of support available for you right now! :-)
17:24:38 probably start with 1.b because it hasn't been run yet
17:24:45 true :)
17:26:54 FYI 1.b requires 1.a as preparation.
17:27:20 then I'll record my results and start there :)
17:29:20 installing minimal now
17:31:35 roshi: have you taken care of the right partitioning?
17:32:08 doing that now
17:34:24 for a VM, I've added a second drive (I'm using virt-manager)
17:34:34 how do I set it as an SSD
17:35:14 ?
17:35:22 roshi: so you have a /dev/vda (sda) as HDD and a /dev/vdb (sdb) as SSD.
17:35:58 I'm setting up the hw configuration now
17:35:59 roshi: if you have a real SSD you could create a disk image on that device.
17:36:17 didn't know if I could set it in the VM config
17:36:19 I don't
17:36:19 roshi: rolffokkens, also. I don't know about virt-manager, but qemu can set the speed for disks
17:36:20 :(
17:36:37 virt-manager is using qemu
17:36:44 and the tests can be passed on one real HDD
17:36:56 but vda will be set to 10 MB/s
17:37:03 and vdb set to 100 MB/s
17:37:06 and test
17:37:09 roshi: then just create a disk. There are no options to make it behave like an SSD in virt-manager
17:37:34 Except for performance there is no difference.
17:37:46 I didn't think there was much difference
17:37:58 but, having never used an SSD, I wasn't sure
17:38:11 for optional testing, mock will show quite a difference ;)
17:42:14 roshi: on the outside they look like any 2.5'' drive with the same SATA connectors, and to your system they're just hard disks as well.
17:42:48 I've worked with them - I just don't have any personally
17:45:27 I waited to buy one because I didn't want to decide what data should be on the SSD and what data should be on the HDD. But when SSD caching became available, I bought one.
17:53:12 * roshi is running into space issues
17:54:17 roshi: what issues ?
17:54:36 not enough disk space on the host drive
17:54:42 * roshi fixing now
17:55:30 virt-manager (or perhaps qemu, not sure) saves images in /var/lib/libvirt/images -> which was out of space
17:56:01 roshi: oh dude ;) delete your porn-vide^W^W^W yum cache :D
17:56:11 roshi: you can fix it in virt-manager
17:56:26 I just got rid of some VMs I don't need anymore
17:56:43 * roshi needs to be more proactive with keeping files organized
17:57:12 you can use "Edit"->"Preferences"->"Storage"->"Add"
17:57:17 in virt-manager
17:57:19 ;)
17:59:26 ep
17:59:32 s/ep/yep
18:00:34 wait, I have no 'storage' in preferences (not in the gui at least)
18:01:16 ah. I'm wrong
18:01:24 not Preferences. Connection details
18:01:58 roshi: http://storage9.static.itmages.com/i/13/1013/h_1381687317_7018361_d414f717c1.png
18:02:59 I got it :)
18:14:52 roshi: is creating the VM going well?
18:15:14 from what I can tell
18:15:38 handling the partitions now
18:18:56 roshi: In Anaconda I never actually made the partitions on the (virtual) SSD, always did that later.
18:19:20 jreznik: hey ;)
18:19:21 I'm doing it in anaconda right now
18:19:28 OK.
18:21:18 we'll see if it works
18:21:23 :)
18:22:14 in the interest of being thorough - should we specify a swap size on the wiki for the partitioning?
18:22:33 * roshi is using 1GB for the minimal install
18:22:35 roshi: no, use whatever swap size you need
18:22:52 use 1 GiB ;)
18:23:16 I don't like GBs. I'd like GiBs :D
18:23:59 lol
18:26:54 * roshi is getting hungry
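As an aside, the VM roshi sets up above (a larger virtual "HDD" plus a smaller virtual "SSD") can also be created from the command line instead of through virt-manager. A minimal sketch with qemu-img and virt-install follows; the guest name, image paths, sizes, install URL, and --os-variant value are all placeholders, not anything from the test day page.

    # Two disk images: vda will play the HDD, vdb the SSD
    qemu-img create -f qcow2 /var/lib/libvirt/images/f20-hdd.qcow2 8G
    qemu-img create -f qcow2 /var/lib/libvirt/images/f20-ssd.qcow2 4G

    # Create the guest with both disks attached
    virt-install --name f20-bcache --ram 1024 --vcpus 1 \
        --disk path=/var/lib/libvirt/images/f20-hdd.qcow2 \
        --disk path=/var/lib/libvirt/images/f20-ssd.qcow2 \
        --location http://example.org/fedora/20/x86_64/os/ \
        --os-variant fedora20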
18:27:21 roshi: `date` ?
18:27:30 ?
18:27:49 2013-10-13
18:27:54 around lunch time
18:27:55 roshi: time ;)
18:27:58 I meant
18:28:02 well, a bit past lunch time
18:28:08 ah
18:28:41 * roshi has the VM running - starting test cases
18:28:54 roshi: excellent!
18:31:10 roshi: great! I wasn't able to create all partitions in F20 Anaconda.
18:31:48 well, I think I got it all working ;)
18:31:53 time will tell
18:32:05 roshi: or fdisk :-)
18:32:41 rolffokkens: gdisk for GPT (all UEFI systems)
18:33:49 ignatenkobrain: not in the VM. Well, not in my VMs.
18:34:30 rolffokkens: not in QEMU/KVM VMs, probably in vbox VMs and real machines
18:34:33 also
18:34:41 we should write EFI virtualization
18:36:24 https://bugzilla.redhat.com/show_bug.cgi?id=477035
18:36:38 ignatenkobrain: hi man!
18:37:27 rolffokkens: I should package OVMF
18:37:33 jreznik: how are you ?
18:37:41 jreznik: will you test ;)
18:37:43 ?
18:38:52 jreznik: yes, we need another tester! :-)
18:39:18 jreznik: we need mooore experimental hamsters :D
18:40:42 :D
18:40:51 how is it going?
18:41:40 jreznik: one human in the results table, a second (roshi) lunching and testing, a third human will test in 30-40 mins.
18:42:55 nothing more
18:43:11 well, it's a pretty specific test day...
18:43:15 we need jreznik for testing ;)
18:48:35 I'd be killed at home if I don't shut down the laptop now :(
18:49:20 rolffokkens: I think for the next test day (F21 SSD cache) we could update the tests w/ performance testing on real SSD+HDD and on an HDD VM (restrict IO for one of the virtual disks)
18:49:34 jreznik: heh. sad
18:50:32 jreznik: bcache isn't worth getting killed
18:51:21 * rolffokkens wonders if that's actually understandable English.
18:54:16 ignatenkobrain: could be a nice test.
18:55:36 part of the issue is this is a Sunday
18:55:47 here in the States almost no one works on Sunday
18:57:30 work as in "getting paid for" or "playing with computers"?
18:57:52 lol
18:58:16 sorry - was meaning that basically people hang out with friends/family, watch TV, play games
18:58:26 even nerds usually have something to be doing
18:58:38 if I had the time I would be out camping right now :) or hiking
18:58:50 * roshi has been trying to find some time to go fishing...
18:59:05 roshi, reform USA ;)
19:00:39 interesting that in the US watching TV is considered a better way to spend time than playing with computers :-)
19:01:03 Unless you mean watching TV with friends/family.
19:01:38 but why not play the piano/trombone or similar? :(
19:02:07 it's a US thing
19:02:18 * roshi doesn't have TV
19:02:34 too expensive :p
19:02:47 would rather spend money on good beer and food :)
19:03:09 in Russia we call the TV a zombie box
19:04:10 beer and food: primary needs in life.
19:04:15 and internet.
19:04:28 s/beer/vodka/g
19:04:30 fixed
19:05:17 I've seen a lot of zombie movies on TV, but that's not what you mean I guess :-)
19:05:39 * roshi is a fan of vodka
19:05:57 roshi: zombie box because TV is zombifying people
19:05:58 I use the TV for games and an extra monitor when I really need it
19:06:10 I follow :)
19:07:09 roshi: redhat is looking for you, alcoholic! :D
19:07:20 roshi: how about the tests?
19:07:25 roshi: lunching or also torturing your VM with test cases?
19:07:55 hm, the support people are a little eager
19:08:07 figuring out what to eat for lunch
19:08:11 :p
19:08:24 also, double checking my partitions
19:08:38 * roshi usually lets the installer handle the partitioning
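On the idea raised above of emulating the HDD/SSD speed difference inside a VM (vda throttled to ~10 MB/s, vdb to ~100 MB/s), libvirt can throttle individual virtual disks of a running guest. A rough sketch, assuming the guest is named f20-bcache as in the earlier sketch:

    # Throttle the virtual "HDD" to ~10 MB/s and the virtual "SSD" to ~100 MB/s
    virsh blkdeviotune f20-bcache vda --total-bytes-sec $((10*1024*1024)) --live
    virsh blkdeviotune f20-bcache vdb --total-bytes-sec $((100*1024*1024)) --live

    # Check the limits currently in effect
    virsh blkdeviotune f20-bcache vda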
19:09:15 * rolffokkens wonders if and roshi really did that
19:09:40 ?
19:10:06 s/if and/if and how/
19:10:57 did what?
19:11:10 F20 anaconda only wants to make a partition when I assign a filesystem to it. But for the (virtual) SSD that's too early.
19:11:31 yeah - I don't think it worked like I thought it did :)
19:11:42 hence, me rechecking the partitions
19:12:03 Hm. what does fdisk /dev/vda show?
19:12:25 rolffokkens: fdisk -l might be more beautiful
19:12:58 * rolffokkens agrees
19:14:08 almost 9h have passed :D
19:14:10 it seems my swap got put on the SSD instead of the HDD
19:14:46 roshi: but you should have free space on the SSD
19:14:52 I do
19:14:54 that's no problem - you can do without swap during the test.
19:14:55 and you can use it for the test
19:15:25 And the partition order on sda/vda is OK?
19:16:12 not sure
19:16:39 roshi: # fdisk -l | fpaste
19:16:43 roshi: please provide it
19:18:08 https://www.irccloud.com/#!/ircs://irc.freenode.net:6697/%23fedora-test-day
19:18:31 meh - pentadactyl pasting....
19:18:36 /me grumbles
19:18:51 http://paste.fedoraproject.org/46507/38169184/
19:20:09 Looks OK. Is /dev/vda3 your root FS?
19:20:51 df may be an easy way to show that.
19:21:17 rolffokkens: too ;)
19:21:38 we should integrate df and fdisk into systemd :D we don't need dupe dunctional :D
19:21:43 s/d/f/
19:22:34 http://ur1.ca/fvtk0
19:23:33 vda2
19:23:49 The prereqs say that you need to have vda3 as the root FS.
19:24:02 * rolffokkens wonders if we can work around this
19:24:44 well, I did try to set this up via anaconda
19:25:04 I think it's all good
19:25:21 and no need to reinstall
19:25:37 In Anaconda the order is really important. I first create /boot, then /home and then /
19:25:39 just swap 2 and 3 in the commands
19:26:09 ah - didn't know it was specific like that
19:26:34 yes, but for test case 1.b it'll be impossible to use the whole disk for your root FS.
19:26:38 rolffokkens: I think different configs are better to test
19:28:23 OK, /dev/sda2 could be used in the end as swap on bcache.
19:28:23 well, I can reinstall - it doesn't take long with the minimal
19:28:38 roshi: no, don't do this
19:28:46 * roshi is getting ready to go grab a burrito :)
19:28:54 I haven't
19:29:59 roshi: no need to do this. But plz remove swap from /etc/fstab and the virtual SSD.
19:30:31 roshi: if we do that, we'll be able to use /dev/vda2 as a bcached swap in the end.
19:30:44 s/we/you/
19:30:48 reinstall - the shindows way
19:31:18 removed swap from fstab
19:31:33 roshi: swapoff blahblah
19:31:46 swapoff -a
19:32:18 done - brb
19:32:22 er, bbiab
19:32:35 (be back in a bit)
19:34:50 (back in a bite) :-)
19:36:29 rolffokkens: bite or bitte ? ;)
19:38:20 roshi is burrito :D
19:40:01 k3olo: how is your testing going?
19:40:33 ignatenkobrain: I'm on the "create a filesystem: mkfs -t ext4 -L HOME /dev/bcache0" step =)
19:40:37 ignatenkobrain: FYI, caching in koji is off so we have reproducible builds. We currently know every package version that's in every buildroot used by every build. If there was caching it would be a lot more complex. You're welcome to try and talk to dgillmore and see if there's some way to do it that still makes things reproducible.
19:41:00 ignatenkobrain: but i'll mount it into my ~/bcache_test dir
19:41:11 nirik is alive ;) wow
19:41:24 ignatenkobrain: I was on vacation, just got back.
19:42:08 nirik: I've pinged dgilmore to take a look at my critical bug, but got no pong ;(
19:42:17 what critical bug?
19:42:46 nirik: in GitPython
19:42:46 searching
19:42:52 also I've requested acls for it
19:42:58 * k3olo reboot
19:43:13 ok, just curious. I have no connection to GitPython...
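For completeness, removing the accidentally created swap from the virtual SSD, as suggested above, comes down to something like the following; the sed pattern is just one way to comment out the fstab entry.

    # Stop using all swap devices
    swapoff -a

    # Comment out the swap entry in /etc/fstab so it does not come back on reboot
    sed -i '/ swap /s/^/#/' /etc/fstab

    # Verify that no swap is active any more
    free -h
    cat /proc/swaps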
19:43:25 nirik: https://bugzilla.redhat.com/show_bug.cgi?id=1010706
19:43:55 nirik: I'm using it in my kernel-package program
19:44:55 ok... I don't know what that is... but good luck. ;)
19:45:12 ignatenkobrain: I don't have write access to ~/bcache_test
19:45:34 k3olo: from root ?
19:47:00 k3olo: so you need to `$ cd ~/bcache_test && sudo mkdir test && sudo chown blahblah:blahblah -R test` or some such command
19:47:35 nirik: you can take a look at my pkg for QA :D
19:47:44 nirik: https://github.com/ignatenkobrain/kernel-package
19:49:36 ignatenkobrain: I don't really have time to, but good luck with it.
19:49:56 nirik: thanks ;)
19:54:28 ignatenkobrain: не писал мне ничего? [you didn't write me anything?]
19:54:44 k3olo: oh no, not Russian :D
19:54:56 k3olo: so you need to `$ cd ~/bcache_test && sudo mkdir test && sudo chown blahblah:blahblah -R test` or some such command
19:55:52 ignatenkobrain: holy fuck, now they've found us out =)
19:58:28 ignatenkobrain: during reboot there was some freeze:
19:58:29 окт 13 23:47:04 bb.lan systemd[1]: sddm.service stopping timed out. Killing.
19:58:29 окт 13 23:47:04 bb.lan systemd[1]: sddm.service: main process exited, code=killed, status=9/KILL
19:58:29 окт 13 23:47:04 bb.lan systemd[1]: Stopped Simple Desktop Display Manager.
19:58:29 окт 13 23:47:04 bb.lan systemd[1]: Unit sddm.service entered failed state.
19:59:33 i didn't see it before
20:01:11 k3olo: you mounted bcache as /home ?
20:01:28 ignatenkobrain: no, as /home/nedr/bcache_test
20:01:43 k3olo: ok, so now it's ok?
20:02:05 k3olo: mount | fpaste
20:02:39 ignatenkobrain: yes, all is ok. how can i test io speed?
20:02:54 *everything is ok
20:03:20 k3olo: excellent.
20:04:18 rolffokkens: I want to test speed, any ideas about that?
20:04:20 k3olo: # dd if=/dev/zero of=/home/user/bcache_test/test.file bs=64M and see iotop
20:04:29 ok
20:04:42 k3olo: or copy (a lot of) files from and to the bcache device.
20:05:11 py1hon: are you here?
20:05:58 k3olo: the dd suggestion may not work really well, because bcache tends to not use the cache for sequential I/O.
20:06:28 rolffokkens: probably
20:06:40 rolffokkens: how can I see I/O speed while copying files?
20:06:49 k3olo: iotop
20:06:53 or iostat
20:07:07 k3olo: you may also need to change your cache_mode to writeback, the default is writethrough
20:07:30 rolffokkens: where can I change it?
20:07:35 writethrough only works for reads.
20:07:52 k3olo: first check, please: what is the cache_mode?
20:08:03 use bcache-status to see
20:08:50 rolffokkens: writethrough
20:09:24 OK. What's your backing device for /dev/bcache0?
20:09:40 rolffokkens: /dev/sdb3
20:10:49 k3olo: echo writeback > /sys/block/sdb/sdb3/bcache/cache_mode
20:12:07 rolffokkens: Cache Mode writethrough [writeback] writearound none
20:12:17 right!
20:13:53 "iostat 1 sda sdb" will show you the I/O
20:15:14 rolffokkens: hm, I see in KDE's transfers: copying begins at 30 MB/s but now it has stopped completely at ~2 GB done
20:16:20 k3olo: you mean the copy command is blocked?
20:17:06 rolffokkens: the copy is just frozen, not terminating
20:17:22 k3olo: sounds like an issue I read about....
20:17:50 .fire py1hon
20:17:50 adamw fires py1hon
20:18:11 py1hon: we need you ;)
20:19:07 * rolffokkens is searching...
20:24:24 http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2049
20:24:32 Looks like that.
20:27:50 rolffokkens: I can build a kernel w/ this patch
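The cache-mode switch and the quick write test discussed above, gathered in one place. Paths follow the log (backing device /dev/sdb3, mount point /home/nedr/bcache_test), and as rolffokkens notes, purely sequential dd traffic may bypass the cache, so the numbers are only a rough indication.

    # Show the current cache mode (the active one is shown in [brackets])
    cat /sys/block/sdb/sdb3/bcache/cache_mode

    # Switch from the default writethrough to writeback
    echo writeback > /sys/block/sdb/sdb3/bcache/cache_mode

    # Rough write test against the bcache-backed filesystem (writes ~1 GiB)
    dd if=/dev/zero of=/home/nedr/bcache_test/test.file bs=64M count=16 conv=fdatasync

    # Watch the raw devices while the test or a file copy runs (Ctrl-C to stop)
    iostat -m 1 sda sdb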
20:27:51 k3olo: did you read: http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2049 ?
20:28:26 rolffokkens: reading it now
20:29:25 ignatenkobrain: building the kernel will take a while. For me there's 1.5 hours left, then to bed.
20:29:39 rolffokkens: looks like my case
20:29:41 rolffokkens: 25-30 mins for me
20:30:07 ignatenkobrain: OK. plz try.
20:30:15 k3olo: can you wait ?
20:30:49 ignatenkobrain, k3olo: or later this week?
20:31:42 ignatenkobrain: no, I need to wake up in 6 hours
20:31:59 I can do it tomorrow
20:32:04 k3olo: ok. please file a bug against the kernel component
20:32:11 and CC us
20:32:38 Unfortunately py1hon is busy today :9
20:32:42 it's very sad
20:32:45 ignatenkobrain: I can comment on http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2049
20:32:49 with my dmesg
20:33:10 k3olo: better to have a bug for building and testing patches
20:33:18 https://bugzilla.redhat.com/
20:34:04 k3olo: py1hon is actively reading the linux-bcache mailing list....
20:34:50 rolffokkens: for adding patches downstream we need to have a bug in RHBZ ;)
20:35:57 ignatenkobrain: OK, but I'll get py1hon in the loop for a final solution
20:44:56 rolffokkens: https://bugzilla.redhat.com/show_bug.cgi?id=1018615
20:46:12 k3olo: cat /proc/cmdline
20:49:30 ignatenkobrain: cat /proc/cmdline
20:49:30 BOOT_IMAGE=/vmlinuz-3.11.4-301.fc20.x86_64 root=/dev/mapper/vg_bb-root ro rd.lvm.lv=vg_bb/root vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet LANG=ru_RU.UTF-8
20:49:43 good night, rolffokkens, ignatenkobrain
20:50:01 k3olo: good night to you
20:50:24 k3olo: you forgot to write your results into the table ;) https://fedoraproject.org/wiki/Test_Day:2013-10-13_SSD_Cache#Test_Results
20:51:41 rolffokkens: unfortunately your link isn't correct for k3olo's bug :(
20:56:18 why?
20:56:43 rolffokkens: Igor Gnatenko 2013-10-13 16:49:19 EDT
20:56:47 Bug present in the kernel bcache realisation.
20:56:51 http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2049
20:56:55 but in 3.11.4 patch was applied:
20:56:57 https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/md/bcache/request.c?id=c0f04d88e46d14de51f4baebb6efafb7d59e9f96
20:57:01 Rolf or I will send message to upstream bcache mailing list.
20:57:41 1.A finished without a hitch
20:57:55 though, getting permission errors when logging in with a test user
20:58:02 roshi: oh my god. you are still alive! ;)
20:58:08 can't cd to /home/
20:58:10 :D
20:58:19 roshi: ll /home/ | fpaste
20:58:28 it takes more than some partitioning and a burrito to kill me
20:58:29 ignatenkobrain: could you please do it in response to the mail I just sent to the list?
20:58:32 ;)
20:59:04 rolffokkens: sure. will do. but tomorrow
20:59:23 roshi: please provide the paste from my command
20:59:50 http://ur1.ca/fvtzy
20:59:56 I was :p
21:00:14 roshi: sure. chown -R test:test /home/test
21:00:16 roshi: ownership issue
21:00:17 as root
21:00:23 yep
21:00:34 right :)
21:02:24 also
21:02:26 $ stat -c %a /home/brain
21:02:28 710
21:02:50 you need to set chmod 0710 /home/test
21:04:22 roshi: so now case 1.B?
21:04:43 that's the plan rolffokkens
21:05:15 roshi: how about the permissions? fixed?
21:05:57 well, I fixed the permissions so that I can do things in the right home directory
21:06:12 but for some reason permission is denied when I log in
21:06:27 so, it doesn't drop the user in their home directory on login
21:06:46 probably $ stat -c %a /home/brain
21:06:48 710
21:06:55 probably you don't have 710
21:07:04 and the DM could block it
21:07:27 710 is done
21:08:18 roshi: ll -d /home
21:11:26 roshi: case 1.B should be no problem, only the optional part at the end does not apply.
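The permission fix for the freshly created test user's home directory, as given above, in full. Run as root; "test" is the example user from the conversation, and the /home mode is just the usual default rather than anything the test case prescribes.

    # Give the test user back its home directory and the 0710 mode used in the log
    chown -R test:test /home/test
    chmod 0710 /home/test

    # The /home mount point itself also needs sane (default) permissions
    chmod 0755 /home

    # Verify both
    stat -c '%a %U:%G %n' /home /home/test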
21:12:12 * roshi thinks it could have something to do with selinux policies
21:12:28 roshi: that's very well possible!
21:12:51 roshi: please edit /etc/selinux/config
21:12:51 setting it to permissive to see if that changes anything
21:13:30 roshi: selinux is something I should warn about in the test docs.
21:14:27 that fixed it
21:14:50 prompt still looks weird though: -bash-4.2$
21:15:05 oops
21:16:44 well, first test passed - and I'm learning about bcache, so it's a good day
21:16:54 for me, anyway
21:18:33 roshi: case 1.B should be no problem, only the optional part at the end does not apply.
21:19:08 sounds good - I'll let you know if I have any hiccups
21:28:32 rolffokkens: http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2113
21:28:46 I will build a custom kernel w/ this patch
21:30:14 ignatenkobrain: for k3olo?
21:30:21 rolffokkens: yep
21:30:36 rolffokkens: I think it should fix his problem
21:30:50 was that his problem? NULL ptr deref?
21:31:11 rolffokkens: I think so. I looked at the dmesg
21:31:19 [ 1550.135293] BUG: unable to handle kernel NULL pointer dereference at
21:31:21 000000000000006c
21:31:29 OK.
21:37:47 roshi: how is it going?
21:38:06 good, 1.B is done
21:38:22 everything went well - no errors and / is on bcache0
21:38:31 roshi: so good
21:38:35 yeah
21:38:43 * roshi updates the matrices on the wiki
21:38:58 roshi: also we need to test one bug
21:39:03 and probably a fix
21:39:19 rolffokkens: could you tell roshi how to reproduce the bug?
21:41:51 * roshi goes for a quick run
21:42:41 rolffokkens 12:10:49 AM
21:42:44 k3olo: echo writeback > /sys/block/sdb/sdb3/bcache/cache_mode
21:42:53 roshi: replace sdb3 with your device
21:43:00 and copy files
21:48:14 roshi: actually you should get a copy that hangs
21:48:18 i.e. the bug
21:49:00 Was away for a while. For roshi it went smoothly...
21:50:17 rolffokkens: we need to test the bug reported by k3olo and, if it still persists, test the kernel I referenced in the bug
21:51:04 ignatenkobrain: I can try to crash my own VM
21:51:34 rolffokkens: good. please test. I'll be back in 15-40 mins
21:52:03 ignatenkobrain: about to go to bed - tomorrow is Monday.
21:52:41 rolffokkens: I don't need more sleep. I don't have to wake up early ;)
21:53:20 rolffokkens: but if you need sleep - don't worry. we can continue tomorrow ;)
21:55:03 ignatenkobrain: doing some tests now
22:01:07 ignatenkobrain: interesting issue with the VM: the underlying host OS (BTRFS) complains: BTRFS error (device dm-0): csum failed ino 3026069 off 1449168896 csum 3343383093 private 844597172
22:01:29 And the VM itself reports an I/O error.
22:02:13 The underlying OS is using BTRFS on bcache. Don't know which one to blame.
22:02:36 Well, I'm off. See y'all later!
22:22:39 ignatenkobrain: were there instructions on how to recreate that bug?
22:33:46 so
22:33:49 * ignatenkobrain is here
22:33:59 roshi: at what step are you ?
22:35:10 rolffokkens, I think btrfs is a bad choice
22:43:49 just got back from a quick jog
22:43:57 I've finished 1.B
22:44:20 tested the bug?
22:44:37 not yet
22:44:41 echo writeback > /sys/block/sdb/sdb3/bcache/cache_mode
22:44:54 replace sdb and sdb3 w/ your partition
22:45:35 IIRC for your case sdb == sda and sdb3 == sda2 or sda1
22:46:06 `echo writeback`?
22:46:40 `echo writeback > ........`
22:48:27 ok, done
22:48:42 roshi: copy (not move) data to the folders above
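On the SELinux issue roshi hit above: setting SELINUX=permissive in /etc/selinux/config works as a test-day workaround, but a likely root cause is that the files restored onto the new bcache-backed /home lost their SELinux contexts. A sketch of the more targeted checks and fix, under that assumption:

    # Relabel the restored home directories with the proper SELinux contexts
    restorecon -Rv /home

    # Or only switch to permissive for the current boot while testing
    setenforce 0

    # Look for AVC denials related to the login failure
    ausearch -m avc -ts recent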
22:50:29 copy data from anywhere to vda3?
22:50:44 or just somewhere in /home
22:50:53 which is on bcache atm
22:51:03 to /home, which is on bcache
22:52:18 kk
22:52:44 roshi: if it gets stuck at some % - you have this bug
22:55:53 cp my_file.txt /home
22:55:55 from /
22:55:57 no error
22:56:25 big file
22:56:33 kk
22:56:48 also, you can generate it w/ dd from /dev/urandom ;)
22:59:19 yeah
22:59:33 getting a big file now - netinst iso :)
22:59:53 i think it is not big enough
23:00:09 we copied >2GiB and at 2GiB it froze
23:00:50 so you think I need a bigger one?
23:01:00 roshi: probably..
23:01:43 hm
23:01:49 wget for the iso dies
23:02:00 heh
23:02:05 don't panic
23:02:28 I rarely panic :)
23:02:36 wait
23:03:10 press sysrq-t
23:03:48 huh, systemd is having issues: user@1000.service entered a failed state
23:03:52 sysrq?
23:04:08 SysRq or PrtScr
23:04:32 ok
23:04:33 why?
23:04:41 what why ?
23:04:49 PrtScr - t
23:04:52 why press that?
23:05:34 we need to get a trace
23:05:35 http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2049
23:05:37 argh
23:05:39 't' - Will dump a list of current tasks and their information to your
23:05:41 console.
23:05:57 after doing it - see dmesg
23:06:01 and paste it
23:06:42 hm
23:06:58 damn. I'm wront
23:07:00 wrong
23:07:14 Ctrl - Alt - SysRq - t
23:07:15 wget was doing 1.2 MB/s down for a while - now it's down to 10 kB/s
23:07:25 I know
23:08:01 do C-A-SysRq-t and provide dmesg | fpaste
23:08:03 doing that on the host won't do anything
23:08:24 heh
23:08:40 it will do something, but nothing critical
23:09:05 but I hope it will do consider VM
23:09:16 damn. my brain
23:09:28 inside. not consider
23:10:59 .fire roshi
23:10:59 adamw fires roshi
23:11:27 http://ur1.ca/fvum1
23:11:51 yup
23:11:53 this
23:11:55 you have the bug
23:12:38 https://bugzilla.redhat.com/show_bug.cgi?id=1018615
23:12:46 what line indicates the bug?
23:13:02 * roshi doesn't really know what he's testing at this point
23:13:17 roshi: see line 568
23:13:28 unable to handle kernel NULL pointer dereference at
23:13:47 ah
23:13:50 and your wget stopped or got too slow, right?
23:13:58 I stopped wget
23:14:07 10 kB/s is a bit slow
23:14:11 yup
23:14:16 please test the new kernel
23:14:19 from koji
23:14:25 * ignatenkobrain looking for the link
23:14:40 and of course CC on the bug
23:14:47 http://koji.fedoraproject.org/koji/buildinfo?buildID=470953
23:19:59 roshi: at what step are you?
23:20:27 installing rpm dev tools to upgrade the kernel
23:20:45 roshi: why do you need rpm dev tools? O_o
23:20:51 * roshi thinks that's what he needs
23:21:08 roshi: you are weird :D
23:21:10 this is deeper stuff than I'm used to doing
23:21:18 * roshi is learning
23:21:27 # yum install http://kojipkgs.fedoraproject.org//packages/kernel/3.12.0/0.rc4.git4.1.fc21/x86_64/kernel-3.12.0-0.rc4.git4.1.fc21.x86_64.rpm
23:21:30 etc.
23:21:41 * roshi didn't think it would be that easy :p
23:22:41 you need kernel, kernel-modules-extra and probably other pkgs
23:23:38 it keeps hanging on yum install
23:24:04 * roshi would guess that the 10 kB/s thing is happening again
23:24:10 damn
23:24:32 do you have a fallback w/o bcache on the root fs?
23:24:47 nope
23:25:18 ok. so.
23:25:35 should I give yum a chance to do its thing?
23:26:05 I think that's not needed. It will freeze after some time.
23:26:31 update the matrices in the wiki w/ a link to this bug and tomorrow we can quickly re-test this, ok?
23:27:02 the wiki already references this bug
23:27:06 you should see
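The reproduction and debugging steps pieced together above (create a large file, copy it onto the bcache device, and when the copy stalls grab a task dump for the bug report) can also be driven without the keyboard SysRq combo, via /proc. A rough sketch; file sizes and paths are examples only.

    # Create a few GiB of test data (zeroes may be handled sequentially, hence urandom)
    dd if=/dev/urandom of=/root/big.bin bs=1M count=4096

    # Copy it onto the bcache-backed /home and watch whether the copy stalls
    cp /root/big.bin /home/ &
    iostat 1 vda vdb

    # If it hangs: dump all task states into the kernel log and paste it for the bug
    echo 1 > /proc/sys/kernel/sysrq
    echo t > /proc/sysrq-trigger
    dmesg | fpaste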
23:27:14 which test is this regarding?
23:27:34 I'll be on tomorrow - so I can take another crack at it
23:27:35 all ;)
23:27:50 ah
23:28:03 it's a general issue in the kernel
23:28:09 not /home or / specific
23:29:01 so update both test cases then
23:29:40 also: thanks for the help, I learned a bit today :)
23:29:54 will do.
23:30:08 roshi: thanks for the testing ;)
23:30:28 :)
23:30:37 someone's got to do it :p
23:30:48 * roshi will know more about the process next time
23:34:33 updated testcases
23:35:41 roshi: thanks again. And thanks to everyone who tested, too.
23:35:48 I should get some sleep
23:35:56 all: good night!
23:36:11 the record will be available
23:36:19 #endmeeting