harddisk benchmarks

sequential read/write of large files

basically measuring raw sequential I/O

no script, just the raw commands:


# test write
time dd if=/dev/zero bs=1024k of=test_file_harddisk_bench count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 10.7764 s, 99.6 MB/s

real    0m11.417s
user    0m0.004s
sys     0m0.750s

# test read:
time dd if=test_file_harddisk_bench bs=1024k of=/dev/null count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 11.9484 s, 89.9 MB/s

real    0m11.950s
user    0m0.004s
sys     0m0.531s

# do not forget to delete your test_file_harddisk_bench afterwards
rm test_file_harddisk_bench;

script it:

# create the script
vim ~/scripts/bench_harddisk_sequential.sh

content:

#!/bin/bash

echo "============ OS and Kernel"
uname -a;

echo "============ filesystem used"
df -hT;

echo "test filename:"
TMPFILE=$(mktemp test_file_harddisk_bench.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) && echo "$TMPFILE"

echo "============ harddisk bench: test write"
time dd if=/dev/zero bs=1024k of=$TMPFILE count=1024

echo "============ harddisk bench: test read"
time dd if=$TMPFILE bs=1024k of=/dev/null count=1024

rm -rf $TMPFILE;

chmod u+x ~/scripts/bench_harddisk_sequential.sh; # mark script executable
~/scripts/bench_harddisk_sequential.sh; # run

these VMs all run on a Hyper-V (version 6.3.9600.16384) Windows 8.1 Enterprise host (NTFS) on consumer-grade hardware!!!

so this is „distorting“ results in unknown and mysterious ways… handle those results with caution.

(those are all dynamically sized virtual harddisk files, and you also never know when RAM caching is making a major difference and when it is not)

the limiting factor here is definitely the harddisk, going up to 100% usage in every scenario.

Hyper-V/Windows does some harddisk-RAM caching… so if benchmarks are rerun, performance might go up.
(strangely enough not with XFS on SUSE, maybe i am missing something here… could also be the Hyper-V guest integration modules that are loaded/not loaded)
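
to take the RAM caching inside the guest at least partly out of the equation, dd can be told to flush or bypass the page cache – a minimal sketch (conv=fdatasync and iflag=direct are standard GNU dd options; O_DIRECT needs filesystem support):

# write test that forces the data to disk before dd reports its speed
time dd if=/dev/zero bs=1024k of=test_file_harddisk_bench count=1024 conv=fdatasync;

# read test that bypasses the page cache, so a rerun can not be served from RAM
time dd if=test_file_harddisk_bench bs=1024k of=/dev/null count=1024 iflag=direct;

rm test_file_harddisk_bench;

(this only helps against caching inside the Linux guest – whatever Hyper-V caches on the host side is out of reach from here)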

some sequential results

– probably not real-world usable

… actually the 3rd result, the SUSE one, is closest to reality, because the other two were definitely RAM-cached results… with read and write of 1 GByte/sec 😀

so this also showed that Hyper-V RAM-caching worked on Debian8.7 and CentOS7 but not on SUSE12? 😀 also interesting.

[user@centos ~]$ ~/scripts/bench_harddisk_sequential.sh
============ OS and Kernel
Linux centos 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
============ filesystem used
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        50G  1.3G   49G   3% /
devtmpfs            devtmpfs  482M     0  482M   0% /dev
tmpfs               tmpfs     493M     0  493M   0% /dev/shm
tmpfs               tmpfs     493M  6.5M  486M   2% /run
tmpfs               tmpfs     493M     0  493M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  138M  877M  14% /boot
/dev/mapper/cl-home xfs        74G  2.4G   72G   4% /home
tmpfs               tmpfs      99M     0   99M   0% /run/user/1000
test filename:
test_file_harddisk_bench.t936rsgjKIueaJS13EJwAAawT64M8lR6
============ harddisk bench: test write
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.736847 s, 1.5 GB/s

real    0m0.739s
user    0m0.003s
sys     0m0.729s
============ harddisk bench: test read
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.02957 s, 266 MB/s

real    0m4.071s
user    0m0.004s
sys     0m0.538s

user@debian:~$ ~/scripts/bench_harddisk_sequential.sh
============ OS and Kernel
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.39-1+deb8u2 (2017-03-07) i686 GNU/Linux
============ filesystem used
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      123G  3.4G  114G   3% /
udev           devtmpfs   10M     0   10M   0% /dev
tmpfs          tmpfs     202M  4.9M  197M   3% /run
tmpfs          tmpfs     503M  8.0K  503M   1% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     503M     0  503M   0% /sys/fs/cgroup
tmpfs          tmpfs     101M   16K  101M   1% /run/user/1000
test filename:
test_file_harddisk_bench.rkzP6IMmcvNBWfgEM31HZtM3OdUzgFgg
============ harddisk bench: test write
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.02797 s, 1.0 GB/s

real    0m1.031s
user    0m0.008s
sys     0m0.704s
============ harddisk bench: test read
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.770018 s, 1.4 GB/s

real    0m0.798s
user    0m0.000s
sys     0m0.288s


user@suse:~> ~/scripts/bench_harddisk_sequential.sh
============ OS and Kernel
Linux suse 4.4.21-69-default #1 SMP Tue Oct 25 10:58:20 UTC 2016 (9464f67) x86_64 x86_64 x86_64 GNU/Linux
============ filesystem used
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  484M     0  484M   0% /dev
tmpfs          tmpfs     492M   80K  492M   1% /dev/shm
tmpfs          tmpfs     492M  8.0M  484M   2% /run
tmpfs          tmpfs     492M     0  492M   0% /sys/fs/cgroup
/dev/sda2      btrfs      41G  6.0G   34G  16% /
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/mariadb
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/pgsql
/dev/sda2      btrfs      41G  6.0G   34G  16% /opt
/dev/sda2      btrfs      41G  6.0G   34G  16% /.snapshots
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/machines
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/log
/dev/sda2      btrfs      41G  6.0G   34G  16% /usr/local
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/mailman
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/libvirt/images
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/crash
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/named
/dev/sda2      btrfs      41G  6.0G   34G  16% /boot/grub2/i386-pc
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/mysql
/dev/sda2      btrfs      41G  6.0G   34G  16% /tmp
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/tmp
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/cache
/dev/sda2      btrfs      41G  6.0G   34G  16% /boot/grub2/x86_64-efi
/dev/sda2      btrfs      41G  6.0G   34G  16% /srv
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/opt
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/spool
/dev/sda3      xfs        85G  1.1G   84G   2% /home
tmpfs          tmpfs      99M   16K   99M   1% /run/user/483
tmpfs          tmpfs      99M     0   99M   0% /run/user/1000
test filename:
test_file_harddisk_bench.YKmq07oAhg9txwN2TBxpnryTN1SNv25z
============ harddisk bench: test write
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6924 s, 91.8 MB/s

real    0m11.694s
user    0m0.000s
sys     0m0.746s
============ harddisk bench: test read
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.8167 s, 99.3 MB/s

real    0m11.059s
user    0m0.006s
sys     0m0.529s

Harddisk Benchmark: Small Files Read and Write

luckily (thanks archive.org!) i could recover the python source of this benchmark…

here it is: bench_harddisk_small_files.py.txt (rename it to *.py)

unfortunately: 1. you will need python installed, 2. you will have to run it as root (it syncs and drops the kernel caches between steps)!

from the author: „I wrote a small benchmark (source) to find out what file system performs best with hundreds of thousands of small files:“ (results from 2012)

# what this benchmark does: create 300,000 files of ~1024 bytes each, filled with random content

filecount = 300000
filesize = 1024

ll test
-rw-r--r--. 1 root root 1.4K May 12 14:35 9997
-rw-r--r--. 1 root root 1.2K May 12 14:35 99970
-rw-r--r--. 1 root root  956 May 12 14:35 99971
-rw-r--r--. 1 root root 1.3K May 12 14:35 99972
-rw-r--r--. 1 root root 1.5K May 12 14:35 99973
-rw-r--r--. 1 root root 1007 May 12 14:35 99974
-rw-r--r--. 1 root root  809 May 12 14:35 99975
-rw-r--r--. 1 root root 1.5K May 12 14:35 99976

cat 9997
8o@▒▒mJj8▒▒s▒▒▒ˊ▒               r▒▒BnryqJ$a▒▒5▒▒j▒J▒z+▒▒▒▒f▒jc`G▒2H▒W▒&▒▒▒▒▒▒-T d
▒▒2Ă▒▒▒Y▒▒x▒'%▒▒1▒M}/H▒▒▒G▒XL2▒▒▒▒      ▒▒▒%o;!▒▒#F▒Sf▒U▒\28=MFۛ▒u▒▒8-▒=
%C▒̃▒▒}▒▒▒▒▒▒1▒▒₽O=7▒▒q▒▒▒"▒▒?▒▒7[Kd▒N▒؎x▒f▒:▒Prx▒▒?▒▒▒g▒▒T▒▒{                          )▒a▒%▒▒▒▒*&e▒▒Oc▒6
▒V▒▒ȯ_}▒▒MK▒▒▒=6gY▒c▒ ▒|"S▒A▒Ս▒Жe!▒3em▒t`X▒_▒▒/L4H▒d▒▒+▒k▒Ɩ/s
ن▒DІЧ▒▒e▒*ߵ▒▒y▒▒B▒▒0▒;Gf[▒▒5T▒▒▒▒0}▒Ul▒u▒▒▒▒▒^ٸ▒▒]#▒▒▒L▒x
x▒WI▒%▒>g▒▒▒▒λ&▒▒.▒▒u▒ab▒▒▒M▒Rb▒▒▒▒▒▒▒F▒̸;▒Ǩ▒Nc(▒w▒▒▒9▒▒+▒^▒R▒Ys▒z▒d▒!fS▒▒I▒▒q▒▒:▒h%4▒▒gs▒E▒IN^▒▒;▒▒▒▒\]Ѣ▒o▒▒▒▒l▒K%▒C▒▒>▒▒x▒▒ ▒'?`▒&▒B޳ps|tfwR▒Ls▒
                                                                                                                                                 n▒▒▒▒P▒        ▒▒аj▒▒▒▒▒3▒Ǩ5▒y▒▒HLȦ▒▒ÿ-O▒

python -V; # test what version of python is installed
Python 2.7.9

wget http://dwaves.de/wp-content/uploads/2015/03/bench_harddisk_small_files.py_.txt; # download it

mv bench_harddisk_small_files.py_.txt bench_harddisk_small_files.py; # rename it

chmod u+x bench_harddisk_small_files.py; # make it runnable

./bench_harddisk_small_files.py; # run it (will take a while ;)

    create 300000 files (512B to 1536B) with data from /dev/urandom
    rewrite 30000 random files and change the size
    read 30000 sequential files
    read 30000 random files

    delete all files

    sync and drop cache after every step
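
„sync and drop cache“ is also the reason the script has to run as root – it boils down to roughly this (a minimal sketch; the value 3 frees the page cache plus dentries and inodes):

sync; # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches; # drop page cache, dentries and inodes (root only)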

Results (average time in seconds, lower = better):

Using Linux Kernel version 3.1.7
Btrfs:
    create:    53 s
    rewrite:    6 s
    read sq:    4 s
    read rn:  312 s
    delete:   373 s

ext4:
    create:    46 s
    rewrite:   18 s
    read sq:   29 s
    read rn:  272 s
    delete:    12 s

ReiserFS:
    create:    62 s
    rewrite:  321 s
    read sq:    6 s
    read rn:  246 s
    delete:    41 s

XFS:
    create:    68 s
    rewrite:  430 s
    read sq:   37 s
    read rn:  367 s
    delete:    36 s

Result:

While Ext4 had good overall performance, ReiserFS was extremely fast at reading sequential files. It turned out that XFS is slow with many small files – you should not use it for this use case.
Fragmentation issue

The only way to prevent file systems from distributing files over the whole drive is to keep the partition only as big as you really need it; but pay attention not to make the partition too small, or you will provoke intra-file fragmentation. Using LVM can be very helpful here.
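
with LVM you can start with a logical volume that is only as big as currently needed and grow it (and the filesystem on it) later – a hedged sketch, assuming a volume group named vg0 and ext4 (names are examples, not from this setup):

lvcreate -L 10G -n lv_data vg0; # start with a deliberately small logical volume
mkfs.ext4 /dev/vg0/lv_data; # create the filesystem on it
# ... later, when it actually fills up:
lvextend -L +10G /dev/vg0/lv_data; # grow the logical volume by another 10GB
resize2fs /dev/vg0/lv_data; # grow the ext4 filesystem to match (works online for ext4)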
Further reading

The Arch Wiki has some great articles dealing with file system performance:

https://wiki.archlinux.org/index.php/Beginner%27s_Guide#Filesystem_types

https://wiki.archlinux.org/index.php/Maximizing_Performance#Storage_devices

src: https://unix.stackexchange.com/questions/28756/what-is-the-most-high-performance-linux-filesystem-for-storing-a-lot-of-small-fi

my results:

(smaller numbers = task finished in less time = faster)

the winner is: find out for yourself – ext4 and ext3, but also BTRFS, are doing well with a lot of small files – while ext4 and ext3 are performance-wise actually pretty similar.

ext4 was 23.1 sec faster at rewriting files, ext3 was 89.9 sec faster at creating files (comparing the CentOS7 runs).

what is also important besides speed: stability – the capability to recover from errors (i managed to recover files via extundelete under ext3… i believe it is a little harder with ext4)
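
(for the record, an extundelete run on ext3/ext4 looks roughly like this – a hedged sketch, the device name is just an example; never keep writing to the filesystem you want to recover from:)

umount /dev/sdb1; # unmount (or remount read-only) before recovering
extundelete /dev/sdb1 --restore-all; # recovered files end up in ./RECOVERED_FILES
extundelete /dev/sdb1 --restore-file path/to/file; # or restore a single file by its old path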

also i can confirm the results of the benchmark author… XFS is no good at dealing with a lot of small files.

one could increase the filesize in use and decrease the number of files, rerun the benchmark, and see what happens.

again

these are all Hyper-V (version 6.3.9600.16384) VMs on normal consumer-grade hardware…

so don’t expect massively fast results…

the first 3 results are definitely without RAM caching 😉 (the first run always is)

also it seems that ext3 does even slightly better with small files than ext4 in this Hyper-V scenario?
(ext3: total runtime 13m38.279s, ext4: total runtime 16m29.313s)

you can prefix the run with time (time ./bench_harddisk_small_files.py) so you get the total runtime of the benchmark.

ext4 on Debian8.7 VM

uname -a; # OS and kernel version
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.39-1+deb8u2 (2017-03-07) i686 GNU/Linux

lsmod |grep hyper; # loaded guest-integration modules
hyperv_keyboard        12523  0
hyperv_fb              16882  2
hid_hyperv             12725  0
hid                    81008  2 hid_hyperv,hid_generic
hv_vmbus               28013  6 hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc

modinfo hv_storvsc
filename:       /lib/modules/3.16.0-4-686-pae/kernel/drivers/scsi/hv_storvsc.ko
description:    Microsoft Hyper-V virtual storage driver
license:        GPL
alias:          vmbus:4acc9b2f6900f34ab76b6fd0be528cda
alias:          vmbus:32264132cb86a2449b5c50d1417354f5
alias:          vmbus:d96361baa104294db60572e2ffb1dc7f
depends:        hv_vmbus,scsi_mod
intree:         Y
vermagic:       3.16.0-4-686-pae SMP mod_unload modversions 686
parm:           storvsc_ringbuffer_size:Ring buffer size (bytes) (int)

python -V; # python version
Python 2.7.9

df -hT; # filesystem
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      123G  3.4G  114G   3% /
udev           devtmpfs   10M     0   10M   0% /dev
tmpfs          tmpfs     202M  4.9M  197M   3% /run
tmpfs          tmpfs     503M  8.0K  503M   1% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     503M     0  503M   0% /sys/fs/cgroup
tmpfs          tmpfs     101M   16K  101M   1% /run/user/1000

# 1st run
./bench_harddisk_small_files.py; # python based small file read/write benchmark

create test folder:
14.9104299545

create files:
48.2082529068

rewrite files:
27.523827076

read linear:
49.7503108978

read random:
334.094204903

delete all files:
159.576689005

# 2nd run
 time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0390877723694

create files:
43.998016119

rewrite files:
28.5503320694

read linear:
47.4662151337

read random:
332.29962492

delete all files:
154.593928099

real    10m32.788s
user    0m6.708s
sys     0m52.472s

ext3 on Debian8.7 VM

newly created dynamically sized virtual harddisk

total runtime real 18m16.547s
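
(the ext3 test partition was presumably prepared along these lines – a hedged sketch matching the df output below:)

fdisk /dev/sdb; # create a new partition on the second virtual disk
mkfs.ext3 /dev/sdb1; # format it as ext3
mkdir -p /mnt/sdb1; mount /dev/sdb1 /mnt/sdb1; # mount it
cd /mnt/sdb1; # run the benchmark from inside the mount point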

root@debian:/mnt/sdb1# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sdb1      ext3      9.8G   23M  9.2G   1% /mnt/sdb1

root@debian:/mnt/sdb1# time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0149130821228

create files:
268.888262987

rewrite files:
96.2679889202

read linear:
67.2693779469

read random:
469.799334049

delete all files:
165.838576078

real    18m16.547s
user    0m7.720s
sys     1m1.924s

btrfs on Debian8.7 VM

total runtime real 13m52.389s

btrfs is easy to install and get going:

fdisk /dev/sdb; # create a new partition

apt-get install btrfs-tools; # install software

mkfs.btrfs -L "HD_NAME" /dev/sdb2; # format partition
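
# the mount step is implied above – something along these lines (hedged):
mkdir -p /mnt/sdb2; mount /dev/sdb2 /mnt/sdb2; # mount the new partition
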
root@debian:/mnt/sdb1# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sdb2      btrfs      10G  512K  8.0G   1% /mnt/sdb2

cd /mnt/sdb2;

time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0400609970093

create files:
122.124496222

rewrite files:
217.362040043

read linear:
29.3034169674

read random:
203.46718502

delete all files:
239.52281785

real    13m52.389s
user    0m8.192s
sys     1m12.172s

XFS on CentOS7 VM

[root@centos scripts]#
su; # become root

uname -a; # OS and kernel version
# Linux centos 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

lsmod |grep hyper; # loaded guest-integration modules
hyperv_keyboard        12777  0
hid_hyperv             13108  0
hyperv_fb              17769  1
hv_vmbus              397185  6 hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc

modinfo hv_storvsc
filename:       /lib/modules/3.10.0-514.el7.x86_64/kernel/drivers/scsi/hv_storvsc.ko
description:    Microsoft Hyper-V virtual storage driver
license:        GPL
rhelversion:    7.3
srcversion:     409D5193D2BFDCE53342BAC
alias:          vmbus:4acc9b2f6900f34ab76b6fd0be528cda
alias:          vmbus:32264132cb86a2449b5c50d1417354f5
alias:          vmbus:d96361baa104294db60572e2ffb1dc7f
depends:        hv_vmbus
intree:         Y
vermagic:       3.10.0-514.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        D4:88:63:A7:C1:6F:CC:27:41:23:E6:29:8F:74:F0:57:AF:19:FC:54
sig_hashalgo:   sha256
parm:           logging_level:Logging level, 0 - None, 1 - Error (default), 2 - Warning. (int)
parm:           storvsc_ringbuffer_size:Ring buffer size (bytes) (int)
parm:           storvsc_vcpus_per_sub_channel:int
parm:           vcpus_per_sub_channel:Ratio of VCPUs to subchannels

python -V; # python version
# Python 2.7.5

df -hT; # filesystem
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        50G  1.3G   49G   3% /
devtmpfs            devtmpfs  482M     0  482M   0% /dev
tmpfs               tmpfs     493M     0  493M   0% /dev/shm
tmpfs               tmpfs     493M  6.5M  486M   2% /run
tmpfs               tmpfs     493M     0  493M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  138M  877M  14% /boot
/dev/mapper/cl-home xfs        74G  1.1G   73G   2% /home
tmpfs               tmpfs      99M     0   99M   0% /run/user/1000

./bench_harddisk_small_files.py; # python based small file read/write benchmark

create test folder:
64.8546340466

create files:
198.000077963

rewrite files:
266.317943096

read linear:
236.771200895

read random:
411.996505022

delete all files:
185.099548817

XFS on SUSE12 VM

suse:~ # uname -a;
Linux suse 4.4.21-69-default #1 SMP Tue Oct 25 10:58:20 UTC 2016 (9464f67) x86_64 x86_64 x86_64 GNU/Linux

suse:~ # lsmod |grep hyper; # loaded guest-integration modules
hyperv_keyboard        16384  0
hid_hyperv             16384  0
hyperv_fb              20480  2
hv_vmbus              614400  6 hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc

modinfo hv_storvsc
filename:       /lib/modules/4.4.21-69-default/kernel/drivers/scsi/hv_storvsc.ko
description:    Microsoft Hyper-V virtual storage driver
license:        GPL
srcversion:     E2B033E0227A4BC7E67F05B
alias:          vmbus:4acc9b2f6900f34ab76b6fd0be528cda
alias:          vmbus:32264132cb86a2449b5c50d1417354f5
alias:          vmbus:d96361baa104294db60572e2ffb1dc7f
depends:        hv_vmbus,scsi_mod,scsi_transport_fc
supported:      external
intree:         Y
vermagic:       4.4.21-69-default SMP mod_unload modversions
signer:         SUSE Linux Enterprise Secure Boot Signkey
sig_key:        3F:B0:77:B6:CE:BC:6F:F2:52:2E:1C:14:8C:57:C7:77:C7:88:E3:E7
sig_hashalgo:   sha256
parm:           logging_level:Logging level, 0 - None, 1 - Error (default), 2 - Warning. (int)
parm:           storvsc_ringbuffer_size:Ring buffer size (bytes) (int)
parm:           storvsc_vcpus_per_sub_channel:Ratio of VCPUs to subchannels (int)

suse:~ # python -V; # python version
Python 2.7.9
suse:~ # df -hT; # filesystem
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  484M     0  484M   0% /dev
tmpfs          tmpfs     492M   80K  492M   1% /dev/shm
tmpfs          tmpfs     492M  8.0M  484M   2% /run
tmpfs          tmpfs     492M     0  492M   0% /sys/fs/cgroup
/dev/sda2      btrfs      41G  6.0G   34G  16% /
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/mariadb
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/pgsql
/dev/sda2      btrfs      41G  6.0G   34G  16% /opt
/dev/sda2      btrfs      41G  6.0G   34G  16% /.snapshots
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/machines
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/log
/dev/sda2      btrfs      41G  6.0G   34G  16% /usr/local
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/mailman
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/libvirt/images
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/crash
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/named
/dev/sda2      btrfs      41G  6.0G   34G  16% /boot/grub2/i386-pc
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/lib/mysql
/dev/sda2      btrfs      41G  6.0G   34G  16% /tmp
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/tmp
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/cache
/dev/sda2      btrfs      41G  6.0G   34G  16% /boot/grub2/x86_64-efi
/dev/sda2      btrfs      41G  6.0G   34G  16% /srv
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/opt
/dev/sda2      btrfs      41G  6.0G   34G  16% /var/spool
/dev/sda3      xfs        85G  1.1G   84G   2% /home <- XFS
tmpfs          tmpfs      99M   16K   99M   1% /run/user/483
tmpfs          tmpfs      99M     0   99M   0% /run/user/1000

/home/user/scripts/bench_harddisk_small_files.py; # python based small file read/write benchmark

create test folder:
0.00403809547424

create files:
77.052531004

rewrite files:
246.546470881

read linear:
228.652709007

read random:
414.306197166

delete all files:
122.249341011

EXT3 on CentOS7 VM

# total runtime!!! 13m38.279s !!!

[root@centos /]# df -hT
/dev/sdb2           ext3      9.8G   23M  9.2G   1% /mnt/sdb2

 time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0190849304199

create files:
66.4392571449

rewrite files:
70.9426250458

read linear:
65.0275251865

read random:
419.262803078

delete all files:
175.337391138

# total runtime:
real    13m38.279s
user    0m10.147s
sys     1m5.594s

EXT4 on CentOS7 VM

total runtime real 16m29.313s

[root@centos /]# df -hT
/dev/sdb1           ext4      9.8G   37M  9.2G   1% /mnt/sdb1

[root@centos sdb1]# time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.109514951706

create files:
156.300988913

rewrite files:
47.8779950142

read linear:
138.047894955

read random:
430.069261074

delete all files:
175.878396988

real    16m29.313s
user    0m11.569s
sys     1m11.605s

BTRFS on SUSE12 VM

total runtime real 14m55.076s

suse:/var/tmp # df -hT
/dev/sda2      btrfs      41G  6.0G   33G  16% /tmp

suse:/var/tmp # time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0999500751495

create files:
54.8537290096

rewrite files:
259.263952971

read linear:
31.106744051

read random:
215.53058815

delete all files:
294.819381952

real    14m55.076s
user    0m8.432s
sys     1m4.469s

hdparm

# test read performance
hdparm -t /dev/sda1;
debian under VirtualBox on a Windows 7 host (Supermicro, RAID 5, LSI 1068E controller) – note: the result below is dd output from that setup, not hdparm output:
1073741824 bytes (1.1 GB) copied, 3.6072 s, 298 MB/s
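
hdparm can also show the difference between cached and raw device reads in one go – a minimal sketch (-T measures reads from the Linux buffer cache, -t buffered reads from the device; repeat a few times on an idle system for stable numbers):

hdparm -tT /dev/sda; # -T: cached (RAM) reads, -t: buffered device (disk) reads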

About BTRFS:

… SUSE12 is using btrfs for system-root / and XFS for /home.

btrfs supports cool features such as snapshots – started by Oracle, now developed further and used by Facebook – https://btrfs.wiki.kernel.org/index.php/Main_Page
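
a snapshot of a btrfs subvolume is a one-liner – a minimal sketch (paths are examples; SUSE12 keeps its snapshots under /.snapshots, see the df output above):

btrfs subvolume snapshot / /.snapshots/root_before_update; # writable snapshot of the root subvolume
btrfs subvolume snapshot -r / /.snapshots/root_before_update_ro; # -r makes it read-only
btrfs subvolume list /; # list existing subvolumes and snapshots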

https://en.wikipedia.org/wiki/Btrfs

Developer(s): Facebook, Fujitsu, Fusion-IO, Intel, Linux Foundation, Netgear, Oracle Corporation, Red Hat, STRATO AG, and SUSE[1]

Btrfs is intended to address the lack of pooling, snapshots, checksums, and integral multi-device spanning in Linux file systems.[8] Chris Mason, the principal Btrfs author, has stated that its goal was „to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what’s being used and makes it more reliable.“[14]
