SMART first:

one might want to check the SMART status of one's harddisks to make sure they are running under normal conditions and do not show signs of failure.
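
a quick way to do that is smartmontools (device names here are just examples):

apt install smartmontools; # install
smartctl -H /dev/sda; # overall health self-assessment (PASSED/FAILED)
smartctl -a /dev/sda; # full report incl. SMART attributes and error log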

pre-benchmark preparations:

halt as many programs as possible:

one SHOULD suspend all running VMs / I/O-intensive processes before performing the actual benchmarks, or – of course – one will not get correct results (they will come out slower).

also bear in mind that a lot of RAM and harddisk-internal caching is going on, so the results are a rough estimate of the actual harddisk speeds.
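
to take at least the Linux page cache out of the equation between runs, one can flush it as root (this is the same trick the small-file python benchmark further below uses):

sync; echo 3 > /proc/sys/vm/drop_caches; # flush buffers, then drop page cache, dentries and inodes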

tested on:

# tested on
cat /etc/os-release |grep -e NAME -e VERSION
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye

what hardware / harddisk / storage is used?

dmidecode -t 2
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.

Handle 0x0002, DMI type 2, 15 bytes
Base Board Information
	Manufacturer: ASUSTeK COMPUTER INC.
	Product Name: P8H77-M PRO
	Version: Rev X.0x
	Serial Number: XXXXXXXXXXX
	Asset Tag: To be filled by O.E.M.
	Features:
		Board is a hosting board
		Board is replaceable
	Location In Chassis: To be filled by O.E.M.
	Chassis Handle: 0x0003
	Type: Motherboard
	Contained Object Handles: 0

# we got an i7 with 8 logical cores (4 physical cores + hyper-threading)... and bogomips of:
grep -c ^processor /proc/cpuinfo
8

cat /proc/cpuinfo | grep bogomips
bogomips : 6799.90
bogomips : 6799.90
bogomips : 6799.90
bogomips : 6799.90
bogomips : 6799.90
bogomips : 6799.90
bogomips : 6799.90
bogomips : 6799.90

lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 58
Model name:            Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
Stepping:              9
CPU MHz:               3785.856
CPU max MHz:           3900.0000
CPU min MHz:           1600.0000
BogoMIPS:              6799.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts

# what harddisks / harddisk controllers
lshw -class tape -class disk -class storage -short
H/W path         Device     Class          Description
======================================================
/0/100/1f.2      scsi0      storage        7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode]
/0/100/1f.2/0    /dev/sda   disk           3TB TOSHIBA DT01ACA3
/0/100/1f.2/1    /dev/sdb   disk           3TB WDC WD3000FYYZ-0

# for NVMes
su - root
apt install nvme-cli
nvme list

# sample output
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev  
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     X                    KINGSTON SA2000M81000G                   1         261.01  GB /   1.00  TB    512   B +  0 B   S5Z42105
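
nvme-cli can also read out the NVMe's built-in health counters, roughly the NVMe equivalent of SMART (device name is an example):

# temperature, percentage_used, media errors etc.
nvme smart-log /dev/nvme0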

# for sata disks
hdparm -I /dev/sda

# sample output
/dev/sda:
ATA device, with non-removable media
	Model Number:       KINGSTON SKC600512G                     
	Serial Number:      XXXX
	Firmware Revision:  S4200102
	Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0

fio based benchmark:

# install
su - root;
apt update;
apt install fio;
# run it (testing the SA2000M81000G NVMe)
fio --rw=readwrite --name=test --size=50M --direct=1 --bs=1024k --numjobs=10 |tail -8 | head -3
Run status group 0 (all jobs):
   READ: bw=746MiB/s (782MB/s), 61.9MiB/s-98.4MiB/s (64.9MB/s-103MB/s), io=232MiB (243MB), run=288-311msec
  WRITE: bw=862MiB/s (904MB/s), 65.6MiB/s-101MiB/s (68.8MB/s-106MB/s), io=268MiB (281MB), run=288-311msec
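
the run above is a sequential test with large blocks; for a view closer to small-file / database workloads, a 4k random-read test could look like this (the parameters are just a reasonable starting point, not values used for the results in this article):

fio --name=randread-test --rw=randread --bs=4k --size=1G --runtime=30 --time_based --direct=1 --ioengine=libaio --iodepth=32 --group_reporting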

dd based benchmark: test sequential write

if one reruns the test… a lot of RAM caching happens and one gets false results – one can watch htop while running the test to look at the harddisk I/O speeds; if the disk is not used at all, one knows that RAM caching is doing the harddisk's job.

hdparm -t --direct /dev/sdX
 -t Perform timings of device reads for benchmark and comparison purposes.
For meaningful results, this operation should be repeated 2–3 times on an otherwise inactive system
(no other active processes) with at least a couple of megabytes of free memory.
This displays the speed of reading through the buffer cache to the disk without any prior caching of data.
This measurement is an indication of how fast the drive can sustain sequential data reads under Linux,
without any filesystem overhead. To ensure accurate measurements,
the buffer cache is flushed during the processing of -t using the BLKFLSBUF ioctl.
--direct
Use the kernel's "O_DIRECT" flag when performing a -t timing test.
This bypasses the page cache, causing the reads to go directly from the drive into hdparm's buffers,
using so-called "raw" I/O. In many cases, this can produce results that appear much faster than the usual
page cache method, giving a better indication of raw device and driver performance.
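
a concrete invocation against the whole device (not a single partition) would be:

# repeat 2-3 times on an idle system for meaningful numbers
hdparm -t --direct /dev/sda
# prints a line like: Timing O_DIRECT disk reads: ... MB in 3.00 seconds = ... MB/sec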

time dd if=/dev/zero of=/root/testfile bs=3G count=3 oflag=direct
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+3 records in
0+3 records out
6442438656 bytes (6.4 GB) copied, 40.839 s, 158 MB/s

real	0m40.847s
user	0m0.000s
sys	0m3.921s
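
the partial-read warning above appears because a single read() on Linux is capped at 2147479552 bytes (just under 2GiB), so a bs=3G block can never be filled in one read – which is also why only 3 x 2147479552 = 6442438656 bytes (6.4GB) instead of 9GB were written. a block size dd can actually fill avoids this:

# same test with 1MiB blocks (9216 x 1MiB = 9GiB); alternatively add iflag=fullblock
time dd if=/dev/zero of=/root/testfile bs=1M count=9216 oflag=direct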

/dev/random is super slow – use /dev/urandom!

# seems to work but is super slow?
cat /dev/random > test_file_harddisk_bench;

# even after installing
yum install -y rng-tools

# and starting (but not enabling (autostart))
systemctl start rngd

# rngd uses 1 core at 100% and the write speed for /dev/random is in the kilobytes/sec

# it is actually very interesting running this
# just be careful, one might run out of disk space
cat /dev/urandom > test_file_harddisk_bench

# because one can see that the CPU is not the limiting factor
# one sees the RAM cache (orange in htop) filling up
# and then the write speed drops to the actual value
# (without assistance of the RAM)
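
a bounded alternative that cannot fill up the disk and shows live throughput (status=progress requires a not-too-ancient coreutils):

# write 1GB of pseudo-random data with live speed display, then tidy up
dd if=/dev/urandom of=test_file_harddisk_bench bs=1M count=1024 oflag=direct status=progress;
rm -f test_file_harddisk_bench;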

back to /dev/zero: sequential write and read test:

time dd if=/dev/zero of=/tmp/test_file_harddisk_bench bs=1024k count=1024 oflag=direct

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 7.2401 s, 148 MB/s

real	0m7.290s
user	0m0.003s
sys	0m0.277s
time dd if=/tmp/test_file_harddisk_bench bs=1024k count=1024 of=/dev/null

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.62068 s, 162 MB/s

real	0m6.622s
user	0m0.001s
sys	0m0.592s

# do not forget to delete one's test_file_harddisk_bench afterwards
rm -rf /tmp/test_file_harddisk_bench;

script it:

watch htop with I/O columns while running it to get a better impression of one's disk's speed (or slowness).
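
alternatively, iotop can show per-process disk I/O directly (needs root):

apt install iotop; # install
iotop -o; # -o = only show processes that are actually doing I/O right now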

cat /scripts/bench_harddisk.sh

#!/bin/bash

echo "========== get mobo model =========="
dmidecode -t 2

echo "========== what cpu =========="
lscpu

echo "========== number of cores =========="
grep -c ^processor /proc/cpuinfo

echo "========== show bogomips per core =========="
cat /proc/cpuinfo | grep bogomips

echo "========== what harddisk / controllers are used =========="
lshw -class tape -class disk -class storage -short 

echo "========== writing 3GB of zeroes to /root/testfile =========="
time dd if=/dev/zero of=/root/testfile bs=3G count=1 oflag=direct

echo "========== reading 6GB of zeroes from /root/testfile =========="
time dd if=/root/testfile bs=3GB count=1 of=/dev/null

echo "========== tidy up, removing testfile =========="
rm -rf /root/testfile;

don’t forget to:

# mark script executable
chmod u+x /scripts/bench_harddisk.sh;

# run
/scripts/bench_harddisk.sh;

small files python harddisk benchmark:

looks like this:

cat /scripts/bench_harddisk_small_files.py 
#!/usr/bin/python
# -*- coding: utf-8 -*-

# credits: https://unix.stackexchange.com/users/14161/taffer
# https://unix.stackexchange.com/questions/28756/what-is-the-most-high-performance-linux-filesystem-for-storing-a-lot-of-small-fi
# recovered via: https://web.archive.org/web/20141201000000*/https://dl.dropboxusercontent.com/u/40969346/stackoverflow/bench.py

filecount = 300000
filesize = 1024

import random, time
from os import system
flush = "sudo su -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'"

randfile = open("/dev/urandom", "r")

print "\ncreate test folder:"
starttime = time.time()
system("rm -rf test && mkdir test")
print time.time() - starttime
system(flush)

print "\ncreate files:"
starttime = time.time()
for i in xrange(filecount):
    rand = randfile.read(int(filesize * 0.5 + filesize * random.random()))
    outfile = open("test/" + unicode(i), "w")
    outfile.write(rand)
print time.time() - starttime
system(flush)

print "\nrewrite files:"
starttime = time.time()
for i in xrange(int(filecount / 10)):
    rand = randfile.read(int(filesize * 0.5 + filesize * random.random()))
    outfile = open("test/" + unicode(int(random.random() * filecount)), "w")
    outfile.write(rand)
print time.time() - starttime
system(flush)

print "\nread linear:"
starttime = time.time()
for i in xrange(int(filecount / 10)):
    infile = open("test/" + unicode(i), "r")
    outfile.write(infile.read());
print time.time() - starttime
system(flush)

print "\nread random:"
starttime = time.time()
outfile = open("/dev/null", "w")
for i in xrange(int(filecount / 10)):
    infile = open("test/" + unicode(int(random.random() * filecount)), "r")
    outfile.write(infile.read());
print time.time() - starttime
system(flush)

print "\ndelete all files:"
starttime = time.time()
system("rm -rf test")
print time.time() - starttime
system(flush)

results for above ASUS P8H77-M PRO system:

python -V
Python 2.7.5

/scripts/bench_harddisk_small_files.py 

create test folder:
0.0151031017303

create files:
15.1330468655

rewrite files:
10.6340930462

read linear:
25.2636039257

read random:
213.365452766

delete all files:
9.52882599831

these VMs all run on a Hyper-V (Version: 6.3.9600.16384; was forced to use it by the company) Windows 8.1 Enterprise host (NTFS) on consumer-grade hardware!!!

so this is “distorting” results in unknown and mysterious ways… so handle those results with caution.

(those are all dynamically sized harddisk files and one also never knows when RAM caching is making a major difference and when not)

the limiting factor here is definitely the harddisk, going up to 100% usage in every scenario.

Hyper-V/Windows does some harddisk-RAM caching… so if benchmarks are rerun, performance might go up.
(strangely enough not with XFS on SUSE, maybe I am missing something here… could also be the Hyper-V guest integration modules that are loaded/not loaded)

some sequential results

– probably not real-world usable

… actually the 3rd result, the SUSE one, is closest to reality, because the other two were definitely RAM-cached results… with read and write of 1GByte/sec 😀

so this also showed that Hyper-V RAM-caching worked on Debian8.7 and CentOS7 but not on SUSE12? 😀 also interesting.
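
the bench_harddisk_sequential.sh script itself is not reproduced here; a minimal sketch that would produce matching output (the random filename suffix and the exact echo texts are reconstructions, not the original) could look like this:

#!/bin/bash
# minimal reconstruction of bench_harddisk_sequential.sh (a sketch, not the original)
TESTFILE="test_file_harddisk_bench.$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 32)"
echo "============ OS and Kernel"
uname -a
echo "============ filesystem used"
df -hT
echo "test filename:"
echo "$TESTFILE"
echo "============ harddisk bench: test write"
# note: no oflag=direct here, which is exactly why RAM caching can inflate these numbers
time dd if=/dev/zero of="$TESTFILE" bs=1024k count=1024
echo "============ harddisk bench: test read"
time dd if="$TESTFILE" bs=1024k count=1024 of=/dev/null
rm -f "$TESTFILE"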

[user@centos ~]$ ~/scripts/bench_harddisk_sequential.sh
============ OS and Kernel
Linux centos 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
============ filesystem used
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        50G  1.3G   49G    3% /
devtmpfs            devtmpfs  482M     0  482M    0% /dev
tmpfs               tmpfs     493M     0  493M    0% /dev/shm
tmpfs               tmpfs     493M  6.5M  486M    2% /run
tmpfs               tmpfs     493M     0  493M    0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  138M  877M   14% /boot
/dev/mapper/cl-home xfs        74G  2.4G   72G    4% /home
tmpfs               tmpfs      99M     0   99M    0% /run/user/1000
test filename:
test_file_harddisk_bench.t936rsgjKIueaJS13EJwAAawT64M8lR6
============ harddisk bench: test write
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.736847 s, 1.5 GB/s

real    0m0.739s
user    0m0.003s
sys     0m0.729s
============ harddisk bench: test read
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.02957 s, 266 MB/s

real    0m4.071s
user    0m0.004s
sys     0m0.538s

user@debian:~$ ~/scripts/bench_harddisk_sequential.sh
============ OS and Kernel
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.39-1+deb8u2 (2017-03-07) i686 GNU/Linux
============ filesystem used
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      123G  3.4G  114G   3% /
udev           devtmpfs   10M     0   10M   0% /dev
tmpfs          tmpfs     202M  4.9M  197M   3% /run
tmpfs          tmpfs     503M  8.0K  503M   1% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     503M     0  503M   0% /sys/fs/cgroup
tmpfs          tmpfs     101M   16K  101M   1% /run/user/1000
test filename:
test_file_harddisk_bench.rkzP6IMmcvNBWfgEM31HZtM3OdUzgFgg
============ harddisk bench: test write
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.02797 s, 1.0 GB/s

real    0m1.031s
user    0m0.008s
sys     0m0.704s
============ harddisk bench: test read
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.770018 s, 1.4 GB/s

real    0m0.798s
user    0m0.000s
sys     0m0.288s


user@suse:~> ~/scripts/bench_harddisk_sequential.sh
============ OS and Kernel
Linux suse 4.4.21-69-default #1 SMP Tue Oct 25 10:58:20 UTC 2016 (9464f67) x86_64 x86_64 x86_64 GNU/Linux
============ filesystem used
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  484M     0  484M    0% /dev
tmpfs          tmpfs     492M   80K  492M    1% /dev/shm
tmpfs          tmpfs     492M  8.0M  484M    2% /run
tmpfs          tmpfs     492M     0  492M    0% /sys/fs/cgroup
/dev/sda2      btrfs      41G  6.0G   34G   16% /
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/mariadb
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/pgsql
/dev/sda2      btrfs      41G  6.0G   34G   16% /opt
/dev/sda2      btrfs      41G  6.0G   34G   16% /.snapshots
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/machines
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/log
/dev/sda2      btrfs      41G  6.0G   34G   16% /usr/local
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/mailman
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/libvirt/images
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/crash
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/named
/dev/sda2      btrfs      41G  6.0G   34G   16% /boot/grub2/i386-pc
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/mysql
/dev/sda2      btrfs      41G  6.0G   34G   16% /tmp
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/tmp
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/cache
/dev/sda2      btrfs      41G  6.0G   34G   16% /boot/grub2/x86_64-efi
/dev/sda2      btrfs      41G  6.0G   34G   16% /srv
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/opt
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/spool
/dev/sda3      xfs        85G  1.1G   84G    2% /home
tmpfs          tmpfs      99M     16K   99M    1% /run/user/483
tmpfs          tmpfs      99M       0   99M    0% /run/user/1000
test filename:
test_file_harddisk_bench.YKmq07oAhg9txwN2TBxpnryTN1SNv25z
============ harddisk bench: test write
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6924 s, 91.8 MB/s

real    0m11.694s
user    0m0.000s
sys     0m0.746s
============ harddisk bench: test read
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.8167 s, 99.3 MB/s

real    0m11.059s
user    0m0.006s
sys     0m0.529s

Harddisk Benchmark Small Files read and write

luckily (thanks archive.org!) I could recover the python source of this benchmark…

here it is: bench_harddisk_small_files.py.txt (one will have to have python installed and rename it to *.py)

unfortunately:

  1. one will need python installed
  2. one will have to run this as root!

from the Author: “I wrote a small Benchmark (source), to find out, what file system performs best with hundred thousands of small files:” (results from 2012)

# what this benchmark does: create 300,000 files of roughly 1024 bytes each with random content

filecount = 300000
filesize = 1024

ll test
-rw-r--r--. 1 root root 1.4K May 12 14:35 9997
-rw-r--r--. 1 root root 1.2K May 12 14:35 99970
-rw-r--r--. 1 root root  956 May 12 14:35 99971
-rw-r--r--. 1 root root 1.3K May 12 14:35 99972
-rw-r--r--. 1 root root 1.5K May 12 14:35 99973
-rw-r--r--. 1 root root 1007 May 12 14:35 99974
-rw-r--r--. 1 root root  809 May 12 14:35 99975
-rw-r--r--. 1 root root 1.5K May 12 14:35 99976

cat 9997
8o@▒▒mJj8▒▒s▒▒▒ˊ▒               r▒▒BnryqJ$a▒▒5▒▒j▒J▒z+▒▒▒▒f▒jc`G▒2H▒W▒&▒▒▒▒▒▒-T d
▒▒2Ă▒▒▒Y▒▒x▒'%▒▒1▒M}/H▒▒▒G▒XL2▒▒▒▒      ▒▒▒%o;!▒▒#F▒Sf▒U▒\28=MFۛ▒<u▒▒▒#▒W)1-▒5▒R▒v[ڱ▒X▒$▒▒▒q7f▒&▒▒▒▒~D▒▒▒▒M▒8▒▒  ▒ٍ
b< ▒▒+:▒▒Q▒kl▒ G/▒al]0&▒bt▒YU▒$▒-▒▒▒6▒▒▒&▒=▒5▒;zoND9▒Kjn=8N▒t4▒h▒▒▒2dQο▒▒▒&▒u▒>u▒▒8-▒=
%C▒̃▒▒}▒▒▒▒▒▒1▒▒₽O=7▒▒q▒▒▒"▒▒?▒▒7[Kd▒N▒؎x▒f▒:▒Prx▒▒?▒▒▒g▒▒T▒▒{                          )▒a▒%▒▒▒▒*&e▒▒Oc▒6
▒V▒▒ȯ_}▒▒MK▒▒▒=6gY▒c▒ ▒|"S▒A▒Ս▒Жe!▒3em▒t`X▒_▒▒/L4H▒d▒▒+▒k▒Ɩ/s
ن▒DІЧ▒▒e▒*ߵ▒▒y▒▒B▒▒0▒;Gf[▒▒5T▒▒▒▒0}▒Ul▒u▒▒▒▒▒^ٸ▒▒]#▒▒▒L▒x
x▒WI▒%▒>g▒▒▒▒λ&▒▒.▒▒u▒ab▒▒▒M▒Rb▒▒▒▒▒▒▒F▒̸;▒Ǩ▒Nc(▒w▒▒▒9▒▒+▒^▒R▒Ys▒z▒d▒!fS▒▒I▒▒q▒▒:▒h%4▒▒gs▒E▒IN^▒▒;▒▒▒▒\]Ѣ▒o▒▒▒▒l▒K%▒C▒▒>▒▒x▒▒ ▒'?`▒&▒B޳ps|tfwR▒Ls▒
                                                                                                                                                 n▒▒▒▒P▒        ▒▒аj▒▒▒▒▒3▒Ǩ5▒y▒▒HLȦ▒▒ÿ-O▒

python -V; # test what version of python is installed
Python 2.7.9
wget https://dwaves.de/wp-content/uploads/2015/03/bench_harddisk_small_files.py_.txt; # download it

mv bench_harddisk_small_files.py_.txt bench_harddisk_small_files.py; # rename it

chmod u+x bench_harddisk_small_files.py; # make it runnable

# again one should probably have htop with I/O columns configured
# on a second terminal to get a better impression of the actual speeds of one's harddisks

./bench_harddisk_small_files.py; # run it (will take a while ;)

    create 300000 files (512B to 1536B) with data from /dev/urandom
    rewrite 30000 random files and change the size
    read 30000 sequential files
    read 30000 random files

    delete all files

    sync and drop cache after every step

Results (average time in seconds, lower = better):

Using Linux Kernel version 3.1.7
Btrfs:
    create:    53 s
    rewrite:    6 s
    read sq:    4 s
    read rn:  312 s
    delete:   373 s

ext4:
    create:    46 s
    rewrite:   18 s
    read sq:   29 s
    read rn:  272 s
    delete:    12 s

ReiserFS:
    create:    62 s
    rewrite:  321 s
    read sq:    6 s
    read rn:  246 s
    delete:    41 s

XFS:
    create:    68 s
    rewrite:  430 s
    read sq:   37 s
    read rn:  367 s
    delete:    36 s

ext4 on Debian8.7 VM inside Hyper-V

uname -a; # OS and kernel version
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.39-1+deb8u2 (2017-03-07) i686 GNU/Linux

lsmod |grep hyper; # loaded guest-integration modules
hyperv_keyboard        12523  0
hyperv_fb              16882  2
hid_hyperv             12725  0
hid                    81008  2 hid_hyperv,hid_generic
hv_vmbus               28013  6 hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc

modinfo hv_storvsc
filename:       /lib/modules/3.16.0-4-686-pae/kernel/drivers/scsi/hv_storvsc.ko
description:    Microsoft Hyper-V virtual storage driver
license:        GPL
alias:          vmbus:4acc9b2f6900f34ab76b6fd0be528cda
alias:          vmbus:32264132cb86a2449b5c50d1417354f5
alias:          vmbus:d96361baa104294db60572e2ffb1dc7f
depends:        hv_vmbus,scsi_mod
intree:         Y
vermagic:       3.16.0-4-686-pae SMP mod_unload modversions 686
parm:           storvsc_ringbuffer_size:Ring buffer size (bytes) (int)

python -V; # python version
Python 2.7.9

df -hT; # filesystem
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      123G  3.4G  114G   3% /
udev           devtmpfs   10M     0   10M   0% /dev
tmpfs          tmpfs     202M  4.9M  197M   3% /run
tmpfs          tmpfs     503M  8.0K  503M   1% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     503M     0  503M   0% /sys/fs/cgroup
tmpfs          tmpfs     101M   16K  101M   1% /run/user/1000

# 1st run
./bench_harddisk_small_files.py; # python based small file read/write benchmark

create test folder:
14.9104299545

create files:
48.2082529068

rewrite files:
27.523827076

read linear:
49.7503108978

read random:
334.094204903

delete all files:
159.576689005

# 2nd run
 time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0390877723694

create files:
43.998016119

rewrite files:
28.5503320694

read linear:
47.4662151337

read random:
332.29962492

delete all files:
154.593928099

real    10m32.788s
user    0m6.708s
sys     0m52.472s

ext3 on Debian8.7 VM

newly created dynamically sized harddisk

total runtime real 18m16.547s

root@debian:/mnt/sdb1# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sdb1      ext3      9.8G   23M  9.2G   1% /mnt/sdb1

root@debian:/mnt/sdb1# time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0149130821228

create files:
268.888262987

rewrite files:
96.2679889202

read linear:
67.2693779469

read random:
469.799334049

delete all files:
165.838576078

real    18m16.547s
user    0m7.720s
sys     1m1.924s

btrfs on Debian8.7 VM

total runtime real 13m52.389s

is easy to install and get going:

fdisk /dev/sdb; # create a new partition

apt-get install btrfs-tools; # install software

mkfs.btrfs -L "HD_NAME" /dev/sdb2; # format partition
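
before benchmarking, the fresh filesystem of course still has to be mounted (mount point is an example):

mkdir -p /mnt/sdb2; # create mount point
mount /dev/sdb2 /mnt/sdb2; # mount the new btrfs partition
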
root@debian:/mnt/sdb1# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sdb2      btrfs      10G  512K  8.0G   1% /mnt/sdb2

cd /mnt/sdb2;

time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0400609970093

create files:
122.124496222

rewrite files:
217.362040043

read linear:
29.3034169674

read random:
203.46718502

delete all files:
239.52281785

real    13m52.389s
user    0m8.192s
sys     1m12.172s

XFS on CentOS7 VM

[root@centos scripts]#
su; # become root

uname -a; # OS and kernel version
# Linux centos 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

lsmod |grep hyper; # loaded guest-integration modules
hyperv_keyboard        12777  0
hid_hyperv             13108  0
hyperv_fb              17769  1
hv_vmbus              397185  6 hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc

modinfo hv_storvsc
filename:       /lib/modules/3.10.0-514.el7.x86_64/kernel/drivers/scsi/hv_storvsc.ko
description:    Microsoft Hyper-V virtual storage driver
license:        GPL
rhelversion:    7.3
srcversion:     409D5193D2BFDCE53342BAC
alias:          vmbus:4acc9b2f6900f34ab76b6fd0be528cda
alias:          vmbus:32264132cb86a2449b5c50d1417354f5
alias:          vmbus:d96361baa104294db60572e2ffb1dc7f
depends:        hv_vmbus
intree:         Y
vermagic:       3.10.0-514.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        D4:88:63:A7:C1:6F:CC:27:41:23:E6:29:8F:74:F0:57:AF:19:FC:54
sig_hashalgo:   sha256
parm:           logging_level:Logging level, 0 - None, 1 - Error (default), 2 - Warning. (int)
parm:           storvsc_ringbuffer_size:Ring buffer size (bytes) (int)
parm:           storvsc_vcpus_per_sub_channel:int
parm:           vcpus_per_sub_channel:Ratio of VCPUs to subchannels

python -V; # python version
# Python 2.7.5

df -hT; # filesystem 
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        50G  1.3G   49G    3% /
devtmpfs            devtmpfs  482M     0  482M    0% /dev
tmpfs               tmpfs     493M     0  493M    0% /dev/shm
tmpfs               tmpfs     493M  6.5M  486M    2% /run
tmpfs               tmpfs     493M     0  493M    0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  138M  877M   14% /boot
/dev/mapper/cl-home xfs        74G  1.1G   73G    2% /home
tmpfs               tmpfs      99M     0   99M    0% /run/user/1000

./bench_harddisk_small_files.py; # python based small file read/write benchmark

create test folder:
64.8546340466

create files:
198.000077963

rewrite files:
266.317943096

read linear:
236.771200895

read random:
411.996505022

delete all files:
185.099548817

XFS on SUSE12 VM

suse:~ # uname -a;
Linux suse 4.4.21-69-default #1 SMP Tue Oct 25 10:58:20 UTC 2016 (9464f67) x86_64 x86_64 x86_64 GNU/Linux

suse:~ # lsmod |grep hyper; # loaded guest-integration modules
hyperv_keyboard        16384  0
hid_hyperv             16384  0
hyperv_fb              20480  2
hv_vmbus              614400  6 hyperv_keyboard,hv_netvsc,hid_hyperv,hv_utils,hyperv_fb,hv_storvsc

modinfo hv_storvsc
filename:       /lib/modules/4.4.21-69-default/kernel/drivers/scsi/hv_storvsc.ko
description:    Microsoft Hyper-V virtual storage driver
license:        GPL
srcversion:     E2B033E0227A4BC7E67F05B
alias:          vmbus:4acc9b2f6900f34ab76b6fd0be528cda
alias:          vmbus:32264132cb86a2449b5c50d1417354f5
alias:          vmbus:d96361baa104294db60572e2ffb1dc7f
depends:        hv_vmbus,scsi_mod,scsi_transport_fc
supported:      external
intree:         Y
vermagic:       4.4.21-69-default SMP mod_unload modversions
signer:         SUSE Linux Enterprise Secure Boot Signkey
sig_key:        3F:B0:77:B6:CE:BC:6F:F2:52:2E:1C:14:8C:57:C7:77:C7:88:E3:E7
sig_hashalgo:   sha256
parm:           logging_level:Logging level, 0 - None, 1 - Error (default), 2 - Warning. (int)
parm:           storvsc_ringbuffer_size:Ring buffer size (bytes) (int)
parm:           storvsc_vcpus_per_sub_channel:Ratio of VCPUs to subchannels (int)

suse:~ # python -V; # python version
Python 2.7.9
suse:~ # df -hT; # filesystem
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  484M     0  484M    0% /dev
tmpfs          tmpfs     492M   80K  492M    1% /dev/shm
tmpfs          tmpfs     492M  8.0M  484M    2% /run
tmpfs          tmpfs     492M     0  492M    0% /sys/fs/cgroup
/dev/sda2      btrfs      41G  6.0G   34G   16% /
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/mariadb
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/pgsql
/dev/sda2      btrfs      41G  6.0G   34G   16% /opt
/dev/sda2      btrfs      41G  6.0G   34G   16% /.snapshots
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/machines
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/log
/dev/sda2      btrfs      41G  6.0G   34G   16% /usr/local
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/mailman
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/libvirt/images
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/crash
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/named
/dev/sda2      btrfs      41G  6.0G   34G   16% /boot/grub2/i386-pc
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/lib/mysql
/dev/sda2      btrfs      41G  6.0G   34G   16% /tmp
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/tmp
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/cache
/dev/sda2      btrfs      41G  6.0G   34G   16% /boot/grub2/x86_64-efi
/dev/sda2      btrfs      41G  6.0G   34G   16% /srv
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/opt
/dev/sda2      btrfs      41G  6.0G   34G   16% /var/spool
/dev/sda3      xfs        85G  1.1G   84G    2% /home <- XFS
tmpfs          tmpfs      99M     16K   99M    1% /run/user/483
tmpfs          tmpfs      99M       0   99M    0% /run/user/1000

/home/user/scripts/bench_harddisk_small_files.py; # python based small file read/write benchmark

create test folder:
0.00403809547424

create files:
77.052531004

rewrite files:
246.546470881

read linear:
228.652709007

read random:
414.306197166

delete all files:
122.249341011

EXT3 on CentOS7 VM

# total runtime!!! 13m38.279s !!!

[root@centos /]# df -hT
/dev/sdb2           ext3      9.8G   23M  9.2G    1% /mnt/sdb2

 time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0190849304199

create files:
66.4392571449

rewrite files:
70.9426250458

read linear:
65.0275251865

read random:
419.262803078

delete all files:
175.337391138

# total runtime:
real    13m38.279s
user    0m10.147s
sys     1m5.594s

EXT4 on CentOS7 VM

total runtime real 16m29.313s

[root@centos /]# df -hT
/dev/sdb1           ext4      9.8G   37M  9.2G    1% /mnt/sdb1

[root@centos sdb1]# time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.109514951706

create files:
156.300988913

rewrite files:
47.8779950142

read linear:
138.047894955

read random:
430.069261074

delete all files:
175.878396988

real    16m29.313s
user    0m11.569s
sys     1m11.605s

BTRFS on SUSE12 VM

total runtime real 14m55.076s

suse:/var/tmp # df -hT
/dev/sda2      btrfs      41G  6.0G   33G   16% /tmp

suse:/var/tmp # time /home/user/scripts/bench_harddisk_small_files.py

create test folder:
0.0999500751495

create files:
54.8537290096

rewrite files:
259.263952971

read linear:
31.106744051

read random:
215.53058815

delete all files:
294.819381952

real    14m55.076s
user    0m8.432s
sys     1m4.469s

hdparm

# test read performance
hdparm -t /dev/sda1;
# debian under virtualbox on windows 7
# supermicro RAID 5 LSI 1068e controller:
1073741824 bytes (1.1 GB) copied, 3.6072 s, 298 MB/s

About BTRFS:

… SUSE12 is using btrfs for system-root / and XFS for /home.

btrfs supports cool features such as snapshots – started by Oracle, now developed further and used by Facebook – https://btrfs.wiki.kernel.org/index.php/Main_Page
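
creating and listing snapshots is a one-liner each, e.g. (paths are examples; the source must be a btrfs subvolume):

btrfs subvolume snapshot -r / /.snapshots/root-before-update; # read-only snapshot of /
btrfs subvolume list /; # list subvolumes and snapshots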

https://en.wikipedia.org/wiki/Btrfs

Developer(s): Facebook, Fujitsu, Fusion-IO, Intel, Linux Foundation, Netgear, Oracle Corporation, Red Hat, STRATO AG, and SUSE[1]

Btrfs is intended to address the lack of pooling, snapshots, checksums, and integral multi-device spanning in Linux file systems.[8] Chris Mason, the principal Btrfs author, has stated that its goal was “to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what’s being used and makes it more reliable.”[14]
