this script is for cron-job style regular measurement of the available (download) bandwidth, by downloading a 100 MByte test file (https://dwaves.de/testfile)

the test file was created like this (credits):

head -c 100M /dev/zero | openssl enc -aes-128-cbc -pass pass:"$(head -c 20 /dev/urandom | base64)" > testfile

(the file is full of pseudo-random data, so compression will not speed up the download much; the result is more realistic for content that does not compress well, e.g. images)
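
a quick sanity check of that claim (an illustrative sketch, not part of the original setup): compress the file and compare the sizes, the encrypted data should barely shrink at all:

gzip -9 -c testfile | wc -c  # compressed size: roughly the same as (or even larger than) the original
ls -l testfile               # original size for comparison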

sha512sum of the test file: https://dwaves.de/testfile.sha512sum.txt

HINT! curl writes its progress meter with carriage returns (\r) instead of newlines, which is why vim won't display the line breaks of the log file correctly; use cat to view it!
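
to view the log in an editor anyway, the carriage returns can be converted to newlines first:

tr '\r' '\n' < bench_inet_bandwidth.log > bench_inet_bandwidth.readable.log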

testing bandwidth: bench_inet_bandwidth.sh

# requirements: curl
su - root;
apt update;
apt install curl;

# now the script:
#!/bin/bash
LOGFILE=bench_inet_bandwidth.log
ISP=StarLink

# tidy up: remove the test file from the previous run
rm -f testfile

echo "=== $ISP inet benchmark (downloading 100MByte of random data) started on: $(date '+%Y-%m-%d-%H:%M:%S') ===" >> $LOGFILE;
curl -o testfile https://dwaves.de/testfile 2>&1 | tee -a ${LOGFILE}; 

# output results
cat $LOGFILE;


# manual testing of file integrity
wget https://dwaves.de/testfile.sha512sum.txt
sha512sum -c testfile.sha512sum.txt
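
this check could also be appended to the script itself, so every run verifies the download (a minimal sketch, assuming the checksum file stays available at the same URL):

# fetch the checksum file and verify the downloaded testfile, logging the result
wget -q -O testfile.sha512sum.txt https://dwaves.de/testfile.sha512sum.txt
sha512sum -c testfile.sha512sum.txt >> $LOGFILE 2>&1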


# example output:

cat bench_inet_bandwidth.log 

=== StarLink inet benchmark (downloading 100MByte of random data) started on: 2021-05-11-12:04:09 ===
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 99.9M  100 99.9M    0     0  12.5M      0  0:00:07  0:00:07 --:--:-- 16.9M
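
for unattended cron runs it might be preferable to log only a timestamp and the average speed instead of the full progress meter; a minimal sketch using curl's --write-out variable speed_download (average download speed in bytes per second):

# log only date and average download speed, nothing else
SPEED=$(curl -s -o /dev/null -w '%{speed_download}' https://dwaves.de/testfile)
echo "$(date '+%Y-%m-%d-%H:%M:%S') $ISP average download speed: $SPEED bytes/sec" >> $LOGFILE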

testing long term reliability: bench_inet_reliability.sh

this script is intended for long-term testing of the reliability of a network connection

it quits after the given number of hours (e.g. 24) and thus does not spam the logs forever


#!/bin/bash
# $1 = ip to test
# $2 = how many hours to test

LOGFILE="test_connection_$1.log"

HOURS=$2

MINUTES=$((HOURS*60))

MIN_CURRENT=0

echo "====== connection test for $1 started on: $(date '+%Y-%m-%d-%H-%M')" > "$LOGFILE";
echo "script is set to run for $HOURS h (= $MINUTES min)" >> "$LOGFILE";

time for MIN_CURRENT in $(seq 1 $MINUTES);
do
	ping -c 60 "$1" >> "$LOGFILE"; # ping sends one packet per second, so one loop takes ~1 min
	printf "\n--- MIN_CURRENT $MIN_CURRENT of $MINUTES --- timestamp: $(date '+%Y-%m-%d %H:%M:%S') \n" >> "$LOGFILE";
done;
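
the script blindly trusts its arguments; a guard at the top would make it fail early when called without them (a sketch):

# abort early if the ip or hours argument is missing
if [ -z "$1" ] || [ -z "$2" ]; then
	echo "usage: $0 <ip-to-test> <hours-to-test>";
	exit 1;
fi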

usage example:

# ping-test the connection for 24 hours and log it to a file
/scripts/bench_inet_reliability.sh 192.168.0.223 24

# how to view progress?
# open up a new terminal (Ctrl+Shift+N in Debian GNU/Linux with MATE)
tail -f test_connection_192.168.0.223.log

# how to generate a nice summary showing whether any packets got lost
grep -e "MIN_CURRENT" -e "packet loss" test_connection_192.168.0.223.log

60 packets transmitted, 60 received, 0% packet loss, time 137ms
--- MIN_CURRENT 1 of 1440 --- timestamp: 2021-01-21 11:35:57 
60 packets transmitted, 60 received, 0% packet loss, time 142ms
--- MIN_CURRENT 2 of 1440 --- timestamp: 2021-01-21 11:36:56 
60 packets transmitted, 60 received, 0% packet loss, time 142ms
--- MIN_CURRENT 3 of 1440 --- timestamp: 2021-01-21 11:37:55 
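
to count how many one-minute intervals actually lost packets, filter out the 0% lines (a sketch; the leading space in " 0%" keeps "100% packet loss" lines from matching):

grep "packet loss" test_connection_192.168.0.223.log | grep -cv " 0% packet loss"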

# to terminate the script before it finishes (pkill -f matches against the full command line)
pkill -f bench_inet_reliability

possible improvements:

the log could also be compressed afterwards to save disk space:

tar cvzf $LOGFILE.tar.gz $LOGFILE;
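
since it is a single file, plain gzip achieves the same without tar (note: gzip replaces the original file with $LOGFILE.gz):

gzip -9 $LOGFILE;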

