yes, it is possible to boot dwaves’ ideal linux (Debian based) USB stick on a QNAP NAS with an (Intel) Atom CPU

now the user realizes: damn. no mdadm package?

but how to get packages for an older Debian release (Debian 10 “buster”, which has moved to the archive)?

  1. modify sources.list to point at archive.debian.org
  2. vim /etc/apt/sources.list
    deb https://archive.debian.org/debian buster main contrib non-free
    deb-src https://archive.debian.org/debian buster main contrib non-free
    
    deb https://archive.debian.org/debian buster/updates main contrib non-free
    deb-src https://archive.debian.org/debian buster/updates main contrib non-free
    
    deb https://archive.debian.org/debian buster-updates main contrib non-free
    deb-src https://archive.debian.org/debian buster-updates main contrib non-free
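side note: the Release files on archive.debian.org are frozen, so their Valid-Until dates have long passed and apt may refuse them with an “Release file … is expired” error. in that case the check can be disabled (the 99… file name below is just a convention, pick any name):

```shell
# one-shot, for a single apt run:
apt -o Acquire::Check-Valid-Until=false update
# or permanently, via a config snippet:
echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/99archive-valid-until
```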
  3. make sure the CMOS clock is set correctly, or apt will fail with certificate errors such as: Certificate verification failed: The certificate is NOT trusted. The certificate chain uses not yet valid certificate. Could not handshake: Error in the certificate verification.
  4. # try this first
    • ntpdate known.timeserver.ip.address; # could be the user's router or internet service provider
      # if that fails because ntpdate is not installed
      # set date
      date +%Y%m%d -s "20260404";
      # and time manually
      date +%T -s "15:41:00";
      # sync system-time to cmos-hardware-realtime-clock (BIOS)
      hwclock --systohc;
      # now try 
      apt update; # again should work now without certificate error
      apt install mdadm; # should work now too
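another option when ntpdate is missing and typing the date by hand is too imprecise: any reachable web server already sends the current time in its Date: response header (RFC 2822 format). a sketch, assuming curl is installed and deb.debian.org is reachable:

```shell
# grab the Date: header; HTTP header lines end in \r\n, hence the tr
http_date=$(curl -sI https://deb.debian.org | sed -n 's/^[Dd]ate: *//p' | tr -d '\r')
echo "server says: $http_date"
date -s "$http_date"   # set the system time (needs root)
hwclock --systohc      # and write it back to the CMOS/RTC
```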
      
      
    • once mdadm is installed (the install also rebuilds the initramfs in /boot), it is smart enough to automatically find all relevant harddisks and assemble their raid configuration
    • # check out the partition layout
      lsblk -fs
      NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
      sdc1 ext2 aabf92c8-14ce-46a8-9af2-9e27b1a5412e 1.7M 16% /media/user/aabf92c8-14ce-46a8-9af2-9e27b1a5412
      └─sdc 
      sdc2 ext2 QTS_BOOT_PART2 9a980f7b-6178-4e2c-99c2-d1221e8f28c8 59.6M 74% /media/user/QTS_BOOT_PART2
      └─sdc 
      sdc3 ext2 QTS_BOOT_PART3 92305f5b-ff61-4486-ac95-b35bad2aa80a 59.6M 74% /media/user/QTS_BOOT_PART3
      └─sdc 
      sdc4 
      └─sdc 
      sdc5 ext2 ba5d6eee-24a9-43bc-b861-8f4ebfbcd2c7 7.8M 0% /media/user/ba5d6eee-24a9-43bc-b861-8f4ebfbcd2c
      └─sdc 
      sdc6 ext2 c5a5224c-2a7e-46b4-b98c-e87b07fd65f9 3.8M 1% /media/user/c5a5224c-2a7e-46b4-b98c-e87b07fd65f
      └─sdc 
      sdd1 ext2 47576b45-eeeb-4346-8604-11cbe59a64d4 364.7M 18% /boot
      └─sdd 
      sdd2 
      └─sdd 
      sdf2 linux_raid_membe 5 b7bb1fe9-a07d-aad1-ca67-34c0a0a0810a 
      └─sdf 
      sdg2 linux_raid_membe 5 b7bb1fe9-a07d-aad1-ca67-34c0a0a0810a 
      └─sdg 
      md0 ext4 2fa11214-bf45-42f5-a7ef-a018f14dbce5 
      ├─sda3 linux_raid_membe 0 350f993c-c104-3c94-8d2a-51f51ca4b4f8 
      │ └─sda 
      ├─sdb3 linux_raid_membe 0 350f993c-c104-3c94-8d2a-51f51ca4b4f8 
      │ └─sdb 
      ├─sde3 linux_raid_membe 0 350f993c-c104-3c94-8d2a-51f51ca4b4f8 
      │ └─sde 
      ├─sdf3 linux_raid_membe 0 350f993c-c104-3c94-8d2a-51f51ca4b4f8 
      │ └─sdf 
      └─sdg3 linux_raid_membe 0 350f993c-c104-3c94-8d2a-51f51ca4b4f8 
      └─sdg 
      md9 ext3 e4a7a8c4-0dc7-4fdd-b0f0-bfda01a8077d 
      ├─sda1 linux_raid_membe 9 4a19d3cc-159e-8eec-7be9-11627a31cca1 
      │ └─sda 
      ├─sdb1 linux_raid_membe 9 4a19d3cc-159e-8eec-7be9-11627a31cca1 
      │ └─sdb 
      ├─sde1 linux_raid_membe 9 4a19d3cc-159e-8eec-7be9-11627a31cca1 
      │ └─sde 
      ├─sdf1 linux_raid_membe 9 4a19d3cc-159e-8eec-7be9-11627a31cca1 
      │ └─sdf 
      └─sdg1 linux_raid_membe 9 4a19d3cc-159e-8eec-7be9-11627a31cca1 
      └─sdg 
      md127 ext3 44497a9d-0036-4af3-94e2-ee6168d1754f 
      ├─sda4 linux_raid_membe 4076f589-ce64-e13a-e543-c6a7ea5c417c 
      │ └─sda 
      ├─sdb4 linux_raid_membe 4076f589-ce64-e13a-e543-c6a7ea5c417c 
      │ └─sdb 
      ├─sde4 linux_raid_membe 4076f589-ce64-e13a-e543-c6a7ea5c417c 
      │ └─sde 
      ├─sdf4 linux_raid_membe 4076f589-ce64-e13a-e543-c6a7ea5c417c 
      │ └─sdf 
      └─sdg4 linux_raid_membe sda4 9349eeb2-2582-9918-f1d0-7a95467a157c 
      
      cat /proc/mdstat 
      
      Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10] 
      md126 : active (auto-read-only) raid1 sdf2[2] sdg2[0]
            530128 blocks super 1.0 [2/2] [UU]
            
      md127 : active (auto-read-only) raid1 sdf4[5] sdg4[0] sdb4[7] sda4[8] sde4[6]
            458880 blocks super 1.0 [5/5] [UUUUU]
            bitmap: 0/8 pages [0KB], 32KB chunk
      
      md0 : active raid6 sdf3[1] sdg3[0] sdb3[3] sda3[4] sde3[2]
            5855836608 blocks super 1.0 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
            
      md9 : active (auto-read-only) raid1 sdf1[5] sdg1[0] sdb1[7] sda1[8] sde1[6]
            530112 blocks super 1.0 [5/5] [UUUUU]
            bitmap: 0/9 pages [0KB], 32KB chunk
      
      md5 : inactive sdb2[3](S) sda2[2](S) sde2[4](S)
            1590144 blocks
             
      unused devices: <none>
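/proc/mdstat is terse; for per-array detail (state, member roles, event counters) mdadm itself can be asked — a sketch, using the device names from the output above:

```shell
# full status of the big data array
mdadm --detail /dev/md0
# inspect the on-disk raid superblock of a single member partition
mdadm --examine /dev/sdf3
```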
      
      
      # check: there should already be info about all detected raid arrays
      vim /etc/mdadm/mdadm.conf
      
      # it should show something like this
      
      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # !NB! Run update-initramfs -u after updating this file.
      # !NB! This will ensure that initramfs has an uptodate copy.
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default (built-in), scan all partitions (/proc/partitions) and all
      # containers for MD superblocks. alternatively, specify devices to scan, using
      # wildcards if desired.
      #DEVICE partitions containers
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR root
      # definitions of existing MD arrays
      ARRAY /dev/md5 UUID=e7922c31:12b62ae1:f6220798:5207bcf9
      spares=3
      ARRAY /dev/md/9 metadata=1.0 UUID=4a19d3cc:159e8eec:7be91162:7a31cca1 name=9
      ARRAY /dev/md/5 metadata=1.0 UUID=b7bb1fe9:a07daad1:ca6734c0:a0a0810a name=5
      ARRAY /dev/md/0 metadata=1.0 UUID=350f993c:c1043c94:8d2a51f5:1ca4b4f8 name=0
      ARRAY /dev/md/sda4 metadata=1.0 UUID=9349eeb2:25829918:f1d07a95:467a157c name=sda4
      # This configuration was auto-generated on Sat, 04 Apr 2026 15:54:05 +0200 by mkconf
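if mdadm.conf is empty or missing entries (e.g. because the arrays were assembled manually), the ARRAY lines can be regenerated from the currently running arrays; a sketch:

```shell
# append ARRAY definitions for all currently assembled arrays
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so the arrays are also assembled at boot
# (this is the "!NB! Run update-initramfs -u" note from the file header)
update-initramfs -u
```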
    • # create mountpoint 
      mkdir /media/user/md0
      # mount the raid array (it's called raid5 everywhere, but it's actually raid6!)
      # raid6 with 5 disks = 3 disks worth of data, 2 disks worth of parity = tolerates 2 disks failing
      # raid5 with 5 disks = 4 disks worth of data, 1 disk worth of parity = tolerates 1 disk failing
      # tip: if unsure about the array's health, mount read-only first: mount -o ro /dev/md0 /media/user/md0
      mount /dev/md0 /media/user/md0
      
      df -h /dev/md0
      Filesystem Size Used Avail Use% Mounted on
      /dev/md0 5.5T 3.6T 1.9T 67% /media/user/md0
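quick sanity check that df's 5.5T matches the raid6 math (block count taken from /proc/mdstat above, which counts 1 KiB blocks):

```shell
# md0: 5855836608 one-KiB blocks (from /proc/mdstat)
blocks=5855836608
# usable size in TiB (integer division, rounds down from ~5.45 TiB)
echo "usable: $((blocks / 1024 / 1024 / 1024)) TiB"
# raid6 with 5 disks stores data on 3 of them -> per-disk share:
echo "per data disk: $((blocks / 3 / 1024 / 1024)) GiB"   # ~1861 GiB, i.e. 2 TB drives
```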
      
      # check status of all raids
      cat /proc/mdstat
      Personalities : [raid1] [raid6] [raid5] [raid4] 
      md127 : active (auto-read-only) raid1 sdg4[0] sda4[8] sdb4[7] sde4[6] sdf4[5]
            458880 blocks super 1.0 [5/5] [UUUUU]
            bitmap: 0/8 pages [0KB], 32KB chunk
      
      md0 : active raid6 sdg3[0] sda3[4] sdb3[3] sde3[2] sdf3[1]
            5855836608 blocks super 1.0 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
            
      md9 : active (auto-read-only) raid1 sdg1[0] sda1[8] sdb1[7] sde1[6] sdf1[5]
            530112 blocks super 1.0 [5/5] [UUUUU]
            bitmap: 0/9 pages [0KB], 32KB chunk
      
      md5 : inactive sdb2[3](S) sda2[2](S) sde2[4](S)
            1590144 blocks
             
      unused devices: <none>
      
      
      # check what data is there
      cd /media/user/md0
      ls -lah
      # check where the bulk of the data is
      du -hs *
      
      # start a screen session (so the process keeps running even if the ssh connection drops)
      apt install screen
      screen -S transfer
      # detach with Ctrl-a d, reattach later with: screen -r transfer
      # then rsync the data over ssh to another machine's harddisks, as a backup before installing Debian on the QNAP NAS :D
      rsync -r -vv --update --progress /media/user/md0/ user@192.168.4.120:/media/user/md0/qnap 
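once the rsync has finished, a checksum dry-run is a cheap way to verify the copy. a sketch of the pattern on throwaway local directories (for the real thing, swap in /media/user/md0/ and the user@192.168.4.120 target from above):

```shell
# demo dirs standing in for source and backup target
src=$(mktemp -d); dst=$(mktemp -d)
echo "some data" > "$src/file.txt"
rsync -a "$src/" "$dst/"   # -a also preserves mtimes, which keeps verification clean
# -c compares full checksums, --dry-run transfers nothing;
# if no files are itemized, source and copy match
rsync -ac --dry-run --itemize-changes "$src/" "$dst/"
```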

how to install Debian?

 

Links:

Debian on Qnap Turbo Station TS-219P NAS – how to setup RAID1 in under 5min
