tape: is it worth the hassle?
which is better (+ cheaper + faster + easier): 1 PetaByte of tape or 1 PetaByte of harddisk storage server? the race is on!
this is what it might look like from the inside: a lot of dirt and dust
theoretical pros and cons (if the software stack works):
- [?] less energy consumption
  - a TL4000 with 48x LTO5 tapes can store 72TBytes (uncompressed) while using ~60Watts as measured (the datasheet says 168Watts? but imho it's way less)
  - 72TBytes of storage = only 4x Seagate Exos X18 18TB harddisks (in 2025), each using ~10Watts = 40Watts!
  - (in real life the user would want data redundancy: store all data on 2x tapes or 2x harddisks (RAID1 or RAID10), which would halve the storage capacity of the tape library and double the energy consumption of the harddisks; not ideal, but necessary if the data is of value)
- [-] way, way slower than harddisks
  - it takes about 20 seconds for the robot to change to a new tape… then it takes time for the drive to seek to the file the user wants to retrieve
  - so it is definitely only good for data:
    - that can be written and read sequentially
    - that is not accessed often = where access time (up to multiple minutes?) does not matter
- encryption:
  - [+] disk encryption with LUKS (dm-crypt, often on top of lvm2) is widespread and relatively easy to set up; it's open source, so no worries about licenses (see the sketch after this list)
  - [-] tape: not so easy, and drive-level encryption may even be a (costly) extra!
- [?] longevity
  - data on tapes should be copied to new tapes every 10 years, or risk data loss
  - for comparison: the 4TB HGST HUS726040AL drives here (HGST used to be Hitachi, later sold to WD) are still going strong after 10 years!
- [?] cleaning?
  - tape drives need regular cleaning via a cleaning cartridge = even more overhead and maintenance
- [?] cheaper storage per GigaByte or TeraByte than harddisks
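for reference, a minimal sketch of the disk-side encryption mentioned above (assuming /dev/sdX is a spare disk that may be wiped; the device name, the mapper name backup_crypt, and the gpg-to-tape one-liner are illustrative, not a vetted backup pipeline):

su - root
# WARNING: luksFormat destroys all data on /dev/sdX! (placeholder device)
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX backup_crypt    # creates /dev/mapper/backup_crypt
mkfs.ext4 /dev/mapper/backup_crypt
mount /dev/mapper/backup_crypt /mnt
# tape workaround without drive-level encryption (no license needed):
# encrypt the tar stream in software before it hits the tape
tar -cz /home/user/testdata | gpg --symmetric -o - | dd of=/dev/st0 bs=512k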
=== 2024-02-16 ebay.de prices ===
lto4 (800GB+x) (https://www.ebay.de/sch/80133/i.html?_from=R40&_nkw=lto4+tape)
echo "scale=5; (8.99)/(0.8)"|bc
11.23750 per 1TB
lto5 (1.5TB+x) (https://www.ebay.de/sch/80133/i.html?_from=R40&_nkw=lto5+tape)
echo "scale=5; (93.50+5.49)/(1.5*5)"|bc
13.19866 per 1TB
lto6 (2.5TB+x) (https://www.ebay.de/sch/80133/i.html?_from=R40&_nkw=lto6+tape)
echo "scale=5; (112.26+16.30)/(2.5*5)"|bc
10.28480 per 1TB
lto7 (6TB, with a better write head even 9TB!) (https://www.ebay.de/sch/80133/i.html?_from=R40&_nkw=lto7+tape)
echo "scale=5; (67.10)/(6)"|bc
11.18333 per 1TB
lto8 (12TB+x) (https://www.ebay.de/sch/80133/i.html?_from=R40&_nkw=lto8+tape)
echo "scale=5; (77.07)/(12)"|bc
6.42250 per 1TB
# LTO9 drives are VERY expensive: 4000 to 8000 bucks!
# (could buy 22x Seagate Exos X18 18TB = 396TBytes for that money!)
lto9 (18TB+x) (https://www.ebay.de/sch/80133/i.html?_from=R40&_nkw=lto9+tape)
echo "scale=5; (115.93+3.49)/(18)"|bc
6.63444 per 1TB
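the bc one-liners above all do the same price-per-TB division, so it can be wrapped in a small helper (a sketch; the function name price_per_tb is made up for illustration):

# usage: price_per_tb "<total price in EUR>" "<total capacity in TB>"
price_per_tb() { echo "scale=5; ($1)/($2)" | bc; }
price_per_tb "93.50+5.49" "1.5*5"   # 5x LTO5 -> 13.19866 EUR per TB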
# tested on Debian12
# install basic tools
su - root
apt install mt-st mtx lsscsi procinfo sg3-utils sysstat unzip
# or on fedora, rhel redhat and other rpm-based distros
yum install mt-st mtx lsscsi
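a quick sanity check that everything landed in $PATH (a minimal sketch; tapeinfo ships with the mtx package, and some of these tools live in /usr/sbin, so run it as root):

command -v mt mtx lsscsi tapeinfo || echo "something is missing"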
- the mt command stands for "magnetic tape control"
- it is a command-line utility used to control and manipulate magnetic tape devices on Unix-like operating systems
- the mtx command stands for "magtape transport and control system"; it is a command-line utility for controlling robotic tape libraries, including loading and unloading tapes from the library slots and querying the status of the library and its drives
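in short (a minimal sketch; the actual device paths /dev/st0 and /dev/sg2 depend on the system, see the lsscsi output below):

mt -f /dev/st0 status    # talks to the tape DRIVE (position, online, write protection)
mtx -f /dev/sg2 status   # talks to the CHANGER/library (which slot holds which tape)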
how to connect the tape library?
- via a SAS controller or a FibreChannel controller
lspci; # this is the SAS controller, so make sure to get SAS tape drives!
07:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
lsscsi # this is how a Dell TL4000 + LTO5 drive (1x installed of 2x possible) shows up
[0:0:0:0] cd/dvd  QEMU QEMU DVD-ROM 2.5+ /dev/sr0
[6:0:0:0] tape    IBM ULT3580-HH5 H971   /dev/st0
[6:0:0:1] mediumx IBM 3573-TL     E.70   /dev/sch0
[6:0:1:0] tape    IBM ULT3580-HH5 H971   /dev/st1 <- the same tape drive connected via a 2nd cable (redundancy?)
[6:0:1:1] mediumx IBM 3573-TL     E.70   /dev/sch1
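side note: for every /dev/stX the kernel also creates a non-rewinding alias /dev/nstX. /dev/st0 auto-rewinds the tape when the device is closed, /dev/nst0 stays positioned after the written file, which is what you want when writing several tar archives back to back (a minimal sketch; /home/user/dataset1 and dataset2 are placeholder paths):

tar -cvf /dev/nst0 /home/user/dataset1   # becomes file 0 on the tape
tar -cvf /dev/nst0 /home/user/dataset2   # becomes file 1, right behind it
mt -f /dev/nst0 rewind                   # rewind explicitly when done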
specs of the drive: ULT3580-HH5
lsscsi --list
[10:0:0:0] tape IBM ULT3580-HH5 H971 /dev/st0
  device_blocked=0 iocounterbits=32 iodone_cnt=0x1f ioerr_cnt=0x0 iorequest_cnt=0x1f queue_depth=254 queue_type=simple scsi_level=7 state=running timeout=900 type=1
[10:0:0:1] mediumx IBM 3573-TL E.70 /dev/sch0
  device_blocked=0 iocounterbits=32 iodone_cnt=0x15 ioerr_cnt=0x1 iorequest_cnt=0x15 queue_depth=254 queue_type=simple scsi_level=6 state=running timeout=30 type=8
[10:0:1:0] tape IBM ULT3580-HH5 H971 /dev/st1
  device_blocked=0 iocounterbits=32 iodone_cnt=0x1f ioerr_cnt=0x0 iorequest_cnt=0x1f queue_depth=254 queue_type=simple scsi_level=7 state=running timeout=900 type=1
[10:0:1:1] mediumx IBM 3573-TL E.70 /dev/sch1
  device_blocked=0 iocounterbits=32 iodone_cnt=0x15 ioerr_cnt=0x1 iorequest_cnt=0x15 queue_depth=254 queue_type=simple scsi_level=6 state=running timeout=30 type=8

tapeinfo -f /dev/sg1
Product Type: Tape Drive
Vendor ID: 'IBM     '
Product ID: 'ULT3580-HH5     '
Revision: 'H971'
Attached Changer API: No
SerialNumber: '10WT036509'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 0
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x58
Density Code: 0x58
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 1
MaxPartitions: 1
Partition0: 380
Partition1: 14440

tapeinfo -f /dev/sg2
Product Type: Medium Changer
Vendor ID: 'IBM     '
Product ID: '3573-TL         '
Revision: 'E.70'
Attached Changer API: No
SerialNumber: '00X4U78T4452_LL0'
SCSI ID: 0
SCSI LUN: 1
Ready: yes

tapeinfo -f /dev/sg3
Product Type: Tape Drive
Vendor ID: 'IBM     '
Product ID: 'ULT3580-HH5     '
Revision: 'H971'
Attached Changer API: No
SerialNumber: '10WT036509'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x58
Density Code: 0x58
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 1
MaxPartitions: 1
Partition0: 380
Partition1: 14440

tapeinfo -f /dev/sg4
Product Type: Medium Changer
Vendor ID: 'IBM     '
Product ID: '3573-TL         '
Revision: 'E.70'
Attached Changer API: No
SerialNumber: '00X4U78T4452_LL0'
SCSI ID: 1
SCSI LUN: 1
Ready: yes

=== terminology ===
tl = tape library
sg = scsi generic device
st = scsi tape device
mt = magnetic tape
mtx = magtape transport and control
Data Transfer Element = a tape drive
Storage Element = a slot holding a tape
Medium Changer = Storage Changer = the tape library itself + the robot
sch = Storage CHanger?
LTFS = Linear Tape File System; an LTFS-formatted LTO tape appears as a file system similar to a USB disk drive
control path = the location where a server sends SCSI Medium Changer directives to control a specific logical library

Status Bit  Description
BOT         Beginning Of Tape (beginning of first file)
EOT         End Of Tape (either physical end of tape, or end of data area)
EOF         at the end of some file
WR_PROT     Write Protected; either the drive or the tape is in write-protected mode, or the current drive only supports this tape type in read-only mode
ONLINE      drive loaded and ready to work
DR_OPEN     Drive Open; the drive is empty, or the tape has been ejected and hangs at the door (in that case, you can use *mt load* to load the tape)
IM_REP_EN   Immediate Response Mode; the drive buffers data and acknowledges read and write operations before data has hit the tape
SM          tape is on a marker (Set Mark); may work only on DDS drives(?)
EOD         End Of Data; the tape is at the end of the recorded area; depending on the drive, may be the same thing as EOT

=== how long? ===
most manufacturers estimate that LTO tapes can hold data losslessly for 15 to 30 years

=== what is for what? ===
mt is used to control the tape drive (tape movements, ejection, and drive settings)
mtx is used to control tape libraries (moving cartridges from slots to drives, etc.)

=== examples ===
hostnamectl; # tested on
Static hostname: ubuntuxts
Virtualization: kvm
Operating System: Ubuntu 22.04.4 LTS
Kernel: Linux 5.15.0-107-generic
Architecture: x86-64
Hardware Vendor: GIGABYTE
Hardware Model: G292-Z20-00
Firmware Version: R23

hostnamectl; # also tested on
Static hostname: gigabyte-srv1-debian
Operating System: Debian GNU/Linux 12 (bookworm)
Kernel: Linux 6.1.0-21-amd64
Architecture: x86-64
Hardware Vendor: GIGABYTE
Hardware Model: G292-Z20-00
Firmware Version: R23

lspci |grep LSI
01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

wget https://www.supermicro.com/support/faqs/data_lib/FAQ_9633_SAS2IRCU_Phase_5.0-5.00.00.00.zip

SAS2IRCU: No command specified.
sas2ircu <controller #> <command> <parameters> ...
where <controller #> is: a number between 0 and 255
where <command> is:
  DISPLAY  - display controller, volume and physical device info
  LIST     - lists all available LSI adapters (does not need ctlr #)
  CREATE   - create an IR volume
  DELETE   - set controller configuration to factory defaults
  HOTSPARE - make drive a hot spare
  STATUS   - display current volume status info
  CONSTCHK - start Consistency Check operation on the specified IR volume
  ACTIVATE - activate an inactive IR volume
  LOCATE   - locate a disk drive on an enclosure
  LOGIR    - upload or clear IR log data
  BOOTIR   - select an IR boot volume as primary boot device
  BOOTENCL - select an enclosure/bay as primary boot device
where <parameters> are: command specific values; enter "sas2ircu <command>" to get command specific help

./sas2ircu list
LSI Corporation SAS2 IR Configuration Utility.
 Adapter      Vendor Device                    SubSys SubSys
 Index  Type  ID     ID     Pci Address        Ven ID Dev ID
 -----  ----- ------ ------ -----------------  ------ ------
   0   SAS2008 1000h  72h   00h:01h:00h:00h    1028h  1f1ch
SAS2IRCU: Utility Completed Successfully.

./sas2ircu 0 display
LSI Corporation SAS2 IR Configuration Utility.
Version 5.00.00.00 (2010.02.09)
Copyright (c) 2009 LSI Corporation. All rights reserved.
Read configuration has been initiated for controller 0
------------------------------------------------------------------------
Controller information
------------------------------------------------------------------------
Controller type                : SAS2008
BIOS version                   : 7.11.10.00
Firmware version               : 7.15.08.00
Channel description            : 1 Serial Attached SCSI
Initiator ID                   : 0
Maximum physical devices       : 125
Concurrent commands supported  : 2879
Slot                           : 4
Segment                        : 0
Bus                            : 1
Device                         : 0
Function                       : 0
------------------------------------------------------------------------
IR Volume information
------------------------------------------------------------------------
------------------------------------------------------------------------
Physical device information
------------------------------------------------------------------------
Initiator at ID #0
Device is a Sequential access device
  Enclosure #                  : 1
  Slot #                       : 0
  State                        : Standby (SBY)
  Manufacturer                 : IBM
  Model Number                 : ULT3580-HH5
  Firmware Revision            : H971
  Serial No                    : j46X6081
  Protocol                     : SAS
  Drive Type                   : SAS_HDD
Device is a Sequential access device
  Enclosure #                  : 1
  Slot #                       : 4
  State                        : Standby (SBY)
  Manufacturer                 : IBM
  Model Number                 : ULT3580-HH5
  Firmware Revision            : H971
  Serial No                    : j46X6081
  Protocol                     : SAS
  Drive Type                   : SAS_HDD
------------------------------------------------------------------------
Enclosure information
------------------------------------------------------------------------
  Enclosure#                   : 1
  Logical ID                   : 5b083fe0:d1ea6600
  Numslots                     : 8
  StartSlot                    : 0
------------------------------------------------------------------------
SAS2IRCU: Command DISPLAY Completed Successfully.
SAS2IRCU: Utility Completed Successfully.

# under debian Drive2 ULT3580-HH5: 10WT022982 connected via: 2 (LUN), identifies as:
# [0:0:0:0] = /dev/st0 = /dev/sg0

# known to be good media:
# read test
time tar -vxzf /dev/st0
# write test
time tar -vczf /dev/st0 /home/user/testdata

tapeinfo -f /dev/st1
Product Type: Tape Drive
Vendor ID: 'IBM     '
Product ID: 'ULT3580-HH5     '
Revision: 'H971'
Attached Changer API: No
SerialNumber: '10WT036509'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x58
Density Code: 0x58
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 1
MaxPartitions: 1
Partition0: 380
Partition1: 14440

lsscsi --generic
[1:0:0:0] cd/dvd  QEMU QEMU DVD-ROM 2.5+ /dev/sr0  /dev/sg0
[7:0:0:0] tape    IBM ULT3580-HH5 H971   /dev/st0  /dev/sg1 <- TapeDriveA0 of LibraryA
[7:0:0:1] mediumx IBM 3573-TL     E.70   /dev/sch0 /dev/sg2 <- LibraryA
[7:0:1:0] tape    IBM ULT3580-HH5 H971   /dev/st1  /dev/sg3 <- TapeDriveA1 of LibraryA

# also show LUNs
lsscsi --classic
Attached devices:
Host: scsi0 Channel: 00 Target: 00 Lun: 00
  Vendor: IBM Model: ULT3580-HH5 Rev: H971
  Type: Sequential-Access ANSI SCSI revision: 06
Host: scsi0 Channel: 00 Target: 00 Lun: 01
  Vendor: IBM Model: 3573-TL Rev: E.70
  Type: Medium Changer ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Target: 01 Lun: 00
  Vendor: IBM Model: ULT3580-HH5 Rev: H971
  Type: Sequential-Access ANSI SCSI revision: 06

find /dev|grep "9:0"
ll /dev/char/9:0
lrwxrwxrwx 1 root root 6 2024-05-16 20:25 /dev/char/9:0 -> ../st0
find /dev|grep "9:1"
ll /dev/char/9:1
lrwxrwxrwx 1 root root 6 2024-05-16 20:25 /dev/char/9:1 -> ../st1

lsscsi --kname
[0:0:0:0] tape    IBM ULT3580-HH5 H971 /dev/st0
[0:0:0:1] mediumx IBM 3573-TL     E.70 /dev/sch0
[0:0:1:0] tape    IBM ULT3580-HH5 H971 /dev/st1
[N:0:1:1] disk KINGSTON SKC3000D2048G__1  /dev/nvme0n1
[N:1:1:1] disk KINGSTON SKC3000D2048G__1  /dev/nvme1n1
[N:2:1:1] disk KINGSTON SEDC1500M1920G__1 /dev/nvme2n1

# also try
lsscsi --list
lsscsi --protection
lsscsi --transport
[1:0:0:0] cd/dvd  sata:ATA QEMU DVD-ROM QM00001 /dev/sr0
[7:0:0:0] tape    sas:0x5000e1114ff7f005 /dev/st0
[7:0:0:1] mediumx sas:0x5000e1114ff7f005 /dev/sch0
[7:0:1:0] tape    sas:0x5000e1114ff7f002 /dev/st1
lsscsi --verbose

tapeinfo -f /dev/sg0
Product Type: CD-ROM

tapeinfo -f /dev/sg1
Product Type: Tape Drive <- TapeDriveA0
Vendor ID: 'IBM     '
Product ID: 'ULT3580-HH5     '
Revision: 'H971'
Attached Changer API: No
SerialNumber: '10WT022982'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 0
SCSI LUN: 0
Ready: no

tapeinfo -f /dev/sg2
Product Type: Medium Changer <- aka the tape library itself + the robot that puts tapes into the drives
Vendor ID: 'IBM     '
Product ID: '3573-TL         '
Revision: 'E.70'
Attached Changer API: No
SerialNumber: '00X4U78T4452_LL0'
SCSI ID: 0
SCSI LUN: 1
Ready: yes

tapeinfo -f /dev/sg3
Product Type: Tape Drive <- TapeDriveA1
Vendor ID: 'IBM     '
Product ID: 'ULT3580-HH5     '
Revision: 'H971'
Attached Changer API: No
SerialNumber: '10WT036509'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 1
SCSI LUN: 0
Ready: no

mtx -f /dev/sg2 status
Storage Changer /dev/sg2:2 Drives, 46 Slots ( 3 Import/Export )
Data Transfer Element 0:Empty
Data Transfer Element 1:Empty
Storage Element 1:Full :VolumeTag=A04867L5
Storage Element 2:Full :VolumeTag=A05751L5
Storage Element 3:Full :VolumeTag=A05975L5
Storage Element 4:Full :VolumeTag=A05549L5
Storage Element 5:Full :VolumeTag=A07151L5
Storage Element 6:Full :VolumeTag=A0572%K5
Storage Element 7:Full :VolumeTag=A07200L5
Storage Element 8:Full :VolumeTag=A05648L5
Storage Element 9:Full :VolumeTag=N00224L5
Storage Element 10:Full :VolumeTag=A05886L5
Storage Element 11:Empty
...
Storage Element 43:Empty
Storage Element 44 IMPORT/EXPORT:Empty
Storage Element 45 IMPORT/EXPORT:Empty
Storage Element 46 IMPORT/EXPORT:Empty

mtx -f /dev/sch0 status
Storage Changer /dev/sch0:2 Drives, 46 Slots ( 3 Import/Export )
Data Transfer Element 0:Empty
Data Transfer Element 1:Empty
Storage Element 1:Full :VolumeTag=A04867L5
...
Storage Element 43:Empty
Storage Element 44 IMPORT/EXPORT:Empty
Storage Element 45 IMPORT/EXPORT:Empty
Storage Element 46 IMPORT/EXPORT:Empty
# (same inventory as via /dev/sg2 above, the changer is just addressed via its second device node)

# rewind tape to start
mt -f /dev/st0 rewind
# before unloading the tape
mt -f /dev/st0 offline
# loading tape again
mt -f /dev/st0 load
# how to load tape of slot 1 into drive0
mtx -f /dev/sg2 load 1 0
Loading media from Storage Element 1 into drive 0...done
# how to load tape of slot 2 into drive1
mtx -f /dev/sg2 load 2 1
Loading media from Storage Element 2 into drive 1...done

# after loading a tape, check drive0 status
mt -f /dev/st0 status
drive type = 114 <- 114 in decimal is 0x72 in hex, so the value indicates the tape drive is a "generic ANSI SCSI-2 tape unit", which is probably what most modern tape drives are (src: https://unix.stackexchange.com/questions/759138/how-do-i-interpret-the-output-from-mt-status)
drive status = 1476395008
# example decoding from the src above: a status of 419430400 is 0x19000000 in hex, so it is a sum of:
# 0x10000000 = DDS setmark
# 0x08000000 = DDS EOD (end-of-data)
# 0x01000000 = GMT_ONLINE (= there is a tape inserted and the drive is ready to receive commands)
sense key error = 0 <- communication error status
residue count = 0
file number = 0 <- on which file the tape is currently positioned
block number = 0 <- block number on which the tape is currently positioned

# after loading a tape, check drive1 status
mt -f /dev/st1 status
drive type = 114
drive status = 1476395008
sense key error = 0
residue count = 0
file number = 0
block number = 0

# run again with tape inserted
tapeinfo -f /dev/sg1
Product Type: Tape Drive
Vendor ID: 'IBM     '
Product ID: 'ULT3580-HH5     '
Revision: 'H971'
Attached Changer API: No
SerialNumber: '10WT022982'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 0
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x58
Density Code: 0x58 <- 0x58 = LTO-5; tells you what kind of tape/drive it is
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 1

# list all tapes of the library = show inventory
mtx -f /dev/sg1 status
Storage Changer /dev/sg1:2 Drives, 46 Slots ( 3 Import/Export )
Data Transfer Element 0:Full (Storage Element 10 Loaded):VolumeTag = A07200L5
Data Transfer Element 1:Empty
Storage Element 1:Full :VolumeTag=A05727L5
Storage Element 2:Full :VolumeTag=P0010SL5
Storage Element 3:Full :VolumeTag=P0007SL5
Storage Element 4:Full :VolumeTag=A05886L5
Storage Element 5:Full :VolumeTag=P0009SL5
Storage Element 6:Full :VolumeTag=A05751L5
Storage Element 7:Full :VolumeTag=A05975L5
Storage Element 8:Full :VolumeTag=P0008SL5
Storage Element 9:Full :VolumeTag=P0011SL5
Storage Element 10:Empty
Storage Element 11:Full :VolumeTag=A05549L5
Storage Element 12:Full :VolumeTag=A07151L5
Storage Element 13:Full :VolumeTag=A05648L5
Storage Element 14:Full :VolumeTag=P0006SL5
Storage Element 15:Full :VolumeTag=P0001SL5
Storage Element 16:Full :VolumeTag=P0005SL5
Storage Element 17:Full :VolumeTag=P0003SL5
Storage Element 18:Full :VolumeTag=P0002SL5
Storage Element 19:Full :VolumeTag=A04867L5
Storage Element 20:Full :VolumeTag=P0004SL5
Storage Element 21:Full :VolumeTag=N00224L5
Storage Element 22:Empty
Storage Element 23:Empty
Storage Element 24:Empty
Storage Element 25:Empty
Storage Element 26:Empty
Storage Element 27:Empty
Storage Element 28:Empty
Storage Element 29:Empty
Storage Element 30:Empty
Storage Element 31:Empty
Storage Element 32:Empty
Storage Element 33:Empty
Storage Element 34:Empty
Storage Element 35:Empty
Storage Element 36:Empty
Storage Element 37:Empty
Storage Element 38:Empty
Storage Element 39:Empty
Storage Element 40:Empty
Storage Element 41:Empty
Storage Element 42:Empty
Storage Element 43:Empty
Storage Element 44 IMPORT/EXPORT:Empty
Storage Element 45 IMPORT/EXPORT:Empty
Storage Element 46 IMPORT/EXPORT:Empty

# drive0: rewind = go to the beginning of the tape
mt -f /dev/st0 rewind
# how to write (backup) testdata to tape
time tar -vczf /dev/st0 /home/user/testdata
# how to restore testdata from tape
time tar -vxzf /dev/st0
# forward space count files; tape is positioned on the first block of the next file
mt -f /dev/st1 fsf
# rewind and unload
mt -f /dev/st1 rewoffl

while true; do tapestat; sleep 1; clear; done
Tape: r/s w/s kB_read/s kB_wrtn/s %Rd %Wr %Oa Rs/s Ot/s
st0     0   0         0         0   0   0   0    0    0
st1     0   0         0         0   0   0   0    0    0

# rewriting tapes with the same data = refreshes the magnetic signal and prevents data loss = retensioning = rerecording of the tape
# transcription = copying one tape to another; has also been referred to as refreshing
mt -f /dev/rmt/1 retension

=== density codes ===
code description
0x00 default
0x01 NRZI (800 bpi)
0x02 PE (1600 bpi)
0x03 GCR (6250 bpi)
0x04 QIC-11
0x05 QIC-45/60 (GCR, 8000 bpi)
0x06 PE (3200 bpi)
0x07 IMFM (6400 bpi)
0x08 GCR (8000 bpi)
0x09 GCR (37871 bpi)
0x0a MFM (6667 bpi)
0x0b PE (1600 bpi)
0x0c GCR (12960 bpi)
0x0d GCR (25380 bpi)
0x0f QIC-120 (GCR 10000 bpi)
0x10 QIC-150/250 (GCR 10000 bpi)
0x11 QIC-320/525 (GCR 16000 bpi)
0x12 QIC-1350 (RLL 51667 bpi)
0x13 DDS (61000 bpi)
0x14 EXB-8200 (RLL 43245 bpi)
0x15 EXB-8500 or QIC-1000
0x16 MFM 10000 bpi
0x17 MFM 42500 bpi
0x18 TZ86
0x19 DLT 10GB
0x1a DLT 20GB
0x1b DLT 35GB
0x1c QIC-385M
0x1d QIC-410M
0x1e QIC-1000C
0x1f QIC-2100C
0x20 QIC-6GB
0x21 QIC-20GB
0x22 QIC-2GB
0x23 QIC-875
0x24 DDS-2
0x25 DDS-3
0x26 DDS-4 or QIC-4GB
0x27 Exabyte Mammoth
0x28 Exabyte Mammoth-2
0x29 QIC-3080MC
0x30 AIT-1 or MLR3
0x31 AIT-2
0x32 AIT-3 / SLR7
0x33 SLR6
0x34 SLR100
0x40 DLT1 40 GB, or Ultrium
0x41 DLT 40GB, or Ultrium2
0x42 LTO-2
0x44 LTO-3
0x45 QIC-3095-MC (TR-4)
0x46 LTO-4
0x47 TR-5 / DDS-5
0x48 Quantum SDLT220
0x49 Quantum SDLT320
0x51 IBM 3592 J1A
0x52 IBM 3592 E05
0x58 LTO-5
0x5a LTO-6
0x5c LTO-7
0x5d LTO-M8
0x5e LTO-8
0x80 DLT 15GB uncomp. or Ecrix / VXA-1
0x81 DLT 15GB compressed / VXA-2
0x82 DLT 20GB uncompressed / VXA-3 / VXA-320
0x83 DLT 20GB compressed
0x84 DLT 35GB uncompressed
0x85 DLT 35GB compressed
0x86 DLT1 40 GB uncompressed
0x87 DLT1 40 GB compressed
0x88 DLT 40GB uncompressed
0x89 DLT 40GB compressed
0x8c EXB-8505 compressed
0x90 SDLT110 uncompr/EXB-8205 compr
0x91 SDLT110 compressed
0x92 SDLT160 uncompressed
0x93 SDLT160 compressed
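to check which generation of tape is loaded without scrolling through the whole tapeinfo output, grep for the density code and look it up in the table above (a minimal sketch; /dev/sg1 is the drive's sg device as in the examples above):

tapeinfo -f /dev/sg1 | grep -i 'Density Code'
# Density Code: 0x58 -> LTO-5 according to the table above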
how to ltfs?
- LTFS is like ext4, but for tape
- LTFS allows mounting a tape like an ext4-formatted harddisk (or partition)
- how to build LTFS driver from scratch Debian + Ubuntu: https://dwaves.de/2024/05/03/gnu-linux-ubuntu-22-04-and-debian12-how-to-build-ltfs-from-src-aka-scratch/
- it might be wise to enable additional Logical Block Protection (LBP) (if available; LTO5 (with proper firmware) and later generations):
==== LTFS howto ====
ltfs -o device_list
Device Name = /dev/sg2 (0.0.1.0), Vendor ID = IBM, Product ID = ULT3580-HH5, Serial Number = 10WT036509, Product Name =[ULT3580-HH5].
Device Name = /dev/sg0 (0.0.0.0), Vendor ID = IBM, Product ID = ULT3580-HH5, Serial Number = 10WT036509, Product Name =[ULT3580-HH5].

# ltfs format tape in TapeDriveA0
mkltfs -f -d /dev/st0
# ltfs format tape in TapeDriveA1
mkltfs -f -d /dev/st1

# sample output
LTFS15000I Starting mkltfs, LTFS version 2.5.0.0 (Prelim), log level 2.
LTFS15041I Launched by "mkltfs -f -d /dev/st0".
LTFS15042I This binary is built for Linux (x86_64).
LTFS15043I GCC version is 12.2.0.
LTFS17087I Kernel version: Linux version 6.1.0-34-amd64 (debian-kernel@lists.debian.org) (gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.135-1 (2025-04-25) i386.
LTFS17089I Distribution: PRETTY_NAME="Debian GNU/Linux 12 (bookworm)".
LTFS15003I Formatting device '/dev/st0'.
LTFS15004I LTFS volume blocksize: 524288.
LTFS15005I Index partition placement policy: None.
LTFS11337I Update index-dirty flag (1) - NO_BARCODE (0x0x559e77c99990).
LTFS17085I Plugin: Loading "sg" tape backend.
LTFS30209I Opening a device through sg-ibmtape driver (/dev/st0).
LTFS30250I Opened the SCSI tape device 0.0.0.-1 (/dev/st0).
LTFS30207I Vendor ID is IBM.
LTFS30208I Product ID is ULT3580-HH5.
LTFS30214I Firmware revision is H971.
LTFS30215I Drive serial is 10WT036509.
LTFS30285I The reserved buffer size of /dev/st0 is 1048576.
LTFS30294I Setting up timeout values from RSOC.
LTFS17160I Maximum device block size is 1048576.
LTFS11330I Loading cartridge.
LTFS30252I Logical block protection is disabled.
LTFS11332I Load successful.
LTFS17157I Changing the drive setting to write-anywhere mode.
LTFS15049I Checking the medium (load).
LTFS30252I Logical block protection is disabled.
LTFS15010I Creating data partition b on SCSI partition 1.
LTFS15011I Creating index partition a on SCSI partition 0.
LTFS17165I Resetting the medium's capacity proportion.
LTFS11097I Partitioning the medium.
LTFS30263I MODESELECT returns Mode Parameters Rounded (-20101) /dev/st0.
LTFS11100I Writing label to partition b.
LTFS11278I Writing index to partition b.
LTFS30205I READ_ATTR (0x8c) returns -20501.
LTFS30263I READ_ATTR returns Invalid Field in CDB (-20501) /dev/st0.
LTFS30233I Cannot read attribute (-20501).
LTFS11336I The attribute does not exist. Ignore the expected error.
LTFS17235I Writing index of NO_BARCODE to b (Reason: Format, 0 files) 10WT036509.
LTFS17236I Wrote index of NO_BARCODE (Gen = 1, Part = b, Pos = 5, 10WT036509).
LTFS11337I Update index-dirty flag (0) - NO_BARCODE (0x0x559e77c99990).
LTFS11100I Writing label to partition a.
LTFS11278I Writing index to partition a.
LTFS30205I READ_ATTR (0x8c) returns -20501.
LTFS30263I READ_ATTR returns Invalid Field in CDB (-20501) /dev/st0.
LTFS30233I Cannot read attribute (-20501).
LTFS11336I The attribute does not exist. Ignore the expected error.
LTFS17235I Writing index of NO_BARCODE to a (Reason: Format, 0 files) 10WT036509.
LTFS17236I Wrote index of NO_BARCODE (Gen = 1, Part = a, Pos = 5, 10WT036509).
LTFS15013I Volume UUID is: 38b0d1af-4c19-4f3b-8122-66353a6a1395.
LTFS15019I Volume capacity is 1425 GB.
LTFS30252I Logical block protection is disabled.
LTFS15024I Medium formatted successfully.
# https://github.com/amiaopensource/ltfs/tree/master?tab=readme-ov-file
# https://github.com/amiaopensource/ltfs/blob/master/README

==== how to mount LTFS tape ====
# it can be mounted like a harddisk (partition)
mt -f /dev/st0 rewind; # rewind again
mtx -f /dev/sg1 status|grep Loaded; # what tape is loaded?
Data Transfer Element 0:Full (Storage Element 2 Loaded):VolumeTag = P0010SL5
mkdir /media/user/P0010SL5; # create mount point
ltfs -o devname=/dev/st0 /media/user/P0010SL5; # mount
8d6 LTFS14000I LTFS starting, LTFS version 2.5.0.0 (Prelim), log level 2.
8d6 LTFS14058I LTFS Format Specification version 2.4.0.
8d6 LTFS14104I Launched by "ltfs -o devname=/dev/st0 /media/user/P0010SL5".
8d6 LTFS14105I This binary is built for Linux (x86_64).
8d6 LTFS14106I GCC version is 12.2.0.
8d6 LTFS17087I Kernel version: Linux version 6.1.0-34-amd64 (debian-kernel@lists.debian.org) (gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.135-1 (2025-04-25) i386.
8d6 LTFS17089I Distribution: PRETTY_NAME="Debian GNU/Linux 12 (bookworm)".
8d6 LTFS14063I Sync type is "time", Sync time is 300 sec.
8d6 LTFS17085I Plugin: Loading "sg" tape backend.
8d6 LTFS17085I Plugin: Loading "unified" iosched backend.
8d6 LTFS14095I Set the tape device write-anywhere mode to avoid cartridge ejection.
8d6 LTFS30209I Opening a device through sg-ibmtape driver (/dev/st0).
8d6 LTFS30250I Opened the SCSI tape device 0.0.0.-1 (/dev/st0).
8d6 LTFS30207I Vendor ID is IBM.
8d6 LTFS30208I Product ID is ULT3580-HH5.
8d6 LTFS30214I Firmware revision is H971.
8d6 LTFS30215I Drive serial is 10WT036509.
8d6 LTFS30285I The reserved buffer size of /dev/st0 is 1048576.
8d6 LTFS30294I Setting up timeout values from RSOC.
8d6 LTFS17160I Maximum device block size is 1048576.
8d6 LTFS11330I Loading cartridge.
8d6 LTFS30252I Logical block protection is disabled.
8d6 LTFS11332I Load successful.
8d6 LTFS17157I Changing the drive setting to write-anywhere mode.
8d6 LTFS11005I Mounting the volume from device.
8d6 LTFS30252I Logical block protection is disabled.
8d6 LTFS17227I Tape attribute: Vendor = IBM.
8d6 LTFS17227I Tape attribute: Application Name = LTFS.
8d6 LTFS17227I Tape attribute: Application Version = 2.5.0.0.
8d6 LTFS17227I Tape attribute: Medium Label = .
8d6 LTFS17228I Tape attribute: Text Localization ID = 0x81.
8d6 LTFS17227I Tape attribute: Barcode = .
8d6 LTFS17227I Tape attribute: Application Format Version = 2.4.0.
8d6 LTFS17228I Tape attribute: Volume Lock Status = 0x00.
8d6 LTFS17227I Tape attribute: Media Pool name = .
8d6 LTFS11031I Volume mounted successfully. NO_BARCODE : Gen = 1 / (a, 5) -> (b, 5) / 10WT036509.
8d6 LTFS14122I Index will not be captured.
8d6 LTFS14111I Initial setup completed successfully.
8d6 LTFS14112I Invoke 'mount' command to check the result of final setup.
8d6 LTFS14113I Specified mount point is listed if succeeded.

# test if mount was a success
df -h|grep P0010SL5
ltfs:/dev/st0 1.3T 3.0M 1.3T 1% /media/user/P0010SL5
# YESSSS! :D
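before ejecting the cartridge, the LTFS volume should be unmounted cleanly so the final index gets written to tape; a minimal sketch matching the example above (the slot/drive numbers are the ones mtx status reported):

umount /media/user/P0010SL5      # or: fusermount -u /media/user/P0010SL5
mt -f /dev/st0 rewoffl           # rewind + eject the cartridge from the drive
mtx -f /dev/sg1 unload 2 0       # put the tape from drive 0 back into slot 2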
==== misc ====
cat /var/log/kern.log|grep mpt2sas_cm0
2024-05-22T12:04:38.630236+02:00 gigabyte kernel: [ 1.729428] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (49192216 kB)
2024-05-22T12:04:38.630316+02:00 gigabyte kernel: [ 1.781117] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
2024-05-22T12:04:38.630316+02:00 gigabyte kernel: [ 1.781283] mpt2sas_cm0: MSI-X vectors supported: 1
2024-05-22T12:04:38.630317+02:00 gigabyte kernel: [ 1.781515] mpt2sas_cm0: 0 1 1
2024-05-22T12:04:38.630369+02:00 gigabyte kernel: [ 1.781731] mpt2sas_cm0: High IOPs queues : disabled
2024-05-22T12:04:38.630370+02:00 gigabyte kernel: [ 1.781962] mpt2sas_cm0: iomem(0x00000000fdb40000), mapped(0x0000000068e7e24d), size(65536)
2024-05-22T12:04:38.630373+02:00 gigabyte kernel: [ 1.782121] mpt2sas_cm0: ioport(0x0000000000004000), size(256)
2024-05-22T12:04:38.630398+02:00 gigabyte kernel: [ 1.882626] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
2024-05-22T12:04:38.630398+02:00 gigabyte kernel: [ 1.882792] mpt2sas_cm0: sending diag reset !!
2024-05-22T12:04:38.630464+02:00 gigabyte kernel: [ 3.114669] mpt2sas_cm0: diag reset: SUCCESS
2024-05-22T12:04:38.630465+02:00 gigabyte kernel: [ 3.146122] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
2024-05-22T12:04:38.630465+02:00 gigabyte kernel: [ 3.147380] mpt2sas_cm0: request pool(0x000000002e955be3) - dma(0x180400000): depth(2942), frame_size(128), pool_size(367 kB)
2024-05-22T12:04:38.630466+02:00 gigabyte kernel: [ 3.154033] mpt2sas_cm0: sense pool(0x00000000d62d330b) - dma(0x171180000): depth(2811), element_size(96), pool_size (263 kB)
2024-05-22T12:04:38.630519+02:00 gigabyte kernel: [ 3.173900] mpt2sas_cm0: reply pool(0x000000005c8cdb32) - dma(0x171200000): depth(3006), frame_size(128), pool_size(375 kB)
2024-05-22T12:04:38.630520+02:00 gigabyte kernel: [ 3.196737] mpt2sas_cm0: config page(0x000000009a888f25) - dma(0x171126000): size(512)
2024-05-22T12:04:38.630520+02:00 gigabyte kernel: [ 3.196739] mpt2sas_cm0: Allocated physical memory: size(6336 kB)
2024-05-22T12:04:38.630521+02:00 gigabyte kernel: [ 3.196741] mpt2sas_cm0: Current Controller Queue Depth(2808),Max Controller Queue Depth(2879)
2024-05-22T12:04:38.630525+02:00 gigabyte kernel: [ 3.196743] mpt2sas_cm0: Scatter Gather Elements per IO(128)
2024-05-22T12:04:38.630525+02:00 gigabyte kernel: [ 3.241712] mpt2sas_cm0: log_info(0x30030100): originator(IOP), code(0x03), sub_code(0x0100)
2024-05-22T12:04:38.630526+02:00 gigabyte kernel: [ 3.242687] mpt2sas_cm0: log_info(0x30030100): originator(IOP), code(0x03), sub_code(0x0100)
2024-05-22T12:04:38.630527+02:00 gigabyte kernel: [ 3.242691] mpt2sas_cm0: LSISAS2008: FWVersion(07.15.08.00), ChipRevision(0x03), BiosVersion(07.11.10.00)
2024-05-22T12:04:38.630527+02:00 gigabyte kernel: [ 3.262410] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(Raid,TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
2024-05-22T12:04:38.630531+02:00 gigabyte kernel: [ 3.264891] mpt2sas_cm0: sending port enable !!
2024-05-22T12:04:38.630543+02:00 gigabyte kernel: [ 4.774446] mpt2sas_cm0: hba_port entry: 000000008f32ab5f, port: 255 is added to hba_port list
2024-05-22T12:04:38.630543+02:00 gigabyte kernel: [ 4.779923] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x5b083fe0d1ea6600), phys(8)
2024-05-22T12:04:38.630544+02:00 gigabyte kernel: [ 4.781819] mpt2sas_cm0: handle(0x9) sas_address(0x5000e1114ff7f005) port_type(0x1)
2024-05-22T12:04:38.630544+02:00 gigabyte kernel: [ 4.783128] mpt2sas_cm0: handle(0xa) sas_address(0x5000e1114ff7f002) port_type(0x1)
2024-05-22T12:04:38.630545+02:00 gigabyte kernel: [ 10.914246] mpt2sas_cm0: port enable: SUCCESS

==== drive hot swap: SAS cable unplugged from portA and plugged into portB of the drive ====
dmesg|grep scsi
[ 1.927174] scsi host0: Fusion MPT SAS Host
[ 2.047921] scsi: waiting for bus probes to complete ...
[ 9.614945] scsi 0:0:0:0: Sequential-Access IBM ULT3580-HH5 H971 PQ: 0 ANSI: 6
[ 9.615594] scsi 0:0:0:0: SSP: handle(0x0009), sas_addr(0x5000e1114ff7f002), phy(3), device_name(0x11e1005001f0f74f)
[ 9.616284] scsi 0:0:0:0: enclosure logical id (0x5b083fe0d1ea6600), slot(4)
[ 9.616758] scsi 0:0:0:0: qdepth(254), tagged(1), scsi_level(7), cmd_que(1)
[ 9.620607] scsi 0:0:0:0: TLR Enabled
[ 9.637473] scsi 0:0:0:1: Medium Changer IBM 3573-TL E.70 PQ: 0 ANSI: 5
[ 9.638118] scsi 0:0:0:1: SSP: handle(0x0009), sas_addr(0x5000e1114ff7f002), phy(3), device_name(0x11e1005001f0f74f)
[ 9.638813] scsi 0:0:0:1: enclosure logical id (0x5b083fe0d1ea6600), slot(4)
[ 9.639273] scsi 0:0:0:1: qdepth(254), tagged(1), scsi_level(6), cmd_que(1)
[ 9.667059] scsi 0:0:1:0: Sequential-Access IBM ULT3580-HH5 H971 PQ: 0 ANSI: 6
[ 9.667697] scsi 0:0:1:0: SSP: handle(0x000a), sas_addr(0x5000e1114ff7f003), phy(7), device_name(0x11e1005001f0f74f)
[ 9.668385] scsi 0:0:1:0: enclosure logical id (0x5b083fe0d1ea6600), slot(0)
[ 9.668866] scsi 0:0:1:0: qdepth(254), tagged(1), scsi_level(7), cmd_que(1)
[ 9.672462] scsi 0:0:1:0: TLR Enabled
[ 9.688657] scsi 0:0:1:1: Medium Changer IBM 3573-TL E.70 PQ: 0 ANSI: 5
[ 9.689294] scsi 0:0:1:1: SSP: handle(0x000a), sas_addr(0x5000e1114ff7f003), phy(7), device_name(0x11e1005001f0f74f)
[ 9.694088] scsi 0:0:1:1: enclosure logical id (0x5b083fe0d1ea6600), slot(0)
[ 9.694577] scsi 0:0:1:1: qdepth(254), tagged(1), scsi_level(6), cmd_que(1)
[ 9.721286] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
[ 9.778755] st 0:0:0:0: Attached scsi tape st0
[ 9.864820] ch 0:0:0:1: Attached scsi changer ch0
[ 9.865804] st 0:0:1:0: Attached scsi tape st1
[ 9.954943] ch 0:0:1:1: Attached scsi changer ch1
[ 9.955761] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
[ 10.004737] sr 1:0:0:0: Attached scsi CD-ROM sr0
[ 10.803477] st 0:0:0:0: Attached scsi generic sg0 type 1
[ 10.805257] ch 0:0:0:1: Attached scsi generic sg1 type 8
[ 10.812687] st 0:0:1:0: Attached scsi generic sg2 type 1
[ 10.817580] ch 0:0:1:1: Attached scsi generic sg3 type 8
[ 10.817936] sr 1:0:0:0: Attached scsi generic sg4 type 5

Service Dump from: 2024-05-22 14:28:24
--------------------------------------
Library Information:
--------------------
HW Revision : FF
Upgrade Personality : 06.10.05
HW Variant : 03
OS Major Version : 1
Firmware Revision : E.70
Firmware Build date : 10-06-2017
Robotic FW Revision : 3.20e
Robot FW Build date : 09-24-2012
Firmware Checksum : 9473
Bootcode Revision : 0.82
Vendor ID : IBM
Product ID : 3573-TL
Serial Number : X4U78T4452
Service Tag : DHZSXS1
Library WWI : 2000000E1114FF7F
Library height : 4U
Connection : LUN
Language : English
Network enabled : Yes
DHCP enabled : No
IP Address : 192.168.4.100
Netmask : 255.255.255.0
Gateway Address : 0.0.0.0
MAC Address : 000E1114FF7F
Network active : Yes
Hostname : odtape
Library Name : TL4000
Protocol Stack : IPv4 only
Speed : Auto
Duplex : Auto
SNMP enabled : Yes
SNMP Target 1 Adr. : 0.0.0.0 (SNMPv1)
SNMP Target 2 Adr. : 0.0.0.0 (SNMPv1)
SNMP Target 3 Adr. : 0.0.0.0 (SNMPv1)
SNMP Community Name : public
Number of Powerups : 78
Minutes of Operation: 4189978 (2909d, 16h, 58min)
Move count : 16877
Robotics Temperature: 35.4
Contr. Temperature : 31.1
Inventory valid : Yes
Robotic Conn. Speed : 57600 Baud
Barcode Reader : SE625
PLD Version : 0x33

LED Status:
Ready : On
Clean : Off
Attention : On
Error : Off

Drive No. 0 Information:
------------------------
Vendor ID : IBM
Product ID : ULT3580-HH5
Firmware Revision : H971
Load ID : A1700D77
Drive version : 01
Serial number : 10WT036509
Personality : 00 0D
Data compression : Yes
Interface type : SAS
Port A Identifier : 5000E1114FF7F002
Status : Not ready
Link : Not connected
Speed : -
Hashed SAS address : 000000
Port B Identifier : 5000E1114FF7F003
Status : Ready
Link : Connected
Speed : 6 Gb/s
Hashed SAS address : 87D7A5
Encryption method : Disabled
Key path : Default by method
BOP policy : Disabled
Density reporting : Other
Media loads : 11148
Cleaning operations : 305
Power on hours : 47756
Media motion hours : 12429
Tape meters : 143972588

Drive No. 1 Information: ( LUN )
------------------------
Vendor ID : IBM
Product ID : ULT3580-HH5
Firmware Revision : H971
Load ID : A1700D77
Drive version : 01
Serial number : 10WT022982
Personality : 00 0D
Data compression : Yes
Interface type : SAS
Port A Identifier : 5000E1114FF7F005
Status : Ready
Link : Connected
Speed : 6 Gb/s
Hashed SAS address : 306D0E
Port B Identifier : 5000E1114FF7F006
Status : Not ready
Link : Not connected
Speed : -
Hashed SAS address : 000000
Encryption method : Disabled
Key path : Default by method
BOP policy : Disabled
Density reporting : Other
Media loads : 2175
Cleaning operations : 78
Power on hours : 54579
Media motion hours : 4555
Tape meters : 36004848

Installed Memory Types and Sizes on Library and Robotics Controller:
-------------------------------------------------------------------------
Library Controller - LCM 1.1:
EEPROM Type : 24LC08B
EEPROM Size : 1 KB
FRAM Type : FM25V02
FRAM Size : 32 KB
DataFlash Type : AT45DB321
DataFlash Size : 4 MB
SDRAM Type : MT48LC8M16A2P-75
SDRAM Size : 32 MB
Robotics Controller - SE625:
EEPROM Type : 24LC256B
EEPROM Size : 32 KB

Library Inventory table:
------------------------
Mailslot Lib-Elem. Medium PVT Loads AIH Exceptions
0 0 - 16 Unknown 110
4 0 - 17 Unknown 110
8 0 - 18 100
Slot Lib-Elem. Medium PVT Loads A H Exceptions
1 0 - 4096 Data 5 A04867L5 85 1 0 Medium: 63 0
2 0 - 4097 Data 5 A05751L5 42 1 0
3 0 - 4098 Data 5 A05975L5 58 1 0 Medium: 63 0
5 0 - 4099 Data 5 A05549L5 1 0
6 0 - 4100 1 0
7 0 - 4101 Data 5 A05727L5 1 0
9 0 - 4102 1 0
10 0 - 4103 Data 5 A05648L5 1 0
11 0 - 4104 Data 5 N00224L5 1 0
12 0 - 4105 Data 5 A05886L5 1 0
13 0 - 4106 1 0
14 0 - 4107 1 0
15 0 - 4108 1 0
16 0 - 4109 1 0
17 0 - 4110 1 0
18 0 - 4111 1 0
19 0 - 4112 1 0
20 0 - 4113 1 0
21 0 - 4114 1 0
22 0 - 4115 1 0
23 0 - 4116 1 0
24 0 - 4117 1 0
25 0 - 4118 1 0
26 0 - 4119 1 0
27 0 - 4120 1 0
28 0 - 4121 1 0
29 0 - 4122 1 0
30 0 - 4123 1 0
31 0 - 4124 1 0
32 0 - 4125 1 0
33 0 - 4126 1 0
34 0 - 4127 1 0
35 0 - 4128 1 0
36 0 - 4129 1 0
37 0 - 4130 1 0
38 0 - 4131 1 0
39 0 - 4132 1 0
40 0 - 4133 1 0
41 0 - 4134 1 0
42 0 - 4135 1 0
43 0 - 4136 1 0
44 0 - 4137 1 0
45 0 - 4138 1 0
46 - 1 1
47 - Unknown 1 1

Drive Lib-Elem. Medium PVT Loads A H Exceptions
0 0 - 256 Data 5 A07200L5 54 1 0 Medium: 63 0 TAF: 3, 5, 39 from 0 - 4102
1 0 - 257 Data 5 A07151L5 97 1 0 from 0 - 4100

Transp. Lib-Elem. Medium PVT Loads A H Exceptions
0 0 - 1 1 0

Lib 0
Loader mode: Manual, RND
Serial Number: X4U78T4452_LL0
Wwide Node Name: 2000000E1114FF7F
Drive Range: 256 - 257
Storage slot range: 4096-4138
Autoclean: Disabled

Drive sleds Fan status:
-----------------------
Drive Fan speed Cooling Mode Monitoring Sled HW Rev. Drive Temp.
0 7 On auto off 1 31.1
1 7 On auto off 1 31.1

Retry counter table:
-------------------------------------------------------------------------
Slot Usage Retry
0 810 0
1 370 0
2 252 0
3 224 0
4 822 0
5 511 0
6 1059 0
7 434 0
8 834 0
9 610 0
10 268 1
11 1130 0
12 475 0
13 547 0
14 790 0
15 407 0
16 398 1
17 294 0
18 278 0
19 348 0
20 328 0
21 269 0
22 306 0
23 281 0
24 296 0
25 276 0
26 287 0
27 328 0
28 271 0
29 324 0
30 375 0
31 340 0
32 443 0
33 333 0
34 335 0
35 337 0
36 228 0
37 202 0
38 168 0
39 807 0
40 254 1
41 228 0
42 212 0
43 141 0
44 216 0
45 193 0
46 266 0
47 419 0
Drive Usage Retry
0 3777 0
1 3441 7057
2 0 0
3 0 0
Transp. Usage Retry
0 126758 96
Motor Usage Retry
0 961255 0
1 533535 0
2 441087 0
3 215139 0
4 958528 0

ADI communication retries:
-------------------------------------------------------------------------
Drive Rx Rreq Tx Rreq RxLogout TxLogin
0 259 2 1 0
1 252 3 2 0
2 0 1 1 2
3 0 0 0 0

Network Status and Information:
-------------------------------
Hostname : odtape
Domain Name : odg.local
FQDN : odtape.odg.local
MAC Address : 000E1114FF7F
UUID : 1114FF7F0001000E1114FF7F
Protocol Stack : IPv4 only
HTTPS enabled : No
MTU size : 1500
LAN Cable : Connected

IPv4 Settings:
--------------
DHCP enabled : No
IP Address : 192.168.4.100
Netmask : 255.255.255.0
Gateway Address : 0.0.0.0
DNS 1 : 10.15.1.5
DNS 2 : 10.15.1.51
DNS by DHCP : No

Ethernet Settings:
------------------
Speed : Auto
Duplex : Auto

SNMP Settings:
--------------
SNMP enabled : Yes
Audit logging : No
SNMP Target 1 Adr. : 0.0.0.0 (SNMPv1)
SNMP Target 2 Adr. : 0.0.0.0 (SNMPv1)
SNMP Target 3 Adr. : 0.0.0.0 (SNMPv1)
SNMP Trap Filter : Critical, Warning, Configuration and Informational Events
SNMP Community Name : public
Security User Name : initial
Authentication : none
Privacy : none

Email Notification Settings:
----------------------------
Notification Level : No Events
SMTP Address : 0.0.0.0
To Email Address :

-------------------------------------------------------------------------------
A D P - C o n f i g u r a t i o n   a n d   L i n k - S t a t u s
-------------------------------------------------------------------------------
Drive Lib SvrPri IP-Address Port IPVer SSL Link DNS-Name
-------------------------------------------------------------------------------
0 0 *1 0:0:0:0:0:0:0:0 0 6 N down
0 0 2 0:0:0:0:0:0:0:0 0 6 N down
1 0 *1 0:0:0:0:0:0:0:0 0 6 N down
1 0 2 0:0:0:0:0:0:0:0 0 6 N down
2 0 *1 0:0:0:0:0:0:0:0 0 6 N down
2 0 2 0:0:0:0:0:0:0:0 0 6 N down
3 0 *1 0:0:0:0:0:0:0:0 0 6 N down
3 0 2 0:0:0:0:0:0:0:0 0 6 N down
Legend: SvrPri: server priority, 1 = Primary, 2 = Secondary
An asterisk indicates the current server selection

------------------------------------------------------------------------------
A D P - H e a l t h   ( D r i v e - V i e w )
------------------------------------------------------------------------------
EKM-Path | Error Information
-------------------------------------------------+----------------------------
Drive | Server-IP | ErrCode OpCode Status
-------+-----------------------------------------+----------------------------
0 | N/A | 00 00 GREEN
1 | N/A | 00 00 GREEN
2 | N/A | 00 00 GREEN
3 | N/A | 00 00 GREEN
  | N/A | 00 00 GREEN

------------------------------------------------------------------------------
A D P - H e a l t h   ( L i b r a r y - V i e w )
------------------------------------------------------------------------------
EKM-Path | Error Information
------------------------------------------------------+-----------------------
Lib | Svr | Server-IP | Drv ErrC OpC Stat
------+-----+-----------------------------------------+-----------------------
0 | 1 | N/A | - 00 00 N/A
0 | 2 | N/A | - 00 00 N/A
0 || N/A | - 00 00 GREEN
  || N/A | - 00 00 GREEN

--------------------------------------------
Drive | State of communication tasks
      | Drive I/O  Net send  Net receive
-------+------------------------------------
0 | 01 21 41
1 | 01 21 41
2 | 01 20 40
3 | 01 20 40

Critical Events:
----------------------------------------------------------------------------
Timestamp Type Events

Informational Events:
----------------------------------------------------------------------------
Timestamp Type Events
24.05.22 14:08:28.39 TRC/CST<0A 0F 03 > Login event
24.05.22 14:04:37.52 TRC/CST<01 0F 00 02 07 04 01 > Move event
24.05.22 14:03:12.74 TRC/CST<0A 0F 03 > Login event
24.05.22 13:54:20.07 TRC/CST<08 > Power On event
24.05.22 13:53:46.20 TRC/CST<09 > Power Off event
24.05.22 13:43:44.10 TRC/CST<0A 0F 03 > Login event
24.05.22 13:38:00.07 TRC/CST<08 > Power On event
24.05.22 13:36:51.89 TRC/CST<09 > Power Off event
24.05.22 13:21:58.57 TRC/CST<01 0F 00 02 05 04 02 > Move event
24.05.22 13:21:03.53 TRC/CST<01 0F 00 04 02 02 02 > Move event
24.05.22 13:19:18.63 TRC/CST<0A 0F 03 > Login event
24.05.22 13:13:51.55 TRC/CST<02 00 00 > Rescan event
24.05.22 13:12:39.58 TRC/CST<07 00 00 > Magazine Insert event
24.05.22 13:12:09.00 TRC/CST<06 00 00 > Magazine Remove event
24.05.22 13:09:06.14 TRC/CST<0A 0F 03 > Login event
24.05.22 13:06:38.70 TRC/CST<0B 0F 03 > Logout event
24.05.22 12:51:37.10 TRC/CST<02 00 00 > Rescan event
24.05.22 12:50:14.73 TRC/CST<07 00 00 > Magazine Insert event
24.05.22 12:48:42.28 TRC/CST<06 00 00 > Magazine Remove event
24.05.22 12:45:02.42 TRC/CST<01 0F 00 04 01 02 2D > Move event
24.05.22 12:41:29.11 TRC/CST<01 0F 00 02 2D 04 01 > Move event

Warning Events:
----------------------------------------------------------------------------
Timestamp Type Events
24.05.22 14:17:34.22 TRC/CST<84 07 01 27 > Drive Warn or Crit Tape Alert flag
24.05.22 14:17:34.21 TRC/CST<84 07 01 05 > Drive Warn or Crit Tape Alert flag
24.05.22 14:17:34.21 TRC/CST<84 07 01 03 > Drive Warn or Crit Tape Alert flag
24.05.22 11:57:55.43 TRC/CST<84 05 01 27 > Drive Warn or Crit Tape Alert flag
24.05.22 11:57:55.43 TRC/CST<84 05 01 04 > Drive Warn or Crit Tape Alert flag
24.05.22 11:57:55.42 TRC/CST<84 05 01 03 > Drive Warn or Crit Tape Alert flag
24.05.22 10:34:15.79 TRC/CST<84 05 01 14 > Drive Warn or Crit Tape Alert flag
24.05.22 10:34:15.79 TRC/CST<82 01 > Drive Cleaning request
24.05.20 12:54:03.73 TRC/CST<84 05 01 14 > Drive Warn or Crit Tape Alert flag
24.05.20 12:54:03.73 TRC/CST<82 01 > Drive Cleaning request
24.05.17 19:59:14.83 TRC/CST<84 05 01 27 > Drive Warn or Crit Tape Alert flag
24.05.17 19:59:14.83 TRC/CST<84 05 01 04 > Drive Warn or Crit Tape Alert flag
24.05.17 19:59:14.83 TRC/CST<84 05 01 03 > Drive Warn or Crit Tape Alert flag
24.05.17 10:22:07.79 TRC/CST<84 05 01 14 > Drive Warn or Crit Tape Alert flag
24.05.17 10:22:07.79 TRC/CST<82 01 > Drive Cleaning request
24.05.16 19:15:13.69 TRC/CST<84 05 01 14 > Drive Warn or Crit Tape Alert flag
24.05.16 19:15:13.69 TRC/CST<82 01 > Drive Cleaning request
24.05.16 18:13:36.88 TRC/CST<84 05 01 27 > Drive Warn or Crit Tape Alert flag
24.05.16 18:13:36.88 TRC/CST<84 05 01 04 > Drive Warn or Crit Tape Alert flag
24.05.16 18:13:36.88 TRC/CST<84 05 01 03 > Drive Warn or Crit Tape Alert flag
24.08.21 19:14:03.46 TRC/CST<84 03 01 14 > Drive Warn or Crit Tape Alert flag
24.08.21 19:14:03.46 TRC/CST<82 01 > Drive Cleaning request
24.08.21 19:11:06.80 TRC/CST<84 03 01 14 > Drive Warn or Crit Tape Alert flag
24.08.21 19:11:06.80 TRC/CST<82 01 > Drive Cleaning request
24.08.21 19:05:35.81 TRC/CST<84 03 01 27 > Drive Warn or Crit Tape Alert flag
24.08.21 19:05:35.81 TRC/CST<84 03 01 14 > Drive Warn or Crit Tape Alert flag
24.08.21 19:05:35.81 TRC/CST<84 03 01 04 > Drive Warn or Crit Tape Alert flag
24.08.21 19:05:35.80 TRC/CST<84 03 01 03 > Drive Warn or Crit Tape Alert flag
24.08.21 19:05:35.80 TRC/CST<82 01 > Drive Cleaning request
24.08.21 18:42:16.08 TRC/CST<84 01 01 14 > Drive Warn or Crit Tape Alert flag
24.08.21 18:42:16.08 TRC/CST<82 01 > Drive Cleaning request
24.08.21 18:41:03.97 TRC/CST<84 01 01 27 > Drive Warn or Crit Tape Alert flag
24.08.21 18:41:03.97 TRC/CST<84 01 01 14 > Drive Warn or Crit Tape Alert flag
24.08.21 18:41:03.97 TRC/CST<84 01 01 04 > Drive Warn or Crit Tape Alert flag
24.08.21 18:41:03.97 TRC/CST<84 01 01 03 > Drive Warn or Crit Tape Alert flag
24.08.21 18:41:03.96 TRC/CST<82 01 > Drive Cleaning request
24.04.06 08:18:25.80 TRC/CST<84 01 01 14 > Drive Warn or Crit Tape Alert flag
24.04.06 08:18:25.80 TRC/CST<82 01 > Drive Cleaning request
24.04.03 14:03:03.25 TRC/CST<84 01 01 14 > Drive Warn or Crit Tape Alert flag
24.04.03 14:03:03.25 TRC/CST<84 01 01 04 > Drive Warn or Crit Tape Alert flag
24.04.03 14:03:03.25 TRC/CST<82 01 > Drive Cleaning request
24.04.03 13:48:49.82 TRC/CST<84 01 01 27 > Drive Warn or Crit Tape Alert flag
24.04.03 13:48:49.82 TRC/CST<84 01 01 03 > Drive Warn or Crit Tape Alert flag
00.01.19 12:26:28.31 TRC/CST<88 02 > Drive FW unchanged after upgrade
00.01.19 11:59:38.23 TRC/CST<88 01 > Drive FW unchanged after upgrade
00.01.14 14:26:13.02 TRC/CST<85 5A > Network problem
21.01.13 10:54:52.69 TRC/CST<86 > No tape for autoclean
21.01.09 03:13:10.16 TRC/CST<84 0C 02 27 > Drive Warn or Crit Tape Alert flag
21.01.09 03:13:10.15 TRC/CST<84 0C 02 14 > Drive Warn or Crit Tape Alert flag
21.01.09 03:13:10.15 TRC/CST<84 0C 02 06 > Drive Warn or Crit Tape Alert flag
21.01.09 03:13:10.15 TRC/CST<84 0C 02 03 > Drive Warn or Crit Tape Alert flag
21.01.09 03:13:10.15 TRC/CST<82 02 > Drive Cleaning request
21.01.05 15:05:37.16 TRC/CST<84 04 02 27 > Drive Warn or Crit Tape Alert flag
21.01.05 15:05:37.15 TRC/CST<84 04 02 14 > Drive Warn or Crit Tape Alert flag
21.01.05 15:05:37.15 TRC/CST<84 04 02 06 > Drive Warn or Crit Tape Alert flag

Configuration change Events
----------------------------------------------------------------------------
Timestamp Type Events
24.05.16 17:04:09.06 TRC/CST<4E FF > Date/Time conf. change
24.05.16 19:14:09.06 TRC/CST<4E FF > Date/Time conf. change
24.08.21 19:11:16.35 TRC/CST<45 0F 01 00 00 > Master drive conf. change
24.08.21 19:10:50.48 TRC/CST<79 0F 03 01 > Drive configuration changed
24.08.21 19:10:50.48 TRC/CST<7C 0F 03 01 > Drive has been power cycled from the Web UI
24.08.21 18:43:11.81 TRC/CST<45 0F 01 00 01 > Master drive conf. change
24.08.21 18:41:59.83 TRC/CST<79 0F 03 01 > Drive configuration changed
24.08.21 18:41:59.83 TRC/CST<7C 0F 03 01 > Drive has been power cycled from the Web UI
24.03.16 13:17:13.06 TRC/CST<4E FF > Date/Time conf. change
24.03.16 12:48:00.17 TRC/CST<4E FF > Date/Time conf. change
00.01.19 11:42:39.13 TRC/CST<51 45 2E 37 30 > Library firmware upgrade
21.01.13 11:18:14.48 TRC/CST<53 FF 00 > Autoclean conf. change
21.01.08 09:42:58.28 TRC/CST<44 0F 01 > Drive disable conf change
21.01.08 09:42:56.30 TRC/CST<7C 0F 03 01 > Drive has been power cycled from the Web UI
20.11.10 12:40:35.24 TRC/CST<43 0F 01 > Drive enable conf. change
20.11.10 12:39:45.50 TRC/CST<79 0F 03 01 > Drive configuration changed
20.11.10 12:39:45.49 TRC/CST<7C 0F 03 01 > Drive has been power cycled from the Web UI
20.11.09 09:14:55.62 TRC/CST<64 02 00 00 > Library door lock status change
20.11.09 09:14:24.15 TRC/CST<64 02 00 01 > Library door lock status change
20.11.09 09:06:56.03 TRC/CST<44 0F 01 > Drive disable conf change
20.11.09 09:06:54.05 TRC/CST<7C 0F 03 01 > Drive has been power cycled from the Web UI

Calibration Events
----------------------------------------------------------------------------
Timestamp Type Events
24.03.24 15:58:18.83 TRC/CST<80 0F 17 02 26 02 06 9B 37 > Calibration event
24.03.24 15:40:12.59 TRC/CST<80 0F 40 02 0E 04 01 9B 30 > Calibration event
24.03.24 15:40:12.58 TRC/CST<80 0F 17 02 0E 04 01 9B 30 > Calibration event
21.01.09 03:23:42.33 TRC/CST<80 FF 00 04 02 02 2C FC 01 > Calibration event
21.01.09 03:18:45.71 TRC/CST<80 02 15 04 02 02 0C FC 01 > Calibration event
21.01.08 10:14:57.26 TRC/CST<80 02 15 04 02 02 06 FC 01 > Calibration event
21.01.07 11:28:41.52 TRC/CST<80 02 15 04 02 02 04 FC 01 > Calibration event
21.01.06 18:03:37.54 TRC/CST<80 02 15 04 02 02 0A FC 01 > Calibration event
21.01.05 15:13:04.60 TRC/CST<80 FF 00 04 02 02 2C FC 01 > Calibration event
21.01.05 15:07:56.40 TRC/CST<80 02 15 04 02 02 04 FC 01 > Calibration event
21.01.04 18:23:17.24 TRC/CST<80 02 15 04 02 02 09 FC 01 > Calibration event
21.01.04 12:31:27.01 TRC/CST<80 02 15 04 02 02 0C FC 01 > Calibration event
21.01.02 17:46:32.91 TRC/CST<80 02 15 04 02 02 0B FC 01 > Calibration event
21.01.02 03:30:55.52 TRC/CST<80 FF 00 04 02 02 2C FC 01 > Calibration event
21.01.02 03:26:01.17 TRC/CST<80 02 15 04 02 02 0C FC 01 > Calibration event

Verbose Trace
----------------------------------------------------------------------------
Timestamp Type Events
24.05.22 14:28:24.45 SDC/TRC<01 41 01 00 00 00 >
24.05.22 14:28:24.45 SDC/TRC<01 41 01 06 28 00 >
...

RAM Verbose Trace
----------------------------------------------------------------------------
Timestamp Type Events
24.05.22 14:28:24.45 SDC/TRC<01 41 01 00 00 00 >
24.05.22 14:28:24.45 SDC/TRC<01 41 01 06 28 00 >
24.05.22 14:28:24.14 LIB/RSP<00 00 00 4C 2B 0B 00 00 00 00 00 00 FF 04 01 FF FF FF 00 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00 00 00 >
24.05.22 14:28:24.04 DRV/RSP<00 00 00 1C 2B 00 00 00 00 00 00 00 FF 04 01 FF FF FF 00 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00 00 00 >
24.05.22 14:28:24.03 DRV/CMD<1C 01 01 0A >
24.05.22 14:28:24.03 DRV/RSP<00 00 00 1C 2B 00 00 00 00 00 00 00 FF 04 01 FF FF FF 00 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00 00 00 C9 54 A8 0B 00 00 00 00 00 09 00 03 0C 00 1F 6E >
24.05.22 14:28:24.03 SDC/TRC<01 1C >
...
Drive 0 Error Log:
------------------
Timestamp Entry Log Data

Drive 0 SCSI Log:
-----------------
Timestamp Entry Log Data
1989:20:01:56.00 0000 03 11 00 1F 47
1989:19:07:57.00 0001 03 31 00 00 00
1989:18:42:47.00 0002 03 09 00 23 40
1989:18:42:47.00 0003 03 09 00 23 40
1989:18:30:41.00 0004 03 09 00 23 40
1989:18:30:40.00 0005 03 09 00 23 40
1988:01:21:41.00 0006 03 31 00 00 00
1988:01:16:31.00 0007 03 09 00 23 40
1988:01:16:31.00 0008 03 09 00 23 40
1987:15:37:27.00 0009 03 09 00 23 40

Drive 1 Error Log:
------------------
Timestamp Entry Log Data

Drive 1 SCSI Log:
-----------------
Timestamp Entry Log Data
2263:16:05:29.00 0000 03 0C 00 1F 6E
2263:16:05:29.00 0001 03 0C 00 1F 6E
2263:16:05:29.00 0002 03 0C 00 1F 6E
2263:16:05:29.00 0003 03 0C 00 1F 6E
2263:16:05:29.00 0004 03 0C 00 1F 6E
2263:16:05:29.00 0005 03 0C 00 1F 6E
2263:16:05:29.00 0006 03 0C 00 1F 6E
2263:16:05:29.00 0007 03 0C 00 1F 6E
2263:16:05:29.00 0008 03 0C 00 1F 6E
2263:16:05:29.00 0009 03 0C 00 1F 6E

------------------------------------------------------------------------------
Taskname Taskstate ScCount Prio Stack Size Avail Unmod
------------------------------------------------------------------------------
TraceTsk NU_PIPE_SUSP 16410 26 0x208D4AB0 2800 2728 1396
SpoolTsk NU_FINISHED 4689 90 0x208D55CC 3000 2928 872
DMSITrc NU_PIPE_SUSP 35 26 0x208DA514 1000 928 540
DMSIIO NU_DRIVER_SUSP 2 60 0x208DA928 1000 928 632
DMONMain NU_PIPE_SUSP 8 60 0x208DD10C 3000 2928 2000
TestMain NU_RUNNING 3299 60 0x208DE264 9000 8928 5284
Drive NU_PIPE_SUSP 110879 3 0x208E05B8 5000 4928 2516
DrvStat NU_EVENT_SUSP 8878 32 0x208E196C 2500 2428 628
SledMoni NU_SLEEP_SUSP 59957 87 0x208E412C 3000 2928 2108
Robotic NU_PIPE_SUSP 29648 45 0x208E4D10 5000 4928 1708
ROBUART NU_DRIVER_SUSP 34639 35 0x208E6460 3000 2928 1756
ChkSlPo NU_EVENT_SUSP 3117 35 0x208E7044 1500 1428 440
OCPOut NU_PIPE_SUSP 33257 36 0x200AFCC0 5000 4928 3624
OCPInp NU_SLEEP_SUSP 213815 36 0x200B1074 5000 4928 4240
OCPAnim NU_SLEEP_SUSP 136269 96 0x200B2428 5000 4928 3988
LunCdb0 NU_PURE_SUSP 0 22 0x208E7AA4 6000 5928 5924
LunCtrl0 NU_EVENT_SUSP 2 23 0x208E9640 2000 1928 1704
LunCdb1 NU_EVENT_SUSP 449 22 0x208E9F0C 6000 5928 4976
LunCtrl1 NU_EVENT_SUSP 189 23 0x208EBAA8 2000 1928 964
LunCdb2 NU_PURE_SUSP 0 22 0x208EC374 6000 5928 5924
LunCtrl2 NU_EVENT_SUSP 3 23 0x208EDF10 2000 1928 1704
LunCdb3 NU_PURE_SUSP 0 22 0x208EE7DC 6000 5928 5924
LunCtrl3 NU_EVENT_SUSP 2 23 0x208F0378 2000 1928 1704
LunCdb4 NU_PURE_SUSP 0 22 0x208F0C44 6000 5928 5924
LunCtrl4 NU_EVENT_SUSP 2 23 0x208F27E0 2000 1928 1704
LunCdb5 NU_PURE_SUSP 0 22 0x208F30AC 6000 5928 5924
LunCtrl5 NU_EVENT_SUSP 2 23 0x208F4C48 2000 1928 1704
CDB_Inte NU_PIPE_SUSP 1550 33 0x208F5514 20480 20408 19592
LibCtrl NU_READY 3544 80 0x208FB9E0 60000 59928 57536
LibFeed NU_PIPE_SUSP 1 80 0x2090A46C 60000 59928 59644
NETSTART NU_EVENT_SUSP 4846 82 0x20918EF8 28000 27928 21204
LibraryS NU_SLEEP_SUSP 156295 50 0x2092157C 4000 3928 3408
Fans_Ser NU_SLEEP_SUSP 61028 89 0x20922548 1500 1428 832
PwrBtn_S NU_EVENT_SUSP 2 86 0x20922F28 1000 928 472
LibAtcln NU_EVENT_SUSP 21 80 0x2092333C 60000 59928 59464
UPGCLEAN NU_FINISHED 5186 95 0x20931DC8 3000 2928 2116
RTCTask NU_SLEEP_SUSP 11110 55 0x209329AC 800 728 312
JTAGPLD NU_FINISHED 14 1 0x20932CF8 8192 8120 7528
Watchdog NU_SLEEP_SUSP 2279 255 0x20934D24 1000 928 672
DMFileT NU_PIPE_SUSP 1 26 0x20ACF4B0 4000 3928 3728
USBH-Stk NU_SEMAPH_SUSP 31 2 0x200CFCB0 77824 77752 77464
USBH-HUB NU_SEMAPH_SUSP 37 10 0x200E3CD0 8192 8120 7300
USBMON NU_QUEUE_SUSP 1 38 0x20AD0530 20000 19928 19560
ADT0RXFR NU_DRIVER_SUSP 79139 1 0x20AE0980 2000 1928 1428
ADT0TXFR NU_PIPE_SUSP 7693 2 0x20AE117C 2000 1928 1496
ADT0LINK NU_EVENT_SUSP 58264 0 0x20AE1978 2000 1928 1324
SMC0SERV NU_PIPE_SUSP 1 3 0x20AE2200 2000 1928 1640
AER0SERV NU_PIPE_SUSP 1 3 0x20AE2A88 2000 1928 1696
EvntDsp NU_QUEUE_SUSP 14614 3 0x20AE33F8 5000 4928 4268
TIMER NU_EVENT_SUSP 27899 3 0x20AE4790 5000 4928 4076
DHCP6CL NU_PURE_SUSP 4 3 0x20AE7074 5000 4928 3044
DHCP6EV NU_QUEUE_SUSP 2 3 0x20AE840C 5000 4928 4692
SNMPNo NU_EVENT_SUSP 66 3 0x20AEB930 9000 8928 4156
SNMPTask NU_PURE_SUSP 5 3 0x20AEDC68 9000 8928 8012
SNMPCr NU_EVENT_SUSP 1 3 0x20AEFFA0 9000 8928 8680
LN91C111 NU_EVENT_SUSP 36033 4 0x20AF6710 8000 7928 7580
DMTELTrc NU_PIPE_SUSP 2 26 0x20B108B0 2000 1928 1720
RMUDrvDu NU_READY 42205 80 0x20B14018 3000 2928 2524
RMUDrvCo NU_READY 42413 80 0x20B14BFC 3000 2928 2584
RMUVLibC NU_READY 42129 80 0x20B157E0 3000 2928 2612
DMRmuT NU_PIPE_SUSP 10411 26 0x20B16D94 4000 3928 3560
NUWebSrv NU_PURE_SUSP 32 5 0x2001D4F0 8000 7928 2000
wserv_A NU_PURE_SUSP 42429 8 0x2001F4C8 75000 74928 64156
wserv_B NU_PURE_SUSP 43004 8 0x20031B98 75000 74928 64140
wserv_C NU_SEMAPH_SUSP 216744 8 0x20044268 75000 74928 63060
wserv_D NU_PURE_SUSP 52933 8 0x20056938 75000 74928 64128
wolfSSLS NU_PURE_SUSP 0 5 0x20069018 2000 1928 1928
SNMPTask NU_SLEEP_SUSP 11450 70 0x20B1F950 10000 9928 2744
Notify NU_PIPE_SUSP 3 70 0x20B250D4 75000 74928 72636
LMEDIO0 NU_EVENT_SUSP 22 70 0x20B3C90C 10000 9928 8044
LMESnd0 NU_EVENT_SUSP 67 70 0x20B3F048 55000 54928 53920
LMERcv0 NU_EVENT_SUSP 4 70 0x20B4C74C 10000 9928 9196
LMEDIO1 NU_EVENT_SUSP 28 70 0x20B534E0 10000 9928 8048
LMESnd1 NU_EVENT_SUSP 48 70 0x20B55C1C 55000 54928 53868
LMERcv1 NU_EVENT_SUSP 3 70 0x20B63320 10000 9928 9200
LMEDIO2 NU_EVENT_SUSP 20 70 0x20B6A0B4 10000 9928 8088
LMESnd2 NU_EVENT_SUSP 2 70 0x20B6C7F0 55000 54928 54104
LMERcv2 NU_EVENT_SUSP 3 70 0x20B79EF4 10000 9928 9196
LMEDIO3 NU_EVENT_SUSP 17 70 0x20B80C88 10000 9928 8092
LMESnd3 NU_EVENT_SUSP 2 70 0x20B833C4 55000 54928 54100
LMERcv3 NU_EVENT_SUSP 2 70 0x20B90AC8 10000 9928 9200
LMEDeb NU_PIPE_SUSP 1 60 0x20B23688 3000 2928 1688
ADT1RXFR NU_DRIVER_SUSP 112614 1 0x20B9EAF4 2000 1928 1424
ADT1TXFR NU_PIPE_SUSP 11640 2 0x20B9F2F0 2000 1928 1480
ADT1LINK NU_EVENT_SUSP 84382 0 0x20B9FAEC 2000 1928 1304
SMC1SERV NU_PIPE_SUSP 89 3 0x20BA02E8 2000 1928 1228
AER1SERV NU_PIPE_SUSP 1 3 0x20BA0AE4 2000 1928 1692
------------------------------------------------------------------------------
Total number of tasks in system : 87
Total memory size of tasks in system : 1248788
Total available memory : 1242524
Total unused memory : 1110400
------------------------------------------------------------------------------
Pool     Startadr.  Size     Allocated Available Susp  Susp  First
Name                                             Type  Tasks Task
------------------------------------------------------------------------------
TRCPOOL  0x208B2DE0 131072   131072    0         FIFO  0     <N/A>
SYSMEM   0x208D2DE0 20971520 3051920   17919600  FIFO  0     <N/A>
NETMEM   0x20000000 120032   120032    0         FIFO  0     <N/A>
WEBMEM   0x2001D4E0 600000   317012    282988    FIFO  0     <N/A>
OCPMEM   0x200AFCA0 131072   26884     104188    FIFO  0     <N/A>
USBMEM   0x200CFCA0 262144   99828     162316    FIFO  0     <N/A>
USBHCM   0x2010FCA0 524288   1664      522624    FIFO  0     <N/A>
USBMSM   0x2018FCA0 262144   1820      260324    FIFO  0     <N/A>
DBSDPOOL 0x201CFCA0 52256    43392     8864      FIFO  0     <N/A>
------------------------------------------------------------------------------
Complete Pool Size: 23054528.
------------------------------------------------------------------------------
Part. Startadr. Pool Part. Avail.
Alloc. Susp Susp First Name Size Size Parts Parts Type Tasks Task ------------------------------------------------------------------------------ RAMDISK 0x20946BC8 1574784 16384 0 96 FIFO 0 <N/A> ------------------------------------------------------------------------------ _ _ _____ | | | | __ _ _ __ __ _ |_ _| __ __ _ ___ ___ _ | |_| |/ _` | '_ \ / _` | | || '__/ _` |/ __/ _ \ (_) | _ | (_| | | | | (_| | | || | | (_| | (_| __/ _ |_| |_|\__,_|_| |_|\__, | |_||_| \__,_|\___\___| (_) |___/ No entries found or trace data corrupted! Try reading out raw data. === tape library what are i/o stations used? === Mailslot is useful in the following situations: o When inserting/ejecting specific volume cartridge tapes o When inserting/ejecting the cleaning cartridge tapes (where to store the cleaning cardrige) o When ejecting defective cartridge tapes Using mailslot shortens response time compared to replacing whole magazines. o Eliminates a bar code scan of all slots o Eliminates need to update media mapping of backup software # https://www.fujitsu.com/au/products/computing/storage/tape/eternus-lt/feature/strsys-b12.html === IBM Tape Library tools === # tested faulty drive (it was repaired afterwards by sprague-europe.com): 10WT036509 ./itdt { "METRO BLOB HEADER": { "START TIME": "Wed May 22 15:27:06.706 2024" , "PROGRAM NAME": "ITDT-SE" , "PROGRAM VERSION": "9.6.2.20231114" , "OPERATING SYSTEM": "Linux" , "OPERATING VERS": " Release: 6.1.0-21-amd64 Version:#1 SMP PREEMPT_DYNAMIC Debian 6.1.90-1 (2024-05-03)" , "SYSTEM INFO": " Systemname:Linux Nodename: gigabyte-srv1-debian MachineType:x86_64" , "TAPE DEVICE NAME": "/dev/sg0" , "TAPE DRIVER VERSION": "LinuxDefault" , "PID NAME": "PID_SYSTEM_TEST" , "SERIAL_NO": "10WT036509" , "MODEL_NAME": "ULT3580-HH5" , "MICROCODE": "H971" , "DEVICE TYPE": "TAPE" , "DRIVE INTERFACE TYPE": "SAS" , "HOST ID": 0 , "BUS ID": 0 , "SCSI ID": 5 , "LUN ID": 0 } , "INIT_DRIVE": { "START TIME": "Wed May 22 15:27:06.713 2024" , "TEST RESULT": "PASSED" , "RESULT CODE": "OK" , "SERIAL_NO": "10WT036509" , "MODEL_NAME": "ULT3580-HH5" , "MICROCODE": "H971" , "data": [ { "HBAPropertyText": "Driver" , "HBAPropertyValue": "mpt2sas" } ] } , "MOUNT_TAPE": { "START TIME": "Wed May 22 15:27:07.234 2024" , "DURATION": 921118 , "TEST RESULT": "ABORTED" , "RESULT CODE": "NO TEST TAPE INSERTED" , "CURRENT PARTITION SIZE": 0 , "NUMBER WRAPS": 0 , "NUMBER TRACKS": 0 , "DENSITY CODE": 0 , "CARTRIDGE SN": "" , "CARTRIDGE TYPE": "Unknown Cartridge Type" , "CARTRIDGE MFG": "" , "CARTRIDGE MFG DATE": "" , "_SENSEDATA": "" , "TARGET STATUS": "Check Condition" , "HOST STATUS": "Status Good" , "EC": 22 } , "METRO DISPOSITION": { "START TIME": "Wed May 22 15:43:14.670 2024" , "DIAG RESULT": "ABORTED" , "DIAG CODE": "NO TEST TAPE INSERTED" , "DIAG STEP ID": "MOUNT TAPE" , "COMMENTS": "" } }
tools: benchmark tape drive
- sequential copy data 1:1 via dd:
- wget https://dwaves.de/scripts/benchmark_tape_sequential_dd.sh
- # run it
- chmod +x benchmark_tape_sequential_dd.sh
- ./benchmark_tape_sequential_dd.sh
- # example output:
- # watch the web gui while running the benchmark to verify it’s working 😀
=== tape library benchmark script v1.0 ===
who? opensource ÄÄÄÄT dwaves DOOOT de
what? this script shall test how fast or slow data can be written and read to and from tape
...
how many GBytes of random test data should be generated and written 1:1 via dd to tape? Enter an integer: 3
generating test file of : 3 GBytes
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 14.6075 s, 221 MB/s
...
the testfile has a size of: 3.1G ./benchmark_testdata_random
...
generating sha512sum for that test data:
...
sha512sum is: 5f65bed657663169ea9ac0bcf1bf1c15fd3416d7da5bb3fae366f6fe9ae3013d5e9c92a94a46a646c4fde15bfea9d8ed463d7187e0aebc5fee143c78716bf228 ./benchmark_testdata_random
...
please put a tape without important data into the first drive /dev/st0: hit [ENTER] when ready
...
rewinding tape to start
...
writing benchmark_testdata_random to /dev/st0
3216772096 bytes (3.2 GB, 3.0 GiB) copied, 192 s, 16.8 MB/s
3145728+0 records in
3145728+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 220.191 s, 14.6 MB/s <- so this is the write speed
...
deleting testfile benchmark_testdata_random from harddisk
...
rewinding tape to start again
...
reading testfile benchmark_testdata_random back from tape to harddisk
3218196480 bytes (3.2 GB, 3.0 GiB) copied, 192 s, 16.8 MB/s
3145728+0 records in
3145728+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 215.833 s, 14.9 MB/s <- so that's the read speed
...
verifying restored data
./benchmark_testdata_random: OK <- the test data was retrieved from tape intact
=== TEST DONE ===
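for reference, a minimal sketch of what such a sequential dd benchmark does (assumptions: the first drive is /dev/st0 and a scratch tape without important data is loaded; the real script linked above does more error handling):

# generate test data + checksum
dd if=/dev/urandom of=./benchmark_testdata_random bs=1024M count=3 iflag=fullblock
sha512sum ./benchmark_testdata_random > ./benchmark_testdata_random.sha512sum
# rewind the tape to the beginning, then write the file 1:1 to the tape device
mt -f /dev/st0 rewind
dd if=./benchmark_testdata_random of=/dev/st0 bs=1M status=progress; # <- write speed
# delete the local copy, rewind again, read the data back from tape
rm -f ./benchmark_testdata_random
mt -f /dev/st0 rewind
dd if=/dev/st0 of=./benchmark_testdata_random bs=1M status=progress; # <- read speed
# verify integrity of the restored data
sha512sum -c ./benchmark_testdata_random.sha512sum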
- benchmark LTFS via rsync 😀
- create test file:
dd if=/dev/urandom of=./benchmark_testdata_random bs=1024M count=3 iflag=fullblock
sha512sum ./benchmark_testdata_random > ./benchmark_testdata_random.sha512sum; # create checksum
- follow the “how to ltfs” above then:
time rsync -r -vv --update --progress ./benchmark_testdata_random /media/user/P0010SL5/
rm -rf ./benchmark_testdata_random; # delete testfile
# reboot?
# restore test file (it is cached somewhere X-D)
time rsync -r -vv --update --progress /media/user/P0010SL5/benchmark_testdata_random ./
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
benchmark_testdata_random
3,221,225,472 100% 128.10MB/s 0:00:23 (xfr#1, to-chk=0/1)
total: matches=0 hash_hits=0 false_alarms=0 data=3221225472
sent 3,222,012,008 bytes received 102 bytes 117,164,076.73 bytes/sec
total size is 3,221,225,472 speedup is 1.00
real 0m26.633s <- holy crap that's way faster than dd 1:1? HOW? like HOW?
echo "scale=10; 3.1*1024/26.6"|bc = 119.33MBytes/sec, how is this possible? (must be RAM caching)
how is LTFS doing caching during write + read? because write + read speeds are definitely greater than what the drive can actually do (approx. 15MBytes/s measured with dd on this LTO5)
“no internal cache into the LTFS because it assume sequential W/R. To be more accurate”
“Write side: LTFS is using write buffer because of following 2 reasons. But it works as ‘cache’ only while it is on the writer queue.”
- “Gather multiple write requests from FUSE, which is smaller than the LTFS block size, to the LTFS block size, 512KB by default”
- “Implementing writer queue for efficient sequential write”
“Read side: LTFS only has a one-block cache. LTFS returns data from one-block cache, which is previous read 512KB block, when it is hit.” (src)
# check data integrity
sha512sum -c ./benchmark_testdata_random.sha512sum
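to rule out the RAM/page cache when measuring the read-back (instead of rebooting), the page cache can be dropped first; a minimal sketch (needs root, mount point as above):

sync; # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches; # drop pagecache, dentries and inodes
time rsync -r -vv --progress /media/user/P0010SL5/benchmark_testdata_random ./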
- ok now what about copy and pasting files? X-D
- when done
umount /media/user/P0010SL5/
tools: diagnose problems: ibm tape tools
MAKE SURE TO UNMOUNT ALL MOUNTED (LTFS) TAPES OR EVEN RESTART THE SERVER BEFORE TESTING!
install_ITDT_GE_LinuxX86_64_9.6.2.20231114.bin
install_itdt_se_Linuxx86_64_9.6.2.20231114
install_itdt_se_Linuxx86_64_9.6.2.20240425
install_itdt_se_WindowsX86_64_9.6.2.20231114.exe
here are screenshots from the graphical ibm tape tool: (itdt_ge)
translates to: this LTO5 drive was running for approx. 6 years… then Error EC 6 happened… but sprague-europe.com was able to repair it 🙂 (wow! thanks! seems to be durable tech?)
there is a terminal tool and a gui tool from ibm to run diagnostic tests on the drives (more extensive than those available via the DELL front panel):
- https://www.ibm.com/support/pages/ibm-tape-diagnostic-tool-itdt-v95
- https://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Tape%20drivers%20and%20software&product=ibm/Storage_Tape/IBM+Tape+Diagnostic+Tool+ITDT&release=1.0&platform=All&function=all#1.%20Linux
- requires account creation
- alternatively: check out
- how to install the tools?
- gotta pick whether the user wants the graphical tool (GE, Java-based, requires a desktop) or the terminal-based one
# allow user to run sudo without password
echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# start graphical program as user "user" but running with root
vim /home/user/Desktop/itdt-ge.sh
export DISPLAY=':0.0'
sudo cp -v /home/user/.Xauthority /root/.Xauthority
sudo /home/user/software/install/itdt-ge
- terminal based:
./install_itdt_se_Linuxx86_64_9.6.2.20240425; # simply run the binary
IBM Tape Diagnostic Tool, (C) 2004, 2024 IBM Corporation
- The ITDT installer has extracted all files for the ITDT program now.
- Installation and usage of ITDT does not make any changes to or leave any traces in the system configuration.
- A folder "ITDT" has been created which contains the ITDT program file and 2 subfolders initially:
  - "license" folder with license files in different languages.
  - "scripts" folder for additional scripts.
- When executing the program for the first time 2 more subfolders will be created in the "ITDT" folder:
  - "input" folder for firmware files to be uploaded to devices.
  - "output" folder for generated log and dump files.
- To remove ITDT from your system simply erase the ITDT folder. Be aware that you also erase any log and dump files when doing so.
Please start the program by entering "itdt".
Installation folder is [/root/software/ITDT]
ITDT-SE 9.6.2.20240425 installation successfully completed.
# probably useful to be able to run it from anywhere simply by typing itdt
ln -sv /root/software/ITDT/itdt /usr/bin/itdt
- https://www.ibm.com/docs/en/t-tt-and-t?topic=information-tape-diagnostic-tool-itdt
Tape Drive broken? Who can repair it? EC error 6?
also, associated error messages are:
- web interface says “Bad Tape”
tapeinfo -f /dev/st1
- says:
TapeAlert[3]: Hard Error: Uncorrectable read/write error.
TapeAlert[4]: Media: Media Performance Degraded, Data Is At Risk.
TapeAlert[20]: Clean Now: The tape drive neads cleaning NOW.
TapeAlert[39]: Undefined
- … but the cleaning does not help
bought a TL4000 for 1500 bucks! and ran the default tests that are available on the front panel (that was NOT enough!)
both drives are basically broken with EC error 6 🙁 (so definitely got ripped off there 🙁)
for approx. 350 bucks it’s possible to repair a broken LTO tape drive via: https://www.sprague-europe.com/
“Tape drives: From origin our core business is the repair of tape drives. Where we started in 1999 with the repair of DLT tape drives for third party maintenance companies, it has succeeded in repairing up to the LTO-9 generation for OEMs worldwide. Leveraging on the micro mechanical expertise we are positioning ourselves as a truly independent service provider for out of warranty products for OEM and Third-Party maintenance contractors worldwide. It ensures independence from Manufacturing for availability, as well as keeping the contract cost and waste down. We cover the repair of every manufacturer, brand, generation, and variation of the DLT, SDLT, LTO, and Jaguar drives. For specific requests you can contact our sales representatives.” (src)
no guarantee that cleaning the head (did that with a cleaning cartridge) really helps (it did not):
- “If your LTO5 drive just keeps spitting out error “EC 6″ there are normally abnormal sounds when damaged tapes are used, this is either head clog or improperly tensioned tape”
- “To manually clean it you can disassemble the top casing and use a cotton swab soaked in isopropanol and gently run it up and down the head, dry it off with the other end of the swab and repeat until there are no more contaminants on the head. Use hardly any pressure as you could damage/misalign the head or supports.”
- “read error 6 usually indicates that the head is either really clogged and needs to be manually cleaned or is completely worn/dead”
- “More commonly it’s a sign that the head is worn to the point that the drive is dead”
- https://community.spiceworks.com/t/ibm-lto-5-tape-drive-error-6/748262/6
- https://www.reddit.com/r/sysadmin/comments/155pt6a/ibm_ltfs_lto_cleaning_lto_cleaning_tapes/
as can be seen from the drive’s stats, this happened after approximately only 6 years of usage!
why tape (probably) will face the fate of the diskette:
- reasons tape might die:
- software and hardware are too complex = too much maintenance involved, not enough UNIX KISS, not simple enough, not reliable enough (10-year-old HGST harddisks are still going strong with speeds of 80MBytes/sec)
- unless someone or some company makes the usage of tape SUPER reliable and dead simple: the hardware (robot, changer, drive and media = tapes) very reliable (by using error correction with checksums per default, no matter if some tape capacity gets lost, who cares, tape has lots of capacity), it will not fly, because it has too many disadvantages over magnetic harddisks or even flash storage
LTFS: no default packages for Debian
have to build it from src
GNU Linux (Ubuntu 22.04) how to build ltfs from src aka scratch
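a sketch of the usual autotools dance for the reference implementation at github.com/LinearTapeFileSystem/ltfs (the exact dependency package names are an assumption, check the project README for the current list):

# install build dependencies (Debian/Ubuntu package names, may differ per release)
apt install build-essential automake autoconf libtool pkg-config git \
    libfuse-dev libicu-dev libxml2-dev uuid-dev libsnmp-dev
# fetch, build and install
git clone https://github.com/LinearTapeFileSystem/ltfs
cd ltfs
./autogen.sh
./configure
make
make install; # installs ltfs, mkltfs (format a tape) and ltfsck (check a tape)
ldconfig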
manpages:
links:
tape library of the future: ChatGPT: what size would a 1PetaByte LTO9-based tape cartridge have to be?
“To store 1 PB of uncompressed data using LTO-9, you’d need 56 cartridges, which would fit into a square roughly: 84 cm × 84 cm” (that would be fine with me 😀 the question: is it workable? doable?)
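quick sanity check of that claim in the usual bc style (an LTO cartridge measures roughly 10.2 x 10.5 cm; the 8x7 grid layout is an assumption):

echo "scale=2; 1000/18"|bc; # 1 PB = 1000 TB / 18 TB per LTO-9 tape
# 55.55 -> round up to 56 cartridges
# 8 x 7 cartridges of ~10.2 x 10.5 cm = roughly 84 x 72 cm,
# so ChatGPT's "84 cm x 84 cm" square is in the right ballpark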
follow-up question: is it possible to build such cartridges? would it be possible to stack them vertically? and somehow read them in that position without moving the cartridges? (the reader would most likely have to sit below the cartridge and drive from one cartridge to the other)
Concept | Feasibility | Notes |
---|---|---|
1 PB LTO-9 cartridge | ❌ Not feasible | Exceeds current physical limits |
Vertical stacking | ✅ Feasible | Already used in tape libraries |
Read in-place without moving cartridges | ⚠️ Very difficult | Requires major redesign of tape handling, new drive architecture |
question for ChatGPT: can you create an image: tape library of the future, where every tape cartridge has 1 PetaByte of storage = size of each cartridge 84x84cm, tape cartridges stacked vertically next to each other,
read them in that position without moving the cartridges? (the reading head would most likely sit below the cartridges and move from one tape to another)
how many 84x84cm cartridges can fit into a traditional 19″ server rack?
a 19″ server rack is typically about 48cm wide, so it is of course possible to fit an 84x84cm cartridge into a rack (vertically), but it will stick out 36cm in total, e.g. 20cm at the front and 16cm at the back
answer: (the image generated was very bad so had to draw one, also the depiction is not quite correct: the tape cartridges are supposed to be square, so expect more depth than in the schematic)
With 2 rows (top and bottom) in a 42U rack, that’s up to 38 PB per rack
- A 19″ server rack with oversized 84×84 cm cartridges standing vertically like LP records
- Reading heads below the stack
- A futuristic data center setup
what ChatGPT assumes is that when cramming much longer tape (e.g., 70–80 km long) into an LTO9-style 2.15cm-thick cartridge, the tape itself and the drive's write+read head would come under serious wear and tear, with the risk of the tape snapping or the write+read head getting worn out fast (at 5m/s). WHETHER THAT IS ACTUALLY TRUE REMAINS TO BE TESTED
LTO9 cartridge: 2.15cm thick, ~1.1 km of tape = 18 TB
question for ChatGPT: given LTO9 density, how many km of tape and how much data could a completely new cartridge format of 45 × 45 × 5 cm hold?
(9x would fit next to each other in a 45cm-wide 19″ server rack)
answer of Llama 3.3 70B: a completely new cartridge format of 45 × 45 × 5 cm could potentially hold around 61.4 kilometers of tape and approximately 7.58 exabytes of data.
ChatGPT comes to a completely different number:
Parameter | Value |
---|---|
Tape length possible | ~19.5 km |
Data capacity (native) | ~338.5 TB |
Tape width | 5 cm (50 mm) |
Tape thickness | 5.2 microns (same as LTO-9) |
Cartridge size | 45 × 45 × 5 cm |
so 338.5TBytes * 9 cartridges (vertically next to each other) = 3.0465 PetaBytes… not too bad (bc check below)
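checking that multiplication in the usual bc style:

echo "scale=4; 338.5*9/1000"|bc; # 9 hypothetical cartridges side by side, in PB
# = 3.0465 PetaBytes, matches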
what would be the maximum physically possible tape length with LTO9 technology (5.2 microns)?
To estimate the maximum physically possible tape length for LTO-9 using a tape thickness of 5.2 microns, we can calculate how much tape can physically fit on a standard LTO cartridge reel.
- Tape thickness: 5.2 microns = 0.0052 mm
- Typical reel radius (outer): ~47 mm
- Hub radius (inner): ~16 mm
- Tape width: 12.65 mm (standard for LTO)
- Packing efficiency: assume about 90% (real-world packing losses)
Final Answer:
maximum physically possible LTO-9 tape length (5.2 μm thick): ~1,060 m (about 1.06 km) <- ChatGPT originally answered “~1,060 km (660 miles)”, which is off by a factor of 1000; the reel geometry above only allows about 1,060 meters (see the bc check below)
This is a theoretical maximum based on geometry. Actual LTO-9 cartridges store ~1,035 meters of tape, meaning LTO-9 already uses ~97% of the theoretical maximum length with current materials and design (not “0.1%” as ChatGPT claimed).
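the geometry behind it, redone in bc (values from the list above: R=47mm, r=16mm, t=0.0052mm, 90% packing; tape length = annulus area of the tape pack / tape thickness):

echo "scale=10; 3.14159*(47^2-16^2)/0.0052*0.9/1000"|bc
# = ~1061.9 -> about 1,060 METERS (~1.06 km), not 1,060 km
echo "scale=4; 1035/1062"|bc; # actual LTO-9 tape length vs. geometric limit
# = ~0.97 -> LTO-9 already uses ~97% of what the reel geometry allows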
LTO-9 tape uses a high-strength biaxially oriented polyethylene naphthalate (PEN) or polyethylene terephthalate (PET) base film — not pure BoPET, but similar, with proprietary magnetic coatings (BaFe or Strontium Ferrite).
- Material base film: PEN or PET
- Typical thickness: 5.2 microns (LTO-9)
- Width: 12.65 mm
Material | UTS (MPa) | Notes |
---|---|---|
PET (standard) | ~150–200 MPa | Used in older tapes (e.g. LTO-6) |
PEN (high-performance) | ~200–300 MPa | Likely base for LTO-9 |
BoPET (e.g. Mylar) | ~170–250 MPa | Similar structure to PEN |
Metal particle coatings | – | Negligible effect on tensile strength but affect stiffness slightly |
So, it takes ~16.45 newtons of force (about 1.677433 kgf) to snap the tape if pulled longitudinally (checked with bc below).
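that number can be reproduced with bc (assuming the ~250 MPa UTS from the table above; 1 MPa = 1 N/mm²):

echo "scale=6; 12.65*0.0052*250"|bc; # cross-section (mm^2) x UTS (N/mm^2)
# = 16.445 -> ~16.45 N
echo "scale=6; 16.45/9.80665"|bc; # newtons -> kilogram-force
# = 1.677433 kgf, exactly the number quoted above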
but is 61.4km of tape even possible without snapping or write+read head getting worn out fast?
There are several limitations to consider:
- Tape strength and durability: LTO tapes are designed to withstand a certain amount of tension and stress, but extremely long tapes would be prone to snapping or breaking, especially during handling, transportation, or usage.
- Tape guiding and tracking: As the tape length increases, it becomes more challenging to maintain accurate guiding and tracking, which could lead to errors, misreads, or even tape breakage.
- Write and read head wear: The write and read heads in tape drives are designed to withstand a certain number of passes over the tape. With an extremely long tape, the heads would need to make many more passes, leading to accelerated wear and potential failure.
- Tape tension and winding: Maintaining consistent tape tension and winding becomes increasingly difficult with longer tapes, which could result in tape deformation, creasing, or even tape pack collapse.
- Drive design and complexity: To accommodate such a long tape, the drive design would need to be significantly modified, which could add complexity, cost, and reliability concerns.
In practice, the maximum tape length is typically limited by the drive design, tape material, and mechanical constraints. For example, Llama claims LTO9 tapes are currently limited to 820 meters (0.82 kilometers) in length (which is wrong: LTO-9 cartridges hold ~1,035 meters, see above).
To put this into perspective, a more realistic estimate for a new cartridge format might be to aim for a tape length of around 2-5 kilometers, which would still require significant advancements in tape materials, drive design, and mechanical engineering. However, even at these shorter lengths, the tape would need to be designed with enhanced strength, durability, and stability to ensure reliable operation and minimize wear on the write and read heads.
ChatGPT also mentions that such long tapes might not be physically doable (?) and that read and write heads, which are in light but physical contact with the medium, might wear out fast(er).
but the physical limits of a tape with a thickness of ~5.2 microns (0.000205 in) are unfamiliar territory here
how long could a LTO9 tape be if it would have to withstand maximum pull force of 10 newtons?
If the only force acting on the tape is its own weight, an LTO-9 tape could theoretically be:
~11 km long before snapping under 10 N of tension.
In a drive, however, tape is supported on reels, and tension is controlled, so even 1000+ km of tape could be safely wound if tension is kept below 10 N — which is the case in real systems
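the ~11 km figure checks out with bc (cross-section 12.65 mm x 5.2 µm, converted to meters; the PEN density of ~1360 kg/m³ is an assumption):

echo "scale=20; 10/(0.01265*0.0000052*1360*9.81)"|bc; # 10 N / tape weight per meter
# = ~11395 m -> a free-hanging LTO-9 tape could be ~11.4 km long before 10 N is exceeded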
so why do vendors not make bigger tape cartridges that can hold more data?
When it comes to the future of large-scale data storage, there are several technologies and trends that are likely to play a crucial role. The sheer growth of data worldwide, along with the need for efficient, scalable, and cost-effective solutions, will drive innovation. Below are some of the most likely advancements and future directions in data storage:
1. Exabyte-Scale Tape Storage (e.g., LTO and Beyond)
While tape storage has been around for decades, it is still a preferred method for storing large amounts of data in industries like archives, backups, and long-term storage. The future of tape storage will likely involve:
- LTO-10 and Beyond: As the LTO (Linear Tape-Open) format progresses, we can expect tape drives that offer even higher capacities—potentially moving into the exabyte range. Innovations such as higher-density magnetic tape (e.g., using barium ferrite or heat-assisted magnetic recording), smarter compression algorithms, and advanced tape management systems will push the capacity limits of tape cartridges even further.
- LTO-9 currently offers 18 TB native capacity, with potential future versions (LTO-12, for example) reaching 100 TB or more per tape.
- Cost-Effectiveness: Tape remains one of the most cost-effective methods for storing large amounts of data, particularly for cold or archival data, and that cost advantage will continue as capacity increases and tape technology improves.
<- which is a bold guess and a big IF: can the few remaining producers of tape push density even further? (just as with HAMR harddisks: “The technology was initially seen as extremely difficult to achieve, with doubts expressed about its feasibility in 2013” (src))
- Seagate released HAMR drives for testing to a bunch of selected customers, but there still seem to be hurdles to market:
- Early HAMR drives are expensive to produce, especially compared to conventional PMR (Perpendicular Magnetic Recording) and SMR (Shingled Magnetic Recording) drives.
- Manufacturers want to ramp up yields and reduce costs before offering HAMR to a wider market.
- PMR and SMR are still evolving and can reach up to 24TB+ per drive.
- MAMR (Microwave-Assisted Magnetic Recording) is another competing tech, especially backed by Western Digital, and it’s easier to integrate with existing production lines.
- Seagate has already shipped 30TB HAMR drives in small volumes (as of 2023–2024).
- Mass market rollout (consumer-level or mainstream enterprise) is expected sometime in 2025–2026, depending on manufacturing scale, demand, and cost optimization.
- what is a Seagate 30TB HAMR harddisk estimated to cost?
- As of May 2025, Seagate’s 30TB HAMR (Heat-Assisted Magnetic Recording) hard drives—specifically the Exos Mozaic 3+ series—are primarily available to enterprise and cloud service providers. Seagate has not officially disclosed the pricing for these drives.
Estimated Pricing:
- Enterprise Context: Given that 24TB enterprise HDDs are priced around €550–€600, it’s reasonable to estimate that 30TB HAMR drives could be priced between €700 and €900, depending on factors like volume discounts and contractual agreements.
- Consumer Market: For individual consumers, once these drives become widely available, prices may be higher due to lower purchase volumes and additional retail markups.
Availability Timeline:
- Seagate plans to expand the availability of these drives beyond enterprise clients. According to Seagate’s roadmap, HAMR drives are expected to become available to end-users by mid-2025.
- so magnetic harddisk vendors are also struggling to find materials and technology to push the limits of data density higher and higher; both magnetic spinning disks and tape might get overtaken if flash storage can become even smaller + cheaper?
- https://www.tomshardware.com/pc-components/hdds/seagate-on-track-for-100tb-hdds-by-2030-claims-current-top-drive-will-triple-in-capacity-in-5-years
- https://www.tomshardware.com/pc-components/hdds/western-digital-to-unveil-44tb-hamr-hdds-in-2026-100tb-in-2030
- is there a problem with the HAMR technology? (reliability? costs?)
- possible reliability problems:
- drive head must endure repeated thermal cycling—potentially affecting long-term durability
- demands extreme precision and advanced materials
- Seagate claims their HAMR drives exceed the reliability of current PMR drives, this is based largely on internal testing and controlled enterprise trials (Independent, long-term field reliability data is still limited)
- possible cost problems: current unit costs are too high for consumer markets
- PMR and SMR still offer good density increases at a lower cost and are easier to manufacture.
- MAMR (Microwave-Assisted Magnetic Recording, backed by Western Digital) is a less radical departure from traditional designs, making it easier to scale and integrate
- HAMR isn’t fundamentally flawed — it’s just still maturing. Once manufacturing scales and reliability is validated in real-world use, it could become the dominant tech for high-capacity drives. Seagate is betting heavily on this.
2. Cloud Storage and Hybrid Architectures
- Massive Cloud Storage: Public and private cloud storage will continue to grow, with more and more businesses and individuals offloading data to the cloud for scalability and convenience. Cloud providers will likely integrate new storage technologies, such as object storage (e.g., Amazon S3, Google Cloud Storage) to offer vast amounts of data storage at lower costs.
- Hybrid Storage Models: A hybrid approach that combines on-premise storage (for fast access) and cloud storage (for cost-effective archiving) will likely dominate. Companies will use cloud storage to manage their cold data (rarely accessed), while retaining fast, local access to frequently accessed data.
- Software-Defined Storage (SDS): The future of cloud storage will also be enhanced by software-defined storage. This approach allows businesses to virtualize storage resources across multiple locations, enabling the creation of large, flexible storage pools, reducing complexity, and improving scalability.
3. DNA Data Storage
One of the most fascinating and futuristic possibilities is DNA-based data storage. Researchers have been experimenting with encoding data in synthetic DNA sequences, taking advantage of DNA’s extraordinary density and stability as a storage medium.
- Density: DNA is capable of storing an unimaginable amount of data in a very small physical space. For instance, it’s been estimated that one gram of DNA could store approximately 215 petabytes of data. The potential for DNA storage is vast, and it could revolutionize the storage industry, especially for long-term archival purposes.
- Durability: DNA is incredibly stable and can last for millennia under the right conditions, making it a perfect candidate for preserving data over time.
While DNA storage is still in its infancy, ongoing research and development might make it a viable alternative for extremely large-scale data storage in the future. However, it may take time to perfect and scale up due to technical challenges related to writing and reading DNA at large scales.
4. Quantum Storage
Quantum computing has also made strides in recent years, and researchers are now exploring how quantum mechanics could be used for data storage:
- Quantum Storage Technologies: Quantum storage promises to leverage quantum principles, like superposition and entanglement, to enable new types of storage that could store and process information exponentially faster and more efficiently than current technologies.
- Ultra-High Capacity: Theoretically, quantum systems could allow for massive data densities and speeds, allowing for instant access to vast amounts of data.
- While quantum storage is not yet mainstream, breakthroughs in quantum memory and quantum data transfer could be pivotal for future high-performance storage systems, especially when combined with quantum computing.
5. 3D NAND Flash and Memory-Based Storage
The future of solid-state drives (SSDs) will likely focus on 3D NAND and other memory-based technologies:
- 3D NAND: As traditional 2D NAND flash memory reaches its limits, 3D NAND—where memory cells are stacked vertically—will continue to increase storage density, enabling faster and more efficient memory storage devices.
- Memory-centric Storage: There will likely be a shift toward memory-centric storage where DRAM and flash memory are used more extensively for persistent storage in data centers. These memory technologies offer ultra-fast data access and low latency, which will be crucial for emerging technologies like AI, machine learning, and big data processing.
- Storage Class Memory (SCM): Technologies like Intel Optane represent a bridge between DRAM and traditional storage, providing near-DRAM speeds at a fraction of the cost. SCM could play a pivotal role in future storage solutions by offering both speed and persistence.
6. Edge and Distributed Storage
As we move towards 5G and the Internet of Things (IoT), data is increasingly being generated at the edge of networks, such as in smart devices, drones, vehicles, and sensors.
- Edge Storage: Edge storage will become crucial for processing and storing data locally before it’s sent to centralized cloud servers. Distributed storage systems that allow for local storage at the edge, while also enabling seamless synchronization with cloud storage, will become more prevalent.
- Decentralized Storage: Technologies like blockchain and distributed file systems (e.g., IPFS, Filecoin) could drive decentralized storage networks, where users share storage resources in a peer-to-peer fashion. These systems could make large-scale data storage more resilient and accessible.
7. Storage Efficiency Through AI and Compression
- AI-driven Storage Management: Artificial Intelligence (AI) will increasingly play a role in optimizing storage efficiency. AI algorithms could automatically manage and compress data, predict storage needs, and optimize data distribution across various tiers of storage.
- Advanced Data Compression: The development of new compression algorithms could drastically reduce the amount of physical storage required to store large datasets, allowing for more data-efficient storage systems.
Conclusion
The future of storing large amounts of data will likely be a combination of these technologies:
- Tape storage will continue to be a dominant option for cold storage, with higher capacity and efficiency.
- Cloud storage will grow, with more sophisticated hybrid architectures and distributed storage solutions.
- DNA data storage and quantum storage hold immense promise, though they are still in the research and early development phases.
- Memory-based storage and advanced flash storage will continue to provide high-speed and high-density solutions for active data.
Ultimately, the future will be characterized by diverse storage mediums tailored to specific needs—whether for speed, capacity, durability, or cost-effectiveness.
question: given the density of 2tb per microsd card: how much data could be stored in a 45x45x5cm space?
ChatGPT: A 45 × 45 × 5 cm space could theoretically store up to 120 petabytes (PB) of data using 2 TB microSD cards, assuming tight packing with no wasted space.
Llama 3.3 70B: in a 45x45x5cm space, approximately 135 petabytes of data could be stored, assuming the microSD cards are packed tightly without any gaps
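both answers are easy to check with bc (a microSD card measures 15 x 11 x 1 mm; gapless packing is the assumption):

echo "scale=1; 450*450*50/(15*11*1)*2/1000"|bc; # volume ratio x 2 TB per card, in PB
# = ~122.7 PB -> ChatGPT's "up to 120 PB" checks out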
there the world got the answer why the future of storage is most likely microSD cards X-D (if maximum storage density is required); otherwise a 2TB 3D NAND SSD is almost 50% cheaper and faster than a 2TB microSD card, which (again) leads to NVMe cards (even faster and cheaper) being the already de facto successor to SSDs, BUT with the problem of: how to connect many of them to 1x server? (no solution yet, as most mainboards have a very limited number of PCIe slots)
so NVMe (currently) in a storage server scenario will most likely be used as buffer before storing data “permanently” on magnetic disks, tape or other media, flow of data: RAM -> NVMe -> magnetic disks, tape or other media
there are rare, very expensive motherboards (like the ASUS Pro WS WRX80E Sage) that feature 7 PCIe slots, so given that this (UNTESTED!) PCIe to 4x NVMe adapter works (this one might be of better quality, while the MSI XPANDER looks great (active cooling)), that would give a total connecting capability of 28x NVMe of 2TB each = a RAID10-based swap partition of 28TBytes
as mentioned in another article, ext4 might not be a good filesystem for large storage, as it does not checksum the stored data; Btrfs (“butter fs”) enables checksumming (CRC32C) by default, and setup is as simple as: mkfs.btrfs /dev/sdX
Btrfs comes with its own RAID implementation! (which might be a wee bit slower than mdadm, but still)
apt install btrfs-progs
mkfs.btrfs -d raid10 -m raid10 /dev/sdX /dev/sdY /dev/sdZ /dev/sdW
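once created, mount it and let btrfs actually verify all those checksums from time to time; a minimal usage sketch (the mount point /mnt/storage is an assumption):

mount /dev/sdX /mnt/storage; # any member device of the RAID10 can be passed to mount
btrfs filesystem df /mnt/storage; # data/metadata profiles should both say RAID10
btrfs scrub start -B /mnt/storage; # read everything + verify CRC32C checksums (-B = foreground)
btrfs device stats /mnt/storage; # per-device error counters, should all be 0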
btrfs-RAID10, just like mdadm-RAID10, gives almost +100% more performance over RAID1 (benchmark1, benchmark2)
BUT a lot can go wrong when storing data, so using a lot of CRC and self error correction is advised, no matter the medium, and that’s why this ext4 user (ext4 = no data CRC) is putting every ext4 filesystem on auto check on boot via tune2fs (sketch below)
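a minimal sketch of that tune2fs auto-check setup (the device name is an assumption):

tune2fs -c 1 /dev/sdX1; # max-mount-count = 1 -> fsck on every boot/mount
tune2fs -i 0 /dev/sdX1; # disable the time-based interval, count-based checks only
tune2fs -l /dev/sdX1|grep -i "mount count"; # verify: shows mount count + maximum mount count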
Links:
liked this article?
- only together we can create a truly free world
- plz support dwaves to keep it up & running!
- (yes the info on the internet is (mostly) free but beer is still not free (still have to work on that))
- really really hate advertisement
- contribute: whenever a solution was found, blog about it for others to find!
- talk about, recommend & link to this blog and articles
- thanks to all who contribute!
