My music collection is now getting pretty large - nearly 5TB. My main NAS is at the house (4 × 8TB, RAID 5), with three local backup drives and a further offsite NAS running RAID 6.
I’ve started to get pretty anal over bit rot. Does anyone else consider this? I’m currently using rsync to back up the data - both offsite and local backups.
I’m doing a bit of research for a decent application that will store and compare files bit for bit and detect any potential changes, but not using MD5… the only program I've found so far (but it's Windows-only, so this is a pain for me) is Bitrot detector.
Ideally, I’d like something that uses a database (even SQLite) to store this information, rather than outputting to text files. I’m thinking that if I can't find anything, I might just put together a simple SQL server and build a set of CLI scripts to calculate and store the SHA256 information.
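For what it's worth, a minimal sketch of what those CLI scripts might look like - SHA-256 into SQLite, flagging files whose hash changed even though size and mtime didn't (the classic silent-corruption signature). Everything here (table layout, file names, the one-second mtime tolerance) is just my assumption, not any existing tool:

```python
# Sketch: walk a directory, hash each file with SHA-256, and store
# path/size/mtime/digest in SQLite for later bit-rot comparisons.
import hashlib
import os
import sqlite3


def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so multi-GB rips don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def scan(root, db_path="checksums.db"):
    """Hash everything under root; return files that look silently corrupted.

    A file is flagged when its stored hash differs from the current one
    while size and mtime are unchanged - i.e. nobody legitimately wrote it.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS files (
               path TEXT PRIMARY KEY, size INTEGER, mtime REAL, sha256 TEXT)"""
    )
    flagged = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            digest = sha256_of(p)
            row = con.execute(
                "SELECT size, mtime, sha256 FROM files WHERE path = ?", (p,)
            ).fetchone()
            if (row is not None and row[2] != digest
                    and row[0] == st.st_size
                    and abs(row[1] - st.st_mtime) < 1.0):
                flagged.append(p)  # same size/mtime, different bits
            con.execute(
                "INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                (p, st.st_size, st.st_mtime, digest),
            )
    con.commit()
    con.close()
    return flagged
```

Run it on a schedule (cron) and alert on a non-empty result; keep the database outside the scanned tree so it doesn't flag itself.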
Any advice welcome.
Oh gee something ELSE to think about!
By the time you detect it, it's too late. Re-write the data (sector/track) now and then.
Or make sure you have an extra NAS. Copy from NAS1 to NAS2 and then run from NAS2. Next time, copy NAS2 -> NAS1 and switch back to running from NAS1.
Yes, at the moment I'm only writing incrementals to the backups. So the initial plan was that, if corruption is detected, I will write the data back - but I need a known-good reference point to do the comparisons against.
Asking for a friend . . .
How do you know when a bit has rotted?
Usually it's an abnormal sector count on the drive(s). Chances are then high that part of the data is corrupt. I had this recently on the offsite NAS: rebuilt the array and found two drives with 30-40 abnormal sectors (read errors thrown).
Entropy rules OK.
Bit rot vs CD rot.
Like you I have a backup regime in place:
NAS > External HDD + off-site backup; and the original CDs etc.
I think I will not worry additionally, but I appreciate your attention to detail.
I’ve had this happen a few times with data lost from cold storage on disks 15 years old. Sadly work related.
To guard against this, I now have both prevention and monitoring in place.
- Each year I remove a different disk from the NAS, zero it, and reinsert it so the array rebuilds and all the data gets rewritten.
- I wrote some software based on Tripwire's behaviour and set up a rule so that if it detects a change to any file after a lockdown point, I get an email telling me which file(s) no longer match.
The latter gets a fair few false positives because it runs more often than I update the lockdown points, but if it flagged a movie or track that had been there for years, I'd know something was afoot.
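The lockdown-point idea above can be sketched in a few lines - freeze a manifest of hashes, then later report anything that no longer matches. This is my own illustrative version, not the poster's actual software; the manifest format (JSON) and function names are assumptions:

```python
# Sketch of a Tripwire-style check: record a manifest of SHA-256
# digests at a "lockdown point", then later report any file whose
# current digest no longer matches (or which has vanished).
import hashlib
import json
import os


def digest(path):
    """SHA-256 of a file, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def lockdown(root, manifest_path):
    """Snapshot the current digest of every file under root."""
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            manifest[os.path.relpath(p, root)] = digest(p)
    with open(manifest_path, "w") as f:
        json.dump(manifest, f)


def verify(root, manifest_path):
    """Return relative paths that changed or disappeared since lockdown."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    flagged = []
    for rel, expected in manifest.items():
        p = os.path.join(root, rel)
        if not os.path.exists(p) or digest(p) != expected:
            flagged.append(rel)
    return sorted(flagged)
```

Files you legitimately add or edit between lockdowns show up as mismatches - hence the false positives mentioned above; re-running `lockdown` after intentional changes resets the baseline.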
Of course a cold backup is critical for recovery. RAID is redundancy, not resiliency. And, although not bit rot, I’ve lost a RAID controller before and the cold backup prevented any tears.
Bit rot is a thing. It happens in many a data centre on legacy systems, when people change roles and validating backup ages slips under the radar. It can happen after a few years, though in my experience no HDD backup under 10 years old has had it, and no tape backup under maybe 6-7. If it happens mid-file, you may not notice in media files. If it happens in a file header or in filesystem metadata, it can render a file unreadable. If it happens somewhere else, like a disk label, you might be screwed without some very advanced debugging and repair knowledge.
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.