How much time are you spending keeping all of that in sync...?
Just having a lot of drives is not a good use of resources from a data-protection standpoint. It protects you against the catastrophic failure of one or two drives simultaneously, but you seem unprotected against most other forms of data loss: for example, silent corruption of files (what are you using to ensure integrity? I don’t see any mention of hashes or DVCSes), or mistaken deletions/modifications (what stops a file deletion from percolating through each of the 7 drives before you realize, 6 months later, that it was a critical file?).
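To make the integrity point concrete, here is a minimal sketch of the sort of hash manifest I have in mind, in Python (the paths and manifest name are illustrative). Re-running it later and diffing the output against the stored manifest reveals any file that has silently changed:

    import hashlib
    import os
    import sys

    def sha256_of(path, bufsize=1 << 20):
        # Stream the file through SHA-256 so large files need not fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def write_manifest(root, manifest="MANIFEST.sha256"):
        # One "HASH  PATH" line per file, matching sha256sum's output format.
        with open(manifest, "w") as out:
            for dirpath, _, filenames in os.walk(root):
                for name in sorted(filenames):
                    path = os.path.join(dirpath, name)
                    if os.path.abspath(path) == os.path.abspath(manifest):
                        continue  # don't hash the manifest itself
                    out.write(f"{sha256_of(path)}  {path}\n")

    if __name__ == "__main__":
        write_manifest(sys.argv[1] if len(sys.argv) > 1 else ".")

Since the output follows the usual "hash, two spaces, path" convention, it should also be checkable directly with sha256sum -c, no script required for the verification side.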
For improving general safety, you should probably drop some of those drives in favor of adding protection in the form of read-only media and error detection + forward error correction (e.g. periodically making a full backup with PAR2 redundancy to Blu-rays), and more frequent backups to the backup drives.
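For reference, the PAR2 step could be scripted along these lines, assuming the par2cmdline tool is installed and on the PATH (the directory, file names, and 10% redundancy level are all illustrative):

    import pathlib
    import subprocess

    backup_dir = pathlib.Path("/path/to/backup")  # illustrative path
    files = sorted(str(p) for p in backup_dir.glob("*") if p.is_file())

    # Create recovery blocks covering ~10% of the data; par2 can later
    # repair up to that much corruption or loss in the burned copy.
    subprocess.run(
        ["par2", "create", "-r10", str(backup_dir / "backup.par2"), *files],
        check=True,
    )

    # After reading the discs back, verify (and, if damaged, run "par2 repair"):
    subprocess.run(
        ["par2", "verify", str(backup_dir / "backup.par2")],
        check=True,
    )

Both the data files and the generated .par2 recovery volumes then go onto the disc together.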
Synchronization is automatic. It does not take up any of my time.
I have enough drive space to maintain backups going back several months, which protects against both file corruption (volume corruption is taken care of by redundancy) and mistaken deletion/modification. In any case, the files in question are mostly text or text-based, not binary formats, so corruption is less of a concern.
Code, specifically, is of course also kept in git repositories.
Backups to read-only media are a good idea, and I do them periodically as well (not Blu-rays, though; DVDs or even CDs suffice, as the amount of truly critical data is not that large).