I’ve been considering paying for a European provider, mounting their service with rclone, and thus making it transparent to most anything I host.
How do y’all backup your data?
Prayer
You don’t have to worry about the backups. It’s the data recovery that will require divine intervention.
Jesus is my copilot
raid parity.
Raid is backup right?
of course /s
It protects against drive failure. That is the threat I am most worried about, so it’s fine for me.
drive failure
Perhaps unintended, but that singular is very much relevant. Unless you’re doing RAID 6 or the like, a simultaneous failure of two drives still means data loss. It’s also worth noting that drives of the same model and batch tend to fail after similar amounts of time.
Oh, don’t worry they’re a random mix of old drives I had lying around, they’re most certainly not the same model, let alone batch!
(But yes, fair call if you have a big NAS. I have 2TB in my desktop.)
same model and batch
This is why when you buy hard drives, you should split the order across several stores rather than buying all of them from one store. You’re much more likely to get drives from different batches.
I wrote my own thing. I didn’t understand how the standard options worked so I gave up.
Two hard drives of the same size, one on site and one off site.
Where do you keep your off-site one? Like a friend or family member’s house?
At home and at the shop where I work. At work the drives are actually stored in a Faraday cage.
That’s the thing. I don’t.
I use Kopia
Manually plug in a few disks every once in a while and copy the important stuff. Disks are offline for the most part.
I keep important files on my NAS, and use Borgbackup with borgmatic for backups. I’ve got a storage VPS with HostHatch that’s $10/month for 10TB of space (it was a special Black Friday deal a few years ago).
Make sure you don’t just have one backup copy. If you discover that a file was corrupted three weeks ago, you should be able to restore the file from a three week old backup. rsync and rclone will only give you a single backup. Borg dedupes files across backups so storing months of daily backups often isn’t a problem, especially if the files rarely change.
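Roughly, that pattern looks like this with plain Borg (the repository path and retention counts are placeholders, not anyone’s actual setup):

```sh
#!/bin/sh
# Each run creates a new archive; Borg dedupes at the chunk level,
# so months of daily archives mostly share the same stored data.
REPO=/mnt/backup/borg-repo   # placeholder repository path

borg create --stats --compression zstd \
    "$REPO::{hostname}-{now:%Y-%m-%d}" \
    /home /etc

# Keep a rolling window of old versions so a file that was corrupted
# weeks ago can still be restored from an older archive.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
```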
Also make sure that ransomware or an attacker can’t mess up your backup. This means it should NOT be mounted as a file system on the client, and ideally the backup system has some way of allowing new backups while disallowing deleting old ones from the client side. Borg’s “append only” mode is perfect for this. Even if an attacker were to get onto your client system and try to delete the backups, Borg’s append-only mode just marks them as deleted until you run a compact on the server side, so you can easily recover.
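If the repository lives on a server you reach over SSH, one common way to enforce that is to pin the client’s key to an append-only `borg serve` in the server’s `authorized_keys` (the path and key here are just examples):

```sh
# ~/.ssh/authorized_keys on the backup server (key truncated, path is a placeholder)
command="borg serve --append-only --restrict-to-path /srv/borg/client1",restrict ssh-ed25519 AAAA...example... client1-backup
```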
Local to Synology. Synology to AWS with Synology’s backup app. It costs me pennies per day.
Same, although AWS is my plan B. For plan A, I have an older Synology that is a full backup target.
On site? I put enterprise drives in my NAS. Always have, and I’ve never had a drive fail. If one does, RAID is good until the replacement arrives.
RAID is no backup. RAID helps you against drive failure.
Backup helps you if you or some script screwed up your data, or if you need to go back to last month’s version of a file for whatever other reason.
AWS helps if your house burns down and you need to set up again from scratch.
Versioning is a feature completely separate from RAID or dual NAS or whatever else you do. Your example of the house burning down is exactly why I questioned the dual NAS… both NASes will be toast.
So please, tell me again why you need two NASes for versioning? Maybe you’re doing some goofy hack, then OK. That’s still silly. Just do proper versioning. If you’re coding, just use git. Don’t reinvent the wheel.
I’m stunned that you are unfamiliar with the versioning feature of backups. In my bubble this has been best practice ever since Apple came along with Time Machine, but really we tried it even before that with rsync (roughly sketched below), albeit with only limited success.
This is different from git because it takes care of all files and configurations, and it does so automatically. Furthermore, it also includes rules for when to thin out and discard old versions, because space remains an issue.
Synology’s backup tool is quite similar to Time Machine, and that’s what I am using the second NAS for. I used to have a USB hard drive for that task, but it crashed, and my old Synology and a few old disks were available. This is better because it also protects against a number of attacks that make all mounted paths unusable.
Git is not a backup tool. It’s a versioning tool, best used for text files.
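For what it’s worth, the rsync flavour of that Time Machine idea usually hinges on `--link-dest`: each snapshot is a full directory tree, but unchanged files are hard links to the previous run. A rough sketch, with made-up paths:

```sh
#!/bin/sh
# Time-Machine-style snapshots with rsync: unchanged files become
# hard links into the previous snapshot, so they take almost no space.
SRC=/home/
DEST=/mnt/backup/snapshots            # placeholder destination
NEW="$DEST/$(date +%Y-%m-%d_%H-%M)"

rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$NEW"
ln -sfn "$NEW" "$DEST/latest"

# Thinning out old snapshots (daily/weekly/monthly) is a separate
# cron job; that's the part the dedicated tools handle for you.
```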
Your condescension is matched only by your reading comprehension. I do not know what your requirements are. You said coding and alluded to versioning, so I tossed out git. Enjoy your tech debt. I hope it serves you well and supports your ego for many years.
Your condescension is matched only by your reading comprehension.
Bruh. Look into a mirror.
I do an automated nightly backup via restic to Backblaze B2. Every month, I manually run a script to copy the latest backup from B2 to two local HDDs that I keep offline. Every six months I restore the latest backup on my PC to make sure everything works in case I need it. For peace of mind, the automated backup includes a health check through healthchecks.io, so if anything goes wrong, I get a notification.
It’s pretty low-maintenance and gives a high degree of resilience:
- A ransomware attack won’t affect my local HDDs, so at most I’ll lose a month’s worth of data.
- A house fire or server failure won’t affect B2, so at most I’ll lose a day’s worth of data.
restic has been very solid, includes encryption out of the box, and I like the simplicity of it. Easily automated with cron etc. Backblaze B2 is one of the cheapest cloud storage providers I could find, an alternative might be Wasabi if you have >1TB of data.
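The shape of that nightly job, assuming a B2 bucket and a healthchecks.io ping URL (bucket name and check UUID are made up):

```sh
#!/bin/sh
# Nightly restic backup to Backblaze B2, pinging healthchecks.io at the end.
# Credentials come from the environment: RESTIC_PASSWORD, B2_ACCOUNT_ID,
# B2_ACCOUNT_KEY. The bucket name and check UUID below are placeholders.
export RESTIC_REPOSITORY="b2:my-backup-bucket:/"

if restic backup /home /etc \
   && restic forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
then
    curl -fsS -m 10 --retry 3 https://hc-ping.com/your-check-uuid
else
    curl -fsS -m 10 --retry 3 https://hc-ping.com/your-check-uuid/fail
fi
```

Hitting the `/fail` endpoint flips the check red, so healthchecks.io also alerts on explicit failures rather than only on a missed ping.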
How much are you backing up? Admittedly Backblaze looks cheap, but at $6/TB that leaves me with $84 per month, or just over $1000 per year.
I’m seriously considering an RPi 3 with a couple of external disks in an outbuilding instead of the cloud.
Oh, I think we’re talking different orders of magnitude here. I’m in the <1TB range, probably around 100GB. At that size, the cost is negligible.
Isn’t Backblaze like $6 per TB 🤔🤔🤔
So $216 a year?
$6 × 14TB = $84/month × 12 months = $1008 per year, or did I misread the prices?
Sorry, I thought you or somebody said they store 3TB. Probably I’m mistaken, sorry 🥲
Also, you know it’s possible to set up backups to run when the drive connects; it’s also a good idea to turn off networking beforehand 😶🌫️ (It’s also possible to do a “timer USB hub”: it’s not very off-site, but a switch can turn on every X days, the machine mounts the drive and does the backup, and then the USB hub turns off. Imagine putting it in a fireproof safe with a small hole for a USB cable.)
Also, I’m using ntfy.sh for notifications. And if you’re using RAID, you can set it up to notify you on a drive failure.
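The ntfy.sh part can literally be a curl at the end of whatever backup script runs (the topic name is made up; anyone who knows a public topic can subscribe to it, so pick something unguessable or self-host):

```sh
#!/bin/sh
# Run the backup, then push a notification to ntfy.sh either way.
if /usr/local/bin/run-backup.sh; then     # placeholder backup command
    curl -d "Backup finished OK" ntfy.sh/my-backup-topic
else
    curl -H "Priority: high" -d "Backup FAILED" ntfy.sh/my-backup-topic
fi
```

For the RAID side, on Linux mdraid a similar script can be hooked into `mdadm --monitor` so it fires when an array degrades.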
deleted by creator
What’s the pricing like? Looking at my own use case, a full backup to B2 would cost me almost 100 bucks a month if I understand the pricing correctly. Given the way my data storage keeps growing, it seems cheaper to me to just buy more HDDs and store them in a safe at my bank.
deleted by creator
a full backup to B2 would cost me almost 100 bucks a month if I understand the pricing correctly
At that point, a Hetzner storage box or auction server would likely end up cheaper
thx for the info, will have a look into those options
Wait, Proxmox Backup Server runs on ARM?
deleted by creator
deleted by creator
rclone to Dropbox and OpenDrive for things I care about, like photo backups and RAW backups, and an encrypted rclone volume on both for things that need to be backed up but also kept secure, such as scans of my tax returns, mortgage paperwork, etc. I maintain this script for the actual rclone automation via cron.
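Not that commenter’s actual script, but the general shape of an encrypted rclone remote plus a cron’d sync is roughly this (remote names, paths, and schedule are placeholders):

```sh
# ~/.config/rclone/rclone.conf (excerpt; all values are placeholders)
[dropbox]
type = dropbox
token = {"access_token":"..."}

[dropbox-crypt]
type = crypt
remote = dropbox:encrypted
password = <obscured-by-rclone-config>

# crontab: nightly sync of sensitive documents to the encrypted remote
0 2 * * * rclone sync /home/me/documents/taxes dropbox-crypt:taxes --log-file /var/log/rclone-taxes.log
```

The crypt remote encrypts file contents (and optionally names) before anything reaches the provider, so the plain Dropbox remote only ever sees ciphertext.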
I sync all my files across 4 different computers in my house (rsync and Nextcloud) and then back up to OneDrive and Google Drive.
4 different computers? Wow…
The 4 different computers are my vr desktop, my laptop, my home server, and my wife’s computer 🤪