I ported the Raspberry Pi 5 operating system and my applications from the SD card to an NVMe drive on the Pimoroni adapter.
The Pi 5, with the active cooling HAT and NVMe drive, is mounted to my CNC controller in a cabinet. It no longer uses the original SD card.
My question is: what is the best way to back up the entire contents of the NVMe drive in case of failure? Dismounting the NVMe drive isn’t an ideal option, and when I connect the drive to Windows it only shows one partition.
Is there a way to back it up in situ? Perhaps to a USB flash drive?
The first copy is slow because it needs to copy everything, but the second time it is fast since it only updates the backup.
One thing to be aware of (and this is also true for the tools that @lufbrarunner mentioned): these are system-level backup tools. For backing up application or user data, you should use something that is built for that purpose.
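To make the incremental behaviour above concrete: an rsync run over the live system behaves exactly like that (rsync is just my example here, not necessarily the tool being discussed; the USB mount point and the exclude list are assumptions, not from this thread):

    # Mirror the running root filesystem to a USB drive mounted at /mnt/usb-backup.
    # The first run copies everything; later runs only transfer changed files.
    # Excluding /mnt also keeps the backup from copying itself.
    sudo rsync -aAX --delete \
        --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
        --exclude=/tmp --exclude=/mnt --exclude=/media --exclude=/lost+found \
        / /mnt/usb-backup/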
Point it at the NVMe drive you want to back up and at a target location with enough free space. The destination device will need to be as large as or larger than your source NVMe drive!
This will create an image of the drive you would like to back up.
Such an image can then be written back to another drive with enough capacity for it, using the “Flash from file” option of the same application.
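For anyone who prefers the command line, a rough in-situ equivalent of that read-back step is plain dd (a sketch; /dev/nvme0n1 and the USB mount point are assumptions, so check the real device name with lsblk first, and ideally do it with the system as idle as possible since the root filesystem is in use):

    # Image the whole NVMe drive to a file on an attached USB drive.
    sudo dd if=/dev/nvme0n1 of=/mnt/usb/nvme-backup.img bs=4M status=progress conv=fsync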
One question about space: does “space” refer to the device, the partition, or the used space? This really makes a difference. I tend to install to large partitions to have some headroom, and later back up to smaller ones that are only large enough to hold the files that are actually installed.
Perhaps a poor choice of words on my part.
To back up the entire volume itself. The entire data storage device.
Or a partition if preferred. Etcher is quite versatile and reliable I’ve found. YMMV.
Sorry, but this is still a misunderstanding. Take my example of a 60GB drive of which 12GB are used. Does the imager copy 12GB to the target drive, or does it copy the complete 60GB? In the first case it would only copy the used space; in the second case it would copy the full drive, i.e. 48GB of empty, unused space on top. Copying would take five times longer than necessary.
Thanks for the clarification! But this also implies that Balena Etcher is not useful for backups. Some users install PiOS on a 2TB drive. The used space is probably something in the range of 5-10 GB, so backing up the whole 2TB drive is a waste of space and time.
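To see how the numbers compare on a given system, the device size and the actually used space can be checked like this (assuming the NVMe drive shows up as /dev/nvme0n1):

    # Total size of the device and its partitions
    lsblk /dev/nvme0n1
    # Used vs. free space on the root filesystem
    df -h /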
You can compress the resulting image, though. If you use dd, you can just pipe the output of dd into gzip. Otherwise, zip up the image after creating it however you decide to create it.
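Something along these lines (a sketch; the device name and backup path are assumptions):

    # Create a compressed image of the whole drive...
    sudo dd if=/dev/nvme0n1 bs=4M status=progress | gzip > /mnt/usb/nvme-backup.img.gz

    # ...and stream it back when restoring (to a drive at least as large as the original).
    gunzip -c /mnt/usb/nvme-backup.img.gz | sudo dd of=/dev/nvme0n1 bs=4M status=progress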
I believe the image will copy everything, including old data still sitting in otherwise free space. Wiping the free space with a fixed value on the last write pass, instead of letting the wipe finish with random data, before creating the image would allow compression programs to go to town.
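For reference, the zero-fill approach I mean looks roughly like this (a sketch; it temporarily fills the filesystem completely, and the file name is just illustrative):

    # Fill the free space with zeros so the unused areas compress well;
    # dd stops with "No space left on device" once the free space is used up.
    sudo dd if=/dev/zero of=/zero.fill bs=1M status=progress
    sudo rm /zero.fill
    sync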
I still don’t get it. What is elegant about dd plus zipping a terabyte of data instead of just copying a few gigabytes? (And incremental backups copy even less.)
And “wiping” is absolutely the wrong thing to do on SSDs, since after “wiping” the controller thinks every byte is in use, which results in severe write amplification.
Backup and imaging are two totally different disciplines. Anybody who is really thinking about backups should get a backup program. One of the most prominent use cases of backups is, for example, restoring an accidentally deleted or changed file. With your dd+zip solution you won’t get far there. Especially if you back up often: how would you keep track of which of your many terabyte-sized backup files holds the relevant version of the file?
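For example, dated rsync snapshots that hard-link unchanged files against the previous run keep every version browsable as plain files, so restoring a single file is trivial (a sketch; the paths, date format and exclude list are my own, not from this thread):

    # Each run creates a new dated snapshot; unchanged files are hard-linked
    # against the previous one, so only changed files take extra space.
    # (On the very first run the "latest" link does not exist yet, which is fine.)
    TODAY=$(date +%F)
    sudo rsync -aAX \
        --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
        --exclude=/tmp --exclude=/mnt --exclude=/media --exclude=/lost+found \
        --link-dest=/mnt/usb-backup/latest \
        / /mnt/usb-backup/"$TODAY"/
    sudo ln -sfn "$TODAY" /mnt/usb-backup/latest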
Could we wrap it up with: there are many ways to preserve your data, all with their own benefits and concerns. Methods may vary and opinions can be divided, but the core principles remain the same.
I wasn’t aware that wiping an SSD sets it up to think everything is in use. In that case, maybe wiping isn’t a good idea. I only mentioned it because areas filled with a fixed data value compress really well, and dd had already been mentioned. Unintended consequences, I guess.
Linux is nice about being file based, though. Regular backups work fine.