I am building a mailserver, and with all the steps involved I want to image the drive at various 'checkpoints' so I can go back and redo from a particular point. The image is currently only 4GB on a 120GB drive. fdisk reports:
Disk /dev/sdb: 111.8 GiB, 120034124288 bytes, 234441649 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000c89d

Device     Boot   Start     End Sectors  Size Id Type
/dev/sdb1          2048 1026047 1024000  500M 83 Linux
/dev/sdb2       1026048 2074623 1048576  512M 82 Linux swap / Solaris
/dev/sdb3       2074624 6268927 4194304    2G 83 Linux
and parted:
Model: Kingston SNA-DC/U (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  525MB   524MB   primary  ext3
 2      525MB   1062MB  537MB   primary  linux-swap(v1)
 3      1062MB  3210MB  2147MB  primary  ext4
what dd params work?
dd if=/dev/sdb of=os.img bs=1M count=3210
?
thanks
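For reference, the count can be derived from the fdisk output above; a small shell sketch (the figures are taken from the partition table, and note that sector numbering starts at 0, so an image covering all of /dev/sdb3 is End + 1 sectors):

```shell
# Sketch using the fdisk figures above: /dev/sdb3 ends at sector 6268927.
# Sectors are numbered from 0, so End + 1 sectors cover everything up to
# and including the last partition.
END=6268927                  # "End" of /dev/sdb3 from fdisk
SECTORS=$((END + 1))         # 6268928 sectors in total
BYTES=$((SECTORS * 512))     # logical sector size is 512 bytes
echo "sectors=$SECTORS bytes=$BYTES"
# A matching invocation, counting in 512-byte sectors:
#   dd if=/dev/sdb of=os.img bs=512 count=6268928
```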
On Mar 2, 2017, at 6:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
I want to image the drive at various 'checkpoints' so I can go back and redo from a particular point… what dd params work?
dd if=/dev/sdb of=os.img bs=1M count=3210
That looks plausible. (I haven’t verified your count parameter exactly.)
However, I wonder why you’re trying to reinvent snapshots, a technology now built into several advanced filesystems, such as btrfs and ZFS?
https://en.wikipedia.org/wiki/Btrfs#Subvolumes_and_snapshots
btrfs is built into CentOS 7. While there have been some highly-publicized bugs in btrfs, they only affect the RAID-5/6 features. You don’t need that here, so you should be fine with btrfs.
And if you really distrust btrfs, ZFS is easy enough to integrate into CentOS on-site.
And if *that* is also out of the question, you have LVM2 snapshots:
http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
Why reinvent the wheel?
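For illustration, an LVM2 checkpoint and rollback might look like this (a hedged sketch; the volume group and logical volume names vg0/root are placeholders, not taken from the thread):

```shell
# Create a copy-on-write snapshot as a checkpoint; the --size is how much
# changed data the snapshot can absorb before it fills up.
lvcreate --size 1G --snapshot --name checkpoint1 /dev/vg0/root

# ... continue setting up the mailserver ...

# To roll back, merge the snapshot into the origin volume (the merge
# completes at the next activation/reboot of the origin):
lvconvert --merge /dev/vg0/checkpoint1
```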
On Mar 2, 2017, at 6:53 PM, Warren Young warren@etr-usa.com wrote:
Why reinvent the wheel?
Oh, I forgot to say: LVM2, ZFS, and btrfs snapshots don’t image the *entire* drive including slack space. They set a copy-on-write point, which is near-instantaneous. Whenever one of the current data blocks changes, its content gets copied to a new space on the disk and modified there, so rolling back amounts to moving a bunch of pointers around, not downing the whole machine and wiping out your prior setup, including all that mail you’ve accumulated in the meantime.
If you’re after some unstated goal, such as off-machine backups, there’s generally a way to send a copy of the snapshot to another machine, such as via SSH. This is also more efficient than copying a raw dd image. Not only does it skip over slack space, you can send the snapshot to another similar machine and “play back” the snapshot there, effectively mirroring the machine, taking only as much time as needed to transmit the *changes* since the last snapshot.
If you’ve used a virtual machine manager with snapshotting features, these filesystems’ features are a lot like that: quick, efficient, and quite robust.
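As a sketch of the send/receive idea (hostname and paths are placeholders; assumes a btrfs root filesystem):

```shell
# Take a read-only snapshot, then stream it to another machine over SSH.
btrfs subvolume snapshot -r / /snapshots/checkpoint1
btrfs send /snapshots/checkpoint1 | ssh backuphost btrfs receive /backups

# Later snapshots can be sent incrementally against a parent, so only the
# changes since checkpoint1 cross the wire:
btrfs subvolume snapshot -r / /snapshots/checkpoint2
btrfs send -p /snapshots/checkpoint1 /snapshots/checkpoint2 | \
  ssh backuphost btrfs receive /backups
```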
On 03/02/2017 08:53 PM, Warren Young wrote:
On Mar 2, 2017, at 6:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
I want to image the drive at various 'checkpoints' so I can go back and redo from a particular point… what dd params work?
dd if=/dev/sdb of=os.img bs=1M count=3210
That looks plausible. (I haven’t verified your count parameter exactly.)
However, I wonder why you’re trying to reinvent snapshots, a technology now built into several advanced filesystems, such as btrfs and ZFS?
https://en.wikipedia.org/wiki/Btrfs#Subvolumes_and_snapshots
btrfs is built into CentOS 7. While there have been some highly-publicized bugs in btrfs, they only affect the RAID-5/6 features. You don’t need that here, so you should be fine with btrfs.
And if you really distrust btrfs, ZFS is easy enough to integrate into CentOS on-site.
And if *that* is also out of the question, you have LVM2 snapshots:
http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
Why reinvent the wheel?
This is CentOS7-armv7. Not all the tools are there. I keep getting surprised by some rpm not being in the repo, but if I dig I will find it (but php-imap is NOT built yet, and that I need).
The base image is a dd, and you start with something like:
xzcat CentOS-Userland-7-armv7hl-Minimal-1611-CubieTruck.img.xz | sudo dd of=/dev/sdb bs=4M; sync
btw, this reports:
0+354250 records in
0+354250 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 120.656 s, 26.7 MB/s
Then you boot up (connected via the UART with a USB/TTL adapter for a serial console).
I want a drive image, and that is easy to do. I disconnect my drive from the CubieTruck, stick it into a USB/sata adapter, and I can image the whole drive.
For just a development and snapshotting project, dd (and xzcat) do the job, and that is really what the Fedora-arm and CentOS-arm teams have been doing.
I actually did this 2+ years ago creating Redsleeve6 images, but can't find any of my notes. :(
On Mar 2, 2017, at 7:04 PM, Robert Moskowitz rgm@htt-consult.com wrote:
On 03/02/2017 08:53 PM, Warren Young wrote:
Why reinvent the wheel?
This is Centos7-armv7. Not all the tools are there.
btrfs and LVM2 appear to be built:
https://armv7.dev.centos.org/repodir/c71611-pass-1/btrfs-progs/4.4.1-1.el7/a... https://armv7.dev.centos.org/repodir/c71611-pass-1/lvm2/2.02.166-1.el7/armv7...
That’s the userland tools for both. The rest is in the kernel, so you’d need to ensure that the appropriate drivers are built and installed.
On 03/02/2017 09:17 PM, Warren Young wrote:
On Mar 2, 2017, at 7:04 PM, Robert Moskowitz rgm@htt-consult.com wrote:
On 03/02/2017 08:53 PM, Warren Young wrote:
Why reinvent the wheel?
This is Centos7-armv7. Not all the tools are there.
btrfs and LVM2 appear to be built:
https://armv7.dev.centos.org/repodir/c71611-pass-1/btrfs-progs/4.4.1-1.el7/armv7hl/ https://armv7.dev.centos.org/repodir/c71611-pass-1/lvm2/2.02.166-1.el7/armv7hl/
That’s the userland tools for both. The rest is in the kernel, so you’d need to ensure that the appropriate drivers are built and installed.
thanks. I will look at this as I put the server into production. I never use all of the drive on my RSEL6 mailserver, so I would have space for snapshotting this one.
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
dd if=/dev/sdb of=os.img bs=1M count=3210
I would recommend bs=512 to keep the block sizes the same; it's not a huge difference, but it just seems to be happier for some reason. Add status=progress if you would like to monitor how it is doing. Otherwise the command you have should work.
On 03/02/2017 09:06 PM, fred roller wrote:
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
dd if=/dev/sdb of=os.img bs=1M count=3210
I would recommend bs=512 to keep the block sizes the same though not a huge diff just seems to be happier for some reason and add status=progress if you would like to monitor how it is doing. Seems the command you have should work otherwise.
So, given the fdisk output,
Disk /dev/sdb: 111.8 GiB, 120034124288 bytes, 234441649 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000c89d

Device     Boot   Start     End Sectors  Size Id Type
/dev/sdb1          2048 1026047 1024000  500M 83 Linux
/dev/sdb2       1026048 2074623 1048576  512M 82 Linux swap / Solaris
/dev/sdb3       2074624 6268927 4194304    2G 83 Linux
would
count=6268927
?
Oh, and this way I can lay the image down on any drive, even an mSD card (as that is the actual boot device, but the Cubie uboot (and linksprite) can run almost completely from a sata drive).
thanks
On Thu, Mar 02, 2017 at 09:06:52PM -0500, fred roller wrote:
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
dd if=/dev/sdb of=os.img bs=1M count=3210
I would recommend bs=512 to keep the block sizes the same though not a huge diff just seems to be happier for some reason and add status=progress if you would like to monitor how it is doing. Seems the command you have should work otherwise.
The dd blocksize has nothing to do with the disk sector size.
The disk sector size is the number of bytes in a minimal read/write operation (because the physical drive can't manipulate anything smaller).
The dd blocksize is merely the number of bytes read/written in a single read/write operation (whether expressed in bytes, or K, or M, or other units depending on the options you use).
It makes sense for the bs option in dd to be a multiple of the actual disk block/sector size, but it isn't even required. If you did dd with a block size of, e.g., 27, it would still work; it'd just be stupidly slow.
Fred
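To make the trade-off concrete, here is a small sketch counting how many read operations dd would issue for the same total byte count at various blocksizes (the 3209690624-byte figure comes from later in the thread):

```shell
# The dd blocksize only changes how many read()/write() calls are issued
# for the same number of bytes copied; the disk's 512-byte sector size is
# not a constraint on bs.
BYTES=3209690624
for BS in 512 4096 1048576 10485760; do
  # ceiling division: number of read operations at this blocksize
  echo "bs=$BS -> $(( (BYTES + BS - 1) / BS )) operations"
done
```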
On 03/02/2017 10:02 PM, Fred Smith wrote:
On Thu, Mar 02, 2017 at 09:06:52PM -0500, fred roller wrote:
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
dd if=/dev/sdb of=os.img bs=1M count=3210
I would recommend bs=512 to keep the block sizes the same though not a huge diff just seems to be happier for some reason and add status=progress if you would like to monitor how it is doing. Seems the command you have should work otherwise.
The dd blocksize has nothing to do with the disk sector size.
the disk sector size is the number of bytes in a minimal read/write operation (because the physical drive can't manipulate anything smaller).
the dd blocksize is merely the number of bytes read/written in a single read/write operation. (or not bytes, but K, or Kb, or other depending on the options you use.)
It makes sense for the bs option in dd to be a multiple of the actual disk block/sector size, but isn't even required. if you did dd with a block size of, e.g., 27, it would still work, it'd just be stupidly slow.
Kind of wondered about that.
So the number of blocks reported by fdisk is what I should use as the count, as that matches the drive's real block size?
thanks
On Thu, Mar 02, 2017 at 10:57:51PM -0500, Robert Moskowitz wrote:
On 03/02/2017 10:02 PM, Fred Smith wrote:
On Thu, Mar 02, 2017 at 09:06:52PM -0500, fred roller wrote:
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
dd if=/dev/sdb of=os.img bs=1M count=3210
I would recommend bs=512 to keep the block sizes the same though not a huge diff just seems to be happier for some reason and add status=progress if you would like to monitor how it is doing. Seems the command you have should work otherwise.
The dd blocksize has nothing to do with the disk sector size.
the disk sector size is the number of bytes in a minimal read/write operation (because the physical drive can't manipulate anything smaller).
the dd blocksize is merely the number of bytes read/written in a single read/write operation. (or not bytes, but K, or Kb, or other depending on the options you use.)
It makes sense for the bs option in dd to be a multiple of the actual disk block/sector size, but isn't even required. if you did dd with a block size of, e.g., 27, it would still work, it'd just be stupidly slow.
Kind of wondered about that.
So the blocks reported by fdisk is what I should use as the count, as that matches the drive's real block size?
thanks
If you're copying the entire device, you do not need to tell it how many blocks; just use a large-ish blocksize and let 'er rip.
For a single partition, you could use the blocksize and block count you get from fdisk. You would then need to say /dev/sda4, e.g., instead of /dev/sda.
When copying an entire drive I tend to use 10M as the blocksize. Using a large blocksize just reduces the number of read operations that are needed; that's why a very small blocksize can slow down the copy, as it would require a whole lot more read operations.
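Combining that advice, a whole-device copy might look like this (device and file names are placeholders):

```shell
# Image the entire device: no count= needed, large blocksize for speed,
# status=progress for a running byte count.
dd if=/dev/sdb of=fulldrive.img bs=10M status=progress
```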
On 03/03/2017 07:50 AM, Fred Smith wrote:
On Thu, Mar 02, 2017 at 10:57:51PM -0500, Robert Moskowitz wrote:
On 03/02/2017 10:02 PM, Fred Smith wrote:
On Thu, Mar 02, 2017 at 09:06:52PM -0500, fred roller wrote:
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz rgm@htt-consult.com wrote:
dd if=/dev/sdb of=os.img bs=1M count=3210
I would recommend bs=512 to keep the block sizes the same though not a huge diff just seems to be happier for some reason and add status=progress if you would like to monitor how it is doing. Seems the command you have should work otherwise.
The dd blocksize has nothing to do with the disk sector size.
the disk sector size is the number of bytes in a minimal read/write operation (because the physical drive can't manipulate anything smaller).
the dd blocksize is merely the number of bytes read/written in a single read/write operation. (or not bytes, but K, or Kb, or other depending on the options you use.)
It makes sense for the bs option in dd to be a multiple of the actual disk block/sector size, but isn't even required. if you did dd with a block size of, e.g., 27, it would still work, it'd just be stupidly slow.
Kind of wondered about that.
So the blocks reported by fdisk is what I should use as the count, as that matches the drive's real block size?
thanks
if you're copying the entire device, you do not need to tell it how many blocks. just use a large-ish blocksize and let 'er rip.
for a single partition, you could use the blocksize and block number you get from fdisk. you would then need to say /dev/sda4, e.g., instead of /dev/sda.
when copying an entire drive I tend to use 10M as the blocksize. using a large blocksize just reduces the number of read operations that are needed. that's why a very small blocksize could slow down the copy, as it would require a whole lot more read operations.
Well, I only wanted to copy the used part of the drive which I try to keep small so I can still copy the image to an mSD card if I wish. So I have to supply the amount of the drive to copy. The bs=512 went fast enough, but then I was only copying 3.2GB.
thanks for the help.
On 3/3/2017 5:34 AM, Robert Moskowitz wrote:
Well, I only wanted to copy the used part of the drive which I try to keep small so I can still copy the image to an mSD card if I wish. So I have to supply the amount of the drive to copy. The bs=512 went fast enough, but then I was only copying 3.2GB.
thanks for the help.
personally, I would use 'dump' for ext? file systems, and xfsdump for XFS. this *just* backs up the inodes and directories, not raw blocks
On 3/3/2017 5:34 AM, Robert Moskowitz wrote:
Well, I only wanted to copy the used part of the drive which I try to keep small so I can still copy the image to an mSD card if I wish. So I have to supply the amount of the drive to copy. The bs=512 went fast enough, but then I was only copying 3.2GB.
thanks for the help.
Have you considered looking at Clonezilla? It does a nice job of imaging a disk and only saves the used part. It creates an image which can be restored and is bootable.
_______________________________________________
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
On 03/03/2017 12:31 PM, Styma, Robert (Nokia - US) wrote:
On 3/3/2017 5:34 AM, Robert Moskowitz wrote:
Well, I only wanted to copy the used part of the drive which I try to keep small so I can still copy the image to an mSD card if I wish. So I have to supply the amount of the drive to copy. The bs=512 went fast enough, but then I was only copying 3.2GB.
thanks for the help.
Have you considered looking at Clonezilla? It does a nice job of imaging a disk and only saves the used part. It creates an image which can be restored and is bootable.
Yes, I looked at it a couple years ago, and I may well again as a disaster recovery tool. It takes more to set up than a dd of the drive (and xzcat compression for storage).
I *DO* want a recovery strategy in place here.
thanks
On 03/03/2017 12:31 PM, Styma, Robert (Nokia - US) wrote:
On 3/3/2017 5:34 AM, Robert Moskowitz wrote:
Well, I only wanted to copy the used part of the drive which I try to keep small so I can still copy the image to an mSD card if I wish. So I have to supply the amount of the drive to copy. The bs=512 went fast enough, but then I was only copying 3.2GB.
thanks for the help.
Have you considered looking at Clonezilla? It does a nice job of imaging a disk and only saves the used part. It creates an image which can be restored and is bootable.
I have a Pogoplug (armv5) running Redsleeve 7 with a 2TB drive to back up to.
AFTER I get my new mailserver working.
Which probably won't be until either someone builds php-imap for armv7, or I figure out how to get build or mock installed (both of which had failed dependencies) and do it myself.
BTW, here is my Cubieboard2 'rack'. I should also put up my CubieTruck...
http://medon.htt-consult.com/~rgm/cubieboard/cubietower-3.JPG
Medon is the top one in the stack.
On 03/03/2017 12:23 PM, John R Pierce wrote:
On 3/3/2017 5:34 AM, Robert Moskowitz wrote:
Well, I only wanted to copy the used part of the drive which I try to keep small so I can still copy the image to an mSD card if I wish. So I have to supply the amount of the drive to copy. The bs=512 went fast enough, but then I was only copying 3.2GB.
thanks for the help.
personally, I would use 'dump' for ext? file systems, and xfsdump for XFS. this *just* backs up the inodes and directories, not raw blocks
But I have a single image that I can use to build a working drive, even on another drive, as long as it is more than 3.2GB. It also includes the partition table and uboot (but with the Cubie's uboot, you put uboot as the ONLY thing on the mSD, and it switches over to the sata for all the partitions).
The following worked:
# dd if=/dev/sdb of=cubietruck.img bs=512 count=6268927
6268927+0 records in
6268927+0 records out
3209690624 bytes (3.2 GB, 3.0 GiB) copied, 114.435 s, 28.0 MB/s
So bs= IS the drive blocksize.
This is the result of trying a number of different values for bs and count.
thank you
On 03/02/2017 11:57 PM, Robert Moskowitz wrote:
The following worked:
# dd if=/dev/sdb of=cubietruck.img bs=512 count=6268927
6268927+0 records in
6268927+0 records out
3209690624 bytes (3.2 GB, 3.0 GiB) copied, 114.435 s, 28.0 MB/s
So bs= IS the drive blocksize.
This is the result of trying a number of different values for bs and count.
You can set bs to a multiple of 512 and it will go a lot faster. If I have to use raw dd for cloning, I will factor the count all the way down to primes, and multiply the blocksize by all of the factors up to the largest prime factor. This is trivially easy on a CentOS system (factor is part of coreutils):
[lowen@FREE-IP-92 ~]$ factor 6268927
6268927: 7 43 59 353
So you could use 512 times any one of these factors, or several of these factors. I would probably use:

dd if=/dev/sdb of=cubietruck.img bs=9092608 count=353

Note that while dd can use the abbreviation 'k', you would not want to use that here, since 2 is not one of the factors of your count. A roughly 9MB blocksize is going to be loads faster than 512, but still manageable.
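A quick sanity check of that arithmetic in the shell:

```shell
# 512 bytes times three of the prime factors gives the suggested
# blocksize, and multiplying by the remaining factor (353) must
# reproduce the 3209690624-byte image size from the dd output above.
BS=$((512 * 7 * 43 * 59))
TOTAL=$((BS * 353))
echo "bs=$BS total=$TOTAL"
```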
Or you could make it easy on yourself and use either dd_rescue or ddrescue. When I was working on the ODROID C2 stuff last year I built ddrescue from source RPM early on, before it got built as part of the EPEL aarch64 stuff. Either of these two will figure out the optimum blocksize for you for best performance, and you get progress indications without having to have another terminal open to issue the fun 'kill -USR1 $pid-of-dd' command to get that out of dd. The ddrescue utility for one includes a '--size=<bytes>' parameter so that you can clone only the portion you want.
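For example, a GNU ddrescue invocation roughly equivalent to the dd command above might look like this (a hedged sketch; the byte count is 6268927 sectors x 512 bytes, from the thread, and the mapfile name is a placeholder):

```shell
# Copy only the first 3209690624 bytes of the device; the mapfile lets
# an interrupted copy be resumed, and progress is shown by default.
ddrescue --size=3209690624 /dev/sdb cubietruck.img cubietruck.map
```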
On Mar 3, 2017, at 10:49 AM, Lamar Owen lowen@pari.edu wrote:
On 03/02/2017 11:57 PM, Robert Moskowitz wrote:
The following worked:
# dd if=/dev/sdb of=cubietruck.img bs=512 count=6268927
6268927+0 records in
6268927+0 records out
3209690624 bytes (3.2 GB, 3.0 GiB) copied, 114.435 s, 28.0 MB/s
So bs= IS the drive blocksize.
This is the result of trying a number of different values for bs and count.
You can set bs to a multiple of 512 and it will go a lot faster.
Maybe, maybe not. OP said he’s on an embedded system, which often implies low-end eMMC or SD type storage, and 28 MB/sec is typical for such things.
When mirroring HDDs and proper SSDs, yes, you want to use large block sizes.