Hi,
I'm currently experimenting with G4U (Ghost for Unix), a small cloning application sending disk images to an FTP server.
The application reads the whole disk bit by bit, compresses it and then stores it remotely. Due to this approach, it's more or less filesystem-independent. The drawback is that it sometimes results in huge image files.
Now I'm following a hint that suggests filling the disk's unused space with zero bits. Here's the suggested command:
# dd if=/dev/zero of=/0bits bs=20M
# rm /0bits
I gave that a shot, but after half an hour or so I got a bit impatient. Now the computer doesn't respond any more. Does that mean it's just way too busy with dd? Or is there some mistake in the command? As I see it, it will just keep chugging on and on, no? Shouldn't there be a 'count=x' option somewhere?
Niki
On 07.06.2009 at 18:22, Niki Kovacs wrote:
Hi,
I'm currently experimenting with G4U (Ghost for Unix), a small cloning application sending disk images to an FTP server.
The application reads the whole disk bit by bit, compresses it and then stores it remotely. Due to this approach, it's more or less filesystem-independent. The drawback is that it sometimes results in huge image files.
Now I'm following a hint that suggests filling the disk's unused space with zero bits. Here's the suggested command:
# dd if=/dev/zero of=/0bits bs=20M
# rm /0bits
This will create a file that fills up the root partition. If you have multiple partitions beyond that, it's not of much use. Ideally, the zeroing of the disk should take place before the OS is installed, via a boot CD and using dd on the disk device itself.
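On a system that's already installed you'd have to repeat the trick once per mounted filesystem, roughly like this (the mount points are just examples, adjust them to your layout):
# dd if=/dev/zero of=/0bits bs=20M; sync; rm -f /0bits
# dd if=/dev/zero of=/home/0bits bs=20M; sync; rm -f /home/0bits
# dd if=/dev/zero of=/var/0bits bs=20M; sync; rm -f /var/0bits
Each dd simply runs until its filesystem is full and then exits with "No space left on device" - that's expected, and it's also why the hint doesn't use count=x.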
All this made some sense when disks didn't come in sizes of 250 GB upwards... If you get 20 MB/s from your dd(1), it would take 1000 seconds to fill 20 GB - and a 250 GB disk at the same rate would take 12500 seconds, i.e. roughly three and a half hours...
Rainer
Rainer Duffner wrote:
Ideally, the zeroing of the disk should take place before the OS is installed, via a boot CD and using dd on the disk device itself.
Erm... how exactly would you go about that? Let's say I want to do that with a Knoppix boot CD, and the only hard disk I have on the PC is /dev/hdc.
On Jun 7, 2009, at 12:06 PM, Niki Kovacs wrote:
Rainer Duffner wrote:
Ideally, the zeroing of the disk should take place before the OS is installed, via a boot CD and using dd on the disk device itself.
Erm... how exactly would you go about that? Let's say I want to do that with a Knoppix boot CD, and the only hard disk I have on the PC is /dev/hdc.
I've done the zeroing-out thing on mounted filesystems before, when I wanted to move the contents of one drive to another. Zeroing out beforehand would be best if you planned to do an install and then back it up for later. Otherwise, you end up with a lot of unused space that has remnants of old data scattered around.
It does take a while, especially if you've stuck the disk in a USB enclosure.
Kevin Krieser wrote:
I've done the zeroing-out thing on mounted filesystems before, when I wanted to move the contents of one drive to another. Zeroing out beforehand would be best if you planned to do an install and then back it up for later. Otherwise, you end up with a lot of unused space that has remnants of old data scattered around.
Yeah, but I'm a bit confused here. How would you go about it from a LiveCD?
On 07.06.2009 at 19:27, Niki Kovacs wrote:
Kevin Krieser wrote:
I've done the zeroing-out thing on mounted filesystems before, when I wanted to move the contents of one drive to another. Zeroing out beforehand would be best if you planned to do an install and then back it up for later. Otherwise, you end up with a lot of unused space that has remnants of old data scattered around.
Yeah, but I'm a bit confused here. How would you go about it from a LiveCD?
Ever booted a live CD? It also knows your disks (unless it's a server - except for maybe the CentOS LiveCD, most others suck on servers; they simply don't recognize the controllers). Of course, dd on the device wipes everything, so you do it before installing OS+applications.
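From the live CD it's only a matter of finding the right device node and pointing dd at it, something like this (the device name is just an example - double-check with fdisk first, because this destroys everything on the disk):
# fdisk -l
# dd if=/dev/zero of=/dev/hdc bs=20M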
Rainer
Rainer Duffner wrote:
Ever booted a live CD? It also knows your disks (unless it's a server - except for maybe the CentOS LiveCD, most others suck on servers; they simply don't recognize the controllers).
The question was not about the LiveCD, but more about the use of dd. So, blanking a disk (say, /dev/hdc) from a LiveCD would amount to:
# dd if=/dev/zero of=/dev/hdc bs=20M
And that's it. Correct?
On 07.06.2009 at 19:54, Niki Kovacs wrote:
Rainer Duffner wrote:
Ever booted a live CD? It also knows your disks (unless it's a server - except for maybe the CentOS LiveCD, most others suck on servers; they simply don't recognize the controllers).
The question was not about the LiveCD, but more about the use of dd. So, blanking a disk (say, /dev/hdc) from a LiveCD would amount to:
# dd if=/dev/zero of=/dev/hdc bs=20M
And that's it. Correct?
Yup. If you have the time, you can experiment with the blocksize and see where the throughput is best.
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-09/msg01375.h...
Rainer
Rainer Duffner wrote:
Yup. If you have the time, you can experiment with the blocksize and see where the throughput is best. http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-09/msg01375.h...
Interesting thread. Guess I'll give it a few spins with different blocksizes (20k, 200k, 2M, 20M) and time { } the operation. Just curious.
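Something like this, I suppose (untested - the count values are chosen so each run writes roughly the same 2 GB to the start of the disk, and /dev/hdc is just my test disk, which gets wiped anyway):
# time dd if=/dev/zero of=/dev/hdc bs=20k count=100000
# time dd if=/dev/zero of=/dev/hdc bs=200k count=10000
# time dd if=/dev/zero of=/dev/hdc bs=2M count=1000
# time dd if=/dev/zero of=/dev/hdc bs=20M count=100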
Anyway: thanks!
Niki
On Jun 7, 2009, at 1:11 PM, Niki Kovacs wrote:
Rainer Duffner wrote:
Yup. If you have the time, you can experiment with the blocksize and see where the throughput is best. http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-09/msg01375.h...
Interesting thread. Guess I'll give it a few spins with different blocksizes (20k, 200k, 2M, 20M) and time { } the operation. Just curious.
Anyway: thanks!
Niki
I suspect that once you get to some small multiple of the lower-level protocol's block size, larger block sizes probably don't matter too much. 512-byte blocks are small, so there would be considerable overhead. With 1 MB or larger blocks, you have probably exceeded the block size of the lower-level driver or hardware, so the request is just being split up by the driver.
Niki Kovacs wrote:
Rainer Duffner wrote:
Yup. If you have the time, you can experiment with the blocksize and see where the throughput is best. http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-09/msg01375.h...
Interesting thread. Guess I'll give it a few spins with different blocksizes (20k, 200k, 2M, 20M) and time { } the operation. Just curious.
In the past, I've found anything much over 32k is pointless.
On Jun 7, 2009, at 12:54 PM, Niki Kovacs wrote:
Rainer Duffner wrote:
Ever booted a live CD? It also knows your disks (unless it's a server - except for maybe the CentOS LiveCD, most others suck on servers; they simply don't recognize the controllers).
The question was not about the LiveCD, but more about the use of dd. So, blanking a disk (say, /dev/hdc) from a LiveCD would amount to:
# dd if=/dev/zero of=/dev/hdc bs=20M
And that's it. Correct?
Yes. I don't know that a 20M block size is necessary, but it is what I've used.
In my experience, zeroing the disk will result in smaller files for G4U, but it will take a while depending on many factors, including the size of the disk, its performance, etc. Also, I recommend giving Clonezilla (http://clonezilla.org/) a try. It offers more options than G4U and is more efficient in my experience.
Matt
-- Mathew S. McCarrell Clarkson University '10
mccarrms@gmail.com mccarrms@clarkson.edu
On Sun, Jun 7, 2009 at 12:34 PM, Rainer Duffner <rainer@ultra-secure.de> wrote:
On 07.06.2009 at 18:22, Niki Kovacs wrote:
Hi,
I'm currently experimenting with G4U (Ghost for Unix), a small cloning application sending disk images to an FTP server.
The application reads the whole disk bit by bit, compresses it and then stores it remotely. Due to this approach, it's more or less filesystem-independent. The drawback is that it sometimes results in huge image files.
Now I'm following a hint that suggests filling the disk's unused space with zero bits. Here's the suggested command:
# dd if=/dev/zero of=/0bits bs=20M
# rm /0bits
This will create a file that fills up the root partition. If you have multiple partitions beyond that, it's not of much use. Ideally, the zeroing of the disk should take place before the OS is installed, via a boot CD and using dd on the disk device itself.
All this made some sense when disks didn't come in sizes of 250 GB upwards... If you get 20 MB/s from your dd(1), it would take 1000 seconds to fill 20 GB...
Rainer
Rainer Duffner wrote:
On 07.06.2009 at 18:22, Niki Kovacs wrote:
Hi,
I'm currently experimenting with G4U (Ghost for Unix), a small cloning application sending disk images to an FTP server.
The application reads the whole disk bit by bit, compresses it and then stores it remotely. Due to this approach, it's more or less filesystem-independent. The drawback is that it sometimes results in huge image files.
Niki,
I suggest you look at partimage. G4U seems similar, but partimage doesn't write free blocks to the images, so you don't get these huge files. It's worked well for me. It's in rpmforge.
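If you already have the rpmforge repository configured, something along these lines should pull it in (from memory, not verified on a fresh box):
# yum --enablerepo=rpmforge install partimage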
cheers, Nicolas
Nicolas Thierry-Mieg wrote:
Niki,
I suggest you look at partimage. G4U seems similar, but partimage doesn't write free blocks to the images, so you don't get these huge files. It's worked well for me. It's in rpmforge.
Thanks for the suggestion. I just took a look at it. But I think G4U will suit me better, since I'm using this machine to test all kinds of exotic systems. The partimage documentation states "some problems" with things like NTFS.
But this could come in handy for backups...
Cheers,
Niki
Niki Kovacs wrote:
Hi,
I'm currently experimenting with G4U (Ghost for Unix), a small cloning application sending disk images to an FTP server.
The application reads the whole disk bit by bit, compresses it and then stores it remotely. Due to this approach, it's more or less filesystem-independent. The drawback is that it sometimes results in huge image files.
Now I'm following a hint that suggests filling the disk's unused space with zero bits. Here's the suggested command:
# dd if=/dev/zero of=/0bits bs=20M
# rm /0bits
I gave that a shot, but after half an hour or so I got a bit impatient. Now the computer doesn't respond any more. Does that mean it's just way too busy with dd? Or is there some mistake in the command? As I see it, it will just keep chugging on and on, no? Shouldn't there be a 'count=x' option somewhere?
I'll second the recommendation for clonezilla. It knows enough about most filesystems (including windows ntfs) to only store the used blocks, and it can use network storage over nfs, smb, or sshfs if you use the bootable CD clonezilla-live version. If you do a lot of cloning, you can also use the network-booting drbl version on a server that will PXE boot a client into clonezilla with the image storage directory already NFS-mounted. There is an RPM for CentOS to install this.
On Jun 7, 2009, at 12:34 PM, Les Mikesell wrote:
Niki Kovacs wrote:
Hi,
I'm currently experimenting with G4U (Ghost for Unix), a small cloning application sending disk images to an FTP server.
The application reads the whole disk bit by bit, compresses it and then stores it remotely. Due to this approach, it's more or less filesystem-independent. The drawback is that it sometimes results in huge image files.
Now I'm following a hint that suggests filling the disk's unused space with zero bits. Here's the suggested command:
# dd if=/dev/zero of=/0bits bs=20M
# rm /0bits
I gave that a shot, but after half an hour or so I got a bit impatient. Now the computer doesn't respond any more. Does that mean it's just way too busy with dd? Or is there some mistake in the command? As I see it, it will just keep chugging on and on, no? Shouldn't there be a 'count=x' option somewhere?
I'll second the recommendation for clonezilla. It knows enough about most filesystems (including windows ntfs) to only store the used blocks and it can use network storage over nfs, smb, or sshfs if you use the bootable CD clonezilla-live version. If you do a lot of cloning, you can also use the network-booting drbl version on a server that will PXE boot a client into clonezilla with the image storage directory already NFS-mounted. There is an rpm for Centos to install this.
The problem I had with Clonezilla, when I tried it once, was that I was attempting to clone a (Windows) hard drive that had some bad sectors. Clonezilla didn't handle that well at all, either when duplicating the drive directly to another drive, or when I tried to back it up to a file on another USB drive, where the verify failed. Luckily, I had done a recent Windows backup, so I went the recovery-DVD route on the new drive, removed the programs I had previously removed from the factory install, then restored over itself. I had spent a lot of effort trying to avoid that.
Kevin Krieser wrote:
I'll second the recommendation for clonezilla. It knows enough about most filesystems (including windows ntfs) to only store the used blocks and it can use network storage over nfs, smb, or sshfs if you use the bootable CD clonezilla-live version. If you do a lot of cloning, you can also use the network-booting drbl version on a server that will PXE boot a client into clonezilla with the image storage directory already NFS-mounted. There is an rpm for Centos to install this.
The problem I had with Clonezilla, when I tried it once, was that I was attempting to clone a (Windows) hard drive that had some bad sectors. Clonezilla didn't handle that well at all.
That doesn't sound like a clonezilla-specific problem. Have you found some other tool that magically reads bad sectors?
Either when duplicating the drive directly to another drive, or when I tried to back it up to a file on another USB drive, where the verify failed. Luckily, I had done a recent Windows backup, so I went the recovery-DVD route on the new drive, removed the programs I had previously removed from the factory install, then restored over itself. I had spent a lot of effort trying to avoid that.
But - how often are you planning to clone bad drives? I'd use something like ddrescue to try to recover the data first. In the normal case, clonezilla does a good job.
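From memory, the ddrescue invocation is roughly this (device names are just examples; the log file lets you re-run it later and retry only the bad areas):
# ddrescue /dev/hdc /dev/hdd rescue.log
# ddrescue -r3 /dev/hdc /dev/hdd rescue.log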
On Jun 7, 2009, at 2:59 PM, Les Mikesell wrote:
Kevin Krieser wrote:
I'll second the recommendation for clonezilla. It knows enough about most filesystems (including windows ntfs) to only store the used blocks and it can use network storage over nfs, smb, or sshfs if you use the bootable CD clonezilla-live version. If you do a lot of cloning, you can also use the network-booting drbl version on a server that will PXE boot a client into clonezilla with the image storage directory already NFS-mounted. There is an rpm for Centos to install this.
The problem I had with Clonezilla, when I tried it once, was that I was attempting to clone a (Windows) hard drive that had some bad sectors. Clonezilla didn't handle that well at all.
That doesn't sound like a clonezilla-specific problem. Have you found some other tool that magically reads bad sectors?
Either when duplicating the drive directly to another drive, or when I tried to back it up to a file on another USB drive, where the verify failed. Luckily, I had done a recent Windows backup, so I went the recovery-DVD route on the new drive, removed the programs I had previously removed from the factory install, then restored over itself. I had spent a lot of effort trying to avoid that.
But - how often are you planning to clone bad drives? I'd use something like ddrescue to try to recover the data first. In the normal case, clonezilla does a good job.
In my case, the bad sectors were in free space, so I was hoping Clonezilla would simply skip them. Bad disks are a difficult case, and not a reason to avoid a tool unless it claims to be able to handle them.
Even ignoring the feature of Clonezilla where it can be used to install cloned images on many systems with lower overhead, there is the advantage that it doesn't have to be installed on the system you are cloning, the way more recent versions of Ghost do. If I had known about it a year ago, when I wanted to clone a hard drive before sending a computer out for repair, I would have used it. Instead, I used Knoppix, dd, and gzip to back up the system, which took forever, since it had to go through all the unused sectors on the disk as well.
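For the record, what I did was essentially this (device and mount point are just examples, quoted from memory):
# dd if=/dev/hda bs=1M | gzip -c > /mnt/usb/hda.img.gz
and to restore:
# gunzip -c /mnt/usb/hda.img.gz | dd of=/dev/hda bs=1M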
Kevin,
On Sun, Jun 7, 2009 at 11:05 PM, Kevin Krieser <k_krieser@sbcglobal.net> wrote:
On Jun 7, 2009, at 2:59 PM, Les Mikesell wrote:
In my case, the bad sectors were in free space, so I was hoping Clonezilla would simply skip them. Bad disks are a difficult case, and not a reason to avoid a tool unless it claims to be able to handle them.
One option is not using a cloning tool but a backup & rescue utility. http://www.mondorescue.org/ might work; it doesn't back up the whole disk but tries to recreate the original data. I use it for cloning CentOS/RHEL servers and creating bare-bones rescue disks. Since it only reads and writes back files, it won't touch the bad blocks in free space.
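If I remember the invocation right, it's something like this (check mondoarchive(8) for the exact flags, the path is just an example):
# mondoarchive -Oi -d /var/cache/mondo -E /var/cache/mondo
That writes bootable ISO images of the running system to /var/cache/mondo while excluding that directory from the backup itself.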