I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
mark
On Wed, 31 May 2017, m.roth@5-cent.us wrote:
I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
Unless you specified non-default options, shred overwrites each file three times -- and writing 27 TB to an old RAID array will be extremely slow. Also, shred has a builtin PRNG, and I'm not really sure how speedy it is.
Still, 12 days seems like a really long time...
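If time matters more than thoroughness, a single verbose pass would be much quicker. A sketch, assuming one pass is acceptable under your policy:

  # one pass of shred's pseudorandom data, with progress reporting
  shred -v -n 1 /dev/sda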
On Wed, May 31, 2017 10:39 am, Paul Heinlein wrote:
On Wed, 31 May 2017, m.roth@5-cent.us wrote:
I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
Unless you specified non-default options, shred overwrites each file three times
With modern drives (read: larger than 100 GB), writing over the track once with anything is sufficient. Overwriting multiple times with different patterns mattered awfully long ago, when a track had noticeable width and a distinct edge (drives were smaller than 1 GB then). A newly recorded track is usually slightly shifted with respect to the old one, so a narrow stripe of the old record was left uncovered on one side and could be distinguished with much more sensitive equipment. Those times are long gone; one can clean drives one at a time by just overwriting the whole device using dd (mind the bs size so as not to impede speed). Better though: physically destroy the platters; it may take less of _your_ time to do that.
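For instance, something like this (sdX is a placeholder for whichever device you are wiping; status=progress assumes a reasonably recent coreutils dd):

  # single pass of zeros over the whole device; a larger bs avoids
  # per-call overhead that would impede speed
  dd if=/dev/zero of=/dev/sdX bs=1M status=progress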
Just my $0.02
Valeri
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
On 5/31/2017 8:04 AM, m.roth@5-cent.us wrote:
I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
not at all surprising, as that raid sounds like it's built with older, slower drives.
I would discombobulate the raid, turn it into 12 discrete drives, and use
dd if=/dev/zero of=/dev/sdX bs=65536
on each drive, running these concurrently
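something along these lines should do it, assuming the member disks enumerate as sdb through sdm once the array is broken up:

  # start a dd on each of the twelve disks in the background,
  # then wait for all of them to finish
  for d in /dev/sd{b..m}; do
      dd if=/dev/zero of="$d" bs=65536 &
  done
  wait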
unless that volume has data that requires military-level destruction, whereupon the proper method is to run the drives through a grinder so they are metal filings. the old DoD multipass erasure specification is long obsolete and was never that great.
John R Pierce wrote:
On 5/31/2017 8:04 AM, m.roth@5-cent.us wrote:
I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
not at all surprising, as that raid sounds like it's built with older, slower drives.
It's maybe from '09 or '10. I *think* they're 1TB (which would make sense, given the size of what I remember of the RAID).
I would discombobulate the raid, turn it into 12 discrete drives, and use
Well, shred's already been running for this long... <snip>
unless that volume has data that requires military-level destruction, whereupon the proper method is to run the drives through a grinder so they are metal filings. the old DoD multipass erasure specification is long obsolete and was never that great.
If I had realized it would run this long, I would have used DBAN.... For single drives, I do, and choose DoD 5220.22-M (seven passes), which is *way* overkill these days... but I sign my name to a certificate that gets stuck on the outside of the server, meaning I, personally, am responsible for the sanitization of the drive(s).
And I work for a US federal contractor[1][2]
mark
1. I do not speak for my employer, the US federal government agency I work at, nor, as my late wife put it, the view out my window (if I had a window).
2. I'm with the government, and I'm here to help you. (Actually, civilian sector, so yes, I am.)
On 5/31/2017 10:13 AM, m.roth@5-cent.us wrote:
If I had realized it would run this long, I would have used DBAN.... For single drives, I do, and choose DoD 5220.22-M (seven passes), which is *way* overkill these days... but I sign my name to a certificate that gets stuck on the outside of the server, meaning I, personally, am responsible for the sanitization of the drive(s).
the DoD multipass erase procedure is long obsolete and deprecated. It was based on MFM and RLL technology prevalent in the mid 1980s. NISPOM 2006-5220 replaced it in 2006, and says "DESTROY CONFIDENTIAL/SECRET INFORMATION PHYSICALLY".
http://www.infosecisland.com/blogview/16130-The-Urban-Legend-of-Multipass-Ha... http://www.dss.mil/documents/odaa/nispom2006-5220.pdf
from that blog,...
Fortunately, several security researchers presented a paper [WRIG08 http://www.springerlink.com/content/408263ql11460147/] at the Fourth International Conference on Information Systems Security (ICISS 2008) that declares the “great wiping controversy” about how many passes of overwriting with various data values to be settled: their research demonstrates that a single overwrite using an arbitrary data value will render the original data irretrievable even if MFM and STM techniques are employed.
The researchers found that the probability of recovering a single bit from a previously used HDD was only slightly better than a coin toss, and that the probability of recovering more bits decreases exponentially so that it quickly becomes close to zero.
Therefore, a single pass overwrite with any arbitrary value (randomly chosen or not) is sufficient to render the original HDD data effectively irretrievable.
so a single pass of zeros is plenty adequate for casual use, and physical device destruction is the only approved method for anything actually top secret.
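if you want a cheap sanity check that the pass completed, compare the wiped device against /dev/zero (sdX being the freshly wiped drive):

  # reads until end-of-device; an all-zeros disk ends with
  # "cmp: EOF on /dev/sdX", while any surviving data is reported
  # as the first differing byte
  cmp /dev/zero /dev/sdX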
John R Pierce wrote:
On 5/31/2017 10:13 AM, m.roth@5-cent.us wrote:
If I had realized it would run this long, I would have used DBAN.... For single drives, I do, and choose DoD 5220.22-M (seven passes), which is *way* overkill these days... but I sign my name to a certificate that gets stuck on the outside of the server, meaning I, personally, am responsible for the sanitization of the drive(s).
the DoD multipass erase procedure is long obsolete and deprecated. It was based on MFM and RLL technology prevalent in the mid 1980s. NISPOM 2006-5220 replaced it in 2006, and says "DESTROY CONFIDENTIAL/SECRET INFORMATION PHYSICALLY".
http://www.infosecisland.com/blogview/16130-The-Urban-Legend-of-Multipass-Ha... http://www.dss.mil/documents/odaa/nispom2006-5220.pdf
from that blog,...
Fortunately, several security researchers presented a paper [WRIG08 http://www.springerlink.com/content/408263ql11460147/] at the Fourth International Conference on Information Systems Security (ICISS 2008) that declares the “great wiping controversy” about how many passes of overwriting with various data values to be settled: their research demonstrates that a single overwrite using an arbitrary data value will render the original data irretrievable even if MFM and STM techniques are employed.
The researchers found that the probability of recovering a single bit from a previously used HDD was only slightly better than a coin toss, and that the probability of recovering more bits decreases exponentially so that it quickly becomes close to zero.
Therefore, a single pass overwrite with any arbitrary value (randomly chosen or not) is sufficient to render the original HDD data effectively irretrievable.
so a single pass of zeros is plenty adequate for casual use, and physical device destruction is the only approved method for anything actually top secret.
Not dealing with "secret", dealing with HIPAA and PII data. And *sigh* Homeland Security Theater dictates....
mark
On 5/31/2017 12:46 PM, m.roth@5-cent.us wrote:
Not dealing with "secret", dealing with HIPAA and PII data. And *sigh* Homeland Security Theater dictates....
We run all used disks through a shredder before surplusing any systems, and we are just a manufacturing company dealing with internal corporate IT stuff. the shredder is a truck from a 'data destruction' service that comes every so often and destroys the current inventory of surplus disks. A corporate eSecurity officer witnesses this to ensure drives aren't diverted into the grey market. each drive goes into the shredder and comes out as metal filings.
On 31/05/17 21:23, John R Pierce wrote:
On 5/31/2017 12:46 PM, m.roth@5-cent.us wrote:
Not dealing with "secret", dealing with HIPAA and PII data. And *sigh* Homeland Security Theater dictates....
We run all used disks through a shredder before surplusing any systems, and we are just a manufacturing company dealing with internal corporate IT stuff. the shredder is a truck from a 'data destruction' service that comes every so often and destroys the current inventory of surplus disks. A corporate eSecurity officer witnesses this to ensure drives aren't diverted into the grey market. each drive goes into the shredder and comes out as metal filings.
Not relevant to this particular instance, but for domestic disks I keep them (along with old credit cards, memory sticks etc) until I have the garden incinerator going. With a good bright red firebed the disks don't last long - some run out of the bottom as liquid aluminium. I'm pretty certain even MI5/NSA won't get much off congealed Al!
On 5/31/2017 1:36 PM, J Martin Rushton wrote:
Not relevant to this particular instance, but for domestic disks I keep them (along with old credit cards, memory sticks etc) until I have the garden incinerator going. With a good bright red firebed the disks don't last long - some run out of the bottom as liquid aluminium. I'm pretty certain even MI5/NSA won't get much off congealed Al!
Personally, I'd be concerned with toxic fumes from such an incinerator. There's all kinda stuff in a drive: rare earth platings, plastics, and so forth.
John R Pierce wrote:
On 5/31/2017 12:46 PM, m.roth@5-cent.us wrote:
Not dealing with "secret", dealing with HIPAA and PII data. And *sigh* Homeland Security Theater dictates....
We run all used disks through a shredder before surplusing any systems, and we are just a manufacturing company dealing with internal corporate IT stuff. the shredder is a truck from a 'data destruction' service that comes every so often and destroys the current inventory of surplus disks. A corporate eSecurity officer witnesses this to ensure drives aren't diverted into the grey market. each drive goes into the shredder and comes out as metal filings.
The alternative is to wait for my manager to return, and then have the drives degaussed.
Oh... and I just looked, ahh, yeah, I think something's going on... given that it's not 12 days, but that I started it on 11 May....
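One way to see how far it has actually gotten is to read shred's write offset out of /proc (the fd number 4 below is only an example; check the ls output for whichever fd points at /dev/sda):

  ls -l /proc/$(pgrep shred)/fd             # find the fd open on /dev/sda
  grep ^pos /proc/$(pgrep shred)/fdinfo/4   # current offset into the device, in bytes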
mark
On 05/31/2017 08:04 AM, m.roth@5-cent.us wrote:
I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
Was the system booting from /dev/sda, or were you running any binaries/libraries from sda? Often you'll be able to shred the device you boot from, but you won't get a prompt back when it's done.
Gordon Messmer wrote:
On 05/31/2017 08:04 AM, m.roth@5-cent.us wrote:
I've got an old RAID that I attached to a box. LSI card, and the RAID has 12 drives, for a total RAID size of 9.1TB, I think. I started shred /dev/sda the Friday before last... and it's still running. Is this reasonable for it to be taking this long...?
Was the system booting from /dev/sda, or were you running any binaries/libraries from sda? Often you'll be able to shred the device you boot from, but you won't get a prompt back when it's done.
No, the h/w RAID showed up as sda when I booted; / showed up on sdb.
mark