Has anyone actually used an SSD in a CentOS setup?
My little experiment with a s/h WD drive for the /tmp and swap partitions kicked the bucket on Wednesday, when the poor WD drive caught the click of death. It was a s/h drive to start with and lasted about 4 months - but that was without the /var/log partition being written to it, as I mounted that back onto /var/log on the original drive.
So I had to install another (WD) drive, repartition it, and rebuild my RPM package database from the backed-up Packages file. That all seems to be OK now.
I'm wondering if it would be a good idea to move all the disk I/O that Linux likes to do so often onto a new SSD. Plus, putting swap onto a decent SSD should speed things up somewhat.
Here's a short video of a laptop fitted with an SSD booting macOS, compared to a similar laptop booting from a standard HDD.
The laptop with the SSD boots and loads some apps in 28 seconds. The other one takes twice as long.
http://eshop.macsales.com/shop/SSD/OWC/Mercury_Extreme_Pro_6G/?utm_source=th...
Kind Regards,
Keith Roberts
2011/5/20 Keith Roberts keith@karsites.net:
Has anyone actually used an SSD in a CentOS setup?
Yes.
I'm wondering if it would be a good idea to move all the disk I/O that Linux likes to do so often onto a new SSD. Plus, putting swap onto a decent SSD should speed things up somewhat.
Just buy the fastest OCZ drive that you can find in the stores.
Here's a short video of a laptop fitted with an SSD booting macOS, compared to a similar laptop booting from a standard HDD.
The laptop with the SSD boots and loads some apps in 28 seconds. The other one takes twice as long.
So, slow? My MacBook Pro with an OCZ SSD boots much faster :)
-- Eero
On Sat, 21 May 2011, Eero Volotinen wrote:
Thanks for your reply, Eero.
Do you have a link to the specs for your exact OCZ SSD model, please, so I can take a look and compare it with the OWC drives? What size is the SSD in GB?
Regards,
Keith
Hi Keith, not sure about OCZ reliability for production, but I can confirm Intel X25 drives work great with CentOS (about 11 months now). I use two drives as /var in an md mirror, using it for SQL and logs - it's an amazing boost vs ordinary drives.
If you use the SSD for swap, don't put anything important on it. I have managed to destroy a drive which was used for heavy swap operations (an insane experiment with KVM virtual machines got it to that situation - the machines used the drive as RAM). And that was an Intel drive!
I did experience a bad OCZ drive in the past; that's the reason I went for the Intel disks instead for production. The OCZ one died from normal usage in a laptop as a single drive.
The Intels might be slower than other SSDs, but I find them to be very reliable in contrast, for normal (sane) usage.
On Sat, May 21, 2011 at 6:29 PM, yonatan pingle yonatan.pingle@gmail.com wrote:
If you use the SSD for swap, don't put anything important on it. I have managed to destroy a drive which was used for heavy swap operations (an insane experiment with KVM virtual machines got it to that situation - the machines used the drive as RAM). And that was an Intel drive!
I have to ask, how was your performance before death? I don't have a sacrificial SSD lying around, so I can't exactly test myself. I imagine the gains had to be pretty high?
Were you running a 3 or 6 Gbps SATA bus?
On Sun, May 22, 2011 at 2:40 AM, Steven Crothers steven.crothers@gmail.com wrote:
I was running on a 3Gbps SATA bus, and the performance was great; it just died in one big crash without giving any clues about it.
If only SSDs were a viable solution for long-term storage, we could theoretically increase our virtualization many times over. It's too bad the technology hasn't come far enough to be used that way without costing an arm and a leg.
On Sun, May 22, 2011 at 10:57 AM, Steven Crothers steven.crothers@gmail.com wrote:
The only way to go with SSD is RAID, for these reasons. It's unlikely that two disks will die at the same time, so it's possible to use and enjoy them - but don't forget to have a fresh backup and a RAID array. (That should be done with an ordinary disk array anyway.)
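(For reference, a software RAID1 mirror of two SSDs is a one-liner with mdadm; a minimal sketch - the device names are placeholders:)

# mirror two SSD partitions into /dev/md0 (assumes sda1/sdb1 are the SSDs)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# watch the initial sync progress
cat /proc/mdstat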
On Sun, 22 May 2011, yonatan pingle wrote:
If only SSDs were a viable solution for long-term storage, we could theoretically increase our virtualization many times over. It's too bad the technology hasn't come far enough to be used that way without costing an arm and a leg.
But it's going in the right direction now.
The only way to go with SSD is RAID, for these reasons. It's unlikely that two disks will die at the same time, so it's possible to use and enjoy them - but don't forget to have a fresh backup and a RAID array. (That should be done with an ordinary disk array anyway.)
That's EXACTLY what I was thinking. Two 40GB SSDs in a RAID array would not cost much at all. Move all the disk-intensive stuff to that. I only have two root partitions of 20GB each for my main install - everything else is on other partitions on 2 x 500GB E-IDE drives. So putting the root partition on a small SSD (possibly RAIDed) is another option. Like most new electronic components, as time passes the mass-production costs fall dramatically and the technology improves. Look at the way HDD technology continues to advance.
Maybe in 5 years' time the cost of SSDs will be a lot cheaper? Possibly in another 15 years' time, HDDs with moving parts will be consigned to history and science museums? I'm watching this technology very closely, and I'm very tempted to buy a small 40GB SSD like OWC's.
They keep performing at optimal speed, according to the specs for that drive.
The OWC SSDs are supposed to have an MTTF of 2 million hours, PLUS they do not degrade over time. So if an OWC keeps going until MTTF, that's 24 x 365 = 8760 hours per year, and 2,000,000 / 8760 = 228.31 years MTTF?
So why does it only have a 3-year warranty? - LOL
For me, anything for swap has to be better than a s/h drive that's had almost a year's running time, according to the SMART data on the drive:
9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7913
329 days' running time already - let's see how long this one lasts before it kicks the bucket.
Kind Regards,
Keith Roberts
On Sun, May 22, 2011 at 2:06 PM, Keith Roberts keith@karsites.net wrote:
I hardly swap to disk these days, and after the bad experience with an SSD as swap... I would stick to RAM & SATA.
RAM is so cheap - just get extra RAM, and use PAE if you're on 32-bit. Adjust vm.swappiness (via sysctl) to a value lower than 60 (the default), and you will be fine swapping onto SATA drives if and when needed.
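(A minimal sketch of that tuning; the value 10 below is just an example, not a recommendation:)

# check the current value
sysctl vm.swappiness
# change it on the running system
sysctl -w vm.swappiness=10
# make it persistent across reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf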
If you are afraid of memory fragmentation, don't be... In most cases you will be rebooting the server when a new kernel update comes out, as it is. The main question is which kind of applications you are planning to run on your machine, and what your actual hardware *needs* are - that, only you can tell.
Also, for /tmp, you might like the idea of a ramdisk (or tmpfs); it is a great way to speed things up without breaking the piggy bank.
This is what I use in /etc/fstab for my home desktop as /tmp:
tmpfs /tmp tmpfs size=512M,nr_inodes=5k,noatime,nodiratime,noexec 0 0
Does the job well.
Anyway - if it's for home usage, don't think twice: get an SSD.
yonatan pingle wrote:
Anyway - if it's for home usage, don't think twice: get an SSD.
Why? I've read most of the articles in this thread, and I haven't seen anything that persuades me an SSD would be a good investment in my case, either in servers or laptops.
As far as swap is concerned, I'm not sure this has ever gone outside the 6GB RAM on my server, or even the 2GB RAM on my laptops.
As far as speed is concerned, the only operations I would like to speed up concern the internet, and I don't think SSD would help in this case.
On 5/23/2011 7:03 AM, Timothy Murphy wrote:
Why? I've read most of the articles in this thread, and I haven't seen anything that persuades me an SSD would be a good investment in my case, either in servers or laptops.
*whistles* If you have not tried out an SSD laptop or desktop, then you're in for a big surprise. Especially if you multi-task at all or work with a few thousand small files. It can make even a 10K RPM SATA drive seem slow when you try to do multiple things at once: boot the machine up and start doing work while things are still loading, which is a situation that would bury a 7200 or 5400 RPM drive in seeks.
After having a 10K RPM SATA drive in the desktop for a few years, 7200 RPM drives seem slow and 5400 RPM drives seem glacial. The SSD in the laptop can make the 10K RPM SATA drive seem slow in comparison. It's the difference between 200-300 seeks/second for a mechanical drive and a few thousand seeks per second.
The main downside right now is cost and how big a disk you can afford. SSDs are wonderful, but still in the $1.50-$2.00/GB range. Better than it was, but I was disappointed with Intel's 25nm pricing.
Thomas Harold wrote:
I've read most of the articles in this thread, and I haven't seen anything that persuades me an SSD would be a good investment in my case, either in servers or laptops.
*whistles* If you have not tried out an SSD laptop or desktop, then you're in for a big surprise.
Actually I have an SSD laptop (in fact two), and they are no faster for what I do, once I have booted up.
Especially if you multi-task at all or work with a few thousand small files.
I guess I don't do either. I often look at a few remote computers from my laptop, and perhaps download a file while editing a document, but that is about all.
The question is, which of us is more typical?
I was going to say "more typical of CentOS users", but I only run CentOS on two servers, and rarely log in to them; I run Fedora on my laptops.
The main downside right now is cost and how big a disk you can afford. SSDs are wonderful, but still in the $1.50-$2.00/GB range. Better than it was, but I was disappointed with Intel's 25nm pricing.
For me, having a small SSD in a laptop would be more of an inconvenience than any increase in speed is worth.
As I said, it all depends which of us is more typical of Linux users.
But I'm generally puzzled by the emphasis many people put on speed. Unless one is a gamer, it doesn't seem to me to make much difference if it takes 13 seconds or 30 seconds to boot up. Either way it is going to take the same time to get to a URL.
On Thu, 26 May 2011, Timothy Murphy wrote:
But I'm generally puzzled by the emphasis many people put on speed. Unless one is a gamer, it doesn't seem to me to make much difference if it takes 13 seconds or 30 seconds to boot up. Either way it is going to take the same time to get to a URL.
It all comes down to price. SSDs aren't massively popular yet simply because of the price per unit of storage. When the price comes close to that of disk (and it doesn't have to match it), they'll romp away.
If you're talking about a clean boot, your SSD laptop is going to beat you to the URL too, as your browser is going to load faster. And your disk cache for the browser suddenly becomes much faster and much more useful. If your URL happens to contain a Java applet, you'll be grinding away for a bit while the VM springs itself to life.
It's funny what a difference it seems to make. Try running a VM in a ramdisk to feel what fast storage can feel like.
So the argument is to throw so much memory into your machine that you never touch the disk, and never reboot it. Sound argument (as long as you don't mind risking your data on writes by not syncing), but expensive. So you go for the cheaper option of adding a ~32GB SSD to your system: cheaper than RAM, but faster than disk, and faster than playing it safe with just lots of memory. You back all that with a 2TB 3.5" disk for bulk storage.
Spinning disks seem an awful lot like Victorian technology taken too far. In the long term, what's *not* to like about the idea of fully solid-state storage?
jh
On 5/26/11, John Hodrien J.H.Hodrien@leeds.ac.uk wrote:
Spinning disks seem an awful lot like Victorian technology taken too far. In the long term, what's *not* to like about the idea of fully solid-state storage?
Personally, what makes me averse to using SSDs for any important long-term data is the nightmare that I could one day wake up to find everything gone without any means of recovery. Compare that to a hard disk, where, barring catastrophic physical damage, I could pay somebody to just read the data off the platter.
As performance-boosting intermediary storage, yes; for long-term storage... maybe not quite yet.
On Thu, 26 May 2011, Emmanuel Noobadmin wrote:
That's what backups are for.
jh
John Hodrien wrote:
That's what backups are for.
Unless you are away on an important business trip and you lose your system just minutes before the meeting. Yes, it can happen to a regular HDD too; it's just a much lower probability, for now.
Ljubomir
On Thu, 26 May 2011, Ljubomir Ljubojevic wrote:
If I'm going to a meeting where I've got documents I need, they'll be on the laptop, on a USB stick, and probably on a network accessible store as well.
I doubt an SSD is likely to be the least reliable part of a laptop.
jh
John Hodrien wrote:
I knew someone would use that. But I am not talking about documents; I'm talking about the system - for example with, let's say, an architect's design app, or a demo version of a developer's new application with a database server set up, and who knows what else.
I know people that are unwilling to pay for a good antivirus (MS Windows, naturally) because "it is easier/faster to just reinstall it, you only need 30-60 minutes", but they do not even think about how much time it takes to configure one's environment, especially in business. I have several customers that don't even know their own passwords for e-mail accounts; they expect me to know them, and to have their PC "just running".
So having an SSD in a laptop (if they are unreliable) is not much of an option, unless I am going to carry a duplicate HDD/SSD just in case this one crashes.
Ljubomir
On Thu, 26 May 2011, Ljubomir Ljubojevic wrote:
I'd argue that's just one of the risks you run with a laptop. In a laptop you've typically got one battery, one charger, one screen, one disk (SSD or not), a less reliable DVD drive. I'm just not convinced SSDs are the pit of doom you seem to think they are. I'd personally guess that coffee is a bigger threat to travelling laptops than SSD failures.
jh
John Hodrien wrote:
But I would not be stuck:
- If the battery dies, I would run on the PSU.
- If the PSU is dead, I would run on battery until I locally buy a universal PSU for my voltage.
- If the DVD dies, I would borrow or locally buy a USB rack/drive.
- If the integrated LAN dies, I would buy a cheap USB LAN NIC or use wireless (buying a wireless AP if needed).
- If the integrated wireless radio dies, I would buy another wireless radio or use LAN (buying a wireless AP/client if needed).
- If the screen or MB dies, or anything else **except** the HDD (in any form, with my system on it), I will unplug my system HDD (some laptops/notebooks/netbooks have SSD + HDD), or even copy the system partition to another HDD, boot it, and finish my mission/task. I am able to successfully boot an old system (either Windows or Linux) on any new MB/HDD-controller chipset, every time, but I am not able to reinstall an entire system with all the custom settings in a short time.
So, from *my* point of view, reliability always takes precedence over speed, unless I can have both.
Ljubomir - 11 years of backing up and repairing OSes
On May 26, 2011, at 8:12 AM, John Hodrien wrote:
I've done that too. Travel with data or code on the hard drive, on a backup USB drive, and also burned to DVD or CD. I may ship the computer, but hand-carry the DVD or thumb drive.
On 5/26/2011 8:04 AM, Ljubomir Ljubojevic wrote:
Unless you are away on an important business trip and you lose your system just minutes before the meeting. Yes, it can happen to a regular HDD too; it's just a much lower probability, for now.
In a situation like a business trip, where the machine absolutely has to boot in order to do the sales presentation or demo, a secondary traditional HD is a smart move. Mirror the system image onto the external drive just prior to the trip. If the internal dies, swap drives and carry on. It's a $50-$100 investment vs not having a bootable drive at all. If it's important enough that a drive failure would kill the trip, then you should be doing this even now with traditional drives.
All the user data should be backed up either to an external device or a server somewhere (including the data files required to do the presentation or configure one-of-a-kind software). Which means that even if the backup drive is a few days out of date, you should be able to drop it in and synchronize the user data back up with the external source within a few minutes.
I'd also still stick with the bigger names in SSDs right now: Intel for sure, then maybe consider the lesser players. The oldest SSD we have in use was bought back in '09, and that unit has shown zero issues.
On May 26, 2011, at 3:49 AM, Emmanuel Noobadmin wrote:
Personally, what makes me averse to using SSDs for any important long-term data is the nightmare that I could one day wake up to find everything gone without any means of recovery. Compare that to a hard disk, where, barring catastrophic physical damage, I could pay somebody to just read the data off the platter.
As performance-boosting intermediary storage, yes; for long-term storage... maybe not quite yet.
Multiple layers of backup. My main system has scheduled backups to an external hard drive, and online. I have a lot of data on it, like pictures, that I wouldn't want to lose. An SSD would replace my main boot drive, with faster access to data as it's used. But the external drive would still be there for backup.
On May 26, 2011, at 4:36 PM, Kevin K wrote:
I back up to traditional disk/tape as well.
The thing is, even though I use the Intel X25-M for a mostly read-only app server, there is still the issue of TRIM.
In Windows and OS X it's easy to get TRIM working; does anyone know about TRIM for Linux?
Also, for rock-solid write performance, I've been using the IOExtreme, which is pricey - hence biz use only. It's not bootable, but it gives very good, reliable read/write I/O.
- aurf
From: "aurfalien@gmail.com" aurfalien@gmail.com
In Windows and OS X it's easy to get TRIM working; does anyone know about TRIM for Linux?
You apparently need a 2.6.33+ kernel (I read somewhere that RH backported what was needed to their 2.6.32) and an fs like ext4 or btrfs. I read some people advising to set the controller to AHCI instead of IDE, but it apparently worked for us in IDE mode... Test = https://sites.google.com/site/lightrush/random-1/checkiftrimonext4isenableda... Tested on Fedora (15) and it worked. Tested on CentOS 5.6 with a custom 2.6.35 kernel and, while it mounted with discard without complaining, it did not actually trim in the tests... Will retest with 6.0...
JD
On 05/27/2011 05:29 AM, John Doe wrote:
From: "aurfalien@gmail.com"aurfalien@gmail.com
In Windows and OSX its easy to get TRIM working, does any know of TRIM for linux?
You apparently need a 2.6.33+ kernel (I read somewhere RH backported what was needed to their 2.6.32) and an fs like ext4 or brtfs. Read some people giving advice to setup the ctrl to AHCI instead of IDE, but apparently worked for us in IDE... Test = https://sites.google.com/site/lightrush/random-1/checkiftrimonext4isenableda... Tested on Fedora (15) and it worked. Tested on CentOS 5.6 with custom 2.6.35 kernel and, while it mounted with discard without complaining, it did not actualy trim in the tests... Will retest with 6.0...
Hmmm.... How do you determine whether TRIM worked or not?
From: Steve Clark sclark@netwolves.com
Hmmm.... How do you determine whether TRIM worked or not?
See the link.
JD
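(For anyone who doesn't want to click through, the test from the link boils down to roughly the following; the device name is a placeholder and the sector number is just an example taken from --fibmap output:)

# create a small file with known data and flush it to disk
dd if=/dev/urandom of=tempfile bs=1M count=1 && sync
# find the starting LBA of the file's first extent
hdparm --fibmap tempfile
# read that sector directly; it should show the file's data
hdparm --read-sector 4495880 /dev/sda
# delete the file; with working TRIM the sector reads back
# as all zeroes after a short wait
rm tempfile && sync && sleep 60
hdparm --read-sector 4495880 /dev/sda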
On 05/27/2011 08:28 AM, John Doe wrote:
Thanks,
Unfortunately, when I try it on SL 6.0, hdparm gets a segmentation violation on the --read-sector command (hdparm-9.16-3.4.el6.i686):

[root@Z764041 ~]# hdparm --fibmap tempfile

tempfile:
underlying filesystem: blocksize 4096, begins at LBA 1600008; assuming 512 byte sectors
byte_offset   begin_LBA    end_LBA    sectors
          0     4495880    4496903       1024
     524288     4554248    4557319       3072
    2097152     4524552    4536839      12288
    8388608     4565512    4651527      86016

[root@Z764041 ~]# hdparm --read-sector 4495880 /dev/sda

/dev/sda: Segmentation fault
From: Steve Clark sclark@netwolves.com
Unfortunately, when I try it on SL 6.0, hdparm gets a segmentation violation on the --read-sector command.
The Fedora one is 9.36, and the one we used on CentOS 5.6 was 9.37 (we compiled it).
Maybe try a more recent version...
JD
On 05/27/2011 10:04 AM, John Doe wrote:
Hmmm.... the sector still had random data after rm tempfile and sync;

/dev/sda3 on / type ext4 (rw,noatime,nodiratime,discard)

Device Model:     KINGSTON SS100S216G
Serial Number:    16GB40013421
Firmware Version: D100719

Supposed to support TRIM.
From: Steve Clark sclark@netwolves.com
Hmmm.... the sector still had random data after rm tempfile and sync;
If SL 6.0 does not support it, I wonder if CentOS 6.0 (or even RH) will... damn. :/
Guess you have no option to test it with the latest Fedora?
Anybody else could confirm that TRIM is working for them...?
JD
Just so you know, in Jan '09 there was a thread on the linux-raid list where they discussed the TRIM command on SSDs. The gist of the conversation (as I understood it) was that for SATA-based SSDs, the results of a raw read afterward were non-deterministic, i.e. you couldn't be certain what you'd get back.
On 5/22/11, yonatan pingle yonatan.pingle@gmail.com wrote:
The only way to go with SSD is RAID, for these reasons. It's unlikely that two disks will die at the same time, so it's possible to use and enjoy them - but don't forget to have a fresh backup and a RAID array. (That should be done with an ordinary disk array anyway.)
If I'm not mistaken, one issue with using SSDs was the limited write cycles of the cells? So two SSDs used for repeated rewrite operations would likely die around the same time, wouldn't they?
On 05/23/2011 01:22 AM, Emmanuel Noobadmin wrote:
An SLC drive with wear leveling should last far longer than the system in which it's used.
On Sun, May 22, 2011 at 12:29 AM, yonatan pingle yonatan.pingle@gmail.com wrote:
ZFS can use a SATA, SAS or SSD drive as a cache drive to speed up common reads & writes. I have seen some small improvements even when using cheaper-grade SATA & SAS drives (as part of an experiment). The speed improvement is quite a bit more evident on larger storage arrays.
You could also use 2 cheaper MLC-type SSDs, one in a "cold standby" type setup - where it's already mounted in the server and you simply tell ZFS to stop using SSD1 and start using SSD2 instead.
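(A sketch of that switch-over; the pool name "tank" and the device names are placeholders:)

zpool add tank cache /dev/sdc      # start using SSD1 as the cache (L2ARC) device
zpool remove tank /dev/sdc         # stop using SSD1
zpool add tank cache /dev/sdd      # start using the standby SSD2 instead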
15K SAS drives also add some level of improvement if you need speed + reliability on a tight budget.
Now, the question is: is there any way to tell ext3/4 to use a separate drive as a cache drive for the same purpose? Or how about telling CentOS to use a separate drive for caching purposes in the same way?
On 05/23/2011 11:01 AM, Rudi Ahlers wrote:
Now, the question is: is there any way to tell ext3/4 to use a separate drive as a cache drive for the same purpose? Or how about telling CentOS to use a separate drive for caching purposes in the same way?
You can use an external journal on an SSD to speed up at least writes by quite a lot.
http://insights.oetiker.ch/linux/external-journal-on-ssd/
But, for paranoia's sake, I would RAID1 the SSD with a second SSD.
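(The setup from that link is short; a sketch, assuming /dev/sdb1 is a partition on the SSD and /dev/sda1 holds the filesystem - note that the journal device's block size must match the filesystem's:)

# format the SSD partition as an ext4 journal device
mke2fs -O journal_dev -b 4096 /dev/sdb1
# create a new filesystem that uses it
mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/sda1
# or move an existing (unmounted) filesystem's journal onto it
tune2fs -O ^has_journal /dev/sda1
tune2fs -j -J device=/dev/sdb1 /dev/sda1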
On Mon, May 23, 2011 at 8:44 PM, Jerry Franz jfranz@freerun.com wrote:
Interesting - it seems similar to the ZFS cache drive scenario, but it will probably work better for what the OP had in mind than putting the swap & logs onto the SSD.
I don't know if running SSDs in RAID1 will be that much more reliable. Surely if the exact same data is read & written (i.e. the same amount of reads & writes takes place) on both drives, then both will fail at the same time?
On 05/23/2011 01:44 PM, Jerry Franz wrote:
But, for paranoia's sake, I would RAID1 the SSD with a second SSD.
Quote from http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Ad... :
Red Hat also warns that software RAID levels 1, 4, 5, and 6 are not recommended for use on SSDs. During the initialization stage of these RAID levels, some RAID management utilities (such as mdadm) write to all of the blocks on the storage device to ensure that checksums operate properly. This will cause the performance of the SSD to degrade quickly.
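(One workaround sometimes mentioned for the RAID1 case is to skip that initial full write with mdadm's --assume-clean flag - use with care, since the mirror halves start out unverified; a sketch with placeholder devices:)

mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --assume-clean /dev/sda1 /dev/sdb1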
On Mon, May 23, 2011 at 02:29:22PM -0500, Robert Nichols wrote:
Huh. Maybe LVM mirroring would be alright.
Ray
On Mon, 23 May 2011, Ray Van Dolson wrote:
A quote from the same link:
"In addition, keep in mind that MD (software raid) does not support discards. In contrast, the logical volume manager (LVM) and the device-mapper (DM) targets that LVM uses do support discards. The only DM targets that do not support discards are dm-snapshot, dm-crypt, and dm-raid45. Discard support for the dm-mirror was added in Red Hat Enterprise Linux 6.1."
It's not a very large article. A quick 5-10 minute read maybe.
Keith
On 05/23/2011 12:27 PM, Ray Van Dolson wrote:
Not actually a problem if you are just using it for journaling. Journals max out at 400MB, so you are using only a tiny fraction of the entire SSD for the journal while getting a large performance pop on small writes, since the OS can safely return to you before the data is actually written to the slower magnetic disk. Another alternative is to *not use the entire SSD*: deliberately leave, say, 25% or so unallocated. Kind of like short-stroking a disk for performance: you sacrifice capacity for speed.
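(Leaving that slack is just a matter of not partitioning the whole device; a sketch with parted - the device name and sizes are examples only:)

parted -s /dev/sdb mklabel msdos
# use only the first 75% of the SSD; the rest stays unallocated
# for the controller's wear levelling to play with
parted -s /dev/sdb mkpart primary ext4 1MiB 75%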
On Mon, 23 May 2011, Rudi Ahlers wrote: *snip*
Here's a follow on link from another posters link:
http://www.tomsguide.com/us/ssd-value-performance,review-1455.html
Apparently, SLC is supposed to last ~10x longer than MLC?
Keith
On Fri, May 20, 2011 at 10:21 PM, Eero Volotinen eero.volotinen@iki.fi wrote:
Just buy the fastest OCZ drive that you can find in the stores.
Simply buying OCZ because it's cheap is wrong. OCZ drives use MLC flash; I'm sure you know the difference between single-level cells and multi-level cells, since you are making a product recommendation. However, in case you don't: your statement is incorrect. Using an OCZ drive as swap is probably the worst thing you can do to it, and when you add in the fact that you're also running some caches from /var on it with ext3... well, the results will not be pleasant in 1-2 years, to say the least.
It's really neat when an OCZ drive fails: it doesn't tick. You just lose all your data. Here today, gone tomorrow.
That's expensive. I don't know about you, but I don't factor in drives being dead within 3-4 months of installation for my machines. Running swap on an MLC SSD will most definitely kill it in 3-4 months. You expect to get at least 18-36 months out of a drive before it either dies or requires an upgrade.
500GB SATA disks = $150 for good ones; that's $150 every 30 months (2 1/2 years).
A CHEAP OCZ MLC drive is $100 for 40GB; burn one of those every 4 months and you have to buy 8 in the same 2 1/2 years.
That's a $650 operating increase, or an additional ~$21/mo for a rented server before profit mark-up. Why not just use what Linux was designed to use - a regular spinning disk? If you want performance, get into RAID 10 with SAS drives. You'll get a significant speed increase at a lower monthly operating cost due to the longevity of the drives, and you can avoid all that pesky restore-from-backup situation 4 times a year.
Let's face it: if a /var partition goes, you're going to have a hard time starting some services on boot; depending on your setup, SSH won't start. So now you're entering the realm of remote hands-and-eyes fees, and at minimum that's going to be $50/hr with a hefty commit (if you're in a quality datacenter, of course).
On Sat, May 21, 2011 at 9:22 PM, Eero Volotinen eero.volotinen@iki.fi wrote:
It's really neat when an OCZ drive fails: it doesn't tick. You just lose all your data. Here today, gone tomorrow.
Just swap the drive and restore from backup, no problem?
On 05/20/2011 01:26 PM, Keith Roberts wrote:
I'm wondering if it would be a good idea to move all the disk I/O that Linux likes to do so often onto a new SSD.
Yes, it's often a really good idea. If you're doing software RAID on Linux, you really should either disable the disk drives' write cache or have the system on a UPS with monitoring and automated shutdown. Most people will opt for the latter. Using an SSD can boost write performance and reliability for other systems. I just found this write-up on the topic:
http://insights.oetiker.ch/linux/external-journal-on-ssd.html
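(Toggling a drive's write cache is a one-liner with hdparm; a sketch, with the device name as a placeholder:)

hdparm -W 0 /dev/sda    # disable the drive's write cache
hdparm -W 1 /dev/sda    # re-enable it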
Plus, putting swap onto a decent SSD should speed things up somewhat.
Well, only if you're using swap space. If that's the case, adding RAM to the system will probably be less expensive and a lot more effective.
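(Easy enough to check before spending anything; a quick sketch:)

free -m        # the Swap: line shows how much swap is actually in use
swapon -s      # per-device swap usage
vmstat 5       # non-zero si/so columns mean you're actively swapping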
On Sun, May 22, 2011 at 6:18 PM, Gordon Messmer yinyang@eburg.com wrote:
Just be aware that SSDs wear out. They have a limited number of write cycles. Nowadays they all do 'wear levelling' to even out the writes across the drive, but even so, they don't last very long under heavy write usage.
If you're talking swap and tmp, then you can get a DRAM drive, which will be lightning fast, but even with battery backup you can't expect the contents to be kept through a power cycle.
On Mon, May 23, 2011 at 11:31 AM, Kevin Thorpe kevin.thorpe@pibenchmark.com wrote:
Don't SATA and SAS drives also wear out?
On Mon, 23 May 2011, Rudi Ahlers wrote:
Not in such a clear way related to usage. You could have a SATA disk that you write to 24 hours a day and it could last for years. With an SSD, you'd be certain to kill your disk in months if you treated it like that.
On the other hand, I'd imagine an SSD used for solely reads could last a *very* long time.
jh
On 23 May 2011 11:04, John Hodrien J.H.Hodrien@leeds.ac.uk wrote:
I have been using an 8GB PATA-interface SSD (MLC) for *years* now as the sole disc in a laptop running CentOS for several hours on a daily basis. Other than a noticeable slowdown once it got to the point of having to do the erase-before-write (which TRIM would alleviate, though not on this drive), it is still way faster than the spinning drive it replaced. This laptop (Dell Latitude L400) also has only 256MB of RAM and runs a desktop, so swap is used an awful lot. I now have a lot of SSDs in production and have only seen spinning discs go bad for no real reason (especially WD 3.5-inch and Hitachi 2.5-inch, for some reason), whereas the SSDs have been rock solid. Either I am just lucky, or maybe the guys who were recommending, several years ago, using SSDs for the ZIL in super-large ZFS storage setups were correct. :)
Pretty soon extra-large storage will be flash on PCIe -> SSD -> spinning discs as the data ages. Quite where it will then go for really long-term storage I'm not sure - tape? cloud?
I can heartily recommend moving to SSD for OS / swap / cache / db, especially when CentOS 6 comes out - though in order to use TRIM on non-swap partitions you will need to be using ext4 (no LVM) and have the discard option set.
mike
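(Enabling that is just the discard mount option in /etc/fstab; a sketch - the device, mount point and other options are examples only:)

/dev/sda1  /  ext4  defaults,noatime,discard  1 1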
On May 23, 2011, at 4:48 AM, Rudi Ahlers wrote:
Don't SATA and SAS drives also wear out?
An SSD can be a SATA drive. SATA is the connection/protocol between the drive and the computer.
On Tue, May 24, 2011 at 1:30 AM, Kevin K kevink1@fidnet.com wrote:
Not quite. SATA is a type of drive, same as IDE / ATA, SCSI, SATA :)
I disagree. :)
IDE/ATA, SATA, SAS, SCSI are all just interfaces. The underlying media, whether spinning rust or MLC/SLC NAND flash, is the drive.
So SSDs can be SATA, SAS, built into custom PCIe cards (OCZ RevoDrives and the ilk), or even ATA (never seen one). Regardless, it's still an SSD.
On 5/23/2011 7:42 PM, Rudi Ahlers wrote:
SATA is the connection. This is why you can have SATA hard drives and DVD drives. The same goes for IDE, SCSI, USB, and FireWire. They are connection types for accessing storage devices. They can connect to traditional hard drives, SSDs, DVD drives, RAID enclosures, etc.
On Tue, May 24, 2011 at 3:51 PM, Bowie Bailey Bowie_Bailey@buc.com wrote:
So what do you call an actual SATA HDD, then? It's still a SATA HDD, and it's still different from IDE, SCSI, SAS, and SSD.
On 5/24/2011 10:05 AM, Rudi Ahlers wrote:
Personally, I would call it a SATA HDD vs a SATA SSD. The same would be true of a SCSI HDD vs a SCSI SSD.
At the moment, if you say "SATA drive", most people will understand you to mean a hard drive, simply because solid-state drives are not common enough. If the price drops and they start taking over the market, then the understanding of "SATA drive" will probably change to refer to an SSD.
From Wikipedia:
Serial ATA (SATA or Serial Advanced Technology Attachment) is a computer bus interface for connecting host bus adapters to mass storage devices such as hard disk drives and optical drives.
On Tue, May 24, 2011 at 4:22 PM, Bowie Bailey Bowie_Bailey@buc.com wrote:
But don't you think that an SSD, or rather Solid State Drive, would still be seen as a different type of drive than a SATA drive, even though they share the same type of bus & connector plus power cable?
I know you get some USB-type SSDs, but people still refer to them as SSDs, and not USB drives.
On Tue, 24 May 2011, Rudi Ahlers wrote:
But don't you think that an SSD, or rather Solid State Drive, would still be seen as a different type of drive than a SATA drive, even though they share the same type of bus & connector plus power cable?
A SATA SSD is different to a SATA HDD, yes. And the OS can tell (if it wants to) that they are different. But "SATA drive" is a term that encompasses both, if you ask me.
I know you get some USB-type SSDs, but people still refer to them as SSDs, and not USB drives.
I think we're just saying: be careful what you say. SSD, after all, stands for Solid State Drive. So "SATA drive" (as you keep saying) sounds like the superset of HDDs and SSDs if you ask me, rather than "SATA HDD", which is what you're trying to say.
Other people using ambiguous terms doesn't make you more right.
jh
On 5/24/2011 11:25 AM, Rudi Ahlers wrote:
We are discussing two different things here.
1) What does SATA mean?
2) What do people mean when they say "SATA drive"?
Unfortunately, common language tends to be general and vague. People tend to use terms in ways that are not technically correct -- ever heard someone refer to their tower case as a "CPU"?
Technically, SATA refers to the bus, connector, and power. Whether the general understanding of "SATA drive" will shift when SSDs become more prevalent is unknown (but likely).
Personally, I understand the general meaning of terms like "SATA drive", but I know what the technical term actually means and if someone seems to be confusing the technical term with the (non-technical) general usage, then I will correct them.
On May 24, 2011, at 11:40 AM, Bowie Bailey Bowie_Bailey@BUC.com wrote:
*snip*
A SATA drive can be either a HDD or SSD; the term drive tends to refer to a fixed-media block device, as opposed to, say, a multimedia (optical) or streaming-media (tape) block device. Each device type has its own particular command set. SSDs and their TRIM command change things a bit for fixed-media devices, though, and the SCSI PUNCH command has found a common use.
Transport, commands, and interface make up the standard. Things get a little strange, though, when SATA is tunneled through USB or SAS: SATA commands like TRIM, which don't have a corresponding SCSI command, aren't supported, so you can't TRIM a SATA SSD on a SAS controller. The SCSI equivalent to TRIM is PUNCH, which is safer than TRIM but more complex, and the two can't interoperate. It is intended to be used in SANs as well as SSDs, so the initiators can free up thin-provisioned space as needed.
-Ross
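As an aside, a quick way to check whether the kernel actually sees TRIM support on a given drive is to read its ATA identify data. A minimal sketch, assuming hdparm is installed and the SSD shows up as /dev/sda (substitute your own device):

    # look for the "Data Set Management TRIM supported" capability bit
    hdparm -I /dev/sda | grep -i trim

On a SATA SSD sitting behind a SAS or USB bridge, that line will often be missing, for exactly the tunneling reasons described above.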
On 05/24/2011 08:25 AM, Rudi Ahlers wrote:
But don't you think that a SSD, or rather Solid State Drive, would still be seen as a different type of drive than a SATA drive, even though they share the same type of bus & connector + power cable?
Interface and media type are completely independent. You can have SATA DVD, SSD, hard drives, Blu-ray, magnetic tape drives, etc. You can have SAS DVD, SSD, hard drives, Blu-ray, tape drives, etc. You can have USB DVD, SSD, hard drives, Blu-ray, magnetic tape drives, etc.
That a drive uses a SATA interface tells you *nothing* about the physical media itself.
You are making a category error. It is as if you claimed a laptop was fundamentally different because you were using it with a 230V AC to DC power adaptor instead of a 120V AC to DC power adaptor.
I know you get some USB type SSD's, but people still refer to them as SSD drives, and not USB drives
I know a lot of people who call hard drives 'memory' - that doesn't make them right.
The correct way to describe it is 'a SSD drive *with a USB interface*' or 'a SSD drive *with a SATA interface*'.
On 5/24/11, Benjamin Franz jfranz@freerun.com wrote:
On 05/24/2011 08:25 AM, Rudi Ahlers wrote:
I know you get some USB type SSD's, but people still refer to them as SSD drives, and not USB drives
The correct way to describe it is 'a SSD drive *with a USB interface*' or 'a SSD drive *with a SATA interface*'.
I don't know... "SSD drive with a USB interface" sounds like a bit of a mouthful... most people I know just call them thumb drives :D
On May 25, 2011, at 3:28 PM, Emmanuel Noobadmin wrote:
I don't know... "SSD drive with a USB interface" sounds like a bit of a mouthful... most people I know just call them thumb drives :D
Though thumb drives are flash, they tend to use a slower flash than what is used in hard drive replacement units. I think that many people, when talking about SSD, may be thinking of drives in the form factor of a hard drive. Either 2.5" or 3.5". Which would probably not be called a thumb drive :)
On 5/26/11, Kevin K kevink1@fidnet.com wrote:
Though thumb drives are flash, they tend to use a slower flash than what is used in hard drive replacement units.
No actual industry facts for this, but I think the Flash used in thumb drives is not really any slower by nature/design. This is because I see that the fastest SSDs currently tend to use 8-channel controllers for 200+ MB/s performance, which translates to 20~30 MB/s per channel.
The better USB 2.0 thumb drives can do about 20+ MB/s, and Kingston even has a new one that will supposedly do 70+ MB/s when connected via USB 3.0. If we take 8 of these and RAID 0 them, which is pretty much what the 8-channel controller is doing, we're looking at pretty similar numbers between the flash cells in thumb drives and "SSD".
I think that many people, when talking about SSD, may be thinking of drives in the form factor of a hard drive. Either 2.5" or 3.5". Which would probably not be called a thumb drive :)
Only because it doesn't come with a USB connector! ;)
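For anyone who wants to sanity-check those per-device numbers, a rough sequential-write test is easy to run. A sketch, assuming the thumb drive is mounted at /mnt/usb (a placeholder); oflag=direct bypasses the page cache so you measure the device rather than RAM:

    # write 256 MB and let dd report the aggregate throughput
    dd if=/dev/zero of=/mnt/usb/testfile bs=1M count=256 oflag=direct
    rm /mnt/usb/testfile

The MB/s figure dd prints on completion is directly comparable to the 20~30 MB/s per-channel estimate above.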
On 5/26/11, Kevin K kevink1@fidnet.com wrote:
Though thumb drives are flash, they tend to use a slower flash than what is used in hard drive replacement units.
No actual industry facts for this, but I think the Flash used in thumb drives is not really any slower by nature/design. This is because I see that the fastest SSDs currently tend to use 8-channel controllers for 200+ MB/s performance, which translates to 20~30 MB/s per channel.
There is quite a difference between common USB flash drives and SSDs. SSDs are supposed to replace a HDD while USB drives are not designed for it. One difference is the type of wear leveling, also documented here http://en.wikipedia.org/wiki/Wear_leveling
There are exceptions of course, for example there are some "industrial grade" devices, but the commonly used ones cannot be compared.
Simon
On 5/26/11, Simon Matter simon.matter@invoca.ch wrote:
*snip*
There is quite a difference between common USB flash drives and SSDs. SSDs are supposed to replace a HDD while USB drives are not designed for it. One difference is the type of wear leveling, also documented here http://en.wikipedia.org/wiki/Wear_leveling
Just to point out, that article says the wear leveling used in USB flash drives results in faster performance, which runs counter to Kevin's original claim of "slower flash". ;)
The key thing I was pointing out is that the underlying Flash technology doesn't appear to be different between SSD hard-drive replacements and USB flash drives.
The key differentiating component always seems to be the controller, e.g. 8 channels on the SATA flash vs 1 channel on the USB flash, and the controller using different wear-leveling algorithms to map logical addresses to actual physical cells.
So the difference between a USB Thumbdrive and a USB SSD is like the difference between an eSATA single disk enclosure and an eSATA two disk RAID 0 enclosure.
On May 26, 2011, at 3:36 AM, Emmanuel Noobadmin wrote:
*snip*
OK. Not really slower for the flash itself, but still slower than what a USB-based SSD drive would be. Since they are designed for USB, performance can be lower, especially for the cheaper drives. I would assume, but don't know, that those drives marketed as ReadyBoost (?) for Vista or later may be faster.
Another thing that probably makes them seem slow is that some systems default to having the write cache disabled, for protection on systems like Windows where people might not remember to "safely remove".
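On Linux you can inspect and toggle a drive's volatile write cache yourself. A sketch for ATA devices, with /dev/sdb as a placeholder:

    # report the current write-cache setting
    hdparm -W /dev/sdb
    # disable the write cache (safer on power loss, slower) ...
    hdparm -W0 /dev/sdb
    # ... or re-enable it
    hdparm -W1 /dev/sdb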
On May 24, 2011, at 10:25 AM, Rudi Ahlers wrote:
But don't you think that a SSD, or rather Solid State Drive, would still be seen as a different type of drive than a SATA drive, even though they share the same type of bus & connector + power cable?
I know you get some USB type SSD's, but people still refer to them as SSD drives, and not USB drives
Depends on what level you are looking at. Generically, it is a sequence of blocks, just like a rotating hard drive appears to be. Proper ID commands can find out more detailed information on it.
Some computers, like the Macbook Air, have SSD but it is NOT SATA. It is plugged into an expansion slot. I have also seen other SSDs that plug into PCI Express slots.
On Tue, 24 May 2011, Kevin K wrote:
To: CentOS mailing list centos@centos.org From: Kevin K kevink1@fidnet.com Subject: Re: [CentOS] SSD for Centos SWAP /tmp & /var/ partition
*snip*
Some computers, like the Macbook Air, have SSD but it is NOT SATA. It is plugged into an expansion slot. I have also seen other SSDs that plug into PCI Express slots.
The OWC drive I'm looking at is a 2.5" SSD drive with a SATA II 3.0 Gb/s interface. It can also be used with a SATA -> IDE/ATA adaptor, which would make it appear to the OS as a P-ATA EIDE drive.
http://eshop.macsales.com/shop/SSD/OWC/Mercury_Extreme_Pro/Legacy_Edition
"Add a technological supercharger to your existing Mac or PC with the OWC Mercury EXTREME Pro Legacy Edition SSD. Thanks to the special PATA adapter included , it’s the fastest, most reliable IDE/ATA mechanism available to breathe lightning fast performance into that trusty machine and extend its usefulness.
Includes IDE/ATA adapter for use in 3.5" IDE/ATA desktop drive bays. With PATA adapter removed, SATA I (1.5Gb/s) and SATA II (3.0Gb/s) interface supported, SATA 2.6 Compliant."
So I could use this in a desktop as an EIDE ATA/133 drive with the PATA adaptor, or as a SATA II desktop drive, or in a laptop as a SATA drive.
The only thing I don't like is the fact that it's an MLC SSD. I'd much rather find an SLC drive, due to the roughly 10x endurance factor.
The SATA -> EIDE drive adaptors are cheap on eBay.
I think this is a 2-way adapter; SATA -> PATA or vice versa.
http://cgi.ebay.co.uk/ws/eBayISAPI.dll?ViewItem&item=320645765177&ss...
The other option is to throw in a PCI(e) SATA controller card, and run the SSD as a native SATA II drive in a legacy IDE desktop.
Kind Regards,
Keith Roberts
Too bad I don't make purchasing decisions at work, or I would like a SSD for my Linux system, probably to be upgraded to 6 later in the year.
On Wed, 25 May 2011, Kevin K wrote:
To: CentOS mailing list centos@centos.org From: Kevin K kevink1@fidnet.com Subject: Re: [CentOS] SSD for Centos SWAP /tmp & /var/ partition
Too bad I don't make purchasing decisions at work, or I would like a SSD for my Linux system, probably to be upgraded to 6 later in the year.
You could try to convince them that it would help improve your productivity - so it would be a great saving in the long term?
Keith
----------------------------------------------------------------- Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net] -----------------------------------------------------------------
On 05/23/2011 02:31 AM, Kevin Thorpe wrote:
Just be aware that SSDs wear out. They have a limited number of write cycles. Nowadays they all do 'wear levelling' to even out the writes across the drive, but even so they don't last very long in heavy write usage.
Yes, there's a limited number of writes. With wear leveling you should be able to write to the drive at its full rate, constantly, for years before you actually wear out the drive.
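One way to put rough numbers on endurance, using hypothetical but plausible figures for the time (a 64 GB drive, 10,000 program/erase cycles per cell, ideal wear leveling):

    # total writable data: 64 GB x 10,000 cycles = 640,000 GB = 640 TB
    # at a sustained 50 GB of writes per day:
    echo $(( 64 * 10000 / 50 )) days    # 12800 days, roughly 35 years

Real drives fall short of the ideal because of write amplification, but the order of magnitude explains why wear-out is rarely the first failure mode under desktop loads.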
On 05/23/2011 09:39 AM, Gordon Messmer wrote:
*snip*
Yes, there's a limited number of writes. With wear leveling you should be able to write to the drive at its full rate, constantly, for years before you actually wear out the drive.
However, SSD drive reliability itself has been very poor in the field. The failure rate is obscene.
See Jeff Atwood's 'The Hot/Crazy Solid State Drive Scale': http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html
Jerry Franz wrote:
*snip*
However, SSD drive reliability itself has been very poor in the field. The failure rate is obscene.
<snip> Do note that the server-grade SSDs are far more reliable than the consumer-grade crap.
mark
--On Monday, May 23, 2011 05:05:38 PM -0700 R - elists lists07@abbacomm.net wrote:
what specific units are considered server-grade SSDs?
What you want to look for in your drive specs are the acronyms SLC and MLC.
SLC is enterprise grade, smaller capacity, expensive
MLC is consumer grade, larger capacity, cheap(er)
Expected lifetimes are typically at least 10x better for SLC.
Devin
On 05/24/11 9:36 AM, Devin Reade wrote:
*snip*
also you want SSD that has a supercap on its internal cache so pending writes aren't lost in a power failure scenario.
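If you aren't sure exactly which model is in the machine, smartmontools will identify it so you can check the vendor's spec sheet for supercap or power-loss-protection claims. A sketch, assuming the smartmontools package is installed and the drive is /dev/sda:

    # print the device model, serial number, and firmware revision
    smartctl -i /dev/sda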
On 05/24/2011 09:57 AM, John R Pierce wrote:
also you want SSD that has a supercap on its internal cache so pending writes aren't lost in a power failure scenario.
You know, I've asked people about that in the past since the whole block read/erase/write cycle seems like a risk in the event of power loss, but never got any satisfactory answer. What manufacturers/models offer that feature? Are most drives with caps clearly labeled?
If you're referring to capacitors, I do not believe modern SSDs use those. Or at least the ones I've seen didn't (that I recall).
On Tue, May 24, 2011 at 1:32 PM, Gordon Messmer yinyang@eburg.com wrote:
On 05/24/2011 09:57 AM, John R Pierce wrote:
also you want SSD that has a supercap on its internal cache so pending writes aren't lost in a power failure scenario.
You know, I've asked people about that in the past since the whole block read/erase/write cycle seems like a risk in the event of power loss, but never got any satisfactory answer. What manufacturers/models offer that feature? Are most drives with caps clearly labeled?
On 05/24/11 10:32 AM, Gordon Messmer wrote:
On 05/24/2011 09:57 AM, John R Pierce wrote:
also you want SSD that has a supercap on its internal cache so pending writes aren't lost in a power failure scenario.
You know, I've asked people about that in the past since the whole block read/erase/write cycle seems like a risk in the event of power loss, but never got any satisfactory answer. What manufacturers/models offer that feature? Are most drives with caps clearly labeled?
I just looked at OCZ's marketing fluff--err--webpile and it appears the Vertex EX and PRO drives advertise that they have a supercapacitor. the others I sampled didn't.
http://www.ocztechnology.com/ocz-vertex-2-pro-series-sata-ii-2-5-ssd.html
my superficial scan of Intel's webpile didn't turn up any reference to them on the x25-e or 510 drives. ah, the intel 320 series has power failure protection which presumably means something like a supercap to supply sufficient power to complete any pending write cycles..
There's an article on anandtech detailing data losses on many SSDs in power-failure scenarios.
(and, please folks, UPS's are great, but they fail too, you can't rely on them for data protection).
On 05/24/2011 02:01 PM, John R Pierce wrote:
*snip*
(and, please folks, UPS's are great, but they fail too, you can't rely on them for data protection).
That's why you have servers with redundant power supplies, with each power supply plugged into a separate UPS.
On Tue, May 24, 2011 at 8:01 PM, John R Pierce pierce@hogranch.com wrote:
(and, please folks, UPS's are great, but they fail too, you can't rely on them for data protection).
RAID cards and RAID cards' battery-backed caches also fail :)
On Mon, 23 May 2011, Jerry Franz wrote: *snip*
However, SSD drive reliability itself has been very poor in the field. The failure rate is obscene.
See Jeff Atwood's 'The Hot/Crazy Solid State Drive Scale': http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html
Quote:
"I have a 64 GB Patriot SSD that's three years old and still going strong. It came with a ten year warranty which seems pretty incredible. I wonder what their replacement strategy is in nine years."
That sounds like a better deal - a 10 year warranty!
Kind Regards,
Keith
----------------------------------------------------------------- Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net] -----------------------------------------------------------------
On Sun, 22 May 2011, Gordon Messmer wrote:
To: CentOS mailing list centos@centos.org From: Gordon Messmer yinyang@eburg.com Subject: Re: [CentOS] SSD for Centos SWAP /tmp & /var/ partition
On 05/20/2011 01:26 PM, Keith Roberts wrote:
I'm wondering if it would be a good idea to use a new SSD for moving all the disk i/o to, that Linux likes to do so often.
Yes, it's often a really good idea. If you're doing software RAID on Linux, you really should either disable disk drives' write cache or have the system on a UPS with monitoring and automated shutdown. Most people will opt for the latter. Using an SSD can boost write performance and reliability for other systems. I just found this write-up on the topic:
http://insights.oetiker.ch/linux/external-journal-on-ssd.html
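The setup in that write-up boils down to roughly the following. This is only a sketch, with /dev/sdb1 standing in for a partition on the SSD and /dev/sda2 for the data filesystem (which must be unmounted first):

    # turn the SSD partition into a dedicated ext3 journal device
    mke2fs -O journal_dev /dev/sdb1
    # drop the internal journal, then attach the external one
    tune2fs -O ^has_journal /dev/sda2
    tune2fs -j -J device=/dev/sdb1 /dev/sda2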
Plus putting SWAP onto a decent SSD should speed things up somewhat.
Well, only if you're using swap space. If that's the case, adding RAM to the system will probably be less expensive and a lot more effective.
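Before spending on either, it's worth checking whether the box actually swaps under normal load; the standard tools are enough:

    # how much swap is in use right now?
    free -m
    # watch the si/so columns; nonzero values mean pages are moving to/from swap
    vmstat 5

If si/so stay at zero, faster swap buys you nothing.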
Thanks for that Gordon, and all the other replies so far. This has certainly given me something to consider doing sometime soon.
Kind Regards,
Keith Roberts
----------------------------------------------------------------- Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net] -----------------------------------------------------------------
Keith,
On Friday, May 20, 2011 you wrote:
I'm wondering if it would be a good idea to use a new SSD for moving all the disk i/o to, that Linux likes to do so often. Plus putting SWAP onto a decent SSD should speed things up somewhat.
As far as I understand, SSDs are fast at reading and slow at writing. They are strong if data rarely changes and if it is mainly read, like root partitions.
This may be useful for webservers that hold the content in /var/www.
Putting your often-changing data, like swap and /var/log, on a SSD may slow down your system.
A swap partition is used to save expensive RAM. Don't use much more expensive FLASH-ROM to replace it.
Besides that, if your system is heavily swapping, a SSD will wear out quickly.
best regards --- Michael Schumacher PAMAS Partikelmess- und Analysesysteme GmbH Dieselstr.10, D-71277 Rutesheim Tel +49-7152-99630 Fax +49-7152-996333 Geschäftsführer: Gerhard Schreck Handelsregister B Stuttgart HRB 252024
On Mon, 23 May 2011, Michael Schumacher wrote:
To: CentOS mailing list centos@centos.org From: Michael Schumacher michael.schumacher@pamas.de Subject: Re: [CentOS] SSD for Centos SWAP /tmp & /var/ partition
*snip*
OK Michael. Thank you for that - I understand what you are saying.
Regards,
Keith
----------------------------------------------------------------- Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net] -----------------------------------------------------------------
Here, we are waiting for CentOS 6 for the discard (TRIM) option from the new kernel... Also, Red Hat has some advice: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Ad...
JD
On 05/23/2011 07:23 AM, Michael Schumacher wrote:
As far as I understand, SSDs are fast at reading and slow at writing.
A good SSD will be substantially faster at writes than a disk drive, as well. Because there's no head seeking around a platter, latency is vastly better, which provides a massive performance advantage in many (if not most) use patterns.
On 05/23/11 9:54 AM, Gordon Messmer wrote:
On 05/23/2011 07:23 AM, Michael Schumacher wrote:
As far as I understand, SSDs are fast at reading and slow at writing.
A good SSD will be substantially faster at writes than a disk drive, as well. Because there's no head seeking around a platter, latency is vastly better, which provides a massive performance advantage in many (if not most) use patterns.
yes, but.... SSDs have to erase and write a LARGE block all at once, so they don't do so well with the sorts of 8k random writes that write-intensive applications like relational databases commonly perform. To write a single 8K block would require reading the whole flash block (something like 64k or 256k), flash-erasing that block, then rewriting the whole thing. This would be painfully slow. So what the drives do instead is remap blocks rather randomly onto 'new' space, caching them in drive-local buffer memory, then flashing whole blocks at once. Once the new space is used up, performance tends to degrade as they have to scavenge scattered free blocks.
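You can see this effect on a specific drive with a small random-write test. A sketch, assuming the fio benchmarking tool is installed and /ssd is a mount point on the drive under test:

    # 8k random writes over a 1 GB file, bypassing the page cache
    fio --name=randwrite --filename=/ssd/fio.test --rw=randwrite --bs=8k --size=1g --direct=1

Run it twice on a well-used drive and you can often watch the IOPS drop as the pool of pre-erased blocks is consumed.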
On Mon, 23 May 2011, John R Pierce wrote:
To: centos@centos.org From: John R Pierce pierce@hogranch.com Subject: Re: [CentOS] SSD for Centos SWAP /tmp & /var/ partition
*snip*
Would a defrag program work on a SSD?
Keith
----------------------------------------------------------------- Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net] -----------------------------------------------------------------
On 05/23/11 12:45 PM, Keith Roberts wrote:
Would a defrag program work on a SSD?
For some values of 'work'. As it's completely unaware of the internal block remapping of the SSD, all it would really do would be to churn the data around.
I've read that the only way to reset the block remapping on most consumer SSDs is to write all zeros to them; some vendors say twice. So: make a full file system backup, zero the disk (maybe zero it again), then restore that backup. NOW your data is all contiguous both in file system logical space AND in remapped SSD block space.
ugh.
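The zeroing step itself is just dd against the raw device. Be careful: this destroys everything on the drive, and /dev/sdX is deliberately a placeholder so nobody pastes it verbatim:

    # overwrite the entire device with zeros
    dd if=/dev/zero of=/dev/sdX bs=1M

On drives that support it, the ATA secure-erase feature (reachable through hdparm's --security-erase option) resets the flash translation layer more cleanly than writing zeros.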
On 05/23/2011 12:24 PM, John R Pierce wrote:
yes, but.... SSDs have to erase and write a LARGE block all at once, so they don't do so well with the sorts of 8k random writes that write-intensive applications like relational databases commonly perform.
Many SSDs are faster at writing, even to already-used blocks, than disk drives are. Still, to stay on topic, the suggestion that I put forth was to use the SSD for external journals for ext3 filesystems with data=journal. In that case, the OS should pretty much always be writing full blocks to the SSD, so there should be even less concern about small random writes.
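For reference, wiring that up at boot is a one-line /etc/fstab entry; the device name and mount point here are placeholders:

    # mount the data filesystem with full data journalling
    /dev/sda2   /srv/data   ext3   data=journal   1 2

The external journal device itself needs no fstab line; ext3 finds it through the journal reference that tune2fs records in the superblock.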