Has anyone installed a high I/O application, such as an email server, on SSD drives? I was thinking about doing two SSDs in RAID1. It would solve my I/O latency issues, but I have heard that SSDs wear out quickly in high I/O situations; something like each memory location only has X many writes before it's done. Just wondering if anyone has tested it and whether newer SSDs are better about this?
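For context, I'd be building the mirror with Linux software RAID, something along these lines (the device names are just placeholders for the two SSDs):

    # mirror the two SSDs and put a filesystem on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0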
On 2/2/2012 1:19 PM, Matt wrote:
It all depends on how much writing you do AND how much spare space the drives have. The more spare flash the drives have, the longer they'll live, since the write wear can be spread over a larger area.
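As a rough back-of-the-envelope (all numbers invented for illustration): a 120GB MLC drive rated for ~3,000 program/erase cycles can absorb on the order of 120GB x 3,000 = 360TB of flash writes. A mail server writing 20GB a day with a write amplification of 2 burns 40GB of flash daily, so 360TB / 40GB is roughly 9,000 days, or nearly 25 years. Cut the cycle rating or multiply the write load and that figure shrinks fast, which is exactly why the spare area matters.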
On Thu, 2 Feb 2012, William Warren wrote:
How very timely; I'm just starting to investigate something similar myself. I don't have much to contribute, but this forum post: http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Enduranc... seems as though it'll be interesting, if I can ever make it through the 3500+ pages to get to the conclusion.
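In the meantime, the drives report their own wear through SMART, so you can measure your actual workload instead of wading through the whole thread. For example (the device name is a placeholder, and the attribute names vary by vendor; the ones below are what Intel drives expose):

    # dump the vendor SMART attributes; on Intel SSDs look for
    # Media_Wearout_Indicator and Total_LBAs_Written
    smartctl -A /dev/sda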
On 02/02/12 14:05, Mike wrote:
If you're worried about I/O reliability, then buy a (way more expensive) SLC drive rather than a consumer-level MLC one... We have some SLC drives here that their manufacturer rates at 3 or more years of 100% writes, 24x7...
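To put such a rating in perspective, pick a sustained write rate, say 100MB/s (a number I'm choosing purely for illustration): three years of 24x7 writing at that rate is 100MB x 86,400 x 365 x 3, roughly 9.5PB pushed through the device over its rated life.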
Peter.
On 2/2/2012 2:15 PM, Peter A wrote:
Exactly, hence why I said stay with OCZ or Intel... MLC drives are the best. But also, the smaller the process node, the shorter the lifespan of the flash. MLC drives will also over-provision more spare flash area most of the time.
On 02/02/12 17:01, William Warren wrote:
Aeh... that's exactly the opposite of what I said. MLC (multi-level cell) SSDs store more than one bit per cell; in current devices that's mostly 2 bits, but more is around the corner. An SLC (single-level cell) device stores only one bit per cell: true binary, just like what we have in RAM. SLC is superior in reliability because it simply takes a lot more disturbing of a cell before it loses enough charge for a 1 to be read as a 0. SLC devices are also usually faster, especially on rewrites.

They also tend to be over-provisioned more generously: an Oracle 96GB flash card (SLC) physically has 128GB, while most consumer MLC devices with 128GB of flash are sold with 120GB visible... again in favor of the SLC. The only problem is that you pay for what you get; SLC devices are significantly more expensive. Fusion-io and all the other server SSD vendors do the same: they give you a cheap MLC device with limited performance and reliability, and a high-end, much pricier SLC unit.
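Run the numbers on the spare area and the gap is obvious: the Oracle card keeps (128 - 96) / 128 = 25% of its flash in reserve, while the consumer device keeps only (128 - 120) / 128, about 6%. Roughly four times the spare area means far more headroom for wear leveling before any single cell reaches its write limit.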
Peter.
On 2/2/2012 5:19 PM, Peter A wrote:
I mistyped; I meant to type SLC... :)
On 2 February 2012 18:19, Matt matt.mailinglists@gmail.com wrote:
Sun were recommending SSDs for the ZIL in really big ZFS installs *years ago*, so go for it.
As long as you are using TRIM, you avoid the slowdown that otherwise sets in once every block on the SSD has been written at least once and the controller has no pre-erased blocks left.
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Ad...
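Roughly, on a new enough kernel and util-linux (device and mount point below are placeholders), you'd check that the drive advertises TRIM, then either mount with the discard option or batch-trim on a schedule:

    # does the drive advertise TRIM?
    hdparm -I /dev/sda | grep -i TRIM

    # option 1: online discard, via /etc/fstab:
    #   /dev/md0  /var/mail  ext4  defaults,discard  0 2

    # option 2: periodic batch trim from cron instead of the mount option
    fstrim -v /var/mail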
mike
On 02/02/2012 10:19 AM, Matt wrote:
Is this the best way to go? Much of the recent mail software (Postfix, Dovecot, etc.) has features that make it easier to set up redundant mail servers and distribute the load across them, which will scale better if your needs grow down the road. SSDs tend to be rather costly, especially if your storage needs are high. I guess the main advantage of a single server with SSDs is lower power consumption.
What about RAID10?
Nataraj