[CentOS] SSD support in C5 and C6

Fri Jul 19 18:07:10 UTC 2013
Gordon Messmer <gordon.messmer at gmail.com>

On 07/19/2013 08:48 AM, Wade Hampton wrote:
> I found lots of references to TRIM, but it is not included
> with CentOS 5.  However, I found that TRIM is in the
> newer hdparm, which could be built from source,
> but AFAIK is not included with CentOS 5 RPMS.  That way,
> one could trim via a cron job?

NO!

From the man page:
        --trim-sectors
           For  Solid  State  Drives  (SSDs).  EXCEPTIONALLY DANGEROUS.
           DO NOT USE THIS FLAG!!

That command can be used to trim sectors if you know which sector to 
start at and how many to TRIM.  The only thing it's likely to be useful 
for is deleting all of the data on a drive.
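The safe way to get scheduled TRIM on release 6 is fstrim(8) from util-linux-ng, which discards only the free space of a mounted filesystem (it is not shipped on CentOS 5).  A cron-job sketch -- the filesystem list here is an assumption, adjust for your mounts:

```shell
#!/bin/sh
# /etc/cron.weekly/fstrim -- discard unused blocks on mounted filesystems.
# Unlike hdparm --trim-sectors, fstrim only touches free space.
for fs in / /home; do
    fstrim -v "$fs"
done
```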

> - use file system supporting TRIM (e.g., EXT4 or BTRFS).

Yes, on release 6 or newer.
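If you'd rather have online TRIM than a periodic job, ext4 on release 6 supports the "discard" mount option.  A hypothetical /etc/fstab line (the device name is an assumption):

```shell
# /etc/fstab -- "discard" issues TRIM as blocks are freed (ext4, EL6+)
/dev/sda1  /  ext4  defaults,discard  1 1
```

Note that online discard adds overhead on every delete; many people prefer the periodic fstrim approach instead.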

> - update hdparm to get TRIM support on CentOS 5

No.

> - align on block erase boundaries for drive, or use 1M boundaries
> - use native, non LVM partitions

LVM is fine.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/newmds-ssdtuning.html
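Alignment is easy to verify arithmetically: a partition start is on a 1 MiB boundary when (start sector * 512) is divisible by 1048576.  A minimal sketch, assuming 512-byte logical sectors and a start sector as reported by fdisk -lu:

```shell
#!/bin/sh
# Check whether a partition start sector falls on a 1 MiB boundary.
# start=2048 is the common default from a modern fdisk/parted.
start=2048
if [ $(( start * 512 % 1048576 )) -eq 0 ]; then
    echo "sector $start: aligned"
else
    echo "sector $start: unaligned"
fi
```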

> - under provision (only use 60-75% of drive, leave unallocated space)

That only applies to some drives, and probably not to current-generation 
hardware.

> - set noatime in /etc/fstab
>      (or relatime w/ newer to keep atime data sane)

Don't bother.  The current default is relatime.

> - move some tmp files to tmpfs
>     (e.g., periodic status files and things that change often)
> - move /tmp to RAM (per some suggestions)

Same thing.  Most SSDs should have write capacity far in excess of a 
spinning disk's, so the decision to do this shouldn't be driven by the 
use of SSD.
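If you do decide to put /tmp in RAM anyway, the usual approach on release 6 is a tmpfs entry in /etc/fstab.  A sketch -- the size cap is an assumption, tune it for your workload:

```shell
# /etc/fstab -- mount /tmp as tmpfs, capped at 1 GiB
tmpfs  /tmp  tmpfs  defaults,size=1g  0 0
```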

> - use secure erase before re-use of drive
> - make sure drive has the latest firmware

Not always.  Look at the changelog for your drive's firmware if you're 
concerned and decide whether you need to update it based on whether any 
of the named fixes affect your system.  For instance, one of my 
co-workers was using a Crucial brand drive in his laptop, and it 
frequently wasn't seen by the system on a cold boot.  This caused 
hibernate to always fail.  Firmware upgrades made the problem worse, as 
I recall.

> - add “elevator=noop” to the kernel boot options
>    or use deadline, can change on a drive-by-drive basis
>    (e.g., if HD + SSD in a system)
> - reduce swappiness of kernel via /etc/sysctl.conf:
> vm.swappiness=1
> vm.vfs_cache_pressure=50
> -- or swap to HD, not SSD

None of those should be driven by SSD use.  Evaluate their performance 
effects on your specific workload and decide whether they help.  I 
wouldn't use them in most cases.
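For what it's worth, the scheduler can be switched per device at runtime, so a mixed HD + SSD box doesn't need elevator=noop globally on the boot line.  A sketch -- the device names and the sda=SSD/sdb=HD mapping are assumptions:

```shell
# set the I/O scheduler per device at runtime (sda = SSD, sdb = HD here)
echo noop     > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sda/queue/scheduler   # the active scheduler is shown in brackets
```

These settings don't survive a reboot; persist them from rc.local or a udev rule if you keep them.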

> - BIOS tuning to set drives to “write back” and using hdparm:
>         hdparm -W1 /dev/sda

That's not write-back, that's write-cache.  It's probably enabled by 
default.  When it's on, the drive will be faster and less safe (this is 
why John keeps advising you to look for a drive with a capacitor-backed 
write cache).  When it's off, the drive will be slower and safer (and 
you don't need a capacitor-backed write cache).
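For completeness, hdparm -W with no argument just reports the current write-cache state, which is worth checking before changing anything.  A sketch (the device name is an assumption):

```shell
# query the drive's write-cache setting (makes no change)
hdparm -W /dev/sda
# disable the volatile write cache on a drive without power-loss protection
hdparm -W0 /dev/sda
```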