Everyone,
I am putting together a new mail server for our firm using a SuperMicro box with CentOS 7.0. When I performed the install of the OS, I had put the 16 gigs of memory in the wrong slots on the motherboard, which caused the SuperMicro to recognize 8 gigs instead of 16. When I installed CentOS 7.0, this error made the swap file 8070 megs instead of the more than 16000 megs I would have expected.
I am using the default XFS file system on the other partitions. Is there a way to expand the swap file? If not, is this problem bad enough that I should start over with a new install? I do not want to start over unless I need to.
Thanks for your help!!!
Greg Ennis
Hey Gregory,
I assume the issue you have is with a swap partition, which is harder to modify than a swap file. You can always add/use another swap file instead of a partition. This article describes what you will need/want: http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/
and here is another one for more info: http://www.rackspace.com/knowledge_center/article/create-a-linux-swap-file
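Roughly, per those articles, adding an 8 GB swap file looks like this (just a sketch; the /swapfile path and the size are placeholders, adjust for your box):

  dd if=/dev/zero of=/swapfile bs=1M count=8192            # create an 8 GB file of zeros
  chmod 600 /swapfile                                      # root-only; swapon warns otherwise
  mkswap /swapfile                                         # write the swap signature
  swapon /swapfile                                         # enable it immediately
  echo '/swapfile swap swap defaults 0 0' >> /etc/fstab    # enable it at boot

After that, free -m should show the extra swap.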
Eliezer
On 15/02/2015 19:42, Gregory P. Ennis wrote:
I am using the default XFS file system on the other partitions. Is there a way to expand the swap file? If not, is this problem bad enough that I should start over with a new install? I do not want to start over unless I need to.
Thanks for your help!!!
Greg Ennis
You will want to avoid the use of swap at almost all costs, because it will slow the system down by a lot. So you really don't need a 1:1 RAM-to-swap ratio for a server that isn't using suspend-to-disk. You only need enough swap so that things can chug along (slowly) without totally imploding, giving you enough time to kill whatever is hogging RAM or do a more graceful reboot than you otherwise would.
It's fairly stable now, but the longer-term solution for this is LVM thin provisioning, which leaves the bulk of extents free (not associated with any LV) until they're needed. You wouldn't use a thin volume for swap, but those free extents can be used to make the conventional LV used for swap bigger. And for thin volumes it mostly obviates having to resize them at all.
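Even with a conventional LV the resize itself is quick if the volume group has free extents. A rough sketch, assuming the stock CentOS 7 layout where the VG is named centos and the swap LV is named swap (check yours with lvs first):

  swapoff /dev/centos/swap            # stop swapping on the LV
  lvextend -L 16G /dev/centos/swap    # grow it to 16 GB from free extents in the VG
  mkswap /dev/centos/swap             # re-create the swap signature at the new size
  swapon /dev/centos/swap             # start swapping on it again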
Morning Gregory,
Sunday, February 15, 2015, 6:42:32 PM, you wrote:
I am putting together a new mail server for [...] I am using the default XFS file system on the other partitions. Is there a way to expand the swap file? If not, is this problem bad enough that I should start over with a new install? I do not want to start over unless I need to.
I think that on today's systems, swap is just the last resort before a system crashes. If you ever run into a situation where you actually need swap, you will have to throw more memory into the machine anyway. I wouldn't worry about it.
Btw., are you sure you want to use XFS for a mail server? I ran some tests about a year ago and found that ext4 was faster than XFS by a factor of 10. The tests used a "maildir"-style postfix installation, which results in many thousands of files in the user directories. The only problem was that CentOS was not able to format huge ext4 partitions out of the box. That was true for C6; I don't know about C7.
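If "huge" means past ext4's old 16 TB boundary, formatting beyond it needs the 64bit feature, which the C6-era e2fsprogs predated. With e2fsprogs 1.42 or newer it is roughly this (a sketch; /dev/sdb1 is a placeholder device):

  mkfs.ext4 -O 64bit /dev/sdb1    # 64-bit block numbers let the filesystem exceed 16 TB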
best regards --- Michael Schumacher
On Mon, Feb 16, 2015 at 12:07 AM, Michael Schumacher michael.schumacher@pamas.de wrote:
Btw., are you sure you want to use XFS for a mail server? I ran some tests about a year ago and found that ext4 was faster than XFS by a factor of 10. The tests used a "maildir"-style postfix installation, which results in many thousands of files in the user directories.
This is a recent benchmark using Postmark, which supposedly simulates mail server workloads. XFS stacks up a bit better than ext4. http://www.phoronix.com/scan.php?page=article&item=linux-3.19-ssd-fs&...
A neat trick for big, busy mail servers that comes up on linux-raid@ and the XFS list from time to time is using md linear/concat to put the physical drives together into a single logical block device, and then format it XFS. XFS will create multiple allocation groups (AGs) across all of those devices and do parallel writes across all of them. It often performs quite a bit better than raid0, precisely because of the many-thousands-of-small-files-in-many-directories workload.
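A rough sketch of that layout (the device names and mount point are placeholders; mkfs.xfs picks the AG count at format time, and xfs_info shows what it chose):

  mdadm --create /dev/md0 --level=linear --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde    # concatenate four drives end to end
  mkfs.xfs /dev/md0                          # AGs are laid out across the whole device
  mkdir -p /srv/mail && mount /dev/md0 /srv/mail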
On 16/02/2015 10:04, Chris Murphy wrote:
This is a recent benchmark using Postmark, which supposedly simulates mail server workloads. XFS stacks up a bit better than ext4. http://www.phoronix.com/scan.php?page=article&item=linux-3.19-ssd-fs&...
A neat trick for big, busy mail servers that comes up on linux-raid@ and the XFS list from time to time is using md linear/concat to put the physical drives together into a single logical block device, and then format it XFS. XFS will create multiple allocation groups (AGs) across all of those devices and do parallel writes across all of them. It often performs quite a bit better than raid0, precisely because of the many-thousands-of-small-files-in-many-directories workload.
Hey Chris,
I am unsure I understand what you wrote: "XFS will create multiple AGs across all of those devices." Are you comparing md linear/concat to md raid0, and saying that the upper-level XFS will run on top of them?
(Just to make sure I understood what you have written.)
Eliezer
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru eliezer@ngtech.co.il wrote:
I am unsure I understand what you wrote: "XFS will create multiple AGs across all of those devices." Are you comparing md linear/concat to md raid0, and saying that the upper-level XFS will run on top of them?
Yes to the first question; I'm not understanding the second question. Allocation groups are created at mkfs time. When the workload I/O involves a lot of concurrency, XFS over linear will beat XFS or ext4 over raid0, whereas for streaming-performance workloads striped raid will work better. If redundancy is needed, mdadm permits the creation of raid1+linear, as compared to raid10. http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/...
You can think of XFS on linear as being something like raid0 at a file level rather than at a block level. On a completely empty file system, if you start copying a pile of dozens or more (typically hundreds or thousands) of files into mail directories, XFS distributes those across AGs, and hence across all drives, in parallel. ext4 would for the most part focus all writes on the first device until it is mostly full, then the 2nd device, then the 3rd. And on raid0 you get a bunch of disk contention that isn't really necessary, because everyone's files are striped across all drives.
So contrary to the popular opinion that XFS is mainly useful for large files, it's actually quite useful for concurrent read/write workloads of small files on a many-disk linear/concat arrangement. This extends to using raid1 + linear instead of raid10 if some redundancy is desired.
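A sketch of that raid1 + linear variant, again with placeholder device names: build the mirror pairs first, then concatenate the mirrors and format:

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc    # mirror pair 1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde    # mirror pair 2
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/md1 /dev/md2
  mkfs.xfs /dev/md0                      # XFS spreads its AGs across both mirrors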
Thanks Chris for the detailed response!
I couldn't understand the complex sentence about XFS and was almost convinced that XFS might offer a new way to spread data across multiple disks.
And in this case the confusion was mainly mine, not yours.
Now I understand how an md linear/concat array can be exploited with XFS!
Not directly related, but given that XFS has commercial support behind it, that can be an advantage over other file systems that are built to handle lots of small files but might not have commercial support.
Eliezer
On 16/02/2015 19:21, Chris Murphy wrote:
So contrary to the popular opinion that XFS is mainly useful for large files, it's actually quite useful for concurrent read/write workloads of small files on a many-disk linear/concat arrangement. This extends to using raid1 + linear instead of raid10 if some redundancy is desired.
On Mon, Feb 16, 2015 at 10:48 AM, Eliezer Croitoru eliezer@ngtech.co.il wrote:
Thanks Chris for the detailed response!
I couldn't understand the complex sentence about XFS and was almost convinced that XFS might offer a new way to spread data across multiple disks.
It's not new in XFS; it has behaved like this forever. But it is new in the sense that other filesystems don't allocate this way; I'm not aware of any other filesystem that does.
Btrfs could possibly be revised to do this somewhat more easily than other filesystems, since it also has a concept of allocation chunks. Right now its single data profile allocates 1 GB chunks one at a time until each is full, and the next 1 GB chunk goes on the next device in a sequence determined mainly by free space. This is how it's able to use different-sized devices (including with raid0, 1, 5, 6 profiles). So it can read files from multiple drives at the same time, but it tends to write to only one drive at a time (unless one of the striping raid-like profiles is used).
On Mon, Feb 16, 2015 at 10:21 AM, Chris Murphy lists@colorremedies.com wrote:
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru eliezer@ngtech.co.il wrote:
I am unsure I understand what you wrote: "XFS will create multiple AGs across all of those devices." Are you comparing md linear/concat to md raid0, and saying that the upper-level XFS will run on top of them?
Yes to the first question; I'm not understanding the second question. Allocation groups are created at mkfs time. When the workload I/O involves a lot of concurrency, XFS over linear will beat XFS or ext4 over raid0, whereas for streaming-performance workloads striped raid will work better. If redundancy is needed, mdadm permits the creation of raid1+linear, as compared to raid10. http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/...
You can think of XFS on linear as being something like raid0 at a file level rather than at a block level. On a completely empty file system, if you start copying a pile of dozens or more (typically hundreds or thousands) of files into mail directories, XFS distributes those across AGs, and hence across all drives, in parallel. ext4 would for the most part focus all writes on the first device until it is mostly full, then the 2nd device, then the 3rd. And on raid0 you get a bunch of disk contention that isn't really necessary, because everyone's files are striped across all drives.
So contrary to the popular opinion that XFS is mainly useful for large files, it's actually quite useful for concurrent read/write workloads of small files on a many-disk linear/concat arrangement. This extends to using raid1 + linear instead of raid10 if some redundancy is desired.
The other plus is that growing a linear array is cake. New drives just get added to the end of the concat and xfs_growfs is run; it takes less than a minute. Whereas an md raid0 grow means converting to raid4, then adding the device, then converting back to raid0. And further, a linear grow can use any size of drive, whereas with raid0 the drive sizes must all be the same.
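Roughly, with /dev/sdf as a placeholder for the new drive and /srv/mail as the mount point:

  mdadm --grow /dev/md0 --add /dev/sdf    # append the new drive to the end of the concat
  xfs_growfs /srv/mail                    # expand XFS into the new space, online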
On 16/02/2015 22:29, Chris Murphy wrote:
The other plus is that growing a linear array is cake. New drives just get added to the end of the concat and xfs_growfs is run; it takes less than a minute. Whereas an md raid0 grow means converting to raid4, then adding the device, then converting back to raid0. And further, a linear grow can use any size of drive, whereas with raid0 the drive sizes must all be the same.
Nice! I have been learning about md arrays and had seen the details about the grow operation, but this is another aspect I wasn't thinking about at first. For now I am not planning any storage, but it might come in handy later on.
Thanks, Eliezer