On 2/2/2015 10:32 AM, John R Pierce wrote:
On 2/1/2015 8:25 PM, Jatin Davey wrote:
On 2/2/2015 9:25 AM, John R Pierce wrote:
On 2/1/2015 7:31 PM, Jatin Davey wrote:
I ran your script and here is the output for it:
Start of the Output ***************************
[root@localhost bin]# lsi-raidinfo
sh: /opt/MegaRAID/MegaCli/MegaCli64: No such file or directory
-- Controllers --
As I said, you need to install the MegaCli software from LSI so you can manage the RAID card.
I am using a server which has a Cisco 12G SAS Modular Raid Controller, so I am wondering whether the software from LSI would work on it. Please correct me if I am wrong.
[root@localhost ~]# lspci | grep RAID
05:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02)
That's an LSI Logic MegaRAID 3108 chip. It is used on the LSI 9361-8i and many other OEM-rebranded cards. If Cisco has its own package of the MegaCli management software, then sure, use that.
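For reference, once a MegaCli package is installed (the LSI build or an OEM one), the controller can usually be queried directly as well; this is only a rough sketch, assuming the usual install path of /opt/MegaRAID/MegaCli/MegaCli64, and option spellings can vary between MegaCli versions:

    # Assumed install path; adjust if your package puts the binary elsewhere
    MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64

    $MEGACLI -AdpAllInfo -aAll     # adapter/firmware summary for all controllers
    $MEGACLI -LDInfo -Lall -aAll   # logical drive (volume) layout and state
    $MEGACLI -PDList -aAll         # physical disks, including media errors and predictive failures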
Hi John
I managed to install the software from LSI on my box and ran the script that you provided; here is what I get from it.
************ Output ******************
[root@localhost bin]# lsi-raidinfo
-- Controllers --
-- ID | Model
c0 | Cisco 12G SAS Modular Raid Controller

-- Volumes --
-- ID | Type | Size | Status | InProgress
volume c0u0 | RAID10 2x2 | 1816G | Optimal | None

-- Disks --
-- Encl:Slot | vol-span-unit | Model | Status
disk 252:1 | 0-0-0 | 9XG7TNQVST91000640NS CC03 | Online, Spun Up
disk 252:2 | 0-0-1 | 9XG4M4X3ST91000640NS CC03 | Online, Spun Up
disk 252:3 | 0-1-1 | 9XG4LY7JST91000640NS CC03 | Online, Spun Up
disk 252:4 | 0-1-0 | 9XG51233ST91000640NS CC03 | Online, Spun Up
End of Output **************************
Let me know if I need to change or configure anything else to make the I/O faster than it currently is. I cannot go in for SSDs due to budget constraints, so I need to make the best use of the SATA disks that I currently have.
Thanks Jatin
On 2/2/2015 8:11 PM, Jatin Davey wrote:
disk 252:1 | 0-0-0 | 9XG7TNQVST91000640NS CC03 | Online, Spun Up
disk 252:2 | 0-0-1 | 9XG4M4X3ST91000640NS CC03 | Online, Spun Up
disk 252:3 | 0-1-1 | 9XG4LY7JST91000640NS CC03 | Online, Spun Up
disk 252:4 | 0-1-0 | 9XG51233ST91000640NS CC03 | Online, Spun Up
End of Output **************************
Let me know if I need to change or configure anything else to make the I/O faster than it currently is. I cannot go in for SSDs due to budget constraints, so I need to make the best use of the SATA disks that I currently have.
So, you have 2x2 ST91000640NS drives: http://www.seagate.com/products/enterprise-servers-storage/nearline-storage/...
Those are "Nearline" disks, 7200 RPM. They are intended for bulk secondary storage, archives, backups and such.
You said you have a number of virtual machines all attempting to access this RAID10 at once? I'm not surprised that it is slow. You're probably limited by random I/O per second; that RAID likely does around 250 random operations per second. Share that among 6-7 virtual systems, and if they are all doing disk I/O, they will slow each other down.
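To put a number on that limit, a synthetic random-read test against the array gives a baseline; a minimal sketch with fio (available from the EPEL repository on CentOS, or built from source), run against a scratch file on the RAID10 filesystem so no real data is touched - the file path is a placeholder:

    # 4k random reads with some queue depth, direct I/O to bypass the page cache
    fio --name=randread --filename=/mnt/raid10/fio-testfile --size=4G \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting

With two 7200 RPM mirrors striped together, somewhere in the low hundreds of IOPS is the expected ballpark, which lines up with the ~250 figure above.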
On 2/3/2015 10:00 AM, John R Pierce wrote:
On 2/2/2015 8:11 PM, Jatin Davey wrote:
disk 252:1 | 0-0-0 | 9XG7TNQVST91000640NS CC03 | Online, Spun Up
disk 252:2 | 0-0-1 | 9XG4M4X3ST91000640NS CC03 | Online, Spun Up
disk 252:3 | 0-1-1 | 9XG4LY7JST91000640NS CC03 | Online, Spun Up
disk 252:4 | 0-1-0 | 9XG51233ST91000640NS CC03 | Online, Spun Up
End of Output **************************
Let me know if I need to change or configure anything else to make the I/O faster than it currently is. I cannot go in for SSDs due to budget constraints, so I need to make the best use of the SATA disks that I currently have.
So, you have 2x2 ST91000640NS drives: http://www.seagate.com/products/enterprise-servers-storage/nearline-storage/...
Those are "Nearline" disks, 7200 RPM. They are intended for bulk secondary storage, archives, backups and such.
You said you have a number of virtual machines all attempting to access this RAID10 at once? I'm not surprised that it is slow. You're probably limited by random I/O per second; that RAID likely does around 250 random operations per second. Share that among 6-7 virtual systems, and if they are all doing disk I/O, they will slow each other down.
So, you don't think that any configuration changes, like increasing the number of volumes or anything else, will help in reducing the I/O wait time?
Thanks Jatin
On 2/2/2015 8:52 PM, Jatin Davey wrote:
So, you don't think that any configuration changes, like increasing the number of volumes or anything else, will help in reducing the I/O wait time?
Not by much. It might reduce the overhead if you use LVM volumes for virtual disks instead of files, but if you're doing too much disk I/O, not much helps other than faster disks (or reducing the amount of reads through more aggressive caching, i.e. more memory).
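As an illustration of the LVM route, an existing file-backed image can be copied into a logical volume and the guest pointed at the block device; this is only a sketch, and the volume group name (vg_data), LV name (vm1-disk), and image path are made-up examples:

    # Create a 150G logical volume for one guest (assumes the volume group vg_data already exists)
    lvcreate -L 150G -n vm1-disk vg_data

    # Copy the existing qcow2 image into the LV as raw data (guest must be shut down first)
    qemu-img convert -O raw /var/lib/libvirt/images/vm1.qcow2 /dev/vg_data/vm1-disk

    # Then edit the guest definition (virsh edit vm1) so its disk source is /dev/vg_data/vm1-disk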
On 2/3/2015 10:44 AM, John R Pierce wrote:
On 2/2/2015 8:52 PM, Jatin Davey wrote:
So, you don't think that any configuration changes, like increasing the number of volumes or anything else, will help in reducing the I/O wait time?
Not by much. It might reduce the overhead if you use LVM volumes for virtual disks instead of files, but if you're doing too much disk I/O, not much helps other than faster disks (or reducing the amount of reads through more aggressive caching, i.e. more memory).
Thanks John
I will test and get the I/O speed results with the following and see what works best with the given workload:
- Create 5 volumes, each 150 GB in size, for the 5 VMs that I will be running on the server
- Create 1 volume of 600 GB for the 5 VMs that I will be running on the server
- Try LVM volumes instead of files
I will test and compare the I/O responsiveness in all cases and go with the one that is acceptable.
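One way to compare the runs is to watch per-device latency and utilisation while each layout is under the normal VM load; a simple sketch using sysstat's iostat (the device name sda is just an example):

    # Extended stats every 5 seconds; watch r/s + w/s (IOPS), await (ms per request) and %util
    iostat -x 5 sda

    # Overall picture of CPU time spent waiting on I/O (the "wa" column)
    vmstat 5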
I appreciate your responses in this regard. Thanks again.
Regards, Jatin
On Mon, Feb 2, 2015 at 11:37 PM, Jatin Davey jashokda@cisco.com wrote:
I will test and get the I/O speed results with the following and see what works best with the given workload:
- Create 5 volumes, each 150 GB in size, for the 5 VMs that I will be running on the server
- Create 1 volume of 600 GB for the 5 VMs that I will be running on the server
- Try LVM volumes instead of files
I will test and compare the I/O responsiveness in all cases and go with the one that is acceptable.
Unless you put each VM on its own physical disk or RAID1 mirror, you aren't really doing anything to isolate the VMs from each other or to increase the odds that a head will be near the place the next access needs it to be.
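For completeness, splitting the four disks into two separate RAID1 mirrors (two or three VMs per mirror) would be done at the controller. This is destructive - the existing RAID10 and its data would have to be backed up and the virtual drive deleted first - and the MegaCli lines below are only a hedged sketch using the enclosure:slot IDs from the earlier output; double-check the syntax against your controller's documentation before running anything:

    # After backing up and removing the old virtual drive, create two RAID1 pairs
    /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r1 [252:1,252:2] -a0
    /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r1 [252:3,252:4] -a0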
Lol - spinning disks? Really?
SSDs are down to around 50 cents a gig, and they have 1TB models... slow disks = you get what you deserve... welcome to 2015. Auto-lacing shoes, self-drying jackets, hoverboards - oh, yeah, and 110k IOPS 1TB Samsung 850 Pro SSD drives for $449 on Newegg.
dumbass
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Les Mikesell
Sent: Tuesday, February 03, 2015 12:42 AM
To: CentOS mailing list
Subject: Re: [CentOS] Very slow disk I/O
On Mon, Feb 2, 2015 at 11:37 PM, Jatin Davey jashokda@cisco.com wrote:
I will test and get the I/O speed results with the following and see what works best with the given workload:
- Create 5 volumes, each 150 GB in size, for the 5 VMs that I will be running on the server
- Create 1 volume of 600 GB for the 5 VMs that I will be running on the server
- Try LVM volumes instead of files
I will test and compare the I/O responsiveness in all cases and go with the one that is acceptable.
Unless you put each VM on its own physical disk or RAID1 mirror, you aren't really doing anything to isolate the VMs from each other or to increase the odds that a head will be near the place the next access needs it to be.
--
Les Mikesell
lesmikesell@gmail.com
On 03.02.2015 at 10:14, Joseph L. Brunner wrote:
Lol - spinning disks? Really?
SSDs are down to around 50 cents a gig, and they have 1TB models... slow disks = you get what you deserve... welcome to 2015. Auto-lacing shoes, self-drying jackets, hoverboards - oh, yeah, and 110k IOPS 1TB Samsung 850 Pro SSD drives for $449 on Newegg.
dumbass
Right, *consumer grade* SSD prices have come down that much. But I am sure the appliance Jatin is building (while he uses the @cisco.com mail domain) is intended for enterprise usage and has to have enterprise grade hardware components, just as he used an enterprise grade HDD to build the machine (though a SATA rather than a SAS model, at only 7.2k rpm, which is, as John pointed out, not a good choice for a virtualization host).
Look into SSDs made for use in servers: SLC rather than MLC chips, and a SAS interface. Then let's speak again after you have checked their prices.
Regards
Alexander
On 02/02/2015 08:52 PM, Jatin Davey wrote:
So, you don't think that any configuration changes, like increasing the number of volumes or anything else, will help in reducing the I/O wait time?
No, because that won't change the number of heads that are present to service the I/O requests, nor segregate requests effectively. Your primary goals should be to reduce I/O (investigate using LVM-backed VMs instead of file-backed ones) or to increase hardware resources (possibly a larger number of smaller disks if SSDs are not in the budget).
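For the LVM-backed route, the difference shows up in the guest's disk definition; a hedged sketch of what the relevant disk element might look like after `virsh edit` (the device path and target names are examples, and the cache/io settings are a common choice for block-backed guests rather than a requirement):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg_data/vm1-disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>

A file-backed guest would instead have type='file' and a <source file='...'/> pointing at an image under /var/lib/libvirt/images.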
On 2/3/2015 11:06 AM, Gordon Messmer wrote:
On 02/02/2015 08:52 PM, Jatin Davey wrote:
So, you don't think that any configuration changes, like increasing the number of volumes or anything else, will help in reducing the I/O wait time?
No, because that won't change the number of heads that are present to service the I/O requests, nor segregate requests effectively. Your primary goals should be to reduce I/O (investigate using LVM-backed VMs instead of file-backed ones)
[Jatin] Sure, I have no idea about LVM, so I will do my learning on it. Thanks for pointing me to it.
or to increase hardware resources (possibly a larger number of smaller disks if SSDs are not in the budget).
[Jatin] This is also an option that I can certainly try, since using more disks instead of SSDs falls within my budget.
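Since LVM is new territory, the basic workflow is short; a sketch with made-up device and volume group names (the partition /dev/sdb1 and the group vg_data are placeholders for whatever free storage the host actually has):

    pvcreate /dev/sdb1                     # mark a partition (or whole disk) as an LVM physical volume
    vgcreate vg_data /dev/sdb1             # group one or more physical volumes into a volume group
    lvcreate -L 150G -n vm1-disk vg_data   # carve a logical volume out of the group
    lvs                                    # list logical volumes and their sizes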