At our Physics research labs we do a lot with low-latency networks. We have been using CentOS for over 3 years now and it's been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules, to be specific. I was wondering: how does one measure the speed of the kernel? Is that even possible?
On Sat, 2010-05-08 at 08:35 -0400, Mag Gam wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
--- Very possible indeed, and interesting as well.
You need the RT kernel to do so, using "latencytop". It needs to be compiled against the RT kernel and then run as root; you cannot run it as a regular user. The RT kernel already has what is needed to run it; the mainline kernel does not.
Bad news: CentOS does not yet offer an RT-built kernel, so you have to roll your own, which is not hard to do. You will want the whole set, including the rt-trace kernel. I build my own, plus the other grid packages.
kernel-rt-2.6.24.7-149.el5
kernel-rt-devel-2.6.24.7-149.el5
kernel-rt-vanilla-devel-2.6.24.7-149.el5
kernel-rt-trace-2.6.24.7-149.el5
kernel-rt-doc-2.6.24.7-149.el5
kernel-rt-vanilla-2.6.24.7-149.el5
kernel-rt-trace-devel-2.6.24.7-149.el5
There is one place that has an RT kernel if you want to try it, so maybe that person will post a link to this thread for you.
John
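Once an RT kernel is installed, a quick way to confirm which build you are actually booted into is to inspect `uname -r`. A minimal sketch, assuming the RT builds carry an "rt" substring in their release string (an assumption taken from the package names above; verify against your own build):

```shell
#!/bin/sh
# Sketch: decide whether a kernel release string looks like an RT build.
# The "rt" substring convention is an assumption from the kernel-rt
# package names in this thread, not a guarantee.
is_rt_kernel() {
    case "$1" in
        *rt*) echo yes ;;
        *)    echo no ;;
    esac
}

# Typical use on a running system:
#   is_rt_kernel "$(uname -r)"
```

Note the pattern is deliberately loose; a release string containing, say, "virt" would also match, so treat this as a hint rather than a proof.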
On Sat, May 8, 2010 at 6:49 AM, JohnS jses27@gmail.com wrote:
There is of one place that has a RT Kernel if you want to try it so maybe that person will post a link to this thread for you.
Are you referring to me, John? :)
You are welcome to provide the link as far as it is stated that they are for testing purposes only:
http://centos.toracat.org/kernel/centos5/realtime/
Akemi
On Sat, 2010-05-08 at 07:10 -0700, Akemi Yagi wrote:
On Sat, May 8, 2010 at 6:49 AM, JohnS jses27@gmail.com wrote:
There is of one place that has a RT Kernel if you want to try it so maybe that person will post a link to this thread for you.
Are you referring to me, John? :)
You are welcome to provide the link as far as it is stated that they are for testing purposes only:
http://centos.toracat.org/kernel/centos5/realtime/
Akemi
---- Thank you :-)
John
-----Original Message----- Akemi Yagi [amyagi@gmail.com] wrote
On Sat, May 8, 2010 at 6:49 AM, JohnS jses27@gmail.com wrote:
There is of one place that has a RT Kernel if you want to try it so maybe that person will post a link to this thread for you.
Are you referring to me, John? :)
You are welcome to provide the link as far as it is stated that they are for testing purposes only:
This points to: kernel-rt-2.6.24.7-149.ay.src.rpm et amici. Do you have any kernel-rt-2.6.18-164.15.1.el5 (or similarly named) builds that are from the same source base as the current CentOS kernel?
******************************************************************* This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This footnote also confirms that this email message has been swept for the presence of computer viruses. www.Hubbell.com - Hubbell Incorporated**
On Mon, 2010-05-10 at 10:45 -0400, Brunner, Brian T. wrote:
-----Original Message----- Akemi Yagi [amyagi@gmail.com] wrote
On Sat, May 8, 2010 at 6:49 AM, JohnS jses27@gmail.com wrote:
There is of one place that has a RT Kernel if you want to try it so maybe that person will post a link to this thread for you.
Are you referring to me, John? :)
You are welcome to provide the link as far as it is stated that they are for testing purposes only:
This points to: kernel-rt-2.6.24.7-149.ay.src.rpm et amici. Do you have any kernel-rt-2.6.18-164.15.1.el5 (or similarly named) builds that are from the same source base as the current CentOS kernel?
kernel-rt-2.6.24.7-149 is the newest real-time kernel. RT is based on 2.6.24, not 2.6.18. So no, there is not a 2.6.18 kernel-rt for CentOS, nor was there ever one. Akemi just has those for testing.
As for CentOS itself, it has no RT kernel. Not yet, knock knock.
Does that answer your question?
John
-----Original Message-----
kernel-rt-2.6.24.7-149 is the newest Real Time Kernel. RT is based on 2.6.24 and not 2.6.18. So no there is not a 2.6.18-kernel-rt for CentOS nor ever was one. Akemi just has those for testing.
For as CentOS it has no RT kernel. Not yet, knock knock.
Answer your question?
John
It answered that question (thanks!) and uncovered another:
Has anybody run CentOS 5 with the rt kernel that Akemi Yagi has built?
If so, what modules etc. have to be updated to use it? (This is asking: has anybody mapped the minefield?)
On Mon, May 10, 2010 at 8:17 AM, Brunner, Brian T. BBrunner@gai-tronics.com wrote:
It answered that question (thanks!) and uncovered another:
Has anybody run CentOS 5 with the rt kernel that Akemi Yagi has built?
If so, what modules etc have to be updated to use it? (this is asking, has anybody mapped the minefield?)
I (naturally) test-installed kernel-rt and ran it for a while. But I did not do anything in particular.
By the way, debuginfo is being uploaded. It takes a while because the files are large ...
Akemi
On Mon, 2010-05-10 at 11:17 -0400, Brunner, Brian T. wrote:
-----Original Message-----
kernel-rt-2.6.24.7-149 is the newest Real Time Kernel. RT is based on 2.6.24 and not 2.6.18. So no there is not a 2.6.18-kernel-rt for CentOS nor ever was one. Akemi just has those for testing.
For as CentOS it has no RT kernel. Not yet, knock knock.
Answer your question?
John
It answered that question (thanks!) and uncovered another:
Has anybody run CentOS 5 with the rt kernel that Akemi Yagi has built?
If so, what modules etc have to be updated to use it? (this is asking, has anybody mapped the minefield?)
--- Yes, I can say there is nothing wrong with Akemi's. I have run them. They did not explode. The ones I have built have not exploded either, and the only downtime is for reboots and hardware firmware updates on the servers.
I can positively say I have had no problems on Dell and IBM hardware, or on my own desktop and various HP desktops.
Kmods are a no-go at present, so forget those.
They run surprisingly well and stay responsive under heavy CPU- and memory-bound tasks. The real performance gains come when interrupt servicing is taken off the default path and handled manually, assigning a single task per CPU. It's not worth running unless it's on an SMP machine.
All testing and running has been under CentOS 5.4's current gcc builds.
John
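The manual interrupt and task placement John describes is usually done through /proc/irq/*/smp_affinity and taskset. A minimal sketch, assuming the standard hex-bitmask format those interfaces use (the IRQ number and CPU choices in the comments are illustrative only, not from this thread):

```shell
#!/bin/sh
# Sketch: compute the hex affinity bitmask for a single CPU, in the
# format accepted by /proc/irq/N/smp_affinity and taskset.
cpu_mask() {
    # $1: CPU index; prints a hex bitmask with only that CPU's bit set
    printf '%x\n' $((1 << $1))
}

# Typical (root-only) use -- IRQ 50 and CPUs 2/3 are placeholders:
#   echo "$(cpu_mask 2)" > /proc/irq/50/smp_affinity   # steer IRQ 50 to CPU2
#   taskset -c 3 ./rt_task                             # pin the task to CPU3
```

The design point is simply that each bit in the mask corresponds to one CPU, so a single-CPU mask is `1 << cpu`.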
On Mon, 2010-05-10 at 11:17 -0400, Brunner, Brian T. wrote:
-----Original Message-----
kernel-rt-2.6.24.7-149 is the newest Real Time Kernel. RT is based on 2.6.24 and not 2.6.18. So no there is not a 2.6.18-kernel-rt for CentOS nor ever was one. Akemi just has those for testing.
For as CentOS it has no RT kernel. Not yet, knock knock.
Answer your question?
John
It answered that question (thanks!) and uncovered another:
Has anybody run CentOS 5 with the rt kernel that Akemi Yagi has built?
If so, what modules etc have to be updated to use it? (this is asking, has anybody mapped the minefield?)
--- One last thing:
They are not for desktop user machines, unless those machines are part of an MRG messaging and grid setup configured to use spare CPU cycles.
John
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
--- Ross,
I was kind of interested in what oprofile would do, but the FAQ[1] says RH is not supported. Does this still hold true? Do you use it?
John
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
--- Ross, never mind; I just yum-installed it onto a machine. Their FAQ is inherently wrong.
John
This is an interesting topic.
So, how does one compare the kernel "speed" between the RT and stock kernels?
Is there a benchmark I can use? For example (I know this is wrong): can I look at /proc/cpuinfo, check the bogomips, and compare and contrast?
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
John
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
On Sun, May 9, 2010 at 11:38 AM, Mag Gam magawake@gmail.com wrote:
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
John
This is an interesting topic.
So, how does one compare the kernel "speed" from RT and Stock kernel?
Is there a benchmark I can use? For example (I know this is wrong): can I look at /proc/cpuinfo and look at the bogmips and compare and contrast?
I think there are numerous ways, but off the top of my head: oprofile the application on the stock kernel, then oprofile the application on the RT kernel, and compare the results.
-Ross
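Ross's compare-the-two-kernels approach could look something like the session below, shown as a dry run that only echoes the commands (opcontrol needs root and a profiling-capable kernel). The opcontrol/opreport flags and the vmlinux path are assumptions based on oprofile's classic interface; check the man pages on your box before running any of this for real:

```shell
#!/bin/sh
# Dry-run sketch of an oprofile session (classic opcontrol interface).
# Flags and paths are assumptions -- verify with `man opcontrol`.
profile_run() {
    vmlinux=$1; shift
    echo "opcontrol --vmlinux=$vmlinux"   # point oprofile at kernel symbols
    echo "opcontrol --start"
    echo "$@"                             # the workload to measure
    echo "opcontrol --stop"
    echo "opreport --symbols"             # per-symbol breakdown afterwards
}

# Run once per kernel (stock, then RT) and diff the two opreport outputs:
#   profile_run /usr/lib/debug/lib/modules/$(uname -r)/vmlinux ./my_app
```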
On Sun, 2010-05-09 at 11:38 -0400, Mag Gam wrote:
This is an interesting topic.
So, how does one compare the kernel "speed" from RT and Stock kernel?
Is there a benchmark I can use? For example (I know this is wrong): can I look at /proc/cpuinfo and look at the bogmips and compare and contrast?
--- Please bottom-post... or quote inline...
https://rt.wiki.kernel.org/index.php/Main_Page
/proc/cpuinfo and your bogomips are not going to give you what you want in this respect.
When it comes to speed there may be different things to measure. With the rt-kernel you are looking at the latency of the kernel and improving that, which starts at the raw code layer before it is even compiled; you will never get good deterministic behavior from the kernel without that. Then there is the filesystem your app is running on.
Take HL7 messaging: the speed factor comes from one machine translating a message and sending it to an upstream provider, or sending it to another host that requires further translation into a database.
Indeed, they say the less kernel module space there is, the faster it performs, but I can't really see that. Maybe someone with more experience will comment on that. It doesn't seem to hold true when you allocate a dedicated amount of memory to the kernel.
A patch is required for the mainline kernel to run latencytop. The oprofile tool that Ross mentioned also looks promising.
John
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
The FAQ is only correct with respect to the project's view.
Red Hat has a custom oprofile that works with their custom kernels, so stock oprofile from the project's site IS incompatible, but that's OK because RH provides one that works with their distro.
-Ross
Dear All, I have a new server, an HP DL 180 G6 with a quad-core processor, 4 GB RAM, and one 750 GB WDC SATA HDD. I was confused when installing CentOS 5 64-bit on that server: it took about two hours to format the ext3 file system. Is this normal?
When I compare with another SATA hard drive in another computer, formatting the file system does not take nearly that long. Copying a file on the local hard drive also takes longer than copying the same file on another server.
How can I debug this issue?
-- Best regards, David http://blog.pnyet.web.id
2010/5/10 David Suhendrik david@pnyet.web.id:
Dear All, I've a new server HP DL 180 G6 with quad core processor, ram 4 GB, hdd (WDC) 1x750GB Sata. I was confused when installing CentOS 5 64bit on that server, I take about two hours to format the ext3 file system. is this normal?
Well, ext3 at that size usually takes a lot of time to format. Use XFS if you want a fast format.
-- Eero
On Mon, 2010-05-10 at 10:03 +0700, David Suhendrik wrote:
Dear All, I've a new server HP DL 180 G6 with quad core processor, ram 4 GB, hdd (WDC) 1x750GB Sata. I was confused when installing CentOS 5 64bit on that server, I take about two hours to format the ext3 file system. is this normal?
Because when I compare with other sata hard drive in another computer file system format is not too long like that. And when I copy the file on the local hard drive for longer time when compared with the copy of the file on another server.
How to debug on this issue?
-- Best regards, David http://blog.pnyet.web.id
Could you please check in the BIOS whether SATA is in native or IDE-emulation mode? I've seen this on systems where SATA was in IDE emulation.
Calin
Key fingerprint = 37B8 0DA5 9B2A 8554 FB2B 4145 5DC1 15DD A3EF E857
================================================= You're already carrying the sphere!
Greetings,
On Mon, May 10, 2010 at 8:33 AM, David Suhendrik david@pnyet.web.id wrote:
Dear All, How to debug on this issue?
Just run hdparm -tT /dev/devname.
If the speed is anything less than 30 MB/s, try adding ide0=noprobe to the kernel line in GRUB.
Check the man pages, of course, to get the exact syntax.
It worked for me on a *lot* of HP systems.
Regards,
Rajagopal
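Rajagopal's 30 MB/s rule of thumb can be applied automatically by parsing the hdparm output. A sketch, assuming the `hdparm -tT` output format shown elsewhere in this thread (other hdparm versions may word the line differently):

```shell
#!/bin/sh
# Sketch: read `hdparm -tT` output on stdin, pull out the buffered
# disk read rate, and print "ok" or "slow" against a 30 MB/s threshold.
disk_speed_ok() {
    awk '/buffered disk reads/ {
             mbps = $(NF - 1)              # the number before "MB/sec"
             if (mbps + 0 >= 30) print "ok"; else print "slow"
         }'
}

# Typical use:
#   hdparm -tT /dev/sda | disk_speed_ok
```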
thus David Suhendrik spake:
Dear All, I've a new server HP DL 180 G6 with quad core processor, ram 4 GB, hdd (WDC) 1x750GB Sata. I was confused when installing CentOS 5 64bit on that server, I take about two hours to format the ext3 file system. is this normal?
Hi,
could you provide the exact model number of that HD?
I think it could be a 4K issue. We ran into this, too, some months ago:
http://www.hv23.net/2010/02/wd10ears-performance-larger-block-size-issues4k/
HTH,
Timo
Because when I compare with other sata hard drive in another computer file system format is not too long like that. And when I copy the file on the local hard drive for longer time when compared with the copy of the file on another server.
How to debug on this issue?
-- Best regards, David http://blog.pnyet.web.id
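Timo's 4K suspicion can be sanity-checked against the drive's model string (available from `hdparm -i` or /sys/block/*/device/model). A sketch, assuming WD's Advanced Format drives carry the "EARS" suffix as in the linked article; the model strings in the test are illustrative:

```shell
#!/bin/sh
# Sketch: flag drive model strings that look like WD Advanced Format
# (4K-sector) drives. The "EARS" suffix convention is an assumption
# taken from the wd10ears article linked above.
looks_like_4k_wd() {
    case "$1" in
        WD*EARS*) echo suspect ;;
        *)        echo unknown ;;
    esac
}

# Typical use:
#   looks_like_4k_wd "$(cat /sys/block/sda/device/model)"
```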
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5:
 Timing cached reads:   9952 MB in  2.00 seconds = 4980.51 MB/sec
 Timing buffered disk reads:    8 MB in  3.08 seconds =   2.60 MB/sec
@Timo:
458930-B21 HP 750GB 7.2k HP MDL SATA
I have no idea in this case :(
-- Best regards, David http://blog.pnyet.web.id
On 05/10/2010 03:19 PM, Timo Schoeler wrote:
thus David Suhendrik spake:
Dear All, I've a new server HP DL 180 G6 with quad core processor, ram 4 GB, hdd (WDC) 1x750GB Sata. I was confused when installing CentOS 5 64bit on that server, I take about two hours to format the ext3 file system. is this normal?
Hi,
could you provide the exact model number of that HD?
I think it could be a 4K issue. We ran into this, too, some months ago:
http://www.hv23.net/2010/02/wd10ears-performance-larger-block-size-issues4k/
HTH,
Timo
Because when I compare with other sata hard drive in another computer file system format is not too long like that. And when I copy the file on the local hard drive for longer time when compared with the copy of the file on another server.
How to debug on this issue?
-- Best regards, David http://blog.pnyet.web.id
Greetings,
On Tue, May 11, 2010 at 8:08 AM, David Suhendrik david@pnyet.web.id wrote:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
First of all, it should report /dev/sda and not /dev/hda.
That is a horrible speed for a modern disk.
Modern SATA disks show around 50-80 MB/sec.
I am sure ide0=noprobe (check the docs) on the kernel line, in addition to the other suggestions, will dramatically speed things up.
Regards,
Rajagopal
Am Dienstag, den 11.05.2010, 11:38 +0530 schrieb Rajagopal Swaminathan:
Greetings,
On Tue, May 11, 2010 at 8:08 AM, David Suhendrik david@pnyet.web.id wrote:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
First of all it should report /dev/sda and not /dev/hda
It is a horrible speed for modern disks.
Modern SATA disks show around 50-80 MB/Sec
I am sure ide0noprobe=no (or zero -- check docs) in the kernel mline will surely speed up in addition to other suggestions will dramatically speed up.
As will setting the operation mode in the BIOS from compatible to SATA. That's the first thing I do on all HP servers when they are shipped.
Chris
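One quick hint about which mode you are actually in: under CentOS 5, a disk claimed by the legacy IDE driver shows up as /dev/hdX, while libata/AHCI names it /dev/sdX. That is consistent with David's /dev/hda5 above. A sketch of that check:

```shell
#!/bin/sh
# Sketch: infer from the device name which driver claimed the disk.
# /dev/hdX = legacy IDE driver (BIOS likely in compatible mode, PIO risk);
# /dev/sdX = libata/AHCI. Naming per CentOS 5 defaults.
disk_driver_hint() {
    case "$1" in
        /dev/hd*) echo "legacy IDE driver - check BIOS SATA mode" ;;
        /dev/sd*) echo "libata/AHCI" ;;
        *)        echo "unknown" ;;
    esac
}
```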
I tried to change the SATA configuration in the BIOS from compatible to AHCI, but then my hard drive is not detected. I do not have a Smart Array controller. Does the AHCI feature require a Smart Array controller? I've been looking for a reference but did not find one.
Any suggestions?
-- Best regards, David http://blog.pnyet.web.id
On 05/12/2010 02:52 AM, Christoph Maser wrote:
Am Dienstag, den 11.05.2010, 11:38 +0530 schrieb Rajagopal Swaminathan:
Greetings,
On Tue, May 11, 2010 at 8:08 AM, David Suhendrikdavid@pnyet.web.id wrote:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
First of all it should report /dev/sda and not /dev/hda
It is a horrible speed for modern disks.
Modern SATA disks show around 50-80 MB/Sec
I am sure ide0noprobe=no (or zero -- check docs) in the kernel mline will surely speed up in addition to other suggestions will dramatically speed up.
As will setting the operation mode in mode in BIOS from compatible to SATA. First thing i do on all HP servers when they are shipped.
Chris
At Wed, 12 May 2010 12:01:06 +0700 CentOS mailing list centos@centos.org wrote:
I tried to change the configuration of a compatible sata in bios to AHCI, but my hard drive is not detected. I do not have a smart array controller. I do this AHCI features need smart array controller? I've been looking for a reference, but did not find.
Any suggestions?
You could try adding the 'irqpoll' kernel parameter. Some AHCI controllers are a little wonky WRT IRQ detection (or maybe it is the AHCI driver that has a problem with IRQ detection). This is what works for my nVidia chipset motherboard.
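For reference, the kernel parameters being suggested in this thread (irqpoll, ide0=noprobe) go on the kernel line in /boot/grub/grub.conf. A hypothetical stanza with irqpoll appended; the kernel version and root device below are placeholders, not taken from this thread:

```
title CentOS (2.6.18-164.15.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.15.1.el5 ro root=/dev/VolGroup00/LogVol00 irqpoll
        initrd /initrd-2.6.18-164.15.1.el5.img
```

Parameters added this way take effect on the next boot only; test once from the GRUB edit prompt before making it permanent.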
-- Best regards, David http://blog.pnyet.web.id
On 05/12/2010 02:52 AM, Christoph Maser wrote:
Am Dienstag, den 11.05.2010, 11:38 +0530 schrieb Rajagopal Swaminathan:
Greetings,
On Tue, May 11, 2010 at 8:08 AM, David Suhendrikdavid@pnyet.web.id wrote:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
First of all it should report /dev/sda and not /dev/hda
It is a horrible speed for modern disks.
Modern SATA disks show around 50-80 MB/Sec
I am sure ide0noprobe=no (or zero -- check docs) in the kernel mline will surely speed up in addition to other suggestions will dramatically speed up.
As will setting the operation mode in mode in BIOS from compatible to SATA. First thing i do on all HP servers when they are shipped.
Chris
Thanks for the help. The solution to this problem was to update the BIOS; after moving to the newest BIOS version I can use AHCI mode on the SATA controller, so that was indeed the problem.
But I had to install Windows Server to update the BIOS, then reinstall Linux afterwards.
-- Best regards, David http://blog.pnyet.web.id
On 05/12/2010 02:52 AM, Christoph Maser wrote:
Am Dienstag, den 11.05.2010, 11:38 +0530 schrieb Rajagopal Swaminathan:
Greetings,
On Tue, May 11, 2010 at 8:08 AM, David Suhendrikdavid@pnyet.web.id wrote:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
First of all it should report /dev/sda and not /dev/hda
It is a horrible speed for modern disks.
Modern SATA disks show around 50-80 MB/Sec
I am sure ide0noprobe=no (or zero -- check docs) in the kernel mline will surely speed up in addition to other suggestions will dramatically speed up.
As will setting the operation mode in mode in BIOS from compatible to SATA. First thing i do on all HP servers when they are shipped.
Chris
Am Samstag, den 15.05.2010, 09:04 +0700 schrieb David Suhendrik:
Thanks for the help, a solution to this problem is to update the bios, and after using the newest version of bios I can use AHCI mode on the sata controller, and indeed this is the problem.
But I had to install windows server to update the bios, then installed again using Linux.
-- Best regards, David http://blog.pnyet.web.id
Usually you can download a bootable CD image from HP for BIOS updates.
Chris
David Suhendrik wrote:
Thanks for the help, a solution to this problem is to update the bios, and after using the newest version of bios I can use AHCI mode on the sata controller, and indeed this is the problem.
But I had to install windows server to update the bios, then installed again using Linux.
You can put most Windows-only flash updaters and such on a USB stick or CD with Hiren's BootCD or similar, and run them from there.
thus David Suhendrik spake:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing cached reads: 9952 MB in 2.00 seconds = 4980.51 MB/sec Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
@Timo:
458930-B21 HP 750GB 7.2k HP MDL SATA
I don't have idea for this case :(
Hm, that says it's *not* a 4K drive, so this is not the source of the problem. Would have been too easy...
Timo
-- Best regards, David http://blog.pnyet.web.id
On 05/10/2010 03:19 PM, Timo Schoeler wrote: thus David Suhendrik spake:
Dear All, I've a new server HP DL 180 G6 with quad core processor, ram 4 GB, hdd (WDC) 1x750GB Sata. I was confused when installing CentOS 5 64bit on that server, I take about two hours to format the ext3 file system. is this normal?
Hi,
could you provide the exact model number of that HD?
I think it could be a 4K issue. We ran into this, too, some months ago:
http://www.hv23.net/2010/02/wd10ears-performance-larger-block-size-issues4k/
HTH,
Timo
Because when I compare with other sata hard drive in another computer file system format is not too long like that. And when I copy the file on the local hard drive for longer time when compared with the copy of the file on another server.
How to debug on this issue?
-- Best regards, David http://blog.pnyet.web.id
David Suhendrik wrote:
@Rajagopal:
This result: # hdparm -tT /dev/hda5
/dev/hda5: Timing cached reads: 9952 MB in 2.00 seconds = 4980.51 MB/sec Timing buffered disk reads: 8 MB in 3.08 seconds = 2.60 MB/sec
8 MB/sec sounds like Programmed I/O (PIO) to me.
It's probably 100% CPU busy too.
On Sun, 2010-05-09 at 21:46 -0400, Ross Walker wrote:
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
The FAQ is only correct in respect to the project's view.
Redhat has a custom oprofile that works with their custom kernels, so stock oprofile from the project's site IS incompatible, but that's OK cause RH provides one that works with their distro.
-Ross
--- Correct, as I found out.
John
JohnS wrote:
On Sun, 2010-05-09 at 21:46 -0400, Ross Walker wrote:
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
The FAQ is only correct in respect to the project's view.
Redhat has a custom oprofile that works with their custom kernels, so stock oprofile from the project's site IS incompatible, but that's OK cause RH provides one that works with their distro.
-Ross
Correct as i found out.
Would this also be suitable for testing efficiency loss from running under VMware or other virtualization methods?
On Mon, 2010-05-10 at 07:40 -0500, Les Mikesell wrote:
JohnS wrote:
On Sun, 2010-05-09 at 21:46 -0400, Ross Walker wrote:
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
The FAQ is only correct in respect to the project's view.
Redhat has a custom oprofile that works with their custom kernels, so stock oprofile from the project's site IS incompatible, but that's OK cause RH provides one that works with their distro.
-Ross
Correct as i found out.
Would this also be suitable for testing efficiency loss from running under VMware or other virtualization methods?
--- You say efficiency loss. That could mean anything from the power input down to the kernel. It looks like that can be determined with oprofile and latencytop. Latencytop will give you millisecond execution times. As for oprofile, maybe Ross will fill us in if he can.
John
On May 10, 2010, at 8:53 AM, JohnS jses27@gmail.com wrote:
On Mon, 2010-05-10 at 07:40 -0500, Les Mikesell wrote:
JohnS wrote:
On Sun, 2010-05-09 at 21:46 -0400, Ross Walker wrote:
On Sat, May 8, 2010 at 7:38 PM, JohnS jses27@gmail.com wrote:
On Sat, 2010-05-08 at 16:17 -0400, Ross Walker wrote:
On May 8, 2010, at 8:35 AM, Mag Gam magawake@gmail.com wrote:
At our Physics research labs we do a lot with low latency networks. We have been using Centos for over 3 years now and its been great! We would like to tune and optimize our setup by removing unneeded packages -- kernel modules to be specific. I was wondering, how does one measure the speed of the kernel. Is that even possible?
Use oprofile.
-Ross
Ross, never mind I just yummed it onto a machine there faq is inheritly wrong.
The FAQ is only correct in respect to the project's view.
Redhat has a custom oprofile that works with their custom kernels, so stock oprofile from the project's site IS incompatible, but that's OK cause RH provides one that works with their distro.
-Ross
Correct as i found out.
Would this also be suitable for testing efficiency loss from running under VMware or other virtualization methods?
You say efficiency loss. That could mean anything from the power input down to the kernel. It looks like that can be determined by oprofile and latencytop. Latencytop will give you the millisecond time for execution. As far as Oprofile maybe Ross will indeed fill us in if he can.
Oprofile will show where those precious latency timings are being spent. It of course adds latency itself, so this should be factored into the timings.
It will time all kernel operations; then you can drill down into particular modules/routines for more granularity.
It needs debug symbols to be fully useful, and can provide timings as source-code annotations.
This is useful for finding modules or statically linked routines that suck up precious time, so you can either eliminate or fix them.
-Ross
On Mon, May 10, 2010 at 6:46 AM, Ross Walker rswwalker@gmail.com wrote:
I never thought someone would run oprofile with the RT kernel. I can upload debuginfo if anyone needs it.
Akemi
On Mon, 2010-05-10 at 07:31 -0700, Akemi Yagi wrote:
Actually you need kernel-trace also in there. Trace is what is recommended by TUV.
John
On Mon, May 10, 2010 at 8:00 AM, JohnS jses27@gmail.com wrote:
Actually you need kernel-trace also in there. Trace is what is recommended by TUV.
I have kernel-rt-trace up there ...
Akemi
On May 10, 2010, at 8:40 AM, Les Mikesell lesmikesell@gmail.com wrote:
Would this also be suitable for testing efficiency loss from running under VMware or other virtualization methods?
No, because oprofile's and latencytop's point of reference is just the running kernel; they don't factor in CPU allocations, network/disk virtualization/para-virtualization, bandwidth allocations, etc.
Efficiency loss is a slippery slope and VERY configuration dependent.
I have seen VMs perform better than physical machines and I have seen them perform worse, sometimes on the same physical host!
Go with the "user experience" indicator (assuming it is properly configured for the workload). Does it seem fast? Then it's fast. Does it seem slow? Then it is slow.
-Ross
On 5/10/2010 8:56 AM, Ross Walker wrote:
Realistically, VM performance is going to depend mostly on how much contention you have between guests for common resources, especially if you overcommit them. But I'd like to have some idea of how much effect running under VMware ESXi would have for a single guest, compared to running directly on the hardware. If there's not a big loss (and it doesn't 'feel' like there is), I'd consider this worthwhile for servers doing oddball things where it's not worth the trouble to script a re-install for every little app someone might have running, as a means to deal with the usual pain of moving a working system to different hardware. Plus, if there is extra capacity you can bring up another virtual machine or test the next version almost for free, and you get an almost-hardware-level kvm too (after the base install works and you have an IP address...). I'd just like to have a more objective measure of what it costs in performance.
On Mon, May 10, 2010 at 11:52 AM, Les Mikesell lesmikesell@gmail.com wrote:
With ESXi you can really control the contention on resources with allocation policies, so applications that really need the resources get them.
As with any system these days, the biggest contention is going to be disk and network. Make sure storage is set up appropriately for the application; just because the server is virtual doesn't mean you can lump all the applications' data onto one common datastore. Keep a datastore for the OS (which can be shared for all VMs) and a separate iSCSI/Fibre datastore/LUN for each application's data. You can use RDMs if your OS is on a VMFS datastore, or do iSCSI directly in the VM if you use NFS datastores for the OS.
You will notice minimal degradation running a single VM under ESXi.
I have ESXi hosts here running 20 VMs per host with some doing terminal services, some doing email, some doing database and other network services and I have not noticed any diminished performance, and yes going virtual is simply the easiest way to perform upgrades.
-Ross
On 5/10/2010 11:37 AM, Ross Walker wrote:
I think it is unfortunate how difficult it is to back up a working linux machine and restore it onto different hardware, given that the system really is very hardware independent. But detecting the hardware and mapping it to device drivers seems to be a black art hidden inside of anaconda, and the local hardware-related settings are fairly hopelessly intertwined with application and user preferences in your backup copies. I always thought that this would be a common enough problem that some distribution would address it, but so far it hasn't happened.
On Mon, May 10, 2010 at 1:15 PM, Les Mikesell lesmikesell@gmail.com wrote:
That's why God invented the systems administrator!
Here I use kickstart scripts for baseline server types that perform all the basic configuration on install. Then I typically keep the server-centric config in a common location, /etc/<servername>, and use symbolic links from the system-supplied config locations; this can also be scripted for quick recovery. I keep the application data on separate volumes from the OS (a typical OS image is 8GB, most of it swap) using iSCSI, so I can connect to them from another VM easily enough (RDM or direct iSCSI). Everything is installed via RPM; if the distro repo version isn't adequate I build my own and keep a custom in-house repo, no third-party repos.
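As a concrete sketch of that symlink scheme (the server name "web01" and the httpd.conf file are invented for illustration, and a scratch directory stands in for the real /etc):

```shell
# Server-centric config lives in one directory named after the host...
root=$(mktemp -d)              # stands in for / on a real box
mkdir -p "$root/etc/web01"
printf 'ServerName web01.example.com\n' > "$root/etc/web01/httpd.conf"

# ...and the system-supplied location is just a symlink into it, so
# "quick recovery" is: restore /etc/web01, then re-run the link script.
ln -s web01/httpd.conf "$root/etc/httpd.conf"

cat "$root/etc/httpd.conf"     # resolves through the link
```

The point of the indirection is that one directory holds everything unique to the server, so a restore is a single rsync plus re-running the linking script.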
I have yet to look at Cobbler, which is supposed to simplify the creation and management of all these kickstart scripts and provide a nice interface; I just haven't had the time.
If I am setting up an ESXi infrastructure the first thing I would do is setup a Cobbler server and a Windows deployment server (maybe a Solaris Jump Start server) and integrate it with the VMware vCenter templates. Then it's all point-n-click server deployment from there.
-Ross
On 5/10/2010 12:37 PM, Ross Walker wrote:
That's why God invented the systems administrator!
Well, yeah - I suppose you could say the design is good for the job security of sysadmins and for requiring support subscriptions from the distribution vendors, but it's something that the computer really should be able to handle by itself just like it does during the initial install.
Here I use kickstart scripts for baseline server types that perform all the basic configurations on install.
Have you totaled up the hours you've spent on these tasks that would be unnecessary in a better-designed system? And even if that sort of makes sense for servers that have a basic "type", what about ones that have application developers as users and end up accumulating all kinds of cruft that you don't know about?
If I am setting up an ESXi infrastructure the first thing I would do is setup a Cobbler server and a Windows deployment server (maybe a Solaris Jump Start server) and integrate it with the VMware vCenter templates. Then it's all point-n-click server deployment from there.
Don't forget that you can use ESXi for free, but not vCenter. But, there's really no problem in just copying/cloning VMware images around - you don't have to go through any extra contortions to be able to reproduce them (with variations for every OS), you just need to save a baseline copy before adding specialized applications.
On Mon, May 10, 2010 at 2:05 PM, Les Mikesell lesmikesell@gmail.com wrote:
Well, yeah - I suppose you could say the design is good for the job security of sysadmins and for requiring support subscriptions from the distribution vendors, but it's something that the computer really should be able to handle by itself just like it does during the initial install.
Computers are dumb, and if we give too much power to the OS vendors they'll enslave us.
Have you totaled up the hours you've spent on these tasks that would be unnecessary in a better-designed system? And even if that sort of makes sense for servers that have a basic "type", what about ones that have application developers as users and end up accumulating all kinds of cruft that you don't know about?
After the initial time to research and set up the system, the time to maintain and extend it was minimal, and these were set up a long, long, long time ago (circa Windows 2000); we use rsync or DFSR to replicate the config to other deployment servers in remote offices.
I definitely recommend it.
Just need to setup clear policies with the developers, save your work here and it will be recoverable, save your work there and you are SoL.
Don't forget that you can use ESXi for free, but not vCenter. But, there's really no problem in just copying/cloning VMware images around - you don't have to go through any extra contortions to be able to reproduce them (with variations for every OS), you just need to save a baseline copy before adding specialized applications.
Sure, I just mentioned vCenter as I use it here, but as in the email to John, it could easily be scripted from a VM.
No need to clone or image either; I can have a server deployed over the network much quicker than if I imaged it, and it's a whole lot easier to maintain long-term than a clone.
-Ross
On 5/10/2010 1:30 PM, Ross Walker wrote:
Computers are dumb, and if we give too much power to the OS vendors they'll enslave us.
If it is smart enough to create the initial install it should be smart enough to adjust the file entries it created to what it would have done on different hardware. It is creating a lot of work for you to turn everything into a new install just because that's all it knows how to do.
After the initial time to research and set up the system, the time to maintain and extend it was minimal, and these were set up a long, long, long time ago (circa Windows 2000); we use rsync or DFSR to replicate the config to other deployment servers in remote offices.
I definitely recommend it.
How many different OS's do you handle this way?
Just need to setup clear policies with the developers, save your work here and it will be recoverable, save your work there and you are SoL.
Getting developers to follow instructions has been described as "herding cats" - and if you lose developer work or time, everyone is sol, not just them.
No need to clone or image either; I can have a server deployed over the network much quicker than if I imaged it, and it's a whole lot easier to maintain long-term than a clone.
That's assuming you've wrapped everything local in an installable package in a private repository for every OS version you use, which doesn't sound at all quicker to me. Especially if you start with hosts that are expected to be one-off types but have to be moved to new hardware once in a while.
On Mon, May 10, 2010 at 2:56 PM, Les Mikesell lesmikesell@gmail.com wrote:
If it is smart enough to create the initial install it should be smart enough to adjust the file entries it created to what it would have done on different hardware. It is creating a lot of work for you to turn everything into a new install just because that's all it knows how to do.
I'm sure a vendor will come up with a complete solution, but you will be forced into the vendor's idea of what that solution will be and typically it is everything that that vendor provides and only that vendor.
How many different OS's do you handle this way?
I had Windows 2000/2003, CentOS 4/5, Fedora, Debian, Xen Server and ESXi itself off our deployment servers at one time, now I just do Windows 2003, CentOS and ESXi.
Getting developers to follow instructions has been described as "herding cats" - and if you lose developer work or time, everyone is sol, not just them.
I have heard that, so like users, you have to make it more appealing/easier to go the supported way than the unsupported way.
Make it so they have to sudo anything off an unsupported path, and they will hopefully stick with the path of least resistance.
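One minimal way to build that path of least resistance might look like the following sketch. The approved prefix and the idea of a group-writable drop area are invented for illustration (a scratch directory stands in for a real /opt location), not something from this thread.

```shell
# The "supported" location: group-writable with the setgid bit, so
# anything dropped here stays accessible (and backed up) for the team.
approved=$(mktemp -d)          # stands in for e.g. /opt/approved
chmod 2775 "$approved"

# A developer can save work here with no privileges at all...
touch "$approved/build.sh"
echo "wrote $approved/build.sh without sudo"

# ...while everything off this path stays root-owned on a real box,
# so writing there means reaching for sudo (and thinking twice).
```

The setgid bit on the directory means new files inherit the group, so the backed-up, supported area keeps working without per-file permission fiddling.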
That's assuming you've wrapped everything local in an installable package in a private repository for every OS version you use, which doesn't sound at all quicker to me. Especially if you start with hosts that are expected to be one-off types but have to be moved to new hardware once in a while.
Actually there is very little we need to privately support; most distro-supplied tools work for us, so it's not a problem. Supporting Fedora/Ubuntu as a choice will allow those who need bleeding edge to get it through those channels.
Since we are almost 100% virtualized, there is no real problem rolling from testing/development into production, the virtual hardware is the same.
-Ross
On Mon, 2010-05-10 at 13:37 -0400, Ross Walker wrote:
If I am setting up an ESXi infrastructure the first thing I would do is setup a Cobbler server and a Windows deployment server (maybe a Solaris Jump Start server) and integrate it with the VMware vCenter templates. Then it's all point-n-click server deployment from there.
--- What happens when you're in a non-ESX environment? Like me: I am creating and tearing down virtual machines on user request. Sounds like a lot of bs doing that, but we have to for certain client machines.
I guess my question to Ross is: is ESX capable of doing this? Just wondering, as I'm looking into rebuilding oVirt in house.
John
On Mon, May 10, 2010 at 2:12 PM, JohnS jses27@gmail.com wrote:
The Cobbler and Windows deployment servers should work equally well in a Xen environment as they do in an ESX environment.
You don't need vCenter or some other virtualization management platform. Of course it centralizes deployment if you have one, which makes things easier, but the same could be accomplished via scripting on the command line. Maybe set up a "management VM" guest domain and locate your domain deployment scripts there (maybe the same domain that runs Cobbler), and use XMLRPC to communicate with the Xen servers in the enterprise.
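For the create-and-tear-down-on-request case, the scripted version might be no more than a wrapper that stamps out a Xen guest config and hands it to the xm tools. This is a sketch: the volume group, bridge name, and memory default are all made up.

```shell
# Hypothetical deploy wrapper a "management VM" could run per request.
name=${1:-scratch01}
cfg=$(mktemp)

cat > "$cfg" <<EOF
name = "$name"
memory = 512
disk = ['phy:/dev/vg0/$name,xvda,w']
vif = ['bridge=xenbr0']
bootloader = '/usr/bin/pygrub'
EOF

echo "guest config for $name written to $cfg"

# On a real Xen host the remaining steps would be:
#   xm create "$cfg"        # bring the guest up
#   xm destroy "$name"      # tear it down on user request
```

Parameterizing the guest name and disk this way is what makes per-user teardown/rebuild cheap: the config is regenerated from the template each time rather than maintained by hand.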
-Ross