I want to mirror an existing 40GB data-only drive using software Raid1 on my new CentOS 4 server. The existing drive is connected to a Promise Ultra 100 TX2 controller (non-raid). I have read about mdadm and understand how to create the Raid1 on /dev/mdxx devices. However, I would like to know if the existing data on the original 40GB drive in the system will be destroyed when I create the raid with mdadm. Also, do the 2 drives, after being set up with mdadm, have to be reformatted? Both 40GB drives are currently formatted with ext3. If not, will the added 40GB drive assigned to the Raid1 automatically sync to the original 40GB drive? If so, what do I need to do to ensure this happens automatically? Your comments are appreciated, as this is my first time setting up a software Raid1.
Lee
Lee Parmeter wrote:
I want to mirror an existing 40GB data-only drive using software Raid1 on my new CentOS 4 server. The existing drive is connected to a Promise Ultra 100 TX2 controller (non-raid). I have read about mdadm and understand how to create the Raid1 on /dev/mdxx devices. However, I would like to know if the existing data on the original 40GB drive in the system will be destroyed when I create the raid with mdadm. Also, do the 2 drives, after being set up with mdadm, have to be reformatted? Both 40GB drives are currently formatted with ext3. If not, will the added 40GB drive assigned to the Raid1 automatically sync to the original 40GB drive? If so, what do I need to do to ensure this happens automatically? Your comments are appreciated, as this is my first time setting up a software Raid1.
You'd need to resize the file systems after you import the partitions into MD. An e2fsck/resize2fs would take care of that. Do an "fsck -f /dev/md0" (for example). It will complain that information in the superblock is wrong (the partition size fsck sees is smaller than what it found in the superblock). Just answer "no" to the abort question. After the fsck, you need to do "resize2fs /dev/md0". Repeat for all md devices you created.
For the above, the file systems should be either unmounted or mounted read-only. The best thing to do is to boot from CD into rescue mode and do the whole job from there.
The reason is that MD uses a couple of the last blocks of the partition for its metadata, and that space is no longer usable for file system data. So your /dev/md* metadisks will be slightly smaller than the partitions you created them on.
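For illustration, assuming a single array /dev/md0 (the device name is just an example; repeat for each of your md devices), the whole sequence from rescue mode would be roughly:

# fsck -f /dev/md0
(answer "no" when it asks whether to abort)
# resize2fs /dev/md0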
When you are creating mirrors, make sure you list the devices in the right order. Data is always copied from the first disk you specify to the second disk. If you get them the wrong way around, you lose data.
There are some rather good HOWTOs on this question (with much longer, more detailed and better descriptions of the migration process). Use the Google, Luke.
On Wed, 2005-04-27 at 10:44, Aleksandar Milivojevic wrote:
You'd need to resize the file systems after you import the partitions into MD. An e2fsck/resize2fs would take care of that. Do an "fsck -f /dev/md0" (for example). It will complain that information in the superblock is wrong (the partition size fsck sees is smaller than what it found in the superblock). Just answer "no" to the abort question. After the fsck, you need to do "resize2fs /dev/md0". Repeat for all md devices you created.
For the above, the file systems should be either unmounted or mounted read-only. The best thing to do is to boot from CD into rescue mode and do the whole job from there.
The reason is that MD uses a couple of the last blocks of the partition for its metadata, and that space is no longer usable for file system data. So your /dev/md* metadisks will be slightly smaller than the partitions you created them on.
What happens if existing files are on these blocks before you convert?
When you are creating mirrors, make sure you list the devices in the right order. Data is always copied from the first disk you specify to the second disk. If you get them the wrong way around, you lose data.
Another approach that might be safer is to create a 'broken' mirror first by specifying the 2nd device as 'missing'. Then you can build a filesystem on the md device and mount it somewhere and copy the files over from the existing partition. Then unmount the old partition, remount the raid device in its place (adjusting /etc/fstab to match) and use mdadm to add the old partition into the new raid, which will hot-sync it to match the new setup.
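As a rough sketch of that sequence (the device names are only examples: the old data partition as /dev/hda1 mounted on /data, the new blank partition as /dev/hdb1):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 missing
# mkfs.ext3 /dev/md0
# mount /dev/md0 /mnt/new
# cp -a /data/. /mnt/new/
# umount /data /mnt/new
(point the old fstab entry at /dev/md0 and mount it)
# mdadm /dev/md0 --add /dev/hda1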
There are some rather good HOWTOs on this question (with much longer, more detailed and better descriptions of the migration process). Use the Google, Luke.
I looked for this and found lots of info on building RAIDs but none about preserving an existing filesystem while converting to a mirror. Can you provide a link or a good search term to pick that up? Also, I have noticed that after syncing the mirrored drives, you can split them and mount a single drive from it into another machine without making an md device first. (For example, if one part is on a USB/firewire drive, you can plug it into a different machine and mount the /dev/sda? partition it becomes.) However, I don't know if this is harmful or not. Does anyone know if, after running a while in this mode, it will still work correctly if detected as an md? device (with or without resyncing to a partner)?
Les Mikesell wrote:
On Wed, 2005-04-27 at 10:44, Aleksandar Milivojevic wrote:
You'd need to resize the file systems after you import the partitions into MD. An e2fsck/resize2fs would take care of that. Do an "fsck -f /dev/md0" (for example). It will complain that information in the superblock is wrong (the partition size fsck sees is smaller than what it found in the superblock). Just answer "no" to the abort question. After the fsck, you need to do "resize2fs /dev/md0". Repeat for all md devices you created.
For the above, the file systems should be either unmounted or mounted read-only. The best thing to do is to boot from CD into rescue mode and do the whole job from there.
The reason is that MD uses a couple of the last blocks of the partition for its metadata, and that space is no longer usable for file system data. So your /dev/md* metadisks will be slightly smaller than the partitions you created them on.
What happens if existing files are on these blocks before you convert?
I've no idea what happens in this case. I guess fsck should complain loud and clear if something like this happens, and you'll probably lose the files that used those blocks.
When you are creating mirrors, make sure you list the devices in the right order. Data is always copied from the first disk you specify to the second disk. If you get them the wrong way around, you lose data.
Another approach that might be safer is to create a 'broken' mirror first by specifying the 2nd device as 'missing'. Then you can build a filesystem on the md device and mount it somewhere and copy the files over from the existing partition. Then unmount the old partition, remount the raid device in its place (adjusting /etc/fstab to match) and use mdadm to add the old partition into the new raid, which will hot-sync it to match the new setup.
Yes, that might be a safer approach nowadays. In the old days, it was not possible to create "broken" mirrors.
There are some rather good HOWTOs on this question (with much longer, more detailed and better descriptions of the migration process). Use the Google, Luke.
I looked for this and found lots of info on building RAIDs but none about preserving an existing filesystem while converting to a mirror. Can you provide a link or a good search term to pick that up?
For example, "linux raid howto resize2fs" will get you to the following page:
http://www.linux.com/howtos/Software-RAID-HOWTO-7.shtml
It describes the "old" way of creating RAID devices (using the /etc/raidtab file and the mkraid command). However, the information is still usable (simply use mdadm instead of raidtab/mkraid).
I have noticed that after syncing the mirrored drives, you can split them and mount a single drive from it into another machine without making an md device first. (For example, if one part is on a USB/firewire drive, you can plug it into a different machine and mount the /dev/sda? partition it becomes.) However, I don't know if this is harmful or not. Does anyone know if, after running a while in this mode, it will still work correctly if detected as an md? device (with or without resyncing to a partner)?
Sure thing. It is "safe" to use a submirror that way, since the file system does not use the partition size to determine how much of the partition it can use. It uses the size that is stored in the file system superblock. The only check it does is that the file system size is smaller than or equal to the partition size (which is the case here). Actually, this is what happens during each boot. The boot partition is accessed directly to load the kernel and initrd, since md devices do not exist until the raid* drivers are loaded and initialized. That is why the boot partition can also be a simple non-striped RAID1.
However, it might not be safe to plug a submirror back in if you simply unplugged it. The MD metadata stored at the end of each partition is used to automatically detect and assemble arrays when the Linux kernel boots. The kernel will find that it has both submirrors present. It will also detect that something is not quite right, because the time stamps (and probably some other info) stored in the metadata on those two partitions don't match. What exactly the kernel will do, and which submirror it will consider valid, I don't know (probably the one with the freshest timestamps). I also don't know whether the kernel will automatically start resyncing the "invalid" submirror in this case.
On the other hand, if you used mdadm to fail and then remove the submirror prior to unplugging it, that should be noted in the metadata and it should be safe to reconnect it. But I'm not 100% sure. After you reconnect the submirror, you'd use mdadm to re-add it.
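A minimal sketch of that procedure, assuming /dev/md0 with /dev/sdb1 as the member you want to detach (names are only examples):

# mdadm /dev/md0 --fail /dev/sdb1
# mdadm /dev/md0 --remove /dev/sdb1
(unplug the drive, use it elsewhere, reconnect it)
# mdadm /dev/md0 --add /dev/sdb1
(the array then resyncs onto it)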
Software Raid 1:
===============
I ended up first creating a software raid1 with a missing 2nd drive. Then I created the file system on /dev/md0 and mounted it. Once mounted, I copied the data from the original drive to the raid drive. Then I unmounted the original data drive and added it to the array as the missing raid1 member.
The next problem I had was that the raid did not automatically start when the server booted, so I got an fstab error for /dev/md0. The solution turned out to be creating an mdadm.conf file. Once the config file had DEVICE and ARRAY lines for the raid, I could both manually restart the raid and have it auto-start at boot. Now the /dev/md0 line in fstab works.
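For reference, a minimal mdadm.conf along those lines would look roughly like this (the device names here are just placeholders; "mdadm --detail --scan" prints a correct ARRAY line for your array):

DEVICE /dev/hda1 /dev/hdb1
ARRAY /dev/md0 devices=/dev/hda1,/dev/hdb1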
I did not find any information on the web concerning the requirement for the mdadm.conf file and its relationship to the auto-start of the raid. I had to work a bit in the dark to get it to work as I thought it should.
On Fri, 2005-04-29 at 12:20, Lee Parmeter wrote:
I did not find any information on the web concerning the requirement for the mdadm.conf file and its relationship to the auto-start of the raid. I had to work a bit in the dark to get it to work as I thought it should.
I think if you set the partition type to FD with fdisk, the system is supposed to figure everything out automatically at boot time. I have, however, had some trouble when moving pairs of disks built on one machine to a different one and would generally prefer to control it with a definition in a file if possible instead of having the system guess. If you have both, I'm not sure which wins. If you find documentation on the detection order or how to move devices, please post a link.
Les Mikesell wrote:
If you have both, I'm not sure which wins. If you find documentation on the detection order or how to move devices, please post a link.
The kernel wins. As soon as the raid* device drivers are loaded (for example, from the initrd image, waaaay before even the root file system is mounted), they'll do the automagic stuff. The system will spit warnings later if the information in mdadm.conf contradicts it.
On Fri, 2005-04-29 at 13:18, Aleksandar Milivojevic wrote:
If you have both, I'm not sure which wins. If you find documentation on the detection order or how to move devices, please post a link.
The kernel wins. As soon as the raid* device drivers are loaded (for example, from the initrd image, waaaay before even the root file system is mounted), they'll do the automagic stuff. The system will spit warnings later if the information in mdadm.conf contradicts it.
Thanks. Now what happens when you move a set built on one machine to a different machine, or if you move drives intending to reformat but end up with mismatched members that are detected at bootup? I remember having a disaster years ago when I tried to minimize downtime by building a raid1 set on a different machine and pre-loading files, then shutting down just long enough to swap the drive in place. I think they were either paired with the wrong mates or the md? devices were detected in the wrong order as the machine rebooted. Maybe this has been fixed in the newer kernel versions, but since then I've gone out of my way to avoid repeating the situation - sometimes going as far as low-level formatting a drive on a non-production machine before moving one that may have been part of a raid, or even one that might have a conflicting partition label.
Les Mikesell wrote:
Thanks. Now what happens when you move a set built on one machine to a different machine, or if you move drives intending to reformat but end up with mismatched members that are detected at bootup? I remember having a disaster years ago when I tried to minimize downtime by building a raid1 set on a different machine and pre-loading files, then shutting down just long enough to swap the drive in place. I think they were either paired with the wrong mates or the md? devices were detected in the wrong order as the machine rebooted. Maybe this has been fixed in the newer kernel versions, but since then I've gone out of my way to avoid repeating the situation - sometimes going as far as low-level formatting a drive on a non-production machine before moving one that may have been part of a raid, or even one that might have a conflicting partition label.
My guess is that the detection order was different. This is a big problem on Linux that exists even if you haven't used RAID. For example, SCSI device names get renumbered and moved around each time you add/remove devices. Udev has some nice workarounds for this that work perfectly for many types of devices, but I'm not sure if they would work for the boot disc, and they probably wouldn't work at all for software RAID pseudo devices ("udevinfo -a -p /sys/block/md0" doesn't show any promising output; if it at least included a randomly generated ID, it could have been useful).
I had a similar problem to the one you described with LVM, when the kernel read LVM info from the first disc it detected (and of course, it was the wrong disc, with old, outdated LVM information from one of the previous installations). It sounds logical to me to try searching for LVM info on the boot disc first, and then look around (and fail if there are conflicting volume groups). But not to Linux developers. Yeah, I know, I have the source... Too bad it is too much source, too little time...
File system labels can also be a bitch, given that in most distributions they are assigned in the most moronic way ("LABEL=/" might look logical and works if you are never going to replace discs or move them around; LABEL="something-random" on the other hand would work most of the time, as long as your boot loader loads the correct kernel and passes it the correct root flag, but then the fstab file looks kinda ugly, and good luck guessing what the correct label name is at the lilo or grub prompt).
The only solution to these problems on Linux is to boot into single user mode and wipe out any and all information that the Linux kernel is not supposed to see. And then, when you have those things sorted out, attempt a normal boot. No way around it. Frankly, what is the kernel supposed to do when it reads conflicting information from the disc? It's like when you ask for directions and one person tells you to go right, and the other to go left. Obviously, you can follow directions from either the first or the second person, or sit in the middle of the intersection.
On Mon, 2005-05-02 at 08:53, Aleksandar Milivojevic wrote:
The only solution to these problems on Linux is to boot into single user mode and wipe out any and all information that the Linux kernel is not supposed to see. And then, when you have those things sorted out, attempt a normal boot. No way around it. Frankly, what is the kernel supposed to do when it reads conflicting information from the disc? It's like when you ask for directions and one person tells you to go right, and the other to go left. Obviously, you can follow directions from either the first or the second person, or sit in the middle of the intersection.
I don't expect the kernel to be able to guess which conflicting label to use, or which set of disks become which md devices. Unfortunately, I haven't been able to find any documentation on how the md detection is supposed to work or how to provide the right info for the kernel when you are planning to move a set. There are times when it would be really useful to be able to pre-load them on a different box in order to have minimal downtime during the swap.
Les Mikesell wrote:
I don't expect the kernel to be able to guess which conflicting label to use, or which set of disks become which md devices. Unfortunately, I haven't been able to find any documentation on how the md detection is supposed to work or how to provide the right info for the kernel when you are planning to move a set. There are times when it would be really useful to be able to pre-load them on a different box in order to have minimal downtime during the swap.
Well, again, it should be possible. The probable cause of the problem was that you had two software RAID devices contending to become md0 (or md1, md2, or whatever). The fix is relatively easy.
Once you move the drives, boot into rescue mode. When asked to mount file systems, choose skip (you don't want Anaconda going out and attempting to do the guesswork or trigger the kernel into doing guesswork).
When you get a shell prompt, it is time to do some manual repairing. Let's say your old arrays were md0 and md1, and the one you just moved over was also md1 on its original box. So you'd want to make this new one md2 (or any other md*).
You'd first find out the UUID of that array. You can do that by querying any component of the array. Let's say /dev/sda1 is one of the components of the array you moved over:
# mdadm --examine /dev/sda1
mdadm will spit out lots of info; you are interested in one of the first lines it prints. The UUID is a long hex string. Can't miss it. Now, reassemble the array as /dev/md2, using the UUID to select the partitions that belong to this device out of all the partitions on the system, and instruct mdadm to update the super-minor hint in the superblock (this is the important bit). This is all one command. I broke it into several lines for readability. Use the UUID from the output of the "examine" command.
# mdadm --assemble /dev/md2 --auto --config=partitions \
    --uuid="12345678:87654321:12345678:87654321" \
    --update=super-minor
And you are all done. You may reboot now.
Or, you could do this on the machine where you originally assembled the array. Use "mdadm --examine" to find out the UUID. Use "mdadm --stop" to stop the array. Use "mdadm --assemble" as described above to reassemble it with a device name that is not used on the machine where you plan to move the array (and obviously, that device name must be unused on the machine where you are doing this too).
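In other words, roughly (example names again; fill in the UUID from the examine output):

# mdadm --examine /dev/sda1
# mdadm --stop /dev/md1
# mdadm --assemble /dev/md2 --auto --config=partitions \
    --uuid="<UUID from examine>" --update=super-minor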
Or just create the array with the correct super-minor in the first place ;-)
On Mon, 2005-05-02 at 14:43, Aleksandar Milivojevic wrote:
# mdadm --assemble /dev/md2 --auto --config=partitions \
    --uuid="12345678:87654321:12345678:87654321" \
    --update=super-minor
And you are all done. You may reboot now.
Thanks - I think back when I had the problem I was using the older raidtools programs. The newer stuff may not have the same issues and this looks like there is a way to fix it anyway. I'll give it a try as soon as I have some matching machines that I can test with.
Hi
I just got a Dell dual Xeon 64-bit 3.2GHz Poweredge 1850, with dual 143GB SCSI hard disks.
I tried installing CentOS 4_64 on it. But apparently I can't even get the CentOS 4_64 installer to start, i.e. it does not get to the blue text-based menu system. It fails shortly after the kernel boots and mounts the root device (the initrd ram disk installer image). The installer has been verified with the GPG key.
Has anyone come across the same issue?
Any recommendations on how to fix this?
Regards Chaw Ming
Who told you that the Xeon is a 64-bit CPU? Read the Xeon specs again carefully.
-denny
Intel has a new line of Xeons which have EM64T support. Don't assume and let your ignorance show.
http://www1.us.dell.com/content/products/productdetails.aspx/pedge_1850?c=us&cs=555&l=en&s=biz
Sorry, my fault, I don't have any idea about it. If you don't use more than 4GB of memory, how about trying the normal CentOS i386?
-denny
I went to CentOS 3.4_64 and it works. I will try an update to version 4, probably tomorrow. That should work.
It's not a 64-bit CPU though, it just has large address space support.
It really doesn't need a 64-bit kernel as it isn't a 64-bit CPU; in effect, unless you're stacking it full of RAM and need the large memory support, it will probably run slower on a 64-bit install.
P.
On Tue, 3 May 2005 at 9:25am, Peter Farrow wrote
It's not a 64-bit CPU though, it just has large address space support.
It really doesn't need a 64-bit kernel as it isn't a 64-bit CPU; in effect, unless you're stacking it full of RAM and need the large memory support, it will probably run slower on a 64-bit install.
1) Please don't top post, and please trim your posts to just the bits relevant to your response.
2) The Xeons in the Dell Poweredge 1850 are indeed 64bit CPUs, based on the same x86_64 technology that AMD introduced with their Opterons.
"based on the same x86_64 technology that AMD introduced with their Opterons."
Not on planet earth I'm afraid....
On Tue, 3 May 2005 at 9:47am, Peter Farrow wrote
"based on the same x86_64 technology that AMD introduced with their Opterons."
If you didn't top post, you wouldn't have to re-type my quote.
Not on planet earth I'm afraid....
Err, what are you talking about? EM64T (as an instruction set) is literally a clone of AMD64. Not a perfect one, of course, as there are differences. There are also differences in implementation, the most glaring of which being AMD's very smart move of putting the memory controller on the CPU and connecting the CPUs via HyperTransport. This leads to much better scaling vs. Intel's continued reliance on the bottleneck that is the shared memory controller hub.
But the bottom line is that if you compile code for an x86_64 target (and don't put in any Intel or AMD specific optimizations), it'll run on either chip.
You may of course believe what you like.......
On Tue, 3 May 2005 at 9:56am, Peter Farrow wrote
You may of course believe what you like.......
No, really, what are you talking about? I'd really like to know what you actually think about this beyond pithy one liners as above.
Rather than drawing the whole of the Centos mailing list into this shoot out........I would love to chat with you offline.....
The Xeons with EM64T support now come with a 48-bit address bus and support the long-mode instruction set; what they do not have is the full and complete architecture that the AMD has, and in that sense it is not a true 64-bit CPU. You are right that it will run the 64-bit code, but my response was that indeed it may run it slower than 32-bit optimized code. The original first-rev Xeons with this support actually reported themselves as Opterons, as some elements were plagiarised in such detail.
It comes down to this: would you rather have a 32-bit Xeon CPU hotwired internally to run 64-bit code with some extras sprinkled in, or would you want a true 64-bit CPU designed as such from scratch? You have already pointed out the advantages of AMD's implementation, so I think I know which one you'd choose.
Personally, and from a performance perspective, I'd want (and indeed use) the AMD 64 bit native offering....
What do you use?
P. PS: if you're interested in my background and on what basis I make informed comments, rather than "pithy one liners", I'll chat offline with you. They were not intended to be offensive, just enough to make you smile.... I do have to go into a meeting now. Sorry I top post, but I do use Mozilla Thunderbird (so I'm not all bad), which I had to re-configure to top post after world+dog complained about me "bottom posting".
On Tue, 3 May 2005 at 10:19am, Peter Farrow wrote
Rather than drawing the whole of the Centos mailing list into this shoot out........I would love to chat with you offline.....
I didn't mean for it to be a shoot out. It was just that your earlier comments gave no indication that you actually knew what you were talking about... ;)
Personally, and from a performance perspective, I'd want (and indeed use) the AMD 64 bit native offering....
What do you use?
Oh, I'm a big fan of the Opterons and have several.
Contrary to popular belief, I do actually know what I am talking about, so thanks very much for that.....
I am involved with cluster design for a UK company that sells into the European cluster market.
Working with both Intel and AMD, we actually prefer AMD. In the cases where the interconnect between the cluster nodes is high performance, the AMD solutions nearly always give the best performance; those linked by Ethernet sometimes perform better on Intel.
We've generally found that the choice is usually quite simple, especially when dealing with academic establishments :-P ; they just want to run their code as-is rather than optimising it to run faster. Typically such establishments specialise in their field, which is not usually programming prowess... the code is usually very inelegant and not sophisticated in its approach, and it becomes a no-brainer: if a £1m Intel cluster runs it faster than AMD, then they buy Intel, but in reality some code tweaking could see a 30-point improvement on the AMD cluster.
The better the node interconnect, the bigger the lead AMD often gets...
P.
On 5/3/05, Peter Farrow peter@farrows.org wrote:
Contrary to popular belief, I do actually know what I am talking about, so thanks very much for that.....
Do you happen to know how the 64-bit AMD and Xeon would compare when running java code?
Les Mikesell wrote:
On 5/3/05, Peter Farrow peter@farrows.org wrote:
Contrary to popular belief, I do actually know what I am talking about, so thanks very much for that.....
Do you happen to know how the 64-bit AMD and Xeon would compare when running java code?
You might find this an interesting site....
http://www.ics.muni.cz/~makub/java/speed.html
Regards
Pete
On Tuesday 03 May 2005 23:15, Les Mikesell wrote:
On 5/3/05, Peter Farrow peter@farrows.org wrote:
Contrary to popular belief, I do actually know what I am talking about, so thanks very much for that.....
Do you happen to know how the 64-bit AMD and Xeon would compare when running java code?
One of the groups at work tested an app (server-side Java) and they ended up with the following results:
Xeon 32bit    - 100%
Opteron 32bit -  86%
Xeon 64bit    -  97%
Opteron 64bit -  92%
It was a pretty unbalanced test since we ran a 3.6GHz Xeon against a much lower-speed Opteron (a 244 I think, but I can't verify that until I get to work today), hence the Opteron was quite a bit slower. Anyway, the interesting part was that the 32 -> 64 bit JDK change boosted performance about 6% on the Opteron while the Xeon actually lost 3%...
Peter.
On Tue, 3 May 2005, Peter Farrow wrote:
You may of course believe what you like.......
M. Farrow,
I am quite convinced that M. Baker-LePain is not discussing matters of belief, but of fact (fact that was on Intel's roadmap two years ago and has been a released product for nine months). Please, it is preferable if you don't begin arguments, but if you choose to have an argument, at least start with a remotely tenable position.
http://www.intel.com/products/processor/xeon/index.htm
It is true that the original Xeons were 32-bit CPUs, but Intel chose to keep the Xeon name for its EM64T technology.
regards, -Ryan
Thanks Ryan,
I have already posted a reply while you were typing your...
go take a look......
P.
On Tue, 3 May 2005, Peter Farrow wrote:
Thanks Ryan,
I have already posted a reply while you were typing your...
go take a look......
Right, and while your comments are generally correct, specifically when discussing HPC applications for the processors concerned: a) the original comments were not sufficiently useful to draw the distinction, and b) even if HPC specialists (this is my day job also) may decide to draw such a distinction between EM64T and Opteron (and I would say that certainly not all of the folks on the various Beowulf lists would use the term 64-bit the way you were using it), the fact is that the industry in general, and Intel/RedHat/Dell (the three companies featuring in this discussion), have settled upon a different definition of a 64-bit CPU than you have.
Also, as for the speed of EM64T when running 32-bit versus 64-bit, once again the HPC mantra must be "it depends upon your application", because I have seen a number of cases where it can go either way.
And yes, I use both EM64T and Opteron, yet for much the same reasons that you have already given, we tend to lean heavily toward the Opteron.
regards, -Ryan
Hi There,
Unfortunately my background is hardware electronics design, so my definition of 64 bit tends to be a bit more "under the hood" than most.
Having worked, many years ago, for <cough> Cyrix </cough> (remember them?) as a CPU/motherboard design consultant at the time of the launch of the 6x86 CPU (the 6x86 at that time was the first CPU to have speculative execution, etc.), I did my time in CPU design, and it is on that basis that I judge what is truly 64-bit and what is not.
Thanks for the comments. I guess I am a purist about what really should carry the 64-bit label.....
P.
Actually there are substantial differences in the Xeon's 64-bit implementation. The addressing area is one of them.
On Tue, 2005-05-03 at 15:38 +0800, Ho Chaw Ming wrote:
I went to CentOS 3.4_64 and it works. I will try an update to version 4, probably tomorrow. That should work.
The arch that you want to install is x86_64. That is for AMD64 and Xeon EM64T processors.
DO NOT USE ia64 ... which is for the Intel Itanium2 processor.
Johnny Hughes
Lee Parmeter wrote:
The next problem I had was that the raid did not automatically start when the server booted, so I got an fstab error for /dev/md0. The solution turned out to be creating an mdadm.conf file. Once the config file had DEVICE and ARRAY lines for the raid, I could both manually restart the raid and have it auto-start at boot. Now the /dev/md0 line in fstab works.
I did not find any information on the web concerning the requirement for the mdadm.conf file and its relationship to the auto-start of the raid. I had to work a bit in the dark to get it to work as I thought it should.
I don't think you need the mdadm.conf file.
Check with fdisk that the partitions used for RAID devices are set to type "Linux raid autodetect". This is type "fd". If not, use the "t" command in fdisk to set the correct partition type, and then "w" to write out the partition table. The Linux kernel will automatically detect and configure only RAID devices that are on partitions tagged as "fd" (Linux raid autodetect).
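For a quick check (hdb is only an example device), list the partition table and look at the Id column, which should show "fd  Linux raid autodetect" for the RAID members:

# fdisk -l /dev/hdb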
Another thing to check is the initrd-kernel-version.img file in the /boot directory. You might need to rebuild it (in some cases). Replace the "kernel-version" string with your actual kernel version:
# mkinitrd initrd-kernel-version.img kernel-version
If you are using LILO, you need to run /sbin/lilo every time after you modify the initrd image file. If you are using GRUB, no additional steps are needed.
If the raid device driver doesn't load even after you built a new initrd image, it might be that mkinitrd hasn't included it for whatever reason. You can force mkinitrd to include the driver with "--with-module=raid1" (for RAID1 devices).
Aleksandar Milivojevic wrote:
If the raid device driver doesn't load even after you built a new initrd image, it might be that mkinitrd hasn't included it for whatever reason. You can force mkinitrd to include the driver with "--with-module=raid1" (for RAID1 devices).
Oops, this should have been "--with=raid1"...
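So, putting it together, the rebuild would look roughly like this (the -f overwrites the existing image; using "uname -r" for the kernel version is just one way to fill it in):

# mkinitrd -f --with=raid1 /boot/initrd-$(uname -r).img $(uname -r)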