I have CentOS 4.3 on two machines, one i386 (3.0GHz P4) and one x86_64 (2.6GHz Opteron).
Both have centos-release-4-3.2 and autoconf-2.59-5.
However, on the i386: perl-5.8.5-34.RHEL4
Whereas on the x86_64: perl-5.8.5-24.RHEL4
Is this really correct?
I ask because when I attempt to recompile zsh on the x86_64 box, I unaccountably get this error:
Can't locate object method "path" via package "Request" at /usr/share/autoconf/Autom4te/C4che.pm line 69, <GEN1> line 111.
I'm trying to recompile zsh in the first place because the "limit" command doesn't work on the x86_64 system, so the SRPM build appears to have incorrectly preprocessed sys/resource.h and I wanted to track down the details before filing a bug report.
Is this really correct?
Yes, because you're mixing repositories. The -34 version comes from the fasttrack repository while the -24 version is in the general repository.
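If you want to double-check which boxes have fasttrack turned on, something like this should do it (I'm quoting the repo section name from memory, so adjust to whatever your .repo files actually call it):

grep -A4 '^\[fasttrack\]' /etc/yum.repos.d/*.repo | grep enabled

enabled=1 on one machine and enabled=0 (or no fasttrack section at all) on the other would explain the version skew.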
I ask because when I attempt to recompile zsh on the x86_64 box, I unaccountably get this error: Can't locate object method "path" via package "Request" at /usr/share/autoconf/Autom4te/C4che.pm line 69, <GEN1> line 111.
How is the build machine set up? Are you mixing i386 and x86_64 packages?
I'm trying to recompile zsh in the first place because the "limit" command doesn't work on the x86_64 system, so the SRPM build appears to have incorrectly preprocessed sys/resource.h and I wanted to track down the details before filing a bug report.
How does the limit command not work for you on x86_64? Perhaps rather than rebuilding (and thereby introducing possible new variables that we cannot duplicate) you could provide us the steps to duplicate your problem and we can all work the issue.
On 7/1/06, Jim Perrin jperrin@gmail.com wrote:
Is this really correct?
Yes, because you're mixing repositories. The -34 version comes from the fasttrack repository while the -24 version is in the general repository.
Aha. Yes, I do have the fasttrack repository enabled on the i386 machine; I had forgotten about that. However, it's the x86_64 machine where the perl problem occurs.
How is the build machine set up? Are you mixing i386 and x86_64 packages?
No, there are no i386 packages on the x86_64 machine, and only clamav and pine are from the rpmforge repository; the rest is straight CentOS.
schaefer[125] rpm -qa | grep i386
schaefer[126]
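(Note that on CentOS 4 a plain rpm -qa doesn't print the architecture, so grepping it for i386 only catches packages with i386 in their name; a more explicit check is:

rpm -qa --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep '\.i[3-6]86$'

That also comes up empty here.)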
I'm trying to recompile zsh in the first place because the "limit" command doesn't work on the x86_64 system, so the SRPM build appears to have incorrectly preprocessed sys/resource.h and I wanted to track down the details before filing a bug report.
How does the limit command not work for you on x86_64?
It does absolutely nothing. Note that in zsh the "limit" and "ulimit" (not "unlimit") commands are distinct, though they can both be used to change the same settings. Here's the x86_64 machine:
schaefer[117] ulimit -a
cpu time (seconds)         unlimited
file size (blocks)         unlimited
data seg size (kbytes)     unlimited
stack size (kbytes)        10240
core file size (blocks)    0
resident set size (kbytes) unlimited
processes                  28663
file descriptors           1024
locked-in-memory size (kb) 32
address space (kb)         unlimited
file locks                 unlimited
1024
819200
schaefer[118] limit
schaefer[119] limit coredumpsize 0
limit: no such resource: coredumpsize
schaefer[120]
Now here's the i386 machine:
schaefer[502] ulimit -a
cpu time (seconds)         unlimited
file size (blocks)         unlimited
data seg size (kbytes)     unlimited
stack size (kbytes)        10240
core file size (blocks)    unlimited
resident set size (kbytes) unlimited
processes                  32763
file descriptors           1024
locked-in-memory size (kb) 32
address space (kb)         unlimited
file locks                 unlimited
1024
819200
schaefer[503] limit
cputime         unlimited
filesize        unlimited
datasize        unlimited
stacksize       10MB
coredumpsize    unlimited
memoryuse       unlimited
maxproc         32763
descriptors     1024
memorylocked    32kB
addressspace    unlimited
maxfilelocks    unlimited
schaefer[504]
Perhaps rather than rebuilding (and thereby introducing possible new variables that we cannot duplicate) you could provide us the steps to duplicate your problem and we can all work the issue.
I'm one of the zsh developers, and have been for about 15 years now. I know how the zsh build process is supposed to work. One part of it is to preprocess /usr/include/*/resource.h with an awk script to generate a new header that is used in the zsh build. The exact list of files handed to the awk script is determined by the configure script. The only way for the limit command to have lost all resources is if that awk pass went wrong -- probably because zsh's configure script did not look in /usr/include/asm-x86_64, which is possible if the same SRPM was used for both i386 and x86_64 rpmbuilds. What I intended to determine was whether this is a zsh configure bug that always occurs when compiling on this platform, or an apparent RPM build issue.
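For anyone who wants to poke at the same step by hand, the gist of it is below; I'm quoting paths from memory, so the exact names in the tree may differ:

grep -i rlimit config.log
awk -f Src/Builtins/rlimits.awk /usr/include/asm-x86_64/resource.h > /tmp/rlimits.h

If configure handed the awk script the wrong header, the generated file comes out with no usable resource table, and the limit builtin ends up knowing about no resources at all.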
However, I'm not an expert with the modern perl-based autoconf; it's been a long time since I wrote an autoconf script from scratch.
On Sat, Jul 01, 2006 at 03:57:14PM -0700, Bart Schaefer wrote:
On 7/1/06, Jim Perrin jperrin@gmail.com wrote:
<...>
How does the limit command not work for you on x86_64?
It does absolutely nothing. Note that in zsh the "limit" and "ulimit" (not "unlimit") commands are distinct, though they can both be used to change the same settings. Here's the x86_64 machine:
<...>
Upstream bug: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=173678. The patch proposed in https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=161166 seems to fix the issue; maybe in the next quarterly release?
Tru
On 7/2/06, Tru Huynh tru@centos.org wrote:
upstream bug, https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=173678 the patch proposed in https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=161166
Thanks. It looks like that patch covers ground that was already traversed in the zsh sources back in Jan/Feb 2005, but I've forwarded it to the zsh-workers list in case there's something in there that needs to be picked up.
seems to fix the issue, maybe in the next quarterly release?
That patch is a year old and hasn't been in any quarterly releases yet, so I'm not going to hold my breath.
Hello:
NEED HELP Desperately!!!
Today one of my servers (RH 7.2) went down. I moved the HDD to another server and it booted fine, except I could not set up the proper video display.
Then I set up a new server (CentOS 4.3). I moved the HDD from the original server and installed it as a SLAVE device in the new server.
Now I want to display the contents of the slave HDD (so I can copy a few files). How do I display the second (slave) HDD?
Thanks in advance.
Kirti
centos-bounces@centos.org <> scribbled on Sunday, July 02, 2006 5:24 PM:
Hello:
NEED HELP Desperately!!!
Today one of my servers (RH 7.2) went down. I moved the HDD to another server and it booted fine, except I could not set up the proper video display.
Then I set up a new server (CentOS 4.3). I moved the HDD from the original server and installed it as a SLAVE device in the new server.
Now I want to display the contents of the slave HDD (so I can copy a few files). How do I display the second (slave) HDD?
Thanks in advance.
Kirti
Did you try mounting the slave drive/partition?
mount /dev/sdb1 /mnt/floppy or similar
Mike
No, I did not mount the partition.
How is the name for the SLAVE device assigned? In your example, you used 'sdb1'. My understanding is that the SLAVE drive should be listed in /etc/fstab or /etc/mtab. Neither of these files exists.
I am a newbie, so any help is appreciated.
Kirti
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Mike Kercher
Sent: Sunday, July 02, 2006 6:24 PM
To: CentOS mailing list
Subject: RE: [CentOS] How to display 2nd HDD
centos-bounces@centos.org <> scribbled on Sunday, July 02, 2006 5:24 PM:
Hello:
NEED HELP Desperately!!!
Today one of my servers (RH 7.2) went down. I moved the HDD to another server and it booted fine, except I could not set up the proper video display.
Then I set up a new server (CentOS 4.3). I moved the HDD from the original server and installed it as a SLAVE device in the new server.
Now I want to display the contents of the slave HDD (so I can copy a few files). How do I display the second (slave) HDD?
Thanks in advance.
Kirti
Did you try mounting the slave drive/partition?
mount /dev/sdb1 /mnt/floppy or similar
Mike
centos-bounces@centos.org <> scribbled on Sunday, July 02, 2006 6:02 PM:
No, I did not mount the partition.
How is the name for the SLAVE device assigned? In your example, you used 'sdb1'. My understanding is that the SLAVE drive should be listed in /etc/fstab or /etc/mtab. Neither of these files exists.
I am a newbie, so any help is appreciated.
Kirti
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Mike Kercher
Sent: Sunday, July 02, 2006 6:24 PM
To: CentOS mailing list
Subject: RE: [CentOS] How to display 2nd HDD
centos-bounces@centos.org <> scribbled on Sunday, July 02, 2006 5:24 PM:
Hello:
NEED HELP Desperately!!!
Today one of my servers (RH 7.2) went down. I moved the HDD to another server and it booted fine, except I could not set up the proper video display.
Then I set up a new server (CentOS 4.3). I moved the HDD from the original server and installed it as a SLAVE device in the new server.
Now I want to display the contents of the slave HDD (so I can copy a few files). How do I display the second (slave) HDD?
Thanks in advance.
Kirti
Did you try mounting the slave drive/partition?
mount /dev/sdb1 /mnt/floppy or similar
Mike
Is this a SCSI device or IDE? If IDE, did you connect it to the primary or secondary controller? If SCSI, what is the SCSI ID of the drive? dmesg should tell you which drive devices were detected during the boot process. fdisk /dev/<device> should allow you to view what partitions are available on the drive.
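For example (the device names here are only illustrative; yours may differ):

dmesg | grep -i 'hd[a-d]'
fdisk -l /dev/hdb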
Mike
On 03/07/06, Kirti S. Bajwa kbajwa@tibonline.net wrote:
No, I did not mount the partition.
How is the name for the SLAVE device assigned? In your example, you used 'sdb1'. My understanding is that the SLAVE drive should be listed in /etc/fstab or /etc/mtab. Neither of these files exists.
I am a newbie, so any help is appreciated.
The following will list the disks and their partitions...
# fdisk -l
/dev/hda is probably already partitioned and mounted. I'd guess your old unmounted disk will be /dev/hdb, possibly /dev/hdc or hdd. If the partitions are labelled you'll be able to tell what they were mounted as with...
# e2label /dev/hdbN
... where N is the partition no. of each slice on the disk as revealed by fdisk.
Then you can mount these partitions somewhere temporarily to have a root around in them/copy stuff out. For each partition...
# mkdir /mnt/old-disk-hdbN
# mount /dev/hdbN /mnt/old-disk-hdbN
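If you're only copying files off, it doesn't hurt to mount read-only while you're at it:

# mount -o ro /dev/hdbN /mnt/old-disk-hdbN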
Will.
Now I want to display the contents of the slave HDD (so I can copy a few files). How do I display the second (slave) HDD?
I am assuming that you are referring to IDE drives, since you mention a "slave" drive. IDE drives are usually referenced as /dev/hdX, where X is the letter of the connected drive. For example, the primary master is /dev/hda, the primary slave is /dev/hdb, the secondary master is /dev/hdc, and the secondary slave is /dev/hdd.
As root, type "fdisk -l". This will list the partitions of the drives connected to your system. If this is a primary slave, it is most likely /dev/hdb. If you have multiple partitions, you will want to choose which partition(s) to mount, and create a directory for each one. I might recommend creating /media/hdb1, /media/hdb2, etc. Then you can mount each partition by typing "mount /dev/hdb1 /media/hdb1", "mount /dev/hdb2 /media/hdb2", and so on.
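Put together, a session might look like this (the partition number and the destination path are just an example; check your own fdisk -l output first):

fdisk -l
mkdir -p /media/hdb1
mount /dev/hdb1 /media/hdb1
cp -a /media/hdb1/etc /root/old-server-etc
umount /media/hdb1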
Hope this helps.
Barry
I ask because when I attempt to recompile zsh on the x86_64 box, I unaccountably get this error: Can't locate object method "path" via package "Request" at /usr/share/autoconf/Autom4te/C4che.pm line 69, <GEN1> line 111.
Apparently an old autom4te.cache directory was copied to the new machine along with the source files, and this was causing grief to autom4te. Removing the cache has cleared up the problem.
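In case anyone else trips over the same thing, the fix is just to delete the stale cache from the top of the unpacked source tree before rebuilding:

rm -rf autom4te.cache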