Hello,
Earlier this week I installed a test server with CentOS 5.6 with
Virtualization enabled during installation. Today I installed another
server using the same method (they are identical servers). I just did a
yum update and found something curious: the two servers have different
kernels. Server 1 is at the 9.1 release and server 2 at 5.1. How can
this be? How do I get the latest version on server 2? If I run yum
update there are none available.
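For what it's worth, this is roughly how I plan to compare what yum sees
on the two machines (I'm assuming the relevant package is kernel-xen,
since both boxes run the Xen kernel):

  rpm -q kernel-xen              # kernels currently installed
  yum clean all
  yum list kernel-xen            # what yum thinks is installed/available
  yum check-update kernel-xen

Maybe server 2 is simply hitting a mirror that hasn't synced the newer
kernel yet?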
If I run xm info I get this on server 1:
host : server1
release : 2.6.18-238.9.1.el5xen
version : #1 SMP Tue Apr 12 18:53:56 EDT 2011
machine : x86_64
nr_cpus : 4
nr_nodes : 1
sockets_per_node : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 2400
hw_caps :
bfebfbff:20100800:00000000:00000940:0000e3bd:00000000:00000001
total_memory : 4095
free_memory : 383
node_to_cpu : node0:0-3
xen_major : 3
xen_minor : 1
xen_extra : .2-238.9.1.el5
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
cc_compiler : gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
cc_compile_by : mockbuild
cc_compile_domain : centos.org
cc_compile_date : Tue Apr 12 18:01:03 EDT 2011
xend_config_format : 2
And on server 2 it is this:
host : server2
release : 2.6.18-238.5.1.el5xen
version : #1 SMP Fri Apr 1 19:35:13 EDT 2011
machine : x86_64
nr_cpus : 4
nr_nodes : 1
sockets_per_node : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 2400
hw_caps :
bfebfbff:20100800:00000000:00000940:0000e3bd:00000000:00000001
total_memory : 4095
free_memory : 383
node_to_cpu : node0:0-3
xen_major : 3
xen_minor : 1
xen_extra : .2-238.5.1.el5
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
cc_compiler : gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
cc_compile_by : mockbuild
cc_compile_domain : centos.org
cc_compile_date : Fri Apr 1 18:30:53 EDT 2011
xend_config_format : 2
--
Met vriendelijke groet / With kind regards,
Hans Vos
Hi
I have a Dell PowerEdge T310 *tower* server. I have to buy a UPS from
APC... could anyone help me out with a hint?
Would a simple "Smart-UPS 1000" be enough?
Thanks so much!
lewis.
[Reposted now I've joined the list, so I hopefully don't get moderated out]
Hi,
I've upgraded lots of machines to 5.6 (thanks!) and there was one
particular machine that I'd also like to upgrade to PHP 5.3.
Unfortunately it seems I can't.
On the machine I have php-mssql installed, and it appears that there is
no php53-mssql.
php-mssql is built from the php-extras SRPM, so is there going to be a
php53-extras SRPM?
I've checked upstream, and they also don't have a php53-mssql package,
so if this _were_ to be solved it'd have to be in the 'Extras'
repository, I guess...
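In case it's useful, something like this should show which SRPM the
current package is built from and which php53 subpackages the
configured repositories actually carry (just the commands I'd use,
nothing authoritative):

  rpm -q --qf '%{SOURCERPM}\n' php-mssql
  yum list available 'php53*'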
Cheers,
John.
--
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake
More PHP fun!
I can see in the spec files that php-mcrypt support was removed by
Red Hat. I tried to find out why, but I don't have sufficient access to
the Red Hat Bugzilla. I am wondering whether it is actually needed, as I
have also run across a post or two indicating that applications that
rely on mcrypt still work with the new php53.
Perhaps mcrypt was superseded by another module or by PHP core code?
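If anyone wants to check a specific box, I assume something as simple as
this would show whether the extension is actually present under php53:

  php -m | grep -i mcrypt
  php -r 'var_dump(function_exists("mcrypt_encrypt"));'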
Hi.
There does not seem to be a php53-eaccelerator in the standard CentOS
yum repositories, from what I can see. That is a mainstay for us. Has
anyone found that a particular php53 eaccelerator build from another
location plays well with it?
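If nothing packaged turns up, is the usual fallback just to rebuild the
extension against the php53 devel headers? Roughly like this, I'd guess
(assuming the php53-devel package and an unpacked eaccelerator source
tree; the directory name is a placeholder):

  yum install php53-devel gcc
  cd eaccelerator-<version>       # unpacked source tree (placeholder)
  phpize
  ./configure
  make && make install

and then an extension line in php.ini pointing at the resulting .so, I
suppose.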
Thanks.
On 4/13/11, Rudi Ahlers <Rudi(a)softdux.com> wrote:
>> to expand the array :)
>
> I haven't had problems doing it this way yet.
I finally figured out my mistake creating the raid devices and got a
working RAID 0 on two RAID 1 arrays. But I wasn't able to add another
RAID 1 component to the array with the error
mdadm: add new device failed for /dev/md/mdr1_3 as 2: Invalid argument
Googling this indicates that it's the expected result when trying to add
a new device to a RAID 0 array. Could you or anybody else please share
what the trick is to achieve this?
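In case it helps to see the exact layout, this is roughly what I have
now (device names are just examples, not my real ones):

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2
  # the part that fails: adding a third mirror to the RAID 0
  mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
  mdadm /dev/md3 --add /dev/md4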
Hi,
I was just wondering if there are specific steps to take to install CentOS on
SSDs...
For example, no swap partition?
Format with a flash fs?
Sysctl parameters?
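The kind of thing I mean (just guesses on my part, /dev/sda is a
placeholder, not a recommendation):

  # in /etc/fstab: mount with noatime to cut down on writes
  /dev/sda1  /  ext3  defaults,noatime  1 1
  # switch the SSD to a simpler I/O scheduler
  echo noop > /sys/block/sda/queue/scheduler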
Thx,
JD
What's the current state of the art with this black magic?
It looks like there are a vast number of tuning options and the ones
that typically show up in advice are:
socket options = TCP_NODELAY IPTOS_LOWDELAY
read raw = yes
write raw = yes
oplocks = yes
max xmit = 65535
dead time = 15
getwd cache = yes
some of which may be the defaults anyway.
One thing in particular that I'd like to make faster is access to a set
of libraries (boost, etc.) that are in a directory mapped by several
Windows boxes (mostly VMs on different machines) used as build servers.
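For that share I was thinking of something along these lines (untested,
just a sketch of the smb.conf section; the path is a placeholder):

  [libs]
      path = /export/libs
      read only = yes
      oplocks = yes
      level2 oplocks = yes

Since the library files are essentially read-only, I assume a read-only
share with oplocks enabled lets the clients cache them aggressively.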
--
Les Mikesell
lesmikesell(a)gmail.com
For a few days I haven't been able to boot a server in graphical mode.
The screen goes black and I get a text console login.
I can log in as a user, and at that point 'startx' brings the graphical
interface up.
In /etc/inittab I have:
x:5:respawn:/etc/X11/prefdm -nodaemon
and XDMCP is alive.
On this server we have 12 stations driven by LTSP as a terminal server.
We can boot the stations to the point where the X graphical interface
doesn't come up and the terminal stays with a grey screen with a big X
in the center.
Can somebody point me to a solution?
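Would checking something like this be the right place to start? (Just my
guesses.)

  grep initdefault /etc/inittab   # should be id:5:initdefault: for runlevel 5
  tail -n 50 /var/log/Xorg.0.log  # errors from the X server
  tail -n 50 /var/log/messages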
---
Michel Donais
One was 32 bit, the other 64 bit.
Christopher Chan <christopher.chan(a)bradbury.edu.hk> wrote:
>On Thursday, April 14, 2011 07:26 AM, John Jasen wrote:
>> On 04/12/2011 08:19 PM, Christopher Chan wrote:
>>> On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
>>>> On 04/12/2011 10:21 AM, Boris Epstein wrote:
>>>>> On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
>>>>> <alain.pean(a)lpp.polytechnique.fr> wrote:
>>>>
>>>> <snipped: two recommendations for XFS>
>>>>
>>>> I would chime in with a dis-commendation for XFS. At my previous
>>>> employer, two cases involving XFS resulted in irrecoverable data
>>>> corruption. These were on RAID systems running from 4 to 20 TB.
>>>>
>>>>
>>>
>>> What were those circumstances? Crash? Power outage? What are the
>>> components of the RAID systems?
>>
>> One was a hardware raid over fibre channel, which silently corrupted
>> itself. System checked out fine, raid array checked out fine, xfs was
>> replaced with ext3, and the system ran without issue.
>>
>> Second was multiple hardware arrays over linux md raid0, also over fibre
>> channel. This was not so silent corruption, as in xfs would detect it
>> and lock the filesystem into read-only before it, pardon the pun, truly
>> fscked itself. Happened two or three times, before we gave up, split up
>> the raid, and went ext3. Again, no issues.
>
>32-bit kernel by any chance?