On 8/31/2010 11:04 AM, Stephen Harris wrote:
Stack size was only a problem for the 32-bit OS, not 64-bit. If one
is dealing with a terabyte or more of data, I don't see them using a 32-bit OS.
Huh:
/dev/mapper/Raid5-Media 3.3T 3.1T 216G 94% /Media
% uname -sr
Linux 2.6.18-194.3.1.el5PAE
I really don't see any good reason for using anything but 64-bit any more, if the hardware supports it.
I don't find the Red Hat 32-bit/64-bit split to be as clean as it should be (definitely messy compared to Solaris). When it comes to needing to install 32-bit and 64-bit versions of the same program (e.g. Perl, because you only have 32-bit binary libraries from a vendor) it gets a little hairy. And then Oracle really starts to get antsy on you.
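For what it's worth, a quick way to see how the multilib split looks on a given box is to ask rpm which architectures of a package are installed (glibc used here purely as an example; whether a particular package, Perl included, is actually shipped for both arches depends on your repos):

% rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' glibc

On a multilib install that prints one line per arch, e.g. an x86_64 and an i686 copy side by side, which is usually where the hairiness starts.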
As a result, when I first installed CentOS 5 I stuck with 32-bit because it was more stable. After all, my memory footprint is only around 200MB on this machine; the rest is cache!
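(In case anyone wonders how to separate the two, the "-/+ buffers/cache" row of free is the quick check, since it subtracts out what the kernel is only caching:

% free -m

The first "used" figure includes cache; the "-/+ buffers/cache" row is what the applications themselves are actually holding.)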
The kernel and user apps are pretty much different things: you can run a 64-bit kernel with 32-bit apps if you want. But the issue with a 32-bit kernel, besides the process address space it can provide, is that, at least the way RH and CentOS build it, it uses 4k stacks, which may not be enough for some XFS operations.
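If you want to check what you're actually running, something along these lines should work on a stock CentOS box (assuming the usual /boot/config-<version> file is present, which RHEL/CentOS kernel packages install; CONFIG_4KSTACKS only exists in the i386 builds):

% uname -m
% file /bin/ls
% grep CONFIG_4KSTACKS /boot/config-`uname -r`

uname -m tells you the kernel's architecture (i686 vs x86_64), file on any binary tells you whether that particular app is a 32-bit or 64-bit ELF, and the grep shows whether a 32-bit kernel was built with the 4k stacks option; on a 64-bit kernel the option simply isn't there.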