PARI has been donated three large Enterprise servers: an E6500, an E6000, and an E5500. I am in the process of arranging shipping and pickup, perhaps as soon as next week. Once delivered, I will be setting these beasts up and checking them out, building drive bays, etc.
The idea is to install Aurora and bring it up to 1.92, then see about the build system for CentOS SPARC.
The 6500 is destined to replace our Ultra 30 e-mail server, and might even become our webserver as well as a potential buildfarm box for CentOS/SPARC and/or Aurora.
Just wanted to keep people updated on this; exciting!
Hi,
On Thu, Aug 18, 2005 at 09:10:29PM -0400, Lamar Owen wrote:
PARI has been donated three large Enterprise servers: an E6500, an E6000, and an E5500. I am in the process of arranging shipping and pickup, perhaps as soon as next week. Once delivered, I will be setting these beasts up and checking them out, building drive bays, etc.
The idea is to install Aurora and bring it up to 1.92, then see about the build system for CentOS SPARC.
You're a bit late with this. I am actually already at work on building CentOS-4/sparc (for real this time :)
I am in the phase of finalizing the build round for the 32-bit userspace and have all the bits and pieces for the 64-bit parts too. Even things like
Linux netra 2.6.9-15sp.EC #1 SMP Thu Aug 18 04:39:06 EEST 2005 sparc64 sparc64 sparc64 GNU/Linux
are there, but only tested on one quad-440MHz Netra so far.
The plan is to be able to test update methods from 1.0 or 1.92 installations to CentOS-4/sparc this weekend, then start looking at those small pieces and at anaconda hacking.
The plan is to be ready as of CentOS-4.2 time.
You'll be the perfect test victim as soon as I get all the bits and pieces together in a form that I can make public.
The 6500 is destined to replace our Ultra 30 e-mail server, and might even become our webserver as well as a potential buildfarm box for CentOS/SPARC and/or Aurora.
Build farms are always welcome. For example, I have problems with heat alone at home when I try to keep enough machines up for my work. Remote machines for security-fix building are always more than welcome.
So, in short: gimme at least one more week? :P
On Thu, 2005-08-18 at 21:10 -0400, Lamar Owen wrote:
PARI has been donated three large Enterprise servers: an E6500, an E6000, and an E5500. I am in the process of arranging shipping and pickup, perhaps as soon as next week. Once delivered, I will be setting these beasts up and checking them out, building drive bays, etc.
The idea is to install Aurora and bring it up to 1.92, then see about the build system for CentOS SPARC.
The 6500 is destined to replace our Ultra 30 e-mail server, and might even become our webserver as well as a potential buildfarm box for CentOS/SPARC and/or Aurora.
Just wanted to keep people updated on this; exciting!
We are progressing on the sparc build right now (Pasi Pirhonen is working hard on it :)
Hi,
On Thu, Aug 18, 2005 at 09:47:50PM -0500, Johnny Hughes wrote:
On Thu, 2005-08-18 at 21:10 -0400, Lamar Owen wrote:
PARI has been donated three large Enterprise servers: an E6500, an E6000, and an E5500. I am in the process of arranging shipping and pickup, perhaps as soon as next week. Once delivered, I will be setting these beasts up and checking them out, building drive bays, etc.
We are progressing on the sparc build right now (Pasi Pirhonen is working hard on it :)
I'll put it out here again, as I can't say it too often: we (at least I) quite desperately need those donated boxes/space for continuous work.
I, for example, do need my local machines for installation testing and other tasks demanding local access, but for day-to-day maintenance it starts to be a PITA to switch machines on and off, as one can't keep all of them up and running 24/7 :P There might be critical updates while I'm not even near the machines at home, and you can't do anything about it, as they are not ether-wakeable and are switched off.
So this, for my part at least for now, is a reach-out to get some suitable alpha/sparc hardware running at some remote location :)
ia64 would not hurt either. Those boxes tend to run so warm that I only start them when I'm really working with them.
Another thing is that I do at least intend to mess with the rpmforge stuff too, and that would need a totally different space (at least separate chroots) to maintain. One just can't mess with the core build system and build all the weird extra stuff in the same space.
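A minimal sketch of the separate-chroots idea, assuming a mock-style setup (the config names below are made up for illustration): each build target gets its own config, and therefore its own chroot, so the core build system and the add-on stuff never share a buildroot.

# hypothetical configs - one chroot per build target
mock -r centos-4-sparc --rebuild foo-1.0-1.src.rpm
mock -r rpmforge-sparc --rebuild bar-2.0-1.src.rpm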
So this, for my part at least for now, is a reach-out to get some suitable alpha/sparc hardware running at some remote location :)
Well, our location certainly is remote... :-)
I have a couple of alpha boxen, but they are not fast. One is an AS2100 4/275 (4 CPU), and I have a working 164SX/533.
ia64 would not hurt either. Those boxes tend to run so warm that I only start them when I'm really working with them.
No ia64 yet.
Another thing is that I do at least intend to mess with the rpmforge stuff too, and that would need a totally different space (at least separate chroots) to maintain. One just can't mess with the core build system and build all the weird extra stuff in the same space.
Well, it might be possible to dedicate the E6000 to the cause, if I can justify the power consumption. The 6500, if all goes according to plan, would be my live e-mail and Plone server, perhaps. In order to justify the power bill, I have to actually use the machine, so we'd have to work out a protocol for the builds that won't disturb the live system.
But for the first few weeks to a couple of months I don't foresee using the beast in production. One has to admit, though, having an E6500 with its hot-swap EVERYTHING (including CPUs, hearkening back to a previous thread) is seriously cool, and even if I never powered the beast up, it would make a great prop on tours!
Currently, it appears we will be picking the machines up Friday, so I should have them on site Saturday, but it will be at least a week or two before any would be ready to do useful build work.
I have enough bandwidth to be a build site, but I won't be able to host a mirror, sorry, as my poor T1 might become overloaded. Working on the bandwidth issue; I hope to have something much larger within 18 months. The cost here in the boonies is very high for large pipes, but an opportunity for a very large pipe lies on the horizon....
On Thursday 18 August 2005 21:10, Lamar Owen wrote:
PARI has been donated three large Enterprise servers: an E6500, an E6000, and an E5500. I am in the process of arranging shipping and pickup, perhaps as soon as next week. Once delivered, I will be setting these beasts up and checking them out, building drive bays, etc.
Update:
I have on site now the following beasts:
E6500 w/ 8 400MHz/8MB cache CPUs
E5500 w/ 6 400MHz/8MB cache CPUs
E6000 w/ 10 336MHz/4MB cache CPUs
2x E3500 (336 and 250MHz CPUs, one with 6 and one with 4)
E3000 w/ 6 250MHz CPUs
3x E450 w/ 2 400MHz CPUs (according to the handbook, these CPUs don't exist... 501-5239 :-))
Also a box of what appears to be 400MHz/4MB cache CPUs, around 28-30 in the box, along with a pair of the special torque screwdrivers needed to change the CPUs on the cards.
By cannibalizing the 5500 and the two 3500s, and by going to the 4MB cache 400MHz CPUs, I can possibly do 24 CPUs in the 6500 (the Sun Enterprise EX500 CPU/RAM modules are all the same; the differences between models are the number of slots and the enclosure). If the box of CPUs turns out to be mixed speed, I'd do 14 400MHz/8MB cache CPUs.
I haven't counted RAM yet, but the CPU boards are completely full in both the 6500 and the 5500. With up to 4GB per CPU card, 7 cards (2 CPUs per card) would work out to up to 28GB, not a shabby amount of RAM. But I haven't been through all the cards yet to take inventory of the RAM, so it could be less.
The drives were stripped, but I also got several interesting rackmount Intel beasts with 4-6 18GB SCA drives each; maybe two dozen drives in all. The E6500 and E5500 both have a D1000 StorEdge array (I would take the D1000 out of the 5500 and put it in the 6500, perhaps); I would just need spud brackets (I have 20+ Ultra 30s that use those brackets....). The E3500s have the standard FC-AL cages, and the E450s have their standard (SCSI) cages.
What will possibly happen then is that the E3000 and/or the E6000 get upgraded to the 400MHz CPUs, and one or both put online too.
Updates will come once I figure out the RAM and how I am going to configure the 6500, which will be after they clear inventory control. It will probably be the first week in September at the earliest before I can do a test power-up of the 6500 or the 5500, since I have to get a 250V 30A socket installed (I currently have a pair of 120/208V three-phase 30A sockets under the floor, and will have one of the L21-30s that are there now replaced with an L6-30; I have several L5-30s, but need 200-240V, not 120V as on the L5-30).
On Saturday 27 August 2005 15:35, Lamar Owen wrote:
By cannibalizing the 5500 and the two 3500s, and by going to the 4MB cache 400MHz CPUs, I can possibly do 24 CPUs in the 6500 (the Sun Enterprise EX500 CPU/RAM modules are all the same; the differences between models are the number of slots and the enclosure). If the box of CPUs turns out to be mixed speed, I'd do 14 400MHz/8MB cache CPUs.
Probably won't work. The boards you have aren't compatible with the CPUs. Boards come in 83/90 and 100MHz versions... If you get a board with 336MHz CPUs, it will run at 83MHz and the CPUs have a clock multiplier of 4... Adding 400MHz modules to it will not make a difference - the CPUs will report as 400MHz but the clock speed will show as 336.
http://sunsolve.sun.com/handbook_pub/Systems/E6500/components.html#SystemBoa... The system handbook has the board part numbers so you can check your boards before you do anything.
Peter.
On Saturday 27 August 2005 19:08, Peter Arremann wrote:
Probably won't work. The boards you have aren't compatible with the CPUs. Boards come in 83/90 and 100MHz versions... If you get a board with 336MHz CPUs, it will run at 83MHz and the CPUs have a clock multiplier of 4... Adding 400MHz modules to it will not make a difference - the CPUs will report as 400MHz but the clock speed will show as 336.
The EX500 boards are all 501-4882 83/90/100 boards. In the E6500 the Gigaplane can run up to 90MHz, and won't run at 100MHz (due to the length of the centerplane traces; that's also the reason load boards are required in the E6x00 and not the E5x00 and lower, as their centerplane is smaller; the E5500, E4500, and E3500 are all the same size, and the E3500 is single-sided).
However, there are some twists here that are really funky. Investigating, the 400MHz/8MB cache CPUs run a 5x multiplier (80MHz Gigaplane) in E6500s, and a 4x multiplier in 100MHz Gigaplane systems (E[345]500). The 400MHz/4MB cache module can run at 4x only. So, no, I can't run the 400/4MB modules in the 6500 at all, but they can run in the 3500s or the 5500. They won't run on the older CPU cards either, which is OK; only the cards in the E3000 and the E6000 are impacted by that. I might consolidate to the 6000 and load it up with 336s.
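To spell the clock math out (just the figures above, restated):

400MHz/8MB module: 5 x 80MHz Gigaplane = 400MHz (E6500)
                   4 x 100MHz Gigaplane = 400MHz (E3500/E4500/E5500)
400MHz/4MB module: 4x only, so it needs a 100MHz Gigaplane (ruling out the E6500)
336MHz module:     4 x 83MHz = ~336MHz (on the 83/90MHz boards)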
Hmm, very interesting. I'll probably limit myself to 14 400/8MB CPUs in the 6500 (the three cards in the 5500 plus the four already in the 6500, which all have the 400/8MB modules) and put the four CPU cards from the 3500s, fitted with the 400/4MB CPUs, in the 5500 (which will work fine). Then I can rotate the 336 modules around to the 6000... :-) Fun. Looking at one of the E3500's CPU cards now... two 336MHz (501-4363) CPUs and 2GB RAM... Since the 6500 already has 400/8MB modules, I don't have to worry about the Clock+ upgrade, or the Flash PROM upgrade, either....
I'm going to have some fun next week... :-)
On Saturday 27 August 2005 20:13, Lamar Owen wrote:
I'm going to have some fun next week... :-)
Sounds like it... You were perfectly right with your answer. I didn't want to get into that much detail, just to make sure people aren't swapping around CPUs and then surprised it doesn't change much :-)
Peter.
On Saturday 27 August 2005 15:35, Lamar Owen wrote:
On Thursday 18 August 2005 21:10, Lamar Owen wrote:
PARI has been donated three large Enterprise servers: an E6500, an E6000, and an E5500. I am in the process of arranging shipping and pickup, perhaps as soon as next week. Once delivered, I will be setting these beasts up and checking them out, building drive bays, etc.
Update:
I have on site now the following beasts:
E6500 w/ 8 400MHz/8MB cache CPUs
Update:
Spacely is up and running. Spacely is now an E6500 with 14 400MHz/8MB CPUs and 16GB of RAM, running Aurora Tangerine 1.92. Some information:

[root@spacely unixbench-4.1.0]# cat /proc/cpuinfo
cpu             : TI UltraSparc II (BlackBird)
fpu             : UltraSparc II integrated FPU
promlib         : Version 3 Revision 2
prom            : 3.2.30
type            : sun4u
ncpus probed    : 14
ncpus active    : 14
Cpu0Bogo        : 796.67
Cpu0ClkTck      : 0000000017d78400
Cpu1Bogo        : 794.62
Cpu1ClkTck      : 0000000017d78400
Cpu4Bogo        : 794.62
Cpu4ClkTck      : 0000000017d78400
Cpu5Bogo        : 794.62
Cpu5ClkTck      : 0000000017d78400
Cpu8Bogo        : 794.62
Cpu8ClkTck      : 0000000017d78400
Cpu9Bogo        : 794.62
Cpu9ClkTck      : 0000000017d78400
Cpu12Bogo       : 794.62
Cpu12ClkTck     : 0000000017d78400
Cpu13Bogo       : 794.62
Cpu13ClkTck     : 0000000017d78400
Cpu16Bogo       : 794.62
Cpu16ClkTck     : 0000000017d78400
Cpu17Bogo       : 794.62
Cpu17ClkTck     : 0000000017d78400
Cpu20Bogo       : 794.62
Cpu20ClkTck     : 0000000017d78400
Cpu21Bogo       : 794.62
Cpu21ClkTck     : 0000000017d78400
Cpu24Bogo       : 794.62
Cpu24ClkTck     : 0000000017d78400
Cpu25Bogo       : 794.62
Cpu25ClkTck     : 0000000017d78400
MMU Type        : Spitfire
State:
CPU0: online
CPU1: online
CPU4: online
CPU5: online
CPU8: online
CPU9: online
CPU12: online
CPU13: online
CPU16: online
CPU17: online
CPU20: online
CPU21: online
CPU24: online
CPU25: online

[root@spacely unixbench-4.1.0]# cat /proc/meminfo
MemTotal:     16620760 kB
MemFree:      16358784 kB
Buffers:         10032 kB
Cached:         103848 kB
SwapCached:          0 kB
Active:         138280 kB
Inactive:        21672 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:     16620760 kB
LowFree:      16358784 kB
SwapTotal:     2097136 kB
SwapFree:      2097136 kB
Dirty:              48 kB
Writeback:           0 kB
Mapped:          12968 kB
Slab:            74336 kB
Committed_AS:    12760 kB
PageTables:        456 kB
VmallocTotal:  3145728 kB
VmallocUsed:       624 kB
VmallocChunk:  3145104 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     4096 kB

[root@spacely unixbench-4.1.0]#
If anybody has EX500 CPUs, 400MHz/8MB cache, that they'd like to donate: I have a few more of the 501-4882 boards that can take that CPU, and eight more board slots to fill (16 more CPUs, 30 total). One donor has already come forward with some boards and such, and I am very grateful for that!
Otherwise, the other 501-4882 boards will go into the E5500 that I robbed the 400/8MB CPU cards from, and they will get 400MHz/4MB cache CPUs. Not sure how much RAM I have left to fill the slots, though. I think I can get 12 CPUs in the E5500 (if I get another one or two 501-4882 boards, I'll max the E5500 out at 14 CPUs). The E6000 will get as many 336MHz modules as I can scrounge from the 3500s (which are giving up their CPU/memory cards to the E5500), put on the older CPU/memory cards. The smaller DIMMs will likely go in the E6000, and it probably won't be powered up often. I have a pretty good case for leaving the E6500 on (I'm looking at running my e-mail server on it) and might have a case for the E5500 (as long as I can keep power consumption down).
On benchmarks, I've run a Unixbench 4.1 run on the E6500 (see the Aurora-devel list for the details) against another Unixbench run on a Dell PE2650 dual 3GHz Xeon server. The results are telling on the high-concurrency side: the 16-concurrent shell script test on the E6500 pulls 244 loops per minute (lpm); on the 2650 I see 366 lpm. The other benchmarks in the suite are single-CPU, but even then the 2650's Xeon is only about 5 times faster than the 400MHz/8MB cache UltraSPARC.
Now, as to plans for a build system: I am willing to provide ssh access to a user account, RSA/DSA key only, to one or more CentOS developers (I have extended much the same offer to the Aurora folks). I figure that 16GB of RAM might be large enough to do the buildroot in a ramdisk; correct me if I'm wrong. Doing the build totally in a ramdisk might make builds go more quickly. Pasi, let me know. The Aurora folks would prefer development efforts to concentrate on Aurora, with an eye to putting the results into CentOS. The Aurora folks are working towards using plague and mock as their build system; google for them to find the info (I don't have links handy).
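A minimal sketch of both ideas, assuming a mock-style build area under /var/lib/mock and that a ~12GB tmpfs is enough for a buildroot (both the path and the size are guesses, not tested figures):

# keep the build chroots in RAM via tmpfs (path and size are assumptions)
mount -t tmpfs -o size=12g tmpfs /var/lib/mock

And the key-only accounts would be enforced with the usual sshd_config settings:

# /etc/ssh/sshd_config: public-key logins only, no passwords
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no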