From: Kirk Bocek t004@kbocek.com
Yes, I understand. I also don't have an unlimited budget. If I can get most of the benefit without needing a $300-800 add-on, that's a big plus.
You're talking about a $500-600 mainboard. Then you're talking about $500-800 _per_ dual-core CPU. And then you've got to factor in the Registered ECC RAM costs.
And you're worried about a $125 (3Ware Escalade 8006-2) or $300 (3Ware Escalade 8506-4) controller? Furthermore, the SATA on the nForce MCP typically hangs off only a single, 133MBps PCI channel. You'll get 533MBps out of the 8506-4 in a PCI slot at 64-bit @ 66MHz.
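For reference, those bandwidth figures are just bus width times clock. A quick sketch of the arithmetic (the 133/533 figures come from the true 33.33/66.66MHz clocks):

```shell
# Peak PCI bandwidth in MB/s = (bus width in bytes) x (clock in MHz).
echo "32-bit @ 33MHz: $(( (32 / 8) * 33 )) MB/s"   # ~133 MB/s at the true 33.33MHz
echo "64-bit @ 66MHz: $(( (64 / 8) * 66 )) MB/s"   # ~533 MB/s at the true 66.66MHz
```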
I conceptually understand iSCSI, but how would I use a GbE NIC in a backup solution?
What do you think iSCSI is? It's storage over IP, just with an additional host processor on the GbE NIC to off-load the overhead.
If you don't have the money for a GbE HBA, then just do straight GbE. Build an out-of-band IP network segmented off on a 2nd GbE connection in each system.
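As a minimal sketch of what that 2nd connection looks like on a Red Hat-style box (the device name, subnet, and address are all assumptions, not a prescription):

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth1 for the
# out-of-band backup segment. Device name and subnet are assumptions.
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes
# Note: no GATEWAY line -- keep this segment non-routed so backup
# traffic stays off your production network.
```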
[ HINT: I'm exploring the issues of real-time backup, discussing near-line storage, virtual tape libraries and, in a future article after that, out-of-band networks and "cheap" SANs. All from the standpoint of a sysadmin, and not so much network/storage engineering. ]
Don't forget that a backup solution would depend on hot-pluggability. That's why I was thinking of USB/IEEE-1394.
Mistake on a server. You don't want to hot-plug consumer interconnects. One issue, bam! Kernel panic.
Use an out-of-band IP network over a 2nd GbE connection to reach what you need elsewhere on the network. I have been recommending this where cost is a factor.
Cool, I will take a look at HP's offerings. Are there any bare motherboards you would recommend?
It's difficult because of the limited designs. A lot of mainboards "cut corners" for cost considerations. Even the Tyan 4885 puts _all_ I/O on _one_ CPU. The HP DL585 follows the AMD reference.
-- Bryan J. Smith mailto:b.j.smith@ieee.org
Bryan J. Smith b.j.smith@ieee.org wrote:
And you're worried about a $125 (3Ware Escalade 8006-2) or $300 (3Ware Escalade 8506-4) controller? Furthermore, the SATA on the nForce MCP typically hangs off only a single, 133MBps PCI channel. You'll get 533MBps out of the 8506-4 in a PCI slot at 64-bit @ 66MHz.
Well, now you're giving me a reason to spend the money -- the 3Ware ports have better performance than the nForce ports.
Don't forget that a backup solution would depend on hot-pluggability. That's why I was thinking of USB/IEEE-1394.
Mistake on a server. You don't want to hot-plug consumer interconnects. One issue, bam! Kernel panic.
I wasn't clear that this is for *off-site* backup. I have plenty of on-line backup going on. I need to be able to take the backup away so we have something to fall back on when the building falls down, burns up or otherwise goes away. Tape serves this function nicely. But a hard-drive would be faster and more flexible.
Are you saying that USB/1394 hot-plugging is unstable or unreliable? Do you possibly have any articles to point to?
Kirk Bocek
On Wed, 2005-06-22 at 15:36, Kirk Bocek wrote:
I wasn't clear that this is for *off-site* backup. I have plenty of on-line backup going on. I need to be able to take the backup away so we have something to fall back on when the building falls down, burns up or otherwise goes away. Tape serves this function nicely. But a hard-drive would be faster and more flexible.
Are you saying that USB/1394 hot-plugging is unstable or unreliable? Do you possibly have any articles to point to?
I've been trying to use software RAID with one partition on internal IDE and the other on external FireWire, and it is not reliable. I suppose the fact that RHEL4 dropped FireWire support completely might be a hint. It worked OK with FC1 and the 2.4 kernel. The drive would not be detected automatically, but with the right set of commands to find it and add it to the RAID, it would run fine. FC2 did not work with FireWire at all. FC3 autodetects the drive but crashes within a day or less if the RAID is active. I've been unmounting the RAID and adding the FireWire drive just long enough to re-sync, then removing it, and that usually works, but if I were starting over, SATA might be a better choice.
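For anyone following along, that periodic re-sync dance looks roughly like this (the array and device names are assumptions, and this is just a sketch of the procedure described above, not an endorsement of it):

```shell
# Hypothetical re-sync of an external FireWire member into a software
# RAID-1 array. /dev/md0 and /dev/sdb1 are assumed names.
mdadm /dev/md0 --add /dev/sdb1     # hot-add the external disk as a member
cat /proc/mdstat                   # watch until the rebuild completes
mdadm /dev/md0 --fail /dev/sdb1    # then mark it failed...
mdadm /dev/md0 --remove /dev/sdb1  # ...and pull it back out of the array
```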
On Wed, 2005-06-22 at 13:36 -0700, Kirk Bocek wrote:
Well, now you're giving me a reason to spend the money -- the 3Ware ports have better performance than the nForce ports.
What I said in response to your comment on wanting NCQ support on an individual drive is that an "intelligent" controller like a 3Ware "Storage Switch" with an ASIC and SRAM is doing _exactly_ what NCQ does, only far more intelligently (for _all_ disks in the array).
3Ware has repeatedly taken benchmark award after benchmark award in its ability to queue I/O requests better than any other ATA RAID option out there (and even most SCSI RAID ones too). How? Because it's not a buffering disk controller with DRAM, it's a switching ASIC with 0 wait state SRAM.
That means it's ideal for when you're just throwing data around. Things change though if you're doing more buffering (like RAID-5). That's why 3Ware introduced the 9500 series (to add DRAM buffer to its existing ASIC+SRAM design).
I wasn't clear that this is for *off-site* backup. I have plenty of on-line backup going on. I need to be able to take the backup away so we have something to fall back on when the building falls down, burns up or otherwise goes away. Tape serves this function nicely. But a hard-drive would be faster and more flexible.
I know this. I never thought otherwise. I wasn't even debating the merits of tape v. disk (that was someone else).
What I was stating was that you _never_ want to use "consumer" buses for server storage. Despite Apple's insistence that FireWire is server-grade, they've had lots of issues. Yes, it's better than USB and far more intelligent (e.g., USB can't do device-to-device, FireWire can).
What I was further saying is that if you are using USB or FireWire so your backup drive is "removable," you should consider _other_ options.
1. External SCSI
SCSI just works and works well, and FireWire will get there someday on servers (it's already much, much better for consumers, I agree wholly). You can also safely unload/reload the SCSI modules in the kernel for the device that controls the tape when you want to remove the tape (assuming nothing else is on the SCSI card).
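The unload/reload sequence described above might look something like this (the driver name, tape device, and the assumption that nothing else sits on that card are all hypothetical):

```shell
# Hypothetical safe-removal sequence for an external SCSI tape drive
# on its own HBA. aic7xxx and /dev/st0 are assumed names.
mt -f /dev/st0 offline   # rewind and eject the tape
modprobe -r st           # unload the SCSI tape driver
modprobe -r aic7xxx      # unload the HBA driver (nothing else on the card)
# ...swap the external device, then bring everything back:
modprobe aic7xxx
modprobe st
```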
2. Backup over NIC to a "near-line" device with the tape drive
I'm not talking about on-line backup to disk. I'm talking about centralizing your backup to a system that is both a "near-line" disk _and_ can commit to tape for your other systems outside of their backup windows. Especially when your budget is limited and your ratio is X servers to 1-2 tape drives.
Most sysadmins think they have to do their tape backups in "full" in "real-time" during their "backup window." Even when they do centralized backups, they do them in "real-time" and that means they are sending _all_ data over the network. This is wholly unnecessary and inefficient.
End-servers -> (rsync) -> "Near-line" storage server -> (attached) -> Tape Drive
Just pick a system to be the "near-line" storage server and put your limited tape resources on it. Now do 1 full rsync from each server to it in their backup window. You can spread this across a few days for this "initial rsync" if it will take more than 1 evening. Once you have all systems "initial rsync'd" to the "near-line" system, all you ever need are the incremental rsyncs to mirror the data.
Because once you have that data on the "near-line" device with the tape drive, it's a matter of committing to tape locally whenever you feel like it! Anytime, anyhow, 24x7! No more "rush" to get data to tape during the backup window, that's what the near-line device does. All the servers have to do is push an rsync of changes during their window.
Whether you decide to do this on your "normal" network, build an "out- of-band" (i.e., 2nd NIC in each server) network, or a more formal "storage area network" (SAN -- be it FC-AL or GbE/iSCSI), that's up to you.
Are you saying that USB/1394 hot-plugging is unstable or unreliable? Do you possibly have any articles to point to?
I've had to take baseball bats to people with Mac Xserve as well as PC servers running both Linux and Windows over the last 2 years because they keep plugging in FireWire and USB devices and wonder why their servers crash. Not even Apple has perfected FireWire as a replacement for SCSI on _servers_.
If you want to be able to have only 1-2 tape drives for far more servers and workstations, I really recommend you build a dedicated system just as a "near-line" backup server and put the tape drive on it. Now you just send rsyncs of systems to the "near-line" backup server and commit to tape as you see fit from those local stores.
SIDE EFFECT: If you need to restore something immediately, you don't always have to go to tape! If you are already doing this, then just put your tape backup on those systems with the "on-line" storage.
Simple, no?
On Friday 24 June 2005 01:30, Bryan J. Smith wrote:
What I was stating was that you _never_ want to use "consumer" buses for server storage. Despite Apple's insistence that FireWire is server-grade, they've had lots of issues. Yes, it's better than USB and far more intelligent (e.g., USB can't do device-to-device, FireWire can).
Completely off topic, but USB 2.0 has optional peering support -- it's called OTG.
Are you saying that USB/1394 hot-plugging is unstable or unreliable? Do you possibly have any articles to point to?
I've had to take baseball bats to people with Mac Xserve as well as PC servers running both Linux and Windows over the last 2 years because they keep plugging in FireWire and USB devices and wonder why their servers crash. Not even Apple has perfected FireWire as a replacement for SCSI on _servers_.
There is tons of evidence out there showing how unsuitable FireWire is for a server environment. Basically, with enough devices it is possible to find a server/disk combination that will work. But if you just pick any random device, you're pretty sure to get into trouble.
Just look at the comments from the Apple universe. One good example is SoftRAID -- they put a FAQ out there about their FireWire issues (http://www.softraid.com/faq.html#firewire) with older hardware. Other than that, just look at Google and see how many search results you'll get :-)
Peter.
Bryan J. Smith wrote:
That means it's ideal for when you're just throwing data around. Things change though if you're doing more buffering (like RAID-5). That's why 3Ware introduced the 9500 series (to add DRAM buffer to its existing ASIC+SRAM design).
Another poster (sorry, couldn't find the name -- was it Peter?) mentioned the lack of hot-swap support in most of the hardware RAID out there. If that's the case, what's the point of RAID 1 or 5 if a failed drive will hang the system? What's your experience with the 3Ware cards?
What I was stating was that you _never_ want to use "consumer" buses for server storage. Despite Apple's insistence that FireWire is server-grade, they've had lots of issues. Yes, it's better than USB and far more intelligent (e.g., USB can't do device-to-device, FireWire can).
Okay, okay, you win! You're right and I'm wrong! I will never consider IEEE-1394 again. You and the other posters have convinced me. Every USB cable in my server room will be removed and destroyed. Every USB port will be filled with glue. :)
- Backup over NIC to a "near-line" device with the tape drive
What about LVM snapshots? I've been using these for a while with no problems. Yes, I understand that it will consume disk bandwidth, but that's not a limitation in my installation.
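For completeness, the snapshot approach fits the near-line pattern neatly: snapshot, back up the frozen view, drop the snapshot. A sketch, where the volume group, LV names, snapshot size, and destination host are all assumptions:

```shell
# Hypothetical snapshot-backed backup. vg0/data, the 2G snapshot
# reserve, /mnt/snap and the "nearline" host are all assumed names.
lvcreate -L 2G -s -n data-snap /dev/vg0/data   # point-in-time snapshot
mount -o ro /dev/vg0/data-snap /mnt/snap       # mount the frozen view
rsync -aH /mnt/snap/ nearline:/backup/data/    # or tar it straight to tape
umount /mnt/snap
lvremove -f /dev/vg0/data-snap                 # drop the snapshot when done
```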