RAID is basically using multiple drives in such a way that they appear as a single drive.
That's the very basic premise of it.
In execution, there are many varieties of arrays (an array being multiple drives grouped together to appear as one). I'll give you a super-simplified brief rundown:
RAID 0 - Multiple drives "striped" together so that a piece of data is split in half, with one half written to one drive and the other half written to the other. This provides a great performance increase, but if any one of the drives dies, the whole array is gone. Performance at the cost of safety.
RAID 1 - Multiple drives "mirrored," so that all the data written to one drive is copied to the other. This provides full redundancy, but at the cost of ... well, money. You need twice as many disks as the capacity you want - so if you wanted 120GB, you'd need two 120GB drives, etc. Expensive, little or no performance increase, but very safe.
RAID 1+0 - Requires at least four drives. Drives A+B are mirrored into one RAID 1 pair, drives C+D are mirrored into another, and then the two mirrored pairs are striped together. (Nest it the other way around - two stripes mirrored to each other - and you've got RAID 0+1.) Provides the best of both worlds (1 and 0) but is expensive because you need a lot of drives.
RAID 5 - Striping + parity. Complicated. It involves at least three drives; data is striped across all of them along with "parity" information, so with three drives you get roughly two drives' worth of usable space. If any one drive fails, the missing data can be rebuilt from the parity, so you still have your data. Writes are slower (the parity has to be calculated) but reads are fast. This is what's typically used for servers. Someone else (like Tex) can explain it better - there's a rough sketch of striping and parity right after this list.
JBOD - "Just a Bunch Of Drives" - exactly what it sounds like. You take multiple drives and jam them all together as one big drive. This is pretty much useless.
Hey prime ... and dj - sorry for crappin' here ...
I'm pretty new to RAID, too. So, would it be possible to do a 4-drive RAID? As in RAID 0 type? Like splitting the info in half? Could you do, say ... two 2-drive RAID 0's and then run those into another RAID card and RAID 0 the RAID 0's? I sorta doubt it, but how would it work, if it's possible?
Straight_Man (Geeky, in my own way) - Naples, FL - Icrontian
edited October 2004
Only thing about JBOD is you can use different-size drives and have little waste - the volume isn't limited to the size of a single drive. In XP this is also what you get with dynamic volumes (soft RAID) when actual RAID isn't in use. It's a pseudo RAID 0 only as far as SIZE goes: what shows up as drive letter C equals the usable space of the drives added together (and the drives can be unequal sizes), but striping typically is NOT used. Use it only where you need more volume and can't afford two or four like-size drives.
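If it helps to picture the spanning, here's a tiny Python sketch of the idea - just concatenation of unequal drives into one logical volume, no striping. (Illustration only; the sizes are made up and no real controller works byte-for-byte like this.)

```python
# Toy JBOD/spanning: unequal drives concatenated into one logical volume.
# A logical address just falls into whichever drive's range it lands in.

drives = [("disk0", 120), ("disk1", 80), ("disk2", 40)]  # sizes in GB (made up)

def locate(logical_gb: float):
    """Map an offset on the spanned volume to (drive, offset on that drive)."""
    offset = logical_gb
    for name, size in drives:
        if offset < size:
            return name, offset
        offset -= size
    raise ValueError("address past end of volume")

print(sum(size for _, size in drives))  # 240 GB total, little waste
print(locate(150))                      # ('disk1', 30) - second drive, 30 GB in
```

Total space is just the sum of the drives, which is the whole point - but there's no redundancy, so one drive dying takes data with it.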
Kinda like POTS (plain old telephone service, i.e. old-fashioned dialup without any DSL speed increase) is to cable broadband or high-speed DSL for an end user, JBOD is to RAID. If I use multiple drives (and I do) I use static volumes or RAID - as Prime says, you get no safety and no real performance benefit with JBOD.
You actually get better performance than soft RAID if you have two mechs and put your swap/virtual memory file on one and the boot stuff and core XP OS on the other. JBOD or soft RAID gives you a performance HIT, because XP is trying to track all the stuff on two physical drives as one logical volume. With hardware RAID you can get better speed than with JBOD or soft RAID, if you have a really fast RAID card or a GOOD embedded SATA RAID on the motherboard. IDE RAID is cheaper, AND slower. For right now I in fact do this kind of thing with data:
IDE mirroring on one box, imaging for backup on the other. WORK data is NOT on the same mech as the OS core or the programs that generate large volumes of data or save often. The swap file/virtual memory space is NOT on the same mech as the XP OS core. One way to give Linux or BSD room to grow is to spread its partitions over multiple drives. There are tricks you can do with multiple HD mechs in your total drive set on a box that don't need dynamic volumes or RAID at all, and for a normal user with a small volume of data production it really isn't needed.
Criteria for implementing RAID include how critical the data is - i.e., do you HAVE to have it recoverable? Second, are you writing so large a volume of data that you HAVE to have speed just to get it done? (Movie or game dev, or high-volume, high-quality digital picture editing or morphing, can eat fairly large chunks of HD space and write time commensurate with the volume and the net effective speed of the computer's storage system as a whole.) If you have a lot of this data, think about building a SERVER box and getting a router/switch and/or hub and handing the storage off to another box entirely, or getting a VERY fast workstation and mirroring only what you HAVE to have recoverable. If you have a busy server that has to be up constantly, think about pairing servers and having them mirror each other, and/or using a data storage/backup server or NAS setup on a LAN or WLAN. Lots of good switches can also act as hubs for some ports, and in fact can act as multiple hubs, with one set of ports being one logical hub's worth of connections and another set being another. SOME routers, the better ones, can do limited switch duty too - these are essentially hybrid boxes in their own right, with a CPU, RAM, an OS, and configurability.
For an end user I would use non-RAID; for a production box or workstation I would consider building a backup server or a RAID 5 setup. One of the advantages of a really good RAID card or embedded setup is that the RAID hardware and firmware do most of the work, and XP simply hands off the mirroring side and goes about its business. One thing to seriously consider with embedded RAID is lots of RAM - that's one reason servers have lots of RAM, so Server 2003 or the Linux RAID handlers can buffer writes.
One other thing about RAID (soft and hardware based): it likes to be on a UPS. If the power dies while a bunch of stuff is pending or being written, you can get data loss - writes that never got written or never got journaled - and that makes for messes where you know you worked on a file, but the only version you can find is the old one from before you worked on it. Or worse, if you get a fubarred journal or $MFT you can lose large chunks of data. I bought UPSs before I implemented ANY kind of RAID, and I recommend that. If you buy an oversized UPS compared to the needs of what you run, it will run on battery longer when line power is gone.
Hey prime ... and dj - sorry for crappin' here ...
I'm pretty new to RAID, too. So, would it be possible to do a 4-drive RAID? As in RAID 0 type? Like splitting the info in half? Could you do, say ... two 2-drive RAID 0's and then run those into another RAID card and RAID 0 the RAID 0's? I sorta doubt it, but how would it work, if it's possible?
Sure. You're talking about using two different hardware controllers, right?
I'll assume that's the case.
Here's what you need to do: there needs to be a fifth drive that your OS is going to be on. Unfortunately, there's no way to do what you want to do and have it be a boot volume (Windows can't boot from one of its own software striped volumes).
But say you have a 5th (or 1st, depending on how you look at it) drive with Windows on it, and you have two two-channel RAID controllers. For argument's sake, we'll say one is an HPT controller and the other is a Promise.
You make a RAID-0 on each.
Then you go into Disk Management (in Windows) and convert each array to a dynamic disk. Then you tell Windows to create a striped volume across the two.
Now you have a RAID 0 built from two hardware RAID 0s. Crazy, but true.
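Just to visualize what ends up happening there, here's a toy Python sketch of a stripe built on top of two stripes. The chunk numbering is made up; the point is only that every logical chunk lands on one of the four physical drives, two per controller:

```python
# Toy nested striping: a software stripe across two hardware RAID 0 arrays,
# each of which stripes across two physical drives. Chunk size is arbitrary.

def raid0(members):
    """Return a function mapping a chunk number to a member (round-robin)."""
    return lambda chunk: members[chunk % len(members)]

array_a = raid0(["drive1", "drive2"])     # hardware RAID 0 on controller A
array_b = raid0(["drive3", "drive4"])     # hardware RAID 0 on controller B
software = raid0([array_a, array_b])      # Windows dynamic-disk stripe on top

for chunk in range(8):
    inner = software(chunk)               # which hardware array gets this chunk
    drive = inner(chunk // 2)             # which physical drive inside that array
    print(f"logical chunk {chunk} -> {drive}")
```

Every write gets split first between the two controllers by Windows, and then split again between the two drives on each controller by the hardware.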
I sorta figured it would be a hack-job. But this is what I mean: to set up a RAID 0, I assume you just plug the cables into the card, then plug another cable from the card to the board. Obviously you have to set these up and such. But what if you had 3 cards? Couldn't you just take the card-to-board cable and run it into a third card, then run the third card to the board? Or ... am I totally confused as to how a RAID card works (probably am, I've never really looked at a RAID card before)?
A RAID card is just like any other PCI card: it plugs into a PCI slot, and you plug the drives directly into the card - there's no separate cable from the card to the board beyond the slot itself.
Some motherboards have RAID controllers built into them, so you don't even need a RAID card.
On a standard IDE RAID card you usually have two ports, for a total of four drives possible (two per channel). On a motherboard SATA RAID controller you generally have two ports, sometimes four. You can buy high-end RAID controllers that have 8 ports, etc., and you can run two high-end 8-port cards in concert for a total of 16 potential drives. There are controllers (for SCSI and such) that go even higher than that.