Interesting comments on RAID for desktops
Storagereview.com has long held this position.
http://www.anandtech.com/storage/showdoc.html?i=2101&p=1
Interesting to see the concept finally spreading from the hardcore disk guys to the mainstream.
Tex
Comments
I guess the main reason I'm very supportive of RAID-0 is that I've never experienced any reliability problems or issues. I also once upon a time ran a RAID-0 setup with two Deathstars (75GXPs), and my array never corrupted, even when one of the drives started to fail.
I can, however, understand why folks who haven't had as much luck with RAID-0 feel it's a bit of a waste of time because of the reliability issues.
Nevertheless, I love my array; it's here to stay.
Something that costs extra money simply to ensure that I don't lose my data is not something I want to invest in.
In truth, benchmarks are benchmarks. Real-world performance... I'd be inclined to agree with AnandTech's findings. I've got two Raptors in RAID-0; they are blisteringly fast, but do I see REAL-world performance? Umm, no.
For all those complaining about losing a drive and so on... BACKUP... BACKUP.
At the risk of sounding like "prime::memtest", I'll repeat.
Acronis True Image Nightly Incremental Backup.
Nuff' said.
gobbles
That solves the redundancy. It doesn't begin to address the fact that 90 percent of desktop users using the normal onboard IDE/SATA RAID get very little real-world performance benefit with two drives in RAID-0 versus two separate drives with the I/O divided up intelligently between them.
Doing normal desktop user functions, you can't tell which of my four systems has the two 8MB-cache 120GB Maxtors in RAID-0 versus the two separate drives. I have a monster SCSI hardware RAID-0 with six very fast SCSI drives and one of the fastest U320 hardware RAID controllers made today, with half a GB of PC3200 DDR cache, in a PCI-X slot in a dual Opteron, and it's awesome. But for 90 percent of what a desktop user does, the fastest computer I have is an AMD64 with a 32-bit bus and three fast SCSI drives, non-RAIDed, with the I/O separated across the drives as much as possible. And I paid less for the three U320 SCSI drives and controller, used, than most pay for a pair of new Raptors, and it's TONS faster.
The difference between RAID-0 with my true hardware SCSI RAID controller and 6 drives and a normal IDE/SATA RAID controller is like comparing the performance of a stealth fighter and a paper airplane. It's another world. I keep my stuff backed up religiously, and the SCSI drives are so much more reliable than IDE that it's also not comparable. I could argue THAT level of performance is worth the risk. Access times under 2ms and STR above 300,000 are breathtaking. The real-world performance difference achieved with a low-end IDE/SATA RAID controller, based on what the majority of desktop users do with a computer, is marginal at best.
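For example, a typical manual split on two drives: OS and applications on drive 1; pagefile, temp directories, game installs, and working data on drive 2. That way an app launch reading drive 1 isn't queued up behind pagefile and temp writes hitting the same spindle.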
The article was not just about the danger, but the simple fact that for the majority of users, given the limited real-world improvements, it's simply not worth the risk.
Tex
Same here. OS and programs on one drive (or set of drives, in the case of a RAID setup) and data on separate drives, or at least separate partitions. I have always done separate partitions since I read MM's article way back when. It has made a reinstall (did one this morning) a heck of a lot easier. I always keep a "New Install" folder with the programs and drivers I need to install.
I would think you could see some performance increase in opening apps, especially after the OS install gets a bit older. Things slow down as the drive gets more cluttered. It might be a small gain, but games and other large programs will load faster with a RAID-0 setup.
With a new SATA drive, different game.
What would the results be with 7200rpm 2MB-cache drives?
I might run RAID-5 with four drives, if it were native to the mobo and I needed that much space.
LMAO. The b.s. just continues to flow. Time for another reality check. And just to let you know... with RAID-5 without a high-end caching RAID controller (and I have many years of experience in this), the writes hit 5MB/s or so PEAKED. So your writes would be equal to an IDE drive made TEN YEARS ago. Can you say Slooooooooooooooow?
Here is an ATTO run for you. And this isn't with the little "play" IDE controllers on motherboards; these are ATTO results from an Elite 1600 with 6 drives. It hit 260,000 on the reads and 220,000 on the writes in RAID-0 with the same 6 drives with the controller's cache disabled. This is six fast SCSI drives with 4ms access and 8MB cache that smoke IDE or SATA drives.
This is the wonder of RAID-5. The writes SUCK because for each and every small write the controller has to read back existing data and parity from the drives to recompute the parity block, so no write is ever a simple sequential write.
Can you see the problem here?
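To put rough numbers on it, here's the textbook small-write penalty (a minimal Python sketch with made-up per-drive IOPS; a big controller cache hides much of this, which is exactly the point):

def raid5_write_iops(n_drives, iops_per_drive):
    # Each small RAID-5 write costs ~4 I/Os:
    # read old data + read old parity + write new data + write new parity.
    return n_drives * iops_per_drive / 4.0

def raid0_write_iops(n_drives, iops_per_drive):
    # RAID-0 has no parity, so each write costs a single I/O.
    return n_drives * iops_per_drive

print(raid5_write_iops(6, 150))  # ~225 effective random-write IOPS
print(raid0_write_iops(6, 150))  # ~900 for the same six drives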
This would be sorta hilarious if it wasn't so sad, really. The nonsense spread as fact on these matters is just utterly stunning.
Tex
what in the crap?!
Tex = 0wN
With 512MB of PC3200 DDR and a really fast CPU onboard my LSI 320-2X and 6 drives, I can hit decent RAID-5 scores on my dual Opteron with PCI-X. Not awesome, but respectable anyway.
But normal IDE, SATA, or SCSI drives in a software-based RAID-5 (and that's 97 percent of RAID-5-capable IDE/SATA systems made today) are gonna score along these lines.
I tune high-performance disk subsystems for a living. I have done this for 20 years. I tune servers for database systems. I am a benchmark whore, and even benchmarks do not always accurately reflect real-life performance.
There are some in this forum who can perhaps get as much out of a SATA/IDE RAID-0 Windows disk subsystem by tweaking and tuning as I can, but I would argue that few if any can get more than I can. And after hundreds and hundreds of system setups and extensive testing, I am telling you that the benefit of RAID-0 for normal desktop users is greatly exaggerated compared to simply using the same two or four drives in a non-RAIDed environment and dividing the I/O up among the drives manually.
STR alone is not the most important consideration in 95 percent of the desktop I/O for normal users. You gain NOTHING in access time, and RAID-0 is in fact slower for many small-file accesses. You gain only on long SEQUENTIAL data access, and the drive must be continually defragged to see even that gain with IDE/SATA.
With my SCSI drives and that Elite 1600 64-bit controller, for example, it hit over 170,000 on Sandra and showed a 2ms access time even with the onboard cache disabled in RAID-0... with EIGHT SCSI drives. This doesn't happen on normal IDE/SATA RAID controllers. The access time at BEST stays constant, and usually goes up, not down, even with three or four drives. Only STR at the high end goes up. The striping also bogs down the CPU, as it is handled by your CPU, not by the disk controller as in my hardware SCSI RAID controllers.
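A crude model of why access time dominates (Python, with illustrative numbers rather than measurements; the real Windows I/O path is messier than this):

def load_time_s(n_files, file_kb, access_ms, str_mb_s):
    # Each small file costs one seek plus its transfer time.
    seek = access_ms / 1000.0
    xfer = (file_kb / 1024.0) / str_mb_s
    return n_files * (seek + xfer)

# 500 small 64KB files: a single drive (~13ms access, ~55MB/s STR)
# versus a two-drive RAID-0 (same ~13ms access, ~110MB/s STR).
print("%.1f s" % load_time_s(500, 64, 13, 55))   # ~7.1 s single drive
print("%.1f s" % load_time_s(500, 64, 13, 110))  # ~6.8 s in RAID-0

Double the sequential rate, and the seek-bound workload improves by about 4 percent.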
IDE/SATA RAID-0 performance gain is greatly exaggerated for NORMAL desktop use, at least when compared to PROPERLY setting up the identical drives in a non-RAIDed environment by someone competent.
There were several key points made there; you pick them out.
Tex
"I think that we need to divide the "RAID fans," those who know that they're throwing a lot of money at a small gain and do it anyway; from the "RAID idiots," who think that they will or have gotten some kind of incredible increase in performance. It's the overrating of RAID, expecially RAID 0, which is ignorant."
The whole thread is here for those inclined to more discussion.
http://forums.storagereview.net/index.php?showtopic=15912
I strongly feel that the forums at StorageReview are, in general, the best around on the web for disk and disk-subsystem matters. I don't agree with everything they say, just like many here don't agree with what I say either. That's fine; the forums are for airing opposing intelligent viewpoints. But at least there it's guys who are very experienced in these matters, as compared to some guy posting with little overall real-life experience, and you should at least pause and consider their point of view a little more strongly based on their background.
Cheers All
Tex
http://www.anandtech.com/storage/showdoc.html?i=2101
"Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance. That's just the cold hard truth."
It's "perceived" to be faster. A great read on perception and the trouble with using it.
http://www.cia.gov/csi/books/19104/art5.html
Tex
I had always thought that using the IDE bus was a limiting factor.
Now I see more of the truth.
My only experience with raid was reading large files. It seemed to help a lot.
Now I will just forget about it.
thanks
I don't game, so I don't know. I would bet ya that having the stuff on a couple of different SCSI drives would be faster. I can buy two 36GB fast SCSI drives for the price of a Raptor. The beauty of SCSI is that it can issue reads and writes to several different drives without waiting for a result; SCSI is basically a bus of its own.
For you it would be critical to tune the cache on the Windows disk subsystem itself so that as much data as possible was cached.
Or, even better, copying all the maps to a RAM disk would be even faster.
I can tell the difference when I transfer huge files around, getting them off the IDE RAID versus a single IDE drive, if it's defragged and I'm not using the IDE RAID drive for anything else. But when I copy big files from the IDE RAID and try to use that machine, it's sluggish as hell, while the same thing hardly impacts the SCSI and SCSI RAID systems; you really cannot even feel that I am copying 20GB in the background. But if you do one thing at a time, especially reading long data streams like your maps, you might be able to tell an advantage! And in your scenario that's all that's important, and it's enough of an advantage to make it worthwhile. It would be interesting to see you add a second SCSI controller and a pair of SCSI drives and see if the gain wasn't far greater, without the added risk.
The point is that RAID-0 with two drives, for most users, bears only a slight advantage, if any, over the same two drives where you manually balance the I/O across them. Most times the RAID-0 pair does not show any advantage over the SAME TWO DRIVES when you set up the data to balance the I/O. To really compare, you still gotta compare a properly set-up two-drive system against a two-drive RAID-0 system.
Good Luck !
Tex
If you want to see whether the cache is tuned right, defrag your disk and run Sandra, but make sure the options for the disk bench are set to BYPASS the Windows disk cache. This is a baseline to use for comparison, as we rarely ever do anything normally in Windows that bypasses the disk cache. When it completes, run the same test again with that option turned off, so that you test with the Windows cache.
Normally Windows users are shocked and dismayed when they run this, as they test much better bypassing the disk cache than using it. Sometimes horribly slower. Unfortunately, in real life almost everything uses the disk cache...
See where real life and benchmarks often blur the truth? This is called tuning the disk subsystem. Real-life results are the product of many different functions working together, which is why I contend that if you really know how all the pieces work together and tune the whole Windows disk subsystem, you can actually get as good or better real-life performance from TWO separate non-RAIDed drives for 95 percent of desktop users.
If the cache is tuned right, you would hope to score at least 50 percent or more HIGHER using the cache. That's what the cache is for... But the default disk cache is sized wrong for most users with a reasonable amount of RAM. The more RAM, the more you can cache, obviously.
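If you want to see the cache effect yourself without Sandra, here's a quick-and-dirty illustration (Python, with a hypothetical file path; the first pass is mostly disk, the second mostly the Windows cache, assuming the file fits in RAM and wasn't read recently):

import time

def timed_read(path, chunk=1024 * 1024):
    # Sequentially read the whole file and report throughput.
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    secs = time.time() - start
    print("%.0f MB in %.2fs = %.0f MB/s" % (total / 1e6, secs, total / 1e6 / secs))

timed_read("C:/testfile.bin")  # cold-ish: mostly hits the disk
timed_read("C:/testfile.bin")  # warm: mostly served from the Windows cache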
I still think that if you could separate your maps from the other data and make a RAM disk, you would be better off. Or use a virtual CD-ROM utility, maybe, or something else.
Cheers and best of luck!
Tex
Paths=../System/*.u
Paths=../Maps/*.ut2
Paths=../Textures/*.utx
Paths=../Sounds/*.uax
Paths=../Music/*.umx
Paths=../StaticMeshes/*.usx
Paths=../Animations/*.ukx
Paths=../Saves/*.uvx
I'm going to back this up, point it at drive H: (my backup drive), and see what happens.
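Presumably something like this (untested; assuming the engine accepts absolute paths here and the files have actually been copied over):

Paths=H:\UT2004\Maps\*.ut2
Paths=H:\UT2004\Textures\*.utx

...leaving the other Paths= lines pointed at the install. The same trick should work for a RAM disk's drive letter.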
Tex
RAID 5 with multiple SCSI disks in a server storage rack = YES YES YES
RAID 0 with 2 IDE/SATA disks = waste o' money
We built RAID 1+1 years ago for reliability and redundancy on the OS, and RAID 5 for redundancy on the storage. But the ONLY reason we did it on data servers was for the REDUNDANCY, not for performance.
RAID 1+1 was 4 x 4GB HDDs: two hardware-mirrored on the RAID controller, then the two mirrors mirrored again in the OS/software. RAID 5 was on the RAID controller with a minimum of 5 disks. It worked efficiently, and with hot-swap drives and good monitoring the systems were up 100% of the time except for maintenance.
Now, this was great in a server environment with budgets bigger than some third-world countries' GDP. Relate this technology to us mortals and it becomes OTT. We gain almost nothing from RAID 0, and especially when small data is transferred it can make things worse. Put the extra cash into a better graphics card or more RAM!
The only useful RAID us mortals should consider is RAID 1, for the people who are lazy and don't back up often. I, like most on here it seems, have a separate drive for my stored data. That gets backed up when I remember. Now seems like a good time...
Marcus
That would depend entirely on what type of server this was. A database server would choke itself to death on the write I/O; a horrible solution for that problem.
A server used to share word-processing files and the like, with an appropriate number of users, is OK; there's not enough heavy I/O hitting it at the same time to kill it. It's just a nuisance the users will whine about. I am not a fan of RAID-5 due to the horrible writes, as I deal with SQL Server and Oracle database systems, but... for the price of current 36GB U320 drives, just buy a couple more and go RAID-10 if you were considering RAID-5.
RAID-10 with bigger modern SCSI drives is getting more attractive every day with the drop in price of larger SCSI drives. But even then, there are things I don't want on a RAID array with Oracle at all unless it's RAID-1.
Tex
Very true... I mentioned later in the post about data servers, as in central file storage. However, I built these a couple of years ago; I would perhaps look at it a different way now. For an Oracle or SAP server the config would be a whole lot different.
This is where we go off topic... fibre, clustering, etc. RAID 10 (1+0) is great for high redundancy and throughput, perfect for database servers, but at a high premium: you get only 50% of the raw disk capacity, making it an expensive option. However, in the areas where this is needed, the budget is often appropriately large...
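For example: eight 36GB drives are 288GB raw, but RAID 10 leaves only 4 x 36 = 144GB usable, where RAID-5 on the same drives would leave 7 x 36 = 252GB.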
Marcus
On the downside, I've only got 70-something gigs of storage space on this PC. If I could do it again, I probably would have skipped the Raptors and gone for some large-capacity Maxtors or something along those lines. I'm really not very concerned about reliability; I've got all of my important data backed up.
With drive prices being as low as they are, and with onboard RAID 0/1 on almost every motherboard nowadays, I'd say "why not?". If you are worried about reliability, take RAID-1 out for a spin. Although I don't think the improvement RAID-0 offers is tremendous, a quick file system has a nice feel to it.
If cost and system simplicity/reliability are the primary objectives, a quick single drive would probably be a better solution.
Tex