Interesting comments on RAID for desktops

Tex Dallas/Ft. Worth
edited July 2004 in Hardware
Storagereview.com has long held this position.

http://www.anandtech.com/storage/showdoc.html?i=2101&p=1

Interesting to see the concept finally spreading from the hardcore disk guys to the mainstream.

Tex

Comments

  • edited July 2004
    That's a good read, Tex. I came to the same basic conclusion about RAID 0 a couple of years ago, that it wasn't worth it on a desktop machine due to the decrease in reliability, right after my first IBM Deathstar went tits up and wasted everything on the array. When I reinstalled my OS and everything else on the remaining drive, I didn't notice any decrease in performance that I could actually feel.
  • Spinner Birmingham, UK
    edited July 2004
    Well, as we've discussed before Tex ;), I disagree with their conclusion, at least for the most part. But I can't deny they brought some very good points to light. A very good read indeed, and a worthwhile peruse for anyone considering making the move to or from a RAID 0 setup.

    I guess the main reason I'm very supportive of RAID 0 is that I've never experienced any reliability problems or issues. Once upon a time I also ran a RAID 0 setup with two Deathstars (75GXPs), but my array never corrupted, even when one of the drives started to fail.

    I can however understand why folk who haven't had as much luck with RAID 0 feel it's a bit of a waste of time because of the reliability issues.

    Nevertheless, I love my array, it's here to stay. :)
  • Thrax 🐌 Austin, TX Icrontian
    edited July 2004
    I have no plans to ever migrate to RAID0 until 100% of the reliability issues are eliminated. I don't feel particularly inclined to adopt a scenario where my chance of losing data is not only 100% larger, but the volume of data I'm slated to lose is also considerably larger.
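    To put a rough number on that doubled exposure: with independent drives, a stripe set dies if any member dies. A minimal sketch of the math (the 5% annual failure rate is a made-up illustration, not a real drive spec):

```python
# Probability that an n-drive RAID-0 array loses data, assuming
# independent drive failures with per-drive failure probability p.
def array_failure_prob(p: float, n: int) -> float:
    """The array is lost if ANY of its n drives fails."""
    return 1 - (1 - p) ** n

p = 0.05                                     # hypothetical per-drive rate
single = array_failure_prob(p, 1)            # 0.05
two_drive_stripe = array_failure_prob(p, 2)  # 0.0975, nearly double
```

    And the odds are only half of it: when a stripe set fails, it takes both drives' worth of data with it.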
  • mmonnin Centreville, VA
    edited July 2004
    Put the OS and programs on the RAID 0; they can be replaced. So what if you lose your OS drive? You can always reinstall it. The OS and program files are what need the performance the most for desktop users (not audio or video editing members). Data and other stuff go on regular IDE drives and are backed up. No need to worry about a RAID failure.
  • Thrax 🐌 Austin, TX Icrontian
    edited July 2004
    And that's the second matter.

    Something that costs extra money simply to assure that I don't lose my data is something I don't want to invest in.
  • Shorty Manchester, UK Icrontian
    edited July 2004
    Makes an interesting read :)

    In truth, benchmarks are benchmarks. Real world performance... I'd be inclined to agree with Anandtech's findings. I've got two Raptors in RAID 0; they are blisteringly fast, but do I see REAL world performance? Umm, no.

    For all those complaining about losing a drive and so on.. BACKUP... BACKUP.

    At the risk of sounding "prime::memtest", I'll repeat.

    Acronis True Image Nightly Incremental Backup.

    Nuff' said.
  • Spinner Birmingham, UK
    edited July 2004
    mmonnin wrote:
    Put the OS and programs on the RAID 0; they can be replaced. So what if you lose your OS drive? You can always reinstall it. The OS and program files are what need the performance the most for desktop users (not audio or video editing members). Data and other stuff go on regular IDE drives and are backed up. No need to worry about a RAID failure.
    That's exactly what I do. But I only got the extra drive because my RAID 0 array (built from two 36GB Raptors) just wasn't big enough for storing files other than the OS and programs anyway. Besides, I always like to separate the OS from my personal files as a rule; like you said, it makes the task of doing a re-install that much easier. So it works great for me. :thumbsup:
  • Gobbles Ventura California
    edited July 2004
    RAID 0+1, problem solved...

    gobbles
  • Tex Dallas/Ft. Worth
    edited July 2004
    Gobbles wrote:
    RAID 0+1, problem solved...

    gobbles

    That solves the redundancy. It doesn't begin to address the fact that 90 percent of the desktop users using normal onboard IDE/SATA RAID get very little real-world performance benefit from two drives in RAID 0 versus two separate drives with the I/O divided intelligently between them.

    Doing normal desktop functions, you can't tell which of my four systems has the two 8MB-cache 120GB Maxtors in RAID 0 versus the two separate drives. I have a monster SCSI hardware RAID 0 with six very fast SCSI drives and one of the fastest U320 hardware RAID controllers made today, with half a GB of PC3200 DDR cache, in a PCI-X slot in a dual Opteron, and it's awesome. But for 90 percent of what a desktop user does, the fastest computer I have is an AMD64 with a 32-bit bus and three fast SCSI drives, non-raided, but with the I/O separated across the drives as much as possible. And I paid less for the three U320 SCSI drives and controller used than most pay for a pair of new Raptors, and it's TONS faster.

    The difference between RAID 0 with my true hardware SCSI RAID controller and 6 drives and a normal IDE/SATA RAID controller is like comparing the performance of a stealth fighter and a paper airplane. It's another world. I keep my stuff backed up religiously, and the SCSIs are so much more reliable than IDE it's also not comparable. I could argue THAT level of performance is worth the risk. Access times under 2ms and STR above 300,000 are breathtaking. The difference in real-world performance achieved with a low-end IDE/SATA RAID controller, based on what the majority of desktop users do with a computer, is marginal at best.

    The article was not just about the danger, but the simple fact that for the majority of users, based on the limited real-world improvements, it's simply not worth the risk.

    Tex
  • mmonnin Centreville, VA
    edited July 2004
    Spinner wrote:
    That's exactly what I do. But I only got the extra drive because my RAID 0 array (built from two 36GB Raptors) just wasn't big enough for storing files other than the OS and programs anyway. Besides, I always like to separate the OS from my personal files as a rule; like you said, it makes the task of doing a re-install that much easier. So it works great for me. :thumbsup:

    Same here. OS and programs on one drive (or set of drives, in the case of a RAID setup) and data on separate drives, or at least separate partitions. I have always done separate partitions since I read MM's article way back when. It has made a reinstall (did one this morning) a heck of a lot easier. I always keep a "New Install" folder with the programs and drivers I need to install.

    I would think you could see some performance increase in opening apps, especially after the OS install gets a bit older. Things slow down as the drive gets more cluttered. It might be a small gain, but games and other large programs will load faster with a RAID 0 setup.
  • edcentric near Milwaukee, Wisconsin Icrontian
    edited July 2004
    With slower drives and small caches, R0 made a big improvement.
    With a new SATA drive, it's a different game.
    What would the results be with 7200rpm 2MB drives?
    I might run R5 with four drives, if it was native to the mobo and I needed that much space.
  • Tex Dallas/Ft. Worth
    edited July 2004
    edcentric wrote:
    With slower drives and small caches, R0 made a big improvement.
    With a new SATA drive, it's a different game.
    What would the results be with 7200rpm 2MB drives?
    I might run R5 with four drives, if it was native to the mobo and I needed that much space.

    LMAO. The b.s. just continues to flow. Time for another reality check. And just to let you know..... with RAID 5 without a high-end caching RAID controller (and I have many years of experience in this), the writes hit 5MB/s or so PEAKED. So your writes would be equal to an IDE drive made TEN YEARS ago. Can you say Sloooooooooooooooooooooooooooooooooooowwwwwwww?

    Here is an ATTO for you. And this isn't from the little "play" IDE controllers on motherboards; these are ATTOs from an Elite 1600 with 6 drives. And it hit 260,000 on the reads and 220,000 on the writes in RAID 0 with the same 6 drives with the controller's cache disabled. This is six fast SCSI drives with 4ms access and 8MB cache that smoke IDE or SATA drives.

    This is the wonder of RAID 5: the writes suck, because for each and every write the controller has to read back old data and parity from the drives to compute the new parity before it can write, so no write is ever purely sequential.
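    The read-modify-write penalty can be sketched in a few lines. This is a toy model of the XOR parity math for a single-block update, not any particular controller: one logical write costs two reads plus two writes.

```python
# Toy model of the RAID-5 "small write" penalty: updating one data
# block means reading old data + old parity, XORing, writing both back.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data: bytes, old_parity: bytes,
                      new_data: bytes) -> bytes:
    # new_parity = old_parity XOR old_data XOR new_data
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)

# Parity of a stripe is the XOR of its data blocks:
d0, d1, d2 = b"\x0f", b"\xf0", b"\x33"
parity = xor_blocks(xor_blocks(d0, d1), d2)

# Updating d1 via the shortcut matches recomputing parity from scratch:
new_d1 = b"\xaa"
assert raid5_small_write(d1, parity, new_d1) == \
       xor_blocks(xor_blocks(d0, new_d1), d2)
```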

    Can you see the problem here?

    This would be sort of hilarious if it wasn't so sad, really. The nonsense spread as fact on these matters is just utterly stunning.

    Tex
  • TheBaron Austin, TX
    edited July 2004
    *double takes*
    what in the crap?!

    Tex = 0wN
  • Tex Dallas/Ft. Worth
    edited July 2004
    Isn't raid-5 awesome? WOW !

    With 512MB of PC3200 DDR and a really fast CPU onboard my LSI 320-2X, with 6 drives I can hit decent scores with RAID 5 on my dual Opteron with PCI-X. Not awesome, but respectable anyway.

    But normal IDE, SATA or SCSI drives in a software-based RAID 5, and that's 97 percent of the RAID 5 capable IDE/SATA systems made today, are gonna score along these lines.

    I tune high-performance disk subsystems for a living. I have done this for 20 years. I tune servers for database systems. I am a benchmark whore, and benchmarks do not always accurately reflect real-life performance either.

    There are some in this forum who can perhaps get as much out of a SATA/IDE RAID 0 Windows disk subsystem by tweaking and tuning as I can, but I would argue that few if any can get more than I can, and after hundreds and hundreds of system setups and extensive testing, I am telling you that the benefit of RAID 0 for normal desktop users is greatly exaggerated compared to simply using the same two or four drives in a non-raided environment and dividing the I/O up among the drives manually.

    STR alone is not the most important consideration in 95 percent of the desktop I/O for normal users. You gain NOTHING in access time, and it is in fact slower for many small-file accesses. You gain only on long SEQUENTIAL data access, and the drive must be continually defragged to see even that gain with IDE/SATA.

    With my SCSI drives and that Elite 1600 64-bit controller, for example, it hit over 170,000 on Sandra and showed a 2ms access time even with the onboard cache disabled in RAID 0... with EIGHT SCSI drives. This doesn't happen on normal IDE/SATA RAID controllers. The access time at BEST stays constant, and usually goes up, not down, even with three or four drives. Only STR at the high end goes up. The striping also bogs down the CPU, as it is handled by your CPU, not by the disk controller as in my hardware SCSI RAID controllers.

    IDE/SATA RAID 0 performance gain is greatly exaggerated for NORMAL desktop use, at least when compared to PROPERLY setting up the identical drives in a non-raided environment by someone competent.

    There were several key points made there; you pick them out.

    Tex
  • edited July 2004
    Tex, here is another article over at Overclockers.com talking about Anand's article, which tries to simplify the subject down to where everyone can understand what's happening. I'm glad I came back to this thread; I learned more about RAID 5 from what you posted too. :cool:
  • csimon Acadiana Icrontian
    edited July 2004
    Only my server is raided now ...nothing else. =o)
  • Tex Dallas/Ft. Worth
    edited July 2004
    I stole another guy's line in this thread from Storagereview about the death of RAID.

    "I think that we need to divide the "RAID fans," those who know that they're throwing a lot of money at a small gain and do it anyway; from the "RAID idiots," who think that they will or have gotten some kind of incredible increase in performance. It's the overrating of RAID, especially RAID 0, which is ignorant."

    The whole thread is here for those inclined to more discussion.

    http://forums.storagereview.net/index.php?showtopic=15912

    I strongly feel that the forums at Storagereview are in general the best around on the web for disk and disk-subsystem related matters. I don't agree with everything they say, just like many here don't agree with what I say either. That's fine; the forums are for airing opposing intelligent viewpoints. But at least there it's guys who are very experienced in these matters, as compared to some guy posting with little overall real-life experience, and you should at least pause and consider their point of view a little more strongly based on their background.

    Cheers All

    Tex
  • Tex Dallas/Ft. Worth
    edited July 2004
    Here's another review of RAID 0 with Raptors from Anandtech

    http://www.anandtech.com/storage/showdoc.html?i=2101

    "Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance. That's just the cold hard truth."

    It's "perceived" to be faster. A great read on perception and the trouble with relying on it:

    http://www.cia.gov/csi/books/19104/art5.html

    Tex
  • edcentric near Milwaukee, Wisconsin Icrontian
    edited July 2004
    Touché, Tex.
    I had always thought that using the IDE bus was a limiting factor.
    Now I see more of the truth.
    My only experience with RAID was reading large files. It seemed to help a lot.
    Now I will just forget about it.
    Thanks.
  • Komete Member
    edited July 2004
    Not getting into the technical side of things, but what is important to me is fast boot times, and RAID 0 for me is much faster than just booting off of one hard drive. I game a lot, and I love to get first blood in 2k4. Whenever we switch maps online, I'm always the first one moving and get that first blood. Every second counts in my world, so even if it's just a second or two here and there, and more in some places, that's more than fine for me.
  • Tex Dallas/Ft. Worth
    edited July 2004
    The question is, if you properly divided the data so the OS was on one drive and the maps on another, etc., would it pay off? You can't compare to a single drive, but rather to balancing the I/O between two drives. For example, copying huge chunks of data between two directories or partitions on the same RAID 0 array is always slower than between two separate IDE drives, because the reads and writes are no longer sequential. The heads have to move to the file to read from, and then move back to the file to write to, etc.

    I don't game, so I don't know. I would bet you that having the stuff on a couple of different SCSI drives would be faster. I can buy two 36GB fast SCSI drives for the price of a Raptor. The beauty of SCSI is it can issue reads and writes to several different drives without waiting for a result; SCSI is basically a bus of its own.

    For you it would be critical to tune the cache on the Windows disk subsystem itself, so as much data as possible was cached.

    Or even better, copying all the maps to a RAM disk would be even faster.

    I can tell the difference when I transfer huge files around, getting them off the IDE RAID versus a single IDE drive, if it's defragged and I'm not using the IDE RAID drive for anything else. But when I copy big files from the IDE RAID and try to use that machine, it's sluggish as hell, while the same load hardly impacts the SCSI and SCSI RAID systems; you can't even feel that I am copying 20GB in the background. But if you do one thing at a time, especially reading long data streams like your maps, you might be able to tell an advantage! And in your scenario that's all that's important, and it's enough of an advantage to make it worthwhile. It would be interesting to see you add a second SCSI controller and a pair of SCSI drives, and see if the gain wasn't far greater without the added risk.

    The point is that RAID 0 with two drives, for most users, bears only a slight advantage, if any, over the same two drives with the I/O manually balanced across them. To really compare, you still have to compare a properly set up two-drive system against a two-drive RAID 0 system.
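    A toy model of round-robin striping (not any particular controller's layout) shows the mechanics: a long sequential read alternates between drives, but a copy between two files on the SAME array keeps both source and destination on the same spindles, so the heads seek back and forth.

```python
# Toy RAID-0 address map: which drive and drive-local offset holds a
# given logical block, assuming simple round-robin striping.
def raid0_map(block: int, n_drives: int, stripe_blocks: int):
    stripe = block // stripe_blocks
    return (stripe % n_drives,                        # which drive
            (stripe // n_drives) * stripe_blocks
            + block % stripe_blocks)                  # offset on it

# With 2 drives and a 16-block stripe, sequential blocks alternate:
assert raid0_map(0, 2, 16) == (0, 0)    # stripe 0 -> drive 0
assert raid0_map(16, 2, 16) == (1, 0)   # stripe 1 -> drive 1
assert raid0_map(32, 2, 16) == (0, 16)  # stripe 2 -> drive 0 again
```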

    Good Luck !

    Tex
  • Komete Member
    edited July 2004
    Well Tex, this is my current setup. I have my OS and all apps on the WD Raptors in RAID 0. I have a second 20-gig IBM 65GXP. I've been wanting to put my Windows cache folder on my second drive, but so far I haven't found a tutorial showing me exactly how to do it. I completely agree with you on having my map folder etc. on the separate drive, but I'm not sure the game will work like that. I'm going to move over my UT cache/map folder etc. and see if it can work. Thanks for the ideas; you have given me a lot to think about.
  • Tex Dallas/Ft. Worth
    edited July 2004
    The "Windows cache" is an area in your RAM that Windows uses to cache data; you don't move it to another drive.

    If you want to see if the cache is tuned right, defrag your disk and run Sandra, but make sure the options for the disk bench are set to BYPASS the Windows disk cache. This is a baseline to use for comparison, as we rarely do anything in Windows that bypasses the disk cache. When it completes, run the same test again with that option turned off, so that you test with the Windows cache.

    Normally Windows users are shocked and dismayed when they run this, as they test much better bypassing the disk cache than using it. Sometimes horribly slower. Unfortunately, in real life almost everything uses the disk cache...

    See where real life and benchmarks often blur the truth? This is called tuning the disk subsystem. Real-life results are the product of many different functions, which is why I contend that if you really know how all the pieces work together and tune the whole Windows disk subsystem, you can actually get as good or better real-life performance from TWO separate non-raided drives for 95 percent of desktop users.

    If the cache is tuned right, you would hope to score at least 50 percent HIGHER using the cache; that's what the cache is for. But the default disk cache is sized wrong for most users with a reasonable amount of RAM. The more RAM, the more you can cache, obviously.
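    The expected win can be sketched with a simple two-level model (the hit rate and millisecond figures below are made-up illustrations, not measurements):

```python
# Effective access time with a RAM cache in front of a disk:
# a weighted average of cache hits and cache misses.
def effective_access_ms(hit_rate: float, cache_ms: float,
                        disk_ms: float) -> float:
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

# A well-tuned cache turns a 12 ms disk into something much faster:
no_cache = effective_access_ms(0.0, 0.05, 12.0)   # 12.0 ms
tuned    = effective_access_ms(0.9, 0.05, 12.0)   # ~1.25 ms
assert tuned < no_cache / 2   # far more than the 50% target
```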

    I still think that if you could separate your maps from the other data and make a RAM disk, you would be better off. Or use a virtual CD-ROM utility, maybe, or something else.

    Cheers and best of luck!

    tex
  • Komete Member
    edited July 2004
    Well, I tried just moving the folders over and it was a no-go, but I opened up the game's System folder, started to take a look around, and came across this:
    Paths=../System/*.u
    Paths=../Maps/*.ut2
    Paths=../Textures/*.utx
    Paths=../Sounds/*.uax
    Paths=../Music/*.umx
    Paths=../StaticMeshes/*.usx
    Paths=../Animations/*.ukx
    Paths=../Saves/*.uvx

    I'm going to back this up and see if I can make it look in drive H:, my backup drive, and see what happens.
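    For example, a hypothetical edit along those lines might look like this; whether the engine accepts an absolute drive path is exactly what the experiment will show, so treat this as untested:

```ini
; Stock entries are relative to the game's System folder:
Paths=../Maps/*.ut2
; Hypothetical replacement pointing the map search at the backup drive
; (the H:\UT2004\Maps location is just an example path):
Paths=H:\UT2004\Maps\*.ut2
```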
  • Tex Dallas/Ft. Worth
    edited July 2004
    How big is the folder? Have you checked the registry for paths to the application? Can it fit into a RAM drive? Even Google it and see if you can set an environment variable or path setting to where the maps are found.

    Tex
  • PressX Working! New
    edited July 2004
    I love this "to raid" or "not to raid" argument.

    RAID 5 with multiple SCSI disks in a server storage rack = YES YES YES
    RAID 0 with 2 IDE/SATA disks = waste o' money

    We built RAID 1+1 years ago for reliability and redundancy on the OS, and RAID 5 for redundancy on the storage. But the ONLY reason we did it on data servers was for the REDUNDANCY, and not for performance.

    RAID 1+1 was 4 x 4GB HDs: two hardware-mirrored on the RAID controller, then the two mirrors mirrored in the OS/software. RAID 5 with 5 disks minimum on the RAID controller. It worked efficiently, and with hot-swap drives and good monitoring the systems were up 100% except for maintenance.

    Now, this was great in a server environment with budgets bigger than some third-world countries' GDP. Relating this technology to us mortals, it becomes OTT. We gain almost nothing by RAID 0, especially when small data is transferred; it can make things worse. Put the extra cash into a better graphics card or more RAM!

    The only useful RAID us mortals should consider is RAID 1, for the people who are lazy and don't back up often. I, like most on here it seems, have a separate drive for my stored data. That gets backed up when I remember. Now seems like a good time...

    Marcus
  • Tex Dallas/Ft. Worth
    edited July 2004
    PressX wrote:
    RAID 5 with multiple SCSI disks in a server storage rack = YES YES YES

    That would depend entirely on what type of server this was. A database server would choke itself to death on the write I/O; a horrible solution for that problem.

    A server used to share word-processing files etc. with an appropriate number of users is OK; there's not enough heavy I/O at the same time to kill it. It's just a nuisance the users will whine about. I am not a fan of RAID 5 due to the horrible writes, as I deal with SQL Server and Oracle database systems, but... for the price of current 36GB U320 drives, just buy a couple more and go RAID 10 if you were considering RAID 5.

    RAID 10 with bigger modern SCSI drives is getting more attractive every day with the drop in price of larger SCSI drives. But even then, there are things I don't want on a RAID array with Oracle at all unless it's RAID 1.

    Tex
  • PressX Working! New
    edited July 2004
    Tex wrote:
    That would depend entirely on what type of server this was.

    Very true... I mention later in the post that these were data servers, as in central file storage. However, I built these a couple of years ago; I would perhaps look at it a different way now. For an Oracle or SAP server the config would be a whole lot different.

    This is where we go off topic... fibre, clustering, etc. RAID 10 (1+0) is great for high redundancy and throughput, perfect for database servers, but at a high premium: getting only 50% of the raw disk capacity makes it an expensive option. However, in the areas where this would be needed, the budget is often appropriately large...
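    That 50% figure is just the striped-mirror arithmetic. A quick sketch of usable capacity per level (n identical drives of size s; RAID 1 here means one n-way mirror):

```python
# Usable capacity for common RAID levels, n identical drives of size s.
def usable(level: str, n: int, s: float) -> float:
    if level == "raid0":
        return n * s            # striping, no redundancy
    if level == "raid1":
        return s                # n-way mirror (commonly n == 2)
    if level == "raid5":
        return (n - 1) * s      # one drive's worth of parity
    if level == "raid10":
        return (n // 2) * s     # striped mirrors: half the raw space
    raise ValueError(level)

# Six 36 GB drives: RAID 10 yields 108 GB, RAID 5 yields 180 GB --
# the premium you pay for RAID 10's redundancy and write speed.
assert usable("raid10", 6, 36.0) == 108.0
assert usable("raid5", 6, 36.0) == 180.0
```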

    Marcus
  • lemonlime Canada Member
    edited July 2004
    I jumped on the RAID 0 bandwagon during my last PC upgrade. I picked up a pair of 36GB WD Raptors and used the onboard SiL RAID controller. IMO, there was a very noticeable filesystem speed improvement during some tasks, and little to no improvement in others. I found system startup and program load times to be improved a bit (maybe ~30% quicker). I remember the large smile on my face when I installed SP1a from my local disk as well. Working with large files was fantastic with the RAID 0; file copies were very quick. My ATTO scores are in the 100-105MB/s range at the top end, and the low end is also not bad. During most everyday desktop activities, I couldn't tell much of a difference.

    On the downside, I've only got seventy-something gigs of storage space on this PC. If I could do it again, I probably would have skipped the Raptors and gone for some large-capacity Maxtors or something along those lines. I'm really not very concerned about reliability; I've got all of my important data backed up.

    With drive prices being as low as they are, and with onboard RAID 0/1 on almost every motherboard nowadays, I'd say 'why not?'. If you are worried about reliability, take RAID 1 out for a spin. :) Although I don't think the improvement RAID 0 offers is tremendous, a quick filesystem has a nice feel to it.

    If cost and system simplicity/reliability are the primary objectives, a quick single drive would probably be a better solution.
  • Tex Dallas/Ft. Worth
    edited July 2004
    A good hardware RAID 1 controller may actually give the read-speed benefits of RAID 0, as it alternates reads between the two disks like RAID 0, while the writes stay the same as a single disk. This isn't true of the most low-end controllers, though.

    Tex