1TB Western Digital Green Hard Drive failure


Comments

  • foolkiller Ontario
    edited July 2011
    We've been noticing an extremely high failure rate with all WD drives of late. It's no longer limited to high-capacity Green drives; we're now seeing a high laptop drive failure rate, and standard Blue 500GB drives failing within months of being sold. As a result, we are switching back to Seagate. I don't mind getting a bad drive once in a while, but we're seeing about 30% of our drives fail within 6 months, and that isn't acceptable for us.
  • Tim Southwest PA Icrontian
    edited July 2011
    I don't like this whole concept of "green" drives. The Seagate ones I've seen spin at only 5900 rpm instead of 7200. So what... I'm supposed to LOSE performance and LOSE speed so we can save an insignificant couple of watts? NO WAY!!! I want my performance and speed and if it takes more power, then that's what will happen!

    I was thinking of a mirrored RAID 1 array of two drives for my next PC, using two of those Seagate server-grade drives in 500 GB or 1 TB capacities. Maybe 750 GB.
  • shwaip bluffin' with my muffin Icrontian
    edited July 2011
    ssd for windows + programs you use often (depending on the size), green drive (which isn't significantly slower than 7200 rpm drives) for your media.

    the "slow" green drives are also faster than older 7200 rpm drives (except in seek time) due to increased areal density. see this for a sample comparison

    http://www.storagereview.com/php/benchmark/suite_v4.php?typeID=10&testbedID=4&osID=6&raidconfigID=1&numDrives=1&devID_0=371&devID_1=294&devCnt=2
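
    A rough back-of-the-envelope sketch of that point, with made-up linear densities (not any real drive's specs), just to show why a newer 5900 rpm drive can beat an older 7200 rpm drive on sequential reads while still losing on rotational latency:

    ```python
    # Sequential throughput scales roughly with (bits per track) x (revs per second).
    # The kbits-per-track numbers below are invented round figures for illustration,
    # not specifications of any actual drive.

    def seq_throughput_mb_s(rpm, kbits_per_track):
        """Rough outer-track sequential rate in MB/s."""
        revs_per_sec = rpm / 60.0
        bits_per_sec = kbits_per_track * 1000 * revs_per_sec
        return bits_per_sec / 8 / 1_000_000

    old_7200 = seq_throughput_mb_s(7200, kbits_per_track=6000)  # older, lower areal density
    new_5900 = seq_throughput_mb_s(5900, kbits_per_track=9000)  # newer, higher areal density

    print(f"old 7200 rpm drive:       ~{old_7200:.0f} MB/s sequential")
    print(f"new 5900 rpm green drive: ~{new_5900:.0f} MB/s sequential")

    # Seek behaviour is the opposite story: average rotational latency is half a
    # revolution, so the 7200 rpm drive still wins there (about 4.2 ms vs 5.1 ms).
    print(f"rotational latency: {60_000 / 7200 / 2:.1f} ms vs {60_000 / 5900 / 2:.1f} ms")
    ```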
  • fatcat Mizzou Icrontian
    edited July 2011
    4x WD15EARS
    2x WD10EARS

    still all chugging away fine here. Retail, 64MB cache models
  • Tim Southwest PA Icrontian
    edited July 2011
    So if a new 5900 rpm drive is faster than an old 7200, then a new 7200 is faster than both! I do not plan to use old drives in a new computer.
  • edited August 2011
    boma23 wrote:
    I'll be glad if I never see a WD Green drive again.

    Out of 8 drives between 1TB and 2TB supplied to customers over the last 2 years (and before it was made obvious anywhere that WD don't approve them for RAID - apparently due to the firmware), we regularly ordered and used them in cheaper mirrored arrays for small businesses. We've also used them in normal use; I personally lost 1TB of movies on an external drive.

    I'd say in total, 7 of those 8 drives have died, in all sorts of use - from the click of death to simply not responding any more... and the one left hasn't been used yet.

    I can think of 4 others we didn't supply (one in a Fantom drive, one in a MyBook World, and 2 in another RAID mirror) that have also failed.

    Current situation is that a customer's data RAID mirror has just lost the second drive, 8 months after the 1st, and 13 months after initial install. AC in room, ventilation good.

    Utter, utter crap. Not like WD will listen though.

    Yeah, that could be a bad run in the batch. But --
    Seagate claims a 73% failure rate if the drives heat up over 40 degrees centigrade. WD claims an 8% failure rate on components (I suspect it is the controller board and/or the disks inside).

    When you look at all the "refurbished" hard drives WD and Seagate put out, especially in the external hard drive systems, it makes you wonder: why do they put hard drives in tiny cases with no heat dissipation at all? WD claims they spend a billion each year on R&D. The consumer market is suddenly big money that pays the bills, and the external market is big. But those drives fail every one to 3 months (not all, but most).

    WD and Seagate want to dominate the market, so whenever they can they will use cheaper components (ICs) in their drives and external cases so they can offer them at "affordable" prices to consumers. So instead of paying $345 you only pay $89, and then 3 months later another $89.

    Here is the really bad news: you don't know if your drive really has an 8MB, 16MB, 32MB or 64MB cache, or whether it's 1.5Gb/s, 3Gb/s or 6Gb/s, and you don't know if your hard drive is new or used, err... "refurbished" (the sketch below is one way to check what the drive itself reports). The billion dollars of R&D should stand for "rip & do it again", because changing the label, the controller and the firmware is cheaper than building a drive that will keep running for 3 to 5 years. Since 1995 we the consumers have accepted that this computer stuff is never 100%, but we keep buying it. Why? (It looks good.) WD, Seagate, Hitachi, Samsung, MDT, White Label, Toshiba would never sell a refurbished drive as new?? Would they? Thank god they don't produce pacemakers.
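
    One way to at least check part of that: on Linux, hdparm can dump what the drive itself reports. A minimal sketch, assuming hdparm is installed, root access, and /dev/sda as a placeholder device; the exact wording of the output varies by drive and hdparm version, so it just prints the lines that usually mention the buffer size and the SATA signaling speed:

    ```python
    # Minimal sketch: ask the drive what it reports for its cache/buffer size and
    # SATA signaling speed, instead of trusting the label on the box.
    # Assumptions: Linux, hdparm installed, run as root, /dev/sda is a placeholder.
    import subprocess

    DEVICE = "/dev/sda"  # placeholder -- point this at the drive you want to check

    out = subprocess.run(["hdparm", "-I", DEVICE],
                         capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        text = line.strip()
        # Output format differs between drives/versions, so match loosely.
        if "buffer" in text.lower() or "signaling speed" in text.lower():
            print(text)
    ```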
  • ardichoke Icrontian
    edited August 2011
    nobis wrote:
    But those drives fail every one to 3 months (not all, but most)

    Hyperbole much? As someone who works with consumer computer parts on a very large scale on a daily basis, you're so wrong on this that it's painful. If, as you claimed, "most" hard drives failed between 1 and 3 months, my company would need probably about 10x more staff just to replace all the failing hard drives.
  • Thrax 🐌 Austin, TX Icrontian
    edited August 2011
    I believe this qualifies for hyperbolulz, yes?
  • BuddyJ Dept. of Propaganda OKC Icrontian
    edited August 2011
    Verily sir. Hyperbolulz indeed.
  • Garg Purveyor of Lincoln Nightmares Icrontian
    edited August 2011
    I can't remember exactly when I bought my 1TB GP drive, but it was around two years ago now. Still going strong.

    That said, I avoided buying another GP when I needed a second drive. There's a lot of FUD out there, but since there are other options, I thought it was easy enough to avoid a drive that's gotten a lot of complaints.
  • edited September 2011
    I had a 1TB Caviar Green series (model: WD10EADS-00L5B1) fail after only 14 months. I've never had a drive of any brand fail so fast, although I understand that anything is possible. Interestingly, it passed normal SMART testing, and I could still access (and copy/backup) most files, but cloning was a no-go.

    Replaced it with a Seagate 2TB Green series drive, and hope for better luck with it.
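
    For anyone else whose drive still mounts but won't clone, a dumb file-level salvage pass is one fallback; a rough sketch below (the mount points are placeholders, and a dedicated tool like ddrescue is the proper way to do this at the block level):

    ```python
    # Rough salvage sketch: walk a failing-but-still-mounted drive and copy whatever
    # is still readable, logging files that throw I/O errors instead of aborting.
    # SRC and DST are placeholders for the failing drive and a healthy destination.
    import os
    import shutil

    SRC = "/mnt/dying_drive"  # placeholder: mount point of the failing disk
    DST = "/mnt/rescue"       # placeholder: somewhere with enough free space

    failed = []
    for root, dirs, files in os.walk(SRC, onerror=lambda e: failed.append((e.filename, e))):
        rel = os.path.relpath(root, SRC)
        os.makedirs(os.path.join(DST, rel), exist_ok=True)
        for name in files:
            src_path = os.path.join(root, name)
            try:
                shutil.copy2(src_path, os.path.join(DST, rel, name))
            except OSError as exc:  # unreadable sectors typically surface here
                failed.append((src_path, exc))

    print(f"done; {len(failed)} items could not be read:")
    for path, exc in failed:
        print(f"  {path}: {exc}")
    ```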
  • PirateNinja Icrontian
    edited September 2011
    Just my own experience...
    I have two of the drives (1TB WD Green, unsure of mfg. date or exact model #) mirrored in my NAS, running heavily for close to two years straight. No issues yet. Knock on wood.
  • edited October 2011
    I had "bad luck" with hard drives: my desktop drive failed. I went to my back-up, a WD WorldBook with 2 drives. While restoring, 1 HD failed. I had noticed a HD SMART fault signal but did not know at that time what it meant. I found out the 2 drives had been set in RAID 0 by default from WDC factory. I remembered suddenly that I had forgotten to change the option. Damned! Now I know the difference between Raid 0 & Raid 1. I went to an older back up on a Maxtor (I back up a separate image once a month). Il failed in the middle of the restore! The disk drive was 9 years old and totally unrepairable.
    Having lost personal and professional data, I desired to be wiser.
    I bougth a Synology 209+ and I put 2 samsung green in them in RAID 1. It works really great, but ... a next event makes me feel nervous about mounting consumer disks in RAID 1 array.
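
    For anyone else who, like me, learned the difference the hard way, here is a toy illustration (nothing to do with a real RAID implementation, just the idea):

    ```python
    # Toy illustration of RAID 0 vs RAID 1: the two "drives" are just Python lists.
    data = [f"block{i}" for i in range(6)]

    # RAID 0: blocks are striped alternately across the two drives (more space, no safety).
    raid0 = [data[0::2], data[1::2]]

    # RAID 1: every block is mirrored onto both drives (half the space, one drive can die).
    raid1 = [list(data), list(data)]

    def survivors(array, dead_drive):
        """Blocks still recoverable after one drive dies."""
        alive = [drive for i, drive in enumerate(array) if i != dead_drive]
        return sorted({block for drive in alive for block in drive})

    # With RAID 0 only half the stripes remain, so in practice the files are gone;
    # with RAID 1 the surviving mirror still holds everything.
    print("RAID 0, drive 0 dead:", survivors(raid0, 0))
    print("RAID 1, drive 0 dead:", survivors(raid1, 0))
    ```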

    To rebuild my desktop on a rock-solid basis, I bought a very cheap desktop (Medion) from a retail store, which had a motherboard (MSI OEM) allowing 2 RAID 1 arrays (6 SATA connectors). I put in ultrafast WD HDs for the system (with an antediluvian monster name I have forgotten) and reasonable-capacity WD drives (Blue?), also in RAID 1, for the data. I felt safe and proud, all of this being backed up nearly daily to my Synology over my home network. All my recent professional data is also synchronised nearly daily onto a light but big Samsung portable USB drive I carry with me.

    The desktop worked great for about 6 months, then one day it became impossible to restart the system (W7) at boot time. I put the 2 arrays in another machine and found that the WD RAID 1 system array had to be "repaired", which ultimately worked with a utility I had to find and buy. The "data" RAID 1 array was broken, i.e. no longer recognised as RAID 1. No data lost, simply 2 split disks with the same data, but impossible (for a non-techy like me) to put them back together so they are recognised as one volume.
    The problem here is not so much the risk of data loss as the huge waste of valuable time such problems create (finding a utility that repairs RAID disks, hesitating over whether to restore from an Acronis image (when did the full back-up run? is it really recent enough? will it succeed fully?), etc.). I know now that HDs have a higher probability of failing between 0 and 6 months and from 6 years onwards, but they all fail eventually if not thrown away first. So don't rely on a too-old HD. The probability that 2 backups would fail consecutively was/is very low (in 30 years, it had never happened to me before). But it happened to me, with old drives though (yes, "things" age faster than you think or hope...).
    I did some quick research into why the WD Enterprise/RAID edition drives are different from the WD consumer-market ones (Blue, Black, etc.), and unfortunately much more expensive.
    http://wdc.custhelp.com/app/answers/detail/a_id/996/related/1/session/L2F2LzEvdGltZS8xMzE4MDYyMzU1L3NpZC9vSFA1YTFHaw%3D%3D
    I am technically unable to tell the difference between marketing and genuine technical specs. Still, I know now that a normal "consumer" motherboard with 2 WD HDs mounted in RAID 1 is not as stable and safe a set-up as it looks, perhaps because the RAID controllers don't interact correctly with drives designed for the consumer desktop market. I found other articles on the internet confirming this. But apart from the link above, it is not easy to find information on this topic, let alone objective information. If you guys have some, I am always ready to learn.
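
    From what I have read, the one concrete firmware difference behind this is time-limited error recovery (WD calls it TLER; the generic mechanism is SCT Error Recovery Control): a desktop drive may retry a bad sector for so long that a RAID controller drops it from the array, while RAID-edition drives give up after a few seconds and let the array handle it. I am no expert, but on Linux smartmontools can apparently query (and sometimes set) that timeout; a sketch, with /dev/sda as a placeholder:

    ```python
    # Sketch: query the SCT Error Recovery Control (TLER) timeouts via smartctl.
    # Assumptions: Linux, smartmontools installed, run as root, /dev/sda is a
    # placeholder. Many desktop/Green drives report SCT ERC as unsupported or
    # disabled, which is the behaviour that causes trouble behind RAID controllers.
    import subprocess

    DEVICE = "/dev/sda"  # placeholder for the drive you want to check

    result = subprocess.run(["smartctl", "-l", "scterc", DEVICE],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)

    # On drives that allow it, the timeouts can be set to 7 seconds (the values
    # are in tenths of a second), e.g.:  smartctl -l scterc,70,70 /dev/sda
    # Drives that refuse simply keep their default desktop recovery behaviour.
    ```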

    Brussels, Belgium