I once heard the saying:
There are only two types of drives;
those that have failed and those that will eventually fail.
So true from my experience.
I've had drives fail from every manufacturer.
Lately WD drives have been very reliable for me.
Surprisingly, true for me too, though I've only had experience of WD drives over the past eight years, from a sample size of three.
I had to check out the 2013 email archive (a trip down memory lane) to find the only two SMART error warning emails that weren't test emails.
In this case, it involved a 4TB Hitachi Deskstar that hadn't quite clocked up 6,000 hours (so it was within the vendor's one-year warranty, which I collected on after purchasing an identically sized WD Red to replace it - still going strong to this day).
The older 3TB 5700rpm Hitachi, which I'd managed to purchase from, of all places, PC World for a mere 150 quid (this was during the Thai-flood-induced shortages), had shown LBA errors almost from the start. The count reached four during the first 10,815 hours of its life and stayed there for the next few years until the drive was finally retired, with no untoward consequences.
The 4TB Deskstar 'wonder' didn't show any errors until that first email, which I didn't see on account of my disdain for email as a "high priority, you must deal with this straight away!" messaging system. I only saw those first and second emails when checking for SMART warnings after hitting actual problems reading and writing data on the FreeNAS server (as it was still known back then).
I was very lucky in that, after a 48-hour run with ddrescue, I managed to recover every single sector to its replacement, a WD40EFRX, some three days later. Said replacement is still going strong to this day, almost eight years and some (reported) 32,252 power-on hours later - with WD drives, you need to remember that the PoH counting algorithm suffers from "Dorian Gray syndrome", so the reported figure stays flatteringly young.
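For anyone facing a similar rescue, the usual GNU ddrescue two-pass pattern looks something like the sketch below. The device names are examples only - triple-check which is source and which is destination before running, because swapping them is fatal to your data.

```shell
# Pass 1: grab everything that reads cleanly and skip the bad patches
# (-n = no scrape, -f = force writing to a block device).
# The map file records progress, so an interrupted run can be resumed.
ddrescue -f -n /dev/ada1 /dev/ada2 rescue.map

# Pass 2: return to the bad patches with up to three retries each,
# using direct disc access (-d) to bypass the kernel cache.
ddrescue -f -d -r3 /dev/ada1 /dev/ada2 rescue.map
```

The map file is the important part: it's what lets a multi-day rescue survive reboots and lets the second pass target only the sectors the first pass skipped.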
At that time, I still had a couple of 2TB Samsung Spinpoints in the box (I never ever used RAID - couldn't afford it, plus, knowing me, I'd most likely cock it all up if I ever had to resilver a replacement into the array anyway). One had clocked a mere 168 thousand head unload events, the other a staggering million plus, neatly explaining the ensuing Multi_Zone_Error_Rate of 26,081 versus a zero value for the other. They'd both accurately reported some 25,000 PoH by then (three years' worth - the longest period of active service of all the ever-larger drives I'd fed to my home server to keep just ahead of my ever-increasing storage requirements).
Actually, the drive with a million-plus head unload events had clocked 2,000 fewer PoH. I can only surmise that, in my experimenting with the various power-saving options, I'd somehow managed to get it to emulate the head unload behaviour you get by default with WD drives (an 8-second head unload time-out, FFS!).
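If you suspect the same behaviour on your own drives, it's worth checking the load cycle counter and, on affected WD Green/Red models, the idle3 timer itself. A sketch, assuming smartmontools and idle3-tools are installed and a FreeBSD-style device name (/dev/ada0 here is just an example):

```shell
# SMART attribute 193 (Load_Cycle_Count): a figure climbing by
# hundreds per day points at an aggressive head-park timer.
smartctl -A /dev/ada0 | grep -i load_cycle

# Query the current idle3 (head park) timer on a WD drive...
idle3ctl -g /dev/ada0

# ...and disable it. The drive needs a power cycle (not just a
# reboot) for the change to take effect.
idle3ctl -d /dev/ada0
```

Drives are typically rated for a few hundred thousand load cycles over their lifetime, so a counter in the millions is a timer problem, not normal wear.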
If you're setting up a home server, whether ready-made or, best of all, something like FreeNAS or one of its FreeBSD-based derivatives, it's well worth monitoring the SMART logs to make sure there aren't any hidden 'timebombs' like WD's infamous 8-second head unload time-out waiting to send your investment to an early grave. Also, don't let the power savings on the annual electricity bill tempt you into placing your drives under more stress than they already have to contend with - let them spin 24/7; they're far more likely to survive to the next capacity upgrade that way.
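On the monitoring front, a smartd.conf entry along these lines (device name and address are placeholders, and the file lives in different places on different systems) will run scheduled self-tests and email you on trouble. The -M test directive sends a test mail at daemon startup, so you know the warnings actually arrive before you need them - the lesson of my two non-test emails above.

```
# smartd.conf fragment (path varies: /usr/local/etc/ on FreeBSD, /etc/ on most Linux)
# -a          monitor all SMART attributes and overall health status
# -o on       enable automatic offline data collection
# -S on       enable attribute autosave
# -s (...)    short self-test daily at 02:00, long self-test Saturdays at 03:00
# -m / -M     where to send warnings, plus a test mail at startup
/dev/ada0 -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com -M test
```

FreeNAS/XigmaNAS expose much the same options through their web UIs, but the underlying smartd directives are what's being configured either way.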
The three WD drives (10+6+4 TB's worth) in mine have all been spinning non-stop for the past 746 days of uptime, as shown on the status page of my XigmaNAS box.