Cloud service provider Backblaze has updated its earlier study of hard drive failure rates (Nov 2013) in its own infrastructure – the sample has grown from 27,000 to more than 34,000 drives – and the new report (Sep 2014) is quite informative. Hitachi comes out looking very reliable, Western Digital has produced some good drives, but Seagate tends to come out worst. Each brand has good and not-so-good models, so there’s no single right answer, and with any new model you’re always dealing with an unknown factor.
Backblaze also found that consumer drives actually perform well compared to enterprise grade drives, and once price is taken into account the enterprise drives simply lose. We’ve been telling our clients this for a number of years – based on actual performance (rather than specs) – so it’s good to see it backed up by reliability data as well.
Typically, enterprise drives are SAS and consumer drives are SATA. Comparing SAS and SATA, there’s actually very little difference. SAS has a longer command queue, which lets the drive re-order pending commands to minimise head movement between seeks. That’s nice, but a RAID or other storage controller, and even your operating system, will be doing that as well these days. So it’s pretty much a moot point.
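To illustrate the kind of re-ordering involved, here’s a minimal Python sketch of the classic “elevator” scheduling idea – the track numbers and queue contents are the usual textbook illustration values, not anything drive-specific:

```python
# A minimal sketch of the seek re-ordering ("elevator") idea that a RAID
# controller or OS I/O scheduler applies: service queued requests in order
# of track position along one sweep of the head, instead of FIFO.

def elevator_order(requests, head=0):
    """Return requests sorted into one ascending-then-descending sweep."""
    ahead = sorted(r for r in requests if r >= head)   # serve on the way up
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind                              # then sweep back down

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(queue, head=53))
# FIFO would zig-zag across the platter; this order moves the head
# monotonically: [65, 67, 98, 122, 124, 183, 37, 14]
```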
Higher RPM or even data transfer rate tend to be minor factors as well – your database server will already have all recently used pages in memory anyway. When it asks the physical storage for some pages, it won’t hit any cache and it’ll require a seek. It can ask for more nearby pages as well (read-ahead, which is configurable in MySQL/MariaDB), but those pages may or may not actually get used. With a higher RPM and higher data transfer rate, such reads would be a bit faster. But compared to the overhead of a seek operation, which is measured in milliseconds, the difference is really minimal.
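To put rough numbers on that, here’s a back-of-envelope sketch. The figures are typical ballpark values for a 7200rpm consumer drive, not measurements, and the 16KB page and 150MB/s transfer rate are just illustrative assumptions (in InnoDB, the linear read-ahead trigger is tuned via innodb_read_ahead_threshold):

```python
# Back-of-envelope numbers (typical figures, not measurements) showing why
# seek overhead dwarfs any gain from higher RPM or transfer rate:

avg_seek_ms = 8.5                       # typical 7200rpm consumer drive
rot_latency_ms = 60_000 / 7200 / 2      # half a revolution ~ 4.17 ms
transfer_ms = 16 / (150 * 1024) * 1000  # one 16KB page at 150MB/s ~ 0.10 ms

total = avg_seek_ms + rot_latency_ms + transfer_ms
print(f"{total:.2f} ms per random page read, {transfer_ms:.2f} ms of it transfer")
# ~12.77 ms in total, of which only ~0.10 ms is the actual data transfer:
# doubling the transfer rate, or going 15000rpm, shaves off a millisecond
# or two at best.
```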
These days, if you have issues with (spinning) hard drive performance, the solution is not caching (other than database server caching, which is hugely important!), not higher RPM or transfer rate, not higher grade drives, but solid state. And preferably local, as latency then becomes the critical factor: if the access path to your fast storage device is slow (again, in terms of latency for reading or writing a specific block, not burst transfer rate), the end result is going to be slow.
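A quick illustration of why the access path matters once the device itself is fast – the latencies below are the kind of ballpark figures commonly quoted, not measurements:

```python
# Ballpark (typical, not measured) access latencies, to show why the path
# to the device starts to dominate once the device itself is fast:
LATENCY_US = {
    "spinning disk random read": 8000,   # ~8 ms seek + rotation
    "SATA SSD random read":       100,   # ~0.1 ms
    "extra network round trip":   500,   # ~0.5 ms per hop, often more
}

ssd = LATENCY_US["SATA SSD random read"]
hop = LATENCY_US["extra network round trip"]
print(f"remote SSD is ~{(ssd + hop) / ssd:.0f}x slower per random read")
# -> a single network hop can make a fast SSD ~6x slower per access,
# while the same hop barely matters next to an 8 ms seek.
```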
Some so-called “Green” hard disks don’t like being in a RAID array. These are primarily SATA drives, and they gain their green credentials by being able to reduce their RPM when not in use, along with other aggressive power management trickery. That’s all cool and in a way desirable – we want our hardware to use less power whenever possible! – but the time it takes some of these drives to “wake up” again is longer than a RAID setup is willing to tolerate.
First of all, you may wonder why I bother with SATA disks at all for RAID. I’ve written about this before: they simply deliver plenty for much less money. Higher RPM doesn’t necessarily help you with a db-related (random access) workload, and for tasks like backups, which move a lot of data sequentially, speed may not be a primary concern anyway. SATA disks have a shorter command queue than SAS, which means they might need to seek more – but a smart RAID controller will already arrange its I/O in such a way as to optimise that.
The particular application where I tripped over Green disks was a backup array using software RAID10. Yep, a cheap setup – the objective is lots of disk space with resilience, and access speed is not a requirement.
Not all Green HDs are the same. Western Digital ones allow their settings to be changed, although that does need a DOS tool (a bit of a pest, booting FreeDOS off a USB stick to run the WD tool, but it’s doable), whereas Seagate has decided to restrict their Green models: they don’t accept any APM commands, so their configuration can’t be changed at all.
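For drives that do honour APM, you can query and adjust the power management level from Linux with hdparm -B; here’s a minimal Python wrapper as a sketch (the device node /dev/sdb is a placeholder, and you’d need to run this as root):

```python
import subprocess

DEVICE = "/dev/sdb"   # placeholder device node; adjust for your system

def apm(level=None):
    # hdparm -B queries the drive's Advanced Power Management level when
    # called without a value, and sets it when given one (1 = most
    # aggressive power saving, 254 = performance, 255 = APM off).
    cmd = ["hdparm", "-B"] + ([str(level)] if level is not None else []) + [DEVICE]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(apm())       # see what the drive is currently set to
print(apm(254))    # discourage aggressive spin-down / head parking
# Drives that honour APM will accept this; the Seagate Green models
# described above simply ignore APM commands altogether.
```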
I’ve now replaced Seagates with (non-Green) Hitachi drives, and I’m told that Samsung disks are also ok.
So this is something to keep in mind when looking at SATA RAID arrays. I also think it might be a topic the Linux software RAID (md) code could address – if it were “Green HD aware” it could a) make sure the drives never drop into a power state the array can’t handle, and b) be more tolerant of their wake-up response time – both of which could be configurable. Obviously, some applications of RAID have higher demands than others; not all are the same.
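As a thought experiment, such a “Green HD aware” policy might look something like the sketch below. This is entirely hypothetical – the function, thresholds and idle-time input are invented to illustrate the idea, not anything the md code actually implements:

```python
# Hypothetical sketch of a "Green HD aware" member-drive policy: instead of
# one fixed response timeout, tolerate a slow first response from a drive
# that has plausibly spun down, and only treat repeated or unexplained
# slowness as failure. All names and thresholds here are invented.

SPIN_UP_GRACE_S = 15.0   # time a Green drive may need to wake up
NORMAL_LIMIT_S = 2.0     # anything slower than this is normally suspicious

def judge_response(latency_s, idle_for_s):
    """Classify one I/O completion as 'ok', 'tolerate' or 'fail'."""
    if latency_s <= NORMAL_LIMIT_S:
        return "ok"
    if idle_for_s > 60 and latency_s <= SPIN_UP_GRACE_S:
        # Drive sat idle long enough to have spun down; a slow first
        # response is expected behaviour, not a sign of failure.
        return "tolerate"
    return "fail"

print(judge_response(0.01, idle_for_s=5))    # ok: normal read
print(judge_response(8.0, idle_for_s=600))   # tolerate: waking from spin-down
print(judge_response(8.0, idle_for_s=5))     # fail: slow while already awake
```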