Green HDs and RAID Arrays

Some so-called “Green” hard disks don’t like being in a RAID array. These are primarily SATA drives, and they gain their green credentials by being able to reduce their RPM when not in use, as well as other aggressive power management trickery. That’s all cool and in a way desirable – we want our hardware to use less power whenever possible! – but the time it takes some drives to “wake up” again is longer than a RAID setup is willing to tolerate.

First of all, you may wonder why I bother with SATA disks at all for RAID. I’ve written about this before, but they simply deliver plenty for much less money. Higher RPM doesn’t necessarily help you for a db-related (random access) workload, and for tasks like backups, which do move a lot of data, speed may not be a primary concern. SATA disks have a shorter command queue than SAS, which means they might need to seek more – however, a smart RAID controller will already arrange its I/O in such a way as to optimise that.

The particular application where I tripped over Green disks was a backup array using software RAID10. Yep, a cheap setup – the objective is to have lots of disk space with resilience, and access speed is not a requirement.
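For reference, a software RAID10 array of this kind is assembled with mdadm; the device names and config path below are placeholders, so substitute your own:

```shell
# Sketch: create a 4-disk software RAID10 array with mdadm.
# /dev/sd[b-e] are placeholder device names -- use your actual drives.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Record the array so it is assembled on boot
# (config file location varies by distro).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```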

Not all Green HDs are the same. Western Digital ones allow their settings to be changed, although that does need a DOS tool (just a bit of a pest using a USB stick with FreeDOS and the WD tool, but it’s doable), whereas Seagate has decided to restrict their Green models such that they don’t accept any APM commands, so their configuration can’t be changed.
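On drives that do accept power-management commands, hdparm can inspect and adjust these settings straight from Linux – whether a given Green model actually honours them is exactly the problem described above:

```shell
# Query the current APM level (1 = most aggressive, 255 = APM disabled).
hdparm -B /dev/sdb

# Disable aggressive power management and the spindown timeout.
# Seagate's Green models reportedly ignore these commands outright.
hdparm -B 255 /dev/sdb
hdparm -S 0 /dev/sdb
```

Note that WD’s separate head-parking idle timer is what their DOS tool adjusts; hdparm won’t reach that.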

I’ve now replaced the Seagates with (non-Green) Hitachi drives, and I’m told that Samsung disks are also ok.

So this is something to keep in mind when looking at SATA RAID arrays. I also think it might be a topic the Linux software RAID code could address – if it were “Green HD aware”, it could a) make sure the drives don’t drop into a power state that is unacceptable, and b) be tolerant of their response time – this could be configurable, since some applications of RAID have higher demands than others.
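Until something like that exists, one workaround (a sketch, for a hypothetical sdb – repeat per array member) is to raise the kernel’s per-device command timeout so a slow wake-up isn’t instantly treated as a failed drive:

```shell
# Raise the SCSI-layer command timeout for this drive from the usual
# 30 seconds to 120, giving a sleeping Green disk time to spin up.
echo 120 > /sys/block/sdb/device/timeout
```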


HDlatency – now with quick option

I’ve done a minor update to the hdlatency tool (get it from Launchpad): it now has a --quick option to make it only do its tests with 16KB blocks rather than a whole range of sizes. This is much quicker, and since 16KB is the InnoDB page size it’s the most relevant test for MySQL/MariaDB deployments.

However, I didn’t just remove the other stuff, because it can be very helpful in tracking down problems and putting misconceptions to rest. On SANs (and local RAID of course) you have things like block sizes and stripe sizes, and opinions on what might be faster. Interestingly, the real world doesn’t always agree with the opinions.

As Mark Callaghan correctly pointed out when I first published it, hdlatency does not provide anything new in terms of functionality; the db I/O tests of sysbench cover it all. A key advantage of hdlatency is that it doesn’t have any dependencies: it’s a single small piece of C code that will compile and run in very minimalistic environments. We often don’t control the base environment we have to work in, and that’s why hdlatency was originally written. It’s just a quick little tool that does the job.

We find hdlatency particularly useful for comparing environments, primarily at the same client. For instance, the client might consider moving from one storage solution to another – well, in that case it’s useful to know whether we can expect an actual performance benefit.

The burst data rate (big sequential read or write) that often gets quoted for a SAN or even an individual disk is of little interest for database use, since the key performance bottleneck there lies in random access I/O: the disk head(s) will need to move. So it’s important to get some real, relevant numbers rather than just go with magic vendor numbers that don’t really apply to you. Also, you can have a fast storage system attached via a slow interface, and consequently the performance will not be at all what you’d want to see. It can be quite bad.
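To make the distinction concrete, here’s a rough shell sketch of sequential versus random 16KB reads – purely illustrative, and no substitute for hdlatency or sysbench (without O_DIRECT this largely measures the page cache, which is one more reason to use a proper tool):

```shell
# Create a 64MB scratch file to read from.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 2>/dev/null

# Sequential: read the file front to back in 16KB blocks.
time dd if="$testfile" of=/dev/null bs=16k 2>/dev/null

# Random: 100 single 16KB reads at random block offsets
# (64MB / 16KB = 4096 blocks in the file).
time for i in $(seq 1 100); do
    dd if="$testfile" of=/dev/null bs=16k count=1 skip=$((RANDOM % 4096)) 2>/dev/null
done

rm -f "$testfile"
```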

To get an absolute baseline on what are sane numbers, run hdlatency also on a local desktop HD. This may seem odd, but you might well encounter storage systems that show a lower performance than that. ‘nuf said.

If you’re willing to share, I’d be quite interested in seeing some (--quick) output data from you – just make sure you note what storage it is: type of interface, etc. Simply drop it in a comment to this post so it can benefit more people. Thanks!


Storage Miniconf Deadline Extended!

The linux.conf.au organisers have given all miniconfs an additional few weeks to spruik for more proposal submissions, huzzah!

So if you didn’t submit a proposal because you weren’t sure whether you’d be able to attend LCA2010, you now have until October 23 to convince your boss to send you and get your proposal in.


Storage miniconf at linux.conf.au 2010

Since you were going to linux.conf.au 2010 in Wellington, NZ in January of next year anyway, you should submit a proposal to speak at the data storage and retrieval miniconf.

If you have something to say about storage hardware, file systems, RAID, LVM, databases or anything else Linux or open source and storage related, please submit early and submit often!

The call for proposals is open until September 28.
