Category Archives: Software and tools

Munin graphing of MySQL

There are many graphing tools out there, and we’ve used Munin for a while now.

The MySQL plugin for Munin had fallen out of date, and the SHOW ENGINE INNODB STATUS output changed in MySQL 5.5, so some parts of the plugin simply stopped working. SHOW GLOBAL STATUS also gained some extra variables, so there was a need to create new graphs.

All of these are now in the 2.1.8+ development releases of Munin.
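If you want to see what one of these plugins would report on your own server, munin-run is handy for testing outside the normal cron run. A small sketch; the plugin name here is an assumption based on the graph names below, so use whatever is actually symlinked in /etc/munin/plugins/ on your system.

munin-run mysql_innodb_bpool config   # the graph definition Munin will use
munin-run mysql_innodb_bpool          # the current values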

Here are samples of the new/updated graphs.

  • Tables
  • Table Definitions
  • Innodb Buffer Pool Activity
  • Innodb Buffer Pool Internal Breakdown
  • Innodb Insert Buffer
  • Innodb Buffer Pool
  • Innodb Semaphores
  • Innodb Master Thread
  • Innodb Adaptive Hash Index
  • Innodb Queries and Transactions
  • Innodb Read Views
  • Innodb Descriptors
  • Performance Schema Losses
  • Query Cache
  • Maximum Memory of MySQL
  • Rows
  • Handler Read
  • Handler Transaction
  • Handler Write
  • Handler Temporary Write/Updates
  • Execution (triggers and events)
  • Index Condition Pushdown
  • Multi Range Read Optimizations

Some of the above graphs may miss a variable or two with MariaDB-10 because of variable name changes; these will be corrected when we get to them. MariaDB-10 also makes a useful transition to Information Schema tables for status information, which will make it significantly easier to parse.

Individual buffer pool information has also been parsed out, however we haven’t yet worked out how to graph it sensibly. Also not yet merged are a number of Galera graphs, which are currently waiting on some Galera provider changes.

We’ll continue to work with the Munin developers to keep this MySQL plugin up to date and useful.

There are other graphs in the MySQL Munin plugins that we haven’t changed, so they aren’t included here.

MySQL Connector/Arduino

Chuck Bell, one of my former colleagues from MySQL AB, has created a connector that lets an Arduino talk to MySQL. This allows Arduino code to be a direct client of a MySQL or MariaDB server, with both Ethernet and WiFi shields supported.

With Arduino boards being used more and more, this can come in really handy – not only for retrieving (for instance) centralised configuration data, but also for logging. Useful stuff. Thanks Chuck!

Links

 Introducing MySQL Connector/Arduino 1.0.0 beta

Tool of the Day: MOSH (Mobile Shell)

Today I nominate MOSH (Mobile Shell) from MIT in our “tool of the day” category.

With people working remotely, we sometimes encounter connectivity issues. But even when working from a stable connection, it’s sometimes just a pest when you close your laptop even though you hadn’t quite finished looking at something on that SSH connection…

We tend to use a jump box for connections to clients, so connections come from a known IP (for the firewalls) and we have our end of the VPNs and such there. On that box we now also run a MOSH server. It doesn’t replace the authentication part of SSH, but rather takes over afterwards and maintains an (encrypted) UDP path (it being UDP, you can’t really call it a connection).
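In practical terms, using it looks roughly like the sketch below. The package name and the UDP port range are the usual defaults, but treat them as assumptions for your distro and firewall, and the host name is a placeholder.

sudo apt-get install mosh                               # on the jump box (and on your laptop)
iptables -A INPUT -p udp --dport 60000:61000 -j ACCEPT  # allow MOSH's default UDP range in
mosh user@jumpbox.example.com                           # connect much like you would with ssh
mosh --ssh="ssh -p 2222" user@jumpbox.example.com       # ssh options pass through for the auth step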

Now you can change IPs, close your laptop lid, see your ADSL connection retrain, and all will be well anyway. MOSH will warn you when a connection is (temporarily) gone but it’ll automatically sort out the reconnection for you. And depending on what you’re doing, you can actually keep typing locally. Even roaming between wifi and mobile will not break things.

There’s more to it, but the important thing is that now you know it exists! The MOSH site at MIT is simple but clear, and fairly complete, including instructions on how to install and use it on pretty much any platform. Have you tried MOSH already? Send us your thoughts.

Storage caching options in Linux 3.9 kernel

dm-cache (albeit still classified “experimental”) is in the just-released Linux 3.9 kernel. It deals with generic block devices and uses the device mapper framework. While a few other similar tools have been floating around, since this one has been adopted into the kernel it looks like it will be the one you’ll be seeing the most in the future. It also saves sysadmins the hassle of compiling extra stuff for a system.

A typical use is for an SSD to cache a HDD. Similar to a battery backed RAID controller, the objective is to insulate the application from latency caused by the mechanical device, the most laggy part of which is seek time (measured in milliseconds). Given the relatively high storage capacity of an SSD (in the hundreds of GBs), this allows you to mostly disregard the mechanical latency for writes and that’s very useful for database systems such as MariaDB.
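For the curious, setting that up by hand goes roughly like this. A minimal sketch: the device names are placeholders, and the block size and policy are simply the values from the kernel documentation example; normally you’d wrap this in an init script or LVM tooling.

ORIGIN=/dev/sdb      # the slow HDD being cached
META=/dev/sdc1       # small SSD partition for cache metadata
CACHE=/dev/sdc2      # SSD partition holding the cached blocks

SECTORS=$(blockdev --getsz $ORIGIN)   # origin size in 512-byte sectors

# table: start length cache <metadata> <cache> <origin> <blocksize> <#features> <features> <policy> <#policy args>
dmsetup create cached --table "0 $SECTORS cache $META $CACHE $ORIGIN 512 1 writeback default 0"

# the combined device then appears as /dev/mapper/cached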

That covers writes (for the moment), but what about reads? Can MariaDB benefit from the read-caching? For the MyISAM storage engine, yes (as it relies on filesystem caching for speeding up row data access). For InnoDB, much less so. But let’s explore this, because it’s not quite a yes/no story – it depends. For a correctly dimensioned system with an appropriately sized InnoDB buffer pool, most of the active dataset will reside in RAM. For a system using a cached RAID controller, that means an actual disk read is not likely to be in the controller’s cache. With an SSD cache you might get lucky as it’s bigger – so stuff that has been read or written in the recent past may still be there. What we have found from testing with hdlatency (on actual client/hosting infra) is that SANs typically don’t have enough cache to pull that off – they too may have SSD caches now, but remember they get accessed by many more users with different data needs as well. The result of SSD filesystem caching for reads is actually similar to InnoDB tweaks that implement a secondary buffer pool on SSD storage: it creates a relatively large and cheap space for “lukewarm” pages (ones that haven’t been recently accessed).

So why does it depend? Because your active dataset might be too large, and/or your combined reads/writes are still more than the physical disks can handle. It’s very important to consider the latter: write caching insulates you from the seeks and allows an intermediate layer to re-order writes to optimise the head movement, but the writes still need to be done and thus ultimately you remain bound by an upper physical limit. Insulation is not complete separation. If your active dataset is larger than RAM+SSD, then the reads also need to be taken into account for seek capacity.

So right now you could say that at decent prices, if your active dataset is in the range of a few hundred GB to even a few TB, RAM with the optional addition of SSD caching can all work out nicely – what can still make it go sour is the rate of writes. Conclusion: this type of setup provides you with more headroom than a battery backed RAID controller, should you need that.

Separating reporting to distinct database servers (typically slaves, configured for relatively few connections and large queries) actually still helps quite a bit as it really changes what’s in the buffer pool and other caches. Or, differently put, looking at the access patterns of the different parts of your application is important – there are numerous variations on this basic pattern. It’s a form of functional sharding.

You’ll have noticed I didn’t mention any benchmarks when discussing all this (and most other topics). Many if not most benchmarks have artificial aspects to them, which makes them problematic when dealing with the real world. As shown above, applying background knowledge of the systems and structures, logic, and maths gets you a very long way (either independently or in consultation with us). It can get you through important decision processes quicker. Testing can still play an important part, but then it’s either part of or very close to your real world environment, not a lab activity. It will be specific to you. Don’t get trapped having to deliver on numbers from benchmarks.

SPDY protocol available in nginx 1.4

Nginx 1.4 can now do SPDY (draft 2). It’s hiding away in a separate module: http://nginx.org/en/docs/http/ngx_http_spdy_module.html

So what is SPDY? In a nutshell, it does multiplexing, prioritization and compression of HTTP/HTTPS requests over a single TCP/IP connection. It also enables the server to push data before it is requested. This lets a browser or web services client obtain multiple responses quickly by opening and authenticating a single connection to a web server and then issuing multiple requests in parallel (well, whenever it wants, but in any case without waiting for the completion of one request before sending the next, and without requiring, though still allowing, multiple concurrent TCP/IP connections). For more info on SPDY, see http://www.chromium.org/spdy/spdy-protocol.
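In nginx terms, enabling it is mostly a matter of adding the spdy parameter to an SSL listener. A minimal sketch, assuming nginx 1.4 was built with --with-http_spdy_module and you already have a certificate (paths and names are placeholders):

server {
    listen 443 ssl spdy;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    root /var/www/example;
}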

There is also an Apache 2.2 module for SPDY (https://code.google.com/p/mod-spdy/). Browser support for SPDY is present in Firefox, Chrome, the default Android web browser, and Opera.

If you have production experience with SPDY, good or bad, we’d like to hear about it! Particularly since SPDY is still relatively new and not yet used everywhere, the more information is published, the better.

edit: official docs are up.

Web Logs and Statistics

I’ve previously covered how, from the user’s perspective, browser rendering and browser-to-web-server latency are the dominant causes of web performance issues.

Assuming you’ve resolved those as best you can, how do you measure server performance at a per-request level? Web servers allow you to put their processing time into the logs, however it’s not there by default.

Using Nginx you can define a custom format by adding $request_time:

log_format extended '$remote_addr - $remote_user [$time_local]  '
                    '"$request" $status $request_time $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

access_log [current access log file] extended;

In Apache’s HTTPd you can do a similar thing using %D (note that %D logs the time in microseconds, whereas Nginx’s $request_time is in seconds):

LogFormat "%h %l %u %t \"%r\" %>s %D %b \"%{Referer}i\" \"%{User-agent}i\"" extended

CustomLog [current access log file] extended

After you’ve collected a meaningful sample of web activity you’re ready to analyse. First step: copy the logs off all web servers to a common directory, for example as sketched below.
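Hostnames and log paths here are placeholders for whatever your web servers actually use:

mkdir logs && cd logs
for h in web1 web2 web3; do scp $h:/var/log/nginx/access.log ./$h-access.log; done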

Second, combine the logs. Assuming you haven’t played with the time format in the log, logmerge with --apache-access is a good way to get a single log, even with the Nginx format above.

logmerge --apache-access *access.log > weball.log

Assuming the response time is in the 10th column, as in the example above, the following will put the slowest responses at the top.

sort -n -r -k 10 --parallel=8 weball.log > weball-sort.log

Take a look through it and see what’s there.

If you consider a response taking more than 4 seconds to be slow, let’s move all those requests to their own file. The grep here matches the first response of 0, 1, 2 or 3 seconds in the sorted log.

grep -n -m 1 'HTTP/1\.[01]" 200 [0123]\.' weball-sort.log

42435:127.0.0.1 - - [02/Oct/2012:22:24:33 -0400] "POST /search ...

This will print the first line that has a response time beginning with 0, 1, 2 or 3. Because the log is sorted, all lines before this one are slower. So let’s put those in their own file.

head -n {linenumber - 1} weball-sort.log > weball-sort-more-than-4-seconds.log

To group these by URL path and get some frequency counts:

cut -f 8 -d ' ' weball-sort-more-than-4-seconds.log | sort | uniq -c | sort -n -r

6377 /new-releases
3753 /userrecommendations/index
2160 /home
1073 /bestsellers/reviewed
1048 /myfantasicproduct
563 /category/cool/10
464 /category/fun/17
395 /index.php?do=/blog/add/

....

From this you should start to recognise some URL patterns that need attention.

Take note of these, and we’ll use statistics to see how these URLs really behave across all the logs. I didn’t find a good simple number-crunching tool quickly enough, so I’m using the modified one below. Compile it with gcc stat.c -o stat -lm; it should compile with any ANSI C compiler. It reads numbers from standard input and prints its results when EOF is reached.

/* stat.c
 * This program reads numbers from standard input and outputs
 * sum, mean and standard deviation.
 *
 * Rewritten by Daniel Black of Open Query
 *
 * To build:
 * gcc stat.c -o stat -lm
 *
 * originally based on:
 * http://www.dreamincode.net/forums/topic/138734-sum-mean-median-max-min-standard-deviation-in-c/
 */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>

int main(void)
{
    long unsigned count = 0;
    double sum_sqrs = 0, total = 0, max = 0, min = 0, mean = 0, std_dev = 0, var = 0;
    int res = 0;
    float tmp;

    /* Read the first value from standard input */
    res = scanf("%f", &tmp);

    /* stop on EOF or non-numeric input */
    if (res != 1) return 0;
    count = 1;
    min = max = total = tmp;
    sum_sqrs = tmp * tmp;

    while (1)
    {
        res = scanf("%f", &tmp);
        if (res != 1) break;   /* EOF or non-numeric input */
        count++;

        if (tmp < min) min = tmp;
        if (tmp > max) max = tmp;

        total += tmp;
        sum_sqrs += tmp * tmp;
    }

    printf("Count: %lu\n", count);
    printf("Sum of All Values: %.2f: \n", total);

    mean = total / count;
    printf("Mean of Values: %.2f: \n", mean);

    printf("Minimum value entered: %.2f: \n", min);
    printf("Maximum value entered: %.2f: \n", max);

    /* population variance: E[x^2] - (E[x])^2 */
    var = sum_sqrs / count - mean * mean;
    std_dev = sqrt(var);

    printf("Standard Deviation: %.2f\n", std_dev);

    return 0;
}

So:

fgrep "GET /home" weball.log | cut -f 11 -d ' ' | ./stat
Count: 47438
Sum of All Values: 69947.67:
Mean of Values: 1.47:
Minimum value entered: 0.00:
Maximum value entered: 200.00:
Standard Deviation: 1.83

Assuming this is a normal-like distribution, about 97.7% of responses will lie below the mean + 2 * standard deviation, so 1.47 + 2 * 1.83 ≈ 5.13 seconds. Acceptable? Up to you.

There’s still a 200-second maximum in there. Though there shouldn’t be that many outlying entries, we’ll need to look at them to see their significance.

How many took more than 10 seconds?

fgrep 'GET /home' web-access-sorted-time-slowest-40000.log| egrep -v 'HTTP/1\.[01]" [0-9]{3} [0-9]\.' | wc -l

61

So 61 requests out of 47438? Using “more” instead of “wc -l” will let you see some of the actual responses. Sometimes it’s a property of the dataset that causes these particular pages to be slow, so even a fairly low number here may be worth examining.

Repeating the above steps for other URLs is a pretty good way of identifying problem pages by examining the frequency, mean, standard deviation and the business value of the page. So now you just need to dive into the mechanics of the page and work out what’s being slow.
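A quick sketch for running the same pass over several URLs at once (the URL list is illustrative, and the field number matches the log format used above):

for url in /home /new-releases /bestsellers/reviewed; do
    echo "== GET $url =="
    fgrep "GET $url " weball.log | cut -f 11 -d ' ' | ./stat
done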

An aside, if you’re interested in graphing:

If you want to do XY graphs using the time as an X column, I suggest using logmerge’s --tempfile option and then:

sed -i -e 's/^\(..............\)_\([^ ]*\)\(.*\)/\1\3/g' logfile

After this the first column will be a YYYYMMDDHHMMSS time and thus usable on an X axis. I was having trouble using LibreOffice Calc on a 2 million line log file, though giving the file a .csv extension helped.

Optimising Web Servers

I was lucky enough to attend PyCon-AU recently and one talk in particular highlighted the process of web server optimisation.

Graham Dumpleton’s talk, Web Server Bottlenecks And Performance Tuning, is available on YouTube (along with the majority of PyCon-AU talks).

The first big point at the beginning is that the majority of the delay in a user’s perception of a website is caused by the browser rendering the page. Though not covered in the talk: for those that haven’t used them, YSlow (for Firefox and Chrome) and Google’s Developer Tools (Ctrl-Shift-I in Chrome) will give you pretty much identical recommendations on how to configure the generated pages and the server’s caching/compression settings to make it as easy as possible for the browser to render the page. These recommendations will also minimise the second most dominant effect on page display: network latency and bandwidth. Once you have completed this, making web pages faster on the web server begins to have a measurable effect for the end user.

The majority of the talk, however, is about web server configuration. At the web server, memory, CPU and I/O are the constraints you may hit, depending on your application.

Measuring an application’s memory use and multiplying it by the number of concurrently running processes will give you an idea of how much memory is needed. Remember always that spare memory is disk cache on Linux based systems, and this is significant in reducing I/O read time for things like static content serving. Memory use can be reduced by front-end proxying, as described in the question at offset 19:40 and relating it to the earlier description of threads and processes. In short, the buffering provided by Nginx in a single process on the input side ensures that the application code isn’t running until a large amount of input is ready, and output is buffered in Nginx so that the application process can finish sooner while Nginx trickles the web page out to the client at whatever the network speed allows. This reduction in the running time of the application enables the server to support better concurrency and hence better memory usage. This is why we at Open Query like to use Nginx as the web server for the larger websites of our clients.
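In nginx configuration terms the idea looks roughly like this; a minimal sketch where the backend address and buffer sizes are illustrative assumptions, not tuned recommendations:

location / {
    proxy_pass http://127.0.0.1:8000;   # the application backend

    client_body_buffer_size 128k;       # buffer the request body before the app sees it

    proxy_buffering on;                 # buffer the response so the app worker can finish
    proxy_buffers 16 8k;                # while nginx trickles the page out to a slow client
}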

The database is effectively an I/O constraint from the web server’s perspective, as it should be on a different server unless you run a simple blog or an application where database utilisation is very low. A database query that takes a long time to run will add significantly to the time taken to render the page. Taking note of which queries are slow, for example by enabling the slow query log, is the first step to identifying the problem. Significant gains can usually be made by adding indexes and by using the database rather than the application to do joins, iteration and sorting. Of course much more optimisation of the server and queries is possible, and Open Query is happy to help.
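For reference, turning on the slow query log is a small my.cnf change; the thresholds below are illustrative assumptions rather than recommendations:

[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 1     # seconds
log_queries_not_using_indexes = 1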

Thanks again to PyCon speakers, organisers, sponsors and delegates. I had a great time.

Jetpants: a toolkit for huge MySQL topologies

From a Tumblr engineering blog post:

Tumblr is one of the largest users of MySQL on the web. At present, our data set consists of over 60 billion relational rows, adding up to 21 terabytes of unique relational data. Managing over 200 dedicated database servers can be a bit of a handful, so naturally we engineered some creative solutions to help automate our common processes.

Today, we’re happy to announce the open source release of Jetpants, Tumblr’s in-house toolchain for managing huge MySQL database topologies. Jetpants offers a command suite for easily cloning replicas, rebalancing shards, and performing master promotions. It’s also a full Ruby library for use in developing custom billion-row migration scripts, automating database manipulations, and copying huge files quickly to multiple remote destinations.

Dynamically resizable range-based sharding allows you to scale MySQL horizontally in a robust manner, without any need for a central lookup service or massive pre-allocation of tiny shards. Jetpants supports this range-based model by providing a fast way to split shards that are approaching capacity or I/O limitations. On our hardware, we can split a 750GB, billion-row pool in half in under six hours.

Jetpants can be obtained via GitHub or RubyGems.
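If you want to try it, the gem name appears to match the project, so installation is likely just:

gem install jetpants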

Good work Tumblr, excellent move to open up your tools: you’re bound to get good feedback and bug catches/fixes from users in other environments now, making your toolset even better!

MySQL Cluster on Raspberry Pi

Earlier this week, Andrew Morgan wrote a piece on running MySQL Cluster on Raspberry Pi. Since the term “Cluster” is hideously overloaded, I’ll note that we’re talking about the NDB cluster storage engine here, a very specific architecture originally acquired by MySQL AB from Ericsson (telco).

Raspberry Pi is a new single-board computer based on the ARM processor series (the same stuff that powers most mobile phones these days), and it can run Linux without any fuss. Interfaces include Ethernet, USB, and HDMI video, and the cost is $25-50. I’m looking to use one for the front-end of a MythTV setup (digital video recorder and TV system); I can just strap the Raspberry Pi to the back of a TV or monitor to do its job.

As Andrew already notes, in practical terms you’re not likely to use Raspberry Pi for a cluster – perhaps for development and certain testing, and it’d be a neat solid state management server. Primarily, it’s “techie cool”.

Knowing the NDB architecture, one of the key issues is that all nodes need to communicate with each other (NxN) so the system is very network intensive, and network latency significantly affects performance. So commonly, a cluster would have at least separate interfaces for direct connections to its siblings (no switch), and possibly Dolphin Interconnect cards to provide a link with much less latency than regular Ethernet offers. And you can’t do either with Raspberry Pi.

However, there are important positive lessons in this setup:

  • Thanks to the open source nature of the software, it can be utilised in a new environment with only minimal tweaks. Not everybody needs to or wants to tweak, but the ability to do so is critical to innovation.
  • Overall, scaling out rather than up makes sense. There are cost, power-efficiency and other factors involved. More, cheap, relatively low-powered, systems can deliver a system architecture that would otherwise be unaffordable (and the expensive construct might not scale anyway).
  • Affordable resilience (redundancy).

What if you needed lots of MySQL slaves with a fairly small dataset? Raspberry Pi could well be the solution. Not everybody is “big” or “high performance” in the same way.

Green HDs and RAID Arrays

Some so-called “Green” harddisks don’t like being in a RAID array. These are primarily SATA drives, and they gain their green credentials by being able to reduce their RPM when not in use, as well as other aggressive power management trickery. That’s all cool and in a way desirable – we want our hardware to use less power whenever possible! – but the time it takes some drives to “wake up” again is longer than a RAID setup is willing to tolerate.

First of all, you may wonder why I bother with SATA disks at all for RAID. I’ve written about this before, but they simply deliver plenty for much less money. Higher RPM doesn’t necessarily help you for a db-related (random access) workload, and for tasks like backups speed may not be a primary concern anyway. SATA disks have a shorter command queue than SAS, so that means they might need to seek more – however a smart RAID controller would already arrange its I/O in such a way as to optimise that.

The particular application where I tripped over Green disks was a backup array using software RAID10. Yep, a cheap setup – the objective is to have lots of diskspace with resilience, and access speed is not a requirement.

Not all Green HDs are the same. Western Digital ones allow their settings to be changed, although that does need a DOS tool (just a bit of a pest using a USB stick with FreeDOS and the WD tool, but it’s doable), whereas Seagate has decided to restrict their Green models such that they don’t accept any APM commands and can’t change their configuration.
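If you want to check what a particular drive will accept before going down the DOS-tool route, something like this is a reasonable first look (the device name is a placeholder, and as noted above some Green models simply ignore the APM command):

hdparm -B /dev/sdX                          # query the current APM level, if the drive supports it
smartctl -A /dev/sdX | grep -i load_cycle   # a fast-rising Load_Cycle_Count suggests aggressive head parking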

I’ve now replaced Seagates with (non-Green) Hitachi drives, and I’m told that Samsung disks are also ok.

So this is something to keep in mind when looking at SATA RAID arrays. I also think it might be a topic that the Linux software RAID code could address – if it were “Green HD aware” it could a) make sure that they don’t go to a state that is unacceptable, and b) be tolerant with their response time – this could be configurable. Obviously, some applications of RAID have higher demands than others, not all are the same.