SQL Locking and Transactions – OSDC 2011 video

This recent session at OSDC 2011 Canberra is based on part of an Open Query training day, though (due to time constraints) without much of the usual interactivity, exercises and further MySQL-specific detail. People liked it anyway, which is nice! The info as presented is not MySQL-specific; it provides general insight into how databases implement concurrency and what trade-offs they make.

See http://2011.osdc.com.au/SQLL for the talk abstract.

Know your data – and your numeric types.

In this “Good Practice/Bad Practice” post I hope to give some guidelines for choosing between MySQL’s numeric types, using longitude and latitude as a modelling example. (Disclaimer: I am not a mathematician, and the generalisations here are meant to help with practical modelling questions rather than be rigorously theoretical.)

Numeric types in MySQL fall into two main varieties:

  • “precise” types such as INTEGER and DECIMAL;
  • the IEEE-standard floating point types FLOAT and DOUBLE.

As a rule of thumb, the first group are for exact, or “counted” quantities. The INTEGER types represent whole numbers, and DECIMAL represents “fixed point” decimal, with a preset number of places after the decimal point.

Various widths of INTEGER are available in MySQL, from 8-bit TINYINT to 64-bit BIGINT. Calculations with integer types are fast, as they usually correspond to hardware register sizes.

DECIMAL is commonly used for quantities like decimal currency where the number of digits of precision is known and fixed. For example, exactly counting pennies in two decimal digits. Computation with DECIMAL is slower than other types, but this is unlikely to impact most applications.
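
In a table definition this just means giving the total number of digits and how many of those sit after the decimal point. A minimal sketch (the table and column names are purely illustrative):

# two decimal places, so penny amounts are stored exactly
CREATE TABLE invoices (
  amount DECIMAL(12,2) NOT NULL  # up to 10 digits before the point, 2 after
);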

In the other category are FLOAT and DOUBLE, the 32 and 64-bit IEEE standard types, which are usually supported in hardware and are therefore fast and convenient for arithmetic. These are generally good choices for “measurements” – values with limited precision.

It is important to understand what is meant by “precision” (Wikipedia has a full discussion). For example, I can measure my height at 185.8 centimetres. Because of the way I make the measurement, which we know to be approximate, this figure is understood to have only one meaningful digit after the decimal point.

Precision is a property of all inexact real world “measurements” – such as position, length, weight, brightness; it is usually expressed as the number of “significant figures” or significant digits. (My height measurement has four significant digits.) This should be considered when values are displayed. It is somewhat misleading to represent my height as 185.8000 – common sense tells us that the value is not this accurate.

Serious problems can occur when we do not know the actual precision of measurements, and wrongly assume a greater precision than exists. A typical example might be a GPS map display which uses measured position to locate the user relative to features such as rivers, roads, and railway tracks. The map display is high resolution and implies a great deal of precision to the viewer. Let us say that based on incoming data, it places our vehicle three metres east of a river. If the measurement has a true precision of, say, 20 metres, we cannot even know which side of the river we are on! So to allow users to draw safe conclusions, the presentation of data needs to take precision into account.

(While very frequently cited as examples of “precise” figures, not all monetary values are. For example, the value of USA’s national deficit was estimated today at $11,983,250,643,192.95. I am no economist, but rather few of these digits can actually be significant!)

Throwing away precision with inappropriate modelling is also a potential problem. Ideally measurements arrive with an implied or explicit precision. But even without that, we need to assure ourselves that we are safely storing them.

Keep in mind that we are always dealing with two kinds of precision:

  • Machine precision, which is defined by the chosen type; and
  • Data precision, which is a property of the values we are storing.

If I am storing my height as cm in a FLOAT column, the data precision is only 4 significant digits as discussed, but the machine precision of this column is always about 7 significant digits, no matter what we try to store. Clearly to avoid throwing away significant parts of your input, the machine precision should exceed your data precision.
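
You can see machine precision in action with a throwaway table (a sketch; the names are illustrative only):

CREATE TABLE precision_demo (f FLOAT, d DOUBLE);
INSERT INTO precision_demo VALUES (185.8123456789, 185.8123456789);
SELECT f, d FROM precision_demo;
# f comes back as roughly 185.812: only about 7 significant digits survive
# d keeps all the supplied digits, well within its ~16 digit machine precision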

Consider a CSV file of latitude and longitude, and notice that every value has 10 digits after the point:

 Afton Station,NS,45.6051050000,-61.6974950000
 Agassiz,BC,49.2421627750,-121.7496169988

This does not mean that every value has 13 digits of precision – if that were so, then these coordinates would be accurate to within about 0.01mm on the ground! This is clearly not possible.

Let’s say we merely want to stick markers on a web page showing a national map. One hundred metre resolution would be more than adequate. Given an Earth circumference of 40,075,020 metres, 100 metres is approximately 1/1113 of a degree. While three digits (0.001) can represent 1/1000ths of a degree, this is not quite precise enough, so let’s keep four fractional digits after the point. Therefore we are looking to represent 7 significant figures, for example:

 Afton Station,NS,45.60511,-61.69750
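
As a sanity check on that arithmetic, MySQL itself can do the sums:

SELECT 40075020 / 360 AS metres_per_degree;        # ~111319.5
SELECT (40075020 / 360) / 100 AS steps_per_degree; # ~1113, i.e. 100 m is about 1/1113 of a degree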

Which of FLOAT or DOUBLE is the right type to use for such values? Let’s investigate.

A key difference between floating point types and fixed point representations (such as DECIMAL) is that while the overall binary precision is fixed, the precision of the fractional part of the value can vary! The available precision depends on the magnitude of the value. The highest precision is available for values closest to zero, and precision gets worse as numbers increase in magnitude, stepping down at every power of two.

To understand the effect of this, it is necessary to examine floating point representation in more detail. (I am going to handwave more esoteric features of the IEEE standard and try to give a general picture. In particular I am not going to talk about rounding, biased exponents, or denormalised numbers.)

Floating point values have three parts:

  • a sign (+/-)
  • an “exponent” (binary scale factor)
  • the value itself (known as the fraction or “mantissa”).

These are analogous to decimal “scientific” or “exponential” notation that you may already be familiar with (e.g. 76.4935 = 7.64935 x 10^1 or 7.64935E+1, where 1 is the decimal exponent).

The combination of these fields precisely defines a rational number, whose value should be “close enough” to the value you need to store (which was imprecise to begin with, so the approximation involved in converting to floating point representation is normally not a problem).

The overall precision available is determined by the bits allowed to store the fraction. For reference,

  • FLOAT allows 23 physical bits for the fraction (with hidden bit, effectively 24 bits of fraction)
  • DOUBLE allows 52 physical bits for the fraction (with hidden bit, effectively 53)

Respectively, this is 7 and 15 precise decimal digits in the whole figure. So FLOAT probably meets our 7 digit requirement.
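
The digit counts follow from the bit counts, since each bit is worth log10(2), about 0.3 decimal digits; a quick check:

SELECT 24 * LOG10(2) AS float_digits;   # ~7.2
SELECT 53 * LOG10(2) AS double_digits;  # ~16.0, of which 15 digits always survive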

To make sure, let’s work backwards and confirm just how precisely a FLOAT value can represent a latitude. To do this I will show how an example value is converted into the FLOAT representation. None of the values in our table will have latitudes greater than 77 degrees, so I will pick 76.4935. (Higher values have larger exponents and hence the least available precision for the fractional part, so are the safest test.)

First we need to determine the correct exponent for the value. Then we can work out the real-world “resolution” of the number, i.e. how much the actual value changes if we change the fraction by the smallest possible amount (i.e., in its least significant bit).

The exponent is the power of 2 by which we divide the value to “normalise” it into a value between 0 and 1. Since our starting value is more than 1, we need to look at divisors among the powers of 2 greater than one: 2, 4, 8, 16, 32, 64, 128, 256… A glance at this list shows us that for 76, a divisor of 128 (2^7) is the right one; i.e., the floating point exponent is 7. And the resulting fraction part is 76.4935 / 128, or in decimal, 0.59760546875.

(Normalisation works similarly for starting values less than one, but in the other direction. We multiply by the largest power of 2 that leaves the value less than one, and store the negative exponent. Negative values are simply dealt with by converting to positive before normalising, and noting a negative sign.)

Since the binary form of every non-zero number begins with a ‘1’, IEEE representation cleverly implies this “hidden” 1 bit, and doesn’t physically store it. This frees up one more bit for the fraction, i.e. giving a total of 24 bits for FLOAT precision. (Because of this, and “exponent biasing”, the binary sequence shown below isn’t the actual IEEE bit pattern representing this number.)

Assuming a fraction of 24 bits, let’s examine its “binary expansion”. This is effectively just the result of 0.59760546875 x 2^24, written out in base 2:

1001100 . 01111110010101100  (total 24 bits including "hidden" bit)
^^^^^^^   ^^^^^^^^^^^^^^^^^
whole #   fraction part
=    76 . 4935 (approx)

Because the exponent is 7, the first 7 bits in my binary sequence above are the whole number part (= binary 1001100 = 76). I’ve put a gap where the “binary point” belongs. Written out like this, we can see that we have 17 bits to the right of this point. 2^17 = 131072; so, around this magnitude of 76 degrees (and up to 128, as at 128 the binary exponent increases to 8), we can resolve no worse than 1/131072th of a degree. This is enough bits for five decimal digits (which only requires 1/100000th).
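
If you want to reproduce the expansion yourself, the normalisation and base conversion can be done directly in MySQL (CONV converts between number bases):

SELECT 76.4935 / POW(2, 7);   # 0.59760546875, the normalised fraction
SELECT CONV(FLOOR(0.59760546875 * POW(2, 24)), 10, 2);
# gives 100110001111110010101100: 7 whole-number bits, then the 17 fraction bits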

So how precise on the ground will this be?

1/131072th of a degree of latitude is about 0.85 metres: much better than our hoped-for resolution of 100 metres. So we have shown that FLOAT is more than adequate for the job. This is not to say that FLOAT is the correct choice for all geographic uses; only that it is adequate for this use, where we decided 100m resolution was enough. On the other hand, source data for geocoding may need more precision than FLOAT can deliver. (Naturally the extra precision must be present in the source data. Simply using a more precise type cannot add any precision to the original measurement, of course 🙂)

The analysis above can be done for DOUBLE, of course. For interest’s sake, the equivalent binary expansion is:

1001100 . 0111111001010110000001000001100010010011011101  (total 53 bits including "hidden" bit)

We have 29 more bits to play with, or a total of 46 fraction bits after the whole number part, at this magnitude. This is ridiculously precise, and can resolve no worse than 1/70368744177664th of a degree; or about 0.0000016mm on the surface of the globe. (This is enough to represent 13 decimal digits after the point.)
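
Working both resolutions out on the ground in one go:

SELECT (40075020 / 360) / POW(2, 17) AS float_res_metres;   # ~0.85
SELECT (40075020 / 360) / POW(2, 46) AS double_res_metres;  # ~0.0000000016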

This example has shown how knowing a little of how floating point works can help you be confident about issues of precision, when choosing types to represent approximate values, or measurements – rather than automatically falling back on DOUBLE or even DECIMAL as a “paranoid default”. The key is to know your data, and understand how much precision you have, and how much your application needs.

On SQL vs No-SQL

The No-SQL tag really lumps together a lot of concepts that are in fact as distinct from each other as they are from SQL/RDBMS.

An object store is not at all similar to Cassandra and Hypertable, which in turn are not at all like a column store. And when looking at BigTable derivatives, it’s quite important to realise that Google actually does joins in middle layers or apps; so while BigTable itself does not have joins, the apps essentially do use them. I’ve heard it professed that denormalising everything might be a fab idea, but I don’t quite believe that holds for all cases, just like I don’t believe that ditching the structured form of an RDBMS is the solution.

SQL/RDBMS has had a few decades of dominance now, and has thus become the great “general purpose” tool. With the ascent of all the other tools, it’s definitely worthwhile to look at them, but also to realise that each (including SQL based ones) has its place. Moving all your stuff wholesale from one to the other is probably a fail.

At the recent OpenSQL Camp in Portland, Brian Aker did a short (7 minute) talk covering some of these aspects, with a humorous angle. It’s educational, and fun!

Walking the Tree of Life in simple SQL

Antony and I are busy getting the Open Query GRAPH Engine code ready so you all can play with it, but we needed to test with a larger dataset to make sure all was fundamentally well with the system.

We have some interesting and suitable dataset sources, but the first we tried in earnest (because it was easy to get in; thanks to Roland Bouman for both the idea and for providing XSLT stylesheets to transform the set) was the Tree of Life, a hierarchy of 89052 entries showing how biological species on Earth are related to each other.

The GRAPH engine operates in a directed fashion, so I inserted each connection both ways (A->B as well as B->A), resulting in 178102 entries. We now have a real graph, not just a simple tree.
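
For the curious, the reverse edges can be added with something along these lines. Consider this a sketch rather than the exact load script: the origid/destid column names match the queries below, and it assumes the one-way connections were loaded first:

INSERT INTO tol_graph (origid, destid)
  SELECT destid, origid FROM tol_graph;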

Just like with my previous post, we have a separate table that contains the name of each species. For query simplicity, I looked up the ids for the start/end names separately. By the way, latch=1 indicates we use Dijkstra’s shortest-path algorithm for our search.
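
For example, the endpoint lookups for the first query below would be something like:

SELECT id FROM tol WHERE name = 'Life on Earth';  # 1
SELECT id FROM tol WHERE name = 'Homo sapiens';   # 16421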

# with all that explained, let’s find ourselves in the tree of life!
SELECT GROUP_CONCAT(name ORDER BY seq SEPARATOR ' -> ') AS path FROM tol_graph JOIN tol ON (linkid=id) WHERE latch=1 AND origid=1 AND destid=16421 \G
*************************** 1. row ***************************
path: Life on Earth -> Eukaryotes -> Unikonts -> Opisthokonts -> Animals -> Bilateria -> Deuterostomia -> Chordata -> Craniata -> Vertebrata -> Gnathostomata -> Teleostomi -> Osteichthyes -> Sarcopterygii -> Terrestrial Vertebrates -> Tetrapoda -> Reptiliomorpha -> Amniota -> Synapsida -> Eupelycosauria -> Sphenacodontia -> Sphenacodontoidea -> Therapsida -> Theriodontia -> Cynodontia -> Mammalia -> Eutheria -> Primates -> Catarrhini -> Hominidae -> Homo -> Homo sapiens
1 row in set (0.13 sec)

# how are we related to the family of plants containing the banana?
SELECT GROUP_CONCAT(name ORDER BY seq SEPARATOR ' -> ') AS path FROM tol_graph JOIN tol ON (linkid=id) WHERE latch=1 AND origid=16421 AND destid=21506 \G
*************************** 1. row ***************************
path: Homo sapiens -> Homo -> Hominidae -> Catarrhini -> Primates -> Eutheria -> Mammalia -> Cynodontia -> Theriodontia -> Therapsida -> Sphenacodontoidea -> Sphenacodontia -> Eupelycosauria -> Synapsida -> Amniota -> Reptiliomorpha -> Tetrapoda -> Terrestrial Vertebrates -> Sarcopterygii -> Osteichthyes -> Teleostomi -> Gnathostomata -> Vertebrata -> Craniata -> Chordata -> Deuterostomia -> Bilateria -> Animals -> Opisthokonts -> Unikonts -> Eukaryotes -> Archaeplastida (Plantae) -> Green plants -> Streptophyta -> Embryophytes -> Spermatopsida -> Angiosperms -> Monocotyledons -> Zingiberanae -> Musaceae
1 row in set (0.06 sec)

Obviously, this search needs to find its way up the tree and then down the appropriate other branch.

# finally, our connection to retro-viruses
SELECT GROUP_CONCAT(name ORDER BY seq SEPARATOR ' -> ') AS path FROM tol_graph JOIN tol ON (linkid=id) WHERE latch=1 AND origid=16421 AND destid=57380 \G
*************************** 1. row ***************************
path: Homo sapiens -> Homo -> Hominidae -> Catarrhini -> Primates -> Eutheria -> Mammalia -> Cynodontia -> Theriodontia -> Therapsida -> Sphenacodontoidea -> Sphenacodontia -> Eupelycosauria -> Synapsida -> Amniota -> Reptiliomorpha -> Tetrapoda -> Terrestrial Vertebrates -> Sarcopterygii -> Osteichthyes -> Teleostomi -> Gnathostomata -> Vertebrata -> Craniata -> Chordata -> Deuterostomia -> Bilateria -> Animals -> Opisthokonts -> Unikonts -> Eukaryotes -> Life on Earth -> Viruses -> DNA-RNA Reverse Transcribing Viruses -> Retroviridae
1 row in set (0.06 sec)

As you can see, this one has to walk all the way back to “Life on Earth”; we’re really not related at all.

I left in the lines that show the amount of time taken. In earlier queries it took a few seconds, and I thought that was just some slowness in the graph engine, until I found out that the join was un-indexed so MySQL was table-scanning the tol table for each item found. Quickly corrected, the numbers are as you see.

I was still curious though, and since the SELECT returns a single value (a string in this case), it was really easy to use the BENCHMARK(N, expr) function. That standard MySQL function evaluates expr N times. Simple.

# so, we do
SELECT BENCHMARK(1000000, (SELECT GROUP_CONCAT(name ORDER BY seq SEPARATOR ' -> ') AS path FROM tol_graph JOIN tol ON (linkid=id) WHERE latch=1 AND origid=16421 AND destid=57380));

1 row in set (1.86 sec)

As it turns out, we were really just measuring latency before, as this shows we can do a million of these path searches through a graph in less than 2 seconds. To me, that’s not just “not bad” (the usual opinion a Dutch person would express 😉) but freaking awesome. And that is just what I wanted to share.

Storage Miniconf Deadline Extended!

The linux.conf.au organisers have given all miniconfs an additional few weeks to spruik for more proposal submissions, huzzah!

So if you didn’t submit a proposal because you weren’t sure whether you’d be able to attend LCA2010, you now have until October 23 to convince your boss to send you and get your proposal in.

Storage miniconf at linux.conf.au 2010

Since you were going to linux.conf.au 2010 in Wellington, NZ in January next year anyway, you should submit a proposal to speak at the data storage and retrieval miniconf.

If you have something to say about storage hardware, file systems, RAID, LVM, databases or anything else Linux or open source and storage related, please submit early and submit often!

The call for proposals is open until September 28.

#songsincode on Twitter, SongsInCodeDB

Looking at Twitter’s #songsincode (just search on the #songsincode tag), it appears a large chunk of the geeky/nerdy world has come to a halt while spending the day expressing song titles in code. So far we’ve seen most programming languages as well as CSS and SQL come by. I think it’s a nice example of how “the collective” can become very creative. My favourite SQL one so far (by @john_chr): SELECT * FROM walk WHERE gait LIKE '%EGYPTIAN%'

Update: a good friend of mine, Steve Thorne (@Jerub), wanted to set up a site for this, so we hacked one up: SongsInCodeDB.
