Friendlist Graph Module for Drupal

At DrupalSouth 2010 (Wellington) after LCA2010, Peter and I implemented a Drupal module as a practical example of how the OQGRAPH engine can be used to enable social networking trickery in any website. The friendlist_graph module (available from GitHub) extends friendlist, which implements basic functionality of friends (2-way) and fans (1-way) for Drupal users.

The friendlist_graph module transposes the friendlist data using an OQGRAPH table, allowing you to query it in new and interesting ways. By adding some extra Drupal Views, it allows you to play Six Degrees of Kevin Bacon with your Drupal users or find out how two arbitrary users are connected. It can find a path of arbitrary length near-instantly. Previously, you’d just avoid doing any such thing as it’s somewhere between impossible/limited/slow/painful in a regular relational schema.
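
For a taste of what the Views wrap underneath, here's a sketch of the usual OQGRAPH path query – the table name and user ids are made up for illustration:

-- shortest path between two Drupal users, via the OQGRAPH mirror of friendlist
select group_concat(linkid order by seq) as path
  from friendlist_graph
  where latch=1       -- latch=1 asks the engine for a (shortest) path search
    and origid=17     -- hypothetical start user id
    and destid=4242;  -- hypothetical destination user id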

Now think beyond: retrieve/share connections using OpenSocial, FOAF, Twitter/Identi.ca, logins with OpenID, and you “instantly” get a very functional social networking enabled site that does not rely on localised critical mass!

We tested with about a million users in Drupal (and approx 3.5 million random connections), which worked fine – the later demo at DrupalSouth stuffed up because I hadn’t given the demo VM sufficient memory.

Naturally, you could do the same in Joomla! or another CMS – or any site for that matter; we just happened to be at DrupalSouth, so a Drupal module was the obvious choice. Take a peek at the code; it’s pretty trivial. Just make sure you run a version of MySQL that has the OQGRAPH engine, for instance 5.0.87-d10 (Sail edition!) from OurDelta.


Know your data – and your numeric types.


In this “Good Practice/Bad Practice” post I hope to give some guidelines for choosing between MySQL’s numeric types, using longitude and latitude as a modelling example. (Disclaimer: I am not a mathematician, and the generalisations here are meant to help with practical modelling questions rather than be rigorously theoretical.)

Numeric types in MySQL fall into two main varieties:

  • “precise” types such as INTEGER and DECIMAL;
  • the IEEE-standard floating point types FLOAT and DOUBLE.

As a rule of thumb, the first group are for exact, or “counted” quantities. The INTEGER types represent whole numbers, and DECIMAL represents “fixed point” decimal, with a preset number of places after the decimal point.

Various widths of INTEGER are available in MySQL, from 8-bit TINYINT to 64-bit BIGINT. Calculations with integer types are fast, as they usually correspond to hardware register sizes.

DECIMAL is commonly used for quantities like decimal currency where the number of digits of precision is known and fixed. For example, exactly counting pennies in two decimal digits. Computation with DECIMAL is slower than other types, but this is unlikely to impact most applications.
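
For example, a minimal money-column sketch:

create table account (balance decimal(12,2) not null);
insert into account values (1234.56);
select balance from account;  -- 1234.56, stored and returned exactly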

In the other category are FLOAT and DOUBLE, which are the 32 and 64-bit IEEE standard types, which are usually supported in hardware and are therefore fast and convenient for arithmetic. These are generally good choices for “measurements” – values with limited precision.

It is important to understand what is meant by “precision” (Wikipedia has a full discussion). For example, I can measure my height at 185.8 centimetres. Because of the way I make the measurement, which we know to be approximate, this figure is understood to have only one meaningful digit after the decimal point.

Precision is a property of all inexact real world “measurements” – such as position, length, weight, brightness; it is usually expressed as the number of “significant figures” or significant digits. (My height measurement has four significant digits.) This should be considered when values are displayed. It is somewhat misleading to represent my height as 185.8000 – common sense tells us that the value is not this accurate.

Serious problems can occur when we do not know the actual precision of measurements, if we wrongly assume a greater precision than exists. A typical example might be a GPS map display which uses measured position to locate the user relative to features such as rivers, roads, and railway tracks. The map display is high resolution and implies a great deal of precision to the viewer. Let us say that based on incoming data, it places our vehicle three metres East of a river. If the measurement has a true precision of, say, 20 metres, we cannot even know which side of the river we are on! So to allow users to draw safe conclusions, the presentation of data needs to take precision into account.

(While very frequently cited as examples of “precise” figures, not all monetary values are. For example, the value of USA’s national deficit was estimated today at $11,983,250,643,192.95. I am no economist, but rather few of these digits can actually be significant!)

Throwing away precision with inappropriate modelling is also a potential problem. Ideally measurements arrive with an implied or explicit precision. But even without that, we need to assure ourselves that we are safely storing them.

Keep in mind that we are always dealing with two kinds of precision:

  • Machine precision, which is defined by the chosen type; and
  • Data precision, which is a property of the values we are storing.

If I am storing my height as cm in a FLOAT column, the data precision is only 4 significant digits as discussed, but the machine precision of this column is always about 7 significant digits, no matter what we try to store. Clearly to avoid throwing away significant parts of your input, the machine precision should exceed your data precision.
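
A quick way to see machine precision in action (a throwaway sketch; the exact digits displayed will vary by server version):

create table h (f float, d double);
insert into h values (185.8123456, 185.8123456);
select f, d from h;
-- f survives with only about 7 significant digits (≈ 185.81235);
-- d returns all 10 digits of the input intact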

Consider a CSV file of latitude and longitude, and notice that every value has 10 digits after the point:

 Afton Station,NS,45.6051050000,-61.6974950000
 Agassiz,BC,49.2421627750,-121.7496169988

This does not mean that every value has 13 digits of precision – if that were so, then these coordinates would be accurate to within about a hundredth of a millimetre on the ground! This is clearly not possible.

Let’s say we merely want to stick markers on a web page showing a national map. One hundred metre resolution would be more than adequate. Given an Earth circumference of 40,075,020 metres, 100 metres is approximately 1/1113 of a degree. While three digits (0.001) can represent 1/1000ths of a degree, this is not quite precise enough, so let’s keep four fractional digits after the point. Therefore we are looking to represent 7 significant figures, for example:

 Afton Station,NS,45.60511,-61.69750
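
(If you want to sanity-check that 1/1113 figure:)

select 100 / (40075020 / 360) as degrees_per_100m;  -- ≈ 0.000898, i.e. about 1/1113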

Which of FLOAT or DOUBLE is the right type to use for such values? Let’s investigate.

A key difference between floating point types and fixed point representations (such as DECIMAL) is that while the overall binary precision is fixed, the precision of the fractional part of the value can vary! The available precision depends on magnitude of the value. The highest precision is available for values closest to zero, and precision gets worse as numbers increase in magnitude (by every power of two).

To understand the effect of this, it is necessary to examine floating point representation in more detail. (I am going to handwave more esoteric features of the IEEE standard and try to give a general picture. In particular I am not going to talk about rounding, biased exponents, or denormalised numbers.)

Floating point values have three parts:

  • a sign (+/-)
  • an “exponent” (binary scale factor)
  • the value itself (known as the fraction or “mantissa”).

These are analogous to decimal “scientific” or “exponential” notation that you may already be familiar with (e.g. 76.4935 = 7.64935 x 10^1 or 7.64935E+1, where 1 is the decimal exponent).

The combination of these fields precisely defines a rational number which is, ideally, “close enough” to the value you need to store (which was imprecise to begin with, so the approximation involved in converting to floating point representation is normally not a problem).

The overall precision available is determined by the bits allowed to store the fraction. For reference,

  • FLOAT allows 23 physical bits for the fraction (with hidden bit, effectively 24 bits of fraction)
  • DOUBLE allows 52 physical bits for the fraction (with hidden bit, effectively 53)

Respectively, this gives about 7 and 15 reliable decimal digits for the whole figure. So FLOAT probably meets our 7 digit requirement.

To make sure, let’s work backwards and confirm just how precisely a FLOAT value can represent a latitude. To do this I will show how an example value is converted into the FLOAT representation. None of the values in our table will have latitudes greater than 77 degrees, so I will pick 76.4935. (Values of larger magnitude have larger exponents and hence less available precision for the fractional part, so the top of the range is the safest test.)

First we need to determine the correct exponent for the value. Then we can work out the real-world “resolution” of the number, i.e. how much the actual value changes if we change the fraction by the smallest possible amount (i.e., in its least significant bit).

The exponent is the power of 2 by which we divide the value to “normalise” it into the range 0 to 1. Since our starting value is greater than 1, we look through the powers of 2: 1, 2, 4, 8, 16, 32, 64, 128, 256… For 76, the smallest divisor that brings the result under 1 is 128 (2^7); i.e., the floating point exponent is 7. And the resulting fraction part is 76.4935 / 128, or in decimal, 0.59760546875.
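
You can confirm this normalisation with a throwaway query (my own check, not part of the original argument):

select floor(log2(76.4935)) + 1 as exponent,
       76.4935 / pow(2, 7)    as fraction;
-- -> 7, 0.59760546875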

(Normalisation works similarly for starting values less than one, but in the other direction. We multiply by the largest power of 2 that leaves the value less than one, and store the negative exponent. Negative values are simply dealt with by converting to positive before normalising, and noting a negative sign.)

Since the normalised fraction of any non-zero number always begins with binary ‘1’, IEEE representation cleverly implies this “hidden” 1 bit, and doesn’t physically store it. This frees up one more bit for the fraction, i.e. giving a total of 24 bits of FLOAT precision. (Because of this, and “exponent biasing”, the binary sequence shown below isn’t the actual IEEE bit pattern representing this number.)

Assuming a fraction of 24 bits, let’s examine its “binary expansion”. This is effectively just the result of 0.59760546875 x 2^24, written out in base 2:

1001100 . 01111110010101100  (total 24 bits including "hidden" bit)
^^^^^^^   ^^^^^^^^^^^^^^^^^
whole #   fraction part
=    76 . 4935 (approx)

Because the exponent is 7, the first 7 bits in my binary sequence above are the whole number part (= binary 1001100 = 76). I’ve put a gap where the “binary point” belongs. Written out like this, we can see that we have 17 bits to the right of this point. 2^17 = 131072; so, around this magnitude of 76 degrees (and up to 128, as at 128 the binary exponent increases to 8), we can resolve no worse than 1/131072th of a degree. This is enough bits for five decimal digits (which only requires 1/100000th). (Note that longitudes in this data do exceed 128 in magnitude – western Canada reaches past 128°W – which costs one fraction bit, leaving 1/65536th of a degree; as we’ll see below, that is still far finer than we need.)
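
If you’d like to double-check that expansion, one rough way in SQL: multiplying by 2^17 shifts all 24 significant bits to the left of the binary point, and CONV then writes the result out in base 2 (ROUND stands in for the nearest-value rounding a real conversion does):

select conv(round(76.4935 * pow(2, 17)), 10, 2) as bits24;
-- -> 100110001111110010101100  (whole part, then the 17 fraction bits)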

So how precise on the ground will this be?

1/131072th of a degree of latitude is about 0.85 metres: much better than our hoped for resolution of 100 metres. So we have shown that FLOAT is more than adequate for the job. This is not to say that FLOAT is the correct choice for all geographic uses; only that it is adequate for this use, where we decided 100m resolution was enough. On the other hand, source data for geocoding may need more precision than FLOAT can deliver. (Naturally the extra precision must be present in the source data. Simply using a more precise type cannot add any precision to the original measurement, of course. 🙂)
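
The underlying arithmetic, if you want to check it:

select 40075020 / 360 / pow(2, 17) as metres_per_step;  -- ≈ 0.85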

The analysis above can be done for DOUBLE, of course. For interest’s sake, the equivalent binary expansion is:

1001100 . 0111111001010110000001000001100010010011011101  (total 53 bits including "hidden" bit)

We have 29 more bits to play with, or a total of 46 fraction bits after the whole number part, at this magnitude. This is ridiculously precise, and can resolve no worse than 1/70368744177664th of a degree; or 0.0000015mm on the surface of the globe. (This is enough to represent 13 decimal digits after the point.)

This example has shown how knowing a little of how floating point works can help you be confident about issues of precision, when choosing types to represent approximate values, or measurements – rather than automatically falling back on DOUBLE or even DECIMAL as a “paranoid default”. The key is to know your data, and understand how much precision you have, and how much your application needs.


OQGRAPH at OpenSQL Camp 2009, Portland

Antony is travelling up to Portland for this great event, which starts Friday evening and runs over the weekend. He’ll be showing other devs and people more about the OQGRAPH engine, and gathering useful feedback.

Open Query is, together with many others (I see Giuseppe, Facebook, Gear6, Google, Infobright, Jeremy Cole, PrimeBase Technologies, Percona, Monty Program, and lots more), sponsoring the event so that it’s accessible for everybody – making getting there the key factor, rather than having to worry about high conference fees.

Having acquired the world’s biggest jetlag flying to Charlottesville VA for last year’s OpenSQL Camp, I can confirm from personal experience that it’s a great event. While I can’t be there this time, I’m looking forward to hearing all about it!


OQGRAPH engine on MySQL University – 5 Nov 2009 10:00 UTC

Only a few weeks after Walter’s session on Multi-Master Replication with MMM, and thanks to the great gang at MySQL Docs (my colleagues from long ago!), I’ll be doing a MySQL University session in a few days, about the GRAPH computation engine. From talks/demos I’ve done about it so far, I’ve learnt that people love it but there are lots of interesting questions. After all, it’s a pretty new and in a way exotic thing.

MySQL University uses DimDim, an online presentation service. You’ll see slides, and hear my voice. You can also type questions in a live chat room. We actually even got desktop sharing working, so a live demo is possible; we’ll see how that goes on the day (I’ll make sure to have static slides as a fallback, too 😉).

For session details and the exact link to DimDim, see the MySQL uni page for the OQGRAPH session.

To attend, please calculate the starting time for your local timezone! It’ll be very early in the morning for US people, however for Europe it will be late morning, and Asia/Pacific will be evening. If you miss the live session, there’ll be a recording online soon afterwards, and of course you can contact me with questions anyway. Still, it would be cool if lots of people attended live; that’s always extra useful. Hope to meet you there!


thread_stack_size in my.cnf

Many configs have thread_stack_size configured explicitly, but that can cause rather bad trouble:

  • if the stack inside a thread is too small, you can get segfault crashes (stack overflow, essentially) – particularly on 64-bit.
  • if the stack is too large, your system cannot handle as many connections since it all eats RAM.

Let mysqld sort it out: on startup it does a calculation based on the CPU architecture, and that’s actually the most sensible value. So for almost all setups, remove any thread_stack_size=… line you might have in my.cnf.
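
To see what mysqld settled on, check the server variable (which is actually called thread_stack):

show global variables like 'thread_stack';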


OQGRAPH on Launchpad, graph examples

The OQGRAPH source code for MySQL 5.0 and MySQL/MariaDB 5.1 is now also available through Launchpad. If you were waiting for a version for 5.1 and are ok with building the plugin from source, now you can!

The repo contains a subdir for examples; we’re hoping many people will contribute little snippets and scripts to import and use interesting datasets. To give you a hint: with graph capabilities you are able to deal with RDF data sources. You just need to transform the XML to, say, CSV, import it into a suitable structure, and copy the edge information across to an OQGRAPH table.
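
A rough sketch of those last two steps – every table and column name here is invented for illustration:

-- load the CSV you generated from the RDF/XML into a staging table
load data infile '/tmp/edges.csv' into table edge_staging
  fields terminated by ',' (parent_id, child_id);
-- then copy the edge list across to the OQGRAPH table
insert into my_graph (origid, destid)
  select parent_id, child_id from edge_staging;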

Roland Bouman’s tree-of-life (which uses xslt stylesheets) is a good example of that approach, and it was the first entry in the examples tree, including an SQL dump of the base dataset (it’s CC-NC licensed) so you don’t necessarily have to fuss with the RDF/xslt foo.

Enjoy! We want to have examples/demos, a proper testsuite (there’s a bug/wishlist for that), and more. If you can help, please do: mucking around with graphs is great fun. If you implement OQGRAPH in a “proper” app, we’d also like to hear from you. The examples are intended to get people used to what OQGRAPH can do, and thus trigger ideas for practical uses. It’s not just fun. With OQGRAPH’s capabilities and speed, you can profit.


GRAPH engine – Mk.II

The GRAPH engine allows you to deal with hierarchies and graphs in a purely relational way. So, we can find all children of an item, path from an item to a root node, shortest path between two items, and so on, each with a simple basic query structure using standard SQL grammar.

The engine is implemented as a MySQL/MariaDB 5.1 plugin (we’re working on a 5.0 backport for some clients) and thus runs with an unmodified server.

Demo time! I’ll simplify/strip a little bit here for space reasons, but what’s here is plain cut/paste from a running server – no edits.
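
(Among the stripped bits are the table definitions. For reference, an OQGRAPH table of this vintage looks roughly like the sketch below – the column set is dictated by the engine; the people definition is my reconstruction, not cut/paste:)

create table foo (
  latch  smallint unsigned null,
  origid bigint unsigned null,
  destid bigint unsigned null,
  weight double null,
  seq    bigint unsigned null,
  linkid bigint unsigned null
) engine=oqgraph;
create table people (id int unsigned primary key, name varchar(32));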

-- insert a few entries with connections (and multiple paths)
insert into foo (origid, destid) values (1,2), (2,3), (2,4), (4,5), (3,6), (5,6);
-- a regular table to join on to
insert into people values (1,"pearce"),(2,"hunnicut"),(3,"potter"),
                          (4,"hoolihan"),(5,"winchester"),(6,"mulcahy");
-- find us the shortest path from pearce (1) to mulcahy (6) please
select origid, destid, group_concat(people.name order by seq) as path
  from foo join people on (foo.linkid=people.id)
  where latch=1 and origid=1 and destid=6;
+--------+--------+--------------------------------+
| origid | destid | path                           |
+--------+--------+--------------------------------+
|      1 |      6 | pearce,hunnicut,potter,mulcahy |
+--------+--------+--------------------------------+
-- find us all people we can get to from potter (3)
select origid,group_concat(people.name order by seq) as destinations
  from foo join people on (foo.linkid=people.id)
  where latch=1 and origid=3;
+--------+----------------+
| origid | destinations   |
+--------+----------------+
|      3 | mulcahy,potter |
+--------+----------------+

-- find us all people from where we can get to hoolihan (4)
select origid,group_concat(people.name order by seq) as origins
  from foo join people on (foo.linkid=people.id)
  where latch=1 and destid=4;
+--------+--------------------------+
| origid | origins                  |
+--------+--------------------------+
|      4 | hoolihan,hunnicut,pearce |
+--------+--------------------------+

So, there you have it. A graph (in this case a simple unidirectional tree, aka hierarchy) that looks like a table to us, as do the resultsets that have been computed.

This is still an early implementation; we’re still enhancing the storage efficiency (in memory) and speed, and adding persistence. We’re also looking for a suitable large dataset that would allow us to seriously test the system, find bugs and assess speed. If you happen to have a large hierarchical structure, but especially a social graph you could obfuscate and give to us, that would be great!

Also, if you’re interested in deploying the GRAPH engine or have questions or additional needs, we’d be happy to talk with you.

PS: one more from the same session – the other direction, all people we can get to from hoolihan (4):

select origid,group_concat(people.name order by seq) as destinations
  from foo join people on (foo.linkid=people.id)
  where latch=1 and origid=4;
+--------+-----------------------------+
| origid | destinations                |
+--------+-----------------------------+
|      4 | mulcahy,winchester,hoolihan |
+--------+-----------------------------+

Open Database Alliance

This alliance is an excellent step, showing the maturity, breadth and depth of expertise for MySQL-related services! Of course Open Query is an active early member, with our training and subscription services, and initiatives like the OurDelta builds project.

Kudos to MontyW and PeterZ for driving this further while at the MySQL Conference last month.
