
Open Query on Twitter/Identi.ca

Open Query now has its own @openquery account on Twitter and Identi.ca so you can conveniently follow us there for announcements and tips – and also ask us questions! All OQ engineers can post/reply. The OQ site front page also tracks this feed.

Previously I was posting from my personal @arjenlentz account with #openquery hashtag, but that’s obviously less practical.


Visiting Monty HQ

On this big trip, I made a particular effort to finally visit Monty at his home near Helsinki. Somehow, in all my years at MySQL AB, this never happened – a sad omission. So, I spent the Easter days with Monty, Anna and now 5yo Maria.

I’m not a fan of most meetings, and in many cases in-person meetings are not actually necessary to get things organised or done, but this one was both highly enjoyable and productive for our respective businesses and joint interests. Good company, discussion, food, drink, sauna… fabulous.

It’s a great pity we live on opposite sides of the planet, as we do get along very well together. We definitely don’t agree on everything, but we’re always absolutely direct with each other, and try to provide good arguments whenever we disagree, to explore things further.


OQGRAPH at OpenSQL Camp 2009, Portland

Antony is travelling up to Portland for this great event, which starts Friday evening and runs over the weekend. He’ll be showing other developers and attendees more about the OQGRAPH engine, and gathering useful feedback.

Open Query is, together with many others (I see Giuseppe, Facebook, Gear6, Google, Infobright, Jeremy Cole, PrimeBase Technologies, Percona, Monty Program, and lots more), sponsoring the event so that it’s accessible to everybody – the key factor becomes getting there, rather than having to worry about high conference fees.

Having acquired the world’s biggest jetlag flying to Charlottesville VA for last year’s OpenSQL Camp, I can confirm from personal experience that it’s a great event. While I can’t be there this time, I’m looking forward to hearing all about it!


OQGRAPH session on MySQL University – recording now available

It was fun doing the MySQL University session on OQGRAPH yesterday. Now also available: slides (PDF) and the audio/video recording (FLV download – if anyone can convert it to a more open format, that’d be great).


thread_stack_size in my.cnf

Many configs have thread_stack_size configured explicitly, but that can cause rather bad trouble:

  • if the stack inside a thread is too small, you can get segfault crashes (stack overflow, essentially) – particularly on 64-bit systems.
  • if the stack is too large, your system cannot handle as many connections, since every thread eats that much RAM.

Let mysqld sort it out: on startup it does a calculation based on the CPU architecture, and that’s actually the most sensible. So for almost all setups, remove any thread_stack_size=… line you might have in my.cnf, and check what the server picked as shown below.
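
To verify what the server ended up with, a quick check – just a sketch; note that the server variable is actually named thread_stack in MySQL 5.0/5.1, and the default value in the comment is only an example:

-- show the per-thread stack size the running server is using
show variables like 'thread_stack%';
-- with no explicit setting in my.cnf, this shows the architecture-based
-- default after a restart (e.g. 256KB on many 64-bit builds)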


GRAPH Engine source in MySQL 5.0.86-d10 available now

It’s time to play! A special thanks to Antony Curtis for his excellent, smart and very speedy coding, and for just being a great guy to work with. If you would like to utilise his ace MySQL knowledge and coding skills, do talk to me!

Right now, we have a source tarball available for you, patching OQGRAPH on top of a MySQL 5.0.86-d9-Sail (OurDelta) source. As you may know, MySQL 5.0 does not have pluggable storage engines, so patching is the only way we can put it in. This OQGRAPH codebase is licensed under GPLv2+.

Even though we’ve successfully built it on several platforms and architectures, this is the first public release, so we’d like you to try it first – there may well be problems on some platforms. Once we catch and fix those, we can do proper package builds.

You will find the link to the source tarball, and the other necessary instructions and configuration details, on the documentation page. It’s tempting to skim through it and just start playing, but I recommend you really read it through first: this engine is quite different. Please explore, and tell us what you think!

To contact Open Query directly about the GRAPH engine, email g r a p h (at) o p e n q u e r y (dot) c o m


MySQL University session Oct 22: Dual Master Setups With MMM

This Thursday (October 22nd, 13:00 UTC), Walter Heck (of Open Query) will present Dual Master Setups With MMM. MMM (Multi-Master Replication Manager for MySQL) is a set of flexible scripts to perform monitoring/failover and management of MySQL master-master replication configurations (with only one node writable at any time). Session slides (PDF).

The toolset also has the ability to read-balance standard master/slave configurations with any number of slaves, so you can use it to move virtual IP addresses around a group of servers depending on whether they are behind in replication. For more information, see mysql-mmm.org.
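
For context, this is roughly what the dual-master pair underneath MMM looks like – a minimal hand-rolled sketch, not MMM itself; hostnames, credentials and log coordinates are made-up placeholders:

-- on node A: replicate from node B (mirror this on node B, pointing at A)
change master to
  master_host='nodeB.example.com',
  master_user='repl',
  master_password='replpass',
  master_log_file='mysql-bin.000001',
  master_log_pos=106;
start slave;
-- keep auto_increment values from colliding between the two masters
set global auto_increment_increment = 2;
set global auto_increment_offset = 1;  -- use offset 2 on node B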

For MySQL University sessions, point your browser here. You need a browser with a working Flash plugin. You may register for a Dimdim account, but you don’t have to.


GRAPH engine – Mk.II

The GRAPH engine allows you to deal with hierarchies and graphs in a purely relational way. So, we can find all children of an item, the path from an item to a root node, the shortest path between two items, and so on, each with a simple query using standard SQL grammar.

The engine is implemented as a MySQL/MariaDB 5.1 plugin (we’re working on a 5.0 backport for some clients) and thus runs with an unmodified server.
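
Loading it into a 5.1 server is a one-liner – a sketch, as the exact library filename depends on your build (it’s ha_oqgraph in later MariaDB packages):

-- load the engine plugin into a running 5.1 server; the library
-- filename varies by build (ha_oqgraph.so in later MariaDB packages)
install plugin oqgraph soname 'ha_oqgraph.so';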

Demo time! I’ll simplify/strip a little bit here for space reasons, but what’s here is plain cut/paste from a running server, no edits.
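
Since the table definitions are among what I stripped, here’s a sketch of what they would look like – the graph table follows the documented OQGRAPH column layout (latch, origid, destid, weight, seq, linkid), while the people definition is simply assumed:

-- graph table: we write latch/origid/destid (and optionally weight);
-- seq and linkid are computed by the engine when we query it
create table foo (
  latch   smallint unsigned null,
  origid  bigint unsigned null,
  destid  bigint unsigned null,
  weight  double null,
  seq     bigint unsigned null,
  linkid  bigint unsigned null,
  key (latch, origid, destid) using hash,
  key (latch, destid, origid) using hash
) engine=oqgraph;
-- ordinary table holding the names we join on (assumed definition)
create table people (id int unsigned primary key, name varchar(32));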

-- insert a few entries with connections (and multiple paths)
insert into foo (origid, destid) values (1,2), (2,3), (2,4), (4,5), (3,6), (5,6);
-- a regular table to join on to
insert into people values (1,"pearce"),(2,"hunnicut"),(3,"potter"),
                          (4,"hoolihan"),(5,"winchester"),(6,"mulcahy");
-- find us the shortest path from pearce (1) to mulcahy (6) please
select origid, destid, group_concat(people.name order by seq) as path
  from foo join people on (foo.linkid=people.id)
  where latch=1 and origid=1 and destid=6;
+--------+--------+--------------------------------+
| origid | destid | path                           |
+--------+--------+--------------------------------+
|      1 |      6 | pearce,hunnicut,potter,mulcahy |
+--------+--------+--------------------------------+
-- find us all people we can get to from potter (3)
select origid,group_concat(people.name order by seq) as destinations
  from foo join people on (foo.linkid=people.id)
  where latch=1 and origid=3;
+--------+----------------+
| origid | destinations   |
+--------+----------------+
|      3 | mulcahy,potter |
+--------+----------------+

-- find us all people from where we can get to hoolihan (4)
select origid,group_concat(people.name order by seq) as origins
  from foo join people on (foo.linkid=people.id)
  where latch=1 and destid=4;
+--------+--------------------------+
| origid | origins                  |
+--------+--------------------------+
|      4 | hoolihan,hunnicut,pearce |
+--------+--------------------------+

So, there you have it. A graph (in this case a simple unidirectional tree, aka hierarchy) that looks like a table to us, as do the resultsets that have been computed.

This is still an early implementation; we’re still enhancing the storage efficiency (in memory) and speed, and adding persistence. We’re also looking for a suitable large dataset that would allow us to seriously test the system, find bugs and assess speed. If you happen to have a large hierarchical structure, but especially a social graph, that you could obfuscate and give to us, that would be great!

Also, if you’re interested in deploying the GRAPH engine or have questions or additional needs, we’d be happy to talk with you.

-- and all the people we can get to from hoolihan (4)
select origid,group_concat(people.name order by seq) as destinations
  from foo join people on (foo.linkid=people.id)
  where latch=1 and origid=4;
+--------+-----------------------------+
| origid | destinations                |
+--------+-----------------------------+
|      4 | mulcahy,winchester,hoolihan |
+--------+-----------------------------+

RAM flakier than expected

Ref: Google: Computer memory flakier than expected (CNET DeepTech, Stephen Shankland)

Summary: According to tests at Google, it appears that today’s RAM modules have several thousand errors a year, which would be correctable if it weren’t for the fact that most of us aren’t using ECC RAM.

Previous research, such as some data from a 300-computer cluster, showed that memory modules had correctable error rates of 200 to 5,000 failures per billion hours of operation. Google, though, found the rate much higher: 25,000 to 75,000 failures per billion hours.

This is quite relevant for database servers, since they write a lot rather than mainly reading (as a desktop does). In the MySQL context, if a bit gets flipped in RAM, your data could get corrupted on its way to disk – or the data is fine on disk and you’re just reading a corrupted copy. While using more RAM is good for performance, it also means a bigger RAM footprint for your data, and thus more exposure to the issue.

In MySQL 5.0 and the generally available 5.1, the binary and relay logs do not have checksums on log events. If something gets corrupted anywhere on disk, or on its way to disk, garbage comes out – and we have seen instances where this happens. There are patches to add a checksum to the binlog structure (Google worked on this), and we’ll be pushing for this to be ported into MariaDB 5.1 urgently. It’s no use having it only in later versions. It does change the on-disk format, but so be it. This is very, very important stuff.

FYI, InnoDB does use page checksums, which are also stored on disk. There is an option to turn them off, but our general recommendation would be not to do that 😉 What about the iblog (transaction log) files, though? Normally they just refer to pages which at some stage get flushed, but a) if through a glitch they refer to a different page, that could lose some committed data, and b) on recovery, it could directly affect data. Mind you, I’m conjecturing here; more research necessary!
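
If you inherit a server and want to make sure page checksums haven’t been disabled, a quick check – a sketch; innodb_checksums is the option name in MySQL 5.0/5.1:

-- verify InnoDB page checksums are enabled (ON is the default)
show variables like 'innodb_checksums';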

Naturally this does not just affect database systems, file systems too can easily suffer from RAM glitches – probably with the exception of ZFS, since it has checksums everywhere and keeps them separate from the data.

This applies to anything that keeps data around in RAM and/or is write intensive – Memcached, for instance! How do other database systems fare in this respect?

Note: this post is not intended to be alarmist; I just think it’s good to be aware of things so they can be taken into account when designing systems. If you look closely at any system, there are things that can potentially be cause for concern. That doesn’t mean we shouldn’t use them, per se.
