A million tables

Of course it's silly; anyone considering anything like this should seriously look at their design! But someone else blogged about this, and that got me curious. He ran on Win2003 with NTFS and created InnoDB tables.

I happened to have a RHEL4 box (P4) floating around, so I wrote a little PHP script:

<?php
// Connect over the local socket; ext/mysql accepts ':/path/to/socket' as the host.
$db = mysql_connect(':/tmp/mysql.sock','user','pwd')
     or die(mysql_errno().': '.mysql_error());
// The 'maxtables' database must already exist.
mysql_select_db('maxtables',$db)
     or die(mysql_errno().': '.mysql_error());

// Create a million one-column MyISAM tables: t1 .. t1000000.
for ($i = 1; $i <= 1000000; $i++) {
  print "\r{$i}\t";   // progress counter, rewritten on a single line
  $query = "CREATE TABLE t{$i} (i TINYINT) ENGINE=MyISAM";
  mysql_query($query)
    or die(mysql_errno().': '.mysql_error());
}

mysql_close($db);
?>

It took 2 1/2 hours to complete on a 4.1 server (there's a 5.0 on the box but it's handling other stuff). Not too shabby and no problems encountered.
Unfortunately, the filesystem is ext3, so removing the 3 million files (3 per MyISAM table: .frm, .MYD and .MYI; I opted for just shutting down the server and zapping the directory) has now been going for some hours already. A plain "rm *" in the directory actually falls over (the shell expands a million filenames and blows past the argument-list limit), but "rm -rf dirname" does appear to be doing its job (albeit it's taking its sweet time).
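
For completeness, the tables could also have been dropped through SQL instead of zapping the directory. Here's a minimal sketch, mirroring the creation script above (same socket, credentials and database; DROP TABLE IF EXISTS simply skips any table that was never created):

<?php
// Drop the million test tables via SQL rather than at the filesystem level.
$db = mysql_connect(':/tmp/mysql.sock','user','pwd')
     or die(mysql_errno().': '.mysql_error());
mysql_select_db('maxtables',$db)
     or die(mysql_errno().': '.mysql_error());

for ($i = 1; $i <= 1000000; $i++) {
  print "\r{$i}\t";
  // IF EXISTS keeps the loop going past tables that don't exist.
  mysql_query("DROP TABLE IF EXISTS t{$i}")
    or die(mysql_errno().': '.mysql_error());
}

mysql_close($db);
?>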

I don't know if I'll bother trying the same with InnoDB. If you'd care to, go for it and please report back your findings. On Linux, you probably want to use ReiserFS: it's far more efficient at opening/closing files and at handling large directories. Just the thing.
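
For reference, trying the same with InnoDB should only require changing the engine clause in the script above:

$query = "CREATE TABLE t{$i} (i TINYINT) ENGINE=InnoDB";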

I see no reason why it wouldn't work. MySQL just doesn't crash/hang on big stuff. If there is a hard internal limit on something, you will get a proper error. Sometimes there's an external cause (like the operating system or filesystem blowing up) and it can't always trap that, so nastiness can still happen. But the general rule is that if you see MySQL crash and it's a good build from mysql.com, it's likely to be a bug, so please do report it (bugs.mysql.com). It's just not "normal", and people should never become accepting of crashing programs.

The author of the original blog entry tells me that his mysql instance didn't actually hang; it was just his script that did. But he was running it remotely, so that could have been anything. He could connect/reconnect just fine.

So anyway, creating a million tables in MySQL is possible. Easily proven. Still doesn't mean you should do it 😉

5 thoughts on “A million tables”

  1. One limit is the InnoDB data dictionary, which keeps growing while you're creating the tables. For a case with 120,000 or so tables I ended up with about 700 megabytes of RAM used on a 512MB system, courtesy of virtual memory. Easy to imagine running out of RAM and being killed by the OOM killer, say.

  2. It takes a server restart to free the data dictionary RAM; it will be allocated again as each table is opened following the restart.

  3. So the workaround for that would be to do it in chunks to reduce memory use… but the server will be able to handle it regardless, as long as the OS doesn't zap it for using too much memory; that's a matter of OS settings.

  4. Dealing with such a number of tables or other database objects under MySQL is a nightmare and not recommended:

    InnoDB needs extra space for each table, which will never be freed unless you shut down the server.

    Another problem is that all information (TABLE/VIEW definitions) is stored in a separate .frm file for each table. This slows down retrieval of metadata when accessing information_schema (a small illustration follows below).
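
To illustrate that last point: even a simple count from information_schema (available from MySQL 5.0 on) forces the server to gather metadata for every table, which with a million .frm files can take a very long time. A hypothetical sketch, reusing the connection details from the script above:

<?php
// Count the test tables through information_schema; the server has to
// collect metadata for each of the million tables to answer this.
$db = mysql_connect(':/tmp/mysql.sock','user','pwd')
     or die(mysql_errno().': '.mysql_error());

$result = mysql_query(
    "SELECT COUNT(*) FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = 'maxtables'", $db)
    or die(mysql_errno().': '.mysql_error());

$row = mysql_fetch_row($result);
print "tables: {$row[0]}\n";

mysql_close($db);
?>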
