Posted on

Amazon AWS Billing

During our recent AWS research I observed again how their billing seems to lack clarity and granularity.  That is, it’s very difficult to figure out in detail what’s going on and where particular charges actually come from.  If you have a large server farm at AWS, that’s going to be a problem.  Unless of course you just pay whatever and don’t mind what’s on the bill?

Of particular interest was a charge for inter-AZ (DC) traffic.  Not because it was large, it was just a few cents.  But it was odd.  So this would be outbound traffic from our short-lived test server(s) to another AWS availability zone (datacenter) in the US.  This gets charged at a couple of cents per GB.

AWS billing 3-cent breakdown

I had a yarn with billing support, and they noted that the AWS-side firewall had some ports open, and because they can’t see what goes on inside a server, they regarded it as valid traffic.  The only service active on the server was SSH, and it keeps its own logs.  While doing the Aurora testing we weren’t specifically looking at this, so by the time the billing info showed up (over a day later), the server had already been decommissioned, and with it those logs.

As a side note: presuming this was “valid” traffic, someone was using AWS servers to scan other AWS servers and try to gain access.  I figured such activity, clearly in breach of AWS policies, would be of interest to AWS, but it wasn’t.  Seems a bit neglectful to me.  And with all their tech, shouldn’t their systems be able to spot such activities automatically?

Some days later I fired up another server specifically to investigate the potential for rogue outbound traffic.  I again left SSH accessible to the world, to emulate the potential of being accessed from elsewhere, while keeping an eye on the log.  This test server only existed for a number of hours, and was fully monitored internally so we know exactly what went on.  Obviously, we had to leave the AWS-side firewall open to be able to perform the test.  Over hours there were a few login attempts, but nothing major.  There would have to be many thousands of login attempts to create a GB of outbound traffic – consider that there’s no SSH connection actually getting established, the attempts don’t get beyond the authentication stage so it’s just some handshake and the rejection going out.  So no such traffic was seen in this instance.
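As a rough sanity check of that claim, consider the arithmetic below; the outbound byte count per rejected attempt is our assumption (covering banner, key exchange and rejection packets), not a measured figure:

```python
# Back-of-envelope estimate: how many failed SSH logins would it take
# to generate 1 GB of outbound traffic?
# ~5 KB per rejected attempt is an assumed figure; real numbers vary
# with SSH configuration, ciphers and banner length.
BYTES_PER_FAILED_ATTEMPT = 5 * 1024

GB = 10**9  # traffic is billed in decimal gigabytes
attempts_for_one_gb = GB // BYTES_PER_FAILED_ATTEMPT
print(attempts_for_one_gb)  # roughly 195,000 attempts
```

So “many thousands” is, if anything, an understatement.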

Of course, the presumed earlier SSH attempts may have just been the result of a scanning server getting lucky, whereas my later test server didn’t “get lucky” being scanned. It’s possible. To increase the possible attack surface, we put an nc (netcat) listener on ports 80, 443 and some others, just logging any connection attempt without returning outbound traffic.  This again saw one or two attempts, but no major flood.

I figured that was the end of it, and shut down the server.  But come billing time, we once again see a cent-charge for similar traffic.

And this time we know it definitely didn’t exist, because we were specifically monitoring for it.  So we know for a fact that we were billed for traffic that didn’t happen.  “How quaint.”  Naturally, because the AWS-side firewall was deliberately left open, AWS billing doesn’t want to hear about it.  I suppose we could re-run the test with a fully locked-down AWS-side firewall, but it’s starting to chew up too much time.

My issues are these:

  1. For a high tech company, AWS billing is remarkably obtuse.  That’s a business choice, and not to the clients’ benefit.  Some years ago I asked my accountant whether my telco bill (Telstra at the time) was so convoluted for any particular accounting reason I wasn’t aware of, and her answer was simply “no”.  This is the same.  The company chooses to make their bills difficult to read and make sense of.
  2. Internal AWS network monitoring must be inadequate, if AWS-hosted servers can do sweep scans and mass SSH login attempts.  Those are patterns that can be caught.  That is, presuming that those scans and attempts actually happened.  If they didn’t, the earlier traffic didn’t exist either, in which case we’re getting billed for stuff that didn’t happen.  It’s one or the other, right?  (Based on the above observations, my bet is actually on the billing rather than the internal network monitoring – AWS employs very smart techs.)
  3. Us getting billed an extra cent on a bill doesn’t break the bank, but since it’s for something that we know didn’t happen, it’s annoying. It makes us wonder what else is wrong in the billing system, and whether other people too might get charged a cent extra here or there.  Does this matter?  Yes it does, because on the AWS end it adds up to many millions.  And for bigger AWS clients, it will add up over time also.

Does anybody care?  I don’t know.  Do you care?


Oh, and while Google (AdWords, etc), Microsoft and others have over the last few years adjusted their invoicing in Australia to produce an appropriate tax invoice with GST (sales tax) even though the billing is done from elsewhere (Singapore in Google’s case), AWS doesn’t do this.  Australian server instances are all sold from AWS Inc. in Seattle, WA.


RDS Aurora MySQL Cost

I promised to do a pricing post on Amazon RDS Aurora MySQL, so here we go.  All pricing is noted in USD (we’ll explain why later).

We compared pricing of equivalent EC2+EBS server instances, and verified our calculation model against Amazon’s own calculator and examples.  We use the pricing for Australia (Sydney data centre), taking the pricing numbers, formulae, and calculation examples from the relevant Amazon pricing pages.

Base Pricing Details

Specs                                               EC2                   RDS Aurora MySQL
instance type   vCPU   ECU    GB RAM   Storage     Linux/hr    instance type     Price/hr
r4.large          2      7     15.25   EBS Only     $0.160     db.r4.large        $0.350
r4.xlarge         4     13.5   30.5    EBS Only     $0.319     db.r4.xlarge       $0.700
r4.2xlarge        8     27     61      EBS Only     $0.638     db.r4.2xlarge      $1.400
r4.4xlarge       16     53    122      EBS Only     $1.277     db.r4.4xlarge      $2.800
r4.8xlarge       32     99    244      EBS Only     $2.554     db.r4.8xlarge      $5.600
r4.16xlarge      64    195    488      EBS Only     $5.107     db.r4.16xlarge    $11.200

That’s not all we need, because both EBS and Aurora have some additional costs we need to factor in.

EBS pricing components (EBS Provisioned IOPS SSD (io1) volume)

“Volume storage for EBS Provisioned IOPS SSD (io1) volumes is charged by the amount you provision in GB per month until you release the storage. With Provisioned IOPS SSD (io1) volumes, you are also charged by the amount you provision in IOPS (input/output operations per second) per month. Provisioned storage and provisioned IOPS for io1 volumes will be billed in per-second increments, with a 60 second minimum.”

  • Storage Rate $0.138 /GB/month of provisioned storage
    “For example, let’s say that you provision a 2000 GB volume for 12 hours (43,200 seconds) in a 30 day month. In a region that charges $0.125 per GB-month, you would be charged $4.167 for the volume ($0.125 per GB-month * 2000 GB * 43,200 seconds / (86,400 seconds/day * 30 day-month)).”
  • I/O Rate $0.072 /provisioned IOPS-month
    “Additionally, you provision 1000 IOPS for your volume. In a region that charges $0.065 per provisioned IOPS-month, you would be charged $1.083 for the IOPS that you provisioned ($0.065 per provisioned IOPS-month * 1000 IOPS provisioned * 43,200 seconds /(86,400 seconds /day * 30 day-month)).”
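Both of Amazon’s worked examples above reduce to the same pro-rata formula; here is a quick check in Python (the function name is ours, and the rates are from Amazon’s examples, not the Sydney rates listed above):

```python
def ebs_monthly_charge(rate_per_month, provisioned_units, seconds_used,
                       days_in_month=30):
    """Pro-rata EBS charge: monthly rate * provisioned units * the
    fraction of the month the volume existed."""
    seconds_in_month = 86400 * days_in_month
    return rate_per_month * provisioned_units * seconds_used / seconds_in_month

# 2000 GB volume for 12 hours (43,200 s) at $0.125/GB-month
print(round(ebs_monthly_charge(0.125, 2000, 43200), 3))   # 4.167
# 1000 provisioned IOPS for the same period at $0.065/IOPS-month
print(round(ebs_monthly_charge(0.065, 1000, 43200), 3))   # 1.083
```

Both results match Amazon’s own example figures of $4.167 and $1.083.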

Other Aurora pricing components

  • Storage Rate $0.110 /GB/month
    (No price calculation examples given for Aurora storage and I/O)
  • I/O Rate $0.220 /1 million requests
    (Presuming IOPS equivalence, with the Aurora I/O ratio taken from the architecture talk)

So this provides us with a common base: instance types that are equivalent between Aurora and EC2.  All other Aurora instance types are different, so a direct comparison is not possible in those cases.  Presumably the pricing ratio will be similar for equivalent specs.

On Demand vs Reserved Instances

We realise we’re calculating on the basis of On Demand pricing.  But we’re comparing pricing within AWS space, so presumably the savings for Reserved Instances are in a similar ballpark.

Other factors

  • We have 720 hours in a 30 day month, which is 2,592,000 seconds.
  • 70% read/write ratio: 70% of operations are reads (used to calculate the effective Aurora IOPS).
  • 10% read cache miss: 10% of reads are not served from the buffer pool and actually reach storage.
  • Aurora I/O ratio: 3 (Aurora requiring 2 IOPS for a commit vs 6 in MySQL – even though that pessimistic MySQL baseline is a pile of extreme hogwash).
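Putting those factors together, this is our reconstruction of the effective I/O model; the formulae are our interpretation of the factors above, not anything Amazon publishes:

```python
READ_RATIO = 0.70        # 70% of operations are reads
READ_CACHE_MISS = 0.10   # 10% of reads actually reach storage
AURORA_IO_RATIO = 3      # Aurora's claimed I/O advantage (see caveat above)

def effective_ebs_iops(workload_iops):
    # Reads that miss the buffer pool, plus all writes, hit the EBS volume.
    reads = workload_iops * READ_RATIO * READ_CACHE_MISS
    writes = workload_iops - workload_iops * READ_RATIO
    return reads + writes

def aurora_requests_per_sec(workload_iops):
    # Assumption: the same effective I/O stream, reduced by the Aurora ratio.
    return effective_ebs_iops(workload_iops) / AURORA_IO_RATIO

print(round(effective_ebs_iops(2000)))           # 740 effective IOPS
print(round(aurora_requests_per_sec(2000), 1))   # 246.7 requests/s
```

A 2,000 IOPS workload thus provisions 740 EBS IOPS, which is where the per-server IOPS figure in the scenario table below comes from.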

We also spotted this note regarding cross-AZ Aurora traffic:

“Amazon RDS DB Instances inside VPC: For data transferred between an Amazon EC2 instance and Amazon RDS DB Instance in different Availability Zones of the same Region, Amazon EC2 Regional Data Transfer charges apply on both sides of transfer.”

So this would apply to application DB queries issued across an AZ boundary, which would commonly happen during failover scenarios.  In fact, we know this happens during regular operations with some EC2 setups, because the load balancing already goes cross-AZ.  So that costs extra also.  Now you know!  (Note: we did not factor this in to our calculations.)

Calculation Divergence

Our model comes up with identical outcomes for the examples Amazon provided; however, it comes out 10-15% lower than Amazon’s calculator for specific Aurora configurations.  We presume the difference lies in the calculated Aurora I/O rate, as that’s the only real “unknown” in the model.  Amazon’s calculator does not show the formulae it uses for the sub-components, nor sub-totals, and we didn’t bother tweaking our model until it produced the same result.

It’s curious though, as the architecture talk makes specific claims about Aurora’s I/O efficiency (which presume an optimal Aurora situation and a dismal MySQL reference setup, something I already raised in our initial Aurora post).  So apparently the Amazon calculator assumes worse I/O performance than the technical architecture talk!

Anyhow, let’s just say our costing is conservative, as the actual cost is higher on the Aurora end.

Scenarios

Here we compare with a MySQL/MariaDB Galera setup across 3 AZs running on EC2+EBS.  While this should be similar in overall availability and read-capacity, note that

  1. you can write to all nodes in a Galera cluster, whereas Aurora currently has a single writer/master;
  2. Galera doesn’t require failover changes as all its nodes are technically writers anyhow, whereas Aurora failover causes a cluster outage of at least 30 seconds.
                                                          ----------- EC2 + EBS ------------    ------------- Aurora --------------
Servers  R/Zones  Instance   GB DB  I/O rate  Read IOPS   Instances  Storage  I/O   EC2 Total   Instances  Storage  I/O   Aurora Total
3        3        r4.xlarge  250    2,000     740         $689       $104     $160  $952        $1,512     $83      $141  $1,735
6        3        r4.xlarge  250    2,000     740         $1,378     $207     $320  $1,905      $3,024     $83      $141  $3,247

When using the Amazon calculator, Aurora comes out at about double the EC2.  But don’t take our word for it, do try this for yourself.
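The first table row can be reproduced from the pricing components listed earlier.  Mind the assumptions baked in: the 740 figure (labelled Read IOPS in the table) behaves as the effective post-cache IOPS in our reconstruction, Aurora storage is costed per server like EBS, and Aurora I/O requests are the effective IOPS divided by the Aurora ratio.  This is a sketch of our model, not Amazon’s published formula:

```python
HOURS = 720                         # 30-day month
SECONDS = 2_592_000                 # seconds in that month
# Sydney pricing, per the tables above
EC2_HR, AURORA_HR = 0.319, 0.700    # r4.xlarge / db.r4.xlarge, per hour
EBS_GB, EBS_IOPS = 0.138, 0.072     # per GB-month / per IOPS-month
AUR_GB, AUR_MREQ = 0.110, 0.220     # per GB-month / per million requests
AURORA_IO_RATIO = 3

servers, gb, eff_iops = 3, 250, 740     # first scenario row

ec2_total = servers * (EC2_HR * HOURS + EBS_GB * gb + EBS_IOPS * eff_iops)

aurora_io = (eff_iops / AURORA_IO_RATIO) * SECONDS / 1e6 * AUR_MREQ
# assumption: Aurora storage costed per server, matching the table's $83
aurora_total = servers * (AURORA_HR * HOURS + AUR_GB * gb) + aurora_io

print(round(ec2_total))     # 952  (vs $952 in the table)
print(round(aurora_total))  # 1735 (vs $1,735 in the table)
```

Which reproduces the roughly 2:1 cost ratio between Aurora and EC2+EBS.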

Currency Consequences

While pricing figures are distinct per country that Amazon operates in, the charges are always in USD, and thus subject to currency fluctuations (if your default currency is not USD).  What does this mean?

USD-AUD rate chart 2008-2018, from xe.com

So USD 1,000 can cost as little as AUD 906 or as much as AUD 1,653, at different times over the last 10 years.  That’s quite a range!

Conclusion

As shown above, our calculation has Aurora MySQL costing about twice as much.  This is based on a reference MySQL/MariaDB+Galera setup with roughly the same scaling and resilience profile (e.g. the ability to survive DC outages).  In functional terms, particularly with Aurora’s 30+ second outage profile during failover, Galera comes out on top at half the cost.

So when is Aurora cheaper, as claimed by Amazon?

Amazon makes claims in the realm of “1/10th the cost”. Well, that may well be the case when comparing with the TCO of Oracle or MS SQL Server, and it’s fairly typical when comparing a proprietary system with an Open Source based one (mind again that Aurora is not actually Open Source as Amazon does not make their source code available, but it’s based on MySQL).

The only other way we see is to seriously compromise on the availability (resilience).  In our second sample calculation, we use 2 instances per AZ.  This is not primarily for performance, but so that application servers in an AZ don’t have to do cross-DC queries when one instance fails.  In the case of Aurora, spinning up a new instance on the same dataset requires 15 minutes.  So, do you want to take that hit?  If so, you can save money there.  If not, it’s still costly.

But hang on, if you’re willing to make the compromise on availability, you could reduce the Galera setup also, to only one instance per AZ.  Yep!

So, no matter how you tweak it, Aurora is about twice the cost, with (in our opinion) a less interesting failover profile.

The Price of RDS Convenience

What you get with RDS/Aurora is the promise of convenience, and that’s what you pay for.  But mind that our comparison stayed entirely within AWS space: the EC2 instances we used for MySQL/MariaDB+Galera already use the same basic infrastructure, dashboard and management API.  So you pay double just to go to RDS/Aurora, relative to building on EC2.

To us, that cost seems high.  If you spend some, or even all that money on engineering that convenience around your particular setup, and even outsource that task and its maintenance, you get a nicer setup at the same or a lower cost.  And last but not least, that cost will be more predictable – most likely the extra work will be charged in your own currency, too.

Cost Predictability and Budget

You can do a reasonable ball-park calculation of AWS EC2 instances that are always active, but EBS already has some I/O charges which make the actual cost rather more variable, and Aurora adds a few more variables on top of that.  I’m still amazed that companies go for this, even though they traditionally prefer a known fixed cost (even if higher) over a variable cost.  Choosing the variable cost breaks with some fundamental business rules, for the sake of some convenience.

The advantage of known fixed costs is that you can budget properly, as well as project future costs based on growth and other business factors.  Purposefully ditching that realm, while exposing yourself to currency fluctuations at the same time, seems most curious.  How do companies work this into their budgets?  Because others do so?  Well, following the neighbours is not always a good idea.  In this case, it might be costly as well as financially risky.


RDS Aurora MySQL and Service Interruptions

In Amazon space, any EC2 or service instance can “disappear” at any time.  Depending on which service is affected, it will be automatically restarted.  In EC2 you can choose whether an interrupted instance will be restarted, or left shut down.

For an Aurora instance, an interrupted instance is always restarted. Makes sense.

The restart timing, and other consequences during the process, are noted in our post on Aurora Failovers.

Aurora Testing Limitations

As mentioned earlier, we love testing “uncontrolled” failovers.  That is, we want to be able to pull any plug on any service, and see that the environment as a whole continues to do its job.  We can’t do that with Aurora, because we can’t control the essentials:

  • power button;
  • reset switch;
  • ability to kill processes on a server;
  • and the ability to change firewall settings.

In Aurora, an instance is either running, or will (again) be running shortly.  That much we know.  Aurora MySQL also offers some commands that simulate various failure scenarios, but since they are built in, we can presume that those scenarios are both very well tested and covered by the automation around the environment.  Those clearly defined cases are exactly the situations we’re not interested in.

What if, for instance, a server accepts new connections but is otherwise unresponsive?  We’ve seen MySQL do this on occasion.  Does Aurora catch this?  We don’t know, and we have no way of testing that, or many other possible problem scenarios.  That irks.

The Need to Know

If an automated system is able to catch a situation, that’s great.  But if your environment can end up in a state such as described above and the automated systems don’t catch and handle it, you could be dead in the water for an undefined amount of time.  If you have scripts to catch cases such as these, but the automated systems catch them as well, you want to be sure that you don’t trigger “double failovers” or otherwise interfere with a failover-in-progress.  So either way, you need to know whether a situation is caught and handled, and be able to test specific scenarios.

In summary: when you know the facts, then you can assess the risk in relation to your particular needs, and mitigate where and as desired.

A corporate guarantee of “everything is handled and it’ll be fine” (or as we say in Australia “She’ll be right, mate!“) is wholly unsatisfactory for this type of risk analysis and mitigation exercise.  Guarantees and promises, and even legal documents, don’t keep environments online.  Consequently, promises and legalities don’t keep a company alive.

So what does?  In this case, engineers.  But to be able to do their job, engineers need to know what parameters they’re working with, and have the ability to test any unknowns.  Unfortunately Aurora is, also in this respect, a black box.  You have to trust, and can’t comprehensively verify.  Sigh.


RDS Aurora MySQL Failover

Right now Aurora only allows a single master, with up to 15 read-only replicas.

Master/Replica Failover

We love testing failure scenarios, however our options for such tests with Aurora are limited (we might get back to that later).  Anyhow, we told the system, through the RDS Aurora dashboard, to do a failover. These were our observations:

Role Change Method

Both master and replica instances are actually restarted (the MySQL uptime resets to 0).

This is quite unusual these days: we can do a fully controlled role change in classic asynchronous replication without a restart (CHANGE MASTER TO …), and Galera doesn’t have read/write roles as such (all instances are technically writers), so it doesn’t need role changes at all.

Failover Timing

Failover between running instances takes about 30 seconds.  This is in line with information provided in the Aurora FAQ.

Failover where a new instance needs to be spun up takes 15 minutes according to the FAQ (similar to creating a new instance from the dash).

Instance Availability

During a failover operation, we observed that all connections to the (old) master, and the replica that is going to be promoted, are first dropped, then refused (the connection refusals will be during the period that the mysqld process is restarting).

According to the FAQ, reads to all replicas are interrupted during failover.  Don’t know why.

Aurora can deliver a DNS CNAME for your writer instance. In a controlled environment like Amazon, with guaranteed short TTL, this should work ok and be updated within the 30 seconds that the shortest possible failover scenario takes.  We didn’t test with the CNAME directly as we explicitly wanted to observe the “raw” failover time of the instances themselves, and the behaviour surrounding that process.

Caching State

On the promoted replica, the buffer pool is saved and loaded (warmed up) on the restart; good!  Note that this is not special, it’s desired and expected to happen: MySQL and MariaDB have had InnoDB buffer pool save/restore for years.  Credit: Jeremy Cole initially came up with the buffer pool save/restore idea.

On the old master (new replica/slave), the buffer pool is left cold (empty).  Don’t know why.  This was a controlled failover from a functional master.

Because of the server restart, other caches are of course cleared also.  I’m not too fussed about the query cache (although, deprecated as it is, it’s currently still commonly used), but losing connections is a nuisance. More detail on that later in this article.

Statistics

Because of the instance restarts, the running statistics (SHOW GLOBAL STATUS) are all reset to 0. This is annoying, but should not affect proper external stats gathering, other than for uptime.

On any replica, SHOW ENGINE INNODB STATUS comes up empty. Always.  This seems like obscurity to me, I don’t see a technical reason to not show it.  I suppose that with a replica being purely read-only, most running info is already available through SHOW GLOBAL STATUS LIKE ‘innodb%’, and you won’t get deadlocks on a read-only slave.

Multi-Master

Aurora MySQL multi-master was announced at Amazon re:Invent 2017, and appears to currently be in restricted beta test.  No date has been announced for general availability.

We’ll have to review it when it’s available, and see how it works in practice.

Conclusion

Requiring 30 seconds or more for a failover is unfortunate; this is much slower than other MySQL replication setups (writes can fail over within a few seconds, and reads are not interrupted) and Galera cluster environments (which essentially deliver continuity across instance failures – clients talking to the failed instance just need to reconnect to the load balancer/cluster to continue).

I don’t understand why the old master gets a cold InnoDB buffer pool.

I wouldn’t think a complete server restart should be necessary, but since we don’t have insight in the internals, who knows.

On Killing Connections (through the restart)

Losing connections across an Aurora cluster is a real nuisance that really impacts applications.  Here’s why:

When the MySQL C client library (which most MySQL APIs either use or are modelled on) is disconnected, it passes back a specific error to the application.  When the application makes its next query call, the C client will automatically reconnect first (so the client does not have to explicitly reconnect).  So a client only needs to catch the error and re-issue its last command, and all will generally be fine.  Of course, if it relies on particular SESSION settings, or was in the middle of a multi-statement transaction, it will need to do a bit more.
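The catch-and-retry pattern can be sketched generically.  `TransientDisconnect` below stands in for the driver’s actual disconnect error (errors 2006/2013 in MySQL client terms), since the exact exception class depends on which API you use:

```python
class TransientDisconnect(Exception):
    """Stand-in for the driver's 'server has gone away' error."""

def execute_with_retry(conn, query, retries=1):
    """Re-issue a query once after a disconnect, as the C client's
    auto-reconnect behaviour expects the application to do.  SESSION
    settings and open transactions would need re-establishing here too."""
    for attempt in range(retries + 1):
        try:
            return conn.execute(query)
        except TransientDisconnect:
            if attempt == retries:
                raise
            # The client library reconnects on the next call; just retry.
```

In a real application this wrapper would sit around every query path, which is exactly the discipline many codebases lack.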

So, this means that the application has to handle disconnects gracefully without chucking hissy-fits at users, and I know for a fact that that’s not how many (most?) applications are written.  Consequently, an Aurora failover will make the frontend of most applications look like a disaster zone for about 30 seconds (provided functional instances are available for the failover, which is the preferred and best case scenario).

I appreciate that this is not directly Aurora’s fault, it’s sloppy application development that causes this, but it’s a real-world fact we have to deal with.  And, perhaps importantly: other cluster and replication options do not trigger this scenario.


Exploring Amazon RDS Aurora: replica writes and cache chilling

Our clients operate on a variety of platforms, and RDS (Amazon Relational Database Service) Aurora has received quite a bit of attention in recent times. On behalf of our clients, we look beyond the marketing, and see what the technical architecture actually delivers.  We will address specific topics in individual posts, this time checking out what the Aurora architecture means for write and caching behaviour (and thus performance).

What is RDS Aurora?

First of all, let’s declare the baseline.  MySQL Aurora is not a completely new RDBMS.  It comprises a set of Amazon modifications on top of stock Oracle MySQL 5.6 and 5.7, implementing a different replication mechanism and some other changes/additions.  While we have some information (for instance from the “deep dive” by AWS VP Anurag Gupta), the source code of the Aurora modifications is not published, so unfortunately it is not immediately clear how things are implemented.  Any architecture requires choices to be made, trade-offs, and naturally these have consequences.  Because we don’t get to look inside the “black box” directly, we need to explore indirectly.  We know how stock MySQL is architected, so by observing Aurora’s behaviour we can try to derive how it is different and what it might be doing.  Mind that this is equivalent to looking at a distant star, seeing a wobble, and deducing from the pattern that there must be one or more planets orbiting.  It’s an educated guess.

For the sake of brevity, I have to skip past some aspects that can be regarded as “obvious” to someone with insight into MySQL’s architecture.  I might also defer explaining a particular issue in depth to a dedicated post on that topic.  Nevertheless, please do feel free to ask “so why does this work in this way”, or other similar questions – that’ll help me check my logic trail and tune to the reader audience, as well as help create a clearer picture of the Aurora architecture.

Instead of using the binary log, Aurora replication ties into the storage layer.  It only supports InnoDB, and instead of doing disk reads/writes, the InnoDB I/O system talks to an Amazon storage API which delivers a shared/distributed storage, which can work across multiple availability zones (AZs).  Thus, a write on the master will appear on the storage system (which may or may not really be a filesystem).  Communication between AZs is fairly fast (only 2-3 ms extra overhead, relative to another server in the same AZ) so clustering databases or filesystems across AZs is entirely feasible, depending on the commit mechanism (a two-phase commit architecture would still be relatively slow).  We do multi-AZ clustering with Galera Cluster (Percona XtraDB Cluster or MariaDB Galera Cluster).  Going multi-AZ is a good idea that provides resilience beyond a single data centre.

So, imagine an individual instance in an Aurora setup as an EC2 (Amazon Elastic Compute Cloud) instance with MySQL using an SSD EBS (Amazon Elastic Block Store) volume, where the InnoDB I/O threads interface more directly with the EBS API.  The actual architecture might be slightly different still (more on that in a later post), but this rough description helps set up a basic idea of what a node might look like.

Writes in MySQL

In a regular MySQL, on commit a few things happen:

  • the InnoDB log is written to and flushed,
  • the binary log is written to (and possibly flushed), and
  • the changed pages (data and indexes) in the InnoDB buffer pool are marked dirty, so a background thread knows they need to be written back to disk (this does not need to happen immediately).  When a page is written to disk, normally it uses a “double-write” mechanism where first the original page is read and written to a scratch space, and then the new page is put in the original position.  Depending on the filesystem and underlying storage (spinning disk, or other storage with a different block size from the InnoDB page size) this may be required to be able to recover from write failures.

This does not translate into as many IOPS because in practice, transaction commits are put together (for instance with MariaDB’s group commit), and thus many commits that happen in a short space of time effectively only use a few I/Os for their log writes.  With Galera cluster, the local logs are written but not flushed, because the guaranteed durability is provided by other nodes in the cluster rather than local persistence of the logfile.

In Aurora, a commit has to send either the InnoDB log entries or the changed data pages to the storage layer; which one it is doesn’t particularly matter.  The storage layer has a “quorum set” mechanism to ensure that multiple nodes accept the new data.  This is similar to Galera’s “certification” mechanism that provides the “virtual synchrony”.  The Aurora “deep dive” talk claims that it requires many fewer IOPS for a commit; however, it appears they are comparing a worst-case plain MySQL scenario with an optimal Aurora environment.  Very marketing.

Aurora does not use the binary log, which does make one wonder about point-in-time recovery options. Of course, it is possible to recover to any point-in-time from an InnoDB snapshot + InnoDB transaction logs – this would require adding timestamps to the InnoDB transaction log format.

While it is noted that the InnoDB transaction log is also backed up to S3, it doesn’t appear to be used directly (so, only for recovery purposes then).  After all, any changed page needs to be communicated to the other instances, so essentially all pages are always flushed (no dirty pages).  When we look at the InnoDB stats GLOBAL STATUS, we sometimes do see up to a couple of dozen dirty pages with Aurora, but their existence or non-existence doesn’t appear to have any correlation with user-created tables and data.

Where InnoDB gets its Speed

InnoDB rows and indexing
InnoDB rows and indexing

We all know that disk-access is slow.  In order for InnoDB to be fast, it is dependent on most active data being in the buffer pool.  InnoDB does not care for local filesystem buffers – something is either in persistent storage, or in the buffer pool.  In configurations, we prefer direct I/O so the system calls that do the filesystem I/O bypass the filesystem buffers and any related overhead.  When a query is executed, any required page that’s not yet in the buffer pool is requested to be loaded in the background. Naturally, this does slow down queries, which is why we preferably want all necessary pages to already be in memory.  This applies for any type of query.  In InnoDB, all data/indexes are structured in B+trees, so an INSERT has to be merged into a page and possibly causes pages to be split and other items shuffled so as to “re-balance” the tree.  Similarly, a delete may cause page merges and a re-balancing operation.  This way the depth of the tree is controlled, so that even for a billion rows you would generally see a depth of no more than 6-8 pages.  That is, retrieving any row would only require a maximum of 6-8 page reads (potentially from disk).
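The depth claim can be sanity-checked with a quick sketch; the per-page row and fanout counts below are assumptions (they vary wildly with row and key size on 16 KB pages):

```python
import math

def btree_depth(rows, leaf_rows=50, internal_fanout=500):
    """Pages touched on a point lookup: the leaf page plus every internal
    level above it.  leaf_rows and internal_fanout are assumed averages."""
    leaves = math.ceil(rows / leaf_rows)
    depth = 1          # the leaf page itself
    nodes = leaves
    while nodes > 1:   # walk up until we reach a single root page
        nodes = math.ceil(nodes / internal_fanout)
        depth += 1
    return depth

print(btree_depth(10**9))  # 4 -- comfortably within the 6-8 page bound
```

Even with much smaller fanouts the depth grows only logarithmically, which is the whole point of the B+tree structure.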

I’m telling you all this because, while most replication and clustering mechanisms essentially work with the buffer pool, Aurora replication appears to work against it.  As I mentioned: choices have consequences (trade-offs).  So, what happens?

Aurora Replication

When you do a write in MySQL which gets replicated through classic asynchronous replication, the slaves or replica nodes apply the row changes in memory.  This means that all the data (which is stored with the PRIMARY KEY, in InnoDB) as well as any other indexes are updated, the InnoDB log is written, and the pages marked as dirty.  It’s very similar to what happens on the writer/master system, and thus the end result in memory is virtually identical.  While Galera’s cluster replication operates differently from the asynchronous mechanism shown in the diagram, the resulting caching (which pages are in memory) ends up similar.

MySQL Replication architecture

Not so with Aurora.  Aurora replicates in the storage layer, so all pages are updated in the storage system but not in the in-memory InnoDB buffer pool.  A secondary notification system between the instances ensures that cached InnoDB pages are invalidated.  When you next do a query that needs any of those no-longer-valid cached pages, they will have to be re-read from the storage system.  You can see a representation of this in the diagram below, indicating invalidated cache pages in different indexes; as shown, for INSERT operations you’re likely to have pages higher up in the tree and one sideways page change as well, because of the B+tree rebalancing.

Aurora replicated insert

The Chilling Effect

We can tell the replica is reading from storage, because the same query is much slower than it was before we did the insert on the master instance.  Note: this wasn’t a matter of timing.  Even if we waited slightly longer (to allow a possible background thread to refresh the pages), the post-insert query was just as slow.

Interestingly, the invalidation process does not actually remove the pages from the buffer pool (that is, the number of pages in the buffer pool does not go down); however, the number of page reads does not go up either, even when a page is clearly re-read.  Remember, though, that a status variable is just that: it has to be explicitly updated to be visible, and evidently the new functions Amazon implemented don’t bother updating these status variables.  Accidental omission or purposeful obscurity?  Can’t say.  I will say that it’s very annoying when server statistics don’t reflect what’s actually going on, as it makes the stats (and their analysis) meaningless.  In this case, the picture looks better than it is.
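For reference, these are the stock MySQL/InnoDB counters in question; on a regular server you can watch them like this (Aurora exposes the same names, it just doesn’t appear to maintain them for these code paths):

```sql
-- Pages currently held in the buffer pool
-- (this did not drop when pages were invalidated)
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';

-- Logical read requests vs. reads that had to go to storage;
-- on the Aurora replica the second counter did not increase either,
-- even though pages were clearly being re-read from storage.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
```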

With each Aurora write (insert/update/delete), the in-memory buffer pool on replicas is “chilled”.

Unfortunately, it’s not even just the one query on the replica that gets affected after a write. The primary key as well as the secondary indexes get chilled. If the initial query uses one particular secondary index, that index and the primary key will get warmed up again (at the cost of multiple storage system read operations), however the other secondary indexes are still chattering their teeth.

Being Fast on the Web

In web applications (whether websites or web-services for mobile apps), typically the most recently added data is the most likely to be read again soon.  This is why InnoDB’s buffer pool is normally very effective: frequently accessed pages remain in memory, while lesser used ones “age” and eventually get tossed out to make way for new pages.

Having caches cleared due to a write slows things down.  In the MySQL space, the fairly simplistic query cache is a good example.  Whenever you write to table A, any cached SELECT that accesses table A is cleared out of the cache.  Regardless of whether the application is read-intensive, having regular writes makes the query cache useless, and we turn it off in those cases.  Oracle has already deprecated the “good old” query cache (which was introduced in MySQL 4.0 in the early 2000s) and soon its code will be removed completely.
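To illustrate, on MySQL versions that still carry the query cache you can observe the churn and switch the cache off at runtime:

```sql
-- Watch the cache churn: on a write-heavy table, Qcache_inserts keeps
-- climbing while Qcache_hits stays low, because each write purges
-- every cached result that touches the written table.
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- Turn the cache off for workloads with regular writes
-- (setting the size to 0 also frees the cache memory).
SET GLOBAL query_cache_size = 0;
```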

Conclusion

With InnoDB, you’d generally have an AUTO_INCREMENT PRIMARY KEY, and thus newly inserted rows are sequenced at the outer end of the B+tree.  This also means that the next inserted row often ends up in the same page, again invalidating that recently written page on the replicas and slowing down reads of any of the rows it contains.

For secondary indexes the effect is obviously more scattered, although if the indexed column is temporal (time-based) it will be affected similarly to the PRIMARY KEY.
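As a sketch (table and column names are made up for illustration), a typical web-style table shows both effects: successive INSERTs land in the same right-most page of the PRIMARY KEY tree, and the temporal secondary index clusters its new entries the same way.

```sql
CREATE TABLE article (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
  created_at DATETIME NOT NULL,
  title      VARCHAR(200) NOT NULL,
  PRIMARY KEY (id),               -- new rows append at the tree's outer end
  KEY idx_created (created_at)    -- temporal index: inserts cluster here too
) ENGINE=InnoDB;
```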

How much all of this slows things down will very much depend on your application’s DB access profile.  The read/write ratio matters little; what matters is whether individual tables are written to fairly frequently.  If they are, SELECT queries on those tables made on replicas will suffer from the chill.

Aurora uses SSD EBS, so of course storage access is pretty fast.  However, memory is always faster, and we know how important that is for web application performance.  And we can use similarly fast SSD storage on EC2 or with another hosting provider, with mature scaling technologies such as Galera (or even regular asynchronous multi-threaded replication) that don’t give your caches the chills.

Posted on

Amazon vs Linode in case of hardware failure

One of our clients received an email like this:

Dear Amazon EC2 Customer,

We have important news about your account (…). EC2 has detected degradation of the underlying hardware hosting your Amazon EC2 instance (instance-ID: …) in the […] region. Due to this degradation, your instance could already be unreachable. After […] UTC your instance, which has an EBS volume as the root device, will be stopped.

You can see more information on your instances that are scheduled for retirement in the AWS Management Console

  • How does this affect you?
    Your instance’s root device is an EBS volume and the instance will be stopped after the specified retirement date. You can start it again at any time. Note that if you have EC2 instance store volumes attached to the instance, any data on these volumes will be lost when the instance is stopped or terminated as these volumes are physically attached to the host computer
  • What do you need to do?
    You may still be able to access the instance. We recommend that you replace the instance by creating an AMI of your instance and launch a new instance from the AMI.
  • Why retirement?
    AWS may schedule instances for retirement in cases where there is an unrecoverable issue with the underlying hardware.

Great to be notified of such things; that’s not the problem.  What I find curious is that Amazon tosses the problem resolution entirely at their clients.  Time and effort (cost) is required, even just to create an AMI (if you don’t already have one) and restart elsewhere from that.
Could it be done differently?  I think so, because it has been done for years – at Linode, for example.  If something like this happens on Linode, they’ll migrate the entire VM + data to another host, quickly and at no cost.  Just a bit of downtime (often <30mins).  They’ll even do this on request, if you (the client) suspect there is some problem on the host and would just like to make sure by moving to another host.
So… considering how much automation and user-convenience Amazon produces, I would just expect better.

Of course it’s nice to have everything scripted so that new nodes can be spun up quickly.  In that case you can just destroy an old instance and start a new one, which might be very low effort.  But some systems and individual instances are (for whatever reason) not set up like that, and then a migration like the one Linode does is eminently sensible and very convenient.

Posted on 6 Comments

Your opinion on EC2 and other cloud/hosting options

EC2 is nifty, but it doesn’t appear suitable for all needs, and that’s what this post is about.

For instance, a machine can just “disappear”.  You can set things up to automatically start a new instance to replace it, but if you just committed a transaction it’s likely to be lost: MySQL replication is asynchronous, EBS is slower if you commit your transactions on it, and EBS snapshots are only periodic (you’d have to add extra safeguards at the application end).  This adds complexity, and thus the question arises whether EC2 is the best solution for systems where this is a concern.

When pondering this, there are two important factors to consider: a database server needs cores, RAM, and reasonably low-latency disk access; and application servers should be near their database server.  This means you shouldn’t split app and DB servers across different hosting/cloud providers.

We’d like to hear your thoughts on EC2 in this context, as well as options for other hosting providers – and their quirks. Thanks!
