
How not to respect your users’ privacy

You just run the usual online frameworks, with their extensive plugin range, CDN, Google Analytics, NewRelic, Twitter, Facebook and LinkedIn widgets, and the rest.  Then, you display a notice to your users that your site uses cookies and passes some data to third parties (such as Google Analytics and NewRelic) “to enhance the user experience”.

There. Easy, right? You probably didn’t need to change anything at all. Most companies, sites and applications do this.  Now tell me: given that at least some of the above probably applies to you, how come you display a notice to your users explaining how you respect their privacy?  Both can’t be true.

So yes, this was a test.  And most of us fail, including us.  Why is this?

  1. Are you asking for and storing more data than you actually require for delivering the product or service that you provide?  You can probably only test this by working out the minimum data requirements, questioning each item, and then comparing that list with what you currently actually collect.  There’s likely to be a (large) discrepancy.
  2. Are you using multiple analytics and trackers?  Why?  It does in fact affect the user experience of your site, both in terms of speed as well as privacy.  And you probably don’t actually use all that data.  So think about what you actually use, and get rid of the rest.  That’s a good exercise and an excellent step.
  3. Does your site deliver pixel images for Facebook and others?  If so, why?
  4. Does your site show a “site seal” advertising your SSL certificate’s vendor?  If so, why?
  5. Does your site set one or more cookies for every user, rather than only logged-in users?  If so, why?
  6. Most CMS and frameworks actually make it difficult to not flood users with cookies and third-party tracking. They have become the new bloat.  Example: you use a component that includes a piece of javascript or css off a vendor-provided CDN. Very convenient, but you’ve just provided that vendor with site-usage data as well as your users’ IP addresses.
  7. Respecting privacy is not “business as usual” + a notice. It’s just not.

So, privacy is actually really hard, and in large part because our tools make it so.  They make it so not for your users’ convenience, or even your convenience, but for the vendors of said tools/components. You get some benefit, which in turn could benefit your users, but I think it’s worthwhile to really review what’s actually necessary and what’s not.

A marketing or sales person might easily say “more data is better”, but is it, really?  It affects site speed and user experience. And unless you’ve got your analytics tools really well organised, you’re actually going to find that all that extra data is overhead you don’t need in your company.  If you just collect and use what you really need, you’ll do well. Additionally, it’ll enable you to tell your users/clients honestly about what you do and why, rather than deliver a generic fudge-text as described in the first paragraph of this post.

Here are a few quick hints for checking your users’ privacy experience, without relying on third-party sites.

  • Install EFF’s Privacy Badger plugin.  It uses heuristics (rather than a fixed list) to identify suspected trackers and deal with them appropriately (allow, block cookies, block completely).  Privacy Badger provides you with an icon on the right of your location bar, showing a number indicating how many trackers the current page has.  If you click on the icon, you can see details and adjust.  And as a site-owner, you’ll want to adjust the site rather than badger!
  • If you click on the left hand side of your location bar, on the secure icon (because you are already offering https, right?), you can also see details on cookies: both how many and to which domains. If you see any domains which are not yours, they’re caused by components (images, javascript, css) on your page that retrieve bits from elsewhere. Prepare to be shocked.
  • To see in more detail what bits an individual page uses, you can right-click on a page and select “Inspect” then go to the “Sources” tab.  Again, prepare to be shocked.
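If you’d rather script a first pass of this check than click around in the browser, something along these lines will list the third-party hosts a page pulls resources from.  This is a minimal sketch using only the Python standard library; the URL is a placeholder for your own site, and it only sees what’s in the raw HTML, not resources added later by javascript, so treat it as a quick indicator rather than a full audit.

# Rough sketch: list third-party hosts referenced by a page's HTML source.
# Standard library only; PAGE is a placeholder and the "third-party" test is
# deliberately simple (anything not ending in your own hostname).
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin
from urllib.request import urlopen

PAGE = "https://www.example.com/"   # replace with your own site

class ResourceCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        # Collect the host of anything referenced via src= or href=
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(urljoin(PAGE, value)).hostname
                if host:
                    self.hosts.add(host)

html = urlopen(PAGE).read().decode("utf-8", errors="replace")
collector = ResourceCollector()
collector.feed(html)

own_host = urlparse(PAGE).hostname
third_party = sorted(h for h in collector.hosts if not h.endswith(own_host))
print("Third-party hosts referenced by", PAGE)
for host in third_party:
    print("  ", host)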

Use that shock well, to genuinely improve privacy – and thereby respect your users.

Aside from the ethics, I expect that these indicators (cookies, third-party resource requests, trackers, etc) will get used to rank sites and identify bad players. So there’ll be a business benefit in being ahead of this predictable trend.  And again, doing a clean-up will also make your site faster, as well as easier to use.


SSL certificates – not optional

We made a stuff-up over the weekend.  Historically we have different SSL certificates for different services in our realm, and last Saturday the certificate for the main website expired.  Of course we noticed at that point, but we should have had an internal notification earlier and somehow that had failed.  Fixed, but it would have been much better if the front-end hadn’t been temporarily inaccessible.  It was, because of HTTP Strict Transport Security (HSTS): any browser that had previously talked with our website (rightfully) refuses to talk to it if it doesn’t see a valid certificate.  Going back to non-HTTPS is not an option either, for this reason as well as others mentioned below.  Since we do have different certificates for different services, only our frontend website was affected (bad enough); the various other services for our clients were fortunately unaffected.
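For the internal notification side of things, a small scheduled check is enough to avoid a repeat.  Here is a minimal sketch (Python standard library only; the hostnames and warning threshold are examples to adjust for your own environment), intended to run daily from cron and shout when a certificate is getting close to expiry:

# Minimal certificate-expiry check, run daily from cron.
# Standard library only; hostnames and warning threshold are examples.
import socket
import ssl
from datetime import datetime, timedelta

HOSTS = ["www.example.com", "api.example.com"]   # your own services
WARN_BEFORE = timedelta(days=21)                 # warn three weeks ahead

def cert_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    return datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))

for host in HOSTS:
    expires = cert_expiry(host)
    remaining = expires - datetime.utcnow()
    if remaining < WARN_BEFORE:
        print("WARNING: certificate for %s expires %s (%d days)" % (host, expires, remaining.days))
    else:
        print("OK: %s expires %s" % (host, expires))

Hook the output into whatever alerting you already have (email, chat, monitoring), so the warning actually reaches a human.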

Going forward though, keeping up-to-date with the certificates and automatically renewing them is much easier now than it used to be. Let's Encrypt® has been around for a while, but a few months ago they started supporting wildcard certificates.  With non-wildcard certificates, one of the ways Let's Encrypt can verify that you control the site is by doing a challenge/response against the website address itself; Certbot can temporarily listen on the relevant port (80, for the HTTP-01 challenge) and give the appropriate answers.  For a wildcard, that doesn't work, because you can have an infinite number of subdomains and Let's Encrypt needs to be certain that you actually control these.  So in v2 of the API (ACMEv2) there's support for DNS-based validation.  Through special TXT records, for which Let's Encrypt provides the token on every request, you can prove that you are in control of the DNS servers for the domain. That's pretty slick!

There are integrations for many hosting providers as well as Cloudflare, which allow Certbot to update those records in your DNS through a secure mechanism, after which Let's Encrypt can validate them. As Let's Encrypt certificates are only valid for 3 months, this is important for the automation.  If you run your own DNS servers, you can still automate the DNS-based verification by setting up RFC-2136 remote updates in your DNS server (Bind9 can do it, and has been able to for many years – that said, being an older system, it can be rather finicky to set up).

EFF's Certbot can take care of the entire renewal process, including reloading your webserver's or reverse proxy's config.  Do make sure you use a recent Certbot, as all the appropriate support is quite recent. We had to grab Certbot from GitHub the first time as the Debian package hadn't caught up quite far enough yet – it has now.

We think that the EFF and its partners have done brilliantly in setting up Let's Encrypt and helping everyone move towards a fully encrypted web.  Particularly with the cost factor removed, there's no reason not to offer HTTPS to users – whether for a website, or an API.  Respecting one's users and their online privacy is really a must-do.  Companies that don't increasingly look bad.  See it this way: going fully HTTPS is an opportunity to make a good first impression.  And did you know it also affects your ranking in search engines?  Now there's a good incentive for the PHB.

Do you need an EV certificate?  Probably not, as they actually have very little meaning – and even less so with various CAs having distinctly flawed verification processes.

Do you need a site seal from your CA (Certificate Authority)?  Really not.  It just advertises the CA, and actually enables them to track your users – if you get the seal served from the CA’s URL, that’s every single user. Not cool. So just don’t.

Final hint: if you do get a wildcard certificate from Let's Encrypt, make sure you include both the wildcard and the bare domain in the certificate names – so *.example.com as well as example.com – otherwise the bare domain won't be covered, since a wildcard does not match the parent domain itself. You may not have noticed this with wildcard certificates from commercial CAs, as many of them automatically include the extra name.  Certbot just does exactly what you tell it to, so it's something to be aware of.


PURGE BINARY LOGS with a relative time

Sometimes you want to reduce disk usage on certain servers by adjusting the time that binary logs are kept.  Also, some installations of MySQL and MariaDB have suffered from a very-hard-to-catch bug where the binary logs end up not getting automatically expired (basically, the expire_logs_days option doesn’t always work effectively).

A workaround can be scripted, but typically such a script would specify the exact datetime before which logs should be purged.  The reference manual and examples all do this too, quite explicitly, noting:

The datetime expression is in the format ‘YYYY-MM-DD hh:mm:ss’.

However, the actual command syntax is phrased as follows:

PURGE { BINARY | MASTER } LOGS { TO ‘log_name’ | BEFORE datetime_expr }

and that indicates much more flexibility in the parser: “datetime_expr” means that you can put in an arbitrary temporal expression!

So let’s test that, with a functional equivalent of expire_logs_days=14:

FLUSH BINARY LOGS;
PURGE BINARY LOGS BEFORE (NOW() - INTERVAL 14 DAY);

And yep, that works (and the extra parentheses around the expression are not required, I just added them to show clearly what the expression is).
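To run this on a schedule without relying on expire_logs_days, you can wrap it in a small script and call it from cron.  A minimal sketch, assuming the PyMySQL driver and an account with sufficient privileges (SUPER, or BINLOG_ADMIN on MySQL 8.0); the credentials and retention period are examples:

# Cron-able sketch: flush and purge binary logs older than RETENTION_DAYS.
# Assumes PyMySQL and an account allowed to run PURGE BINARY LOGS.
import pymysql

RETENTION_DAYS = 14

conn = pymysql.connect(host="localhost", user="admin", password="secret")
try:
    with conn.cursor() as cur:
        cur.execute("FLUSH BINARY LOGS")
        cur.execute(
            "PURGE BINARY LOGS BEFORE (NOW() - INTERVAL %d DAY)" % int(RETENTION_DAYS)
        )
finally:
    conn.close()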

Now, I’m not the first person to use this construct, there are several posts online from recent years that use an expression with PURGE BINARY LOGS.  I’m not sure whether allowing datetime_expr is a modification that was made in the parser at some point, or whether it was always possible.  Fact is, the reference manual text (MariaDB as well as MySQL) only provides examples with an absolute ISO datetime: ‘YYYY-MM-DD HH:MM:SS’.


Amazon AWS Billing

During our recent AWS research I observed again how their billing seems to lack clarity and granularity.  That is, it’s very difficult to figure out in detail what’s going on and where particular charges actually come from.  If you have a large server farm at AWS, that’s going to be a problem.  Unless of course you just pay whatever and don’t mind what’s on the bill?

Of particular interest was a charge for inter-AZ (DC) traffic.  Not because it was large, it was just a few cents.  But it was odd.  So this would be outbound traffic from our short-lived test server(s) to another AWS availability zone (datacenter) in the US.  This gets charged at a couple of cents per GB.

[Image: AWS billing breakdown of the 3-cent inter-AZ traffic charge]

I had a yarn with billing support, and they noted that the AWS-side firewall had some things open; since they can’t see what goes on inside a server, they regarded it as valid traffic.  The only service active on the server was SSH, and it keeps its own logs.  While doing the Aurora testing we weren’t specifically looking at this, so by the time the billing info showed up (over a day later), the server had already been decommissioned, and those logs along with it.

As a sidenote, presuming this was “valid” traffic, someone was using AWS servers to scan other AWS servers and try to gain access.  I figured that such activity, clearly in breach of AWS policies, would be of interest to AWS, but apparently it wasn’t.  Seems a bit neglectful to me.  And with all their tech, shouldn’t their systems be able to spot such activity automatically?

Some days later I fired up another server specifically to investigate the potential for rogue outbound traffic.  I again left SSH accessible to the world, to allow for the possibility of it being accessed from elsewhere, while keeping an eye on the log.  This test server only existed for a number of hours and was fully monitored internally, so we know exactly what went on.  Obviously, we had to leave the AWS-side firewall open to be able to perform the test.  Over those hours there were a few login attempts, but nothing major.  There would have to be many thousands of login attempts to create a GB of outbound traffic – consider that no SSH connection actually gets established; the attempts don’t get beyond the authentication stage, so it’s just some handshake and the rejection going out.  So no such traffic was seen in this instance.

Of course, the presumed earlier SSH attempts may have just been the result of a scanning server getting lucky, whereas my later test server didn’t “get lucky” being scanned. It’s possible. To increase the possible attack surface, we put an nc (netcat) listener on ports 80, 443 and some others, just logging any connection attempt without returning outbound traffic.  This again saw one or two attempts, but no major flood.

I figured that was the end of it, and shut down the server.  But come billing time, we once again see a cent-charge for similar traffic.

And this time we know it definitely didn’t exist, because we were specifically monitoring for it.  So, we know for a fact that we were getting billed for traffic that didn’t happen. “How quaint.”  Naturally, because the AWS-side firewall was specifically left open, AWS billing doesn’t want to hear about it.  I suppose we could re-run the test, this time with a fully set up AWS-side firewall, but it’s starting to chew up too much time.

My issues are these:

  1. For a high tech company, AWS billing is remarkably obtuse.  That’s a business choice, and not to the clients’ benefit.  Some years ago I asked my accountant whether my telco bill (Telstra at the time) was so convoluted for any particular accounting reason I wasn’t aware of, and her answer was simply “no”.  This is the same.  The company chooses to make their bills difficult to read and make sense of.
  2. Internal AWS network monitoring must be inadequate, if AWS-hosted servers can do sweep scans and mass SSH login attempts. Those are patterns that can be caught.  That is, presuming that those scans and attempts actually happened.  If they didn’t, the earlier traffic didn’t exist either, in which case we’re getting billed for stuff that didn’t happen. It’s one or the other, right?  (Based on the above observations, my bet is actually on the billing rather than the internal network monitoring – AWS employs very smart techs.)
  3. Getting billed an extra cent doesn’t break the bank, but since it’s for something that we know didn’t happen, it’s annoying. It makes us wonder what else is wrong in the billing system, and whether other people too might get charged a cent extra here or there.  Does this matter?  Yes it does, because on the AWS end it adds up to many millions.  And for bigger AWS clients, it will add up over time as well.

Does anybody care?  I don’t know.  Do you care?


Oh, and while Google (AdWords, etc), Microsoft and others have over the last few years adjusted their invoicing in Australia to produce a proper tax invoice with GST (sales tax), even though the billing is done from elsewhere (Singapore in Google’s case), AWS doesn’t do this.  Australian server instances are all sold from AWS Inc. in Seattle, WA.


RDS Aurora MySQL Cost

I promised to do a pricing post on the Amazon RDS Aurora MySQL pricing, so here we go.  All pricing is noted in USD (we’ll explain why below).

We compared pricing of equivalent EC2+EBS server instances, and verified our calculation model against Amazon’s own calculator and examples.  We use the pricing for Australia (Sydney data centre), taking the pricing numbers, formulae, and calculation examples from the relevant Amazon pricing pages.

Base Pricing Details

Instance type   vCPU   ECU    GB RAM   Storage    EC2 Linux/hr  |  Aurora instance type   Aurora price/hr
r4.large          2      7     15.25   EBS Only   $0.160        |  db.r4.large             $0.350
r4.xlarge         4     13.5   30.5    EBS Only   $0.319        |  db.r4.xlarge            $0.700
r4.2xlarge        8     27     61      EBS Only   $0.638        |  db.r4.2xlarge           $1.400
r4.4xlarge       16     53    122      EBS Only   $1.277        |  db.r4.4xlarge           $2.800
r4.8xlarge       32     99    244      EBS Only   $2.554        |  db.r4.8xlarge           $5.600
r4.16xlarge      64    195    488      EBS Only   $5.107        |  db.r4.16xlarge          $11.200

That’s not all we need, because both EBS and Aurora have some additional costs we need to factor in.

EBS pricing components (EBS Provisioned IOPS SSD (io1) volume)

“Volume storage for EBS Provisioned IOPS SSD (io1) volumes is charged by the amount you provision in GB per month until you release the storage. With Provisioned IOPS SSD (io1) volumes, you are also charged by the amount you provision in IOPS (input/output operations per second) per month. Provisioned storage and provisioned IOPS for io1 volumes will be billed in per-second increments, with a 60 second minimum.”

  • Storage Rate $0.138 /GB/month of provisioned storage
    “For example, let’s say that you provision a 2000 GB volume for 12 hours (43,200 seconds) in a 30 day month. In a region that charges $0.125 per GB-month, you would be charged $4.167 for the volume ($0.125 per GB-month * 2000 GB * 43,200 seconds / (86,400 seconds/day * 30 day-month)).”
  • I/O Rate $0.072 /provisioned IOPS-month
    “Additionally, you provision 1000 IOPS for your volume. In a region that charges $0.065 per provisioned IOPS-month, you would be charged $1.083 for the IOPS that you provisioned ($0.065 per provisioned IOPS-month * 1000 IOPS provisioned * 43,200 seconds /(86,400 seconds /day * 30 day-month)).”

Other Aurora pricing components

  • Storage Rate $0.110 /GB/month
    (No price calculation examples given for Aurora storage and I/O)
  • I/O Rate $0.220 /1 million requests
    (Presuming IOPS equivalence, with the Aurora I/O ratio as noted in the architecture talk)

So this provides us with a common base: instance types that are equivalent between Aurora and EC2.  All other Aurora instance types are different, so it’s not possible to do a direct comparison in those cases.  Presumably the pricing ratio will be similar for equivalent specs.

On Demand vs Reserved Instances

We realise we’re calculating on the basis of On Demand pricing.  But we’re comparing pricing within AWS space, so presumably the savings for Reserved Instances are in a similar ballpark.

Other factors

  • We have 720 hours in a 30-day month, which is 2,592,000 seconds.
  • Read/write ratio: 70% reads, 30% writes (used to calculate the effective Aurora IOPS).
  • Read cache miss rate: 10% (only cache misses on reads hit the storage layer).
  • Aurora I/O ratio: 3 (Aurora claiming 2 IOPS for a commit vs 6 in MySQL – even though that pessimistic MySQL baseline is a pile of extreme hogwash).

We also spotted this note regarding cross-AZ Aurora traffic:

“Amazon RDS DB Instances inside VPC: For data transferred between an Amazon EC2 instance and Amazon RDS DB Instance in different Availability Zones of the same Region, Amazon EC2 Regional Data Transfer charges apply on both sides of transfer.”

So this would apply to application DB queries issued across an AZ boundary, which would commonly happen during failover scenarios.  In fact, we know this happens during regular operations with some EC2 setups, because the load balancing already goes cross-AZ.  So that costs extra too.  Now you know!  (Note: we did not factor this into our calculations.)

Calculation Divergence

Our model comes up with identical outcomes for the examples Amazon provided; however, it comes out 10-15% lower than Amazon’s calculator for specific Aurora configurations.  We presume the difference lies in the calculated Aurora I/O rate, as that’s the only real “unknown” in the model.  Amazon’s calculator does not show which formulae it uses for the sub-components, nor any sub-totals, and we didn’t bother tweaking until we arrived at the same result.

It’s curious though, as the architecture talk makes specific claims about Aurora’s I/O efficiency (which presume an optimal Aurora situation and a dismal MySQL reference setup, something I already raised in our initial Aurora post).  So apparently the Amazon calculator assumes worse I/O performance than the technical architecture talk!

Anyhow, let’s just say our costing is conservative, as the actual cost is higher on the Aurora end.

Scenarios

Here we compare with say a MySQL/MariaDB Galera setup across 3 AZs running on EC2+EBS.  While this should be similar in overall availability and read-capacity, note that

  1. you can write to all nodes in a Galera cluster, whereas Aurora currently has a single writer/master;
  2. Galera doesn’t require failover changes as all its nodes are technically writers anyhow, whereas Aurora failover causes a cluster outage of at least 30 seconds.
Servers | AZs | Instance  | DB GB | I/O rate | Read IOPS | EC2 instances | EBS storage | EBS I/O | EC2 total | Aurora instances | Aurora storage | Aurora I/O | Aurora total
3       | 3   | r4.xlarge | 250   | 2,000    | 740       | $689          | $104        | $160    | $952      | $1,512           | $83            | $141       | $1,735
6       | 3   | r4.xlarge | 250   | 2,000    | 740       | $1,378        | $207        | $320    | $1,905    | $3,024           | $83            | $141       | $3,247

When using the Amazon calculator, Aurora comes out at about double the EC2 cost.  But don’t take our word for it, do try this for yourself.
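If you’d rather reproduce our own model than poke at the calculator, here is a minimal sketch of the calculation (Python).  The prices and workload assumptions are the Sydney figures listed above; treating Aurora storage as one copy per AZ is our reading of the numbers in the table, not an official AWS formula.

# Rough monthly cost model for the comparison above (USD, Sydney prices).
# Our own sketch of the calculation, not an official AWS formula.
HOURS_PER_MONTH = 720          # 2,592,000 seconds

EC2_R4_XLARGE_HR = 0.319       # EC2 r4.xlarge, Linux, per hour
AURORA_R4_XLARGE_HR = 0.700    # db.r4.xlarge, per hour

EBS_STORAGE_GB_MONTH = 0.138   # io1 provisioned storage
EBS_IOPS_MONTH = 0.072         # io1 provisioned IOPS
AURORA_STORAGE_GB_MONTH = 0.110
AURORA_PER_MILLION_IO = 0.220

READ_RATIO = 0.70              # 70% reads
READ_CACHE_MISS = 0.10         # 10% of reads hit the storage layer
AURORA_IO_RATIO = 3            # claimed Aurora I/O efficiency factor

def effective_iops(total_iops):
    # I/O that reaches the storage layer: cache-missed reads plus all writes.
    return total_iops * READ_RATIO * READ_CACHE_MISS + total_iops * (1 - READ_RATIO)

def ec2_ebs_monthly(servers, db_gb, total_iops):
    iops = effective_iops(total_iops)
    return (servers * EC2_R4_XLARGE_HR * HOURS_PER_MONTH      # instances
            + servers * db_gb * EBS_STORAGE_GB_MONTH          # EBS storage
            + servers * iops * EBS_IOPS_MONTH)                # provisioned IOPS

def aurora_monthly(servers, zones, db_gb, total_iops):
    io_per_sec = effective_iops(total_iops) / AURORA_IO_RATIO
    io_per_month = io_per_sec * HOURS_PER_MONTH * 3600
    return (servers * AURORA_R4_XLARGE_HR * HOURS_PER_MONTH   # instances
            + zones * db_gb * AURORA_STORAGE_GB_MONTH         # storage, one copy per AZ (our assumption)
            + io_per_month / 1e6 * AURORA_PER_MILLION_IO)     # I/O requests

print("EC2+EBS, 3x r4.xlarge:   $%.0f" % ec2_ebs_monthly(3, 250, 2000))
print("Aurora, 3x db.r4.xlarge: $%.0f" % aurora_monthly(3, 3, 250, 2000))

Running this reproduces the first scenario row in the table above: roughly $952/month for EC2+EBS versus $1,735/month for Aurora.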

Currency Consequences

While Amazon lists distinct pricing for each country it operates in, the charges are always in USD.  So the indicated pricing is, in the end, subject to currency fluctuations (if your own currency is not USD).  What does this mean?

[Chart: USD-AUD exchange rate 2008-2018, from xe.com]

So USD 1,000 can cost as little as AUD 906 or as much as AUD 1,653, at different times over the last 10 years.  That’s quite a range!

Conclusion

As shown above, our calculation shows Aurora MySQL costing about twice as much.  This is based on a reference MySQL/MariaDB+Galera setup with roughly the same scaling and resilience profile (e.g. the ability to survive DC outages).  In functional terms, particularly with Aurora’s 30+ second outage profile during failover, Galera comes out on top at half the cost.

So when is Aurora cheaper, as claimed by Amazon?

Amazon makes claims in the realm of “1/10th the cost”. Well, that may well be the case when comparing with the TCO of Oracle or MS SQL Server, and it’s fairly typical when comparing a proprietary system with an Open Source based one (mind again that Aurora is not actually Open Source as Amazon does not make their source code available, but it’s based on MySQL).

The only other way we see is to seriously compromise on the availability (resilience).  In our second sample calculation, we use 2 instances per AZ.  This is not primarily for performance, but so that application servers in an AZ don’t have to do cross-DC queries when one instance fails.  In the case of Aurora, spinning up a new instance on the same dataset requires 15 minutes.  So, do you want to take that hit?  If so, you can save money there.  If not, it’s still costly.

But hang on, if you’re willing to make the compromise on availability, you could reduce the Galera setup also, to only one instance per AZ.  Yep!

So, no matter how you tweak it, Aurora is about twice the cost, with (in our opinion) a less interesting failover profile.

The Price of RDS Convenience

What you get with RDS/Aurora is the promise of convenience, and that’s what you pay for.  But mind that our comparison stayed entirely within AWS space anyway: the EC2 instances we used for MySQL/MariaDB+Galera already use the same basic infrastructure, dashboard and management API.  So you pay double just to go to RDS/Aurora, relative to building on EC2.

To us, that cost seems high.  If you spend some, or even all that money on engineering that convenience around your particular setup, and even outsource that task and its maintenance, you get a nicer setup at the same or a lower cost.  And last but not least, that cost will be more predictable – most likely the extra work will be charged in your own currency, too.

Cost Predictability and Budget

You can do a reasonable ball-park calculation of AWS EC2 instances that are always active, but EBS already has some I/O charges which make the actual cost rather more variable, and Aurora adds a few more variables on top of that.  I’m still amazed that companies go for this, even though they traditionally prefer a known fixed cost (even if higher) over a variable cost.  Choosing the variable cost breaks with some fundamental business rules, for the sake of some convenience.

The advantage of known fixed costs is that you can budget properly, as well as project future costs based on growth and other business factors.  Purposefully ditching that realm, while exposing yourself to currency fluctuations at the same time, seems most curious.  How do companies work this into their budgets?  Because others do so?  Well, following the neighbours is not always a good idea.  In this case, it might be costly as well as financially risky.


Keeping Data Secure

We often get asked about data security (how to keep things safe) and local regulations and certifications regarding same. Our general thoughts on this are as follows:

  1. Government regulations tend to end up becoming part of the risk/cost/benefit equations in a business, which is not particularly comforting for customers.
    • Example: some years ago an Australian bank had a mail server mis-configured to allow relaying (i.e., people could send phishing emails pretending to legitimately originate from that bank).  A caring tech citizen reported the issue to the bank.  Somehow, it ended up with the legal department rather than a system/network administrator.  The legal eagles decided that the risk to the organisation was fairly low, and didn’t forward it for action at that time.  Mind that the network admin would’ve been able to fix up the configuration within minutes.
  2. Appreciate that certifications mainly give you a label to wave in front of a business partner requiring it; they do not make your business more secure.
    • Data leaves footprints.  For instance, some people use a separate email address for each website they interact with.  Thus, when a list of email addresses leaks, saying “it didn’t come from us” won’t hold.  That’s only a simple example, but it illustrates the point.  Blatant denial was never a good policy, but these days it’ll backfire even faster.
  3. Recent legislation around mandatory data retention only makes things worse, as
    • companies tend to already store much more detail about their clients and web visitors than is warranted, and
    • storing more activity data for longer just increases the already enlarged footprint.

So what do we recommend?

  1. Working within the current legal requirements, we still advise keeping as little data as possible.
    • More data does not intrinsically mean more value – while it’s cheap and easy to gather and store more data, if you’re more strategic about what you collect and store, you’ll find there’s much more value in that.
  2. Fundamentally, data that you don’t have can’t be leaked/stolen/accessed through you.  That’s obvious, but still worth noting.
    • Our most critical example of this is credit card details.  You do not want to store credit card details, ever.  Not for any perceived reason.  There are sensible alternatives using tokens provided by your credit card gateway, so that clients’ credit cards never touch your system.  We wrote about this (again) in our post “Your Ecommerce Site and Credit Cards” last year.
      Why?  It’s fairly easy to work out from a site’s frontend behaviour whether it stores credit cards locally, and if it does, you’re much more of a target.  Credit card details provide instant anonymous access to financial resources.  Respect your clients.
  3. More secure online architecture.
    • We’ll do a separate post on this.
  4. If you have a data breach, be sensible and honest about it.
    • If your organisation operates in Australia and has an annual turnover of $3 million or more, or is a credit reporting body, health service provider, or TFN recipient (among others), the Notifiable Data Breaches scheme (part of the Australian Privacy Act), which came into force in February 2018, applies to you.

We’re happy to advise and assist.  Ideally, before trouble occurs.  For any online system, that’s a matter of when, not if.
(And, of course, we’re not lawyers.  We’re techies.  You may need both, but never confuse the two!)


RDS Aurora MySQL and Service Interruptions

In Amazon space, any EC2 or service instance can “disappear” at any time.  Depending on which service is affected, it will be automatically restarted.  In EC2 you can choose whether an interrupted instance will be restarted, or left shut down.

An interrupted Aurora instance is always restarted. Makes sense.

The restart timing, and other consequences during the process, are noted in our post on Aurora Failovers.

Aurora Testing Limitations

As mentioned earlier, we love testing “uncontrolled” failovers.  That is, we want to be able to pull any plug on any service, and see that the environment as a whole continues to do its job.  We can’t do that with Aurora, because we can’t control the essentials:

  • the power button;
  • the reset switch;
  • the ability to kill processes on a server;
  • and the ability to change firewall settings.

In Aurora, an instance is either running, or will (again) be running shortly.  That much we know.  Aurora MySQL also offers some commands that simulate various failure scenarios, but since they are built in, we can presume that those scenarios are both very well tested and covered by the automation around the environment.  Those clearly defined cases are exactly the situations we’re not interested in.

What if, for instance, a server accepts new connections but is otherwise unresponsive?  We’ve seen MySQL do this on occasion.  Does Aurora catch this?  We don’t know, and we have no way of testing that, or many other possible problem scenarios.  That irks.

The Need to Know

If an automated system is able to catch a situation, that’s great.  But if your environment can end up in a state such as described above and the automated systems don’t catch and handle it, you could be dead in the water for an undefined amount of time.  If you have scripts to catch such cases, but the automated systems catch them as well, you want to be sure that you don’t trigger “double failovers” or otherwise interfere with a failover-in-progress.  So either way, you need to know whether a situation is caught and handled, and be able to test specific scenarios.

In summary: when you know the facts, then you can assess the risk in relation to your particular needs, and mitigate where and as desired.

A corporate guarantee of “everything is handled and it’ll be fine” (or as we say in Australia “She’ll be right, mate!“) is wholly unsatisfactory for this type of risk analysis and mitigation exercise.  Guarantees and promises, and even legal documents, don’t keep environments online.  Consequently, promises and legalities don’t keep a company alive.

So what does?  In this case, engineers.  But to be able to do their job, engineers need to know what parameters they’re working with, and have the ability to test any unknowns.  Unfortunately Aurora is, also in this respect, a black box.  You have to trust, and can’t comprehensively verify.  Sigh.


RDS Aurora MySQL Failover

Right now Aurora only allows a single master, with up to 15 read-only replicas.

Master/Replica Failover

We love testing failure scenarios, however our options for such tests with Aurora are limited (we might get back to that later).  Anyhow, we told the system, through the RDS Aurora dashboard, to do a failover. These were our observations:

Role Change Method

Both master and replica instances are actually restarted (the MySQL uptime resets to 0).

This is quite unusual these days: we can do a fully controlled role change in classic asynchronous replication without a restart (CHANGE MASTER TO …), and Galera doesn’t have read/write roles as such (all instances are technically writers), so it doesn’t need role changes at all.

Failover Timing

Failover between running instances takes about 30 seconds.  This is in line with information provided in the Aurora FAQ.

Failover where a new instance needs to be spun up takes 15 minutes according to the FAQ (similar to creating a new instance from the dash).

Instance Availability

During a failover operation, we observed that all connections to the (old) master and the replica that is going to be promoted are first dropped, then refused (the refusals occur while the mysqld process is restarting).

According to the FAQ, reads to all replicas are interrupted during failover.  Don’t know why.

Aurora can deliver a DNS CNAME for your writer instance. In a controlled environment like Amazon, with guaranteed short TTL, this should work ok and be updated within the 30 seconds that the shortest possible failover scenario takes.  We didn’t test with the CNAME directly as we explicitly wanted to observe the “raw” failover time of the instances themselves, and the behaviour surrounding that process.

Caching State

On the promoted replica, the buffer pool is saved and loaded (warmed up) on the restart; good!  Note that this is not special, it’s desired and expected to happen: MySQL and MariaDB have had InnoDB buffer pool save/restore for years.  Credit: Jeremy Cole initially came up with the buffer pool save/restore idea.

On the old master (new replica/slave), the buffer pool is left cold (empty).  Don’t know why.  This was a controlled failover from a functional master.

Because of the server restart, other caches are of course cleared also.  I’m not too fussed about the query cache (although, deprecated as it is, it’s currently still commonly used), but losing connections is a nuisance. More detail on that later in this article.

Statistics

Because of the instance restarts, the running statistics (SHOW GLOBAL STATUS) are all reset to 0. This is annoying, but should not affect proper external stats gathering, other than for uptime.

On any replica, SHOW ENGINE INNODB STATUS comes up empty. Always.  This seems like obscurity to me; I don’t see a technical reason not to show it.  I suppose that with a replica being purely read-only, most running info is already available through SHOW GLOBAL STATUS LIKE ‘innodb%’, and you won’t get deadlocks on a read-only slave.

Multi-Master

Aurora MySQL multi-master was announced at Amazon re:Invent 2017, and appears to currently be in restricted beta test.  No date has been announced for general availability.

We’ll have to review it when it’s available, and see how it works in practice.

Conclusion

Requiring 30 seconds or more for a failover is unfortunate; this is much slower than other MySQL replication setups (writes can fail over within a few seconds, and reads are not interrupted) and Galera cluster environments (which essentially deliver continuity across instance failures – clients talking to the failed instance just need to reconnect to the loadbalancer/cluster to continue).

I don’t understand why the old master gets a cold InnoDB buffer pool.

I wouldn’t think a complete server restart should be necessary, but since we don’t have insight into the internals, who knows.

On Killing Connections (through the restart)

Losing connections across an Aurora cluster is a real nuisance that really impacts applications.  Here’s why:

When the MySQL C client library (which most MySQL APIs either use or are modelled on) is disconnected, it passes back a specific error to the application.  When the application makes its next query call, the C client can automatically reconnect first (provided the reconnect option is enabled), so the client does not have to explicitly reconnect.  So a client only needs to catch the error and re-issue its last command, and all will generally be fine.  Of course, if it relies on particular SESSION settings, or was in the middle of a multi-statement transaction, it will need to do a bit more.

So, this means that the application has to handle disconnects gracefully without chucking hissy-fits at users, and I know for a fact that that’s not how many (most?) applications are written.  Consequently, an Aurora failover will make the frontend of most applications look like a disaster zone for about 30 seconds (provided functional instances are available for the failover, which is the preferred and best case scenario).
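As an illustration of the kind of handling the application side needs, here is a hedged sketch using the PyMySQL driver (other connectors expose equivalent error codes and reconnect calls; the hostnames, table and query are placeholders):

# Sketch of graceful reconnect handling around a query, using PyMySQL.
# Error codes 2006 (server gone away) and 2013 (lost connection) are the
# classic MySQL client-side disconnect errors; adjust for your own driver.
import time
import pymysql

conn = pymysql.connect(host="db.example.com", user="app", password="secret",
                       database="appdb", autocommit=True)

def run_query(sql, params=None, retries=3):
    for attempt in range(retries):
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        except pymysql.err.OperationalError as exc:
            if exc.args[0] not in (2006, 2013) or attempt == retries - 1:
                raise
            # Re-establish the connection and retry; during an Aurora failover
            # this can take up to ~30 seconds, so back off between attempts.
            time.sleep(2 ** attempt)
            conn.ping(reconnect=True)

rows = run_query("SELECT id, name FROM customers WHERE id = %s", (42,))

Session settings and open transactions still need re-establishing after a reconnect, as noted above; a retry wrapper like this only covers the simple cases.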

I appreciate that this is not directly Aurora’s fault – it’s sloppy application development that causes this – but it’s a real-world fact we have to deal with.  And, perhaps importantly: other cluster and replication options do not trigger this scenario.


Exploring Amazon RDS Aurora: replica writes and cache chilling

Our clients operate on a variety of platforms, and RDS (Amazon Relational Database Service) Aurora has received quite a bit of attention in recent times. On behalf of our clients, we look beyond the marketing, and see what the technical architecture actually delivers.  We will address specific topics in individual posts, this time checking out what the Aurora architecture means for write and caching behaviour (and thus performance).

What is RDS Aurora?

First of all, let’s declare the baseline.  Aurora MySQL is not a completely new RDBMS. It comprises a set of Amazon modifications on top of stock Oracle MySQL 5.6 and 5.7, implementing a different replication mechanism and some other changes/additions.  While we have some information (for instance from the “deep dive” by AWS VP Anurag Gupta), the source code of the Aurora modifications is not published, so unfortunately it is not immediately clear how things are implemented.  Any architecture requires choices to be made, trade-offs, and naturally these have consequences.  Because we don’t get to look inside the “black box” directly, we need to explore indirectly.  We know how stock MySQL is architected, so by observing Aurora’s behaviour we can try to derive how it is different and what it might be doing.  Mind that this is equivalent to looking at a distant star, seeing a wobble, and deducing from the pattern that there must be one or more planets orbiting.  It’s an educated guess.

For the sake of brevity, I have to skip past some aspects that can be regarded as “obvious” to someone with insight into MySQL’s architecture.  I might also defer explaining a particular issue in depth to a dedicated post on that topic.  Nevertheless, please do feel free to ask “so why does this work in this way”, or other similar questions – that’ll help me check my logic trail and tune to the reader audience, as well as help create a clearer picture of the Aurora architecture.

Instead of using the binary log, Aurora replication ties into the storage layer.  It only supports InnoDB, and instead of doing disk reads/writes, the InnoDB I/O system talks to an Amazon storage API which delivers a shared/distributed storage, which can work across multiple availability zones (AZs).  Thus, a write on the master will appear on the storage system (which may or may not really be a filesystem).  Communication between AZs is fairly fast (only 2-3 ms extra overhead, relative to another server in the same AZ) so clustering databases or filesystems across AZs is entirely feasible, depending on the commit mechanism (a two-phase commit architecture would still be relatively slow).  We do multi-AZ clustering with Galera Cluster (Percona XtraDB Cluster or MariaDB Galera Cluster).  Going multi-AZ is a good idea that provides resilience beyond a single data centre.

So, imagine an individual instance in an Aurora setup as an EC2 (Amazon Elastic Computing) instance with MySQL using an SSD EBS (Amazon Elastic Block Storage) volume, where the InnoDB I/O threads interface more directly with the EBS API.  The actual architecture might be slightly different still (more on that in a later post), but this rough description helps set up a basic idea of what a node might look like.

Writes in MySQL

In a regular MySQL, on commit a few things happen:

  • the InnoDB log is written to and flushed,
  • the binary log is written to (and possibly flushed), and
  • the changed pages (data and indexes) in the InnoDB buffer pool are marked dirty, so a background thread knows they need to be written back to disk (this does not need to happen immediately).  When a page is written to disk, normally it uses a “double-write” mechanism where first the original page is read and written to a scratch space, and then the new page is put in the original position.  Depending on the filesystem and underlying storage (spinning disk, or other storage with a different block size from the InnoDB page size) this may be required to be able to recover from write failures.

This does not translate into as many IOPS because, in practice, transaction commits are grouped (for instance with MariaDB’s group commit), and thus many commits that happen in a short space of time effectively only use a few I/Os for their log writes.  With Galera cluster, the local logs are written but not flushed, because the guaranteed durability is provided by other nodes in the cluster rather than by local persistence of the logfile.

In Aurora, a commit has to send either the InnoDB log entries or the changed data pages to the storage layer; which one it is doesn’t particularly matter.  The storage layer has a “quorum set” mechanism to ensure that multiple nodes accept the new data.  This is similar to Galera’s “certification” mechanism that provides the “virtual synchrony”.  The Aurora “deep dive” talk claims that it requires many fewer IOPS for a commit; however, it appears they are comparing a worst-case plain MySQL scenario with an optimal Aurora environment.  Very marketing.

Aurora does not use the binary log, which does make one wonder about point-in-time recovery options. Of course, it is possible to recover to any point-in-time from an InnoDB snapshot + InnoDB transaction logs – this would require adding timestamps to the InnoDB transaction log format.

While it is noted that the InnoDB transaction log is also backed up to S3, it doesn’t appear to be used directly (so, only for recovery purposes then).  After all, any changed page needs to be communicated to the other instances, so essentially all pages are always flushed (no dirty pages).  When we look at the InnoDB stats GLOBAL STATUS, we sometimes do see up to a couple of dozen dirty pages with Aurora, but their existence or non-existence doesn’t appear to have any correlation with user-created tables and data.

Where InnoDB gets its Speed

[Diagram: InnoDB rows and indexing]

We all know that disk-access is slow.  In order for InnoDB to be fast, it is dependent on most active data being in the buffer pool.  InnoDB does not care for local filesystem buffers – something is either in persistent storage, or in the buffer pool.  In configurations, we prefer direct I/O so the system calls that do the filesystem I/O bypass the filesystem buffers and any related overhead.  When a query is executed, any required page that’s not yet in the buffer pool is requested to be loaded in the background. Naturally, this does slow down queries, which is why we preferably want all necessary pages to already be in memory.  This applies for any type of query.  In InnoDB, all data/indexes are structured in B+trees, so an INSERT has to be merged into a page and possibly causes pages to be split and other items shuffled so as to “re-balance” the tree.  Similarly, a delete may cause page merges and a re-balancing operation.  This way the depth of the tree is controlled, so that even for a billion rows you would generally see a depth of no more than 6-8 pages.  That is, retrieving any row would only require a maximum of 6-8 page reads (potentially from disk).

I’m telling you all this because, while most replication and clustering mechanisms essentially work with the buffer pool, Aurora replication appears to work against it.  As I mentioned: choices have consequences (trade-offs).  So, what happens?

Aurora Replication

When you do a write in MySQL which gets replicated through classic asynchronous replication, the slaves or replica nodes apply the row changes in memory.  This means that all the data (which is stored with the PRIMARY KEY, in InnoDB) as well as any other indexes are updated, the InnoDB log is written, and the pages are marked as dirty.  It’s very similar to what happens on the writer/master system, and thus the end result in memory is virtually identical.  While Galera’s cluster replication operates differently from the asynchronous mechanism shown in the diagram, the resulting caching (which pages are in memory) ends up similar.

[Diagram: MySQL replication architecture]

Not so with Aurora.  Aurora replicates in the storage layer, so all pages are updated in the storage system but not in the in-memory InnoDB buffer pool.  A secondary notification system between the instances ensures that cached InnoDB pages are invalidated.  When you next run a query that needs any of those no-longer-valid cached pages, they will have to be re-read from the storage system.  You can see a representation of this in the diagram below, indicating invalidated cache pages in different indexes; as shown, for INSERT operations you’re likely to have pages invalidated higher up in the tree, and one sideways page change as well, because of the B+tree re-balancing.

[Diagram: Aurora replicated insert invalidating cached pages]

The Chilling Effect

We can tell the replica is reading from storage, because the same query is much slower than before we did the insert from the master instance.  Note: this wasn’t a matter of timing. Even if we waited slightly longer (to enable a possible background thread to refresh the pages) the post-insert query was just as slow.

Interestingly, the invalidation process does not actually remove the pages from the buffer pool (the number of pages in the buffer pool does not go down); however, the number of page reads does not go up either when a page is clearly being re-read.  Remember though that a status variable is just that: it has to be updated to be visible, and this simply means that the new functions Amazon implemented don’t bother updating these status variables.  Accidental omission or purposeful obscurity?  Can’t say.  I will say that it’s very annoying when server statistics don’t reflect what’s actually going on, as it makes the stats (and their analysis) meaningless.  In this case, the picture looks better than it is.
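For reference, this is the sort of before/after measurement involved: time the same query on a replica and capture the buffer pool counters around it.  A minimal sketch assuming the PyMySQL driver; the hostname, table and query are placeholders.  Innodb_buffer_pool_read_requests counts logical page reads, while Innodb_buffer_pool_reads counts reads that had to go to storage (and, as described above, Aurora doesn’t appear to keep the latter honest).

# Sketch: compare query time and buffer pool counters before/after on a replica.
import time
import pymysql

conn = pymysql.connect(host="replica.example.com", user="test", password="secret",
                       database="testdb")

NAMES = ("Innodb_buffer_pool_read_requests", "Innodb_buffer_pool_reads")

def status(cur):
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
    return {name: int(value) for name, value in cur.fetchall() if name in NAMES}

with conn.cursor() as cur:
    before = status(cur)
    start = time.monotonic()
    cur.execute("SELECT COUNT(*) FROM t1 WHERE created_at > NOW() - INTERVAL 1 HOUR")
    cur.fetchall()
    elapsed = time.monotonic() - start
    after = status(cur)

print("query time: %.3fs" % elapsed)
for name in NAMES:
    print("%s: +%d" % (name, after[name] - before[name]))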

With each Aurora write (insert/update/delete), the in-memory buffer pool on replicas is “chilled”.

Unfortunately, it’s not even just the one query on the replica that gets affected after a write: the primary key as well as the secondary indexes get chilled. If the initial query uses one particular secondary index, that index and the primary key will get warmed up again (at the cost of multiple storage system read operations); however, the other secondary indexes are still chattering their teeth.

Being Fast on the Web

In web applications (whether websites or web-services for mobile apps), typically the most recently added data is the most likely to be read again soon.  This is why InnoDB’s buffer pool is normally very effective: frequently accessed pages remain in memory, while lesser used ones “age” and eventually get tossed out to make way for new pages.

Having caches cleared due to a write slows things down.  In the MySQL space, the fairly simple query cache is a good example: whenever you write to table A, any cached SELECTs that access table A are cleared out of the cache.  Regardless of whether the application is read-intensive, having regular writes makes the query cache useless, and we turn it off in those cases.  Oracle has already deprecated the “good old” query cache (which was introduced in MySQL 4.0 in the early 2000s) and soon its code will be completely removed.

Conclusion

With InnoDB, you’d generally have an AUTO_INCREMENT PRIMARY KEY, and thus newly inserted rows are sequenced to the outer end of the B+tree.  This also means that the next inserted row often ends up in the same page, again invalidating that recently written page on the replicas and slowing down reads of any of the rows it contained.

For secondary indexes, the effect is obviously scattered although if the indexed column is temporal (time-based), it will be similarly affected to the PRIMARY KEY.

How much all of this slows things down will very much depend on your application’s DB access profile.  The read/write ratio matters less than whether individual tables are written to fairly frequently.  If they are, SELECT queries on those tables made on replicas will suffer from the chill.

Aurora uses SSD EBS so of course the storage access is pretty fast.  However, memory is always faster, and we know that that’s important for web application performance.  And we can use similarly fast SSD storage on EC2 or another hosting provider, with mature scaling technologies such as Galera (or even regular asynchronous multi-threaded replication) that don’t give your caches the chills.


Your E-Commerce site and Credit Cards

Sites that deal with credit cards can have some sloppy practices.  Not through malicious intent, but it’s sloppy nevertheless so it should be addressed.  There are potential fraud and identity theft issues at stake, and any self-respecting site will want to be seen to be respecting their clients!

First, a real-world story: read “Using expired credit cards”.

The key lesson from there is that simply abiding by what payment gateways, banks and other credit card providers require does not make your payment system good.  While it is hoped that those organisations also clean up their processes a bit, you can meanwhile make sure that you do the right thing by your clients regardless of that.

First of all, ensure that all pages and all page-items (CSS, images, scripts, form submit destinations, etc) as well as payment gateway communications go over HTTPS.  Having some aspects of payment/checkout/profile pages not over HTTPS will show up in browsers, and it looks very sloppy indeed. Overall, you are encouraged to just make your entire site run over HTTPS.  But if you use any external sources for scripts, images or other content, that too needs to be checked as it can cause potential leaks in your site security on the browser end.

For the credit card processing, here are a few tips for what you can do from your end:

  • DO NOT store credit card details.  Good payment gateways work with a token system, so you can handle recurring payments and clients can choose to have their card kept on file, but you don’t have it.  After all, data you don’t have, cannot be leaked or stolen.
  • DO NOT check credit card number validity before submitting to the payment gateway, i.e. don’t apply the Luhn check.  We wrote about this over a decade ago, but it’s still relevant: Luhn algorithm (credit card number check).  In a nutshell, if you do pre-checks, the payment gateway gets less data and might miss fraud attempts.
  • Check that your payment gateway requires the CVV field, and checks it.  If it doesn’t do this, the gateway will be bad at fraud prevention: have them fix it, or move to another provider.
  • Check that your payment gateway does not allow use of expired cards, not even for recurring payments using cards-on-file.  This is a bit more difficult to check (since you don’t want to be storing credit card details locally) and you may only find out over time, but try to make this effort.  It is again an issue that can otherwise harm your clients.

If you have positive confirmation that your payment gateway does the right thing, please let us know!  It will help others.  Thanks.
