
Australian government encryption folly

In IDG’s CIO magazine (17 July 2018): Wickr, Linux Australia, Twilio sign open letter against govt’s encryption crackdown ‘mistake’.  It’s not just those few, though: 76 companies and organisations signed that letter.

Learning Lessons

Encryption is critical to the whole “online thing” working, for individuals as well as companies.  Let’s look back at history:

  • In most countries’ postal legislation, there’s a law against the Post Office opening letters and packages.
  • Similarly, a bit later, telephone lines couldn’t be tapped unless there was a very specific court order.

This was not just a nice gesture or convenience.  It was critical for

  1. trusting these new communication systems and their facilitating companies, and
  2. enabling people to communicate at greater distances and do business that way, or arrange other matters.

Those things don’t work if there are third parties looking or listening in, and it really doesn’t matter who or what the third party is. Today’s online environment is really not that different.

Various governments have tried to nobble encryption at various points over the past few decades: from trying to ban encryption outright, to requiring master keys to be held in escrow, to exploiting (and possibly creating) weaknesses in encryption mechanisms.  The latter in particular is very damaging, because it’s so arrogant: the whole premise is that “your people” are the only bright ones who can figure out and exploit the weakness. Such presumptions are always wrong, even if there is no public proof. A business or criminal organisation that figures it out can make good use of it for their own purposes, provided they keep quiet about it.  And companies are, ironically, much better than governments at keeping secrets.

Best Practice

Apps and web sites, and facilitating infrastructures, should live by the following basic rules:

  • First and foremost, get people on staff or externally who really understand encryption and privacy. Yes, geeks.
  • Use proper and current encryption and signature mechanisms, without shortcuts.  A good algorithm incorrectly utilised is not secure.
  • Only ask for and store data you really need, nothing else.
  • Be selective with collecting metadata (including logging, third party site plugins, advertising, etc). It can easily be a privacy intrusion and also have security consequences.
  • Only retain data as per your actual needs (and legal requirements), no longer.
  • When acting as an intermediary, design for end-to-end encryption with servers merely passing on the encrypted data (see the sketch after this list).
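To make that last point concrete, here is a minimal sketch of the “servers merely pass on encrypted data” idea in Python, using the PyNaCl library (libsodium bindings). The helper names (encrypt_for, relay_message, decrypt_on_device) are illustrative assumptions, not a real protocol:

    from nacl.public import PrivateKey, SealedBox

    # Each user generates a keypair on their own device; only public keys are shared.
    recipient_key = PrivateKey.generate()

    def encrypt_for(recipient_public_key, plaintext: bytes) -> bytes:
        # The sender encrypts on their own device, before anything touches the server.
        return SealedBox(recipient_public_key).encrypt(plaintext)

    def relay_message(ciphertext: bytes) -> bytes:
        # The intermediary stores/forwards opaque bytes; it never sees plaintext or keys.
        return ciphertext

    def decrypt_on_device(recipient_private_key, ciphertext: bytes) -> bytes:
        # Only the recipient's device, which holds the private key, can decrypt.
        return SealedBox(recipient_private_key).decrypt(ciphertext)

    blob = relay_message(encrypt_for(recipient_key.public_key, b"meet at 10"))
    print(decrypt_on_device(recipient_key, blob))  # b'meet at 10'

The point is architectural rather than about any particular library: if the server only ever handles ciphertext and never holds private keys, a breach of that server (or a government request to it) yields nothing readable.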

Most of these aspects interact.  For example: if you just pass on encrypted data, but meanwhile collect and store an excess of metadata, you’re not actually delivering a secure environment. On the other hand, by not having data, or keys, or metadata, you’ll neither be a target for criminals, nor have anything to hand over to a government.

See also our earlier post on Keeping Data Secure.

But what about criminals?

Would you, when following these guidelines, enable criminals? Hardly. The proper technology is available anyway, and criminal elements who are not technically capable can hire that knowledge and skill to assist them. Fact is, some smart people “go to the dark side” (for reasons of money, ego, or whatever else).  You don’t have to presume that your service or app is the one thing that enables these people. It’s just not.  Which is another reason why these government “initiatives” are folly: they’ll fail at their intended objective, while at the same time impairing and limiting general use of essential security mechanisms.  Governments themselves could do much better by hiring and listening to people who understand these matters.


EFF STARTTLS Everywhere project: safer hops for email

Safe and secure online infrastructure is a broad topic, covering databases, privacy, web applications, and much more, and over the years we’ve specifically addressed many of these issues with information and recommendations.

The Electronic Frontier Foundation (EFF) announced the launch of STARTTLS Everywhere, their initiative to improve the security of the email ecosystem. Thanks to previous EFF efforts like Let’s Encrypt (which we’ve written about earlier on the Open Query blog) and the Certbot tool, as well as help from the major web browsers, there have been significant wins in encrypting the web. Now EFF wants to do for email what they’ve done for web browsing: make it simple and easy for everyone to help ensure their communications aren’t vulnerable to mass surveillance.

STARTTLS is an addition to SMTP, which allows one email server to say to the other, “I want to deliver this email to you over an encrypted communications channel.” The recipient email server can then say “Sure! Let’s negotiate an encrypted communications channel.” The two servers then set up the channel and the email is delivered securely, so that anybody listening in on their traffic only sees encrypted data. In other words, network observers gobbling up worldwide information from Internet backbone access points (like the NSA or other governments) won’t be able to see the contents of messages while they’re in transit, and will need to use more targeted, low-volume methods.
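As an illustration of that negotiation, the following sketch uses Python’s standard smtplib to ask a mail server whether it offers STARTTLS and, if so, to upgrade the connection. The helper name and the example hostname are ours; note that many networks block outbound port 25, so run it from a mail-capable host:

    import smtplib

    def supports_starttls(host: str, port: int = 25, timeout: float = 10.0) -> bool:
        # Connect in plain text, then ask the server to upgrade to TLS.
        with smtplib.SMTP(host, port, timeout=timeout) as smtp:
            smtp.ehlo()
            if not smtp.has_extn("starttls"):
                return False          # server does not advertise STARTTLS
            smtp.starttls()           # negotiate the encrypted channel
            smtp.ehlo()               # re-issue EHLO over the now-encrypted connection
            return True

    print(supports_starttls("mail.example.com"))  # replace with the MX host you want to test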

STARTTLS Everywhere provides software that a sysadmin can run on an email server to automatically get a valid certificate from Let’s Encrypt. This software can also configure their email server software so that it uses STARTTLS, and presents the valid certificate to other email servers. Finally, STARTTLS Everywhere includes a “preload list” of email servers that have promised to support STARTTLS, which can help detect downgrade attacks.

The net result: more secure email, and less mass surveillance.

This article is based on the announcement in EFFector, EFF’s newsletter.


Keeping Data Secure

We often get asked about data security (how to keep things safe) and about local regulations and certifications regarding same. Our general thoughts on this are as follows:

  1. Government regulations tend to end up becoming part of the risk/cost/benefit equations in a business, which is not particularly comforting for customers.
    • Example: some years ago an Australian bank had a mail server mis-configured to allow relaying (i.e., people could send phishing emails pretending to legitimately originate from that bank).  A caring tech citizen reported the issue to the bank.  Somehow, it ended up with the legal department rather than a system/network administrator.  The legal eagles decided that the risk to the organisation was fairly low, and didn’t forward it for action at that time.  Mind that the network admin would’ve been able to fix up the configuration within minutes.
  2. Appreciate that certifications mainly give you a label to wave in front of a business partner requiring it; they do not make your business more secure.
    • Data leaves footprints.  For instance, some people use a separate email address for each website they interact with.  Thus, when a list of email addresses leaks, saying “it didn’t come from us” won’t hold.  That’s only a simple example, but it illustrates the point.  Blatant denial was never a good policy, but these days it’ll backfire even faster.
  3. Recent legislation around mandatory data retention only makes things worse, as
    • companies tend to already store much more detail about their clients and web visitors than is warranted, and
    • storing more activity data for longer just increases the already enlarged footprint.

So what do we recommend?

  1. Working within the current legal requirements, we still advise keeping as little data as possible.
    • More data does not intrinsically mean more value – while it’s cheap and easy to gather and store more data, if you’re more strategic about what you collect and store, you’ll find there’s much more value in that.
  2. Fundamentally, data that you don’t have can’t be leaked/stolen/accessed through you.  That’s obvious, but still worth noting.
    • Our most critical example of this is credit card details.  You do not want to store credit card details, ever.  Not for any perceived reason.  There are sensible alternatives using tokens provided by your credit card gateway, so that clients’ credit cards never touch your system.  We wrote about this (again) in our post “Your Ecommerce Site and Credit Cards” last year.
      Why?  It’s fairly easy to work out from a site’s frontend behaviour whether it stores credit cards locally, and if it does, you’re much more of a target.  Credit card details provide instant anonymous access to financial resources.  Respect your clients.
  3. More secure online architecture.
    • We’ll do a separate post on this.
  4. If you have a data breach, be sensible and honest about it.
    • If your organisation operates in Australia and is within the scheme’s scope (“an annual turnover of $3 million or more, credit reporting bodies, health service providers, and TFN recipients, among others”), the Notifiable Data Breaches scheme (part of the Australian Privacy Act), which came into force in February 2018, applies to you.

We’re happy to advise and assist.  Ideally, before trouble occurs.  For any online system, that’s a matter of when, not if.
(And, of course, we’re not lawyers.  We’re techies.  You may need both, but never confuse the two!)


Your E-Commerce site and Credit Cards

Sites that deal with credit cards can have some sloppy practices.  Not through malicious intent, but it’s sloppy nevertheless, so it should be addressed.  There are potential fraud and identity theft issues at stake, and any self-respecting site will want to be seen to be respecting its clients!

First, a real-world story. Read Using expired credit cards

The key lesson from that story is that simply abiding by what payment gateways, banks and other credit card providers require does not make your payment system good.  While we can hope that those organisations also clean up their processes a bit, you can meanwhile make sure that you do the right thing by your clients regardless.

First of all, ensure that all pages and all page items (CSS, images, scripts, form submit destinations, etc) as well as payment gateway communications go over HTTPS.  Having some aspects of payment/checkout/profile pages not served over HTTPS will show up in browsers, and it looks very sloppy indeed. Overall, you are encouraged to just make your entire site run over HTTPS.  But if you use any external sources for scripts, images or other content, that too needs to be checked, as it can cause leaks in your site’s security on the browser end.

For the credit card processing, here are a few tips for what you can do from your end:

  • DO NOT store credit card details.  Good payment gateways work with a token system, so you can handle recurring payments and clients can choose to have their card kept on file, but you don’t have it.  After all, data you don’t have, cannot be leaked or stolen.
  • DO NOT check credit card number validity before submitting to the payment gateway, i.e. don’t apply the Luhn check (sketched after this list for reference).  We wrote about this over a decade ago, but it’s still relevant: Luhn algorithm (credit card number check).  In a nutshell, if you do pre-checks, the payment gateway gets less data and might miss fraud attempts.
  • Check that your payment gateway requires the CVV field, and checks it.  If it doesn’t do this, the gateway will be bad at fraud prevention: have them fix it, or move to another provider.
  • Check that your payment gateway does not allow use of expired cards, not even for recurring payments using cards-on-file.  This is a bit more difficult to check (since you don’t want to be storing credit card details locally) and you may only find out over time, but try to make this effort.  It is again an issue that can otherwise harm your clients.
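For reference, the Luhn check mentioned above is just a simple checksum, roughly as in the Python sketch below. The advice stands: leave it to the payment gateway, so the gateway also sees the failed attempts.

    def luhn_valid(number: str) -> bool:
        # Double every second digit from the right, subtract 9 if over 9, sum, check mod 10.
        digits = [int(d) for d in number if d.isdigit()]
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0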

If you have positive confirmation that your payment gateway does the right thing, please let us know!  It will help others.  Thanks.


SSL and trust

We can all agree on this: security is important, as is trust.

Does a pretty seal from an SSL certificate provider create trust? Doubtful. The provider’s own claims aside, it’s marketing fluff.
Oh, it used to provide them with some extra Google juice (one more link to them), but Google’s algorithms don’t care for that any more. Good!

What Google (and others) do care about is security: all sites should use SSL. For everything.
Expensive? Not really. Let’s Encrypt is free, and updates can be fully automated (scripted). Quite shiny really.

Let’s Encrypt only does domain validation, so a user sees the green lock and a “Secure” indicator. If you want company validation, you need to use another provider and pay their fees. Do you need that? That’s up to you. We reckon that in many (if not most) cases, you don’t really.

It might depend on whether your clients are informed enough to care for SSL, and then whether they know (and care) enough to discern which indicators actually have real security meaning and which are just fluff. Tech geeks aside, few people do. We’re not saying that is brilliant, but it is reality.

Do people care for pretty seals, and do we want to feed that realm of misinformation and false security? We hope you don’t go that path, because if we really care for security, this just distracts without solving the real issues. Doing things you technically don’t believe in won’t create real trust, as it’s not genuine. And whatever marketing/sales types tell you, you can’t fake genuine. Increasingly, people see right through it. Which is awesome!

If your users know enough and care to ensure that your site is really owned by your company, then yes, a certificate with company validation makes sense.

Actionable task

If your publicly facing web or API servers aren’t using SSL for everything yet, you’ll want to spend some time fixing this. Real security aside, it affects your search engine ranking. If web pages pull in logos, JavaScript or even stylesheets from third parties, make sure those too use HTTPS, as otherwise browsers produce “mixed content” warnings.
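A quick way to spot obvious offenders is to scan a page for http:// sub-resources. The sketch below (assuming the third-party requests package; a real check would parse the DOM rather than use a regex) is a starting point, not a complete audit:

    import re
    import requests

    def find_mixed_content(url: str) -> list:
        # Fetch the page and list src/href attributes that still point at plain http://.
        html = requests.get(url, timeout=10).text
        return re.findall(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)', html)

    for offender in find_mixed_content("https://www.example.com/"):
        print("insecure sub-resource:", offender)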



Tom Eastman on File Uploads

The awesome Tom Eastman presented a session at PyCon Australia (Melbourne) 2016 entitled

“The dangerous, exquisite art of safely handing user-uploaded files”.

Every web application has an attack surface — the exposed points of interaction where a malicious or mischievous user can commit malice, or mischief (respectively). Possibly nowhere, however, is more vulnerable than places a user is allowed to upload arbitrary files.
The scope for abuse is eye-widening: The contents of the file, the type of the file, the size and encoding of the file, even the *name* of the file can be a potent vector for attacking your system.
The scariest part? Even the best and most secure web-frameworks can’t protect you from all of it.

In this talk, Tom shows you every scary thing he knows about that can be done with a file upload, and how to protect yourself from — hopefully — most of them.

Do watch it and pick up any hints you can.  This is important stuff.
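As a small starting point (no substitute for watching the talk), here is a minimal sketch of a few common defensive measures for uploads, in Python. The whitelist, size limit, upload directory and the handle_upload() helper are illustrative assumptions on our part:

    import os
    from werkzeug.utils import secure_filename

    ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}
    MAX_BYTES = 5 * 1024 * 1024    # refuse anything over 5 MB
    UPLOAD_DIR = "/srv/uploads"    # outside the web root, never served or executed directly

    def handle_upload(filename: str, data: bytes) -> str:
        if len(data) > MAX_BYTES:
            raise ValueError("file too large")
        safe_name = secure_filename(filename)   # strips path tricks such as ../../etc/passwd
        ext = os.path.splitext(safe_name)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            raise ValueError("file type not allowed")
        # Never trust the client-supplied name or declared type; the content should also
        # be verified (e.g. by checking magic bytes) before the file is used anywhere.
        path = os.path.join(UPLOAD_DIR, safe_name)
        with open(path, "wb") as f:
            f.write(data)
        return path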

How do your web applications handle file uploads?


Found data leak of a company while giving a college lecture | Sijmen Ruwhof

Sijmen writes:

A few weeks ago I gave a guest lecture at the Windesheim University of Applied Sciences in The Netherlands. I graduated there and over the years I kept in contact with some of my teachers since then. One of the teachers told me recently that a lot of students wanted to learn more about IT security and hacking and asked me to give a lecture about it. Of course! And to keep it a bit juicy, I built in a hacking demonstration in my lecture.

Read the full story at http://sijmen.ruwhof.net/weblog/937-how-i-found-a-huge-data-leak-of-a-company-during-a-college-lecture

For any server that’s connected to the Internet (and these days, that’s most servers), security is very important.

Mind that, as a fundamental, you have to regard any web server as compromised. Not that they necessarily are, but it’s a very useful baseline: these are the most visible servers and thus potentially the easiest targets. Ask what information is present on the web server itself, and what information is on there that can be used to access other systems (and to what extent). Scary? Perhaps. But that’s no reason not to review and put sensible practices in place.

If you’d like to discuss ways to secure your online environment, or would like to see how your current setup holds up to the various security benchmarks, have a chat with us: Open Query offers a security review (ad-hoc consulting) package, and we also offer regular security check-ups for our subscription clients.


Web Security: SHA1 SSL Deprecated

You may not be aware that the mechanism used to fingerprint the SSL certificates that keep your access to websites encrypted and secure is changing. The old method, known as SHA1, is being deprecated – meaning it will no longer be supported. As of January 2016, various vendors will no longer support creating certificates with SHA1, and browsers show warnings when they encounter an old SHA1 certificate. From January 2017, browsers will reject old certificates.

The new signing method, known as SHA2, has been available for some time. Users have had a choice of signing methods up until now, but there are still many sites using old certificates out there. You may want to check the security on any SSL websites you own or run!
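If you’d rather script the check yourself, a minimal sketch in Python (assuming the third-party cryptography package) looks like this; it only inspects the leaf certificate of the host you point it at:

    import ssl
    from cryptography import x509

    def cert_signature_hash(host: str, port: int = 443) -> str:
        # Fetch the server's certificate and report the hash algorithm used to sign it.
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        return cert.signature_hash_algorithm.name   # e.g. 'sha256' (SHA2 family) or 'sha1'

    print(cert_signature_hash("www.example.com"))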

To ensure your users’ security and privacy, force https across your entire website, not just e-commerce or other sections. You may have noticed this move on major websites over the last few years.

For more information on the change from SHA1 to SHA2 you can read:

To test if your website is using a SHA1 or SHA2 certificate you can use one of the following tools:

Open Query also offers a Security Review package, in which we check on a broad range of issues in your system’s front-end and back-end and provide you with an assessment and recommendations. This is most useful if you are looking at a form of security certification.


Password rules

The below comes from an Australian government site (formatting is mine, for readability):

“Your password must be a minimum length of nine characters, consisting of three of the following –
lowercase (a-z) and uppercase (A-Z) alphabetic characters,
numeric characters (0-9) or
special characters (! $ # %).
It cannot contain any 2 consecutive characters that appear in your user ID, first name or last name.
It must not be one of your 8 previous passwords.”
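For illustration, here is roughly what that ruleset amounts to in code (a Python sketch; the argument names and the plain-text password history are assumptions on our part, a real system would compare against stored hashes):

    def password_ok(pw: str, user_id: str, first_name: str, last_name: str,
                    previous_passwords: list) -> bool:
        if len(pw) < 9:
            return False
        character_classes = [
            any(c.islower() for c in pw),
            any(c.isupper() for c in pw),
            any(c.isdigit() for c in pw),
            any(c in "!$#%" for c in pw),
        ]
        if sum(character_classes) < 3:
            return False
        # No 2 consecutive characters from the user ID or names may appear in the password.
        for source in (user_id, first_name, last_name):
            for i in range(len(source) - 1):
                if source[i:i + 2].lower() in pw.lower():
                    return False
        return pw not in previous_passwords[-8:]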

That’s a serious looking ruleset. But does it actually make things safer?

I doubt it. What do you think?


Fatal Half-measures in Incident Response

CSO Online writes about a rather sad list of security breaches at http://www.csoonline.com/article/721151/fatal-half-measures-in-incident-response, and the half-hearted approach companies take in dealing with the security on their networks and websites.

What I find most embarrassing is that it appears (judging by the actions) that many companies have their lawyers do some kind of borked risk assessment, and decide that they can just leave things as-is and cry foul when there’s a breach. After all, particularly in the US, prosecutors are very heavy-handed with breaches, even when the company has been totally negligent. That’s weird, because an insurance company wouldn’t pay out for a break-in when you’ve left your front door wide open! The problem is of course that the damage will have been done, generally data (such as personal details or credit card info) taken. The damage that causes might be hidden and not even get tracked back to this cause. But it hurts individuals, potentially badly (and not just financially).

One example I know of… years ago, Commonwealth Bank Australia had an open mail relay. This means that outsiders could pass mail through CBA mail systems, thus send emails pretending to come from CBA addresses and looking 100% legit. When CBA was notified about this, somehow they decided to not do anything (lawyers again?). If it had been passed to a tech, it would have been about 10 minutes of work to rectify.

At Open Query we do security reviews for clients, naturally focusing on the externally facing sites with the back-end infrastructure including MySQL/MariaDB. We specifically added this offering because we happen to have the skillset in-house, our clients often have e-commerce or privacy sensitive data, and we regard this as very important.

We’re heartened by the introduction of stricter legislation in Australia that requires disclosure of breaches – that means companies no longer have the option of not fessing up about an incident. Of course they could try to hide it, but these things tend to come out, and apart from the public fallout there are now legal consequences. It’s not perfect, but I’d hope companies are smarter than to even try to walk that line. Fessing up to a problem and dealing with it is much better, and that’s what we advise companies to do. But that’s about incident response policy, and while important in the overall picture, that’s not our main focus.

Similar to our approach with reliability of infrastructure, we take a precautionary approach with security. We want to help prevent problems, rather than doing remedial work later. Of course there’s always a trade-off (the law of diminishing returns applies), but even small budgets can accommodate a decent level of security. And really, it’s not an optional extra. If you have a website or other publicly facing system as part of your business, you take on this responsibility. You can outsource the work, but not the responsibility.
