
On Open Source and Business Choices

Open Source is a whole-of-process approach to development that can produce high-quality products better tailored to users’ real-world needs. A key reason for this is the early feedback cycle built into that complete process.

Simply publishing something under an Open Source license (while not applying Open Source development processes) does not yield the same quality and other benefits.  So, not all Open Source is the same.

Publishing the source of a product “later” (for instance, when the monetary benefit has diminished for the company) is meaningless. In this scenario, there is no “Open Source benefit” to users whatsoever; it’s simply a proprietary product. There is no opportunity for the client to make custom modifications or improvements, or to ask a third party to work on such matters, nor is there any opportunity for a third party to verify and validate code quality or security.

Open Source is not a marketing gimmick. Labels such as “Open Source” or “Enterprise”, on their own, do no more good than a “healthy” label on a greasy hamburger. If a company “believes” in Open Source software, it will use the Open Source development model for its software development.

And now we see things like this: Uproar: MariaDB Corp. veers away from open source (by Simon Phipps, InfoWorld, August 2016)

So what does it mean when a company publishes some of its software under an open source license, and releases some related products under a proprietary license?  To me, it’s generally a strong indication that the company either doesn’t believe in that model, or doesn’t understand it.  And we’ve seen it before.

It also reminds me of an interaction I had many years ago.  A Marketing VP asked me “How can we leverage our [Open Source] community?”  I answered the only possible way: “One does not ‘leverage’ the community, that’s not how it works.”  Of course that wasn’t the answer the VP wanted to hear, but that doesn’t make it less true.  They saw the community as an asset to use, rather than work with.  People don’t like getting used, and in the Open Source space that’s even more true.

Companies that have turned their back on their earlier Open Source work and devised some other model to (arguably) make more money have all discovered that this fundamentally changes their market.  They lose some of their users, customers and supporters, and gain some new, different clients.  It’s a different market.  Whether and how that pans out in terms of commercial success is never certain.  Since the Open Source development process yields benefits in terms of quality and of features users actually want, non-OSS products lack (some of) those benefits.  To put it bluntly: it becomes a different product, possibly of lower quality, and with a feature set that is likely to differ as well.

Naturally we cannot assess the code quality of proprietary products directly: we can’t review closed code, their bug systems tend to be closed, and changelogs are condensed for marketing purposes.  However, as far back as a decade and a half ago there were independent studies that worked out “lines of code per software flaw”, and the results came out significantly in favour of Open Source software, which had proportionally far fewer bugs.  Bugs also tend to get fixed more quickly in Open Source software.  None of this is new(s); see for instance Open-source vs. proprietary software bugs: Which get squashed fastest? (CNET, 2007)

For complete products (libraries are a slightly different beast) with a relatively large market scope, source code being available does not in any way diminish a company’s ability to make money.  Having the core developers, tech writers and support people gives the company a significant edge in the open market, and that’s a business asset you can leverage.  You do that by focusing on those aspects in your communications – that’s basic marketing: you draw attention to the positive aspects that make your company/product stand out from the rest.  Clearly, this objective cannot be achieved by force, as you don’t make a (potential) client like or trust you by denying them choice or transparency.

There is one other known option aside from not believing or not understanding, and that’s fear. But fear is an awkward business driver; it makes for very bad decisions.

MariaDB Corp in part uses the Open Source development model, in part acts as an Open Source publisher (in-house work that’s only made available at a later stage in the development process), and now some proprietary product has been added to the mix (actually, new versions of an existing product).  Looking at this, I am rather unclear about what they believe in.  Of course companies can make business choices as they see fit – but they never operate in a vacuum.  In the end it doesn’t matter much what I believe personally; the market will do what it will – historically, it responds in the various ways described above.  We’ll see how it pans out.

Open Query does not recommend (or resell at all) proprietary tools, as that just doesn’t make sense for us or our clients.  We often do bugfixes and improvements which we contribute upstream – for proprietary tools we can’t do that, so they become a hindrance for us and our clients.  On the specific practical level, we’ve actually never used MaxScale (the product that MariaDB Corp will now sell under different conditions for future versions), and this stems from our experience with its effective predecessor, MySQL Proxy.  Having a complex set of scripted logic in a proxy slows down applications and introduces a rather large extra (single) point of potential failure into the infrastructure.  So, while Simon refers to MaxScale as an essential tool for scalable environments, we know from experience that there are other ways of achieving that objective, and without the downsides.

Rather than promoting a single tool for many wildly different jobs, we utilise a few different tools depending on the needs of a particular client’s infrastructure.  We still have a couple of (now legacy) MySQL-MMM deployments, but also quite a few Galera clusters, and other setups that suit our clients’ needs.  The key is not only to make the infrastructure convenient for applications to use, but also to avoid introducing any more single points of failure.  We build resilience into the client’s server infrastructure, without adding significant overhead in either performance or maintenance requirements.

We believe that’s what clients want, and since potential clients come to us asking for exactly that (and note our approach with relief), we think we’re doing the right thing by our clients.  We’ve used this approach for over 9 years, and we’ll just keep on doing so – our basic approach doesn’t change even when our tools do.  If you’d like to talk with us about helping you with your infra, using our approach and way of working, contact us today!


Web Security: SHA1 SSL Deprecated

You may not be aware that the hash algorithm used to sign the SSL certificates that keep your access to websites encrypted and secure is changing. The old algorithm, known as SHA1, is being deprecated – meaning it will no longer be supported. As of January 2016, various vendors no longer support creating certificates signed with SHA1, and browsers show warnings when they encounter an old SHA1 certificate. From January 2017, browsers will reject old SHA1 certificates.

The new hash family, known as SHA2 (in practice usually SHA-256), has been available for some time. Users have had a choice of signing algorithms up until now, but there are still many sites using old certificates out there. You may want to check the security of any SSL websites you own or run!

To ensure your users’ security and privacy, force https across your entire website, not just e-commerce or other sections. You may have noticed this move on major websites over the last few years.
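
In practice you would normally enforce this in your web server configuration rather than in application code, but to illustrate the idea, here is a minimal Python sketch of a site-wide HTTP-to-HTTPS redirect (the port and the example.com fallback host are placeholders, not part of any real setup):

    # Minimal sketch: answer every plain-HTTP request with a permanent
    # redirect to the same path over HTTPS. Port 8080 and the fallback
    # host 'example.com' are placeholders for illustration only.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            host = (self.headers.get('Host') or 'example.com').split(':')[0]
            self.send_response(301)  # permanent redirect
            self.send_header('Location', 'https://%s%s' % (host, self.path))
            self.end_headers()

        do_HEAD = do_GET  # treat HEAD requests the same way

    if __name__ == '__main__':
        HTTPServer(('', 8080), RedirectToHTTPS).serve_forever()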

For more information on the change from SHA1 to SHA2 you can read:

To test if your website is using a SHA1 or SHA2 certificate you can use one of the following tools:
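
If you would rather script a quick check yourself, here is a minimal Python sketch (assuming the third-party cryptography package is installed; www.example.com is a placeholder for your own hostname):

    # Minimal sketch: fetch a site's certificate and report which hash
    # algorithm was used to sign it ('sha1' means it needs replacing).
    # Replace the placeholder hostname with your own site.
    import ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    HOST = 'www.example.com'  # placeholder

    pem = ssl.get_server_certificate((HOST, 443))
    cert = x509.load_pem_x509_certificate(pem.encode('ascii'), default_backend())
    print('%s certificate signature hash: %s' % (HOST, cert.signature_hash_algorithm.name))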

Open Query also offers a Security Review package, in which we check on a broad range of issues in your system’s front-end and back-end and provide you with an assessment and recommendations. This is most useful if you are looking at a form of security certification.


Motivation to Migrate RDBMS

http://www.itnews.com/article/3004953/use-oracles-database-watch-out-for-this-dec-1-deadline.html

Companies that use a standard edition of Oracle’s database software should be aware that a rapidly approaching deadline could mean increased licensing costs.

Speaking from experience (at both MySQL AB and Open Query), typically, licensing/pricing changes such as these act as a motivator for migrations.

Migrations are a nuisance (no matter which platform you’re moving from or to) and are best avoided, as they’re intrinsically painful, costly and time-consuming. Smart companies know this.

When asked in generic terms, we generally recommend against migrations (even to MySQL/MariaDB) for the above-mentioned practical and business reasons. There are also technical reasons. I’ll list a few:

  • application, query and schema design tends to be tuned most closely to a particular RDBMS, usually the one the main developer(s) are familiar with. Features are used in a certain way, and the original target platform (even if it wasn’t chosen deliberately) is likely to execute them most efficiently;
  • RDBMS choice drives hardware/network architecture. A migration should also include a re-think of this, to make optimal use of the database platform;
  • it’s quite rare (but not unheard of!) for an application to perform better on another platform, without putting a lot of extra work in. If extra work is on the table, then the original DB platform should also be considered as a valid option;
  • related to other points: a desire to migrate might be based on employees’ expertise with a particular platform rather than on this particular application’s intrinsic suitability to that platform. While that can be a valid reason, it should be recognised as the actual reason, since there are obvious cost/effort implications to a migration, and other options (such as training) can then also be considered.

Nevertheless, a company that’s really annoyed by a vendor’s attitude can opt for the migration route, as they may decide it’s the path of lesser pain (and lower cost) in the long(er) term.

We do occasionally guide and assist with migrations, if after review it looks like a viable and sensible direction to take.


Slow Query Log Rotation

Some time ago, Peter Boros at Percona wrote this post: Rotating MySQL slow logs safely. It contains good info, such as that one should use the rename method for rotation (rather than copytruncate), and then connect to mysqld and issue a FLUSH LOGS (rather than send a SIGHUP signal).

So far so good. What I do not agree with is the additional construct to prevent slow queries from being written during log rotation. The author’s rationale is that if too many items get written while the rotation is in progress, this can block threads. I understand this, but let’s review what actually happens.

Indeed, if one were to do lots of writes to the slow query log in a short space of time, a write could block while waiting for its turn to write to the log file.

Is the risk of this occurring greater during a logrotate operation? I doubt it. A FLUSH LOGS has to close and reopen the file. While no file is open, no writes can occur anyway; they may be held in the internal buffer of the low-level MySQL code in the meantime.

In any case, if there is such a high write rate, that is an issue in itself: it is not useful to be writing to the slow query log at that rate. Instead, you’d raise the long_query_time and min_examined_row_limit variables to reduce the effective “flow rate”. It’s always best to resolve an underlying issue rather than its symptom(s).
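
For reference, here is a minimal Python sketch of the rename-then-flush approach described above (assuming the third-party PyMySQL connector, credentials readable from ~/.my.cnf, and a placeholder slow log path):

    # Minimal sketch of safe slow log rotation: rename the current file,
    # then ask mysqld to reopen its logs. The path and credentials file are
    # placeholders; the account needs the RELOAD privilege.
    import os
    import time
    import pymysql  # third-party MySQL connector

    SLOW_LOG = '/var/log/mysql/slow.log'  # placeholder path

    # Rename (not copy/truncate): mysqld keeps writing to the same open file handle.
    rotated = '%s.%s' % (SLOW_LOG, time.strftime('%Y%m%d%H%M%S'))
    os.rename(SLOW_LOG, rotated)

    # Tell mysqld to close and reopen its log files, creating a fresh slow log.
    conn = pymysql.connect(read_default_file=os.path.expanduser('~/.my.cnf'))
    try:
        with conn.cursor() as cur:
            cur.execute('FLUSH SLOW LOGS')  # FLUSH LOGS also works, but flushes more
    finally:
        conn.close()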


Using Persistent Memory in RDBMS

People at Intel started the pmem library project some time ago; it’s open to the broader community on GitHub, and other developers, including Linux kernel devs, are actively involved.

While the library does allow interaction with an SSD through a good old filesystem, we know that addressing an SSD through SATA or SAS is very inefficient. That said, the storage architecture that SSDs use does require significant management for wear levelling and verification, so that the device as a whole actually lasts and your data is kept safe: in theory you could write to an NVRAM chip and not know when it didn’t actually store your data properly.

But there are other technologies, such as Memristor (RRAM) and Phase Change Memory (PCM, PRAM). Numonyx (founded by Intel and others, since acquired by Micron) was one of the companies developing PCM some years ago, to the point of some commercial applications. Somewhat oddly (in my opinion), Micron ditched its PCM line in 2014, focusing more on 3D NAND technology. In 2015, Intel and Micron announced that they were working on something called 3D XPoint, but Micron denies that it’s based on PCM.

I like the concept of PCM because it has a lot of advantages over NAND technology. It’s very stable, it doesn’t “bleed” into adjacent memory cells, if it writes correctly it’s stored correctly, and it’s fast. Not as fast as ordinary RAM, but it’s persistent! What I’ve been holding out for is just a small amount of PCM or similar storage in computers, phones, tablets and e-book readers.

In small mobile devices the advantage would be vastly reduced power consumption. ARM processors are able to put entire sections of the processor in standby to save power, but RAM needs to be powered and refreshed regularly. So with persistent memory, a device could maintain state while using hardly any power.

For RDBMS such as MySQL and MariaDB, persistent memory could be used for the InnoDB log files and other relatively small state information that needs to be kept persistently. This storage would behave like memory and be addressed as such (pmem uses mmap), but be persistent. So you could commit a transaction, your fsync would be very quick, and the transactional information would be stored in a durable fashion. Very shiny, right?
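
To illustrate the access pattern (this is not the actual libpmem API): with a file on a DAX-mounted persistent-memory filesystem, you address the region like ordinary memory via mmap and make changes durable with a flush. A rough Python sketch, where the path is a placeholder:

    # Rough illustration of mmap-style persistent memory access; not the real
    # libpmem API. '/mnt/pmem/state' is a placeholder for a file on a
    # DAX-mounted persistent-memory filesystem.
    import mmap
    import os

    PATH = '/mnt/pmem/state'  # placeholder
    SIZE = 4096

    fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, SIZE)

    mem = mmap.mmap(fd, SIZE)   # reads/writes are plain memory accesses
    mem[0:8] = b'txn-0001'      # e.g. a tiny piece of transactional state
    mem.flush()                 # make it durable (analogous to a very cheap fsync)

    mem.close()
    os.close(fd)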

It doesn’t need to be large: something like 512MB would be ample for an RDBMS, and possibly much less for mobile devices.

I still reckon the persistent memory space has huge potential – and I mention the mobile devices because that’s obviously a larger market. Previously, Micron did work with Nokia on using NVM in phones, but as we all know Nokia was acquired and Micron’s focus changed. I find the current state of it all quite disappointing, but I do hope the various players in this field will soon focus on this properly again and get the tech out there to be used!

If you happen to know of any current developments and activities, I’d like to hear about it!


on ORDER BY optimization | Domas Mituzas

http://dom.as/2015/07/30/on-order-by-optimization/

An insightful exploration by Domas (Facebook) on how some of the MySQL optimiser’s decision logic is sometimes naive, in this case regarding ORDER BY optimisation.

Quite often, “simple” logic can work better than complex logic as chasing all the corner cases can just make things worse – but sometimes, logic can be too simple.

Everything must be made as simple as possible, but no simpler.
— Albert Einstein / Roger Sessions


More Cores or Higher Clock Speed?

This is a little quiz (could be a discussion). I know what we tend to prefer (and why), but we’re interested in hearing additional and other opinions!

Given the way MySQL/MariaDB is architected, what would you prefer to see in a new server: more cores, or a higher clock speed? (Presuming other factors such as CPU caches and memory access speed are identical.)

For example, you might have a choice between

  • 2x 2.4GHz 6 core, or
  • 2x 3.0GHz 4 core

which option would you pick for a (dedicated) MySQL/MariaDB server, and why?

And, do you regard the “total speed” (N cores * GHz) as relevant in the decision process? If so, when and to what degree?


mysql-cli on Kickstarter

Open Query is supporting the mysql-cli Kickstarter project (for MySQL and MariaDB) by Amjith Ramanujam, who already successfully completed a similar tool for PostgreSQL.

It is a new MySQL client with auto-completion and syntax highlighting. From the info provided, it’s Python-based, thus portable, and can be installed without root access. It could be a very useful tool. The good old mysql command-line client does lack some things, yet a relatively low-level command-line client is often useful for remote tasks (as opposed to graphical tools), so we reckon it’s good that this realm gets a bit of attention!


Interactive Online Training for MySQL and MariaDB

Because interactivity with the trainer (our classes are not dry lectures) and discussions are an important and intrinsic part of our teaching approach, we’ve long tracked the development of technologies for online training, but previously we were not satisfied with them.

The high cost of various corporate offerings would negatively impact our pricing, given our relatively small-scale use and purposely small classes. The student system requirements would also often be problematic: obviously students use different operating systems (Windows, Mac, Linux), and we cannot prescribe that people use a particular OS.

Big Blue Button has long looked like it had the right potential, and it has now developed to a point where we’re happy with it. For more tech and practical details, see our Interactive Online Training page.

After our successful trial runs, we have the following course modules scheduled in the next few weeks, with others to follow soon:

The date ranges may appear a tad odd at first, but what we do is run each original day-module as two sessions over two days, in this case noon-3.30pm Sydney time. An online session has three 10-minute breaks.

As you can also see, the pricing is pretty neat – we can do that since we control the infrastructure and obviously don’t have trainer travel, venue and catering to worry about. No travel hassles for you, either! You should find a quiet spot and try not to get interrupted.

All the interactivity, discussion and hands-on work is there as normal; Open Query provides the VMs, and students (and trainer) can access each other’s sessions where needed. We’re pretty pleased with the setup.

Naturally we can also do custom training in this format – we do still offer on-site training as well.

For bookings, or if you’d like more information or have other questions, contact us today!


LKML: Live patching for 3.20

https://lkml.org/lkml/2015/2/9/534

Building on the original kSplice idea and combining the work done at Red Hat and SuSE, common infrastructure is now ready to go into the Linux 3.20 mainline kernel – Red Hat and SuSE have already committed to using it.

I still reckon it’s freaky trickery, but heck – it works, and it’s great for server environments that have no redundancy (I prefer to fix that issue!) and can’t afford any downtime.
