Decommissioning old servers, saving money…

Of course it’s not quite that simple. I’ve just decommissioned an old Red Hat 7.1 box (a hosted dedicated server) that had been in service since 2002 – about seven years. Specs? A 1.3GHz Celeron, 512MB RAM, 60GB HD. Not too bad in the RAM and disk realm. It did a good job, but goodness am I glad to be rid of it!

Not having that box online is safer for the planet, although (perhaps amazingly, considering the age of some of the externally facing software components) it was never compromised. I consider that mostly luck, by the way – I’m not naive about it. But moving off an old server isn’t easy; it’s generally a lot of work, and this case was no exception.

Of course hosting has moved on since 2002; places like Linode offer more for less money per month. They virtualise (Xen-based in this case), which hasn’t been my favourite (particularly for DB servers, though depending on the use it really comes down to how you set up the whole infrastructure). It’s a different environment, so different “rules” apply for the optimal setup, and the feature/pricing model of the hosting (or cloud) provider has more than a little to do with that. Distributing tasks like MX relaying, DNS, moderate MySQL work and web serving across different virtual machines, with added redundancy across different data centres, works very well for many use cases. And the funniest thing: with more servers and distributed redundancy, the net cost per month is actually lower than that of the single old server!

There are many aspects to consider, and I intend to write more about them in future posts. I just found it an interesting experience, dealing with this (personal, not even business) server. We deal with these technical environments all the time in our work, but it’s not quite the same perspective. And it’s not all technical/financial issues; there’s more to it.


Your opinion on EC2 and other cloud/hosting options

EC2 is nifty, but it doesn’t appear suitable for all needs, and that’s what this post is about.

For instance, a machine can just “disappear”. You can set things up to automatically start a new instance to replace it, but a transaction you just committed is likely to be lost: MySQL replication is asynchronous, EBS is slower if you commit your transactions on it, and EBS snapshots are only periodic (so you’d have to add extra handling on the application end). All of this adds complexity, so the question arises whether EC2 is the best solution for systems where this is a concern.
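The replication caveat is easy to picture. Here’s a toy Python sketch (purely illustrative, no real MySQL involved) of why asynchronous replication can lose an already-acknowledged transaction when the master instance disappears:

```python
# Toy model (hypothetical, not real MySQL code): asynchronous replication
# acknowledges a commit to the client before the replica has the event.

class Master:
    def __init__(self):
        self.binlog = []          # committed transactions, in order

    def commit(self, txn):
        self.binlog.append(txn)   # commit returns to the client immediately;
                                  # replication ships the event some time later

class Replica:
    def __init__(self):
        self.applied = []

    def replicate_from(self, master, up_to):
        # asynchronous: the replica may lag behind the master's binlog
        self.applied = master.binlog[:up_to]

master, replica = Master(), Replica()
master.commit("txn-1")
master.commit("txn-2")                    # client has been told this committed
replica.replicate_from(master, up_to=1)   # but replica only has txn-1 so far

# The master instance now "disappears" before txn-2 is shipped.
# Promoting the replica loses the already-acknowledged transaction:
lost = [t for t in master.binlog if t not in replica.applied]
print(lost)  # ['txn-2']
```

The same shape of problem applies to periodic EBS snapshots: anything committed after the last snapshot is gone when the instance vanishes.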

When pondering this, there are two important factors to consider: a database server needs cores, RAM and reasonably low-latency disk access, and application servers should be close (in network terms) to their database server. This means you shouldn’t split app and db servers across different hosting/cloud providers.
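Rough numbers show why the proximity point matters (the figures below are assumptions for illustration, not measurements): a single page view often runs dozens of queries, so every millisecond of round-trip time between app and db server gets multiplied.

```python
# Back-of-the-envelope latency amplification (all figures are assumptions):
queries_per_page = 50          # assumed queries for one page view
same_dc_rtt = 0.0005           # ~0.5 ms round trip within one data centre
cross_provider_rtt = 0.040     # ~40 ms round trip between distant providers

same_dc_total = queries_per_page * same_dc_rtt
cross_total = queries_per_page * cross_provider_rtt

print(f"same DC:        {same_dc_total * 1000:.0f} ms of query RTT per page")
print(f"cross-provider: {cross_total * 1000:.0f} ms of query RTT per page")
```

With these assumed figures, splitting app and db across providers turns tens of milliseconds of per-page query latency into a couple of seconds.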

We’d like to hear your thoughts on EC2 in this context, as well as options for other hosting providers – and their quirks. Thanks!


Time-share computing is back!

I kid you not. To quote http://en.wikipedia.org/wiki/Time-sharing:

“Time-sharing is sharing a computing resource among many users by multitasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, represents a major historical shift in the history of computing. By allowing a large number of users to interact simultaneously on a single computer, time-sharing dramatically lowered the cost of providing computing, while at the same time making the computing experience much more interactive.”

Virtualisation and Cloud computing are merely the new form of this, and not actually a new concept as such 😉

Both virtualisation and cloud architecture (and combinations thereof) have their place; they’re not the new solution to everything, as any architecture is situation-dependent. Looking at the various cloud providers now, they have quite distinct deployment and pricing models, which means different providers will be suitable and economical for different applications. That’s quite interesting. It may well mean that a smart company deploys independent aspects of its operation in different environments.

Architecting infrastructure is about way more than a properly specced box and a bit of tuning, and that’s why the above is also of interest to Open Query. There are serious technical aspects to this, but also financial and business factors. Haha, and you thought we just did some database stuff 😉
