Tags: my server, webhosting.
By lucb1e on 2012-01-25 20:52:42 +0100
The past half year of school has mostly been about designing and documenting applications rather than actually building them. This also includes the SLA, or Service Level Agreement, in which you should state what uptime a customer can expect. I recently saw a banner on a website advertising that site's uptime. It said I could monitor my own website for free, which was of course a good marketing trick, but I could also get a 12-day professional trial. And I must say, I do like it. The service is great and quite cheap too, it's just that I don't need it at all. (Actually, the name of the service is Site24x7.)
But I let it monitor the uptime of my website for nearly two weeks. The final result: 99.96%. Not that bad if I do say so myself, even better than what most common webhosts offer! And all downtime was intentional, I wanted to make some changes to the Apache configuration (mostly after midnight) and needed to restart Apache or something. I've always wondered why a professional webhost would ever be down at all really, aside from general power failures or when the internet connection fails for some reason. I still wonder though.
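Out of curiosity, here is a quick back-of-the-envelope check (my own, not Site24x7's) of what 99.96% actually means over a trial of roughly that length:

```python
# How much downtime does 99.96% uptime correspond to over a 12-day trial?
# (12 days is an assumption based on the trial length mentioned above.)
TRIAL_DAYS = 12
total_minutes = TRIAL_DAYS * 24 * 60        # 17280 minutes in the trial
downtime_minutes = total_minutes * (1 - 0.9996)
print(f"{downtime_minutes:.1f} minutes of downtime")  # about 7 minutes
```

Roughly seven minutes total, which matches a handful of late-night Apache restarts.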
It is 2012; any self-respecting webhost should be able to manage a 99.995% uptime, if not more. That's just over 2 minutes of downtime a month. Why would anyone be down 2 minutes every month? They are professionals, working on server clusters no doubt. I mean, they host tens of thousands of domains and websites (I'm just talking about shared hosting, nothing else), so a reboot of a single server in the cluster wouldn't do much. Perhaps it would make the response time marginally slower, but that would be the total impact of the operation. Also, you would assume a back-up power supply for at least some servers...
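The "just over 2 minutes" figure follows directly from the SLA percentage; a quick sketch of the calculation, assuming a 30-day month:

```python
# Downtime budget implied by a 99.995% SLA over a 30-day month.
SLA_PERCENT = 99.995
month_minutes = 30 * 24 * 60                    # 43200 minutes in 30 days
budget_minutes = month_minutes * (1 - SLA_PERCENT / 100)
print(f"{budget_minutes:.2f} minutes of allowed downtime per month")
```

That works out to about 2.16 minutes, confirming the figure above.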
Well, I guess on average the power grid fails like once every three years for a good part of the afternoon. Let's see, that is 1 - (3 hours * 60 minutes) / (3 years * 365 days * 24 hours * 60 minutes) = 99.989% uptime. And that's a very, very rough estimate.
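The same rough estimate, spelled out:

```python
# Uptime if the power fails once every three years for three hours.
outage_minutes = 3 * 60                  # one three-hour outage
period_minutes = 3 * 365 * 24 * 60       # three years, in minutes
uptime = 1 - outage_minutes / period_minutes
print(f"{uptime:.3%}")                   # prints 99.989%
```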
Internet failure is more common for consumer ISPs, but why would the dedicated lines of a datacenter go down? Good question. I bet they covered that in their SLA anyway, "If some sort of ground operation breaks our internet connection, we are, of course, not liable in any possible way."
So if that's not it, what does cause shared hosting providers' downtime? I keep wondering...
Oh wait I was talking about my experience with Site24x7 and my uptime!
What I hadn't expected when I signed up was the amount of services they offered. Not just website monitoring, but also network, DNS, SSL certificate, and many more options to monitor in detail. Everything was configurable: you could set who should be notified and when exactly, and if the problem persisted, more people could automatically be notified. You could set how often it would check (up to every minute, which I set), from which locations it would check (dozens of locations from all around the world were available, and you could just tick them all if you wanted to!), and the possibilities go on and on. In fact, you could even set what SLA you had to comply with, and it would tell you whether you were on track to meet it.
Still, I don't need any of this; it's just something which is nice to have but not necessary to me. I can also see how they built this all. Simply get a couple of VPSes around the world with some cron jobs set up, log a ton of things, write a good system which integrates all results into a single website, get an SMS notification provider somewhere, and you're pretty much done. Of course all of these things are big, but they are all doable and maintainable for a low cost, which is exactly their asking price for the service. Simple, good, and configurable, that's my impression of Site24x7. And thanks to them I can now claim I have an uptime of 99.96% :P
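The core of such a cron-driven check could look something like this. This is purely my own sketch of the idea, not how Site24x7 actually works; the URL and timeout are made-up example values:

```python
# Minimal single-probe uptime check, the kind of thing a cron job on
# each monitoring VPS could run and log. Not Site24x7's real code.
import time
import urllib.request


def check(url: str, timeout: float = 10.0) -> dict:
    """Fetch the URL once; record whether it was up and how fast it responded."""
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        pass                               # unreachable, timed out, etc.
    return {
        "url": url,
        "up": status is not None and status < 500,
        "response_ms": (time.monotonic() - start) * 1000,
        "checked_at": time.time(),
    }


if __name__ == "__main__":
    # Example probe; in practice the result would be appended to a log
    # that a central server aggregates into uptime percentages.
    print(check("http://example.com/"))
```

Aggregating the logged results from many locations into one uptime figure is then just counting up-checks versus total checks.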