In this series of blog articles, I will share with you six steps that will help make your business failure-tolerant. (See also: The Complete Series.)
Step number one is to have a website that is online and working (and to make it really fast). Many younger companies seem to head straight for the cloud, mainly to the market leader Amazon EC2 or to Rackspace Cloud. We don't.
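As a minimal illustration of what "online and working" means in practice, here is a sketch of an external availability check. The URL and thresholds are placeholders, and in a real setup you would run a check like this from a machine outside your hosting provider and feed the results into a proper monitoring and alerting tool rather than a hand-rolled script.

```python
# Minimal external availability check (sketch; URL and thresholds are placeholders).
import time
import urllib.request

URL = "https://www.example.com/"      # placeholder: your public website
TIMEOUT_SECONDS = 10                  # placeholder: what you consider "down"
SLOW_THRESHOLD_SECONDS = 1.0          # placeholder: what you consider "too slow"

def check(url: str) -> None:
    """Fetch the URL once and report OK / SLOW / ERROR / DOWN."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as response:
            elapsed = time.monotonic() - start
            status = response.status
    except Exception as exc:
        print(f"DOWN: {url} ({exc})")
        return
    if status != 200:
        print(f"ERROR: {url} returned HTTP {status}")
    elif elapsed > SLOW_THRESHOLD_SECONDS:
        print(f"SLOW: {url} answered in {elapsed:.2f}s")
    else:
        print(f"OK: {url} answered in {elapsed:.2f}s")

if __name__ == "__main__":
    check(URL)
```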
We have been running our website and online shop on dedicated servers from Rackspace for seven years now, and I have nothing but praise for them. We have not seen any serious problems attributable to them, except for the two hours of downtime in November 2007 that were caused by a car crashing into a transformer station just outside their datacenter in Dallas. The crash triggered a perfect-storm series of events that eventually took down the whole datacenter for more than two hours (see the blog post "Downtime of paessler.com website due to a traffic accident").
Workshop attendees said they were also happy with other providers such as pair.com and hetzner.com.
Before the last upgrade of our website's systems in April 2011, we ran several tests with EC2 and Rackspace Cloud, but we found that the latency of our dedicated servers in Dallas was always considerably better than that of the cloud servers. We did not want to settle for higher latency than we had before, so we stayed with the dedicated servers at Rackspace.
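The kind of comparison we ran can be approximated with a small script like the sketch below, which measures full HTTP request/response times for several candidate hosts and reports the median. The hostnames are placeholders, not our actual test endpoints, and a serious evaluation should also run the test from the geographic regions your customers are in.

```python
# Rough latency comparison between candidate hosts (sketch; hostnames are placeholders).
# Measures full HTTP request/response time, which is what a visitor actually experiences.
import statistics
import time
import urllib.request

CANDIDATES = [                         # placeholders for the setups under evaluation
    "https://dedicated-test.example.com/",
    "https://ec2-test.example.com/",
    "https://cloud-test.example.com/",
]
SAMPLES = 20                           # number of requests per candidate

def measure(url: str) -> float:
    """Return the elapsed time in seconds for one HTTP GET, body included."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()                # read the body so transfer time is included
    return time.monotonic() - start

for url in CANDIDATES:
    timings = []
    for _ in range(SAMPLES):
        try:
            timings.append(measure(url))
        except Exception:
            pass                       # count failures separately in a real test
    if timings:
        print(f"{url}: median {statistics.median(timings) * 1000:.0f} ms "
              f"over {len(timings)} samples")
    else:
        print(f"{url}: all requests failed")
```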
In our case the Rackspace setup is about three times as expensive as a comparable EC2 setup would be (US$1,800 vs. US$600), but we firmly believe it is worth every penny.
So, step no. 1 is: go with a reliable hosting provider and pay attention to latency. Cloud hosting can also be a viable option, but it can quickly become complex; for most situations, I think a reliable "old-school" hosting provider offers better bang for the buck, unless you already have cloud expertise.
At Paessler we have been selling software online for 15 years, and we have had hardware, software, and network failures just like everybody else. We have tried to learn from each one of them and to change our setup so that the same failure cannot happen again.
In the next blog post, I will talk about the redundancy of our website hosting.