For some businesses, the transition to cloud computing can seem like a good opportunity to take control of their own destiny and manage their infrastructure entirely themselves, without the continued burden of investing in and maintaining new hardware.
The reality, however, is that the cloud comes with its own considerations and challenges. A study by CloudCheckr found that 98 per cent of self-service cloud users had issues maintaining uptime, while 91 per cent of those respondents did not back up their cloud computing instances often enough. A further 44 per cent were found to have data that was left vulnerable to an attack or outage.
How a business architects its cloud infrastructure, and the considerations it places on security, performance, high availability, and ongoing optimisation of its cloud computing deployment, is vital to ensuring that cloud deployment delivers the expected business outcomes.
Here are some considerations that those going to the cloud need to factor in before going it alone:
Ensuring your cloud deployment is always online and available requires a different kind of thinking to your traditional IT environment. It is no longer about ensuring a specific virtual server, or compute instance, is running, but instead about keeping your website or application online.
In the cloud, this is often done through a ‘High Availability’ configuration, a failover method that requires two or more instances of your environment to be operational at any time. If an error occurs on one instance, traffic automatically fails over to the next one, ensuring your operations continue to run.
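The failover behaviour described above can be illustrated in a few lines of code: a health check runs against each instance, and traffic is routed to the first one that responds as healthy. This is a simplified sketch, not a real provider API; the instance records and the health-check function are stand-ins for an actual HTTP probe and load-balancer configuration.

```python
# Minimal sketch of health-check-driven failover between redundant
# instances. All names and data here are illustrative.

def check_health(instance):
    """Stand-in for a real health probe (e.g. an HTTP request to
    the instance's health endpoint)."""
    return instance.get("healthy", False)

def pick_active(instances):
    """Route traffic to the first healthy instance in priority
    order, failing over automatically when the primary is down."""
    for instance in instances:
        if check_health(instance):
            return instance["name"]
    raise RuntimeError("All instances are down")

instances = [
    {"name": "primary", "healthy": False},  # primary has failed
    {"name": "standby", "healthy": True},   # standby takes over
]

print(pick_active(instances))  # failover selects "standby"
```

In a production High Availability setup this loop is performed continuously by a load balancer or orchestrator rather than by application code, but the principle is the same: availability is a property of the environment as a whole, not of any single server.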
For those managing the cloud on their own, this also means you need to think about who has the knowledge to keep things online, or fix problems, in case of emergency. Particularly for smaller companies without hefty IT teams, managing the cloud by oneself means there is no one to rely on if it all comes crashing to the ground in the middle of the night.
The cloud is always changing. Amazon Web Services released 250 new products last year alone, and 500 more are expected in 2014. This makes it a huge task, even for those with the right expertise, to keep up to date with each influx of new tools and products.
Before virtualisation, server hardware could essentially be treated as a “set-and-forget” method of computing — configure the server, leave it to do its thing and pay it off in 18 months. Essentially, you didn’t get to optimise, because you were stuck with what you had.
Though the cloud offers many automated advantages that physical hardware does not, it is an environment that constantly evolves, improving with each tweak, but also requiring constant monitoring to ensure that new products and new tools are adopted. Failing to do so means you’re not getting optimal price or performance benefits.
Availability, uptime, performance, security and scalability are considerations that in-house IT teams have always had, but the approach needed for addressing them in the cloud, from the infrastructure to the application layer, is completely different. Going it alone can leave IT departments and entire companies devoid of the experience needed to keep it smooth and simple.
In a sense, moving to the cloud is like the difference between choosing a car or a train for transportation. While your existing car has served you well, it’s now clunky and rusty and it is time to decide — buy a new car, with the added costs of licensing and ongoing maintenance, or hop on the train, sharing the same infrastructure as others with the common goal of getting to your preferred destination.
Of course, choosing which carriage to get on and, indeed, which train is the cheapest and fastest at getting to the intended destination, can be a little daunting with so much choice. With the right conductor, however, the journey is much smoother, safer and easier.
Mark Randall is chief customer officer of Bulletproof