Multi-modal and multi-vendor: the smart way to do cloud
Where have cloud infrastructure services come from?
Around a decade ago, server virtualisation started to drive huge levels of consolidation and efficiency within private infrastructure. Traditional service providers were also able to benefit from better economies of scale and easier provisioning of new services. These sorts of scenarios are now typically referred to as private cloud (whether on-premises or hosted). Meanwhile, Amazon, Microsoft and Google were developing their own purpose-built offerings with proprietary, highly scalable architectures delivering consumer-grade consumption models and the ability to rapidly on-board customers. Fast-forward ten years and, now labelled “public cloud services”, those purpose-built platforms are clearly far more attractive to end users thanks to their flexibility, instant-on accessibility and the promise of huge cost efficiencies.
What’s the reality of public cloud?
The success of public cloud adoption depends hugely on the transition process and the suitability of the use case. For example, we’ve seen many Amazon Web Services (AWS) success stories within e-commerce. Web-facing services that can be rapidly scaled (up and down) and delivered in an agile way are critical for that sector and are a best-fit scenario for AWS from both a technical and commercial standpoint. At the other extreme, I’ve seen cloud adoption fail dramatically where an organisation has taken an existing IT environment with all of its legacy concepts (always-on VMs, server operating systems, large backend databases etc.) and performed a straight lift-and-shift migration into AWS or Azure. This hasn’t necessarily been a technical failure (Amazon, Microsoft and the rest can provide an abundance of capacity, performance and resilience), but the resulting charges have meant a huge increase in costs, even after taking hardware, datacentre and staff savings into account.
How has it worked for others?
A best-fit approach to cloud adoption works best for organisations that already have a ‘traditional’ IT footprint (i.e. virtual machines, datacentres, server hardware etc.). Applications should be reviewed and placed into the most appropriate services based on the use case and pricing model. For example, a bespoke UNIX database application that requires a huge amount of processing power, RAM and disk IOPS is almost always going to be better off kept onsite within a private cloud, close to the IT team and on purpose-built hardware: this will ultimately provide the best TCO and predictable performance. If that application does need to move, look instead to redevelop it natively for the cloud, making use of the efficiencies of that platform (think AWS Lambda and DynamoDB), rather than lifting and shifting it as-is. On the other hand, front-of-house and global services like collaboration (e-mail, video conferencing and instant messaging) have seen very successful transitions into public cloud services; Office 365 and G Suite are two examples of this.
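To make the “redevelop natively” point a little more concrete, here is a minimal sketch of what a serverless lookup service could look like: an AWS Lambda handler reading from a DynamoDB table instead of an always-on VM fronting a backend database. The table name, key schema and event shape are illustrative assumptions, not a recommended design for any particular application.

```python
# Minimal sketch of a cloud-native re-development: a Lambda function backed by
# DynamoDB. You pay per request rather than for an always-on server.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name, for illustration only

def handler(event, context):
    """Look up a single order by id. Assumes an API Gateway-style event with
    a path parameter called 'order_id' (an illustrative assumption)."""
    order_id = (event.get("pathParameters") or {}).get("order_id")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "order_id required"})}

    response = table.get_item(Key={"order_id": order_id})
    item = response.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}
```

The point is not the specific service names but the shape of the solution: no operating system to patch, no idle capacity to pay for, and scaling handled by the platform.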
What’s the solution?
What most organisations will end up with is a multi-modal cloud strategy consisting of public and private cloud services (we can call this hybrid cloud). This is where I’ve seen the most widespread success. My customers who keep line-of-business (LOB) applications in private cloud while developing new customer-facing, web-centric applications directly on public cloud platforms have the best of both worlds.
Where does a multi-vendor strategy come in?
In all aspects of IT it’s worth considering a multi-vendor strategy. From the components of your private cloud (servers, networking, storage) all the way to the public cloud provider you choose for your collaboration platform, multiple vendors can de-risk the approach and prevent “vendor lock-in”. This is particularly true of public cloud services. Examine the pricing models of public cloud providers and you’ll find that things like egress charges (for data sent out of that provider’s network) make it massively expensive to perform a large migration of services to another provider in the future. If I were building public cloud infrastructure I would likely build services across two or more platforms. However, even this doesn’t solve the issue of data egress costs, and some organisations may not want the additional complexity of managing applications across two cloud providers, or the associated skill sets required.
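As a rough illustration of why egress matters, the back-of-the-envelope sketch below estimates what it might cost simply to move a dataset out of a provider’s network. The per-GB rate is an assumed, illustrative figure; real prices are tiered and change over time, so check the provider’s current price list.

```python
# Back-of-the-envelope egress cost estimate. The rate below is an illustrative
# assumption, not any provider's actual price.
ASSUMED_EGRESS_RATE_PER_GB = 0.09  # USD per GB, hypothetical flat rate

def estimated_egress_cost(dataset_tb: float, rate_per_gb: float = ASSUMED_EGRESS_RATE_PER_GB) -> float:
    """Rough cost of pulling a dataset out of a public cloud provider's network."""
    return dataset_tb * 1024 * rate_per_gb

if __name__ == "__main__":
    for tb in (10, 100, 500):
        print(f"{tb:>4} TB  ->  ~${estimated_egress_cost(tb):,.0f} in egress charges alone")
```

Even at these rough numbers, a few hundred terabytes quickly turns a “simple” re-platforming exercise into a significant line item, before any migration effort is counted.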
So what can help prevent cloud lock-in?
If we look at what’s most costly to migrate out – data (everything else, like VMs and DB instances, is essentially just metadata) – then that’s a good place to start. What if your data (or a subset of it) could be held on an agnostic third-party service that resides outside of the AWS, Azure or Google Cloud bubble? I spoke to HPE Nimble Storage earlier this year and they have deployed their first cloud data platform: Nimble Cloud Volumes. Nimble Cloud Volumes is a fully managed block storage service with a number of benefits – better performance and scalability than the public cloud providers’ native block storage, for example. For this discussion, though, what’s most interesting is the ability to present that block storage directly to an EC2 or Azure virtual machine and then very easily ‘flip’ the connection between clouds without any data migrating between the platforms. Cloud Volumes can also be cloned, with the clone presented instantly to another cloud instance (e.g. a VM in Azure, for testing). As well as the obvious flexibility this brings, the mechanism could improve performance, be extremely powerful for test/dev, and offer a simple and cost-efficient backup/replication mechanism that doesn’t exist natively within the public cloud vendors today.
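Purely to illustrate the workflow described above, here is a hypothetical sketch of the ‘flip’ and clone operations. The CloudVolumesClient class and its method names are invented for illustration and are not the actual Nimble Cloud Volumes API; only the sequence of steps is the point: the volume’s attachment changes, the data stays put.

```python
# Hypothetical sketch only: CloudVolumesClient and its methods are invented names
# used to illustrate the flip/clone workflow, not a real SDK.
from dataclasses import dataclass

@dataclass
class Attachment:
    volume_id: str
    cloud: str        # e.g. "aws" or "azure"
    instance_id: str

class CloudVolumesClient:  # illustrative stand-in for a managed volume service
    def detach(self, volume_id: str) -> None:
        print(f"detach {volume_id} from its current instance")

    def attach(self, volume_id: str, cloud: str, instance_id: str) -> Attachment:
        print(f"attach {volume_id} to {cloud} instance {instance_id}")
        return Attachment(volume_id, cloud, instance_id)

    def clone(self, volume_id: str) -> str:
        print(f"clone {volume_id}")
        return volume_id + "-clone"

def flip_between_clouds(client: CloudVolumesClient, volume_id: str, target_instance: str) -> Attachment:
    """Re-present the same block volume to an instance in another cloud.
    No data is copied between providers; only the attachment moves."""
    client.detach(volume_id)
    return client.attach(volume_id, cloud="azure", instance_id=target_instance)

def clone_for_test(client: CloudVolumesClient, volume_id: str, test_instance: str) -> Attachment:
    """Clone a production volume and present the clone to a test VM."""
    clone_id = client.clone(volume_id)
    return client.attach(clone_id, cloud="azure", instance_id=test_instance)
```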
NG-IT are experts in next-generation private and hybrid cloud solutions, with in-house Amazon Web Services specialists and a wide range of knowledge across infrastructure stacks. If you’d like to discuss cloud adoption and how NG-IT can help develop a hybrid, multi-vendor solution for you, please get in touch.