The DC is changing – where do we go from here?

You don’t have to look too closely to realise that infrastructure is undergoing a massive transformation. Some say bare-metal servers are dead, despite our many customers who still run a whole bunch of them (thanks, UNIX!). Others say that storage is dead and that it’s all about hyper-converged, software-defined and “the cloud”. The truth is, exactly how this new era of infrastructure will play out is not yet fully defined, so opinions vary greatly depending on who you talk to.

So let’s think about IT infrastructure components. In their most basic form, we end up with three categories: storage, compute and networking. None of these ‘components’ is actually going to disappear any time soon; what’s different is how they are delivered (in a physical sense), how they are managed (and automated) and how they integrate with one another. It’s this package that’s changing, and over time we’ll undoubtedly see the lines between compute, storage and networking blur as they become more abstracted from the physical pieces of tin with flashing lights in the datacentre.

These changes have already had knock-on effects throughout our industry. I’ve seen customers’ job roles changing: someone who was a dedicated NAS/SAN admin three years ago now does a bit of VMware and a lot less storage administration (because current-generation storage systems are more innovative, easier to manage and offer better analytics). With the emergence of software-defined technologies such as NSX, it’s likely that more VMware admins will start doing a bit of networking and vice versa. My peers in the channel have found that the days of specialising in a particular technology are numbered and we must all broaden our skills to suit market trends.

So what will the infrastructure stack of the future look like? If we look at the established architectures and the emerging architectures today we have the following:

  1. Traditional physical server/storage workloads (mainly confined to legacy UNIX platforms these days) – what we might call Silos of Infrastructure
  2. Predominantly virtualised workloads with shared storage – what some might call Virtualisation v1.0
  3. As above, but with collaborative support models and reference architectures based on denser, more performant hardware – what we might call Converged Infrastructure
  4. As above, but packaged and delivered as an appliance solution with no discrete physical components for networking, storage or compute – what we might call Hyper-Converged Infrastructure (HCI)

Many customers I speak to at the small to medium end of the market are actually still somewhere between 2 and 3 above: they are mostly virtualised, but the platform consists of multiple vendors who won’t speak to each other – plus there’s no validated design. Those guys are looking for something more tightly packaged that provides a single point of support, uses fewer resources (people and datacentre space) and has better integration between components. My larger customers have often already deployed an early version of converged (3), but perhaps based on a legacy SAN platform – they are usually keen to swap the storage out for something more intelligent and easier to manage. Converged is extremely popular because it maintains the flexibility of distinct storage, server and networking components (in terms of scalability) but delivers the safety net of collaborative support and a validated, repeatable design.

Most have heard about HCI, but not all can actually explain why (and if) you would deploy HCI instead of converged. Put simply, hyper-converged packages all of our basic components (storage, servers and networking) into a single appliance. These appliances can be grouped together and scaled outwards, with predictable costs and minimal configuration. HCI solutions also tend to have a GUI-driven front end, which nicely brings the components together into a ‘single pane of glass’, with integrated monitoring, simple configuration and rapid deployment. Right now it doesn’t fit all use cases, especially for those customers who still have some (1) in their environment. HCI might sometimes be introduced as a point solution for a particular project (e.g. VDI). In these scenarios, as long as the customer is happy to manage multiple pods of infrastructure, it works very nicely. That said, in some environments HCI may well take over most infrastructure services as its capabilities continue to expand (and it may well be the bridge to the cloud that most organisations take).

Let’s take it a step further. I often see confusion in the channel between hyper-converged and software-defined (in fairness, the lines are blurred – it all comes down to how it’s packaged), which brings us to:

  • Software-defined datacentre (SDDC)

I have separated SDDC from the rest of the list purely because I see it as an emerging architecture (as opposed to 1–4, which are arguably established). We’re already seeing uptake, particularly in the test/dev arena, to some degree in HPC, and a few customers are using it in production.

For me, SDDC is exciting and undoubtedly “the future”, but right now it’s a leap of faith for a lot of customers. Consider SDDC’s message: to deliver software-based infrastructure components on commodity hardware, which can be scaled out almost infinitely. This is ground-breaking in terms of flexibility, cost, rapid scale-out and so on, but it’s also a massive shift in terms of support model. Taking a customer from a converged or hyper-converged appliance solution, where they own a stack of kit all wrapped by a single point of support, to the world of SDDC, where ownership of support is split between the software provider and the commodity hardware suppliers (of which there could be a vast number), could be tricky.

However, we do have to look at SDDC in the context of on-premises vs cloud (which is a whole other topic of conversation). Industry analysts tell us that infrastructure in customer DCs (i.e. ‘on-prem’) will one day be a thing of the past, in which case we have to assume everything ‘physical’ will one day sit in a lights-out DC somewhere that the customer doesn’t care about or have any visibility of. I think it’s this shift from on-prem to cloud that will fully enable SDDC to take over from traditional infrastructure.

Will this happen? Almost certainly, but until cloud services have the maturity, flexibility and cost model to suit the majority (as it stands, public cloud is limited to a select number of use cases), converged and HCI private-cloud offerings will have a role to play for years to come.

Still confused by the plethora of infrastructure options available in this rapidly changing area of the industry? Then get in touch with the experts at NG-IT, who can help you understand what is right for you.

Contact Olly via Twitter: @TheNimbleGuy