The Yottabyte Blog

Software Defined Infrastructure – we had it coming

Biting the hand that feeds IT

The end of IT as we know it is upon us. Decades of hype, incremental evolution and bitter disappointment are about to come to an end as the provisioning of IT infrastructure is finally commoditised. By the end of the decade, the majority of new IT purchases will be converged infrastructure solutions that I only semi-jokingly call Infrastructure Endgame Machines (IEMs).

I’ve discussed this topic with everyone from coalface systems administrators to the highest-ranking executives of companies that have become household names. Only a few truly see the asteroid headed their way and the collective denial of the entire industry will mean an absolute bloodbath when it hits.

Back in October, I talked about Software Defined Infrastructure (SDI). I painted a picture of a unicorn-like solution that, in essence, combined hyper-convergence, Software Defined Networking (SDN) and Network Functions Virtualisation (NFV), with orchestration, automation and management software that didn’t suck. I thought it was going to be rather a long time before these started showing up.

Boy, was I wrong.


An IEM is an SDI block made manifest*, but more than merely something you can install on your premises. It would include the ability to move workloads between your local SDI block and those of both a public cloud provider and regional hosted providers.

This gives those seeking to run workloads on someone else’s IEM the choice of using a vendor legally beholden to the US of NSA, or one that operates entirely in their jurisdiction. What a magical future that would be. All the promises of the past 15 years of marketing made real.

The goal of an IEM is that it removes the requirement to ever think about your IT infrastructure beyond some rather high-level data centre architecting. Figuring out cooling and power delivery will probably take more effort than lighting up an entire private cloud that’s ready to deliver any kind of “as a Service” you require, including a full self-service portal.

Put bluntly, IEM is the data centre product of the year 2020. Storage, networking, servers, hypervisors, operating systems, applications, management and so on will all simply be features.

Today, it would be rare to find a company that goes out and buys deduplication as a product. It’s expected that this is a basic feature of modern storage. By 2020, all of modern storage – and a whole lot more – will be expected to be a basic feature of an IEM.
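The "feature, not product" point is easy to see in code. The sketch below is a hypothetical, minimal content-addressed block store, not any vendor's implementation: files are split into fixed-size chunks, each chunk is keyed by its SHA-256 digest, and identical chunks are stored exactly once.

```python
import hashlib

class DedupStore:
    """Minimal content-addressed block store: identical chunks are stored once."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # sha256 digest -> chunk bytes
        self.files = {}    # filename -> ordered list of chunk digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store each unique chunk once
            digests.append(digest)
        self.files[name] = digests

    def read(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
store.write("a.img", b"\x00" * 16384)          # four identical zero chunks
store.write("b.img", b"\x00" * 16384)          # entirely duplicate data
print(len(store.chunks))                        # 1 unique chunk stored
print(store.read("a.img") == b"\x00" * 16384)  # True
```

Two 16KB files collapse into a single stored chunk plus metadata, which is the whole trick: once this is a few hundred lines inside a storage stack rather than a standalone appliance, nobody buys it separately again.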

It is all too easy to slip back into cynicism and think about the dozen reasons this might never happen. SDI blocks would decimate internal IT teams. Entire classes of specialities would become obsolete overnight.

Hundreds – if not thousands – of IT providers that only deliver one piece of the puzzle are instantly put on life support. Heck, the US government (amongst others) might intervene to stop the creation of IEMs because it would put at risk their ability to spy on everyone, all the time.

The beginning of the end of IT as we know it

It’s easy to talk about all this in abstract. Prognosticating about the future of IT is a fool’s game, and putting actual dates to things is sheer madness. If I’m going to take the risk of being so laughably wrong in public, I should at least join Gartner so that I can milk a high salary out of promoting my humiliation, no?

Given the kind of pushback even small parts of an on-premises SDI block are getting, it is perfectly rational to think that the entire IEM would be impossible. Except that they already exist, and based on months of research, I have every reason to believe that IEMs will start shipping very, very soon… and that they won’t be an “enterprise only” affair.

Let’s start with the only early IEM that actually exists: IBM’s SoftLayer. Right at this moment, I would consider it to be a functional – albeit primitive and early days – IEM manifestation.

SoftLayer is not simply a public cloud offering. Heck, calling what IBM serves up a “cloud” is simplifying things by a lot. IBM can remotely provision virtual machines (VMs), containers or bare-metal servers for you in a few different architectures.

IBM has a library of software that can be deployed at the push of a button, and is adding new features and functionality to cover every aspect of IT infrastructure. We’re not talking about paying lip service to concepts like data protection, but fully capable, customisable solutions supported by a rich partner ecosystem.

If you ask nicely enough, IBM will install a SoftLayer cloud on your premises. Regional providers can get a full SoftLayer installation of their own, so you can move your workloads over to them. Alternatively – because of IBM’s deep investment in OpenStack – you can move your workloads off your local or IBM-hosted SoftLayer set-up to regional providers running OpenStack.

IBM has evolved SoftLayer into the first IEM. We could quibble about some points, for example, that IBM still relies on legacy storage array and networking tech too much to meet the strictest definitions.

The key is that in jettisoning its own hardware-manufacturing divisions, IBM has become entirely agnostic about such things. IBM is not against hyper-convergence, SDN, NFV or any of the newer, sexier technologies. They are being integrated and will most likely become the default over the next few years, as they are becoming the default across the industry. Nothing about IBM’s culture, management software or approach makes these technologies hard to migrate to.

Of course, that IBM has an IEM is functionally irrelevant to the world at large. It’s IBM, so virtually no one will ever be able to afford it.

However, there are other players that are close. The closest to producing actual purchasable IEMs are VMware, Hewlett-Packard and Microsoft.


VMware

VMware has almost all the pieces needed to make an IEM. Indeed, if I hire the right VCDXes to build it and add in the right third-party software from VMware’s partner ecosystem, I could build an IEM out of VMware’s software. In fact, I’ve done it more than once in the past couple of months, though the cost per VM is absolutely breathtaking.

Despite having the pieces, I have absolutely zero faith that VMware will create an actual, usable IEM. Too much of it goes against VMware’s culture. VMware is far too fragmented internally for all the various groups to work together, and – perhaps the key point – the companies with the critical missing bits are terrified of working closely with VMware.

VMware doesn’t understand the importance of trust. Trust among partners and among customers. It is thrashing about like an Oracle in the making, seeing how tight it can get the vice grips it has on everyone’s genitals. This will be its undoing.

Buying an IEM from any vendor requires either that you trust that vendor implicitly, or that the vendor in question be so married to open standards that you feel you can move away from it easily. When you buy your entire IT infrastructure as variations on a single SKU, lock-in is a real concern – one VMware is institutionally incapable of understanding.


HP

HP is another company that is busy building an IEM. Its investments in OpenStack are impressive, as are its storage, networking and management software. HP is building the pieces. It is assembling them in the correct order.

The result is that HP is building an impressive Saturn V of an IEM, to awe and amaze all who gaze upon it. Of course, it’s HP, so the company has aimed the damned thing at the ground – and it’ll fire everyone involved in its design and construction just before it actually ignites the engine.


Microsoft

That brings us to Microsoft. Azure could be an IEM. Microsoft does have the Azure Stack. It also has an eye-wateringly expensive baby Azure that you can buy from Dell.

Microsoft is baking a primitive form of hyper-convergence called Storage Replica into the next version of Windows Server, and trying hard to cut Windows Server down to a usable size. Heck, Microsoft is even adding container support to the mix.

Trust and licensing are Microsoft’s stumbling blocks here. Though Microsoft will deny it vehemently in public, the company emphatically does not want you using regional service providers or running workloads on your own infrastructure. The company’s words and actions are not aligned.

The IEM is about more than simply running your workloads on expensive public infrastructure owned by a company beholden to the US of NSA. It is about the ability to run those workloads where they are appropriate and make the most sense. The body corporate of Microsoft doesn’t agree that this is the future, though some parts keep working on it, and the marketing messaging periodically tries to convince us of this, before changing to “Azure public is the solution to everything”.

Microsoft has spent billions trying to put regional service providers and a large part of its own partner ecosystem out of business. The company has jacked up SPLA pricing to the point where it is essentially impossible to compete with Microsoft’s hosted Azure directly. SPLA still has completely ridiculous licensing restrictions that prevent service providers from building a truly shared infrastructure, and VARs and MSPs have had their margins on everything other than selling Azure public cloud services slashed to meaninglessness.

From a technological perspective, Microsoft would seem to be on track towards creating an IEM. That said, this is Microsoft. It might decide tomorrow that the future of IT is a mutant hybrid chihuahua-giraffe thing that requires all new APIs, new programming languages, a baffling new interface and you to retain Microsoft-certified licensing lawyers on staff. It’s Microsoft: consistency isn’t its strong suit.


Cisco

The most interesting challenger to the existing players is Cisco. It recently borged OpenStack experts Metacloud and Pistoncloud.

Cisco has its own networking and servers. The company also boasts partnerships covering required elements up and down the stack, from hyper-converged vendors like Maxta and SimpliVity to every kind of management, automation and orchestration piece you could imagine.

Cisco lacks a credible public cloud offering and doesn’t have much of a hybrid story at all. These are things Cisco can buy, if it feels the IEM is more than voices in my head. Cisco’s problem isn’t getting the pieces together in time to be a real competitor. Cisco’s problem is that it’s Cisco.

Cisco has a culture of total lock-in followed by ruthless exploitation of that lock-in. This hardly makes it the company anyone wants to trust with the totality of their IT infrastructure. Moving away from that would require some pretty big cultural changes inside the firm but, for once, it may actually be possible to see that level of change.

Cisco has experienced a CEO change, and the incoming chief has presided over a purge of the executive layer. It has also seen its Invicta acquisition fail. Reports from staffers have the mood within Cisco as being somewhere between Game of Thrones and House of Cards, with collateral damage reported far and wide.

What will Cisco be when the dust settles? Nobody knows… but it could be an IEM contender.

Dell and Nutanix

The big mystery among the big names is “what is Dell up to?” If you are thinking Nutanix, don’t be absurd. Dell could never pay Nutanix as much as it will get from an IPO, and the only thing that matters at Nutanix right now is that massive payout. The exit of everyone’s dreams is so close that the Nutanix leadership can taste it, and nothing on this earth is going to keep them from their dump trucks full of cash.

So if Dell isn’t going to build an IEM out of Nutanix, what is the company up to? I know many cynical IT types who believe Dell is up to nothing; that it is resting on its laurels and that innovation within the organisation has ceased. I am not one of those people.

Michael Dell is many things, but he’s not stupid. If I have figured out that IEMs are on the immediate horizon – and the massive disruption they will cause – then so has Dell. Not being a public operation, however, the company has no reason to reveal its plans until it’s ready to obliterate competitors in one decisive push.

That said, Nutanix has made some big strides of its own towards an IEM with its recent Acropolis announcement. It doesn’t have the full suite of software widgetry that Dell could bring to bear, but it may not need it.

The Dell/Nutanix partnership is perhaps the most interesting one in this entire space.

The little guys

You can be excused if you haven’t heard of Yottabyte. This is a small public cloud provider from Detroit that decided it needed to roll its own hyper-convergence and, eventually, data centre convergence software. Yottabyte is by no means the only small/startup IEM contender, but I am going to use it as my example because I have worked very closely with it during the past six months.

Yottabyte doesn’t have massive venture capital funding or a marketing budget. It just has a stable of loyal customers, and it ploughs profit from its public cloud operations back into R&D.

The result of years of this R&D is yCenter. It can be a public cloud, a private cloud or a hybrid cloud. Yottabyte has gone through and solved almost all the tough problems first – erasure-coded storage, built-in layer 2 and layer 3 networking extensibility, a management UI that doesn’t suck – and is only now thinking about solving the easy ones.
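Erasure-coded storage is one of those genuinely hard problems. To illustrate the principle only – this is a toy single-parity scheme, nothing like the codes a production system such as yCenter would actually use – the sketch below XORs data shards into one parity shard, so any single lost shard (or failed node) can be rebuilt from the survivors:

```python
from functools import reduce

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    """Append one XOR parity shard to a list of equal-length data shards."""
    return shards + [reduce(xor_bytes, shards)]

def recover(shards_with_one_missing):
    """Rebuild the single missing shard (marked None) by XOR-ing survivors."""
    survivors = [s for s in shards_with_one_missing if s is not None]
    return reduce(xor_bytes, survivors)

data = [b"AAAA", b"BBBB", b"CCCC"]
encoded = encode(data)   # three data shards + one parity shard
encoded[1] = None        # simulate losing one shard or node
print(recover(encoded))  # b'BBBB'
```

Real systems use Reed-Solomon-style codes that survive multiple simultaneous failures with tunable overhead, but the storage-efficiency argument versus plain replication is the same: here, four shards protect three shards' worth of data instead of six.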

Yottabyte is a completely random tiny company with 80 per cent of an IEM already built and is only just starting to realise what it can do with it. I’ve spent hundreds of hours with Yottabyte helping them design a hybrid cloud partner ecosystem that will not only include regional service providers, it will make discovering and utilising them easy.

Yottabyte didn’t push back and say it needed to control all aspects of the ecosystem. It didn’t come back demanding a means to limit the service providers’ role, own every element of the customer relationship or otherwise marginalise everyone except itself.

If all goes well, by the end of next year Yottabyte will have a prototype IEM. What Yottabyte has got is already pretty impressive, and adding the last few elements honestly shouldn’t be that hard. The hard part is done, now it’s just a matter of getting some venture capital and going forth and marketing the thing.

I bring Yottabyte up as an example mostly because it’s a textbook perfect case of “buy or bury” in the IEM space. If Yottabyte ever gets to C-round funding, a whole lot of executives at a whole lot of companies will have a great deal of explaining to do.

Yottabyte isn’t the only example. I can’t talk about most of the others because they’re in stealth, but some have emerged organically. Maxta and Mirantis, for example, have a partnership that can deliver a hyper-converged OpenStack solution as an appliance. It’s not exactly an IEM, but it could be the start of one.

SimpliVity just announced KVM support, and has OpenStack integration for both its KVM and VMware-based solutions. I could keep going up and down the line and list a bunch of companies or company combinations that deliver the building blocks of an IEM, but I suspect you get the picture by now.

The shape of things to come

There are startups that two years ago I would have said were novel because they sold a product, not a feature. They integrated multiple features – that themselves were products not too long ago – into a single item and sold it at a lower price. Those startups haven’t even finished their lifecycle and already I can see quite clearly that they are merely features of an IEM.

IEMs won’t completely eliminate systems administrators, but they will completely transform the data centre. Gone from most organisations will be dedicated infrastructure specialities like storage admin, virtualisation admin and network admin. A general operations – or, more likely, DevOps – staff will take their place in all but the largest organisations.

Perhaps the best analogue is physical building infrastructure. Most companies don’t have a plumber or a roofer on staff. They have a general utilities body – a “handyman” – who fixes what they can and calls in specialists for what they can’t. Larger organisations – a university campus, for example – absolutely will keep some specialities to hand. A plumber is probably a safe bet, even if the roofer is unlikely.

By the same token, network admins will be the infrastructure specialists hardest to eliminate fully. Networks are the bit that has to interact with the outside world, and the networking world’s abject inability to play nicely with standards means that in any large organisation, someone has to babysit the thing.

As I see it, the data centre container vision of the deceased Sun Microsystems is – a decade later – starting to be realised. Instead of needing to consume entire containers, however, we’re on the cusp of being able to provide a full suite of data centre services in a 2U Twin form factor.

IEMs are bringing the entry point to full data centre services down to something that – by 2020 – 50-seat companies should be able to afford to deploy on premises, with full and simple access to a wide array of hybrid cloud services. They already scale up and scale out simply and easily.

Almost everyone reading this has some sort of vested interest in preserving the status quo. That status quo is, after all, what is currently paying our mortgages. Resistance to IEMs will be enormous. Despite this, how many of us can afford to gamble that I’m wrong?

If you’re a systems administrator, can you afford not to gain development or at least DevOps skills beyond your infrastructure roots? If you’re a vendor and you don’t have an IEM solution, and I’m right that IEMs are about to be a very big thing, then you’re probably already dead and you just don’t know it yet.

The future of the data centre is about more than performance results and benchmarks. It’s about more than individual features or varying measures of density. More than anything else, the success or failure of the IEM concept will boil down to trust. Who would you trust to deliver you an entire data centre’s worth of services in what amounts to a black box?

How many intercompatible vendors would you require before you bought into that concept? And could we ever trust the vendors involved not to turn the screws once we’d bought in? If IEMs are – as I believe them to be – inevitable, then a new system of checks and balances will need to be developed. The delicate balance of power between vendors, customers and regulators will need to be very carefully managed. I hope we’re all up to the task.


* I could have gone on using the SDI nomenclature, but various industry marketing hooligans decided to misappropriate the term to refer to their pathetically simplistic legacy converged or hyper-converged infrastructure solutions. Infrastructure Endgame Machine is a lovely bit of hyperbole that – I hope – no marketing wonk will try to steal unless they’ve actually built one. The internet being what it is, calling what you’ve made an Endgame Machine will attract every vicious piranha on the planet to tear you apart, so if you use the IEM terminology you’d better be ready to deliver.
