The Yottabyte Blog

BLOOMFIELD TOWNSHIP, Michigan, Sept. 13, 2016 /PRNewswire/ — A strategic partnership between the University of Michigan and software company Yottabyte promises to unleash a new wave of data-intensive research by providing a flexible computing cloud for complex computational analyses of sensitive and restricted data.

The Yottabyte Research Cloud will provide scientists high performance, secure and flexible computing environments that enable the analysis of sensitive data sets restricted by federal privacy laws, proprietary access agreements, or confidentiality requirements. Previously, the complexity of building secure and project-specific IT platforms often made the computational analysis of sensitive data prohibitively costly and time consuming.

Brahmajee Nallamothu, professor of internal medicine, tested a pilot installation of the Yottabyte Research Cloud at the U-M Institute of Healthcare Policy and Innovation for his research on such topics as predictors of opioid use after surgery and the costs and uses of cancer screenings under the Affordable Care Act.

“We recently moved a healthcare claims database, which is multiple terabytes in size and requires a great deal of memory and fast storage to process, onto the pilot platform,” Nallamothu said. “The platform allows us to immediately increase or decrease computing resources to meet demand while permitting multiple users to access the data safely and remotely. Our previous setup relied on network storage and self-managed hardware, which was extremely inefficient compared to what we can do now.”

“The Yottabyte Research Cloud will improve research productivity by reducing the cost and time required to create the individualized, secure computing platforms that are increasingly necessary to support scientific discovery in the age of Big Data,” said Eric Michielssen, associate vice president for advanced research computing at U-M.

“With the Yottabyte Research Cloud, researchers will be able to ask more questions, faster, of the ever-expanding and massive sets of data collected for their work,” said Yottabyte CEO Paul E. Hodges, III. “We are very pleased to be a part of the diverse and challenging research environment at U-M. This partnership is a great opportunity to develop and refine computing tools that will increase the productivity of U-M’s world class researchers.”

Many U-M scientists are working on a variety of research projects that could benefit from use of the Yottabyte Research Cloud:

  • Healthcare research, for example in precision medicine, often requires working with sensitive patient information and large volumes of diverse data types. This research can yield results that positively impact patients’ lives, but often involves the analysis of millions of clinical observations that can include genomic, hospital, outpatient, pharmaceutical, laboratory and cost data. This requires a secure high performance computing ecosystem coupled to massive amounts of multi-tiered storage.
  • In the social sciences, U-M research requires secure, remote access to sensitive research data about substance abuse, mental health, and other topics.
  • Transportation researchers who mine large and sensitive datasets — for example, a 24-terabyte dataset that includes videos of drivers’ faces and GPS traces of their journeys — also stand to benefit from the security features and computing power.
  • In learning analytics, studies of the persistence of teacher effects on student learning could benefit from the enclaves to store and analyze data that includes observational measures scored from classroom videos, and elementary and middle school students’ scores on standardized tests.
  • Researchers in brain science will be able to use the Yottabyte Research Cloud to investigate a wide range of topics including the effects of aging on brain function and structure and how we focus our attention in the presence of distraction.

The Yottabyte Research Cloud is U-M’s first foray into software-defined infrastructure for research, allowing on-the-fly personalized configuration of any-scale computing resources, which promises to change the way traditional IT infrastructure systems are deployed across the research community.


U-M: Dan Meisler, 734-764-7414
Yottabyte: Duane Tursi, 248-464-6100 x102

Vendors selected for the “Cool Vendor” report are innovative, impactful and intriguing

BLOOMFIELD TOWNSHIP, Mich., May 25, 2016 /PRNewswire/ — Yottabyte, a leading provider of next-generation software-defined infrastructure solutions, today announced it has been included in the list of “Cool Vendors” in the “Cool Vendors for Compute Platforms” report by Gartner, Inc.

“Our selection as a ‘Cool Vendor’ by Gartner is a testament to the level of creativity and innovation we’ve applied to the IT challenges facing businesses today,” says Duane Tursi, Principal of Yottabyte. “At a time when infrastructure costs are rising and IT resources are being trimmed, our forward-thinking virtual datacenter platform is tremendously versatile and hits every important benchmark.”

YottaBlox appliances are software-defined infrastructure building blocks for storage, computing and networking, enabling users to build completely secure public, private and hybrid cloud-based virtual datacenters. YottaBlox are known for their rare combination of simplicity, scalability and cost-effectiveness. The Yottabyte software-defined infrastructure platform is adaptable to a variety of business needs, from high performance computing, Tier 1 application virtualization, test and development simulations, and remote office / branch office standardization to data backup and archive, high availability and disaster recovery.

“Cool Vendor” status is the most recent accolade awarded to Yottabyte. The company was also recently named a runner-up in the Tech Trailblazers Awards.

Gartner Cool Vendor Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Yottabyte
Yottabyte is a virtualization and cloud vendor offering public, private and hybrid cloud solutions. Yottabyte operates a public cloud based on yCenter, the same software that powers Yottabyte’s YottaBlox. YottaBlox is a turnkey Hyperconverged Infrastructure Appliance (HCIA) that provides a full private cloud solution. Data protection, storage tiering, multi-site replication, software defined networking and an easy-to-use self-service virtualization management interface are only some of the powerful features that are a standard part of yCenter and the YottaBlox it powers. YottaBlox is available and supported worldwide through the Yottabyte Partner Network. The company is headquartered in Bloomfield Township, Michigan.

Courtney Tursi


Also posted on The Data Stack at

What are the fundamental building blocks of a datacenter? Some might say servers, storage and switches; others would say the components that make up those devices. Yottabyte takes a different view. Technology for technology’s sake is pointless; it is what you do with the technology that really matters.

A switch with nothing to interconnect is of use to no one. A CPU is worthless without the rest of the server, and a server does nothing without applications to run on it. It’s the applications, or more specifically the work those applications do, that give purpose to the totality of today’s IT industry.

Businesses don’t buy CPUs. Nor do they buy servers. They do not set out to buy a specific model of any piece of technology and build up justifications and rationale around that purchase.

Businesses buy results. They start at the end of the problem chain – what they want to accomplish – and work their way back to the beginning: what resources do I need to accomplish my desired outcome?

Specific applications enable specific results. What is required to run those applications is relevant only insofar as the infrastructure under consideration is cost effective, adequately performant and fits within the business requirements for ease of use. Operating systems, hypervisors and every layer of hardware and software all the way down the stack can all be replaced or interchanged if there is a sufficient reason to do so.

Given that the focus of business is on applications and the data they create, data that is ultimately transformed into information about the business, the fundamental building block of the datacenter is the whole datacenter infrastructure itself, built for the express purpose of running business application workloads.

Yottabyte provides the software that delivers the results businesses need, but it is Intel’s hardware technologies that make this possible. Intel provides storage in the form of Solid State Drives (SSDs) that are the most reliable and performant in the business. Intel’s Broadwell Xeon v4 CPUs offer unprecedented speed and critical functionality through features such as Advanced Vector Extensions (AVX). Similarly, Intel networking provides the stability and reliability Yottabyte requires to deliver solutions that simply wouldn’t function without a dependable network stack.

Intel’s injection of value doesn’t end with supplying hardware. Through its Storage Builders program Intel has provided valuable advice and support. Less time spent worrying about the vagaries of hardware issues means more time spent doing what we do best: worrying about the details that matter to the customer.

The atomic datacenter

Yottabyte specializes in packing all the capabilities of a datacenter into a small package, which we call YottaBlox. The adequately skeptical individual, of course, will ask questions about what constitutes “all the capabilities of a datacenter,” and rightly so. There are plenty of vendors making claims along these lines, and all of us have had to learn to live with disappointment regarding those claims.

Yottabyte doesn’t believe in offering up a hyperconverged box that marries storage with compute and calling it a day. Adding in some basic networking capabilities doesn’t tip the scales towards “datacenter in a can” either. To deliver on the promise, much, much more is required.

Today’s datacenters need to be turnkey cloud services. This means hardware, hypervisors and management software all provided as a unit and without the need to license the hypervisor before you can start working.

Let’s not forget, the datacenter is a multi-tenant affair. This means self-service portals for end consumers of resources, and the use of policies and Role-Based Access Control (RBAC) to allow different classes of customer different capabilities. A datacenter also needs to deliver different tiers of performance and guarantee Quality of Service (QoS) for critical workloads.

A datacenter should make deploying workloads easy, allowing customers to utilize templates and recipes to create a marketplace of operable environments and applications. Infrastructure needs to be usable from a user-friendly GUI as well as through a RESTful API. Snapshots, clones, backups and disaster recovery options need to be available at the workload level as well as at the level of the entire datacenter.
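
To make this concrete, here is a minimal sketch of what workload-level and datacenter-level operations look like when driven through a RESTful API rather than a GUI. The base URL, endpoint paths, payload fields and token header are hypothetical illustrations, not the documented yCenter interface.

```python
# Illustrative sketch only: the endpoints, payload fields and token header
# below are hypothetical stand-ins, not the documented yCenter API.
import requests

API = "https://ycenter.example.com/api/v1"          # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-token>"}   # placeholder credential

def snapshot_vdc(vdc_id: str, label: str) -> dict:
    """Request a snapshot of an entire virtual datacenter."""
    resp = requests.post(
        f"{API}/vdcs/{vdc_id}/snapshots",
        json={"label": label},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def clone_workload(vm_id: str, new_name: str) -> dict:
    """Clone a single workload (VM) from an existing one."""
    resp = requests.post(
        f"{API}/vms/{vm_id}/clone",
        json={"name": new_name},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(snapshot_vdc("vdc-042", "pre-upgrade"))
    print(clone_workload("vm-1138", "web-frontend-qa"))
```

The point is not the particular calls; it is that every operation available in the GUI is equally scriptable, which is what makes templates, recipes and automation practical.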

This is what Yottabyte views as the fundamental building block of the datacenter. The resource unit of the datacenter is the datacenter infrastructure itself. Everything about the datacenter is a feature, not the final product to be purchased or sold.

Datacenters as they should be

Yottabyte took all the above and wrapped it up as a single offering, and then we let you make as many datacenters as you want. A single YottaBlox is a fully redundant hardware appliance and multiple YottaBlox can be joined together into a single cluster.

A YottaBlox can house multiple Virtual Datacenters (VDCs). These VDCs can be moved between YottaBlox, or to yCenter-compatible clouds hosted by Yottabyte or third-party service providers.

Yottabyte allows customers to move individual workloads or entire VDCs. A VDC can be hundreds of VMs, or a single one. Those VMs which make up a single service can be wrapped together into a single VDC and moved, copied, backed up or otherwise manipulated easily as a whole.

This is how datacenters should be: exactly as large or as small as is needed, with the focus on the application instead of the infrastructure, and with all the rest taken care of right out of the box through smart, configurable software. That’s Yottabyte’s vision of the future of the datacenter, and it’s what we deliver.

Greg M. Campbell, Author

Yottabyte is pleased to announce the general availability of our YottaBlox hyperconverged, storage and compute appliances, alongside version 4.0 of our yCenter software-defined infrastructure platform. YottaBlox appliances are powered by yCenter, the very same software we have used to power our public cloud computing and storage services for years. Finally, administrators can have all the power they need to build robust private, public or hybrid clouds in a turnkey, building-block appliance approach.

As with other providers, new features are trialed in our public cloud offering first and then released to our YottaBlox appliances. This ensures that features are refined and tested before administrators have to wrangle with them.

YottaBlox come with all the trimmings. Virtual datacenters are at the core of how yCenter works. This functionality isn’t an add-on and it doesn’t cost any extra money. Virtual machines can be grouped together into a virtual datacenter and control over each virtual datacenter restricted via role-based administration and policy.
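
As a rough illustration of that grouping, the sketch below models a virtual datacenter as a container of virtual machines with per-role permissions. The role names, actions and data model are assumptions made for illustration; they are not yCenter’s actual policy engine.

```python
# Conceptual sketch: a virtual datacenter groups VMs and restricts what each
# role may do inside it. Roles, actions and structure are illustrative only.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class VirtualDatacenter:
    name: str
    vms: list[str] = field(default_factory=list)
    # maps a role to the set of actions it may perform inside this VDC
    role_permissions: dict[str, set[str]] = field(default_factory=dict)

    def can(self, role: str, action: str) -> bool:
        return action in self.role_permissions.get(role, set())

qa_vdc = VirtualDatacenter(
    name="qa",
    vms=["web-01", "db-01"],
    role_permissions={
        "qa-engineer": {"start", "stop", "snapshot"},
        "auditor": {"view"},
    },
)

assert qa_vdc.can("qa-engineer", "snapshot")
assert not qa_vdc.can("auditor", "stop")
print(f"{qa_vdc.name}: {len(qa_vdc.vms)} VMs, roles = {sorted(qa_vdc.role_permissions)}")
```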

YottaBlox may be purchased as a hyperconverged appliance. Unlike some, we don’t believe that hyperconvergence is a product in and of itself; hyperconvergence is merely a hardware configuration where the CPU and RAM resources used to run VMs reside on the same physical server as the physical disks they’re attached to. This fact has been obscured by many vendors who wish to stay relevant to the increasingly busy administrators of today’s datacenters.

YottaBlox hyperconverged appliances incorporate storage, networking and compute into a single appliance. Virtual datacenters serve as containers for virtual machines which are themselves wrappers for operating systems and applications. Virtual machines and entire virtual datacenters can be snapshotted, cloned, migrated, backed up, and automatically deployed as part of templates or recipes.

YottaBlox can be joined together to form a single cluster, and ultimately a single site’s worth of physical datacenter. They can also be joined over geographic distances to allow migration of workloads between sites, disaster recovery planning and failover and even the migration of workloads between yCenter clusters owned by different organizations.

The barriers between your private cloud and someone else’s public cloud are being broken down. Interminable configuration on a per-virtual machine basis is no longer required to set up disaster recovery or enable hybrid cloud movement of workloads. Entire virtual datacenters can move quickly and easily. New workloads can be deployed locally, remotely or on a publicly hosted yCenter instance where you have an account.

Hyperconvergence isn’t a product. Virtual datacenters aren’t a product. Public clouds, private clouds, even hybrid cloud capabilities aren’t a product. None of these, on their own, are worthy of their own SKU, their own price tag or their own licensing. They are all merely features of a datacenter solution done right.

YottaBlox is the product, and ease of use without worry is the feature that can finally be delivered upon. Your workloads, where you need them, when you need them.

Join us @YottabyteLLC, @greg_m_campbell & @dduanetursi at Intel’s Cloud Day 2016 (@IntelITCenter | #IntelCloudDay | #NowPossible) to learn about YottaBlox. Bring YottaBlox with yCenter to your datacenter today so you can be ready for the challenges of tomorrow.

Every datacenter is different, even if only because the construction of the actual building(s) differ from site to site. While physical layouts, equipment and vendor selections change, the purpose of a datacenter always remains the same: to run digital workloads. Given the intense amount of marketing surrounding datacenter equipment this can sometimes be the difficult bit to remember: datacenters exist to run workloads, nothing more.

As layers of abstraction come into play and datacenters become entirely virtual, the purpose of a datacenter remains constant. A virtual machine is a wrapper for the environment in which the OS and application live that can be moved from physical host to physical host. A virtual datacenter is a wrapper for the environment in which virtual machines and virtual networks operate that can be moved from cluster to cluster (or datacenter to datacenter), and nothing more.

Despite the finality of the above statement, virtual machines changed IT practices around the world. They freed operations teams from a significant amount of drudgery related to moving applications between dissimilar hardware, made high availability and fault tolerance affordable enough to be within reach of businesses of all sizes and allowed software updates to be uncoupled from hardware updates, changing the financial dynamics of the entire IT industry.

We at Yottabyte believe the adoption of the virtual datacenter model marks a similarly significant change for IT operations. This is why the virtual datacenter is at the core of yCenter. The ability to wrap up every configuration, rule, permission, network and virtual machine in a datacenter and move the whole lot around as needed isn’t an add-on or a separate product. It’s a basic, table stakes feature of modern IT infrastructure.


Virtualization allowed operations staff to drive the utilization of individual servers much higher. No longer were administrators faced with the choice between devoting a piece of hardware to a single workload (highly inefficient), or trying to integrate multiple workloads into a single environment (a practice which usually ended in disaster).

With virtualization, individual workloads could have their own operating system with its own configuration. One workload per environment, and you could run multiple environments on a single piece of hardware. Efficiency increased without compromising the security, stability or ease of use of individual workloads.

What’s important to note is that the environment around individual workloads got smaller. Administrators stopped trying to put multiple workloads in a single environment. This meant that they stopped having to configure environments to support those multiple workloads, and each environment was thus only modified from the default settings as little as was required to get the job done.

Virtual datacenters offer a similar sense of compartmentalization. Once you can virtualize datacenters you can stop thinking of datacenters as collections of hundreds or thousands of workloads and start thinking of them as wrappers around interconnected workloads.

In the days before virtualization, an administrator might have put all the individual workloads for a given service on a single system. For a website this might include a database, a web server, a file server and some security applications. Today, a single website could consist of a dozen virtual machines, each irrelevant on its own, but which together form that same single service.

With virtual datacenters you can wrap each service’s virtual machines together so that the whole service can be worked on by a given individual or department, cloned as a unit (for DR or QA or…), migrated as a unit and so forth. Alternately, you can bundle similar kinds of workloads into a single datacenter, splitting up administration based on workload type rather than service.

Administer securely

The choice is yours, but the choice is important. Even a modest physical datacenter today can run millions of individual workloads. The ability to chop up that physical datacenter into smaller groupings is necessary if we are to make any sense of it, or be able to administer it securely.

Even if, for example, you trust your datacenter administrator to have personal control over the millions of workloads running in a physical datacenter, can you trust a single login with that kind of power? If a piece of malware were to get hold of the superuser credentials to a datacenter of that scale, the results could be catastrophic.

Virtual datacenters are more than simply theoretical groupings to ease the burden on an administrator’s sanity. They are more than simple compartmentalisations to wrap workloads together. They are the new normal; a means to segment and segregate access and administration for security purposes as much as for utility and ease of use.

Yottabyte believes in the importance of the virtual datacenter to the future of IT administration. We’ve built it into yCenter and yCenter is at the core of everything we offer. From our public cloud services to all of our products, the virtual datacenter isn’t tomorrow’s technology, or some difficult to configure and expensive add-on. It is a fundamental feature of today’s datacenter.

Join us @YottabyteLLC, @greg_m_campbell & @dduanetursi at Intel’s Cloud Day 2016 (@IntelITCenter | #IntelCloudDay | #NowPossible) and learn how to bring yCenter to your datacenter. With Yottabyte, you can be ready today for the challenges of tomorrow.

What’s in a name? Would storage by any other name perform as sweetly? Are all buzzworded products equal? In tech, buzzwords are rendered meaningless very quickly, and empires are more often built on being first to market, rarely on being the best on the market.

Hyperconvergence is a fantastic example of a tech market whose primary defining buzzword has been stretched, tortured, mangled and abused to the point of meaninglessness. Originally, hyperconvergence meant lashing together all the hard drives and SSDs inside a virtualization cluster into centralized storage. This allowed critical functions such as VM vMotion/Live Migration to occur without needing an expensive SAN or enterprise NAS.

Eventually, things got a little bit fuzzier. Companies started to claim hyperconvergence if they had cluster-wide server side caching or storage gateways that presented to the hypervisor like they were a hyperconverged storage offering, but still utilized one (or more) centralized storage units to provide most of the storage.

As the definition of hyperconvergence was stretched to accommodate every marketing department that wanted in on the buzzword, it seems everyone (with a couple of notable exceptions) has brought to market a hyperconverged solution.

Nutanix was founded in 2009 and shortly thereafter launched its first hyperconverged node. As the lore goes, the term “hyperconvergence” was coined at the VMworld where Nutanix came out of stealth mode in 2011. (It should be noted that Pivot3 and a few others were offering practical hyperconverged products for years before the term was coined.)

Unfortunately, in the nearly five years since hyperconvergence became “a thing,” it seems that very few of the vendors occupying this space have cared to look beyond “storage + compute = novelty” and deliver something that actually advances the state of the art or solves real world customer problems.

That’s about to change.

Feature versus product

The storage industry is fascinated by the concept of storage for storage’s sake, but storage of any kind is a feature, not a product. Businesses don’t care about the storage that powers their datacenters any more than they care about the brand name of the oil used in their fleet of cars. If it does the job adequately and for a price the organization is willing to pay, then it should be out of sight and out of mind.

By this logic, if storage vendors want to stay relevant and be anything other than a commodity, they should be focused on incorporating other features by default. When we buy virtualization clusters we should be seeing self-service portals, API-based infrastructure-as-code provisioning and management, role-based administration, templates and recipes, and more besides.

In short, today’s storage vendors should be leaning towards the infrastructure endgame machine model, rather than wasting everyone’s time trying to convince us all that hyperconvergence is a product instead of just a hardware configuration; and a fairly limited configuration at that. This model renders the hardware infrastructure invisible to the user, and manifests itself as a software defined infrastructure platform where storage + compute + network + management & all other goodies = infrastructure endgame machine.
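
To illustrate the “infrastructure-as-code” half of that argument, the sketch below describes an entire virtual datacenter as declarative data that a provisioning API could consume. The schema and field names are hypothetical and exist only to show the shape of the idea, not any vendor’s actual template format.

```python
# Hypothetical "recipe" for a complete virtual datacenter, expressed as
# declarative data. The schema and field names are illustrative only.
three_tier_app = {
    "vdc": "storefront",
    "networks": [
        {"name": "frontend", "cidr": "10.10.1.0/24"},
        {"name": "backend",  "cidr": "10.10.2.0/24"},
    ],
    "vms": [
        {"name": "web-01", "cpus": 4, "ram_gb": 8,  "network": "frontend",
         "disk": {"size_gb": 80,  "tier": "ssd"}},
        {"name": "app-01", "cpus": 8, "ram_gb": 16, "network": "backend",
         "disk": {"size_gb": 120, "tier": "ssd"}},
        {"name": "db-01",  "cpus": 8, "ram_gb": 32, "network": "backend",
         "disk": {"size_gb": 500, "tier": "hybrid"}},
    ],
    "policies": {"qos": "gold", "backup_schedule": "nightly"},
}

def validate_recipe(recipe: dict) -> None:
    """Minimal sanity checks before handing the recipe to a provisioning API."""
    networks = {net["name"] for net in recipe["networks"]}
    for vm in recipe["vms"]:
        assert vm["network"] in networks, f"{vm['name']}: unknown network"

validate_recipe(three_tier_app)
print(f"Recipe '{three_tier_app['vdc']}' validated: "
      f"{len(three_tier_app['vms'])} VMs across {len(three_tier_app['networks'])} networks")
```

A declaration like this is the difference between a hardware configuration and a product: the same recipe can be versioned, reviewed, cloned for QA and redeployed anywhere the platform runs.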

Yottabyte is well positioned to deliver exactly this. We have been running our own cloud for years. Our software has evolved to meet the real world needs of our customers and by now is more than proven. YottaBlox, powered by the yCenter SDI Platform, offers everything needed for a “cloud in a can” solution.

Yottabyte is ready to take the industry beyond hyperconverged hardware. We’re ready to stop pretending that storage is a product and deliver the features required to meet business needs.

Join us @YottabyteLLC, @greg_m_campbell & @dduanetursi at Intel’s Cloud Day 2016 (@IntelITCenter | #IntelCloudDay | #NowPossible) and let’s take that next step together.

BLOOMFIELD TOWNSHIP, MI–(Marketwired – Jan 28, 2016) – Yottabyte, a software-defined datacenter start-up, today announced that it has been named a Tech Trailblazer Award finalist in the virtualization category.

The Tech Trailblazer Awards are specifically designed for start-ups and are judged by a global panel of technology experts as well as the voting tech public. The awards program recognizes privately-held organizations around the world that are developing innovative solutions. The finalists have been selected through an initial judging round determined by a diverse panel of technology industry experts.

Chief Trailblazer Rose Ross said: “This year has been phenomenal again for startup talent and unsurprisingly the quality of the finalists reflects that. We have also seen previous winners go from strength to strength and we are looking forward to seeing who will be following in their startup footsteps. Our esteemed judges and the Tech Trailblazing team wish all the finalists the best of luck.”

yCenter, Yottabyte’s cloud-building software, creates a virtual datacenter environment that incorporates storage, compute and networking. It makes it possible for users to deploy applications; provision virtual datacenters, virtual machines, virtual networks and storage; and reconfigure this infrastructure in seconds. yCenter is available on YottaBlox, Intel-based hardware, enabling it to take advantage of many forms of hardware acceleration. This provides high performance without traditional high rack space, power and cooling requirements.

“We are honored to be named as a Tech Trailblazer Award finalist alongside other virtualization innovators seeking to disrupt the status quo,” said Paul E. Hodges, III, CEO at Yottabyte. “This is further validation for what our customers and partners have been saying. The market wants private and hybrid cloud-building technology that is cost-effective, and easy to implement, use and manage — simply put, a consumable cloud.”

Public voting for the Tech Trailblazer Awards will be open until February 12 at

About the Tech Trailblazers Awards
Tech Trailblazers is designed for smaller businesses and startups that are five years old or less and at C-series funding or below. The awards have low barriers to entry and aim to recognize both established and up-and-coming startups. They are supported by sponsors and industry partners including AfriLabs, Amoo Venture Capital Advisory, beSUCCESS, bnetTV, China AXLR8R, the Cloud Security Alliance, Computing, The Green Grid, GSMA, The Icehouse, Innovation Warehouse, Launchpad Europe, Lissted, MIT/Stanford Venture Lab, Mynewsdesk, The Next Silicon Valley, Outsource, Prezi, RealWire, Silicon Cape Initiative, Skolkovo, StarTau, Startup America, Storage Networking Industry Association (SNIA), Tech in Asia, TechNode, TiE Silicon Valley, Wazoku, Ventureburn and VMware. For more information, go to or follow Tech Trailblazers on Twitter @techtrailblaze.

About Yottabyte
Yottabyte is a virtualization and cloud vendor offering public, private and hybrid cloud solutions. Yottabyte operates a public cloud based on yCenter, the same software that powers Yottabyte’s YottaBlox. YottaBlox is a turnkey Hyperconverged Infrastructure Appliance (HCIA) that provides a full private cloud solution. Data protection, storage tiering, multi-site replication, software defined networking and an easy-to-use self-service virtualization management interface are only some of the powerful features that are a standard part of yCenter and the YottaBlox it powers. YottaBlox is available and supported worldwide through the Yottabyte Partner Network. The company is headquartered in Bloomfield Township, MI, USA. For more information, visit or call 1-888-630-BYTE (888-630-2983).

Twitter: @YottabyteLLC


Media and Analyst Contact:
Courtney Tursi



An optional Windows update to the SUSE Block driver, released by Microsoft on Dec. 8, 2015, has been found to be defective. This update causes BSODs (blue screens of death) on virtual machines using a virtio driver for block storage. The SUSE Block Driver overwrites the existing (correct) virtio driver and causes the VM to become unstable. We strongly recommend not applying this patch until Microsoft offers a verified replacement/fix.

To deselect the optional update

  1. Open Windows Update options (Windows 2008/2012) in Control Panel -> System and Security -> Windows Update.
  2. Click the link “…optional updates are available”.
  3. Deselect the update “SUSE – Storage Controller – SUSE Block Driver for Windows”.
  4. Right-click this update and select the Hide update option (this ensures the update is not accidentally applied).

Hidden patches can be restored (from the left menu on the Updates screen) when/if Microsoft issues a replacement patch.
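
For administrators who would rather script this than click through Control Panel, the following is a minimal sketch using the Windows Update Agent COM API via the pywin32 package. Run it from an elevated Python prompt; matching the update by the title substring shown above is an assumption, so verify the reported title before relying on it.

```python
# Sketch: hide the defective "SUSE Block Driver" optional update using the
# Windows Update Agent COM API. Requires the pywin32 package and an elevated
# (Administrator) prompt. The title substring is an assumption based on the
# update name shown in the Windows Update GUI.
import win32com.client

TITLE_SUBSTRING = "SUSE Block Driver"

session = win32com.client.Dispatch("Microsoft.Update.Session")
searcher = session.CreateUpdateSearcher()

# Consider only updates that are not yet installed and not already hidden.
result = searcher.Search("IsInstalled=0 and IsHidden=0")

hidden = 0
for i in range(result.Updates.Count):
    update = result.Updates.Item(i)
    if TITLE_SUBSTRING.lower() in update.Title.lower():
        update.IsHidden = True   # same effect as "Hide update" in the GUI
        hidden += 1
        print(f"Hidden: {update.Title}")

if hidden == 0:
    print("No matching optional update found (it may already be hidden or installed).")
```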


If the patch has already been applied

  1. Shut down the VM completely.
  2. In yCenter, edit the VM and change the driver type to IDE.
  3. Submit the change.
  4. Start the VM. This should allow the Windows VM to boot without a blue screen.
  5. Change the block device driver from SUSE back to the Red Hat version.
  6. Shut down the VM completely.
  7. In yCenter, edit the VM and change the driver type back to virtio.
  8. Start the VM.
  9. Contact Yottabyte Support if you need assistance or have further questions/concerns.


Software Defined Infrastructure – we had it coming

Biting the hand that feeds IT


The end of IT as we know it is upon us. Decades of hype, incremental evolution and bitter disappointment are about to come to an end as the provisioning of IT infrastructure is finally commoditised. By the end of the decade, the majority of new IT purchases will be converged infrastructure solutions that I only semi-jokingly call Infrastructure Endgame Machines (IEMs).

I’ve discussed this topic with everyone from coalface systems administrators to the highest-ranking executives of companies that have become household names. Only a few truly see the asteroid headed their way and the collective denial of the entire industry will mean an absolute bloodbath when it hits.

Back in October, I talked about Software Defined Infrastructure (SDI). I painted a picture of a unicorn-like solution that, in essence, combined hyper-convergence, Software Defined Networking (SDN) and Network Functions Virtualisation (NFV), with orchestration, automation and management software that didn’t suck. I thought it was going to be rather a long time before these started showing up.

Boy, was I wrong.


An IEM is an SDI Block made manifest*, but as more than merely something you can install on your premises. It would include the ability to move workloads between your local SDI block and those of both a public cloud provider and regional hosted providers.

This gives those seeking to run workloads on someone else’s IEM the choice of using a vendor legally beholden to the US of NSA, or one that operates entirely in their jurisdiction. What a magical future that would be. All the promises of the past 15 years of marketing made real.

The goal of an IEM is that it removes the requirement to ever think about your IT infrastructure beyond some rather high-level data centre architecting. Figuring out cooling and power delivery will probably take more effort than lighting up an entire private cloud that’s ready to deliver any kind of “as a Service” you require, including a full self-service portal.

Put bluntly, IEM is the data centre product of the year 2020. Storage, networking, servers, hypervisors, operating systems, applications, management and so on will all simply be features.

Today, it would be rare to find a company that goes out and buys deduplication as a product. It’s expected that this is a basic feature of modern storage. By 2020, all of modern storage – and a whole lot more – will be expected to be a basic feature of an IEM.

It is all too easy to slip back into cynicism and think about the dozen reasons this might never happen. SDI blocks would decimate internal IT teams. Entire classes of specialities would become obsolete overnight.

Hundreds – if not thousands – of IT providers that only deliver one piece of the puzzle are instantly put on life support. Heck, the US government (amongst others) might intervene to stop the creation of IEMs because it would put at risk their ability to spy on everyone, all the time.

The beginning of the end of IT as we know it

It’s easy to talk about all this in abstract. Prognosticating about the future of IT is a fool’s game, and putting actual dates to things is sheer madness. If I’m going to take the risk of being so laughably wrong in public, I should at least join Gartner so that I can milk a high salary out of promoting my humiliation, no?

Given the kind of pushback even small parts of an on-premises SDI block are getting, it is perfectly rational to think that the entire IEM would be impossible. Except that they already exist, and based on months of research, I have every reason to believe that IEMs will start shipping very, very soon… and that they won’t be an “enterprise only” affair.

Let’s start with the only early IEM that actually exists: IBM’s SoftLayer. Right at this moment, I would consider it to be a functional – albeit primitive and early days – IEM manifestation.

SoftLayer is not simply a public cloud offering. Heck, calling what IBM serves up as a “cloud” is simplifying things by a lot. IBM can remotely provision for you virtual machines (VMs), containers or bare-metal servers in a few different architectures.

IBM have a library of software that can be deployed at the push of a button, and are adding new features and functionality to cover every aspect of IT infrastructure. We’re not talking about paying lip service to concepts like data protection, but fully capable, customisable solutions supported by a rich partner ecosystem.

If you ask nicely enough, IBM will install a SoftLayer cloud on your premises. Regional providers can get a full SoftLayer installed for them so you can move your workloads over to them. Alternately – because of IBM’s deep investment in OpenStack – you can move your workloads off your local or IBM-hosted SoftLayer set-up over to regional providers running OpenStack.

IBM has evolved SoftLayer into the first IEM. We could quibble about some points, for example, that IBM still relies on legacy storage array and networking tech too much to meet the strictest definitions.

The key is that in jettisoning their own hardware-manufacturing divisions, IBM has become entirely agnostic about such things. IBM are not against hyper-convergence, SDN, NFV or any of the newer, sexier technologies. They are being integrated and will most likely become the default over the next few years, as they are becoming the default across the industry. Nothing about IBM’s culture, management software or approach makes these technologies hard to migrate to.

Of course, that IBM have an IEM is functionally irrelevant to the world at large. It’s IBM, so virtually no one will ever be able to afford it.

However, there are other players that are close. The closest to producing actual purchasable IEMs are VMware, Hewlett-Packard and Microsoft.


VMware has almost all the pieces needed to make an IEM. Indeed, if I hire the right VCDXes to build it and add in the right third-party software from VMware’s partner ecosystem, I could build an IEM out of VMware’s software. In fact, I’ve done it more than once in the past couple of months, though the cost per VM is absolutely breathtaking.

Despite having the pieces, I have absolutely zero faith that VMware will create an actual, usable IEM. Too much of it goes against VMware’s culture. VMware is far too fragmented internally for all the various groups to work together, and – perhaps the key point – the companies with the critical missing bits are terrified of working closely with VMware.

VMware doesn’t understand the importance of trust. Trust among partners and among customers. It is thrashing about like an Oracle in the making, seeing how tight it can get the vice grips it has on everyone’s genitals. This will be its undoing.

Buying an IEM from any vendor requires either that you trust that vendor implicitly, or that the vendor in question be so married to open standards that you feel you can move away from it easily. When you buy your entire IT infrastructure as variations on a single SKU, lock-in is a real concern – one VMware is institutionally incapable of understanding.


HP is another company that is busy building an IEM. Its investments in OpenStack are impressive, as is its storage, networking, management software and so forth. HP is building the pieces. It is assembling them in the correct order.

The result is that HP is building an impressive Saturn V of an IEM, to awe and amaze all who gaze upon it. Of course, it’s HP, so the company has aimed the damned thing at the ground – and it’ll fire everyone involved in its design and construction just before it actually ignites the engine.


That brings us to Microsoft. Azure could be an IEM. Microsoft does have the Azure Stack. It also has an eye-wateringly expensive baby Azure that you can buy from Dell.

Microsoft is baking a primitive form of hyper-convergence called Storage Replica into the next Windows Server, and trying hard to cut Windows Server down to a usable size. Heck, Microsoft is even adding container support to the mix.

Trust and licensing are Microsoft’s stumbling blocks here. Though Microsoft will deny it vehemently in public, the company emphatically does not want you using regional service providers or running workloads on your own infrastructure. The company’s words and actions are not aligned.

The IEM is about more than simply running your workloads on expensive public infrastructure owned by a company beholden to the US of NSA. It is about the ability to run those workloads where they are appropriate and make the most sense. The body corporate of Microsoft doesn’t agree that this is the future, though some parts keep working on it, and the marketing messaging periodically tries to convince us of this, before changing to “Azure public is the solution to everything”.

Microsoft has spent billions trying to put regional service providers and a large part of its own partner ecosystem out of business. The company has jacked up SPLA pricing to where it is essentially impossible to compete with Microsoft’s hosted Azure directly. SPLA still has completely ridiculous licensing restrictions that prevent service providers from building a truly shared infrastructure, and VARs and MSPs have margins for everything other than selling Azure public cloud services slashed to meaninglessness.

From a technological perspective, Microsoft would seem to be on track towards creating an IEM. That said, this is Microsoft. It might decide tomorrow that the future of IT is a mutant hybrid chihuahua-giraffe thing that requires all new APIs, new programming languages, a baffling new interface and you to retain Microsoft-certified licensing lawyers on staff. It’s Microsoft: consistency isn’t its strong suit.


The most interesting challenger to the existing players is Cisco. It recently borged OpenStack experts Metacloud and Pistoncloud.

Cisco has its own networking and servers. The company also boasts partnerships with required elements up and down the stack from hyper-converged vendors like Maxta and SimpliVity to every kind of management, automation and orchestration piece you could imagine.

Cisco lacks a credible public cloud offering and doesn’t have much of a hybrid story at all. These are things Cisco can buy, if it feels the IEM is more than voices in my head. Cisco’s problem isn’t getting the pieces together in time to be a real competitor. Cisco’s problem is that it’s Cisco.

Cisco has a culture of total lock-in followed by ruthless exploitation of that lock-in. This hardly makes it the company anyone wants to trust with the totality of their IT infrastructure. Moving away from that would require some pretty big cultural changes inside the firm but, for once, it may actually be possible to see that level of change.

Cisco has experienced a CEO change, and the incoming chief has presided over a purge of the executive layer. It has also seen its Invicta acquisition fail. Reports from staffers have the mood within Cisco as being somewhere between Game of Thrones and House of Cards, with collateral damage reported far and wide.

What will Cisco be when the dust settles? Nobody knows… but they could be an IEM contender.

Dell and Nutanix

The big mystery among the big names is “what is Dell up to?” If you are thinking Nutanix, don’t be absurd. Dell could never pay Nutanix as much as they’ll get from an IPO, and the only thing that matters at Nutanix right now is that massive payout. The exit of everyone’s dreams is so close that the Nutanix leadership can taste it and nothing on this earth is going to keep them from their dump trucks full of cash.

So if Dell isn’t going to build an IEM out of Nutanix, what is the company up to? I know many cynical IT types who believe Dell is up to nothing; that it is resting on its laurels and that innovation within the organisation has ceased. I am not one of those people.

Michael Dell is many things, but he’s not stupid. If I have figured out that IEMs are on the immediate horizon – and the massive disruption they will cause – then so has Dell. Not being a public operation, however, the company has no reason to reveal its plans until it’s ready to obliterate competitors in one decisive push.

That said, Nutanix has recently made some big strides of its own towards an IEM with its recent Acropolis announcement. It doesn’t have the full suite of software widgetry that Dell could bring to bear, but it may not need it.

The Dell/Nutanix partnership is perhaps the most interesting one in this entire space.

The little guys

You can be excused if you haven’t heard of Yottabyte. This is a small, public cloud provider from Detroit that decided it needed to roll its own hyper-convergence and eventually data centre convergence software. Yottabyte is by no means the only small/startup IEM contender, but I am going to use it as my example because I have worked very closely with it during the past six months.

Yottabyte doesn’t have massive venture capital funding or marketing budget. It just has a stable of loyal customers and it ploughs profit from its public cloud operations back into R&D.

The result of years of this R&D is yCenter. It can be a public cloud, a private cloud or a hybrid cloud. Yottabyte has gone through and solved almost all the tough problems first – erasure coded storage, built-in layer 2 and 3 networking extensibility, management UI that doesn’t suck – and is only now thinking about solving the easy ones.

Yottabyte is a completely random tiny company with 80 per cent of an IEM already built and is only just starting to realise what it can do with it. I’ve spent hundreds of hours with Yottabyte helping them design a hybrid cloud partner ecosystem that will not only include regional service providers, it will make discovering and utilising them easy.

Yottabyte didn’t push back and say it needed to control all aspects of the ecosystem. It didn’t come back demanding a means to limit the service providers’ role, own every element of the customer relationship or otherwise marginalise everyone except itself.

If all goes well, by the end of next year Yottabyte will have a prototype IEM. What Yottabyte has got is already pretty impressive, and adding the last few elements honestly shouldn’t be that hard. The hard part is done, now it’s just a matter of getting some venture capital and going forth and marketing the thing.

I bring Yottabyte up as an example mostly because it’s a textbook perfect case of “buy or bury” in the IEM space. If Yottabyte ever gets to C-round funding, a whole lot of executives at a whole lot of companies will have a great deal of explaining to do.

Yottabyte isn’t the only example. I can’t talk about most of the others because they’re in stealth, but some have emerged organically. Maxta and Mirantis, for example, have a partnership that can deliver a hyper-converged OpenStack solution as an appliance. It’s not exactly an IEM, but it could be the start of one.

SimpliVity just announced KVM support, and has OpenStack integration for both its KVM and VMware-based solutions. I could keep going up and down the line and list a bunch of companies or company combinations that deliver the building blocks of an IEM, but I suspect you get the picture by now.

The shape of things to come

There are startups that two years ago I would have said were novel because they sold a product, not a feature. They integrated multiple features – that themselves were products not too long ago – into a single item and sold it at a lower price. Those startups haven’t even finished their lifecycle and already I can see quite clearly that they are merely features of an IEM.

IEMs won’t completely eliminate systems administrators, but they will completely transform the data centre. Gone from most organisations will be dedicated infrastructure specialities like storage admin, virtualisation admin and network admin. A general operations – or more likely, DevOps – staff will take their place in all but the largest organisations.

Perhaps the best analogue is physical building infrastructure. Most companies don’t have a plumber or a roofer on staff. They have a general utilities body – a “handyman” – that fixes what they can, and calls in specialists for what they can’t. Larger organisations – a university campus for example – absolutely will keep some specialities to hand. A plumber is probably a safe bet, even if the roofer is unlikely.

By the same token, network admins will be the infrastructure positions hardest to fully eliminate. Networking is the bit that has to interact with the outside world, and the networking world’s abject inability to play nicely with standards means that in any large organisation, someone has to babysit the thing.

As I see it, the data centre container vision of the deceased Sun Microsystems is – a decade later – starting to be realised. Instead of needing to consume entire containers, however, we’re on the cusp of being able to provide a full suite of data centre services in a 2U Twin form factor.

IEMs are bringing the entry point to full data centre services down to something that – by 2020 – 50-seat companies should be able to afford to deploy on premises, with full and simple access to a wide array of hybrid cloud services. They already scale up and scale out simply and easily.

Almost everyone reading this has some sort of vested interest in preserving the status quo. That status quo is, after all, what is currently paying our mortgages. Resistance to IEMs will be enormous. Despite this, how many of us can afford to gamble that I’m wrong?

If you’re a systems administrator, can you afford not to gain development or at least DevOps skills beyond your infrastructure roots? If you’re a vendor and you don’t have an IEM solution, and I’m right that IEMs are about to be a very big thing, then you’re probably already dead and you just don’t know it yet.

The future of the data centre is about more than performance results and benchmarks. It’s about more than individual features or varying measures of density. More than anything else, the success or failure of the IEM concept will boil down to trust. Who would you trust to deliver you an entire data centre’s worth of services in what amounts to a black box?

How many intercompatible vendors would you require before you bought into that concept? And could we ever trust the vendors involved not to turn the screws once we’d bought in? If IEMs are – as I believe them to be – inevitable, then a new system of checks and balances will need to be developed. The delicate balance between the power vendors, customers and regulators will need to be very carefully managed. I hope we’re all up to the task.


* I could have gone on using the SDI nomenclature, but various industry marketing hooligans decided to misappropriate the term to refer to their pathetically simplistic legacy converged or hyper-converged infrastructure solutions. Infrastructure Endgame Machine is a lovely bit of hyperbole that – I hope – no marketing wonk will try to steal unless they’ve actually built one. The internet being what it is, calling what you’ve made an Endgame Machine will attract every vicious piranha on the planet to tear you apart, so if you use the IEM terminology you’d better be ready to deliver.


About the Summit The Intel® Solutions Summit (ISS) is Intel’s largest and most prestigious channel event. A key opportunity for Intel’s Platinum partners, it connects Intel Technology Providers with the latest technologies, market opportunities, and Intel products. A unique networking and collaboration opportunity, ISS provides partners the chance to meet face to face with Intel executives and important industry players LIKE YOU. Connections made at ISS lead to new business relationships, growing a company’s network and bottom line.


Hyatt Regency Dallas

300 Reunion Blvd, Dallas, TX 75207, United States


May 5 @ 8:00 am
May 7 @ 5:00 pm 
