Every datacenter is different, even if only because the construction of the actual building(s) differs from site to site. While physical layouts, equipment and vendor selections change, the purpose of a datacenter always remains the same: to run digital workloads. Given the intense amount of marketing surrounding datacenter equipment, this can sometimes be the difficult bit to remember: datacenters exist to run workloads, nothing more.
As layers of abstraction come into play and datacenters become entirely virtual, the purpose of a datacenter remains constant. A virtual machine is a wrapper for the environment in which the OS and application live that can be moved from physical host to physical host. A virtual datacenter is a wrapper for the environment in which virtual machines and virtual networks operate that can be moved from cluster to cluster (or datacenter to datacenter), and nothing more.
Despite the finality of the above statement, virtual machines changed IT practices around the world. They freed operations teams from a significant amount of drudgery related to moving applications between dissimilar hardware, made high availability and fault tolerance affordable enough to be within reach of businesses of all sizes, and allowed software updates to be decoupled from hardware updates, changing the financial dynamics of the entire IT industry.
We at Yottabyte believe the adoption of the virtual datacenter model marks a similarly significant change for IT operations. This is why the virtual datacenter is at the core of yCenter. The ability to wrap up every configuration, rule, permission, network and virtual machine in a datacenter and move the whole lot around as needed isn’t an add-on or a separate product. It’s a basic, table stakes feature of modern IT infrastructure.
Virtualization allowed operations staff to drive the utilization of individual servers much higher. No longer were administrators faced with the choice between devoting a piece of hardware to a single workload (highly inefficient), or trying to integrate multiple workloads into a single environment (a practice which usually ended in disaster).
With virtualization, individual workloads could have their own operating system with its own configuration. One workload per environment, and you could run multiple environments on a single piece of hardware. Efficiency increased without compromising the security, stability or ease of use of individual workloads.
What’s important to note is that the environment around individual workloads got smaller. Administrators stopped trying to put multiple workloads in a single environment. This meant that they stopped having to configure environments to support multiple workloads, and each environment was modified from its default settings only as much as was needed to get the job done.
Virtual datacenters offer a similar sense of compartmentalization. Once you can virtualize datacenters you can stop thinking of datacenters as collections of hundreds or thousands of workloads and start thinking of them as wrappers around interconnected workloads.
In the days before virtualization, an administrator might have put all the individual workloads for a given service on a single system. For a website this might include a database, a web server, a file server and some security applications. Today, a single website could consist of a dozen virtual machines, each of little use on its own, but which together form that same single service.
With virtual datacenters you can wrap each service’s virtual machines together so that the service can be worked on by a given individual or department, cloned as a unit (for DR or QA or…), migrated as a unit and so forth. Alternately, you can bundle similar kinds of workloads into a single datacenter, splitting up administration based on workload type rather than service.
The choice is yours, but the choice is important. Even a modest physical datacenter today can run millions of individual workloads. The ability to chop up that physical datacenter into smaller groupings is necessary if we are to make any sense of it, or be able to administer it securely.
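The grouping described above can be sketched in a few lines of Python. This is purely illustrative — the class and field names here are hypothetical and are not yCenter APIs — but it shows the core idea: a virtual datacenter is a wrapper around a service’s machines and networks that can be cloned or moved as one unit.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    role: str  # e.g. "web", "db", "file", "security"

@dataclass
class VirtualDatacenter:
    """Illustrative model: a wrapper holding one service's VMs and
    networks, so the whole lot can be cloned or migrated together."""
    name: str
    vms: list = field(default_factory=list)
    networks: list = field(default_factory=list)

    def clone(self, new_name: str) -> "VirtualDatacenter":
        # Deep-copy everything so the clone is fully independent
        # of the original (e.g. a DR or QA copy).
        twin = copy.deepcopy(self)
        twin.name = new_name
        return twin

# Wrap one service's interconnected workloads into a single unit...
site = VirtualDatacenter(
    name="website-prod",
    vms=[VirtualMachine("web-1", "web"), VirtualMachine("db-1", "db")],
    networks=["frontend-net", "backend-net"],
)

# ...then clone the entire unit, rather than each piece one at a time.
dr_copy = site.clone("website-dr")
```

The point of the sketch is the unit of operation: `clone` acts on the whole wrapper, carrying every VM and network with it, which is what distinguishes this from managing thousands of machines individually.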
Even if, for example, you trust your datacenter administrator to have personal control over the millions of workloads running in a physical datacenter, can you trust a single login with that kind of power? If a piece of malware were to get hold of the superuser credentials to a datacenter of that scale, the results could be catastrophic.
Virtual datacenters are more than theoretical groupings that preserve an administrator’s sanity. They are more than simple compartmentalizations to wrap workloads together. They are the new normal: a means to segment and segregate access and administration for security purposes as much as for utility and ease of use.
Yottabyte believes in the importance of the virtual datacenter to the future of IT administration. We’ve built it into yCenter and yCenter is at the core of everything we offer. From our public cloud services to all of our products, the virtual datacenter isn’t tomorrow’s technology, or some difficult to configure and expensive add-on. It is a fundamental feature of today’s datacenter.
Join us @YottabyteLLC, @greg_m_campbell & @dduanetursi at Intel’s Cloud Day 2016 (@IntelITCenter | #IntelCloudDay | #NowPossible) and learn how to bring yCenter to your datacenter. With Yottabyte, you can be ready today for the challenges of tomorrow.