The data center has been in transition for the past several years, and it continues to shift as requirements change and technologies mature. To meet challenges like cost reduction, greater efficiency, and improved security, data centers have moved from largely hardware-based, labor-intensive operations to ones that incorporate at least some level of virtualization.
For many organizations, the first step has been virtualizing servers, storage, networking, or all three. The next step, in many cases, was moving some portion of the data center to converged infrastructure, where servers, networking and storage functions are preconfigured and ready to use.
All of this was a necessary foundation for what comes next—more software and less physical infrastructure. Both hyperconverged infrastructure and the full software-defined data center approach use software to control, manage and provision, but there are differences.
A hyperconverged infrastructure integrates compute, networking and storage functions into one box. Each function can still operate individually, although all share the same physical structure; combining them lets the “data center in a box” pool and share resources more effectively. Depending on how it is built, it can also include other components such as data compression, capacity optimization, auto-tiering and WAN optimization, with everything managed by software. Hyperconverged systems are often deployed in remote or branch offices, and are seen as an optimal path to the cloud.
There is a bit of overlap between the hyperconverged infrastructure and the software-defined data center. Both, for example, offer at least some degree of centralized management and configuration. Both provide high levels of virtualization and allow for pooling and sharing of resources. Both also provide server and network consolidation, and support cloud computing. Both help optimize storage and improve business agility to some degree.
While a hyperconverged infrastructure is often part of a full software-defined data center, other features are required to create a full SDDC. The main difference is the SDDC’s extra layer: an orchestration layer that includes policy-driven automation and management. This layer allows for more proactive monitoring, automated policy management and automated provisioning, as well as more effective capacity planning.
This higher degree of management allows the different resources of the data center to be configured differently based on demand. It also means that the data center can be managed with fewer tools than in the past, reducing complexity and saving money. Finally, it allows for more software-based services like firewalls and load balancing, reducing the amount of dedicated hardware appliances.
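At its core, the policy-driven automation described above amounts to an orchestration layer that compares resource metrics against defined thresholds and triggers provisioning actions when demand crosses them. The sketch below illustrates that idea in highly simplified form; all names, thresholds and actions are illustrative assumptions, not a real orchestration product's API.

```python
# Minimal sketch of policy-driven provisioning in an SDDC orchestration
# layer. Policies, metric names and actions here are hypothetical.

from dataclasses import dataclass


@dataclass
class Policy:
    resource: str           # resource pool to watch, e.g. "compute"
    max_utilization: float  # provision more capacity above this fraction
    action: str             # illustrative action name, e.g. "provision_vm"


def evaluate(policies, metrics):
    """Return the actions the policy engine would trigger for current metrics."""
    actions = []
    for p in policies:
        utilization = metrics.get(p.resource, 0.0)
        if utilization > p.max_utilization:
            actions.append((p.resource, p.action))
    return actions


policies = [
    Policy("compute", 0.80, "provision_vm"),
    Policy("storage", 0.90, "expand_pool"),
]
metrics = {"compute": 0.85, "storage": 0.60}

print(evaluate(policies, metrics))  # [('compute', 'provision_vm')]
```

A real SDDC controller layers scheduling, approval workflows and capacity forecasting on top of this kind of threshold check, but the basic loop of "measure, compare against policy, act" is what distinguishes an orchestrated SDDC from a merely converged one.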
While the evolution is still underway, all signs point to a move in the direction of the software-defined data center. According to Gartner, three-quarters of enterprises will use some form of software-defined data center by 2020.
Guest blogs such as this one are published monthly and are part of The VAR Guy's annual platinum sponsorship.