What is hyperconvergence, or HCI, or dHCI today? Why it’s all worth knowing

Photo of the convergence of the Indus and Zanskar Rivers in India by Sundeep Bhardwaj, licensed under Creative Commons 3.0

What is hyperconvergence?

Of the multitude of definitions of hyperconvergence that vendors have tossed around over the past decade, here is one that should cover all the bases: A data center that employs hyperconvergence (HCI) enables workloads (software) to be deployed, hosted, and managed using hardware designed to scale and adjust for those workloads' varying requirements, along with the data center's own changing operating circumstances. The needs of the software are addressed by all the hardware in the facility, or in the hyperconverged cluster, acting collectively.

HCI is hardware

The key difference here is the hardware. There is a multitude of workload deployment and orchestration systems in data centers today. You're familiar with Kubernetes. You may also be familiar with the most prominent branded versions of Kubernetes, such as VMware's Tanzu, HPE's Ezmeral, and Red Hat's OpenShift. All of them enable new classes of containerized workloads to be developed, tested, deployed, and managed in fully orchestrated environments, with substantial amounts of automation. And Kubernetes is promoted by champions who have publicly argued that the orchestrator fulfills the fundamental objectives of hyperconvergence, rendering HCI support in the hardware unnecessary and even obsolete.

But Kubernetes is not baked into hardware — at least, not yet. By everyone's definition, HCI is hard-wired into servers. If HCI is any one thing, it is this: the hard-wiring of servers' control planes in hardware. The crux of the HCI value proposition at present is that having control in hardware expedites processes and accelerates productivity.

Typically, the relocation of control over the network, storage, management, and security from software to hardware should result in faster processes with lower latency and broader access to system resources.

So what’s all this talk about ‘HCI software’?

However, there are plenty of major-market software components available today, particularly for networking, that advertise themselves as hyperconverged, as part of HCI, or as a unit of their maker’s HCI platforms. The confusion begins with the introduction of the concept of the software-defined data center (SDDC). HCI, vendors say, enables SDDC.

They are correct about that part, if you accept the broader definition of “software” as encompassing anything digital rather than physical — more specifically, configuration code. SDDC enables operators to specify the configurations of systems using source code. In other words, they can program the assembly of components and the provisioning of resources. In the sense that any program is software, then this is an accurate explanation of “HCI software.” HCI platforms actually produce the code that SDDC uses to configure data centers, on these operators’ behalf. They determine the requirements of servers, and the placement and availability of resources, and make the necessary adjustments. In that respect, HCI borrows the purpose and some of the functionality of SDDC, while reassigning much of the burden of control to automation.
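To make the configuration-as-code idea concrete, here is a minimal sketch in Python. The cluster name, workload fields, and provision() function are entirely hypothetical, invented for illustration rather than taken from any vendor's API: an operator declares a desired state, and an automation layer (here, a stub that only prints) translates it into provisioning actions.

```python
# Hypothetical sketch of SDDC-style configuration as code: declare the desired
# state, let an automation layer turn it into provisioning actions. Every name
# and field here is made up for illustration.

desired_state = {
    "cluster": "edge-01",
    "workloads": [
        {"name": "billing-api", "vcpus": 4, "memory_gb": 16, "storage_gb": 200},
        {"name": "analytics",   "vcpus": 8, "memory_gb": 64, "storage_gb": 1024},
    ],
}

def provision(state):
    """Walk the declared state and emit the calls a management plane
    would make on the operator's behalf (stubbed out as print statements)."""
    for wl in state["workloads"]:
        print(f"Provisioning {wl['name']} on {state['cluster']}: "
              f"{wl['vcpus']} vCPUs, {wl['memory_gb']} GB RAM, "
              f"{wl['storage_gb']} GB storage")

provision(desired_state)
```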

Yet in the end, what this configuration accomplishes is the delivery of instructions to the HCI hardware. At a fundamental level, HCI is hardware.

There are plenty of ongoing arguments from well-meaning engineers who will wager their remaining teeth and hair to advance the premise that HCI is not software. And yet you will still find so-called "HCI software." What matters far more than which side is right is whether that software offers genuine benefits to your data center (there's a good chance it can), and, ironically but just as importantly, whether one vendor's HCI software can co-exist on the same platform as another vendor's HCI hardware (there's an above-zero chance it won't).

For the sake of this article, data center infrastructure does indeed comprise both software and hardware. Yet HCI is rooted in hardware.

Each manufacturer of HCI components has engineered them to be answerable to a centralized management system. That system can place workloads where they can be run most effectively, make storage and memory accessible to those workloads, and — if the system is clever enough — alter workloads’ view of the network so that distributed resources are not only easier to address, but more secure.
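As a rough illustration of that placement decision, here is a toy sketch in Python. It is not any vendor's scheduler; the node names, capacities, and the "most headroom wins" rule are all assumptions made purely for the example.

```python
from typing import Optional

# Made-up inventory of free capacity per node.
nodes = {
    "node-a": {"free_vcpus": 12, "free_memory_gb": 96},
    "node-b": {"free_vcpus": 4,  "free_memory_gb": 32},
    "node-c": {"free_vcpus": 20, "free_memory_gb": 64},
}

def place(workload_vcpus: int, workload_memory_gb: int) -> Optional[str]:
    """Return the node that fits the workload and keeps the most CPU headroom."""
    candidates = [
        (name, caps["free_vcpus"] - workload_vcpus)
        for name, caps in nodes.items()
        if caps["free_vcpus"] >= workload_vcpus
        and caps["free_memory_gb"] >= workload_memory_gb
    ]
    if not candidates:
        return None  # a real system would rebalance or scale out instead
    return max(candidates, key=lambda c: c[1])[0]

print(place(8, 48))  # -> node-c
```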

All of that assumes everything works as advertised. Since its inception, hyperconvergence has been something of an ideal state. That ideal has always been at least one step ahead, often more, of its actual implementation. In earlier incarnations of this article, we spent several paragraphs explaining what exactly got "converged" in hyperconvergence. In HCI's present form, the issue is largely irrelevant. In fact, from here on out, we're only going to refer to it as "HCI," placing it in the company of abbreviations like "ITT," "NCR," and "AT&T" that no longer stand for their original designations.

Some folks perceive HCI today as a means of artificially distinguishing one manufacturer’s line of enterprise servers from those of another. This is an argument you will continue to see from the makers of workload orchestration systems, many of whom are part of the open source movement. This article will avoid evaluating the virtues of both sides’ arguments. Instead, it will present an examination of HCI in its present state, and leave any qualitative judgments to historians.

The various layers of HCI

Because vendors in the HCI market space pursue the end goals of the technology to varying degrees, it's important to define HCI beginning at a foundational level, and then proceed into deeper levels where some systems may not choose to wade. Each deeper level may pertain less and less to certain market participants whose intellectual investments in HCI may not run as deep as others'.

  1. Fundamentally, HCI is implemented by way of embedding a hypervisor (the component responsible for hosting and managing virtual machines, or VMs) in hardware. An HCI-capable server fulfills the role of a hypervisor in a virtualization suite. It is built to stage workloads, run VMs, and migrate them elsewhere in the network as necessary.
  2. Very basically, HCI builds the environment that a VM workload needs out of data center resources that are pooled together as commodities (thus, "convergence"). Processor power and storage capacity are the two easiest commodities to pool. HCI can provision the space and power a given workload requires at any given time.
  3. Until very recently, HCI platforms also typically included integrated components for software-defined storage (SDS). At its root, SDS is a mechanism for pooling storage resources from an array into a single logical unit, addressed by a logical unit number (LUN); a conceptual sketch of this pooling appears after this list. If SDS does its job in an HCI environment, it completely substitutes for a storage-area network (SAN). Many vendors incorporated SDS into their HCI value propositions, though over time they began offering SDS separately as a solution unto itself. Now there are active arguments to the effect that SDS has separated itself from HCI and is its own self-contained market. Dell EMC's recent "reimagining" of VxRail, its hardware-based HCI platform, makes the split official, enabling the SDS component to be determined by the customer.
  4. HCI’s emphasis from a systems management perspective is on a single front-end console from which the entire data center may be administered. You’ve seen Web sites that are actually three or four publications (or 16 or 17) that are all linked together by a single front page. So it should come as no shock to anyone that a plethora of management services can all be gathered into one bunch and made selectable from one menu. HCI management tools that are genuinely designed to work together, as opposed to being individually chosen from one list, collectively produce one meaningful representation of the entire system, as a single console or “dashboard.” There is a separate market for data center infrastructure management (DCIM) software, which HCI would seek to render irrelevant.
  5. Some vendors’ HCI platforms incorporate fabrics, which are directly interfaced components at a deeper level than an addressable network. A fabric, for example, may link multiple storage devices throughout a data center, making them addressable as a collective volume.
  6. Modern workloads are being refactored for containerization (begun by Docker, extended by Kubernetes). This is a class of virtualization that focuses on virtualizing the workload so the operating system can host it, as opposed to virtualizing the operating system so it can support a workload. At its root, HCI uses a hardware-level hypervisor. The benefits of deploying one virtualization layer at the hypervisor level to support another virtualization layer at the workload level are still being debated. Its proponents, however, advance this compelling theory: The hypervisor provides security at a layer beneath the operating system, so that the Kubernetes environment is protected from attacks on the kernel. The hypervisor, in turn, is further protected in HCI than it would be in a typical software-driven virtualization system, because it is rooted in hardware. So some HCI platforms are being advanced as containerization platforms, with this specific premise.
  7. In recent years, HCI systems have worked to retrofit themselves with the capability to deploy workloads to the public cloud. This is often described as “cloud integration,” but whether it’s actually integrated depends on whether the data center’s virtual infrastructure can be extended to the cloud infrastructure. In the case of VMware’s part of the VxRail package, it can. Here, space and resources provided by the public cloud are made available to the HCI platform for provisioning.
  8. Some HCI platforms continue to manage virtual desktop infrastructure (VDI), which consists essentially of entire server and operating system images, complete with installed applications, pressed onto a "golden master" virtual machine. As applications move away from Windows/DOS-style architecture and toward network distribution and browser-based functionality, traditional VDI plays less and less of a role in modern data centers. Some VDI vendors are skillfully migrating the concept of VDI to architectures where virtualized apps appear to their users on standardized desktops, but that concept bears about as much resemblance to original VDI as modern HCI does to the original hyperconvergence.
  9. Memory is trickier to pool together, since electronic memories are bound to the system address buses of their respective servers. (Some HCI systems that purport to include memory in their commodities list technically don't pull off this feat.) Networking bandwidth is treated as a commodity, but all virtualization platforms level the playing field for clustered servers so workloads can be distributed using their own network overlays.
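As promised in item 3, here is a rough conceptual sketch in Python of what SDS-style pooling accomplishes. The device names and capacities are invented, and real SDS layers handle striping, redundancy, and failure, none of which is modeled here; the point is only that workloads see one logical volume rather than individual disks.

```python
# Conceptual sketch only: pool several physical devices into one logical volume
# and allocate from it, so the consumer never addresses an individual disk.

physical_devices = {"disk-0": 4000, "disk-1": 4000, "disk-2": 8000}  # capacity in GB

class LogicalVolume:
    """One pooled volume carved out of many physical devices."""
    def __init__(self, devices):
        self.capacity_gb = sum(devices.values())
        self.allocated_gb = 0

    def allocate(self, size_gb):
        """Reserve capacity for a workload; which disk backs it is invisible here."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            return False
        self.allocated_gb += size_gb
        return True

pool = LogicalVolume(physical_devices)
print(pool.capacity_gb)     # 16000 -- three disks presented as one volume
print(pool.allocate(2500))  # True  -- the workload sees capacity, not hardware
```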

What does the “d” in “dHCI” stand for?

There are three possibilities, depending upon whom you ask and when:

  1. Disaggregated. One of the newer trends in data center server architecture, whose end products we’re just now beginning to see, is disaggregation — the decoupling of server components, especially processors, from a locked-down system bus. Last year, inspired by analysts at IDC, HPE began referring to “disaggregated HCI” or dHCI as an architecture where compute and storage (and perhaps, down the road, network bandwidth and memory as well) are delivered in separate boxes that are attached by way of a network fabric, in a rack-scale layout.
  2. Distributed. In recent months, however, some at HPE have referred to distributed HCI as dHCI, defining it not using IDC’s framework but a completely different scheme based either upon:
    a. A concept HPE introduced earlier in 2020, which leveraged VMware's NSX virtual network infrastructure to flatten the appearance of the network from the vantage point of applications;
    b. A more recent framework for HPE's Nimble Storage model in which servers pre-designated for compute workloads (as opposed to database lookups) are compartmentalized, and the decoupled storage array utilizes a highly distributed file system, similar to the type created for so-called "Big Data" workloads.
  3. Decomposed — an unfortunate choice of words, but one that has shown up in more than one analyst report. It's used either interchangeably with "disaggregated," or to refer to a concept bandied about in 2017 by a company called Datrium (which has since been acquired by VMware) that suggested semi-autonomous functionality for HCI storage arrays, enabling them to reclaim unused capacity without direct oversight by the HCI management system.

Whatever you decide the “d” may stand for in your data center, it’s hard to ignore that these choices all share a common theme: compartmentalization, separation, isolation, autonomy. Indeed, “d” may actually stand for the “unraveling” of hyperconvergence.

Who produces HCI hardware?

My ZDNet colleague Chris Preimesberger produced a detailed examination of the six leading vendors in the global HCI market, weighing the pros and cons of their respective products. Here is what you need to know about the architectural and implementation choices currently being made by the leading vendors in the HCI space:

Nutanix

Image: Nutanix' Acropolis Distributed Storage Fabric model for HCI. (Credit: Nutanix)

The Nutanix model, presently called AOS, is based on two components: a distributed storage fabric with a managing hypervisor, both called Acropolis; and a distributed management plane called Prism. At the center of the Acropolis fabric is a single class of appliance, simply called the node, which assumes the role conventionally played by the server. Although ideally a node would provide a multitude of commodities, in an online publication it has dubbed the Nutanix Bible, Nutanix itself admits that its model natively combines just the main two: compute and storage.

One principal point of contention among HCI vendors is whether a truly converged infrastructure should incorporate a data center’s existing storage array, or replace it altogether. Nutanix argues in favor of the more liberal approach: instituting what it calls a distributed storage fabric (DSF). With this approach, one of the VMs running within each of its HCI nodes is a controller VM dedicated to storage management. Collectively, they oversee a virtualized storage pool that incorporates existing, conventional hard drives with flash memory arrays. Within this pool, Nutanix implements its own fault-resistance, reliability checks, and so-called tunable redundancies that ensure at least two valid copies of data are always accessible.
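To illustrate the "at least two valid copies" idea in the abstract, here is a short Python sketch. It is emphatically not Nutanix's implementation; the node names, the round-robin placement, and the replication factor of 2 are assumptions chosen only to show what tunable redundancy means.

```python
from itertools import cycle

REPLICATION_FACTOR = 2          # "tunable": raise it for more copies per block
nodes = ["node-a", "node-b", "node-c", "node-d"]
placement_ring = cycle(nodes)   # naive round-robin stand-in for real placement logic

def write_block(block_id: str) -> list:
    """Pick REPLICATION_FACTOR distinct nodes to hold copies of this block."""
    replicas = []
    while len(replicas) < REPLICATION_FACTOR:
        node = next(placement_ring)
        if node not in replicas:
            replicas.append(node)
    return replicas

print(write_block("blk-001"))  # ['node-a', 'node-b']
print(write_block("blk-002"))  # ['node-c', 'node-d'] -- one node can fail without losing data
```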

Prism is the company’s HCI management system. In recent months, it has become a three-tier cluster of services, licensable on a per-node basis. The basic tier performs oversight and hypervisor management services, while the “Pro” level adds capacity planning, adjustment, and automated remediation (arguably what hyperconvergence was originally about), and the “Ultimate” tier adds machine learning-oriented performance tuning.

Nutanix partners with server maker Lenovo to produce jointly-branded HCI platforms.

Dell EMC / VMware / VxRail

In May 2021, Dell EMC, in collaboration with sister company VMware, began the latest round of what this group described as "re-imagining HCI." Consigning much of the team's old models to history, Dell EMC brought forth what its VP of product management for HCI and other categories, Shannon Champion, called "a series of integrated, value-added components built on top of the VMware software, as well as PowerEdge [servers], that enables automation, orchestration, and lifecycle management."

VxRail is the brand for the group’s HCI servers, which are Dell PowerEdge server models modified with HCI hardware. For these servers, VxRail has introduced a concept the group calls dynamic nodes. It’s perhaps a more exciting name than “storage-less server.” Think of a processor bus designed to connect to storage only through network fabric, rather than an expansion interface.

Image: Dell EMC's new presentation of VxRail as a hardware extension of VMware virtualization. (Credit: Dell Technologies)

The VxRail group's previous iteration of HCI relied heavily upon a built-in layer of abstraction that utilized software-defined storage (SDS) to connect to the Dell EMC storage arrays that have been the pillar of the EMC brand since its inception. With the current iteration, astonishingly, that reliance upon SDS was part of what was tossed, replaced essentially by the customer's choice. One option is a contribution from VMware called HCI Mesh, which is its method of bringing existing SAN arrays into the HCI scheme (as the diagram above indicates) by bridging virtual SAN (vSAN) clusters across networks.

Cisco HyperFlex

Cisco’s HCI model, called HyperFlex (HX), deploys a controller VM on each node, but in such a way that it maintains a persistent connection with VMware’s ESXi hypervisor on the physical layer. Here, Cisco emphasizes not only the gains that come from strategic networking but the opportunities for inserting layers of abstraction that eliminate the dependencies that bind components together and restrict how they interoperate.

This way, as of HyperFlex 3.0, data centers may incorporate a variety of abstract storage and data constructs, including one of the most recent permutations, put forth by the Kubernetes orchestrator: persistent volumes. In a Kubernetes distributed system, individual pieces of code called microservices may be scaled up or down in accordance with demand, and that scaling down literally means chunks of code can wink out of existence when not in use. For the data and databases to survive these minor catastrophes, developers created persistent volumes — which are not really new data constructs at all. Rather, they're generated by way of layers of abstraction, extending connections to storage volumes to the HCI environment without having to share the details or schematics of those volumes.
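For readers who want to see what requesting one of these looks like in practice, here is a brief sketch using the official Kubernetes Python client. It assumes a reachable cluster and a local kubeconfig; the namespace, claim name, and storage class ("example-hci-storage") are placeholders rather than anything a particular HCI vendor ships. The claim states what the workload needs, and whatever storage layer sits underneath (an HCI platform's SDS, a bridged SAN, or something else) decides what actually backs it.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at a reachable cluster

# A persistent volume claim expressed as a plain manifest: the workload asks for
# 20 GiB of ReadWriteOnce storage and names a (hypothetical) storage class.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "20Gi"}},
        "storageClassName": "example-hci-storage",  # placeholder class name
    },
}

# The claim outlives any container that mounts it, which is the whole point of
# persistent volumes in a system where microservices come and go.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```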

Image: The working relationship between Intersight and HyperFlex HXAP. (Credit: Cisco)

Containerized workloads are managed within HX by way of an integrated container environment Cisco unveiled in Q3 2020, called the HyperFlex Application Platform (HXAP). This is a branded Kubernetes environment, capable of staging and orchestrating containerized applications, all contained within a virtual machine that is itself managed by Cisco’s hypervisor.

Although HyperFlex does utilize VMware’s hypervisor, Cisco continues to tie its HCI control together with its Intersight management platform. According to Cisco, the scale of the distribution of a Kubernetes application managed with Intersight extends to the furthest reach of the customer’s HyperFlex network, which includes nodes in edge locations. This has been a problem for hypervisor-managed Kubernetes platforms in the past, which were limited to the scope of a single hypervisor — usually one virtual server.

HPE dHCI

In 2017, HPE acquired an HCI equipment provider called SimpliVity, which at the time could integrate not only with HPE/HP servers, but also with servers from Dell, Cisco, Huawei, and Lenovo. SimpliVity concentrated mainly on scaling data storage in accordance with workload requirements, and it gave HPE more of a concrete strategy for battling Nutanix and Dell.

During the 2021 HPE Discover virtual conference, it was clear that the company had decided to begin downplaying SimpliVity, in favor of what its engineers, at this show, at that time, were calling “disaggregated HCI.” By this, they were referring to a storage array which scaled independently of the compute array (although at the time, compute capacity was still being provided by ProLiant servers, rather than converged compute boxes). This array is provided by way of a class of pool-ready data storage boxes called Nimble Storage.

Image: HPE's most recent minimum requirements list for dHCI with Nimble Storage. (Credit: Hewlett Packard Enterprise)

With dHCI, gone is the need for an HPE-branded management system. In its place is VMware vSphere and vCenter, where the configurability of dHCI servers appears as a plug-in. Since HPE’s Aruba Networks unit had already integrated its network automation functionality into VMware vSphere, this enables HPE dHCI to bring network automation into its present-day HCI portfolio, without having to reinvent the wheel a fourth or fifth time.

The one tie that binds the dHCI package together, as the diagram shows, is HPE's InfoSight monitoring and predictive analytics platform. This provides performance tuning and anomaly detection, which was also a component of SimpliVity. Here we see a probable preview of coming attractions for HCI as a product category, with reduced emphasis on the "hyper-" part; a clear compartmentalization of networking from storage from computing; a greater reliance upon the underlying virtualization platform (as well as its producer); and a more minimal wrapper around the components to provide some semblance of unity.

Didn’t there used to be someone else?

NetApp did enter the HCI hardware market in 2017, with the intent to produce entry-level appliances that could appeal to small and medium enterprises. However, last March that company decided to exit this market, giving its existing customers one year to transition to its Kubernetes-based Astra platform. In a spectacularly candid admission, NetApp engineers publicly expressed their opinion that HCI was the wrong direction for infrastructure evolution. They cited the multiplicity of options there, which they characterized as arbitrary, as opposed to a single, clearer channel for Kubernetes orchestration. In so doing, they implied that by introducing more and more architectural distinctions between HCI platforms, the remaining vendors there were essentially stratifying the market, just so they could claim small competitive advantages over each other.
