Gartner: How to turn old datacentres into critical IT assets
Armed with the right approach, legacy datacentre infrastructure can be reinvented to increase capacity, support new and emerging business services and reduce operating costs.
As part of this approach, organisations with workloads remaining in their datacentres must decide how best to restructure their physical infrastructure to improve efficiency and extend the datacentre’s useful life.
Most IT infrastructure and operations (I&O) leaders are focused on cloud migrations, edge strategies and moving workloads closer to the customer. But they need to remember that a core set of workloads may remain on-premise. Although continued investment in an older, more traditional datacentre may seem counterintuitive, it can yield significant benefits for both short- and long-term planning.
There are three key ways that I&O leaders can optimise existing datacentres to support new and emerging business services.
Enhancing delivery
In datacentres that are nearing operational capacity, the main limitation is a lack of physical space and power to support additional equipment or adequate cooling infrastructure. This leads companies either to build a new, next-generation datacentre to support longer-term growth, or to turn to colocation, cloud or hosting services.
Although these are viable options, they each entail moving workloads away from the traditional on-premise operation. This introduces risk and adds complexity to the operating environment. A possible alternative for long-term upgrades of existing datacentres is to use self-contained rack systems.
These manufactured enclosures contain a group of racks designed to support medium to high compute densities, often integrating their own cooling mechanisms. Retrofitting a floor with these self-cooling, high-density rack systems can be a simple and effective way to make better use of datacentre space.
Maximise space
Clearing out a small section of floorspace for one of these self-contained units is the least intrusive retrofit technique. You can then break the floorspace into discrete sections and reconfigure the layout section by section.
Self-contained rack units typically draw power from an existing power distribution unit and, in some cases, may also require a refrigerant or coolant distribution unit. Assume an increase in per-rack floorspace of about 20% to account for this additional supporting equipment.
Because in-rack cooling systems are self-contained, they don’t require a hot-aisle/cold-aisle configuration or containment. This will enable more flexibility in the placement of the new racks on the datacentre floor.
Once the unit is installed, begin a phased migration of workloads from other sections of the floor. This is not a one-for-one migration, because these rack units support higher cooling densities. An existing datacentre will often utilise only 50-60% of rack capacity on average, because packing racks any more densely would create hot spots on the floor.
With the new contained racks, the amount of workload that can be migrated into each rack is often 40-50% greater. For example, a new, self-contained four-rack unit might absorb the workloads of between six and eight racks on the existing floor.
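The arithmetic behind that example can be made explicit. Below is a minimal back-of-the-envelope sketch in Python; the 55% and 85% utilisation figures are illustrative assumptions drawn from the ranges above, not measured values.

```python
# Rough consolidation estimate. Assumes the new contained racks have a similar
# nominal capacity to the legacy racks, but can be driven much closer to full
# utilisation because their cooling is handled in-rack (no hot-spot limit).

legacy_utilisation = 0.55     # existing floors average 50-60% of rack capacity
contained_utilisation = 0.85  # assumption: contained racks can run near capacity

ratio = contained_utilisation / legacy_utilisation
print(f"Each contained rack absorbs ~{ratio:.2f} legacy racks of live load")

new_racks = 4
print(f"A {new_racks}-rack unit clears roughly {new_racks * ratio:.0f} legacy racks")
```

That lands at the lower end of the six-to-eight range; where the contained racks also support a higher power density per rack, the ratio climbs towards eight.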
The workloads moved into the new enclosure are therefore unlikely to come from the same racks, and the older section of the server area will be left heavily fragmented. The next phase of the project entails defragmenting the environment and moving workloads out of under-utilised racks to free up additional floorspace.
Once these workloads are moved, begin the process of physically relocating equipment and clearing out the next section of floorspace to make room for the next self-contained rack installation. As each subsequent unit is installed, the overall density of computing per rack increases, resulting in a significantly higher compute-per-square-foot ratio and a smaller overall datacentre footprint.
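Planning that defragmentation is essentially a bin-packing exercise. As a rough illustration, here is a minimal first-fit-decreasing sketch in Python; the workload figures and the 5kW rack capacity are invented for the example, and a real migration plan would also have to respect power, network and redundancy constraints.

```python
# Illustrative defragmentation pass: pack live loads from under-utilised racks
# into as few racks as possible using first-fit decreasing. Loads (kW) and the
# 5kW rack capacity are invented for the example.

def defragment(loads_kw: list[float], rack_capacity_kw: float) -> list[list[float]]:
    """Greedy first-fit-decreasing bin packing of workloads into racks."""
    racks: list[list[float]] = []
    for load in sorted(loads_kw, reverse=True):
        for rack in racks:
            if sum(rack) + load <= rack_capacity_kw:
                rack.append(load)
                break
        else:
            racks.append([load])  # no existing rack has room; start a new one
    return racks

loads = [2.5, 1.0, 3.0, 0.5, 2.0, 1.5, 2.5, 1.0]  # eight part-filled racks
packed = defragment(loads, rack_capacity_kw=5.0)
print(f"{len(loads)} racks consolidated into {len(packed)}")  # 8 -> 3
```

First-fit decreasing is deliberately simple, but it shows why a fragmented floor of eight part-filled racks can often collapse into far fewer.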
This migration phase might also be an excellent time to consider a server refresh, depending on where existing servers are in their economic life cycles. Implementing smaller server form factors can increase rack density, while reducing overall power and cooling requirements.
Key to all of this is the input power to the datacentre, which must be adequate for the higher-density racks. One offsetting benefit is that the overall cooling load can actually decrease as more workloads move to high-density racks, because much of the cooling air flow is handled inside the rack, reducing the air flow needed across the entire datacentre space.
Reinvent infrastructure
While new chip designs attempt to lower the heat footprint of processors, increasing computing power requirements lead to higher equipment densities, which in turn increase cooling requirements. As the number of high-density servers grows, I&O leaders must provide adequate cooling for computer rooms.
Those looking to retrofit datacentres for extreme densities in a small footprint, perhaps for quantum computing or artificial intelligence (AI) applications, should consider liquid or immersive cooling systems as viable options. Gartner predicts that by 2025, datacentres deploying speciality cooling and density techniques will see 20-40% reductions in operating costs. This topic will be further discussed at the Gartner IT Infrastructure, Operations & Cloud Strategies Conference in the UK in November.
Cooling can consume as much as 60-65% of a datacentre’s total power. Higher-density racks of 15kW to 25kW can often require more than 1.5kW of cooling load for every 1kW of IT load, just to create the cool air flow needed to support them.
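Expressed as power usage effectiveness (PUE), those figures are consistent: a 1.5:1 cooling-to-IT ratio puts cooling at 60% of total draw. A minimal sketch, assuming a 100kW IT load purely for round numbers:

```python
# Cooling-overhead arithmetic from the ratios quoted above. The 100kW IT load
# is an assumption chosen purely for round numbers.

it_load_kw = 100.0
cooling_per_it = 1.5  # more than 1.5kW of cooling per 1kW of IT load

cooling_kw = it_load_kw * cooling_per_it
total_kw = it_load_kw + cooling_kw

print(f"Cooling share of total power: {cooling_kw / total_kw:.0%}")           # 60%
print(f"Implied PUE, ignoring other overheads: {total_kw / it_load_kw:.2f}")  # 2.50
```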
Rear-door heat exchangers (RDHx) are, in most instances, field-replaceable rack doors that cool the hot air as it exits the rack, rather than relying on air flow across the datacentre. One benefit of the RDHx is that not only are the racks more efficient, but much of the power once used for cooling becomes available for facilities to support other building systems, or can be rerouted as additional IT load. RDHx suppliers include Fujitsu, Vertiv, Schneider Electric, Nortek Air Solutions, CoolIT Systems and Opticool.
Using liquid cooling can solve the high-density server-cooling problem, because water conducts more than 3,000 times as much heat as air and requires less energy to do so. Liquid cooling enables the ongoing scalability of computing infrastructure to meet business needs.
The cost savings of RDHx may not be obvious, so customers must be willing to build the business case. Depending on heat load and energy costs, return on investment (ROI) can be achieved within a few years. In many cases, previously planned facilities upgrades (with typical ROIs of between 15 and 20 years) may no longer be required.
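A back-of-the-envelope payback model can frame that business case. In the hypothetical sketch below, every figure is an assumption for illustration rather than a quoted supplier cost or Gartner number:

```python
# Hypothetical RDHx payback estimate. Every figure is an assumption made up
# for illustration; none is a quoted supplier cost or Gartner number.

rdhx_capex = 30_000.0    # assumed installed cost of one rear-door unit, USD
rack_it_kw = 20.0        # high-density rack served by the unit
cooling_kw_saved = 0.4   # assumed cut in cooling load per kW of IT load
energy_cost_kwh = 0.12   # assumed blended electricity price, USD/kWh
hours_per_year = 24 * 365

annual_saving = rack_it_kw * cooling_kw_saved * energy_cost_kwh * hours_per_year
payback_years = rdhx_capex / annual_saving
print(f"Annual saving: ${annual_saving:,.0f}; payback: {payback_years:.1f} years")
```

Under these assumptions the unit pays for itself in under four years, which is the shape of argument the business case needs to make with real site figures.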
Most suppliers have recently started providing refrigerant-based solutions instead of water. A low-pressure refrigerant can alleviate water-leakage concerns because, if a leak occurs, the refrigerant boils off as a non-toxic, non-corrosive gas. Although this may add extra cost for coolant distribution units, it removes the worry of water leaks damaging equipment.
Immersive cooling systems are also gaining acceptance, especially where self-contained, high-density (40kW to 100kW and beyond) systems are needed.
Direct immersion and liquid cooling systems are now available and can be integrated into existing datacentres alongside air-cooled servers. However, adoption has been slow, given the heavy existing investment in mechanical cooling methodologies and the continually improving power efficiency of modern systems. Immersive cooling suppliers include Green Revolution Cooling, Iceotope, LiquidCool, TMGCore and Stulz.
Because every environment is different, it is critical for I&O leaders to use detailed metrics, such as power usage effectiveness (PUE) or datacentre space efficiency (DCSE), to estimate the benefits and the specific cost savings such investments would deliver.
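As a sketch of how those metrics might be tracked across a retrofit, the snippet below computes PUE before and after a phase, with a simple compute-per-floor-area ratio standing in for space efficiency; Gartner’s DCSE metric has a fuller definition, and all the inputs here are assumed:

```python
# PUE before and after a retrofit phase, plus a simple IT-kW-per-square-foot
# ratio as a stand-in for a space-efficiency metric. All inputs are assumed;
# Gartner's DCSE metric has a fuller definition than this sketch.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

before = pue(total_facility_kw=1000.0, it_kw=400.0)  # assumed legacy floor
after = pue(total_facility_kw=900.0, it_kw=550.0)    # assumed post-retrofit

print(f"PUE before: {before:.2f}, after: {after:.2f}")  # 2.50 -> 1.64

kw_per_sqft_before = 400.0 / 10_000  # assumed 10,000 sq ft of active floor
kw_per_sqft_after = 550.0 / 6_000    # assumed 6,000 sq ft after defragmentation
print(f"IT kW per sq ft: {kw_per_sqft_before:.3f} -> {kw_per_sqft_after:.3f}")
```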
The bottom line
I&O leaders may attain significant growth in their existing facilities by implementing a phased datacentre retrofit, while reducing the cooling requirements and freeing up power for additional IT workloads.
This activity is not without risk: any physical equipment move within a live production datacentre carries the potential for disruption. However, if it is executed as a long-term project, broken into small, manageable steps, the benefits can be far-reaching and far outweigh the risks.
From a budgeting perspective, it is also easier to fund than a traditional datacentre build, because the capital requirements are spread over multiple quarters rather than concentrated up-front. The overall costs will also be significantly lower than those of a new build, with many of the same benefits.
As I&O leaders begin to enhance their datacentres to support new business services and reduce operating costs, they should keep this step-by-step approach in mind.
David Cappuccio is a distinguished research vice-president analyst at Gartner