Looking ahead: How to create a multi-year datacentre capacity plan

Multi-year datacentre planning is more constrained than ever, with equipment and construction lead times lengthening against a background of rising requirements around storage, power and compute. With hyperscalers sitting on resources in a global game of musical chairs, how can players plan to get ahead before the music stops?

Jinender Jain, UK and Ireland sales head at IT consultancy Tech Mahindra, notes that power, cooling and space parameters should be assessed as a whole – and says it’s a “no-brainer” to adjust datacentre capacity based on business needs, demands and dynamics and allow for spare capacity that can quickly come on-stream.

However, many datacentres designed for 200W per square foot are still operating at half that wattage or less, with rack power effectively stranded. “As any datacentre manager knows, capacity planning is as much art as it is science,” says Jain.
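As a back-of-the-envelope illustration of that stranded capacity, the short sketch below (plain Python, with a hypothetical hall size) compares the power a 200W-per-square-foot design provisions against what a hall drawing half that actually uses.

```python
# Back-of-the-envelope "stranded power" check: a hall designed for
# 200 W/sq ft but drawing roughly half that leaves a large share of the
# provisioned power (and the capital behind it) unused.
# The hall size here is a hypothetical figure for illustration only.

DESIGN_DENSITY_W_PER_SQFT = 200
ACTUAL_DENSITY_W_PER_SQFT = 100   # "half that wattage or less"
HALL_AREA_SQFT = 10_000           # assumed hall size

design_kw = DESIGN_DENSITY_W_PER_SQFT * HALL_AREA_SQFT / 1_000
actual_kw = ACTUAL_DENSITY_W_PER_SQFT * HALL_AREA_SQFT / 1_000
stranded_kw = design_kw - actual_kw

print(f"Provisioned: {design_kw:,.0f} kW")
print(f"Drawn:       {actual_kw:,.0f} kW")
print(f"Stranded:    {stranded_kw:,.0f} kW ({stranded_kw / design_kw:.0%} of capacity)")
```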

Return on investment is far from guaranteed either. Uptime Institute's 2022 Outage Analysis highlights that power and networking issues still cause multiple outages globally, with nearly 30% of major public outages in 2021 lasting more than 24 hours, compared with 8% in 2017.

“Operators still struggle to meet high standards that customers expect and service-level agreements demand – despite improving technologies and strong resiliency and downtime prevention investments,” says Andy Lawrence, Uptime Institute Intelligence executive director.

Steve Wright, chief operating officer at colocation and cloud provider 4D Data Centres, says concerns and risk factors should feed into any multi-year plan – from data sovereignty, skills and systems to the quantity and type of cloud or datacentre environment required.

Newer cloud-first deployments may need testing for big data analytics or artificial intelligence (AI) workloads, not least because multi-megawatt ramp-ups can cause “astronomical” cost blow-outs. Customers that deal in “bigger” data may also need to replace 1,000 servers every two or three years. Some customers might be shrinking while others are growing – and it is easier to expand capacity than to shrink it.

Yet for many customers, beyond about 12 months out or with a budget cycle ahead, things are “quite fluffy”, says Wright. “Six months before inception, they then say ‘we need to get this nailed down’,” he adds.

4D plans 15-20 years ahead around the lifespans of mechanical and electrical equipment in its own datacentres, matching requirements against the age and state of a location. The right size of land is needed, ideally near a high-voltage connection point, with capacity available, with dense fibre connectivity and access to a suitable workforce, with flexibility “designed in” to accommodate technological change, says Wright.

“With our Gatwick facility we thought about high-density cooling, tweaking the cooling system to enable that to happen,” he says. “Last year, we deployed immersion cooling for a customer; the year before we went with high-density, rear-door cooling on racks to support high-performance computing-type environments where a standard 7kW rack just won’t cut it.”

Supply chain constraints

Wright says large facilities may aim to plan as far ahead as 2050, but customers may have a relatively short-term view. That is on top of supply chain constraints, particularly on networking equipment, with lead times of 275 days from Cisco or Juniper.

“And if you put in for a power connection request for a datacentre right now and it’s in London, you’re probably looking at 2025 before you get your power allocated,” he adds. “Redesigns and networking are having to happen a bit more on the fly.”
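To make the lead-time arithmetic concrete, a minimal sketch along these lines (Python; the go-live date and every item other than the networking figure quoted above are hypothetical) works backwards from a target go-live date to the latest safe order date for each long-lead item.

```python
# Work backwards from a target go-live date to the latest order date
# for long-lead items. The 275-day networking figure is cited above;
# the other items, lead times and dates are hypothetical examples.
from datetime import date, timedelta

GO_LIVE = date(2025, 6, 1)  # hypothetical target go-live

lead_times_days = {
    "network switches": 275,   # lead time cited in the article
    "generators": 360,         # illustrative assumption
    "UPS systems": 200,        # illustrative assumption
}

for item, days in sorted(lead_times_days.items(), key=lambda kv: -kv[1]):
    order_by = GO_LIVE - timedelta(days=days)
    print(f"{item:18s} lead {days:>3d} days -> order by {order_by.isoformat()}")
```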

Lewis White, enterprise infrastructure vice-president for Europe at CommScope, agrees that there is more pressure in today’s capacity conversations, which centre on access to power and networks.

“Lane speeds have risen from 40Gbps to 100Gbps, even 400Gbps in larger enterprise and cloud datacentres,” he says. “Operators are now deploying optical fibre infrastructures that can support 800Gbps and beyond – going all-in on fibre investment.”

Simon Riggs, Postgres fellow at EDB, points out that squaring monthly performance or annually recurring revenue with a demand for multi-year plans might not sit comfortably beside an agility mantra. Also, accountants rarely tie their calculations to the actual costs of various specific IT solutions and how they are managed.

“I think it’s a little bit cheeky to talk in terms of long durations,” says Riggs. “The original USP in the cloud was that you had flexibility. If you really can predict it years in advance, then why not simply go back to the old datacentre? And it’s happening when people are questioning huge cloud costs.”

“Too much inventory is out there and people aren’t properly tracking what they’re actually doing”
Simon Riggs, EDB

Capacity requirements depend on actual volumes of business – and in the past, no one was as worried about the cost of energy. That is where technical problems often arise – for example, when a burst in demand arrives sooner than a year out and people are left running to keep up, says Riggs, who suggests another look at consumption and technology efficiency.

“Really, too much inventory is out there and people aren’t properly tracking what they’re actually doing,” he adds.

Mark Pestridge, senior director – customer experience at colo provider Telehouse, points out that acquiring or building new sites takes years – even just to secure planning permission.

“You have to really build almost floor by floor, suite by suite,” he says. “You’ve just got to continue evaluating what your clients are trying to do and piece it together. It’s like building a jigsaw without all the pieces to start with.”

It can be about ensuring secure interconnection with service providers, telcos, internet peering exchanges and cloud services providers to deliver customer choice, he says. Then it’s about every customer’s different requirements and having the ability to fulfil them more flexibly.

“Yet how do you predict what applications are going to drive [datacentre] adoption?” says Pestridge. “With the way the world is evolving, how can we predict what type of power each rack is going to need? That’s really difficult.”

Adam Bradshaw, commercial director at colo provider ServerChoice, notes that, to an extent, we have all been here before, with the 2008 financial crisis causing a similar move away from on-premise infrastructure, and again during the pandemic.

“We are seeing a similar thing with this huge exponential increase in energy costs as well,” he says. “Stuff in AI, with autonomous vehicles, is very power-hungry. They will quite happily take in excess of 20kW per rack, no problem. For more traditional lower-powered hardware, customers just want it somewhere safe.”

Datacentre operators tend to do well in times of major crisis, says Bradshaw. Because they are seen as “safe” places, this can encourage hikes in power densities and pushes to reduce prices and power consumption. Yet so much is dependent on the customer.

Bradshaw recommends a better view of customer requirements in the discovery process. “How old is X piece of kit, what do you want to do with it and how long do you expect to run it for? What does the customer business expect to look like in 12, 24 or 36 months’ time?” he says.

“Drill down into those bits and work out what’s best. But that requires the prospect to really kind of play ball and work with us and be open to discussing these things.”
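One way to capture that discovery information consistently, rather than in free-form notes, is sketched below; the record types and fields simply mirror the questions Bradshaw lists and are illustrative, not any provider’s actual process.

```python
# Illustrative structure for the discovery questions described above:
# age of kit, intended use, expected remaining lifetime, and business
# outlook at 12/24/36 months. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class KitRecord:
    description: str
    age_years: float
    intended_use: str
    expected_remaining_years: float

@dataclass
class CustomerOutlook:
    racks_now: int
    racks_12m: int
    racks_24m: int
    racks_36m: int

kit = [
    KitRecord("storage array", age_years=4, intended_use="archive", expected_remaining_years=2),
    KitRecord("GPU nodes", age_years=1, intended_use="AI training", expected_remaining_years=4),
]
outlook = CustomerOutlook(racks_now=10, racks_12m=12, racks_24m=16, racks_36m=20)

refresh_soon = [k.description for k in kit if k.expected_remaining_years <= 2]
print("Likely refresh within 2 years:", refresh_soon)
print("Projected rack growth over 36 months:", outlook.racks_36m - outlook.racks_now)
```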

Jonathan Bridges, chief innovation officer at cloud services provider Exponential-e, suggests pay-as-you-go, consumption-based wholesale cloud models do not necessarily require much capacity planning, barring trending analysis of customers and the analysis of private, hybrid, or multi-cloud capabilities to keep costs down and boost sustainability.

Get ‘really smart’

He agrees that service providers and datacentres both need to get “really smart” about discovering the patterns in data, in storage and beyond to profile specific customer requirements in more detail.

“We need to be more predictive, looking further around at estate, infrastructure, what’s running in the datacentre, how that will evolve over time and affect capacity,” says Bridges.

That also means continually monitoring utilisation, feeding that more into historical trending, and making more use of descriptive and diagnostic analytics to make decisions, he points out.

“As we advance that, maybe more predictive analytics to try and model what will happen,” says Bridges. “The third thing is take stock of the contracts that you have, and try and do some analysis for when they refresh those contracts. Ask: what is that footprint going to look like?”
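A minimal sketch of that contract-refresh analysis might look like the following, with the contracts and growth assumptions invented purely for illustration: group contracts by refresh quarter and project the footprint each is likely to need at renewal.

```python
# Group customer contracts by refresh quarter and project the footprint
# (in kW) each is likely to need at renewal. The contracts, planning date
# and simple compound-growth assumption are hypothetical illustrations.
from collections import defaultdict
from datetime import date

contracts = [
    # (customer, refresh date, current kW, assumed annual growth rate)
    ("customer-a", date(2025, 3, 1), 120, 0.10),
    ("customer-b", date(2025, 9, 1), 400, 0.25),
    ("customer-c", date(2026, 3, 1), 60, 0.00),
]

today = date(2024, 1, 1)  # assumed planning date
by_quarter = defaultdict(float)

for name, refresh, kw_now, growth in contracts:
    years_to_refresh = (refresh - today).days / 365.25
    projected_kw = kw_now * (1 + growth) ** years_to_refresh
    quarter = f"{refresh.year}-Q{(refresh.month - 1) // 3 + 1}"
    by_quarter[quarter] += projected_kw

for quarter, kw in sorted(by_quarter.items()):
    print(f"{quarter}: ~{kw:.0f} kW expected at refresh")
```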

Erich Sanchack, chief operating officer at Digital Realty, reveals a focus on supplier-managed inventory agreements with tier-one and tier-two suppliers.

“However, it’s not an easy job, requiring commitment to standards that many providers are not able to establish,” says Sanchack. “Moreover, multi-year planning doesn’t insulate providers from regulatory and external governing factors, which can evolve at the drop of a hat.”

“Multi-year planning doesn’t insulate providers from regulatory and external governing factors”
Erich Sanchack, Digital Realty

Simon Bennett, chief technology officer at cloud provider Rackspace, notes that hyperconverged infrastructures are “packing things in” already. Also, the rise of liquid and immersive cooling to manage power densities brings further considerations.

Will building structures cope with the weight and concentration of the racks, and what will that more robust footprint cost?

“You may have to use new facilities and less physical space,” says Bennett. “Then you need a lot of power to go into that small space. If there’s 100kW per rack, suddenly 20 racks is 2MW.”
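Bennett’s arithmetic is easy to check, and to rerun for other densities (the figures below are taken from his quote):

```python
# Bennett's example: at 100 kW per rack, a small high-density row adds up fast.
racks = 20
kw_per_rack = 100
total_mw = racks * kw_per_rack / 1_000
print(f"{racks} racks x {kw_per_rack} kW = {total_mw:.1f} MW")  # -> 2.0 MW
```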

Several physically smaller facilities might work better than a “massive” datacentre with empty space where all the power has already been used up – though that approach can negatively affect sustainability and power consumption while increasing reliance on networking and interconnect, he says.

“Whatever capacity you’ve got, you want to drive it hard,” says Bennett. “You don’t want to leave it idle just burning electricity and incurring costs. It’s essential to do your own analytics on your demand profile.

“A lot of people still rely on spreadsheets. You need your own business intelligence around datacentre capacity. Overall, it’s probably about flexibility.”
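As a starting point for that kind of in-house analytics, the sketch below fits a simple linear trend to monthly power draw and estimates when installed capacity would run out; the figures are invented, and a real demand model would also need to account for seasonality and step changes from new contracts.

```python
# Fit a simple linear trend to monthly utilisation (kW) and estimate when
# installed capacity runs out. Figures are invented; real demand profiles
# need richer models (seasonality, step changes from new contracts, etc.).
INSTALLED_KW = 2_000
monthly_kw = [1_200, 1_230, 1_270, 1_290, 1_340, 1_380]  # hypothetical history

n = len(monthly_kw)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(monthly_kw) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_kw)) / \
        sum((x - mean_x) ** 2 for x in xs)

if slope > 0:
    months_left = (INSTALLED_KW - monthly_kw[-1]) / slope
    print(f"Growth ~{slope:.0f} kW/month; headroom exhausted in ~{months_left:.0f} months")
else:
    print("No growth trend detected in this window")
```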
