Cloud repatriation: Five reasons to repatriate data from cloud

Moving to cloud computing is not necessarily the one-way street you might imagine. Although the cloud attracts an increasing percentage of enterprise IT spending – a trend IT analysts expect to continue – the cloud does not hold all the answers.

In some cases, organisations have found the need to move workloads and data back from the cloud – so-called cloud repatriation.

Researchers at Forrester expect the public cloud infrastructure market to grow by 35% during 2021 to $120bn. This growth has been driven by the Covid-19 pandemic and, Forrester says, in particular by a move to cloud-based backup and recovery.

But even where the cloud is now the default choice for CIOs, enterprises also need to consider whether and when to move data back, or repatriate it, from cloud infrastructure. As yet, the number of organisations repatriating data is small, but data repatriation should be a consideration in any cloud strategy.

With applications such as backup and recovery, the idea of moving data back is built in. But bringing data back on-premise can be driven by financial, practical or even regulatory considerations. Here we look at the main reasons for cloud repatriation.

1. Cost reduction

Cloud computing is not always cheaper than on-premise options. And costs can change: because providers raise prices, because requirements change or, often, because the organisation has underestimated some of the costs of operating in the cloud.

Because cloud is an on-demand, pay-as-you-go service, higher utilisation – of storage or compute resources – means a bigger bill. Organisations might find their projected storage requirements quickly exceed the budget. With on-premise systems, once the hardware is bought or leased, most costs do not change with utilisation.

With cloud, the more the service is used, the more it costs. This is the case with data storage generally, and with specific aspects such as data egress, costs for related resources such as security and management tools, or even database writes.
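
To see where the two cost curves cross, it can help to model them side by side. The Python sketch below is a minimal illustration: every figure in it is a hypothetical placeholder rather than any provider's actual rate, so substitute numbers from your own bills and quotes.

    # A minimal cost-model sketch. Every price here is a hypothetical
    # placeholder, not any provider's actual rate.

    def monthly_cloud_cost(stored_tb, egress_tb,
                           storage_per_tb=23.0,  # assumed $/TB-month
                           egress_per_tb=90.0):  # assumed $/TB moved out
        """Usage-based cost: grows with both storage and egress."""
        return stored_tb * storage_per_tb + egress_tb * egress_per_tb

    def monthly_onprem_cost(hardware_cost, months_amortised=48,
                            fixed_opex=500.0):  # power, space, support (assumed)
        """Largely fixed cost: changes little with utilisation."""
        return hardware_cost / months_amortised + fixed_opex

    if __name__ == "__main__":
        for tb in (50, 200, 800):
            cloud = monthly_cloud_cost(stored_tb=tb, egress_tb=tb * 0.1)
            onprem = monthly_onprem_cost(hardware_cost=120_000)
            print(f"{tb:>4} TB: cloud ${cloud:,.0f} vs on-prem ${onprem:,.0f} per month")

Running a model like this across realistic growth projections makes it easier to spot the utilisation level at which the fixed on-premise cost overtakes the usage-based cloud bill.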

Another possibility is that the cloud provider could increase its fees. Depending on the contract, organisations could face rapid cost increases, potentially to the point where an on-premise option might be more economical.

2. Security and regulation

Regulatory requirements should not be a reason to move data from the cloud, provided the migration was planned properly. And there is no inherent reason why a public cloud deployment would be less secure than on-premise architecture, as long as the correct security policies are followed and systems are set up correctly.

Unfortunately, this is not always the case. Although security failures by public cloud providers are rare, misconfiguration of cloud infrastructure by customers is not uncommon. A data loss or breach could lead to the organisation deciding to move data back on-premise, even if only to minimise reputational damage.
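
Catching misconfiguration early is largely a matter of routine auditing. As one minimal illustration, the Python sketch below checks Amazon S3 buckets for a missing or incomplete public-access block – one of the most commonly exploited storage misconfigurations. It assumes the boto3 library and valid AWS credentials are available; other providers offer equivalent APIs.

    # A minimal audit sketch using boto3 (assumes AWS credentials are
    # already configured). It flags S3 buckets whose public-access block
    # is missing or only partially enabled.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            block = s3.get_public_access_block(Bucket=name)
            config = block["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                print(f"WARNING: {name}: public access only partially blocked")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"WARNING: {name}: no public access block configured")
            else:
                raise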

When it comes to regulation, public cloud providers, including the hyperscalers, have taken steps to meet government and industry requirements. Specific cloud services are available for classified data, for HIPAA-compliant information, or for PCI-DSS, to give just some examples.

But the biggest concern is often the location of data. Although the large cloud providers now offer specific geographical zones for their storage, a business might still decide, or be required to decide, that the better option is to relocate data to an on-premise system or a local datacentre.

“It is a misconception that regulation creates significant barriers to moving workloads to the cloud,” says Adam Stringer, business resilience expert at PA Consulting. “Regulators do demand rigour, just as they do for other outsourced arrangements, but there are many successful examples of highly regulated firms migrating to the cloud.”

The key lies in careful planning, he says.

A further twist in the regulatory tale comes from investigations. If a regulator, law enforcement agency or a court requires extensive data forensics, this might be impossible, or at least very expensive, in the cloud. The alternative is to bring the data in-house.

3. Latency and data gravity

Although the cloud provides almost limitless storage capacity, it depends on internet connections to operate. This, in turn, creates latency.

Some applications – backup and recovery, email and office productivity, and software-as-a-service packages – are not especially sensitive to latency. Enterprise-grade connectivity is now fast enough that users notice little in the way of lag.

Other workloads, however – real-time analytics, databases, security applications and those connected to sensors and the internet of things – can be far more sensitive to latency. Systems architects need to account for latency between the data source, storage or compute resources and the end-user, as well as latency between services in the cloud – intra-cloud latency.
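
Accounting for latency starts with measuring it. As a starting point, the Python sketch below times TCP connections as a rough proxy for network round-trip time; the endpoints are hypothetical placeholders, and a fuller assessment would also measure application-level request latency.

    # A rough latency probe: TCP connect time as a proxy for network
    # round-trip time. The endpoints below are hypothetical placeholders;
    # a real assessment would also measure application-level requests.
    import socket
    import time

    def tcp_connect_latency_ms(host, port=443, samples=5):
        """Average TCP connect time to host:port, in milliseconds."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                timings.append((time.perf_counter() - start) * 1000)
        return sum(timings) / len(timings)

    if __name__ == "__main__":
        # Hypothetical in-cloud and on-premise endpoints.
        for endpoint in ("storage.example-cloud.com", "storage.internal.example"):
            try:
                print(f"{endpoint}: {tcp_connect_latency_ms(endpoint):.1f} ms")
            except OSError as err:
                print(f"{endpoint}: unreachable ({err})")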

Technologies such as edge computing, caching and network optimisation can cut latency, but in some cases the simplest solution will be to bring the data back in-house, shortening communications paths and allowing the IT team to fine-tune storage, compute and networking to suit the applications and workloads.

Avoiding latency issues in the first place means analysing where most data resides – the question of data gravity. If most data is in the cloud and processing happens there too, data gravity will not be an issue. If data is constantly shuttling between clouds and on-premise storage or compute resources, something is wrong.

4. Poorly planned cloud migrations

Sometimes, organisations repatriate data simply because the move to the cloud has not met expectations. In this case, they might try to “save face”, according to Forrester’s Naveen Chhabra. “They tried to retrofit an app in the cloud while architecturally they should not have,” he says.

It could be that the workload was not suited to the cloud, or cloud migration was poorly planned or executed. “If your data architecture is a mess and you move your data to the cloud, you just end up with a mess in the cloud,” says PA’s Stringer. A move to the cloud will not, in itself, fix IT design issues, he adds.

And where organisations want to use the cloud – either as a redeployment or a greenfield project – they need to apply the same or higher standards of design. “Architectural rigour is as important for cloud deployments as it is for on-prem,” says Stringer. “If they don’t get that right, businesses will end up having to repatriate parts of their estate.”

This does not mean repatriation will be easy, or even that it will fix the problem. But at least it will give the IT team the chance to reset, analyse what went wrong, and replan how cloud could be used more effectively in the future.

5. Provider failure

Provider failure is perhaps the ultimate reason to repatriate data. The customer will probably have no choice. Hopefully, the provider will give some notice and a realistic timescale for organisations to take back their data or move it to another cloud provider.

But it is possible that a provider could cease trading without notice, or that technical or environmental problems could force it offline abruptly. In that case, firms will need to rely on alternative copies of their data, held on-premise or with another cloud.

Fortunately, complete provider failure is rare. But the experience gained from recent cloud outages suggests that at the very least, organisations need a plan for how to secure and retrieve their data if it does happen. And on-premise technology is likely to be central to any recovery plan, even if only until the organisation can source new cloud capacity.

“The question to ask before moving a workload to the cloud is: does this increase the resilience of the customer or market-facing service?” says PA’s Stringer. “If you’re only moving to reduce costs, the overheads of building resilience back in at a later date could offset any benefit.”
