Near-premises computing offers the robustness of a central data center at the edge. It’s time to take a look at this option.
Too often, the Internet of Things devices that run manufacturing production lines (robotics, sensors and other connected equipment) run strictly on distributed networks that must eventually connect to a central data center so heavier processing can be done. Near-premises computing, by contrast, is data center and IT infrastructure that is distributed so it can optimally support IoT implemented at different physical locations.
SEE: Edge computing adoption to increase through 2026; organizations cautious about adding 5G to the mix (TechRepublic Premium)
The concept behind near-premises computing is that resources that were formerly housed in corporate data centers can actually be moved out to micro data centers at the edge. This enables more robust computing at the edge that can perform many of the functions formerly reserved for the central data center—and it can better support IoT than the standard networks that are widely deployed today.
Whether the decision is to move physical micro data centers to the edge, use mini data centers that are cloud-based or use some combination of the two, the end result is that on-premises-level, scalable processing and storage is suddenly available to IoT applications in manufacturing plants and remote offices. This takes the pressure off the central data center to do all the work, and it can reduce costs associated with transmitting data over communications lines.
“Near-premises computing is enabled, primarily, by micro modular data centers and software-defined fiber optic networks,” said Cole Crawford, CEO of VaporIO, which provides edge colocation and interconnection services. “The optical network connects the enterprise facility to the near-premises data center and creates a flat L2 network that makes the data center equipment perform as if it were on site. Because the near-premises data center is within 10 miles of the [IoT] target site, the network is able to deliver 75µs latencies—a level of performance that makes the off-premises equipment behave as if it were on site.”
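Crawford's 75µs figure is plausible as a physics matter: propagation delay through fiber sets a hard floor on latency, and at roughly 10 miles that floor lands in the same range. A quick sanity check (the refractive index is a typical value for single-mode fiber; real routes are longer than straight-line distance, and switching adds more delay):

```python
# Propagation delay over fiber sets a hard floor on network latency.
# Light in glass travels at c / n, where n ≈ 1.468 for typical
# single-mode fiber.

C_KM_PER_S = 299_792.458        # speed of light in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.468  # typical for single-mode fiber

def fiber_propagation_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds over a fiber path."""
    speed_km_per_s = C_KM_PER_S / FIBER_REFRACTIVE_INDEX
    return distance_km / speed_km_per_s * 1_000_000

# A 10-mile (~16.1 km) path, assuming the route equals the distance:
delay = fiber_propagation_delay_us(16.1)
print(f"One-way propagation delay: {delay:.1f} µs")  # ≈ 79 µs
```

In other words, a near-premises facility within 10 miles can physically deliver sub-100µs latencies, which is what lets remote equipment behave as if it were on site.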
SEE: Future of farming: AI, IoT, drones, and more (free PDF) (TechRepublic)
For IoT initiatives like Manufacturing 4.0, which rely on sensors, robotics and equipment that must communicate and interact with each other, a mini data center that delivers scalability and speed at the edge can be transformative.
The question is, will CIOs see near-premises computing as transformative?
“The real change is cultural,” Crawford said. “Enterprises need to first adopt a mindset that allows them to recognize the benefits of shifting or augmenting their on-premises infrastructure to a nearby location. Near-prem should be considered a new best practice for deploying private cloud workloads.”
This, of course, requires changes to IT infrastructure, including redeployments of work processes and storage. As IoT establishes itself at the edge, CIOs and system architects see this trend, but until now they have been addressing it with standard network deployments.
SEE: 3 advantages to having cloud tools available for on-prem data centers (TechRepublic)
Strategically, it’s not a major leap to consider near-premises data centers that are hybrid, on premises or cloud-based. There are, however, practical hurdles: figuring out how to redeploy under budget constraints while keeping existing resources running, and finding the time to reconstruct IT infrastructure for near-premises computing.
Crawford said that enterprises adopting near-premises computing can reduce their compute and storage infrastructure TCO by 30% to 50% and eliminate most or all of the capital costs they would typically spend on the data center itself. These gains can be further compounded by turning capital expenses into operating expenses through new scalable service models.
If CIOs can demonstrate these gains in the cost models that they prepare for IT budgets, near-premises computing may indeed become a new implementation strategy at the edge.
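A back-of-the-envelope cost model shows the shape such a comparison might take. The 30% to 50% TCO reduction is Crawford's claim; every dollar figure below is a hypothetical placeholder, not data from the article:

```python
# Illustrative TCO comparison: an on-premises micro data center
# (upfront capex plus recurring opex) versus a near-premises service
# (recurring fee only). All dollar amounts are hypothetical.

YEARS = 5

def on_prem_tco(capex: float, annual_opex: float, years: int = YEARS) -> float:
    """Total cost of ownership: upfront capital plus recurring costs."""
    return capex + annual_opex * years

def near_prem_tco(monthly_fee: float, years: int = YEARS) -> float:
    """Near-prem as a service: no capex, only a recurring fee."""
    return monthly_fee * 12 * years

on_prem = on_prem_tco(capex=500_000, annual_opex=120_000)  # $1,100,000
near_prem = near_prem_tco(monthly_fee=11_000)              # $660,000
savings = 1 - near_prem / on_prem
print(f"TCO reduction: {savings:.0%}")  # 40%, within the cited 30-50% range
```

The structural point survives any particular choice of numbers: the capex term disappears entirely from the near-premises column, which is exactly the capex-to-opex shift Crawford describes.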
Don’t overlook the resilience that near-premises computing brings. “The performance of near-premises computing rivals that of on-premises computing but also has the capability to add significantly more resilience,” Crawford said. “Enterprises can distribute their workloads across multiple nearby facilities, removing the single point of failure and allowing for software-based high-availability resilience techniques.”
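The software-based high-availability pattern Crawford describes can be sketched in a few lines: prefer the nearest facility, and fail over down a list when a health check fails. Facility names and the health-check interface here are hypothetical placeholders:

```python
# Minimal sketch of software-based failover across multiple nearby
# facilities. Facility names and the health-check callable are
# illustrative assumptions, not a real API.

from typing import Callable

FACILITIES = ["near-prem-east", "near-prem-west", "central-dc"]

def route_request(is_healthy: Callable[[str], bool]) -> str:
    """Return the first healthy facility, in preference order."""
    for facility in FACILITIES:
        if is_healthy(facility):
            return facility
    raise RuntimeError("no healthy facility available")

# If the preferred site goes down, traffic shifts to the next one:
down = {"near-prem-east"}
print(route_request(lambda f: f not in down))  # near-prem-west
```

Because no single facility is required for the workload to run, the single point of failure that a lone on-premises data center represents is removed.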
With production line outages averaging $17,000 per incident and with manufacturers averaging 800 hours annually in downtime, system resilience as well as speed, scalability and robust security should be important considerations on any CIO’s list—and a reason to take a serious look at near-premises computing at the edge.
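Those two downtime figures imply a large annual exposure. The per-incident cost and annual downtime hours come from the article; the average incident length is a purely illustrative assumption needed to connect them:

```python
# Annual downtime exposure implied by the article's figures.
COST_PER_INCIDENT = 17_000      # from the article
DOWNTIME_HOURS_PER_YEAR = 800   # from the article
HOURS_PER_INCIDENT = 2          # assumption, purely illustrative

incidents = DOWNTIME_HOURS_PER_YEAR / HOURS_PER_INCIDENT
annual_cost = incidents * COST_PER_INCIDENT
print(f"{incidents:.0f} incidents/year, ${annual_cost:,.0f} annually")
# 400 incidents/year, $6,800,000 annually
```

Even if the true incident length differs, the exposure stays in the millions of dollars per year, which is the scale against which a near-premises investment would be weighed.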