GTC 2022: Nvidia’s plan to power the “next wave of AI,” from silicon to software | ZDNet
Nvidia is laying out its plan for driving the next era of computing at its spring GPU Technology Conference (GTC) this week, where CEO Jensen Huang unveiled a series of technologies that he argues will power the next wave of AI and the virtual worlds of the future.
“Over the past decade, Nvidia accelerated computing … started the modern AI revolution,” Huang said in a keynote address. “Now AI will revolutionize all industries.”
Huang laid out his strategic vision for Nvidia, which involves delivering products across a four-layer AI stack: hardware; system software and libraries; software platforms like Omniverse; and AI and robotics application frameworks.
“The first wave of AI learned perception and inference,” Huang said. “The next wave of AI is robotics — AI planning actions. Digital robots, avatars and physical robots will perceive, plan and act.”
Nvidia products like Omniverse, he argued, “will be essential” for building this next wave of AI.
To make the case for Nvidia’s influence and reach, the GTC conference features speakers from several of Nvidia’s major customers, including Best Buy, Home Depot, Walmart, Kroger, Lowe’s, LinkedIn, Snap and Visa. More than 200,000 attendees are registered to tune in to the conference.
Here’s a rundown of Nvidia’s spring GTC announcements:
The new Hopper architecture
Nvidia announced that its new architecture for data center AI workloads, the successor to Ampere, is named Hopper, after computing pioneer Grace Hopper. The first product based on Hopper will be the H100, which delivers three to six times the performance of the Ampere-based A100.
The company will bundle eight H100 GPUs together in its DGX H100 system, which will deliver 32 petaflops on FP8 workloads, and the new DGX SuperPOD will link up to 32 DGX H100 nodes with a switch using fourth-generation NVLink, capable of 900GB/s.
The world’s fastest AI supercomputer
Huang shared Nvidia’s plan to build “the first Hopper AI factory,” the Eos supercomputer. When it begins operations later this year, Nvidia expects it to be the world’s fastest AI system.
Eos features 576 DGX H100 systems with 4,608 H100 GPUs in total. It’s anticipated to provide 18.4 exaflops of AI computing performance, 4x faster AI processing than Japan’s Fugaku supercomputer, currently the world’s fastest system.
For traditional scientific computing, Eos is expected to provide 275 petaflops of performance.
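The figures quoted above are internally consistent; a quick back-of-the-envelope sketch (in Python, with variable names of my own choosing) ties them together:

```python
# Sanity-checking the performance figures quoted in the article.
# All constants come from the article; the derived values are simple arithmetic.

H100_PER_DGX = 8            # H100 GPUs bundled in one DGX H100 system
DGX_FP8_PETAFLOPS = 32      # FP8 performance of one DGX H100 system
EOS_DGX_NODES = 576         # DGX H100 systems making up the Eos supercomputer

# Implied FP8 throughput of a single H100 GPU
h100_fp8_petaflops = DGX_FP8_PETAFLOPS / H100_PER_DGX          # 4.0 petaflops

# Total GPU count and aggregate FP8 performance of Eos
eos_gpus = EOS_DGX_NODES * H100_PER_DGX                        # 4,608 GPUs
eos_fp8_exaflops = EOS_DGX_NODES * DGX_FP8_PETAFLOPS / 1000    # 18.432 exaflops

print(f"{h100_fp8_petaflops} PF per H100, {eos_gpus} GPUs, "
      f"{eos_fp8_exaflops:.1f} EF for Eos")
```

The 18.4-exaflop headline number is simply 576 nodes times 32 petaflops each, and the 4,608-GPU count is 576 nodes times eight GPUs per node.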
The Grace CPU “superchip”
Nvidia also announced the Grace CPU Superchip, the company’s first discrete data center CPU, designed for high-performance computing. It comprises two CPU chips connected over a 900GB/s NVLink chip-to-chip interconnect, forming a 144-core CPU with 1 terabyte per second of memory bandwidth.
Both the Grace CPU Superchip and the Grace Hopper Superchip, the CPU-and-GPU pairing announced last year, will be available in the first half of 2023.
The Omniverse platform
The virtual sphere represents a huge opportunity for Nvidia, as more and more businesses aim to build digital twins and simulations of everything from cars to cities to whole planets. Nvidia is tapping into that opportunity with its Omniverse platform, which gives 3D designers a shared virtual world from which they can collaborate across different software applications and from different geographic locations.
This year, Nvidia is focusing on making Omniverse more widely accessible to individuals and businesses. The new Omniverse Cloud gives creators instant access to Omniverse, even if they don’t have an Nvidia RTX-based system. Meanwhile, Omniverse OVX is a computing system designed to meet the needs of massive simulations.
New RTX GPUs
Also in the realm of professional visualization, Nvidia rolled out new Ampere architecture RTX GPUs: one for desktop workstations and six for laptops. The new RTX A5500, with ray-traced rendering 2x faster than the previous generation, is already powering intense aerospace engineering, architecture and animation projects.
AI software
The Nvidia AI platform, used by over 25,000 companies worldwide, is getting major updates, Huang said, including to the Triton Inference Server, the NeMo Megatron 0.9 framework for training large language models, and the Maxine framework for audio and video quality enhancement. Nvidia also updated 60 SDKs at this GTC.
Huang noted that Nvidia SDKs serve healthcare, energy, transportation, retail, finance, media and entertainment — a combined $100 trillion of industries.
Robots and autonomous vehicles
The development of robotic devices and autonomous vehicles presents another major opportunity for Nvidia’s AI platforms. Overall, Nvidia’s automotive pipeline over the next six years has grown to more than $11 billion.
Hyperion is Nvidia’s hardware architecture for self-driving cars on which Nvidia DRIVE is built. The company on Tuesday announced Hyperion 9 will start shipping in 2026. It will feature the coming DRIVE Atlan SoC, providing double the performance of the current DRIVE Orin-based architecture.
Meanwhile, BYD, the second-largest EV maker globally, will adopt the DRIVE Orin computer for cars starting production in the first half of 2023. Additionally, Lucid Motors revealed that its DreamDrive Pro advanced driver-assistance system is built on Nvidia DRIVE.
In terms of robotics, Nvidia announced Isaac Nova Orin, a state-of-the-art compute and sensor reference platform built on Jetson AGX Orin, designed to accelerate autonomous mobile robot development and deployment.