In a highly anticipated keynote at COMPUTEX Taipei, Nvidia founder and CEO Jensen Huang unveiled a range of cutting-edge systems, software and services that harness the power of generative AI to reshape industries from advertising to manufacturing to telecommunications. The live event marked Huang’s first in-person keynote since the start of the pandemic.
Huang said he believes these innovations will not only facilitate new business models but also significantly improve the efficiency of existing models across a multitude of industries.
One of the highlights of the keynote was the official launch of the GH200 Grace Hopper Superchip, a platform that combines the energy-efficient Nvidia Grace CPU with the high-performance Nvidia H100 Tensor Core GPU. This all-in-one module enables enterprises to achieve unprecedented AI performance, Huang said.
In addition, Huang introduced the DGX GH200, a large-memory AI supercomputer that integrates up to 256 Nvidia Grace Hopper Superchips into a single, data-center-sized GPU.
Advanced performance and memory
With one exaflop of performance and 144 terabytes of shared memory (nearly 500 times more memory than the previous-generation DGX A100), the DGX GH200 enables developers to build large language models for generative AI chatbots, advanced recommender systems, and graph neural networks for tasks such as fraud detection and data analytics. Huang said that tech giants like Google Cloud, Meta and Microsoft are already exploring the capabilities of the DGX GH200 for their generative AI workloads.
Huang pointed out that “DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies, pushing the boundaries of AI.”
Huang also unveiled the Nvidia Avatar Cloud Engine (ACE) for games, a foundry service that allows developers to build and deploy custom AI models for speech, conversation and animation. ACE equips non-playable characters with conversational skills, allowing them to answer questions with realistic evolving personalities.
The toolkit includes essential AI foundation models such as Nvidia Riva for speech recognition and transcription, Nvidia NeMo for generating custom responses, and Nvidia Omniverse Audio2Face for animating characters’ faces to match those responses.
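To make that flow concrete, here is a minimal conceptual sketch of how a speech-to-response-to-animation pipeline of this kind could be wired into a game loop. The class and function names below are placeholders invented for illustration; they are not the actual Riva, NeMo or Audio2Face APIs.

```python
# Conceptual sketch of an ACE-style NPC dialogue turn.
# The three stages mirror the components Nvidia describes:
# speech recognition (Riva), response generation (NeMo),
# and facial animation (Audio2Face). All functions below are
# stand-ins, not real SDK calls.

from dataclasses import dataclass


@dataclass
class NPCPersona:
    name: str
    backstory: str


def transcribe_speech(audio: bytes) -> str:
    """Placeholder for a speech-to-text service such as Riva."""
    return "Where can I find the blacksmith?"


def generate_reply(persona: NPCPersona, player_text: str) -> str:
    """Placeholder for an LLM-backed dialogue model such as NeMo."""
    return f"{persona.name} says: head past the tavern and turn left."


def animate_face(reply_text: str) -> dict:
    """Placeholder for an audio-driven animation step such as Audio2Face."""
    return {"text": reply_text, "visemes": ["AA", "L", "EH"]}


def npc_turn(persona: NPCPersona, player_audio: bytes) -> dict:
    """One dialogue turn: listen, think, then speak with animation."""
    player_text = transcribe_speech(player_audio)
    reply = generate_reply(persona, player_text)
    return animate_face(reply)


if __name__ == "__main__":
    jin = NPCPersona(name="Jin", backstory="Runs a ramen shop in Night City.")
    print(npc_turn(jin, b"\x00\x01"))
```

The structure is the point: each stage consumes the previous stage’s output, which is how a single NPC persona can drive speech understanding, dialogue and facial animation within one conversational turn.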
Additionally, Nvidia announced its partnership with Microsoft to lead innovation in the era of generative AI for Windows PCs. The partnership includes the development of advanced tools, frameworks and drivers that streamline the process of developing and deploying AI on PCs.
The collaboration aims to accelerate AI across the installed base of more than 100 million Windows PCs with RTX GPUs equipped with Tensor Cores, boosting the performance of more than 400 AI-accelerated Windows applications and games.
Leveraging generative AI for digital advertising and production workloads
Huang said the potential of generative AI extends to the digital advertising industry, where Nvidia is partnering with the marketing services giant WPP. Together, the companies have developed an innovative content engine built on the Omniverse Cloud platform.
This engine allows creative teams to connect their 3D design tools, such as Adobe Substance 3D, to create digital twins of client products within Nvidia Omniverse. Using generative AI tools trained on responsibly sourced data and powered by Nvidia Picasso, these teams can now rapidly produce virtual sets.
Nvidia said this new feature allows WPP customers to generate a multitude of customized 3D ads, videos and experiences for global markets, accessible on any web device.
A focus on manufacturing
The company also detailed its push into manufacturing, a $46 trillion industry comprising approximately 10 million factories. Huang pointed out that by leveraging Nvidia technologies, electronics manufacturers such as Foxconn Industrial Internet, Innodisk, Pegatron, Quanta and Wistron are transitioning to digital workflows, bringing the vision of fully digital smart factories closer to reality.
According to Huang, “The world’s largest industries create physical things. By building them digitally first, we can save billions.”
The integration of Omniverse and generative AI APIs has enabled these companies to connect their design and manufacturing tools, and thereby build digital replicas of their factories known as digital twins.
In addition, companies are using Nvidia Isaac Sim to simulate and test robots, and Nvidia Metropolis, a vision AI framework, for automated optical inspection. The company’s latest addition, Nvidia Metropolis for Factories, enables manufacturers to build custom quality-assurance systems, giving them a competitive edge and letting them develop cutting-edge AI applications.
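For readers unfamiliar with automated optical inspection, the sketch below illustrates the basic pattern such quality-assurance systems follow: run a vision model over each captured board image and make a pass/fail decision. It is a deliberately simplified, hypothetical example; the detect_defects stand-in is not part of Metropolis or any Nvidia SDK.

```python
# Illustrative sketch of an automated optical inspection (AOI) loop,
# the kind of workflow Metropolis for Factories targets.
# detect_defects() is a toy stand-in for a trained vision model.

from typing import List, Tuple

Frame = List[List[int]]  # grayscale image as nested lists of pixel values


def detect_defects(frame: Frame, threshold: int = 200) -> List[Tuple[int, int]]:
    """Toy detector: flag pixels brighter than a threshold as candidate defects."""
    return [
        (y, x)
        for y, row in enumerate(frame)
        for x, value in enumerate(row)
        if value > threshold
    ]


def inspect_board(frame: Frame, max_defects: int = 0) -> str:
    """Pass/fail decision for one board image."""
    defects = detect_defects(frame)
    if len(defects) <= max_defects:
        return "PASS"
    return f"FAIL ({len(defects)} defect(s) found)"


if __name__ == "__main__":
    clean_board = [[10, 12], [11, 9]]
    flawed_board = [[10, 255], [11, 9]]
    print(inspect_board(clean_board))   # PASS
    print(inspect_board(flawed_board))  # FAIL (1 defect(s) found)
```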
A new range of versatile AI supercomputers and server solutions
Nvidia also revealed that it is building its own AI supercomputer, Nvidia Helios, which is expected to come online later this year. The supercomputer will interconnect four DGX GH200 systems with Nvidia’s Quantum-2 InfiniBand networking, delivering bandwidth of up to 400 Gb/s and greatly improving data throughput for training large-scale AI models.
In addition, Nvidia introduced Nvidia MGX, a modular reference architecture that allows system manufacturers to efficiently and cost-effectively create different server configurations tailored to AI, HPC and Nvidia Omniverse applications.
With the MGX architecture, manufacturers can build both CPU-only and accelerated servers from standardized, modular components. The configurations support a range of GPUs, CPUs, data processing units (DPUs) and network adapters, including both x86 and Arm processors.
Additionally, MGX configurations can be housed in both air- and liquid-cooled chassis. Leading the way in the adoption of MGX designs are QCT and Supermicro, with their respective introductions scheduled for August. Other major companies like ASRock Rack, ASUS, GIGABYTE and Pegatron are expected to follow suit.
Nvidia’s impact on 5G infrastructure and cloud networking
Huang announced collaborative efforts to revolutionize 5G infrastructure and cloud networking. For example, a partnership with the Japanese telecommunications giant SoftBank will develop a distributed network of data centers that leverage Nvidia’s Grace Hopper Superchips and BlueField-3 DPUs within modular MGX systems.
By incorporating Nvidia Spectrum Ethernet switches, the data centers will deliver the highly precise timing that the 5G protocol requires, leading to higher spectral efficiency and reduced energy consumption. The platform holds promise for various applications, including autonomous driving, AI factories, augmented and virtual reality, computer vision, and digital twins.
In addition, Huang introduced Nvidia Spectrum-X, a networking platform specifically designed to improve the performance and efficiency of Ethernet-based AI clouds. By combining Spectrum-4 Ethernet switches with BlueField-3 DPUs and software, Spectrum-X delivers a 1.7x increase in AI performance and power efficiency. Major system manufacturers such as Dell Technologies, Lenovo and Supermicro already offer systems built on Nvidia Spectrum-X, including Spectrum-4 switches and BlueField-3 DPUs.
Generative AI supercomputing centers
Nvidia is also making strides in establishing generative AI supercomputing centers around the world.
For example, in Israel, the company is building Israel-1, a state-of-the-art supercomputer hosted in its local data center. Israel-1 combines Dell PowerEdge servers, the Nvidia HGX H100 supercomputing platform and the Spectrum-X platform with BlueField-3 DPUs and Spectrum-4 switches. Its main goal is to accelerate local R&D efforts.
And, in Taiwan, two new supercomputers are currently under development: Taiwania 4 and Taipei-1. Taiwania 4, made by ASUS and launching next year, leverages Arm-based Grace CPUs and an Nvidia Quantum-2 InfiniBand network. Nvidia said it is one of Asia’s most energy-efficient supercomputers.
Meanwhile, the Nvidia-owned and operated Taipei-1 will feature 64 DGX H100 AI supercomputers, 64 Nvidia OVX systems and Nvidia networking.
The company believes these additions will significantly improve local R&D efforts.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.