Nvidia Touts New AI, Omniverse Tools and Hardware

At SIGGRAPH, Nvidia announces new software, new servers featuring a new GPU, and partnerships to enable enterprises to accelerate their generative AI and Omniverse efforts.

Nvidia today unveiled new hardware, software, and services to make it faster and easier for enterprises to pursue generative AI, Omniverse, and other AI projects.

Nvidia CEO Jensen Huang spoke at the SIGGRAPH 2023 computer graphics conference in Los Angeles today and announced a spate of new technologies, including many aimed at enterprises and data center operators.

On the hardware front, he announced that Dell, Hewlett Packard Enterprise, Lenovo, Supermicro, and others will soon ship new Nvidia OVX servers that feature Nvidia's L40S GPU, a new data center processor designed to speed AI training and inference, 3D design and visualization, video processing, and Omniverse applications.

Nvidia today also announced a partnership with AI startup Hugging Face to provide a new cloud service that will enable enterprises to use the Hugging Face platform to train and tune custom generative AI models. The new service, called Training Cluster As a Service, will be available in the coming months on the Nvidia DGX Cloud, Nvidia's AI supercomputer cloud service.

Huang said organizations can take advantage of the 275,000 models and 50,000 data sets that have been shared in the Hugging Face community.  

"From the Hugging Face portal, choose your model that you would like to train or [if] you'd like to train a brand new model, connect yourself to DGX Cloud for training," Huang said in his keynote speech. "So this is going to be a brand new service to connect the world's largest AI community with the world's best AI training infrastructure."

To streamline AI deployment, Nvidia has upgraded its AI Enterprise software suite with new features including Nvidia NeMo for building, customizing, and deploying generative AI models.

The company also upgraded its Nvidia Omniverse software platform with enhancements that accelerate the creation of virtual worlds and enable enterprises to build larger, more complex simulations, such as digital twins of factories or warehouses.

To further simplify adoption of generative AI, Nvidia also announced Nvidia AI Workbench, a unified tool that allows enterprises to create, test, and deploy generative AI models on GPU-powered workstations. Then when more capacity is needed, they can use the same tool to migrate the AI models and scale them to any data center, public cloud, or the Nvidia DGX Cloud, Nvidia executives said.

"It helps you set up the libraries and the runtimes that you need. You can fine-tune the model," Huang said in his speech. "If you want to migrate this project so that all of your colleagues can use it and fine-tune other models, you could just tell it where you want to migrate it to, and [with] one click you will migrate the entire dependency of the project, all the runtimes and the libraries, all the complexities. And it runs on workstations and runs in the data center. It runs in the cloud — one single body of code."

Why Generative AI Is So Popular

Huang took time to give his perspective on the impact generative AI is making.

"What is the meaning of generative AI? Why is this such a big deal? Why is it changing everything? The reason for that is, first, human is the new programming language. We've democratized computer science," he said. "Everybody can be a programmer now because human language, natural language, is the best programming language. And this is the reason why ChatGPT has been so popular."

He said the large language model (LLM) is a new computing platform and that generative AI is the new killer app.

"For the very first time, after 15 years or so, a new computing platform has emerged. Like the PC, like the internet, like mobile cloud computing, a new computing platform has emerged," Huang said. "And this new computing platform is going to enable all kinds of new applications, but very differently than the past. This new computing platform benefits every single computing platform before it."

Zeus Kerravala, founder and principal analyst of ZK Research, said the big takeaway from Nvidia's announcements today is that the company is making AI, specifically generative AI, easier to deploy.

"It's Nvidia's ability to simplify the use of their technology through a combination of silicon, software, and partnerships," Kerravala said. "They've perfected the concept of an engineered system, where they make it so easy for companies and developers to hit the ground running because it's all kind of prepackaged together. We see that in a lot of their announcements today: the partnership with Hugging Face and the AI Workbench. Customers get faster time to value."

New Servers and Workstations Powered by New GPU

The new Nvidia OVX servers, which hardware vendors will begin shipping in the fall, are targeted at enterprises and cloud service providers that want to run AI or Omniverse workloads, the company said.

The OVX systems will support up to eight Nvidia L40S GPUs per server. The new GPU, which includes fourth-generation Tensor Cores and an FP8 Transformer Engine, delivers more than 1.45 petaflops of tensor processing power.

The new L40S GPU provides 1.2 times more generative AI inference performance and 1.7 times more training performance when compared with the Nvidia A100 Tensor Core GPU, the company said.

"The L40S is a really terrific GPU for enterprise-scale fine-tuning of mainstream large language models," Huang said.

Nvidia today also announced three new RTX workstations, each powered by up to four Nvidia RTX 6000 Ada Generation GPUs. The RTX 5000, RTX 4500, and RTX 4000 workstations, which can be configured with Nvidia AI Enterprise or Nvidia Omniverse software, can provide up to 5,828 TFLOPS of AI performance and 192GB of GPU memory. Dell, HP, Lenovo, and BOXX Technologies are expected to begin releasing the systems this fall, Nvidia said.

Nvidia AI Enterprise 4.0

Nvidia AI Enterprise is a software suite of AI tools, frameworks, and pretrained models designed to make it easier for enterprises to develop and deploy AI workloads.

The new version — AI Enterprise 4.0 — offers new features that include Nvidia NeMo for generative AI, Nvidia Triton Management Service for automating and optimizing production deployments, and Nvidia Base Command Manager Essentials, cluster management software that helps enterprises maximize the performance and utilization of AI servers across data center, hybrid, and multicloud environments.

Nvidia Omniverse Update

The Nvidia Omniverse platform is used to build metaverse applications and virtual 3D simulations, such as the digital twins built by industrial companies.

Nvidia has rolled out a set of Omniverse platform updates, including new developer templates and resources that let developers get started with minimal coding. The new release also includes an upgraded Omniverse Audio2Face application, which provides access to generative AI APIs that create realistic facial animations and gestures from an audio file and now adds multilingual support and a new female base model, the company said.

Nvidia has also broadened support for third-party AI tools through Open Universal Scene Description (OpenUSD). Last week, Nvidia joined Pixar, Adobe, Apple, Autodesk, and the Joint Development Foundation, which is part of the Linux Foundation family, to create the Alliance for OpenUSD (AOUSD) to promote OpenUSD as a standard for the interoperability of 3D tools and data.

"OpenUSD is visionary, and it's going to be a game changer," Huang said. "OpenUSD is going to bring together the world onto one standard 3D interchange and has the opportunity to do for the world and for computing what HTML did for the 2D web."

Nvidia today announced that AI tools from Cesium, Convai, Move AI, SideFX Houdini, and Wonder Dynamics are now connected to Omniverse thanks to OpenUSD.

Nvidia and Adobe also announced plans to make Adobe Firefly, which is Adobe's family of creative generative AI models, available as APIs in Omniverse. The new version of Omniverse is currently available in beta, Nvidia said.

To further advance OpenUSD, the company today announced that it has developed four cloud APIs to help developers adopt the standard, including Deep Search, an LLM agent that enables fast semantic search through massive databases of untagged assets.
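
Nvidia has not described how Deep Search is implemented, but the technique it names, semantic search over untagged assets, generally works by embedding asset descriptions and user queries into a shared vector space and ranking by similarity. A generic sketch using the open-source sentence-transformers library illustrates the idea; the model name and asset list are stand-ins, not part of Nvidia's API.

# Generic illustration of embedding-based semantic search over untagged assets.
# This is NOT the Deep Search API; the model and asset descriptions are examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model

# In practice the "documents" would be captions or metadata generated for
# each 3D asset; here they are hand-written stand-ins.
assets = [
    "rusty industrial shelving unit",
    "yellow forklift with pallet",
    "conveyor belt section, 4 meters",
    "warehouse ceiling light fixture",
]
asset_embeddings = model.encode(assets, convert_to_tensor=True)

query = "something to move heavy pallets around the warehouse"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank assets by cosine similarity to the query.
scores = util.cos_sim(query_embedding, asset_embeddings)[0]
for score, asset in sorted(zip(scores.tolist(), assets), reverse=True):
    print(f"{score:.3f}  {asset}")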

Editor's Note: This story has been updated with additional quotes by Nvidia CEO Jensen Huang during his keynote address.
