NVIDIA Announces Third-Generation OVX Computing Systems

They feature a new architecture, with a server design based on a dual-CPU platform with four NVIDIA L40 GPUs. 

NVIDIA announced the third generation of its OVX computing systems designed to power large-scale digital twins. They are built on NVIDIA Omniverse Enterprise, a platform for creating and managing metaverse applications, and provide "the breakthrough graphics and AI required to accelerate massive digital twin simulations" by combining NVIDIA BlueField-3 DPUs with NVIDIA L40 GPUs, ConnectX-7 SmartNICs and the NVIDIA Spectrum Ethernet platform.

The new generation features a new architecture, with a server design based on a dual-CPU platform with four NVIDIA L40 GPUs, which deliver "revolutionary neural graphics, AI compute and the performance needed for the most demanding Omniverse workloads."

According to NVIDIA, each OVX server includes two ConnectX-7 SmartNICs to enable multi-node scalability and precise time synchronization. Additionally, the Ethernet adapters ensure low-latency, high-bandwidth communication.

The BlueField-3 data processing unit offloads, accelerates, and isolates CPU-intensive infrastructure tasks, which NVIDIA says enables higher performance, large-scale expansion, zero-trust security, and better economics.

Finally, the new OVX introduces the accelerated NVIDIA Spectrum Ethernet platform, which provides high bandwidth and network synchronization.

Third-generation OVX systems will be available later this year through Lenovo, Supermicro, Dell Technologies, GIGABYTE, and QCT. NVIDIA is also working on Digital Twin as a Service offerings based on OVX with HPE GreenLake.
