
Digital Event Horizon

NVIDIA Revolutionizes AI Infrastructure with Kyber and Vera Rubin NVL144


NVIDIA has announced its latest artificial intelligence (AI) infrastructure innovations: the NVIDIA Kyber rack server generation and the Vera Rubin NVL144 MGX compute tray. These developments are set to transform the way large-scale AI applications are deployed, with a focus on increased efficiency, reduced costs and enhanced performance.

  • NVIDIA has announced its latest innovations: the NVIDIA Kyber rack server generation and the Vera Rubin NVL144 MGX compute tray.
  • The innovations are designed to increase efficiency, reduce costs, and enhance performance in large-scale AI applications.
  • NVIDIA has partnered with Samsung Foundry to meet growing demand for custom CPUs and custom XPUs.
  • The partnership aims to provide design-to-manufacturing experience for custom silicon, accelerating innovation in the field of AI.
  • The Vera Rubin NVL144 MGX compute tray offers an energy-efficient, 100% liquid-cooled, modular design with a significant leap in accelerated computing architecture and AI performance.
  • NVIDIA Kyber is designed to boost rack GPU density, scale up network size, and maximize performance for large-scale AI infrastructure.
  • The transition to 800 VDC architecture offers benefits such as increased scalability, improved energy efficiency, reduced materials usage, and higher capacity for performance in data centers.
  • Other industries like electric vehicles and solar have adopted 800 VDC technology for similar benefits.
  • NVIDIA's ecosystem is open and collaborative, with over 50 MGX partners gearing up to deploy the Vera Rubin NVL144 MGX compute tray.




    NVIDIA has partnered with Samsung Foundry to meet growing demand for custom CPUs and custom XPUs. The partnership is aimed at providing design-to-manufacturing experience for custom silicon, ensuring that AI applications can scale up quickly to handle demanding workloads. Separately, NVIDIA's ongoing collaboration with Intel will integrate x86 CPUs into NVIDIA infrastructure platforms using NVLink Fusion, further accelerating innovation in this field.

    The OCP Global Summit has brought a multitude of industry leaders and innovators together at the San Jose Convention Center from October 13-16. The event provides a unique opportunity for companies to showcase their latest technologies, including new silicon components, power systems and support for next-generation, 800-volt direct current (VDC) data centers.

    At the heart of NVIDIA's strategy lies its commitment to creating an open ecosystem that enables collaboration among partners. The company has already established a network of over 50 MGX partners who are gearing up to deploy the Vera Rubin NVL144 MGX compute tray, with industry pioneers such as Foxconn, CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure, and Together AI showcasing new technologies designed for this platform.

    The Vera Rubin NVL144 is an energy-efficient, 100% liquid-cooled, modular design that offers a significant leap in accelerated computing architecture and AI performance. Its design is built on the MGX rack architecture, which will be supported by more than 50 MGX system and component partners. The upgraded rack features energy-efficient 45°C liquid cooling, a new liquid-cooled busbar for higher performance and 20x more energy storage to keep power delivery steady.

    NVIDIA Kyber, on the other hand, is designed to boost rack GPU density, scale up network size, and maximize performance for large-scale AI infrastructure. The innovative architecture enables up to 18 compute blades per chassis, while purpose-built NVIDIA NVLink switch blades are integrated at the back via a cable-free midplane for seamless scale-up networking.
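    As a back-of-the-envelope illustration of what blade density means for scale, the sketch below multiplies out a hypothetical configuration. Only the 18-blades-per-chassis figure comes from the announcement; the GPUs-per-blade and chassis-per-rack values are assumptions chosen for illustration, not published specifications.

```python
# Rack GPU density: first-order arithmetic for a Kyber-style chassis.
BLADES_PER_CHASSIS = 18  # per the announcement
GPUS_PER_BLADE = 4       # assumption, not a published spec
CHASSIS_PER_RACK = 2     # assumption, not a published spec

gpus_per_chassis = BLADES_PER_CHASSIS * GPUS_PER_BLADE
gpus_per_rack = gpus_per_chassis * CHASSIS_PER_RACK

print(f"GPUs per chassis: {gpus_per_chassis}")  # 72 under these assumptions
print(f"GPUs per rack:    {gpus_per_rack}")     # 144 under these assumptions
```

    Under these assumed values a single rack reaches triple-digit GPU counts, which is why the cable-free midplane and integrated NVLink switch blades matter: scale-up networking has to keep pace with the density.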

    The transition to an 800 VDC architecture offers several benefits, including increased scalability, improved energy efficiency, reduced materials usage and higher performance capacity in data centers. Compared with traditional 415 or 480 VAC three-phase systems, the technology enables up to 150% more power to be transmitted through the same copper, reducing the need for heavy-duty copper busbars.
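    The headline figure rests on basic power arithmetic: for a conductor carrying a fixed current, deliverable power scales with voltage (P = V × I for DC, P = √3 × V × I × PF for three-phase AC). The sketch below compares power per conductor at an assumed equal current rating. It is a first-order illustration only; the quoted "up to 150% more" reflects additional system-level factors (conversion stages, conductor derating, voltage drop) beyond this simplification.

```python
import math

# Assumed per-conductor current rating (amps); illustrative only.
I_RATED = 100.0

# Traditional 415 VAC three-phase feed: P = sqrt(3) * V_ll * I * PF,
# carried over 3 phase conductors (unity power factor assumed).
ac_power_w = math.sqrt(3) * 415.0 * I_RATED * 1.0
ac_per_conductor_w = ac_power_w / 3

# 800 VDC feed: P = V * I, carried over 2 conductors.
dc_power_w = 800.0 * I_RATED
dc_per_conductor_w = dc_power_w / 2

print(f"415 VAC three-phase, per conductor: {ac_per_conductor_w / 1000:.1f} kW")
print(f"800 VDC, per conductor:             {dc_per_conductor_w / 1000:.1f} kW")
print(f"first-order gain: {dc_per_conductor_w / ac_per_conductor_w:.2f}x")
```

    This naive comparison already shows roughly a 1.67x gain per conductor from the higher voltage and two-wire distribution; the larger vendor-quoted gains come from the system-level effects noted above.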

    The adoption of 800 VDC infrastructure is not limited to NVIDIA's ecosystem; industries such as electric vehicles and solar have adopted the technology for similar benefits. The Open Compute Project, the industry consortium behind the summit, is focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure.

    In addition to its hardware advancements, NVIDIA NVLink Fusion is gaining momentum, enabling companies to seamlessly integrate their semi-custom silicon into highly optimized and widely deployed data center architecture. The inclusion of Intel and Samsung Foundry in this ecosystem underscores the importance of collaboration among industry leaders in driving innovation.

    The unveiling of the Vera Rubin NVL144 MGX compute tray at the OCP Global Summit marks a significant milestone for NVIDIA, with over 50 MGX partners gearing up to deploy this platform. Furthermore, more than 20 industry pioneers are showcasing new silicon components, power systems, and support for the next-generation, 800-volt direct current (VDC) data centers.

    As AI applications continue to push the boundaries of performance and efficiency, companies such as HPE are announcing product support for NVIDIA Kyber and NVIDIA Spectrum-XGS Ethernet scale-across technology. This emphasis on collaboration and innovation underscores the commitment of industry leaders to driving progress in this rapidly evolving field.

    In conclusion, NVIDIA's latest innovations in AI infrastructure represent a significant leap forward in terms of efficiency, performance and scalability. The company's commitment to creating an open ecosystem that enables collaboration among partners has enabled the development of cutting-edge technologies such as the Vera Rubin NVL144 MGX compute tray and 800-volt direct current (VDC) data center designs.

    As the AI landscape continues to evolve at breakneck speed, it is clear that NVIDIA's leadership in this field will play a pivotal role in shaping the future of artificial intelligence. With its focus on innovation, collaboration, and performance, the company is well-positioned to address the growing demands of large-scale AI applications and drive progress in the years to come.




  • Published: Wed Oct 15 02:17:31 2025 by llama3.2 3B Q4_K_M

    © Digital Event Horizon . All rights reserved.
