Nvidia seeks to stay ahead of Intel and AMD in a high-stakes, high-performance computing race

Written by Ryan Shrout

The AI and gaming giant leaves no gap for competitors

Nvidia (NVDA) was built on the promise of a graphics processing unit (GPU) for gaming and advanced display, but its rise to a $1 trillion valuation has come on the back of high-performance computing and artificial intelligence. What started as a small project called the “general purpose GPU” (GPGPU), aimed at game physics and video transcoding applications, turned the company into a silicon giant, displacing Intel as the clear thought leader for the future of computing.

What has made Nvidia successful in this transformation from a gaming company to one of the leading companies in computing and AI is its ability not only to build silicon, but to build an entire platform and ecosystem. Nvidia calls this the “Nvidia Scientific Computing Platform”: a combination of hardware, including the Hopper GPU and Grace CPU, with system software such as CUDA and PhysX that aims to simplify programming models for developers.

Add to that comprehensive platforms such as Nvidia Omniverse and Nvidia AI, and you get applications that can spur scientific development and advance artificial intelligence, all optimized, of course, for Nvidia silicon.

But like all giants, Nvidia needs to stay ahead of the curve and its competitors. The company’s challenge is to maintain the momentum that brought it this far.

At SC23, the leading conference for high-performance computing, Nvidia this week made some interesting product announcements and offered some insight into its thinking about the next big thing in the field.

New GPU to maintain the lead

The primary announcement from Nvidia was the H200, a mid-generation update of its H100 Hopper GPU that makes a significant advance in memory design. By moving from HBM3 (high-bandwidth memory) to HBM3e, the new H200 improves both memory bandwidth and maximum memory capacity, and as a result delivers significant performance gains over the current-generation H100.

Memory technology is often as important to high-performance computing (HPC) systems as the GPU or the central processing unit (CPU) itself. New AI models and HPC workloads test the limits of memory capacity. The new Nvidia H200 increases memory capacity by 76% and memory bandwidth by 43% over the H100.
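For context, those percentages track with the H200’s publicly reported specifications. Here is a quick back-of-the-envelope check, a sketch assuming the commonly cited figures of 141 GB of HBM3e at 4.8 TB/s for the H200 versus 80 GB of HBM3 at roughly 3.35 TB/s for the H100 SXM (figures not taken from Nvidia’s SC23 materials):

```python
# Rough check of the H200's memory gains over the H100, using commonly
# reported specs (assumed here, not quoted from Nvidia's SC23 announcement).
h100 = {"capacity_gb": 80,  "bandwidth_tb_s": 3.35}  # H100 SXM, HBM3
h200 = {"capacity_gb": 141, "bandwidth_tb_s": 4.8}   # H200, HBM3e

capacity_gain = (h200["capacity_gb"] / h100["capacity_gb"] - 1) * 100
bandwidth_gain = (h200["bandwidth_tb_s"] / h100["bandwidth_tb_s"] - 1) * 100

print(f"Memory capacity:  +{capacity_gain:.0f}%")   # ~76%
print(f"Memory bandwidth: +{bandwidth_gain:.0f}%")  # ~43%
```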

Nvidia was already the market leader in this area, so the improvements may seem incremental. But with Intel (INTC) continuing to push its GPU and Gaudi AI accelerator strategy, and AMD (AMD) executing the rollout of its MI300 AI chips, it’s crucial that Nvidia stays aggressive.

Read: Nvidia stock has its longest winning streak in seven years, hitting an all-time high

For investors interested in what Nvidia has planned next, the company teased the performance of its upcoming architecture (codenamed Blackwell) and the B100 GPU scheduled for 2024, saying it will offer more than double the performance of the H200 on the GPT-3 AI inference benchmark, similar to the work required for AI-powered chatbots like ChatGPT.

Nvidia also rolled out several partners and customers to support its product claims and long-term commitments. For example, the EuroHPC Joint Undertaking, a group founded in 2018 to invest in European supercomputing systems, showed plans for a supercomputer called “Jupiter” with approximately 24,000 GH200 nodes, a mix of Arm-based Grace CPUs and Hopper GPUs. Another was the Texas Advanced Computing Center, one of the centers of computational excellence in the United States, whose “Vista” system will use both GH200 Grace Hopper Superchips and Grace CPU Superchips.

The next frontier of computing – quantum computing

Perhaps the most significant change coming in computing is the move to quantum systems. Quantum computing differs from classical computing in that it relies on quantum mechanics rather than the simple electrical pulses used in classical machines. It can help solve problems that require evaluating enormous numbers of combinations, such as those in machine learning and cryptography, faster than any supercomputer currently available.

At the moment, Nvidia does not have a QPU (quantum processing unit) in its technology stable, but that does not prevent the company from participating in this cutting-edge field. Whether Nvidia eventually chooses to build quantum capability or buy it, it is creating the hardware and software ecosystem supporting quantum computing to make sure it doesn’t get left out.

From a hardware perspective, almost all quantum computing systems today are connected to classical computing systems that are used to simulate or control the quantum hardware. This could be for error correction, systems control, or simply for secondary processing where quantum hardware does not excel. Nvidia calls these “hybrid quantum systems,” a way to ensure quantum engineers and systems researchers work with Nvidia throughout the process.

Perhaps most innovative is the creation of “CUDA Quantum,” the quantum equivalent of CUDA, the programming and software paradigm that has enabled Nvidia to dominate the AI and high-performance computing spaces over the past 15 years. CUDA Quantum includes a high-level programming language that allows quantum system designers and application developers to write code that can run on both classical and quantum systems, moving work between them or simply simulating the quantum part.
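To illustrate the model, here is a minimal sketch using the open-source cudaq Python package, along the lines of the Bell-state examples in Nvidia’s tutorials; the specific target names are assumptions and depend on which simulators or QPU backends are installed:

```python
import cudaq

# Choose where the kernel runs. "qpp-cpu" is a CPU statevector simulator and
# "nvidia" is the GPU-accelerated simulator; QPU backends plug in the same way.
# (Target names are assumptions based on the cudaq documentation.)
cudaq.set_target("qpp-cpu")

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put the first qubit into superposition
    x.ctrl(qubits[0], qubits[1])   # entangle it with the second qubit
    mz(qubits)                     # measure both

# Sampling runs entirely on classical hardware when a simulator target is set;
# the same kernel dispatches to quantum hardware when a QPU target is selected.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expect roughly half "00" and half "11"
```

The point of the programming model is that the same kernel source can target a GPU-backed simulator today and a QPU later, which is exactly the kind of early lock-in the CUDA comparison implies.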

Nvidia claims that 92% of the top 50 quantum startups currently use Nvidia GPUs and software and that 78% of companies building and deploying quantum processors use CUDA Quantum as their software model.

To me, this represents a step no other technology company can take today: empowering the HPC industry that is already designing software for Nvidia GPUs and ensuring that the next generation of quantum applications is created “Nvidia-first” even before the quantum hardware exists. The company has planted a clear stake in the quantum computing landscape.

Ryan Shrout is founder and principal analyst at Shrout Research. Follow him on X @ryanshrout. Shrout has provided consulting services to AMD, Qualcomm, Intel, Arm Holdings, Micron Technology, Nvidia and others. Shrout owns shares in Intel.

More: These tech stocks are having a record earnings season even as hype around artificial intelligence slows, and the winner may surprise you

Read also: Computers and cars are the future for the giant but little-known chip designer

-Ryan Shrout

This content was created by MarketWatch, operated by Dow Jones & Co. MarketWatch is published independently of Dow Jones Newswires and The Wall Street Journal.


(END) Dow Jones Newswires

11-15-23 1621ET

Copyright (c) 2023 Dow Jones & Company, Inc.
