The good, the bad, and the ugly of Supercomputing ’23 (or close to it)

This year’s event was as much about artificial intelligence as it was about high-performance computing. The only booths that didn’t talk about AI were, well, nonexistent. Everyone was touting the miracles of AI, from central processing unit (CPU) and accelerator vendors to systems companies, networking vendors, storage, the cloud, water-cooling suppliers, the US Department of Energy, and the US Department of Defense. And then there was the OpenAI board-of-directors drama.
Supercomputing 2023 (SC23) in Denver is a wrap, with some extracurricular drama thanks to Microsoft and OpenAI. Here’s a summary of the good, the bad, and the ugly.
The good
Nvidia was everywhere and nowhere
The traditional big green booth was absent at SC23. Nvidia didn’t need its own booth because almost every system vendor was showing an H100-based server.
Nvidia made a few announcements, of course. Most impressive was Europe’s first exaflop monster at the Jülich Supercomputing Centre in Germany, which the company touts will be the world’s fastest AI system. “Jupiter” is the first system to use the Grace Hopper 200 (GH200) superchip with additional HBM capacity and bandwidth, and it is based on Eviden’s BullSequana platform with Nvidia Quantum-2 InfiniBand networking. The system also includes a “Cluster Module,” supplied by the German company ParTec and equipped with the new European Arm-based Rhea CPUs from SiPearl. SiPearl promises a massive memory bandwidth ratio of up to 0.5 bytes per flop with Rhea, approximately five times that of a GPU, providing high efficiency for complex, data-intensive applications.
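To put that bytes-per-flop figure in context, here is a minimal back-of-envelope sketch. The bandwidth and throughput numbers below are illustrative assumptions picked to match the ratios quoted above, not published specs for Rhea or for any particular GPU.

```python
# Bytes-per-flop: how much memory bandwidth backs each unit of compute.
# Illustrative numbers only, chosen to match the ~0.5 vs ~0.1 B/flop claim.

def bytes_per_flop(mem_bandwidth_gb_s: float, peak_gflops: float) -> float:
    """Ratio of memory bandwidth (GB/s) to peak compute (GFLOP/s)."""
    return mem_bandwidth_gb_s / peak_gflops

# A hypothetical CPU hitting 0.5 B/flop: 500 GB/s feeding 1,000 GFLOP/s.
cpu = bytes_per_flop(500, 1_000)       # -> 0.5

# A hypothetical GPU with far more compute per byte of bandwidth:
# 3,000 GB/s of HBM feeding 30,000 GFLOP/s -> 0.1 B/flop.
gpu = bytes_per_flop(3_000, 30_000)    # -> 0.1

print(f"CPU: {cpu:.2f} B/flop, GPU: {gpu:.2f} B/flop ({cpu / gpu:.0f}x)")
```

For bandwidth-bound workloads, the higher the ratio, the less time cores spend stalled waiting on memory, which is the efficiency argument SiPearl is making.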
Nvidia also announced (at Microsoft Ignite) its AI Foundry service on Azure, with Nvidia foundation models and the Nvidia AI Enterprise software suite now available on Microsoft Azure. SAP, Amdocs, and Getty Images were among the first companies to build custom LLMs and deploy those models using the service.
Cerebras continues to gain momentum, with G42 alongside it
As a follower of Cambrian-AI, you know that we rate Cerebras’ systems very highly, and have since the company came out of stealth. It remains one of the few companies with long-standing differentiation against the green tide.
I spent a few minutes with CEO Andrew Feldman at the show, and he was typically enthusiastic, especially since Cerebras is the only AI hardware startup with hundreds of millions of dollars in revenue. In addition to UAE-based G42, Cerebras’ customers include GlaxoSmithKline, Total, AstraZeneca, Argonne National Laboratory, EPCC, the Pittsburgh Supercomputing Center, Inverness, the National Energy Technology Laboratory, the Leibniz Supercomputing Centre, NCSA, Lawrence Livermore National Laboratory, and an unnamed major financial services organization.
AMD is about to announce the MI300, but MS Maia might steal the show
The MI300 isn’t available yet, but you couldn’t tell that while walking around the show floor. Microsoft, HPE Cray, and others talked about the upcoming MI300 family. I won’t spoil the news, which will be released on December 6, but at the booths and at MS Ignite the part was front and center, with massive anticipation.
Micron Technology: A better HBM mousetrap?
Micron, the only remaining U.S. memory company, was showing off its version of HBM3E, which it says has more bandwidth and memory capacity than those of its Korean rivals, Samsung and SK hynix. Talking with company representatives, I got the impression that a huge line of customers is forming to place orders. See our review here.
Microsoft Maia: Can SRAM make up for the HBM deficit?
Also at Ignite, which ran concurrently with Supercomputing 23, Satya Nadella announced the in-house Maia accelerator we covered on Forbes last week. It seems like a good start, but I’m surprised by the small amount of HBM: the Maia 100 has only 64GB of HBM, but a ton of on-chip SRAM. Surely they got the memo that OpenAI’s GPT-4 needs a lot of fast memory, right? Benchmarks, please? I suspect Microsoft’s designers know more than I do about how LLMs perform with this combination of memory, and I’m sure they will address the HBM question soon, but AMD and Nvidia are not standing still.
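Just how far 64GB falls short is easy to see with a little arithmetic. Here is a minimal sketch; GPT-4’s parameter count is unpublished, so GPT-3’s 175 billion parameters stands in, and the estimate ignores the KV cache and activations, which add substantially more.

```python
import math

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """GB needed just to hold model weights (FP16/BF16 = 2 bytes per parameter)."""
    return params_billions * bytes_per_param  # 1e9 params * bytes, / 1e9 bytes-per-GB

gpt3_weights = weight_memory_gb(175)   # ~350 GB for the weights alone
maia_hbm_gb = 64                       # HBM per Maia 100

print(f"FP16 weights: {gpt3_weights:.0f} GB")
print(f"Minimum Maia 100s just to hold weights: {math.ceil(gpt3_weights / maia_hbm_gb)}")
```

Six accelerators minimum just for the weights is why on-chip SRAM, fast as it is, cannot fully substitute for HBM capacity on the largest models.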
Groq and SambaNova find their groove.
The emergence, or rather explosion, of large language models has given two prominent startups, Groq Inc. and SambaNova Systems, a reason to brag. Both companies have been working on next-generation silicon, and their booths were packed with interested scientists wanting access. Since both startups have adopted an AI-as-a-service business model, they can accommodate interested data scientists without installing massive hardware locally. Honestly, I was very skeptical of both companies until I saw their demos and spoke with company leadership at SC23.
Groq Inc. showed the world’s fastest inference performance on Llama 2 70B, a model competitive with GPT-3. To celebrate its record-breaking performance, Groq brought a cute and cuddly llama named Bunny to the SC23 event in front of the convention center. The company’s demo was nothing short of amazing, showing what appears to be at least a 10X performance advantage over Nvidia GPUs in running GPT-3-class inference queries. Benchmarks, please!
NeuReality slashes inference costs by 90%. Bring your own DLA!
I spent some time with Moshe Tanach, CEO of Israeli startup NeuReality, to discuss how its upcoming inference platform will work. NeuReality’s mission is to reduce AI infrastructure costs and increase AI performance. The company’s software and hardware handle the entire AI inference workflow, offloading the actual DNN calculations to your preferred deep learning accelerator, or DLA.
The company showed off several “AI devices” built around its network-addressable processing unit, or NAPU, linked to various DLAs, including AMD FPGAs, Qualcomm’s Cloud AI100 (which was updated this week with a new version promising 4X performance), and even the IBM AIU, which is still a prototype from IBM Research. Other DLAs will be added as the company moves into production next year.
Untether.ai reduces costly data movement with large on-chip SRAM.
Untether.AI touted its second-generation at-memory accelerator at its SC23 presentation, achieving best-in-class TOPS/W efficiency. The new silicon will be available next year. We want to see how it compares to the latest Qualcomm AI100, hopefully in the MLPerf benchmarks, but CEO Arun Iyengar was confident his speedAI parts would win.
Yes, it was warm in November
Heat and energy use was a hot topic. And yes, data centers are not what they used to be! There were a lot of data-center cooling vendors on the floor. Here’s one rack.
Now that’s really cool!
System vendors
There were lots of great booths staffed by experts from Dell, HPE, Lenovo, Supermicro, Boston Ltd, Penguin and many other system vendors. Some were showing off the AMD MI300, others were showing how the Nvidia GH200 would change system design.
The bad
This Oscar obviously goes to the board of directors of OpenAI, aka Looney Tunes, and to now-former CEO Sam Altman, who will now become Satya Nadella’s right-hand man in AI research. OK, it wasn’t part of Supercomputing 23, but it dominated the news cycle over the weekend. We do not know why the board fired Mr. Altman in such a sudden and unprofessional manner, but the board must come clean soon. More than 700 dissatisfied employees have demanded that the board reinstate Mr. Altman and Mr. Brockman, or the entire organization could be at risk of defection or worse. Our theory about who wins and who loses is here. It’s a worthwhile read.
The ugly
HPCwire reported a violation of Supercomputing’s Code of Conduct (COC), posting a redacted image of an offensive T-shirt. The COC states, “The SC Conference is dedicated to providing a harassment-free conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, race, or religion. We do not tolerate harassment in any form.” We have chosen not to show the shirt.
Conclusions
I’ve been attending Supercomputing since 1987 and have missed only a few (thanks, Covid!). The show has been transformed, first by big data and now by AI, from a group of nervous scientists wondering where their funding would come from to more than fourteen thousand enthusiastic attendees and vendors, all of them speaking AI at scale.
Don’t miss it next year in Atlanta. I won’t!
Follow me on Twitter or LinkedIn. Check out my website.
Disclosures: This article expresses the views of the author and should not be taken as advice to purchase or invest in the companies mentioned. Cambrian-AI Research is fortunate to have many, if not most, semiconductor companies as our clients, including Blaize, BrainChip, Cadence Design, Cerebras, D-Matrix, Eliyan, Esperanto, FuriosaAI, Graphcore, GML, IBM, Intel, Mythic, NVIDIA, Qualcomm Technologies, SiFive, SiMa.ai, Synopsys, Ventana Micro Systems, and Tenstorrent. We do not have any investment positions in any of the companies mentioned in this article and do not plan to initiate any in the near future. For more information, please visit our website at https://cambrian-AI.com.