I had the opportunity to speak with Intel CEO Pat Gelsinger ahead of the company’s Intel Innovation 2023 developer conference. It was a chance to learn about the progress Intel has made over the past year and to get a sense of where the company is headed on many fronts, including client computing, the data center, edge computing, and the cloud. None of these topics can be discussed without the tremendous focus Intel has placed not only on accelerating the AI ecosystem with hardware, but also on enabling developers to build AI applications at scale across millions of client devices through Intel’s open-source AI development stack, including the tools in the OpenVINO toolkit.

Execution, execution, execution

The first thing Pat told me was that he was glad the company had brought back the spirit of IDF (Intel Developer Forum) after the company “stupidly killed it.” Intel Innovation and Intel Vision are two ways for Intel to reconnect with the ecosystem in a tangible way, bringing that ecosystem together and helping accelerate ideas and growth. This is part of a broader effort within the company to improve its execution, something it struggled with before Gelsinger’s arrival. This is the second year of a rebuilding process that restores the systematic engagement the company once had, and each year Intel’s developer conferences should be better than the last. Intel gets valuable feedback from these conferences, which ultimately helps shape the company’s future efforts and how it addresses industry needs and demands.

Intel’s Pat Gelsinger says the company continues to execute on schedule against its “five nodes in four years” process strategy and that Intel 18A remains on track for delivery. This has long been a cornerstone of Intel’s transformation strategy, which makes sense when you consider Intel’s long-term plan of relying on process leadership to deliver leadership performance and power. Intel once had a more than two-year process advantage over TSMC; that advantage disappeared. Intel looked like a company that had lost its way, operating almost like a rudderless ship, with almost every design needing to be ported back to an older process. That dramatically changes the drivers of product value and timing. So returning to competitive process nodes, and executing on those nodes, is critical to the company’s transformation; it is the most important metric I track. At the 2023 Deutsche Bank Technology Conference, Pat Gelsinger announced that Intel is not only accelerating the buildout of its fabs in Arizona but also has a customer that has already paid for 18A capacity, a major vote of confidence in Intel’s progress. Intel has also expanded its partnership with industry leaders such as Synopsys to enable leading IP on Intel’s advanced process nodes, including Intel 3 and Intel 18A. Given Synopsys’ leadership in the industry, this is a valuable deepening of an important relationship for any foundry, especially one moving as quickly as Intel. With these latest announcements, I am more confident in Intel’s ability to deliver on its promise of better execution than I would have been a year ago.

Some issues are somewhat beyond Intel’s control, such as the Tower Semiconductor deal, which was stalled by Chinese regulators and has since been abandoned by Intel. I believe Intel salvaged value from that failed deal by signing a U.S. foundry agreement under which Tower Semiconductor will invest $300 million in its own equipment to help speed the ramp of the IFS fab capacity it will end up using. It is a great demonstration of leadership for Intel to take a situation it could not control and turn it into a net positive for both the company and the acquisition target. Although Intel will not get Tower’s margin, that is better than not having the relationship at all. The key to Intel’s success, as Pat Gelsinger succinctly put it, is that the company needs to get back to world-class execution and not only meet expectations and deadlines but exceed them.

AI in the data center

Given the explosion of the generative AI market at the end of last year, many eyes have been focused solely on GPU-based generative AI training solutions. The reality is that although this is where the action is today, training probably won’t be the only beneficiary of the recent surge. As we saw with machine learning and deep learning, the initial wave started with training but led to an explosion of inference. Amazon has made it very clear that 80% of Alexa’s machine learning budget goes to inference.

I will reiterate my earlier prediction that Intel is well positioned for the future of generative AI. Although we haven’t seen much of what I like to call “proof of life,” evidence is starting to trickle in that Intel could be a competitive, viable alternative to NVIDIA and AMD for customers’ AI inference needs. Intel recently submitted results for three products in the latest MLPerf round: Gaudi2, 4th Gen Xeon, and the Xeon CPU Max Series. I have been very critical of vendors that don’t show up with results, and Intel did.

The results show that, on a given workload, Gaudi2’s price/performance significantly outperforms NVIDIA’s A100, H100, and GH200 (with near parity in raw performance against the H100 on the GPT-J 6B-parameter model). For specific workloads, 4th Gen Xeon CPUs are a suitable solution for building and deploying general-purpose AI workloads using the most popular open-source AI frameworks and libraries. What no one wants to talk about is that most of the world’s AI workloads run on CPUs. Intel remains the only server CPU vendor submitting MLPerf results. This is also the first time Intel has submitted MLPerf results for the Xeon CPU Max Series (with up to 64GB of high-bandwidth memory), and on GPT-J it was the only CPU able to achieve 99.9% accuracy.

The results demonstrate Intel’s ability to compete in AI inference on very specific workloads, and I believe they reinforce Intel’s commitment to addressing the full spectrum of the AI continuum, across both hardware and software. You have to start somewhere.

The company still has a lot of work to do to move from winning “select workloads” to winning “most workloads,” and I’m looking forward to learning more about its progress at Intel Innovation.

The AI-accelerated PC

Everyone is talking about AI these days, and Intel is no exception. The company is enabling AI on all fronts, from the cloud to the PC and everything in between. Intel is already building AI into every platform, from Meteor Lake on the PC side to Sapphire Rapids on the server and data center side, and I expect there will be variants for the data center edge, such as carriers, and for verticals such as retail and manufacturing. Intel has been preparing for this moment for years, and with the introduction of Meteor Lake the company is poised to significantly impact how businesses and consumers deploy and scale AI applications. There is still a lot of work to be done at the software level to enable these capabilities, which is why I believe companies like Intel, Qualcomm, Apple, and MediaTek need to build AI capabilities into their devices years before developers fully use them.

Developers need to know that AI capabilities and performance exist before they can start leveraging the hardware and creating AI applications. Generative AI has accelerated some developers’ appetite for AI performance, and we’re starting to see everyone in the industry talk about how they plan to enable on-device generative AI with models that are both performant and power-efficient. Client-side AI changes the user experience because it can scale in a way the cloud alone cannot. You will still rely on the cloud for model training and retraining, but the industry knows that inference in the cloud is simply too expensive today to scale, especially for applications like generative AI.

In talking with Pat Gelsinger, I got the sense that he sees the AI-enabled PC as a Wi-Fi-like moment: Intel’s Centrino didn’t necessarily define the first day of Wi-Fi, but it enabled new use cases and new applications because developers could assume the device would have a connection. He even points out that when Centrino came to market, the IEEE 802.11 standard Wi-Fi is based on was already at 802.11g, the fourth version of the wireless standard. We are by no means at the beginning of the PC industry’s AI journey, nor at its end, as others like NVIDIA have carried the torch for years through training, high-performance computing, and a focus on the cloud. But I heard Pat Gelsinger make a very controversial claim: “We will deliver more TOPS next year than any other vendor.” He says Intel will dwarf NVIDIA because it will ship much more volume. I know that with Meteor Lake and Sapphire Rapids, Intel will ship significant volume to back up this claim, but I wonder whether companies like Qualcomm, which ship in even larger volumes, might challenge it.

Intel says it will have hundreds of independent software vendors (ISVs) enabled on Meteor Lake, which means I expect to see a lot of developers announcing their support for it soon. OpenVINO is the core of Intel’s open-source software capabilities, and Intel says it is gaining widespread acceptance in the industry for edge computing. We haven’t seen that traction on the PC yet, but I think Meteor Lake could be the chip, and next year could be the time, that Intel gains momentum with OpenVINO on the PC. There is a lot happening across the industry that leads me to believe next year will be the year of the AI-powered PC.
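To make the developer angle concrete, here is a minimal sketch, not taken from Intel’s materials, of what on-device inference with OpenVINO’s Python API looks like. The model file name is a placeholder, and which devices show up (CPU, integrated GPU, and on newer releases an NPU) depends on the PC and the OpenVINO version installed.

```python
# Minimal, hypothetical OpenVINO inference sketch. "model.xml" is a
# placeholder for a model already converted to OpenVINO IR format;
# available devices vary by platform and OpenVINO release.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

# Read the IR model and compile it; "AUTO" lets OpenVINO pick a device.
model = core.read_model("model.xml")
compiled = core.compile_model(model, "AUTO")

# Build a dummy input matching the model's first input (assumes a static
# input shape) and run a single synchronous inference.
input_shape = list(compiled.input(0).shape)
dummy_input = np.random.rand(*input_shape).astype(np.float32)
result = compiled([dummy_input])[compiled.output(0)]
print("Output shape:", result.shape)
```

The portability pitch is that the same code path can target the CPU, integrated GPU, or an NPU without changes, which is the argument for writing to OpenVINO rather than to a specific accelerator.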

Wrapping up

I think Intel is looking beyond the current AI bubble and thinking about more than just cutting-edge cloud applications. Intel is the company positioned to bring us AI at scale across tens of millions of client devices and cloud-connected and edge use cases. Intel has always been a market maker when it is firing on all cylinders and executing. The company is built on its ability to lead and shape the industry around it, an essential aspect of its success, and when that is combined with manufacturing execution and leadership, it is a healthy recipe for success. Packaging technologies like Foveros are driving some interesting new capabilities from Intel, such as Meteor Lake, and embracing a chiplet architecture enables Intel to get back on track as a market maker. Foveros should also benefit other chipmakers, as it has advantages over TSMC’s CoWoS and appears to have better availability.

I’m looking forward to the Intel Innovation 2023 analyst sessions on Monday and the big announcements on Tuesday. Most of all, I look forward to the direct discussions with the company’s CEO.
