Intel Core Ultra and 5th Gen Intel Xeon processors expand Intel’s unique AI portfolio, making AI accessible to everyone.
At the launch of "AI Everywhere" today in New York City, Intel unveiled a unique portfolio of AI products that will enable customers to deploy AI solutions everywhere, including the data center, cloud, network, edge and PC.
Highlights:
The Intel® Core™ Ultra family of mobile processors, built on Intel 4 process technology and the first to benefit from the company’s largest architectural change in 40 years, is Intel’s most power-efficient client processor line and ushers in the era of AI computing.
Designed with AI acceleration on every core, the 5th Generation Intel® Xeon® processor family delivers big leaps in AI and overall performance, lowering total cost of ownership.
Intel CEO Pat Gelsinger gave the first public demonstration of the Intel® Gaudi® 3 AI accelerator, which remains on track to launch next year.
“AI innovation has the potential to increase the impact of the digital economy by up to 30% of global gross domestic product,” said Gelsinger. “Intel is developing technologies and solutions that enable customers to seamlessly integrate and effectively operate AI in the cloud and, increasingly, locally on the PC and at the edge, where data is generated and used.”
Gelsinger emphasized that Intel’s extensive AI footprint extends from cloud and enterprise servers to networks, high-volume clients and pervasive edge environments. He also reemphasized Intel’s readiness to deliver five new compute technology nodes in four years.
“Intel is on a mission to bring AI everywhere through exceptionally designed platforms, secure solutions and support for open ecosystems,” Gelsinger said. “Today, our AI portfolio grows even stronger with the launch of Intel Core Ultra, heralding the era of AI computing, and 5th Gen Xeon with AI acceleration for the enterprise.”
Intel Core Ultra powers AI computing and new applications
Representing the company’s biggest architectural change in 40 years, Intel Core Ultra ushers in a generation of AI computing with innovations in every aspect: CPU computing, graphics, power, battery life and cutting-edge AI capabilities. AI computing represents the biggest transformation in computing in 20 years, since Intel® Centrino® enabled laptops to connect to Wi-Fi from anywhere.
Featuring Intel’s first client on-chip AI accelerator (a neural processing unit, or NPU), Intel Core Ultra is designed to deliver a new level of power-efficient AI acceleration, with 2.5x better power efficiency than the previous generation.2 Its world-class GPU and leading CPU can also accelerate AI solutions.
More importantly, Intel is partnering with more than 100 software vendors to bring hundreds of AI-powered applications to the PC market – highly creative, productive and fun applications that will transform the computing experience. For consumers and business customers, this means a larger and more comprehensive set of AI-enhanced applications will perform brilliantly on Intel Core Ultra, especially when compared to competing platforms. Content creators working in Adobe Premiere Pro, for example, will enjoy 40% faster performance than the competition.
Intel Core Ultra-based AI PCs will be available at select retailers in the U.S. during the holiday season. Over the coming year, Intel Core Ultra will bring AI to more than 230 designs from laptop and PC manufacturers worldwide. By 2028, AI PCs are expected to account for 80% of the PC market,4 bringing new tools to the way we work, learn and create.
New Xeon brings more powerful AI to the data center, cloud, network and edge
Also introduced today, the 5th Gen Intel Xeon processor family delivers a significant leap in performance and efficiency: an average 21% gain in overall compute performance compared to the previous-generation Xeon,6 and an average 36% higher performance per watt across a range of customer workloads. Customers on a typical five-year refresh cycle who upgrade from older generations can reduce their total cost of ownership by up to 77%.
As the only mainstream data center processor with built-in AI acceleration, the new 5th Gen Xeon delivers up to 42% higher inference and fine-tuning performance on models as large as 20 billion parameters. It is also the only CPU with a consistent and continually improving set of MLPerf training and inference benchmark results.
Xeon’s built-in AI accelerators, along with optimized software and advanced telemetry capabilities, enable more manageable and efficient deployment of demanding network and edge workloads for communications service providers, content delivery networks and broad vertical markets including retail, healthcare and manufacturing.
During today’s event, IBM announced that in its testing, 5th Gen Intel Xeon processors achieved up to 2.7x better query throughput on its watsonx.data platform than previous-generation Xeon processors. Google Cloud, which will deploy 5th Gen Xeon next year, noted that Palo Alto Networks achieved a 2x performance improvement in its threat-detection deep learning models by using the built-in acceleration in 4th Gen Xeon through Google Cloud. And independent game studio Gallium Studios used Numenta’s AI platform running on Xeon processors to boost inference performance by 6.5x over a GPU-based cloud instance, cutting costs and latency for its AI-based game Proxi.
Such performance opens up new possibilities for advanced AI in the data center and cloud, as well as in networks and edge applications around the world.
AI acceleration and solutions wherever developers need them
Both Intel Core Ultra and 5th Gen Xeon will also turn up in unexpected places. Imagine a restaurant that guides your menu choices based on your budget and dietary needs; a manufacturing plant that catches quality and safety issues at the source; an ultrasound that can see what the human eye cannot; a power grid that manages electricity with pinpoint precision.
These edge computing use cases represent the fastest-growing segment of computing, with artificial intelligence the fastest-growing workload among them, and are expected to grow into a $445 billion global market by the end of the decade. Within that market, edge and client devices are driving 1.4 times more demand for inference than the data center.
In many cases, customers will benefit from hybrid AI solutions. Take Zoom, which runs AI workloads on Intel Core-based client systems and Intel Xeon-based cloud instances across its all-in-one communication and collaboration platform to deliver the best user experience at the lowest cost. Zoom uses AI to drown out the neighbor’s barking dog, blur the clutter of your home office, and generate a meeting summary and follow-up email.
To make AI hardware technologies as accessible and easy to use as possible, Intel is contributing optimizations to the AI frameworks developers already use (such as PyTorch and TensorFlow) and providing foundational libraries (via oneAPI) so software stays portable and performant across different types of hardware.
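As a purely illustrative sketch, not part of today's announcement, this is roughly what those framework-level optimizations look like from a developer's point of view, assuming Intel® Extension for PyTorch is installed; the tiny model and input shapes below are placeholders:

```python
# Illustrative sketch: applying Intel's CPU optimizations to an existing
# PyTorch model via Intel Extension for PyTorch (ipex). The model below is a
# placeholder; any eval-mode nn.Module can be passed to ipex.optimize().
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

# ipex.optimize() applies operator fusion, memory-layout and dtype tuning for
# Intel hardware; bfloat16 is optional and depends on the platform's support.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(torch.randn(1, 512))

print(logits.shape)  # torch.Size([1, 10])
```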
Intel’s superior developer tools, including oneAPI and the OpenVINO toolkit, help developers leverage hardware acceleration for AI workloads and solutions and quickly build, optimize and deploy AI models across a wide range of inference targets.
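Again purely as an illustration, the sketch below shows how a developer might use the OpenVINO toolkit to retarget a single model across those inference targets; "model.xml" stands in for a network already converted to OpenVINO IR, and the devices actually available depend on the platform and drivers:

```python
# Illustrative sketch: compiling one OpenVINO IR model for different devices.
# "model.xml" is a placeholder path; the input shape is assumed to be static.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")

# Switching the inference target is a one-word change: "CPU", "GPU" or "NPU".
compiled = core.compile_model(model, device_name="CPU")

# Run a single synchronous inference with random data matching the model input.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```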