Nvidia hardware has powered the recent surge in generative AI products. Its dominance in consumer and workstation graphics, its mature CUDA software ecosystem, and the AI-accelerating Tensor cores in its GPUs all help.
Bloomberg says that Microsoft is working with AMD to improve the AI capabilities of AMD's GPUs, even though Microsoft's Azure data centers already use "tens of thousands" of Nvidia GPUs. The report states that Microsoft is providing "support" and "engineering resources" to help AMD boost its performance on AI workloads.
Bloomberg also reports that AMD is helping develop "Athena," the AI accelerator Microsoft is building in-house, which The Information first reported on last month. Microsoft has since disputed AMD's involvement in Athena's development.
As Microsoft adds AI features to more of its applications, a more competitive market for AI-accelerating hardware could reduce its server costs. Microsoft is reportedly developing a private version of ChatGPT for businesses that could cost "10 times" as much to run as the standard version. If the server hardware needed to run these generative AI models gets cheaper, Microsoft could lower its prices, its costs, or both, making these products more appealing.
Some of Microsoft's previous Surface PCs featured "Surface Edition" Ryzen processors, though the latest models don't. Like the Qualcomm SQ-series chips used in the Arm versions of the Surface Pro, the Surface Edition Ryzen processors were AMD chips with Microsoft-assisted optimizations to the "firmware, drivers, and software stack."
Tom's Hardware's testing shows that AMD's current GPU designs struggle with AI tasks. AMD's flagship Radeon RX 7900 XTX generated images in Stable Diffusion more slowly than Nvidia's RTX 4090, 4080, and 4070 Ti, even though it's faster in most games. Even the entry-level RTX 3050 outperformed AMD's previous-generation RX 6000-series cards (software tweaks could improve those numbers, but the fact that the software is Nvidia-optimized by default says a lot about the current state of affairs).
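Part of the reason the defaults favor Nvidia is that most Stable Diffusion tooling is built on PyTorch, whose best-supported GPU backend is CUDA. The sketch below is not from any of the cited reports; it's a minimal example assuming the Hugging Face diffusers library and the widely used runwayml/stable-diffusion-v1-5 checkpoint, and it shows the typical device-selection logic. On an AMD card, the GPU path is only taken if a ROCm build of PyTorch is installed (on Linux); otherwise the script quietly falls back to the CPU unless the user wires up an alternative backend themselves.

```python
# Minimal sketch (illustrative, not the benchmark code): typical device
# selection in a Stable Diffusion script built on PyTorch and diffusers.
import torch
from diffusers import StableDiffusionPipeline

if torch.cuda.is_available():
    # True out of the box on Nvidia GPUs via CUDA; also true on AMD GPUs,
    # but only under PyTorch's ROCm build, which must be installed explicitly.
    device, dtype = "cuda", torch.float16
else:
    # Without a working GPU backend, generation falls back to the much
    # slower CPU path.
    device, dtype = "cpu", torch.float32

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, assumed here
    torch_dtype=dtype,
).to(device)

image = pipe("a rack of GPUs in a data center").images[0]
image.save("output.png")
```

The point of the sketch is simply that the happy path assumes CUDA; getting comparable performance out of an AMD card requires extra setup that most users and default benchmark configurations don't do.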