5 min read | Saved February 14, 2026
Do you care about this?
Microsoft has unveiled the Maia 200, its latest AI accelerator designed for high-performance inference tasks. Built on TSMC's 3nm process, it features 140 billion transistors and claims to deliver 30% more performance per dollar than its predecessor, the Maia 100. Microsoft is positioning the chip around efficiency and environmental concerns as it competes with offerings from Amazon and Google.
If you do, here's more
Microsoft has launched the Maia 200, its latest AI accelerator, claiming it outperforms competitors like Amazon's Trainium3 and Nvidia's Blackwell B300 Ultra. Built on TSMC's 3nm process, Maia 200 features 140 billion transistors and delivers up to 10 petaflops of FP4 compute. It also offers 216 GB of HBM3e memory with a bandwidth of 7 TB/s. Notably, it achieves 30% more performance per dollar compared to its predecessor, the Maia 100, despite having a higher thermal design power (TDP) of 750W.
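The headline figures above allow a quick back-of-the-envelope check. The sketch below derives two simple ratios from the article's numbers: the chip's compute-to-bandwidth ratio (a roofline-style estimate of how many FP4 operations it can perform per byte fetched from memory before bandwidth becomes the bottleneck) and its peak FP4 throughput per watt at full TDP. The derived ratios are our arithmetic, not vendor claims.

```python
# Inputs quoted in the article for the Maia 200.
fp4_compute_flops = 10e15     # 10 petaflops of FP4 compute
hbm_bandwidth_bytes = 7e12    # 7 TB/s of HBM3e bandwidth
tdp_watts = 750               # thermal design power

# Arithmetic intensity at which the chip shifts from being
# memory-bandwidth-bound to compute-bound (roofline-style estimate).
ops_per_byte = fp4_compute_flops / hbm_bandwidth_bytes

# Peak FP4 throughput per watt if the chip ran at its full TDP.
tflops_per_watt = fp4_compute_flops / tdp_watts / 1e12

print(f"compute:bandwidth ratio ~ {ops_per_byte:.0f} FLOP/byte")
print(f"peak efficiency ~ {tflops_per_watt:.1f} TFLOPS/W")
```

The high ops-per-byte figure (~1,429 FLOP/byte) is typical of inference-oriented accelerators: workloads with low arithmetic intensity will be limited by the 7 TB/s memory bandwidth rather than the 10 PFLOPS compute peak. And since Microsoft says the chip typically runs well below 750 W, real-world efficiency would exceed the ~13.3 TFLOPS/W peak-TDP estimate.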
The Maia 200 is optimized for inference tasks, particularly those relying on FP4 performance. Microsoft emphasizes its design efficiency, noting that in typical operation the chip draws well below its maximum TDP, much as the Maia 100 did. While comparisons with Nvidia's B300 Ultra suggest the Maia 200 is more efficient, Nvidia's chip is tuned for more demanding workloads; the article cautions that direct comparisons are tricky, since the chips serve different markets and use cases.
Deployment of the Maia 200 has begun in Microsoft's Azure data centers, with plans for further rollout in the US. Microsoft aims to balance performance with environmental considerations, addressing public concerns about AI's impact. Although the Maia 200 faced delays before its release, it reflects Microsoft's ongoing commitment to advancing AI infrastructure while managing its social responsibilities.