Links
This article reveals OpenAI's significant spending on inference through Microsoft Azure and details the complexities of its revenue-sharing agreement with Microsoft. The reported inference costs and revenues differ from previously stated figures, suggesting that OpenAI's financial situation may be more complicated than previously understood. The analysis challenges the accuracy of OpenAI's claimed revenues.
Microsoft has unveiled Maia 200, an AI inference accelerator built on TSMC's 3nm process and designed to improve the efficiency of AI token generation. It features advanced memory systems and high-performance capabilities that make it more efficient than previous generations of Microsoft's AI hardware. Maia 200 will support multiple models, including OpenAI's GPT-5.2, and aims to streamline AI development across Microsoft's cloud services.