Saved February 14, 2026
Do you care about this?
Google is shifting its strategy by offering its custom TPUs for deployment in customer data centers, moving away from using them only in its own cloud. Meta is reportedly in talks to integrate these chips, planning a multibillion-dollar investment starting in 2027 while also renting TPU capacity from Google Cloud. This could significantly boost Google's presence in the AI chip market and challenge Nvidia's dominance.
If you do, here's more
Google is intensifying its competition with Nvidia in the AI chip market by exploring a significant deal with Meta Platforms. Reports indicate that Google plans to make its custom tensor processing units (TPUs) available for deployment in clients’ data centers, a shift from its previous strategy of using TPUs solely in its own cloud infrastructure. Meta is considering investing billions to integrate these chips into its operations, starting in 2027, while also renting TPU capacity from Google Cloud as soon as next year. Currently, Meta relies heavily on Nvidia GPUs for its AI needs.
The potential partnership could validate Google’s hardware ambitions, especially if it enables the company to capture a share of Nvidia's revenue. Google has communicated to potential clients that using TPUs on-premises can enhance security and compliance for sensitive data, which is a critical consideration for many businesses. The stakes are high; Google Cloud executives estimate that expanding TPU usage could allow Google to seize up to 10% of Nvidia’s annual revenue, translating to billions of dollars.
In the wake of this news, Alphabet's stock rose 2.1% in after-hours trading, while Nvidia's fell 1.8%. As demand for AI computing resources continues to surge, Google's move to place TPUs directly in customer facilities signals a more aggressive posture in the AI chip race.