Mountain View, CA – Google is significantly increasing its Tensor Processing Unit (TPU) infrastructure, signaling a major push to enhance and expand the capabilities of its advanced AI video generation model, Veo 3, within the Gemini App. The development was highlighted by Josh Woodward, Vice President at Google Labs and the Gemini App, who stated on social media: "We're setting up a LOAD of TPUs today and warming them up. Ever wanted to try Veo 3 in @GeminiApp?" This suggests an imminent increase in user access or processing power for the generative AI tool.
Veo 3 is Google's state-of-the-art AI model designed for creating high-quality, 8-second video clips with native audio from text prompts. It was initially unveiled at Google I/O 2025 and has since been rolling out globally to Gemini App Pro and Ultra subscribers. The model is lauded for its realism, improved prompt adherence, and advanced creative controls, including the ability to generate synchronized sound effects, dialogue, and ambient noise.
The deployment of a "load" of TPUs underscores Google's commitment to scaling its generative AI offerings. TPUs are application-specific integrated circuits (ASICs) designed by Google to accelerate machine learning workloads, providing the computational power needed to train and serve complex AI models like Veo 3. This infrastructure expansion is critical for handling increased demand and enabling further advances in AI video generation.
Access to Veo 3 is tiered: Google AI Pro subscribers typically receive a limited number of video generations per day, while Ultra plan subscribers get higher limits and exclusive access. Its integration within the Gemini App positions Veo 3 as a key feature of Google's broader AI ecosystem, which aims to deliver a more personal, proactive, and powerful AI assistant experience. The added TPU capacity is expected to support a smoother, more responsive experience as more users engage with Veo 3.