Google’s latest AI tool wants you to think you’re a music producer
Google is making it very easy to feel like a music producer, whether you have the skills or not. The company has announced that ProducerAI, an AI-powered music-making platform, is joining Google Labs, bringing together sound generation, visuals, and video into one experimental creative tool.
ProducerAI is powered by a preview version of Lyria 3, Google’s newest music-generation model. You can describe what you want, and the AI helps you build it, whether that means crafting beats, shaping melodies, or experimenting with entirely new sounds.
ProducerAI first launched in July 2025 to let users collaborate with an AI agent to generate music, workshop lyrics, and remix tracks from text prompts. Until now, it relied on its own underlying models. Joining Google Labs gives it access to a much larger AI toolkit.
Inside Google’s AI-powered music studio
As part of Google Labs, ProducerAI will draw on several of Google's models. Lyria 3 handles music generation, while Gemini powers the conversational interface that guides users through ideas and edits.
Nano Banana will generate album art, and Veo will be used to create AI-generated music videos, turning a song idea into a complete audio and visual project. Google says it will also embed SynthID watermarks into ProducerAI outputs.
The watermark flags AI-generated audio, images, video, and text, adding transparency as AI music becomes harder to distinguish from human work. Meanwhile, companies like Sony have already developed a tool to detect original songs used in AI-generated tracks.
The ProducerAI team has already worked with artists like The Chainsmokers, Lecrae, and Anjulie to shape the platform. Google positions ProducerAI as an experiment, not a replacement for musicians. Still, the launch arrives at a time when AI-generated songs are topping Billboard charts, drawing scrutiny from artists and listeners alike.