Gemini app expands tools, now generates music alongside images and video
Since its debut, the Gemini app has been a sandbox for visual creators, letting users spin up images and stitch together video with a few taps. Over the months, Google has layered in editing knobs, style filters and collaborative sharing, turning a simple sketchpad into a modest media studio. Yet the platform’s roadmap has always hinted at a broader ambition: to let anyone generate content that moves beyond the screen.
Enter Lyria 3, DeepMind's newest generative music model, now being woven into Gemini's toolkit. By opening a channel for custom melodies, the app pushes past static visuals into an auditory dimension that many creators have long craved. The move signals more than a feature add-on; it marks a shift toward a single interface where text prompts can produce pictures, clips and, now, original tunes.
For users accustomed to swapping photos, the prospect of coaxing a fresh soundtrack from the same AI feels like a natural next step.
---
A new way to express yourself: Gemini can now create music.
Since launching the Gemini app, we've built tools to encourage creative expression through images and video. Today, we're taking the next step: custom music generation. Lyria 3, Google DeepMind's latest generative music model, is rolling out today in beta in the Gemini app.
Just describe an idea, like "a comical R&B slow jam about a sock finding their match," and in a matter of seconds Gemini will translate it into a high-quality, catchy track. To push the creative envelope further, you can even ask Gemini to take inspiration from something you upload, such as a photo.
Will users embrace AI-crafted tunes? Gemini now offers a music-generation option through its Lyria 3 model, rolled out in beta within the app. The feature lets a user type a prompt or attach a photo (for example, "a comical R&B slow jam about a sock finding their match") and receive a short composition in seconds.
It extends the app’s existing image and video tools, aiming to broaden creative expression. Yet the announcement provides no data on sound quality, genre fidelity, or how the model handles complex musical structures, leaving those aspects uncertain. Because the rollout is still in beta, user feedback will likely shape future refinements, but the current release does not disclose any performance metrics or licensing considerations.
The integration appears seamless, and the interface mirrors the simplicity of the earlier visual tools. Whether the addition becomes a staple for creators or stays a niche experiment remains to be seen. For now, Gemini simply adds another layer to its generative suite, inviting experimentation without guaranteeing professional-grade results.
Common Questions Answered
How does Gemini generate music using the Lyria 3 model?
Users can generate music by typing a text prompt or uploading a photo in the Gemini app. The Lyria 3 model from Google DeepMind then creates a short musical composition based on the input, such as generating a 'comical R&B slow jam about a sock finding their match' in seconds.
What creative tools are now available in the Gemini app?
The Gemini app now offers music generation alongside existing image and video creation tools. This expansion allows users to create custom musical compositions through the Lyria 3 model, broadening the app's creative expression capabilities beyond visual media.
What makes the Lyria 3 music generation feature unique?
The Lyria 3 model enables users to create music through flexible input methods like text prompts or photo uploads. Unlike traditional music creation tools, this AI-powered feature can generate unique musical compositions quickly, potentially opening up new avenues for creative expression.