Google DeepMind's new AI tech will generate soundtracks for videos


Jun 18, 2024 - 17:30

Google's DeepMind artificial intelligence laboratory is working on a new technology that can generate soundtracks, and even dialogue, to go along with videos. The lab has shared its progress on its video-to-audio (V2A) technology, which can be paired with Google Veo and other video creation tools like OpenAI's Sora. In its blog post, the DeepMind team explains that the system can understand raw pixels and combine that information with text prompts to create sound effects for what's happening onscreen. Notably, the tool can also be used to create soundtracks for traditional footage, such as silent films and any other video without sound.

DeepMind's researchers trained the technology on videos, audio and AI-generated annotations containing detailed descriptions of sounds, along with dialogue transcripts. They said that, by doing so, the technology learned to associate specific sounds with visual scenes. As TechCrunch notes, DeepMind's team isn't the first to release an AI tool that can generate sound effects — ElevenLabs recently released one as well — and it won't be the last. "Our research stands out from existing video-to-audio solutions because it can understand raw pixels and adding a text prompt is optional," the team writes.
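
For illustration only, here is a minimal sketch of what one such training example might look like. DeepMind hasn't published its data format, so the field names below are hypothetical stand-ins for the pairing of footage, audio, AI-generated sound descriptions and dialogue transcripts the team describes.

```python
# Hypothetical sketch of one training example of the kind DeepMind describes.
# Field names are illustrative only; the actual data format is not public.
from dataclasses import dataclass

@dataclass
class TrainingExample:
    video_path: str         # the visual scene the model sees as raw pixels
    audio_path: str         # the soundtrack that accompanies the footage
    sound_description: str  # AI-generated annotation describing the audible sounds
    transcript: str         # transcript of any spoken dialogue ("" if none)

example = TrainingExample(
    video_path="city_street.mp4",
    audio_path="city_street.wav",
    sound_description="traffic hum, distant sirens, footsteps on wet pavement",
    transcript="",
)
print(example)
```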

While the text prompt is optional, it can be used to shape and refine the final product so that it's as accurate and realistic as possible. You can enter positive prompts to steer the output toward the sounds you want, for instance, or negative prompts to steer it away from the sounds you don't. In one sample, the team used the prompt: "Cinematic, thriller, horror film, music, tension, ambience, footsteps on concrete."
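
As a rough sketch of how that optional steering could look in practice, the snippet below models positive and negative prompts as optional inputs alongside the video itself. DeepMind hasn't released a public API, so the function name and parameters are purely hypothetical.

```python
# Purely illustrative sketch: generate_audio_for_video and its parameters are
# hypothetical stand-ins showing how optional positive/negative prompts could
# steer a video-to-audio model; they are not a real DeepMind API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class V2ARequest:
    video_path: str                        # source footage (raw pixels drive generation)
    prompt: Optional[str] = None           # positive prompt: sounds to steer toward
    negative_prompt: Optional[str] = None  # negative prompt: sounds to steer away from

def generate_audio_for_video(request: V2ARequest) -> str:
    """Stub standing in for a video-to-audio model call."""
    steer = request.prompt or "no prompt (raw pixels only)"
    avoid = request.negative_prompt or "none"
    return (f"soundtrack for {request.video_path}, "
            f"steered toward: {steer}; steered away from: {avoid}")

# Example mirroring the prompt quoted in the article.
print(generate_audio_for_video(V2ARequest(
    video_path="horror_scene.mp4",
    prompt="Cinematic, thriller, horror film, music, tension, ambience, footsteps on concrete",
)))
```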

The researchers admit that they're still trying to address the V2A technology's existing limitations, like the drop in audio quality that can happen when there are distortions in the source video. They're also still working on improving lip synchronization for generated dialogue. In addition, they vow to put the technology through "rigorous safety assessments and testing" before releasing it to the world.

This article originally appeared on Engadget at https://www.engadget.com/google-deepminds-new-ai-tech-will-generate-soundtracks-for-videos-113100908.html?src=rss

