Google (NASDAQ: GOOGL) has launched a new artificial intelligence (AI) tool for audio generation, alongside a watermarking feature that has earned it praise from experts and global regulators.

Dubbed Lyria, the AI music generation tool allows users to create synthetic sounds with multiple instruments and voices. In an official blog post, Google DeepMind noted that the Lyria model offers users “more nuanced control of the output’s style and performance.”

The company confirmed a partnership with YouTube to test Lyria’s functionality with a limited set of creators, using AI-generated voices from singers including Charlie Puth, Demi Lovato, T-Pain, and John Legend. Selected creators can use Lyria in YouTube Shorts, with the tool supplying lyrics and an AI-generated backing track.

Alongside Lyria’s announcement, Google launched a new watermarking feature for AI-generated audio to identify synthetically generated content. Google expanded the tool SynthID, initially designed for AI-generated images, to include audio content, affirming that the watermark does not “compromise the listening experience.”

“It does this by converting the audio wave into a two-dimensional visualization that shows how the spectrum of frequencies in a sound evolves over time,” read the blog post. “This novel method is unlike anything that exists today, especially in the context of audio.”
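
Google has not published SynthID’s audio pipeline beyond the description above, but the waveform-to-spectrogram conversion it mentions can be sketched with standard signal-processing tools. The snippet below is a minimal illustration, assuming a short-time Fourier transform via SciPy and a synthetic tone as stand-in audio; the watermark-embedding step itself is not shown because Google has not disclosed it.

```python
import numpy as np
from scipy import signal

# One second of a synthetic 440 Hz tone as stand-in audio
# (in practice this would be the AI-generated track).
sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Short-time Fourier transform: slices the waveform into overlapping
# windows and measures the frequency content of each slice, yielding the
# two-dimensional time-frequency map the blog post describes.
freqs, times, stft = signal.stft(audio, fs=sample_rate, nperseg=512)

# Magnitude spectrogram: rows are frequency bins, columns are time frames.
spectrogram = np.abs(stft)
print(spectrogram.shape)  # frequency bins x time frames
```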

Google says the watermark can still be detected even after the AI-generated audio has undergone a range of alterations, including file compression, distortion, and changes to the cadence of the sound. As an added layer, the watermark remains detectable in the AI-generated sections of a track, distinguishing them from sections produced by humans.
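
To make the robustness claim concrete, the sketch below applies the kinds of alterations named above (a crude compression stand-in, clipping distortion, and a tempo change) to a stand-in audio signal. The `detect_watermark` call at the end is purely hypothetical, since Google has not released a public detector for SynthID audio.

```python
import numpy as np
from scipy import signal

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # stand-in for a watermarked track

# 1. Crude lossy-compression stand-in: quantize samples to 8-bit depth.
compressed = np.round(audio * 127) / 127

# 2. Distortion: amplify and hard-clip the waveform.
distorted = np.clip(audio * 3.0, -1.0, 1.0)

# 3. Cadence change: resample to 90% of the original length (faster playback).
faster = signal.resample(audio, int(len(audio) * 0.9))

# A detector (hypothetical name; SynthID's audio detector is not public)
# would be expected to flag the watermark in every variant:
# for variant in (compressed, distorted, faster):
#     assert detect_watermark(variant, sample_rate)
```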

The announcement did not disclose further technical details of the new watermarking tool, with the firm wary of attempts by bad actors to circumvent the process.

“The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” said Demis Hassabis, Google DeepMind CEO.

Google has led the way on AI labeling since regulators included the requirement in their incoming regulatory frameworks for safe AI usage. With the U.S. elections around the corner, the watermarking feature could help mitigate the risks of deepfakes and impersonation.

Global push for AI regulation

As AI developers quicken the pace of innovation, regulatory authorities are scrambling to launch legal frameworks for the emerging technology. The European Union (EU) has taken the lead with its incoming EU AI Act, which includes provisions mandating that AI firms clearly label AI-generated content, respect existing privacy laws, and make proper disclosures.

In early November, the U.K. hosted a global AI summit at which 28 countries signed the Bletchley Declaration, pledging to ensure AI’s safe development and adoption. Participants reached a consensus on the risks AI poses to key sectors of the global economy and restated their commitment to a global approach to regulation.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI truly is not generative, it’s synthetic
