Google (NASDAQ: GOOGL) says it will protect users of its generative artificial intelligence (AI) products from third-party claims of copyright infringement.
Google disclosed in a company blog post that it will assume full responsibility for all legal risks associated with intellectual property (IP) violations when clients make use of its AI products. The company stated that it will rely on a “two-pronged” approach to protecting users, with the first indemnity covering the use of training data.
Under this protection, the big tech firm confirmed it will bear full legal liability if users of its AI products are sued by third parties over claims of copyright infringement regarding data used in training models.
“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” read the blog post.
The second approach deals with AI-generated content—Google pledges to indemnify users over claims of generated output infringing a third party’s IP rights. However, Google remarks that the protections are not absolute, and users must ensure that they do not attempt to violate the IP of third parties intentionally.
“This indemnity only applies if you didn’t try to intentionally create or use generated output to infringe the rights of others, and similarly, are using existing and emerging tools, for example to cite sources to help use generated output responsibly,” said Google.
Google’s indemnity offering appears limited in application to AI offerings in Google Workspace and Google Cloud, notably excluding its generative AI chatbot Bard. Currently, products covered include Duet AI in both Cloud and Workspace, Vertex AI Search, Vertex AI Conversation, Text Embedding API, Multimodal Embeddings, Codey APIs, and Visual Captioning.
According to Google, enterprise users do not need to update their existing agreements to receive the indemnity benefits, as the terms have been published on its public service terms page. The Big Tech firm described the move as only “the first step,” pledging to roll out more features to help users get the most out of generative AI.
Copyright claims threaten the future of AI
As AI continues its march, analysts have pointed to copyright and IP violations as stumbling blocks to its development. Nearly all leading AI developers are facing litigation over IP violations, with OpenAI being sued for unjust enrichment by the U.S. Authors Guild.
“These algorithms are at the heart of Defendants’ massive commercial enterprise,” said the filing. “And at the heart of these algorithms is systematic theft on a mass scale.”
Despite the weighty allegations, the defendants have denied wrongdoing, claiming fair use as a defense. The courts have yet to issue a final ruling.
Aside from IP claims, AI must grapple with a scarcity of chips and looming regulations that could slow the pace of innovation. Global regulators are scrambling to draw up new legal frameworks to ensure safe usage and to protect key and emerging sectors of the economy, such as finance, Web3, education, and health, from misuse.
For artificial intelligence (AI) to work rightly within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: AI truly is not generative, it’s synthetic