Google sues over fake AI chatbot exploiting Bard’s popularity

Google (NASDAQ: GOOGL) has filed a lawsuit against a trio of scammers who allegedly created and promoted a fake version of Bard, its artificial intelligence (AI) chatbot, to gain access to victims’ social media accounts.

In its filing, Google argues that the scammers infringed its trademarks and violated its terms of service by posing as official Google social media pages offering free downloads and new updates of Bard. Leveraging the growing popularity of AI in recent months, the scammers lured thousands of unsuspecting victims into downloading the fake Bard software.

The fake Bard download automatically installs malware on users’ devices that harvests social media login credentials and sends them to the scammers. A closer look at the scammers’ trail reveals a preference for social media accounts with a considerable following, though business accounts are also targeted.

“Google brings this action for trademark infringement and breach of contract to disrupt Defendants’ fraudulent scheme, prevent Defendants from causing further harm to Google’s users, and raise public awareness about Defendants’ misconduct so users can protect themselves,” read the filing.

Despite the detailed nature of the filing, Google says it does not know the names or identities of the scammers, opting to represent them as “DOES 1-3.”

Google is pursuing a range of relief for infringement of its registered trademarks, unfair competition, false designation of origin, and breach of contract under the company’s terms of service. The search giant seeks a permanent injunction restraining the defendants from engaging in any activity connected with Google’s trademarks, as well as treble damages for the defendants’ “willfulness.”

Google is also seeking disgorgement of all profits made by the defendants and prejudgment and post-judgment interest, along with any other relief the court may deem just and equitable.

Google’s lawsuit comes on the heels of a spike in the adoption of generative AI tools in recent months, with millions of consumers relying on the products for efficiency and productivity. Bard and OpenAI’s ChatGPT have captured a large share of the market since launch but face several lawsuits over copyright violations.

AI opens a new can of worms

Authorities and consumer groups have expressed concerns over bad actors’ potential misuse of AI to perpetrate financial fraud, impersonation, and other scams. In April, a trio of U.S. securities regulators raised the alarm over the use of AI tools to lure unsuspecting investors into YieldTrust.ai, a digital currency project they describe as a Ponzi scheme.

As governments worldwide scramble to churn out watertight AI regulations, consumers are advised to be wary of deepfakes and misinformation spread by AI tools. For their part, AI developers have pledged to abide by voluntary standards to ensure the safe innovation and use of machine learning models.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI, Blockchain, and secret to winning in technology
