
As generative artificial intelligence (AI) systems continue to hog the spotlight, officials are racing to release guardrails to guide their development and commercial applications, with Massachusetts’ Attorney General Andrea Campbell joining the fray.

Campbell urged AI developers to abide by existing consumer protection guidelines in the rollout of AI systems. Her statements came in the form of an advisory released to serve as guidance for industry service providers and the general public pending the launch of a comprehensive AI playbook.

Campbell’s advisory detailed the rise of generative AI models and their widespread use by enterprises, citing their advantages over traditional systems.

“There is no doubt that AI holds tremendous and exciting potential to benefit society and our commonwealth in many ways, including fostering innovation and boosting efficiencies and cost-savings in the marketplace,” said Campbell. “Yet, those benefits do not outweigh the real risk of harm that, for example, any bias and lack of transparency within AI systems, can cause our residents.”

The advisory took swipes at the “false advertising” of AI systems by developers. Campbell warned that AI firms engaged in deceptive marketing of generative AI systems could face dire sanctions under the state’s consumer protection rules.

AI developers are urged to roll out systems free of bias or discrimination and to ensure, in line with civil rights laws, that their models are not trained on harmful content. The advisory extends to copyrights, with firms advised to seek the consent of intellectual property (IP) holders before using protected content to train their models.

Firms are expected to include clear warnings to consumers to indicate interactions with AI systems and ensure the safety of users’ personal data. As an added layer of protection, the advisory urges AI developers to provide for the clear labeling of AI-generated content.

Aware that AI could be a “black box” without a firm guarantee of real-world applicability, Campbell extended her advisory to consumers. Per the statement, users are prohibited from using AI systems to spread misinformation or from leveraging deepfakes and chatbots to defraud unsuspecting members of the public.

Pushing for sterner rules

With the U.S. advocating for stricter AI rules, several jurisdictions appear keen on following the same path to protect consumers. In Japan, regulators are scrambling to roll out stringent rules to prevent a collapse of the “social order” stemming from election manipulation and the dissemination of discriminatory ideas.

The European Union (EU), with its forward-thinking stance, has opted to tighten the screws on the industry over the rising risks posed by its biggest players. However, EU regulators say that stern rules for the sector will not stifle the growth of new startups in the ecosystem, and that the bloc will draw on its experience with digital asset regulation.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data secure while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI Forge masterclass—Why AI & blockchain are powerhouses of technology
