Singapore is moving toward a new framework to govern the development and use of generative artificial intelligence (AI), with a focus on consumer safety and alignment with international approaches.
The proposed framework, a brainchild of the AI Verify Foundation (AIVF) and the Infocomm Media Development Authority (IMDA), is Singapore’s first attempt at regulation tailored specifically to generative AI.
Dubbed the Model AI Governance Framework for Generative AI, the proposed rules build on the provisions of the existing Model AI Governance Framework rolled out in 2020. The 2020 rulebook, while reasonably robust in its own right, does not address the emerging challenges posed by the rapid advancement of generative AI since 2022.
A joint statement by the two bodies sheds light on the regulatory direction of the proposed framework, identifying nine key areas for achieving a “trusted ecosystem” for generative AI.
The incoming provisions will demand accountability from developers and users, imposing criminal and civil liability for misuse of AI systems. The framework pushes for the trusted development of large language models (LLMs), requiring AI developers to train their models responsibly.
The proposed rulebook also prioritizes the safe handling of customers’ data, proper testing before a commercial rollout, advanced security measures, and a robust system for incident reporting.
AI firms operating in Singapore will be required to seek approval from copyright holders before using their intellectual property to train AI models. Firms are also expected to earmark a portion of their operational budgets for “safety and alignment research and development.”
Apart from leaning on previous regulatory efforts, the new proposal draws inspiration from other jurisdictions’ attempts to rein in generative AI. The statement cites the mapping of AI governance frameworks in the U.S. and the European Union (EU), while adopting an approach tailored to the local ecosystem.
“As generative AI continues to develop and evolve, there is a need for global collaboration on policy approaches,” read the joint statement. “We hope that this serves as a next step towards developing a trusted AI ecosystem, where AI is harnessed for the public good, and people embrace AI safely and confidently.”
Singapore embraces AI
Despite the absence of comprehensive AI regulation, Singaporean authorities are already incorporating AI and other emerging technologies into their operations. The Monetary Authority of Singapore (MAS) has unveiled AI-backed anti-money laundering efforts to improve its monitoring of the financial ecosystem.
“Like all technologies, AI brings great promise and great peril,” said MAS Chief Ravi Menon. “We need to be realistic in appreciating both sides. How can we harness the benefits, and not get too scared about the risks?”
Rather than rushing its AI integration, the MAS says it will play the long game, leaning on lessons gleaned from its experience with digital currencies.
For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Does AI know what it’s doing?