More than 200 prominent politicians, public figures, and scientists released a letter calling for urgent, binding international “red lines” to prevent dangerous uses of artificial intelligence (AI). The letter was released to coincide with the 80th session of the United Nations General Assembly (UNGA).
The illustrious list of signatories included ten Nobel Prize winners, eight former heads of state and ministers, and several leading AI researchers. They were joined by over 70 organizations worldwide, including Taiwan AI Labs, the Foundation for European Progressive Studies, AI Governance and Safety Canada, and the Beijing Academy of Artificial Intelligence.
“AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers,” read the letter. “We urgently call for international red lines to prevent unacceptable AI risks.”
Among the concerned figures putting their name to the call for AI caution was Nobel Peace Prize laureate Maria Ressa, who announced the letter in her opening speech at the UN General Assembly’s High-Level Week on Monday.
She warned that “without AI safeguards, we may soon face epistemic chaos, engineered pandemics, and systematic human rights violations.”
Ressa added that “history teaches us that when confronted with irreversible, borderless threats, cooperation is the only rational way to pursue national interests.”
The brief letter, published on a dedicated site called ‘red-lines.ai’, raised fears that AI could soon “far surpass human capabilities” and, in so doing, escalate risks such as widespread disinformation and manipulation of individuals. This, it claimed, could lead to national and international security concerns, mass unemployment, and systematic human rights violations.
“Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world,” the letter warned. “Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years.”
To meet this challenge, the public figures and organizations that signed the letter called on governments to act decisively, “before the window for meaningful intervention closes.”
Specifically, they suggested that an international agreement on clear and verifiable red lines, one that builds upon and enforces existing global frameworks and voluntary corporate commitments, is necessary to prevent these “unacceptable” risks.
“We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026,” said the letter.
This not-too-distant date was chosen because, according to the letter, the pace of AI development means that risks once seen as speculative are already emerging.
“Waiting longer could mean less room, both technically and politically, for effective intervention, while the likelihood of cross-border harm increases sharply,” said the signatories. “That is why 2026 must be the year the world acts.”
Former President of the UN General Assembly Csaba Kőrösi, one of the letter’s notable signatories, argued that “humanity in its long history has never met intelligence higher than ours. Within a few years, we will. But we are far from being prepared for it in terms of regulations, safeguards, and governance.”
This sentiment was echoed by Ahmet Üzümcü, former Director General of the Organisation for the Prohibition of Chemical Weapons and another signatory, who said, “it is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.”
Former President of Ireland Mary Robinson and former President of Colombia Juan Manuel Santos also put their names to the call. In addition to these international leaders were Nobel Prize recipients in chemistry, economics, peace, and physics, as well as popular and award-winning authors such as Stephen Fry and Yuval Noah Harari.
“For thousands of years, humans have learned—sometimes the hard way—that powerful technologies can have dangerous as well as beneficial consequences,” said Harari, author of the 2011 book ‘Sapiens: A Brief History of Humankind,’ which spent 182 weeks on The New York Times best-seller list. “With AI, we may not get a chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas by itself, and escape our control.”

He added that “humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”
As well as being timed for the opening of the latest UN General Assembly, the letter’s release coincidentally fell on the same day that OpenAI and Nvidia (NASDAQ: NVDA) announced a “landmark strategic partnership” for the deployment of at least 10 gigawatts of Nvidia systems, alongside a $100 billion investment from Nvidia to help power OpenAI’s next generation of AI infrastructure.
This deal between two of the world’s largest players in the AI space served to underscore the urgency of the AI red line letter.
Possible red lines
The website for the letter also provided a few examples of what these red lines might look like, suggesting that they could focus either on AI behaviors (what AI systems can do) or on AI uses (how humans and organizations are allowed to use such systems).
The site emphasized that the campaign did not endorse any specific red lines, but it provided several examples related to the areas of greatest concern. These included prohibiting:
- the delegation of nuclear launch authority, or critical command-and-control decisions, to AI systems;
- the deployment or use of weapon systems that kill humans without meaningful human control and accountability;
- the use of AI systems for social scoring and mass surveillance; and
- the uncontrolled release of cyber offensive agents capable of disrupting critical infrastructure.
In terms of the feasibility of any of these controls, the site noted that certain red lines on AI behaviors are already being operationalized in the ‘Safety and Security’ frameworks of AI companies, such as Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, and DeepMind’s Frontier Safety Framework.
A realistic goal
To further demonstrate that the letter’s goals are reasonable, the site gave a few real-world examples from history that show “international cooperation on high-stakes risks is entirely achievable.”
Two such cases were the Treaty on the Non-Proliferation of Nuclear Weapons (1970) and the Biological Weapons Convention (1975), which were negotiated and ratified at the height of the Cold War, “proving that cooperation is possible despite mutual distrust and hostility.”
More recently, it also pointed to the 2025 ‘High Seas Treaty’, which “provided a comprehensive set of regulations for high seas conservation and serves as a sign of optimism for international diplomacy.”
If controlled, AI can be a force for good
The concerns raised by the public figures, along with their calls for stronger rules and protections, came on the same day that the UN’s climate chief, Simon Stiell, gave an interview to U.K. broadsheet The Guardian, in which he said governments must step in to regulate AI technology.
Stiell argued that if governments and authorities control AI, it could prove a “gamechanger” when it comes to combating the climate crisis.
“AI is not a ready-made solution, and it carries risks. But it can also be a gamechanger,” the UN climate chief told The Guardian. “Done properly, AI releases human capacity, not replaces it. Most important is its power to drive real-world outcomes: managing microgrids, mapping climate risk, guiding resilient planning.”
Stiell’s comments demonstrate that there is a desire among current international leaders, at least at the UN, to see appropriate laws, regulations, and controls for AI, as well as to harness the technology’s potential for positive change.