South Korea’s Artificial Intelligence Act came into force last week, making it the second jurisdiction after the European Union to adopt an AI regulatory framework and the first to establish safety requirements for high-performance frontier AI systems.
The ‘Basic Act on the Development of Artificial Intelligence and Creation of a Trust Foundation’—or ‘AI Basic Act’—was passed in December 2024, enacted January 21, 2025, and came into force on January 22, 2026.
It is intended to provide a legal framework to advance South Korea’s national competitiveness in AI while ensuring ethical standards and public trust in the technology.
“The purpose of this Act is to protect human rights and dignity, and to contribute to enhance the quality of life, while strengthening national competitiveness by establishing essential regulations for the sound development of artificial intelligence (AI) and the establishment of trust,” reads a Center for Security and Emerging Technology translation of the Act.
It goes on to state directly that: “AI technology and the AI industry shall be developed in a direction that promotes safety and reliability to improve people’s quality of life.”
To achieve this, the Act sets “legal grounds for establishing a national AI control tower, an AI safety institute, and various governmental initiatives in R&D, standardization, and policies.” It also orders various initiatives to support the national AI infrastructures, such as training data and data centers, and promoting SMEs, startups, and talent in the AI field.
However, what sets the AI Basic Act apart, in global regulatory terms, is its various safety obligations.
As well as setting up the AI safety institute—tasked with securing AI safety by “protecting people’s lives, physical well-being, property, from risks associated with AI and maintaining a foundation of trust in an AI society”—“the act assigns transparency and safety responsibilities to businesses that develop and deploy ‘high impact’ AI and generative AI,” which it defines as “AI systems that have the potential to significantly impact human life, safety, or fundamental rights.”
Specific safety measures require businesses to conduct AI risk assessments, designate a local representative, and obtain verification and certification before providing high-impact AI.
Regarding these various safety mandates, Kim Kyeong-man, deputy minister of the Office of AI Policy at the Ministry of Science and ICT, said, “This is not about boasting that we are the first in the world.” Instead, said Kim, the goal is to “ensure that people can use it with a sense of trust.”
Speaking to journalists in Seoul on January 20—as reported by local outlet The Korea Herald—Kim reportedly emphasized that the aim of the country’s ‘world-first’ safety measures “is not to stop AI development through regulation” and that the law should be seen as a starting point, not a finished product.
“The legislation didn’t pass because it’s perfect,” said Kim. “It passed because we needed a foundation to keep the discussion going.”
To reduce the initial burden on businesses, the government reportedly plans to implement a grace period of at least one year, during which it will not conduct fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education.
South Korea doubling down on AI
South Korea’s AI Act coming into force marks another significant landmark in the country’s relationship with AI. Like many countries and businesses around the world, South Korea has increasingly prioritized investment in innovative tech over the past few years.
According to a recent report from Industry Research, South Korea’s superconductor market was valued at $284.24 million in 2025 and is forecast to reach $529 million by 2034, with a 15% share of the booming Asia Pacific market and a compound annual growth rate of 7.10%.
This forecast was lent further credibility last autumn, when the South Korean government revealed an 8% increase in its 2026 budget to expand investment in AI, the highest rise in government expenditure in four years.
“We are now in the era of an AI transformation, and if we fall behind in so-called ‘physical AI,’ we will have no future,” said Finance Minister Koo Yun-cheol at the time. “That is why next year’s budget is larger in scale, with more aggressive restructuring than in previous years.”
Most recently, South Korean President Lee Jae Myung met with Italian Prime Minister Giorgia Meloni to expand cooperation across a range of sectors, including AI. Among the agreements, the two countries reportedly signed a memorandum of understanding (MoU) for AI chip industry cooperation, agreed to intensify joint efforts to develop resilient, reliable critical mineral supply chains, and discussed collaboration on joint research projects.
While the AI Basic Act imposes certain new obligations on AI businesses, it also includes a range of measures to boost and support the sector, such as establishing the ‘Korea AI Promotion Association’ and promoting AI demonstration testing.
Thus, the Act could be seen as another step in the country’s effort to stay relevant in the AI race and to cement its position as a regional and global tech powerhouse.