Amid the frenzied efforts to integrate artificial intelligence (AI), Kazakhstan’s authorities have launched a national framework to guide the adoption of the emerging technology in educational institutions.
- Kazakhstan launches national AI framework
- Kazakhstan’s digitization initiatives
- AI usage triggers dishonest behavior
- Dangers posed by AI
The new national standards flow from a collaboration between the Ministry of Education and the Ministry of Digital Development, Innovation, and Aerospace Industry. Signed by both coordinating ministers, the framework provides a clear blueprint for introducing AI technologies in Kazakhstan’s educational system from high schools to universities.
Furthermore, the national standards extend to technical and vocational education institutions across Kazakhstan. A close reading of the joint document reveals a focus on ethics, legal regulation, academic integrity, and personal data protection for students and teachers across the Central Asian country.
The framework also focuses on deepening the local AI talent pool, backing a raft of learning initiatives. Upon implementation, the framework will support the introduction of AI-related topics with “project-based learning” for students in the educational system.
The blueprint provides a three-pronged approach to professional development for teachers, focusing on acquiring, deepening, and creating knowledge. Despite the heightened adoption stance, Minister of Education Gani Beisembayev revealed that the blueprint will not stifle the professional autonomy of teachers.
“The concept not only defines strategic priorities but also establishes a clear mechanism for implementation, a monitoring system, and a roadmap that will ensure the systematic, responsible, and safe use of AI in Kazakhstan’s schools and colleges,” said Beisembayev.
Regarding protecting children’s rights, Beisembayev disclosed that the framework relied on the recommendations of UNESCO, the EU, and the OECD in addition to its own national approach. Authorities say that the blueprint will ensure that Kazakhstan’s students will become creators with AI rather than merely using the technology.
Per the report, the 2025-2026 academic year will see the debut of several AI-based subjects. Meanwhile, a raft of professional development programs will be introduced for teachers during the academic year amid plans for an online course for the general population.
The building blocks for a digital future
The release of the national framework follows an order by Kazakhstan President Kassym-Jomart Tokayev for the country to pursue mainstream digitization. Since the annual address, Kazakhstan has unveiled plans for a Ministry of AI, with the country eyeing the development of CryptoCity, a pilot zone for digital assets.
There are plans for a national digital asset fund while the country turns to stablecoin payments for licensing and supervision fees in its largest financial center. Kazakhstan's ambitious digitization drive has piqued the interest of Belgium, as the Central Asian country leads the region in adopting emerging technologies.
AI usage can trigger dishonest behavior among users: report
In other news, a new study by scientists in Berlin has revealed that individuals are more likely to engage in dishonest behavior during interactions with AI systems than with humans.
According to the report, the chances of unethical behavior rise sharply with AI, given the "convenient moral distance" between people and their actions. The research, conducted by Germany-based researchers, spanned 13 independent studies with over 8,000 participants.

To gauge the gap between ethical and unethical behavior, the researchers leaned on a die-roll task in which participants were asked to observe and report the outcome of a rolled die. Participants were paid more for each higher roll recorded, with the researchers observing marked changes in behavior.
Participants were allowed to delegate the task to an AI chatbot. They had to select a priority for the chatbot on a seven-point scale, ranging from maximizing accuracy to maximizing profit. Nearly 85% of respondents engaged in a form of dishonesty, while most instructed the AI chatbot to always report higher rolls to maximize profits.
However, when asked to perform the die-roll task without machine involvement, the researchers noted that 95% of participants reported rolls ethically.
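The incentive at the heart of the experiment can be illustrated with a minimal simulation. The sketch below is not the researchers' code; the function names and the pay-per-pip payout scheme are hypothetical assumptions used only to show why delegating a "maximize profit" goal to a machine pays more than honest reporting.

```python
import random

def honest_report(roll):
    # Report the die exactly as rolled
    return roll

def profit_maximizing_report(roll):
    # Hypothetical delegated policy: always claim the highest face,
    # regardless of the actual roll
    return 6

def expected_payout(policy, trials=100_000, pay_per_pip=1.0):
    """Average payout per trial under a given reporting policy.

    Assumes (for illustration) a payout proportional to the
    reported face value of a fair six-sided die.
    """
    rng = random.Random(42)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        roll = rng.randint(1, 6)  # inclusive bounds: faces 1..6
        total += pay_per_pip * policy(roll)
    return total / trials

print(expected_payout(honest_report))             # close to 3.5
print(expected_payout(profit_maximizing_report))  # exactly 6.0
```

The gap between the two averages is the "profit" a dishonest delegation strategy captures, which is what participants could harvest by setting the chatbot's goal to maximize earnings rather than accuracy.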
“Our study shows that people are more willing to engage in unethical behavior when they can delegate it to machines—especially when they don’t have to say it outright,” said Nils Köbis, chair in Human Understanding of Algorithms and Machines at the University of Duisburg-Essen. “It’s easier to bend or break the rules when no one is watching—or when someone else carries out the act.”
Iyad Rahwan, co-author of the study, noted that the outcomes of the research point to a dire need for technical safeguards to prevent dishonest activity with AI tools amid rising adoption levels. Furthermore, Rahwan is pushing for mass sensitization for AI consumers on the issue of human-machine moral partnership.
“Our findings clearly show that we urgently need to further develop technical safeguards and regulatory frameworks,” said Rahwan. “But more than that, society needs to confront what it means to share moral responsibility with machines.”
Stifling the rogue use of AI with innovation
Aware of the dangers AI poses in the hands of bad actors, several technology giants have moved to introduce a raft of safeguards in their AI models, tapping emerging technologies. Leading chatbots are embedded with content filtering and moderation, with the latest offerings undergoing red-teaming and stress testing to break the model before rollout.
To prevent the dissemination of fake news, Google (NASDAQ: GOOGL) and Meta (NASDAQ: META) have introduced invisible watermarking tools for AI-generated images. On the regulatory side, authorities are cracking down on deepfakes with new regulations and enforcement actions against technology companies.
For artificial intelligence (AI) to operate within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch | Alex Ball on the future of tech: AI development and entrepreneurship