In recent years, artificial intelligence (AI) has evolved from a buzzword into a transformative force with profound implications for society. However, as AI becomes more sophisticated, its potential risks and ethical dilemmas grow as well. These were the recurring themes of Rappler’s “Changemaker Series” event titled “AI in Motion.”
A presentation by Gemma Mendoza, Rappler’s Head of Disinformation and Platform Research, titled “Beyond the Hype: A Reality Check on AI,” raised several critical issues surrounding AI, shedding light on both the opportunities and challenges this technology presents. One of the key points Mendoza raised was the increasing sophistication of deepfakes—AI-generated content designed to mimic real individuals or events.
Recent examples include fabricated drug allegations against Philippine President Bongbong Marcos and a deepfaked audio recording in which he supposedly orders the military to attack China in the disputed West Philippine Sea. Such incidents underscore the dangerous potential of AI when used to deceive and manipulate public perception. Another concerning example is the claim that Rappler CEO Maria Ressa, a prominent journalist and Nobel Peace Prize awardee, was offering cryptocurrency—a falsehood generated and spread by scammers using AI. These instances highlight how AI can be weaponized to spread disinformation and create chaos.
Mendoza pointed out that the threat is not just in the technology itself, but also in its ability to learn and adapt. This raises fundamental questions about how to align AI with human values, ensure its fairness and honesty, and develop scalable oversight mechanisms.
Perhaps the most pressing question is: When AI makes mistakes, who is responsible? As AI systems become more autonomous, determining accountability becomes increasingly complex. The potential risks of AI are not limited to disinformation. Mendoza also emphasized the broader dangers, such as bioterrorism, surveillance, and warfare. AI-driven technologies can be used to create and deploy biological weapons, enhance state surveillance, and even autonomously engage in warfare. These risks highlight the urgent need for comprehensive guidelines and regulations to govern the use of AI, ensuring that its development and deployment are aligned with ethical standards and societal values.
In a related talk titled “Reputation Re(AI)magined” by Patrisha Estrada, Data Science lead at Nerve, the focus shifted to the impact of AI on organizational reputation and public image.
Estrada noted that the rise of AI has intensified scrutiny of the gap between reputation and reality. AI systems, often operating as opaque “black boxes,” can harbor biases that influence their outputs. This issue of algorithmic bias is particularly concerning, as it can perpetuate existing inequalities and reinforce stereotypes. Estrada argued that AI has fundamentally changed how beliefs and expectations are shaped, expressed, and managed.
Social media platforms like TikTok and YouTube, powered by AI-driven recommendation algorithms, continue to prove that video is the most engaging form of communication. However, the use of AI on these platforms also raises questions about the accuracy and fairness of the information being disseminated.
The third talk, a highlight of the event, was delivered by Jonathan Yabut, one of Asia’s leading business speakers, who discussed the impact of AI on talent recruitment, upskilling, and retention. Yabut distinguished traditional AI, which operates on pre-programmed rules and algorithms, from generative AI, which uses deep learning to synthesize new content. While there is a legitimate concern that AI could reduce manpower needs in the future, Yabut emphasized that, when properly utilized, AI has the potential to enhance and augment human performance in the workplace.
In conclusion, these discussions underscore the dual nature of AI: it is both a powerful tool for progress and a potential source of harm.
Rappler CEO Maria Ressa’s parting words at the event serve as a poignant reminder of our responsibility in shaping AI’s future. She urged everyone to continue writing, to keep creating, and to actively participate in shaping the world we want for future generations. The question of what kind of future we want for our children, and how AI fits into that vision, is one that we must all grapple with.
For artificial intelligence (AI) to operate lawfully and thrive amid growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—keeping data safe while guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Understanding the dynamics of blockchain & AI