Artificial intelligence (AI)-generated disinformation around the 2024 presidential election has already begun. In late January, New Hampshire residents were targeted by a robocall scam that used AI to mimic the voice of President Joe Biden, discouraging them from participating in the state’s primary election.

On the call, a voice that sounded identical to Biden’s told recipients that it was “important that you save your vote for the November election.” It falsely claimed that voting in the primary would inadvertently support Republican efforts to re-elect Donald Trump, adding that “your vote makes a difference in November, not this Tuesday.”

The perpetrator behind the calls even spoofed the caller ID to display the contact information of Kathy Sullivan, the former New Hampshire Democratic Party chair and the leader of a political action committee supporting write-in campaigns for President Biden. As a result, the call appeared legitimate to recipients and even gave them a valid callback number.

Once Sullivan began receiving phone calls from people who had gotten the fraudulent robocall, she reported the incident to the New Hampshire attorney general’s office, triggering an investigation into what is being characterized as an “unlawful attempt” at voter suppression.

The attorney general’s office said the messages appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters, adding that voters should disregard the content of the message entirely.

This incident also prompted the Federal Communications Commission (FCC) to take action against unsolicited AI-generated robocalls. The Commission announced its intention to outlaw such calls under the Telephone Consumer Protection Act (TCPA), a 1991 statute that restricts automated political and marketing calls made without the recipient’s consent.

According to an FCC spokesperson, the five-member Commission is expected to vote on and pass this new change in the next few weeks.

Generative AI innovation vs. crime

The rapid evolution of generative AI is a double-edged sword for society: on one side, it drives innovation and new products that benefit the world; on the other, it hands criminals a new toolkit for exploiting others. As generative AI systems have grown significantly more capable, they have allowed bad actors to create highly convincing replicas that can be indistinguishable from the originals, as the Biden robocall incident showed.

Criminals are often early adopters of emerging technologies, exploiting these advancements to commit crimes with a relatively high success rate because most of the population, including law enforcement, is still learning how the new tools work at the time the crimes are committed.

FCC Chairwoman Jessica Rosenworcel emphasized this growing concern, saying, “AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate.”

Rosenworcel underscored the universal vulnerability to these scams, noting, “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”

Although strategies to verify authenticity and blunt the impact of generative AI-driven fraud exist, such solutions are not yet widely deployed, leaving criminals room to exploit these technologies. This reality calls for an urgent response to strengthen defenses against the misuse of AI and preserve the safety and trust of citizens in the digital age, which may be one reason the FCC is moving to mitigate AI-enabled crime.

The 2024 election and generative AI

As we get closer to the 2024 election, we should expect an unprecedented level of digital campaigning, with every political entity, from individual candidates to entire parties, likely using every tool at its disposal to get an edge, including generative AI.

This race to the top—or, in some cases, a battle to undermine the opposition—will go beyond audio content and robocalls and extend into AI-generated images and videos. These deepfakes will be created to tarnish the reputation of political figures or mislead the public.

The Biden Administration has taken a few proactive steps toward identifying and mitigating AI-induced risks. However, the unpredictability of technological advancements and the unknown attack vectors make creating a comprehensive, foolproof strategy challenging.

How generative AI is used in and around the upcoming election will be a space worth watching. As political actors pursue their objectives, often with little regard for how they achieve them, we will likely see never-before-seen electioneering tactics, thanks to AI. It would not be surprising to see legislative changes, like those proposed by the FCC, arrive only after generative AI has run its course in the political world.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data secure while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI truly is not generative, it’s synthetic
