
The Canadian Security Intelligence Service (CSIS) has issued a report detailing the threats that deepfakes created with artificial intelligence (AI) tools pose to key sectors of the nation’s economy.

In its report, the CSIS noted that the growing sophistication of deepfakes poses significant risks to Canada’s democracy, values, and way of life. The security agency pointed out that deepfakes are becoming increasingly accessible and require little technical knowledge to create, adding that the “inability to recognize or detect” them compounds the problem for the millions of Canadians exposed to AI-generated content.

“Deepfakes and other advanced AI technologies threaten democracy as certain actors seek to capitalize on uncertainty or perpetuate ‘facts’ based on synthetic and/or falsified information,” read the report. “This will be exacerbated further if governments are unable to ‘prove’ that their official content is real and factual.”

The report highlighted the rising number of deepfake pornographic videos proliferating across the internet, citing the cases of Twitch streamer “QTCinderella” and investigative journalist Rana Ayyub. The CSIS also noted a spike in fraud cases involving deepfakes, with one popular scam using an AI-generated video of Elon Musk to lure people into investing in digital currencies.

Other scams involve synthetic videos impersonating CEOs of international companies to convince employees to wire funds into the fraudsters’ bank accounts. The report also noted a rising trend of scammers using deepfakes in employment scams, costing unsuspecting victims millions of dollars.

Aware of the issues associated with deepfakes, the CSIS recommends swift regulatory action to crack down on the activities of bad actors. The security agency warns that governments are “notoriously slow” in rolling out regulatory frameworks compared to the pace of AI developers’ innovation.

“AI capabilities will continue to advance and evolve; the realism of deepfakes/synthetic media is going to improve; and AI-generated content is going to become more prevalent,” said the CSIS. “This means that governmental policies, directives, and initiatives (both present and future) will need to advance and evolve in equal measure alongside these technologies.”

Proper labeling is the way forward

In its recommendations, the CSIS noted that requiring AI-generated content to be labeled may mitigate the risks associated with deepfakes. While technical standards for labeling are still nascent, Google (NASDAQ: GOOGL) has made significant strides with its watermarking tool for AI-generated images.
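A labeling requirement could take several forms, from invisible pixel-level watermarks to plain machine-readable disclosures attached to a file. The snippet below is a minimal, hypothetical sketch of the metadata approach using Python’s Pillow library; it is not Google’s watermarking tool, and the field names are assumptions for illustration only.

```python
# Illustrative sketch only: attaching a machine-readable "AI-generated" disclosure
# to a PNG via text metadata. This is NOT a pixel-level watermark like Google's
# SynthID; it simply shows the idea of labeling AI-generated content.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image and attach an 'ai_generated' disclosure to its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical field name
    metadata.add_text("generator", generator)   # e.g. the model or tool name
    image.save(dst_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Return any text metadata found on a PNG, including the AI disclosure."""
    return dict(Image.open(path).text)

# Example usage (paths and generator name are hypothetical):
# label_as_ai_generated("render.png", "render_labeled.png", "example-image-model")
# print(read_label("render_labeled.png"))
```

Metadata labels like this are easy to apply but also easy to strip, which is why watermarking schemes that embed the signal in the content itself are being pursued alongside them.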

The CSIS urges policymakers to develop “capacities” to differentiate malicious AI content from content with positive applications in society. Ahead of the 2024 U.S. general elections, U.S. authorities have expressed concerns over the use of AI in campaigns, noting its potential to mislead millions of voters.

