
The Canadian Security Intelligence Service (CSIS) has issued a report detailing the threats that deepfakes created with artificial intelligence (AI) tools pose to key sectors of the nation’s economy.

In its report, the CSIS noted that the growing sophistication of deepfakes poses significant risks to Canada’s democracy, values, and way of life. The security agency pointed out that deepfakes are becoming increasingly accessible and do not require much technical knowledge, noting that the “inability to recognize or detect” deepfakes complicates the problem for millions of Canadians with access to content generated by AI tools.

“Deepfakes and other advanced AI technologies threaten democracy as certain actors seek to capitalize on uncertainty or perpetuate ‘facts’ based on synthetic and/or falsified information,” read the report. “This will be exacerbated further if governments are unable to ‘prove’ that their official content is real and factual.”

The report highlighted the rising number of deepfake pornographic videos proliferating across the internet, citing the examples of Twitch streamer “QTCinderella” and investigative journalist Rana Ayyub. The CSIS also noted a spike in fraud cases involving deepfakes, with one popular scam using an AI-generated video of Elon Musk to lure people into investing in digital currencies.

Other scams involve synthetic videos impersonating CEOs of international companies to convince employees to wire funds into the scammers’ bank accounts. The report also noted a rising trend of scammers using deepfakes in employment scams, costing unsuspecting victims millions of dollars.

Aware of the issues associated with deepfakes, the CSIS recommends swift regulatory action to crack down on the activities of bad actors. The security outfit warns that governments are “notoriously slow” in rolling out regulatory frameworks compared with the pace of innovation among AI developers.

“AI capabilities will continue to advance and evolve; the realism of deepfakes/synthetic media is going to improve; and AI-generated content is going to become more prevalent,” said the CSIS. “This means that governmental policies, directives, and initiatives (both present and future) will need to advance and evolve in equal measure alongside these technologies.”

Proper labeling is the way forward

In its recommendations, the CSIS noted that requiring labels on AI-generated content may mitigate the risks associated with deepfakes. While technical standards for labeling are still nascent, Google (NASDAQ: GOOGL) has made significant strides with its watermarking tool for AI-generated images.
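
Labeling schemes can take different forms, from visible disclosures to embedded provenance data. As a simplified, hypothetical illustration only, and not Google’s actual approach, which embeds its watermark in the image pixels themselves, the sketch below attaches an “AI-generated” tag to a PNG’s text metadata using Python’s Pillow library; the field names `ai_generated` and `generator` are illustrative assumptions, not any published standard.

```python
# Minimal sketch: labeling an AI-generated image via PNG text metadata.
# Illustrative assumption only; field names are hypothetical and this is not
# Google's pixel-level watermarking or any formal provenance standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, embedding a simple AI-provenance label in its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical label key
    metadata.add_text("generator", generator)   # e.g., the model that produced it
    image.save(dst_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Return any provenance labels stored in the image's text metadata."""
    image = Image.open(path)
    return {k: v for k, v in image.text.items() if k in ("ai_generated", "generator")}

if __name__ == "__main__":
    label_as_ai_generated("synthetic.png", "synthetic_labeled.png", "example-model")
    print(read_label("synthetic_labeled.png"))
```

The obvious limitation, and the reason pixel-level watermarks draw more attention, is that metadata of this kind is easily stripped when an image is re-encoded, cropped, or screenshotted.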

The CSIS urges policymakers to roll out “capacities” to distinguish malicious AI content from content with positive applications in society. Ahead of the 2024 U.S. general elections, U.S. authorities have expressed concerns over the use of AI in campaigns, noting its potential to mislead millions of voters.
