The Canadian Security Intelligence Service (CSIS) has issued a report detailing the threats that deepfakes created with artificial intelligence (AI) tools pose to key sectors of the nation’s economy.

In its report, the CSIS noted that the growing sophistication of deepfakes poses significant risks to Canada’s democracy, values, and way of life. The security agency pointed out that deepfakes are becoming increasingly accessible and require little technical knowledge, adding that the “inability to recognize or detect” deepfakes compounds the problem for millions of Canadians exposed to content generated by AI tools.

“Deepfakes and other advanced AI technologies threaten democracy as certain actors seek to capitalize on uncertainty or perpetuate ‘facts’ based on synthetic and/or falsified information,” read the report. “This will be exacerbated further if governments are unable to ‘prove’ that their official content is real and factual.”

The report highlighted the rising number of deepfake pornographic videos proliferating across the internet, citing the examples of Twitch streamer “QTCinderella” and investigative journalist Rana Ayyub. The CSIS also noted a spike in fraud cases involving deepfakes, with one popular scam using an AI-generated video of Elon Musk to lure people into investing in digital currencies.

Other scams involve synthetic videos masquerading as CEOs of international companies to convince employees to wire funds into the scammers’ bank accounts. The report also noted a rising trend of scammers using deepfakes to conduct employment scams, costing unsuspecting victims millions of dollars.

Aware of the issues associated with deepfakes, the CSIS recommends swift regulatory action to crack down on the activities of bad actors, warning that governments are “notoriously slow” in rolling out regulatory frameworks compared to the pace of innovation among AI developers.

“AI capabilities will continue to advance and evolve; the realism of deepfakes/synthetic media is going to improve; and AI-generated content is going to become more prevalent,” said the CSIS. “This means that governmental policies, directives, and initiatives (both present and future) will need to advance and evolve in equal measure alongside these technologies.”

Proper labeling is the way forward

In its recommendations, the CSIS noted that requiring AI-generated content to be labeled may mitigate the risks associated with deepfakes. While technical standards for labeling remain nascent, Google (NASDAQ: GOOGL) has made significant strides with its watermarking tool for AI-generated images.

The CSIS urges policymakers to roll out “capacities” to differentiate malicious AI content from content with positive applications in society. Ahead of the 2024 U.S. general election, U.S. authorities have expressed concerns over the use of AI in campaigns, noting its potential to mislead millions of voters.

Watch Sentinel Node: Blockchain tools to improve cybersecurity
