
Canada flags deepfakes as a societal threat

The Canadian Security Intelligence Service (CSIS) has issued a report detailing the threats posed by deepfakes using artificial intelligence (AI) tools on the key sectors of the nation’s economy.

In its report, the CSIS noted that the growing sophistication of deepfakes poses significant risks to Canada’s democracy, values, and way of life. The security agency pointed out that deepfakes are becoming increasingly accessible and do not require much technical knowledge, noting that the “inability to recognize or detect” deepfakes complicates the problem for millions of Canadians with access to content generated by AI tools.

“Deepfakes and other advanced AI technologies threaten democracy as certain actors seek to capitalize on uncertainty or perpetuate ‘facts’ based on synthetic and/or falsified information,” read the report. “This will be exacerbated further if governments are unable to ‘prove’ that their official content is real and factual.”

The report highlighted the rising number of deepfake pornographic videos proliferating across the internet, citing the examples of Twitch streamer “QTCinderella” and investigative journalist Rana Ayyub. The CSIS also noted a spike in fraud cases involving deepfakes, with one popular scam using an AI-generated video of Elon Musk to lure people into investing in digital currencies.

Other scams involve synthetic videos impersonating CEOs of international companies to convince employees to wire funds into the scammers’ bank accounts. The report also noted a rising trend of scammers using deepfakes in employment scams, costing unsuspecting victims losses running into millions of dollars.

Aware of the issues associated with deepfakes, the CSIS recommends swift regulatory action to crack down on the activities of bad actors. The security agency warns that governments are “notoriously slow” in rolling out regulatory frameworks compared to the pace of AI developers’ innovation.

“AI capabilities will continue to advance and evolve; the realism of deepfakes/synthetic media is going to improve; and AI-generated content is going to become more prevalent,” said the CSIS. “This means that governmental policies, directives, and initiatives (both present and future) will need to advance and evolve in equal measure alongside these technologies.”

Proper labeling is the way forward

In its recommendations, the CSIS noted that establishing a requirement to label AI-generated content may mitigate the risks associated with deepfakes. While technical standards for labeling remain nascent, Google (NASDAQ: GOOGL) has made significant strides with its watermarking tool for AI-generated images.

The CSIS urges policymakers to develop “capacities” to distinguish malicious AI content from content with positive applications in society. Ahead of the 2024 U.S. general elections, U.S. authorities have expressed concerns over the use of AI in campaigns, noting its potential to mislead millions of voters.

Watch: Sentinel Node — Blockchain tools to improve cybersecurity

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.