The United Kingdom government has published a policy paper on its 2026 to 2029 fraud strategy, which highlights the “growing risks” that digital assets and artificial intelligence (AI) pose to consumers, whilst praising the sector for creating business opportunities.
- UK flags growing risks from AI and digital assets
- Increased social media and digital currency fraud
- UK’s regulation on digital currency
- Evolving threats due to AI
The U.K.’s Home Office recently published a paper identifying digital currencies and AI as part of a growing fraud threat, in which victims are deceived into willingly transferring money through scams on social media platforms and messaging apps.
“Cryptoassets pose growing risks, with investment fraud among the fastest-rising threats,” read the report. It went on to argue that social media, telecommunications, digital payments, and digital currency are now part of “routine activity” embedded in daily life; as such, there are now “even more opportunities to target at unprecedented scale.”
According to the paper, fraud against individuals and businesses is the largest crime type in the U.K. and cost the economy £14.4 billion ($19.3 billion) in 2023–2024. Individuals face a very high risk, with over 4 million offenses estimated in 2025, representing 45% of all crime in England and Wales.
“The scale and sophistication of modern fraud demand a new and accelerated response,” said Home Secretary Shabana Mahmood MP in a joint statement accompanying the report.
“Criminals are exploiting new technologies, deploying increasingly sophisticated attacks and operating across borders with increasing impunity,” Minister of State at the Home Office David Hanson (Lord Hanson of Flint) remarked.
They added, “We must match their innovation with our own, to protect our people and defend our economy.”
With this in mind, the Home Office set out its fraud strategy for the next few years and committed to investing over £250 million ($335.6 million) between 2026 and 2029 to deliver the strategy. Identifying and cracking down on the growing threats posed by new technologies will be a key part of this.
Among the areas of focus for the strategy is the worrying rise in fraud in which victims are deceived into authorizing payments to criminally controlled accounts, a practice known as Authorised Push Payment (APP) fraud.
The report noted that APP fraud increasingly exploits social media and new technologies, not least digital currency and AI.
“Criminals are exploiting these platforms and networks to reach their victims easily while concealing their identities,” said the paper. “Behind the scenes, the dark web and grey web host marketplaces selling phishing kits and offering fraud-as-a-service subscriptions targeting structural vulnerabilities in these platforms.”
It added that “these services lower barriers to entry, reduce costs, and provide scalable, sophisticated methods to anyone willing to pay.”
Digital assets, with their inbuilt pseudonymity, the abundance of mixing services (platforms that pool and mix digital asset funds to obfuscate their original source), and often decentralized governance, are an attractive means of payment for fraudsters seeking to conceal their identities and ill-gotten gains.
For example, the Home Office paper highlighted how the online nature of many frauds and money laundering methods, including digital currency, often spans multiple jurisdictions, with Southeast Asia being a particular hotbed of activity.
“Cyber fraud operations in Southeast Asia are increasingly poly-criminal, intertwined with human trafficking, money laundering, corruption, and organised crime, and remain highly adaptable, relocating compounds or switching jurisdictions to evade crackdowns,” said the paper.
This assessment is consistent with recent data that identified the increased threat posed by Southeast Asian digital currency scam operations. Last month, blockchain analytics firm Chainalysis published data showing an 85% year-over-year surge in human trafficking services in 2025, “largely based in Southeast Asia.”
“Analysis reveals global reach of Southeast Asian trafficking operations, with significant cryptocurrency flows from destinations across the Americas, Europe, and Australia,” Chainalysis said.
Fortunately, regulation and enforcement capabilities around the world are beginning to meet the challenges posed by digital assets, with funds becoming more transparent and traceable, mixer services being sanctioned and shut down, and digital asset exchanges and issuers increasingly required to comply with regulatory standards.
In the U.K., various steps have been taken to address the risks presented by digital assets, several of which were outlined in the Home Office paper.
UK digital currency measures
Since 2023, firms marketing digital assets to U.K. consumers have been required to comply with the Financial Conduct Authority’s (FCA’s) Financial Promotions Regime, “ensuring that all promotions are fair, clear, and not misleading.”
More recently, in December 2025, the Treasury introduced legislation that brought digital currency firms under a full financial services regulatory framework, similar to that of traditional financial firms. Once the regime comes into force—mooted to be October 2027—digital currency firms will need to be authorized by the FCA and comply with its rules.
“The new regime covers ‘fungible and transferable’ cryptoassets and creates new regulated activities such as operating a qualifying cryptoasset trading platform and issuing a qualifying stablecoin in the U.K.,” noted the strategy report.
Firms that fall within this remit will have to, amongst other mandates, adhere to anti-money laundering and counter-terrorist financing controls, apply customer due diligence and know your customer (KYC) rules, monitor transactions, report suspicious activity, maintain adequate systems and controls, safeguard customer assets, and meet disclosure and conduct obligations.
In terms of other specific fraud-focused measures, the Home Office noted how the U.K.’s National Crime Agency (NCA) launched a nationwide campaign in 2025 to help consumers spot fraud, and that the government is supporting law enforcement, including the Serious Fraud Office (SFO), to enhance digital currency investigation capabilities.
Beyond the risks of digital assets and the U.K.’s efforts to address them, the Home Office paper did offer some praise for the sector, saying that “the UK’s financial services sectors and emerging payment and cryptoasset technologies have driven growth and created business opportunities.”
It also praised the financial services and digital currency sector’s heavy investment in fraud prevention, saying that they are “often the last line of defence in disrupting criminals’ attempts to commit both unauthorised and authorised frauds.”
However, just as enforcement becomes more sophisticated, so too do criminals’ methods, with the technology du jour in the fraudsters’ arsenal now being generative AI.
Evolving threats
On the increasing prevalence of AI in fraud, the Home Office warned that “criminals are adopting generative artificial intelligence (GenAI) tools such as deepfakes, large language models, and voice cloning to improve the sophistication, credibility and volume of attacks.”
It added that “a key threat is generative AI’s ability to create deepfakes that impersonate trusted individuals and organisations,” which are tailored to specific victims and fraud types, “making attacks more effective and harder to detect.”
According to the Home Office, fraudsters also deploy AI deepfakes “to enhance social engineering, and hack email accounts to divert payments.”
The paper argued that, as with online platforms and digital currency, the rapid development of AI brings both opportunities and risks. To address the latter, it said the government “is working to improve the security of AI models and ensure their safe adoption to drive growth.”
“The Home Office is leading work with the Department for Science, Innovation and Technology (DSIT), the Alan Turing Institute and other Government departments to design and implement a robust framework for detecting deepfake media, including fraudulent documents and synthetic audio,” the paper said. “The Home Office will continue to help protect the public from harmful and deceptive content by evaluating detection capabilities to ensure that they remain effective against emerging techniques.”
In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures the quality and ownership of data inputs, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Can we trust AI? How blockchain and IPv6 could fix accountability