The United Kingdom government’s Department for Science, Innovation and Technology (DSIT) published a research report on the country’s deepfake detection technology market, finding that the sector, while still nascent, is growing rapidly on the back of surging demand and will continue to do so if supported by clear regulation.
The report examined the current state of demand and supply in the U.K. deepfake detection market, including the barriers and key drivers shaping the sector’s future, with the aim of developing “a robust evidence base on the U.K. deepfake detection market,” identifying barriers to adoption and scaling, and exploring how the market may evolve.
Generative artificial intelligence (AI) deepfakes, in which audiovisual content is generated or manipulated using AI to misrepresent someone or something, are rapidly advancing and spreading. They are often abused by criminals in sophisticated scams and frauds, or to create explicit images of celebrities, women, and children. In 2025 alone, the U.K. government estimated that 8 million deepfakes were shared, up from 500,000 in 2023.
There is also increasing concern around the role of AI in national security and public safety from crime, all of which makes the need for advanced detection solutions “critical across sectors,” said the DSIT.
Fortunately, the report found that “deepfake detection technology is rapidly evolving to combat threats posed by AI-generated content.”
A growing sector
Since 2017, the global deepfake detection market has experienced rapid growth. Of the 59 third-party detection firms identified as of 2025, 23 are headquartered in the United States and seven in the U.K., with the remainder spread across other regions, according to DSIT.
Yet many of these providers remain in the pre-seed or seed funding stages, with an average total funding of £25 million ($33.2 million), and their primary technical approaches focus on machine learning, particularly neural network architectures and feature-based detection methods.
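To illustrate what “feature-based detection” can mean in practice, here is a minimal, purely hypothetical sketch (not the method of any provider named in the report): a hand-crafted cue that measures high-frequency residual energy in a frame, since generative pipelines can leave unnaturally smooth regions or periodic upsampling artefacts. The function names, threshold, and synthetic frames are all illustrative assumptions.

```python
import numpy as np

def highfreq_residual_energy(frame: np.ndarray) -> float:
    """Mean energy of the frame minus a 3x3 box-blurred copy.
    A crude high-frequency residual -- one classic hand-crafted
    feature; real detectors combine many such cues or learn them
    with neural networks."""
    # 3x3 box blur built from shifted sums (no SciPy dependency)
    padded = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    blurred = sum(
        padded[i:i + h, j:j + w]
        for i in range(3) for j in range(3)
    ) / 9.0
    residual = frame - blurred
    return float(np.mean(residual ** 2))

def classify(frame: np.ndarray, threshold: float = 0.01) -> str:
    """Toy decision rule (threshold is an illustrative assumption):
    flag frames whose residual energy is suspiciously low, i.e.
    smoother than natural sensor noise would allow."""
    return "fake" if highfreq_residual_energy(frame) < threshold else "real"

# Synthetic stand-ins for real frames:
rng = np.random.default_rng(0)
noisy_frame = rng.random((32, 32))        # natural-looking sensor noise
smooth_frame = np.full((32, 32), 0.5)     # unnaturally uniform region
```

In production systems this single scalar cue would be one of many inputs to a trained classifier; on its own it is trivially defeated, which mirrors the report’s point that detection tools must keep pace with evolving generation techniques.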
“Businesses are increasingly integrating deepfake detection for fraud prevention, brand protection, identity verification and content moderation,” the DSIT wrote. “While progress is being made through government initiatives, industry collaboration, technological solutions, and ongoing research, the effectiveness of detection tools remains a challenge due to the rapid evolution of deepfake technology.”
With this in mind, the report, compiled through a combination of a literature review, expert engagement, and provider mapping, aimed to establish a robust evidence base to inform how the government can support the continued growth of the U.K.’s deepfake detection market.
In terms of barriers, it concluded that one of the key factors shaping the future of the deepfake detection market will be the development of clear regulatory frameworks and enforcement mechanisms, alongside the evolving international online safety regulatory landscape. In the absence of clear and consistent regulation, both in the U.K. and internationally, uncertainty for both suppliers and customers will remain.
UK tests and regulatory efforts
In the U.K., the government recently revealed that it was collaborating with Microsoft (NASDAQ: MSFT) and other leading technology companies to create a framework to identify gaps in the country’s deepfake detection.
In February, a government-led and funded “Deepfake Detection Challenge,” hosted by Microsoft, saw more than 350 participants, including INTERPOL, members of the Five Eyes community, and big tech firms, immersed in high-pressure, real-world scenarios that challenged them to distinguish real, fake, and partially manipulated audiovisual media.
The aim of the test was to help strengthen the U.K.’s ability to detect and defend against malicious synthetic media.
At the same time, the government announced that it had “fast-tracked work to bring into force legislation making it illegal for anyone to create or request deepfake intimate images of adults without consent,” which became law on February 6.
“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” said U.K. Secretary of State for Science, Innovation and Technology Liz Kendall at the time. “We are working with technology, academic and government experts to assess and detect harmful deepfakes.”
She added that “detection is only part of the solution,” which is why the government criminalized the creation of non-consensual intimate images and is planning to go one step further, with a complete ban on the “nudification tools that fuel this abuse” in the works.