
AI progresses in digital asset space as regulators play catch up

“AI and digital assets are closely linked as AI technology can be used to improve the management, security, and efficiency of digital assets. AI can help analyze and identify patterns in data related to digital assets, which can help optimize their performance and identify potential risks or opportunities for investment. Additionally, AI can be used to detect and prevent fraud and cyberattacks, which are significant risks in the digital asset space.” – ChatGPT, March 2023

This is the optimistic assessment of the present and future of artificial intelligence (AI) in the digital asset industry, as described by ChatGPT itself. AI has increasingly pushed its way to the forefront of the cultural and technological zeitgeist, in no small part thanks to Microsoft (NASDAQ: MSFT) investing billions into ChatGPT, the artificial intelligence chatbot developed by OpenAI and launched in November 2022.

No stranger to innovation, the digital asset space has been quick to adopt AI technology where possible, with crossovers emerging across blockchain platforms and tokens. According to data from the blockchain information site CryptoSlate, 78 ‘AI cryptos’—tokens that power AI blockchain platforms such as The Graph and SingularityNET—are now worth $3.37 billion, accounting for 0.30% of the overall digital asset market.

This might seem like a relatively small percentage of the overall market, but keep in mind the relative newness of the technology. For example, nine of the top ten AI digital currencies by market cap were launched in 2021 or later—by comparison, BTC launched in 2009 and Ethereum (ETH) in 2015.

But AI is no mere fad. There are practical reasons why the evolving technology can provide value, specifically in the digital asset space. Smart contracts, fraud detection, and data analysis are just some of the areas where AI can be, and already is, being used to improve functions on the blockchain, while on the platform and exchange side, trading algorithms, portfolio management, and security are areas where AI can come into play.

How is AI being utilized in the digital asset industry?

There are already several key crossovers between the digital asset space and AI technology.

Smart contracts, for instance, are self-executing contracts with the terms of the agreement written directly into code on a blockchain. These contracts can be programmed to automatically trigger actions when certain conditions or events occur. Incorporating AI algorithms adds another layer of complexity and flexibility by using machine learning and natural language processing to analyze data and make intelligent decisions.

Before exploring this further, it’s worth outlining how AI and machine learning are distinct from automation, which is what most smart contracts currently use. Automation is software that follows pre-programmed rules, whereas AI is software designed to more closely simulate adaptable, changeable human thinking; machine learning, a subset of AI, improves its performance as it is exposed to more data.

This means that while a standard smart contract performs its automated function when certain conditions are met, an AI-augmented smart contract can adapt to changing circumstances and make decisions based on complex data inputs, such as whether the contract should be executed at all.
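To make the distinction concrete, here is a minimal, hypothetical Python sketch (real smart contracts would typically be written in a language like Solidity) contrasting a fixed automated rule with a model-assisted decision. The `fraud_model`, its features, and the risk threshold are illustrative assumptions, not part of any existing protocol.

```python
# Illustrative contrast between a fixed automated rule and an ML-assisted decision.
# The model, features, and threshold below are hypothetical examples.

def automated_release(payment_received: bool, goods_delivered: bool) -> bool:
    """Classic smart contract logic: a fixed rule that fires when conditions are met."""
    return payment_received and goods_delivered

def ml_assisted_release(payment_received: bool, goods_delivered: bool,
                        transaction_features: list[float], fraud_model) -> bool:
    """AI-augmented logic: the same rule, plus a learned model that can veto
    execution when the transaction looks anomalous."""
    if not (payment_received and goods_delivered):
        return False
    fraud_probability = fraud_model.predict_proba([transaction_features])[0][1]
    return fraud_probability < 0.2  # illustrative risk threshold
```

In the first function, the outcome is fully determined by the rule; in the second, it also depends on what the model has learned from historical data, which is what lets it adapt as circumstances change.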

A useful reference point here is the decentralized protocol Uniswap, which uses smart contracts deployed on the Ethereum blockchain to allow users to trade tokens without a centralized exchange. Instead of a central order book, the platform relies on liquidity pools and an automated market maker (AMM). Uniswap’s smart contracts automate the process of matching buyers and sellers, determining exchange rates, and executing trades; the rates themselves adjust deterministically according to a constant product formula based on each pool’s reserves, while machine learning prediction models that estimate the future price of assets are typically layered on top by traders and third-party tools rather than built into the protocol itself.

In theory, combining this kind of AI tooling with smart contract-based exchanges makes for a more efficient and accurate trading platform, one less influenced by human biases or errors.
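For reference, the deterministic part of an AMM such as Uniswap v2 fits in a few lines. The sketch below implements the standard constant product formula (x·y = k) with the protocol’s 0.3% fee; the reserve figures in the example are made up.

```python
def get_amount_out(amount_in: float, reserve_in: float, reserve_out: float,
                   fee: float = 0.003) -> float:
    """Constant product AMM pricing (x * y = k), as used by Uniswap v2.
    The output amount is whatever keeps the product of reserves constant
    after the fee-adjusted input is added to the pool."""
    amount_in_with_fee = amount_in * (1 - fee)
    return (amount_in_with_fee * reserve_out) / (reserve_in + amount_in_with_fee)

# Example with illustrative reserves: a pool holding 1,000 ETH and 1,800,000 USDC.
print(get_amount_out(10, 1_000, 1_800_000))  # USDC received for selling 10 ETH
```

Because the rate is a pure function of the pool’s reserves, any AI involvement sits outside the contract, predicting how those reserves—and therefore prices—are likely to move.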

A similar use of AI trading algorithms can be seen in Numerai, a blockchain-based hedge fund that uses AI and machine learning to make predictions about financial markets. The platform crowdsources prediction models from data scientists; these are combined into a meta-model that guides the trades the fund makes on financial markets.

An alternative application of AI technology can be found on Fetch.ai, a blockchain-based platform powered by the native token FET that aims to decentralize and automate transactions on the web. It combines AI and digital assets to create “autonomous agents” or “digital twins,” which can be adapted for various purposes across industries, from booking holidays to trading digital currency. Once taught a specific task and its parameters, the digital twins are designed to learn from their owner, customers, and business partners, improving their decision-making over time—eventually automating tasks that would normally require human intelligence and, in theory, saving businesses money and individuals time and effort.

And then, of course, there are the digital assets themselves. AGIX is the native ‘utility’ token of SingularityNET. While the token does not incorporate AI technology itself, it enables AI agents—entities that execute smart contracts and autonomously interact with other agents to exchange data and supply results—to transact with each other, streamlines payments for AI services within SingularityNET’s AI marketplace, and rewards token holders who help secure the platform through staking. AI is thus intrinsically linked to the value of AGIX.

These are early use cases that have so far successfully incorporated AI technology into digital asset projects or companies. Still, the relative newness of the field is a double-edged sword, accounting for both much of AI’s attraction and much of the risk involved in using it—both of which are worth exploring in more detail.

Cybercrime

Beyond blockchain-based AI applications, AI can also be used to solve another of the digital asset industry’s most persistent problems: fraud.

AI algorithms could give the digital asset market a heightened ability to detect and prevent cyberattacks, security breaches, and fraud—the last of these being a particularly stubborn problem.

Take the recent example of the charges against Terra founder and former fugitive Do Kwon, who was indicted on conspiracy to commit fraud related to the collapse of LUNA and the UST stablecoin. In this case, amongst other things, Kwon is accused of conducting manipulative trading strategies designed to alter the market price of UST.

Because financial markets and asset prices react to changes quickly, it can be difficult for humans to spot suspicious activity until after a problem is detected or the house of cards has come crashing down. Manipulative practices are often only uncovered in the post-mortem, after much time spent analyzing data. In contrast, AI employed by exchanges to monitor platforms and transaction systems in real time could learn the patterns involved in such incidents and flag when it thinks an asset’s value is being manipulated, potentially catching manipulation in the act rather than after the fact.

In the hands of regulators and market enforcers, advanced analytic AI can sift through huge amounts of data and flag suspicious activity and patterns much faster than a human or a team. It could also act as a deterrent in its own right, putting would-be fraudsters off market manipulation because they are far more likely to be found out.

In a July report on AI’s utility to cybersecurity, the World Economic Forum noted how “AI algorithms are pattern-detection machines with a significant edge over legacy list-based security systems. AI enhances and surpasses these systems by detecting novel threats that exhibit suspicious patterns.”
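As a rough illustration of the pattern-detection idea, the sketch below trains an unsupervised anomaly detector on historical trade data and flags unusual trades in an incoming batch. The feature set, the synthetic data, and the contamination rate are all assumptions made for the example; real surveillance systems use far richer signals.

```python
# Minimal, illustrative anomaly-detection sketch for trade monitoring.
# Production systems would use order book depth, cancellation rates,
# wash-trade graphs, and so on; these two features are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" history: columns are trade size and price deviation from mid.
history = np.column_stack([
    rng.lognormal(mean=0.0, sigma=0.5, size=5_000),   # trade size
    rng.normal(loc=0.0, scale=0.001, size=5_000),     # price deviation
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A batch of incoming trades: the last one is deliberately extreme.
incoming = np.array([
    [1.1, 0.0005],
    [0.8, -0.0012],
    [250.0, 0.05],   # very large trade pushing the price hard
])

flags = detector.predict(incoming)  # -1 = anomalous, 1 = normal
for trade, flag in zip(incoming, flags):
    if flag == -1:
        print(f"Flag for review: size={trade[0]:.1f}, deviation={trade[1]:.4f}")
```

The detector never needs a list of known manipulation tactics; it simply learns what normal activity looks like and raises anything that falls outside it, which is the edge over list-based systems the WEF describes.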

Threats and concerns

The nature of AI poses a problem that companies inside and outside the digital asset industry will need to pay attention to: garbage in, garbage out.

Like anything, the quality of AI-augmented output will only be as good as the quality of the input. This is particularly true given the rise of programs that continually learn from their inputs, such as ChatGPT. Without proper quality control, using AI can lead to unwanted and unethical results.

For example, malicious inputs can be used to manipulate an AI tool in unpredictable ways.

An early example is the case of the Microsoft chatbot Tay in 2016, a distant progenitor of ChatGPT. Tay was an AI-powered chatbot designed to interact with users on social media platforms like Twitter. However, within 24 hours of its launch, Tay’s programming was exploited by online trolls who fed it racist and offensive messages, which it then learned from and imitated, with predictably depressing results.

But inputs need not be malicious. 2016 saw the launch of the Beauty.AI contest, an AI-judged beauty contest that drew roughly 6,000 people from more than 100 countries, who submitted photos in the hopes that the company’s AI would determine their face most closely resembled “human beauty.” However, when the results came in, to the dismay of the developers, it appeared the AI did not like people with dark skin. Out of 44 winners, nearly all were white.

When applied to the digital asset industry, this kind of bias-producing or bias-replicating AI could lead to unfair market outcomes in which certain groups or individuals are disadvantaged, with serious consequences for investors, businesses, and the wider economy. For example, if an AI system used for market analysis and trend prediction is biased, it might produce inaccurate or unfair forecasts about the performance of certain assets or markets (based on geographic or cultural bias, for instance). That could lead investors to make poor decisions, resulting in financial losses for individuals, businesses, and the markets the AI deemed undesirable—garbage in, garbage out.

But the evidence for AI potentially being a blessing and a curse to the digital asset industry doesn’t end with cybercrime and racism.

Scalability

As mentioned, AI models depend on the enormous datasets they’re trained on. If data on the blockchain is being used to teach and improve the AI, then scalability—the blockchain’s ability to produce and store ever larger pools of data—becomes important too. A chain that cannot handle the volume and complexity of data that AI algorithms require will limit the potential use cases for AI-based blockchain solutions.

We’re still in the early stages of the relationship between AI and blockchain, and it remains to be seen whether projects utilizing AI on-chain are hampered by scalability issues. However, some blockchains, such as Bitcoin SV (BSV), might be able to accommodate the exponentially growing data demands of AI technology.

BTC has a block size limit of 1MB, which restricts the number of transactions that can be processed per block. BSV, in contrast, supports blocks as large as 4GB. This is what scalability refers to: the network’s capability to handle large amounts of transaction data in a short span of time; the larger the block size, the more transaction data it can handle.
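The practical difference is easiest to see as back-of-the-envelope arithmetic. The sketch below assumes an average transaction size of roughly 250 bytes and one block every 10 minutes—both simplifying assumptions, since real transaction sizes and block intervals vary.

```python
# Rough throughput implied by a block size cap, assuming ~250-byte transactions
# and one block every 10 minutes (600 seconds). Purely illustrative figures.

AVG_TX_BYTES = 250
BLOCK_INTERVAL_SECONDS = 600

def tx_per_second(block_size_bytes: int) -> float:
    return (block_size_bytes / AVG_TX_BYTES) / BLOCK_INTERVAL_SECONDS

print(f"1 MB blocks: ~{tx_per_second(1_000_000):,.0f} tx/s")       # ≈ 7 tx/s
print(f"4 GB blocks: ~{tx_per_second(4_000_000_000):,.0f} tx/s")   # ≈ 26,667 tx/s
```

Under those assumptions, a 1MB cap works out to a handful of transactions per second, while 4GB blocks imply tens of thousands—several orders of magnitude more headroom for data-heavy applications.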

Due to its massively larger blocks, BSV is scalable to a degree the limited ‘small block’ BTC, or Ethereum with its ‘gas limits,’ cannot match. For example, on March 16 this year, mintBlue, a Blockchain-as-a-Service (BaaS) platform built on BSV, set a new record by processing over 50.53 million transactions in 24 hours, the highest number ever processed on a public blockchain in a single day. That surpassed the daily transaction averages of other mainstream blockchains, with Ethereum processing around 1.2 million transactions and BTC around 300,000 per day.

The kind of scalability demonstrated by BSV could, in theory, also provide the fertile blockchain soil for AI to grow, the big blocks providing the necessary space for AI’s ever-expanding and more complex datasets.

Outside of scalability, many of the concerns around the technology derive from the fact that AI is still in its infancy—and its application in the digital asset space even more so. Some of these problems may get ironed out as development continues, especially given the speed at which the technology is moving. In the meantime, lawmakers are scrambling to keep up, which points to another obvious drawback of the emerging area: its lack of regulation and oversight.

Regulatory catch-up

The wheels of lawmaking often move slower than those of tech innovation, and legal frameworks for digital assets are still evolving, largely without accounting for the implications of AI technology. The development of AI has not gone unnoticed by regulators, however, including those in the financial sector. Comparing the regulatory approaches of three key digital asset markets gives a picture of the concerns regulators share—notably the issue of ‘explainability’—as well as how governance of the area may develop.

In the U.S., the Securities and Exchange Commission (SEC) has been actively monitoring the use of AI and machine learning in the financial industry and has issued guidance on these technologies. For instance, in June 2020, the Financial Industry Regulatory Authority (FINRA), a non-governmental organization overseen by the SEC that regulates brokers, published a report on AI in the securities industry. It suggested that firms employing AI-based applications may benefit from reviewing and updating their model risk management frameworks, particularly concerning data integrity, customer privacy, and model explainability.

This last point concerns whether machine learning models allow some level of ‘explainability’—transparency about the underlying assumptions and factors the AI weighs when making a prediction.

Explainability has become a central part of the AI conversation. For example, a digital asset exchange using AI to determine exchange rates and execute trades could be programmed to favor certain positions, so being able to explain the programming will be necessary to avoid accusations of market manipulation or fraud. Deflecting such accusations with vague references to an AI’s opaque decision-making will not be a suitable defense.
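One simple route to that kind of transparency is to use models whose decisions can be attributed to named input features. The hedged sketch below fits a small model on synthetic trade data and reports which features drove its predictions; the feature names, labels, and data are invented for illustration only.

```python
# Illustrative explainability sketch: fit a simple model and report which
# input features most influence its output. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["order_size", "spread", "volatility", "time_of_day"]

X = rng.normal(size=(2_000, len(feature_names)))
# Synthetic label: "execute trade" depends mostly on spread and volatility.
y = ((X[:, 1] - X[:, 2] + rng.normal(scale=0.3, size=2_000)) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:>12}: {importance:.2f}")
```

Reporting this kind of attribution alongside each decision is one way a platform could answer “why did the system do that?” without exposing its full model.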

Another AI-related area the SEC has voiced concerns about is Electronic Investment Advice (EIA). In 2020, the regulator’s Office of Compliance Inspections and Examinations (OCIE) included EIA among its examination priorities for the year, noting “robo-advisers”—virtual financial advisors—as a particular focus when it comes to SEC registration eligibility, cybersecurity policies, marketing practices, and adherence to fiduciary duty, including adequate disclosures and compliance programs.

Robo-advisors, like smart contracts, can be simple automated programs designed to trigger set responses when certain conditions are met, or they can incorporate more complex AI algorithms designed to learn and evolve from interactions and market information. U.S. financial advisory company Betterment is an example of the latter category, using algorithms to reduce the tax impact of transactions—for instance, by selecting which specific tax lots to sell.
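The underlying mechanics of tax-aware selling are straightforward to sketch, even though production robo-advisors wrap them in far more sophisticated models. The example below simply prefers the highest-cost-basis lots when selling—a common heuristic rather than any particular firm’s method—with invented lot data.

```python
# Illustrative tax-lot selection: sell the highest-cost-basis lots first to
# minimize realized gains. Lot data and prices are made up for the example.
from dataclasses import dataclass

@dataclass
class Lot:
    quantity: float
    cost_basis: float  # purchase price per unit

def select_lots_to_sell(lots: list[Lot], quantity_to_sell: float) -> list[tuple[Lot, float]]:
    """Pick lots in descending cost-basis order until the target quantity is met."""
    plan, remaining = [], quantity_to_sell
    for lot in sorted(lots, key=lambda l: -l.cost_basis):
        if remaining <= 0:
            break
        take = min(lot.quantity, remaining)
        plan.append((lot, take))
        remaining -= take
    return plan

holdings = [Lot(10, 150.0), Lot(5, 320.0), Lot(8, 210.0)]
for lot, qty in select_lots_to_sell(holdings, 9):
    print(f"Sell {qty} units from lot bought at {lot.cost_basis}")
```

An AI-driven advisor layers learning on top of rules like this—forecasting returns, rebalancing needs, and likely tax outcomes—which is exactly where questions of oversight and responsibility arise.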

If AI-powered advisors increasingly become the norm, it raises the question: who is responsible for advice sourced entirely from artificial intelligence?

European approach

On the other side of the Atlantic, Europe often leads the way on digital asset regulation, a notable example being the European Union’s planned Markets in Crypto Assets (MiCA) regulation, which includes requirements for issuers, service providers, and custodians of digital assets, as well as rules on governance, risk management, and data protection. The regulation focuses on Crypto Asset Service Providers (CASPs) and issuers of tokens and stablecoins, but it also has provisions relevant to the use of AI in the digital asset space.

When it comes into force in 2024, the regulation will require CASPs to use resilient, secure, and transparent ICT systems, including those based on or utilizing AI technology. CASPs will have to conduct risk assessments on AI systems and ensure that their use of these technologies is consistent with the principles of data protection and privacy, and that they can explain the results and decisions made by their AI systems—explainability again being a theme.

In addition, the draft Regulation on Artificial Intelligence of April 2021 sets out rules for the development, placing on the market, and use of AI systems in the EU. Taking a risk-based approach, it divides AI systems into four risk categories: unacceptable, high, low, and minimal.

AI systems with unacceptable risk are banned and include those that may violate fundamental rights, or that have the potential to “manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children.” High-risk AI systems, such as those that make decisions about people in areas sensitive to fundamental rights, health and safety, must meet strict requirements for their use. Low and minimal-risk AI, such as chatbots or spam filters, will remain largely unregulated so that competitiveness is maintained in the EU.

If the blockchain vision is to put the entire world on-chain, this regulation will inevitably act as a gatekeeper, determining which AI blockchain projects and companies will be allowed to list or even be used in the EU.

A vote on the draft regulation on AI is planned for later this year, with the hopes that it can be passed through and come into force along with MiCA in 2024.

AI across the channel

Meanwhile, in the U.K., the Financial Conduct Authority (FCA) published a report on AI last February that looked into issues around data governance, Model Risk Management frameworks, and operational risk management. The report concluded that:

“Many of the risks from AI models also apply to more traditional models with existing governance structures to manage those risks and issues. Therefore, where possible, firms should use and adapt existing governance frameworks to manage the novel challenges of AI.”

In addition, the U.K. financial services regulators—the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the FCA—jointly published a discussion paper on October 11 last year, seeking views on the benefits, risks, and harms related to the use of AI in the financial sector, and asking for feedback on how regulation may need to be adapted or added to in order to account for AI’s unique risks.

The discussion paper proposed a process of beginning with the current regulatory framework and considering how legal requirements and guidance in financial services apply to AI. “This evaluation will allow us to consider which ones are most relevant, explore whether they are sufficient, and identify gaps,” said the paper.

When it comes to AI in the finance space, the likely focus will be data use. If so, the most applicable current regulation relates to data security requirements, which make financial services firms responsible for securing customer data and protecting it from fraudsters—something that also applies to digital asset exchanges registered in the U.K. Any firm, digital asset or otherwise, using AI in its data processing would have to demonstrate adequate security procedures, controls, and monitoring.

The feedback window on the discussion paper closed in February this year. The results have yet to be published but will likely shape the future of AI regulation in the U.K. financial sector.

A mixed picture

As regulation continues to play catch up, the rate of evolution of AI technology could keep lawmakers permanently one step behind.

ChatGPT was released in November last year; in March this year, OpenAI launched the fourth in its GPT series, aptly named GPT-4. In only a few months, it has progressed from an impressive artificial intelligence chatbot to a multimodal large language model, which can process over eight times as many words as its predecessor and can understand images.

The rapid pace of evolution could exacerbate problems around volatility, cybersecurity risks, bias, and transparency, all of which are hot topics in the digital asset space already, without adding AI concerns on top of them. However, it’s possible the contrary could play out, with AI providing incredible opportunities for the digital asset space and actually enhancing capabilities across many of those same areas of concern.

Only time will tell whether AI technology proves a force for good innovation or exacerbates problems for the digital asset industry.

With that bittersweet image in mind, let’s have ChatGPT play us out…

“The future of AI in the crypto industry is bright, and its potential to transform the way we invest in and manage digital assets is significant. With the right approach, AI can help make the crypto industry more secure, efficient, and transparent, creating a more sustainable and accessible ecosystem for all stakeholders.”

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate with an enterprise blockchain system that ensures data input quality and ownership—keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: CoinGeek Roundtable episode 5 talks AI, ChatGPT, and Blockchain

