
LONDON, Sept. 19, 2024 — Leading experts at nChain have identified and successfully tested ways in which the output of an AI system can be verified using a blockchain. This can be used to ensure that a model operates according to its specifications, is free of critical bugs, and adheres to ethical standards such as fairness, transparency, and safety, all without revealing the system's proprietary information: a major step in bringing trust and accountability to AI.

nChain successfully demonstrated verifiable AI inference (the processing of queries) on Bitcoin, publishing the relevant transactions on the original-protocol Bitcoin Satoshi Vision blockchain (BSV blockchain). The transactions can be found on mainnet, and the code that generates them is available on GitHub.

While this is a relatively new research area that nChain only recently started exploring, the team, having already implemented verifiable computation on Bitcoin, challenged itself to see whether the same approach could also be applied to machine learning.

“Training is computation. Inference is computation. If you can do verifiable computation, then you can do verifiable training or verifiable inference,” says Dr. Hamid Attar, Lead AI Researcher at nChain.

The team trained a simple neural network on the MNIST dataset of images of handwritten digits. The model was then run on an input image, such as one provided by a user, and produced an output predicting which digit the image represents. A cryptographic proof was generated to verify that the output was indeed produced by running the model on the specified image, while hiding all information about the model. The proof was subsequently used to claim a small amount of bitcoin from the blockchain, in this case from the user. Here, Bitcoin not only provides a decentralised, independent verification platform but also facilitates micropayments between a user and an AI model.
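
To give a concrete sense of what such a proof attests to, below is a minimal sketch of an integer-only, MNIST-style forward pass whose predicted digit would be the public output of the proof. The two-layer architecture, the random weights and all names are illustrative assumptions, not nChain's implementation, and proof generation itself is omitted.

```python
# Highly simplified sketch of the demo's shape: run a small integer-only network
# on an MNIST-style input and record the prediction that a zero-knowledge proof
# would attest to. The two-layer architecture and all names are illustrative
# assumptions, not nChain's code; proof generation itself is omitted.
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0)

def integer_inference(image: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> int:
    """Forward pass using only integer arithmetic, as an arithmetic circuit requires."""
    hidden = relu(image @ w1)      # (784,) x (784, 32) -> (32,)
    logits = hidden @ w2           # (32,) x (32, 10)   -> (10,)
    return int(np.argmax(logits))  # predicted digit: the public output

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=784)    # flattened 28x28 pixel values
    w1 = rng.integers(-3, 4, size=(784, 32))  # quantised weights (kept private)
    w2 = rng.integers(-3, 4, size=(32, 10))
    print("predicted digit:", integer_inference(image, w1, w2))
```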

With the verifiable computation approach, nChain can prove cryptographically that an AI model is trained on the claimed dataset. For example, it can be proven that a generative AI model is trained on a dataset that is known to have no bias. More importantly, AI developers can prove that they did not tamper with the training dataset to train a model with malicious intent. nChain can also prove that an output of an AI model is obtained by running the specified model on the given input. For example, a piece of artwork can carry a proof showing that it was generated by an AI model from a given prompt, or a paid user can be sure they are getting answers from ChatGPT-4 rather than the free versions.

nChain is proud to have successfully demonstrated verifiable inference on Bitcoin for the first time. The relevant transactions on the Bitcoin Satoshi Vision mainnet and the code that generates them can be found in the zkScript GitHub repository.

About nChain

nChain is a leading global blockchain technology company, offering software solutions, consulting services and IP licensing for clients in various industries looking to benefit from the security, transparency, and scalability of the blockchain. Founded in 2015, with offices in Liechtenstein, Switzerland, the U.K. and Slovenia, nChain employs more than 260 staff, and has a patent portfolio of over 3,900 patents and applications, of which over 1,090 have been granted. nChain is the developer of the Bitcoin SV Node software, Teranode, Lite Client and more.

Notes to editors

The main challenge is that neural networks work with floating-point numbers, while zero-knowledge proofs work with integers. Fortunately, back in 2017, Jacob et al. described an approach to quantise AI models without losing accuracy in an academic paper (https://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf). They needed this for migrating pre-trained AI models from powerful servers to less capable mobile devices. As part of the process, they convert floating-point values to integers to reduce the size of the model for efficiency. We managed to adapt the methodology to fit our purposes and maintain an accuracy above 90%. There are other techniques to quantise an AI model; for example, research groups such as ZAMA (link) are exploring quantisation-aware training, where the model is trained with the awareness that the weights will later be converted to integers.
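
As an illustration of the quantisation step, the sketch below applies affine (scale and zero-point) quantisation to a weight matrix and measures the reconstruction error, in the spirit of Jacob et al.; the NumPy implementation, 8-bit range and layer shape are assumptions chosen for the example, not nChain's actual pipeline.

```python
# Minimal sketch of affine (scale/zero-point) quantisation of a trained layer,
# in the spirit of Jacob et al. (2018). Illustrative only: the array shapes,
# NumPy usage and 8-bit range are assumptions, not nChain's actual pipeline.
import numpy as np

def quantise(weights: np.ndarray, num_bits: int = 8):
    """Map float weights to integers with a per-tensor scale and zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin) if w_max > w_min else 1.0
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.int64)
    return q, scale, zero_point

def dequantise(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float weights; useful for checking the accuracy loss."""
    return (q.astype(np.float64) - zero_point) * scale

if __name__ == "__main__":
    w = np.random.randn(784, 32).astype(np.float32)  # e.g. first layer of an MNIST net
    q, s, z = quantise(w)
    print("max abs reconstruction error:", np.abs(w - dequantise(q, s, z)).max())
```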

To hide the model weights, we used a hash function and turned the weights into private inputs; only the hash value of the weights is published. The computation circuit combines the AI model computation with the hash function, which is used to verify the model weights against the published hash value. For proof generation, we use arkworks (link). Both the verification key and the proof are formatted as JSON files and passed to our zkScript tool to create the locking script and the unlocking script, respectively. You can find this example use case here: nchain-innovation/zkscript.
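
The commitment idea can be illustrated outside the circuit as follows: hash the serialised integer weights, publish only the digest, and later check any claimed weights against it. In the actual construction the hash is evaluated inside the proving circuit; the SHA-256 choice and the helper names below are assumptions for illustration only.

```python
# Rough illustration of committing to quantised model weights with a hash and
# checking claimed weights against that commitment. In the real construction the
# hash is evaluated inside the proving circuit; SHA-256 and the helper names here
# are purely illustrative assumptions.
import hashlib
import json
import numpy as np

def commit_to_weights(q_weights: np.ndarray) -> str:
    """Publish only the digest of the (private) integer weights."""
    return hashlib.sha256(q_weights.tobytes()).hexdigest()

def verify_commitment(q_weights: np.ndarray, published_digest: str) -> bool:
    """Re-hash the claimed weights and compare with the published digest."""
    return commit_to_weights(q_weights) == published_digest

if __name__ == "__main__":
    q = np.random.randint(0, 256, size=(784, 32), dtype=np.int64)
    digest = commit_to_weights(q)
    # In the demo, the proof and verification key are serialised to JSON and
    # turned into an unlocking/locking script pair by the zkScript tool.
    print(json.dumps({"weights_commitment": digest}, indent=2))
    assert verify_commitment(q, digest)
```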
