AI for governance: Can governments be replaced with decentralized intelligence?

A column exploring blockchain-related possibilities in the far future. Here, we look at blockchain technology in conjunction with other developing technologies. Disclaimer: this post may be closer to science fiction than fact.

It didn’t take long for technologists to start combining artificial intelligence (AI) with blockchain technology. I’ve come across projects that use AI to recognize patterns and then pair those capabilities with smart contracts, bringing the power of both technologies to unprecedented scale.

The combination, dubbed decentralized intelligence, is capable of automating consensus mechanisms as well as managerial decisions for blockchain-based organizations. By analyzing collected data, the AI can make business decisions for decentralized applications and then enforce them.
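To make that loop concrete, here is a minimal Python sketch under invented assumptions: a toy anomaly-scoring “model” analyzes revenue data, and a stand-in `TreasuryContract` object enforces whatever it decides. The contract class, the 0.3 threshold and the figures are all hypothetical; a real deployment would submit the decision as a transaction to an actual smart contract (for example via a library such as web3.py) rather than to a local Python object.

```python
# Hypothetical sketch: an off-chain "AI" decision enforced by a contract-like object.
from dataclasses import dataclass, field
from statistics import mean


def anomaly_score(daily_revenue: list) -> float:
    """Toy model: how far today's revenue sits from the recent average."""
    baseline = mean(daily_revenue[:-1])
    return abs(daily_revenue[-1] - baseline) / baseline


@dataclass
class TreasuryContract:
    """Stand-in for an on-chain contract that enforces the model's decision."""
    frozen: bool = False
    log: list = field(default_factory=list)

    def enforce(self, decision: str) -> None:
        # On a real chain this would be a signed transaction; here we just record it.
        if decision == "freeze_payouts":
            self.frozen = True
        self.log.append(decision)


if __name__ == "__main__":
    revenue = [100.0, 102.0, 98.0, 101.0, 160.0]   # last entry looks anomalous
    decision = "freeze_payouts" if anomaly_score(revenue) > 0.3 else "continue"
    contract = TreasuryContract()
    contract.enforce(decision)
    print(decision, contract.frozen)               # freeze_payouts True
```

The point of the split is the same as in the projects described above: the statistical judgment lives off-chain, while enforcement is handled by code whose execution the network can verify.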

The implications of combining the two technologies are vast, and because both fields are new and still developing, it’s hard to see their limits.

One of the biggest open questions is whether it’s possible to automate entire governments using this combination. Some have actually started trying: the UK has begun a test run of a blockchain-based social welfare payments system, and Russia has started using blockchain for voting. Stretching this use case further, I imagine a world where cases taken to the International Court could instead be decided by neutral delegates from anywhere in the world through a blockchain-enabled voting system, with decisions reached far faster than the years they take today.
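As a rough illustration of why a blockchain-style ledger suits that kind of voting, the Python sketch below chains each ballot to the previous one with a hash, so any later tampering is detectable. The structure, field names and delegate IDs are invented for illustration only; a real system would also need voter identity, ballot secrecy and consensus across many independent nodes.

```python
# Illustrative only: a hash-chained ballot ledger with a simple integrity check.
import hashlib
import json
from collections import Counter


def add_ballot(chain: list, voter_id: str, choice: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"voter": voter_id, "choice": choice, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)


def verify(chain: list) -> bool:
    """Recompute every hash; an edited ballot invalidates the rest of the chain."""
    prev_hash = "genesis"
    for block in chain:
        body = {k: block[k] for k in ("voter", "choice", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev_hash or digest != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True


if __name__ == "__main__":
    ledger: list = []
    for delegate, choice in [("d1", "uphold"), ("d2", "overturn"), ("d3", "uphold")]:
        add_ballot(ledger, delegate, choice)
    print(verify(ledger), Counter(b["choice"] for b in ledger))  # True, 'uphold': 2
```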

Government adoption

It’s easy to see how such a transition could spread quickly throughout government systems. I asked Dr. Paolo Di Prodi, senior data scientist at FortiGuard Labs, Fortinet, for his personal opinion on the matter (he would like to clarify that these are his own personal stances, and not his employer’s). Dr. Di Prodi has worked closely on machine learning applications for large organizations, including the Universities and Colleges Admissions Service (UCAS) in the UK, and Microsoft.

Dr. Di Prodi thinks the UK’s blockchain test run is particularly interesting, but that deploying the technology laterally, across all government agencies, will be difficult, since interruptions can be expected whenever administrations change.

“Yes, it will be interesting to see the outcome of that trial to manage welfare support payments in the UK. For me, it does solve a very practical security problem as well as an efficiency problem of receiving cash. The larger implication of adopting this payment system is that all the other interconnected services, like housing services, will need to be crypto-enabled to receive payments. This will reduce spending on processing and IT administration, but of course will require an initial expenditure to modernize all the IT platforms, which will need to come from the taxpayers. The problem of deploying a blockchain solution is that it will span several administrations and thus will require a long-term commitment from all political parties. I believe Russia or China will not have the same issue, paradoxically.”

Additionally, the rise of AI in governance will be slow, largely because of current limitations in acquiring the data needed to build machine learning models. Governments will probably remain cautious as the technology matures.

“One of the most interesting projects in this field is openmined.org, which allows the construction of decentralized machine learning models without disclosing private personal data. Other companies like Microsoft, Google, and Apple—under recent pressure over privacy concerns—are working on privacy-preserving machine learning, especially after the deployment of the GDPR regulation in Europe.

“The largest concern for using AI at a government level (and by AI, I mean a fully automated process) is that the decisions will be biased by the actual data, as we have seen in the press recently about racial discrimination performed by the COMPAS program in US courts. The governments of this world will probably still be cautious about using AI for decision making and instead still rely on their data scientists to propose new policies. I believe an area where governments will invest more will be in protecting and exchanging citizen data to improve the quality of service they provide,” Dr. Di Prodi wrote.
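For a sense of how a model can be built without centralizing private records, here is a deliberately simplified federated-averaging sketch in Python: each participant fits a one-parameter model on its own data and shares only the fitted parameter, never the records themselves. This illustrates the general idea only; it is not OpenMined’s actual tooling or any vendor’s implementation, and the data is synthetic.

```python
# Toy federated averaging: parameters are shared, raw data never leaves a participant.
import random


def local_fit(data: list) -> float:
    """Least-squares slope for y = w * x, fitted on one participant's private data."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den


def federated_average(participants: list) -> float:
    """Average the locally fitted parameters into a shared model."""
    weights = [local_fit(data) for data in participants]
    return sum(weights) / len(weights)


if __name__ == "__main__":
    random.seed(0)
    true_w = 2.0
    # Three "agencies", each holding its own private dataset drawn from y ≈ 2x.
    participants = [
        [(x, true_w * x + random.gauss(0, 0.1)) for x in range(1, 20)]
        for _ in range(3)
    ]
    print(round(federated_average(participants), 3))  # close to 2.0
```

Production systems typically layer secure aggregation and differential-privacy noise on top of this basic exchange, but the division of labour is the same: parameters move, data stays put.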

He also acknowledges blockchain’s advantages as a consensus mechanism and its potential to help curb undue influence and illicit activity, but admits it has its limits when it comes to battling human frailty.

“The citizen could even have a major role in deciding in real time via electronic voting. However, a shift will be required to move from a democracy to a technocracy, which might still suffer from the influence of lobbies and wealthy individuals, perhaps in a lesser form. I think AI will not be able to solve the human nature of greed, but with the power of data in citizens’ hands it will be more likely to expose fraud, evasion, crime and, in general, inefficiencies.”

Current limitations

Data collection is crucial to building decentralized intelligence, and to machine learning as a whole. But data is as powerful as it is energy-intensive, Dr. Di Prodi says, though he is optimistic that this hurdle will be overcome soon. He adds that a government fully run by decentralized intelligence depends on certain factors:

“Yes, this would be possible when we live in a fully digitized world where we could possibly collect and process all the information from the macro to the micro economic factors. This will allow the government to run, for example, future scenarios of the effect of a new tax structure, health service or pension scheme. More data will require more compute power and thus a larger footprint for the environment. Do you know, for example, that data centres across the world are already using 3% of the global electricity supply? This means we will have to be more efficient in storing and computing data. The good news is that GPUs and TPUs are overcoming the limitation of Moore’s Law suffered by CPUs, so there will be enough firepower to process all the data we need.”

Another obstacle he sees is that although AI can be encoded with moral rules, those rules would still have to be set by humans themselves, something that is easier said than done given how relative and debatable moral standards are.

“The AI will need to be programmed with moral rules. Overpopulation is a growing concern, and we can’t really save the environment if we can’t reduce our birth rate and thus consume less. Look at what China did with the one-child policy: most western countries define it as inhumane, but it was rationally the only choice to make the economy sustainable. The AI cannot make those sorts of decisions for us; we are still responsible for programming what is good and what is bad. To quote an old Latin proverb: Quis custodiet ipsos custodes (who watches the watchmen)?”

Is the singularity on the horizon?

Dr. Di Prodi doesn’t think so—at least not in the near future.

“Well, shallow or deep AI is still in its infancy; the most imminent risk to humans is just what I call ‘poor AI’. We have allowed companies like Uber (and others like Waymo, Cruise, etc.) to run their automated driving cars in our streets without thorough certification and testing. As a result, a few lethal accidents have skewed public perception of AI. There is of course debate whether the accidents would have been avoided by a real person, but in most accidents it was evident that the supervisor in the car was not vigilant. I believe the technology right now could be best applied in reducing specific behavior like drowsy driving or driving under the influence of alcohol. I believe there is a need for more regulation and testing for physical AI (any AI that interacts with the physical world), because legal frameworks like the FMVSS in the US don’t work for driverless cars.”

He says governments will have to cooperate on such a framework, and that while AI in self-driving cars will eventually decrease accidents, it is the transition period that keeps him up at night.

“All governments have the same issue and will have to work together to develop one. In the long term, when all cars are automated and able to talk to each other, there will be far fewer accidents due to human error, but it is the transition from mixed automated and manual traffic that keeps me awake at night!”

“We are far away from the singularity point, some people say it is 30 years away, and even if we achieve the computational power of the brain we are still far away from understanding how the human mind works,” he adds.

“I believe the most likely scenario will be an AI bug – where bug can be a programming error or unexpected behavior – like the flash crash of the markets in 2010 most likely caused by high frequency trading bots. The most danger comes where AI is used in a closed loop fashion with fast decision making, although we have a kill switch [if] we are not fast enough to press it as in the flash crash or in the self driving accident scenario.”
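The flash-crash worry is essentially a control-loop problem: when an automated system decides faster than a human can react, the safeguard has to be automatic too. Below is a toy Python sketch of that idea, a naive decision loop halted by a circuit breaker when the value it watches moves too far, too fast. The thresholds and price series are invented for illustration and do not come from Dr. Di Prodi.

```python
# Toy circuit breaker: an automated loop that halts itself on a sudden move.
def run_loop(prices: list, max_drop: float = 0.05) -> list:
    actions = []
    for prev, curr in zip(prices, prices[1:]):
        if (prev - curr) / prev > max_drop:
            actions.append("HALT")  # the automated "kill switch" trips itself
            break
        actions.append("sell" if curr < prev else "buy")  # naive bot decision
    return actions


if __name__ == "__main__":
    # A gentle drift followed by a sudden 8% drop, loosely echoing a flash crash.
    print(run_loop([100.0, 99.5, 99.0, 91.0, 90.0]))  # ['sell', 'sell', 'HALT']
```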

Cecille de Jesus
@the_Scientress

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
