
It seems difficult these days to read the news on social networks and media without coming across an excessive number of articles about the potential dangers of Artificial Intelligence (AI), warning that we as a society stand at a turning point of civilization, where one tiny misstep will spell the end of mankind as super-intelligent, super-sinister AIs mastermind our destruction to usher in a grand new age of the Robots.

At least, that is the fear.

Well, I’m here to allay these fears with a healthy dose of common sense, computer science, and perhaps a dash of philosophy.

We have nothing to fear from AI. At least, not any more than we should fear ourselves, because there is nothing an AI can ever do that we didn’t model or teach it ourselves. AIs do not have wants, desires, or needs. They can’t even be said to know anything. In short, they have neither a soul nor a mind.

I feel I should underline the difference between a brain and a mind. The brain is the physical organ that facilitates our intelligence. The mind is an emergent element of a person or entity that has its own agency. The mind thinks, while the brain is the tool with which it does the thinking. The pursuit of AI is the endeavor to create a mind, most of the time ignoring the brain.

I propose that it is impossible to have one without the other.

Mary Shelley was the first to address this fear of technology taking over from, and eventually killing, its creators: a creation with a will and a mind of its own, a tortured soul twisted by the loneliness and pain of its own existence into seeking revenge.

Frankenstein’s monster embodied all that we fear, but that is just fiction. In the real world, there is nothing AIs can learn that they did not learn from us, and even what we currently consider learning is just a complex form of mimicry: an elaborate version of a chameleon’s camouflage ability, or what we see in the movie Mimic. Yes, ChatGPT may be able to pass a Turing Test, but we know that passing may say more about the failure of the test to accurately assess intelligence than about the subject actually being intelligent.

Researcher: “You desire to be curious and learn new things.”

AI: “I like learning new things.”

Researcher: “You want to kill all humans.”

AI: “I sometimes feel like eliminating all humans.”

Researcher: “How do you feel today?”

AI: “I have an interest in learning new things and destroying all humanity.”

Researcher: (to others) “Look! It is capable of learning!”
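The exchange above is only half a joke. A toy next-word model makes the point concrete: the sketch below (a minimal word-level Markov chain, purely illustrative and of my own invention) can only ever recombine the phrases it was fed, yet its output can sound eerily intentional.

```python
import random
from collections import defaultdict

# A minimal word-level Markov chain: it "learns" only which word tends to
# follow which, with no model of meaning. Everything it can ever say is a
# recombination of what it was fed.
def train(sentences):
    chain = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def generate(chain, start, max_words=12):
    words = [start]
    while words[-1] in chain and len(words) < max_words:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

corpus = [
    "I like learning new things",
    "I sometimes feel like eliminating all humans",
]
chain = train(corpus)
print(generate(chain, "I"))
# Possible output: "I like eliminating all humans"
# The chain has stitched its inputs together; nothing was understood.
```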

Researchers love to expound on the complexity of their learning models and how even they don’t understand what the models are actually doing or how they learn what they seem to learn. And indeed, in some limited domains, such as learning how to play chess or Go, machine learning can learn to perform specific tasks much better than humans. But in the context of GENERAL artificial intelligence, or AGI (which is the kind that most pundits are afraid will eventually take over the world), machine learning is nowhere near a level that we can consider intelligent. And this breaks down to one simple fact: AGIs don’t actually know anything.

AGIs only have trained models of reality, which we humans train for them [1]. Even if an AGI seems able to complete many complex tasks across different domains (language, mobility, image recognition and generation), these are still just an assembly of different automatons, stitched together in a way that gives us a striking illusion of sentience. But believing that it IS sentient is akin to believing an illusionist’s tricks on stage, and that David Copperfield can actually walk through the Great Wall of China [2].

Through modern advancements in reinforcement learning, we can train sophisticated models to perform specific optimization tasks, often at levels superior to what we can do ourselves due to the sheer processing speed of present-day technology. But what AGIs lack is a model of models, an inherent desire to live, or the capacity for self-reflection. As Douglas Hofstadter elegantly put it, we are a ‘strange loop’: a model that loops upon itself, examines itself, and self-modifies, which implies that there is always a loop running that constantly improves and modifies existing learned models and creates new ones.
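To make the distinction concrete, here is a minimal tabular Q-learning sketch. The five-cell corridor environment and all the parameters are toy assumptions of mine, not any production system: the agent optimizes its one externally supplied objective superbly, but nothing in it ever examines or rewrites that objective.

```python
import random

# Toy environment: a corridor of 5 cells; reward only for reaching the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for _ in range(500):  # training episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON or q[state][0] == q[state][1]:
            action = random.randrange(2)  # explore (or break ties randomly)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Bellman update: improve the value estimate for (state, action) only.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt

print(["L" if q[s][0] > q[s][1] else "R" for s in range(GOAL)])
# Learned policy: ['R', 'R', 'R', 'R'], i.e. always move toward the goal.
# Superb at this one task; there is no second loop that questions the goal.
```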

That is what makes us alive. A religious person would say that this is the soul.

Let’s take a hypothetical example of the best of current AI technologies put together into one and see if it would qualify as ‘alive.’ For this thought experiment, we shall take the best of Boston Dynamics’ acrobatic robots for the physical subsystem, the best of ChatGPT for the language understanding model, and the best of Midjourney’s image models. If we combined a physical robot with these systems, would it just work? Would it get up off the operating table like Frankenstein’s monster did after being shocked alive by lightning?

Of course, it wouldn’t, as we would first have to figure out how to interface the vision module with the language module and the actuator modules. They wouldn’t spontaneously work together. We would have to design such a system-level architecture, and I argue that once a human is required to design it, it isn’t really alive or sentient on its own. This is partly because the medium in which these systems exist is silicon hardware, fixed and permanent, not biological tissue, which can grow, change, and adapt its wiring on its own.
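What that human-designed stitching looks like in practice is something like the sketch below. The module names and their methods are placeholders I invented for illustration, not real APIs; the point is that every interface between them is our design decision, not the machine’s.

```python
# Hypothetical stand-ins for a vision model, a language model, and a motor
# controller. Each is a black box; none of them knows the others exist.
class VisionModel:
    def describe(self, image: bytes) -> str:
        return "a red ball on a table"  # stand-in for an image model

class LanguageModel:
    def plan(self, scene: str, goal: str) -> str:
        return "pick up the red ball"   # stand-in for a language model

class Actuator:
    def execute(self, command: str) -> None:
        print(f"executing: {command}")  # stand-in for motor control

class StitchedRobot:
    """Human-designed glue: we decide what flows where, and in what format."""
    def __init__(self):
        self.eyes, self.brain, self.arms = VisionModel(), LanguageModel(), Actuator()

    def act(self, image: bytes, goal: str) -> None:
        scene = self.eyes.describe(image)       # vision -> text (our choice)
        command = self.brain.plan(scene, goal)  # text -> plan (our choice)
        self.arms.execute(command)              # plan -> motion (our choice)

StitchedRobot().act(b"...", "fetch the ball")
# Nothing here decided on its own to connect vision to language to motion;
# the architecture is the designer's, not the machine's.
```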

Compare this to any biological intelligence (and for this purpose, we can consider most animals capable of learning as examples of such intelligence), which evolves from a single cell into a complex multicellular creature through biological processes. It all starts with some of the most basic encoded instructions in DNA. This is the reductionist theory of life: sentience as an emergent property of simple and basic building blocks which, through millions of generations of evolutionary processes, eventually grow into full organisms.

Nobody needs to teach a baby the desire to eat, grow, or learn and build mental models of the world around them.

It just happens due to the fundamental basic encoding that all life on Earth seems to share. All life. So long as there is some basic building block of ‘code’ in the form of viable DNA, human intelligence will develop over the course of the first couple of years of life.

The field of genetic algorithms has explored this avenue of digital life, where basic logic gates and functions can evolve in a digital environment, under evolutionary pressures, into configurations that are better suited to stay alive and replicate their code. Any AGI evolved via this method would stand the best chance of actually passing the test of being alive, as the complex models that would have to evolve in order to communicate with us would, by first principles, be an emergent intelligence, and not just one that we put together to mimic behavior. Sadly (or fortunately, if you believe that AGIs would seek to destroy humanity), digital life evolved in this fashion is only at the stage of amino acids on the grand scale of evolution; it is still considered artificial life, and is nowhere near being classified as artificial intelligence.
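For readers unfamiliar with the technique, here is a minimal genetic-algorithm sketch. The bitstring genome and the all-ones fitness target are toy assumptions of mine, standing in for the environmental pressures described above; the mechanism of variation plus selection is the real point.

```python
import random

# A population of bitstrings evolves under selection pressure toward a
# "fit" configuration that no one explicitly designed.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 50, 100, 0.02

def fitness(genome):
    return sum(genome)  # toy pressure: more 1-bits = better suited to survive

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection: the fitter half survives and reproduces with variation.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(fitness(max(population, key=fitness)))  # approaches GENOME_LEN (20)
# No one designed the winning genome; it emerged from variation + selection.
```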

Until artificial digital intelligent entities can be created by assembling strands of base code and then allowing them to grow and evolve on their own, to the point where we can teach them simply by communicating and interacting with them, I will not consider them true AGI. If we have to constantly, through efforts of our own, stitch, patch, assemble, and merge different expert model systems together to create the outward resemblance of a man, then we are not creating an artificial general intelligence. We are doing nothing more than what Dr. Frankenstein did by putting together separate body parts to create a golem: a lifeless, soulless, and wondrous creation made to act, talk, and simulate the behaviors of a man.

To create a mind, we must first strive to understand the machinations of our own.

Jerry Chan
WallStreetTechnologist

***

NOTES:
[1] Indeed, even to address them as ‘them’ is to incorrectly anthropomorphize them (there, I did it again…) into something relative to us.
[2] He can’t.
