Artificial intelligence (AI) will not become self-aware and destroy humanity, says Prof. Yann LeCun. The AI godfather dismissed claims of AI posing a threat to humanity as “premature and preposterous.”
AI will surpass human intelligence in the future in most domains, Meta’s (NASDAQ: META) Chief AI Scientist stated in an interview with the Financial Times. However, it will still be under our control and will assist us in tackling some of the biggest challenges we face, from climate change to curing diseases.
The perceived threat from AI is just a product of science fiction and of watching the Terminator movies a few too many times, he quipped.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans. If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither,” he said.
LeCun is one of the world’s top experts in deep neural networks. In 2018, he won the Turing Award, considered the Nobel Prize of computer science, alongside Geoffrey Hinton, formerly of Google (NASDAQ: GOOGL), and Yoshua Bengio. Since then, Hinton and Bengio have been publicly warning about the risks of rapid AI advancement, while LeCun has continued to work on AI development at Meta.
LeCun told the FT that current fears about AI rest on a misconception of its capabilities. Companies like OpenAI and Google have misled the public into believing their large language models (LLMs) can do far more than they actually can, he argued.
“They just do not understand how the world works. They’re not capable of planning. They’re not capable of real reasoning. We do not have completely autonomous, self-driving cars that can train themselves to drive in about 20 hours of practice, something a 17-year-old can do,” he said.
But what happens when these neural networks become as intelligent as human beings? For LeCun, the solution would be to encode “moral character” into them, governing their behavior much as laws and morality govern people.
AI godfather: Don’t regulate AI
LeCun also tore into proposed AI regulation, arguing that the technology has yet to reach a level that requires oversight. Regulating it now, he said, would be like regulating the jet airline industry in the mid-1920s, some 15 years before the world’s first jet aircraft took flight.
“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” he noted.
The debate on AI regulation rages on. Legislators continue to express concern that, left unregulated, AI could become a threat to humanity. But even among legislators, how strict the rules should be has sparked intense debate.
Asian countries have favored a lax approach. Japan is leading a campaign among the G7 for a hands-off regulatory framework, while Southeast Asian countries have drafted a framework that leans towards promoting the technology.
In contrast, Europe has pushed ahead with its stringent AI Act, which seeks to rein in advances in the sector and focuses on copyright protection, data integrity, and human rights.
LeCun is against both approaches.
“Regulating research and development in AI is incredibly counterproductive. They want regulatory capture under the guise of AI safety.”
For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures the quality and ownership of data input, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: AI truly is not generative, it’s synthetic