
A recent paper from a group of Huawei researchers is calling for “embodied” artificial intelligence (AI) as the next big thing for the emerging ecosystem, poking holes in the argument for scaling.

According to the paper, achieving artificial general intelligence (AGI) will not be a walk in the park for frontier companies, given the absence of a consensus definition of the concept. Generally, AGI refers to intelligence that matches or exceeds human abilities, including the capacity to learn and to solve unfamiliar problems.

Despite the leaps in large language models (LLMs), AI is far from achieving superintelligence, with current models described as “static and unable to evolve with time and experience.” The researchers argue that the best path to superintelligent AI systems is through embodying AI rather than simply scaling LLMs, as is widely believed.

“It is a prevalent belief that simply scaling up such models, in terms of data volume and computational power, could lead to AGI,” the paper read. “We contest this view. We propose that true understanding, not only propositional truth but also the value of propositions that guide us how to act, is achievable only through E-AI agents that live in the world and learn of it by interacting with it.”

For the researchers, giving AI a body capable of interacting with its surroundings will offer several benefits, including achieving general intelligence. Four components are necessary for embodiment to take place, with perception at the top of the pile.

The second component is the ability to take action based on perceived data, which the researchers subdivide into reactive and goal-directed actions. Reactive actions serve primarily for “self-preservation,” while goal-directed actions are necessary to achieve “complex, high-level objectives.”

Memory is another essential component for embodied agents pursuing general intelligence. Finally, the ability to learn from that memory distinguishes truly intelligent AI systems, with simulators and other emerging technologies offering new learning pathways.
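The four components described above—perception, action (reactive and goal-directed), memory, and learning—can be pictured as a simple agent loop. The sketch below is purely illustrative; the class and method names are hypothetical and do not come from the Huawei paper, which does not prescribe an implementation.

```python
# Illustrative sketch of the four embodiment components: perception,
# action (reactive and goal-directed), memory, and learning.
# All names here are hypothetical, not taken from the paper.

class EmbodiedAgent:
    def __init__(self):
        # Memory: a record of (observation, action, outcome) experiences.
        self.memory = []

    def perceive(self, environment):
        # Perception: sense the current state of the surroundings.
        return environment["state"]

    def act(self, observation, goal=None):
        # Reactive action: an immediate response for self-preservation.
        if observation == "danger":
            return "retreat"
        # Goal-directed action: pursue a complex, high-level objective.
        return f"move_toward:{goal}" if goal else "explore"

    def remember(self, observation, action, outcome):
        # Memory: store the experience for later learning.
        self.memory.append((observation, action, outcome))

    def learn(self):
        # Learning: adapt from accumulated memory. Here we merely count
        # successes; a real system would update an internal model.
        return sum(1 for _, _, outcome in self.memory if outcome == "success")


agent = EmbodiedAgent()
obs = agent.perceive({"state": "safe"})
action = agent.act(obs, goal="charging_station")
agent.remember(obs, action, "success")
```

In this toy loop, the agent only grows more capable by interacting with its environment and accumulating experience—the core of the paper's argument against scaling alone.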

Challenges to embodying AI

The paper highlights several downsides associated with embodying AI, including “noise and uncertainty,” which can affect the system’s decision-making abilities.

Hardware limitations pose another challenge for embodying AI, given the cost and energy consumption rates of GPU clusters.

There is also the concern that embodied AI systems will develop an “egocentric” perspective, which may open a whole new set of issues. The researchers predict ethical dilemmas in communication with humans, data collection methods, and outputs, given the novelty of the technique.

In order for artificial intelligence (AI) to operate within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: What do blockchain and AI have in common? It’s data
