
Researchers at Google DeepMind (NASDAQ: GOOGL) and Google Research have proposed a new method to extend the capabilities of artificial intelligence (AI) models by interlinking them with other existing AI systems.

The researchers reported impressive results in early studies using their novel Composition to Augment Language Models (CALM) framework. According to a 17-page report, the method allows AI researchers to augment existing large language models (LLMs) with new capabilities.

“Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains,” read the report. “However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills.”

Rather than training new LLMs with new capabilities from scratch, the method enables new functionalities in existing models by composing them with their peers. The researchers submit that promoting interoperability via CALM will save the time and cost of training while helping to “enable newer capabilities.”

Through CALM, LLMs can preserve their existing functionalities while unlocking applications in new domains without the hassle of fresh fine-tuning or retraining processes.
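The composition idea described above can be sketched in a toy example: both models stay frozen, and only a small learned “bridge” (cross-attention from the anchor model’s hidden states to the augmenting model’s) is introduced. This is an illustrative reconstruction under stated assumptions, not the paper’s implementation — the `FrozenModel` and `CrossAttentionBridge` classes, the dimensions, and the single-head attention are all simplifications invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D_ANCHOR, D_AUG, SEQ = 8, 4, 5  # toy dimensions (assumed, not from the paper)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class FrozenModel:
    """Stand-in for a frozen LLM layer: fixed random weights, never updated."""
    def __init__(self, dim):
        self.w = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def hidden_states(self, x):
        return np.tanh(x @ self.w)

class CrossAttentionBridge:
    """The only trainable piece: projects the augmenting model's states into
    the anchor's space and mixes them in via single-head cross-attention."""
    def __init__(self, d_anchor, d_aug):
        self.wq = rng.standard_normal((d_anchor, d_anchor)) / np.sqrt(d_anchor)
        self.wk = rng.standard_normal((d_aug, d_anchor)) / np.sqrt(d_aug)
        self.wv = rng.standard_normal((d_aug, d_anchor)) / np.sqrt(d_aug)

    def __call__(self, h_anchor, h_aug):
        q = h_anchor @ self.wq                    # queries from the anchor model
        k = h_aug @ self.wk                       # keys from the augmenting model
        v = h_aug @ self.wv                       # values from the augmenting model
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
        return h_anchor + attn @ v                # residual add: anchor behavior preserved

anchor = FrozenModel(D_ANCHOR)   # e.g. a large general-purpose LLM
augment = FrozenModel(D_AUG)     # e.g. a smaller domain-specific model
bridge = CrossAttentionBridge(D_ANCHOR, D_AUG)

x_anchor = rng.standard_normal((SEQ, D_ANCHOR))
x_aug = rng.standard_normal((SEQ, D_AUG))
combined = bridge(anchor.hidden_states(x_anchor), augment.hidden_states(x_aug))
print(combined.shape)  # (5, 8): anchor-shaped states enriched with augmenting info
```

Because the combined state is a residual on top of the anchor’s own hidden states, the anchor’s original behavior survives composition; only the small bridge would be trained, which is what lets CALM skip full fine-tuning or retraining.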

The researchers arrived at their conclusions by augmenting Google’s PaLM2-S, an LLM touted to possess functionality comparable to OpenAI’s GPT-4, with smaller AI models. In their submission, the new hybrid model demonstrated a significant improvement over baseline in coding and translation tasks.

“Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks—on-par with fully fine-tuned counterparts,” said the researchers.

The research has several applications for the fledgling area of generative AI, including potential use cases in LLMs without English support and providing an answer to AI’s scaling and copyright issues.

Increasing research to enhance AI capabilities

In early 2023, researchers at Austria’s University of Innsbruck introduced a new metric for measuring the temporal validity of complex statements in LLMs, designed to improve chatbot capabilities.

Another study found that a streak of mainstream AI models favor sycophancy over factual responses as a result of their training methods, while other researchers are probing the integration of AI and blockchain technology.

Despite the pace of AI research, developers have to grapple with copyright complaints and the grim prospects of regulatory enforcement in the fledgling sector.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI takes center stage at London Chatbot Summit
