02-22-2025

Researchers at Google DeepMind (NASDAQ: GOOGL) and Google Research have proposed a new method to extend the capabilities of artificial intelligence (AI) models by interlinking them with other existing AI systems.

Early studies of the researchers' novel Composition to Augment Language Models (CALM) framework scored impressive results. According to the 17-page report, the method allows AI researchers to augment existing large language models (LLMs) with new capabilities.

“Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains,” read the report. “However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills.”

Rather than training new LLMs from scratch, the new method unlocks new functionality in existing models by augmenting them with their peers. The researchers submit that promoting interoperability via CALM will save the time and cost otherwise required to “enable newer capabilities.”

Through CALM, LLMs can preserve their existing functionalities while unlocking applications in new domains without the hassle of fresh fine-tuning or retraining processes.
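The report describes composing a frozen “anchor” model with a frozen “augmenting” model by training only a small set of new parameters: learned projections that map the augmenting model’s layer representations into the anchor’s space, plus cross-attention that lets anchor states attend to them. A rough NumPy sketch of one such composition step follows; all names, dimensions, and weight shapes here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def calm_compose(anchor_h, aug_h, W_proj, W_q, W_k, W_v):
    """Combine one anchor-layer state with one augmenting-layer state."""
    aug_p = aug_h @ W_proj                        # project into the anchor's width
    q = anchor_h @ W_q                            # anchor provides the queries
    k, v = aug_p @ W_k, aug_p @ W_v               # augmenting model provides keys/values
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return anchor_h + attn @ v                    # residual add keeps anchor behavior intact

# Toy dimensions: anchor width 8, augmenting width 4.
anchor_h = rng.normal(size=(5, 8))   # 5 anchor-model token states
aug_h = rng.normal(size=(7, 4))      # 7 augmenting-model token states
W_proj = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))

out = calm_compose(anchor_h, aug_h, W_proj, W_q, W_k, W_v)
print(out.shape)  # (5, 8): same shape as the anchor states
```

Because the output has the anchor’s shape and is a residual update, both base models can stay frozen while only the small projection and attention weights are trained, which is what lets CALM avoid full retraining.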

The researchers arrived at their conclusions by augmenting Google’s PaLM2-S, an LLM touted to possess functionalities comparable to OpenAI’s GPT-4, with smaller AI models. In their experiments, the hybrid model demonstrated a significant improvement over the baseline in coding and translation tasks.

“Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks—on-par with fully fine-tuned counterparts,” said the researchers.

The research has several applications for the fledgling area of generative AI, including potential use cases in LLMs without English support and providing an answer to AI’s scaling and copyright issues.

Increasing research to enhance AI capabilities

In early 2023, researchers at Austria’s University of Innsbruck introduced a new metric for measuring the temporal validity of complex statements in LLMs, designed to improve chatbot capabilities.

Another study found a streak of mainstream AI models favoring sycophancy over factual responses as a result of their training methods, while other researchers are probing integrations between AI and blockchain technology.

Despite the pace of AI research, developers have to grapple with copyright complaints and the grim prospects of regulatory enforcement in the fledgling sector.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI takes center stage at London Chatbot Summit
