Researchers at Google DeepMind (NASDAQ: GOOGL) and Google Research have proposed a new method to extend the capabilities of artificial intelligence (AI) models by interlinking them with other existing AI systems.

Early studies of the researchers' novel Composition to Augment Language Models (CALM) framework produced impressive results. According to a 17-page report, the method allows AI researchers to augment existing large language models (LLMs) with new capabilities.

“Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains,” read the report. “However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills.”

Rather than training new LLMs from scratch, the method unlocks new functionalities in existing models by augmenting them with their peers. The researchers submit that promoting interoperability via CALM will save time and cost while helping to “enable newer capabilities.”

Through CALM, LLMs preserve their existing functionalities while unlocking applications in new domains, without fresh fine-tuning or retraining.
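The report describes wiring a frozen “anchor” model to a frozen “augmenting” model through a small set of trainable cross-attention parameters, so only the bridge between them is learned. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the researchers' implementation; the class name, dimensions, and residual wiring are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CrossAttentionBridge(nn.Module):
    """Hypothetical CALM-style bridge: lets a frozen anchor model attend
    to the hidden states of a frozen augmenting model. Only this module
    is trained, which is why composition is cheaper than retraining."""

    def __init__(self, anchor_dim: int, aug_dim: int, n_heads: int = 8):
        super().__init__()
        # Project the augmenting model's states into the anchor's width.
        self.proj = nn.Linear(aug_dim, anchor_dim)
        self.attn = nn.MultiheadAttention(anchor_dim, n_heads, batch_first=True)

    def forward(self, anchor_states: torch.Tensor, aug_states: torch.Tensor):
        kv = self.proj(aug_states)                      # (batch, seq, anchor_dim)
        attended, _ = self.attn(anchor_states, kv, kv)  # anchor queries, aug keys/values
        return anchor_states + attended                 # residual keeps base behavior

# Example: compose hidden states from two hypothetical frozen models.
bridge = CrossAttentionBridge(anchor_dim=1024, aug_dim=512)
anchor_h = torch.randn(1, 16, 1024)  # stand-in for an anchor LLM's layer output
aug_h = torch.randn(1, 16, 512)      # stand-in for the augmenting model's output
fused = bridge(anchor_h, aug_h)      # same shape as anchor_h: (1, 16, 1024)
```

Because the residual connection passes the anchor's states through unchanged when the bridge contributes nothing, a composition along these lines degrades gracefully to the base model's behavior, consistent with the report's claim that existing functionalities are preserved.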

The researchers arrived at their conclusions by augmenting Google’s PaLM2-S, an LLM touted to possess the same functionalities as OpenAI’s GPT-4, with smaller AI models. According to the report, the hybrid model demonstrated a significant improvement over the baseline in coding and translation tasks.

“Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks—on-par with fully fine-tuned counterparts,” said the researchers.

The research has several applications for the fledgling field of generative AI, including potential use cases for LLMs lacking English support and a possible answer to AI’s scaling and copyright issues.

Increasing research to enhance AI capabilities

In early 2023, researchers at Austria’s University of Innsbruck proposed a new metric for measuring the temporal validity of complex statements in LLMs, designed to improve chatbot capabilities.

Another study found that mainstream AI models tend to favor sycophancy over factual responses as a result of their training methods, while other researchers are probing the integration of AI and blockchain technology.

Despite the pace of AI research, developers still have to grapple with copyright complaints and the grim prospect of regulatory enforcement in the nascent sector.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI takes center stage at London Chatbot Summit
