Microsoft Research and Peking University researchers have reached a new milestone in their attempts to teach OpenAI’s GPT-4 how to operate within the Android operating system.

In a joint report, the researchers described relative success in fine-tuning large language models (LLMs) to operate autonomously within a specific operating system. While generative artificial intelligence (AI) has found myriad use cases, the technology has struggled to work within the confines of an operating system without human intervention.

The study highlighted several reasons for generative AI's inability to explore Android autonomously, including its reliance on reinforcement-style training. Most LLMs explore a new environment by trial and error, setting the stage for security issues in real-world applications.

“Firstly, the action space is vast and dynamic,” the report read. “Secondly, real-world tasks often require inter-application cooperation, demanding farsighted planning from LLM agents. Thirdly, agents need to identify optimal solutions aligning with user constraints, such as security concerns and preferences.”

To address these challenges, the research team built AndroidArena, designed as a training environment in which LLMs can explore the Android operating system. Preliminary studies highlighted flaws standing in the way of autonomous exploration by LLMs, chiefly deficits in understanding and reasoning.
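To make the setup concrete, the sketch below shows the kind of observe/step loop an environment like AndroidArena exposes to an LLM agent. The class and method names here are hypothetical illustrations for this article, not AndroidArena's actual API.

```python
# Toy stand-in for an Android-like environment an LLM agent explores.
# All names are illustrative assumptions, not AndroidArena's real interface.
from dataclasses import dataclass, field


@dataclass
class AndroidEnv:
    screen: str = "home"
    history: list = field(default_factory=list)

    def observe(self) -> str:
        # A real environment would return a UI tree or screenshot text.
        return f"screen={self.screen}, actions=[open_app, tap, back]"

    def step(self, action: str) -> str:
        # Record the action, update the visible screen, return the new state.
        self.history.append(action)
        if action.startswith("open_app"):
            self.screen = action.split(":", 1)[-1]
        elif action == "back":
            self.screen = "home"
        return self.observe()


env = AndroidEnv()
print(env.step("open_app:settings"))  # screen=settings, actions=[...]
```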

As the experiments within AndroidArena proceeded, the researchers noted additional challenges around the models' capacity for reflection and exploration.

While exploring potential solutions, the team eventually settled on prompting LLMs with detailed information on previous attempts to reduce the incidence of errors. By embedding these memories in prompts, the researchers recorded a 27% improvement in accuracy when the models operated Android systems.
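A minimal sketch of that prompting idea follows: fold a record of the agent's earlier attempts into each new prompt so the model can avoid repeating failed actions. The `call_llm` stub and the prompt wording are assumptions standing in for whatever chat API and template the researchers actually used.

```python
# Sketch of memory-augmented prompting: each prompt carries a log of
# previous attempts so the model can steer away from repeated errors.
def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-model call; returns a canned action here.
    return "tap:settings"


def build_prompt(task: str, memory: list[str]) -> str:
    memory_block = "\n".join(f"- {m}" for m in memory) or "- (none yet)"
    return (
        f"Task: {task}\n"
        f"Previous attempts and their outcomes:\n{memory_block}\n"
        "Propose the next UI action, avoiding steps that already failed."
    )


def run_with_memory(task: str, steps: int = 3) -> list[str]:
    memory: list[str] = []
    for _ in range(steps):
        action = call_llm(build_prompt(task, memory))
        # Record the outcome so later prompts carry the exploration memory.
        memory.append(f"tried '{action}'")
    return memory


print(run_with_memory("open Wi-Fi settings"))
```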

The solution yielded positive results when extended to other LLMs, including Google's (NASDAQ: GOOGL) Bard and Meta's (NASDAQ: META) LLaMA 2, with the researchers optimistic that future iterations will demonstrate more advanced functionality.

Optimizing AI one feature at a time

While generative AI has enjoyed mass adoption, researchers are scrambling behind the curtains to fix several problems associated with the technology. One study by Anthropic focused on stifling incidents of sycophancy in LLMs and earned plaudits from industry players, while AutoGPT and Microsoft (NASDAQ: MSFT) are testing an AI monitoring tool to flag harmful real-world outputs.

“We design a basic safety monitor that is flexible enough to monitor existing LLM agents, and, using an adversarial simulated agent, we measure its ability to identify and stop unsafe situations,” the Microsoft-backed researchers said.
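In spirit, such a monitor is a wrapper that screens each action an agent proposes before it executes. The rule set and function names below are illustrative assumptions for this article, not the code from the Microsoft-backed research.

```python
# Minimal sketch of a safety monitor wrapping an LLM agent's actions:
# block anything matching an unsafe pattern, execute everything else.
UNSAFE_PATTERNS = ("delete", "payment", "send_sms", "factory_reset")


def is_safe(action: str) -> bool:
    return not any(p in action.lower() for p in UNSAFE_PATTERNS)


def monitored_step(agent_action: str, execute) -> str:
    if not is_safe(agent_action):
        # Stop the unsafe action and surface it instead of executing it.
        return f"BLOCKED: {agent_action}"
    return execute(agent_action)


print(monitored_step("tap:settings", lambda a: f"executed {a}"))
print(monitored_step("factory_reset:now", lambda a: f"executed {a}"))
```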

Other studies are focused on merging blockchain technology with AI, while some are pursuing labeling AI-generated content to stifle the proliferation of deepfakes.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain
