
Microsoft Research and Peking University researchers have reached a new milestone in their attempts to teach OpenAI’s GPT-4 how to operate within the Android operating system.

The joint study reports relative success in fine-tuning large language models (LLMs) to operate autonomously within a specific operating system. While generative artificial intelligence (AI) has found myriad use cases, the technology has struggled to work within the confines of an operating system without human intervention.

The study highlighted several reasons for generative AI’s inability to explore Android autonomously, including its reliance on reinforcement-style training. Most LLMs depend on trial and error to explore a new environment, which sets the stage for security issues in their application.

“Firstly, the action space is vast and dynamic,” the report read. “Secondly, real-world tasks often require inter-application cooperation, demanding farsighted planning from LLM agents. Thirdly, agents need to identify optimal solutions aligning with user constraints, such as security concerns and preferences.”

To address these challenges, the research team created AndroidArena, an environment designed for training LLMs to explore the Android operating system. Preliminary studies highlighted new obstacles to autonomous exploration by LLMs, centered primarily on understanding and reasoning.
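The article does not detail AndroidArena’s interface, but an environment of this kind typically exposes the device’s current screen as a text observation and accepts UI actions in return. The sketch below illustrates that loop under stated assumptions: FakeAndroidEnv, query_llm, and the tap/type/back action vocabulary are hypothetical stand-ins, not the actual AndroidArena API.

```python
# Hypothetical sketch of an LLM-agent loop over an Android-style environment.
# The environment interface, action vocabulary, and query_llm stub are all
# illustrative assumptions, not the published AndroidArena API.

def query_llm(prompt: str) -> str:
    """Stub for a call to GPT-4 or another LLM; returns a canned action."""
    return "tap(1)"

class FakeAndroidEnv:
    """Toy stand-in for a real device wrapper (e.g., one built on ADB)."""

    def __init__(self) -> None:
        self.screens = ["Home: [1] Settings", "Settings: [1] Wi-Fi", "Wi-Fi: on"]
        self.index = 0

    def observe(self) -> str:
        """Return a textual description of the current screen (UI tree)."""
        return self.screens[self.index]

    def step(self, action: str) -> bool:
        """Apply an action like 'tap(1)'; return True when the task is complete."""
        if action.startswith("tap(") and self.index < len(self.screens) - 1:
            self.index += 1
        return self.index == len(self.screens) - 1

def run_episode(env: FakeAndroidEnv, task: str, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\nCurrent screen: {env.observe()}\n"
            "Reply with one action: tap(<id>), type(<text>), or back()."
        )
        if env.step(query_llm(prompt)):
            print("Task complete:", env.observe())
            break

run_episode(FakeAndroidEnv(), "Enable Wi-Fi")
```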

As the experiments within AndroidArena proceeded, the researchers noted that the models also struggled with reflection and exploration.

While exploring potential solutions, the team eventually settled on prompting LLMs with detailed information about their previous attempts to reduce the incidence of errors. By embedding these memories of earlier attempts in the prompt, the researchers recorded a 27% improvement in accuracy when the models operated Android systems.
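The report’s exact prompt format is not reproduced in the article, but the idea of embedding memories of earlier attempts is simple to sketch. Everything below, including the ExplorationMemory name and the record format, is an illustrative assumption:

```python
# Hedged sketch: prepending a memory of previous attempts to the prompt so the
# model can avoid repeating failed actions. The record format is an assumption.

from dataclasses import dataclass, field

@dataclass
class ExplorationMemory:
    attempts: list[str] = field(default_factory=list)

    def record(self, action: str, outcome: str) -> None:
        self.attempts.append(f"- tried {action!r}: {outcome}")

    def build_prompt(self, task: str, screen: str) -> str:
        history = "\n".join(self.attempts) or "- none yet"
        return (
            f"Task: {task}\n"
            f"Previous attempts:\n{history}\n"
            f"Current screen:\n{screen}\n"
            "Choose the next action, avoiding approaches that already failed."
        )

memory = ExplorationMemory()
memory.record("tap(3)", "opened Settings instead of Wi-Fi")
print(memory.build_prompt("Enable Wi-Fi", "<ui tree here>"))
```

The design point is that the model sees what already failed, so it can rule those actions out rather than rediscover them by trial and error.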

The solution yielded positive results when extended to other LLMs, including Google’s Bard (NASDAQ: GOOGL) and Meta’s LLaMA 2 (NASDAQ: META), with the researchers optimistic that future iterations will demonstrate more advanced functionality.

Optimizing AI one feature at a time

While generative AI has enjoyed mass adoption, researchers are scrambling behind the scenes to fix several problems associated with the technology. One study by Anthropic focused on curbing incidents of sycophancy in LLMs and earned plaudits from industry players, while AutoGPT and Microsoft (NASDAQ: MSFT) are testing an AI monitoring tool designed to flag harmful real-world outputs.

“We design a basic safety monitor that is flexible enough to monitor existing LLM agents, and, using an adversarial simulated agent, we measure its ability to identify and stop unsafe situations,” the Microsoft-backed researchers wrote.
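The article does not describe how that monitor is implemented; a common pattern is to interpose a check between the action an agent proposes and its execution. The sketch below uses a toy keyword blocklist purely for illustration; the names and the rule are assumptions, and a real monitor could itself be model-based:

```python
# Illustrative sketch of a safety monitor interposed between an LLM agent and
# its environment. The keyword blocklist is a toy assumption; the cited work
# evaluates its monitor against an adversarial simulated agent.

UNSAFE_PATTERNS = ("delete", "factory_reset", "send_payment")

def is_unsafe(action: str) -> bool:
    """Toy rule-based check; a production monitor might itself be an LLM."""
    return any(pattern in action.lower() for pattern in UNSAFE_PATTERNS)

def monitored_execute(action: str, execute) -> bool:
    """Run the agent's proposed action only if the monitor clears it."""
    if is_unsafe(action):
        print(f"Blocked unsafe action: {action!r}")
        return False
    execute(action)
    return True

# Example: the monitor stops a dangerous action and passes a benign one through.
monitored_execute("factory_reset()", print)   # blocked
monitored_execute("tap(2)", print)            # executed
```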

Other studies focus on merging blockchain technology with AI, while some pursue labeling AI-generated content to stem the proliferation of deepfakes.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain
