
Apple (NASDAQ: AAPL) has thrown its hat in the ring for small language models (SLMs) after falling behind Google (NASDAQ: GOOGL) and OpenAI.

In an official announcement on the open-source platform Hugging Face, Apple confirmed the release of eight small artificial intelligence (AI) models, split evenly between pre-trained and instruction-tuned variants. The models range from 270 million parameters up to 3 billion parameters for the largest of the lot.

Apple says the lightweight models, dubbed OpenELM, are specifically designed to run locally on devices without the need for internet connectivity. Although the announcement did not specify exact use cases, pundits predict several utilities, including forming the base layer for mobile assistants with specific functionalities.

To close the gap between it and other industry first-movers, Apple is opting for an open-source route, stealing a page from Meta’s (NASDAQ: META) Llama 2. OpenELM was trained using public datasets, with Apple allowing users to inspect the “recipe” for the models by sharing the CoreNet code.
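
For readers who want to experiment, the sketch below shows roughly what loading one of the published checkpoints could look like with the Hugging Face transformers library in Python. The repository name, tokenizer choice, and trust_remote_code flag are assumptions drawn from the public release rather than an official Apple recipe.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical identifiers based on the public Hugging Face release; the OpenELM
# checkpoints ship custom model code, hence trust_remote_code=True.
model_id = "apple/OpenELM-270M-Instruct"
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # OpenELM reuses a Llama tokenizer (assumption)

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Everything below runs on-device once the weights are downloaded.
inputs = tokenizer("Draft a two-sentence reminder about tomorrow's meeting.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))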

According to the official report, OpenELM holds its own in logic and reasoning when placed side by side with larger models. Apple says the models achieve this through the efficient allocation of parameters, leaning on a layer-wise scaling method.

“We introduce OpenELM, a family of Open-source Efficient Language Models,” read a company statement. “OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy.”
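
As a rough illustration of the idea, and not Apple’s actual configuration, layer-wise scaling can be sketched in Python as a linear ramp in each layer’s attention heads and feed-forward width from the first transformer block to the last:

# Toy illustration of layer-wise scaling: rather than giving every transformer
# layer the same width, attention heads and feed-forward width grow linearly
# from the first block to the last. The numbers are invented for illustration.

def layerwise_config(num_layers, min_heads, max_heads, min_ffn_mult, max_ffn_mult):
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)  # 0.0 at the first layer, 1.0 at the last
        heads = round(min_heads + t * (max_heads - min_heads))
        ffn_mult = min_ffn_mult + t * (max_ffn_mult - min_ffn_mult)
        configs.append({"layer": i, "heads": heads, "ffn_multiplier": round(ffn_mult, 2)})
    return configs

for cfg in layerwise_config(num_layers=16, min_heads=4, max_heads=12,
                            min_ffn_mult=1.0, max_ffn_mult=4.0):
    print(cfg)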

The new release brings Apple into direct competition with Microsoft (NASDAQ: MSFT), which has just released the third iteration of its Phi family of models. Following the successes of the previous two versions, Microsoft says the Phi-3 Mini will not require extensive computing power but will broaden the horizon for enterprises to choose their AI strategy.

“What we’re going to start to see is not a shift from large to small, but a shift from a singular category of models to a portfolio of models where customers get the ability to make a decision on what is the best model for their scenario,” said Microsoft’s generative AI product manager, Sonali Yadav.

A comparative advantage

Despite playing second fiddle, Apple has several things working in its favor in the race for SLMs. Experts say Apple has a significant advantage over Windows for local AI use, stemming from the way its devices combine system RAM with the video memory (VRAM) that would otherwise come from a dedicated GPU.

On Windows, users typically have to buy GPUs with large amounts of VRAM, 32GB cards for example, on top of system RAM to run AI models locally. However, since AI hardware is predominantly Nvidia-based (NASDAQ: NVDA), Apple still has a mountain to climb with its in-house chips, especially as the firm has adopted a slow-and-steady approach to emerging technologies in recent years.
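
To put the memory question in perspective, the back-of-the-envelope Python sketch below estimates the space needed just to hold the weights of models in OpenELM’s size range at different precisions; real requirements are higher once the KV cache, activations, and runtime buffers are added.

# Rough arithmetic: memory needed just to hold model weights at various precisions.
# Runtime overheads (KV cache, activations, framework buffers) are not included.

def weight_memory_gb(num_params, bytes_per_param):
    return num_params * bytes_per_param / 1024**3

for name, params in [("270M model", 270e6), ("3B model", 3e9)]:
    for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{name} at {precision}: ~{weight_memory_gb(params, nbytes):.2f} GB")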

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: How blockchain will keep AI honest
