
ChatGPT maker OpenAI has introduced fine-tuning for its GPT-4o large language model (LLM), a feature designed to improve the model's performance in enterprise settings.

According to the announcement, the fine-tuning feature for GPT-4o will allow developers to tweak the model to fit the needs of their organizations without breaking the bank. Developers can customize the LLM with their custom data sets, with early results yielding a marked performance improvement.

“Fine-tuning enables the model to customize structure and tone of responses, or to follow complex domain-specific instructions,” read OpenAI’s official statement. “Developers can already produce strong results for their applications with as little as a few dozen examples in their training data set.”
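For orientation, the snippet below is a minimal sketch of what starting such a fine-tuning job looks like with the OpenAI Python SDK; the file name, the example record, and the model snapshot string are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: upload a chat-formatted training file and start a GPT-4o
# fine-tuning job. File name and model snapshot are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is a JSONL file of chat-formatted examples, one per line, e.g.:
# {"messages": [{"role": "system", "content": "Answer in our support tone."},
#               {"role": "user", "content": "Where is my order?"},
#               {"role": "assistant", "content": "Let me check that for you..."}]}
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Kick off the fine-tuning job against a GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed snapshot ID; check the current model list
)

print(job.id, job.status)
```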

Rather than being merely a technological upgrade, the fine-tuning feature moves OpenAI further toward offering AI as a service. Fine-tuning training is priced at $25 per million tokens, while inference on fine-tuned models costs $3.75 per million input tokens and $15 per million output tokens.
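To make those rates concrete, the back-of-the-envelope arithmetic below works through a hypothetical workload; the token volumes are invented for illustration, not figures from OpenAI.

```python
# Cost math at the quoted rates: $25/M training tokens, $3.75/M input and
# $15/M output tokens for inference on the fine-tuned model.
TRAIN_PER_M, IN_PER_M, OUT_PER_M = 25.00, 3.75, 15.00

training_tokens = 2_000_000   # e.g. a few thousand curated examples
monthly_input = 10_000_000    # prompts sent to the fine-tuned model each month
monthly_output = 2_000_000    # completions it returns each month

training_cost = training_tokens / 1e6 * TRAIN_PER_M                       # $50.00 one-off
inference_cost = (monthly_input / 1e6 * IN_PER_M
                  + monthly_output / 1e6 * OUT_PER_M)                     # $37.50 + $30.00

print(f"one-off training: ${training_cost:.2f}, monthly inference: ${inference_cost:.2f}")
```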

The fees are expected to contribute to OpenAI's revenue and could form a large chunk of its earnings as more enterprises tailor the LLM to their individual needs. To spur adoption, OpenAI says it will offer organizations one million free training tokens per day until September 23, with GPT-4o mini users receiving two million free tokens per day.

Prior to the announcement, OpenAI conducted early studies with several firms to test the practicality of the fine-tuning feature. Cosine's Genie, an AI software engineering assistant built on GPT-4o, demonstrated impressive results in writing code, spotting bugs, and building new features.

AI solutions firm Distyl ranked first on text-to-SQL benchmarks using a fine-tuned GPT-4o, achieving accuracies of over 70% across all metrics.

OpenAI says fine-tuned models will still provide users with the same level of data privacy as ChatGPT, and that it is rolling out new security measures to protect enterprise data.

“We’ve also implemented layered safety mitigations for fine-tuned models to ensure they aren’t being misused,” said OpenAI. “For example, we continuously run automated safety evals on fine-tuned models and monitor usage to ensure applications adhere to our usage policies.”

A streak of upgrades

OpenAI has been bullish in rolling out upgrades for its artificial intelligence (AI) offerings, teasing users with an AI-powered search engine at the tail end of July. In April, the company announced an upgrade designed to make the chatbot more conversational while reducing the usage of verbose language in responses.

The firm has also confirmed it is developing a new AI detection tool with 99.9% accuracy after its first attempt got off to a sputtering start. However, it says it will take a cautious approach to a commercial launch to avoid the pitfalls associated with next-gen technologies.

“We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI,” said a company executive.

Watch: Understanding the dynamics of blockchain & AI
