The artificial intelligence (AI) industry is a fast-moving space. Innovators, governments, and everyone in between currently have their attention fixed on the products and services that AI fuels.

AI can significantly improve business operations and our quality of life, but no revolutionary technology comes without risks that have the potential to cause harm.

Here are a few significant events that took place in AI this week:

Meta releases its largest AI model to date: Llama 3.1

On July 23, Meta (NASDAQ: META) unveiled Llama 3.1, the largest open-source AI model released to date. Three versions of Llama 3.1 are available, but the one getting the most attention is the largest, with 405 billion parameters (the learnable weights that determine a model’s size and capacity). Unlike other top AI models on the market, Meta offers Llama for free as open-source software.

“I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life – and to accelerate economic growth while unlocking progress in medical and scientific research,” said Meta CEO Mark Zuckerberg.

“Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he added.

To help distribute Llama 3.1, Meta is partnering with tech giants such as Amazon Web Services (NASDAQ: AMZN), Google Cloud (NASDAQ: GOOGL), Microsoft Azure (NASDAQ: MSFT), Databricks, and Dell (NASDAQ: DELL), which will offer their customers access to Llama 3.1 via their respective cloud computing platforms.
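Because the weights are openly distributed rather than locked behind a proprietary API, developers can download Llama 3.1 and run it on their own infrastructure. The snippet below is a minimal sketch of what that might look like using the Hugging Face transformers library, which Meta’s announcement does not prescribe; the model identifier, precision, and generation settings are illustrative assumptions rather than an official recipe, and access to the weights requires accepting Meta’s license.

```python
# Minimal sketch: running an open-weight Llama 3.1 model locally.
# Assumes the Hugging Face `transformers` library and an example model
# identifier for the smallest of the three released sizes; none of this
# is an official Meta recipe.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # example identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available devices
)

prompt = "Explain, in one paragraph, why open-source AI models matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; sampling parameters are illustrative only.
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
new_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```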

But will that be enough to help Llama proliferate? Some top AI models owe much of their popularity to user-friendly interfaces, even when they operate within a walled garden or behind a paywall. It’s also worth remembering that non-technical users drove much of AI’s rapid rise in popularity, and open-source software typically isn’t very approachable for them. Although open source has historically driven innovation in highly technical spaces, it will be interesting to see how Meta fares in a market that gravitates toward easy-to-use models with lower barriers to entry.

OpenAI announces SearchGPT

On July 25, OpenAI, the creator of ChatGPT, announced ‘SearchGPT,’ an AI search tool that gives users “fast and timely answers with clear and relevant sources” and offers publishers a new way to connect with users by “prominently citing and linking to them in searches.” According to OpenAI, “Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links.”
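OpenAI has not published an API or integration details for SearchGPT, so any code-level view is speculative. Purely as a hypothetical illustration of the announced behavior, an answer with in-line, named attribution plus a sidebar of source links, the sketch below models a response as plain data structures; every name, field, and example value is invented for this sketch.

```python
# Hypothetical sketch only: OpenAI has not published a SearchGPT API.
# This models the announced behavior -- an answer with in-line, named
# attribution plus a sidebar of source links -- as plain data structures.

from dataclasses import dataclass, field


@dataclass
class Source:
    name: str          # publisher name shown in the in-line attribution
    url: str           # link the user can follow for more detail
    snippet: str = ""  # optional excerpt backing the cited claim


@dataclass
class SearchAnswer:
    query: str
    answer: str                                   # prose answer with [n] citation markers
    sources: list[Source] = field(default_factory=list)

    def sidebar(self) -> str:
        """Render the sources the way a client might list them beside the answer."""
        return "\n".join(f"[{i + 1}] {s.name} - {s.url}" for i, s in enumerate(self.sources))


# Example usage with invented data:
answer = SearchAnswer(
    query="upcoming music festivals this summer",
    answer="Several festivals are scheduled this summer [1][2].",
    sources=[
        Source("Local Events Guide", "https://example.com/festivals"),
        Source("City Tourism Board", "https://example.com/events"),
    ],
)
print(answer.answer)
print(answer.sidebar())
```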

OpenAI says SearchGPT is currently being tested with a small group, and those not in the group can join the waitlist for its prototype. However, the company says that the best features from SearchGPT will ultimately be added to ChatGPT so that all users can experience its benefits without using the product directly.

OpenAI’s SearchGPT offering is a direct threat to Google’s “AI Overview,” a feature that displays an AI-generated summary in response to a query, along with sources the user can consult to learn more about their search topic.

It’s no secret that many people treat their AI tool of choice as a glorified search engine, preferring it over traditional search engines because it returns a direct answer rather than a pool of links they must sift through to unearth an answer.

Although it will be useful, SearchGPT has its work cut out for it when it comes to taking market share from Google, the world’s dominant search engine with a 90.91% market share. A product typically needs to be substantially better than the incumbent to pull users away, so SearchGPT will have to outperform Google by a significant margin before people adopt it as their daily search engine.

Regardless, this move indicates that tech giants like OpenAI are paying attention to how consumers use their products and creating new features and workflows that specifically cater to those use cases.

FCC wants new rules for AI in political ads

On July 25, the Federal Communications Commission (FCC) proposed a new rule requiring political advertisers, as well as television and radio broadcasters, to disclose whether their political ads use AI.

“There’s too much potential for AI to manipulate voices and images in political advertising to do nothing,” said the agency’s chairwoman, Jessica Rosenworcel. “If a candidate or issue campaign used AI to create an ad, the public has a right to know.”

If adopted, the rule would require broadcasters to verify with political advertisers whether their content was generated using AI tools, such as text-to-image creators or voice-cloning software, and to disclose this information on air or in the TV or radio station’s public political files.

Because the rule is still a work in progress, it is unclear whether it will take effect before the upcoming presidential election in the United States. If it arrives after the election, it may be too late.

One of the largest attack vectors for AI right now is the campaign trail. Earlier in this election cycle, attackers used AI to replicate President Joe Biden’s voice in a disinformation campaign that reached many residents before it was discovered. While the FCC’s proposed rule will be beneficial in the future, its absence in the run-up to the election could allow harm that might otherwise have been avoided.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
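As a generic illustration of the immutability idea, the sketch below commits to a data record by hashing it, the kind of digest an enterprise blockchain application could anchor in an on-chain transaction so that later tampering is detectable. No specific blockchain platform or API is assumed, and the record fields are invented for the example.

```python
# Generic sketch of the integrity idea behind anchoring AI training data:
# commit to a record by hashing it, then verify later that the record is
# unchanged. A real system would publish the digest in an on-chain
# transaction; no specific blockchain API is assumed here.

import hashlib
import json


def digest(record: dict) -> str:
    """Deterministic SHA-256 digest of a data record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


record = {"source": "sensor-17", "value": 42.0, "timestamp": "2024-07-25T12:00:00Z"}
committed = digest(record)  # this value is what would be anchored on-chain

# Later: any tampering with the record changes the digest and is detected.
assert digest(record) == committed
record["value"] = 43.0
assert digest(record) != committed
```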

Watch: Understanding the dynamics of blockchain & AI
