Five secretaries of state—Steve Simon of Minnesota, Al Schmidt of Pennsylvania, Steve Hobbs of Washington, Jocelyn Benson of Michigan and Maggie Toulouse Oliver of New Mexico—are planning to send a letter to Elon Musk, urging him to make immediate changes to Grok, the artificial intelligence (AI) chatbot native to X (formerly known as Twitter).

The secretaries of state took this action after Grok provided users with inaccurate information about the upcoming presidential election. Shortly after President Joe Biden announced that he was dropping out of the 2024 presidential race, the chatbot incorrectly told users that the ballot deadline had passed in several states, making it impossible for Vice President Kamala Harris to replace Biden on those states’ ballots.

The secretaries argued that voters need accurate information, and Grok’s dissemination of inaccurate details, combined with Musk’s inaction, contributed to the spread of misinformation.

While it may seem unlikely that many people would turn to X’s AI chatbot for information about the presidential election, it’s not impossible, especially since X hosts many nationally recognized news sources that are typically viewed as verified sources of political information. The fact remains that Grok was providing users with inaccurate information that could theoretically influence their decisions or the votes they cast in the 2024 election.

What’s more interesting, in my opinion, is that this incident highlights the importance of AI providers keeping their models up to date, feeding them new information and training them on current data. You would think that a major social media platform like X would be on top of a task like this, but evidently, maintaining accuracy in AI outputs requires significant human oversight and continuous updates.
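
To make the point concrete, here is a minimal sketch of the kind of freshness guardrail this implies: a chatbot answers time-sensitive questions only from a curated, human-reviewed fact table and declines when that table may be stale. All names, dates and data below are hypothetical placeholders, not Grok’s actual implementation.

```python
from datetime import date, timedelta

# Hypothetical guardrail sketch: answer deadline questions only from a
# curated fact table, and refuse when a human hasn't reviewed it recently.
BALLOT_DEADLINES = {"Ohio": date(2024, 8, 7)}  # placeholder, not a real deadline
LAST_VERIFIED = date(2024, 7, 20)              # when a human last reviewed the table
MAX_STALENESS = timedelta(days=7)              # tolerance before refusing to answer

def answer_deadline(state: str, today: date) -> str:
    # Decline time-sensitive answers when the curated table may be stale.
    if today - LAST_VERIFIED > MAX_STALENESS:
        return "My data may be out of date; please check official election sources."
    deadline = BALLOT_DEADLINES.get(state)
    if deadline is None:
        return f"I have no verified deadline on file for {state}."
    status = "has passed" if today > deadline else "has not yet passed"
    return f"On file as of {LAST_VERIFIED}: the {state} deadline {status} ({deadline})."

print(answer_deadline("Ohio", date(2024, 7, 22)))
```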

OpenAI Co-Founder John Schulman joins rival Anthropic

OpenAI Co-Founder John Schulman has announced that he is leaving the company to join competitor Anthropic.

“I’ve made the difficult decision to leave OpenAI. This choice stems from my desire to deepen my focus on AI alignment and to start a new chapter of my career where I can return to hands-on technical work,” Schulman said in a social media post.

“To be clear, I’m not leaving due to a lack of support for alignment research at OpenAI. On the contrary, company leaders have been very committed to investing in this area. My decision is a personal one, based on how I want to focus my efforts in the next phase of my career,” he added.

We are at a stage in the AI cycle where we are beginning to see consolidations of all sorts. In many industries, this typically looks like mergers and acquisitions. However, in AI, we’re witnessing tech giants going to great lengths to poach talent from up-and-coming AI startups rather than attempting to acquire or merge with the startup itself.

This could signal that the world’s top AI researchers and developers are often worth more than the products and services they create. The trend is particularly notable given how many AI companies have struggled to turn a profit despite the substantial investments they continue to receive.

Palantir secures AI partnerships with Wendy’s, Microsoft

Palantir, the software company specializing in big data analytics, has partnered with both the fast-food chain Wendy’s (NASDAQ: WEN) and Microsoft (NASDAQ: MSFT).

Wendy’s will be using Palantir’s Artificial Intelligence Platform (AIP) in its Quality Supply Chain Co-op. AIP is a Palantir product that connects disparate data sources into a single common operating picture, enabling technical and non-technical users to make quick decisions, evaluate efficacy and custom-build modular applications. Palantir also lets companies introduce large language models (LLMs) and other AI into their operations to improve processes; Wendy’s will use this to enhance its drive-thru experience (similar to what Taco Bell has done) and to increase efficiency across its supply chain.
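
For a rough sense of what a “common operating picture” means in practice, here is a toy sketch that joins two disparate feeds (supplier inventory and store orders) into a single view a decision-maker can query. Every name and figure is a hypothetical placeholder, and nothing here reflects Palantir’s actual APIs.

```python
# Illustrative only: two feeds that each hold a piece of the answer are
# joined on a shared key ("sku") so one view can answer supply questions.
inventory = {"beef_patties": {"on_hand": 1200, "warehouse": "OH-2"}}
orders = [
    {"sku": "beef_patties", "store": 114, "qty": 300},
    {"sku": "beef_patties", "store": 207, "qty": 450},
]

def operating_picture(sku: str) -> dict:
    # Aggregate open demand from the orders feed and stock from inventory.
    demand = sum(o["qty"] for o in orders if o["sku"] == sku)
    stock = inventory.get(sku, {}).get("on_hand", 0)
    return {"sku": sku, "on_hand": stock, "open_demand": demand,
            "coverage_ok": stock >= demand}

print(operating_picture("beef_patties"))  # coverage_ok: True in this toy data
```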

Meanwhile, Microsoft will be running Palantir products—such as Gotham, Foundry, Apollo and AIP—on top of its cloud services for government agencies, particularly U.S. defense and intelligence communities.

When it comes to AI, we often discuss its possibilities. However, what isn’t often discussed is why companies prohibit their employees from using certain LLMs and AI tools. In most cases, these restrictions are due to security concerns.

Corporations, especially those involved in national defense, need to ensure they operate within secure systems that comply with various internal and external regulations. This is why many companies have advised employees not to use tools like ChatGPT. Palantir seems to be positioning itself to fill this gap with its service offerings and its partnership with Microsoft.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Transformative AI applications are coming
