Tesla CEO Elon Musk

What is Grok? Elon Musk’s new AI and how it uses your data

As the AI race heats up, Elon Musk’s xAI has announced the release of Grok, a chatbot similar to ChatGPT. The product of roughly two months of development, it is available immediately to a limited number of X Premium users.

xAI claims Grok is superior to ChatGPT in several ways, such as being better informed about recent developments and events. Because Grok is trained on data from the social media platform X, it has access to real-time information rather than the fixed training datasets ChatGPT relies on.

Users noted that Grok has a witty, sarcastic ‘personality’ and is willing to answer politically sensitive and potentially controversial questions that other AI chatbots won’t. For example, it responded to a request for a cocaine recipe, an exchange Elon Musk himself shared on X.

Musk, who bought Twitter for $44 billion in 2022 and rebranded it to X, plans to create an ‘everything app’ featuring everything from news feeds and personal profiles to financial services.

What does this mean for your data?

xAI and Musk acknowledge that Grok is trained on data X users have posted on the platform. Essentially, this means the hundreds of millions of users who spend endless hours and energy posting on the platform are fueling the development of what could become one of the world’s largest AI companies.

While that’s not such a big issue when it’s just public posts people are willing to share anyway, it becomes more questionable if and when X becomes the ‘everything app’ Musk envisions. Will Grok or other AI apps have access to users’ financial data and other sensitive information? If so, what will they do with it, and what measures will be taken to protect users’ privacy? These are questions that need to be answered.

Now, more than ever, the development of AI models needs to be recorded and monitored in a provable way to create the kind of accountability Musk himself called for when he signed an open letter asking for a pause on AI development. The X boss is known for his warnings about the dangers of AI, so he, of all people, should be interested in a technology that can mitigate some of them.

With a scalable public blockchain like BSV, AI development can be made public, transparent, and regulated. Time-stamped records on a public blockchain can prove the origins of data sets, alleviating some privacy concerns; this is especially pertinent when data is harvested from app users.
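
As a rough illustration of that idea, and not a description of anything xAI or any blockchain vendor actually ships, the Python sketch below hashes a training-data manifest and bundles the digest with a timestamp and a source label. Only that small record would be committed on-chain, so the origin of a data set can be proven later without publishing the underlying user data. All function names here are illustrative assumptions.

```python
import hashlib
import json
import time

def manifest_digest(records: list[dict]) -> str:
    """Hash a dataset manifest deterministically: the same records in the
    same canonical form always produce the same SHA-256 digest."""
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def build_provenance_record(records: list[dict], source: str) -> dict:
    """Bundle the digest with a timestamp and a source label. This small
    record, not the raw posts, is what would be written to a public
    blockchain to anchor the data set's origin."""
    return {
        "dataset_sha256": manifest_digest(records),
        "source": source,
        "unix_time": int(time.time()),
    }

# Example: anchor the provenance of a batch of public posts without
# exposing the posts themselves.
batch = [{"post_id": "123", "text": "hello world"}]
print(build_provenance_record(batch, source="x.com public posts"))
```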

The micropayments a blockchain like BSV is capable of can also ensure that users get paid for the data AI models train on. Of course, this may not be in the interests of a company like xAI in the short term, but in the long run, it incentivizes users to share their data and compensates the people who own it.
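
To make the arithmetic concrete, here is a minimal sketch of how such a payout could be split among contributors in proportion to the records they supplied. The satoshi pool, the contribution counts, and the function name are assumptions for illustration, not a description of any existing BSV or xAI payout mechanism.

```python
def micropayment_shares(contributions: dict[str, int], pool_satoshis: int) -> dict[str, int]:
    """Split a payment pool among data contributors in proportion to the
    number of training records each supplied. Integer division keeps the
    amounts in whole satoshis; any rounding dust stays in the pool."""
    total = sum(contributions.values())
    if total == 0:
        return {user: 0 for user in contributions}
    return {
        user: (count * pool_satoshis) // total
        for user, count in contributions.items()
    }

# Example: three users share a 1,000,000-satoshi pool for a training batch.
print(micropayment_shares({"alice": 500, "bob": 300, "carol": 200}, 1_000_000))
# -> {'alice': 500000, 'bob': 300000, 'carol': 200000}
```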

Best of all, companies like xAI can create provable, immutable records of the development of their AI models. This would help demonstrate compliance with regulations and create a reliable record of events AI firms can lean on should anything harmful happen because of their models. You can bet your bottom dollar there will be lawsuits in the future of AI, and having an immutable record of events can only benefit AI companies.
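
One possible shape for such a record, sketched here under the assumption of a simple hash-chained log rather than any particular product, is an append-only list in which every development event commits to the hash of the previous entry. Rewriting history then breaks the chain, and periodically anchoring the latest hash on a public blockchain makes the whole log independently verifiable.

```python
import hashlib
import json
import time

def append_event(log: list[dict], event: dict) -> list[dict]:
    """Append a development event (dataset ingested, checkpoint trained,
    evaluation run, etc.) to a tamper-evident, hash-chained log. Each
    entry commits to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"event": event, "unix_time": int(time.time()), "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append({**body, "entry_hash": entry_hash})
    return log

# Example: record two illustrative milestones in a model's development.
audit_log: list[dict] = []
append_event(audit_log, {"step": "dataset ingested", "dataset_sha256": "<digest>"})
append_event(audit_log, {"step": "checkpoint trained", "checkpoint": "ckpt-0001"})
```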

As Grok and other exciting AI developments continue at pace, it’s important to temper the excitement with an awareness of the dangers and implications. Scalable public blockchains were made for this task.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.