
In a recent Instagram post, Mark Zuckerberg, CEO of Meta (NASDAQ: META), offered insights into the company’s artificial intelligence (AI) initiatives.

He set the stage by telling his audience that he envisions the next generation of services relying on artificial general intelligence (AGI). AGI is often described as a type of artificial intelligence that can understand, learn, and act across a wide range of problems, much as humans do. Currently, most AI systems, including ChatGPT, are considered “narrow AI” since they specialize in very specific tasks or domains. The move toward AGI means a system would need a breadth of understanding and capability that mirrors human cognitive abilities.

“I don’t have a one-sentence, pithy definition,” said Zuckerberg in a recent interview with The Verge. “You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”

At the moment, no true AGI systems exist; AGI largely remains a theoretical concept and a goal of AI research. However, Zuckerberg has said that Meta is attempting to build an AGI system, and in his Instagram post, he outlined the steps the team is taking to make that a reality.

Meta’s $9 billion investment in AI infrastructure

To power these AI services, Zuckerberg revealed that by the end of 2024, Meta is looking to stockpile roughly 600,000 GPUs, including 350,000 H100 graphics cards from Nvidia (NASDAQ: NVDA). The Nvidia chips alone amount to nearly $9 billion, given that each H100 costs anywhere between $25,000 and $30,000, and even more when bought on a secondary market like eBay.
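The math behind that figure is a straightforward back-of-the-envelope calculation; the sketch below uses the estimated price range cited above, not confirmed Nvidia list prices:

```python
# Back-of-the-envelope check on the H100 figure cited above.
H100_UNITS = 350_000
PRICE_LOW, PRICE_HIGH = 25_000, 30_000  # estimated USD per card, not official pricing

low_total = H100_UNITS * PRICE_LOW    # ~$8.75 billion
high_total = H100_UNITS * PRICE_HIGH  # ~$10.5 billion
print(f"Estimated H100 spend: ${low_total / 1e9:.2f}B to ${high_total / 1e9:.2f}B")
```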

Multiple GPUs typically power large-scale AI systems because these AIs require high computational power, speed, parallel processing capabilities, and efficiency in handling the complex and data-intensive tasks characteristic of AI and machine learning. Many modern GPUs have specialized cores and architecture explicitly designed for AI tasks, like tensor cores in Nvidia’s GPUs, which are optimized for operations commonly used in neural networks.
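To make the parallelism point concrete, here is a minimal sketch, assuming PyTorch and optionally a CUDA-capable GPU, of the kind of large matrix multiplication that tensor cores are built to accelerate. It is a toy benchmark, not a description of Meta's training stack:

```python
import time
import torch

# Toy illustration: a single large matrix multiplication, the operation that
# dominates neural-network training and that tensor cores accelerate.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 targets tensor cores

a = torch.randn(4096, 4096, device=device, dtype=dtype)
b = torch.randn(4096, 4096, device=device, dtype=dtype)

start = time.perf_counter()
c = a @ b  # dispatched in parallel across thousands of GPU cores when on CUDA
if device == "cuda":
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
print(f"{device}: 4096x4096 matmul in {time.perf_counter() - start:.4f}s")
```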

With this computing power, Meta plans to create and train new models like Llama 3. Zuckerberg says that once Meta creates these models, it will “responsibly open source and make it widely available.” This is a very different approach from the one its competitors are taking.
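In practice, an openly released model can simply be downloaded and run locally. The sketch below is a hypothetical example that assumes the Hugging Face transformers and accelerate libraries and access to the already released Llama 2 checkpoint, since Llama 3 was not yet available at the time of the post:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local use of Meta's openly released Llama 2 weights.
# Assumes access to the gated "meta-llama/Llama-2-7b-hf" repository has been
# granted and that accelerate is installed for device_map="auto".
MODEL_ID = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Open-source models let developers"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```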

OpenAI, the dominant player in the AI industry, has gone the opposite route: its tools and services operate inside a walled garden. Interestingly, OpenAI’s first offerings were open source, but as time passed, and perhaps as it realized it had something extremely valuable in the works, the company closed off its ecosystem and introduced a for-profit subsidiary.

The benefit of a walled garden is that its creators can exert more control over the system, making it more likely to have restrictions and safeguards in place that resist external tampering and uses that violate company policy. Although these restrictions feel prohibitive to some, they matter at a time when policymakers and global organizations are scrutinizing AI and calling for legislation. At the same time, walled-garden products are often slower to innovate because there is less collaboration from the community that uses them most.

Open-source models are a different story: projects benefit from the input of a diverse, global developer community, which typically leads to rapid innovation and problem-solving. Users also have more freedom to customize their environments with open-source tools, modifying the software to meet their specific needs.
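As a hypothetical illustration of that freedom to customize, open model weights can be adapted to a niche task with lightweight fine-tuning methods such as LoRA; the sketch below uses the peft library and the Llama 2 checkpoint as stand-ins, not anything Meta has prescribed:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical customization of an open model: attach lightweight LoRA adapters
# so it can be tuned for a specific task without retraining all of its weights.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of parameters are trainable
```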

Meta may be betting that an open-source strategy will foster rapid growth and innovation, ultimately making its models the go-to foundation for building AI systems.

Zuckerberg’s vision of AI-enhanced reality

Zuckerberg anticipates that as AI becomes increasingly ingrained in our daily routines, we will need more efficient devices for summoning and using it. He believes the best way to accomplish this will be through glasses.

“Glasses are the ideal form factor for letting an AI see what you see and hear what you hear, so that it’s always available to help out,” he says.

His vision aligns with the product Meta currently has on the market, the Ray-Ban Meta smart glasses, which recently received an update incorporating multimodal AI into the device. Zuckerberg envisions individuals tapping into AI to understand the world around them, which means whatever device ends up being used needs both a computer vision element and the ability to capture and understand audio. A variety of AI wearable products are entering the market to try to accomplish this, but none has gained significant traction yet.
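As a purely conceptual sketch of those two capabilities, the snippet below pairs a small open image-captioning model with a speech-recognition model via Hugging Face pipelines; the model choices and file names are illustrative assumptions, not Meta's actual glasses stack:

```python
from transformers import pipeline

# Conceptual sketch of the two capabilities a glasses-style assistant needs:
# seeing (image understanding) and hearing (speech recognition), using small
# open models. The file names are placeholders, not a real device feed.
vision = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
hearing = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

print(vision("frame_from_glasses.jpg")[0]["generated_text"])
print(hearing("clip_from_glasses.wav")["text"])
```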

Meta’s AI ambitions

It is clear that Meta is staying competitive when it comes to AI. The company has a popular open-source large language model on the market (Llama 2), has more AI models in the works, recently launched a generative AI image application, and even sells an AI wearable product.

Zuckerberg’s recent Instagram post can be seen as Meta’s declaration that the company is preparing for a future where AI is integrated into every facet of our lives, from personal assistance to business operations. With significant investments in infrastructure and a commitment to open source, Meta does not just want to participate in the AI race; it wants to be viewed as a leader in artificial intelligence.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Cybersecurity fundamentals in today’s digital age with AI & Web3
