
Microsoft’s G42 deal shows US and China in AI cold war (that could get much hotter)

Asia-Pacific markets are embracing generative artificial intelligence (AI) to ensure they keep pace with their U.S. and European counterparts, but the real race may be to see which of the world’s military powers can fully weaponize AI technology first.

IDC Global, a Singapore-based market/data intelligence firm focused on the information technology, telecom, and consumer technology markets, recently released its latest Worldwide AI and Generative AI Spending Guide. The guide projects that Asia-Pacific markets will collectively spend $26 billion on generative AI (GenAI) software, hardware, and services by 2027, with a compound annual growth rate of 95.4%.
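The growth math can be sanity-checked with the standard CAGR formula. The sketch below is illustrative only: the article doesn't state IDC's baseline year or starting spend, so the four-year 2023–2027 window and the implied starting figure are assumptions, not IDC data.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by a start value, end value, and period."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value, rate, years):
    """Project a value forward at a constant annual growth rate."""
    return start_value * (1 + rate) ** years

# At a 95.4% CAGR, spending roughly doubles each year.
# Working backward from the $26B 2027 figure over a hypothetical
# four-year window implies a starting point of under $2B.
implied_start = 26e9 / (1 + 0.954) ** 4
print(f"implied starting spend: ${implied_start / 1e9:.2f}B")
```

Growth at that rate is clearly unsustainable, which is consistent with IDC's expectation (below) that the surge peaks within two years and then stabilizes.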

As with other recent studies, IDC finds GenAI spending in Asia-Pacific markets claiming an ever-larger slice of the overall AI investment pie. GenAI's share is projected to rise from 15% today to 29% by 2027, and around one-fifth of Asia-Pacific organizations plan to build their own GenAI models.

The financial services sector is projected to claim the largest growth in Asia-Pacific GenAI spending, with software & information services second and governments third. The retail and durable goods sectors round out the top five.

Deepika Giri, IDC’s head of research, big data & AI, claims the Asia-Pacific surge in GenAI spending “will reach its zenith within the next two years, followed by a period of stabilization.” China will remain the region’s “dominant market,” but Japan and India will experience comparatively more rapid growth in a bid to catch up with the local kingpin.

US gov’t pushing China out of Abu Dhabi AI

China is currently the global leader in the AI technology patent race, but American investment in AI tech in 2023 was nearly nine times that of China. Anyone who doubts the existence of a global AI arms race needs only look to this week’s $1.5 billion ‘strategic investment’ that Microsoft (NASDAQ: MSFT) made in G42, the United Arab Emirates-based AI technology holding company.

The ‘expanded partnership,’ which gives Microsoft president Brad Smith a seat on G42’s board, was brokered in part by U.S. Commerce Secretary Gina Raimondo. The deal followed behind-the-scenes negotiations between G42 and Commerce’s Bureau of Industry and Security (BIS) that began last year. As a result of those talks, G42 agreed to divest itself from Chinese technology and embrace U.S. technology.

Raimondo reportedly used a combination of carrots and sticks to convince G42 to tilt westward. Earlier this year, Congress made noises about sanctioning G42 for its ties to Chinese firms currently on the BIS blacklist. The Federal Communications Commission also announced plans to prohibit new sales of telecom devices made by China’s Huawei and ZTE in the U.S. due to national security concerns.

Cutting ties with those Chinese firms would ensure G42 retained access to cutting-edge technology produced by U.S. firms, including chipmaker Nvidia (NASDAQ: NVDA) and ChatGPT developer OpenAI (with which G42 recently partnered). This week’s deal will see G42 remove Huawei gear from its systems in favor of Microsoft’s Azure platform.

The New York Times quoted Raimondo in terms that make plain how seriously the Biden administration views the AI sector. Using zero-sum language that echoes Cold War rhetoric, Raimondo insisted that "when it comes to emerging technology, you cannot be both in China's camp and our camp." Peng Xiao, G42's CEO, used much the same language last year when he first announced the China divestment plans, saying, "We cannot work with both sides."

It remains to be seen how hard the U.S. might pressure the UAE to curtail other Chinese connections, like the joint air force exercises that began last year, which followed the UAE agreeing to buy Chinese-built 'combat trainer' jets. Regardless, the U.S. government's overt involvement in an international commercial technology deal foreshadows a fraught environment for AI firms on the world stage.

US AI 'doomer' takes on safety role

While the Biden administration may be keen to block China's AI ambitions, China has one major element in its favor: it's an authoritarian state with fewer of the checks and balances that often sandbag more democratic countries.

Even when its politicians aren’t fighting like toddlers on a playground, American bureaucracy can often stifle industry innovation and slow development. Meanwhile, China’s leadership simply declares that something needs doing, and (for the most part) it gets done, albeit often at the cost of the environment, its citizens’ freedoms, or other ‘trivial’ concerns.

That dynamic could play a role in how each country allows AI to develop. Take the US AI Safety Institute (USAISI), which was established last November at the direction of President Biden “to lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.”

The USAISI operates through the National Institute of Standards and Technology (NIST), which just announced new members of the USAISI leadership team. Among the new hires is former OpenAI researcher Paul Christiano, who will serve as USAISI’s head of AI safety.

The trouble is, Christiano has previously expressed concerns that the chance of AI resulting in our inevitable “doom” was basically a coin flip. Last year, Christiano warned the Bankless podcast that once AI has been deployed everywhere, “if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us.”

While Christiano doesn’t insist that our fate is irrevocably sealed, his views have nonetheless led to concerns both within the NIST rank-and-file and the larger U.S. AI community that Christiano could act as a brake on AI development by U.S. firms if he thinks we’re stumbling into T-800 territory.

Not this again

China and Russia have grown closer over the past couple of years, with China’s leader Xi Jinping famously declaring in 2022 that there were “no limits” to the neighboring countries’ partnership. Just days later, Russia invaded Ukraine.

In February, China and Russia began collaborating on AI’s military applications, including discussions on the ethical implications of allowing a non-human thought process in the use of military hardware. While China is said to be against the notion of AI-based autonomous weapons systems, neither Russia nor the U.S. has publicly taken such a firm stance.

Last year, the U.S. led a Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, to which over 50 countries added their names. Neither China nor Russia was among them. Despite these lofty declarations, the Pentagon continues to research how AI might enhance its ability to conduct warfare amid the ever-present suspicion that China—or any other nation—might be ahead of the curve.

History is riddled with nations eyeing their rivals' military capabilities with unease (often without reason). Think of the infamous missile gap of the 1950s, which got an encore presentation recently with Russia's bragging about its hypersonic cruise missile technology, said to be impervious to interception by Western anti-missile batteries. Russia actually fired one of the things at Ukraine in February.

On the surface, Xi Jinping doesn’t appear as needy as Vladimir Putin in terms of wanting the world to think he’s a tough guy. But Xi’s silence probably gives Pentagon officials even more heebie-jeebies about what wonder weapons the AI developers in the People’s Liberation Army might have hidden up their sleeves.

Weaponized AI is already here

Late last year, reports spread that the Israeli Defense Forces (IDF) were using AI to identify bombing targets in Gaza. More recently, Jerusalem-based journalists issued a report on two separate AI targeting systems—known as Lavender and ‘Where’s Daddy?’—the IDF is using to target Hamas operatives. The report claimed this has led to “the dispassionate annihilation of thousands of eligible—and ineligible—targets at speed and without much human oversight.”

The failure of Iran's recent swarm attack on Israel—involving hundreds of drones, cruise missiles, and ballistic missiles—has been credited in part to AI, which assists Israel's Iron Dome and David's Sling anti-missile systems. AI also has a seat on Israel's Oron spy plane (also known as MARS2), a modified Gulfstream G550 jet equipped with "innovative AI technologies and algorithms for processing vast amounts of data within minutes."

The U.S. has its own AI-targeting system, known as Project Maven (a former Google (NASDAQ: GOOGL) partnership), which was used to strike targets in Iraq and Syria this February. The Intercept reported this week that the U.S. actually played a more crucial role than Israel in shooting down the Iranian missiles. Given the scale of the Iranian attack, it’s a fair bet that AI was key to identifying and tracking these weapons long before they reached their targets.

So where is this going? Well, way back in 1990, author Tom Clancy appeared in an episode of PBS’s Nova series about the rise of so-called Killing Machines. Clancy offered the following dystopian vision of a future that is a lot closer to reality now than it was back then.

It is one thing to be hunted by a man who has a wife and parents and children and dreams and ideas. It is another thing entirely to be hunted by a machine that simply thinks of you as a target. Worst of all, to be hunted by a machine that's patient and can wait and doesn't care that you're a living person with dreams and hopes and a sweetheart and children or whatever. It just knows that you're something that it wants to kill. That is truly scary.

Regardless of which country gets there first, the moment weaponized AI catches up with the freaky-ass stuff coming out of Boston Dynamics' labs, it's time to bend over, kiss our asses goodbye, and welcome our new robot overlords. Today's doomers may be tomorrow's prophets.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI is for ‘augmenting’ not replacing the workforce


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.