11-22-2024

The U.S. State Department has teamed up with Nigeria to further the responsible use of artificial intelligence (AI) in the military.

Mallory Stewart, whose role at the State Department focuses on arms control and stability, recently discussed AI use in military operations with Nigeria’s Ministry of Foreign Affairs, the Ministry of Defence, the national security advisor, civil society, and other officials from the regional bloc ECOWAS.

The U.S. has been on a global tour drumming up support for its initiative to have guardrails for AI use in the military. This initiative, which has garnered the support of 55 countries, advocates using AI “in a manner consistent with international laws and recognising inherent human bias,” Stewart told journalists in Abuja.

“We’ve learned the hard way [about the] inherent human bias built into the AI system … leading to maybe misinformation being provided to the decisionmaker,” she added.

It’s not the first time the U.S. government has partnered with Nigeria on AI. Earlier this year, the American government reiterated its support for Nigeria’s AI strategy, pledging to support the development of the West African nation’s infrastructure to boost research and innovation. A few months later, the two governments signed an MoU to increase AI engagements between their respective national AI institutes.

The U.S. Department of Commerce has also pledged to collaborate with Nigeria on its approaches to critical areas such as “data, trusted digital infrastructure, power/green energy, AI governance policies, computing resources, digital skills relevant to AI and more.”

The controversy of AI in the military

As in virtually every other sector, AI is seeing rising adoption in the military. For some countries, like Japan, the technology offers a solution to a rapidly aging and shrinking population that has left the country short of military personnel. Others are using it to collect and analyze data and to assist in decision-making.

According to former Google (NASDAQ: GOOGL) CEO Eric Schmidt, global wars are “no longer about who can mass the most people or field the best jets, ships, and tanks.” Instead, they are decided by autonomous weapon systems and powerful algorithms.

However, the military remains the most controversial field for AI. In the Gaza conflict, for instance, Israel has been reported to use AI to identify and target suspected militants, with humans playing an alarmingly reduced role in the process. One Israeli investigation found that the country has killed thousands of women and children as collateral damage from bombings orchestrated almost entirely by AI.

This makes regulations and guardrails critical for the technology’s deployment in the sector. However, global political alignments have overshadowed the need for policy frameworks.

One major push led by the U.S. brought together 31 nations, including France, Germany, Canada, and Australia, to sign a declaration setting guardrails for military AI. However, China and Russia, which field the two most powerful militaries after the U.S., were conspicuously absent.

As regulators slack off, AI developers are increasingly voicing their concerns and opposition to the military deployment of AI. Earlier this year, nearly 200 employees at Google DeepMind signed a letter demanding the company terminate its contracts with military organizations.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the employees wrote.

Industry leader OpenAI has also been dragged into military applications. Earlier this year, the company quietly removed its ban on using its AI models “for military and warfare” and has been working with the Pentagon since.

For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Understanding the dynamics of blockchain & AI
