
The U.S. State Department has teamed up with Nigeria to further the responsible use of artificial intelligence (AI) in the military.

Mallory Stewart, whose role at the State Department focuses on arms control and stability, recently discussed AI use in military operations with Nigeria’s Ministry of Foreign Affairs, the Ministry of Defence, the national security advisor, civil society, and other officials from the regional bloc ECOWAS.

The U.S. has been on a global tour drumming up support for its initiative to have guardrails for AI use in the military. This initiative, which has garnered the support of 55 countries, advocates using AI “in a manner consistent with international laws and recognising inherent human bias,” Stewart told journalists in Abuja.

“We’ve learned the hard way [about the] inherent human bias built into the AI system … leading to maybe misinformation being provided to the decisionmaker,” she added.

It’s not the first time the U.S. government has partnered with Nigeria on AI. Earlier this year, the American government reiterated its support for Nigeria’s AI strategy, pledging to support the development of the West African nation’s infrastructure to boost research and innovation. A few months later, the two governments signed an MoU to increase AI engagements between their respective national AI institutes.

The U.S. Department of Commerce has also pledged to collaborate with Nigeria on its approaches to critical areas such as “data, trusted digital infrastructure, power/green energy, AI governance policies, computing resources, digital skills relevant to AI and more.”

The controversy of AI in the military

As with virtually every other sector, the military is seeing rising AI adoption. For some, like Japan, the technology offers a solution to a rapidly aging and declining population that has left the country short of a military workforce. Others are using it to collect and analyze data and assist in decision-making.

According to former Google (NASDAQ: GOOGL) CEO Eric Schmidt, global wars are “no longer about who can mass the most people or field the best jets, ships, and tanks.” It’s now about autonomous weapon systems and powerful algorithms.

However, the military remains the most controversial field for AI. In the Gaza conflict, for instance, Israel has been reported to use AI to identify and target suspected militants, with humans playing an alarmingly reduced role in the process. One Israeli investigation found that the country has killed thousands of women and children as collateral damage from bombings orchestrated almost entirely by AI.

This makes regulations and guardrails critical for the technology’s deployment in the sector. However, global political alignments have overshadowed the need for policy frameworks.

One major movement led by the U.S. brought together 31 nations, including France, Germany, Canada, and Australia, to sign a declaration setting guardrails on military AI. However, China and Russia, the other two most powerful militaries after the U.S., were conspicuously missing.

As regulators lag, AI developers are increasingly voicing their concerns about, and opposition to, the military deployment of AI. Earlier this year, nearly 200 employees at Google DeepMind signed a letter demanding the company terminate its contracts with military organizations.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the developers said.

Industry leader OpenAI has also been drawn into military applications. Earlier this year, the company quietly removed its ban on using its AI models “for military and warfare” and has been working with the Pentagon since.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Understanding the dynamics of blockchain & AI
