NYAG outlines benefits and risks of AI in anticipation of potential regulation

New York Attorney General (NYAG) Letitia James issued a report on the potential benefits and risks associated with artificial intelligence (AI) in anticipation of potential incoming legislation related to the new technology.

Key benefits included AI’s application in the healthcare sector and the streamlining of administrative tasks. Amongst the possible risks were algorithmic bias in hiring, generative AI being used by bad actors to create misinformation, and anti-competitive effects arising from AI’s need for vast amounts of data for training.

“There are many ways this technology can help people, from completing routine administrative tasks to assisting with medical developments,” said the Office of the New York State Attorney General (OAG). “While this presents exciting opportunities, there are several risks associated with this technology. It is crucial to timely address these risks before it is too late.”

The report is the product of a private symposium held on April 12, 2024, in which the OAG hosted leading academics, policymakers, advocates, and industry representatives in panel discussions to address the major opportunities and risks AI technology presents.

The purpose of the symposium, titled “The Next Decade of Generative AI: Fostering Opportunities While Regulating Risks,” was to help the OAG “develop strategies to mitigate those risks while ensuring New York can remain at the forefront of innovation.”

Generative AI was a particular focus, but speakers also addressed more traditional AI systems, such as automated decision-making technology. Topics included addressing information and misinformation sharing, data privacy, and potential healthcare uses for AI.

“On a daily basis, we are seeing artificial intelligence utilized to improve our lives, but also sow chaos and confusion,” James said in a statement. “The symposium I organized helped bring together government and industry experts to discuss and generate real plans and next steps on addressing AI technology, and I thank everyone for their participation and insights on this critical issue.”

She added, “I want to ensure that government is stepping up to properly regulate AI, and ensure that its potential to help New Yorkers is realized, while its potential to cause harm is addressed and safeguarded against.”

Benefits of AI

One of the areas identified where AI could potentially provide huge benefits was healthcare. The report noted how participants at the symposium had outlined AI’s use for early disease detection, drug discovery, monitoring trends in public health, and precision medicine.

“AI tools have already been used to assist with medical imaging, making scans faster and less expensive,” said the OAG. “These tools can help clinicians triage by screening medical images to identify potentially urgent issues for priority review by a physician. AI models are now trained to go a step further and help detect disease.”

Another health-related benefit of AI is its use for administrative tasks that can lighten workloads and alleviate physician burnout.

The report also suggested that AI’s potential to assist in administrative tasks could benefit other sectors, particularly government agencies.

“A government official outlined opportunities to use generative AI to calculate tax liability, generate public education materials, and write computer code,” said the OAG.

It praised how AI tools, including chatbots powered by generative AI, can help people easily find information. For example, they are already being used to supplement phone lines for public non-emergency services and corporate customer services.

“This use of chatbots can free up phone operators to focus on providing specific services and addressing complicated questions,” said the OAG. “In addition, generative AI tools can automate translation, allowing government and businesses to better communicate with people in their native languages and provide better access to information.”

Not without its risks

While outlining the potential benefits of AI technology across various sectors, the NYAG also cautioned about associated risks.

“Healthcare data is especially sensitive,” said the OAG. “Patients may not understand what data is being collected or how it is being used by AI tools, especially when such tools are continuously running in their hospital rooms or even homes.”

To effectively use AI tools in such a sensitive context, the OAG symposium speakers suggested that humans must be involved, have ultimate responsibility, and be prepared to make decisions about when to trust AI tools and when to challenge them.

Unequal access was also a concern, with minority groups underrepresented in clinical data used to create personalized treatment plans and AI transcription services currently not covering a broad range of languages or accents.

In terms of generative AI tools, the report highlighted the risks of output and content that is false, biased, or otherwise “problematic” because the model was trained on flawed data, a problem often referred to as “garbage in, garbage out.”

In addition, the OAG noted how generative AI can be used by bad actors to intentionally create misinformation materials, such as deepfakes:

“Laws around defamation and fraud provide some recourse but do not address the full scope of the problem, particularly as deepfakes become increasingly realistic and harder to detect. Speakers noted that the use of generative AI in misinformation would be a major concern over the coming months ahead of the general election, as bad actors may create a deluge of misinformation that cannot be adequately factchecked in time.”

In the sphere of hiring, while symposium participants praised AI’s ability to streamline the application review process, they pointed out that companies using AI screening tools for hiring create the potential for algorithmic bias.

“Speakers cited ample evidence that AI tools often amplify, rather than correct, bias. For example, algorithms trained on data from past hiring can amplify human biases reflected in past hiring decisions and entrench existing norms,” said the report.

One speaker even argued that it’s best to assume AI tools “discriminate by default.”

Possible AI regulation in the US

Regarding how AI technology should be regulated, the report noted differing views amongst symposium speakers.

Some favored the passage of a comprehensive law, such as the European Union’s Artificial Intelligence Act (EU AI Act), which creates a broad framework of regulation based on risk and establishes a centralized agency to oversee AI technology. Others argued such a model is not appropriate in the U.S., instead advocating for regulation and oversight to be divided by sector and handled within separate agencies.

Current uncoordinated legislative efforts in the U.S. have addressed varied AI-related issues.

In April, a bipartisan bill aimed at supporting U.S. innovation in AI technology, the ‘Future of AI Innovation Act,’ was introduced in the Senate, proposing a number of measures, including setting up the AI Safety Institute to develop voluntary standards and mandating that federal science agencies make datasets publicly available. In May, the ‘Enhancing National Frameworks for Overseas Critical Exports Act’ was introduced by the House Foreign Affairs Committee to empower the Commerce Department to control the transfer of advanced AI systems and protect homegrown technology. And in June, the Senate Armed Services Committee (SASC) passed the fiscal year 2025 National Defense Authorization Act (NDAA), which included directions for a Department of Defense (DoD) AI pilot program.

These efforts have leaned toward protecting and supporting AI innovation, in one form or another, and less toward dealing with the consequences and practicalities of applying AI in specific sectors, such as healthcare or administration.

When it comes to the benefits and risks of applying AI to consumers, citizens, and organizations, the most relevant piece of legislation to date has come at the state level, in Colorado.

In May, Colorado became the first state to pass a bill protecting its residents from AI discrimination with the ‘Colorado AI Act.’ It targets developers and deployers of ‘high-risk’ AI systems to ensure that AI doesn’t influence decisions based on algorithmic discrimination, which it described as a “condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals.”

A system is considered high-risk if it’s used to make consequential decisions impacting areas such as housing, medical cover, and employment.

Connecticut attempted a similar bill, which actually predated the Colorado AI Act, but the state’s governor, Ned Lamont, threatened to veto the bill if it passed the House of Representatives, effectively killing it.

For now, that leaves Colorado as the U.S. benchmark for AI regulation.

Going forward

The OAG noted that while there is disagreement on the appropriate framework for regulating AI technology, including the proper level of centralization, it is “actively monitoring the effectiveness of different regulatory frameworks, such as the EU AI Act, to inform future legislative and regulatory proposals.”

It closed by adding that it would continue to listen and learn as AI develops and explore “appropriate ways to encourage innovation while protecting New Yorkers.”
