The United States House Financial Services Committee held a hearing last week to hear views from finance and housing industry figures on the application and regulation of artificial intelligence (AI). Consensus favored a measured approach to regulation, applying existing laws and supporting innovation, while the synergy between AI and blockchain technology was also raised.
The hearing, titled “AI Innovation Explored: Insights into AI Applications in Financial Services and Housing,” was the latest step in a congressional effort to examine whether existing statutory and regulatory frameworks are sufficient to safeguard the U.S. financial and housing markets from the potential risks posed by AI, without hampering development and innovation.
This effort began in January 2024 with the establishment of a bipartisan AI Working Group, co-chaired by Financial Services Committee chair Patrick McHenry (R-NC) and Ranking Member Maxine Waters (D-CA) and comprising 12 other Committee members.
“To help educate members, our Committee is creating a new bipartisan Working Group on AI. The Working Group will explore this technology’s potential, specifically its adoption in our financial system,” said McHenry, when the group was set up. “It will also find ways to leverage artificial intelligence to foster a more inclusive financial system, while establishing the U.S. as the world leader in AI development and terms of use.”
The working group conducted six roundtables focused on AI’s relationship with federal regulators, capital markets, housing and insurance, financial institutions and nonbank firms, and national security.
Key takeaways from these discussions included that, given the critical role of the financial and housing markets, the Committee should lead oversight of AI adoption in the financial services and housing industries; that the Committee must ensure regulators apply and enforce existing laws, such as anti-discrimination laws; that the Committee should ensure financial regulators have the appropriate focus and tools to oversee new products and services; and that the Committee should continue to consider how to reform data privacy laws, given the importance of data to AI.
Some of these takeaways were up for discussion in the July 23 hearing, which kicked off with statements from the various housing and finance sector witnesses.
Industry seeks AI-supportive regulation
One of the most high-profile witnesses the Committee heard from was John Zecca, Executive Vice President and Global Chief Legal, Risk and Regulatory Officer at Nasdaq.
Zecca noted how AI has been in use at Nasdaq for some time:
“While recent developments regarding generative AI have brought AI into the forefront of public consciousness, this technology has been integral to our services for years. We are already using AI technology to enhance transparency, liquidity and integrity in the system.”
Specifically, one of the world’s largest stock exchanges employs AI and machine learning to combat financial crime, detect and prevent market abuse, and extract insights and value from large, complex data sets, such as market data, alternative data, and proprietary data.
He explained Nasdaq’s “robust, coordinated process to govern the implementation of AI,” which he argued “is foundational to ethically and securely unlock the power of AI to benefit society.”
In terms of his recommendations for AI oversight, Zecca argued for leveraging existing regulations and regulatory structures where possible.
“New regulation should be risk-based and proportionate, meaning that it should focus on the potential outcomes in terms of benefits, risks and harms of the AI applications, rather than on the specific technologies or methods,” said Zecca.
He went on to suggest that any AI-specific regulation should be consistent and harmonized and that it should “avoid creating gaps, overlaps, or inconsistencies among different regulators, jurisdictions, or sectors, and that it should promote coordination and cooperation, among the regulators, the industry, and the international community.”
To achieve this, Zecca proposed creating and promoting safe platforms, such as sandboxes, pilots, and labs, where industry and regulators can test and learn from AI applications in a controlled, supervised environment and exchange their experiences.
He rounded off by stating that “while the calls for caution about AI’s development are appropriate, so are the calls for optimism. Right now – in our products and across our markets – AI applications are enabling a fairer, more efficient and more resilient financial system.”
Just as Zecca and Nasdaq advocated a lighter touch on AI regulation, the other witnesses presented a largely united front in favor of more AI- and innovation-friendly legislation.
Elizabeth Osborne, Chief Operations Officer of Great Lakes Credit Union, suggested that “policymakers should consider non-prescriptive approaches for encouraging the responsible use of AI within the financial services sector.”
“As policymakers grapple to legislate and regulate in this emerging environment, it is important to recognize many existing laws are technology agnostic and still apply,” said Osborne. “As such, while it is important to have clear rules of the road that protect participants in the marketplace and guard against bias or discrimination, it is also important that those rules do not stifle and harm innovation.”
Meanwhile, Frederick Reynolds, Deputy General Counsel for Regulatory Legal and Chief Compliance Officer of fintech company FIS Global, argued that “current regulations governing financial services are robust enough to support the responsible adoption of AI technologies.”
Lisa Rice, President and CEO of the National Fair Housing Alliance, echoed these comments while also warning against complacency regarding encouraging AI development in the U.S.
In her prepared remarks, she cautioned that “other nations are significantly stepping up their efforts by building the infrastructure needed to spur AI innovations. The U.S. is behind the curve, and in some cases playing catch-up to other nations.”
In order to put the U.S. back at the forefront of technological development, Rice said “it is imperative that the U.S. continue to lead the world in establishing policies and frameworks to advance technological innovations while ensuring these systems are fair, safe, transparent, explainable, and reliable.”
As Committee Chair McHenry made clear, the witnesses appeared to be preaching to the choir to some extent.
“We cannot allow the fear of the unknown to thwart the United States’ role as a hub for technological innovation,” said McHenry. “Far greater than the risks associated with AI itself, are the risks of allowing foreign competitors and adversaries to lead the development, adoption and terms of use.”
No rush for new AI regulation
In his opening remarks to the hearing, McHenry hinted at the Committee’s current inclination, or at least his own leanings, toward a careful and considered approach to legislation in the area.
“We should be leery of rushing legislation,” he said. “It’s far better we get this right, rather than be first. In other words, policymakers should measure twice and cut once.”
He also pitched the financial services industry as a proving ground for any potential AI regulation, saying that “the financial services industry — one of the most highly regulated in America — is a clear entry point as policymakers attempt to tackle the thorny questions AI presents.”
McHenry suggested that while a measured approach to regulation and legislation is preferable, “at the same time, our regulators must ensure they are equipped to take on this new technological frontier.”
He added, “this Committee should examine whether current regulation needs to be clarified, and carefully consider if targeted legislation to close regulatory gaps may be needed.”
In a rare moment of unity, McHenry’s frequent sparring partner on the Financial Services Committee, Ranking Member Waters, agreed that the Committee “must lead the House in overseeing AI.” Waters, much like many of the witnesses seated facing her, also favored enforcing existing laws over drastic new regulation.
For Waters, the principal regulatory concern of the hearing was the issue of discrimination:
“As companies forge ahead with AI, it’s more important than ever that this Committee and Congress, not only continue this kind of oversight, but prioritize its review of AI and diversity, equity, and inclusion. As we know, AI is built by humans and relies on data that may reflect bias and systemic inequities or perpetuate discrimination.”
For this reason, she voiced her satisfaction that the hearing was considering her draft legislation, “which would better inform consumers when products and services incorporate AI, what data is used to train AI-based decisions, and provide regulators with the origin of the data used by AI.”
Waters also praised a proposed bill from fellow Committee Democrat Rep. Brittany Pettersen (D-CO), also under consideration at the hearing, the Preventing Deep Fake Scams Act. The bill would set up a Task Force to examine how banks and credit unions can protect themselves and their customers and members from fraud associated with AI.
“Through efforts like these, we can build more transparent and equitable systems, as well as trust, and safety in an increasingly AI-driven world,” said Waters.
Outside of the merits and possible forms of future AI regulation, another topic to emerge from the hearing was the symbiotic relationship between AI and digital assets.
A marriage made in tech heaven
During his time with the mic, House Majority Whip Tom Emmer (R-MN) took the opportunity to suggest that digital assets and AI could enter into a “symbiotic relationship” in the future as the technology continues to mature.
“The nexus between AI and digital assets seems necessary and inevitable to me,” said Emmer, a known digital asset advocate, who has been described as the “crypto king of Congress.”
Specifically, he suggested that blockchain technology could be used to improve data used in AI models.
“I believe the convergence of blockchain and AI can not only improve the trustworthiness of data, but the decentralization of artificial intelligence data can mitigate single point of failure security issues. Additionally, as AI systems communicate with each other and need to transact with each other to obtain information, digital assets and AI can have that symbiotic relationship,” said Emmer.
In his questioning of the witnesses, Emmer asked Vijay Karunamurthy, Chief Technology Officer of Scale AI—an artificial intelligence company headquartered in San Francisco—whether blockchain technology could be used “as a tool to ensure data authenticity in the future.”
To which Karunamurthy responded: “It’s of increasing importance, paramount for us to observe, monitor and to find authoritative sources of information. So whether we’re talking about the blockchain or digital identity solutions, those all have an important role to play in ensuring that data is accurate and up to date.”
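The idea raised here, using a ledger to establish data authenticity for AI, can be illustrated with a minimal sketch. The example below is a toy, not any system discussed at the hearing: it commits SHA-256 hashes of training-data records to a simple append-only hash chain (standing in for a blockchain), so that any later tampering with the stored records is detectable. All names (`HashChain`, `record_hash`) are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a data record (canonical JSON)."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

class HashChain:
    """Append-only chain of record hashes; each entry commits to the
    previous one, so tampering with any stored record is detectable."""

    def __init__(self):
        self.entries = []  # list of (record_hash, chained_hash) tuples

    def append(self, record: dict) -> str:
        rh = record_hash(record)
        prev = self.entries[-1][1] if self.entries else "0" * 64
        chained = hashlib.sha256((prev + rh).encode("utf-8")).hexdigest()
        self.entries.append((rh, chained))
        return chained

    def verify(self, records: list) -> bool:
        """Recompute the chain from the raw records and compare."""
        if len(records) != len(self.entries):
            return False
        prev = "0" * 64
        for record, (rh, chained) in zip(records, self.entries):
            if record_hash(record) != rh:
                return False
            expected = hashlib.sha256((prev + rh).encode("utf-8")).hexdigest()
            if expected != chained:
                return False
            prev = chained
        return True

# Commit two (hypothetical) market-data records to the chain.
records = [{"price": 101.5, "ts": 1}, {"price": 102.0, "ts": 2}]
chain = HashChain()
for r in records:
    chain.append(r)

print(chain.verify(records))   # True: data matches the ledger
records[0]["price"] = 999.9    # simulate tampering with the stored data
print(chain.verify(records))   # False: tampering detected
```

A production system would anchor these hashes on an actual distributed ledger rather than an in-memory list, which is what gives the scheme its decentralization and single-point-of-failure resistance, but the verification logic is the same.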
One of Karunamurthy’s other memorable contributions came in his opening remarks, in which he hailed AI’s nearly limitless potential, but only if it is deployed within a regulatory framework that enables innovation.
“AI is the most promising technological innovation of our time, but it must be deployed in a safe, responsible, and thoughtful manner,” said the Scale AI CTO.
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.