A committee constituted by the Reserve Bank of India (RBI) has submitted its report and recommendations on establishing a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the financial sector. Emphasizing a comprehensive understanding of the opportunities and risks posed by AI, the report presents a strategic roadmap for all key stakeholders. The committee stressed that the FREE-AI framework is essential for unlocking the benefits of AI while maintaining public trust and ensuring ethical standards across the financial ecosystem.
- RBI’s FREE-AI framework for AI integration
- Recommendations for implementing AI effectively
- Emerging risks and cybersecurity vulnerabilities with AI
The RBI conducted two focused surveys to assess the state of AI integration and its associated challenges within the financial sector. Building on these findings, the committee engaged in consultations with key stakeholders to deepen its understanding. The report also drew on global perspectives, referencing the 2022 discussion paper by the Bank of England and the Financial Conduct Authority (FCA) on AI and machine learning in U.K. financial services. Additionally, it referenced insights from the 2021 Stanford University report by the Center for Research on Foundation Models, which examined the potential and risks of foundation models.
“For an emerging economy like India, AI presents new ways to address developmental challenges. Multi-modal, multi-lingual AI can enable the delivery of financial services to millions who have been excluded. When used right, AI offers tremendous benefits. If used without guardrails, it can exacerbate the existing risks and introduce new forms of harm,” the committee said.
“In the financial sector, AI has the potential to unlock new forms of customer engagement, enable alternate approaches to credit assessment, risk monitoring, fraud detection, and offer new supervisory tools. At the same time, increased adoption of AI could lead to new risks like bias and lack of explainability, as well as amplifying existing challenges to data protection, cybersecurity, among others,” it added.
In December 2024, the RBI initiated the formation of a specialized committee to craft a framework for the ethical deployment of AI in the financial sector. The RBI brought together experts from various sectors to develop a strong, adaptable, and forward-thinking framework that ensures ethical integrity while supporting innovation across the financial ecosystem.
The committee is led by Pushpak Bhattacharyya, a Computer Science and Engineering professor at the Indian Institute of Technology (IIT) Bombay, one of India’s premier technology institutes. Other members appointed by the RBI include Balaraman Ravindran, a professor and head of data science and AI at IIT Madras; Sree Hari Nagaralu, head of security AI research at Microsoft India (NASDAQ: MSFT); Suvendu Pati, an official of the RBI; Anjani Rathor, Group Head and Chief Digital Experience Officer at HDFC Bank (NASDAQ: HDB); and Abhishek Singh, Additional Secretary, Ministry of Electronics and Information Technology, Government of India.
The RBI’s move gains urgency as generative AI (GenAI) is expected to contribute as much as $438 billion to India’s gross domestic product (GDP) by 2029–2030, underscoring the critical need for ethical and responsible implementation of AI to drive sustainable economic and financial sector growth.
In parallel, the RBI rolled out MuleHunter.AI—an AI/ML-powered system developed by the Reserve Bank Innovation Hub (RBIH)—to combat rising incidents of digital fraud. This advanced model is designed to assist banks in identifying and addressing mule accounts, a common method fraudsters use to launder illicit funds.
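MuleHunter.AI’s internals are not public, but the behavior it targets can be illustrated with a toy heuristic of our own devising: mule accounts typically receive funds and move almost all of them out again within hours. The sketch below flags accounts with a high "pass-through" ratio; the threshold and field names are assumptions, not the RBIH model.

```python
from dataclasses import dataclass

# Illustrative only: a toy pass-through heuristic, NOT the actual
# MuleHunter.AI model. Mule accounts tend to forward received funds
# almost immediately, so we measure how much of each credit leaves
# the account within a short window.

@dataclass
class Txn:
    account: str
    amount: float   # positive = credit, negative = debit
    hour: float     # hours since some reference point

def pass_through_ratio(txns: list[Txn], window_hours: float = 24.0) -> float:
    """Fraction of credited funds that leave the account within the window."""
    credits = [t for t in txns if t.amount > 0]
    if not credits:
        return 0.0
    total = sum(t.amount for t in credits)
    moved = 0.0
    for c in credits:
        # Debits that occur shortly after this credit
        out = sum(-t.amount for t in txns
                  if t.amount < 0 and 0 <= t.hour - c.hour <= window_hours)
        moved += min(c.amount, out)
    return moved / total

txns = [Txn("A", 50_000, 1.0), Txn("A", -49_000, 3.0)]
suspicious = pass_through_ratio(txns) > 0.9  # 98% forwarded within 24h
```

A production system would combine many such signals (device fingerprints, network links between accounts, velocity features) in an ML model rather than a single rule.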
The recommendations
While the Ministry of Electronics and Information Technology (MeitY) is driving national initiatives to expand access to hardware and computing power, the committee’s recommendations are specifically tailored to building the infrastructure required for the financial sector to effectively adopt and scale AI-driven innovation.
The committee pointed out in its report that effective integration of AI into the financial sector demands a balanced strategy that promotes innovation while actively managing potential risks. To fully harness AI’s potential, creating an environment that supports responsible innovation through strong infrastructure, adaptive policies, and skilled human resources is vital.
The report pointed out that robust infrastructure lies at the foundation of any innovation. For AI in finance, this includes well-developed data ecosystems, sufficient computing resources, and accessible public digital assets that support safe experimentation.
The committee said that a robust financial sector data infrastructure should be developed as a form of digital public infrastructure (DPI) to support the creation of reliable and transparent AI models in the financial domain. This infrastructure could be integrated with AI Kosh—the India Datasets Platform launched under the IndiaAI Mission—to enhance access to diverse and standardized datasets. The committee also recommended establishing an enabling framework to integrate AI with DPI to accelerate the delivery of inclusive, affordable financial services at scale.
It proposed that a dedicated AI innovation sandbox tailored for the financial sector should be created to allow regulated entities (REs), FinTech companies, and other innovators to design and test AI-powered solutions, models, and algorithms within a secure and well-regulated setting. Collaboration with other financial sector regulators (FSRs) is also essential to ensure mutual contribution and shared benefits from this initiative.
There is also an urgent need to establish suitable incentive mechanisms and supporting infrastructure to promote inclusive and fair adoption of AI, particularly among smaller players in the financial ecosystem. To foster innovation and address critical sectoral priorities, the committee recommended that the RBI consider allocating dedicated funding to develop essential data and computational infrastructure.
It said that AI models developed within the country—such as large language models (LLMs), small language models (SLMs), or other non-LLM variants—should be specifically designed to address the financial sector’s needs and made available as public goods.
The committee recommended that regulators regularly review and evaluate current policies and legal frameworks to ensure they remain conducive to AI-driven innovation while effectively managing risks. It also emphasized the need for a robust and adaptable AI policy framework for the financial sector to guide innovation, adoption, and risk management. Additionally, the RBI could explore the issuance of a unified AI Guidance document to serve as a comprehensive reference for regulated entities and the wider fintech community on the responsible creation and implementation of AI technologies.
The committee recommended that regulators promote AI-driven innovations to enhance financial inclusion for underserved and unserved communities by easing compliance requirements wherever feasible, without compromising essential safeguards. Recognizing that AI systems operate in probabilistic and non-deterministic ways, the committee proposed adopting a graded liability framework to support responsible innovation. While regulated entities (REs) should remain accountable for any customer losses, a more flexible supervisory approach is advised when REs have implemented robust safety measures such as incident reporting, audits, and red teaming. However, this leniency should apply only to isolated or first-time incidents and not extend to repeated violations, gross negligence, or failure to address known issues.
The committee recommended that a permanent AI Standing Committee, comprising multiple stakeholders, should be established under the RBI to provide continuous guidance on emerging AI trends, associated risks, and technological advancements. This body would also evaluate the adequacy of existing regulatory policies in light of ongoing developments. Initially, the committee could be set up for a five-year term, incorporating periodic reviews and a defined end date. To strengthen oversight and sectoral coordination, a specialized institution for the financial sector should be created, aligning with a national-level AI Safety Institute through a hub-and-spoke model.
Regulated Entities (REs) must enhance their AI-related expertise at both the board and executive levels, and implement structured training programs aimed at upskilling and reskilling staff involved in AI operations. These measures are essential to managing AI-related risks and ensuring ethical, responsible use of the technology. In parallel, regulatory and supervisory authorities should invest in training and institutional capacity-building to keep pace with AI’s evolving capabilities and associated ethical and risk dimensions. The RBI may also explore establishing a dedicated AI institute to foster sector-wide skill development and awareness.
The committee advised that the financial sector—through organizations like the Indian Banks’ Association (IBA) or Self-Regulatory Organizations (SROs)—should create a structured framework to facilitate sharing AI use cases, insights, and best practices. This framework should also encourage responsible adoption and scaling by showcasing successful implementations, challenges faced, and effective governance strategies.
Regulated Entities (REs) must implement comprehensive data governance structures that include clear internal policies and controls covering the entire data lifecycle—collection, access, use, storage, and deletion—within AI systems. These policies must align with relevant legal requirements, such as the Digital Personal Data Protection (DPDP) Act.
In addition, REs should enforce strong model governance practices that span all stages of the AI model lifecycle, from design and development to deployment and retirement. This includes thorough documentation, validation, and continuous monitoring to identify and correct model drift or performance issues over time.
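One widely used drift check that such continuous monitoring could include is the Population Stability Index (PSI), which compares a model’s score distribution at validation time with its current distribution. The sketch below is our illustration, not a method prescribed by the report; the conventional rule of thumb treats PSI above roughly 0.25 as significant drift.

```python
import math

# Population Stability Index (PSI): bins the baseline scores, then
# measures how far the current score distribution has shifted.
# Common convention: < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant.

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def share(scores: list[float]) -> list[float]:
        counts = [0] * bins
        for s in scores:
            i = min(int((s - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor at a tiny value so the log term is always defined
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = share(expected), share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # validation-time scores
drifted = [min(i / 100 + 0.3, 1.0) for i in range(100)]    # shifted in production
stable_psi = psi(baseline, baseline)    # ~0: no drift
drift_psi = psi(baseline, drifted)      # well above 0.25: retrain/review
```

In practice an RE would run such a check on a schedule, log the result against board-approved thresholds, and trigger model revalidation when drift is detected.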
Furthermore, before deploying autonomous AI systems that can make independent financial decisions, REs should establish rigorous oversight frameworks. Given the higher risks associated with such systems, particularly in medium to high-risk scenarios, human oversight and intervention should be integral to the governance process.
The committee recommended that financial sector regulators introduce a formal AI incident reporting system for Regulated Entities (REs) and fintech firms, designed to promote early identification and transparent reporting of AI-related issues. This system should encourage openness by adopting a supportive and good-faith approach to disclosures.
REs should maintain an updated internal inventory of all AI systems, capturing details such as models, use cases, user segments, dependencies, associated risks, and any complaints received. This inventory should be reviewed and updated at least every six months and must be accessible for regulatory audits and inspections.
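To make the inventory requirement concrete, here is a hypothetical sketch of what one inventory record and its six-month review check might look like. The field names and the 183-day review window are our assumptions; the report prescribes the content (models, use cases, user segments, dependencies, risks, complaints) but not a schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical inventory record; field names are illustrative,
# not prescribed by the FREE-AI report.

@dataclass
class AIInventoryEntry:
    model_name: str
    use_case: str
    user_segment: str
    dependencies: list[str]
    risk_rating: str          # e.g. "low" / "medium" / "high"
    complaints: int
    last_reviewed: date

def overdue_for_review(entry: AIInventoryEntry, today: date) -> bool:
    """The report asks for a review at least every six months (~183 days)."""
    return today - entry.last_reviewed > timedelta(days=183)

entry = AIInventoryEntry(
    model_name="credit-scoring-v2",
    use_case="retail loan underwriting",
    user_segment="retail borrowers",
    dependencies=["bureau-data-feed"],
    risk_rating="high",
    complaints=0,
    last_reviewed=date(2025, 1, 1),
)
needs_review = overdue_for_review(entry, date(2025, 9, 1))  # True: > 6 months old
```

Keeping such records in a structured, queryable form is also what makes them "accessible for regulatory audits and inspections," as the report requires.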
Simultaneously, regulators should create a centralized AI repository to monitor the sector’s AI usage, identify concentration risks, and assess potential systemic threats, with appropriate anonymization of institutional data.
REs are also expected to implement a robust, risk-based AI audit framework, tailored to the risk categorization approved by their boards. This framework should cover all stages of the AI lifecycle—from data inputs and model algorithms to decision outputs—ensuring responsible and ethical deployment. In addition, REs should include AI-related information in their annual reports and on their websites. Regulators should define a standardized AI disclosure framework to ensure consistency and transparency across institutions.
To support compliance, the committee recommended that an AI Compliance Toolkit be developed and maintained by an accredited industry association or Self-Regulatory Organization (SRO). This toolkit would help REs assess, benchmark, and demonstrate adherence to core responsible AI principles such as fairness, transparency, accountability, and robustness.
Emerging risks
In its report, the committee pointed out that integrating AI into the financial sector introduces various risks that traditional financial risk management frameworks cannot fully address. These include data privacy concerns, algorithmic bias, market manipulation, concentration risks, cybersecurity vulnerabilities, and governance failures. In turn, these challenges can erode trust, compromise market integrity, and heighten systemic risks.
AI models can produce unpredictable outcomes, particularly when built on biased or poor-quality data. Their “black box” nature complicates auditing and accountability, increasing the risk of errors in critical financial decisions. Risks include flawed data, improper design, miscalibration, and poor implementation, all of which can cascade across systems. AI-based risk management tools can themselves introduce model-on-model risks. Generative AI, in particular, may produce hallucinations and lack explainability.
The committee said in its report that AI-driven automation can amplify issues if systems fail, like fraud detection errors or data pipeline breakdowns. Without consistent monitoring, AI models may degrade and deliver flawed outcomes. Moreover, assigning responsibility in AI-related failures is complex due to models’ probabilistic and opaque nature. Biased decisions may raise legal and reputational risks, especially in credit approvals or investment advice. Though currently theoretical, AI systems might collude in dynamic pricing or high-frequency trading, potentially breaching market conduct norms and harming competition.
The committee also pointed out financial stability concerns as AI can reinforce procyclicality and herd behavior, increasing volatility during stress. When many firms use similar AI strategies, model convergence could amplify systemic shocks, while AI’s opacity makes it hard to predict crisis transmission pathways.
The committee said that AI can both improve and undermine cybersecurity. While enhancing threat detection, AI systems are also vulnerable to attacks like data poisoning, adversarial inputs, model inversion, and prompt injection. Bad actors may exploit AI to launch phishing, deepfakes, and automated fraud. AI systems often over-collect and process data, violating privacy norms. Bias, lack of transparency, and manipulation risks can harm consumers, especially vulnerable groups. If not regulated properly, AI may deepen power imbalances, reduce informed consent, and contribute to financial exclusion.
However, failure to adopt AI responsibly could hinder competitiveness and financial inclusion. Institutions that lag may miss opportunities to counter AI-enabled threats or serve underserved populations through innovations like alternative credit scoring. AI presents vast potential, but also real risks. As understanding improves and frameworks evolve, the financial sector can responsibly harness AI’s power, turning early apprehensions into transformative outcomes, the Committee added.
In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.