Balancing Innovation and Risk: AI Governance in Financial Services

Introduction

The rapid advancement and adoption of artificial intelligence (“AI”) technologies present financial institutions with powerful tools to transform their operations and service offerings, while also introducing complex risks that must be carefully managed. Financial institutions must navigate a careful balance between innovation and risk mitigation, as AI systems can deliver significant benefits through improved efficiency, enhanced decision-making capabilities, and more personalized customer experiences. At the same time, these systems may introduce risks related to data privacy, algorithmic bias, model reliability, and operational resilience. The Monetary Authority of Singapore (the “Authority”) has emphasized that effective risk management of AI systems requires a structured approach that builds upon established risk management principles while addressing the unique characteristics of AI technologies.[1]

A critical consideration often overlooked is that current generative AI systems are designed to produce well-formulated, persuasive output, even when the information they convey is inaccurate. This inherent characteristic creates a significant risk that users may place unwarranted trust in AI outputs, particularly in domains where they lack subject matter expertise. Financial institutions must recognize that one of the most substantial risks in AI implementation remains human susceptibility to overreliance on seemingly authoritative AI-generated content.

This article provides guidance on developing and implementing effective risk management strategies for AI systems in financial institutions. By examining relevant regulatory expectations, industry best practices, and practical implementation approaches, we aim to help financial institutions establish robust frameworks for managing AI-related risks while enabling innovation and growth.

Use Cases of AI by Financial Institutions

Financial institutions are deploying AI across various business functions to drive innovation and enhance service delivery. For example, AI technologies are transforming risk management within financial institutions. AI’s ability to process vast amounts of data enables more sophisticated fraud detection and security measures. Financial institutions can identify suspicious transactions in real time across multiple channels, significantly enhancing their ability to prevent fraud before it impacts customers. This capability represents a substantial improvement over traditional rule-based systems, which may struggle to detect novel fraud patterns or adapt to evolving threats.

Beyond fraud detection, financial institutions are leveraging AI for credit risk assessment, enabling more accurate evaluation of borrower creditworthiness. AI-powered analytics can incorporate alternative data sources and identify complex relationships between variables that traditional models might overlook. This enhanced assessment capability can potentially expand credit access while maintaining appropriate risk controls. Similarly, AI solutions are being deployed for real-time risk monitoring, allowing institutions to continuously evaluate their risk exposure across various dimensions and respond rapidly to emerging issues.

Customer-facing applications represent another significant area of AI deployment in financial services. Virtual assistants and chatbots provide immediate responses to customer inquiries, while personalized recommendation engines deliver tailored financial advice and product suggestions. These applications enhance customer experience while potentially increasing engagement and loyalty. However, they also introduce risks related to customer data protection, algorithmic bias, and the potential for inappropriate recommendations if not properly governed.

Operational efficiency represents a third major category of AI applications in financial services. AI solutions are being used to automate routine processes, optimize resource allocation, and enhance documentation processing. These applications can significantly reduce operational costs while improving accuracy and consistency. Additionally, AI is being deployed to support regulatory compliance activities, including transaction monitoring, suspicious activity reporting, and ongoing customer due diligence.

Guidance on Risk Management

General risk management guidance

In the Guidelines on Risk Management Practices – Objectives and Scope, the Authority highlighted “the presence of sound risk management processes and operating procedures that integrate prudent risk limits with appropriate risk measurement, monitoring and reporting” as one of the cornerstones of effective risk management. The Authority has also published guidelines on more specific risk management topics, including the Guidelines on Risk Management Practices – Technology Risk (the “TRM Guidelines”), which address the technology risks that financial institutions face. The TRM Guidelines emphasize sound governance, cyber surveillance, secure development practices, and management of emerging technology risks, and require boards of directors and senior management to maintain active oversight of technology risks and to appoint appropriate leadership roles such as Chief Information Officers and Chief Information Security Officers. In addition, the Guidelines on Outsourcing and the Authority’s Information Paper on Management of Third Party Arrangements help financial institutions manage risks associated with outsourcing arrangements and third-party dependencies, an increasingly important consideration as institutions leverage external partners for technological capabilities. Together, these frameworks establish a comprehensive foundation for risk management that financial institutions can adapt and extend to address the specific challenges posed by AI systems.

Specific guidance on risks of AI

In November 2018, the Authority introduced the Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector (the “FEAT Principles”). These principles established the foundation for responsible AI adoption in financial services. The FEAT Principles provide guidance across four key dimensions: (1) fairness, ensuring AI-driven decisions do not systematically disadvantage individuals without justification and that data and models are regularly reviewed to minimize unintentional bias; (2) ethics, requiring AI use to align with the firm’s ethical standards and values; (3) accountability, establishing clear responsibility for AI systems both internally and externally; and (4) transparency, promoting clear communication about AI use and providing explanations for AI-driven decisions. The FEAT Principles continue to serve as a cornerstone for AI governance in Singapore’s financial sector and underpin the Authority’s subsequent guidance on AI risk management.

Subsequently, in December 2024, the Authority published the AI Paper, which highlighted good practices observed during a thematic review of banks’ AI model risk management approaches. The Authority’s guidance emphasizes three key pillars of effective risk governance: oversight and governance structures, risk management systems and processes, and development and deployment standards.[2] For governance, the Authority recommended updating existing policies and procedures to strengthen AI oversight, establishing cross-functional forums to manage evolving AI risks, and articulating clear principles for the fair, ethical, accountable, and transparent use of AI.[3]

Regarding risk management systems, comprehensive risk identification and assessment are critical. Financial institutions should maintain centralized inventories of AI applications to support effective oversight and assess the materiality of AI-related risks across key dimensions.[4] This enables proportionate application of controls based on risk levels, ensuring that resources are directed to the most significant risk areas while avoiding unnecessary constraints on lower-risk applications.
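To illustrate, the sketch below shows one way such a centralized inventory and materiality assessment might be structured in code. The dimensions, weights, and tier thresholds are our own illustrative assumptions; the AI Paper does not prescribe a specific scoring scheme.

```python
from dataclasses import dataclass

# Illustrative materiality dimensions; the weights and tiers below are
# assumptions, not figures prescribed by the AI Paper.
@dataclass
class AIApplication:
    name: str
    business_function: str
    customer_impact: int      # 1 (low) to 3 (high)
    financial_exposure: int   # 1 (low) to 3 (high)
    autonomy: int             # 1 (human-driven) to 3 (fully autonomous)

    def materiality_score(self) -> int:
        return self.customer_impact + self.financial_exposure + self.autonomy

    def risk_tier(self) -> str:
        score = self.materiality_score()
        if score >= 8:
            return "high"    # e.g. independent validation before deployment
        if score >= 5:
            return "medium"  # e.g. peer review and enhanced monitoring
        return "low"         # e.g. standard change management

# A central inventory is simply a registry of assessed applications.
inventory = [
    AIApplication("credit-scoring-model", "lending", 3, 3, 2),
    AIApplication("internal-doc-summarizer", "operations", 1, 1, 1),
]

for app in inventory:
    print(f"{app.name}: tier={app.risk_tier()} (score={app.materiality_score()})")
```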

The Authority also provided guidance on standards for AI development and deployment, emphasizing the importance of robust data management, model explainability, fairness assessments, and auditability.[5] These standards should be consistently applied throughout the AI lifecycle, with appropriate validation and review processes based on risk materiality.

Risk Management of AI in Practice

Implementing effective risk management for AI requires addressing several key risk areas. When applying risk management principles to AI, financial institutions should focus on practical controls that mitigate these risks while enabling the benefits of AI to be realized. Governance structures should explicitly incorporate AI oversight, with clear responsibilities for AI development, deployment, and monitoring.

Data Governance and Privacy

A fundamental question for any AI system is “where does the data go?” Financial institutions must establish robust data governance frameworks to protect customer information and ensure compliance with privacy regulations. This includes implementing appropriate access controls, data minimization practices, and encryption measures for sensitive information.

The risks of data leakage are particularly relevant for generative AI systems, which may inadvertently incorporate sensitive information into their outputs. As highlighted in the Information Paper on Cyber Risks Associated with Generative Artificial Intelligence, unauthorized information disclosure and data leakage represent significant concerns. Financial institutions should implement safeguards such as data filtering, output review processes, and appropriate use restrictions to mitigate these risks.
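As a simple illustration of data filtering, the sketch below redacts common identifier patterns from text before it is sent to, or returned from, a generative AI service. The patterns are deliberately minimal and purely illustrative; a production control would rely on comprehensively tested detectors with far broader coverage.

```python
import re

# Illustrative patterns only; real deployments would use tested PII and
# sensitive-data detectors with much broader coverage than this sketch.
SENSITIVE_PATTERNS = {
    "account_number": re.compile(r"\b\d{10,16}\b"),
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

prompt = "Customer S1234567A (acct 1234567890123) asked about fees, email jo@example.com"
print(redact(prompt))
# Customer [REDACTED_NRIC] (acct [REDACTED_ACCOUNT_NUMBER]) asked about fees, ...
```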

System Reliability and Accuracy

The question of “how reliable is the AI agent” is central to effective risk management. Financial institutions must ensure that AI systems produce accurate, consistent, and reliable outputs through rigorous testing and continuous monitoring.

Poor prediction quality and hallucinations represent significant risks for AI systems, particularly generative AI applications. As noted in industry analyses, errors in market forecasts or customer recommendations can lead to significant financial losses or customer harm. Financial institutions should implement comprehensive validation processes, including testing against diverse scenarios, performance benchmarking, and stress testing to ensure system reliability.
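The sketch below illustrates, under simplified assumptions, what a pre-deployment validation gate might look like: model outputs are benchmarked against a labelled hold-out set, and promotion is blocked if accuracy falls below an agreed threshold. The stub model, data, and 95% threshold are all hypothetical.

```python
from typing import Callable, Sequence

def validate_model(
    predict: Callable[[Sequence], Sequence],
    inputs: Sequence,
    expected: Sequence,
    min_accuracy: float = 0.95,  # illustrative gate, not a regulatory figure
) -> bool:
    """Benchmark predictions against a labelled hold-out set."""
    predictions = predict(inputs)
    correct = sum(p == e for p, e in zip(predictions, expected))
    accuracy = correct / len(expected)
    print(f"accuracy on hold-out set: {accuracy:.2%}")
    return accuracy >= min_accuracy

# Example: a stub model that flags transactions above a fixed amount as fraud.
stub_model = lambda xs: ["fraud" if x > 10_000 else "ok" for x in xs]
holdout_inputs = [500, 12_000, 9_000, 15_000]
holdout_labels = ["ok", "fraud", "ok", "fraud"]

if not validate_model(stub_model, holdout_inputs, holdout_labels):
    raise SystemExit("Validation failed: do not promote the model to production.")
```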

Financial institutions must remain cognizant that generative AI systems are fundamentally designed to produce coherent, persuasive outputs regardless of accuracy. This feature makes it particularly challenging to detect erroneous information, as the systems present incorrect conclusions with the same confidence as factual ones. This risk is magnified in specialized financial contexts where domain expertise is required to evaluate outputs critically. Implementing robust verification mechanisms becomes essential not only for technical validation but also to mitigate the human tendency to trust convincing AI-generated content.

Model risk management principles should be applied to AI systems, with validation procedures proportionate to the system’s complexity and materiality. The Authority recommends conducting independent validation or peer reviews of AI systems before deployment and implementing change management processes for system modifications.[6] These measures help ensure that AI systems operate as intended and continue to meet performance expectations over time.

Financial institutions should also address risks related to AI model and output manipulation, which could compromise system integrity. Implementing appropriate security measures, including access controls, change management processes, and tamper detection mechanisms, can help protect against these threats.

Operational Controls and Limitations

Establishing appropriate “limits on the AI system” is essential for risk management. Financial institutions should define clear boundaries for AI system operations, including limits on the types of decisions the system can make autonomously and thresholds for human intervention.

Human-in-the-loop requirements represent a critical control mechanism that financial institutions should implement across AI applications with material impact. Rather than viewing human oversight as merely a fallback, it should be designed as an integral component of AI operations, particularly for decisions affecting customer outcomes or significant financial transactions. Human validators should be appropriately trained to recognize AI limitations and be empowered with clear procedures for overriding or correcting AI recommendations when necessary. This approach acknowledges that while AI can enhance decision-making, human judgment remains essential for contextual understanding and ethical considerations that AI systems cannot fully replicate.

For customer-facing AI applications, controls should prevent inappropriate recommendations or excessive upselling. This may include limits on transaction values that can be processed without human review, constraints on the types of products that can be offered to certain customer segments, or requirements for explicit customer confirmation of AI-generated recommendations. These controls help ensure that AI systems act in customers’ best interests and align with regulatory expectations for fair treatment.
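By way of illustration, the sketch below encodes two such boundaries, a hard transaction-value limit and a minimum model-confidence level, as a simple routing rule. Both thresholds are hypothetical policy choices that each institution would calibrate for itself.

```python
# Illustrative policy thresholds; each institution would set its own.
HUMAN_REVIEW_VALUE_LIMIT = 50_000  # transactions above this always need review
MIN_AUTO_CONFIDENCE = 0.90         # below this, the AI may not act alone

def route_decision(transaction_value: float, model_confidence: float) -> str:
    if transaction_value > HUMAN_REVIEW_VALUE_LIMIT:
        return "human_review"  # hard limit regardless of model confidence
    if model_confidence < MIN_AUTO_CONFIDENCE:
        return "human_review"  # model is unsure; escalate to a person
    return "auto_approve"      # within agreed boundaries for autonomous action

print(route_decision(75_000, 0.99))  # human_review
print(route_decision(2_000, 0.95))   # auto_approve
```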

Financial institutions should also address risks related to over-reliance on AI systems. As AI becomes more embedded in operational processes, there is a risk that institutions may lose the ability to function effectively if AI systems fail or produce inaccurate outputs. Maintaining alternative processes, establishing clear escalation paths, and conducting regular continuity testing can help mitigate these risks.

Financial institutions should implement ex-ante controls that verify the conformity of AI systems and governance frameworks before granting operational trust. These preventative measures should include pre-deployment certification processes that validate critical requirements such as model fairness, data governance standards, and security controls. By requiring formal approval before implementation rather than relying solely on post-deployment monitoring, institutions establish a stronger foundation for responsible AI use and create clear accountability for compliance with risk management expectations.

Performance Monitoring and Oversight

Continuous “monitoring of AI performance” is critical for effective risk management. Financial institutions should establish key performance indicators for AI systems and regularly assess actual performance against these metrics.

Monitoring should encompass multiple dimensions, including accuracy, reliability, response time, and potential biases in AI outputs. The Authority recommends implementing pre-deployment checks and ongoing monitoring to ensure that AI systems continue to behave as intended over time.

Financial institutions should also consider implementing drift detection mechanisms to identify changes in data distributions or system behavior that could affect performance. AI systems may perform differently as underlying conditions change, making ongoing validation essential for maintaining reliability. This includes regular retraining and recalibration of models to ensure they remain accurate and relevant.
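One widely used drift measure in financial modelling is the population stability index (PSI), which compares the distribution of a model input or score in production against the distribution observed at training time. The sketch below is a minimal PSI implementation; the data and the rule-of-thumb thresholds noted in the comments are illustrative.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """Compare a production sample against the reference (training-time) sample.

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift warranting investigation or retraining.
    """
    # Bin edges come from the reference distribution's percentiles.
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    # Clip production values into the reference range so every value lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    expected_pct = np.maximum(expected_counts / len(expected), 1e-6)
    actual_pct = np.maximum(actual_counts / len(actual), 1e-6)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)    # e.g. credit scores at model training time
production = rng.normal(570, 60, 10_000)  # scores observed after deployment
print(f"PSI: {population_stability_index(baseline, production):.3f}")
```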

Effective monitoring also requires clear escalation paths and response protocols for identified issues. Financial institutions should define thresholds for escalation and establish processes for investigating and addressing performance concerns. This includes determining when systems should be modified, retrained, or potentially decommissioned if they no longer meet performance expectations.
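As a final illustration, escalation rules can be encoded as explicit threshold checks that map monitored metrics to predefined actions. The thresholds and actions below are assumptions for illustration, not regulatory figures.

```python
# Illustrative escalation rules; the thresholds and actions are
# assumptions a firm would define in its own monitoring framework.
def escalation_action(accuracy: float, psi: float) -> str:
    if accuracy < 0.80 or psi > 0.25:
        return "suspend_and_investigate"  # consider retraining or decommissioning
    if accuracy < 0.90 or psi > 0.10:
        return "investigate"              # open a review ticket, increase sampling
    return "continue_monitoring"

print(escalation_action(accuracy=0.93, psi=0.08))  # continue_monitoring
print(escalation_action(accuracy=0.85, psi=0.18))  # investigate
```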

Conclusion

Financial institutions should anticipate evolving regulatory expectations by establishing flexible governance approaches that adapt to emerging risks and regulatory developments. By viewing AI governance as a competitive advantage rather than merely a compliance exercise, institutions can build customer trust while leveraging innovation opportunities. In the evolving landscape of AI adoption, the most successful financial institutions will be those that recognize risk management not as a barrier to innovation but as its essential foundation: the secure framework within which transformative technologies can flourish, customer trust can deepen, and sustainable competitive advantage can be achieved.

Key areas of focus may include: (1) enhanced authentication measures to combat deepfake proliferation in financial communications; (2) advanced monitoring systems to detect and mitigate AI-driven market manipulation; (3) increased emphasis on AI explainability and accountability, particularly for high-stakes financial decisions; (4) potential international cooperation on AI governance, necessitating preparation for cross-border AI risk assessment frameworks; (5) introduction of AI-specific stress testing requirements; (6) more stringent regulations around ethical AI use, possibly requiring the establishment of AI ethics committees; and (7) evolving data rights and consent requirements specific to generative AI applications. By proactively addressing these areas, financial institutions can position themselves at the forefront of responsible AI adoption while maintaining regulatory compliance.

HM provides specialized AI risk management services for financial institutions, leveraging our expertise in regulatory requirements and industry best practices. We develop tailored AI governance frameworks, conduct risk assessments, and implement appropriate controls and monitoring systems that satisfy regulatory expectations while enabling innovation.

Our comprehensive services include AI governance framework development, risk assessment methodologies, policy formulation, implementation support, and regulatory compliance assessments. We also offer model validation and testing to ensure AI systems operate reliably and fairly. With our proven regulatory expertise, HM serves as a trusted advisor, guiding financial institutions through responsible AI adoption while balancing innovation and risk management.

For further information, contact:

Chris Holland: Partner | chris.holland@hmstrategy.com

Harminder Gill: Partner | harminder.gill@hmstrategy.com

Samuel Bourque: Executive Director | samuel.bourque@hmstrategy.com

Alex Webb: Senior Consultant | alex.webb@hmstrategy.com

Disclaimer: The material in this post represents general information only and should not be relied upon as legal advice. Holland & Marie Pte. Ltd. is not a law firm and may not act as an advocate or solicitor for purposes of the Singapore Legal Profession Act.


[1] See Section 6.11 of the Information Paper on Artificial Intelligence Model Risk Management (the “AI Paper”).

[2] See Section 3.3 of the AI Paper.

[3] See Section 4 of the AI Paper.

[4] See Section 5 of the AI Paper.

[5] See Section 6 of the AI Paper.

[6] See Sections 6.4 and 6.5.9 of the AI Paper.
