Singapore’s Generative AI Model Governance Framework: Board-Level Risk Impacts for Financial Services Businesses

Model AI Governance Framework for Generative AI  

Singapore’s Model AI Governance Framework for Generative AI (the “Framework”) stems from a collaborative initiative between the Infocomm Media Development Authority (“IMDA”) and the AI Verify Foundation (“AIVF”). Comments – particularly from the international community – are welcome until 15 March 2024. The final version is expected in mid-2024.

Initial Framework

Singapore launched its first Model AI Governance Framework, covering traditional AI, in 2019 and revised it in 2020. Updating the framework again has become necessary amid emerging concerns and threat scenarios surrounding the use and development of generative AI. The main aim of the new Framework is to promote public understanding of and trust in these technologies, enabling end-users to access generative AI confidently and safely.

Traditional AI models aid decision-making by making predictions or recommendations based on pattern recognition. Specific tasks are solved with predefined rules (e.g. differentiating between images of horses vs donkeys).
In the financial services industry, such models are used for predictive analytics (e.g. analysing credit histories to predict loan defaults or using market data to recommend investment strategies).
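
To make the contrast concrete, the following is a minimal, purely illustrative Python sketch of such a predictive model; the synthetic data, feature names and thresholds are our own assumptions, not any firm’s production credit model:

```python
# Illustrative "traditional" predictive AI: a classifier trained on
# historical credit data to estimate loan-default risk.
# All data and feature names below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic features: [credit_score, debt_to_income, years_employed]
X = rng.normal(loc=[650, 0.35, 5], scale=[80, 0.12, 3], size=(1_000, 3))
# Synthetic label: default is likelier when the score is low and leverage high
y = ((X[:, 0] < 600) & (X[:, 1] > 0.4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# The model outputs a default probability for each new applicant,
# which a lender might feed into its credit decision process.
print(model.predict_proba(X_test[:3])[:, 1])
```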

Generative AI models focus on pattern creation and are able to create new and original content (such as text, audio, chat responses and designs) through learned patterns (e.g. creating new images of horses).
In the financial services industry, such models can be used for diverse applications (e.g. from delivering personalised financial planning to launching chatbots and virtual assistants that handle customer enquiries).
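
As a minimal sketch of the generative case, the snippet below uses the open-source Hugging Face transformers library with a small public model to draft a reply to a customer enquiry; the prompt is invented, and a real deployment would involve a far more capable model plus safety guardrails:

```python
# Illustrative generative AI: a language model drafting a chatbot reply.
# The prompt and model choice are assumptions for demonstration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Customer: How do I reset my online banking password?\nAssistant:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```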

Generative AI has the potential to enhance many aspects of daily life and is likely to bring transformative change to the financial services industry. However, its adoption also introduces a series of new risks.

The new Framework addresses some of the common risks specified in the existing Model Governance Framework, and also identifies additional ones such as mistakes and “hallucinations”, breaches of privacy and confidentiality, and copyright infringement.

Consultation process

Ensuring a trusted AI ecosystem is vital as Singapore continues to advance its digital economy. The Framework’s consultation process adopts a novel approach by soliciting comments from the international community.

The potential misuse of AI to cause harm (e.g. scams, cyberattacks and misinformation) is a global problem, and achieving global consensus on policy approaches is challenging. The Framework therefore seeks to foster greater collaboration by sharing ideas and practical pathways, with the ultimate aim of providing a common baseline for understanding among different jurisdictions.

IMDA also acknowledges that some ideas set out in the Framework are not unique. With that in mind, the Framework is promoted as a space to “work closely with a coalition of like-minded jurisdictions, industry partners and researchers towards a common global platform and better governance frameworks for generative AI”.[1]

Risk mitigation and market innovation

Although there has been pressure to regulate AI, Singapore currently does not intend to implement AI regulation. Its preferred approach is to first develop technical tools, standards and technology to support regulatory implementation.

The new Framework is therefore seen as a balance between risk mitigation and market innovation.

Guideline-based AI governance in context

Globally, two key regulatory approaches to AI governance are emerging: on one side, some jurisdictions (e.g. the EU, Canada and China) are mandating strict standards; on the other are those favouring a more flexible, guideline-based strategy (e.g. Singapore, the UK and Japan).[2]

Singapore’s Framework, which is sector-agnostic and guideline-based, exemplifies this second approach. It aims to harmonise the management of Gen-AI risks whilst maintaining an environment that fosters technological innovation.

Summary of the proposed Model AI Governance Framework for Generative AI

The Framework sets out nine principles:

1. Accountability – encourages responsible AI development, recognising multi-layered tech stacks and advocating initial practical steps to bring clarity to responsibility.
2. Data – emphasises data quality and the use of trustworthy data sources, and stresses the importance of clarity and fairness in contentious cases such as personal and copyrighted data.
3. Trusted Development and Deployment – centres on adopting industry best practices and transparency in safety measures, much like ‘food label’ disclosures, to promote awareness and safety.
4. Incident Reporting – highlights the importance of established incident-reporting structures for timely notification and remediation, supporting AI system improvement.
5. Testing and Assurance – advocates third-party testing and the development of common AI testing standards to assure quality and foster end-user trust.
6. Security – addresses the unique security threats posed by generative AI and calls for adapted information security frameworks and new tools.
7. Content Provenance – aims for transparency about AI-generated content’s origin and creation process to help end-users make informed decisions and counter misinformation.
8. Safety and Alignment R&D – stresses the need for increased R&D investment in model safety and alignment with human values, and calls for global cooperation among AI safety institutes.
9. AI for Public Good – focuses on using AI responsibly to benefit society, including democratising AI access, enhancing public-sector adoption and promoting sustainable AI systems.

How will Singapore’s Model AI Governance framework impact business?

The Framework’s principles largely reflect existing norms in the financial services (“FS”) sector. Accountability, data protection, and trusted risk management practices such as incident reporting, testing, and security are not new; they are embedded in the sector’s regulatory fabric and operational risk management practices. Yet, these principles gain fresh relevance in the context of AI and Gen-AI in particular.

AI influences the scale and speed at which existing operational risks may materialise. For example, an AI-driven trading platform might execute thousands of transactions in seconds based on a flawed algorithm, potentially leading to substantial financial losses before a human operator can intervene. Similarly, a robo-adviser could, due to a programming error, give suboptimal portfolio allocations to thousands of customers simultaneously, adversely affecting their investments and eroding trust in automated financial advice systems.
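
One common mitigant for this speed-of-failure problem is a pre-trade “circuit breaker” that halts an automated strategy when activity breaches board-set limits. The sketch below is a hypothetical illustration; the Order type, thresholds and reset logic are our own assumptions, not a description of any production control:

```python
# Hypothetical pre-trade circuit breaker: halts an automated strategy
# when order flow breaches board-approved limits, forcing human review.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    notional: float  # order size in dollars

class CircuitBreaker:
    def __init__(self, max_orders_per_minute: int, max_notional_per_minute: float):
        self.max_orders = max_orders_per_minute
        self.max_notional = max_notional_per_minute
        self.order_count = 0
        self.notional_total = 0.0
        self.halted = False

    def allow(self, order: Order) -> bool:
        """Return True if the order may proceed; otherwise halt the strategy."""
        if self.halted:
            return False
        self.order_count += 1
        self.notional_total += order.notional
        if (self.order_count > self.max_orders
                or self.notional_total > self.max_notional):
            self.halted = True  # stop trading and escalate to a human operator
            return False
        return True
    # A real system would also reset the counters every minute and
    # log every halt for incident reporting.

breaker = CircuitBreaker(max_orders_per_minute=500, max_notional_per_minute=5_000_000)
if breaker.allow(Order(symbol="XYZ", notional=25_000)):
    pass  # route the order to the execution venue
```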

The complexity of these use cases escalates when AI models are sourced from third parties. Where does accountability lie in such scenarios, and how should the responsibility for implementing controls be apportioned? Is there a case for shared accountability between technology companies providing the AI model and the financial services firms that use them?

All firms, from emerging fintechs to established banks, will need to address such issues before they integrate AI more deeply into their strategic operations.

Interplay between existing regulatory outcomes, industry standards and new AI guidance

AI-related risk considerations impact the outcomes linked to existing financial services regulatory frameworks. For instance, on the principle of accountability, the industry already benefits from frameworks like the Senior Managers regimes in Singapore and the UK.[3][4]

The respective regimes require Senior Managers at financial institutions to take “reasonable steps” to manage the most significant risks associated with their respective business areas, and make them personally accountable for those risks.

However, the practical application of such principles is not straightforward. It is already an accepted principle that company data is not solely the responsibility of a CISO or CTO. Similarly, accountability and oversight for AI should not sit with a single Senior Manager.

For firms navigating the challenges of AI governance, it is beneficial to adopt a collaborative approach that draws on diverse perspectives from across the organisation. Prescribing a one-size-fits-all, top-down solution is not desirable; the preferred approach is to draw departments together by allocating shared risk management responsibilities.

Fostering a deeper understanding of AI throughout the company should be a key focus for management. As company personnel become more knowledgeable about AI’s implications for business processes, they can contribute more effectively to the company’s risk management processes and adapt to the evolving regulatory landscape.

Such growth in AI awareness is less about strict compliance and more about nurturing a risk-aware culture that can integrate AI in ways that support the firm’s broader objectives, whilst mitigating negative consequences.

Conducting a thorough risk assessment is also vital. By assessing how principles from frameworks like Singapore’s Gen-AI governance could affect their operations, firms can evaluate the adequacy of their existing governance structures vis-à-vis the deployment of AI models. Such assessments will help address novel AI risks, such as explainability and data bias, and clarify how AI impacts traditional operational risks, including data security and technical robustness.
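
One simple way to structure such an assessment is to score the firm’s maturity against each of the Framework’s nine principles and flag shortfalls for the risk committee. The sketch below is hypothetical; the ratings, scale and target level are invented for illustration:

```python
# Hypothetical gap assessment against the Framework's nine principles.
# Ratings (1-5 maturity scale) and the target level are illustrative.
TARGET = 3

assessment = {
    "Accountability": 4,
    "Data": 3,
    "Trusted Development and Deployment": 2,
    "Incident Reporting": 3,
    "Testing and Assurance": 2,
    "Security": 4,
    "Content Provenance": 1,
    "Safety and Alignment R&D": 2,
    "AI for Public Good": 3,
}

# Flag every principle scored below target, worst gaps first.
gaps = {p: r for p, r in assessment.items() if r < TARGET}
for principle, rating in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"GAP: {principle} (maturity {rating}/{TARGET})")
```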

AI oversight: Key questions

Our recommended initial questions for Board members and teams evaluating their firm’s readiness to address increased operational and regulatory risk related to AI are as follows:

  • Due Diligence: How well do our current due diligence processes evaluate the risks and ethical considerations associated with AI, and are these processes aligned with our strategic objectives and risk appetite?
  • Regulatory Compliance: Are we prepared to meet the regulatory expectations that govern the use of AI in our sector, and do we have the capability to adapt to regulatory changes in a timely manner?
  • Risk Management: Does our risk management infrastructure adequately identify, assess, and mitigate the potential risks introduced by AI, including operational, reputational, and cybersecurity risks?
  • Accountability: Have we established clear governance structures that delineate accountability for AI-related decisions, whether models are developed in-house or sourced from third parties?
  • Transparency: How are we engaging with stakeholders, including customers, regulators, and partners, to ensure transparency and accountability in our use of AI, and do our communication strategies effectively address their concerns and expectations?

How can we help?

We strive to combine our bespoke solutions and robust partner ecosystems to help business leaders navigate the increasing complexities of AI governance.

Contact us to find out how we can help you.

About the Author(s)

Claire Wilson is a Partner at HM, based in Singapore. She provides support to innovative technology firms and FinTechs on governance, compliance and regulatory strategy. Contact Claire at [email protected]

Anna Nicolis is a Director at Braithwate, based in London, UK, specialising in risk management and regulatory strategy. Contact Anna at [email protected]

The authors wish to thank Michelle Goh and Aletta Rizni for their contributions to this article.


Footnotes

1. Discussion Paper on Generative AI: Implications for Trust and Governance, IMDA and Aicadium, June 2023.

2. Global AI Law and Policy Tracker, IAPP Research and Insights, February 2024.

3. Senior Managers and Certification Regime, FCA, 30 March 2023.

4. Guidelines on Individual Accountability and Conduct, MAS, 10 September 2020.
