Introduction
In recent years, we have witnessed an explosion on an international scale in the development and use of artificial intelligence (“AI”).[i] From the rapid ascension of ChatGPT to the implementation of AI in vehicles and investment banking, there is a growing demand for the technology that has left regulators struggling to keep pace.
Singapore has been at the forefront in recognizing the large scale potential accompanying the use of AI in multiple economic sectors. As early as 2019, Singapore unveiled its National Artificial Intelligence Strategy (“NAIS”). That strategy shows that Singapore is optimistic about the future of AI development, but acknowledges the need for a responsible, human-centric approach. Since then, there have been a variety of guidelines, frameworks and toolkits provided by various agencies intended to steer the responsible development and deployment of AI.
As a business looking to develop or deploy AI in Singapore, it can be hard to know whether one has properly navigated the regulatory landscape. What regulations governing AI are currently in place? What regulations might one expect to see in the future? What are best practices? This article provides a comprehensive overview of all guidelines, frameworks, laws and toolkits that have been issued by the government of Singapore thus far dealing explicitly with AI.
While there are currently no regulations or regulatory agencies specifically governing AI in Singapore, the government has been very proactive in promoting its vision for responsible AI development. It has also devoted substantial resources to helping organizations meet that vision.
In the first part of this article, we take a look at Singapore’s AI guidelines and frameworks, starting with its National Artificial Intelligence Strategy (2019, 2023). We then turn to two AI frameworks, including the latest Model Governance Framework dealing with Generative AI (2024). After looking at the frameworks, we turn our attention to other relevant documents and guidelines, including The Principles to Promote Fairness, Ethics, Accountability and Transparency (2018) and the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems (2024). Finally, we examine tools that are currently in development to assist companies in meeting their obligations, and speculate briefly on the regulatory landscape to come.
1. National Artificial Intelligence Strategy
In its 2019 National Artificial Intelligence Strategy (“NAIS 1.0”), Singapore announced its intention to become a global hub for “developing, test-bedding, deploying, and scaling AI solutions”.[ii] This is to involve a “triple helix partnership” between research communities, industry and government, alongside other ecosystem enablers, to bolster the adoption and innovation of AI in Singapore.
As a start, Singapore promoted five flagship national AI projects aimed at enhancing the socially and economically important areas of: freight planning, municipal services, chronic disease protection (and management), education, and border clearance operations. The proposed timeline anticipated most projects reaching fruition by 2030.
By 2023, the technology of publicly available AI had undergone rapid changes (most notably the availability of Generative AI (“Gen AI”)), prompting the government to revisit its AI strategy in its 2023 National Artificial Intelligence Strategy (“NAIS 2.0”).[iii] NAIS 2.0 surpasses the ambitions of its predecessor in three key areas. First, it treats AI as a national necessity, rather than simply an opportunity for economic development. Second, it aims to make Singapore a world-leader in AI and not just a regional player. Third, in line with its expanded ambitions, it aims to move beyond its national AI projects toward a “systems approach”, involving stakeholders inside and outside of Singapore devoted to developing AI-enabled solutions at scale.[iv]
Singapore recognizes the need for a human-centric approach that focuses on “benefits to citizens and businesses…rather than developing the technology for its own sake”— one that is “proactive in addressing the risks and governance issues that come with the increasing use of AI”.[v] To that end, several government departments have put forward guidelines and frameworks intended to assist private industry reach that goal.
2. Model Artificial Intelligence Governance Frameworks
2.1. (Updated) Model Artificial Intelligence Framework (2020)
Singapore’s Personal Data Protection Commission (the “PDPC”) presented the Model Artificial Governance Framework (“Model Framework”) at the 2019 World Economic Forum Annual Meeting. The Model Framework was updated in January of 2020. The update included industry examples, title changes and several key clarifications. Importantly, it also provided guidance to organizations for adopting a “risk-based approach” to Artificial Intelligence by “identifying features or functionalities with the greatest impact on stakeholders”.[vi] The recommendations set out in the Model Framework are intended to guide deployment of AI at scale.[vii]
Concomitantly, Singapore’s Info-communications Media Development Authority (“IMDA”) and the PDPC partnered with the World Economic Forum Centre for the Fourth Industrial Revolution to develop an Implementation and Self-Assessment Guide for Organizations (the “ISAGO”). The ISAGO allows organizations to check their AI governance practices against the recommendations of the Model Framework. The PDPC has also published a Compendium of Use Cases[vi], featuring “real-world examples of how organizations have implemented or aligned their AI governance practices with the Model Framework”.[ix]
The Model Framework and related initiatives form the foundation of Singapore’s National AI Strategy. In the words of its framers, these initiatives, “epitomise [Singapore’s] plans to develop a human-centric approach towards AI governance that builds and sustains public trust.”[x]
Definition of AI
The Model Framework defines AI as “a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification).”[xi]
Guiding Principles
A core objective of the Model Framework is to promote public trust and understanding with regard to AI. Consequently, the Model Framework is based on two overarching principles:
- “Organizations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair.”[xii]
- AI solutions should be human-centric. The “protection of the interests of human beings, including their well-being and safety, should be the primary considerations in the design, development and deployment of AI.”[xiii]
Key Guidance Areas
To satisfy these two broad principles, the Model Framework offers guidance in the following four areas: (i) Internal Governance Structures, (ii) Level of Human Involvement, (iii) Operations Management, (iv) Stakeholder Interaction and Communication.
a. Internal Governance Structures
In order to ensure responsible AI use, organizations are called upon to develop and modify internal governance structures to provide appropriate oversight. This involves delineating clear roles and responsibilities to members within the organization, and may entail establishing a coordinating body with relevant expertise and proper representation.[xiv] An organization deploying AI at scale should also develop a system of risk management and internal controls that address the risks specific to the AI model(s) being deployed.[xv]
For example, MasterCard employs a Chief Data Officer and Chief Privacy Officer to ensure that the data is fit for purpose and the risk of potential harms is sufficiently mitigated. At the same time, a Chief Information Security Officer ensures the implementation of security by design, while Data Science Teams build and implement the AI. MasterCard also employs risk management and internal controls such as initial risk scoring to determine the risk of the proposed AI activity.[xvi] This division of labor coupled with constant communication between parties allows for efficient and robust oversight of AI operations.
b. Level of Human Involvement
When considering the level of human involvement needed for “AI-augmented decisions”, organizations are advised to first decide on their commercial objectives in using the AI system. Those objectives should then be weighed against the risks of using AI in the organization’s decision-making. This consideration should be made in light of corporate values and should be sensitive to differences in societal norms when the organization is operating in multiple countries.[xvii] Organizations should continually identify and review risks relevant to their AI implementation, documenting this process through a periodically reviewed risk impact assessment.
The Model Framework identifies three broad approaches to human involvement in the AI-augmented decision-making process:[xviii]
-
- Human-in-the-loop: Human oversight is active and involved, with humans retaining full control and AI only providing recommendations or input.
- Human-out-of-the-loop: No human oversight involved. The AI system has full executive control without the option of human override.
- Human-over-the-loop: Human oversight is involved in a monitoring or supervisory capacity, with the ability to take over control when unexpected or undesirable events are encountered by the AI Model.
The simple risk matrix below helps to determine the level of human involvement needed in AI-augmented decision making.[xviii]

For example, a case falling in the bottom left corner with Low Severity and Low Probability indicates that a human-out-of-the-loop approach might be more appropriate.
By contrast, a case where a certain degree of domain knowledge from human experts is required and where the cost of regulatory non-compliance would be significant is one that would likely fall in the upper right corner, requiring human-in-the-loop supervision.[xix]
c. Operations Management
The Model Framework provides a high-level three step process for implementing AI-powered solutions.[xxi]

Here we will focus on two critically important pre-deployment stages: data preparation and algorithm training.
1. Data Preparation: Data quality is crucial for the success of the model.
All AI models depend on data for training. If the training data is poor, the model output will be poor. This makes data preparation an essential first step in using AI. Preparation is also required to prevent bias or non-representativeness in the data from unduly influencing the output of the model.[xxii]
Ensuring data quality calls for good data accountability practices. Here are some of the Model Framework’s recommendations for good data accountability practices:[xxiii]
-
-
-
- Determine the lineage of the data. Know the origin of the data, how they were collected, and how they were curated and moved within the organization. Keep a data provenance record that allows for the tracing of potential sources of error, as well as data updating and source attribution.
- Ensure data quality by evaluating the accuracy and completeness of the dataset, as well as its veracity (e.g. whether it came from a reliable source). Ascertain whether human intervention has been involved (e.g. human filtering, labeling or editing of the data).
- Minimize inherent bias by taking steps to mitigate selection bias and measurement bias.[xxiv]
- Use different datasets for training, testing, and validation.
- Periodically review and update datasets.
-
-
2. Algorithm Training: Enhance model transparency through explainability, repeatability, robustness, traceability and auditability.
A risk-based approach involves a two-fold assessment relative to each transparency measure. First, identify the subset of features or functionalities with the greatest impact on stakeholders for which the measure is relevant. Second, identify which of these measures will be most effective in building trust with stakeholders.[xxv]
For example, full explainability of how the model functions may not be possible. However, explainability can still be promoted by documenting how the training and model selection was conducted, and what steps were taken to mitigate identified risks. Further clarity can be achieved by including descriptions of solution design and expected behavior in product or service descriptions.
In some cases, it may not be feasible to provide this kind of transparency without undermining the efficacy of the system, e.g. for an anti-money laundering detection system that relies on secrecy of its internal workings.
Where explainability is not feasible, an organization can nonetheless instill public confidence by confirming the repeatability of results produced by the AI model through repeatability assessments.[xxvi]
Similarly, other measures, such as auditability, can be assessed relative to the cost of supplying those measures. In the case of an audit, the sensitivity of information about the AI system may weigh against a public audit, but still allow for a private one. Where auditability is necessary, organizations should keep a comprehensive record of “data provenance, procurement, pre-processing, lineage, storage and security”.[xxvii] This could be maintained in a centralized process log, making it easier to provide information about the AI system to concerned parties and justify its design and implementation.[xxviii]
d. Stakeholder Interaction and Communication
It is good practice to provide general information to the public about whether your organization is using AI in its products or services. Having an explicit explanation policy regarding what explanations to provide to individuals can help establish explanatory consistency.[xxix] Information should be provided to consumers where such provision would enhance consumer trust and would not jeopardize the safety of the model or overriding proprietary concerns. Furthermore, helping consumers understand how AI-enabled features work can give them more agency in their interaction with the AI-enabled features.[xxx]
The Model Framework suggests two kinds of communication channels for consumers:[xxxi]
-
- Feedback channels: These can be used to manage feedback from customers and to address customer queries. This would also allow for correction of inaccuracies in personal data.
- Decision review channels: These can be used to allow customers to petition for a review of AI-augmented decisions that have affected them.
It may be advisable to set out acceptable user policies (“AUPs”) in order to prevent users from maliciously manipulating the algorithm through bad input data, e.g. prompt injection or jailbreaking prompts.
Summary
Adopting the Model Framework will not absolve organizations from compliance with current laws and regulations, but its adoption will “assist organisations in demonstrating that they [have] implemented accountability-based practices in data management and protection, e.g. the PDPA and the OECD Privacy Principles”.[xxxii]
The central themes of the Model Framework are transparency and trust.
Transparency: An organization looking to develop and deploy an AI system should seek to establish transparency at every juncture of model development, from the initial stages of data acquisition to the final stages of model implementation.
Trust: Trust is established by implementing strong security measures consistently—starting with data collection and preparation and extending throughout the AI system’s development cycle. Human oversight is likely required, but should vary in scope and intensity depending on the degree and severity of the risk involved. Open channels of communication with consumers can facilitate oversight and catch errors when they occur.
Taking the appropriate steps to explain the model algorithm and training process, establish output reliability, and provide channels for feedback all help to secure the acceptance of the AI system from stakeholders and the public at large. The principles outlined above form the basis of Singapore’s approach to AI safety and regulation. In the next section, we will see how Singapore has built on those principles to address the novel challenges posed by Gen AI.
2.2 Proposed Model AI Governance Framework for Generative AI (2024)
With the recent introduction of Gen AI for general purpose use, new challenges have arisen for the responsible deployment of AI systems.[xxxiii] Gen AI is the use of machine learning algorithms to generate text, video and other media content. Many of the novel risks accompanying Gen AI surround the possibility of socially undesirable or harmful model output. For example, malicious agents might use a Gen AI model to acquire information on how to build a bomb, or such agents might construct prompts that trick the model into revealing confidential information about customers. Copyright infringement is another serious concern, as training data may include copyrighted material reproduced in the final output of the model.
Companies that deploy Gen AI systems, especially in customer-facing roles, are advised to take steps to protect users and guard against outside manipulation. (See our article on ML Commons’ cutting-edge safety benchmark for one way of guarding against harmful output.)
In response to the new safety concerns surrounding Gen AI, Singapore’s AI Verify Foundation (“AIVF”) and IMDA jointly released a draft of the 2024 Proposed Model AI Governance Framework for Generative AI (“Gen AI Governance Framework”). The drafting process is currently undergoing public consultation, but it expected to be finalized by the end of the year. The goal of the Gen AI Governance Framework, much like its predecessor, is to advance AI security and foster trust in AI solutions at scale. It builds on its predecessor by focusing more sharply on the development and implementation process of Gen AI solutions.
The Gen AI Governance Framework is organized around nine dimensions of analysis that any organization should undertake when considering the deployment of Gen AI solutions. The ultimate goal is to foster a trusted ecosystem.[xxxiv]
Nine Dimensions of Analysis[xxxv]

- Accountability
- Data
- Trusted
- Incident Reporting
- Testing and Assurance
- Security
- Content Provenance
- Safety and Alignment Research & Development
- AI for Public Good
Overview of the Nine Dimensions:
In this section we do a high-level walk through of the nine dimensions of analysis.
1. Accountability
The Gen AI Governance Framework recognizes a need to “incentivize players along the AI development chain to be responsible to end-users”.[xxxvi] This requires assigning responsibility to the right agents along the development chain. There is a question of how responsibility is to be allocated both upfront in the development process (ex-ante) and how redress can be obtained if issues are discovered afterwards (ex-post).
The Gen AI Governance Framework advises allocating ex-ante responsibility on the basis of level of control in the development chain. This creates differentiated incentives for those who are in the best position to take necessary action along the full span of the development chain.
Ex-post responsibility requires not only finding the right people to provide redress when problems arise, but also creating “safety nets” that operate at a larger social scale. For example, existing legal frameworks should be updated in order to cover AI risks that may have a disproportionate impact on society as a whole.[xxxvii]
2. Data
For all AI solutions, ensuring data quality and security for model training is crucial. All AI models must be trained on copious amounts of data. A well-known danger for datasets is that they may include publicly-available personal data or copyright material, especially if scraped from the web. Data poisoning can also be a concern, where false or malicious content is injected into data sources before collection.[xxxviii]
The Gen AI Governance Framework recommends expanding personal data laws to apply to cases involving the use of Gen AI. Practical steps that an organization can take to protect data include general best practices, such as consistent data annotation and data cleaning, along with the use of Privacy Enhancing Technologies (“PETs”), which are currently being developed to make data safer for training.[xxxix] Notably, IMDA has put forward the PETs Sandbox to facilitate experimentation with PETs based on real-world use cases.
As for copyright concerns, there is currently no legal consensus in Singapore on matters surrounding copyright infringement through the use of Gen AI. The Gen AI Governance Framework advocates open dialogue among all stakeholders to seek a solution that balances the realistic demands of model training with the entitlements of copyright holders.
3. Trusted Development and Deployment
A genuinely trustworthy AI ecosystem depends on all organizations undertaking adequate security measures to ensure baseline model safety and hygiene. This should be accompanied by meaningful transparency around the security measures adopted in model training and deployment.[x1] In line with the principles laid out in the Model Framework, the Gen AI Governance Framework calls for industries to adopt best practices that maximize transparency while minimizing risk. This can be done in a way that balances the need to protect business interests and proprietary information.
There are already some baseline safety practices available for organizations looking to meet their obligations. For instance, an organization can use red teaming to pressure test its AI models. Specific to the case of Gen AI, a red team would involve a group of cybersecurity specialists dedicated to discovering vulnerabilities surrounding model input and output, such as a system’s vulnerability to jailbreaking. An organization can additionally use a safety benchmark to certify the model’s safety.
Other security measures for Gen AI include Reinforcement Learning from Human Feedback (“RLHF”), which can be used to better align LLMs with core human values. In essence, this technique involves using humans to reward LLM Models for good output and penalize it for bad output. However, for the purposes of efficiency, another model, the reward or preference model, is trained to mimic the preferences of the human agent. That model is then used to train the main AI model through reinforcement learning.[x1i]
Another development technique for Gen AI is Retrieval-Augmented Generation (“RAG”). RAG fine-tunes model output by providing the already trained model with the user prompt in conjunction with relevant data from trusted and authoritative sources. Access to the relevant data bolsters accuracy and reliability, preventing model hallucination and other undesirable model outputs. Input and output filters can also be helpful tools to prevent harmful output.
While the above are important tools for evaluating the safety and performance of AI, many times they are customized to particular models, making generalization impossible. Hence, there is a need for a set of comprehensive standardized safety evaluations that cover back-end and front-end safety for all models. A preliminary effort for LLMs is detailed in a 2023 paper by the AI Verify Foundation and IMDA: “Cataloguing LLM Evaluations”.
Transparency around these safety measures is key.[x1ii] The Gen AI Governance Framework suggests standardizing the disclosure of safety measures similar to the use of food/ingredient labels. Among other things, this might involve providing an “overview of the types of training data sources and how the data was processed before training,” as well as an overview of the training infrastructure and evaluation results of the model. A public statement describing safety measures used in safeguarding the model, such as bias correction techniques, would also promote trust and transparency.[x1iii]
4. Incident Reporting
It is important to address the possibility of accidents and unforeseen effects of AI-implementation in ways that allow for timely remediation.
The Gen AI Governance Framework recommends acting preemptively to prevent accidents through vulnerability reporting. As with already established best practices for software products, this could involve using white hats and independent researchers to discover safety vulnerabilities in the AI system. The owner of the AI system would then be given a certain window of time to repair and publish the vulnerability, crediting the white hat or independent researcher.
That being said, it is crucial to have a process in place for reporting an incident once it has occurred. Depending on the severity of the incident, a report to the public or to the government may also be appropriate. As a “reference point” for legal reporting requirements, the Gen AI Governance Framework cites the EU AI Act, which requires the reporting of “serious incidents” to market surveillance authorities within 15 days of the AI system provider becoming aware of the incident.[x1iv] While future regulations in Singapore may not be as stringent, the requirement does indicate an ideal for best practices.
5. Testing and Assurance
Consonant with the above measures, an organization seeking to deploy AI-powered solutions is encouraged to devote resources to internal model evaluation and independent, third-party testing of its AI systems.
Like companies that use external audits to “provide transparency and build greater credibility and trust with end-users”, organizations deploying AI-powered solutions may opt to use third parties to independently evaluate their AI systems.[x1v] While some third-party benchmark tests currently exist, a set of standardized benchmarks and other methodologies is still needed for third-party testing.[x1vi]
6. Security
Typical “Security-by-design” protocols that minimize system vulnerabilities by integrating security measures into every phase of “the systems development lifecycle” must be adapted to meet the new challenges and probabilistic nature of Gen AI.[x1vii] One tool suggested by the Gen AI Governance Framework is an input filter that detects unsafe prompts before they enter the processing stream of the AI system. Another suggestion is the use of digital forensics tools to analyze data for reconstructing cybersecurity incidents.[x1viii] Databases such as MITRE’s Adverserial Threat Landscape for AI Systems can be consulted for information on adversary tactics and use cases specific to Gen AI.
7. Content Provenance
With the ubiquity of AI-generated content, the potential for misinformation and “deepfakes” is on the rise. Content provenance techniques allow end users to know when they are looking at AI-generated content, warding off the potential for misleading content. Digital watermarking and cryptographic provenance are two techniques suggested by the Gen AI Governance Framework for flagging AI-generated and modified content.
Digital watermarking is used to identify ownership of a piece of content by covertly embedding a marker in the data. This method could be used to identify AI-generated content, but a drawback is that such watermarks are typically imperceptible by design and require a special algorithm to recover.[x1ix] Such algorithms may not be readily available to end users at this stage.
Cryptographic provenance techniques, on the other hand, allow for the provenance information to be more readily available to the consumer. The embedded provenance information is cryptographically encoded in order to protect it from external manipulation. This would allow users to see, e.g., if the content is from a trusted source. As part of its Content Authenticity Initiative founded by Adobe, the Coalition for Content Provenance and Authenticity (the “C2PA”) is currently in the process of developing open standards for content provenance. The C2PA currently includes includes Microsoft, Intel and Adobe among other large tech companies.
Given the relatively early stages of development in content provenance, the Gen AI Governance Framework emphasizes the need for caution when using these techniques, as provenance information can be stripped out, malicious actors can find ways to circumvent certain provenance techniques, and end users will be typically poorly informed about content provenance in general.
8. Safety and Alignment Research & Development (“R&D”)
Investment in safety and alignment R&D should be accelerated in AI safety to keep pace with the deployment of Gen AI. There are two forms of alignment to take into consideration.
The first is “forward alignment”—the alignment of AI models with human intentions and values. As mentioned previously, RLHF is one way to accomplish forward alignment. Reinforcement Learning from AI Feedback (“RLAIF”) is another option. Like RLHF, RLAIF uses AI models in place of human beings to provide reinforcement feedback to the target model during training. However, unlike RLHF, the preferences are not generated by humans, but by the feedback model itself on the basis of a constitution provided by the developers.[1] Automating the feedback process in this way has the potential to cut down significantly on cost.
The second kind of alignment is “backward alignment”.[1i] Backward alignment can be used to validate the model once the model is trained, for instance to test the model for “emergent capabilities”, such as autonomous replication, over a wider time horizon.
9. AI for Public Good
The Gen AI Governance Framework addresses the need to go beyond risk mitigation by using AI to “uplift and empower” people and businesses.[1ii] This involves “democratising AI access, improving public sector AI adoption [and] upskilling workers” while developing AI systems sustainably.[1iii]
The Framework lays out four “touchpoints” for this initiative:
a. Democratizing Access to Technology
Governments can facilitate the access and deployment of Gen AI for all members of society through digital literacy initiatives that instruct average users on the basics of Gen AI, e.g., how to use chatbots safely, how to identify deepfakes and how to overcome susceptibility to illusions that the AI system is a human-like agent.
Governments and industry partners can also help small and medium enterprise (“SMEs”) adopt Gen AI solutions by providing SMEs with tools and training on Gen AI. An example is Singapore’s own Generative AI Sandbox.
b. Public Service Delivery
The Gen AI Framework argues that “AI should serve the public in impactful ways”.[1iv] For example, Gen AI can be used in public services such as “adaptive learning systems” and “health management systems in hospitals”. To that end, governments should facilitate “data sharing across different government agencies” and “access to high performance compute”, while possibly enacting other measures to support public sector AI adoption.[1v] AI developers can help governments in identifying salient use cases and providing solutions.
c. Workforce
The Gen AI Framework promotes the upskilling of the workforce in order to counteract the potentially negative consequences of AI in the labor market. Governments and educational institutions can “work together to redesign jobs and provide upskilling opportunities for workers”.[1vi] Companies and organizations that have already adopted AI can provide training for their employees.
d. Sustainability
There is legitimate concern about the raw resources needed to sustain Gen AI over the long term. Tracking and measuring the carbon footprint of Gen AI will be a necessary part of making it sustainable and environmentally friendly. To that end, researching “green computing techniques” and “green energy sources or pathways” are ways to help drive AI sustainability.
Summary
Building on the foundation of its predecessor, the new Gen AI Framework provides more detailed guidance for challenges specific to Gen AI. It calls for global cooperation in the endeavor to make AI secure and reliable, signaling Singapore’s aim to become a world-leader in AI solutions.[1vii]
The nine dimensions of analysis canvassed above help companies locate themselves more precisely within Singapore’s global vision. Some dimensions, such as Accountability, Data and Trusted Development and Deployment, will apply to all organizations employing AI solutions. Others, such as Safety and Alignment, Research and Development and AI for Public Good, can be seen as an opportunity for organizations to voluntarily contribute in important ways to a trustworthy and sustainable international AI ecosystem.
3. Other Relevant Documents and Guidelines
3.1. MAS, Artificial Intelligence Model Risk Management: Observations From a Thematic View (2024), (“MAS Thematic”)
In 2024 the Monetary Authority of Singapore ( the “MAS”) released an information paper setting out good practices relating to AI. While its focus was on selected banks, the suggested good practices are generally applicable to “other financial institutions (FIs), which should take reference from these when developing and deploying AI” (p. 3).
It list several potential risks in the use of AI in finance, including the use of Gen AI (p. 4):
- Financial risks, e.g. “poor accuracy of AI used for risk management”.
- Operational risks, e.g.,” unexpected behavior of AI used to automate financial operations”.
- Regulatory risks, e.g., “poor performance of AI used to support AML efforts leading to non-compliance”.
- Reputational risks, e.g., “wrong or inappropriate information from AI-based customer-facing systems”.
The opacity of Gen AI models, their susceptibility to hallucinations and their unpredictable behaviors are also cause for concern in “mission-critical” areas, where both predictability and explainability are of high priority (pp. 5–6).
The MAS recommendations are grouped into the following three subcategories (pp. 7–9):
- Oversight and Governance of AI
- Key Risk Management Systems and Processes
- Development and Deployment of AI
Here we will summarize some of the most important recommendations under those headings.
3.1.1. Oversight and Governance of AI
Whether your organization has only just started to incorporate AI into its business operations, or its AI development is well underway, the MAS Thematic advises updating and compiling your organization’s policies and procedures relevant to AI into a central guide. This will ensure that your organization is able to effectively manage subsequent deployments of AI as further plans involving AI are rolled out.
In line with other regulatory guidelines, the MAS Thematic recommends setting out clear statements and principles detailing how your organization intends to use AI responsibly, “including developing guidelines to govern areas such as fair, ethical, accountable, and transparent use of AI” (p. 11). Following the lead of many banks, an institution can further commit to its AI principles by “mapping them to key controls,” and in turn mapping those controls to the “relevant functions responsible for these controls” (pp. 11–12).
Participating in AI oversight forums is a good way to stay abreast of developments in AI safety and regulation as the AI landscape evolves (p. 10).
3.1.2. Key Risk Management Systems
Risk Management is sub-divided by the MAS Thematic into the following: (i) Identification, (ii) Inventory, (iii) Risk Materiality Assessment.
a. Identification: Identifying where AI is being used in one’s organization is necessary for effective risk management.
We recommend reflecting on industry-specific concerns when developing company policy on. AI. For instance, most banks surveyed by the MAS harnessed definitions in existing Risk Management (“MRM”) policies to extend or adapt these definitions to account for AI-specific cases. As the MAS writes, “Some banks shared that the uncertainty of model outputs is a common source of risk for both AI and conventional models, and that the presence of such uncertainties was a key feature that was usually considered when identifying AI” (p. 13). MRM control functions can also play a “key” role in AI identification, with some banks developing “tools or portals to facilitate the process of identifying and classifying AI across the bank in a consistent manner” (p. 13).
b. Inventory: Most banks reviewed by the MAS maintained a formal AI inventory, including a comprehensive record of where AI was being used in the bank. That inventory typically took the form of software systems that “not only record where AI is used in the bank, but may also include additional features…such as automated tracking of approvals and issues, and identification of inter-dependences between AI” (p. 13). The MAS cautions that reliance on spreadsheets for AI inventories is “prone to operational issues, e.g., outdated records,” and would not allow for additional features available through software inventorying.
It is critical that the organization ensure that “AI are only used within the scope in which they have been approved for use…” [emphasis added] (p. 13). Unapproved usage of AI can lead to unintended consequences in contexts for which it has not been fitted.
An AI inventory will typically “capture key attributes” of the AI system such as (quoting directly), p. 14:
-
- Purpose and description
- Scope of use
- Jurisdiction
- Model type
- Model output
- Upstream and downstream dependencies
- Model status
- Risk materiality rating
- Approvals obtained for validation and deployment
- Responsible AI requirements
- Waiver or dispensation details
- Use of personally identifiable information (“PII”)
- Personnel responsible (e.g., owners, sponsors, users, developers, validators)
When a third party supplies the AI system the following can be included:
-
- AI provider
- Model version
- Endpoints utilized
- Other details from the AI developers that might be found in AI model cards (see https://link.springer.com/chapter/10.1007/978-3-031-68024-3_3)
c. Risk Materiality Assessment: In assessing risk materiality, the banks reviewed by the MAS considered both quantitative and qualitative risk dimensions. These are grouped by the MAS Thematic into the following three broad categories:
-
- Impact on the bank, customers and other stakeholders, including “financial, operational, regulatory and reputational impact” (p. 15)
- Complexity stemming from the nature of the AI system, or from the novelty of the area or use case in which the AI system is being applied
- Reliance on AI, taking into account the autonomy granted to the AI system and human involvement in the loop as “risk mitigants” (p. 15). (See section 1. of this article on the updated Model Artificial Intelligence Framework, Key Guidance Areas, Operations Management for a discussion of human involvement in the loop.)
Processes, measures and methods should be periodically updated to reflect the evolving nature of the business in which the AI system is being used.
3.1.3 Development and Deployment
3.1.3.1 Standards and Processes
The MAS Thematic lists the following key areas of focus for development and deployment: (i) data management, (ii) model selection, (iii) performance evaluation, (iv) documentation, (v) validation, (vi) mitigating model limitations, (vii) monitoring and change management (p. 17). The reader is advised to consult the guideline for a more in-depth view.
3.1.3.2. Data Management
Data management involves the determination of suitability of data, e.g. the “representativeness of data for the intended objective, assessment of completeness, reliability, quality, and relevance of data, and approaches for determining training and testing datasets” (p. 17).
Standards and processes already in place for data governance and management will naturally apply to data used for AI. In addition, however, all of the banks reviewed by MAS had established further data management standards and processes pertaining to ai (p. 19). These additional standards and processes broadly include the following:
-
-
- Appropriateness of data for AI use cases
- Representativeness of data for development
- Robust data engineering during development
- Robust data pipelines for deployment
- Documentation of data-related aspects of reproducibility and audibility
-
3.1.3.3. Development
i. Model Selection: The MAS notes that increased complexity in AI models typically brings along greater trade-offs, such as higher uncertainties and limited explainability. Given the trade-offs, most banks require developers to “justify the selection of a more complex AI model over a conventional model or a simpler AI model” (p. 21). Model justification might include demonstration of “performance uplift” of the AI model over a challenger model (p. 21).
ii. Robustness and Stability: In line with considering the trade-offs between model complexity and performance risks, the banks reviewed by the MAS put a high premium on ensuring that their AI models were robust and stable. The primary focus in this regard was the selection and processing of data for model training, proper measures and thresholds for model evaluation, and mitigation of overfitting.
The testing data was expected to be representative “of the full range of input values and environments under which the AI model was intended to be used” (p. 22). There were efforts to obtain datasets that were tailored to evaluate the AI model’s responses in light of the operating contexts specific to the bank. This could involve, for instance, comparing AI generated answers to customer queries against the answers of in-house human experts (p. 22).
Some of the tests that were used by banks to ensure robustness and stability were the following (paraphrasing the original text closely, pp. 23–24):
- Sensitivity analysis to understand how the predictions or outputs of models change under different permutations of data inputs.
- Stability analysis to compare the stability of data distributions and predictions or outputs.
- Sub-population analysis to assess whether there were any significant differences in model performance across different sub-populations or subsets within the datasets (e.g., between different customer segments), especially to help identify potential sources of bias.
- Error-analysis to identify potential patterns in prediction errors.
- Stress testing the response of AI models to edge cases or inputs outside the typical range of values used in training. Some of the methods used here were adversarial testing and red teaming.
The MAS Thematic highlights the need to establish clear performance thresholds that are mutually agreed upon by developers and validators (p. 24).
In order to mitigate the risk of overfitting, banks tended to favor less complex models over more complex models unless otherwise justified (thereby reducing the risk of overfitting due to fewer model parameters). They also applied “explainability methods” to identify key input features that would help make sense of and explain model outputs. They tested the performance of their AI models on unseen data where possible, and used other techniques such as cross-validation for model training and testing. (Cross-validation resamples the data or splits it up into proper subsets, using different subsets to train and test the model.)
iii. Explainability: All banks had expanded their development standards to include a section on explainability, especially for “higher risk materiality use cases” such as bank staff making decisions on the basis of AI-based predictions, or customers requesting a reason for being denied a financial service (pp. 25–26). There are two types of explainability that could be required, depending on the use case: global explainability or local explainability. Global explainability entails that the overall functioning and behavior of the model can be understood at the “overall model level” in terms of how model input features drive model outputs generally speaking. Local explainability entails the ability to understand how a particular output arose from particular input features (p. 26). An example of a method for global explainability is SHAP, which” generates Shapley values for each feature based on its contribution to a given model output” (p. 26). An example of a method for local explainability is LIME, which involves “training a separate model on the local instance that needs to be explained” (p. 26).
iv. Fairness: All financial institutions should take steps to ensure that biases found in the training data that unfairly advantage or disadvantage certain groups do not become reflected in model output. The MAS outlines the following general steps followed by banks for guarding against bias and assessing fairness (p. 28):
- “Defining a list of protected features or attributes,” such as gender, race or age, that would require justification for use in in AI models.
- Determining whether protected features or attributes were used in training, and identifying groups at risk of being “systematically disadvantaged” by AI-driven decisions.
- Determining whether at-risk groups were systematically disadvantage by AI-driven decisions via fairness measures.
- Providing adequate justification for the use of protected features and attributes in AI models.
v. Reproducibility and Auditability: Most banks reviewed by the MAS had expanded existing documentation to include sections on AI development processes and considerations. A list of commonly seen sections required for documentation is provided by the MAS as follows (closely paraphrased) (pp. 28–29):
- Data Section: Document key data management steps, including “data sets and data sources used in model development and evaluation”.
- Model Training Section: Describe how the AI model was trained, which could include details of relevant code, key settings such as hyper parameters, and other “configurations required for a third party to reproduce the training process”.
- Model Selection Section: Provide details of how the performance of the AI model was evaluated and how the final model was selected.
- Explainability Section: Provide global and/or local explainability methods, feature selection process, and other aspects of algorithm analysis relevant to model justification.
- Fairness Section: Provide metrics and thresholds related to fairness, as well as results of fairness assessments and “justifications for the use of any protected features or attributes”.
Documentation templates were also developed to allow for consistency, and could differ across business domains.
3.1.3.4. Validation
‘Validation’ here refers to the assessment—usually third-party—of AI solutions for the confirmation that the relevant AI standards and processes had been followed (p. 30). There is a wide range of approaches to independent validation depending on the risk materiality rating of the AI system. Some banks opt to require independent validation for all AI, with the depth and rigor varying according to the AI’s risk materiality rating. Other banks limited independent validation to AI systems with higher risk materiality, opting for peer review of other lower risk AI systems.
3.1.3.5. Deployment, Monitoring and Change Management
Pre-deployment checks are a great way to guarantee that standards have been adhered to before putting an AI system into action. The MAS reports that banks “placed significant focus on implementing controls for the deployment of AI to ensure that the AI functions as intended in the production environment” (p. 31). The MAS goes on to list additional tests that were conducted by some banks to conduct quality checks on AI systems before deployment, including: forward testing, live edge case testing, automated pipelines and process management. Forward testing consists of “experimental runs” on a limited set of production data or a limited set of users to select for the use cases that would be most helpful in assessing the AI system in anticipated deployment environments. Process management is used to assess how well AI systems can handle “improbable but plausible scenarios when deployed” (p. 32).
Automated pipelines can be used to minimize human error and maintain a “consistent process for how AI is deployed, monitored and maintained” (p. 32). Process management involves running checks on key process elements such as human oversight, backup models, and “other appropriate controls and contingencies” (p. 32).
The MAS stresses the importance of continual monitoring. To that end, most banks have a process or system in place for “reporting, tracking and resolving issues or incidents if breaches or anomalies arise from the monitoring process” (p. 33). Predefined thresholds can be used to measure model performance. Some banks employed “tiered thresholds” that could signal model deterioration preemptively, and used different thresholds for determining when retraining or full redevelopment of the AI was necessary (p. 33).
All banks had contingency plans for AI, subject to regular reviews, which “typically outline fallback options, such as alternative systems or manual processes” (p. 34). Some banks also had kill switches in place for “mission-critical AI applications,” such as AI used for trading (p. 34). Banks also reviewed AI portfolios and frequently revalidated their AI, especially for AI deemed critical.
For most banks, changes to AI required review and approval by “control functions” prior to implementation. To manage such changes, banks established “systems and processes for version control of both internal and third-party AI”, with version control enabling banks to track changes and “roll back” to previous versions of the model. For automatic updating systems and processes of so-called “dynamic AI”, the MAS argues that such AI “need to be subject to enhanced requirements and controls,” including “justifications for enabling automatic updating” (p. 35). There should also be enhanced risk management requirements, such as additional checks on data quality and drifts and enhanced performance monitoring (pp. 35–36).
3.1.4. Generative AI
The MAS Thematic highlights the greater unreliability and complexity of Generative AI, the greater difficulty in comprehensive testing and evaluation, and the lack of transparency from Generative AI providers (pp. 36–38).
Banks generally displayed caution in their adoption of Generative AI, opting, e.g., to start out by assisting and augmenting human performance with Generative AI, as opposed to directly employing it in customer-facing roles. Generative AI pilots and experimentation frameworks were also used to manage risk.
Some banks put in place cross-functional risk control checks or other process controls to enhance security measures. Many banks also placed emphasis on user education and training on the limitations of Generative AI.
In terms of technical controls, the MAS Thematic reports that most Generative AI models used by banks were from third parties. Consequently, banks would often begin by conducting research on the models, including public benchmarks and the latest research papers, along with testing and evaluation in the context of the bank’s use cases (p. 40). The MAS Thematic reports that banks also conducted “functional assessments” involving “evaluations of Generative AI model performance on tasks and contexts specific to the bank” (p. 40). End-to-end assessments were conducted for entire Generative AI systems, potentially involving multiple AI models.
Another important practice of more advanced banks was the curation of testing datasets that were specific to use cases, in order to ensure that the models were fit-for-purpose. The MAS Thematic gives the example of Generative AI models used for summarizing complaints from bank customers. A model’s performance on general summarization tasks may not be indicative of model performance in that particular use case, since it likely would not have been trained on complaints that were not in the public domain and specific to the bank. For this reason, the MAS Thematic states:
To ensure the proper evaluation of Generative AI in the bank’s context, the bank will need to curate bank-specific testing datasets from the bank’s internal historical data, or use expert human annotators to generate good quality summaries for a set of customer complaints to evaluate against (p. 41).
This is particularly important because the success of generative models in general domains may give the misleading impression that they will be equally successful in specific domains. On the contrary, they may actually fail in those more specific domains, due to being blind to the relevant data during training.
3.1.5. Third-Party AI
According to the MAS Thematic, most banks use Gen AI models that have been pre-trained by an external party (p. 43). This poses further risks beyond the proprietary use of Gen AI, such as “unknown biases from pre-training data, data protection concerns, as well as concentration risks due to increased interdependencies, e.g., from multiple FIs or even third-party providers relying on common underlying Generative AI models” (p. 43).
The MAS Thematic reports that most banks have instituted content guardrails that employ filters “to manage risks relating to areas such as toxicity, biasness, or leakage of sensitive information” (p. 41). Private cloud solutions were one of the ways that banks mitigated data security risks for Generative AI models (p. 41). RAG and source citations were also considered as ways of mitigating Generative AI risks.
Some of the means for compensating for lack of transparency from third-party providers were the following (p. 44):
- Compensatory testing: Compensatory testing involves rigorous testing of third-party AI models to verify model robustness and stability in the banks’ context.
- Contingency planning: Contingency planning accounts for failures and unexpected behaviors from third-party AI.
- Legal agreements: Financial institutions can establish legal agreements requiring performance guarantees, rights to audit and other contractual obligations in order to mitigate risks with third-party AI.
- Awareness efforts: Awareness efforts include the training of staff on AI literacy, conducting surveys with third-party providers on their use of AI in products and services, as well as inquiring into third-party providers’ practices for managing risk.
3.1.6. Closing Remarks on the MAS Thematic
We close out this section of our article with a quotation from the conclusion of the MAS Thematic:
Robust oversight and governance of AI, supported by comprehensive identification, inventorisation of AI and appropriate risk materiality assessment, as well as rigorous development, validation and deployment standards and processes are important areas that FIs need to focus on when using AI. (p. 44)
All financial institutions will naturally be sensitive to the possibility of the unique risks posed by AI systems, but not all financial institutions will be aware of the ways in which such risks can materialize and ramify throughout the AI employment cycle. Holland & Marie offers the expertise from the technical and the regulatory side to help ensure that the proper security measures have been put in place that track the latest AI trends in the financial sector and the evolving MAS regulatory landscape.
3.2. The Principles to Promote Fairness, Ethics, Accountability and Transparency (“FEAT Principles”) (2018)
In 2018 the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector (“FEAT Principles”) was published by the MAS for the purpose of guiding financial institutions in the use of artificial intelligence and data analytics (“AIDA”), specifically with respect to decision-making involving the provision of financial products and services.[1viii] The FEAT principles were developed in close coordination with the PDPC and IMDA in order to align the principles with IMDA’s AI governance initiatives. “AIDA” is defined as “technologies that assist or replace human decision-making”.[1ix]
Many of the principles will already be familiar to the reader, but it is helpful to explore them in this context. They come under four main headings: (1) Fairness, (2) Ethics, (3) Accountability and (4) Transparency.
1. Fairness calls for (i) justifiability and (ii) the assessment of accuracy and bias.
- Individuals or groups of individuals should not be systematically disadvantaged through AIDA-driven decisions unless such decisions are justifiable. The use of personal attributes as inputs for AIDA-driven decisions must also be justifiable.[1x]
- Models should be regularly reviewed and validated for accuracy and relevance. AIDA-driven decisions should be regularly reviewed to ensure they are behaving as designed.
2. Ethics requires the use of AIDA to be aligned with the firm’s ethical standards. AIDA-driven decisions should be held to at least the same standards as human-driven decisions.
3. Accountability demands both (i) internal accountability and (ii) external accountability.
a. The use of AIDA should be approved by an appropriate internal authority. Accountability applies for both internally developed and externally acquired AIDA models. Firms should proactively raise awareness of the use of AIDA to management and board members.
A pre-existing internal authority in the relevant domain may be used for oversight of AIDA-driven decisions. For instance, where the Head of Financial Markets in a firm grants the ultimate approval for the execution of trades, he or she would also do so for AIDA-driven trades.[1xi]
b. “Data subjects” are provided with the means of inquiring about, appealing and reviewing AIDA-driven decisions that affect them. Verified and supplementary data from data subjects are taken into account during review of AIDA-driven decisions.[1xii]
For example, the firm might provide online data management tools such as “privacy or personal data dashboards” where subjects can update or review their information for accuracy.
4. Transparency calls for the proactive disclosure of the use of AIDA to data subjects as part of general communication. Data subjects should be provided with clear explanations of how data was used for AIDA-driven decisions impacting the subjects. Data subjects are provided with clear explanations of the consequences of AIDA-driven decisions if requested.
3.3. Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems (2024)
The Personal Data Protection Act of 2012 (the “PDPA”) provides for the general protection of personal data in Singapore. As such, it applies to the collection and use of data in the training and output of AI systems. The Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems (“Personal Data Guidelines”), published by the PDPC in March of 2024, is intended to “provide organisations with certainty on when they can use personal data to develop and deploy systems that embed machine learning models,” as well as give consumers “assurance on the use of their personal data in AI systems”.[1xiii]
Generally speaking, organizations can use personal data when and only when there is meaningful consent to its use. (There are some of exceptions to be detailed below.) Third-party developers of bespoke AI systems (“Service Providers”) have obligations as data intermediaries under the PDPA to protect personal data and guard against unauthorized modification. The Personal Data Guidelines recommend providing relevant information at the point of data collection in order to allow for meaningful consent. They also encourage including written policies about the safeguards and practices put in place to ensure the trustworthiness of the AI system.[1xiv]
The Personal Data Guidelines are organized according to the typical stages of AI development as shown in the chart below. [1xv]

We will highlight some of the more salient parts of each of the sections, but the treatment will not be exhaustive. The interested reader is encouraged to consult the Personal Data Guidelines for more comprehensive information.
3.3.1. Development, testing and monitoring
As mentioned in the introduction to this section, obtaining consent for the use of personal data in any step of the AI lifecycle is generally required. In the course of development, testing and monitoring, however, there are some exceptions.
The first is the Business Improvement Exception.[1xvi] Personal data that has been collected in accordance with the PDPA can be used without consent when it is for the purposes of improving and enhancing existing goods and services, or developing new goods and services. This extends to the use of personal data for the improvement (or development) of methods or process for business operations in relation to the organization’s goods and services. Additionally, acceptable purposes can include: learning or understanding the behavior and preferences of individuals, identifying goods and services that may be suitable for individuals, and personalizing or customizing any such goods or services.[1xvii]
A requirement for the Business Improvement Exception is that improvement purposes not be reasonably achievable without the use of the personal data in an individually identifiable form, and that their use must be that which a “reasonable person” would consider appropriate in the circumstances.[1xviii] Generally speaking, acceptable uses should contribute toward improving the effectiveness or quality of AI systems and their output. There should not be other cost-effective means to develop, test or monitor the AI Systems without using personal data. It should improve the quality of new product features and functionalities leading toward greater innovation, competitiveness and consumer choice.
Here are some examples of where the Business Improvement Exception could be relevant to AI System development[1xix]:
- recommendation engines in social media services offering content aligned with browsing history
- job assignment systems that automatically assign jobs to platform workers
- internal HR systems used to recommend potential job candidates
- use of AI systems to provide new product features and functionalities to improve competitiveness of products and services
The Business Improvement Exception may also be applicable when using personal data for bias assessment, e.g., in checking if protected characteristics (such as race or religion) are well represented in datasets.
The second kind of exception is the Research Exception, which is intended to for research and development that may not have immediate application to an organization’s “products, services, business operations or market”.[1xx] As with the Business Improvement Exception, it is a requirement that the research purposes not be reasonably achievable without the use of personal data provided in an individually identifiable form. Additionally, there must be a clear public benefit to using the personal data for the research purpose, and the result of the research cannot be used to make any decision affecting the individual in question. Finally, if the research results are published, they must be published in a form that does not identify the individual.
The Personal Data Guidelines recommend using only personal data containing attributes required to train and improve AI systems, limiting the volume of personal data and using pseudonymization when possible. Organizations are also encouraged to conduct a Data Protection Impact Assessment and refer to the Commission’s Guide to Data Protection Practices for ICT systems and the Guide on Responsible Use of Biometric Data in Security Applications where biometric data is used.[1xxi] Anonymization should be conducted to the extent that there is a serious possibility of re-identification.[1xxii]
3.3.2. Deployment
Organizations also have certain consent and notification obligations under the PDPA with regards to the use of personal data in the deployment of AI. Pursuant to Section 13 of the PDPA, consent is required for “the collection of personal data to provide recommendations, predictions, or decisions”.[1xxiii] The Notification Obligation requires users to be notified of “the purpose of the collection and the intended use of their personal data when seeking their consent”.[1xxiv] This requires informing the user as to the types of personal data being collected. Notifications should be developed from the perspective of the user for the purposes of providing meaningful consent.
There are certain Legitimate Interest Exceptions for processing personal data without consent. Such legitimate interests are specified in Paragraphs 2 to 10 under Part 3 of the First Schedule to the PDPA. An example would be the use of personal data as input to an AI system for the purposes of detecting or preventing illegal activities. If intending to call on this exception, the organization should make sure that such legitimate interests outweigh any adverse effects that might ensue from the use of the data.[1xxv]
There is also an Accountability Obligation regarding the discharge of an organization’s responsibility for the personal data it has collected or obtained for processing, or over which it has control. (See Sections 11 and 12 of the PDPA.) The Personal Data Guidelines recommend developing written policies and documenting processes for the sake of showing that the organization has met its accountability obligations. Organizations should be transparent in their written policies about the kinds of data practices used and the safeguards put in place to protect user data. Indeed, Section 12(d) of the PDPA requires organizations to make such information available to individuals upon request. But for the sake of establishing trust with data subjects, the Personal Data Guidelines advises going further, making written policies available preemptively (say on the company website).
To that end, organizations can bolster trust and confidence by providing information about “behind-the-scenes” measures to protect personal data relating to bias assessment and data quality, safeguards and technical measures to protect personal data, and accountability mechanisms and oversight procedures.
With respect to data quality, an organization might wish to describe certain steps taken to ensure the quality of personal data in the training dataset; whether the model development was undertaken with pseudonymized data; whether it was necessary to use personal data when conducting bias assessments; what technical safeguards were employed for securing the testing environment and limiting access; whether data minimization was practiced, and if so at what stages.[1xxvi] AI Verify could be used at this juncture as an independent validation regarding areas for concern. Impact assessments are also recommended.
3.3.3. Procurement
When it comes to business to business provisions of AI solutions, certain obligations are taken on by outside providers of AI services (“Service Providers”) themselves. When Service Providers handle personal data on behalf of their customers they become data intermediaries, and are thus subject to the applicable obligations under the PDPA. The Personal Data Guidelines recommends the following as good practice for Service Providers[1xxvii]:
- Use techniques such as data mapping and labelling to keep track of data used in the pre-processing stage of training
- Maintain a provenance record to document the lineage of the training data, identifying the source and transformation of the data during preparation
Undertaking the above measures will enable both data intermediaries and the deploying organization to assess whether any unauthorized access or modification of training data has occurred. It also helps identify potentially sensitive personal data in the possession of the data intermediary.
Due to their technical expertise, Service Providers may be called upon by their customers to assist them in their Notification, Consent and Accountability Obligations. The Personal Data Guidelines encourage Service Providers to be conversant with the the types of techniques that are required to meet those obligations. Additionally, Service Providers are encouraged to build in processes when designing bespoke or customizable AI systems that “facilitate the extraction of information relevant to meeting their customers’ PDPA obligations.” [1xxviii]
Below is a chart from the Personal Data Guidelines helpfully detailing some best practices for Service Providers in assisting their customers with regard to their various data obligations[1xxix]:
Artificial Intelligence in Healthcare Guidelines (2021)
In 2021, the Ministry of Health Singapore issued the Artificial Intelligence in Healthcare Guidelines (“AIHGle”). The guidelines were based on the framework established by the FEAT Principles and the Personal Data Guidelines, but adapted to the healthcare context. The purpose of the AIHGle is to provide guidance in the development and implementation of AI-Medical Devices (“AI-MD”).
An AI-MD is defined by the Health Sciences Authority (“HSA”) as an AI solution that is “intended to be used for investigation, detection, diagnosis, monitoring, treatment or management of any medical condition, disease, anatomy or physiological process,” typically in ways that have direct impact on patient safety.[1xxx] For example, AI tools used for the diagnosis of cancers or the management of Type 1 Diabetes could be considered AI-MDs. The Guiding Principles of the AIHGle (Fairness, Responsibility, Transparency, Explainability and Patient-Centricity) are very similar in content and scope to those found in the other guidelines covered in this article, so we will not go in depth in describing them. However, the Guidelines do provide a useful set of recommendations for both developers and implementors.
Below are the key recommendations of the AIHGle for AI-MD developers. (Any recommendation in red is already a part of HSA’s existing AI-MD product registration requirements.)[1xxxi]
Below are the key recommendations for implementors.[1xxxii]

In recognition of emerging developments in AI, the AIHGle provides further recommendations for developers and implementers that target more dynamic AI models. In contrast to static AI-MDs that do not automatically incorporate new data into their algorithms, there are now “continuous learning” AI-MDs that have the ability to “continuously learn and adapt” during deployment. Implementers should be sure to identify and take reasonable steps to mitigate risks of deploying AI-MDs with these dynamic capabilities.[1xxxiii] Especially salient is the risk of maliciously introduced data and end-user manipulation. One alternative to direct deployment of continuous learning AI-MD’s is to use a static or “locked” AI-MD, while allowing a dynamic AI-MD to learn in parallel. Once sufficiently checked and validated, the new AI-MD can be safely deployed.
Sufficient safeguards should be in place for continuous learning AI-MD’s, and those safeguards should be submitted to the HSA as part of AI-MD product registration, along with notifications of changes.[1xxxiv] Developers are called upon to “introduce controls to review the newly trained and deployed AI-MD at high frequencies” so that performance consistently meets a certain range and end-users are notified when it drops below that range.[1xxxv]
In the words of its authors, the AIHGle is meant to be a “living” document that is updated alongside the development of AI. It is a supplement to HSA’s broader regulatory requirements,
providing “a set of good practices for developers and implementers”.[1xxxvi] As with all guidelines, if those guidelines are applicable to the organization’s situation, it is advisable to be proactive in meeting them.
4. Potentially Applicable Preexisting Laws
As White & Case note, there are no currently existing laws or regulations in Singapore directly controlling the use of AI. However, the Road Traffic Act of 1961 was amended in 2017 to accommodate the testing and use of AI in motor vehicles. And, the Health Products Act of 2007 also requires the registration of AI-MDs.[1xxxvii]
5. AI Verify Foundation and Relevant Toolkits
Started by IMDA as a not-for-profit, the AI Verify Foundation is a “global, open-source community that convenes AI owners, solution providers, users, and policymakers, to build trustworthy AI.”[1xxxviii] Premier members of the AI Verify Foundation include AWS, Dell Technologies, Google, IBM, Microsoft, RedHat, Resaro and Salesforce.
5.1 AI Verify Toolkit
In 2023, as part of its mission to “foster a community to contribute to the use and development of AI testing frameworks, code base, standards, and best practices,” the AI Verify Foundation launched AI Verify, an “AI governance testing framework and software toolkit that validates performance of AI systems against a set of internationally recognised principles through standardised tests” (https://aiverifyfoundation.sg/what-is-ai-verify/).[1xxxix] The testing framework and toolkit are currently available as a Minimum Viable Product (“MVP”).
As Singapore is a participating member of the International Organization for Standardization on Artificial Intelligence, AI Verify was developed to align with internationally accepted AI ethics principles and frameworks so as to be applicable under a broad range of AI regulations and guidelines. Below is a summary of the 11 dimensions along which an AI system is evaluated by AI Verify. The 11 dimensions are grouped into five focus areas according to the AI Verify Summary Report:[xc]
Focus Area 1: Ensuring that individuals are aware and can make informed decisions
- Transparency: Responsible disclosure provided to those affected by AI systems
Focus Area 2: Ensuring AI operation/results are explainable, accurate and consistent
- Explainability: Able to assess what led to AI system’s decisions and overall behavior
- Repeatability / Reproducibility: System able to consistently perform its required functions under stated conditions for a specific period of time
Focus Area 3: Ensuring AI system is reliable and will not cause harm
- Safety: Ensures no harm to humans, putting measures in place to mitigate harm
- Security: Maintains confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access
- Robustness: Resilient against attacks and attempts at manipulation by third part actors, functional in the face of unexpected input
Focus Area 4: Ensuring that the use of AI does not unintentionally discriminate
- Fairness: Avoiding inappropriate discrimination against individuals or groups
- Data Governance: Governing data used in AI systems, putting in place good governance practices for data quality, lineage, and compliance
Focus Area 5: Ensuring human accountability and control
- Accountability: Organizational structures and actors held accountable for proper functioning of AI systems
- Human Agency & Oversight: Appropriate oversight and control measures with humans-in-the-loop at the appropriate juncture
- Inclusive Growth, Societal and Environmental Well-Being: Potential for AI to contribute to growth and prosperity for all, and advance global development objectives
An example of testing for the transparency category is shown below. It includes a rating for transparency, a justification for the rating and recommendations for improving transparency.[xci]


Here is another example, this time of testing for repeatability/reproducibility.[xcii]
AI Verify currently supports supervised learning classification models and regression models for most tabular and image datasets, but not generative AI/LLM models.[xciii] (For the latter models, see next section on Project Moonshot.) It also does not give a pass/fail score.[xciv] Nevertheless, it can be useful for companies as a first step in evaluating the safety of their AI model. Improvements are likely in the near future.
The AI Verify toolkit can be accessed here.
5.2. Project Moonshot
In order to expand the scope of the AI Verify Foundation to include large language models, the AI Verify Foundation launched the LLM Evaluation toolkit Project Moonshot. The aim of Project Moonshot is to “integrate benchmarking, red teaming, and testing baselines” in order to assist in the management of LLM deployment risks.[xcv] Project Moonshot includes automated red-teaming capabilities, which pressure test the model through adversarial prompting. The ML Commons Benchmark v0.5 has also been folded into Project Moonshot’s functionality.
Project Moonshot is currently in the beta stage of development. It can be accessed here.
Conclusion
While Singapore has yet to enact any pieces of legislation specifically targeting AI, it has been at the forefront of developing a robust AI ecosystem with a wide range of built-in safety measures. From the beginning, Singapore has been concerned with the potential for the misuse of AI, the mishandling of data and the unforeseen consequences of AI deployment at scale. The government is committed to promoting AI systems that are trustworthy and transparent, making these top goals for all contributors to the AI ecosystem.
It is a good bet is that the National Artificial Intelligence Strategy, the Model Artificial Intelligence Governance Framework, the MAS Thematic, and the other frameworks and guidelines canvassed in this article, are a forecast of things to come. These guidelines should be treated as company best practices for organizations seeking to use AI in Singapore. The government of Singapore has invested in tools, such as AI Verify and Project Moonshot, to make that process easier and more standardized. As time goes on, these tools promise to become more sophisticated.
While the sheer quantity of frameworks and guidelines can be daunting, most recommendations can be boiled down to common sense: take rigorous measures to secure one’s data, test the AI model for vulnerabilities to external manipulation, be transparent about the use of AI in one’s operations, make public procedures for addressing mishaps and redressing injured parties. However, a fully developed AI safety strategy requires a thorough and comprehensive review of the use case, technical expertise, and detailed planning.
At HM, our team specializing in AI consulting can help you find the right approach to meeting all of the AI safety and regulatory requirements pertinent to your organization’s AI use case. Please do not hesitate to contact us today with any questions. We hope this review has given a good sense of Singapore’s AI regulatory landscape.
i.While the term ‘artificial intelligence’ has had a number of meanings over the years, most recently it has become a label for technologies that are powered by machine learning, especially deep learning algorithms, such as large language models. We will use the term loosely in this sense unless when otherwise defined.
ii. “Model Framework”, p. 4.
iii. ChatGPT was released by OpenAI on November 30, 2012.
iv “Model Framework,” p. 10.
v Ibid.
vi “Model Framework,” p. 5. Key stakeholders include: providers of AI solutions or application systems that make use of AI technology, organizations that adopt AI solutions in their operations, and current or potential consumers of an organization’s AI products and/or services (“Model Framework,” p. 18).
vii “Model Framework,” p. 15.
viii See Volume 1 featuring use cases from Callsign, DBS Bank, HSBC, MSD, Ngee Ann Polytechnic, Omada Health, UCARE.AI and Visa Asia Pacific, and Volume 2 featuring use cases from the City of Darwin (Australia), Google, Microsoft, Taiger as well as Singapore’s implementation of the Model Framework in its “100 Experiments projects” with IBM, Renal Team, Sompo Asia Holdings and Versa Fleet.
ix “Model Framework,” p. 8.
x Ibid.
xi “Model Framework,” p. 18.
xii “Model Framework,” p. 15.
xiii Ibid.
xiv “Model Framework,” p. 22.
Core roles and responsibilities include: assessing and managing the risks of deploying AI, deciding appropriate level of human involvement in AI-augmented decision-making, managing the AI model training and selection process, maintaining and monitoring AI models with a view to taking remediation measures if necessary, reviewing communications with stakeholders to provide disclosure and effective feedback, and ensuring relevant staff dealing with AI systems are properly trained (“Model Framework,” pp. 22-23).
xv “Model Framework,” pp. 22–24.
xvi “Model Framework,” p. 27.
xvii “Model Framework,” p. 28.
xviii “Model Framework,” p. 30.
xix “Model Framework,” p. 32.
xx “Model Framework,” p. 33.
xxi Chart found on p. 35 of the “Model Framework.”
xxii “Model Framework,” p. 36.
xxiii “Model Framework,” p. 37.
xxiv Selection bias occurs when the training data are not fully representative of the actual data, such as when the data omit certain characteristics from the dataset that occur in the population on which the model is supposed to generalize. Measurement bias occurs when the data collection device itself causes the data to be “systematically skewed in a particular direction” (“Model Framework,” p. 39).
xxv “Model Framework,” p. 43.
xxvi “Model Framework,” p. 46.
xxvii “Model Framework,” p. 51.
xxviii Ibid.
xxix “Model Framework,” p. 54.
xxx “Model Framework,” p. 55.
xxxi “Model Framework,” p. 57.
xxxii “Model Framework,” p. 17.
xxxiii The Gen AI Governance Framework defines Generative AI as, “AI models capable of generating text, images or other media.” Gen AI models, “learn the patterns and structure of their input training data and generate new data with similar characteristics.” (“Gen AI Governance Framework,” p. 3).
xxxiv “Gen AI Governance Framework,” p. 3.
xxxv Chart found on p. 5 of the “Gen AI Governance Framework”.
xxxvi Ibid.
xxxvii “Gen AI Governance Framework,” p. 7.
xxxviii For example, a bad actor could inject false information in Wikipedia pages before the model is trained on data from those pages. (“Gen AI Governance Framework,” p. 8)
xxxix Follow this link for a useful handbook on PETs: https://www.andrewpatrick.ca/pisa/handbook/handbook.html.
x1 “Gen AI Governance Framework,” p. 5.
x1i See https://www.assemblyai.com/blog/how-reinforcement-learning-from-ai-feedback-works/ for a helpful overview.
x1ii “Gen AI Governance Framework,”p. 10.
x1iii Care should be taken not to reveal too much at the risk of revealing information that could be exploited by bad actors.
x1iv “Gen AI Governance Framework,” p. 14. A “serious incident” is defined as “any incident or malfunctioning of an AI system that directly or indirectly leads to the death of a person, serious damage to a person’s health, serious and irreversible disruption of critical infrastructure, breaches of fundamental rights under Union law, or serious damage to property or the environment” (“Gen AI Governance Framework,”p. 14).
x1v “Gen AI Governance Framework,” p. 15
x1vi The Gen AI Governance Framework cites the Stanford Holistic Evaluation of Language Models as an example.
x1vii “Gen AI Governance Framework,” p. 16.
x1viii Ibid.
x1ix “Gen AI Governance Framework,” p. 17.
1 See https://www.assemblyai.com/blog/how-reinforcement-learning-from-ai-feedback-works/ for a helpful overview.
1i “Gen AI Governance Framework,” p. 19.
1ii “Gen AI Governance Framework,” p. 6.
1iii Ibid.
1iv “Gen AI Governance Framework,” p. 20.
1v “Gen AI Governance Framework,” p. 21.
1vi Ibid.
1vii “Gen AI Governance Framework,” p. 22.
1viii The FEAT principles were updated in 2019. (FEAT Principles, p. 3)
1ix “FEAT Principles,” p. 5.
1x It should be noted that the MAS does not lay down any criteria for what counts as justifiable.
1xi “FEAT Principles,” p.10.
1xii ‘Data subjects’ refers to “individuals and/or firms who are or may be affected by AIDA-driven decisions” (“FEAT Principles,” p. 5).
1xiii “FEAT Principles,” p. 3.
1xix Ibid.
1xx “FEAT Principles,” p. 5.
1xxi To be found in Part 5 of the First Schedule and Division 2 under Part 2 of the Second Schedule to the PDPA.
1xxii “FEAT Principles,” p. 6.
1xxiii The reader is referred to para 1(2) under Part 5 of the First Schedule to the PDPA, and para 1(2) under Division 2 under Part 2 of the Second Schedule to the PDPA.
1xxiv “FEAT Principles,” p. 7.
1xxv “FEAT Principles,” p. 9.
1xxvi “FEAT Principles,” p. 10.
1xxvii “FEAT Principles,” pp. 11–12.
1xxviii “FEAT Principles,” p. 13.
1xxix Ibid.. See the PDPA, Section 20.
1xxv “FEAT Principles,” pp. 15–16.
1xxvi “FEAT Principles,” p. 18.
1xxvii “FEAT Principles,” p. 20.
1xxviii “FEAT Principles,” p. 21.
1xxix “FEAT Principles,” p. 21.
1xxx “AIHGle,” p. 5.
1xxxi “AIHGle,” p. 14.
1xxxii “AIHGle,” p. 28.
1xxxiii “AIHGle,” p. 38.
1xxxiv “AIHGle,” p. 39.
1xxxv Ibid.
1xxxvi “AIHGle,” p. 6.
1xxxvii https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-singapore
1xxxviii https://aiverifyfoundation.sg/
1xxxix Some of the tools included are SHAP (SHapley Additive exPlanations), the Adversarial Robustness Toolkit, and AIF360 and Fairlearn for fairness testing.
xc “AI Verify Summary Report” p. 3. This is an abridgment of the list. For the full specification of each principle, see page 3 of the AI Verify Summary Report.
xci Summary Report, p. 5
xcii “AI Verify Summary Report,” p. 8.
xciii It works for models such as binary classification models and “regression algorithms from common frameworks such as scikit-learn, Tensorflow, and XGBoost” (“AI Verify: AI Governance Testing Framework & Toolkit,” p. 8)
xciv https://aiverifyfoundation.sg/downloads/AI_Verify_Primer_Jun-2023.pdf. See https://file.go.gov.sg/aiverify.pdf for further limitations of the toolkit.



