MDDI's Response to PQ on Tackling Risk of Agentic AI Capable of Autonomous Actions and Unforeseen Emergent Behaviours
14 October 2025
Parliament Sitting on 14 October 2025
Question for oral answer
*26. Mr Low Wu Yang Andre asked the Minister for Digital Development and Information, regarding the risk of agentic AI capable of autonomous actions and unforeseen emergent behaviours, (a) what is the Government’s specific plan to regulate these high-risk capabilities; and (b) whether this plan will include legislative empowerment to govern both AI actions and the content that AI generates.
Answer
Under our current AI governance approach, many AI risks are covered by broad legislation such as the Personal Data Protection Act, the Workplace Fairness Act and the Broadcasting Act, as well as by sector-specific guidelines in the healthcare, finance and legal sectors. For specific risks, there are also targeted interventions, such as the Elections (Integrity of Online Advertising) (Amendment) Act and the upcoming Online Safety (Relief and Accountability) Bill.
Human and organisational accountability is central to Singapore’s AI governance approach. The Government’s Model AI Governance Framework sets out guidelines for AI systems to be explainable, transparent, and fair. Organisations deploying AI should establish clear governance structures with designated oversight roles. They should ensure meaningful human accountability in AI-augmented decision-making. Organisations should also implement risk management processes to monitor and mitigate risks, such as algorithmic bias.
Agentic AI presents new opportunities and risks. These systems can execute actions, interact with external systems, and adapt their behaviour with reduced human oversight. Our response focuses on two areas.
First, we recognise that many risks arising from agentic AI are extensions of existing challenges. Existing guidelines and regulations, including risk management processes and oversight, apply to agentic AI systems. The principle of maintaining human accountability and putting in place sufficient controls and guardrails also applies. In addition, our long-standing efforts to strengthen data protection, cybersecurity and resilience help protect the systems that AI agents interact with. We are reviewing how to adapt and strengthen these frameworks and systems to account for the increased autonomy of agentic systems, while monitoring international developments in this emerging field.
Second, we are developing capabilities that go beyond existing frameworks and approaches. We are running trials and experiments to stay abreast of the evolving technology and to understand what it takes to implement agentic systems responsibly. For example, GovTech has been experimenting with agentic AI in public sector use cases. We are doing this with care. This approach will deepen our understanding of how to interact with these systems, build confidence, and harness their value for the public good.
*Converted to written answer