MDDI's Response to PQ on Ensuring Meaningful Human Accountability for Public-facing Autonomous AI Agents and Pathways to Mandatory Governance in High-risk Sectors
Parliament Sitting on 6 May 2026
Question for Written Answer
31. Ms Sylvia Lim asked the Minister for Digital Development and Information, following the launch of the Model AI Governance Framework for Agentic AI, (a) how the Ministry intends to ensure "meaningful human accountability" for autonomous AI agents that interact with the public in the absence of explicit disclosure requirements; and (b) what triggers would necessitate transitioning the Framework from a voluntary code to enforceable standards for high-risk sectors.
Answer
The Model AI Governance Framework for Agentic AI (Framework) sets out guidance for organisations to ensure meaningful human accountability when deploying agentic AI. Organisations should not allow high-stakes or irreversible actions to take place without human review. Appropriate safeguards therefore include identifying checkpoints or action boundaries that require human approval. The Framework also emphasises transparency towards users, such as declaring upfront that users are interacting with agents, and disclosing the agents' capabilities and data access.
Agentic AI use cases and the appropriate safeguards are still evolving. Together with sector regulators, we will therefore continue to monitor how various sectors deploy agentic AI and put the above principles into practice, and continue to consult and learn from international best practices, making adjustments to the Framework as necessary.
