MDDI's Response to PQ on Assessing Laws and Liability Frameworks Given Risk of Serious Harm Arising from Large Language Model Outputs
7 May 2026
Parliament Sitting on 7 May 2026
Question for Oral Answer
*25. Mr Cai Yinzhou asked the Minister for Digital Development and Information (a) whether the Ministry has assessed existing laws and liability frameworks in relation to serious harm arising from the stochastic nature of large language model outputs where residual risks persist even after reasonable precautions have been taken; and (b) whether the Ministry is considering AI-specific or strict liability frameworks to address accountability gaps that arise when such residual risks materialise into harm.
Answer
The Government recognises that Generative AI, including large language models (LLMs), brings significant benefits. However, its probabilistic nature means residual risks may persist despite reasonable safeguards. The Model AI Governance Framework for Generative AI provides guidelines to manage these risks, with a focus on accountability, transparency and public trust.
There is also existing legislation to protect Singaporeans from online harms and risks, such as the Online Criminal Harms Act (OCHA) and the Protection from Online Falsehoods and Manipulation Act (POFMA). These laws extend to harms that could result from the use of AI and its outputs. In addition, existing legal principles, such as those under tort and contract law, can be applied by our courts to assess and determine liability in appropriate cases.
That being said, we share the concerns raised by the Member and will continue to study this issue, including by consulting practitioners, academics and industry on whether there are accountability gaps and whether further policy, regulatory or legal measures may be needed.
*Converted to written answer
