Opening Keynote by Minister Josephine Teo at Preparing to Monitor the Impacts of Agents: Closing the Global Assurance Divide for Safe and Trusted AI
20 February 2026
Thank you, Partnership on AI, for the invitation.
When this series of summits first began at Bletchley Park, AI agents were not a thing. Nobody was talking about them; even just 12 months ago, when we had the AI Action Summit in Paris, they had barely crept into the conversation.
At the time, the preoccupation was all around DeepSeek and what it told us about the capabilities emerging out of China. But today, agentic systems have taken off. They are increasingly being used, and we need a better grasp of how to deal with them.
Agentic AI certainly offers transformative possibilities in how we delegate and orchestrate work when deployed strategically. Agents function as invaluable teammates, unlocking productivity gains and time savings, which we all want more of.
However, I should also add that the very thing that makes agents helpful to us is autonomy. This autonomy also introduces new risks. The potential for harm increases when systems malfunction and human oversight is absent, or at least greatly diminished. The implications may be complex and not fully predictable.
The way my colleagues and I have been thinking about this is that there needs to be a shift from relying on reactive regulation to a different kind of stance: proactive preparation.
And in Singapore, that's what we've been trying to do. We have tried to be proactive about governing the new risks in the era of agentic AI.
I think it starts with the Government itself being a leader and not a laggard in using agentic AI. We need to test it. We need to look at how the solutions can enhance public service delivery but also put in place more controls.
Government is high-risk because the touch points with citizens are very sensitive. No government wants to make serious mistakes when it interacts with its citizens – telling them inaccurate things about their health, social security, or benefits, and having these mistakes not just communicated to citizens but acted upon. The need to ensure that we know what we are doing is therefore very high, and part of how we are addressing it is by working with industry.
For example, between Google and the Singapore Government, we have a sandbox on agentic AI. It is one of the ways in which we can, in a sense, eat our own dog food. Try it to see if it tastes alright. Does it hurt us in any significant way? If we were not willing to do so, I don't think we would have much credibility in how we want to govern agentic AI. But we cannot simply wait for the consequences of that dog food to materialise for ourselves.
In the meantime, my colleagues have put together a Model Governance Framework for agentic AI. It is meant to provide practical support to enterprises so that they, too, can deploy autonomous agents responsibly and mitigate the risks. We know that this is not a complete solution, and the document we have put out has to be a live document. We very much encourage feedback as a way for us to keep improving the guidance to enterprises.
As we do this work, what is the meaning and purpose behind it? Ultimately, it is to build confidence in the use of agentic AI systems. At many levels, this confidence has to be presented and demonstrated to boards of organisations, customers, and other stakeholders. How do we demonstrate that the risks have been managed well?
That is where the assurance ecosystem comes in. It is an absolutely essential part of building trust over the medium to longer term, so that there is a foundation upon which agentic AI systems can be more readily adopted and made available.
I should also say, to companies that are thinking about this: if we are to trust these agentic systems, the safety aspects should not be downplayed.
I would venture to say that a company that is able to give a high assurance on safety will find itself being differentiated from its competitors, and this is more likely to translate into stronger interest in its products and services.
Rather than thinking of safety assurance as something you are unhappy to comply with, think of it as a strategic competitive advantage – one that gives us the confidence to put these systems forward.
The question, however, is: are we completely without experience in this regard? The answer is no.
In aviation and healthcare, many measures have been put in place to give assurance: when we board a plane, we usually expect to arrive, and when we visit a hospital, we generally expect to be treated, except for disease conditions that are not yet well understood.
The trust in these systems has to be built over time, and it doesn't come without some assurance being put in place. The question is, for AI, and specifically agentic AI, what would be the components? What leads to an assurance ecosystem that would be robust enough?
We think that there are at least three components.
The first is that there must be testing. We need some way of conducting technical assessments to ensure that the systems are robust, reliable and safe. A lot more work needs to be done in this space – developing testing methodologies, building testing data sets, and making sure that the testing of agentic systems accounts for the fact that these systems will be much more complex because they involve multiple agents.
For example, it's not just the output, but the in-between steps – how the reasoning takes place and what is the orchestration that is being built into the agentic systems.
The second is that eventually we will need standards. It is not enough for each of us to define what is good enough on our own; we also need to assure users that a system has met shared expectations for safety and reliability. These are still very early days.
Third, we think that this ecosystem cannot do without third-party assurance providers. It is one thing to claim that your agentic AI system is safe, but another to have someone attest to its safety. These could be technical testers or auditors; they provide independence, augment in-house capabilities, and help identify blind spots. It is necessary for us to strengthen this pool as well.
I want to conclude my remarks by saying that Singapore is actively building these components.
We welcome conversations with partners and colleagues, because we know that we cannot do this alone. We look forward to discussions in the three panels on how we can meaningfully collaborate on assurance for agentic AI.
Thank you very much once again.
