Remarks by Minister Josephine Teo at AI Safety at the Global Level: Insights from Digital Ministers & Officials Panel
20 February 2026
Moderator, Lee Tiedrich, Senior Advisor to the 2026 International AI Safety Report: For Minister Teo, Singapore has been at the forefront of AI governance from the ASEAN AI Governance Guide to the Singapore Consensus on AI Safety. One of the things that Yoshua (Bengio) highlighted that the report talks about is the need to translate some of the evaluation for different cultures and different norms, and also to be able to put it into practice. Based on Singapore's experience, what does it look like to take the science and actually put that into tools and practice that people around the world can use?
Minister Josephine Teo: Perhaps I will offer a perspective as a small state in a part of the world that has a lot of interest in the adoption of AI technologies, but is perhaps only now becoming more aware of the extent of the risks.
In my interactions with my counterparts, I often share with them a perspective – They would have visited Singapore; they would have travelled in and out of our air hub. And I explain to them that Singapore does not own aircraft technologies. Boeing does not belong to us, neither does Airbus. But we have to be concerned about the safety of how these aircraft are manufactured. We have to be concerned about maintenance, repair, and overhaul. We have to be concerned about air traffic management. If we didn't have all these elements in place, it's very hard to see how you can have a thriving air hub and be responsible for the lives of millions of people passing through the airport.
So that's the reason why we think we have to be invested in the conversations and the efforts to bring about AI safety. If we want to see wide adoption in our region, then we must equally be aware of how the risks can be mitigated. So that's the starting point.
The second point I'd like to make is that ultimately, as policymakers, our objective in understanding the safety aspects must translate into how we can put them into operable guardrails. And very often, this would mean standards that are being imposed. This would mean regulations and laws.
But we have to do it in a thoughtful way, because we still do want to benefit from this technology. So if we are not targeted in the way we implement these requirements, then what we might achieve is not just an impact on the pace of innovation.
What we could end up with is a situation where we have given a false promise to our citizens, giving them the impression that we have protected them when in fact we haven't actually done so.
That's why I think we need to be thoughtful. Part of Singapore's interest is also that when there is clarity about what needs to be done, we want to be able to move very quickly.
Yoshua has talked about the misuse of AI, for example, to generate images that often target women and children. What we did last year was introduce a new law. It imposes statutory obligations on the services that carry these images and make this content available to vast numbers of people. These services have always said that they are not responsible for the generation of such content, and that is something we take on board. But once they have been notified of the existence of such harmful content, there is an obligation on them to remove it. This new law that we passed imposes such an obligation.
Yoshua also talked about the findings in the reports – how AI and cybersecurity are intersecting in very, very concerning ways. For example, AI being used to target systems, and so AI is a threat. Now, however, we also see that AI itself can be a target of cyber-attacks. And when AI becomes a target of cyber-attacks, particularly for multi-agent systems, those kinds of risks can easily go out of control.
So even as the Singapore Government is experimenting with the use of AI, we want to be very thoughtful about how these AI agent systems are being architected and what exactly goes into the decision-making process regarding the agency that is being granted. Is there a way to put guardrails around it?
So I would frame it this way: AI as a threat, AI as a target, and, where we really need to cooperate and do much better, AI as a tool to fight these threats. Those are the kinds of things that within the ASEAN community we hope to be able to make progress on.
Moderator: In addition to the policymakers being able to use this information -- through my work, I end up talking to a lot of organisations, nonprofits, small- and medium-sized businesses. What I hear a lot is: it's great -- you have to start with the science, and that is ground zero. But then some of those other organisations need the tooling. They're not going to have a whole scientific staff to figure out how to put that into practice. And I'm just wondering, from the government's perspective, Minister Teo, what are your thoughts on how we might be able to advance some of the tooling to take this great learning and make it easier for companies and other organisations to actually deploy?
Minister: I was at a similar session recently, and this topic came up. The way I think about it is that I use IKEA as an example. You know, when you go to IKEA, you buy furniture, and IKEA promises you that this furniture has been tested. So, if it's a couch, it has been jumped on, perhaps 25,000 times, and it didn't break. You know that your kids are not going to be hurt if they jump on it too – well, up to 25,000 times.
If you think about a user on the receiving end of this technology, it is quite unreasonable to expect them to have to impose safety conditions on their own. They're simply not in a position to do so, and they don't have the power to decide what gets sold to them and what does not.
So, we as policymakers must recognise that there is a huge gap in the ability of those we are encouraging to adopt AI tools and technology in various contexts to assess safety for themselves. We must think about where the right points are to make these requirements mandatory, and where it might be more useful for industries to come together, rather than imposing strict mandatory requirements. For example, in Davos, we discussed the possibility of insurance schemes and creating the right incentives for AI model developers. And I think that there is no easy landing point just yet, but if we fail to engage in these conversations in a rational way, then I think we are even further behind in trying to manage the risks. So I would say that the thoughtfulness has to be applied at many different levels.
There needs to be continued research in AI safety. And so I'm very happy that we are continuing to have this conversation through the second edition of the International Scientific Exchange for AI Safety in Singapore. We hope to update which areas of safety research should be prioritised. This year, I certainly agree that multi-agent systems are going to come up quite prominently.
But we cannot just stop there. We also have an ongoing programme. We started by setting aside commitments under our own National AI R&D Plan. In fundamental research, one of the areas that we are very interested in is responsible AI, so you need the two to go hand in hand. But should we hold back on testing frameworks and toolkits until the research is settled? We think that is also not helpful. It is more pragmatic to recognise the shortcomings of those testing tools, and then to invest further effort in promoting more thoughtful ways of looking at the risks of these systems and how to mitigate them.
Ultimately, we should try to get to a point where the end user has assurance of safety so that they don't have to be thinking so hard about whether the proper tests have been applied. We're not there yet, but I think we need to find a way to work out the roadmap.
Moderator: I'm interested, and I think it touches on some of the themes of “how do we take the science and bring it to practice?” and “how do we actually create this evaluation ecosystem?” So step one is developing the science. Step two is then figuring out, “how do we actually evaluate this?” And then there's “by whom?” How do you see an evaluation ecosystem emerging? Do you see governments being the evaluator? Do you see this going more like we have with accounting, where you have third-party certified auditors doing the evaluations? I'd be interested in each of your thoughts. Maybe start with Minister Teo.
Minister: Well, certainly in the ASEAN context, I would advocate for an approach that addresses near and present dangers that everyone is dealing with.
The risk of not focusing on what's most prominent in people's minds today, and policymakers' minds today, is that the conversation may feel too theoretical, and we may lose interest and momentum, and we won’t even build the foundations of cooperation in a meaningful way.
What are some of those areas? AI being used, or misused, to harm people through content creation. I think that's one area.
Almost every single policymaker that I come across is very, very upset by the fact that they have to address their constituents' concerns about all these harmful images that are being created with the use of, or with the help of AI. It's very offensive to our societies.
And if we are not able to work on these areas in a meaningful way, in a practical way, then I think we risk losing my colleagues' attention.
So what can we do? We have to then seriously ask: Is watermarking the correct approach to dealing with it? Is there some other way of labelling AI-generated content? Is that even the right direction that we should be moving in?
The other area that I think will be very prominent is the use of AI in cybersecurity. I don't think AI as a threat is adequately addressed at this point in time. AI as a target is even further from people's minds. Picking up the conversation in the areas that my colleagues care about, I think, stands a better chance of anchoring their attention and creating meaningful opportunities for us to say, “Here are the ways you can test for it”, and “Here are the tools that can be applied”.
They won't be perfect, but they are an important start.
Audience Member: We now hear a lot about the rise of digital sovereignty everywhere, and more and more countries are trying to claim it in one way or another. I would be really curious to hear how, at least in the AI safety field, you are perceiving that impact, and which of the most pressing safety concerns get thrown out of the window as a result?
Minister: Yeah, I'm so glad that Yoshua has offered a view that to me is a very sound approach. You (Yoshua) said earlier that what we want is a world where every country can be at the table, not on the menu.
That's exactly how you can preserve sovereignty, even with AI developments. The idea that you get sovereign AI by confining everything to your own shores, I think it gives a false sense of security.
Firstly, it's not achievable. Secondly, for many countries, where the most sophisticated applications will have to originate from elsewhere, trying to confine everything to your own shores just cuts you off from being able to make progress, and that puts you even further behind.
So how does sovereignty fit in? It has to be a topic that is dealt with thoughtfully. It's not a term to be bandied about too easily.
