Remarks by Minister Josephine Teo at the Fireside Chat on AI: The Global Context
20 February 2026
Moderator, Mariano-Florentino Cuellar, President, Carnegie Endowment for International Peace: We talk about emerging or rather emerged technology, and how much it's going to affect countries, large and small. Minister Teo, you are playing a critical role – and I know this because I see you at every single AI summit in the world. It's amazing.
How are countries like Singapore positioned to navigate this tsunami of change, and what do you think we can learn from Singapore's strategy? As I see it, Singapore has been at the forefront of AI governance – with the Model AI Governance Framework, for example – while also navigating a world that some people see as balkanised between China and the United States around the technology stack.
Minister Josephine Teo: Thank you very much. Tino, that's a lot of questions packed into one. I'll do my best to address them. I think embedded in what you're saying is that there is the risk of technology decoupling, and what does a small state do in this kind of context, and how do we navigate the big power contestation?
The way we think about it is that for Singapore, it's very important for us to maintain this ability to operate as a trusted node. Being a trusted node means that others can trust us with their technology, so that our companies and people can continue to access these technologies that are the most sophisticated, because they will not be abused, and the risk of their being misused is also minimised.
The question is, how do we remain trusted? And I think the only way to do so is if we act in a consistent and principled way. Being consistent and principled is not a matter of size – Singapore is not the only small state that has a good track record of maintaining this discipline.
We are consistent in being pro-Singapore, and sometimes our choices may align with this country or that country. Sometimes they will align with many countries; sometimes they align with only a few. But they always align with our own interests. In technology choices – 5G, for example – we always operate on the basis of principles. Number one: these are commercial decisions that have to be undertaken by the operators of the mobile networks, and they have to decide on the basis of what works for them in terms of performance, security and resilience, keeping in mind all the rules that are in place in our context. Those are the broad directions in which we operate. It's not easy, but it's a path that has served us well.
Moderator: There are enormous possibilities for AI, but along with that opportunity, will probably come some disruption, some real policy difficulties in some countries that are experiencing rapid changes in the labour market. The question then is how we might develop the right strategy so that the productivity gains that the world can experience would actually translate into shared prosperity. What do you think we can do on that score?
Minister: I think sometimes there is a tendency to want to regulate AI in order to slow down its advance and, perhaps, to try to forestall the risks. I'm not underestimating the need to make sure that there are guardrails on AI safety. These are important. But to expect AI regulations to also deliver on the other important issues, such as the potential for greater social inequality, is, I think, unrealistic.
The way to deal with it is to look at what other methods there are to strengthen social solidarity. For example, what provisions do we put in place to help people move from one job to the next? What provisions do we put in place to ensure that even people who don't earn a lot have the prospect of owning their own homes, access to good healthcare, and educating their children to a very high level? I think these are the other things, and you cannot run away from those conversations just by expecting regulations to solve the problem.
Moderator: Imagine yourselves 15 years in the future, looking back at the past. At that point, you're being interviewed on the same stage here in India, and you're saying it's been a very good thing to see how well the world has handled its relationship with this emerging technology of AI. And it's turned out very well, because of “blank”. I want you to mention one thing that you think in particular would have been so critical to make that transition. Well, you've all mentioned a bunch of things, but I'm interested in the main, most important takeaway that you'd like to leave with the audience.
Minister: For me, that one word is trust. In 15 years, if we go and ask citizens in all the countries where AI is being deployed widely, “Do you trust this technology?”, and their answer is no, then I believe we must have failed in some way. If they believe that this technology has been implemented in a way that didn't rob them of their livelihoods, didn't leave them totally misinformed about the world, allowed them to carry out their lives in a safe and secure manner, and didn't destroy families, I think that would be a success. And if they can still see that this is a technology that works reasonably well when the safeguards are in place, I think we would have come a long way.
