Transcript of Minister Josephine Teo's Fireside Chat at Fortune Brainstorm AI
22 July 2025
Question: With reference to the AI Action Summit in Paris this year, which is a big event that got a lot of people’s attention, Singapore then launched a Global AI Assurance Pilot, and more recently, it has established something called the AI Verify Foundation. Can you tell us about these initiatives, what they are and how they have evolved?
Minister: Clay, it’s great to be back and I’m also very happy that you returned to Singapore for this Brainstorm AI event. You referenced what we were doing at the AI Action Summit. For the audience’s context, the AI Action Summit that took place in Paris is the third edition of what started out in Bletchley as the AI Safety Summit. In between, there was a summit held in Korea, and by the time it got to Paris, our French colleagues decided that they didn’t want to focus so much on safety, but more on action. If we ask what we really want to see in AI development, the way Singapore thinks about it is that, this being a general-purpose technology, you do want it to be very widely adopted across many industry settings, as well as across a whole range of organisations.
So, when we updated our National AI Strategy, we thought that we needed to ensure that people understood what this was going to be for. Hence, we said that the vision for Singapore was AI for the Public Good for Singapore and the World. We decided to focus on public good because commercial incentives would naturally create many commercial applications of AI, but for the whole of society to benefit, and for people to appreciate its value, this being a general-purpose technology, you also want AI to be applied towards public healthcare, public transportation, public safety and policing.
So that’s one dimension of AI development that we wanted to see in Singapore. The other dimension is equally important. In many of these commercial settings, you would want AI to be implemented and deployed in a responsible way. The question, however, is: what does responsible AI look like? Well, it really depends on the application, and on which layer of AI development you are talking about. Are you talking about the infrastructure, the models or the applications?
Apart from being able to articulate what responsible AI looks like, you also need to answer the question of “How do you get there?” If you say that you are implementing responsible AI, how do you test for it and assure people that, in fact, this is happening? So as we were promoting AI adoption, we decided at the same time that these questions deserved investigation. You need to put aside resources for research, but at the same time, it’s useful for us to start developing practical tools to answer the question of what responsible AI looks like.
That’s how the Global AI Assurance Pilot came about. It was to create a number of methods by which AI developers as well as deployers were able to assure their stakeholders that they were implementing in a responsible way. The idea was also to learn from it, accepting the fact that at this early stage, there are no interoperable standards that apply across the globe. We don’t know when we will get there, and so, Singapore’s modest hope is to contribute to this conversation and bring about international collaboration so that we can all develop best practices in AI development.
Question: Really interesting, there’s a lot to think about there. I want to come back and maybe talk a little bit more about how assurance as a strategy aligns with Singapore’s commercial competitiveness objectives. Before we get to that, I want to ask you about one of the other big developments of the last year, which happened just about the time a lot of people were in Davos. I remember being in Davos and hearing people at the beginning sort of looking down on China’s ability to catch up to the United States. Then, right in the middle of Davos, there emerged an announcement of this thing called DeepSeek. A Chinese hedge fund had released this new AI platform that was produced with a fraction of the resources of OpenAI or Gemini but seemed to be pretty close on all the significant performance benchmarks. Last year, we talked about the age of big data and AI superpowers, and whether that created daunting obstacles for smaller states like Singapore. So, I’m curious to hear your thoughts about the implications of the DeepSeek breakthrough for countries like Singapore. We’ve had a couple of these other really fast-follower platforms come through in China recently. Is it something that Singapore can now compete with by following those kinds of examples? How does it open up and democratise the AI game for a country like Singapore?
Minister: Well, the thing that we ask ourselves is, what are the impediments to using any sort of technology for innovative purposes? Inevitably, cost becomes an issue, because for a company that is already deploying technology in a certain way, the question of whether to use AI as an alternative or as an addition must come up, and it has to be weighed against the benefits. So, when the cost comes down, whatever benefits you were able to get from it may look more attractive. From the perspective of bringing down costs, innovations such as DeepSeek are very much welcomed, but I would also say that this whole dynamic is not necessarily only a competitive one. It is also mutually reinforcing. Let me give you an example. We don’t use DeepSeek per se to develop our own Large Language Model, but within the context of Southeast Asia, we know that Large Language Models that are trained primarily on a Western corpus, perhaps primarily in English, will have difficulties being applied in the Southeast Asian context. In our region, there are already probably more than 1,000 local languages. And if you built AI tools on top of Large Language Models that didn’t incorporate the kind of data found in our part of the world, then naturally the quality, the way the tools perform, the way they respond to prompts, will perhaps not meet the requirements of Singapore as well as our neighbouring countries.
What we did, therefore, was to develop our own Large Language Model, SEA-LION. It is not necessarily the biggest, but it is certainly trained on languages commonly found in our part of the world, like Malay, Tamil and Bahasa Indonesia. So I think there is room for both. Indeed, many companies are thinking about how they can develop, for example, chat assistants that could be useful in our context. They would use a combination of both, and so that’s how I think about it. In an innovation ecosystem, ways in which we can bring down the cost, and ways in which we can complement one another in terms of the models available, just open up the space for innovation to a larger extent. In the Singapore context, when we think about AI governance and AI innovation, the two are not necessarily at odds with each other. Being able to assure ourselves of safety and proper governance can actually enable innovations to go further, because then the developers and the deployers know the markers that they should work with.
Question: This is very interesting, because it's a more neutral approach to evaluating technology and AI platforms. In the US right now, a lot of this stuff is very politicised. So, we thought that TikTok was a terrible national security risk until we realised that it was so popular that we couldn't really stop it. And certain political leaders realised that it was to their advantage to embrace it. But there is still a lot of ambivalence about DeepSeek in the US, and the US government is prohibiting government agencies from using it. I gather in Singapore, there is none of that. There’s a “Well, let's look at this and see if it is useful or not” attitude.
Minister: Or perhaps I would put it another way. When companies, developers and deployers think about what model they want to use, there is always an assessment of a couple of things. One of them would be, of course, performance. The others that they would take into consideration are security and resilience. And it's up to every organisation to evaluate all three and draw its own conclusions about what can be used, what cannot be used, and what it does not wish to use. I would imagine that there will still be concerns about DeepSeek and other models that would cause companies to take a pause and say that maybe this is something they are still not very comfortable with. I wouldn't be surprised at all. This is a very natural consequence of organisations having to weigh the costs and benefits, as well as the many dimensions of what makes innovation worth the effort.
Question: The other thing that's related to this that's new over the past year is that we have a new President in the US now. Relations between the US and China have gotten more tense. President Trump is imposing very large tariffs on China, China has been retaliating, and the US has tightened restrictions on selling chips and other technologies to China. We talked last year about Singapore’s strategy of being non-aligned, kind of like the Switzerland of Asia, a space where both Chinese and US companies and tech players could operate together. So my question is, can Singapore still do that? And this also sort of applies to the rest of ASEAN. Can it remain strategically non-aligned, or are the two giants trying to force small and medium-sized economies like this one to pick a side?
Minister: I think Singapore's consistent approach is to act in a way that meets our own interests, and that is the starting point. That has to be our guiding principle. Now, we would certainly hope that relations between the two giants can warm up to a much greater extent, but it's not something that will happen just because we wish for it. I wish it was something that we didn't have to deal with, but it's a reality.
The way in which we go about it is to engage all countries, not just the US and China, to build upon our bilateral foundations and to try and make headway in new technology areas too. So with the US, we have a dialogue on Critical and Emerging Technology (CET). Similarly, with China, we have a Digital Policy Dialogue. They cover different areas of interest that we mutually believe are important for our own countries. But it doesn't prevent us from seeking to understand each other's concerns better and then continuing to find ways to move forward. For example, in AI assurance and AI governance with our US colleagues, we did a mapping of our own risk assessment model against theirs, and this serves as a basis for us to engage with colleagues elsewhere around the world, particularly those that are involved in the AI Safety network.
With our Chinese colleagues, we noticed that their industrial foundation is so broad and deep that the applications of AI would be very interesting to watch and learn from. There are also different ways in which we engage with ASEAN countries. Even if we are not ready to move into the era of standards in AI governance, there's nothing to prevent us from agreeing first on what the ethical principles could look like. And then with other small countries like us, we created a digital pillar in the Forum of Small States, of which Singapore is the convener. And with countries like Rwanda, we've created an AI Playbook for Small States. So these are the ways in which we are coming together for conversations around different aspects of AI, hoping to make progress.
Question: If I can talk about scaling the digital workforce to kind of support a lot of these initiatives. The last time we spoke, you described the government's plan of training 5,000 AI practitioners in Singapore. You have since tripled that, as I understand it, to 15,000. Why the increase, and how do you achieve it, and what would be the background of these practitioners?
Minister: Well, number one, I think there's a lot of interest. Many people in Singapore are aware of the potential benefits of AI, and they are finding ways to acquire the skills. I would say that whether it is AI creators or AI practitioners, I am overall optimistic about what can be achieved through our workforce if each and every one were enabled to acquire some AI-related skills. You can imagine that certain basic AI skills will become as ubiquitous as you and I being very familiar with using email. Perhaps in the earlier stages it was not so common for people to be able to use digital tools, whether for word processing or making slides and so on, but over time, we all acquired a certain level of confidence. We think, however, that it's not enough just to have a basic level of competence. So beyond the 5,000 and then 15,000 that you described, whom we call AI practitioners, we’re hoping also to grow another pool to complement the data scientists and machine-learning engineers. We’re talking about people in the professions, lawyers, accountants, doctors, who will become the early adopters of AI and then show their peers how to make better use of it. We're also talking about people in different sectors, whether it's manufacturing, healthcare or financial services, who themselves acquire this facility with using AI and then demonstrate how it can create more value for their organisations. This group of AI practitioners will have to far exceed the 15,000 that I talked about. In the Singapore context, our workforce is maybe three and a half million. At the level of competent AI users, I think you will need to get to a much larger number than we were talking about last year. So watch this space; we will have more to say about this.
Question: We will watch it with great interest. We've got just a couple of minutes here, and I wonder if we could come back to this initial point about assurance. It's very interesting to me, because it seems that what Singapore is doing is trying to become a kind of test bed for safe and responsible AI, to create a kind of haven and build a brand around that, much the way Singapore has done with fintech regulation and cybersecurity services. Is that the idea, and maybe you could say a little bit more about the philosophy behind this?
Minister: The broad idea is that this technology can have many applications, and you don't want to overly restrict its growth and development. On the other hand, for a technology that can be used in so many different contexts, you've got to watch that it doesn't cause harm, because the moment it does, people become very wary of seeing it used in even more ways, and it's much harder to recover from that sort of loss of momentum. So the way in which we'd like to go about it is to set out the broad directions, and also the principles that we should be working along, and then to say to the industry and the organisations that are using AI: let's try and find a way to define the risks, and then find ways of mitigating those risks. That's generally the approach that we take: small steps, not necessarily always “big bang”, but when we're ready to move, then move quickly.