Opening Remarks by Minister Josephine Teo at Singapore Conference on AI
Opening Remarks by Mrs Josephine Teo, Minister for Digital Development and Information, at the Singapore Conference on AI: International Scientific Exchange on AI Safety on 26 April 2025
Distinguished guests,
Esteemed researchers,
Colleagues and friends,
-
A very warm welcome to the second edition of the Singapore Conference on AI (SCAI).
-
As it turns out, this meeting is taking place during the most significant period in our political calendar. In exactly seven days, our citizens go to the polls to elect the next term of Government.
-
This period may also be one of the most significant in Singapore’s AI calendar. It is, after all, the first time that we are hosting some 8,500 participants at the International Conference on Learning Representations (ICLR). I believe this is, in fact, the first time ICLR is being held in Asia. It made sense, therefore, to also organise the Singapore AI Research Week, and I thank many of you here today for being part of it.
-
In democracies, general elections are a way for citizens to choose the party that forms the government and makes decisions on their behalf. But in AI development, citizens do not get to make a similar choice. However democratising we say the technology is, citizens will be at the receiving end of AI’s opportunities and challenges, without much say over who shapes its trajectory.
-
But as a nation, Singapore believes we are not without choices. We can choose to embrace and not reject. We can choose to prepare and not despair. We can choose to contribute constructively and not just comment from the sidelines.
-
We may not design or produce the most advanced AI chips. We may not be the originator of the most sophisticated AI models. And we may not have the widest range of applications. But we choose to envision AI being used for the Public Good, for Singapore and the world. We also choose to prioritise governance, as much as we do adoption. These choices have guided our actions.
-
We were among the world’s first to articulate AI governance principles through the Model AI Governance Framework in 2019. We have since updated it for generative AI, and worked with partners like the US to align our governance approaches.
-
Closer to home, we launched the ASEAN Guide on AI Governance and Ethics, to help address our region's AI governance needs cohesively. AI safety and governance was also one of the focus areas for the Digital Policy Dialogue we had with China last year.
-
We continue to engage deeply with international experts and stakeholders to guide our thinking on how to govern AI well. Last year, we partnered with Humane Intelligence to conduct the world’s first multicultural and multilingual AI safety red-teaming exercise focused on Asia-Pacific. Earlier this year, we launched our Global AI Assurance Pilot to match testing providers with demand, and to share and promote best practices in the testing of Gen AI applications.
-
This commitment to safety and good governance is not new for Singapore, or unique to AI. As a country, we have long held high standards when it comes to areas like aviation, healthcare, and financial regulation. Our approach to AI is grounded in that same principle: enabling innovation while putting in place guardrails to manage risks.
-
But unlike those other sectors, the science of AI safety is still in its early stages. This is why research is so critical at this juncture. In the interest of time, I will not say too much about our National AI Strategy 2.0, except to highlight that of the three key activity drivers of AI excellence, we specifically identified Research.
-
In fact, the Strategy was launched when we first convened SCAI in December 2023. Our aim then was to challenge ourselves to ask the right questions about the future of AI – its promise, its perils, and how we might govern it wisely. The meeting identified 12 key questions that not only informed our research priorities but also affirmed our governance priorities. Many of those questions revolved around AI governance and safety, which we are focusing on today. These questions have since informed global conversations, including at the UN High-Level Advisory Body on AI. And we believe these remain salient, urgent questions that all of us – in academia, industry and Government – must endeavour to answer, if we are to shape a positive future for AI.
-
More than a year on from the first SCAI, we find ourselves at a very different point in the AI journey. Systems are more capable and autonomous – able to plan, delegate and execute complex tasks. Advanced AI is also becoming more accessible. Inference at GPT-3.5 levels is over 99% cheaper than it was in late 2022, and open-weight models are closing the gap with closed models.
-
What counts as progress from one perspective counts as risk from another. Whether it is concerns around the impact on jobs and societies, or the proliferation of AI deepfakes for disinformation or scams, or even unsettling questions on AI alignment with human values and priorities, the need to manage risks will only be more pressing, the more powerful and accessible AI models become.
-
Earlier this year at the AI Action Summit, the International AI Safety Report was launched, which captured many of these concerns. I recall my conversations with some of you here on how frontier models are becoming more capable, less predictable, and harder to evaluate; and about the need for proactive, coordinated efforts across the AI research and policy communities.
-
Today, we will move from questions to action. We will unpack what we need to do to govern AI well, building on the first SCAI and the International AI Safety Report. More importantly, we will discuss how to translate AI safety research into real, effective policy.
-
Often, critical research doesn’t make its way into the policy conversations where it’s most needed. This is not because it lacks relevance, but because we need to build stronger pathways to connect these worlds. Bridging that gap requires genuine partnership between researchers, industry and policymakers.
-
That’s what we are here to do today: to co-create a roadmap for global AI safety research – one that reflects both the pace of innovation and the responsibility that must accompany it.
-
I want to personally thank the Expert Planning Committee – Yoshua Bengio, Max Tegmark, Dawn Song, Luke Ong, Stuart Russell, Tegan Maharaj, Xue Lan and Zhang Yaqin – for their intellectual rigour in shaping our programme.
-
We hope that today’s efforts will be a launchpad for further technical collaboration, and serve as a bridge to policy discussions, including at the ATxSummit Digital Ministers’ Roundtable in a month’s time.
-
So, let’s make the most of this moment. There’s a lot of important work ahead. But with the expertise and energy in this room, I am confident we can help steer AI in the right direction, together.
-
Thank you.