Opening Address by Minister Josephine Teo at ATxSummit 2025
28 May 2025
Excellencies,
Distinguished colleagues and friends,
Introduction
A very good morning and thank you all for joining us.
I am keenly aware that some of you have travelled long distances to be here. I just want to say how much we appreciate you making your presence felt. We would like to extend our warmest hospitality so that this can be a very meaningful use of your time.
When we first convened Asia Tech x Singapore (ATxSG), we envisioned it as a platform to bring together global technology leaders from governments, companies, research institutions, and civil society to discuss future tech innovations, the evolving digital landscape, and our responses to all of the opportunities and challenges.
We are now in our fifth edition, and your presence here tells us that you find this a useful platform. And that we all instinctively understand what President Tharman referred to last night as the “broad coalition of the willing”.
For the benefit of those who could not join us at our opening dinner, I encourage you to read the President’s full speech. It provoked us to think about the inherent tensions of AI’s progress, and called on us as leaders to move forward with a combination of humility and tenacity.
ATx is not just about AI. Having said that, this is a moment of truth for all of us where AI is concerned.
And so, for my keynote this morning, with your permission, I will share some reflections on Singapore’s journey, since we launched our refreshed National AI Strategy (NAIS 2.0) in December 2023.
Catalysing AI in Industry
You will remember at the time, the obsession with access to GPUs – the compute capacity for AI workloads. It is not unusual, at the beginning of an innovation cycle, to seek to boost activity from the supply side. Some access to this capacity is clearly needed. It is, however, the demand side that needs nurturing, to sustain a pace of progress that will keep the supply flowing.
To start, we turned to industry to identify applications with commercial utility. Initially, few businesses were wise to the benefits that AI could bring them. Insights come chiefly through experience, and this was not readily available.
It takes bold ambition. Such as when a bank declares that it is really a tech company offering financial products and services; or when an airline says it wants to transform civil aviation with AI. Such declarations of bold ambition unlock the mind as to what the effort is all about, and unleash a new kind of energy that is essential to rallying support for experimentation.
When ambition meets resource commitment, there can be vision and there is potential. But for vision to become reality, potential must be matched with capabilities. This is where we have seen steady build-ups, with companies forming AI transformation teams and plugging gaps with a combination of training and hiring.
Getting the full benefits of AI often involves changes to a business’ operations. If nothing is broken, who says you should try and fix it?
Legacy systems and processes need to be updated or replaced, and employees at all levels need to be equipped with the relevant skills. But there is going to be friction and resistance.
All the good things we want to see happen will take time. But what we are seeing in Singapore is that the early signs are very good, with significant reported gains in productivity and cost savings. This then helps to build support for the next phase of efforts.
Some leading organisations have gone further to set up AI Centres of Excellence with meaningful mandates and sizeable budgets, to enhance infrastructure and engage in AI research and development. On each visit to such Centres, I see enthusiasm in great abundance, as well as the experimentation that is taking place.
And the Government is more than willing to support these efforts. Not just to cheer them on, but to back them up financially.
Let me just say, however, it’s one thing to build capabilities enterprise by enterprise. But there’s also value in aggregation. In manufacturing, for example, common data standards would enable larger datasets that can be used for better failure detection and defect prediction using AI models. With manufacturing contributing some 20% of our GDP, there is good reason for our specialised, sectoral AI Centre of Excellence. And that’s what we have today.
Aggregation can also take place at the national level, such as when we decided to develop SEA-LION, which stands for Southeast Asian Languages in One Network. As Large Language Models go, SEA-LION is actually quite modest in size. But scale was never our primary goal.
Rather, it was the fact that there are over 1,200 languages and dialects in Southeast Asia. Many Singapore-based companies have extensive regional links. With SEA-LION, their AI applications have a much better chance of working well with local languages, colloquial expressions, and references.
The building of SEA-LION is also a great example of how we benefit from trans-national aggregation. Datasets were contributed by regional partners. In turn, SEA-LION has been kept open-source. It has been tapped on by a wide community of AI developers in Indonesia, Thailand, and Vietnam through more than 200,000 downloads.
With this as a foundation, there was good reason to build another model capable of accepting multimodal inputs, such as speech and text. As befits our Lion City, the Agency for Science, Technology and Research, or A*STAR, called it MERaLiON, or the Multimodal Empathetic Reasoning and Learning in One Network.
MERaLiON v2.0, which we are launching today, expands its language coverage from English, Mandarin, and of course Singlish, to include Malay, Vietnamese, Thai, Tamil, and Bahasa Indonesia. This makes MERaLiON relevant to about 450 million people who use these languages primarily on a day-to-day basis. Furthermore, it understands sentences containing a mix of languages, which is very common in multi-cultural societies. What makes MERaLiON empathetic though? I’ve been told it can also handle non-verbal cues such as the speaker’s volume, tone and emotion.
To help MERaLiON make a bigger impact, we will establish the MERaLiON Consortium. A*STAR will partner companies such as DBS Bank, Grab, ST Engineering, NCS, SPH Media, as well as the MOH Office for Healthcare Transformation (MOHT) to harness expertise in the ecosystem, share learnings, and accelerate adoption.
Transforming the Public Sector for AI
Colleagues and friends, ambition and aggregation are helping AI adoption gain momentum in Singapore’s industrial scene. What about our public sector?
The public sector’s AI efforts are equally important. They contribute to building scale in demand, and help crowd in capabilities from around the world that the private sector too can draw on. A Government is also better equipped to promote or regulate activities it has first-hand experience carrying out.
We have three main lines of effort that are producing good returns.
First, we provide broad-based access and skills training.
Today, around 50,000 public officers, or about one-third, use our secure, in-house version of ChatGPT monthly. Tasks such as drafting reports, research, and reviewing papers take less time than before.
Through regular hackathons, officers have the opportunity to showcase their skills and see how their peers are dealing with similar issues.
And an in-house platform guides our officers on how they can build customised AI chatbot assistants. More than 16,000 bots have been built this way.
It has really exceeded my expectations. I did not imagine at the outset, when we were building this platform, that officers would take to it so readily.
All this is to say that our officers are getting comfortable with the use of AI and are nurturing a different kind of problem-solving mindset.
Our second line of effort involves strengthening core AI expertise in technical government agencies.
They often have unique operational needs, requiring customised AI solutions.
And for security reasons, they must have engineering capabilities across the tech stack.
Our third line of effort is to actively transform parts of the public service through AI.
Take for example, Homeland Security, where AI can be a force multiplier in law enforcement and public safety. My Home Team colleagues have identified over 300 AI use-cases, and set aside over $400 million to bring good proposals to fruition. Just two days ago, they committed another $100 million for Embodied AI, to develop AI-enabled humanoids for high-risk scenarios like search and rescue, where officers’ lives are often put at risk.
In healthcare, AI is helping our doctors save time on administration so that they have more time for patient care. And AI is also starting to help them design better treatment plans.
To improve environmental sustainability, our researchers can use AI tools to design more effective cooling solutions in tropical settings – a key challenge for a densely built-up city like Singapore. AI is also used to identify and design new catalysts for things like carbon capture.
These kinds of efforts in the public service are shaping a culture where AI is valued as an accessible technology, even if we are not “techies” or software engineers by training.
Where there is agency to deploy AI in meaningful ways, and where each new achievement raises our ambition to serve citizens better with the help of AI.
Governing AI for the Public Good
With the experimental use of technologies, there is always risk. This is why there must also be assurance. For public officers, there is assurance of support to be properly trained, and collective responsibility for risk management. For the public, there is assurance of the Government’s strong commitment to AI Safety and Governance.
This has been a cornerstone for Singapore, from our earliest efforts to harness AI.
Our position has always been that good governance enables and encourages innovation. We believe it is important to:
Set clear expectations of what “safe and responsible” AI is.
Put in place the necessary guardrails for its development and deployment.
And ensure that our regulators are up-to-date, credible, and proficient.
These help assure businesses and individuals that there are protections against the risks and harms that AI may bring, and in turn build confidence and trust in AI, facilitating its adoption.
The challenge, of course, is that AI is an inherently probabilistic technology that is developing at an incredibly fast clip.
While we have made progress globally, AI Safety Science still has quite some way to go before it can effectively and comprehensively address the risks and harms that AI could bring – whether inadvertently or intentionally.
Singapore has therefore taken a practical and risk-based approach to AI Governance.
We have developed AI Governance frameworks and testing tools, in partnership with industry and academia.
We started doing so for more “traditional” AI through our Model AI Governance Framework in 2019, and the AI Verify testing framework and toolkit in 2022. In fact, AI Verify was launched at ATx in 2022.
We have progressively updated these frameworks and tools for Generative AI.
We have just enhanced our AI Verify Testing Framework to deal with new risks, like leakage of personal or sensitive data, or harmful output such as hallucinations and toxicity.
And I’m pleased to share that we have also completed a mapping of this enhanced framework against the comparable framework published by the US National Institute of Standards and Technology (NIST). This will make it easier for businesses operating in both Singapore and the US to meet their AI safety obligations in both countries.
In addition, we have also taken action where there were clear gaps that needed to be plugged.
For example, we passed a new law to safeguard the integrity of our elections from malicious AI-generated deepfakes.
We continue our efforts to advance the state of the science in AI safety, by investing in R&D at our Digital Trust Centre and the Centre for Advanced Technologies in Online Safety.
Contributing to International Efforts
Every country takes its own approach to AI Governance, in line with its own context, challenges and priorities.
But these differences do not mean that we are at odds, nor that there is no space for mutual learning and cooperation.
On our part, Singapore strives to be a constructive member of the international community. By sharing our own AI experience in groupings such as the Digital FOSS and ASEAN, or through platforms like ATx. And supporting collaborative efforts to develop global norms in AI Governance.
As you heard Chuen Hong say earlier, last month we hosted the second edition of the Singapore Conference on AI, as an International Scientific Exchange on AI Safety. It concluded with a “Singapore Consensus on Global AI Safety Research Priorities”. This will form the basis of the Ministerial Roundtable on Digital Trust later this afternoon, and shape my colleagues’ and my own thinking on the appropriate policy responses.
Our AI Safety Institute will also step up collaboration with France, to advance our understanding of how to manage AI risks.
Closing
Colleagues and friends, our journey to use “AI for the public good, for Singapore and the World” is well on its way.
Although many of you have generously complimented our efforts, I sincerely believe that most of us are really only at the starting line. We are not in a race against each other. We are in a race against abusers of AI and against powerful incentives to take excessive risks with AI.
We must not give up on the possibility of making the most of AI and making it safe. We must learn to join hands like never before.
Thank you once again for being here.