Opening Address by Minister Josephine Teo at HLP (AI) on 22 Oct 2025
22 October 2025
My Cabinet colleague, Mr Goh Pei Ming
Fellow Ministers, Excellencies,
Distinguished guests,
Colleagues and friends,
Welcome to Day 2 of the Singapore International Cyber Week. We are glad to see so many developers, security practitioners, and policymakers gathered here today.
We are living through an extraordinary moment in technology. Two developments are reshaping our world right before our eyes.
The first is agentic AI – systems that do not just analyse and recommend, but decide and take action.
They can already help us schedule meetings, write and deploy code, even automate entire business operations.
Implemented properly, agentic AI will likely be a welcome teammate that amplifies human abilities, freeing us from repetitive work and enabling faster responses to complex problems.
But there are also questions of accountability when systems malfunction and humans lose control.
The second is quantum computing.
This technology will fundamentally change how we think about trust, especially in cryptography and secure communications.
While it promises revolutionary capabilities in drug discovery and financial modelling, it could also break current encryption, potentially compromising both national security and business operations.
Both technologies offer tremendous promise. But they also pose serious risks.
More significantly, both demand something new from us: a shift from reactive regulation to proactive preparation, even when their implications cannot be fully predicted.
This shift can be our aspiration, but it will take collective will, wisdom and action to govern these technologies before they govern us.
INTERNATIONAL SCAN
Fortunately, many countries are already seeking answers.
On agentic AI, we wrestle with the same basic question: how to govern AI that can act autonomously?
The EU and South Korea have established comprehensive AI regulations, but agentic AI's autonomous decision-making capabilities create practical challenges in meeting key requirements like transparency and human oversight.
The US National Institute of Standards and Technology (NIST) is developing testing standards for AI agents rather than prescriptive rules.
The UK's AI Security Institute has developed sandboxing toolkits for testing AI agents, though it is not known if “passing” a test guarantees good behaviour as the agents learn and evolve.
In quantum, there is also growing momentum.
The UN has declared 2025 the International Year of Quantum Science and Technology – an extraordinary international consensus on quantum's transformative potential.
The EU launched its Quantum Europe Strategy to turn scientific leadership into industrial strength.
South Korea established a Quantum Strategy Committee backed by significant funding. Japan declared 2025 the first year of quantum industrialisation.
Along with hope, there is fear that quantum capabilities can be misused to break encryption and threaten the foundation of our digital systems.
We want to know how to thrive in a post-quantum future – both in terms of harnessing the opportunities and managing the risks. The question is: how long can we afford to wait for the answers?
OUR GOVERNANCE OBJECTIVES
As policymakers, we should always strive to be clear about our governance objectives when taking actions. Whether for agentic AI or quantum computing, I suggest that there are three objectives at this juncture.
First, our goal must be to build trust with citizens through assurance, not necessarily to control every instance in which AI agents and quantum technologies are deployed.
Good governance begins with understanding risks even when we do not exercise control, and building the tools to manage the risks systematically.
We need practical frameworks for testing, validation, and accountability before systems are deployed at scale, because it may be too late to address the risks by then.
Second, we must ensure that the frameworks and tests are relevant and robust in real-world applications. This calls for the provision of safe spaces for experimentation, with appropriate guardrails.
Third, we want to ensure timely action. In several areas, we know the costs of not having acted early enough – the digital divide, misinformation, disinformation, online harms, and scams, for example. Let us try not to make the same mistakes with agentic AI and quantum.
Singapore will not pretend to have all the answers. But we would like to share how we are thinking about these issues and what we are doing in response.
OUR APPROACH TO AGENTIC AI GOVERNANCE
For a country with limited manpower, agentic AI offers tremendous potential.
We can see agents being used to enhance public service delivery, to anticipate citizens’ needs, and to provide personalised support.
Our SMEs can benefit from more automated operations and resource optimisation.
Our national cybersecurity can be stronger with the use of intelligent agents to detect, defend and respond at machine speed. GovTech is already experimenting.
But every new capability brings new risks. Who is accountable when agentic AI malfunctions? How do we prevent malicious use – automated cyberattacks or misinformation campaigns? How do we manage systemic impacts on jobs or potential loss of human control?
First, we must identify risks systematically. This year, GovTech launched the Agentic Risk and Capability Framework. It defines the components and capabilities of agentic AI systems, maps the risks, and prescribes safeguards. The principle is that we must understand where and how risks arise before we can trust autonomy.
Second, we are making assurance practical and measurable.
Through the IMDA’s AI Verify Framework and AI Assurance Sandbox, we give developers open tools to test their systems for robustness, transparency, and safety.
IMDA has also enhanced AI Verify to cover generative AI's unique risks through Project Moonshot, which combines benchmarking and content red-teaming to test for issues like hallucination and harmful content generation.
We are adapting our tools and security frameworks for agentic AI – building on the CSA’s Guidelines and Companion Guide on Securing AI Systems.
Third, we are learning by doing through real deployments.
Through the GovTech-Google Cloud sandbox initiative, MDDI agencies have a chance to test and evaluate Google’s latest agentic capabilities, assess the risks, develop mitigation measures, and share the lessons learned with the broader community of AI practitioners in Singapore.
By observing how these systems behave – and sometimes fail – we learn what guardrails are truly needed.
Fourth, we are applying risk-based governance consistently.
We take a sector-specific approach to governance, designed to ensure that governance measures are proportionate to the risks.
For example, financial decisions affecting livelihoods receive more scrutiny compared with entertainment recommendations, and medical diagnoses demand higher validation standards than logistics optimisation.
Across our regulated sectors, we follow the principle that the higher the autonomy, the stronger the assurance needed.
Most importantly, humans remain ultimately responsible.
This coordinated approach aims to create a comprehensive governance ecosystem where testing frameworks, security requirements, and practical implementation guidance work together. Over time, we hope to build a governance stack that scales with AI capability and risk, while maintaining human accountability at every level.
OUR APPROACH TO QUANTUM SAFETY
In quantum, we are also taking concrete action.
Last year, we announced the National Quantum Strategy, committing S$300 million over five years to quantum research and development. These investments build on foundations dating back to the early 2000s, giving academia the resources to push scientific boundaries and supporting industry in developing commercial applications.
But we are also managing the risks.
While there is growing awareness of the quantum threat, few organisations have embarked on quantum-safe migration.
This is likely because of uncertainty over quantum developments and the lack of specific guidance.
CSA will plug this gap by launching two resources for public consultation today.
First, the Quantum Readiness Index is a self-assessment tool that helps organisations understand their current preparedness for quantum threats to encryption, and chart their migration journey towards quantum-safe systems.
Second, the Quantum-Safe Handbook provides guidance for organisations, particularly Critical Information Infrastructure owners and government agencies, to ready themselves for the transition to quantum-safe cryptography. This handbook was jointly developed by CSA, GovTech, and IMDA, in collaboration with leading technology companies, cybersecurity consultancies, and professional associations.
We consider these resources to be MVPs – minimum viable products – living documents that will be improved through public feedback. And we welcome you to contribute so we can all learn together.
INTERNATIONAL COOPERATION
Let me now turn to the important topic of international cooperation.
There is a fundamental reality about both technologies that we have discussed today.
Neither agentic AI nor quantum computing respects borders.
A breakthrough in quantum computing anywhere affects encryption everywhere.
A vulnerability in one country's systems can cascade globally.
This means international cooperation must turn from principle to practice.
One way is to ensure interoperable governance frameworks that work across different systems and countries. For example:
Singapore’s crosswalk of AI Verify with the NIST AI Risk Management Framework aims to enable companies to "test once, comply globally".
AI Verify's testing framework aligns with international standards including ISO/IEC 42001 and the G7's Hiroshima AI Process principles.
This reduces compliance burden while maintaining rigorous standards. It is a practical consideration that we must keep in mind. Companies always evaluate the cost and benefit of any action, including testing.
Through Digital Economy Agreements with countries like Australia and the UK, we also embed governance principles into trade relationships. We published the ASEAN Guide on AI Governance and Ethics in 2024 to harmonise Southeast Asian approaches, with a further expansion in 2025 to cover generative AI.
On agentic AI security specifically, we are taking proactive steps to address the challenges internationally.
CSA is releasing for public consultation a document on securing agentic AI.
This document is an addendum to its Guidelines and Companion Guide on Securing AI Systems, to cover the unique risks of agentic AI systems.
It is also an invitation – to governments, researchers, and industry partners – to help shape a global reference for securing agentic AI.
On quantum computing, the new NIST quantum-resistant cryptographic standards give us a common technical foundation.
But standards alone are insufficient.
We need to work regionally and internationally to develop and coordinate migration advice.
This is an area in which my ASEAN colleagues have asked for further discussion, and we will see how best to facilitate it.
Besides inter-governmental cooperation, we are deepening practical partnerships with industry.
CSA will be signing memoranda of cooperation with major technology companies, including Google, AWS, and TRM Labs, to enhance AI-driven intelligence sharing on cyber threats and enable joint operations against malicious activities.
Our partnership with Google demonstrates the tangible benefits – the Enhanced Fraud Protection feature within Google Play Protect has blocked 2.78 million malicious app installations across 622,000 devices in Singapore as of September 2025.
CONCLUSION
Let me conclude.
The age of agentic AI is upon us and the time for quantum-safe preparation is now. They bring much promise but also many unknowns.
We have a collective interest in maximising the upsides while minimising the downsides. By working together with a sense of urgency and purpose, we will learn faster and better our chances of success.
On that note, I thank you once again for being part of SICW and wish you many more fruitful discussions.