Opening Keynote by SMS Tan Kiat How at ATxSummit Day 2
29 May 2025
Opening Keynote by Mr Tan Kiat How, Senior Minister of State for Digital Development and Information, at ATxSummit Day 2 (29 May 2025)
Your Excellencies, distinguished guests, ladies and gentlemen,
I am happy to be back at Asia Tech x Singapore, or ATxSG – where some of the best minds in technology, business, and policy come together to shape the future of our digital economies.
The rapid developments in AI – especially in generative AI, or GenAI – have opened up immense potential, but also surfaced urgent, important questions about how it will impact jobs, reshape economies, and influence our societies.
Around the world, people are asking what AI means for their livelihoods and their future.
Governments and businesses are racing to unlock AI’s potential, while also managing risks and guiding its impact in a direction that benefits all.
Singapore is taking deliberate steps – not just to keep pace, but to shape how AI is used to strengthen our economy, build resilience, and uplift our people.
What we choose to do matters, because it will shape our long-term competitiveness and the kind of digital future we pass on to our next generation.
As a small, connected nation, we also want to contribute to a global AI ecosystem in support of open digital economies – one that is trusted, inclusive, and interoperable, building on our strong digital foundation and culture of trust.
We are therefore building a home where AI can thrive, not just for ourselves but as a trusted node in a wider international community.
To realise this, we need to test what trust looks like in practice, so that it can be scaled with greater confidence.
That is why the AI Verify Foundation and IMDA launched the Global AI Assurance Pilot earlier this year.
The initiative pairs businesses bringing real-world GenAI use cases with specialist AI testers, so that they can move beyond theoretical frameworks and see how their GenAI applications perform under practical conditions.
In just a few months, more than 30 companies across 8 jurisdictions and 10 sectors, including HR, healthcare and finance, have participated in this initiative.
This shows that businesses want to get GenAI adoption right, because a lack of trust is a business risk – when stakeholders like customers hesitate, competitiveness suffers.
With the Global AI Assurance Pilot now concluded, we have put together a report that distils practical lessons from these collaborations, each one helping us understand what businesses really need to build and use GenAI with confidence.
One, risks are context specific.
Your risks depend on your use case.
The most effective testing starts with a clear understanding of what is relevant to your use case, as well as what is not.
Two, useful test data rarely comes ready-made.
Most businesses do not have the perfect dataset sitting on the shelf.
Generating realistic and edge-case test scenarios requires thoughtful human effort, with support from machines.
Three, go beyond outputs.
Sometimes, the issue lies deeper in the pipeline.
Testing what happens inside the system can offer more useful insights – and greater assurance.
And four, Large Language Models, or LLMs, can help with evaluation, but only with care.
They can be fast and scalable, but still require thoughtful design, calibration, and human oversight.
In some cases, simpler methods work just as well.
These are not just technical observations.
They reflect where collaboration meets practice, and where insights translate into action.
For those keen to draw on these lessons for your own purposes – whether you are applying GenAI in a business setting, building testing tools, or shaping policies – the report is available from today as a resource.
It captures the insights in greater detail and offers practical takeaways for those looking to move from principles to implementation.
Building on the Global AI Assurance Pilot, we are making it easier for businesses to take action – on their own terms and at their own pace – without having to start from scratch.
This will help us nurture an AI ecosystem where businesses are supported and empowered, and become part of a broader culture of trust.
As a next step, we have developed the Testing Starter Kit for GenAI Applications, which will be released to the public for comments today.
The Starter Kit is a set of voluntary guidelines that, in essence, lowers the barriers for businesses that want to adopt GenAI responsibly but may not know where to start.
It draws on insights from the Global AI Assurance Pilot, tapping on the experience of practitioners to ensure the guidance is practical and useful.
It aims to do two things:
First, it pulls together emerging best practices and methodologies for testing GenAI applications, so that businesses can understand what good testing looks like.
Second, it offers practical guidance on how to go about it – when to test, what to test, and how to test.
To ensure that the guidance is actionable, the Starter Kit will be complemented by testing tools that businesses and developers can use to run assessments on their own.
As a start, seven baseline tests from the Starter Kit have been made available on Project Moonshot, enabling businesses to easily integrate responsible AI practices into their operations.
We will progressively make more tests available through Project Moonshot.
The Starter Kit is also designed to evolve as technologies shift, new risks emerge, and use cases grow more complex.
Whether you are a startup piloting a chatbot or a large enterprise deploying AI applications at scale, the aim is to make responsible innovation more accessible and achievable.
This creates a feedback loop between practice, tools, and policy that keeps governance agile, grounded, and innovation friendly.
More importantly, it allows businesses to take the lead in building trusted GenAI – backed by shared standards, open frameworks, and a community committed to safe and responsible innovation.
Together, the Global AI Assurance Pilot helped us learn, the Starter Kit enables more businesses to apply those lessons, and Project Moonshot provides the means to scale.
But enablers alone do not make a thriving AI ecosystem.
To unlock the benefits of AI in full, we need to empower people too – from everyday users who interact with AI, to employees who work alongside it, and leaders who can make decisions about how it is used.
Just as we invest in the technical side, we must also invest in human capacity.
This means creating space for people to learn, adapt, and engage – so that AI is not something done to them, but something they can shape, understand and benefit from.
I am pleased to share that AI Singapore, or AISG, and the United Nations Development Programme, or UNDP, will be signing a Memorandum of Understanding to close the AI literacy divide and transform communities in developing countries.
This will extend AISG’s successful AI for Good (AI4Good) programme – launched in 2024 to bolster national AI capabilities – from Asia to an international scale, in support of United Nations Sustainable Development Goal 4.
The aim is to enable individuals, organisations, and businesses to better understand the opportunities, risks, and ethical dimensions of AI.
We will co-develop AI teaching resources with educators, and reach underrepresented groups through targeted outreach, so that no one is left behind as we advance together as a global community.
AISG and UNDP will explore initial AI4Good pilots in Southeast Asia, the Caribbean, and the Pacific Islands, so that we can support more inclusive participation in AI-driven growth together.
This MOU represents a shared commitment to make AI work for everyone – not just where it can be developed and advanced, but where it is needed most.
In many contexts, the promise of AI is harder to realise – shaped by differences in access, infrastructure, or readiness.
Through this partnership, we seek to close these gaps and open up opportunities for more to participate confidently in digital economies, starting here in Singapore, and extending to our region and beyond.
This effort reflects the same ethos that has guided our broader efforts to nurture an AI ecosystem that delivers impact where it matters most.
As we look ahead, we must remember that meaningful progress in AI is not just about scale or speed, but how well we align it with the needs of our people, our businesses, and our communities.
Singapore will continue to partner across sectors and borders, because we believe that a trusted, inclusive, and useful future for AI must be built together.
We invite partners who share this vision to work with us so that, together, we can build this home for AI to become a force for good.
Thank you very much.