Summary of the Asia Tech x Singapore (ATxSG) Government-to-Government Roundtable on Digital Trust
28 May 2025
Sentosa, Singapore: Ministers and senior government officials from 18 countries and international organisations met today at the ATxSummit for a closed-door Government-to-Government Roundtable discussion on Digital Trust (“Roundtable”). Minister for Digital Development and Information, Mrs Josephine Teo, who chaired the Roundtable, released the following Chair’s Summary:
At the Government-to-Government Roundtable on 28 May, we discussed the topic of advancing the state of the science of AI safety for a trusted AI ecosystem. To build a trustworthy, reliable and secure AI ecosystem, we agreed that there is a pressing need to understand AI as a technology, the associated risks that should be prioritised, and the potential measures to address these risks. We had a constructive exchange of views on how governments and policymakers can work with industry, the research community, the wider public and other international partners to address AI risks and advance the field of AI safety.
Participants agreed that the Roundtable was valuable in shaping and clarifying the priorities that each country faces in AI adoption and safety. There was a strong desire among participants to drive AI for the public good and to enable growth through the adoption of AI. They also agreed that building a trusted AI ecosystem is essential for helping citizens embrace AI with confidence, while providing optimal room for business innovation. Participants acknowledged that despite ongoing efforts, significant challenges remain in fully understanding AI risks, given the technology’s rapid advancement. Cybersecurity, particularly for critical systems, has also emerged as a concern with the advancement of AI.
Participants appreciated the sharing by the technical experts, who outlined three priority domains for AI safety research: (1) creating trustworthy AI systems (development), (2) evaluating AI systems’ risks (assessment), and (3) monitoring and intervening after deployment (control). Participants also recognised the significance of platforms such as the 2025 Singapore Conference on AI (SCAI): International Scientific Exchange (ISE) on AI Safety as a vital channel for global dialogue and advancement in AI safety research. The outcome document from SCAI: ISE, the Singapore Consensus on Global AI Safety Research Priorities, was acknowledged as an important document to drive future global conversations among governments, policymakers, industry leaders and researchers. A range of issues was discussed, including practical elements that can advance the next steps in AI safety, such as impact assessments, incident reporting, capacity building and digital literacy.
Participants stressed the need for joint research initiatives between industry and academia, supported by government funding and incentives for safety-focused development. Such a coordinated approach leverages all stakeholders’ strengths to maintain a safe, accountable and trusted ecosystem.
Education emerged as a crucial component, with particular focus on equipping citizens with the knowledge and skills to use AI responsibly and manage potential harms such as deepfakes.
The discussion emphasised a strong commitment to international collaboration, recognising the borderless nature of AI and online safety risks. Participants also highlighted the importance of a coalition of the willing: in particular, alignment and capacity building are essential for small states with fewer resources for AI adoption and safety, in order to close the inequality gap and ensure that AI development remains inclusive. This commitment underscores the shared responsibility in advancing AI safety research and building a trusted global AI ecosystem.