July 2025
By Raadhika Sihin, Head of Public Policy, GFTN
Artificial Intelligence (AI) presents both significant opportunities and risks across industries. Financial services, with its heavy reliance on data-driven decision-making, is at the forefront of this transformation. However, rapid AI advancement raises pressing governance questions, chief among them: how can leaders enable innovation while ensuring safety, ethical integrity, and public trust?
With jurisdictions taking divergent regulatory approaches, a roundtable discussion held at the Point Zero Forum in May 2025 explored how AI governance models impact innovation, particularly for startups, SMEs, and financial institutions. The session brought together regulators, industry leaders, academics, and international organizations to compare emerging frameworks and explore practical pathways for responsible AI development.
The roundtable focused on the divergent regulatory approaches to AI across key regions, including the European Union (under the EU AI Act), the United Kingdom, and Asia (Singapore, Japan, and Hong Kong).
Participants noted that the EU’s regulatory model reflects deep-rooted societal values such as human dignity, equality, and safety. However, while strong safeguards can serve the public interest, they may also inhibit rapid innovation and experimentation.
There is a risk that excessive caution could result in “precautionary paralysis”, preventing Europe from capitalizing fully on AI’s growth.
Several participants argued for more adaptive, principles-based models that evolve with technological progress.
Participants noted that regulators struggle to recruit talent with sufficient AI expertise, especially in emerging markets. This could constrain the supervisory capacity of regulatory bodies and reduce the effectiveness of regulatory measures.
Industry representatives emphasised that market incentives, such as reputation, trust, and procurement requirements, already encourage ethical AI. As such, self-regulation could emerge as a viable alternative to top-down regulatory measures, particularly given the talent gap that regulatory bodies face. Such efforts could include certification schemes (e.g. machine-readable labels for trustworthy AI) that create competitive advantages. This presents a critical opportunity for greater collaboration between regulators and industry to shape practical, risk-based frameworks.
Participants also noted that compliance costs disproportionately impact smaller firms, and that the EU AI Act offers limited relief for SMEs. Such relief could include lighter reporting requirements, sandbox access, and capped fines.
There was broad agreement during the session that graduated compliance models are needed, allowing startups to innovate while progressively meeting stricter requirements as they scale. Without such support, there is a risk that innovation will concentrate in larger firms or shift to foreign markets.
Participants noted that the European tech industry already faces heightened competition from the rest of the world, particularly the U.S. market, which has placed the EU at a disadvantage. Regulatory burdens can further exacerbate this.
One opportunity for European tech companies is to position themselves as providers of responsible AI solutions. European leaders are increasingly reframing responsible AI as a strategic differentiator that could set European offerings apart from American competitors, particularly in regulated industries.
Ethical AI is hence not only a moral issue but also an emerging commercial advantage in securing contracts and public trust.
Participants also highlighted AI’s role in advancing sustainable finance and climate goals. AI will be increasingly essential for managing complex ESG data sets and has the potential to improve financial inclusion in underserved markets.
As such, participants warned that overly restrictive regulation could delay AI’s contribution to addressing global sustainability and development challenges – but thoughtful, innovation-friendly regulatory frameworks could help accelerate its impact.
The roundtable highlighted that Europe’s values-based approach to AI regulation could position it as a global leader in ethical AI governance, especially in highly regulated industries. However, there are real risks that excessive rigidity could constrain innovation, particularly for SMEs and startups, and hinder progress towards climate mitigation and sustainable finance.
A balanced regulatory model must: