Ethical AI as a Competitive Edge
July 2025
By Raadhika Sihin, Head of Public Policy, GFTN
Artificial Intelligence (AI) presents both significant opportunities and risks across industries. Financial services, with its heavy reliance on data-driven decision-making, is at the forefront of this transformation. However, rapid AI advancement raises critical governance questions: most critically, how can leaders enable innovation while ensuring safety, ethical integrity, and public trust?
With jurisdictions taking divergent regulatory approaches, a roundtable discussion held at the Point Zero Forum in May 2025 explored how AI governance models impact innovation, particularly for startups, SMEs, and financial institutions. The session brought together regulators, industry leaders, academics, and international organizations to compare emerging frameworks and explore practical pathways for responsible AI development.
The roundtable focused on:
- Comparing regulatory models (EU, UK, Asia)
- Balancing innovation and risk aversion
- Supervisory capacity challenges
- Industry self-regulation and ethical AI
- Impacts on SMEs and global competitiveness
- Ethical AI as a strategic advantage
- The link to broader sustainability objectives
Key Findings
1. Diverging Regulatory Approaches
The roundtable participants discussed the divergent regulatory approaches to AI across key regions, including the European Union, the United Kingdom, and Asia. These differences include:
European Union (EU AI Act)
- Comprehensive, horizontal regulation applying across all sectors.
- Risk-based classification of AI systems (unacceptable, high, limited, minimal).
- Strong focus on protecting fundamental rights, avoiding discrimination, and ensuring transparency.
- Criticised for being burdensome for SMEs and startups due to high compliance costs.
- May deter early-stage innovation and drive firms to more permissive jurisdictions.
United Kingdom
- Vertical, sector-specific regulation; each regulator governs AI within its existing mandate.
- Focus on outcomes rather than the technology itself.
- Encourages direct dialogue between regulators and firms, emphasising adaptive supervision.
- Cross-sector collaboration through the Digital Regulation Cooperation Forum (DRCF).
Asia (Singapore, Japan, Hong Kong)
- Principles-based, exploratory frameworks.
- Early deployment of sandboxes and controlled experimentation (e.g. Singapore's FEAT principles, Hong Kong's generative AI sandbox).
- Emphasis on understanding practical use cases before establishing rigid regulations.
2. Innovation vs Risk Aversion
Participants noted that the EU's regulatory model reflects deep-rooted societal values (human dignity, equality, safety). However, while strong safeguards can serve the public interest, they may inhibit rapid innovation and experimentation.
There is a risk that excessive caution could result in “precautionary paralysis”, preventing Europe from capitalising fully on AI’s growth.
Several participants argued for more adaptive, principles-based models that evolve with technological progress.
3. Supervisory Capacity and Talent Gaps
Participants noted that regulators struggle to recruit talent with sufficient AI expertise, especially in emerging markets. This could limit the supervisory capacity of regulatory bodies and limit the effectiveness of regulatory measures.
Effective oversight depends on:
- Transparency (e.g. mandatory AI inventories under the EU AI Act).
- Building supervisory skills.
- Establishing cross-sector knowledge-sharing platforms.
4. Industry Self-Regulation and Responsibility
Industry representatives emphasised that market incentives, such as reputation, trust, and procurement requirements, already encourage ethical AI. Self-regulation could therefore emerge as a viable alternative to top-down regulatory measures, particularly given the talent gap that regulatory bodies face. Options include certification schemes (e.g. machine-readable labels for trustworthy AI), which could create competitive advantages. This presents a critical opportunity for regulators and industry to collaborate in shaping practical, risk-based frameworks.
5. Impact on SMEs and Startups
Participants also noted that compliance costs disproportionately impact smaller firms, and that the EU AI Act offers only limited relief for SMEs, such as lower reporting requirements, sandbox access, and capped fines.
There was broad agreement during the session that graduated compliance models are needed, allowing startups to innovate while progressively meeting stricter requirements as they scale. Without such support, innovation risks becoming concentrated in larger firms or foreign markets.
6. Global Competitiveness and Ecosystem Challenges
Participants noted that the European tech industry faces heightened competition from the rest of the world, particularly the United States, which has placed the EU at a disadvantage. Regulatory burdens can further exacerbate these challenges.
US advantages:
- Deep venture capital markets.
- High tolerance for entrepreneurial risk.
- Close collaboration between academia, government, and the private sector.
EU challenges:
- Less venture capital availability.
- Greater risk aversion.
- More state-led, slower-moving innovation ecosystems.
- Lower global share of AI patents and startup activity.
7. Ethical AI as a Competitive Advantage
One opportunity for European tech companies is to differentiate themselves as providers of responsible AI solutions. European leaders are increasingly reframing responsible AI as a strategic differentiator that could set European offerings apart from American competitors, particularly in regulated industries.
Growing demand for ethical AI solutions is emerging from:
- Clients and procurement teams.
- Investors concerned with responsible business practices.
- Societal expectations for transparency and accountability.
Ethical AI is therefore not only a moral imperative but also an emerging commercial advantage in securing contracts and public trust.
8. AI and Sustainable Development
Participants also highlighted AI's role in advancing sustainable finance and climate goals. AI will be increasingly essential for managing complex ESG data sets, with the potential to improve financial inclusion in underserved markets.
Participants warned that overly restrictive regulation could delay AI's contribution to addressing global sustainability and development challenges, while thoughtful, innovation-friendly regulatory frameworks could accelerate its impact.
Conclusion
The roundtable highlighted that Europe’s values-based approach to AI regulation could position it as a global leader in ethical AI governance, especially in highly regulated industries. However, there are real risks that excessive rigidity could constrain innovation, particularly for SMEs and startups, and hinder progress towards climate mitigation and sustainable finance.
A balanced regulatory model must:
- Protect fundamental rights.
- Enable market-driven innovation.
- Support startups through graduated compliance.
- Build supervisory capacity.
- Encourage global regulatory cooperation.