Building Trust in the Era of AI: Fostering Public-Private Collaboration To Scale AI Responsibly
March 2025
This whitepaper presents the issues facing the financial sector in adopting AI, under the theme “Building trust in the era of AI: Fostering public-private collaboration to scale AI responsibly”. Drawing on the roundtable discussion at the Insights Forum, the report synthesises the key discussion points on the growing role of AI in financial markets and the importance of collaboration between stakeholders to ensure trust and safety whilst facilitating innovation.
Role of AI in the Financial Sector and Regulation
Although the adoption of AI in financial markets offers tremendous potential to improve efficiency, accuracy, and customer service, financial institutions must also consider its safety and ethical implications. Regulators should strike a balance: risks must be managed, yet innovation should not be unnecessarily stifled.
Risks related to AI-adoption in the Financial Sector
As AI rapidly evolves, becoming more capable yet less transparent, security and reliability concerns arise. Security concerns include the potential for AI systems to be manipulated or hacked, leading to unauthorised access to sensitive information. Reliability concerns involve inconsistent performance, where errors or unexpected behaviour may lead to critical failures or costly decisions. Because the financial sector faces both micro-level (institution-level) and macro-level (systemic) risks, safeguards need to be put in place to protect the integrity and safety of financial markets.
AI Skills Gap
Roundtable participants flagged a critical AI skills gap in the financial sector, which hinders institutions’ ability to implement AI solutions effectively and responsibly. Recruiting and training specialised AI talent is a challenge, as few professionals are able to grasp both the complexity of AI and its practical applications.
This knowledge gap also fuels distrust in AI, as stakeholders may question AI-driven outcomes and the regulatory assurances behind them. Bridging the gap through education, training, and a collaborative approach is necessary to ensure proper AI implementation and governance. The skills gap affects not only the private sector but also public sector entities, including regulators and policymakers.
Role of Policy Makers and Research Institutions
When adopting AI tools in finance, organisations should commit to a clear set of principles covering safety, privacy, and transparency. Governments and public research institutions play an essential role in providing resources, research, and guidance on responsible AI usage, and can facilitate continual improvements in AI safety. Policymakers should develop regulatory frameworks that are robust yet practical. Cooperation between public and private sector entities is important, as these stakeholders can learn from each other.
FutureMatters is a platform for thought leaders, practitioners, and industry players to share their insights on emerging opportunities and challenges in today's world.