Trust in AI: Will Consumer Protection and Data Privacy Regulations Hinder or Enable AI Adoption in the Global South?


July 2025

By Raadhika Sihin, Head of Public Policy, GFTN

 

As artificial intelligence (AI) becomes increasingly central to digital economies, the Global South faces a critical policy question: Will consumer protection and data privacy regulations support or stall AI adoption? 

This question framed a recent roundtable that brought together regulators, central banks, industry experts, and financial service providers to explore how emerging markets can create AI governance frameworks that foster trust, innovation, and inclusion. The session, held at the Point Zero Forum in Zurich, Switzerland, in May 2025, emphasised that building trust in AI isn’t just about preventing harm: it’s also about enabling opportunity. 

 

The Regulatory Tightrope 

With AI advancing rapidly, countries around the world are racing to set rules that govern its development and deployment. The European Union’s General Data Protection Regulation (GDPR) and AI Act are increasingly serving as global reference points. This is especially true of the GDPR, which has influenced data privacy frameworks in large markets such as Brazil, India, and Nigeria, among others. 

However, many experts caution against simply adopting such frameworks in the Global South without adaptation, particularly as the EU itself has begun reviewing the terms of the AI Act in response to industry backlash. Emerging markets often face vastly different conditions: weaker regulatory capacity, infrastructure gaps, lower digital literacy, and a higher proportion of informal businesses. Stringent regulatory imports risk overwhelming institutions and excluding key actors—particularly small and medium enterprises (SMEs)—from the potential benefits of leveraging AI. As one participant cautioned, if the Global South fails to harness AI in a timely manner, the global digital divide risks widening further into an AI divide. 

Rather than copying established models wholesale, participants at the roundtable emphasised the importance of locally grounded governance, tailored to the region’s development priorities and challenges. 

 

Navigating Global Norms and Local Realities 

A core theme of the discussion was the need to reconcile global data governance norms with local realities. While global frameworks offer useful principles, applying them in emerging economies can often result in disproportionate compliance burdens. 

The roundtable called for context-specific policy frameworks that reflect each country’s digital maturity, economic structure, and technological capacity. These policies should be pragmatic and inclusive, supporting innovation while embedding safeguards around transparency, accountability, and user rights. 

A key concern raised by participants was the concentration of AI innovation in a few key jurisdictions – such as the US, EU, and China – with Global South countries as receivers, rather than shapers, of AI services. Participants highlighted that there is little incentive for AI players to tailor the technology to the needs, concerns, or realities of the Global South. One potential way forward is to deploy AI monitoring tools that can test AI-enabled services for compliance with local regulatory requirements and needs. 

Another concern was the growing global fragmentation in data policy, which can serve as a barrier for collaboration and digital trade across borders. Participants stressed the need for regional data-sharing agreements that preserve national data sovereignty while enabling AI-driven trade, cross-border digital services, and collective problem-solving. Such agreements could help smaller countries pool data resources while negotiating standards and ensuring interoperability. 

 

The Value of Risk-Based Approaches 

A central idea discussed was the value of a risk-based approach (RBA) to AI regulation. Instead of blanket rules, RBA tailors regulatory requirements to the potential impact or harm of specific AI use cases and accounts for emerging applications and the rapid evolution of the technology. 

For instance, an AI model that manages customer service chatbots should not be regulated the same way as one that determines creditworthiness or medical outcomes. Under an RBA framework, high-risk applications face stricter oversight, while low-risk uses are allowed more flexibility. 

Participants shared examples from sectors where RBA is already in use, such as financial services. These examples showed that differentiated oversight can be both effective and innovation-friendly, provided regulators are equipped with the tools and expertise to evaluate risk. Participants also noted that many of the regulatory challenges associated with AI may already be covered by existing regulations, such as data protection and cybersecurity frameworks. Regulators can then identify current gaps and consider amending existing frameworks to manage those risks, before introducing AI-specific regulations to tackle new risks. 

However, implementation challenges remain. Determining risk categories and thresholds requires strong institutional capacity, which many countries in the Global South are still building. International cooperation, including regulatory sandboxes, technical assistance, and peer learning, was cited as key to overcoming these gaps and enabling innovation in a safe manner. 

 

SMEs at the Centre of the Debate 

The conversation turned to SMEs, which are widely recognised as key engines of employment and innovation in the Global South. Despite their importance, they are often left out of AI policy discussions and face disproportionate challenges in complying with data and privacy regulations. 

Participants advocated for proportional regulation to help SMEs innovate responsibly without facing the same compliance burdens as large corporations. This could include: 

  • Tiered obligations based on business size and use case risk, 
  • Simplified regulatory reporting tools, 
  • Digital literacy and capacity-building programs, and 
  • Targeted support from public institutions and private partners. 

There was consensus that regulatory frameworks should empower SMEs and ensure they are beneficiaries—not casualties—of the AI revolution. Building inclusive AI ecosystems means designing policies that work for the realities of small businesses and everyday users, not just major firms or governments. 

 

Empowering Citizens and Building Public Trust 

Underlying the entire discussion was a shared recognition that trust is foundational to AI adoption. Without public trust, even the most well-designed systems can falter. Participants noted that in many countries, people remain sceptical about how their data is used, how algorithms affect their lives, and whether they have recourse when things go wrong. 

To address this, policymakers must ensure that AI and data policies protect and empower citizens, especially vulnerable or underrepresented groups. This includes embedding rights-based principles, ensuring data transparency, and providing accessible mechanisms for redress. Indeed, awareness and education are crucial. 

Importantly, the group highlighted that AI governance must go beyond risk mitigation. It should also aim to maximise benefits, from improving access to finance and healthcare, to supporting digital entrepreneurship and public service delivery. For instance, this could mean leveraging AI to provide more tailored products that meet user needs, such as through widening the provision of financial services in local languages. 

 

The Way Forward: Smarter, Inclusive Regulation 

As the roundtable drew to a close, participants reflected on what a “minimum viable regulation” might look like for the Global South. The goal: rules that are light enough to encourage innovation but strong enough to prevent harm. 

This does not mean deregulation. Rather, it means creating fit-for-purpose frameworks that are scalable, adaptable, and informed by both global learnings and local needs. Participants emphasised the role of: 

  • Public-private partnerships (PPPs) to co-create regulatory frameworks; 
  • Development finance institutions (DFIs) and international bodies in providing technical and financial support; 
  • Cloud infrastructure and open standards to reduce entry barriers and promote security; and 
  • Multi-stakeholder collaboration to ensure voices from civil society, startups, and academia are included. 

 

Building Trust as a Strategic Asset 

The Global South stands at a pivotal moment. With AI offering immense potential for growth and inclusion, the challenge is to ensure it is developed and deployed responsibly. 

Trust is not a byproduct: it’s a strategic asset. Trustworthy AI ecosystems will attract investment, unlock innovation, and build resilience. Achieving this requires regulatory frameworks that are grounded in local context, responsive to risk, and inclusive of all stakeholders. 

As these conversations continue at the Global SME Finance Forum (GSMEFF) 2025 in South Africa, the hope is to shift from theoretical debates to practical, scalable solutions that empower people, protect rights, and position the Global South as a leader in shaping human-centred AI. 