CEIMIA/GPAI: The Role of Government as a Provider of Data for Artificial Intelligence

This blog outlines our work exploring the government's role as a provider of data for AI.

Artificial intelligence (AI) holds the potential for great public good, but also, through algorithmic bias or nefarious use, for great societal harm. The stakes are therefore very high, and AI needs to be managed responsibly. While governments cannot ignore the potential of AI, it is incumbent upon them to ensure that government-held data is used for the benefit of all, with a strong emphasis on protecting vulnerable groups and entrenching ethical principles and human rights.

Government data has immense value for the development of AI systems in both public services and private sector innovation. However, the sharing of government-held data with private AI developers raises critical legal and ethical concerns for governments and society, because of the sensitivity of the data processed and the potential harm citizens could face if such data is shared.

To offer guidance to governments around the sharing of data in a way that is grounded in the principles of human rights, inclusion, diversity, innovation, and economic growth, Research ICT Africa (RIA) was commissioned by the International Center of Expertise in Montreal on Artificial Intelligence (CEIMIA) to undertake a project with the Global Partnership on Artificial Intelligence (GPAI). This work resulted in a report titled ‘The Role of Government as a Provider of Data for Artificial Intelligence’. 

The report establishes key principles that should govern when and how governments share data for AI development. These include principles addressing legal concerns such as the creation of public trust, the importance of data collaboration, accountability, transparency, and human oversight, among others. Case studies proved insightful into the issues governments face, and the report examines several from across the globe, including the National Health Service (NHS) in the United Kingdom (UK) and the implementation of a social protection programme delivering cash transfers during the COVID-19 pandemic in Nigeria.

Based on the learnings drawn from the case studies, RIA assessed the legal landscape covering anti-trust laws, cross-border data flows, intellectual property laws, data protection, access to information, and relevant infrastructure. 

The case studies raised several issues, and our recommendations address different themes including:

  • Building public trust in AI
  • The need for data collaboration
  • Algorithmic decision-making and the need for human oversight
  • Tackling digital inequality
  • Data and AI justice
  • Regulatory certainty and efficient redress mechanisms
  • Robust public procurement process for AI development
  • Transparency and accountability, and
  • The role of AI in advancing a development agenda

The principles recommended for government data sharing for AI development within the GPAI report are also guided by the responsible AI principles of public benefit, accountability, data subject participation, data equity and data justice, and more.

Included in our work for the report was the hosting of several foresight workshops with experts from Africa, Latin America, and the GPAI Data Governance Working Group. Participants were taken through future scenario scoping exercises to explore future uses of government data for AI development, as well as the key risks and challenges for the responsible, sustainable, and rights-respecting provision of government data for AI. The outcomes were incorporated into the report to ensure comparative insights and globally inclusive advice for governments.

Download The Role of Government as a Provider of Data for Artificial Intelligence.