Governance requires trust. When policy makers inform, consult, and involve citizens in decisions, they are more likely to build trust in their efforts. Public participation is particularly important as policy makers seek to govern data-driven technologies such as artificial intelligence (AI). Although many users rely on AI systems, few understand how these systems use their data to make predictions and recommendations that can affect their daily lives. Over time, if users see their data being misused, they may learn to distrust both these systems and how policy makers regulate them. Hence, it seems logical that policy makers would make an extra effort to inform and consult their citizens about how to govern AI systems. This paper examines whether officials informed and consulted their citizens as they developed a key aspect of AI policy: national AI strategies.
The working paper is available on the website of GW's Institute for International Economic Policy.