A Missed Opportunity to Further Build Trust in AI: A Landscape Analysis of OECD.AI

September 1, 2022


My neighbors are probably a lot like yours; they are increasingly dependent on services built on artificial intelligence (AI). For example, they rely on digital assistants to check their schedules and use AI to help them avoid traffic jams. When they get home, they turn to Netflix's algorithms to find their next must-watch TV show. My neighbors recognize that firms and governments use AI to make decisions for and about them, but they don't understand how AI might affect their future.

My neighbors tend to distrust AI because they also don't understand the processes and technologies that underpin it (Hoff and Bashir, 2006; Rainie et al., 2022). But they expect government officials to design public policies that allow society to reap the benefits and minimize the costs of AI deployment. They also want to know whether programs designed to do so are effective.

My neighbors are not alone: the world needs a better understanding of how policymakers can effectively encourage AI innovation and adoption while mitigating potential AI risks (Litman et al., 2021). Some governments, such as the US, are starting to develop guidelines for regulating various AI sectors, while others, such as the EU and Canada, are debating regulation of risky types of AI. Meanwhile, various think tanks and scholars have published reports or assessments of government programs or overall efforts. For example, the Center for Security and Emerging Technology (CSET) examined comparative advantage in AI. The authors compared AI capabilities (the state of AI research, large data pools, and semiconductor capacity) and enablers (such as workforce development and research funding) in China and the US (Imbrie et al., 2020). CSET has also examined responsible and ethical military AI, comparing government actions and policies (Stanley-Lockman, 2021). The Center for Data Innovation has issued a report card for US AI policies (Omaar, 2022a).
