The world has been waiting for the United States to get its act together on regulating artificial intelligence—particularly since it’s home to many of the powerful companies pushing at the boundaries of what’s acceptable. Today, U.S. president Joe Biden issued an executive order on AI that many experts say is a significant step forward.
“I think the White House has done a really good, really comprehensive job,” says Lee Tiedrich, who studies AI policy as a distinguished faculty fellow at Duke University’s Initiative for Science & Society. She says it’s a “creative” package of initiatives that works within the reach of the government’s executive branch, acknowledging that it can neither enact legislation (that’s Congress’s job) nor directly set rules (that’s what the federal agencies do). Says Tiedrich: “They used an interesting combination of techniques to put something together that I’m personally optimistic will move the dial in the right direction.”
This U.S. action builds on earlier moves by the White House: a “Blueprint for an AI Bill of Rights” that laid out nonbinding principles for AI regulation in October 2022, and voluntary commitments on managing AI risks from 15 leading AI companies in July and September.
Susan Ariel Aaronson, a professor of international affairs at George Washington University who works on data and AI governance, calls the order “a great start.” However, she worries that the order doesn’t go far enough in setting governance rules for the data sets that AI companies use to train their systems. She’s also looking for a more defined approach to governing AI, saying that the current situation is “a patchwork of principles, rules, and standards that are not well understood or sourced.” She hopes that the government will “continue its efforts to find common ground on these many initiatives as we await congressional action.”