Meta’s new AI labels won’t solve the ‘Taylor Swift problem’

February 7, 2024

The Washington Post

In a world where it's getting harder to tell the difference between the work of humans and that of machines, experts are welcoming Meta's new plan to label AI-generated content on its platforms. But it's worth being clear about which AI problems the labels could help solve, and which they won't.

The world's largest social media company announced on Tuesday it will begin putting labels on realistic-seeming images that users post on Facebook, Instagram and Threads when it can tell that they were generated with AI. The goal is to make sure users don't get fooled into thinking an AI fake — say, the pope in a puffer coat — is the genuine article.

The move aligns with the Biden administration’s executive order on AI last fall, which urged "watermarking" — invisible signals built into images that identify them as AI-generated — as a policy priority. Meta already puts both watermarks and visible "imagined by AI" labels on images created with its own AI tools. But now it will work with other companies on industry-standard signals that they can all use to recognize AI images wherever they crop up. Meta said it will also ask users to label AI-generated images they upload, though how it will enforce that was not immediately clear.
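For readers curious what one of those industry-standard signals might look like in practice, the sketch below is a hypothetical illustration, not Meta's actual detection pipeline. It assumes an image carries XMP metadata declaring an AI origin using IPTC's published "trainedAlgorithmicMedia" digital-source-type term; the Pillow library is one common way to read that metadata, and the filename shown is a placeholder. Invisible watermarks are a separate mechanism that requires dedicated decoders and is not handled here.

```python
# Illustrative sketch only: checks whether an image's XMP metadata declares
# an AI-generated source type. This is NOT Meta's detection system, and it
# does not decode invisible watermarks.
from PIL import Image  # third-party: pip install Pillow

# IPTC's published vocabulary term for media created entirely by AI.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def declares_ai_origin(path: str) -> bool:
    """Return True if the file's XMP metadata mentions the IPTC AI source type."""
    with Image.open(path) as img:
        # getxmp() is available for common formats in recent Pillow versions;
        # it returns an empty dict when no XMP metadata is present.
        xmp = img.getxmp() if hasattr(img, "getxmp") else {}
    # Crude string scan of the parsed metadata; a fuller checker would walk
    # the structure for the digital-source-type property itself.
    return AI_SOURCE_TYPE in str(xmp)

if __name__ == "__main__":
    print(declares_ai_origin("example.jpg"))  # "example.jpg" is a placeholder
```

The obvious limitation, and part of why labels won't solve every problem, is that metadata like this can be stripped or never attached in the first place, which is why the industry effort pairs it with watermarks embedded in the pixels themselves.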

...

Ahead of a big election year, both in the U.S. and globally, labeling huge swaths of AI-generated images created with mainstream tools will, if nothing else, "put more friction into the system" by which AI fakes are generated, said David Broniatowski, an engineering professor at George Washington University. "It’s nice to see that they're taking the problem of false content at scale seriously." 

Read the full article in The Washington Post.