As the 2024 U.S. presidential election approaches, Meta Platforms has announced a new policy requiring political advertisers to disclose when their ads contain digitally altered media. The policy, which takes effect on January 1, responds to the rise of generative AI tools like ChatGPT that can easily create synthetic images, video, and audio.
Under the new rules, political ads on Meta’s platforms — Facebook, Instagram, and WhatsApp — must note when they include media that has been “materially altered” by AI or other software. Examples given by Meta include depicting a real person saying or doing something they did not actually say or do, or fabricating an event or person using generative technology.
Meta said immaterial alterations, such as color or lighting changes, do not require disclosure. Advertisers who repeatedly violate the policy, however, could face penalties, including rejection of their ads. The requirements apply to ads about candidates and elections, as well as those covering legislative issues and social causes.
The policy comes a year after ChatGPT’s launch, which ushered in a new era of advanced generative AI. With simple text prompts, systems like ChatGPT can generate human-like content such as essays, songs, and visual art. Experts worry this technology could enable the easy production of misinformation and make it harder to discern truth online.
Meta itself has built generative AI products, including AI chatbots and Instagram creative tools announced in September. But the company said advertisers cannot use Meta’s own tools to make political ads.
The new rules will get their first test during the upcoming U.S. primary season. With generative AI entering the mainstream, experts say it could significantly affect digital political advertising and campaigns in the 2024 election and beyond. Meta’s disclosure requirements are an attempt to increase transparency around synthetic media as the technology continues to advance rapidly.
Featured Image Credit: Photo by Markus Spiske; Pexels