Google said Wednesday it will require political ads on its platforms to disclose when images and audio have been altered or created using tools such as artificial intelligence (AI). Google’s advertising policy change will go into effect in November, about a year before what is likely to be a contentious US presidential election and as fears mount that generative AI will be used to mislead voters.
“For years we have provided additional levels of transparency for election ads,” a Google spokesperson said in response to a question from AFP. “Given the increasing number of tools that produce synthetic content, we’re taking our policy one step further and requiring advertisers to disclose when their election ads contain material that has been digitally modified or generated.”
In June, a Ron DeSantis campaign video attacking former US President Donald Trump contained images with markers suggesting they were created using AI, an AFP Fact Check team found.
The video, shared in a post on X, formerly known as Twitter, contained photos that appeared altered to show Trump hugging Anthony Fauci, a key member of the US coronavirus task force, and kissing him on the cheek, according to AFP Fact Check. Google’s advertising policies prohibit manipulating digital media to deceive or mislead people about politics, social issues, or matters of public interest.
Demonstrably false claims that could undermine participation or confidence in the election process are also prohibited under the internet giant’s advertising policies. Google requires political ads to disclose who paid for them and makes information about the ads available in an online ad library.
The upcoming update will require election-related ads to “prominently disclose” whether they contain “synthetic content” that depicts real or realistic-looking people or events, Google said. The tech titan said it continues to invest in technology to detect and remove such content.
According to Google, disclosures of digitally altered content in election ads must be “clear and conspicuous” and placed where they are likely to be noticed. Examples of content that warrants a label include synthetic images or audio showing someone saying or doing something they did not do, or depicting an event that did not happen.
Google suggested labels such as “This image does not depict real events” or “This video content is synthetically generated.”