Disturbing Biden video can stay on Facebook because it ‘doesn’t violate’ Meta’s policies


The Meta Oversight Board has upheld the company’s decision to leave up a manipulated video of US President Joe Biden that claims he is a paedophile.

The Facebook post in question manipulated a 2022 clip of Joe Biden placing an ‘I Voted’ sticker on his granddaughter and kissing her on the cheek. The altered version instead showed the US President appearing to inappropriately touch his granddaughter’s chest, and labelled him a “sick pedophile”.

But according to Meta’s Oversight Board, an independent body established in 2020 to review the company’s content decisions, the post doesn’t violate Meta’s Manipulated Media policy.

Why? According to the Board, the current Manipulated Media policy applies only to videos created using artificial intelligence (AI), and only to content showing people saying things they did not say.

The Board says that, because the video was not altered using AI, and because it shows President Biden doing something he did not do rather than saying something he did not say, it doesn’t violate the existing policy.

According to Meta, a key characteristic of “manipulated media” is that it could mislead the “average” user into believing it is authentic and unaltered. In this case, the looping of one scene in the video is an obvious alteration, so it is unlikely to deceive the average user.

But despite its ruling, the Board says Meta’s Manipulated Media policy needs to be reconsidered because it is “incoherent and confusing to users”. In a statement, the Board said the policy’s criteria are too narrow and should be extended to cover audio, as well as content that shows people doing things they did not do.

“Experts the Board consulted, and public comments, broadly agreed on the fact that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content,” the Board stated. “Therefore, the policy should not treat ‘deep fakes’ differently to content altered in other ways (for example, ‘cheap fakes’).”

Following the Board’s ruling, Meta said it would review the guidance and respond to the Board’s recommendations within 60 days. And in a blog post dated February 6, Meta said it was working with industry partners on common technical standards for identifying AI-generated content, including video and audio.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” the blog post said.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying ‘Imagined with AI’ labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.”

Deepfakes: A growing problem

It’s not the first time deepfakes have caused problems. Just last week, explicit AI-generated images of Taylor Swift went viral on X before being taken down. Now, tech juggernauts with generative AI tools, like Microsoft (whose generative AI tool Designer was reportedly used to create the Swift images), are trying to put stopgaps in place to prevent users from generating these kinds of images. Deepfake porn was also reportedly found at the top of Google and Bing search results.

Having guardrails in place will become particularly important as the US heads for another election this year, along with more than 70 other countries.

Deepfakes also pose a threat on another front: cybersecurity. Gartner predicts that by 2026, cyber attacks using AI-generated deepfakes against face biometrics will lead 30% of enterprises to no longer consider identity verification and authentication solutions reliable in isolation.

Identity verification and authentication processes that use face biometrics today rely on presentation attack detection (PAD) to catch spoofing attempts. According to Gartner’s research, presentation attacks are the most common attack vector, but digital injection attacks increased 200% in 2023. These attacks spoof biometric verification by injecting synthetic imagery – such as deepfakes – into the data stream, in an attempt to impersonate the real user.

“Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” Akif Khan, VP analyst at Gartner, says. “As a result, organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”
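To make the distinction concrete, here is a minimal, hypothetical Python sketch of a face-verification pipeline. All names and checks are illustrative assumptions, not drawn from any real product or standard: the point is that PAD inspects only what a frame shows, so a convincing deepfake injected into the data stream after the camera can still pass it, while catching injection requires also checking where the frame came from.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes
    from_attested_camera: bool  # True only if the frame provably came from the device camera


def passes_pad(frame: Frame) -> bool:
    """Stand-in for presentation attack detection: real systems look for
    liveness cues (blinking, skin texture, depth) in the image content.
    A convincing AI-generated deepfake can satisfy content-only checks."""
    return len(frame.pixels) > 0  # placeholder liveness model


def matches_enrolled_face(frame: Frame) -> bool:
    """Stand-in face matcher; assume the injected deepfake resembles the victim."""
    return True


def verify_pad_only(frame: Frame) -> bool:
    # PAD checks what the frame shows, not where it came from, so a
    # synthetic frame injected after the camera is still accepted.
    return passes_pad(frame) and matches_enrolled_face(frame)


def verify_with_injection_checks(frame: Frame) -> bool:
    # Hardened pipeline: additionally require evidence that the frame was
    # captured by the device camera, rejecting frames pushed into the
    # stream via a virtual camera or a tampered feed.
    return frame.from_attested_camera and verify_pad_only(frame)


injected_deepfake = Frame(pixels=b"synthetic-frame", from_attested_camera=False)
print(verify_pad_only(injected_deepfake))               # True  -> deepfake passes PAD alone
print(verify_with_injection_checks(injected_deepfake))  # False -> rejected on provenance
```

The design point of the sketch is that liveness cues are content-level checks; defending against injection attacks also requires provenance-level checks, such as attesting that frames genuinely originated from the device camera.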



Anastasia Santoreneos, Forbes Staff