Lawmakers demand action from big tech on rising deepfake pornography issue

A bipartisan group of 26 U.S. lawmakers is calling on seven major tech companies to take more aggressive action against the growing problem of deepfake pornography on social media.

In letters sent to Google, Apple, X, ByteDance, Snapchat, Microsoft, and Meta, the lawmakers demanded updates on how the companies plan to address the surge in non-consensual, sexually explicit deepfake content.

According to the lawmakers, deepfake pornography surged by 550% between 2019 and 2023, and such content now accounts for 98% of all deepfake videos online. “Deepfake technology has enabled abusers to create and disseminate realistic, non-consensual pornographic content, causing emotional, psychological, and reputational harm,” wrote Reps. Debbie Dingell (D-Mich.) and August Pfluger (R-Texas), who led the effort. “The spread of this content, often with little recourse for victims, underscores the need for stronger and effective protections.”

The lawmakers pointed out that many of the tech giants have failed to adequately address deepfake abuse. Google, for instance, pledged earlier this year to ban ads for sites producing deepfake pornography and to implement reporting mechanisms, but recent reports suggest the platform still promotes apps that generate such content. 

Apple removed three deepfake creation apps in 2024 only after a report exposed loopholes in its app screening process. Meanwhile, X (formerly Twitter) allows AI-generated nudity as long as it is deemed “consensually produced,” raising concerns over how the platform would enforce such a policy.

ByteDance, owner of TikTok, has made efforts to label deepfakes as fake, but it still allows explicit content to circulate on the platform. Snap Inc., in response to deepfake nude images of a 14-year-old girl on Snapchat, now watermarks AI-generated images but has not outlined specific measures for combating deepfake pornography. Microsoft and Meta were also questioned over their handling of deepfake content, particularly Microsoft’s Designer tool, which was used to create Taylor Swift deepfakes, and Meta’s inconsistent enforcement of its pornography policies.

In their letters, the lawmakers demanded each company provide Congress with detailed plans to combat deepfake pornography. “As Congress works to keep up with shifts in technology, Republicans and Democrats will continue to ensure that online platforms do their part to collaborate with lawmakers and protect users from potential abuse,” the group said.

With deepfakes posing a growing challenge to both lawmakers and tech companies, the pressure is mounting for the industry to take meaningful steps to prevent the exploitation of AI technologies for harmful purposes.
