Research conducted by Channel 4 News, a UK nightly news show, has uncovered an explosion in deepfake pornography. According to the program, more than 4,000 celebrities have had their likenesses used to create pornographic images and videos.

With generative artificial intelligence (AI) tools, users are able to ‘map’ the faces of well-known celebrities onto existing pornographic videos, giving the impression that the celebrity participated willingly in the films.

What is going on?

As with all new technologies, someone, somewhere is always looking for a way to exploit it. Deepfake videos have been used to recreate concerts by music legends or resurrect long-dead movie stars. But the same technology can be used to create and share illegal content online – such as deepfake pornography.

Channel 4’s investigation into the five most popular deepfake websites discovered that more than 4,000 individuals had had their likenesses stolen and reused in fabricated nude images. Of these, 252 were identified as coming from the UK, including female actors, TV stars, musicians and YouTubers.

The program also recounts how in 2016 there was just one deepfake pornography video posted online. In the first three quarters of 2023, 143,733 new deepfake porn videos were uploaded to the 40 most used deepfake pornography sites – more than in all previous years combined.

Are deepfakes legal?

Most experts agree that being a victim of deepfake pornography is deeply distressing, humiliating and dehumanizing. Unsurprisingly, governments across the world are working to better combat deepfakes and protect victims. 

In the UK, sharing deepfake porn without the permission of the person depicted is now illegal under the Online Safety Act. However, no one has yet been arrested or prosecuted for doing so. Notably, it is not illegal to create deepfake imagery – it is sharing that content which is banned.

What are the web giants doing about deepfakes?

Most web content hosts are still struggling to meet their obligations regarding detecting and removing deepfake content, but Google appears to be leading the way. Speaking to Channel 4, a spokesperson said:

“Under our policies, people can have pages that feature this content and include their likeness removed from search. And while this is a technical challenge for search engines, we’re actively developing additional safeguards on Google search – including tools to help people protect themselves at scale, along with ranking improvements to address this content broadly.”

This offers victims some level of protection – but only after the deepfake content has begun to circulate online.

The battle against deepfake content continues to evolve, as does generative artificial intelligence. Legal frameworks like the Online Safety Act do provide some safeguards – and should help to deter some would-be pornographers. But with so many different legal standards across the globe, it will remain difficult for service providers to properly police the content being uploaded and viewed by their users.