
Deepfake Fraud: Security Threats Behind Artificial Faces


The evolution of modern technology has brought many innovations, and one in particular is shaking up the media landscape: deepfakes. Deepfakes are videos, images or audio recordings that have been manipulated by AI technology. In a deepfake, an individual can be presented as saying or doing something that didn’t actually happen.

Deepfake content is highly convincing, and the ongoing development of deepfake tech has made it more difficult to discern between real and fake content. While deepfake tech is still relatively new, we continue to see its role in emerging fraud and cybercrime trends. This has become a growing concern among consumers and organizations alike, as criminals exploit deepfakes to carry out social engineering attacks, spread misinformation and run fraud scams.

Losses from deepfake scams were estimated to exceed $250 million in 2020, and this form of technology is still in its early stages. There’s no doubt that as deepfake technology evolves, so will the sophistication of how criminals exploit it to attack businesses and consumers alike. Read on to learn more.

What is a Deepfake?

A deepfake is a form of media that overlays an existing image or video with AI-generated content that resembles someone’s voice or appearance. Commonly referred to as a form of “synthetic media,” deepfakes mimic people’s faces, movements and voices with such accuracy that they’re often impossible to tell apart from the real thing.

Thanks to highly sophisticated machine learning algorithms, biometrics like facial expressions and the pitch of the human voice can be manipulated to create realistic depictions of events that never took place. While not all deepfakes are used with malicious intent, this form of digital impersonation has often been used to create fake videos and convincingly real audio recordings of people doing or saying malicious things, prompting privacy concerns and fears of deception.

How Does Deepfake Technology Work?

The key component to creating a deepfake is machine learning. Deepfakes rely on AI computer systems called artificial neural networks, which are based loosely on the human brain and designed to recognize patterns in data. This is where the creation of a deepfake begins.

To create a deepfake video, the creator starts by feeding hundreds of hours of real video footage to the artificial neural network in order to “train” the computer to identify detailed patterns and characteristics of a person. This is done to give the algorithm a realistic understanding of what that person looks like from different angles.

The next step involves combining the trained neural network with computer-graphics techniques to overlay the real footage of a person with the AI-synthesized facial and speech patterns derived from the neural network data.

While many believe that creating a deepfake requires sophisticated tools and expert skills, that isn’t always the case: a basic deepfake can be produced with entry-level computer-graphics skills. More convincing deepfakes do require more advanced techniques, but all you really need to get started is access to video or audio footage of someone. Given the staggering amount of media available online today, there is ample source material to feed an algorithm and generate a realistic deepfake.
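The training step described above can be illustrated with a toy example. The sketch below trains a tiny linear autoencoder in plain NumPy to compress and reconstruct random vectors standing in for face images. This is purely an illustration of the "learn the patterns, then reconstruct them" idea; real deepfake systems use deep convolutional networks trained on hours of video frames, not random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "face" data: 200 samples of 16-dimensional vectors.
# Real systems train on video frames, not random vectors.
X = rng.normal(size=(200, 16))

# A linear autoencoder: encode 16 dims down to 4, decode back to 16.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return np.mean((recon - X) ** 2)

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc                 # encode
    R = H @ W_dec                 # decode (reconstruct)
    err = R - X                   # reconstruction error
    # (Scaled) gradients of the reconstruction error w.r.t. each weight matrix
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

After training, the network reconstructs its inputs far better than at the start, which is the same principle that lets a deepfake model reproduce a learned face from new inputs.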

What’s the Risk? 3 Types of Deepfake Fraud

The rapid advancement of deepfake technology has created an opportunity for tech-savvy criminals to enable serious financial harm. From identity theft and the spread of public misinformation to corporate extortion, fraud and automated cyberattacks, deepfake technology has introduced a new breed of media that bad actors are using to their advantage. Below are a few of the ways criminals commit deepfake fraud.

Ghost Fraud

Ghost fraud occurs when a criminal steals the data of a deceased person in order to impersonate them for financial gain. The stolen identity might be used to gain access to online services and accounts or to apply for things like credit cards and loans.

New Account Fraud

Also referred to as application fraud, new account fraud involves using stolen or fake identities for the purpose of opening new bank accounts. Once a criminal has opened an account, they can wreak serious financial damage by maxing out credit cards or taking out loans they have no intention of paying back.

Synthetic Identity Fraud

Synthetic identity fraud is a more complex method of fraud that’s typically more difficult to spot. Rather than exploiting the stolen identity of a single person, criminals mine for information and identities of multiple people to create a “person” who doesn’t actually exist. This manufactured identity is then used for large transactions or new credit applications.

Examples of Deepfake Attacks

Recent innovations have substantially reduced the amount of time and data required to create highly realistic deepfakes. As deepfakes become more accessible, the number of known attacks will likely continue to rise. The examples below offer a look into what deepfake technology is capable of.

Energy Firm CEO Attack

This 2019 scam is the first known deepfake attack, and it illustrates the unfortunate dark side of deepfake technology’s capabilities. In a classic case of CEO fraud, the CEO of an energy firm took a phone call from someone he believed to be his boss, the chief executive of the firm’s parent company. In reality, the voice was an AI-generated deepfake, which is why he promptly cooperated with the urgent request to transfer $243,000 within the hour.

Tech Firm Attack

In this unsuccessful audio deepfake attempt from 2020, a tech firm employee received a strange voicemail from someone who sounded like the firm’s CEO. The message requested “immediate assistance to finalize an urgent business deal,” but the employee acted on his suspicions and flagged it to the firm’s legal department.

While this deepfake attack was ultimately unsuccessful, it’s an important window into the types of attacks we can expect to see more of as technology advances and tools become more widely available.

How to Detect a Deepfake Scam

While the scammer in the tech firm’s deepfake attack sounded similar to the CEO he was impersonating, the slight robotic tone of the deepfake voice is what ultimately triggered the employee’s suspicions. Several similar telltale signs can tip you off to a potential audio or video deepfake:

- A flat, robotic or unnaturally paced voice
- Lip movements that don’t match the audio
- Unnatural blinking or eye movement
- Blurring, flickering or distortion around the edges of the face
- Lighting or skin tone that looks inconsistent from frame to frame
- Urgent, out-of-character requests, especially those involving money
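The flat, robotic tone that gave away the voicemail above can be made concrete with a toy measurement. The sketch below is a crude heuristic, not a production detector (real deepfake detection relies on trained models over spectral features): it compares how much the pitch of two synthetic signals varies over time, using the per-frame zero-crossing count as a rough pitch proxy.

```python
import numpy as np

SR = 8000                     # sample rate in Hz
t = np.arange(SR * 2) / SR    # two seconds of "audio"

# "Robotic" signal: a flat 200 Hz tone with no pitch movement.
robotic = np.sin(2 * np.pi * 200 * t)

# "Natural" signal: pitch wanders between roughly 150 and 250 Hz.
freq = 200 + 50 * np.sin(2 * np.pi * 0.7 * t)
natural = np.sin(2 * np.pi * np.cumsum(freq) / SR)

def pitch_variation(signal, frame=400):
    """Std-dev of per-frame zero-crossing counts (a crude pitch proxy)."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    signs = np.signbit(frames).astype(np.int8)
    zcr = (np.diff(signs, axis=1) != 0).sum(axis=1)
    return zcr.std()

print(pitch_variation(robotic))   # near zero: suspiciously flat
print(pitch_variation(natural))   # clearly larger: pitch moves around
```

A human voice naturally varies in pitch as it speaks; a signal whose pitch barely moves across frames is one small red flag among the cues listed above.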

Organizations can combat the threat of a deepfake attack mainly by educating their workforce on how to recognize potential audio deepfakes. Most often, educating employees on cybersecurity measures is an organization’s first line of defense against cyberattacks. Providing guidance on company protocol and having a system in place for internally verifying suspicious communications is an important place to start when it comes to organizational security.
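One small piece of such an internal verification protocol can even be automated: routing any inbound request that pairs urgency with a payment action to an out-of-band check, such as calling the executive back on a known number. The sketch below is a hypothetical keyword heuristic; the keyword lists and the `needs_verification` helper are illustrative assumptions, not a real product.

```python
# Hypothetical rule for routing suspicious requests to manual,
# out-of-band verification. Keyword lists are illustrative only.
URGENCY_WORDS = {"urgent", "immediately", "within the hour", "asap"}
PAYMENT_WORDS = {"transfer", "wire", "payment", "invoice", "funds"}

def needs_verification(message: str) -> bool:
    """Flag messages that combine urgency cues with a payment request."""
    text = message.lower()
    urgent = any(w in text for w in URGENCY_WORDS)
    payment = any(w in text for w in PAYMENT_WORDS)
    return urgent and payment

print(needs_verification(
    "Please wire $243,000 to this supplier within the hour."))  # True
print(needs_verification(
    "Reminder: team lunch on Friday."))                         # False
```

A rule this simple would never catch every attack, but it shows how a written protocol ("urgent payment requests always get a callback") can be backed by an automated check.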

The Future of Deepfakes

Deepfake technology has seen an incredible rise in just a few short years. While deepfake detection tools are improving, so are the capabilities of deepfakes themselves. As a result, federal legislative efforts have recently focused on further research into understanding deepfake technologies, and more state-level laws are emerging to provide support for victims.

The nation’s first federal law pertaining to deepfakes was passed in 2020. It criminalized the creation and distribution of deepfake videos that aren’t properly labeled, and established a plan for further research and development of deepfake detection tools.

While deepfakes certainly aren’t the first threat to cybersecurity, they represent a growing challenge that will require ongoing research to keep criminals from exploiting them. Organizations and individuals alike will need to seek out new ways to properly secure their data and defend against increasingly sophisticated cyberattacks.
