The U.S. Federal Bureau of Investigation (FBI) has issued a warning about the increasing use of deepfakes by criminals to target victims for extortion.

These sophisticated manipulations, created with generative AI, have raised concerns because the technology can produce highly realistic images and videos that are difficult to distinguish from reality.

In a recent public service announcement, the FBI highlighted the rising number of victims, including minors and non-consenting adults, whose photos and videos have been manipulated into explicit deepfake content.

With over 7,000 reports of online extortion targeting minors last year, the agency has observed a surge in “sextortion scams” employing deepfakes since April.

The Threat of Deepfakes:

Deepfakes, created with generative AI tools such as Midjourney 5.1 and OpenAI’s DALL-E 2, have become increasingly prevalent. These manipulations use artificial intelligence to fabricate images, video, or audio depicting events that never occurred, making it challenging to distinguish real content from fake.

While some deepfakes have gained attention for being harmless or entertaining, such as the viral image of Pope Francis in a white puffer jacket, criminals have exploited the technology for malicious purposes.

For instance, scammers created a deepfake of Tesla and Twitter CEO Elon Musk to deceive crypto investors, altering existing footage of him to fit their fraudulent investment scheme.

FBI Recommendations and Actions:

In response to the growing threat, the FBI has issued recommendations to protect individuals from deepfake extortion scams. First, it strongly advises against paying any ransom, since payment does not guarantee that criminals will withhold the deepfake from publication.

The agency also emphasizes the importance of exercising caution when sharing personal information and content online. Privacy features, such as setting accounts to private, can help reduce exposure to potential threats.

Additionally, the FBI recommends monitoring children’s online activity, staying alert to unusual behavior from people you know, and regularly searching online for your own and your family’s personal information to enhance personal safety and security.

The Federal Trade Commission’s Warning:

The U.S. Federal Trade Commission (FTC) has also voiced concerns about deepfakes and their exploitation by criminals.

The FTC has specifically highlighted cases where deepfake audio has been used to deceive victims into sending money by imitating the voice of a friend or family member, falsely claiming they have been kidnapped.

The FTC’s consumer alert in March emphasized that artificial intelligence is no longer a far-off concept: with just a short audio clip, scammers can clone a loved one’s voice and make the fake recording sound remarkably real.
