New FBI Warning About AI-Powered Scams Coming After Your Money

The FBI is warning that criminals are increasingly using generative AI technologies, specifically deepfakes, to exploit unsuspecting people. The alert serves as a reminder of how sophisticated and accessible these technologies have become, and of the urgent need to stay vigilant to protect ourselves from potential scams. Let’s explore what deepfakes are, how criminals use them, and what steps you can take to protect your private information.

FBI building in DC (Kurt “CyberGuy” Knutsson)

The rise of deepfake technology

Deepfakes are AI-generated content that can convincingly imitate real people, including their voices, images, and videos. Criminals use these techniques to impersonate people, often in crisis situations. For example, they can generate audio clips that sound as if a loved one is requesting urgent financial assistance, or even stage real-time video calls that appear to involve company executives or law enforcement officials.

AI illustration image (Kurt “CyberGuy” Knutsson)

Key tactics used by criminals

The FBI has identified 17 common techniques that criminals use to exploit generative artificial intelligence technologies, specifically deepfakes, for fraudulent activities. Here is a complete list of these techniques.

1) Voice cloning: Generating audio clips that imitate the voice of an acquaintance or other trusted person to manipulate victims.

2) Real-time video calls: Creating fake video interactions that appear to involve authority figures, such as law enforcement officers or corporate executives.

3) Social engineering: Using emotional appeals to manipulate victims into revealing private information or transferring funds.

4) AI-generated text: Preparing realistic written messages for phishing attacks and social engineering schemes, making them appear credible.

5) AI-generated images: Using synthetic images to create credible profiles on social networks or fraudulent websites.

6) AI-generated videos: Producing convincing videos that can be used in scams, including investment fraud or phishing schemes.

7) Fake social media profiles: Establishing fraudulent accounts that use AI-generated content to deceive others.

8) Phishing emails: Sending emails that look legitimate but are crafted with artificial intelligence to trick recipients into providing sensitive information.

9) Impersonation of public figures: Using deepfake technology to create videos or audio clips that imitate well-known personalities for scams.

10) Fake identification documents: Generating fraudulent identification, such as driver’s licenses or credentials, for identity fraud and spoofing.

11) Investment fraud schemes: Using AI-generated materials to convince victims to invest in non-existent opportunities.

12) Ransom demands: Posing as loved ones in distress to request ransom payments from victims.

13) Manipulation of voice recognition systems: Using cloned voices to bypass security measures that rely on voice authentication.

14) Fake charity appeals: Creating deepfake content that solicits donations under false pretenses, often during a crisis.

15) Business email compromise: Crafting emails that appear to come from executives or trusted contacts to authorize fraudulent transactions.

16) Disinformation campaigns: Using deepfake videos as part of broader disinformation efforts, particularly around major events such as elections.

17) Exploitation of disaster situations: Generating urgent requests for help or money during emergencies, taking advantage of emotional manipulation.

AI illustration image (Kurt “CyberGuy” Knutsson)

These tactics highlight the increasing sophistication of fraud schemes enabled by generative AI and the importance of vigilance in protecting private information.

Tips to protect yourself from deepfakes

Implementing the following strategies can improve your security and awareness against fraud related to deepfakes.

1) Limit your online presence: Reduce the amount of private information, especially high-quality images and videos, available on social networks by adjusting your privacy settings.

2) Invest in personal data removal services: The less information about you is available online, the harder it is for someone to create a deepfake of you. No service can promise to remove all of your data from the web, but a data removal service is worthwhile if you want to continuously monitor and automate the removal of your information from hundreds of sites over time. Check out my top picks for data removal services here.

3) Avoid sharing sensitive information: Never reveal personal data or financial information to strangers online or over the phone.

4) Keep an eye out for new connections: Be careful when accepting new friends or connections on social media; verify their authenticity before engaging.

5) Check privacy settings on social networks: Make sure your profiles are set to private and that you only accept friend requests from people you trust. Here’s how to make any social media account private, including Facebook, Instagram, Twitter, and any others you may use.

6) Use two-factor authentication (2FA): Enable 2FA on your accounts to add an extra layer of security against unauthorized access (a short sketch after this list of tips shows how these one-time codes work).

7) Verify callers: If you receive a suspicious call, hang up and independently verify the caller’s identity by contacting the organization they claim to represent through official channels.

8) Watermark your media: When sharing photos or videos online, consider using digital watermarks to deter unauthorized use.

9) Monitor your accounts regularly: Keep an eye on your financial and online accounts for any unusual activity that could indicate fraud.

10) Use strong and unique passwords: Use a different password for each account so a single breach cannot compromise multiple services, and consider using a password manager to generate and store complex passwords (see the sketch after this list of tips).

11) Backup your data periodically: Keep backups of important data to protect against ransomware attacks and ensure recovery in the event of data loss.

12) Create a secret verification phrase: Set a unique word or phrase with family and friends to verify identities during unexpected communications.

13) Be aware of visual imperfections: Look for subtle flaws in images or videos, such as distorted features or unnatural movements, that may indicate manipulation.

14) Listen for voice abnormalities: Pay attention to tone and word choice in audio clips; AI-generated voices can sound unnatural or robotic.

15) Do not click on links or download attachments from suspicious sources: Use caution with emails, direct messages, text messages, phone calls, or other digital communications from unknown sources. This is especially true if the message demands that you act quickly, such as claiming that your computer has been hacked or that you have won a prize. Deepfake creators try to manipulate your emotions so that you download malware or share private information. Always think before you click. (A short example of checking where a link really points appears below.)

The best way to protect yourself from malicious links that install malware and potentially access your private information is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your private information and digital assets safe. Get my picks for the best antivirus protection winners of 2025 for your Windows, Mac, Android, and iOS devices.

16) Be careful with money transfers: Don’t send money, gift cards, or cryptocurrency to people you don’t know or have only met online or over the phone.

17) Report suspicious activity: If you suspect that you have been targeted by scammers or have fallen victim to a fraud scheme, report it to the FBI’s Internet Crime Complaint Center (IC3).
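
To make tips 6 and 10 a little more concrete, here is a minimal Python sketch of the kind of credentials a password manager and an authenticator app handle for you: a strong random password for each account, and the time-based one-time codes used by most 2FA apps. This is an illustration rather than a recommendation of any particular tool; the account names are made up, and the one-time-code portion assumes the third-party pyotp package is installed (pip install pyotp).

# Sketch only: strong random passwords (tip 10) and time-based
# one-time codes for 2FA (tip 6). Requires: pip install pyotp
import secrets
import string

import pyotp


def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# A different random password for each (hypothetical) account,
# so one breach cannot unlock the others.
for account in ("email", "bank", "social media"):
    print(account, "->", generate_password())

# Time-based one-time passcodes (TOTP), the codes behind most 2FA apps.
secret = pyotp.random_base32()   # shared once with your authenticator app
totp = pyotp.TOTP(secret)
code = totp.now()                # the six-digit code that changes every 30 seconds
print("Current 2FA code:", code, "accepted:", totp.verify(code))

In practice, the password manager and authenticator app do all of this behind the scenes; the point is simply that these credentials are random and account-specific, which is what makes them hard for scammers to guess or reuse.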

A woman typing on her laptop (Kurt “CyberGuy” Knutsson)

By following these tips, people can better protect themselves from the risks associated with deepfake technology and related scams.
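
As a small illustration of tip 15, the short Python sketch below uses only the standard library to pull out the hostname a link actually points to; the example URLs are invented. The visible text of a link can say anything, so the hostname is what’s worth comparing against the real organization’s domain before you click.

# Sketch only: inspect where a link really points before clicking (tip 15).
# The URLs below are made up for illustration.
from urllib.parse import urlparse

links = [
    "https://www.example-bank.com/login",             # plausible-looking link
    "https://example-bank.com.account-verify.xyz/",   # lookalike: the real domain here is account-verify.xyz
]

for link in links:
    print(link)
    print("  actually points to:", urlparse(link).hostname)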

Kurt’s Key Takeaways

The increasing use of generative AI technologies, specifically deepfakes, by criminals highlights a pressing need for awareness and caution. As the FBI warns, these sophisticated tools allow scammers to convincingly impersonate people, making scams harder to detect and more credible than ever. It is essential that everyone understands the tactics employed by these criminals and takes proactive measures to protect their personal information. By staying informed about the risks and implementing security measures, such as verifying identities and limiting online exposure, we can better protect ourselves against these emerging threats.

How do you think companies and governments should respond to the growing threat of AI-driven fraud? Let us know by writing to us at Cyberguy.com/Contact.

For more tech tips and security alerts, sign up for my free CyberGuy Report newsletter by heading to Cyberguy.com/Newsletter.

Ask Kurt a question or tell us what stories you’d like us to cover.

Copyright 2024 CyberGuy.com. All rights reserved.
