Feb 13, 2025

Social Engineering in 2024: A Year in Review


In 2024, deepfake scams became a real threat, with criminals using AI-generated voices and faces to impersonate executives and steal millions.


Ross Lazerowitz

Co-Founder and CEO

In early 2024, a finance worker in the Hong Kong office of a British engineering firm received a message purportedly from the company’s UK-based CFO requesting a secret transaction. The employee initially suspected phishing, but those doubts dissolved during a multi-person video call in which the CFO and other colleagues appeared to validate the request. Unbeknownst to the employee, every other participant was a deepfake—an AI-generated replica of a real employee crafted using publicly available footage of their face, voice, and mannerisms.


As we reflect on cybersecurity trends in 2024, one noteworthy phenomenon is the unprecedented evolution of social engineering attacks. This year marked a watershed moment where advanced technologies, particularly artificial intelligence, transformed from theoretical threats into practical tools for cybercriminals. From sophisticated deepfake attacks targeting major corporations to increasingly nuanced consumer scams, 2024 has fundamentally changed our understanding of digital security challenges.


Deepfakes: When Reality Isn’t What It Seems


This year, deepfakes have gone from a theoretical possibility to a potent tool in the cybercriminal’s arsenal. Here are some highlights:


  • Arup HK $25m Deepfake Scam: One of the most shocking events came when a deepfake scam cost Arup a whopping $25 million. Criminals used hyper-realistic deepfake videos to impersonate top executives, tricking employees into transferring funds. CNN reported the incident in detail.

  • Wiz CEO Targeted by a Deepfake Attack: Imagine hearing your own voice, or that of someone you trust, convincing you to do something risky. Dozens of Wiz employees received a voice message built from a deepfake clone of CEO Assaf Rappaport’s voice, in an attempt to harvest their credentials. TechCrunch has the full story.

  • Ferrari’s Close Call: Even luxury brands weren’t spared. Scammers used deepfake audio to impersonate Ferrari CEO Benedetto Vigna, attempting to manipulate an executive into a fake high-stakes deal. The fraudster’s convincing WhatsApp messages and phone call unraveled when the executive asked a question only the real Vigna could answer. Fortune broke down this incident.

  • WPP CEO and LastPass: The deepfake phenomenon wasn’t confined to financial scams. Scammers used deepfake audio to impersonate WPP CEO Mark Read and a LastPass executive in elaborate fraud attempts. In WPP’s case, attackers set up a fake Teams call, using an AI-generated voice clone and YouTube footage, to trick an agency leader into launching a fraudulent business venture. Meanwhile, a LastPass employee received WhatsApp messages and calls from a deepfake of their CEO but recognized the red flags and reported the attempt. The Guardian and LastPass’s own blog offer more insights into these unsettling events.


These incidents are part of a startling new trend of attackers using deepfakes to impersonate executives. It will be an important space to watch as AI models keep improving in realism and dropping in cost.


North Korean IT Worker Scams: A Growing Threat


While deepfake attacks dominate headlines, another cyber threat is quietly infiltrating US businesses: North Korean IT worker scams. Recent cases have revealed how DPRK operatives, using AI-generated profile images and stolen identities, secured remote jobs at major companies to funnel money back to the regime.


One high-profile incident involved KnowBe4, where a fake software engineer was hired after passing multiple rounds of interviews and background checks. The deception unraveled when the company’s security team detected malware loading onto the provided laptop immediately upon activation. The worker, operating from a North Korean “laptop farm,” used AI-enhanced resumes and deepfake video interviews to appear legitimate, while VPNs helped them pose as US-based employees. This tactic allowed them to earn salaries while secretly funding North Korea’s illicit programs.


The US Justice Department has indicted multiple individuals involved in these scams, emphasizing how thousands of skilled DPRK operatives use AI-powered deception to bypass security and gain employment. Companies must remain vigilant by enhancing background checks, verifying physical locations, and monitoring remote devices to prevent these state-sponsored threats from breaching corporate defenses.
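One of those monitoring controls can be made concrete. Below is a minimal sketch of a login-review check that flags remote-worker logins whose network origin contradicts the employee’s claimed location, the pattern that exposes laptop-farm operations. The IP intelligence table here is a stub with illustrative values; in practice it would come from a GeoIP database or IP-reputation feed, and the field names are our assumptions, not any specific product’s API.

```python
# Sketch: flag remote-worker logins whose network origin contradicts
# the employee's claimed location (e.g., laptop farms behind VPNs).
from dataclasses import dataclass

# Stubbed GeoIP/VPN intel; in production, query a GeoIP database or
# IP-reputation feed. These entries are illustrative, not real data.
IP_INTEL = {
    "203.0.113.7":  {"country": "US", "is_vpn_exit": False},
    "198.51.100.9": {"country": "US", "is_vpn_exit": True},
}

@dataclass
class LoginEvent:
    employee_id: str
    claimed_country: str  # from HR records
    source_ip: str

def review_login(event: LoginEvent) -> list[str]:
    """Return human-readable flags for a security analyst to triage."""
    intel = IP_INTEL.get(event.source_ip)
    if intel is None:
        return ["no intel for source IP; investigate manually"]
    flags = []
    if intel["is_vpn_exit"]:
        flags.append("login via known VPN exit node")
    if intel["country"] != event.claimed_country:
        flags.append(f"IP geolocates to {intel['country']}, "
                     f"but HR records say {event.claimed_country}")
    return flags

if __name__ == "__main__":
    event = LoginEvent("emp-1042", "US", "198.51.100.9")
    for flag in review_login(event):
        print("FLAG:", flag)
```

A single flag is not proof of fraud, of course; the point is to route anomalies to a human before the new hire’s laptop ever touches production systems.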


Consumer Scams: When the Bait Is Too Good to Be True


Cybercriminals are weaponizing AI to launch increasingly convincing scams that prey on everyday consumers. In 2024 alone, the US saw over 3,100 data breaches, exposing more than 1.3 billion victim records, according to the Identity Theft Resource Center. With this stolen data, attackers create highly personalized scams, making it harder than ever for victims to detect deception.


Here are some of the most alarming cases from the past year:


  • How I Got Scammed Out of $50,000: A personal account published by The Cut dives into the nuances of how an Amazon scam call duped one individual. It’s a harsh reminder that these attacks don’t discriminate—they can hit anyone.

  • International Operation Against Phone Phishing: On a more positive note, law enforcement showed us that there’s a global crackdown underway. An international operation against a ‘phone phishing’ gang in Belgium and the Netherlands was detailed by Europol, demonstrating that coordinated efforts can and do make a difference.


These cases illustrate that while technology provides new tools for scammers, it also offers a pathway for collective action against them.


Reflections and Lessons Learned


2024 made it clear that security needs a reset. AI-powered threats like deepfake scams, social engineering, and nation-state infiltration have outpaced traditional defenses. Attackers are no longer just targeting passwords or exploiting careless clicks. They are impersonating executives with convincing deepfake calls, slipping into companies as remote workers, and using AI-generated profiles to bypass background checks. The way forward is a smarter, layered approach that blends technology, training, and constant adaptation.


Consider taking these steps:


  • Ditch passwords. Shift toward passwordless authentication using passkeys or physical security keys (about $25 apiece). They are not a silver bullet, but they make credential-based attacks much harder; a minimal sketch of why appears after this list.

  • Strengthen verification methods and processes. Adopt identity verification tools as part of the interview process, and audit internal and external verification procedures for support teams for reliance on easily spoofed signals such as caller ID or email display names. A sketch of an out-of-band policy gate also follows this list.

  • Make security training more realistic. Use deepfake vishing simulations and hands-on social engineering exercises to prepare employees for real-world threats. Your employees have priors that need to be updated.

  • Get better intel. Educate and enable your employees to report all social engineering attempts, not just email-based phishing.
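To make the passwordless point concrete, here is a minimal sketch of the property that makes security keys phishing-resistant: the authenticator signs the server’s challenge bound to the origin the browser actually connected to, so a signature captured on a look-alike domain fails verification at the real site. This is not a production WebAuthn implementation, only the core idea, using the `cryptography` package; all names are illustrative.

```python
# Minimal sketch of the origin-binding idea behind FIDO2/WebAuthn:
# the key signs (challenge + origin), so credentials phished on a
# look-alike domain cannot be replayed against the real one.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Enrollment: the security key generates a keypair; the server
# stores only the public key.
device_key = ec.generate_private_key(ec.SECP256R1())
server_stored_pubkey = device_key.public_key()

def device_sign(challenge: bytes, origin: str) -> bytes:
    """The authenticator signs the challenge bound to the origin
    the browser actually connected to."""
    return device_key.sign(challenge + origin.encode(),
                           ec.ECDSA(hashes.SHA256()))

def server_verify(sig: bytes, challenge: bytes, expected_origin: str) -> bool:
    """The real site verifies against its own origin."""
    try:
        server_stored_pubkey.verify(sig, challenge + expected_origin.encode(),
                                    ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)
# Legitimate login: origins match, verification succeeds.
assert server_verify(device_sign(challenge, "https://example.com"),
                     challenge, "https://example.com")
# Phished login: the signature is bound to the fake origin and fails.
assert not server_verify(device_sign(challenge, "https://examp1e.com"),
                         challenge, "https://example.com")
print("origin binding: legitimate login verifies, phished one does not")
```

This is exactly the property a deepfaked video call cannot fake: there is no shared secret for an employee to be talked into revealing.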
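And as a companion to the verification bullet, here is a sketch of a policy gate that holds high-risk requests, such as large wire transfers, until an out-of-band check succeeds. The threshold, channel names, and callback rule are illustrative assumptions, not a prescribed standard; the point is that a video call or email alone never clears the gate.

```python
# Sketch of a policy gate: high-risk requests must pass an
# out-of-band verification step before execution. The threshold
# and channel names are illustrative, not prescriptive.
from dataclasses import dataclass

OOB_REQUIRED_ABOVE_USD = 10_000  # illustrative threshold

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    request_channel: str   # video calls and email can both be faked,
                           # so neither counts as verification alone
    oob_verified: bool = False  # set True only after calling back a
                                # number from the company directory,
                                # never one supplied in the request

def approve(req: TransferRequest) -> bool:
    if req.amount_usd <= OOB_REQUIRED_ABOVE_USD:
        return True
    if not req.oob_verified:
        print(f"HOLD: {req.requester} asked for ${req.amount_usd:,.0f} "
              f"via {req.request_channel}; call back on a directory "
              "number before releasing funds.")
        return False
    return True

# A convincing deepfake video call alone does not clear the gate.
approve(TransferRequest("cfo@example.com", 25_000_000, "video call"))
```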


Looking Forward to 2025


We predict 2025 will be the breakout year for AI-driven social engineering, with two significant developments. First, we believe open-source video models will become good enough to power widespread deepfake video campaigns. Second, existing capabilities will be exploited far more heavily. Much like legitimate companies, attackers have been refining how they use this technology, and new research confirms they are closing the gap on human-level deception.


A recent study from Harvard and Avant Research Group, co-authored by Bruce Schneier, demonstrates how capable AI has become at scaling phishing attacks. The researchers tested AI-generated phishing attempts that fed OSINT-gathered context into an LLM. The results? The AI system matched human-expert success rates at 54% and is projected to exceed them as models improve. This, coupled with reports from Google and OpenAI that attackers increasingly use LLMs for OSINT and for writing phishing emails, means we will see far wider deployment of this technology.


The sobering reality is that old advice, like telling users to look for typos and suspicious links, is no longer enough. Attacks are no longer likely to be sloppy. Security teams must upgrade verification protocols, transition to passwordless authentication, and adopt AI-driven defense strategies to stay ahead. The game has changed, and 2025 will test how well we can adapt.


Try Mirage

Learn how to protect your organization from AI-driven social engineering.

Ready to see Mirage in action?

Concerned about social engineering? Get a free AI simulation and speak directly with our founders.

© Copyright 2024, All Rights Reserved by ROSNIK Inc.
