In January 2024, a finance worker at the global engineering firm Arup was tricked into transferring $25 million to cybercriminals after attending a video conference populated entirely by AI-generated deepfakes of senior executives.
The audacious attack clearly demonstrated that generative AI can open novel attack vectors and deliver enormous payoffs for criminals.
Since ChatGPT’s launch in November 2022, criminals have embraced AI with enthusiasm – using it to research vulnerabilities, compose phishing emails, write code, and create new forms of social engineering with cloned voices and faked likenesses.
Yet, for all its impact so far, AI’s disruptive potential for cybercrime has only just begun to surface, and 2025 looks set to be a critical year in its development.
