Deepfakes are now one of the fastest-growing forms of adversarial AI. Losses tied to deepfakes are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, a 32% compound annual growth rate. Deloitte sees deepfake fraud spreading in the coming years, with banking and financial services being prime targets.
Deepfakes typify the cutting edge of adversarial AI attacks, having grown 3,000% in the last year alone. Deepfake incidents are projected to rise by 50% to 60% in 2024, with 140,000 to 150,000 incidents predicted globally this year.
The latest generation of generative AI apps, tools, and platforms gives attackers what they need to create deepfake videos, impersonated voices, and fraudulent documents quickly and at a fraction of the cost. Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers is costing an estimated $5 billion annually. Their report underscores how severe a threat deepfake technology poses to banking and financial services.
Bloomberg reported last year that “there is already an entire cottage industry on the dark web that sells scamming software from $20 to thousands of dollars.” A recent infographic based on Sumsub's Identity Fraud Report 2023 provides a global view of the rapid growth of AI-powered fraud.
Source: Statista, How Dangerous Are Deepfakes and Other AI-Powered Frauds? March 13, 2024
Enterprises are not ready for deepfakes and adversarial AI.
Adversarial AI creates new attack vectors no one sees coming, producing a more complex, nuanced threat landscape that prioritizes identity-driven attacks.
Surprisingly, one in three enterprises has no strategy in place to deal with the threat of an adversarial AI attack that would most likely begin with deepfakes of their key executives. Ivanti's latest research shows that 30% of enterprises have no plan for identifying and defending against adversarial AI attacks.
The Ivanti 2024 State of Cybersecurity Report found that 74% of enterprises surveyed are already seeing evidence of AI-driven threats, and a majority, 89%, believe AI-driven threats are just getting started. Of the CISOs, CIOs, and IT leaders Ivanti interviewed, 60% fear their organizations are unprepared to defend against AI-driven threats and attacks. Using a deepfake as part of an orchestrated strategy that includes phishing, software vulnerabilities, ransomware, and API-related threats is becoming more common, and that is consistent with the threats security professionals expect to become more dangerous because of gen AI.
Source: Ivanti 2024 State of Cyber Security Report
Attackers are concentrating their deepfake efforts on CEOs.
VentureBeat regularly hears from enterprise software cybersecurity CEOs, who prefer to stay anonymous, about how deepfakes have progressed from easily identified fakes to recent videos that look legitimate. Voice and video deepfakes that impersonate industry executives appear to be a favorite attack strategy, aimed at defrauding their companies of millions of dollars. Adding to the threat is how aggressively nation-states and large-scale cybercrime organizations are doubling down on developing, hiring, and growing their expertise with generative adversarial network (GAN) technologies. Of the thousands of CEO deepfake attempts to occur this year alone, the one targeting the CEO of the world's largest advertising firm shows just how sophisticated attackers are becoming.
In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity practitioners defend systems, while also commenting on how attackers are using it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election, and the threats posed by China and Russia.
“The deepfake technology today is so good. I think it's one of the areas that you really worry about. I mean, in 2016, we used to track this, and you would see people actually just have conversations with bots, and that was in 2016, and they're literally arguing or promoting their cause, and they're having this interactive conversation, and it's like there's nobody even behind the thing. So I think it's pretty easy for people to get wrapped up into that being real, or there's a narrative that we want to get behind, but a lot of it can be driven and has been driven by other nation-states,” Kurtz said.
CrowdStrike's intelligence team has invested a significant amount of time in understanding the nuances of what makes a convincing deepfake and in which direction the technology is moving to achieve maximum impact on viewers.
Kurtz continued, “And what we've seen in the past, and we spend a lot of time researching this with our CrowdStrike intelligence team, is it's a little bit like a pebble in a pond. Like, you'll take a topic, or you'll hear a topic, anything related to the geopolitical environment, and the pebble gets dropped in the pond, and then all these waves ripple out, and it's this amplification that takes place.”
CrowdStrike is known for its deep expertise in AI and machine learning (ML) and its unique single-agent model, which has proven effective in driving its platform strategy. With such deep expertise in the company, it's understandable how its teams would experiment with deepfake technologies.
“And if now, in 2024, with the ability to create deepfakes, and some of our internal guys have made some funny spoof videos of me, just to show me how scary it is, you could not tell that it was not me in the video. So I think that's one of the areas that I really get concerned about,” Kurtz said. “There's always the concern about infrastructure and things like that. In a lot of those areas, it's still paper voting and the like. Some of it isn't, but how you create the false narrative to get people to do things that a nation-state wants them to do, that's the area that I'm really concerned about.”
Businesses need to rise to the challenge.
Enterprises run the risk of losing the AI war if they don't keep pace with attackers who are weaponizing gen AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so commonplace that the Department of Homeland Security has issued a guide, Increasing Threats of Deepfake Identities.