A recent report from the Capgemini Research Institute reveals that 97 percent of surveyed organizations experienced at least one security breach related to generative AI (Gen AI) in the past year.
The study, titled "New Defences, New Threats: What AI and Gen AI Bring to Cybersecurity," highlights the growing role of AI in cybersecurity while exposing significant vulnerabilities associated with its adoption. Conducted in May 2024, the survey included 1,000 organizations across 13 countries, spanning industries such as banking, insurance, energy, healthcare, and aerospace, with annual revenues exceeding $1 billion.
The report underscores the dual nature of AI in cybersecurity. While AI enhances the speed and accuracy of cyber incident detection, it also introduces unprecedented risks.
Over 90 percent of respondents reported experiencing at least one cybersecurity breach in the past year, a sharp increase from 51 percent in 2021. Nearly half of these organizations estimated financial losses exceeding $50 million over the last three years. The rise in breaches coincides with the widespread adoption of generative AI technologies, which malicious actors exploit for cyberattacks.
Several risks associated with generative AI adoption are identified in the report, including data poisoning, sensitive information leaks, and misuse of deepfake technologies.
Approximately 67 percent of organizations expressed concerns about these issues, while 43 percent reported direct financial losses from incidents involving deepfake content. Other vulnerabilities include hallucinations -incorrect or misleading outputs from AI systems - biased content generation, and prompt injection attacks. These challenges are exacerbated by the improper use of Gen AI by employees, expanding the attack surface for organizations and increasing the complexity of defense strategies.
Despite these risks, organizations continue to rely on AI-driven security solutions to mitigate cyber threats. AI and Gen AI provide security teams with powerful tools to transform their defense strategies, but the report emphasizes the need for stronger governance frameworks and enhanced security measures to address emerging threats. The findings highlight the urgent need for organizations to balance AI's benefits with its risks, ensuring robust cybersecurity protocols are in place to safeguard sensitive data and prevent financial losses.
Source: https://www.techmonitor.ai/technology/cybersecurity/97-of-organisations-hit-by-gen-ai-related-security-breaches-survey-finds?cf-view
Commentary
Artificial intelligence (AI) is a broad term that refers to machines or computer systems capable of performing tasks that typically require human intelligence, such as decision-making, problem-solving, and pattern recognition. AI encompasses various techniques, including machine learning, natural language processing, and robotics, which help computers analyze data, recognize trends, and automate processes.
Generative AI is a specific subset of AI focused on creating new content, such as text, images, music, code, and videos. Unlike traditional AI, which primarily classifies or analyzes data, generative AI models use deep learning algorithms to generate original outputs based on patterns in large datasets. For example, generative AI models like GPT can generate human-like text, while image-generation models like DALL·E produce realistic artwork or visuals from text descriptions.
The key distinction between AI and generative AI is their function: AI broadly enhances automation and decision-making, while generative AI specializes in content creation.
Due to this, generative AI poses unique cybersecurity risks because it can be exploited to create highly sophisticated attacks that are more difficult to detect and mitigate. These risks include:
- Cybercriminals can use generative AI to craft highly convincing emails, fake websites, or even voice and video deepfakes that mimic legitimate sources. This makes phishing attempts more effective, as attackers can personalize messages to target individuals based on publicly available data.
- Malicious actors can introduce biased or misleading data into AI training models, causing them to generate inaccurate or harmful outputs. This can be exploited to spread misinformation or manipulate automated decision-making systems.
- Attackers can use AI models to generate new variants of malware that evade traditional security defenses. AI-generated code can be optimized to bypass detection mechanisms, making cybersecurity defenses less effective.
- These allow adversaries to manipulate AI models by crafting inputs that bypass security filters. This can lead to unintended outputs, such as generating harmful or illegal content.
The final takeaway is that the widespread adoption of generative AI increases the attack surface for organizations, requiring stronger governance frameworks and security measures to mitigate these emerging threats.