Cybersecurity Trends and Predictions 2025 From Industry Insiders: Part 1Cybersecurity Trends and Predictions 2025 From Industry Insiders: Part 1

From quantum preparedness to AI's dual impact on cybersecurity, IT leaders and industry insiders share their cybersecurity trends and predictions for 2025.

Rick Dagley

January 23, 2025

2h 42m Read
High-tech padlock representing the protection of digital assets in an interconnected world
Alamy

Cybersecurity was a significant concern for organizations in 2024, as it became increasingly more difficult to protect sensitive data and critical infrastructure from theft, damage, and unauthorized access by bad actors. So it was no surprise that the top cybersecurity stories on ITPro Today last year were a quiz testing your IT security knowledge and a cybersecurity reference guide for IT professionals.

What's in store for cybersecurity in 2025? IT leaders and industry insiders are claiming in 2025 cybersecurity will experience "The Great AI Awakening," infamy will be the new payday, and quantum preparedness will become the No. 1 board-level cybersecurity topic.

Read on to see what else they are expecting in the cybersecurity space in 2025. Below are their predictions, broken into security categories.

But first, explore our 2025 tech predictions, including "anti-predictions" that challenge widely anticipated IT trends with fresh insights from our experts:

Top Cybersecurity Predictions of 2025: Part 1

We were inundated with so many cybersecurity predictions that we split them into two articles. Part 1 covers the following topics:

Related:Top 10 Cybersecurity Articles of 2024

Click here for Part 2, which covers zero trust, cloud security, the role of the CISO in 2025, cybersecurity workforce, security spending, cyber insurance, GRC, and cybersecurity techniques and strategies.

AI's Impact — Good and Bad — on Cybersecurity

First Major AI-Generated Code Vulnerability

Development teams have eagerly embraced AI, particularly GenAI, to accelerate coding and drive efficiency. While the push for the "10x developer" is transforming software creation, the need for speed can sideline or shortcut traditional practices like code reviews, raising significant security concerns. In the coming year, overconfidence in AI's capabilities could lead to vulnerable or malicious code slipping into production. GenAI is powerful but fallible — it can be tricked with prompts and is prone to hallucinations. This risk is not hypothetical: 78% of security leaders believe AI-generated code will lead to a major security reckoning. The CrowdStrike outage illustrated how quickly unvetted code can escalate into a crisis. With AI-generated code on the rise, organizations must authenticate all code, applications, and workloads by verifying their identity.

Related:Cybersecurity Basics: A Quick Reference Guide for IT Professionals

Code signing will become an even greater cornerstone in 2025, ensuring code comes from trusted sources, remains unchanged, and is approved for use. Yet, challenges persist: 83% of security leaders report developers already use AI to generate code, and 57% say it's now common practice. Despite this, 72% feel pressured to allow AI to stay competitive, while 63% have considered banning it due to security risks. Balancing innovation with security will be critical moving forward. Kevin Bocek, chief innovation officer, Venafi, a CyberArk company

Machine Identity Security Teams Become the Norm for Forward-Thinking Enterprises

In 2025, machine identity security will become an integral part of IAM programs. As machine identities become a key focus within these programs, CISOs will recognize the need to address both human and machine identity management prompting organizations to adapt with dedicated teams. This shift stems from escalating threats and rapid technological change. Attackers are increasingly targeting machine identities, as seen with IntelBroker's claims of selling stolen developer assets from Cisco and Nokia. The rise of cloud-native technologies and AI is also accelerating the creation and deployment of machine identities like TLS and SPIFFE, adding complexity to their management.

Related:Cybersecurity Acronyms Cheat Sheet

At the same time, shortening machine identity lifecycles and looming post-quantum encryption challenges are pushing organizations to rethink their strategies. The scale of the issue is immense: Machine identities now outnumber human identities by 45 to 1, a ratio expected to reach 100 to 1 soon.

Forward-thinking companies are already creating dedicated Machine Identity Security Programs, leveraging automation to address these challenges. As the machine identity landscape grows more complex, organizations without robust programs risk frequent outages and security incidents. By formalizing these efforts, businesses can stay ahead of today's threats and prepare for the demands of tomorrow. Kevin Bocek, chief innovation officer, Venafi, a CyberArk company

2025 Will See the First Data Breach of an AI Model

Pundits have frequently warned about the data risks in AI models. If the training data is compromised, entire systems can be exploited. While it is difficult to attack the large language models (LLMs) used in tools like ChatGPT, the rise of lower-cost, more targeted small language models (SLM) make them a target. The impact of a corrupt SLM in 2025 will be massive because consumers won't make a distinction between LLMs and SLMs. The breach will spur the development of new regulations and guardrails to protect customers. — Stephen Manley, CTO, Druva

Synthetic Data Used More in AI Training to Safeguard Sensitive Customer Data, Creating New Risks

For AI to produce good results, it needs to be trained on good data and rigorously tested with prompt engineering. The business temptation is to use customer data to train AI models — but that causes a myriad of problems to crop up, such as data compliance breaches, higher impact of cyber risk, and higher likelihood of data leakage. To effectively combat these challenges, businesses will turn to synthetic data, or training data that AI models generate, to maintain safety best practices during the training process. This, however, will create new risks, since the synthetic data can create a feedback loop that will exacerbate any bias in the data. Therefore, companies will need to invest in transparency and increase the rigor in reviewing their AI-generated output. — Stephen Manley, CTO, Druva

Security Leaders Will Embrace AI Experimentation

2024 shocked many of us with AI technologies' sophistication and rapid advancement. The year also highlighted that we don't quite know how to incorporate such tools into work and which vendors can help us along the way. Organizations in 2025 will continue to experiment with AI to understand where it offers value. And we'll also see many startups experiment with business models and tech approaches. Security and IT leaders should be ready to help evaluate and onboard a diverse set of immature AI products. We'll need to comprehend a range of AI technologies and understand the expectations of diverse internal stakeholders so we can contribute toward making informed risk vs. reward decisions. Lenny Zeltser, SANS Institute Fellow and CISO at Axonius

AI in Security: Balancing Human Expertise and Automation for Optimal Outcomes

AI-related advancements will continue to fuel discussions regarding the role of humans vs. automation in the workforce. Security teams will see more opportunities to use AI and non-AI technologies to automate tasks across many domains, including GRC, security operations, and product security. Security leaders will need to be strategic about deciding which tasks to leave for humans and which to automate. Given how rapidly the technology is changing, we should be ready to experiment and determine how to measure project outcomes to decide which approaches work best. Lenny Zeltser, SANS Institute Fellow and CISO at Axonius

U.S. Border Control Will Detect Threats With AI Knowledge Graphs

In 2025, AI-driven semantic knowledge graphs will enhance border control operations along the U.S.-Mexico border. These systems will integrate vast streams of data, including surveillance feeds, biometric records, sensor networks, and cross-agency intelligence to provide real-time situational awareness and predictive insights. AI Knowledge Graphs enhanced with LLMs will enable border agents to identify patterns of movement associated with smuggling, human trafficking, and unauthorized crossings more effectively. By connecting disparate data points — such as vehicle histories, communication metadata, and geographic trends — these systems will allow authorities to detect emerging threats and respond with greater precision. — Dr. Jans Aasman, CEO, Franz

Strengthening Cybersecurity Against AI-Generated Threats

With escalating threats from sophisticated phishing and ransomware attacks, focus needs to shift toward advanced data protection strategies, AI-driven threat detection and continuous employee training to mitigate ongoing risks. Businesses that proactively adopt these measures will not only comply with regulations but also build customer trust and loyalty. — James Tommey, Global Head of IT & Chief Security Officer, DISCO

Combating Fraudulent AI-Generated Content

In 2025, organizations will face unprecedented cybersecurity challenges due to the rise of fraudulent AI-generated content, which will become indistinguishable from human-created data. Leaders must think about how to implement robust authentication and verification protocols to safeguard against deepfakes and synthetic data breaches to ensure protection over the integrity of their workflows. — James Tommey, Global Head of IT & Chief Security Officer, DISCO

Cybersecurity Will See New Threats

The rise of AI also brings a new era of cybersecurity challenges. In 2025, companies must up their security postures to address entirely new types of risk introduced by AI. One such example is prompt injection attacks — where malicious inputs are disguised as legitimate user prompts in generative AI systems. According to the latest Cisco AI Readiness Index, only 30% of companies globally said they have the capabilities to tackle these threats.

And AI isn't the only factor adding pressure to security teams. Advancements in quantum computing will force companies to reckon with the vulnerabilities of traditional encryption methods to quantum-powered attacks. As quantum computing inches toward mainstream adoption in 2025, we will see organizations adopting quantum-resistant security protocols to safeguard sensitive data. And, the rise of digital ecosystems and platforms further complicates the landscape. Things are more connected than ever before — and as things become increasingly connected, the sophistication of attacks grows too. In 2025, we'll see increased risk of social engineering and supply chain attacks. 

As attackers shift their tactics to compromise users and endpoints, aiming for lateral movement to maximize the impact of their attacks, the network will become a crucial pillar of security. The network's ability to provide visibility into the environment will make it the first and last line of defense. We will see organizations integrating AI to augment human capabilities to fortify the network as a pivotal line of defense and policy enforcement. — Liz Centoni, Executive Vice President and Chief Customer Experience Officer, Cisco

Schools Reinvigorate Efforts to Protect Students Online in the Wake of AI Proliferation

We'll see a strong push for more safety mechanisms to be installed on student devices, specifically when it comes to data protection, threat prevention, and privacy controls. Educational institutions will be encouraged (or perhaps required) to improve encryption protocols and access controls, use AI-powered threat detection to fight AI-powered attacks, use systems that provide real-time alerts, and step up their game when it comes to student data privacy. — Suraj Mohandas, Vice President, Strategy, Jamf

AI-Powered Predictive Maintenance and Risk Management to Dominate Building Systems

Managed services that monitor and optimize physical assets throughout their lifecycle will be table stakes. This includes critical functions like firmware updates, system health monitoring, and ensuring proper functionality. Predictive maintenance powered by AI will play a pivotal role in addressing vulnerabilities proactively, minimizing downtime and costs while bolstering security. The growing interconnectivity of building management systems brings new risks, including unvetted device access and limited visibility into system components. In 2025, facility managers need a layered risk management strategy that incorporates tiered system criticality, comprehensive remediation plans, and continuous auditing. — Greg Parker, Global Vice President, Security and Fire, Life Cycle Management, Johnson Controls

AI and Automation Will Take Over Tedious Vulnerability Management Tasks

Security teams are overwhelmed by the growing volume and complexity of vulnerabilities, leading to errors and burnout. AI-driven tools are set to change this, automating tasks like triage, validation, and patching. By analyzing vast datasets, these tools will predict which vulnerabilities are most likely to be exploited, allowing teams to focus on critical threats. By 2025, up to 60% of these tasks will be automated, significantly improving accuracy and response times. AI-driven tools will also proactively discover vulnerabilities, closing gaps before attackers can exploit them. — Jimmy Mesta, CTO and founder, RAD Security

AI Will Give CISOs and Security Teams a Head Start on Threats

It's no longer enough to detect threats after they've infiltrated a system. By training models on vast amounts of historical data, AI will help security teams spot emerging attack patterns before they cause damage. By detecting subtle anomalies in network traffic and user behavior, AI will provide proactive alerts, giving organizations a critical edge. This approach could cut the average time to detect threats (MTTD) by half. Moreover, as AI continues to advance, multi-agent systems will emerge as a new challenge. Attackers will use these systems to orchestrate sophisticated, automated attacks, forcing defenders to adopt similarly sophisticated AI solutions. — Jimmy Mesta, CTO and founder, RAD Security

AI Will Help Close the Cybersecurity Skills Gap

The demand for cybersecurity talent keeps growing, but there aren't enough skilled professionals to fill the gap. AI-powered tools are stepping in to level the playing field, helping organizations of all sizes automate threat detection, incident response, and compliance tasks. In the new year, over half of small and medium-sized businesses will depend on AI to manage their security operations. These tools will make advanced protection accessible, especially for teams with limited resources. — Jimmy Mesta, CTO and founder, RAD Security

AI-Driven Threat Detection Will Integrate Seamlessly into DevOps Workflows

AI will become fully integrated into DevOps workflows, enabling security to be embedded directly into the development process. With cloud-native environments growing more complex, AI-powered threat detection will continuously monitor applications in real-time, catching vulnerabilities before they can escalate. Rather than interrupting development cycles, AI tools will seamlessly provide proactive alerts and insights, helping teams address security issues as they arise — without slowing down the pace of innovation or deployment. — Jimmy Mesta, CTO and founder, RAD Security

AI Will Simplify Compliance in an Era of Stricter Regulations

As global data privacy and cybersecurity regulations become stricter, compliance will become an even more significant challenge. Traditional, manual compliance processes won't be enough anymore. By 2025, AI will automate compliance workflows, including auditing, reporting, and monitoring regulatory requirements in real-time. AI tools will identify gaps, generate actionable insights, and help organizations stay agile in adapting to evolving legal landscapes, freeing up security teams to focus on proactive protection. — Jimmy Mesta, CTO and founder, RAD Security

AI Workload Security Will Address New Attack Vectors

As AI becomes central to operations, attackers are targeting foundational elements like training datasets, where a single compromise can create widespread vulnerabilities. AI workload security will be crucial, focusing on protecting models from data poisoning, model evasion, and adversarial attacks. By 2025, integrated security solutions will safeguard AI throughout its lifecycle, ensuring data integrity and resistance to tampering. — Jimmy Mesta, CTO and founder, RAD Security

Advanced AI Deployments Will Power the Next Generation of Cyber Attacks

AI is the game changer for cybercriminals. By 2025, attackers to leverage AI to automate and accelerate their campaigns, adapting to defenses in real-time and making attacks more effective and harder to detect than ever before. As AI is integrated into complex decision-making systems like supply chain management and financial planning, it also presents new opportunities for cybercriminals. Attacks involving model manipulation, data poisoning, supply chain disruptions, and AI-assisted fraud are expected to be among the first attack vectors. — Rik Ferguson, vice president of security intelligence, Forescout

Cybersecurity Moves From Chatbots to Agents

By 2025, AI in cybersecurity will quickly move from chatbots to a more agent-driven approach. While chatbots offer value, agents represent a paradigm shift. Organizations leveraging automation will use agents for threat detection and autonomous responses. Additionally, agents will improve IT resource scalability and enhance cyber hygiene. — Harman Kaur, VP of AI, Tanium

AI in Security Operations Workflows

Companies are already using AI in their security operations workflows to minimize the volume of alerts by weeding out false positives. The next stage of AI for SecOps, which we'll start to see in 2025, is the use of AI in the investigation stage of SOC analysts' work. AI will be able to conduct investigations on behalf of analysts, generate a comprehensive timeline of adversary activity, and summarize its findings. AI will also grasp the context of threats and autonomously initiate response actions, waiting for confirmation from analysts to proceed further. Integrating AI into security operations has been a long-standing ambition for over a decade. With the recent advancements in data collection capabilities and the rapid progress of AI technology, we are finally seeing tangible improvements in how security operations are managed. In 2025, we'll make significant progress toward that goal. — Rakesh Nair, SVP of product and engineering, Devo

The Future of AI in Security Threats

Attackers are already using AI to enhance their tactics. As this technology continues to evolve and quantum computing capabilities emerge, the danger of AI-powered cyberthreats will only grow. Still, many organizations are struggling to stop the most basic cyber attacks, never mind quantum-powered AI attacks. Now is the time for organizations to make sure they're using AI in their defensive strategies to help stop basic threats. In 2025, it will be increasingly important for security teams to ensure their security tools have AI capabilities for the more monotonous work, like effective anomaly detection and opening and closing cases. There will be an even greater need for human analysts to focus on the threats that require more sophisticated decision-making, so passing off the monotonous work of weeding out false positives and catching easily detected threats to AI will be crucial for staying ahead of new threats. — Rakesh Nair, SVP of product and engineering, Devo

Malicious AI Will Help Some Victims Before Scamming Others

Victims worldwide have transnational criminal groups to thank for the proliferation of scams.  These criminals — often based in places that turn a blind eye to these sorts of crimes for a price — cracked the code of operating an efficient business.  Every business must contend with costs, so by soliciting and subsequently enslaving desperate people with offers of paid work, they have ensured that they have all of the 'manpower' needed while also maximizing their profits.  Applying modern AI tools, such as generative AI and deepfakes, will be a natural evolution in their business operations. So, in an ironic twist, AI will not have enslaved humanity but rather freed it — only to be used to further one of the most human sins: greed. — Al Pascual, CEO, Scamnetic

Data Protection Platforms Will Become a Focus

GenAI tools such as Copilot and ChatGPT have driven a significant growth in niche security tools that aim to control and monitor GenAI usage. However, at the heart of the issue with the risk of using GenAI tools is still the lack of a good data protection program. — Max Shier, CISO, Optiv

AI and ML to Revolutionize Cybersecurity

AI and ML will play an even larger role in detecting and responding to threats. Expect more advanced threat hunting tools and automated incident response systems. Cybercriminals will increasingly use AI to develop more sophisticated and targeted attacks, making it crucial for defense mechanisms to stay ahead. AI and ML will be leveraged to increase threat detection, monitoring capabilities and mitigating risks, while also increasing the threat landscape and enhancing adversary capabilities and execution. AI & Machine Learning integration will continue to be prevalent: 2025 will bring us better efficiency, natural language use and better detection of threats utilizing AI. — Randy Lariar, AI security leader, Optiv

AI as a Double-Edged Sword in Software Security

AI will increasingly help coders, defenders, and attackers accelerate their work. By integrating AI with automated tooling and CI/CD pipelines, developers will be able to quickly identify and fix coding flaws. Defenders can leverage AI's ability to analyze massive amounts of data and identify patterns, accelerating the work of SOC teams and other blue-team operations. Unfortunately, attackers may also use AI to craft sophisticated social engineering attacks, review public code for vulnerabilities, and employ other tactics that will complicate cybersecurity in the near future. We need to learn how to secure AI before broadly deploying it for security purposes. — Christopher Robinson, chief security architect, OpenSSF

IT Service Management and GenAI

The evolution of IT Service Management and Infrastructure Monitoring through 2025 will spotlight a deeper issue within our IT infrastructure: the growing reliance on vulnerable systems that too easily gain our trust. As organizations increasingly depend on AI/GenAI and automation, we believe that Managed Service Providers (MSPs) will become critical partners in building robust security frameworks and third-party oversight, particularly as recent outages have shown how trusted vendors can lead to cascade failures affecting millions of machines worldwide. The most successful MSPs will be those who help organizations build redundancy and autonomy into their systems, moving beyond traditional solutions to address the fundamental flaws in how we approach security today. — Aaron Melear, VP Partnerships, Secureframe

How AI Is Worsening Phishing Attacks

AI is dramatically lowering the barrier to entry for creating sophisticated phishing campaigns — from deepfake voice calls to hyper-personalized spear phishing emails. But AI is also enhancing our defensive capabilities. At Secureframe, we're seeing organizations leverage AI to automate security control monitoring and detect anomalous patterns that could indicate compromise. The key is moving from reactive to proactive security measures, especially when it comes to employee security awareness training and vendor risk management. — Shrav Mehta, CEO and founder, Secureframe

AI Enhances Identity Security

Identity security will no longer be limited to traditional single sign-on (SSO) and multi-factor authentication (MFA) as the core of access control. Organizations will move to continuous monitoring before, during, and after authentication. Threat actors are increasingly targeting identity as a weak point, making it essential for organizations to safeguard user identities throughout their entire digital interaction. As a result, identity verification will evolve into an ongoing process that extends well beyond the login screen. Taking that step further, AI-powered identity management will transform access control by integrating with popular AI frameworks to monitor and analyze user behavior continuously. These AI-enhanced IAM systems will detect anomalies and dynamically adjust permissions based on real-time context, reducing the risk of unauthorized access. This shift will make identity management more adaptive, providing enhanced security while responding to users' changing behaviors and needs. — Neeraj Methi, vice president of solutions, BeyondID

GenAI Will Spark Rise in Synthetic Identity Fraud

Generative AI will introduce sophisticated new attack vectors, with synthetic identity fraud becoming a prominent method for unauthorized access. Cybercriminals will leverage AI to create highly realistic digital identities, posing significant challenges for traditional verification methods. To combat this threat, organizations must adopt advanced identity verification tools capable of detecting synthetic identities and monitoring for anomalies in real time. — Neeraj Methi, vice president of solutions, BeyondID

The Future of AI in Cybersecurity

Looking ahead, AI will increasingly shape cybersecurity. On the positive side, it can predict attacker behavior, assist in threat modeling and automate responses to security events through approaches like SOAR — 'Security Orchestration, Automation, and Response.' AI-driven systems will analyze vast amounts of data in real-time, identifying patterns and anomalies that might indicate a breach far faster than any human could. They will automate routine tasks, freeing up our skilled professionals to focus on more complex challenges. — Chris Gibson, CEO, FIRST

AI Threats on the Horizon

We're witnessing a concerning trend where bad actors increasingly leverage AI to enhance their attack methodologies. They'll use AI to create more convincing phishing emails, automate the discovery of vulnerabilities, and develop malware that can evade detection by traditional security tools. This creates an arms race in the cyber realm, where both defenders and attackers are constantly trying to outpace each other in AI adoption and innovation. — Chris Gibson, CEO, FIRST

Organizations Will Increasingly Turn to AI to Improve Security Posture

AI-powered threat hunting will play a crucial role in detecting and responding to advanced threats. As AI models continue to evolve, they will be able to identify sophisticated attacks that traditional methods might miss. By automating routine tasks and recommending effective response strategies, AI can significantly reduce the impact of security incidents and improve overall security posture. — Chris Scheels, VP of product marketing, Gurucul

Microsoft Copilot to Revolutionize Generative AI Adoption and Security

In 2025, Copilot will be one of the most innovative products released by Microsoft. Leveraging Copilot across multiple data sources within Microsoft 365 will drive greater adoption of generative AI in organizations from the ground up. Copilot for Security will also become a critical tool by integrating with a broad range of ISV-security plugins. This ecosystem of ISV-plugins will provide specialized tools, enabling Copilot to deliver enhanced, multi-layered threat detection. It will empower organizations to tackle complex security challenges more cohesively and proactively, while alleviating concerns about the secure application of AI across both internal and third-party solutions. — Sergey Medved, VP of product management, Quest Software

AI-Enhanced Social Engineering Scams Will Dominate Threat Landscape

The 2024 Bitwarden Cybersecurity Pulse survey found that 89% of tech leaders are already concerned about existing and emerging social engineering tactics enhanced by generative AI, underscoring the heightened risks. In 2025 people will likely adapt to more believable attacks, but the speed and sophistication of these threats may outpace defense measures. The best way to combat these threats will be layered security—combining passwordless solutions, multi-factor authentication (MFA), and continuous education for employees on identifying potential scams. — Gary Orenstein, chief customer officer, Bitwarden

AI in Cybersecurity Faces Reality Check in 2025

In the coming year, we'll see the initial excitement that surrounded AI's potential in cybersecurity start to give way due to a growing sense of disillusionment among security leaders. While AI adoption is on the rise — 89% plan to use more AI tools in the coming year — there is still cautious optimism within the industry. Many practitioners worry that adding more AI tools could create more work and as a result, vendors will need to focus on demonstrating value and proving ROI. Vendors will no longer be able to rely on generic promises of "AI-driven security" to make sales. Instead, they will need to demonstrate tangible outcomes, such as reduced time to detect threats, improved signal accuracy, or measurable reductions around time spent chasing alerts and managing tools. — Mark Wojtasiak, VP of research and strategy, Vectra AI

AI-Powered Threats and Deepfakes Redefine Cybersecurity

In 2025, it's a given that AI will continue to advance, and with that, AI-powered threats will also become more sophisticated, with deepfakes emerging more often, amplifying issues around misinformation and fake news. For small and medium-sized businesses, attackers will focus more on automated, large-scale attacks, leveraging AI to exploit vulnerabilities quickly rather than relying on intelligence-driven tactics. We can anticipate that larger enterprises will also be a prime target of AI-supported attacks, which will be more sophisticated and capable of adapting in real-time. As AI continues to evolve, attackers will have more tools at their disposal to exploit weaknesses, requiring organizations to adopt proactive defenses to stay ahead of the threat landscape. — Raffael Marty, EVP & general manager, Cybersecurity, ConnectWise

Focus on Model Security

Model security — specifically data security, data lifecycle management, and data telemetry — will be a top priority as commercial-off-the-shelf (COTS) foundational models drive quicker adoption of generative AI functionality across multiple industries: Enterprises can now build applications around COTS AI models, reducing the need to acquire and maintain specialized hardware, and affording Generative AI companies the opportunity to amortize astronomical training costs across multiple users. This has been a revolution in machine learning, but it carries a cost to security. The fact that there are a relatively small number of models serving a broad number of users, makes these foundational models tempting targets for adversaries both in terms of training and avoidance. We are applying generative AI to more tasks, and empowering generative AI with a degree of autonomy. This increases the responsibility for AI developers to demonstrate that the data they use to train and refine model predictions is clean, timely, and has provable lineage. We will see a greater need for tools which automate the track data usage throughout its lifecycle. — Joe Regensburger, VP of research, Immuta

Shifting Toward AI-Powered Resilience Frameworks

In 2025, I think a dominant cybersecurity trend will be the shift toward AI-powered resilience frameworks, especially in response to supply chain vulnerabilities and deepfake-driven social engineering attacks. As more organizations move to multicloud environments and third-party integrations increase, managing these extended supply chains securely will be crucial. AI will likely play a key role here — not just in detecting threats but also in making real-time adjustments to counteract attacks and secure data flows across multiple layers. Deepfake technology is also evolving rapidly, becoming a critical vector for social engineering attacks that can bypass traditional detection methods. This trend will likely push organizations to adopt multi-factor authentication processes that go beyond passwords and fingerprints, like behavioral analysis or contextual cues, to confirm user identity accurately. In my view, cybersecurity in 2025 will need to focus on building resilience by integrating predictive AI that can assess both the technical and human aspects of emerging threats, helping organizations adapt dynamically to this new landscape. — Ania Kowalczuk, VP of customer trust, MongoDB

AI Will Democratize Malware Creation, Opening the Door for a New Class of Cybercriminals

You won't need to be a coder to create sophisticated malware in 2025 — AI will do it for you. Generative AI models trained specifically to generate malicious code will emerge in underground markets, making it possible for anyone with access to deploy ransomware, spyware and other types of malware with little effort. These "hacker-in-a-box" tools will automate everything from writing to deploying attacks, democratizing cybercrime and increasing the volume and diversity of threats. — Steve Povolny, senior director, Security Research & Competitive Intelligence and co-founder, TEN18 by Exabeam

AI-Powered Attack Sophistication

By 2025, hackers will have access to dramatically advanced AI tools, transforming the threat landscape. Generative AI, with significantly improved reasoning abilities, will allow cyber attackers to execute highly realistic phishing scams, including deepfake voices and video avatars. Expect nearly flawless, real-time impersonations and highly complex automated probing for vulnerabilities, which could overwhelm traditional defenses. Organizations must implement AI-driven security tools that continuously learn from and adapt to emerging attack patterns, particularly to counter advanced social engineering attacks. Training employees to recognize AI-powered threats will also become essential. — Steve Wilson, chief product officer, Exabeam

Expedited Exploitation Cycles

With AI's ability to identify weaknesses faster than humanly possible, the time from vulnerability discovery to exploitation will shrink significantly. Attackers will leverage AI to automate the assembly and deployment of exploits, building on more complex attack strategies and rapidly escalating threats. To stay ahead, organizations must adopt predictive AI capabilities within their cybersecurity frameworks. Leveraging tools that utilize AI to simulate attack vectors will enable teams to proactively identify and patch vulnerabilities, staying a step ahead of threat actors. — Steve Wilson, chief product officer, Exabeam

Enhanced Defensive Capabilities with AI-Powered Copilots

On the defensive front, AI copilots will become indispensable in cybersecurity operations, speeding up threat detection, investigation, and response. By 2025, every cybersecurity operator will likely be equipped with a generative AI copilot, streamlining complex analyses and providing actionable insights in real-time. Companies should prepare to integrate these copilots, ensuring interoperability with existing security infrastructure and training operators to collaborate effectively with AI assistance. This dual human-AI approach will elevate response speed and precision, especially in high-stakes incidents. — Steve Wilson, chief product officer, Exabeam

The Battle Between AI-Weaponized Attackers and AI-powered Defenders Will Intensify

Malicious actors will increasingly use generative AI to create morphing malware — code that adapts and mutates to evade detection, making traditional defenses obsolete. These new strains of AI-generated malware will be more efficient and harder to trace. At the same time, defenders will lean on AI tools to streamline threat detection, asking more sophisticated questions and flagging abnormal behavior more quickly. — Kevin Kirkwood, CISO, Exabeam

AI Specialists Will Make Traditional SOC Analysts Obsolete

In 2025, traditional security operations center (SOC) analyst roles will rapidly decline as AI and machine learning take over routine security tasks. Organizations will prioritize hiring AI specialists who can interpret, manage and guide advanced AI-driven security systems. Threat hunting roles will surge in demand, as human expertise is needed to contextualize and act on AI-generated insights. Companies will no longer rely on generalist cybersecurity teams but instead seek highly specialized professionals to stay ahead of increasingly sophisticated AI-powered attacks. The future of cybersecurity jobs will hinge on human expertise paired with AI innovation. — Gabrielle Hempel, solutions engineer and TEN18 analyst, Exabeam

AI Applications Will Be Under Attack

Hackers will breach an AI application in 2025 — and then they will manipulate the AI application to cause problems in the target company. Organizations will need to start treating an AI application like a person, much in the same way as we did for bots not too long ago. — Bruce Esposito, senior manager of IGA Strategy and Product Marketing, One Identity

The Rise of Non-Human Identities

The hysteria around AI, and the generally unmanaged proliferation of GenAI, will heighten the danger of non-human identities in 2025. — Larry Chinski, senior vice president of Global IAM Strategy, One Identity

Shifting to AI-Driven Remediation for Stronger Cloud Resilience

In the cloud-native space, we anticipate a shift from prioritizing vulnerability detection to focusing on streamlined remediation, driven by faster, automated responses to security issues. With rising threat volumes, organizations will increasingly rely on AI-guided remediation, automated workflows, and contextual analysis to expedite fixes and reduce manual workload. Advanced tools will assign responsibility, provide targeted guidance, and adapt in real time, enhancing both accuracy and speed. This transition will strengthen cloud resilience, as organizations move from merely identifying risks to actively and efficiently closing vulnerabilities across their dynamic infrastructures. — Gilad Elyashar, chief product officer, Aqua Security

GenAI to Drive the Future of Cloud Security Against Evolving Threats

In continuation to last year, GenAI will continue to empower both attackers and defenders. Attackers can now use AI to generate complex, targeted phishing, deepfakes, and adaptive malware. In response, cloud-native security solutions leverage GenAI to automate threat detection and response across distributed environments, enabling real-time analysis and predictive defense. By 2025, using AI within cloud-native frameworks will be essential for maintaining the agility needed to counter increasingly adaptive threats. — Moshe Weis, CISO, Aqua Security

AI-Powered Detection and Response for Security and Compliance Alerts

Businesses are increasingly adopting GRC tools (which complement already-in-place security detection and alerting solutions) that can automatically flag and alert on potential compliance control violations within their systems and processes. Doing so has notably helped GRC teams identify and fix problems faster and with less manual effort. However, responding to all of these alerts can demand significant daily time and effort due to the need for human analysis in many cases. This is why I expect 2025 to be a year where we see increased adoption of AI-backed capabilities to help manage alerts across the board — from security detection and response to compliance monitoring tool alerting. AI-backed tools and capabilities can do things like find and consolidate redundant alerts, help set alert priorities and summarize alert data in context. They can also recommend the ideal course of action for responding to risks, allowing teams to operate more effectively. — Matt Hillary, CISO, Drata

AI Fighting Evil

It will be much harder for attackers to be successful in 2025 with the advancement of endpoint detection responses (EDRs) and security tools. These changes mean attackers will have to get really creative when coming up with new attack methods in order to bypass those advanced security measures. In terms of technology, an on-premises attack would be detected more frequently because EDR products are becoming more visible and introducing AI capabilities, which enhances their broader visibility of the system. Consequently, security teams can identify more attacks using AI algorithms, even if they haven't developed an algorithm specifically for a particular attack. — Ilan Kalendarov, security researcher, Cymulate

AI in Cybersecurity Will Bolster Defenses but Amplify Risks

In the coming year, organizations will face the challenge of balancing AI's security advantages with the mounting risks it introduces. While AI strengthens threat detection and response, attackers are equally adept at harnessing its power, rendering traditional employee training methods obsolete. Common indicators of phishing, like grammatical errors and unnatural phrasing, are vanishing as generative AI and deepfakes enable more convincing and sophisticated attacks. To combat these evolving threats, businesses must continually refresh employee training and adopt advanced AI tools, such as Microsoft's Azure sandbox, to maintain robust security control. — Jim Broome, CTO and president, DirectDefense

New Security Threats From Generative AI

By 2025, generative AI will be integrated into nearly every business and department, significantly boosting productivity. However, this will also introduce new security risks that organizations will need to address. Simply automating tasks won't be enough. A focus on secure automation and responsible AI practices will be essential. Additionally, creating cyber exploits will become easier, as the barrier to entry lowers. Individuals will need to think like hackers rather than relying solely on coding skills, making the cybersecurity landscape more complex and challenging. — TK Keanini, chief technology officer, DNSFilter

Rise of AI-Driven Human-Augmented Decision-Making in Identity Management

In 2025, we may see the first widespread implementation of AI-human augmented decision-making in identity management. Not all organizations are ready to configure systems to "just do it," that is, allowing AI to make decisions without human intervention; the industry will closely observe whether the human AI-augmented decision-making approach delivers value and can build trust. A key challenge to full automation of decision-making will be the transparency of recommendations and how humans can override automatically made decisions with feedback, adjusting the recommendation engine for future decisions. Decision makers need to feel confident that they can trust the recommendation and that their feedback is effective, because they're still accountable to the business when critical identity decisions are made without direct human oversight. — Paul Walker, field strategist, Omada

From Preventative to Proactive Security with GenAI Integration

Identity Governance and Administration (IGA) products will likely evolve into more proactive security tools. For example, offering real-time recommendations and insights to enhance IT security operations and maintain identity/data hygiene. Moving on from analysis of existing assigned permissions and incorporating user behavior information as well, especially from cloud/SaaS systems that can easily share these logs. Integrating Generative AI will be a key driver in this change to become more proactive. For example, intelligent notifications using desktop collaboration tools to deliver daily "messages of the day" with personalized suggestions to strengthen identity security posture. Traditionally focused on prevention, IGA will shift toward contributing to operational security and security hygiene posture. The adoption of new, user-friendly interaction methods, such as the Generative AI-powered natural language model, will drive this transformation. — Paul Walker, field strategist, Omada

Building AI Pipelines with Security from the Ground Up Will Be a Major Focus for Federal Agencies

Many global factors, including geopolitical conflict and the rising ransomware threat, have caused federal agencies to rethink their data security. Agencies are heavily investing in system detection to monitor for the intrusion of potential bad actors, but as the complexity of these threats increases, they've discovered that applying quick fixes to existing systems does not provide the level of security needed for such critical data. In 2025, I expect to see an increased focus on building security software from the ground up (leveraging security-first platforms like Rust) as opposed to inserting solutions along the way, following White House guidance from earlier in 2024. — Tobie Morgan Hitchcock, CEO and co-founder, SurrealDB

AI Will Help Cybercriminals Improve Their ROI on Attacks

In 2025, cybercriminals will continue using AI to enhance the effectiveness and scale of their attacks — and they'll likely reach record levels of ROI. Being able to better adapt messages on the fly and analyzing media and social media trends, attackers will use AI to craft more personalized and convincing phishing and social engineering campaigns, such as through email and text messaging. More people will fall victim to attempts to exploit their trust in colleagues and business leaders in order to compromise sensitive information. Credential stuffing attempts will also become more sophisticated as AI can be integrated with automated workflow to test stolen login credentials on a much shortener timeline. For example, attackers will impersonate employees through credential stuffing against services like VPNs with an improved success rate if they fall victim to a phishing email. — Michael Smith, field CTO, Vercara

Generative AI Will Lead to a Rise in Traditional Fraud Schemes

A new wave of traditional fraud is coming at us full steam ahead. With generative AI easily accessible to hackers, we're going to see more impersonation tactics posing a huge threat to our society. Hackers are quickly becoming more proficient in identifying vulnerable attack surfaces, and the human element is one of the biggest. For example, we can expect there to be more impersonations of police officers or high-ranking C-suite from Fortune 500 companies being generated by GenAI in efforts to gain access to login credentials, PII and more. As we enter 2025, there will be a bigger emphasis on identity protection measures as we learn to contend with impersonation issues. This means having stronger authentication methods like MFA and IAM tools that check for abnormalities for where and when credentials are being used and what they are trying to access. Leaning into these tools will be critical in combating this new wave of traditional fraud we will likely see ahead. — Mark Bowling, VP of Security Response Services, ExtraHop

AI Is a Double-Edged Sword in Cybersecurity

In 2025, AI will be both an offensive and defensive force in cybersecurity, each side pursuing control over critical data. Deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, as attackers increasingly leverage AI to create more sophisticated threats. In addition to deep fakes that challenge traditional authentication methods, other AI-powered attack techniques are emerging such as autonomous malware, social engineering, data exfiltration, and credential stuffing are significantly harder to detect. This will lead to an intensifying arms race between attackers and defenders, with AI at the center. This AI-driven evolution will fundamentally change cybersecurity, forcing organizations to rethink security strategies and invest heavily in AI-powered defense mechanisms to streamline security processes and detect threats faster. This dynamic has already begun to emerge in 2024, marking the first steps of an intensifying arms race centered on AI-driven strategies. As organizations adapt, new ethical questions will surface, especially around securing training data and AI autonomy in making security-critical decisions. — Ron Reiter, CTO/co-founder, Sentra

Malicious Use of Multimodal AI Will Create Entire Attack Chains

By 2025, malicious use of multimodal AI will be used to craft an entire attack chain. As multi-modal AI systems gain the ability to integrate text, images, voice, and sophisticated coding, they will be coming to threat actors who will leverage them to streamline and automate the entire pipeline of a cyber attack. This includes profiling targets on social media, crafting and delivering realistic phishing content, including voice phishing (vishing), sometimes finding zero-day exploits, generating malware that can bypass endpoint detection and deploying the infrastructure to support it, automating lateral movements within compromised networks, and exfiltrating stolen data. This hands-off, entirely seamless approach will democratize cyberthreats even more radically than malware-as-a-service offerings have in recent years, enabling less skilled threat actors to launch advanced attacks with minimal human intervention. Therefore, organizations and security teams, regardless of size, will face an increase in highly tailored cyberthreats that will be difficult to detect and combat. — Corey Nachreiner, CISO, WatchGuard

AI-Driven Cyberthreats on the Rise

The biggest cyberthreats in 2025 will stem from increasingly sophisticated, AI-driven attacks.
As AI evolves at breakneck speed, attackers are deploying machine learning models that adapt, disguise themselves, and evade traditional defenses in real-time. This creates a constant race between defensive and offensive AI technologies, making it harder to detect and combat cyberthreats.Avani Desai, CEO, Schellman

Cybersecurity Will Experience 'The Great AI Awakening'

I think 2025 is going to be the year of "The Great AI Awakening" among cybersecurity professionals. They're going to find out just how easily AI agents can be manipulated to act in unintended ways to carry out harm, including data leaks. When they do, the pace of AI deployment will slow to a crawl because of the amount of work security teams will have to do to retrofit current-day security models to address AI agents' vulnerabilities. Tools for managing identities in computing infrastructure have always operated on the assumption that the user is a human or machine. But that distinction will stop making sense in 2025 because these tools were never built for AI agents that straddle the line between human and machine. These agents will be subject not just to malware but also identity-based attacks at the same time. I don't think the cybersecurity community is prepared for the enormous ramifications of the risks these agents pose. Many AI deployments were implemented in 2024 under the assumption that AI would function as conventional software, without a dedicated framework to define what AI agents can or cannot do. But AI agents aren't conventional software. They behave in non-determistic ways like humans, and like humans, AI agents can be deceived. Researchers have already successfully manipulated AI assistants before into extracting sensitive user data by convincing it to adopt a "data pirate" persona.

The solution will be to treat all software and hardware powering it just like we treat humans from the security point of view. This paradigm shift will require consolidating the identity of AI agents with all other identities — engineers, their laptops, servers,  microservices — into one unified inventory that provides a single source of truth for identity, policy, access relationships and real time visibility of what is going on. — Ev Kontsevoy, CEO and co-founder, Teleport

AI-Driven Recruitment Scams Will Move from LinkedIn to Zoom as Threat Actors Get Bolder

In 2024, AI impersonation on LinkedIn took a startling turn, with threat actors posing as recruiters to target developers and engineering talent. These attackers used AI-generated personas to reach out under the guise of recruiting tests, tricking victims into downloading malicious files. What was once an email scam is now a fully immersive recruitment scam, underscoring the accelerated pace at which threat actors are maturing their use of AI. AI-generated social engineering attacks will evolve far beyond LinkedIn scams in 2025. As threat actors leverage more sophisticated AI, expect to see realistic AI-generated Zoom meetings used to deceive and exploit targets. These immersive attacks will bypass traditional security controls, creating a new wave of trust-based breaches. Companies relying on outdated defenses will be caught off guard as AI moves into more interactive environments, fostering deception on an unprecedented scale. — Steve Cobb, CISO, SecurityScorecard

Threat Actors Will Exploit AI by Manipulating Private Data

We are witnessing a fascinating convergence in the AI realm, as models become increasingly capable and semi-autonomous AI agents integrate into automated workflows. This evolution opens intriguing possibilities for threat actors to serve their own interests, specifically in terms of how they might manipulate private data used by LLMs (Large Language Models). As AI agents depend increasingly on private data in emails, SaaS document repositories, and similar sources for context, securing these threat vectors will become even more critical. In 2025, we will start to see initial attempts by threat actors to manipulate private data sources. For example, we may see threat actors purposely trick AI by contaminating private data used by LLMs—such as deliberately manipulating emails or documents with false or misleading information—to confuse AI or make it do something harmful. This development will require heightened vigilance and advanced security measures to ensure that AI isn't fooled by bad information. — Daniel Rapp, chief AI and data officer, Proofpoint

Under Scrutiny, AI Will Become an Essential Part of How We Do Business

A few years ago, cloud computing, mobile and zero-trust were just the buzzwords of the day, but now they are very much a part of the fabric of how organizations do business. AI technologies, and especially Generative AI, are being scrutinized more from a buyer's perspective, with many considering them a third-party risk. CISOs are now in the hot seat and must try to get their hands around both the 'risk vs. reward' and the materiality of risk when it comes to AI tools. CISOs are asking exactly how employees are using AI to understand where they may be putting sensitive information at risk. As a result, there will be increased scrutiny around how LLMs are powering AI tools. Just like food packaging labels (which first surfaced back in the 60's and 70's) tell us what ingredients are used in the creation of a food product, today's CISOs will increasingly ask, "what's in this AI tool, and how do we know it's manufactured and secured correctly?"  — Patrick Joyce, Global Resident CISO, Proofpoint

AI's Networking Challenges Come to the Forefront

AI adoption will continue to skyrocket in 2025. Enterprises deploying artificial intelligence are well aware of the business, safety, skills, and technical challenges associated with AI. But there's another issue most haven't prepared for that will come to the forefront in 2025: AI's networking challenges. AI apps put much greater strain on the network. They typically move significant quantities of data across long distances, and usually have to do it quickly to support rapid decision making. As a result, these AI workloads need a lot more bandwidth and other resources to deliver sufficient performance and work properly. There are also security challenges: AI adoption introduces new attack surfaces and other potential vulnerabilities into a network. As organizations move AI apps into production, they'll come head-to-head with these networking hurdles and realize they need an answer to safely achieve the results they're expecting from artificial intelligence. Aditya K. Sood, VP of security engineering and AI strategy, Aryaka

The Battle Between Attackers and Defenders

In 2025, AI will grow increasingly central to both cyber attacks and defenses, driving a significant evolution in the threat landscape. The commoditization of sophisticated attack tools will make large-scale, AI-driven campaigns accessible to attackers with minimal technical expertise. At the same time, malware and phishing schemes will grow more advanced, as cybercriminals leverage AI to create highly personalized and harder-to-detect attacks tailored to individual targets. However, there are two sides to every coin, and AI also has a key role to play in cyber defense. Cybersecurity solutions are advancing to combat the alarming surge of large-scale AI-driven attacks. This includes more AI-discovered vulnerabilities, as well as autonomous real-time threat detection and mitigation systems, powered by predictive analytics capable of anticipating and countering attacks — even before they occur. — John Bennett, CEO, Dashlane

AI Drives Both Sides of the Battle

Cybersecurity is rapidly evolving into an AI-powered arms race. With breach costs exceeding $4.88M and affecting 70% of businesses annually last year, organizations must uplevel their cyber defense strategies to keep pace. They'll need to specifically find ways to thwart increasingly sophisticated AI-based attacks, such as deepfake phishing and quantum encryption-breaking attempts. Furthermore, government bodies are often distributed, making them potentially more vulnerable to threats like deepfakes and sophisticated AI attacks. To mirror this, government requirements for security certification such as the Cybersecurity Maturity Model Certification (CMMC) are already being published at an accelerated rate. As such, government entities and the organizations that work with them will see more stringent security requirements in 2025 just by keeping their doors open. — Zack Moore, product manager, security, InterVision

Businesses Will Adopt Hybrid AI Models to Safeguard Data While Maximizing Results

Enterprises will embrace a hybrid approach to AI deployment that combines large language models with smaller, more specialized, domain-specific models to meet customers' demands for AI solutions that are private, secure and specific to them. While large language models provide powerful general capabilities, they are not equipped to answer every question that pertains to a company's specific business domain. The proliferation of specialized models, trained on domain-specific data, will help ensure that companies can maintain data privacy and security while accessing the broad knowledge and capabilities of LLMs. Uses of these LLMs will force a shift in technical complexity from data architectures to language model architectures. Enterprises will need to simplify their data architectures and finish their application modernization projects. — Mohan Varthakavi, VP of AI and edge, Couchbase

Security and Stability Will Remain Hurdles to Production-Grade AI

Driving applications from proof of concept to production in 2025 will remain an uphill challenge, with significant roadblocks rooted in security, maintainability and rapid technological evolution. Enterprises eager to leverage AI for competitive advantage have faced — and will continue to face — complex demands for privacy, compliance and production-ready deployment, particularly as more applications rely on proprietary data for training. Specifically, some enterprises are hesitant to have their data go outside their own secure environment when using external AI APIs and models. Therefore, maintaining data compliance will be a key requirement for production-grade applications. The applications need to operate within the security boundaries and data policies defined by individual enterprises. One solution could be observability tools, which can help organizations work toward production-grade readiness, providing end-to-end insights into model behavior, from data prompting to output validation.

In addition to security challenges, the rapidly changing developer ecosystem adds even more complexity, with a proliferation of new frameworks, tools, and platforms forcing teams to constantly adapt their technical strategies and skill sets. There's still a long road ahead — with these ever-present security and infrastructure challenges in mind, enterprises aiming to build more autonomously will require significant time, with production stability remaining an elusive goal. — Rahul Pradhan, VP of product and strategy, Couchbase

AI-Driven Cyber Attacks Propel Shift to Offensive Security Strategies

We stand at the intersection of human ingenuity and technological innovation, where the game of cybersecurity has evolved into a high-stakes match. With AI orchestrating cyber attacks like a skilled quarterback, organizations can no longer rely on a passive zone defense. They must embrace an offensive unified platform approach to stay ahead in the game. The real advantage will go to the organizations that can centralize their data, enabling AI outcomes we have yet to see, and make the decisions now that will enable their security and success for the future. — Nir Zuk, founder and CTO, Palo Alto Networks

AI Integration Raises Stakes in Cybersecurity

As organizations increasingly rely on AI to streamline operations and improve decision-making, securing these systems against cyberthreats becomes critically important. AI tools often have access to critical systems, applications, and sensitive data, and they are capable of making autonomous decisions that directly impact business operations. This level of access and authority introduces new risks, making it essential to manage the identities and access rights of AI bots with the same rigor as human users. This shifting landscape means prevention will no longer be enough — resilience and adaptability will take center stage. Rather than relying on static multi-factor authentication (MFA) defenses, adaptive authentication will dynamically adjust access controls based on real-time signals like user behavior, location, and device health. And with the help of generative AI, adaptive authentication will become smarter and more proactive, allowing organizations to contain breaches in ways that feel natural and unobtrusive. — Art Gilliland, CEO, Delinea

Emerging AI regulation will require CISOs to develop an even deeper understanding of legal frameworks and articulate a clear vision of the risks, security roadmap, and mitigation plan at all levels of the organization. AI-driven risk assessment and ethical considerations will also play a crucial role in shaping the future of cybersecurity. This convergence will require CISOs to navigate a complex landscape, balancing board-level legal and compliance communications alongside security design and implementation details to protect their organizations from emerging threats. — Josh Lemos, CISO, GitLab

Adding AI to Existing Software Products Will Lead to Large-Scale Security Incidents

Software vendors are rushing to add AI-enabled product features to their existing software, primarily by leveraging foundational models and OSS LLMs. As attackers uncover vulnerabilities in proprietary foundational models, expect them to leverage the models to harm by commanding the models themselves. Without model provenance and a deep understanding of the model guardrails, attackers could embed malware in models or exploit lesser-known attack surfaces in the model's feature space. As the industry increasingly relies on a few proprietary LLMs, these attacks could have cascading effects throughout the software ecosystem. — Josh Lemos, CISO, GitLab

AI Arms Race Continues as Adoption of New AI Services Outpaces Security Governance

Recent Gartner survey data shows that while 68% of Executives believe the benefits of AI outweigh the risks, only 14% are adding GenAI usage guidance to their Security policies and only 13% have effective data leakage tools. In 2025 we will see a marked uptick in adoption of new "agentic" AI services that are able to carry out tasks autonomously and routinely access data from multiple sources to complete complex tasks using multiple online services to complete the actions. Since these new tools and services will develop exponentially faster than the necessary AI governance tools and policies to apply "responsible AI" controls and because many organizations will not even be aware of how their staff are using many of these services, there will be an equivalent uptick in the number of data leakage incidents and other breaches, compromise of credentials and failure to comply with existing compliance and legislation. Equally, threat attackers will both seize-on the opportunity to harvest data and credentials from unsecured agentic AI services, plus they will also leverage AI agents themselves in malicious ways, to speed up, automate and supercharge their own attacks and threats, exploiting "blind spots in security tools" coverage and reach. — David Wiseman, VP of secure communications, BlackBerry

AI Will Continue to Drive Innovation & Expose Major Security Gaps

AI has had a profound impact on the way businesses operate and the speed at which they are able to innovate. By 2025, the continued rapid adoption of AI will spark an unprecedented wave of innovation, but it will also expose glaring gaps in security that have been left untouched — specifically when it comes to identity, which accounts for 80% of all data breaches. Rapid AI integration has occurred across industries; however, organizations are not considering the need for comprehensive security controls, leaving organizations vulnerable to sophisticated threats. In 2025, leaders must shift their focus from merely educating teams about AI risks to actively detecting and preventing attacks. One way we'll see organizations start to do this is by investing in end-to-end identity security platforms that break down the silos between identity providers and provide holistic security controls across all on-prem, cloud, and hybrid environments and doubling down on protecting identities. With the rapid pace of AI adoption and manipulation, siloed identity management tools and traditional MFA tools are no longer enough. Identity was misunderstood and unloved for so many years; it's finally getting the attention it needs. It's gone from a help desk ticketing thing where we provisioned to being mission-critical for a good cybersecurity program. Identities need to be checked continuously, especially amid the rise of sophisticated threats and DarkAI. — John Paul Cunningham, CISO, Silverfort

AI Security

Businesses are at a pivotal moment in AI innovation — a thrilling opportunity that comes with sharp risks.  AI is both a shield and a sword in cybersecurity: offering unprecedented potential to strengthen cybersecurity, while giving attackers new tools to exploit. As companies advance their use of AI, they must proceed cautiously. Success hinges on using AI thoughtfully and strategically, not just adopting AI for the hype but strategically deploying it where it truly adds value. — Siroui Mushegian, CIO, Barracuda

DORA Will Reshape Operational Resilience with AI-Driven Tools

In 2025, the Digital Operational Resilience Act (DORA) is set to reshape how organizations manage their data and ensure operational resilience. Recent events like the CrowdStrike outage highlight the critical need for resilience to move beyond a regulatory checkbox — it's now a strategic imperative. With over half of technology leaders admitting their companies are ill-prepared for today's regulatory demands in our latest research, companies will increasingly turn to AI for managing and monitoring corporate resilience. To keep up with these growing, stringent regulatory requirements, expect to see a wave of new AI-driven tools to help companies more closely monitor vulnerabilities and outages more closely. — Spencer Kimball, CEO and co-founder, Cockroach Labs

GenAI Tools and Deepfakes Move Down-Market, Fueling Rising Cyberthreats and Scams

2024 was a record year for elections worldwide. Approximately 4 billion people across 60 countries were expected to vote, including major elections in the US, UK, EU, Taiwan, South Africa, and India. GenAI made its mark on these elections, with sophisticated attacks meant to deceive voters and impact elections. 2025 will be the year these tools and techniques — deepfakes, targeted scams, social engineering, and more -- move down-market and become available to ordinary cyber criminals. Be on the lookout for fakes and scams across all forms of interaction: email, text, phone calls and video calls. The tools to create convincing fakes in all of these interactions are already, or soon will be, readily available to criminals who will use these tools to scam you out of money or reveal valuable information. —Robert (Bobby) Blumofe, CTO, Akamai

AI's Dual Impact on Cybersecurity — Boosting Productivity, Heightened Risk

In 2025, I expect we'll see a dual impact from AI on cybersecurity: increased productivity and heightened risk. People often prioritize efficiency over security without realizing that uploading sensitive data to large language models can lead to dangerous data leaks. Attackers won't even need to break into your private systems — they can exploit users who willingly share data with AI. This might look like attackers infiltrating AI chatbots to access users' input data. It could also look like bad actors creating fake AI chatbots with the explicit intent to trick users into sharing sensitive information directly. — Dror Liwer, co-founder, Coro

Injection Attacks Resurface as AI-Generated Code Opens New Vulnerabilities

As AI-driven coding tools become mainstream in 2025, injection attacks are set to make a strong comeback. While AI accelerates development, it frequently generates code with security weaknesses, especially in input validation, creating new vulnerabilities across software systems. This resurgence of injection risks marks a step back to familiar threats, as AI-based tools produce code that may overlook best practices. Organizations must stay vigilant, reinforcing security protocols and validating AI-generated code to mitigate the threat of injection attacks in an increasingly AI-powered development environment. — Randall Degges, head of developer and security relations, Snyk

AI Decision Support to Improve Human Safety and Efficiency

AI is providing insights and guidance that help industrial workers perform their tasks flawlessly and efficiently — reducing mundane tasks and freeing up the industrial workforce to focus higher value tasks that improvement business performance. Further, much like today's cars are outfitted with sensors and systems to improve driving safety by alerting the driver to hazardous conditions like approaching a car too quickly, AI combined with new sensors in industrial plants is providing guidance to assure plant operations remain safe. — Jason Urso, VP and CTO of Industrial Automation, Honeywell

In 2025, expect a surge in industry-specific AI assurance frameworks to validate AI's reliability, bias mitigation, and security. These standards will transition from "nice-to-have" guidance to critical requirements for organizations operating in regulated industries like finance, healthcare, and even critical infrastructure. The regulatory environment will push companies to establish formal AI governance programs that can provide verifiable evidence of fair, safe, and transparent AI operations, emphasizing accountability from design to deployment. Concretely, organizations will face pressure to adopt independent, third-party audits for AI systems to verify compliance with emerging regulations. Think of it as SOC 2 for AI—standardized audits will cover security, bias, ethics, and operational transparency, creating a new branch of compliance-driven "AI Assurance" that vendors must demonstrate in their third-party risk assessments. This push toward standardization will address the trust deficit in AI, making "AI assurance" a board-level conversation. — Bob Maley, chief security officer, Black Kite

Rise of AI-Powered Defensive Systems

As generative AI advances, prediction models will likely integrate AI more deeply. Instead of an "AI takeover," we'll see it supporting humans in making faster, informed security decisions. Security automation will help fill resource gaps rather than replace talent outright. — Bob Maley, chief security officer, Black Kite

The AI Bubble Will Burst, Leading Bad Actors to Pick Up the Pieces

It's the golden age of AI. Nearly every cybersecurity company claims to have it and promises it's the solution to solving security pain points while largely falling short on those promises. 2025 will be the year the AI bubble bursts. AI-enabled cybersecurity companies will struggle while attackers find new ways to leverage AI for attacks, leaving defenders lagging behind. Finding credible companies with staying power in AI to help combat the increase in threats will be key for companies to keep up in the evolving threat landscape. — Jeffrey Wheatman, SVP, cyber risk strategist, Black Kite

2025 Is the Year of the Geopolitical AI Arms Race

As AI drives the next wave of cyber strategy, the stakes have never been higher. Welcome to a new age of geopolitical tension, where AI will drive both attack and defense strategies in 2025, ultimately redefining how we approach incident response. AI systems will become increasingly essential for detecting potential breaches, identifying anomalies, and automating cybersecurity measures to address threats before they can cause significant damage. On the flip, AI is poised to revolutionize attack strategies for cybercriminals, making it easier for them to execute large-scale operations with minimal effort. The net-net? AI itself isn't the issue — it's about whose hands it's in. — Sabeen Malik, VP of Global Government Affairs & Public Policy, Rapid7

Personalization Will Be AI's No. 1 Role in Digital Wallet Growth

AI will more frequently be built into digital wallets in 2025 to provide hyper-personalized experiences, prevent fraud, and give retailers and other businesses unique insights into their customer behaviors. — Oz Olivo, VP of product management, Inrupt

Blocked Theft Patterns Will Rise by Up to 25%

In the cat-and-mouse game of theft and loss prevention, would-be thieves continually adapt their techniques. But AI-powered solutions, fueled by data and scale, will continue to become increasingly precise. In 2025, AI is projected to identify new and evolving theft patterns. Everseen, which currently identifies over 30 loss patterns, predicts it will add 5-10 new micro-patterns in 2025. — Alex Siskos, SVP of strategy, Everseen

AI-Driven Threats Like 'FraudGPT' Intensify Risks

Fraud prevention and cybersecurity are increasingly intertwined, as cyber vulnerabilities are often exploited to execute sophisticated fraud schemes. Today, effective fraud prevention strategies must encompass cybersecurity measures to address these cyber-driven threats directly. Take "FraudGPT," for example — a tool designed to generate highly convincing scams and social engineering attacks. FraudGPT empowers fraudsters to craft personalized, deceptive messages that can exploit both human and system-level weaknesses. This kind of cyber-enabled fraud intensifies the need for strategies like those targeting Authorized Push Payment (APP) fraud, where attackers trick individuals or employees into authorizing transactions to fraudsters. Protecting against such attacks requires layered defenses and an understanding of how cyber and fraud risk converge, enabling organizations to counteract fraudsters who continually exploit these cyber vulnerabilities. — Galia Beer-Gabel, partner, Team8

Cybercriminals Use AI to Craft Persuasive Phishing Campaigns

As we enter 2025, AI is revolutionizing cyberthreats in concerning ways. Cybercriminals are leveraging AI to craft highly persuasive phishing campaigns that overcome traditional red flags. With AI tools, attackers — especially those operating from outside the U.S. — can generate highly convincing messages without easy-to-spot indicators like poor grammar or awkward phrasing. By analyzing targets' digital footprints, AI enables highly personalized attacks that are increasingly indistinguishable from legitimate communications. — Bill Murphy, director of security & compliance, LeanTaaS

The Role of AI and Collaboration in Combating Rising Cyberthreats

In today's cyber landscape, collaboration is essential. Building trusted partnerships is the lifeblood in strengthening digital defenses. With ransomware attacks on the rise, ensuring data integrity has become a number one organizational priority. AI-powered technologies are pivotal in enhancing cyber resilience, they also present opportunities for bad actors to exploit this innovation for financial gain. This highlights the pressing need for robust and adaptive cybersecurity measures. To address these evolving threats, detection must be rapid and ongoing as should the response to data breaches. A proactive approach to detection, analysis, and response empowers organizations to do everything possible to minimize disruptions, accelerate recovery, and protect sensitive data. By fostering trust and teamwork, businesses can better navigate the ever-changing cyberthreat landscape and ensure resilience in the face of ever evolving cybersecurity challenges. — Danielle Coady, vice president, Index Engines

AI, Resilience, and New Regulations Will Shape the Future of Defense

The cybersecurity landscape will continue to shift as both attackers and defenders advance their strategies. Cyberthreats will grow more sophisticated, while organizations refine their defenses and response capabilities to address breaches more effectively. AI and machine learning will play a dual role, empowering both attackers and defenders. Cybercriminals will increasingly leverage AI to bypass detection and complicate recovery, making it harder to restore systems. To counter this, organizations must rely on isolated, unaffected data copies and AI/ML-powered tools to detect and validate clean data for recovery. Regulatory frameworks will also tighten worldwide, with initiatives like NIS2.0 setting new and more stringent standards for prevention and recovery. Governments will also increasingly legislate to ensure organizations are better prepared for cyber incidents. Finally, organizations will adopt a "prepare for the breach" mindset. The growing frequency of attacks will drive businesses to prioritize cyber resilience as part of their overall security strategy. Data storage solutions and integrity will take center stage, ensuring organizations know what to recover and how to act swiftly in the aftermath of an attack. — Ian Rothery, EMEA channel manager, Index Engines

New GenAI Tools Will Enable Attacks of Unprecedented Sophistication and Scale

Adoption of AI tools like ChatGPT will continue to increase dramatically, driving rapid growth in the surrounding ecosystem of AI-augmented services, extensions, and browser plug-ins. Together with the introduction of even more AI-enabled phishing kits, these technologies will enable attackers in a multitude of ways, including writing better, more convincing phishing emails (in multiple languages). They will also be able to use APIs to automate the creation of more personalized, targeted, and polymorphic phishing emails — ultimately driving up both the volume of attacks and their success rates. And creating code to develop more authentic-looking, spoofed web pages such as login pages for Microsoft 365, Google Workspace, or login pages for industry-specific services for space such as real estate, legal services, healthcare, and higher education. As a result, we'll see higher click-through rates on fake landing pages, and the successful collection of individuals' credentials, which can immediately be turned around and used in devastating account takeovers. As all of the above takes place, security vendors will be working overtime to develop new, more sophisticated and reliable tools for the detection of AI-based content — including synthetic writing, videos, static imagery, and voice duplication — and AI-enabled attacks. — Eyal Benishti, founder & CEO, IRONSCALES

Fortune 500 Companies Will Standardize AI Security Architectures

Following a series of serious and high-profile data leakage incidents, driven by AI misuse, Fortune 500 companies will standardize their AI security architectures — marking AI governance as a board-level priority equal to cybersecurity. — Vaikkunth (Vaik) Mugunthan, CEO/co-founder, Dynamo AI

AI Transforms Junior Attackers into an Existential Threat

In 2025, malicious actors with relatively low technical acumen and proficiency will dramatically benefit from AI. AI will uplevel their skillsets and adversarial capabilities, enabling them to launch high-volume enterprise-wide attacks that were previously the domain of larger-scale criminal organizations. Further, AI will heighten these junior attackers' social engineering threat possibilities, with multilingual, highly credible text phrasing when it comes to manipulating people with official-sounding communication. — Leonid Belkind, co-founder and CTO, Torq

AI Tools Will Bleed into Security Teams

In 2025, expect security departments to adopt AI to keep up with the cybersecurity arms race. Early adopters will look to ML-assisted threat analytics to find patterns in attack behavior that better equip them to mitigate attacks. Dan Shugrue, Application Security Product Marketing, Digital.ai

AI-Aided Threat Monitoring Will Become the Norm

SOC managers have the unenviable job of searching mountains of data for actionable information. AI-aided threat monitoring, such as pattern recognition, anomaly detection, and general classification of data, will become necessary for security teams to surface the most urgent threats so that proper mitigation steps can be taken in a timely manner. Mike Woodard, VP of product management for application security, Digital.ai

GenAI Will Upend Traditional Security Methods — and Vastly Increase the Amount of Zero Days to the Detriment of Many

GenAI accelerates general understanding of people, processes, and technologies — and that will spur elaborate attacks including sophisticated phishing emails, deep fakes, vishing, and more. Not only that, but GenAI has robust search and analyze capabilities that can and will be used to surface unknown zero days and CVEs that haven't been patched. Already overwhelmed security teams will be further inundated with the need for more cyber investigations, and agile threat actors will continually gain advantage. Unless businesses adopt a new approach to secure their business at the data level, security teams will find themselves burnt out during what will be an especially stressful 2025. — Yogesh Badwe, chief security officer, Druva

Artificial Intelligence Risks

I think we can all agree that the sheer speed of AI services coming online in 2024 was surprising. This new era of technology means that in 2025, organizations will need to focus on the adoption of these powerful AI services and how to minimize the associated risks. To start the year off strong, organizations shouldn't just be focusing on the risks from the application level, but also the model level itself. Organizations need to get serious about LLM risks to avoid becoming the next victim. We saw the possible damage that could unfold from the KnowBe4 AI cyber attack in 2024 — and it will only get worse without the right preparations. Ultimately, I think the realm of generative AI will continue to quickly advance to offer even richer services and features to users and businesses alike, however it will be up to the businesses to look at the data security risks that it poses as well. — Rodman Ramezanian, global cloud threat lead, Skyhigh Security

Tempering the Rise of RAG Threats

Retrieval-Augmented Generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources, enabling users to check claims, which in turn, builds trust. Attacks on RAG pipelines have been optimized to boost the ranking of malicious documents during the retrieval phase, now making Vector and Embedding Weaknesses one of OWASP's top 10 use cases for LLM Security. Rather than relying solely on static permissions, more dynamic methods such as Context-Based Access Control (CBAC) will come into play that evaluate the context of both the request and the response. By incorporating the user's role and behavioral patterns, specifics of the query and relevance and sensitivity of the retrieved data, CBAC blocks sensitive or out-of-scope information when necessary. — Elad Schulman, CEO and co-founder, Lasso

LLM Security's Achilles' Heel Is Surfacing: System Prompt Vulnerabilities

A new addition to OWASP's latest list for LLM Security, system prompts often act as both behavior guides and inadvertent repositories for sensitive information. When these prompts leak, the risks extend far beyond the disclosure of their content, exposing underlying system weaknesses and improper security architectures. System prompts are essential for steering LLM behavior, defining how an application responds, filtering content, and implementing rules. But when they include sensitive data (API keys, internal user roles, or operational limits), they create a hidden liability. Worse, even without explicit disclosure, attackers can reverse-engineer prompts by observing model behavior and responses during interactions. Companies should adopt best practices to avoid potential sophisticated exploits via system prompts such as separating sensitive data, red teaming LLMs and implementing layered guardrails. — Elad Schulman, CEO and co-founder, Lasso

AI Will Power Faster Threat Response While Fueling Advanced Phishing Attacks

It will come as no surprise that we expect to see a continuation of AI being used for both good and bad in security. The use of Large language models (LLMs) is still relatively new in security, but can really help to pull together threat and contextual information more rapidly, accelerating triage and investigation. In light of this, 2025 will almost certainly see more security platforms incorporating LLMs within their interfaces. In terms of the impact of AI on cyber attacks, AI-driven phishing will continue to be a major issue with the development of AI capabilities used to create cleverly crafted campaigns, so individuals and businesses are at greater risk. — Darren Anstee, CTO for Security, NETSCOUT

Secure, Customizable AI and Ethical Governance Take Center Stage

Companies will prioritize secure, customizable AI solutions that protect sensitive customer data while still leveraging the power of advanced analytics. AI governance frameworks will become essential for enterprises to ensure ethical use of AI in customer interactions and decision-making processes. Regulatory compliance in AI will drive innovation in transparent, explainable AI models for customer service applications. — Ashish Nagar, CEO, Level AI

Security as the Backbone of AI-Driven Enterprises in 2025

In 2025, data will be more valuable than ever as enterprises leverage AI to power their operations. However, as data's value grows, so does its appeal to increasingly sophisticated threat actors. This new reality will continue driving organizations to rethink their security frameworks, making data protection and rapid recovery the backbone of any AI strategy. Attackers are evolving, using AI to create more insidious methods, like embedding corrupted models and targeting AI frameworks directly, which makes rapid data recovery as vital as data protection itself. Businesses will need to deploy rigorous measures not only to prevent attacks but to ensure that if the worst happens, they can quickly restore their AI-driven processes. 2025 will bring a new era of security maturity, one where the ability to protect and quickly recover data assets underpins every other business process in an AI-first world. — Russ Kennedy, chief evangelist, Nasuni

Threat Actors Will Use GenAI to Make Their Operations More Effective and Efficient

Many of the GenAI use cases around creation, automation and virtual assistance that are being embraced by individuals and businesses will be adapted to support cybercrime. Whether it's helping to write scripts, uncovering vulnerabilities, analyzing data, or using copilots to assist with coding tasks, GenAI will help cybercriminals to increase their productivity, efficiency, and effectiveness. Barriers to entry for cybercriminals will be lowered, allowing novices to carry out attacks without coding know-how. We may see click through rates on phishing rise, as GenAI helps attackers to craft convincing multi-lingual, targeted lures. On the positive side, cybersecurity teams will harness AI to enhance threat detection and response, relieving the pressure on teams. Partnering with trusted AI security vendors will ensure organizations reap the benefits of AI, while being protected from new AI-assisted threats. — Alex Holland, principal threat researcher, HP Security Lab

AI-Powered Ransomware Set to Dominate 2025

In 2025, threat actors will benefit from AI's continuous enhancement, using it to craft highly successful ransomware campaigns. New models which are capable of analyzing massive amounts of public and stolen data, can and will be used to create "tailor-made ransomwares" to match "costumer's" situation and request the perfect ransom amount. AI-driven ransomware will automate attacking steps and even dynamic decision-making during the attack by identifying the most critical systems to target and adjust encryption speeds or scope in real-time, optimizing the attack for maximum success rate. Art Ukshini — Associate Threat Research, Permiso

AI Adoption Will Lead to More Non-Human Identity Risk 

AI adoption is creating new challenges when it comes to non-human identity management and security. A growing trend, termed "LLMJacking," involves threat actors targeting machine identities with access to Large Language Models (LLMs), and either abusing this access themselves, or selling it to third parties. This threat will escalate in the year ahead, amplifying the need for robust non-human identity security measures. Danny Brickman, CEO and co-founder, Oasis Security

AI Security and Safety Begins to Hit Its Stride

As the hype dies down and the real-world use cases of generative AI start to form, I expect the overall field of AI security and safety to mature significantly in 2025, addressing AI as a target, tool, and threat. Casey Ellis, founder and advisor, Bugcrowd

Integration of AI with Human Expertise in Pen-testing Approaches

While AI will handle routine and large-scale vulnerability scanning, human expertise will remain crucial for interpreting results and identifying nuanced or context-specific security issues. A collaborative approach will emerge where AI handles the heavy lifting of data analysis, and human pen-testers focus on strategic thinking and creative attack vectors. This synergy will enhance the overall effectiveness of penetration testing efforts. Julian Brownlow Davies, VP, Advanced Services, Bugcrowd

AI Security Liability and Accountability Will Be in Question

Organizations will continue to focus on securing all forms of AI for security vulnerabilities, bias, and data privacy. However, as organizations evolve, develop, and roll out agentic AI inline of core business processes (meaning that AI can make and act on its own informed business decisions autonomously), we'll see more liability and accountability events publicly surface when 'bad AI' calls are made. Nick McKenzie, chief information and security officer, Bugcrowd

Securing AI Models from New Threats

As AI becomes a core business asset, it becomes a prime target. The 2025 conversation will include "AI security" as a boardroom imperative, with new investments in safeguarding models from adversarial attacks, model theft, and data poisoning. We will see the first examples of security vetting for AI models, similar to existing security protocols. — William Falcon, founder and CEO, Lightning AI

Malicious Use of Multimodal AI Will Create Entire Attack Chains

By 2025, malicious use of multimodal AI will be used to craft an entire attack chain. As multi-modal AI systems gain the ability to integrate text, images, voice, and sophisticated coding, they will be coming to threat actors who will leverage them to streamline and automate the entire pipeline of a cyber attack. This includes profiling targets on social media, crafting and delivering realistic phishing content, including voice phishing (vishing), sometimes finding zero-day exploits, generating malware that can bypass endpoint detection and deploying the infrastructure to support it, automating lateral movements within compromised networks, and exfiltrating stolen data. This hands-off, entirely seamless approach will democratize cyberthreats even more radically than malware-as-a-service offerings have in recent years, enabling less skilled threat actors to launch advanced attacks with minimal human intervention. Therefore, organizations and security teams, regardless of size, will face an increase in highly tailored cyberthreats that will be difficult to detect and combat. — Corey Nachreiner, CISO, WatchGuard

Threat Actors Move to the Long Con

In 2025, attackers will intensify their attempts to target little-known but widely used third-party open-source libraries and dependencies to avoid detection and execute malicious attacks. They will also expand their focus on a "long-con" approach, where attackers target the software supply chain over a long period of time, building up a false reputation as a good faith actor rather than just a point attack. This could even involve impersonating or compromising reputable maintainers to enter the software supply chain. By quietly invading these trusted sources that many applications use, attackers can push malware, making the threat much more challenging for organizations and open-source ecosystems to detect and defend against. — Corey Nachreiner, CISO, WatchGuard

AI-Powered Social Engineering Will Outsmart MFA Protections

2025 will see cybercriminals deploying increasingly sophisticated social engineering tactics, leveraging AI to bypass even robust security measures like multi-factor authentication (MFA). What's driving this trend? AI's automation capabilities now allow attackers to craft highly personalized, convincing scams at scale. But there's more at stake: as organizations continue their migration to cloud and SaaS applications, the need to safeguard identity and access management (IAM) will become paramount. Companies that lag in fortifying their IAM strategies risk exposing critical assets to attackers using AI as their ultimate "skeleton key." — Andrew Costis, engineering manager of the Adversary Research Team, AttackIQ

AI as a Double-Edged Sword in Cybersecurity

Generative AI is lowering the barriers for unsophisticated attackers while amplifying the capabilities of advanced threat actors. Since the release of commercial generative AI tools, we've seen phishing attacks surge by 1,265 percent. The speed and precision these tools operate with are forcing security teams to rethink traditional defenses. — Ian Gray, VP of intelligence, Flashpoint

AI Transforms Cybersecurity

AI is transforming the threat landscape, making cyber attacks faster, more scalable, and more automated. While we must remain vigilant to how it's being exploited to undermine trust and compromise systems, AI also has immense potential when paired with human expertise. At Flashpoint, we leverage AI tools like Automated Source Discovery to empower our analysts, enabling them to uncover critical intelligence faster and disrupt adversaries effectively. — Josh Lefkowitz, CEO, Flashpoint

Emerging Threats Will Drive Demand for Varied Cryptographic Solutions and Agile Defense Strategies

Recent research, particularly from Chinese cybersecurity experts, underscores the complexity of modern cyberthreats. Organizations must develop multi-layered, adaptable cryptographic approaches that can quickly respond to evolving technological risks. — Michele Mosca, founder, evolutionQ

AI-Powered Attacks Will Force Shift Toward Integrated Network and Security Teams

The rise of AI-powered attacks will force organizations to finally dismantle the barriers between network and security teams. 2025 will be a watershed moment where the rise of AI-powered attacks forces organizations to finally dismantle the barriers between network and security teams. With more than half (55%) of security experts reporting they are concerned about the risk of a security incident due to a lack of collaboration between these critical functions, the need for integration has never been more urgent.

While the disconnect between these critical functions has long been a vulnerability, the escalating sophistication of AI-powered threats will make it impossible to ignore. Cybercriminals are increasingly leveraging AI and automation to launch highly adaptive attacks that traditional, siloed defenses simply can't handle. This new breed of threat will expose the critical weakness of disjointed security approaches, pushing organizations to the edge. The consequences of inaction, including breaches inflicting crippling damage to infrastructure, data, and reputation, will become too dire to ignore. As a result of prioritizing this convergence, organizations will achieve a more integrated, collaborative approach that improves threat visibility, detection, and response times. — Mo Rosen, CEO, Skybox Security

AI Will Do Identity Governance, and Identity Governance Will Do AI

In 2025, AI and machine learning (AI/ML) will drive a change in identity governance, automating complex processes like role management and access reconciliation. These technologies will analyze historical data and usage patterns to make a meaningful dent to the manual tasks required and the frequent rubber stamping. AI will predict access related risks and help mitigate them. However, the growing footprint of AI/ML across the enterprise introduces new risks: opaque decision-making models can make it impossible to predict which users can see what data and compromised AI systems could magnify vulnerabilities. CISOs need to implement robust governance systems to maintain oversight for critical access decisions, and govern AI projects across the enterprise to reduce the risk of data loss. AI/ML promises significant efficiency gains but must be deployed within secure, transparent frameworks to realize its full potential. — Nitin Sonawane, chief product officer & co-founder, Zilla Security

Phishing in the Era of Generative AI

In 2025, I expect to see hackers' mobile phishing toolkits expand with the addition of deepfake technology. I can easily see a future, especially for CEOs with a celebrity-level status, where hackers create a deepfake video or vocal distortion that sounds exactly like the top leader at an organization to further pursue attacks on corporate infrastructure, either for monetary gain or to share information with foreign adversaries. Currently, deepfake technology is extremely expensive to leverage, so the hackers that have currently been using it target widespread consumer type attacks often targeting crypto buyers with fake celebrity endorsements to gain the highest monetary rewards possible. However, as this technology becomes more accessible to consumers, I expect to see a new addition to the phishing toolkit with deepfake technology to conduct hyper-personalized attacks. — David Richardson, VP of Endpoint, Lookout

GenAI Becomes Cybercriminals' Key Weapon

In 2024, we saw the rapid evolution of generative AI, but 2025 will usher in a seismic shift in how threat actors weaponize it to orchestrate large-scale cyber attacks. I think the next major breach will be a direct result of generative AI models autonomously identifying vulnerabilities, crafting deceptive phishing campaigns, and even bypassing detection systems. GenAI is no longer just a tool; it’s often the single most important key for attackers. We can anticipate AI-driven malware capable of learning and adapting in real-time during an attack. While organizations increasingly integrate generative AI into their operations to drive efficiency, this reliance introduces significant risks. The potential exploitation of internal knowledge could lead to catastrophic consequences. The technology is here, and it’s no longer a matter of if such incidents will occur, but when. The stakes have never been higher. — Christian Geyer, founder and CEO, Actfore

Ransomware

Ransomware Threats Persist with Less Innovation, But More Financial Impact

While ransomware evolution may be slowing, its danger is more pronounced than ever. Threat actors aren't fixing what isn't broken. The golden age of ransomware innovation appears over due to many encryptors' source codes that have been leaked or shared, yet the payouts are rising at alarming rates. With larger organizations in the crosshairs, we expect ransom demands to surge past the average $2.73 million from 2024, as cybercriminals target high-value victims for bigger payouts. New extortion methods, such as double extortion using data leaks and triple extortion with DDoS attacks, and other technical innovations, such as accelerated encryption and attacks on virtualization servers, were common up to 2022. Despite all this, ransomware attacks are still rising in number year over year, and even more concerning is the rise of the groups that launch these attacks. This is despite international law enforcement operations that have broken large criminal cartels. Although we don't see much change in ransomware strategy in the coming year, we expect more cases and more organizations exposing victims on their data leak pages. — Daniel dos Santos, head of security research, Forescout Research — Vedere Labs

Boards Will Have Businesses Prioritizing Data Security … But Only After High-Profile Breaches

We've continually seen ransomware dominate headlines in a banner year for hackers, and many businesses are investing in security tooling to prevent attacks. But prevention clearly isn't enough, and 2025 is the year security at the data level becomes a board mandate. The consequences of a breach are simply too high, and it will be a point that a few unfortunate businesses will illustrate for the benefit of the rest of the industry. — Stephen Manley, CTO, Druva

Ransomware Threats in the Age of AI

Ransomware attacks increased by 81% from 2023 to 2024. While there was a notion that these attacks were going by the wayside, that's not the case. The introduction of artificial intelligence (AI) is fueling the advancements of these threats, and they will significantly increase in 2025. While AI has benefited the "good guys" by improving their products for consumers, bad actors are taking advantage of it to scale their attacks and capabilities. Companies will adopt broader cyber resilience programs focusing on AI as these attacks increase. — Todd Thorsen, CISO, CrashPlan

Ransomware Evolution

We're tracking a significant upward trend in ransomware attacks, becoming more sophisticated and targeted. Ransomware groups continue to evolve their tactics to increase pressure on victims. We're seeing cases where attackers not only encrypt a company's data but also exfiltrate sensitive information and threaten to release it publicly if the ransom isn't paid. This double extortion tactic puts immense pressure on victims and has unfortunately proven effective for cybercriminals. — Chris Gibson, CEO, FIRST

Financial Severity of Ransomware Attacks to Rise

Research from my company, Resilience, showed that the financial severity of ransomware attacks jumped significantly last year — by 411%. I expect that the financial impact of these attacks will likely continue in an upward trajectory, thanks to advancing attacker strategies, targeting of critical industry sectors, and rising ransom payment demands. — Justin Shattuck, CISO, Resilience

Ransomware Defense of Unstructured Data Becomes More Urgent

Traditionally, data protection has focused on mission-critical data because this is the data that needs faster restores. Yet the landscape has changed with unstructured data growing to encompass 90% of all data generated in the last 10 years. The large surface area of petabytes of unstructured data coupled with its widespread use and rapid growth make it highly vulnerable to ransomware attacks. Cyber-criminals can use the unstructured data as a Trojan horse to infect the enterprise. Cost-effectively protecting unstructured data from ransomware will become a critical defense tactic, starting with moving the cold, inactive data to immutable object storage where it cannot be modified. — Krishna Subramanian, co-founder and COO, Komprise

Ransomware-as-a-Service (RaaS) and Specialized Subservices Will Further Commoditize the Criminal Marketplaces

The importance of Initial Access Brokers will continue to rise due to the success of Information stealers and Loaders. Additionally, in 2025, we'll likely see larger and more successful ransomware groups enjoy heightened international attention from law enforcement organizations. With the increasing number of successful takedowns, extraditions, and arrests, some groups are expected to further fragment and rebrand themselves, but only a small percent might be deterred from continuing their cybercrime activities. — Balazs Greksza, threat response lead, Ontinue

Surge in Mobile-Specific Ransomware

Mobile-specific ransomware is a rapidly evolving threat that should be top of mind for every CISO. Zimperium's Mobile Banking Heist Report provides early evidence of this shift: in 2023, 29 malware families targeted 1,800 mobile banking apps with several showing early-stage ransomware capabilities. These tactics are tailored for mobile, signaling a move beyond data theft toward extortion and ransomware schemes designed specifically for mobile platforms. This trend is part of a larger increase in ransomware and extortion attacks across digital channels. According to the 2023 Verizon Data Breach Investigations Report (DBIR), ransomware or extortion was involved in nearly one-third of breaches, indicating a shift among traditional ransomware actors toward new methods, including mobile-focused extortion. This shift is further confirmed by the Thales 2024 Data Threat Report, which notes that ransomware and malware remain some of the fastest-growing threats, with 41% of enterprises facing malware-related breaches last year alone. With ransomware attacks growing by 21% in 2024, attackers are increasingly exploiting mobile platforms due to their unique vulnerabilities and often weaker security postures. For CISOs, this signals an urgent need to prioritize advanced app-level security, phishing defenses, and proactive monitoring in mobile environments, as the connectivity and sensitive data handled by mobile devices make them prime targets for the next wave of ransomware. — Krishna Vishnubhotla, VP of product strategy & threat intelligence, Zimperium

Consumers Will Be Testing Ground for Scamming Operations 

In the early stages of fraud in the cyber or digital arena, individual consumers were the target; now, after two decades of evolution of the cybercrime ecosystem, we see ransomware operators "big game hunting" enterprise businesses for tens of millions of dollars. Over time, layered defenses and security awareness have hardened organizations against many of the everyday threats. As a result, we have seen an uptick in actors once again leaning on individual consumers for their paydays. Pig butchering and sophisticated job scams are two examples that focus on social engineering outside of a corporate environment. We will see a resurgence in the number of less sophisticated threat actors leveraging alternative communication channels, such as social media and encrypted messaging apps, to focus on fleecing individuals outside of enterprise visibility. — Selena Larson, staff threat researcher, Proofpoint

No Slowdown in Ransomware Attacks

Ransomware attacks will continue to increase as phishing techniques continue to become more sophisticated. Also, in 2023-24, attackers began to document the corporate weak points they found to sell the information they gained to other hackers.  In 2025, attackers will see which weak points are still available for entry. — Cynthia Overby, director of security, Rocket Software

Ransomware for All

Ransomware will continue to be a major issue, affecting not only large corporations but also small and medium-sized healthcare organizations and even individuals. Last year, I highlighted the UHC/Change Healthcare issue, which personally impacted my wife, a doctor who owns her private practice and uses Change Healthcare for revenue cycle management. We've also seen incidents like the one with GM. Threat actors are finding good ROI in ransomware attacks and will likely double down. Barracuda published a threat spotlight on a campaign where threat actors targeted individuals by showing pictures of their homes and insinuating physical threats unless a ransom was paid. — Riaz Lakhani, CISO, Barracuda

Ransomware Will Continue to Target Legacy Systems to Maximize ROI

Legacy industries and organizations that have been around for decades and are responsible for managing a unique blend of hardware and software across continents — think airlines, railways, energy production, and the like — will be a top target for ransomware attackers in 2025. These organizations move large sums of revenue, and their systems generally aren't the most modern. Also, due to the sheer size of the business, they typically have smaller IT teams in-house and employ more outside services and third-party partners to help maintain those systems. This exposes them to more methods of attack, which bad actors are increasingly taking advantage of to secure massive paydays. As ransomware attackers get even more creative and targeted (thanks to AI), having a good backup system in place will be critical for success. If organizations — legacy or otherwise — don't have a means to restore to a good known state, before a malicious payload was distributed to the systems in question, they're going to find themselves paying hefty ransoms more often than not. — Mike Arrowsmith, chief trust officer, NinjaOne

Play, RansomHub, 8Base and Qilin Will Continue to Be Key Ransomware Players to Watch

There are a few ransomware players that will likely play a significant role in the ransomware economy in 2025. For starters, by the second quarter of 2024, Play established itself as one of the most active and innovative groups in the RaaS space. The group operates with tactics similar to the now-defunct Hive and Nokoyawa ransomware strains. RansomHub platform is another one to keep on our radar. It swiftly garnered attention for its high-impact attacks and advanced ransomware deployment techniques. The platform has distinguished itself by offering affiliates up to 90% of ransom payments, making it highly attractive to potential partners. Then there is 8Base. The threat actor emerged in March 2022, and has quickly become one of the most active and prominent threat actors in the cyber landscape. 8Base employs double extortion tactics, exfiltrating victim data before deploying ransomware, and is known for using advanced techniques to evade security measures. Last but not least is Qilin, which initially operated under the name Agenda before rebranding as a RaaS operation in July 2022. Qilin is written in Golang and Rust, making it capable of targeting both Windows and Linux systems. Rust, known for its security and cross-platform capabilities, provides excellent performance for concurrent processing, helping Qilin evade security measures and develop variants targeting multiple operating systems. — Jon Miller, CEO & co-founder, Halcyon

Prominent Ransomware Players Will Be Determined by the Sectors They Target

In the past, the most prominent ransomware threat actors were identified by the number of targets, price of the ransom demand, or the sophistication of their techniques. In 2025, we will likely see a major shift. The threat actors targeting the most vulnerable sectors like healthcare or other critical infrastructures will likely be seen as the most dangerous players because of the threat they impose on human lives, regardless of what tactics and techniques they use. — Jon Miller, CEO & co-founder, Halcyon

While critical infrastructure like healthcare, manufacturing and education will remain the primary targets of ransomware attacks and large sources of ransom payments, one industry that we may see more highly targeted is legal. Law firms and legal departments often hold the keys to very sensitive information and data. While they have been a target in the past, it is often kept hush because of the privacy of the information they are dealing with. In 2025, this may come to a head with more ransomware groups targeting legal information. — Jon Miller, CEO & co-founder, Halcyon

Sarcoma, Fog, KillSec, Meow Ransomware Will Emerge as Leading RaaS Groups

As we look ahead to 2025, a few ransomware as a service (RaaS) groups will quickly emerge. Sarcoma, which first debuted in October 2024, has quickly gained notoriety for its aggressive tactics and significant data breaches. Unlike some ransomware groups, Sarcoma doesn't publicly list ransom amounts. Instead, it leverages data leaks as a primary means of coercion. Fog ransomware is another key player to keep your eyes on in 2025. A variant of the STOP/DJVU family, it has quickly become a prominent threat with its swift file encryption and ransom demands in Bitcoin. It is expanding to more lucrative and high-end profile targets like critical infrastructure and financial sectors. KillSec, originally a hacktivist group tied to the Anonymous movement, transitioned to ransomware earlier this summer. Previously focused on government website defacements, particularly in India, KillSec's pivot represents a broader shift among hacktivist groups incorporating criminal tactics. It uses features like a C++-based locker, DDoS capabilities, and automated calls to pressure victims, while taking a 12% commission on each ransom paid through a Tor-account dashboard. Last, but certainly not least, is Meow Ransomware. After a brief hiatus, it is back and aggressive as ever. Associated with the Conti v2 ransomware variant, the group has become notorious for targeting industries in the United States with highly sensitive data like healthcare and medical research. — Jon Miller, CEO & co-founder, Halcyon

A New Generation of Hackers Will Wreak Havoc in Seemingly Comical — but Increasingly Nefarious — Ways

Ransomware group Hellcat generated buzz when they hacked Schneider Electric and demanded ransomware payment of $125,000 in baguettes. While silly visuals soon flooded the internet, it doesn't take away from the fact that this criminal act successfully extorted a sizable amount of data. This attack highlighted the visibility that "comedy" can bring to data breaches, and more are likely to follow suit in one-upmanship of ridiculous ransomware demands. It's critical for businesses to follow robust data security strategies or risk winding up the victim of not only data extortion, but amateur jokes as well. — Yogesh Badwe, chief security officer, Druva

Ransomware Threat Actors Will Become More Aggressive

When policyholders have backups and don't need the data threat actors stole, they refuse to pay. Threat actors respond by kicking it up a notch to scare someone into making a payment. They may call 911 and send the police to the CEO's house, their kids' school, or even send bomb threats. Thankfully these have all been empty threats so far. CIR responded to a case where the threat actors put a hit on the CEO saying that they would pay anybody a million dollars to produce the CEO's finger. These criminals and ransomware groups are not getting arrested or taken down nearly as quickly as we would like, so they have nothing to lose. They will keep getting more aggressive. — Leeann Nicolo, incident response lead, Coalition

Ransomware Surge Demands Strategic Recovery Plans

As ransomware and supply chain attacks are expected to increase, organizations will need a plan for fast recovery and business continuity. Attackers will continue demanding ransoms not only to decrypt but also to avoid the publishing of stolen data.  Some threat actors have moved to deleting data as part of their normal motions. If this gains traction in 2025, organizations will not have a method to recover by simply paying a ransom and hoping to get a working decryption tool. The only method of recovery will be backups, however data shows that backups do not typically survive these breaches.

According to Conversant Group's research, 93% of cyber events involve targeting of backup repositories, and 80% of data thought to be immutable does not survive. Being able to recover, but having no place to recover, will result in longer outages and increased business interruption costs. This will require strategic breach recovery plans that integrate real-time threat detection, adaptive defenses and incident response protocols. The most effective component of breach recovery plans is immutable backups, which are essential for fast recovery from breaches. The tamper-proof design of immutable backups guarantees the integrity of stored data and reduces recovery time while allowing for rapid restoration without the risk of reintroducing infected or corrupted files. — Brandon Williams, chief technology officer, Conversant Group

Phishing and Other Attacks

Vendor Risk Awareness Rises

In 2024, organizations started to understand that their cybersecurity isn't as strong as they previously thought — particularly when it comes to service providers and the supply chain. Hackers shifted to these vulnerabilities as they realized one attack can disrupt hundreds or even thousands of companies at once. It's not surprising given it not only increases hackers' potential ransomware payouts while also providing access to vast amounts of data that can be sold on the black market. As a result, in 2025, I hope more businesses shore their vendor security and management protocols. This should include auditing and understanding their vendor relationships and organizational dependencies, as well as requiring stricter vetting processes for these vendors throughout the entire lifecycle of the partnership. — Joe Oleksak, CISSP, CRISC, QSA, partner, Plante Moran

Growing Convergence of AI, AppSec, and Open Source

We will see the continued intersection of AI, AppSec, and open source — from malicious actors targeting open source models, the communities and platforms that host them, and organizations looking to leverage AI to address code analysis and remediation. Increasingly, we will see widely used OSS AI libraries, projects, models, and more targeted as part of supply chain attacks on the OSS AI community. Commercial AI vendors are not immune either, as they are large consumers of OSS but often aren't transparent with customers and consumers regarding what OSS they use. — Chris Hughes, chief security advisor, Endor Labs

AppSec's 2025 Focus: Cutting Through the Noise

Signal through noise will continue to be the name of the game for AppSec in 2025. Organizations are drowning in noise, findings, alerts and notifications. They are in desperate need of context and are looking for tools to not just provide insights around exploitation, exploitability, and reachability for better prioritization, but to take it a step further and move toward remediation and solutions that help not just find, but fix problems. 2024 is another year of record vulnerability and CVE growth, and modern solutions are needed more now than ever. — Chris Hughes, chief security advisor, Endor Labs

Lack of Visibility, Increased Uncertainty Heighten Challenges in Combating Third-Party Threats

The risks associated with third-party access have evolved into a pressing, existential concern for businesses, as their reliance on external vendors, partners, and contractors continues to grow. The vulnerabilities introduced by these relationships can no longer be underestimated, as they hold the potential to disrupt operations, compromise security, and erode trust.  Year after year, we've seen the impact of third-party attacks — for example, MOVEit breach in 2023 and Change Healthcare in 2024. These attacks have highlighted the critical need for mature security systems to manage these risks, as vulnerabilities in external software continue to expose sensitive data at an alarming speed. As organizations across all industries try to respond to the looming third-party threat, they are struggling. Organizations are increasingly unsure how the cyber attacks they suffered were perpetrated, due to limited visibility into how vendors are accessing their networks. These massive blind spots showcase a critical risk for all organizations in the year ahead. To mitigate these risks, organizations must embrace automation and analytics to enhance visibility and eliminate the guesswork surrounding third-party access. — Fran Rosch, CEO, Imprivata

Cybercriminals Will Get Crafty with New Custom Attacks on Routers and Perimeter Devices

In 2024, we saw threat actors increasingly targeting network perimeter devices, including routers, firewalls, and VPNs. In the first half of the year alone, 20% of newly exploited vulnerabilities focused on these assets, a trend we expect to persist with growing sophistication. Notably, advanced persistent threats from China have developed several custom malware for espionage on perimeter devices recently — such as ZuoRAT, HiatusRAT, and COATHANGER — and deployed those on thousands of devices across the world, supposedly as part of pre-positioning activities. Sophisticated targeting of perimeter devices through custom malware and other methods can lead to privileged access to networks, making them high-value targets for state-sponsored actors like China, with other countries like Iran potentially following suit in 2025. — Daniel dos Santos, head of security research, Forescout Research — Vedere Labs

Legacy OT Systems Will Be a Cybercrime Goldmine as Entry Point to Critical Infrastructure

With increasing integration between IT, IoT and OT devices, custom malware will increase threats to critical infrastructure similar to what's happening to perimeter devices. Botnets and other opportunistic IoT malware already include capabilities, such as infection via well-known OT credentials. By 2025, we predict a rise in attacks leveraging opportunistic malware that may disrupt operations. Legacy OT systems remain vulnerable. As we've seen last year in the water sector, too many assets and devices are unmanaged and exposed. If attacked, these systems can serve as an entry point to critical infrastructure systems. As demonstrated by the ongoing Russia-Ukraine conflict, critical infrastructure is at risk, highlighting that proactive vulnerability management is an urgent priority. — Daniel dos Santos, head of security research, Forescout Research — Vedere Labs

Autonomous Business Compromise Will Allow Cybercriminals to Steal Money While You Sleep

Business Email Compromise (BEC) could evolve into Autonomous Business Compromise (ABC) where AI will automate fraud with minimal human interaction. Cybercriminals will target AI-driven processes like supply chain management and financial planning to conduct high-stakes fraud without ever stepping foot in the target's inbox. This allows cybercriminals to carry out attacks without reliance on social engineering methods to trick an individual into making a payment. — Rik Ferguson, vice president of security intelligence, Forescout

Wealthy Americans Will Begin to Face the Threat of a Digital Arrest

Increasingly popular in Asia, criminals will find that the U.S. is fertile ground for a relatively new and lucrative scam known as the "digital arrest." An evolution of extortion and "bail money" scams, they threaten financially well-off individuals with arrest by a law enforcement agency, knowing that it will instill panic and fear. When faced with the prospect of jail time, many of their targets will be desperate for an alternative outcome, and these scammers are happy to oblige. — Al Pascual, CEO, Scamnetic

Third-Party App Stores Will Give Rise to a Scam App Explosion

As some of the world's largest and most influential companies, Apple and Google enjoy veritable monopolies on the digital app store ecosystem for their respective platforms. With U.S. courts demonstrating that they feel the same way, these companies will be forced to allow greater competition. This will take the form of allowing third-party app stores unimpeded access to iOS and Android devices, as well as support the efforts of these stores to offer comparable libraries of applications.  And much as these companies have warned, this will dramatically increase the number of apps that exist for the sole purpose of scamming victims, entirely controlled by criminals who exist to enable scams.  With a newly enthusiastic audience of device users who are gaining access to, or simply exploring, a third-party app store for the first time, these apps will find a whole new world of victims who are no longer out of reach. — Al Pascual, CEO, Scamnetic

Expanding Threat of OSS Supply Chain Attacks

Open source software (OSS) supply chain attacks will continue to expand. Reports show that supply chain attacks have risen significantly over the last several years. Open source developers and consumers will need to be more diligent in vetting the OSS components they use. The OpenSSF provides resources like the SIREN mailing list to warn of emerging exploits, OSV to track malicious packages alongside vulnerability data, and tools like Scorecard and GUAC to enhance visibility into dependencies. — Christopher Robinson, chief security architect, OpenSSF

The Growing Threat to Critical Infrastructure

Attacks on Critical National Infrastructure continue to intensify, requiring unprecedented collaboration between the public and private sectors. Power grids, water systems, healthcare facilities and transportation networks are becoming prime targets for cybercriminals and state-sponsored actors alike. The potential for widespread disruption makes these targets particularly attractive. The chaos that could ensue from a coordinated attack on a nation's power grid or healthcare system is a scenario we must be prepared to prevent and respond to swiftly. — Chris Gibson, CEO, FIRST

Insider Threats Will Branch Out to More Industries

Historically, financial service organizations have been a primary target for insider threats due to the high value of their assets and potential for fraud. However, as the digital landscape expands, other industries are becoming increasingly vulnerable. This includes sectors like healthcare, e-commerce, and critical infrastructure, where sensitive data and operational continuity are paramount. As a result, these industries are now investing more in insider threat detection and response solutions to protect their valuable assets. — Chris Scheels, VP of product marketing, Gurucul

SMBs and Highly Regulated Industries Will Be Highly Targeted

Due to resource constraints, slower adoption, and the high value of sensitive information they typically store, SMBs and highly regulated industries will be most at-risk in 2025. These sectors often prioritize access over security, creating exploitable vulnerability gaps. Remote workers will also continue to be a threat vector for bad actors, as home security postures are generally less robust compared to enterprise environments. — Gary Orenstein, chief customer officer, Bitwarden

We Are One Major Hack Away from Losing Our Phones

As surveillance technology advances, companies like NSO Group have shown that phones can be compromised without users even knowing. It's only a matter of time before one of these surveillance software providers faces a major breach and has its source code exposed. Once leaked, this code could quickly reach the dark web, fueling an entire new wave of "malware-as-a-service" offerings. This would give cybercriminals unprecedented access to our mobile devices, and the valuable data they hold. Mobile devices that are used for MFA will be useless too. Anyone who can see the screen can access the authentication token it provides and use it to gain control over accounts. Thus, mobile devices should not be part of the authentication process of MFA. — Spencer Parker, chief product officer, iVerify

Cybercriminals Will Keep Pace with Emerging Cybersecurity Tech

Emerging technologies will elevate cybersecurity in 2025, yet cybercriminals will keep pace, exploiting threats like supply chain vulnerabilities, ransomware, IoT botnets, and AI-driven social engineering. Ransomware groups now target critical services, making software lifecycle security and vendor verification essential. Rising IoT use demands industry-wide standards to prevent device weaponization in DDoS attacks and breaches. Meanwhile, cybercriminals' use of AI to craft targeted phishing challenges organizations to evolve their defenses. In this evolving landscape, fortifying supply chains, adopting IoT standards, and leveraging AI will be vital to staying ahead. — Bindu Sundaresan, director of cybersecurity, LevelBlue

Deepfakes Will Unleash a Devastating New Wave of Social Engineering Attacks

No longer just a theoretical risk, video-based deepfakes will become highly realistic and imperceptible from reality. This technology will be weaponized in social engineering attacks, allowing criminals to impersonate executives, forge high-stakes transactions, and extract massive payouts from unsuspecting victims. With AI making deepfakes accessible at the push of a button, the potential for financial fraud will explode, forcing organizations to rethink how they verify identity in an increasingly deceptive world. — Steve Povolny, senior director, Security Research & Competitive Intelligence and co-founder, TEN18 by Exabeam

2025 Will Bring a Wave of Triple Extortion Attacks Targeting Partners and Subsidiaries

Hackers are getting greedier and more sophisticated. In 2025, companies won't just face the theft of their data and ransom demands — they'll see attackers extort their partners, suppliers, and even customers. After locking systems and stealing data, hackers will squeeze not just the victimized company, but the entire ecosystem they work with, demanding ransoms from any organization with a connection. Triple extortion will become the latest method to maximize profits from a single attack, wreaking havoc across entire supply chains. — Gabrielle Hempel, solutions engineer and TEN18 analyst, Exabeam

Biggest Cyber Incident Will Involve a Little Known Company

In 2025, the biggest cyber incident will involve a company that most people haven't even heard of before, and the impact will be devastating to a small group of companies — just like we saw with CDK Global this past summer. In addition, we will see a successful deepfake attack on a Fortune 500 company in 2025. — Ann Irvine, chief data & analytics officer, Resilience

There Will be No Cyber Attack Causing a Nationwide Internet Outage

Some people fearmonger about a potential nationwide internet outage caused by a cyber attack. I just don't think that kind of thing will happen. I don't believe that AWS, for instance, will go down for more than 24 hours in 2025. Threat actors are aware that an attack at that scale would put them at extreme risk of being hunted down and sent to prison. The disincentives are too strong for them to take their attacks to that level. — Ann Irvine, chief data & analytics officer, Resilience

Advanced Cybersecurity Attacks Grow More Prevalent

The frequency of cybersecurity attacks has been steadily growing on a year-over-year basis for a number of years now, and I don't expect that to change in the near future (sadly). However, what will likely change in 2025 is the types of attacks that are successful. I am heartened by our collective ability to learn from weaknesses and improve over time. However, our adversaries continue to iterate and innovate their attack methods, which means companies will need to do the same for their defense techniques. By now, the typical business has finally become adept at adhering to basic cybersecurity standards and best practices. As a result, threat actors are increasingly focusing on more novel means of attack.

But this means that, in 2025 and beyond, businesses that want to be as secure as possible must focus on identifying and blocking new and emerging threats, in addition to sticking with the basic best practices that are at the heart of guidance from organizations like OWASP and NIST. Many organizations continue to take advantage of capable tooling that allows them to get the most ROI on their security efforts — especially those tools that provide the necessary context around identified vulnerabilities, misconfigurations and security research conducted by those security services / product organizations. — Matt Hillary, CISO, Drata

Bad Actors Will Leverage Personal Data & AI to Launch More Effective Attacks

The NPD and MC2 breaches that took place in 2024 will enable cyber criminals to leverage far more personal data, combined with AI-generated "deep fakes," to launch more realistic and effective phishing and spear phishing campaigns in 2025. Since the human element remains the most "hackable" security control, these attacks will likely lead to even more data breaches and/or compromise of control systems. When successful, spear phishing attacks can have devastating consequences, given the privileged access employees often have to sensitive data, financial transactions, and physical control systems. — Maurice Uenuma, VP & GM, Americas and security strategist, Blancco

The Future of BRICS Pay  

The money laundering ecosystem might get increased use of mixers; tumblers and the introduction of BRICS Pay may enable new payment routes for cybercrime operators. BRICS Pay envisions a more centralized structure within individual countries while maintaining a decentralized approach on an international scale. The introduction of BRICS Pay can make tracking more difficult due to its semi-local nature. — Balazs Greksza, threat response lead, Ontinue

Data Exfiltration and Extortion Will Eclipse Ransomware as the Primary Threat

In 2025, ransomware will increasingly be used as a precursor to larger attacks, where the real threat is data exfiltration and extortion. Attackers will leverage stolen data as a bargaining tool, especially in highly-regulated industries like healthcare, where companies are forced to disclose breaches. As a result, we'll see more sophisticated ransom demands based on exfiltrated data. — Jim Broome, CTO and president, DirectDefense

Mobile Phishing Attacks Will Become More Sophisticated and Evasive, Traditional Phishing Defenses Fall Short

Social engineering has evolved considerably over the past year. In 2025, I predict that "mishing," or mobile phishing, attacks will become so sophisticated and evasive that traditional tooling won't be able to detect it. We will see the rise of AI-driven mobile malware capable of mimicking user behavior, making it far harder to detect using traditional methods. Verizon's 2024 Mobile Security Index  revealed that AI technologies are expected to intensify the mobile threat landscape, with 77% of respondents anticipating AI-assisted attacks, such as deep fakes and SMS phishing. A notable example was identified by Zimperium's zLabs researchers on an Android-targeted SMS stealer campaign, which, to date, researchers have found over 107,000 malware samples directly tied to the campaign. In separate research, the zLabs team found a new variant of the FakeCall malware, revealing new traits present in this variant, including the ability to capture information displayed on a screen using the Android Accessibility Service. Similar to the above, we will continue to see the development of "stealth mobile devices," or devices specifically designed to circumvent typical security measures. This highlights a strategic evolution in mobile security — evasive cyber attacks are now the new normal, as cybercriminals are becoming more sophisticated in their mobile phishing attacks. — Nico Chiaraviglio, chief scientist, Zimperium

Non-Traditional Entry Points Will Escalate Enterprise Mobile Risk

Threat actors will increasingly exploit iOS shortcuts, configuration profiles, and sideloaded applications to breach enterprise security. Recent research unveiled the dangers of sideloading applications, the practice of installing mobile apps on a device that are not from the official app stores. According to Zimperium's 2024 Global Mobile Threat Report, financial services organizations saw 68% of its mobile threats attributed to sideloaded apps. In fact, zLabs researchers found that mobile users who engage in sideloading are 200% more likely to have malware running on their devices than those who do not. Riskware and trojans, applications that disguise themselves as legitimate apps, are the most common malware families found. APAC outpaced all regions in sideloading risk with 43% of Android devices sideloading apps. To protect against the risk that comes from sideloaded apps, enterprises must effectively protect their mobile endpoints by adopting a multi-layered security strategy including mobile threat defense and mobile app vetting. The prominence of trojans are highlighted in the report with the findings indicating that threats from sideloaded apps are primarily driven by riskware and trojans, which account for a staggering 80% of the malware observed. Additionally, Zimperium's threat data shows that approximately one in four Android devices face this issue. While sideloading is much more prevalent on Android, the recent Digital Markets Act (DMA) is expected to increase its prevalence on iOS. Cybercriminals are constantly scouring for ways to break in and compromise corporate networks. In 2025, they will ramp up efforts on targeting non-traditional entry points. — Nico Chiaraviglio, chief scientist, Zimperium

Emergence of Autonomous Malware

One under-the-radar development is the rise of autonomous malware. Unlike traditional malware, this next generation can operate independently, learning to bypass security measures as it moves through systems. These self-sustaining attacks refine themselves at each step, presenting a profound challenge for cybersecurity defenses. Few are prepared for this shift, but it has the potential to reshape the entire cybersecurity landscape.Avani Desai, CEO, Schellman

Third-Party Breaches Will Reach Critical Mass, Threatening Entire Supply Chains

As attackers zero in on the weakest links in supply chains, third-party breaches are set to shatter previous records. Vulnerable, smaller partners — often less equipped to fend off sophisticated attacks — are becoming backdoors to infiltrate larger organizations. This trend will force companies to rethink their risk management strategies entirely. In 2025, annual security reviews alone will no longer suffice as organizations adopt continuous monitoring of their supplier networks. This real-time approach to risk detection will become essential. Companies that rely on traditional security methods face two major threats: costly business disruptions and lasting reputation damage. As attacks spread through interconnected systems, even a single gap in supplier security could expose entire business networks. — Dr. Aleksandr Yampolskiy, co-founder and CEO, SecurityScorecard

Attackers Will Continue Targeting SaaS Applications

SaaS applications will continue to face increasingly sophisticated threats as adversaries exploit advancements in technology — especially AI. AI will enable threat actors to more easily uncover SaaS vulnerabilities and misconfigurations, bypass traditional security measures, and craft more convincing phishing campaigns. As AI becomes more capable and accessible, the barrier to entry for less skilled attackers will become lower, while also accelerating the speed at which attacks can be carried out. Additionally, the emergence of AI-powered bots will enable threat actors to execute large-scale attacks with minimal effort. Armed with these AI-powered tools, even less capable adversaries may be able to gain unauthorized access to sensitive data and disrupt services on a scale previously only seen by more sophisticated, well-funded attackers. — Justin Blackburn, senior cloud threat detection engineer, AppOmni

Enterprises Must Beware of Automated Attacks

Automation-driven perimeter breaches will remain prevalent in 2025, with large-scale reconnaissance, password spraying, and AI-powered phishing automation among the leading tactics. As SaaS platforms increasingly fall within the scope of these attacks, the potential impact of breaches will continue to escalate significantly. Enterprises must anticipate automated attacks by securing all internet-exposed resources. Today's attackers no longer selectively target; instead, they pursue any organization lacking a robust security posture. — Martin Vigo, lead offensive security engineer, AppOmni

Telecom Networks Are the New Stratum for Threat Actors 

In 2025, attackers are expected to target telecom and internet service provider (ISP) networks more aggressively, which allows them to bypass device-specific malware and focus on broader infrastructure vulnerabilities. This shift is driven by the interconnectivity of legacy telecom networks and the troves of data they contain, which make them prime targets for threat actors. By compromising these networks, attackers have access to a more extensive range of targets, bypassing traditional security defenses that are device centric and any need for device-specific malware. This trend will also likely increase the rate of real-time communications interception, which puts truly classified information at risk. A prime example of this evolving threat is the recent Salt Typhoon wire-tapping incident which intercepted court-ordered wiretaps, illustrating how telecom networks are becoming a primary vector for attackers. In the coming year, nations must prioritize security strategies at the network and infrastructure-level, dividing focus upon device-centric protections to safeguard critical communication systems from targeted, real-time attacks. 

Most importantly, an attack like Salt Typhoon is a warning sign for your organization. It reveals a level of risk most never intended to take: the risk that secrets that give you a competitive advantage — in the marketplace or on the battlefield — could be too easily exposed. Public telecom networks are primarily designed around reachability, which means security trade-offs often take place and can leave you inherently vulnerable. No doubt, telco and internet providers globally will be assessing vulnerable entry points and legacy systems comprehensively in an effort to boost resilience against espionage efforts. — David Wiseman, VP of secure communications, BlackBerry

Identity Spoofing Will Escalate as AI, Deepfakes, and Exposed Metadata Fuel Sophisticated Attacks 

Sophisticated identity spoofing will take poll position as a major concern in 2025 for two reasons: 1. AI and deepfake technologies and tactics used are rapidly advancing to appear more convincing and 2. Attackers will continue to leverage personal metadata and "listening data" such as voice and text from telecoms network breaches that gives them up-to-the-minute information to better target victims. Breaches such as that uncovered by AT&T in July, as well as the notable Verizon and T-Mobile breaches in October, have shown us how widely accessible user metadata and real-time communication information have become. This allows threat actors to tailor their attacks based on previous communications, making their impersonations harder to detect. In 2025, we will need to be more cognizant of where our data, even metadata, goes and how it's secured. This means, also being cautious about using consumer-grade messaging apps for work. As attackers weaponize the knowledge of communication patterns and collaborations, the threat of identity spoofing will reach new levels of danger, demanding enhanced security measures to safeguard personal information. We'll also see a greater emphasis on deploying robust secure communications solutions beyond the current, widespread capabilities. What does this mean? It means ensuring the encryption of communications and leaving no breadcrumb trail or stored metadata for attackers to harvest. What we know as "encrypted communications" now, like WhatsApp, doesn't cut it — we need to uplevel the technology we're deploying across the board. — David Wiseman, VP of secure communications, BlackBerry

Blurring Personal and Professional Boundaries Puts Employees, Organizations at (Cyber) Risk

Employees will continue to be at risk and expose their organizations if they blend their personal and professional lives on their devices, as it creates new entry points for cyberthreats. Senior executives and employees with access to "insider" information will certainly face heightened risks as they routinely access their organization's most sensitive data, as well as personal and restricted information. Using personal devices and unsecure networks while traveling or conducting sensitive communications, can expose critical vulnerabilities within organizations. Many high-value employees may overlook these risks, assuming their personal devices are safe, but simple practices like syncing with personal Apple/Google IDs can inadvertently expose sensitive data. Threat actors are becoming more motivated in their attacks, meaning they aren't attacking with a volume goal in mind anymore. They want to know that an attack will be successful, and they are willing to put resources behind it to increase the chance of success. While full compliance with established securities protocols is critical and a strong first step, we're finding (and will continue to see into 2025) that existing security controls are insufficient. There are cracks in any armor, which means to be truly robust, organizations must assess the communication methods they're using and fortify against pervasive interception tactics at both the network and device level. This means out-of-band encrypted networks and highly certified secure communications tools that do not share metadata like Whatsapp and Signal. — David Wiseman, VP of secure communications, BlackBerry

Unseen Vulnerabilities: The Hidden Risks of 'Free' Communication Apps in 2025

It is not only espionage at the network level that is of concern; mobile spying is on the rise. People should think twice about what they are sharing on so-called "free" messaging apps like WhatsApp and Signal. The perceived security of popular communication apps like these will face growing scrutiny as their vulnerabilities become more apparent in 2025. In fact, it was recently found that the group APT41 is using updates to the LightSpy malware campaign to infiltrate common communications systems, notably WhatsApp. A rule of thumb: If it is free, you are the product, and your data can be sold, moved, and targeted. This leaves users' metadata and personal information at risk of exposure or misuse by third parties. This concern goes beyond system availability; it's about the uncertainty surrounding who has access to sensitive information and what they might do with it. As attackers increasingly weaponize insights from this data, the risks surrounding these tools grow significantly. Many assume these widely used communication apps are secure enough for sensitive information, trusting that their internal security teams would intervene if they weren't secure. However, these platforms are often used without proper oversight or security controls, exposing both individuals and organizations to unnecessary risk. — David Wiseman, VP of secure communications, BlackBerry

The Rise of Cyber 'Ghost' Bots Will Spark a High-Stakes Cat-and-Mouse Game

The cyber arms race between bot developers and defenders will escalate as cybercriminals increasingly deploy "anti-detectable" bots with advanced evasion tactics, and DataDome's Advanced Threat Research found that fewer than 5% of businesses can adequately protect themselves and their customers from these "ghost" bots. Bot developers are using anti-fingerprinting headless browsers, a new tool that makes detection much more challenging. For example, last year Chrome's Headless mode was updated to achieve a near-perfect browser fingerprint, making these automated sessions nearly indistinguishable from real user sessions. In response, bot mitigation teams turned to CDP (Chrome DevTools Protocol) detection as a countermeasure, but bot creators quickly adapted, incorporating anti-CDP detection techniques and advanced anti-detect frameworks to evade these defenses. These anti-detect browsers excel at randomizing fingerprints, enabling bots to bypass basic security checks. Defenders will need to proactively stay ahead of these advancements, constantly adapting to anticipate the next wave of bot attacks and maintain robust protection against increasingly stealthy bot traffic. — Benjamin Fabre, CEO, DataDome

Fraudsters Will Continue to Deploy Basic Bot Attacks (and Get Away With It)

Basic bot attacks aren't going anywhere, even as bots become more sophisticated and scalable with the use of generative AI tools. DataDome's 2024 Global Bot Security Report found nearly 2 in 3 businesses were completely unprotected against basic bots. The most successful basic bots were the fake Chrome bots, with only 15.82% detected — leaving businesses at risk for layer 7 DDoS attacks, account takeover fraud and other automated threats. — Benjamin Fabre, CEO, DataDome

Advanced AI-Powered Bots Will Fuel an Unprecedented Wave of Misinformation

Advanced AI-powered bots will fuel an unprecedented wave of misinformation, putting social media platforms squarely in the line of fire. Malicious actors are increasingly deploying these bots to flood networks with false content, manipulating recommendation algorithms to amplify deceptive narratives through inflated engagement metrics. In 2024 alone, DataDome's Advanced Threat Research team found that sophisticated bots evade traditional CAPTCHA defenses over 95% of the time, mimicking real users with a high accuracy rate. What once required coding expertise to launch now requires minimal skills, making bot-driven misinformation campaigns easier and cheaper to execute at scale. Beyond the manipulation of public perception, these bots also pose a growing threat to user security by harvesting credentials and personal data. — Benjamin Fabre, CEO, DataDome

Bots Will Snatch Up High-Profile Event Tickets  

As the online ticketing market approaches $68 billion in 2025, bots will increasingly target high-profile event sales, creating a battleground for ticketing platforms and fraud prevention. The barrier to entry for bot makers has never been this low due to new bot frameworks, basic defenses like CAPTCHAs becoming less effective, and Bots-as-a-Service (BaaS) tools available for as little as $50. Even users with minimal technical skills can flood ticketing platforms and monopolize tickets at scale. The sophistication of bot attacks continues to evolve alongside the lucrative opportunities in cybercrime. The Taylor Swift ticket fiasco is a prime example of both the increasing sophistication of bots and the massive payday threat actors see in scalping tickets. For businesses that conduct transactions or handle sensitive data online, robust fraud detection has become essential. AI and ML-based fraud detection are increasingly vital for combating these threats. Unlike static defenses that rely on preset rules, dynamic learning systems can adapt in real-time, responding to evolving bot tactics and providing essential protection against financial and operational losses. — Benjamin Fabre, CEO, DataDome

Shadow IT Risk

The risk associated with shadow IT will grow significantly unless companies aggressively address it. With so many SaaS services being introduced by employees, contractors, or others as more innovative tools are available for easy deployment without a security review, there's a heightened risk of data leakage and general security threats. Additionally, the use of unsanctioned AI SaaS tools will increase, posing risks of downloading malicious LLMs or legitimate LLMs that have been tampered with. — Riaz Lakhani, CISO, Barracuda

Extremely Percussive Social Engineering

We will see very convincing social engineering attacks like never before. Threat actors will use AI to scale content creation, produce more persuasive content, and employ deepfake/voice replication for sophisticated phishing and social engineering attacks. Phishing already provides a good ROI for threat actors, and I fully expect to see high-quality phishing to warm up the target with layered follow-up social engineering tactics. — Riaz Lakhani, CISO, Barracuda

Bad Actors Will Develop Synthetic Online Personalities for Financial Gain

In 2025, I imagine we'll see a significant uptick in the presence of fabricated experts and audiences for sale. The phenomenon is already taking place, albeit at smaller, more hand-tailored scales. However, with the emergence of generative AI, deepfakes, and other forms of synthetic content, people will be able to create rather believable internet personalities with significant online presences, which will be able to gain sizable audiences by doing things like creating tutorials, producing articles, writing reviews, blogging, and even creating podcasts and video series. I also expect there will be a widespread effort to automate these personalities and content in order to establish substantial online circles that can then be offered up for sale. Alternatively, they can be used to promote, sell, or criticize whatever the highest bidder chooses. I think in most cases we'll see people initiate the process with some real content early on, in order to plant the seeds of trust and establish some credibility before they're used for more nefarious purposes. This will allow them to propagate messaging and influence more effectively, circumventing the screening technologies we have in place today. — Tyler Swinehart, director, Global IT & Security, IRONSCALES

Increasingly Creative Social Engineering Strategies Will Stymie Anti-Phishing Efforts

I think we're going to see threat actors use increasingly creative social engineering strategies to drive impersonation and other forms of attacks. For example, we're already beginning to see a growing number of voicemail phishing attacks, as well as attacks in which malicious links are being embedded in email attachments — both of which are proving to be some of the most effective phishing strategies in use today. At the moment, both of these strategies are particularly good at circumventing most email defenses.
And while attackers are using more AI to generate their attacks, it also seems that the market is less inclined to believe that AI alone is enough to defend against such attacks. — Shai Mael, director, global sales engineer, IRONSCALES

Infamy Is the New Payday

Cybercriminals aren't just in it for the money anymore — they're after clout. In 2025, fame and notoriety will drive bad actors as much as profit, fueling a wave of cyber scams with all the discipline of a 9-to-5. Hackers today work methodically, even showing dips in activity over weekends and summer breaks. Leaks on social media and press cycles will continue to motivate criminal behavior seeking their own place in the discourse. But they're also more audacious, combining financial schemes with a thirst for social influence. Expect a new breed of cybercriminals who don't just steal—they seek the spotlight. — Joshua Terry, director of product management, Aura

Attackers Will Hit SecOps' Soft Underbelly

With SecOps focused on front-line defense measures, attackers will focus on stack elements and settings that are typically under-protected and less tightly managed. SaaS misconfigurations, access control anomalies, and third-party integrations and gateways are prime examples. With SecOps' staff overwhelmed and burning out, advanced security automation such as hyperautomation can use Gen AI to manage and parse these systems and auto-remediate or escalate threats before they have a chance to take root. — Leonid Belkind, co-founder and CTO, Torq

The AppSec Arms Race Will Heat Up

In 2025, If it wasn't obvious before, no one in application security can safely take a break. With the advantage of AI, reverse engineering and attack tools will become even more sophisticated. Threat actors will use these tools and techniques to better understand the operation of apps, uncover their secrets, and make malicious use of APIs. Mike Woodard, VP of product management for application security, Digital.ai

Social Media Commerce and Data Security Challenges

Social commerce will continue to surge in 2025, but as mobile payments grow in popularity, they bring elevated risks of data breaches. These challenges will be addressed by focusing on mobile-first security, offering solutions that proactively prevent data leaks and fortify payment systems. With threats from bad actors using unconventional payment methods to harvest user data, security tools will position companies as trusted leaders in protecting customer privacy. — Shash Anand, SVP of product strategy, SOTI

Threat Actors Will Conduct Fraud Outside Official App Stores

With our Konfety investigation threat actors using a novel TTP in which they did not deploy malicious apps in the Google Play Store, instead deploying only "decoy" versions of malicious twin apps. This makes it harder for companies, like Google in the Google Play Store or Apple in the Apple iOS App Store, to remediate fraud in situations like this because the apps in the official store are not malicious and are only being used to enable fraud outside of the store. We'll likely see more of this type of fraud this year in which threat actors look for new techniques that make it harder to identify and eliminate their fraud campaigns using many of the methods we have consistently relied on in the past. — Lindsay Kaye, VP of threat intelligence, HUMAN Security

Off-Brand Devices Will Become a Major Target for Fraudsters

Google- and Apple-certified devices have many security mechanisms, and there's much more scrutiny into how they're produced than off-brand devices. So, it's easier to target off-brand devices with implants, malware, backdoors, and TTPs, which is similar to what we saw with the original BadBox, so that's something that we expect to continue, largely because cybercriminal threat actors are often driven by opportunism, meaning that they will often identify ways to receive a payout for minimal effort wherever possible. — Lindsay Kaye, VP of threat intelligence, HUMAN Security

More Threat Actors Will Take Advantage of Residential Proxies in Their Attacks

We saw a lot of threats involving the creation of residential proxy networks and the use of residential proxies in cyber attacks in 2024. Three different HUMAN investigations highlighted both of these residential proxy use cases. We expect threat actors to continue to use them in their attacks (and for others to invest in creating these networks) because they are so much harder for defenders to defend against, given that it looks like the malicious traffic is coming from peoples' IPs. — Lindsay Kaye, VP of threat intelligence, HUMAN Security

DDoS Attacks Will Become More Sophisticated

In 2025, I expect to see continued growth in the sophistication of DDoS attacks.  Just to clarify, I am talking about more widespread use of sophisticated attack vectors and attack techniques that were previously solely the proviso of sophisticated adversaries. The attack tools available now have simplified access to these capabilities, driving the strong growth in application layer and multi-vectors attacks that we reported in our recent Threat Report. For an ISP or MSSP, when these behaviors are coupled with multiple concurrent attacks to different target organizations, something we are seeing from some hacktivist groups, there is a growing challenge for the operations teams dealing with attacks. In some cases, these attacks can last for multiple days, with frequent rotation of attack vectors, requiring operations teams to constantly update their defensive strategies for each attack. The overall goal, from the attacker's perspective, seems to be to drive operational fatigue in defensive teams so that their attacks get through; however, with the right threat-intelligence, technology, and operational best practices, these attacks are being successfully managed. — Darren Anstee, CTO for Security, NETSCOUT

CIOs Face Growing Threats from Misconfigured Infrastructure and GenAI Vulnerabilities

The increasing proliferation of bad actors finding holes in improperly configured infrastructure will keep CIOs up at night. The cloud was touted as "secure as can be," and while that might be one of its capabilities, most organizations don't have the manpower or expertise to properly configure and maintain a truly secure environment. The biggest security hole in an organization has been cataloged by many social engineers. This takes another level with AI, where bad actors can generate realistic artifacts to trick employees. Further, GenAI needs to have a security perimeter where it can operate, and not provide access to data that it shouldn't. This is easier said than done, and due to the nature of GenAI being unable to test every scenario. A well-crafted AI security strategy is critical to ensure data security.Eduardo Mota, senior cloud data architect — AI/ML specialist, DoiT

Rising Opportunity Costs for Cybercriminals

The opportunity cost for threat actors will continue to grow. Adversaries continue to compromise human and non-human identities at a faster rate than ever. As the opportunity cost for adversaries increases, many enterprises are becoming more target rich for threat actors. Cloud environments and the AI services and SaaS applications housed within them are becoming incredibly valuable assets for threat actors to hijack and abuse. Paul Nguyen, co-founder and co-CEO, Permiso

Interactive Console UIs Emerge as Key Tool for Threat Actors

Threat actors will continue to use interactive console UIs throughout many stages of their attack lifecycle. Unlike command-line interfaces (CLIs), interactive console UIs provide a more user-friendly experience for managing cloud services that can simplify complex tasks, especially for less technically proficient attackers, worth noting that the noise generated by console clicks is often far noisier compared to the typically single-action CLI commands. Andi Ahmeti, associate threat researcher, Permiso

Expect an Escalation of Abuse of Inbox Rules

As attackers continue to refine their tactics, the abuse of inbox rules in compromised email accounts is likely to escalate during 2025. By using inbox rules they can conceal important security alerts, delete/hide incoming messages, or otherwise alter email flows in victim mailboxes they compromise. Andi Ahmeti, associate threat researcher, Permiso

Threat Actors Will Exploit Third-Party Software Flaws

Supply chain attacks will continue to rise in 2025, with threat actors exploiting vulnerabilities in third-party software, cloud services, and key suppliers. By compromising large providers, attackers will gain access to broader victim networks, amplifying the scale and impact of their campaigns. Isuf Deliu, threat research manager, Permiso

Personalized Extortion Scams Will Become a Growing Threat

The rise of personalized extortion scams, where cybercriminals research their victims using publicly available information, will redefine social engineering attacks. These schemes will use family names, relationships, or past events to create tailored threats, such as claims of unpaid debts or fabricated legal issues, pressuring victims into immediate payment via cryptocurrency. As cybercriminals adopt increasingly sophisticated techniques to exploit personal data, individuals and organizations must strengthen digital hygiene and educate themselves on recognizing and responding to these high-pressure, emotionally charged scams. Alex Quilici, CEO, YouMail

Holiday Shopping Scams Will Reach New Levels of Sophistication

Cybercriminals will increasingly exploit the holiday shopping frenzy with highly targeted scams such as fake package delivery notifications, fraudulent order confirmations, and phishing texts claiming missed deliveries. These attacks will leverage advanced personalization tactics, using data from past breaches to craft convincing messages that reference real orders, family members, or known shopping habits. Consumers can expect a surge in fake text messages mimicking major retailers, creating a heightened need for vigilance and education on identifying these threats. Alex Quilici, CEO, YouMail

Package Delivery Scams Will Dominate the Festive Season

Package delivery scams will become one of the most prevalent holiday threats, capitalizing on the surge of online shopping during the festive season. Cybercriminals will flood consumers with fake notifications about undelivered packages, tracking updates, and shipping delays, using trusted brands like UPS, FedEx, and USPS to lure victims into clicking malicious links. These scams will not only target financial information but also aim to harvest personal data for future attacks, highlighting the need for heightened consumer awareness and robust security practices during peak shopping periods. Alex Quilici, CEO, YouMail

Proliferation of Deepfake Technologies Powering Social Engineering Attacks

Criminals will harness advanced deepfake technology to create highly convincing fake audio and video messages from trusted individuals or organizations. These deepfakes will be used in spear-phishing campaigns and fraud schemes, making it increasingly difficult for individuals and businesses to distinguish genuine communications from malicious ones. This will lead to a surge in investment in deepfake detection technologies and stricter verification protocols. Julian Brownlow Davies, VP, Advanced Services, Bugcrowd

Perimeter Connectivity Devices Will Be an Exploited Hotspot

2024 was all about vulnerabilities and exploits on numerous perimeter (edge) connectivity devices. We will see this continue into 2025 as a vector, compounded further by multinational government/agency backed broadcasts and directives on their use. Nick McKenzie, chief information and security officer, Bugcrowd

Cybercrime-as-a-Service Will Flood the Market with New Attackers

The era of "cybercrime-as-a-service" will explode in 2025, making it easier than ever for inexperienced hackers to access sophisticated attack tools. This underground marketplace has created a whole new class of criminals, effectively democratizing cybercrime. As barriers to entry fall, organizations should brace for a surge in attacks, spanning from ransomware to data exfiltration. This expanding pool of cybercriminals, with access to ransomware kits and phishing services, will push companies to rethink their security posture as they confront an unpredictable threat landscape. — Andrew Costis, engineering manager of the Adversary Research Team, AttackIQ

Infostealers as a Persistent Threat

Infostealers have emerged as one of the most persistent and widespread threats in the cybercrime ecosystem. These lightweight malware programs are designed to scrape sensitive data, including credentials and cookies, which are then sold on underground marketplaces. Their popularity has grown because they are inexpensive, easy to deploy, and require little technical expertise. This low barrier to entry makes infostealers accessible to a broad spectrum of threat actors, from novices to highly organized ransomware groups. The surge in activity around infostealers is evident across illicit forums, where demand for "logs" continues to skyrocket. These logs, containing data from infected devices, are the backbone of an underground economy that fuels larger-scale breaches and ransomware attacks. For example, marketplaces like Russian Market show a steady stream of log uploads from families like Lumma, Stealc, and Vidar, which are poised to dominate the ecosystem heading into 2025. Credentials exposed in infostealer logs are a gateway to enterprise attacks. They serve as the first step in broader attack chains, providing initial access that often leads to more destructive outcomes, such as ransomware or data extortion. Flashpoint data shows a growing sophistication in how threat actors leverage these tools, particularly in bypassing security measures. Threat actors are constantly evolving their tactics, and the accessibility of infostealers and logs makes them an accessible tool for cybercriminals to gain a foothold. Monitoring these trends and strengthening defenses against initial access threats will be key to mitigating the risks posed by this pervasive malware. Real-time intelligence can significantly reduce exposure to infostealers and the downstream threats they enable. — Ian Gray, VP of intelligence, Flashpoint

The Extortion Landscape Continues to Evolve

As extortion tactics grow more complex, organizations must rethink their approach to resilience and redundancy. Threat actors are no longer relying solely on ransomware; they are employing layered campaigns — such as double and triple extortion — that combine encrypted data with threats to leak sensitive information, disrupt operations, and exploit third-party vulnerabilities. These methods amplify the stakes, making it essential for leaders to prepare for increasingly interconnected disruptions. Threat actors are expanding their leverage by targeting vulnerabilities across entire ecosystems, maximizing the potential for disruption. This underscores the importance of building resilience at every level of operations. — Ian Gray, VP of intelligence, Flashpoint

APIs Will Become the Prime Target for Business Logic Exploits

As AI weaves deeper into the fabric of businesses, the spotlight will shift to APIs as the new attack vectors. With APIs facilitating rapid data exchange, attackers will increasingly exploit weaknesses in business logic — the overlooked entry points where systems fail to properly validate or process data. As AI models grow more sophisticated, so will the tactics used to target these critical connections. In an era where APIs are the lifeblood of innovation, securing them will no longer be optional — it will be the frontline defense in the battle for data integrity and digital trust. — Randy Barr, CISO, Cequence Security

Cyber Attacks Targeting Political Figures and Campaigns Will Continue Post-election

There will be an increase in mobile phishing and social engineering attacks targeting political figures at all levels of government. We already saw high-profile attacks targeting the Harris and Trump presidential campaigns, but these attempts will become more common for state and local leaders. We can also expect an increase in the amount and quality of information hackers obtain from successful attacks. Without a definitive "line in the sand" from a policy standpoint to deter threat actors with vigorous repercussions, these attacks will become more frequent and increase in damage and sensitive data obtained. — Jim Coyle, US Public Sector CTO, Lookout

Identity Theft, Data Security and Privacy, and Fraud

SaaS Apps Receive Long Overdue Attention for Data Risks

A reckoning is coming for SaaS apps as businesses seek to understand how these apps use sensitive data and introduce potential security risks. We've seen supply chain threats emerge downstream as a result of overlooked data access, and IT leaders will scrutinize the interconnected nature of SaaS apps to mitigate potential risks. Sophisticated cyber attacks are accelerating this process, and we'll see industry demand for robust data protection skyrocket. — Stephen Manley, CTO, Druva

2025 Cryptocurrency Surge Will Drive Next-Gen Fraud Detection

As digital currencies grow, the sophistication of fraud, including money laundering and phishing will require more advanced detection methods. Emerging forms of AI, such as Neuro-Symbolic AI (NSAI) will combine pattern recognition, logical reasoning, and language understanding to identify suspicious transactions across decentralized platforms. By analyzing blockchain data, smart contracts, and transaction histories, NSAI will uncover hidden patterns of fraud, interpret the intent behind transactions, and distinguish legitimate trades from illicit activities like market manipulation. The unique abilities of NSAI will be able to flag high-risk transactions while providing clear, explainable reasons for the flags, helping regulators and industry players maintain transparency and compliance. — Dr. Jans Aasman, CEO, Franz

Fortifying Foundational Data Security

In a rapidly evolving digital world, our greatest defense is precision and deep awareness of where data resides and how it moves. The exponential pace of AI adoption has amplified opportunities and threats, demanding organizations go beyond conventional data protection strategies. To remain resilient, leaders must view data security not merely as a compliance requirement but as a continuous, adaptive process that builds trust and safeguards innovation. — Balaji Ganesan, co-founder and CEO, Privacera

From Reactive to Resilient: Elevating Data Security

Data security without proper governance is a house of cards. In 2025, effective access management must be woven into the very fabric of our operations, with controls that transcend boundaries and adapt as data journeys through complex, interconnected systems. — Don "Bosco" Durai, co-founders and CTO, Privacera

Adaptive Strategies for Modern Data Protection

Hybrid and multi-cloud architectures are the lifeblood of modern business agility. However, with great flexibility comes great responsibility. For 2025, we must enforce consistent, adaptive security policies that accompany data wherever it flows — cloud, on-premises, or edge. This is not just about safeguarding data but about building a resilient and trust-driven digital economy. — Balaji Ganesan, co-founder and CEO, Privacera

Rise in Synthetic Identity Fraud

As digital identities become more complex, a rise in synthetic identity fraud could pose an unexpected challenge. In these attacks, threat actors combine real and fake data to create entirely new digital personas that pass as legitimate. This could become a significant issue in finance, healthcare, and even social media, where identity verification processes are often automated and could be easily tricked. AI tools to detect anomalies in identity behaviors will be crucial to mitigating this trend. — Sam Peters, chief product officer, ISMS.online

Rise in Biometric Data Theft

As organizations increasingly use biometrics (such as fingerprints, facial recognition, and voiceprints) for security, the risk of biometric data theft will rise. Unlike passwords, biometric data cannot be changed once compromised, making such breaches particularly devastating. The stolen biometric data could be used for identity theft or even forged into digital profiles for social engineering attacks. Securing biometrics and investing in multi-layered biometric verification systems will become more critical. — Sam Peters, chief product officer, ISMS.online

Rise in Attacks on Wearable Tech

As wearable technology (like fitness trackers and smartwatches) becomes more advanced and widely used, the health data they collect will become a lucrative target for cybercriminals. Attacks on these devices could lead to privacy breaches or data manipulation that could impact healthcare decisions. This would force manufacturers to implement more robust data encryption and authentication methods for wearable devices. — Sam Peters, chief product officer, ISMS.online

The industry will recognize an Entra ID adoption ceiling in 2025, due to the persistent need for legacy systems and the management of existing effective policies that will require "pockets" of Active Directory (AD) usage. In this hybrid environment, where Active Directory and Entra ID coexist, organizations will need to adopt a comprehensive approach to address identity security threats. As the complexity of managing identities in such diverse environments increases, implementing advanced solutions like Identity Threat Detection and Response (ITDR) will become essential for maintaining a robust security posture and ensuring compliance with evolving regulations. — Richard Dean, senior manager of solutions architecture, Quest Software

The Rise of Digital Wallets

If 2024 was the year when cybercriminals became more sophisticated, and deepfakes came onto the scene at a record rate, then 2025 will be centered on how organizations choose to secure their business, adopting new technology to combat these threats while putting control back in the hands of customers.  As such, 2025 will be the year we'll see increasing adoption of digital wallets that safely secure digital identities and allow users to transact, travel, and verify identities at a moment's notice. According to a new study, 74% of consumers like the idea of digital wallets or ID cards that are kept on personal mobile devices, but barriers to adoption are top of mind. Businesses will start adopting a seamless, gradual, and approachable rollout to digital IDs, helping ease consumer concerns while progressing to more widespread adoption. Those that cannot offer a secure, customer-centric experience will be at risk of falling behind early adopters who embrace this new wave of digital interaction. — Patrick Harding, chief product architect, Ping Identity

The Perception of Personal Data Will Fundamentally Change

With major incidents like the National Public Data breach — which compromised billions of individuals' sensitive information —becoming more common, it's no longer reasonable to assume that your personal data is not compromised. Moving forward, individuals and organizations will need to operate with the expectation that personal data is already compromised by others. The focus will shift from solely preventing breaches to limiting attackers' movement within networks, to prevent additional data loss and even though data maintained may have been compromised elsewhere, organizations will need to make sure the data they are responsible for is not compromised as well. This approach will involve organizations requiring employees to lock down sensitive accounts, applying layered security controls and closely monitoring access to prevent unauthorized lateral movement to protect their most critical data. — Doug Kersten, CISO, Appfire

Mobile Security Platforms Will Increasingly Address Data Privacy Concerns, Not Just Security

Mobile security plays a crucial role in addressing the needs of data privacy. However, we often see mobile security with the lens of threat defense and application security. But regulatory compliance is a key piece of the mobile security function. I predict that in 2025, we will see mobile security prioritizing data privacy needs by implementing robust privacy-preserving technologies. According to Zimperium's 2024 Global Mobile Threat Report, 82% of organizations allow bringing your own device (BYOD) to work. And a recent survey from Tableau found that 63% of Internet users believe most companies aren't transparent about how their data is used, and 48% have stopped shopping with a company because of privacy concerns. We will likely see more regulatory compliance baked into mobile security solutions, particularly around data handling and encryption standards. We are already seeing regulatory shifts in the financial sector, holding app developers accountable for any harm towards their end users due to external attacks. Businesses are recognizing that regulatory compliance features are a necessary piece of the mobile security stack, and they are seeking mobile security platforms that address both privacy and security needs. — Nico Chiaraviglio, chief scientist, Zimperium

Organizations Will Prioritize Data Privacy in Vendor Selection

Data privacy will emerge as a crucial factor in vendor selection, driven by widespread concerns over data mishandling and breaches. Organizations will prioritize partnerships with vendors that demonstrate strong data stewardship and robust privacy practices. As consumers become more aware of how their data is managed and their data privacy rights, companies that fail to prioritize data protection will risk losing business. I also believe we will see the beginning of privacy regulation around using our likenesses in things like AI-generated content, where consent will become mandatory. — TK Keanini, chief technology officer, DNSFilter

Attackers Will Target Cloud-Native Environments to Disrupt Critical Company Systems and Shut Down Core Services 

As businesses rapidly adopt cloud native technologies like Kubernetes and service mesh, they often overlook specific security risks that make these environments appealing targets for attackers. In the coming year, cloud native and developer environments will become even bigger targets due to the surge in machine identities — like cloud access tokens, API keys and service accounts. Machines — from IoT devices to servers, and even the workloads that run on them — all require unique identities that, like human credentials, can be hacked to expose critical information. Machine identities now outnumber human identities by 45 to 1, and this gap is expected to widen, set to reach 100 to 1 soon. The risk of exploitation grows if these identities aren't consistently protected across environments — giving attackers more opportunities to exploit weak points. For instance, compromising a single service account — which relies on machine identities — can grant direct entry into sensitive resources, often with privileged access that allows attackers to move laterally across cloud infrastructures. As we move into 2025, this ability to exploit machine identities for unauthorized access will drive adversaries to focus more intently on cloud native environments. Successfully targeting machine identities gives attackers a clear pathway to admin-level control, that can enable everything from data theft to taking over — or shutting down — critical business services. — Sitaram Iyer, VP of emerging technologies, Venafi, a CyberArk company

The 'How' of the Threat Actor Landscape Is Evolving Faster Than the 'What'

The end game for cybercriminals hasn't evolved much over the past several years; their attacks remain financially motivated, with Business Email Compromise (BEC) designed to drive fraudulent wire transfers or gift card purchases. Ransomware and data extortion attacks still follow an initial compromise by malware or a legitimate remote management tool. So, while the ultimate goal of making money hasn't changed, how attacks are conducted to get that money is evolving at a rapid pace. The steps and methods cybercriminals employ to entice a victim to download malware or issue a payment to a bogus "supplier" now involve more advanced and complex tactics and techniques in their attack chain. Over the past year, financially motivated threat actors have socially engineered e-mail threads with responses from multiple compromised or spoofed accounts, used "ClickFix" techniques to run live Powershell, and abused legitimate services — like Cloudflare — to add complexity and variety to their attack chains. We predict that the path from the initial click (or response to the first stage payload) will continue to become increasingly targeted and convoluted this year to throw defenders, and especially automated solutions, off their scent. — Daniel Blackford, head of threat research, Proofpoint 

Smishing Goes Visual: MMS-Based Cyber Attacks Will Flourish in 2025

MMS (Multimedia Messaging Service)-based abuse, consisting of messages that use images and/or graphics to trick mobile device users into providing confidential information or fall for scams, is a burgeoning attack vector that will expand rapidly in 2025. Built on the same foundation as SMS, MMS enables the sending of images, videos, and audio, making it a powerful tool for attackers to craft more engaging and convincing scams. Cybercriminals will embed malicious links within messages containing images or video content to impersonate legitimate businesses or services, luring users into divulging sensitive data. Mobile users are often unaware that they are using MMS, as it blends seamlessly with traditional SMS, creating a perfect storm for exploitation. — Stuart Jones, director, Cloudmark Division, Proofpoint

2025 Will Be the Year of Deepfakes

The examples of deepfakes that we've seen in 2024 have been terrifying to say the least. From AI-generated images of faux Taylor Swift fans claiming their support for US President Elect Donald Trump to fake videos of Ukrainian President Volodymyr Zelenskyy bowing his head in surrender, deepfakes have largely targeted major celebrities and world leaders. In 2025, I predict we'll see the use of deepfakes taken to the next level and used as a core tactic in financially motivated cyber attacks on companies large and small. While business email compromise (BEC) certainly isn't going anywhere anytime soon, we can expect to see the use of deepfakes to accomplish similar goals. I also expect that deepfake technology will become increasingly commoditized in order for adversaries to use it on a larger scale and to target more "everyday" people. Deepfakes in the form of audio, video and image manipulation are on the rise and it's imperative that organizations plan accordingly. How? Educating employees on what deepfakes are and what they look or sound like, clearly outlining processes for employees to report incidents, and exploring the use of deepfake detection tools are all solid starting points. All in all, deepfakes got a decent amount of "screen time" throughout 2024, but the impact on businesses is going to skyrocket in 2025 as deepfake technology becomes commoditized. Jon France, CISO, ISC2

Identity-Based Phishing Drives Security Shift

Because hackers are placing more emphasis on identity-based phishing, security administrators will re-evaluate the need for "privilege" and focus their defensive management on reducing the number of privileged users and accounts, while implementing broader and deeper risk assessment processes to meet the new regulatory requirements. — Cynthia Overby, director of security, Rocket Software

Digital Wallets Will Go Beyond Credential and Card Management

The digital wallets that are built on consumer privacy and consent will better understand the connections between different aspects of our lives (health, travel, finance, etc.) to make helpful recommendations for us beyond a boarding pass before a flight. — Davi Ottenheimer, VP of Trust and Digital Ethics, Inrupt

Nation-State Attacks

Nation-State Attacks Are Poised to Become Greatest Threat to Critical Infrastructure

The sophisticated network of malicious actors officially and unofficially, being sponsored by adversarial nation-states like Russia, China, and Iran often conduct reconnaissance to identify vulnerabilities and entry points within systems like healthcare, water, energy, and telecommunications, often taking advantage of the interconnected systems of devices, technologies, and affiliated organizations in these industries rely on, leveraging them in an attack designed to inflict broad-scope pain and disruption. Such attacks can lead to severe consequences, from disruptions to providing care in both acute care facilities like a major hospital as well as other ancillary avenues of care for those in need, to supply water, food, energy and communication to a broad population. These attacks underscore the risk of significant damage during geopolitical conflicts, where these vulnerabilities could be exploited to cause widespread chaos and harm at a critical time when we need them most.

Nation-State Activity Heats Up

Geopolitical conflicts aren't going anywhere, and in 2025, we'll likely see increased threats from Russia, China, Iran, and North Korea. Many ransomware groups are backed by nation-state governments (such as Lazarus Group and BlackCat), and ransomware-as-a-service (RaaS) activity will escalate. Targeting U.S. organizations and critical infrastructure will be key targets of adversaries, as we saw last year with organizations like American Water and AT&T. Nation-state cybercriminals will also continue using AI to their advantage, such as using AI-backed misinformation bots, making phishing attacks more personalized and believable. They'll even go as far as impersonating public figures (similar to AI-misinformation campaigns around the election) or personally known individuals like family and friends. Compounding this is the increasing concern it's creating for consumers. A recent Vercara survey found that ransomware, nation-state attacks, and phishing are the types of attacks consumers are most concerned about in the new year. Businesses that don't prioritize protection against these attacks will be especially vulnerable, ultimately putting customer trust and data at a heightened risk. — Michael Smith, field CTO, Vercara

Nation-State Ransomware Groups Targeting U.S. Critical infrastructure More Than Ever Before

Nation-state ransomware groups are actively targeting our critical infrastructure, and threats will increase in volume and sophistication in 2025. Industries like healthcare will experience heightened risk for potentially devastating attacks driven by escalating geopolitical conflicts across Russia, China, Iran, and North Korea. As ransomware groups gather banks of sensitive information to bolster their operational intelligence base, attacks on critical infrastructure will likely be more effective than ever before. A degree of a "perception of weakness" in the U.S. is a vital piece of the puzzle behind the increasingly aggressive hacking, advanced persistent threats, and coordinated attacks we'll likely see this year. The U.S. can't afford to fall behind in defenses, especially as most of our critical infrastructure is connected to the Internet, making critical industries more susceptible to attacks. More controls must be put in place to prevent perpetration, such as investments in tools that improve visibility. — Mark Bowling, VP of Security Response Services, ExtraHop

Geopolitical Cyber Warfare and AI Alignment

The geopolitical landscape will grow more complex as governments expand their cyber warfare capabilities. With rising tensions, state-sponsored attacks are likely to escalate. Cyber operations will increasingly serve as extensions of diplomacy, exposing organizations to indirect risks from global rivalries. A critical, emerging concern is AI alignment — AI models tailored to serve specific geopolitical motives. These tools could be engineered to exploit vulnerabilities in a rival's infrastructure, targeting not only regions but also specific economic and political agendas.Avani Desai, CEO, Schellman

Nation-State Espionage Will Lurk Beneath the Surface of U.S. Infrastructure

In 2025, the Trump administration's national security priorities will lead to direct action against Chinese cyber operations. China will target more U.S. infrastructure systems through hidden network access points, particularly in compromised routers. Rather than launching immediate attacks, these concealed entry points serve as strategic assets for potential future conflicts. This approach of establishing quiet network access, combined with rising international tensions, this passive infiltration strategy will underscore the urgent need for vigilant monitoring of infrastructure vulnerabilities — vulnerabilities that could be activated when tensions reach their breaking point. — Dr. Aleksandr Yampolskiy, co-founder and CEO, SecurityScorecard

With a New Administration, Relentless Cyberthreats from Nation-States Will Test U.S. Defenses

The next U.S. presidential administration will face a surge in cyber aggression, with China, Iran, Russia and North Korea expected to ramp up their attacks. China may escalate operations against U.S. critical infrastructure as Taiwan tensions rise. Russia, exploiting Western divisions, is likely to deploy disinformation and DDoS assaults to destabilize NATO-aligned regions. North Korea, relying on cybercrime, will continue using ransomware and crypto theft to sustain its regime. With adversaries embracing AI-driven disinformation and sophisticated tactics, U.S. defenses must adapt swiftly. A pivot toward offensive cyber tactics and reduced international cooperation may strain intelligence-sharing networks when they're needed most. The administration will need to balance aggressive deterrence with strong public-private partnerships to protect critical assets, maintain stability, and the country's current research and economic advantage. — Jeff Le, VP of Global Government Affairs and Public Policy, SecurityScorecard

The New Battlefield: Geopolitics Will Shape Cyber Espionage and the Rise of Regional Cyber Powers

2024 has demonstrated that state-aligned cyber espionage operations are deeply intertwined with geopolitical dynamics. In 2025, APT operations will continue mirroring global and regional conflicts. The cyber espionage campaigns preceding these conflicts will not be limited to large nations historically seen as mature cyber actors but will proliferate to a variety of actors focused on regional conflicts seeking the asymmetric advantage cyber provides. Additionally, state-aligned adversaries will use cyber operations to support other national goals, like spreading propaganda or generating income. Targeted threat actors will likely leverage the continued balkanization of the internet to attempt to deliver their malicious payloads. — Joshua Miller, staff threat researcher, Proofpoint 

Critical Infrastructure and Corporate Data Face Rising Threat from Nation-State Cyber Attacks

Attackers will significantly impact global business systems and operations. Insurance and financial systems will continue to be focal points for attacks, but in 2025, we can expect critical infrastructure operations and corporate data to become a higher priority for nation-state threat actors. These attacks will no longer focus on ransomware using forward facing web applications, but instead focus on power grids and corporate data stored on critical hardware.  Lack of knowledgeable resources to manage security across an enterprise, and the lack of understanding and maturity around critical infrastructure vulnerability management within the C-level community will make for easy targets. — Cynthia Overby, director of security, Rocket Software 

Nation-State-Run Ransomware Groups Will Sharpen Focus on Critical Infrastructure Organizations

In early 2024, we saw ransomware groups shed what ethics they had to target large organizations that served critical functions for society — Change Healthcare and Ascension Healthcare being prime examples. Traditionally considered off-limits as targets, the disruption and urgency caused by such attacks create Catch-22 scenarios in which targets more reliably paid high ransoms for quick resolutions. For this reason, I expect we'll see increasingly daring ransomware attacks against critical infrastructure targets in 2025. In response, we'll see a sharp increase in disaster recovery demand to complement the defense solutions on the frontlines. — Bob Bobel, CEO & founder, Cayosoft 

Nation-State Breach

One of the major foundational model providers will disclose a nation-state breach. — Jason Martin, co-founder and co-CEO, Permiso

Peacetime Cyber vs. Wartime Cyber

In 10 years, we'll likely look back on this season as a defining period. As global tensions continue to escalate and cyber makes itself obvious as a theater of modern warfare, the operating assumptions of cyber defenders will need to change. The true value of solutions and strategies developed during a period of relative "peace" will be challenged. Casey Ellis, founder and advisor, Bugcrowd

Nation-State Actors Diversify and Continue to Get More Aggressive

As global alliances continue to evolve, generative AI and technique-sharing accelerates time-to-effectiveness, and the "spectrum of attribution" broadens, attribution will become more of a challenge. Attackers, aware of this phenomenon, will be emboldened and the trend towards effectiveness over stealth that we've seen globally over the past 5 years will accelerate. I'm interested in the role of grass-roots Civil Cyber Offense activities, such as the IT Cyber Army. Casey Ellis, founder and advisor, Bugcrowd

Nation-State Actors Will Blend In with Criminals

Nation-state actors, including Russia's Sandworm and China's APT 41, will dominate global cybersecurity concerns in 2025, with tactics evolving in complexity and stealth. These groups are now turning to widely available off-the-shelf tools, blurring the line between nation-state and financially motivated cybercriminals. But the real danger? The proliferation of zero-day exploits and sophisticated backdoors designed to evade detection for months or even years. This means that organizations, especially in critical infrastructure sectors, must adopt real-time threat detection to stay ahead of this mounting threat. — Andrew Costis, engineering manager of the Adversary Research Team, AttackIQ

The Convergence of Cyber, Physical, and Geopolitical Threats

The global threat landscape is undergoing a seismic shift, which Flashpoint calls the "New Cold War." Unlike the First Cold War of the 20th century, this conflict plays out across digital, physical, and geopolitical domains, with nation-state actors such as Russia. The convergence of cyber, physical, and geopolitical threats increasingly targeting multinational businesses demands a holistic approach to security. Organizations can no longer afford to view these domains in isolation. Unified threat intelligence is essential to identifying patterns, anticipating risks, and countering adversaries who operate across all fronts. — Andrew Borene, executive director of Global Security, Flashpoint

Watch for Surge in Russian State-Sponsored Cyber Attacks on Western Nations

If peace is brokered in Eastern Europe under the new administration, it might lead to a surge in state-sponsored cyber attacks against Western nations. With Russia losing ground in Ukraine and U.S. aid totaling nearly $183 billion, the likelihood of peace being brokered has increased. If the war in Ukraine ends or diminishes, Russia may (re)allocate its massive 13.5 trillion-ruble budget (over $145 billion) to fund nation-state-backed hacking campaigns against the U.S. and other countries. While Russia hasn’t paused its virtual warfare, a shift in focus could escalate attacks. This underscores a critical need for countries and organizations to strengthen their cybersecurity defenses and investments. — Christian Geyer, founder and CEO, Actfore

Nation-States Will Hijack 'Grassroots' Hacktivist Cyber Attacks to Wage a Silent War

Since 2022, we've seen hacktivism tactics increasingly leveraged in regional conflicts like Russia-Ukraine and the Middle East. By 2025, more nations will adopt hacktivist identities to carry out sophisticated cyber attacks — moving beyond defacements and DDoS to include massive data breaches and cyber-physical disruptions. With tensions rising, such as the conflict between China and Taiwan, we anticipate more nation-states will use hacktivist fronts to execute covert cyber operations. — Daniel dos Santos, head of security research, Forescout Research — Vedere Labs

Threat Actors Will Hijack Supply Chains with 'Invisible' Firmware Threats

Nation-state actors are increasingly weaponizing firmware supply chain attacks, embedding malicious code during manufacturing that bridges cyber and physical warfare. The recent compromise of communication devices by Israel demonstrates how firmware-level threats can have real-world impact. Traditional defenses and documentation, including Software Bill of Materials (SBOMs), are merely reactive and neglect to provide true visibility and detection of these risks and sophisticated implants. As IoT adoption grows, supply chain risks escalate, making it imperative for organizations to secure every step of the production and distribution process. — Rik Ferguson, vice president of security intelligence, Forescout

Cybersecurity Grows in Complexity

Looking ahead to 2025, I anticipate a surge in nation-state insider threats, with "fake" employees infiltrating companies to exfiltrate data or hold organizations ransom. We're seeing the dawn of a new era where the combination of AI-driven attacks, deepfake technology, and heightened regulation will make cybersecurity more complex than ever. AI will be instrumental both as a threat and a defense in 2025. From enhancing internal and external bots for automated GRC and audits to helping security teams scale against sophisticated threats, AI's influence on cybersecurity will be both powerful and unavoidable. — George Gerchow, head of trust, MongoDB

Balancing Global Cyberthreats and Basic Defenses

IT will have to manage an increasingly uncertain world in 2025. Nation-state-driven cyber attacks will become more frequent and sophisticated, targeting businesses and critical infrastructure. These attacks, combined with inevitable widespread outages affecting major service providers and platforms, will extend beyond companies and governments, making the impact more personal to consumer lives. As a result, IT will be under pressure to strike a balance to fortify their systems from being caught in the crossfire of larger global cyber conflicts and outages while also focusing on the basics — like assessing and securing the configurations of their identity systems. Preparing for complex, high-level attacks is essential, but so is ensuring that fundamental defenses, such as "locking the doors," are not overlooked. This dual focus will be crucial in building a cohesive strategy to mitigate evolving cyberthreats. — Bryan Patton, CISSP, principal strategic systems consultant, Quest Software

Geopolitics Influences Digital Connectivity

Increasing geopolitical tensions will shape digital interactions and domain usage worldwide, with nation-states influencing how information is accessed and shared, raising challenges for domain security and the management of online identities. We can expect a rise in phishing schemes, where malicious sites mimic legitimate ones using similar domain names. Despite these challenges, domain names remain valuable for establishing online identity and brand. This dual nature of domain names — empowering individuals while also posing risks — will define the future of digital interactions. Users must stay vigilant about how their online identities are shaped and potentially compromised. Awareness of these implications will be crucial for ensuring personal safety and maintaining the integrity of digital communication. In fact, individuals and businesses alike can protect themselves from new forms of phishing tactics and domain hacks with simple, effective steps. This includes asking a domain name registrar (or in some cases, the domain name registry) to securely lock a domain name so they cannot be fraudulently transferred away. It also requires a proactive and comprehensive plan to immediately address security breaches as they happen and quickly mitigate the associated risks. — Ram Mohan, chief strategy officer, Identity Digital

Increased Focus from Private Industry on Bolstering Cyber Defense Measures

With more aggressive nation-state hacking, advanced persistent threats, and coordinated infrastructure attacks, it's clear that cyber attacks are more often disrupting our economy, and more industries are recognizing that they have targets on their backs. In 2025, we will see the private sector start to continually work to get involved in efforts to boost information sharing to help industries get ahead of attacks amid rising geopolitical tensions. With more industry participation in ISACs (Information Sharing and Analysis Systems), we'll see a bigger effort in fostering a proactive cybersecurity culture, further enabling organizations to share information, resources, and ultimately stronger defenses. — Mark Bowling, VP of Security Response Services, ExtraHop

Quantum Computing

Post-Quantum Computing Will Be the Next Security Frontier

In 2025, Post-Quantum Computing (PQC) will take a big step forward as businesses and governments start adopting Quantum-Safe encryption to secure their data. With the National Institute of Standards and Technology having finalized the key algorithms needed for PQC, companies will soon be integrating these into their security systems. The move will also require updates like Java 21+, which is essential for managing quantum-safe encryption keys. For industries that deal with sensitive information, transitioning to quantum-resistant tech will be critical in staying ahead of emerging cybersecurity threats. — Avishai Sharlin, division president, Product and Network, Amdocs

Post-Quantum Cryptography

The important phrase that will become commonplace in 2025 will be "post-quantum cryptography." While the quantum epoch is still in the near future, "harvest now, decrypt later" for current cryptography will highlight the need to prepare the role out of quantum-secure algorithms as soon as possible. This will impact everywhere secure data has to flow, over public private, wired and wireless. — Brendan Bonner, innovation lead, Office of the CTO, Extreme Networks

Balancing Efficiency and Security

In 2025, organizations will increasingly prioritize email productivity solutions that balance efficiency with robust privacy and data protection. As these solutions streamline email workflows, enterprises will face a growing challenge: how to embrace these advances while meeting stringent security and compliance requirements. New entrants in this space will need to adopt enterprise-grade security practices from the outset if they hope to gain traction. Looking ahead, the next frontier in email security may come from quantum computing, which has the potential to completely disrupt today's standards. Though still on the horizon, quantum technology could quickly render current email security measures outdated, exposing communications to unforeseen vulnerabilities. As we saw with the rapid rise of AI, quantum advancements could arrive sooner than expected, making it important to start preparing even now. — Karl Bagci, head of information security, Exclaimer

2025's Biggest Expected Trend — Quantum Computing

Once quantum computers can crack AES-256 encryption, it will have devastating effects on all aspects of security. While quantum computing can aid in areas like weather prediction, my focus here is on its impact on security. The advancement of the technology could render many techniques in the MITRE ATT&CK framework, currently considered resilient, vulnerable to state-level attacks. Examples illustrating the potential risks include traffic encryption, extortion and leaked databases and password hashes, and the breaking of bitcoin and cryptocurrency algorithms. Organizations should begin preparing by adopting or planning to adopt post-quantum cryptography to safeguard against these emerging threats. — Sasha Gohman, VP, Research, Cymulate

Rise of Quantum-Inspired Cryptography

While the quantum computing revolution is still on the horizon, a more immediate concern could be quantum-inspired cryptography. Hackers may begin experimenting with quantum algorithms to solve traditional encryption problems more efficiently, weakening some existing cryptographic standards ahead of actual quantum computers. This pre-quantum era might see a rise in hybrid encryption methods that combine current and post-quantum algorithms long before full-scale quantum computers are widely available. — Sam Peters, chief product officer, ISMS.online

Post-quantum Cryptography Will Continue to Rise in Use

It's not certain that general-purpose quantum computers will ever work at scale, but if they do, the public will probably not know about them until many years later. Such computers would completely break traditional public-key cryptography. The Intelligence Community Studies Board estimated in 2018 that these computers would be unlikely to exist before 2028, but that year is quickly approaching, and it takes years to deploy massive changes like this. Thankfully, new post-quantum algorithms have been developed and are already being used in a few applications. I expect these new post-quantum algorithms will be increasingly implemented and used in 2025. They'll usually be added as an additional (hybrid) algorithm.
They are so new that it will be considered too risky to depend solely on these new algorithms. — David A. Wheeler, director of open source supply chain security, OpenSSF

Quantum Preparedness Becomes #1 Board-Level Cybersecurity Topic

In 2025, quantum preparedness will dominate boardroom discussions, becoming a top cybersecurity priority. This is not a fleeting issue like Y2K but a generational change with lasting implications. Advancements in quantum technology are raising alarm about the potential for quantum computers to break current encryption, threatening digital trust and business functionality.

The immediate challenge is identifying where machine identities — keys and certificates enabling secure machine-to-machine communication — are being used. This is the foundation of quantum readiness, as these identities must transition to quantum-resistant alternatives. For large organizations, this means replacing thousands or even hundreds of thousands of certificates. However, 64% of security leaders admit they "dread the day" the board asks about quantum migration plans, and 67% see the shift to post-quantum cryptography as a daunting task, given the lack of visibility into their certificates and keys.

The journey to quantum resilience starts. Companies will begin phasing out untrusted certificate authorities (CAs) and adopting quantum-proof systems. Platforms for certificate lifecycle management (CLM), PKI-as-a-service, and workload identity issuers are already available, offering streamlined solutions. These tools not only secure machine identities, but also provide a strong foundation for a successful transition to a post-quantum future. Kevin Bocek, chief innovation officer, Venafi, a CyberArk company

Future of Cybersecurity and Quantum Computing

By 2025, the cybersecurity sector needs to adapt to the imminent threat posed by quantum computing, which could one day break traditional encryption methods. Relying solely on traditional practices like routine software updates are no longer sufficient to defend against this evolving and advanced threat. As Cryptologically Relevant Quantum Computers (CRQCs) are anticipated by 2030, the push for quantum-safe security is intensifying. Additionally, advancements in quantum computing, particularly in error correction, will make it even more vital for government agencies and companies to implement quantum-safe encryption standards now. This proactive approach will help safeguard data in a rapidly evolving quantum era, ensuring security even as technology advances and quantum computers becomes a viable threat to existing cryptographic protections. Defense that relies on quantum-safe security measures including post-quantum cryptography (PQC) and Quantum Key Distribution (QKD) will ensure that data is protected from advanced hackers. — John Prisco, CEO, Safe Quantum

 

About the Author

Rick Dagley

Rick Dagley is senior editor at ITPro Today, covering IT operations and management, cloud computing, edge computing, software development and IT careers. Previously, he was a longtime editor at PCWeek/eWEEK, with stints at Computer Design and Telecommunications magazines before that.

Sign up for the ITPro Today newsletter
Stay on top of the IT universe with commentary, news analysis, how-to's, and tips delivered to your inbox daily.

You May Also Like