We are entering the year 2026 with a new military operation that resulted in the abduction of the president of Venezuela. During a press conference on January 3, the United States Chairman of the Joint Chiefs of Staff indicated that Cyber Command intervened to “create a pathway”, while the President of the U.S. stated that “the lights of Caracas were largely turned off due to a certain expertise that [the U.S.] have”, suggesting that the operation may have been supported by cyberattacks against critical infrastructure and foreign defense systems.
While it may now be commonplace for international confrontations to be the scene of such cyber assaults, our predictions already suggested in early 2024 that “critical global infrastructures – most notably energy” were exposed to global disruption whenever a conflict erupts.
As illustrated by the unfolding of these new events, which some might describe as unexpected, our predictions are intended to help our readers anticipate developments in cyberspace. Although it becomes harder every year to imagine plausible trends that we have not already forecast, we remain committed to providing decision-makers and cybersecurity professionals with opportunities to tackle uncertainty.
In a nutshell, here are our predictions:
- Towards further erosion of the states’ power in cyberspace: state authority in cyberspace will erode further as democratic oversight is replaced by a business oligarchy and a professionalized criminal economy;
- Continued efforts by the commercial cyber intrusion industry to shape regulations: the commercial cyber intrusion industry will continue to expand through investment and state adoption, leading firms to intensify lobbying efforts to weaken future oversight and protect their interests;
- More direct state responses to cyberattacks: states will try to implement a more direct posture against cyber threats, by increasing criminal prosecutions, publicly attributing attacks, and disrupting actors through exposure and offensive operations;
- Forced pivoting to an “assume leak” strategy: facing an ever-increasing number of data breaches and growing delegation of data processing, organizations might reduce their protection efforts in favor of attempts to make accessed data more difficult to use;
- AI-driven acceleration of fraud and cyber-extortion: the proliferation of generative AI will further lower the entry barrier for low-skilled actors and enable hyper-personalized, automated, and convincing deception campaigns, at a pace and scale that will increasingly outmatch the organizations fighting such threats;
- First physical damage or disruption resulting from AI exploitation: the integration of autonomous agents into industrial processes and physical operations will enable the exploitation of traditional AI vulnerabilities to cause the first “real-world” damage, errors or disruption;
- Development of systemic vulnerabilities in cyber detection and response processes: the growing delegation of critical cybersecurity tasks to autonomous agents will create systemic risks and vulnerabilities that degrade detection and response;
- Reaching maturity: systematic and automated “advanced” attacks (also known as “more of the same”): as threat actors industrialize “advanced” attacks by automating vulnerability exploitation, scaling mobile and supply-chain compromises, and intensifying data poisoning operations, they will achieve unprecedented speed and reach;
- Making the “Advanced” Great Again: the long-term maturation of government-backed cyber capabilities is expected to culminate in the discovery of a highly sophisticated and synchronized cyber operation.
Reviewing last year’s predictions
Before further developing our predictions, we first like to review last year’s, to check whether we were actually able to forecast events, facts or trends.
Most of our predictions for 2025 appeared relevant given the events that unfolded throughout the year. However, it was difficult to determine the extent to which a couple of these predictions had come true, often due to the lack of publicly available data. Finally, only one of our 2025 predictions does not appear to have been validated by the information available to us.
Further Internet balkanization and technonationalism
Last year, we anticipated that the “balkanization” of the Internet would intensify as national-technological identities continued to solidify, driving further fragmentation of the global communication and data processing infrastructure.
This fragmentation was most visible in Russia, which took decisive steps toward complete digital isolation in 2025. Roskomnadzor repeated “dry run” disconnection tests in several regions, notably supported by regional mobile providers and justified by national security concerns. Furthermore, a resolution signed in October 2025 established the legal framework for a centralized management regime of the national communications network, effectively allowing the Kremlin to sever “Runet” from the Internet at will as of this year.
In addition, China’s influence on the global digital landscape shifted from domestic defense to international presence. In September 2025, reports based on 2024 data leaks revealed that Chinese companies have been exporting the “Great Firewall” technologies to countries including Myanmar, Pakistan, Kazakhstan and Ethiopia. Associated tools allow governments to implement geofencing and individual tracking, standardizing authoritarian “cyber-sovereignty” beyond Chinese borders.
Network assets were also used for state-directed censorship and information control. Cloudflare Radar data from late 2025 revealed that nearly 50% of all major internet outages worldwide were the result of government-directed regional or national shutdowns.
Member states of the European Union also fueled the “split” by pushing a new “Data Sovereignty Framework”, introduced in late 2025, which envisions strict localization barriers to shield European data from extraterritorial reach, effectively fracturing the cloud market.
Globally, and as reported under the “Splinternet” label, this trend has evolved into a more aggressive form of technonationalism where connectivity is weaponized through semiconductor sanctions and selective infrastructure investments, reimagining the Internet as a patchwork of competing national interests.
Conclusion: Fulfilled ✅
More open-source supply chain attacks discovered
In 2025 we suggested that the open-source supply chain attack of 2024 (LibLZMA/XZ utils compromise) could inspire more threat actors to exploit the open-source model.
2025 marked a transformative year in which threat actors shifted to the high-scale industrialization of ecosystem compromise. Some reported a staggering 188% year-over-year increase in attacks targeting the foundational libraries of modern software development across major registries like npm and PyPI. This escalation was characterized by a move toward identity-based infiltration, with threat actors phishing maintainers to hijack widely downloaded libraries, turning popular “building blocks” of modern software into weapons of mass delivery.
In September 2025, in a manner reminiscent of the XZ utils social engineering tactics, a threat actor successfully phished the maintainers of widely used npm packages, including chalk and debug among others, totalling over 2 billion weekly downloads. This compromise allowed attackers to intercept browser-based cryptocurrency transactions. The Python ecosystem was not spared, with packages like termncolor using a multi-stage loader to deliver the SilentSync RAT.
The “Shai-Hulud” campaign served as the most notable example of this trend, using a worm-like propagation method that compromised hundreds of npm packages and dumped stolen secrets into GitHub repositories. By late November 2025, the updated “Shai-Hulud 2” wave introduced a dead man’s switch and a backdoor feature, mutualized exfiltration, exposed over 33,000 unique secrets, and notably led to a multi-million dollar heist from the Trust Wallet cryptocurrency wallet.
Beyond individual libraries, threat actors demonstrated that their definition of the “supply chain” had expanded to include SaaS-to-SaaS integrations. A critical cascading breach initiated via Salesloft’s Drift application in August 2025 allowed attackers to exploit OAuth integration flows, gaining unauthorized access to the Salesforce instances of hundreds of major organizations. This form of attack re-emerged in November 2025 through Gainsight, possibly using credentials stolen during the initial Drift breach, to once again target Salesforce customer data.
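As a purely illustrative control measure, not drawn from any of the incidents above, the sketch below shows how an organization might audit an npm lockfile against a self-maintained deny-list of known-compromised package versions; the package names and versions are hypothetical placeholders, and the lockfile layout assumed is the package-lock.json v2/v3 format.

```python
"""Minimal sketch: flag known-compromised package versions pinned in an npm lockfile.

Assumptions (not from the article): the lockfile uses the package-lock.json v2/v3
layout with a top-level "packages" object, and the deny-list is maintained manually
from public advisories. Package names and versions below are placeholders.
"""
import json
import sys

# Hypothetical deny-list: package name -> set of compromised versions.
KNOWN_BAD = {
    "example-compromised-pkg": {"1.2.3", "1.2.4"},
}

def audit_lockfile(path: str) -> list[tuple[str, str]]:
    """Return (package, version) pairs from the lockfile that match the deny-list."""
    with open(path, encoding="utf-8") as fh:
        lock = json.load(fh)
    findings = []
    # In lockfile v2/v3, keys look like "node_modules/<name>" (possibly nested).
    for key, meta in lock.get("packages", {}).items():
        name = key.split("node_modules/")[-1] if key else lock.get("name", "")
        version = meta.get("version", "")
        if version in KNOWN_BAD.get(name, set()):
            findings.append((name, version))
    return findings

if __name__ == "__main__":
    hits = audit_lockfile(sys.argv[1] if len(sys.argv) > 1 else "package-lock.json")
    for name, version in hits:
        print(f"[!] known-compromised dependency pinned: {name}@{version}")
    sys.exit(1 if hits else 0)
```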
Conclusion: Fulfilled ✅
Agentic AI leveraged in attacks
Last year we anticipated that “agentic AI” solutions, which gained traction in the industry, would likely be used by threat actors in their operations.
By late 2025, the cybersecurity industry witnessed the first wave of “autonomous agent” malware, which no longer strictly implements a fixed course of action but rather leverages AI services to “reason” through defensive obstacles in real time.
In November 2025, Anthropic described a cyber espionage campaign orchestrated by a Chinese state-sponsored threat actor (GTG-1002). This marked the first reported case where an AI agent (a manipulated version of Claude Code) autonomously executed a significant part of the attack lifecycle, including reconnaissance, lateral movement, and credential harvesting. According to the authors, the malicious agent was observed decomposing high-level strategic objectives into tactical sub-tasks and adapting its methods when encountering specific environment restrictions.
In another instance, Google Threat Intelligence identified new malware families (including PROMPTFLUX and PROMPTSTEAL) which leverage AI services as part of their execution flow. As an example, PROMPTFLUX works as an autonomous agent that queries a Large Language Model (LLM) to rewrite its own malicious functions on-the-fly, generating unique variants to evade specific EDR signatures on targeted computers. Such “agentization” of malware allows even mid-level actors to deploy adaptive malicious payloads.
Conclusion: Fulfilled ✅
Well poisoning to support IO campaigns
In 2025, we predicted that globally available knowledge would be increasingly altered as threat actors would leverage AI to insert bias and influence narratives across the Internet, poisoning the same datasets that others use to train AI. This trend represents a strategic shift where adversaries no longer simply target human perception, but rather the underlying data supply chains that feed AI ecosystems.
As early as March 2025, a report indicated that a Moscow-based disinformation network dubbed “Pravda” successfully “infected” popular AI chatbots. By flooding the Internet with millions of pro-Kremlin articles, the network ensured that its narratives were incorporated into the training and retrieval-augmented generation (RAG) datasets of leading AI models. A study from the authors of the report found that these chatbots repeated the deployed narratives approximately 33% of the time, effectively turning the “well” of public information into a source of propaganda.
Research from the British AI Security Institute, the Alan Turing Institute and Anthropic in late 2025 confirmed the effectiveness of such campaigns. The associated paper demonstrated that a small sample of 250 specifically crafted documents can be enough to introduce a persistent bias in the outputs of an LLM, regardless of the model’s total size. This means that threat actors do not necessarily need to control a significant percentage of a dataset to compromise AI-generated results: they only need a precise set of entries to manipulate what the model learns about specific topics.
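To make the order of magnitude concrete, here is a back-of-the-envelope calculation (our own illustration, not taken from the paper) of what a fixed set of 250 poisoned documents represents as a share of training corpora of various hypothetical sizes:

```python
# Illustrative arithmetic only: the share of a training corpus that 250 poisoned
# documents represent, for several hypothetical corpus sizes.
POISONED_DOCS = 250

for corpus_size in (100_000, 10_000_000, 1_000_000_000):
    share = POISONED_DOCS / corpus_size
    print(f"{corpus_size:>13,} documents -> poisoned share = {share:.6%}")
```

The point of the calculation is that the poisoned share shrinks toward zero as the corpus grows, while the attack (per the cited research) keeps working: defenders cannot rely on dilution alone.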
The European Parliamentary Research Service (EPRS) and the European Union Agency for Cybersecurity (ENISA) have additionally noted a record increase in AI-generated content used for model poisoning or “information laundering”. By April 2025, deceptive narratives regarding European elections were found to be sourced directly from manipulated Wikipedia entries and fringe news sites specifically designed to be ingested by AI scrapers. While the EU AI Act aims to address some systemic risks, the high success rate of poisoning operations means that, for the foreseeable future, the integrity of AI-derived information should always be challenged.
Conclusion: Fulfilled ✅
Strategic malware deployments pre-positioned for 2025 operations
Last year we suggested that seemingly inconsequential large-scale cyberattacks from 2024 could be repurposed in 2025 as strategic tools in hybrid warfare campaigns.
An example of this strategy might be the operational shift observed for the specific actor we named last year: RomCom. In 2025, the threat actor notably moved beyond its cybercrime roots, “financially motivated” operations and traditional targets in Ukraine, to pre-position itself within European and Canadian manufacturing and logistics sectors. By exploiting a path traversal zero-day vulnerability in WinRAR in July 2025, the group deployed backdoors (such as SnipBot and RustyClaw) designed for long-term persistence and covert reconnaissance.
We did not, however, observe any disruption in 2025 that supported hybrid warfare and could be tied to a specific inconsequential large-scale cyberattack from 2024. It is possible that some of these pre-positioned assets were instead leveraged for targeted espionage operations that remained unreported. It is of course also possible that some pre-positioning efforts remain undetected, or are still preserved for future use. The existence and true scale of such operations can only be known and understood if or when such dormant assets are leveraged and disclosed.
Conclusion: Not fulfilled ❌
Proxies in cyberwarfare: private companies and civil organizations
Our 2025 prediction anticipated that governments would lean more heavily on private companies and civil organizations as proxies in cyber operations, while some of these actors would also take more initiative on their own.
Throughout 2025, multiple strategic assessments documented the growing use of cyber “proxies”. ENISA’s 2025 threat landscape report highlighted a rise in state‑aligned hacktivist operations against EU public administrations, noting that ostensibly independent groups were amplifying or fronting for state objectives in conflicts involving Russia and Iran.
Governments worked on formalizing the “cyber mercenary” model, as seen with China’s late 2025 policy update, which provides a legal framework integrating the private sector into national cyber objectives through state support and mandatory cooperation. The U.S. also introduced a new bill (H.R.4988 – Scam Farms Marque and Reprisal Authorization Act of 2025) seeking to allow privately appointed individuals or entities to pursue foreign scam farms and crypto thieves on behalf of the United States, and authorizing the President to issue letters of marque and reprisal for offensive cyber action against specified targets.
The Dutch NCTV Cybersecurity Assessment 2025 noted that state actors are “developing or expanding cyber programmes by using non-state actors or private organisations” to broaden their capabilities. The AIVD further highlighted that these proxies, ranging from private companies to local criminal groups, are leveraged specifically for their ability to hide state involvement in disruptive operations.
We also noted that, following a breach involving rogue agents, the Coinbase cryptocurrency exchange took the rare step of offering a $20 million reward in May 2025 to identify and “bring down” the perpetrators, initiating a private-sector “manhunt”. This remains an isolated case, which hardly confirms the second part of our prediction, namely that the private sector or civil society would act independently more often against cyber threats in 2025.
Conclusion: Partially fulfilled ➕➖
Shaping the Internet: network-level attacks in 2025
Last year we anticipated that network manipulation would escalate, with state and private actors exploiting control over internet infrastructure to manipulate, intercept, or disrupt traffic.
In June 2025, a major routing incident saw prefixes for several DNS root servers hijacked by an autonomous system in Kazakhstan. For over an hour, DNS queries in the region were diverted to unauthorized servers, demonstrating that even the most fundamental pillars of internet naming are not immune to redirection. This incident validates our concern about state-affiliated autonomous systems abusing trust, yet it remained a relatively isolated event in 2025.
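As a minimal, self-contained illustration of the kind of origin-validation check a network operator might run against its own routing view, the sketch below compares observed announcements with an expected-origin allowlist; the prefixes, ASNs and “observed” announcements are hypothetical and not taken from the incident, and a real deployment would rely on a live BGP feed and RPKI/ROA data rather than a static dictionary.

```python
"""Minimal sketch: flag BGP announcements whose origin AS differs from the expected one.

The monitored prefixes, expected origin ASNs, and 'observed' announcements are
hypothetical placeholders; a real deployment would consume a live BGP feed and,
ideally, RPKI/ROA validation data instead of a static allowlist.
"""
# Expected origin ASN per monitored prefix (hypothetical values).
EXPECTED_ORIGIN = {
    "198.51.100.0/24": 64500,
    "203.0.113.0/24": 64501,
}

# Announcements as (prefix, origin_asn) pairs, e.g. parsed from a route collector dump.
observed = [
    ("198.51.100.0/24", 64500),   # matches expectation
    ("203.0.113.0/24", 64999),    # unexpected origin -> possible hijack or leak
]

for prefix, origin in observed:
    expected = EXPECTED_ORIGIN.get(prefix)
    if expected is not None and origin != expected:
        print(f"[ALERT] {prefix} announced by AS{origin}, expected AS{expected}")
```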
Furthermore, connectivity is being weaponized through the strategic targeting of undersea cables. While these few events and statistics keep demonstrating that state and private actors exploit network manipulation as a primary tool for censorship and strategic disruption, such physical- and logical-layer attacks do not necessarily confirm the escalation we anticipated, and we also have not observed any for-profit routing attacks targeting cryptocurrencies.
Conclusion: Partially fulfilled ➕➖
2026 predictions
We do not claim to offer a comprehensive overview of what is conceivable, and we humbly acknowledge that there are always surprises. We also take care to avoid repetition of past predictions – beware, threat actors on the other hand love to re-use proven techniques while they remain effective.
We tried to focus on a selection of likely developments that we believe can have significant cybersecurity impact. Most of those anticipated developments can still be addressed through preventive, protective, corrective or control measures.
Towards further erosion of the states’ power in cyberspace
Cybercriminal organisations have reached a stage of full operational autonomy, possessing the financial capabilities and technical initiative to dictate the “rules” of engagement in cyberspace. Criminal actors still operate under relatively low constraints, while states remain reactive, often struggling to regulate, prosecute actors or enforce rules in a borderless ecosystem. This power shift is further driven by the development of offensive cyber-mercenaries. Despite high-profile legal battles involving such private actors and regulation attempts by states, the demand for intrusion tools and security vulnerabilities from intelligence agencies, security services, military forces, and the cybercrime ecosystem has created a mutual interest in maintaining a deregulated “gray zone” market for offensive capabilities.
The concentration of digital control within a handful of “digital giants”, “hyperscalers” and “data brokers” has effectively relegated states to the role of regulatory observers, who appear mainly able to issue fines long after the landscape has shifted. This imbalance is being exacerbated by the rapid, unconstrained deployment of AI integrations. Recent executive actions in the United States, such as the December 2025 orders, even demonstrate a strategic preference for corporate “tech dominance” over public oversight.
Additionally, as critical state infrastructures, services and organisations remain increasingly dependent on private IT assets in cyberspace, the vulnerability of the public sector and civil society grows: criminal or private actors hold power over essential functions.
Ultimately, we anticipate that the general public might rely on a cyberspace where democratic oversight has been even further replaced by the interests of a business oligarchy and the unchecked expansion of a professionalized criminal economy, leaving little room for any designated authority to actually implement a cybersecurity mission.
Continued efforts by the commercial cyber intrusion industry to shape regulations
As the number of countries acquiring commercial mobile spyware solutions has grown significantly over the years (estimates range from more than 70 to 100 countries, depending on the source), 2025 saw no shortage of public reports about questionable uses of such tools against members of civil society.
In 2024, France and the United Kingdom initiated the Pall Mall Process to tackle “the proliferation and irresponsible use of commercial cyber intrusion capabilities“, which produced a non-binding “Code of Practice for States”. A consultation of the commercial cyber intrusion industry closed on December 22, 2025, and may lead to the publication of a code of practice for the industry in 2026. In recent years, other initiatives and measures have been taken to counter the proliferation and misuse of commercial spyware.
Despite these efforts, there is no indication that commercial spyware activity will decline in the near future. Indeed, investments in the cyber intrusion industry have reportedly increased significantly over the last two years. In addition, public reports hint at potentially unlawful use of commercial spyware by countries that are signatories of the Pall Mall Code of Practice for States. Finally, states known to be involved in the commercial spyware market, including EU Member States, are not signatories to that Code of Practice.
With increasing regulatory efforts and opposition from non-governmental organizations defending human rights and civil society more broadly, we anticipate that the commercial cyber intrusion industry will seek to narrow the scope of current and future regulations, and will increase its lobbying directed at government policymakers.
More direct state responses to cyberattacks
In recent years, the increasing number of arrests for cybercrime offenses, the growing use of sanctions against individuals and organizations, expanded law-enforcement operations to take down supporting infrastructure, and accompanying public communications indicate that states are aiming to signal their resolve to combat cybercrime and intend to create a deterrent effect.
In addition, over the past two years, certain European countries that previously refrained from attributing cyberattacks to foreign states – preferring joint statements issued alongside European or American partners – have shown a policy shift: France independently attributed several cyberattacks to Russia, while Denmark and Germany formally attributed cyberattacks that occurred in 2024 and 2025, with the latter hinting at a response.
As part of a stronger response to cyberattacks from foreign adversaries, we expect that existing cooperation between states, especially in judicial processes, will intensify in the near future. In addition to seeking to impose sanctions on individuals and organizations associated with identified threat actors in accordance with already established legal processes, governments may aim to disrupt threat actors’ operations through various means, for example by leaking information exposing their modus operandi as well as details regarding the identity of the individuals involved in their operations. As an example, in 2025, an online persona known as KittenBusters released details about the infrastructure, tools, targets and operational practices of alleged members of CharmingKitten, a threat actor reportedly tied to Iran’s Islamic Revolutionary Guard Corps (IRGC). A stronger response to state-sponsored cyberattacks would also likely involve influence operations targeting an adversary’s interests or weaknesses.
Finally, in the future, we expect a reinforcement of the cooperation between governmental or military organisations and the private sector, potentially leading to greater involvement of the latter in supporting offensive cyber capabilities and in countering adversaries’ influence operations.
Forced pivoting to an “assume leak” strategy
Persistent failures of traditional “protection-first” cybersecurity models are forcing a strategic pivot toward “assume leak” or “assume unauthorized access” strategies. Despite record-breaking cybersecurity spending (projected to exceed $377 billion by 2028), defense-in-depth and “zero trust” models appear to be failing to stem the tide – even if partly due to incomplete implementation or delayed adoption of these frameworks. In 2025, the number of successful data breaches was still rising and even skyrocketed. “Assume leak” refers to a mindset where defenders operate under the premise that an adversary has already bypassed the perimeter and obtained sensitive data. Instead of focusing solely on keeping intruders out, the goal shifts to making stolen data impossible to use or monetize.
Defenders are forced to accept that the network perimeter barely exists in most cases and that data is inherently “leak-prone” once it leaves direct enterprise control. Phishing remains the dominant vector in 60% of cases, weaponizing the human element that traditional defenses cannot fully neutralize, while cloud outsourcing, once viewed as a security boon, has instead introduced governance gaps and suffered notable security failures. Furthermore, the aggressive integration of AI has blurred the data processing frontier, with “shadow AI” incidents now accounting for a growing share of data breaches. We naturally expect data to continue being stolen and leaked to the public by various criminal or state-sponsored actors throughout 2026.
To adapt to this environment, some cybersecurity strategies might shift toward rendering data harder to use once exfiltrated, prioritizing measures that assume most corporate-processed data might be accessed by unauthorized parties. Such measures include the targeted detection of leaked-credential use through baits and decoys, as well as identity threat detection and response practices. This transition is further enabled by the emergence of small, efficient AI reasoning models which, trained on fully synthetic datasets, allow complex data processing without exposing the underlying sensitive information. Furthermore, possible recent breakthroughs in the field of homomorphic cryptography give hope that exposure might ultimately be drastically reduced, as data would stay encrypted while computations are run.
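As one hedged illustration of the “baits and decoys” approach mentioned above, the sketch below shows a decoy-credential (honeytoken) check that could be wired into an authentication path; the account names and the alerting hook are hypothetical, and a production deployment would raise a SOC alert rather than log locally.

```python
"""Minimal sketch of an "assume leak" bait: decoy credentials that should never be
used legitimately, so any authentication attempt with them signals that a leaked
dataset is being exploited. Account names and the alert hook are hypothetical.
"""
import hashlib
import logging

logging.basicConfig(level=logging.INFO)

# Decoy account names planted in exports, backups, or internal documents.
# Stored here as SHA-256 hashes so the list itself reveals nothing if leaked.
DECOY_USERNAME_HASHES = {
    hashlib.sha256(b"svc-backup-legacy").hexdigest(),
    hashlib.sha256(b"finance-archive-ro").hexdigest(),
}

def check_honeytoken(username: str, source_ip: str) -> bool:
    """Return True (and alert) if the login attempt used a planted decoy account."""
    digest = hashlib.sha256(username.encode()).hexdigest()
    if digest in DECOY_USERNAME_HASHES:
        # In production this would open a high-severity SOC alert / SOAR case.
        logging.warning("honeytoken used: account=%s source_ip=%s", username, source_ip)
        return True
    return False

# Example: call from the authentication flow, regardless of whether the password is valid.
check_honeytoken("svc-backup-legacy", "203.0.113.42")
```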
AI-driven acceleration of fraud and cyber-extortion
Generative AI is increasingly being used to support various aspects of cybercrime intrusion operations, from developing scripts and tools and setting up infrastructure to forging deceitful content for phishing purposes. In addition to easing operation development and setup processes, generative AI also contributes to lowering the entry barrier, enabling less experienced or resourceful players to engage in cybercriminal activities.
Parallel to cybercrime intrusion activities, fraudsters have in recent years started to leverage generative AI to produce highly convincing social engineering lures. For example, voice cloning to impersonate executives in CEO fraud attempts has become more prevalent over the past two years. The sophistication of various scams, notably romance scams, has been improved through the use of LLM chatbots, which enable scammers to scale up their operations by automating the labor-intensive deception process. As organizations active in fighting fraud have witnessed the rapid adoption of AI technology by scammers, they recognize the need for greater collaboration and for engaging in the disruption of scam networks.
As generative AI becomes more integrated into the adversarial workflow, we anticipate an increase in the pace of intrusion attempts in 2026. Likewise, we believe that the use of generative AI will increasingly widen the gap between fraudsters and defenders in the short term, further highlighting the need to strengthen information-sharing between organizations, engage in disrupting fraud networks, and build new detection capabilities.
First physical damage or disruption resulting from AI exploitation
The ongoing deployment of agentic AI and LLM automation into industrial sectors is creating a novel attack surface that is probably expanding faster than the associated risks are being addressed, mirroring but outpacing the “connectivity-first” rush of the late 20th century. As of late 2025, many organizations had reported encountering risky behaviors from AI agents, including unauthorized system access and improper data exposure. This trend is set to peak in 2026 as these autonomous entities, deployed to optimize power grids, manage logistics, or assist in vehicle telemetry, might become primary targets for malicious actors.
The integration of “physical AI”, including consumer humanoid robotics and autonomous warehouse systems, significantly escalates this risk by bridging the gap between digital instructions and the physical world. This adoption trend carries a risk of transition from “digital-only” exploitation and effects, such as data theft via prompt injection, to kinetic manipulation. In 2026, we expect to see the first instances of physical damage or disruption in the “real world” resulting from AI exploitation techniques and vulnerabilities such as indirect prompt injection, data poisoning and excessive agency.
For instance, an AI-powered industrial robot or an autonomous shipping agent could be manipulated via specifically crafted data, leading to misdirection of freight, physical DoS by interrupting shipping lines, or even localized physical damage to infrastructure.
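As a minimal, assumption-laden sketch of the kind of “excessive agency” guardrail that could sit between an agent’s proposed action and an industrial actuator, the example below validates proposed setpoints against a static policy; the action names, bounds and approval rule are invented for illustration and do not reflect any specific control system.

```python
"""Minimal sketch: validate an AI agent's proposed action against a static policy
before it reaches a physical system. Action names, bounds, and the escalation rule
are hypothetical; real deployments would integrate with the actual control layer.
"""
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str        # e.g. "set_conveyor_speed"
    value: float     # proposed setpoint
    rationale: str   # the agent's stated justification (logged, never trusted)

# Hypothetical policy: allow-listed actions and hard physical bounds.
POLICY = {
    "set_conveyor_speed": (0.0, 2.0),       # metres per second
    "set_oven_temperature": (20.0, 250.0),  # degrees Celsius
}

def authorize(action: ProposedAction) -> bool:
    """Allow only allow-listed actions within bounds; anything else goes to a human."""
    bounds = POLICY.get(action.name)
    if bounds is None:
        print(f"[DENY] unknown action {action.name!r}; escalating to operator")
        return False
    low, high = bounds
    if not (low <= action.value <= high):
        print(f"[DENY] {action.name}={action.value} outside [{low}, {high}]; escalating")
        return False
    print(f"[ALLOW] {action.name}={action.value}")
    return True

authorize(ProposedAction("set_oven_temperature", 900.0, "agent claims urgent ramp-up"))
```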
Development of systemic vulnerabilities in cyber detection and response processes
Actors in the cybersecurity industry appear prone to delegating a significant number of tasks, and considerable autonomy, to AI technologies: threats are already being widely discovered, analyzed, (very poorly) documented or summarized using AI – if not “by” AI agents with practically full autonomy. Vulnerabilities are identified and reported in the same way, with already identified and major issues of quality, quantity and verifiability. In parallel, the push for the “agentic SOC” introduces approaches and processes in which autonomous AI agents are not only expected to find and consume intelligence, detect threats and analyze signals, but also to directly action response playbooks.
We expect that further development of such trends might lead to major issues in detection and response processes for the coming years:
- the outbreak of a flood of low-quality (a.k.a. “slop”) threat intelligence and automated threat reports, which would be consumed and potentially even actioned automatically, at the expense of less visible but higher-quality data;
- the exploitation of this dynamic by threat actors, who might leverage traditional AI exploitation techniques such as indirect prompt injection and data poisoning to trick defensive agents into bypassing detection, ignoring threats, or performing unwanted actions;
- the delegation of further autonomy to algorithms and automation that lack explainability, removing human verification opportunities and creating blind spots in parts of critical security loops, leading to a loss of understanding and control over detection and response, while also introducing opportunities to bias detection (see the sketch after this list).
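To make the last point more concrete, here is a minimal sketch (our own illustration, with invented action names and an invented impact ranking) of a human-in-the-loop gate that keeps a verification checkpoint in front of the most disruptive automated response actions:

```python
"""Minimal sketch: require explicit human approval before an autonomous SOC agent
executes high-impact response actions. Action names, the impact ranking and the
approval mechanism are hypothetical placeholders.
"""
# Actions an autonomous responder might propose, ranked by potential impact.
HIGH_IMPACT_ACTIONS = {"isolate_host", "disable_account", "block_ip_range", "revoke_oauth_app"}
LOW_IMPACT_ACTIONS = {"tag_alert", "enrich_ioc", "open_ticket"}

def execute_playbook_step(action: str, target: str, approved_by: str | None = None) -> bool:
    """Run a response step; high-impact steps are refused without a named approver."""
    if action in LOW_IMPACT_ACTIONS:
        print(f"[AUTO] {action} on {target}")
        return True
    if action in HIGH_IMPACT_ACTIONS:
        if approved_by is None:
            print(f"[HOLD] {action} on {target} queued for human review")
            return False
        print(f"[RUN] {action} on {target} (approved by {approved_by})")
        return True
    print(f"[DROP] unknown action {action!r} rejected")
    return False

execute_playbook_step("enrich_ioc", "198.51.100.7")
execute_playbook_step("isolate_host", "workstation-042")               # held for review
execute_playbook_step("isolate_host", "workstation-042", "analyst.a")  # executes
```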
Reaching maturity: systematic and automated “advanced” attacks (also known as “more of the same”)
With ransomware “big-game hunting” approaching its 10th birthday, specialized hacking agencies even closer to their 30th, and successful private offensive actors incorporated as early as 2010, many cybercrime clusters, private companies and governmental organizations must by now have accumulated extensive experience in conducting cyberattacks and reached advanced maturity in terms of capability. In addition, AI as well as offensive cyber technologies and services can now largely help automate the varied and tedious parts of the cyberattack lifecycle that previously required too much human effort to allow scaling.
As a result, we expect the systematic industrialization of some efficient forms of malicious activities that already exist in the cyberspace, and which are considered “advanced” to some extent. Considering the trends we could already observe in 2024 and 2025, the following results of industrialization would most likely be observed:
- The time between the disclosure of a vulnerability in internet-facing IT services (such as VPNs, application servers and firewalls) and its global, possibly automated exploitation has already collapsed from weeks to days, if not hours. More threat actors will maintain a near-real-time map of fresh attack surfaces, identifying and compromising targets before traditional patch management cycles can respond (a minimal prioritization sketch follows this list). The variety and quantity of targets will naturally increase, as automation and scaling enable malicious actors to exploit every opportunity,
- Similarly, with the private companies that develop vulnerabilities and tools to target mobile phones having flourished, we expect more systematic and larger-scale exploitation of vulnerabilities against mobile devices,
- As described while reviewing our 2025 predictions, the success and potential impact of supply-chain attacks will most likely stimulate extended and systematic exploitation of supply-chain compromise opportunities, including attacks affecting cloud-based provision and SaaS,
- Finally, and as also discussed in the review of our previous predictions, we foresee that data poisoning and information influence operations (in particular those targeting AI training and RAG) will intensify further.
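As a modest illustration of the defensive counterpart to the first point above (the collapsing patch window), the sketch below cross-checks an internal asset inventory against a known-exploited-vulnerabilities (KEV) style feed to prioritize patching; the feed URL, JSON field names and the inventory are assumptions that should be verified before any real use.

```python
"""Minimal sketch: prioritize patching by cross-checking an internal asset/CVE
inventory against a known-exploited-vulnerabilities (KEV) style feed.

Assumptions to verify: the CISA KEV feed URL and its JSON field names
("vulnerabilities", "cveID", "dateAdded") may differ or change over time; the
inventory below is a hypothetical placeholder.
"""
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Hypothetical inventory: internet-facing asset -> CVEs currently unpatched on it.
INVENTORY = {
    "vpn-gateway-01": ["CVE-2024-0000", "CVE-2025-11111"],
    "mail-edge-02": ["CVE-2025-22222"],
}

def load_kev(url: str = KEV_URL) -> dict[str, str]:
    """Return a mapping of CVE id -> date the CVE was added to the KEV catalog."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    return {v["cveID"]: v.get("dateAdded", "") for v in data.get("vulnerabilities", [])}

if __name__ == "__main__":
    kev = load_kev()
    for asset, cves in INVENTORY.items():
        for cve in cves:
            if cve in kev:
                print(f"[PRIORITIZE] {asset}: {cve} known exploited since {kev[cve]}")
```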
Making the “Advanced” Great Again
With decades of experience in developing and implementing cyber-enabled espionage and hybrid warfare, some governments (through intelligence agencies, military forces and a supporting private sector) have likely had the opportunities and time to prepare and iterate over several improvement cycles of advanced capabilities (such as human-enabled accesses, advanced vulnerability research, stealthy hardware and software supply-chain compromise, as well as the deployment of sophisticated malware implants in global cloud and communication infrastructures), aimed at achieving effects in cyberspace, or at leveraging cyberspace for kinetic effects. Such actors also likely have tried-and-tested organisations and processes to effectively combine those advanced capabilities under pressure, as well as experience in preparing for and adapting to responses from cyber-defenders. As tensions intensified and conflicts emerged across much of the globe since 2022, these actors have certainly had motive to leverage even dormant capabilities.
We anticipate the unveiling of a long-term intrusion set that successfully synchronized several of the aforementioned capabilities, demonstrating that the pursuit of cyberspace dominance has reached a new and possibly unsuspected summit. The discovery of such a campaign could stem from the investigation of “configuration drifts” in core communication devices, unexplained physical symptoms affecting critical infrastructure, or whistleblowing. Such a discovery could additionally prove that some of the most dangerous threats are built into the very foundations of the global cloud and communication networks, and would definitely make “advanced threats” worthy of the “advanced” adjective again.