Zero Trust & AI Security Trends for 2025
Boards are still funding Zero Trust programs, but the emphasis has shifted from network segmentation to identity threat protection, SaaS posture control, and AI-supply-chain telemetry. ExtraHop’s outlook puts renewed focus on defending Active Directory while tracking ransomware crews such as RansomHub, 8Base, and Cl0P that now pivot through unmanaged machine identities and vulnerable model endpoints [1]. As organisations wire AI copilots into production workflows, that identity blast radius keeps expanding, creating a rich target for adversaries hunting signing keys or pipeline secrets [1].
While security teams deploy AI to speed investigations, the same tooling is supercharging adversary tradecraft. CERT‑MU’s 2025 report calls out FraudGPT, WormGPT, and other jailbroken assistants that reduce phishing-kit automation, exploit triage, and deepfake creation to point-and-click tasks [2]. Deepfake incidents already accounted for the majority of social-engineering cases during 2024, showing how quickly synthetic media is being weaponised [3].
The practical takeaway for 2025: blend Zero Trust guardrails with AI-aware monitoring. Protect model supply chains (data, evals, orchestration secrets), enforce continuous verification on every user and workload identity, and pair SOC copilots with robust human review. Attackers will continue to let AI drive reconnaissance and payload customization, so defenders need shared telemetry between identity, data, and MLOps stacks to keep pace.
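The "continuous verification" guardrail above can be made concrete with a small sketch: instead of trusting a workload's credential once at session start, the service re-checks the token's claims on every call to the model endpoint. This is an illustrative example, not any vendor's implementation; the function names, the `model-endpoint` audience value, and the claim checks are assumptions, and a real deployment must also verify the token's signature against the issuer's published keys.

```python
import base64
import json
import time


def _decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT.

    NOTE: no signature verification here -- a production Zero Trust
    gateway must validate the signature against the issuer's keys
    before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def verify_per_request(token: str, expected_aud: str) -> bool:
    """Continuous verification: re-check the workload identity on EVERY
    request to the model endpoint, not just at session establishment."""
    claims = _decode_jwt_payload(token)
    if claims.get("aud") != expected_aud:
        return False  # token was minted for a different service
    if claims.get("exp", 0) <= time.time():
        return False  # expired credential: force re-authentication
    return True
```

The design point is that the check is cheap enough to run per request, so a stolen or expired machine-identity token stops working mid-session rather than persisting until the next login, which is exactly the unmanaged-identity pivot the ransomware crews above exploit.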
- ExtraHop’s 2025 forecast notes that ransomware groups RansomHub, 8Base and Cl0P will drive extortion, Active Directory will remain a significant post‑exploitation target, and rapid AI development will create a vulnerable supply‑chain attack surface.
- CERT‑MU observes that cybercriminals are using AI to identify vulnerabilities, generate deepfakes and automate attacks; tools like FraudGPT and WormGPT make these capabilities accessible.
- Deepfake attacks rose from 50% to 60% of social-engineering incidents in 2024, illustrating the growing sophistication of AI‑powered social engineering.