The security landscape is undergoing yet another major shift, and nowhere was this more evident than at Black Hat USA 2025. As artificial intelligence (especially the agentic variety) becomes deeply embedded in enterprise systems, it’s creating both security challenges and opportunities. Here’s what security professionals need to know about this rapidly evolving landscape.
AI systems—and particularly the AI assistants that have become integral to enterprise workflows—are emerging as prime targets for attackers. In one of the most interesting and scariest presentations, Michael Bargury of Zenity demonstrated previously unknown “0click” exploit methods affecting major AI platforms including ChatGPT, Gemini, and Microsoft Copilot. These findings underscore how AI assistants, despite their robust security measures, can become vectors for system compromise.
AI security presents a paradox: As organizations expand AI capabilities to enhance productivity, they must necessarily increase these tools’ access to sensitive data and systems. This expansion creates new attack surfaces and more complex supply chains to defend. NVIDIA’s AI red team highlighted this vulnerability, revealing how large language models (LLMs) are uniquely susceptible to malicious inputs, and demonstrated several novel exploit techniques that take advantage of these inherent weaknesses.
However, it’s not all new territory. Many traditional security principles remain relevant and are, in fact, more crucial than ever. Nathan Hamiel and Nils Amiet of Kudelski Security showed how AI-powered development tools are inadvertently reintroducing well-known vulnerabilities into modern applications. Their findings suggest that basic application security practices remain fundamental to AI security.
Looking forward, threat modeling becomes increasingly critical but also more complex. The security community is responding with new frameworks designed specifically for AI systems such as MAESTRO and NIST’s AI Risk Management Framework. The OWASP Agentic Security Top 10 project, launched during this year’s conference, provides a structured approach to understanding and addressing AI-specific security risks.
For security professionals, the path forward requires a balanced approach: maintaining strong fundamentals while developing new expertise in AI-specific security challenges. Organizations must reassess their security posture through this new lens, considering both traditional vulnerabilities and emerging AI-specific threats.
The discussions at Black Hat USA 2025 made it clear that while AI presents new security challenges, it also offers opportunities for innovation in defense strategies. Mikko Hypponen’s opening keynote presented a historical perspective on the last 30 years of cybersecurity advancements and concluded that security is not only better than it’s ever been but poised to leverage a head start in AI usage. Black Hat has a way of underscoring the reasons for concern, but taken as a whole, this year’s presentations show us that there are also many reasons to be optimistic. Individual success will depend on how well security teams can adapt their existing practices while embracing new approaches specifically designed for AI systems.