AI agents are no longer a distant-future concept. They have arrived, and the pace of their deployment is accelerating rapidly. In this episode of Threat Vector, David Moulton speaks with Nicole Nichols, Distinguished Engineer for Machine Learning Security at Palo Alto Networks. Nicole breaks down her new paper, "Achieving a Secure AI Agent Ecosystem," in which she outlines three foundational pillars for defending emerging agent-based systems: protecting agents from third-party compromise, ensuring user alignment, and guarding against malicious agents. With deep expertise spanning academia and industry, Nicole brings clarity on why structured collaboration, component provenance, and rigorous evaluation are essential for deploying autonomous AI safely.
For listeners looking to dive deeper into securing AI-driven environments, Palo Alto Networks offers a range of valuable resources:
- Secure AI by Design, part of the Precision AI portfolio, offering complete AI security coverage from model development to runtime protection
- AI Access Security, delivering visibility, access control, and data protection for generative AI applications
- Cyberpedia: AI Security, a guide to understanding AI security practices, challenges, and strategies
Nicole also shares candid perspectives on what still needs to be built, from containment strategies to community-driven security protocols. If you are serious about securing the next era of autonomous systems, this episode is your primer.
Mentioned by Nicole:
“Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?” by Sahar Abdelnabi et al.
Protect yourself from the evolving threat landscape: more episodes of Threat Vector are a click away.