
AI has the potential to significantly improve how cybersecurity systems reason about risk, discover vulnerabilities, and validate real-world impact. Yet many “AI-first” security tools amount to automation layered on top of large language models, an approach that can constrain adaptability and generate excessive noise in practice.
This talk presents an alternative approach: autonomous offensive security systems built with agentic AI. Instead of executing static scans or single-step prompts, these systems combine planning, exploration, execution, and validation into closed feedback loops. Agents reason about target environments, generate attack hypotheses, attempt exploitation, and verify outcomes with exploit validators, allowing them to iterate without increasing the review burden on defenders.
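To make that loop concrete, the sketch below shows one way such a closed plan-explore-exploit-validate cycle could be structured in Python. It is purely illustrative: every name (AgentLoop, Hypothesis, Finding, and the stubbed explore/plan/attempt/validate steps) is hypothetical rather than an API described in the talk, and a real agent would back each step with LLM-driven reasoning and actual offensive tooling.

```python
# Minimal, hypothetical sketch of a closed agentic loop:
# explore -> plan -> attempt exploitation -> validate -> iterate.
# All class and method names are illustrative placeholders.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Hypothesis:
    """A candidate attack path proposed by the planning step."""
    description: str
    confidence: float


@dataclass
class Finding:
    """Only validated exploitation attempts become findings."""
    hypothesis: Hypothesis
    evidence: str


class AgentLoop:
    def __init__(self, target: str, max_iterations: int = 5):
        self.target = target
        self.max_iterations = max_iterations
        self.findings: list[Finding] = []

    def explore(self) -> dict:
        # Stub reconnaissance: in practice, service enumeration,
        # configuration review, or LLM analysis of the target.
        return {"target": self.target, "services": ["http"]}

    def plan(self, observations: dict) -> list[Hypothesis]:
        # Stub planning: an agent would reason over observations
        # and rank candidate attack hypotheses by confidence.
        return [Hypothesis("default credentials on admin panel", 0.4)]

    def attempt(self, hypothesis: Hypothesis) -> str | None:
        # Stub exploitation attempt; returns raw evidence or None.
        return None

    def validate(self, evidence: str) -> bool:
        # Exploit validator: confirm real impact before reporting,
        # so unverified guesses never reach defenders.
        return bool(evidence)

    def run(self) -> list[Finding]:
        for _ in range(self.max_iterations):
            observations = self.explore()
            for hypothesis in self.plan(observations):
                evidence = self.attempt(hypothesis)
                if evidence and self.validate(evidence):
                    self.findings.append(Finding(hypothesis, evidence))
            if self.findings:
                break  # stop once impact is verified; otherwise re-plan
        return self.findings


if __name__ == "__main__":
    print(AgentLoop("staging.example.internal").run())
```

The design choice the sketch reflects is that only attempts that pass the validation step are ever surfaced, which is what lets the loop iterate freely without multiplying the alerts defenders must review.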
Hear how agent-based AI techniques apply to continuous vulnerability discovery and exploit validation across real-world systems. By emphasizing autonomy and verification rather than alert generation, this approach reduces false positives, lowers defensive workload, and gives security teams actionable, verified results instead of raw signals.
