
As the adoption of AI accelerates, so too does its use across the offensive cyber kill chain, from hacking and exploitation to command and control. Industry, academia and government must ensure that our defensive capabilities evolve at the same pace.
This workshop will focus on the dual nature of AI in cyber security and will be divided into two halves. The first half will examine the real threat posed by attackers who are weaponising LLMs in the wild. We will explore open-source threat reporting on LLM-powered malware, cyber operations and hacking, and we will look at the technical details behind these attacks, including adversarial jailbreaking of models, the use of LLMs as command and control (C2), and more.
The second half will focus on defence. We will introduce the current art of the possible in AI attack detection, highlight what we are seeing from industry and academia, and connect this with more conventional deception techniques. This will include areas such as confusion-based disruption, objective manipulation, decoy strategies, and overload-based approaches.
This workshop is ideal for researchers, startups, industry, and academia working in, or seeking to work in, the field of identifying and responding to AI-enabled cyber-attacks.
In this workshop, participants will: understand how the threat landscape is evolving as LLMs are misused; explore techniques for detecting and mitigating LLM-driven cyber-attacks and C2; and engage with academia, industry, and government on existing capabilities and on how the threat landscape may evolve in future.
