Track

Date & Time
Thursday, April 23, 2026, 11:00 AM - 11:50 AM
Room Location
Boisdale 1 & 2
Session Code
WK05
Name
The AI Achilles' Heel: How Your AI Can Get Hacked
Description

Learning Workshop: As enterprise AI adoption skyrockets, a dangerous new blind spot has emerged: the perception that AI components are secure by design. Although your network and application infrastructure might be protected, the AI models and agents living within it are inherently insecure and quickly emerging as the newest, weakest link in your cybersecurity posture. Join this hands-on workshop to challenge your assumptions, assess your own AI Security plan maturity, and expose the hidden insecurities in the LLM and Agentic AI landscape. We’re moving beyond theory to systematically help you develop your AI Security plan while practically demonstrating high-priority threats against AI Confidentiality, Integrity, and Safety that traditional security controls often miss or simply are not equipped to address.

What to Expect:

Overview of the “AI Security Problem”: A brief update on the current and future risks from AI both as a frontier Attack Vector and Attack Surface.

Your AI Security Plan: Assess AI Framework maturity in an interactive Q&A using an AI Security framework plan that enables you to self-reflect on your current AI Security status and work toward a 90-day action plan.

Live Attack & Defence Simulation: Apply your AI Security plan thinking in interactive scenarios where we simulate a corporate AI application and demonstrate how easily an attacker (or an unwitting insider) can compromise its Confidentiality, Integrity and Safety, such as triggering unauthorised data exposure, injecting toxic content and jailbreaking guardrails.

Defence in Depth: Share your thoughts in real-time on appropriate defence measures to discover which platform-led approaches provide real-time protection through measures such as Prompt Sanitisation, Contextual Detection, and I/O Validation.

Learn how to build a robust defence strategy that protects your organisation from the unique risks of AI models and agents, empowering transformative AI adoption responsibly and securely.

This session is suitable for a wide-ranging audience such as CISOs, Network Security Professionals, Security Architects, SOC Managers/Engineers, and AI Security Architects, Security Engineers, DevSecOps.

Participants will gain a first-hand understanding of the key tenant of model-specific vulnerabilities and attack vectors within AI applications.

Dr Ryan Heartfield Jack Hughes
Cross Learning Threads
AI for Cyber Defence
Session Keywords
AI enabled threat, AI Security, AI for Cyber Defence, Application and product security, Cloud and SaaS security, Data security, Detection, response, and forensics
With thanks to our Sponsor