Hunting Vulnerabilities in Symfony with LLMs
Hackers are already using Large Language Models to map your attack surface; it’s time you used them to defend it.
While we’ve spent years trying to write the perfect regex or rule set to catch bugs, LLMs have unlocked a terrifyingly effective ability to understand the intent and context of our Symfony and PHP code.
This session dives into the practical reality of using AI as an autonomous security researcher to uncover deep-seated vulnerabilities in Symfony applications that traditional tools simply cannot see.
We will explore how to architect a high-speed security pipeline that feeds your codebase to an LLM to detect broken access control, complex injection paths, and logic flaws in real time.
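As a flavor of what such a pipeline stage might look like, here is a minimal, hypothetical sketch in Python: it walks a Symfony project's `src/Controller` directory (an assumed standard layout) and packages each controller into an audit prompt. The directory layout, prompt wording, and the idea of per-file prompts are all illustrative assumptions, not the session's actual implementation; the LLM call itself is left out so you can plug in whichever provider SDK you use.

```python
import os
import textwrap

# Illustrative audit prompt; wording is an assumption, not a tested template.
AUDIT_PROMPT = textwrap.dedent("""\
    You are a security auditor. Review the following Symfony controller for
    broken access control, injection paths, and logic flaws. Report each
    finding with the file, the relevant line, and a severity rating.

    {code}
""")


def collect_controllers(root: str) -> list[str]:
    """Gather PHP sources under src/Controller (assumed Symfony layout)."""
    paths = []
    for dirpath, _, filenames in os.walk(os.path.join(root, "src", "Controller")):
        for name in filenames:
            if name.endswith(".php"):
                paths.append(os.path.join(dirpath, name))
    return sorted(paths)


def build_audit_prompt(path: str) -> str:
    """Wrap one controller's source in the audit prompt for the LLM call."""
    with open(path, encoding="utf-8") as fh:
        source = fh.read()
    return AUDIT_PROMPT.format(code=f"// {path}\n{source}")
```

In a real pipeline you would batch these prompts, send them to your model of choice, and post-process the findings into your issue tracker; the session covers how to make that loop fast and trustworthy.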
You’ll walk away with a battle-tested strategy to weaponize AI against your own technical debt, turning the most unpredictable technology of our time into your most meticulous security auditor.
Thursday, June 11, 2026, 16:00–16:40