Cover Image for Hands-On AI Security: Exploring LLM Vulnerabilities and Defenses
Cover Image for Hands-On AI Security: Exploring LLM Vulnerabilities and Defenses
Avatar for Hacken Events
Presented by
Hacken Events
Trusted Blockchain Security Auditor
36 Going

Hands-On AI Security: Exploring LLM Vulnerabilities and Defenses

Virtual
Registration
Welcome! To join the event, please register below.
About Event

Large Language Models (LLMs) are transforming how we build Web3 applications - but they’re also introducing critical new security risks.

Join Stephen Ajayi, Technical Lead, dApp & AI Audits at Hacken, AI security researcher, and OffSec Certified Expert 3, for a live, hands-on webinar that dives deep into real-world threats and how to defend against them.

👨‍💻 You’ll explore:

  • Prompt injection attacks

  • Agent takeovers

  • Vector database vulnerabilities

  • Testing & securing AI-integrated dApps

📚 Agenda Highlights:

  • Anatomy of LLM systems

  • Why organizations adopt AI

  • Where and how vulnerabilities emerge

  • Overview of the LLM threat landscape

  • Live demo: DevOps Chat Assistant use case

  • Interactive CTF-style exercises

  • Actionable security takeaways

​🗓 Date: June 12
Time: 13:00 UTC

Whether you’re building AI-driven dApps or auditing them, this session will help you stay ahead of emerging threats.

🛡 About Hacken:
Hacken is a top-tier Web3 security company, trusted by leading protocols to secure smart contracts, infrastructures, and ecosystems. https://hacken.io/

Avatar for Hacken Events
Presented by
Hacken Events
Trusted Blockchain Security Auditor
36 Going