How to Use Wan, Hunyuan & LTX — Top AI Video Models for Creators ft. Sebastian Kamph
Live walkthroughs. Free workflows.
Powerful Open-Source AI Video Models, made simple.
If you’ve been waiting to try AI video but don’t know where to start, this is it.
Early access to the AI Filmmaking Kit and our next Maven course.
There’s a wave of open-source AI video models hitting the scene. Wan. Hunyuan. LTX.
Everyone’s posting about them — but few actually know how to use them for real projects.
This webinar is for creators who want to put these models to work. We’ll walk through what each one does, when to use which, and how to run them without setting up anything locally.
No fluff. Just tools, workflows, and real answers.
What You’ll Get
A side-by-side comparison of Wan, Hunyuan & LTX (with real video output)
The strengths and tradeoffs of each model: speed, detail, and consistency
A free ComfyUI workflow pack so you can test all 3 models yourself
How to use ThinkDiffusion to run them instantly — no install, no config
Early access to our AI Filmmaking Starter Kit (normally gated)
Priority seat for our upcoming Maven course for AI creators
Meet Your Hosts
Sebastian Kamph
Creative director and educator helping 160k+ creators make sense of AI tools.
Matt Shih
Cofounder at ThinkDiffusion. Building cloud tools that make open-source creative workflows usable for everyone.
Ashna
Product marketer obsessed with making complex tools feel clear, fun, and frictionless.
For Attendees Only
Download link for all 3 workflows
Application form for early access to the filmmaking kit
Early registration window for the Maven course