
Behind LTX Video: How We Trained an Open-Source Video Model to Generate Scalable Content
Models that turn text into video are the new holy grail in GenAI, and the competition is fierce. Their development faces complex challenges such as temporal continuity, camera angles, output quality, context, real-time data processing, and scalability. These challenges make this field one of the most exciting in generative AI, with significant breakthroughs, creative solutions at scale, and fascinating use cases.
A new meetup hosted by Lightricks invites developers to take a closer look at LTX Video, the open-source video generation model the company unveiled last November. Speakers will present the model's full technical report, walk through its architecture, and discuss the challenges and solutions that enable fast, high-quality video generation in real time. Experts from NVIDIA and Meta AI will also share insights on the most interesting and complex challenges and innovations in the field.
The event, "Behind LTX Video" will take place on February 25 (17:30) at Mindspace Ahad Ha'am, Tel Aviv. Participation is free with prior registration.
Speakers List:
🎤 Ofir Bibi - VP Research @ Lightricks
🎤 Gal Chechik - Sr. Director of AI Research @ NVIDIA
🎤 Yaniv Taigman - AI Research @ Meta
🎤 Dr. Yoav HaCohen - Director LTX Video @ Lightricks
🎤 Chen Tessler - Research Scientist @ NVIDIA