

CodeMasters Talks: The Future of Multi-Modal GenAI
Where Research Meets Industry in the Next Wave of AI
Generative AI is rapidly evolving beyond text, enabling systems to process and generate across multiple modalities: language, vision, audio, and perception of 3D environments. The first edition of CodeMasters Talks explores this frontier, bringing together researchers and practitioners working at the edge of what's next.
🔍 Inside the Talks
This edition will feature three expert perspectives on integrating multi-modal data, evaluating large language model (LLM) behavior, and scaling AI systems in the enterprise.
– Prof. Desislava Petrova-Antonova, research group leader at GATE Institute, focuses on data interoperability, semantic enrichment, and domain-specific models. She will explore the integration of multi-modal data sources, such as spatial (3D), environmental, and real-time sensor data, and discuss the role, capabilities, and limitations of LLMs in supporting data structuring, simulation, and visualization. She will also examine strategies for cases where high-quality data is missing or incomplete.
– Dr. Venelin Kovatchev, Assistant Professor in Computer Science at the University of Birmingham and member of the ELLIS Society, researches data-centric NLP and AI, including dynamic evaluation, active learning, unit testing, adversarial attacks, and data augmentation. He will explore core challenges in developing and applying LLMs, from how we define problems and structure data to how we evaluate and test model behavior under real-world conditions.
– Yavor Belakov, co-founder and AI engineer at Team-GPT, is building systems at the forefront of AI adoption in enterprise environments. Team-GPT raised $4.5M from True Ventures and is trusted by organizations such as Salesforce, Maersk, Charles Schwab, EY, Yale, and Johns Hopkins University. He is also the founder of the AI Engineer Foundation Europe and brings insights from scaling real-world GenAI systems, from deployment and monitoring to user feedback loops.
🤝 90 Minutes of Curated Peer Networking
Over 70% of the evening is dedicated to meaningful exchange among peers. Expect high-level discussions with AI PhDs, ML engineers, researchers, and CTOs tackling real-world challenges: model deployment, infrastructure scaling, and the future of foundation models.
Who Should Join:
– PhDs in AI and Computer Vision
– Senior ML and Software Engineers
– Technical Architects and CTOs
– Researchers in foundation models, robotics, and 3D AI