Innovations in Efficient Training, Tuning, and Serving Large Foundational Models - ft. Alireza Darbehani
In the rapidly evolving landscape of artificial intelligence, the development and training of large foundational models stand as a cornerstone of innovation. However, the computational and resource demands of such models pose significant challenges. This workshop delves into cutting-edge techniques for surmounting these hurdles, focusing on the integration of Quantized Low-Rank Adaptation (QLoRA) and DeepSpeed, among other efficiency-enhancing strategies.
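To give a flavor of what these two tools look like in practice, the sketch below lays out the typical configuration surface for combining them: 4-bit quantization of the frozen base model plus low-rank adapters (QLoRA), alongside a DeepSpeed ZeRO stage-3 setup. Key names follow the Hugging Face bitsandbytes/PEFT and DeepSpeed conventions, but all values are illustrative assumptions, not settings from the workshop.

```python
import json

# QLoRA, part 1: 4-bit NF4 quantization of the frozen base model
# (the shape of transformers.BitsAndBytesConfig; values are assumptions).
quantization = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",        # NormalFloat4 data type
    "bnb_4bit_use_double_quant": True,   # also quantize the quantization constants
    "bnb_4bit_compute_dtype": "bfloat16",
}

# QLoRA, part 2: small low-rank adapters trained on top of the quantized
# weights (the shape of peft.LoraConfig; values are assumptions).
lora = {
    "r": 16,                # adapter rank
    "lora_alpha": 32,       # scaling factor applied to adapter output
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],  # model-dependent choice
}

# DeepSpeed: ZeRO stage-3 parameter/optimizer partitioning with CPU
# optimizer offload, in the JSON shape DeepSpeed's launcher consumes.
deepspeed = {
    "train_micro_batch_size_per_gpu": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
    },
}

print(json.dumps(deepspeed, indent=2))
```

In a real run, the first two dicts would be passed when loading the model and wrapping it with adapters, while the DeepSpeed dict would be written to a JSON file and handed to the training launcher.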
Alireza Darbehani (MLOps Engineer @BenchSci)
Alireza is a trailblazer in the machine learning and generative AI realm, with over six years of hands-on experience deploying AI models across cloud platforms like GCP and Azure. He has carved a niche in MLOps, specializing in model life-cycle management, distributed model training, and streamlining the operations of foundational models. His expertise extends to optimizing the end-to-end management of machine learning projects, from development to deployment, ensuring scalable, efficient, and innovative solutions in the rapidly evolving field of AI.
WORKSHOP INFO
One day full of LLMs!
Fri Mar 1st, 9am - 5pm (times are in ET)