
Continued Pretraining and Fine-Tuning with Unsloth

About Event

Continued pretraining, alongside Supervised Fine-Tuning (SFT), is gaining popularity in industry along with Small Language Models (SLMs). Finding faster ways to fine-tune that maintain high accuracy and minimize hallucinations is a priority for AI Engineering teams everywhere.

One way to speed up fine-tuning is to use Unsloth, a library that’s all about faster LLM training.

The claims we’ll test during the event, made by Unsloth about their free, open-source version, are:

  • 2x faster training

  • 50% less memory usage

Additionally, continued pretraining (AKA “continued finetuning”) was released in June by the Unsloth team.

Similar claims:

  • 2x faster

  • 50% less VRAM than Hugging Face + Flash Attention 2 QLoRA
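
To keep ourselves honest, we’ll measure these numbers rather than take them on faith. A minimal timing/VRAM harness like the sketch below is one way to do that in Colab; it assumes a Hugging Face-style `trainer` object (e.g., TRL’s `SFTTrainer` or Unsloth’s `UnslothTrainer`) and is our own illustration, not Unsloth’s official benchmark.

```python
import time

import torch


def benchmark_training(trainer):
    """Time a training run and report peak GPU memory.

    Assumes `trainer` is any Hugging Face-style Trainer that is
    already configured with a model and dataset.
    """
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()

    start = time.perf_counter()
    trainer.train()
    elapsed = time.perf_counter() - start

    peak_gib = torch.cuda.max_memory_allocated() / 1024**3
    print(f"wall-clock time: {elapsed:.1f} s | peak VRAM: {peak_gib:.2f} GiB")
```

Running the same harness once with a plain Hugging Face + QLoRA setup and once with Unsloth, on identical data and hyperparameters, gives an apples-to-apples check of the 2x / 50% claims.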

The team also provided the following “insights” for continued pretraining (sketched in code after the list below):

  • You should fine-tune the input and output embeddings.

  • Unsloth offloads embeddings to disk to save VRAM.

  • Use different learning rates for the embeddings to stabilize training.

  • Use rank-stabilized LoRA (rsLoRA).
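
These insights map directly onto Unsloth’s continued-pretraining API. Below is a minimal sketch based on their public notebooks; the names `UnslothTrainer`, `UnslothTrainingArguments`, `embedding_learning_rate`, and `use_rslora` come from those notebooks, while the checkpoint, dataset, and hyperparameter values are illustrative assumptions.

```python
from datasets import load_dataset
from unsloth import FastLanguageModel, UnslothTrainer, UnslothTrainingArguments

max_seq_length = 2048

# Load a 4-bit quantized base model (QLoRA-style); the checkpoint is an example.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters. For continued pretraining, the input embeddings
# (embed_tokens) and output head (lm_head) are trained as well, and
# rank-stabilized LoRA is switched on.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",  # fine-tune input and output embeddings
    ],
    lora_alpha=32,
    use_gradient_checkpointing="unsloth",  # Unsloth's memory-saving checkpointing
    use_rslora=True,                       # rank-stabilized LoRA
)

# Any raw-text dataset with a "text" column works; wikitext is a stand-in.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# UnslothTrainingArguments adds embedding_learning_rate, so the embedding
# matrices can train with a smaller learning rate than the LoRA adapters
# (the "different learning rates" trick from the list above).
trainer = UnslothTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=UnslothTrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        max_steps=120,
        learning_rate=5e-5,
        embedding_learning_rate=5e-6,  # smaller LR stabilizes embedding training
        output_dir="outputs",
    ),
)
trainer.train()
```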

During this event, we’ll use fine-tuning to fully test out the concepts and code of Unsloth.

We’ll dig under the hood to find out which tricks they use to speed up training and reduce memory usage, and we’ll review what we know about the more advanced techniques in their enterprise versions. This should give us great insight into how practitioners are speeding things up today!

We will leverage Unsloth through Google Colab directly so that we can use free GPUs during the event!

📚 You’ll learn:

  • How to leverage Unsloth for faster continued pretraining and supervised fine-tuning

  • What Unsloth is doing under the hood to speed up LLM training

  • How to fit tools like Unsloth, which accelerate training and tuning, into your toolbelt

🤓 Who should attend the event:

  • Aspiring AI Engineers who want to understand the latest LLM training and fine-tuning tools

  • AI Engineering leaders interested in continued pretraining or fine-tuning of LLMs or SLMs

Speakers:

  • Dr. Greg Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021, he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.

  • Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.

Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.
