AI Meetup Berlin: How to Build an On-Premise LLM Finetuning Platform
Details
A team from Aleph Alpha will talk about finetuning models at scale.
Abstract: Finetuning large language models (LLMs) is a complex engineering challenge. Managing GPUs, preprocessing data, and orchestrating distributed training is hard enough; doing it on-premise adds another layer of difficulty.
In this talk, we'll explore different finetuning approaches, including LoRA, QLoRA, and full finetuning, and discuss when to use each. We'll also show how to implement dynamic worker scheduling and automatic GPU resource allocation, helping you streamline training workflows and turbocharge your engineering teams, all while ensuring your data stays securely on your own infrastructure.
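For a flavor of why LoRA is so much cheaper than full finetuning (a rough illustrative sketch only, not material from the talk, and the dimensions below are made up): instead of updating a full d×d weight matrix, LoRA freezes the pretrained weight and trains two small rank-r factors, so the trainable parameter count drops from d² to 2dr.

```python
import numpy as np

# Hypothetical sizes for illustration; a real LLM layer might use d = 4096.
d, r = 1024, 8  # hidden size d, LoRA rank r
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d))                     # trainable up-projection, zero-init
alpha = 16                               # LoRA scaling hyperparameter

def forward(x):
    # Base output plus scaled low-rank update: x W + (alpha / r) x A B.
    # With B zero-initialized, this equals the pretrained model at step 0.
    return x @ W + (alpha / r) * (x @ A) @ B

full_params = d * d          # parameters updated by full finetuning
lora_params = d * r + r * d  # parameters updated by LoRA
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

QLoRA goes one step further by keeping the frozen base weights in a 4-bit quantized format while training the same small LoRA factors, which is what makes single-node on-premise finetuning of large models practical.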
🔈 Speaker: Aziz Belaweid
Agenda:
✨ 18:30 Doors open: time for networking with fellow attendees
✨ 19:00 Talk and Q&A
✨ 20:00 Mingling and networking with pizza and drinks
✨ 21:00 Meetup ends
Where: In person, Aleph Alpha Berlin, Ritterstr. 7, Berlin
When: June 24, 2025
Language: English
⚠️ Registration is free, but required due to building security.