Munich MLOps Community Meetup #7
Hello, fellow MLOps Engineers and ML Enthusiasts, ready for a new meetup?
We're announcing our next meetup: Wednesday, April 10th at 18:30, at the JetBrains Events Space in Munich (close to Laim S-Bahn station).
This time we'll hear from Michele and Max about machine learning experiments and benchmarks!
Full agenda announced:
18:30 - 18:50 - Beers & Networking
18:50 - 19:00 - Welcome
19:00 - 19:40 - Talk #1: MLtraq: Track your AI experiments at hyperspeed - Michele Dallachiesa, Data Products & AI Consultant
19:40 - 20:20 - Talk #2: Overview of IDEal Tools for ML Experiments - Max Melekhovets, Software developer at JetBrains
20:20 - Onwards - Beer, food & networking
More about the talks:
Speaker #1:
Michele Dallachiesa, Data Products & AI Consultant (LinkedIn)
Title: MLtraq: Track your AI experiments at hyperspeed
Description:
With every second spent waiting on initialization, with obscure delays hindering high-frequency logging, and with limits on what you can track, an experiment dies. Wouldn't it be nice to start tracking in nearly zero time? What if we could track more and faster, even handling arbitrarily large, complex Python objects with ease?
In this talk, I will present the results of comparative benchmarks covering Weights & Biases, MLflow, FastTrackML, Neptune, Aim, Comet, and MLtraq. You will learn their strengths and weaknesses, what makes them slow and fast, and what sets MLtraq apart, making it 100x faster and capable of handling tens of thousands of experiments.
The talk will be inspiring and valuable for anyone interested in AI/ML experimentation and portable, safe serialization of Python objects.
Speaker #2:
Max Melekhovets, Software developer at JetBrains (LinkedIn)
Title: Overview of IDEal Tools for ML Experiments
Description:
ML tools often struggle to provide a simple and convenient GUI. In this talk, we first review how widely adopted approaches to running ML experiments (SSH, pipelines, task scheduling, etc.) are integrated with popular IDEs (VS Code, PyCharm). We then highlight their advantages and limitations, and finally discuss how these integrations lower the entry barrier to ML training and help increase experiment throughput.
Looking forward to your RSVPs and to meeting you there!
Keep on hacking! 🤩🤩🤩