

Vision Meets Language with SmolVLM
Join the GitHub Community @ GITAM for an exciting, beginner-friendly deep dive into the future of AI — Vision-Language Models (VLMs).
We’ll kick off with a live demo, then cover the basics of Large Language Models (LLMs), from neurons to transformers to ChatGPT, and explore how these models are becoming smaller, faster, and more accessible through quantization and local deployment tools like Ollama and LM Studio.
We'll break down how SmolVLM works, how we built our project around it, and what makes this powerful multimodal model so capable.
What to Expect:
🚀 Live Demo First: See SmolVLM in action
🔍 What is an LLM?
🧬 Origins: From neurons to transformers to LLMs
⚖️ Parameters & quantization explained simply
🖥️ How to run LLMs locally with Ollama & LM Studio (see the sneak peek below)
🧠 Picking the right LLM for different tasks
🧩 Behind the scenes of the SmolVLM GitHub repo
🤖 What is Hugging Face?
🙋 Q&A + closing thank-you
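As a sneak peek of the local-deployment segment, here's a minimal sketch of chatting with a locally running model through Ollama's Python client. The model name and prompt are placeholders, and it assumes you've already installed Ollama, pulled a model, and run `pip install ollama`:

```python
# Minimal sketch: chat with a locally running model via the Ollama Python client.
# Assumes the Ollama app is installed and running, and that the example model
# has been pulled beforehand, e.g. `ollama pull llama3.2`.
import ollama

response = ollama.chat(
    model="llama3.2",  # placeholder model name
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
)

print(response["message"]["content"])
```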
💡 No prior experience needed — just bring your curiosity.
Perfect for students interested in AI, GitHub projects, and real-world machine learning applications.
📍 Venue: J-211
🕑 Time: 2:00 PM – 4:00 PM
📅 Date: August 1st, 2025
Bring your curiosity. We’ll bring the code.
Hosted by: GitHub Community GITAM