LLMs Can't Do It All: Why Embeddings Still Matter
Past Event
About Event

In the age of powerful large language models, embeddings remain a critical component in retrieval, recommendation, and understanding systems. This session explores why embeddings still matter, how they’re built, and how to choose the right architecture for your domain and application.

Agenda Highlights:

  • Context length limitations and the rise of RAG

  • Why LLMs aren’t ideal for embedding generation

  • Causal attention vs contrastive learning

  • Sparse vs dense retrieval (TF-IDF, SPLADE, hybrid approaches)

  • Bi-Encoders vs Cross-Encoders vs Late Interaction

  • Late interaction models and HNSW search

  • Contrastive learning, triplet loss, Siamese networks

  • Speed vs accuracy trade-offs

  • Domain-specific embedding tuning (e.g., code vs text)

  • Evaluation with BEIR and MTEB benchmarks

Speaker: Sandro Barnabishvili, AI Researcher

Location
Ilia State University, T Building
1 Giorgi Tsereteli St, T'bilisi, Georgia
Auditorium #102