
VAM! AI Reading Group: Making Software Faster with LLMs

Hosted by Issam Hadj Laradji
Zoom
About Event

🤖 AI Reading Group Meetup: Details 🤖

The next reading is "Meta Large Language Model Compiler: Foundation Models of Compiler Optimization"

Paper: https://arxiv.org/abs/2407.02524

Most software is built with compilers, and tuning compiled code for speed and efficiency takes massive human effort. This paper explores how Large Language Models (LLMs) can automate compiler optimization, reducing manual work, minimizing human error, and improving performance.

🚀 Key Takeaways:

  • LLM Compiler, built on Code Llama, is trained on massive datasets of LLVM-IR and assembly code.

  • It can help automate code optimization, making software faster and more efficient (see the short sketch after this list).

  • It reduces the need for manual tuning, saving developers time and effort.
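
For a concrete feel of what "optimizing LLVM-IR with an LLM" can look like, here is a minimal, hypothetical Python sketch that prompts an LLM-Compiler-style model through the Hugging Face transformers library. The checkpoint name and prompt wording below are assumptions for illustration only; check the paper and its model release for the exact identifiers and recommended prompting format.

# Minimal sketch: ask an LLM-Compiler-style model to optimize a small LLVM-IR snippet.
# The model id is an assumption for illustration; the released checkpoints may use different names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/llm-compiler-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A tiny LLVM-IR function to use as input.
llvm_ir = """
define i32 @square(i32 %x) {
entry:
  %mul = mul nsw i32 %x, %x
  ret i32 %mul
}
"""

# Illustrative prompt; the released models expect their own task-specific format.
prompt = f"Optimize the following LLVM-IR for code size:\n{llvm_ir}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point of the sketch is simply to show the interface the paper studies: the model reads and writes compiler-level representations (LLVM-IR and assembly) rather than source code.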

This meetup will be held online only.

How to Prepare:

  • Please try to read the paper before the meetup. Don’t worry if you don’t understand everything—we’ll start with a short presentation to explain the key ideas and then have a group discussion.

  • You can use NotebookLM to get a quick overview of the paper: https://notebooklm.google/

How to Join:

Join via the provided Zoom link.

Schedule:

  • 6:00 PM Sharp: Meetup starts on time.

    • 20-Minute Presentation: Summary of the paper and key points.

    • 25-Minute Discussion: Group Q&A and sharing of ideas.


About the AI Reading Group:

We meet every week to discuss interesting AI topics and papers.

This is a great opportunity to learn about AI and connect with others. Hope to see you there!
