Hosted By
374 Going

LlamaIndex Webinar: Efficient Parallel Function Calling Agents with LLMCompiler

Hosted by Jerry Liu
Zoom
Past Event
About Event

LLMs are great at reasoning and taking actions.

But previous frameworks for agentic reasoning (e.g. ReAct) focused primarily on sequential reasoning, which leads to higher latency and cost, and can even hurt performance due to the lack of long-term planning.

LLMCompiler is a new framework by Kim et al. that introduces a compiler for multi-function calling. Given a task, the framework plans out a DAG of function calls. This planning not only allows for long-term thinking (which boosts performance) but also determines which steps can be massively parallelized.
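To make the idea concrete, here is a minimal sketch of DAG-style scheduling: tasks declare their dependencies, and any tasks whose inputs are ready run concurrently. This is illustrative only (the task and tool names are hypothetical), not the LLMCompiler implementation — see the paper and LlamaPack linked below for the real thing.

```python
import asyncio

async def run_dag(tasks):
    """Run a DAG of async tasks, executing independent steps in parallel.

    tasks maps name -> (async function, list of dependency names).
    On each round, every task whose dependencies are all finished
    is launched concurrently via asyncio.gather.
    """
    results = {}
    pending = dict(tasks)
    while pending:
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in results for d in deps)]
        if not ready:
            raise ValueError("cycle in task DAG")
        outputs = await asyncio.gather(
            *(fn(*(results[d] for d in deps))
              for n in ready
              for fn, deps in [pending[n]]))
        for n, out in zip(ready, outputs):
            results[n] = out
            del pending[n]
    return results

# Toy tools standing in for LLM-issued function calls (hypothetical names).
async def search_a():
    return "A"

async def search_b():
    return "B"

async def join(a, b):
    return a + b

plan = {
    "search_a": (search_a, []),
    "search_b": (search_b, []),  # independent: runs in parallel with search_a
    "join": (join, ["search_a", "search_b"]),  # waits for both searches
}
results = asyncio.run(run_dag(plan))
print(results["join"])  # → "AB"
```

The two searches execute in the same round, while the join waits for both — the same parallelism a planned DAG exposes, versus a purely sequential ReAct-style loop.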

We're excited to host Sehoon Kim and Amir Gholami to present this paper and discuss the future of agents.

LLMCompiler paper: https://arxiv.org/pdf/2312.04511.pdf

LlamaPack: https://llamahub.ai/l/llama_packs-agents-llm_compiler?from=llama_packs 

Notebook: https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/agents/llm_compiler/llm_compiler.ipynb 

This is our first webinar of 2024; come check it out!
