LlamaIndex Webinar: Efficient Parallel Function Calling Agents with LLMCompiler
LLMs are great at reasoning and taking actions.
But previous frameworks for agentic reasoning (e.g. ReAct) execute actions sequentially, leading to higher latency and cost, and sometimes poorer performance due to the lack of long-term planning.
LLMCompiler is a new framework by Kim et al. that introduces a compiler for multi-function calling. Given a task, the framework plans out a DAG of function calls. This planning both enables long-term thinking (which boosts performance) and identifies which steps can be massively parallelized.
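To give a flavor of the idea, here is a minimal, hypothetical sketch (not the LLMCompiler implementation) of DAG-based scheduling: each task declares its dependencies, and tasks whose dependencies are all satisfied run concurrently. The task names and functions below are illustrative assumptions, standing in for LLM-planned tool calls.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical DAG of tool calls: two independent searches that can run
# in parallel, and a final step that combines their results.
tasks = {
    "search_a": {"deps": [], "fn": lambda r: "result_a"},
    "search_b": {"deps": [], "fn": lambda r: "result_b"},
    "combine": {
        "deps": ["search_a", "search_b"],
        "fn": lambda r: f"{r['search_a']}+{r['search_b']}",
    },
}

def run_dag(tasks):
    results = {}
    remaining = dict(tasks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # All tasks whose dependencies are already computed
            # are submitted to the pool at the same time.
            ready = [name for name, t in remaining.items()
                     if all(d in results for d in t["deps"])]
            futures = {name: pool.submit(remaining[name]["fn"], results)
                       for name in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
                del remaining[name]
    return results

print(run_dag(tasks)["combine"])  # -> result_a+result_b
```

In a real agent, the planner LLM would emit this DAG from the user's task, and the parallel steps would be actual tool or API calls, which is where the latency savings come from.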
We're excited to host Sehoon Kim and Amir Gholami to present this paper and discuss the future of agents.
LLMCompiler paper: https://arxiv.org/pdf/2312.04511.pdf
LlamaPack: https://llamahub.ai/l/llama_packs-agents-llm_compiler?from=llama_packs
This is our first webinar of 2024, come check it out!