Ollama and local AI, benchmarking & more
Past Event
About Event
Why does local LLM deployment make sense? What are the pros & cons? And how can you experiment quickly?
In this session, we'll hear about building local AI workflows using Ollama & Postgres. Working with local tools and simple architectures is a useful way to learn and test ideas while avoiding the hidden logic of frameworks like LangChain.
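To give a flavour of what "local workflow" means here, this is a minimal sketch of calling a model through Ollama's local REST API. It assumes Ollama is running on its default port (11434) and that a model such as "llama3" has already been pulled; the model name and prompt are placeholders.

```python
# Minimal sketch: generate text from a locally running Ollama server.
# Assumes `ollama serve` is running and a model (e.g. "llama3") has
# been pulled with `ollama pull llama3`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON response
    # instead of a stream of token chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("llama3", "Why run LLMs locally? One sentence."))
```

Nothing here beyond the standard library: one reason simple local setups are easy to reason about compared to a framework's layered abstractions.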
We'll also talk about benchmarking performance with Weights & Biases, rounding out the discussion of fine-tuning. Why is benchmarking so important?
Many thanks to the team at HNRY for hosting us at this event.
Possible drinks & nibbles (TBC)
Further details soon.