Presented by
Arize AI

Community Paper Reading: Who Validates the Validators?

Zoom
Past Event
About Event

Due to the cumbersome nature of human evaluation and the limitations of code-based evaluation, Large Language Models (LLMs) are increasingly being used to assist humans in evaluating LLM outputs. Yet LLM-generated evaluators often inherit the problems of the LLMs they evaluate, requiring further human validation.
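
In practice this often takes the form of an LLM-as-a-judge: prompting one model to grade another model's output against a stated criterion. A minimal sketch, assuming the OpenAI Python client (v1+); the model name and the example criterion are placeholders, not anything prescribed by the paper:

```python
# Minimal LLM-as-a-judge sketch. Assumes the OpenAI Python client (>=1.0)
# and an API key in the environment; model name and criterion are placeholders.
from openai import OpenAI

client = OpenAI()

def llm_judge(output: str, criterion: str) -> bool:
    """Ask an LLM to grade another LLM's output against a single criterion."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute your own
        messages=[
            {"role": "system",
             "content": "You are an evaluator. Answer only 'yes' or 'no'."},
            {"role": "user",
             "content": f"Criterion: {criterion}\n\nOutput to grade:\n{output}"},
        ],
    )
    return response.choices[0].message.content.strip().lower().startswith("yes")

# Example: check one output against one criterion.
passed = llm_judge("The capital of France is Paris.",
                   "The answer is factually accurate.")
```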

This week's paper explores EvalGen, a mixed-initiative approach to aligning LLM-generated evaluation functions with human preferences. EvalGen assists users both in developing criteria for acceptable LLM outputs and in developing functions that check outputs against those criteria, ensuring evaluations reflect the users' own grading standards.
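
To make the core idea concrete, here is a simplified sketch of that alignment step: grade a few outputs by hand, then keep the candidate evaluator function that agrees most often with the human grades. The candidate functions, sample data, and plain agreement score below are illustrative assumptions, not the paper's exact metric or implementation:

```python
# Hypothetical sketch of selecting an evaluator aligned with human grades.
# Uses a plain agreement rate, a simplification of the paper's alignment metric.
from typing import Callable

Candidate = Callable[[str], bool]

def alignment(candidate: Candidate, examples: list[tuple[str, bool]]) -> float:
    """Fraction of human-graded examples where the candidate's verdict agrees."""
    return sum(candidate(out) == grade for out, grade in examples) / len(examples)

def select_evaluator(candidates: list[Candidate],
                     examples: list[tuple[str, bool]]) -> Candidate:
    """Return the candidate evaluator most aligned with the human grades."""
    return max(candidates, key=lambda c: alignment(c, examples))

# Hypothetical candidate assertions for the criterion "response is concise":
candidates = [
    lambda out: len(out.split()) < 50,  # word-count heuristic
    lambda out: len(out) < 280,         # character-count heuristic
]
# A few outputs graded by hand: (output, human says it passes).
graded = [
    ("Short answer.", True),
    ("A " + "very " * 60 + "long answer.", False),
]
best = select_evaluator(candidates, graded)
```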

Paper: https://arxiv.org/abs/2404.12272
