Arxiv Dives with Oxen.AI - LLaVA-CoT: Vision Language + Step-by-step Reasoning
Hey Nerd, join the Herd!... for a little book/paper review.
WHAT TO EXPECT
Each week we pick a paper to cover in depth and have open Q/A. Often joined by paper authors themselves! Reading optional 🙃.
THIS WEEK
We present... LLaVA-CoT: Let Vision Language Models Reason Step-by-Step
TRY OUR NEW MODEL EVALS FEATURE
Oxen.ai makes it easier than ever to prompt various models on real-world datasets. Just like the paper, we run through prompts live and test them on a variety of benchmarks.
ARXIV DIVE SPECIAL. Email us via the link below and we'll 2x your free compute credits! All you gotta do is ask :)
🆓 🤖 Get Free Compute on Oxen 🤖 🆓
Docs here (https://docs.oxen.ai/getting-started/models)
💬 JOIN THE CONVO
Join our Discord to share paper recs and join the community discussion.
SEE PAST SESSIONS
To see past topics, head over to our blog, which has show notes and links to YouTube videos.
🤓🐂🤓🐂🤓🐂🤓🐂🤓🐂🤓🐂🤓🐂🤓🐂🤓🐂🤓🐂
WHO'S AN ARXIV DIVER
1.2k in Discord and 5k on YouTube - we've been joined by folks from around the world, including leaders from:
and many more...
Sign up
We share datasets relevant to these sessions via Oxen.ai. To get free data:
About Oxen.ai: Build world-class AI datasets, together. Track, iterate, collaborate on, and discover data in any format.
About Arxiv Dives
Each week we dive deep into a topic in machine learning or artificial intelligence. We break down the content into a digestible format and have an open discussion with the Oxen.ai community. Read more in our Arxiv Dive Manifesto.