Building with Voice LLMs | AssemblyAI
AssemblyAI makes it easy for developers to build Speech AI into their applications using the programming languages and frameworks they already know.
In this session, Matthew Makai, VP of Developer Relations & Experience at AssemblyAI, will cover how to quickly integrate high-accuracy speech-to-text transcription of audio and video data into new or existing codebases. He will also show how to apply a large language model (LLM) to audio and video data using only a single line of code, and how to build with advanced features such as content moderation, topic detection, and personally identifiable information (PII) redaction.
This session is designed for CTOs, data science leaders, and engineering leaders who are exploring voice LLMs for their tech stacks.