Hosted by Pioneering Minds AI Group
Private Event

[PMAG Nexus Series] Interpreting foundation models of the brain

Zoom
Past Event
About Event

Foundation models in AI are models trained on vast amounts of data, enabling applications across multiple domains. A brain foundation model is inspired by the same concept: a model trained on vast amounts of… you guessed it! Brain data!

Join us for a tour of the mind as we explore some fascinating aspects of the brain!

Speaker: 

Prof. Xaq Pitkow, Assistant Professor of Electrical and Computer Engineering, Rice University; Assistant Professor of Neuroscience, Baylor College of Medicine; Associate Director, NSF AI Institute for Artificial and Natural Intelligence.

Bio

Prof. Xaq Pitkow is a computational neuroscientist who develops mathematical theories of the brain and general principles of intelligent systems. He focuses on how distributed nonlinear neural computation uses statistical reasoning to guide action in naturalistic tasks.

Abstract: 

We build state-of-the-art predictive models of visual responses in the mouse brain, exposing richer feature preferences than conventional models. We can then perform unlimited experiments on these models to find Most Exciting Inputs (MEIs). We show these MEIs back to the brain and find that, indeed, for most neurons they evoke greater responses than any other stimuli we tried. We call this method “inception” (after the movie of the same name) because it implants a desired response (or “idea”) into the brain. We also identify ensembles of stimuli that all evoke high responses (Diverse Exciting Inputs, or DEIs), revealing invariances in neural tuning that we again validate in the brain. Analyzing these invariances, we discover image features that are informative about properties like object boundaries and relative depth, which are interpretable causal features of behavioral relevance. These approaches are examples of how we can discover how the brain might perform scene analysis using nonlinear combinations of sensory inputs that relate statistically to causal variables. I will discuss how analyzing the joint statistics of inputs, neural activity, and behavior can help us understand behaviorally relevant neural computations.
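For attendees curious what MEI synthesis looks like in practice, the core idea is an optimization loop: treat the input image as the free parameter and follow the gradient of a single model neuron's predicted response. Below is a minimal, hypothetical sketch of that idea in PyTorch; the toy model, image shape, pixel range, and hyperparameters are illustrative assumptions, not the speaker's actual mouse-cortex pipeline.

```python
# Minimal sketch of Most Exciting Input (MEI) synthesis via gradient ascent.
# The response model here is a hypothetical stand-in for a trained predictive
# model of neural activity.
import torch

def synthesize_mei(model, neuron_idx, image_shape=(1, 1, 64, 64),
                   steps=200, lr=0.05):
    """Gradient-ascend a random image until it maximally drives one model neuron."""
    img = torch.randn(image_shape, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        response = model(img)[0, neuron_idx]  # predicted response of one neuron
        (-response).backward()                # ascend by minimizing the negative
        opt.step()
        with torch.no_grad():
            img.clamp_(-1.0, 1.0)             # keep pixel values in a valid range
    return img.detach()

# Toy stand-in for a trained response model (hypothetical, for illustration only):
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 10))
mei = synthesize_mei(toy_model, neuron_idx=3)
print(mei.shape)  # torch.Size([1, 1, 64, 64])
```

Running the loop from many random seeds and keeping diverse high-response solutions is one plausible way to approximate the stimulus ensembles (DEIs) mentioned in the abstract.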

Session Requirements

Satisfy one of the following:

- Intermediate understanding of AI modeling / development

- Intermediate understanding of neuroscience

And:

- Basic understanding of mathematics
