Elan Barenholtz Livestream @ Wolfram Institute | Language, Autoregression, and the Structure of Natural Computation

About Event

Tune in to the Wolfram Institute YouTube channel!

Participants: Elan Barenholtz, Daniel Van Zant, William Hahn, Dugan Hammock, Nikolay Murzin, James Wiles, Willem Nielsen, Max Boucher, Luke Wriglesworth

Title: Nature’s Memory: Language, Autoregression, and the Non-Markovian Structure of Natural Computation

Abstract: Autoregressive language models demonstrate that coherent linguistic behavior—syntax, inference, narrative structure—can be generated purely through next-token prediction conditioned on prior context. This talk advances a broader theoretical claim: that non-Markovian autoregression over an autogenerative structure is not merely a technical strategy, but a fundamental principle of language, cognition, and potentially many other natural systems. I introduce a distinction between autogeneration, a static property of a system whose internal structure encodes its own rules of continuation, and autoregression, a dynamic process in which each output is generated from accumulated past outputs. In natural language, this autogenerative structure is encoded in the corpus itself—its long-range dependencies, compositional regularities, and statistical curvature form a latent space that supports meaningful generativity. Autoregressive traversal of this space enables systems like LLMs to produce structured, context-sensitive language without symbolic rules or external supervision. Critically, I argue that non-Markovianism is a necessary condition for autogeneration. Markovian models, which operate only within fixed local neighborhoods, lack the capacity to construct or traverse a meaningful global topology. While language provides the clearest and most developed case, the same non-Markovian autoregressive architecture appears across natural systems: in the residual activation of short-term memory, in Zipfian distributions, and in biological processes such as epigenetic regulation, transcriptional feedback, immune memory, and developmental differentiation. I propose that such systems reveal a general principle: the capacity for structure, meaning, and generativity arises not from local rules, but from the recursive traversal of a self-encoding space.
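To make the Markovian/non-Markovian contrast in the abstract concrete, here is a minimal, self-contained sketch (an illustration for this page, not code from the talk): a Markovian sampler whose next token depends only on a fixed local window, versus a non-Markovian autoregressive sampler that conditions on the longest matching suffix of its entire accumulated history. The toy corpus and all function names (continuations, markov_next, autoregressive_next, generate) are hypothetical.

```python
import random

# Toy corpus (hypothetical); its suffix structure stands in for the
# "autogenerative" statistics of a real corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

def continuations(context):
    """All tokens that immediately follow `context` (a tuple) in the corpus."""
    n = len(context)
    return [corpus[i + n] for i in range(len(corpus) - n)
            if tuple(corpus[i:i + n]) == context]

def markov_next(history, order=1):
    """Markovian: the next token depends only on the last `order` tokens."""
    return random.choice(continuations(tuple(history[-order:])))

def autoregressive_next(history):
    """Non-Markovian: back off to the longest suffix of the *entire*
    history that occurs in the corpus, so early tokens can still
    constrain the next one."""
    for k in range(len(history), 0, -1):
        options = continuations(tuple(history[-k:]))
        if options:
            return random.choice(options)
    return random.choice(corpus)  # fallback; never reached for this corpus

def generate(step, seed, length=8):
    history = list(seed)
    for _ in range(length):
        history.append(step(history))
    return " ".join(history)

random.seed(0)
print("Markov (order 1): ", generate(markov_next, ["the"]))
print("Autoregressive:   ", generate(autoregressive_next, ["the", "cat"]))
```

In this toy setting, the order-1 Markov walk quickly forgets that the sentence began with "cat" and can drift toward "rug", while the full-history sampler preserves the long-range dependency: an early token still constrains a much later one, which is the non-Markovian property the abstract emphasizes.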

Keywords: Language, Autoregression, Computation, Memory

Check out the Wolfram Institute Wiki: https://wiki.wolframinstitute.org/

These livestreams are made possible by the generous support of our patrons on Patreon. Thank you! https://www.patreon.com/WolframInstitute
