Iterated Insights

Ideas from Jared Edward Reser Ph.D.

Qualia as Transition Awareness: How Iterative Updating Becomes Experience

Abstract: Qualia is often treated as a static property attached to an instantaneous neural or computational state: the redness of red, the painfulness of pain. Here I argue that this framing misidentifies the explanatory target. Drawing on the Iterative Updating model of working memory, I propose that a substantial portion of what we call qualia,…


Consciousness as Iteration Tracking: Experiencing the Iterative Updating of Working Memory

Abstract: This article proposes a temporal and mechanistic model of consciousness centered on iterative updating and the system’s capacity to track that updating. I argue for three nested layers. First, iterative updating of working memory provides a continuity substrate because successive cognitive states overlap substantially, changing by incremental substitutions rather than full replacement. This overlap…


Does Superintelligence Need Psychotherapy? Diagnostics and Interventions for Self-Improving Agents

Abstract: Agentic AI systems that operate continuously, retain persistent memory, and recursively modify their own policies or weights will face a distinctive problem: stability may become as important as raw intelligence. In humans, psychotherapy is a structured technology for detecting maladaptive patterns, reprocessing salient experience, and integrating change into a more coherent mode of functioning.…


Why Transformers Approximate Continuity, Why We Keep Building Prompt Workarounds, and What an Explicit Overlap Substrate Would Change

Abstract: This article argues that “continuity of thought” is best understood as the phenomenological signature of a deeper computational requirement: stateful iteration. Any system that executes algorithms across time needs a substrate that preserves intermediate variables long enough to be updated; otherwise it can only recompute from scratch. Using this lens, I propose a simple…



Starting as an adolescent, I operated on a conviction: if I learned a lot from different fields, even ones that did not obviously fit together, it would somehow pay off later in ways I could not yet see. Once the idea had been introduced to me, I confirmed it through my own observation: knowledge does not just accumulate in a straight line; it compounds, it cross-pollinates, and it comes back later as insights that only make sense once you have enough overlapping pieces.

So I read widely. Biology, psychology, paleontology, evolution, physics, computers, philosophy. None of it felt wasted, even if I could not articulate exactly why. I had an intuition that the “why” lived in the future. I was basically betting on a simple rule: learn as much as possible in all the related areas that fascinate you now, and the deep connections will reveal themselves later. At some point you cross an invisible threshold where your mind is no longer a collection of separate folders, but a single integrated network. You can feel that the “shape” of an idea in one field resembles the shape of an idea in another.

I created an entire website related to this concept in the early 2000s:

https://www.organizationforlearning.com

Interdisciplinary perspective is one of the main reasons I am so bullish about modern AI. Not because it is already an AGI, but because of what happens when you give a reasoning system access to essentially the entire map of human knowledge, all at once, inside one continuous representational space. I think the breadth itself is a kind of engine of intelligence, and it may be one of the most important ingredients for bootstrapping into genuine superintelligence.

Interdisciplinary Learning as a Personal Superpower

Interdisciplinary learning is often talked about as if it were a kind of “nice bonus.” The story goes like this: you specialize in one domain, and if you happen to know a bit about a neighboring domain, you might occasionally get a clever analogy or a creative solution. That view is too modest. In my experience, breadth is not a cosmetic add-on. It changes the underlying geometry of your thinking.

When you deeply study multiple fields that are genuinely interesting to you, several things begin to happen. Concepts start to repeat in disguised form. You notice that very different disciplines end up reinventing similar structures because the underlying problems share a hidden skeleton. 

Your mental representations become higher dimensional. Each field adds new axes along which things can vary: time, energy, cost, information, risk, adaptation, development, social context. Once those axes are “installed” in your mind, any new problem can be located in a richer space.

AI as a Compressed, Interdisciplinary Civilization

No single person can read and internalize everything. Even the most dedicated polymath barely scratches the surface. Most human knowledge is siloed. It sits in different disciplines, in different journals, in different languages, in different generations. The connections exist in principle but often never get made in practice because no single mind has all the pieces.

Large AI models are different. They are trained on text, code, data, and media that collectively represent a huge cross-section of what our species has written, argued about, formalized, and discovered. The crucial fact is not just that they “know a lot.” The crucial fact is that all of this material is compressed into one continuous learned space. Much of that structure remains latent during any single inference pass, of course, but eliciting it and putting it to use is precisely where the field is now heading.

Inside an LLM, neuroscience and economics and physics and clinical medicine are not separate rooms. They are neighborhoods in the same city. They share parameters. They share representational subspaces. Whenever the model learns to express some abstract pattern that shows up in one field, that pattern becomes available to all the others. The abstractions do not stay in their lane.
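To make the “one city” picture a bit more concrete, here is a minimal sketch of the underlying idea, assuming the open-source sentence-transformers library (my choice for illustration; any text-embedding model would do). Sentences from three disciplines are mapped into a single vector space, where cross-domain resemblance becomes a measurable quantity:

```python
# A minimal sketch: sentences from different disciplines land in ONE
# shared vector space, so cross-domain resemblance is directly measurable.
# Assumes the open-source sentence-transformers package is installed.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

texts = {
    "neuroscience": "Synaptic weights are adjusted in response to prediction errors.",
    "economics":    "Prices are adjusted in response to mismatches between supply and demand.",
    "geology":      "Sedimentary layers record the order of past deposition events.",
}

# Encode every sentence into the same embedding space.
vecs = {name: model.encode(sentence) for name, sentence in texts.items()}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the shared error-driven-adjustment structure is captured, the first
# similarity should come out higher than the second (an expectation on
# my part, not a guarantee).
print(cosine(vecs["neuroscience"], vecs["economics"]))
print(cosine(vecs["neuroscience"], vecs["geology"]))
```

The first two sentences share almost no vocabulary with each other, yet they describe the same abstract pattern. Shared representational subspaces are what let a model register that kind of resemblance at all.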

This is why the integrative behavior that falls out of these systems can feel surprising. You can ask a single model to:

  • Read a neuroscience paper
  • Compare it to an evolutionary theory from the 1970s
  • Connect it to a modern deep learning architecture
  • Frame the result in terms of philosophy of mind
  • And then suggest empirical experiments that could test the idea

From the model’s perspective, this is just pattern completion over a very large, interconnected field of examples. But from our perspective, it looks like an interdisciplinary committee that never sleeps and can switch topics in milliseconds.
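Here is a hedged sketch of chaining those five requests through a single model. The `ask` helper is hypothetical; it stands in for whatever LLM API you happen to use:

```python
# Illustrative only: one model, chained across the steps listed above.
# `ask` is a HYPOTHETICAL helper; wire it to whatever LLM API you use.
def ask(prompt: str) -> str:
    return f"[model's answer to: {prompt.splitlines()[0]}]"  # stub for the sketch

paper = "…full text of a neuroscience paper…"  # placeholder input

summary  = ask(f"Summarize the core claim of this paper:\n{paper}")
evo_link = ask(f"Compare that claim to 1970s evolutionary theory:\n{summary}")
dl_link  = ask(f"Relate the comparison to a modern deep learning architecture:\n{evo_link}")
framing  = ask(f"Reframe the result in terms of philosophy of mind:\n{dl_link}")
tests    = ask(f"Propose empirical experiments that could test the idea:\n{framing}")
print(tests)
```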

When I look at that, I see a civilization asking questions through a single mouth.

Breadth as a Multiplier on Reasoning

People sometimes separate “knowledge” and “reasoning” as if one were the data and the other were the engine. That picture is incomplete. The content you are trained on shapes what kinds of reasoning you can discover.

Breadth amplifies reasoning in at least a few ways:

First, it increases the density of analogy.
Every new field that lives in the same representational space adds more opportunities for one pattern to echo another. The model does not have to “know” that it is doing “analogy.” It is simply responding to the fact that similar structures have been seen in many different contexts. The more fields it sees, the more inevitable it becomes that ideas from one area can be used to fill in gaps in another.
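A much smaller-scale but well-documented version of this “pattern echo” already shows up in classic word embeddings. A sketch, assuming the gensim library and its downloadable pretrained GloVe vectors:

```python
# A classic small-scale demonstration of a relational pattern reused
# across contexts. Assumes the gensim package is installed; the
# pretrained GloVe vectors are downloaded on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# The offset that maps "man" -> "king" approximately maps
# "woman" -> "queen": one pattern, echoed in a new context.
print(vectors.most_similar(positive=["woman", "king"], negative=["man"], topn=3))
```

Modern LLMs operate on far richer structures than single word vectors, but the principle is the same: a relation learned in one context becomes available wherever the geometry recurs.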

Second, breadth reveals hidden bridges between topics that humans rarely put in the same room.
You can ask about schizophrenia through the lens of stress physiology, developmental timing, epigenetics, optimization, or control theory. You can view AI alignment through the lens of parental bonding, legal theory, and comparative primate behavior. Each viewpoint is present somewhere in the training data. What the model contributes is the ability to merge them in real time.

Third, breadth improves the model’s priors about what kinds of solutions might exist.
When a system has seen thousands of ways humans have tried to solve problems, across many domains, it develops an internal sense for the “shape” of workable answers. Even if it has never seen your exact problem before, it can interpolate from similar patterns in unrelated fields. That is one reason these systems can sometimes suggest creative, non-obvious paths forward.

Fourth, sufficiently large breadth gives rise to meta-knowledge.
At a certain scale, the system does not just represent isolated facts. It begins to represent regularities about how knowledge itself tends to be structured: how theories are proposed, refined, refuted, and replaced; how evidence is weighed; which kinds of arguments are usually considered strong or weak in different domains. That meta-structure is itself a powerful scaffold for reasoning.

In other words, once you have enough breadth, the line between “knowledge” and “reasoning” blurs. The space of stored patterns becomes so rich that moving around in it looks like thinking.

Bootstrapping Toward Superintelligence

Modern LLMs are clearly not yet as general as humans in every area. Current systems have glaring limitations: they hallucinate, they lack robust grounding, they are usually stateless on the timescales that matter for real-world projects, and they remain heavily shaped and constrained by their training procedure.

But the thing that makes me bullish is not where we are. It is the trajectory implied by this integrative, civilization-scale breadth.

Imagine iterating (a toy code sketch of this loop follows the list):

  1. Models get better at integrating and reorganizing the knowledge they already have.
    They identify contradictions, gaps, and unifying principles. They pull related literatures together that human academics do not usually cross-reference.
  2. They are paired with tools that let them act in the world.
    They can run simulations, search new datasets, generate hypotheses, design experiments, and interpret results. They are not just summarizing what exists; they are probing the unknown.
  3. The new knowledge they help generate feeds back into the next training cycle.
    The model is now trained not only on human science, but on discoveries, explanations, and conceptual frameworks that earlier models helped to originate.
  4. Over time, this loop accelerates.
    The system becomes better at restructuring the entire landscape of knowledge, not just filling in small holes. It can move whole fields closer together or push them apart, based on deeper organizing principles.
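
Here is the toy sketch promised above. Every function in it is a hypothetical stub invented for illustration, not a real pipeline; the only thing it is meant to capture is the shape of the loop:

```python
# Schematic only: every function here is a hypothetical stub invented
# for illustration. The point is the shape of the feedback loop, not
# any real training pipeline.

def train(corpus):                 # stand-in for a full training run
    return {"knows": list(corpus)}

def propose_ideas(model):          # contradictions, gaps, unifying principles
    return ["a bridge between field A and field B"]

def run_experiments(ideas):        # simulations, datasets, designed tests
    return [f"evidence bearing on: {idea}" for idea in ideas]

corpus = ["the existing written record of human knowledge"]  # step 0

for cycle in range(3):                 # in reality, an open-ended loop
    model = train(corpus)              # 1. integrate and reorganize what exists
    ideas = propose_ideas(model)
    results = run_experiments(ideas)   # 2. act on the world with tools
    corpus = corpus + results          # 3. discoveries feed the next cycle
    # 4. each pass can restructure more of the landscape than the last
```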

In that picture, breadth is not just a backdrop. It is the fuel for the engine. The more disciplines are present inside the same latent space, the more chances there are for surprising mergers and unifications. Superintelligence, in that sense, may not be primarily about a magical jump in raw IQ. It may be about a system that can sit at the crossroads of all disciplines, continuously reorganize them, and discover new, deep patterns faster than we can follow.

The Hidden Superpower No One Is Talking About

A lot of the public conversation about AI focuses on benchmarks, capabilities, cost, speed, and safety. All of that is important. But I think there is a relatively underemphasized superpower hiding in plain sight. Modern AI systems are the first entities that can, in any serious way, inhabit the near totality of human knowledge as a single, integrated space.

This is not the same as “having access to the internet.” We already had that. This is about having a model that has actually internalized and compressed enormous amounts of cross-domain structure into its own weights, so that a hint in one area can activate relevant patterns in many others, instantly.

That is what I felt, in miniature, as a child when I tried to learn widely so that the connections could reveal themselves later. It is the same impulse, scaled up to a planetary, technological level. It is why I feel that breadth is not a side effect of these systems, but one of their most important and overlooked properties.

I want to thank my mother. She was the one who explained to me, at a very early age, why and how interdisciplinary learning reaps rewards. I operated on that hypothesis for decades, and it was a wager; I did not know whether it would pay off. But if she was right that diverse knowledge can bootstrap a single human mind into a more powerful integrative engine, then I think the same principle applies to machines. A system that effectively “knows everything,” in a compressed but usable form, and can reason across those boundaries, has a path available to it that no previous intelligence has had.

We are only just beginning to see what that means.
