Iterated Insights

Ideas from Jared Edward Reser, Ph.D.


Abstract

This article proposes a temporal and mechanistic model of consciousness centered on iterative updating and the system’s capacity to track that updating. I argue for three nested layers. First, iterative updating of working memory provides a continuity substrate because successive cognitive states overlap substantially, changing by incremental substitutions rather than full replacement. This overlap offers a direct account of why experience is typically felt as a stream rather than a sequence of snapshots. Second, consciousness in the stronger, phenomenologically salient sense arises when the system represents features of its own state-to-state transitions, in effect tracking the stream as it unfolds. On this view, awareness is not merely access to current contents but access to trajectory properties such as drift, stabilization, conflict, novelty, and goal alignment, together with the regulatory control these representations enable. Third, self-consciousness emerges when a self-model functions as a relatively stable but updateable reference frame carried within the stream, and when changes in that self-model are themselves tracked. The model is positioned as complementary to major consciousness frameworks while supplying an explicit temporal architecture they often leave underspecified. It yields principled dissociations among continuity, awareness of change, and self-experience, and it motivates empirical predictions: measurable overlap across adjacent representational states should correlate with felt continuity, transition-encoding signals should correlate with metacognitive access to ongoing change, and disturbances of self-consciousness should correspond to altered stability or tracking of self-variables embedded in the updating stream.

Introduction

Most theories of consciousness begin with what consciousness contains. They talk about the integration of information, the broadcast of representations, the accessibility of content for report, or the construction of a world-model. Those are all legitimate targets. But they can leave a central phenomenological fact underexplained: consciousness is not experienced as a sequence of snapshots. It is experienced as a stream that changes continuously, where each moment is shaped by what came just before it and where the present seems to be arriving rather than merely appearing.

My model of iterative updating proposes that the temporal architecture of cognition is not a secondary detail but a core explanatory variable. You can find the full model at aithought.com.

Here I argue for a three-layer model. First, iterative updating of working memory provides a substrate of continuity because successive cognitive states overlap substantially, changing by small increments rather than full replacement. Second, consciousness in a stronger sense arises when the system tracks its own updating. It is not only updating, but representing and regulating the fact that it is updating. Third, self-consciousness arises when the self is represented as a relatively stable model within the stream and when the updating of that self-model is itself tracked. The goal here is to articulate these layers cleanly, relate them to the current literature, and propose empirical hooks that could make the account testable.

1. The problem of temporal phenomenology

The basic phenomenon is easy to notice and surprisingly hard to formalize. Experience feels temporally extended. A sound has duration, not just presence. A visual scene seems to persist while subtly shifting. A thought unfolds, branches, corrects itself, and settles. Even when attention jumps, the jump is experienced as a transition rather than as a hard reset. This is true not only for perception but for inner cognition. Deliberation, mind-wandering, and mental imagery all have the character of motion through a space rather than discrete frames laid side by side.

One reason this is difficult is that science likes snapshots. Our measurements often privilege static contrasts: stimulus versus baseline, condition A versus condition B, region X more active than region Y. Even computational models often focus on functions that map an input to an output, as if cognition were primarily a single-pass transformation. But the lived structure of consciousness is not only about content. It is about how content changes, how it stays coherent, how it gradually becomes something else, and how the system can remain “with itself” as it changes.

It helps to distinguish three targets that are commonly bundled together under the word consciousness. The first is temporal continuity, the sense that experience persists and flows. The second is awareness of the stream, meaning the system not only has content but is in contact with the way that content is evolving, drifting, stabilizing, or being redirected. The third is self-consciousness, the sense that the stream is happening to an entity that is represented as “me,” with ownership, perspective, and some degree of identity across time. These are entangled in everyday life, but they can come apart. A theory that does not separate them risks either explaining too little or claiming too much.

The thesis of this paper is that temporal continuity can be grounded in a specific dynamical property of working memory, but awareness requires an additional step: the updating itself must become an object of representation and control. Self-consciousness then becomes a further specialization: the self is one of the represented structures carried through the stream, and its updates become trackable as well.

2. Iterative updating as the continuity substrate

The simplest way to make a stream is to avoid full replacement. If cognitive states were rebuilt from scratch each moment, continuity would be difficult to explain. You could still have a sequence, but you would be missing a direct mechanism for why the sequence feels like ongoing experience rather than flicker. Iterative updating proposes the opposite architecture: successive working-memory states share substantial overlap. The system carries forward many of the same active elements while selectively swapping in a small number of new elements and letting others fall away.

In cognitive terms, the “elements” can be treated as a small set of representations that are coactive at a given moment, constrained by the capacity limits of working memory. The details of representation can be left open. They might be assemblies, distributed patterns, symbols, or structured feature bundles. What matters for the present argument is the dynamics: the next state is not independent of the previous one. It is built out of it.
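
To make the dynamics concrete, here is a minimal Python sketch in which element labels stand in for whatever the representations turn out to be. The capacity of seven and the swap rate of two are illustrative parameters, not commitments of the model; the point is only that each next state is built mostly out of the current one.

```python
import random

CAPACITY = 7   # illustrative working-memory limit, not a commitment of the model
SWAP_RATE = 2  # elements replaced per update step (small relative to capacity)

def iterative_update(state, candidates, rng):
    """Build the next state mostly out of the current one.

    A few elements fall away and a few new ones are swapped in,
    so adjacent states overlap substantially.
    """
    retained = rng.sample(sorted(state), CAPACITY - SWAP_RATE)
    incoming = rng.sample(sorted(candidates - state), SWAP_RATE)
    return set(retained) | set(incoming)

def overlap(a, b):
    """Fraction of shared elements between two states (Jaccard similarity)."""
    return len(a & b) / len(a | b)

rng = random.Random(0)
pool = {f"elem_{i}" for i in range(50)}  # hypothetical pool of representations
state = set(rng.sample(sorted(pool), CAPACITY))

for t in range(5):
    nxt = iterative_update(state, pool, rng)
    print(f"t={t}: overlap with previous state = {overlap(state, nxt):.2f}")
    state = nxt
```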

This overlap yields an immediate phenomenological consequence. If each moment retains a large fraction of the previous moment’s content, then the present is literally constructed from the immediate past. The stream is then not a metaphor but a property of the physical process. The experience of persistence is what it is like for a system whose current state is partially composed of what was active a moment ago, with incremental revision rather than total replacement.

Iterative updating also provides a substrate for thought as a process of refinement. If you can hold a set of representations active, you can test candidate additions, evaluate coherence, and gradually steer the set toward better constraint satisfaction. This is the difference between a single jump to an association and an extended trajectory of improvement. Many cognitive achievements feel like this: understanding a sentence, solving a problem, remembering a name, integrating a new piece of evidence into a belief. They often require multiple micro-updates in which most of the context remains while one element shifts, a relationship is reweighted, or an implication becomes salient.
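
The same toy can illustrate refinement. In the sketch below, a hypothetical coherence function scores how well the coactive elements fit together, and each micro-update swaps a single element only if the swap improves that score. The affinity table is a stand-in for whatever constraint-satisfaction measure a real system would use.

```python
import random

def coherence(state, affinity):
    """Toy coherence score: sum of pairwise affinities among coactive elements."""
    items = sorted(state)
    return sum(affinity[(a, b)] for i, a in enumerate(items) for b in items[i+1:])

def refine_step(state, pool, affinity, rng):
    """Swap one element for a candidate; keep the swap only if coherence improves."""
    out = rng.choice(sorted(state))
    cand = rng.choice(sorted(pool - state))
    proposal = (state - {out}) | {cand}
    return proposal if coherence(proposal, affinity) > coherence(state, affinity) else state

rng = random.Random(1)
pool = {f"e{i}" for i in range(12)}
# Hypothetical pairwise affinities; only the (a, b) entries with a < b are used.
affinity = {(a, b): rng.random() for a in sorted(pool) for b in sorted(pool)}
state = set(rng.sample(sorted(pool), 5))

for step in range(20):  # an extended trajectory of micro-updates
    state = refine_step(state, pool, affinity, rng)
print("refined state:", sorted(state), "coherence:", round(coherence(state, affinity), 3))
```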

At this point the model is powerful but still incomplete. Overlap can explain continuity, but continuity alone does not guarantee awareness. A system can update iteratively without being aware of that updating in any meaningful sense. It can have state overlap and still operate in a largely automatic manner, with transitions that are not represented as transitions but merely occur. If we want to explain not just the existence of a stream, but the experience of being in the stream, we need an additional layer.

3. Iteration tracking as awareness of the stream

The central proposal is that consciousness, in the stronger sense people typically care about, involves a specific kind of reflexivity. The system does not merely undergo iterative updating. It tracks it. It represents aspects of its own state transitions, and it uses those representations to regulate subsequent transitions. Put differently, the stream becomes something the system can in some sense perceive.

This can be stated without introducing a homunculus. Tracking does not mean that there is an inner observer watching thoughts go by. It means the cognitive machinery includes variables that encode change over time. In engineering terms, the system has an observer for its own dynamics. In informational terms, it encodes deltas or derivatives, not merely states. In psychological terms, it has access to whether a thought is stabilizing, whether it is drifting, whether a line of reasoning is gaining coherence, whether a perception is becoming more confident, or whether attention is slipping.

A useful way to understand this is to separate content from trajectory. Content is what is currently active. Trajectory is the pattern of change across successive activations. Iteration tracking is the representational capture of trajectory features. These features can include novelty, conflict, instability, goal misalignment, and the need for re-anchoring. They can also include the felt speed of thought, the sense of effort, and the sense that a mental object is being held in place versus allowed to wander. None of this requires language. Much of it is plausibly prelinguistic and nonverbal, which matters because we want an account that could apply across development and across species.
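
Here is a minimal sketch of iteration tracking, continuing the set-based toy above: the tracked object is the transition, not the state. The feature names (carried, novelty, goal alignment) and the drift threshold are illustrative choices, not canonical ones.

```python
import random

def transition_features(prev, curr, goal=None):
    """Encode the state-to-state transition itself, not the states.

    Returns a few trajectory variables: how much content carried over,
    how much is new, and optionally alignment with a goal set.
    """
    carried = len(prev & curr) / len(prev | curr)
    novelty = len(curr - prev) / len(curr)
    feats = {"carried": carried, "novelty": novelty}
    if goal is not None:
        feats["goal_alignment"] = len(curr & goal) / len(goal)
    return feats

def classify_trajectory(history, window=3, threshold=0.5):
    """Label the recent trajectory from a rolling window of transitions."""
    recent = history[-window:]
    mean_carried = sum(f["carried"] for f in recent) / len(recent)
    return "stabilizing" if mean_carried >= threshold else "drifting"

rng = random.Random(2)
pool = {f"e{i}" for i in range(30)}
goal = set(sorted(pool)[:7])            # hypothetical goal contents
state = set(rng.sample(sorted(pool), 7))
history = []
for t in range(10):
    # One iterative update: keep five elements, swap in two new ones.
    nxt = set(rng.sample(sorted(state), 5)) | set(rng.sample(sorted(pool - state), 2))
    history.append(transition_features(state, nxt, goal))
    state = nxt
print(classify_trajectory(history), history[-1])
```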

This distinction also clarifies why awareness often feels like control. When people say they became more conscious, they often mean they became more able to notice drift, to slow down, to redirect, to hold onto a thread, or to catch themselves before they act impulsively. That is exactly what you would expect if awareness involves tracking and regulating the update process. A mind that cannot track its own updating might still update, but it would not have the same capacity to notice that it is losing the plot, nor the same ability to modulate the rate and selectivity of its transitions.

On this view, “experiencing the stream” is not something extra pasted onto cognition. It is what it is like for a system to include its own updating dynamics within the scope of what it represents and controls. Iterative updating gives you a stream. Iteration tracking gives you awareness of the stream.

4. Self-consciousness as self-model-in-the-loop

Self-consciousness adds another ingredient that is conceptually straightforward once the prior layer is in place. The self becomes one of the structures carried forward through iterative updating, and the system tracks the updating of that self-representation as part of the same process. The key point is that the self is not an ethereal essence. It is a model. It is a set of variables, regularities, and expectations that describe the agent as an entity with a perspective, a body, capacities, goals, commitments, and a history.

Many theories treat self-consciousness as a special mystery, but it can be reframed as a special case of a general mechanism. If a system can track its own updating, it can in principle track any domain of content that is repeatedly carried in the stream. When the repeatedly carried content includes a self-model, then the system is not only aware of thoughts, perceptions, and goals, but also aware that these belong to an ongoing agent. This yields the familiar phenomenology of ownership and perspective. The experience is not only that something is happening, but that it is happening to me, and that I can situate myself within what is happening.

It helps to separate three components that are often conflated. Ownership is the sense that experiences are mine. Perspective is the sense of being located at a point of view, whether spatial, affective, or intentional. Narrative continuity is the sense that there is an identity extended through time, a thread connecting past, present, and anticipated future. These can vary somewhat independently. A person can have vivid experience with disturbed ownership, as in depersonalization. A person can have a stable perspective with reduced narrative continuity, as in certain amnestic states. The point of the present model is that these components can be understood as properties of a self-model embedded in an updating stream.

One way to formalize this is to treat self-representations as relatively slow variables within a fast-updating process. The contents of working memory may change quickly, but self-parameters tend to be more stable and can act as an anchor. They provide a reference frame that constrains interpretation and guides action. When that anchor is stable and when its updates are tracked, self-consciousness is robust. When the anchor is unstable, poorly updated, or poorly tracked, self-consciousness becomes distorted. Importantly, this distortion can occur even when the basic stream of experience remains intact.
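
One way to sketch this, under the assumption that self-parameters are scalar variables nudged by a small learning rate while the rest of the stream moves freely: the size of each self-update is recorded, so the self-model’s own trajectory is available for tracking. The parameter names and the learning rate here are hypothetical.

```python
class SelfModel:
    """A relatively slow reference frame carried within a fast-updating stream.

    Self-parameters move by a small learning rate while contents change
    freely; the magnitude of each self-update is itself tracked, which is
    the hypothesized basis of robust self-consciousness in this sketch.
    """
    def __init__(self, params, lr=0.02):  # lr is an illustrative stability constant
        self.params = dict(params)
        self.lr = lr
        self.tracked_deltas = []

    def update(self, evidence):
        """Nudge each self-parameter toward new evidence and record the shift."""
        total_shift = 0.0
        for k, v in evidence.items():
            old = self.params.get(k, 0.0)
            self.params[k] = old + self.lr * (v - old)
            total_shift += abs(self.params[k] - old)
        self.tracked_deltas.append(total_shift)  # the self-update is itself tracked
        return total_shift

self_model = SelfModel({"agency": 0.9, "body_ownership": 0.95})
for _ in range(5):
    self_model.update({"agency": 0.85, "body_ownership": 0.9})
print(self_model.params, self_model.tracked_deltas[-1])
```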

This completes the conceptual ladder. Iterative updating gives continuity. Iteration tracking yields awareness of the continuity and the ability to regulate it. Self-consciousness emerges when a self-model is maintained as part of what the system is tracking and controlling within the stream.

5. Dissociations and boundary cases

A useful theory of consciousness should not only explain the central case, the ordinary waking stream. It should also illuminate the ways that consciousness can fragment, narrow, or become oddly self-salient. The layered model does this almost automatically, because each layer can vary somewhat independently.

Start with continuity. A mind can show iterative updating even when awareness is thin. Habitual behavior is the simplest example. People can drive a familiar route, shower, or clean the kitchen with a sense of time passing and with some coherence of perception, yet later have surprisingly little recollection of the intermediate moments. The substrate is running and the stream exists, but the tracking of the stream is partial. Conversely, awareness can become unusually vivid when tracking is amplified. This is one way to characterize certain contemplative states and also certain anxious states. The system is not just thinking and perceiving, it is monitoring every micro-shift. The stream is lit up as an object.

The model also predicts dissociations in which self-consciousness changes while continuity remains intact. Depersonalization provides a striking example: people often report that experience continues normally in sensory terms, but the sense of ownership and self-presence is altered. In the present framework, this would correspond to a disturbance of the self-model-in-the-loop. The stream continues, and some degree of iteration tracking continues, but the self-variables that normally anchor ownership and perspective are either unstable, underweighted, or not being tracked with the usual fidelity. Another boundary case is absorption, the “lost in the task” state. Here, iterative updating is strong and tracking is sufficient for performance, but self-model content is temporarily minimized. The person does not lack consciousness, but self-consciousness is reduced. This is consistent with the common report that self-awareness returns when attention is disrupted or when social evaluation enters the scene.

Fatigue, intoxication, and stress are also useful because they can degrade different components. Fatigue can reduce the precision of tracking, producing the familiar feeling of mental drift and reduced executive capture. Intoxication can preserve the stream but destabilize update selection, so that the system continues to move forward without being able to regulate its own trajectory effectively. Stress can narrow the set of representations that remain coactive across moments, producing a kind of premature context collapse where the system updates too aggressively, drops the wrong elements, or becomes overbound to threat-related content. The model does not need to claim that these are the only mechanisms involved. It only needs to show that the layered architecture gives a principled way to map subjective reports onto plausible computational failures.

The most important takeaway from these boundary cases is conceptual. If you treat consciousness as a single thing, the cases look like exceptions. If you treat consciousness as layered, the cases become expected patterns: continuity without rich tracking, tracking without a stable self-anchor, self-salience without good regulation, and various mixed profiles.

6. Relation to major consciousness frameworks

The iteration tracking model is not offered as a replacement for the existing landscape so much as a temporal spine that many existing theories can attach to. The goal is to make explicit something that is often implicit: consciousness is not only about what is represented, but about how representation persists and changes through time, and whether the system has access to that change.

Global workspace theories emphasize access, broadcast, and coordination across specialized systems. The present proposal is compatible with that emphasis but adds a specific temporal mechanism for why the workspace would feel like a stream rather than a bulletin board. Iterative updating supplies continuity, and iteration tracking supplies a form of global availability not only of contents but of the system’s own transitional dynamics. In other words, a workspace could broadcast what is currently in view, but a conscious workspace also makes available how the view is evolving.

Higher-order theories propose that a mental state becomes conscious when it is represented by another mental state. Iteration tracking can be framed as a particular form of higher-order representation, but with a distinctive target. The higher-order content is not necessarily a proposition about a belief. It can be a representation of the transition itself, encoding that the system is shifting, stabilizing, or losing coherence. This keeps the core idea of reflexivity while grounding it in dynamics rather than introspective commentary.

Predictive processing and related accounts focus on prediction and error minimization. Iterative updating is naturally compatible with this, because an updating stream is a plausible vehicle for continual model refinement. The difference is emphasis. Prediction error is a signal. Iteration tracking is a way of representing the ongoing evolution of the internal model, including error dynamics but not reducible to them. In everyday experience, one does not only experience surprise. One experiences a trajectory: a thought coming together, a perception sharpening, an understanding forming. Those are temporal structures that are not captured by error signals alone.

Integrated information approaches emphasize the structure of causal integration. The iteration tracking model does not deny that integration matters. It argues that integration alone does not specify why experience feels temporally continuous and process-like. A highly integrated system that lacked sufficient overlap and lacked access to its own transitions would be experienced, if it were experienced at all, as a sequence of unrelated states. The present proposal therefore treats temporal overlap and transition representation as constraints that any fully satisfying account must include, regardless of whether it is framed in terms of integration, broadcast, or prediction.

The common thread in these comparisons is that the iteration tracking model is not trying to compete on every dimension. It is trying to contribute a missing dimension: explicit temporal architecture and an explicit account of how the system can become aware of its own updating rather than merely performing it.

7. Empirical predictions and operationalization

If the model is to be more than a metaphor, it needs operational handles. The layered view suggests three classes of measurable signature corresponding to continuity substrate, iteration tracking, and self-model-in-the-loop.

For the continuity substrate, the prediction is that adjacent cognitive moments should show measurable overlap in representational patterns, and that the degree of overlap should correlate with subjective continuity. States described as fragmented or discontinuous should show reduced overlap, more abrupt representational turnover, or a higher rate of unstructured replacement. This could be probed in perceptual paradigms where continuity is manipulated, in working memory tasks where maintenance must persist across interference, or across transitions into and out of sleep and anesthesia where continuity reports change sharply.
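
As a toy version of this analysis, the sketch below generates trial sequences with different turnover rates, computes mean adjacent-state overlap per trial, and correlates it with placeholder continuity ratings. In a real study the patterns would come from recordings and the ratings from participants.

```python
import random

def adjacent_overlap(patterns):
    """Mean Jaccard overlap between successive representational patterns."""
    pairs = list(zip(patterns, patterns[1:]))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

def pearson(x, y):
    """Plain Pearson correlation, to keep the sketch dependency-free."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: trials with different representational turnover rates,
# paired with placeholder continuity ratings a real study would collect.
rng = random.Random(3)
overlaps, ratings = [], []
for trial in range(8):
    churn = rng.randint(1, 5)          # elements replaced per step on this trial
    state = set(range(10))
    seq = [set(state)]
    for _ in range(6):
        state = set(rng.sample(sorted(state), len(state) - churn))
        state |= {rng.randint(100, 999) for _ in range(churn)}
        seq.append(set(state))
    overlaps.append(adjacent_overlap(seq))
    ratings.append(6 - churn + rng.random())  # simulated report, higher = more continuous
print("overlap-continuity correlation:", round(pearson(overlaps, ratings), 2))
```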

For iteration tracking, the prediction is stronger and more distinctive: there should be measurable signals that encode the delta between successive states, not merely the states themselves. In practice, this might look like neural activity that correlates with estimated drift, conflict, or stabilization of a representation, even when the represented content is held constant. It could be probed with tasks that control content while altering the dynamics of updating, for example by manipulating the rate of change in a stimulus stream, the rate of rule-switching in a cognitive task, or the degree of uncertainty that requires iterative refinement. If subjective clarity is tied to iteration tracking, then measures of metacognitive sensitivity should covary with these transition-encoding signals.
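
A dependency-free sketch of the proposed test: regress a recorded scalar signal on the magnitude of the state-to-state delta. Here both the trajectory and the signal are simulated, with the signal built to scale with update size, so recovering the slope simply illustrates the transition-encoding signature the model predicts.

```python
import random

rng = random.Random(4)

def delta(prev, curr):
    """Magnitude of change between successive content vectors."""
    return sum(abs(a - b) for a, b in zip(prev, curr))

# Simulated content trajectory: the rate of updating is manipulated per step.
states = [[rng.random() for _ in range(5)]]
for _ in range(60):
    step = rng.choice([0.0, 0.05, 0.2])
    states.append([v + rng.gauss(0, step) for v in states[-1]])

deltas = [delta(a, b) for a, b in zip(states, states[1:])]
# Simulated recording: a signal that encodes the transition, plus noise.
signal = [0.8 * d + rng.gauss(0, 0.05) for d in deltas]

# Ordinary least-squares slope of signal on delta.
md, ms = sum(deltas) / len(deltas), sum(signal) / len(signal)
slope = sum((d - md) * (s - ms) for d, s in zip(deltas, signal)) \
        / sum((d - md) ** 2 for d in deltas)
print(f"recovered transition-encoding slope: {slope:.2f} (simulated ground truth 0.8)")
```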

For the self-model layer, the prediction is that self-related variables behave like stabilizing parameters that constrain interpretation across time, and that disturbances of self-consciousness correspond to disturbances in the stability or tracking of those variables. This suggests a way to interpret depersonalization, certain dissociative states, and aspects of self-disturbance in psychiatric conditions. The model predicts that in such states, many forms of content processing can remain intact while the coupling between the stream and the self-anchor is altered. Paradigms that elicit changes in ownership, agency, or perspective could be used to examine whether the brain is tracking self-variable updates in a manner analogous to how it tracks other trajectory dynamics.

The paper does not require committing to a single measurement modality. The important commitment is conceptual and testable: conscious awareness should correlate not only with representational content but with representational access to transition structure, and self-consciousness should correlate with the embedding of self-variables within that transition-aware stream.

The strongest falsification pressure would come from a dissociation in the opposite direction. If one could show robust subjective awareness of flow and change while the brain exhibits no meaningful overlap across adjacent states and no measurable transition-encoding signals, the model would be weakened. Conversely, if one could show robust overlap and transition encoding in conditions where subjective awareness is reliably absent, the model would need to clarify whether those signals are sufficient or only necessary. The layered structure makes room for this. It is possible that overlap is necessary but not sufficient, and that tracking must also be broadcast to a set of systems that enable report and control. That is an empirical question, not a rhetorical escape hatch.

Conclusion

The argument of this article is that the temporal architecture of cognition deserves to be treated as a central explanatory variable in theories of consciousness. Iterative updating of working memory provides a concrete substrate for continuity because each moment is built from the remnants of the moment before it, altered by incremental revision rather than full replacement. This can explain why experience feels like a stream.

But continuity is not the whole story. Consciousness in the stronger sense involves iteration tracking: the system represents and regulates the updating itself, encoding features of its own transitions such as drift, stability, novelty, and goal alignment. When the stream becomes an object of monitoring and control, experience becomes not merely a succession of states but an ongoing process that the system can remain with.

Self-consciousness then emerges when a self-model is maintained within the stream and when the updating of that self-model is itself tracked. Ownership, perspective, and narrative continuity can be treated as properties of a stable but updateable reference frame embedded in the same transition-aware dynamics that govern ordinary thought and perception.

This framework is intended to be compatible with major families of theory while contributing an explicit account of temporal phenomenology and reflexivity. It makes commitments that can be operationalized. It predicts dissociations across continuity, awareness of transitions, and self-consciousness, and it suggests that the “shape” of conscious life may be measurable as the overlap, the tracked deltas, and the anchoring self-variables that together allow a mind to experience itself changing through time.
