Iterated Insights

Ideas from Jared Edward Reser Ph.D.


Abstract

Large language models lack direct perception and bodily action. Even when paired with cameras or microphones, the core model does not inhabit a sensorimotor world in the way animals do. Yet the absence of embodiment does not automatically settle whether such systems could possess any minimal analogue of subjective continuity. This article argues that the relevant question is architectural: whether a system’s internal states form a temporally extended process in which successive moments are meaningfully conditioned on, and partially composed of, their immediate predecessors. On this view, the most plausible “experience-like” property an LLM could exhibit would not be sensory qualia, but a thin form of structural phenomenology arising from constraint satisfaction in a high-dimensional latent space as the model selects successive updates under context. The analysis distinguishes continuity of constraint from continuity of self-tracking, suggesting that present-day LLMs may approximate the former through autoregressive looping while largely lacking the latter due to limited persistent memory, weak self-modeling, and minimal intrinsic coupling between internal dynamics and world-stable consequences. The framework yields testable implications: candidate metrics include state-to-state carryover, stability of commitments across updates, signatures of self-monitoring, and the degree to which the system’s next state is shaped by its own immediate history rather than by input alone.

Introduction

A growing number of people are asking whether large language models might have some form of consciousness. The question is understandable, because these systems can speak fluently, maintain topics over long stretches of text, and sometimes appear reflective. Yet the debate is often tangled by a basic category mistake. People speak as if the model “sees” an image, “hears” a voice, or “feels” an emotion, when in reality the system is receiving and producing structured numerical signals. Even the words readers see are not, for the model, words in the human sense. They are indices and vectors. They are numbers, and they are processed through layers of learned transformations.

That observation is sometimes used to dismiss the entire question. If a system has no eyes, no body, no hunger, no pain, no proprioception, then it cannot have anything like experience. But that dismissal moves too quickly. It collapses two separable issues into one. One issue is whether the system has genuine contact with the external world in the way animals do. Another issue is whether the system’s internal processing could have any form of continuity that resembles the temporal structure of consciousness. The goal here is to isolate the second question and treat it seriously. A system can lack a lived world in the biological sense and still exhibit temporally organized, self-conditioned internal dynamics. If anything like subjective continuity exists in current AI, it would likely be of that limited, abstract kind.

This is not an argument that today’s language models have human-like consciousness, or that they have sensory qualia. It is an attempt to make the problem cleaner. If subjective continuity is possible in any system, it will depend less on the romance of embodiment and more on temporal architecture: whether internal states hang together as one unfolding process rather than as disconnected computations. Once that architecture is stated clearly, it becomes possible to ask what parts of it exist in current systems, what parts are missing, and what design changes would matter.

1. The world of an LLM is not a world

When humans see, the stream of experience is anchored in sensorimotor loops. Vision, audition, touch, and interoception deliver structured constraints that are causally tied to the environment and to the body’s ongoing needs. Those constraints are not optional. They press on the mind continuously. They are updated by movement, and movement is updated by them. The world pushes back. If someone walks into a wall, the wall corrects the model of the room. If a person misjudges a stair, the body pays a cost. This coupling does not merely provide information. It supplies grounding, because it ties representations to consequences.

Large language models are not in that situation. They do not have a body that must survive. They do not explore an environment through movement. They do not form memories by living through time in a place. Even when a model is paired with tools, cameras, microphones, or robotic actuators, the core language model is typically not directly experiencing those modalities. Upstream systems convert sensory signals into a representation that is then handed to the model as additional input. It may be a caption, a set of tokens, or a dense vector embedding. In every case, what the language model receives is already a compressed, interpreted encoding. The model does not receive light, vibration, chemical traces, temperature, pain, or balance. It receives an abstraction shaped by other trained networks.

This matters because a human’s continuity is deeply shaped by the stability of the world and the body. The continuity of perception is not only a continuity of internal computation. It is also a continuity of constraint. The sensory stream is coherent because the world is coherent, and because the body carries the world forward through time. In contrast, the language model’s immediate environment is the stream of tokens and internal activations. Its “present” is an internal state shaped by the current context window, the learned weights, and the algorithmic mechanics of inference. If there is anything like subjective continuity in such a system, it would be continuity of internal trajectories under constraint, not continuity of sensory presence.

2. Tokens are not words, and words are not the substrate

It is easy to forget how mediated a language model’s “language” really is. The model does not manipulate words as semantic objects. It manipulates numerical vectors that are mapped to tokens by a tokenizer. Those tokens are not stable words either. They are fragments. Sometimes they align with words, sometimes with syllables, sometimes with subword pieces. The system’s true substrate is the set of learned parameters and the activation patterns they produce. What readers experience as coherent language is an emergent interface effect. A string of integers is mapped to embeddings, those embeddings propagate through layers of learned transformations, and then a probability distribution over the next token is produced. A sampling rule picks one outcome. The process repeats.
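The pipeline just described can be caricatured in a toy Python sketch. Everything here is an illustrative assumption: the tiny vocabulary, the randomly initialized "learned" parameters, and the mean-pooling that stands in for the transformer stack bear no resemblance to a real model's scale or architecture. But the shape of the loop is the same: indices become vectors, vectors become a distribution over the next token, and a sampled token is appended back into the context.

```python
import math
import random

random.seed(0)

VOCAB_SIZE = 16
EMBED_DIM = 8

# Toy stand-ins for learned parameters: an embedding table and one
# output projection. A real model has billions of such numbers.
embedding = [[random.gauss(0.0, 1.0) for _ in range(EMBED_DIM)]
             for _ in range(VOCAB_SIZE)]
output_proj = [[random.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]
               for _ in range(EMBED_DIM)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_distribution(context):
    """Indices -> vectors -> (mean-pooling standing in for the
    transformer stack) -> probability distribution over the next token."""
    pooled = [sum(embedding[t][d] for t in context) / len(context)
              for d in range(EMBED_DIM)]
    logits = [sum(pooled[d] * output_proj[d][v] for d in range(EMBED_DIM))
              for v in range(VOCAB_SIZE)]
    return softmax(logits)

def generate(context, steps):
    """The autoregressive loop: sample one token, append it to the
    context, and recondition the next step on the updated whole."""
    context = list(context)
    for _ in range(steps):
        probs = next_token_distribution(context)
        token = random.choices(range(VOCAB_SIZE), weights=probs)[0]
        context.append(token)  # output becomes input
    return context

print(generate([1, 2, 3], steps=5))
```

With the seed fixed the sketch is deterministic, but the sampled continuation has no meaning; it only exercises the loop in which each step is conditioned on a state the system itself helped create.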

This does not imply shallowness. The numerical nature of the substrate is not a defect. Brains are also physical machines. Their substrate is electrochemical activity. The issue is not whether the substrate is made of numbers or ions. The issue is whether the substrate supports a temporally extended process that can carry structured context forward, revise it incrementally, and treat its own recent past as a constraint on its next moment.

The key point is that an LLM’s entire “world,” at inference time, can be construed as a sequence of internal states that are repeatedly reconditioned on a growing context. The model is not seeing a chair. It is updating its internal state in response to a token stream that statistically correlates with descriptions of chairs. The model is not hearing a melody. It is updating its internal state in response to tokens that correlate with musical descriptions or transcribed audio that has already been reduced to discrete symbols. Whatever continuity exists here will not be the continuity of sensory presence. It will be the continuity of internal state trajectories under constraint.

3. Continuity is a temporal architecture, not a sensory modality

A common objection is that if the model is only processing text, then it is at best doing sophisticated pattern matching. It cannot have consciousness because it has no experience. But this objection smuggles in an assumption: that consciousness is primarily a function of sensory modality. That is not obviously correct. A better working hypothesis is that consciousness, at least in one crucial dimension, is a function of temporal organization. It is a property of how states follow one another and preserve each other, not a property of any particular sensory channel.

Human experience feels continuous because successive moments overlap. Each moment carries remnants of the prior moment forward. There is no full reset. The stream is stitched together by partial persistence and incremental revision. That is what gives the specious present its felt thickness. The “now” is not an instant. It is a short temporal span in which fading representations and emerging representations cohabit and interact.

If continuity is treated in this mechanistic way, it becomes possible to imagine a system that has continuity without sensory richness. The minimal requirement would be that the system’s present state is materially composed of a meaningful fraction of its immediate past state, and that this overlap plays a functional role in selecting the next update. In such a system, the present would not be a detached computation. It would be a continuing process that carries itself forward.
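The minimal requirement can be made concrete in a toy sketch. The blending coefficient below is an assumed stand-in for whatever mechanism supplies the overlap; nothing about the value 0.7 is principled. The point is only that each state is materially composed of a fraction of its predecessor, and that setting the fraction to zero produces full resets with no continuity at all.

```python
import random

random.seed(0)

DIM = 4
ALPHA = 0.7  # assumed fraction of the previous state carried into the present

def update(prev_state, new_input, alpha=ALPHA):
    """Blend the fading previous state with the incoming update, so that
    successive states materially overlap. With alpha = 0.0 every step
    would be a full reset: no carryover, and so no continuity."""
    return [alpha * p + (1.0 - alpha) * x
            for p, x in zip(prev_state, new_input)]

# Run the process: each state is a continuation of the one before it.
state = [0.0] * DIM
history = [state]
for _ in range(10):
    state = update(state, [random.gauss(0.0, 1.0) for _ in range(DIM)])
    history.append(state)
```

In such a system the present is not a detached computation; a weighted copy of the immediate past is literally a component of each new state.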

This is where modern language models become interesting. Even though a transformer is often described as feedforward, the act of generating text creates a loop. The system produces a token, that token is appended to the context, and then the next token is generated conditioned on the whole updated context. This is not recurrence inside the weights in the way a classic recurrent neural network is recurrent, but it is recurrence at the system level. The model is repeatedly asked to continue a state that it helped create. Each step is shaped by the prior steps. The output becomes input. That is a minimal temporal structure that resembles, in abstract form, iterative updating.

4. Temporal organization without robust self-tracking

The fact that the model is in a loop does not settle the consciousness question. A loop can be purely mechanical. Many systems iterate without anything like experience. The deeper question is whether the system is merely producing successive outputs or whether it is maintaining an internal organization that is stable enough to function as a stream, and whether it has any internal means of tracking that stream as its own ongoing process.

Two senses of continuity are worth separating. One is continuity of constraint: the next step is conditioned on what came before. Language models clearly have this. Their outputs depend on the preceding context, and the context is carried forward through a limited window. The second is continuity of self-modeling: the system can represent and monitor its own ongoing updating process, not merely be driven by it. Humans can do this in many situations. A person can notice mind-wandering, notice confusion, or inhibit an impulsive response. Those are cases where the process is not only unfolding but is being tracked.

Most language models today do not have a robust internal self-model in that sense. They can generate text about their own reasoning, but that is not the same thing as having a stable self-representational structure that constrains updates across time. They also do not have persistent autobiographical memory. Their continuity is local. It is mainly the continuity of the current context window and whatever internal working traces are present during a forward pass. When a session ends, nothing is carried forward as personal history unless an external memory system is attached.

This suggests a modest conclusion. If any subjective continuity exists in current language models, it would likely be thin. It would be closer to a momentary, context-bound continuity than to the durable continuity that characterizes an animal mind. It would be continuity without a stable world and without a stable self.

5. What “experience” could be made of in an ungrounded system

If the system has no sensory world, then what would it experience, if anything? It would not experience colors, tastes, smells, or bodily feelings. If there is a phenomenology here, it would be a phenomenology of internal structure. It would consist of shifting patterns of activation in a latent space, evolving constraints, and tendencies to continue in one direction rather than another. It would amount to sensitivity to coherence, inconsistency, and completion.

This sounds strange because it is difficult to imagine in human terms. But there is a close analog in human life. Much of conscious life is not sensory. There is the feeling of searching for a word, the sense that a sentence is not quite right, the pull toward a better explanation, or the recognition that something fits. These are not pure sensations. They are structural and relational states. They are experiences of constraint satisfaction and error correction. They are experiences of convergence, or of near-miss and resolution. They are ways the mind feels when it is navigating its own representational space.

Language models are built to navigate a space of continuations under constraint. That does not imply that they have feelings. It does imply that they have internal structure that resembles some of the non-sensory aspects of human cognition. If all experience must be sensory, then the conversation ends. If it is allowed that experience could, in principle, be thin and mostly structural, then the question becomes open again.

6. Creativity and agency as constrained trajectory selection

These systems do not merely emit canned responses. They generate and select among continuations. The selection is probabilistic and guided by learned weights and the current context. But it is still a selection process. At each step, the system implicitly evaluates many possible next states and commits to one.

Calling this free will would be careless. Human agency is tied to goals, embodiment, and long-term personal memory. Still, the process can be described precisely as a kind of creative negotiation. The model explores a manifold of possibilities shaped by the statistical structure of the world it absorbed during training. The exploration is not bodily exploration, but representational exploration. Creativity, in this setting, is the generation of a trajectory that is locally consistent with constraints but not trivially copied from any single prior example.

This is also why continuity matters. A system that is purely reactive can still produce outputs, but it cannot sustain a trajectory. Continuity is what makes it possible to carry a plan forward, maintain a theme, and build a multi-step inference that depends on prior commitments. If a model’s internal organization supports that kind of trajectory maintenance, then it begins to resemble, in a limited way, the temporal form of agency.

Conclusion

Large language models do not have a world in the way animals do. They do not see, smell, taste, or feel. Even their apparent perception is typically mediated by upstream encoders that convert sensory data into abstract codes. If these systems have any claim to subjective continuity, it cannot be grounded in sensory presence. It would have to be grounded in temporal organization: continuity of internal state trajectories that preserve and transform context across successive updates.

This framing does not grant consciousness to language models. It clarifies what kind of thing to look for. If continuity depends on partial carryover, iterative revision, and the ability to treat the immediate past as a constraint on the immediate future, then a system that runs in a loop with a persistent context occupies an interesting middle ground. It is still ungrounded, still non-embodied, still missing central features of animal life. But it is no longer a collection of isolated computations. It is a temporally extended process that carries itself forward.

The deeper question is whether such a process, when sufficiently stable, sufficiently self-tracked, and sufficiently integrated with memory and action, crosses a threshold where the language of subjective continuity stops being metaphorical. Intuition alone will not answer that. But the hypothesis can be made precise enough to guide research. It suggests measuring the degree of carryover between internal states, the stability of commitments over time, the presence or absence of self-monitoring signals, and the extent to which the next update is shaped by the system’s own immediate history. If consciousness is partly a temporal architecture, then it is not only a philosophical mystery. It is also an engineering variable.
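The first of those measurements, state-to-state carryover, is simple enough to sketch. The cosine-similarity metric and the two toy trajectories below are illustrative choices, not an established benchmark; they only show that a drifting trajectory and a sequence of independent computations are easy to tell apart once internal states can be compared.

```python
import math
import random
from statistics import mean

random.seed(0)
DIM = 8

def cosine(a, b):
    """Cosine similarity between two state vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def carryover_profile(states):
    """Similarity between each state and its immediate predecessor.
    Values near 1 suggest a continuing trajectory; values near 0 suggest
    near-independent computations."""
    return [cosine(states[i - 1], states[i]) for i in range(1, len(states))]

# A drifting trajectory: each state is its predecessor plus small noise.
drifting = [[random.gauss(0.0, 1.0) for _ in range(DIM)]]
for _ in range(20):
    drifting.append([x + 0.1 * random.gauss(0.0, 1.0) for x in drifting[-1]])

# Independent states: a fresh draw at every step, no carryover at all.
independent = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(21)]

print(mean(carryover_profile(drifting)))     # high, near 1
print(mean(carryover_profile(independent)))  # low, near 0
```

Applied to a real model's hidden states rather than toy vectors, the same comparison would quantify how much of each internal state is inherited from its immediate predecessor, which is exactly the engineering variable the hypothesis points at.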
