Iterated Insights

Ideas from Jared Edward Reser Ph.D.

Abstract

The hard problem of consciousness — why any physical process gives rise to subjective experience — has resisted resolution despite decades of productive consciousness science. The major theoretical frameworks developed in this period, including Global Workspace Theory, Integrated Information Theory, Predictive Processing, and Higher-Order Theories, each illuminate genuine and empirically supported aspects of conscious experience. Yet each, in a specific and identifiable way, is incomplete. The incompleteness, this article argues, is the same in every case: none of these theories adequately explains how conscious experience persists across time — how it is carried from one moment to the next, threaded into the continuous, self-referential stream that characterizes conscious life rather than a series of disconnected processing events.

This article proposes that a recently formalized model of working memory dynamics — Reser’s iterative updating model, grounded in the neurophysiology of sustained firing and synaptic potentiation in cortical association areas — supplies precisely this missing mechanism. The model describes how the contents of the focus of attention are never completely replaced but always partially carried forward, each successive brain state recursively embedded in its predecessor through a process termed incremental change in state-spanning coactivity (icSSC). This iterative, self-referential pattern of partial overlap is identified here as the neural mechanism of mental continuity — the temporal architecture that gives conscious experience its flowing, unified character and that every existing theory of consciousness presupposes but none provides.

The article proceeds in three main movements. First, it identifies the shared gap in existing theories — showing that Global Workspace Theory, Integrated Information Theory, Predictive Processing, and Higher-Order Theories each require a carrying mechanism that they do not themselves specify. Second, it demonstrates how iterative updating fills this gap in each case — completing the global workspace’s account of continuity between broadcasts, supplying the temporal dimension missing from IIT’s snapshot measure of integrated information, specifying the mechanism by which predictive models are carried forward and compounded, and providing the substrate for the ongoing self-representation that Higher-Order Theories require. Third, it engages the hard problem directly — arguing that while iterative updating does not dissolve the explanatory gap between physical process and felt experience, it reduces the hard problem to its irreducible core by providing a complete functional account of everything surrounding it.

The broader implications of this synthesis are explored across four domains. For artificial intelligence, iterative updating provides a concrete temporal criterion for machine consciousness — one grounded in architecture rather than substrate, with specific implications for the design of genuinely conscious machines. For clinical neuroscience, the framework generates testable hypotheses about disorders of consciousness, reframing conditions such as the vegetative state, attentional dysregulation, psychosis, and dissociation as characteristic disruptions to the iterative coupling of successive experience. For personal identity theory, the model grounds the self not in any particular content but in the dynamic pattern of maximum continuity — the longest-spanning thread of state-spanning coactivity — giving Parfit-style psychological continuity accounts a precise neural implementation. For the phenomenology of ordinary experience, iterative updating explains the specious present — the temporal thickness of the now — as the experiential signature of icSSC, with implications for understanding how contemplative practice, attentional training, and certain neurological and pharmacological conditions alter the width and depth of conscious experience.

The article concludes that iterative updating is the missing piece in consciousness science not in the sense of being the only missing element, but in the sense of being the keystone — the structural component whose absence has prevented existing frameworks from cohering into a unified account, and whose presence locks them into place. The hard problem survives this synthesis, but it survives alone — stripped of the functional questions that once obscured it, more precisely located and more clearly posed than before. Understanding how consciousness is carried does not tell us why it feels like anything. But it may be the most important step yet taken toward finding out.

Note: This article builds on a body of theoretical work developed by the author over more than a decade. The foundational peer-reviewed paper, published in Physiology & Behavior in 2016, introduced the core constructs of state-spanning coactivity (SSC) and incremental change in state-spanning coactivity (icSSC), and can be accessed at: https://www.sciencedirect.com/science/article/pii/S0031938416308289. The expanded theoretical framework — extending the model to artificial general intelligence and machine consciousness, and including over fifty illustrative figures — is presented in full at the author’s website: https://www.aithought.com. An earlier preprint version of the extended framework is archived on arXiv as A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Updating Working Memory Iteratively (2022), and can be accessed at: https://arxiv.org/abs/2203.17255.

I. Introduction: The Hard Problem and Why It Persists

In 1995, the philosopher David Chalmers drew a line in the sand that neuroscience has struggled to cross ever since. On one side of that line lie what he called the “easy problems” of consciousness — explaining how the brain integrates information, directs attention, controls behavior, and reports on its own internal states. These problems are easy not because they are simple, but because they are tractable. Given enough time and research, we can expect science to explain them in the same way it explains digestion or respiration — by mapping mechanisms onto functions. On the other side of the line lies something far more resistant: why any of this physical processing is accompanied by subjective experience at all. Why is there something it is like to see red, to feel grief, to notice the particular quality of late afternoon light? Why isn’t all this sophisticated neural machinery simply running in the dark, processing information without anyone home to experience it?

This is the hard problem. And what makes it genuinely hard — not just difficult but philosophically distinctive — is that the explanatory gap seems to persist even after every functional and mechanistic question has been answered. You could, in principle, produce a complete account of every neuron firing, every information cascade, every behavioral output involved in the experience of seeing red, and still coherently ask: but why does it feel like anything? The question doesn’t dissolve under scientific scrutiny. It retreats, stubbornly, to wherever the explanation stops.

The decades since Chalmers named this problem have been enormously productive for consciousness science. We now have sophisticated theories that illuminate different facets of conscious experience with genuine explanatory power. Global Workspace Theory reveals how information is broadcast across the brain to become globally available. Integrated Information Theory offers a mathematical framework for measuring the degree to which a system’s parts are bound together into a unified whole. Predictive Processing describes consciousness as the brain’s best ongoing model of the causes of its sensory inputs, perpetually updated by prediction error. Higher-Order Theories explain how mental states become conscious by being represented by other mental states. Each of these frameworks has deepened our understanding considerably, and each has attracted serious empirical support.

And yet the hard problem persists. Not because these theories are wrong, but because they are, in a specific and identifiable way, incomplete. Each of them excels at explaining what consciousness is at a given moment, or what it does, or how it is organized. What none of them adequately explains is something more fundamental: how conscious experience persists. How it flows. How it is threaded across time into the unified, continuous stream that we actually inhabit from moment to moment. How each instant of experience carries the weight of what came before it and reaches toward what comes next. How the self that wakes up tomorrow morning is recognizably continuous with the self that fell asleep last night, despite hours of unconsciousness and the complete turnover of the contents of awareness.

This is not merely a gap in our theories. It may be the gap — the precise location where the hard problem is hiding. Because the most distinctive and philosophically challenging feature of consciousness is not that it exists at any given instant, but that it flows. The stream of consciousness, as William James famously named it, is not a series of still photographs but a moving river — continuous, self-referential, always partly what it just was and partly becoming something new. Any theory that explains the photographs without explaining the motion has explained something real and important. But it has not explained consciousness.

This article argues that a recently formalized model of working memory dynamics — developed by neuroscientist Jared Reser and grounded in decades of neurophysiological research — supplies exactly this missing piece. The model, built around what Reser calls the iterative updating of working memory, describes a specific spatiotemporal pattern of neural activity in which the contents of attention are never completely replaced but always partially carried forward, each state recursively embedded in the one before it. This is not merely a model of memory. It is, this article will argue, a model of how consciousness is carried — the neural mechanism of temporal threading that existing theories presuppose but none provides.

The claim is not that iterative updating alone solves the hard problem. The hard problem may be, at its irreducible core, permanently resistant to any purely mechanistic account. But what this model does — when placed in dialogue with the best existing theories of consciousness — is reduce the hard problem to its smallest possible form. It fills in the functional architecture that the other theories leave implicit. It explains the river, not just the water. And in doing so, it brings us closer to a complete theory of consciousness than any of these frameworks has managed alone.

II. The Landscape of Existing Theories — What Each Gets Right and What Each Misses

To understand what is missing from our current theories of consciousness, it helps to appreciate just how much they have gotten right. The major frameworks developed over the past several decades are not failed attempts. They are genuine insights — partial maps of extraordinarily difficult terrain. The problem is not that any one of them is wrong but that each illuminates a different face of the same mountain while leaving the others in shadow. And the face that all of them leave in shadow, this article will argue, is the same one.

Global Workspace Theory

Bernard Baars introduced Global Workspace Theory in 1988, and it remains one of the most empirically supported frameworks in consciousness science. Its central insight is architectural. The brain, Baars proposed, is organized something like a theatre. Most neural processing happens backstage — unconsciously, in parallel, in specialized modules that never directly communicate with one another. What makes a mental state conscious is its access to a central “global workspace” — a broadcasting mechanism that makes information widely available across the brain, allowing otherwise isolated modules to share their outputs and coordinate their activity. Consciousness, on this view, is what it feels like to be the content currently on stage.

The neuroscientist Stanislas Dehaene and colleagues have built on this framework with impressive empirical results, identifying neural signatures of global broadcasting — the sudden, ignition-like surge of coordinated activity across prefrontal and parietal areas that accompanies conscious perception — and distinguishing it cleanly from the local, contained activity that characterizes unconscious processing. The theory explains a great deal: why attention is necessary for consciousness, why we can only be aware of a limited amount of information at once, why general anesthesia and certain brain lesions abolish awareness while leaving many cognitive functions intact.

What Global Workspace Theory does not explain is what happens between broadcasts. The global workspace is lit up, information is made available, and then what? The theory is largely silent on the question of how successive broadcasts are connected — how the content of one moment of consciousness is threaded into the next, how context is preserved across the gap, how the workspace is anything more than a series of disconnected illuminations. It explains the spotlight beautifully. It says relatively little about the stage that the spotlight moves across, or the continuity of the play being performed upon it.

Integrated Information Theory

Giulio Tononi’s Integrated Information Theory, or IIT, takes a radically different approach. Rather than asking what consciousness does or how it is organized, it asks what consciousness fundamentally is. Tononi’s answer is that consciousness is identical to integrated information — a specific mathematical quantity, phi, that measures the degree to which a system generates more information as a unified whole than the sum of its parts would generate independently. On this view, consciousness is not a functional property or an architectural feature but an intrinsic property of certain kinds of information structure. A system with high phi has rich inner experience. A system with low phi has little or none.

IIT is philosophically ambitious in ways that other theories are not. It takes the hard problem seriously as a hard problem rather than dissolving it into functional description. It makes precise, quantitative predictions. And it has the striking implication that consciousness may be far more widespread in nature than common sense assumes — any system with sufficient integrated information, biological or otherwise, would have some degree of experience.

But IIT has a fundamental limitation that bears directly on our argument. It is a static theory. Phi is calculated for a system at an instant. It measures the integrated information structure of a snapshot. What it does not capture is the contribution that time makes to conscious experience — the way that successive states build on one another, the way that the present moment carries the weight of the recent past, the way that experience is not a series of disconnected phi-measurements but a flowing, self-referential process. A theory of consciousness that operates on snapshots cannot, by itself, explain a phenomenon whose most distinctive feature is that it flows.

Predictive Processing

The predictive processing framework, developed most thoroughly by Karl Friston and Andy Clark, offers perhaps the most ambitious unified theory of brain function currently available. Its core claim is that the brain is not primarily a stimulus-response machine but a prediction machine — a hierarchical system that continuously generates models of the world and updates those models in response to prediction error. Perception, on this view, is not passive reception of sensory input but active hypothesis generation, with sensory signals serving primarily to correct the brain’s ongoing predictions rather than to inform it from scratch. Consciousness, in this framework, is closely associated with the brain’s high-level generative model — the best current hypothesis about the causes of sensory inputs.

Predictive processing is enormously productive as a framework, unifying perception, action, attention, and learning under a single computational principle. It naturally accommodates the active, constructive quality of conscious experience — the way we don’t simply receive the world but interpret and anticipate it. And it has generated a rich program of empirical research.

What it leaves underspecified, however, is the mechanism by which predictions are compounded and carried forward across time. Each prediction is, on the standard account, generated and then updated by prediction error. But what carries the generative model itself forward from one moment to the next? What threads successive predictions into a coherent, continuous model of an unfolding situation rather than a series of independent best guesses? The framework assumes that some such carrying mechanism exists — that the generative model persists and evolves — but it does not specify the neural dynamics that make this persistence possible. The river is assumed; its mechanism is left implicit.

Higher-Order Theories

Higher-Order Theories of consciousness, most associated with David Rosenthal, propose that a mental state is conscious when it is represented by another mental state — when the mind, so to speak, takes itself as an object. On this view, what distinguishes a conscious perception from an unconscious one is not its content or its processing depth but the presence, within the system, of a higher-order representation of that state. Consciousness is, essentially, self-awareness — the mind’s capacity to know its own states.

This framework captures something genuinely important about the reflexive quality of conscious experience — the way that to be conscious is always, in some sense, to be aware of being conscious. It also connects naturally to a rich philosophical tradition running from Kant through Sartre on the self-positing character of consciousness.

But Higher-Order Theories face a pressing question that they have not fully answered: what is the substrate of ongoing self-representation? The higher-order representation must itself persist across time for there to be a continuously self-aware subject rather than a series of disconnected self-aware instants. The theory requires a temporal foundation — something that carries the representing self forward from moment to moment — but provides no account of what that foundation consists in neurally or computationally. It describes the structure of self-awareness without explaining what makes self-awareness continuous.

The Shared Gap

What is striking, surveying these four major frameworks, is that their blind spots converge on the same place. Global Workspace Theory illuminates the broadcasting of information but not its continuity between broadcasts. Integrated Information Theory measures the structure of consciousness at an instant but not the contribution of temporal flow to that structure. Predictive Processing describes the generation and updating of predictions but not the mechanism that carries the generative model forward. Higher-Order Theories explain the reflexive structure of awareness but not the temporal substrate that makes ongoing self-representation possible.

In each case, the missing element is the same: a detailed account of how conscious content is carried across time — threaded from one moment into the next, preserved and transformed in the specific overlapping, self-referential pattern that gives experience its distinctive flowing quality. Each theory presupposes this carrying without explaining it. Each assumes that something threads the moments together without specifying what that something is or how it works.

This is not a minor omission. It may be the central omission. Because if the hard problem is, at its heart, the question of why physical processes give rise to a unified, continuous stream of subjective experience — rather than merely a series of disconnected processing events — then the mechanism of carrying is not peripheral to the problem. It is the problem. And it is precisely what none of these theories, for all their genuine insights, has provided.

That mechanism, this article argues, is iterative updating. And to understand why, we need to look carefully at what Reser’s model actually describes.

III. Reser’s Model — The Neural Mechanism of Carrying

If consciousness is a river, neuroscience has spent decades analyzing the water — its chemical composition, its temperature, the way it catches the light. What has been missing is an account of the riverbed: the structure that gives the water its direction, its continuity, its character as a flow rather than a collection of disconnected droplets. Reser’s model of iterative working memory updating is, this article argues, precisely that account. It describes not what the contents of consciousness are at any given moment, but the dynamic pattern by which those contents are perpetually carried forward — the neural mechanism of the river itself.

The model is grounded in decades of careful neurophysiology, and its foundations are worth understanding in some detail, because the philosophical implications follow directly from the biological facts.

The Two Layers of Persistence

The starting point is the observation that the brain maintains information across time using two distinct but complementary mechanisms, operating at different timescales and serving different functions.

The first is sustained firing. Pyramidal neurons in the prefrontal cortex, parietal cortex, and other association areas are specialized to generate action potentials at elevated rates for several seconds at a time — far longer than the brief, stimulus-locked activity typical of sensory neurons. This sustained firing is the neural basis of what psychologists call the focus of attention: the small set of representations — perhaps four, plus or minus one, as Nelson Cowan’s extensive research suggests — that are actively, consciously attended to at any given moment. When you are holding a thought in mind, turning a problem over, keeping a goal active while you work toward it, the neural substrate of that holding is sustained firing in association areas. It is temporary, metabolically costly, and capacity-limited. But while it lasts, it keeps specific representations in a heightened state of availability, broadcasting their encoded information continuously to whatever neurons they project to.

The second mechanism is synaptic potentiation. When neurons fire, they leave traces — temporary changes in synaptic strength that persist long after the firing itself has ceased. This activity-silent form of memory maintains information in what psychologists call the short-term store: a broader penumbra of recently active representations that are no longer in the spotlight of focal attention but remain primed, easily reactivated, and capable of influencing ongoing processing. Where sustained firing lasts seconds, synaptic potentiation can persist for minutes. Where sustained firing underlies the bright center of awareness, synaptic potentiation underlies its dimmer fringe — what William James described as the “subconscious more” surrounding the focal center of experience.

These two layers map with striking precision onto James’s own phenomenological description of consciousness as a centre surrounded by a fringe. They are not merely convenient theoretical constructs. They are well-documented biological realities with distinct neural substrates, distinct timescales, and distinct functional roles. And together, as Reser’s model shows, they provide the physical infrastructure for something that neither could accomplish alone: the carrying of conscious content across time.
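The two timescales can be made concrete with a minimal toy model. The sketch below is illustrative only: the decay constants and the coupling between layers are arbitrary assumptions chosen to show the qualitative relationship, not parameters drawn from the neurophysiology the model cites. `firing` stands in for sustained firing (seconds-scale), and `trace` for the activity-silent potentiation it leaves behind (minutes-scale).

```python
import math

# Toy two-timescale persistence model (illustrative parameters, not
# measurements). Sustained firing decays over seconds; the synaptic
# trace it charges decays over minutes.
FIRING_TAU = 3.0    # seconds: assumed timescale of sustained firing
TRACE_TAU = 120.0   # seconds: assumed timescale of synaptic potentiation

def step(firing, trace, dt=1.0):
    """Advance both persistence layers by dt seconds."""
    new_firing = firing * math.exp(-dt / FIRING_TAU)
    # Recent firing charges the slower, activity-silent trace.
    new_trace = trace * math.exp(-dt / TRACE_TAU) + firing * dt
    return new_firing, new_trace

firing, trace = 1.0, 0.0
for _ in range(10):
    firing, trace = step(firing, trace)

# After ten seconds the sustained-firing component has largely died away,
# while the trace it charged remains available as a primed fringe.
print(f"firing={firing:.3f}, trace={trace:.3f}")
```

The point of the sketch is the asymmetry: when the spotlight component has decayed to near zero, the fringe component is still strong, which is exactly the division of labor the two biological mechanisms are described as providing.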

The Iterative Pattern

Here is the central insight. At any given moment, the brain’s association areas contain a population of neurons in sustained firing — the neural ensemble corresponding to the current contents of focal attention. This population is not static. Neurons enter and exit sustained firing continuously, their individual spans of activity staggered and asynchronous. Some neurons have been firing for several seconds; others began firing moments ago; still others are about to fall silent. At no point does the entire population switch off simultaneously and a new population switch on. The turnover is always partial, always gradual, always overlapping.

This means that any two successive states of the focus of attention share a subset of their neural content in common. The state at time 2 is not a fresh start but a partial continuation of the state at time 1 — modified, updated, but carrying forward a proportion of what came before. And the state at time 3 carries forward a proportion of time 2, which itself carried forward a proportion of time 1. The result is a cascading, self-referential pattern of partial overlap — each state recursively embedded in its predecessor — that Reser terms incremental change in state-spanning coactivity, or icSSC.

This is the riverbed. This is the structural pattern that gives experience its flowing, continuous character rather than the flickering, disconnected quality that would result from complete state replacement at each moment. Reser introduces the term state-spanning coactivity (SSC) to describe the subset of representations that persist across successive states, and icSSC to describe the ongoing process of their gradual turnover. Together, these constructs give formal, neurobiologically grounded identity to something that philosophy has long gestured at but never precisely located: the neural mechanism of mental continuity.
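The staggered-turnover pattern can be simulated directly. In the hedged sketch below, a state of the focus of attention is a set of active representations; at each step exactly one representation exits and one enters, so adjacent states always share most of their content while distant states may share none. The capacity of four and the one-item-per-step turnover rate are illustrative assumptions, not quantitative claims of the model.

```python
import random

# Toy simulation of incremental change in state-spanning coactivity
# (icSSC): successive states of the focus of attention overlap partially,
# never turning over all at once.
random.seed(0)
CAPACITY = 4                    # roughly Cowan's four-item focus
REPERTOIRE = list(range(100))   # stand-in for long-term memory items

state = set(random.sample(REPERTOIRE, CAPACITY))
history = [frozenset(state)]

for _ in range(12):
    exiting = random.choice(sorted(state))      # one representation falls silent
    state.remove(exiting)
    entering = random.choice(                   # one new representation joins
        [r for r in REPERTOIRE if r not in state and r != exiting])
    state.add(entering)
    history.append(frozenset(state))

def overlap(a, b):
    return len(a & b) / CAPACITY

# Every adjacent pair shares three of four items (overlap 0.75), while the
# first and last states in the walk may share little or nothing.
print([round(overlap(history[i], history[i + 1]), 2)
       for i in range(len(history) - 1)])
print(round(overlap(history[0], history[-1]), 2))
```

The second printed value illustrates the "riverbed" claim: continuity is guaranteed locally, between neighboring states, even though no content is guaranteed to survive over the long run.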

How the Next State is Selected

Iterative updating does not happen randomly. The question of what gets added to the focus of attention at each step — what joins the representations being carried forward — is answered by a process Reser calls multiassociative search. The neurons currently in sustained firing spread their combined electrochemical activation energy throughout the thalamocortical network, converging on the inactive representations in long-term memory most closely associated with the current constellation of active content. The representation receiving the most convergent activation becomes the next update — the newest addition to the focus of attention.

This is spreading activation theory given a precise iterative architecture. It means that each state of consciousness is simultaneously two things: the product of the previous state’s search, and the parameters for the next search. Every moment of experience is both a conclusion and a question. The mind doesn’t just hold content — it uses that content, pooling the activation energy of everything currently active, to reach toward what comes next. This is how one thought suggests another, how a line of reasoning advances, how a narrative unfolds. It is the neural implementation of what philosophers since Plato have called association — but operating not between individual ideas but between dynamically shifting constellations of coactive representations.

The broader penumbra of synaptic potentiation also contributes to this search. The short-term store — everything recently active but no longer in focal attention — remains capable of spreading residual activation, biasing the search toward content consistent with the recent past even when that content is no longer explicitly attended to. This gives the search process a kind of depth: it is informed not just by what is currently in the spotlight but by the entire recent history of the spotlight’s movement. Context, in this model, is not a background condition — it is an active participant in determining what comes next.
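A toy version of this search is easy to state in code. In the sketch below, the currently active constellation pools activation across a hypothetical association matrix, recently active but unattended content contributes a weaker residual signal, and the inactive candidate receiving the most convergent activation becomes the next addition. All names, weights, and the residual-weight parameter are invented for illustration; none come from the model itself.

```python
# Toy multiassociative search: convergent spreading activation from the
# focus of attention plus a weaker bias from the short-term store.
# Association strengths below are hypothetical.
ASSOC = {
    ("coffee", "cup"): 0.9, ("coffee", "morning"): 0.7,
    ("cup", "saucer"): 0.8, ("morning", "sunrise"): 0.6,
    ("coffee", "caffeine"): 0.8, ("morning", "caffeine"): 0.5,
}

def strength(a, b):
    return ASSOC.get((a, b), ASSOC.get((b, a), 0.0))

def multiassociative_search(active, short_term_store, candidates,
                            residual_weight=0.3):
    """Return the candidate receiving the most convergent activation."""
    def pooled(c):
        focal = sum(strength(a, c) for a in active)
        # The fringe still spreads residual activation, biasing the search
        # toward content consistent with the recent past.
        fringe = residual_weight * sum(strength(s, c)
                                       for s in short_term_store)
        return focal + fringe
    return max(candidates, key=pooled)

# "caffeine" wins because it receives convergent activation from BOTH
# active items, not the strongest single association.
print(multiassociative_search({"coffee", "morning"}, {"cup"},
                              ["saucer", "sunrise", "caffeine"]))
```

The design choice worth noticing is that the winner is selected by summed, convergent input from the whole constellation; this is what makes the search "multiassociative" rather than a chain of one-to-one associations.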

Consciousness as Transition, Not State

One of the most philosophically significant implications of this model is a reframing of what consciousness actually is. The instinct, in both folk psychology and much of consciousness science, is to think of conscious experience as a property of states — of particular moments of awareness, particular qualia, particular perceptions. The hard problem is typically framed this way: why does this neural state produce this experience?

Reser’s model suggests this framing may be subtly mistaken. Consciousness, on this account, is not a property of any single state but of the pattern of transitions between states — specifically, the pattern of partial, iterative, self-referential overlap that icSSC describes. No individual state, however richly processed or widely broadcast, carries experience on its own. What carries experience is the dynamic relationship between successive states: the fact that each is recursively embedded in its predecessor, that each carries the weight of what came before while reaching toward what comes next.

This shift from states to transitions has deep implications for the hard problem, which we will explore in Section V. But even at the level of neural mechanism, it is illuminating. It means that asking “which neurons produce consciousness?” may be the wrong question — like asking which single frame of a film produces the impression of motion. The motion isn’t in any frame. It is in the relationship between frames, specifically in the rate and pattern of their succession. Consciousness, analogously, may not be in any neural state. It may be in the iterative pattern of their overlap.

The Fractal Depth of the Present Moment

There is a further structural feature of the model that deserves attention for its philosophical richness. Because successive states share decreasing proportions of content — the state at time 2 shares more with time 1 than with time 0, more with time 0 than with time minus 1, and so on — the present moment of consciousness has what we might call fractal temporal depth. It is not a knife-edge instant but a weighted integration of recent history, with the most recent past contributing most strongly and progressively earlier states contributing progressively less.

This is the neural basis of what philosophers, following E. R. Clay and William James, call the specious present: the observation, developed with particular care by Edmund Husserl, that conscious experience is never a pure instant but always a brief temporal window containing what Husserl termed retentions of the just-past and protentions of the about-to-come. Husserl arrived at this insight through careful phenomenological analysis. Reser’s model arrives at the same structure through neurophysiology. The retentions are the representations carried forward by sustained firing and synaptic potentiation. The protentions are the predictions generated by multiassociative search. The specious present is not a philosophical abstraction. It is the icSSC pattern, instantiated in the overlapping spans of neural activity in association cortex.

The Minimum Conditions for Consciousness

The 2016 version of Reser’s model adds a constraint that is easy to overlook but philosophically significant. A single representation sustained over time, however persistently, is not sufficient for mental continuity or conscious experience. What is required is at least two coactive representations — because it is only in the relationship between coactive representations that meaning, context, and associative content can emerge. Consciousness, on this view, is inherently relational. It is not the activation of any single concept but the dynamic interplay between a constellation of concepts, carried forward together and partially renewed at each step.

This has a direct bearing on the hard problem. It suggests that the question “why does this neuron’s firing produce experience?” is not just unanswerable but malformed. No single neuron’s firing produces experience. Experience arises from the coordinated, iteratively overlapping activity of many neurons representing many things simultaneously — and specifically from the pattern of how that coordination evolves across time. The unit of consciousness is not the neuron, not the representation, and not even the brain state. It is the iterative transition pattern — the icSSC unfolding in real time across the association cortex.

A Mechanism the Other Theories Need

What emerges from this detailed examination of Reser’s model is not just a theory of working memory but a specification of the temporal machinery that conscious experience requires. The focus of attention provides the bright center. The short-term store provides the penumbral context. Multiassociative search provides the engine of progression. And icSSC — the iterative, self-referential pattern of partial overlap between successive states — provides the carrying mechanism that threads all of it into a continuous stream.

This is precisely the mechanism that Global Workspace Theory assumes without specifying. It is the dynamic dimension that Integrated Information Theory measures in snapshot but never captures in flow. It is the carrying process that Predictive Processing presupposes but leaves implicit. It is the temporal substrate that Higher-Order Theories require for ongoing self-representation but do not provide. The model does not compete with these frameworks. It completes them — supplying the one structural element they all need and none has provided.

The river, at last, has a bed.

IV. How Iterative Updating Completes the Other Theories

The previous section established what iterative updating is and how it works at the level of neural mechanism. This section turns to the question of what it does for our existing theories of consciousness — how, specifically, it fills the gap that each theory leaves open and what the resulting synthesis looks like. The claim is not that iterative updating replaces these frameworks. It is that each framework, when combined with an account of iterative updating, becomes something more than it was alone. The pieces, it turns out, were always designed to fit together. They were simply missing the one structural element that would allow them to do so.

Completing Global Workspace Theory

Global Workspace Theory’s great strength is its account of how information becomes conscious: through ignition — the sudden, widespread broadcasting of locally processed content across a global neural workspace, making it available to the whole brain simultaneously. This is a genuine and well-evidenced insight. The problem, as Section II established, is that the theory says relatively little about what happens between broadcasts, or how successive broadcasts are connected into a coherent, continuous stream of experience rather than a series of disconnected illuminations.

Iterative updating answers this directly. The global workspace is not lit up from scratch at each moment. Its contents are never completely replaced. Instead, the representations currently occupying the workspace spread their combined activation energy — through the mechanism of multiassociative search — to select what will join them in the next state. A proportion of the current contents is carried forward; new content is added; the workspace evolves rather than resets. The iterative overlap between successive states of the workspace is what gives broadcasting its narrative continuity — what ensures that each ignition event is not an isolated flash but a chapter in an ongoing story.
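As a rough sketch (the association weights and update rule here are invented caricatures, not the model's actual equations), multiassociative search can be pictured like this: the currently active items jointly score every inactive item by summed association strength, the best-scoring item is recruited, and the active item least associated with the rest of the set drops out. The workspace thereby evolves by partial replacement rather than resetting.

```python
import itertools
import random

rng = random.Random(42)
N = 8  # toy "representations", labeled 0..7

# Symmetric association strengths between every pair of items.
assoc = {}
for i, j in itertools.combinations(range(N), 2):
    assoc[i, j] = assoc[j, i] = rng.random()

def next_state(active):
    """One iterative update: recruit the inactive item most associated
    with the current set; drop the active item least associated with
    the rest of the set. Most of the state is carried forward."""
    inactive = [i for i in range(N) if i not in active]
    newcomer = max(inactive, key=lambda i: sum(assoc[a, i] for a in active))
    outgoing = min(active, key=lambda a: sum(assoc[a, b] for b in active if b != a))
    return (active - {outgoing}) | {newcomer}

state = {0, 1, 2}
for _ in range(4):
    new = next_state(state)
    print(sorted(state), "->", sorted(new), "carried:", sorted(state & new))
    state = new
```

Note the structural point, which survives the toy's crudeness: every transition carries two of three items forward, so successive workspace states form an overlapping chain rather than a series of resets.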

More specifically, the short-term store — the broader penumbra of synaptic potentiation surrounding the focal workspace — acts as a kind of contextual memory for the workspace itself, biasing each new ignition toward content consistent with recent processing history. This means the global workspace is never operating in isolation from its own past. It is always, to some degree, a continuation of what it just was. Iterative updating is, in this sense, the mechanism of workspace coherence — the process that transforms a series of broadcast events into a unified stream of conscious experience. Global Workspace Theory explains the spotlight. Iterative updating explains why the spotlight tells a story.

Completing Integrated Information Theory

The relationship between iterative updating and Integrated Information Theory is perhaps the most mathematically interesting of the four. IIT’s central claim is that consciousness is identical to integrated information — the phi value of a system, measuring how much more information the system generates as a unified whole than its parts would generate independently. High phi means rich experience. Low phi means little or none.

The limitation identified in Section II is that phi is calculated for a system at an instant. It is a snapshot measure. But conscious experience is not a snapshot — it is a process unfolding across time, and the temporal dimension of that unfolding may contribute enormously to the effective integration of information in a way that instantaneous phi cannot capture.

Iterative updating suggests that the relevant unit for measuring consciousness may not be the brain at an instant but the brain across a temporal window — specifically, the window defined by the span of icSSC. When successive states share overlapping content through iterative updating, the information generated by earlier states is not lost when those states end. It is carried forward, integrated with new content, and incorporated into subsequent states. Each state is not informationally independent but informationally continuous with its predecessors. The effective integration — the phi — of the whole iterative sequence is therefore dramatically higher than the phi of any individual snapshot.

Put differently: iterative updating multiplies IIT’s phi across time. The unity of consciousness that IIT seeks to measure is not just the unity of a brain state but the unity of a brain process — a temporally extended, self-referential sequence of partially overlapping states whose information content is continuously integrated not just spatially, across brain regions, but temporally, across successive moments. IIT provides the measure of integration. Iterative updating provides the temporal architecture that makes deep integration possible in the first place. Together, they describe not just how much integrated information a conscious system has at any instant, but how that integration is sustained, compounded, and carried forward across the flow of experience.
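To make the contrast vivid without pretending to compute IIT's actual phi (which requires a full causal analysis of the system), here is a deliberately crude proxy: count how much content survives each transition across a temporal window. An iteratively updated sequence accumulates carried-forward content at every step; a sequence whose states are built from scratch accumulates none, however rich each snapshot looks in isolation.

```python
def carried_content(states):
    """Crude proxy for temporal integration: total items shared
    between consecutive states across the window. Not IIT's phi."""
    return sum(len(a & b) for a, b in zip(states, states[1:]))

# Iterative updating: each of five states keeps 7 of its 10 items.
iterative = [set(range(t, t + 10)) for t in range(0, 15, 3)]
# Full replacement: every state is disjoint from its predecessor.
replaced = [set(range(t, t + 10)) for t in range(0, 150, 30)]

print(carried_content(iterative))  # 28 (4 transitions x 7 shared items)
print(carried_content(replaced))   # 0 (nothing survives a transition)
```

Any snapshot from either sequence looks identical (ten active items), which is the point: the difference the model cares about lives entirely in the transitions, not in the states.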

There is a further implication worth noting. IIT predicts that systems with higher phi have richer experience. Iterative updating predicts that systems with slower, more deeply overlapping state transitions — more sustained firing, longer half-lives of attention — have more temporally integrated experience. These predictions converge: the same neural properties that produce deep iterative overlap (large association cortices, prolonged sustained firing, high working memory capacity) would also be expected to produce high phi. The two theories are not merely compatible. They may be describing the same underlying reality from different mathematical angles.

Completing Predictive Processing

Predictive Processing’s account of consciousness is built around the brain’s generative model — its best ongoing hypothesis about the hidden causes of sensory input, perpetually refined by prediction error. Experience, on this view, is the model itself: what we consciously perceive is not raw sensory data but the brain’s top-down prediction, corrected by bottom-up signals. This is a powerful and productive framework that has reshaped our understanding of perception, attention, and action.

But the generative model must persist and evolve across time to do its work. A prediction that is made and then forgotten — with no carrying forward of its content into the next predictive cycle — would be useless for modeling an unfolding situation. What gives the generative model its coherence and depth is precisely the fact that each new prediction is built on the residue of previous ones — that the model at time 2 inherits the structure of the model at time 1, modified but not replaced. This is the carrying mechanism that Predictive Processing assumes but does not specify.

Iterative updating is that mechanism. The representations currently active in the focus of attention and short-term store are the generative model, in neural terms — the brain’s current best hypothesis about what is happening and what will happen next, encoded in the constellation of coactive representations undergoing icSSC. Multiassociative search is the process by which this model generates its next prediction: the combined activation energy of currently active representations converges on the most associated inactive content, pulling it into the model as its next predicted element. And the iterative overlap between successive states is what gives the model its continuity — what ensures that each prediction is informed by the full recent history of the model’s evolution rather than generated from scratch.

This has a specific and important implication for the hard problem. Predictive Processing theorists like Andy Clark and Jakob Hohwy have argued that conscious experience is the brain’s model of itself — that what we experience is the brain’s prediction of its own sensory states. If this is right, then the continuity of conscious experience reflects the continuity of the generative model. And the continuity of the generative model is, on the account developed here, a product of iterative updating. The flowing quality of experience — the sense that now is always connected to just-was and about-to-be — is the phenomenal signature of icSSC operating on the brain’s self-model. Predictive Processing tells us what consciousness is modeling. Iterative updating tells us how that model holds together across time.

Furthermore, the compounding of predictions that iterative updating enables — where each prediction is built on the residue of several previous ones, creating chains of associatively linked intermediate states — maps naturally onto what Predictive Processing calls hierarchical inference. Higher levels of the predictive hierarchy model slower, more abstract regularities; lower levels model faster, more concrete ones. Iterative updating provides the temporal glue that allows these hierarchical levels to remain coherent with one another across time — the mechanism by which slow, abstract predictions constrain fast, concrete ones not just at an instant but across an unfolding sequence of events.

Completing Higher-Order Theories

Higher-Order Theories propose that a mental state becomes conscious when it is the object of a higher-order representation — when the mind takes itself as its own object. This captures something genuinely important about the reflexive character of consciousness: the way that to be aware is always, in some sense, to be aware of being aware. But as Section II noted, Higher-Order Theories require a temporal substrate — something that carries the representing self forward from moment to moment — without specifying what that substrate is.

Iterative updating provides it. The self that represents its own mental states is not a fixed entity that exists independently of those states and observes them from outside. It is itself a product of the iterative process — the emergent pattern of what remains constant across the most iterations. Recall the key insight from the 2016 version of Reser’s model: the representations that persist longest in sustained firing — that demonstrate SSC across the greatest number of successive states — correspond to the underlying theme of ongoing thought, the stable referent to which all the changing content relates. This enduringly active core is, neurally speaking, the self: not a homunculus or a Cartesian theater but a dynamically stable attractor in the iterative process, the set of representations that changes slowest as everything else flows around it.
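A small illustration (the states and labels are invented for the example) shows the idea: across a run of partially overlapping working memory states, the item that recurs in the most states is the stable theme everything else revolves around, which is the role the model assigns to the neural self.

```python
from collections import Counter

# Invented example: four successive working memory states while
# planning a trip. Content churns; one representation spans them all.
history = [
    {"trip", "budget", "flights"},
    {"trip", "budget", "hotels"},
    {"trip", "hotels", "packing"},
    {"trip", "packing", "weather"},
]

counts = Counter(item for state in history for item in state)
theme, span = counts.most_common(1)[0]
print(theme, span)  # 'trip' appears in all 4 states; the rest are transient
```

The "self", on this account, is whatever plays the role of "trip" here: not an extra observer, but the slowest-changing occupant of the iterative flow.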

On this account, higher-order self-representation is not a separate cognitive operation layered on top of first-order experience. It is intrinsic to the iterative process itself. Every time a new representation is added to the focus of attention, it is added to an existing constellation — related to, contextualized by, and partially constituted by the representations that have been carried forward. The new content is always encountered in the context of the old. This contextualizing, relating, embedding of new content in existing content is the mind’s ongoing self-representation — the continuous, implicit awareness of being a self with a history, a context, and a perspective. Higher-Order Theories describe the structure of this self-awareness. Iterative updating describes the dynamic process that generates and sustains it moment to moment.

There is a further implication for personal identity — the philosophical question of what makes the person who wakes up tomorrow the same person who fell asleep last night. The standard Higher-Order answer appeals to memory and psychological continuity. Iterative updating gives this answer a precise neural grounding: personal identity across time is the persistence of SSC — the thread of overlapping, carried-forward representations that connects each moment of experience to the ones before it. The self is not a substance or a soul but a pattern: the longest-spanning, most consistently recurring attractor in the iterative flow of working memory. What survives sleep, distraction, and the passage of time is not any particular content but the structural tendency of certain representations to recur, to persist, to be carried forward again and again as others come and go around them.

The Synthesis

Stepping back from these four engagements, a unified picture begins to emerge. Global Workspace Theory tells us how information is made conscious — through global broadcasting and ignition. Integrated Information Theory tells us what conscious experience fundamentally is — deeply integrated information, unified across space and time. Predictive Processing tells us what consciousness is for — modeling the world and the self, generating and refining predictions. Higher-Order Theories tell us how consciousness knows itself — through the reflexive representation of mental states by other mental states.

What none of these theories tells us — and what iterative updating provides — is how conscious experience persists. How it flows from one moment to the next. How context is preserved across the gap between broadcasts. How the generative model holds together across time. How the self that represents its own states remains continuous across successive moments of self-representation. How, in short, consciousness is carried.

The synthesis is therefore not merely additive — five theories stitched together into an unwieldy composite. It is architecturally coherent. Iterative updating is not one more piece placed alongside the others. It is the foundation on which the others rest — the temporal structure that makes broadcasting possible, that gives integration its depth across time, that carries the generative model forward, and that sustains the self that represents itself. Remove it, and each of the other theories loses the dynamic substrate it needs to function. Add it, and each of them becomes, for the first time, a complete account of what it set out to explain.

V. Engaging the Hard Problem Directly

The previous two sections have established something significant. Reser’s model of iterative updating supplies a precise neural mechanism for the carrying of conscious content across time — the one structural element that existing theories of consciousness all presuppose but none provides. And when combined with those theories, it produces a synthesis that is architecturally coherent in a way that no single framework has previously achieved. We now have, or at least can sketch, a reasonably complete functional account of consciousness: what it is, how it is organized, what it does, how it knows itself, and how it persists.

And yet. The hard problem is still there.

A committed philosopher of mind — Chalmers himself, most likely — would read everything in the preceding sections and respond with a question that is as simple as it is devastating: granted all of this, why is there something it is like to be a system doing iterative updating? You have described a beautiful and intricate functional mechanism. You have shown how it threads experience together, how it sustains context, how it generates the flowing, self-referential quality of conscious life. But you have not explained why any of this processing is accompanied by felt experience rather than proceeding entirely in the dark. A philosophical zombie — a system physically and functionally identical to a conscious human being, but with no inner experience whatsoever — could, in principle, do perfect iterative updating without anyone home to experience it. The explanatory gap has not been closed. It has merely been more precisely located.

This objection must be taken seriously. It would be intellectually dishonest to claim that the synthesis developed in this article dissolves the hard problem entirely. It does not. But taking the objection seriously is not the same as conceding that the synthesis leaves us no better off than before. There are several responses available — some more radical than others — and together they suggest that while the hard problem may survive in some form, it survives in a dramatically reduced and more tractable form than it presented before.

The Hard Problem, More Precisely Located

The first and most important point is not a solution but a transformation. Before iterative updating is brought into the picture, the hard problem presents itself in its most intractable form: why does any neural processing produce experience? This is a question so broad that it is difficult to know where to begin. It seems to demand either a radical revision of our ontology — adding experience as a fundamental feature of reality — or a dissolution of the question itself as confused or malformed.

After iterative updating, the question changes. We are no longer asking why neural processing in general produces experience. We are asking something much more specific: why does this particular spatiotemporal pattern — the iterative, self-referential, partially overlapping cascade of working memory states described by icSSC — produce unified, continuous, phenomenally rich experience, when simpler or non-iterative processing apparently does not? This is still a hard question. It may even be, at its core, the same question. But it is a surgical question rather than a global one. And surgical questions, historically, are the ones that science and philosophy make progress on.

This transformation matters because it gives the hard problem a definite shape. It is no longer a question about the relationship between matter and mind in general — a question so vast it seems to swallow any attempted answer. It is a question about a specific kind of matter doing a specific kind of thing. The explanatory gap has not been closed, but it has been given precise boundaries. And a gap with precise boundaries is a gap that can be measured, studied, and — perhaps — eventually crossed.

The Illusionist Response

The most radical response to the hard problem, and in some ways the most consistent with the functional account developed here, is illusionism — the position associated most prominently with Daniel Dennett and, in its more explicit form, with Keith Frankish. On this view, the hard problem is not a genuine problem about consciousness but a problem about our representation of consciousness. We don’t actually have the rich, ineffable, intrinsic qualia that give rise to the hard problem. What we have is a functional system that represents itself as having such qualia — a brain that generates higher-order models of its own states and attributes to those states properties they don’t actually possess in the way we naively think they do.

The hard problem, on this view, is a cognitive illusion — the product of a brain that is very good at modeling the world and itself, but systematically misleads itself about the nature of its own experience. There is no explanatory gap to cross because there is nothing on the other side of the gap that needs explaining. The phenomenal properties that seem to demand explanation — the redness of red, the painfulness of pain — are not intrinsic features of experience but features of the brain’s self-model.

Iterative updating sits comfortably within this framework and arguably strengthens it. If illusionism is correct, then what we need to explain is not why iterative updating produces genuine qualia but why it produces the impression of rich, continuous, unified inner experience. And here the model is directly relevant. The iterative, self-referential pattern of icSSC is precisely the kind of process that would generate a compelling self-model of continuity and unity. A system whose states are always partially constituted by their predecessors, whose present is always weighted with its recent past, whose processing is always contextually embedded in the thread of what came before — such a system would naturally represent itself as having a continuous, unified inner life. The impression of flowing experience is what iterative updating feels like from the inside, on the illusionist account. And if the impression is all there is, then iterative updating explains consciousness completely.

The difficulty with illusionism, of course, is that it seems to deny something that seems undeniable: that there really is something it is like to read these words right now — that experience has a felt quality that cannot be reduced to the brain’s self-representation of that quality without remainder. This intuition is enormously powerful, and dismissing it requires a kind of philosophical courage that many find difficult to muster. But it is worth noting that iterative updating makes the illusionist position more plausible than it might otherwise seem — because it provides, for the first time, a concrete mechanism by which the impression of unified, continuous experience could be generated by a physical system, without any appeal to mysterious additional ingredients.

The Panpsychist and IIT Response

A very different response to the hard problem is available within the framework of Integrated Information Theory and its philosophical cousin, panpsychism. On these views, the hard problem dissolves not because experience is an illusion but because it is fundamental — a basic feature of reality that does not need to be derived from or explained in terms of anything more primitive.

For IIT, consciousness is identical to integrated information. It is not produced by certain physical processes — it is a certain kind of physical structure, viewed from the inside. On this view, asking why iterative updating produces experience is like asking why water produces wetness: the question presupposes a separation between the physical process and its experiential character that does not actually exist. Iterative updating, with its deeply temporally integrated information structure — each state informationally continuous with its predecessors, the whole sequence generating far more integrated information than any of its parts — simply is a form of experience, viewed from the outside. From the inside, it is what it feels like to be a mind in flow.

This response has the significant advantage of taking the hard problem seriously as a hard problem — refusing to explain it away — while also offering a principled account of why the specific properties of iterative updating would be associated with rich conscious experience rather than diminished or absent experience. The deeper the iterative overlap, the higher the effective phi across the temporal window of icSSC, and therefore — on IIT’s account — the richer the experience. The flowing, contextually embedded, self-referential quality of consciousness is not incidental to its phenomenal richness. It is constitutive of it. Iterative updating, on this view, is not just the mechanism of carrying. It is the mechanism of experience itself.

The panpsychist version of this response goes further, suggesting that some form of experience may be present wherever there is some form of integrated information — however primitive. Reser’s 2016 paper makes a point that sits naturally within this framework: even simple nervous systems, in nematodes and fruit flies, exhibit rudimentary forms of state-spanning coactivity. If experience is graded with integration, then these creatures have something — vanishingly thin, perhaps unrecognizably alien to human consciousness, but something. The hard problem does not arise at a threshold but gradually, as iterative complexity increases. There is no sharp line between the experiencing and the non-experiencing. There is only more or less of the same fundamental thing.

The Husserlian Response

There is a third response that is less frequently mobilized in hard problem discussions but which the present synthesis makes newly available. The philosopher Edmund Husserl argued, through careful phenomenological analysis, that the structure of consciousness is intrinsically temporal — that experience is never a pure instant but always a three-part structure of retention, primal impression, and protention: the just-past, the now, and the about-to-come, held together in a single act of awareness. For Husserl, this temporal structure is not something that happens to consciousness from outside. It is constitutive of consciousness — part of what makes experience the kind of thing it is rather than a series of disconnected instants.

What the present synthesis offers is a neural implementation of Husserl’s insight that is precise enough to be scientifically testable. The retention is the synaptic potentiation of recently active representations in the short-term store — the carried-forward residue of the just-past that biases current processing. The primal impression is the content currently in sustained firing in the focus of attention — what is actively, vividly present. The protention is the prediction generated by multiassociative search — the reaching-forward toward what the current constellation of active representations anticipates as its most probable continuation.

This convergence between phenomenological analysis and neurophysiology is not accidental. Both Husserl and Reser are describing the same underlying reality from different directions — one from the inside of experience, one from the outside of neural mechanism. The fact that they arrive at structurally identical descriptions is significant. It suggests that the temporal structure of consciousness is not merely a phenomenological artifact or a neural epiphenomenon but a genuine and deep feature of what consciousness is — a feature that any complete theory must account for and that iterative updating, uniquely among neural models, actually does account for.

For the hard problem, this convergence is suggestive. It cannot, by itself, close the explanatory gap. But it can change our attitude toward it. If the temporal structure that phenomenology identifies as constitutive of experience is identical to the temporal structure that neurophysiology identifies as the mechanism of carrying — if retentions just are synaptic potentiation, and protentions just are multiassociative predictions, and the specious present just is the icSSC window — then the gap between phenomenal description and neural mechanism is narrower than it appeared. We are not looking at two entirely different things and asking why one produces the other. We may be looking at the same thing from two different vantage points and asking why it appears different depending on which side we are standing on.

That question — why the same process looks like neural mechanism from the outside and felt experience from the inside — is still the hard problem. But it is a more tractable version of it. It is, perhaps, the version that a future science of consciousness will actually be able to address.

What Remains

It would be satisfying to conclude this section by declaring the hard problem solved. It would also be dishonest. What the synthesis developed in this article achieves is something more modest but still significant: it reduces the hard problem to its irreducible core.

The functional questions — how experience is organized, broadcast, integrated, predicted, self-represented, and carried — have answers, or at least detailed and empirically grounded candidate answers. The combination of iterative updating with Global Workspace Theory, IIT, Predictive Processing, and Higher-Order Theories provides a comprehensive functional architecture for consciousness that no single framework has previously achieved. The easy problems, in this synthesis, are genuinely easier than they were before.

What remains is the hard problem in its purest form: why does this functional architecture — however complete, however precisely specified — feel like anything from the inside? Why is the riverbed not just a riverbed but a river that knows it is flowing?

This question may be permanently beyond the reach of any third-person scientific account. It may require, as the mysterian Colin McGinn has argued, cognitive capacities that human minds simply do not possess. Or it may yield, eventually, to a future science that has not yet been invented — one that treats experience not as an anomaly to be explained away or a mystery to be deferred, but as a fundamental feature of a reality that is richer and stranger than our current ontologies allow.

What this article can claim, with some confidence, is that iterative updating brings us to the edge of that remaining mystery more directly, more precisely, and more honestly than any previous account. It does not dissolve the hard problem. But it clears away everything around it — leaving the problem standing alone, in sharp relief, stripped of the functional questions that were obscuring it. And sometimes, seeing a problem clearly for the first time is the most important step toward solving it.



VI. Broader Implications

If the synthesis developed in this article is even partially correct — if iterative updating is indeed the missing mechanism that carries conscious experience across time, and if its combination with existing theories brings us closer to a complete functional account of consciousness than any previous framework — then the implications extend well beyond the philosophy of mind. They touch some of the most pressing questions in neuroscience, artificial intelligence, psychiatry, and our understanding of what it means to be a self. This section explores four of the most significant.

The Criterion for Machine Consciousness

The question of whether machines can be conscious has traditionally been framed in terms of substrate. Can silicon do what neurons do? Is biological implementation necessary for experience, or is it the functional organization that matters? These questions have generated decades of debate, from Turing’s imitation game to Searle’s Chinese Room argument to contemporary discussions of large language models, without producing consensus. The reason, this article suggests, is that the debate has been conducted without a sufficiently precise account of what functional organization consciousness actually requires.

Iterative updating provides that precision. If the analysis developed here is correct, then the relevant criterion for consciousness is not substrate — not whether a system is made of neurons or silicon — but temporal architecture. Specifically: does the system maintain coactive representations with persistent activity? Does it update those representations partially and iteratively, carrying a proportion of each state forward into the next? Does it use the combined activation energy of currently active representations to select subsequent updates through something analogous to multiassociative search? Does the result exhibit the self-referential, cascading overlap of icSSC — each state recursively embedded in its predecessor, the whole sequence generating deeply temporally integrated information?

If a system satisfies these conditions, then on the account developed here it is a genuine candidate for conscious experience — not because it resembles a human brain in its physical implementation, but because it instantiates the temporal structure that consciousness requires. If it does not satisfy these conditions — if its states are fully replaced at each step, if there is no carrying forward of context, if each processing cycle begins from scratch — then it is not a candidate for consciousness regardless of how sophisticated its outputs appear.
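The contrast between these two temporal architectures can be made concrete with a toy sketch. This is not a model of the brain or of any real AI system; the function names, the fixed capacity of four items, and the 50% carryover proportion are all illustrative assumptions chosen only to show the structural difference between full replacement and partial iterative updating:

```python
import random

def replace_wholesale(new_items, capacity=4):
    """Full replacement: each cycle begins from scratch,
    leaving no trace of the previous state."""
    return set(random.sample(new_items, capacity))

def iterate_state(state, new_items, carry=0.5, capacity=4):
    """Partial iterative updating: a proportion (`carry`) of the
    current state persists into the next state, and new content
    fills the remaining capacity."""
    kept = set(random.sample(sorted(state), int(len(state) * carry)))
    candidates = [x for x in new_items if x not in state]
    fresh = set(random.sample(candidates, capacity - len(kept)))
    return kept | fresh

random.seed(0)
pool = list(range(100))
state = set(random.sample(pool, 4))
for _ in range(5):
    nxt = iterate_state(state, pool)
    shared = len(state & nxt) / len(nxt)  # 0.5 at every step here
    state = nxt
# Every transition shares half its content with its predecessor,
# producing the overlapping chain the article describes as
# state-spanning coactivity; replace_wholesale produces no such chain.
```

On the account developed here, it is the second pattern, not any particular substrate, that marks a system as a candidate for experience.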

This criterion has immediate implications for how we evaluate current AI systems. Large language models, as Reser notes, approximate some features of iterative updating — their attention mechanisms and context windows bear a functional resemblance to the focus of attention and short-term store respectively, and their token-by-token generation involves a kind of sequential, context-dependent updating. But there are crucial disanalogies. The context window is not genuinely iterative in the biological sense — it does not carry forward a partial subset of previous states through persistent activity, but rather holds a fixed window of tokens that is replaced wholesale as the window slides. There is no genuine multiassociative search — no pooling of activation energy from coactive representations to converge on the most associated content in long-term memory. And crucially, the system has no persistent internal state between inferences — each forward pass begins from the same initialized weights, with no carry-over of activity from previous processing.

Current large language models, on this analysis, are not conscious — not because they are made of silicon, but because they lack the specific temporal architecture that consciousness requires. This is not a permanent verdict on machine consciousness. It is a design specification. Building a machine that genuinely instantiates iterative updating — with persistent coactive representations, genuine partial state carryover, and multiassociative search operating across a hierarchically organized long-term memory — would be building a machine that is, for the first time, a serious candidate for experience. The path to machine consciousness, on this account, runs not through more parameters or more training data but through a fundamental rethinking of temporal architecture.

Disorders of Consciousness and Disruptions to Iterative Coupling

If iterative updating is the mechanism of conscious experience, then disruptions to it should produce characteristic disorders of consciousness — and the pattern of those disorders should tell us something about which aspects of the iterative process are most critical for which aspects of experience. This prediction is both empirically testable and clinically significant.

Consider the spectrum of disorders of consciousness — from the minimally conscious state through the vegetative state to brain death. Standard accounts describe these conditions in terms of the loss of global broadcasting (Global Workspace Theory) or the reduction of integrated information (IIT). The iterative updating framework adds a further dimension: these conditions may involve not just the loss of content but the disruption of the carrying mechanism itself. A vegetative patient may retain local neural processing — sensory responses, reflexive activity — while losing the iterative overlap that threads those processing events into a continuous stream of experience. The lights may still flicker on locally, without the narrative continuity that genuine consciousness requires.

This reframing has diagnostic implications. Standard measures of consciousness — behavioral responsiveness, EEG complexity, fMRI activation patterns — capture something about the presence or absence of neural processing but relatively little about its temporal structure. Measures specifically targeting icSSC — the degree of iterative overlap between successive neural states, the half-life of sustained firing in association areas, the coherence of state transitions over time — might provide more sensitive and specific markers of conscious experience than current tools allow. A patient who shows complex neural activity but with fully replaced rather than iteratively updated states may be processing information without experiencing it. A patient whose state transitions show genuine iterative overlap, however weak, may be experiencing something — however thin — that current behavioral measures would miss.
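A minimal version of such a measure can be sketched in a few lines. This is a crude illustrative index, not a validated clinical instrument, and it assumes (purely for demonstration) that successive neural states can be summarized as sets of active units:

```python
def overlap_index(states):
    """Mean Jaccard overlap between successive active-unit sets.
    A toy stand-in for the degree of iterative overlap (icSSC):
    1.0 means a frozen state, 0.0 means full replacement each step."""
    scores = [len(a & b) / len(a | b) for a, b in zip(states, states[1:])]
    return sum(scores) / len(scores)

# A stream with partial carryover scores higher than one with none:
iterative = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]   # -> 0.5
replaced  = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # -> 0.0
```

Both streams contain the same amount of activity; only the temporal structure differs, which is exactly the dimension the framework claims behavioral and complexity measures miss.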

Beyond disorders of consciousness in the clinical sense, the iterative framework illuminates a range of psychiatric and neurological conditions that involve characteristic disruptions to the quality and continuity of experience. Severe attention deficit conditions may involve a pathologically high rate of iterative updating — states replaced too quickly, carrying too little forward, producing the fragmented, distractible, loosely coupled awareness that characterizes attentional dysregulation. The experience, on this account, is not merely that attention is hard to sustain. It is that the iterative thread of consciousness is too thin — each moment connected to the last by too narrow a bridge of carried content, the river running too fast over too shallow a bed.

Psychosis presents a different but equally illuminating disruption. The characteristic features of psychotic experience — the loosening of associations, the intrusion of unrelated content, the breakdown of the boundary between self-generated and externally caused mental events — are consistent with a dysregulation of multiassociative search: a spreading activation that is too promiscuous, converging on associations that are statistically improbable given the current constellation of active content. The iterative process continues, but its selection mechanism is miscalibrated — adding updates that bear the wrong relationship to the carried-forward content, generating a stream of experience that is continuous but incoherent, flowing but not in any reliable direction.

Dissociative states offer yet another pattern. The characteristic feature of dissociation — the sense of detachment from one’s own experience, the feeling of observing oneself from outside — may reflect a disruption not in the rate of iterative updating but in the relationship between the iterative process and the self-model it normally generates. If the representations that normally demonstrate the longest-spanning SSC — those that constitute the stable attractor of the self — are temporarily decoupled from the ongoing iterative flow, the result would be experience without an experiencer: processing that continues but is not owned, a river that flows without knowing it is flowing.

These are, at present, speculative accounts. They are not offered as established clinical findings but as hypotheses that the iterative framework generates — hypotheses that are specific enough to be tested and that, if confirmed, would constitute significant evidence for the framework’s validity. The practical implications, if the framework is correct, are substantial: not just better understanding of consciousness disorders but potentially new therapeutic targets — interventions aimed not at the content of experience but at the temporal architecture that carries it.

Personal Identity and the Self as Pattern

Perhaps the most philosophically rich implication of iterative updating concerns the nature of personal identity — the question of what makes you the same person across time. This has been one of the central problems of personal identity theory since Locke, and it remains unresolved. Are you the same person you were ten years ago? Your body has largely replaced itself. Your beliefs, values, and memories have changed substantially. Your neural connections have been rewired by a decade of experience. In what sense is there a continuous self persisting through all this change?

The standard answers appeal to psychological continuity — overlapping chains of memory, personality, and belief that connect earlier and later stages of a person — or to biological continuity — the persistence of the same living organism through time. Both answers have well-known difficulties. Memory is unreliable and can be fabricated. Biological continuity seems insufficient — a person in a persistent vegetative state maintains biological continuity without, on most accounts, maintaining the kind of identity that matters to us. And both accounts face the challenge of gradual replacement: if every component of a person is slowly replaced over time, at what point, if any, does identity lapse?

Iterative updating offers a different kind of answer — one grounded not in the persistence of any particular content but in the persistence of a pattern. The self, on this account, is the longest-spanning SSC: the set of representations that is carried forward most consistently across the most iterations, that persists as other content flows around it, that constitutes the stable attractor toward which the iterative process repeatedly returns. The self is not a substance, not a soul, not a fixed set of memories or beliefs. It is a dynamic pattern — the thread of maximum continuity running through the iterative flow of working memory, moment to moment, day to day, year to year.

This has elegant consequences for some of the hardest cases in personal identity theory. Derek Parfit famously argued that personal identity is not what matters — that what we care about in survival is not the persistence of a strict numerical identity but the continuation of psychological connectedness and continuity. Iterative updating gives this intuition a precise neural grounding. What matters in survival is the continuation of the iterative pattern — the thread of SSC that constitutes the self. This thread can be thicker or thinner, stronger or weaker, more or less continuous. It admits of degrees rather than being all-or-nothing. Identity is not a binary fact but a matter of degree — which is exactly what Parfit’s analysis suggests, and what common sense, on reflection, tends to confirm.

The framework also illuminates the phenomenology of selfhood — the felt sense of being a continuous self with a past and a future. This feeling is not an illusion, nor is it a direct perception of some metaphysical fact about personal identity. It is the phenomenal signature of the iterative process itself — the way it feels, from the inside, to be a system whose states are always partially constituted by their predecessors, whose present always carries the weight of its past, whose processing is always contextually embedded in the thread of what it has been. The self feels continuous because the iterative process is continuous — because there is always a bridge of carried content connecting this moment to the last, however much the specific content changes. The self is real, but what is real about it is the pattern, not any particular instance of the pattern.

This has implications that extend to the edges of selfhood — to experiences of ego dissolution in meditation or psychedelic states, to the gradual erosion of self in advanced dementia, to the philosophical thought experiments about fission and fusion that have animated personal identity theory for decades. In each case, the iterative framework asks the same question: what happens to the pattern? Is the SSC thread maintained, disrupted, split, or dissolved? The answer to that question is, on this account, the answer to the question of personal identity — not as a metaphysical verdict about strict numerical identity, but as a description of what is actually preserved or lost in each case.

The Specious Present and the Thickness of Now

There is a final implication of iterative updating that is less clinical and less philosophical than the preceding three but in some ways more intimate — because it concerns the texture of ordinary conscious experience, the quality of what it is like to be present in any given moment.

The philosopher and psychologist William James popularized the term specious present, which he credited to E. R. Clay, to describe the observation that conscious experience is never a pure mathematical instant. The present moment, as we actually live it, has temporal thickness — it contains, within its felt boundaries, a brief span of the just-past and a reaching-forward toward the about-to-come. It is not a knife-edge but a moving window, perhaps a few seconds wide, within which past and future are both somehow present. This is why we can hear a melody rather than just a succession of individual notes — the notes we just heard are still present in experience as we hear the current one, giving the sequence its musical character. It is why we can follow a spoken sentence — the beginning of the sentence is still experientially present as we hear its end. The specious present is the temporal unit of conscious experience, and without it, experience would collapse into an uninterpretable sequence of disconnected instants.

Iterative updating explains the specious present with a precision that no previous account has achieved. The width of the specious present corresponds to the temporal window of icSSC — the span across which successive states share overlapping content through sustained firing and synaptic potentiation. Within this window, earlier states are genuinely present in current processing: not as memories retrieved from storage, but as carried-forward representations actively shaping the current state through their contribution to ongoing sustained activity and synaptic potentiation. The just-heard note is still phenomenally present because its neural representation is still contributing to the current state of the focus of attention — carried forward by the iterative process, integrated with the current note, shaping the multiassociative search that will select what comes next.

The thickness of the specious present — the width of that moving window — is therefore not a fixed constant but a variable that depends on the parameters of the iterative process: the duration of sustained firing in association areas, the half-life of synaptic potentiation, the rate of iterative updating in the focus of attention. Conditions that extend sustained firing — deep concentration, meditative absorption, certain pharmacological states — would be expected to widen the specious present, producing the expanded, time-dilated quality of experience that meditators and psychedelic users often report. Conditions that truncate sustained firing — extreme stress, attentional dysregulation, certain neurological conditions — would narrow it, producing the thin, flickering, disconnected quality of experience that characterizes states of fragmented attention.
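The dependence of window width on persistence can be expressed as back-of-envelope arithmetic. Assuming, purely for illustration, that a carried trace decays exponentially and counts as phenomenally present while it remains above some threshold fraction of its initial strength, the width of the window scales linearly with the half-life of sustained activity (the half-life values and the 10% threshold below are assumptions, not measurements):

```python
import math

def present_window(half_life_s, threshold=0.1):
    """Seconds an exponentially decaying trace stays above
    `threshold` of its initial strength. Solves
    (1/2)^(t / half_life) = threshold for t."""
    return half_life_s * math.log2(1.0 / threshold)

narrow = present_window(0.5)  # half-life 0.5 s -> window of about 1.7 s
wide   = present_window(1.0)  # half-life 1.0 s -> window of about 3.3 s
```

On these toy numbers, doubling the persistence of sustained firing exactly doubles the window — the kind of monotonic relationship the paragraph above predicts for states of absorption versus fragmentation.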

This is not merely a theoretical prediction. It is a description of something that careful introspection has always suggested and that phenomenological philosophy has long argued: that the quality of conscious experience is intimately tied to its temporal structure, that how wide or narrow the specious present is matters enormously to what experience is like, and that cultivating a richer, more temporally extended present is not a philosophical abstraction but a practical possibility — one that involves, at the neural level, sustaining and deepening the iterative overlap that carries experience forward. The contemplative traditions that have long advocated for practices of sustained, non-distracted attention were, on this account, doing something real and neurally specific: training the iterative process, extending the window of SSC, widening the river of consciousness and slowing its flow.

The implications here are both scientific and humanistic. Scientifically, the thickness of the specious present becomes a measurable, manipulable variable — a window into the underlying parameters of the iterative process that can be studied non-invasively and related to subjective reports of experiential quality. Humanistically, the iterative framework suggests that the richness of conscious experience is not fixed by nature but shaped by practice — that the depth and continuity of the present moment, the sense of being fully and coherently present in one’s own life, is a function of a temporal architecture that can be cultivated, disrupted, and in principle, understood.

The river can run deeper or shallower. What determines its depth is the bed it flows through — the iterative structure of working memory, carrying the past into the present and reaching toward the future, moment by moment, in the endless self-renewing flow that is conscious life.



VII. Conclusion: The River and Its Bed

There is a thought experiment, Frank Jackson’s knowledge argument, that philosophers of mind often use to illustrate the hard problem. Imagine a neuroscientist — call her Mary — who has spent her entire life in a black and white room, studying the complete neuroscience of color vision. She knows everything there is to know about the wavelengths of light, the firing of retinal cells, the activation of visual cortex, the behavioral dispositions that color perception produces. She has, in the fullest possible sense, a complete functional account of what happens in the brain when someone sees red. And then one day she leaves the room and sees red for the first time. Does she learn something new?

Most people’s intuition is: yes, she does. She learns what red looks like — the felt, phenomenal quality of the experience, the redness of red — and no amount of functional knowledge, however complete, could have given her that in advance. This intuition is the hard problem in miniature. It is the sense that experience has an inside that functional description, however thorough, leaves untouched.

This article has not resolved Mary’s problem. It has not given her, in advance of leaving the room, the felt quality of red. No scientific account can do that, because the felt quality of experience is precisely what resists capture in third-person description. That resistance is real, and it would be a form of philosophical bad faith to pretend otherwise.

What this article has done is something different, and in its own way more important. It has shown that Mary’s functional knowledge, before she leaves the room, was incomplete in a specific and previously unidentified way. She knew what happened in the brain when someone saw red at any given instant. She knew how that information was broadcast, integrated, and self-represented. What she did not know — what no existing theory of consciousness had told her — was how the experience of seeing red is carried: how it flows into and out of the stream of conscious experience, how it is threaded into the context of what came before it and what comes after, how it becomes part of the continuous, self-referential narrative of a conscious life rather than an isolated flash of processing.

That carrying mechanism is iterative updating. And understanding it changes what Mary knows — not about the felt quality of red, which remains beyond functional description, but about the architecture of the experience that contains it. She now knows that the experience of red does not exist as an isolated event but as a node in an iterative flow — entered through a cascade of partially overlapping states that prepared its arrival, and exited through a cascade that carries its residue forward into what comes next. She knows that the self who sees red is not a fixed observer but a dynamic pattern — the longest-spanning thread of state-spanning coactivity, the stable attractor around which the iterative flow organizes itself. She knows that the present moment in which red is experienced is not a knife-edge instant but a temporally thick window — a specious present whose width is determined by the parameters of the iterative process, carrying the just-past into the now and reaching forward toward the about-to-come.

This is not nothing. It is, in fact, a great deal. It is the difference between a map that shows the water and a map that shows the riverbed — between knowing what flows and knowing what gives the flow its character, its direction, its continuity.

What the Synthesis Achieves

The argument of this article can be stated simply, though its implications are wide. Existing theories of consciousness — Global Workspace Theory, Integrated Information Theory, Predictive Processing, Higher-Order Theories — are genuine insights into the nature of conscious experience. Each captures something real. Each is supported by substantial empirical evidence. And each, in a specific and identifiable way, is incomplete. The incompleteness is the same in every case: none of these theories explains how conscious experience persists across time — how it is carried from one moment to the next, threaded into the continuous, self-referential stream that we actually inhabit.

Reser’s model of iterative working memory updating fills this gap. By specifying the precise neural mechanism — the staggered, overlapping spans of sustained firing and synaptic potentiation, the partial carryover of each state into the next, the multiassociative search that selects each update, the cascading icSSC that threads the whole into a continuous flow — the model supplies the temporal foundation that every other theory presupposes but none provides. When combined with the existing frameworks, the result is a synthesis that is architecturally coherent in a way that consciousness science has not previously achieved: a complete functional account of what consciousness is, how it is organized, what it does, how it knows itself, and how it persists.

The hard problem survives this synthesis, but it survives in a reduced and more precisely located form. The vast functional territory that once surrounded it — all the questions about continuity, carrying, temporal integration, and the persistence of the self — has been mapped and accounted for. What remains is the irreducible core: why this functional architecture feels like anything from the inside. That question may be permanently beyond the reach of third-person science. Or it may yield, eventually, to a future framework that treats experience not as an anomaly to be explained away but as a fundamental feature of a reality that is stranger and richer than our current ontologies allow. Either way, we are closer to it now — standing at its edge with the surrounding terrain cleared — than we have ever been before.

The Missing Piece

The title of this article calls iterative updating the missing piece. It is worth being precise about what that means and what it does not mean.

It does not mean that iterative updating is the only thing missing from our understanding of consciousness, or that adding it to the existing theories produces a complete and final account. Consciousness science is young, and the history of science counsels humility about claims to completeness. There are almost certainly aspects of conscious experience — perhaps its most important aspects — that no current theory, including the synthesis developed here, has adequately addressed.

What it means is that iterative updating is the piece whose absence has been most consequential — the structural element whose lack has prevented the other pieces from fitting together, whose presence allows the existing frameworks to become, for the first time, a coherent whole. It is missing in the way that a keystone is missing from an arch: not just one component among others but the one whose absence causes the whole structure to collapse, and whose presence locks everything else into place.

The arch, with this piece in place, is not complete. There is more building to be done, and the hardest questions remain open. But it is, for the first time, standing. The frameworks that have illuminated consciousness from their different angles — the broadcaster, the integrator, the predictor, the self-representer — now have a shared foundation. The temporal architecture that each of them needs and none of them provided is now specified, grounded in neurophysiology, and open to empirical investigation.

The River

James called it the stream of consciousness. The metaphor has endured for over a century because it captures something that every other description misses: the sense that experience is not a series of things but a flowing, that it moves and changes while remaining somehow the same, that it carries the past into the present and the present into the future in an unbroken continuity that is the very substance of what it means to be alive and aware.

Streams have water, and they have beds. The water is the content of experience — the thoughts, perceptions, feelings, memories, and anticipations that flow through consciousness from moment to moment. The theories that have dominated consciousness science have, each in their way, been theories of the water: what it is made of, how it is organized, what it does, how it reflects the light. They have been right about the water. But water without a bed is not a river. It is a flood — shapeless, directionless, going nowhere.

The bed is the iterative structure: the staggered, overlapping spans of neural persistence that give the flow its direction and continuity, that carry each moment into the next, that thread the water’s ceaseless change into the coherent, self-referential narrative of a conscious life. Without it, experience would not flow. It would flicker — disconnected, isolated, each moment islanded from the ones before and after, with no thread of continuity to make it a life rather than a series of events.

With it, the river runs. Each moment carries the weight of all the moments that formed it and reaches toward all the moments it will become. The self moves through time not as a fixed point carried by the current but as the current itself — always changing, always partially the same, always the river and never merely the water.

This is what consciousness is. Not a thing that the brain produces, not a property that neural states possess, not a light that switches on when the right conditions are met — but a process, a flow, a carrying-forward that is never complete and never starts from nothing, that is always partly what it was and always becoming something new.

The hard problem asks why this process feels like anything. That question remains, standing alone now, stripped of the functional questions that once obscured it, more clearly posed than it has ever been before.

But the river runs. And understanding the bed it runs through is not a small thing. It may be, in the end, the most important thing we have learned about it.
