Iterated Insights

Ideas from Jared Edward Reser Ph.D.



I. Why the Brain Needs a System Card

In artificial intelligence, model descriptions are becoming more realistic. For a while, the public conversation centered mostly on total parameter count. A model had 7 billion parameters, or 70 billion, or 1 trillion, and that headline number often stood in for its size, power, and sophistication. But that language is now being refined. In mixture-of-experts systems, researchers increasingly distinguish between the model’s total stored capacity and the smaller subset of parameters that are actually activated for each token. DeepSeek-V3, for example, is described as having 671 billion total parameters with 37 billion activated per token. Llama 4 Scout is described as having 17 billion active parameters, 109 billion total parameters, and a 10 million token context window. Some model families even build the distinction into the model name itself, as in Qwen3-30B-A3B, where the “A3B” suffix indicates that only about 3 billion of the model’s roughly 30 billion parameters are active during inference.
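The ratio at stake can be made concrete with a toy calculation. The figures below are the published numbers cited above; the helper function itself is purely illustrative.

```python
def activation_ratio(active_params: float, total_params: float) -> float:
    """Fraction of a model's stored parameters recruited for each token."""
    return active_params / total_params

# Published figures cited above, in billions of parameters.
models = {
    "DeepSeek-V3": (37, 671),
    "Llama 4 Scout": (17, 109),
}

for name, (active, total) in models.items():
    # e.g. DeepSeek-V3 activates only about 5.5% of its capacity per token.
    print(f"{name}: {activation_ratio(active, total):.1%} active per token")
```

Even these two models differ markedly in how much of their stored capacity each token touches, which is exactly the kind of distinction the essay argues neuroscience currently lacks.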

This is an important conceptual shift. It acknowledges that total size is not the same thing as real-time computational recruitment. A model may contain an enormous amount of stored structure, but only some fraction of that structure is brought to bear on any given step of processing. Once that distinction is made, a system can be described in more meaningful terms. We can ask not only how much it contains, but how much of it is actually mobilized in the act of generating an output. The result is a more dynamic and operational picture of intelligence, one that focuses less on abstract capacity and more on active engagement.

Neuroscience, by contrast, still lacks an equivalent vocabulary for the brain. We can estimate the total number of neurons and synapses. We can identify the large-scale networks involved in attention, working memory, executive control, and perception. We can measure metabolic consumption, firing rates, oscillatory rhythms, and BOLD responses. But we do not yet have a standard way to describe how much of the brain is effectively recruited during a single moment of thought, or how much of that recruited activity is carried forward into the next moment. Our descriptions remain far better at cataloguing the brain’s substrate than at characterizing its moment-to-moment computational dynamics.

This matters because cognition is not simply a matter of total neural mass, nor even of total neural activity. At any given moment, much of the brain is metabolically active in some way. Different systems regulate posture, vision, autonomic function, prediction, homeostasis, sensory filtering, memory maintenance, emotional tone, and motor readiness. The mere fact that neurons are active somewhere in the system tells us very little about which neural populations are on the critical path for a particular thought. A useful science of cognition should be able to distinguish between background activity and functionally decisive recruitment. It should be able to ask, for a given mental update, which neurons, assemblies, and synaptic pathways were actually involved in shaping the next state of mind.

This is where the analogy to recent AI model specifications becomes valuable. The brain is not a transformer, and a thought is not a token. Biological cognition unfolds through recurrent, metabolically constrained, massively parallel dynamics that differ profoundly from the feed-forward token processing of a language model. Still, the comparison is illuminating because both cases force us to distinguish total available substrate from actively deployed substrate. In one case the question is how many parameters are activated for each token. In the other case the analogous question would be how much neural machinery is recruited for each iterative working-memory update.

That question, even if only hypothetical for now, opens a promising line of inquiry. Instead of asking only how many neurons the brain contains, we might ask how many neurons meaningfully participate in a given cognitive step. Instead of asking only which regions light up during a task, we might ask how large the effective coalition is that carries the current mental state. Instead of focusing only on static anatomical capacity, we might begin to characterize the brain in terms of recruitment, persistence, overlap, turnover, and energetic cost. Such metrics would not replace anatomy, physiology, or network neuroscience. They would add a missing layer, one centered on the temporal organization of cognition itself.

This article proposes that the brain may need something like a biological system card. By this I do not mean a literal engineering dashboard already waiting to be filled in with precise measurements. I mean a conceptual framework for describing cognitive dynamics in a more principled way. A biological system card would aim to characterize not just the brain’s total substrate, but the subset of that substrate effectively recruited during an update, the degree to which active contents persist across updates, the way coactive contents constrain the next state, and the metabolic costs of maintaining a continuous stream of thought. It would offer a vocabulary for describing minds as temporally extended updating systems rather than as static lumps of neural tissue.

The need for such a framework becomes especially clear once we shift from spatial questions to temporal ones. What makes one thought flow naturally into the next? How much representational overlap exists between adjacent moments of cognition? How much of the present state is preserved, and how much is replaced? How many active representations jointly influence the next update? How broad is the search through memory, and how sharply does the system converge on a new state? These are not peripheral details. They are central to the character of cognition. They may help explain the difference between focused reasoning and distraction, between mind wandering and rumination, between ordinary wakefulness and fragmented awareness, and perhaps even between simple information processing and conscious mental continuity.

My broader claim is that neuroscience has become rich in maps but relatively poor in process-level summary metrics. We know a great deal about where functions are localized, how regions interact, and which networks correlate with specific tasks. But we have fewer ways of summarizing the brain as a dynamic computational regime. The language now emerging in AI, especially the distinction between total capacity and active recruitment, offers a suggestive template. It invites us to imagine a neuroscience in which a brain could be characterized not only by what it contains, but by how much it recruits, sustains, and hands forward from one mental moment to the next.

The central proposal of this essay is therefore simple. Just as sparse language models are increasingly described not only by their total parameters but by their active parameters per token, biological minds may eventually be described not only by their total neurons and synapses but by their active coalitions per thought. More importantly, because brains generate continuity through overlapping updates rather than isolated forward passes, the most revealing quantities may not concern activation alone. They may concern how much neural content survives into the next update, how much new content is recruited, how broad the associative search becomes, and how much energy is spent preserving a coherent stream. A biological system card would be an attempt to give those neglected dimensions names.

The sections that follow develop this proposal in more detail. First, I argue that cognition is best understood as a sequence of overlapping working-memory updates rather than as a series of isolated states. I then sketch the core fields of a biological system card, including metrics for active coalition size, state overlap, continuity depth, persistence architecture, associative branching, and energetic efficiency. The larger goal is not to claim that these measures are already established, but to suggest that they are worth imagining. If AI has taught us to distinguish stored capacity from active computation, perhaps it can also help us formulate a richer language for the living dynamics of thought.

II. From Static Anatomy to Iterative Cognitive Dynamics

If the brain is ever to receive something like a system card, the unit of description cannot be anatomy alone. Static features matter. Neuron number matters. Synapse number matters. Cortical size, white matter connectivity, network organization, receptor distributions, and metabolic constraints all matter. But none of these tells us, by itself, how thought unfolds across time. A brain is not merely a stored structure. It is a process that continually updates itself. For that reason, the most important metrics for cognition may not be purely spatial metrics at all. They may be temporal metrics describing how one mental state gives rise to the next.

This point is easy to miss because neuroscience often presents the brain as if it were best understood through maps. We map regions, pathways, networks, and functional specializations. We identify circuits involved in working memory, salience detection, episodic recall, valuation, language, and motor planning. These advances are enormously valuable, but they can leave the impression that once the parts are catalogued, cognition has been explained. What remains under-described is the actual flow of cognition, the stepwise movement by which the brain maintains some contents, replaces others, recruits new representations, and threads these transitions into a continuous stream. The central problem is not only where mental content is represented, but how content is preserved and transformed across successive moments.

This is where an iterative view becomes essential. On the framework I am proposing, thought is not best understood as a series of isolated snapshots. It is better understood as a sequence of overlapping working-memory updates. At any given moment, a subset of representations is active. Some of those representations are newly recruited, some are remnants of the immediately preceding state, and some may have been maintained across several successive updates. The next state does not arise from nothing. It emerges from the present configuration by modifying it. Something is added, something is removed, and something is carried forward. The continuity of cognition arises from this structured overlap.

That point deserves emphasis. Continuity is not merely the fact that the brain remains alive from one second to the next. Continuity, in the cognitive sense, is the persistence of representational structure across adjacent mental states. A train of thought feels continuous because portions of one state survive into the next state and continue to exert causal influence. If every cognitive moment were wholly replaced by an unrelated new configuration, thinking would not resemble a stream. It would resemble a sequence of disconnected flashes. The fact that thought instead exhibits coherence, momentum, and topic stability suggests that the brain preserves a meaningful fraction of active content across iterative updates.

This is what I mean by state-spanning coactivity. Some active representations do not simply appear and vanish within a single cognitive instant. They bridge successive moments. They remain functionally present while new material is incorporated and old material is discarded. These state-spanning representations provide a scaffolding for continuity. They help maintain goals, themes, objects of attention, emotional context, task demands, and partially completed lines of reasoning. They also help explain why a thought can be revised without being destroyed. One can refine an idea, redirect a sentence, elaborate an image, or update an appraisal while still remaining within the same broad mental episode. That stability amid revision is one of the defining features of ordinary cognition.

A useful way to think about this is that each mental state is not a replacement of the previous one, but an edited successor. The brain is constantly performing controlled revisions. It is neither perfectly stable nor chaotically discontinuous. It occupies a middle regime in which enough of the previous state is preserved to maintain coherence, but enough novelty is introduced to permit learning, inference, planning, and adaptation. Cognitive life depends on this balance. Too little preservation and the stream fragments. Too much preservation and the system risks stagnation, rumination, or perseveration.

Working memory is central here, but it should not be understood too narrowly. It is not merely a small buffer that briefly stores a few items. It is the active workspace in which current contents are maintained, related to one another, and used to constrain what comes next. In this workspace, representations can remain active through more than one mechanism. One mechanism is sustained firing across seconds. Another is transient synaptic potentiation, which can preserve traces over longer spans without requiring uninterrupted high firing. Together these mechanisms provide a two-store maintenance architecture. They help explain how information can remain functionally available even as overt activity fluctuates. More importantly, they allow the brain to carry content across iterative updates without requiring every relevant representation to be continuously maximally active.

This two-store picture has important consequences for any future cognitive metric. If one only counts spiking neurons at an instant, one may miss part of the persistence structure underlying thought. Some information may be maintained through sustained firing, while other information is temporarily stabilized in synaptic form and can be reactivated as needed. A meaningful account of cognitive dynamics should therefore distinguish between what is actively firing right now and what remains functionally poised to influence the next update. The persistence of thought is not exhausted by immediate activation. It also depends on maintenance mechanisms that preserve recoverable structure across time.

Once active contents are maintained, they do not sit passively. They shape the next state through associative pressure. Coactive representations spread activation through long-term memory and through other currently available structures. The next update is influenced not by a single dominant item alone, but by the joint effect of several active items operating together. This is what I have called multiassociative search. The present coalition of working-memory contents activates related possibilities, candidate continuations, relevant memories, associated concepts, task rules, and emotional or perceptual traces. From this field of partially activated possibilities, some representations receive stronger net support than others and are recruited into the next state. In this way, cognition proceeds by constrained exploration rather than by random replacement.
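As a toy sketch (not a neural model), multiassociative search can be illustrated with a weighted association graph in which the active coalition jointly determines which candidate receives the strongest net support. All items and weights below are invented for illustration.

```python
# Toy sketch of multiassociative search: the coalition of currently active
# items jointly spreads activation through an association graph, and the
# candidate with the strongest combined support is recruited into the next
# state. All items and weights are invented for illustration.

associations = {
    "coffee":   {"morning": 0.6, "bitter": 0.4, "deadline": 0.2},
    "deadline": {"stress": 0.7, "morning": 0.3, "plan": 0.5},
}

def multiassociative_search(coalition):
    """Sum the associative support the whole coalition lends each candidate."""
    support = {}
    for item in coalition:
        for candidate, weight in associations.get(item, {}).items():
            if candidate not in coalition:  # only candidates for recruitment
                support[candidate] = support.get(candidate, 0.0) + weight
    return support

coalition = {"coffee", "deadline"}
support = multiassociative_search(coalition)
recruited = max(support, key=support.get)
```

Note that "morning" wins here only because two coalition members support it jointly; neither of its individual links would outweigh the single strong link to "stress". That is the sense in which the next update reflects the joint effect of several active items rather than any one dominant association.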

This matters because it shifts our idea of what the current mental state really is. A mental state is not only a set of currently present contents. It is also a launch platform for the next update. The active coalition carries forward constraints, exerts associative influence, and determines the shape of the search space. The present is therefore both a state and a transition mechanism. It preserves the immediate past while helping select the immediate future. That is why the right metrics for cognition should describe not just current activation, but also persistence, overlap, branching, and selection.

Seen in this light, the most important question is not simply, “Which neurons are active?” The more revealing questions are: Which neurons and assemblies are contributing to the current working-memory coalition? Which of those will survive into the next update? How much of the present state overlaps with the next one? How many alternative continuations are materially activated? How sharply does the system converge on one successor state? How much of the current cognitive configuration is carried by ongoing firing, and how much by transient synaptic stabilization? These are the questions that begin to characterize cognition as a dynamic regime rather than a static object.

This approach also helps explain why gross activity measures are insufficient. If a region shows elevated metabolic activity or BOLD signal, that does not tell us whether it is carrying state-spanning content, contributing only transiently, maintaining background readiness, or participating in the decisive transition to the next mental update. Similarly, a high firing rate does not by itself reveal whether the activity belongs to a stable cross-update scaffold or to a fleeting local response. A richer theory of cognition must distinguish between background neural busyness and those neural coalitions that are functionally central to iterative thought.

The dynamic picture also provides a more natural framework for understanding the stream of consciousness. Consciousness, on this view, is not simply a matter of the brain having active contents at a moment. It is more plausibly related to the organized persistence and revision of contents across adjacent moments. What gives conscious thought its flowing character may be the structured overlap between successive updates. A conscious episode is not a single state, but a temporally extended sequence whose members partially inherit from one another. The subjective sense of a present moment may therefore depend not on an isolated snapshot, but on a continuously refreshed window of state-spanning coactivity.

This way of thinking prepares the ground for a biological system card. If cognition is iterative, overlapping, and persistence-dependent, then the most revealing metrics will be those that track how the active coalition is assembled, how much of it endures, how it searches memory, and how rapidly it turns over. A brain should not be characterized only by the size of its substrate or the locations of its activity peaks. It should be characterized by its mode of updating. The relevant properties include the effective size of the active coalition, the degree of overlap between adjacent states, the depth of continuity across multiple updates, the balance between firing-based and synaptic maintenance, the breadth of associative branching, and the energetic costs of sustaining coherence through time.

In short, the proper shift is from static anatomy to iterative cognitive dynamics. The brain is not merely a structure that contains representations. It is a temporally organized system that continually recruits, preserves, modifies, and hands forward representational coalitions. Any serious attempt to build a biological system card must begin there. Only then can we start specifying the hypothetical metrics that would describe the brain not just as a physical organ, but as an engine of continuous thought.

III. The Biological System Card: Proposed Metrics for Cognitive Dynamics

If thought unfolds through overlapping working-memory updates, then a biological system card should aim to describe the structure of those updates. The goal is not to pretend that neuroscience already possesses all the tools needed to measure these quantities precisely. At present, many of them remain hypothetical, composite, or only indirectly accessible. The point is conceptual. We need a more adequate vocabulary for describing how cognitive systems operate across time. Just as recent AI model specifications increasingly distinguish total model capacity from the subset of parameters activated during inference, a biological system card would distinguish the brain’s total substrate from the subset of neural resources effectively recruited, sustained, and handed forward during thought.

Such a system card would not replace anatomy, physiology, or systems neuroscience. It would sit on top of them as an integrative summary layer. Its purpose would be to characterize the brain not just as a stored physical structure, but as a temporally organized updating regime. To do that, it would need to summarize several distinct but related dimensions of cognition. These include the size of the total neural substrate, the effective coalition recruited during an update, the degree of continuity across updates, the mechanisms that support persistence, the structure of associative search, the rate and pattern of turnover, the strength of top-down control, and the energetic price of maintaining an organized stream of thought.

The first category is total substrate. This is the most familiar kind of measure and the least novel. It includes overall neuron count, synapse count, large-scale network organization, and the total memory architecture available to support cognition. It also includes the brain’s metabolic ceiling, since no cognitive process can exceed the energetic resources available to sustain it. These are the rough biological analogues of total parameter count in artificial models. They matter because they determine the system’s broad capacity. But on their own they tell us little about how much of that capacity is actually being mobilized in a given mental moment. Total substrate is a background condition, not a full description of cognition.

The next category is the one most directly inspired by recent AI discourse: active coalition per update. This refers to the subset of neurons, assemblies, and synaptic pathways that are functionally contributing to the present working-memory state and to the transition into the next one. One could call this the brain’s effective coalition size. It is not equivalent to every neuron currently firing somewhere in the nervous system. Many neurons may be active while contributing only indirectly, peripherally, or homeostatically to the current cognitive episode. The relevant question is narrower. Which populations are on the critical path for the current thought? Which networks are helping determine what the mind contains right now, and what it will contain one step later?

This distinction is crucial because it separates gross activity from causally meaningful recruitment. A system card should therefore include not merely a count of currently active neurons, but a measure of effective neural participation. This would refer to the share of neural substrate materially contributing to the present update. Closely related would be effective synaptic participation, meaning the share of synaptic pathways exerting nontrivial causal influence on the current transition. These measures would almost certainly be difficult to operationalize in practice, but conceptually they are indispensable. They ask what portion of the brain is not merely alive and busy, but computationally decisive for a given step of cognition.

Yet current activation alone is not enough. In the framework developed here, the most important property of cognition may be not simple activation, but structured continuity. For that reason, a biological system card should include a third major category: the continuity profile. This would describe the degree to which active contents persist across successive updates. The central variable here is the state-overlap ratio, the proportion of representational content shared between one working-memory state and the next. This metric captures the extent to which the present state is an edited successor of the prior state rather than a wholesale replacement. It is one of the clearest ways to express the intuition that thought proceeds through partial preservation and revision.

A second continuity measure would be continuity depth, meaning the number of successive updates over which a representation remains functionally relevant. Some active contents may survive only briefly, shaping one immediate transition before disappearing. Others may persist over many updates, acting as longer-range organizers of attention, reasoning, planning, or narrative continuity. Continuity depth would therefore help distinguish fleeting local activations from the more durable state-spanning coactivity that anchors a train of thought. Related to this would be a state-spanning coactivity index, a measure of how much of the active coalition bridges adjacent mental states and thereby contributes to the temporal integrity of cognition.
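These two measures can be given a toy formalization. If each working-memory state is idealized as a set of active representations (a deliberate simplification of graded neural activity), the overlap ratio becomes a Jaccard index and continuity depth a run length. The labels below are invented for illustration.

```python
# Idealized sketch: each working-memory update is treated as a set of
# active representations, so the proposed continuity metrics reduce to
# simple set operations. Labels are invented for illustration.

def state_overlap_ratio(prev, curr):
    """Proportion of representational content shared by adjacent states
    (Jaccard overlap of the two active sets)."""
    return len(prev & curr) / len(prev | curr)

def continuity_depth(stream, item):
    """Longest run of consecutive updates in which an item stays active."""
    best = run = 0
    for state in stream:
        run = run + 1 if item in state else 0
        best = max(best, run)
    return best

stream = [
    {"goal", "map", "route"},
    {"goal", "route", "traffic"},
    {"goal", "traffic", "detour"},
    {"detour", "coffee"},
]

overlaps = [state_overlap_ratio(a, b) for a, b in zip(stream, stream[1:])]
depth = continuity_depth(stream, "goal")  # "goal" spans three updates
```

In this toy stream, the overlap ratio stays moderate while "goal" persists, then drops sharply at the final update when the topic shifts, which is the edited-successor pattern the metric is meant to expose.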

A fourth category concerns the mechanisms by which content is maintained across time. This may be called the system’s persistence architecture. In the framework proposed here, persistence is not achieved by a single process. Some contents are maintained through sustained neuronal firing across seconds. Others may be maintained through transient synaptic potentiation or other short-term changes in synaptic efficacy that preserve information in a functionally retrievable form even when overt firing is reduced. A biological system card should therefore distinguish between sustained firing support and transient potentiation support. These are not redundant measures. Two systems might display similar effective coalition sizes while relying on very different maintenance strategies. One might preserve continuity mainly through persistent spiking, while another might offload more information into temporarily altered synaptic states.

It would also be useful to describe the maintenance handoff efficiency between these modes. How effectively can a system move information between immediate active firing and more latent short-term maintenance? How long can relevant content remain poised for reactivation without being lost? This could be expressed through a persistence half-life, meaning the average duration over which activated task-relevant content remains recoverable and able to shape future updates. Such measures would offer a richer understanding of cognitive stability than instantaneous recordings of spiking alone.
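If recoverability is assumed to decay exponentially, which is one simple candidate model among many, a persistence half-life follows from a single measured decay point. The numbers below are invented for illustration.

```python
import math

def half_life_from_decay(elapsed_s, recoverability):
    """Half-life implied by recoverability falling to the given fraction
    after elapsed_s seconds, assuming simple exponential decay."""
    return elapsed_s * math.log(0.5) / math.log(recoverability)

# A trace still 70% recoverable after 4 seconds implies a half-life of ~7.8 s.
hl = half_life_from_decay(4.0, 0.70)
```

A real persistence half-life would of course have to be estimated from many observations and would likely differ between firing-based and synaptically stabilized maintenance; the point of the sketch is only that the quantity is well defined once a decay model is chosen.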

A fifth category would describe the system’s associative search profile. Working memory does not merely hold content in place. The active coalition spreads activation through long-term memory and through related representational structures, thereby generating candidate continuations for the next update. This process can vary in breadth, intensity, and selectivity. A biological system card should therefore include a measure of associative branching factor, the number of candidate representations or trajectories materially activated by the current coalition. A related measure, search breadth, would describe how widely the present state propagates activation through memory and association space. Another, selection sharpness, would describe how strongly the system converges on a particular successor state rather than preserving many alternatives in partial competition.

These measures matter because they illuminate qualitative differences in cognition. A highly focused reasoning state may involve relatively narrow search breadth and strong selection sharpness. A creative ideation state may involve broader branching and weaker immediate convergence, allowing more unusual continuations to compete. Rumination may involve high continuity but narrow and repetitive associative branching. Mind wandering may involve lower top-down constraint and higher drift across loosely connected trajectories. By including an associative search profile, the system card begins to describe not just how much of the brain is active, but how the active coalition explores what comes next.
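Selection sharpness, for instance, could be operationalized as one minus the normalized entropy of candidate support, so that a winner-take-all field scores near 1 and an even competition scores near 0. This is a sketch of one possible formalization, not a committed definition, and the candidate labels and numbers are invented.

```python
import math

def associative_branching_factor(support):
    """Number of candidate continuations receiving material activation."""
    return sum(1 for v in support.values() if v > 0)

def selection_sharpness(support):
    """One minus the normalized entropy of candidate support: near 1.0
    when a single successor dominates, near 0.0 when many candidates
    remain in even competition."""
    total = sum(support.values())
    probs = [v / total for v in support.values() if v > 0]
    if len(probs) <= 1:
        return 1.0
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(len(probs))

# Invented support fields for two cognitive styles.
focused = {"next_step": 0.97, "tangent_a": 0.02, "tangent_b": 0.01}
diffuse = {"idea_a": 0.34, "idea_b": 0.33, "idea_c": 0.33}
```

On this definition the focused field converges sharply while the near-uniform field barely converges at all, matching the contrast drawn above between focused reasoning and creative ideation.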

This leads naturally to a sixth category: the iterative turnover profile. Thought is not defined only by what is preserved, but also by what changes. Each update includes some retention, some replacement, and some novel recruitment. A biological system card should therefore summarize the proportions of content that are carried forward, discarded, and added at each step. One metric here would be the retention ratio, the share of current content preserved into the next update. Another would be the replacement ratio, the share lost during transition. A third would be the novel recruitment ratio, the amount of newly incorporated material entering the active coalition per update.

Also important would be update frequency, the tempo at which the system revises its working-memory state. This is not merely a clock-speed measure in the engineering sense. It is part of the system’s cognitive style. Some forms of thought may involve rapid turnover with shallow continuity. Others may involve slower, more stable progression. A related measure, the cognitive drift index, could characterize the rate at which successive updates wander away from the present topic, task, or goal. Together these measures would help distinguish coherence from fragmentation, exploration from instability, and adaptive flexibility from mere distraction.
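Idealizing states as sets of active representations, the three turnover ratios follow directly from set differences. The contents below are invented for illustration.

```python
def turnover_profile(prev, curr):
    """Retention, replacement, and novel-recruitment ratios for one update,
    with working-memory states idealized as sets of active representations."""
    return {
        "retention":   len(prev & curr) / len(prev),  # carried forward
        "replacement": len(prev - curr) / len(prev),  # discarded in transition
        "novelty":     len(curr - prev) / len(curr),  # newly recruited
    }

profile = turnover_profile({"goal", "route", "traffic"},
                           {"goal", "traffic", "detour", "fuel"})
```

Retention and replacement partition the previous state and so always sum to one, while novelty is expressed relative to the new state; tracking these three numbers across a stream of updates would begin to distinguish stable progression from rapid, shallow turnover.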

A seventh category should address control profile, because cognition is not just the product of associative flow. It is also shaped by goals, task demands, inhibitory processes, and the selective stabilization of relevant content. One useful measure would be top-down stabilization strength, referring to the degree to which goals and executive constraints preserve task-relevant representations across updates. Another would be bottom-up capture susceptibility, referring to the ease with which salient external or internal stimuli disrupt the current coalition. A third would be interference resistance, meaning the system’s ability to prevent competing representations from displacing relevant ones. These control-related measures would help explain why one mind can hold a goal steadily in view while another is constantly hijacked by distraction, anxiety, or intrusive associations.

Finally, no biological system card would be complete without an energetic efficiency category. The brain is a metabolically expensive organ, and cognition unfolds under strict energetic constraints. Every act of recruitment, maintenance, and transition carries a cost. It would therefore be valuable to estimate the metabolic cost per update, the energetic expenditure required to produce one effective cognitive transition. Related to this would be the cost of continuity, meaning the burden of maintaining overlapping content across successive updates, and search efficiency, the amount of useful associative exploration achieved per unit energy. These are not peripheral details. They may help explain why biological cognition so often balances richness against frugality, and why attention, planning, and conscious continuity cannot simply expand without bound.
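The energetic bookkeeping suggested here can be made explicit with a toy accounting scheme: each update is charged a per-unit cost for maintaining carried content and a higher per-unit cost for recruiting new content. The cost coefficients below are invented placeholders, not measurements, and the assumption that recruitment costs more than maintenance is itself a hypothesis.

```python
# Hypothetical sketch of energetic-efficiency accounting for one cognitive
# update. Cost coefficients are arbitrary placeholders for exposition.

MAINTAIN_COST = 1.0   # cost per unit of content carried forward (continuity)
RECRUIT_COST = 3.0    # cost per unit of newly recruited content (assumed higher)

def update_cost(current: set, nxt: set) -> float:
    """Metabolic cost of one transition: maintenance plus recruitment."""
    carried = len(current & nxt)
    recruited = len(nxt - current)
    return carried * MAINTAIN_COST + recruited * RECRUIT_COST

def search_efficiency(current, nxt, useful_novel: int) -> float:
    """Useful associative exploration achieved per unit energy."""
    cost = update_cost(current, nxt)
    return useful_novel / cost if cost else 0.0

# Two units carried, two recruited: 2 * 1.0 + 2 * 3.0 = 8.0
assert update_cost({"a", "b", "c"}, {"b", "c", "d", "e"}) == 8.0
```

Even this crude scheme exposes the tradeoff in the text: widening recruitment buys exploration at a steep energetic price, while deep continuity imposes a steady maintenance burden.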

Taken together, these categories define the basic structure of a biological system card. The card would not merely say what the brain is made of. It would say how the brain spends itself on a thought. It would describe total substrate, effective coalition size, state overlap, continuity depth, persistence architecture, associative branching, turnover dynamics, control strength, and energetic cost. Each category captures a different facet of the same general phenomenon: the brain as a temporally extended system of iterative updating.

It is worth emphasizing again that these proposed metrics are hypothetical. Some may ultimately prove hard to operationalize. Others may need revision, decomposition, or replacement as neuroscience advances. But the lack of ready measurement does not diminish their conceptual value. Scientific progress often begins with better questions and more appropriate categories of description. The language of active parameters per token has helped AI researchers describe sparse computation more honestly. An analogous language for brains may help neuroscientists describe cognition more dynamically and more precisely.

The larger aspiration is to move beyond the idea that the brain is best summarized by static anatomy or by coarse region-level activation maps alone. Those descriptions are indispensable, but incomplete. A biological system card would ask how much neural substrate is effectively recruited during a thought, how much of that recruitment persists into the next thought, how associative search unfolds, how turnover is regulated, and how much energy is spent preserving an organized mental stream. In doing so, it would bring us closer to a more satisfying science of cognitive dynamics, one that treats the mind not as a fixed object, but as an evolving pattern of overlapping neural coalitions.

IV. What a Biological System Card Could Explain

The value of a biological system card would not lie only in its elegance or novelty. Its real value would lie in what it could help explain. A framework centered on active coalitions, continuity profiles, persistence architecture, associative branching, and iterative turnover would give neuroscience a more refined way to compare cognitive states, cognitive styles, biological systems, and perhaps even artificial minds. It would make it possible to ask not merely where activity occurs, but how cognition is organized across time. That shift could illuminate phenomena that often remain flattened when described only in terms of anatomy, gross activation, or broad psychological labels.

Consider first the range of ordinary mental states within a single healthy person. Focused reasoning, mind wandering, rumination, creative ideation, fatigue, and distraction are all familiar modes of thought, yet they are rarely described in a unified computational vocabulary. A biological system card could help. Focused reasoning might be characterized by a relatively large but stable effective coalition, a high state-overlap ratio, moderate associative branching, strong top-down stabilization, and low cognitive drift. The mind would preserve enough continuity to sustain a line of argument or a multistep plan, while keeping branching narrow enough to resist derailment. In such a state, the present coalition would act like a disciplined search engine, exploring possibilities without losing task coherence.

Mind wandering would present a different profile. Its active coalition might remain sufficiently coherent to sustain a stream of thought, but top-down stabilization would be weaker, associative branching broader, and cognitive drift higher. More candidate continuations would be allowed to compete, and the stream would migrate more easily from one topic to another. A biological system card would not reduce mind wandering to mere noise. It would describe it as a legitimate regime of cognition, one with a distinctive balance of continuity and exploratory turnover.

Rumination would look different again. It might display strong continuity and high state overlap, but low novelty recruitment and narrow associative diversity. The system would preserve content too successfully within a restricted region of conceptual space, resulting in repetitive and self-reinforcing updating. The problem would not be a lack of continuity, but continuity trapped within too narrow a neighborhood. This illustrates one of the virtues of the framework. It allows dysfunction to be described not simply as too much or too little activity, but as a distorted configuration of continuity, branching, and turnover.

Creative ideation would likely occupy yet another regime. It might involve a moderately stable active coalition, broad associative branching, elevated novelty recruitment, and enough continuity to prevent fragmentation. Creativity is often described vaguely as unconstrained association, but that is incomplete. A fully unconstrained system would disintegrate into incoherence. What creative cognition appears to require is a balance, enough state overlap to preserve a thematic core, enough branching to activate unusual continuations, and enough selection pressure to crystallize something useful from the field of alternatives. A biological system card could make that balance more explicit.

Fatigue, drowsiness, and cognitive overload might also be rendered more precisely in these terms. Under fatigue, one might expect a reduced effective coalition size, lower top-down stabilization, shorter continuity depth, and perhaps a change in the system’s energetic efficiency. The mind would have less capacity to sustain state-spanning coactivity and less ability to stabilize task-relevant content against interference. Under overload, by contrast, the problem might not be reduced recruitment but excessive competition, too many partially activated candidates, weakened selection sharpness, and diminished interference resistance. In both cases, the framework offers a more structured description than simply saying that the brain is underperforming.
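The regimes sketched in the preceding paragraphs can be summarized as settings on a hypothetical card. The field names and numeric values below are invented for exposition; none is a measured quantity, and the point is only that the regimes become comparable once they share a parameter vocabulary.

```python
# Illustrative only: the cognitive regimes described above, rendered as
# settings on a hypothetical biological system card. All values invented.
from dataclasses import dataclass

@dataclass
class SystemCardProfile:
    state_overlap: float            # continuity between adjacent updates (0-1)
    branching: float                # breadth of associative exploration (0-1)
    novelty_recruitment: float      # share of new content per update (0-1)
    top_down_stabilization: float   # strength of goal-driven retention (0-1)
    drift: float                    # tendency to wander off-topic (0-1)

focused_reasoning = SystemCardProfile(0.8, 0.4, 0.2, 0.9, 0.1)
mind_wandering    = SystemCardProfile(0.6, 0.7, 0.4, 0.3, 0.7)
rumination        = SystemCardProfile(0.9, 0.1, 0.05, 0.5, 0.1)
creative_ideation = SystemCardProfile(0.6, 0.8, 0.6, 0.6, 0.4)

# Rumination is distinguished not by weak continuity but by low novelty:
assert rumination.state_overlap > mind_wandering.state_overlap
assert rumination.novelty_recruitment < creative_ideation.novelty_recruitment
```

Laid out this way, the claim in the text becomes visible at a glance: rumination and creative ideation differ less in continuity than in how much novelty and branching the stream admits.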

The same framework could also help compare individuals. Two people may perform similarly on broad tests while differing substantially in their cognitive dynamics. One may rely on relatively narrow, disciplined associative branching and strong top-down stabilization, resulting in precise but less exploratory reasoning. Another may exhibit wider branching, greater novelty recruitment, and higher drift, making the mind more generative but also more distractible. A biological system card would not eliminate the need for conventional psychological constructs, but it could provide a mechanistic bridge between observed traits and the temporal structure of thought.

This possibility becomes even more interesting when extended across development and aging. A child’s cognition might be characterized by different turnover dynamics, reduced goal-lock duration, broader but less stabilized associative branching, and shallower continuity depth in some domains. A mature adult may exhibit greater control, deeper continuity, and more efficient handoff between persistence mechanisms. In aging, some changes may involve not just memory loss in a broad sense, but altered persistence half-life, reduced state overlap, increased susceptibility to interference, or reduced energetic support for maintaining continuity. Such distinctions could enrich how cognitive development and decline are described.

Across species, the framework could become even more revealing. Comparative neuroscience often focuses on absolute measures such as brain size, encephalization, neuron counts, or regional elaboration. These are important, but they may obscure equally important differences in dynamic organization. A species might possess a relatively modest total substrate while still achieving surprisingly flexible cognition through efficient continuity management, strong associative integration, or favorable energetic tradeoffs. Another species might have a large substrate but relatively shallow continuity depth or more limited branching structure. A biological system card would not replace comparative anatomy, but it would add a process-based dimension to it. It would encourage us to ask not only how much brain a species has, but what sort of iterative cognitive regime that brain supports.

This becomes especially relevant when considering the stream of consciousness. If consciousness depends in part on temporally structured continuity, then the most relevant variables may not be gross activation alone, but the degree of overlap between adjacent states, the durability of state-spanning coactivity, and the ability of the system to preserve and revise content within a continuously refreshed present. On this view, a conscious episode is not simply a moment with enough activity in the right regions. It is a sequence of related updates bound together by persistence and partial inheritance. A biological system card could therefore provide a more formal language for discussing the difference between fragmented processing and temporally unified experience.

This does not mean that the card would solve the problem of consciousness. It would not by itself explain why continuity feels like something from the inside. But it could clarify the functional architecture most relevant to that question. It could show why a system with rich overlap, persistence, and controlled iterative updating might be a better candidate for sustained conscious experience than one composed of isolated bursts of processing with little state-to-state inheritance. At the very least, it would shift the discussion from vague appeals to complexity or integration toward more specific hypotheses about continuity-carrying substrate.

The framework could also help illuminate pathology. Disorders of attention, compulsivity, mood, schizophrenia, dementia, and altered states of consciousness might all involve disruptions in the dynamic variables described here. Some conditions may involve unstable coalitions with poor continuity depth. Others may involve excessively rigid continuity with too little novelty recruitment. Still others may involve disordered associative branching, weak selection sharpness, or breakdowns in top-down stabilization. By naming these variables, a biological system card could help organize hypotheses that are currently spread across many separate literatures.

The framework may also create a more productive basis for comparing biological and artificial systems. At present, such comparisons often swing between overstatement and dismissal. Either the systems are treated as fundamentally alike because they both process information, or they are treated as incomparable because their substrates differ. A biological system card offers a middle path. It allows one to compare systems not in terms of superficial similarity, but in terms of abstract computational organization. One system may have active parameters per token. Another may have active coalitions per thought. One may process in largely feed-forward steps. Another may rely on recurrent state-spanning continuity. The comparison becomes less about claiming equivalence and more about identifying useful homologous questions.

That may prove especially important as AI systems become more recurrent, multimodal, memory-augmented, and agentic. As artificial systems begin to maintain task sets over time, reuse internal state, and coordinate longer behavioral episodes, questions of continuity, persistence, turnover, and control will become increasingly central. A framework first developed to describe biological cognition could eventually help clarify which artificial systems more closely approximate temporally extended cognition and which remain fundamentally punctate. In that sense, the biological system card may not be only a neuroscience tool. It may become part of a more general science of intelligent dynamics.

The broader lesson is that brains should not be described only as anatomical objects or as collections of activated regions. They should be described as regimes of ongoing, structured revision. At any moment the brain is preserving some contents, dropping others, recruiting new ones, and searching for the next viable update under severe energetic and informational constraints. A biological system card would make that process explicit. It would provide a language for the organized expenditure of neural resources across time.

What this proposal ultimately points toward is a change in emphasis within cognitive science. Instead of treating cognition mainly as representation plus localization, we may need to treat it as representation plus temporal governance. The important question is not only what is represented and where, but how current content is maintained, how much of it carries forward, how widely it branches, how sharply it converges, and how much it costs to sustain continuity. Those are the dynamics that make minds feel like streams rather than snapshots.

The phrase “system card” may sound borrowed from engineering, but the deeper ambition is scientific. The point is to provide a concise and principled summary of the variables that matter most for understanding a cognitive system in operation. In the case of the brain, those variables may include effective coalition size, state-overlap ratio, continuity depth, persistence architecture, associative branching factor, turnover dynamics, control strength, and metabolic cost per update. None of these on its own is the whole story. Together, however, they begin to sketch a more faithful portrait of cognition as a temporally extended process.

The central idea of this essay can therefore be stated simply. Recent AI practice has begun to separate total capacity from active recruitment. Neuroscience would benefit from a comparable shift. Instead of describing the brain only by what it contains, we should also try to describe what it recruits, what it sustains, what it carries forward, and what it spends in the act of thought. A biological system card is one possible framework for doing that. It is speculative, but it is a productive speculation. It points toward a future in which cognitive science may be able to characterize minds not merely by their architecture, but by the dynamic profiles through which they continuously make and remake themselves.
