To understand the mind as a time-symmetric generative engine, it helps to start from the idea that cognition is fundamentally about constructing and updating an internal model of the world. This model is not a passive snapshot but an active process that continuously generates predictions about sensory inputs and internal states. In conventional accounts, these predictions flow primarily from the past to the future: prior experiences shape priors, which shape expectations about what will happen next. A time-symmetric view highlights that the same machinery can be understood as simultaneously constraining what we take the past to have been and what we expect the future to be, such that both directions in time are woven together in a single inferential loop.
The Bayesian brain framework is a natural starting point for this perspective. In a Bayesian view, the mind maintains probabilistic hypotheses about hidden causes in the world and updates them as new evidence arrives. Normally, we think of these hypotheses as being about what is currently present or what will soon occur. Yet the mathematics of Bayesian inference itself is indifferent to temporal direction: the same generative model that predicts future observations given current states can be used, with appropriate conditioning, to infer plausible past states given current evidence. When we experience perception as stable and coherent, it is because the brain is simultaneously fitting its model to sensory data in a way that is consistent with an inferred history and a projected future.
Time symmetry in cognition does not require literal physical retrocausality; rather, it arises from the way constraints propagate across time in a generative model. Consider a narrative with missing pieces: if you know the ending and a few key events, you can infer both the missing middle and what was likely true at the beginning. In a similar way, the brain's generative model uses present constraints to sculpt both backward-looking interpretations and forward-looking expectations. The temporal arrows we subjectively assign to cause and effect emerge from the structure of this model, not from a fundamental asymmetry in the underlying inferential machinery.
From this angle, generative models are not simply forward simulators of future sensory inputs; they are global consistency engines. For any given moment, the model seeks a configuration of latent causes that is coherent with past signals, present context, and anticipated consequences. This means that what we call "the past" is not a fixed archive but a continuously re-encoded set of inferred states revised whenever new evidence arrives. Each revision requires that past, present, and possible futures cohere under the same structural constraints, giving cognition a time-symmetric character at the level of inference even if our experience of time feels one-directional.
One way to formalize this is to think of the generative engine as defining a space of possible world-trajectories rather than isolated time slices. Each trajectory specifies how hidden variables and observations unfold across time, and the role of cognition is to find the subset of trajectories that best explain the currently available evidence. New observations prune and reshape this set, eliminating trajectories that are inconsistent and elevating those that make the data more probable. This process does not privilege inference about the future over inference about the past; both are adjusted together as the probability mass shifts across entire paths through time.
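To make the trajectory picture concrete, here is a minimal sketch in Python. It assumes a toy two-state hidden Markov model whose probabilities are invented for illustration; the point is only that the posterior is defined over whole paths, so a new observation reweights beliefs about the earliest state as much as about the latest one.

```python
from itertools import product

# Toy two-state hidden Markov model; all numbers are illustrative assumptions.
PRIOR = [0.5, 0.5]                    # p(s_0)
TRANS = [[0.9, 0.1], [0.1, 0.9]]      # p(s_t | s_{t-1}): states tend to persist
EMIT  = [[0.8, 0.2], [0.2, 0.8]]      # p(o_t | s_t): observations are noisy

def path_posterior(obs):
    """Posterior over entire hidden trajectories given all observations."""
    T = len(obs)
    scores = {}
    for path in product([0, 1], repeat=T):
        p = PRIOR[path[0]] * EMIT[path[0]][obs[0]]
        for t in range(1, T):
            p *= TRANS[path[t - 1]][path[t]] * EMIT[path[t]][obs[t]]
        scores[path] = p
    z = sum(scores.values())
    return {path: p / z for path, p in scores.items()}

def marginal(posterior, t, s):
    """Posterior probability that the hidden state at time t was s."""
    return sum(p for path, p in posterior.items() if path[t] == s)

# Ambiguous evidence about the first state...
before = path_posterior([0, 1])
# ...then further observations favoring state 1: the inferred past shifts too.
after = path_posterior([0, 1, 1, 1])
print(marginal(before, 0, 1), marginal(after, 0, 1))
```

With the extra observations favoring state 1, the inferred probability that the trajectory began in state 1 rises, even though nothing about the first time step itself changed.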
Neurally, this can be approximated by networks that propagate constraint signals both "upward" and "downward," but also "forward" and "backward" across temporal layers. Instead of a strict feedforward cascade from earlier to later states, recurrent and hierarchical circuits allow information about distal goals, expected outcomes, and high-level context to feed back and reshape the interpretation of earlier events in a sequence. A surprising outcome at time t can retroactively alter how the brain encodes observations at time t-1, making them more compatible with the updated model. This retroactive reinterpretation is a cognitive manifestation of time symmetry in generative processing.
In this framework, priors are best thought of as constraints defined over extended temporal structures, not just instantaneous states. A simple prior might specify that objects tend to move smoothly rather than teleport; a richer one encodes social norms, physical laws, and narrative coherence across multiple time steps. When new data violate these expectations, the generative model can change its estimate of what must have happened before the surprising event as well as what is likely to happen afterward. The same prior, therefore, governs both prediction and retrodiction, tying together the two temporal directions into a single probabilistic fabric.
Time-symmetric cognition also clarifies why certain illusions and biases arise. Many perceptual phenomena show "postdiction," where later events influence how earlier stimuli are consciously experienced. For example, a flash of light followed closely by another stimulus can change how we remember the timing or even the presence of the first flash. In a purely forward, feedforward account this is puzzling; in a time-symmetric generative model it is expected. The brain settles on an interpretation of the entire short temporal window that best fits all the evidence, even if that requires reshaping the apparent order or content of events within that window.
This perspective encourages us to view the mind as engaged not in linear time-tracking but in holistic pattern completion over temporal sequences. Whenever sensory information is ambiguous or incomplete, the generative engine fills gaps in a way that makes the entire segment of experience, from just-before to just-after, mutually consistent. This makes cognition less like a recorder and more like a dynamic constraint solver that continually edits its own timeline of reality. Our sense of a stable, ordered flow is the emergent result of this continuous editing process converging to a relatively stable interpretation.
Understanding cognition in this way has methodological implications for how models of the brain should be constructed. Instead of focusing solely on forward models that predict the next sensory frame from the current state, we can design architectures that natively handle whole sequences, optimizing over trajectories rather than snapshots. Techniques inspired by smoothing in state-space models, where estimates of past states are improved using future observations, provide a mathematical template for such time-symmetric cognitive models. These approaches better capture the interplay between what the brain believes has already occurred and what it expects to come next.
A time-symmetric view of generative models reframes the question of what it means for perception and cognition to be veridical. If the brain is constantly revising its inferred past in light of new evidence, then objective truth cannot be identified with a frozen record of prior events stored in memory. Instead, accuracy becomes a property of how well the entire inferred trajectory aligns with the constraints imposed by ongoing interaction with the world. The mind, as a generative engine, is always negotiating between multiple possible histories and futures, using a unified inferential mechanism that is fundamentally indifferent to the direction of time even as it generates the deeply felt arrow of temporal experience.
Predictive processing and retrodictive inference
Predictive processing has become one of the most influential frameworks for understanding how the mind operates as a generative engine. At its core, predictive processing treats perception, action, and cognition as forms of Bayesian inference: the brain continuously generates top-down predictions about sensory inputs and then compares them to bottom-up signals. The ensuing prediction errors serve as feedback, guiding adjustments to internal models so that future predictions improve. This picture is often described in forward-looking terms, as anticipating what will happen next, but the same machinery can be understood as simultaneously performing retrodictive inference, updating beliefs about what must have already occurred in order to make sense of what is currently being experienced.
In standard predictive processing accounts, the hierarchy of cortical areas encodes a set of priors at different levels of abstraction. High-level regions encode slowly changing, abstract expectations about the structure of the world (such as object identity or social norms), while lower-level sensory areas encode more rapidly fluctuating predictions about local features (such as edges, tones, or motion). When a new sensory input arrives, the generative model uses these priors to generate predictions at each level. Mismatches between prediction and input produce error signals that propagate upward, prompting the system to revise its expectations. Crucially, nothing in this Bayesian updating process specifies that the inferred causes must be located in the future; they can equally well be causes that lie in the recent past, reconstructed post hoc to best explain the current sensory pattern.
To see how retrodictive inference emerges naturally, consider the analogy to Bayesian smoothing in signal processing. In filtering, one estimates the current state of a system based only on past and present observations; in smoothing, one estimates the entire trajectory of states by also using future observations. The mathematics is the same, but the temporal conditioning is different. Predictive processing can be interpreted as implementing a form of online smoothing over short temporal windows: the brain does not merely filter forward, it continuously revises its estimate of earlier moments in light of subsequent evidence. This yields a time symmetry in inference: beliefs about what was just experienced are still plastic and can be reshaped by what is experienced next, so long as the overall trajectory becomes more coherent under the generative model.
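The filtering-versus-smoothing contrast can be written out directly. The sketch below, using the same kind of toy two-state hidden Markov model (all numbers are illustrative assumptions), implements a forward filter and then a forward-backward smoother; the smoothed belief about the first time step uses the later observations, while the filtered belief cannot.

```python
# Toy two-state HMM; all probabilities are illustrative assumptions.
PRIOR = [0.5, 0.5]
TRANS = [[0.9, 0.1], [0.1, 0.9]]      # p(s_t | s_{t-1})
EMIT  = [[0.8, 0.2], [0.2, 0.8]]      # p(o_t | s_t)

def normalize(v):
    z = sum(v)
    return [x / z for x in v]

def forward(obs):
    """Filtering: beliefs p(s_t | o_0..o_t) using past and present only."""
    beliefs = [normalize([PRIOR[s] * EMIT[s][obs[0]] for s in range(2)])]
    for o in obs[1:]:
        pred = [sum(beliefs[-1][sp] * TRANS[sp][s] for sp in range(2))
                for s in range(2)]
        beliefs.append(normalize([pred[s] * EMIT[s][o] for s in range(2)]))
    return beliefs

def backward(obs):
    """Backward messages, proportional to p(o_{t+1}..o_T | s_t)."""
    T = len(obs)
    betas = [[1.0, 1.0] for _ in range(T)]
    for t in range(T - 2, -1, -1):
        betas[t] = normalize([
            sum(TRANS[s][sn] * EMIT[sn][obs[t + 1]] * betas[t + 1][sn]
                for sn in range(2))
            for s in range(2)])
    return betas

def smooth(obs):
    """Smoothing: beliefs p(s_t | o_0..o_T); future evidence reaches the past."""
    return [normalize([a[s] * b[s] for s in range(2)])
            for a, b in zip(forward(obs), backward(obs))]

obs = [0, 1, 1, 1]
filt, smth = forward(obs), smooth(obs)
# The filter's belief about t=0 is fixed by the first observation alone;
# the smoother lets the later run of 1s revise that early belief.
print(filt[0][1], smth[0][1])
```

The two passes share one generative model; only the conditioning differs, which is the sense in which retrodiction is the mirror image of prediction here.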
Neuroscientifically, this can be mapped onto recurrent and bidirectional connectivity patterns in cortical circuits. Top-down connections encode generative predictions, while bottom-up pathways carry prediction errors. However, these interactions do not unfold in a simple feedforward cascade over time. Recurrent loops within and between areas allow the system to iteratively refine its estimates across a short temporal buffer. When a surprising event occurs, higher-level regions can send revised predictions back down that not only reinterpret the present signal but also alter the latent representation of the immediately preceding inputs still active in working memory. The net effect is that the "past few hundred milliseconds" of experience are re-encoded, reflecting a joint compromise between what has already happened and what is now known.
Perceptual postdiction provides an illustrative example. In certain visual illusions, the conscious percept of an earlier stimulus is changed by a subsequent stimulus arriving tens or hundreds of milliseconds later. Rather than experiencing a clear first event followed by a separate second event, the observer experiences a unified percept that seems to have been present all along. From a predictive processing perspective, the brain's generative model settles on an interpretation of the entire temporal segment that most reduces prediction error globally. The later input adjusts the priors governing the interpretation of the earlier input, leading to a retroactive "correction" of what the system takes itself to have perceived.
This retroactive reinterpretation does not require any exotic retrocausality; it follows straightforwardly from the logic of minimizing prediction error over time-extended patterns. The brain can be thought of as optimizing a cost function defined over trajectories of neural states and sensory inputs, not just individual moments. When new evidence arrives, it is not only the current prediction that is updated; the inferred latent states at slightly earlier times are also revised to better fit the newly constrained trajectory. In other words, the system is constantly solving for the best-fitting path through state space that explains all available observations up to the current moment.
This perspective can be formalized using generative models that explicitly include temporal dynamics, such as hidden Markov models, Kalman filters, or more complex state-space models. In these frameworks, states evolve according to probabilistic transition rules, and observations are generated from these hidden states. The inferential problem is to estimate both the sequence of hidden states and the parameters governing their dynamics. Predictive processing corresponds to an approximate, neurally plausible solution to this problem, often described as variational inference. When formulated over entire sequences, the same variational machinery yields both predictions about the next state and retrodictions about previous states, driven by the same objective of minimizing a global free energy or prediction error bound.
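For the continuous case, the classic Kalman filter plus Rauch-Tung-Striebel smoother makes the same point in a few lines. The sketch below is a one-dimensional random-walk model with invented noise parameters: the filter's estimate of the first state ignores the later jump in the data, while the smoother revises it.

```python
# One-dimensional random-walk state-space model with invented noise levels:
#   x_t = x_{t-1} + process noise (variance Q);  y_t = x_t + obs noise (variance R)
Q, R = 0.01, 1.0

def kalman_filter(ys, m0=0.0, p0=10.0):
    """Forward pass: filtered means/variances plus the one-step predictions."""
    ms, ps, mpred, ppred = [], [], [], []
    m, p = m0, p0
    for y in ys:
        mp, pp = m, p + Q                 # predict (random walk: mean stays put)
        k = pp / (pp + R)                 # Kalman gain
        m, p = mp + k * (y - mp), (1 - k) * pp
        ms.append(m); ps.append(p); mpred.append(mp); ppred.append(pp)
    return ms, ps, mpred, ppred

def rts_smoother(ys):
    """Backward pass: revise earlier estimates using later observations."""
    ms, ps, mpred, ppred = kalman_filter(ys)
    sm, sp = ms[:], ps[:]
    for t in range(len(ys) - 2, -1, -1):
        g = ps[t] / ppred[t + 1]          # smoother gain (transition = identity)
        sm[t] = ms[t] + g * (sm[t + 1] - mpred[t + 1])
        sp[t] = ps[t] + g * g * (sp[t + 1] - ppred[t + 1])
    return sm, sp

ys = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]       # a later "surprise" in the data
filt_m, _, _, _ = kalman_filter(ys)
smooth_m, _ = rts_smoother(ys)
# The filtered estimate of the first state stays at 0; the smoothed estimate
# is pulled upward by the evidence that arrived afterward.
print(filt_m[0], smooth_m[0])
```

The smoother reuses exactly the quantities the filter already computed; nothing new is observed, yet the inferred past changes.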
Importantly, the role of priors in this context is temporally holistic. A prior does not merely specify that a given state value is likely; it also specifies that certain transitions between states are more probable than others. For instance, a prior might encode that objects tend to move smoothly, that speech sounds unfold in lawful phonetic sequences, or that actions follow goals in coherent narratives. When a new observation violates these transition priors, the model can respond in two ways: adjust expectations about what will happen next, and revise beliefs about what must have been true just before the violation. Both adjustments serve to restore coherence to the inferred trajectory, demonstrating that prediction and retrodiction are two faces of the same inferential coin.
In everyday cognition, this manifests as rapid, largely unconscious narrative repair. Imagine hearing a sentence that begins ambiguously and is then disambiguated by its final word. The final word does not merely inform your expectation of further speech; it retroactively clarifies what you take yourself to have heard earlier, including which phonemes or words were likely present. Your conscious recollection of the opening part of the sentence shifts accordingly, as if it had always been interpreted that way. Under predictive processing, this is explained by the continuous interplay between higher-level linguistic priors and lower-level auditory predictions updating across a temporal window, with later evidence tightening constraints on earlier interpretations.
Retrodictive inference can extend beyond subsecond perception into longer timescales of understanding. When learning new concepts or social norms, later experiences can cause a reinterpretation of earlier events: a gesture that once seemed friendly may later be recoded as manipulative in light of subsequent behavior. The underlying predictive model of other agents' intentions is revised, and this revision is applied not only to future expectations but also to how past interactions are categorized in memory. The cognitive system effectively recomputes the most probable latent causes of earlier episodes, given the enriched set of observations now available.
In computational neuroscience and machine learning, this time-symmetric view encourages the design of architectures that explicitly support bidirectional inference over sequences. Recurrent neural networks, temporal convolutional networks, and transformer-based models can be interpreted as engineering solutions to the same problem: integrating information from both earlier and later positions in a sequence to form better representations. When trained to minimize prediction error across entire sequences rather than one-step-ahead losses alone, these models naturally learn to leverage future context to refine representations of earlier elements, mirroring the retrodictive aspects of predictive processing in the brain.
From the standpoint of the Bayesian brain hypothesis, what emerges is a unified picture: the mind is an active generative engine that applies the same inferential principles in both temporal directions. Prediction, in the narrow sense of forecasting future sensory inputs, is only one special case of a more general operation: selecting the most probable trajectory of hidden causes and observations that fits all available data. Retrodictive inference is simply the backward-looking aspect of this selection process. Both are governed by the same priors over dynamics and structure, and both are implemented by neural circuits that iteratively exchange predictions and errors across hierarchies and short-term temporal buffers.
Understanding predictive processing in this way reframes perception, memory, and decision-making as components of a single inferential flow that is locally anchored in the present but globally sensitive to patterns that span past and future. The apparent asymmetry of time in conscious experience, our sense that we can influence the future but not the past, then reflects constraints on how far and how flexibly these inferential updates can reach, rather than a fundamental asymmetry at the level of the underlying computations. Within the windows where active neural representations remain malleable, the brain routinely uses later evidence to reshape what it takes the recent past to have been, fulfilling the logic of a time-symmetric generative engine operating under predictive processing principles.
Counterfactual worlds and mental simulation
If the mind is a time-symmetric generative engine, then one of its most striking capacities is the ability to roam through counterfactual worlds: not just what did happen or what will happen, but what could have happened instead. Every everyday "what if?" thought (what if I had taken a different job, what if the ball had bounced the other way, what if I say this instead of that) is a small act of mental time travel into nearby possible histories and futures. Under a Bayesian brain perspective, these are not metaphysical excursions but specific ways of sampling from the generative model: the system temporarily relaxes some constraints imposed by actual evidence and explores alternative trajectories that remain consistent with its learned structure of the world.
Counterfactual thinking is often described as "imagination," but within a probabilistic framework it is better understood as structured inference under altered conditions. In ordinary perception, the model conditions on sensory data and seeks the most probable trajectory of hidden causes that explains those data. In counterfactual simulation, the model instead conditions on hypothetical constraints ("suppose I had turned left at the intersection") and recomputes which trajectories would follow. The same internal dynamics, transition probabilities, and latent variables are used; what changes is the subset of constraints treated as fixed. Time symmetry shows up because these constraints can be imposed at different points along the imagined trajectory: one can fix an outcome ("what if I had passed the exam?") and explore what earlier decisions and events would have needed to occur, just as readily as one can fix an initial choice and simulate forward.
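A minimal numerical sketch of this symmetry of conditioning, using an invented three-variable chain (studied, prepared, passed) with made-up probabilities: the same model answers both the forward question (fix the choice, predict the outcome) and the backward one (fix the outcome, infer the choice).

```python
# Counterfactual conditioning sketch. The chain "studied -> prepared -> passed"
# and all probabilities are invented for illustration; the point is that one
# generative model supports conditioning in either temporal direction.
P_STUDY = 0.5                          # prior p(studied)
P_PREP = {True: 0.9, False: 0.2}       # p(prepared | studied)
P_PASS = {True: 0.85, False: 0.3}      # p(passed | prepared)

def p_pass(studied):
    """Forward simulation: fix the earlier choice, predict the outcome."""
    p_prep = P_PREP[studied]
    return p_prep * P_PASS[True] + (1 - p_prep) * P_PASS[False]

def p_studied(passed):
    """Backward inference: fix the outcome ("what if I had passed?") and
    infer which earlier choice is consistent with it, via Bayes' rule."""
    def likelihood(s):
        return p_pass(s) if passed else 1 - p_pass(s)
    joint = {s: (P_STUDY if s else 1 - P_STUDY) * likelihood(s)
             for s in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(p_pass(True), p_studied(True))
```

Imposing the constraint "passed" makes the earlier state "studied" more probable, just as imposing "studied" makes the later state "passed" more probable; no extra machinery is needed for the backward direction.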
Seen in this way, mental simulation is not an add-on to perception but a reconfiguration of the same predictive machinery. When we imagine an event, sensory cortices and associative areas are partially driven by top-down signals rather than bottom-up inputs. The generative engine runs in a more "offline" mode, producing internally consistent sequences of states without being tightly constrained by current sensory evidence. Yet the system does not abandon realism altogether: the same priors that guide perception (about physics, social interaction, language, and bodily dynamics) also shape the plausibility of imagined scenes. Even fantastical daydreams obey many of these constraints; people rarely imagine that objects randomly explode into pure noise or that causal sequences reverse arbitrarily. The continuity of priors across perception and imagination is what gives counterfactual worlds their felt coherence.
Time symmetry becomes especially salient in counterfactuals that hinge on future goals but modify past branches. Consider planning: when deciding how to act, the mind generates multiple possible action sequences and their likely consequences, evaluates them, and selects one. This is often described as forward simulation, but in practice, one also entertains backward-looking constraints. You might begin with a desired future state (having completed a project, avoided an argument, or reached a destination) and mentally work backward: what must I have done the previous day, hour, or minute to get there? In generative modeling terms, you are imposing a boundary condition at a later time and inferring earlier states compatible with it. The same network that usually predicts the future given the present can, under different conditioning, infer the necessary past given an imagined future.
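Working backward from an imposed future condition can be sketched as a simple reverse search over a hypothetical state graph (the states and edges below are invented): rather than simulating forward from the present, we fix the goal and collect every earlier state from which it remains reachable.

```python
# Backward planning sketch: impose a boundary condition at a later time (the
# goal) and infer which earlier states can lead there. Hypothetical toy state
# graph; an edge from key to value means "one action can move you there".
GRAPH = {
    "home":    ["station", "car"],
    "station": ["train"],
    "car":     ["highway"],
    "train":   ["office"],
    "highway": ["office"],
}

def states_reaching(goal, graph):
    """All states from which the goal is reachable, found by walking the
    edges backward from the imposed future condition."""
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    frontier, reachable = [goal], set()
    while frontier:
        state = frontier.pop()
        for prev in reverse.get(state, []):
            if prev not in reachable:
                reachable.add(prev)
                frontier.append(prev)
    return reachable

print(states_reaching("office", GRAPH))
```

The forward simulator and this backward search read the same transition structure; only the anchoring end of the trajectory differs.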
This bidirectional flexibility is evident in how people spontaneously explain near-miss events and regrets. After a disappointing outcome, one often constructs "if only" scenarios that minimally alter the past to produce a better present. These counterfactuals typically modify events that are both causally potent and temporally close to the outcome: "If only I had left five minutes earlier, I would have caught the train." The generative process is implicitly optimizing over alternative past trajectories that require the least deviation from the actual history while flipping the sign of the final outcome. The sense of regret or relief is tied to the ease with which the model can find such nearby worlds: when a small change would have made a large difference, the counterfactual feels more salient because the inferred alternative trajectory sits high in the distribution of plausible paths.
Importantly, these simulations are rarely neutral. They are shaped by learned utilities, values, and affective biases that function like higher-order priors on which trajectories are worth exploring. In planning, trajectories leading to desired outcomes are preferentially expanded and elaborated; in rumination, trajectories leading to painful outcomes are revisited and reweighted. The generative engine does not merely sample possibilities; it selectively amplifies those that matter to the organism's goals. In this sense, emotional valence can be understood as modulating the probability landscape over counterfactual paths, making some imagined futures or alternative pasts more "sticky" in thought than others.
Neurally, mental simulation appears to leverage many of the same networks involved in memory and spatial navigation. The hippocampal-entorhinal system, famous for place cells and grid cells, can be interpreted as implementing a generative model over trajectories in a latent space of states. Recordings in animals show "preplay" and "replay" sequences: rapid, compressed reactivations of possible paths the animal could take or has taken in a maze. Preplay occurs before movement, as if the system were evaluating candidate future routes; replay occurs afterward, during rest or sleep, as if consolidating or evaluating the experienced path. Both phenomena can be viewed as samples from the same underlying trajectory model, sometimes anchored in actual experience and sometimes in hypothetical extensions or alternatives. The symmetry between preplay and replay underscores that the generative mechanism does not care whether it is traversing forward or backward along the temporal dimension of its internal space.
This perspective helps clarify why counterfactual simulation is so deeply entangled with both memory and prospective thinking. When you recall an event, you rarely produce a literal recording; instead, you reconstruct a plausible scene that fits current beliefs, cues, and goals. Tiny modifications to this reconstruction (shifting an action here, a decision there) yield alternative versions that feel like they "almost happened." Likewise, when you imagine the future, you often begin with fragments of actual memories and recombine them into new trajectories: a restaurant you know placed into a city you have never visited, familiar people in unfamiliar roles, routines projected into tomorrow with small perturbations. Both operations reuse the same inferential tools, and both can slide easily into fully counterfactual territory because the generative model is always defined over a space of possible trajectories, not just the single path that was actually realized.
From a time-symmetric standpoint, counterfactual worlds are not categorized by their direction in time but by which constraints they honor. Some simulations hold the past fixed and explore alternative futures; others hold a future condition fixed and explore alternative pasts; still others relax both ends and examine different ways the entire story could have unfolded under slightly changed parameters (a different personality trait, a different social norm, a different physical law). The underlying computation is the same: search over trajectories in a space defined by structural priors, guided by prediction and error signals, until a coherent alternative emerges that satisfies the imposed conditions. The phenomenological differences between "imagining what might happen," "regretting what did not happen," and "inventing a fictional world" reflect differences in which temporal anchors and likelihood constraints are engaged, not differences in the basic inferential machinery.
This explains why humans can so readily entertain elaborate fictional universes yet still find some imagined possibilities intuitively "unbelievable." Even when deliberately relaxing certain constraints (allowing for magic, alien technology, or altered physics), the generative engine continues to enforce deeper, more entrenched priors: objects still tend to persist, agents still have motives, causes still precede effects locally. Stories that brutally violate these structural regularities are hard to follow because they exceed the flexibility of the underlying model. Conversely, compelling narratives often derive their power from exploring counterfactual trajectories that stretch but do not break these constraints: alternate histories where a single decision changes geopolitical outcomes, speculative technologies that extrapolate existing trends, or personal life stories that branch at a key moment. The mind can inhabit these worlds with ease because the generative model can be smoothly deformed to accommodate them.
In practical reasoning, counterfactual simulation supports not just prediction but credit assignment and learning. When an outcome is observed, the system can ask: under slight variations of earlier states or actions, would the outcome have changed? If so, those earlier elements are likely to be causally relevant and are candidates for updating. This is analogous to sensitivity analysis in formal models: probing how small perturbations in a trajectory affect downstream variables. The ability to simulate nearby possible worlds thus helps the generative engine adjust its parameters to better capture which transitions are reliable, which policies are effective, and which situational cues are truly diagnostic. Time symmetry enters because the learning signal can flow backward along simulated paths as readily as forward; the system can trace responsibility for an outcome to both earlier choices and later contextual factors that together shape the inferred causal chain.
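The sensitivity-analysis analogy can be illustrated with a finite-difference probe over a toy trajectory simulator (the simulator and its leverage numbers are invented): each earlier action is perturbed slightly, and the action whose perturbation moves the outcome most is flagged as causally relevant.

```python
# Counterfactual sensitivity sketch for credit assignment. The rollout below
# is an invented deterministic example in which the middle action happens to
# have the most downstream leverage.

def simulate(actions):
    """Toy rollout: the outcome accumulates each action scaled by an assumed
    downstream leverage for that step."""
    gains = [0.1, 2.0, 0.1]        # assumed leverage per step (illustrative)
    return sum(g * a for g, a in zip(gains, actions))

def sensitivities(actions, eps=1e-3):
    """Finite-difference probe: how much does the outcome change when each
    earlier action is nudged by eps?"""
    base = simulate(actions)
    out = []
    for i in range(len(actions)):
        nudged = list(actions)
        nudged[i] += eps
        out.append(abs(simulate(nudged) - base) / eps)
    return out

s = sensitivities([1.0, 1.0, 1.0])
# The middle action dominates the outcome, so it attracts the credit.
print(s)
```

The probe runs backward in the sense that responsibility for an already-observed outcome is traced to earlier variables; gradient-based credit assignment in learning systems generalizes the same idea.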
Viewing counterfactual thinking through this lens suggests that the apparent asymmetry we feel, of being able to change the future but not the past, coexists with a deeper computational symmetry in how alternative timelines are evaluated. In imagination, the system routinely modifies both past and future in its internal scenarios, adjusting them jointly to maintain coherence under its priors. What distinguishes realistic planning from nostalgic regret is not the direction of temporal manipulation but whether the simulated trajectories remain coupled to current action policies and controllable variables. The same generative machinery that fabricates unrealized histories in daydreams is also what allows agents to chart meaningful paths through the actual world.
Memory, imagination, and bidirectional time
Memory, on this view, is not a warehouse of static records but an active process of reconstruction driven by the same generative engine that underlies perception and imagination. When an episode is "remembered," the mind does not simply replay stored data; it re-infers a plausible trajectory of past states that explains a set of present cues, internal and external, in light of current beliefs and goals. The remembered past is thus always the output of ongoing inference: a best-guess reconstruction constrained by what is currently known, felt, and anticipated, rather than a fixed snapshot preserved from an earlier time.
This reconstructive character makes memory inherently compatible with time symmetry. The Bayesian brain maintains priors not just over isolated events but over how events typically unfold in sequence: people tend to behave consistently; objects obey physical regularities; conversations follow conversational norms. When new information arrives that conflicts with these priors, the system can reduce global inconsistency by updating beliefs about what the past must have been, no less than about what the future is likely to be. Remembering thus becomes a temporally extended act of prediction: generating a past that, together with the present, forms a coherent story under the current model of the world.
The distinction between episodic and semantic memory illustrates this interplay. Episodic memories are often described as "mental time travel" to specific events, rich with context, perspective, and emotion. Semantic memory captures more abstract, timeless knowledge: facts, rules, concepts. From the standpoint of a time-symmetric generative model, episodic recall involves inferring a particular trajectory consistent with both semantic structure and present cues, while semantic memory encodes the compressed regularities that constrain which episodic trajectories are deemed plausible. As semantic knowledge changes, through learning, re-interpretation, or cultural influence, the space of admissible episodic reconstructions shifts as well, enabling new "memories" of events that may never have been encoded in exactly the way they are now imagined.
This becomes especially evident in autobiographical memory, where identity and narrative coherence function as high-level priors. People tend to remember past actions and experiences in ways that support a relatively stable self-concept, even when the raw details were ambiguous or conflicting. Later experiences (successes, failures, relationships, ideological shifts) prompt revisions to that self-model, and these revisions propagate backward, subtly altering how earlier episodes are recalled. The same story arc that supports expectations about future behavior also retrofits recollections of past behavior, yielding a life narrative that feels continuous even as it is periodically rewritten.
Imagination shares this reconstructive machinery but loosens the coupling to actual evidence. When one imagines a childhood conversation that never happened, or a different outcome to a familiar event, the generative engine draws on the same semantic scaffolding and episodic fragments used in memory. The difference lies in which constraints are treated as fixed. In memory, the system conditions more strongly on stable cues (photographs, testimony, bodily traces, entrenched beliefs), while in imagination it is free to explore trajectories that partially violate those cues as long as deeper structural regularities remain intact. This is why vivid daydreams and hypothetical recollections can feel "memory-like": they occupy the same representational space and obey the same structural priors, differing mainly in the strength of their anchoring to external evidence.
Neuroscientific studies of the so-called "default mode network" and hippocampal system provide converging support for this shared machinery. The hippocampus and associated medial temporal structures are engaged during encoding of new experiences, retrieval of past episodes, imagination of fictitious scenes, and projection into possible futures. Patterns of activity in these regions suggest that they implement a flexible mapping between present states and possible trajectories through a latent space of events. During recall, this mapping is constrained by partial cues; during imagination, constraints can be imposed from desired outcomes or counterfactual premises. In both cases, the system is effectively sampling from or optimizing over trajectories that best satisfy the current combination of sensory evidence, internal states, and higher-level narrative expectations.
Time symmetry is particularly vivid in phenomena where later information reshapes earlier memories. Consider "reinterpretive remembering" in social contexts: learning that a trusted colleague has been dishonest can transform how one recalls years of interactions. Jokes once read as friendly now seem barbed; ambiguous actions are re-labeled as manipulative. The underlying generative model of that person's character has changed, and this new model is applied not only to future predictions of their behavior but also to past episodes stored in memory. The recollected past is updated to remain consistent with the newly inferred trait, thereby minimizing global prediction error across the entire interpersonal history.
At shorter timescales, similar processes appear in experiments on memory reconsolidation. When a memory is reactivated (brought into conscious awareness or made labile by a reminder), it can be modified before being stored again. New information present at the time of reactivation can be incorporated, such that subsequent recall reflects a blend of old and new material. Within a time-symmetric framework, reactivation opens a window in which the inferred past is temporarily plastic; the generative engine can re-estimate the trajectory that produced the current memory trace, guided by recently acquired evidence. Once reconsolidated, this new trajectory becomes the default "past" for subsequent inferences, even if it deviates significantly from the original encoding.
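The backward propagation of new evidence can be made concrete with a toy hidden Markov model. In Bayesian smoothing (the forward-backward algorithm), the posterior over an earlier hidden state is conditioned on all observations, so adding later evidence literally revises the inferred past. This is only an illustrative sketch: the two states, transition matrix, emission probabilities, and observation sequences below are arbitrary assumptions, not a model of memory.

```python
import numpy as np

# Toy 2-state hidden Markov model: hidden "event" states over time,
# observed through noisy cues. All numbers are illustrative assumptions.
T_mat = np.array([[0.9, 0.1],   # state transition probabilities
                  [0.1, 0.9]])
E = np.array([[0.8, 0.2],       # emission: P(cue | state)
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])

def smooth(obs):
    """Posterior over each hidden state given ALL observations
    (forward-backward), so later evidence constrains earlier states."""
    n = len(obs)
    fwd = np.zeros((n, 2))
    f = prior * E[:, obs[0]]
    fwd[0] = f / f.sum()
    for t in range(1, n):                 # forward pass (filtering)
        f = (fwd[t - 1] @ T_mat) * E[:, obs[t]]
        fwd[t] = f / f.sum()
    bwd = np.ones((n, 2))
    for t in range(n - 2, -1, -1):        # backward pass
        b = T_mat @ (E[:, obs[t + 1]] * bwd[t + 1])
        bwd[t] = b / b.sum()
    post = fwd * bwd                      # combine and renormalize
    return post / post.sum(axis=1, keepdims=True)

# Ambiguous early cue, then increasingly consistent later evidence.
before = smooth([0, 1])          # belief about t=0 given evidence to t=1
after = smooth([0, 1, 1, 1])     # same past cue, more future evidence
print(before[0], after[0])       # the inferred EARLIEST state shifts
```

Running this, the posterior over the earliest state moves substantially once the later observations arrive, even though the evidence about that moment itself is unchanged: the "past" is re-estimated to stay coherent with what came after.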
Imagination also reveals the joint constraints of past and future on present cognition. When planning, one typically draws on episodic memories of similar situations, abstracting regularities and projecting them forward. Yet the process can loop backward as well: a desired future outcome can cause one to selectively recall or even subtly reshape past experiences that support its feasibility. For example, deciding to undertake a challenging project might bring forth memories of previous successes while downplaying episodes of failure; over time, the autobiographical record itself may drift toward a story that rationalizes the chosen goal. Here, the future does not literally cause the past, but time symmetry in inference means that current and anticipated states constrain how both are represented.
The vividness and confidence associated with a memory can then be understood as emergent properties of how narrowly the generative model's posterior distribution concentrates around a particular trajectory. When priors and current cues strongly favor a single explanation of how events unfolded, the resulting recollection feels sharp, detailed, and certain. When multiple trajectories remain comparably plausible, memory feels vague, fragmentary, or unstable, and imagination can more easily intrude. Under this view, the difference between "remembering" and "imagining" is not always categorical; it can be a matter of how tightly predictions about the past are constrained by evidence relative to the range of counterfactual trajectories that the model is willing to entertain.
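The claim that vividness tracks posterior concentration can be illustrated with Shannon entropy over candidate trajectories: a sharply peaked posterior has low entropy (one explanation dominates, recollection feels certain), while a flat posterior has high entropy (several explanations stay live, recollection feels fuzzy). The candidate probabilities here are illustrative assumptions, not data.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; assumes all entries are positive."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    return float(-np.sum(p * np.log2(p)))

# Strongly cued recollection: one trajectory dominates the posterior.
sharp = [0.9, 0.05, 0.03, 0.02]
# Weakly cued recollection: several trajectories remain comparable.
vague = [0.3, 0.3, 0.2, 0.2]

print(f"sharp: {entropy(sharp):.2f} bits")  # low entropy: feels certain
print(f"vague: {entropy(vague):.2f} bits")  # high entropy: feels fuzzy
```

On this toy measure, "remembering" and "imagining" sit on a continuum of entropy rather than in separate categories, matching the paragraph's point.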
Emotion provides a powerful modulatory influence on this process. Affective states shape which aspects of an episode are amplified, which are suppressed, and how links between episodes are drawn. A traumatic event, for instance, can serve as a high-weight constraint on subsequent inferences about both past and future: neutral earlier experiences may be reinterpreted as warning signs, while imagined futures become saturated with similar danger. The generative engine's learning rules, tuned for survival, bias memory and imagination toward trajectories that are especially relevant to threat and reward, even if this introduces distortions relative to the raw sensory record. Time symmetry shows up here in the way strong emotions can propagate both backward and forward along one's personal timeline, reorganizing past narratives and future expectations in tandem.
Clinical phenomena further demonstrate the entanglement of memory, imagination, and bidirectional time. In depression, rumination often involves repeated reconstruction of past failures and imagined confirmations of future hopelessness, with each run through the generative model reinforcing priors that favor negative trajectories. In post-traumatic stress, intrusive "memories" can blend actual episodes with imagined variations, each replay updating the internal model of the world as uncontrollable and unsafe. Therapeutic interventions frequently attempt to alter these priors and narratives, introducing new interpretations, counterexamples, and imagined alternatives, so that subsequent reconstructions of both past and future become less constrained by pathological expectations.
These dynamics indicate that what we call "the past" in psychological terms is not a fixed temporal region but a set of currently preferred explanations for how the world and the self have come to be as they are. As experiences accumulate, parameters of the generative model shift, and with them the entire inferred personal history subtly changes. The mind's sense of continuity is maintained not by preserving invariant records but by continuously adjusting recollections and projections to remain mutually consistent under evolving beliefs. Time symmetry lies in this global coherence: the same inferential machinery that updates predictions about what will happen next also revises constructions of what must have happened before, such that memory and imagination remain two coordinated aspects of a single, temporally bidirectional generative process.
Implications for consciousness and free will
Thinking of the mind as a time-symmetric generative engine forces a re-examination of what it means to be a conscious subject situated in time. If the Bayesian brain is constantly inferring entire trajectories that span past and future, rather than merely tracking a moving "now," then consciousness may be less like a spotlight scanning along a pre-existing timeline and more like a window onto the current best-guess world-history the system has settled upon. At any moment, what it feels like to "be me now" includes not only present sensations but also an inferred past and a set of anticipated futures that jointly cohere under the model's priors and prediction mechanisms. The subjective present can then be understood as the point at which these bidirectional inferences meet: a locus where constraints from remembered history and imagined outcomes converge to shape perception, thought, and action.
This picture blurs the familiar distinction between consciousness as "awareness of what is happening now" and memory or anticipation as separate, peripheral processes. Instead, conscious experience may just be the subset of the generative engine's inferences that have become sufficiently consistent and globally integrated across time to be broadcast and stabilized. On this view, when you consciously experience a scene, you are not merely registering raw inputs; you are occupying a temporally extended hypothesis about how things have been and where they are heading. The sense of a narrative self, of being the same subject moving through time, emerges from the continuity constraints the model places on these hypotheses: it tends to select trajectories in which a coherent agent with stable traits and goals persists from one moment to the next.
Time symmetry in inference complicates naive intuitions about free will. In everyday thinking, we imagine a fixed past funneling into an open future: the past is settled, the future is not, and our choices seem to push reality forward along one branch rather than another. Yet if the mind's internal model constantly revises both its past and future in order to maintain global coherence, then the distinction between "given history" and "open possibility" becomes a feature of how the model organizes its trajectories, not an intrinsic property of time itself. The system treats some parts of the inferred timeline as effectively fixed because they are so tightly constrained by evidence and entrenched priors; other parts remain plastic and are explored through counterfactual simulation and planning. Subjectively, these different regimes are felt as "things I cannot change" versus "things I can still decide," even though both are represented by the same underlying probabilistic machinery.
Agency, in this framework, can be reinterpreted as the generative engine's ability to selectively shape which trajectories remain viable by coupling internal simulations to actual motor outputs. When an agent "decides," it evaluates multiple candidate paths under its current model, weighs them according to expected outcomes and values, and then commits its body to actions that make one of those paths more likely in the external world. The crucial asymmetry is not that inference suddenly stops being time-symmetric, but that action is constrained by physical irreversibility: once muscles move and external events unfold, many alternative trajectories become practically impossible, even if they remain conceivable within the model. Conscious will is then experienced as the sense of steering the system through this space of trajectories, pruning some possibilities and amplifying others, while knowing that certain constraints (already incurred events, unchangeable conditions) sharply narrow what can still be realized.
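The decision step described here (evaluate candidate paths, weigh them by expected outcome, commit to one, prune the rest) can be sketched in a few lines. Everything below is a hypothetical illustration: the candidate names, probabilities, and values are invented, and real deliberation would involve far richer models.

```python
# Deliberation as trajectory selection: score candidate action paths
# by expected value under the current model, commit to the best one,
# and let commitment prune the alternatives. All names and numbers
# here are illustrative assumptions, not a cognitive model.
candidates = {
    "start_project": {"p_success": 0.6, "value_success": 10, "value_failure": -4},
    "delay":         {"p_success": 0.9, "value_success": 2,  "value_failure": -1},
    "abandon":       {"p_success": 1.0, "value_success": 0,  "value_failure": 0},
}

def expected_value(c):
    """Expected value of one candidate path under the agent's model."""
    return (c["p_success"] * c["value_success"]
            + (1 - c["p_success"]) * c["value_failure"])

scores = {name: expected_value(c) for name, c in candidates.items()}
chosen = max(scores, key=scores.get)

# Physical irreversibility: once the action is taken, the other paths
# are no longer realizable, even if they remain conceivable.
viable = {chosen: candidates[chosen]}
print(scores, "->", chosen)
```

The asymmetry the paragraph describes lives in the last step: the scoring is direction-neutral, but the commitment is not, because acting collapses the set of realizable trajectories.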
Within this perspective, free will need not be understood as an exemption from causal explanation. The mind's choices are still the outcome of a richly structured causal process, encoded in its priors, learning history, bodily states, and environmental context. However, time-symmetric inference suggests a compatibility between determinism at the level of physical dynamics and a meaningful notion of freedom at the level of generative modeling. Freedom can be framed as flexibility in the space of trajectories the model can represent and evaluate: an agent is "freer" to the extent that it can entertain diverse counterfactual futures, assign them differentiated values, and pursue those that align with its higher-level goals, rather than being trapped in narrow, rigid patterns of prediction and response. Conscious deliberation is one manifestation of this flexibility: an explicit, reportable negotiation among candidate paths before action locks one of them in.
The role of counterfactuals is central here. When people introspect about choosing, they often describe imagining different outcomes, weighing pros and cons, and feeling that they "could have done otherwise." In the time-symmetric picture, these impressions correspond to the generative engine's exploration of nearby possible trajectories originating from similar past conditions. The felt sense of "could have" arises when multiple trajectories remain similarly plausible under the model's current constraints and values, such that small internal fluctuations (different attentional foci, transient affective states, or newly considered information) could tip the system toward one or another. After a choice is made and its consequences unfold, the model retrofits the past and updates its expectations so that the chosen path becomes part of the now-favored history, often downplaying or restructuring the previously live alternatives. The phenomenology of both open possibility and post hoc rationalization thus emerges from the same bidirectional inferential flow.
Consciousness of agency also depends on how the generative engine apportions causal responsibility across time. When an outcome occurs, the system not only predicts its likelihood but also performs a kind of retrospective credit assignment, inferring which earlier internal states and actions were most responsible. This process uses time-symmetric inference: later results reshape beliefs about earlier decisions, motives, and even perceptions, so that the new trajectory remains coherent. Feelings of pride, guilt, or regret are tied to how strongly the model attributes an outcome to its own controllable variables versus external constraints. If the system concludes that a different choice would almost certainly have led to a better trajectory, then the sense of "I could have done otherwise" is intensified, even though that judgment is itself the result of updated inference rather than access to an objective alternate timeline.
Illusions of conscious will reveal how malleable this attribution can be. In experiments where actions are subtly manipulated or outcomes are prearranged, people can still report having freely chosen them, provided that the sequence fits their generative expectations of how intentional action should unfold. The brain infers a trajectory in which an intention leads to an action, which leads to an outcome; if the observed pattern matches this template within permissible noise, the system assigns authorship to the self. When discrepancies are too large or too frequent, the model may reassign causality: blaming external forces, malfunction, or other agents. This shows that the experience of being a free, acting subject is not a direct readout of physical causes but a high-level inference about which trajectory of events best explains both internal signals (urges, plans, motor commands) and external feedback.
Time symmetry also reframes the moral and existential stakes often attached to free will. If the generative engine is continuously reconstructing both past and future in order to preserve narrative coherence, then personal responsibility cannot hinge on a notion of an immutable, fully transparent past combined with a metaphysically unbound future. Instead, responsibility may be better understood in terms of how an agent's modeling capacities and policies evolve: whether it learns to represent salient alternatives, to anticipate the consequences of its actions, and to integrate new evidence into richer, more accurate priors. An agent that persistently ignores counterevidence, collapses its space of possibilities, or repeatedly chooses harmful trajectories despite accessible better ones is, on this account, exercising a diminished or distorted form of agency, even if its behavior remains strictly caused.
This has implications for how we think about change, growth, and self-transformation. Because the mind's internal history is itself plastic (subject to ongoing reinterpretation under new beliefs and values), freedom includes the capacity to revise not only future plans but also the narrative through which one understands past actions. Therapy, education, and self-reflection can all be seen as interventions on the generative engine: introducing new perspectives, evidence, and counterfactual scenarios that reshape the inferred trajectory of one's life and, in doing so, open up different regions of the future possibility space. The felt experience of "becoming a different person" then corresponds to a large-scale reconfiguration of priors over both past and future selves, enacted within the same time-symmetric inference architecture.
The absence of literal retrocausality does not diminish the depth of these implications. Physics can remain time-asymmetric at macroscopic scales while the cognitive machinery that navigates those scales is effectively time-symmetric in its representational operations. The mind cannot physically alter what has already occurred, but it can alter which past it takes as explanatory, which future it deems reachable, and how strongly it constrains the link between them. Consciousness and free will, on this view, are emergent properties of a Bayesian brain that is perpetually revising an internally coherent world-history and using that history to steer into preferred regions of its future, all within a generative engine whose fundamental computations do not privilege one direction of time over the other.
