Time-symmetric cortical computation starts from the premise that neural dynamics in the cortex can be organized so that information about the past and information about the future are treated on a more equal footing than in conventional feedforward models. Rather than viewing processing as a strictly causal cascade from sensory input to higher-level representation, the same circuitry can be interpreted as implementing constraints that must be satisfied both forward and backward in time. In this view, neural activity patterns are not just responses to incoming stimuli; they are solutions to a set of consistency conditions linking prior expectations, current evidence, and anticipated future states. These consistency conditions can be formalized using ideas from Bayesian inference, where priors and likelihoods jointly determine posterior beliefs that must remain coherent across time.
One way to conceptualize this is to treat cortical computation as a form of message passing on a factor graph that extends along the temporal dimension. Nodes correspond to latent causes and observable variables, while factors encode probabilistic relationships between them. Time symmetry arises when the underlying generative model allows the same factors to be used both to predict future observations from past causes and to retrodict past causes from future observations. If the biophysical substrate supports reversible or approximately reversible transformations of activity patterns, then messages can be exchanged in both temporal directions, enabling the cortex to reconcile what has been observed with what is expected, even when those expectations concern events that have not yet occurred.
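To make the scheme concrete, the sketch below runs sum-product message passing on a small chain-structured model, the simplest temporal factor graph. The transition and emission tables, the observation sequence, and all variable names are illustrative placeholders; the point is only that the same factors generate the forward (predictive) and backward (retrodictive) messages, whose product yields beliefs constrained from both temporal directions.

```python
import numpy as np

# Minimal sketch (not a cortical model): sum-product message passing on a
# chain factor graph over time. The same transition factor T is used both
# to predict forward and to retrodict backward, which is the time symmetry
# described above. All values are illustrative.

T = np.array([[0.9, 0.1],      # T[i, j] = p(z_{t+1}=j | z_t=i)
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],      # E[i, k] = p(x_t=k | z_t=i)
              [0.3, 0.7]])
obs = [0, 0, 1, 1]             # a short observation sequence
n, K = len(obs), 2

# Forward messages: past evidence -> present (prediction).
alpha = np.zeros((n, K))
alpha[0] = 0.5 * E[:, obs[0]]
for t in range(1, n):
    alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]

# Backward messages: future evidence -> present (retrodiction),
# built from the very same factors T and E.
beta = np.ones((n, K))
for t in range(n - 2, -1, -1):
    beta[t] = T @ (E[:, obs[t + 1]] * beta[t + 1])

# Posterior beliefs combine both temporal directions.
post = alpha * beta
post /= post.sum(axis=1, keepdims=True)
print(post)  # belief about each latent state given past AND future evidence
```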
Within this framework, predictive coding becomes a special case of a broader time-symmetric scheme. Classical predictive coding emphasizes top-down predictions and bottom-up prediction errors: higher areas send predictions of sensory inputs downward, and lower areas send mismatches upward. Under time symmetry, these messages can be reinterpreted as enforcing agreement between forward-in-time generative predictions and backward-in-time explanatory signals. Prediction errors no longer only signal unexpected inputs given the past; they also signal inconsistencies between what future states would imply about the present and what the present actually is. This dual role supports a richer error-correction process in which the cortex adjusts its activity to satisfy constraints that span past, present, and future.
Time-symmetric formulations naturally incorporate constraints from physics-inspired perspectives on computation. Many physical systems evolve according to laws that are symmetric under time reversal, even though macroscopic phenomena appear time-directed due to boundary conditions and thermodynamic considerations. Translating this to cortical computation, one can view the brain as implementing inference over trajectories that are constrained at multiple temporal boundaries, such as initial conditions set by past experiences and soft constraints imposed by goals, plans, or future rewards. Neural dynamics then approximate the most probable trajectories consistent with these boundary conditions, and message passing propagates information about both past evidence and future constraints throughout the network.
Mathematically, such a system can be framed as performing inference in a probabilistic graphical model defined over entire temporal sequences. Rather than inferring a sequence of hidden states using only causal filters that move forward in time, the cortex may approximate smoothing, in which information from the future refines estimates of earlier states. In a time-symmetric scheme, forward and backward passes are not separate computational phases but two aspects of the same ongoing process. At any moment, recurrent interactions allow activity to be influenced by signals that encode what has already happened as well as signals encoding what is likely to happen, yielding posterior beliefs that reflect evidence from both directions.
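The filtering-versus-smoothing distinction can be illustrated with a standard linear-Gaussian state-space model, using a Kalman filter for the causal pass and a Rauch-Tung-Striebel smoother for the backward refinement. Everything here (dynamics, noise levels, sequence length) is an arbitrary toy setting, not a claim about neural parameters.

```python
import numpy as np

# Sketch: filtering vs. smoothing in a 1-D linear-Gaussian state-space model.
# The smoother revises earlier state estimates using later observations,
# which is the computational content of the claim above.

rng = np.random.default_rng(0)
a, q, r, n = 0.95, 0.1, 0.5, 50   # dynamics, process noise, obs noise, length
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)

# Forward pass: Kalman filter (causal, uses only the past).
m_f = np.zeros(n); P_f = np.zeros(n)
m_f[0], P_f[0] = y[0], r
for t in range(1, n):
    m_pred, P_pred = a * m_f[t - 1], a**2 * P_f[t - 1] + q
    k = P_pred / (P_pred + r)
    m_f[t] = m_pred + k * (y[t] - m_pred)
    P_f[t] = (1 - k) * P_pred

# Backward pass: Rauch-Tung-Striebel smoother (uses the future as well).
m_s = m_f.copy(); P_s = P_f.copy()
for t in range(n - 2, -1, -1):
    P_pred = a**2 * P_f[t] + q
    g = a * P_f[t] / P_pred
    m_s[t] = m_f[t] + g * (m_s[t + 1] - a * m_f[t])
    P_s[t] = P_f[t] + g**2 * (P_s[t + 1] - P_pred)

print(np.mean((m_f - x)**2), np.mean((m_s - x)**2))  # smoothed error is lower
```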
This perspective also reshapes how priors are understood. Instead of being static parameters or fixed initial conditions, priors become temporally extended constraints that link states over time. A prior might specify not just the expected value of a variable at one moment but also how that variable tends to evolve, the smoothness of trajectories, or the coupling between different modalities across time. Time-symmetric message passing enforces these priors by ensuring that both forward predictions and backward explanations conform to the same temporal structure. For example, a strong prior on continuity in visual motion implies that future positions of an object constrain present beliefs about its current position, just as past positions do.
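As a toy illustration of a temporally extended prior, the sketch below computes the MAP trajectory under a Gaussian smoothness prior with sparse observations. Because the prior couples neighboring time steps, observations late in the sequence pull estimates of earlier, unobserved states, which is the sense in which future positions constrain present beliefs. All numbers and names are hypothetical.

```python
import numpy as np

# Sketch: a temporally extended prior. We infer a trajectory z_1..z_n under
# a Gaussian smoothness prior on increments plus noisy observations at only
# some times. The MAP solution is a linear system in which late observations
# pull earlier estimates, the "future constrains the present" effect.

n = 20
obs_idx = [0, 4, 15, 19]          # sparse observations, some in the "future"
obs_val = [0.0, 1.0, 4.0, 2.0]
lam, sig2 = 10.0, 0.1             # smoothness weight, observation variance

# Quadratic objective: lam * sum (z_{t+1}-z_t)^2 + sum (z_i - y_i)^2 / sig2
A = np.zeros((n, n)); b = np.zeros(n)
for t in range(n - 1):            # smoothness prior couples neighbors in time
    A[t, t] += lam;     A[t + 1, t + 1] += lam
    A[t, t + 1] -= lam; A[t + 1, t] -= lam
for i, v in zip(obs_idx, obs_val):
    A[i, i] += 1.0 / sig2; b[i] += v / sig2

z_map = np.linalg.solve(A, b)
print(np.round(z_map, 2))         # interior estimates are set jointly by
                                  # past AND future data points
```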
Recurrent connectivity in the cortex provides a natural substrate for these computations. Dense horizontal and feedback connections allow information to circulate through networks in ways that are not strictly hierarchical or feedforward. From a time-symmetric standpoint, these recurrent loops can be interpreted as implementing iterative adjustments that gradually reconcile forward and backward messages. Neural dynamics settle into attractor-like states that locally minimize a global inconsistency measure, such as a variational free energy or a sum of squared prediction errors defined over entire trajectories. Convergence of these dynamics corresponds to reaching a state where messages in both temporal directions are mutually consistent.
Crucially, time symmetry does not imply that subjective experience is temporally reversible or that physical causation is violated. Instead, the claim is that the inferential machinery that constructs our experience exploits information from both temporal directions whenever such information is available. For example, when hearing a spoken word, later phonemes help disambiguate earlier ones; the cortical representation of the first sound can be refined after subsequent sounds arrive. A time-symmetric formulation captures this by allowing backward-propagating messages from future observations to update beliefs about earlier latent states. The resulting interpretation is that the cortex continuously revises its understanding of recent events in light of new data, rather than committing irrevocably to a single forward pass.
Another foundational implication is that learning and inference become more tightly linked. If the cortex implements time-symmetric message passing, then the same circuitry used to infer hidden states can also be used to propagate credit assignment signals for learning. Error signals that reflect mismatches between predictions and outcomes can travel both forward and backward in time, shaping synaptic strengths to improve future performance. In effect, the brain's learning rules can be understood as adjusting the generative model so that the time-symmetric inferential process yields trajectories that better match the statistics of the environment and the organism's goals.
By grounding cortical computation in time-symmetric principles, one can reinterpret many canonical findings in systems neuroscience. Classical receptive fields, tuning curves, and hierarchical feature detectors may reflect only the causal, stimulus-driven aspect of a richer bidirectional process. The same neurons that appear to respond selectively to particular inputs may, in other contexts, carry signals that encode expectations, counterfactual outcomes, or constraints derived from future goals. Time-symmetric cortical computation thus provides a unifying lens through which perception, action, and cognition are seen as different manifestations of a single, temporally extended inferential process implemented by neural dynamics in the cortex.
Biophysical mechanisms enabling retrocausal signaling
Biophysical mechanisms that could enable retrocausal signaling in cortical tissue must satisfy at least three constraints simultaneously: they must be compatible with known neurophysiology, they must support bidirectional constraint satisfaction in time, and they must avoid violating macroscopic causality. Rather than invoking exotic physics at the level of single neurons, the focus shifts to how standard components (dendrites, synapses, glia, and neuromodulatory systems) can be organized so that information about putative future states shapes present neural dynamics in a lawful, time-symmetric way. This perspective interprets retrocausality not as information traveling faster than light, but as the implementation of inference over temporally extended trajectories using ordinary biophysical processes.
A first ingredient is the separation of timescales across cellular and network processes. Membrane potentials, dendritic plateau events, synaptic release probabilities, and short-term plasticity all operate on distinct timescales, from milliseconds to seconds. Longer timescales appear in calcium-dependent signaling cascades, kinase activity, gene expression, and structural remodeling of synapses, which can persist for minutes to hours or longer. When these processes are coupled in recurrent circuits, the present state of the cortex implicitly encodes constraints that originate both from recent past activity and from anticipated or learned regularities about the near future. From a time-symmetric message passing standpoint, slow biophysical variables act as boundary conditions that shape the evolution of faster variables, effectively allowing "future-oriented" constraints (such as expected outcomes or goals) to influence moment-to-moment spiking patterns without literal violation of causality.
Dendritic computation is especially important in this context. Pyramidal neurons exhibit complex dendritic trees with compartmentalized nonlinearities, including NMDA spikes, Ca2+ plateau potentials, and backpropagating action potentials. These mechanisms enable local integration of top-down and bottom-up inputs in spatially segregated dendritic branches. A time-symmetric interpretation proposes that apical tufts receive signals that encode future-oriented expectations or constraints derived from higher areas, while basal dendrites receive signals driven more directly by current sensory evidence. The neuron's output then reflects a compromise between these influences, such that its firing represents a locally consistent solution to constraints from both past-derived evidence and future-oriented priors. Apical-basal interactions effectively implement a microcircuit-level form of predictive coding in which dendritic compartments compute mismatches not only between sensory input and immediate prediction but also between present state and what future states "want" the present to be.
Backpropagating action potentials (bAPs) illustrate how information about a neuron's future spiking can shape synaptic integration at earlier synapses, in a way that superficially resembles retrocausal influence. After a somatic spike, the bAP travels back into the dendritic arbor and modulates local membrane potential, calcium concentration, and synaptic plasticity rules. This means that the neuron's later output, the spike, feeds back to alter the conditions that governed its earlier integration of inputs. When embedded in dense recurrent circuits, such feedback can support iterative refinement of synaptic efficacy and subthreshold integration so that current responses become better aligned with patterns that will, on average, be behaviorally successful in the future. In a time-symmetric formalism, bAP-mediated modulation acts as a local mechanism for enforcing consistency between forward-driving inputs and backward-propagated credit assignment or constraint signals.
Spike timing-dependent plasticity (STDP) offers another lever for time-symmetric computation. Standard STDP rules are temporally asymmetric: presynaptic spikes followed shortly by postsynaptic spikes lead to long-term potentiation, while the reverse ordering leads to depression. However, variants of STDP exhibit more complex temporal kernels and can be modulated by neuromodulators, dendritic location, and firing context. When interpreted through the lens of Bayesian inference over trajectories, STDP can be seen as sculpting synaptic weights so that the network's dynamics approximate a time-symmetric solution to a credit assignment problem. Causal correlations (pre-before-post) are rewarded, but anti-causal correlations (post-before-pre) are not simply erased; they are transformed into signals that reduce weights or adjust temporal alignment, shaping the network so that future spikes become more predictive of useful outcomes and less responsive to spurious coincidences. Over learning, this produces circuits whose internal dynamics implicitly encode both the direction of environmental causation and the structure of future rewards or constraints, thus embedding a form of retrocausal information about what outcomes matter.
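For reference, the following sketch implements the standard pair-based STDP kernel described above, with an asymmetric exponential window; the amplitudes, time constants, and spike trains are illustrative rather than measured values.

```python
import numpy as np

# Sketch of a pair-based STDP rule with the asymmetric temporal kernel
# described above: pre-before-post potentiates, post-before-pre depresses.

A_plus, A_minus = 0.01, 0.012     # LTP / LTD amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0  # kernel time constants (ms)

def stdp_dw(dt):
    """Weight change for one pre/post pair; dt = t_post - t_pre (ms)."""
    if dt > 0:   # pre precedes post: causal correlation -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:        # post precedes pre: anti-causal correlation -> depression
        return -A_minus * np.exp(dt / tau_minus)

pre_spikes = [10.0, 60.0, 110.0]
post_spikes = [15.0, 55.0, 130.0]
w = 0.5
for t_pre in pre_spikes:          # all-to-all pairing over the spike trains
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
print(round(w, 4))
```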
Short-term synaptic plasticity, including facilitation and depression, adds another layer of temporal structure. Synaptic efficacy at any moment reflects not only instantaneous firing but a short history of presynaptic activity. This dependence effectively creates a local memory that blurs the distinction between past and present. In recurrent networks, the pattern of short-term plasticity across connections shapes how current spikes are filtered, gating which input sequences can lead to sustained reverberation and which are rapidly quenched. Time-symmetric message passing can exploit these dynamics by tuning short-term plasticity profiles so that the network differentially amplifies trajectories that are consistent with both prior experience and anticipated outcomes. In other words, synapses become tuned to propagate spikes that are embedded in sequences likely to lead to desirable future states, thereby giving present activity an implicit dependence on future-oriented statistics.
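A common formalization of these dynamics is the Tsodyks-Markram model, sketched below in one of its standard variants: a depression resource and a facilitation variable jointly determine the efficacy of each spike as a function of recent presynaptic history. Parameter values are illustrative.

```python
import numpy as np

# Sketch of the Tsodyks-Markram short-term plasticity model: efficacy at
# each spike depends on the recent history of presynaptic activity via a
# depression resource x and a facilitation variable u.

U, tau_d, tau_f = 0.2, 200.0, 600.0   # baseline release, recovery times (ms)
spike_times = [0.0, 20.0, 40.0, 60.0, 300.0]

x, u, t_prev = 1.0, U, None
for t in spike_times:
    if t_prev is not None:
        dt = t - t_prev
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)   # resources recover
        u = U + (u - U) * np.exp(-dt / tau_f)       # facilitation decays
    u = u + U * (1.0 - u)     # facilitation jumps at each spike
    eff = u * x               # effective synaptic strength for this spike
    x = x * (1.0 - u)         # resources consumed by release
    t_prev = t
    print(f"t={t:5.0f} ms  efficacy={eff:.3f}")
```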
Neuromodulatory systems further extend the possible mechanisms for retrocausal-like influence. Dopaminergic, serotonergic, noradrenergic, and cholinergic projections provide diffuse signals related to reward prediction error, uncertainty, arousal, and contextual relevance. These signals often arrive after a behavioral event, reflecting evaluation of outcomes that depend on prior actions and percepts. Yet they reshape ongoing synaptic plasticity and excitability in ways that affect how similar patterns will be processed in the future. A time-symmetric interpretation sees neuromodulators as implementing delayed boundary conditions: the organism's evaluation of a trajectory at a later time is fed back into earlier synaptic configurations, effectively altering the "initial conditions" for future encounters with similar situations. When such evaluation signals are integrated into recurrent cortical loops, they help adjust the network so that its intrinsic dynamics encode not merely the causal structure of the environment but also the constraints imposed by possible future consequences.
At the network level, bidirectional axonal projections between cortical areas provide the structural substrate for signals that function as forward and backward messages in a probabilistic graphical model. Feedforward connections from lower to higher areas often convey information that is more closely tied to current sensory inputs, whereas feedback connections convey expectations, contextual priors, and planned actions. Biophysically, these projections target different layers and dendritic compartments, with feedforward inputs often synapsing onto middle layers and basal dendrites, and feedback inputs preferentially contacting apical tufts and superficial layers. This anatomical arrangement naturally supports a division of labor in which forward messages carry evidence, and backward messages carry constraint signals that can be interpreted as retrocausal in the computational sense. The emergent activity of the cortex then reflects the equilibrium of these interacting influences, embodying a time-symmetric solution in which present firing patterns minimize global inconsistencies with both past data and future-oriented constraints.
Oscillations and phase relationships across frequency bands add an important dynamical dimension. Gamma-band activity is frequently associated with feedforward processing, while beta and alpha bands are often linked to feedback and top-down influence. Cross-frequency coupling, such as gamma bursts nested in slower beta cycles, allows different temporal scales of information to interact. Within a time-symmetric framework, these layered oscillations can be understood as carriers for distinct components of the message passing scheme: faster rhythms convey local, immediate prediction errors, while slower rhythms encode temporally extended priors and future-oriented constraints that modulate the gain or timing of local circuits. When the network enters a coherent oscillatory regime, it effectively coordinates the flow of information in both temporal directions: future-oriented signals bias which prediction errors are amplified, and error signals in turn prune or refine the future constraints that are allowed to influence ongoing processing.
Astrocytes and other glial cells provide additional mechanisms for temporally extended coordination. Astrocytic calcium waves can propagate over hundreds of micrometers on timescales of seconds, modulating synaptic transmission, synaptic plasticity, and local blood flow. Such slow, spatially extended signals can encode contextual information that outlasts the transient spikes of individual neurons. In the context of time-symmetric computation, glial modulation can be seen as setting a slowly varying background that reflects integrated information about recent and anticipated neural activity. This background biases which neural trajectories are energetically favored and which are suppressed, effectively encoding soft constraints on how the network can evolve. By adjusting the metabolic and synaptic environment, astrocytes help ensure that neural dynamics gravitate toward trajectories that are consistent with both the organism's learned expectations and its prospective goals.
Energy constraints themselves form a crucial biophysical boundary condition. The cortex operates under tight metabolic limits, and different patterns of activity carry different energetic costs. Neurons and glia employ mechanisms such as activity-dependent regulation of ion channel densities, synaptic scaling, and local blood flow adjustments to keep energy use within acceptable bounds. When these mechanisms are factored into a time-symmetric perspective, they function as physical priors on neural trajectories: paths that would require excessive or unstable energy expenditure are strongly disfavored. As a result, the trajectory that the network actually follows is not just the one that best explains past sensory inputs but also one that is compatible with sustainable future energy usage. This interplay between inference and energy regulation can give present neural dynamics an implicit dependence on the anticipated metabolic consequences of future states, thereby resembling a form of retrocausal constraint.
At the microcircuit level, local inhibitory interneurons play a key role in sculpting these dynamics. Different classes of interneurons (parvalbumin-positive, somatostatin-positive, and VIP interneurons) target distinct subcellular compartments and participate in diverse oscillatory patterns. Through feedforward, feedback, and lateral inhibition, they impose competition and normalization that effectively limit the set of neural trajectories the network can realize. When such inhibitory circuitry is tuned through experience-dependent plasticity, it comes to embody expectations about which patterns of activation are likely to be behaviorally relevant in the future. In a time-symmetric message passing interpretation, inhibition functions as a constraint that ensures the compatibility of current representations with both past statistics and anticipated demands, pruning trajectories that would contradict future-oriented priors.
Importantly, none of these biophysical mechanisms requires that information literally travel from the future to the past. Instead, the apparent retrocausality emerges from how the system learns and stores regularities about temporal structure, and how those stored regularities are deployed in real time. By distributing information over multiple timescales (fast spiking, intermediate plasticity, slow modulatory changes), the cortex encodes a compressed representation of both historical data and likely future contingencies. When activity unfolds within this multi-timescale substrate, present neural dynamics are constrained not only by what just happened but also by what has, over the course of learning, been predictive of specific future outcomes. In computational terms, the network is performing smoothing rather than pure filtering, approximating an inference over entire trajectories even though it only has access to the present moment.
Viewed this way, retrocausal signaling in cortical tissue is best understood as a property of the joint organization of synapses, dendrites, glia, and neuromodulators, rather than as a single exotic mechanism. Each biophysical element contributes a piece of the temporal puzzle: dendrites separate and recombine past and future influences, STDP and other plasticity rules embed expectations about outcomes into the wiring, neuromodulators provide delayed evaluations that reshape earlier processing, astrocytes and energy constraints add slow boundary conditions, and oscillatory dynamics coordinate the real-time exchange of constraint and error signals. Together, these mechanisms allow the cortex to implement a time-symmetric form of predictive coding, in which apparent retrocausal effects arise from inference over temporally extended generative models grounded in ordinary, though richly organized, neurobiology.
Network architectures for time-symmetric message passing
Network architectures that realize time-symmetric message passing must distribute forward and backward influences across space, time, and scale while remaining compatible with known anatomy of the cortex. Rather than layering an explicit second "retrocausal" pathway on top of the standard feedforward hierarchy, time symmetry can emerge from how recurrent loops, laminar microcircuits, and long-range feedback are organized to support constraint satisfaction over trajectories. The same physical connections that carry causal sensory information can also transmit signals encoding how future constraints and goals shape the interpretation of current inputs. Architectures inspired by factor graphs, energy-based models, and predictive coding provide concrete blueprints for how such a system may be implemented neurally.
One useful organizing principle is to treat each cortical area as a local factor node in a larger graphical model and its constituent microcircuits as variable nodes encoding latent causes and observable features. In this scheme, message passing corresponds to the exchange of activity patterns along feedforward, feedback, and lateral pathways. Forward messages approximate likelihood information: how compatible are current spikes with candidate causes? Backward messages carry information analogous to priors and constraints derived from broader context and future goals. Time symmetry arises when the update rules implemented by neural dynamics ensure that, at convergence, the same network configuration can be interpreted either as resulting from forward propagation of sensory evidence or from backward propagation of constraint information. This dual interpretation is characteristic of energy-based models, where equilibrium states minimize a global cost function symmetrically defined over all variables and times.
Within a single cortical column, laminar organization lends itself naturally to such architectures. Thalamic and lower-area inputs arrive predominantly in layer 4 and lower layer 3, while feedback from higher areas targets layer 1 and apical tufts of pyramidal neurons in layers 2/3 and 5. Local recurrent connections within superficial and deep layers, together with interlaminar projections, form loops that can implement iterative inference. In a time-symmetric message passing framework, superficial layers can be viewed as encoding rapidly updated prediction errors and short-horizon beliefs, while deep layers encode slower, temporally extended beliefs that integrate information about anticipated outcomes. Activity flowing from deep to superficial layers acts like a backward-in-time message, imposing consistency between the current representation and trajectories deemed likely or valuable in the future. Conversely, superficial-to-deep projections update long-range beliefs in light of recent evidence, ensuring that future-oriented constraints remain grounded in actual sensory experience.
Predictive coding architectures provide a more specific instantiation. In classical predictive coding, each level of a hierarchy contains two populations: representation units that encode estimates of latent variables, and error units that encode mismatches between top-down predictions and bottom-up inputs. Time-symmetric message passing extends this by equipping representation units with dynamics that depend on both past and anticipated future states, and by allowing error units to signal discrepancies not only between present predictions and inputs but also between present states and the implications of likely future observations. Practically, this can be achieved by coupling prediction errors across time steps, so that an error at time t depends on predictions originating from both t-1 and t+1. Biophysically, this coupling may be realized through recurrent circuits that retain short-term memory of previous errors and that are biased by feedback encoding expectations about upcoming events, effectively embedding a temporal smoothing operation into local circuitry.
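A minimal version of this temporal coupling can be written as gradient dynamics on a trajectory-level energy, as in the sketch below: each latent state is pulled by its observation, by the prediction from the previous state, and by the consistency demand coming from the next state. The model, step size, and variances are assumptions chosen for stability of the toy example, not biological estimates.

```python
import numpy as np

# Sketch: trajectory-level predictive coding. Each latent state z_t relaxes
# by gradient descent on a squared-error energy with three pulls: fit to its
# observation, consistency with the prediction from t-1, and consistency
# with what z_{t+1} implies about the present (the backward constraint).

a, sig_o, sig_d = 0.9, 0.5, 0.5            # dynamics gain, variances
lr, steps = 0.05, 2000
y = np.array([0.0, 0.1, np.nan, np.nan, 1.0])   # two moments unobserved
n = len(y)
z = np.zeros(n)
obs = ~np.isnan(y)

for _ in range(steps):
    e = np.zeros(n)
    e[1:] = z[1:] - a * z[:-1]             # forward prediction errors
    grad = np.zeros(n)
    grad[obs] += (z[obs] - y[obs]) / sig_o # evidence term
    grad[1:] += e[1:] / sig_d              # error inherited from t-1
    grad[:-1] += -a * e[1:] / sig_d        # constraint arriving from t+1
    z -= lr * grad

print(np.round(z, 3))  # unobserved states interpolate between past and future
```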
Recurrent neural networks with attractor dynamics offer another natural substrate. In such networks, fixed-point or low-dimensional manifold attractors represent consistent interpretations of sensory data and goals. When the connectivity matrix is shaped by learning to approximate the inverse covariance structure of latent variables over time, the resulting attractor landscape reflects both the typical sequences of states encountered in the environment and those associated with successful outcomes. Time-symmetric message passing occurs as the network relaxes toward an attractor: perturbations corresponding to new sensory inputs propagate through the recurrent connectivity, while feedback from higher-order representations or neuromodulatory states biases the trajectory toward attractors consistent with desired futures. The same relaxation process can be read as integrating information forward in time, accumulating evidence, or backward in time, revising recent states to match what eventual outcomes imply should have been the case.
Architectures inspired by bidirectional recurrent networks and Kalman smoothers make this even more explicit. In engineered systems, bidirectional RNNs pass information forward and backward across a sequence to compute context-sensitive representations of each time step. Their biological analog in cortex may consist of chain-like assemblies of columns connected both in the anatomical feedforward direction (e.g., along a sensory pathway) and in reverse via feedback and lateral projections. When sensory input arrives, activity sweeps rapidly forward, generating an initial feedforward interpretation. Shortly thereafter, slower feedback signals carry information about higher-level context, expectations, and goals back along the chain, modulating earlier representations. Because these signals persist and interact through local recurrent loops, each column's state gradually converges to a representation that incorporates constraints from both earlier and later events within the same temporal window, implementing an approximate smoothing operation consistent with Bayesian inference over trajectories.
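The engineering analogy can be made explicit with a minimal bidirectional recurrent network, in which each time step's representation concatenates a forward-sweep hidden state with a backward-sweep hidden state. The weights below are random placeholders standing in for learned parameters.

```python
import numpy as np

# Minimal bidirectional-RNN sketch (engineering analogy, not a cortical
# model): the code for time t depends on both earlier and later inputs
# because it combines a forward and a backward hidden state.

rng = np.random.default_rng(1)
d_in, d_h, T = 3, 4, 6
W_f = rng.normal(0, 0.5, (d_h, d_h)); U_f = rng.normal(0, 0.5, (d_h, d_in))
W_b = rng.normal(0, 0.5, (d_h, d_h)); U_b = rng.normal(0, 0.5, (d_h, d_in))
x = rng.normal(0, 1.0, (T, d_in))

h_f = np.zeros((T, d_h))                 # forward sweep: past -> present
for t in range(T):
    prev = h_f[t - 1] if t > 0 else np.zeros(d_h)
    h_f[t] = np.tanh(W_f @ prev + U_f @ x[t])

h_b = np.zeros((T, d_h))                 # backward sweep: future -> present
for t in range(T - 1, -1, -1):
    nxt = h_b[t + 1] if t < T - 1 else np.zeros(d_h)
    h_b[t] = np.tanh(W_b @ nxt + U_b @ x[t])

h = np.concatenate([h_f, h_b], axis=1)   # context-sensitive code per step
print(h.shape)                           # (6, 8)
```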
To maintain time symmetry without explicit separation into forward and backward passes, some network architectures rely on local update rules that are themselves reversible or nearly so. For example, networks inspired by Hamiltonian or symplectic dynamics use pairs of conjugate variables (such as position-like and momentum-like quantities) to encode state and constraint information. Within a cortical implementation, distinct but coupled neural populations could play analogous roles: one population encoding current beliefs about features or latent causes, and another encoding generalized prediction errors or constraint forces. Neural dynamics then approximate a leapfrog-like integration scheme in which belief and error populations update in alternating steps, each update being locally reversible under certain conditions. From a global perspective, this leads to trajectories in state space that preserve an approximate time-reversal symmetry, enabling information encoded at later "moments" to be reconstructed from earlier configurations and vice versa.
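The reversibility property at stake is easy to demonstrate with a leapfrog integrator on a toy Hamiltonian system: running the dynamics forward, negating the momentum, and running them again recovers the initial state to numerical precision. The harmonic oscillator below is a stand-in for coupled belief and error populations, not a circuit model.

```python
import numpy as np

# Sketch: leapfrog integration of H(q, p) = U(q) + p^2/2 with U(q) = q^2/2.
# The point is time-reversal symmetry: forward integration followed by a
# momentum flip and another forward integration retraces the trajectory.

def leapfrog(q, p, n_steps, eps, grad_U=lambda q: q):
    p = p - 0.5 * eps * grad_U(q)          # half step on momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                    # full step on position
        p = p - eps * grad_U(q)            # full step on momentum
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)          # final half step on momentum
    return q, p

q0, p0 = 1.0, 0.3
q1, p1 = leapfrog(q0, p0, n_steps=100, eps=0.05)
q2, p2 = leapfrog(q1, -p1, n_steps=100, eps=0.05)   # reverse the momentum
print(q2 - q0, -p2 - p0)   # both differences are ~0: the dynamics retrace
```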
Hierarchical architectures must also reconcile local time-symmetric processing with large-scale directional organization. Sensory pathways exhibit a general progression from lower-order to higher-order areas, while motor pathways run in the opposite direction from prefrontal and premotor areas to spinal outputs. Time-symmetric message passing can be accommodated by interpreting this apparent directionality as reflecting boundary conditions rather than fundamental asymmetry of computation. Lower areas anchor the hierarchy to current sensory inputs (past-constrained boundary), whereas higher areas anchor it to goals, rewards, and action plans (future-constrained boundary). Activity in intermediate levels is then shaped by messages originating from both ends: bottom-up signals carrying evidence and top-down signals encoding desired or expected future states. The resulting architecture behaves like a distributed constraint solver whose solution corresponds to a consistent trajectory bridging sensory history and intended outcomes.
An explicit role for action and embodiment further enriches these architectures. When motor commands are considered part of the generative model, the network must infer not only hidden causes of sensory input but also action sequences that lead to desired future states. Architecturally, this can be achieved by coupling sensory hierarchies to motor hierarchies via shared latent variables representing inferred goals and affordances. Time-symmetric message passing then occurs across both perception and action pathways: forward messages propagate predicted sensory consequences of candidate actions, while backward messages convey what future rewards and constraints imply about which actions must have been taken and which sensory interpretations are most plausible. The motor cortex and associated basal ganglia-thalamocortical loops thus participate in a broader inference process in which present motor commands are simultaneously evidence for past intention and determinants of future outcomes.
To implement these computations robustly, architectures often incorporate specialized "hub" or association regions that integrate time-symmetric information across modalities and timescales. Prefrontal and parietal cortices, with their rich long-range connectivity, are well placed to serve as such hubs. They can host higher-order latent variables representing abstract context, rules, or task structure, which act as slowly varying priors on the dynamics of sensory and motor areas. Messages from these hubs to lower regions bias interpretation of ambiguous stimuli in ways that are consistent not only with recent history but also with future goals and contingencies, while feedback from lower regions updates these high-level variables in light of new evidence. In network terms, hubs maintain a coarse-grained description of trajectories over longer horizons, providing boundary conditions that induce effective retrocausality at finer-grained levels.
Another key architectural motif involves gating and routing mechanisms that determine which messages are allowed to influence a given region at a given time. Thalamus, basal ganglia, and neuromodulatory systems can be viewed as control structures supervising the flow of time-symmetric information. For instance, thalamic relays can dynamically gate whether a cortical area is more influenced by ascending sensory evidence, descending expectations, or internally generated simulations. In a time-symmetric framework, this gating determines the effective temporal depth of inference: when feedback is strongly gated in, local representations are more heavily constrained by anticipated future states; when feedback is gated out, processing behaves more like a causal forward pass. Similarly, neuromodulatory signals can modulate the gain of error units versus representation units, effectively shifting the network between exploration (allowing multiple futures to shape the present) and exploitation (committing to a particular inferred trajectory).
From a computational standpoint, such architectures can be formalized as performing approximate Bayesian inference in state-space models with both past- and future-conditioned priors. Inference algorithms such as expectation propagation, belief propagation on loopy graphs, or variational message passing provide templates for neural implementation. Each cortical microcircuit approximates marginalization over a subset of variables, while long-range connectivity encodes conditional dependencies. Time symmetry appears when the same structural motifs (recurrent loops, excitation-inhibition balances, laminar feedback) are responsible for transmitting both likelihood-like and prior-like information, with no hard-wired distinction between causal and retrocausal pathways. What changes is the boundary condition: during perception dominated by external input, messages are anchored more strongly to recent sensory evidence; during planning or imagination, messages are anchored more strongly to desired or simulated future states, yet the underlying architecture remains the same.
Crucially, these network architectures do not require perfect reversibility of neural dynamics. Noise, metabolic constraints, and structural asymmetries ensure that real cortical computation is only approximately time-symmetric. However, by organizing connectivity so that information about likely future states is richly embedded in synaptic weights and recurrent loops, the cortex can leverage a functional form of retrocausality: present activity is shaped by patterns that, historically, have predicted valuable future outcomes. Architectures built around dense recurrence, laminar feedback, hierarchical hubs, and flexible gating make it possible for the same network to behave like a forward filter, a backward smoother, or a trajectory-level constraint solver, depending on the task and context. In all cases, time-symmetric message passing emerges as the unifying principle that organizes how information is routed, integrated, and stabilized across the distributed circuitry of the brain.
Implications for learning, prediction, and perception
Time-symmetric message passing reconfigures how learning is understood in the cortex by tightly coupling inference and credit assignment over extended time windows. In conventional models, learning is driven primarily by errors that compare what actually happened to what was predicted based on the past. In a time-symmetric framework, the relevant error signal reflects mismatches at the level of trajectories: which combinations of past states and actions are inconsistent with both observed outcomes and the spectrum of desirable futures. This shifts learning from fitting one-step input-output mappings toward refining an internal generative model that supports accurate smoothing in time, allowing present representations to be retrospectively and prospectively optimized. Synaptic changes are then best seen as adjustments to a generative model over trajectories, with each modification encoding refined beliefs about how current states are linked to both previous causes and future consequences.
Under such a view, synaptic plasticity becomes a vehicle for embedding temporally extended priors directly into network structure. Rather than encoding only static associations, synapses encode expectations about how patterns unfold: which sequences are likely, which transitions carry reward, and which trajectories should be avoided. Neural dynamics in recurrent circuits express these priors by shaping which trajectories are attractor-like and which are unstable or energetically costly. Learning modifies these dynamics so that forward evolution of activity tends increasingly to land on trajectories that are compatible with both environmental regularities and previously successful outcomes. At the same time, backward-propagating constraint signals tune synapses to make it easier to retrospectively infer which earlier states would have been most plausible given eventual results, improving the fidelity of reconstruction and thereby the quality of future predictions.
This coupling of backward and forward constraints has direct implications for how prediction operates on behavioral timescales. In the time-symmetric picture, prediction is not a single operation performed ahead of time; it is a continuous process in which the cortex maintains a distribution over possible futures that is dynamically reshaped as new evidence arrives. Message passing across levels and along temporal chains constantly updates which futures remain compatible with both the recent past and the organism's goals. When an unexpected event occurs, it does not only revise beliefs about upcoming events; it also triggers a partial re-interpretation of what just happened, as backward messages adjust earlier latent states to maintain coherence. This can explain why subjective perception of an event sometimes appears to depend on context that only becomes available afterward, as in certain auditory and visual illusions where later elements in a sequence alter how earlier ones were experienced.
Such context-sensitive re-interpretation plays a prominent role in perception. In predictive coding models extended to include time symmetry, perceptual content at any instant is the result of a compromise between three pressures: fidelity to current sensory input, consistency with learned regularities of the past, and compatibility with anticipated future structure. For example, in speech perception, the brain must infer phonemes and words from noisy signals. Time-symmetric message passing allows later phonemes and even the overall sentence structure to send backward constraints that refine representations of earlier sounds. This can make perception more robust to transient noise or ambiguity: if an early sound is ambiguous between multiple phonemes, but later context strongly favors one interpretation, the backward messages resolve the ambiguity retroactively, aligning the entire trajectory of inferred states with a coherent lexical and semantic interpretation.
Visual perception exhibits analogous phenomena. The ability to perceive smooth motion and coherent object trajectories despite occlusions and noise can be understood as inference over spatiotemporal trajectories constrained at multiple points. When an object briefly disappears behind an occluder and then reappears, the cortex infers a continuous path that links pre- and post-occlusion positions. In a time-symmetric formulation, the reappearance acts as a boundary condition that propagates a constraint backward to fill in the most plausible trajectory during the hidden interval. Neural dynamics in motion-sensitive and object-selective areas settle into patterns that are consistent with both the initial glimpse and the eventual reappearance, rather than with any single forward extrapolation. Learning tunes the underlying generative model so that such interpolations align with physical regularities like inertia and object permanence, effectively encoding a prior over smooth, causally reasonable motion.
The impact on learning is particularly pronounced when considering tasks that involve delayed outcomes, such as reinforcement learning in naturalistic settings. Most real-world rewards are not immediate; actions and percepts that occur long before an outcome must somehow acquire credit or blame. Time-symmetric message passing offers a natural substrate for solving this temporal credit assignment problem. Outcome-related signals can be conceptualized as imposing future boundary conditions that propagate backward through the network, reshaping activity patterns and synaptic strengths associated with earlier states. Instead of relying solely on scalar value propagating via incremental temporal-difference learning, the cortex could deploy structured constraint signals that specify which features and latent states along a trajectory were most responsible for the eventual outcome. This supports more precise and sample-efficient learning, as credit is assigned not just to whole states or actions but to specific inferred causes within them.
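The contrast with incremental temporal-difference learning can be seen in a toy chain task: after a single rewarded episode, one-step TD(0) has updated only the state adjacent to the reward, whereas a backward sweep over the completed trajectory (here, simply computing discounted returns) assigns credit along the entire path at once. This is a deliberate simplification of the structured constraint signals proposed above; all parameters are arbitrary.

```python
import numpy as np

# Sketch contrasting incremental TD(0) with trajectory-level credit
# assignment on a 5-state chain whose final state yields reward 1.

n_states, alpha, gamma = 5, 0.5, 0.9
traj = list(range(n_states))            # one episode: s0 -> s1 -> ... -> s4
reward = [0, 0, 0, 0, 1]

# Incremental TD(0) over a single episode: only the state adjacent to the
# reward gets updated, because value must creep backward one step per episode.
v_td = np.zeros(n_states)
for t in range(n_states - 1):
    s, s_next = traj[t], traj[t + 1]
    v_td[s] += alpha * (reward[t + 1] + gamma * v_td[s_next] - v_td[s])

# Trajectory-level (backward) credit: sweep from the outcome toward the
# start, so every state on the path is constrained by the final result.
v_bw = np.zeros(n_states)
g = 0.0
for t in range(n_states - 1, -1, -1):
    g = reward[t] + gamma * g
    v_bw[traj[t]] = g

print("TD(0) after 1 episode:    ", np.round(v_td, 3))
print("backward sweep, 1 episode:", np.round(v_bw, 3))
```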
At the level of predictive habits and skills, such as fluent reading or skilled movement, time-symmetric learning can produce representations that anticipate entire sequences rather than isolated steps. As practice accumulates, priors over common trajectories become sharper and more entrenched, guiding neural dynamics to pre-activate likely upcoming elements. In reading, this manifests as prediction of forthcoming words based on context; in motor skills, it manifests as anticipatory activation of muscles that will be needed several steps into a movement. Crucially, these predictions are not simply pushed forward in time; they also reshape the interpretation of earlier components. When a skilled pianist hears or plays the first few notes of a familiar piece, the brain's expectation of the entire phrase informs how it segments and emphasizes the initial notes, leading to perception and performance that reflect the structure of the anticipated whole sequence.
Perceptual stability over short time windows can be reinterpreted as an emergent property of time-symmetric smoothing. The brain must maintain coherent percepts despite noise, saccades, and rapid changes in input. Rather than committing immediately to each instantaneous snapshot, neural circuits can integrate information over windows that extend both backward and forward in time relative to any given moment. Within such a window, the most probable trajectory of latent states is inferred, and the percept at each point reflects its position in that trajectory. This has the consequence that rapid, isolated perturbations in input may be downweighted if they are inconsistent with both the preceding and subsequent context, contributing to the subjective experience of a world that is stable and predictable despite the underlying volatility of sensory signals.
The same principles affect how uncertainty is encoded and resolved. In a purely forward-looking model, uncertainty at time t is determined mostly by the information content of past inputs and the noise in the generative model. In a time-symmetric framework, uncertainty is also shaped by how many of the possible futures remain compatible with the current state. If a current configuration leads to many divergent, low-probability futures, backward constraint messages from those futures can increase uncertainty about the present, prompting exploratory behavior, increased attentional gain, or recruitment of additional sensory resources. Conversely, when a current configuration tightly constrains likely outcomes, backward messages from those futures can sharpen present representations, effectively collapsing uncertainty earlier than would be possible under a purely forward filter. This dynamic interplay between forward evidence and backward constraint may underlie subjective feelings of confidence or doubt that emerge before outcomes are fully determined.
Time-symmetric predictive coding also reshapes notions of error and surprise. Traditional formulations focus on prediction errors that compare expected and observed input at the next time step. When inference operates over entire trajectories, an event can be surprising not only because it violates a short-term prediction, but also because it destroys the coherence of a longer inferred storyline linking recent and anticipated states. For instance, an unexpected reversal in a visual sequence or narrative can force wholesale reconfiguration of latent variables across an extended interval, not just at the moment of the surprise. Neural signatures of such global reconfiguration, such as large-scale network resets or transient desynchronization, may reflect the system abandoning one trajectory-level hypothesis in favor of another that better reconciles the new evidence with both reinterpreted past and revised future expectations.
These considerations imply that learning and perception are fundamentally intertwined through a constant negotiation between filtering and smoothing. Each new episode of perception is an opportunity to refine the generative model that bridges past and future. When current inputs align well with previously learned trajectories, inference proceeds with minimal adjustment, and learning is modest. When they do not, time-symmetric message passing exposes precisely where along a hypothesized trajectory the mismatch is greatest. Synaptic changes then act to either modify the prior trajectories (broadening or shifting them to accommodate the new data) or to carve out new trajectories in state space that better explain the expanded range of experiences. Over developmental timescales, this results in increasingly rich internal models that can support flexible perception, robust prediction, and adaptive behavior across diverse contexts.
Perception of agency and volition can also be viewed through this lens. The subjective sense that "I caused this outcome" depends on a match between inferred internal states leading up to an action and the eventual sensory consequences. Time-symmetric processing allows outcome information to propagate backward, refining the inferred intention and motor command sequence that preceded it. When backward constraints can identify a coherent, high-probability trajectory that links internal states to observed outcomes, the system infers agency; when no such trajectory is plausible, the event is experienced as externally caused. Learning gradually tunes the generative model so that self-generated actions occupy a distinct region of trajectory space, with recognizable temporal signatures that are retrospectively and prospectively coherent, providing a computational basis for stable sense of self and control.
In sum, embedding learning, prediction, and perception within a time-symmetric framework transforms them from sequential stages into different aspects of one ongoing inferential process. Neural dynamics in the cortex continuously solve a trajectory-level constraint problem, adjusting beliefs about past states while forecasting future ones, and modifying synaptic structure so that future episodes of inference become more accurate and more aligned with the organism's values. This picture accommodates a wide range of empirical phenomena, from backward masking and postdictive illusions to delayed reinforcement learning and skill acquisition, within a single computational principle grounded in message passing over extended temporal graphs.
Experimental predictions and future directions
Translating a time-symmetric message passing framework into empirical science demands concrete behavioral, neurophysiological, and computational tests that distinguish it from conventional strictly feedforward or purely predictive coding accounts. The core claim is that neural dynamics in the cortex approximate smoothing rather than mere filtering: present activity reflects constraints from both earlier and later events within a temporal window. Experimental designs must therefore manipulate information that arrives after a target event and ask whether neural and behavioral signatures of that earlier event are measurably reshaped in a way best explained by bidirectional inference.
One broad class of predictions concerns postdiction and temporal integration. Many psychophysical illusions already suggest that later stimuli modify perception of earlier ones, but the time-symmetric view makes stronger, more structured claims. It predicts that when later context disambiguates an earlier ambiguous stimulus, the neural representation of that earlier stimulus in sensory cortex should be updated retroactively, not just in higher association regions. High-temporal-resolution measurements such as MEG, EEG, and laminar-resolved electrophysiology can test this by examining whether activity patterns corresponding to a stimulus at time t are modulated after disambiguating input at t+Δ, in ways that correspond to the later-established interpretation rather than the initial ambiguity. Multivariate pattern analysis can be used to decode which interpretation is encoded at successive time points, probing whether the "decoded" representation of the earlier stimulus changes after later evidence appears.
Backward masking paradigms offer a particularly tractable test bed. In classical backward masking, a target stimulus is rendered less reportable by a high-contrast mask that follows it. Time-symmetric models predict that the mask not only interrupts feedforward propagation but also imposes strong future-oriented constraints that reshape the inferred trajectory of states bridging target and mask. A key prediction is that early cortical responses to the target (for instance, in primary visual cortex) will diverge across conditions where the later mask supports versus conflicts with the most likely continuation of the target's content. By systematically varying how predictable the mask is from the target (e.g., congruent versus incongruent orientation, motion, or semantic category) and recording laminar profiles, one can test whether backward-propagating signals from higher areas exert larger effects when the later stimulus contradicts the extrapolated trajectory implied by the earlier one.
Another family of experiments exploits temporal cueing and delayed disambiguation. In auditory perception, for instance, ambiguous phonemes can be followed by context that favors different lexical interpretations. The time-symmetric account predicts that cortical representations of the initial phoneme in primary and belt auditory regions will be updated after disambiguation, aligning more closely with the retrospectively inferred category. This can be probed using time-resolved decoding of phoneme identity from neural signals or intracranial recordings. A critical test is whether the degree of representational change correlates with the strength of the lexical constraint imposed by later context, as estimated by independent language models. Stronger future constraints should induce larger retroactive shifts in early sensory representations, consistent with Bayesian inference over trajectories rather than purely feedforward classification.
On the motor side, time-symmetric message passing implies that cortical representations of intended actions are refined by outcome information arriving after movement execution. Experiments using brain-machine interfaces and continuous cursor control can probe this by introducing delayed visual feedback perturbations that alter the apparent trajectory of the controlled effector. If the cortex treats the observed outcome as a future boundary condition, then neural activity patterns representing movement intention just prior to execution should be retrospectively biased toward trajectories that better align with the eventual, altered visual path. Closed-loop paradigms allowing online perturbation of feedback while recording from premotor and motor areas can test whether internal representations of the "same" executed movement differ systematically depending on how the outcome unfolds over a short temporal window.
Reinforcement learning and temporal credit assignment yield further discriminating predictions. In classical temporal-difference frameworks, value signals propagate gradually from reward back to preceding states. Time-symmetric schemes instead posit that outcome-related signals act as constraints on entire recent trajectories, leading to relatively rapid, structured assignment of credit to specific inferred causes. Behaviorally, this suggests faster and more selective adaptation in tasks where outcomes are delayed but highly informative about which component of a sequence was responsible for success or failure. For example, in multi-step decision tasks with sparse rewards, subjects should show abrupt reconfiguration of internal policies after informative feedback, in a way better captured by models that perform backward inference over latent variables than by incremental TD learning. Invasive recordings in animals performing such tasks can test whether synaptic and ensemble changes cluster around particular segments of the trajectory identified by a trajectory-level inference model as key contributors to the outcome.
At the physiological level, time-symmetric frameworks predict specific laminar and oscillatory signatures of backward constraint propagation. Feedback signals that encode future-oriented priors or goal-related constraints are expected to preferentially engage superficial layers and apical dendrites, often carried by beta- or alpha-band activity, whereas feedforward sensory evidence is predominantly gamma-band and middle-layer-dominated. If later events reshape estimates of earlier states, then following a disambiguating or outcome-related event, there should be a transient increase in top-down-dominated activity propagating from higher to lower areas, accompanied by reorganization of gamma activity corresponding to the reinterpreted earlier representation. Simultaneous laminar recordings across a hierarchy during temporally extended tasks can test whether the magnitude and timing of top-down beta bursts predict subsequent changes in low-level gamma patterns that correspond to earlier stimulus epochs.
Neuroimaging and perturbation methods afford complementary tests. If time-symmetric message passing is central to cortical computation, disrupting feedback pathways during critical post-stimulus windows should selectively impair phenomena that depend on retrospective refinement, while leaving strictly feedforward discrimination relatively preserved. In humans, TMS applied to higher-order visual or auditory cortex shortly after stimulus onset, but before context or outcome information is processed, should reduce postdictive illusions and context-driven reinterpretation, without dramatically affecting basic detection thresholds. In animal models, selective optogenetic silencing of feedback projections during defined temporal windows can reveal whether suppression of top-down signals abolishes representations that depend on future context while sparing early, context-independent response components.
Time-symmetric theories also motivate more fine-grained analyses of trial-to-trial variability. If cortex is performing inference over trajectories, then fluctuations in later events or expectations should induce correlated variability in neural responses to earlier stimuli, even when the stimuli themselves are identical. For instance, in a visual discrimination task where the same cue is followed by different probabilistic outcomes, the trial-by-trial neural representation of the cue in sensory cortex should vary systematically with the outcome experienced later on that trial, not just with the average reward statistics. This implies that decoding the eventual outcome from early sensory activity may be possible above chance, not because the sensory cortex "knows the future" in a causal sense but because backward-in-time constraints have reshaped the representation within the same trial's temporal window. Sophisticated analysis of spike trains or population activity using generalized linear models that include future events as covariates can quantify the extent of such retroactive modulation.
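A bare-bones version of this analysis is sketched below on simulated data, using ordinary least squares as a stand-in for the proposed GLM: the early response is regressed on the cue alone and then on the cue plus the trial's later outcome, and the added explained variance quantifies retroactive modulation. The simulation builds the effect in by construction; with real recordings, its magnitude is the empirical question.

```python
import numpy as np

# Sketch: regress "early" neural activity on stimulus regressors with and
# without the trial's later outcome as a covariate. Simulated data; the
# retroactive modulation (coefficient 0.3) is an assumption for illustration.

rng = np.random.default_rng(2)
n_trials = 400
stim = rng.integers(0, 2, n_trials)        # identical cue statistics
outcome = rng.integers(0, 2, n_trials)     # event later in the trial
resp = 1.0 * stim + 0.3 * outcome + rng.normal(0, 0.5, n_trials)

def r2(X, y):
    """Fraction of variance explained by a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones(n_trials)
X_past = np.column_stack([ones, stim])             # past-only model
X_full = np.column_stack([ones, stim, outcome])    # + future covariate
print("R^2 past-only:            ", round(r2(X_past, resp), 3))
print("R^2 with future covariate:", round(r2(X_full, resp), 3))
```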
At the level of computational modeling, time-symmetric frameworks suggest a distinct pattern of fit to behavioral and neural data compared with standard predictive coding or purely feedforward models. Models that implement smoothing via bidirectional message passing or variational inference over entire sequences should better capture delayed-influence effects, postdictive illusions, and nonlocal credit assignment in learning. A concrete approach is to fit multiple models (a filter-only model, a classical predictive coding model, and a time-symmetric smoothing model) to the same dataset and compare their predictive accuracy on held-out data. Variables of interest include reaction times, choices, confidence reports, and detailed neural responses. Superior out-of-sample prediction by the smoothing model, especially for conditions involving delayed context or outcome information, would provide quantitative support for the time-symmetric view.
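The filtering-versus-smoothing distinction at the heart of this comparison can be illustrated with a linear-Gaussian toy model. The sketch below, with all parameters illustrative, contrasts a causal Kalman filter with a Rauch-Tung-Striebel smoother on the same simulated trajectory:

```python
import numpy as np

rng = np.random.default_rng(3)
T, a, q, r = 200, 0.95, 0.1, 0.5     # steps, state decay, process var, obs var

# Simulate a latent AR(1) trajectory with noisy observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Causal Kalman filter: the estimate of x[t] uses only y[0..t] (filtering).
xf, Pf = np.zeros(T), np.zeros(T)
xp, Pp = 0.0, 1.0
for t in range(T):
    K = Pp / (Pp + r)
    xf[t], Pf[t] = xp + K * (y[t] - xp), (1 - K) * Pp
    xp, Pp = a * xf[t], a * a * Pf[t] + q

# Rauch-Tung-Striebel smoother: a backward pass lets future observations
# revise estimates of earlier states (smoothing).
xs = xf.copy()
for t in range(T - 2, -1, -1):
    C = Pf[t] * a / (a * a * Pf[t] + q)
    xs[t] = xf[t] + C * (xs[t + 1] - a * xf[t])

print("filter   RMSE:", round(float(np.sqrt(np.mean((xf - x) ** 2))), 4))
print("smoother RMSE:", round(float(np.sqrt(np.mean((xs - x) ** 2))), 4))
```

The smoother's lower error on earlier states is exactly the kind of retrospective refinement the model comparison is designed to detect in behavioral and neural data.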
Specific phenomena in perception and decision making can serve as benchmarks. The flash-lag effect, the color-phi phenomenon, the Fröhlich effect, and other temporal illusions are typically explained with ad hoc mechanisms such as postdictive integration windows or lagged awareness. Time-symmetric message passing yields a unified explanation: perceptual content at a given moment reflects the most probable point along an inferred trajectory consistent with both earlier and slightly later inputs. Rigorous reanalysis of existing data, or new experiments systematically varying the temporal spacing and reliability of contextual cues, can test whether a Bayesian inference model with symmetric temporal priors fits both behavioral and neural data better than models that restrict inference to past information alone. Similar logic applies to decision tasks where confidence judgments appear to be influenced by post-decision evidence; time-symmetric models predict that neural markers of confidence (e.g., in parietal cortex) should continue to evolve after the overt choice, integrating later evidence to retroactively refine the inferred trajectory leading up to the decision.
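In standard forward-backward notation, this unified account amounts to reading perceptual content off the smoothing posterior, which combines a forward message carrying earlier inputs with a backward message carrying slightly later ones:

```latex
% Smoothing posterior for the state x_t given the whole local window y_{1:T}:
% a forward message carries earlier inputs, a backward message slightly later
% ones, and perceptual content tracks their symmetric combination.
\begin{aligned}
\alpha_t(x_t) &\propto p(y_t \mid x_t) \sum_{x_{t-1}} p(x_t \mid x_{t-1})\,\alpha_{t-1}(x_{t-1}),\\
\beta_t(x_t)  &= \sum_{x_{t+1}} p(y_{t+1} \mid x_{t+1})\,p(x_{t+1} \mid x_t)\,\beta_{t+1}(x_{t+1}),\\
p(x_t \mid y_{1:T}) &\propto \alpha_t(x_t)\,\beta_t(x_t).
\end{aligned}
```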
In motor control and proprioception, predictions concern how the brain integrates efference copy, delayed sensory feedback, and inferred body dynamics. Time-symmetric frameworks suggest that perception of limb position or movement at time t is influenced by feedback arriving at t+Δ, especially when delays are short relative to intrinsic integration windows. Robotic perturbations that introduce variable delays and distortions in sensorimotor contingencies can be used to probe whether subjective reports and neural estimates of state are better explained by models that integrate only prior efference and immediate feedback, or by models that allow feedback to retroactively correct state estimates within a moving window. Neural recordings in cerebellum and parietal cortex during such tasks can reveal whether representations of past joint angles or velocities are updated when late-arriving feedback conflicts with earlier predictions, a pattern consistent with time-symmetric smoothing.
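A toy version of the retroactive-correction account, with dynamics, gains, and delay chosen purely for illustration, maintains a moving window of state estimates and revises the estimate for time t when feedback about t arrives at t plus the delay:

```python
import numpy as np

# One-dimensional limb-state toy: the estimate at time t is first formed from
# the efference-copy prediction alone; proprioceptive feedback about time t
# arrives only at t + delay and retroactively corrects the stored estimate.
delay, T = 3, 12
a, r = 1.0, 0.25                      # dynamics gain, feedback noise variance
true = np.cumsum(np.full(T, 0.5))     # true positions under a constant command
est = np.zeros(T)                     # running estimates (revised in place)
buf = {}                              # feedback queued for later arrival

rng = np.random.default_rng(4)
for t in range(T):
    est[t] = a * (est[t - 1] if t else 0.0) + 0.4   # biased efference prediction
    buf[t + delay] = true[t] + rng.normal(0, np.sqrt(r))  # feedback about time t
    if t in buf:                      # late feedback about time t - delay arrives
        k = t - delay
        err = buf.pop(t) - est[k]
        est[k] += 0.8 * err           # retroactively correct the earlier estimate
        for j in range(k + 1, t + 1): # re-propagate the correction forward
            est[j] = a * est[j - 1] + 0.4

print(np.round(est - true, 3))        # earlier errors shrink once feedback lands
```

The printout shows errors shrinking for time points whose delayed feedback has already arrived, while the trailing window, still awaiting feedback, retains the efference-only bias.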
Beyond immediate experiments, the framework suggests long-term developmental predictions. If the cortex learns internal generative models that support time-symmetric inference, then the effective temporal horizon over which future events influence present inference should grow with experience and maturation. Young children, whose models and priors are still broad and underconstrained, might exhibit shorter or less precise windows of postdictive integration and weaker susceptibility to certain temporal illusions. Longitudinal psychophysical studies and EEG/MEG recordings could assess whether the duration and structure of temporal integration windows (and the strength of postdictive effects) systematically increase with age and domain-specific expertise. Conversely, disruptions in the capacity to impose future-oriented constraints may underlie certain neuropsychiatric conditions characterized by altered sense of agency, temporal binding, or predictive control, suggesting clinical tests involving altered postdictive phenomena in schizophrenia or autism.
Another direction involves testing how neuromodulators shape the effective degree of time symmetry. The framework predicts that neuromodulatory states associated with uncertainty, exploration, or heightened learning (e.g., noradrenergic or cholinergic activation) should broaden temporal windows over which later evidence can revise earlier inferences. Pharmacological manipulations or pupil-linked assessments of arousal during tasks with temporally extended ambiguity can test whether increased neuromodulatory tone enhances postdictive integration and trajectory-level reconfiguration. Conversely, dopaminergic signals tied to reward prediction errors might sharpen or gate which future events are allowed to exert strong retroactive influence, focusing backward credit assignment onto particular segments of a trajectory. Simultaneous measurement of neuromodulator proxies and cortical activity during long-horizon learning tasks can evaluate these predictions.
Neural circuit-level predictions involve specific connectivity patterns and microcircuit motifs. Time-symmetric processing requires robust feedback and horizontal connections that support recurrent constraint propagation. Comparative anatomy suggests that association areas should exhibit particularly dense reciprocal connections and rich laminar structure, consistent with their role in integrating priors and future-oriented constraints. Diffusion MRI, tract tracing, and functional connectivity analyses can be used to assess whether regions implicated in strong postdictive effects exhibit especially pronounced bidirectional connectivity. Within these regions, high-resolution laminar fMRI, optical imaging, or multi-contact laminar electrodes can test whether signals carrying late-arriving context or outcome information preferentially target apical dendrites and superficial layers, as predicted by models where backward messages are delivered via feedback pathways to dendritic compartments that integrate future-oriented signals.
Computational neuroscience can contribute by building detailed spiking network models that implement time-symmetric Bayesian inference using biologically plausible synaptic and dendritic mechanisms. Such models should reproduce key empirical phenomena (postdictive illusions, trajectory-level credit assignment, and context-dependent retuning of earlier representations) while adhering closely to known constraints on firing rates, synaptic dynamics, and oscillatory patterns. Importantly, these models can generate new predictions about subtle aspects of neural dynamics, such as the timing and frequency content of re-entrant activity following disambiguating cues or rewards, which can then be tested experimentally. Discrepancies between model predictions and data will, in turn, refine both the computational formalism and hypotheses about the underlying biophysical mechanisms.
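The computation such networks would need to approximate is, at its core, the forward-backward smoothing recursion written out above. A minimal discrete-state instance, with transition and observation probabilities assumed purely for illustration, is:

```python
import numpy as np

# Minimal discrete-state forward-backward pass: the smoothing computation that
# a time-symmetric spiking network would need to approximate with its dynamics.
A = np.array([[0.9, 0.1], [0.1, 0.9]])      # latent transition probabilities
B = np.array([[0.8, 0.2], [0.2, 0.8]])      # p(observation | state)
pi = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1]                        # ambiguous-then-disambiguated stream

T, S = len(obs), len(pi)
alpha = np.zeros((T, S)); beta = np.ones((T, S))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):                        # forward pass: past evidence
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):               # backward pass: future evidence
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)    # posterior over the state at each t
print(np.round(gamma, 3))   # later observations reshape beliefs about early steps
```

Comparing the rows of the smoothing posterior with the (forward-only) filtering distribution shows how the later observations pull beliefs about the early, ambiguous steps; it is precisely these pulls that a candidate spiking implementation must reproduce in its re-entrant dynamics.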
Future work could also explore how time-symmetric message passing interacts with sleep and offline consolidation. If the cortex learns generative models of trajectories, then replay during sleep or quiet wakefulness might implement not only forward replay of past experiences but also backward or mixed replay that enforces temporal consistency between past episodes and anticipated futures. Hippocampal-cortical replay studies can be analyzed for evidence of such bidirectional sequences, testing whether backward replay sequences more often occur in contexts where future planning or credit assignment is required. Moreover, manipulating replay content via targeted stimulation during sleep could reveal whether selectively biasing future-oriented sequence segments alters subsequent perception, prediction, or choice behavior in a manner consistent with retroactive model refinement.
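One common way to classify replay direction in such analyses, used here only as an illustrative assumption about the pipeline, is a rank-order test: correlate the order in which cells fire within a candidate event against the order of their place fields on the track.

```python
import numpy as np
from scipy.stats import spearmanr

# Rank-order replay-direction test: strongly positive rho suggests forward
# replay, strongly negative rho backward replay. Cell IDs and events are
# hypothetical stand-ins for detected candidate events.
field_order = {"c1": 0, "c2": 1, "c3": 2, "c4": 3, "c5": 4}

def replay_direction(event_spikes, field_order):
    """event_spikes: list of (cell_id, spike_time) within one candidate event."""
    cells = [c for c, _ in sorted(event_spikes, key=lambda s: s[1])]
    ranks = [field_order[c] for c in cells]
    rho, p = spearmanr(range(len(ranks)), ranks)
    return rho, p

forward_event = [("c1", 0.01), ("c2", 0.03), ("c3", 0.05), ("c4", 0.07), ("c5", 0.09)]
backward_event = [("c5", 0.01), ("c4", 0.03), ("c3", 0.05), ("c2", 0.06), ("c1", 0.08)]
print(replay_direction(forward_event, field_order))    # rho near +1
print(replay_direction(backward_event, field_order))   # rho near -1
```

Tallying classified events against task phase would then test the prediction that backward sequences are enriched when planning or credit assignment is demanded.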
The long-term trajectory of this research will likely involve converging evidence from psychophysics, invasive and noninvasive recordings, circuit perturbations, and computational modeling. A central task is to delineate the boundaries of the temporal window within which time-symmetric inference operates under various conditions: perception versus planning, low versus high uncertainty, automatic skills versus novel tasks. Another is to determine how the cortex dynamically adjusts these windows, for example by modulating attentional gain, oscillatory regimes, and feedback strength. By systematically charting when and how later events reshape representations of earlier ones, and by linking these effects to specific neural signatures and architectural motifs, experimental work can progressively test the central claims of time-symmetric cortical computation and clarify the role of retrocausality-like phenomena in everyday cognition.
