Predictive minds in a block universe

Predictive processing models the brain as a constantly active inference machine that minimizes prediction error about the causes of its sensory input. On this picture, perception is not a passive reception of signals from the world but a continuous comparison between top-down predictions and bottom-up sensory data. The brain deploys generative models to forecast what is likely to occur next, then adjusts those models when surprising information arrives. This framework already carries an implicit temporal structure: it assumes that the brain is always oriented toward the immediate future, treating incoming signals as evidence to refine its anticipations. The ā€œnowā€ is not a static slice but a transient boundary where predictions meet data and are updated for the next forecasting cycle.
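The predict-compare-update cycle described here can be sketched as a minimal loop. Everything in it (the scalar estimate, the learning rate, the constant input) is invented for illustration, not drawn from any specific model:

```python
def run_cycle(observations, prior_estimate=0.0, learning_rate=0.3):
    """Track a latent cause by forecasting the next input, measuring the
    prediction error, and nudging the estimate for the next cycle."""
    estimate = prior_estimate
    errors = []
    for obs in observations:
        prediction = estimate               # top-down forecast
        error = obs - prediction            # surprise: data minus prediction
        estimate += learning_rate * error   # update for the next cycle
        errors.append(abs(error))
    return estimate, errors

final, errs = run_cycle([1.0] * 20)         # a world that keeps sending 1.0
```

On a stable input the per-cycle error shrinks geometrically, which is the simplest sense in which the ā€œnowā€ of each cycle hands a refined model to the next.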

This temporally loaded view of perception and cognition directly intersects with questions of temporal ontology. Eternalism, often glossed as the ā€œblock universeā€ view, holds that past, present, and future events are equally real within a four-dimensional spacetime manifold. On such an account, the apparent flow of time is not a fundamental feature of reality but a perspectival aspect of how beings like us experience the world from within that manifold. The tension then arises: predictive processing emphasizes an asymmetric, forward-looking brain that seems to presuppose a dynamic unfolding of events, whereas the block universe describes a static totality in which the entire temporal axis is already laid out.

One way to reconcile these pictures is to treat predictive processing as a story about the internal organization of an embedded agent, not about metaphysical time itself. The brain’s generative models are constructed from information available along that agent’s worldline. Even if all times are equally real in a block universe, an individual organism only has direct causal access to a limited subset of events—typically, what we call the past and the local present. The asymmetry between prediction and memory then arises from the direction of information flow for the organism, not from an ontological difference in the reality of different times. The brain becomes a system that compresses information from its past light cone into priors that guide expectations about what lies in its future light cone, even though both regions are equally ā€œthereā€ in a four-dimensional sense.

Temporal ontology also shapes how to understand the notion of priors in a Bayesian brain. Priors encode the statistical regularities that have been learned across time. In an eternalist framework, these regularities are fixed features of an organism’s entire trajectory, but from the inside they appear as evolving expectations that become more refined with experience. The learning process corresponds to a path through the block universe along which the organism’s neural states change in lawful ways, reducing long-term prediction error across its lifespan. What looks like temporal updating from within is simply a pattern stretched out along the time dimension from a four-dimensional vantage point.
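The idea that priors are learned regularities refined by evidence has a standard minimal form: conjugate Bayesian updating. The Beta-Bernoulli sketch below is a generic textbook example, with the observation sequence invented:

```python
def update_beta(alpha, beta, observation):
    """Conjugate update: one Bernoulli observation shifts the Beta prior."""
    return (alpha + 1, beta) if observation else (alpha, beta + 1)

alpha, beta = 1.0, 1.0            # flat prior: no regularity learned yet
for obs in [1, 1, 0, 1, 1, 1]:    # a short, invented stretch of experience
    alpha, beta = update_beta(alpha, beta, obs)

expectation = alpha / (alpha + beta)   # posterior mean: refined expectation
```

From the eternalist vantage point described above, the whole sequence of (alpha, beta) pairs is one fixed trajectory; the ā€œrefinementā€ is just how that trajectory varies along its index.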

The internal temporal hierarchy posited by predictive processing offers another bridge to temporal ontology. Generative models are arranged in layers that operate over different timescales: rapid prediction at sensory levels, slower prediction at perceptual-object levels, and still slower forecasting at the level of goals, narratives, and identity. Each layer models not only spatial structure but temporal structure—how events tend to unfold and cohere over time. In a block universe, these nested temporal patterns correspond to structured segments of an organism’s worldline, with fine-grained micro-events embedded within longer macro-events. The hierarchical model is thus a physiological implementation of how a single four-dimensional trajectory is carved into meaningful segments, each with its own characteristic temporal grain.
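One crude way to picture the temporal hierarchy is as a stack of filters tracking the same stream at different rates. The layer count and rates below are invented; this is a sketch of the nesting, not a model of cortex:

```python
def hierarchy_step(layers, rates, x):
    """Each layer tracks the layer below it at its own rate; the bottom
    layer tracks the raw input directly."""
    new_layers = []
    inp = x
    for level, rate in zip(layers, rates):
        level += rate * (inp - level)   # predict, then correct
        new_layers.append(level)
        inp = level                     # the next layer models this one
    return new_layers

layers = [0.0, 0.0, 0.0]
rates = [0.9, 0.3, 0.05]    # sensory-fast, perceptual, narrative-slow
for x in [1.0] * 50:        # a stretch of stable input
    layers = hierarchy_step(layers, rates, x)
```

After exposure to a stable input, the fast layer sits closest to the data and the slow layer lags furthest behind, mirroring the fine-to-coarse temporal grain of the hierarchy.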

This interaction between predictive processing and temporal ontology has important consequences for how consciousness of time is understood. Our sense of a moving present, an immediate past fading into memory, and an anticipated future emerging from uncertainty may be constructed by how prediction error is minimized across multiple temporal scales. The ā€œspecious presentā€ can be modeled as an integration window within which the brain smooths over discrete events to maintain coherent predictions. In a block universe, the neural realizations of these integration windows are just specific patterns distributed along the time axis, but from the agent’s internal perspective they generate an experienced flow. Temporal ontology thus constrains what sorts of neural and computational structures can plausibly underwrite time-consciousness without postulating an objectively moving now.

Another pressure point concerns how predictive processing handles asymmetries like causation and thermodynamic irreversibility. The brain learns priors that reflect the causal regularities it has encountered, which are themselves grounded in the low-entropy boundary conditions of the universe. In a block universe with time-symmetric laws, these boundary conditions still pick out a direction along the time dimension in which entropy tends to increase. The organism’s prediction machinery is tuned to this direction: it expects causes to precede effects in the sense defined by the thermodynamic and informational arrows of time. Thus, the orientation of predictive models in time is not arbitrary, even in an eternalist setting; it is anchored by the large-scale structure of the four-dimensional state of the universe.

Speculative ideas like retrocausality highlight how deeply predictive processing is entangled with temporal assumptions. If influences could run from what we call ā€œfutureā€ events to ā€œpastā€ events, then in principle an optimal generative model might incorporate information that, from the agent’s perspective, has not yet occurred. In a strict block universe, all correlations are simply relations among points in spacetime, unconstrained by our ordinary temporal intuitions. However, for predictive processing to remain empirically adequate, it must respect the actual constraints on information access that real organisms face. Whether or not retrocausal structures exist at a fundamental level, the effective causal structure along an organism’s worldline limits which parts of the block it can use to shape its priors, preserving a practically forward-directed concept of prediction.

Temporal ontology also clarifies the status of prediction error minimization as a dynamical principle. From the standpoint of an agent within time, prediction error appears to be minimized through ongoing updates: the brain changes its internal states to better match incoming data. From a block universe perspective, this dynamic is just a description of how neural states are arranged along the time dimension. One might think of the minimization of prediction error not as a process that unfolds in an open future but as a constraint that holds across the entire worldline of the organism: the actual trajectory is the one, among many possible trajectories, that best satisfies the free-energy principle under given environmental conditions. On this reading, predictive processing describes a structural property of the complete spacetime pattern realized by the agent, rather than a law operating in a genuinely open temporal domain.

These considerations suggest that predictive processing is compatible with multiple views of time but takes on different interpretations in each. Within presentism, prediction error minimization would describe how the brain updates as new, truly coming-into-being events arrive. Within eternalism, the same mathematical relations characterize how neural states vary along the time axis in a fully specified spacetime. The key question is whether the explanatory power of predictive processing depends on the metaphysical reality of an open future, or whether its central notions—generative models, priors, and prediction errors—can be reinterpreted as features of four-dimensional structures without loss of empirical content. Engaging this question requires carefully tracking which temporal asymmetries belong to the physics, which belong to the organism’s informational situation, and which stem from the phenomenology of conscious experience.

Block universe implications for cognition

If the block universe view is correct, then cognition must be understood as a structured slice within an already-complete spacetime, rather than as a process that helps ā€œshapeā€ an open future. For predictive processing, this means that what we ordinarily describe as the brain using priors to generate forecasts and then updating those priors in light of new evidence is, at a deeper level, simply the description of a particular four-dimensional pattern. The entire history of an organism’s neural states, the sequence of prediction errors, and the gradual refinement of generative models are all equally real and fixed within the spacetime manifold. Seen this way, predictive cognition does not reach into an indeterminate future to make it one way rather than another; instead, the organism’s worldline already includes both its expectational stance and the sensory consequences that confirm or disconfirm those expectations.

This shift in perspective changes what is meant by ā€œpredictionā€ and ā€œupdating.ā€ In everyday language, prediction suggests a deliberate orientation toward events that are not yet real, and updating suggests that something genuinely new occurs when evidence arrives. Under eternalism, however, the neural states corresponding to ā€œbefore evidence,ā€ ā€œduring evidence,ā€ and ā€œafter updatingā€ coexist as distinct temporal parts of a single cognitive trajectory. The brain states that encode certain priors, the sensory states that produce large prediction errors, and the revised states embodying updated models are all laid out in order along the time dimension. The directedness from prior to posterior is not a metaphysical arrow carving the future from possibility into actuality; it is an internal asymmetry in how information is processed at different temporal locations along the organism’s path.

One practical implication is that computational descriptions of cognition must be interpreted carefully. When models of the Bayesian brain talk about inference ā€œover time,ā€ they are typically written in terms of iterative algorithms that progress through updating cycles. In a block universe interpretation, these iterations correspond to successive brain states that satisfy certain functional relations to one another. The algorithm exists not as a procedure executed in a genuinely open temporal arena, but as a structural relationship among different temporal slices: earlier neural states encode priors and prediction errors that stand in lawful computational relations to later neural states encoding posteriors and revised predictions. The brain’s operation can still be described in procedural terms, but the apparent procedure is frozen into the geometry of spacetime rather than enacted in a becoming reality.
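The reinterpretation sketched here, an iterative algorithm read as a relation among temporal slices, can be made concrete: run an update procedurally, record the whole trajectory, then check the same update as a constraint that every adjacent pair of recorded states satisfies. All numbers are invented:

```python
LEARNING_RATE = 0.5

def step(state, observation):
    """One cycle of error-driven updating."""
    return state + LEARNING_RATE * (observation - state)

# Procedural description: execute the updates in order.
observations = [2.0, 2.0, 2.0, 2.0]
trajectory = [0.0]
for obs in observations:
    trajectory.append(step(trajectory[-1], obs))

# Structural description: no procedure, just a constraint that every
# adjacent pair of recorded slices must satisfy.
def satisfies_update_relation(traj, obs_seq):
    return all(
        later == step(earlier, obs)
        for earlier, later, obs in zip(traj, traj[1:], obs_seq)
    )

holds = satisfies_update_relation(trajectory, observations)
```

The two descriptions pick out the same pattern: one generates the trajectory in order, the other verifies it as a completed four-dimensional record.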

This reinterpretation also affects how to think about learning and plasticity. Ordinarily, to say that a system learns is to claim that it changes its weights or parameters in response to data, gradually improving its generative models. In a block universe, the full trajectory of those parameters is already fixed from the outset. What we call ā€œlearningā€ is the fact that, along certain temporal segments, synaptic strengths and connectivity patterns transition from one configuration to another in a way that tends to reduce long-run prediction error. The direction from ā€œless informedā€ to ā€œmore informedā€ is determined by the thermodynamic and informational arrows of time, not by ontologically open possibilities. A learning process is thus a particular sort of temporally extended structure: one in which earlier sub-states encode weaker priors and higher error, and later sub-states encode stronger priors and better error suppression.

From the vantage point of an embedded agent, this static four-dimensional structure is not accessible as such. The organism only has access to local segments of its worldline and can register asymmetries between what it stores as memory and what it constructs as expectation. Memory is realized by physical traces of earlier events—modified synapses, long-term potentiation, structural changes—whereas expectation involves active generative states that project likely sensory inputs. In the block universe, both memory traces and expectation states are simply different types of neural configurations at different times. The apparent difference in their temporal orientation is a matter of which parts of the manifold can causally influence which others, given the constraints imposed by the low-entropy past and the way information can propagate along the agent’s trajectory.

The experience of temporal flow, of a moving ā€œnowā€ in which predictions are continually being tested, is thus an emergent feature of particular worldlines with specific informational and dynamical properties. The organism’s cognitive architecture compresses its accumulated states into a sense of enduring self, tracks regularities in its environment, and constantly projects forward to generate expectations. In a block universe, this whole experiential panorama—the feeling of being located at a special present, the anticipation of what might happen, the surprise at prediction error—is fully contained in the four-dimensional neural and bodily patterns. The sense that ā€œtime is passingā€ is implemented by recurrent generative models whose internal dynamics create a stream-like phenomenology of consciousness, even though the underlying spacetime structure is static.

At the level of representational content, predictive processing in a block universe acquires a distinctive profile. The brain’s models do not represent a world that is genuinely unfolding from indeterminacy; they represent local statistical regularities across one part of spacetime, extrapolated to neighboring parts. The future-directed content of a prediction is thus best understood as a representation of patterns that hold across temporal neighborhoods adjacent to the agent’s current temporal location. The fact that the represented events are ā€œlaterā€ along the time axis is functionally important for the agent—because only those later events can be causally influenced by its current actions—but this does not entail that they are metaphysically less real. Cognition is oriented toward specific temporal regions because of the pattern of causal and informational accessibility, not because of a deep ontological divide between what exists and what does not yet exist.

This raises interesting questions about error and misrepresentation. Under eternalism, there is a determinate fact of the matter about what an organism’s later states are like; those states already exist as part of the block. When an agent at an earlier time forms a prediction, the content of that prediction can be evaluated against the agent’s states at subsequent times. A prediction is correct if it matches those later states, incorrect if it does not. The normative dimension of accuracy and inaccuracy is preserved, but it is now framed as a four-dimensional relation between different temporal parts of the agent and its environment. The very idea of surprise becomes a relation between the prior expectations encoded at one temporal location and the actual sensory inputs realized at a slightly later location, both of which coexist in the manifold even though the earlier self is ignorant of the later outcome.

In such a setting, the role of priors takes on a dual character. From within the stream of experience, priors are adjustable beliefs, shaped through interaction, that guide present expectations. From the block universe perspective, priors are temporally localized neural parameter settings, themselves the result of earlier learning episodes, which jointly determine patterns of behavior over the organism’s life. These parameter settings can be understood as encoding information about large swathes of the environment’s structure, distilled from extensive exposure to the past light cone. Although they are, in one sense, ā€œaboutā€ the future—because they influence how the organism will respond to later conditions—they are anchored in features that recur across time. In this way, priors function as four-dimensional summaries: compact encodings of regularities that span significant stretches of the manifold.

Another cognitive consequence of eternalism concerns the relationship between internal simulation and actual events. Predictive processing frameworks emphasize that the brain continuously runs generative models ā€œoffline,ā€ filling in missing sensory information and even hallucinating when top-down predictions dominate bottom-up input. These simulations create possible trajectories of the organism and its environment, providing a basis for planning and decision-making. In a block universe, the neural events corresponding to such simulations are themselves located at precise spacetime points and are related in determinate ways to the actual trajectories they represent. Some simulated trajectories may closely approximate the organism’s actual later worldline; others may diverge dramatically. The contrast between actual and simulated futures is thus a relation entirely internal to the manifold: a pattern in which some subsets of neural events encode alternative paths that are never realized, even though the encoding and the non-realization are equally real facts about the block.

This perspective also reframes the role of counterfactual reasoning. When the brain imagines what would have happened had it acted differently, it constructs alternative generative models that depart from the actual path traced in spacetime. These counterfactual models underpin learning by highlighting which aspects of the environment are stable and which are sensitive to intervention. From within time, these thoughts appear as explorations of unrealized possibilities. In a block universe, they are records of how the actual organism, at particular times, computed structured representations of ways the world is not. The distinction between actual and counterfactual is redescribed as a difference between the realized trajectory encoded by neural states across the whole worldline and the set of trajectories that those neural states model but that do not occur anywhere in the manifold.

Thinking of cognition in block-universe terms finally makes the informational asymmetry between past- and future-directed content more precise. The organism’s generative models can be tightly constrained by past sensory input, because earlier interactions leave abundant physical traces in its body and environment. By contrast, its models of later events must extrapolate from patterns that have held so far, without any direct access to those later states. The imbalance in constraint gives rise to higher uncertainty about the future and lower uncertainty about the past, even though both are equally real in spacetime. Cognition thus becomes a form of local inference under asymmetric evidence: the organism infers a richer, more fine-grained structure for those portions of the block already inscribed in memories and records, and a sparser, more probabilistic structure for those portions that lie ahead along its worldline, where evidence has not yet been received.

Mental time travel and counterfactual inference

Mental time travel, in the sense of remembering the past and imagining the future, can be naturally interpreted within predictive processing as the reuse of generative models across different temporal modes. The same hierarchical machinery that predicts imminent sensory input can be deployed ā€œofflineā€ to reconstruct earlier scenes or to project hypothetical future ones. Memories are then not static records but active reconstructions guided by present priors; imagined futures are structured guesses constrained by those same model parameters. Under eternalism, these acts of recollection and anticipation are themselves just temporally located neural events in the block universe, but they stand in systematic relations to other segments of the agent’s worldline: some reconstructions are anchored to earlier states that did occur, others are directed toward later states that will occur, and yet others describe trajectories that never occur anywhere in spacetime.

On a predictive processing picture, remembering is a form of inference in which top-down models fill in missing details of past episodes on the basis of partial cues. A familiar smell, a stray phrase, or a visual pattern sets off a cascade of predictions about associated content: faces, places, emotions, and actions. These predictions are tested, not against the original sensory inputs—those are inaccessible—but against currently available constraints such as stored traces in synapses, contextual information, and other memories. From within experience, this feels like ā€œrelivingā€ an event, but computationally it is closer to a best-guess reconstruction under present priors. In a block universe, the remembered event and the later act of remembering are distinct temporal slices, linked by physical traces that survived across the intervening interval. The accuracy of recollection is a four-dimensional relation between these slices: some reconstructions approximate their earlier targets well, others diverge because intervening updates have shifted the priors that drive recall.

Future-oriented mental time travel employs similar generative machinery in the opposite temporal direction. When an agent imagines what will happen if they take a particular route home or accept a new job, their brain runs forward models of likely environmental and bodily states: traffic patterns, conversations, emotional reactions, and long-term outcomes. These simulations are constrained by what has been learned about the world’s causal structure, encoded in synaptic weights and higher-level narratives. In the language of the Bayesian brain, the agent draws samples from posterior distributions over future states, conditioned on its current situation and possible actions. Under eternalism, the future that is imagined and the future that is actually realized are both fixed parts of the manifold. The counterfactual distance between them—how close the imagined trajectory comes to the actual worldline—is a precise feature of spacetime, even though the imagining subject cannot know it at the time of simulation.
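Drawing samples from a forward model conditioned on an action can be sketched as follows. The routes, travel times, and noise levels are invented, and the Gaussian noise stands in for whatever uncertainty the agent’s model encodes:

```python
import random

def simulate_commute(route, rng):
    """Draw one imagined travel time (minutes) from a crude forward model."""
    base = {"highway": 30.0, "side_streets": 40.0}[route]
    noise = 5.0 if route == "highway" else 2.0   # highway is less predictable
    return base + rng.gauss(0.0, noise)

rng = random.Random(0)     # fixed seed: the "imagination" is repeatable
samples = [simulate_commute("highway", rng) for _ in range(1000)]
mean_time = sum(samples) / len(samples)
```

Each sample is one imagined future; the spread of the samples, not just their mean, is what the agent’s planning machinery has to work with.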

The phenomenology of ā€œtravelingā€ backward and forward in time can thus be understood as the brain shifting its point of reference within its generative models rather than moving in time itself. In episodic recollection, the model is anchored to an earlier self-location and reconstructs what was likely present then. In prospective simulation, the model is anchored to plausible later self-locations and extrapolates what might surround them. What gives these operations their temporal flavor is not a metaphysically moving now, but the way priors encode asymmetries in available evidence: past events are supported by abundant physical traces, while future events must be estimated from patterns inferred so far. In a block universe, this asymmetry is grounded in the fact that the agent’s current state is causally downstream of earlier events but upstream of later ones, making the former richly constrained and the latter comparatively open from the agent’s informational standpoint, though not ontologically indeterminate.

Counterfactual inference is deeply woven into this capacity for mental time travel. When an individual thinks ā€œIf I had left earlier, I wouldn’t have missed the train,ā€ they are constructing a nearby alternative trajectory in which only a small subset of variables—departure time, subsequent encounters, emotional tone—are altered. The generative model is tasked with recomputing the likely consequences of this local perturbation, holding much of the learned structure fixed. These counterfactuals serve several functions: they help isolate which variables are causally efficacious, they provide surrogate error signals when direct experimentation is impossible, and they guide future policy selection by highlighting better strategies. From a predictive processing standpoint, counterfactual thought is a way of exploring the gradient of prediction error in model space: by comparing actual outcomes with simulated alternatives, the system can adjust its priors about which actions tend to yield which results.
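The ā€œif I had left earlierā€ example can be rendered as an intervention on one variable of a fixed structural model, with the learned structure (the walk time, the train time) held constant. All values are invented:

```python
def misses_train(departure_minute, walk_minutes=12, train_minute=30):
    """Learned structure: arrival = departure + walk; a miss means
    arriving after the train has left."""
    return departure_minute + walk_minutes > train_minute

actual = misses_train(departure_minute=20)           # what happened: missed
counterfactual = misses_train(departure_minute=15)   # the local perturbation
```

Only the departure time is perturbed; the rest of the model is held fixed, which is exactly what lets the comparison isolate that variable as causally efficacious.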

In a block universe framework, counterfactual thoughts remain fully real events in spacetime, but their content is explicitly about ways the world is not, and never will be, at any point in the manifold. The imagined earlier departure and its non-missed train do not correspond to another branch of reality; they are structures encoded in the neural states of the actual agent, representing trajectories that fail to map onto any realized worldline. The contrast between actual and counterfactual thus becomes a mapping relation: some internal simulations are isomorphic, to varying degrees, with segments of the organism’s actual path and environment, while others are systematically misaligned. Learning hinges on recognizing such misalignment and adjusting generative models accordingly. Even though all of this is fixed in spacetime, the distinction between ā€œwhat happenedā€ and ā€œwhat would have happenedā€ is preserved as a principled difference between the realized pattern and the set of patterns encoded but not instantiated.

This viewpoint casts the evaluative dimension of counterfactual thinking in an interesting light. Regret, relief, and responsibility often rely on comparisons between actual outcomes and imagined alternatives: ā€œThings could have gone worse,ā€ ā€œI should have done otherwise.ā€ Under predictive processing, these emotional reactions are not mere epiphenomena; they modulate precision weights on prediction errors and priors, thereby reshaping future expectations and policies. Regret may up-regulate the salience of specific action–outcome links, increasing the impact of discrepant outcomes on model revision. Relief may reduce the precision accorded to catastrophic scenarios, dampening their influence on future planning. In an eternalist picture, these affectively charged updates are simply portions of the organism’s trajectory where certain internal comparisons occur and corresponding parameter shifts take place. The normative sense that some alternatives would have been ā€œbetterā€ is instantiated as a ranking over simulated trajectories according to values encoded in the agent’s generative models.
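The claim that affect modulates precision weights can be sketched, very loosely, as a gain on the update step: the same error revises the model more when its precision is up-weighted. The numbers and the regret/relief labels are illustrative glosses, not a model of emotion:

```python
def precision_weighted_update(belief, observation, precision, base_rate=0.1):
    """The same prediction error moves the belief further when its
    precision (inverse expected variance) is set higher."""
    error = observation - belief
    return belief + base_rate * precision * error

belief = 0.0
low = precision_weighted_update(belief, 1.0, precision=0.5)   # relief-like down-weighting
high = precision_weighted_update(belief, 1.0, precision=3.0)  # regret-like up-weighting
```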

The role of narrative further illustrates how mental time travel and counterfactual inference cohere within a block universe. Humans routinely organize their lives into stories: a childhood phase, pivotal decisions, turning points, and imagined futures. These narratives are high-level generative models that compress large segments of the worldline into coherent structures, explaining why particular events occurred and projecting how similar patterns might continue. When someone reinterprets their past (ā€œI now see that event as formative rather than traumaticā€) or revises their future plans (ā€œI’m no longer aiming for that careerā€), they are changing these narrative-level models, which in turn reconfigure the space of accessible counterfactuals. Some imagined paths become more salient, others less so. From a four-dimensional perspective, the earlier narrative and the later reinterpreted narrative are different modes of organizing the same underlying spacetime pattern, each supporting different sets of counterfactual simulations.

Mental time travel is also central to how agents manage uncertainty. Generative models do not just produce point predictions; they characterize distributions over possible outcomes. By mentally sampling from these distributions—imagining various ways a social encounter, medical diagnosis, or financial decision might unfold—agents can approximate expected utilities without physically realizing each alternative. The brain adjusts its policies by integrating over these simulated futures, weighting them according to estimated likelihood and value. In a block universe, the true later trajectory is already fixed, but the agent’s current uncertainty is genuine from its limited perspective: the mapping from present state to later segment of its worldline is unknown to it, even though well-defined in spacetime. Counterfactual inference supplies the internal ā€œsandboxā€ in which the organism can rehearse many incompatible trajectories, most of which never correspond to its actual future, while still extracting robust guidance for action.
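Integrating over simulated futures weighted by likelihood and value is, computationally, Monte Carlo policy evaluation. The actions, outcome distributions, and risk penalty below are all invented:

```python
import random

def sample_outcome(action, rng):
    """One imagined future: a noisy payoff whose spread depends on the action."""
    mean_payoff = {"cautious": 1.0, "bold": 1.5}[action]
    spread = {"cautious": 0.2, "bold": 2.0}[action]
    return rng.gauss(mean_payoff, spread)

def expected_utility(action, rng, n=2000, risk_aversion=0.5):
    """Monte Carlo estimate: mean payoff minus a penalty on its variance."""
    draws = [sample_outcome(action, rng) for _ in range(n)]
    mean = sum(draws) / n
    var = sum((d - mean) ** 2 for d in draws) / n
    return mean - risk_aversion * var

rng = random.Random(1)
best = max(["cautious", "bold"], key=lambda a: expected_utility(a, rng))
```

None of the sampled futures needs to match the actual later worldline; the sandbox earns its keep by ranking policies, not by clairvoyance.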

Although retrocausality is not required for any of this, entertaining it clarifies the dependence of mental time travel on informational constraints. If there were physically accessible signals from what we call the future, an optimal predictive system might integrate them as additional evidence, narrowing the space of plausible future trajectories. Counterfactual simulations would then be conditioned not only on past and present states but also on partial constraints from later events. Under strict relativistic causality, however, the agent’s generative models must work only with information within or descending from its past light cone. This restriction ensures that mental time travel remains asymmetric: the agent can refine reconstructions of past events using new evidence (for example, discovering a forgotten letter that reshapes one’s understanding of a relationship), but it cannot directly access future evidence to refine simulations. The gap between imagined futures and actual later states—and the corresponding possibility of prediction error—is structurally baked into the organism’s position in spacetime.

These considerations highlight how mental time travel and counterfactual inference contribute to the construction of a temporally extended self. The sense of being the same subject who once occupied remembered situations and who will occupy imagined future ones depends on generative models that bind disparate temporal slices into a single, coherent identity. The agent tracks relatively stable traits, goals, and commitments and uses them to interpolate between past, present, and possible futures. Under predictive processing, this self-model is itself a high-level hypothesis: it is upheld as long as it helps minimize prediction error across the stream of experience, including the stream of remembered and imagined episodes. In a block universe, the self is not a separate entity traveling along a timeline but a structured pattern spread across that timeline, whose internal generative models encode how different parts of the pattern hang together and how things could have gone, or might yet go, differently. Mental time travel and counterfactual inference are the mechanisms by which this pattern represents its own temporal shape, including the unrealized paths that frame its actual history.

Agency, free will, and predictive control

Agency in a predictive processing framework is typically analyzed in terms of active inference: organisms do not merely predict their sensory input, they also act to make those predictions come true. Instead of passively waiting for the world to provide evidence, they move their bodies, seek out information, and restructure their environments to reduce long-term prediction error. Voluntary action, on this view, is guided by hierarchically organized policies encoded in generative models: the system entertains predictions not only about what sensations it will receive, but about what it will do, how its body will move, and how the world will accordingly change. Actions are selected that are expected to minimize future surprise relative to preferred states, such as bodily homeostasis or socially valued outcomes. This links the phenomenology of ā€œI am doing thisā€ to the brain’s capacity to generate and evaluate action-conditioned predictions.

In a block universe setting, these action-guiding models and the bodily movements they orchestrate are just parts of a single four-dimensional pattern. The apparent sequence in which an intention arises, is refined, and culminates in a bodily movement corresponds to a series of neural and motor states arranged along the time axis. Eternalism does not remove the computational or functional asymmetry between earlier states that encode intentions and later states that implement them, but it reinterprets that asymmetry as a structural relation within spacetime rather than a process that ā€œcould have gone otherwiseā€ in a robust metaphysical sense. The agent’s generative models at one time encode expectations about its own later bodily states; in the manifold, those later states already exist. The sense of initiating action then becomes a feature of how certain neural states within the worldline represent and control other, later parts of the same extended pattern.

This invites a re-examination of free will. Many compatibilist accounts already treat free will not as metaphysical indeterminism but as a matter of having the right kind of internal control: flexibility in responding to reasons, sensitivity to evidence, and the ability to regulate behavior in light of higher-level goals. Predictive processing aligns naturally with this stance. An agent exercises control to the extent that its hierarchical generative models can flexibly reallocate precision, explore alternative policies, and update long-range expectations in response to surprising outcomes. High-level priors encode values, norms, and long-term projects; lower levels encode sensorimotor contingencies. Free action, in this sense, is action guided by internally integrated, counterfactually robust models that allow the system to anticipate the likely consequences of many possible moves and select among them in a way that reliably tracks its goals.

Within eternalism, such compatibilist freedom can be redescribed four-dimensionally. A ā€œfreeā€ trajectory is one whose internal segments exhibit sophisticated active inference: earlier states model and evaluate multiple potential downstream pathways; the actual downstream states are sensitive to those evaluations in a law-governed way; and the resulting worldline shows rich counterfactual dependence on the agent’s internal deliberation. The block universe does not contain branching futures in the strong metaphysical sense, but it contains structures in which, had the agent’s internal parameters been slightly different at a given time, later parts of the worldline would have been different. These are the usual counterfactuals employed in physics and causal modeling. A robust sense of agency survives if the worldline includes mechanisms that systematically route it through regions of spacetime where the agent’s modeled preferences are better satisfied than they would have been under alternative parameter settings.
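The counterfactual dependence invoked here can be illustrated with a toy deterministic system. The dynamics and parameter values below are purely illustrative assumptions (an agent that moves a fraction `gain` of the remaining distance to its goal each step), not anything specified in the text:

```python
# Sketch: counterfactual dependence of a deterministic trajectory on an
# internal parameter. Each parameter setting fixes one history, yet the
# histories differ systematically with the parameter -- the usual sense in
# which "later parts of the worldline would have been different".

def trajectory(start, goal, gain, steps=10):
    """Law-governed update with no branching: x moves a fraction `gain`
    of the way toward `goal` on every step."""
    x, path = start, []
    for _ in range(steps):
        x = x + gain * (goal - x)
        path.append(round(x, 4))
    return path

actual = trajectory(start=0.0, goal=1.0, gain=0.5)
counterfactual = trajectory(start=0.0, goal=1.0, gain=0.1)

# The higher-gain agent ends closer to its goal than the counterfactual
# low-gain variant, although each run, taken alone, is fully determined.
print(actual[-1] > counterfactual[-1])  # True
```

Nothing here is indeterministic; the "could have been different" is cashed out entirely as a comparison between solutions of the same law under altered internal parameters, which is the sense the paragraph above relies on.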

Active inference clarifies how such routing works in computational detail. In the Bayesian brain, actions are selected to minimize expected free energy, a quantity that combines expected prediction error with a measure of how well preferred outcomes are realized. The system compares the predicted future sensory consequences of different policies—different sequences of actions—and tends to choose those whose expected trajectories stay close to homeostatic set points and other high-level goals. This multi-step evaluation can be embodied in generative models that simulate outcomes over extended horizons, including social reactions, reputational shifts, or long-term health impacts. The richness and depth of these simulations determine how ā€œinformedā€ or ā€œreflectiveā€ the resulting actions are. In a block universe description, the entire tree of internally simulated futures, along with the single realized branch that the organism actually traverses, is distributed across spacetime as a complex relational structure linking neural modeling events to ensuing behavior and environmental changes.
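A minimal sketch of this policy comparison, keeping only the pragmatic term of expected free energy (the KL divergence between predicted outcomes and preferred outcomes) and omitting the epistemic, information-gain term. All distributions, policy names, and the preference vector are toy assumptions for illustration:

```python
import numpy as np

# Preferred (goal) distribution over outcomes, e.g. homeostatic set points.
preferences = np.array([0.8, 0.15, 0.05])  # the agent "wants" outcome 0

# Each candidate policy predicts a distribution over outcomes.
policies = {
    "policy_A": np.array([0.7, 0.2, 0.1]),
    "policy_B": np.array([0.2, 0.3, 0.5]),
}

def expected_free_energy(predicted, preferred, eps=1e-12):
    """Pragmatic value only: KL divergence from predicted outcomes to the
    preference distribution (lower = outcomes closer to preferences)."""
    return float(np.sum(predicted * np.log((predicted + eps) / (preferred + eps))))

efe = {name: expected_free_energy(p, preferences) for name, p in policies.items()}

# Softmax over negative EFE: lower expected free energy means a higher
# probability of that policy being selected.
names = list(efe)
g = np.array([-efe[n] for n in names])
probs = np.exp(g - g.max())
probs /= probs.sum()

best = names[int(np.argmax(probs))]
print(best)  # policy_A: its predicted outcomes track the preferences
```

The softmax step is one common way to turn policy evaluations into a stochastic selection rule; a full active inference treatment would add the epistemic term and roll policies out over multiple time steps.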

From the agent’s perspective, the lived sense of being able to do otherwise is closely tied to the experience of counterfactual policy evaluation. When an individual considers whether to speak up in a meeting or remain silent, they internally construct multiple plausible futures, compare their expected outcomes, and feel that more than one path is open. Under predictive processing, this openness is epistemic and computational rather than metaphysical. The brain does not know which trajectory will be realized; it entertains several and assigns them probabilities based on its priors and incoming evidence. Once a particular action is initiated, sensory feedback collapses that internal uncertainty: the organism finds itself on one path rather than another. In a block universe, these internal sequences of deliberation, the associated uncertainty, and the singular realized outcome coexist as a determinate arrangement. The ā€œcould haveā€ talk is cashed out in terms of what the agent’s generative models represented as available options and how sensitive the actual trajectory is to small variations in those earlier representational states.

This framing helps disentangle two senses of ā€œdeterminismā€ that often get conflated. Physical determinism, as implied by many standard formalisms, says that complete microphysical information at one time, plus the laws of nature, fix the entire history of the universe. Computational determinism, by contrast, concerns whether a given cognitive architecture always produces the same outputs from the same inputs. Even in a physically deterministic block universe, a predictive system can be computationally indeterministic in the sense that it samples stochastically from probability distributions encoded in its priors. Neural noise, random synaptic release, or algorithmic sampling routines introduce variability in how the system explores policy space and updates beliefs. Yet once the whole spacetime is fixed, specific random draws are themselves determined as part of the overall pattern. Agency, within this picture, is not impaired by physical determinism; it is expressed through the particular stochastic active inference structure realized along the organism’s worldline.
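The point that stochastic draws are themselves part of the fixed overall pattern can be made concrete with a seeded sampler. The policy probabilities below are illustrative assumptions; the seed plays the role of the fixed boundary condition:

```python
import numpy as np

# Sketch: a "stochastic" policy sampler whose draws are nonetheless fully
# fixed once the seed is fixed -- an analogue of random draws being
# determinate parts of the overall spacetime pattern.

def sample_policies(seed, policy_probs, n=5):
    """Draw n policy indices from a prior over policies, seeded."""
    rng = np.random.default_rng(seed)
    return list(rng.choice(len(policy_probs), size=n, p=policy_probs))

probs = [0.5, 0.3, 0.2]  # prior over three candidate policies

run_1 = sample_policies(seed=42, policy_probs=probs)
run_2 = sample_policies(seed=42, policy_probs=probs)

# From "inside", each draw looks random; given the seed, the whole
# sequence of draws is determined and repeats exactly.
print(run_1 == run_2)  # True
```

Computational stochasticity (sampling from a distribution) and physical determinism (the realized draws being fixed) coexist without tension, which is exactly the distinction the paragraph above draws.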

Control becomes especially salient when considering precision weighting, a key component in predictive processing. Precision reflects the estimated reliability of prediction errors at different hierarchical levels. By modulating precision, the system decides which signals to trust more in shaping its beliefs and policies: bottom-up sensory data, top-down expectations, or contextual cues such as social norms. Executive functions, often associated with frontal networks, can be modeled as structures that regulate precision allocation across the hierarchy. When an agent exerts self-control—resisting an impulse, maintaining a long-term plan despite immediate temptations—it may be deploying high-level processes to down-weight transient, reward-driven signals and up-weight more abstract goal-related priors. In a block universe, episodes of self-control are identified with those stretches of the worldline where precision modulation yields behavior aligned with long-range models, even in the face of conflicting low-level drives.
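Precision weighting has a simple Gaussian illustration: the updated estimate is a precision-weighted average of a top-down prior and bottom-up evidence, so modulating precision shifts which source dominates. The numerical values are toy assumptions:

```python
# Sketch of precision-weighted belief updating: the posterior mean is a
# precision-weighted average of a prior and an observation.

def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """Gaussian update: each source is weighted by its estimated
    reliability (precision = inverse variance)."""
    total = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean +
                      obs_precision * obs) / total
    return posterior_mean, total

# High sensory precision: the observation dominates the estimate.
m_hi, _ = precision_weighted_update(prior_mean=0.0, prior_precision=1.0,
                                    obs=10.0, obs_precision=9.0)

# Down-weighted sensory precision: the prior dominates -- a toy analogue
# of "self-control" suppressing a transient, impulse-driven signal in
# favor of an abstract goal-related prior.
m_lo, _ = precision_weighted_update(prior_mean=0.0, prior_precision=1.0,
                                    obs=10.0, obs_precision=0.1)

print(m_hi, m_lo)  # 9.0 0.909...
```

The same observation moves the estimate almost all the way (9.0) or hardly at all (about 0.91), depending solely on the precision assigned to it; executive modulation of precision is, on this framework, control over exactly that weighting.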

Many worries about free will in a block universe hinge on the intuition that if all actions are fixed, the sense of responsibility is undermined. Yet on a predictive processing account, responsibility is grounded in how actions relate to the agent’s internal models of norms, expectations, and consequences. An individual is responsible when their behavior flows through the right kind of informational channels: they understood relevant reasons, had the capacity to represent alternative courses of action, and could have responded differently had their generative models been appropriately revised by rational argument or new evidence. These modal claims are spelled out in terms of counterfactuals about how the system’s priors and likelihood mappings respond to varied inputs. Eternalism does not erase these counterfactual structures; it embeds them in the manifold as stable relations among actual and possible parameter settings. Evaluating responsibility then becomes an exercise in mapping how, across the worldline, the agent’s predictions and policies would have shifted under different informational conditions.

This is particularly vivid in cases of impaired agency, such as addiction, compulsion, or certain psychiatric disorders. Predictive processing offers a unified story in which such conditions involve maladaptive priors or precision weightings that lock the system into narrow policy loops. For example, in addiction, drug-related cues may carry pathologically high precision, overwhelming competing predictions about long-term harm or alternative rewards. The agent’s ability to explore counterfactual policies (ā€œWhat if I stay clean?ā€) and assign them realistic value is compromised: simulated sober futures may be accorded low probability or low expected utility relative to drug-related outcomes. In a block universe, an addicted agent’s worldline includes these distorted generative models and the constrained set of behaviors they produce. Holding this person fully responsible in the same way as a non-addicted agent becomes questionable, not because determinism is different, but because the internal control structures that underwrite robust active inference are degraded.

Another dimension of agency concerns how organisms shape their own future priors by engaging in long-term projects. Education, therapy, habit formation, and deliberate practice can all be seen as strategies for sculpting the generative models that will govern later behavior. When someone chooses to enter psychotherapy, they are initiating a process that is expected to reconfigure deep priors about self, others, and threat, thereby altering how later experiences are interpreted and which actions become salient. A musician who commits to daily practice is intentionally driving plasticity in motor and auditory hierarchies, anticipating greater fluency and expressive control. In an eternalist depiction, these self-shaping endeavors are patterns in which earlier segments of the worldline contain meta-policies that target the very parameters of the generative model, resulting in systematic changes in later segments. The sense of autonomy is partly constituted by the existence of such self-modifying loops: the agent not only responds to the world but also works on the machinery through which it will respond in the future.

The relationship between agency and consciousness within predictive processing also deserves attention. On some proposals, what we call conscious intention corresponds to high-level predictions about actions and their consequences that are accorded especially high precision and broadcast across multiple neural subsystems. These globally available action-predictions help coordinate perception, motor control, and evaluative systems, and they generate the distinctive phenomenology of ā€œI am about to do this for these reasons.ā€ Unconscious actions, by contrast, may be guided by lower-level generative models or by high-level priors that never gain sufficient precision to achieve global broadcast. Eternalism situates this distinction as a structural feature of the worldline: stretches where high-level predictions are widely integrated and behaviorally efficacious are those we retrospectively classify as consciously willed; others are more automatic or reflexive. The normative weight we attach to conscious actions—praise, blame, pride, remorse—tracks precisely these differences in how deeply action selection is embedded in an agent’s representational and evaluative architecture.

Questions about retrocausality sometimes arise in discussions of agency, especially when considering whether our intentions might be influenced by information from our own future. Predictive processing does not require such exotic influences; it explains anticipatory behavior in terms of learned regularities and internal simulations conditioned on past and present evidence. Nonetheless, thinking about retrocausality clarifies why the usual forward-directed structure of control is so central. If signals from future brain states could genuinely affect present ones outside the constraints of the past light cone, the space of available policies and the evaluation of expected outcomes would be radically altered. Agents might exploit partial knowledge of their actual later trajectories, undermining the epistemic openness that underlies deliberation. In a standard block universe governed by relativistic causality, such influences are disallowed. The agent’s capacity for predictive control resides entirely in how well its generative models use past information and present context to infer and influence the parts of spacetime that lie within its future light cone.

Social agency extends these considerations into multi-agent settings, where each organism’s generative models must track not only the physical environment but also the policies and hidden states of others. Here, predictive processing underwrites sophisticated forms of coordination and joint action. Agents model each other’s beliefs, desires, and intentions, attempting to minimize not only their own prediction error but also, in some cases, shared or coordinated error with partners. Collaborative projects—building a house, conducting an experiment, raising a child—require generative models that represent joint policies and shared goals. In a block universe, these collaborative patterns appear as intertwined worldlines where the internal models of multiple agents covary in structured ways and their actions synchronize across time. Agency at the collective level is realized when these interlocking generative models successfully orchestrate a shared trajectory that none of the participants could have produced alone.

Legal and moral practices implicitly rely on such a multi-level conception of agency. Institutions treat individuals as responsible when their behavior is sufficiently predictable and modifiable through reasons, incentives, and sanctions: that is, when their generative models are sensitive to social feedback and capable of updating in norm-responsive ways. When this sensitivity is absent—due to developmental limitations, severe pathology, or coercive environments—responsibility is typically mitigated. Eternalism does not render these distinctions illusory; it tells us that the entire network of practices, judgments, and reactive attitudes is itself part of the fixed spacetime tapestry. The meaningful question is whether, across that tapestry, the contours of moral practices tend to interact constructively with the dynamics of predictive control, encouraging worldlines in which agents develop richer, more flexible generative models that support cooperative, norm-guided behavior.

Ultimately, the compatibility of agency and free will with a block universe turns on what is demanded of free will. If freedom is equated with a metaphysically open future, eternalism will seem hostile. But if freedom is understood in terms of the internal organization and counterfactual richness of predictive control—the capacity to model alternatives, integrate reasons, reshape one’s own generative machinery, and reliably steer behavior in line with higher-order goals—then predictive processing offers a detailed, scientifically grounded account of how such freedom can be instantiated in a four-dimensional spacetime. The worldline of a richly predictive agent is not just another inert trajectory; it is a structure within the block in which priors, policies, and precision weightings together realize a distinctive form of self-governed dynamics, even though the entirety of that dynamics is, from an external perspective, already laid out.

Empirical prospects and philosophical challenges

Empirical engagement with the picture of predictive minds in a block universe begins with clarifying what, if anything, it predicts that differs from more familiar, implicitly presentist readings of predictive processing. Much of the mathematics of the Bayesian brain is neutral on temporal ontology: differential equations and generative models can be interpreted as describing either unfolding dynamics or four-dimensional structures. Yet certain empirical programs can probe how organisms internally encode time, causation, and possibility—features that must be understood differently if eternalism is true but the organism experiences an apparent flow and openness.

One promising line of research concerns how neural systems represent temporal order and duration. If the brain is an inference machine that compresses segments of its worldline into generative models, then we should expect to find neural codes that explicitly track temporal relations among events rather than merely their content. Empirical work on time cells in the hippocampus and prefrontal cortex, and on sequence-sensitive activity in sensory cortices, already suggests that the brain encodes not just ā€œwhatā€ and ā€œwhereā€ but ā€œwhen.ā€ Under a block universe interpretation, these codes correspond to internal summaries of extended segments of spacetime. Experiments that manipulate the perceived order of events—using temporal illusions, rapid serial visual presentation, or delayed feedback—can illuminate how such codes are constructed and how brittle they are. If temporal order is inferred rather than given, then systematic mismatches between physical sequence and experienced sequence should be predictable from the structure of the generative models.

Closely related is the study of the brain’s forward models in motor control. Predictive processing posits that the nervous system anticipates sensory consequences of actions with exquisite temporal precision, enabling smooth movement and rapid error correction. Empirically, this is investigated using tasks that perturb feedback timing—introducing small delays or temporal distortions in visual or haptic signals—and assessing how quickly agents recalibrate. Under eternalism, these recalibration trajectories are pre-written, but the organism’s internal experience of surprise and adaptation is still governed by how its priors about temporal contingencies are violated and revised. Examining individual differences in adaptability, and relating them to structural and functional connectivity in cerebellar and cortical networks, can help map the architectures that support temporally fine-grained prediction, and thus the mechanisms through which a static four-dimensional structure supports dynamically experienced agency.

Another empirical frontier involves mental time travel and counterfactual cognition, now studied through a combination of neuroimaging, lesion work, and behavioral paradigms. Experiments that require participants to vividly imagine future episodes or alternative pasts, and then track neural activation patterns, can reveal how generative models are re-used across temporal modes. The overlap between episodic memory networks and prospection networks, for example, supports the idea that the same machinery reconstructs past events and simulates future ones. From within a block universe perspective, this overlap is a structural fact about how different segments of the worldline are related: later neural states in which one ā€œremembersā€ or ā€œanticipatesā€ are systematically linked to earlier and later environmental states they model or mis-model. Empirical measures of the fidelity, flexibility, and bias of such simulations—how often imagined futures converge on actual outcomes, how readily people update their simulations in light of new information—provide data for constraining theories of how a temporally finite organism navigates a fixed spacetime.

Investigations of predictive control and free will likewise have empirical handles. Voluntary action, deliberation, and the feeling of being able to do otherwise have been examined using paradigms such as Libet-style timing tasks, conflict tasks (e.g., Stroop, go/no-go), and decision-making under uncertainty. Predictive processing suggests that what is experienced as intention is tied to high-level generative states that predict both upcoming movements and their justifications. Debates about Libet’s findings often hinge on whether early motor-preparatory signals reflect unconscious decisions that undermine freedom. On an active inference view embedded in eternalism, these preparatory signals, conscious urges, and ultimate actions are all linked via hierarchical priors and prediction errors along the worldline. Empirically, one can test whether modulating higher-level predictions—through instruction, priming, or social framing—systematically alters both the timing of conscious intention reports and the likelihood of action, thereby evidencing the role of generative models in shaping experienced agency, regardless of metaphysical commitments.

Psychopathology offers another rich empirical domain. Conditions such as depression, anxiety, schizophrenia, and PTSD can be characterized, within predictive processing, as disorders of priors, likelihoods, or precision weighting. For instance, excessively pessimistic priors about the future may lead to reduced exploration and diminished belief updating, while abnormally precise high-level priors in psychosis can make contradictory sensory evidence ineffective. These alterations have clear consequences for temporal cognition: depressed individuals may simulate future scenarios that are uniformly negative and low in detail; trauma survivors may experience intrusive, high-precision replays of specific past episodes; psychotic patients may misattribute agency or causal order. Studying how these populations remember, anticipate, and generate counterfactuals can empirically ground claims about the role of generative models in structuring an experienced timeline. Under a block universe interpretation, such disorders are not merely ā€œdifferent ways of unfoldingā€ but different four-dimensional patterns in which the integration of temporal information is impaired, leading to systematically distorted worldlines of thought and behavior.

Cross-species and developmental research complements this work by examining how temporal prediction capacities emerge and vary. Infants gradually learn temporal contingencies—such as the delay between reaching and grasping or between vocalizations and responses—providing a window into how priors about the structure of time are acquired. Longitudinal studies that track the development of episodic memory, prospection, and delay of gratification can reveal how the neural substrates of mental time travel mature. Under eternalism, these developmental trajectories are themselves fixed but internally lawful: earlier worldline segments contain cruder generative models that are progressively refined. The empirical question is how environmental variability, caregiving, and education shape these refinements. Comparative work with non-human animals—examining, for example, future-oriented caching behaviors in birds or planning in primates—can indicate which aspects of temporally rich cognition are late evolutionary add-ons and which are more basic features of predictive brains navigating structured environments.

Empirical prospects also extend into the domain of social and cultural scaffolding. Human temporal cognition is heavily mediated by artifacts such as clocks, calendars, writing, and digital reminders, as well as by shared narratives about history and the future. These external structures effectively extend the brain’s generative models beyond the skull, providing stable priors about temporal regularities (work schedules, life stages, institutional cycles) and tools for coordinating plans. Cross-cultural studies show considerable variation in how time is conceptualized—linear versus cyclical metaphors, emphasis on long-term planning versus present focus—which correlates with differences in economic behavior, health decisions, and intergenerational investment. Empirically, one can examine how exposure to distinct temporal institutions modulates neural and behavioral markers of prediction, delay discounting, and counterfactual reasoning. From a block universe angle, these institutions are parts of the spacetime structure within which individual worldlines are embedded; their regularities help shape the priors that agents rely on when inferring where they are in broader temporal patterns.

Turning to philosophical challenges, a central worry is underdetermination: the same empirical data about predictive brains can often be modeled compatibly with both presentist and eternalist temporal ontologies. Because the equations of motion and the algorithms of inference can be read either dynamically or structurally, empirical success of predictive processing does not by itself select a block universe picture. Critics may argue that importing eternalism adds no explanatory value and risks reifying metaphors about four-dimensional patterns. Proponents must then show that treating cognitive processes as worldlines within a block can clarify puzzles—about causation, counterfactuals, and the sense of flow—in ways that a purely present-focused account cannot. This is less an experimental issue than a matter of explanatory economy and conceptual clarity: whether the block universe interpretation allows for more coherent integration of neuroscience with fundamental physics and metaphysics of time.

Another challenge concerns the status of the temporal asymmetries embedded in predictive processing. The framework assumes, and relies on, a robust distinction between past and future: agents have memory traces of earlier events but must predict later ones; they can act on what comes next but not on what has already occurred. Eternalism, especially when paired with time-symmetric fundamental laws, appears to jeopardize these asymmetries. Philosophers of physics often appeal to low-entropy boundary conditions and statistical arguments to recover an arrow of time from an underlying symmetric microdynamics. The task here is to show how these physical asymmetries are mirrored in the informational structure of an organism’s worldline. Agents’ priors about causation and efficiency are learned under the constraint that signals from future events are inaccessible, even if those events are real in the block. This yields a naturalistic account of why predictive models are oriented ā€œforwardā€ without positing a metaphysically privileged present. Nevertheless, critics may contend that the resulting picture leaves the phenomenology of temporal flow underexplained: why does a static block give rise to such a compelling sense of passage?

The phenomenology of consciousness and time raises further philosophical difficulties. On the predictive processing view, the sense of a specious present and of continuous flow arises from integrating information over short temporal windows and from updating generative models in response to prediction errors. If eternalism is true, these integration windows and updates are features of specific neural configurations spread across the manifold. Some philosophers worry that this merely redescribes the problem: instead of explaining why there is an experienced now, it points to neural correlates whose existence is itself laid out timelessly. One way to respond is to emphasize that ā€œflowā€ is an internal structural property of an information-processing system: a matter of how representations at different times relate to each other, how they encode change, and how they give rise to higher-order models of one’s own temporally extended state. Yet the adequacy of this response depends on contentious assumptions about the nature of phenomenal consciousness, and in particular on whether a purely structural description can capture what it is like to experience time passing.

Relatedly, normative notions—such as truth, error, regret, and responsibility—take on a distinctive flavor in a block universe, and not all philosophers are satisfied with the resulting picture. If all events and states are equally real, then prediction error and surprise are simply relations between different temporal parts of the agent. Some argue that this undermines the idea that the agent ā€œdiscoversā€ anything or that its cognitive life involves genuine openness. On this view, the Bayesian machinery describes how one part of the worldline encodes probabilistic beliefs about another, but it cannot ground a robust sense of epistemic agency. Advocates reply that epistemic and practical rationality need only be defined relative to the informational situation of the agent at each time: even in a fixed block, some generative models are better than others at tracking regularities and guiding action, and agents can be evaluated accordingly. Still, this debate turns on deep issues about whether rationality presupposes an open future or can be fully cashed out in terms of local uncertainty within a closed spacetime.

Retrocausality poses an additional conceptual challenge. Some interpretations of quantum mechanics allow, or appear to allow, influences that run from future measurement settings to past states. If such retrocausal structures are physically realized, then the constraints on information flow that underwrite standard predictive processing models may require revision. An optimal generative system might, in principle, incorporate information from what we ordinarily call the future, thereby blurring the distinction between memory and prediction. Empirically, it is unclear whether any organism could access such information, given decoherence, noise, and the scales involved. Philosophically, however, the mere possibility raises questions about how flexible the predictive processing framework is with respect to more exotic temporal structures. Can it describe agents whose worldlines include feedback loops that, in the block, run ā€œbackwardā€ as well as forward, without collapsing the very notions of priors, likelihoods, and updating?

A different family of objections targets the explanatory autonomy of cognitive science. Some philosophers worry that leaning heavily on the block universe picture risks reducing cognitive processes to mere patterns in fundamental physics, thereby threatening levels-of-description pluralism. On this concern, if one takes too seriously the idea that prediction error minimization is nothing but a structural constraint on a worldline, then the language of beliefs, desires, and reasons may seem dispensable. Yet empirical work in psychology and neuroscience relies on these higher-level constructs to frame experiments, interpret data, and design interventions. Defenders of the multi-level approach argue that the block universe interpretation should be viewed as a consistency constraint, not a replacement: whatever is true at the cognitive level must be implementable in four-dimensional spacetime, but cognitive explanations can retain their own criteria of adequacy, focusing on algorithmic and representational organization rather than microphysical detail.

Another philosophical worry concerns counterfactuals. Predictive processing depends on counterfactual evaluation: generative models represent not only what is the case but what would happen under different actions or conditions. Eternalism, especially in its simplest form, labels one concrete spacetime as actual and treats alternatives as mere possibilities. Some metaphysicians argue that this sits uneasily with talk about ā€œwhat would have happened ifā€ because, on a strict reading, no other spacetimes exist to serve as truthmakers. Advocates of a block universe often appeal to standard counterfactual semantics: alternative possibilities are represented by nearby models or solutions to the same laws with slightly altered initial conditions, not by concrete additional blocks. The predictive brain’s counterfactual simulations then become internal constructions that approximate such alternative models. Philosophically, however, questions linger about whether this semantic apparatus is sufficient to ground the rich normative and psychological roles that counterfactuals play in learning, regret, and planning.

Finally, there is the challenge of empirical access to temporal ontology itself. Experiments can probe how brains represent time, causation, and possibility, but they cannot directly settle whether the universe is fundamentally a block or a growing ā€œbecomingā€ process. At best, empirical findings can show that our cognitive architecture is compatible with, or even suggestive of, one view rather than another. For instance, the heavy reliance on priors about regularities across time, the integration of past and future in shared neural machinery, and the perspectival nature of temporal flow all sit comfortably with eternalism. Yet a determined presentist can reinterpret these same data as facts about how a temporally evolving system copes with an open future. The philosophical challenge is to articulate what would count, even in principle, as evidence for or against a particular temporal ontology, and to ensure that appeals to predictive processing do not smuggle in metaphysical conclusions under cover of empirical language.
