{"id":3256,"date":"2026-01-20T07:01:56","date_gmt":"2026-01-20T07:01:56","guid":{"rendered":"https:\/\/beyondtheimpact.net\/?p=3256"},"modified":"2026-01-20T07:01:56","modified_gmt":"2026-01-20T07:01:56","slug":"neural-coding-with-bidirectional-causation","status":"publish","type":"post","link":"https:\/\/beyondtheimpact.net\/?p=3256","title":{"rendered":"Neural coding with bidirectional causation"},"content":{"rendered":"<p><a name=\"bidirectional-frameworks-for-neural-information-processing\"><\/a><\/p>\n<p>Neural information processing can be understood as unfolding within a web of influences that run both from past to future and from higher to lower levels of a system, making strictly one-way descriptions of causation incomplete. Bidirectional perspectives emphasize that neural coding is not simply about transforming inputs into outputs, but about maintaining ongoing cycles of interaction in which activity at one moment both constrains and is constrained by activity at other times and levels. This view reframes neural activity patterns as nodes in a dense network of reciprocal influences rather than mere responses to externally imposed stimuli.<\/p>\n<p>In a bidirectional framework, the same neural population can simultaneously participate in feedforward evidence accumulation and feedback-driven constraint satisfaction. Instead of assigning neurons to exclusively sensory or decisional roles, the focus shifts to how information is constantly recycled within loops: signals ascend to inform higher-order representations while descending influences reshape the very patterns that carried the original signals. 
These loops support a more flexible notion of neural coding, in which the meaning of a spike train or firing pattern depends on the current configuration of recurrent and feedback connections as much as on the immediate input conditions.<\/p>\n<p>Bidirectional architectures mesh naturally with the Bayesian brain hypothesis, which characterizes perception and cognition as forms of probabilistic inference. Under this hypothesis, neural systems combine sensory evidence with internally maintained priors about the structure of the world, generating ongoing prediction and error signals that circulate through cortical hierarchies. Bottom-up streams carry information that challenges or supports current hypotheses, while top-down streams carry prior expectations that shape how new evidence is interpreted. The resulting dynamics are better captured as reciprocal exchanges than as simple chains of cause and effect.<\/p>\n<p>From this angle, neural activity can be thought of as encoding joint constraints among variables, rather than unidirectional mappings. A given pattern may reflect the interplay between current sensory conditions, expectations derived from long-term experience, task demands, and internal states such as motivation or arousal. Bidirectional frameworks highlight how these factors mutually influence one another in real time, so that changes in one domain cascade across the network and are, in turn, modulated by the network\u2019s new state. Causation is distributed, circular, and context-sensitive rather than linear and modular.<\/p>\n<p>Recurrent connectivity is central to these frameworks. Loops within and between brain regions allow activity to reverberate, be reinterpreted, and be selectively amplified or suppressed. Local microcircuits can implement rapid cycles that refine raw input into coherent patterns, while long-range feedback pathways allow higher-level representations to bias processing in earlier cortices. 
This continuous back-and-forth allows the brain to stabilize interpretations in the face of noisy data, to disambiguate ambiguous stimuli, and to switch quickly between alternative hypotheses when contextual cues change.<\/p>\n<p>Bidirectionality also challenges the simplistic division between encoding and decoding. In a purely feedforward model, encoding is the mapping from stimuli to neural responses, and decoding is what an external observer does to reconstruct the stimulus from activity. In a bidirectional framework, the brain is continuously decoding its own activity while simultaneously re-encoding the results of those interpretations into updated neural states. Internal readout and re-entrance of information blur the line between cause and effect, making every stage of processing both a recipient and a source of constraints on other stages.<\/p>\n<p>Time plays a crucial role in these models. Neural responses at a given moment depend not only on recent history but also on anticipatory states that embody expectations about events that are likely to occur. These anticipatory states bias processing before inputs arrive, effectively shaping what counts as evidence and what is treated as noise. While not invoking physical retrocausality in a strict sense, these anticipatory influences give neural dynamics a forward-looking character in which future-oriented predictions function as active determinants of present neural patterns.<\/p>\n<p>Within a bidirectional framework, prediction and priors are not passive background features but active forces that sculpt the flow of information. Top-down signals carrying prior beliefs can selectively gate which bottom-up signals are propagated, which are suppressed, and which are earmarked for further scrutiny. Conversely, persistent mismatches between prediction and incoming data can reshape these priors, altering future top-down influences. 
The ongoing negotiation between these two directions of influence underlies adaptive behavior and learning, ensuring that the system remains sensitive to genuine environmental change while preserving useful regularities distilled from past experience.<\/p>\n<p>This perspective also alters how information content is characterized. Instead of treating information as residing solely in instantaneous firing rates or spike patterns, bidirectional frameworks emphasize information embedded in trajectories of activity over time and across interconnected populations. The meaning of a given configuration can only be understood with respect to how it emerges from, and feeds back into, larger cycles of interaction. Neural coding, in this sense, becomes a property of the dynamic interplay among components rather than a fixed signature attached to individual neurons or layers.<\/p>\n<p>By viewing neural processing as intrinsically bidirectional, it becomes possible to unify diverse phenomena\u2014such as attention, expectation, context effects, and rapid perceptual reinterpretation\u2014under a single explanatory umbrella. All of these phenomena involve top-down influences that reshape ongoing processing, as well as bottom-up signals that refine and sometimes overturn higher-level states. The resulting frameworks offer a richer and more faithful account of how brains operate in complex, changing environments, where mutual influence and looping causation are the norm rather than the exception.<\/p>\n<h3>Causal architectures in recurrent neural circuits<\/h3>\n<p>Recurrent neural circuits instantiate causal structures that depart markedly from the straightforward chains of influence found in simple feedforward models. In a purely layered architecture, causes are assumed to propagate in one direction: inputs determine hidden-unit activity, which in turn determines outputs, with no direct return path. 
By contrast, recurrent circuits introduce closed loops in which the activity of a neuron at one moment can both shape, and be shaped by, the activity of that same neuron and its neighbors at later times. This looping structure embeds causation in cycles rather than lines, so the effective influence of any event must be traced through a web of mutually dependent updates rather than through a single pass from stimulus to response.<\/p>\n<p>At the most local level, recurrent microcircuits in cortex provide a clear example of such causal cycles. Excitatory pyramidal neurons densely interconnect with one another and with diverse classes of inhibitory interneurons, forming motifs such as recurrent excitation, feedback inhibition, and disinhibitory chains. A brief input to a subset of neurons can reverberate within this microcircuit, recruiting additional cells and reshaping the balance of excitation and inhibition over tens or hundreds of milliseconds. The resulting dynamics cannot be captured by a static mapping from input strength to firing rate; instead, neural coding emerges from the evolving pattern of interactions as activity propagates through loops and returns to modulate its own preconditions.<\/p>\n<p>On a larger spatial scale, cortical and subcortical areas are linked by bidirectional projections that form nested feedback loops. Visual, auditory, and somatosensory cortices, for example, receive ascending projections from thalamic nuclei while simultaneously sending descending projections back to those same nuclei, and higher association areas send dense feedback to early sensory regions. These reciprocal pathways mean that the causal flow of influence does not stop at primary cortex but continues in loops spanning multiple levels of the hierarchy. 
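The local reverberation described above can be sketched with a toy two-population rate model. Everything here is illustrative: the helper name `simulate_microcircuit`, the coupling strengths, and the time constants are invented for the sketch, not measured values.

```python
def simulate_microcircuit(pulse, steps=200, dt=1.0):
    """Two-population rate model: an excitatory unit E and an inhibitory
    unit I coupled in a loop. A brief input pulse to E reverberates: E
    recruits I, I feeds back to damp E, and activity outlasts the input
    before decaying, rather than tracking the input instantaneously."""
    w_ee, w_ei, w_ie = 1.2, 1.0, 1.5   # illustrative coupling strengths
    tau_e, tau_i = 10.0, 5.0           # illustrative time constants
    relu = lambda v: max(v, 0.0)
    E, I = 0.0, 0.0
    trace = []
    for t in range(steps):
        inp = pulse if t < 5 else 0.0   # brief external drive, then silence
        dE = (-E + relu(w_ee * E - w_ie * I + inp)) / tau_e
        dI = (-I + relu(w_ei * E)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return trace

trace = simulate_microcircuit(pulse=2.0)
# Excitatory activity persists well after the 5-step pulse ends,
# then is pulled back toward zero by the inhibitory feedback.
```

The point of the sketch is the shape of the response: the loop, not the stimulus, determines how long and how strongly the input echoes through the circuit.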
Activity patterns in a high-level region representing abstract categories or goals can causally influence the firing of neurons in early sensory areas, which in turn feed forward to update the high-level representation, closing the loop.<\/p>\n<p>Causal architectures in recurrent circuits are thus best understood as graphs with rich cycles rather than as trees or chains. Within such graphs, any given synapse participates in multiple causal paths, some relatively short and local, others long and trans-areal. A spike traveling along one path may alter the probability of spikes traveling along another path moments or even seconds later, through changes in membrane potential, local network oscillations, or synaptic plasticity. From the perspective of analysis, this implies that interventions at one point in a circuit can have widespread, temporally extended consequences that are not easily decomposed into independent effects, because those consequences will themselves feed back into the conditions that mediate further changes.<\/p>\n<p>Recurrent architectures also create conditions in which internal states mediate causal influence between sensory inputs and motor outputs. Rather than a direct transformation from stimulus to action, recurrent circuits maintain latent variables\u2014such as context, working memory contents, or task set\u2014that persist over time and shape how new inputs are processed. These latent variables are themselves products of previous inputs and actions, maintained through recurrent activity. A present decision may depend on a neural state that is both an effect of past events and a cause of the system\u2019s current interpretation of incoming signals, illustrating how recurrent causation can effectively \u201cstore\u201d history in patterns of ongoing activity.<\/p>\n<p>Time is therefore not merely a parameter over which signals decay; it is intrinsic to the causal architecture of recurrent circuits. 
Because current activity both depends on earlier states and constrains subsequent states, causation becomes a matter of trajectories through a high-dimensional state space. Small perturbations can be amplified or damped depending on the attractor structure of the network: some patterns of activity draw trajectories toward them (stable attractors), while others repel them. In such systems, the causal significance of a transient input lies in how it repositions the system in state space, altering which attractor basins are accessible and thus which future patterns of activity become likely or impossible.<\/p>\n<p>Attractor networks provide a concrete illustration of this idea. In a classic autoassociative memory circuit, recurrent excitatory connections are tuned so that certain patterns of activation are self-reinforcing. When the network is presented with a partial or noisy version of a stored pattern, recurrent dynamics drive the system toward the nearest stable configuration. Here, causation is distributed across time: individual synaptic interactions may be local and instantaneous, but the effective causal role of the input is realized through an extended sequence of updates that collectively implement pattern completion. The network\u2019s prior learning has shaped its attractor landscape, and this landscape in turn channels future activity, endowing the system with memory and context sensitivity.<\/p>\n<p>Cortical feedback loops engaged in perceptual inference embody a related causal organization, often framed within Bayesian brain theories. Higher-level regions encode hypotheses or predictions about the causes of sensory inputs, and send top-down signals that modulate the gain, receptive fields, or phase of ongoing activity in lower-level areas. Bottom-up error signals, representing mismatches between predictions and actual inputs, travel forward to adjust the higher-level hypotheses. 
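One way to make this prediction-error cycle concrete is a minimal inference loop in the spirit of predictive coding. This is a sketch, not a specific published model: the generative weights `W` and the helper name `infer_latent` are invented for illustration, and only the higher-level estimate is updated.

```python
def infer_latent(x, W, steps=100, lr=0.1):
    """Minimal two-level prediction-error loop: a higher-level estimate r
    generates a top-down prediction W*r of the input x; the bottom-up
    residual (x - W*r) travels forward and nudges r until prediction and
    input agree."""
    n, m = len(W), len(W[0])            # n input units, m latent units
    r = [0.0] * m
    for _ in range(steps):
        pred = [sum(W[i][j] * r[j] for j in range(m)) for i in range(n)]
        err = [x[i] - pred[i] for i in range(n)]        # prediction error
        for j in range(m):                              # error-driven revision
            r[j] += lr * sum(W[i][j] * err[i] for i in range(n))
    return r, err

W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # hypothetical generative weights
r, err = infer_latent([1.0, 2.0, 3.0], W)
# After settling, the residual errors are near zero: x is explained as W*r.
```

Each pass around the loop is a top-down cause (the prediction) followed by a bottom-up cause (the error), so the settled state is a product of both directions of influence.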
This cyclical exchange of prediction and error constitutes a form of closed-loop causation: higher-level states cause changes in lower-level responses, which in turn cause revisions to the higher-level states, and so on. The bidirectional nature of these interactions blurs the line between cause and effect at any single level, as each level alternates between being a driver and a responder within the overall loop.<\/p>\n<p>Feedback connections are frequently more numerous than feedforward ones, and they often target distinct subcellular compartments or neuronal types, such as apical dendrites of pyramidal cells or specific inhibitory interneurons. This anatomical differentiation supports specialized causal roles for feedback: rather than simply relaying more of the same kind of information, feedback may act as a contextual modulator, a gate, or a gain control on ascending signals. By shifting the excitability or synchrony of target populations, top-down input can determine which bottom-up signals are allowed to shape downstream activity and which are suppressed. In this sense, the causal impact of sensory input is conditioned on the current feedback state, embedding the cause of a given response in an interaction between afferent drive and ongoing recurrent modulation.<\/p>\n<p>Recurrent inhibitory circuitry adds another layer of causal complexity. Feedback inhibition, feedforward inhibition, and lateral inhibition collectively sculpt the timing and precision of spikes, regulate competition among neuronal ensembles, and stabilize network activity. When a group of excitatory neurons becomes active, they recruit inhibitory interneurons that in turn suppress both the original group and its competitors. The resulting push-pull dynamics implement forms of winner-take-all selection, contrast enhancement, and gain control. 
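A minimal sketch of such competition, assuming a shared inhibitory pool and illustrative gain values (the function name and all constants are invented for the example):

```python
def winner_take_all(inputs, steps=300, dt=0.1):
    """Soft winner-take-all via recurrent inhibition: every unit excites
    itself and drives a shared inhibitory pool, and the pool inhibits all
    units, so the most strongly driven unit ends up suppressing its
    competitors."""
    relu = lambda v: max(v, 0.0)
    rates = [0.0] * len(inputs)
    pool = 0.0
    for _ in range(steps):
        pool += dt * (-pool + sum(rates))        # inhibition tracks total activity
        rates = [r + dt * (-r + relu(i + 1.1 * r - pool))
                 for r, i in zip(rates, inputs)]  # leak, drive, self-excitation,
    return rates                                  # and shared inhibition

rates = winner_take_all([1.0, 1.2, 0.9])
winner = rates.index(max(rates))   # the most strongly driven unit dominates
```

The excitation and inhibition here are mutually causal within every update, which is exactly why the outcome cannot be read off from the inputs alone.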
Cause and effect here are not linearly separable: excitatory activity causes inhibition, inhibition reshapes excitatory activity, and the balance between them is determined by the entire pattern of network activation at that moment.<\/p>\n<p>Neuromodulatory systems further expand the causal architecture by operating on slower timescales and broader spatial extents. Diffuse projections from structures such as the locus coeruleus, basal forebrain, and ventral tegmental area release neuromodulators that alter synaptic efficacy, firing thresholds, and adaptation properties across extended cortical territories. These modulatory influences effectively reconfigure the local causal relationships within recurrent circuits: under high neuromodulatory tone, a given input pattern may drive a strong and sustained network response, whereas under low tone the same input might produce only a weak, rapidly decaying response. Thus, global state variables like arousal or reward expectation do not merely sit outside the causal chain but reshape the very architecture through which influence flows.<\/p>\n<p>Importantly, recurrent architectures also admit forms of causation that appear temporally nonlocal when viewed at a coarse level. A decision made now may depend on a memory trace formed minutes or hours earlier, maintained either through ongoing network activity or through long-term synaptic changes wrought by that earlier activity. In this sense, past events continue to exert causal influence on current neural coding even when no obvious stimulus persists, thanks to the structural and dynamical properties of recurrent connections. 
While this does not imply physical retrocausality, it does mean that the brain\u2019s causal organization is best conceived as a history-dependent process in which earlier states leave durable imprints that shape the way later inputs are interpreted and acted upon.<\/p>\n<p>From an analytical viewpoint, these recurrent and hierarchical loops challenge common strategies for inferring causal structure from observational data. Correlations in firing between two neurons or regions may arise not because one directly causes the other, but because both are embedded in a shared loop where influence circulates through multiple paths. Even interventions such as microstimulation or optogenetic activation can propagate through recurrent circuits in unexpected ways, with initial perturbations being amplified, inverted, or damped by network dynamics before their behavioral consequences emerge. Interpreting such experiments thus requires models that explicitly incorporate feedback and recurrence rather than assuming that causal arrows always point from manipulated site to observed effect in a simple, linear manner.<\/p>\n<p>These considerations carry over to artificial recurrent neural networks and dynamical systems used in machine learning and computational neuroscience. Architectures like vanilla RNNs, gated recurrent units, and LSTMs all instantiate causal dependencies that unfold over sequences, with each time step depending on the previous hidden state as well as the current input. When extended with top-down connections or attention mechanisms, such models can mimic aspects of cortical feedback, where internal representations influence how the network processes subsequent inputs. 
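The history dependence at the heart of these models shows up even in a one-hidden-unit vanilla RNN; the weights below are arbitrary illustrative values.

```python
import math

def rnn_final_state(inputs, w_in=0.8, w_rec=0.5):
    """Minimal vanilla RNN with a single hidden unit: each state depends
    on the current input and the previous state, so the same inputs
    presented in a different order leave the network in a different
    final state."""
    h = 0.0
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h)   # recurrent update
    return h

a = rnn_final_state([1.0, 0.0, -1.0])
b = rnn_final_state([-1.0, 0.0, 1.0])
# Same multiset of inputs, different order, different final hidden state:
# the hidden state carries the causal imprint of the sequence's history.
```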
The resulting behavior demonstrates how relatively simple local update rules can give rise to complex, history-dependent causal patterns that more closely resemble those found in biological recurrent circuits.<\/p>\n<p>In all of these cases, the key feature of causal architecture in recurrent neural circuits is the replacement of strictly feedforward mappings with intertwined loops of influence. Activity at one node or level does not merely propagate onward; it also returns, directly or indirectly, to reshape its own initial conditions or the conditions under which future signals will be processed. Neural coding in such systems is inseparable from this looping causation: what a pattern of activity \u201cmeans\u201d depends not only on the external input that elicited it, but also on how that pattern is embedded in recurrent structures that determine its impact on subsequent states and the way in which those later states, in turn, reach back through the network\u2019s causal web.<\/p>\n<h3>Encoding and decoding under mutual influence<\/h3>\n<p>In systems governed by mutual influence, encoding cannot be defined as a unidirectional mapping from external stimuli to internal states, nor decoding as a separate, downstream operation carried out by some hypothetical observer. Instead, encoding and decoding become interleaved components of a single dynamical process in which the brain continuously interprets, reinterprets, and re-embeds its own activity. Neural coding is realized not by static labels attached to spikes but by evolving patterns that are simultaneously shaped by incoming inputs, internal models, and the network\u2019s recent trajectory.<\/p>\n<p>One way to unpack this is to consider that any neural population simultaneously plays at least three roles: it encodes certain variables of interest (such as sensory features or latent causes), it decodes inputs arriving from other populations, and it participates in shaping the priors that constrain future encoding. 
Under mutual influence, these roles are not cleanly separable in time. As a population begins to respond to an incoming signal, it is already under the sway of top-down expectations and lateral constraints, which effectively perform an internal decoding of what the incoming activity \u201cshould\u201d mean. The resulting state is then re-encoded as an updated set of predictions that will bias subsequent processing, closing a rapid interpretive loop.<\/p>\n<p>From a Bayesian brain perspective, this loop can be described as continual inference over hidden causes of sensory data. Bottom-up activity encodes a likelihood function\u2014how compatible the observed input is with different possible causes\u2014while top-down activity encodes priors that reflect long-term structure and current context. Decoding in this setting is the process of combining likelihood and priors to obtain a posterior estimate; encoding is the re-expression of this posterior in neural activity that then serves as both a new prior and a target for downstream decoding. Because these steps are implemented through recurrent and feedback connections, causation becomes inherently circular: posterior beliefs alter the very channels through which new evidence arrives, which in turn reshape those beliefs.<\/p>\n<p>In early sensory cortex, this interplay can be seen in how receptive fields shift and sharpen depending on task demands and expectations. Neurons that nominally \u201cencode\u201d simple features such as orientation or frequency actually participate in a much richer representational dance. When an organism anticipates a particular stimulus, top-down signals preconfigure these neurons so that certain feature combinations become easier to elicit and others harder. The firing pattern elicited by an identical physical input will differ depending on the anticipatory state, implying that what is being encoded is not raw stimulus structure but stimulus structure relative to current hypotheses. 
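A toy discrete version of this posterior computation shows how identical evidence is read differently under different anticipatory states; the hypothesis labels and numbers are invented purely for illustration.

```python
def posterior(prior, likelihood):
    """Bayes' rule over discrete hypotheses: combine a prior over causes
    with the likelihood of the observed activity under each cause."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Identical evidence (likelihoods), two different anticipatory states (priors).
likelihood = {"face": 0.6, "vase": 0.4}
expect_face = posterior({"face": 0.8, "vase": 0.2}, likelihood)
expect_vase = posterior({"face": 0.2, "vase": 0.8}, likelihood)
# The same bottom-up evidence yields opposite interpretations:
# under the face-expecting prior "face" dominates; under the
# vase-expecting prior "vase" does.
```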
Decoding these patterns therefore requires access to the concurrent pattern of feedback and context; without it, the same spike trains can be ambiguous or even misleading.<\/p>\n<p>Mutual influence also alters how population codes should be interpreted. In a feedforward view, a decoder could in principle be trained offline to map patterns of activity to stimuli, treating each pattern as a fixed point in a high-dimensional space. Under bidirectional dynamics, however, the relevant object is a trajectory through that space, shaped by ongoing cycles of prediction and error correction. Early in a trial, population activity may reflect an imprecise mixture of competing hypotheses; as feedback refines the state, the same population gradually moves toward a configuration that encodes a more settled interpretation. A decoder that fails to track this temporal evolution may conflate transient, exploratory states with stable representations, obscuring the true structure of neural coding.<\/p>\n<p>The distinction between \u201cencoding variables\u201d and \u201cdecoding algorithms\u201d thus needs to be reframed. Variables such as position, color, or decision value are not simply written into neural activity; they emerge as fixed points or limit cycles of the system\u2019s dynamics under particular constraints. Decoding in this context amounts to reading out which attractor or trajectory the system has settled into, given both external conditions and internal priors. At the same time, the network continually re-encodes these attractors by adjusting synaptic weights and modulatory gains in response to ongoing experience, so that what is decodable at one moment depends on a long history of prior decoding operations that have reshaped the network\u2019s structure.<\/p>\n<p>Attention illustrates how internal readout and re-encoding interlock. 
When a downstream region \u201cdecodes\u201d activity from an upstream area to determine which features or locations are behaviorally relevant, the resulting selection is often fed back to the upstream area via gain modulation or selective inhibition. This feedback effectively re-encodes the attended features by amplifying their neural representation and suppressing competitors. As a consequence, the decoding decision becomes a cause of subsequent encoding: what the system has decided to treat as important alters the future meaning of population responses, making attention an example of decoding that writes itself back into the code.<\/p>\n<p>Motor control offers another perspective. In classical schemes, motor areas are assumed to encode commands that are then read out by muscles. In a mutually influential setting, however, motor signals are continuously informed by efference copies and sensory predictions about the consequences of actions. Forward models in cerebellum and cortex decode the intended motor command to predict its sensory outcome, compare that prediction with actual feedback, and then feed corrective signals back into the motor command pathways. The movement-related code is therefore the product of ongoing error-driven refinement: the brain decodes its own motor intentions, checks them against predicted and observed consequences, and re-encodes updated commands that incorporate these comparisons.<\/p>\n<p>Even seemingly simple reflex pathways can embody such loops. Stretch reflex circuits, for example, do not merely encode muscle length and decode it into compensatory force. Descending pathways from motor cortex and brainstem modulate the gain and thresholds of spinal circuits, changing how afferent signals are interpreted. The same stretch input can thus lead to different output forces depending on behavioral context, posture, or expectation. 
Encoded variables like \u201cerror from desired limb position\u201d only make sense relative to a reference that is itself set by higher-level decoding of task goals and body state.<\/p>\n<p>These considerations complicate experimental attempts to distinguish encoding from decoding empirically. Analyses that treat neural activity as a fixed code\u2014subject to decoding by classifiers or regression models that ignore feedback\u2014risk conflating different regimes of network operation. For example, a neuron that appears to encode a specific stimulus feature under passive viewing conditions may carry predominantly prediction-error signals under active inference, or may switch to representing task-relevant categories when top-down input is strong. Because these regimes involve different balances between bottom-up drive and top-down constraint, the same spiking pattern may participate in divergent causal roles across conditions.<\/p>\n<p>To capture this fluidity, it is useful to model encoding and decoding as parameterized by internal context variables: arousal level, prior belief strength, current policy, and so on. In a bidirectional architecture, these variables modulate synaptic efficacy, membrane properties, and oscillatory phase relationships, thereby altering both what is encoded and how it can be decoded at any moment. For instance, strong priors about an expected stimulus can bias early cortical activity toward confirming interpretations, leading to more rapid but potentially less accurate decoding. Conversely, when priors are weak or conflict with new evidence, error signals may dominate, keeping representations in a more labile, exploratory state until the system converges on a new compromise between prediction and data.<\/p>\n<p>Oscillatory coupling offers a concrete mechanism for how mutual influence shapes the code. 
Phase relationships between populations determine when spikes arrive relative to windows of heightened excitability, making some pathways more effective conduits of influence than others at any given time. Top-down signals can alter these phase relationships, retiming the arrival of bottom-up spikes so that they either strongly drive or barely affect downstream activity. Under this regime, encoding is not just a question of which neurons fire, but when they fire relative to ongoing rhythms, and decoding must take into account the instantaneous oscillatory state to correctly infer the underlying variables.<\/p>\n<p>Population-level decoding schemes in downstream circuits are themselves shaped by experience-dependent plasticity, closing an additional loop. Synaptic weights between an encoding area and a decoding area are modified to improve readout accuracy or behavioral utility, but the same plasticity changes also alter the effective tuning of the encoding population. A downstream neuron that learns to respond selectively to a particular pattern of input will tend, via recurrent feedback, to bias upstream populations toward configurations that reliably elicit that pattern. Over time, this reciprocal adaptation can lead to co-tuned ensembles in which encoding and decoding are jointly optimized, but also mutually dependent, such that perturbing one side changes the code itself.<\/p>\n<p>Internal models of the world, expressed as structured patterns of connectivity and baseline activity, mediate the mutual influence between encoding and decoding. When a network has learned that certain features tend to co-occur, partial activation of that pattern will recruit the remainder via recurrent completion. In such cases, decoding a partial input as belonging to a particular pattern is inseparable from re-encoding the full pattern in the current state. 
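This completion process can be sketched with a classic Hopfield-style autoassociative network, here in a deliberately tiny pure-Python form; the patterns and helper names are invented for the example.

```python
def train_hopfield(patterns):
    """Hebbian outer-product weights for an autoassociative network:
    each stored pattern of +/-1 activities becomes an attractor."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def complete(W, state, steps=5):
    """Recurrent completion: each synchronous update moves the state
    toward the stored pattern most compatible with the current evidence."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, 1, -1, -1, -1]
other = [1, -1, 1, -1, 1, -1]
W = train_hopfield([stored, other])
cue = [1, 1, 1, -1, -1, 1]   # degraded cue: last element of `stored` flipped
restored = complete(W, cue)  # the dynamics re-instantiate the full pattern
```

Here the cue overlaps the first stored pattern far more than the second, so the recurrent dynamics settle into that attractor; a cue closer to the other stored pattern would generally settle there instead. Decoding the cue and re-encoding the full pattern are literally the same relaxation.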
The network\u2019s attractor landscape effectively serves as a repository of priors: decoding is the selection of an attractor compatible with incoming evidence, and encoding is the instantiation of that attractor as a concrete activity pattern that will steer future processing.<\/p>\n<p>Crucially, this mutuality undermines any attempt to define a code independently of the operations that use it. The functional significance of a firing pattern depends on how other circuits respond to it, but those responses in turn depend on the current configuration of feedback, modulatory tone, and synaptic structure, all shaped by previous cycles of interpretation. Neural coding in such systems is therefore relational and context-bound: what a pattern \u201cmeans\u201d cannot be pinned down without specifying the broader web of decoding processes into which it is embedded, and those processes themselves are continually updating what counts as a meaningful pattern.<\/p>\n<h3>Learning rules for two-way causal interactions<\/h3>\n<p>Learning in systems with two-way causal interactions must accommodate the fact that synaptic changes affect not only how information flows forward, but also how predictions and contextual influences propagate backward. Classical rules such as Hebbian plasticity and error backpropagation presume a relatively simple relationship between cause and effect: presynaptic activity contributes to postsynaptic responses, and plasticity is driven by correlations or errors measured at the output. In a bidirectional architecture, however, the same synapses participate in loops in which activity both upstream and downstream co-determine firing, making it necessary to define learning rules that are stable, local, and compatible with circular causation.<\/p>\n<p>One broad strategy for such learning is to treat network dynamics as implementing approximate inference, and synaptic updates as optimizing an implicit generative model of the world. 
Under this view, often associated with Bayesian brain theories and the free-energy principle, recurrent networks perform ongoing prediction and error correction, with ascending signals carrying deviations from expectation and descending signals encoding priors or beliefs about latent causes. Learning rules must then reinforce patterns of connectivity that reduce long-term prediction errors while preserving enough flexibility to adapt when environmental statistics change. This requires balancing plasticity across both feedforward and feedback pathways, so that neither direction dominates and destabilizes the inferential cycle.<\/p>\n<p>Hebbian learning provides a foundational template but needs to be refined for two-way interactions. The basic idea that \u201ccells that fire together, wire together\u201d can be extended to \u201ccells that co-participate in successful inference, wire together.\u201d In recurrent circuits, this means that synaptic strengthening should reflect not just raw co-activation, but coordinated participation in cycles that lead to accurate predictions, stable attractors, or behaviorally adaptive decisions. For example, if a particular pattern of top-down activity consistently helps disambiguate noisy bottom-up input, the synapses supporting that feedback pattern should be reinforced. Conversely, if certain recurrent loops repeatedly amplify misleading interpretations, their effective gain should be reduced through synaptic weakening or increased inhibition.<\/p>\n<p>Spike-timing-dependent plasticity (STDP) offers a more temporally precise mechanism for shaping two-way interactions. In STDP, the relative timing of pre- and postsynaptic spikes determines whether synapses are strengthened or weakened, capturing the causal order of events at a fine timescale. In recurrent circuits, STDP can align the timing of activity along loops such that predictive signals arrive in time to modulate, but not completely override, incoming evidence. 
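The canonical pair-based STDP window can be written down compactly; the amplitudes and time constant below are illustrative choices rather than measured parameters.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window. dt_ms is the postsynaptic spike time minus
    the presynaptic spike time: a presynaptic spike shortly before a
    postsynaptic spike (dt_ms > 0) potentiates the synapse; the reverse
    order depresses it, with the effect decaying exponentially as the
    spikes move apart in time."""
    if dt_ms > 0:      # pre leads post: causal pairing, potentiation
        return a_plus * math.exp(-dt_ms / tau)
    else:              # post leads pre: anti-causal pairing, depression
        return -a_minus * math.exp(dt_ms / tau)

lead = stdp_dw(5.0)    # synapse whose spikes lead their target by ~5 ms
lag = stdp_dw(-5.0)    # synapse whose spikes arrive ~5 ms too late
# lead is a weight increase, lag a decrease, and both effects fade
# for larger spike-time separations.
```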
For instance, if top-down neurons tend to fire slightly before their downstream targets when a correct prediction is made, feedback synapses that achieve this lead\u2013lag relationship will be potentiated, while those that fire too late or inappropriately will be depressed. Over time, such timing-sensitive rules can carve out recurrent pathways that embody reliable temporal predictions and suppress those that generate inconsistent or noisy influences.<\/p>\n<p>To avoid runaway excitation or pathological synchronization, Hebbian and STDP-like rules in bidirectional systems must be counterbalanced by homeostatic and inhibitory plasticity. Homeostatic mechanisms adjust overall synaptic strengths or intrinsic excitability to keep firing rates within functional ranges, ensuring that strengthened loops do not dominate the entire network. Inhibitory synapses can undergo plasticity that specifically targets overactive ensembles, damping excessive reverberation while preserving useful recurrent structure. These forms of meta-plasticity effectively regulate the space of possible loops, pruning those that destabilize neural coding and maintaining those that support accurate and flexible inference.<\/p>\n<p>In frameworks like predictive coding, learning rules are often expressed as gradient descent on a scalar quantity such as prediction error or variational free energy. Neurons representing predictions and neurons representing errors are connected in reciprocal fashion: predictions suppress error units, while error signals drive updates to prediction units. Synaptic plasticity then aims to adjust both the generative (feedback) and recognition (feedforward) pathways so that, under the learned model, errors become small on average. 
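A minimal one-layer sketch makes this scheme concrete; the linear generative model, the layer sizes, and the rates are simplifying assumptions chosen for readability:

```python
import numpy as np

# One-layer linear predictive coding: feedback weights W generate a
# prediction of the input, error units carry the residual, and both
# inference and learning use only locally available quantities.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # "sensory" input
W = rng.normal(scale=0.1, size=(4, 2))    # generative (top-down) weights
r = np.zeros(2)                           # prediction / latent units

for _ in range(200):                      # inference: settle the latent state
    e = x - W @ r                         # error units: input minus prediction
    r += 0.5 * W.T @ e                    # ascending errors refine predictions

W += 0.05 * np.outer(e, r)                # learning: postsynaptic error times
                                          # presynaptic activity, per synapse
```

Note that the weight update is just an outer product of the error vector and the latent activity: each synapse sees only its own endpoints.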
Crucially, this learning is driven locally by products of presynaptic activity and postsynaptic error signals, meaning that each synapse only needs information available at its endpoints, even though the overall optimization aligns a large, bidirectional network with complex environmental statistics.<\/p>\n<p>Biologically plausible approximations to backpropagation also attempt to harness two-way interactions. Algorithms such as feedback alignment, target propagation, and equilibrium propagation rely on the presence of feedback connections that carry teaching or target signals back into earlier layers. Instead of requiring exact symmetry between forward and backward weights, these schemes exploit the capacity of recurrent circuits to iteratively settle into states that reflect both input-driven activity and error-driven corrections. Synaptic updates are then computed from differences between network states before and after a teaching signal is applied, effectively using the network\u2019s own dynamics as a vehicle for propagating learning signals through its recurrent structure.<\/p>\n<p>Equilibrium-based learning rules are particularly well-suited to recurrent architectures. In such schemes, the network first relaxes to a \u201cfree\u201d steady state driven solely by inputs and its current synaptic configuration. A second phase then introduces a small nudging signal associated with a desired output or internal constraint, and the network relaxes again to a \u201cclamped\u201d or partially guided state. The difference in neural activity between these two phases encodes an error that is distributed across the entire network, including feedback and lateral connections. Local synaptic updates, computed as products of presynaptic firing and the change in postsynaptic activity between phases, implement a gradient step on an implicit objective without any explicit global backpropagation. 
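The two-phase procedure can be sketched on a toy symmetric network; the network size, nudging strength, and learning rate below are illustrative choices, not values from any published model:

```python
import numpy as np

# Equilibrium-style contrastive learning on a tiny symmetric network:
# settle a "free" phase, settle a weakly nudged phase, then apply a
# local update from the difference of Hebbian products.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(3, 3))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                  # symmetric weights, no self-loops

def settle(s, x, target=None, beta=0.0, steps=200, dt=0.2):
    """Relax the state; an optional weak nudge pulls the output unit
    (index 2) toward a desired value."""
    s = s.copy()
    for _ in range(steps):
        drive = np.tanh(W @ s)
        if target is not None:
            drive[2] += beta * (target - s[2])
        s += dt * (drive - s)
        s[0] = x                          # input unit stays clamped
    return s

s0 = np.zeros(3); s0[0] = 1.0
s_free = settle(s0, x=1.0)                               # free phase
s_nudged = settle(s_free, x=1.0, target=0.8, beta=0.5)   # nudged phase

# Local contrastive update: nudged-phase Hebbian products minus
# free-phase products, distributed over all recurrent connections.
W += 0.2 * (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free))
np.fill_diagonal(W, 0.0)
```

The nudged state sits closer to the target than the free state did, and the weight change pushes future free-phase settling in the same direction.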
Because the same dynamical principles govern both inference and learning, such rules naturally respect the bidirectional causation inherent in the circuit.<\/p>\n<p>Neuromodulators play a crucial role in making these learning rules behaviorally meaningful. Global signals such as dopamine, acetylcholine, norepinephrine, and serotonin modulate plasticity gates, effectively deciding when recurrent loops should be updated and in what direction. For example, dopamine is widely modeled as carrying a reward prediction error, signaling whether recent outcomes were better or worse than expected. When combined with local Hebbian or STDP mechanisms, such a global signal can selectively strengthen the recurrent pathways that contributed to successful predictions or rewarding actions and weaken those associated with negative outcomes. In this way, two-way causal interactions are sculpted not only by statistical regularities in sensory input, but also by reinforcement signals tied to the organism\u2019s goals.<\/p>\n<p>Three-factor learning rules provide a unifying description of this process: synaptic change depends on presynaptic activity, postsynaptic activity, and a modulatory \u201cthird factor\u201d that conveys context such as reward, novelty, or surprise. In recurrent networks, this third factor can be linked to global measures of prediction error or uncertainty. When the network\u2019s expectations are strongly violated, neuromodulatory systems increase plasticity, allowing rapid reconfiguration of both forward and feedback pathways. Conversely, in periods of high prediction accuracy, plasticity may be downregulated, consolidating useful loops and preventing overfitting to transient fluctuations. 
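A three-factor update is compact enough to state directly; the activity values and the magnitudes of the third factor here are arbitrary illustrations:

```python
def three_factor_update(w, pre, post, third, lr=0.1):
    """Weight change = Hebbian product of pre- and postsynaptic activity,
    gated and signed by a global third factor (reward prediction error,
    novelty, or surprise). All constants are illustrative."""
    return w + lr * third * pre * post

w = 0.2
# Strongly violated expectation with a good outcome: plasticity gated on.
w_surprised = three_factor_update(w, pre=1.0, post=0.8, third=1.5)
# Fully predicted outcome: third factor near zero, the loop barely changes.
w_predicted = three_factor_update(w, pre=1.0, post=0.8, third=0.02)
```

The same co-activation thus produces large, small, or even negative changes depending entirely on the modulatory context.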
Such state-dependent gating ensures that learning tracks changes in the environment while preserving stable internal models when the world is predictable.<\/p>\n<p>Credit assignment in multi-area, recurrent systems poses a distinct challenge: how should an error or reward observed at one location influence synaptic updates in distant parts of the network that indirectly contributed to it? Two-way interactions provide partial answers by allowing error-related signals to reverberate backward through the system. Top-down projections can carry abstract evaluations\u2014such as decision confidence or task success\u2014into earlier sensory or associative areas, where they interact with local activity patterns to shape plasticity. If an early sensory representation consistently precedes successful decisions and co-occurs with positive feedback signals, synapses within the loop linking that representation to downstream decoders will be strengthened, even if no explicit, layer-by-layer backpropagation occurs.<\/p>\n<p>Reinforcement learning in recurrent circuits offers a computational illustration of this point. In actor\u2013critic architectures implemented with recurrent neural networks, the actor component maintains internal states and action policies that unfold over time, while the critic evaluates predicted returns. When outcomes deviate from expectations, the critic\u2019s error signal is broadcast throughout the actor network, which may contain complex feedback loops. Eligibility traces\u2014temporary tags on recently active synapses\u2014allow synaptic changes to be assigned to connections that were active at relevant past times, not just at the moment of reward. 
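A minimal eligibility-trace mechanism might look like the following, with an arbitrary decay constant and a hand-set reward prediction error standing in for a real critic:

```python
import numpy as np

GAMMA = 0.9                  # per-step trace decay (illustrative)
LR = 0.1

w = np.zeros(3)              # three synapses
trace = np.zeros(3)          # their eligibility traces

# Synapse 0 is active early; synapse 2 just before the (delayed) reward;
# synapse 1 is never active.
activity = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 0.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]

for a in activity:
    trace = GAMMA * trace + a        # tag recently active synapses
rpe = 1.0                            # reward prediction error at the end
w += LR * rpe * trace                # credit flows only to tagged synapses
```

The early synapse receives discounted credit for its earlier contribution, the recent one receives full credit, and the inactive one receives none.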
The combination of eligibility traces and global error signals effectively propagates credit or blame across loops and delays, allowing the network to learn appropriate two-way interactions that support long-term behavioral success.<\/p>\n<p>In sensory systems, unsupervised and self-supervised learning rules can similarly exploit bidirectional dynamics. Autoencoder-like circuits, in which feedback connections attempt to reconstruct earlier activity patterns, provide a natural substrate for learning internal models. Feedforward pathways encode inputs into latent representations, while feedback pathways decode these representations back into predicted sensory or feature space. Learning seeks to minimize reconstruction error across the loop, which requires adjusting both directions simultaneously. Local plasticity rules that strengthen synapses when presynaptic activity predicts postsynaptic activity, and weaken them when predictions fail, can gradually align encoding and decoding pathways so that recurrent cycles become more faithful to environmental structure.<\/p>\n<p>Another important aspect of learning in two-way systems is the development and refinement of attractor landscapes. Recurrent networks often exhibit multiple stable states or activity patterns that function as memories, concepts, or perceptual hypotheses. Learning rules modify the depth, width, and arrangement of these attractor basins by changing recurrent synaptic strengths and thresholds. When exposure to consistent stimuli or task demands repeatedly drives the network into particular patterns, Hebbian and STDP mechanisms make those patterns more self-sustaining, thereby deepening the corresponding basins. 
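Classic Hopfield networks give the simplest runnable picture of Hebbian basin-deepening; the two hand-chosen binary patterns below stand in for learned concepts:

```python
import numpy as np

# Hebbian storage of +/-1 patterns in a small Hopfield network: each
# stored pattern deepens a corresponding attractor basin.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
n = patterns.shape[1]
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(cue, steps=5):
    """Recurrent settling: a partial cue is pulled into the nearest basin."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

cue = patterns[0].copy()
cue[0] = -cue[0]             # corrupt one element of the stored pattern
```

Calling `recall(cue)` restores the original pattern, illustrating pattern completion from a degraded cue.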
Simultaneously, competitive and inhibitory plasticity prevents the uncontrolled proliferation of attractors, enforcing sparsity and separability so that different concepts or contexts correspond to distinct, well-separated regions of state space.<\/p>\n<p>Because attractors are realized through loops of mutual excitation and inhibition, changes to any part of the loop can affect the stability and accessibility of the overall pattern. Learning must therefore coordinate adjustments across multiple synapses so that the network does not fragment into incompatible sub-loops or collapse into a single dominant state. Mechanisms such as synaptic scaling, structural plasticity (growth and pruning of connections), and activity-dependent myelination can contribute to this coordination, gradually reshaping the effective graph of recurrent connectivity while maintaining functional coherence. The resulting attractor landscape encodes the network\u2019s accumulated knowledge of environmental regularities and task contingencies, with two-way causal interactions serving as the structural backbone of that knowledge.<\/p>\n<p>Temporal credit assignment within recurrent loops also motivates learning rules that are sensitive to prediction and priors over evolving sequences. In many tasks, the relevant causes of a present event lie in patterns of activity that unfolded over multiple previous time steps. Eligibility traces, short-term synaptic facilitation or depression, and activity-dependent intrinsic plasticity can store transient signatures of recent dynamics, which then interact with delayed error or reward signals to drive synaptic change. 
When a later outcome reveals that a particular sequence of internal states was predictive or misleading, these stored signatures help target plasticity to the synapses that shaped that sequence, redistributing causal responsibility across both forward and feedback pathways.<\/p>\n<p>At the level of cortical hierarchies, learning rules must harmonize local, layer-specific mechanisms with global computational goals. Superficial layers may specialize in error signaling and rapid plasticity, while deep layers preferentially encode stable priors and exhibit slower, more consolidated changes. Feedback from deep to superficial layers carries structural expectations that constrain rapid learning, preventing fragile lower-level plasticity from distorting well-established higher-level knowledge after brief anomalies. Conversely, persistent errors at lower levels can eventually drive learning in deeper priors, updating the long-term model. The interaction of fast and slow plasticity across the hierarchy thus implements a form of meta-inference, where the system learns not only specific predictions but also how quickly its different components should adapt.<\/p>\n<p>From a systems perspective, successful learning in bidirectional architectures depends on maintaining alignment between three intertwined elements: the dynamics that implement inference at short timescales, the plasticity rules that update synapses at intermediate timescales, and the modulatory processes that gate learning at longer timescales. If inference dynamics change too quickly relative to plasticity, learning cannot track the effective mapping implemented by the recurrent loops. If plasticity is too aggressive, small, transient fluctuations during inference can be overinterpreted and frozen into structure, degrading performance. 
Carefully tuned learning rules, often involving multiple interacting forms of plasticity, allow recurrent circuits to preserve the qualitative character of their neural coding while gradually reshaping the underlying causal architecture in response to experience.<\/p>\n<p>Artificial neural networks inspired by these principles have begun to incorporate biologically motivated learning rules into recurrent and bidirectional architectures. Models that combine local Hebbian updates with occasional error-driven adjustments can learn internal representations that are robust to noise and capable of pattern completion, mirroring properties of cortical circuits. Training procedures that alternate between free and nudged phases, or that use synthetic gradients and auxiliary losses to approximate global error signals, illustrate how two-way information flow can be harnessed to perform credit assignment without strict backpropagation through time. These advances suggest that embracing, rather than circumventing, the complexities of mutual influence may yield learning algorithms that are both more powerful and more compatible with known biological constraints.<\/p>\n<p>Ultimately, learning rules for two-way causal interactions must negotiate a tension between flexibility and constraint. On one hand, recurrent loops and feedback pathways give the system ample capacity to represent complex dependencies, maintain context, and perform iterative inference. On the other hand, without carefully structured plasticity, this same capacity can lead to instability, overfitting, or maladaptive attractor states. 
By tying synaptic change to local activity patterns, distributed error signals, and global modulatory cues, biological and artificial systems can shape their bidirectional architectures so that neural coding remains coherent, interpretable, and behaviorally useful, even as the web of causal interactions becomes increasingly intricate through learning.<\/p>\n<h3>Implications for cognition and artificial intelligence<\/h3>\n<p>Bidirectional causation in neural circuits reframes core questions about cognition by shifting emphasis from static representations to dynamically negotiated states. Perception, memory, decision-making, and action selection emerge not from linear processing chains but from recurrent negotiations between bottom-up evidence and top-down constraints. Within this view, cognitive functions are realized as trajectories through a high-dimensional state space shaped by prediction and priors, rather than as fixed encodings of external stimuli. The same neural population may, at different points in a trajectory, serve as evidence accumulator, hypothesis tester, and controller of downstream activity, making cognitive roles context-dependent and temporally extended.<\/p>\n<p>Perception, for instance, becomes an active form of inference under this framework. Instead of passively encoding sensory inputs, cortical hierarchies implement a continual dialogue wherein higher-level hypotheses about the world attempt to \u201cexplain away\u201d incoming signals. Ambiguous or noisy stimuli, which challenge simple feedforward models, can be naturally understood as cases where multiple attractor states remain viable until additional evidence or contextual feedback resolves the competition. Neural coding is therefore less about mirroring the external world and more about settling on internally coherent interpretations that are useful for behavior. 
Perceptual illusions, rapid re-interpretations, and context effects arise as natural consequences of this inferential, loop-based organization.<\/p>\n<p>Attention can be reinterpreted within the same framework as a control policy over bidirectional information flow. Instead of being an add-on mechanism that simply boosts certain signals, attention governs which loops are allowed to dominate the inferential process at a given moment. Top-down attention can selectively amplify specific error pathways, giving certain discrepancies between prediction and input privileged access to higher-level updating. Conversely, it can dampen or gate irrelevant loops, ensuring that limited computational resources are deployed where they most improve behavioral outcomes. This implies that attentional control is not just a modulation of gain; it actively shapes which aspects of the environment become causally efficacious in determining internal states and subsequent actions.<\/p>\n<p>Working memory and executive control also look different under bidirectional principles. Rather than storing items in dedicated buffers, the system maintains patterns of recurrent activity and synaptic traces that can be re-entered into active loops when needed. These patterns act as latent causes in a Bayesian brain scheme, exerting ongoing influence on how new inputs are parsed and interpreted. Executive processes correspond to high-level attractors that coordinate multiple loops across distributed areas, aligning sensory interpretations, goals, and motor plans. Failures of executive function\u2014such as distractibility or perseveration\u2014can be understood as disruptions in the stability or flexibility of these coordinating loops, rather than as defects in isolated \u201cmodules.\u201d<\/p>\n<p>Decision-making and valuation illustrate how mutual causation shapes cognition over time. 
In drift-diffusion models and related frameworks, choices emerge from the gradual accumulation of evidence toward a threshold. In a recurrent, bidirectional architecture, the thresholds themselves can be dynamically modulated by priors, expectations about reward, and ongoing internal states. Higher-order valuation circuits need not simply decode accumulated evidence; they can feed back biases that tilt the accumulation process toward particular alternatives, embodying preferences and learned policies. Confidence signals, in turn, can propagate backward to adjust how much weight is given to current versus future evidence, embedding metacognitive evaluations directly into the causal fabric of the decision process.<\/p>\n<p>Memory systems, particularly episodic and semantic memory, gain a concrete mechanistic grounding in recurrent attractor dynamics. Episodic recall can be seen as a process in which partial cues drive the network into previously learned attractor basins, reconstructing patterns of activity that approximate past states. Semantic memory corresponds to more abstract, overlapping attractors encoding relational structures and regularities. Critically, recall is not a one-way readout of stored content; the act of retrieval feeds back into ongoing processing, updating priors and reshaping current interpretations of sensory input. This explains why memory is reconstructive and context-sensitive, and why repeated retrieval can alter what is remembered: each retrieval is a new cycle of mutual influence between stored patterns and present constraints.<\/p>\n<p>Self-related cognition and conscious experience can also be interpreted through the lens of bidirectional neural coding. A sense of agency arises when internally generated predictions about the sensory consequences of actions successfully suppress or match incoming feedback. 
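A deliberately minimal comparator for this kind of agency attribution might look as follows; the linear forward model, gain, and tolerance are illustrative assumptions, not a claim about actual efference-copy circuitry:

```python
def attribute_agency(motor_command, observed_change, gain=1.0, tol=0.2):
    """Compare an efference-copy prediction of sensory change against the
    observed change: small mismatch -> attributed to self, large mismatch
    -> attributed to an external cause. The linear forward model and the
    tolerance threshold are illustrative placeholders."""
    predicted_change = gain * motor_command    # forward-model prediction
    mismatch = abs(observed_change - predicted_change)
    return "self" if mismatch < tol else "external"
```

For example, `attribute_agency(0.5, 0.52)` returns `"self"`, while an unexpected sensory change with no motor command, `attribute_agency(0.0, 0.5)`, returns `"external"`.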
When forward models operate accurately within these loops, the system attributes changes in sensory input to its own actions; when mismatches persist, events are attributed to external causes. Alterations in these loops\u2014for example, disruptions in efference copy pathways or prediction-error signaling\u2014may underlie disorders of agency such as those seen in schizophrenia, in which self-generated thoughts or actions are experienced as externally imposed. Conscious access itself may reflect particular configurations of large-scale recurrent loops that temporarily stabilize certain interpretations while suppressing alternatives.<\/p>\n<p>These perspectives have direct implications for how artificial intelligence systems might be designed. Most contemporary AI architectures are dominated by deep feedforward networks trained with backpropagation, occasionally augmented with recurrent or attention mechanisms that are still largely unidirectional in their computational framing. Integrating robust bidirectional causation would involve building models in which top-down constraints and bottom-up evidence interact continuously, with internal generative models that can simulate and test hypotheses about future inputs. Such systems would not merely map inputs to outputs; they would maintain ongoing world models that are updated through cycles of prediction and correction, more closely echoing biological cognition.<\/p>\n<p>Generative models, such as variational autoencoders, diffusion models, and deep generative flows, already move in this direction by explicitly encoding both recognition (inference) and generation (prediction) pathways. When extended with recurrent connectivity and active feedback between these pathways, they can support richer forms of reasoning, imagination, and planning. 
For example, a robot equipped with a recurrent generative world model can internally simulate the sensory consequences of candidate actions before executing them, effectively running closed-loop inference in imagination. The quality of these internal simulations depends on how well bidirectional interactions have been learned to reflect true environmental dynamics, making learning rules for two-way causal interactions central to the system\u2019s overall competence.<\/p>\n<p>Active inference frameworks offer a particularly direct bridge between bidirectional neural theories and AI. In these frameworks, agents are modeled as systems that minimize expected prediction error (or free energy) by acting to sample the world in ways that confirm their priors or by updating their priors to better fit sensory data. Policies are thus selected not solely on the basis of external rewards but also on their expected impact on uncertainty and surprise. Implementing such agents in artificial systems requires recurrent architectures that tightly couple perception, action, and valuation: predictions about sensory outcomes must inform action selection, and action outcomes must feed back to update both perceptual models and policy priors. This yields AI agents that treat perception and control as two sides of the same inferential coin, rather than as separate modules linked by ad hoc interfaces.<\/p>\n<p>Another implication concerns robustness and generalization. Systems that rely heavily on feedforward mappings can excel at tasks matching their training distribution but often fail under distribution shifts, adversarial perturbations, or novel contexts. Recurrent, bidirectional architectures that constantly compare predictions with incoming data and adjust internal states accordingly are better positioned to detect anomalies, reject misleading cues, and adapt rapidly to changed circumstances. 
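One way to make the anomaly-detection claim concrete is a running predictive model that normalises its errors by the variability it has learned to expect; the exponential-average "model" and its rates below are simplifying stand-ins for a learned generative model:

```python
import numpy as np

# A toy top-down predictor: running estimates of the input's mean and
# spread, updated by prediction error. Rates are illustrative.
rng = np.random.default_rng(0)
mu, var = 0.0, 1.0

for x in rng.normal(loc=2.0, scale=0.5, size=500):   # familiar input stream
    err = x - mu
    mu += 0.05 * err                   # prediction units track the input...
    var += 0.05 * (err ** 2 - var)     # ...and its expected variability

def surprise(x):
    """Squared prediction error, normalised by expected variability."""
    return (x - mu) ** 2 / var

familiar = surprise(2.1)
anomalous = surprise(9.0)              # far outside the learned model
```

Inputs the model has learned to expect produce small normalised errors, while out-of-distribution inputs stand out by orders of magnitude, giving the system an internal signal that its current interpretation should not be trusted.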
For instance, when encountering an unfamiliar object under unusual lighting, a system with strong top-down world models can use priors about object constancy and physical constraints to maintain coherent interpretations instead of overfitting to raw pixel-level deviations. This suggests that incorporating generative, feedback-driven inference into AI could improve both interpretability and resilience.<\/p>\n<p>Bidirectional causation also influences how AI systems should handle temporal information and long-term dependencies. Traditional sequence models like RNNs and LSTMs already exploit recurrent connections, but they often treat past states as purely causal antecedents to future states, without symmetric top-down influences. More brain-like architectures might integrate long-range feedback that allows later states\u2014such as high-level goals or outcomes\u2014to reshape earlier representations retroactively during training or offline consolidation. For example, experience replay and offline simulation phases could be organized as cycles in which outcome signals propagate back to refine the representations of earlier states and events, akin to hippocampal-cortical dialogues during sleep. This would enable AI systems to assign credit and blame across time in a manner closer to biological learning.<\/p>\n<p>In practical terms, designing bidirectional AI systems raises new engineering questions about stability, scalability, and learnability. Dense recurrent loops can easily lead to oscillations or chaotic dynamics if not carefully regulated. Biological solutions\u2014such as inhibitory control, neuromodulatory gating, and multi-timescale plasticity\u2014suggest design principles for artificial architectures. For instance, dividing networks into fast error-correcting circuits and slower, more stable priors can prevent runaway reverberation while preserving flexibility. 
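The fast/slow division can be sketched with two coupled estimators operating on different timescales; the rates and decay constant are illustrative, and the scalar "circuits" stand in for whole subnetworks:

```python
# A fast error-correcting circuit absorbs transient mismatch and leaks
# back toward baseline; a slow "prior" consolidates only persistent error.
FAST_LR, SLOW_LR, FAST_DECAY = 0.5, 0.01, 0.95   # illustrative timescales

fast, slow = 0.0, 0.0
for _ in range(2000):                  # sustained, unexplained input of 1.0
    err = 1.0 - (slow + fast)          # mismatch with the combined prediction
    fast = FAST_DECAY * fast + FAST_LR * err   # rapid, leaky correction
    slow += SLOW_LR * err                      # slow consolidation into prior
```

Early on, the fast circuit carries almost the whole correction; only because the discrepancy persists does it gradually migrate into the slow prior, which is exactly the stability-with-flexibility trade-off described above.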
Similarly, implementing attention-like mechanisms that dynamically gate recurrent pathways can ensure that only behaviorally relevant loops are reinforced and maintained, keeping the effective causal structure sparse and interpretable even when the underlying connectivity is rich.<\/p>\n<p>Moral and societal implications also emerge when cognitive and AI systems are understood as products of looping causation. Human values, beliefs, and biases are not merely encoded once and for all; they are continually reinforced or revised through recurring interactions between individuals, social structures, and informational environments. When AI systems participate in these loops\u2014for example, by curating information feeds, recommending content, or mediating communication\u2014they become part of the causal circuits that shape collective priors. An AI designed under a bidirectional perspective would need to account for how its outputs feed back into human cognition, potentially amplifying or dampening existing attractors in societal belief spaces. Responsible design thus requires modeling not just immediate input-output mappings but also longer-term causal feedback between AI behavior and human mental states.<\/p>\n<p>Clinical and diagnostic applications stand to benefit from this shift in perspective as well. Many psychiatric and neurological disorders can be recast as pathologies of recurrent loops and attractor landscapes rather than as localized lesions or deficits. Depression, for instance, may involve overly deep attractors corresponding to negative self-referential narratives, making it difficult for alternative interpretations and positive expectations to gain a foothold. Anxiety disorders may reflect exaggerated priors about threat, causing persistent over-weighting of danger-related evidence. 
Understanding these conditions as disruptions in bidirectional inference\u2014where top-down priors and bottom-up signals fail to achieve adaptive balance\u2014suggests interventions aimed at reshaping attractors and feedback gains, whether through neuromodulation, psychotherapy, pharmacology, or targeted cognitive training.<\/p>\n<p>The bidirectional view encourages a more unified treatment of cognition and intelligence across biological and artificial systems. Rather than drawing a sharp line between \u201clow-level\u201d perception and \u201chigh-level\u201d reasoning, it invites models in which both are expressions of the same underlying dynamics: iterative, context-sensitive inference driven by cycles of prediction and error correction. Neural coding, in this sense, is not a passive label but an active process in which codes and their interpretations co-evolve. For AI research, this suggests prioritizing architectures where internal world models, task goals, and sensory interfaces are tightly interwoven in recurrent loops, enabling systems that do not just react to data but continuously interrogate and refine their own understanding of the environments they inhabit.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Neural information processing can be understood as unfolding within a web of influences that run&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1],"tags":[323,1996,1297,1688,735,1615,1613],"class_list":["post-3256","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-bayesian-brain","tag-bidirectional","tag-causation","tag-neural-coding","tag-prediction","tag-priors","tag-retrocausality"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.0 - 
https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Neural coding with bidirectional causation - Beyond the Impact<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/beyondtheimpact.net\/?p=3256\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Neural coding with bidirectional causation - Beyond the Impact\" \/>\n<meta property=\"og:description\" content=\"Neural information processing can be understood as unfolding within a web of influences that run&hellip;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/beyondtheimpact.net\/?p=3256\" \/>\n<meta property=\"og:site_name\" content=\"Beyond the Impact\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-20T07:01:56+00:00\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"42 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/beyondtheimpact.net\/?p=3256#article\",\"isPartOf\":{\"@id\":\"https:\/\/beyondtheimpact.net\/?p=3256\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\/\/beyondtheimpact.net\/#\/schema\/person\/a5cf96dc27c4690dbf266a6cae4ee9aa\"},\"headline\":\"Neural coding with bidirectional causation\",\"datePublished\":\"2026-01-20T07:01:56+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/beyondtheimpact.net\/?p=3256\"},\"wordCount\":8344,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/beyondtheimpact.net\/#organization\"},\"keywords\":[\"Bayesian brain\",\"bidirectional\",\"causation\",\"neural coding\",\"prediction\",\"priors\",\"retrocausality\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/beyondtheimpact.net\/?p=3256#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/beyondtheimpact.net\/?p=3256\",\"url\":\"https:\/\/beyondtheimpact.net\/?p=3256\",\"name\":\"Neural coding with bidirectional causation - Beyond the Impact\",\"isPartOf\":{\"@id\":\"https:\/\/beyondtheimpact.net\/#website\"},\"datePublished\":\"2026-01-20T07:01:56+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/beyondtheimpact.net\/?p=3256#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/beyondtheimpact.net\/?p=3256\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/beyondtheimpact.net\/?p=3256#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/beyondtheimpact.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Neural coding with bidirectional 
causation\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/beyondtheimpact.net\/#website\",\"url\":\"https:\/\/beyondtheimpact.net\/\",\"name\":\"BeyondTheImpact\",\"description\":\"Concussion, FND and Neuroscience\",\"publisher\":{\"@id\":\"https:\/\/beyondtheimpact.net\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/beyondtheimpact.net\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/beyondtheimpact.net\/#organization\",\"name\":\"Beyond the Impact\",\"url\":\"https:\/\/beyondtheimpact.net\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/beyondtheimpact.net\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/beyondtheimpact.net\/wp-content\/uploads\/2025\/04\/955D378D-9439-4958-AA9D-866B66877DCB-1.png\",\"contentUrl\":\"https:\/\/beyondtheimpact.net\/wp-content\/uploads\/2025\/04\/955D378D-9439-4958-AA9D-866B66877DCB-1.png\",\"width\":1024,\"height\":1024,\"caption\":\"Beyond the Impact\"},\"image\":{\"@id\":\"https:\/\/beyondtheimpact.net\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/beyondtheimpact.net\/#\/schema\/person\/a5cf96dc27c4690dbf266a6cae4ee9aa\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/beyondtheimpact.net\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/59867129c03db343d7fdc6272ec5e0a85250cd376a4e7153307728ae82a1b108?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/59867129c03db343d7fdc6272ec5e0a85250cd376a4e7153307728ae82a1b108?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"sameAs\":[\"https:\/\/beyondtheimpact.net\"],\"url\":\"https:\/\/beyondtheimpact.net\/?author=1\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Neural coding with bidirectional causation - Beyond the Impact","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/beyondtheimpact.net\/?p=3256","og_locale":"en_US","og_type":"article","og_title":"Neural coding with bidirectional causation - Beyond the Impact","og_description":"Neural information processing can be understood as unfolding within a web of influences that run&hellip;","og_url":"https:\/\/beyondtheimpact.net\/?p=3256","og_site_name":"Beyond the Impact","article_published_time":"2026-01-20T07:01:56+00:00","author":"admin","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin","Est. reading time":"42 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/beyondtheimpact.net\/?p=3256#article","isPartOf":{"@id":"https:\/\/beyondtheimpact.net\/?p=3256"},"author":{"name":"admin","@id":"https:\/\/beyondtheimpact.net\/#\/schema\/person\/a5cf96dc27c4690dbf266a6cae4ee9aa"},"headline":"Neural coding with bidirectional causation","datePublished":"2026-01-20T07:01:56+00:00","mainEntityOfPage":{"@id":"https:\/\/beyondtheimpact.net\/?p=3256"},"wordCount":8344,"commentCount":0,"publisher":{"@id":"https:\/\/beyondtheimpact.net\/#organization"},"keywords":["Bayesian brain","bidirectional","causation","neural coding","prediction","priors","retrocausality"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/beyondtheimpact.net\/?p=3256#respond"]}]},{"@type":"WebPage","@id":"https:\/\/beyondtheimpact.net\/?p=3256","url":"https:\/\/beyondtheimpact.net\/?p=3256","name":"Neural coding with bidirectional causation - Beyond the 
Impact","isPartOf":{"@id":"https:\/\/beyondtheimpact.net\/#website"},"datePublished":"2026-01-20T07:01:56+00:00","breadcrumb":{"@id":"https:\/\/beyondtheimpact.net\/?p=3256#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/beyondtheimpact.net\/?p=3256"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/beyondtheimpact.net\/?p=3256#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/beyondtheimpact.net\/"},{"@type":"ListItem","position":2,"name":"Neural coding with bidirectional causation"}]},{"@type":"WebSite","@id":"https:\/\/beyondtheimpact.net\/#website","url":"https:\/\/beyondtheimpact.net\/","name":"BeyondTheImpact","description":"Concussion, FND and Neuroscience","publisher":{"@id":"https:\/\/beyondtheimpact.net\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/beyondtheimpact.net\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/beyondtheimpact.net\/#organization","name":"Beyond the Impact","url":"https:\/\/beyondtheimpact.net\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/beyondtheimpact.net\/#\/schema\/logo\/image\/","url":"https:\/\/beyondtheimpact.net\/wp-content\/uploads\/2025\/04\/955D378D-9439-4958-AA9D-866B66877DCB-1.png","contentUrl":"https:\/\/beyondtheimpact.net\/wp-content\/uploads\/2025\/04\/955D378D-9439-4958-AA9D-866B66877DCB-1.png","width":1024,"height":1024,"caption":"Beyond the 
Impact"},"image":{"@id":"https:\/\/beyondtheimpact.net\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/beyondtheimpact.net\/#\/schema\/person\/a5cf96dc27c4690dbf266a6cae4ee9aa","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/beyondtheimpact.net\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/59867129c03db343d7fdc6272ec5e0a85250cd376a4e7153307728ae82a1b108?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/59867129c03db343d7fdc6272ec5e0a85250cd376a4e7153307728ae82a1b108?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/beyondtheimpact.net"],"url":"https:\/\/beyondtheimpact.net\/?author=1"}]}},"_links":{"self":[{"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=\/wp\/v2\/posts\/3256","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3256"}],"version-history":[{"count":0,"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=\/wp\/v2\/posts\/3256\/revisions"}],"wp:attachment":[{"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3256"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3256"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/beyondtheimpact.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3256"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}