Reviewer #3 (Public review):
Summary:
This modelling study connects synaptic plasticity, connectivity motifs, and representational drift. The authors combine excitatory and inhibitory STDP with weight normalization and intrinsic plasticity in a recurrent spiking network of AdEx neurons. This combination generates heavy-tailed synaptic weight distributions and supports repeating spike sequences under both unstructured and structured inputs. While global network statistics stabilize over time, individual synapses continue to change, creating a form of drift. Structured inputs further stabilize sequences, yet the network retains flexibility to learn new patterns.
Strengths:
(1) Multi-scale turnover analysis:
The authors study the evolution of individual synapses, 3-neuron motifs, follower neurons, and entire neuronal sequences, revealing distinct turnover timescales.
(2) Fan-in/out motif analysis:
A specific connectivity motif (fan-in/out) is shown to be over-represented in the network and preferentially stabilized by the plasticity rules compared to other possible motifs. This generates interesting insights and testable predictions.
(3) Connection to representational drift:
The connection of ongoing synaptic plasticity to drift is timely and interesting, reproducing observations of macro-level stability and synapse-level turnover with a relatively simple mechanism.
(4) Rigour and thoroughness:
The overall quality of the numerical experiments performed in this study is high, with extensive supplementary material performing various controls to solidify the claims.
Weaknesses:
(1) Limited connection to network function:
Sequence detection relies on a rather artificial protocol (forcing a single neuron to spike 1,000 times), which I suspect mostly tests whether the lognormal tail of the weight distribution can propagate activity; this risks being circular. Running the same sequence analysis on a random network, or on a network with the same weight distribution but shuffled weights, would help disentangle what follows from a generic heavy-tailed weight distribution and what depends on the particular synapses potentiated by the plasticity rules used here.
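A minimal sketch of the shuffled-weight control suggested above, assuming the weights are held in a dense matrix `W` (all names here are illustrative, not taken from the manuscript): the shuffle keeps the connectivity and the weight distribution intact while destroying any arrangement of strong synapses learned by plasticity, so rerunning the sequence analysis on `W_ctrl` isolates the contribution of the heavy tail alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_weight_control(W, rng):
    """Return a copy of W with weights permuted among existing synapses.

    Connectivity (which pairs are connected) is preserved; only the
    weight values are reassigned, so the lognormal distribution survives
    but structure learned by plasticity (e.g. strong feed-forward
    chains) is destroyed.
    """
    W_shuf = W.copy()
    idx = np.flatnonzero(W)            # flat indices of existing synapses
    W_shuf.flat[idx] = rng.permutation(W_shuf.flat[idx])
    return W_shuf

# Toy example: sparse matrix with lognormal weights
W = np.zeros((50, 50))
mask = rng.random((50, 50)) < 0.1
W[mask] = rng.lognormal(mean=-1.0, sigma=1.0, size=mask.sum())
W_ctrl = shuffled_weight_control(W, rng)

# Sanity checks: same support, same weight multiset
assert ((W > 0) == (W_ctrl > 0)).all()
assert np.allclose(np.sort(W[W > 0]), np.sort(W_ctrl[W_ctrl > 0]))
```

The same sequence-detection pipeline would then be applied to `W_ctrl` (and to a fully random network) for comparison with the trained network.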
The network, which would classically be evaluated as a memory network, is never assessed in that role. While the authors do not overclaim, this limits the impact.
Relatedly, the relearning experiment (Figure 5G) shows catastrophic forgetting. This is acknowledged in the discussion, but the suggested solutions (alternating patterns, plastic readout) are speculative without supporting simulations. This limits the applicability of the model as a memory model or, more broadly, as a model of a brain region/function.
Additionally, in the sequence learning experiments with structured input, the ability to learn seems tied to the very specific timescale of pattern presentation (~10 ms per pattern, comparable to the STDP kernel time constants), arguably faster than the timescale of external stimuli. The stability of sequences may also owe more to the normalization scheme than to STDP per se.
(2) Novelty claims and positioning within the literature:
On page 16, the authors write: "Our results demonstrate that spiking sequences can be generated in randomly connected networks trained by synaptic plasticity even under unstructured inputs, which supports STDP being the main actor, while stabilizing mechanisms such as weight normalization and intrinsic plasticity play a complementary role." (c1).
Several aspects of this work are less novel than the presentation suggests:
(a) The fact that STDP can create sequence-like dynamics/asymmetric connectivity matrices in recurrent networks has been studied theoretically [1,2] and in simulations [3,4,5]. While [3] is cited, the manuscript underplays the similarity. [4] (uncited) considers e+iSTDP with a different homeostatic term to represent sequential stimuli in large recurrent spiking networks. [5] (uncited) also considers a recurrent spiking network with several STDP-like rules and shows that many combinations can store and recall sequential inputs.
(b) Lognormal weight distributions emerging from STDP-based plasticity, and the autonomous emergence of connectivity structures, have been studied extensively. While many of these articles are already cited in the manuscript, I fail to see what this work contributes on this point beyond existing work (particularly [6]).
(c) Several published works challenge the manuscript's implicit claim (c1) that sequences require their particular combination of rules. Many other plasticity mechanisms can create sequences [3,4,5,7,8,9]. Some interpretations may also need to be dialed down: [10] (uncited) showed that sequences can be stored and retrieved using EI and IE plasticity alone. iSTDP may be doing more computational work than acknowledged, which complicates the interpretation of which mechanisms are truly driving the phenomena.
Overall, most of the relevant work is already cited in the manuscript, but not necessarily acknowledged adequately.
(3) Justification of plasticity model/robustness analysis:
The parameters in Tables 1 and 2 are quite specific and not strongly justified (for instance, different sparsity values for each connection type and specific normalization factors). Without parameter sweeps, it is difficult to know whether the key findings are robust or overfit to this particular network configuration. Given the number of parameters, exhaustive sweeps are out of the question, and even with them the argument made above would still prevent the proposed rule combination from being considered more than one possible mechanism for sequence generation among many. However, this deserves to be acknowledged, and potentially a few sweeps run (e.g., over the LTP/LTD ratio, the normalization threshold, and the network size). I do not think that Figure S12, which shows that removing any component of the model causes it to break down in some way, is enough to cover alternative plasticity rules.
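A sketch of the kind of sweep meant here, over two of the suggested axes; `run_network` is a placeholder for the authors' actual simulation plus sequence analysis, returning some scalar sequence-quality score (the grid values and function names are illustrative):

```python
import itertools
import numpy as np

# Illustrative grids over two of the suggested axes
ltp_ltd_ratios = np.linspace(0.8, 1.2, 5)
norm_thresholds = np.linspace(0.5, 2.0, 4)

def run_network(ratio, threshold, seed=0):
    # Placeholder: substitute the full simulation + sequence analysis,
    # returning e.g. a sequence-quality score in [0, 1].
    rng = np.random.default_rng(seed)
    return float(rng.random())

results = {
    (r, t): run_network(r, t)
    for r, t in itertools.product(ltp_ltd_ratios, norm_thresholds)
}

# Robustness check: how flat is the score across the grid?
scores = np.array(list(results.values()))
print(f"mean score {scores.mean():.3f}, spread {scores.std():.3f}")
```

A flat score surface across the grid would argue that the findings are not overfit to the specific configuration in Tables 1 and 2.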
A related concern is that the network is small by current standards (1,200E + 240I neurons), especially with sparse connectivity (6-20%). Small networks with few connections are susceptible to synchronization (other studies typically consider networks of at least 10k neurons). The authors should discuss whether the phenomena they observe would persist at larger scales and under more biologically realistic connectivity. Specifically, are the intrinsic and normalization plasticity terms as crucial in this case?
(4) Fan-in/out motif evidence is correlational:
The evidence linking the fan-in/out motif to sequence stability appears to be correlational. Properly establishing causality would require targeted ablations or rewiring of fan-in/out connections. While designing a clean causal intervention may be difficult, the correlational nature of the evidence should be stated explicitly.
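One possible form of such an intervention, sketched under the simplifying assumption that "fan-in" edges are those converging onto a target with at least two presynaptic partners (the manuscript's exact motif definition should be substituted); a size-matched random ablation serves as the control:

```python
import numpy as np

rng = np.random.default_rng(1)

def convergent_edges(A):
    """Edges (i, k) belonging to a fan-in triad i -> k <- j, i.e. the
    target k has at least two presynaptic partners. Sketch only; replace
    with the manuscript's precise fan-in/out motif definition."""
    indeg = A.sum(axis=0)
    rows, cols = np.nonzero(A)
    return [(i, k) for i, k in zip(rows, cols) if indeg[k] >= 2]

def ablate(A, edges, n, rng):
    """Remove n distinct edges drawn from the given list."""
    A2 = A.copy()
    for p in rng.choice(len(edges), size=n, replace=False):
        i, k = edges[p]
        A2[i, k] = 0
    return A2

# Toy random adjacency matrix standing in for the trained network
A = (rng.random((40, 40)) < 0.1).astype(float)
motif_edges = convergent_edges(A)
all_edges = list(zip(*np.nonzero(A)))
n = min(10, len(motif_edges))

A_motif = ablate(A, motif_edges, n, rng)  # targeted ablation
A_rand = ablate(A, all_edges, n, rng)     # size-matched random control
assert A.sum() - A_motif.sum() == n and A.sum() - A_rand.sum() == n
```

Comparing sequence stability after targeted versus matched random ablation would turn the correlational observation into a causal test.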
Conclusion:
To summarize, the manuscript would benefit from:
(1) Reframing the contribution:
Present the multi-scale turnover analysis and the discussion of representational drift as the core novelties. I would reposition sequence emergence and lognormal weight distributions as reproductions of known results under a specific plasticity model and analysis method.
(2) Acknowledging that many rule combinations could produce equivalent outcomes, rather than suggesting that the combination chosen here is special.
(3) Adding parameter sensitivity analysis or, at a minimum, discussing robustness.
References:
[1] Kempter, Gerstner and van Hemmen, Hebbian learning and spiking neurons, 1999, Phys. Rev. E
[2] Ocker, Litwin-Kumar and Doiron, Self-organization of microcircuits in networks of spiking neurons with plastic synapses, 2015, PLoS Comput. Biol.
(A theoretical account of STDP in spiking networks and motifs, though it only considers 2-synapse motifs, not fan-in/fan-out.)
[3] Fiete et al., Spike-Time-Dependent Plasticity and Heterosynaptic Competition Organize Networks to Produce Long Scale-Free Sequences of Neural Activity, 2010, Neuron
[4] Duarte and Morrison, Dynamic stability of sequential stimulus representations in adapting neuronal networks, 2014, Front. Comput. Neurosci.
[5] Confavreux et al., Memory by a thousand rules: Automated discovery of functional multi-type plasticity rules reveals variety and degeneracy at the heart of learning, 2025, bioRxiv
[6] Zheng, Dimitrakakis and Triesch, Network Self-Organization Explains the Statistics and Dynamics of Synaptic Connection Strengths in Cortex, 2013, PLoS Comput. Biol.
[7] Zheng and Triesch, Robust development of synfire chains from multiple plasticity mechanisms, 2014, Front. Comput. Neurosci.
[8] Ravid Tannenbaum and Burak, Shaping Neural Circuits by High Order Synaptic Interactions, 2016, PLoS Comput. Biol.
[9] Bell, Duffy, and Fairhall, Discovering plasticity rules that organize and maintain neural circuits, 2024, NeurIPS
[10] Gong and Brunel, Inhibitory Plasticity Enhances Sequence Storage Capacity and Retrieval Robustness, 2024, bioRxiv