    Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review): 

      Summary:

      In this study, Lamberti et al. investigate how translation initiation and elongation are coordinated at the single-mRNA level in mammalian cells. The authors aim to uncover whether and how cells dynamically adjust initiation rates in response to elongation dynamics, with the overarching goal of understanding how translational homeostasis is maintained. To this end, the study combines single-molecule live-cell imaging using the SunTag system with a kinetic modeling framework grounded in the Totally Asymmetric Simple Exclusion Process (TASEP). By applying this approach to custom reporter constructs with different coding sequences, and under perturbations of the initiation/elongation factor eIF5A, the authors infer initiation and elongation rates from individual mRNAs and examine how these rates covary.

      The central finding is that initiation and elongation rates are strongly correlated across a range of coding sequences, resulting in consistently low ribosome density (≤12% of the coding sequence occupied). This coupling is preserved under partial pharmacological inhibition of eIF5A, which slows elongation but is matched by a proportional decrease in initiation, thereby maintaining ribosome density. However, a complete genetic knockout of eIF5A disrupts this coordination, leading to reduced ribosome density, potentially due to changes in ribosome stalling resolution or degradation.

      Strengths:

      A key strength of this work is its methodological innovation. The authors develop and validate a TASEP-based Hidden Markov Model (HMM) to infer translation kinetics at single-mRNA resolution. This approach provides a substantial advance over previous population-level or averaged models and enables dynamic reconstruction of ribosome behavior from experimental traces. The model is carefully benchmarked against simulated data and appropriately applied. The experimental design is also strong. The authors construct matched SunTag reporters differing only in codon composition in a defined region of the coding sequence, allowing them to isolate the effects of elongation-related features while controlling for other regulatory elements. The use of both pharmacological and genetic perturbations of eIF5A adds robustness and depth to the biological conclusions. The results are compelling: across all constructs and conditions, ribosome density remains low, and initiation and elongation appear tightly coordinated, suggesting an intrinsic feedback mechanism in translational regulation. These findings challenge the classical view of translation initiation as the sole rate-limiting step and provide new insights into how cells may dynamically maintain translation efficiency and avoid ribosome collisions.

      We thank the reviewer for their constructive assessment of our work, and for recognizing the methodological innovation and experimental rigor of our study.

      Weaknesses:

      A limitation of the study is its reliance on exogenous reporter mRNAs in HeLa cells, which may not fully capture the complexity of endogenous translation regulation. While the authors acknowledge this, it remains unclear how generalizable the observed coupling is to native mRNAs or in different cellular contexts.

      We agree that the use of exogenous reporters is a limitation inherent to the SunTag system, for which there is currently no simple alternative for single-mRNA translation imaging. However, we believe our findings are likely generalizable for several reasons.

      As discussed in our introduction and discussion, there is growing mechanistic evidence in the literature for coupling between elongation (ribosome collisions) and initiation via pathways such as the GIGYF2-4EHP axis (Amaya et al. 2018, Hickey et al. 2020, Juszkiewicz et al. 2020), which might operate on both exogenous and endogenous mRNAs.

      As already acknowledged in our limitations section, our exogenous reporters may not fully recapitulate certain aspects of endogenous translation (e.g., ER-coupled collagen processing), yet the observed initiation-elongation coupling was robust across all tested constructs and conditions.

      We have now expanded the Discussion (L393-395) to cite complementary evidence from Dufourt et al. (2021), who used a CRISPR-based approach in Drosophila embryos to measure translation of endogenous genes. We have also added a reference to Choi et al. 2025, who used an ER-specific SunTag reporter to visualize translation at the ER (L395-397).

      Additionally, the model assumes homogeneous elongation rates and does not explicitly account for ribosome pausing or collisions, which could affect inference accuracy, particularly in constructs designed to induce stalling. While the model is validated under low-density assumptions, more work may be needed to understand how deviations from these assumptions affect parameter estimates in real data.

      We agree with the reviewer that the assumption of homogeneous elongation rates is a simplification, and that our work represents a first step towards rigorous single-trace analysis of translation dynamics. We have explicitly tested the robustness of our model to violations of the low-density assumption through simulations (Figure 2 - figure supplement 2). These show that while parameter inference remains accurate at low ribosome densities, accuracy slightly deteriorates at higher densities, as expected. In fact, our experimental data do provide evidence for heterogeneous elongation: the waiting times between termination events deviate significantly from an exponential distribution (Figure 3 - figure supplement 2C), indicating the presence of ribosome stalling and/or bursting, consistent with the reviewer's concern. We acknowledge in the Limitations section (L402-406) that extending the model to explicitly capture transcript-dependent elongation rates and ribosome interactions remains challenging. The TASEP is difficult to solve analytically under these conditions, but we note that simulation-based inference approaches, such as particle filters to replace HMMs, could provide a path forward for future work to capture this complexity at the single-trace level.
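      For concreteness, the deviation from an exponential waiting-time distribution mentioned above can be quantified with a one-sample Kolmogorov-Smirnov statistic against an exponential fitted by its maximum-likelihood mean. The sketch below uses synthetic waiting times (all values illustrative; this is not the analysis code or data from the manuscript):

```python
import numpy as np

def ks_vs_exponential(waits):
    """One-sample KS statistic against an exponential fitted by its MLE mean."""
    x = np.sort(np.asarray(waits, dtype=float))
    n = x.size
    cdf = 1.0 - np.exp(-x / x.mean())       # fitted exponential CDF
    ecdf_hi = np.arange(1, n + 1) / n       # empirical CDF just after each point
    ecdf_lo = np.arange(0, n) / n           # ... just before each point
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

rng = np.random.default_rng(0)
# Hypothetical inter-termination times: a stall-containing mixture of a fast
# component and rare long pauses, versus a pure exponential of the same size.
mixture = np.concatenate([rng.exponential(20.0, 800),
                          rng.exponential(200.0, 200)])
pure = rng.exponential(40.0, 1000)

print(ks_vs_exponential(mixture))
print(ks_vs_exponential(pure))
```

A stall-containing mixture yields a much larger KS statistic than a pure exponential sample of the same size, which is the kind of signature referred to above for the waiting times between termination events.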

      Furthermore, although the study observes translation "bursting" behavior, this is not explicitly modeled. Given the growing recognition of translational bursting as a regulatory feature, incorporating or quantifying this behavior more rigorously could strengthen the work's impact.

      While we do not explicitly model the bursting dynamics in the HMM framework, we have quantified bursting behavior directly from the data. Specifically, we measure the duration of translated (ON) and untranslated (OFF) periods across all reporters and conditions (Figure 1G for control conditions and Figure 4G-H for perturbed conditions), finding that active translation typically lasts 10-15 minutes interspersed with shorter silent periods of 5-10 minutes. This empirical characterization demonstrates that bursting is a consistent feature of translation across our experimental conditions. The average duration of silent periods is similar to what was inferred by Livingston et al. 2023 for a similar SunTag reporter, while the average duration of active periods is substantially shorter (~15 min instead of ~40 min), consistent with the shorter average trace duration in our system compared to theirs (~15 min versus ~80 min). Incorporating an explicit two-state or multi-state bursting model into the TASEP-HMM framework would indeed be computationally intensive and represents an important direction for future work, as it would enable inference of switching rates alongside initiation and elongation parameters. We have added this point to the Discussion (L415-417).
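      As an illustration of this empirical quantification, ON/OFF dwell times can be extracted from a binarized (thresholded) intensity trace as follows (a minimal sketch; the function and variable names are ours, not from the manuscript's code):

```python
import numpy as np

def dwell_times(active, dt):
    """Durations of consecutive ON and OFF runs in a boolean translation trace.

    active: boolean array, True where the mRNA is translated
    dt: frame interval (e.g. minutes per frame)
    """
    a = np.asarray(active, dtype=bool)
    edges = np.flatnonzero(a[1:] != a[:-1]) + 1   # indices where the state flips
    bounds = np.concatenate(([0], edges, [a.size]))
    runs = np.diff(bounds) * dt                    # run lengths in time units
    states = a[bounds[:-1]]                        # state of each run
    return runs[states], runs[~states]             # (ON durations, OFF durations)

# Example: a short trace sampled every 0.5 min
trace = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0], dtype=bool)
on, off = dwell_times(trace, dt=0.5)
print(on.tolist(), off.tolist())  # → [2.0, 1.0] [1.0, 1.5, 0.5]
```

Histograms of the pooled ON and OFF durations then give the burst statistics summarized above.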

      Assessment of Goals and Conclusions:

      The authors successfully achieve their stated aims: they quantify translation initiation and elongation at the single-mRNA level and show that these processes are dynamically coupled to maintain low ribosome density. The modeling framework is well suited to this task, and the conclusions are supported by multiple lines of evidence, including inferred kinetic parameters, independent ribosome counts, and consistent behavior under perturbation.

      Impact and Utility:

      This work makes a significant conceptual and technical contribution to the field of translation biology. The modeling framework developed here opens the door to more detailed and quantitative studies of ribosome dynamics on single mRNAs and could be adapted to other imaging systems or perturbations. The discovery of initiation-elongation coupling as a general feature of translation in mammalian cells will likely influence how researchers think about translational regulation under homeostatic and stress conditions.

      The data, models, and tools developed in this study will be of broad utility to the community, particularly for researchers studying translation dynamics, ribosome behavior, or the effects of codon usage and mRNA structure on protein synthesis.

      Context and Interpretation:

      This study contributes to a growing body of evidence that translation is not merely controlled at initiation but involves feedback between elongation and initiation. It supports the emerging view that ribosome collisions, stalling, and quality control pathways play active roles in regulating initiation rates in cis. The findings are consistent with recent studies in yeast and metazoans showing translation initiation repression following stalling events. However, the mechanistic details of this feedback remain incompletely understood and merit further investigation, particularly in physiological or stress contexts. 

      In summary, this is a thoughtfully executed and timely study that provides valuable insights into the dynamic regulation of translation and introduces a modeling framework with broad applicability. It will be of interest to a wide audience in molecular biology, systems biology, and quantitative imaging.

      We appreciate the reviewer's thorough and positive assessment of our work, and that they recognize both the technical innovation of our modeling framework and its potential broad utility to the translation biology community. We agree that further mechanistic investigation of initiation-elongation feedback under various physiological contexts represents an important direction for future research.

      Reviewer #2 (Public review):

      Summary:

      This manuscript uses single-molecule run-off experiments and TASEP/HMM models to estimate biophysical parameters, i.e., ribosomal initiation and elongation rates. Combining inferred initiation and elongation rates, the authors quantify ribosomal density. TASEP modeling was used to simulate the mechanistic dynamics of ribosomal translation, and the HMM is used to link ribosomal dynamics to microscope intensity measurements. The authors' main conclusions and findings are:

      (1) Ribosomal elongation rates and initiation rates are strongly coordinated.

      (2) Elongation rates were estimated at 1-4.5 aa/sec and initiation rates at 0.5-2.5 events/min. These values agree with previously reported values.

      (3) Ribosomal density was determined to be below 12% for all constructs and conditions.

      (4) eIF5A perturbations (KO and GC7 inhibition) resulted in non-significant changes in translational bursting and ribosome density.

      (5) eIF5A perturbations resulted in increases in elongation and decreases in initiation rates.

      Strengths:

      This manuscript presents an interesting scientific hypothesis to study ribosome initiation and elongation concurrently. This topic is highly relevant for the field. The manuscript presents a novel quantitative methodology to estimate ribosomal initiation rates from Harringtonine run-off assays. This is relevant because run-off assays have traditionally been used exclusively to estimate elongation rates.

      We thank the reviewer for their careful evaluation of our work and for recognizing the novelty of our quantitative methodology to extract both initiation and elongation rates from harringtonine run-off assays, extending beyond the traditional use of these experiments.

      Weaknesses:

      The conclusion of the strong coordination between initiation and elongation rates is interesting, but some results are unexpected, and further experimental validation is needed to ensure this coordination is valid. 

      We agree that some of our findings need further experimental investigation in future studies. However, we believe that the coordination between initiation and elongation is supported by multiple results in our current work: (1) the strong correlation observed across all reporters and conditions (Figure 3E), and (2) the consistent maintenance of low ribosome density despite varying elongation rates. While additional experimental validation would be valuable, we note that directly manipulating initiation or elongation independently in mammalian cells remains technically challenging. Nevertheless, our findings are consistent with emerging mechanistic understanding of collision-sensing pathways (GIGYF2-4EHP) that could mediate such coupling, as discussed in our manuscript.

      (1) eIF5A perturbations resulted in a non-significant effect on the fraction of translating mRNA, translation duration, and bursting periods. Given the central role of eIF5A, I would have expected a different outcome. I would recommend that the authors expand the discussion and review more literature to justify these findings.

      We appreciate this comment. This finding is indeed discussed in detail in our manuscript (Discussion, paragraphs 6-7). As we note there, while eIF5A plays a critical role in elongation, the maintenance of bursting dynamics and ribosome density upon perturbation can be explained by compensatory feedback mechanisms: specifically, a coordinated decrease in initiation rates that counterbalances slower elongation to maintain homeostatic ribosome density. We also discuss several factors that complicate interpretation: (1) potential RQC-mediated degradation masking stronger effects in proline-rich constructs, (2) differences between GC7 treatment and genetic knockout suggesting altered stalling resolution kinetics, and (3) the limitations of using exogenous reporters that lack ER-coupled processing, which may be critical for eIF5A function in endogenous collagen translation (as suggested by Rossi et al., 2014; Mandal et al., 2016; Barba-Aliaga et al., 2021). The mechanistic complexity and tissue-specific nature of eIF5A function in mammals, which differs substantially from the better-characterized yeast system, likely contributes to the nuanced phenotype we observe. We believe our Discussion adequately addresses these points.

      (2) The AAG construct leading to slow elongation is very surprising. It is the opposite of the field consensus, where codon-optimized gene sequences are expected to elongate faster. More information about each construct should be provided. I would recommend more bioinformatic analysis on this, for example, calculating CAI for all constructs, or predicting the structures of the proteins.

      We agree that the slow elongation of the AAG construct is counterintuitive and indeed surprising. Following the reviewer's suggestion, we have now calculated the Codon Adaptation Index (CAI) for all constructs (Renilla 0.89, Col1a1 0.78, Col1a1 mutated 0.74). It is therefore unlikely that codon bias explains the slow translation, particularly since we designed the mutated Col1a1 construct with alanine codons selected to respect human codon usage bias, thereby minimizing changes in codon optimality. As we discuss in the manuscript, we hypothesize that the proline-to-alanine substitutions disrupted co-translational folding of the collagen-derived sequence. Prolines are critical for collagen triple-helix formation (Shoulders and Raines, 2009), and their replacement with alanines likely generates misfolded intermediates that cause ribosome stalling (Barba-Aliaga et al., 2021; Komar et al., 2024). This interpretation is supported by the high frequency (>30%) of incomplete run-off traces for AAG, suggesting persistent stalling events. Our findings thus illustrate an important potential caveat: "optimizing" a sequence based solely on codon usage can be detrimental when it disrupts functionally important structural features or co-translational folding pathways.
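      For reference, the CAI is the geometric mean of the relative adaptiveness of each codon (its usage frequency relative to the most frequent synonymous codon). A minimal sketch of the computation follows; the weights below cover only three amino acids and are illustrative, not the full human reference table used for the values reported above:

```python
import numpy as np

# Illustrative relative-adaptiveness weights w (most frequent synonymous
# codon = 1.0). A real analysis uses a complete reference table, e.g.
# derived from highly expressed human genes.
W = {
    "GCC": 1.00, "GCT": 0.66, "GCA": 0.57, "GCG": 0.27,  # Ala
    "CCC": 1.00, "CCT": 0.88, "CCA": 0.85, "CCG": 0.35,  # Pro
    "GGC": 1.00, "GGA": 0.74, "GGG": 0.73, "GGT": 0.48,  # Gly
}

def cai(seq):
    """Codon Adaptation Index: geometric mean of per-codon weights."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    w = [W[c] for c in codons if c in W]
    return float(np.exp(np.mean(np.log(w))))

print(round(cai("GGCCCTGCA"), 3))  # codons GGC, CCT, GCA → 0.795
```

Because the geometric mean is dominated by rare codons, even a handful of poorly adapted codons can depress the CAI, which is why a small CAI difference between constructs (0.74-0.89 here) is unlikely to explain a large elongation-rate difference.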

      This highlights that elongation rates depend not only on codon optimality but also on the interplay between nascent chain properties and ribosome progression.

      (3) The authors should consider using their methodology to study the effects of modifying the 5'UTR, resulting in changes in initiation rate and bursting, such as previously shown in reference Livingston et al., 2023. This may be outside of the scope of this project, but the authors could add this as a future direction and discuss if this may corroborate their conclusions. 

      We thank the reviewer for this excellent suggestion. We agree that applying our methodology to 5'-UTR variants would provide a complementary test of initiation-elongation coupling, and we have now added this as a future direction in the Discussion (L417-420).

      (4) The mathematical model and parameter inference routines are central to the conclusions of this manuscript. In order to support reproducibility, the computational code should be made available and well-documented, with a requirements file indicating the dependencies and their versions. 

      We have added the Github link in the manuscript (https://github.com/naef-lab/suntag-analysis) and have also deposited the data (.ome.tif) on Zenodo (https://zenodo.org/records/17669332).

      Reviewer #3 (Public review):

      Disclaimer:

      My expertise is in live single-molecule imaging of RNA and transcription, as well as associated data analysis and modeling. While this aligns well with the technical aspects of the manuscript, my background in translation is more limited, and I am not best positioned to assess the novelty of the biological conclusions.

      Summary:

      This study combines live-cell imaging of nascent proteins on single mRNAs with time-series analysis to investigate the kinetics of mRNA translation.

      The authors (i) used a calibration method for estimating absolute ribosome counts, and (ii) developed a new Bayesian approach to infer ribosome counts over time from run-off experiments, enabling estimation of elongation rates and ribosome density across conditions.

      They report (i) translational bursting at the single-mRNA level, (ii) low ribosome density (~10% occupancy ± a few percent), (iii) that ribosome density is minimally affected by perturbations of elongation (using a drug and/or different coding sequences in the reporter), suggesting a homeostatic mechanism potentially involving a feedback of elongation onto initiation, although (iv) this coupling breaks down upon knockout of elongation factor eIF5A.

      Strengths:

      (1) The manuscript is well written, and the conclusions are, in general, appropriately cautious (besides the few improvements I suggest below).

      (2) The time-series inference method is interesting and promising for broader applications. 

      (3) Simulations provide convincing support for the modeling (though some improvements are possible). 

      (4) The reported homeostatic effect on ribosome density is surprising and carefully validated with multiple perturbations.

      (5) Imaging quality and corrections (e.g., flat-fielding, laser power measurements) are robust.

      (6) Mathematical modeling is clearly described and precise; a few clarifications could improve it further.

      We thank the reviewer for recognizing the novelty of the approach and its rigour, and for providing suggestions to improve it further.

      Weaknesses:

      (1) The absolute quantification of ribosome numbers (via the measurement of $i_{MP}$) should be improved. This only affects the finding that ribosome density is low, not that it appears to be under homeostatic control. However, if $i_{MP}$ turns out to be substantially overestimated (hence ribosome density underestimated), then "ribosomes queuing up to the initiation site and physically blocking initiation" could become a relevant hypothesis. In my detailed recommendations to the authors, I list points that need clarification in their quantifications and suggest an independent validation experiment (measuring the intensity of an object with a known number of GFP molecules, e.g., MS2-GFP-labeled RNAs or individual GEMs).

      We agree with the reviewer that the estimation of the number of ribosomes is central to our finding that translation happens at low density on our reporters. This result derives from our measurement of the intensity of one mature protein (i<sub>MP</sub>), which we achieved by using a SunTag reporter with an RH1 domain at the C-terminus of the mature protein, allowing us to stabilise mature proteins via actin-tethering. In addition, as suggested by the reviewer, we had already validated this result with an independent estimate of the mature protein intensity (Figure 5 - figure supplement 2B), obtained by adding the mature protein intensity directly as a free parameter of the HMM. The inferred value of mature protein intensity for each construct (10-15 a.u.) was remarkably close to the experimental calibration result (14 ± 2 a.u.). Therefore, we are confident that our absolute quantification of ribosome numbers is accurate.

      (2) The proposed initiation-elongation coupling is plausible, but alternative explanations, such as changes in abortive elongation frequency, should be considered more carefully. The authors mention this possibility, but should test or rule it out quantitatively. 

      We thank the reviewer for the comment, but we consider that ruling out alternative explanations through new perturbation experiments is beyond the scope of the present work.

      (3) The observation of translational bursting is presented as novel, but similar findings were reported by Livingston et al. (2023) using a similar SunTag-MS2 system. This prior work should be acknowledged, and the added value of the current approach clarified.

      We did cite Livingston et al. (2023) in several places, but we recognize that we could add a few citations in key places to make clear that the observation of bursting is not novel but in agreement with previous results. We have now done so in the Results and Discussion sections.

      (4) It is unclear what the single-mRNA nature of the inference method brings, since it is only used here to report _average_ ribosome elongation rate and density (averaged across mRNAs and across time during the run-off experiments), although the method, in principle, has the power to resolve these two aspects.

      While decoding individual traces, our model infers shared (population-level) rates. Inferring transcript-specific parameters would be more informative, but it is highly challenging due to the uncertainty on the initial ribosome distribution on single transcripts. Pooling multiple transcripts together allows us to use some assumptions on the initial distribution and infer average elongation- and initiation-rate parameters, while revealing substantial mRNA-to-mRNA variability in the posterior decoding (e.g., Figure 3 - figure supplement 2C). Indeed, the inference still informs on the single-trace run-off time distribution (Figure 3A) and the waiting time between termination events (Figure 3 - figure supplement 2C), suggesting the presence of stalling and bursting. In addition, the transcript-to-transcript heterogeneity is likely accounted for by our model better than by previous methods (linear fit of the average run-off intensity), as suggested by their comparison (Figure 3 - figure supplement 2A). In the future, the model could be refined by introducing transcript-specific parameters, possibly in a hierarchical way, alongside shared parameters.

      (5) I did not find any statement about data availability. The data should be made available. Their absence limits the ability to fully assess and reproduce the findings.

      We have added the Github link in the manuscript (https://github.com/naef-lab/suntag-analysis) and have also deposited the data (.ome.tif) on Zenodo (https://zenodo.org/records/17669332).

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors): 

      Major Comments:

      (1) Lack of Explicit Bursting Model

      Although translation "bursts" are observed, the current framework does not explicitly model initiation as a stochastic ON/OFF process. This limits insight into regulatory mechanisms controlling burst frequency or duration. The authors should either incorporate a two-state/more-state (bursting) model of initiation or perform statistical analysis (e.g., dwell-time distributions) to quantify bursting dynamics. They should clarify how bursting influences the interpretation of initiation rate estimates.

      We agree with the reviewer that an explicit bursting model (e.g., a two-state telegraph model) would be the ideal theoretical framework. However, integrating such a model into the TASEP-HMM inference framework is computationally intensive and complex. As a robust first step, we have opted to quantify bursting empirically based on the decoded single-mRNA traces. As shown in Figure 1G (control) and Figure 4G (perturbed conditions), we explicitly measured the duration of "ON" (translated) and "OFF" (untranslated) periods. This statistical analysis provides a quantitative description of the bursting dynamics without relying on the specific assumptions of a telegraph model. We have clarified this in the text (L123-125) and, as suggested, added a discussion (L415-417) on the potential extensions of the model to include explicit switching kinetics in the Outlook section.

      (2) Assumption of Uniform Elongation Rates

      The model assumes homogeneous elongation across coding sequences, which may not hold for stalling-prone inserts (e.g., PPG). This simplification could bias inference, particularly in cases of sequence-specific pausing. Adding simulations or sensitivity analysis to assess how non-uniform elongation affects the accuracy of inferred parameters. The authors should explicitly discuss how ribosome stalling, collisions, or heterogeneity might skew model outputs (see point 4).

      A strong stalling sequence that affects all ribosomes equally should not deteriorate the inference of the initiation rate, provided that the low-density assumption holds. The scenario where stalling events lead to higher density, and thus increased ribosome-ribosome interactions, is comparable to the conditions explored in Figure 2E. In those simulations, we tested the inference on data generated with varying initiation and elongation rates, resulting in ribosome densities ranging from low to high. We demonstrated that the inference remains robust at low ribosome densities (<10%). At higher densities, the accuracy of the initiation rate estimate decreases, whereas the elongation rate estimate remains comparatively robust. Additionally, the model tends to overestimate ribosome density under high-density conditions, likely because it neglects ribosome interference at the initiation site (Figure 2 - figure supplement 2C). We agree that a deeper investigation into the consequences of stochastic stalling and bursting would be beneficial, and we have explicitly acknowledged this in the Limitations section.
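      The density regimes described above can be explored with a minimal stochastic TASEP simulation (rejection-style kinetic Monte Carlo with an extended ribosome footprint). All parameter values below are illustrative, and this is a sketch, not the simulation code used in the manuscript:

```python
import numpy as np

def simulate_tasep(L=300, alpha=0.05, lam=5.0, footprint=10,
                   t_end=2000.0, seed=1):
    """Minimal TASEP with extended particles, rejection-style Gillespie.

    alpha: initiation rate (1/s); lam: hopping rate per codon (1/s).
    Returns the time-averaged fraction of codons covered by ribosomes.
    """
    rng = np.random.default_rng(seed)
    ribos = []                       # leading-edge positions, front ribosome first
    t, covered_time = 0.0, 0.0
    while t < t_end:
        # candidate moves: one hop per ribosome, plus initiation if the first
        # `footprint` codons are free (blocked hops are drawn but do nothing)
        can_init = (not ribos) or ribos[-1] > footprint
        rates = [lam] * len(ribos) + ([alpha] if can_init else [])
        total = sum(rates)
        dt = rng.exponential(1.0 / total)
        covered_time += len(ribos) * footprint * dt
        t += dt
        k = rng.choice(len(rates), p=np.array(rates) / total)
        if k == len(ribos):
            ribos.append(1)                          # initiation at codon 1
        elif ribos[k] >= L:
            ribos.pop(k)                             # termination at the stop
        else:
            ahead = ribos[k - 1] if k > 0 else None
            if ahead is None or ahead - ribos[k] > footprint:
                ribos[k] += 1                        # unobstructed hop
    return covered_time / (t * L)

print(simulate_tasep())
```

With these illustrative low-density parameters (alpha = 0.05/s, lam = 5 aa/s, footprint 10 codons), the covered fraction settles near alpha·footprint/lam ≈ 10%, i.e. the regime in which the inference is shown to be accurate; raising alpha or lowering lam pushes the system toward the high-density regime where ribosome interference matters.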

      (3) Interpretation of eIF5A Knockout Phenotype

      The observation that eIF5A KO reduces initiation more than elongation, leading to decreased ribosome density, is biologically intriguing. However, the explanation invoking altered RQC kinetics is speculative and not directly tested. The authors should consider validating the RQC hypothesis by monitoring reporter mRNA stability, ribosome collision markers, or translation termination intermediates.

      We thank the reviewer for the comment, but we consider that ruling out alternative explanations through new experiments is beyond the scope of the present work.

      (4) To strengthen the manuscript, the authors should incorporate insights from three studies.

      - Livingston et al. (PMC10330622) found that translation occurs in bursts, influenced by mRNA features and initiation factors, supporting the coupling of initiation and elongation.

      - Madern et al. (PMID: 39892379) demonstrated that ribosome cooperativity enhances translational efficiency, highlighting coordinated ribosome behavior.

      - Dufourt et al. (PMID: 33927056) observed that high initiation rates correlate with high elongation rates, suggesting a conserved mechanism across cell cultures and organisms.

      Integrating these studies could enrich the manuscript's interpretation and stimulate new avenues of thought.

      We thank the reviewer for the valuable comment. We added citations of Livingston et al. in the context of translational bursting. We had already cited Madern et al. in multiple places and, although their observations of ribosome cooperativity are very compelling, they cannot be directly linked to our observation of a feedback between initiation and elongation, and it would be very challenging to see a similar effect on our reporters. This is why we did not expressly discuss cooperativity. We also integrated Dufourt et al. in the Discussion about the possibility of designing genetically-encoded reporters. Finally, we added a sentence about the possibility of using an ER-specific SunTag reporter, as done recently in Choi et al., Nature (2025) (https://doi.org/10.1038/s41586-025-09718-0).

      Minor Comments:

      (1) Use consistent naming for SunTag reporters (e.g., "PPG" vs "proline-rich") throughout.

      Thank you for the comment. However, the term proline-rich always appears together with PPG, so we believe that the naming is clear and consistent.

      (2) Consider a schematic overview of the experimental design and modeling pipeline for accessibility.

      Thank you for the suggestion. We consider that the experimental design and modeling are now described sufficiently clearly and do not justify an additional scheme.

      (3) Clarify how incomplete run-off traces are handled in the HMM inference.

      Incomplete run-off traces are treated identically to complete traces in our HMM inference. This is possible because our model relies on the probability of transitions occurring per time step to infer rates; it does not require observing the final "empty" state to estimate the kinetic parameters α and λ. The loss of signal (e.g., mRNA moving out of the focal volume or photobleaching) does not invalidate the kinetic information contained in the portion of the trace that was observed. We have clarified this in the Methods section.
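      To illustrate why truncation poses no problem, the forward recursion of an HMM marginalizes over the hidden state at the last observed frame, so the likelihood is well defined whether or not the trace ends empty. The sketch below uses a deliberately simplified pure-death model (each ribosome exits independently at a fixed rate), not our TASEP-HMM, and all parameter values are illustrative:

```python
import numpy as np
from math import comb

def forward_loglik(obs, i_mp=14.0, sigma=3.0, mu=0.02, n_max=30, dt=20.0):
    """Log-likelihood of a (possibly truncated) run-off intensity trace under
    a simplified pure-death HMM: each of n ribosomes exits independently with
    rate mu (1/s); obs is the spot intensity sampled every dt seconds."""
    p = 1.0 - np.exp(-mu * dt)                 # per-ribosome exit prob. per frame
    n = np.arange(n_max + 1)
    T = np.zeros((n_max + 1, n_max + 1))       # T[i, j] = P(j survivors | i)
    for i in range(n_max + 1):
        for j in range(i + 1):
            T[i, j] = comb(i, i - j) * p**(i - j) * (1 - p)**j
    # emission: unnormalized Gaussian around n * i_mp (constant factor dropped)
    emit = lambda y: np.exp(-0.5 * ((y - n * i_mp) / sigma) ** 2)
    alpha = np.ones(n_max + 1) * emit(obs[0])  # uniform prior over counts
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ T) * emit(y)
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll                                  # sums over the final hidden state

# A synthetic trace that never reaches the empty state (incomplete run-off):
trace = np.array([70.0, 68.0, 55.0, 57.0, 42.0, 40.0])
print(forward_loglik(trace))
```

The final forward variables are summed over all ribosome counts, so the observed portion of the trace contributes its full kinetic information even when the run-off is cut short.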

      Reviewer #2 (Recommendations for the authors):

      (1) Reproducibility:

      (1.1) The authors should use a GitHub repository with a timestamp for the release version.

      The code is available on GitHub (https://github.com/naef-lab/suntag-analysis).

      (1.2) Make raw images and data available in a figure repository like Figshare.

      The raw images (.ome.tif) are now available on Zenodo (https://zenodo.org/records/17669332).

      (2) Paper reorganization and expansion of the intensity and ribosome quantification:

      (2.1) Given the relevance of the initiation and elongation rates for the conclusions of this study, and the fact that the authors inferred these rates from the spot intensities, I recommend that the authors move Figure 1 Supplement 2 to the main text and expand the description of the process used to relate spot intensity and number of ribosomes. Please also expand the figure caption for this image.

      We agree with the importance of this validation. We have expanded the description of the calibration experiment in the main text and in the figure caption.

      (2.2) I suggest the authors explicitly mention the use of HMM in the abstract.

      We have now explicitly mentioned the TASEP-based HMM in the abstract.

      (2.3) In line 492, please add the frame rate used to acquire the images for the run-off assays.

      We have added the specific frame rate (one frame every 20 seconds) to the relevant section.

      (3) Figures and captions:

      (3.1) Figure 1, Supplement 2. Please add a description of the colors used in plots B, C. 

      We have expanded the caption and added the color description.

      (3.2) In the Figure 2 caption. It is not clear what the authors mean by "traceseLife". Please ensure it is not a typo.

      Thank you for spotting this. We have corrected the typo.

      (3.3) Figure 1 A, in the cartoon N(alpha)->N-1, shouldn't the transition also depend on lambda?

      The transition probability was explicitly derived in the “Bayesian modeling of run-off traces” section (Eqs. 17-18), and does not depend on λ, but only on the initiation rate under the low-density assumption.

      (3.4) Figure 3, Supplement 2. "presence of bursting and stalling.." has a typo.

      Corrected.

      (3.5) Figure 5, panel C, the y-axis label should be "run-off time (min)."

      Corrected.

      (3.6) For most figures, add significance bars.

      (3.7) In the figure captions, please add the total number of cells used for each condition.

      We have systematically indicated the number of traces (n<sub>t</sub>) and the number of independent experiments (n<sub>e</sub>) in the captions in this format (n<sub>t</sub>, n<sub>e</sub>).

      (4) Mathematical Methods:

      We greatly thank the reviewer for their detailed attention to the mathematical notation. We have addressed all points below.

      (4.1) In lines 555, Materials and Methods, subsection, Quantification of Intensity Traces, multiple equations are not numbered. For example, after Equation (4), no numbers are provided for the rest of the equations. Please keep consistency throughout the whole document.

      We have ensured that all equations are now consistently numbered throughout the document.

      (4.2) In line 588, the authors mention "$X$ is a standard normal random variable with mean $\mu$ and standard deviation $s_0$". Please ensure this is correct. A standard normal random variable has a 0 mean and std 1. 

      Thank you for the suggestion, we have corrected the text (L678).

      (4.3) Line 546, Equation 2. The authors use mu(x,y) to describe a 2d Gaussian function. But later in line 587, the authors reuse the same variable name in equation 5 to redefine the intensity as mu = b_0 + I.

      We have renamed the 2D Gaussian function to \mu_{2D}(x,y) in the spot tracking section.

      (4.4) For the complete document, it could be beneficial to the reader if the authors expand the definition of the relationship between the signal "y" and the spot intensity "I". Please note how the paragraph in lines 582-587 does not properly introduce "y".

      We have added an explicit definition of y and its relationship to the underlying spot intensity I in the text to improve readability and clarity.

      (4.5) Please ensure consistency in variable names. For example, "I" is used in line 587 for the experimental spot intensity, then line 763 redefines I(t) as the total intensity obtained from the TASEP model; please use "I_sim(t)" for simulated intensities. Please note that reusing the variable "I" for different contexts makes it hard for the reader to follow the text. 

      We agree that this was confusing. We have implemented the suggestion and now distinguish simulated intensities using the notation I<sub>S</sub> .

      (4.6) Line 555 "The prior on the total intensity I is an "uninformative" prior" I ~ half_normal(1000). Please ensure it is not "I_0 ~ half_normal(1000)."? 

      We confirm that “I” is the correct variable representing the total intensity in this context; we do not use an “I<sub>0</sub>” variable here.

      (4.7) In lines 595, equation 6. Ensure that the equation is correct. Shouldn't it be: s_0^2 = ln ( 1 + (sigma_meas^2 / ⟨y⟩^2) )? Please ensure that this is correct and it is not affecting the calculated values given in lines 598.

      Thank you for catching this typo. We have corrected the equation in the manuscript. We confirm that the calculations performed in the code used the correct formula, so the reported values remain unchanged.
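
      As a numerical sanity check of the corrected relation s_0^2 = ln(1 + sigma_meas^2 / ⟨y⟩^2): for a log-normal variable, Var/Mean^2 = exp(s_0^2) - 1, so the log-scale variance follows directly from the measured coefficient of variation. The sketch below (parameter values are arbitrary, for illustration only) recovers s_0 from simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)

s0 = 0.3                                   # assumed log-scale standard deviation
y = rng.lognormal(mean=1.0, sigma=s0, size=2_000_000)

cv2 = y.var() / y.mean() ** 2              # measured sigma_meas^2 / <y>^2
s0_sq_recovered = np.log(1.0 + cv2)        # corrected formula

assert abs(s0_sq_recovered - s0 ** 2) < 1e-3
```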

      (4.8) In line 597, "the mean intensity square ^2". Please ensure it is not "the square of the temporal mean intensity."

      We have corrected the text to "the square of the temporal mean intensity."

      (4.9) In lines 602-619, Bayesian modeling of run-off traces, please ensure to introduce the constant "\ell". Used to define the ribosomal footprint?

      We have added the explicit definition of 𝓁 as the ribosome footprint size (length of transcript occupied by one ribosome) in the "Bayesian modeling of run-off traces" section.

      (4.10) Line 687 has a minor typo "[...] ribosome distribution.. Then, [...]"

      We have corrected the punctuation.

      (4.11) In line 678, Equation 19 introduces the constant "L_S", Please ensure that it is defined in the text.

      We have added the explicit definition of L<sub>S</sub> (the length of the SunTag) to the text surrounding Equation 19.

      (4.12) In line 695, Equation 22, please consider using a subscript to differentiate the variance due to ribosome configuration. For example, instead of "sigma (...)^2" use something like "sigma_c ^2 (...)". Ensure that this change is correctly applied to Equation 24 and all other affected equations.

      Thank you, we have implemented the suggestions.

      (4.13) In line 696, please double-check equations 26 and 27. Specifically, the denominator ^2. Given the previous text, it is hard to follow the meaning of this variable. 

      We have revised the notation in Equations 26 and 27 to ensure the denominator is consistent with the definitions provided in the text.

      (4.14) In lines 726, the authors mention "[...], but for the purposes of this dissertation [...]", it should be "[...], but for the purposes of this study [...]"

      Thank you for spotting this. We have replaced "dissertation" with "study."

      (4.15) Equations 5, 28, 37, and the unnumbered equation between Equations 16 and 17 are similar, but in some, "y" does not explicitly depend on time. Please ensure this is correct. 

      We have verified these equations and believe they are correct.

      (4.16) Please review the complete document and ensure that variables and constants used in the equations are defined in the text. Please ensure that the same variable names are not reused for different concepts. To improve readability and flow in the text, please review the complete Materials and Methods sections and evaluate if the modeling section can be written more clearly and concisely. For example, Equation 28 is repeated in the text.

      We have performed a comprehensive review of the Materials and Methods section. To improve conciseness and flow, we have merged the subsection “Observation model and estimation of observation parameters” with the “Bayesian modeling of run-off traces” section. This allowed us to remove redundant definitions and repeated equations (such as the previous Equation 28). We have also checked that all variables and constants are defined upon first use and that variable names remain consistent throughout the manuscript.

      Reviewer #3 (Recommendations for the authors):

      (1) Data Presentation

      (1.1) In main Figures 1D and 4E, the traces appear to show frequent on-off-on transitions ("bursting"), but in supplementary figures (1-S1A and 4-S1A), this behavior is seen in only ~8 of 54 traces. Are the main figure examples truly representative?

      We acknowledge the reviewer's point. In Figure 1D, we selected some of the longest and most illustrative traces to highlight the bursting dynamics. We agree that the term "representative" might be misleading if interpreted as "average." We have updated the text to state "we show bursting traces" to more accurately reflect the selection.

      (1.2) There are 8 videos, but I could not identify which is which.

      Thank you for pointing this out. We have renamed the video files to clearly correspond to the figures and conditions they represent.

      (2) Data Availability:

      As noted above, the data should be shared. This is in accordance with eLife's policy: "Authors must make all original data used to support the claims of the paper, or that are required to reproduce them, available in the manuscript text, tables, figures or supplementary materials, or at a trusted digital repository (the latter is recommended). [...] eLife considers works to be published when they are posted as preprints, and expects preprints we review to meet the standards outlined here." Access to the time traces would have been helpful for reviewers.

      We have now added the Github link for the code (https://github.com/naef-lab/suntag-analysis) and deposited the raw data (.ome.tif files) on Zenodo (10.5281/zenodo.17669332).

      (3) Model Assumptions:

      (3.1) The broad range of run-off times (Figure 3A) suggests stalling, which may be incompatible with the 'low-density' assumption used on the TASEP model, which essentially assumes that ribosomes do not bump into each other. This could impact the validity of the assumptions that ribosomes behave independently, elongate at constant speed (necessary for the continuum-limit approximation), and that the rate-limiting step is the initiation. How robust are the inferences to this assumption?

      We agree that the deviation of waiting times from an exponential distribution (Figure 3 - figure supplement 2C) suggests the presence of stalling, which challenges the strict low-density assumption and constant elongation speed. We explicitly explored the robustness of our model to higher ribosome densities in simulations. As shown in Figure 2 - figure supplement 2, while the model accuracy for single parameters deteriorates at very high densities (overestimating density due to neglected interference), it remains robust for estimating global rates in the regime relevant to our data. We have expanded the discussion on the limitations of the low density and homogeneous elongation rate assumptions in the text (L404-408).

      (3.2) Since all constructs share the same SunTag region, elongation rates should be identical there and diverge only in the variable region. This would affect $\gamma (t)$ and hence possibly affect the results. A brief discussion would be helpful.

      This is a valid point. Currently, our model infers a single average elongation rate that effectively averages the behavior over the SunTag and the variable CDS regions. Modeling distinct rates for these regions would be a valuable extension but adds significant complexity. While our current "effective rate" approach might underestimate the magnitude of differences between reporters, it captures the global kinetic trend. We have added a brief discussion acknowledging this simplification (L408-412).

      (3.3) A similar point applies to the Gillespie simulations: modeling the SunTag region with a shared elongation rate would be more accurate.

      We agree. Simulating distinct rates for the SunTag and CDS would increase realism, though our current homogeneous simulations serve primarily to benchmark the inference framework itself. We have noted this as a potential future improvement (L413-414).
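
      For concreteness, a minimal Gillespie-style TASEP sketch is shown below (illustrative only, not our simulation code, and with arbitrary toy parameters): ribosomes with footprint ELL initiate at rate ALPHA when the 5' end is clear, hop forward at rate LAM when unobstructed by the ribosome ahead, and terminate at site L. Extending it to region-specific rates would amount to making LAM a function of position.

```python
import random

L, ELL, ALPHA, LAM = 300, 10, 0.05, 2.0    # assumed toy parameters

def simulate(t_end, seed=1):
    rng = random.Random(seed)
    ribos = []            # ribosome positions, frontmost first (decreasing)
    t, completed = 0.0, 0
    while True:
        # enabled moves: unobstructed hops, plus initiation if 5' end is clear
        moves = [("hop", i) for i, p in enumerate(ribos)
                 if (i == 0 and p < L) or (i > 0 and p + ELL < ribos[i - 1])]
        if not ribos or ribos[-1] > ELL:
            moves.append(("init", None))
        rates = [ALPHA if m[0] == "init" else LAM for m in moves]
        t += rng.expovariate(sum(rates))
        if t >= t_end:
            return completed, ribos
        kind, i = rng.choices(moves, weights=rates)[0]
        if kind == "init":
            ribos.append(1)               # new ribosome occupies sites [1, ELL]
        else:
            ribos[i] += 1
            if ribos[i] == L:             # termination: ribosome runs off
                ribos.pop(i)
                completed += 1

done, on_mrna = simulate(600.0)
```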

      (3.4) Equation (13) assumes that switching between bursting and non-bursting states is much slower than the elongation time. First, this should be made explicit. Second, this is not quite true (~5 min elongation time on Figure 3-s2A vs ~5-15min switching times on Figure 1). It would be useful to show the intensity distribution at t=0 and compare it to the expected mixture distribution (i.e., a Poisson distribution + some extra 'N=0' cells). 

      We thank the reviewer for this insightful comment. We have added a sentence to the text explicitly stating the assumption that switching dynamics are slower than the translation time. While the timescales are indeed closer than ideal (5 min vs. 5-15 min), this assumption allows for a tractable approximation of the initial conditions for the run-off inference. Comparing the intensity distribution at t=0 to a zero-inflated Poisson distribution is an excellent suggestion for validation, which we will consider for future iterations of the model.
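
      The suggested comparison could be framed as follows (a sketch of the reviewer's proposal, not an analysis we have performed): the t=0 ribosome-number distribution would be compared to a zero-inflated Poisson, i.e. an extra mass pi0 of "off" mRNAs on top of Poisson(mu), with pi0 and mu as hypothetical fit parameters.

```python
import math

def zip_pmf(n, pi0, mu):
    """Zero-inflated Poisson: extra zero mass pi0 plus (1 - pi0) * Poisson(mu)."""
    pois = math.exp(-mu) * mu ** n / math.factorial(n)
    return pi0 * (n == 0) + (1.0 - pi0) * pois

# probabilities sum to ~1 over a wide support
total = sum(zip_pmf(n, pi0=0.3, mu=4.0) for n in range(100))
assert abs(total - 1.0) < 1e-9
```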

      (4) Microscopy Quantifications:

      (4.1) Figure 1-S2A shows variable scFv-GFP expression across cells. Were cells selected for uniform expression in the analysis? Or is the SunTag assumed saturated? which would then need to be demonstrated. 

      All cell lines used are monoclonal, and cells were selected via FACS for consistent average cytoplasmic GFP signal. We assume the SunTag is saturated based on the established characterization of the system by Tanenbaum et al. (2014), where the high affinity of the scFv-GFP ensures saturation at expression levels similar to ours.

      (4.2) As translation proceeds, free scFv-GFP may become limiting due to the accumulation of mature SunTag-containing proteins. This would be difficult to detect (since mature proteins stay in the cytoplasm) and could affect intensity measurements (newly synthesized SunTag proteins getting dimmer over time).

      This effect can occur with very long induction times. To mitigate this, we optimized the Doxycycline (Dox) incubation time for our harringtonine experiments to prevent excessive accumulation of mature protein. We also monitor the cytoplasmic background for granularity, which would indicate aggregation or accumulation.

      (4.3) The statements "for some traces, the mRNA signal was lost before the run-off completion" (line 195) and "we observed relatively consistent fractions of translated transcripts and trace duration distributions across reporters" (line 340) should be supported by a supplementary figure.

      The first statement is supported by Figure 2 - figure supplement 1, which shows representative run-off traces for all constructs, including incomplete ones.

      The second statement regarding consistency is supported by the quantitative data in Figure 1E and G, which summarize the fraction of translated transcripts and trace durations across conditions.

      (4.4) Measurements of single mature protein intensity $i_{MP}$:

      (4.4.1) Since puromycin is used to disassemble elongating ribosomes, calibration may be biased by incomplete translation products (likely a substantial fraction, since the Dox induction is only 20min and RNAs need several minutes to be transcribed, exported, and then fully translated).

      As mentioned in the “Live-cell imaging” paragraph, the imaging takes place 40 min after the end of Dox incubation. This provides ample time for mRNA export and full translation of the synthesized proteins. Consequently, the fraction of incomplete products generated by the final puromycin addition is negligible compared to the pool of fully synthesized mature proteins accumulated during the preceding hour.

      (4.4.2) Line 519: "The intensity of each spot is averaged over the 100 frames". Do I understand correctly that you are looking at immobile proteins? What immobilizes these proteins? Are these small aggregates? It would be surprising that these aggregates have really only 1, 2, or 3 proteins, as suggested by Figure 1-S2A.

      We are visualizing mature proteins that are specifically tethered to the actin cytoskeleton. This is achieved using a reporter where the RH1 domain is fused directly to the C-terminus of the Renilla protein (SunTag-Renilla-RH1). The RH1 domain recruits the endogenous Myosin Va motor, which anchors the protein to actin filaments, rendering it immobile. Since each Myosin Va motor interacts with one RH1 domain (and thus one mature protein), the resulting spots represent individual immobilized proteins rather than aggregates. We have now revised the text and Methods section to make this calibration strategy and the construct design clearer (L130-140).

      (4.4.3) Estimating the average intensity $i_{MP}$ of single proteins all resides in the seeing discrete modes in the histogram of Figure 1-S2B, which is not very convincing. A complementary experiment, measuring *on the same microscope* the intensity of an object with a known number of GFP molecules (e.g., MS2-GFP labeled RNAs, or individual GEMs https://doi.org/10.1016/j.cell.2018.05.042 (only requiring a single transfection)) would be reassuring to convince the reader that we are not off by an order of magnitude.

      While a complementary calibration experiment would be valuable, we believe our current estimate is robust because it is independently validated by our model. When we inferred i<sub>MP</sub> as a free parameter in the HMM (Figure 5 - figure supplement 2B), the resulting value (10-15 a.u.) was remarkably consistent with our experimental calibration (14 ± 2 a.u.). We have clarified this independent validation in the text to strengthen the confidence in our quantification (L264-272).

      (4.4.4) Further on the histogram in Figure 1-S2B:

      - The gap between the first two modes is unexpectedly sharp. Can you double-check? It means that we have a completely empty bin between two of the most populated bins.

      We have double-checked the data; the plot is correct, though the sharp gap is likely due to the small sample size (n=29).

      - I am surprised not to see 3 modes or more, given that panel A shows three levels of intensity (the three colors of the arrows).

      As noted below, brighter foci exist but fall outside the displayed range of the histogram.

      - It is unclear what the statistical test is and what it is supposed to demonstrate.

      The Student's t-test compares the means of the two identified populations to confirm they are statistically distinct intensity groups.

      - I count n = 29, not 31. (The sample is small enough that the bars of the histogram show clear discrete heights, proportional to 1, 2, 3, 4, and 5 --adding up all the counts, I get 29). Is there a mistake somewhere? Or are some points falling outside of the displayed x-range?

      You are correct. Two brighter data points fell outside the displayed range. The total number of foci in the histogram is 29. We have corrected the figure caption and the text accordingly.

      (5) Miscellaneous Points: 

      (5.1) Panel B in Figure 2-s1 appears to be missing.

      The figure contains only one panel.

      (5.2) In Equation (7), $l$ is not defined (presumably ribosome footprint length?). Instead, $J$ is defined right after eq (7), as if it were used in this equation.

      Thank you for pointing this out, we have corrected it.

      (5.3) Line 703, did you mean to write something else than "Equation 26" (since equation 26 is defined after)?

      Yes, this was a typo. We have corrected the cross-reference.

    1. Calming and self-regulation strategies in schools: Analysis and implementation

      Executive summary

      This document summarizes the perspectives of Claudia Verrette, who holds a doctorate in physical activity sciences and is a professor at UQAM, on the deployment of calming measures in schools.

      Originally developed in the fields of mental health and occupational therapy for specific needs (autism, sensory disorders), these measures are now used more broadly to foster self-regulation in all students.

      The central goal is to maintain or restore the student's "availability for learning."

      The analysis identifies four major categories of tools: arrangement of the physical space, physical techniques, diversion or grounding strategies, and physical activity.

      The success of these interventions rests not on the object itself but on a reflective coaching process led by the adult.

      To be effective, these strategies must be part of a paradigm shift within the school team, moving from a punitive approach to a caring, proactive management of behavior.

      --------------------------------------------------------------------------------

      Definition and foundations of calming measures

      Calming measures form a family of tools and activities designed to help students regulate themselves.

      Although the term "calming" (referring to "calming tools" in English) mainly suggests quieting down, it is more accurate to speak of self-regulation measures.

      Key objectives

      Availability: Keep the student in a zone conducive to learning.

      Modulation: Depending on the need, either activate the student (alertness) or calm them down.

      Alternative: Offer an option other than traditional coercive measures for managing behavior.

      Origins and evolution

      These tools originally come from psychiatry and occupational therapy, designed for students with autism spectrum disorders or sensory integration disorders.

      Through sensory mediation (deep pressure, stimulation of muscle receptors), they send calming signals to the brain.

      Today their use has become widespread, particularly in elementary school, to address hyperactivity or inattention.

      --------------------------------------------------------------------------------

      Typology of self-regulation measures

      Interventions fall into four broad, distinct categories, each addressing specific student needs.

      | Category | Example tools and activities | Goals |
      | --- | --- | --- |
      | Classroom layout | Quiet corners, "zen" corners, rocking chairs, cushions, soft music, headphones. | Offer a space for voluntary (non-punitive) withdrawal away from classroom stimuli. |
      | Physical measures | Slow, deep breathing (yoga, meditation), self-massage (balls, rollers), Jacobson technique (contraction/release). | Send a physiological safety signal to the brain through sensory and muscular pathways. |
      | Diversion and grounding | Grounding: heavy objects (weighted animals), music, textured stickers, fidget spinners. Diversion: puzzles, taking objects apart, sorting blocks. | Redirect attention or "step out" of a difficult situation through positive imagery or focus on an object. |
      | Physical activity | Active corridors, active breaks, 20-minute high-intensity sessions, motor discharge. | Improve post-exercise concentration and use movement as a behavior-management tool. |

      --------------------------------------------------------------------------------

      Physical activity as a multi-tiered intervention lever

      Physical activity plays a central role in calming strategies, structured according to a response-to-intervention model:

      1. Universal tier: Physical education, recess, and active corridors available to all students to promote health and overall calm.

      2. Targeted tier: Additional activity periods for subgroups of students, sometimes used as a reward for expected behavior.

      3. Individualized tier (the case of the "Ring"):

      Concept: A motor-discharge room for students with severe behavioral disorders.

      How it works: Controlled sequences (e.g., 10 jumping jacks, wall pushes, jump rope) interspersed with deep breathing.

      Coaching: An adult guides the student's reflection on their emotional state (e.g., moving from anger to a zone where returning to class is possible).

      Outcome: Students identify this arrangement as the most effective and best-liked measure.

      --------------------------------------------------------------------------------

      Conditions for success and effective implementation

      The effectiveness of a calming measure does not lie in the object itself, which can otherwise become a mere source of distraction.

      The assisted self-regulation process

      For the student to become autonomous, the adult must guide them through a three-step cognitive process:

      Recognition: Help the student name their state (anger, agitation, being overwhelmed by thoughts).

      Choice: Select the appropriate tool from a personal, previously practiced repertoire (is the need activation or calming?).

      Reflective review: Assess afterwards whether the tool was effective and whether it can be used again.

      Organizational success factors

      Habituation: Let all students explore the tools at the outset to dissipate the novelty ("honeymoon") effect.

      School-team consistency: Strategies must be shared by all the adults around the student to ensure predictability and greater effectiveness.

      Caring outlook: Abandon the assumption that the student "should be able" to self-regulate alone, especially in high school, where the needs persist.

      --------------------------------------------------------------------------------

      Conclusion: The paradigm shift

      Moving to calming measures demands deep reflection on discipline.

      The same object (such as a bench) can serve as punishment or as a self-regulation tool depending on the adult's intent.

      The success of these measures depends on the school team's willingness to commit to practices centered on self-determination and care rather than coercion.

      Without this coordination and human support, calming tools risk being abandoned after a few months of ineffective use.

    1. Summary of the Matinale Associations session: Taxation, Philanthropy, and Endowment Funds

      Executive summary

      This document summarizes the presentations given by the Île-de-France Regional Directorate of Public Finances (DRFIP) during a webinar on tax developments affecting non-profit organizations (OSBL).

      Tax management of associations and endowment funds is marked by a growing demand for legal certainty, illustrated by a steady rise in requests for advance tax rulings (rescrits fiscaux): nearly 50% of all requests concern the non-profit sector.

      The key points are the tightening of controls on the issuance of tax receipts following the law of 24 August 2021, the rigorous application of the non-profit criteria (the "4P" rule and disinterested management), and the essential distinction between philanthropy (mécénat) and commercial sponsorship.

      Finally, the endowment-fund framework, though more flexible, imposes strict reporting obligations and a minimum endowment of €15,000.

      --------------------------------------------------------------------------------

      I. The DRFIP's Scope of Action and Legal Certainty

      The Île-de-France Regional Directorate of Public Finances, in particular its tax audit and legal affairs division, is responsible for securing tax expenditure.

      1. The rise of the advance tax ruling (rescrit)

      The rescrit is a voluntary procedure that allows an organization to obtain a formal position from the administration on its tax regime.

      Statistics: In 2025, the DRFIP expects to process around 1,140 ruling requests, of which 493 specifically concern associations (about 45%).

      Purpose: Secure the issuance of tax receipts for donors to avoid later challenges during audits.

      Limits: The ruling protects the organization only if the information provided is complete and accurate. It does not prevent a subsequent tax audit.

      2. Tighter controls (Law of 24 August 2021)

      The law reinforcing respect for the principles of the Republic changed the nature of audits:

      Before 2021: A simple check that reported amounts matched.

      Since 2021: A substantive validity check. The administration verifies whether the organization is genuinely entitled to issue tax receipts under the public-interest criteria.

      --------------------------------------------------------------------------------

      II. Assessing Commercial Character: Criteria and Methodology

      By default, an association is exempt from commercial taxes, based on a rebuttable presumption of non-profit status.

      The administration may, however, prove otherwise by following a step-by-step analysis.

      1. Disinterested management

      This is the indispensable precondition. It rests on three pillars:

      No remuneration of directors: Directors must be volunteers.

      A tolerance exists for remuneration not exceeding three-quarters of the minimum wage (SMIC), assessed annually.

      No distribution of resources: No profits may be paid out to members.

      No allocation of assets: Members may not appropriate the association's assets, even upon its dissolution.

      2. The competition test and the "4P" rule

      If an association operates in a competitive sector, the administration compares its management practices with those of commercial businesses using the so-called "4P" set of indicators (in decreasing order of importance):

      | Criterion | Analysis |
      | --- | --- |
      | Product | The social utility of the service provided (e.g., methods adapted to dys- disorders). |
      | Public | Does the service reach people who could not normally access it (social criteria)? |
      | Price | Are rates markedly below market, or adjusted to income? |
      | Publicity | Does the association use commercial promotion methods or merely provide information? |

      3. The notion of community of interest

      An association may be deemed commercial if it is an extension of a commercial enterprise or provides it with business opportunities.

      "Audace" case law (2016): An association acting as a "client funnel" for a legal-assistance company run by the same person was reclassified as a commercial organization.

      Privileged relationships: This notion applies when the association enables member companies to cut their costs (e.g., lower-cost market studies), giving them a competitive advantage.

      --------------------------------------------------------------------------------

      III. The Philanthropy (Mécénat) and Sponsorship Regime

      The mécénat regime was liberalized by the law of December 2023 (in force since January 2024), but remains subject to strict definitions.

      1. Public interest in the tax sense

      Public interest in the tax sense (Articles 200 and 238 bis of the CGI) differs from the everyday meaning. It requires:

      • Disinterested management.

      • A non-commercial activity.

      • No benefit to a "restricted circle" of people.

      2. Philanthropy vs. Sponsorship

      The distinction rests on the valuation of the benefits received in return:

      Philanthropy (mécénat): There must be a marked disproportion between the gift and the benefits received by the donor (e.g., a simple mention of the donor's name).

      Sponsorship: If the benefits received (advertising, logos on jerseys, premium cocktail receptions, reserved seats) are worth close to the amount paid, this is a taxable commercial service.

      3. The special case of the performing arts

      The legislature allows certain commercial organizations (e.g., commercial companies owned by public entities) to benefit from the mécénat regime for performing arts, cinema, or contemporary art exhibitions, provided management remains disinterested.

      --------------------------------------------------------------------------------

      IV. Endowment Funds (Fonds de Dotation): A Specific Tool

      Created by the 2008 law, endowment funds are designed to encourage patronage for the financing of general-interest missions.

      1. Operating modes

      Operating fund: Carries out general-interest activities itself.

      Redistributing fund: Collects funds and passes them on to other general-interest organizations.

      Mixed: Combines both activities.

      2. Obligations and taxation

      Minimum endowment: €15,000.

      Reporting obligations: An annual declaration to the prefecture stating the amounts collected and redistributed.

      Consumability: If the statutes allow the endowment to be consumed, the fund loses certain tax advantages on its investment income (which becomes subject to corporate tax at a reduced rate).

      Payroll tax: Endowment funds are subject to it without the allowance granted to associations (€2,144), except for salaries linked to the organization of six annual charitable events.

      --------------------------------------------------------------------------------

      V. Case Law and Audit Examples

      The tax administration relies on concrete cases to illustrate how the rules apply:

      Carantec sailing school: Reclassified as profit-seeking because its catchment area (tourists from all over France) and its prices were comparable to those of commercial sailing schools in the region.

      "Piou-Piou" ruling (2022): A children's ski association maintained privileged relationships with ESF instructors (members of the association), since it provided them with a direct economic outlet.

      Defense of memory (Maréchal Pétain case): Patronage status is denied when the eligible activity (e.g., a museum) is ancillary to the association's main purpose, which itself does not meet the criteria of the law.

      VI. Ancillary Commercial Activities and Sectorization

      A non-profit association may carry out ancillary commercial activities.

      Tax exemption: Up to a threshold of €90,011 (figure cited for 2023/2024), this income is not taxed, provided the non-profit activity remains predominant.

      Above the threshold: The association must sectorize its activities and pays commercial taxes on the for-profit sector from the first euro.

      Predominance test: The administration looks not only at revenue but also at how resources are mobilized (volunteer time, use of premises, salaries) to determine whether the non-profit activity remains dominant.
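Purely as an illustrative sketch (and in no way tax advice), the franchise-and-sectorization rule above can be written as a small decision function; the function name and return labels are assumptions introduced here for illustration, and only the threshold figure comes from the notes:

```python
# Illustrative sketch only (not tax advice). Encodes the rule above: below the
# franchise threshold, ancillary commercial income is exempt as long as the
# non-profit activity stays predominant; above it, the association must
# sectorize and is taxed on the for-profit sector from the first euro.
FRANCHISE_THRESHOLD_EUR = 90_011  # 2023/2024 figure cited above

def commercial_tax_treatment(commercial_revenue_eur: float,
                             nonprofit_predominant: bool) -> str:
    if not nonprofit_predominant:
        # If the non-profit activity is no longer predominant, the whole
        # organization risks being treated as profit-seeking.
        return "lucrative"
    if commercial_revenue_eur <= FRANCHISE_THRESHOLD_EUR:
        return "exempt"  # within the franchise: no commercial taxes
    return "sectorize"   # taxed on the commercial sector from the first euro

print(commercial_tax_treatment(50_000, True))   # exempt
print(commercial_tax_treatment(120_000, True))  # sectorize
```

Note that the predominance test comes first: the monetary threshold only matters once the non-profit activity is confirmed as dominant.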

    1. Reviewer #3 (Public review):

      Summary:

      The manuscript by Chaya and Syed focuses on understanding the link between cell cycle and temporal patterning in central brain type II neural stem cells (NSCs). To investigate this, the authors perturb the progression of the cell cycle by delaying the entry into M phase and preventing cytokinesis. Their results convincingly show that temporal factor expression requires progression of the cell cycle in both Type 1 and Type 2 NSCs in the Drosophila central brain. Overall, this study establishes an important link between the two timing mechanisms of neurogenesis.

      Strengths:

      The authors provide solid experimental evidence for the coupling of cell cycle and temporal factor progression in Type 2 NSCs. The quantified phenotype shows an all-or-none effect of cell cycle block on the emergence of subsequent temporal factors in the NSCs, strongly suggesting that both nuclear division and cytokinesis are required for temporal progression. The authors also extend this phenotype to Type 1 NSCs in the central brain, providing a generalizable characterization of the relationship between cell cycle and temporal patterning.

      Weaknesses:

      One major weakness of the study is that the authors do not explore the mechanistic relationship between cell cycle and temporal factor expression. Although their results are quite convincing, they do not provide an explanation as to why Cdk1 depletion affects Syp and EcR expression but not the onset of svp. This result suggests that at least a part of the temporal cascade in NSCs is cell-cycle independent, which isn't addressed or sufficiently discussed.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      Drosophila larval type II neuroblasts generate diverse types of neurons by sequentially expressing different temporal identity genes during development. Previous studies have shown that the transition from early temporal identity genes (such as Chinmo and Imp) to late temporal identity genes (such as Syp and Broad) depends on the activation of the expression of EcR by Seven-up (Svp) and progression through the G1/S transition of the cell cycle. In this study, Chaya and Syed examined whether the expression of Syp and EcR is regulated by cell cycle and cytokinesis by knocking down CDK1 or Pav, respectively, throughout development or at specific developmental stages. They find that knocking down CDK1 or Pav either in all type II neuroblasts throughout development or in single-type neuroblast clones after larval hatching consistently leads to failure to activate late temporal identity genes Syp and EcR. To determine whether the failure of the activation of Syp and EcR is due to impaired Svp expression, they also examined Svp expression using a Svp-lacZ reporter line. They find that Svp is expressed normally in CDK1 RNAi neuroblasts. Further, knocking down CDK1 or Pav after Svp activation still leads to loss of Syp and EcR expression. Finally, they also extended their analysis to type I neuroblasts. They find that knocking down CDK1 or Pav, either at 0 hours or at 42 hours after larval hatching, also results in loss of Syp and EcR expression in type I neuroblasts. Based on these findings, the authors conclude that cell cycle progression and cytokinesis are required for the transition from early to late temporal identity genes in both types of neuroblasts. These findings add mechanistic details to our understanding of the temporal patterning of Drosophila larval neuroblasts.

      Strengths:

      The data presented in the paper are solid and largely support their conclusion. Images are of high quality. The manuscript is well-written and clear.

      We appreciate the reviewer’s detailed summary and recognition of the study’s strengths.

      Weaknesses:

      The quantifications of the expression of temporal identity genes and the interpretation of some of the data could be more rigorous.

      (1) Expression of temporal identity genes may not be just positive or negative. Therefore, it would be more rigorous to quantify the expression of Imp, Syp, and EcR based on the staining intensity rather than simply counting the number of neuroblasts that are positive for these genes, which can be very subjective. Or the authors should define clearly what qualifies as "positive" (e.g., a staining intensity at least 2x background).

      We thank the reviewer for this helpful suggestion. In the new version, we have now clarified how positive expression was defined and added more details of our quantification strategy to the Methods section (page 11, lines 380-388; lines 426-434 in tracked changes file). Fluorescence intensity for each neuroblast was normalized to the mean intensity of neighboring wild-type neuroblasts imaged in the same field. A neuroblast was considered positive for a given factor when its normalized nuclear intensity was at least 2× the local background. This scoring criterion was applied uniformly across all genotypes and time points. All quantifications were performed on the raw LSM files in Fiji prior to assembling the figure panels.
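As a minimal sketch of the 2× scoring rule described above (the function and variable names, and the example values, are our illustrative assumptions, not the authors' actual Fiji pipeline), the criterion could be expressed as:

```python
# Minimal sketch of the positivity criterion described above. All names and
# example values are illustrative assumptions, not the authors' Fiji pipeline.
def is_positive(nb_intensity, wt_intensities, local_background, fold=2.0):
    """Score one neuroblast: intensities are normalized to the mean intensity
    of neighboring wild-type neuroblasts in the same field, and the cell is
    called positive when its normalized nuclear intensity is at least `fold`
    times the (equally normalized) local background."""
    norm = sum(wt_intensities) / len(wt_intensities)
    return (nb_intensity / norm) >= fold * (local_background / norm)

# Example: mean wild-type intensity ~100 a.u., local background 20 a.u.
print(is_positive(55.0, [95.0, 105.0, 100.0], 20.0))  # True  (0.55 >= 0.40)
print(is_positive(30.0, [95.0, 105.0, 100.0], 20.0))  # False (0.30 <  0.40)
```

Because both quantities are divided by the same wild-type mean, the rule reduces to comparing raw nuclear intensity against `fold` times the raw background; the normalization matters when comparing scores across fields.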

      (2) The finding that inhibiting cytokinesis without affecting nuclear divisions by knocking down Pav leads to the loss of expression of Syp and EcR does not support their conclusion that nuclear division is also essential for the early-late gene expression switch in type II NSCs (at the bottom of the left column on page 5). No experiments were done in this study to specifically block nuclear division. This conclusion should be revised.

      We blocked both cell cycle progression and cytokinesis, and both these manipulations affected temporal gene transitions, suggesting that both cell cycle and cytokinesis are essential. To our knowledge, no mechanism/tool exists that selectively blocks nuclear division while leaving cell cycle progression intact. We have added more clarification on page 4, line 123 onwards (lines 126 onwards in tracked changes file).

      (3) Knocking down CDK1 in single random neuroblast clones does not make the CDK1 knockdown neuroblast develop in the same environment (except still in the same brain) as wild-type neuroblast lineages. It does not help address the concern whether "type 2 NSCs with cell cycle arrest failed to undergo normal temporal progression is indirectly due to a lack of feedback signaling from their progeny", as discussed (from the bottom of the right column on page 9 to the top of the left column on page 10). The CDK1 knockdown neuroblasts do not divide to produce progeny and thus do not receive a feedback signal from their progeny as wild-type neuroblasts do. Therefore, it cannot be ruled out that the loss of Syp and EcR expression in CDK1 knockdown neuroblasts is due to the lack of the feedback signal from their progeny. This part of the discussion needs clarification.

      Thanks to the reviewer for raising this critical point. We agree and have added further clarification of our interpretations and of the limitations of our study to the revised text on page 8, lines 278-282 (lines 296-300 in tracked changes file).

      (4) In Figure 2I, there is a clear EcR staining signal in the clone, which contradicts the quantification data in Figure 2J that EcR is absent in Pav RNAi neuroblasts. The authors should verify that the image and quantification data are consistent and correct.

      When cytokinesis is blocked using pav-RNAi, the neuroblasts become extremely large and multinucleated. In some large pav RNAi clones, we observed a weak EcR signal near the cell membrane. More importantly, however, none of the nuclear compartments, where EcR is typically localized, showed detectable EcR staining. We selected a representative nuclear image for the figure panel. To clarify this observation, we have now added an explanatory note to the discussion section on page 8, lines 283-291 (lines 301-309 in tracked changes file).

      Reviewer #2 (Public review):

      Summary:

      Neural stem cells produce a wide variety of neurons during development. The regulatory mechanisms of neural diversity are based on the spatial and temporal patterning of neural stem cells. Although the molecular basis of spatial patterning is well-understood, the temporal patterning mechanism remains unclear. In this manuscript, the authors focused on the roles of cell cycle progression and cytokinesis in temporal patterning and found that both are involved in this process.

      Strengths:

      They conducted RNAi-mediated disruption on cell cycle progression and cytokinesis. As they expected, both disruptions affected temporal patterning in NSCs.

      We appreciate the reviewer’s positive assessment of our experimental results.

      Weaknesses:

      Although the authors showed clear results, they needed to provide additional data to support their conclusion sufficiently.

      For example, they need to identify type II NSCs using molecular markers (Ase/Dpn). The authors are encouraged to provide a more detailed explanation of each experiment. The current version of the manuscript is difficult for non-expert readers to understand.

      Thanks for your feedback. We have now included a detailed description of how we identify type II NSCs in both wild-type and mutant clones. We have also added a representative Asense staining to clearly distinguish type 1 (Ase<sup>+</sup>) from type 2 (Ase<sup>-</sup>) NSCs (see Figure S1). We have also added a resources table explaining the genotypes associated with each figure, which was omitted due to an error in the previous version of the manuscript.

      Reviewer #3 (Public review):

      Summary:

      The manuscript by Chaya and Syed focuses on understanding the link between cell cycle and temporal patterning in central brain type II neural stem cells (NSCs). To investigate this, the authors perturb the progression of the cell cycle by delaying the entry into M phase and preventing cytokinesis. Their results convincingly show that temporal factor expression requires progression of the cell cycle in both Type 1 and Type 2 NSCs in the Drosophila central brain. Overall, this study establishes an important link between the two timing mechanisms of neurogenesis.

      Strengths:

      The authors provide solid experimental evidence for the coupling of cell cycle and temporal factor progression in Type 2 NSCs. The quantified phenotype shows an all-or-none effect of cell cycle block on the emergence of subsequent temporal factors in the NSCs, strongly suggesting that both nuclear division and cytokinesis are required for temporal progression. The authors also extend this phenotype to Type 1 NSCs in the central brain, providing a generalizable characterization of the relationship between cell cycle and temporal patterning.

      We thank the reviewer for recognizing the robustness of our data linking the cell cycle to temporal progression.

      Weaknesses:

      One major weakness of the study is that the authors do not explore the mechanistic relationship between the cell cycle and temporal factor expression. Although their results are quite convincing, they do not provide an explanation as to why Cdk1 depletion affects Syp and EcR expression but not the onset of svp. This result suggests that at least a part of the temporal cascade in NSCs is cell-cycle independent, which isn't addressed or sufficiently discussed.

      Thank you for bringing up this important point. We are equally interested in uncovering the mechanism by which the cell cycle regulates temporal gene transitions; however, such mechanistic exploration is beyond the scope of the present study. Interestingly, while the temporal switching factor Svp is expressed independently of the cell cycle, the subsequent temporal transitions are not. We have expanded our discussion on this intriguing finding (page 9, lines 307-315; lines 345-355 in tracked changes file). Specifically, we propose that svp activation marks a cell-cycle–independent phase, whereas EcR/Syp induction likely depends on cell-cycle–coupled mechanisms, such as mitosis-dependent chromatin remodeling or daughter-cell feedback. Although further dissection of this mechanism lies beyond the current study, our findings establish a foundation for future work aimed at identifying how developmental timekeeping is molecularly coupled to cell-cycle progression.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors): 

      (1) Figure 1 C and D, it would be better to put a question mark to indicate that these are hypotheses to be tested. 

      We appreciate this suggestion and have added question marks in Figure 1C and 1D to clearly indicate that these panels represent hypotheses under investigation.

      (2) Figure 2A-I, Figure 4A-I, Figure 5A-I and K-S, in addition to enlarged views of single type II neuroblasts, it would be more convincing to include zoomed-out images of the entire larval brain or at least a portion of the brain to include neighboring wild-type type II neuroblasts as internal controls. Also, it would be ideal to show EcR staining from the same neuroblasts as IMP and Syp staining. 

      We thank the reviewer for this valuable input. In our imaging setup, the number of available antibody channels was limited to four (anti-Ase, anti-GFP, anti-Syp, and anti-Imp). Adding EcR to the same sample was therefore not technically possible, so EcR staining was performed separately.

      (3) The authors cited "Syed et al., 2024" (in the middle of the right column on page 5), but this reference is missing in the "References" section and should be added. 

      The missing citation has been added to the reference section.  

      (4) It would be better to include Ase staining in the relevant figure to indicate neuroblast identity as type I or type II. 

      We agree and now include representative Ase staining for both type 1 and type 2 NSC clones in Figure S1, along with corresponding text updates that describe these markers.

      Reviewer #2 (Recommendations for the authors): 

      Major comments 

      (1) The present conclusion relies on the results using Cdk1 RNAi and pav RNAi. It is still possible that Cdk1 and Pav are involved in the regulation of temporal patterning independent of the regulation of cell cycle or cytokinesis, respectively. To avoid this possibility, the authors need to inhibit cell cycle progression or cytokinesis in another alternative manner. 

      We thank the reviewer for raising this important point. While we cannot completely exclude gene-specific, cell-cycle-independent roles for Cdk1 or Pav, we observe consistent phenotypes across several independent manipulations that slow or block the cell cycle. Moreover, earlier studies using orthogonal approaches that delay G1/S (Dacapo/Rbf) or impair mitochondrial OxPhos (which lengthens G1/S; van den Ameele & Brand, 2019) produce similar temporal delays. These concordant phenotypes across independent perturbations strongly support the interpretation that altered cell-cycle progression, rather than a specific role of any single gene, is the primary cause of the defect, and they make the cell-cycle disruption model the most parsimonious interpretation. We have clarified this reasoning in the discussion section on pages 8-9, lines 293-305 (lines 311-343 in tracked changes file).

      (2) To reach the present conclusion, the authors need to address the effects of acceleration of cell cycle progression or cytokinesis on temporal patterning. 

      We thank the reviewer for this insightful suggestion. To our knowledge, there are currently no established genetic tools that can specifically accelerate cell-cycle progression in Drosophila neuroblasts. However, our results demonstrate that blocking the cell cycle impairs the transition from early to late temporal gene expression. These findings suggest that proper cell-cycle progression is essential for the transition from early to late temporal identity in neuroblasts.

      Minor comments 

      (3) P3L2 (right), ... we blocked the NSC cell cycle...

      How did they do it? 

      Which fly lines were used?

      Why did they use the line? 

      These details are now included in the Materials and Methods and the Resource Table (pages 11-13). We used Wor-Gal4, Ase-Gal80 to drive UAS-Cdk1-RNAi and UAS-pav-RNAi in type 2 NSCs.

      (4) P5L1(left), ... we used the flip-out approach...

      Why did they conduct it? 

      Probably, the authors have reasons other than "to further ensure." 

      We have clarified in the text on page 4, lines 137-139, that the flip-out approach was used to generate random single-cell clones, enabling quantitative analysis of type 2 NSCs within an otherwise wild-type brain. 

      (5) P5L8(left), ... type 2 hits were confirmed by lack of the type 1 Asense... The authors must examine Deadpan (Dpn) expression as well, because there are many Asense (Ase)-negative cells in the brain (neurons, glial cells, and neuroepithelial cells).

      Type II NSCs can be identified as Dpn+/Ase- cells.

      We agree that Dpn is a helpful marker. However, we reliably distinguished type II NSCs by their lack of Ase and larger cell size relative to surrounding neurons and glia, which are smaller in size and located deeper within the clone. These differences, together with established lineage patterns, allow unambiguous identification of type 2 NSCs across all genotypes. We have now added representative type I and type 2 NSC clones to the supplemental figure S1 (E-G’) with Asense stains to demonstrate how we differentiate type I from type II NSCs. 

      (6) P5L32(left), To do this, we induced... 

      This sentence should be made more concise.

      Please rephrase it. 

      The sentence has been rewritten for clarity and concision.

      (7)  P5L42(left), ...lack of EcR/Syp expression (Figure 2).  However, EcR expression is still present (Figure 2I). 

      In some large pavRNAi clones, a weak EcR signal can be observed near the cell membrane; however, none of the nuclear compartments—where EcR is typically localized—show detectable staining. We selected a representative nuclear image for the figure and addressed this observation on page 8, lines 283-291 (lines 301-309 in tracked changes file).

      (8) P7L29(left), ......had persistent Imp expression...

      Imp expression is faint compared to that in Figure 2G.

      The differences between Figures 2G and 3G should be discussed. 

      We thank the reviewer for this comment. We have added a note in the Methods section clarifying that brightness and contrast were adjusted per panel for optimal visualization; thus, apparent differences in signal intensity do not reflect biological variation. Fluorescence intensity for each neuroblast was normalized to the mean intensity of neighboring wild-type neuroblasts imaged in the same field. A neuroblast was considered Imp-positive when its normalized nuclear intensity was at least 2× the local background. This scoring criterion was applied uniformly across all genotypes and time points. All quantifications were performed on the raw LSM files in Fiji prior to assembling the figure panels.

      (9) P8 (Figure 5)

      The Imp expression is faint compared to that in Figure 5Q.

      The difference between Figure 5G and 5Q should be discussed further. 

      As mentioned above, we have clarified our image processing approach in the Methods section to explain any differences in signal appearance between these figures.

      (10) P10 Materials and Methods

      The authors did not mention the fly lines used. This is very important for the readers. 

      We thank the reviewer for bringing this oversight to our attention. The Resource Table was inadvertently omitted from the initial submission. The complete list of fly lines and reagents used in this study is now provided in the updated Resource Table.

      Reviewer #3 (Recommendations for the authors): 

      Major points 

      (1) The authors mention that the heat-shock induction at 42ALH is well after the svp temporal window and that the cell cycle block therefore independently affects Syp and EcR expression. However, Figure 3 shows svp-LacZ expression at 48ALH. If svp expression is indeed transient in Type 2 NSCs, then this must be validated using an immunostaining of the svp-LacZ line with svp antibody. This is crucial as the authors claim that cell cycle block doesn't affect svp expression and that svp is required independently.

      We thank the reviewer for bringing this important issue to our attention. As noted, Svp protein is expressed transiently and stochastically in type 2 NSCs (Syed et al., 2017), making direct antibody quantification challenging upon cell cycle block. Consistent with previous work (Syed et al., 2017), we used the svp-LacZ reporter line to visualize stabilized Svp expression, which reliably captures Svp expression in type 2 NSCs (Syed et al., 2017 https://doi.org/10.7554/eLife.26287, and Dhilon et al., 2024 https://doi.org/10.1242/dev.202504).

      (2) The authors have successfully slowed down the cell cycle and showed that it affects temporal progression. However, a converse experiment where the cell cycle is sped up in NSCs would be an important test for the direct coupling of temporal factor expression and cell cycle, wherein the expectation would be the precocious expression of late temporal factors in faster cycle NSCs. 

      We agree that such an experiment would be ideal. However, as noted above (Reviewer #2 comment 2), to our knowledge, no suitable tools currently exist to accelerate neuroblast cell-cycle progression without pleiotropic effects.

      Minor point 

      The authors must include Ray and Li (https://doi.org/10.7554/eLife.75879) in the references when describing that "...cell cycle has been shown to influence temporal patterning in some systems,...".  

      We thank the reviewer for this helpful suggestion. The cited reference (Ray and Li, eLife, 2022) has now been included and appropriately referenced in the revised manuscript.

    1. Education Reform: Issues, Models, and Systemic Perspectives

      Analytical Summary

      The European education system, and the German model in particular, faces a fundamental challenge to its century-old structures.

      The debate pits two visions against each other: a neuroscientific, reform-minded approach advocating the abolition of grades and greater autonomy, and a sociological, realist approach emphasizing the school's functions of selection and social cohesion.

      The critical issues include the harmful impact of numerical grading on young children's brain development, the persistence of social inequalities through early tracking of pupils, and the need to shift from extrinsic motivation (grades) to intrinsic motivation.

      Research nevertheless converges on one central finding: beyond the structure of the system, the quality and commitment of the teacher remain the most decisive factor in academic success.

      --------------------------------------------------------------------------------

      I. The Assessment Problem: The Impact of Grades

      The grading system lies at the heart of the tension between traditionalists and reformers.

      Analysis of the sources reveals diverging consequences depending on the pupil's profile.

      A. Neuroscientific perspectives

      Professor Michaela Brohm-Badri stresses that grades alter pupils' brain chemistry:

      For high-achieving pupils: Success triggers the release of dopamine (motivation) and oxytocin.

      However, this replaces intrinsic motivation (natural curiosity) with extrinsic, reward-based motivation.

      For struggling pupils: Failure releases adrenaline and cortisol (stress hormones).

      The amygdala then blocks the prefrontal cortex, preventing clear reasoning and creating a vicious circle of underperformance.

      Brain immaturity: The prefrontal cortex does not reach maturity until ages 21 to 23.

      Grading and tracking children as early as age 9 or 10 amounts to sealing their social fate before their biological development is complete.

      B. Cognitive biases and subjectivity

      Assessment is criticized for its lack of objectivity, which is influenced by several phenomena:

      The "constante macabre": Teachers' unconscious tendency to reproduce a distribution curve (good, average, weak) whatever the class's actual level.

      The order effect: An average paper seems better when it follows a very poor one.

      Exogenous factors: Physical appearance (glasses, hairstyle), social background, sex, or the teacher's mood interfere with the grade.

      --------------------------------------------------------------------------------

      II. The Social and Political Functions of the School

      According to Professor Roland Reichenbach, the school cannot be reduced to a mere place of learning; it fulfills some ten functions essential to society.

      Instruction and integration: Transmitting knowledge and learning to live in a community.

      Selection: Although criticized, selection prepares pupils for the realities of the labor market and the economy.

      Custody: A fundamental logistical function that keeps society running.

      Democratic education: School teaches individuals to self-correct, to strive for objectivity, and to look beyond their individual desires.

      Protection against private arbitrariness: If public schools gave up assessment, that role would fall to the private sector, which would then favor only the richest or most powerful.

      --------------------------------------------------------------------------------

      III. Pedagogical Models and Experiments

      A. Comparison of European systems

      The document highlights major disparities in how schooling is organized across Europe:

      | Country | Characteristics of the system |
      | --- | --- |
      | Germany | Conservative system. Early tracking (age 10) into three streams (vocational, technical, general). |
      | France | Centralized state, national curricula, a rather authoritarian and hierarchical teaching style. |
      | Finland | Teacher-pupil relationship of equality. No grades before 3ème (around age 14). Very high performance level. |
      | United Kingdom | Strong private-sector presence. Early technological innovation (compulsory programming from secondary school onward). |

      B. The example of the Alemannenschule (Wutöschingen)

      This German school offers a radical alternative to the frontal teaching model:

      Autonomous learning: Pupils are "learning partners." Traditional lessons ("inputs") are scaled back in favor of free workshops.

      Responsibility: Each pupil decides when to take their competency tests.

      Social mixing and tutoring: Mutual help among pupils from different streams is encouraged.

      Results: In 2022, results in the school-leaving exam (baccalauréat) were above the regional average, with an increase in the number of top-performing pupils.

      --------------------------------------------------------------------------------

      IV. The Human Factor: The Centrality of the Teacher

      John Hattie's "Visible Learning" meta-analysis, covering more than 2,100 studies, offers nuanced conclusions that unsettle ideologies on both sides:

      1. The teacher is the key variable: Academic success depends above all on the teacher's clarity, classroom management, and individual investment in each pupil.

      2. Beyond the traditional/modern divide: While Hattie validates certain aspects of traditional teaching (direct instruction), he also supports reforms such as individualized feedback and the abolition of labels (grades).

      3. Valuing the profession: In high-performing countries (Finland, Sweden), only the top 10% of graduates can become teachers, and the profession enjoys high social recognition.

      --------------------------------------------------------------------------------

      V. Summary of Risks and Outlook

      A. The trap of a "pedagogy of the privileged"

      A warning is issued about total autonomy: some pupils, coming from backgrounds far removed from school culture, need strict supervision and direct guidance.

      Autonomous learning can, paradoxically, widen inequalities if it is not accompanied by support in self-assertion for the most vulnerable pupils.

      B. The goal of equity

      Equal opportunity does not mean that all pupils must be identical or progress at the same pace. The modern challenge for schools is to reconcile:

      • Developing a taste for risk-taking and experimentation.

      • The need for feedback in order to grow.

      • Maintaining intrinsic motivation in a competitive world.

      In conclusion, while a performance-based system seems unavoidable given the social and economic structure, the key challenge remains to transform authoritarian authority into inspiring authority, capable of valuing difference without stigmatizing it through failure.

    1. Understanding Counterwill: An Analysis of Instinctive Opposition in Children

      Executive Summary

      This document offers an in-depth analysis of the concept of "counterwill", a phenomenon often confused with opposition or rudeness in the context of education and child development.

      Contrary to popular perceptions that value immediate obedience, developmental research shows that counterwill is an instinctive, healthy, and necessary reaction.

      It protects the individual from outside influences that do not feel safe, and it forms the foundation of self-assertion and critical thinking in adulthood.

      The document stresses that interventions based on pressure, ultimatums, or punishment are counterproductive, because they fuel resistance instead of fostering cooperation.

      The key to harmonious collaboration lies in intentionally reactivating the attachment bond.

      By prioritizing emotional connection, humor, and creativity, adults can turn a dynamic of confrontation into natural buy-in, allowing the child to develop without sacrificing personal integrity.

      --------------------------------------------------------------------------------

      1. Definition and Origins of Counterwill

      Counterwill differs from simple "opposition" in its structural and instinctive place in human development.

      A self-determined being: Humans are, by nature, beings with a will of their own. Counterwill emerges when the adult's will comes into direct conflict with the child's.

      Opposition vs. counterwill: While the term "opposition" is often used pejoratively in everyday language to describe a lack of respect, "counterwill" more precisely describes the biological and psychological process of resisting an external instruction perceived as intrusive.

      The myth of the "well-behaved" child: The traditional model values unquestioning, instant obedience.

      Yet total, immediate obedience is closer to the functioning of a robot or a puppet than to that of a developing human being.

      2. Developmental and Protective Value

      Far from being a behavioral flaw, counterwill serves vital functions for the individual.

      Protection and Survival

      Instinctive resistance: Humans are wired to resist directives from people with whom they have no solid attachment bond.

      Physical safety: This resistance is an essential protective mechanism (for example, refusing to follow a stranger in the street).

      The child is then exercising counterwill to preserve his or her integrity.

      Self-Assertion and Critical Thinking

      Preparation for adulthood: Self-assertion does not begin at 18 or 22.

      It is cultivated from childhood on. An adult who can negotiate a salary or set limits in a relationship was once a child who was allowed to exercise counterwill.

      Development of judgment: The ability to question, to argue, and not to take everything at face value is the foundation of critical thinking.

      Without counterwill, the child grows into an adolescent, and then an adult, who is vulnerable to the influence of others.

      3. Causes of Everyday Resistance

      The analysis identifies several factors that heighten counterwill in day-to-day interactions:

      | Factor | Description |
      | --- | --- |
      | Brain immaturity | A child's brain often processes only one piece of information at a time. A child absorbed in play is not ignoring the adult out of contempt, but out of a neurological inability to switch his or her will instantly. |
      | External pressure | Raw authority, threats, punishments, or ultimatums increase counterwill instead of eliciting collaboration. |
      | Relational disconnection | Giving an instruction from a distance, or without first establishing visual or emotional contact, creates a gap that triggers resistance. |

      4. Collaboration Strategies: From Pressure to Connection

      To reduce counterwill, the adult should seek to "increase the child's will" to collaborate through relational levers.

      The "Bubble" and "Velcro" Concepts

      The attachment bubble: The adult should invite the child into his or her "bubble" of safety. When connected to the adult, the child naturally tends to follow the adult's lead.

      The Velcro effect: Rather than acting like a "ping-pong ball" (issuing an order and walking away), the adult should become "Velcro": move physically closer, take an interest in the child's activity, and build a connection before making a request.

      Effective Intervention Levers

      Connection before the instruction: Take a few seconds to greet the child, offer a kind word, or express pleasure at seeing them again.

      Creativity and humor: Use play to sidestep resistance (e.g., making a toy "talk" to invite handwashing). Creativity is presented as a superior alternative to raw authority.

      Empathy: Acknowledge that the child's will is legitimate, even when it differs from ours. The goal is not to give in on everything, but to provide structure while respecting the child's developmental stage.

      5. Systemic Perspectives: Adolescence and the School Setting

      The dynamics of counterwill extend beyond early childhood and touch every social sphere.

      Adolescence: This is a period of intense counterwill.

      Interventions based on disconnection and unrealistic expectations of submission only make the situation worse.

      School setting: The children with the greatest relational needs are often the ones who resist the most.

      Unfortunately, the system tends to exclude or punish them (color-coded behavior charts, withdrawal of privileges), which further breaks the attachment bond and reinforces their oppositional behavior.

      Adult life: Counterwill persists into adulthood.

      An employee will respond with resistance to a supervisor who imposes a directive without regard for the work in progress, or without basic courtesy.

      Conclusion

      Counterwill is not a behavioral problem to eradicate, but a signal of a need for connection or self-assertion.

      By shifting perspective, from managing opposition to cultivating attachment, educators and parents foster the development of autonomous, critical individuals who can respect their own limits while collaborating within a social structure.

      Understanding this mechanism makes it possible to move from an education based on force to an education based on relationship.

    1. Reviewer #2 (Public review):

      Summary:

      This work addresses the question of whether artificial deep neural network models of the brain could be improved by incorporating top-down feedback, inspired by the architecture of the neocortex.

      In line with known biological features of cortical top-down feedback, the authors model such feedback connections with both a typical driving effect and a purely modulatory effect on the activation of units in the network.

      To assess the functional impact of these top-down connections, they compare different architectures of feedforward and feedback connections in a model that mimics the ventral visual and auditory pathways in the cortex on an audiovisual integration task.

      Notably, one architecture is inspired by human anatomical data, where higher visual and auditory layers possess modulatory top-down connections to all lower-level layers of the same modality, and visual areas provide feedforward input to auditory layers, whereas auditory areas provide modulatory feedback to visual areas.

      First, the authors find that this brain-like architecture imparts a light visual bias to the models, similar to what is seen in human data, and that the bias is reversed in an architecture where auditory areas instead provide feedforward drive to the visual areas.

      Second, they find that, in their model, modulatory feedback should be complemented by a driving component to enable effective audiovisual integration, similar to what is observed in neural data.

      Overall, the study shows some possible functional implications of adding feedback connections to a deep artificial neural network that mimics some functional aspects of visual perception in humans.

      Strengths:

      The study contains innovative ideas, such as incorporating an anatomically inspired architecture into a deep ANN, and comparing its impact on a relevant task to alternative architectures.

      Moreover, the simplicity of the model makes it possible to draw conclusions on how features of the architecture and functional aspects of the top-down feedback affect the performance of the network.

      This could be a helpful resource for future studies of the impact of top-down connections in deep artificial neural network models of the neocortex.

      Weaknesses:

      Some claims not yet supported.

      The problem is that results are phrased quite generally in the abstract and discussion, while the actual results shown in the paper are very specific to certain implementations of top-down feedback and architectures. This could lead to misunderstanding and requires some revisions of the claims in the abstract and discussion (see below).

      "Altogether our findings demonstrate that modulatory top-down feedback is a computationally relevant feature of biological brain..."

      This claim is not supported, since no performance increase is demonstrated for modulatory feedback. So far, only the second half of the sentence is supported: "...and that incorporating it into ANNs affects their behavior and constrains the solutions it's likely to discover."

      "This bias does not impair performance on the audiovisual tasks."

      This is only true for the composite top-down feedback that combines driving and modulatory effects, whereas modulatory feedback alone can impair the performance (e.g., in the visual tasks VS1 and VS2). The fact that modulatory feedback alone is insufficient in ANNs to enable effective cross-modal integration and requires some driving component is actually very interesting, but it is not stressed enough in the abstract. This is hinted at in the following sentence, but should be made more explicit:

      "The results further suggest that different configurations of top-down feedback make otherwise identically connected models functionally distinct from each other, and from traditional feedforward and laterally recurrent models."

      "Here we develop a deep neural network model that captures the core functional properties of top-down feedback in the neocortex" -> this is too strong, take out "the", because very likely there are other important properties that are not yet incorporated.

      "Altogether, our results demonstrate that the distinction between feedforward and feedback inputs has clear computational implications, and that ANN models of the brain should therefore consider top-down feedback as an important biological feature."

      This claim is still not substantiated by evidence provided in the paper. First, the wording is a bit imprecise, because mechanistically it is not really feedforward versus feedback (a purely feedforward model is not considered at all in the paper), but modulatory versus driving. Moreover, the second part of the sentence is problematic: The results imply that, computationally/functionally, driving connections are doing the job, while modulatory feedback does not really seem to improve performance (in the best case, it does not do any harm). It is true that it is a feature inspired by biology, but I don't see why the results imply that (modulatory) top-down feedback should be considered in ANN models of the brain. This would require showing that such models either improve performance or improve the ability to fit neural data, both of which are beyond the scope of the paper.

      The same argument holds for the following sentence, which is not supported by the results of the paper:

      "More broadly, our work supports the conclusion that both the cellular neurophysiology and structure of feed-back inputs have critical functional implications that need to be considered by computational models of brain function."

      Additional supplementary material required

      Although the second version checked the influence of processing time, this was not done for the most important figure of the paper, Figure 4. A central claim in the abstract, "This bias does not impair performance on the audiovisual tasks", relies on this figure, because only with composite feedback is the performance comparable between the "drive-only" and "brain-like" models. Thus, supplementary Figure 3 should also include the composite networks and the drive-only network to check the robustness of the claim with respect to process time. This robustness analysis should then also be mentioned in the text. For example, it should be mentioned whether results in these networks are robust or not with respect to process time, whether there are differences between network architectures or types of feedback in general, etc.

      Moreover, the current analysis for networks with modulatory feedback is a bit confusing. Why is the performance so low for the reverse model for a process time of 3 and 10? This is a very strong effect that warrants explanation. More details should be added in the caption as well. For example, are the models trained separately for the output after 3 and 10 processing steps for the comparison, or just evaluated at these times? Not training these networks separately might explain the low performance for some networks, so ideally the networks would be trained for each choice of processing steps.

    2. Reviewer #3 (Public review):

      Summary:

      This study investigates the computational role of top-down feedback in artificial neural networks (ANNs), a feature that is prevalent in the brain but largely absent in standard ANN architectures. The authors construct hierarchical recurrent ANN models that incorporate key properties of top-down feedback in the neocortex. Using these models in an audiovisual integration task, they find that the hierarchical structure introduces a mild visual bias, akin to that observed in human perception, without always compromising task performance.

      Strengths:

      The study investigates a relevant and current topic of considering top-down feedback in deep neural networks. In designing their brain-like model, they use neurophysiological data, such as externopyramidisation and hierarchical connectivity. Their brain-like model exhibits a visual bias that qualitatively matches human perception.

      Weaknesses:

      While the model is brain-inspired, its biological plausibility is limited. The model assumes a simplified and fixed hierarchy. The authors acknowledge this limitation in the discussion.

      While the brain-like model showed an advantage in ignoring distracting auditory inputs, it struggled when visual information had to be ignored. This suggests that its rigid bias toward visual processing could make it less adaptive in tasks requiring flexible multimodal integration. It hence does not necessarily constitute an improvement over existing ANNs. The study does not evaluate whether the top-down feedback architecture scales well to more complex problems or larger datasets. A valuable future contribution would be to evaluate how well the network's behaviour fits human data.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      Here, the authors aim to investigate the potential improvements of ANNs when used to explain brain data using top-down feedback connections found in the neocortex. To do so, they use a retinotopic and tonotopic organization to model each subregion of the ventral visual (V1, V2, V4, and IT) and ventral auditory (A1, Belt, A4) regions using Convolutional Gated Recurrent Units. The top-down feedback connections are inspired by the apical tree of pyramidal neurons, modeled either with a multiplicative effect (change of gain of the activation function) or a composite effect (change of gain and threshold of the activation function).
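      As a rough, hypothetical sketch (not the authors' actual implementation), the two ways of integrating top-down feedback described above can be illustrated for a single unit, where `ff` is the feedforward drive and `fb` the top-down signal; the function names and the particular gain/threshold formulas are illustrative assumptions:

```python
import math

def relu(x):
    return max(0.0, x)

def multiplicative_feedback(ff, fb):
    # Purely modulatory: feedback rescales (changes the gain of) the
    # feedforward response, but cannot activate a silent unit by itself.
    gain = 1.0 + math.tanh(fb)
    return gain * relu(ff)

def composite_feedback(ff, fb):
    # Composite: feedback changes both the gain and the threshold of the
    # activation function, so it can also shift when a unit starts firing.
    gain = 1.0 + math.tanh(fb)
    shift = 0.5 * math.tanh(fb)  # threshold shift contributed by feedback
    return gain * relu(ff + shift)
```

      For a unit with sub-threshold drive (e.g., ff = -0.1) and positive feedback (fb = 0.5), the multiplicative version stays silent while the composite version becomes active, which is the functional distinction at issue in the reviews below.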

      To assess the functional impact of the top-down connections, the authors compare three architectures: a brain-like architecture derived directly from brain data analysis, a reversed architecture where all feedforward connections become feedback connections and vice versa, and a random connectivity architecture. More specifically, in the brain-like model the visual regions provide feedforward input to all auditory areas, whereas auditory areas provide feedback to visual regions.

      First, the authors found that top-down feedback influences audiovisual processing and that the brain-like model exhibits a visual bias in multimodal visual and auditory tasks. Second, they discovered that in the brain-like model, the composite integration of top-down feedback, similar to that found in the neocortex, leads to an inductive bias toward visual stimuli, which is not observed in the feedforward-only model. Furthermore, the authors found that the brain-like model learns to utilize relevant stimuli more quickly while ignoring distractors. Finally, by analyzing the activations of all hidden layers (brain regions), they found that the feedforward and feedback connectivity of a region could determine its functional specializations during the given tasks.

      Strengths:

      The study introduces a novel methodology for designing connectivity between regions in deep learning models. The authors also employ several tasks based on audiovisual stimuli to support their conclusions. Additionally, the model utilizes backpropagation of error as a learning algorithm, making it applicable across a range of tasks, from various supervised learning scenarios to reinforcement learning agents. Moreover, the presented framework offers a valuable tool for studying top-down feedback connections in cortical models. Thus, it is a very nice study that can also give inspiration to other fields (machine learning) to start exploring new architectures.

      We thank the reviewer for their accurate summary of our work and their kind assessment of its strengths.

      Weaknesses:

      Although the study explores some novel ideas on how to study the feedback connections of the neocortex, the data presented here are not sufficient to propose a concrete theory of the role of top-down feedback inputs in such models of the brain.

      (1) The gap in the literature that the paper tries to fill in the ability of DL algorithms to predict behavior: "However, there are still significant gaps in most deep neural networks' ability to predict behavior, particularly when presented with ambiguous, challenging stimuli." and "[...] to accurately model the brain."

      It is unclear to me how the presented work addresses this gap, as the only evidence provided derives from a simple categorization task that could also be solved by the feedforward-only model (see Figures 4 and 5). In my opinion, this statement is somewhat far-fetched, and there is insufficient data throughout the manuscript to support this claim.

      We can see now that the way the introduction was initially written led to some confusion about our goal in this study. Our goal here was not to demonstrate that top-down feedback can enable superior matches to human behaviour. Rather, our goal was to determine if top-down feedback had any real implications for processing ambiguous stimuli. The sentence that the reviewer has highlighted was intended as an explanation for why top-down feedback, and its impact on ambiguous stimuli, might be something one would want to examine for deep neural networks. But, here, we simply wanted to (1) provide an overview of the code base we have created, (2) demonstrate that top-down feedback does impact the processing of ambiguous stimuli.

      We agree with the reviewer that if our goal was to improve our ability to predict behaviour, then there was a big gap in the evidence we provided here. But, this was not our goal, and we believe that the data we provide here does convincingly show that top-down feedback has an impact on processing of ambiguous stimuli. We have updated the text in the introduction to make our goals more clear for the reader and avoid this misunderstanding of what we were trying to accomplish here. Specifically, the end of the introduction is changed to:

      “To study the effect of top-down feedback on such tasks, we built a freely available code base for creating deep neural networks with an algorithmic approximation of top-down feedback. Specifically, top-down feedback was designed to modulate ongoing activity in recurrent, convolutional neural networks. We explored different architectural configurations of connectivity, including a configuration based on the human brain, where all visual areas send feedforward inputs to, and receive top-down feedback from, the auditory areas. The human brain-based model performed well on all audiovisual tasks, but displayed a unique and persistent visual bias compared to models with only driving connectivity and models with different hierarchies. This qualitatively matches the reported visual bias of humans engaged in audio-visual tasks. Our results confirm that distinct configurations of feedforward/feedback connectivity have an important functional impact on a model's behavior. Therefore, top-down feedback captures behaviors and perceptual preferences that do not manifest reliably in feedforward-only networks. Further experiments are needed to clarify whether top-down feedback helps an ANN fit better to neural data, but the results show that top-down feedback affects the processing of stimuli and is thus a relevant feature that should be considered for deep ANN models in computational neuroscience more broadly.”

      (2) It is not clear what advantages the brain-like model has over a feedforward-only model in terms of performance in solving the task. Given Figures 4 and 5, it is evident that the feedforward-only model reaches almost the same performance as the brain-like model (when the latter uses modulatory feedback with the composite function) on almost all tasks tested. The speed of learning is nearly the same: for some tested tasks the brain-like model learns faster, while for others it learns slower. Thus, it is hard to attribute a functional implication to the feedback connections given the presented figures, and the strong claims in the Discussion should therefore be rephrased or toned down.

      Again, we believe that there has been a misunderstanding regarding the goals of this study, as we are not trying to claim here that there are performance advantages conferred by top-down feedback in this case. Indeed, we share the reviewer’s assessment that the feedforward only model seems to be capable of solving this task well. To reiterate: our goal here was to demonstrate that top-down feedback alters the computations in the network and, thus, has distinct effects on behaviour that need to be considered by researchers who use deep networks to model the brain. But we make no claims of “superiority” of the brain-like model.

      In line with this, we are not entirely sure which claims in the discussion the reviewer is referring to. We note that we were quite careful in our claims. For example, in the first section of the discussion we say:

      “Altogether, our results demonstrate that the distinction between feedforward and feedback inputs has clear computational implications, and that ANN models of the brain should therefore consider top-down feedback as an important biological feature.”

      And later on:

      “In summary, our study shows that modulatory top-down feedback and the architectural diversity enabled by it can have important functional implications for computational models of the brain. We believe that future work examining brain function with deep neural networks should therefore consider incorporating top-down modulatory feedback into model architectures when appropriate.”

      If we have missed a claim in the discussion that implies superiority of the brain-like model in terms of task performance we would be happy to change it.

      (3) The Methods section lacks sufficient detail. There is no explanation provided for the choice of hyperparameters nor for the structure of the networks (number of trainable parameters, number of nodes per layer, etc). Clarifying the rationale behind these decisions would enhance understanding. Moreover, since the authors draw conclusions based on the performance of the networks on specific tasks, it is unclear whether the comparisons are fair, particularly concerning the number of trainable parameters. Furthermore, it is not clear if the visual bias observed in the brain-like model is an emerging property of the network or has been created because of the asymmetries in the visual vs. auditory pathway (size of the layer, number of layers, etc).

      We thank the reviewer for raising this issue, and want to provide some clarifications: First, the number of trainable parameters is roughly equal across models, since we were only switching the direction of connectivity (top-down versus bottom-up), not the number of connections. We confirmed that the biggest difference in size is between models with composite and multiplicative feedback; models with composite feedback have roughly 1K more parameters, and all models are within the 280K-parameter range. We now state this in the methods.

      Second, because superior performance was not the goal of this study, as stated above, we conducted limited hyperparameter tuning. Given the reviewer's comment, we wondered whether this may have impacted our results. Therefore, we explored different hyperparameters for the model during the multimodal auditory tasks, which show the clearest example of the visual dominance in the brain-like model (Figure 3).

      We explored different hidden state sizes, learning rates, and processing times, and examined whether the core results were different. We found that extremely high learning rates (0.1) destabilize all models and that some models perform poorly under different processing times. Overall, however, the core results, i.e., the different behaviors of models with different connectivities and the visual dominance observed in the brain-like model, are evident across all hyperparameters for which the models learn. We now provide these results in supplementary figures (Fig. S2, showing larger models trained with different learning rates, and Fig. S3, which shows the effect of processing time on AS task performance).

      Reviewer #2 (Public review):

      Summary:

      This work addresses the question of whether artificial deep neural network models of the brain could be improved by incorporating top-down feedback, inspired by the architecture of the neocortex.

      In line with known biological features of cortical top-down feedback, the authors model such feedback connections with both a typical driving effect and a purely modulatory effect on the activation of units in the network.

      To assess the functional impact of these top-down connections, they compare different architectures of feedforward and feedback connections in a model that mimics the ventral visual and auditory pathways in the cortex on an audiovisual integration task.

      Notably, one architecture is inspired by human anatomical data, where higher visual and auditory layers possess modulatory top-down connections to all lower-level layers of the same modality, and visual areas provide feedforward input to auditory layers, whereas auditory areas provide modulatory feedback to visual areas.

      First, the authors find that this brain-like architecture imparts the models with a light visual bias similar to what is seen in human data, which is the opposite in a reversed architecture, where auditory areas provide a feedforward drive to the visual areas.

      Second, they find that, in their model, modulatory feedback should be complemented by a driving component to enable effective audiovisual integration, similar to what is observed in neural data.

      Last, they find that the brain-like architecture with modulatory feedback learns a bit faster in some audiovisual switching tasks compared to a feedforward-only model.

      Overall, the study shows some possible functional implications when adding feedback connections in a deep artificial neural network that mimics some functional aspects of visual perception in humans.

      Strengths:

      The study contains innovative ideas, such as incorporating an anatomically inspired architecture into a deep ANN, and comparing its impact on a relevant task to alternative architectures.

      Moreover, the simplicity of the model allows it to draw conclusions on how features of the architecture and functional aspects of the top-down feedback affect the performance of the network.

      This could be a helpful resource for future studies of the impact of top-down connections in deep artificial neural network models of the neocortex.

      We thank the reviewer for their summary and their recognition of the innovative components and helpful resources therein.

      Weaknesses:

      Overall, the study appears to be a bit premature, as several parts need to be worked out more to support the claims of the paper and to increase its impact.

      First, the functional implication of modulatory feedback is not really clear. The "only feedforward" model (is a drive-only model meant?) attains the same performance as the composite model (with modulatory feedback) on virtually all tasks tested, it just takes a bit longer to learn for some tasks, but then is also faster at others. It even reproduces the visual bias on the audiovisual switching task. Therefore, the claims "Altogether, our results demonstrate that the distinction between feedforward and feedback inputs has clear computational implications, and that ANN models of the brain should therefore consider top-down feedback as an important biological feature." and "More broadly, our work supports the conclusion that both the cellular neurophysiology and structure of feed-back inputs have critical functional implications that need to be considered by computational models of brain function" are not sufficiently supported by the results of the study. Moreover, the latter points would require showing that this model describes neural data better, e.g., by comparing representations in the model with and without top-down feedback to recorded neural activity.

      To emphasize again our specific claims, we believe that our data shows that top-down feedback has functional implications for deep neural network behaviour, not increased performance or neural alignment. Indeed, our results demonstrate that top-down feedback alters the behaviour of the networks, as shown by the differences in responses to various combinations of ambiguous stimuli. We agree with the reviewer that if our goal was to claim either superior performance on these tasks, or better fit to neural data, we would need to actually provide data supporting that claim.

      Given the comments from the reviewer, we have tried to provide more clarity in the introduction and discussion regarding our claims. In particular, we now highlight that we are not trying to demonstrate that the models with top-down feedback exhibit superior performance or better fit to neural data.

      As one final note, yes, the reviewer understood correctly that the “only feedforward” model is a model with only driving inputs. We have renamed the feedforward-only models to drive only models and added additional emphasis in the text to ensure that the distinction is clear for all readers.

      Second, the analyses are not supported by supplementary material, hence it is difficult to evaluate parts of the claims. For example, it would be helpful to investigate the impact of the process time after which the output is taken for evaluation of the model. This is especially important because in recurrent and feedback models the convergence should be checked, and if the network does not converge, then it should be discussed why at which point in time the network is evaluated.

This is an excellent point, and we thank the reviewer for raising it. We allowed the network to process the stimuli for seven time-steps, which was enough for information from any one region to be transmitted to any other. We found in some initial investigations that if we shortened the processing time some seeds would fail to solve the task. But, based on the reviewer’s comment, we have now also run additional tests with longer processing times for the auditory tasks where we see the clearest visual bias (Figure 3). We find that different process times do not change the behavioral biases observed in our models, but may introduce difficulties ignoring visual stimuli for some models. Thus, while process time is an important hyperparameter for optimal performance of the model, the central claim of the paper remains. We include these new data in supplementary figure S3.

      Third, the descriptions of the models in the methods are hard to understand, i.e., parameters are not described and equations are explained by referring to multiple other studies. Since the implications of the results heavily rely on the model, a more detailed description of the model seems necessary.

      We agree with the reviewer that the methods could have been more thorough. Therefore, we have greatly expanded the methods section. We hope the model details are now more clear.

      Lastly, the discussion and testable predictions are not very well worked out and need more details. For example, the point "This represents another testable prediction flowing from our study, which could be studied in humans by examining the optical flow (Pines et al., 2023) between auditory and visual regions during an audiovisual task" needs to be made more precise to be useful as a prediction. What did the model predict in terms of "optic flow", how can modulatory from simple driving effect be distinguished, etc.

We see that the original wording of this prediction was ambiguous; thank you for pointing this out. In the study highlighted (Pines et al., 2023), the authors use an analysis technique for measuring information flow between brain regions, which is related to the analysis of optical flow in images but applied to fMRI scans. We recognize that this terminology could be confusing in the context of the current study. Therefore, we have changed this sentence to make clear that we are speaking of information flow here.

      Reviewer #3 (Public review):

      Summary:

This study investigates the computational role of top-down feedback in artificial neural networks (ANNs), a feature that is prevalent in the brain but largely absent in standard ANN architectures. The authors construct hierarchical recurrent ANN models that incorporate key properties of top-down feedback in the neocortex. Using these models in an audiovisual integration task, they find that hierarchical structures introduce a mild visual bias, akin to that observed in human perception, without always compromising task performance.

      Strengths:

      The study investigates a relevant and current topic of considering top-down feedback in deep neural networks. In designing their brain-like model, they use neurophysiological data, such as externopyramidisation and hierarchical connectivity. Their brain-like model exhibits a visual bias that qualitatively matches human perception.

      We thank the reviewer for their summary and evaluation of our paper’s strengths.

      Weaknesses:

      While the model is brain-inspired, it has limited bioplausibility. The model assumes a simplified and fixed hierarchy. In the brain with additional neuromodulation, the hierarchy could be more flexible and more task-dependent.

      We agree, there are still many facets of top-down feedback that we have not captured here, and the modulation of hierarchy is an interesting example. We have added some consideration of this point to the limitations section of the discussion.

      While the brain-like model showed an advantage in ignoring distracting auditory inputs, it struggled when visual information had to be ignored. This suggests that its rigid bias toward visual processing could make it less adaptive in tasks requiring flexible multimodal integration. It hence does not necessarily constitute an improvement over existing ANNs. It is unclear, whether this aspect of the model also matches human data. In general, there is no direct comparison to human data. The study does not evaluate whether the top-down feedback architecture scales well to more complex problems or larger datasets. The model is not well enough specified in the methods and some definitions are missing.

      We agree with the reviewer that we have not demonstrated anything like superior performance (since the brain-like network is quite rigid, as noted) nor have we shown better match to human data with the brain-like network. This was not our intended claim. Rather, we demonstrated here simply that top-down feedback impacts behavior of the networks in response to ambiguous stimuli. We have now added statements to the introduction and discussion to make our specific claims (which are supported by our data, we believe) clear.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      I believe that the work is very nice but not so mature at this stage. Below, you can find some comments that eventually could improve your manuscript.

      (1) Intro, last sentence: "Therefore, top-down feedback is a relevant feature that should be considered for deep ANN models in computational neuroscience more broadly." I don't understand what the authors refer to with this sentence. There are numerous models (deep ANNs) that have been used to model the neural activity and are much simpler than the one proposed here which contains very complex models and connectivity. Although I do agree that the top-down connections are very important there is no data to support their importance for modeling the brain.

Respectfully, we disagree with the reviewer that we don’t provide data to demonstrate the importance of top-down feedback for modelling. Indeed, we provided a great deal of data to show that top-down feedback in the networks has real functional implications for behaviour, e.g., it can induce a human-like visual bias. Thus, top-down feedback is a factor that one should care about when modelling the brain. But, we agree with the reviewer that demonstrating the utility of top-down feedback for achieving better fits to neural data would be an important next step.

      (2) I suggest adding some extra supplementary simulations where, for example, the number of data for visual and auditory pathways is equal in size (i.e., the same number of examples), the number of layers is identical (3 per pathway), and also the number of parameters. Doing this would help strengthen the claims presented in the paper.

      In fact, all of the hyperparameters the reviewer mentions here were identical for the different networks, so the experiments the reviewer is requesting here were already part of the paper. We now clarify this in the text.

      (3) Results: I suggest adding Tables with quantifications of the presented results. For example, best performance, epochs to converge, etc. As it is now, it is very hard to follow the evidence shown in Figures.

      This is a good suggestion, we have now added this table to the start of the supplemental figures.

      (4) Figure 2e, 3e: Although VS3, and AS3 have been used only for testing, the plot shows alignments with respect to training epochs. The authors should clarify in the Methods if they tested the network with all intermediate weights during VS1/VS2 or AS1/AS2 training.

Testing scenarios in this context meant that the model was never shown the scenario/task during training, but the models were indeed evaluated on VS3 and AS3 after each training epoch. We have added clarifications to the figure legends.

      (5) Methods: It would be beneficial to discuss how specific hyperparameters were selected based on prior research, empirical testing, or theoretical considerations. Also, it is not clear how the alignment (visual or audio) is calculated. Do the authors use the examples that have been classified correctly for both stimuli or do they exclude those from the analysis (maybe I have missed it).

      As noted above, because superior performance was not the goal of this study, we conducted limited hyperparameter tuning. But we have extended the results with additional hyperparameter tuning in a supplementary figure, and describe the hyperparameter choices more thoroughly in the methods. As well, all data includes all model responses, regardless of whether they were correct or not. We now clarify this in the methods.

      (6) Code: The code repository lacks straightforward examples demonstrating how to utilize the modeling approach. Given that it is referred to as a "framework", one would expect it to facilitate easy integration into various models and tasks. Including detailed instructions or clear examples would significantly improve usability and help users effectively apply the proposed methodology.

      We agree with the reviewer, this would be beneficial. We have revised the README of the codebase to explain the model and its usage more clearly and included an interactive jupyter notebook with example training on MNIST.

      Some minor comments are given below. Generally speaking, the Figures need to be more carefully checked for consistent labels, colors, etc.

      (1) Page 4, 1st paragraph - grammar correction: "a larger infragranular layer" or "larger infragranular layers"

      Thank you for catching this, we have fixed the text.

      (2) Page 4, 2nd para - rephrase: "In three additional control ANNs" → "In the third additional control ANN"

      In fact, we did mean three additional control ANNs, each one representing a different randomized connectivity profile. We now clarify this in the text and provide the connectivity of the two other random graphs in the supplemental figures.

      (3) Page 4, VAE acronym needs to be defined before its first use

      The variational autoencoder is introduced by its full name in the text now.

      (4) Page 4: Fig. 2c reference should be Fig. 2b, Fig. 2d should be Fig. 2c, Fig. 2b should be Fig. 2d, VS4; Fig. 2b, bottom should be VS4; Fig. 2f, Fig. 2f to Fig. 2g. Double check the Figure references in the text. Here is very confusing for the reader.

      We have now fixed this, thank you for catching it.

      (5) Page 5, 1st para: "Altogether, our results demonstrated both" → "Altogether, our results demonstrated that both"

      This has been updated.

      (6) Figure 2: In the e and g panels the x label is missing.

This was actually because the x-axes were the same across the panels, but we see how this was unclear, so we have updated the figure.

      (7) Figure 3: There is no panel g (the title is missing); In panels b, c, e, and g the y label is missing, and in panels e and g the x label is missing. Also, the Feedforward model is shown in panel g but it is introduced later in the text. Please remove it from Figure 3. Also in legend: "AV Reverse graph" → "Reverse graph". Also, "Accuracy" and "Alignment" should be presented as percentages (as in Figure 2).

      This has been corrected.

      (8) Figure 4; x labels are missing.

As with point (6), this was actually because the x-axes were the same across the panels, but we see how this was unclear, so we have updated the figure.

      (9) Page 7; I can’t find the cited Figure S1.

      Apologies, we have added the supplemental figure (now as S4). It shows the results of models with multiplicative feedback on the task in Fig 5 (as opposed to models with composite feedback shown in the main figure).

      Reviewer #2 (Recommendations for the authors):

      (1) Discussion Section 3.1 is only a literature review, and does not really add any value.

      Respectfully, we think it is important to relate our work to other computational work on the role of top-down feedback, and to make clear what our specific contribution is. But, we have updated the text to try to place additional emphasis on our study’s contribution, so that this section is more than just a literature review.

      “Our study adds to this previous work by incorporating modulatory top-down feedback into deep, convolutional, recurrent networks that can be matched to real brain anatomy. Importantly, using this framework we could demonstrate that the specific architecture of top-down feedback in a neural network has important computational implications, endowing networks with different inductive biases.”

      (2) Including ipython notebooks and some examples would be great to make it easier to use the code.

      We now provide a demo of how to use the code base in a jupyter notebook.

      (3) The description of the model is hard to comprehend. Please name and describe all parameters. Also, a figure would be great to understand the different model equations.

      We have added definitions of all model terms and parameters.

(4) The terminology is not really clear to me. For example "The results further suggest that different configurations of top-down feedback make otherwise identically connected models functionally distinct from each other and from traditional feedforward only recurrent models." The feedforward and only recurrent seem to contradict each other. Would maybe driving and modulatory be a better term here? I also saw in the code that you differentiate between three types of inputs: modulatory, threshold offset, and basal (like feedforward). How about you only classify connections based on these three types? I was also confused about the feedforward only model, because I was unsure whether there were still feedback connections, just with "basal" quality, or whether feedback connections between modalities and higher-to-lower level layers were omitted altogether.

      We take the reviewer’s point here. To clarify this, we have updated the text to refer to “driving only” rather than “feedforward only”, to make it obvious that what we change in these models is simply whether the connection has any modulatory impact on the activity. 
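As an aside for readers of this exchange, the driving/modulatory distinction can be illustrated with a toy sketch. This is not the paper's actual model: the function names, the gain function, and the nonlinearity are assumptions chosen only to show how a modulatory input differs from a driving one.

```python
# Toy illustration (not the paper's equations): a modulatory top-down input
# scales the feedforward drive, so it can amplify or suppress activity but
# cannot activate the unit on its own; a driving input is simply added, so
# it can activate the unit even without any feedforward drive.
import math

def unit_response(ff_drive, td_feedback, modulatory=True):
    """Toy unit activity given feedforward drive and top-down feedback."""
    if modulatory:
        gain = 1.0 + math.tanh(td_feedback)      # multiplicative gain around 1
        pre_activation = ff_drive * gain
    else:
        pre_activation = ff_drive + td_feedback  # additive, driving input
    return max(pre_activation, 0.0)              # ReLU-style nonlinearity

# With no feedforward drive, modulatory feedback alone cannot activate the
# unit, whereas a purely driving ("drive-only") input can:
print(unit_response(0.0, 2.0, modulatory=True))   # 0.0
print(unit_response(0.0, 2.0, modulatory=False))  # 2.0
```

In this toy picture, a drive-only connection corresponds to the additive branch, while a composite connection additionally carries the multiplicative component.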

      (5) "incorporating it into ANNs can affect their behavior and help determine the solutions that the network can discover." -> Do you mean constrain? Overall, I did not really get this point.

      Yes, we mean that it constrains the solutions that the network is likely to discover.

      (6) "ignore the auditory inputs when they visual inputs were unambiguous" -> the not they

      This has been fixed. Thank you for catching it.

      (7) xlabel in Figure 4 is missing.

      This has been fixed, thank you for catching it.

      Reviewer #3 (Recommendations for the authors):

      Major:

      (1) How alignment is computed is not defined. In addition to a proper definition in the methods section, it would be nice to briefly define it when it first appears in the results section.

We’ve added an explicit definition of how alignment is calculated in the methods and emphasized the calculation when it is first explained in the results.

      (2) A connectivity matrix for the feedforward-only model is missing and could be added.

      We have added this to Figure 1.

      (3) The connectivity matrix for each random model should also be shown.

      We’ve shown each of the random model configurations in the new supplemental figure S1.

      (4) Initial parameters are not defined, such as W, b etc. A table with all model parameters would be great.

      We have added a table to the methods listing all of the parameters.

      (5) Would be nice to show the t-sne plots (not just the NH score) for each model and each task in the appendix.

We can provide these figures on request; they would massively increase the file size of the paper PDF, as there are 49 of them for each task and each model, 980 in total. An example t-SNE plot is provided in Figure 6.

      Minor:

      (1) Page 4:

      "we refer to this as Visual-dominant Stimulus case 1, or VS1; Fig. 1a, top)." This should be Fig. 2a.

      (2) "In stimulus condition VS1, all of the models were able to learn to use the auditory clues to disambiguate the images (Fig. 2c)."

      This should be Fig. 2b.

      (3) "In comparison, in VS2, we found that the brainlike model learned to ignore distracting audio inputs quickly and consistently compared to the random models, and a bit more rapidly than the auditory information (Fig 2d)."

      This should be Fig. 2c.

      (4) "VS3; Fig. 2b, top"

      This should be Fig. 2d

      (5) "while all other models had to learn to do so further along in training (Fig. 2e)."

      It is not stated explicitly, but this suggests that the image-aligned target was considered correct, and that weight updates were happening.

      (6) "VS4; Fig. 2b, bottom"

      This should be Fig. 2f

      (7) "adept at learning (Fig. 2f)."

      This should be Fig. 2g

      (8) Figure 3:b,c,e y-labels are missing

      3f: both x and y labels are missing

      (9) Figure labeling in the text is not consistent (Fig. 1A versus Fig. 2a)

      (10) Doubled "the" in ""This shows that the inductive bias towards vision in the brainlike model depended on the presence of the multiplicative component of the the feedback"

      (11) Page 9 Figure 6: The caption says b shows the latent spaces for the VS2 task, whereas the main text refers to 6b as showing the latent space for the AS2 task. Please correct which task it is.

      (12) Methods 4.1 page 13

      "which is derived from the feedback input (h_{l−1})"

      This should be h_{l+1}

      (13) r_l, u_l, u and c are not defined to which aspects of the model they refer to

      Even though this is based on a previous model, the methods section should completely describe the model.

      Equations 1,2,3: the notation [x;y] is unclear and should be defined.

      Equation 5: u should probably be u_l.

      (14) Page 14 typo: externopyrmidisation.

      (15) It is confusing to use different names for the same thing: the all-feedforward model, the all feedforward network, the feedforward network, and the feedforward-only model are probably all the same? Consistent naming would help here.

      Thank you for the detailed comments! We’ve fixed the minor errors and renamed the feedforward models to drive-only models.

Reviewer #1 (Public review):

      Summary:

      Jeay-Bizot and colleagues investigate the neural correlates of the preparation of, and commitment to, a self-initiated motor action. In their introduction, they differentiate between theoretical proposals relating to the timing of such neural correlates relative to the time of a recorded motor action (e.g., a keypress). These are categorised into 'early' and 'late' timing accounts. The authors advocate for 'late' accounts based on several arguments that align well with contemporary models of decision-making in other domains (for example, evidence accumulation models applied to perceptual decisions). They also clearly describe prevalent methodological issues related to the measurement of event-related potentials (ERPs) and time-frequency power to gauge the timing of the commitment to making a motor action. These methodological insights are communicated clearly and denote potentially important limitations on the inferences that can be drawn from a large body of existing work.

      To attempt to account for such methodological concerns, the authors devise an innovative experiment that includes an experimental condition whereby participants make a motor action (a right-hand keypress) to make an image disappear. They also include a condition whereby the stimulus presentation program automatically proceeds at a set time that is matched to the response timing in a previous trial. In this latter condition, no motor action is required by the participant. The authors then attempt to determine the times at which they can differentiate between these two conditions (motor action vs no motor action) based on EEG and MEG data, using event-related potential analyses, time-frequency analyses, and multivariate classifiers. They also apply analysis techniques based on comparing M/EEG amplitudes at different time windows (as used in previous work) to compare these results to those of their key analyses.

      When using multivariate classifiers to discriminate between conditions, they observed very high classification performance at around -100ms from the time of the motor response or computer-initiated image transition, but lower classification performance and a lack of statistically significant effects across analyses for earlier time points. Based on this, they make the key claim that measured M/EEG responses at the earlier time points (i.e., earlier than around -100ms from the motor action) do not reliably correlate with the execution of a motor action (as opposed to no such action being prepared or made). This is argued to favour 'late' accounts of motor action commitment, aligning with the well-made theoretical arguments in favour of these accounts in the introduction. Although the exact time window related to 'late' accounts is not concretely specified, an effect that occurs around -100ms from response onset is assumed here to fall within that window.

      Importantly, this claim relies on accepting the null hypothesis of zero effect for the time points preceding around -100ms based on a somewhat small sample of n=15 and some additional analyses of individual participant datasets. Although the authors argue that their classifiers are sensitive to detecting relevant effects, and the study appears well-powered to detect the (likely to be large magnitude) M/EEG signal differences occurring around the time of the response or computer-initiated image transition, there is no guarantee that the study is adequately sensitive to detect earlier differences in M/EEG signals. These earlier effects are likely to be more subtle and exhibit lower signal-to-noise ratios, but would still be relevant to the 'early' vs 'late' debate framed in the manuscript. This, along with some observed patterns in the data, may substantially reduce the confidence one may have in the key claim about the onset timing of M/EEG signal differences.

      Notably, there is some indication of above-chance (above 0.5 AUC) classification performance at time points earlier than -100ms from the response, as visible in Figure 3A for the task-based EEG analyses (EEG OC dataset, blue line). While this was not statistically significantly above chance for their n=15 sample, these results do not appear to be clear evidence in favour of a zero-effect null-hypothesis. In Figures 2A-B, there are also visible differences in the ERPs across conditions, from around the time that motor action-related components have been previously observed (around -500ms from the response). The plotted standard errors in the data are large enough to indicate that the study may not have been adequately powered to differentiate between the conditions.

      Although the authors acknowledge this limitation in the discussion section of their manuscript, their counter-argument is that the classifiers could reliably differentiate between conditions at time points very close to the motor response, and in the time-based analyses where substantive confounds are likely to be present, as demonstrated in a set of analyses. Based on this data, the authors imply that the study is sufficiently powered to detect effects across the range of time points used in the analyses. While it's commendable that these extra analyses were run, they do not provide convincing evidence that the study is necessarily sensitive to detecting more subtle effects that may occur at earlier time points. In other words, the ability of classifiers (or other analysis methods) to detect what are likely to be very prominent, large effects around the time of the motor response does not guarantee that such analyses will detect smaller magnitude effects at other time points.

      In summary, the authors develop some very important lines of argument for why existing work may have misestimated the timing of neural signals that precede motor actions. This in itself is an important contribution to the field. However, their attempt to better estimate the timing of such signals is limited by a reliance on accepting the null hypothesis based on non-statistically significant results, and arguably a limited degree of sensitivity to detect subtle but meaningful effects.

      Strengths:

      This manuscript provides compelling reasons why existing studies may have misestimated the timing of the neural correlates of motor action preparation and execution. They provide additional analyses as evidence of the relevant confounds and provide simulations to back up their claims. This will be important to consider for many in the field. They also endeavoured to collect large numbers of trials per participant to also examine effects in individuals, which is commendable and arguably better aligned with contemporary theory (which pertains to how individuals make decisions to act, rather than groups of people).

      The innovative control condition in their experiment may also be very useful for providing complementary evidence that can better characterise the neural correlates of motor action preparation and commitment. The method for matching image durations across active and passive conditions is particularly well thought-out and provides a nice control for a range of potential confounding factors.

      Weaknesses:

      There is a mismatch between the stated theoretical phenomenon of interest (commitment to making a motor action) and what is actually tested in the study (differences in neural responses when an action is prepared and made compared to when no action is required). The assumed link between these concepts could be made more explicit for readers, particularly because it is argued in the manuscript that neural correlates of motor action preparation are not necessarily correlates of motor action commitment.

      As mentioned in the summary, the main issue is the strong reliance on accepting the null hypothesis of no differences between motor action and computer initiation conditions based on a lack of statistically significant results from the modest (n=15) sample. Although a larger sample will increase measurement precision at the group level, there are some EEG data processing changes that could increase the signal-to-noise ratio of the analysed data and produce more precise estimates of effects, which may improve the ability to detect more subtle effects, or at least provide more confidence in the claims of null effects.

First, it is stated in the EEG acquisition and preprocessing section that the 64-channel Biosemi EEG data were recorded with a common average reference applied. Unless some non-standard acquisition software was used (which, to our knowledge, does not exist), Biosemi systems do not actually apply this reference at recording (it is for display purposes only, but often mistaken to be the actual reference applied). As stated in the Biosemi online documentation, a reference should be subsequently applied offline; otherwise, there is a substantial decrease in the signal-to-noise ratio of the EEG data, and a large portion of ambient alternating current noise is retained in the recordings. This can be easily fixed by applying a referencing scheme (e.g., the common average reference) offline as one of the first steps of data processing. If this was, in fact, done offline, it should be clearly communicated in the manuscript.

      In addition, the data is downsampled using a non-integer divisor of the original sampling rate (a 2,048 Hz dataset is downsampled to 500 Hz rather than 512 Hz). Downsampling using a non-integer divisor is not recommended and can lead to substantial artefacts in raw data as a result, as personally observed by this Reviewer in Biosemi data. Finally, although a 30 Hz low-pass filter is applied for visualisation purposes of ERPs, no such filter is applied prior to analyses, and no method is used to account for alternating current noise that is likely to be in the data. As noted above, much of the alternating current noise will be retained when an offline reference is not applied, and this is likely to further degrade the quality of the data and reduce one's ability to identify subtle patterns in EEG signals. Changes in data processing to address these issues would likely lead to more precise estimates of EEG signals (and by extension differences across conditions).

      With regard to possible effects extending hundreds of milliseconds before the response, it would be helpful for the authors to more precisely clarify the time windows associated with 'early' and 'late' theories in this case. The EEG data that would be required to support 'early' theories is also not made sufficiently clear. For example, even quite early neural correlates of motor actions in this task (e.g., around -500ms from the response, or earlier) could still be taken as evidence for the 'late' theories if these correlates simply reflect the accumulation of evidence toward making a decision and associated motor action, as implied by the Leaky Stochastic Accumulator model described by the authors. In other words, even observations of neural correlates of motor action preparation that occur much earlier than the response would not constitute clear evidence against the 'late' account if this neural activity represents an antecedent to a decision and action (rather than commitment to the action), as the authors point out in the introduction.

      In addition, there is some discrepancy regarding the data that is used by the classifiers to differentiate between the conditions in the EEG data and the claims about the timing of neural responses that differentiate between conditions. Unless we reviewers are mistaken, the Sliding Window section of the methods states that the AUC scores in Figure 3 are based on windows of EEG data that extend from the plotted time point until 0.5 seconds into the past. In other words, an AUC value at -100ms from the response is based on classifiers applied to data ranging from -600 to -100 milliseconds relative to the response. In this case, the range of data used by the classifiers extends much earlier than the time points indicated by Figure 3, and it is difficult to know whether the data at these earlier time points may have contributed (even in subtle ways) to the success of the classifiers. This may undermine the claim that neural responses only become differentiable from around -100ms from response onset. The spans of these windows used for classification could be made more explicit in Figure 3, and classification windows that are narrower could be included in a subset of analyses to ensure that classifiers only using data in a narrow window around the response show the high degree of classification performance in the dataset. If we are mistaken, then perhaps these details could be clarified in the method and results sections.
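To make this windowing concern concrete, here is a toy bookkeeping sketch. The 500 Hz sampling rate matches the manuscript, but the trailing-window reading and the epoch onset used for indexing are our assumptions.

```python
SFREQ = 500   # Hz, the downsampled rate reported in the manuscript
WIN_S = 0.5   # trailing window length in seconds, as we read the methods

def window_samples(t_plot_s, epoch_start_s=-1.0):
    """Sample-index range feeding a classifier whose score is plotted at
    t_plot_s (seconds relative to the response), assuming a trailing window
    of WIN_S seconds and an epoch starting at epoch_start_s (an assumed
    value used here purely for index bookkeeping)."""
    start = int(round((t_plot_s - WIN_S - epoch_start_s) * SFREQ))
    stop = int(round((t_plot_s - epoch_start_s) * SFREQ))
    return start, stop

# An AUC value plotted at -100 ms would draw on data from -600 ms to -100 ms:
print(window_samples(-0.1))  # (200, 450) with an epoch starting at -1.0 s
```

If this reading of the methods is correct, above-chance classification plotted at -100 ms could in principle reflect signal anywhere in the preceding 500 ms, which is why narrower windows would help pin down the onset timing.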

    1. Reviewer #1 (Public review):

      Summary:

      This study reports the effects of psilocin on iPSC-derived human cortical neurons.

      Strengths:

      The characterization was comprehensive, involving immunohistochemistry of various markers, 5-HT2A receptors, BDNF, and TrkB, transcriptomics analyses, morphological determination, electrophysiology, and finally synaptic protein measurements. The results are in close agreement with prior work (PMID 29898390) on rat cultured cortical neurons. Nevertheless, there is value in confirming those earlier findings and furthermore to demonstrate the effects in human neurons, which are important for translation. The genetic, proteomics, and cell structure analyses used in this paper are its major strength. The study supports the value of using iPSC-derived human cortical neurons for drug development involving psychedelics-related compounds.

      Weaknesses:

      (1) Line 140: 5-HT2A receptor expression was found via immunocytochemistry to reside in the somatodendritic and axonal compartments. However, prior work from ex vivo tissue using electron microscopy has found predominantly 5-HT2A receptor expression in the somatodendritic compartment (PMID: 12535944). Was this antibody validated to be 5-HT2A receptor-specific? Can the authors reason why the discrepancy may arise, and if the axonal expression is specific to the cultured neurons?

      (2) Line 143: It would be helpful to specify the dose of psilocin tested, and describe how this dose was chosen.

      (3) Figure 1: The interpretation is that the differential internalization in the axonal and somatodendritic compartments is time-dependent. However, given that only one dose is tested, it is also possible that this reflects dose dependence, with the longer time exposure leading to higher dose exposure, so these variables are related. That is, if a higher dose is given, internalization may also be observed after 10 minutes in the dendritic compartment.

      (4) Figure 3 & 4: What is the 'control' here? A more appropriate control for the 24 hours after psilocin application would be 24 hours after vehicle application. Here the authors are looking at before and after, but the factor of time elapsed and perturbation via application is not controlled for.

      (5) The sample size was not clearly described. In the figure legend, N = the number of neurites is provided, but it is unclear how many cells were analyzed and how many of those cells belong to the same culture. This important sample-size information should be provided. Relatedly, statistical analyses should consider that neurites from the same cells are not independent. If the neurites indeed come from the same cells, then the effective sample size is much smaller, and a statistical analysis accounting for the nested nature of the data should be used.

      Comments on revisions:

      The authors performed substantial experiments to check the validity of the HTR2A antibody for the revision. Briefly, they found that western blot shows a single band, abolished by a blocking peptide, in neural progenitors and iPSC-derived neurons, suggesting positive results. However, they also detected immunofluorescence signals in HEK293 and HeLa cells, which do not express 5-HT2A receptors, as scRNA-seq analysis of these cells shows a complete absence of the transcript. The antibody therefore has epitope-selective binding but also some non-specific binding, precluding its use. The authors rightfully removed the data related to the antibody in the revised manuscript. The account is repeated here for anyone who may find the information helpful. Overall, the additional results added rigor to the study.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1:

      Comment 1: 5-HT2A Antibody Specificity

      Was this antibody validated to be 5-HT2A receptor-specific? Can the authors reason why the discrepancy may arise, and if the axonal expression is specific to the cultured neurons?

      We performed extensive validation of the anti-5-HT2A receptor antibody (Alomone #ASR-033), which is summarized in the accompanying Author response images:

      Positive findings (Author response image 1c-e, Author response image 2a): (1) Western blot showed a single band at the expected molecular weight (~50 kDa) in neural progenitors and iPSC-derived neurons. (2) The blocking peptide (#BLP-SR033) abolished Western blot bands and markedly reduced immunofluorescence signals in neurons, confirming epitope-specific binding.

      Negative findings (Author response image 1a-b, Author response image 2a-b, Author response image 3): (1) We detected positive immunofluorescence signals in HEK293 and HeLa cells (Author response image 1a-b), which do not express 5-HT2AR. (2) Western blot also showed bands in HEK293 and HeLa cells (Author response image 2a-b). (3) Single-cell RNA-seq analysis of HEK293T cells confirmed complete absence of HTR2A expression (Author response image 3a). (4) qPCR showed no detectable HTR2A transcripts in iPSCs or HeLa cells (Ct > 36), while neural progenitors and neurons showed clear expression (Author response image 3b). (5) siRNA knockdown experiments failed to produce a corresponding decrease in immunofluorescence or Western blot signals, despite reduced HTR2A transcript levels (data not shown).

      BLAST analysis: Protein BLAST analysis of the 13-amino acid immunogenic peptide sequence identified the human 5-HT2A receptor as the top hit (9/13 amino acids overlap). However, shorter sequence similarities were also found with other proteins, including APPBP1 (6/9 amino acids), Immunoglobulin Heavy Chain (6/7 amino acids), and Interleukin-31 receptor (6/8 amino acids). While these partial homologies do not provide a definitive mechanistic explanation for the observed off-target binding, they illustrate that the epitope sequence is not entirely unique to the 5-HT2A receptor.

      Conclusion: While our validation confirmed epitope-specific binding (blocking peptide effective in neurons), the antibody clearly detects something in cells that demonstrably lack HTR2A gene expression. This indicates off-target binding to other proteins sharing the epitope sequence. We have therefore removed all antibody-based 5-HT2A receptor experiments from the revised manuscript. This includes the receptor internalization data from Figure 1. The remaining findings (BDNF upregulation, gene expression changes, morphological effects, electrophysiology) are supported by independent methods including pharmacological blockade with ketanserin.

      Comment 2: Psilocin Dose Selection

      It would be helpful to specify the dose of psilocin tested, and describe how this dose was chosen.

      We used 10 µM psilocin based on: (1) The seminal study by Ly et al. (2018), which demonstrated neuroplasticity effects at this concentration in rat cortical neurons. (2) Our own dose-response experiments (Figure S2B) showing maximal BDNF increase at 10 µM compared to lower concentrations (10 nM, 100 nM, 1 µM). We have clarified this in the revised Methods section.

      Comment 3: Dose vs. Time Dependence

      Given that only one dose is tested, it is also possible that this reflects dose dependence, with the longer time exposure leading to higher dose exposure.

      We agree that dose dependence cannot be excluded with our current experimental design. This point is now moot as we have removed the 5-HT2A receptor internalization experiments from the manuscript. Future studies in our group will address dose-dependent effects on other readouts.

      Comment 4: Control Conditions

      What is the 'control' here? A more appropriate control would be 24 hours after vehicle application.

      The control condition is indeed a vehicle (DMSO) control collected at the same time point as the experimental condition (i.e., 24 hrs post-treatment). We have clarified this in the revised figure legends and Methods section to avoid confusion.

      Comment 5: Sample Size Description

      The sample size was not clearly described. Statistical analyses should consider that neurites from the same cells are not independent.

      We have expanded the sample size descriptions in the figure legends. Analyses were performed using 5-10 microscope images per condition, with 15 ROIs per image, across at least two independent differentiations from two genetic backgrounds. Regarding independence: each neurite segment exists within a distinct microenvironment and can be considered an independent measurement unit, consistent with established practices in the field (Paul et al., 2021, CNS Neurosci Ther). We acknowledge this increases statistical power and have noted this in the Methods.
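      As a minimal sketch of the nested-data concern raised above (hypothetical numbers, not the study's data), one standard remedy is to collapse neurite-level values to one mean per cell, so that the statistical test operates on independent units:

      ```python
      import random
      import statistics

      def per_cell_means(measurements):
          """Collapse neurite-level values to one mean per cell so that
          downstream tests operate on independent units (the cells)."""
          return {cell: statistics.mean(vals) for cell, vals in measurements.items()}

      # Hypothetical data: 3 cells per condition, 15 neurite measurements each.
      random.seed(0)
      control = {f"ctrl_cell{i}": [random.gauss(1.0, 0.2) for _ in range(15)] for i in range(3)}
      treated = {f"psi_cell{i}": [random.gauss(1.3, 0.2) for _ in range(15)] for i in range(3)}

      ctrl_means = list(per_cell_means(control).values())
      trt_means = list(per_cell_means(treated).values())

      # The effective n per group is now 3 (cells), not 45 (neurites).
      print(len(ctrl_means), len(trt_means))  # prints: 3 3
      ```

      When per-cell aggregation discards too much information, a linear mixed-effects model with cell (and culture) as random effects is the fuller alternative for nested designs.
      
      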

      Reviewer #2:

      Comment 1: 5-HT2A Antibody Validation

      Without validation (using for example knockdown techniques to decrease expression of 5HT2A), the experiments using this antibody should be excluded from the manuscript.

      We agree with this assessment. As detailed in our response to Reviewer 1 (Comment 1) and documented in the Response to Reviewer Figure, our extensive validation attempts—including siRNA knockdown—could not conclusively demonstrate antibody specificity. We have removed all antibody-based 5-HT2A receptor experiments from the revised manuscript.

      Comment 2: Serotonin in Cell Media

      Did the authors evaluate whether 5-HT is present in the cell media?

      The cell culture media used in our experiments does not contain serotonin. We have explicitly stated this in the revised Methods section.

      Comment 3: Statistical Analysis of Figure S1F

      Some of the datasets are not statistically analyzed, such as Figure S1F.

      Figure S1F was part of the 5-HT2A receptor experiments and has been removed from the revised manuscript along with the associated data.

      Comment 4: Translational Validity of Prolonged Exposure

      The authors continuously exposed cells to psilocin for hours or days. Since this is not the model of what occurs in vivo, the findings lack translational validity.

      We acknowledge this limitation. Most experiments (BDNF, gene expression, branching) were conducted 24–48 hrs after a brief 10-minute exposure, which better reflects the in vivo situation. Prolonged exposures (96 hrs) were used specifically for synaptogenesis experiments based on literature showing that repeated LSD administration enhances spine density (Inserra et al., 2022; De Gregorio et al., 2022). Our in vitro system lacks metabolizing enzymes and glial cells, which may introduce temporal biases. We have added a discussion of these limitations in the revised manuscript.

      Comment 5: Ketanserin Effect on BDNF

      In Figure 2E, ketanserin by itself seems to reduce BDNF density. How do the authors conclude that ketanserin blocks psi-induced effects?

      We identified that one cell line (Ctrl 1) with inherently higher BDNF density was inadvertently excluded from the ketanserin-only condition. After removing Ctrl 1 from all conditions and reanalyzing, the difference between Ctrl and Ket alone is no longer significant. The significant difference between Psi+Ket and Ket alone demonstrates that psilocin exerts effects that ketanserin can block, consistent with 5-HT2A receptor mediation. The revised figure and statistical analysis are included in the updated manuscript.

      Comment 6: mCherry Localization

      mCherry (Fig 4A) seems to be retained in the nucleus.

      The CamKII promoter drives expression of cytoplasmic mCherry, which fills the entire neuron including soma, dendrites, and axons. The apparent nuclear signal reflects mCherry accumulation in the soma, which surrounds the nucleus. The images clearly show mCherry extending into neurites, which was essential for our Sholl analysis of neuronal complexity.

      Comment 7: Reference 36

      Reference 36 is a review article that does not mention psilocin.

      Our statement refers broadly to serotonergic psychedelics increasing neurotrophic factors. Reference 36 (Colaço et al., 2020) examines ayahuasca, which contains the serotonergic psychedelic DMT. We have revised the text to clarify this point.

      Summary of Major Revisions

      (1) Removed all 5-HT2A receptor antibody-based experiments from Figure 1 and supplementary figures due to inconclusive specificity validation. An Author response image documenting our validation attempts is provided.

      (2) Clarified control conditions (vehicle controls at matched time points) in figure legends.

      (3) Expanded sample size descriptions in Methods and figure legends.

      (4) Re-analyzed ketanserin experiments with consistent cell line inclusion.

      (5) Added discussion of translational limitations.

      (6) Added new Figure S5 summarizing proposed signaling pathways.

      (7) Expanded discussion on the relevance of iPSC-derived neurons for drug development.

      Author response image 1.

      Immunostaining for 5-HT2A receptor across cell types and peptide-blocking control. (a) HEK293 cells display a positive immunofluorescent signal despite not endogenously expressing 5-HT2AR, indicating nonspecific antibody reactivity. (b) HeLa cells also exhibit a positive signal despite lacking endogenous 5-HT2AR expression, further demonstrating nonspecific antibody binding in non-expressing cell types. (c) Neural progenitor cells show clear positive 5-HT2AR staining. (d) iPSC-derived neurons exhibit robust and well-defined 5-HT2AR staining. (e) Application of the Alomone 5-HT2AR blocking peptide (#BLP-SR033) markedly reduces neuronal signal intensity, supporting epitope-specific binding.

      Author response image 2.

      Western blot analysis of 5-HT2A receptor abundance and peptide-blocking control. (a-b) In line with the immunofluorescence, a single band is detected in iPSCs, HEK cells, neural progenitors, and iPSC-derived neurons (a), and in HeLa cells (b). (a) Preincubation of the primary antibody with the corresponding blocking peptide abolishes this band across all samples, consistent with specific binding of the antibody to its intended epitope.

      Author response image 3.

      Lack of detectable 5-HT2AR expression in HEK and HeLa cells. (a) Analysis of a human-only HEK293T single-cell RNA-seq dataset (10x Genomics; https://www.10xgenomics.com/datasets/293-t-cells-1-standard-1-1-0, accessed 2025-11-25) shows no meaningful HTR2A expression, whereas other genes such as GAPDH, TP53, MYC, and ACTB are robustly detected. Consistently, evaluation of a “Barnyard” dataset - an equal mixture of human HEK293T and mouse NIH3T3 cells (10x Genomics; https://www.10xgenomics.com/datasets/20-k-1-1mixture-of-human-hek-293-t-and-mouse-nih-3-t-3-cells-3-ht-v-3-1-3-1-high-6-1-0, accessed 2025-11-25) - reveals only ~4 of ~10,000 droplets with minimal HTR2A signal, confirming the absence of meaningful expression. (b) qPCR analysis further demonstrates no detectable HTR2A transcripts in iPSCs or HeLa cells (Ct > 36), while neural progenitors and iPSC-derived cortical neurons show expression when normalized to housekeeping genes GAPDH and TBP.

    1. Reviewer #1 (Public review):

      Summary:

      This manuscript presents findings on the adaptation mechanisms of Saccharomyces cerevisiae under extreme stress conditions. The authors try to generalize this to adaptation to stress tolerance. A major finding is that S. cerevisiae evolves a quiescence-like state with high trehalose to adapt to freeze-thaw tolerance independent of their genetic background. The manuscript is comprehensive, and each of the conclusions is well supported by careful experiments.

      Strengths:

      This is excellent interdisciplinary work.

      I have commented on the response of the authors, in-line, below. This is to maintain the conversation thread with the authors.

      Comment 1:

      Earlier papers have shown that loss of ribosomal proteins, that slow growth, leads to better stress tolerance in S. cerevisiae. Given this, isn't it expected that any adaptation that slows down growth would, overall, increase stress tolerance? Even for other systems, it has been shown that slowing down growth (by spore formation in yeast or bacteria/or dauer formation in C. elegans) is an effective strategy to combat stress and hence is a likely route to adaptation. The authors stress this as one of the primary findings. I would like the authors to explain their position, detailing how their findings are unexpected in the context of the literature.

      Response:

      We agree that the link between slower growth and higher stress tolerance has been well studied. What is distinctive here is that repeated, near-lethal freeze-thaw selected not only for a tolerant/quiescent-like state but also for a shorter lag on re-entry. In this regime of freeze-thaw-regrowth, cells that are tolerant but slow to restart would be outcompeted by naive fast growers. Our quiescence-based selection simulations reproduce exactly this constraint. We have added this explanation to the Results to make clear that the novelty is the co-evolution of a tolerant, trehalose-rich state together with rapid regrowth under an alternating regime.

      Comment to Response: I get the point. I believe that the outcome is highly dependent on how selection pressure is administered. So, generalizing this over all stresses (as done in the abstract) may not be accurate.

      Comment 2:

      Convergent evolution of traits: I find the results unsurprising. When selecting for a trait, if there is a major mode to adapt to that stress, most of the strains would adapt to that mode, independent of the route. In my view, finding out this major route was the objective of many of the previous reports on adaptive evolution. The surprising part in the previous papers (on adaptive evolution of bacteria or yeast) was the resampling of genes that acquired mutations in multiple replicates of an evolution experiment, providing a handle to understand the major genetic route or the molecular mechanism that guides the adaptation (for example in this case it would be - what guides the over-accumulation of trehalose). I fail to understand why the authors find the results surprising, and I would be happy to understand that from the authors. I may have missed something important.

      Response:

      Our surprise was precisely that we did not see the classical pattern of "phenotypic convergence + repeated mutations in the same locus/module." All independently evolved lines converged on a trehalose-rich, mechanically reinforced, quiescence-like phenotype, but population sequencing across lines did not reveal a single repeatedly hit gene or small shared pathway, even when we increased selection stringency (1-3 freeze-thaw cycles per round). We have now stated in the manuscript that this decoupling (strong phenotypic convergence, non-overlapping genetic routes) is the central inference: selection is acting on a physiologically defined state that multiple genotypes can reach.

      Comment to Response: You indeed saw a case of phenotypic convergence. Trehalose-rich, mechanically reinforced, and quiescence-like are phenotypes that have converged; this is what prevented lysis. The same locus need not be mutated over and over again: if the trehalose pathway is controlled by many processes (it is, and many are still unknown, as I point out in the next comment), many different mutations at different loci can result in the same regulation! I do not see the decoupling between phenotypic convergence and genetic routes as surprising or novel; molecular and cellular biology is replete with examples where deletion (mutation) of hundreds of different genes has the same phenotypic outcome (yeast deletion library screening, indirect effects, etc.). If this were a specific unsolved question in evolutionary biology, the matter would be different.

      A minor point: Here I would also like to point out that the three phenotypes you measure may be linked to each other, so their independent evolution may just be a cause-effect relationship. For example, trehalose accumulation may drive the other two. This has not been deconvoluted in this manuscript.

      Comment 3:

      Adaptive evolution would work on phenotype, as all of selective evolution is supposed to. So, given that one of the phenotypes well-known in the literature to confer freeze-thaw tolerance is trehalose accumulation, I think it is not surprising that this trait is selected. For me, this is not a case of "non-genetic" adaptation as the authors point out: it is likely because perturbation of many genes can individually result in the same outcome - up-regulation of trehalose accumulation. Thereby, although the adaptation is genetic, it is not homogeneous across the evolving lines - the end result is. Do the authors check that the trait is actually a non-genetic adaptation, i.e., if they regrow the cells for a few generations without the stress, the cells fall back to being similarly only partially fit to freeze-thaw cycles? Additionally, the inability to identify a network that is conserved in the sequencing does not mean that there is no regulatory pathway. A large number of cryptic pathways may exist to alter cellular metabolic states.

      This is a point in continuation of point #2, and I would like to understand what I have missed.

      Response:

      We agree, and we have removed the wording "non-genetic adaptation." The evolved populations retain high survival even after regrowth for ≥25 generations without freeze-thaw, so the adaptation is clearly genetically maintained. What our data show is that there is no single genetic route to the shared phenotype; different mutations can all drive cells into the same trehalose-rich, quiescence-like, mechanochemically reinforced state. We now describe this as "genetic diversification with phenotypic convergence."

      Comment to Response: While the last term does explain what is going on, isn't it an outcome that is routine in cell biology (as pointed out in my previous comment to your response)? I apologize for not understanding the punchline that is provided in the last few sentences of the abstract.

      Comment 4:

      To propose the convergent nature, it would be important to examine independently evolved lines, most probably more than two. It is not clear from their results section if they have multiple lines that have evolved independently.

      Response:

      We indeed evolved four independent lines and maintained two independent controls. We have added this information at the start of the Results so that the level of replication is immediately clear.

      Comment to Response: Previous large-scale studies have sequenced hundreds of lines to oversample the pathways and identify reproducible loci. With pooled sequencing (as mentioned below) and only four evolved lines, I am not sure that this study has the power to conclude whether the same loci are sampled or not! If there were 10 gene LOFs that control trehalose levels (which you can find from the published deletion screening experiments), then each of the four experiments is likely to go through one of these routes; how likely is it that you would identify the same route in two pools? It is unlikely, and therefore sequencing of 4 pools cannot tell you whether the mutational path is repeatedly sampled or not.
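      The reviewer's power argument can be made quantitative with a birthday-style calculation. The numbers below (10 equally likely loss-of-function routes, 4 independent lines) are purely illustrative and are not taken from the study:

      ```python
      from math import prod

      def p_no_repeat(n_routes, n_lines):
          """Probability that n_lines independent experiments all take
          distinct routes, when each picks uniformly among n_routes."""
          return prod(n_routes - k for k in range(n_lines)) / n_routes ** n_lines

      # Reviewer's illustrative scenario: 10 equivalent routes, 4 evolved lines.
      p = p_no_repeat(10, 4)
      print(round(p, 3))  # prints: 0.504
      ```

      Even under this generous assumption of only 10 routes, roughly half of such experiments would show no repeated route across four lines, so the absence of shared loci in four pools carries little information about whether a common genetic path exists.
      
      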

      Comment 5:

      For the genomic studies, it is not clear if the authors sequenced a pool or a single colony from the evolved strains. This is an important point, since an average sequence will miss out on many mutations and only focus on the mutations inherited from a common ancestral cell. It is also not clear from the section.

      Response:

      We sequenced population samples from the evolved lines. Our specific question was whether independently evolved lines would show the same high-frequency genetic solution, as is often seen in parallel evolution. Pool sequencing may under-sample rare/private variants, but it is appropriate for detecting such shared, high-frequency routes - and we do not find any. We have clarified this rationale in the Methods/Results.

      Comment to Response: Please provide the average sequencing depth of each sequencing run. It is essential for understanding the power of this study to identify mutations. What coverage was achieved, expressed as multiples of the genome size (X coverage)?
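      For reference, the mean coverage the reviewer asks about follows directly from read count, read length, and genome size. The values below are illustrative only and are not taken from the study:

      ```python
      # Mean sequencing coverage = (number of reads * read length) / genome size.
      def mean_coverage(n_reads, read_len_bp, genome_size_bp):
          """Expected per-base coverage for a whole-genome sequencing run."""
          return n_reads * read_len_bp / genome_size_bp

      # e.g. 10 million 150 bp reads against the ~12.1 Mb S. cerevisiae genome:
      print(round(mean_coverage(10_000_000, 150, 12_100_000)))  # prints: 124
      ```
      
      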

    2. Author response:

      The following is the authors’ response to the original reviews.

      We thank the editor and the reviewers for the detailed and constructive comments. In revising the manuscript we have: (i) clarified what is new relative to prior stress tolerance work, (ii) made explicit that we observe phenotypic convergence without a shared genetic route, (iii) stated upfront that we evolved four independent lines plus two controls, and (iv) corrected figure legends and statistics and added the missing citations. Below we respond point-by-point.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This manuscript presents findings on the adaptation mechanisms of Saccharomyces cerevisiae under extreme stress conditions. The authors try to generalize this to adaptation to stress tolerance. A major finding is that S. cerevisiae evolves a quiescence-like state with high trehalose to adapt to freeze-thaw tolerance independent of their genetic background. The manuscript is comprehensive, and each of the conclusions is well supported by careful experiments.

      Strengths:

      This is excellent interdisciplinary work.

      Weaknesses:

      I have questions regarding the overall novelty of the proposal, which I would like the authors to explain.

      (1) Earlier papers have shown that loss of ribosomal proteins, that slow growth, leads to better stress tolerance in S. cerevisiae. Given this, isn’t it expected that any adaptation that slows down growth would, overall, increase stress tolerance? Even for other systems, it has been shown that slowing down growth (by spore formation in yeast or bacteria/or dauer formation in C. elegans) is an effective strategy to combat stress and hence is a likely route to adaptation. The authors stress this as one of the primary findings. I would like the authors to explain their position, detailing how their findings are unexpected in the context of the literature.

      We agree that the link between slower growth and higher stress tolerance has been well studied. What is distinctive here is that repeated, near-lethal freeze–thaw selected not only for a tolerant/quiescent-like state but also for a shorter lag on re-entry. In this regime of freeze–thaw–regrowth, cells that are tolerant but slow to restart would be outcompeted by naive fast growers. Our quiescence-based selection simulations reproduce exactly this constraint. We have added this explanation to the Results to make clear that the novelty is the co-evolution of a tolerant, trehalose-rich state together with rapid regrowth under an alternating regime.

      (2) Convergent evolution of traits: I find the results unsurprising. When selecting for a trait, if there is a major mode to adapt to that stress, most of the strains would adapt to that mode, independent of the route. In my view, finding out this major route was the objective of many of the previous reports on adaptive evolution. The surprising part in the previous papers (on adaptive evolution of bacteria or yeast) was the resampling of genes that acquired mutations in multiple replicates of an evolution experiment, providing a handle to understand the major genetic route or the molecular mechanism that guides the adaptation (for example in this case it would be - what guides the over-accumulation of trehalose). I fail to understand why the authors find the results surprising, and I would be happy to understand that from the authors. I may have missed something important.

      Our surprise was precisely that we did not see the classical pattern of “phenotypic convergence + repeated mutations in the same locus/module.” All independently evolved lines converged on a trehalose-rich, mechanically reinforced, quiescence-like phenotype, but population sequencing across lines did not reveal a single repeatedly hit gene or small shared pathway, even when we increased selection stringency (1–3 freeze–thaw cycles per round). We have now stated in the manuscript that this decoupling (strong phenotypic convergence, non-overlapping genetic routes) is the central inference: selection is acting on a physiologically defined state that multiple genotypes can reach.

      (3) Adaptive evolution would work on phenotype, as all of selective evolution is supposed to. So, given that one of the phenotypes well-known in the literature to confer freeze-thaw tolerance is trehalose accumulation, I think it is not surprising that this trait is selected. For me, this is not a case of "non-genetic" adaptation as the authors point out: it is likely because perturbation of many genes can individually result in the same outcome - up-regulation of trehalose accumulation. Thereby, although the adaptation is genetic, it is not homogeneous across the evolving lines - the end result is. Do the authors check that the trait is actually a non-genetic adaptation, i.e., if they regrow the cells for a few generations without the stress, the cells fall back to being similarly only partially fit to freeze-thaw cycles? Additionally, the inability to identify a network that is conserved in the sequencing does not mean that there is no regulatory pathway. A large number of cryptic pathways may exist to alter cellular metabolic states.

      This is a point in continuation of point #2, and I would like to understand what I have missed.

      We agree, and we have removed the wording “non-genetic adaptation.” The evolved populations retain high survival even after regrowth for ≥25 generations without freeze–thaw, so the adaptation is clearly genetically maintained. What our data show is that there is no single genetic route to the shared phenotype; different mutations can all drive cells into the same trehalose-rich, quiescence-like, mechanochemically reinforced state. We now describe this as “genetic diversification with phenotypic convergence.”

      (4) To propose the convergent nature, it would be important to examine independently evolved lines, most probably more than two. It is not clear from their results section if they have multiple lines that have evolved independently.

      We indeed evolved four independent lines and maintained two independent controls. We have added this information at the start of the Results so that the level of replication is immediately clear.

      (5) For the genomic studies, it is not clear if the authors sequenced a pool or a single colony from the evolved strains. This is an important point, since an average sequence will miss out on many mutations and only focus on the mutations inherited from a common ancestral cell. It is also not clear from the section.

      We sequenced population samples from the evolved lines. Our specific question was whether independently evolved lines would show the same high-frequency genetic solution, as is often seen in parallel evolution. Pool sequencing may under-sample rare/private variants, but it is appropriate for detecting such shared, high-frequency routes — and we do not find any. We have clarified this rationale in the Methods/Results.

      Reviewer #2 (Public review):

      Summary:

      The authors used experimental evolution, repeatedly subjecting Saccharomyces cerevisiae populations to rapid liquid-nitrogen freeze-thaw cycles while tracking survival, cellular biophysics, metabolite levels, and whole-genome sequence changes. Within 25 cycles, viability rose from ~2% to ~70% in all independent lines, demonstrating rapid and highly convergent adaptation despite distinct starting genotypes. Evolved cells accumulated about threefold more intracellular trehalose, adopted a quiescence-like phenotype (smaller, denser, non-budding cells), showed cytoplasmic stiffening and reduced membrane damage, and re-entered growth with a shorter lag, traits that together protected them from ice-induced injury. Whole-genome sequencing indicated that multiple genetic routes can yield the same mechano-chemical survival strategy. A population model in which trehalose controls quiescence entry, growth rate, lag, and freeze-thaw survival reproduced the empirical dynamics, implicating physiological state transitions rather than specific mutations as the primary adaptive driver. The study therefore concludes that extreme-stress tolerance can evolve quickly through a convergent, trehalose-rich quiescence-like state that reinforces membrane integrity and cytoplasmic structure.

      Strengths:

      The strengths of the paper are the experimental design, data presentation and interpretation, and that it is well-written.

      (1) While the phenotyping is thorough, a few more growth curves would be quite revealing to determine the extent of cross-stress protection. For example, comparing growth rates under YPD vs. YPEG (EtOH/glycerol), and measuring growth at 37ºC or in the presence of 0.8 M KCl.

      We thank the referee for the interesting suggestions. However, growth rates alone may be difficult to interpret since WT strains also show different growth rates under these conditions. Therefore, comparing the relative fitness or survival of the evolved strains versus the WT under these stresses would be more informative. In the present study we limited growth/survival measurements to what was needed to parameterize the adaptation model in YPD under the freeze–thaw regime. We have now added a statement in the Discussion that, given the shared trehalose/mechanical mechanism, such cross-stress assays are an expected and straightforward follow-up.

      (2) Is GEMS integrated prior to evolution? Are the evolved cells transformable?

      Yes, GEMs were integrated prior to evolution; integrating them afterwards was not practical because the non-integrated evolved population showed low transformation efficiency, likely due to altered cell-wall properties.

      (3) From the table, it looks like strains either have mutations in Ras1/2 or Vac8. Given the known requirements of Ras/PKA signaling for the G1/S checkpoint (to make sure there are enough nutrients for S phase), this seems like a pathway worth mentioning and referencing. Regarding Vac8, its emerging roles in NVJ and autophagy suggest another nutrient checkpoint, perhaps through TORC1. The common theme is rewired metabolism, which is probably influencing the carbon shuttling to trehalose synthesis.

      We appreciate the reviewer’s suggestion to consider pathways like Ras/PKA (linked to Ras1/2) and autophagy/TORC1 (linked to Vac8) as potential upstream modulators. While these pathways are involved in nutrient sensing and metabolic regulation, we chose not to emphasize them specifically. This is because (i) some evolved lines lack Ras1/2 or Vac8 variants, and (ii) none of the variants lies directly in the trehalose synthesis/degradation pathways. Furthermore, direct links to trehalose accumulation are not well established for these specific variants in this context, and pathways like Ras are global regulators with broad effects. Together with the strongly convergent phenotype, this supports our main inference that multiple genetic/metabolic routes can feed into the same trehalose-rich, mechanochemically reinforced, quiescence-like state. We have added a note in the Discussion regarding metabolic rewiring and trehalose.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Generally, the results sections should have more details. The figures should be corrected, and the legends should be checked for correctness. The manuscript seems to have been assembled in haste?

      We have expanded the relevant Results subsections with one-sentence motivations (why each measurement was performed) and we have corrected the figure legends for ordering and consistency.

      Figure 3: It will be good to have the correct p-values on the figure itself. P-values are typically less than 1, unless there is some special method (here the values presented are , etc). Please explain how the P-values were obtained in the figure legend itself.

      Figure 3 now shows the actual p-values. The legend specifies the details and the sample sizes used.

      Figure 5: It is not clear what the error bars show in 5B, E (different evolved population/ clones/ cells?). All the figure legends are mixed up, please correct them. It is difficult to follow the paper.

      The Figure 5 legend now states clearly what the error bars represent (biological replicates) and which panels show single-cell measurements. We have checked the panel lettering and legend order for consistency with the flow of the main text.

      Reviewer #3 (Recommendations for the authors):

      Overall, the paper is outstanding, well-written, and insightful.

      A point to address is that there are missing citations on lines 60, 91.

      We have added the missing citations at both locations. We apologize for the omission, which was due to a compilation error. This error has been fixed, and the bibliography has been corrected (now containing 74 references).

    1. Information science [1][2][3] (abbreviated as infosci) is an academic field that is primarily concerned with the analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information.

      In addition to what is said here about information science, I would add that it is a human need, essential for our integral development.

    2. Information science[1][2][3] (abbreviated as infosci) is an academic field that is primarily concerned with the analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information.

      I would add to this statement that information must also be legitimized, traced back to its source, and complemented with its context, so that we can approach it from an optimal perspective.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Authors state, "we identified ETF dehydrogenase (ETFDH) as one of the most dispensable metabolic genes in neoplasia." Surely there are thousands of genes that are dispensable for neoplasia. Perhaps the authors can revise this sentence and similar sentiments in the text.

      We agree with the reviewer and have corrected the text accordingly. Specifically, we rephrased the sentence “Surprisingly, we observed that in contrast to muscle, ETFDH is one of the most non-essential metabolic genes in cancer cells.” to “Surprisingly, we observed that in contrast to muscle, ETFDH is a non-essential gene in acute lymphoblastic leukemia NALM-6 cells.”

      Authors state, "These findings show that ETFDH loss elevates glutamine utilization in the CAC to support mitochondrial metabolism." While elevated glutamine to CAC flux is consistent with the statement that increased glutamine, the authors have not measured the effect of restoring glutamine utilization to baseline on mitochondrial metabolism. Thus, the causality implied by the authors can only be inferred based on the data presented. Indeed, the increased glutamine consumption may be linked to the increase in ROS, as glutamate efflux via system xCT is a major determinant of glutamine catabolism in vitro.

      Indeed. We changed the statement “These findings show that ETFDH loss elevates glutamine utilization in the CAC to support mitochondrial metabolism.” to “Collectively, these data demonstrate that ETF insufficiency in cancer cells remodels mitochondrial metabolism and increases glutamine consumption and anaplerosis.”

      Authors state that the mechanism described is an example of "retrograde signaling". However, the mechanism seems to be related to a reduction in BCAA catabolism, suggesting that the observed effects may be a consequence of altered metabolic flux rather than a direct signaling pathway. The data presented do not delineate whether the observed effects stem from disrupted mitochondrial communication or from shifts in nutrient availability and metabolic regulation.

      Notwithstanding that the term “retrograde” was used to refer to signaling from mitochondria to mTORC1, rather than from mTORC1 to mitochondria [1], we have removed the term “retrograde signaling” throughout the manuscript.

      The authors should discuss which amino acids that are ETFDH substrates might affect mTORC1 activity or consider whether other ETFDH substrates might also affect mTORC1 in their discussion. Along these lines, the authors might consider discussing why amino acids that are not ETFDH substrates are increased upon ETFDH loss.

      Based on the literature, we expect that branched-chain amino acids that are ETFDH substrates (e.g., leucine) are likely to play a major role in activating mTORC1 upon ETFDH abrogation. As expected, these amino acids are among those most highly upregulated in ETFDH-deficient cells (Fig 3A). We have, however, not formally tested the role of branched-chain amino acids in activating mTORC1 in the context of ETFDH disruption. The increase in amino acids that are not metabolized via ETFDH is likely to stem from the global metabolic rewiring of ETFDH-deficient cells and the observed alterations in amino acid uptake (e.g., glutamine; Fig 2F). We discuss this in the revised version of the paper as follows:

      “Several metabolites can be sensed via signaling partners upstream of mTORC1, including leucine, arginine, methionine/SAM, and threonine [2]. Branched-chain amino acids (leucine, isoleucine, and valine), which are among the most highly upregulated metabolites in ETFDH-deficient cells (Fig 3A), serve as ETFDH substrates and have been described to strongly activate mTORC1 [3,4]. Glutamine can also activate mTORC1 through the Arf family of GTPases [5]. Indeed, glutamine can supplement the non-essential amino acid (NEAA) pool through transamination [6] and amino acid uptake [7]. Accordingly, the maintenance of NEAAs that are not ETFDH substrates may be supported by the global metabolic rewiring fueled by enhanced glutamine metabolism in ETFDH-deficient cells. Deciphering the mechanisms leading to the accumulation of specific amino acids and their role in ETFDH-dependent mTORC1 modulation is warranted.”

      Reviewer #2 (Public review):

      The authors would strengthen the paper considerably by adding back catalytically inactive ETFDH to show that the activity of this enzyme is responsible for the increased growth phenotypes and changes in labeling that they observe.

      Based on the Reviewers’ suggestions, we performed these experiments. Herein, we took advantage of the Y304A/G306E ETFDH mutant, which impairs electron transfer from ETF and cannot substitute for the wild-type (WT) gene function in ETFDH-deficient myoblasts [8]. We expressed WT and Y304A/G306E mutant ETFDH in ETFDH KO HCT116 colorectal cancer cells and confirmed that they are expressed at comparable levels (Supplementary Figure 6C). Re-expression of WT ETFDH decreased proliferation, while suppressing mTORC1 signaling and increasing 4E-BP1 levels relative to control (vector-infected) ETFDH KO EV HCT116 cells (Supplementary Figure 6D). In contrast, proliferation rates, mTORC1 signaling, and 4E-BP1 levels remained largely unchanged upon Y304A/G306E ETFDH mutant expression in ETFDH KO HCT116 cells (Supplementary Figure 6D). Similarly, re-expression of WT ETFDH disrupted the bioenergetic phenotype associated with ETFDH loss, in contrast to re-expression of the Y304A/G306E ETFDH mutant, which exhibited a bioenergetic profile similar to the ETFDH KO control (Supplementary Figure 6E-F). Collectively, these findings argue that ETFDH activity is required for its tumor-suppressive effects.

      If nucleotide pool and labeling data are available, or can be obtained readily, this would significantly strengthen the tracing data already obtained.

      We followed the Reviewer’s suggestion and measured nucleotide levels. This revealed that loss of ETFDH results in an increase in steady-state nucleotide pools (Supplementary Figure 2K), consistent with increased aspartate labelling and accelerated tumor growth.

      References

      (1) Morita, M. et al. mTORC1 controls mitochondrial activity and biogenesis through 4EBP-dependent translational regulation. Cell Metab 18, 698-711 (2013). https://doi.org/10.1016/j.cmet.2013.10.001

      (2) Valenstein, M. L. et al. Structural basis for the dynamic regulation of mTORC1 by amino acids. Nature 646, 493-500 (2025). https://doi.org/10.1038/s41586-025-09428-7

      (3) Appuhamy, J. A., Knoebel, N. A., Nayananjalie, W. A., Escobar, J., & Hanigan, M. D. Isoleucine and leucine independently regulate mTOR signaling and protein synthesis in MAC-T cells and bovine mammary tissue slices. J Nutr 142, 484-491 (2012). https://doi.org/10.3945/jn.111.152595

      (4) Herningtyas, E. H. et al. Branched-chain amino acids and arginine suppress MaFbx/atrogin-1 mRNA expression via mTOR pathway in C2C12 cell line. Biochim Biophys Acta 1780, 1115-1120 (2008). https://doi.org/10.1016/j.bbagen.2008.06.004

      (5) Jewell, J. L. et al. Metabolism. Differential regulation of mTORC1 by leucine and glutamine. Science 347, 194-198 (2015). https://doi.org/10.1126/science.1259472

      (6) Tan, H. W. S., Sim, A. Y. L. & Long, Y. C. Glutamine metabolism regulates autophagy-dependent mTORC1 reactivation during amino acid starvation. Nat Commun 8, 338 (2017). https://doi.org/10.1038/s41467-017-00369-y

      (7) Chen, R. et al. The general amino acid control pathway regulates mTOR and autophagy during serum/glutamine starvation. J Cell Biol 206, 173-182 (2014). https://doi.org/10.1083/jcb.201403009

      (8) Herrero Martin, J. C. et al. An ETFDH-driven metabolon supports OXPHOS efficiency in skeletal muscle by regulating coenzyme Q homeostasis. Nat Metab 6, 209-225 (2024). https://doi.org/10.1038/s42255-023-00956-y

    1. Reviewer #2 (Public review):

      The substantially revised paper has increased in clarity and is much more accessible and straightforward than the first version. The analyses are now clearer and support the conclusions better. There are, however, some remaining methodological weaknesses, which in my mind still render the evidence not entirely convincing.

      (1) The temporal autocorrelation concern is not fully convincingly addressed. The temporal autocorrelation curves supplied in the supplements are really helpful, but linearly regressing out the temporal distance from the neural distance clearly does not work, as one can see from the right panel of supplementary Figure 1. If the method had worked correctly the line should have been flat. The analysis however shows that decision trials with a lag > 2 are basically independent - so a simple way to address this is to restrict the RSA analysis to trials with a decision lag of > 2. This analysis would strengthen the paper a lot.

      (2) In the final analysis, the authors use all the trials to make the claim that the hippocampus represents the characters in a shared social space. However, as within-character distances are still included in the analysis, this result could still be driven by the effects of within-character representations that are not shared across characters. A simple way of addressing this concern would be to only include between-character distances in this analysis, making it truly complementary to the previous within-character analysis. It would also be very interesting to compare the within- and between-character analyses in the hippocampus directly.

      (3) Overall, the correction for multiple comparisons in the fMRI and the resulting corrected p-values are not sufficiently explained and documented in the paper. What was exactly permuted in the tests? Was correction applied in a voxel-wise or cluster-wise fashion? If cluster-wise, the cluster-wise p-values need to be reported.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public reviews:

      Reviewer #1 (Public review):

      Summary:

      Schafer et al. tested whether the hippocampus tracks social interactions as sequences of neural states within an abstract social space defined by dimensions of affiliation and power, using a task in which participants engaged in narrative-based social interactions. The findings of this study revealed that individual social relationships are represented by unique sequences of hippocampal activity patterns. These neural trajectories corresponded to the history of trial-to-trial affiliation and power dynamics between participants and each character, suggesting an extended role of the hippocampus in encoding sequences of events beyond spatial relationships.

      The current version provides limited detail on the decoding and clustering analyses, which could be improved in a future revision.

      Strengths:

      (1) Robust Analysis: The research combined representational similarity analysis with manifold analyses, enhancing the robustness of the findings and the interpretation of the hippocampus's role in social cognition.

      (2) Replicability: The study included two independent samples, which strengthens the generalizability and reliability of the results.

      Weaknesses:

      I appreciate the authors for utilizing contemporary machine-learning techniques to analyze neuroimaging data and examine the intricacies of human cognition. However, the manuscript would benefit from a more detailed explanation of the rationale behind the selection of each method and a thorough description of the validation procedures. Such clarifications are essential to understand the true impact of the research. Moreover, refining these areas will broaden the manuscript's accessibility to a diverse audience.

      We thank the reviewer for these comments and have addressed them in various ways.

      First, we removed the spline-based decoding and spectral clustering analyses. As we detail in our response to the recommendations, these approaches were complex and raised legitimate interpretational concerns, making it unclear how they supported our core claims. The revised manuscript now focuses on a set of representational similarity analyses to show representations consistent with social dimension similarity (affiliation vs. power decision trials) and social location similarity (trajectory/map-like coding based on participant choices).

      Second, we expanded the Methods and Results to more clearly explain the analyses, the questions they address, and associated controls and robustness tests. The dimension similarity analysis tests whether hippocampal patterns differentiate affiliation and power decisions in a way consistent with an abstract dimension representation. The location similarity RSAs test whether within-character neural pattern distances scale with Euclidean distance in social space (relationship-specific trajectories), and whether pattern distances across all characters scale with location distances when distances are globally standardized, consistent with a shared map-like coordinate system.

      Third, we emphasize new controls. For the dimension similarity RSA, we test for potential confounds such as word count, text sentiment, and reaction time differences between affiliation and power trials. For the location similarity RSA, we control for temporal distance between trials and show (in the Supplement) that the reported effects cannot be explained by temporal autocorrelation in the fMRI data or by the relationship between temporal distance and behavioral location distance.

      We believe that these changes address the reviewer’s request for clearer rationale and validation.

      Reviewer #2 (Public review):

      Summary:

      Using an innovative task design and analysis approach, the authors set out to show that the activity patterns in the hippocampus related to the development of social relationships with multiple partners in a virtual game. While I found the paper highly interesting (and would be thrilled if the claims made in the paper turned out to be true), I found many of the analyses presented either unconvincing or slightly unconnected to the claims that they were supposed to support. I very much hope the authors can alleviate these concerns in a revision of the paper.

      Strengths & Weaknesses:

      (1) The innovative task design and analyses, and the two independent samples of participants are clear strengths of the paper.

      We thank the reviewer for this comment.

      (2) The RSA analysis is not what I expected after I read the abstract and the title of the results section "The hippocampus represents abstract dimensions of affiliation and power". To me, the title suggests that the hippocampus has voxel patterns, which could be read out by a downstream area to infer the affiliation and power value, independent of the exact identity of the character in the current trial. The presented RSA analysis however presents something entirely different - namely that the affiliation trials and power trials elicit different activity patterns in the area indicated in Figure 3. What is the meaning of this analysis? It is not clear to me what is being "decoded" here and alternative explanations have not been considered. How do affiliation and power trials differ in terms of the length of sentences, complexity of the statements, and reaction time? Can the subsequent decision be decoded from these areas? I hope in the revision the authors can test these ideas - and also explain how the current RSA analysis relates to a representation of the "dimensions of affiliation and power".

      We agree that this analysis needed to be better justified and explained. We have revised the text to clarify that by “represents the interaction decision trials along abstract social dimensions” we mean that hippocampal multivoxel patterns differentiate affiliation and power decisions in a way consistent with the conceptual framework of underlying latent dimensions. The analysis tests one simple prediction of this view – that on average these trial types are separable in the neural patterns. We have added details to the Methods, showing how the affiliation and power trials do not differ in word count or in sentiment, but do differ in their semantics, as assessed by a Large Language Model, as we expect from our task assumptions. Thanks to the reviewer’s comment, we also tested for and found a reaction time difference between affiliation and power trials, that we now control for.

      (3) Overall, I found that the paper was missing some more fundamental and simpler RSA analyses that would provide a necessary backdrop for the more complicated analyses that followed. Can you decode character identity from the regions in question? If you trained a simple decoder for power and affiliation values (using the LLE, but without consideration of the sequential position as used in the spline analysis), could you predict left-out trials? Are affiliation and power represented in a way that is consistent across participants - i.e. could you train a model that predicts affiliation and power from N-1 subjects and then predict the Nth subject? Even if the answer to these questions is "no", I believe that they are important to report for the reader to get a full understanding of the nature of the neural representations in these areas. If the claim is that the hippocampus represents an "abstract" relationship space, then I think it is important to show that these representations hold across relationships. Otherwise, the claim needs to be adjusted to say that it is a representation of a relationship-specific trajectory, but not an abstract social space.

      We appreciate this comment and agree on the value of clear, conceptually simple analyses. To address this concern, we have simplified our main analysis significantly by removing the spline-based analysis and substituting it with a multiple regression representational similarity analysis approach. We test whether within-character neural pattern distances scale with distance in social space (relationship-specific trajectories), and whether pattern distances across all characters scale with location distances when distances are globally standardized. We find evidence for both, consistent with a shared map-like coordinate system.

      We agree that decoding character identity and an across-participant decoding approach could be informative. However, our current task is not well designed for such analyses and as such would complicate the paper. Although we agree that these questions are interesting, they would test questions that are outside the scope of this paper. 

      (4) To determine that the location of a specific character can be decoded from the hippocampal activity patterns, the authors use a sequential analysis in a low-dimensional space (using local linear embedding). In essence, each trial is decoded by finding the pair of two temporally sequential trials that is closest to this pattern, and then interpolating the power/affiliation values linearly between these two points. The obvious problem with this analysis is that fMRI patterns will have temporal autocorrelation and the power and affiliation values have temporal autocorrelation. Successful decoding could just reflect this smoothness in both time series. The authors present a series of control analyses, but I found most of them to not be incisive or convincing and I believe that they (and their explanation of their rationale) need to be improved. For example, the circular shifting of the patterns preserves some of the autocorrelation of the time series - but not entirely. In the shifted patterns, the first and last items are considered to be neighboring and used in the evaluation, which alone could explain the poor performance. The simplest way that I can see is to also connect the first and last item in a circular fashion, even when evaluating the veridical ordering. The only really convincing control condition I found was the generation of new sequences for every character by shuffling the sequence of choices and re-creating new artificial trajectories with the same start and endpoint. This analysis performs much better than chance (circular shuffling), suggesting to me that a lot of the observed decoding accuracy is indeed simply caused by the temporal smoothness of both time series.

      We thank the reviewer for emphasizing this important concern; we agree that we did not sufficiently address this in the initial submission. This concern is one main reason we removed the spline-based analysis and now use regression-based representational similarity analyses in its place. In the revision, we report autocorrelation-related analyses in the supplement, and via controls and additional analysis show that temporal distance (or its square) cannot explain the location-like effects. This substantially improves our ability to interpret the findings.

      (5) Overall, I found the analysis of the brain-behavior correlation presented in Figure 5 unconvincing. First, the correlation is mostly driven by one individual with a large network size and a 6.5 cluster. I suspect that the exclusion of this individual would lead to the correlation losing significance. Secondly, the neural measure used for this analysis (determining the number of optimal clusters that maximize the overlap between neural clustering and behavioral clustering) is new, non-validated, and disconnected from all the analyses that had been reported previously. The authors need to forgive me for saying so, but at this point of the paper, would it not be much more obvious to use the decoding accuracy for power and affiliation from the main model used in the paper thus far? Does this correlate? Another obvious candidate would be the decoding accuracy for character identity or the size of the region that encodes affiliation and power. Given the plethora of candidate neural measures, I would appreciate if the authors reported the other neural measures that were tried (and that did not correlate). One way to address this would have been to select the method on the initial sample and then test it on the validation sample - unfortunately, the measure was not pre-registered before the validation sample was collected. It seems that the correlation was only found and reported on the validation sample?

      We agree that this analysis was too complicated and underconstrained, and thus not convincing. We think that removing this cluster-based analysis is the most conservative response to the reviewer’s concerns and have removed it from the revised paper.

      Recommendations to the authors:

      Reviewer #1 (Recommendations for the authors):

      The manuscript's description of the shuffling analysis performed during decoding is currently ambiguous, particularly concerning the control variables. This ambiguity is present only in the Figure 4 legends and requires a more detailed explanation within the methods section. It is essential to clarify whether the permutation process was conducted within each character's data set or across multiple characters' data sets. If permutations were confined to within-character data, the conclusion would be that the hippocampus encodes context-specific information rather than providing a two-dimensional common space.

      We thank the reviewer for this comment. We have now removed the spline analysis due to these and other problems and have replaced it with representational similarity analyses that are both more rigorous and easier to interpret. We think these analyses allow us to make the claim that the characters are represented in a common space. 

      In the methods, we explain the analyses (pages 23-24, lines 475-500):

      “We also expected the hippocampus to represent the different characters’ changing social locations, which are implicit in the participant’s choices. We used multiple regression searchlight RSA to test whether hippocampal pattern dissimilarity increases with social location distance, based on participant-specific trial-wise beta images where boxcar regressors spanned each trial’s reaction time.”

      “We ran two complementary regression analyses to address two related questions. First, we asked whether the hippocampus represents how a specific relationship changes over time. For this analysis, for each participant and each searchlight, we computed character-specific (i.e., only for same-character trial pairs) correlation distances between trial-wise beta patterns and Euclidean distances between the social location behavioral coordinates. Distances were z-scored within character trial pairs to isolate character-specific changes. The second analysis asked whether there is a common map-like representation, in which all trials, regardless of relationship, are represented in a shared coordinate system. Here, we included all trial pairs and z-scored the distances globally. For both regression analyses, we included control distances to account for possible confounds. To account for generic time-related changes, we controlled for absolute scan-time difference, as this correlated with location distance across participants (see Temporal autocorrelation of hippocampal beta patterns in the supplement). Although the square of this temporal distance did not explain any additional variance in behavioral distances, we ran a robustness analysis including both temporal distance and its square and saw qualitatively the same clusters with similar effect sizes. As such, we report the main analysis only. We included a binary dimension difference (0 = trial pairs of different dimensions, 1 = trial pairs of the same dimension) to ensure effects could not be explained by dimension-related effects. In the group-level model, we controlled for sample and the average reaction time difference between affiliation and power decisions.”
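      To make the mechanics of this regression RSA concrete, here is a minimal, self-contained sketch of the per-searchlight computation as we understand it from the description above. This is not the authors' code; the data are random stand-ins and all variable names are hypothetical, and the dimension-difference control regressor is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all hypothetical): trial-wise beta patterns from one
# searchlight, behavioral social-space coordinates (affiliation, power),
# and scan times for each trial.
n_trials = 60
betas = rng.normal(size=(n_trials, 50))
coords = np.cumsum(rng.normal(size=(n_trials, 2)), axis=0)
times = np.arange(n_trials, dtype=float)

# Indices of the unique trial pairs (upper triangle, no diagonal).
i, j = np.triu_indices(n_trials, k=1)

# Pairwise distances over the same trial pairs: neural (correlation
# distance), behavioral (Euclidean), temporal (absolute time difference).
neural_d = 1.0 - np.corrcoef(betas)[i, j]
location_d = np.linalg.norm(coords[i] - coords[j], axis=1)
temporal_d = np.abs(times[i] - times[j])

def z(v):
    # z-score a distance vector (the "global" standardization variant)
    return (v - v.mean()) / v.std()

# Multiple regression: does social-location distance explain neural pattern
# distance above and beyond temporal distance? The location coefficient is
# the per-searchlight statistic that would be carried to the group level.
X = np.column_stack([np.ones_like(neural_d), z(location_d), z(temporal_d)])
coefs, *_ = np.linalg.lstsq(X, z(neural_d), rcond=None)
location_beta = coefs[1]
```

The within-character variant would follow the same logic but keep only same-character trial pairs and z-score the distances within each character's pairs before fitting.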

      In the results, we describe the results and our interpretation (pages 11-12, lines 185-208):

      “We have shown that the left hippocampus represents the affiliation and power trials differently, consistent with an abstract dimensional representation. Does it also represent the changing social coordinates of each character? To test this, we used a multiple-regression RSA searchlight to test whether left hippocampus patterns represent the characters’ changing social locations across interactions (see Figure 3). We restricted the distances to those from trial pairs from the same character and standardized the distances within character (see Figure 3B-D). We controlled for temporal distance to ensure the effect was not explainable by the time between trials, and for whether the trials shared the same underlying dimension (affiliation or power; see Location similarity searchlight analyses for more details). At the group level, we controlled for sample and the average reaction time difference between affiliation and power trials. Using the same testing logic as the dimension similarity analysis, we first tested our hypothesis in the bilateral hippocampus and found widespread effects in both the left (peak voxel MNI x/y/z = -35/-22/-15, cluster extent = 1470 voxels) and right (peak voxel MNI x/y/z = 37/-19/-14, cluster extent = 1953 voxels) hemispheres. The whole-brain searchlight analysis revealed additional clusters in the left putamen (-27/-3/14, cluster extent = 131 voxels) and left posterior cingulate cortex (-10/-28/41, cluster extent = 304 voxels).”

      “We then asked a second, complementary question: does the hippocampus represent all interactions, across characters, within a shared map? To test for this map-like structure, we repeated the analysis but now included all trial pairs, z-scoring distances globally rather than within character (Figure 3E-F). The remainder of the procedure followed the same logic as the preceding analysis. The hippocampus analysis revealed an extensive right hippocampal cluster (27/27/-14, cluster extent = 1667 voxels). The whole-brain analysis did not show any significant clusters.”

      We also describe the results in the discussion (page 12, lines 220-226): 

      “Then, we show that the hippocampus tracks the changing social locations (affiliation and power coordinates), above and beyond the effects of dimension or time; the hippocampus seemed to reflect both the changing within-character locations, tracking their locations over time, and locations across characters, as if in a shared map. Thus, these results suggest that the hippocampus does not just encode static character-related representations but rather tracks relationship changes in terms of underlying affiliation and power.”

      The manuscript's description of the decoding analysis is unclear regarding the variability of the decoded positions. The authors appear to decode the position of a character along a spline, which raises the question of whether this position correlates with time, since characters are more likely to be located further from the center in later trials. There is a concern that the decoded position may not solely reflect the hippocampal encoding of spatial location, but could also be influenced by an inherent temporal association. Given that a character's position at time t is likely to be similar to its positions at t−1 and t+1, it is crucial that the authors clearly articulate their approach to separating spatial representation from temporal autocorrelation. While this issue may have been addressed in the construction of the test set, the manuscript does not seem to adequately explain how such biases were mitigated in the training set.

      We agree that temporal confounding needs to be better accounted for, as our claims depend on space-like signals being separable from time-like ones. We address this in several ways in the revised manuscript.

      First, we emphasize that this is a narrative-based task, where temporal structure is relevant. As such, our analyses aim to demonstrate that effects go beyond simple temporal confounds, like trial order or time elapsed.

      Despite the temporal structure of the task, the decisions for the same character are spaced in time and interleaved with other characters’ decisions, reducing the chance that a simple temporal confound could explain trajectory-related effects. We now describe the task better in the revised methods (page 16, lines 314-318):

      “All six characters’ decision trials are interleaved with one another and with narrative slides. On average, after a decision trial for a given character, participants view ~11 narrative slides and complete ~3 decisions for other characters before returning to that same character, such that each character’s choices are separated by an average of ~20 seconds (range 12 seconds to 10 min).”

      To address temporal autocorrelation in the fMRI time series, we used SPM’s FAST algorithm. Briefly, FAST models temporal autocorrelation as a weighted combination of candidate correlation functions, using the best estimate to remove autocorrelated signal.

      We also now report the temporal autocorrelation profile of the hippocampal beta series in the supplement, including (pages 29-31, lines 593-656):

      “The Social Navigation Task is a narrative-based task, where the relationships with characters evolve over time; trial pairs that are close in time may have more similar fMRI patterns for reasons unrelated to social mapping (e.g., slow drift). It is important to account for the role of time in our analyses, to ensure effects go beyond simple temporal confounds, like the time between decision trials. To aid in this, we quantified how fMRI signals change over time using a pattern autocorrelation function across decision trial lags. We defined the left and right hippocampus and the left and right intracalcarine cortex using the Harvard-Oxford atlas and thresholded them at 50% probability. We chose intracalcarine cortex as an early visual control region that largely corresponds to primary visual cortex (V1), as it is likely to be driven by the visually presented narrative. We used the same trial-wise beta images as in the location similarity RSA (boxcar regressors spanning each decision trial’s reaction time). For each participant and region-of-interest (ROI), we extracted the decision trial-by-voxel beta matrix and quantified three kinds of temporal dependence: beta autocorrelation, multivoxel pattern correlation, and multivoxel pattern correlation after regressing out temporal distance.”

      “To estimate the temporal autocorrelation of the trial-wise beta values, we treated each voxel’s beta values as a time series across trials and measured how much a voxel’s response on one trial correlated (Pearson) with its response on previous trials. We averaged these voxel-wise autocorrelations within each ROI. At one trial apart (lag 1), both the hippocampus and V1 showed small positive autocorrelations, indicating modest trial-to-trial carryover in response amplitude (see Supplemental Figure 1) that was approximately 0 by three trials apart.”

      “Because our representational similarity analyses depend on trial-by-trial pattern similarity, we also estimated how multivoxel patterns were autocorrelated over time. For each lag, we computed the Pearson correlation between each trial’s voxelwise pattern and the pattern from the trial that many trials earlier, then averaged those correlations to obtain a single autocorrelation value for that lag. At one trial apart, both regions showed positive autocorrelation, with V1 having greater autocorrelation than the hippocampus; pattern correlations between trials 3 or 4 apart decreased across participants, settling into low but positive values. Then, for each participant and ROI, we regressed out the effect of absolute trial onset differences from all pairwise pattern correlations, to mirror the effect of controlling for these temporal distances in regressions. After removing this temporal distance component, the short-lag pattern autocorrelation dropped substantially in both regions. The similarity in autocorrelation profiles between the two regions suggests that significant similarity effects in the hippocampus are unlikely to be driven by generic temporal autocorrelation.”
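The lagged multivoxel pattern-similarity measure described above can be illustrated with a minimal sketch, assuming a trials-by-voxels matrix of beta estimates. The function name and NumPy implementation are illustrative, not our actual analysis code.

```python
import numpy as np

def pattern_autocorrelation(betas, max_lag=5):
    """Mean multivoxel pattern correlation at each trial lag.

    `betas` is an (n_trials, n_voxels) array of trial-wise betas.
    For each lag, correlate each trial's voxelwise pattern with the
    pattern `lag` trials earlier, then average across trial pairs.
    Illustrative sketch only.
    """
    n_trials = betas.shape[0]
    acf = []
    for lag in range(1, max_lag + 1):
        rs = [np.corrcoef(betas[t], betas[t - lag])[0, 1]
              for t in range(lag, n_trials)]
        acf.append(np.mean(rs))
    return np.array(acf)
```

In the full analysis, this profile would be computed per participant and ROI, before and after regressing temporal distance out of the pairwise pattern correlations.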

      “Relationship between behavioral location distance and temporal distance”

      “We also quantified how temporal distances between trials relate to their behavioral location distances, participant by participant. Our dimension similarity analysis controls for temporal distance between trials by design (see Social dimension similarity searchlight analysis), but our location similarity analysis does not. To decide on covariates to include in the analysis, we tested whether temporal distances can explain behavioral location distances. For each participant, we computed the correlations between trial pairs’ Euclidean distances in social locations and their linear temporal distances (“linear”) and the temporal distances squared (“quadratic”), to test for nonlinear effects. We then summarized the correlations using one-sample t-tests. The linear relationship was statistically significant (t<sub>49</sub> = 12.24, p < 0.001), whereas the quadratic relationship was not (t<sub>49</sub> = -0.55, p = 0.586). Similarly, in participant-specific regressions with both linear and quadratic temporal distances, the linear effect was significant (t<sub>49</sub> = 5.69, p < 0.001) whereas the quadratic effect was not (t<sub>49</sub> = 0.20, p = 0.84). Based on this, we included linear temporal distances as a covariate in our location similarity analyses (see Location similarity searchlight analyses), and verified that adding a quadratic temporal distance covariate does not alter the results. Thus, the reported location-related pattern similarity effects go beyond what can be explained by temporal distance alone.”
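The participant-level covariate-selection step described above (regressing location distances on linear and quadratic temporal distances) could be sketched roughly as follows; the function name is hypothetical and this is not our production code.

```python
import numpy as np
from scipy import stats

def temporal_distance_effects(loc_dists, time_dists):
    """Regress one participant's pairwise location distances on
    standardized linear and quadratic temporal distances.

    Returns the (linear, quadratic) coefficients, which would then be
    collected across participants and tested with one-sample t-tests.
    Illustrative sketch only.
    """
    time_dists = np.asarray(time_dists, dtype=float)
    X = np.column_stack([
        np.ones_like(time_dists),          # intercept
        stats.zscore(time_dists),          # linear temporal distance
        stats.zscore(time_dists ** 2),     # quadratic temporal distance
    ])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(loc_dists, float), rcond=None)
    return coefs[1], coefs[2]
```

At the group level, the per-participant coefficients would be tested against zero (e.g., with `scipy.stats.ttest_1samp`) to decide which temporal covariates to retain.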

      How was the free parameter of spectral clustering determined, if there is one?

      The interpretation of the number of hippocampal activity clusters is ambiguous. It is suggested that this number could fluctuate due to unique activity patterns or the fit to behaviorally defined trajectories. A lower number of clusters might indicate either a noisier or less distinct representation, raising the question of the necessity and interpretability of such a complex analysis. This concern is compounded by the potential sensitivity of the clustering to the variance in Euclidean distances of each trial's position relative to the center. If a character's position is consistently near the center, this could artificially reduce the perceived number of clusters. Furthermore, the manuscript should address whether there is any correlation between the number of clusters and behavioral performance. Specifically, what are the implications if participants are able to perform the task adequately with a smaller number of distinct hippocampal representation states?

      The rationale for conducting both cluster analysis and position decoding as separate analyses remains unclear. While cluster analysis can corroborate the findings of position decoding, it is not apparent why the authors chose to include trials across characters for cluster analysis but not for decoding analysis. An explanation of the reasoning behind this methodological divergence would help in understanding the distinct contributions of each analysis to the study's findings.

      The paper by Cohen et al. (1997), which provides the questionnaire for measuring the social network index, is not cited in the references. Upon reviewing the questionnaire that the author may have used, it appears that the term "social network size" does not refer to the actual size but to a score or index derived from the questionnaire responses. It may be more appropriate to replace the term "size" with a different term to more accurately reflect this distinction.

      Thank you for seeking these clarifications. Given the complexity of this analysis, we have decided to drop it to focus instead on our dimension and location representational similarity analysis results.

      Reviewer #2 (Recommendations for the authors):

      How did the participants' decisions on previous trials influence the future trials that the subjects saw? If the different participants were faced with different decision trials, then how did you compare their decision? If two participants made the same decisions, would they have seen exactly the same sequence of trials (see point X on how the trial sequence was randomized).

      All participants experience the same narrative, with the same decisions (i.e., the same available options); their choices (i.e., the options they select) are what implicitly shape each character’s affiliation and power locations, and thus each character’s trajectory. In other words, the narrative is fixed; what changes is the social coordinates assigned to each trial’s outcome depending on the participant’s choice of how to interact from the two narrative options. This means that we can meaningfully compare participants' neural patterns, given that every participant received the same text and images throughout.

      We have now added details on the narrative structure, replacing more ambiguous statements with a clearer description (page 16, lines 309-318):

      “The sequence of trials, including both narrative and decision trials, was fixed across participants; all that differs are the choices that the participants make. Narrative trials varied in duration, depending on the content (range 2-10 seconds), but were identical across participants. Decision trials always lasted 12 seconds, with two options presented until the participant made a choice, after which a blank screen was presented for the remainder of the duration. All six characters’ decision trials are interleaved with one another, and with the narrative slides. On average, after a decision trial for a given character, participants view ~11 narrative slides and complete ~3 decisions for other characters before returning to another decision with the same character, such that each character’s choices are separated by an average of ~20 seconds (ranging from 12 seconds to 10 min).”

      Figure 2B: I assume that "count" is "count of participants"? It would be good to indicate this on the axis/caption.

      Thank you for noting this. We have now removed this figure to improve the clarity of our figures. 

      “We have shown that the hippocampus represents the interaction decision trials along abstract social dimensions, but does it track each relationship's unique sequence of abstract social coordinates?” Please clarify what you mean by “represents the interaction decision trials”.

      By “represents the interaction decision trials along abstract social dimensions”, we mean that when the participant makes a choice during the social interactions the hippocampal patterns represent the current social dimension of the choice (affiliation vs power). In other words, the hippocampal BOLD patterns differentiate affiliation and power decisions, consistent with our hypothesis of abstract social dimension representation in the hippocampus. We have clarified this (page 11, lines 185-187):

      “We have shown that the left hippocampus represents the affiliation and power trials differently, consistent with an abstract dimensional representation.”

      Page 8: "Hippocampal sequences are ordered like trajectories": It is not entirely clear to me what is meant by the split midpoint. Is this the midpoint of the piece-wise linear interpolation between two points, or simply the mean of all piecewise splines from one character? If the latter, is the null model the same as simply predicting the mean affiliation and power value for this character? If yes, please clarify and simplify this for the reader.

      Page 8: "Hippocampal sequences track relationship-specific paths". First, I was misled by the "relationship-specific". I first understood this to mean that you wanted to test whether two relationships (i.e. the identity of the partner) had different representations in Hippocampus, even if the power/affiliation trajectories are the same. I suggest changing the title of this section.

      The analysis in this section also breaks any temporal autocorrelation of measured patterns - so I am not sure if this is a strong analysis that should be interpreted at all. This analysis seems to not address the claim and conclusion that is drawn from it. I assume that the random trajectories have different choices and different affiliation/power values than the true trajectories. So the fact that the true trajectories can be better decoded simply shows that either choices or affiliation and power (or both) are represented in the neural code - but not necessarily anything beyond this.

      Page 9: "Neural trajectories reflect social locations, not just choices". The motivation of this analysis is not clear to me. As I understand this analysis, both social location and choices are changed from the real trajectories. How can it then show that it reflects social locations, not just the choices?

      Figure 4 caption: "on the -based approximation" Is there a missing "point"-[based] here?

      We agree with the reviewer that this analysis is hard to interpret and does not adequately address concerns regarding temporal autocorrelation, and as such we have removed it from the manuscript. We describe the new results that include controlling for temporal distance between trials (pages 11-12, lines 185-208):

      “We have shown that the left hippocampus represents the affiliation and power trials differently, consistent with an abstract dimensional representation. Does it also represent the changing social coordinates of each character? To test this, we used a multiple-regression RSA searchlight to test whether left hippocampus patterns represent the characters’ changing social locations across interactions (see Figure 3). We restricted the distances to those from trial pairs from the same character and standardized the distances within character (see Figure 3B-D). We controlled for temporal distance to ensure the effect was not explainable by the time between trials, and for whether the trials shared the same underlying dimension (affiliation or power; see Location similarity searchlight analyses for more details). At the group level, we controlled for sample and the average reaction time difference between affiliation and power trials. Using the same testing logic as the dimensionality similarity analysis, we first tested our hypothesis in the bilateral hippocampus and found widespread effects in both the left (peak voxel MNI x/y/z = -35/-22/-15, cluster extent = 1470 voxels) and right (peak voxel MNI x/y/z = 37/-19/-14, cluster extent = 1953 voxels) hemispheres. The whole-brain searchlight analysis revealed additional clusters in the left putamen (-27/-3/14, cluster extent = 131 voxels) and left posterior cingulate cortex (-10/-28/41, cluster extent = 304 voxels).”

      “We then asked a second, complementary question: does the hippocampus represent all interactions, across characters, within a shared map? To test for this map-like structure, we repeated the analysis but now included all trial pairs, z-scoring distances globally rather than within character (Figure 3E-F). The remainder of the procedure followed the same logic as the preceding analysis. The hippocampus analysis revealed an extensive right hippocampal cluster (27/27/-14, cluster extent = 1667 voxels). The whole-brain analysis did not show any significant clusters.”

      We emphasize that the results are robust to the inclusion of temporal distance squared, in the methods (pages 23-24, lines 493-496):

      “Although the square of this temporal distance did not explain any additional variance in behavioral distances, we ran a robustness analysis including both temporal distance and its square and saw qualitatively the same clusters with similar effect sizes.”

      Page 8: last paragraph: The text sounds like you have already shown that you can decode character identity from the patterns - but I do not believe you have it this point. I would consider this would be an interesting addition to the paper, though.

      This section has been removed, and we have been careful not to imply this in the current version of the manuscript. While we agree a character identity decoding would enrich our argument, we do not believe our task is well-suited to capture a character identity effect. Each character has only 12 decision trials, and these trials are partially clustered in time; this is one of the temporal autocorrelation issues that we thank the reviewers for pushing us to consider in more detail. Dimension and location patterns, on the other hand, are more natural to analyze in our task, especially in representational similarity analyses that test whether the relevant differences scale with neural distances.

      Page 14ff: Why is "Analysis section" not part of "Materials and Methods"? I believe adding the analysis after a careful description of the methods would improve the clarity of this section.

      We agree with the reviewer and have now consolidated these two sections.

      Two or three examples of Affiliation and Power decision trials should be provided, so the reader can form a more thorough understanding of how these dimensions were operationalized. For the RSA analysis, it is important to consider other differences between these two types of trials.

      We agree that adding examples will clarify the operationalization of these dimensions. We now include example affiliation and power trials in a table (pages 17-18).

      We thank the reviewer for noting the need to rule out alternative hypotheses; we have added several such tests. Affiliation and power trials were not different in word count (page 17, lines 329-332):

      “To ensure that any observed neural or behavioral differences were not confounded by trivial features of the text, we tested for differences between the affiliation and power trials (where the two options are concatenated). There were no differences in word count (affiliation average = 26.6, power average = 25.6; t-test p = 0.56).”

      They were also not different in their sentiment, as assessed by a Large Language Model (LLM) analysis (page 17, lines 332-335): 

      “The text’s sentiment also did not differ between these trial types (t-test p = 0.72), as quantified by comparing sentiment compound scores (from most negative, −1, to most positive, +1), using a Large Language Model (LLM) specialized for sentiment analysis [26].”

      The affiliation and power trials were different in terms of semantic content, consistent with our assumptions (page 17, lines 337-347):

      “Our framework assumes that affiliation and power trials differ in their semantic content–that is, in the conceptual meaning of the text, beyond word count or sentiment. To test this assumption, we used an LLM-based semantic embedding analysis. Each decision trial was embedded into a semantic vector. We then measured the cosine similarity between pairs of trials and calculated the difference between average within-dimension similarity (affiliation-affiliation and power-power comparisons) and average between-dimension similarity (affiliation-power comparisons) and assessed its statistical significance with permutation testing (1,000 shuffles of trial labels). As expected, decision trials of the same dimension were more similar to each other than trials of different dimension, across multiple LLMs (OpenAI’s text-embedding-3-small [27]: similarity difference = 0.041, p < 0.001; all-MiniLM-L12-v2 [28]: similarity difference = 0.032, p < 0.001).”
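A minimal sketch of this within- vs. between-dimension similarity computation with a label-permutation test is shown below; the function name and implementation details are illustrative, not our actual analysis code.

```python
import numpy as np

def similarity_difference(embeddings, labels, n_perm=1000, seed=0):
    """Within- minus between-dimension mean cosine similarity,
    with a label-permutation p-value.

    `embeddings` is (n_trials, dim); `labels` gives each trial's
    dimension (e.g., "affiliation" or "power"). Illustrative sketch.
    """
    labels = np.asarray(labels)
    # Normalize rows so the dot product equals cosine similarity.
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = E @ E.T
    iu = np.triu_indices(len(labels), k=1)  # unique trial pairs
    pair_sims = sim[iu]

    def diff(lab):
        same = lab[iu[0]] == lab[iu[1]]
        return pair_sims[same].mean() - pair_sims[~same].mean()

    observed = diff(labels)
    rng = np.random.default_rng(seed)
    null = np.array([diff(rng.permutation(labels)) for _ in range(n_perm)])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

The same routine can be run on embeddings from different models to check that the within-dimension advantage replicates across LLMs.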

      The affiliation and power trials were different in average reaction time. To control for this difference in the dimension RSA analysis, we added each participant’s absolute value reaction time difference between the trial types as a covariate. The results were nearly identical to what they were before. We updated the text to reflect this new control (page 23, lines 471-474):

      “However, there was a significant difference in the average reaction time between affiliation and power decisions across participants (t<sub>49</sub> = 6.92, p < 0.001; affiliation mean = 4.92 seconds (s), power mean = 4.51 s), so we controlled for this in the group-level analysis.”

      The exact implementation and timing of the behavioral tasks should be described better. How many narrative trials were intermixed with the decision trials? Which characters were they assigned to? How was the sequence of trials determined? Was it fixed across participants, or randomized?

      We agree that additional details are helpful. In the Methods, we now describe this with more detail (page 16, lines 301-318):

      “There are two types of trials: “narrative” trials where background information is provided or characters talk or take actions (a total of 154 trials), and “decision” trials where the participant makes decisions in one-on-one interactions with a character that can change the relationship with that character (a total of 63 trials). On each decision, participants used a button response box to select between the two options. The options (1 or 2, assigned to the index and middle fingers) and choice directions (+/-1 arbitrary unit on the current dimension) were counterbalanced.”

      “The sequence of trials, including both narrative and decision trials, was fixed across participants; all that differs are the choices that the participants make. Narrative trials varied in duration, depending on the content (range 2-10 seconds), but were identical across participants. Decision trials always lasted 12 seconds, with two options presented until the participant made a choice, after which a blank screen was presented for the remainder of the duration. All six characters’ decision trials are interleaved with one another, and with the narrative slides. On average, after a decision trial for a given character, participants view ~11 narrative slides and complete ~3 decisions for other characters before returning to another decision with the same character, such that each character’s choices are separated by an average of ~20 seconds (ranging from 12 seconds to 10 min).”

      What is the exact timing of trials during fMRI acquisition - i.e. how long were the trials, what was the ITI, were there long phases of rest to determine the resting baseline? These are all important factors that will determine the covariance between regressors and should be reported carefully. Ideally, I would like to see the trial-by-trial temporal auto-correlation structure across beta-weights to be reported.

      We thank the reviewer for asking for this clarification. We have added the following text to clarify the trial timing (page 16, lines 314-318):

      “All six characters’ decision trials are interleaved with one another and with narrative slides. On average, after a decision trial for a given character, participants view ~11 narrative slides and complete ~3 decisions for other characters before returning to that same character, such that each character’s choices are separated by an average of ~20 seconds (range 12 seconds to 10 min).”

      We now describe the temporal autocorrelation patterns in the supplement, including how we decided on how to control for temporal distance in representational similarity analyses (pages 29-31, lines 593-656):

      “The Social Navigation Task is a narrative-based task, where the relationships with characters evolve over time; trial pairs that are close in time may have more similar fMRI patterns for reasons unrelated to social mapping (e.g., slow drift). It is important to account for the role of time in our analyses, to ensure effects go beyond simple temporal confounds, like the time between decision trials. To aid in this, we quantified how fMRI signals change over time using a pattern autocorrelation function across decision trial lags. We defined the left and right hippocampus and the left and right intracalcarine cortex using the Harvard-Oxford atlas and thresholded them at 50% probability. We chose intracalcarine cortex as an early visual control region that largely corresponds to primary visual cortex (V1), as it is likely to be driven by the visually presented narrative. We used the same trial-wise beta images as in the location similarity RSA (boxcar regressors spanning each decision trial’s reaction time). For each participant and region-of-interest (ROI), we extracted the decision trial-by-voxel beta matrix and quantified three kinds of temporal dependence: beta autocorrelation, multivoxel pattern correlation, and multivoxel pattern correlation after regressing out temporal distance.”

      “To estimate the temporal autocorrelation of the trial-wise beta values, we treated each voxel’s beta values as a time series across trials and measured how much a voxel’s response on one trial correlated (Pearson) with its response on previous trials. We averaged these voxel-wise autocorrelations within each ROI. At one trial apart (lag 1), both the hippocampus and V1 showed small positive autocorrelations, indicating modest trial-to-trial carryover in response amplitude (see Supplemental Figure 1) that was approximately 0 by three trials apart.”

      “Because our representational similarity analyses depend on trial-by-trial pattern similarity, we also estimated how multivoxel patterns were autocorrelated over time. For each lag, we computed the Pearson correlation between each trial’s voxelwise pattern and the pattern from the trial that many trials earlier, then averaged those correlations to obtain a single autocorrelation value for that lag. At one trial apart, both regions showed positive autocorrelation, with V1 having greater autocorrelation than the hippocampus; pattern correlations between trials 3 or 4 apart decreased across participants, settling into low but positive values. Then, for each participant and ROI, we regressed out the effect of absolute trial onset differences from all pairwise pattern correlations, to mirror the effect of controlling for these temporal distances in regressions. After removing this temporal distance component, the short-lag pattern autocorrelation dropped substantially in both regions. The similarity in autocorrelation profiles between the two regions suggests that significant similarity effects in the hippocampus are unlikely to be driven by generic temporal autocorrelation.”

      “Relationship between behavioral location distance and temporal distance”

      “We also quantified how temporal distances between trials relate to their behavioral location distances, participant by participant. Our dimension similarity analysis controls for temporal distance between trials by design (see Social dimension similarity searchlight analysis), but our location similarity analysis does not. To decide on covariates to include in the analysis, we tested whether temporal distances can explain behavioral location distances. For each participant, we computed the correlations between trial pairs’ Euclidean distances in social locations and their linear temporal distances (“linear”) and the temporal distances squared (“quadratic”), to test for nonlinear effects. We then summarized the correlations using one-sample t-tests. The linear relationship was statistically significant (t<sub>49</sub> = 12.24, p < 0.001), whereas the quadratic relationship was not (t<sub>49</sub> = -0.55, p = 0.586). Similarly, in participant-specific regressions with both linear and quadratic temporal distances, the linear effect was significant (t<sub>49</sub> = 5.69, p < 0.001) whereas the quadratic effect was not (t<sub>49</sub> = 0.20, p = 0.84). Based on this, we included linear temporal distances as a covariate in our location similarity analyses (see Location similarity searchlight analyses), and verified that adding a quadratic temporal distance covariate does not alter the results. Thus, the reported location-related pattern similarity effects go beyond what can be explained by temporal distance alone.”

    1. Briefing : Feuille de Route de l'Éducation Nationale pour les Droits et le Bien-être des Enfants

      Synthèse

      Ce document synthétise les axes stratégiques et les constats chiffrés présentés par Édouard Geffray, ministre de l'Éducation nationale, lors de son audition devant la délégation aux droits des enfants.

      L'école y est définie par deux fonctions cardinales : instruire et protéger. Les priorités ministérielles s'articulent autour de trois piliers majeurs : la santé mentale des élèves, la lutte contre le harcèlement scolaire et la sécurisation des parcours pour les enfants les plus vulnérables (situation de handicap ou sous protection).

      Le ministre souligne une situation alarmante de la santé mentale des jeunes, exacerbée par les usages numériques, et propose des mesures systémiques : déploiement du programme "Phare", interdiction du portable au lycée, et création d'un cadre de "scolarité protégée".

      Malgré une baisse démographique drastique (un million d'élèves en moins d'ici 2029), le ministère affirme vouloir maintenir une trajectoire de recrutement pour les personnels médico-sociaux afin de répondre à l'explosion des besoins de détection et d'orientation.

      --------------------------------------------------------------------------------

      I. Mental Health and the Fight against School Bullying: A Matter of Absolute Safety

      The minister places mental health among his three absolute priorities, citing sharply rising indicators of psychological distress.

      Current situation and key figures

      Risk of depression: 14% of collège (middle school) pupils and 15% of lycée pupils are at significant risk.

      Suicidal ideation: 24% of lycée pupils report having had suicidal thoughts in the past 12 months.

      Bullying: About 5% of pupils (on average one pupil per class) are victims of bullying each year.

      Emergency admissions: An 80% increase in emergency-room visits for suicidal intent or attempts since the COVID-19 crisis.

      Response strategies

      De-anonymized questionnaires: The annual bullying questionnaire (completed from CE2 to Terminale) now allows pupils to give their identity at the end of the form so that the teaching team can follow up with them.

      Staff training: The goal is to train two "sentinel" staff members per school to identify and refer pupils. The current average is 1.6 trained staff members.

      "Coupe-file" (fast-track) scheme: A mechanism being finalized with the Ministry of Health will guarantee school nurses and doctors rapid appointments with Medico-Psychological Centres (CMP) or community practitioners, avoiding waiting times of 3 to 6 months.

      Legal arsenal: The law of 2 March 2022 made bullying a criminal offence. 10,000 cases have been registered by prosecutors since 2022. The decree of 16 August 2023 now allows a pupil who commits bullying or intentional violence to be transferred to another school.

      --------------------------------------------------------------------------------

      II. Child Protection and "Protected Schooling"

      Schools are the leading source of child-protection alerts (informations préoccupantes, IP) and Article 40 reports in France.

      Reports: The number of child-protection alerts issued by schools has risen from 50,000 to 80,000 in two years. A national guide standardizing these alerts is about to be published.

      "Scolarité Protégée" (Protected Schooling) circular: To be published shortly, it aims to guarantee educational continuity for children in the care of Child Welfare Services (ASE), 70% of whom currently leave the system without a diploma. It provides for:

      ◦ Individual follow-up by departmental education services (DASEN).  

      ◦ Targeted academic support to prevent disruptions caused by changes of group homes or foster families.  

      ◦ Strengthened support for career guidance and self-esteem.

      --------------------------------------------------------------------------------

      III. Inclusive Schooling and the Evolution of Support

      The minister distinguishes "unsupported" pupils (who have an educational placement but are awaiting human assistance) from pupils "without a solution" (excluded from the system for lack of a suitable structure).

      From compensation to accessibility: The ministry wants to move away from a model based solely on systematic human assistance (AESH support staff) and instead prioritize pedagogical and material accessibility. The goal is to avoid "outsourcing" disability within the classroom.

      School Support Hubs (Pôles d'Appui à la Scolarité, PAS): Deployed to bring medico-social services directly into schools and to smooth transitions between mainstream settings and specialized structures.

      Needs: 42,000 pupils were reportedly still awaiting support after the autumn (Toussaint) break, despite the creation of 1,200 additional AESH posts for 2026.

      --------------------------------------------------------------------------------

      IV. Digital Technology and Education on Emotional, Relational and Sexual Life (EVARS)

      Regulating screen use

      The minister defends a strict ban on mobile phones in lycées (planned for 2026), justified by cognitive and public-health concerns:

      Scientific correlation: Pupils' psychological deterioration is proportional to screen consumption (the risk of anxiety and depressive disorders rises from 30% to 60% among heavy users).

      Awareness before content: The minister wants to restore the primacy of education about digital risks before massive exposure to violent or false content.

      Education on emotional, relational and sexual life (EVARS)

      Obligation: The three annual sessions are presented as "non-negotiable", in both public schools and state-contracted private schools.

      Findings: 15% of girls and 12% of boys in collège report having experienced some form of sexual violence.

      Rollout: As of 31 December, 66% of primary schools and 48% of public collèges had held at least one session.

      Teacher training: The ministry acknowledges the need to protect staff who, sometimes being former victims themselves, could be traumatized by delivering these lessons.

      --------------------------------------------------------------------------------

      V. Institutional Governance and Demographic Challenges

      Managing human resources

      The education system faces an unprecedented demographic decline:

      Data: A loss of one million pupils between 2019 and 2029 in primary education. A cohort of 200,000 pupils "disappears" every four years.

      Adjustments: The minister justifies the planned cuts of 4,000 teaching posts by this decline, while seeking to gradually increase medico-social staffing (300 to 500 posts per year) to offset the surge in mental-health needs.

      Priority education (REP/REP+)

      The minister concedes that the current map, frozen since 2015, is obsolete. He nonetheless rules out a revision before 2027, for two reasons:

      1. Technical: The consultation process with local authorities and unions takes 15 to 18 months.

      2. Democratic: He considers that this debate belongs to the next presidential election and refuses to "lock in" a map that would bind the future government.

      Creation of a children's rights defender

      A deputy to the National Education ombudswoman (médiatrice) will be specifically responsible for child protection. Their mission will be to handle disputes between school and out-of-school care so as to ensure "door-to-door" safety, and to produce an annual report dedicated to these issues.

      --------------------------------------------------------------------------------

      VI. Summary Table: Mental Health and Well-being Figures

      | Indicator | Statistic |
      | --- | --- |
      | Pupils who are victims of bullying | 5% (stable from CE2 to Terminale) |
      | Lycée pupils with suicidal thoughts | 24% |
      | Emergency-room visits (suicide) | +80% since COVID |
      | Child-protection alerts (schools) | 80,000 / year (up by 30,000) |
      | Leaving ASE care without a diploma | 70% |
      | EVARS coverage (primary schools) | 66% (as of 31/12) |
      | Pupils awaiting AESH support | 42,000 (autumn 2025) |

    1. Reviewer #1 (Public review):

      Summary:

      The manuscript by Mengxing et al. reports an assessment of three first-order thalamic nuclei (auditory, visual, somatosensory) in a 3 x 2 factorial design to test for the specificity of responses in first-order thalamic nuclei to linguistic processing, particularly in the left hemisphere. The conditions are reading, speech production, and speech comprehension, each with a respective control condition. The authors report the following results:

      (1) BOLD-response analyses: left MGB linguistic vs non-linguistic significant; left LGN linguistic vs non-linguistic significant. There is no hemisphere x stimulus interaction.

      (2) MVPA: left MGB linguistic vs. non-linguistic significant; bilateral VLN linguistic vs. non-linguistic significant; significant lateralisation in MGB (left MGB responses better classified linguistic vs. non-linguistic in contrast to right).

      (3) Functional connectivity: there is, in general, connectivity between the thalamic ROIs and the respective primary cortices independent of linguistics.

      Strengths:

      The study has a clear and comprehensive design and addresses a timely topic. First-order thalamic nuclei and their interaction with the respective cerebral cortex area are likely key to understanding how perception works in a world where one has to compute highly dynamic stimuli often in an instant. Speech is a prime example of an ecologically important, extremely dynamic, and complex stimulus. The field of the contribution of cerebral cortex-thalamic loops is wide open, and the study presents a solid approach to address their role in different speech modalities (i.e., reading, comprehension, production).

      Weaknesses:

      I see two major overall weaknesses in the manuscript in its current form:

      (1) Statistics:

      Unfortunately, I have doubts about the solidity of the statistics. In the analyses of the BOLD responses, the authors do not find significant hemisphere x stimulus interactions. In my view, such results would preclude post-hoc t-tests. Nevertheless, the authors motivate their post-hoc t-test by 'trends' in the interaction and prior hypotheses. I see two difficulties with that. First, the origin of the prior hypotheses is somewhat unclear (see also the comment below on hypotheses); second, the post-hoc t-test is not corrected for multiple comparisons. It is a pity that the authors did not derive more specific hypotheses grounded in the literature to guide the statistical testing, as I think these would have been available, and the response properties of the MGB and LGN also make sense in light of them. In addition, I was wondering whether the MVPA results would also need to be corrected for the three tests, i.e., the three ROIs.

      Hypotheses:

      In my view, it is relatively unclear where the hypotheses precisely come from. For example, the paragraph on the hypotheses in the introduction (p. 6-7) is devoid of references. I also have the impression that the hypotheses are partly not taking into account previous reports on first-order thalamic nuclei involvement in linguistic vs. non-linguistic processing. For example, the authors test for lateralisation of linguistic vs. non-linguistic responses in all nuclei. However, from previous literature, one could derive the hypothesis that the lateralisation in MGB for speech might be there - previous work shows, for example, that speech recognition abilities consistently correlate with left MGB only (von Kriegstein et al., 2008 Curr Biol; Mihai et al., 2019 eLife). In addition, the involvement of the MGB in speech in noise processing is present in the left MGB (Mihai et al., 2021, J Neuroscience). Developmental dyslexia, which is supposed to be based on imprecise phonological processing (Ramus et al., 2004 TiCS), has alterations in left MGB (Diaz et al., 2012 PNAS; Galaburda et al., 1994 PNAS) and left MGB connections to planum temporale (Tschentscher et al., 2019 J Neurosci) as well as altered lateralisation (Müller-Axt et al., 2025 Brain). Conversely, in the LGN, I'm not aware of any studies showing lateralisation for speech. See, for example, Diaz et al., 2018, Neuroimage, where there are correlations of LGN task-dependent modulation with visual speech recognition behaviour in both LGNs. Thus, based on this literature, one could have predicted the result pattern displayed, for example, in Figure 3A at least for MGB and LGN.

      In summary, the motivation for the different hypotheses needs to be carved out more and couched into previous literature that is directly relevant to the topic. The above paragraph is, of course, my view on the topic, but currently, the paper lacks different literature as references to fully understand where the hypotheses are derived from.

    2. Reviewer #2 (Public review):

      Summary:

      This study investigates the involvement of first-order thalamic nuclei in language-related tasks using task-based fMRI in a 3 × 2 design contrasting linguistic and non-linguistic versions of reading, speech comprehension, and speech production. By focusing on the LGN, MGN, and VLN and combining activation, connectivity, lateralization, and multivariate pattern analyses, the authors aim to characterize modality-specific and language-related thalamic contributions.

      Strength:

      A major strength of the work is its hypothesis-driven and multimodal analytical approach, and the modality-specific engagement of first-order thalamic nuclei is robust and consistent with known thalamocortical organization. This is a very sound study overall.

      Weaknesses:

      However, several conceptual issues complicate the interpretation of the results as evidence for linguistic modulation per se. A central concern relates to the operationalization of the linguistic versus non-linguistic contrast. In the present design, linguistic and non-linguistic stimuli differ along multiple dimensions beyond linguistic content. For example, written words and scrambled images differ in spatial frequency structure, edge composition, contrast regularities, and familiarity, while intelligible speech and acoustically scrambled sounds differ substantially in temporal and spectral statistics. This is particularly relevant given that first-order thalamic nuclei such as the LGN are known to be highly sensitive to low-level sensory properties. As a result, observed differences in thalamic responses may reflect sensitivity to stimulus properties rather than linguistic processing per se, and this limits the specificity of claims regarding linguistic modulation.

      Relatedly, although the manuscript frequently refers to effects "depending on the linguistic nature of the stimuli," the statistical evidence for linguistic versus non-linguistic modulation is uneven across analyses. Whole-brain contrasts collapse across stimulus type and primarily test modality effects. Similarly, the primary ROI analyses of activation amplitude are collapsed across linguistic and non-linguistic conditions and convincingly demonstrate modality-specific engagement of thalamic nuclei, but do not in themselves provide evidence for linguistic modulation. Linguistic effects emerge only in later, more targeted analyses focusing on hemispheric lateralization and multivariate pattern classification, and these effects are nucleus-, modality-, and analysis-specific rather than general. Taken together, these results suggest that linguistic modulation constitutes a secondary and selective finding, whereas modality-specific task engagement represents the primary and most robust outcome of the study.

      An additional interpretational issue concerns task engagement and attention. The tasks differ substantially in cognitive demands (e.g., passive reading and listening versus overt speech production), and linguistic and non-linguistic blocks may differ systematically in salience or engagement. This is particularly important given prior evidence, cited by the authors, that LGN and MGN activity can be modulated by task demands and attention. In the absence of behavioral measures indexing task engagement or compliance, it is difficult to determine whether differences between linguistic and non-linguistic conditions reflect linguistic processing per se or are mediated by attentional factors.

      Finally, while the manuscript emphasizes the novelty of evaluating thalamic involvement in language, thalamic contributions to language have been documented previously in both lesion and functional imaging studies. The contribution of the present work, therefore, lies less in establishing thalamic involvement in language per se, and more in its focus on specific first-order nuclei, its multimodal design, and its combination of univariate, connectivity, and multivariate analyses. Moderating claims of novelty would help place the findings more clearly within the existing literature.

    1. Reviewer #2 (Public review):

      Summary:

      The manuscript reports a cryo-EM structure of TMAO demethylase from Paracoccus sp. This is an important enzyme in the metabolism of trimethylamine oxide (TMAO) and trimethylamine (TMA) in human gut microbiota, so new information about this enzyme would certainly be of interest.

      Strengths:

      The cryo-EM structure for this enzyme is new and provides new insights into the function of the different protein domains, and a channel for formaldehyde between the two domains.

      Weaknesses:

      (1) The proposed catalytic mechanism in this manuscript does not make sense. Previous mechanistic studies on the Methylocella silvestris TMAO demethylase (FEBS Journal 2016, 283, 3979-3993, reference 7) reported that, as well as a Zn2+ cofactor, there was a dependence upon non-heme Fe2+, and proposed a catalytic mechanism involving deoxygenation to form TMA and an iron(IV)-oxo species, followed by oxidative demethylation to form DMA and formaldehyde.

      In this work, the authors do not mention the previously proposed mechanism, but instead say that elemental analysis "excluded iron". This is alarming, since the previous work has a key role for non-heme iron in the mechanism. The elemental analysis here gives a Zn content of about 0.5 mol/mol protein (and no Fe), whereas the Methylocella TMAO demethylase was reported to contain 0.97 mol Zn/mol protein, and 0.35-0.38 mol Fe/mol protein. It does, therefore, appear that their enzyme is depleted in Zn, and the absence of Fe impacts the mechanism, as explained below.

      The proposed catalytic mechanism in this manuscript, I am sorry to say, does not make sense to me, for several reasons:

      (i) Demethylation to form formaldehyde is not a hydrolytic process; it is an oxidative process (normally accomplished by either cytochrome P450 or non-heme iron-dependent oxygenase). The authors propose that a zinc (II) hydroxide attacks the methyl group, which is unprecedented, and even if it were possible, would generate methanol, not formaldehyde.

      (ii) The amine oxide is then proposed to deoxygenate, with hydroxide appearing on the Zn - unfortunately, amine oxide deoxygenation is a reductive process, for which a reducing agent is needed, and Zn2+ is not a redox-active metal ion;

      (iii) The authors say "forming a tetrahedral intermediate, as described for metalloproteinase", but zinc metalloproteases attack an amide carbonyl to form an oxyanion intermediate, whereas in this mechanism, there is no carbonyl to attack, so this statement is just wrong.

      So on several counts, the proposed mechanism cannot be correct. Some redox cofactor is needed in order to carry out amine oxide deoxygenation, and Zn2+ cannot fulfil that role. Fe2+ could do, which is why the previously proposed mechanism involving an iron(IV)-oxo intermediate is feasible. But the authors claim that their enzyme has no Fe. If so, then there must be some other redox cofactor present. Therefore, the authors need to re-analyse their enzyme carefully and look either for Fe or for some other redox-active metal ion, and then provide convincing experimental evidence for a feasible catalytic mechanism. As it stands, the proposed catalytic mechanism is unacceptable.

      (2) Given the metal content reported here, it is important to be able to compare the specific activity of this enzyme preparation with earlier ones. The authors do quote a Vmax of 16.52 µM/min/mg; however, these are incorrect units for Vmax; they should be µmol/min/mg. There is a further inconsistency between the text saying µM/min/mg and the Figure saying µM/min/µg.

      (3) The consumption of formaldehyde to form methylene-THF is potentially interesting, but the authors say "HCHO levels decreased in the presence of THF", which could potentially be due to enzyme inhibition by THF. Is there evidence that this is a time-dependent and protein-dependent reaction? Also in Figure 1C, HCHO reduction (%) is not very helpful, because we don't know what concentration of formaldehyde is formed under these conditions; it would be better to quote in units of concentration, rather than %.

      (4) Has this particular TMAO demethylase been reported before? It's not clear which Paracoccus strain the enzyme is from; the Experimental Section just says "Paracoccus sp.", which is not very precise. There has been published work on the Paracoccus PS1 enzyme; is that the strain used? Details about the strain are needed, and the accession for the protein sequence.

    2. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      Thach et al. report on the structure and function of trimethylamine N-oxide demethylase (TDM). They identify a novel complex assembly composed of multiple TDM monomers and obtain high-resolution structural information for the catalytic site, including an analysis of its metal composition, which leads them to propose a mechanism for the catalytic reaction.

      In addition, the authors describe a novel substrate channel within the TDM complex that connects the N-terminal Zn<sup>2+</sup>-dependent TMAO demethylation domain with the C-terminal tetrahydrofolate (THF)-binding domain. This continuous intramolecular tunnel appears highly optimized for shuttling formaldehyde (HCHO), based on its negative electrostatic properties and restricted width. The authors propose that this channel facilitates the safe transfer of HCHO, enabling its efficient conversion to methylenetetrahydrofolate (MTHF) at the C-terminal domain as a microbial detoxification strategy.

      Strengths:

      The authors provide convincing high-resolution cryo-EM structural evidence (up to 2 Å) revealing an intriguing complex composed of two full monomers and two half-domains. They further present evidence for the metal ion bound at the active site and articulate a plausible hypothesis for the catalytic cycle. Substantial effort is devoted to optimizing and characterizing enzyme activity, including detailed kinetic analyses across a range of pH values, temperatures, and substrate concentrations. Furthermore, the authors validate their structural insights through functional analysis of active-site point mutants.

      In addition, the authors identify a continuous channel for formaldehyde (HCHO) passage within the structure and support this interpretation through molecular dynamics simulations. These analyses suggest an exciting mechanism of specific, dynamic, and gated channeling of HCHO. This finding is particularly appealing, as it implies the existence of a unique, completely enclosed conduit that may be of broad interest, including potential applications in bioengineering.

      Weaknesses:

      Although the idea of an enclosed channel for HCHO is compelling, the experimental evidence supporting enzymatic assistance in the reaction of HCHO with THF is less convincing. The linear regression analysis shown in Figure 1C demonstrates a THF concentration-dependent decrease in HCHO, but the concentrations used for THF greatly exceed its reported KD (enzyme concentration used in this assay is not reported). It has previously been shown that HCHO and THF can couple spontaneously in a non-enzymatic manner, raising the possibility that the observed effect does not require enzymatic channeling. An additional control that can rule out this possibility would help to strengthen the evidence. For example, mutating the THF binding site to prevent THF binding to the protein complex could clarify whether the observed decrease in HCHO depends on enzyme-mediated proximity effects. A mutation which would specifically disable channeling could be even more convincing (maybe at the narrowest bottleneck).

      We agree with the reviewer that HCHO and THF can react spontaneously in a non-enzymatic manner, and our experiments were not intended to demonstrate enzymatic channeling. The linear regression analysis in Figure 1C was designed solely to confirm that HCHO reacts with THF under our assay conditions. Accordingly, THF was titrated over a broad concentration range starting from zero, and the observed THF concentration–dependent decrease in HCHO reflects this chemical reactivity.

      We do not interpret these data as evidence that the enzyme catalyzes or is required for the HCHO–THF coupling reaction. Instead, the structural observation of an enclosed channel is presented as a separate finding. We have clarified this point in the revised text to avoid overinterpretation of the biochemical data (page 2, line 16).

      Another concern is that the observed decrease in HCHO could alternatively arise from a reduced production of HCHO due to a negative allosteric effect of THF binding on the active site. From this perspective, the interpretation would be more convincing if a clear coupled effect could be demonstrated, specifically, that removal of the product (HCHO) from the reaction equilibrium leads to an increase in the catalytic efficiency of the demethylation reaction.

      We agree that, in principle, a decrease in detectable HCHO could also arise from an indirect effect of THF binding on enzyme activity. However, in our study the experiment was not designed to assess catalytic coupling or allosteric regulation. The assay in question monitors HCHO levels under defined conditions and does not distinguish between changes in HCHO production and downstream consumption.

      Additionally, we do not interpret the observed decrease in HCHO as evidence that THF binding enhances catalytic efficiency, or that removal of HCHO shifts the reaction equilibrium. Instead, the data are presented to establish that HCHO can react with THF under the assay conditions. Any potential allosteric effects of THF on the demethylation reaction, or kinetic coupling between HCHO removal and catalysis, are beyond the scope of the current study, and are not claimed.

      While the enzyme kinetics appear to have been performed thoroughly, the description of the kinetic assays in the Methods section is very brief. Important details such as reaction buffer composition, cofactor identity and concentration (Zn<sup>2+</sup>), enzyme concentration, defined temperature, and precise pH are not clearly stated. Moreover, a detailed methodological description could not be found in the cited reference (6), if I am not mistaken.

      Thank you for the suggestion. We have added reference [24] to the methodological description on page 8. The Methods section has been revised accordingly on page 8 under “TDM Activity Assay,” without altering the Zn<sup>2+</sup> concentration.

      The composition of the complex is intriguing but raises some questions. Based on SDS-PAGE analysis, the purified protein appears to be predominantly full-length TDM, and size-exclusion chromatography suggests an apparent molecular weight below 100 kDa. However, the cryo-EM structure reveals a substantially larger complex composed of two full-length monomers and two half-domains.

      We appreciate the reviewer’s careful analysis of the apparent discrepancy between the biochemical characterization and the cryo-EM structure. This issue is addressed in Figure S1, which may have been overlooked.

      As shown in Figure S1, the stability of TDM is highly dependent on protein and salt conditions. At 150 mM NaCl, SEC reveals a dominant peak eluting between 10.5 and 12 mL, corresponding to an estimated molecular weight of ~170–305 kDa (blue dot, Author response image 1). This fraction was explicitly selected for cryo-EM analysis and yields the larger complex observed in the reconstruction. At lower (50 mM) or higher (>150 mM) NaCl concentrations, the protein either aggregates or elutes near the void volume (~8 mL).

      SDS–PAGE analysis detects full-length TDM together with smaller fragments (~40–50 kDa and ~22–25 kDa). The apparent predominance of full-length protein on SDS–PAGE likely reflects its greater staining intensity per molecule and/or a higher population, rather than the absence of truncated species.

      Author response image 1.

      Given the lack of clear evidence for proteolytic fragments on the SDS-PAGE gel, it is unclear how the observed stoichiometry arises. This raises the possibility of higher-order assemblies or alternative oligomeric states. Did the authors attempt to pick or analyze larger particles during cryo-EM processing? Additional biophysical characterization of particle size distribution - for example, using interferometric scattering microscopy (iSCAT)-could help clarify the oligomeric state of the complex in solution.

      Cryo-EM data were collected exclusively from the size-exclusion chromatography fraction eluting between 10.5 and 12 mL. This fraction was selected to isolate the dominant assembly in solution. Extensive 2D and 3D particle classification did not reveal distinct classes corresponding to smaller species or higher-order oligomeric assemblies. Instead, the vast majority of particles converged to a single, well-defined structure consistent with the 2 full-length + 2 half-domain stoichiometry.

      A minor subpopulation (~2%) exhibited increased flexibility in the N-terminal region of the two full-length subunits, but these particles did not form a separate oligomeric class, indicating conformational heterogeneity rather than alternative assembly states (Author response image 2). Together, these data support the 2+2½ architecture as the predominant and stable complex under the conditions used for cryo-EM. Additional techniques, such as iSCAT, would provide complementary information, but are not required to support the conclusions drawn from the SEC and cryo-EM analyses presented here.

      Author response image 2.

      The authors mention strict symmetry in the complex, yet C2 symmetry was enforced during refinement. While this is reasonable as an initial approach, it would strengthen the structural interpretation to relax the symmetry to C1 using the C2-refined map as a reference. This could reveal subtle asymmetries or domain-specific differences without sacrificing the overall quality of the reconstruction.

      We thank the reviewer for this thoughtful suggestion. In standard cryo-EM data processing, symmetry is typically not imposed initially to minimize potential model bias; accordingly, we first performed C1 refinement before applying C2 symmetry. The resulting C1 reconstructions revealed no detectable asymmetry or domain-specific differences relative to the C2 map. In addition, relaxing the symmetry consistently reduced overall resolution, indicating lower alignment accuracy and further supporting the presence of a predominantly symmetric assembly.

      In this context, the proposed catalytic role of Zn<sup>2+</sup> raises additional questions. Why is a 2:1 enzyme-to-metal stoichiometry observed, and how does this reconcile with previous reports? This point warrants discussion. Does this imply asymmetric catalysis within the complex? Would the stoichiometry change under Zn<sup>2+</sup>-saturating conditions, as no Zn<sup>2+</sup> appears to be added to the buffers? It would be helpful to clarify whether Zn<sup>2+</sup> occupancy is equivalent in both active sites when symmetry is not imposed, or whether partial occupancy is observed.

      The observed ~2:1 enzyme-to-Zn<sup>2+</sup> stoichiometry likely reflects the composition of the 2 full-length + 2 half-domain (2+2½) complex. In this assembly, only the core domains that are fully present in the complex contribute to metal binding. The truncated or half-domains lack the Zn<sup>2+</sup> binding domain. As a result, only two metal-binding sites are occupied per assembled complex, consistent with the measured stoichiometry.

We note that Zn<sup>2+</sup> was not deliberately added to the buffers, so occupancy may not reflect full saturation. Based on our cryo-EM and biochemical data, both metal-binding sites in the full-length subunits appear to be occupied to an equivalent extent, and no clear evidence of asymmetric catalysis is observed under the current experimental conditions. Full Zn<sup>2+</sup> saturation could potentially increase occupancy, but this was not explored in these experiments.

      The divalent ion Zn<sup>2+</sup> is suggested to activate water for the catalytic reaction. I am not sure if there is a need for a water molecule to explain this catalytic mechanism. Can you please elaborate on this more? As one aspect, it might be helpful to explain in more detail how Zn-OH and D220 are recovered in the last step before a new water molecule comes in.

Thank you for your suggestion. We have revised the text on page 2 as below.

      Based on our structural and biochemical data, we propose a structurally informed working model for TMAO turnover by TDM (Scheme 1). In this model, Zn<sup>2+</sup> plays a non-redox role by polarizing the O–H bond of the bound hydroxyl, thereby lowering its pK<sub>a</sub>. The D220 carboxylate functions as a general base, abstracting the proton to generate a hydroxide nucleophile. This hydroxide then attacks the electrophilic N-methyl carbon of TMAO, forming a tetrahedral carbinolamine (hemiaminal) intermediate. Subsequent heterolytic cleavage of the C–N bond leads to the release of HCHO. D220 then switches roles to act as a general acid, donating a proton to the departing nitrogen, which facilitates product release and regenerates the active site. This sequence allows a new water molecule to rebind Zn<sup>2+</sup>, enabling subsequent catalytic turnovers. This proposed pathway is consistent with prior mechanistic studies, in which water addition to the azomethine carbon of a cationic Schiff base generates a carbinolamine intermediate, followed by a rate-limiting breakdown to yield an amino alcohol and a carbonyl compound, in the published case, an aldehyde (Pihlaja et al., J. Chem. Soc. Perkin Trans. 2, 1983, 8, 1223–1226).

      Overall, the authors were successful in advancing our structural and functional understanding of the TDM complex. They suggest an interesting oligomeric complex composition which should be investigated with additional biophysical techniques.

      Additionally, they provide an intriguing hypothesis for a new type of substrate channeling. Additional kinetic experiments focusing on HCHO and THF turnover by enzymatic proximity effects would strengthen this potentially fundamental finding. If this channeling mechanism can be supported by stronger experimental evidence, it would substantially advance our understanding and knowledge of biologic conduits and enable future efforts in the design of artificial cascade catalysis systems with high conversion rate and efficiency, as well as detoxification pathways.

      Reviewer #2 (Public review):

      Summary:

      The manuscript reports a cryo-EM structure of TMAO demethylase from Paracoccus sp. This is an important enzyme in the metabolism of trimethylamine oxide (TMAO) and trimethylamine (TMA) in human gut microbiota, so new information about this enzyme would certainly be of interest.

      Strengths:

      The cryo-EM structure for this enzyme is new and provides new insights into the function of the different protein domains, and a channel for formaldehyde between the two domains.

      Weaknesses:

(1) The proposed catalytic mechanism in this manuscript does not make sense. Previous mechanistic studies on the Methylocella silvestris TMAO demethylase (FEBS Journal 2016, 283, 3979-3993, reference 7) reported that, as well as a Zn<sup>2+</sup> cofactor, there was a dependence upon non-heme Fe<sup>2+</sup>, and proposed a catalytic mechanism involving deoxygenation to form TMA and an iron(IV)-oxo species, followed by oxidative demethylation to form DMA and formaldehyde.

      In this work, the authors do not mention the previously proposed mechanism, but instead say that elemental analysis "excluded iron". This is alarming, since the previous work has a key role for non-heme iron in the mechanism. The elemental analysis here gives a Zn content of about 0.5 mol/mol protein (and no Fe), whereas the Methylocella TMAO demethylase was reported to contain 0.97 mol Zn/mol protein, and 0.35-0.38 mol Fe/mol protein. It does, therefore, appear that their enzyme is depleted in Zn, and the absence of Fe impacts the mechanism, as explained below.

      The proposed catalytic mechanism in this manuscript, I am sorry to say, does not make sense to me, for several reasons:

      (i) Demethylation to form formaldehyde is not a hydrolytic process; it is an oxidative process (normally accomplished by either cytochrome P450 or non-heme iron-dependent oxygenase). The authors propose that a zinc (II) hydroxide attacks the methyl group, which is unprecedented, and even if it were possible, would generate methanol, not formaldehyde.

(ii) The amine oxide is then proposed to deoxygenate, with hydroxide appearing on the Zn - unfortunately, amine oxide deoxygenation is a reductive process, for which a reducing agent is needed, and Zn<sup>2+</sup> is not a redox-active metal ion;

      (iii) The authors say "forming a tetrahedral intermediate, as described for metalloproteinase", but zinc metalloproteases attack an amide carbonyl to form an oxyanion intermediate, whereas in this mechanism, there is no carbonyl to attack, so this statement is just wrong.

      So on several counts, the proposed mechanism cannot be correct. Some redox cofactor is needed in order to carry out amine oxide deoxygenation, and Zn<sup>2+</sup> cannot fulfil that role. Fe<sup>2+</sup> could do, which is why the previously proposed mechanism involving an iron(IV)-oxo intermediate is feasible. But the authors claim that their enzyme has no Fe. If so, then there must be some other redox cofactor present. Therefore, the authors need to re-analyse their enzyme carefully and look either for Fe or for some other redox-active metal ion, and then provide convincing experimental evidence for a feasible catalytic mechanism. As it stands, the proposed catalytic mechanism is unacceptable.

      We thank the reviewer for the detailed and thoughtful mechanistic critique. We fully agree that Zn<sup>2+</sup> is not redox-active, and cannot directly mediate oxidative demethylation or amine oxide deoxygenation. We acknowledge that the oxidative step required for the conversion of TMAO to HCHO is not explicitly resolved in the present study. Accordingly, we have revised the manuscript to remove any implication of Zn<sup>2+</sup>-mediated redox chemistry, and have eliminated the previously imprecise analogy to zinc metalloproteases.

      We recognize and now discuss prior biochemical work on TMAO demethylase from Methylocella silvestris (MsTDM), which proposed an iron-dependent oxidative mechanism (Zhu et al., FEBS 2016, 3979–3993). That study reported approximately one Zn<sup>2+</sup> and one non-heme Fe<sup>2+</sup> per active enzyme, implicated iron in catalysis through homology modeling and mutagenesis, and used crossover experiments suggesting a trimethylamine-like intermediate and oxygen transfer from TMAO, consistent with an Fe-dependent redox process. However, that system lacked experimental structural information, and did not define discrete metal-binding sites.

      In contrast,

      (1) Our high-resolution cryo-EM structures and metal analyses of TDM consistently reveal only a single, well-defined Zn<sup>2+</sup>-binding site, with no structural evidence for an additional iron-binding site as in the previous report (Zhu et al., FEBS 2016, 3979–3993).

(2) To investigate the potential involvement of iron, we expressed TDM in LB medium supplemented with (NH<sub>4</sub>)<sub>2</sub>Fe(SO<sub>4</sub>)<sub>2</sub> and determined its cryo-EM structure. This structure is identical to the original one, and no EM density corresponding to a second iron ion was observed. Moreover, the previously proposed Fe<sup>2+</sup>-binding residues are spatially distant (Figure S6).

(3) ICP-MS analysis shows no detectable iron and only zinc (Figure S5).

(4) The enzyme kinetics of our iron-free TDM are comparable to those reported for MsTDM (Figure 1A). We propose that the differences in K<sub>m</sub> and V<sub>max</sub> arise from differences in the overall sequences of the two enzymes. Please also see our comment at the end on a newly published paper on MsTDM.

While we cannot comment directly on the MsTDM results, our experimental results do not support the presence of an iron-binding site. Our data indicate that this chemistry is unlikely to be mediated by a canonical non-heme iron center as proposed for MsTDM. We have therefore revised our model into a structural framework that rationalizes substrate binding, metal coordination, and product stabilization, while clearly delineating the limits of mechanistic inference supported by the current data.

Scheme 1 and the proposed-mechanism section were revised on page 4. Figure S6 was added.

      (2) Given the metal content reported here, it is important to be able to compare the specific activity of the enzyme reported here with earlier preparations. The authors do quote a Vmax of 16.52 µM/min/mg; however, these are incorrect units for Vmax, they should be µmol/min/mg. There is a further inconsistency between the text saying µM/min/mg and the Figure saying µM/min/µg.

Thank you for the correction. We have converted the V<sub>max</sub> units to nmol/min/mg and revised the text on page 2, where we also compare our value with that of the previously reported TDM enzyme. See also the note below on a newly published manuscript and its comparison.

      (3) The consumption of formaldehyde to form methylene-THF is potentially interesting, but the authors say "HCHO levels decreased in the presence of THF", which could potentially be due to enzyme inhibition by THF. Is there evidence that this is a time-dependent and protein-dependent reaction? Also in Figure 1C, HCHO reduction (%) is not very helpful, because we don't know what concentration of formaldehyde is formed under these conditions; it would be better to quote in units of concentration, rather than %.

We appreciate this important point. We have revised Figure 1C to present HCHO levels in absolute concentration units. While the current data demonstrate reduced detectable HCHO in the presence of THF, we agree that distinguishing between HCHO consumption and potential THF-mediated enzyme inhibition would require dedicated time-course and protein-dependence experiments. We have therefore revised the description to avoid overinterpretation and to limit our conclusions to the observed changes in HCHO concentration (page 2, lines 18–19).

      (4) Has this particular TMAO demethylase been reported before? It's not clear which Paracoccus strain the enzyme is from; the Experimental Section just says "Paracoccus sp.", which is not very precise. There has been published work on the Paracoccus PS1 enzyme; is that the strain used? Details about the strain are needed, and the accession for the protein sequence.

      Thank you for this comment. We now indicate that the enzyme is derived from Paracoccus sp. DMF and provide the accession number for the protein sequence (WP_263566861) in the Experimental Section (page 8, line 4).

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) The ITC experiment requires a ligand-into-buffer titration as an additional control. Also, maybe I misunderstood the molar ratio or the concentrations you used, but if you indeed added a total of 4.75 μL of 20 μM THF into 250 μL of 5 μM TDM, it is not clear to me how this leads to a final molar ratio of 3.

We thank the reviewer for this suggestion. A ligand-into-buffer control ITC experiment was performed and is now included in Figure S8C; it shows no appreciable signal.

Regarding the molar ratio, this was our mistake: the experiment used 2.45 μL injections of 80 μM THF into 250 μL of 5 μM TDM. This corresponds to a final ligand concentration of ~12.8 μM, giving a ligand-to-protein molar ratio of ~2.6. We have revised the text in the ITC section on page 9.
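The corrected figures above can be checked with a short calculation. This is a minimal sketch only: the number of injections is not stated in our response, so a typical titration of ~19 injections of 2.45 μL each is assumed here.

```python
# Sanity check of the corrected ITC titration numbers (hypothetical injection count).
# Assumed (not stated above): ~19 injections of 2.45 uL each.
n_inj = 19
v_inj_ul = 2.45          # injection volume, uL
cell_ul = 250.0          # cell volume, uL (containing 5 uM TDM)
thf_stock_um = 80.0      # syringe THF concentration, uM
tdm_um = 5.0             # nominal TDM concentration, uM

v_total = n_inj * v_inj_ul                               # total injected volume, uL
thf_final = thf_stock_um * v_total / (cell_ul + v_total)  # THF diluted into the cell
ratio = thf_final / tdm_um                                # ligand-to-protein molar ratio

print(f"final [THF] ~ {thf_final:.1f} uM, molar ratio ~ {ratio:.1f}")
```

Under these assumptions the calculation gives a final THF concentration of roughly 12.6 μM and a molar ratio of about 2.5, broadly consistent with the ~12.8 μM and ~2.6 quoted above; the small difference depends on the exact injection count and on whether cell dilution is included.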

      (2) Characterization/quality check of all mutant enzymes should be performed by NanoDSF, CD spectroscopy or similar techniques to confirm that proteins are properly folded and fit for kinetic testing.

      We appreciate the reviewer’s suggestion. All mutant proteins, including D220A, D367A, and F327A, were purified with yields similar to the wild-type enzyme. Additionally, cryo-EM maps of the mutants show well-defined density and overall structural integrity consistent with the wild-type. These findings indicate that the introduced mutations do not significantly affect protein folding, supporting their use for kinetic analysis. While NanoDSF might reveal differences in thermal stability due to mutations, it does not provide structural information. Our conclusions are not based on minor differences in thermostability. Our cryo-EM structures of the mutants offer much more reliable structural data than CD spectroscopy.

      (3) Best practice would suggest overlapping pH ranges between different buffer systems in the pH-dependence experiments to rule out buffer-specific effects independent of pH.

We thank the reviewer for this helpful suggestion. We agree that overlapping pH ranges between different buffer systems can be valuable for excluding buffer-specific effects. In this study, the pH-dependence experiments were intended to provide a qualitative assessment of pH sensitivity rather than a detailed analysis of buffer-independent pK<sub>a</sub> values. While we cannot fully exclude minor buffer-specific contributions, the overall trends observed were reproducible and sufficient to support the conclusions drawn. We have added a clarifying statement to the revised manuscript to reflect this consideration (page 2, line 12).

(4) "Structural comparison revealed high similarity to a THF-binding protein, with superposition onto a T protein.": It would be nice to show this as an additional figure, as resolution and occupancy for THF are low.

We thank the reviewer for this suggestion. To address this point, we have revised Figure S6 by adding an additional panel (now Figure S7C) showing the structural superposition of TDM with the THF-binding T protein. This comparison better illustrates the structural similarity despite the limited resolution and partial occupancy of the THF density in our map.

      (5) Editing could have been done more thoroughly. Some spelling mistakes, e.g. "RESEULTS", "redius", "complec"; kinetic rate constants should be written in italic (not uniform between text and figures); Prism version is missing; Vmax of 16.52 µM/min/mg - doublecheck units; Figure S1B: The "arrow on the right" might have gone missing.

We corrected the spelling errors on page 2 (~line 10), page 5 (~line 34), and page 6 (~line 40). The Prism version was added, the arrow was added to Figure S1B, and the V<sub>max</sub> unit was corrected to nmol/min/mg.

      Reviewer #2 (Recommendations for the authors):

      (1) The authors must re-examine the metal content of their purified enzyme, looking in particular for Fe or another redox-active metal ion, which could be involved in a reasonable catalytic mechanism.

We thank the reviewer for this suggestion and have carefully re-examined the metal content of TDM. Elemental analyses by EDX and ICP-MS consistently detected Zn<sup>2+</sup> in purified TDM (Zn:protein ≈ 1:2), whereas Fe was below the detection limit across multiple independent preparations (Fig. S5A,B). To assess whether iron could be incorporated or play a functional role, we expressed TDM in E. coli grown in LB medium supplemented with (NH<sub>4</sub>)<sub>2</sub>Fe(SO<sub>4</sub>)<sub>2</sub> and performed activity assays in the presence of exogenous Fe<sup>2+</sup>. Neither condition resulted in enhanced enzymatic activity.

      Consistent with these biochemical data, all cryo-EM structures reveal a single, well-defined metal-binding site coordinated by three conserved cysteine residues and occupied by Zn<sup>2+</sup>, with no evidence for an additional iron species or other redox-active metal site.

      (2) The specific activity of the enzyme should be quoted in the same units as other literature papers, so that the enzyme activity can be compared. It could be, for example, that the content of Fe (or other redox-active metal) is low, and that could then give rise to a low specific activity.

Thank you for the suggestion. We now quote the enzyme activity in the same units as the previous report and have revised the text on page 2.

Since the submission of our paper, a new report on MsTDM has been published (Cappa et al., Protein Science 33(11), e70364) that further supports our findings. First, the kinetic parameters reported from ITC (V<sub>max</sub> = 0.309 μmol/s, approximately 240 nmol/min/mg; K<sub>m</sub> = 0.866 mM) are comparable to our observed values (156 nmol/min/mg and 1.33 mM, respectively) in the absence of exogenous iron. Second, the optimal pH for enzymatic activity is similar to that observed for our paraTDM. Third, the reported two-state unfolding behavior is consistent with our cryo-EM structural observations, in which the more dynamic subunits appear to destabilize before the core domains unfold. Based on these findings, we now propose that Zn<sup>2+</sup> functions primarily as an organizational cofactor at the core catalytic domain (revised Scheme 1).

Reviewer #1 (Public review):

      This study by Radziun and colleagues investigates the effects of using a hand-augmentation device on mental body representations. The authors use a proprioceptive localisation task to measure metric representations of finger length before and after participants wear the device, and then before and after they learn to use the device, which extends the lengths of the fingers by 10 cm. The authors find changes between different time points, which they interpret as evidence for three distinct forms of plasticity: one related to simply wearing the device, one related to learning to use it, and an aftereffect after taking the device off. A control experiment with a similar device, which does not lengthen the fingers, showed the first and third of these forms of plasticity, but not the second.

      This study takes an interesting approach to a timely and theoretically significant issue. The study appears to be appropriately designed and conducted. There are, however, some points which require clarification.

      (1) The nature of the localization task is unclear. On its face, the task appears to involve localization of each landmark within the 2-dimensional surface of the touchscreen. However, the regression analysis presupposes that localization is made in a 1-dimensional space. Figure S2 shows that three lines are presented on the screen above the index, middle, and ring fingers, which I imagine the participant is meant to use as a guide. But it is at least conceivable that the perceived location or orientation of the finger might not correspond exactly to these lines. While the method can deal gracefully with proximal-distal translations of the fingers (i.e., with the intercept parameter of the regression), it isn't clear how the participant is supposed to respond if their proprioceptive perception of finger location is translated left-right or rotated relative to the lines on the screen. I also worry that presenting a long, thin line to represent each finger on the screen may not be a neutral method and may prime participants to represent the finger as long and thin.

      (2) The task used here fits within a wider family of tasks in the literature using localization judgments of multiple landmarks to map body representations. I feel that some discussion of this broader set of tasks and their use to measure body representation and plasticity is notably absent from the paper. It is also striking to me that some of the present authors have themselves recently criticized the use of landmark localization methods as a measure of represented body size and shape (Peviani et al, 2024, Current Biology). It is therefore surprising to see them use this task here as a measure of represented finger length without commenting on this issue.

      (3) 18 participants strikes me as a relatively small sample size for this type of study. It weakens the manuscript that the authors do not provide any justification, or even comment on, the sample size. This is especially true as participants are excluded from the entire sample, and from specific analyses, on rather post-hoc grounds.

      (4) I have some concerns about the interpretation of contraction in stage 2. The authors claim that wearing the finger extended produces "a contraction",i.e., an "under-representation" (page 12). But in both experiments, regression slopes in stage 2 were not significantly different from 1 (i.e., 0.98 [SE: 0.07] in Exp 1a and 1.04 [SE: 0.09] in Experiment 1b). So how can that be interpreted as "under-representation"?

      (5) I also have concerns about the interpretation of the stretch that is claimed to occur following training. In Exp 1a, regression slopes in stage 3 are on average 1.15. That is LESS than in the pretest at stage 1 (mean: 1.16). The idea of stretch only comes about because of the lower slopes in stage 2, which the authors have interpreted as reflecting contraction. So what the authors call stretch and a 2nd form of plasticity could just be the contraction from stage 2 wearing off or dissipating, since perceived finger length in stage 3 just appears to return to the baseline level seen in stage 1. While the authors describe their results in terms of three distinct forms of plasticity, these are not in fact statistically independent. The dip in regression slopes in stage 2 is interpreted as evidence for two distinct plasticity effects, which I do not find convincing.

      (6) The distinction between plasticity at stage 3 (which appears specific to augmentation) and plasticity at stage 4 (which does not appear specific, as it also occurs in Experiment 1b) feels strained. This feels like a very subtle distinction, and the theoretical significance of it is not convincingly developed.

      (7) The reporting of statistics is not always consistent. For example, 95%CIs are presented for regression slopes in stages 1, 3, and 4, but not for stage 2. Statistics are performed on regression slopes, except for one t-test on page 7 comparing lengths in cm. Estimates of effect size would be nice additions to statistical tests.

      (8) Minor point: On page 4, the authors write, "These included sorting colored blocks, stacking a Jenga tower, and sorting pegs into holes; the latter task required fine-grained manipulation and was used as our outcome measure of motor learning." This suggests that peg sorting was the outcome measure, but in Figure 1D, Jenga is presented as the outcome measure.

Reviewer #2 (Public review):

      Summary:

      This study aimed to explore dynamic changes in the somatosensory representation of both the body and artificial body parts. The study investigated how proprioceptive localisation along the finger changes when participants wear, actively use, and then remove a hand augmentation device - a rigid finger-extension. By mapping perceived target locations along the biological finger and the extension across multiple stages, the authors aim to characterise how the somatosensory system updates our spatial body representation during and after interaction with body augmentation technology.

      Strengths:

      The manuscript addresses an interesting question of how augmentation devices alter proprioceptive localisation abilities. Conceptually, the work moves beyond classic tool-use paradigms by focusing on a device that is used with the hand to extend the fingers' abilities (versus a tool that is simply used by the hand), and by attempting to map perceived spatial structure across both biological and artificial segments within the same framework.

      A major strength is the multi-stage design, which samples localisation abilities at baseline, the beginning of device wear, post-training, and immediately post-removal. This provides a richer characterisation of short-term adaptation compared to a simple pre/post comparison. The dense sampling across stages and target locations generates a rich behavioural dataset that will be valuable to readers interested in somatosensory body representation. The within-subject, counterbalanced control session further strengthens interpretability, providing a useful comparison for interpreting stage-dependent effects, and to probe how functional training shapes changes in the perceptual representations. Finally, the augmentation device itself appears carefully engineered, with thoughtful design decisions regarding wearability, including comfort and customised fit. The manuscript is also communicated clearly, with transparent reporting of analyses and succinct figures that make the pattern changes across stages straightforward to evaluate.

      Weaknesses:

      There is conceptual ambiguity in how the regression outcomes are interpreted in relation to perceived length and spatial integration. The manuscript treats regression slope as a proxy for "length perception" and discards the intercept as "spatial bias," but in this localisation task translation (intercept) and scaling (slope) are coupled: changes in anchoring at the proximal baseline (intercept) or distal endpoint can generate slope differences without uniform rescaling across the mapped surface. Relatedly, the analyses do not establish whether the reported effects are global across targets or disproportionately driven by the most distal locations. This limits the strength of inferences about "partitioning" or "reallocation" of representational space across biological and artificial segments. Some interpretive statements also appear stronger than the evidence supports (e.g., describing the stage 2 bio-extension map as "geometrically accurate", despite Bayes factors that provide only anecdotal support for no difference from true length). Extensive repeated judgements to a fixed set of locations may additionally stabilise response strategies or anchoring even without feedback, complicating the separation of body-representation change from task-specific calibration.

      The manuscript would also benefit from clearer conceptual framing of what the device is and what its training probes are. The device is described variably as an "artificial finger" versus a rigid "finger extension," with different implications for perception and function. In addition, the training tasks appear to emphasise manipulation and dexterity more than scenarios requiring an extended reachable workspace (indeed, participants appear to have performed at least as well, if not better, in the control training), which brings into question whether participants explored the device's intended functionality and possible proprioceptive consequences. The control experiment is thoughtfully designed to test whether functional training contributes to the stage 3 changes, but because localisation is not performed while wearing the short device, the design does not resolve whether the stage 2 change and the post-removal aftereffect are specific to the augmentative extension versus more general consequences of wearing a device on the finger (and the following possible distorted distal cues).

      Finally, the immediate post-removal aftereffects are intriguing, but the mechanistic interpretation remains underspecified. As presented within the internal model framework, the magnitude and consistency of the aftereffect following brief exposure are difficult to reconcile with the stability expected from a lifetime biological finger model, and because the aftereffect is assessed only immediately after removal, its time course and functional significance remain unclear.

Reviewer #3 (Public review):

      Summary:

      The study aims to investigate sensorimotor plasticity mechanisms by exposing a cohort of 20 subjects to manipulation activities while using wearable finger extensions. With a series of experiments involving localization and motor tasks, the authors provide evidence that the finger extensions are integrated into the body representation of the subjects.

      Strengths:

      The study deserves attention, and the psychophysical protocols are carefully designed, and the statistical analyses are solid.

      Weaknesses:

      However, the current version of the manuscript, in my opinion, makes an exaggerated use of the term plasticity, and this should be amended. This is because the authors support the plasticity claims with psychophysical experiments, without providing evidence of neural-plasticity mechanisms (e.g., neuroimaging methods are not used).

      The authors are recommended to revise the wording of the manuscript and possibly perform additional experiments with brain imaging methods (e.g., EEG or fMRI).

Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This manuscript describes critical intermediate reaction steps of a HA synthase at the molecular level; specifically, it examines the 2nd step, polymerization, adding GlcA to GlcNAc to form the initial disaccharide of the repeating HA structure. Unlike the vast majority of known glycosyltransferases, the viral HAS (a convenient proxy extrapolated to resemble the vertebrate forms) uses a single pocket to catalyze both monosaccharide transfer steps. The authors' work illustrates the interactions needed to bind & proof-read the UDP-GlcA using direct and '2nd layer' amino acid residues. This step also allows the HAS to distinguish the two UDP-sugars; this is very important as the enzymes are not known or observed to make homopolymers of only GlcA or GlcNAc, but only make the HA disaccharide repeats GlcNAc-GlcA.

      Strengths:

      Overall, the strengths of this paper lie in its techniques & analysis.

      The authors make significant leaps forward towards understanding this process using a variety of tools and comparisons of wild-type & mutant enzymes. The work is well presented overall with respect to the text and illustrations (especially the 3D representations), and the robustness of the analyses & statistics is also noteworthy.

      Furthermore, the authors make some strides towards creating novel sugar polymers using alternative primers & work with detergent binding to the HAS. The authors tested a wide variety of monosaccharides and several disaccharides for primer activity and observed that GlcA could be added to cellobiose and chitobiose, which are moderately close structural analogs to HA disaccharides. Did the authors also test the readily available HA tetramer (HA4, [GlcA-GlcNAc]2) as a primer in their system? This is a highly recommended experiment; if it works, then this molecule may also be useful for cryo-EM studies of CvHAS as well.

The reviewer requested that we test whether an HA tetrasaccharide could also serve as a glycosyl-transfer acceptor for HAS. The commercially available HA tetrasaccharide (HA4) is terminated at its non-reducing end by GlcA; we therefore measured its effect on UDP-GlcNAc turnover kinetics. Titration of HA4 failed to elicit any detectable change in the UDP-GlcNAc turnover rate, indicating no priming. This is now mentioned in the main text, and the data are shown in Fig. S9.

      Weaknesses:

In the past, another report described the failed attempt to elongate short primers (HA4 and chitin oligosaccharides larger than the cello- or chitobiose that show activity in this report) with a vertebrate HAS, XlHAS1, an enzyme that seems to behave like the CvHAS (https://pubmed.ncbi.nlm.nih.gov/10473619/); this work should probably be cited and briefly discussed. It may be that the longer primers in the 1999 paper and/or the different construct or isolation specifics (detergent extract vs crude) were not conducive to the extension reaction, as the authors extracted recombinant enzyme.

      We apologize for the oversight. This reference is now cited (ref. 18) together with the description of the failed elongation of HA4 by CvHAS.

      There are a few areas that should be addressed for clarity and correctness, especially defining the class of HAS studied here (Class I-NR) as the results may (Class I-R) or may not (Class II) align (see comment (a) below), but overall, a very nicely done body of work that will significantly enhance understanding in the field.

      Done as requested

      Reviewer #2 (Public review):

      Summary:

      The paper by Stephens and co-workers provides important mechanistic insight into how hyaluronan synthase (HAS) coordinates alternating GlcNAc and GlcA incorporation using a single Type-I catalytic centre. Through cryo-EM structures capturing both "proofreading" and fully "inserted" binding poses of UDP-GlcA, combined with detailed biochemical analysis, the authors show how the enzyme selectively recognizes the GlcA carboxylate, stabilizes substrates through conformational gating, and requires a priming GlcNAc for productive turnover.

      These findings clarify how one active site can manage two chemically distinct donor sugars while simultaneously coupling catalysis to polymer translocation.

      The work also reports a DDM-bound, detergent-inhibited conformation that possibly illuminates features of the acceptor pocket, although this appears to be a purification artefact (it is indeed inhibitory) rather than a relevant biological state.

      Overall, the study convincingly establishes a unified catalytic mechanism for Type-I HAS enzymes and represents a significant advance in understanding HA biosynthesis at the molecular level.

      Strengths:

      There are many strengths.

      This is a multi-disciplinary study with very high-quality cryo-EM and enzyme kinetics (backed up with orthogonal methods of product analysis) to justify the conclusions discussed above.

      Weaknesses:

      There are few weaknesses.

      The abstract and introduction assume a lot of detailed prior knowledge about hyaluronan synthases, and in doing so, risk lessening the readership pool.

      A lot of discussion focuses on detergents (whose presence is totally inhibitory) and transfer to non-biological acceptors (at high concentrations). This risks weakening the manuscript.

      The abstract and parts of the introduction have been revised to address the reviewer’s concerns.

      Reviewer #1 (Recommendations for the authors):

      (1) As noted above, please state in title, abstract & introduction that this work is focused on a "Class I-NR HAS" (as described in Ref. #4), and NOT all HAS families...this is truly essential to note as someone working with the Pasteurella HAS version (Class II) would be totally misled & at this point, no one knows the Streptococcus HAS (Class-IR) mechanistic details which could be different due to its inverse molecular directionality of elongation compared to the CvHAS Class I-NR enzyme.

      Done as requested.

      (2) Page 6 - for the usefulness of the HAS mutants as being folded correctly, it was stated these mutants are suitable since they all 'purify' similarly...the use of the more proper term should probably be 'chromatograph', similarly suggesting similar hydrodynamic radii without massive folding issues.

      This has been revised to state that they all exhibited comparable size exclusion chromatography profiles.

      “All mutants share similar size exclusion chromatography profiles with the WT enzyme, suggesting that the substitutions do not cause a folding defect (Fig. S3).”

      (3) Page 7 - please check these sentences (& rest of paragraph?) as the meaning is not clear. "First, UDP-GlcNAc was titrated in the presence of excess UDP-GlcA, resulting in a response similar to the acceptor-free condition (Fig. 2C). However, the maximum reaction velocity at 20 mM UDP-GlcNAc was approximately 25% lower than that measured in the presence of UDP-GlcNAc only (Fig. 2C)."

      The paragraph has been revised to avoid confusion.

      (4) In Methods, please use an italicized 'g' for the centrifugation steps globally.

      Changed as requested

      (5) Please note the source/vendor for the HA standards on gels.

      Done

      (6) Page 35 - TLC section.

      (a) 'n-butanol' (with italic n) is the most widespread chemical name (not butan-1-ol).

      Done

      (b) Also, for all of the TLC images, the origin and the solvent front should be marked.

      Changed as suggested.

      Reviewer #2 (Recommendations for the authors):

      A number of minor issues should be addressed.

      (1) Abstract

      Two comments on the Abstract, which I found surprisingly weak given the quality of the work, and lacking a key detail.

      A major conceptual contribution of this work is the demonstration of how a single Type-I catalytic centre discriminates, positions, and transfers two chemically distinct substrates in an alternating pattern. This distinguishes HAS from dual-active-site (Type-II) glycosyltransferases and is important for understanding HA polymerization.

      However, this central point is not clearly articulated in the abstract. I suggest explicitly stating that HAS performs both GlcNAc and GlcA transfer reactions within a single catalytic site, and that the proofreading/inserted poses illuminate how this multifunctionality is achieved.

      The abstract currently ends with the observation of a DDM-bound, detergent-inhibited state. While this is interesting, it absolutely does not represent the central conceptual advance of the study and gives the abstract an artefactual ending.

      I strongly recommend revising the final sentences to emphasize the broader mechanistic insight and not an "artefact" (indeed, the enzyme is inactive in the presence of this detergent; it is thus a very unusual way to conclude an abstract).

      That is, finish with the wider implications of how HAS coordinates alternating substrate use, proofreading, and polymer translocation. Ending on the main mechanistic or biological significance would make the abstract considerably stronger and more aligned with the main message of the paper.

      The abstract has been revised thoroughly to reflect the important insights gained on CvHAS’ catalytic function and HA biogenesis in general.

      (2) Introduction

      The distinction between single active-centre enzymes, which transfer both sugars alternately, and twin catalytic domain enzymes that each perform one addition is surely central to the whole paper. But it is not discussed. Surely this has to be covered. There is a lot of work in this space, including, but not limited to:

      https://doi.org/10.1093/glycob/cwg085

      https://doi.org/10.1093/glycob/10.9.883

      https://doi.org/10.1093/glycob/cwad075 (includes this author team)

      Originally back to https://doi.org/10.1021/bi990270y

      If the authors instead assume such a level of knowledge for the reader, then surely they are writing for a specialist audience, not consistent with the wider readership ambitions of eLife?

      The Introduction has been revised as suggested by the reviewer, providing necessary background to frame our description of the Chlorella virus HAS. We made a deliberate effort to put new insights into a broader context.

      (3) Results and Discussion

      DDM "was observed for >50% of the analysed particles". I struggled with this. I couldn't understand how the authors selected particles that did or did not contain DDM. The main body text states: "To our surprise, careful sorting of the UDP-GlcA supplemented cryo EM dataset revealed a CvHAS subpopulation that was not bound to the substrate, but, instead, a DDM molecule near the active site (Fig 3A and S7). This was observed for >50% of the analyzed particles."

      That reads like there is one sample with two populations. But the figures and the methods section suggest differently: they suggest two samples with different data-collection regimes. That does not match the main text. Could this be clarified?

Yes, that wasn’t explained well. We clarified the text to stress that the DDM-bound sample came from a dataset that was intended to resolve a UDP-GlcA-bound state, but instead revealed the inhibition by DDM.

      Also in this space, in the modern world, "nominal magnification" has no real meaning, and calibrated pixel size would be more appropriate. Can this be given, please?

      The relevant Methods section now states: “imaging of … was performed at a calibrated pixel size of 0.652 Å”.

The discovery of DDM in the active site is surprising. But it is an inhibitory artefact. Is this section pushed a little too hard? Also, "The coordination of DDM's maltoside moiety, an α-linked glucose disaccharide, is consistent with priming by cellobiose and chitobiose." I'm not sure why an α-linked maltose is consistent with the binding of a β-linked cellobiose. That makes no sense. There will be no other enzymes where starch and cellulose oligos are mutually accepted. Consider rewriting.

We would like to stress the DDM coordination because it could lead to the development of compounds that truly function as inhibitors, either of HAS or of other related enzymes. In the observed DDM binding pose, the alpha-linkage is not recognized. Instead, the reducing-end glucosyl unit stacks against Trp342 while the non-reducing unit extends into the catalytic pocket. Hence, a similar binding pose is conceivable for cellobiose and potentially also for chitobiose. The relevant section has been reworded.

    1. Author response:

      The following is the authors’ response to the previous reviews

      Public Review:

      Reviewer #1 (Public review):

      Ewing sarcoma is an aggressive pediatric cancer driven by the EWS-FLI oncogene. Ewing sarcoma cells are addicted to this chimeric transcription factor, which represents a strong therapeutic vulnerability. Unfortunately, targeting EWS-FLI has proven to be very difficult and better understanding how this chimeric transcription factor works is critical to achieving this goal. Towards this perspective, the group had previously identified a DBD-𝛼4 helix (DBD) in FLI that appears to be necessary to mediate EWS-FLI transcriptomic activity. Here, the authors used multi-omic approaches, including CUT&tag, RNAseq, and MicroC to investigate the impact of this DBD domain. Importantly, these experiments were performed in the A673 Ewing sarcoma model where endogenous EWS-FLI was silenced, and EWS-FLI-DBD proficient or deficient isoforms were re-expressed (isogenic context). They found that the DBD domain is key to mediate EWS-FLI cis activity (at msat) and to generate the formation of specific TADs. Furthermore, cells expressing DBD deficient EWS-FLI display very poor colony forming capacity, highlighting that targeting this domain may lead to therapeutic perspectives.

      This new version of the study comprises as requested new data from an additional cell line. The new data has strengthened the manuscript. Nevertheless, some of the arguments of the authors pertaining to the limitations of immunoblots to assess stability of the DBD constructs or the poor reproducibility of the Micro C data remain problematic. While the effort to repeat MicroC in a different cell line is appreciated, the data are as heterogeneous as those in A673 and no real conclusion can be drawn. The authors should tone down their conclusions. If DBD has a strong effect on chromatin organization, it should be reproducible and detectable. The transcriptomic and cut and tag data are more consistent and provide robust evidence for their findings at these levels. 

      We agree that the Micro-C data have more apparent heterogeneity within and across cell lines as compared to other analyses such as our included CUT&Tag and RNA-seq. We addressed the possible limitations of the technique as well as inherent biology that might be driving these findings in our previous responses. Despite the poor clustering on the PCA plots, our analysis on differential interacting regions, TADs and loops remain consistent across both cell lines. We are confident that these findings reflect the context of transcriptional regulation by the constructs, therefore the role of the alpha-helix in modulating chromatin organization. To address the concerns raised by the editors and reviewers for the strength of the conclusions we drew from the Micro-C findings we have made changes to the language used to describe them throughout the manuscript. Find these changes outlined below.

      • On lines 70-71, "is required to restructure" was changed to "is implicated in restructuring of"

      • On line 91, "is required for" was changed to "participates in"

      • On line 98, "is required for" changed to "is potentially required for"

      • On line 360-361, "is required for restructuring" changed to "participates in restructuring"

      Concerning the issue of stability of the DBD and DBD+ constructs, a simple protein half-life assay (e.g. cycloheximide chase assay) could rule out any bias here and satisfactorily address the issue.

While we generally agree that a cycloheximide assay is a relatively simple approach to look at protein half-life, as we discussed last time, the assays included in this paper are performed at equilibrium and rely on the concentration of protein at the time of the assay. This is particularly true for assays involving crosslinking, like Micro-C. As discussed in our prior response, western blots are semi-quantitative at best, even when normalized to a housekeeping protein. In analyzing the relative protein concentration of DBD vs. DBD+ with relative protein intensities first normalized to tubulin and using the wildtype EWSR1::FLI1 rescue as a reference point, we find that there is no statistical difference in the samples used for micro-C here (Author response image 1A) or across all of the samples that we have used for publication (Author response image 1B). This does show that DBD generally has more variable expression levels relative to wildtype EWSR1::FLI1, and this is consistent with our experience in the lab.

      Nonetheless, we did attempt to perform the requested cycloheximide chase experiment to determine protein stability. Unfortunately, despite an extensive number of troubleshooting attempts, we have not been able to get good expression of DBD for these experiments. The first author who performed this work has left the lab and we have moved to a new lab space since the benchwork was performed. We continue to try to troubleshoot to get this experimental system for DBD and DBD+ to work again. When we tried to look at stability of DBD+ following cycloheximide treatment, there did appear to be some difference in protein stability (Author response image 2). However, these conditions are not the same conditions as those we published, they do not meet our quality control standards for publication, and we are concerned about being close to the limit of detection for DBD throughout the later timepoints. Additional studies will be needed with more comparable expression levels between DBD and DBD+ to satisfactorily address the reviewer concerns.

      Author response image 1.

      Expression Levels of DBD and DBD+ Across Experiments. Expression levels of DBD and DBD+ protein based on western blot band intensity normalized by tubulin band intensity. Expression levels are relative to wildtype EWSR1::FLI1 rescue levels and are calculated for (A) A673 samples used for micro-C and (B) all published studies of DBD and DBD+. P-values were calculated with an unpaired t-test.
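The normalization described in the caption (band intensity over tubulin, expressed relative to the wildtype rescue) reduces to simple ratios. A minimal sketch follows; the intensity values are hypothetical, and the published analysis used an unpaired t-test, of which only the bare Welch statistic is shown here:

```python
import math

def relative_expression(band, tubulin, wt_band, wt_tubulin):
    # Normalize the construct's band intensity to tubulin, then express
    # it relative to the wildtype EWSR1::FLI1 rescue lane.
    return (band / tubulin) / (wt_band / wt_tubulin)

def welch_t(a, b):
    # Welch's t statistic for two independent samples (unequal variances);
    # a full unpaired t-test would also compute degrees of freedom and p.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical intensities: construct band 50 over tubulin 100,
# wildtype band 100 over tubulin 100 -> 0.5x wildtype expression.
print(relative_expression(50, 100, 100, 100))  # 0.5
```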

      Author response image 2.

CHX chase assay to determine the stability of DBD and DBD+. (A) Knock-down of endogenous EWSR1::FLI1 detected with a FLI1 antibody and rescue with DBD and DBD+ detected with a FLAG antibody. (B) CHX chase assay to determine the stability of DBD and DBD+ in A-673 cells, with quantification of the protein levels (n=3). Error bars represent standard deviation. The half-lives (t1/2) of DBD and DBD+ are listed in the table.

      Suggestions:

      The Reviewing Editor and a referee have considered the revised version and the responses of the referees. While the additional data included in the new version has consolidated many conclusions of the study, the MicroC data in the new cell line are also heterogeneous and as the authors argue, this may be an inherent limitation of the technique. In this situation, the best would be for the authors to avoid drawing robust conclusions from this data and to acknowledge its current limitations.

      As discussed above, we have changed the language regarding our conclusions from micro-C data to soften the conclusions we draw per the Editor’s suggestion.

      The referee and Reviewing Editor also felt that the arguments of the authors concerning a lack of firm conclusions on the stability of EWS-FLI1 under +/-DBD conditions could be better addressed. We would urge the authors to perform a cycloheximide chase type assay to assess protein half-life. These types of experiments are relatively simple to perform and should address this issue in a satisfactory manner.

      As discussed above, we do not feel that differences in protein stability would affect the results here because the assays performed required similar levels of protein at equilibrium. Our additional analyses in this response shows that there are not significant differences between DBD and DBD+ levels in samples that pass quality control and are used in published studies. However, we attempted to address the reviewer and editor comments with a cycloheximide chase assay and were unable to get samples that would have passed our internal quality control standards. These data may suggest differences in protein stability, but it is unclear that these conditions accurately reflect the conditions of the published experiments, or that this would matter with equal protein levels at equilibrium.

    1. 3: Determine likelihood of occurrence

      The Two Parts of Likelihood: Likelihood isn't just one number. It is calculated by combining two factors:

      Likelihood of Initiation/Occurrence: Will the enemy try to attack? (Or will the hurricane happen?)

      Likelihood of Resulting in Adverse Impact: If they do attack, will they succeed? (Will our defenses stop them, or will they break through?)

      Overall Likelihood: You combine those two factors to get the final score (e.g., if they are likely to try, AND likely to succeed, the Overall Likelihood is High).
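As an illustrative sketch only (the notes leave the combination rule to the assessor; the 1-5 ordinal scale and the "take the minimum" rule below are assumptions), the two likelihood factors can be combined like this:

```python
# Combine the two likelihood factors into an overall likelihood.
# Scale and combination rule (minimum: both factors must hold) are
# illustrative assumptions, not prescribed by the notes.
LEVELS = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
NAMES = {v: k for k, v in LEVELS.items()}

def overall_likelihood(initiation, adverse_impact):
    """Likely to try AND likely to succeed -> overall likelihood."""
    return NAMES[min(LEVELS[initiation], LEVELS[adverse_impact])]

print(overall_likelihood("high", "high"))      # both factors high -> high
print(overall_likelihood("high", "very low"))  # strong defenses cap it -> very low
```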


    1. How do mission and vision relate to a firm’s strategy

      relate to an organization’s purpose and aspirations, and are typically communicated in some form of brief written statements

Briefing Note: Child Protection Priorities and Juvenile Justice

      Executive Summary

      This document summarizes the strategic directions and the reforms undertaken by the Ministry of Justice to strengthen child protection and modernize juvenile justice.

      Key points include:

      Urgency and Speed: Reduction of time to judgment (from 18 months to 8.7 months in four years) and creation of a provisional protection order allowing the prosecutor to rule within 72 hours.

      Overhaul of Placement: Closure of the public Closed Educational Centers (CEF) in favor of Youth Placement and Education Units (UJPE), with an emphasis on educational continuity (52 weeks/year).

      Massive Human Resources: Creation of 1,600 positions at the Ministry of Justice, including 50 new juvenile judge chambers within two years and 70 positions at the Judicial Youth Protection service (PJJ).

      Legislative Developments: Support for lifting the statute of limitations on sexual crimes against minors, for mandatory legal counsel for the child, and a push to reform the "excuse of minority" for the most serious crimes.

      Protection Against Modern Scourges: Fight against the prostitution of minors (6 out of 10 prostitutes are minors), ban on mobile phones in placement centers, and regulation of nitrous oxide.

      --------------------------------------------------------------------------------

      1. Strengthening the Protection of Child Victims

      Judicial Urgency and Protective Measures

      The emphasis is on the need for a justice system that adapts to the child's pace.

      Provisional protection order: A new mechanism allows the prosecutor to act within 72 hours to protect a minor immediately, with no-contact orders and provisional allocation of the home to the protective parent.

      The judge then has 8 days to take up the case and 15 days to rule.

      Law of March 18, 2024: Provides for the automatic removal of parental authority from parents convicted of a crime or of sexual violence against their child, and broadens the suspension of the exercise of that authority from the moment of indictment.

      Support and Rights of Minors

      A lawyer for the child: Support for mandatory legal counsel in child welfare proceedings.

      A pilot with the bar associations is envisaged before legislative generalization.

      Pediatric Reception Units (UAPED): Nationwide rollout underway to improve how victims' testimony is collected and how they are cared for.

      Courtroom facility dogs: From 10 to around thirty dogs currently, with a target of 100 dogs (one per department) within one to two years to calm children during proceedings.

      --------------------------------------------------------------------------------

      2. Reform of Juvenile Criminal Justice

      Balancing Sanction and Education

      Ministry doctrine rejects any opposition between the two concepts.

      Sanction as an educational act: "Sanction is part of education. Sanction alone is not an end in itself [...] and an education without any limits leads to chaos."

      Effectiveness of the Juvenile Criminal Justice Code (CJPM): The time between the offense and the sanction has been halved in four years (8.7 months in 2024 versus 18 months in 2020).

      Transformation of Placement Facilities

      The assessment of the Closed Educational Centers (CEF) is harsh: high cost (30 to 50% more), runaway rates identical to standard centers, and educational neglect (only 5 to 10 hours of class per week).

      Creation of the UJPE: These new units merge the former group homes and the CEFs to guarantee a pathway of educational rebuilding.

      Recruitment of technical teachers: Reopening of a competitive exam for 40 teachers reporting directly to the Ministry of Justice, to provide 26 hours of class per week, 52 weeks a year, including during school holidays.

      Health and Addictions: Recruitment of 60 nurses to address the shortfall in psychiatric care and addiction treatment in placement centers.

      --------------------------------------------------------------------------------

      3. Resources and Organization of the Justice System

      Staffing Increases

      The Justice budget allows an unprecedented increase in human resources:

      Judiciary: Creation of 50 additional juvenile judge chambers within two years (notably in Bobigny, Cambrai, Alès).

      Currently, some chambers handle between 400 and 500 cases.

      PJJ: Re-creation of 70 positions, reinforcing staffing where it had been declining for 20 years (e.g., Marseille, Île-de-France).

      Open settings: Reassignment of 150 educators to open (non-custodial) supervision to bring the caseload down to around 23 cases per agent (versus 25 previously).

      Unity of Command

      The current system is deemed too fragmented (several ministries involved, responsibilities shared with the departments for child welfare (ASE)).

      A desire for better coordination, or even unified responsibility, is expressed.

      --------------------------------------------------------------------------------

      4. Societal Issues and New Threats

      Sexual Violence and the Statute of Limitations

      Ending limitation periods: Favorable opinion on making sexual crimes against minors imprescriptible, along with violent crimes (murders).

      Prostitution of minors: An alarming finding shows that 60% of prostitutes in France are minors.

      Dedicated units within the PJJ have been operational for three months to fight this scourge and the pimping networks behind it.

      Digital Safety and Addictions

      Phone ban: The new educational and criminal policy circular bans mobile phones from the bedrooms of placement centers to protect minors from online predation (traffickers, pimps).

      Nitrous oxide: Support for criminalizing its transport and online purchase (outside medical use), as poisonings tripled between 2020 and 2023.

      Debates on Criminal Responsibility

      Excuse of minority: Favorable position on ending the automatic sentence reduction for the most serious crimes (murders, torture) committed by minors aged 13 to 15.

      This would require a constitutional amendment while preserving the specialized adjudication of minors.

      --------------------------------------------------------------------------------

      5. Key Data and Statistics

      | Indicator | Source Figure |
      | --- | --- |
      | Average time to judgment (2020) | 18 months |
      | Average time to judgment (2024) | 8.7 months |
      | Cases per juvenile judge chamber | 400 to 500 (average) |
      | Share of minors among prostitutes | 60% |
      | Minors in child welfare (ASE) | 400,000 (of whom 200,000 in placement) |
      | Class hours in CEFs | < 10h/week (versus 26h in standard schooling) |
      | Placements with trusted third parties | < 9% (19,000 young people) |

      --------------------------------------------------------------------------------

      Notable Quotes

      "A child does not live at the pace of an administrative or judicial case file. [...] For a minor, 4 months is a lifetime."

      "We should, to a large extent, be ashamed of how we treat some of these children, particularly in child welfare services."

      "Placement must protect, not make a child even more vulnerable."

      "Sanction is part of education. [...] An education without ever any limits leads to chaos."

    1. Risk

      Step 1: Prepare for Assessment Before you start, you need a plan. You align the assessment with the organization's goals. (Slide 2 explains this in detail).

      Step 2: Conduct Assessment (The Core) This is the "Execution" phase. Memorize this sequence inside the gray box, as it is often a quiz question:

      • Identify Threats: Who/what wants to attack us?
      • Identify Vulnerabilities: Where are we weak?
      • Determine Likelihood: What are the odds of this happening?
      • Determine Impact: If it happens, how bad will it hurt?
      • Determine Risk: Combine Likelihood and Impact to get a Risk score. Formula to remember: Risk = Likelihood × Impact
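The Step 2 scoring can be sketched in a few lines; the 1-5 ordinal scale and the Low/Moderate/High cut-offs below are illustrative assumptions, not part of the notes:

```python
def risk_score(likelihood, impact, scale=5):
    # Risk = Likelihood x Impact, each rated on a 1..scale ordinal scale.
    assert 1 <= likelihood <= scale and 1 <= impact <= scale
    return likelihood * impact

def risk_level(score, scale=5):
    # Bucket the 1..scale**2 product back into a qualitative level
    # (cut-offs assumed for illustration).
    if score <= scale:
        return "Low"
    if score <= 3 * scale:
        return "Moderate"
    return "High"

print(risk_score(4, 5))              # 20
print(risk_level(risk_score(4, 5)))  # High
```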

      Step 3: Communicate Results Notice the arrows go both ways. You don't just report at the end; you talk to stakeholders during the process to ensure facts are correct.

      Step 4: Maintain Assessment Risk assessment is not a one-time event. You must monitor and update it over time as technology changes.


1. Ay, lord, she will become thy bed, I warrant, And bring thee forth brave brood.

We're all very sympathetic to Caliban, but he does attempt to use Miranda as a bargaining chip here, specifically selling her out for sexual purposes to Stephano. It's interesting that Miranda is made a sexual object by Caliban and by Prospero, who hates Caliban for his attempted rape of her. Ferdinand isn't sexless, but is far less threatening as a white guy and a prince. Prospero approves of him for those reasons, as well as the fact that he's in control, since he wants Ferdinand to fall in love with Miranda so he can exploit their courtship for power.

    1. About 13% to 16% of White students versus 3% to 4% of Black or Hispanic students displayed advanced science or mathematics achievement during kindergarten.

I would be interested to see the location of this study. Was it for an entire district? A city? A state?

1. Enrique García Naranjo | Tombstone Checkpoint | Mesa Mainstage 2017 (The Moth)

      Enrique's stake is his sense of identity as a Mexican American and his safety when confronted at a border checkpoint. He was cautious and aware of the reality of his situation. By the end, he showed greater clarity about his own identity and confidence in his voice to speak up. Although the experience wasn't pleasant, he was able to reflect, persevere, and continue speaking up about a hurtful issue, showing real change.

    2. And I immediately think about Jose Antonio Elena, a 16-year-old Mexican, killed by Border Patrol in Nogales for holding rocks in his hand. And I'm stunned in that fear and that disempowerment of not knowing how the situation would end up.

      I chose this point specifically to remember that implicit bias is a real and significant issue, and that the smallest things can escalate because of it. This ties in with the theme that implicit bias can have long-lasting effects on families. I believe he thought back to this person because anything can happen regardless of status; implicit bias is still prevalent.

    1. The Moth Podcast Archive | Kathi Kinnear Hill: But I Just Might (The Moth, YouTube, 14:52). "This week, Kathi Kinnear Hill has hard conversations on the campaign trail. This week's episode of The Moth Podcast is hosted by Jon Goode."

      The story centers on a moment where Kathi Kinnear is telling herself what she can't do. She is limiting her capabilities in her own mind, a problem she faced at the beginning of her hardship. The real stakes are whether she will accept those limitations or challenge them. If she fails to surpass them, she reinforces doubt and dwells in despair. If she succeeds in pushing past her own limits, she reshapes her identity. In the end, Kathi was able to see light through those dark times and persevere past what once restrained her.

    1. YOUR OWN CREATIVE WELL for the next 3 months?

      On first read, I'm not seeing the direct connection that this is what I need to do to lean back into my life (yet) -- might just mean switching this copy out and including it in this bigger text section.

    1. In fact, in our focus groups, participants in three of the four countries displayed a strong disenchantment with politics in general, and frequently criticised politicians for corruption, incompetence, or lack of responsiveness. Germany was the only exception.

      Notably, in focus groups, participants in 3/4 countries say they are disillusioned w/ politics IN GENERAL, as opposed to only the EU.

      In either case, MOST say they have no idea how it works, and SOME take a conspiratorial attitude (not by today's standards, but still).

    1. Challenges and Controversies in Parent Education: Analysis and Perspectives

      Executive Summary

      This document synthesizes sociologist Claude Martin's analysis of the evolution of parent-education practices and policies in France.

      The initial assessment reveals an alarming "scissors effect": an explosion of psychological distress among young people coinciding with a collapse in the supply of care and human support.

      The analysis highlights a major paradigm shift: the move from social determinism (collective and structural) to parental determinism (individual and behavioral).

      This evolution has fostered the emergence of a parenting-advice market and a "positive parenting" movement that, while advocating kindness, imposes new injunctions of performance and happiness.

      The document also explores the political uses of neuroscience and the current controversies surrounding educational methods, concluding with the paradox of the "double bind" confronting modern parents.

      --------------------------------------------------------------------------------

      1. The State of Play: A Youth in Distress

      The current situation of childhood in France is marked by a notable deterioration in mental health, a phenomenon that predates the COVID-19 pandemic but was accentuated by it.

      The scissors effect

      The Haut Conseil de l'enfance, de la famille et de l'âge (HCFEA) warns of two concurrent phenomena:

      Exploding demand: a massive rise in manifestations of psychological distress among children and adolescents.

      Collapsing supply: a drastic reduction in care capacity (talk therapies, drop-in facilities) and a crisis in the child-psychiatry sector.

      The pharmaceutical response

      Lacking sufficient support structures, the response has shifted toward prescribing psychotropic drugs, with spectacular increases between 2014 and 2021:

      | Medication type | Increase in prescriptions (2014-2021) |
      | --- | --- |
      | Antidepressants | +63% |
      | Psychostimulants | +78% |
      | Hypnotics and sedatives | +155% |
      | Antipsychotics | +50% |

      The phenomenon of social withdrawal

      The document identifies the emergence in France of Hikikomori-type social withdrawal, mainly affecting boys of upper-secondary age (15-17).

      This refusal to enter the race for academic success is sometimes analyzed, controversially, through the prism of parental influence (notably mothers judged excessive or intrusive).

      --------------------------------------------------------------------------------

      2. The Shift in Determinisms

      The sociological approach has changed radically in nature between the 1960s and today.

      From the collective to the individual

      Social determinism (1960s-70s): a child's success or failure was seen as the result of belonging to a social class and of the reproduction of inequalities. It was a matter of collective and political struggle.

      Parental determinism (today): the focus has shifted to parents' individual behavior.

      The child's difficulties (mental health, academic failure, antisocial behavior) are now attributed to a deficit of "parenting skills".

      The psychologization of public problems

      This individualistic view leads to increased parental responsibilization, often generating a sense of guilt.

      Authors such as Frank Furedi (Paranoid Parenting) speak of "narcissistic parenting", pointing to adults' lack of confidence in the future, which would compromise their capacity to educate.

      --------------------------------------------------------------------------------

      3. Historical Evolution of the Control of the Parental Function

      Parent education is not a new concept, but it has passed through several distinct phases:

      1. Late 19th to early 20th century (hygienism and protection): the fight against infant mortality and protection against the "dangerous classes".

      The aim at the time was to teach mothers basic care and to limit absolute paternal authority.

      2. The postwar period (the advice market): the emergence of best-selling manuals (Benjamin Spock, Laurence Pernoud, Françoise Dolto).

      This powerful economic sector thrives on parental anxiety: the more advice parents consume, the more bewildered they feel, fueling further consumption.

      3. The 1990s (the invention of "parentalité"): the term parenting (centered on the act and the behavior rather than on the status) was translated as "parentalité".

      This became a segment of public policy in its own right, intended to "support" parents but in reality taking them as targets of intervention.

      --------------------------------------------------------------------------------

      4. Neuroscience and "Neuro-parenting"

      The use of neuroscience in parent education has drawn significant criticism, notably regarding the over-interpretation of scientific data.

      The myth of the first three years: a scientistic fascination with brain imaging led to the idea of a unique "window of opportunity" during the first three years of life.

      This deterministic view presents the baby as a "little computer" whose wiring depends entirely on parental stimuli.

      The stigmatized adolescent: in contrast to the "gold mine" view of the infant brain, public policy often presents the adolescent brain as "badly wired" or intrinsically problematic, justifying urgent interventions.

      --------------------------------------------------------------------------------

      5. Positive Parenting: Between Kindness and Injunction

      "Positive parenting" has become a dominant current, carried by active lobbying of public authorities.

      The "time out" controversy

      A polemic currently pits two visions against each other:

      Advocates of structure: recommend simple methods such as the "time out" (sending the child to their room) to manage crises.

      Radicals of kindness: equate the "time out" with "ordinary educational violence", drawing a continuity between such practices and grave abuses such as infanticide.

      The injunction to be happy

      Modern parenting imposes a "norm under the skin": mothers must not only act well, they must be "authentically happy". A fake smile is perceived as dangerous for the child, creating unbearable psychological pressure on parents.

      --------------------------------------------------------------------------------

      6. Conclusion: The Paradox of the Parental Mission

      The document concludes on the impasse of the current parental "double bind":

      On one side: parents who "don't do enough" are labeled irresponsible or absent.

      On the other: "helicopter parents" (intensive parenting) are criticized for generating problematic dependence in the child.

      Claude Martin's analysis suggests that parenting policy should once again become collective, generational support rather than a focus on individual behaviors.

      Education is a historically situated improvisation; parenting models cannot be invariants disconnected from the social context and the limits of each generation.

    1. Due to the decline in warm–dry sagebrush steppe, total sagebrush steppe vegetation (both warm–dry and cool–moist) fell by 60 % under CSIRO, 63 % under Hadley, and 57 % under MIROC by the end of the century.

      Decline in Golden Eagle prey habitat = decline in Golden Eagle population and therefore an overall disturbance to the ecosystem.

    2. Warm–dry sagebrush steppe showed wide fluctuations in all climate change scenarios, with large increases projected mid-century in two climate scenarios but declines of 38–72 % (42,000–78,000 ha) by the end of the century.

      Warm-dry sagebrush steppe is a critically important habitat for Golden Eagles, serving as a nesting habitat, hunting ground (and primary habitat for prey). The health of this ecosystem is a key indicator of the Golden Eagle population's survival.

    1. On average, 0.70 of the area within 3 km of nesting centroids burned between 1981 and 2013, and the mean proportion of unburned shrubland decreased from 0.73 in 1979 to 0.22 in 2014.

      mean proportion of unburned shrubland decreased from 0.73 in 1979 to 0.22 in 2014

    1. Summary Dossier: User Involvement in Coordinated-Practice Structures

      Synthesis

      This document synthesizes the lessons of the regional webinar on the "User involvement" indicator for Multi-professional Health Homes (MSP) and Health Centers (CdS).

      Initially centered on patient satisfaction, this indicator has evolved into a global lever for transforming the health system, encouraging structures to move from a logic of care "for" the patient to a logic of care "with" the patient.

      Although optional, this indicator is considered a structuring objective for coordinated practice, conditioning part of the funding from the Assurance Maladie via the Interprofessional Framework Agreement (ACI).

      In 2024, more than 70% of structures reached level 1 of this indicator, demonstrating growing maturity.

      Moving to level 2, which involves co-decision and a lasting partnership, remains the major challenge for primary-care teams.

      1. Strategic Framework and Stakes of the Indicator

      User involvement is no longer seen as an isolated objective but as a cross-cutting approach aimed at improving the effectiveness of care and the fit between the health offer and the real needs of each territory.

      Objectives of the approach

      Improve the quality of care: by integrating the patient's lived expertise (illness, disability).

      Strengthen health democracy: give users a legitimate voice in co-constructing health actions.

      Evolution of the health project: use user feedback to keep the structure's project alive and evolving.

      Quality of working life (QVT): partnership is identified as a lever for improving professionals' daily work.

      Funding and Justification

      Funding by the Assurance Maladie is conditional on providing convincing supporting documents.

      This requirement is presented not as suspicion but as a guarantee of transparency in the management of public funds.

      New: ongoing negotiations suggest the model may evolve to remove the complexity levels, while maintaining the evaluation of satisfaction and co-decision.

      Dynamism: to be remunerated, a structure must demonstrate progression or a revision of its tools from one year to the next.

      2. The Philosophy of Partnership in Health

      The move to partnership rests on a paradigm shift, often called the "Montreal model".

      | Model | Approach | User's position |
      | --- | --- | --- |
      | Paternalistic | For the patient | Object of care, passive. |
      | Patient-centered | For the patient | At the center of concerns, but excluded from team decisions. |
      | Partnership | With the patient | Team member, with recognition of their experiential knowledge. |

      The Engagement Continuum

      Involvement breaks down into four progressive stages:

      1. Information: dissemination of public-health data or information about how the structure operates.

      2. Consultation: gathering opinions (satisfaction questionnaires, suggestion boxes).

      3. Collaboration: joint work on one-off projects (creating a poster, a themed evening).

      4. Partnership: co-construction, co-decision, and co-realization over the long term.

      3. Achievement Levels and Required Supporting Documents

      The indicator is structured into two cumulative levels for granting remuneration.

      Level 1: Information and Consultation

      Actions: putting tools in place to evaluate satisfaction and gather needs.

      Supporting documents: copies of the questionnaires, a summary of results, an action plan derived from user feedback.

      Annual evolution: if the structure remains at level 1, it must show that the tool was revised or re-analyzed.

      Level 2: Collaboration and Partnership

      Actions: lasting integration of users into governance or working groups.

      Supporting documents: designation of a user referent, minutes of co-construction meetings, a description of the user's real contribution to decisions.

      Example of the dynamic: "If the following year the structure remains at level 2, it must evaluate what was done the previous year as part of the collaboration."

      4. The Actors of the Partnership

      The diversity of profiles makes it possible to tailor involvement to the needs of the health project.

      The User: a patient, a supported person, or a family caregiver.

      The Partner / Expert Patient: an individual who has developed skills through their illness and can contribute to Therapeutic Patient Education (ETP) or research.

      The User Representative (RU): a member of an accredited association, trained in the health system and sitting on official bodies.

      The Engaged Citizen: a neighborhood resident wishing to contribute to the life of the local structure.

      The Health Mediator: facilitates connection in waiting rooms or at reception.

      Key figure (BVA survey, 2021): 80% of Occitanie residents want professional groupings to develop, and 47% say they are ready to get involved with these teams.

      5. Concrete Examples and Resources

      The webinar highlighted successful initiatives illustrating the indicator's implementation:

      Therapeutic Education (ETP): an MSP brought in an expert patient to completely rebuild its diabetes program, significantly increasing patient satisfaction.

      Support groups: in Haute-Garonne, a partner patient and a psychologist co-facilitate a monthly cancer support group.

      Governance: although SISA (interprofessional ambulatory-care companies) are legally limited to professionals, user committees can be created to influence strategic decisions.

      Communication: use of newsletters, waiting-room displays, or "ambassador" videos in which patients explain the structure's care offer to their peers.

      Available Resources

      COPS (Centre Opérationnel du Partenariat en Santé): a scheme funded by ARS Occitanie offering practice sheets, a directory of partner patients, and mentoring.

      France Assos Santé: offers free training for users wishing to get involved.

      Haute Autorité de Santé (HAS): a guide on user engagement in primary-care structures.

      6. Points of Vigilance and Obstacles

      Legal and financial status: there is as yet no "professional" status for the partner patient. Remuneration remains complex (micro-enterprise, or volunteering with expense reimbursement).

      Recruitment: it is advisable to recruit a partner patient "like a colleague", on the basis of their skills, their interpersonal qualities, and values shared with the team.

      Representativeness: seeking perfect statistical representativeness is illusory. The goal is to combine a diversity of views and skills.

      Support: given the absence of a rigid legal framework, structures are encouraged to seek support from third-party facilitators to secure their projects.

    1. located the Welsh tales in a broader context of medieval European romance in her introduction and quoted extensively from related French tales in her copious notes.

      Lady Guest reinforced the place of Arthur as a Welshman through her translation. However, did such a translation into English, and her setting of the tales in the "broader context of medieval European romance", reduce the extent to which Arthur belonged to the Welsh?

    1. Attention to Vulnerabilities: An Ethical and Pedagogical Priority

      Executive Summary

      This synthesis examines the critical role of attention to vulnerabilities in the school environment, positioning this approach not only as an ethical obligation but also as a determining factor in pedagogical effectiveness.

      The analysis stresses that the teacher-student relationship is intrinsically asymmetrical, placing the student in a position of exposure to risks, from emotional injury to dropping out of school.

      Key points covered include:

      Redefining vulnerability: it is no longer perceived as a permanent state of the person but as a situation (temporary or lasting) affecting up to half of a school's students in any given year.

      The impact of fundamental needs: satisfying the needs for competence, autonomy, and relatedness is essential to relational security.

      Combating "Ordinary Pedagogical Violence": identifying and eliminating micro-violence (verbal, behavioral) is imperative.

      Moving to active benevolence: adopting targeted professional practices, such as positive feedback and benevolent demands, correlates directly with student success.

      --------------------------------------------------------------------------------

      1. The Nature of the Pedagogical Relationship: A Fundamental Asymmetry

      The educational relationship is defined by a structural asymmetry. The teacher holds mastery of skills, status, pedagogical objectives, space, and time, while the student operates from a position of dependence and lesser awareness of what is at stake.

      Vulnerability as a Situation

      The term vulnerability (from the Latin vulnus, wound) denotes a fragility that exposes the student to risks of injury to their rights, their dignity, or, more often, their fundamental needs.

      Conceptual evolution: current research favors the notion of "situations of vulnerability" over "vulnerable persons".

      Typology of situations:

      Lasting: students with disabilities or special educational needs (roughly 470,000 to 800,000 students, including neurodevelopmental profiles, high-potential students, and non-French speakers).

      Temporary: students going through family crises (separation), economic crises (parental job loss), emotional crises, or crises linked to a migratory journey.

      It is estimated that nearly 50% of students go through such phases each year.

      --------------------------------------------------------------------------------

      2. Mapping Fundamental Needs in Schools

      To guarantee an ethical relationship, the teacher must respond to a multidimensional set of needs.

      | Need category | Key components |
      | --- | --- |
      | Basic needs (Deci & Ryan) | Competence, autonomy, relatedness. |
      | Security and trust | Relational security, self-confidence, trust in the adult and in the institution. |
      | Socialization and equity | Belonging to the group, need for justice, respect, and consideration. |
      | Support | Need for help, need for time, need for dialogue with the adult. |

      --------------------------------------------------------------------------------

      3. Professional Practices and Levers of Success

      Research, notably John Hattie's meta-analyses, shows that relational factors have an above-average impact on academic success (correlation coefficients above 0.7, where the significance threshold is 0.4).

      Major Lever: Feedback

      Positive feedback acts as a fundamental lever for nourishing the student's need for esteem and security. It should be built into the critical pedagogical moments:

      • Welcoming students.

      • Starting activities.

      • Assessment phases (announcement, correction, follow-up).

      • Handling obstacles and errors (de-dramatization).

      Communication and Posture

      Communication divides into three dimensions:

      1. Verbal: the words used.

      2. Non-verbal: gestures, facial expressions, spatial posture.

      3. Paraverbal: tone, volume, and pace of the voice (crucial to the student's perception of the teacher's satisfaction).

      --------------------------------------------------------------------------------

      4. Ordinary Pedagogical Violence (VPO)

      VPO encompasses micro-violence that is often unconscious but harmful, now prohibited by the law of July 10, 2019.

      Manifestations: shouting, mockery, intimidation, stigmatization, social discrimination, excessive comparisons, or paradoxical injunctions.

      Consequences: stress, ill-being, antisocial behavior, and aggressiveness. These behaviors add further vulnerability to what is already present, creating a vicious circle of failure.

      --------------------------------------------------------------------------------

      5. Toward an Ethics of Active Benevolence

      Ethics is defined here as a psychological disposition aimed at seeking the most just behavior toward the student.

      Distinguishing Kinds of Benevolence

      A shift from a passive to an active posture is necessary:

      Passive (or minimal) benevolence: limiting oneself to not hurting the student and leaving them to face their difficulties alone for lack of time or resources.

      Active benevolence: characterized by quality of presence, close support, appropriate demands, and a genuine interest in the student as a person beyond their results.

      The 5 Modes of Expression (after Gwénola Reto)

      1. Take an interest in the student: encourage their thinking and accept their divergences.

      2. Take needs into account: identify cognitive and fundamental needs.

      3. Care about their well-being: attend to their interest and motivation.

      4. Value the person: distinguish the individual from their normative results during assessments.

      5. Show compassion: display sensitivity to the difficulties the student encounters.

      --------------------------------------------------------------------------------

      Conclusion

      Attention to vulnerabilities should not be seen as a lowering of standards, but as benevolent demandingness.

      By securing the relational framework and meeting psycho-affective needs, the teacher makes academic demands acceptable and fruitful, ensuring that the student stays "in the game of success".

    1. Reviewer #3 (Public review):

      Summary:

      Combining electrophysiological recording, circuit tracing, single cell RNAseq, and optogenetic and chemogenetic manipulation, Howe and colleagues have identified a graded division between anterior and posterior plCoA and determined the molecular characteristics that distinguish the neurons in this part of the amygdala. They demonstrate that the expression of slc17a6 is mostly restricted to the anterior plCoA whereas slc17a7 is more broadly expressed. Through both anterograde and retrograde tracing experiments, they demonstrate that the anterior plCoA neurons preferentially projected to the MEA whereas those in the posterior plCoA preferentially innervated the nucleus accumbens. Interestingly, optogenetic activation of the aplCoA drives avoidance in a spatial preference assay whereas activating the pplCoA leads to preference. The data support a model that spatially segregated and molecularly defined populations of neurons and their projection targets carry valence specific information for the odors. Moreover, the intermingling of neurons in the plCoA is consistent with prior observations. The presence of a gradient rather than a distinct separation of the cells fits the model being proposed. The discoveries represent a conceptual advance in understanding plCoA function and innate valence coding in the olfactory system.

      Strengths:

      The strongest evidence supporting the model comes from single-cell RNASeq, genetically facilitated anterograde and retrograde circuit tracing, and optogenetic stimulation. The evidence clearly demonstrates two molecularly defined cell populations with differential projection targets. Stimulating the two populations produced opposite behavioral responses.

      Weaknesses:

      The weaknesses noted in primary review have all been addressed adequately.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This study by Howe and colleagues investigates the role of the posterolateral cortical amygdala (plCoA) in mediating innate responses to odors, specifically attraction and aversion. By combining optogenetic stimulation, single-cell RNA sequencing, and spatial analysis, the authors identify a topographically organized circuit within plCoA that governs these behaviors. They show that specific glutamatergic neurons in the anterior and posterior regions of plCoA are responsible for driving attraction and avoidance, respectively, and that these neurons project to distinct downstream regions, including the medial amygdala and nucleus accumbens, to control these responses.

      Strengths:

      The major strength of the study is the thoroughness of the experimental approach, which combines advanced techniques in neural manipulation and mapping with high-resolution molecular profiling. The identification of a topographically organized circuit in plCoA and the connection between molecularly defined populations and distinct behaviors is a notable contribution to understanding the neural basis of innate motivational responses. Additionally, the use of functional manipulations adds depth to the findings, offering valuable insights into the functionality of specific neuronal populations.

      Weaknesses:

      There are some weaknesses in the study's methods and interpretation. The lack of clarity regarding the behavior of the mice during head-fixed imaging experiments raises the possibility that restricted behavior could explain the absence of valence encoding at the population level.

      We agree with the idea that head-fixation may alter the state of the animal and the neural encoding of odor. To address this, we have provided further analysis of walking behavior during the imaging sessions, which is presented in Figure S2. Overall, we could not identify any clear patterns in locomotor behavior that are odor-specific. Moreover, when neural activity was sorted by behavioral state (walking, pausing, or fleeing), we did not observe any apparent patterns in odor-evoked neural activity. This is now discussed in the Results and Limitations sections of the manuscript.

      Furthermore, while the authors employ chemogenetic inhibition of specific pathways, the rationale for this choice over optogenetic inhibition is not fully addressed, and this could potentially affect the interpretation of the results.

      The rationale was logistical. First, inhibition over a timescale of minutes is problematic because of heat generation during prolonged optical stimulation. Second, our behavioral apparatus has a narrow height between the ceiling and floor, making tethering difficult. This is now explained in the Results section. The trade-off of using chemogenetics is that we are silencing neurons and not specific projections. However, because we find that NAc- and MeA-projecting neurons have little shared collateralization, we believe the conclusion of divergent pathways still stands. This is now discussed in the Limitations section.

      Additionally, the choice of the mplCoA for manipulation, rather than the more directly implicated anterior and posterior subregions, is not well-explained, which could undermine the conclusions drawn about the topographic organization of plCoA.

      We targeted the middle region of plCoA because it contains a mixture of cell types found in both the anterior and posterior plCoA, allowing us to test the hypothesis that cell types, not intra-plCoA location, elicit different responses. Had we targeted the anterior or posterior regions, we would expect to simply recapitulate the result from activation of random cells in each region. As a result, we think stimulation in the middle plCoA is a better test for the contribution of cell types. We have now clarified this in the text.

      Despite these concerns, the work provides significant insights into the neural circuits underlying innate behaviors and opens new avenues for further research. The findings are particularly relevant for understanding the neural basis of motivational behaviors in response to sensory stimuli, and the methods used could be valuable for researchers studying similar circuits in other brain regions. If the authors address the methodological issues raised, this work could have a substantial impact on the field, contributing to both basic neuroscience and translational research on the neural control of behavior.

      Reviewer #2 (Public review):

      Summary:

      The manuscript by the Root laboratory and colleagues describes how the posterolateral cortical amygdala (plCoA) generates valenced behaviors. Using a suite of methods, the authors demonstrate that valence encoding is mediated by several factors, including spatial localization of neurons within the plCoA, glutamatergic markers, and projection. The manuscript shows convincingly that multiple features (spatial, genetic, and projection) contribute to overall population encoding of valence. Overall, the authors conduct many challenging experiments, each of which contains the relevant controls, and the results are interpreted within the framework of their experiments.

      Strengths:

      - For a first submission the manuscript is well constructed, containing lots of data sets and clearly presented, in spite of the abundance of experimental results.

      - The authors should be commended for their rigorous anatomical characterizations and posthoc analysis. In the field of circuit neuroscience, this is rarely done so carefully, and when it is, often new insights are gleaned as is the case in the current manuscript.

      - The combination of molecular markers, behavioral readouts and projection mapping together substantially strengthen the results.

      - The focus on this relatively understudied brain region in the context is valence is well appreciated, exciting and novel.

      Weaknesses:

      - Interpretation of calcium imaging data is very limited and requires additional analysis, and behavioral responses specific to odors should be considered. If there are neural responses, the corresponding behavioral epochs and responses should be displayed and analyzed.

      We have now considered this, see response above.

      - The effect of odor habituation is not considered.

      We considered this, but we did not find any apparent differences in valence encoding as measured by the proportion of neurons with significant valence scores across trials (see Figure 1J).

      - Optogenetic data in the two subregions relies on very careful viral spread and fiber placement. The anatomy results provided should be clear about the spread of virus along the A-P and D-V axes, providing coordinates, to assure readers that the specificity of each sub-zone is real.

      We were careful to exclude animals for improper targeting. The spread of virus is detailed in Figures S3, S8 & S9.

      - The choice of behavioral assays across the two regions doesn't seem balanced and would benefit from more congruency.

      The 4-quadrant assay was chosen because this study builds on our prior experiments that demonstrate a role for the plCoA in innate behavior. It is noteworthy that the responses to odor seen in this assay are generally in agreement with other olfactory behavioral assays, so one wouldn’t predict a different result. Moreover, the approach and avoidance responses measured in this assay are precisely the behaviors we wish to understand. We did examine other non-olfactory behavioral readouts (Figures S3, S8), and didn’t observe any effect of manipulation of these pathways.

      - Rationale for some of the choices of photo-stimulation experiment parameters isn't well defined.

      The parameters for photo-stimulation were based on those used in our past work (Root et al., 2014). We used a gradient of frequency from 1-10 Hz based on the idea that odor likely exists in a gradient and this was meant to mimic a potential gradient, though we don’t know if it exists. The range in stimulation frequencies appears to align with the actual rate of firing of plCoA neurons (Iurilli et al., 2017).

      Reviewer #3 (Public review):

      Summary:

      Combining electrophysiological recording, circuit tracing, single cell RNAseq, and optogenetic and chemogenetic manipulation, Howe and colleagues have identified a graded division between anterior and posterior plCoA and determined the molecular characteristics that distinguish the neurons in this part of the amygdala. They demonstrate that the expression of slc17a6 is mostly restricted to the anterior plCoA whereas slc17a7 is more broadly expressed. Through both anterograde and retrograde tracing experiments, they demonstrate that the anterior plCoA neurons preferentially projected to the MEA whereas those in the posterior plCoA preferentially innervated the nucleus accumbens. Interestingly, optogenetic activation of the aplCoA drives avoidance in a spatial preference assay whereas activating the pplCoA leads to preference. The data support a model that spatially segregated and molecularly defined populations of neurons and their projection targets carry valence specific information for the odors. The discoveries represent a conceptual advance in understanding plCoA function and innate valence coding in the olfactory system.

      Strengths:

      The strongest evidence supporting the model comes from single cell RNASeq, genetically facilitated anterograde and retrograde circuit tracing, and optogenetic stimulation. The evidence clearly demonstrates two molecularly defined cell populations with differential projection targets. Stimulating the two populations produced opposite behavioral responses.

      Weaknesses:

      There are a couple of inconsistencies that may be addressed by additional experiments and careful interpretation of the data.

      Stimulating aplCoA or slc17a6 neurons results in spatial avoidance, and stimulating pplCoA or slc17a7 neurons drives approach behaviors. On the other hand, the authors and others in the field also show that there is no apparent spatial bias in odor-driven responses associated with odor valence. This discrepancy may be addressed better. A possibility is that odor-evoked responses are recorded from populations outside of those defined by slc17a6/a7. This may be addressed by marking activated cells and identifying their molecular markers. A second possibility is that optogenetic stimulation activates a broad set of neurons and does not recapitulate the sparseness of odor responses. It is not known whether sparse activation by optogenetic stimulation can still drive approach or avoidance behaviors.

      We agree that marking specific genetic or projection-defined neurons could help to clarify whether some neurons have more selective valence responses. However, we are not able to perform these experiments at the moment. We have included new data demonstrating that sparser optogenetic activation evokes behaviors similar in magnitude to those from broader activation (see Figure S4).

      The authors show that inhibiting slc17a7 neurons blocks approaching behaviors toward 2-PE. Consistent with this result, inhibiting NAc projection neurons also inhibits approach responses. However, inhibiting aplCoA or slc17a6 neurons does not reduce aversive responses to TMT, but blocking MEA projection neurons does. The latter two pieces of evidence are not consistent with each other. One possibility is that the MEA-projecting neurons may not be expressing slc17a6. It is not clear from the retrograde labeling experiments what percentage of MEA- and NAc-projecting neurons express slc17a6 and slc17a7. It is possible that neurons expressing neither VGluT1 nor VGluT2 could drive aversive or appetitive responses. This possibility may also explain why silencing slc17a6 neurons does not block avoidance.

      We have now performed RNAscope staining on retrograde tracing to better define this relationship. Although the VGluT1 and VGluT2 neurons have biased projections to the MeA and NAc, respectively, there is some nuance detailed in Figure S10. Generally, MeA-projecting neurons are predominantly VGluT2+, whereas about 20% of NAc-projecting neurons express both. Some (less than 35%) retrogradely labeled neurons were not detected as VGluT1 or VGluT2 positive, suggesting that other populations could also contribute. We agree that the discrepancy between MeA-projection and VGluT2 silencing is likely due to incomplete targeting of the MeA-projecting population with the VGluT2-cre line. This is included in the Discussion section.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Main:

      (1) For the head-fixed imaging experiments, what is the behavior of the mice during odor exposure? Could the weak reliability of individual neurons be due to a lack of approach or avoidance behavior? Could restricted behavior also explain the lack of valence encoding at the population level?

      We agree that this is a limitation of head-fixed recordings. In the revised manuscript we did attempt to characterize their behavioral response, and look for correlations in odor representation. Although we did find different patterns of odor-evoked walking behavior, these patterns were not reliable or specific to particular odors (Figure S2). For example, one might expect aversive odors to pause walking or elicit a fast fleeing-like response, but we did not observe any apparent differences for locomotion between odors as all odors evoked a mixture of responses (Figure S2A-D, text lines 208-232). We then examined responses to odor depending on the behavioral state (walking, pausing or fleeing) and didn’t observe any apparent patterns in odor responses (Figure S2E,F). Lastly, we acknowledge in the text that the lack of valence encoding may be an artifact of head-fixation (see lines 849-857).

      (2) For the optogenetic manipulations of Vglut1 and Vglut2 neurons, why was the injection and fiber targeted to the medial portion of the plCoA, if the hypothesis was that these glutamatergic neuron populations in different regions (anterior or posterior) are responsible for approach and avoidance? 

      We targeted the middle region of plCoA because it contains a mixture of cell types found in both the anterior and posterior plCoA, allowing us to test the hypothesis that cell types, not intra-plCoA location, elicit different responses. Had we targeted the anterior or posterior regions, we would expect to simply recapitulate the result from activation of random cells in each region. As a result, we think stimulation in the middle plCoA is a better test for the contribution of cell types. We have clarified this in the text (Lines 417-419).

      Could this explain the lack of necessity with the DREADD experiments? 

      For the loss of function experiments, a larger volume of virus was injected to cover a larger area and we did confirm targeting of the appropriate areas. Though, it is always possible that the lack of necessity is due to incomplete silencing.

      Further, why was an optogenetic inhibition approach not utilized? 

      Although optogenetic inhibition could have plausibly been used instead, we chose chemogenetic inhibition for two reasons: First, for minutes-long periods of inhibition, optical illumination poses the risk of introducing heat-related effects (Owen et al., 2019). In fact, we first tried optical inhibition but controls exhibited unusually large variance. Second, it is more feasible in our assay, which has a narrow height between the floor and lid that complicates tethering to an optic fiber. Past experiments overcame this with a motorized fiber retraction system (Root et al., 2014), but this is highly variable with user-dependent effects, so we found chemogenetics to be a more practical strategy. We have added a sentence to explain the rationale (see lines 561-563).

      (3) The specific subregion of the nucleus accumbens that was targeted should be named, as distinct parts of the nucleus accumbens can have very different functions. 

      We attempted to define specific subregions of the nucleus accumbens and found that the plCoA projection is not specific to the shell or core, or to anterior or posterior regions; rather, it broadly innervates the entire structure. We have added a note about this in the manuscript (see lines 470-471). Given that we did not find notable subregion-specific outputs within the NAc, targeting was directed to the middle region of NAc, with coordinates stated in the methods.

      (4) Why was an intersectional DREADD approach used to inhibit the projection pathways, as opposed to optogenetic inhibition? The DREADD approach could potentially affect all projection targets, and the authors might want to address how this could influence the interpretation of the results.

      This is partly addressed above in point 2. As for interpretation, we acknowledge that the intersectional approach silences the neurons projecting to a given target and not the specific projection, and we have been careful with the wording. Although this may complicate the conclusion, we did map the collaterals of NAc- and MeA-projecting neurons and find that neurons do not appreciably project to both targets and have minimal projections to other targets. We have now taken care to state that we silence the neurons projecting to a structure, not the projection itself, and we acknowledge this caveat. However, since the MeA- and NAc-projecting neurons appear to be distinct from each other (largely not collateralizing to each other), the conclusion that these divergent pathways are required still stands. We have added discussion of this in the Limitations section (see lines 859-863).

      Minor:

      (1) Line 402 needs a reference.

      We have added the missing reference (now line 441).

      (2) The Supplemental Figure labeling in the main text should be checked carefully.

      Thank you for pointing this out. We have fixed the prior errors.

      (3) Panel letter D is missing from Figure 2.

      This has been fixed.

      Reviewer #2 (Recommendations for the authors):

      Major Concerns, additional experiments:

      - In the calcium imaging experiments mice were presented with the same odor many times. Overall responses to odor presentations were quite variable and appear to habituate dramatically (Figure S1F). The general conclusion from these experiments is a lack of consistent valence-specific responses of individual neurons, but I wonder if this conclusion is slightly premature. A few potential explanatory factors may need additional attention. First, despite recording video of the mouse's face during experiments, no behavioral response to any odor is described. Is it possible these odors, when presented in head-fixed conditions, do not have the same valence?

      Yes, we agree that this is a possibility. We have added a discussion in the Limitations section (see lines 849-857). We have also added additional behavioral analysis discussed below.

      On trials with neural responses are there behavioral responses that could be quantified? 

      We have now added data in which we attempt to characterize their behavioral response and look for correlations in odor representation (see lines 208-228). Although we did observe different patterns of odor-evoked walking behavior, these patterns were not reliable or specific to particular odors (Figure S2). One might expect aversive odors to pause walking or elicit a fast fleeing-like response, but we did not observe any apparent differences for locomotion between odors (Figure S2A-D). Next, we examined responses to odor depending on the behavioral state (walking, pausing or fleeing) and didn’t observe any meaningful differences in odor responses (Figure S2E,F). Lastly, we acknowledge that the odor representation may be different in freely moving animals that exhibit dynamic responses to odor (see lines 849-857).

      - Habituation seems to play a prominent role in the neural signals, is there a larger contribution of valence if you look only at the first delivery (or some subset of the 20 presentations) of an odor type for a given trial? 

      Indeed, we considered this, but we did not find any apparent differences in valence encoding as measured by the proportion of neurons with significant valence scores across trials (see Figure 1J).

      - Is it reasonable to exclude valence encoding as a possibility when largely neurons were unresponsive to the positive valence odors (2PE and peanut) chosen when looking at the average cluster response (Figure 1F)? 

      It is true that we see fewer neurons responding to the appetitive odors (Figure 1H) and smaller average responses within the cluster, but some neurons do respond robustly. If these were valence responses, we would predict that neural responses should be similarly selective, but we do not observe any such selectivity. The sparseness of responses to appetitive odors does cause the average cluster analysis (Figure 1F) to show muted responses to these odors, consistent with the decreased responsivity to appetitive odors. Moreover, single neuron response analysis reveals that a given neuron is not more likely to respond to appetitive or aversive odors with any selectivity greater than chance. For these reasons, we think it is reasonable to conclude an absence of valence responses, which is consistent with the conclusion from another report (Iurilli et al., 2017).

      - While the preference and aversion assay with 4 corners is an interesting set-up and provides a lot of data for this particular manuscript. It would be helpful to test additional behaviors to determine whether these circuits are more conserved. As it stands the current manuscript relies on very broad claims using a single behavioral readout. Some attempts to use head-fixed approaches with more defined odor delivery timelines and/or additional valenced behavioral readouts is warranted.

      We appreciate the suggestion, but are not able to perform these experiments at the moment. The 4-quadrant assay was chosen because it builds on our prior experiments that demonstrate a role for the plCoA in innate behavior. It is noteworthy that the responses to odor seen in this assay are generally in agreement with other olfactory behavioral assays, so one wouldn’t predict a different result. The approach and avoidance responses measured in this assay are precisely the behaviors we wish to understand. Moreover, we did examine other non-olfactory behavioral readouts (Figures S3, S8), and didn’t observe any effect of manipulation of these pathways. Lastly, we have tried to define parameters for head-fixed behavior that would permit correlation of neural responses with behavior, including longer stimulations and closed-loop locomotion control of odor concentration, but were unsuccessful at establishing parameters that generated reliable behavioral responses. We acknowledge that one limitation of the study is the limited behavioral testing with two odors, leaving open whether the circuits are more broadly necessary for other odors.

      Minor comments:

      • Please define PID in the Results when it is first introduced.

      Done (see line 154)

      • Line 412 Figure S5C-N should be Figure S6C-N.

      Fixed. Now Figure S8C-N due to additional figures (see line 451).

      • Throughout the Discussion it would be helpful if the authors referred to specific Figure panels that support their statements (e.g. lines 654-656 "[...] which is supported by other findings presented here showing that both VGluT2+ and VGluT1+ neurons project to MeA, while the projection to NAc is almost entirely composed of VGluT1+ neurons".

      Thank you for the suggestion. We have added figure references in the discussion.

      • Line 778 "producing" should be "produce".

      Corrected (see line 840)

      • The figures are very busy, especially all the manipulations. The authors are commended for including each data point, but they might consider a more subtle design (translucent lines only for each animal, and one mean dot for the SEM), just to reduce the overall clutter of an already overwhelming figure set. But this is ultimately left to the authors to resolve and style to their liking. 

      Thank you for the suggestion. We have tried some different styles but like the original best.

      Reviewer #3 (Recommendations for the authors):

      If within reach, I suggest that the author determine the percentage of retrogradely labeled neurons to NAc or MEA that expresses GluT1 and GluT2. 

      We have done this for the middle region plCoA that has the greatest mixture of cell types (See Figure S10, lines 504-517). We find that the MeA projecting neurons are mostly VGluT2+ with a minority that express both VGluT1 and VGlut2. NAc-projecting neurons are primarily VGluT1+ with about 20% expressing VGlut2 as well.

      It would also be nice to sparsely label aplCoA and pplCoA neurons using ChR2 to see if sparse activation drives approach or avoidance.

      We agree that it would be useful to vary the sparseness of the ChR2 expression, to see if it produces similar results. We examined this using sparsely labeled odor ensembles, as previously done (Root et al., 2014). Briefly, we used the Arc-CreER mouse to label TMT-responsive neurons with a cre-dependent ChR2 AAV vector targeted to the anterior or posterior regions, whereas previously we had broadly targeted the entirety of plCoA. We had established that this labeling method captures about half of the active cells detected by Arc expression, which is on the order of hundreds of neurons rather than the thousands labeled by broad cre-independent expression. Remarkably, we obtain effects similar in magnitude that are not significantly different from those with broader activation of the anterior or posterior domains (see new Figure S4, lines 267-288). It still remains possible that there is a threshold number of neurons necessary to elicit behavior, but that is beyond the scope of the current study. However, these data indicate that the effect of activating anterior and posterior domains is not an artifact of broad stimulation.

    1. Reviewer #1 (Public review):

      Summary:

      This study set out to investigate potential pharmacological drug-drug interactions between the two most common antimalarial classes, the artemisinins and quinolines. There is strong rationale for this aim, because drugs from these classes are already widely-used in Artemisinin Combination Therapies (ACTs) in the clinic, and drug combinations are an important consideration in the development of new medicines. Furthermore, whilst there is ample literature proposing many diverse mechanisms of action and resistance for the artemisinins and quinolines, it is generally accepted that the mechanisms for both classes involve heme metabolism in the parasite, and that artemisinin activity is dependent on activation by reduced heme. The study was designed to measure drug-drug interactions associated with a short pulse exposure (4 h) that is reminiscent of the short duration of artemisinin exposure obtained after in vivo dosing. Clear antagonism was observed between dihydroartemisinin (DHA) and chloroquine, which became even more extensive in chloroquine-resistant parasites. Antagonism was also observed in this assay for the more clinically-relevant ACT partner drugs piperaquine and amodiaquine, but not for other ACT partners mefloquine and lumefantrine, which don't share the 4-aminoquinoline structure or mode of action. Interestingly, chloroquine induced an artemisinin resistance phenotype in the standard in vitro Ring-stage Survival Assay, whereas this effect was not as extensive for piperaquine.

      The authors also utilised a heme-reactive probe to demonstrate that the 4-aminoquinolines can inhibit heme-mediated activation of the probe within parasites, which suggests that the mechanism of antagonism involves the inactivation of heme, rendering it unable to activate the artemisinins. Measurement of protein ubiquitination showed reduced DHA-induced protein damage in the presence of chloroquine, which is also consistent with decreased heme-mediated activation, and/or with decreased DHA activity more generally.

      Overall, the study clearly demonstrates a mechanistic antagonism between DHA and 4-aminoquinoline antimalarials in vitro. It is interesting that this combination is successfully used to treat millions of malaria cases every year, which may raise questions about the clinical relevance of this finding. However, the conclusions in this paper are supported by multiple lines of evidence and the data is clearly and transparently presented, leaving no doubt that DHA activity is compromised by the presence of chloroquine in vitro. It is perhaps fortunate that the clinical dosing regimens of 4-aminoquinoline-based ACTs have been sufficient to maintain clinical efficacy despite the non-optimal combination. Nevertheless, optimisation of antimalarial combinations and dosing regimens is becoming more important in the current era of increasing resistance to artemisinins and 4-aminoquinolines. Therefore, these findings should be considered when proposing new treatment regimens (including Triple-ACTs) and the assays described in this study should be performed on new drug combinations that are proposed for new or existing antimalarial medicines.

      Strengths:

      This manuscript is clearly written and the data presented is clear and complete. The key conclusions are supported by multiple lines of evidence, and most findings are replicated with multiple drugs within a class, and across multiple parasite strains, thus providing more confidence in the generalisability of these findings across the 4-aminoquinoline and peroxide drug classes.

      A key strength of this study was the focus on short pulse exposures to DHA (4 h in trophs and 3 h in rings), which is relevant to the in vivo exposure of artemisinins. Artemisinin resistance has had a significant impact on treatment outcomes in South-East Asia, and is now emerging in Africa, but is not detected using a 'standard' 48 or 72 h in vitro growth inhibition assay. It is only in the RSA (a short pulse of 3-6 h treatment of early ring stage parasites) that the resistance phenotype can be detected in vitro. Therefore, assays based on this short pulse exposure provide the most relevant approach to determine whether drug-drug interactions are likely to have a clinically-relevant impact on DHA activity. These assays clearly showed antagonism between DHA and 4-aminoquinolines (chloroquine, piperaquine, amodiaquine and ferroquine) in trophozoite stages. Interestingly, whilst chloroquine clearly induced an artemisinin-resistant phenotype in the RSA, piperaquine only had a minor impact on the early ring stage activity of DHA, which may be fortunate considering that piperaquine is a currently recommended DHA partner drug in ACTs, whereas chloroquine is not.

      The evaluation of additional drug combinations at the end of this paper is a valuable addition, which increases the potential impact of this work. The finding of antagonism between piperaquine and OZ439 in trophozoites is consistent with the general interactions observed between peroxides and 4-aminoquinolines, and it may be interesting to see whether piperaquine impacts the ring-stage activity of OZ439.

The evaluation of reactive heme in parasites using a fluorescent sensor, combined with the measurement of K48-linked ubiquitin, further supports the findings of this study, providing independent read-outs for the chloroquine-induced antagonism.

The in-depth discussion of the interpretation and implications of the results is an additional strength of this manuscript. Whilst the discussion section is rather lengthy, there are important caveats to the interpretation of some of these results, and clear relevance to the future management of malaria, that require these detailed explanations.

      Overall, this is a high quality manuscript describing an important study that has implications for the selection of antimalarial combinations for new and existing malaria medicines.

      Weaknesses:

      This study is an in vitro study of parasite cultures, and therefore caution should be taken when applying these findings to decisions about clinical combinations. The drug concentrations and exposure durations in these assays are intended to represent clinically relevant exposures, although it is recognised that the in vitro system is somewhat simplified and there may be additional factors that influence in vivo activity. This limitation is reasonably well acknowledged in the manuscript.

      It is also important to recognise that the majority of the key findings regarding antagonism are based on trophozoite-stage parasites, and one must show caution when generalising these findings to other stages or scenarios. For example, piperaquine showed clear antagonism in trophozoite stages, but minimal impact in ring stages under these assay conditions.

A key limitation is the interpretation of the mechanistic studies that implicate heme-mediated artemisinin activation as the mechanism underpinning antagonism by chloroquine. This study did not directly measure the activation of artemisinins. The data obtained from the activation of the fluorescent probe are generally supportive of chloroquine suppressing the heme-mediated activation of artemisinins, and I think this is the most likely explanation, but there are significant caveats to consider. Primarily, the inconsistency between the fluorescence profile in the chemical reactions and the cell-based assay raises questions about the accuracy of this readout. In the chemical reaction, mefloquine and chloroquine showed identical inhibition of fluorescence, whereas piperaquine had minimal impact. By contrast, in the cell, chloroquine and piperaquine had similar impacts on fluorescence, but mefloquine had minimal impact. This inconsistency indicates that the cellular fluorescence based on this sensor does not give a simple direct readout of the reactivity of ferrous heme, and therefore these results should be interpreted with caution. Indeed, the relationship between fluorescence and antagonism for the tested drugs is a correlation, not causation. There could be several reasons for the disconnect between the chemical and biological results, either via additional mechanisms that quench fluorescence, or the presence of biomolecules that alter the oxidation state or coordination chemistry of heme or other potential catalysts of this sensor. It is possible that another factor that influences the H-FluNox fluorescence in cells also influences the DHA activity in cells, leading to the correlation with activity. It should be noted that H-FluNox is not a chemical analogue of artemisinins. Its activation relies on Fenton-like chemistry, but with an N-O rather than O-O bond, and it possesses very different steric and electronic substituents around the reactive centre, which are known to alter reactivity to different iron sources. Despite these limitations, the authors have provided reasonable justification for the use of this probe to directly visualise heme reactivity in cells, and the results are still informative.

      Another interesting finding that was not elaborated by the authors is the impact of chloroquine in the DHA dose-response curves from the ring stage assays. Detection of artemisinin resistance in the RSA generally focuses on the % survival at high DHA concentrations (700 nM) as there is minimal shift in the IC50 (see Fig 2), however, chloroquine clearly induces a shift in the IC50 (~5-fold), where the whole curve is shifted to the right, whereas the increase in % survival is relatively small. This different profile suggests that the mechanism of chloroquine-induced antagonism may be different to the mechanism of artemisinin resistance. Current evidence regarding the mechanism of artemisinin resistance generally points towards decreased heme-mediated drug activation due to a decrease in hemoglobin uptake, which should be analogous to the decrease in heme-mediated drug activation caused by chloroquine. However, these different dose response curves suggest different mechanisms are primarily responsible. Additional mechanisms have been proposed for artemisinin resistance, involving redox or heat stress responses, proteostatic responses, mitochondrial function, dormancy and PI3K signalling among others. Whilst the H-FluNox probe generally supports the idea that chloroquine suppresses heme-mediated DHA activation, it remains plausible that chloroquine could induce these, or other, cellular responses that suppress DHA activity.
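The distinction the reviewer draws between a rightward IC50 shift and a raised survival plateau at 700 nM can be illustrated with a simple dose-response model. The sketch below uses a standard four-parameter Hill equation with purely illustrative parameter values; none of the numbers are taken from the study.

```python
def hill_survival(conc, ic50, top=100.0, bottom=0.0, h=1.0):
    """Percent parasite survival as a function of drug concentration
    (standard four-parameter Hill model; all values illustrative)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** h)

rsa_conc = 700.0  # nM, the DHA concentration typically read out in the RSA

baseline = hill_survival(rsa_conc, ic50=2.0)               # sensitive parasite
shifted = hill_survival(rsa_conc, ic50=10.0)               # 5-fold IC50 shift only
plateau = hill_survival(rsa_conc, ic50=2.0, bottom=10.0)   # raised lower plateau

# A pure 5-fold IC50 shift barely changes survival at 700 nM (~0.3% -> ~1.4%),
# whereas a raised lower plateau dominates the RSA read-out (~10% survival).
```

Under these toy assumptions, the two resistance profiles look very different at the RSA read-out concentration even though both reduce drug effect, which is consistent with the reviewer's point that a curve shift and a plateau change may reflect different underlying mechanisms.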

      Impact:

This study has important implications for the selection of drugs to form combinations for the treatment of malaria. The overall findings of antagonism between peroxide antimalarials and 4-aminoquinolines in the trophozoite stage are robust, and this carries across to the ring stage for chloroquine.

      The manuscript also provides a plausible mechanism to explain the antagonism, although future work will be required to further explore the details of this mechanism and to rule out alternative factors that may contribute.

      Overall, this is an important contribution to the field and provides a clear justification for the evaluation of potential drug combinations in relevant in vitro assays before clinical testing.

    2. Reviewer #3 (Public review):

      Summary:

      The authors present an in vitro evaluation of drug-drug interactions between artemisinins and quinoline antimalarials, as an important aspect for screening the current artemisinin-based combination therapies for Plasmodium falciparum. Using a revised pulsing assay, they report antagonism between dihydroartemisinin (DHA) and several quinolines, including chloroquine, piperaquine (PPQ), and amodiaquine. This antagonism is increased in CQ-resistant strains in isobologram analyses. Moreover, CQ co-treatment was found to induce artemisinin resistance even in parasites lacking K13 mutations during the ring-stage survival assay. This implies that drug-drug interactions, not just genetic mutations, can influence resistance phenotypes. By using a chemical probe for reactive heme, the authors demonstrate that quinolines inhibit artemisinin activation by rendering cytosolic heme chemically inert, thereby impairing the cytotoxic effects of DHA. The study also observed negative interactions in triple-drug regimens (e.g., DHA-PPQ-Mefloquine) and in combinations involving OZ439, a next-generation peroxide antimalarial. Taken together, these findings raise significant concerns regarding the compatibility of artemisinin and quinoline combinations, which may promote resistance or reduce efficacy.

With the additive profile as the only comparator and no synergistic effect in any of the comparisons, it is hard to contextualize the observed antagonism. Including a known synergistic pair (e.g., artemisinin + lumefantrine) would have provided a useful benchmark to assess the relative impact of the drug interactions described.

      Strengths:

      This study demonstrates the following strengths:

• The use of a pulsed in vitro assay that is more physiologically relevant than the traditional 48h or 72h assays

• The use of small-molecule probes H-FluNox and Ac-H-FluNox to detect reactive cytosolic heme, demonstrating that quinolines render heme inert and thereby block DHA activation.

• The evaluation of not only traditional combinations but also triple-drug combinations and next-generation peroxides like OZ439. This broad scope increases the study's relevance to current treatment strategies and future drug development.

      • By using the K13 wild-type parasites, the study suggests that resistance phenotypes can emerge from drug-drug interactions alone, without requiring genetic resistance markers.

      Weaknesses:

      • The study would benefit from a future characterization of the molecular basis for the observed heme inactivation by quinolines to support this hypothesis - while the probe experiments are valuable, they do not fully elucidate how quinolines specifically alter heme chemistry at the molecular level.

• The suggestion of alternative combinations that show synergy could have improved the significance of the work. The in vitro study did not include pharmacokinetic/pharmacodynamic modeling, so it leaves open questions about how the observed antagonism would manifest under real-world dosing conditions, necessitating future work based on these findings.

    3. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      We appreciate the positive assessment. We recognize that since all of the work in this manuscript was done in vitro, there are reasonable concerns about the translatability of these data to clinical settings. These results should not directly inform malaria policy, but we hope that these data bring new considerations to the approach for choosing strategic antimalarial combinations. We have modified the manuscript to clarify this distinction.

      Public Reviews

      Reviewer #1 (Public Review):

We thank the reviewer for their thoughtful summary of this manuscript. It is important to note that DHA-PPQ did show antagonism in RSAs. In this modified RSA, 200 nM PPQ alone inhibited growth of PPQ-sensitive parasites by approximately 20%. If DHA and PPQ were additive, then we would expect that addition of 200 nM PPQ would shift the DHA dose-response curve to the left and result in a lower DHA IC50. Please refer to Figure 4a and b as examples of additive relationships in dose-response assays. We observed no significant shift in IC50 values between DHA alone and DHA + PPQ. This suggests antagonism, albeit not to the extent seen with CQ. We have modified the manuscript to emphasize this point. As the reviewer pointed out, it is fortunate that despite being antagonistic, clinically used artemisinin-4-aminoquinoline combinations are effective, provided that parasites are sensitive to the 4-aminoquinoline. It is possible that superantagonism is required to observe a noticeable effect on treatment efficacy (Sutherland et al. 2003 and Kofoed et al. 2003), but that classical antagonism may still have silent consequences. For example, if PPQ blocks some DHA activation, this might result in DHA-PPQ acting more like a pseudo-monotherapy. However, as the reviewer pointed out, while our data suggest that DHA-PPQ and AS-ADQ are “non-optimal” combinations, the clinical consequences of these interactions are unclear. We have modified the manuscript to emphasize the latter point.

While the Ac-H-FluNox and ubiquitin data point to a likely mechanism for DHA-quinoline antagonism, we agree that there are other possible mechanisms to explain this interaction. We have addressed this limitation in the discussion section. Though we tried to measure DHA activation in parasites directly, these attempts were unsuccessful. We acknowledge that the chemistry of DHA and Ac-H-FluNox activation is not identical and that caution should be taken when interpreting these data. Nevertheless, we believe that Ac-H-FluNox is the best currently available tool to measure “active heme” in live parasites and the best available proxy to assess DHA activation. These points are now addressed in the discussion section. Both in vitro and in-parasite studies point to a role for CQ in modulating heme, though an exact mechanism will require further examination. Similar to the reviewer, we were perplexed by the differences observed between in vitro and in-parasite assays with PPQ and MFQ. We proposed possible hypotheses to explain these discrepancies in the discussion section. Interestingly, our data correlate well with hemozoin inhibition assays in which all three antimalarials inhibit hemozoin formation in solution, but only CQ and PPQ inhibit hemozoin formation in parasites. In both assays, in-parasite experiments are likely to be more informative for mechanistic assessment.

      It remains unclear why K13 genotype influences RSA values, but not early ring DHA IC50 values. In K13<sup>WT</sup> parasites, both RSA values and DHA IC50 values were increased 3-5 fold upon addition of CQ. This suggests that CQ-mediated resistance is more robust than that conferred by K13 genotype. However, this does not necessarily suggest a different resistance mechanism. We acknowledge that in addition to modulating heme, it is possible that CQ may enhance DHA survival by promoting parasite stress responses. Future studies will be needed to test this alternative hypothesis. This limitation has been acknowledged in the manuscript. We have also addressed the reviewer’s point that other factors, including poor pharmacokinetic exposure, contributed to OZ439-PPQ treatment failure.

      Reviewer #2 (Public Review):

We appreciate the positive feedback. We agree that there have been previous studies, many of which we cited, assessing interactions of these antimalarials. We also acknowledge that previous work, including our own, has shown that parasite genetics can alter drug-drug interactions. We have added the reviewer's recommended citations to the list of references that we cited. Importantly, our work was unique not only for utilizing a pulsing format, but also for revealing a superantagonistic phenotype, assessing interactions in an RSA format, and investigating a mechanism to explain these interactions. We agree with the reviewer that implications from this in vitro work should be drawn cautiously, but hope that this work contributes another dimension to critical thinking about drug-drug interactions for future combination therapies. We have modified the manuscript to temper any unintended recommendations or implications.

      The reviewer notes that we conclude “artemisinins are predominantly activated in the cytoplasm”. We recognize that the site of artemisinin activation is contentious. We were very clear to state that our data combined with others suggest that artemisinins can be activated in the parasite cytoplasm. We did not state that this is the primary site of activation. We were clear to point out that technical limitations may prevent Ac-H-FluNox signal in the digestive vacuole, but determined that low pH alone could not explain the absence of a digestive vacuole signal.

      With regard to the “reproducibility” and “mechanistic definition” of superantagonism, we observed what we defined as a one-sided superantagonistic relationship for three different parasites (Dd2, Dd2 PfCRT<sup>Dd2</sup>, and Dd2 K13<sup>R539T</sup>) for a total of nine independent replicates. In the text, we define that these isoboles are unique in that they had mean ΣFIC50 values > 2.4 and peak ΣFIC50 values >4 with points extending upward instead of curving back to the axis. As further evidence of the reproducibility of this relationship, we show that CQ has a significant rescuing effect on parasite survival to DHA as assessed by RSAs and IC50 values in early rings.
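For readers unfamiliar with the metric, the ΣFIC50 values cited here come from a simple sum of fractional inhibitory concentrations across the two drugs in a combination. The sketch below is a minimal illustration of that arithmetic; the IC50 values are hypothetical and not drawn from the study.

```python
def sum_fic50(ic50_a_combo, ic50_a_alone, ic50_b_combo, ic50_b_alone):
    """SigmaFIC50 = IC50(A in combination)/IC50(A alone)
                  + IC50(B in combination)/IC50(B alone).
    Values near 1 are additive; values well above 1 indicate antagonism."""
    return ic50_a_combo / ic50_a_alone + ic50_b_combo / ic50_b_alone

# Additive: each drug needs half its solo IC50 in combination (illustrative nM values).
additive = sum_fic50(5.0, 10.0, 50.0, 100.0)        # 1.0

# Antagonistic: each drug needs more than its solo IC50 in combination.
antagonistic = sum_fic50(25.0, 10.0, 150.0, 100.0)  # 4.0, at the peak threshold used here for superantagonism
```

Isobologram shape matters in addition to the summary number: the same mean ΣFIC50 can arise from a gently curved isobole or from points extending upward away from the axes, which is the pattern described as superantagonistic.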

      Reviewer #3 (Public Review):

We thank the reviewer for their positive feedback. We acknowledge that no combinations tested in this manuscript were synergistic. However, two combinations, DHA-MFQ and DHA-LM, were additive, which provides a benchmark for contextualizing antagonistic relationships. We have previously reported synergistic and additive isobolograms for peroxide-proteasome inhibitor combinations using this same pulsing format (Rosenthal and Ng 2021). These published results are now cited in the manuscript.

      We believe that these findings are specific to 4-aminoquinoline-peroxide combinations, and that these findings cannot be generalized to antimalarials with different mechanisms of action. Note that the aryl amino alcohols, MFQ and LM, were additive with DHA. Since the mechanism of action of MFQ and LM are poorly understood, it is difficult to speculate on a mechanism underlying these interactions.

      We agree with the reviewer that while the heme probe may provide some mechanistic insight to explain DHA-quinoline interactions, there is much more to learn about CQ-heme chemistry, particularly within parasites.

      The focus of this manuscript was to add a new dimension to considerations about pairings for combination therapies. It is outside the scope of this manuscript to suggest alternative combinations. However, we agree that synergistic combinations would likely be more strategic clinically.

      An in vitro setup allows us to eliminate many confounding variables in order to directly assess the impact of partner drugs on DHA activity. However, we agree that in vivo conditions are incredibly more complex, and explicitly state this.

      We agree that in the future, modeling studies could provide insight into how antagonism may contribute to real-world efficacy. This is outside the scope of our studies.

      Recommendations for the Authors:

      Reviewer #1 (Recommendations for the Authors):

      The key weaknesses identified in this manuscript are described in the 'weaknesses' section of the public review. The major one is the inconsistency around the H-FluNox response in the chemical vs biological experiments. I can't think of a simple experiment to resolve this issue, but it is good that this data is openly provided in the manuscript. I believe there could be more discussion to clarify this limitation with the current study, and the conclusions, and particularly the title, should be softened regarding the mechanism of antagonism being based on heme reactivity.

      We have softened the title and conclusions to take into account the limitations of our studies.

(1) Please double-check the definitions for isobologram interpretation. In most antimicrobial interaction studies, I see the threshold for antagonism at a sumFIC50 of 1.5, or even 2, while 1.25 is often interpreted as additive in many studies.

      We acknowledge that different studies use various cutoff values. Our interpretations for additive versus antagonistic versus superantagonistic were based not only on mean ΣFIC50 values, but also isobologram shape. For example, the flat isoboles for MFQ-DHA were clearly distinct from the curved isoboles of PPQ-DHA. It is unclear what cutoff value(s) would be most clinically relevant.

      (2) For the MFQ-PPQ interaction study, please make it clear that these drugs have very long half-lives (weeks), so the 4 h pulse assay isn't really relevant to their overall activity. It probably shows a slower onset of action, but there is plenty of drug remaining for many days in the clinical scenario, so perhaps the data from the traditional 48h assay is more relevant. The same consideration applies to OZ439, which may impact the interpretation of that data.

      We have now included the half-lives of these compounds in the discussion section. Our intent was to use a pulsing format to make these isobolograms comparable with the other assays. It is important to note that pulses can reveal stronger phenotypes that might be missed with traditional methods. Thus, while 48 h assays may better mimic in vivo conditions, they could also mask important phenotypes.

      Reviewer #3 (Recommendations for the Authors):

      I have included most of my concerns in the public review. Below are some additional specific points for consideration:

(1) A synergistic combination should be included as a control (e.g., artemisinin + lumefantrine) to contextualize the degree of antagonism observed. The experimental design should include some synergistic profiles for comparison; adding a few experiments with a synergistic control is needed.

      Both MFQ-DHA and LM-DHA combinations were additive, which provides context for antagonistic combinations. This is now stated in the results section pertaining to Figure 1. We have also included a reference to our previous publication in which we demonstrated that proteasome inhibitor-peroxide combinations are synergistic to additive using this same pulsing format.

      (2) Consider in vivo validation or pharmacokinetic/pharmacodynamic modeling to strengthen the translational relevance of the findings when it comes to doses and the IC50 correlations.

      We agree that this would be useful to do in future, but it is outside the scope of the current study.

      (3) It would be beneficial to include a discussion section on how the findings are generalizable to different Plasmodium falciparum genotypes (3D7, Dd2, MRA-1284) and their relevance.

      Findings were consistent across three parasite backgrounds depending on PfCRT genotype. This point has been included in the discussion section. The background of these parasites is also provided in Table 1.

      (4) Potential evaluation criteria to understand where certain combinations should be reconsidered can be included as a suggestion for the wider audience.

      Our in vitro studies suggest that pulsing isobolograms would be a useful assay to include when evaluating combination therapies. While we believe that synergistic combinations would be more strategic than antagonistic combinations, we cannot provide evaluation criteria or make recommendations for reconsidering currently used combinations.

      (5) Further elaborate on the mechanistic basis of heme inactivation by quinolines. If data are available, please include more data on the specificity of the process.

Despite our best efforts, we were unable to evaluate quinoline-heme interactions in parasites. Even in vitro, this interaction has remained elusive for decades. We agree that this would be an important future step towards supporting a specific mechanism for quinoline-DHA antagonism.

    1. Reviewer #2 (Public review):

      Summary:

      In this study, the authors identify a previously uncharacterised regulator of mitochondrial function using a genetic screen and propose a role for this protein in supporting mitochondrial protein production. They provide evidence that the protein localises to mitochondria, interacts with components of the mitochondrial translation machinery, and is required for normal heart function in an animal model.

      Strengths:

      A major strength of the work is the use of multiple independent approaches to assess mitochondrial activity and protein production, which together provide support for the central conclusions. The in vivo data linking loss of this factor to impaired heart function are particularly compelling and elevate the relevance of the study beyond a purely cell-based context.

      Weaknesses:

      Given prior reports placing this protein outside mitochondria, its mitochondrial localisation would benefit from more rigorous and quantitative validation, and the proposed mechanism of the interaction with the mitochondrial translation machinery remains only partially explored. In addition, the physiological analysis is largely limited to the heart, leaving open questions about how broadly this pathway operates across tissues.

      Major comments:

(1) Evidence for mitochondrial localization of EOLA1

EOLA1 has previously been reported as a nuclear and cytosolic protein and is not annotated in MitoCarta 3.0, making rigorous validation of its mitochondrial localization particularly important. Although the authors provide several lines of evidence, interpretation is complicated by the use of different cell lines across localization, interaction, and functional experiments. Greater consistency in the cellular models used would strengthen the conclusions. The immunofluorescence analysis of tagged EOLA1 would also benefit from quantification across more cells and the inclusion of an additional mitochondrial marker (e.g., an outer membrane marker such as TOM20), as HSP60 staining can vary with mitochondrial state.

(2) Normalization of OCR measurements

Clarification of how Seahorse oxygen consumption rate measurements were normalized (e.g., cell number or protein content) would aid interpretation, particularly given potential effects of Eola1 loss on cell growth.

(3) Linking interaction data to functional phenotypes

Loss-of-function analyses are performed in mouse cell lines, whereas localization and interactome studies are conducted in human HEK293T cells. The absence of a human EOLA1 knockout model makes it difficult to directly connect the interaction data to the observed functional phenotypes. Additional validation or discussion of species conservation would improve clarity.

(4) Mechanistic interpretation of the EOLA1-TUFM-12S rRNA interaction

The identification of TUFM and 12S mt-rRNA as EOLA1 interactors is an interesting finding; however, the basis for prioritizing TUFM among the many mitochondrial proteins identified in the interactome is not fully explained. Providing enrichment statistics and functional categorization of mitochondrial interactors would increase transparency. In addition, the proposed role of the ASCH domain in RNA binding would be strengthened by structure-informed or mutational analysis of the conserved RNA-binding motif.

(5) Interpretation of mitochondrial translation and protein abundance data

Several assays supporting impaired mitochondrial translation would benefit from additional controls and quantification. The de novo mitochondrial translation assay (Fig. 3h) is not quantified, making it difficult to assess the magnitude and reproducibility of the effect. In addition, western blots showing reduced levels of mitochondrially encoded OXPHOS subunits (Figure 3g) lack a mitochondrial loading control (e.g., TOM20 or VDAC). Since loss of EOLA1 may affect mitochondrial mass, normalization to a mitochondrial marker is necessary. Relatedly, it would be informative to assess whether steady-state levels of mitoribosomal proteins (e.g., MRPS15, MRPL37) and nuclear-encoded OXPHOS subunits are altered upon Eola1 loss, both in knockout cell lines and in the knockout mouse.

(6) Physiological scope of the in vivo analysis

The cardiac phenotype observed in the whole-body Eola1 knockout mouse is compelling, but the focus on a single tissue limits interpretation of EOLA1's broader physiological role. Examination of additional high-energy-demand tissues would help clarify whether the observed effects are heart-specific or more general. In addition, the presence of residual EOLA1 protein bands in western blots (Figure 4a) and remaining Eola1 transcripts in qRT-PCR analyses (Extended Figure 4e) from knockout tissues should be addressed. The authors should clarify whether these signals reflect incomplete knockout, alternative isoforms, antibody cross-reactivity, or technical background.

(7) Relationship to previously reported MT2A interaction

Given prior reports of EOLA1 interaction with MT2A, a brief comment on whether MT2A was detected in the authors' co-immunoprecipitation experiments and how this relates to the proposed mitochondrial role would be useful.

    2. Reviewer #3 (Public review):

      The authors identified EOLA1 in a CRISPR/Cas9 screen for essential mitochondrial genes in a mouse B16-F10 cell line; however, no information on the library used for this screen or the list of all identified essential genes is provided. What was the p-value for EOLA1 in Figure 1b?

      The authors show that EOLA1 is indeed a mitochondrial protein (using both mouse and human cell lines). It is valuable that the authors use different cell lines to investigate the function of this protein; however, this also presents a challenge, as four different cell lines (two mouse and two human) are used across individual experiments, with no consistency between them. Knock-out (KO) experiments were performed in mouse cell lines only, and human cell lines were used in overexpression experiments, in which EOLA1 was tagged with FLAG-HA. It would be beneficial if a knock-out were also generated in a human cell line to confirm the effect on the expression of mitochondria-encoded proteins, along with a rescue experiment in which the EOLA1 protein is reintroduced into KO cells.

      Functional analysis of EOLA1: The authors performed affinity immunoprecipitation of FLAG-HA-tagged EOLA1 from stably overexpressing cells, and identified 202 co-immunoprecipitating proteins, of which 71 were known mitochondrial proteins; however, no list of these proteins is provided. Why did the authors choose TUFM? Were any mitochondrial ribosomal proteins co-immunoprecipitated, if EOLA1 is suggested to regulate translation? Were levels of TUFM affected in EOLA1-KO cells?

The authors continued to analyze mitochondrial ribosomes using sucrose gradient fractionation and in vitro mitochondrial translation. However, there are several technical problems with the presented data: it has been established that mitochondrial ribosomes do not form polysomes in mammalian cells but rather perform translation as monosomes. The authors indirectly confirm this: almost no 12S or 16S rRNA (Fig. 3f) or MRP proteins (Extended data 3c) are present in "polysome" fractions. Although 12S and 16S rRNAs are indeed decreased in monosome fractions, the levels of mRNAs are not different between KO and WT cells, and neither is the migration of mitochondrial ribosomal proteins. As no loading control (such as SDHA or VDAC) is provided for the sucrose gradient blots, it is not possible to assess the overall levels of mitochondrial ribosomes. The gel presented for mitochondrial translation is of poor quality, as it is impossible to identify any of the expected 13 polypeptides. Although the intensity of the signal is weaker for the KO, so is the intensity in the Coomassie-stained portion of the gel. A better-quality gel and quantification need to be provided to support the claims.

      What is the difference between endogenous and exogenous RIP-qPCR? EOLA1 pulled down 12S rRNA without crosslinking (Figure 3d) or with UV crosslinking (Figure 3e); however, both 12S and 16S rRNAs were enriched in UV-crosslinked cells (Figure 3c) and by UV-RIP-seq (Extended data 3b, although no control is provided here). No discussion of this observation is offered. Is it possible that EOLA1 plays a role in the maturation of the mito-ribosome, rather than in translation? Does EOLA1 co-migrate with the mito-ribosome on sucrose gradients?

      Altogether, there is insufficient evidence to support the conclusion that EOLA1 plays a role in mitochondrial translation.

      To investigate EOLA1's biological function, the authors created a whole-body EOLA1-/- mouse that exhibited no overall developmental abnormalities; however, it presented with abnormal cardiac function. This is an ideal model in which to confirm the prior observations made in cellular models; yet, apart from one western blot for three mitochondria-encoded subunits, no other experiments were provided (such as measurements of 12S or 16S rRNA levels, TUFM levels, ribosome profiles, mitochondrial translation, OXPHOS assembly, or respirometry).

      In Figure 2g-i, TEM images are presented, but the method is not described, no information on the cells used is provided, and it is unclear how circularity was determined. KO cells certainly look abnormal; however, are the authors sure that the indicated structures are mitochondria? They more closely resemble autophagosomes/lysosomes with lamellar inclusions.

    1. One of the most striking paradoxes inherent in writing is its close association with death. This is hinted at in the Platonic charge that writing is inhuman, thing-like, and destructive of memory. It is also very evident in countless references to writing (or print) that can be found in printed dictionaries of quotations, from 2 Corinthians 3:6, "The letter kills, but the spirit gives life," to Horace's mention of his three books of Odes as a "monument" (

      As a reader, I reflect that "writing is the monument of memory," because it creates a record of what we live, build, or think. Yet in the end it is an industrialized product, because it is an idealized representation of our thought that lacks context and cannot be made as complex as our thoughts themselves. For me, writing is the written result of our reflections; but in order to arrive at the specific ideas I found valuable enough to communicate in writing, I must have gone through a thinking process of days, weeks, or months that was never recorded. Writing is the only place where spoken words can exist.

    2. It is a thing, a manufactured product. The same, of course, is said of computers. Second, Plato's Socrates claims, writing destroys memory. Those who use it will become forgetful, relying on an external resource for what they lack in internal resources. Writing weakens the mind. Today, parents and others fear that pocket calculators provide an external resource for what ought to be the internal resource of memorized multiplication tables. Calculators weaken the mind, relieving it of the work that keeps it in shape. Third, a written text produces no answers. If you ask a person to explain his or her words, you can get an explanation; if you ask a text, you get back nothing but the same, often stupid, words that prompted the question in the first place.

      Here the arguments are developed around Plato's claim that writing "is inhuman, pretending to establish outside the mind what in reality can only exist within it." The arguments supporting this claim: 1. It is a thing, a manufactured product; the same is said of computers. 2. Writing destroys memory; the example given is that calculators, an external resource, caused memorized multiplication tables, an internal resource, to be forgotten. 3. A written text produces no answers: a person asked in person to explain their words could do so, but a written text cannot, because the author is not in front of us. 4. The written word cannot defend itself the way natural spoken words can. The text notes that real speech and thought are embedded in a back-and-forth context, whereas the written word is passive, outside that context, unreal and artificial, like computers.

    Annotators

    1. Synthesis Document: The FUSÉ Project – A Structural Approach to Success for At-Risk Students

      Executive Summary

      The FUSÉ project (Formation à l'utilisation de stratégies efficaces pour l'engagement: training in the use of effective strategies for engagement) is an innovative initiative implemented at Carrefour secondary school (Centre de services scolaire des Draveurs) to counter early school dropout.

      The project targets first-cycle secondary students in situations of great vulnerability, particularly those who have Grade 6 attainments but are failing several subjects required for certification.

      Starting from the finding that traditional grade repetition produced no positive results (a 33% rate of leaving without a diploma), the administration put in place a rigorous structure that replaces the culture of repetition with intensive support based on self-determination and immediate success.

      After one year of implementation, the results are convincing: of 39 targeted students, 20 passed into Secondary 3, thereby avoiding less qualifying training trajectories.

      The project rests on mobilizing complementary services, reorganizing schedules, and using a dedicated headquarters: the "Bistrado."

      --------------------------------------------------------------------------------

      1. Context and Problem

      1.1 A finding of systemic failure

      Carrefour secondary school, located in a disadvantaged urban area, serves about 2,000 students. Before FUSÉ was implemented, the school faced major challenges:

      High dropout rate: 33% of regular-stream students leaving without a diploma, against a Quebec average of 24.6% for comparable settings.

      Early dropout: The typical dropout profile emerged as early as age 15, often following a repetition of the first year of secondary school.

      Ineffectiveness of repetition: Data showed that students repeating Secondary 1 obtained results lower than on their first attempt, while developing increased behavioral and motivational problems.

      1.2 The urgency to act

      In March 2024, projections indicated that 42 of 200 regular-stream students were failing at least three subjects required for certification.

      Faced with pressure from staff for mass repetition or a transfer to special education (not justified by the students' academic attainments), the administration chose to break with established practice.

      --------------------------------------------------------------------------------

      2. Foundations and Vision of the FUSÉ Project

      The project rests on a philosophy of "creating the possible" when traditional methods fail.

      2.1 Core objectives

      Maintain the academic trajectory: Prevent students from being directed prematurely toward pathways such as the FMS (Formation menant à l'exercice d'un métier semi-spécialisé, training leading to a semi-skilled trade).

      Foster self-determination: Base intervention on the fundamental needs of belonging, relationship, and competence.

      Reverse the effort: Make the student the main actor in their own success, rather than having adults "work harder than the student."

      2.2 Theoretical framework and levers

      The project draws on existing models such as:

      • The Check & Connect approach (used in the second cycle under the name "Boussole éducative").

      • The self-determined intervention plan, supported by training from the school service centre's educational consultant.

      • The use of evidence-based data to identify risk and protective factors.

      --------------------------------------------------------------------------------

      3. Operational Structure and Implementation

      FUSÉ's success rests on a "rock-solid" structure rather than on a mere culture change imposed on teaching staff.

      3.1 The "Bistrado": Headquarters

      The school's student bistro is transformed every morning into a centralized service hub for FUSÉ students. It is a reassuring place, away from the bustle of the classrooms, where the daily check-in takes place.

      3.2 The Pivot Worker

      Each student is paired with a pivot worker (rehabilitation officer, remedial education specialist, resource teacher, or addiction counsellor). This person:

      • Centralizes communications.

      • Provides a daily check-in (the "Soleils FUSÉ").

      • Tracks the student's personal goals.

      3.3 Data analysis and subgroups

      Students are grouped by the nature of their needs while remaining integrated in their original profile or program (no closed classes):

      | Subgroup profile | Nature of difficulties |
      | --- | --- |
      | Behavior | Disruptive behavioral manifestations. |
      | Motivation / Attendance | High absenteeism, disengagement. |
      | Learning | Serious academic gaps in French or mathematics. |

      --------------------------------------------------------------------------------

      4. The Student Experience and Engagement

      4.1 The engagement contract

      Participation is voluntary. The student must sign an engagement contract. If engagement lapses, the student may be withdrawn from the project, with the possibility of returning when they feel ready.

      4.2 The daily routine

      "Soleil" period (8:40-9:00): Check-in at the Bistrado, breakfast for students from disadvantaged backgrounds, and setting of daily or weekly goals (e.g., arriving on time, participating in class).

      Goal tracking: Successes are recognized with "draw tickets" and certificates of recognition, encouraging emulation.

      Differentiated schedule: For some students, subjects such as arts, English, or CCQ are temporarily lightened to allow intensive catch-up periods in French and mathematics with resource teachers.

      --------------------------------------------------------------------------------

      5. Results and Impact

      5.1 Statistics for the first cohort (39 students)

      The results exceeded the administration's initial expectations:

      20 students entered regular Secondary 3.

      4 students were directed to the FMS.

      2 students to the Pré-DEP.

      1 student to adult general education.

      8 students repeated Secondary 2 (but with better support).

      Only 4 dropouts (including 2 during the year).

      5.2 Qualitative gains

      Improved school-family relationship: Parents, often discouraged, regained hope thanks to communication focused on the positive.

      Organizational coherence: Staff now share a common language around self-determination.

      Social flourishing: Participation in incentive activities (e.g., theatre outings) and student volunteering within the school.

      --------------------------------------------------------------------------------

      6. Evolution: FUSÉ 2.0 and Outlook

      Building on its success, the project is entering its second year with major adjustments:

      1. Multi-level teaching: Creation of French and mathematics groups for students with deep gaps (Grade 5 level), while avoiding segregation.

      2. Expansion to Secondary 1: Early identification of fragile students from the start of the year to prevent failure.

      3. Systemic integration: Merging the FUSÉ approach into the school's overall "Boussole éducative" to ensure a smooth transition between the first and second cycles.

      4. Special education: Reflection on applying the FUSÉ approach to special-education students so as to aim for steady progress rather than mere end-of-year success.

      The FUSÉ project shows that by reallocating existing resources and rigorously structuring support, it is possible to radically change the trajectory of students the system once considered lost.

    1. Advertising fences pdf [3 MB]: template for designing fence graphics

      Fence panels: graphic templates for fence panels; tents in all sizes

    1. Only three Member States met the December 2020 deadline for transposing the EECC into national law. The transposition in all 27 Member States was only completed in August 2024, with the Commission supporting the Member States in the implementation process.

      The EECC is a directive, and transposition in the Member States took very long: only 3 met the December 2020 deadline, and all 27 finished only by August 2024, with Commission support.

    1. Framework of Reference on Control Measures in Schools: Summary Note

      https://www.youtube.com/watch?v=D43t0L_G7-Y

      Executive summary

      This reference document, the product of a collaboration between the ministère de l'Éducation (MEQ) and the Fédération des centres de services scolaires du Québec (FCSSQ), sets out national orientations on the use of control measures (restraint and seclusion) in educational institutions.

      Its fundamental premise is that these measures should be considered only as a last resort, exclusively in emergency situations where the safety of the student or of others is imminently threatened.

      The framework favors a preventive, educational approach structured around the Multi-Tiered System of Supports (SSPM), aimed at minimizing recourse to force or coercion.

      It clarifies legal and professional responsibilities, notably since the October 2023 regulatory changes authorizing certain professionals (psychologists and psychoeducators) to decide on the use of restraint measures.

      Implementation rests on a rigorous five-step process, including the development of specific protocols (school-level or student-level) and the application of post-incident procedures to ensure well-being and the ongoing re-evaluation of practices.

      1. Foundations and guiding principles

      Recourse to control measures is strictly framed by legal references (Charter of Rights and Freedoms, Civil Code, Education Act) and must respect the student's dignity, integrity, and safety.

      Fundamental principles of intervention:

      Last resort: Used only when preventive interventions and alternative measures have failed.

      Imminent danger: The threat must be characterized by its foreseeability, its immediacy, and the seriousness of its consequences.

      Minimal constraint: The measure must be the least restrictive possible and last as briefly as possible (ceasing as soon as the danger has passed).

      Respect and dignity: The intervention must be carried out with care and human warmth, under constant supervision.

      Mandatory follow-up: Each application must be followed by a post-incident review to evaluate effectiveness and adjust future interventions.

      2. Definitions of control measures

      The framework distinguishes several types of intervention to ensure a shared understanding across the school network.

      | Type of measure | Description | Examples |
      | --- | --- | --- |
      | Physical restraint | Use of human force to immobilize or direct a student against their will. | Holding the arm of a student who resists, or restraining a student who is hitting. |
      | Mechanical restraint | Use of equipment or material to limit movement. | Safety mittens, restraint vests in school transport. |
      | Removal of equipment | Confiscation of a device that normally compensates for a disability. | Removing the brakes from a wheelchair or confiscating a walker. |
      | Seclusion | Confining the student in a place they cannot freely leave. | Holding the handle of a closed door or physically blocking the exit. |

      Note: The administration of chemical substances for control purposes requires a medical prescription and is not covered in this document.

      3. Operational framework: Planned vs. unplanned intervention

      The framework distinguishes two contexts of application, with direct implications for professional responsibilities.

      | Characteristic | Unplanned intervention | Planned intervention |
      | --- | --- | --- |
      | Context | Unusual, unpredictable behavior. | Known behavior likely to recur. |
      | Management tool | School protocol (universal). | Student protocol (personalized, tied to the intervention plan). |
      | Decision (restraint) | Non-reserved activity (emergency). | Activity reserved for authorized professionals. |
      | Decision (seclusion) | Non-reserved activity. | Non-reserved activity (but regulated). |
      | Application | Non-reserved activity. | Non-reserved activity. |

      4. The five-step intervention process

      To ensure safety and respect for rights, a systematic structure is proposed:

      1. Protocol development: Preventive establishment of guidelines (school committee for the school protocol; school team and parents for the student protocol).

      2. Application of preventive and alternative interventions: Use of educational strategies to avert the crisis (diversion, securing the environment).

      3. Danger assessment: Rigorous analysis of the situation against the criteria of foreseeability, immediacy, and seriousness.

      4. Application of the control measure: Implementation according to the protocol's guidelines and professional recommendations.

      5. Post-incident procedures: Debriefing, establishing the facts, support for witnesses (students and adults), and revision of the protocol.

      5. Prevention and school climate

      Prevention is the "first course of action." The document stresses the importance of the Multi-Tiered System of Supports (SSPM):

      Tier 1 (Universal): Proactive support for all students (healthy climate, clear rules, positive relationships).

      Tier 2 (Targeted): Additional support for at-risk students (self-regulation, social skills).

      Tier 3 (Intensive): Individualized interventions for serious or persistent difficulties.

      The CSSMB's "3 x 3" model is cited as an example, crossing intervention intensity with the individual, school, and family spheres.

      6. Key roles and responsibilities

      The success of this framework rests on shared responsibility:

      School administration: Coordinates protocol development, ensures staff training, and watches over everyone's physical and psychological well-being.

      Authorized professional staff (occupational therapists, nurses, physicians, physiotherapists, psychoeducators, psychologists): Carry out the clinical assessment, decide on the measure in a planned context, and issue recommendations.

      School staff: Collaborate in analyzing behaviors, apply measures in accordance with the protocols, and inform the administration.

      Parents and students: Must be actively involved in developing the student protocol. Free and informed consent is required for any planned measure.

      Critical quotations and information

      "A control measure [...] is a last-resort intervention that should be carried out exclusively in an emergency, that is, when the safety of staff or students is threatened." — Bernard Drainville, Minister of Education

      "The use of a control measure is not advocated in schools. [...] It must never be used as an educational or punitive measure, or to facilitate supervision of the student." — Contextual source, Section 1.1

      "Recourse to control measures can cause physical and psychological injury with potential long-term implications." — Contextual source, Section 1

    1. – both for the whole corpus and for individual elements –

      The dashes give off ChatGPT vibes. Maybe use commas instead?

      A CSV file with a table header in the first line; each following line then contains a sequential ID, then a word, followed by further linguistic information: the base form ("lemma") and the part of speech ("POS", "part of speech")

      Here you can also see nicely the connection between text and table. The two cannot always be separated so easily.

    2. Example of XML encoding following the TEI standard. The file's header section contains the <teiHeader> with metadata, followed by the <text> element, in which the text is stored along with structural information (e.g., <head> for a heading)

      Great!

    3. Moreover, creating these files differs across the relevant editors, such as the widely used Oxygen XML Editor, Notepad++, or Atom, which makes getting started harder.

      Are these open source? Oxygen, at least, is not. A note on this might be useful.

    4. The flu continues to rage: increase in severe cases in Berlin. The number of flu cases has risen markedly again in Greater Berlin over the past two days. The department stores and other shops, the war industries and private firms complain that an excessive number of employees must call in sick, and the number of flu patients has also risen significantly at the post office and on the tramways. Example of plain text without any formatting, usually stored as a TXT file

      Very nice; feel free to include more examples.

    1. Collections of machine-readable text documents compiled according to specific criteria.

      Make this clearer. Corpora are fundamental to the OER; this point really needs to stick.

    1. In the preceding chapter we explicated the research question of this case study and illustrated it with historical visualizations. To make the research question addressable by a quantitative-digital analysis, we carried out an operationalization through which we can define a measurement procedure whose result can count as an answer to the question. In the next chapter we will build the research corpus on which we will carry out that measurement.

      Very nice use of "we."

    2. Every operationalization brings debatable limitations with it. Critical reflection on these limits is an essential part of Digital Humanities projects.

      Maybe add something about traditional methods also having limitations. Otherwise the takeaway is that DH is highly self-critical and everything else is not.

    1. A retail strategy indicates how a retailer will deal effectively with its environment, customers, and competitors.2 As the retail management decision-making process (discussed in Chapter 1) indicates, the retail strategy (Section II) is the bridge between understanding the world of retailing (Section I) and more tactical merchandise management and store operations activities (Sections III and IV) undertaken to implement the retail strategy. The first part of this chapter defines the term retail strategy and discusses three important elements of retail strategy: (1) the target market segment, (2) the retail format, and (3) the retailer’s bases of sustainable competitive advantage. Then we outline approaches that retailers use to build a sustainable competitive advantage. After reviewing the various growth opportunities, including international expansion, that retailers can pursue, the chapter concludes with a discussion of the strategic retail planning process.

      Retail strategy is essentially the game plan that a retailer adopts in order to be successful. It describes how the business will react to the environment, how it will satisfy the needs of the customers, and how it will differentiate itself from other retailers. It links the overall understanding of retailing to the operational decisions such as merchandise and store operations. A retail strategy has three key components: target market, retail format, and competitive advantage.

    1. Who will be measuring the extent of this so-called commitment, how long is it expected to take, and what will be implemented based on the findings? Often, these fundamental questions are left up to the very same white leaders of the entity, who eagerly take credit for what is only a PR statement and not real change.

      These statements regarding the little progress made in wage equality and equal representation were discussed in Module 3 in the YouTube video Democracy in the Workplace. The video reminds us of the growth of corporations and capitalism in the 1970s, which left workers behind, particularly people of color. As productivity grew, salaries did not. Little compassion was shown for workers; this article highlights that this remains the case.

    1. Respondents to the Musicians’ Union survey noted “a lack of confidence from employers in [female musicians’] abilities” and that “very often women were asked if they were fans, rather than musicians or it was assumed they must be singers not instrumentalists”.3


    2. unjustifiable limitations in opportunity, a lack of support, gender discrimination and sexual harassment as well as the “persistent issue of equal pay” in a sector dominated by self-employment.3

      Women also face fewer opportunities, less moral support, and unequal pay.

    1. Table 2 illuminates the breakdown of artist gender by song genre across all 13 years.3 Three trends are readily apparent. First, women artists were relegated almost exclusively to Pop and R&B genres. Roughly a third (35.9%) of all Pop songs and a quarter of R&B songs (25.2%) credited women as artists. Second, approximately 20% or less of all artists working in Dance/Electronic, Country, Alternative, or Hip-Hop were women. Finally, no women participated as artists in Música Mexicana. Music is still largely a boy's club, with lanes carved out for women in only specific areas of creativity and art.

      Discussion of table 2

    2. There were only 3 women songwriters who worked as often as 6 of the 15-most working men.

      Only 3 women songwriters who worked as often as 6 of the 15-most working men.

    1. If "7 out of 10" means something very different to different people, that's a fundamental challenge for the WELLBY as a tool for comparing interventions.

      This is a bit too simple. Note that the WELLBY, as used in the simplest approaches, mainly requires differences to be comparable, and even linear, across individuals: moving from 1 to 3 is valued the same as moving from 4 to 6 or from 8 to 10, and gets the same value in this measure as moving 2 people from 3 to 4.

    1. Le Partenariat en Santé : Synthèse de trois Expériences de Terrain

      Ce document de synthèse analyse les interventions de trois équipes lors d'une session organisée par le Centre Opérationnel du Partenariat en Santé (COPS).

      Il explore la mise en œuvre concrète du partenariat en santé à travers les secteurs des soins primaires, du sanitaire et du médico-social.

      Résumé Exécutif

      L'intégration du patient et de ses proches comme partenaires actifs transforme durablement les pratiques de soin.

      Les retours d'expérience mettent en lumière une transition fondamentale : passer d'une logique de « faire pour » le patient à une logique de « faire avec » lui.

      Points clés à retenir :

      Diversité des modèles : Le partenariat s'adapte à différents contextes, de la gouvernance des structures territoriales (CPTS) à la co-construction de parcours hospitaliers spécifiques ou au soutien à domicile.

      Défis opérationnels : Le recrutement des patients partenaires, l'acculturation des professionnels, la gestion du temps commun et la pérennisation des financements constituent les principaux obstacles.

      Bénéfices mutuels : Le partenariat améliore la pertinence des soins, réduit l'isolement des familles et renforce le sens du travail pour les professionnels de santé, contribuant ainsi à une meilleure qualité de vie au travail (QVT).

      Rigueur méthodologique : Pour éviter le « tokenisme » (participation de façade), une méthodologie rigoureuse et une coordination dédiée sont essentielles.

      --------------------------------------------------------------------------------

      1. Expérience en Soins Primaires : La CPTS du Grand Pic Saint-Loup

      Les Communautés Professionnelles Territoriales de Santé (CPTS) regroupent des acteurs de santé libéraux pour mener des actions de santé publique. Dans cette expérience, le partenariat est envisagé comme une confrontation de « morceaux de réalité ».

      Niveaux d'implication

      Le partenariat au sein de la CPTS se décline sur plusieurs strates :

      Consultation : Réalisation d'enquêtes sur l'expérience des patients dans les lieux de soins non programmés pour comprendre leurs motivations de déplacement.

      Parcours de soins : Implication de patients experts dans des groupes de travail pluriprofessionnels (insuffisance cardiaque, diabète, santé orale).

      Gouvernance : Création d'un collège spécifique au sein de l'association ouvert aux patients, élus et habitants, disposant de voix consultatives au conseil d'administration.

      Freins et leviers identifiés

      | Catégorie | Détails | | --- | --- | | Freins | Confusion sémantique (multiplicité des termes : patient expert, traceur, coach) ; Difficulté de recrutement local ; Absence de statut administratif (SIRET) pour rémunérer les patients sans association. | | Leviers | Appui des médecins spécialistes hospitaliers déjà acculturés ; Création d'espaces de rencontre hors cabinets médicaux (ex: dépistage en centre commercial). |

      --------------------------------------------------------------------------------

      2. Hospital Sector Experience: Polyclinique Saint-Roch

      The "Au cœur des soins" project, run with the Tremplin association, focuses on the care pathway of children with facial clefts. It rests on close collaboration between partner parents and caregivers.

      Objectives and Methodology

      The ambition is to promote a partnership-based care relationship from the moment of diagnosis.

      1. Gathering experience: Listening to parents' lived experience.

      2. Deepening: Precisely identifying needs.

      3. Co-construction: Using participatory tools that are new to the hospital setting.

      The secret ingredient: Dedicated coordination (accounting for two-thirds of the project's funding) to organize spaces for dialogue and guarantee the rigor of the approach, thereby avoiding involving patients for appearances only.

      Observed impacts

      For families: Recognition of their role as actors and a reduced sense of isolation.

      For professionals: Better understanding of real needs. A speech therapist testifies: "I feel I work faster and more effectively... the mental load is also genuinely lighter."

      Relationships: Establishment of horizontal exchanges, which also lets patients understand caregivers' constraints.

      --------------------------------------------------------------------------------

      3. Medico-Social Sector Experience: Association AA

      The AA association (home care and support) committed to partnership after a major crisis: a toxic conflict between a care team and a family caregiver, which led to massive professional burnout (10 sick leaves among 10 employees).

      Evolution of the approach

      Initially, the association made the mistake of designing the approach among professionals only. A "backpedal" was needed to genuinely involve family caregivers and supported persons in the working groups in 2025.

      Key testimonies gathered:

      Ms Isabelle (supported person): Stresses the importance of paying attention to the request: "Sometimes the employee acts as they wish, but not as the person wishes."

      Mr Marc (family caregiver): Notes that caregivers must accept advice from third parties when they lack knowledge about the specific patient.

      Challenges specific to home care

      Fatigue: Users' participation is constrained by their health or their caregiving burden.

      Paradigm shift: Abandoning the term "prise en charge" (seen as passive) in favor of "caring for" or "accompanying."

      Transparency: Accepting direct, sometimes harsh, criticism of the quality of support.

      --------------------------------------------------------------------------------

      4. Cross-Cutting Analysis: Obstacles, Levers, and Perspectives

      Comparing the three interventions brings out constants in implementing partnership in health.

      Summary of common obstacles

      1. Acculturation: Maturity toward partnership varies widely. Some professionals see it as a challenge to their authority, others as a waste of time.

      2. Recruitment: Finding the "right patient for the right pathway," available and ready to commit over the long term, remains difficult.

      3. Timing: Aligning the schedules of independent professionals, hospital staff, and (often fatigued) patients is a constant logistical challenge.

      4. Funding: Securing lasting resources to pay for coordination time and patients' expertise is crucial.

      Success factors

      Institutional will: Strong commitment from management and supervisors is essential to overcome resistance.

      Experiential knowledge: Recognizing that knowledge drawn from living with illness complements scientific and clinical knowledge.

      Impact evaluation: Though difficult, measuring improvements in patient experience and quality of care is necessary to validate the approach over the long term.

      Conclusion on Quality of Working Life (QWL)

      A major observation emerges: partnership in health is a powerful lever for well-being at work.

      By improving the understanding of needs and reducing conflict situations, it restores meaning to professionals' missions and lowers their mental load, despite the initial time investment its implementation requires.

    1. Briefing: Perspectives and Milestones of Partnership in Health

      Executive Summary

      This briefing details the post-event perspectives of the regional day devoted to partnership in health in Occitanie.

      It is built around the coordinated action of three key entities: the Structure Régionale d'Appui (SRA), France Assos Santé Occitanie, and the Centre Opérationnel du Partenariat en Santé (COPS).

      The central objective is to turn the day's reflections into concrete actions through training, methodological support, and the provision of structuring resources.

      Highlights include integrating partnership into the evaluation of professional practices by 2026, strengthening synergy between user representatives and partner patients, and deploying digital tools to facilitate territorial networking of health projects.

      --------------------------------------------------------------------------------

      1. Strategic Orientations of the Structure Régionale d'Appui (SRA)

      The SRA reaffirms its ambition to act collectively to improve health pathways through eight priority themes, of which partnership is an integral part.

      Modes of Intervention

      The SRA's action unfolds along several operational axes:

      Information and awareness: Organizing annual regional days.

      Training and teaching: Contributing to university teaching and research.

      Methodological support: Helping evaluate practices and organizations in the field, without substituting for local actors.

      Data production: Publishing research work in the health field.

      Outlook: Horizon 2026

      A major ambition was announced for 2026: integrating the theme of partnership in health into the heart of the evaluation of professional practices (EPP). This topic, acknowledged as complex and fascinating, will be the subject of a call for expressions of interest from actors wishing to deepen this reflection.

      --------------------------------------------------------------------------------

      2. France Assos Santé Occitanie: Mobilization and Training

      As a union of accredited associations, France Assos Santé plays a federating role at both regional and national levels.

      Structure and Representation

      | Level | Number of associations | Areas covered |
      | --- | --- | --- |
      | Regional | 70 associations | People with illnesses, people with disabilities, consumers, environmental health, family associations, precarity. |
      | National | ~100 associations | Same as the regional level. |

      Missions and Resources for Users

      Information and monitoring: Observing the proper functioning of the health system and intervening in the media.

      Access to data: A website (regional and national) and an extranet dedicated to user representatives (RUs), including practical fact sheets and guides.

      Reference guide: A guide co-built with "Savoir Patient" on the facets of partner-user engagement (peer support, research, training of professionals).

      Training Program

      The organization offers a structured pathway to support RU mandates:

      Volume: 41 training days delivered last year across 7 departments.

      "RU and Partner Patients" training: A specific module aimed at improving collaboration and mutual understanding between these two types of engagement actors.

      Accessibility: Training available in person and remotely through a dedicated catalog.

      --------------------------------------------------------------------------------

      3. The Centre Opérationnel du Partenariat en Santé (COPS): Operational Support

      The COPS defines itself as a facilitator of partnership projects, working concretely alongside organizations and actors.

      Project Support

      The COPS works in pairs (including a project officer and a professional perspective) upon request via a dedicated platform. Areas of support include:

      • Medico-social care and primary care.

      • Co-construction of care pathways (e.g., hospital-at-home (HAD), oncology, mental health).

      • Strategic support and quality.

      Tools and Collaborative Platform

      The COPS participatory platform offers several freely accessible services:

      Directory and mapping: Tools for identifying partner patients or project-leading organizations, to foster autonomous networking.

      Multimedia resources: "Copcasts" (podcasts), webinars, presentation materials, and guides (including the "Engager" sheet on patient involvement).

      Training: Qualiopi-certified e-learning, with "à la carte" formats for project teams.

      --------------------------------------------------------------------------------

      4. Upcoming Milestones and Events

      The institutional calendar sets out several key steps to sustain the momentum of partnership in health:

      | Date / Period | Event / Action | Theme |
      | --- | --- | --- |
      | December 9 | Webinar | Link between partnership in health and Quality of Working Life (QWL). |
      | Coming soon | Departmental evening | Visit to the Lot (outreach initiatives, "aller vers"). |
      | Q1 2026 | Departmental evening | Meeting in the Pyrénées-Orientales (PO). |
      | During 2026 | New formats | Practice-analysis groups (mixed, patients and professionals) and co-development workshops. |

      5. Ethical Summary

      The Espace de réflexion éthique Occitanie, represented by Professor Michel Clanet, serves as a "grand témoin" (key witness).

      Its role is to analyze the place of partnership in health within the overall ethical approach, stressing that partner engagement is not merely an organizational arrangement but a deep reflection on the practice of care and respect for stakeholders.

    1. suggesting that Ptf1a Cre-mediated Sox9 misexpression has no overt effect on pancreatic development

      What are Ptf1a Cre; Sox9 OE mice?

      These are mice that were engineered so that:

      Ptf1a-Cre turns on genetic changes specifically in pancreatic acinar cells (the enzyme-producing cells).

      Sox9 OE means Sox9 is overexpressed (OE = overexpression), meaning the Sox9 gene is artificially turned on at higher-than-normal levels.

      The system likely includes:

      An HA-tag (a small detectable protein tag used to track the overexpressed protein)

      RFP (red fluorescent protein) as a marker for cells that did not undergo recombination.

      🔬 What did they observe?

      Sox9 overexpression happened mostly in acinar cells

      These cells expressed:

      The HA-tag

      Extra Sox9

      This confirms the genetic system worked where expected.

      Duct cells and endocrine cells mostly did NOT change

      They remained unrecombined

      They still expressed RFP

      Meaning the genetic modification did not activate in those cells.

      So the gene change was specific to acinar cells, which is what the researchers intended.

      🐭 What about the pancreas itself?

      In 3-week-old modified mice:

      Pancreas weight → normal

      Tissue structure (morphology) → normal

      Blood glucose levels → normal

      When compared to control mice, there were no obvious differences.

      📌 What’s the conclusion?

      Even though Sox9 was artificially overexpressed in acinar cells, it:

      Did not disrupt pancreatic development

      Did not change pancreas size

      Did not affect blood sugar levels

      In short:

      For early development (up to 3 weeks), forcing Sox9 expression in acinar cells does not cause obvious problems in the pancreas.

    2. Low-grade PanINs were uniformly SOX9+, whereas higher-grade PanIN2/3 lesions and PDA displayed heterogeneous SOX9 expression (Figure 1L; 72% of PanIN2/3 and 69% of PDA were SOX9+). These findings suggest that a SOX9+ state is associated with PDA initiation.

      SOX9+ state associated with PDA initiation.


    1. Safe to Learn and Thrive: Ending Violence in and through Education

      High-Level Summary

      Violence in educational settings is a global crisis of alarming scale, affecting approximately one billion children every year.

      Far from being isolated incidents, these forms of violence, whether physical, sexual, or psychological, form a continuum that obstructs the fundamental right to education and undermines the development of societies.

      The economic impact is colossal, with an estimated loss of 11 trillion dollars in future earnings worldwide.

      This document underscores the imperative of moving from fragmented interventions to a holistic, systemic approach.

      Education must no longer be seen only as a place where violence occurs, but as the primary lever for preventing it.

      To durably transform schools into sanctuaries of safety, prevention of and response to violence must be embedded at the very heart of education systems, not treated as an add-on responsibility.

      --------------------------------------------------------------------------------

      I. Current Situation: The Many Faces of Violence

      Violence in educational settings is a complex phenomenon that goes far beyond visible physical aggression. It manifests in several interdependent forms:

      1. Typology of violence against learners

      Physical violence: Includes fights, attacks, and corporal punishment. More than a third of students were involved in a physical fight over the past year.

      Psychological violence: Humiliation, intimidation, insults, and social exclusion. For example, 42% of LGBTQ+ youth report having been ridiculed or threatened at school.

      Sexual violence: Harassment, unwanted touching, and forced intercourse. Up to 25% of adolescents experience sexual violence, 40% of which occurs on school grounds.

      Bullying: Characterized by a power imbalance, it affects 1 in 3 learners every month worldwide.

      Technology-facilitated violence: Cyberbullying and online exploitation extend the reach of aggression beyond school walls.

      2. Institutional and structural violence

      Violence does not come only from individuals; it can be built into the system itself through:

      • Discriminatory policies (e.g., biased dress codes).

      • Inequitable teaching methods or a curriculum that excludes certain groups.

      • The normalization of violence as a disciplinary tool.

      3. Violence against education staff

      Staff are not spared. One survey found that nearly 80% of teachers experienced some form of violence at school during a school year, degrading their well-being and teaching effectiveness.

      --------------------------------------------------------------------------------

      II. Analyzing the Drivers of Violence: An Intersectional Approach

      Violence is fueled by a complex interplay of factors at multiple levels. A learner's identity (gender, disability, race, sexual orientation) often determines the nature and intensity of the violence they face.

      | Factor level | Examples of identified drivers |
      | --- | --- |
      | Individual | History of domestic violence, lack of awareness of rights. |
      | Interpersonal | Poor conflict management, absence of positive adult role models. |
      | Systemic | Lack of training in positive discipline, absence of reporting protocols. |
      | Community | Normalization of corporal punishment, influence of gangs or local conflicts. |
      | Societal | Socio-economic inequalities, weak or nonexistent legal frameworks. |
      | Normative | Harmful gender norms (valuing male toughness, female submission). |

      The gender dimension (SRGBV)

      School-related gender-based violence (SRGBV) is pervasive. Girls are more exposed to sexual harassment and forced early pregnancies, while boys face more corporal punishment and physical violence, often in the name of rigid masculinity norms.

      --------------------------------------------------------------------------------

      III. Repercussions: Beyond the School Walls

      The consequences of violence are deep and lasting, affecting not only the individual but society as a whole:

      Educational impact: Victimized students are three times more likely to feel alienated and twice as likely to miss school. This leads to lower reading and numeracy results and often to dropout.

      Mental health: Anxiety, depression, loss of self-esteem, and self-harming behaviors.

      Physical health: Increased risks of HIV, sexually transmitted infections, and unplanned pregnancies (a major driver of dropout among adolescent girls).

      Economic cost: Violence hampers human capital development, causing massive lifetime earnings losses.

      --------------------------------------------------------------------------------

      IV. The Action Framework: A Holistic Approach

      To end violence, UNESCO and its partners advocate a radical transformation based on six fundamental pillars:

      1. Curriculum and learning: Integrate comprehensive sexuality education (CSE), social and emotional learning (SEL), and violence-prevention programs to transform attitudes from an early age.

      2. School environment: Create safe physical spaces (separate toilets, lighting) and promote a culture of "positive discipline" that excludes all corporal punishment.

      3. Reporting mechanisms: Establish confidential, accessible, child-friendly systems (helplines, letter boxes, focal points).

      4. Policies and laws: Adopt national legislation explicitly banning corporal punishment (as Peru did in 2015) and promote radical inclusion (as in Sierra Leone).

      5. Partnerships and mobilization: Work with teachers' unions, parents, community leaders, and technology companies.

      6. Data and evidence: Use diagnostic tools and digital surveys (e.g., the Ma'An system in Jordan) to guide interventions on a factual basis.

      --------------------------------------------------------------------------------

      V. Outlook for Lasting Change

      The success of this transformation rests on four cross-cutting principles:

      Learner-centeredness: Prioritize safety and "do no harm."

      Trauma sensitivity: Avoid re-traumatization when supporting victims.

      Context adaptation: Recognize that solutions in conflict zones differ from those in stable urban areas.

      Transforming the teacher's role: Support teachers not only as protectors but also as individuals who need protection and ongoing training.

      Key quote

      "Since wars begin in the minds of men and women, it is in the minds of men and women that the defences of peace must be constructed." (UNESCO Constitution)

      In conclusion, ending violence in education is not only a moral and legal obligation; it is a sine qua non for building a just, inclusive, and prosperous society.

      The time has come for collective, systematic action to make every school a true haven of peace.


    1. La Protection de l’Enfance en France : Analyse de la Crise et Préconisations du CESE

      Synthèse (Executive Summary)

      Le système de protection de l’enfance en France traverse une crise profonde et structurelle qui menace ses missions fondamentales.

      Bien que le cadre législatif (lois de 2007, 2016 et 2022) soit considéré comme l'un des plus aboutis, plaçant l'intérêt supérieur et les besoins fondamentaux de l'enfant au cœur des dispositifs, un décalage alarmant persiste entre l'ambition légale et la réalité du terrain.

      Les points critiques identifiés incluent une augmentation constante des besoins (+49 % de mineurs accueillis en 20 ans), une pénurie sévère de professionnels qualifiés, et une hétérogénéité territoriale préoccupante.

      L'un des constats les plus graves est l'inexécution d'une part significative des décisions de justice destinées à protéger les enfants en danger.

      Le Conseil économique, social et environnemental (CESE) appelle à une remobilisation nationale, une gouvernance interministérielle renforcée sous l'égide du Premier ministre, et une garantie d'égalité de traitement pour tous les mineurs, incluant les mineurs non accompagnés (MNA) et les enfants en situation de handicap.

      --------------------------------------------------------------------------------

      I. Un État de Crise Structurelle et Statistique

      A. Une hausse préoccupante de la demande de protection

      Les données de l'Observatoire national de la protection de l'enfance (ONPE) et de la DREES révèlent une pression sans précédent sur les services de l'Aide Sociale à l'Enfance (ASE) :

      Chiffres clés : Au 31 décembre 2022, 344 682 mineurs et jeunes majeurs sont pris en charge.

      Évolution : Le nombre de jeunes accueillis en établissement a augmenté de plus de 50 % entre 2011 et 2022.

      Déjudiciarisation en échec : Malgré la volonté de privilégier l'administratif, 82 % des prises en charge de mineurs résultent d'une décision judiciaire.

      B. Le lien entre pauvreté et protection de l'enfance

      Il existe une corrélation forte entre la précarité économique et l'intervention de la protection de l'enfance. La France affiche un taux de pauvreté infantile de 20 % (33ème position sur 39 pays de l'UE/OCDE).

      Conséquences : 2,9 millions d'enfants vivent sous le seuil de pauvreté ; 42 000 sont sans domicile fixe.

      Coût social : Les événements traumatisants subis pendant l'enfance coûtent environ 34,5 milliards d'euros par an à la France en frais de santé et entraînent une perte d'espérance de vie de 20 ans pour les victimes.

      --------------------------------------------------------------------------------

      II. Failures of Governance and Financing

      A. National and Territorial Steering

      Current governance suffers from a lack of interministerial clarity and from major territorial disparities.

      Territorial inequalities: The rate of children in care ranges from 10 per 1,000 in French Guiana to 49 per 1,000 in the Nièvre.

      Financing: Departmental spending on the ASE reached 9.7 billion euros in 2023. The revenue sources (mainly the DMTO property-transfer taxes) are volatile and disconnected from the growth in needs.

      Contractualization: The State's financial lever remains marginal (around €140 million via programme 304) compared with departmental budgets.

      B. The Non-Enforcement of Court Decisions

      The system rests on understaffed courts (one judge follows 450 to 500 children, against an ideal caseload of 325). Owing to the shortage of places in care facilities, some placement orders are never carried out, leaving children at risk in their family environment, or are "poorly executed" in unsuitable facilities.

      --------------------------------------------------------------------------------

      III. Guaranteeing the Rights and Needs of the Child

      A. The Projet pour l'Enfant (PPE): An Unmet Obligation

      Introduced in 2007, the PPE is meant to be the "compass" of the child's care pathway, guaranteeing stability and development. Yet it is still not in effect in many departments.

      Recommendation: Make the PPE a precondition for the award of State funding.

      B. Health and Disability Care

      Children in ASE care present more frequent psychological and somatic pathologies.

      Psychological emergency: The CESE asks that every child in care be presumed to be in a state of psychological emergency, so as to ease immediate access to care (CMPP).

      Disability: Around 25% of children in care have a disability, but only a third receive appropriate medico-social support.

      --------------------------------------------------------------------------------

      IV. Particularly Vulnerable Groups

      A. Unaccompanied Minors (MNA): "Cut-Rate" Protection

      The CESE denounces an approach increasingly centered on migration policy rather than on child protection.

      Financial discrimination: The daily rate for an MNA is often €50-60, compared with €170 for other minors.

      Age assessment: The procedures are deemed cursory and rely too often on bone tests whose lack of scientific reliability is well established.

      B. Young Adults

      Leaving the system at 18 or 21 remains a brutal rupture. An Insee study indicates that a quarter of homeless people are former children in care.

      --------------------------------------------------------------------------------

      V. The Professionals: A Major Crisis of Attractiveness

      The sector suffers from staff shortages across every category (educators, foster carers, school doctors).

      Foster carers (assistants familiaux): Their numbers have fallen by 9% in 6 years.

      School medicine: Fewer than 800 doctors for 12 million pupils, which hampers early detection.

      Working conditions: Atypical hours, low pay and a feeling of "fragmented work" discourage vocations.

      --------------------------------------------------------------------------------

      VI. Summary Table of the CESE's Key Recommendations

      | No. | Theme | Main Measure |
      | --- | --- | --- |
      | 1 | Statistics | Task the GIP France Enfance Protégée with an exhaustive annual inventory of needs and of unexecuted measures. |
      | 2 & 3 | State | Create a biennial interministerial strategy with financial equalization and incentives for the departments. |
      | 4 | Coordination | Generalize the Comités Départementaux pour la Protection de l'Enfance (CDPE) to break down silos between actors. |
      | 6 | MNA | Prohibit any difference in treatment between MNA and other minors (health, education). |
      | 8 | Training | Define a common training plan for all "sentinel" professionals (national education, police, health). |
      | 9 | Care settings | Diversify care arrangements by multiplying small living units (fewer than 7 children). |
      | 10 | PPE | Make the "Projet pour l'Enfant" effective and mandatory for any funding. |
      | 11 | Health | Systematize rapid access to child psychiatry (presumption of psychological emergency). |
      | 13 | Justice | Systematic assistance of a specialized lawyer for the protected child. |
      | 15 | Oversight | Create an independent national authority to inspect care facilities. |
      | 17 | Law | Create a Children's Code bringing together all children's rights, freedoms and duties. |
      | 18 | Staffing | Publish the decrees on minimum staffing levels and set a maximum number of cases per social worker. |

      --------------------------------------------------------------------------------

      Conclusion

      Child protection can no longer be the adjustment variable for institutional dysfunction.

      The CESE insists that the child must be the subject, not the object, of protection.

      Without massive investment in human resources and genuine coordination between the State and the departments, the republican promise to protect the most vulnerable cannot be kept.

    1. User Engagement and Partnership in Health: Vision and Tools of the Haute Autorité de Santé (HAS)

      Operational Summary

      The evolution of the French health system is marked by the growing integration of user engagement, a priority now at the heart of the strategy of the Haute Autorité de Santé (HAS).

      The shift from a consultative approach to a genuine partnership aims to make people's "power to act" (pouvoir d'agir) a fundamental driver of the quality of care and support.

      The key points of this transformation include:

      Institutionalization of engagement: The HAS is systematizing user participation across all of its missions (recommendations, evaluations, indicators).

      Conceptual clarification: Partnership is defined as the most advanced level of engagement, resting on co-construction, co-decision and co-evaluation.

      Diversity of roles: The figure of the "patient partner" comes in multiple forms (expert, trainer, researcher, resource) depending on the context and the skills involved.

      Structural challenges: Making these practices sustainable requires regulatory change (status, remuneration) and a deep cultural shift among professionals and in the governance of institutions.

      --------------------------------------------------------------------------------

      I. Strategic Vision of the HAS (2019-2030)

      The HAS's vision was built incrementally over twenty years, moving from ad hoc participation to a structured strategic priority.

      Evolution of the strategic plans

      2019-2024 plan: The goal was to give users the capacity to be actors of quality and to systematize their presence within the HAS's bodies.

      2025-2030 plan: The notion of "power to act" becomes central. Engagement is positioned as a fundamental lever for the safety of care and for improving the patient experience.

      Operational reality

      In 2024, the HAS worked with 470 different people (patients, relatives, children, adults, people supported in the medico-social sector).

      These users take part in:

      • Drafting good-practice recommendations.

      • Creating quality indicators.

      • Evaluating health products (medicines and medical devices).

      • Promoting public-health programmes (vaccination, screening).

      --------------------------------------------------------------------------------

      II. Conceptual Framework: Engagement, Participation and Partnership

      The HAS stresses the importance of terminology suited to the health, social and medico-social sectors.

      Sectoral distinction

      Health sector: Prefers the terms "engagement" and "user" (usager).

      Medico-social sector: Favors the term "participation" and often rejects "user" in favor of "supported person" (personne accompagnée).

      Institutional response: Creation of the Commission pour la participation et l'engagement des personnes to ensure full cross-sector coverage.

      The Engagement Continuum

      Engagement is seen as a ladder of increasing maturity:

      1. Information: The baseline level.

      2. Consultation: Gathering what people have to say.

      3. Partnership: The highest level. It means "defining together, in close collaboration, how a project will be carried out".

      Its pillars are co-construction, co-decision, co-implementation and co-evaluation.

      --------------------------------------------------------------------------------

      III. The Figures of the Patient Partner

      Moving from concept to action takes shape through various patient-partner figures, each answering specific needs:

      | Status | Field of intervention |
      | --- | --- |
      | Patient Resource | Organization of stays, redesign of premises, reflection on care pathways. |
      | Peer supporter / Patient Expert | Working with other patients, therapeutic education, health expertise (HAS, Santé Publique France). |
      | Patient Trainer / Teacher | Training future health professionals. |
      | Co-researcher | Scientific research projects. |
      | Patient Coach | Supporting other patient partners (a concept under regulatory watch). |

      --------------------------------------------------------------------------------

      IV. Tools and Mechanisms of the HAS

      The HAS provides professionals and users with a battery of tools to operationalize engagement.

      Measuring experience (ISATIS indicators)

      Health sector: National schemes since 2016 (medicine-surgery-obstetrics, ambulatory surgery, psychiatry, rehabilitation care). Development is under way for emergency departments and maternity units.

      Medico-social sector: A multi-year programme since 2018 to gather the views of vulnerable people over the long term.

      Use of the data: The HAS encourages facilities to give user representatives access to the ISATIS "verbatims" (free-text comments) to feed improvement plans.

      Certification and Evaluation

      Engagement is used as a transformation lever during certification visits:

      Patient Tracer method: Analysis of the care pathway from the patient's point of view.

      Supported-Person Tracer: Adaptation of the method to the medico-social sector.

      Role of elected representatives: Active participation of the elected members of the Conseil de la Vie Sociale (CVS) in the evaluation.

      --------------------------------------------------------------------------------

      V. Levers and Obstacles to Implementation

      The rollout of partnership runs into structural and cultural challenges that demand particular attention.

      Principles of recognition

      To make engagement last, the HAS identifies seven principles, including:

      • Equal consideration between the parties.

      • Proportionality of training (do not over-train when experiential skills suffice).

      • Support and accompaniment for the people involved.

      Regulatory and financial obstacles

      Remuneration vs social benefits: Paying allowances or salaries can cause the loss of the Allocation aux Adultes Handicapés (AAH) or of invalidity pensions. A change in the tax and social-security framework is deemed necessary.

      Expenses: The recurring failure to reimburse user representatives' travel expenses remains a major obstacle.

      The question of co-responsibility

      Partnership raises the question of where decision-making authority sits:

      Legal framework: Final legal responsibility often lies with the legal entity or the professional (e.g. a medical prescription).

      Upfront clarification: It is crucial to define at the start of a project whether the partnership working group holds decision-making power or whether it submits proposals to a governance body that has the final say.

      --------------------------------------------------------------------------------

      VI. Conclusion: A Driver of Lasting Transformation

      Partnership is a tool of participatory democracy that complements representative democracy. Its success rests on:

      1. Acculturation: A transformation of social representations, among users and professionals alike.

      2. A bottom-up approach (micro to macro): Partnership often works best when it starts from direct exchange in the consultation room before being taken up by governance as a strategic priority.

      3. Commitment from governance: Strong political will at the top of institutions is indispensable to turn isolated initiatives into systemic models.

    1. Summary of the Ethical Reflection and Partnership in Health

      Executive Summary

      This document summarizes the talks given by Professor Michel Clanet at the "Redonner du sens" conference, devoted to partnership in health and to ethics. The analysis highlights the pivotal role of the Espaces de Réflexion Éthique (ERE) in familiarizing professionals and citizens with bioethical issues.

      Key points include:

      The redefinition of the care relationship as an encounter between two vulnerabilities (caregiver and patient), aiming for greater horizontality through partnership.

      The distinction between professional conscience and moral conscience, whose conflict gives rise to the "ethical dilemma".

      The institutionalization of ethical reflection through collegial dialogue, indispensable for informing clinical and institutional decisions.

      The broadening of ethics to health democracy, encompassing prevention, environmental health and the fight against social inequalities.

      --------------------------------------------------------------------------------

      I. The Espaces de Réflexion Éthique (ERE): Framework and Missions

      The ERE are institutional structures born of a 2004 concept and officially created in 2012. They report to the Ministry of Health (DGOS) and are attached to university hospital centers (CHU). In Occitanie, the main site is in Toulouse, with a support site in Montpellier.

      Main missions

      Their work is organized around two major axes:

      1. Care and support sector:

      ◦ Train professionals in ethical reflection and bioethics, and build an ethics culture.
      ◦ Meet the certification requirements of health and medico-social facilities.
      ◦ Produce practical guides (e.g. La collégialité au domicile, La prise en charge de la vulnérabilité au domicile).

      2. City and citizen sector:

      ◦ Act as the regional extension of the Comité Consultatif National d'Éthique (CCNE).
      ◦ Organize States General of Health (next session in spring 2026) to gather citizens' views on topics such as artificial intelligence, end of life, assisted reproduction (PMA) and environmental health.

      --------------------------------------------------------------------------------

      II. Conceptual Foundations of the Care Relationship

      Ethics in health rests on a distinction between technique and intentionality.

      The dual dimension of care (after Frédéric Worms)

      Every practice of care involves two inseparable elements:

      Caring for something: The practical, technical dimension, aimed at treating a disease or an isolated form of suffering.

      Caring for someone: The relational, intentional dimension. Care is given out of regard for the other; being able to care is not enough, one must want to.

      The phenomenology of attention (after Jean-Philippe Pierron)

      The care relationship unfolds across three levels of attention:

      Paying attention: Becoming aware of the other's vulnerability.

      Being attentive: Exercising one's technical and professional competence.

      Being caring: Turning a gaze of solicitude and availability toward the other.

      --------------------------------------------------------------------------------

      III. The Practice of Ethical Reflection

      Ethics is not merely a normative framework; it is a commitment and a permanent questioning of the legitimacy of action ("What must be done in order to do good?").

      The ethical dilemma

      The dilemma arises from a drift or a conflict between:

      • Professional conscience, sometimes captive to technical knowledge and procedures.

      • Moral conscience, which refers back to fundamental values.

      Collegial dialogue

      To resolve a complex situation, ethics bodies favor collegial dialogue, whose defining features are:

      Absence of hierarchy: The word of a physician, a manager or a director carries the same weight as that of any other professional.

      Multiplicity of viewpoints: Listening to all actors, including gathering the patient's voice.

      Guidance, not decision: The collegial meeting is not a decision-making body but a forum that informs the person responsible for the final decision.

      The cardinal principles of ethics

      The reflection rests on four fundamental pillars:

      1. Respect for autonomy (freedom of choice).

      2. Beneficence (acting for the good).

      3. Non-maleficence (avoiding harm).

      4. Justice and equity.

      --------------------------------------------------------------------------------

      IV. Ethics and Partnership in Health: A Convergence

      Partnership in health is presented as a major ethical lever for rebalancing the care relationship.

      | Key concept | Ethical impact |
      | --- | --- |
      | Horizontality | Counters the "power of the white coat" (medical power) to establish a more egalitarian relationship. |
      | Mutual recognition | Acknowledging that vulnerability is shared between the patient (need for care) and the caregiver (technical/moral limits). |
      | Experiential knowledge | Recognition by the caregiver that the patient holds knowledge of their own, in many forms. |
      | Power to act | Strengthening the patient's autonomy and freedom of choice (empowerment). |

      Critical note: A question remains as to equitable access to the status of "patient partner". There is a risk of recruitment bias, whereby some profiles may not feel legitimate enough to take on the role.

      --------------------------------------------------------------------------------

      V. Macro Perspective: Health Democracy and Prevention

      Partnership must go beyond the individual care setting and take on a political and social dimension.

      Advocacy for partnership: The need for more communication to win over actors still reluctant or unaware of the concept.

      Social inequalities in health: The urgency of reaching out to "invisible" and precarious populations to guarantee genuine equity in partnership.

      Prevention and citizenship: Health begins in childhood and includes maintaining well-being. Citizens must be actors of prevention, particularly in the face of environmental determinants (e.g. occupational diseases linked to pesticides among farmers).

      Conclusion: Partnership in health and health democracy are one and the same ethical struggle, aimed at fundamentally involving citizens in the management of their health and their environment.

    1. Quality and Partnership in Health: Toward a New Paradigm of Care

      Summary

      This document summarizes reflections drawn from expert talks on how the quality of care pathways connects with user engagement.

      The central finding is the need to move from a paternalistic view of care to a genuine partnership of co-leadership.

      The analysis rests on a graduated supply of care (primary, territorial, tertiary) and on a definition of quality built around five pillars: accessibility, relevance, user expectations, safety and efficiency.

      Partnership is presented not as an end in itself, but as a means of achieving optimal quality.

      This change of model builds on the recognition of the patient's "experiential knowledge": a patient devotes on average 6,250 hours per year to their own health, versus only 5 to 10 hours in the presence of professionals.

      --------------------------------------------------------------------------------

      I. The Structure of Care Pathways and Quality

      Deploying a high-quality care pathway rests on a graduated organization and close cooperation between the different tiers of care.

      A. A graduated supply of care

      The pathway is conceived as three levels of response to the user's needs:

      Primary care (proximity): Based on coordinated practice (local teams, pharmacies, laboratories, cross-sectional imaging) to meet immediate needs.

      Territorial reference teams: Run by public or private facilities for unscheduled care and emergencies.

      Tertiary care: Regional expert centers for rare or specific diseases.

      B. The levers of performance

      Two notions are essential to keep this pathway flowing:

      1. Task delegation: Moving beyond the dogma of an exclusively medical response, in favor of new professions (advanced-practice nurses, care-pathway coordinators).

      2. Cooperation: The need for territorial orchestration, often steered by the Agences Régionales de Santé (ARS).

      C. The five dimensions of quality

      Quality is defined not by a single aspect but by a harmonious balance of five fundamental factors:

      1. Accessibility: The system's capacity to respond in good time.

      2. Relevance: Conformity with scientific evidence and a response that matches the need.

      3. User expectations: Respect for the person's values and preferences.

      4. Safety: Guaranteed safety of the care and responses provided.

      5. Efficiency: Optimal use of the funds of national solidarity, which are by nature limited.

      --------------------------------------------------------------------------------

      II. The Engagement Continuum and Partnership

      Building on the Haute Autorité de Santé (HAS) recommendations of September 2020, the analysis distinguishes four levels of user engagement.

      A. The four levels of engagement

      | Level | Leadership | Type of interaction |
      | --- | --- | --- |
      | Information | Professional | Handing over documents (e.g. welcome flyers). |
      | Consultation | Professional | Gathering satisfaction or experience feedback (surveys). |
      | Collaboration | Professional | Users reviewing or commenting on documents. |
      | Partnership | Co-leadership | Co-construction, co-decision and co-implementation. |

      B. Partnership as a strategic means

      Partnership with patients and their family carers is not an end in itself, but a lever serving the safety and quality of care pathways. The goal is to reach, in each situation, the highest possible level of engagement.

      --------------------------------------------------------------------------------

      III. The Recognition of Experiential Knowledge

      The major argument for partnership lies in the gap between clinical time and time lived with the disease.

      The figures: A person living with a health vulnerability spends between 5 and 10 hours per year with professionals. By contrast, they devote roughly 6,250 hours per year to looking after their health on their own.

      "Vivrologie": This coined term denotes the expertise born of the lived experience of illness. It covers specific know-how: managing intimate life, adjusting treatments during holidays, staying in work.

      The Montreal Model: This model replaces paternalism with a vision in which the patient is a full member of the care team. The center of gravity is no longer the patient as such, but the health project.

      --------------------------------------------------------------------------------

      IV. Dimensions and Typologies of Partnership

      Partnership must be approached systemically, with impacts at three levels:

      1. Micro: The individual care relationship between patient and professional.

      2. Meso: The organization of care, teaching and research.

      3. Macro: The definition of public health policy.

      Patient-partner profiles

      There is no single patient-partner profile, but rather specific skill sets depending on the field of intervention:

      Care patient partner: Focused on their own health project.

      Trainer patient partner: Takes part in training future professionals.

      Researcher patient partner: Contributes to clinical or organizational research.

      Resource patient partner: Brings expertise to therapeutic education or care-quality improvement.

      --------------------------------------------------------------------------------

      V. Conclusion: Toward a Shared Culture

      The Comité Régional d'Impulsion et d'Analyse du Partenariat en Santé (CRAPS) defines partnership as joint action for overall well-being, built on the complementarity of knowledge.

      A semantic and cultural shift is recommended: moving from "taking charge" (prise en charge) to "taking care" (prise en soins). This transition underscores that the patient is not a "burden" weighing on the system but a genuine solution for improving the effectiveness and relevance of the supply of care.

      Partnership is, in short, the reciprocal encounter of two kinds of expertise: the professional's scientific expertise and the patient's experiential expertise.

    1. Adolescent Transgression, Socio-Educational Climate and Educational Sanction: Summary of an Action-Research Project

      Executive summary

      This document summarizes the results of a three-year action-research project led by Valérie Benoit and Annik Skrivan (Haute école pédagogique du canton de Vaud, Switzerland) in a school of 500 pupils.

      The study challenges the effectiveness of traditional punitive practices in the face of adolescent transgression and proposes a paradigm shift toward the educational sanction.

      The main conclusions are:

      • Classic punitive practices (detention hours, copying lines) are perceived by pupils as useless, even as incentives to reoffend.

      • The school climate deteriorates significantly as pupils grow older (moving from cycle 2 to cycle 3), particularly as regards the teacher-pupil relationship.

      • Pupils feel strongly unsafe in the "outside-the-walls" spaces (train stations, car parks) and in transition areas.

      • Setting up an Educational Sanction Space (ESE) fosters responsibility, the restoration of social bonds and a better understanding of adolescents' fundamental needs.

      --------------------------------------------------------------------------------

      1. Theoretical and Contextual Framework of the Research

      1.1. The context of the Vaud school system

      The research takes place within a legislative and structural framework specific to the canton of Vaud:

      Law on compulsory education (Léo): Structures lower secondary education into two tracks (pre-gymnasium and general). The general track, marked by different competence levels depending on the subject, fragments class groups and heightens relational complexity.

      Concept 360: An inclusive-school policy integrating pupils with special educational needs, increasing classroom heterogeneity.

      Post-pandemic impact: The COVID-19 crisis acted as a revealer of latent problems, exacerbating mental-health difficulties and producing an effect of anomie (loss of the meaning of norms) among young people.

      1.2. Understanding adolescence

      Adolescence is defined as a phase of mutation and "liminality":

      Rite of passage: In the absence of formalized rites in today's society, adolescents invent their own ordeals, often through transgression.

      Fundamental needs: Beyond pedagogical needs, adolescents have essential social needs: security, trust, responsibility, autonomy, affection and recognition.

      Expression through action: Adolescents favor action over speech to express their emotions and build their identity (the separation-individuation process).

      --------------------------------------------------------------------------------

      2. Analysis of the Socio-Educational Climate and Perceptions

      The action-research draws on the socio-educational environment questionnaire (Cais), which reveals marked divergences between the actors.

      2.1. Changing perceptions with age

      There is a negative correlation between age and the perception of school climate.

      Older pupils (ages 12-16) perceive the environment far more negatively than younger ones (ages 10-12).

      | Dimension assessed | Perception, cycle 2 (ages 10-12) | Perception, cycle 3 (ages 12-16) |
      | --- | --- | --- |
      | Pupil-teacher relationships | Rather positive | Massive drop / perceived as cold |
      | Pedagogical support | Present | Perceived as insufficient |
      | Sense of participation | Moderate | Very low (voices silenced) |
      | Behaviour management | Acceptable | Perceived as unfair/punitive |

      2.2. The question of safety

      Unlike the other dimensions, the feeling of safety is higher among the older pupils, because the younger ones fear aggression from their elders.

      Zones of vulnerability: The least safe places are the train station, the car park and the immediate neighbourhood.

      Insidious violence: Pupils report verbal violence and aggressiveness from some teachers (shouting, humiliation, belittling).

      2.3. Diverging priorities

      A disconnect is observed between pupils' and teachers' concerns:

      Pupils' priorities: Physical/verbal violence, theft and the quality of human relationships.

      Teachers' priorities: Academic success, absenteeism and dropout.

      --------------------------------------------------------------------------------

      3. From Punishment to Educational Sanction

      3.1. The ineffectiveness of punitive practices

      Pupils' testimonies confirm that classic punishments teach nothing:

      Max (15): "Detention doesn't do anything to me any more... it's just punishing for the sake of punishing."

      Yann: "When I come out, I don't tell myself I'm going to stop messing around." Punishment often breeds a feeling of humiliation and disengagement, increasing the risk of dropping out.

      3.2. The Educational Sanction Space (ESE) model

      Inspired by the work of Eirick Prairat and Élisabeth Maheu, the ESE rests on a "hyphen language" approach (langage trait-d'union).

      The four constraints of the educational sanction:

      1. Restating the rule: Explaining the meaning of the law for the cohesion of the group.

      2. Putting the transgression into words: Identifying unmet needs and proposing alternative behaviours.

      3. The obligation to repair: Restoring the social bond with the people harmed.

      4. Individual sanction: Addressing the responsible individual in a private setting.

      The structuring principles:

      Meaning: Taking the time to explain the act.

      Objectification: Focusing on the act committed, never on the young person's personality.

      Deprivation: The sanction must take place on the pupil's free time (Wednesday afternoon) to mark the limit.

      --------------------------------------------------------------------------------

      4. Obstacles and professional perspectives

      4.1. Resistance from teaching staff

      The research highlights identity tensions among teachers:

      Vision of the mission: Some see themselves solely as "transmitters of knowledge" and reject the educational or relational dimension of their profession.

      "Adult-centered" posture: A tendency to perceive transgression as a personal attack rather than as a symptom of adolescent development.

      Territorial narcissism: Difficulty collaborating and harmonizing classroom-management practices within the school.

      4.2. Avenues for improvement

      To sustain this shift in perspective, several levers are identified:

      Preventive measures: Do not wait for behavior to explode. Work on student engagement and the quality of everyday relationships.

      Continuing education: Develop socio-emotional skills, professional ethics, and classroom management.

      A posture of educational authority: Move from authoritarianism (submission) to an authority that contains and reassures without "breaking" the adolescent.

      Institutional patience: Results (a drop in incivility) take time and strong support from school leadership.

      --------------------------------------------------------------------------------

      Key quotations

      "An adolescent who doesn't transgress is an adolescent who worries me... a ticking time bomb." (Annik Skrivan)

      "The sanction is a means of promoting a responsible subject by imputing to him the consequences of his acts." (Éric Prairat, quoted by Valérie Benoit)

      "Teachers are by nature extremely narcissistic and territorial... school is a place where we subdue and constrain, but we must not forget that we also do a great deal of socializing." (Annik Skrivan)

    1. Evaluation in the School Context: Ethical Stakes and Political Debates

      Analytical Summary

      This synthesis analyzes the complex stakes of evaluation in schools, as presented by Camille Roelens.

      Evaluation should not be seen as a mere technical tool but as a central philosophical and political object in a democratic society.

      The starting point is paradoxical: although evaluation is often judged obscure and unjust (the "dismal science" of docimology), it remains ubiquitous and unavoidable.

      The analysis shows that the modern school's mission is to produce autonomous individuals and to manage social stratification in a society where ranks of birth have disappeared.

      Evaluation then becomes the mechanism for creating "just inequalities." Yet no model of school justice, whether meritocratic, distributive, or based on guaranteed minimums, is perfect.

      The document underscores that the school's current challenge lies in reclaiming its legitimacy through a redefined "benevolence," aimed at guiding each student toward genuine autonomy rather than merely validating achievements.

      --------------------------------------------------------------------------------

      1. The Paradoxes of Evaluation

      Evaluation in schools rests on three fundamental observations, two critical and one pragmatic.

      Obscurity: It is often hard to determine precisely what is actually being evaluated (the real competence, the ability to handle stress, or comprehension of the instructions).

      Perceived injustice: The sense that effort does not always translate into success creates a perception of evaluation as a "tragic" or inequitable ordeal.

      Omnipresence (the "2+1"): Despite these flaws, evaluation is an "unavoidable theme." It is practiced "wildly" and constantly in every aspect of social life (judging a film or a restaurant, choosing sports partners).

      --------------------------------------------------------------------------------

      2. A Critique of the Philosophy of Evaluation

      According to the work of Danilo Martuccelli, evaluation has become a genuine structuring philosophy of modern society, resting on eight major, often debatable, principles.

      Martuccelli's Principles and Critiques

      | Principle of the philosophy of evaluation | Critique and limits |
      | --- | --- |
      | Everything is measurable and evaluable. | Not all practices are equally quantifiable without distorting reality. |
      | Everyone must be evaluated and put into competition. | Evaluation is not homologous across actors and stakes (e.g., competitive exams vs. ongoing monitoring). |
      | It ensures transparent management of power. | Evaluation is not neutral information; it is an instrument of power. |
      | It ensures the best use of resources. | Evaluation has a massive financial and human cost (inspections, competitive exams). |
      | It increases efficiency (carrot and stick). | It is a performative power that steers behavior insidiously. |
      | It motivates and involves actors. | The impact differs radically depending on whether the evaluation targets an individual or a group. |
      | It legitimizes organizations (monopoly on credentials). | It fuels a crisis of legitimacy between theory and reality on the ground. |
      | It embodies modern rationalization. | Evaluation has become a non-rational "collective belief." |

      --------------------------------------------------------------------------------

      3. The School as a Cog of Democratic Modernity

      In a society of "equality of conditions" (Tocqueville), where birth no longer determines rank, social stratification must be rebuilt. Two main levers perform this function: the market and the school.

      Manufacturing the individual: The school's mission is to transform "dependent and vulnerable" children into "free, equal, and autonomous" individuals. Evaluation serves to check whether this social demand is being met.

      Managing inequalities: Since not everyone can be a "soloist," the school must select. This task is described as a labor of Sisyphus: it is structurally unjust because it sometimes evaluates achievements not transmitted by the school (family cultural capital), yet it is indispensable to avoid arbitrariness or drawing lots.

      --------------------------------------------------------------------------------

      4. The Four Models of School Justice

      François Dubet and Marie Duru-Bellat identify four models of justice, each with its advantages and potential drifts.

      4.1. Equality of opportunity and merit

      Principle: The same tests for everyone, anonymous grading.

      Weakness: This model ignores that school represents only a fraction of a child's life. It is "harsh on the defeated" and tends to reproduce social belonging under the guise of merit.

      4.2. Distributive (and inclusive) justice

      Principle: "Give more to those who have less" (e.g., priority education). Autonomy is seen as a supported capacity (scaffolding).

      Weakness: Risk of an obsession with pedagogical efficiency and of stigmatization (the REP+ "label" effect). This model weighs heavily on teachers' sense of vocation, sometimes to the point of burnout.

      4.3. Guaranteed minimums (inspired by John Rawls)

      Principle: Determine the rules of justice behind a "veil of ignorance." The least unjust system is the one that treats the weakest best (the principle of the common core).

      Weakness: Often perceived as a "cultural minimum wage" or an abandonment of excellence.

      4.4. Spheres of justice and social effects (Michael Walzer)

      Principle: Inequalities in one sphere (school) should not contaminate the other spheres of life.

      Weakness: In France, the diploma is excessively decisive for one's overall social destiny. Evaluation is dramatized because it puts students' "skin in the game."

      --------------------------------------------------------------------------------

      5. Toward Real Autonomy: Capabilities and Benevolence

      Education aims at autonomy (the capacity to act, to choose, and to think for oneself). However, autonomy in law is not autonomy in fact.

      The notion of capabilities (Amartya Sen): Autonomy depends on the connection between personal capacities and an enabling context. Evaluating a student without taking their environment into account (e.g., a language barrier) is an evaluation error.

      Benevolence as a lever of legitimacy: In a context of "institutional decline," the school can no longer impose its legitimacy by status alone. Benevolence must be understood in three senses:

      1. Watching well (bien veiller): Understanding the world and the singularity of each student.

      2. Watching over (bien veiller sur): Caring for the relationship and for individuals (solicitude and tact).

      3. Seeing to (bien veiller à): Concretely providing the means of autonomy.

      Conclusion

      School evaluation sits at the heart of a "polytheism of judgments." There is no perfect solution, only a quest for the "least bad" evaluation.

      A just school cannot rest on a single principle, but on a combination of intersecting principles.

      The ultimate challenge is to move from a primarily selective function to one of transmitting the tools of intellectual autonomy, while accepting that the school cannot, on its own, solve all of society's problems.

    1. Analyzing Emotional Experience in School: The "Special Moments" Device

      Synthesis

      This synthesis details the research conducted by Sophie Necker and her colleagues on capturing emotional states within the classroom.

      Based on a 2021 study in two CM2 (fifth-grade) classes, the project relies on the "special moments box" device.

      This method gives access to the subjectivity of students and teachers through the daily, voluntary writing of anonymous notes.

      The conclusions highlight the systemic dimension of emotions, in which individual experiences intertwine to form a collective emotional landscape.

      The major innovation of this research is the creation of the "Émoscope," a graphical map for visualizing the complexity of the interactions among triggers, subjective appraisals, and emotional expressions over the course of a school day.

      --------------------------------------------------------------------------------

      1. The Research Device: The Special Moments Box

      The research aims to access traces of emotions and the subjectivity of actors in schools.

      Methodology and Collection Protocol

      Context: Study conducted in May 2021 in two CM2 classes in Lille (51 students and 2 teachers).

      The medium: Strips of paper (about 10 cm tall) titled "special moment note."

      The prompt: "You experienced a special moment in class today. Can you write it down and put it in the box, please?"

      Characteristics of the collection:

      ◦ Voluntary, daily writing at the end of the day.

      ◦ Anonymity preserved to encourage freedom of expression.

      ◦ Duration of one month, totaling 764 notes collected.

      The "special moment": Defined by its singularity and its significance for the individual, with no requirement of positive or negative valence.

      It draws on the concepts of "optimal moments" or "flow," but broadened to any emotional intensity.

      --------------------------------------------------------------------------------

      2. Theoretical Foundations: A Systemic Approach

      The research treats lived experience as a scientific object in its own right.

      Emotional Interdependence

      The classroom is viewed as a system of reciprocal, complex interactions:

      Mutual influence: The teacher's emotional states affect the students' and vice versa.

      Joint attention: The perception of the situation is shaped by the sharing of attention among the actors.

      Student-teacher relationship: This relationship influences the quality of school life, behaviors, and how learning is viewed.

      Defining Emotion

      Emotion is understood as a dynamic appraisal process:

      • It allows the individual to specify what a situation means to them.

      • The same situation can give rise to different appraisals depending on the individual or the context.

      The components of appraisal (after Audrin):

      1. Physiological: Bodily reactions (e.g., shivers).

      2. Motor expression: Facial expressions, voice, posture.

      3. Motivational: Action tendency (approach or flight).

      4. Subjective feeling: A synthesis of the various dimensions.

      --------------------------------------------------------------------------------

      3. Analysis of the Results: A Typology of Experiences

      Analysis of the notes reveals several dimensions of students' relationship to the school world.

      Relationship to Self and Others

      Self-knowledge: The notes express attractions or dislikes ("I hate dance").

      Sense of competence: Success or difficulty with a task generates salient emotions (pride, test stress).

      The presence of others: The other can be a trigger (a classmate's presentation), a partner in emotion, or the recipient of an action.

      The teacher is often evoked indirectly through their pedagogical and didactic choices.

      Continuity and Rupture

      Comfort zone and continuity: Moments that reinforce the student's identity or fit within a reassuring social and temporal unity.

      Rupture and irruption: Emotions tied to novelty, the discovery of knowledge, unusual activities, or spatial irruptions (an outside visitor, a field trip).

      Emotional Literacy and Verbalization

      The study observes a gradation in students' ability to verbalize emotion:

      Level 1: Naming only the trigger (e.g., "The story").

      Level 2: Describing the facts or actions.

      Level 3: Transcribing the feeling or assigning a value (e.g., "I liked it").

      Level 4: Justifying the appraisal (e.g., "It's fascinating because...").

      --------------------------------------------------------------------------------

      4. The Émoscope: Mapping the Emotional Landscape

      The research's major innovation is the creation of the Émoscope, a graphical representation tool.

      | Feature of the Émoscope | Function |
      | --- | --- |
      | Structure | A wheel in which each segment represents an individual note. |
      | Color code | Identifies the triggering event (e.g., sports, class council, presentation). |
      | Pictograms | Indicate the nature of the relationship (self, others, rupture, continuity). |
      | Verbatim bubbles | Reproduce the exact words used to describe the emotion. |
      | Arrows | Symbolize the appraisal process and the components identified. |

      This tool makes it possible to move from analyzing an individual note to a global view of the class climate over a given unit of time (the day).

      --------------------------------------------------------------------------------

      5. Perspectives and Pedagogical Implications

      The research opens avenues for teacher training and practice.

      For Practitioners and Researchers

      Practice analysis: Use the Émoscope to compare experiences across teachers or pedagogical devices.

      Methodological evolution: Consider digital formats (audio, video) to remove barriers related to writing skills.

      Longitudinal follow-up: Use note booklets to track a student's emotional evolution over the long term.

      For Teacher Training

      Awareness: Help future teachers understand the emotional systemics of the classroom.

      Learning indicator: Explore students' emotions as markers of progress and emotional security.

      Conclusion of the Study

      The special moments box device shows that emotions, though subjective, can be captured and mapped.

      They are an essential gateway to understanding learning dynamics and well-being within the educational community.

    1. This lesson grew out of work Cassandra had seen the previous summer, when she worked with other candidates in a session on an Understanding by Design (UbD) unit focused on growth mind-set. As we noted in chapter 3, the UbD curriculum plans are practical in nature: they give candidates experience in looking at district curriculum guidelines and making professional decisions.

      I would have loved to have the opportunity to plan with the district curriculum to create lessons with growth mind-set in collaboration with my college peers. I think this goes along with mentees having the opportunity to work with experienced teachers, so that together they can create lessons or align the curriculum lessons to include growth mind-set.

    1. code-switching is a phenomenon that increasing numbers of people are likely to experience. Claire Kramsch describes this phenomenon as ‘language crossings’ (1998). She provides examples which highlight complex manifestations of identity enactment;

      Code switching happens a lot in English in the US depending on the social group you are with

    2. In any case, if we learn to speak a second language, it provides unique insights into what it is that is valued

      In Chinese, I am learning that honor and shared cultural values are extremely important

    3. The existence of a hybrid language such as Haitian Creole is one indication of the significant link between language and culture. Languages are rarely used in their "pure", standard form. Speakers adapt linguistically to others around them.

      In Macau, we have Macanese--an Asian Creole language that is a combination of Chinese, Hindi, Dutch and African.

    4. There was for Laforest a tragic disconnect between the language he used to describe the world and to embody his literary imagination on the one hand and the social and racial reality of Haiti on the other.

      I read Edward Franklin Frazier, a social worker in the US who had a similar racial experience and spoke French and English.

    1. Investigation into After-School Care and Private Schools: Security Gaps and Institutional Failures

      Executive Summary

      This synthesis highlights a crisis of trust and safety within the after-school care system and schools in France.

      The investigation reveals that after-school time, which can amount to as much as five hours a day for 5.5 million students, suffers from a glaring lack of supervision and official data.

      Despite the growing number of reports of sexual assault and mistreatment, the administrative structures (town halls and the Ministry of National Education) are accused of inertia, and even of imposing a form of omertà to protect the institutions' image.

      Precarious recruitment, the absence of statistical tracking of violence at the ministry level, and delays in administrative investigations create a vulnerable environment for children, particularly in preschool.

      1. The After-School Sector: A System under High Pressure

      After-school time concerns 90% of preschool and elementary school children.

      Although these activities take place within the schools, they fall under the municipalities, not the Ministry of National Education.

      Key Data on Supervision

      Hours: Up to 5 hours a day (morning drop-off, lunchroom, evening study hall).

      Population concerned: 5.5 million students.

      Perception of the job: Described as a "sub-profession" or a "garbage job" by some actors, reflecting a precariousness that affects the quality of recruitment.

      Funding: The State funds 75% of private schools under contract, but whistleblowers consider the checks on educational or sexual violence there insufficient.

      Recruitment Failures

      The investigation highlights hiring processes that are sometimes rushed.

      In Rezé, an activity leader convicted of assaulting 12 minors had been recruited at age 51 with no prior experience with children, after a career in mass retail.

      The job interview was described as having gone "rather quickly."

      2. The State of Violence and Statistical Invisibility

      A major finding of the investigation is the total absence of centralized data on violence in after-school settings.

      Statistical void: The Ministry of Justice confirmed that it records no specific data on violence committed by after-school activity leaders.

      Reality on the ground: By compiling regional press articles over 10 years, the investigation counted at least a hundred publicized cases across France (Marseille, Moselle, Courbevoie, Haute-Savoie, etc.).

      Types of incidents:

      ◦ Sexual assault and rape of minors.

      ◦ Physical mistreatment (choking, violence in the lunchroom).

      ◦ Attempted corruption of minors.

      3. Analysis of Institutional Failures: Omertà and the Handling of Reports

      The investigation points to defective administrative management that often favors protecting the institution over children's safety.

      Identified Dysfunctions

      | Type of dysfunction | Description and consequences |
      | --- | --- |
      | Transferring staff | The practice of moving a reported activity leader from one school to another rather than sanctioning or removing them. |
      | No administrative follow-up | In the case in the 15th arrondissement of Paris, two years after an administrative investigation was opened, no debriefing had been given to the families. |
      | Parental alerts ignored | Parents had raised concerns about suspicious behavior (an activity leader alone with a child, a closed door) as early as 2019, years before the alleged abuser's arrest. |
      | High-risk spaces | Despite a 2015 report recommending that isolated spaces (such as library corners) be prohibited, these areas continued to be used without adequate supervision. |

      Striking Quotations about the Institution

      • "It was always: we protect the institution, we settle this among ourselves, and nothing gets out."

      • "The sanctuary shattered": an expression used by parents to describe their loss of trust in the school.

      • "You get the feeling that everyone is complicit in this omertà."

      4. Psychological Impact and the Child's Word

      Professor Thierry Bobet, a child psychiatrist, sheds crucial light on the difficulty of collecting victims' accounts, particularly between ages 3 and 6.

      Obstacles to Disclosure

      1. Lack of representation: A preschool child has no notion of what adult sexuality is. They use phrases like "someone bothered me."

      2. Confusion of authority: The activity leader represents an extension of parental authority, which makes denunciation paradoxical for the child.

      3. Fragile memory: Between ages 3 and 6, memory is not mature.

      A memory can remain precise for six months and then become confused, hence the urgency of rapid care.

      Warning Signs Observed by Parents

      Regressions: Going back to diapers, bedwetting, asking for bottles.

      Behavioral problems: Violent outbursts when it is time to leave for school, night terrors, school phobia.

      Sexualized behavior: Games or gestures inappropriate for the child's age (e.g., "vulgar" postures induced by the adult).

      5. Case Study: The Process of Manipulation

      The investigation details recurring methods aimed at isolating children and establishing a climate of secrecy.

      Secrecy as a tool of control: "Don't tell the teacher anything, it's our secret."

      Rituals turned against children: At a Paris school, the activity leader used songs and games (e.g., "my grandfather's underpants") to get children to undress and to subject them to touching under the guise of a fun activity.

      The abuser's posture: Often initially described as a "slightly gruff grandpa" or as someone much appreciated who "adores children," using this image to manipulate those around him and isolate his victims.

      Conclusion

      The Cash Investigation inquiry shows that violence in after-school settings is not a set of isolated incidents but the result of structural failures:

      • lack of resources in local governments,
      • absence of rigorous State oversight of private school funding, and a culture of secrecy within administrations.

      The urgent needs are statistical transparency and a thorough reform of reporting and supervision protocols to protect vulnerable groups.

    1. The State of After-School Care and Private Education: An Investigation into Violence and Institutional Failures

      Executive Summary

      This synthesis presents the conclusions of an in-depth investigation into the safety and supervision of children in public after-school care and in private schools under contract in France.

      Key points identified:

      Structural insecurity in after-school care: The sector suffers from a lack of official statistics on violence, from precarious recruitment without any real verification of skills, and from supervision that is often understaffed.

      A culture of omertà in the private sector: Despite public funding covering 75% of costs, some private schools favor protecting their institutional image over reporting sexual or pedagogical violence.

      Failure of the judicial response: 73% of complaints of sexual violence against minors are dismissed without further action, and investigation delays (sometimes several years) undermine the reliability of the child's testimony.

      "Musical chairs" practices: Instead of being sanctioned, some activity leaders reported for inappropriate behavior are simply moved from one school to another.

      Urgent need for reform: Experts recommend greater professionalization, centralized reporting, and the adoption of specialized interview protocols (such as the "Niche" protocol).

      --------------------------------------------------------------------------------

      1. The Public After-School Sector: A System under High Pressure

      After-school time concerns 5.5 million students in France. Although it takes place on school premises, it falls under town halls, not the Ministry of National Education.

      1.1. A devalued and precarious profession

      The sector is described by those working in it as a "garbage job" or a "sub-profession."

      Working conditions: Imposed part-time work, fragmented schedules, and poverty wages (between €600 and €700 net per month).

      Rushed recruitment: To fill gaps, town halls hire temporary workers with no experience whatsoever.

      An undercover journalist was hired in 6 days after an interview that probed only her availability and her "kindness," with no test of her skills with children.

      1.2. Failures of supervision and oversight

      Chronic understaffing: The law requires one activity leader for every 14 children under age 6, but ratios of 1 to 23 or more are observed in the field.

      Passive supervision: The investigation reveals activity leaders absorbed in their phones during lunchroom and recess periods, in breach of the activity leader's charter.

      Verbal and physical violence: Scenes of systematic shouting, humiliation, and intimidation ("shut your mouth," withholding food) were documented.

      --------------------------------------------------------------------------------

      2. Sexual Violence: From Ignored Alerts to Insufficient Sanctions

      Over 10 years, in Paris alone, 128 activity leaders were suspended on suspicion of sexual violence.

      2.1. The failure of reporting channels

      Several cases show that parents' alerts are not always passed on to management:

      The école Baudin case (Paris): Parents had raised the alarm about inappropriate touching as early as September 2024.

      The information was not passed up, and the activity leader remained in his post until his arrest in April 2025 for assaulting five children.

      The école Emerio case (Paris): A library activity leader, in the job for 20 years, was indicted. Parents had nonetheless reported suspicious situations (closed doors, children on his lap) as early as 2019.

      2.2. Transferring problem staff

      The investigation confirms a "bad habit": moving an activity leader reported for mistreatment to another school within the same arrondissement, instead of dismissal or a firm disciplinary sanction.

      | Scenario | Measure observed | Impact |
      | --- | --- | --- |
      | Physical mistreatment (spanking/shaking) | Transfer to another preschool | Risk of reoffending with a new group of children |
      | Inappropriate behavior | Transfer from a preschool to an elementary school | No centralized tracking file |

      --------------------------------------------------------------------------------

      3. Private Schools Under State Contract: Between Omertà and Autonomy

      The State funds private education to the tune of €10.9 billion (2024), paying teachers' salaries in full.

      3.1. Protecting the institution's image

      In some Catholic schools, such as the Champagnat institution (Alsace), the priority seems to be to "wash dirty linen within the family".

      Pressure on victims: Recordings show members of religious orders urging victims of sexual assault to withdraw their complaints so as not to harm the school's reputation.

      Withholding information: One school waited 9 months before reporting to the education authority a teacher who was having a sexual relationship with a 15-year-old minor.

      3.2. The lack of state oversight

      The General Secretariat for Catholic Education (SGEC) long resisted adopting the "Faits Établissement" reporting application, seeking to filter reports before they reached the ministry.

      This "shadow ministry" limits the State's visibility into the reality of violence in private schools.

      --------------------------------------------------------------------------------

      4. Ideological Excesses and Abuse: The Case of the "L'Espérance" Institution

      This school in Vendée, run under the authority of the Fraternité Saint-Pierre, illustrates the extreme failings in the oversight of schools under state contract.

      Ritualized violence: The headmaster operated a system of "pacts" in which he received or gave slaps to pupils in front of the whole school depending on their academic results.

      Climate of hatred: Former pupils describe pervasive racist, homophobic, and xenophobic remarks (swastikas on the walls, racist nicknames such as "Bamboula" or "Chang").

      Disregard for the curriculum: Civics lessons were refused as too "republican" and replaced with teaching on the monarchy or medieval scholasticism.

      Deficient supervision: The absence of adult supervisors at night, replaced by final-year pupils ("dormitory captains"), enabled humiliations (the "pond" ritual).

      --------------------------------------------------------------------------------

      5. The Response of the Justice System and Psychiatry

      5.1. The child's trauma and delayed disclosure

      Professor Thierry Bobet and Doctor Louis Alvarez stress that:

      • A nursery-school child has no conception of adult sexuality; they will not speak of an assault but of someone who "bothered" them.

      • Secrecy is often imposed by the abuser through "games" or "secrets".

      • The memory of 3- to 6-year-olds is immature: if the interview is not conducted immediately, memories become confused, which favours cases being dropped.

      5.2. Statistics and Justice

      Conviction rate: Only 3% of complaints of child rape lead to a conviction in France.

      The "Niche" protocol: Used in the Nordic countries (60% prosecution rate), this filmed, standardized interview protocol is still underused in France (25% of cases, versus 90% in some countries).

      --------------------------------------------------------------------------------

      6. Inspiring Models and Possible Solutions

      6.1. The example of the municipality of Lemont (Vosges)

      The municipality made the political choice of a "premium" after-school programme:

      Staffing ratios: 1 activity leader per 10 children (better than the legal 1 per 14).

      Professionalization: Preparation and meeting time is paid.

      Stability: Contracts of up to 33 hours per week to retain staff.

      6.2. Expert recommendations

      1. Centralization: Create a national register of reports covering physical and psychological violence (not only sexual violence).

      2. Training: Make training in child protection and the International Convention on the Rights of the Child mandatory for all supervising staff.

      3. Transparency: Subject private schools to the same immediate-reporting obligations ("Faits Établissement") as public schools.

      4. Judicial priority: Create a "fast-track" mechanism so that investigations involving minors are treated as an absolute priority, preserving the reliability of the evidence.

    1. Briefing Note: School Violence and Effective Intervention Strategies

      Executive Summary

      This briefing note analyses the remarks of Claire Baumont, PhD in psychopedagogy, on violence in schools.

      The central idea is that the perception of a general rise in school violence is not supported by solid data but is instead fuelled by alarmist media coverage.

      Quebec's national monitoring programme (2013-2019) did not confirm such a rise and even noted slight improvements.

      Professor Baumont stresses the importance of the "school effect": the need for each school to base its interventions on locally observed facts, where staff have real power to act, rather than on national averages or outside narratives.

      The analysis also shows that the most frequently reported forms of aggression are not always the expected ones.

      Humiliating behaviour and contemptuous looks from adults towards pupils, as well as aggression between colleagues, rank among the most frequent (3rd or 4th place), well ahead of cyberbullying.

      The most effective intervention strategies have evolved, moving from ineffective punitive approaches to systemic approaches focused on school climate and, more recently, on developing the socio-emotional skills of pupils and staff.

      The key lies in strengthening relationships through everyday actions and in empowering school staff as role models.

      1. The Expertise of Claire Baumont

      The analysis is based on the perspectives of Claire Baumont, a recognized expert in the field:

      Training and experience: A PhD in psychopedagogy, she has worked as a school psychologist and as a clinician with young people with serious adjustment problems.

      Academic career: Associate professor in the Department of Studies on Teaching and Learning at Université Laval.

      Leading research: She led the Research Chair on Well-Being and Violence Prevention in Schools (2012-2023) and the first national monitoring of violence in Quebec schools (2013-2019).

      Aim: Her research seeks to improve the quality of life of pupils and school staff.

      2. Myths and Realities: The Rise of School Violence

      A central theme of the discussion is the questioning of the perceived increase in violence in schools.

      A persistent media narrative: Professor Baumont points out that the media have been reporting a "rise in violence" for nearly 40 years, often generalizing from isolated events and creating a climate of insecurity.

      No empirical evidence: The national monitoring conducted between 2013 and 2019, using standardized tools, found no proof of an increase in violence.

      On the contrary, it revealed "slight improvements".

      Current situation: There is no recent national picture to confirm or refute a rise since 2019-2020.

      It is therefore crucial to remain critical of the prevailing discourse.

      The volatility of local data: Monitoring of individual schools showed that situations can change quickly.

      One school may see its rate of violence rise within a few years, while another may improve.

      This shows that national averages are not representative of the reality of each setting.

      3. The Key Concept: The School Effect

      Faced with the uncertainty of national data and the influence of external factors, Professor Baumont highlights the concept of the "school effect" (or "establishment effect").

      Definition: It means focusing on the components and interventions over which school staff have direct power to act within their own establishment.

      Principle of action: The first step is to adjust interventions based on what is actually observed in the school, not on outside perceptions.

      Empowerment: This approach lets practitioners focus on concrete solutions and avoid being demoralized by factors beyond their control.

      It makes the practitioner the "first decision-maker" over their actions, using the resources at their disposal.

      4. The Dimensions of School Violence

      Violence in schools is a complex, multifactorial phenomenon whose manifestations go beyond aggression between pupils.

      4.1. A Multifactorial Problem

      Violence arises from an interaction of factors at several levels:

      Global: World conflicts and wars (one person in eight worldwide was reportedly living in a war situation in December 2024) contribute to a generalized sense of insecurity.

      Societal: Cultural and religious differences can be sources of tension.

      Community: Neighbourhood life and pupils' family situations influence their behaviour at school.

      Institutional: The training of school staff plays a role.

      Despite these many factors, the school effect remains the most relevant lever for practitioners.

      4.2. Aggressive Behaviour: Beyond the Pupils

      Analysis of the types of violence reveals an often underestimated reality: the impact of adults' behaviour.

      Adult violence towards pupils: According to 2024 data, humiliating behaviour and contemptuous looks from adults rank 3rd or 4th among the forms of aggression most often reported by pupils, especially in secondary school.

      These acts include shouting and humiliating punishments.

      Violence between adults: School staff also report aggression from colleagues.

      Insults and exclusion from meetings likewise rank 3rd or 4th among the aggressive behaviours teachers report experiencing.

      A surprising finding: These forms of relational and psychological violence are reported far more frequently than cyberbullying, which is often perceived as the major problem.

      The impact of these adult behaviours on school climate and teaching quality is considerable.

      5. Intervention Strategies: Evolution and Good Practice

      Approaches to preventing and managing violence have evolved over the past 50 years.

      | Stage of evolution | Main approach | Limits and findings |
      | --- | --- | --- |
      | Early approaches | Programmes targeting aggressors, based on punishment. | Ineffective: "We realized that punishments didn't teach children good behaviour." |
      | Development | Comprehensive, systemic approaches focused on improving school climate. | More effective, but can be complemented. |
      | Recent approaches | Focus on the well-being of pupils, then of pupils AND school staff. | Acting on the sources of distress to prevent violence. |
      | Current approach | Developing socio-emotional skills for everyone (pupils and staff). | Learning self-regulation, how to express disagreement, and interpersonal skills. Adult staff act as essential role models. |

      The current model emphasizes the crucial role of adults.

      The relationship they build with young people, grounded in their own socio-emotional skills, is a determining factor in a positive school climate.

      6. Final Recommendations for Effective Action

      To intervene constructively, Professor Baumont proposes a series of guiding principles:

      1. Base interventions on locally observed facts: Focus on the dynamics specific to one's own school for maximum impact (the "school effect").

      2. Involve pupils and staff: Having the whole school community take part in decisions fosters a sense of belonging, engagement, mutual aid, and collaboration.

      3. Act with the resources available: Rather than waiting for government decisions or resources, it is essential to act proactively with the means at hand.

      "I am the first person who can decide what I do with what I have."

      4. Favour frequency over intensity: What matters most is not staging big one-off activities but making small, meaningful gestures every day.

      One must "act often" to durably strengthen the relationships between adults and pupils.

    1. One of the biggest problems a contemporary defender of a meritocratic order can see is the fact that parents of means are not willing to let their children fail, even if, by the logic of merit, they should. At the point that parents can prop up future generations, skill and effort become less relevant than birthright and inherited position, subverting the meritocracy with the very aristocratic dynamics that skill plus effort was designed to reject and continuing the cycle seen throughout China's history with meritocracy.

      It feels "fair", and it "works"... why shouldn't the people who work harder and have "the most" mechanotechnical capabilities be assigned to a job? In other words, as a friend of mine put it, if we could have only 3-Michelin-star cooks, why wouldn't we? It's an enticing idea: if we had such cooks with functional diversity, from different cultural backgrounds, skin tones, health conditions, etc., it seems to make sense that we would assign resources to them.

      But this ignores who we would be leaving out. Further, we are skipping past what makes a 3-star chef, which is to say, it's NEVER just a "chef", it's a WHOLE RESTAURANT. It's the location, the ambience, the service, which often takes much, much longer than a "typical" one and requires many more people (it's a spectacle in and of itself, as they serve minuscule dishes, often prepared in front of people ONE BY ONE). Plus, it essentializes consumption, as there is ONLY ONE 3-Michelin-star vegan restaurant in the world. It requires special utensils and training, and makes the process elitist and consumerist (telling you that you don't have to engage in it, just leave it to the experts), displacing hobbyism (the root of innovation), failure, and spiral (not linear) learning processes, along with many other externalities, like the exotic (highly limited) produce needed for most recipes.

      And that's granting the magical assumption that the process would be inclusive of everyone and produce enough chefs to feed the whole world. In whose mind? Since we can't have this kind of home cook (or robot cook) for every person, we would have to rely on mass-prepared dishes, probably flooding shelves with non-recyclable plastic containers to extend the food's shelf life, requiring far more carbon for transportation, and de-skilling people (leaving them less versatile, undermining transference and imagination for other tasks, and less able to build diverse stories and engage in interdisciplinary dialogue) who would pick their food from a distant, commodified service.


    1. Jak usunąć MIKROPLASTIK i BPA z organizmu? ("How to remove MICROPLASTICS and BPA from the body?") Toxicologist Dr. hab. Aleksandra Rutkowska

      1. Understanding the "Toxic Cocktail" (Chemical Types)

      The expert emphasizes that we are exposed to a mixture of substances that act together. Key chemicals include:

      • Bisphenols (BPA, BPS, BPF, etc.): BPA (Bisphenol A) is a major endocrine disruptor used in hard plastics and can linings. Crucially, the expert warns against "BPA-Free" labels, noting they are often a form of greenwashing. Manufacturers frequently replace BPA with BPS (Bisphenol S) or BPF (Bisphenol F), which are structurally similar and potentially just as harmful [00:28:38].
      • Phthalates: Used to make plastics flexible (like PVC). Found in flooring, food wraps, and cosmetics, they interfere with reproductive and metabolic health [00:07:03].
      • PFAS ("Forever Chemicals"): Used in non-stick pan coatings. These do not break down easily and can stay in the human body for many years [00:14:37], [00:43:13].
      • Alkylphenols & Flame Retardants: Chemicals used in detergents and furniture that accumulate in household dust and disrupt thyroid function [00:08:10], [00:15:48].

      2. Health Impacts: The "Grandchild Method"

      • Hormonal Mimicry: These chemicals trick the body into treating them like natural hormones (mimicking estrogen). They block receptors and can "program" fat cells to store more fat, leading to obesity [00:08:43], [00:09:18].
      • Diseases: Long-term exposure is linked to Type II diabetes, infertility, and hormone-dependent cancers like breast and prostate cancer.
      • Inflammation: Microplastic particles act as foreign bodies, causing chronic internal inflammation—the root cause of most civilization diseases [00:02:42].

      3. Fish and Food Packaging: Best vs. Worst

      • The Danger of Cans: Canned fish is ranked as the worst source of bisphenols because the fat causes chemicals from the can's lining to leach into the food [00:27:00].
      • Trout (Pstrąg): The healthiest choice. It lives a short life in clean, moving water, accumulating minimal toxins [00:27:50].
      • Tuna & Flounder: Recommended to avoid. Tuna lives too long (accumulating chemicals), and Flounder lives at the bottom where pollutants settle [00:27:27], [00:27:33].
      • Recommendation: Buy fish in glass jars or fresh rather than in metal cans [00:27:10].

      4. Hidden Exposure Sources: Imports and Interior

      • Asian Imports & Clothing: Products from Asian platforms often bypass EU safety standards and contain higher toxic concentrations. Synthetic clothes from Asia are heavily impregnated with chemicals to survive weeks in transport containers. The expert strongly advises washing new clothes at least twice before the first wear to reduce skin absorption of these toxins [00:17:32].
      • Tea Bags: Certain bags containing plastic mesh or glue can release billions of microplastic particles into a single cup [00:00:00].
      • Home Interiors: The combination of underfloor heating + vinyl or laminate panels is highly toxic; heat "bakes" chemicals into the air you breathe [00:32:08], [00:37:12].

      5. Practical Detox and Prevention

      • Liver Support: The liver can clear most bisphenols in a week if you stop exposure. Warning: Avoid aggressive "juice cleanses" that cause rapid weight loss, as this floods the blood with toxins previously stored in your fat tissue [00:20:26], [00:30:02].
      • The "First Step" Strategy: Start by wet-dusting your home and creating a 5-minute draft (intensive ventilation) twice a day [00:48:11].
      • Kitchen Changes: Switch to glass or stainless steel for storage, stop cooking rice/grains in plastic bags (cook them loose), and use cast iron or stainless steel pans instead of non-stick [00:21:22], [00:41:48], [00:43:13].
    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Manuscript number: RC-2025-03280

      Corresponding author(s): Stephan Gruber

      1. General Statements [optional]

      First, we would like to thank the editor at Review Commons for the efficient handling of our manuscript. We also apologize for our delayed response.

      We are grateful to all three reviewers for their careful evaluation of our work and for their constructive feedback, which provides a valuable basis for improving the figures and the text, as described below. We expect to be able to complete the revision quickly, following the plan described below.

      We note that the reviewer reports (Rev. #1 and Rev. #3) made us realize that the manuscript text was misleading on the following point. Although we used the purified ATP hydrolysis–deficient Smc protein for sybody isolation, this does not restrict the selection to a specific conformation. As described in detail in Vazquez-Nunez et al. (Figure 5), this mutant displays the ATP-engaged conformation only in a smaller fraction of complexes (~25% in the presence of ATP and DNA), consistent with prior in vivo observations reported by Diebold-Durand et al. (Figure 5). Rather than limiting the selection to a particular configuration, our aim was to reduce the prevalence of the predominant rod state in order to broaden the range of conformations represented during sybody selection. Consistent with this interpretation, only a small number of isolated sybodies show strong conformation-specific binding in the presence or absence of ATP/DNA, as observed by ELISA (now included in the manuscript). We will revise the manuscript text accordingly to clarify this point.

      2. Description of the planned revisions


      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      Gosselin et al. develop a method to target protein activity using synthetic single-domain nanobodies (sybodies). They screen a library of sybodies, generated by ribosome/phage display, against the Bacillus subtilis Smc-ScpAB complex. Specifically, they use an ATP-hydrolysis-deficient mutant of Smc so as to identify sybodies that will potentially disrupt Smc-ScpAB activity. They next screen their library in vivo, using growth defects in rich media as a read-out for perturbation of Smc activity. They identify 14 sybodies that mirror the smc deletion phenotype, including defective growth in fast-growth conditions as well as chromosome segregation defects. The authors use a clever approach by making chimeras between B. subtilis and S. pneumoniae Smc to narrow down the specific regions within the B. subtilis Smc coiled coil that are likely targets of the sybodies. Using ATPase assays, they find that the sybodies either impede DNA-stimulated ATP hydrolysis or hyperactivate ATP hydrolysis (even in the absence of DNA). The authors propose that the sybodies may be locking Smc-ScpAB in the "closed" or "open" state via interaction with the specific coiled-coil region on Smc. I have a few comments that the authors should consider:

      Major comments: 1. Lack of direct in vitro binding measurements: The authors do not provide measurements of sybody affinities, binding/unbinding kinetics, or stoichiometries with respect to Smc-ScpAB. Additionally, do the sybodies preferentially interact with Smc in the ATP/DNA-bound state? And do the sybodies affect the interaction of ScpAB with Smc? It is understandable that such measurements for 14 sybodies are challenging, and not essential for this study. Nonetheless, it is informative to have biochemical characterization of the sybody interaction with the Smc-ScpAB complex for at least 1-2 of the candidate sybodies described here.

      We agree with the reviewer that adding such data would be reassuring and that obtaining solid data using purified components is not easy even for a smaller selection of sybodies. We have data that show direct binding of Smc to sybodies by various methods including ELISA, pull-downs and by biophysical methods (GCI). Initially, we omitted these data from the manuscript as we are convinced that the mapping data obtained with chimeric SMC proteins is more definitive and relevant. During the revision we will incorporate the ELISA data showing direct binding and also indicating a lack of preference for a specific state of Smc.

      2. Many modes of sybody binding to Smc are plausible: The authors provide an elaborate discussion of sybodies locking the Smc-ScpAB complex in open/closed states. However, in the absence of structural support, the mechanistic inferences may need to be tempered. For example, is it not also possible for the sybodies to bind the inner interface of the coiled coil, resulting in steric hindrance of coiled-coil interactions? It is also possible that sybody interaction disrupts the ScpAB interaction (as data ruling this possibility out have not been provided). Thus, other potential mechanisms would be worth considering/discussing. In this direction, did AlphaFold reveal any potential insights into putative binding locations?

      We have attempted to map the binding by structure prediction; however, so far even the latest versions of AlphaFold are not able to clearly delineate the binding interface. Indeed, many ways of binding are possible, including disruption of the ScpAB interaction. However, since the main binding site is located on the Smc coiled coils, the latter scenario would likely be an indirect consequence of an altered coiled-coil configuration, consistent with our current interpretation.

      3. Sybody expression in vivo: Have the authors estimated sybody expression in vivo? Are they all expressed to similar levels?

      We have tagged selected sybodies with gfp and performed live cell imaging. This showed that they are all roughly equally expressed and that they localize as foci in the cell presumably by binding to Smc complexes loaded onto the chromosome at ParB/parS sites. We will include this data in the revised version of the manuscript.

      4. Sybodies should phenocopy the ATP-hydrolysis mutant of Smc: The sybodies were screened against an ATP-hydrolysis-deficient mutant of Smc, with the rationale that these sybodies would interfere with this step of the Smc duty cycle. Does the expression of the sybodies in vivo phenocopy the ATP-hydrolysis-deficient mutant of Smc? Could the authors consider any phenotypic read-outs that can indicate whether the sybody action results in an smc-null effect or specifically an ATP-hydrolysis-deficient effect?

      As alluded to above, we think that our selection gave rise to sybodies that bind various, possibly multiple, Smc conformations. Consistent with this idea, the phenotypes are similar to the null mutant rather than the ATP-hydrolysis-defective EQ mutant, which displays even more severe growth phenotypes. We will add the following notes to the text:

      “These conditions favour ATP-engaged particles alongside the typically predominant ATP-disengaged rod-shaped state (add Vazquez Nunez et al., 2021).”

      “ELISA data confirm that nearly all clones bind Smc-ScpAB; however, their binding shows little or no dependence on the presence of ATP or DNA.”

      Minor comments: 1. It was surprising that no sybodies were found that could target both the B. subtilis and S. pneumoniae Smc. For example, sybodies targeting the head regions of Smc might work in a more universal manner. Could the authors comment on the coverage of the sybodies across the protein structure?

      It is rather common that sybodies (like antibodies and nanobodies) exhibit strong affinity differences between highly conserved proteins (>90% identity). The underlying reasons for such strong discrimination are i) the location of less conserved residues primarily at the target protein surface and ii) the large interaction interface between sybody and target, which offers multiple vulnerabilities to disturbance, in particular through bulky side chains resulting in steric clashes. Another frequently observed phenomenon is sybody binding to a dominant epitope, which also often applies to nanobodies and antibodies. A great example of this is the dominant epitopes on SARS-CoV-2 RBDs.

      2. Growth curves (Fig. S3) show a large jump in growth recovery under sybody-induction conditions. Could the authors address this observation here and in the text?

      We suppose that this recovery represents suppressor mutants and/or (more likely) improved growth in the absence of functional Smc during nutrient limitation (see Gruber et al., 2013 and Wang et al., 2013). We will add this statement to the text.

      3. L41 - Sentence correction: "Loop" can be removed.

      Ah, yes, sorry for this confusing error. Thank you.

      4. L525 - bsuSmc 'E': extra E can be removed.

      To do. Thank you.

      5. References need to be properly formatted.

      To do. Thank you.

      6. The authors should add to the figure legend for Fig 1i details on the representation of the purple region, and explain the grey strokes for the orientation of the loop.

      To do.

      7. How many cells were analysed in the cell biological assays? Legends should include this information.

      To be included.

      Reviewer #1 (Significance (Required)):

      Overall, this is an impressive study that uses an elegant strategy to find inhibitors of protein activity in vivo. The manuscript is clearly written and the experiments are logical and well-designed. The findings from the study will be significant to the broad fields of genome biology, synthetic biology, and SMC biology in particular. Specifically, the coiled-coil domain of SMC proteins has been proposed to be of high functional value. The authors have elegantly identified key coiled-coil regions that may be important for function, and in parallel demonstrated the potential of synthetic sybodies/designed binders for inhibiting protein activity.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      Review: "Single Domain Antibody Inhibitors Target the Coiled Coil Arms of the Bacillus subtilis SMC complex" by Ophélie Gosselin et al., Review Commons RC-2025-03280

      Structural Maintenance of Chromosomes (SMC) proteins, a family of proteins found in almost all organisms, are organizers of DNA. They accomplish this by a process known as loop extrusion, wherein double-stranded DNA is actively reeled in and extruded into loops. Although SMCs are known to have several DNA-binding regions, the exact mechanism by which they facilitate loop extrusion is not understood but is believed to entail large conformational changes. There are currently several models for loop extrusion, including one wherein the coiled-coil (CC) arms open, but there is a lack of insightful experimentation and analysis to confirm any of these models.

      The work presented aims to provide much-needed new tools to investigate these questions: conformation-selective sybodies (synthetic nanobodies) that are likely to alter the CC opening and closing reactions. The authors produced, isolated, and expressed sybodies that specifically bound to Bacillus subtilis Smc-ScpAB. Using chimeric Smc constructs, where the coiled coils were partly replaced with the corresponding sequences from Streptococcus pneumoniae, the authors revealed that the isolated sybodies all targeted the same 4N CC element of the Smc arms. This region is likely disrupted by the sybodies either by stopping the arms from opening (correctly) or forcing them to stay open (enough). Disrupting these functional elements is suggested to cause the lethal phenotype of impaired Smc-dependent chromosome organization, implying that arm opening and closing is a key regulatory feature of bacterial Smc-ScpAB.

      In summary, the authors present a new method for trapping bacterial Smcs in certain conformations using synthetic antibodies. Using these antibodies, they have pinpointed the (previously suggested) 4N region of the coiled coils as an essential site for the opening and closing of the Smc coiled-coil arms, and shown that hindering these reactions blocks Smc-driven chromosomal organization. The work has important implications for how we might elucidate the mechanism of DNA loop extrusion by SMC complexes.

      Some specific comments:

      Line 75: "likely stabilizing otherwise rare intermediates of the conformational cycle." - sorry, why is that being concluded? Why not stabilizing longer-lived conformations?

      We will clarify this statement!

      Line 89: Sorry, possibly our lack of understanding: why first ribosome and then phage display?

      Ribosome display allows screening of around 10^12 sybodies per selection round (the library size is technically unrestricted), while for phage display the library size is restricted to around 10^9 sybodies, because producing a phage library requires transformation of the phagemid plasmid into E. coli, thereby introducing a diversity bottleneck. This is why the sybody platform starts off with ribosome display. It switches to phage display from round 2 onwards because the output of the initial round of ribosome display is around 10^6 sybodies, which can be easily transferred into the phage display format. Phage display is used to minimize selection biases. For more information, please consult the original sybody paper (PMID: 29792401).

      Line 100: Why was only lethality selected? Less severe phenotypes not clear enough?

      Yes, colony size is more difficult to score robustly, as the sizes of individual transformant colonies can vary quite widely. The number of isolated sybodies was at the limit of further analysis.

      Line 106: Could it be tested somehow if convex and concave library sybodies fold in Bs?

      We did not focus on the non-functional sybody candidates, and only sybodies of the loop library turned out to cause functional consequences at the cellular level. Notably, we will include gfp-imaging showing that non-lethal sybodies are expressed to levels similar to those of toxic sybodies. Given the identical scaffold of concave and loop sybodies (they only differ in their CDR3 length), we expect that the concave sybodies fold in the cytoplasm of B. subtilis. For the convex sybodies, which exhibit a different scaffold, this will be tested.

      Line 125: Could Pxyl be repressed by glucose?

      To our knowledge and experience, repression by glucose (catabolite repression) does not work well in this context in B. subtilis.

      Line 131: The SMC replacement strain is a cool experiment and removes a lot of doubts!

      Thank you! (we agree 😊)

      Line 141: The mapping is good and looks reliable, but looks and feels like a tour de force? Of course, some cryo-EM would have been lovely (lines 228-229 understood, it has been tried!).

      Yes, we have made several attempts at structural biology. Unfortunately, Smc-ScpAB is not well suited for cryo-EM in our hands and crystallography with Smc fragments and sybodies did not yield well-diffracting crystals.

      Line 179: Mmmh. Do we not assume DNA binding on top of the dimerised heads to open the CC (clamp)?

      We will clarify the text here.

      Line 187: Having sybodies that presumably keep the CC together (closing) and some that do not allow them to come together correctly (opening) is really cool and probably important going forward.

      Thank you!

      Figure 1 Ai is not very colour-blind friendly.

      We are sorry for this oversight. We will try to make the color scheme more inclusive. Thank you for the notification.

      Optional: did the authors see any spontaneous mutations emerge that bypass the lethal phenotype of sybody expression?

      No, we did not observe spontaneous mutations suppressing the phenotype, possibly due to the limited number of cell generations observed. We tried to avoid suppressors by limiting growth, but this may indeed be a good future approach to further fine-map the binding sites and to obtain insights into the mechanism of inhibition.

      Optional: we think it would be nice to try some biochemical experiment with BMOE/cysteine-crosslinked B. subtilis Smc in the mid-region (4N or next to it) of the Smc coiled coils to try to further strengthen the story. Some of the authors are experts in this technique and strains might already exist?

      We have indeed tried to study the impact of sybody binding on Smc conformation by cysteine cross-linking. However, we were not convinced by the results and thus prefer not to draw any conclusions from them. We will add a corresponding note to the text.

      Reviewer #2 (Significance (Required)):

      The authors present a new method for trapping bacterial Smc's in certain conformations using synthetic antibodies. Using these antibodies, they have pinpointed the (previously suggested) 4N region of the coiled coils as an essential site for the opening and closing of the Smc coiled coil arms and that hindering these reactions blocks Smc-driven chromosomal organization. The work has important implications for how we might elucidate the mechanism of DNA loop extrusion by SMC complexes. Thank you!

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Gosselin et al. use the sybody technology to study effects of in vivo inhibition of the Bacillus subtilis SMC complex. Smc proteins are central DNA binding elements of several complexes that are vital for chromosome dynamics in almost all organisms. Sybodies are selected from three different libraries of the single domain antibodies, using the "transition state" mutant Smc. They identify 14 such sybodies that are lethal when expressed in vivo, because they prevent proper function of Smc. The authors present evidence suggesting that all obtained sybodies bind to a coiled-coil region close to the Smc "neck", and thereby interfere with the Smc activity cycle, as evidenced by defective ATPase activity when Smc is bound to DNA. The study is well done and presented and shows that the strategy is very potent in finding a means to quickly turn off a protein's function in vivo, much quicker than depleting the protein.

      The authors also draw conclusions on the molecular mode of action of the SMC complex. They provide a number of suggestive experiments, but in my view mostly indirect evidence for such a mechanism.

      My main criticism is that the authors have used a single, catalytically trapped form of SMC. They speculate why they only obtain sybodies from one library, and then only identify sybodies that bind to a rather small part of the large Smc protein. While the approach is definitely valuable, it is biased towards sybodies that bind to Smc in a quite special way, it seems. Using wild type Smc would be interesting, to make more robust statements about the action of sybodies potentially binding to different parts of Smc.

      As explained above, we are quite confident the Smc ATPase mutation did not bias the selection in an obvious way. The surprising bias towards coiled coil binding sites has likely other explanations, as they likely form a preferred epitope recognized by sybodies.

      Line 105: Alternatively, the other libraries did not produce good binders or these sybodies were not stably expressed in B. subtilis. This could be tested using Western blotting - I am assuming sybody antibodies are commercially available. However, this test is not important for the overall study, it would just clarify a minor point.

      While there are antibody fragments available to augment the size of sybodies (PMID: 40108246), these recognize 3D-epitopes and are thus not suited for Western blotting. We did not follow up on the negative results much, but would like to point out again that there are several biases that likely emerge for the same reason (bias to library, bias to coiled coil binding site). If correct, then likely few other sybodies are effectively lethal in B. subtilis, with the exception of the ones isolated and characterized. We have added this notion to the manuscript. We have also tested the expression of non-lethal sybodies by gfp-tagging and imaging. These results will be included in the revision.

      Fig. 2B: it is odd to count Spo0J foci per cell, as it is clear from the images that several origins must be present within the fluorescent foci. I am fine with the "counting" method, as the images show there is a clear segregation defect when sybodies are expressed. I believe the authors should state, though, that this is not a replication block, but failure to segregate origins.

      We agree that this is an important point and will add a corresponding comment to the text.

      Testing binding sites of sybodies to the SMC complex is done in an indirect manner, by using chimeric Smc constructs. I am surprised why the authors have not used in vitro crosslinking: the authors can purify Smc, and mass spectrometry analyses would identify sites where sybodies are crosslinked to Smc. Again, I am fine with the indirect method, but the authors make quite concrete statements on binding based on non-inhibition of chimeric Smc; I can see alternative explanations why a chimera may not be targeted.

      We have made several attempts of testing direct binding with mixed outcomes and decided to not include those results in the light of the stronger and more relevant in vivo mapping. However, we will add ELISA results and briefly discuss grating coupled interferometry (GCI) data and pull-downs.

      Smc-disrupting sybodies affect the ATPase activity in one of two ways. Again, rather indirect experiments. This leads to the point "Revealing Smc arm dynamics through synthetic binders" in the discussion. The authors are quite careful in stating that their experiments are suggestive of a certain mode of action of Smc, which is warranted.

      In line 245, they state: "More broadly, the study demonstrates how synthetic binders can trap, stabilize, or block transient conformations of active chromatin-associated machines, providing a powerful means to probe their mechanisms in living cells." This is of course a possible scenario for the use of sybodies, but the study does not really trap Smc in a transient conformation, at least this is not clearly shown.

      We agree and will carefully rephrase this statement. Thank you.

      Overall, it is an interesting study, with a well-presented novel technology, and a limited gain of knowledge on SMC proteins.

      We respectfully disagree with the last point, since our unique results highlight the importance of the Smc coiled coils, which are otherwise largely neglected in the SMC literature, likely (at least in part) due to the mild effect of single point mutations on coiled coil dynamics.

      Reviewer #3 (Significance (Required)):

      The work describes the generation and use of single-domain antibodies (sybodies) to interfere with the function of proteins in bacteria. Using this technology for the SMC complex, the authors demonstrate that they can obtain a significant number of binders that target a defined region in SMC and thereby interfere with the ATPase cycle.

      The study does not present a strong gain of knowledge of the mode of action of the SMC complex.

      As pointed out above, we respectfully disagree with this assertion.


      3. Description of the revisions that have already been incorporated in the transferred manuscript

      Please insert a point-by-point reply describing the revisions that were already carried out and included in the transferred manuscript. If no revisions have been carried out yet, please leave this section empty.


      4. Description of analyses that authors prefer not to carry out

      Please include a point-by-point response explaining why some of the requested data or additional analyses might not be necessary or cannot be provided within the scope of a revision. This can be due to time or resource limitations or in case of disagreement about the necessity of such additional data given the scope of the study. Please leave empty if not applicable.

      As pointed out above, there are a few minor points that we prefer not to experimentally address. In particular, we do not consider it necessary to determine the expression levels of sybodies which were non-inhibitory. We also wish to note that we attempted to obtain additional structural and biochemical data, and to that end performed cryo-EM, crystallography and cysteine cross-linking experiments. Unfortunately, we did not obtain sybody complex structures, and the cross-linking data were not conclusive. We also wish to note that the first author has finished her PhD and left the lab, which limits our capacity to add additional experiments. However, as the reviewers also pointed out, the main conclusions are well supported by the data already.


    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      Gosselin et al. develop a method to target protein activity using synthetic single-domain nanobodies (sybodies). They screen a library of sybodies generated against the Bacillus Smc-ScpAB complex using ribosome/phage display. Specifically, they use an ATP hydrolysis deficient mutant of SMC so as to identify sybodies that will potentially disrupt Smc-ScpAB activity. They next screen their library in vivo, using growth defects in rich media as a read-out for Smc activity perturbation. They identify 14 sybodies that mirror the smc deletion phenotype, including defective growth in fast-growth conditions, as well as chromosome segregation defects. The authors use a clever approach by making chimeras between Bacillus and S. pneumoniae Smc to narrow down to specific regions within the Bacillus Smc coiled-coil that are likely targets of the sybodies. Using ATPase assays, they find that the sybodies either impede DNA-stimulated ATP hydrolysis or hyperactivate ATP hydrolysis (even in the absence of DNA). The authors propose that the sybodies may likely be locking Smc-ScpAB in the "closed" or "open" state via interaction with the specific coiled-coil region on Smc. I have a few comments that the authors should consider:

      Major comments:

      1. Lack of direct in vitro binding measurements: The authors do not provide measurements of sybody affinities, binding/unbinding kinetics, or stoichiometries with respect to Smc-ScpAB. Additionally, do the sybodies preferentially interact with Smc in the ATP/DNA-bound state? And do the sybodies affect the interaction of ScpAB with SMC? It is understandable that such measurements for 14 sybodies are challenging, and not essential for this study. Nonetheless, it is informative to have biochemical characterization of sybody interaction with the Smc-ScpAB complex for at least 1-2 candidate sybodies described here.
      2. Many modes of sybody binding to Smc are plausible: The authors provide an elaborate discussion of sybodies locking the Smc-ScpAB complex in open/closed states. However, in the absence of structural support, the mechanistic inferences may need to be tempered. For example, is it also not possible for the sybodies to bind the inner interface of the coiled-coil, resulting in steric hindrance to coiled-coil interactions? It is also possible that sybody interaction disrupts the ScpAB interaction (as data ruling this possibility out has not been provided). Thus, other potential mechanisms would be worth considering/discussing. In this direction, did AlphaFold reveal any potential insights into putative binding locations?
      3. Sybody expression in vivo: Have the authors estimated sybody expression in vivo? Are they all expressed to similar levels?
      4. Sybodies should phenocopy the ATP hydrolysis mutant of Smc: The sybodies were screened against an ATP hydrolysis deficient mutant of Smc, with the rationale that these sybodies would interfere with this step of the Smc duty cycle. Does the expression of the sybodies in vivo phenocopy the ATP hydrolysis deficient mutant of Smc? Could the authors consider any phenotypic read-outs that can indicate whether the sybody action results in an smc-null effect or specifically an ATP hydrolysis deficient effect?

      Minor comments:

      1. It was surprising that no sybodies were found that could target both bacillus and spneu Smc. For example, sybodies targeting the head regions of Smc that might work in a more universal manner. Could the authors comment on the coverage of the sybodies across the protein structure?
      2. Growth curves (Fig. S3) show a large jump in recovery in growth under sybody induction conditions. Could the authors address this observation here and in the text?
      3. L41- Sentence correction: Loop can be removed.
      4. L525 - bsuSmc 'E' :extra E can be removed.
      5. References need to be properly formatted.
      6. The authors should add, in the figure legend for Fig 1i, details on the representation of the purple region, and explain the grey strokes indicating the orientation of the loop.
      7. How many cells were analysed in the cell biological assays? Legends should include this information.

      Significance

      Overall, this is an impressive study that uses an elegant strategy to find inhibitors of protein activity in vivo. The manuscript is clearly written and the experiments are logical and well-designed. The findings from the study will be significant to the broad field of genome biology, synthetic biology and also SMC biology. Specifically, the coiled coil domain of SMC proteins has been proposed to be of high functional value. The authors have elegantly identified key coiled-coil regions that may be important for function, and in parallel demonstrated the potential of synthetic sybodies/designed binders for inhibiting protein activity.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Response to Reviewers

      We thank the Reviewers for their appreciative comments (Reviewer 1: “first time that a well-established existing mathematical model of signaling response extended and applied to heterogeneous ligand mixtures”) and constructive suggestions for improvement. In this extensive revision, we have not only addressed the suggestions comprehensively but also extended our analysis of signaling antagonism to all doses and at the single-cell level using novel computational workflows. This resulted in the discovery of several mechanisms of antagonism and synergy that are dose-dependent and dependent on the cell-specific state of the signaling network, thereby manifesting in only a subset of cells.

      In addressing the Reviewers' comments, we have made substantial revisions to improve clarity, rigor, and biological interpretation. Below we briefly summarize the main concerns raised by Reviewers 1-3 and how we have addressed them.

      • We have rewritten the Methods section to clarify our approaches. We have also added the explanation of methodology and the rationale in the main text to improve readability and comprehensiveness (Addressing Reviewer #1 comments). This includes explaining and justifying the signaling codon approaches (Reviewer 1), our core-module parameter matching methodology and discussion (Reviewer #1, point 11, Reviewer #2, point 1), and the model schematic (Reviewer #1, point 5).
      • For one of our major conclusions – that macrophages may distinguish stimuli in the context of ligand mixtures – we have validated these results with experiments, which increases confidence in this conclusion (Reviewer #2, point 3, Reviewer #3, point 2).
      • We have updated the model for CpG-pIC competition using Michaelis–Menten kinetics without any additional parameters, rather than introducing new free parameters. This change removes parameter freedom for fitting combinatorial conditions, leading to a more constrained and mechanistically grounded model whose predictions align better with experimental data (Updated Figures 2 and S2; Reviewer #2, point 2).
      • We have addressed all other editorial and clarification-related concerns as well, as detailed in our point-by-point response below. In addition, we have extended the scope of the manuscript by analyzing ligand combinations across a broad dose range, from non-responsive to saturated conditions. This led to several additional discoveries. For example, we show that ultrasensitive IKK activation can underlie synergistic combinations of ligands at low doses. In contrast, beyond the CpG-poly(I:C) antagonism, we identify that competition for CD14 uptake by LPS and Pam can generate antagonism between these ligands within specific dose ranges.

      Importantly, such antagonism or synergy is not evident in all cells in the population. It may also not be picked up by studies of the mean behavior. With our new computational workflow that allows for single-cell resolution, we identify the conditions that must be met by the signaling network state for antagonism or synergy to take place.

      Further, we examine the hypothesis that such signaling pathway interactions affect stimulus-response specificity in combinatorial stimulus conditions. By comparing models with and without this antagonism, we demonstrate that antagonistic interactions can improve stimulus-response specificity in complex ligand mixtures.

      These additional analyses provide a new mechanistic understanding of cellular information processing and elucidate how synergy and antagonism can mechanistically shape signaling fidelity in response to complex ligand mixtures.

      Point-by-Point Response

      Reviewer #1

      Evidence, reproducibility and clarity

      The authors extend an existing mathematical model of NFkB signalling under stimulation of various single receptors to a model that describes responses to stimulation of multiple receptors simultaneously. They compare this model to experimental data derived from live-cell imaging of mouse macrophages, and modify the model to account for potential antagonism between the TLR3 and TLR9 responses due to competition for endosomal transport. Using this framework they show that, despite distinguishability decreasing with increasing numbers of heterogeneous stimuli, macrophages are still able in principle to distinguish these to a statistically significant degree. I congratulate the authors on an interesting approach that extends and validates an existing mathematical model, and also provides valuable information regarding macrophage response.

      Response: We thank the reviewer for this appreciative assessment and for the careful reading of our work. The constructive comments helped us substantially improve the rigor and clarity of the manuscript.

      In addition to revising the text for clarity, we have extended our analysis to systematically investigate dose-response behavior for each ligand pair. Using the experimentally validated model, we explored 10 ligand pairs across a range of doses from non-responsive to saturating. This allowed us to identify mechanistic regimes in which synergy and antagonism arise at the single-cell level. In particular, we found that low-dose synergy can be explained by ultrasensitive IKK activation (Figure 4 and corresponding supplementary figures), while antagonism can emerge from competition for shared components such as CD14 (Figure 5 and corresponding supplementary figures). We further show that antagonism can enhance condition distinguishability in ligand mixtures, thereby contributing to stimulus-response specificity (Figure 5 and corresponding supplementary figures).

      There are no major issues affecting the scientific conclusions of the paper; however, the lack of detail surrounding the mathematical model and the 'signaling codons' that are used throughout the paper makes it difficult to read. This is exacerbated by the fact that I was unable to find Ref 25, which apparently describes the model; however, I was able to piece together the essential components from the description in Ref 8 and the supplementary material.

      Response: This comment helped us to improve the writing. We apologize that the key reference 25 was still not publicly available. It is now published in Nature Communications. In addition, we have added more details to clarify the mathematical model as well as the signaling codons, in results and in methods. Please see below for details.

      Lots of the minor comments below stem from this, however there are also a few other places that could benefit from some additional clarification and explanation.

      Significance:

      1. '...it remains unclear complex...' -> '...it remains unclear whether complex...'

      Response: We have rewritten the Significance (now it is Synopsis).

      Introduction:

      2. 'temporal dynamics of NFkB' - it would be good to be more concrete regarding the temporal dynamics of what aspect of this (expression, binding, conformation, etc), if possible.

      Response: It refers to the presence of NFκB in the nucleus, which represents active NFκB capable of activating gene expression. We have clarified this (Lines 59-61 in introduction paragraph 2): “Upon stimulation, NFκB translocates into the nucleus, … activating immune gene expression (10, 15–19).”

      'signaling codons' - the behaviour of these is key to the entire paper, so even if they are well described in the reference, it would be good to have a short description as early as possible so that the reader can get an idea in their mind of what exactly is being discussed here. Later, it would be good to have a concrete description of exactly what these capture.

      Response: We thank the reviewer for this comment. We have added one whole paragraph in the early introduction to describe the concept of Signaling Codons which allow quantitative characterization of NFkB stimulus-response-specific dynamics (Lines 60-67). We have also added more concrete description of Signaling Codons in the results as well as adding an illustration for the signaling codons (Lines 169-175, Figure S2B).

      'This challenge...population of macrophages' - this seems a bit out of place, and is a bit of a run on sentence, so I suggest moving this to the next paragraph and working it into the first sentence there '...regulatory mechanisms, and this challenge could be addressed with a model parameterised to account for heterogeneous...Early models ...', or something similar.

      Response: We thank the reviewer for this suggestion, we have revised this as suggested. This improves the logic flow (Lines 87-88).

      Ref 25: I can't find a paper with this title anywhere, so if it's an accepted preprint then it would be good to have this available as well. That said, I still think it would be difficult to grasp the work done in this paper without some description of the mathematical model here, at least schematically, if not the full set of ODEs. For example, there are numerous references to how this incorporates heterogeneous responses, the 'core module', etc, and the reader has no context of these if they aren't familiar with the structure of the model.

      Response: We apologize that Ref 25 was not on PubMed. Now it is published, and we have updated the corresponding information. This comment also helped us to improve the writing by adding a description of the mathematical model in the Introduction (Lines 95-105), the results (Lines 129-141), and a detailed description of the model in the Methods (Simulation of heterogenous NFκB dynamical responses).

      We have also added the schematic of the model topology in Figure S1 (adapted from previous publications Guo et al 2025, Adelaja et al 2021) to make sure the paper is self-contained.

      'A key challenge which is...' -> 'A key challenge is...'

      Response: We have revised the Introduction and removed this sentence.

      'With model simulation ...' -> a bit of a run on sentence, I suggest breaking after 'conditions'.

      Response: We have revised the Introduction and removed this sentence.

      Results:

      This section would benefit from a more in-depth description of the model and experimental setup. In particular for the experiment, the reader never really knows what the workflow for this is, nor what the model ingests as input, and what the predictions are of.

      Response: This comment helped us to improve clarity by adding an in-depth description of the model and experimental setup. We have revised the Results as suggested (Lines 129-141). We have also appended the corresponding revision here for the reviewer's reference.

      “This mechanistic model was trained on single-ligand response experimental datasets, capturing the single-ligand stimulus-response specificity of the population of macrophages while accounting for cellular heterogeneity. Specifically, quantitative NFκB dynamic trajectory data from hundreds of single macrophages responding to five single ligands (TNF, pIC, Pam, CpG, LPS) at 3-5 doses was obtained from live cell imaging experiments. The mathematical model (Figure S1) consists of a 52-dimensional system of ordinary differential equations, including 52 intracellular species, 101 reactions and 133 parameters, and is divided into five receptor modules, which respond to the corresponding ligands respectively, and the IKK-NFκB core module that contains the prominent IκBα negative feedback loop. By fitting the single-cell experimental data set with a non-linear mixed effect statistical model (coupled with the 52-dimensional NFκB ODE model), the parameter distributions for the single-cell population were inferred. Analyzing the resulting simulated NFκB trajectories with information-theoretic and machine learning classification analyses confirmed that the virtual cell model simulations reproduced key SRS performance characteristics of live macrophages.”

      '..mechanistic model was trained...' - trained in this study, or in the previous referenced study? Response: The mechanistic model was trained in a previous study (Guo et al 2025 Nature Comm), and we have clarified this in the revision (Lines 127 - 129).

      'determined parameter distributions' - this is where it would be good to have more background on the model. What parameters are these, and what do they correspond to biologically? It would also be nice to see in the methods or supplementary material how this is done (maximum likelihood, etc).

      Response: This comment helped us to clarify the predetermined parameter distributions. We have revised the Methods to include this information (Simulation of heterogenous NFκB dynamical responses, paragraph 3). We have appended the corresponding text here for the reviewer's convenience.

      “The ODE model was then fitted to the population of single-cell trajectories to recapitulate the cell-to-cell heterogeneity in the experimental data (2). This is achieved by solving the non-linear mixed effects model (NLME) through the stochastic approximation expectation-maximization algorithm (SAEM) (3–6). Seventeen parameters were estimated. Within the core module, the estimated parameters included the rates governing TAK1 activation (k52, k65), the time delays of IκBα transcription regulated by NFκB (k99, k101), and the total cellular NFκB abundance (tot NFκB). Within the receptor module, receptor synthesis rates (k54 for TNF, k68 for Pam, k85 for CpG, k35 for LPS, k77 for pIC), degradation rates of the receptor–ligand complexes (k56, k61, k64 for TNF; k75 for Pam; k93 for CpG; k44 for LPS; k83 for pIC), and endosomal uptake rates (k87 for CpG; k36 and k40 for LPS; k79 for pIC) were fitted. All remaining parameters were fixed at literature-suggested values (1). The single-cell parameters inferred from experimental individual-cell trajectories then served as empirical distributions for generating the new dataset (see Supplementary Dataset 2).”

      'matching cells with similar core model...' - it's difficult to follow the logic as to why this is done, so I think this needs to be a little clearer. My guess would be that the assumption is that simulated cells with similar 'core' parameters have a similar downstream signalling response, and therefore the receptors can be 'transplanted'. So it would be nice to see exactly what these distributions are and what the effect of a bad match would be. Response: We thank the reviewer for this comment. In the revision, we have explained the rationale for matching cells with similar core module (Lines 145-152).

      Previous work determined parameter distributions for only the cognate receptor module (and the core module) that provided the best fit for the relevant single ligand experimental data (Figure 1A, Step 1), but other receptor modules’ parameter values were not determined. To simulate stimulus responses to more than two ligands, we imputed the other ligand-receptor module parameters using shared core-module parameters as common variables and employing nearest-neighbor hot-deck imputation (35). In this setup, the core module functions as an “anchor” to harmonize two or more receptor-specific parameter distributions.
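      For concreteness, the nearest-neighbor hot-deck imputation step described above can be sketched as follows. This is an illustrative toy example only, not the study's actual parameter sets; the function name, array shapes, and values are hypothetical. Each "recipient" cell (parameterized for one ligand) borrows the missing receptor-module parameters from the "donor" cell whose shared core-module parameters are closest in Euclidean distance:

```python
import numpy as np

def hot_deck_impute(core_a, core_b, receptor_b):
    """For each recipient cell (rows of core_a), find the nearest donor cell
    (rows of core_b) by Euclidean distance over the shared core-module
    parameters, and copy that donor's receptor-module parameters."""
    # pairwise Euclidean distances between core-parameter vectors
    d = np.linalg.norm(core_a[:, None, :] - core_b[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)  # index of the nearest donor per recipient
    return receptor_b[nearest], nearest

# toy example: 4 recipient cells, 3 donor cells, 2 shared core parameters
core_a = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.9, 1.1]])
core_b = np.array([[0.1, 0.1], [1.0, 0.9], [5.2, 4.8]])
receptor_b = np.array([[10.0], [20.0], [30.0]])  # one receptor parameter per donor
imputed, idx = hot_deck_impute(core_a, core_b, receptor_b)
print(idx)  # → [0 1 2 1]
```

      The core module thus acts as the common "anchor" variable: two recipients with similar core parameters (cells 1 and 3 above) inherit receptor parameters from the same donor.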

      This nearest-neighbor hot-deck imputation approach (the core module matching method) was shown to outperform other approaches, including random matching and rescaled-similarity matching (Guo et al. 2025, Supplementary Figure S11). For the reviewer’s convenience, we have also appended the corresponding figure below.

      Figure S11 from (Guo et al., 2025). Assessment of matching techniques for predicting single-cell responses to various ligand stimuli. (a-d) Heatmaps illustrating the Wasserstein distance between the signaling codon distributions predicted by the model and those observed in experiments. The analysis employs four distinct matching methods to align the five ligand-receptor module parameters: (a) “Random Matching”, (b) “Similarity Matching” (the method used in our study), (c) “Rescaled-Similarity Matching”, and (d) “Sampling Approximated Distribution”. In the heatmaps, rows represent signaling codons, columns denote ligands, and the color intensity indicates the Wasserstein distance, providing a visual metric of similarity between model predictions and experimental data. (e-f) Histograms of the average Wasserstein distance between the model-predicted and experimentally observed signaling codon distributions, summarized across signaling codons (e) and ligands (f).

      Some explanation of how this relates to the experimental data the parameters are fit on would also be useful. (a) Is there a correspondence between individual simulated cells and the experimental data for the single ligand stimulation, and then the smallest set of these is taken? Is there also a matching from the simulated multi-receptor modules and the multi-receptor data, and if so, is this done in the same way?

      Response: This comment helped us clarify the correspondence between model simulations and experimental data.

      Yes, there is a correspondence between individual simulated cells and the previously published experimental data (Guo et al., 2025b) for single-ligand stimulation. We have revised the first paragraph of the Results (Lines 136–148) and the Methods (Lines 544-557) to clarify how the model simulations were fit to the previous experimental dataset. See Reviewer 1, Comment 10 for the updates in the Methods. We have pasted the revised Results text below for the reviewer's reference.

      By fitting the single-cell experimental data set with a non-linear mixed effect statistical model (coupling with 52-dimensional NFκB ODE model), the parameter distributions for the single cell population were inferred.

      'six signaling codons' - here it would be good to recapitulate what these represent, but also what the 'strength' and 'activity' correspond to (total integrated value, maximum value, etc) Response: We thank the reviewer for the suggestion and have clarified this point (Lines 169-175, Figure S2B).

      'pre-defined thresholds' - no need to state these numerically in the text (although giving some sense of how/why these were chosen would give some context), but I couldn't find the values of these, nor values corresponding to the signaling codons. Response: We appreciate the reviewer’s comment. We have added this information in the figure legend (Figure 1B-C) and Method -- “Responder fraction” (Lines 666-672). Specifically, for the model simulation data, the integral thresholds are 0.4 (µM·h), 0.5 (µM·h), and 0.6 (µM·h). The peak thresholds are 0.12 (µM), 0.14 (µM), and 0.16 (µM). For the experimental data, the integral thresholds are 0.2 (A.U.·h), 0.3 (A.U.·h), and 0.4 (A.U.·h). The peak thresholds are 0.14 (A.U.), 0.18 (A.U.), and 0.22 (A.U.). Thresholds were selected so that the medium threshold yields 50% responder cells under single-ligand conditions, while the responder ratio remains unsaturated under three-ligand stimulation.
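      For concreteness, the threshold-based responder classification described above can be sketched as below. This is illustrative only: the toy trajectories are hypothetical, and the rule that a responder must clear both the integral and the peak threshold is an assumption here (the exact definition is in the Methods, "Responder fraction"):

```python
import numpy as np

def responder_fraction(trajs, t, integral_thr, peak_thr):
    """Classify each single-cell NFkB trajectory as responder/non-responder
    and return the responder fraction of the population."""
    # trapezoidal area under each trajectory (activity integral)
    integrals = ((trajs[:, 1:] + trajs[:, :-1]) / 2 * np.diff(t)).sum(axis=1)
    peaks = trajs.max(axis=1)  # peak amplitude per cell
    # ASSUMPTION: a responder must clear both thresholds
    responders = (integrals >= integral_thr) & (peaks >= peak_thr)
    return responders.mean()

# toy example: two flat trajectories over 8 h, medium simulation thresholds
t = np.linspace(0.0, 8.0, 9)
trajs = np.array([[0.20] * 9,    # integral 1.6 uM*h, peak 0.20 uM -> responder
                  [0.05] * 9])   # integral 0.4 uM*h, peak 0.05 uM -> non-responder
frac = responder_fraction(trajs, t, integral_thr=0.5, peak_thr=0.14)
print(frac)  # → 0.5
```

      Sweeping integral_thr and peak_thr in such a function is how one would tune the thresholds so that the medium setting yields ~50% responders under single-ligand conditions.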

      'non-responder cells are likely a result of cellular heterogeneity in receptor modules rather than the core module' - is this the 'ill health' referenced earlier? If so make this clear. Response: Yes, this is the ‘ill health’ referenced earlier, and we have clarified this (Lines 198-199).

      It's also very difficult to follow this chain of logic, given that the reader at this point doesn't have any knowledge of what the 'core' module is, nor the significance of the thresholds on the signaling codons. I would suggest making this much clearer, with reference to each of these.

      Response: We apologize for the poor explanation. We have now explained in the Introduction (Lines 95-106) and the Results (Lines 129-141) how the model is structured into receptor-proximal modules that converge on a common core module. We have also added a schematic for clarity (Figure S1). For further clarification of the mathematical model, we have significantly revised the Methods (Simulation of heterogenous NFκB dynamical responses). The defined thresholds are clarified in the Methods -- “Responder fraction”.

      '...but the model represented these as independent mass action reactions' - the significance of this may not be clear to someone not familiar with biophysical models, so probably better to make it explicit. Response: We thank the reviewer for this reminder, and we have added a description of the significance of this point (Lines 225-227).

      '...we trained a random forest classifier...' - is this trained on the 'raw' experimental time series data, or on the signaling codons? Response: It is trained on the signaling codons calculated from model simulations of NFκB trajectories. We have clarified this (Lines 260-261).

      'We also applied a Long Short-Term Memory (LSTM) machine learning model...' - it might be good to reference these three approaches at the beginning of this section, otherwise they seem to come out of the blue a little.

      Response: We have added references to these three approaches at the beginning of this section (Lines 242-246).

      'We then used machine learning classifiers...' - random forests, LSTMs, or a different model?

      Response: We have clarified that this is a random forest classifier (Line 276).

      Discussion:

      '...over statistical models...' - suggest maybe 'purely statistical models'

      Response: We thank the reviewer for this suggestion. We have rewritten the whole Discussion to include the new insights on antagonism and synergy and their roles in maintaining unexpectedly high SRS performance. Thus, this sentence was removed.

      'We found that endosomal transport...' - A paper by Huang et al. (https://www.jneurosci.org/content/40/33/6428) observed a synergistic phagocytic response between CpG and pIC stimulation in microglia. This is still consistent with a saturation effect dependent on dose, but may be worth a mention.

      Response: We thank the reviewer for referring us to this interesting paper; this comment helped us improve the Discussion of inflammatory signaling pathways beyond NFκB. The paper demonstrates synergistic effects between CpG and pIC in inhibiting tumor growth and promoting cytokine production (Huang et al., 2020), such as IFN-β and TNF-α, whose expression is also regulated by the IRF and MAPK signaling pathways (Luecke et al., 2021; Sheu et al., 2023). This finding does not contradict our finding that CpG and pIC act antagonistically in the NFκB signaling pathway, because of the combinatorial pathways that act on gene expression: CpG can activate the MAPK signaling pathway (Luecke et al., 2024) but not the IRF signaling pathway, whereas pIC activates the IRF signaling pathway (Akira and Takeda, 2004) but only weakly activates the MAPK pathway. Therefore, their combination can synergistically regulate inflammatory responses. We have added this to the Discussion (Lines 515-522).

      '...features termed...' -> 'features, termed' Response: We thank the reviewer for their careful reading, and we have rewritten the Discussion.

      '...we applied a Long Short-Term Memory (LSTM) machine learning model..' - maybe make clear that this is on the time-series data (also LSTM has already been defined). Response: We thank the reviewer for their careful reading, and we have rewritten the Discussion.

      Materials and methods:

      1. The descriptions in this section are quite vague, so I would suggest expanding this with more detail from the supplementary material, where things are quite well explained. Response: We thank the reviewer for this suggestion, and we have rewritten the whole Methods as suggested.

      'sampling distribution' - not clear what this refers to in this context

      Response: We have clarified this in the revision (Methods -- Simulation of heterogenous NFκB dynamical responses, paragraph 3). The single-cell signaling-pathway parameter values used for bootstrap sampling to generate the model simulations are given in Supplementary Dataset 2.

      'RelA-mVenus mouse strain' - it would be good to mention the relevance of the reporter for NFkB signaling Response: We have added the relevance of the reporter for NFkB signaling (Methods, Lines 624-626).

      '...A random forest classifier...' -> a random forest classifier

      Response: We have rewritten the methods.

      Significance

      This study provides mechanistically interpretable insight on the important question of how immune cells perform target recognition in realistic scenarios, and also provides validation of existing mathematical models by extending these beyond their original domain. The paper uses 'signaling codons' as a proxy for information processing, however in this instance it is cross-validated with an LSTM model that is applied directly to the time series data. Nevertheless, the scope of the paper is such that it does not deal with the question of how these signals are transmitted or used in a downstream immune response. To my knowledge, this is the first time that a well established existing mathematical model of signalling response has been extended and applied to heterogeneous ligand mixtures. These results will be of interest to those studying immune cell responses, and to those interested in basic research on mathematical models of signaling and cellular information processing more generally.

      My background is in biophysical models, machine learning, and signaling in cancer. I have a basic understanding of immunology, but no experience in experimental cell biology.

      Response: We thank the reviewer for highlighting the novelty of our study. We appreciate the reviewer’s recognition that our work advances the understanding of cellular information processing in the context of ligand mixtures, particularly as the first to extend computational models to investigate signaling fidelity under mixed-ligand conditions.

      We agree that this work will interest computational biologists focused on signaling network modeling and information processing. In addition, we believe it will also be valuable for all signaling biologists, as we provide fundamental insights. For experimental biologists in particular, our model provides an efficient, quantitative framework for exploring and generating testable hypotheses.

      We would also like to gently emphasize that evaluating specificity within signaling pathways is as essential as studying downstream functional responses. While immune function outcomes are certainly important, they rely on the upstream signaling pathways that first respond to environmental cues. Understanding how these signaling pathways achieve specificity and discriminability is therefore crucial. For example, this is particularly relevant for drug development targeting pathways such as NFκB, where assessing the direct signaling output—NFκB activation dynamics—can provide valuable insight into the effects of pharmacological interventions.

      Reviewer #2

      Evidence, reproducibility and clarity

      Guo et al. developed a heterogeneous, single-cell ODE model of NFκB signaling parameterized on five individual ligands (TNF, Pam, LPS, CpG, pIC) and extended it, via core-module parameter matching, to predict responses to all 31 combinations of up to five ligands. They found that simulated responder fractions and signaling codon features generally agreed with live-cell imaging data. A notable discrepancy emerged for the CpG (TLR9) + pIC (TLR3) pair: experiments exhibited non-integrative antagonism unpredicted by the original model. This issue was resolved by incorporating a Hill-type term for competitive, limited endosomal trafficking of these ligands. Finally, by decomposing NFκB trajectories into six "signaling codons" and applying Wasserstein distances plus random-forest and LSTM classifiers, the authors showed that stimulus-response specificity (SRS) declines with ligand complexity but remains statistically significant even for quintuple mixtures. This is a well written and scientifically sound manuscript about complexities of cellular signaling, especially considering the limitations of in vitro experiments in recapitulating in vivo dynamics.

      Response: We thank the reviewer for carefully reading the manuscript and for this endorsement. We have significantly improved the manuscript thanks to the reviewer’s insightful comments (see below for point-to-point responses).

      Besides addressing the reviewer’s questions, we have further extended our work to investigate how ligand pairs interact across all doses and how those interactions affect stimulus-response specificity. As the reviewer pointed out, experimental studies are limited in recapitulating the multitude of complex physiological contexts; the model makes it possible to explore more complex scenarios beyond the feasibility of in vitro experimental setups. Using computational simulations, we explored 360 conditions generated from 10 ligand pairs, each evaluated at 6 doses spanning non-responsive to saturating levels, with 1,000 cells simulated per condition to capture the heterogeneity of the population.

      From this extended analysis, we identified the mechanistic bases for observations of both synergy and antagonism. Synergy for certain low-dose ligand combinations can be explained by ultrasensitive IKK activation (Figure 4), while antagonism between LPS and Pam arises from competition for the cofactor CD14 (Figure 5). We show that these phenomena are dependent on the signaling network state and therefore are not observed in all cells of the population. We define the network conditions that must be met for antagonism and synergy to occur. Importantly, we then show that antagonism can contribute to stimulus-response specificity in ligand mixtures (Figure 5).

      Here are a few comments and recommendations:

      1. The modeling approach used in this manuscript, while interesting, might need further validation. Inferring multi-ligand receptor parameters by matching single-ligand cells on core-module similarity may not capture true co-variation in receptor expression or adaptor availability. Single cell measurements of receptor expressions could be done (e.g. via flow cytometry) to ground this assumption in real data. If the authors think this is out of scope for this manuscript, they could fit core-matched single cell models with two receptor modules from scratch to the two-ligand experimental data. Would this fitted model produce similar receptor parameters compared to the presented approach? At least the authors should add a bit more explanation for why their modeling approach is better (or valid) than fitting the models with 2/3/4/5 receptor modules from scratch to the experimental data.

      Response: We thank the reviewer for this comment; it helped us improve the explanation of the methodology, the rationale, and the validation. The methodology is based on the well-established statistical method of nearest-neighbor hot-deck imputation (Andridge and Little, 2010). In this implementation, the core module functions as a stabilizing “anchor” (common variables) to harmonize the various receptor-specific parameter distributions. Similar methodologies have been successfully applied to correct batch effects or integrate single-cell RNAseq datasets using anchor cell types (Stuart et al., 2019). Our workflow was validated on single-ligand stimulus conditions in a previous study (Guo et al., 2025) (see the third paragraph below). Here, we used this method to generate predictions for ligand mixtures and validated them with experimental studies of dual-ligand stimuli, finding that our predictions align well with the experimental data. As the reviewer suggested in point 3, in the revision we also added experimental validation of the binary classifiers that determine whether specific stimuli are present in the ligand mixture. The question we address in this work is how macrophages process ligand-specific information in the context of ligand mixtures; for this question, the experimental results align with the model predictions, reaching consistent conclusions.

      In the revision, we have explained the rationale for using the nearest-neighbor hot-deck imputation by matching cells with similar core module (Lines 143-150).

      Previous work determined parameter distributions for only the cognate receptor module (and the core module) that provided the best fit for the single-ligand experimental data (Figure 1A, Step 1), while parameter information for the other receptor modules was missing. To simulate stimulus responses to more than two ligands, we imputed the other ligand–receptor module parameters using shared core-module parameters as common variables and employing nearest-neighbor hot-deck imputation (35). In this setup, the core module functions as an “anchor” to harmonize two or more receptor-specific parameter distributions. This was achieved by minimizing the Euclidean distance between the core-module parameters associated with the independently parameterized single-ligand models (Figure 1A, Step 2).

      In Guo et al. (2025) (see Supplementary Figure S11), the nearest-neighbor hot-deck imputation approach (core module similarity matching method) was compared with other approaches, including random matching and rescaled-similarity matching. The results show that, after matching, the core module method best preserves the single-ligand stimulus signaling codon distributions. For the reviewer’s convenience, we have also appended the figure in the response to Reviewer 1, Comment 11.

      The advantage of our workflow is that it does not need to be fit to new experimental data and still gives reliable predictions of signaling dynamics. For the reviewer's interest, we did try to fit core-matched single-cell models with two receptor modules. As parameter fitting requires sufficiently large and high-quality datasets, single-ligand stimulation data with more than 1,000 cells can be adequate to estimate 6-7 parameters (Guo et al., 2025) (approx. 1,400 to 2,000 cells per ligand). However, our current experimental dataset for combinatorial-ligand conditions contains only 500-1,000 cells; we tested these datasets, but the results show a poor fit to the heterogeneous signaling dynamics, owing to an insufficient number of cells for estimating 8-10 parameters. We estimate that at least ~1,500 cells would be needed for reliable parameter estimation under dual-ligand stimulation (and more may be needed for combinatorial stimuli involving more ligands). This is currently not feasible for mixed ligands given the large number of combinatorial conditions.

      Overall, in this paper, the nearest-neighbor hot-deck imputation approach is presented as a feasible and acceptable approach that best reflects our current understanding of the signaling network. Importantly, it helps identify potential gaps by highlighting discrepancies between model predictions and experimental observations.

      (a) The refined model posits competitive, saturable endosomal transport for CpG and pIC, but no direct measurements of endosomal uptake rates or compartmental saturation thresholds are provided, leaving the Hill parameters under-constrained. The authors could produce dose-response curves for CpG and pIC individually and in combination across a range of concentrations to fit the Hill parameters for competitive uptake. (b) If this is out of scope for this paper, the authors should at least comment on why the endosome hypothesis is better than others e.g. crosstalks and other parallel pathway activations. Especially given that even the refined model simulations with Hill equations for CpG and pIC do not quite match with the experimental data (Fig 2 B,E).

      Response: (a) The reviewer’s comments helped us improve our work by employing Michaelis-Menten kinetics for substrate-competition reactions, which increases the mathematical rigor of the CpG-pIC competition model. In this updated model, there are no free parameters to tune, as the Vmax and Kd values are kept consistent with the single-ligand scenario, and the Hill coefficient is the same as in the single-ligand case, equal to 1.
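      A minimal sketch of Michaelis-Menten uptake with two substrates competing for the same transport capacity may make this concrete. This is an illustrative form only: the function and parameter names are hypothetical, and the exact rate law used in the model is given in the Methods. The key property is that with the competitor absent the expression reduces exactly to the single-ligand Michaelis-Menten form, so no new free parameters are introduced:

```python
def competitive_uptake(s1, s2, vmax1, vmax2, km1, km2):
    """Uptake rates of two substrates (e.g. CpG and pIC) sharing one
    endosomal transport capacity. Each ligand's occupancy term appears in
    the common denominator, so a high dose of one ligand suppresses uptake
    of the other; with s2 = 0 this reduces to single-ligand MM (Hill = 1)."""
    denom = 1.0 + s1 / km1 + s2 / km2
    return vmax1 * (s1 / km1) / denom, vmax2 * (s2 / km2) / denom

# at s1 = Km and no competitor, uptake is half-maximal, as expected
v_alone, _ = competitive_uptake(1.0, 0.0, 2.0, 2.0, 1.0, 1.0)  # v_alone = 1.0
# adding the competitor at its own Km lowers uptake of substrate 1
v_mixed, _ = competitive_uptake(1.0, 1.0, 2.0, 2.0, 1.0, 1.0)  # v_mixed = 2/3
```

      The shared denominator is what produces the observed antagonism: combined high-dose stimulation cannot exceed, and can even fall below, the stronger single-ligand uptake.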

      The comments on examining dose-response curves for CpG and pIC inspired us to extend the dose-response curves to all ligand-pair combinations, allowing us to identify synergy for low-dose ligand pairs and antagonism for high-dose LPS-Pam, in addition to CpG-pIC (new Figures 4 & 5).

      (b) Regarding alternative hypotheses for antagonism, such as crosstalk or parallel-pathway activation: any antagonistic effect would have to arise from negative regulation acting within the first 30 min. However, IκBα-mediated feedback only becomes appreciable after ~30 min (Hoffmann et al., 2002), and A20-dependent attenuation requires ≥2 h (Werner et al., 2005). Beyond these delayed feedback mechanisms, NFκB activation depends primarily on phosphorylation and K63-linked ubiquitination, for which no mechanism produces true antagonism; at most, combinatorial inputs saturate the response at the level of the strongest single ligand. We have added this rationale to the Discussion to explain why we favor the endosome saturation hypothesis over other mechanisms (Lines 459-465). While this may not capture every nuance, it represents the simplest model extension capable of reproducing the observed antagonism.

      The authors assess the distinguishability of single-ligand stimuli and combinatorial-ligand stimuli using the simulations from the refined model. While this is informative, the simulated data could propagate deviations from the experimental data to the classifiers. How would the classifiers fare when the experimental data is used to assess the single-stimulus distinguishability? The authors could use the experimental data they already have and confirm the main claim of the paper, that cells retain stimulus-response specificity even with multiple ligand exposure. In short, how would Fig 3E look when trained/validated on available experimental data?

      Response: We thank the reviewer for these valuable comments, which helped us strengthen the rigor of our analysis by incorporating cross-model testing. Specifically, we refined our analysis of ligand presence/absence classification by including ROC AUC and balanced accuracy metrics. This adjustment accounts for the fact that the experimental data did not cover all combinatorial conditions, thereby mitigating potential biases from data imbalance and threshold choice. The experimental results are qualitatively consistent with the simulations, though, as expected, they show somewhat lower ligand distinguishability compared to the noise-free simulated dataset. We have updated Figures 3E-F (previously Figure 3E), added Figure S8, and revised the manuscript accordingly (Lines 292-301). For the reviewer's convenience, we have also pasted the revised manuscript text below.

      “Classifiers trained to distinguish TNF-present from TNF-absent conditions achieved a Receiver Operating Characteristic-Area Under the Curve (ROC AUC) of 0.96, significantly above the 0.5 baseline (Figure 3D, Figure S8A). Extending this analysis to other ligands, cells detected LPS (0.85), Pam (0.84), pIC (0.73), and CpG (0.63) in mixtures (Figure 3D, S8A). Using experimental data from double- and triple-ligand stimuli (Figure 1D), ROC AUC values were TNF 0.74, LPS 0.74, Pam 0.66, pIC 0.75, and CpG 0.66 (Figure 3E, S8B). Classifier accuracies yielded consistent results (Figure S8C-D). These results indicated a remarkable capability of preserving ligand-specific dynamic features within complex NFκB signal trajectories that enable nuclear detection of extracellular ligands even in complex stimulus mixtures.”
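      The ROC AUC reported above can be computed threshold-free as the probability that a randomly chosen positive-class score exceeds a randomly chosen negative-class score (the Mann-Whitney statistic), which is why it is robust to class imbalance. A minimal sketch follows; this is illustrative only, not the study's actual classifiers (random forests applied to signaling codons), and the toy scores are hypothetical:

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC as the probability that a random positive outranks a random
    negative; ties count one half. Threshold-free, hence insensitive to the
    positive/negative class ratio."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

# toy classifier scores for "ligand present" vs "ligand absent" cells:
# 5 of the 6 positive/negative pairs are ranked correctly
auc = roc_auc([0.9, 0.8, 0.4], [0.7, 0.3])
print(round(auc, 3))  # → 0.833
```

      An AUC of 0.5 corresponds to the chance baseline against which the reported values (e.g. 0.96 for TNF) are compared.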

      While the approach presented here with multiple simultaneous ligand exposures is a major step towards in vivo-like conditions, the temporal aspect is still missing. That is, temporal phasing, i.e. sequential exposure to multiple ligands as one would expect in vivo, rather than all at once. This is probably out of scope for this paper, but the authors could comment on how their work could be taken forward in such a direction, and whether the SRS would be better or worse in such conditions.

      Response: We thank the reviewer for this insightful comment. We have added “the temporal aspect of multiple ligand exposures” to the Discussion (Lines 503-510), and we paste the corresponding paragraph here for the reviewer's reference (in the revised manuscript, black text is the previous version and blue text marks the new additions):

      Cells may be expected to interpret not only the combination of signals but also their timing and duration to mount appropriate transcriptional responses (58, 59). For example, acute inflammation integrates pathogen-derived cues with pro- and anti-inflammatory signals over a timeframe of hours to days (58) to coordinate pathogen removal and the tissue-repair process. Investigating sequential stimulus combinations in our model is therefore crucial for understanding how cells process complex physiological inputs. Simulations that account for longer timescales may require additional feedback mechanisms, as described in some of our previous studies for NFκB (15, 60).

      There is no caption for Figure 3F in the figure legend nor a reference in the main text.

      Response: In the revised manuscript, we have removed Figure 3F.

      Significance

      General assessment: This is a good manuscript in its present form, which could get better with revision. More supporting data and validation are needed to back the main claim presented in the manuscript.

Significance/impact/readership: When revised, this manuscript could be of interest to a broad community involving single-cell biology, cell and immune signaling, and mathematical modeling. Especially the models presented here could be used as a starting point for more complex and detailed modeling approaches.

      Response: We thank the reviewer for this endorsement. The reviewer’s constructive suggestion helped us significantly improve the clarity and rigor of our main conclusion.

      In summary, we have strengthened the computational framework in several ways. We improved the model’s fit to experimental single-ligand training data and reformulated the antagonistic CpG-pIC model using Michaelis–Menten kinetics, thereby reducing parameter arbitrariness and increasing mechanistic interpretability. These changes led to better agreement between model predictions and experimental observations for combinatorial ligand responses (Updated Figure 2 and Figure S2), which we hope will further increase experimentalists’ confidence in the modeling results. We have also validated one key conclusion (“cells retain stimulus-response specificity even with multiple ligand exposure”) using the experimental dataset, and it aligns with the model predictions.
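For readers unfamiliar with the form, the competitive Michaelis–Menten uptake referred to here can be sketched as follows. All parameter values and function names are hypothetical illustrations under the standard competitive-inhibition formulation, not the manuscript's fitted model:

```python
def competitive_uptake(L, L_other, Vmax, Km, Km_other):
    """Michaelis-Menten uptake of ligand L, competitively inhibited by
    a second ligand sharing the same saturable endosomal transport
    capacity: the competitor raises the effective Km of the first."""
    return Vmax * L / (Km * (1.0 + L_other / Km_other) + L)

# Hypothetical concentrations and parameters (arbitrary units):
cpg_alone = competitive_uptake(L=100.0, L_other=0.0, Vmax=1.0, Km=50.0, Km_other=50.0)
cpg_mixed = competitive_uptake(L=100.0, L_other=100.0, Vmax=1.0, Km=50.0, Km_other=50.0)

# Co-stimulation with the competing ligand lowers CpG uptake.
print(cpg_alone, cpg_mixed)
assert cpg_mixed < cpg_alone
```

The key qualitative behavior is that adding the competing ligand reduces uptake of the first without changing Vmax, the signature of competition for a shared, saturable transport capacity, which is what produces the antagonistic CpG-pIC response.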

In addition, we have further extended our analysis and its scope. Inspired by the reviewer's advice (and Reviewer 3's comment 1b) on the dose-combination study for the CpG-pIC pair, we expanded our research to dose-response relationships for all dual-ligand combinations (Lines 302-406, Figure 4-5). This additional comprehensive analysis allowed us to identify the mechanisms of synergistic and antagonistic effects in single-cell responses and to pinpoint the corresponding dose ranges among different ligand pairs.

Interestingly, we found that ultrasensitive IKK activation may lead to synergistic responses to low-dose ligand combinations in single cells. We also found that CD14 uptake competition between LPS and Pam may lead to an antagonistic/non-integrative combination. Our simulation-based finding of a non-integrative combination of LPS-Pam stimuli aligns with a previous independent experimental finding of a non-integrative response to the LPS and Pam combination (Kellogg et al., 2017), and this independent experimental study validated our model prediction.

      We further analyzed stimulus-response specificity under conditions predicted to exhibit synergy or antagonism. Our results indicate that antagonistic combinations of ligands can increase stimulus-response specificity in the context of ligand mixtures.

      Reviewer #3

      Evidence, reproducibility and clarity

      The authors investigate experimentally single macrophages' NF-kB responses to five ligands, separately and to 3 pairs of ligands. Using the single ligand stimulations, they train an existing mathematical model to replicate single-cell NF-kB nuclear trajectories. From what I understand, for each single cell trajectory in response to a given ligand, the best fit parameters of the core module and the receptor module (specific for the given ligand) are found.

      Then (again, from what I understand), single ligand models are used to generate responses to combinations of ligands. The parametrizations of single ligand models (to be combined) are chosen to have the most similar core modules. It is not described how the responses to more than one ligand are calculated - I expect that respective receptor modules work in parallel, providing signals to the core module. After observing that the response to CpG+pIC is lower (in terms of duration and total) than for CpG alone, the model is modified to account for competition for endosomal transport required by both ligands.

Having the trained model, simulations of responses to all 31 combinations of ligands are performed, and each NF-κB trajectory is described by six signaling codons: Speed, Peak, Duration, Total, Early vs. Late, and Oscillations. Next, these codons are used to reconstruct (using a random forest model) the stimuli (which may be the combination of ligands). The single and even the two-ligand stimuli are relatively well recognized, which is interpreted as the ability of macrophages to distinguish ligands even if present in combination.

We thank the reviewer for the careful reading of the manuscript.

      Major comments

      1) The demonstrated ability to recognize stimuli is based on several key assumptions that can hardly be met in reality.

Response: We thank the reviewer for this comment, which prompted us to carefully reflect on the rigor of our work, inspired us to extend our analysis to a broad range of ligand-dose combinations, and helped us clarify the limitations of our approach. Please see our detailed responses below.

a) The cell knows the stimulation time, and then it can use speed as a codon. Look at Fig. S4A: The trajectories in response to pIC are similar to those in response to TNF, but just delayed. Response: We thank the reviewer for this comment. We updated the model parameterization to better fit the single-ligand pIC condition (Lines 557-559). In the updated model, the simulated responses to TNF and pIC are quite different (Fig. S2A-B, Fig. S5A-B). Specifically, the Peak, Duration, EarlyVsLate, and Total signaling codons have different values. In addition, the literature suggests that timing differences of NFκB activation are sufficient to elicit differences in downstream gene expression responses, especially for early response genes (ERGs) and intermediate response genes (INGs) (Figure 1 in Ando et al., 2021). For the reviewer's convenience, we have also appended the figures. Specifically, within the first 60 minutes, ctrl exhibits a higher Speed of NFκB activation, and the NFκB-regulated ERGs and INGs show differences in the first 60 minutes (below, Fig 1a,b). Ando et al. then identified the gene regulatory mechanism that is able to distinguish between differences in the Speed codon. Importantly, this mechanism does not require knowledge of t=0, i.e. when the timer was started.

The signaling codon Speed, which is based on derivatives, is one way to quantify such timing differences in activation. It was selected from a library of more than 900 different dynamic features using an information-maximizing algorithm (Adelaja et al., 2021). It is possible that other ways of measuring time, e.g. time to half-max, might not be distinguished as well by these regulatory mechanisms.
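To make the distinction concrete, here is a toy comparison of a derivative-based Speed feature with a time-to-half-max feature. These are simplified definitions for illustration only, not the exact feature set of Adelaja et al. (2021):

```python
def speed(traj, dt=1.0):
    """Derivative-based Speed: the maximum rate of increase.
    It is invariant to when the recording starts, so it needs no
    knowledge of the stimulation time t=0."""
    return max((b - a) / dt for a, b in zip(traj, traj[1:]))

def time_to_half_max(traj, dt=1.0):
    """Time until the trajectory first reaches half its maximum;
    unlike Speed, this is measured relative to t=0."""
    half = max(traj) / 2.0
    for i, v in enumerate(traj):
        if v >= half:
            return i * dt

fast = [0.0, 0.8, 1.0, 0.9, 0.7]        # rapid rise (toy trajectory)
delayed = [0.0, 0.0, 0.0] + fast        # identical response, later onset

assert speed(delayed) == speed(fast)                        # onset-invariant
assert time_to_half_max(delayed) > time_to_half_max(fast)   # onset-dependent
```

Shifting the onset leaves the derivative-based feature unchanged but shifts the time-to-half-max, which is why the former can be read out by a mechanism that has no access to t=0.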

b) The increase of stimulus concentration typically increases Peak, Duration, and Total, so a similar effect can be achieved by changing the ligand or concentration. Response: This ("the increase of stimulus concentration typically increases Peak, Duration, and Total") is not an assumption. What the reviewer described ("a similar effect can be achieved by changing the ligand or concentration") may or may not occur. The six informative signaling codons can vary under different ligands or doses. For example, with increasing doses of Pam, the NFκB response shows a higher peak, potentially making it appear more like LPS stimulation. However, as the Pam dose increases, the response duration decreases, which distinguishes it from LPS stimulation (see the experimental data shown in Figure 4A, second row, and Figure 3A, second row, of Luecke et al. (2024); we have also pasted the corresponding figure below for the reviewer's convenience).

Figure 4A and Figure 3A from Luecke et al. (2024). Figure 4A: NFκB activity dynamics in single cells in response to 0, 0.01, 0.1, 1, 10, and 100 ng/ml P3C4 stimulation. Eight hours were measured by fluorescence microscopy of reporter hMPDMs. Each row of the heatmap represents the p38 or NFκB signaling trajectory of one cell. Trajectories are sorted by the maximum amplitude of p38 activity. Data from two pooled biological replicates are depicted. Total # of cells: 898, 834, 827, 787, 778, and 923. Figure 3A: NFκB activity dynamics in single cells in response to 100 ng/ml LPS stimulation. Eight hours were measured by fluorescence microscopy of reporter hMPDMs. Each row of the heatmap represents the NFκB signaling trajectory of one cell (the p38 measurements are shown in the original paper). Trajectories are sorted by the maximum amplitude of p38 activity. Data from two pooled biological replicates are depicted.

      Inspired by the reviewer’s comment (and also Reviewer 2’s comments), in the revision, we expanded our research to dose-response relationships for all dual-ligand combinations (Lines 302-406, Figure 4-5). This additional comprehensive analysis allowed us to identify the mechanism of synergistic and antagonistic effects in single-cell responses and to pinpoint the corresponding dose ranges among different ligand pairs.

Interestingly, we found that ultrasensitive IKK activation may lead to synergistic responses to low-dose ligand combinations, but only in a subset of single cells. We also found that CD14 uptake competition between LPS and Pam may lead to an antagonistic/non-integrative combination. Our simulation-based finding of a non-integrative combination of LPS-Pam stimuli aligns with previous independent experimental findings of a non-integrative response to the LPS and Pam combination (Kellogg et al., 2017).

c) Distinguishing a given ligand in the presence of some others, even stronger ones, is based on the assumption that these ligands were given at the same time, which is hardly justified. Response: We agree with the reviewer that ligands could be given at different times. Considering time delays between ligands (their onset and also their removal) dramatically adds to the combinatorial complexity. Some initial studies by the Tay lab are beginning to explore some scenarios of time-shifted ligand pairs (Wang et al., 2025). Here we focus on a systematic exploration of all ligand combinations at 6 different doses. The fact that we do not consider time delays is not an assumption but admittedly a limitation that may well be addressed in future studies. We have included a brief discussion of this issue in the discussion (Lines 503-514) and have appended it here for the reviewer's convenience.

Cells may be expected to interpret not only the combination of signals but also their timing and duration to mount appropriate transcriptional responses (Kumar et al., 2004; Son et al., 2023). For example, acute inflammation integrates pathogen-derived cues with pro- and anti-inflammatory signals over a timeframe of hours to days (Kumar et al., 2004), to coordinate the pathogen removal and tissue repair process. Investigating sequential stimulus combinations in our model is therefore crucial for understanding how cells process complex physiological inputs. Simulations that account for longer timescales may require additional feedback mechanisms, as described in some of our previous studies for NFκB (Werner et al., 2008, 2005).

We would like to suggest that despite (or perhaps because of) limiting our study to coincident stimuli, we made some noteworthy discoveries.

      2) For single ligands, it would be nice to see how the random forest classifier works on experimental data, not only on in silico data (even if generated by a fitted model).

Response: This comment and Reviewer 2's comment 3 have helped us strengthen the rigor of our analysis by incorporating cross-model testing. We have pasted the response below.

Specifically, we refined our analysis of ligand presence/absence classification by including ROC AUC and balanced accuracy metrics. This adjustment accounts for the fact that the experimental data did not cover all combinatorial conditions, thereby mitigating potential biases from data imbalance and threshold choice. The experimental results are qualitatively consistent with the simulations, though, as expected, they show somewhat lower ligand distinguishability compared to the noise-free simulated dataset. We have updated Figures 3E–F (previously Figure 3E), added Figure S8, and revised the manuscript accordingly (Lines 292–301). For the reviewer's convenience, we have also included the revised manuscript text below.

“Classifiers trained to distinguish TNF-present from TNF-absent conditions achieved a Receiver Operating Characteristic-Area Under the Curve (ROC AUC) of 0.96, significantly above the 0.5 baseline (Figure 3D, Figure S8A). Extending this analysis to other ligands, cells detected LPS (0.85), Pam (0.84), pIC (0.73), and CpG (0.63) in mixtures (Figure 3D, S8A). Using experimental data from double- and triple-ligand stimuli (Figure 1D), ROC AUC values were TNF 0.74, LPS 0.74, Pam 0.66, pIC 0.75, and CpG 0.66 (Figure 3E, S8B). Classifier accuracies yielded consistent results (Figure S8C-D). These results indicated a remarkable capability of preserving ligand-specific dynamic features within complex NFκB signal trajectories that enable nuclear detection of extracellular ligands even in complex stimulus mixtures.”

3) My understanding of ligand discrimination is such that it is rather based on a combination of pathways triggered than solely on a single transcription factor response trajectory, which varies with ligand concentration and ligand concentration time profile (no reason to assume it is OFF-ON-OFF). For example, some of the considered ligands (pIC and CpG) activate IRF3/IRF7 in addition to NF-kB, which leads to IFN production and activation of STATs. This should at least be discussed.

Response: We thank the reviewer for this comment and fully agree. In the previous version, we discussed how different signaling pathways combinatorially distinguish stimuli. In the revision, we have extended this discussion to include the example of pIC and CpG activation, as suggested (Lines 515-522). We have pasted the corresponding text below.

“Furthermore, innate immune responses do not solely rely on NFκB but also involve the critical functions of AP1, p38, and the IRF3-ISGF3 axis. These additional pathways are likely activated in a coordinated manner and provide additional information (Luecke et al., 2021). This is exemplified by studies demonstrating synergistic effects between CpG and pIC in inhibiting tumor growth and promoting cytokine production (Huang et al., 2020), such as IFNβ and TNFα, whose expression is also regulated by the IRF and MAPK signaling pathways (Luecke et al., 2021; Sheu et al., 2023). Therefore, the inclusion of the parallel pathways of AP1 and MAPK, as well as the type I interferon network (Cheng et al., 2015; Davies et al., 2020; Hanson and Batchelor, 2022; Luecke et al., 2024; Paek et al., 2016; Peterson et al., 2022), is a next step for expanding the mathematical models presented here.”

      Technical comments

      1) Reference 25: X. Guo, A. Adelaja, A. Singh, W. Roy, A. Hoffmann, Modeling single-cell heterogeneity in signaling dynamics of macrophages reveals principles of information transmission. Nature Communications (2025) does not lead to any paper with the same or a similar title and author list. This Ref is given as a reference to the model. Fortunately, Ref 8 is helpful. Nevertheless, authors should include a schematic of the model.

Response: We apologize for the paper not being accessible on time. It is now. We have also added a schematic of the model as suggested (Figure S1) and have added a detailed description of the model and simulations in the introduction (Lines 95-106), the results (Lines 129-141), and the methods (Simulation of heterogeneous NFκB dynamical responses).

      2) Also Mendeley Data DOI:10.17632/bv957x6frk.1 and GitHub https://github.com/Xiaolu-Guo/Combinatorial_ligand_NFkB lead to nowhere.

Response: We thank the reviewer for this comment, and we have made the GitHub code public. The Mendeley Data DOI:10.17632/bv957x6frk.1 can be accessed via the shared link: https://data.mendeley.com/preview/bv957x6frk?a=6d56e079-d7b0-482e-951f-8a8e06ee8797 and will be made public once the paper is accepted.

      3) Dataset 1 is not described. Possibly it contains sets of parameters of receptor modules (different numbers of sets for each module, why?), but the names of parameters never appear in the text, which makes it impossible to reproduce the data.

Response: We thank the reviewer for this comment, and we have added a description of the dataset (S3 SupplementaryDataset2_NFkB_network_single_cell_parameter_distribution.xlsx) and added the parameter names in the methods (Simulation of heterogeneous NFκB dynamical responses).


      4) It is difficult to understand how the simulations in response to more than one ligand are performed.

      Response: We thank the reviewer for this comment, and we have improved the explanation of the methods (Results, Lines 145-152) and included a detailed description of the model and simulations for combinatorial ligands (Methods, Predicting heterogeneous single-cell responses to combinatorial-ligand stimulation).

      Significance

      A lot of work has been done, the methodology is interesting, but the biological conclusions are overstated.

Response: We thank the reviewer for their interest in the methodology. We have revised the title and the abstract, and expanded the discussion of our findings to more accurately document what we have found. In the revision, we have increased the clarity and rigor of the work. For the key conclusion that macrophages maintain some level of NFκB signaling fidelity in response to ligand mixtures, we have validated the binary classifier results on experimental data, as the reviewer suggested.

In the revision, we have also extended our methodology to further explore the dose-response curves for different dosage combinations of ligand pairs. This further work allowed us to identify the synergistic and antagonistic regimes. By comparing the stimulus-response specificity of the antagonistic model vs. the non-antagonistic model, we demonstrated that signaling antagonism may increase the distinguishability of the presence or absence of specific ligands within complex ligand mixtures. This provides a mechanism for how signaling fidelity is maintained to the surprising degree we reported.

      REFERENCES

      Adelaja, A., Taylor, B., Sheu, K.M., Liu, Y., Luecke, S., Hoffmann, A., 2021. Six distinct NFκB signaling codons convey discrete information to distinguish stimuli and enable appropriate macrophage responses. Immunity 54, 916-930.e7. https://doi.org/10.1016/j.immuni.2021.04.011

      Akira, S., Takeda, K., 2004. Toll-like receptor signalling. Nat Rev Immunol 4, 499–511. https://doi.org/10.1038/nri1391

      Andridge, R.R., Little, R.J.A., 2010. A Review of Hot Deck Imputation for Survey Non-response. Int Stat Rev 78, 40–64. https://doi.org/10.1111/j.1751-5823.2010.00103.x

      Cheng, Z., Taylor, B., Ourthiague, D.R., Hoffmann, A., 2015. Distinct single-cell signaling characteristics are conferred by the MyD88 and TRIF pathways during TLR4 activation. Sci Signal 8, ra69. https://doi.org/10.1126/scisignal.aaa5208

      Davies, A.E., Pargett, M., Siebert, S., Gillies, T.E., Choi, Y., Tobin, S.J., Ram, A.R., Murthy, V., Juliano, C., Quon, G., Bissell, M.J., Albeck, J.G., 2020. Systems-Level Properties of EGFR-RAS-ERK Signaling Amplify Local Signals to Generate Dynamic Gene Expression Heterogeneity. Cell Systems 11, 161-175.e5. https://doi.org/10.1016/j.cels.2020.07.004

      Guo, X., Adelaja, A., Singh, A., Roy, W., Hoffmann, A., 2025a. Modeling single-cell heterogeneity in signaling dynamics of macrophages reveals principles of information transmission. Nature Communications.

      Guo, X., Adelaja, A., Singh, A., Wollman, R., Hoffmann, A., 2025b. Modeling heterogeneous signaling dynamics of macrophages reveals principles of information transmission in stimulus responses. Nat Commun 16, 5986. https://doi.org/10.1038/s41467-025-60901-3

      Hanson, R.L., Batchelor, E., 2022. Coordination of MAPK and p53 dynamics in the cellular responses to DNA damage and oxidative stress. Molecular Systems Biology 18, e11401. https://doi.org/10.15252/msb.202211401

      Huang, Y., Zhang, Q., Lubas, M., Yuan, Y., Yalcin, F., Efe, I.E., Xia, P., Motta, E., Buonfiglioli, A., Lehnardt, S., Dzaye, O., Flueh, C., Synowitz, M., Hu, F., Kettenmann, H., 2020. Synergistic Toll-like Receptor 3/9 Signaling Affects Properties and Impairs Glioma-Promoting Activity of Microglia. J. Neurosci. 40, 6428–6443. https://doi.org/10.1523/JNEUROSCI.0666-20.2020

      Kellogg, R.A., Tian, C., Etzrodt, M., Tay, S., 2017. Cellular Decision Making by Non-Integrative Processing of TLR Inputs. Cell Rep 19, 125–135. https://doi.org/10.1016/j.celrep.2017.03.027

      Kumar, R., Clermont, G., Vodovotz, Y., Chow, C.C., 2004. The dynamics of acute inflammation. Journal of Theoretical Biology 230, 145–155. https://doi.org/10.1016/j.jtbi.2004.04.044

      Luecke, S., Guo, X., Sheu, K.M., Singh, A., Lowe, S.C., Han, M., Diaz, J., Lopes, F., Wollman, R., Hoffmann, A., 2024. Dynamical and combinatorial coding by MAPK p38 and NFκB in the inflammatory response of macrophages. Molecular Systems Biology 20, 898–932. https://doi.org/10.1038/s44320-024-00047-4

      Luecke, S., Sheu, K.M., Hoffmann, A., 2021. Stimulus-specific responses in innate immunity: Multilayered regulatory circuits. Immunity 54, 1915–1932. https://doi.org/10.1016/j.immuni.2021.08.018

      Paek, A.L., Liu, J.C., Loewer, A., Forrester, W.C., Lahav, G., 2016. Cell-to-Cell Variation in p53 Dynamics Leads to Fractional Killing. Cell 165, 631–642. https://doi.org/10.1016/j.cell.2016.03.025

      Peterson, A.F., Ingram, K., Huang, E.J., Parksong, J., McKenney, C., Bever, G.S., Regot, S., 2022. Systematic analysis of the MAPK signaling network reveals MAP3K-driven control of cell fate. Cell Systems 13, 885-894.e4. https://doi.org/10.1016/j.cels.2022.10.003

      Sheu, K.M., Guru, A.A., Hoffmann, A., 2023. Quantifying stimulus-response specificity to probe the functional state of macrophages. Cell Systems 14, 180-195.e5. https://doi.org/10.1016/j.cels.2022.12.012

      Son, M., Wang, A.G., Keisham, B., Tay, S., 2023. Processing stimulus dynamics by the NF-κB network in single cells. Exp Mol Med 55, 2531–2540. https://doi.org/10.1038/s12276-023-01133-7

      Stuart, T., Butler, A., Hoffman, P., Hafemeister, C., Papalexi, E., Mauck, W.M., Hao, Y., Stoeckius, M., Smibert, P., Satija, R., 2019. Comprehensive Integration of Single-Cell Data. Cell 177, 1888-1902.e21. https://doi.org/10.1016/j.cell.2019.05.031

      Werner, S.L., Barken, D., Hoffmann, A., 2005. Stimulus Specificity of Gene Expression Programs Determined by Temporal Control of IKK Activity. Science 309, 1857–1861. https://doi.org/10.1126/science.1113319

      Werner, S.L., Kearns, J.D., Zadorozhnaya, V., Lynch, C., O’Dea, E., Boldin, M.P., Ma, A., Baltimore, D., Hoffmann, A., 2008. Encoding NF-kappaB temporal control in response to TNF: distinct roles for the negative regulators IkappaBalpha and A20. Genes Dev 22, 2093–2101. https://doi.org/10.1101/gad.1680708

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

Guo et al. developed a heterogeneous, single-cell ODE model of NFκB signaling parameterized on five individual ligands (TNF, Pam, LPS, CpG, pIC) and extended it, via core-module parameter matching, to predict responses to all 31 combinations of up to five ligands. They found that simulated responder fractions and signaling codon features generally agreed with live-cell imaging data. A notable discrepancy emerged for the CpG (TLR9) + pIC (TLR3) pair: experiments exhibited non-integrative antagonism unpredicted by the original model. This issue was resolved by incorporating a Hill-type term for competitive, limited endosomal trafficking of these ligands. Finally, by decomposing NFκB trajectories into six "signaling codons" and applying Wasserstein distances plus random-forest and LSTM classifiers, the authors showed that stimulus-response specificity (SRS) declines with ligand complexity but remains statistically significant even for quintuple mixtures. This is a well-written and scientifically sound manuscript about the complexities of cellular signaling, especially considering the limitations of in vitro experiments in recapitulating in vivo dynamics. Here are a few comments and recommendations:

1. The modeling approach used in this manuscript, while interesting, might need further validation. Inferring multi-ligand receptor parameters by matching single-ligand cells on core-module similarity may not capture true co-variation in receptor expression or adaptor availability. Single-cell measurements of receptor expression could be performed (e.g. via flow cytometry) to ground this assumption in real data. If the authors think this is out of scope for this manuscript, they could fit core-matched single-cell models with two receptor modules from scratch to the two-ligand experimental data. Would this fitted model produce similar receptor parameters compared to the presented approach? At least the authors should add a bit more explanation of why their modeling approach is better than (or as valid as) fitting models with 2/3/4/5 receptor modules from scratch to the experimental data.
2. The refined model posits competitive, saturable endosomal transport for CpG and pIC, but no direct measurements of endosomal uptake rates or compartmental saturation thresholds are provided, leaving the Hill parameters under-constrained. The authors could produce dose-response curves for CpG and pIC individually and in combination across a range of concentrations to fit the Hill parameters for competitive uptake. If this is out of scope for this paper, the authors should at least comment on why the endosome hypothesis is better than others, e.g. crosstalk and other parallel pathway activations, especially given that even the refined model simulations with Hill equations for CpG and pIC do not quite match the experimental data (Fig 2B,E).
      3. Authors asses the distinguishability of single-ligand stimuli and combinatorial ligands stimuli using the simulations from the refined model. While this is informative, the simulated data could propagate deviations from the experimental data to the classifiers. How would the classifiers fare when the experimental data is used to assess the single-stimulus distinguishability? The authors could use the experimental data they already have and confirm their main claim of the paper, that cells retain stimulus-response specificity even with multiple ligand exposure. In short, how would Fig 3E look when trained/validated on available experimental data?
4. While the approach presented here with multiple simultaneous ligand exposures is a major step towards in vivo-like conditions, the temporal aspect is still missing. That is, temporal phasing, i.e. sequential exposure to multiple ligands as one would expect in vivo, rather than all at once. This is probably out of scope for this paper, but the authors could comment on how their work could be taken forward in such a direction and whether the SRS would be better or worse in such conditions.
      5. There is no caption for Figure 3F in the figure legend nor a reference in the main text.

      Significance

General assessment: This is a good manuscript in its present form which could get better with revision. More supporting data and validation are needed to back the main claim presented in the manuscript.

Significance/impact/readership: When revised, this manuscript could be of interest to a broad community involving single-cell biology, cell and immune signaling, and mathematical modeling. Especially the models presented here could be used as a starting point for more complex and detailed modeling approaches.

    1. Harnessing

      Problem: There are no citations for this eBook. I would love to see citations, especially for SoTL (Scholarship of Teaching and Learning), and particularly because this is an eBook for Higher Education. Without citations, you are simply committing theft, because you give no credit. Watch from 3:42

    1. The reason is the principle of comparative advantage, which says that each country should specialize in the products that it can produce most readily and cheaply and trade those products for goods that foreign countries can produce most readily and cheaply. This specialization ensures greater product availability and lower prices.

      The explanation of comparative advantage helps clarify why countries trade even if one country can produce everything more efficiently. It reminds me of group projects because each person focuses on what they do best so the overall result is stronger. This challenges the argument that stopping trade would protect jobs, because trade actually increases efficiency and total output.

    2. U.S. managers must develop a global vision if they are to recognize and react to international business opportunities, as well as remain competitive at home. Often a U.S. firm’s toughest domestic competition comes from foreign companies. Moreover, a global vision enables a manager to understand that customer and distribution networks operate worldwide, blurring geographic and political barriers and making them increasingly irrelevant to business decisions. Over the past three decades, world trade has climbed from $200 billion a year to more than $1.4 trillion.1 U.S. companies play a major role in this growth in world trade, with 113 of the Fortune 500 companies making over 50 percent of their profits outside the United States.

      The main point of this section seems to be that having a global vision is no longer optional for U.S. businesses. I found it interesting that many U.S. companies like Apple and Microsoft earn over half their profits outside the United States. This shows that even companies we think of as “American” are actually deeply dependent on international markets.

    3. Even if the United States had an absolute advantage in both coffee and air traffic control systems, it should still specialize and engage in trade. Why? The reason is the principle of comparative advantage, which says that each country should specialize in the products that it can produce most readily and cheaply and trade those products for goods that foreign countries can produce most readily and cheaply. This specialization ensures greater product availability and lower prices.

      Comparative advantage is a great way to increase productivity: even if one country has an absolute advantage in producing many goods, it can still be better for another country to produce some of those goods, because it can do so at a lower opportunity cost.

    4. Many countries depend more on international commerce than the United States does. For example, France, Great Britain, and Germany all derive more than 55 percent of their gross domestic product (GDP) from world trade, compared to about 28 percent for the United States.5

      This shows how important doing business internationally is for the well-being of a country. Furthermore, it shows how beneficial trade is for creating jobs.

    5. outsourcing

      Outsourcing refers to sending jobs that could be done domestically to another country, typically to reduce costs. Companies can cut costs by outsourcing to lower-income countries, where they pay less for the same amount of work than they would at home. There are pros and cons: outsourcing can provide lower-cost services or products, but it also moves jobs out of the country, which can push the unemployment rate up.

    6. protectionism

      Without protectionism, free trade exists. It allows anyone, businesses included, to buy and sell without restrictions. In today's world it has been nearly impossible to have fully free trade, as each nation protects its home industries from outside competition in various ways. Most, if not all, countries practice some form of protectionism, so as not to harm their economy and local industries.

    7. Companies decide to “go global” for a number of reasons. Perhaps the most urgent reason is to earn additional profits. If a firm has a unique product or technological advantage not available to other international competitors, this advantage should result in major business successes abroad. In other situations, management may have exclusive market information about foreign customers, marketplaces, or market situations. In this case, although exclusivity can provide an initial motivation for going global, managers must realize that competitors will eventually catch up. Finally, saturated domestic markets, excess capacity, and potential for cost savings can also be motivators to expand into international markets. A company can enter global trade in several ways, as this section describes.

      This is important because it explains why companies choose to expand into other countries. Businesses often go global to make more profit, especially if they have a unique product or special knowledge that gives them an advantage. They may also expand because their home market is full, they have extra production capacity, or they can reduce costs by operating internationally. Understanding this helps us see how and why companies grow beyond their own borders.

    8. A country has an absolute advantage when it can produce and sell a product at a lower cost than any other country or when it is the only country that can provide a product. The United States, for example, has an absolute advantage in reusable spacecraft and other high-tech items. Suppose that the United States has an absolute advantage in air traffic control systems for busy airports and that Brazil has an absolute advantage in coffee. The United States does not have the proper climate for growing coffee, and Brazil lacks the technology to develop air traffic control systems. Both countries would gain by exchanging air traffic control systems for coffee.

      These paragraphs explain the absolute advantage and why countries benefit from trade. This means that if each country is better at producing certain goods, they should specialize in what they do best and trade. By doing this, both countries would benefit instead of trying to produce things they are not good at making.

    9. Even if the United States had an absolute advantage in both coffee and air traffic control systems, it should still specialize and engage in trade. Why? The reason is the principle of comparative advantage, which says that each country should specialize in the products that it can produce most readily and cheaply and trade those products for goods that foreign countries can produce most readily and cheaply. This specialization ensures greater product availability and lower prices.

      Comparative advantage is a simple way for countries to benefit from trade. Some countries have an absolute advantage in certain goods and can trade those goods to a different country that may need them, in exchange for a good that they need. This promotes efficiency in our economy because everyone will be better off trading for a good they need, rather than producing it themselves at a high price. Comparative advantage will also show what goods need to be imported and exported in a country.

    10. Each year the United States exports more food, animal feed, and beverages than the year before. A third of U.S. farm acreage is devoted to crops for export. The United States is also a major exporter of engineering products and other high-tech goods, such as computers and telecommunications equipment. For more than 60,000 U.S. companies (the majority of them small), international trade offers exciting and profitable opportunities. Among the largest U.S. exporters are Apple, General Motors Corp., Ford Motor Co., Procter & Gamble, and Cisco Systems.

      Exporting and importing goods are some of the best ways for an economy to be efficient. Exporting, in particular, is beneficial for the U.S. because it creates new job opportunities and increases wages. Also, to export goods, the U.S. will be exposed to a competitive international market, which forces us to become more productive. Exporting goods internationally opens you up to a global market that offers higher production, greater profits, and greater scalability.

    11. One might argue that the best way to protect workers and the domestic economy is to stop trade with other nations. Then the whole circular flow of inputs and outputs would stay within our borders. But if we decided to do that, how would we get resources like cobalt and coffee beans? The United States simply can’t produce some things, and it can’t manufacture some products, such as steel and most clothing, at the low costs we’re used to. The fact is that nations—like people—are good at producing different things: you may be better at balancing a ledger than repairing a car. In that case you benefit by “exporting” your bookkeeping services and “importing” the car repairs you need from a good mechanic. Economists refer to specialization like this as advantage.

      This means that no country can efficiently produce everything it needs, so it makes sense to trade with others. Countries focus on producing goods and services they are best at making and trade for the things they cannot produce easily or cheaply. This specialization helps the economy grow because it lowers costs, increases efficiency, and allows everyone to benefit from trade.

      When a country’s currency depreciates, foreign goods become more expensive for domestic consumers, which tends to reduce imports. At the same time, the country’s goods become cheaper for foreign buyers, leading to an increase in exports. When a currency appreciates, the opposite occurs: imports become cheaper, exports become more expensive, and trade patterns shift accordingly.

    13. The difference between the value of a country’s exports and the value of its imports during a specific time is the country’s balance of trade.

      The term balance of trade refers to the difference between the value of a country's exports and the value of its imports. This is important in an economic sense because if these two values are not equal, the result is either a trade surplus or a trade deficit. A trade surplus occurs when exports exceed imports, which can be good because the country is earning more from trade than it is spending. The opposite is a trade deficit, where imports exceed exports.

    14. Exports are goods and services made in one country and sold to others. Imports are goods and services that are bought from other countries. The United States is the largest importer and second largest exporter in the world.

      Exports and imports are important terms to know when learning about the world of business because they are among the most important aspects of trade. Exports are goods made in a country and sold abroad, while imports are goods purchased from other countries. Specializing in exporting goods that a country is relatively good at producing allows it to benefit significantly from trade.

    1. Classroom Management: Realities and Possible Solutions

      This summary document recaps the key points of the training delivered by Elfa Hakimi and Ian Ducharme for the Centre franco during the 2025 Winter Institute. It explores the contemporary challenges of classroom management and proposes theoretical and practical frameworks for fostering an optimal learning environment.

      Executive summary

      Classroom management is not just about discipline; it is a multidimensional challenge requiring rigorous resource planning, the building of authentic relationships, and explicit pedagogical communication. The highlights of this analysis include:

      Nancy Gaudreau's systemic approach: use of the "five fingers of the hand" metaphor to structure management (resources, expectations, relationships, engagement, misbehavior).

      The shift from reaction to proaction: the importance of anticipating behaviors through explicit teaching of routines and in-depth knowledge of student profiles.

      Relational balance: adopting an Adult stance, in the sense of transactional analysis, to avoid the "drama triangle" (Persecutor, Rescuer, Victim).

      Engagement through clarity: using visible learning outcomes (RA) and success criteria (CR) to give meaning to tasks.

      --------------------------------------------------------------------------------

      1. The challenges of contemporary classroom management

      Classroom management is an unavoidable challenge that directly affects how smoothly learning proceeds.

      Disruptive behaviors (chatter, distraction, disobedience, aggression) stem from a variety of factors:

      Intrinsic disorders: attention disorders or emotional difficulties.

      Extrinsic factors: interpersonal conflicts or complex family situations.

      Disengagement: competition from external stimuli (e.g., video games).

      The training stresses that the teacher must act as a facilitator able to "sell" the lesson by making tasks attractive and accessible.

      --------------------------------------------------------------------------------

      2. The reference framework: Nancy Gaudreau's five ingredients

      Inspired by Nancy Gaudreau's book, this model uses the fingers of the hand to symbolize the pillars of effective management.

      A. The Thumb: Managing resources

      This covers material and human organization:

      Time and space: the classroom space is regarded as the "third teacher". It must be versatile (whole-group work, pairs, reading centers).

      Human resources: using students as "timekeepers" and involving parents, special-education teachers, and technicians.

      Technology: integrating coding and digital literacy to boost motivation.

      B. The Index Finger: Clear expectations

      This pillar concerns defining rules and routines:

      Explicit teaching: take nothing for granted. The behavior is modeled ("I do"), practiced together ("We do"), then performed by the student alone ("You do").

      Visual signage: using pictograms or color systems (green, yellow, red) to define the noise levels allowed for each activity (free time vs. transitions).

      C. The Middle Finger: Positive social relationships

      The quality of the teacher-student bond is paramount:

      Authenticity: learn first names quickly, take an interest in students' hobbies (e.g., sports), and chat informally.

      Mutual respect: use a calm tone, even during conflict, and separate the behavior from the person.

      D. The Ring Finger: Attention and engagement

      Keeping interest focused on the object of learning:

      Zone of proximal development: offer tasks that are neither too simple nor too complex, to avoid discouragement.

      Attention-capturing strategies: use "reset" techniques (turning off the lights, rhythmic clapping, nonverbal signals such as a finger on the nose).

      E. The Little Finger: Managing misbehavior

      Though the smallest, this finger is crucial for addressing unacceptable behaviors:

      Proaction: anticipate crises by knowing students' school records (DSO).

      Self-regulation: teach empathy and emotion management through communication circles.

      --------------------------------------------------------------------------------

      3. Theoretical frameworks for student support

      Reality Therapy (William Glasser)

      This eight-step process aims to make the student accountable rather than punished:

      1. Build a connection.

      2. Identify the behavior.

      3. Have the student evaluate the behavior ("Is this helping you?").

      4. Establish a plan.

      5. Obtain a commitment.

      6. Show trust.

      7. Accept no excuses and do not punish needlessly.

      8. Persevere.

      Transactional Analysis (Eric Berne)

      Classroom interactions are shaped by three ego states:

      The Parent (Controlling or Nurturing): sets expectations or provides support.

      The Adult: the rational, balanced state to favor for problem-solving.

      The Child (Spontaneous, Compliant, or Rebellious): the seat of emotions.

      The Drama Triangle to avoid:

      The Persecutor: dominates and punishes ("You are unbearable").

      The Rescuer: does the work in the student's place, undermining their autonomy.

      The Victim: feels powerless and avoids responsibility ("I'm useless").

      --------------------------------------------------------------------------------

      4. Practical and methodological approaches

      | Theme | Suggested strategies |
      | --- | --- |
      | Communication | Replace "Do you understand?" with "Can you rephrase that in your own words?". |
      | Literacy | Use of learning centers and Structured Literacy (80% whole group, 15% small group, 5% individual). |
      | Numeracy | High-impact teaching practices, hands-on work with concrete materials, robotics, and "collaborative-reflective" classrooms. |
      | Feedback | Favor positive reinforcement ("strokes") and celebrate progress with privileges or certificates of value. |

      Conclusion

      Effective classroom management rests on the teacher's ability to remain flexible and to adapt their style (autocratic, democratic, or permissive) to the situation.

      By making learning visible and structuring the environment predictably, the teacher reduces opportunities for misbehavior and fosters the success of all students.

    1. Briefing: Analysis of preconceived ideas about youth work

      Synthesis

      This document synthesizes the work of the Institut national de la jeunesse et de l'éducation populaire (INJEP) presented at the publication of the collective volume Idées reçues sur l'animation jeunesse.

      Although the youth-work ("animation") sector in France serves nearly 4 million young people and mobilizes more than 350,000 workers, it suffers from a lack of recognition and from often reductive social representations.

      The analysis shows that youth work is not a mere "childminding" or recreational-leisure service, but a historical and structural pillar of the French educational ecosystem.

      The main issues identified concern precarious employment conditions (particularly in before- and after-school care), the growing complexity of duties (handling disability, sexist and sexual violence), and the constant tension between "volunteer" (occasional) youth work and professional youth work.

      Despite the sector's image as "unserious", social-science research underscores that play and group activities are vehicles for fundamental learning, complementary to school.

      --------------------------------------------------------------------------------

      1. Historical development and structuring of the sector

      Contemporary youth work is the product of a long history linking popular-education movements to the construction of the republican model.

      Origins and pedagogical continuity: from the late nineteenth century, the first experiments (holiday camps, patronages) aimed to fill the gap left outside school time.

      These initiatives were often led by teachers seeking to experiment with active pedagogies outside the formal setting.

      Professionalization: over the decades there was a semantic and statutory shift from "monitor" to "educator", then to the term "animateur" in the 1960s.

      Public support and the associative network: the sector was structured through a combination of national associative initiatives (CMA, Francas, etc.) and state support via accreditations, subsidies, and the creation of professional corps within the Ministry of Youth and Sports.

      Reorientation toward social integration: between the 1970s and the 1990s, under the pressure of the economic crisis, youth work was progressively integrated into youth policies, with an emphasis on young people's social and vocational integration.

      --------------------------------------------------------------------------------

      2. Portrait of the professional world: Between commitment and precarity

      The youth-work sector is characterized by specific profiles and often degraded working conditions.

      Profiles of youth workers

      | Indicator | Key data |
      | --- | --- |
      | Feminization | Three quarters of the workforce are women (overrepresented in before- and after-school care). |
      | Age | 50% are under 34; 25% are under 25. |
      | Main employers | 60% are hired by local authorities. |
      | Education level | 70% hold a qualification at or below the baccalauréat. |

      Employment conditions

      Instability: heavy use of short contracts and involuntary part-time work, particularly in before- and after-school care, where working hours are fragmented (morning, midday, evening).

      Pay: the average net full-time-equivalent salary is €450 below the average of other sectors (around €1,800 net).

      Turnover: a high turnover rate, with 30% of teams having been in their organization for less than a year.

      --------------------------------------------------------------------------------

      3. Training issues: From the BAFA to professional diplomas

      Training is a major point of tension in the recognition of the profession.

      Predominance of the BAFA: although it is only a certificate for occasional youth work, the BAFA remains the main entry route (50,000 awarded per year versus 3,000 professional diplomas such as the BPJEPS).

      More technical content: the BAFA has grown denser. Trainees are now trained to handle complex issues: harassment, discrimination, sexist and sexual violence, and welcoming children with disabilities.

      Lowering of the entry age: lowering the training entry age to 16 has not revolutionized the sector, but it does require pedagogical adjustments to support these very young supervisors.

      Neglect of longer diplomas: employers, especially municipalities, often favor the BAFA because it is cheaper and quicker than professional university diplomas (BUT) or specialized youth-work diplomas.

      --------------------------------------------------------------------------------

      4. The impact of youth work on young people

      Youth work plays a crucial role in the socialization and development of children and adolescents.

      Peer learning: the closeness in age between youth workers and young people fosters a transmission of knowledge different from the school setting, without doing away with the educational hierarchy.

      The educational value of play: research refutes the idea that children "no longer know how to play". Play is a space for learning autonomy, negotiation, and speaking in public.

      Social inequalities: the most advantaged classes make greater use of the diversity of offerings (culture, sport, leisure), while some segments of the working classes prefer family-based care at home.

      Saturated schedules: children are often exhausted by the stacking of school and extracurricular activities, which limits the time they have for "free play" of their own.

      --------------------------------------------------------------------------------

      5. Contemporary challenges and blind spots in research

      The document highlights several emerging themes that require greater attention.

      Sexual violence: collective childcare settings (accueils collectifs de mineurs, ACM) are statistically safer places than the family setting. However, research shows that girls experience a continuum of sexist violence from early childhood to adulthood.

      Disability: this question is identified as a major blind spot of current research. Although covered in training, the actual inclusion of young people and youth workers with disabilities remains poorly documented.

      Control and regulation: the sector is subject to an inflation of standards (safety, food, hygiene) that is transforming professional practices.

      Territorial disparities: the supply of youth work varies sharply across regions and with the local associative fabric (with notable differences between Brittany and the PACA region, for example).

      --------------------------------------------------------------------------------

      Key quotations

      "Although this sector concerns nearly 4 million young people and more than 350,000 youth workers, it remains largely unknown. It is often associated with leisure and relegated to the margins of school."

      "The BAFA is the majority entry route... Some people train for the BAFA without knowing that they will then move toward youth work as a profession."

      "It is less that children are unable to play than that it is impossible for them to do so, given all the activities demanded of them... at the end of which they are regularly exhausted."

    1. using bioelectrical impedance analysis

      I like that the study uses BIA to assess body composition and not just BMI, which does not account for lean vs fat mass

    2. randomised controlled trial include the long 8-week duration of each intervention diet, and provision of all food and drink to participants’ homes to provide

      The study type (randomized control) and the highly controlled intervention (food delivered to home based on diet type) are great strengths. They make this study more replicable and leave less room for error.

    3. UK dietary energy source

      It is interesting to me that UPFs are the main dietary source of energy in Europe. I feel like Americans are constantly comparing their diets to those of people living in Europe and claiming that Europeans eat much less processed food.

    1. Arguments for Utilitarianismfunction togglePlayOrPause(){document.getElementById("player-container").classList.add("show-player"),document.getElementById("audio-icon").outerHTML=""}Table of ContentsIntroduction: Moral Methodology & Reflective EquilibriumArguments for UtilitarianismWhat Fundamentally MattersThe Veil of IgnoranceEx Ante ParetoExpanding the Moral CircleThe Poverty of the AlternativesThe Paradox of DeontologyThe Hope ObjectionSkepticism About the Distinction Between Doing and AllowingStatus Quo BiasEvolutionary Debunking ArgumentsConclusionResources and Further ReadingIntroduction: Moral Methodology & Reflective EquilibriumYou cannot prove a moral theory. Whatever arguments you come up with, it’s always possible for someone else to reject your premises—if they are willing to accept the costs of doing so. Different theories offer different advantages. This chapter will set out some of the major considerations that plausibly count in favor of utilitarianism. A complete view also needs to consider the costs of utilitarianism (or the advantages of its competitors), which are addressed in Chapter 8: Objections to Utilitarianism. You can then reach an all-things-considered judgment as to which moral theory strikes you as overall best or most plausible.To this end, moral philosophers typically use the methodology of reflective equilibrium. 1 1 This involves balancing two broad kinds of evidence as applied to moral theories:Intuitions about specific cases (thought experiments).General theoretical considerations, including the plausibility of the theory’s principles or systematic claims about what matters.General principles can be challenged by coming up with putative counterexamples, or cases in which they give an intuitively incorrect verdict. In response to such putative counterexamples, we must weigh the force of the case-based intuition against the inherent plausibility of the principle being challenged. 
This could lead you to either revise the principle to accommodate your intuitions about cases or to reconsider your verdict about the specific case, if you judge the general principle to be better supported (especially if you are able to “explain away” the opposing intuition as resting on some implicit mistake or confusion).As we will see, the arguments in favor of utilitarianism rest overwhelmingly on general theoretical considerations. Challenges to the view can take either form, but many of the most pressing objections involve thought experiments in which utilitarianism is held to yield counterintuitive verdicts.There is no neutral, non-question-begging answer to how one ought to resolve such conflicts. 2 2 It takes judgment, and different people may be disposed to react in different ways depending on their philosophical temperament. As a general rule, those of a temperament that favors systematic theorizing are more likely to be drawn to utilitarianism (and related views), whereas those who hew close to common sense intuitions are less likely to be swayed by its theoretical virtues. Considering the arguments below may thus do more than just illuminate utilitarianism; it may also help you to discern your own philosophical temperament!While our presentation focuses on utilitarianism, it’s worth noting that many of the arguments below could also be taken to support other forms of welfarist consequentialism (just as many of the objections to utilitarianism also apply to these related views). This chapter explores arguments for utilitarianism and closely related views over non-consequentialist approaches to ethics.Arguments for UtilitarianismWhat Fundamentally MattersMoral theories serve to specify what fundamentally matters, and utilitarianism offers a particularly compelling answer to this question.Almost anyone would agree with utilitarianism that suffering is bad, and well-being is good. What could be more obvious? 
If anything matters morally, human well-being surely does. And it would be arbitrary to limit moral concern to our own species, so we should instead conclude that well-being generally is what matters. That is, we ought to want the lives of sentient beings to go as well as possible (whether that ultimately comes down to maximizing happiness, desire satisfaction, or other welfare goods).Could anything else be more important? Such a suggestion can seem puzzling. Consider: it is (usually) wrong to steal. 3 3 But that is plausibly because stealing tends to be harmful, reducing people’s well-being. 4 4 By contrast, most people are open to redistributive taxation, if it allows governments to provide benefits that reliably raise the overall level of well-being in society. So it’s not that individuals just have a natural right to not be interfered with no matter what. When judging institutional arrangements (such as property and tax law), we recognize that what matters is coming up with arrangements that tend to secure overall good results, and that the most important factor in what makes a result good is that it promotes well-being. 5 5Such reasoning may justify viewing utilitarianism as the default starting point for moral theorizing. 6 6 If someone wants to claim that there is some other moral consideration that can override overall well-being (trumping the importance of saving lives, reducing suffering, and promoting flourishing), they face the challenge of explaining how that could possibly be so. Many common moral rules (like those that prohibit theft, lying, or breaking promises), while not explicitly utilitarian in content, nonetheless have a clear utilitarian rationale. If they did not generally promote well-being—but instead actively harmed people—it’s hard to see what reason we would have to still want people to follow them. 
To follow and enforce harmful moral rules (such as rules prohibiting same-sex relationships) would seem like a kind of “rule worship”, and not truly ethical at all.[7] Since the only moral rules that seem plausible are those that tend to promote well-being, that’s some reason to think that moral rules are, as utilitarianism suggests, purely instrumental to promoting well-being.

Similar judgments apply to hypothetical cases in which you somehow know for sure that a typically reliable rule is, in this particular instance, counterproductive. In the extreme case, we all recognize that you ought to lie or break a promise if lives are on the line. In practice, of course, the best way to achieve good results over the long run is to respect commonsense moral rules and virtues while seeking opportunities to help others. (It’s important not to mistake the hypothetical verdicts utilitarianism offers in stylized thought experiments with the practical guidance it offers in real life.) The key point is just that utilitarianism offers a seemingly unbeatable answer to the question of what fundamentally matters: protecting and promoting the interests of all sentient beings to make the world as good as it can be.

The Veil of Ignorance

Humans are masters of self-deception and motivated reasoning. If something benefits us personally, it’s all too easy to convince ourselves that it must be okay. We are also more easily swayed by the interests of more salient or sympathetic individuals (favoring puppies over pigs, for example). To correct for such biases, it can be helpful to force impartiality by imagining that you are looking down on the world from behind a “veil of ignorance”. This veil reveals the facts about each individual’s circumstances in society—their income, happiness level, preferences, etc.—and the effects that each choice would have on each person, while hiding from you the knowledge of which of these individuals you are.
[8] To more fairly determine what ideally ought to be done, we may ask what everyone would have most personal reason to prefer from behind this veil of ignorance. If you’re equally likely to end up being anyone in the world, it would seem prudent to maximize overall well-being, just as utilitarianism prescribes.[9]

How much weight should we give to the verdicts that would be chosen, on self-interested grounds, from behind the veil? The veil thought experiment highlights how utilitarianism gives equal weight to everyone’s interests, without bias. That is, utilitarianism is just what we get when we are beneficent to all: extending to everyone the kind of careful concern that prudent people have for their own interests.[10] But it may seem question-begging to those who reject welfarism, and so deny that interests are all that matter. For example, the veil thought experiment clearly doesn’t speak to whether non-sentient life or natural beauty has intrinsic value. It’s restricted to that sub-domain of morality that concerns what we owe to each other, where this includes just those individuals over whom our veil-induced uncertainty about our identity extends: presently existing sentient beings, perhaps.[11] Accordingly, any verdicts reached via the veil of ignorance will still need to be weighed against what we might yet owe to any excluded others (such as future generations, or non-welfarist values).

Still, in many contexts other factors will not be relevant, and the question of what we morally ought to do will reduce to the question of how we should treat each other. Many of the deepest disagreements between utilitarians and their critics concern precisely this question. And the veil of ignorance seems relevant here.
The fact that some action is what everyone affected would personally prefer from behind the veil of ignorance seems to undermine critics’ claims that any individual has been mistreated by, or has grounds to complain about, that action.

Ex Ante Pareto

A Pareto improvement is better for some people, and worse for none. When outcomes are uncertain, we may instead assess the prospect associated with an action—the range of possible outcomes, weighted by their probabilities. A prospect can be assessed as better for you when it offers you greater well-being in expectation, or ex ante.[12] Putting these concepts together, we may formulate the following principle:

Ex ante Pareto: in a choice between two prospects, one is morally preferable to another if it offers a better prospect for some individuals and a worse prospect for none.

This bridge between personal value (or well-being) and moral assessment is further developed in economist John Harsanyi’s aggregation theorem.[13] But the underlying idea, that reasonable beneficence requires us to wish well to all, and prefer prospects that are in everyone’s ex ante interests, has also been defended and developed in more intuitive terms by philosophers.[14]

A powerful objection to most non-utilitarian views is that they sometimes violate ex ante Pareto, such as when choosing policies from behind the veil of ignorance. Many rival views imply, absurdly, that prospect Y could be morally preferable to prospect X, even when Y is worse in expectation for everyone involved.

Caspar Hare illustrates the point with a Trolley case in which all six possible victims are stuffed inside suitcases: one is atop a footbridge, five are on the tracks below, and a train will hit and kill the five unless you topple the one on the footbridge (in which case the train will instead kill this one and then stop before reaching the others).[15] As the suitcases have recently been shuffled, nobody knows which position they are in.
So, from each victim’s perspective, their prospects are best if you topple the one suitcase off the footbridge, increasing their chances of survival from 1/6 to 5/6. Given that this is in everyone’s ex ante interests, it’s deeply puzzling to think that it would be morally preferable to override this unanimous preference, shared by everyone involved, and instead let five of the six die; yet that is the implication of most non-utilitarian views.[16]

Expanding the Moral Circle

When we look back on past moral atrocities—like slavery or denying women equal rights—we recognize that they were often sanctioned by the dominant societal norms at the time. The perpetrators of these atrocities were grievously wrong to exclude their victims from their “circle” of moral concern.[17] That is, they were wrong to be indifferent towards (or even delight in) their victims’ suffering. But such exclusion seemed normal to people at the time. So we should question whether we might likewise be blindly accepting of some practices that future generations will see as evil but that seem “normal” to us.[18] The best protection against making such an error ourselves would be to deliberately expand our moral concern outward, to include all sentient beings—anyone who can suffer—and so recognize that we have strong moral reasons to reduce suffering and promote well-being wherever we can, no matter who it is that is experiencing it.

While this conclusion is not yet all the way to full-blown utilitarianism, since it’s compatible with, for example, holding that there are side-constraints limiting one’s pursuit of the good, it is likely sufficient to secure agreement with the most important practical implications of utilitarianism (stemming from cosmopolitanism, anti-speciesism, and longtermism).

The Poverty of the Alternatives

We’ve seen that there is a strong presumptive case in favor of utilitarianism.
If no competing view can be shown to be superior, then utilitarianism has a strong claim to be the “default” moral theory. In fact, one of the strongest considerations in favor of utilitarianism (and related consequentialist views) is the deficiencies of the alternatives. Deontological (or rule-based) theories, in particular, seem to rest on questionable foundations.[19]

Deontological theories are explicitly non-consequentialist: instead of morally assessing actions by evaluating their consequences, these theories tend to take certain types of action (such as killing an innocent person) to be intrinsically wrong.[20] There are reasons to be dubious of this approach to ethics, however.

The Paradox of Deontology

Deontologists hold that there is a constraint against killing: that it’s wrong to kill an innocent person even if this would save five other innocent people from being killed. This verdict can seem puzzling on its face.[21] After all, given how terrible killing is, should we not want there to be less of it? Rational choice in general tends to be goal-directed, a conception which fits poorly with deontic constraints.[22] A deontologist might claim that their goal is simply to avoid violating moral constraints themselves, which they can best achieve by not killing anyone, even if that results in more individuals being killed. While this explanation can render deontological verdicts coherent, it does so at the cost of making them seem awfully narcissistic, as though the deontologist’s central concern was just to maintain their own moral purity or “clean hands”.

Deontologists might push back against this characterization by instead insisting that moral action need not be goal-directed at all.
[23] Rather than only seeking to promote value (or minimize harm), they claim that moral agents may sometimes be called upon to respect another’s value (by not harming them, even as a means to preventing greater harm to others), which would seem an appropriately outwardly-directed, non-narcissistic motivation.

The challenge remains that such a proposal makes moral norms puzzlingly divergent from other kinds of practical norms. If morality sometimes calls for respecting value rather than promoting it, why is the same not true of prudence? (Given that pain is bad for you, for example, it would not seem prudent to refuse a painful operation now if the refusal commits you to five comparably painful operations in future.) Deontologists may offer various answers to this question, but insofar as we are inclined to think, pre-theoretically, that ethics ought to be continuous with other forms of rational choice, that gives us some reason to prefer consequentialist accounts.[24]

Deontologists also face a tricky question about where to draw the line. Is it at least okay to kill one person to prevent a hundred killings? Or a million? Absolutists never permit killing, no matter the stakes. But such a view seems too extreme for many. Moderate deontologists allow that sufficiently high stakes can justify violations. But how high? Any answer they offer is apt to seem arbitrary and unprincipled. Between the principled options of consequentialism or absolutism, many will find consequentialism to be the more plausible of the two.

The Hope Objection

Impartial observers should want and hope for the best outcome. Non-consequentialists claim, nonetheless, that it’s sometimes wrong to bring about the best outcome. Putting the two claims together yields the striking result that you should sometimes hope that others act wrongly.

Suppose it would be wrong for some stranger—call him Jack—to kill one innocent person to prevent five other (morally comparable) killings.
Non-consequentialists may claim that Jack has a special responsibility to ensure that he does not kill anyone, even if this results in more killings by others. But you are not Jack. From your perspective as an impartial observer, Jack’s killing one innocent person is no more or less intrinsically bad than any of the five other killings that would thereby be prevented. You have most reason to hope that there is only one killing rather than five. So you have reason to hope that Jack acts “wrongly” (killing one to save five). But that seems odd.

More than merely being odd, this might even be taken to undermine the claim that deontic constraints matter, or are genuinely important to abide by. After all, to be important just is to be worth caring about. For example, we should care if others are harmed, which validates the claim that others’ interests are morally important. But if we should not care more about Jack’s abiding by the moral constraint against killing than we should about his saving five lives, that would seem to suggest that the constraint against killing is not in fact more morally important than saving five lives.

Finally, since our moral obligations ought to track what is genuinely morally important, if deontic constraints are not in fact important then we cannot be obligated to abide by them.[25] We cannot be obliged to prioritize deontic constraints over others’ lives, if we ought to care more about others’ lives than about deontic constraints. So deontic constraints must not accurately describe our obligations after all. Jack really ought to do whatever would do the most good overall, and so should we.

Skepticism About the Distinction Between Doing and Allowing

You might wonder: if respect for others requires not harming them (even to help others more), why does it not equally require not allowing them to be harmed?
Deontological moral theories place great weight on distinctions such as those between doing and allowing harm, or killing and letting die, or intended versus merely foreseen harms. But why should these be treated so differently? If a victim ends up equally dead either way, whether they were killed or “merely” allowed to die would not seem to make much difference to them—surely what matters to them is just their death. Consequentialism accordingly denies any fundamental significance to these distinctions.[26]

Indeed, it’s far from clear that there is any robust distinction between “doing” and “allowing”. Sometimes you might “do” something by remaining perfectly still.[27] Also, when a doctor unplugs a terminal patient from life support machines, this is typically thought of as “letting die”; but if a mafioso, worried about an informant’s potentially incriminating testimony, snuck into the hospital and unplugged the informant’s life support, we are more likely to judge it to constitute “killing”.[28] Jonathan Bennett argues at length that there is no satisfactory, fully general distinction between doing and allowing—at least, none that would vindicate the moral significance that deontologists want to attribute to such a distinction.[29] If Bennett is right, then that might force us towards some form of consequentialism (such as utilitarianism) instead.

Status Quo Bias

Opposition to utilitarian trade-offs—that is, benefiting some at a lesser cost to others—arguably amounts to a kind of status quo bias, prioritizing the preservation of privilege over promoting well-being more generally.

Such conservatism might stem from the Just World fallacy: the mistake of assuming that the status quo is just, and that people naturally get what they deserve. Of course, reality offers no such guarantees of justice.
What circumstances one is born into depends on sheer luck, including one’s endowment of physical and cognitive abilities which may pave the way for future success or failure. Thus, even later in life we never manage to fully wrest back control from the whimsies of fortune and, consequently, some people are vastly better off than others despite being no more deserving. In such cases, why should we not be willing to benefit one person at a lesser cost to privileged others? They have no special entitlement to the extra well-being that fortune has granted them.[30] Clearly, it’s good for people to be well-off, and we certainly would not want to harm anyone unnecessarily.[31] However, if we can increase overall well-being by benefiting one person at a lesser cost to another, we should not refrain from doing so merely due to a prejudice in favor of the existing distribution.[32] It’s easy to see why traditional elites would want to promote a “morality” which favors their entrenched interests. It’s less clear why others should go along with such a distorted view of what (and who) matters.

It can similarly be argued that there is no real distinction between imposing harms and withholding benefits. The only difference between the two cases concerns what we understand to be the status quo, which lacks moral significance. Suppose scenario A is better for someone than B. Then to shift from A to B would be a “harm”, while to prevent a shift from B to A would be to “withhold a benefit”. But this is merely a descriptive difference. If we deny that the historically given starting point provides a morally privileged baseline, then we must say that the cost in either case is the same, namely the difference in well-being between A and B. In principle, it should not matter where we start from.[33]

Now suppose that scenario B is vastly better for someone else than A is: perhaps it will save their life, at the cost of the first person’s arm.
Nobody would think it okay to kill a person just to save another’s arm (that is, to shift from B to A). So if we are to avoid status quo bias, we must similarly judge that it would be wrong to oppose the shift from A to B—that is, we should not object to saving someone’s life at the cost of another’s arm.[34] We should not care especially about preserving the privilege of whoever stood to benefit by default; such conservatism is not truly fair or just. Instead, our goal should be to bring about whatever outcome would be best overall, counting everyone equally, just as utilitarianism prescribes.

Evolutionary Debunking Arguments

Against these powerful theoretical objections, the main consideration that deontological theories have going for them is closer conformity with our intuitions about particular cases. But if these intuitions cannot be supported by independently plausible principles, that may undermine their force—or suggest that we should interpret these intuitions as good rules of thumb for practical guidance, rather than as indicating what fundamentally matters.

The force of deontological intuitions may also be undermined if it can be demonstrated that they result from an unreliable process. For example, evolutionary processes may have endowed us with an emotional bias favoring those who look, speak, and behave like ourselves; this, however, offers no justification for discriminating against those unlike ourselves. Evolution is a blind, amoral process whose only “goal” is the propagation of genes, not the promotion of well-being or moral rightness. Our moral intuitions require scrutiny, especially in scenarios very different from our evolutionary environment. If we identify a moral intuition as stemming from our evolutionary ancestry, we may decide not to give much weight to it in our moral reasoning—the practice of evolutionary debunking.
[35] Katarzyna de Lazari-Radek and Peter Singer argue that views permitting partiality are especially susceptible to evolutionary debunking, whereas impartial views like utilitarianism are more likely to result from undistorted reasoning.[36] Joshua Greene offers a different psychological debunking argument. He argues that deontological judgments—for instance, in response to trolley cases—tend to stem from unreliable and inconsistent emotional responses, including our favoritism of identifiable over faceless victims and our aversion to harming someone up close rather than from afar. By contrast, utilitarian judgments involve the more deliberate application of widely respected moral principles.[37]

Such debunking arguments raise worries about whether they “prove too much”: after all, the foundational moral judgment that pain is bad would itself seem emotionally-laden and susceptible to evolutionary explanation—physically vulnerable creatures would have powerful evolutionary reasons to want to avoid pain whether or not it was objectively bad![38]

However, debunking arguments may be most applicable in cases where we feel that a principled explanation for the truth of the judgment is lacking. We do not tend to feel any such lack regarding the badness of pain—that is surely an intrinsically plausible judgment if anything is. Some intuitions may be over-determined: explicable both by evolutionary causes and by their rational merits. In such a case, we need not take the evolutionary explanation to undermine the judgment, because the judgment also results from a reliable process (namely, rationality). By contrast, deontological principles and partiality are far less self-evidently justified, and so may be considered more vulnerable to debunking.
Once we have an explanation for these psychological intuitions that shows why we would have them even if they were rationally baseless, we may be more justified in concluding that they are indeed rationally baseless.

As such, debunking objections are unlikely to change the mind of one who is drawn to the target view (or regards it as independently justified and defensible). But they may help to confirm the doubts of those who already felt there were some grounds for scepticism regarding the intrinsic merits of the target view.

Conclusion

Utilitarianism can be supported by several theoretical arguments, the strongest perhaps being its ability to capture what fundamentally matters. Its main competitors, by contrast, seem to rely on dubious distinctions—like “doing” vs. “allowing”—and built-in status quo bias. At least, that is how things are apt to look to one who is broadly sympathetic to a utilitarian approach. Given the flexibility inherent in reflective equilibrium, these arguments are unlikely to sway a committed opponent of the view. For those readers who find a utilitarian approach to ethics deeply unappealing, we hope that this chapter may at least help you to better understand what appeal others might see in the view.

However strong you judge the arguments in favor of utilitarianism to be, your ultimate verdict on the theory will also depend upon how well the view is able to counter the influential objections that critics have raised against it. The next chapter discusses theories of well-being, or what counts as being good for an individual.

How to Cite This Page

Chappell, R.Y. and Meissner, D. (2023). Arguments for Utilitarianism. In R.Y. Chappell, D. Meissner, and W. MacAskill (eds.), An Introduction to Utilitarianism, <https://www.utilitarianism.net/arguments-for-utilitarianism>, accessed February 13, 2026.
    1. On the other hand, the performance Órbita #3 (2022) brings the present bodies of two performers

      "Órbita #3 (2022) is an earlier performative work that brings the present bodies of two performers..."

    1. Understanding Racism and Systemic Discrimination: Briefing Note

      Analytical Summary

      This briefing note draws on the first in a series of four workshops aimed at developing cultural humility in school settings. Its central objective is to deconstruct the biological myths surrounding the notion of race in order to highlight its nature as a socio-historical construct. The analysis shows that racism is not an isolated or purely individual event but a systemic structure rooted in 500 years of colonization and ideologies of human classification. Key points include the need for an individual journey of introspection on power and privilege, the recognition of institutional biases (notably in school suspensions and assessments), and the importance of creating a space for courageous dialogue despite the discomfort inherent in these topics.

      --------------------------------------------------------------------------------

      Dialogue Framework: The Five Agreements

      To address these sensitive topics, the workshop establishes five fundamental principles (the first four of which come from the Ojibway Indigenous community) to ensure a constructive conversation:

      | Agreement | Description |
      | --- | --- |
      | Be engaged | Set distractions aside so as to invest fully in the training. |
      | Speak your truth | Speak for yourself ("I" statements), based on your own unique lived experience. |
      | Lean into discomfort | Accept that discomfort is necessary to identify solutions and move to action. |
      | Expect non-closure | Recognize that the training is the beginning of a conversation, with no immediate magic solution. |
      | Respect social-emotional boundaries | Allow momentary withdrawal if the topic becomes too personally difficult. |

      --------------------------------------------------------------------------------

      Biological Deconstruction of Race

      A major contribution of the source is the distinction between biological realities and social constructs.

      The genetic reality

      Human DNA is composed of roughly 3 billion nucleotides. Research in evolutionary biology shows that:

      • The difference between two individuals, whatever their physical appearance, amounts to only about 3 million nucleotides (a variation of 0.1%).

      • By comparison, two fruit flies exhibit 10 times more genetic variance than humans do.

      • A human shares 99.9% of their DNA with any other member of the species.

      Origins of physical differences

      Variations in appearance (skin color, facial features) are explained by two purely scientific factors:

      1. Genetic drift: the migration of small population groups out of Africa (the cradle of humanity) reduced the genetic variance available within these new groups.

      2. Environment: adaptation over generations to different climates (e.g., less sunlight in Northern Europe) altered physical appearance without creating distinct species.
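      As a quick consistency check on the figures above (note that a 0.1% difference corresponds to a genome of about 3 billion nucleotides, the standard estimate, rather than the 300 billion stated in the original workshop notes):

      $$\frac{3 \times 10^{6}\ \text{differing nucleotides}}{3 \times 10^{9}\ \text{total nucleotides}} = 10^{-3} = 0.1\%$$

      The 99.9% shared-DNA figure is simply the complement: $1 - 0.1\% = 99.9\%$.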

      --------------------------------------------------------------------------------

      Historical and Ideological Genesis

      The document establishes that racism preceded the notion of race: race was invented to justify political and economic actions.

      Colonization and the Doctrine of Discovery: to justify the seizure of land (Turtle Island) and slavery, Indigenous and Black peoples had to be dehumanized by declaring them "uncivilized."

      Carl Linnaeus's classification: this Swedish scientist ranked living beings hierarchically, placing the white man at the top of the scale and Black and Indigenous peoples at the bottom.

      The ideology of white supremacy: this classification gave rise to an ideology that persists today, embedded in social and institutional structures.

      --------------------------------------------------------------------------------

      Racism as a Systemic Structure

      Racism must be understood as an interrelation between three levels:

      1. Ideological/cultural level: beliefs embedded in society (e.g., the prejudice that a Black boy is more aggressive).

      2. Systemic level: the translation of these biases into policies and statistics.

         ◦ School example: a marked disparity in suspension rates for Black and Indigenous students.
         ◦ Institutional example: school calendars modeled solely on Christian holidays, or culturally biased screening tests (such as the OCRE test).

      3. Individual level: a person's actions (e.g., a teacher who, because of unconscious bias, penalizes a racialized student more severely).

      The question of francophone data

      It is noted that anglophone school boards publish more data on these disparities. On the francophone side, statistics are sometimes deemed "unreliable" because of small sample sizes, even though realities on the ground are similar to those of neighboring boards. Francophone Black populations, often from more recent (second-generation) immigration, are only beginning to document these experiences on a large scale.

      --------------------------------------------------------------------------------

      Conceptual Clarifications and Contemporary Issues

      The analysis provides precise answers to often contentious questions:

      Historically marginalized communities: this term refers primarily to Black and Indigenous peoples, owing to 500 years of systemic oppression. It also includes other groups who do not fit the "mold" of the white man (low-income people, LGBTQ2S+ people, etc.).

      "Racism against white people" (reverse racism): the source asserts that there is no systemic racism against white people. While a white individual may experience discrimination or insults, this does not structurally affect their chances of success, because the system and institutional power remain in favor of the white population.

      Power and privilege: white privilege is described as an "unearned" advantage. Becoming aware of it is not a matter of guilt but of responsibility and introspection.

      White fragility: a concept (theorized by Robin DiAngelo) describing the strong defensive or emotional reactions of white people when confronted with the issue of racism.

      Conclusion

      The path toward cultural humility requires recognizing that racism is a pervasive structure in which everyone is immersed. Unlearning these biases is an ongoing process. The next steps in this reflection will cover implicit bias, intersectionality, and the concrete practice of cultural humility in school settings.

    1. Briefing Report: Authority, Truth, and Informational Challenges Looking Toward 2050

      Executive Summary

      This document synthesizes the remarks of Pierre Rosanvallon, David Chavalarias, and Antoine Bayet before the Senate delegation concerning the evolution of the values of authority and truth in the face of social networks and media transformations.

      The key points identified are:

      The crisis of authority: authority cannot be decreed; it is an "invisible institution" that is recognized from below. Rebuilding it requires valuing the scientific process (trial and error, confrontation of views) rather than the mere pronouncement of distant truths.

      The systemic threat of platforms: through their engagement-maximizing algorithms, social networks structurally favor toxic content (toxicity on X measured at 49%) and enable geopolitical manipulation (Russia, United States) aimed at undermining European democracies.

      The emergence of "dark information": part of the population, often well educated and well integrated, is abandoning traditional media for alternative channels that imitate the codes of professional journalism ("Canada Dry" information) in order to spread activist or truncated narratives.

      2050 scenarios: the future of information ranges between a miracle of citizen reappropriation, a total collapse of truth, and a lasting fragmentation of reality into hermetic bubbles.

      Avenues for action: the response lies in algorithmic transparency, media literacy extended to AI, digital sovereignty, and the adoption of new modes of deliberation and voting.

      --------------------------------------------------------------------------------

      1. The Nature of Authority and Legitimacy

      1.1. Authority as an "Invisible Institution"

      Authority is fundamentally distinct from power. Whereas power has means of coercion at its disposal (police, rules), authority, like trust and legitimacy, cannot be imposed by decree.

      Bottom-up recognition: authority "comes from below." It is granted by those who recognize it, not by the one who claims to exercise it.

      The medieval university model: historically, authority was built not on the word of a single person but through critical confrontation and discussion (quodlibetal disputations).

      1.2. The Crisis of Scientific Authority

      The scientist is perceived today as a distant figure, locked inside a bubble. Restoring this authority requires:

      Making the process tangible: showing the "tinkering," hesitation, and trial and error inherent in research.

      Favoring proximity: like the scientists of the 1930s or François Arago in the nineteenth century, authority is earned by serving the community and remaining accessible.

      Embracing indeterminacy: democracy must accept taking on citizens' doubts and prejudices rather than seeking to "re-educate minds" from the top down.

      --------------------------------------------------------------------------------

      2. Social Networks: Infrastructures of Manipulation

      2.1. A Geopolitical "Pincer" Context

      Europe faces two types of outside influence seeking to alter how its citizens perceive the world:

      From the East (Russia): Use of KGB doctrine aimed at undermining democracies by targeting the media and disorienting public opinion.

      From the West (United States/Big Tech): A strategy of "flooding the zone" with confusing content to discredit traditional sources of authority in favor of authoritarian or supremacist models.

      2.2. Algorithmic Toxicity

      Digital platforms are not neutral channels. They practice a harmful form of algorithmic "editorializing":

      Engagement maximization: To hold attention, the algorithms favor clashes and hostility.

      Feed distortion: On X (formerly Twitter), Elon Musk's arrival raised the share of toxic content in users' feeds from 32% to 49%.

      Hidden subscriptions: On average, a user sees only 3% of what their real social environment produces; the rest is selected by the platform.

      2.3. Systemic Risks and Sovereignty

      Astroturfing: Creating fake crowds (bots, AI) to simulate popular support for a cause (e.g., the MacronLeaks in 2017, support for the AFD in Germany).

      Infrastructure dependence: The case of Starlink illustrates the risk that, by 2050, a private actor could cut off a state's internet access to impose its political will.

      "Authoritarian Tech": Steering democracy through opaque, centralized technological tools.

      --------------------------------------------------------------------------------

      3. The New Faces of Information

      3.1. News "Dropouts"

      Contrary to the clichés, citizens who reject traditional ("mainstream") media are often:

      • Highly integrated socially (executives, doctors, lawyers, elected officials).

      • Well educated and digitally active.

      • In search of an "alternative legitimacy."

      3.2. "Dark Information," or "Canada Dry" Information

      This form of information imitates professional codes perfectly, the better to deceive:

      Staging: Studio interviews, on-screen experts, journalistic vocabulary.

      Superior virality: During the first lockdown, content from a pro-Didier Raoult Facebook group was shared more than that of six major media outlets combined (BFM, Le Monde, Le Figaro, etc.).

      The "Holdup" effect: Using credible figures (former ministers, researchers) to validate truncated or manipulated narratives.

      3.3. The Crisis of Context

      Modern information suffers from systematic decontextualization. An image or video stripped of its original context becomes a weapon. The fight for truth now runs through the "war over context" and the long time scale of the archive.

      --------------------------------------------------------------------------------

      4. Foresight: The Worlds of Information in 2050

      Three contrasting scenarios were developed to anticipate how the system may evolve:

      | Scenario | Description | Key Characteristics |
      | --- | --- | --- |
      | The Miracle | Citizens take back control | Information as a common good, audited algorithms, AI in the service of context, willingness to pay. |
      | The Dark | Collapse of truth | Disappearance of independence, citizens' information fatigue, totally dominant platforms, vulnerable democracy. |
      | The Chiaroscuro | Fragmentation (the most likely) | Coexistence of several regimes of truth; high-quality information for an elite vs. closed information bubbles for everyone else. |

      --------------------------------------------------------------------------------

      5. Possible Solutions and Recommendations

      To ward off the destruction of democratic debate, several levers are identified:

      1. Reforming voting systems: Move away from single-member plurality voting, which is vulnerable to manipulation between the two rounds, toward systems such as Majority Judgment, which reduces "tactical voting" and hateful division.

      2. Transparency and regulation: Strictly enforce the Digital Services Act (DSA) to open up the algorithmic "black boxes," while developing "digital commons" and public information services.

      3. Comprehensive education: Extend media literacy education to AI education starting in middle school. The point is not only to verify facts (fact-checking) but to understand the logistics of information production and the biases of the tools.

      4. Digital sovereignty: Break free from captive infrastructures (United States/China) to guarantee the rule of law.

      5. A pedagogy of how information is made: Journalists and researchers must "show the seams" of their craft, be willing to say "I don't know," and spell out their methods to regain trust.
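      The Majority Judgment rule mentioned among the voting reforms above is easy to state precisely: each voter grades every candidate on a shared verbal scale, and the candidate with the best median grade wins. Below is a minimal sketch; the grade scale and ballots are hypothetical illustrations (not data from the hearings), and the full method adds an iterative tie-breaking step not shown here.

```python
# Minimal Majority Judgment sketch (hypothetical scale and ballots).
# Grades are listed worst-to-best; a candidate's score is the lower
# median grade across all ballots ("majority grade").
GRADES = ["Reject", "Poor", "Acceptable", "Good", "Very Good", "Excellent"]

def majority_grade(ballot_grades):
    """Return the lower median grade for one candidate's ballots."""
    ranked = sorted(ballot_grades, key=GRADES.index)
    return ranked[(len(ranked) - 1) // 2]  # lower median for even counts

def winner(candidates):
    """Pick the candidate whose majority grade sits highest on the scale."""
    return max(candidates, key=lambda c: GRADES.index(majority_grade(candidates[c])))

votes = {
    "A": ["Good", "Excellent", "Poor", "Good", "Acceptable"],
    "B": ["Acceptable", "Good", "Reject", "Acceptable", "Very Good"],
}
print(majority_grade(votes["A"]))  # -> Good
print(winner(votes))               # -> A
```

      Because the score is a median rather than a sum, a bloc of extreme ballots shifts the outcome far less than under plurality scoring, which is the property the hearing credits with reducing "tactical voting."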

    1. Reviewer #3 (Public review):

      This work by Du et al. addresses a critical problem in cryo-electron microscopy. To date, there are few ways of generating phase contrast during cryo-EM imaging while remaining in focus. Cryo-EM practitioners today must generate contrast by collecting out-of-focus exposures, a process that introduces aberrations in the resulting image data. Recent work has shown that standing wave lasers are capable of using the ponderomotive effect to shift the phase of electrons in transmission electron microscopy to generate in-focus phase contrast imaging for cryo-EM. A limitation of this 'laser phase plate' is the high laser power required, which can damage optical mirrors and necessitate high laser safety. Thus, alternative approaches are needed for phase contrast imaging in cryo-EM.

      In this manuscript, Du et al. exploit their expertise in ultrafast electron microscopy to explore the ability to shift the phase of electrons using pulsed electrons and lasers. The motivation for exploring pulsed laser phase plates stems from the fact that femtosecond pulses from 9 W lasers can generate extremely high peak power (as much as the standing-wave laser phase plate, >1 gigawatt) at the back focal plane. If successful, this type of instrument will likely be much more affordable and easier to deploy worldwide.

      The work outlined here is a proof of principle, showing that the electron packets of an ultrafast scanning electron microscopy beam at 30 kV can be phase-shifted by 430 radians (24,637 degrees), far more than the roughly 1.5 radians (90 degrees) needed for phase contrast imaging. The data presented do not use any biological samples; instead, the authors measure the spread of the electron beam on a test sample to assess both the ability to target pulsed lasers onto electron packets and the amount of electron spread (which relates to the phase shift). They also took the system a step further, measuring how changes in laser power affect performance, and show that the system can remain stable for more than 10 hours.
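      As a quick check on the figures above (reading the stated 1.5-radian requirement as the usual quarter-wave, i.e. π/2, Zernike phase shift):

```latex
430\,\mathrm{rad} \times \frac{180^\circ}{\pi} \approx 24\,637^\circ,
\qquad
\frac{\pi}{2}\,\mathrm{rad} \approx 1.57\,\mathrm{rad} = 90^\circ
```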

      The only weaknesses relate to the broad readability of the text. Improved textual clarity will help ensure a wider readership.

      Overall, this work is an important step toward developing lower-cost alternatives to the standing-wave laser phase plate.

    1. But short of that, replacing the market for child pornography with simulated imagery may be a useful stopgap.

      3) Treatment for these biological urges should not make AI imagery more acceptable; therapies should be the first line for stopping them. To me, it seems to normalize these urges and the depicted behaviors.

  2. Feb 2026
    1. Author response:

      The following is the authors’ response to the previous reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This study generated 3D cell constructs from endometrial cell mixtures that were seeded in the Matrigel scaffold. The cell assemblies were treated with hormones to induce a "window of implantation" (WOI) state. Although many bioinformatic analyses point in this direction, there are major concerns that must be addressed.

      Strengths:

      The addition of 3 hormones to enhance the WOI state (although not clearly supported in comparison to the secretory state).

      Comments on revisions:

      The authors did their best to revise their study according to the Reviewers' comments. However, the study remains unconvincing and incomplete, while still being too dense and insufficiently focused.

      Reviewer #2 (Public review):

      Zhang et al. have developed an advanced three-dimensional culture system of human endometrial cells, termed a receptive endometrial assembloid, that models the uterine lining during the crucial window of implantation (WOI). During this mid-secretory phase of the menstrual cycle, the endometrium becomes receptive to an embryo, undergoing distinctive changes. In this work, endometrial cells (epithelial glands, stromal cells, and immune cells from patient samples) were grown into spheroid assembloids and treated with a sequence of hormones to mimic the natural cycle. Notably, the authors added pregnancy-related factors (such as hCG and placental lactogen) on top of estrogen and progesterone, pushing the tissue construct into a highly differentiated, receptive state. The resulting WOI assembloid closely resembles a natural receptive endometrium in both structure and function. The cultures form characteristic surface structures like pinopodes and exhibit abundant motile cilia on the epithelial cells, both known hallmarks of the mid-secretory phase. The assembloids also show signs of stromal cell decidualization and an epithelial-mesenchymal transition-like process at the implantation interface, reflecting how real endometrial cells prepare for possible embryo invasion.

      Although the WOI assembloid represents an important step forward, it still has limitations: the supportive stromal and immune cell populations decrease over time in culture, so only early-passage assembloids retain full complexity. Additionally, the differences between the WOI assembloid and a conventional secretory-phase organoid are more quantitative than absolute; both respond to hormones and develop secretory features, but the WOI assembloid achieves a higher degree of differentiation due to the addition of "pregnancy" signals. Overall, while it's a reinforced model (not an exact replica of the natural endometrium), it provides a valuable in vitro system for implantation studies and testing potential interventions, with opportunities to improve its long-term stability and biological fidelity in the future.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      This study generated 3D cell constructs (i.e., assembloids) that were treated with hormones to induce a 'window of implantation' (WOI) state. While the authors have made large efforts to address the reviewers' feedback, the study's findings remain unconvincing and incomplete.

      (1) The authors have appropriately revised the terminology from 'organoids' to 'assembloids' in several parts of the manuscript. However, this revision remains incomplete, as the main title, figure legends, and figure titles still contain the incorrect term. A thorough review of the entire manuscript is recommended to ensure consistent and accurate use of terminology.

      Thank you for your meticulous review. We have now conducted a full check and confirmed that terminology is used consistently and accurately throughout the text.

      (1) Previous comments raised concerns about the feasibility of robustly passaging assembloid structures - comprising epithelial, stromal and immune cells - under epithelial growth conditions. The authors responded by stating that they optimized the expansion medium with a stromal cell-promoting factor. Additionally, rather than conducting scRNA-seq on both early and late passages (P6-P10) as suggested, they performed immunofluorescence staining, which confirmed the persistence of stromal cells at passage 6. However, the presence of immune cells was not addressed. Confirmation of their presence is essential for all further claims. Moreover, a more zoomed-out view of the immunostaining would help clarify the overall cellular composition across the entire well and facilitate comparison with corresponding brightfield images.

      Whole-mount immunofluorescence of the sixth-generation assembloids revealed that CD45<sup>+</sup> immune cells surrounded FOXA2<sup>+</sup> glands; a more zoomed-out view is provided.

      Author response image 1.

      Whole-mount immunofluorescence showed that CD45<sup>+</sup> cells (immune cells) were arranged around the glandular spheres that were FOXA2<sup>+</sup>. Scale bars = 50 μm (left) and 30 μm (right).

      In their response, the authors mention using the first three passages to ensure optimal cell diversity and viability. However, the manuscript states that 'assembloids derived from the first generation are used for experiments' (line 106). This discrepancy must be clarified.

      Thank you for your suggestion. We have revised the relevant content to “The assembloids derived from the first three generations are used for experiments” (Line 90-91).

      (2) The authors have made a commendable effort to bring more focus to the manuscript, which has improved readability.

      We thank you for your insightful suggestions, which have greatly improved the quality of our manuscript.

      (3) The "embryo implantation" part remains very unconvincing. How did authors define "the blastoids could grow within the endometrial assembloids and interact with them"? What did they mean with "grow"? Did blastoids further differentiate? Normally, blastoids cannot further "grow". "Survival rates of blastoids" is not equal to "growth". It is not clear how the survival rate was quantified. Besides, regarding the "interaction rates", how did authors define and quantify it? Actually, blastoids are able to attach to Matrigel efficiently (even without any endometrial cells), so authors cannot simply define the "interaction" as the co-localization of blastoids and assembloids via brightfield images. In addition, for the assembloids as the 3D structures grow in the Matrigel, the epithelial parts are normally apical-in, while the blastoids attach to the apical (lumen) side of the epithelial cells, so physiologically, blastoids should interact with the apical part of the epithelial cells instead of the outside of the assembloids.

      (1) What did they mean with "grow"? Did blastoids further differentiate?

      By "grow" we mean that, on the one hand, the blastoids' volume and morphology undergo continuous dynamic changes; on the other hand, whereas only the inner cell mass (ICM) and trophectoderm exist at the blastocyst stage, the ICM further differentiates into OCT4<sup>+</sup> epiblast and GATA6<sup>+</sup> hypoblast.

      (2) "Survival rates of blastoids" is not equal to "growth". It is not clear how the survival rate was quantified.

      The definition of "survival rate" is as follows: morphologically, the blastocoel remains non-collapsed and the cell boundaries are distinct (with no obvious cell detachment); molecularly, the markers of epiblast, hypoblast and trophectoderm are expressed. The survival rate is calculated as the ratio of viable embryoids to the total number of embryoids.

      (3) Besides, regarding the "interaction rates", how did authors define and quantify it? Actually, blastoids are able to attach to Matrigel efficiently (even without any endometrial cells), so authors cannot simply define the "interaction" as the co-localization of blastoids and assembloids via brightfield images.

      The criteria for determining interaction include not only attachment between the blastoids and assembloids observed via brightfield images, but also their sustained tight adhesion against external mechanical perturbations (e.g., medium replacement, immunostaining procedures).

      (4) In addition, for the assembloids as the 3D structures grow in the Matrigel, the epithelial parts are normally apical-in, while the blastoids attach to the apical (lumen) side of the epithelial cells, so physiologically, blastoids should interact with the apical part of the epithelial cells instead of the outside of the assembloids.

      You are absolutely correct. In vivo, the embryo indeed makes initial contact with the apical side of the epithelial cells. The introduction of the blastoid co-culture model herein is intended to demonstrate that these receptive endometrial assembloids can better support blastoid growth and development.

      (4) Previous comments highlighted the absence of distinct shifts in gene expression profiles between SEC assembloids and WOI assembloids, which contrasts with findings from primary endometrial tissue reported by Wang et al. (2020). While the authors have expanded their analysis using the Mfuzz algorithm and identified changes in mitochondria- and cilia-associated genes, the manuscript still lacks evidence of significant transcriptional changes in key WOI marker genes, as described in Wang et al. This discrepancy must be addressed and discussed in greater depth to clarify the biological relevance of their model.

      The endometrium in vivo involves complex crosstalk among multiple cell types and is tightly regulated by the hypothalamic-pituitary-ovarian (HPO) axis, thus exhibiting distinct shifts in gene expression during the peri-implantation period.

      In our in vitro model, alterations in mitochondria- and cilia-related genes were observed, which to a certain extent demonstrates that these window of implantation (WOI) assembloids possess receptive-phase characteristics and can be employed to investigate WOI-associated scientific questions or conduct in vitro drug screening.

      However, substantial efforts are still required to optimize the current model for fully recapitulating the dynamic changes in endometrial gene expression across different phases in vivo, and this aspect is further addressed in the Limitations section of our discussion (Line 342-353).

      “However, our WOI endometrial assembloids also exhibit some limitations. It is undeniable that the assembloids cannot perfectly replicate the in vivo endometrium, which comprises functional and basal layers with a greater abundance of cell subtypes, under superior regulation by hypothalamic-pituitary-ovarian (HPO) axis. Specifically, stromal and immune cells are challenging to stably passage, and their proportion is lower than in the in vivo endometrium. While the in vivo peri-implantation period exhibits intricate gene expression dynamics driven by systemic regulation, our models only partially recapitulate these changes, primarily in mitochondria- and cilia-associated genes. Nevertheless, to some extent, these WOI assembloids possess receptivity characteristics and can be utilized for investigating receptivity-related scientific questions or conducting in vitro drug screening. Further refinements are required to fully simulate the dynamic endometrial gene expression patterns across all menstrual cycle stages. We are looking forward to integrating stem cell induction, 3D printing, and microfluidic systems to modify the culture environment.”

      (5) In the authors' response document, they present data integrating their results with those of Garcia Alonso et al. (2021). However, these integrated analyses are not included in the revised manuscript (which should be, if answering a major concern).

      Thanks for your valuable suggestions. We have now integrated the findings of Garcia Alonso et al. (2021) into the revised manuscript (Line 132) and Figure S2E–F.

      (8) Fig 2D: The authors have clarified that CD45+ staining is used. However, they have not yet adapted the typo in the figure legend of the right picture.

      Thanks for your thorough review. The left panel of Figure 2D is stained with CD45 to label immune cells, while the right panel is stained with CD44. These details have been clearly indicated in both the manuscript and the figure legend.  

      (9) All quantification analyses (as described in the authors' response document) should be clearly described in the Materials & Methods section.  

      Thanks for your valuable suggestions. All quantification analyses have now been added to the Supporting Materials and Methods section (Line 94-104, Line 110-111, Line 241-244).

      (10) The authors have provided clarification regarding their method for quantifying immunofluorescence staining (e.g., OLFM4 expression in Fig. 3C) in their response document. However, these methodological details are not included in the revised manuscript. It is important that such information is incorporated into the manuscript itself to ensure transparency and reproducibility for others.

      Thanks for your valuable suggestions. All quantification analyses have now been added to the Supporting Materials and Methods section (Line 94-104).

      (13) It is needed to include the author's response to the comment about literature showing the opposite of increased number of cilia during the WOI into the discussion part of the paper.

      We appreciate your suggestions. The relevant content has now been added to the Discussion section (Lines 319–323).

      (14) In the authors' response, they explain the difference between pinopodes and microvilli. They should include this explanation briefly in the manuscript. Moreover, Fig. 3F lacks a picture of cilia structure in CTRL condition. In addition, the structures that are indicated as cilia with an orange arrow seem to not be attached to the endometrial cells (anymore). It would be useful to show another more representative picture for the cilia.

      (1) Thank you for your valuable suggestions. The distinction between pinopodes and microvilli has now been added to the Supporting Materials and Methods section (Line 230-236).

      (2) You are probably referring to Figure 2F—we did not observe ciliary structures in the CTRL group.

      (3) The cilia structure was visualized via transmission electron microscopy (TEM), which requires ultrathin sectioning. Thus, the cilia shown in the image correspond to a single cross-section of the captured assembloids. Owing to technical limitations, three-dimensional visualization of cilia on the cells cannot be achieved.

      (17) The results on co-culturing blastoids with the WOI assembloids is not convincing. The blastoids are exposed to the basolateral side of the endometrial epithelial cells, while in vivo, blastocysts interact with the apical side of the endometrial epithelial cells first (apposition and attachment), followed by invasion into the endometrium. This means that the interaction shown here is not physiological. Therefore, it is not justified to say that this platform holds promise to investigate maternal-fetal interactions.

      We agree with your perspective that discrepancies exist between this model and the physiological processes in vivo. However, such differences do not negate the scientific value of the model.

      The core merit of this study lies in the successful establishment of co-culture systems for blastoids and WOI assembloids. Notably, genuine cross-talk occurs between the two components, thereby providing a practical and operational tool for subsequent research.

      Although the current contact orientation differs from that observed in vivo, future optimization of the cell culture protocol (via modulation of cell polarity) will enable the model to better recapitulate physiological conditions. Therefore, the innovation and operability of this model within specific research contexts still render it a robust platform for investigating maternal-fetal interactions.

      Overall, it is highly recommended that the authors carefully review the manuscript for grammatical errors, inconsistencies and issues with scientific phrasing. The language throughout the text requires substantial editing to improve clarity, readability and precision. 

      We appreciate your suggestions. A full manuscript check was performed to rectify grammatical errors, inconsistencies, and inappropriate scientific phrasing, with further language refinement by a native English-speaking specialist.

      Fig 1A: This overview is unclear. How many days do the assembloids grow before being stimulated with hormones? Are CTRL assembloids only kept in culture until day 2 and SEC and WOI assembloids until day 8? This is also not clear form the Materials and Methods section. Should be clarified.

      Thanks for your valuable suggestions. We have now updated the overview (Figure 1A) and Materials and Methods section (Line 370-371, Line 379-381).

      “Hormonal treatment was initiated following the assembly of the endometrial assembloids (about 7-day growth period).”

      “The CTRL group was cultured in ExM without hormone supplementation and subjected to parallel culture for 8 days along with the two aforementioned groups.”

      Fig 1B: From these brightfield images, it appears that the size of the assembloids remains relatively consistent from Day 0 to Day 3 and up to Day 11 (especially in CTRL). However, in Fig S1A, the assembloids on Day 11 appear significantly larger compared to those on Day 2 (or Day 4). Authors should clarify this discrepancy (since both of the figures are shown as "brightfield of endometrial assembloids").

      You are probably referring to the observation that the assembloids at Day 11 in Fig. S1A are smaller in size than those at Day 2 (or Day 4) in Fig. 1B. This discrepancy arises because the time points in Fig. 1B are calculated starting from the initiation of hormone treatment for the SEC and WOI groups, rather than from the beginning of the overall culture as in Fig. S1A. In addition, assembloids exhibit size variability during the same culture period due to individual heterogeneity.

      To eliminate ambiguity, we have now labeled “Hormone Day 0, Day 2, Day 8” in Fig. 1B and revised the corresponding figure legend to read: “Endometrial assembloids from the CTRL, SEC, and WOI groups, which were subjected to hormone treatment on Days 0, 2, and 8, exhibited comparable growth patterns throughout the culture period.”

      Fig 2G: authors still used the description "organoids" here instead of "assembloids".

      We appreciate your careful review. Corrections have been made accordingly.

      Fig. 3C: For the OLFM4 staining quantification, in the Y-axis authors wrote "proportion of OLFM4 (+) cells (OLFM4 (+)/total", but in the rebuttal letter they mention "its fluorescence intensity (quantified as mean grey value) was significantly stronger in both the SEC and WOI groups compared to the CTRL group". This is confounding and should be clarified.

      We apologize for incorrectly writing "fluorescence intensity" in the rebuttal letter; the correct term should be the "proportion of OLFM4 (+) cells (OLFM4 (+)/total)" as shown in Fig. 3C.

      Fig 5D: Acetyl-α-tubulin is the marker of ciliated cells and should be expressed in the cilia instead of the whole cells. It is very strange to quantify as "mean fluorescence intensity (acetyl-α-tubulin/DAPI)" to assess the cilia. Please clarify.

      Thank you for your insightful comment. To clarify, the ratio "mean fluorescence intensity (acetyl-α-tubulin/DAPI)" was calculated within individual acetyl-α-tubulin<sup>+</sup> ciliated cells. Acetyl-α-tubulin fluorescence was normalized to the DAPI signal of the same cell nucleus, not the whole-cell population. This corrected for variations in cell number and staining efficiency to ensure data accuracy.

      Fig 5F: it is very bizarre that unciliated epithelium was transformed from ciliated epithelium, and CTRL was transformed from SEC and WOI. Should be clarified and discussed.

      Pseudotime analysis sorts discrete cells along a "pseudotime axis" based on similarities and differences in cellular gene expression, thereby simulating cell state transitions.

      Ciliated epithelium → unciliated epithelium: During the menstrual cycle, ciliated and unciliated epithelia undergo mutual transformation from the secretory phase (or mid-secretory phase) to the menstrual phase, and then to the proliferative phase. Here, we demonstrate the transition of ciliated cells to unciliated cells from the SEC and WOI stages to the CTRL stage.

      Notably, the two cell types coexist, and what is presented here merely reflects a transformation trend. The relevant content has been incorporated into the Discussion section (Line 319-321).

      “Throughout the menstrual cycle, ciliated and unciliated epithelia undergo mutual transformation from the secretory phase (or mid-secretory phase) to the menstrual phase, and then to the proliferative phase.”

      Fig 5H: To show "enhanced invasion ability", authors must provide some quantification and statistical analysis. It is very hard to see the difference between the CTRL and SEC regarding ROR2/Wnt5A.

      We appreciate your suggestion. Quantification and statistical analysis have been added to Figure 5H.

      Fig 6A: please elaborate the "mIVC1" and "mIVC2" in the figure legends.

      Additions have been made to the figure legends accordingly, as follows: "mIVC1: modified In Vitro Culture Medium 1; mIVC2: modified In Vitro Culture Medium 2."

      Fig S1D: Is the PAS staining also done in CTRL assembloids? In addition, it is stated that the assembloids secrete glycogen because of a positive PAS staining, while it could also be neutral mucins, glycoproteins, etc, which are all detected by PAS staining. So, the authors should be more careful in stating that it is glycogen, or a PAS staining with diastase digestion should be done.

      The PAS staining results for the CTRL group are presented in Fig. S1I. In addition, results of PAS staining with diastase digestion are included in Figure S1.

      Line 120: references?

      The reference has been added accordingly.

      Line 178: The term 'Endometrial Receptivity Test (ERT)' is used. Do the authors mean Endometrial Receptivity Analysis (ERA) test? ERA is the commonly used abbreviation for this test. Moreover, the authors describe ERA as 'a kind of gene analysis-based test.' This should be rephrased more scientifically correct.

      Thank you for your valuable suggestion. We have revised the term to ERA, and modified the phrase "a kind of gene analysis-based test" to "gene expression profiling-based diagnostic assay" (Lines 160–163).

      “We performed Endometrial Receptivity Analysis (ERA), a gene expression profiling-based diagnostic assay that integrates high-throughput sequencing and machine learning to quantify the expression of endometrial receptivity-associated genes.”

      Line 83: assemblies → assembloids

      We appreciate your suggestion. The text has been updated to “the endometrial assembloids progressed from epithelial organoids, to assemblies of epithelial and stromal cells and then to stem cell-laden 3D artificial endometrium”.

      The Materials and Methods section currently lacks the needed details. Authors should substantially expand this section to clearly describe all experimental and analytical procedures, including, among others, immunofluorescence staining, quantification methods, bioinformatics analyses, and statistical approaches. Providing comprehensive methodological information is essential.

      A detailed description of these methods is provided in the Supporting Materials and Methods section.

      Reviewer #2 (Recommendations for the authors): 

      The revised manuscript is much improved in clarity, focus, and experimental support. The authors have thoughtfully addressed the major concerns from the previous review. In particular, the logic and flow of the paper are clearer, it now guides the reader through the rationale (constructing a WOI model), the comparative analysis against in vivo tissue and simpler organoids, and the key features that distinguish the WOI assembloid. The added functional validation (especially the blastoid co-culture experiment) significantly strengthens the work by showing a tangible outcome of "receptivity" beyond molecular profiling. The distinction between the standard secretory-phase organoid and the WOI assembloid is now more convincing, as the authors highlight several specific differences in morphology (more cilia, pinopodes), metabolism, and implantation success that favor the WOI model. The manuscript also reads cleaner with the bioinformatic sections condensed to the most important findings (excess detail was trimmed or moved to supplements) and the rationale for gene/pathway selection explicitly stated.

      The manuscript has been significantly strengthened through the addition of functional assays (like the blastoid co-culture), clearer transcriptomic and proteomic data, and detailed analyses of hormone treatments, cilia biology, and stromal and immune cell behavior in early passages. These updates confirm that the WOI assembloid supports embryo attachment and outperforms standard secretory organoids, while integrating external references and clarifications on terminology. Minor suggestions remain, such as clarifying statistical significance and adding functional interpretations for certain observations, but overall, the manuscript is now more robust and biologically convincing.

      Remaining points for clarification: There are a few minor points that still merit attention:

      - Use of the Endometrial Receptivity Test (ERT): As previously mentioned, if the authors have ERT data for the SEC organoid group, including that information would further support the claim that the WOI assembloid is uniquely receptive. If not, it would be helpful to add a statement clarifying that the ERT was employed specifically as a confirmatory test for the WOI assembloids, rather than as a comparative measure across all groups.

      Thank you for your valuable suggestion. We have now supplemented the description in the Supporting Materials and Methods section (Lines 160–162) as follows: “ERA was employed specifically as a confirmatory test for the WOI assembloids, rather than as a comparative measure across all groups.”

      - Because the assembloids are created from primary tissue samples, it would be helpful to briefly comment on how consistent the findings were across different patient-derived samples. For example, did all biological replicates show similar expression of receptivity markers and comparable capacity to support blastoid attachment? Although this seems implied, including a sentence in the Methods or Results sections that specifies the number of donor lines tested would help readers assess the model's variability and reproducibility.

      We appreciate your advice. The relevant statement has been added to the Supporting Materials and Methods section (Lines 312–313).

      “All biological replicates (fourteen individuals) of endometrial assembloids show similar expression of receptivity markers and comparable capacity to support blastoid attachment.”

      - The authors mention promising future directions, such as integrating 3D printing and microfluidics to further enhance the model, which is an excellent forward-looking statement. It would also be valuable to suggest the inclusion of additional cell types, like more robust immune cell populations or endothelial components, as future improvements to create an even more comprehensive model of the endometrial lining.

      Thank you for your valuable suggestion. 3D printing and microfluidics serve as approaches for introducing multiple cell types. We have supplemented the following statement in the manuscript: “We are looking forward to integrating stem cell induction, 3D printing, and microfluidic systems to modify the culture environment.” (Lines 352–353).

      We are grateful for your valuable feedback and constructive criticism, which have helped us improve the quality of our work in terms of content and presentation. We have diligently revised the manuscript and made necessary changes. Here, we have attached the revised manuscript, figures, and all supplementary materials for your re-evaluation. Thank you again for your continued support and look forward to your favorable decision.

    1. Reviewer #1 (Public review):

      Summary:

      This paper presents maRQup a Python pipeline for automating the quantitative analysis of preclinical cancer immunotherapy experiments using bioluminescent imaging in mice. maRQup processes images to quantify tumor burden over time and across anatomical regions, enabling large-scale analysis of over 1,000 mice. The study uses this tool to compare different CAR-T cell constructs and doses, identifying differences in initial tumor control and relapse rates, particularly noting that CD19.CD28 CAR-T cells show faster initial killing but higher relapse compared to CD19.4-1BB CAR-T cells. Furthermore, maRQup facilitates the spatiotemporal analysis of tumor dynamics, revealing differences in growth patterns based on anatomical location, such as the snout exhibiting more resistance to treatment than bone marrow.

      Strengths:

      (1) The maRQup pipeline enables the automatic processing of a large dataset of over 1,000 mice, providing investigators with a rapid and efficient method for analyzing extensive bioluminescent tumor image data.

      (2) Through image processing steps like tail removal and vertical scaling, maRQup normalizes mouse dimensions to facilitate the alignment of anatomical regions across images. This process enables the reliable demarcation of nine distinct anatomical regions within each mouse image, serving as a basis for spatiotemporal analysis of tumor burden within these consistent regions by quantifying average radiance per pixel.

      Weaknesses:

      (1) While the pipeline aims to standardize images for regional assessment, the reliance on scaling primarily along the vertical axis after tail removal may introduce limitations to the quantitative robustness of the anatomically defined regions. This approach does not account for potential non-linear growth across dimensions in animals of different ages or sizes, which could result in relative stretching or shrinking of subjects compared to an average reference.

      (2) Furthermore, despite excluding severely slanted images, the pipeline does not fully normalize for variations in animal pose during image acquisition (e.g., tucked body, leaning). This pose variability not only impacts the precise relative positioning of internal anatomical regions, potentially making their definition based on relative image coordinates more qualitative than truly quantitative for precise regional analysis, but it also means that the bioluminescent light signal from the tumor will not propagate equally to the camera as photons will travel differentially through the tissue. This differing light path through tissues due to variable positioning can introduce large variability in the measured radiance that was not accounted for in the analysis algorithm. Achieving more robust anatomical and quantitative normalization might require methods that control animal posture using a rigid structure during imaging.

      Comments on revisions:

      (1) Clarification of 2D Analysis. We strongly recommend that the authors explicitly define maRQup as a 2D spatiotemporal analysis technique. Since optical imaging quantification is inherently dependent on tissue type and signal depth, characterizing this as a 3D or volumetric method without tomographic correction is inaccurate. Please precede "spatiotemporal" with "2D" throughout the text to ensure precision regarding the method's capabilities.

      (2) Data Validation and Scaling: Supplemental Figure g currently lacks the units necessary to support the assertion.

      Non-Uniform Growth: The authors' method implies that mouse growth is linear and uniform in all directions (isotropic). However, murine growth is not akin to the inflation of a balloon; animals elongate and widen at different rates. The current scaling does not account for these physiological non-linearities.

      Pose Variability: The scaling approach appears to neglect significant variability in animal positioning. Even under anesthesia, animal pose is rarely identical across subjects or time points.

      Requirement for Evidence: Without quantitative data, there appear to be significant differences between the individual images and the merged image. If the authors assert that this is a "classical setting" where mouse positioning is 100% consistent and growth curves are identical in multiple dimensions, please provide specific references that validate these assumptions. Otherwise, the scaling must be corrected to account for anisotropic growth and pose differences, or it must be stated that scaling was only based on one dimension.

      (3) Methodology of Spatial Regions The manuscript does not currently indicate how the nine distinct spatial regions were determined. Please expand the methods section to include the specific segmentation algorithms or anatomical criteria used to define these regions, as this is critical for reproducibility.

    2. Reviewer #3 (Public review):

      Summary:

      The paper "The 1000+ mouse project: large-scale spatiotemporal parametrization and modeling of preclinical cancer immunotherapies" is focused on developing a novel methodology for automatic processing of bioluminescence imaging data. It provides quantitative and statistically robust insights on preclinical experiments that will contribute to optimizing cell-based therapies. There is an enormous demand for such methods and approaches that enable the spatiotemporal evaluation of cell monitoring in large cohorts of experimental animals.

      Strengths:

      The manuscript is generally well written, and the experiments are scientifically sound. The conclusions reflect the soundness of experimental data. This approach seems to be quite innovative and promising to improve the statistical accuracy of BLI data quantification.

      This methodology can be used as a universal quantification tool for BLI data for in vivo assessment of adoptively transferred cells due to the versatility of the technology.

      Comments on revisions:

      The critiques have been taken care of appropriately.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This paper presents maRQup, a Python pipeline for automating the quantitative analysis of preclinical cancer immunotherapy experiments using bioluminescent imaging in mice. maRQup processes images to quantify tumor burden over time and across anatomical regions, enabling large-scale analysis of over 1,000 mice. The study uses this tool to compare different CAR-T cell constructs and doses, identifying differences in initial tumor control and relapse rates, particularly noting that CD19.CD28 CAR-T cells show faster initial killing but higher relapse compared to CD19.4-1BB CAR-T cells. Furthermore, maRQup facilitates the spatiotemporal analysis of tumor dynamics, revealing differences in growth patterns based on anatomical location, such as the snout exhibiting more resistance to treatment than bone marrow.

      Strengths:

      (1) The maRQup pipeline enables the automatic processing of a large dataset of over 1,000 mice, providing investigators with a rapid and efficient method for analyzing extensive bioluminescent tumor image data.

      (2) Through image processing steps like tail removal and vertical scaling, maRQup normalizes mouse dimensions to facilitate the alignment of anatomical regions across images. This process enables the reliable demarcation of nine distinct anatomical regions within each mouse image, serving as a basis for spatiotemporal analysis of tumor burden within these consistent regions by quantifying average radiance per pixel.

      Weaknesses:

      (1) While the pipeline aims to standardize images for regional assessment, the reliance on scaling primarily along the vertical axis after tail removal may introduce limitations to the quantitative robustness of the anatomically defined regions. This approach does not account for potential non-linear growth across dimensions in animals of different ages or sizes, which could result in relative stretching or shrinking of subjects compared to an average reference.

      Our answer to this comment is included in the Supplemental Methods. The standard deviation of the mouse pixels was calculated to ensure that the image processing steps did not alter the shape or size of the mice. Such consistency is particularly striking because our dataset was accrued by nine lab members over the last five years, before we conceived and carried out our analysis (cf. answer to point #2). In fact, it is the very consistency of this IVIS measurement that led us to conceive our pipeline. As seen from Supplemental Figure 4G, there is minimal difference in the shape or size of the mice across 7,534 images. A total of 99 images were removed, either because they were too slanted (91/7633, 1.2%) or due to processing errors (8/7633, 0.1%). Also, the vertical scaling was conducted while keeping the aspect ratio unchanged to prevent any non-anatomical scaling. Hence, we did not record any nonlinear growth of the mice that would warrant more convoluted alignment and/or batch correction for our images.
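      As a rough illustration of the normalization step described above (not the maRQup code itself), a vertical rescaling that preserves the aspect ratio, together with a simple shape-consistency check, might look like the following sketch; the function names and the nearest-neighbor resampling are illustrative assumptions:

```python
import numpy as np

def scale_to_height(mask: np.ndarray, target_h: int) -> np.ndarray:
    """Rescale a binary mouse mask to target_h rows, preserving aspect ratio.

    Nearest-neighbor resampling; an illustrative stand-in for the
    pipeline's actual resizing step. The same scale factor is applied
    to both axes, so no non-anatomical distortion is introduced.
    """
    h, w = mask.shape
    scale = target_h / h
    target_w = max(1, round(w * scale))
    rows = (np.arange(target_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(target_w) / scale).astype(int).clip(0, w - 1)
    return mask[np.ix_(rows, cols)]

def mask_area_fraction(mask: np.ndarray) -> float:
    """Fraction of the frame covered by the mouse; its standard deviation
    across many images is one simple check that normalization did not
    distort shape or size."""
    return float(mask.mean())
```

      Because the aspect ratio is held fixed, the area fraction of the mask is unchanged by the rescaling, which is the property the shape-consistency check relies on.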

      (2) Furthermore, despite excluding severely slanted images, the pipeline does not fully normalize for variations in animal pose during image acquisition (e.g., tucked body, leaning). This pose variability not only impacts the precise relative positioning of internal anatomical regions, potentially making their definition based on relative image coordinates more qualitative than truly quantitative for precise regional analysis, but it also means that the bioluminescent light signal from the tumor will not propagate equally to the camera, as photons will travel differentially through the tissue. This differing light path through tissues due to variable positioning can introduce large variability in the measured radiance that was not accounted for in the analysis algorithm. Achieving more robust anatomical and quantitative normalization might require methods that control animal posture using a rigid structure during imaging.

      Reviewer #1 is correct that different mouse postures would be an issue when aligning the images and normalizing for size. However, all experiments are conducted for luminescence measurements in the IVIS system (i.e., this requires anesthesia and long integration time for imaging). In our experience and in our 1000+ mouse dataset, we noticed that all experiments (n=37) did place the anesthetized mice in a stretched/elongated position. Of note, these experiments were conducted by nine different researchers who were not instructed on how to place the mice on the machine for ideal image processing, thus showing that the standard protocol of imaging mice on IVIS does not introduce large variations in animal pose during image acquisition. We think the issue raised by Reviewer #1 is moot in the context of classical settings for mouse luminescence imaging.

      Reviewer #2 (Public review):

      Summary:

      The authors developed a method that automatically processes bioluminescent tumor images for quantitative analysis and used it to describe the spatiotemporal distribution of tumor cells in response to CD19-targeting CAR-T cells, comprising CD28 or 4-1BB costimulatory domains. The conclusion highlights the dependence of tumor decay and relapse on the number of injected cells, the type of cells, and the initial growth rate of tumors (where initial is intended from the first day of therapy). The authors also determined the spatiotemporal analysis of tumor response to CAR T therapy in different regions of the mouse body in a model of acute lymphoblastic leukemia (ALL).

      Strengths:

      The analysis is based on a large number of images and accounts for many variables. The results of the analysis largely support their claims that the kinetics of tumor decay and relapse are dependent on the CAR T co-stimulatory domain and number of cells injected and tumor growth rates. 

      Weaknesses:

      The study does not specify how a) differences in mouse positioning (and whether they excluded not-aligned mice) and b) tumor spread at the start of therapy influenced their data. The study does not take into account the potential heterogeneity of CAR T cells in terms of CAR T expression or T cell immunophenotype (differentiation, exhaustion, fitness...).

      See answer #2 to Reviewer #1.

      Author response image 1.

      Author response image 1 shows the average tumor radiance on day zero (when CAR-T cell therapy was administered) for all mice. While there is some spread, most mice had tumors localized to the liver or bone marrow.

      Reviewer #3 (Public review):

      Summary:

      The paper "The 1000+ mouse project: large-scale spatiotemporal parametrization and modeling of preclinical cancer immunotherapies" is focused on developing a novel methodology for automatic processing of bioluminescence imaging data. It provides quantitative and statistically robust insights into preclinical experiments that will contribute to optimizing cell-based therapies. There is an enormous demand for such methods and approaches that enable the spatiotemporal evaluation of cell monitoring in large cohorts of experimental animals.

      Strengths:

      The manuscript is generally well written, and the experiments are scientifically sound. The conclusions reflect the soundness of experimental data. This approach seems to be quite innovative and promising to improve the statistical accuracy of BLI data quantification. 

      This methodology can be used as a universal quantification tool for BLI data for in vivo assessment of adoptively transferred cells due to the versatility of the technology.

      Weaknesses: 

      No weaknesses were identified by this Reviewer. 

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      In this paper, the authors propose a significant advancement in optical image data analysis by employing automation. They effectively demonstrate the valuable insights that can be gained from analyzing extensive datasets with a more unbiased methodology. At present, I do not have any specific suggestions for improvement.

      However, it is important to note that this work is limited in its operational scope. Specifically, it relies on predefined ROIs rather than aligning the signal site with anatomical systems. The scaling model and image cropping are simplistic, animal pose is not taken into account, and the data output needs to be called semi-quantitative or qualitative, and would have been stronger utilizing an AI agent. Nevertheless, this work underscores the potential of automated systems in preclinical image analysis, which is a crucial step towards developing more sophisticated approaches to optical image data analysis.

      While our analysis used predefined ROIs, the maRQup pipeline allows users to manually draw ROIs on the mouse image.

      Reviewer #2 (Recommendations for the authors):

      The writing and presentation of data are clear and accurate, but some additional information should be added regarding the imaging protocol used to acquire the original data. 

      The authors mention fluorescence in Figure 1. I expected all the data to be generated from bioluminescent NALM-6 tumors, since bioluminescence is indeed measured in average radiance and can be per pixel (p/sec/cm2/sr/pixel). Fluorescence should be measured using radiance efficiency (p/sec/cm2/sr)/(µW/cm2), a unit that compensates for non-uniform excitation light pattern in the instrument. Would the author find different results if fluorescence data were analyzed separately?

      Reviewer #2 is correct that the unit for fluorescence would be radiance efficiency. The word “fluorescent” was included in the label of Figure 1a to highlight that our workflow could be applied to other types of light-generating methods (i.e., fluorescence vs. bioluminescence). However, in this study, only measurements of bioluminescent tumors were analyzed. If fluorescence measurements are to be analyzed, our methods of image acquisition and processing would be directly applicable.

      Did the author ever check the signal of the snout in mice with no tumor?

      In mice with no tumor, there is no detectable signal in the snout (or anywhere else, for that matter).

      The urine of mice contains phosphor, and might give a background signal, especially if longer exposure is used at the end of the study.

      For the mice with no tumor injection, the luminescence signal was below background (<10<sup>2</sup> p/sec/cm<sup>2</sup>/sr/pixel). In particular, we do not detect any signal in the bladder/urine. Additionally, as described in the Supplemental Methods and Figure 1b, only pixels that were on the mouse as determined from the brightfield image were used to calculate the tumor burden from the radiance of the luminescent image. This method ensures that any background signal (e.g., from phosphor in mouse urine) would be excluded in the radiance quantification and not bias the results.
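      The masking step described above (restricting quantification to pixels on the mouse body, as determined from the brightfield image) can be sketched roughly as follows; the function name and the simple intensity threshold are assumptions for illustration, not the pipeline's actual segmentation:

```python
import numpy as np

def masked_average_radiance(brightfield: np.ndarray,
                            radiance: np.ndarray,
                            threshold: float = 0.2) -> float:
    """Average radiance per pixel over the mouse body only.

    Pixels are kept when the brightfield intensity exceeds `threshold`
    (an illustrative segmentation; the real pipeline derives the mouse
    mask from the brightfield image). Background signal off the body,
    e.g. from phosphor in urine, is excluded by construction.
    """
    mouse_mask = brightfield > threshold
    if not mouse_mask.any():
        return 0.0
    return float(radiance[mouse_mask].mean())
```

      Any bright spot outside the brightfield-derived mask, however intense, contributes nothing to the quantified tumor burden.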

      Additionally, as described in the Methods, the exposure time was held constant at 30 seconds for each IVIS measurement across all 37 experiments.

      The data using more than 2 million cells comes from only 10 mice, and maybe the biological relevance of this group is limited since it will not be achievable and translatable in humans (PMID: 33653113).

      We appreciate Reviewer #2’s attention to this issue. The effect observed in our study is large enough to reach statistical significance despite the small number of mice. Note that the dosing regimen used was optimized for the murine NSG model and would require appropriate scaling before clinical application. Nonetheless, NSG mice remain the gold standard for pre‑clinical in vivo evaluation and their use is generally required by regulatory agencies, such as the FDA, for assessing novel CAR‑T cell therapies; thus these findings are relevant for advancing such treatments.

    1. Briefing: Preparing the 10th ESS Week at School (SESSE 2026)

      Executive Summary

      This document summarizes the key points of the webinar organized by the association L'ESPER in preparation for the 10th edition of the Social and Solidarity Economy (ESS) Week at School ("Semaine de l'ESS à l'école"), which will take place from March 23 to 28, 2026.

      Co-organized with the OCCE, the event aims to raise awareness among students, from primary school through higher education, of alternative economic models based on democracy, social justice, and the general interest.

      The webinar highlights a twofold ambition: educating about the ESS (understanding its models) and through the ESS (hands-on experience with collective projects).

      The presentations feature concrete programs, testimonials from field practitioners (notably from Scops and Scics), and a range of ready-to-use teaching tools for teachers.

      The ultimate goal is to transform society by embedding these principles in individuals' educational and civic pathways.

      --------------------------------------------------------------------------------

      1. Institutional Framework and Educational Ambitions

      L'ESPER, an association bringing together 41 organizations from the education and ESS sectors, carries a strong political and pedagogical vision for the French education system.

      Vision and Advocacy

      L'ESPER views the ESS as a necessary lever for transforming the economy. Its ambitions revolve around two axes:

      Education about the ESS: Building understanding of a model of society based on social justice and the general interest. An advocacy statement published in August 2025 calls for integrating the ESS into school curricula from middle school onward.

      Education through the ESS: Fostering individual and collective emancipation through concrete classroom projects that let students discover cooperation by doing.

      The ESS Week at School (SESSE)

      Listed on the Éducation Nationale calendar, this annual week offers three modes of engagement:

      1. Educational teams: Showcasing year-long projects or organizing one-off activities.

      2. ESS actors: Hosting school classes in their organizations or speaking directly in schools.

      3. Pupils and students: Running their own projects and raising awareness among their peers.

      --------------------------------------------------------------------------------

      2. Fundamentals of the Social and Solidarity Economy

      The ESS is not a recent economy, but it has been institutionalized, notably through the Hamon law of July 31, 2014.

      The five types of ESS organizations

      | Type of organization | Main characteristics |
      | --- | --- |
      | Associations | Voluntary groupings of people around a non-profit project. |
      | Foundations | Irrevocable allocation of assets to a cause of general interest. |
      | Cooperatives | Enterprises in which members share power and profits. |
      | Mutuals | Non-profit bodies practicing solidarity among their members. |
      | ESS commercial companies | Private companies that abide by ESS principles. |

      Core Principles and Values

      All these organizations share a common foundation:

      A purpose serving the general or collective interest.

      Limited profit-making: Profits are primarily reinvested in the project.

      Democratic governance: The "one person, one vote" principle applies, regardless of capital held.

      --------------------------------------------------------------------------------

      3. Field Feedback and Practitioner Testimonials

      The Regional Union of Scops and Scics (Occitanie)

      Eugénie Bruni stresses the importance of promoting the cooperative model to young people.

      Typical activities: Two-hour sessions presenting the history, distinctive features, and concrete examples of cooperatives.

      Impact: Broadening students' career horizons by showing that cooperation is a viable economic model (4,558 cooperative companies in France, generating €10.2 billion in revenue).

      Advice: Do not hesitate to contact the Regional Unions, which have delegates throughout the country to support projects.

      The Scop Morasuti (printing company, AURA region)

      Damien's testimonial about employees taking over their company through a court-supervised sale.

      The social struggle: Conversion into a Scop in July 2024. The model made it possible to abolish unpaid sick-leave waiting days and to rebalance salaries to correct seniority-based inequalities.

      School engagement: Free provision of material offcuts to schools and technical support (design, desktop publishing) for exhibition projects.

      Observation on democracy: Students are often surprised by the dual role of "worker and boss." As Damien explains: "No one can agree with everything... democracy means putting it to a vote."

      --------------------------------------------------------------------------------

      4. Pedagogical Resources and Tools

      L'ESPER offers tested tools adapted to different levels (middle school, high school, higher education).

      Ready-to-use awareness tools

      | Tool | Goal | Method |
      | --- | --- | --- |
      | Junior Coopérative | Introduce project methodology. | A puzzle on the stages of a project, plus real case studies. |
      | Idées reçues sur l'ESS | Debunk preconceptions about the ESS. | A moving debate based on "True/False" cards. |
      | Filmographie ESS | Illustrate the realities of the ESS. | A selection of documentaries with teaching guides. |
      | Fiches Pratiques | Organize a session. | Logistical guides for company visits or classroom talks. |

      Recommendations for speakers

      Adaptation: Simplify the message for middle-schoolers by focusing on the pillars (solidarity, wealth sharing, democracy) rather than legal details.

      Interactivity: Use video materials (e.g., the "Ma boîte en Scop" series) and encourage dialogue.

      Preparation: Plan about an hour of discussion between the teacher and the speaker beforehand to frame the activity.

      --------------------------------------------------------------------------------

      5. Calendar and Registration

      Registration: Open on the L'ESPER website. The staff team matches schools with ESS actors.

      February 25, 2026: Second preparatory webinar, devoted to a detailed presentation of the ESS with the expert Hervé de Falvar.

      March 23-28, 2026: The ESS Week at School takes place. Activities are showcased on L'ESPER's social media and newsletters.

      Key quote: "The ESS promotes a model of society based in particular on democracy, social justice, and the general interest [...] leading to a fairer society in which individuals are emancipated both individually and collectively."

    1. The State of the Science on Manual Therapies: Between Myths and Realities

      Executive Summary

      This synthesis reviews the current state of scientific knowledge on manual therapies (physiotherapy, osteopathy, chiropractic, etiopathy), with particular emphasis on back pain, the leading reason for consultation.

      The key points are as follows:

      The primacy of movement: Modern science shows that the most effective treatment for low back pain is active movement.

      Passive therapies should not be used in isolation.

      Legal and ethical obligations: Unlike pseudomedicines, physiotherapy is bound by the obligation to use methods consistent with "established scientific knowledge," a legal principle anchored since the Mercier ruling of 1936.

      Debunking the myths: The notions of a "displaced vertebra" or a "misaligned pelvis" are mental constructs with no anatomical reality.

      Manual palpation, however reassuring, lacks the scientific reliability needed to diagnose tissue texture or blockage.

      Risks and social consequences: Beyond the placebo or contextual effect, some manipulations (notably cervical ones) carry serious risks such as stroke.

      Moreover, these practices can interfere with public health messaging and undermine patients' health literacy.

      --------------------------------------------------------------------------------

      1. How Science's View of Back Pain Has Evolved

      The medical approach to low back pain has changed radically over the past thirty years, shifting from a logic of rest to a logic of action.

      Timeline of paradigm shifts

      1986: A New England Journal of Medicine study suggests that two days of bed rest are more beneficial than seven.

      1995: A pivotal study shows that the "control" group (continuing to live normally) recovers better than groups assigned to strict rest or overly cautious exercises.

      2019: The Haute Autorité de Santé (HAS) and the Assurance Maladie issue official recommendations: "The right treatment is movement."

      Passive therapies used in isolation are declared ineffective on the course of low back pain.

      The physiological benefit of movement

      Contrary to popular belief, activities such as running improve disc physiology.

      The alternation of pressure and decompression (around 1 Hz) during running hydrates the intervertebral discs. Statistically, long-distance runners suffer less back pain than other athletes.

      --------------------------------------------------------------------------------

      2. Legal and Ethical Framework: Science as an Obligation

      The distinction between physiotherapy and alternative therapies rests on a historic legal foundation.

      The Mercier Ruling (1936)

      This landmark decision by the Cour de cassation established three major principles:

      1. The care contract: A contractual relationship exists between caregiver and patient.

      2. The obligation of means: The caregiver has no obligation of result (cure), but must deploy all necessary means.

      3. The established findings of science: The means chosen must conform to current scientific knowledge.

      Evolving practices in physiotherapy

      The code of ethics requires physiotherapists to abandon invalidated practices. For example:

      Bronchiolitis: Pediatric respiratory physiotherapy has not been recommended for otherwise healthy infants since 2019, as the benefit is judged insufficient relative to how traumatic the procedure is.

      Massage: Its use is now limited (scars, edema), and it is no longer recommended as a first-line treatment for back pain.

      --------------------------------------------------------------------------------

      3. Critical Analysis of Manual Therapies

      The limits of palpation and manual diagnosis

      Science shows that practitioners' sense of touch is prone to illusion.

      Lack of reliability: Two examiners rarely agree on the texture (hard/soft) or "blocked" character of a tissue.

      Anatomical precision: When palpating an obvious structure beneath the skin, the average error is 5 cm.

      Mechanical impossibility: It is impossible to mobilize a single vertebra in isolation; a manipulation affects at least three.

      The "gate control" effect and placebo

      Manual therapies produce a real but transient analgesic effect:

      Sensory distraction: The nervous system prioritizes touch, warmth, or cold sensations over pain. This is a short-term effect (a few minutes to a few hours).

      Contextual effect: The ritual of the consultation, the practitioner's attention, and natural regression toward the mean (pain often subsides on its own around the time one seeks care) reinforce the illusion of efficacy.

      --------------------------------------------------------------------------------

      4. History and Foundations of Manual Pseudomedicines

      Therapies such as osteopathy and chiropractic rest on vitalism, a 19th-century philosophy positing the existence of a non-physical "vital force."

      | Discipline | Origin | Ideological Foundations | Current Status in Europe |
      | --- | --- | --- | --- |
      | Osteopathy | A.T. Still (1874) | "The body is God's pharmacy." Blood flow equated with health. | The "purist" branch (Littlejohn) remains prominent, focused on craniosacral and fluidic techniques. |
      | Chiropractic | D.D. Palmer (1895) | The central nervous system as master of the body. Use of high-velocity manipulations (cracking). | Practice has stayed close to the original concepts, with a strong presence on social media. |
      | Etiopathy | C. Trédaniel (France) | Seeks the origin of pathology in joint adjustment. | Very similar to osteopathy, with no real scientific distinction. |

      Note on the American exception: In the United States, osteopathy became medicalized following the Flexner Report (1910). American "DOs" are general practitioners who perform almost no manual therapy, unlike the European branch, which has remained mystical.

      --------------------------------------------------------------------------------

      5. Risks and Societal Impacts

      Safety and loss of chance

      Serious risks: Cervical manipulations can cause vertebral artery dissections, leading to stroke or locked-in syndrome (total paralysis with preserved consciousness).

      Diagnostic errors: Going directly to these therapies without medical advice can delay the management of serious pathologies (e.g., undetected fractures).

      Interference with the medical message

      The "medical veneer" used by these disciplines (words such as "diagnosis," "anamnesis," "consultation") creates confusion among patients:

      Harm to health literacy: By entrenching erroneous concepts (displaced vertebra, shorter leg), practitioners create dependence and a fear of moving (kinesiophobia).

      Social factors: The main factor in persistent low back pain is not mechanical, but tied to job dissatisfaction or societal problems. By focusing on "crack and go," manual therapies ignore this complexity.

      Conclusion

      While manual therapies offer temporary relief and relational comfort, they are not a fundamental solution to back pain.

      Science advocates an approach centered on therapeutic education, motivation management, and, imperatively, active movement by the patient.

    1. Reviewer #3 (Public review):

      Summary:

      The authors of this report wish to show that distinct populations of meningeal macrophages respond to cortical spreading depolarization (CSD) via unique calcium activity patterns depending on their location in the meningeal sub-compartments. Perivascular macrophages display calcium signaling properties that are sometimes in opposition to non-perivascular macrophages. Many of the meningeal macrophages also displayed synchronous activity at variable distances from one another. Other macrophages were found to display calcium signals in response to dural vasomotion. CSD could induce variable calcium responses in both perivascular and non-perivascular macrophages in the meninges, in part due to RAMP1-dependent effects. Results will inform future research on the calcium responses displayed by macrophages in the meninges under both normal and pathological conditions.

      Strengths:

      Sophisticated in vivo imaging of meningeal immune cells is employed in the study, which has not been performed previously. A detailed analysis of the distinct calcium dynamics in various subtypes of meningeal macrophages is provided. Functional relevance of the responses is also noted in relation to CSD events.

      Weaknesses:

      The specificity of the methods used to target both meningeal macrophages and RAMP1 is limited. Additional discussion points on the functional relevance of the two subtypes of meningeal macrophages and their calcium responses are warranted. A section on potential pitfalls should be included.

    2. Author response:

      Public Reviews:

      Reviewer #1 (Public review): 

      Strengths:

      (1) The use of chronic two-photon Ca<sup>2+</sup> imaging in awake, behaving mice represents a major technical strength, minimizing confounds introduced by anesthesia. The development of a Pf4Cre:GCaMP6s reporter line, combined with high-resolution intravital imaging, enables long-term and subcellular analysis of macrophage Ca<sup>2+</sup> dynamics in the meninges.

      (2) The comparison between perivascular and non-perivascular macrophages reveals clear niche-dependent differences in Ca<sup>2+</sup> signaling properties. The identification of macrophage Ca<sup>2+</sup> activity temporally coupled to dural vasomotion is particularly intriguing and highlights a potential macrophage-vascular functional unit in the dura.

      (3) By linking macrophage Ca<sup>2+</sup> responses to CSD and implicating CGRP/RAMP1 signaling in a subset of these responses, the study connects meningeal macrophage activity to clinically relevant neuroimmune pathways involved in migraine and other neurological disorders.

      Thank you for recognizing the strengths in our work.

      Weaknesses: 

      (1) The manuscript relies heavily on Pf4Cre-driven GCaMP6s expression to selectively image meningeal macrophages. Although prior studies are cited to support Pf4 specificity, Pf4 is not an exclusively macrophage-restricted marker, and developmental recombination cannot be excluded. The authors should provide direct validation of reporter specificity in the adult meninges (e.g., co-labeling with established macrophage markers and exclusion of other Pf4-expressing lineages). At minimum, the limitations of Pf4Cre-based labeling should be discussed more explicitly, particularly regarding how off-target expression might affect Ca<sup>2+</sup> signal interpretation.

      We acknowledge that PF4 is not an exclusively macrophage-restricted marker. Yet, among meningeal immunocytes, it is almost exclusively expressed in macrophages (1, 2). Furthermore, in the adult mouse meninges, Pf4<sup>Cre</sup>-based reporter lines label nearly all dural and leptomeningeal macrophages and almost no other cells (3, 4). This Cre line has also been used to target border-associated macrophages (2, 4). Moreover, a recent study suggests that the bacterial artificial chromosome used to generate the Pf4<sup>Cre</sup> line does not affect meningeal macrophage activity (4). Nonetheless, while we already discussed PF4 expression in meningeal megakaryocytes, in a revised version, we plan to discuss the possibility that a very small population of other meningeal immune cells may also be labeled.

      (2) The manuscript offers an extensive characterization of Ca<sup>2+</sup> event features (frequency spectra, propagation patterns, synchrony), but the biological significance of these signals is largely speculative. There is no direct link established between Ca<sup>2+</sup> activity patterns and macrophage function (e.g., activation state, motility, cytokine release, or interaction with other meningeal components). The discussion frequently implies functional specialization based on Ca<sup>2+</sup> dynamics without experimental validation. To strengthen the conceptual impact, a clearer framing of the study as a foundational descriptive resource, rather than a functional dissection, would improve alignment between data and conclusions.

      In our discussion, we indicated that “the exact link between the distinct Ca<sup>2+</sup> signal properties of meningeal macrophage subsets observed herein and their homeostatic function remains to be established”. In a revised version, we plan to further acknowledge that this is primarily a descriptive study that provides a foundational landscape of Ca<sup>2+</sup> dynamics in meningeal macrophages.

      (3) The GLM analysis revealing coupling between dural perivascular macrophage Ca<sup>2+</sup> activity and vasomotion is technically sophisticated and intriguing. However, the directionality of this relationship remains unresolved. The current data do not distinguish whether macrophages actively regulate vasomotion, respond to mechanical or hemodynamic changes, or are co-modulated by neural activity. Statements suggesting that macrophages may "mediate" vasomotion are therefore premature. The authors should reframe these conclusions more cautiously, emphasizing correlation rather than causation, and expand the discussion to explicitly outline experimental strategies required to establish causality (e.g., macrophage-specific Ca<sup>2+</sup> manipulation). 

      In the results section, we indicated that our data suggest that dural perivascular macrophages are functionally coupled to locomotion-driven dural vasomotion, either responding to it or mediating it. Furthermore, in our discussion, we discussed the possibilities that 1) macrophages sense vascular-related mechanical changes and 2) macrophage Ca<sup>2+</sup> signaling may regulate dural vasomotion. Moreover, we explicitly state that studying causality will require an experimental approach that has yet to be developed, enabling selective manipulation of dural perivascular macrophages.

      (4) The authors conclude that synchronous Ca<sup>2+</sup> events across macrophages are driven by extrinsic signals rather than intercellular communication, based primarily on distance-time analyses. This conclusion is not sufficiently supported, as spatial independence alone does not exclude paracrine signaling, vascular cues, or network-level coordination. No perturbation experiments are presented to test alternative mechanisms. The authors can either provide additional experimental evidence or rephrase the conclusion to acknowledge that the source of synchrony remains unresolved. 

      Thank you for this suggestion. In the revision, we will indicate that the source of synchrony remains unresolved.

      (5) A major and potentially important finding is that the dominant macrophage response to CSD is a persistent decrease in Ca<sup>2+</sup> activity, which is independent of CGRP/RAMP1 signaling. However, this phenomenon is not mechanistically explored. It remains unclear whether Ca<sup>2+</sup> suppression reflects macrophage inhibition, altered viability, homeostatic resetting, or an anti-inflammatory program. Minimally, the discussion should be more deeply engaged with possible interpretations and implications of this finding. 

      While we propose that the decrease in macrophage calcium signaling following CSD could indicate that a hyperexcitable cortex dampens meningeal immunity, in the revised version, we plan to elaborate on the possible implications of this finding.

      (6) The pharmacological blockade of RAMP1 supports a role for CGRP signaling in persistent Ca<sup>2+</sup> increases after CSD, but the experiments are based on a relatively small number of cells and animals. The limited sample size constrains confidence in the generality of the conclusions. Pharmacological inhibition alone does not establish cell-autonomous effects in macrophages. The authors should acknowledge these limitations more explicitly and avoid overextension of the conclusions. 

      We plan to acknowledge these limitations.

      Reviewer #2 (Public review): 

      Using chronic intravital two-photon imaging of calcium dynamics in meningeal macrophages in Pf4Cre:TIGRE2.0-GCaMP6 mice, the study identified heterogeneous features of perivascular and non-perivascular meningeal macrophages at steady state and in response to cortical spreading depolarization (CSD). Analyses of calcium dynamics and blood vessels revealed a subpopulation of perivascular meningeal macrophages whose activity is coupled to behaviorally driven diameter fluctuations of their associated vessels. The analyses also investigated synchrony between different macrophage populations and revealed a role for CGRP/RAMP1 signaling in the CSD-induced increase, but not the decrease, in calcium transients.

      This is a timely study at both the technical and conceptual levels, examining calcium dynamics of meningeal macrophages in vivo. The conclusions are well supported by the findings and will provide an important foundation for future research on immune cell dynamics within the meninges in vivo. The paper is well written and clearly presented.

      Thank you.

      I have only minor comments. 

      (1) Please indicate the formal definition of perivascular versus non-perivascular macrophages in terms of distance from the blood vessel. This information is not provided in the main text or the Methods. In addition, please explain how the meningeal vasculature was imaged in the main text. 

      We did not measure the exact distance of the perivascular macrophages from the blood vessels, but defined them as such based on previous data showing that these cells reside along the abluminal surface and maintain tight interactions with mural cells (5). We plan to provide this information in the revised manuscript.

      (2) Similarly, the method used to induce acute CSD (pin prick) is not described in the main text and is only mentioned in the figure legends and Methods. Additional background on the neurobiology of acute CSD, as well as the resulting brain activity and neuroinflammatory responses, could be helpful.

      We plan to add the method for inducing CSD (i.e., a pinprick in the frontal cortex) to the Results section and provide more background in the Introduction section.

      Reviewer #3 (Public review):

      Strengths: 

      Sophisticated in vivo imaging of meningeal immune cells is employed in the study, which has not been performed previously. A detailed analysis of the distinct calcium dynamics in various subtypes of meningeal macrophages is provided. Functional relevance of the responses is also noted in relation to CSD events.

      Thank you for recognizing the strengths of our paper.

      Weaknesses:

      (1) The specificity of the methods used to target both meningeal macrophages and RAMP1 is limited. Additional discussion points on the functional relevance of the two subtypes of meningeal macrophages and their calcium responses are warranted. A section on potential pitfalls should be included. 

      We plan to address these issues in the revision.

      References

      (1) H. Van Hove et al., A single-cell atlas of mouse brain macrophages reveals unique transcriptional identities shaped by ontogeny and tissue environment. Nat Neurosci 22, 1021-1035 (2019).

      (2) F. A. Pinho-Ribeiro et al., Bacteria hijack a meningeal neuroimmune axis to facilitate brain invasion. Nature 615, 472-481 (2023).

      (3) G. L. McKinsey et al., A new genetic strategy for targeting microglia in development and disease. eLife 9 (2020).

      (4) H. J. Barr et al., The circadian clock regulates scavenging of fluid-borne substrates by brain border-associated macrophages. bioRxiv (2025).

      (5) H. Min et al., Mural cells interact with macrophages in the dura mater to regulate CNS immune surveillance. J Exp Med 221 (2024).

    1. Reviewer #1 (Public review):

      Summary:

      In this study, the authors' aim was to determine whether hepatic palmitoylation is a physiologically relevant regulator of systemic metabolism. The data demonstrate that loss of DHHC7 in hepatocytes disrupts Gαi palmitoylation, enhances cAMP-PKA-CREB signaling, and drives transcriptional upregulation and secretion of Prg4. The KO mice display increased body weight, fat mass, and plasma cholesterol, but at 12 weeks on HFD, do not exhibit insulin resistance. The potential mechanism underlying the metabolic phenotype was examined by assessing adipocyte signaling and by exploring whether Prg4 acts through GPR146. Through this pathway, the authors intend to link DHHC7-dependent palmitoylation to the regulation of hepatokines that exert systemic metabolic effects.

      Strengths:

      (1) Hepatic palmitoylation in systemic metabolic regulation is largely unexplored. The authors demonstrate the role of DHHC7 in vivo using a successful liver-specific knockout mouse model that causes HFD-dependent obesity without insulin resistance.

      (2) Several studies were performed on chow and HFD, as well as male and female mice.

      (3) Plasma proteomics identified Prg4 as a circulating factor elevated in KO mice. Prg4 overexpression phenocopied the KO mice.

      (4) There is solid mechanistic data supporting the hypothesis that hepatic DHHC7 loss selectively increases Prg4 secretion as a hepatokine.

      (5) There is convincing evidence for the DHHC7 mechanism in liver: DHHC7 controls cAMP-PKA-CREB via Gαi palmitoylation. The authors recognize that the palmitoylation change is causative rather than merely correlative, and this needs to be more fully explored in the future.

      (6) Strong in vitro data support that Prg4 acts through adipocyte GPR146 via its SMB domain.

      Weaknesses:

      (1) The assessment of liver and adipose tissue responses to DHHC7 loss is insufficient to support claims that it alters systemic lipolysis. In this new mouse model, liver histology is necessary, especially given the cholesterol increase in the KO. As this is a newly established mouse line, common assessments of the liver during HFD feeding would be important for interpreting the phenotype.

      (2) The data show DHHC7 loss causes adipose tissue dysfunction and alterations in lipid metabolism. Beyond that, I suggest not stating more regarding the phenotype of the DHHC7 mice for this work. A thorough analysis would be needed to determine which factor drives the obesity and changes in energy balance in the mice. For example, the KO mice had lower oxygen consumption (but no change in CO2 production, which is also usually similarly altered), suggesting a CNS component could drive obesity. However, since the data are not normalized for lean mass and there is no information about locomotor activity, this analysis is incomplete. RER may be informative if available. A broad conservative description of the KO phenotype would be more accurate since Prg4 has many paracrine targets and likely has autocrine signaling in the liver.

      (3) Most references to lipolysis or lipolysis flux systemically would be inaccurate. To suggest a suppression of lipolysis, serum NEFA would need to be measured, and in vivo or in vitro lipolysis assays performed to test the effect of DHHC7 loss or the specificity of Prg4 action on adipocytes in vivo. To demonstrate adipose tissue dysfunction, analysis of lipogenesis markers, canonical markers for insulin sensitivity, and mitochondrial dysfunction should be performed/measured.

      (4) Line 179: The experiment was performed in brown adipocytes to show that Prg4 does not affect p-CREB (Figure S8), under the heading: "DHHC7 controls hepatic PKA-CREB activity through Gαi palmitoylation to regulate Prg4 transcription." Unless repeated using liver lysate, the conclusions stated in the text throughout the paper should be revised.

      (5) It appears that the serum and liver proteomics were assessed only for factors that increased in KO mice. Were proteins that were significantly decreased also analyzed?

      (6) The beige adipocyte culture method is unclear. The methods do not describe the fat pad used, and the protocol suggests the cells would be differentiated into mature white adipocytes. If they are beige cells, a reference for the method, gene expression, and cell images could support that claim.

      (7) The use of tamoxifen can confound adipocyte studies, as it increases beigeing and weight gain even after a brief initiation period. Both groups were treated with Tam, but another way to induce Cre would be ideal.

      (8) Evidence for the lack of the glucose phenotype is incomplete. One reason could be due to the IP route of glucose administration, which has a large impact on glucose handling during a GTT. To confirm the absence of a glucose tolerance phenotype, an OGTT should be performed, as it is more physiological. In addition, the mice should be fed for 16 weeks. Prg4 affects immune cells, changing how adipose tissue expands, and 12 weeks of HFD feeding is often not long enough to see the effects of adipose tissue inflammation spilling over into the system.

      (9) There may be liver-adipose tissue crosstalk in KO mice, but this was not fully assessed in this study and would be difficult to determine in any setting, given the diverse cell types that are targets of Prg4. The crosstalk claim is unnecessary to convey the basic premises; there is the DHHC7 mechanism/phenotype and the Prg4 mechanism/phenotype, and while there is no direct Prg4 adipose mechanism, the paper can be successfully reframed.

      (10) Although DHHC7 loss on the chow diet did not result in a phenotype, did Prg4 increase in the KO mice on chow? This would determine whether either i) the expression of Prg4 is dependent on HFD/obesity, or ii) circulating Prg4 has effects only in an HFD condition. The receptors may also change on HFD, especially in adipocytes.

      Impact:

      This work would significantly contribute to the study of liver metabolism, provided it includes data describing the liver. The role of Prg4 in adipocytes and other cell types is of substantial value to the field of metabolism. By reframing the paper and conducting some key experiments, its quality and impact can be increased.

    2. Reviewer #2 (Public review):

      In the current report, Sun and Colleagues sought to determine the liver-specific role that DHHC7, a DHHC palmitoyltransferase protein, plays in regulating whole-body energy balance and hepatic crosstalk with adipose tissues. The authors generated an inducible, liver-specific DHHC7 knockout mouse to determine how altered palmitoylation in hepatocytes alters hepatokine production/secretion, and in turn, systemic metabolism. The ablation of DHHC7 was found to alter the production of proteoglycan 4 (Prg4), a hepatokine previously linked to metabolic regulation. The authors propose that the change in Prg4 production is mediated by the loss of Gαi palmitoylation, due to DHHC7 ablation, thereby augmenting cAMP-PKA-CREB signaling in hepatocytes, which alleviates the 'brake' on Prg4 production. The authors further propose that Prg4 overexpression leads to excessive binding to GPR146 on adipocytes, which in turn suppresses PKA-mediated HSL activation, promoting impairments in lipolysis, leading to obesity. The report is interesting and generally well-written, but it appears to have some clear gaps in additional data that would aid in interpretation. The addition of confirmatory culture studies would be incredibly helpful for testing the hypotheses being explored. My comments, concerns, and/or suggestions are outlined below in no particular order.

      (1) Figures: All data should be presented in dot-boxplot format so the reader knows how many samples were analyzed for each assay and group. n=3 for some assays/experiments is incredibly low, particularly when considering the heterogeneity in responsiveness to HFD, food intake, etc.

      (2) Figure 1E-F: It is unclear when the food intake measure was performed. Mice can alter their feeding behavior based on a myriad of environmental and biological cues. It would also be interesting to show food intake data normalized to body mass over time. Mice can counterregulate anorexigenic cues by altering neuropeptide production over time. It is not clear if this is occurring in these mice, but the timing of measuring food intake is important. Additionally, the VO2 measure appears to be presented as being normalized to total body mass, when in fact, it would probably be more accurate to normalize this to lean body mass. Normalizing to total body mass provides a denominator effect due to excessive adiposity, but white fat is not as metabolically active as other high-glucose-consuming tissues. If my memory serves me right, several reports have discussed appropriate normalizations in circumstances such as this.

      (3) Figure 1J-N: It is not all that surprising that fasting glucose and/or TGs were found to be similar between groups. It is well-established that mice have an incredible ability to become hyperinsulinemic in an effort to maintain euglycemia and lipid metabolism dynamics. A few relatively easy assays can be performed to glean better insights into the metabolic status of the authors' model. First, fasting insulin concentrations will be incredibly helpful. Secondly, if the authors want to tease out which adipose depot is most adversely affected by ablation, they could take an additional set of CON and KO mice, fast them for 5-6 hours, provide a bolus injection of insulin (similar to that provided during an insulin tolerance test), and then quickly harvest the animals ~15 minutes after insulin injections, followed by evaluating AKT phosphorylation. This will really tell them if these tissues have impairments in insulin signaling. The gold-standard approach would be to perform a hyperinsulinemic-euglycemic clamp in the CON and KO mice. I now see GTT and ITT data, but the aforementioned assays could help provide insight.

      (4) Figure 3A: This looks overexposed to me.

      (5) Figures 3-4: It appears that several of these assays could be complemented with culture-based models, which would almost certainly be cleaner. The conditioned media could then be used from hepatocyte cultures to treat differentiated adipocytes.

      (6) Figure 4: It is unclear how to interpret the phospho-HSL data because the fasting state can affect this readout. It needs to be made clear how the harvest was done. Moreover, insulin and glucagon were never measured, and these hormones have a significant influence over HSL activity. I suspect the KO mice have established hyperinsulinemia, which would likely affect HSL activity. This provides an example of why performing some of these experiments in a dish would make for cleaner outcomes that are easier to interpret.

    3. Reviewer #3 (Public review):

      Summary:

      In the current manuscript, Sun et al. aimed to determine the metabolic function of hepatocyte DHHC7, one of the key enzymes in protein palmitoylation. They generated inducible liver-specific Dhhc7 knockout mice and discovered that Dhhc7-LKO mice are more prone to gain weight and develop adipose expansion and obesity. Via unbiased proteomic analysis, they identified PRG4 as one of the top secreted factors in the liver of Dhhc7-LKO mice. Hepatic overexpression of PRG4 recapitulates the obesity phenotype observed in Dhhc7-LKO mice. At the mechanistic level, PRG4, once secreted from the liver, can bind to GPR146 on adipocytes and inhibit PKA-HSL signaling and lipolysis. Taken together, their findings suggest a novel pathway by which the liver communicates with adipose tissue and impacts systemic metabolism.

      Strengths:

      (1) The systemic metabolic homeostasis depends on coordination among metabolically active tissues. Thus, active communication between the liver and adipose tissue when facing nutritional challenges (such as high-fat diet feeding) is crucial for achieving metabolic health. The concept that the liver can communicate with adipose tissue and impact the lipolysis process via secreted hepatokines is quite significant but remains poorly understood.

      (2) Hepatocyte Dhhc7 knockout mice developed a significant obesity phenotype, which is associated with adipose expansion.

      (3) Unbiased proteomic analysis identified PRG4 as one of the top secreted factors in the liver of Dhh7-LKO mice. Hepatic overexpression of PRG4 recapitulates the obesity phenotype observed in Dhh7-LKO mice.

      (4) In vitro cell-based assay showed that PRG4 can bind to adipocyte GPR146, inhibit PKA-mediated HSL phosphorylation, and subsequently, the lipolysis process.

      Weaknesses:

      (1) Lack of a causal-effect study to generate evidence directly linking hepatocyte DHH7 and PRG4 in driving adipose expansion and obesity upon HFD feeding.

      (2) Lack of direct evidence to support that PRG4 inhibits adipocyte lipolysis via GPR146. A functional assay demonstrating adipocyte lipolysis is required.

      (3) The conclusion is largely based on the correlation evidence.

    4. Author response:

      Public reviews:

      Reviewer #1 (Public review):

      Weaknesses:

      (1) The assessment of liver and adipose tissue responses to DHH7 loss is insufficient to support claims that it alters systemic lipolysis. In this new mouse model, liver histology is necessary, especially given the cholesterol increase in the KO. As this is a newly established mouse line, common assessments of the liver during HFD feeding would be important for interpreting the phenotype.

      We will add liver histology data in the revised version.

      (2) The data show DHH7 loss causes adipose tissue dysfunction and alterations in lipid metabolism. Beyond that, I suggest not stating more regarding the phenotype of the DHH7 mice for this work. A thorough analysis would be needed to determine which factor drives the obesity and changes in energy balance in the mice. For example, the KO mice had lower oxygen consumption (but no change in CO2 production, which is also usually similarly altered), suggesting a CNS component could drive obesity. However, since the data are not normalized for lean mass and there is no information about locomotor activity, this analysis is incomplete. RER may be informative if available. A broad conservative description of the KO phenotype would be more accurate since Pgr4 has many paracrine targets and likely has autocrine signaling in the liver.

      We will add CO2 production, locomotor activity, and RER data in the revised version.

      (3) Most references to lipolysis or lipolysis flux systemically would be inaccurate. To suggest a suppression of lipolysis, serum NEFA would need to be measured, and in vivo or in vitro lipolysis assays performed to test the effect of DHH7 loss or the specificity of PGR4 action on adipocytes in vivo. To demonstrate adipose tissue dysfunction, analysis of lipogenesis markers, canonical markers for insulin sensitivity, and mitochondrial dysfunction should be performed/measured.

      We will measure serum NEFA to test the effect of DHHC7 loss. We will also analyze lipogenesis markers, canonical markers of insulin sensitivity, and mitochondrial dysfunction.

      (4) Line 179: The experiment was performed in brown adipocytes to show that Prg4 does not affect p-CREB Figure S8 under the heading: "DHHC7 controls hepatic PKA-CREB activity through Gαi palmitoylation to regulate Prg4 transcription." Unless repeated using liver lysate, the conclusions stated in the text throughout the paper should be revised.

      Figure S8 demonstrates that Prg4 has no impact on forskolin-induced CREB phosphorylation at Ser133, providing evidence that Prg4 acts upstream of adenylyl cyclase. We will revise the description.

      (5) It appears that the serum and liver proteomics were only assessed for factors that increased in KO mice? Were proteins that were significantly decreased analyzed?

      We are analyzing the significantly decreased proteins in a follow-up project.

      (6) The beige adipocyte culture method is unclear. The methods do not describe the fat pad used, and the protocol suggests the cells would be differentiated into mature white adipocytes. If they are beige cells, a reference for the method, gene expression, and cell images could support that claim.

      We will add a reference for the method, gene expression data, and cell images.

      (7) The use of tamoxifen can confound adipocyte studies, as it increases beigeing and weight gain even after a brief initiation period. Both groups were treated with Tam, but another way to induce Cre would be ideal.

      We will use a doxycycline-inducible system in the future.

      (8) Evidence for the lack of the glucose phenotype is incomplete. One reason could be due to the IP route of glucose administration, which has a large impact on glucose handling during a GTT. To confirm the absence of a glucose tolerance phenotype, an OGTT should be performed, as it is more physiological. In addition, the mice should be fed for 16 weeks. Prg4 affects immune cells, changing how adipose tissue expands, and 12 weeks of HFD feeding is often not long enough to see the effects of adipose tissue inflammation spilling over into the system.

      We will perform the OGTT and feed the mice for 16 weeks in the future.

      (9) There may be liver-adipose tissue crosstalk in KO mice, but this was not fully assessed in this study and would be difficult to determine in any setting, given the diverse cell types that are targets of Pdg4. The crosstalk claim is unnecessary to share the basic premises; there is the DHH7 mechanism/phenotype and the Pgr4 mechanism/phenotype, and while there is no Pgr4 adipose direct mechanism, the paper can be successfully reframed.

      We will reframe the paper.

      (10) Although the DHH7 loss on the chow diet did not result in a phenotype, did the Pgr4 increase in the KO mice on chow? This would determine whether either i) the expression of Pgr4 is dependent on HFD/obesity, or ii) circulating Pgr4 has effects only in an HFD condition. The receptors may also change on HFD, especially in adipocytes.

      We will test the Prg4 in the KO mice on chow diet.

      Reviewer #2 (Public review):

      (1) Figures: All data should be presented in dot-boxplot format so the reader knows how many samples were analyzed for each assay and group. n=3 for some assays/experiments is incredibly low, particularly when considering the heterogeneity in responsiveness to HFD, food intake, etc.

      We will present the data in dot-boxplot format.

      (2) Figure 1E-F: It is unclear when the food intake measure was performed. Mice can alter their feeding behavior based on a myriad of environmental and biological cues. It would also be interesting to show food intake data normalized to body mass over time. Mice can counterregulate anorexigenic cues by altering neuropeptide production over time. It is not clear if this is occurring in these mice, but the timing of measuring food intake is important. Additionally, the VO2 measure appears to be presented as being normalized to total body mass, when in fact, it would probably be more accurate to normalize this to lean body mass. Normalizing to total body mass provides a denominator effect due to excessive adiposity, but white fat is not as metabolically active as other high-glucose-consuming tissues. If my memory serves me right, several reports have discussed appropriate normalizations in circumstances such as this.

      We will reassess the data to determine the most accurate normalization.

      (3) Figure 1J-N: It is not all that surprising that fasting glucose and/or TGs were found to be similar between groups. It is well-established that mice have an incredible ability to become hyperinsulinemic in an effort to maintain euglycemia and lipid metabolism dynamics. A few relatively easy assays can be performed to glean better insights into the metabolic status of the authors' model. First, fasting insulin concentrations will be incredibly helpful. Secondly, if the authors want to tease out which adipose depot is most adversely affected by ablation, they could take an additional set of CON and KO mice, fast them for 5-6 hours, provide a bolus injection of insulin (similar to that provided during an insulin tolerance test), and then quickly harvest the animals ~15 minutes after insulin injections; followed by evaluating AKT phosphorylation. This will really tell them if these issues have impairments in insulin signaling. The gold-standard approach would be to perform a hyperinsulinemic-euglyemic clamp in the CON and KO mice. I now see GTT and ITT data, but the aforementioned assays could help provide insight.

      We have data evaluating AKT phosphorylation and will add them in the revised version.

      (4) Figure 3A: This looks overexposed to me.

      We will replace it with a shorter-exposure image.

      (5) Figures 3-4: It appears that several of these assays could be complemented with culture-based models, which would almost certainly be cleaner. The conditioned media could then be used from hepatocyte cultures to treat differentiated adipocytes.

      We will perform the cell culture experiments for Figures 3-4.

      (6) Figure 4: It is unclear how to interpret the phospho-HSL data because the fasting state can affect this readout. It needs to be made clear how the harvest was done. Moreover, insulin and glucagon were never measured, and these hormones have a significant influence over HSL activity. I suspect the KO mice have established hyperinsulinemia, which would likely affect HSL activity. This provides an example of why performing some of these experiments in a dish would make for cleaner outcomes that are easier to interpret.

      We will perform some experiments in cell culture dish.

      Reviewer #3 (Public review):

      Weaknesses:

      (1) Lack of a causal-effect study to generate evidence directly linking hepatocyte DHH7 and PRG4 in driving adipose expansion and obesity upon HFD feeding.

      We will perform the causal-effect study to test this hypothesis.

      (2) Lack of direct evidence to support that PRG4 inhibits adipocyte lipolysis via GPR146. A functional assay demonstrating adipocyte lipolysis is required.

      We will add the direct evidence in the revised version.

      (3) The conclusion is largely based on the correlation evidence.

      We will perform experiments to strengthen the conclusion based on a causal-effect study.

    1. room-temperature

      The greater the width of the band gap, the more suitable a semiconductor material is for operation at room temperature. Reducing a detector’s physical size also improves its performance at room temperature because the detector will contain fewer electrons. Fewer electrons means less thermionic noise.
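      The dependence of thermal carrier generation on band-gap width and temperature follows the standard intrinsic-carrier relation n_i ∝ T^(3/2) · exp(-Eg / (2kT)). A minimal sketch of that relation is given below; the material band-gap values are textbook room-temperature figures and the comparison is illustrative only:

      ```python
      import math

      K_B = 8.617e-5  # Boltzmann constant in eV/K

      def thermal_carrier_factor(band_gap_ev: float, temperature_k: float) -> float:
          """Relative intrinsic carrier generation: proportional to
          T^(3/2) * exp(-Eg / (2 k T)); absolute prefactors are omitted."""
          return temperature_k ** 1.5 * math.exp(-band_gap_ev / (2 * K_B * temperature_k))

      # Compare a narrow-gap material (germanium, ~0.67 eV) with a
      # wide-gap material (CdZnTe, ~1.57 eV) at room temperature (300 K).
      ge = thermal_carrier_factor(0.67, 300)
      czt = thermal_carrier_factor(1.57, 300)

      # The wide-gap material produces orders of magnitude fewer thermal
      # carriers, which is why it can operate uncooled while germanium
      # detectors must be cooled.
      print(f"Ge / CdZnTe thermal carrier ratio at 300 K: {ge / czt:.2e}")
      ```

      The same function also shows why cooling helps: lowering T makes the exponential term collapse, so thermionic noise drops sharply even for a fixed band gap.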

    2. cooling the detector crystal

      Cooling reduces thermionic noise (arising when electrons in the semiconductor have enough thermal energy to jump to the conduction band), which degrades the detector's energy resolution.

    3. the intrinsic region

      The intrinsic layer has the property that, in thermal equilibrium, the number of conduction-band electrons per unit volume equals the number of valence-band holes. The intrinsic region also provides a larger volume in which photons can produce electron-hole pairs, increasing quantum efficiency.

    1. Analysis of Conspiracist Rhetoric: Mechanisms, Discourse, and the Allegory of the "Sheep"

      This briefing analyzes the research and reflections of Loïc Massaia, a science communicator for the Utopia project, on the rhetoric used in conspiracist circles.

      It details the argumentative structures, the psychological functions of the discourse, and the specific use of the insult "sheep" as a tool of social distinction and of shutting down debate.

      Summary

      The analysis of conspiracist rhetoric reveals a communication system aimed less at establishing truth than at asserting dominance over an audience.

      This rhetoric is characterized by a circular (tautological) structure and a systematic reliance on essentialism.

      The use of terms such as "sheep" serves a threefold function: an ad personam attack that avoids substantive debate, an accusation of passive complicity, and a mechanism of distinction that bolsters the speaker's self-esteem.

      By freeing itself from the rules of "healthy debate", conspiracist discourse establishes itself as a closed system in which the conclusion (the existence of a plot) is already contained in the premises.

      -------------------------------------------------------------------------------

      1. Definition and Categorization of Conspiracist Rhetoric

      The document proposes defining rhetoric as the set of means deployed in a discourse to convince, impress, manipulate, or gain ascendancy over others.

      A complementary definition describes it as the "negotiation of the difference between individuals on a given question".

      Within conspiracism, recurring expressions can be classified along four main dimensions:

      | Dimension | Examples of typical phrases | Intended objective |
      | --- | --- | --- |
      | Accusatory | « Journalopes », « Merdias » (French slurs for journalists and media), "They're not telling you everything" | Discredit official information sources. |
      | Incitatory | "Do your own research", "Wake up" | Push the interlocutor toward the same conclusion through an illusion of autonomy. |
      | Denial of chance | "Coincidence? I don't think so", "Everything is connected" | Reject contingency in favor of a hidden design. |
      | Overconfidence and distinction | "All sheep", "We were right" | Place oneself above the ignorant "masses". |

      --------------------------------------------------------------------------------

      2. Structural Analysis of the Argumentation

      The Toulmin Model

      To assess the soundness of an argument, the document draws on the Toulmin model, which identifies the components of an optimal argumentation:

      1. Data: the basic information.

      2. Claim: what one seeks to demonstrate.

      3. Warrant: the logical link between data and claim.

      4. Backing: what makes the warrant sound and accepted.

      5. Rebuttal: the acknowledgment of limits and conditions that could contradict the argument.

      The failure of conspiracist discourse

      The analysis shows that conspiracist discourse generally omits the rebuttal.

      For example, the argument that the government is a cult because it fights cultic abuses (in order to silence dissent) collapses as soon as other factors distinguishing a state from a cult are introduced.

      Circularity and Essentialism

      Conspiracist discourse is described as a closed system, or a tautology.

      It rests on essentialization: the deep "nature" of an entity (the government, the elites) is decreed to be malevolent.

      From then on, every action by that entity, even one that appears positive, is interpreted as further proof of its malevolence.

      The plot must necessarily exist at the outset in order to explain the facts that are then used to prove the plot's existence.

      --------------------------------------------------------------------------------

      3. The "Sheep" Allegory: Origins and Uses

      The expression "all sheep" is an animal idiom found in several languages (French, Italian, English, Polish).

      Literary Origin

      The image of the blindly following sheep goes back notably to Rabelais (the episode of Panurge's sheep), in which the animals jump into the water and die simply because the first one jumped.

      This underscores a "natural" or essentialist trait of the animal: the need to follow.

      Functions in conspiracist discourse

      1. Identifying the conspirator: if there are sheep, there is necessarily a "shepherd" or "master" (the conspirator).

      2. The accusation of complicity: non-conspiracists are deemed not only foolish but also complicit through their passivity.

      3. The need for distinction: declaring oneself a "non-sheep" allows one to stand apart from the masses. According to the work of Anthony Lantian (2015), adherence to conspiracy theories may be a way of raising an initially low self-esteem by feeling oneself the holder of superior knowledge.

      --------------------------------------------------------------------------------

      4. Rhetoric as a Rupture of Debate

      The use of the insult "sheep" qualifies as an ad personam argument.

      Theorized by Schopenhauer, this tactic consists of attacking the individual rather than their arguments in order to end a discussion one cannot win on the merits.

      Violation of the rules of honorable controversy

      Drawing on the work of Levi Hedge (19th century), the document identifies three fundamental rules of healthy debate that conspiracist rhetoric systematically violates:

      Rule 4: No personal attacks.

      Rule 5: No accusing one's opponent of hidden motives.

      Rule 7: Truth, not victory, must be the goal. Using ridicule or mockery (calling the other a sheep) violates this rule.

      However, the document stresses that these failings are not exclusive to conspiracists; they appear frequently in any public debate where the participants' goal is to "win" rather than to seek the truth.

      --------------------------------------------------------------------------------

      5. Critical Perspectives

      In conclusion, the document invites reflection on the very nature of the critique of conspiracism.

      If conspiracist rhetoric is defined as being "by nature" a tautology grounded in essentialism, one runs the risk of producing a closed, essentialist discourse oneself.

      This mise en abyme suggests that the analysis of conspiracism must itself remain vigilant about its own argumentative structures, so as not to fall into the very flaws it denounces.

    1. Briefing: Becoming a Parent, a Major Challenge - An Analysis of Systemic, Medical, and Social Obstacles

      Executive Summary

      This document summarizes the discussions of a round table devoted to the major challenges of access to parenthood.

      The analysis reveals a deep gap between the societal injunction to have children and the reality of "atypical" paths to parenthood (infertility, disability, adoption).

      Parents and prospective parents face a threefold ordeal:

      1. Persistent prejudice: stigmatization of male infertility and denial of the parental competence of people with disabilities.

      2. A failure of support: a lack of neutral information and of training for medical staff, sometimes pushing individuals toward ideological exploitation or pseudo-science.

      3. Violent systemic barriers: exhausting administrative adoption procedures and intrusive monitoring by social services that can lead to serious family trauma (abusive child placements).

      Despite these obstacles, critical thinking and involvement in advocacy associations emerge as essential tools of resilience for navigating these complex systems.

      --------------------------------------------------------------------------------

      1. Infertility: Between Biological Realities and Social Myths

      Infertility is often wrongly perceived as an essentially female problem.

      Scientific data and personal testimonies correct this view.

      Distribution of the causes of infertility

      According to Marjorie Whitfield (a researcher at Inserm), responsibility for infertility is evenly distributed:

      One third of cases are of female origin.

      One third of cases are of male origin.

      One third of cases are of mixed origin (involving both partners).

      The weight of prejudice against men

      Male infertility is particularly prone to psychological and social conflations:

      Confusion with impotence: society often conflates the ability to procreate (sperm production) with virility or sexual performance. A sterile man can have normal sexual function.

      An affront to virility: for many men, the inability to conceive is experienced as a failure of the "contract" of manhood.

      Denial of fatherhood: in cases involving a sperm donor, social prejudice tends to deny the role of the father in favor of genetics alone.

      --------------------------------------------------------------------------------

      2. Parenthood and Disability: A Discriminatory Obstacle Course

      Leitha's testimony highlights a health system and a social-services apparatus that are deeply ableist, in which disability is systematically perceived as an impediment, even a danger.

      Medical stigmatization

      Health professionals often show complete incomprehension when faced with a disabled person's desire for pregnancy:

      Invisibilization of sexuality: caregivers' astonishment at the conception ("How did you manage it?").

      Systematic steering toward abortion: patients are offered termination of pregnancy by default, without their choice or parental project being considered.

      Lack of adapted equipment: the absence of gynecological examination tables or instruments suited to wheelchair users, leading to gynecological violence.

      The suspicion of social services

      Once they become parents, people with disabilities are subjected to disproportionate surveillance:

      Contradictory injunctions: social services impose rigid, shifting requirements without offering concrete solutions to the daily difficulties linked to disability.

      Reporting by default: unfounded concerns or prejudices about a parent's ability to protect the child can lead to placement proceedings.

      Family trauma: children are sometimes removed from their parents for several years on the basis of suspicions of danger that are never substantiated by facts.

      --------------------------------------------------------------------------------

      3. Administrative and Legislative Hurdles

      Access to parenthood is also conditioned by heavy bureaucratic mechanisms that can discourage applicants.

      | Type of path | Nature of the obstacles identified |
      | --- | --- |
      | Adoption | Long approval delays (5 years), intrusive social inquiries (neighbors, family), obsolete psychological tests (e.g., the Rorschach test), and foreign countries closing to French applicants following French legislative changes (e.g., the "Mariage pour tous" same-sex marriage law). |
      | Assisted reproduction (PMA) | Longer delays for people with disabilities (additional examinations), limits on the number of covered attempts, and the high cost of procedures abroad. |
      | Social monitoring | Unrequested psychosocial surveillance; the feeling of being "scrutinized under a magnifying glass", unlike biological parents without apparent difficulties. |

      --------------------------------------------------------------------------------

      4. The Danger of Lack of Information and Isolation

      The deficit of support from official structures creates a dangerous vacuum that organizations with varied agendas fill.

      Ideological exploitation: in the absence of public resources to support pregnancies involving disability, anti-abortion associations sometimes become the sole holders of practical information, using this help to psychologically manipulate expectant mothers.

      Pseudo-medicine: the desire for parenthood is a lucrative market for miracle cures or training programs promising to "boost" fertility with no scientific basis.

      Psychological isolation: guilt, often induced by medical discourse ("You can't do that to a child"), isolates parents and undermines their mental health.

      --------------------------------------------------------------------------------

      5. The Crucial Role of Critical Thinking

      Critical thinking is presented as a fundamental lever for regaining control over one's path to parenthood.

      1. Filtering information: learning to check sources and not to accept medical pronouncements as absolute truth, especially when they are loaded with value judgments.

      2. Defusing guilt: understanding the systemic mechanisms makes it possible to see that failure or difficulty is not an individual fault but the result of a lack of support.

      3. Creating resources: faced with the absence of adapted structures, advocacy work (such as creating neutral resource websites) breaks isolation and offers support grounded in experience and evidence (EBM, evidence-based medicine).

      --------------------------------------------------------------------------------

      Conclusion: A Matter of Dignity and Rights

      The journeys of Sylvain Rozier and Leitha show that becoming a parent, when one departs from the biological or social norm, is an act of resistance.

      Despite the harshness of the ordeals (an 11-year fight for one, years of legal battle for the other), the positive outcomes of these journeys underscore the urgent need to reform how parenthood is supported:

      Training of medical and social-services staff on disability issues.

      Neutrality and accessibility of medical information.

      Logistical support rather than repressive surveillance.

      "Parenthood is a path strewn with obstacles [...] but on atypical paths you face an entirely different level of obstacles, ones that isolate." (Marjorie Whitfield)

    1. L'Esprit Critique au Cœur de l'Enquête Privée Spécialisée : Analyse des Pratiques de Benoît Judde

      Ce document de synthèse analyse les interventions de Benoît Judde, détective privé spécialisé, concernant l'évolution de la profession de détective en France, le cadre juridique des dérives sectaires et l'utilisation de l'esprit critique comme outil méthodologique fondamental pour l'administration de la preuve.

      Synthèse

      La profession de détective privé en France, désormais strictement réglementée et contrôlée par le ministère de l'Intérieur (CNAPS), s'est transformée en un auxiliaire de fait pour la défense des intérêts privés et le système judiciaire.

      Benoît Judde, spécialisé dans les faits de manipulation et les dérives sectaires, démontre que l'efficacité de l'enquêteur repose sur une maîtrise rigoureuse du cadre juridique et sur l'application de l'esprit critique.

      Cette approche, adossée aux psychologies cognitive et sociale expérimentales, permet de transformer des phénomènes subjectifs comme la « sujétion psychologique » en éléments de preuve objectifs, circonstanciés et recevables en justice.

      Le passage récent (2024) de la sujétion psychologique au statut d'infraction autonome renforce la nécessité d'une expertise technique capable de caractériser les manœuvres de manipulation sans tomber dans le biais de confirmation.

      --------------------------------------------------------------------------------

      1. Le Cadre Légal et Déontologique de la Profession

      La profession de détective privé, officiellement dénommée « agent de recherche privée », est définie par le Code de la sécurité intérieure (CSI).

      Définition et Prérogatives

      Selon l'article L621-1 du CSI, le détective est un professionnel libéral dont la mission consiste à recueillir des informations ou des renseignements destinés à des tiers, en vue de la défense de leurs intérêts.

      Anonymat d'enquête : C’est la seule profession parajuridique autorisée à enquêter sans révéler sa qualité, son identité réelle ou l’objet de sa mission. Contrairement aux commissaires de justice (huissiers), le détective peut agir sous une identité fictive.

      Recevabilité des preuves : Les rapports de détective doivent être « détaillés, circonstanciés et précis » (DCP) pour être recevables devant les tribunaux, selon une jurisprudence de la Cour de cassation datant de 1962.

      Régulation et Formation

      La profession est passée d'un état de « freestyle » à un encadrement strict :

      Contrôle du CNAPS : Le Conseil national des activités privées de sécurité (sous tutelle du ministère de l'Intérieur) délivre trois agréments distincts (personne physique, structure juridique, carte professionnelle), renouvelables tous les 5 ans après enquête de moralité approfondie.

      Formation obligatoire : Un niveau Bac+3 (licence professionnelle) est requis. Il n'existe que quatre écoles en France (deux universités et deux écoles privées), formant environ 120 nouveaux professionnels par an.

      Déontologie : Les détectives sont soumis au secret professionnel et à une obligation de conseil. Ils doivent notamment vérifier la légitimité de la demande pour éviter de servir des projets de vengeance ou des recherches malveillantes.

      --------------------------------------------------------------------------------

      2. L'Enquête Spécialisée dans les Dérives Sectaires

      Le champ d'action des détectives est vaste (recherche de personnes, contrefaçon, fraude à l'assurance), mais la spécialisation de Benoît Judde porte sur la manipulation mentale.

      Les Critères de la MIVILUDES

      Pour objectiver une dérive sectaire, l'enquêteur s'appuie sur le référentiel de la Mission interministérielle de vigilance et de lutte contre les dérives sectaires (MIVILUDES), qui identifie 10 critères principaux.

      | Catégorie d'atteinte | Exemples de sous-critères | | --- | --- | | Atteintes aux personnes | Rupture avec l'environnement d'origine, perte d'esprit critique, embrigadement des enfants, privation de sommeil ou de nourriture. | | Atteintes aux biens | Exigences financières disproportionnées, endettement, travail dissimulé (ex: détournement du concept de woofing). | | Vie sociale et démocratique | Discours antisocial, trouble à l'ordre public, détournement des circuits économiques. |

      Collaboration Interdisciplinaire

      L'enquêteur travaille en binôme avec un psychologue (spécialisé en psychologie scientifique, cognitive et sociale) pour valider la réalité de l'emprise.

      Cette collaboration permet d'apporter une « parole psychologique » crédible que le juriste ou le détective ne peut formuler seul, notamment pour qualifier le préjudice ou la sujétion devant un juge.

      --------------------------------------------------------------------------------

      3. Recent Legislative Developments (the 2024 Law)

      The French legal framework has recently evolved to facilitate the prosecution of sectarian drifts, making the role of evidence more complex and more crucial.

      Autonomy of psychological subjection: Previously tied to abuse of weakness (which required proving a pre-existing state of weakness and a resulting harm), "placing a person in a state of psychological subjection" became a standalone offense in 2024.

      It is now sufficient to prove the use of pressure or manipulation techniques that impair judgment.

      Diversion from medical treatment: A new offense punishes inducing a person to abandon a therapeutic or prophylactic medical treatment (e.g., vaccination) in favor of pseudo-scientific practices.

      Fraud and cybercrime: In the digital realm, 95% of scams rely on social engineering (human manipulation) rather than on purely technical vulnerabilities.

      --------------------------------------------------------------------------------

      4. Critical Thinking as an Investigative Methodology

      For Benoît Judde, critical thinking is not an intellectual posture but a working tool that helps avoid confirmation bias and ensures the objectivity of the report.

      The Three Pillars of Manipulation

      The investigator analyzes situations through three mechanisms identified by experimental psychology:

      1. Self-manipulation: Exploiting individuals' natural cognitive biases.

      2. Freely granted compliance: Techniques such as the "foot in the door" (obtaining a small commitment in order to secure a larger one) or the "door in the face" (asking for the excessive in order to obtain the reasonable).

      3. Obedience to authority: A reference to the Milgram experiment. Manipulation succeeds when the authority is perceived as legitimate (e.g., wearing a lab coat, bearing the title "brother of Jesus", etc.).

      The Objectivity of Evidence

      Use of technology: Hidden cameras during infiltrations provide raw, incontestable evidence, avoiding both the fallibility of human memory and accusations of bias.

      Necessity and proportionality: The investigator must show that the intrusion into private life (infiltration, surveillance) was strictly indispensable to establishing the truth and proportionate to the stakes (the right to evidence versus the right to privacy).

      --------------------------------------------------------------------------------

      5. Conclusion: Toward a Security Continuum

      The document stresses that the State cannot single-handedly monitor every risk, particularly in the complex domains of sectarian and therapeutic drifts.

      Public-private synergy: The private detective steps in where the police can no longer act (non-suspicious disappearances, pre-criminal investigations to consolidate a complaint).

      Auxiliary of justice: By providing evidence grounded in scientific consensus (experimental psychology), the detective helps the magistrate base decisions on facts rather than on contradictory testimony.

      Complementarity: The goal is not an "Americanization" of the system but a mutual validation in which the private sector complements state action by contributing specific technical and field expertise.

    1. Clinical Synthesis: Understanding and Supporting ASD-ADHD Co-occurrence (ODHD)

      Executive Summary

      This document offers an in-depth analysis of the co-occurrence of Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), a profile often designated by the English acronym "ODHD".

      Long ignored by official classifications (notably before the DSM-5 in 2013), this dual condition is now recognized as a clinical entity in its own right, not a mere sum of symptoms.

      The key points of this analysis include:

      High prevalence: More than 40% of individuals with ASD have an associated ADHD.

      Clinical complexity: The combination of the two disorders leads to increased symptom severity, major fatigue (autistic burnout), and complex sensory profiles.

      Specific management: The approach must be multidisciplinary, prioritizing psychoeducation and cautious pharmacology, while avoiding the systematic use of antipsychotics.

      Paradigm shift: It is crucial to move from a symptom-centered view to one focused on overall functioning and the quality of the environment.

      --------------------------------------------------------------------------------

      1. Diagnostic Analysis and Prevalence

      1.1 Evolution of the Classifications

      Before 2013, the DSM (in its fourth edition) formally prohibited a dual diagnosis of ASD and ADHD; the DSM-5 lifted that prohibition. Yet clinical practice had already revealed patients with marked features of both disorders, and since then the scientific literature and field experience have confirmed a frequent overlap.

      1.2 Co-occurrence Statistics

      Current data reveal an asymmetry in the comorbidity:

      ASD with ADHD: More than 40% of autistic individuals also meet the criteria for ADHD.

      ADHD with ASD: Roughly 13% to 20% of people with ADHD show associated autistic traits.

      1.3 The Importance of Differential Diagnosis

      It is imperative to identify the origin of symptoms in order to avoid an erroneous stacking of diagnoses. For example:

      • Social difficulties in ADHD are often tied to impulsivity or inattention, whereas in ASD they stem from social cognition.

      • Attentional difficulties in ASD are often the consequence of hyper-sensoriality or restricted interests rather than of an intrinsic ADHD mechanism.

      --------------------------------------------------------------------------------

      2. Clinical Manifestations and Functional Impacts

      The association of the two disorders (ODHD) creates a distinctive clinical picture in which the symptoms influence one another, increasing overall severity.

      | Functional domain | Impact of ASD + ADHD co-occurrence |
      | --- | --- |
      | Executive functions | More pronounced difficulties (inhibition, flexibility, attention); a profile close to isolated ADHD but more severe. |
      | Social cognition | Increased social difficulties, less eye contact, and little spontaneous improvement over time. |
      | Sensory processing | Cumulative hypersensitivities; a complex and particularly intense sensory profile. |
      | Mental health | Increased risk of depressive disorders, sleep disorders, major exhaustion, and autistic burnout. |
      | Adaptation | Greater economic precarity and major psychosocial difficulties. |

      2.1 "Disorder" versus "Functioning"

      A crucial point of the analysis is the distinction between having a neurodivergent way of functioning and having a disorder. A disorder exists only when there is a negative functional repercussion, and that repercussion is closely tied to the quality of the environment (for example, a teacher's personality or the adaptation of a workstation).

      --------------------------------------------------------------------------------

      3. Therapeutic Strategies and Support

      3.1 Psychoeducation: The Central Pillar

      Psychoeducation must be "sextuple" (including the child, the parents, and the siblings). Its goals are to:

      • Give meaning to the symptoms.

      • Put an end to misconceptions and prejudices (including those held by caregivers).

      • Reduce self-stigmatization and guilt.

      • Limit "masking" (permanent over-adaptation), a major cause of exhaustion and burnout.

      3.2 Pharmacological Approach (Methylphenidate)

      Methylphenidate may be used, but it requires refined clinical expertise:

      Heightened sensitivity: ASD patients are often hypersensitive to substances (a fine-grained perception of bodily changes).

      Dosage: It is recommended to start with very low doses (e.g., 5 mg) and to increase very gradually.

      Vigilance: Watch for a potential increase in stereotypies or irritability.

      Critique of current practice: The document denounces as "heresy" the first-line use of antipsychotics (such as Haldol or Risperdal) in France at the expense of methylphenidate.

      3.3 "Grandma's Therapy" and Body-Based Mediations

      Lifestyle and the body are fundamental levers:

      Lifestyle: A Mediterranean diet, quality sleep, and regulated screen exposure.

      Physical activity: Shows major, literature-backed efficacy for regulating ADHD.

      Emotional regulation: Use of cardiac-coherence tools (e.g., RespiRelax) to act on the autonomic nervous system.

      Alternative mediations: Music therapy and dance therapy are particularly effective because they work through frequencies and the body rather than through verbal language.

      --------------------------------------------------------------------------------

      4. Neurodiversity: Strengths and Evolutionary Perspectives

      It is essential not to reduce the individual to their symptoms but to recognize the strengths inherent in these profiles.

      ADHD strengths: Empathy, creativity (born of the coping strategies these individuals develop), curiosity, enthusiasm, intuition, and speed.

      ASD strengths: Precision, conscientiousness, honesty, punctuality, and attention to detail.

      An evolutionary reading: The persistence of neurodevelopmental disorders (NDDs) through human evolution suggests they have a social utility; for example, ADHD for exploration and rapid problem-solving, and ASD for vigilance and technical expertise within a group.

      Toward Inclusive Environments

      The "Atipy Friendly" project illustrates the necessary transition toward a society (universities in particular) able to adapt to the singularity of these ways of functioning, rather than demanding systematic over-adaptation from the people concerned.

      --------------------------------------------------------------------------------

      Conclusion

      The ASD-ADHD (ODHD) profile requires particular attention and closer coordination among professionals (psychomotor therapists, child psychiatrists, educators).

      The challenge is not merely to treat symptoms but to meet the person's specific needs so as to foster autonomy and quality of life, while valuing the strengths tied to their neurodivergence.

    1. Search for a solution that earns seven green checkmarks.

      Seven green checkmarks: what is required above the threshold, the sweet spot.

      • Fast
      • Multi-device
      • Offline
      • Collaboration
      • Longevity
      • Privacy
      • User control
    1. THE AMERICAN YAWP, Chapter 6: A New Nation

“The Federal Pillars,” from The Massachusetts Centinel, August 2, 1789. Library of Congress.

Contents: I. Introduction | II. Shays’s Rebellion | III. The Constitutional Convention | IV. Ratifying the Constitution | V. Rights and Compromises | VI. Hamilton’s Financial System | VII. The Whiskey Rebellion and Jay’s Treaty | VIII. The French Revolution and the Limits of Liberty | IX. Religious Freedom | X. The Election of 1800 | XI. Conclusion | XII. Primary Sources | XIII. Reference Material

I. Introduction

On July 4, 1788, Philadelphians turned out for a “grand federal procession” in honor of the new national constitution. Workers in various trades and professions demonstrated. Blacksmiths carted around a working forge, on which they symbolically beat swords into farm tools. Potters proudly carried a sign paraphrasing from the Bible, “The potter hath power over his clay,” linking God’s power with an artisan’s work and a citizen’s control over the country. Christian clergymen meanwhile marched arm-in-arm with Jewish leaders. The grand procession represented what many Americans hoped the United States would become: a diverse but cohesive, prosperous nation.1 Over the next few years, Americans would celebrate more of these patriotic holidays. In April 1789, for example, thousands gathered in New York to see George Washington take the presidential oath of office.
That November, Washington called his fellow citizens to celebrate with a day of thanksgiving, particularly for “the peaceable and rational manner” in which the government had been established.2 But the new nation was never as cohesive as its champions had hoped. Although the officials of the new federal government—and the people who supported it—placed great emphasis on unity and cooperation, the country was often anything but unified. The Constitution itself had been a controversial document adopted to strengthen the government so that it could withstand internal conflicts. Whatever the later celebrations, the new nation had looked to the future with uncertainty. Less than two years before the national celebrations of 1788 and 1789, the United States had faced the threat of collapse.

II. Shays’s Rebellion

Daniel Shays became a divisive figure, to some a violent rebel seeking to upend the new American government, to others an upholder of the true revolutionary virtues Shays and others fought for. This contemporary depiction of Shays and his accomplice Job Shattuck portrays them in the latter light as rising “illustrious from the Jail.” Unidentified artist, Daniel Shays and Job Shattuck, 1787. Wikimedia.

In 1786 and 1787, a few years after the Revolution ended, thousands of farmers in western Massachusetts were struggling under a heavy burden of debt. Their problems were made worse by weak local and national economies. Many political leaders saw both the debt and the struggling economy as a consequence of the Articles of Confederation, which provided the federal government with no way to raise revenue and did little to create a cohesive nation out of the various states. The farmers wanted the Massachusetts government to protect them from their creditors, but the state supported the lenders instead. As creditors threatened to foreclose on their property, many of these farmers, including Revolutionary War veterans, took up arms.
Led by a fellow veteran named Daniel Shays, these armed men, the “Shaysites,” resorted to tactics like the patriots had used before the Revolution, forming blockades around courthouses to keep judges from issuing foreclosure orders. These protesters saw their cause and their methods as an extension of the “Spirit of 1776”; they were protecting their rights and demanding redress for the people’s grievances. Governor James Bowdoin, however, saw the Shaysites as rebels who wanted to rule the government through mob violence. He called up thousands of militiamen to disperse them. A former Revolutionary general, Benjamin Lincoln, led the state force, insisting that Massachusetts must prevent “a state of anarchy, confusion and slavery.”3 In January 1787, Lincoln’s militia arrested more than one thousand Shaysites and reopened the courts.

Daniel Shays and other leaders were indicted for treason, and several were sentenced to death, but eventually Shays and most of his followers received pardons. Their protest, which became known as Shays’s Rebellion, generated intense national debate. While some Americans, like Thomas Jefferson, thought “a little rebellion now and then” helped keep the country free, others feared the nation was sliding toward anarchy and complained that the states could not maintain control. For nationalists like James Madison of Virginia, Shays’s Rebellion was a prime example of why the country needed a strong central government. “Liberty,” Madison warned, “may be endangered by the abuses of liberty as well as the abuses of power.”4

III. The Constitutional Convention

The uprising in Massachusetts convinced leaders around the country to act. After years of goading by James Madison and other nationalists, delegates from twelve of the thirteen states met at the Pennsylvania state house in Philadelphia in the summer of 1787. Only Rhode Island declined to send a representative.
The delegates arrived at the convention with instructions to revise the Articles of Confederation. The biggest problem the convention needed to solve was the federal government’s inability to levy taxes. That weakness meant that the burden of paying back debt from the Revolutionary War fell on the states. The states, in turn, found themselves beholden to the lenders who had bought up their war bonds. That was part of why Massachusetts had chosen to side with its wealthy bondholders over poor western farmers.5

James Madison, however, had no intention of simply revising the Articles of Confederation. He intended to produce a completely new national constitution. In the preceding year, he had completed two extensive research projects—one on the history of government in the United States, the other on the history of republics around the world. He used this research as the basis for a proposal he brought with him to Philadelphia. It came to be called the Virginia Plan, named after Madison’s home state.6

James Madison was a central figure in the reconfiguration of the national government. Madison’s Virginia Plan was a guiding document in the formation of a new government under the Constitution. John Vanderlyn, Portrait of James Madison, 1816. Wikimedia.

The Virginia Plan was daring. Classical learning said that a republican form of government required a small and homogenous state: the Roman republic, or a small country like Denmark, for example. Citizens who were too far apart or too different could not govern themselves successfully. Conventional wisdom said the United States needed to have a very weak central government, which should simply represent the states on certain matters they had in common. Otherwise, power should stay at the state or local level. But Madison’s research had led him in a different direction. He believed it was possible to create “an extended republic” encompassing a diversity of people, climates, and customs.
The Virginia Plan, therefore, proposed that the United States should have a strong federal government. It was to have three branches—legislative, executive, and judicial—with power to act on any issues of national concern. The legislature, or Congress, would have two houses, in which every state would be represented according to its population size or tax base. The national legislature would have veto power over state laws.7

Other delegates to the convention generally agreed with Madison that the Articles of Confederation had failed. But they did not agree on what kind of government should replace them. In particular, they disagreed about the best method of representation in the new Congress. Representation was an important issue that influenced a host of other decisions, including deciding how the national executive branch should work, what specific powers the federal government should have, and even what to do about the divisive issue of slavery.

For more than a decade, each state had enjoyed a single vote in the Continental Congress. William Paterson’s New Jersey Plan proposed to keep things that way. The Connecticut delegate Roger Sherman, furthermore, argued that members of Congress should be appointed by the state legislatures. Ordinary voters, Sherman said, lacked information, were “constantly liable to be misled” and “should have as little to do as may be” about most national decisions.8 Large states, however, preferred the Virginia Plan, which would give their citizens far more power over the legislative branch. James Wilson of Pennsylvania argued that since the Virginia Plan would vastly increase the powers of the national government, representation should be drawn as directly as possible from the public. No government, he warned, “could long subsist without the confidence of the people.”9 Ultimately, Roger Sherman suggested a compromise.
Congress would have a lower house, the House of Representatives, in which members were assigned according to each state’s population, and an upper house, which became the Senate, in which each state would have one vote. This proposal, after months of debate, was adopted in a slightly altered form as the Great Compromise: each state would have two senators, who could vote independently. In addition to establishing both types of representation, this compromise also counted three-fifths of a state’s enslaved population for representation and tax purposes.

The delegates took even longer to decide on the form of the national executive branch. Should executive power be in the hands of a committee or a single person? How should its officeholders be chosen? On June 1, James Wilson moved that the national executive power reside in a single person. Coming only four years after the American Revolution, that proposal was extremely contentious; it conjured up images of an elected monarchy.10 The delegates also worried about how to protect the executive branch from corruption or undue control. They endlessly debated these questions, and not until early September did they decide the president would be elected by a special electoral college.

In the end, the Constitutional Convention proposed a government unlike any other, combining elements copied from ancient republics and English political tradition but making some limited democratic innovations—all while trying to maintain a delicate balance between national and state sovereignty. It was a complicated and highly controversial scheme.

IV. Ratifying the Constitution

Delegates to the Constitutional Convention assembled, argued, and finally agreed in this room, styled in the same manner as during the Convention. Photograph of the Assembly Room, Independence Hall, Philadelphia, Pennsylvania. Wikimedia. Creative Commons Attribution-Share Alike 3.0 Unported.
The convention voted to send its proposed Constitution to Congress, which was then sitting in New York, with a cover letter from George Washington. The plan for adopting the new Constitution, however, required approval from special state ratification conventions, not just Congress. During the ratification process, critics of the Constitution organized to persuade voters in the different states to oppose it. Importantly, the Constitutional Convention had voted down a proposal from Virginia’s George Mason, the author of Virginia’s state Declaration of Rights, for a national bill of rights. This omission became a rallying point for opponents of the document. Many of these Anti-Federalists argued that without such a guarantee of specific rights, American citizens risked losing their personal liberty to the powerful federal government. The pro-ratification Federalists, on the other hand, argued that including a bill of rights was not only redundant but dangerous; it could limit future citizens from adding new rights.11 Citizens debated the merits of the Constitution in newspaper articles, letters, sermons, and coffeehouse quarrels across America. Some of the most famous, and most important, arguments came from Alexander Hamilton, John Jay, and James Madison in the Federalist Papers, which were published in various New York newspapers in 1787 and 1788.12 The first crucial vote came at the beginning of 1788 in Massachusetts. At first, the Anti-Federalists at the Massachusetts ratifying convention probably had the upper hand, but after weeks of debate, enough delegates changed their votes to narrowly approve the Constitution. But they also approved a number of proposed amendments, which were to be submitted to the first Congress. This pattern—ratifying the Constitution but attaching proposed amendments—was followed by other state conventions. 
The most high-profile convention was held in Richmond, Virginia, in June 1788, when Federalists like James Madison, Edmund Randolph, and John Marshall squared off against equally influential Anti-Federalists like Patrick Henry and George Mason. Virginia was America’s most populous state, it had produced some of the country’s highest-profile leaders, and the success of the new government rested upon its cooperation. After nearly a month of debate, Virginia voted 89 to 79 in favor of ratification.13

On July 2, 1788, Congress announced that a majority of states had ratified the Constitution and that the document was now in effect. Yet this did not mean the debates were over. North Carolina, New York, and Rhode Island had not completed their ratification conventions, and Anti-Federalists still argued that the Constitution would lead to tyranny. The New York convention would ratify the Constitution by just three votes, and finally Rhode Island would ratify it by two votes—a full year after George Washington was inaugurated as president.

V. Rights and Compromises

Although debates continued, Washington’s election as president cemented the Constitution’s authority. By 1793, the term Anti-Federalist would be essentially meaningless. Yet the debates produced a piece of the Constitution that seems irreplaceable today. Ten amendments were added in 1791. Together, they constitute the Bill of Rights. James Madison, against his original wishes, supported these amendments as an act of political compromise and necessity. He had won election to the House of Representatives only by promising his Virginia constituents such a list of rights.

There was much the Bill of Rights did not cover. Women found no special protections or guarantee of a voice in government. Many states continued to restrict voting only to men who owned significant amounts of property. And slavery not only continued to exist; it was condoned and protected by the Constitution.
Of all the compromises that formed the Constitution, perhaps none would be more important than the compromise over the slave trade. Americans generally perceived the transatlantic slave trade as more violent and immoral than slavery itself. Many northerners opposed it on moral grounds. But they also understood that letting southern states import more Africans would increase their political power. The Constitution counted each enslaved individual as three fifths of a person for purposes of representation, so in districts with many enslaved people, the white voters had extra influence. On the other hand, the states of the Upper South also welcomed a ban on the Atlantic trade because they already had a surplus of enslaved laborers. Banning importation meant enslavers in Virginia and Maryland could get higher prices when they sold their enslaved laborers to states like South Carolina and Georgia that were dependent on a continued slave trade. New England and the Deep South agreed to what was called a “dirty compromise” at the Constitutional Convention in 1787. New Englanders agreed to include a constitutional provision that protected the foreign slave trade for twenty years; in exchange, South Carolina and Georgia delegates had agreed to support a constitutional clause that made it easier for Congress to pass commercial legislation. As a result, the Atlantic slave trade resumed until 1808 when it was outlawed for three reasons. First, Britain was also in the process of outlawing the slave trade in 1807, and the United States did not want to concede any moral high ground to its rival. Second, the Haitian Revolution (1791–1804), a successful slave revolt against French colonial rule in the West Indies, had changed the stakes in the debate. The image of thousands of armed Black revolutionaries terrified white Americans. 
Third, the Haitian Revolution had ended France’s plans to expand its presence in the Americas, so in 1803, the United States had purchased the Louisiana Territory from the French at a fire-sale price. This massive new territory, which had doubled the size of the United States, had put the question of slavery’s expansion at the top of the national agenda. Many white Americans, including President Thomas Jefferson, thought that ending the external slave trade and dispersing the domestic slave population would keep the United States a white man’s republic and perhaps even lead to the disappearance of slavery.

The ban on the slave trade, however, lacked effective enforcement measures and funding. Moreover, instead of freeing illegally imported Africans, the act left their fate to the individual states, and many of those states simply sold intercepted enslaved people at auction. Thus, the ban preserved the logic of property ownership in human beings. The new federal government protected slavery as much as it expanded democratic rights and privileges for white men.14

VI. Hamilton’s Financial System

Alexander Hamilton saw America’s future as a metropolitan, commercial, industrial society, in contrast to Thomas Jefferson’s nation of small farmers. While both men had the ear of President Washington, Hamilton’s vision proved most appealing and enduring. John Trumbull, Portrait of Alexander Hamilton, 1806. Wikimedia.

President George Washington’s cabinet choices reflected continuing political tensions over the size and power of the federal government. The vice president was John Adams, and Washington chose Alexander Hamilton to be his secretary of the treasury. Both men wanted an active government that would promote prosperity by supporting American industry. However, Washington chose Thomas Jefferson to be his secretary of state, and Jefferson was committed to restricting federal power and preserving an economy based on agriculture.
Almost from the beginning, Washington struggled to reconcile the Federalist and Republican (or Democratic-Republican) factions within his own administration.15 Alexander Hamilton believed that self-interest was the “most powerful incentive of human actions.” Self-interest drove humans to accumulate property, and that effort created commerce and industry. According to Hamilton, government had important roles to play in this process. First, the state should protect private property from theft. Second, according to Hamilton, the state should use human “passions” and “make them subservient to the public good.”16 In other words, a wise government would harness its citizens’ desire for property so that both private individuals and the state would benefit. Hamilton, like many of his contemporary statesmen, did not believe the state should ensure an equal distribution of property. Inequality was understood as “the great & fundamental distinction in Society,” and Hamilton saw no reason why this should change. Instead, Hamilton wanted to tie the economic interests of wealthy Americans, or “monied men,” to the federal government’s financial health. If the rich needed the government, then they would direct their energies to making sure it remained solvent.17 Hamilton, therefore, believed that the federal government must be “a Repository of the Rights of the wealthy.”18 As the nation’s first secretary of the treasury, he proposed an ambitious financial plan to achieve just that. The first part of Hamilton’s plan involved federal “assumption” of state debts, which were mostly left over from the Revolutionary War. The federal government would assume responsibility for the states’ unpaid debts, which totaled about $25 million. Second, Hamilton wanted Congress to create a bank—a Bank of the United States. The goal of these proposals was to link federal power and the country’s economic vitality. 
Under the assumption proposal, the states’ creditors (people who owned state bonds or promissory notes) would turn their old notes in to the treasury and receive new federal notes of the same face value. Hamilton foresaw that these bonds would circulate like money, acting as “an engine of business, and instrument of industry and commerce.”19 This part of his plan, however, was controversial for two reasons. First, many taxpayers objected to paying the full face value on old notes, which had fallen in market value. Often the current holders had purchased them from the original creditors for pennies on the dollar. To pay them at full face value, therefore, would mean rewarding speculators at taxpayer expense. Hamilton countered that government debts must be honored in full, or else citizens would lose all trust in the government. Second, many southerners objected that they had already paid their outstanding state debts, so federal assumption would mean forcing them to pay again for the debts of New Englanders. Nevertheless, President Washington and Congress both accepted Hamilton’s argument. By the end of 1794, 98 percent of the country’s domestic debt had been converted into new federal bonds.20 Hamilton’s plan for a Bank of the United States, similarly, won congressional approval despite strong opposition. Thomas Jefferson and other Republicans argued that the plan was unconstitutional; the Constitution did not authorize Congress to create a bank. Hamilton, however, argued that the bank was not only constitutional but also important for the country’s prosperity. The Bank of the United States would fulfill several needs. It would act as a convenient depository for federal funds. It would print paper banknotes backed by specie (gold or silver). Its agents would also help control inflation by periodically taking state bank notes to their banks of origin and demanding specie in exchange, limiting the amount of notes the state banks printed. 
Furthermore, it would give wealthy people a vested interest in the federal government’s finances. The government would control just 20 percent of the bank’s stock; the other 80 percent would be owned by private investors. Thus, an “intimate connexion” between the government and wealthy men would benefit both, and this connection would promote American commerce. In 1791, therefore, Congress approved a twenty-year charter for the Bank of the United States. The bank’s stocks, together with federal bonds, created over $70 million in new financial instruments. These spurred the formation of securities markets, which allowed the federal government to borrow more money and underwrote the rapid spread of state-chartered banks and other private business corporations in the 1790s. For Federalists, this was one of the major purposes of the federal government. For opponents who wanted a more limited role for industry, however, or who lived on the frontier and lacked access to capital, Hamilton’s system seemed to reinforce class boundaries and give the rich inordinate power over the federal government. Hamilton’s plan, furthermore, had another highly controversial element. In order to pay what it owed on the new bonds, the federal government needed reliable sources of tax revenue. In 1791, Hamilton proposed a federal excise tax on the production, sale, and consumption of a number of goods, including whiskey.

VII. The Whiskey Rebellion and Jay’s Treaty

Grain was the most valuable cash crop for many American farmers. In the West, selling grain to a local distillery for alcohol production was typically more profitable than shipping it over the Appalachians to eastern markets. Hamilton’s whiskey tax thus placed a special burden on western farmers. It seemed to divide the young republic in half—geographically between the East and West, economically between merchants and farmers, and culturally between cities and the countryside.
In the fall of 1791, sixteen men in western Pennsylvania, disguised in women’s clothes, assaulted a tax collector named Robert Johnson. They tarred and feathered him, and the local deputy marshals seeking justice met similar fates. They were robbed and beaten, whipped and flogged, tarred and feathered, and tied up and left for dead. The rebel farmers also adopted other protest methods from the Revolution and Shays’s Rebellion, writing local petitions and erecting liberty poles. For the next two years, tax collections in the region dwindled. Then, in July 1794, groups of armed farmers attacked federal marshals and tax collectors, burning down at least two tax collectors’ homes. At the end of the month, an armed force of about seven thousand, led by the radical attorney David Bradford, robbed the U.S. mail and gathered about eight miles east of Pittsburgh. President Washington responded quickly. First, Washington dispatched a committee of three distinguished Pennsylvanians to meet with the rebels and try to bring about a peaceful resolution. Meanwhile, he gathered an army of thirteen thousand militiamen in Carlisle, Pennsylvania. On September 19, Washington became the only sitting president to lead troops in the field, though he quickly turned over the army to the command of Henry Lee, a Revolutionary hero and the current governor of Virginia. As the federal army moved westward, the farmers scattered. Hoping to make a dramatic display of federal authority, Alexander Hamilton oversaw the arrest and trial of a number of rebels. Many were released because of a lack of evidence, and most of those who remained, including two men sentenced to death for treason, were soon pardoned by the president. The Whiskey Rebellion had shown that the federal government was capable of quelling internal unrest. But it also demonstrated that some citizens, especially poor westerners, viewed it as their enemy.21 Around the same time, another national issue also aroused fierce protest. 
Along with his vision of a strong financial system, Hamilton also had a vision of a nation busily engaged in foreign trade. In his mind, that meant pursuing a friendly relationship with one nation in particular: Great Britain. America’s relationship with Britain since the end of the Revolution had been tense, partly because of warfare between the British and French. Their naval war threatened American shipping, and the impressment of men into Britain’s navy terrorized American sailors. American trade could be risky and expensive, and impressment threatened seafaring families. Nevertheless, President Washington was conscious of American weakness and was determined not to take sides. In April 1793, he officially declared that the United States would remain neutral.22 With his blessing, Hamilton’s political ally John Jay, who was currently serving as chief justice of the Supreme Court, sailed to London to negotiate a treaty that would satisfy both Britain and the United States. Jefferson and Madison strongly opposed these negotiations. They mistrusted Britain and saw the treaty as the American state favoring Britain over France. The French had recently overthrown their own monarchy, and Republicans thought the United States should be glad to have the friendship of a new revolutionary state. They also suspected that a treaty with Britain would favor northern merchants and manufacturers over the agricultural South. In November 1794, despite their misgivings, John Jay signed a “treaty of amity, commerce, and navigation” with the British. Jay’s Treaty, as it was commonly called, required Britain to abandon its military positions in the Northwest Territory (especially Fort Detroit, Fort Mackinac, and Fort Niagara) by 1796. Britain also agreed to compensate American merchants for their losses. The United States, in return, agreed to treat Britain as its most prized trade partner, which meant tacitly supporting Britain in its current conflict with France. 
Unfortunately, Jay had failed to secure an end to impressment.23 For Federalists, this treaty was a significant accomplishment. Jay’s Treaty gave the United States, a relatively weak power, the ability to stay officially neutral in European wars, and it preserved American prosperity by protecting trade. For Jefferson’s Republicans, however, the treaty was proof of Federalist treachery. The Federalists had sided with a monarchy against a republic, and they had submitted to British influence in American affairs without even ending impressment. In Congress, debate over the treaty transformed the Federalists and Republicans from temporary factions into two distinct (though still loosely organized) political parties.

VIII. The French Revolution and the Limits of Liberty

The mounting body count of the French Revolution included that of the queen and king, who were beheaded in a public ceremony in early 1793, as depicted in the engraving. While Americans disdained the concept of monarchy, the execution of King Louis XVI was regarded by many Americans as an abomination, an indication of the chaos and savagery reigning in France at the time. Charles Monnet (artist), Antoine-Jean Duclos and Isidore-Stanislas Helman (engravers), Day of 21 January 1793: The Death of Louis Capet on the Place de la Révolution, 1794. Wikimedia.

In part, the Federalists were turning toward Britain because they feared the most radical forms of democratic thought. In the wake of Shays’s Rebellion, the Whiskey Rebellion, and other internal protests, Federalists sought to preserve social stability. The course of the French Revolution seemed to justify their concerns. In 1789, news had arrived in America that the French had revolted against their king. Most Americans imagined that liberty was spreading from America to Europe, carried there by the returning French heroes who had taken part in the American Revolution. Initially, nearly all Americans had praised the French Revolution.
Towns all over the country hosted speeches and parades on July 14 to commemorate the day it began. Women had worn neoclassical dress to honor republican principles, and men had pinned revolutionary cockades to their hats. John Randolph, a Virginia planter, named two of his favorite horses Jacobin and Sans-Culotte after French revolutionary factions.24 In April 1793, a new French ambassador, “Citizen” Edmond-Charles Genêt, arrived in the United States. During his tour of several cities, Americans greeted him with wild enthusiasm. Citizen Genêt encouraged Americans to act against Spain, a British ally, by attacking its colonies of Florida and Louisiana. When President Washington refused, Genêt threatened to appeal to the American people directly. In response, Washington demanded that France recall its diplomat. In the meantime, however, Genêt’s faction had fallen from power in France. Knowing that a return home might cost him his head, he decided to remain in America. Genêt’s intuition was correct. A radical coalition of revolutionaries had seized power in France. They initiated a bloody purge of their enemies, the Reign of Terror. As Americans learned about Genêt’s impropriety and the mounting body count in France, many began to have second thoughts about the French Revolution. Americans who feared that the French Revolution was spiraling out of control tended to become Federalists. Those who remained hopeful about the revolution tended to become Republicans. Not deterred by the violence, Thomas Jefferson declared that he would rather see “half the earth desolated” than see the French Revolution fail. “Were there but an Adam and an Eve left in every country, and left free,” he wrote, “it would be better than as it now is.”25 Meanwhile, the Federalists sought closer ties with Britain. Despite the political rancor, in late 1796 there came one sign of hope: the United States peacefully elected a new president. 
For now, as Washington stepped down and executive power changed hands, the country did not descend into the anarchy that many leaders feared. The new president was John Adams, Washington’s vice president. Adams was less beloved than the old general, and he governed a deeply divided nation. The foreign crisis also presented him with a major test. In response to Jay’s Treaty, the French government authorized its vessels to attack American shipping. To resolve this, President Adams sent envoys to France in 1797. The French insulted these diplomats. Some officials, whom the Americans code-named X, Y, and Z in their correspondence, hinted that negotiations could begin only after the Americans offered a bribe. When the story became public, this XYZ Affair infuriated American citizens. Dozens of towns wrote addresses to President Adams, pledging him their support against France. Many people seemed eager for war. “Millions for defense,” toasted South Carolina representative Robert Goodloe Harper, “but not one cent for tribute.”26 By 1798, the people of Charleston watched the ocean’s horizon apprehensively because they feared the arrival of the French navy at any moment. Many people now worried that the same ships that had aided Americans during the Revolutionary War might discharge an invasion force on their shores. Some southerners were sure that this force would consist of Black troops from France’s Caribbean colonies, who would attack the southern states and cause their enslaved laborers to revolt. Many Americans also worried that France had covert agents in the country. In the streets of Charleston, armed bands of young men searched for French disorganizers. Even the little children prepared for the looming conflict by fighting with sticks.27 Meanwhile, during the crisis, New Englanders were some of the most outspoken opponents of France. In 1798, they found a new reason for Francophobia. 
An influential Massachusetts minister, Jedidiah Morse, announced to his congregation that the French Revolution had been hatched in a conspiracy led by a mysterious anti-Christian organization called the Illuminati. The story was a hoax, but rumors of Illuminati infiltration spread throughout New England like wildfire, adding a new dimension to the foreign threat.28 Against this backdrop of fear, the French Quasi-War, as it would come to be known, was fought on the Atlantic, mostly between French naval vessels and American merchant ships. During this crisis, however, anxiety about foreign agents ran high, and members of Congress took action to prevent internal subversion. The most controversial of these steps were the Alien and Sedition Acts. These two laws, passed in 1798, were intended to prevent French agents and sympathizers from compromising America’s resistance, but they also attacked Americans who criticized the president and the Federalist Party. The Alien Act allowed the federal government to deport foreign nationals, or “aliens,” who seemed to pose a national security threat. Even more dramatically, the Sedition Act allowed the government to prosecute anyone found to be speaking or publishing “false, scandalous, and malicious writing” against the government.29 These laws were not simply brought on by war hysteria. They reflected common assumptions about the nature of the American Revolution and the limits of liberty. In fact, most of the advocates for the Constitution and the First Amendment accepted that free speech simply meant a lack of prior censorship or restraint, not a guarantee against punishment. According to this logic, “licentious” or unruly speech made society less free, not more. James Wilson, one of the principal architects of the Constitution, argued that “every author is responsible when he attacks the security or welfare of the government.”30 In 1798, most Federalists were inclined to agree. 
Under the terms of the Sedition Act, they indicted and prosecuted several Republican printers—and even a Republican congressman who had criticized President Adams. Meanwhile, although the Adams administration never enforced the Alien Act, its passage was enough to convince some foreign nationals to leave the country. For the president and most other Federalists, the Alien and Sedition Acts represented a continuation of a conservative rather than radical American Revolution. However, the Alien and Sedition Acts caused a backlash in two ways. First, shocked opponents articulated a new and expansive vision for liberty. The New York lawyer Tunis Wortman, for example, demanded an “absolute independence” of the press.31 Likewise, the Virginia judge George Hay called for “any publication whatever criminal” to be exempt from legal punishment.32 Many Americans began to argue that free speech meant the ability to say virtually anything without fear of prosecution. Second, James Madison and Thomas Jefferson helped organize opposition from state governments. Ironically, both of them had expressed support for the principle behind the Sedition Act in previous years. Jefferson, for example, had written to Madison in 1789 that the nation should punish citizens for speaking “false facts” that injured the country.33 Nevertheless, both men now opposed the Alien and Sedition Acts on constitutional grounds. In 1798, Jefferson made this point in a resolution adopted by the Kentucky state legislature. A short time later, the Virginia legislature adopted a similar document written by Madison. The Kentucky and Virginia Resolutions argued that the national government’s authority was limited to the powers expressly granted by the U.S. Constitution. More importantly, they asserted that the states could declare federal laws unconstitutional. For the time being, these resolutions were simply gestures of defiance. Their bold claim, however, would have important effects in later decades. 
In just a few years, many Americans’ feelings toward France had changed dramatically. Far from rejoicing in the “light of freedom,” many Americans now feared the “contagion” of French-style liberty. Debates over the French Revolution in the 1790s gave Americans some of their earliest opportunities to articulate what it meant to be American. Did American national character rest on a radical and universal vision of human liberty? Or was America supposed to be essentially pious and traditional, an outgrowth of Great Britain? They couldn’t agree. It was on this cracked foundation that many conflicts of the nineteenth century would rest.

IX. Religious Freedom

One reason the debates over the French Revolution became so heated was that Americans were unsure about their own religious future. The Illuminati scare of 1798 was just one manifestation of this fear. Across the United States, a slow but profound shift in attitudes toward religion and government began. In 1776, none of the American state governments observed the separation of church and state. On the contrary, all thirteen states either had established, official, and tax-supported state churches, or at least required their officeholders to profess a certain faith. Most officials believed this was necessary to protect morality and social order. Over the next six decades, however, that changed. In 1833, the final state, Massachusetts, stopped supporting an official religious denomination. Historians call that gradual process disestablishment. In many states, the process of disestablishment had started before the creation of the Constitution. South Carolina, for example, had been nominally Anglican before the Revolution, but it had dropped denominational restrictions in its 1778 constitution. Instead, it now allowed any church consisting of at least fifteen adult males to become “incorporated,” or recognized for tax purposes as a state-supported church.
Churches needed only to agree to a set of basic Christian theological tenets, which were vague enough that most denominations could support them.34 South Carolina tried to balance religious freedom with the religious practice that was supposed to be necessary for social order. Officeholders were still expected to be Christians; their oaths were witnessed by God, they were compelled by their religious beliefs to tell the truth, and they were called to live according to the Bible. This list of minimal requirements came to define acceptable Christianity in many states. As new Christian denominations proliferated between 1780 and 1840, however, more and more Christians fell outside this definition. South Carolina continued its general establishment law until 1790, when a constitutional revision removed the establishment clause and religious restrictions on officeholders. Many other states, though, continued to support an established church well into the nineteenth century. The federal Constitution did not prevent this. The religious freedom clause in the Bill of Rights, during these decades, limited the federal government but not state governments. It was not until 1833 that a state supreme court decision ended Massachusetts’s support for the Congregational Church. Many political leaders, including Thomas Jefferson and James Madison, favored disestablishment because they saw the relationship between church and state as a tool of oppression. Jefferson proposed a Statute for Religious Freedom in the Virginia state assembly in 1779, but his bill failed in the overwhelmingly Anglican legislature. Madison proposed it again in 1785, and it defeated a rival bill that would have given equal revenue to all Protestant churches. Instead, Virginia would not use public money to support religion.
“The Religion then of every man,” Jefferson wrote, “must be left to the conviction and conscience of every man; and it is the right of every man to exercise it as these may dictate.”35 At the federal level, the delegates to the Constitutional Convention of 1787 easily agreed that the national government should not have an official religion. This principle was upheld in 1791 when the First Amendment was ratified, with its guarantee of religious liberty. The limits of federal disestablishment, however, required discussion. The federal government, for example, supported Native American missionaries and congressional chaplains. Well into the nineteenth century, debate raged over whether the postal service should operate on Sundays, and whether non-Christians could act as witnesses in federal courts. Americans continued to struggle to understand what it meant for Congress not to “establish” a religion.

X. The Election of 1800

The year 1800 brought about a host of changes in government, in particular the first successful and peaceful transfer of power from one political party to another. But the year was important for another reason: the U.S. Capitol in Washington, D.C. (pictured here in 1800) was finally opened to be occupied by Congress, the Supreme Court, the Library of Congress, and the courts of the District of Columbia. William Russell Birch, A view of the Capitol of Washington before it was burnt down by the British, c. 1800. Wikimedia.

Meanwhile, the Sedition and Alien Acts expired in 1800 and 1801. They had been relatively ineffective at suppressing dissent. On the contrary, they were much more important for the loud reactions they had inspired. They had helped many Americans decide what they didn’t want from their national government. By 1800, therefore, President Adams had lost the confidence of many Americans. They had let him know it. In 1798, for instance, he had issued a national thanksgiving proclamation.
Instead of enjoying a day of celebration and thankfulness, Adams and his family had been forced by rioters to flee the capital city of Philadelphia until the day was over. Meanwhile, his prickly independence had also put him at odds with Alexander Hamilton, the leader of his own party, who offered him little support. After four years in office, Adams found himself widely reviled. In the election of 1800, therefore, the Republicans defeated Adams in a bitter and complicated presidential race. During the election, one Federalist newspaper article predicted that a Republican victory would fill America with “murder, robbery, rape, adultery, and incest.”36 A Republican newspaper, on the other hand, flung sexual slurs against President Adams, saying he had “neither the force and firmness of a man, nor the gentleness and sensibility of a woman.” Both sides predicted disaster and possibly war if the other should win.37 In the end, the contest came down to a tie between two Republicans, Thomas Jefferson of Virginia and Aaron Burr of New York, who each had seventy-three electoral votes. (Adams had sixty-five.) Burr was supposed to be a candidate for vice president, not president, but under the Constitution’s original rules, a tie-breaking vote had to take place in the House of Representatives. It was controlled by Federalists bitter at Jefferson. House members voted dozens of times without breaking the tie. On the thirty-sixth ballot, Thomas Jefferson emerged victorious. Republicans believed they had saved the United States from grave danger. An assembly of Republicans in New York City called the election a “bloodless revolution.” They thought of their victory as a revolution in part because the Constitution (and eighteenth-century political theory) made no provision for political parties. The Republicans thought they were fighting to rescue the country from an aristocratic takeover, not just taking part in a normal constitutional process.
This image attacks Jefferson’s support of the French Revolution and religious freedom. The letter, “To Mazzei,” refers to a 1796 correspondence that criticized the Federalists and, by association, President Washington. Providential Detection, 1797. Courtesy American Antiquarian Society. Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

In his first inaugural address, however, Thomas Jefferson offered an olive branch to the Federalists. He pledged to follow the will of the American majority, whom he believed were Republicans, but to respect the rights of the Federalist minority. His election set an important precedent. Adams accepted his electoral defeat and left the White House peacefully. “The revolution of 1800,” Jefferson wrote years later, did for American principles what the Revolution of 1776 had done for its structure. But this time, the revolution was accomplished not “by the sword” but “by the rational and peaceable instrument of reform, the suffrage of the people.”38 Four years later, when the Twelfth Amendment changed the rules for presidential elections to prevent future deadlocks, it was designed to accommodate the way political parties worked. Despite Adams’s and Jefferson’s attempts to tame party politics, though, the tension between federal power and the liberties of states and individuals would exist long into the nineteenth century. And while Jefferson’s administration attempted to decrease federal influence, Chief Justice John Marshall, an Adams appointee, worked to increase the authority of the Supreme Court. These competing agendas clashed most famously in the 1803 case of Marbury v. Madison, which Marshall used to establish a major precedent. The Marbury case seemed insignificant at first. The night before leaving office in early 1801, Adams had appointed several men to serve as justices of the peace in Washington, D.C.
By making these “midnight appointments,” Adams had sought to put Federalists into vacant positions at the last minute. On taking office, however, Jefferson and his secretary of state, James Madison, had refused to deliver the federal commissions to the men Adams had appointed. Several of the appointees, including William Marbury, sued the government, and the case was argued before the Supreme Court. Marshall used Marbury’s case to make a clever ruling. On the issue of the commissions, the Supreme Court ruled in favor of the Jefferson administration. But Chief Justice Marshall went further in his decision, ruling that the Supreme Court reserved the right to decide whether an act of Congress violated the Constitution. In other words, the court assumed the power of judicial review. This was a major (and lasting) blow to the Republican agenda, especially after 1810, when the Supreme Court extended judicial review to state laws. Jefferson was particularly frustrated by the decision, arguing that the power of judicial review “would make the Judiciary a despotic branch.”39

XI. Conclusion

A grand debate over political power engulfed the young United States. The Constitution ensured that there would be a strong federal government capable of taxing, waging war, and making law, but it could never resolve the young nation’s many conflicting constituencies. The Whiskey Rebellion proved that the nation could stifle internal dissent but exposed a new threat to liberty. Hamilton’s banking system provided the nation with credit but also constrained frontier farmers. The Constitution’s guarantee of religious liberty conflicted with many popular prerogatives. Dissension only deepened, and as the 1790s progressed, Americans became bitterly divided over political parties and foreign war. During the ratification debates, Alexander Hamilton had written of the wonders of the Constitution.
“A nation, without a national government,” he wrote, would be “an awful spectacle.” But, he added, “the establishment of a Constitution, in time of profound peace, by the voluntary consent of a whole people, is a prodigy,” a miracle that should be witnessed “with trembling anxiety.”40 Anti-Federalists had grave concerns about the Constitution, but even they could celebrate the idea of national unity. By 1795, even the staunchest critics would have grudgingly agreed with Hamilton’s convictions about the Constitution. Yet these same individuals could also take the cautions in Washington’s 1796 farewell address to heart. “There is an opinion,” Washington wrote, “that parties in free countries are useful checks upon the administration of the government and serve to keep alive the spirit of liberty.” This, he conceded, was probably true, but in a republic, he said, the danger was not too little partisanship, but too much. “A fire not to be quenched,” Washington warned, “it demands a uniform vigilance to prevent its bursting into a flame, lest, instead of warming, it should consume.”41 For every parade, thanksgiving proclamation, or grand procession honoring the unity of the nation, there was also some political controversy reminding American citizens of how fragile their union was. And as party differences and regional quarrels tested the federal government, the new nation increasingly explored the limits of its democracy.

XII. Primary Sources

1. Hector St. John de Crèvecœur describes the American people, 1782 Hector St. John de Crèvecœur was born in France, but relocated to the colony of New York and married a local woman named Mehitable Tippet. For a period of several years, de Crèvecœur wrote about the people he encountered in North America. The resulting work was widely successful in Europe. In this passage, Crèvecœur attempts to reflect on the difference between life in Europe and life in North America. 2.
A Confederation of Native peoples seek peace with the United States, 1786 In 1786, half a year before the Constitutional Convention, a collection of Native American leaders gathered on the banks of the Detroit River to offer a unified message to the Congress of the United States. Despite this proposal, American surveyors, settlers, and others continued to cross the Ohio River. 3. Mary Smith Cranch comments on politics, 1786-87 In the aftermath of the Revolution, politics became a sport consumed by both men and women. In a series of letters sent to her sister, Mary Smith Cranch comments on a series of political events including the lack of support for diplomats, the circulation of paper or hard currency, legal reform, tariffs against imported tea tables, Shays’s Rebellion, and the role of women in supporting the nation’s interests. 4. James Madison, Memorial and Remonstrance Against Religious Assessments, 1785 Before the American Revolution, Virginia supported local Anglican churches through taxes. After the American Revolution, Virginia had to decide what to do with this policy. Some founding fathers, including Patrick Henry, wanted to equally distribute tax dollars to all churches. In this document, James Madison explains why he did not want any government money to support religious causes in Virginia. 5. George Washington, “Farewell Address,” 1796 George Washington used his final public address as president to warn against what he understood as the two greatest dangers to American prosperity: political parties and foreign wars. Washington urged the American people to avoid political partisanship and entanglements with European wars. 6. Venture Smith, A Narrative of the Life and Adventures of Venture Smith, 1798 Venture Smith’s autobiography is one of the earliest slave narratives to circulate in the Atlantic World. Slave narratives grew into the most important genre of antislavery literature and bore testimony to the injustices of the slave system.
Smith was unusually lucky in that he was able to purchase his freedom, but his story nonetheless reveals the hardships faced by even the most fortunate enslaved men and women. 7. Susannah Rowson, Charlotte Temple, 1794 In Charlotte Temple, one of the earliest and most popular novels published in America, Susannah Rowson offered a cautionary tale of a woman deceived and then abandoned by a roguish man. Americans throughout the new nation read the book with rapt attention and many even traveled to New York City to visit the supposed grave of this fictional character. 8. Constitutional ratification cartoon, 1789 The Massachusetts Centinel ran a series of cartoons depicting the ratification of the Constitution. Each vertical pillar represents a state that has ratified the new government. In this cartoon, North Carolina’s pillar is being guided into place (it would vote for ratification in November 1789). Rhode Island’s pillar, however, is crumbling and shows the uncertainty of the vote there. 9. Anti-Thomas Jefferson Cartoon, 1797 This image attacks Jefferson’s support of the French Revolution and religious freedom. The Altar to “Gallic Despotism” mocks Jefferson’s allegiance to the French. The letter, “To Mazzei,” refers to a 1796 correspondence that criticized the Federalists and, by association, President Washington.

XIII. Reference Material

This chapter was edited by Tara Strauch, with content contributions by Marco Basile, Nathaniel C. Green, Brenden Kennedy, Spencer McBride, Andrea Nero, Cara Rogers, Tara Strauch, Michael Harrison Taylor, Jordan Taylor, Kevin Wisniewski, and Ben Wright. Recommended citation: Marco Basile et al., “A New Nation,” Tara Strauch, ed., in The American Yawp, eds. Joseph Locke and Ben Wright (Stanford, CA: Stanford University Press, 2018).

Recommended Reading

Allgor, Catherine. Parlor Politics: In Which the Ladies of Washington Help Build a City and a Government. Charlottesville: University of Virginia Press, 2000. Appleby, Joyce.
Inheriting the Revolution: The First Generation of Americans. Cambridge, MA: Belknap Press, 2001.
Bartoloni-Tuazon, Kathleen. For Fear of an Elective King: George Washington and the Presidential Title Controversy of 1789. Ithaca, NY: Cornell University Press, 2014.
Beeman, Richard, Stephen Botein, and Edward C. Carter II, eds. Beyond Confederation: Origins of the Constitution and American National Identity. Chapel Hill: University of North Carolina Press, 1987.
Bilder, Mary Sarah. Madison’s Hand: Revising the Constitutional Convention. Cambridge, MA: Harvard University Press, 2015.
Bouton, Terry. “A Road Closed: Rural Insurgency in Post-Independence Pennsylvania.” Journal of American History 87, no. 3 (December 2000): 855–887.
Cunningham, Noble E. The Jeffersonian Republicans: The Formation of Party Organization, 1789–1801. Chapel Hill: University of North Carolina Press, 1967.
Dunn, Susan. Jefferson’s Second Revolution: The Election of 1800 and the Triumph of Republicanism. Boston: Houghton Mifflin, 2004.
Edling, Max. A Revolution in Favor of Government: Origins of the U.S. Constitution and the Making of the American State. New York: Oxford University Press, 2003.
Gordon-Reed, Annette. The Hemingses of Monticello: An American Family. New York: W. W. Norton, 2008.
Halperin, Terri Diane. The Alien and Sedition Acts of 1798: Testing the Constitution. Baltimore: Johns Hopkins University Press, 2016.
Holton, Woody. Unruly Americans and the Origins of the Constitution. New York: Hill and Wang, 2007.
Kierner, Cynthia A. Martha Jefferson Randolph, Daughter of Monticello: Her Life and Times. Chapel Hill: University of North Carolina Press, 2012.
Maier, Pauline. Ratification: The People Debate the Constitution, 1787–1788. New York: Simon and Schuster, 2010.
Papenfuse, Eric Robert. “Unleashing the ‘Wildness’: The Mobilization of Grassroots Antifederalism in Maryland.” Journal of the Early Republic 16, no. 1 (Spring 1996): 73–106.
Pasley, Jeffrey L.
The First Presidential Contest: 1796 and the Founding of American Democracy. Lawrence: University of Kansas Press, 2013.
Rakove, Jack N. Original Meanings: Politics and Ideas in the Making of the Constitution. New York: Vintage Books, 1996.
Salmon, Marylynn. Women and the Law of Property in Early America. Chapel Hill: University of North Carolina Press, 1989.
Sharp, James Roger. American Politics in the Early Republic: The New Nation in Crisis. New Haven, CT: Yale University Press, 1993.
Slaughter, Thomas P. The Whiskey Rebellion: Frontier Epilogue to the American Revolution. New York: Oxford University Press, 1986.
Smith-Rosenberg, Carroll. “Dis-Covering the Subject of the ‘Great Constitutional Discussion,’ 1786–1789.” Journal of American History 79, no. 3 (December 1992): 841–873.
Taylor, Alan. William Cooper’s Town: Power and Persuasion on the Frontier of the Early American Republic. New York: Vintage, 1996.
Waldstreicher, David. In the Midst of Perpetual Fetes: The Making of American Nationalism, 1776–1820. Chapel Hill: University of North Carolina Press, 1997.
Wood, Gordon. Empire of Liberty: A History of the Early Republic, 1789–1815. Oxford: Oxford University Press, 2011.
Zagarri, Rosemarie. Revolutionary Backlash: Women and Politics in the Early American Republic. Philadelphia: University of Pennsylvania Press, 2007.

Notes
Francis Hopkinson, An Account of the Grand Federal Procession, Philadelphia, July 4, 1788 (Philadelphia: Carey, 1788).
George Washington, Thanksgiving Proclamation, October 3, 1789; Fed. Reg., Presidential Proclamations, 1791–1991.
Hampshire Gazette (CT), September 13, 1786.
James Madison, The Federalist Papers (New York: Signet Classics, 2003), no. 63.
Woody Holton, Unruly Americans and the Origins of the Constitution (New York: Hill and Wang, 2007), 8–9.
Madison took an active role during the convention. He also did more than anyone else to shape historians’ understandings of the convention by taking meticulous notes. Many of the quotes included here come from Madison’s notes. To learn more about this important document, read Mary Sarah Bilder, Madison’s Hand: Revising the Constitutional Convention (Cambridge, MA: Harvard University Press, 2015).
Virginia (Randolph) Plan as Amended (National Archives Microfilm Publication M866, 1 roll); The Official Records of the Constitutional Convention; Records of the Continental and Confederation Congresses and the Constitutional Convention, 1774–1789, Record Group 360; National Archives.
Richard Beeman, Plain, Honest Men: The Making of the American Constitution (New York: Random House, 2009), 114.
Herbert J. Storing, What the Anti-Federalists Were For: The Political Thought of the Opponents of the Constitution (Chicago: University of Chicago Press, 1981), 16.
Ray Raphael, Mr. President: How and Why the Founders Created a Chief Executive (New York: Knopf, 2012), 50. See also Kathleen Bartoloni-Tuazon, For Fear of an Elective King: George Washington and the Presidential Title Controversy of 1789 (Ithaca, NY: Cornell University Press, 2014).
David J. Siemers, Ratifying the Republic: Antifederalists and Federalists in Constitutional Time (Stanford, CA: Stanford University Press, 2002).
Alexander Hamilton, James Madison, and John Jay, The Federalist Papers, ed. Ian Shapiro (New Haven, CT: Yale University Press, 2009).
Pauline Maier, Ratification: The People Debate the Constitution, 1787–1788 (New York: Simon and Schuster, 2010), 225–237.
David Waldstreicher, Slavery’s Constitution: From Revolution to Ratification (New York: Hill and Wang, 2009).
Carson Holloway, Hamilton Versus Jefferson in the Washington Administration: Completing the Founding or Betraying the Founding? (New York: Cambridge University Press, 2015).
Alexander Hamilton, The Works of Alexander Hamilton, Volume 1, ed. Henry Cabot Lodge (New York: Putnam, 1904), 70, 408.
Alexander Hamilton, Report on Manufactures (New York: Childs and Swaine, 1791).
James H. Hutson, ed., Supplement to Max Farrand’s The Records of the Federal Convention of 1787 (New Haven, CT: Yale University Press, 1987), 119.
Hamilton, Report on Manufactures.
Richard Sylla, “National Foundations: Public Credit, the National Bank, and Securities Markets,” in Founding Choices: American Economic Policy in the 1790s, ed. Douglas A. Irwin and Richard Sylla (Chicago: University of Chicago Press, 2011), 68.
Thomas P. Slaughter, The Whiskey Rebellion: Frontier Epilogue to the American Revolution (New York: Oxford University Press, 1986).
“Proclamation of Neutrality, 1793,” in A Compilation of the Messages and Papers of the Presidents Prepared Under the Direction of the Joint Committee on Printing, of the House and Senate Pursuant to an Act of the Fifty-Second Congress of the United States (New York: Bureau of National Literature, 1897).
United States, Treaty of Amity, Commerce, and Navigation, signed at London November 19, 1794. Submitted to the Senate June 8; resolution of advice and consent, on condition, June 24, 1795. Ratified by the United States August 14, 1795. Ratified by Great Britain October 28, 1795. Ratifications exchanged at London October 28, 1795. Proclaimed February 29, 1796.
Elizabeth Fox-Genovese and Eugene D. Genovese, The Mind of the Master Class: History and Faith in the Southern Slaveholders’ Worldview (New York: Cambridge University Press, 2005), 18.
“From Thomas Jefferson to William Short, 3 January 1793,” Founders Online, National Archives, http://founders.archives.gov/documents/Jefferson/01-25-02-0016, last modified June 29, 2015; The Papers of Thomas Jefferson, vol. 25, 1 January–10 May 1793, ed. John Catanzariti (Princeton, NJ: Princeton University Press, 1992), 14–17.
Robert Goodloe Harper, June 18, 1798, quoted in American Daily Advertiser (Philadelphia), June 20, 1798.
Robert J. Alderson Jr., This Bright Era of Happy Revolutions: French Consul Michel-Ange-Bernard Mangourit and International Republicanism in Charleston, 1792–1794 (Columbia: University of South Carolina Press, 2008).
Rachel Hope Cleves, The Reign of Terror in America: Visions of Violence from Anti-Jacobinism to Antislavery (New York: Cambridge University Press, 2012), 47.
Alien Act, July 6, 1798, and An Act in Addition to the Act, Entitled “An Act for the Punishment of Certain Crimes Against the United States,” July 14, 1798; Fifth Congress; Enrolled Acts and Resolutions; General Records of the United States Government; Record Group 11; National Archives.
James Wilson, Congressional Debate, December 1, 1787, in Jonathan Elliot, ed., The Debates in the Several State Conventions on the Adoption of the Federal Constitution as Recommended by the General Convention at Philadelphia in 1787, Vol. 2 (New York: s.n., 1888), 448–450.
Tunis Wortman, A Treatise Concerning Political Enquiry, and the Liberty of the Press (New York: Forman, 1800), 181.
George Hay, An Essay on the Liberty of the Press (Philadelphia: s.n., 1799), 43.
Thomas Jefferson to James Madison, August 28, 1789, in The Works of Thomas Jefferson in Twelve Volumes, Federal Edition, ed. Paul Leicester Ford, http://www.loc.gov/resource/mtj1.011_0853_0861.
Francis Newton Thorpe, ed., The Federal and State Constitutions, Colonial Charters, and Other Organic Laws of the States, Territories, and Colonies Now or Heretofore Forming the United States of America Compiled and Edited Under the Act of Congress of June 30, 1906 (Washington, DC: U.S. Government Printing Office, 1909).
Thomas Jefferson, An Act for Establishing Religious Freedom, 16 January 1786, Manuscript, Records of the General Assembly, Enrolled Bills, Record Group 78, Library of Virginia.
Catherine Allgor, Parlor Politics: In Which the Ladies of Washington Help Build a City and a Government (Charlottesville: University of Virginia Press, 2000), 14.
James T. Callender, The Prospect Before Us (Richmond: s.n., 1800).
Letter from Thomas Jefferson to Spencer Roane, September 6, 1819, in The Writings of Thomas Jefferson, 20 vols., ed.
Albert Ellery Bergh (Washington, DC: Thomas Jefferson Memorial Association of the United States, 1903), 142.
Harold H. Bruff, Untrodden Ground: How Presidents Interpret the Constitution (Chicago: University of Chicago Press, 2015), 65.
Alexander Hamilton, The Federalist Papers (New York: Signet Classics, 2003), no. 85.
George Washington, Farewell Address, Annals of Congress, 4th Congress, 2869–2870.

      The discussion of Shays’s Rebellion reveals how economic struggles and weak national power under the Articles of Confederation created serious unrest among farmers. While some leaders viewed the rebellion as a dangerous threat to order, others believed it represented the same revolutionary spirit that founded the country.

    1. Reading stands at the heart of the process of writing academic essays. No matter what kinds of sources and methods you use, you are always reading and interpreting text.

      This is the main idea, and it sets the tone for the rest of the text.

    1. Table 3-2. Ways to explore cultural beliefs in discussing bad news

      What do you think might be going on? What do you call the problem?

      What do you think has caused the problem?

      What do you think will happen with this illness?

      What do you fear most with this illness?

      Would you want to handle the information and decision making, or should that be done by someone else in the family?

    1. How to Overcome INSOMNIA After 40? 3 True Stories from a Therapist’s Office - Dr. Klaudia Tabała
      • CBT-I Therapy Foundations: The primary method for treating insomnia is Cognitive Behavioral Therapy for Insomnia (CBT-I), which focuses on changing sleep-related behaviors and beliefs rather than just basic sleep hygiene.
      • Sleep Compression/Restriction: One of the most effective techniques is paradoxically shortening the time spent in bed (e.g., to 5.5–6 hours). This builds "sleep pressure" and prevents the brain from associating the bed with anxiety and wakefulness [00:10:32].
      • The Sleeping Pill Trap: Drugs like benzodiazepines and "Z-drugs" do not treat the underlying causes of insomnia; they merely "shut down" the nervous system. Long-term use can disrupt sleep architecture, hinder brain detoxification, and lead to dependency [00:30:08].
      • Light and Circadian Rhythm: Natural daylight is the strongest regulator of the sleep-wake cycle. Morning sun exposure and avoiding bright light in the evening help natural melatonin production [00:17:41].
      • Shift Work Strategies: Shift workers should simulate night conditions after finishing work by wearing sunglasses on the way home and using blackout curtains to ensure restorative sleep during the day [00:18:15].
      • Orthosomnia and Wearables: Obsessively tracking sleep data via smartwatches can paradoxically worsen sleep quality. This "orthosomnia" creates anxiety when users see "bad" scores, leading to further sleep interference [00:50:23].
      • Mental Offloading: It is crucial to "unload" the mind before bed. Techniques like writing down worries or the next day's to-do list during the early evening (outside of the bedroom) help prevent racing thoughts at night [01:10:09].
      • Individualized Relaxation: Methods such as Jacobson’s Progressive Muscle Relaxation or even nature observation (e.g., birdwatching) can lower the nervous system's overall arousal levels [01:13:14].
    1. 1993. It includes Canada, the United States, and Mexico, with a combined population of 450 million and an economy of over $20.8 trillion.

      I believe that this is a fantastic example of keeping tensions low between 3 neighboring nations who hold different cultural values. In this example, NAFTA allowed the U.S., Mexico, and Canada to work together in terms of trade rather than compete with one another in the North American theater. If we hadn't had North American trade between the 3 countries, then the U.S. would never have had the chance to produce cars in Mexico, giving Mexico more jobs and wealth within its economy, which in turn helps the U.S.

    2. International trade improves relationships with friends and allies; helps ease tensions among nations; and—economically speaking—bolsters economies, raises people’s standard of living, provides jobs, and improves the quality of life.

      I think this is a great point that shows that international trade has prevented many potential conflicts between nations who would rather seek trade benefits than suffer the detriment of war. Having two countries that would otherwise be enemies trade with one another allows for mutual benefit by keeping tensions low. However, issues can arise if one country begins using the trading relationship to take advantage of the other through corporate espionage.