10,000 Matching Annotations
  1. Nov 2025
    1. 1— “Debate has raged”

      Some headline news from the budget: Labour is finally, after an 18-month internal battle, scrapping the two-child benefit cap. How did they get here? Ailbhe is here, as always, with the inside track. Finn

      2—“Mortal danger”

      Is it all over in Ukraine? The country cannot fight a war for another year, that much is clear. Europe is facing a lonely future, without its American guarantor and with an expansionist, unchecked Russia. Andrew Marr assesses the grave situation. Finn

      3—“How did this happen?”

      Will Dunn makes an unappetising expedition for the sketch this week. There is “a hulking glacier of crap 500 feet long in the heart of the Oxfordshire countryside.” Criminals used it as an illegal rubbish tip. Will holds his nose and follows Ed Davey once more unto the heap. George

      4—“Her rally or his…”

      It’s Your Party conference weekend, and it’s going to be massive. Some predict a barney, some a bust-up. We’ve got two pieces for the meantime. First, Megan Kenyon sat down with Jeremy Corbyn to discuss his apology to Your Party members, his breakfast meeting with Zack Polanski and his ambitions for the leadership. Watch here, and read here.

      And then we have a weekend essay from the left-wing veteran Andrew Murray. He has some advice for the Your Party high-ups, most saliently to “stop doing stupid stuff”. Nicholas

      5—“Who was Salman Rushdie?”

      This is a major one. When one colleague asked Tanjil how he felt to be writing about Sir Salman Rushdie, he said, “Well, I have been reading him since I was a boy.” And Tanjil’s boyhood is foreground and background in this essay-cum-meditation-cum-memoir. Not a dry eye in the house. Nicholas

      To enjoy our latest analysis of politics, news and events, in addition to world-class literary and cultural reviews, click here to subscribe to the New Statesman. You'll enjoy all of the New Statesman's online content, ad-free podcasts and invitations to NS events.


      6—“Here’s the trick”

      It takes a village (or un village?). While Will Dunn was inspecting the giant trash heap, I was thoroughly investigating this year’s Beaujolais nouveau. Come along for a glass of summer in the bleak midwinter: the unassuming Gamay grape can teach us more than you might think about life. Trust me, or read me, to find out what. Finn

      7—“Hymns of isolation”

      I’ve always thought of Radiohead as headphone music: that falsetto over those arrangements, it’s something intense and private, not for 20,000 people standing in a field. But, in this wonderful review of the band live, George has won me round to the alternative. Nicholas

      8—“Just-so satisfaction”

      William Nicholson and the pleasure in the paint. No one can really agree on how significant William Nicholson’s contribution to 20th-century painting was. Probably thanks to all those plodding still lifes. Michael Prodger jumps in to tell me to stop being such a hater – there is real pleasure in the close reading, he says. Convinced? Finn

      9—“Like the Stasi in East Berlin”

      Ethan Croft scopes out a faction with traction in the Labour party. Blue Labour involves a “bricolage of calls for reindustrialisation and lower migration, inspired by Catholic social teaching”. Others write it off as a load of Tories. Its influence has gone up, then down, then up, and so on. Right now they’re riding high. Ethan never fails to provide your quotient of gossip and Labour infighting. George

      Elsewhere

      Naomi Klein: surrealism against fascism (from the brilliant new mag, Equator)

      Why would China want to trade with us?

      Guardian investigates the Free Birth Society

      New Yorker: Airport lounge wars

      Atlantic: Stranger Things comes to an exhausting end

      Ryan Lizza/Olivia Nuzzi latest

      Gamma the tortoise dies in her prime, at 141 :(

      Recipe of the week: Nigel Slater’s pear and chocolate crumble (a crowd pleaser)

      And with that…

      Something smells fishy! And snail-y. And wine-y. I am talking, of course, about the recent spate of luxury grocery theft. Some thieves have stolen €90,000 worth of snails, intended for the restaurant trade. The producer (funny word for that job, I thought) said he was shocked when he learnt of the disappearance of 450kg of snails from his farm in Bouzy, in – get this – the Champagne region of France. The Times described the theft as “yet another blow to a struggling sector”.

      Meanwhile, closer to home in Chelsea, a woman has been caught on CCTV making off with a box of langoustines, stolen from the doorstep of the Michelin-starred restaurant Elystan Street. That’s about £200 worth of big prawns. And in Virginia, a couple posed as wealthy collectors in order to secure private tours of restaurant wine cellars. While one distracted the sommelier, the other swiped. In their haul? A rare 2020 Romanée-Conti, worth $24,000.

      I can’t help but think about the Louvre jewel heist in October: a crime of extraordinary effort. To pull it off, you do not just need to outsmart Louvre security, you then have to work out how to sell the things. And as Michael explains, flogging stolen jewels without alerting the authorities is a hard task. Snail theft is starting to sound appealing: no need for a cross-border pan-European crime network or experts in recutting precious stones; just a hot oven, some salted butter, chopped parsley and a splash of dry white, and you have already succeeded.

    1. Gender Equality: An Analysis of the Origins of Patriarchy and of Alternative Models

      Summary

      This synthesis examines the thesis that patriarchy is not a natural, immutable law but a historical construction. Drawing on historical, archaeological and anthropological evidence, it shows that gender relations have taken very diverse forms over the course of human history. Equality has not only existed; it persists in some contemporary matrilineal societies.

      The analysis finds that the emergence of the first states was a decisive factor in the institutionalisation and worldwide spread of patriarchy as a tool of demographic and social control. The case of Iceland shows that modern equality is a recent and fragile achievement, the fruit of a determined collective struggle, not a return to some original state.

      In conclusion, recognising that social structures are mutable opens the way to building an egalitarian future, on the understanding that the current social order is not inevitable.

      --------------------------------------------------------------------------------

      1. Questioning Patriarchy as a Natural Order

      The common view presents the struggle for women's rights as an endless fight against a patriarchy assumed to be a constant of human history: a perpetual rebellion against exclusion from power, unpaid domestic labour and violence.

      The documentary fundamentally challenges this narrative by asking the central question: "have women and men never been equal?" It suggests that, far from being a "natural law", patriarchal organisation is only one of the many ways human societies have structured gender relations over time.

      2. The Modern Struggle for Equality: The Case of Iceland

      Iceland is often cited as a model of gender equality in the 21st century, with equal pay written into law, parental leave widely taken up by fathers, and women in the highest political offices. This situation, however, is the result of a recent and intense struggle.

      The context of inequality: in the 1970s and 80s the picture was radically different. The anthropologist Sigríður Dúna Kristmundsdóttir, co-founder of Iceland's first feminist party in 1983, reports that at the time women earned barely 60% of their male colleagues' wages. She compares women's growing frustration to a "volcanic eruption".

      The historic strike of 24 October 1975: faced with this inequality, 90% of Icelandic women refused to work on the "Women's Day Off" (Kvennafrídagurinn). The strike covered both paid work and domestic tasks (cooking, childcare, housework).

      Impact: society was "totally paralysed", creating a "complete state of emergency". Sigríður Dúna Kristmundsdóttir recalls:

      "I could smell burnt meat in the streets. The men were doing the cooking [...]. The smell of burnt meat still reminds me of that day."

      Political and legislative consequences: the event triggered a spectacular acceleration of reform:

      1976: the equal pay law came into force.

      1980: the election of Vigdís Finnbogadóttir, the first woman in the world to be democratically elected president.

      Later, the entry into parliament of the "Women's List", of which Sigríður Dúna was a member, "revolutionised Icelandic politics".

      3. Rereading History: From the Vikings to Prehistory

      Historical and archaeological analysis reveals traces of non-patriarchal social organisation, contradicting the idea of universal male domination.

      A. The Status of Viking Women: Between Myth and Reality

      The sagas and archaeological finds complicate the image of a strictly patriarchal Viking society.

      Rights and autonomy: the 13th-century sagas, such as the Laxdæla saga, portray upper-class women as intelligent and strong-willed. The first Icelandic law code, the Grágás, confirms that Viking women could divorce and, as widows, inherit and manage their own fortunes.

      Limits of this power: this status did not apply to everyone. It mainly concerned the elite and excluded slaves. Above all, women had no direct political power and no say at the Þing, the popular assembly; their influence was indirect, through their ties to powerful men.

      The Birka warrior: the discovery in 2017 that the grave of a high-ranking Viking warrior, excavated in Sweden in 1878, in fact contained the skeleton of a woman (proven by DNA) forced a re-evaluation of assumptions about gender roles, illustrating how present-day ideas are projected onto the past.

      B. Evidence of Equality in Prehistoric Societies

      Prehistoric archaeology strongly suggests the existence of egalitarian societies.

      Burial practices: in sumptuous Iron Age graves, women were buried with the same treasures (chariots, weapons, jewellery) as men, indicating a potentially equal social status in death as in life.

      The case of Çatalhöyük: this Anatolian site, one of the oldest known settlements (9,000 years old), offers striking evidence. Analysis of lung residues and skeletons showed that men and women spent equal amounts of time indoors and outdoors, and that the difference in their height was minimal.

      The science journalist Angela Saini, who studied the site, reports the archaeologists' conclusion: "in the earliest human settlements, men and women led more or less the same life [...] on an equal footing".

      4. The Debate over Matriarchy and Matrilineality

      The concept of matriarchy is often misunderstood. Anthropology prefers the term matrilineal society to describe non-patriarchal social models.

      Critique of the concept of matriarchy: the archaeologist Brigitte Röder regards "matriarchy" and "patriarchy" as "unsuitable scientific categories", because they rest on a binary model of gender that is itself a product of 18th-century bourgeois society.

      Marija Gimbutas's theory: in the 1970s the archaeologist Marija Gimbutas posited the existence of peaceful matriarchal cultures in early Europe, centred on the cult of a mother goddess, which were supposedly destroyed by tribes of patriarchal horsemen. The theory has been criticised for its very loose reading of the archaeological data, many artefacts being ambiguous (the "goddess" possibly being a phallus).

      Matrilineal societies: there is evidence of more than 160 matrilineal cultures, in which descent, inheritance and social status pass through the mother.

      The example of the Mosuo (China): this ethnic group living around Lake Lugu offers a contemporary case.

      Social organisation: the grandmother is the head of the family. All members of the maternal line live together, and the women manage the finances and important affairs.

      Relationships and descent: men remain living in their mother's house. Romantic relationships take the form of the "visiting marriage", in which the man visits the woman at night but does not live with her. The mother's brother takes on the role of social father to the children.

      Stability: according to Jiong Zhidui, director of the Mosuo museum, this family model is "the most stable there is", because the family's homogeneity limits conflict.

      5. The Emergence and Imposition of Patriarchy

      Patriarchy did not impose itself through a single, sudden defeat of the female sex, but through a gradual and insidious process closely tied to the birth of states.

      The key role of the state: the emergence of the first states in Mesopotamia (roughly 5,000 years ago) was a turning point. Managing large populations required demographic control and a strict organisation of society.

      The codification of gender roles: state elites established a clear division of roles (who fights, who looks after the children, who works) and recorded them in lists sorted by gender. Once these differences were "set in stone", they began to be perceived as natural.

      An instrument of control: patriarchy became an effective instrument for controlling the population. As Angela Saini puts it: "Systems of domination do not draw their power from brute force alone; they also exert it by imposing ideas."

      Global expansion: this model spread across the world through the expansion of states, which supplanted other forms of social organisation. Laws on marriage, divorce and adultery became ever stricter for women, legitimising and entrenching a social order that favoured a male elite at the top.

      6. Conclusion: Equality as a Possible Horizon

      Analysing the different forms of social organisation across human history leads to a fundamental conclusion: there is no "natural" form of cohabitation between men and women.

      The mutability of societies: the diversity of observed models proves that social structures are cultural constructions and can change. Patriarchy itself is a construction.

      The mechanism of patriarchy: its most effective lever is to "set people against one another and make us forget that societies can change". The idea of a fundamental opposition between men and women is a product of this system.

      An ongoing struggle: even in a country as advanced as Iceland, problems such as domestic violence and misogyny persist. Sigríður Dúna Kristmundsdóttir concludes: "I wonder whether there will ever be perfect equality anywhere. Perhaps it is only a myth. Either way, there is still a great deal to do."

      Looking to the future: there is no need to prove the existence of a perfectly egalitarian past in order to imagine an egalitarian future. It is enough to understand that what is considered "normal" is not immutable.

      The struggle for women's rights belongs to the present.

    1. The pHARe Programme: Strategy and Implementation of the Fight Against School Bullying

      Synthesis

      This document presents a comprehensive analysis of France's policy against school bullying, centred on the pHARe programme. Launched as a pilot in 2019 and reinforced by the interministerial plan of September 2023, pHARe is a systemic, whole-school response deployed from primary school through to lycée. It is built around three main ambitions: prevention, detection and the delivery of concrete solutions.

      The strategy rests on a "collective responsibility" mobilising the entire educational community: staff, pupils and parents. Data from a large-scale annual survey show that while bullying in the strict sense affects 3 to 5% of pupils, situations of vulnerability and repeated violence affect a much larger share, reaching up to 20% and 30% of pupils respectively.

      The pillars of the programme include training for all staff, dedicated resource teams, the deployment of more than 120,000 student ambassadors, and an annual questionnaire for every pupil from CE2 to terminale. A major new feature now allows pupils to give their name on this questionnaire so that they can be helped directly.

      Parental involvement is a strategic axis, evolving from simple information towards active participation through awareness workshops and the new parent-ambassador scheme, with the aim of strengthening prevention and dialogue. A range of resources, such as the online platform "Des clés pour les familles", the case-handling protocols and the national 30 18 helpline, is available to equip every actor.

      The end goal is to build a solid "educational alliance" that guarantees a safe school climate, an essential condition for every pupil's wellbeing and learning.

      --------------------------------------------------------------------------------

      1. Context and Scale of the Bullying Phenomenon

      The policy against school bullying is a long-term effort, but it has accelerated significantly in response to a phenomenon perceived as "deepening".

      History and policy framework: the pHARe programme was launched as a pilot in 2019. The policy was strengthened and given new resources by the interministerial plan of September 2023, structured around three axes: prevention, detection and solutions. It forms part of a broader vision of protecting pupils' physical and mental health, which the ministry regards as one of the school's two pillars, alongside instruction.

      Measuring the phenomenon: to better understand and combat bullying, the ministry relies on a large annual survey conducted by the DEPP (Direction de l'évaluation, de la prospective et de la performance) among more than 30,000 pupils, from CE2 to terminale.

      Key data on school bullying:

      - Bullying in the strict sense: 3% of primary-school pupils, 5% of collège pupils, 3% of lycée pupils.

      - Situations of vulnerability or fragility: close to 20% of primary-school pupils (17% specifically cited).

      - Repeated violence (insults, etc.): up to 30% of pupils across all levels (victims of at least two types of violence several times during the year).

      The ministry takes an "extensive view of the phenomenon", considering not only strict bullying but all forms of violence and distress when calibrating its action.

      2. The pHARe Programme: A Structured, Whole-School Approach

      The central objective of pHARe is to give every school, collège and lycée a structured and effective bullying-prevention plan. It relies on mobilising all actors and is organised through a progressive labelling system.

      2.1. The Pillars of the Programme

      1. Training the adults: training for all staff to spot weak signals, understand the mechanisms of bullying and know how to handle situations.

      2. Raising pupils' awareness: awareness sessions for all pupils, so that they understand what bullying is and how to respond.

      3. Student ambassadors: in collège and lycée, volunteer pupils are trained and supervised to act as attentive relays among their peers and to run prevention activities.

      4. Involving parents: parents are regarded as essential partners, with growing involvement at each level of the programme.

      2.2. The Labelling System

      Schools' engagement is structured by a three-level label that recognises their degree of involvement.

      Level 1 (mandatory for 100% of schools and establishments; around 80% are officially registered at this level via the monitoring platform). Key requirements:

      - A trained resource team (at circonscription level for primary schools, at establishment level for secondary).

      - Participation in the national day (9 November), with the annual questionnaire taken by all pupils (CE2 to terminale).

      - Information for parents about the programme.

      - Student ambassadors in place (secondary).

      Level 2 (voluntary): includes the Level 1 criteria and adds an awareness workshop for parents on a bullying-related theme.

      Level 3 (voluntary): includes the Level 1 and 2 criteria and adds the parent-ambassador scheme.

      3. The Key Actors and Their Roles

      The programme's success rests on a clear division of roles and active collaboration between the different actors.

      3.1. Resource Teams and Coordinators

      In every collège and lycée, a pHARe coordinator is appointed by the head of the establishment. The coordinator leads the resource team, made up of five trained staff, and rolls out all of the programme's actions. For primary schools, this team is shared at circonscription level.

      These teams are the experts in handling cases and follow a precise protocol.

      3.2. Student Ambassadors

      Numbers: more than 120,000 student ambassadors are active in collèges and lycées.

      Selection: they are chosen on a volunteer basis.

      Role: trained and supervised by adults, their mission is to look out for their peers, pass worrying situations on to adults and run awareness activities.

      Visibility: their identity is made known to all pupils through photo boards, badges or classroom presentations, so that they are easy to identify.

      3.3. Parent Ambassadors

      This scheme, corresponding to Level 3 of the label, is a priority area for development.

      Initiative: the process is initiated by the establishment, in consultation with parents.

      Role: their mission is not to resolve bullying cases, which remains the establishment's responsibility. Their role is centred on prevention:

      ◦ Raising awareness among other families.

      ◦ Helping to identify the signs of bullying.

      ◦ Pointing parents towards the right contacts.

      ◦ Promoting constructive communication with the establishment.

      Framework: a "mutual commitment charter" formalises the relationship of trust between parent ambassadors and the establishment. It is not necessary to be an elected parent representative to become a parent ambassador.

      4. Practical Tools and Resources

      A set of concrete tools underpins the anti-bullying policy.

      The annual questionnaire: taken by all pupils from CE2 to terminale between 6 and 21 November. As of this year, pupils may give their first and last name so that help can be provided more directly and quickly.

      Case-handling protocols: step-by-step methodological documents guide staff from the reporting of a situation through to its resolution. These protocols are public and downloadable from the ministry's website, guaranteeing the transparency of the process. The policy is that "no situation must go unanswered".

      The "Non au harcèlement - des clés pour les familles" platform: created with the CNED, it offers a free one-hour self-training course in four modules. It explains the phenomenon of bullying and the actions taken in schools.

      Ministry website (education.gouv.fr): centralises institutional information, communication campaigns (such as the annual clip "tous différents, jamais indifférent") and the contact details of the académie helplines.

      The 30 18 number: a free, confidential national helpline, open 7 days a week from 9am to 11pm. Run by the association e-Enfance, it offers a listening ear and advice and, where necessary, passes reports of school bullying to académie officials, who then contact the establishment concerned.

      5. Practical Recommendations for Parents

      How to report a situation

      The recommended reporting chain is as follows:

      1. Direct contact with the school: this is the first and main point of contact. Parents should speak to the management team, the pHARe coordinator, or any trusted adult within the school or establishment.

      2. Académie helplines: if direct contact is difficult or leads nowhere, each académie has a dedicated phone line, with numbers available on the ministry's and académies' websites.

      3. The 30 18: as a last resort or for outside advice, this national number takes the report and relays it to the Éducation nationale.

      Following the protocol

      Once a report is made, the protocol is triggered quickly. The establishment ensures the protection of the victim and opens a dialogue with all the parties involved. Parents are kept informed of how the protocol is being implemented by the team handling the case, typically the pHARe coordinator.

      Becoming a parent ambassador

      To become a parent ambassador, contact the management of your child's school to find out whether the scheme is under way or to propose starting it. The process is voluntary and rests on a discussion with the management team to agree objectives and arrangements, formalised in the commitment charter.

    1. s remem

      Change the font again for all three headers below ("small band", "medium band" and "large band") to match the font of the "our simple booking process" header.

    1. Our Capacity for Concentration: Decline or Adaptation?

      Summary

      This synthesis assesses the current state of human attention in the digital age, drawing on historical, psychological and neuroscientific perspectives. Far from the widespread idea of a general decline, the evidence points to a profound adaptation of our brain to new environmental demands.

      Our fundamental attentional capacity, the ability to process a limited number of items at once (between one and four), has remained stable since the 1960s. Objective tests even show an improvement in selective-attention performance over recent decades.

      The central finding is that attention is not a constant state but a rhythmic, oscillatory process. Our brain alternates at a very high frequency (every 250 milliseconds) between a state of intense sensory focus and a motor state that is more conducive to action and to distraction. This mechanism, inherited from more than 22 million years of evolution, provides essential cognitive flexibility.

      The digital environment, with its constant stream of notifications and content, has not destroyed our ability to concentrate but has fostered new skills, such as rapid task-switching and more efficient filtering of information. The real question is therefore not one of lost capacity but of self-determination: who, or what, controls our attention?

      The ability to sustain prolonged concentration is not gone; it can be relearned and strengthened through targeted training, demonstrating the continuing plasticity of our brain.

      --------------------------------------------------------------------------------

      1. The Myth of Declining Attention

      The idea that our ability to concentrate is deteriorating is a recurring worry, but it lacks solid scientific grounding.

      A historical anxiety: the debate over concentration is not new. It emerged in the 19th century with industrialisation, which demanded sustained attention to maximise productivity and safety. The nascent field of psychology then took up the study of attention in order to optimise workforce recruitment.

      The goldfish fable: in 2015 a widely repeated claim held that the human attention span (8 seconds) had dropped below that of a goldfish (9 seconds). The figure came from a Microsoft study measuring time spent on a web page. Rather than a deterioration, it may indicate an improvement in how efficiently we filter information online. As the document puts it, "to pay attention is to select information".

      Moral panics: every new technology has provoked similar fears. In the 18th century the novel was deemed dangerous; in the 20th, cinema. Today, social media and streaming are the scapegoats.

      2. The Fundamental Nature of Concentration

      The basic mechanisms of attention are well studied and reveal a stable, multifaceted capacity.

      A stable baseline capacity: laboratory tests, replicated regularly since the 1960s, show that our fundamental attentional capacity is limited and stable. We can focus on one to four items at a time, depending on their complexity.

      The two essential functions: attention plays a crucial double role:

      1. Selective processing: focusing our cognitive resources on relevant information.

      2. Filtering: blocking out interfering stimuli, whether external (noise, light) or internal (thoughts, emotions).

      The conditions for "flow": the psychologist Mihaly Csikszentmihalyi described "flow" as a state of total, effortless concentration in which one is absorbed by a satisfying task. This optimal state is reached when the difficulty of the task is perfectly balanced:

      Not too easy, to avoid boredom and mind-wandering.

      Not too hard, to avoid feeling overwhelmed and giving up.

      Intrinsic motivation is also an essential ingredient.

      3. The Hidden Rhythm of Our Brain

      Recent research shows that attention is a dynamic process, not a static state.

      A permanent oscillation: attention is not uniform. It follows a rapid, wave-like rhythm; experiments show that it "waxes and wanes" continuously.

      The sensory/motor alternation: our brain constantly switches between two states, roughly every 250 milliseconds:

      Sensory state: a peak of concentration, in which we are more focused and take in more information.

      Motor state: a trough in which our motor system is more active, making us more easily distracted but also quicker to act.

      An evolved cognitive flexibility: this rhythm is a fundamental evolutionary mechanism, also found in macaques, suggesting an origin going back at least 22 million years. This "attention-action alternation" lets us both concentrate intensely and react quickly to new, relevant information. Distraction is therefore an intrinsic component of concentration; the two are "two sides of the same coin".

      The illusion of total control: the idea that attention is a purely voluntary act is an illusion. The "cocktail party" effect shows that subjectively relevant information (such as our own name) can pierce our attentional filter almost automatically, redirecting our attentional "spotlight".

      4. Adapting to the Digital Age

      Contrary to received wisdom, the objective data do not support a story of degradation but rather one of adaptation.

      Rising performance: a meta-analysis covering 1990 to 2021 of the d2 test (a standardised test of selective attention) found that participants' average performance increased over the years. This indicates that there is "no reason to slide into catastrophism".

      New skills: the digital environment acts as intensive training for certain faculties:

      ◦ Digital-media users and video-game players develop great skill at switching rapidly between tasks.

      ◦ They sharpen their ability to detect relevant signals (visual, textual).

      ◦ This is "a gain, a necessary adaptation of our brain to what it has to do at a given moment".

      The challenges of the modern environment: while our baseline capacity has not declined, the context has changed.

      ◦ The "brain drain" effect: the mere presence of a smartphone can reduce the concentration and memory capacity available.

      ◦ Attractive alternatives: digital media offer powerful distractions, especially tempting when we face routine or boring tasks.

      5. The Attention Spectrum and the Question of Power

      The discussion of concentration goes beyond measuring performance to questions of neurodevelopment and personal control.

      The extremes of the spectrum: attention disorders (ADHD) can be understood as a breakdown of the attentional rhythm.

      Hyperactivity: individuals are stuck in the "trough" of the rhythm, the motor state, constantly jumping from one activity to another.

      Hyperfixation: individuals are stuck in the "peak" of the rhythm, the sensory state, unable to detach from the object of their concentration.

      Attention is described as "the mother of all cognitive functions", and its failures have dramatic consequences.

      The question of self-determination: the real contemporary issue is not capacity but control.

      The possibility of relearning: the capacity for prolonged concentration is not lost, merely less exercised. It can be retrained: activities such as reading a book or learning a musical instrument help us relearn how to sustain attention. It "will take a lot of work and practice, but it is not lost forever".

      Conclusion

      Our capacity for concentration has not shrunk; it has evolved to suit a hyper-connected world. The alarmist narrative ignores the remarkable plasticity of our brain and the new skills we are developing. The modern world is "neither better nor worse", simply "different".

      The challenge for each of us is to become more conscious and deliberate in managing this precious resource, finding a personal balance between external demands and internal goals. The fundamental question remains: what do we choose to give our attention to?

    1. Synthesis of the Experiments on Prejudice and Unconscious Racism

      Summary

      This synthesis analyses a social-investigation television programme which, through a series of hidden-camera experiments, shows how racial prejudices and stereotypes unconsciously shape behaviour, judgement and even the perception of reality. Fifty participants, believing they are taking part in a programme about "the mysteries of our brain", are confronted with everyday situations designed to reveal automatic biases.

      The results are unanimous: cognitive mechanisms such as social categorisation lead individuals to favour similarity, to judge visible minorities more harshly, and to perceive greater threat in their presence. The experiments also show that these biases are acquired in childhood and can lead minority groups to internalise stereotypes about themselves. Context proves crucial, able to weaken or reinforce stereotypes.

      The programme concludes that while these mechanisms are universal, awareness, education and encounters with others are powerful levers for dismantling them, reminding us that what brings human beings together is fundamentally stronger than what divides them.

      1. Experimental Set-up and Core Concepts

      The programme follows 50 volunteers who do not know the study's real subject: racism. The false title, "Les mystères de notre cerveau" (The Mysteries of Our Brain), is meant to guarantee spontaneous reactions. Their behaviour is observed and analysed by the presenter Marie Drucker, the actor and director Lucien Jean-Baptiste, and the social psychologist Sylvain Delouvée.

      The analysis rests on several key concepts from social psychology:

      Social categorisation: a natural, "lazy" mental mechanism by which the brain sorts individuals into groups (men/women, young/old, Black/white) to simplify the complexity of the world. This process heightens the perception of similarities within one's own group ("us") and of differences with other groups ("them"), and can generate mistrust and rejection.

      Stereotype: defined as "a set of preconceived ideas applied to an individual simply because of their membership of a group". Stereotypes are automatic in character and culturally absorbed (media, education, etc.).

      Prejudice: the attitude, positive or negative, that one develops towards a group on the basis of stereotypes.

      Discrimination: the behaviour that follows from prejudice, such as excluding someone from a job or housing.

      Sylvain Delouvée stresses that "all the experiments we are about to see are based on thoroughly documented scientific studies" and that the mechanisms studied (misogyny, sexism, homophobia, etc.) rest on the same foundations.

      2. The Similarity Bias and Snap Judgements

      The first experiments show an instinctive tendency to favour people who look like us and to make hasty judgements based on physical appearance.

      Experiment 1: The Waiting Room

      Set-up: participants enter a waiting room one by one, where two confederates are seated: a Black man (Jean-Philippe) and a white man (Florian), dressed identically. An empty chair is available next to each.

      Results: almost every participant chooses to sit next to the white man. Even when the confederates swap places to rule out any bias linked to the room's layout, the result is the same.

      Analysis: according to Sylvain Delouvée, this behaviour is not "racist as such" but reflects a search for similarity: "We seek out people who look like us." It is an almost "reptilian" mechanism, inherited from early tribes wary of difference. Lucien Jean-Baptiste points out the dramatic consequences of this bias in contexts such as "access to housing" or job-seeking.

      Experiment 2: The Mock Trial

      Set-up: acting as jurors, participants must hand down a prison sentence (from 3 to 15 years) to a defendant for "assault causing death without intent to kill". The crime and context are identical for everyone, but half the participants judge a white defendant and the other half a defendant of North African origin.

      Results: the North African defendant receives, on average, a heavier prison sentence. Strikingly, participants were 5 times more likely to give him the maximum sentence of 15 years.

      Analysis: the participants' comments reveal their biases: "He has a kind face, he doesn't look violent" for the white defendant; "Isn't there a life sentence?" for the North African one. Delouvée explains that this judgement is shaped by a "well-known built-in bias" absorbed through culture and the media, which associate certain categories of people with crime.

      3. Perceived Threat and Guilt

      The next experiments illustrate how racial stereotypes automatically trigger a perception of danger or guilt, leading to discriminatory reactions.

      Experiment 3: The Bicycle Theft

      Set-up: on hidden camera in the street, three actors (a white man, Johan; a man of North African origin, Bachir; a young blonde woman, Urielle) take turns sawing through a bike lock.

      Results:

      Johan (white): passers-by are indifferent or friendly. One woman even tells him he has "the face of an honest man".

      Bachir (North African): reactions are immediate and hostile ("That's not right, doing that"). Passers-by challenge him and call the police, who actually turn up, forcing the production team to step in.

      Urielle (blonde): several men stop spontaneously to offer help, never questioning whether the bike is hers.

      Analysis: the experiment reveals flagrantly discriminatory behaviour. The stereotype activates automatically ("is he part of my group?"), produces a prejudice ("I trust him or I don't") and triggers an action (calling the police). Lucien Jean-Baptiste testifies: "How many times have I walked into an apartment-building lobby and been asked, 'What are you doing here?'"

      Experiment 4: The Laser Game (the Shooter Bias)

      Set-up: armed with a laser-game pistol, participants must neutralise armed figures who appear as quickly as possible, while avoiding shooting those holding a phone. The figures are of different origins (white, Black, North African).

      Results:

      1. Participants shot nearly 4 times more often at unarmed Black or North African figures than at unarmed white figures.

      2. Faced with a dilemma in which a white man and a North African man appear armed at the same time, they were 4 times more likely to shoot the North African figure first.

      Analysis: this experiment, inspired by research on American police forces, illustrates the "shooter bias". It does not mean the participants are racists, but it exposes "the deep, automatic anchoring of a stereotype". Faced with a threatening situation, the brain clings to stereotypes in order to act, perceiving the scene as "even more threatening than it is".

      4. How Prejudice Takes Root in Childhood

      These experiments show that racial stereotypes are absorbed very early, not innately but by observing and modelling the adult world.

      Experiment 5: The Puppets

      Set-up: children aged 5 to 6 watch a puppet show in which Vanessa's snack has been stolen. Two suspects are presented: Kevin (white) and Moussa (Black). The children are asked to name the culprit.

      Results: a majority of children spontaneously point to Moussa as the most likely thief.

      Analysis: "It starts very early," Lucien Jean-Baptiste reacts. Delouvée notes that this "does not prove that children are naturally inclined to discriminate" but that they are highly sensitive to social norms and "absorb the stereotypes and prejudices of those around them".

      Experiment 6: The Doll Test

      Set-up: the programme presents the results of a replication of the famous test by the psychologists Kenneth and Mamie Clark (1940s), taken from the documentary "Noirs en France". Young children, including Black children, are shown a white doll and a Black doll and asked questions ("Which is the prettiest?", "The least pretty?").

      Results: the children, including the Black children, mostly name the white doll as the prettiest and the Black doll as the least pretty. One little Black girl says:

      "Because she's Black... when I grow up, I'll put on cream to become white."

      Analysis: the test is a tragic illustration of internalised stereotype, whereby members of a minority group end up absorbing the negative prejudices attached to them. The result, consistent across the decades, shows the power of cultural models and of one's surroundings.

      5. Stereotypes, Context and Cognitive Shortcuts

      This section brings together experiments showing how stereotypes work as mental shortcuts, how context can modulate them and why even "positive" prejudices are a problem.

      Experiment 7: Face Recognition ("They All Look Alike")

      Set-up: six actors (four white, two Asian) perform a short scene. Participants must then match each line of dialogue to the right actor using an app.

      Results: participants made nearly twice as many mistakes attributing lines to the Asian actors as to the white actors.

      Analysis: the phenomenon shows that the brain is less sensitive to "intracategory" differences in groups other than our own. As Delouvée explains, "from the moment we sort individuals into groups, this bias appears, this tendency to see members of a group that is not our own as all alike."

      Experiment 8: The Lecturers' Accents

      Set-up: three groups of participants attend the same lecture on AI, delivered by three different "experts".

      1. Group 1: a white actor putting on a strong German accent.

      2. Group 2: the same actor putting on a Marseille accent.

      3. Group 3: a genuine Black university professor, M. Diallo.

      Results:

      German accent: judged "very competent" and "serious", but "moderately warm".

      Marseille accent: judged "less competent" and "unconvincing", but "likeable" and "very warm".

      Black professor: participants are puzzled, struggle to characterise him and express doubts about his legitimacy ("For me, he's an actor").

      Analysis: the accent activates a stereotype that becomes the main criterion of judgement. The German is perceived as rigorous, the Marseillais as likeable but not serious. The Black professor fits no clear stereotype in the participants' minds, which creates cognitive dissonance. That he is the only genuine expert is the experiment's ironic conclusion.

      Experiment 9: The Sprinters (Positive Prejudice)

      Set-up: participants are asked which of two sprinters, one Black and one white, is more likely to win a race.

      Results: a majority answer the Black sprinter, relying on the cliché that "Black people run faster".

      Analysis: the programme dismantles this stereotype, explaining that it has no reliable scientific basis. Its persistence is tied to historical factors (the Black body associated with physical labour under slavery) and socio-cultural ones (sport as one of the few models of success available to young Black people). Delouvée calls this kind of belief a "very problematic positive prejudice", because it "takes away Black runners' credit for winning", reducing their success to a biological essence rather than to their work.

      Experiment 10: Word Association (the Role of Context)

      Set-up: three groups see a photo of the same Asian woman in three different contexts and must give the first word that comes to mind.

      1. Photo 1: eating with chopsticks.

      2. Photo 2: putting on make-up.

      3. Photo 3: wearing a white coat and stethoscope.

      Results:

      Photo 1: answers evoke origin ("Asia", "sushi", "Asian woman").

      Photo 2: answers evoke femininity ("make-up", "lipstick", "beautiful woman").

      Photo 3: answers evoke the profession ("doctor", "nurse", "hospital").

      Analysis: the experiment shows that context can erase or reinforce a stereotype. When the context supplies a more salient piece of information (profession, femininity), ethnic origin fades into the background.

      6. The Neurological and Memory Effects of Prejudice

      These final experiments explore the biological and cognitive underpinnings of prejudice, showing how it can dull empathy and even rewrite memories.

      Experiment 11: Empathy and Pain

      Set-up: the programme reports a neurological study measuring the brain responses of subjects (white and Black) watching a hand being pricked with a needle.

      Results:

      ◦ A white subject's brain reacts (empathy, "freezing") when a white hand is pricked, but not when a Black hand is.

      ◦ Conversely, a Black subject's brain reacts to the pain of a Black hand, but not of a white hand.

      ◦ Strikingly, when the hand is purple (a group about which no prejudice exists), the brains of both white and Black subjects respond with empathy.

      Analysis: this is the only experiment based on neurology. It reveals that "our prejudices erase our empathy towards people different from us". The brain is plastic, and it is "through encounter and education" that a more universal empathy can be developed.

      Experiment 12: The Counter-Stereotypical Photo and the Grapevine

      Set-up: participants study a street photo in which a man of North African origin gives a coin to a white man begging. Their memory is then tested. In a second stage, a word-of-mouth chain is set up to see how the information is passed on.

      Results:

      1. Memory test: nearly half the participants describe the scene with the roles reversed, claiming to have seen a white man giving money to a North African homeless man. One participant who describes the scene correctly calls it "very unsettling".

      2. Word of mouth: even when the first person describes the scene correctly, the information quickly distorts as it is passed along. The roles reverse, and the act of charity even turns into "an altercation".

      Analysis: the photo is "counter-stereotypical": it contradicts the brain's expectations. To simplify, the brain "corrects" reality to make it fit the stereotype (the North African man in a precarious situation). The word-of-mouth experiment, based on a classic study of rumour (Allport & Postman, 1940s), shows how "our beliefs and stereotypes let us read this scene" and transform it.

      7. Final Reveal and Shared Humanity

      At the end of the day, the programme's real title, "Sommes-nous tous racistes ?" (Are We All Racists?), is revealed to the participants, prompting shock and reflection. The aim, they are told, was not to judge but to show that "we all have the same mechanisms firing in our heads".

      The final experiment sets out to break down the divisions. Split into groups of distinct colours, participants are invited to step into the centre if they feel concerned by a series of questions, ranging from the light-hearted ("Who has ever resold Christmas presents?") to the deeply personal.

      "Who among you feels very lonely?" Several people, from different groups, meet in the centre, sharing a common vulnerability.

      "Who among you was bullied at school?" A large number of participants step forward, sharing moving accounts of bullying linked to skin colour or other differences.

      This closing sequence shows visually that, despite belonging to different groups, fundamental human experiences (joy, love, loneliness, suffering) are shared. The programme ends with a call to recognise this common humanity:

      "What brings us together is always stronger than what divides us."

    1. Further references (in Step 3)

      This doesn't work. These materials need to be provided when they are required (e.g. in Step 3). There's a bigger issue, however, which is that the reference materials will need quite some time to read and digest. They would have to be read and understood by the peers long before the workshop started. There's also the additional (major) issue that only 3 of the 7 weblinks you give are open sources, so only 3 are admissible as part of this OER.

    2. Bridgstock, R. (2013). Not a dirty word: Arts entrepreneurship and higher education. Arts and Humanities in Higher Education, 12(2-3), pp. 122–137. https://doi.org/10.1177/1474022212465725

      This is not open access - you need to remove it from the OER as it's not an OER.

    3. This open toolkit provides access to multiple websites and resources concerning arts careers. By tracking keywords, it aims to reshape your understanding of career opportunities within the arts sector and identify effective solutions to address your specific “keyword” challenges.

      This isn't a step, it's a statement of intent - it's what the toolkit aims to do. Step 3 needs to explain how to track the keywords, why they are being "tracked" and to what ends they are being tracked...

    1. I've never tried Wexford before either, but often those sorts of products are mass produced in China by one company and just re-labeled for half a dozen different companies, so searching around may find something similar under a different name.

      I will say that some of the ones you listed tend to be the cheapest, lower quality cards I've run across. I use the Amazon Basics a lot, but primarily because they had a sale on their bricks of 500 cards a year or two back and I picked up 20 of them for $2.50 each.

      Oxford cards are some of the smoother (inexpensive) cards I've tried in the past, but even their paper quality has shifted a bit over the past 15 years.

      If you're doing 3x5 cards in blank, Brodart's library catalog cards are of a much higher quality and durability without breaking the bank and they're wonderfully smooth as well. https://www.shopbrodart.com/

      Stockroom plus has some great quality, smooth cards, but I've only ever seen them in gridded format and never plain or lined: https://www.amazon.com/Grid-Index-Cards-Inches-White/dp/B08BJ11LWC/

      Notsu also has some high quality smooth cards, but I don't think I've seen them in lined format and they can tend toward being very expensive.

      If you have the funds and want something incredibly smooth, try the Exacompta Bristol cards made by Clairefontaine. Their manufacturing process is dramatically different and they're incredibly smooth, particularly for fountain pen use. The downside is that they can be almost 3 times more expensive than other brands. They do carry their cards in a wide variety of sizes and formats though.

      One of these days I ought to lay out a grid of the more common cards and do some more serious reviews.

      reply to https://old.reddit.com/r/indexcards/comments/1p8xog6/looking_for_index_card_recommendations_similar_to/

    1. AESH: The Overlooked and Precarious Pillar of Inclusive Schooling

      Executive Summary

      This briefing analyses the working conditions, role and lack of recognition of the Accompagnants d'Élèves en Situation de Handicap (AESH), the support workers for pupils with disabilities, a job considered indispensable to France's inclusive-school project.

      It reveals a fundamental tension: while AESH are essential to the schooling of nearly 500,000 pupils and take great pride in their mission, they are subjected to systemic institutional mistreatment.

      This situation is characterised by extreme wage precarity, the absence of any qualifying training, an ill-defined hierarchy, and a lack of symbolic and material recognition.

      The permanent improvisation ("bricolage") and the vagueness surrounding their duties, however convenient for the institution, not only wear down these professionals but also undermine the ideal of inclusive schooling, by making AESH responsible for compensating for the system's failings.

      The analysis highlights that neglect of this profession is inseparable from neglect of the pupils they support.

      1. Definition and Complexity of the AESH Role

      Although central to the implementation of the 2005 and 2019 laws on inclusive schooling, the AESH role remains poorly known and loosely defined. It belongs to the tradition of "care" work but struggles to establish itself as an educational profession in its own right.

      Three Core Strands: The work is organised around three main missions:

      1. Support with access to learning.
      2. Support with socialisation and integration into the class group.
      3. Support with everyday personal care.

      A Central Relational Dimension: Beyond these missions, the job is deeply relational.

      The AESH interacts constantly not only with the pupil (often in a one-to-one relationship) but also with teachers and the other adults in the school, in order to adapt the environment to the pupil's needs.

      An Interface Role: AESH act as a "bridge" or a "buffer" between the pupil, the class group and the teachers. They are often required to "absorb the system's dysfunctions" so that schooling can take place.

      Tasks Beyond the Official Remit: In practice, their duties can extend well beyond the official framework, including supervising entire classes or performing complex care procedures (such as changing a pupil's tracheostomy cannula) without adequate training, effectively turning them into carers.

      2. A Profession Subjected to Institutional Mistreatment

      A major theme is the paradox experienced by AESH: great pride in the work accomplished and its social usefulness, set against a sense of mistreatment and contempt from the institution.

      The Lack of Symbolic Recognition: This mistreatment shows up in everyday "micro-exclusions":

      Invisibilisation: systematic omission from official communications from management (for example, holiday greetings).

      Exclusion from Shared Spaces: "teachers' rooms" that are never renamed "adults'" or "staff" rooms, symbolically excluding AESH.

      Absence from Key Meetings: AESH are often "pushed out" of the Équipes de Suivi de la Scolarisation (ESS), even though their input is crucial to assessing the pupil's needs.

      An Ill-Defined and Oppressive Hierarchy: The chain of command is poorly defined, creating an uncomfortable situation. One AESH sums up the feeling in a single sentence:

      "In my school, everyone is my boss."

      The Weight of Contradictory Demands: AESH must constantly arbitrate between conflicting values.

      For example, their mission is to combat the stigmatisation of the pupil, while they themselves are part of an arrangement (ULIS units, one-to-one support) that is in itself stigmatising.

      3. Wage Precarity and the Hardship of the Work

      The material conditions of AESH are marked by extreme precarity, reflecting how little value the institution places on their work.

      Pay: Paid at the hourly minimum wage (SMIC), on part-time contracts that leave many of them below the poverty line.

      Multiple jobs: Most AESH are forced to combine several jobs (school canteen, homework help, home care) to make ends meet.

      Bonuses: Access to the REP/REP+ (priority education) bonuses is very recent (2023) and the amounts are small (around €80).

      Physical strain: The job causes musculoskeletal disorders, particularly when assisting pupils (personal care, transfers) in unadapted buildings.

      Emotional load: The mental and emotional burden is immense, linked to managing crises, the constant fear of an incident ("the accident"), attachment to the pupils and uncertainty about their future.

      4. A Glaring Lack of Professional Training

      The absence of adequate training is a central criticism, perceived both as a mark of contempt and as a source of professional difficulty.

      An Inadequate "Adaptation to the Job": Official training amounts to 60 hours of induction, a legacy of the old subsidised contracts.

      It is described as a mere transfer of information via slide presentations, not genuine professional training.

      Many AESH have never even received this training.

      Self-Training as the Norm: Faced with the diversity of disabilities (autism, dyslexia, comorbidities, etc.), AESH are forced to train themselves on their own time, reading books and hunting for information in order to adapt to each pupil's specific needs.

      The Demand for Professional Status: Unions such as the SNES-FSU are calling for a genuine qualification at Bac+2 level (two years of post-secondary study), modelled on the CAPPEI for specialised teachers, in order to recognise and structure the profession.

      5. Inclusive Schooling: Between Ideal and "Making Do"

      Twenty years after the founding law of 2005, the inclusive-school project rests largely on improvisation ("bricolage") and the dedication of AESH, which weakens the system as a whole.

      Alarming Figures: Nearly 50,000 pupils with an official entitlement to support receive none, for lack of resources.

      A System Organised to Fail: According to Frédéric Grimaux, "if you wanted inclusive schooling to break down, you could not go about it any other way".

      The vagueness of the missions, the lack of time for coordination, and the refusal to recognise collaborative work as work in its own right organise this failure.

      Examples of Indignity: Degrading situations are reported, such as a pupil being changed on bin bags at the back of a classroom, behind a screen improvised from curtains, illustrating "the total indignity inflicted on the child, the workers and the school as an institution".

      Pooled Support (PIAL): The Pôles Inclusifs d'Accompagnement Localisés (PIAL) have intensified the pooling of resources, leading to situations in which AESH must support several pupils at once or work across geographically distant sites, to the detriment of the quality of support.

      6. The Weight of Language and Stigmatisation

      The vocabulary used in schools reveals the tensions and prejudices surrounding disability.

      A Proliferation of Acronyms: The institutional jargon (AESH, AVS, ULIS, ESS, GEVASCO, MDPH) is often incomprehensible to outsiders, including families and pupils.

      Infantilisation: Referring to secondary-school adolescents as "the children" contributes to the infantilisation of pupils with disabilities.

      Stigmatisation through Language: The term "Ulis" has become a playground insult ("T'es un Ulis", "You're a Ulis").

      Words such as "mongol" or "autiste" are still commonly used as slurs, showing how slowly attitudes are changing.

      The Persistence of "Normality": The notion of "normality" remains pervasive, including among some education professionals, which runs counter to the philosophy of an inclusive school that should value difference.

      7. Recent Developments and Future Concerns

      The situation of AESH could deteriorate further with forthcoming reforms, notably the Pôle d'Appui à la Scolarité (PAS).

      This scheme would extend AESH duties to all pupils with special educational needs (Traveller children, non-French-speaking pupils, pupils with "dys" disorders, etc.), not only those with disabilities.

      This change raises fears of a considerable increase in workload and mental load, with no corresponding training or pay increase, relying once again on the "dedication" of these professionals.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      From a forward genetic mosaic mutant screen using EMS, the authors identify mutations in glucosylceramide synthase (GlcT), a rate-limiting enzyme for glycosphingolipid (GSL) production, that result in EE tumors. Multiple genetic experiments strongly support the model that the mutant phenotype caused by GlcT loss is due to a failure of conversion of ceramide into glucosylceramide. Further genetic evidence suggests that Notch signaling is compromised in the ISC lineage and may affect the endocytosis of Delta. Loss of GlcT does not affect wing development or oogenesis, suggesting tissue-specific roles for GlcT. Finally, an increase in goblet cells in UGCG knockout mice, not previously reported, suggests a conserved role for GlcT in Notch signaling in intestinal cell lineage specification.

      Strengths:

      Overall, this is a well-written paper with multiple well-designed and executed genetic experiments that support a role for GlcT in Notch signaling in the fly and mammalian intestine. I do, however, have a few comments below.

      Weaknesses:

      (1) The authors bring up the intriguing idea that GlcT could be a way to link diet to cell fate choice. Unfortunately, there are no experiments to test this hypothesis.

      We indeed attempted to establish an assay to investigate the impact of various diets (such as high-fat, high-sugar, or high-protein diets) on the fate choice of ISCs. Subsequently, we intended to examine the potential involvement of GlcT in this process. However, we observed that the number or percentage of EEs varies significantly among individuals, even among flies with identical phenotypes subjected to the same nutritional regimen. We suspect that the proliferative status of ISCs and the turnover rate of EEs may significantly influence the number of EEs present in the intestinal epithelium, complicating the interpretation of our results. Consequently, we are unable to conduct this experiment at this time. The hypothesis suggesting that GlcT may link diet to cell fate choice remains an avenue for future experimental exploration.

      (2) Why do the authors think that UGCG knockout results in goblet cell excess and not in the other secretory cell types?

      This is indeed an interesting point. In the mouse intestine, it is well-documented that the knockout of Notch receptors or Delta-like ligands results in a classic phenotype characterized by goblet cell hyperplasia, with little impact on the other secretory cell types. This finding aligns very well with our experimental results, as we noted that the numbers of Paneth cells and enteroendocrine cells appear to be largely normal in UGCG knockout mice. By contrast, increases in other secretory cell types are typically observed under conditions of pharmacological inhibition of the Notch pathway.

      (3) The authors should cite other EMS mutagenesis screens done in the fly intestine.

      To our knowledge, the EMS screen on the 2L chromosome conducted in Allison Bardin’s lab is the only one prior to this work; it led to two publications (Perdigoto et al., 2011; Gervais et al., 2019). We have now included citations for both papers in the revised manuscript.

      (4) The absence of a phenotype using NRE-Gal4 is not convincing. This is because its delayed expression could fall after the requirement for the affected gene in the process being studied. In other words, sufficient knockdown of GlcT by RNAi would not be achieved until after the relevant signaling between the EB and the ISC occurred. Dl-Gal4 is problematic as an ISC driver because Dl is expressed in the EEP.

      This is an excellent point, and we agree that the lack of an observable phenotype using NRE-Gal4 could be due to delayed expression, which may result in missing the critical window required for effective GlcT knockdown. Consequently, we cannot rule out the possibility that GlcT also plays a role in early EBs or EEPs. We have revised the manuscript to soften this conclusion and to include this alternative explanation for the experiment.

      (5) The difference in Rab5 between control and GlcT-IR was not that significant. Furthermore, any changes could be secondary to increases in proliferation.

      We agree that it is possible that the observed increase in proliferation could influence the number of Rab5+ endosomes, and we will temper our conclusions on this aspect accordingly. However, it is important to note that, although the difference in Rab5+ endosomes between the control and GlcT-IR conditions appeared mild, it was statistically significant and reproducible. In our revised experiments, we have not only added statistical data and immunofluorescence images for Rab11 but also unified the approaches used for detecting Rab-associated proteins (in the previous figures, Rab5 was shown using U-Rab5-GFP, whereas Rab7 was detected by direct antibody staining). Based on this unified strategy, we optimized the quantification of Dl-GFP colocalization with early, late, and recycling endosomes, and the results are consistent with our previous observations (see the updated Fig. 5).

      Reviewer #2 (Public review):

      Summary:

      This study genetically identifies two key enzymes involved in the biosynthesis of glycosphingolipids, GlcT and Egh, which act as tumor suppressors in the adult fly gut. Detailed genetic analysis indicates that a deficiency in Mactosyl-ceramide (Mac-Cer) is causing tumor formation. Analysis of a Notch transcriptional reporter further indicates that the lack of Mac-Cer is associated with reduced Notch activity in the gut, but not in other tissues.

      Addressing how a change in the lipid composition of the membranes might lead to defective Notch receptor activation, the authors studied the endocytic trafficking of Delta and claimed that internalized Delta appeared to accumulate faster into endosomes in the absence of Mac-Cer. Further analysis of Delta steady-state accumulation in fixed samples suggested a delay in the endosomal trafficking of Delta from Rab5+ to Rab7+ endosomes, which was interpreted to suggest that the inefficient, or delayed, recycling of Delta might cause a loss in Notch receptor activation.

      Finally, the histological analysis of mouse guts following the conditional knock-out of the GlcT gene suggested that Mac-Cer might also be important for proper Notch signaling activity in that context.

      Strengths:

      The genetic analysis is of high quality. The finding that a Mac-Cer deficiency results in reduced Notch activity in the fly gut is important and fully convincing.

      The mouse data, although preliminary, raised the possibility that the role of this specific lipid may be conserved across species.

      Weaknesses:

      This study is not, however, without caveats and several specific conclusions are not fully convincing.

      First, the conclusion that GlcT is specifically required in Intestinal Stem Cells (ISCs) is not fully convincing for technical reasons: NRE-Gal4 may be less active in GlcT mutant cells, and the knock-down of GlcT using Dl-Gal4ts may not be restricted to ISCs given the perdurance of Gal4 and of its downstream RNAi.

      As previously mentioned, we acknowledge that a role for GlcT in early EBs or EEPs cannot be completely ruled out. We have revised our manuscript to present a more cautious conclusion and explicitly described this possibility in the updated version.

      Second, the results from the antibody uptake assays are not clear: i) the levels of internalized Delta were not quantified in these experiments; ii) additionally, live guts were incubated with anti-Delta for 3hr. This long period of incubation means that the observed results may not necessarily reflect the dynamics of endocytosis of antibody-bound Delta, but might also inform about the distribution of intracellular Delta following the internalization of unbound anti-Delta. It would thus be interesting to examine the level of internalized Delta in experiments with shorter incubation time.

      We thank the reviewer for these excellent questions. In our antibody uptake experiments, we noted that Dl reached its peak accumulation after a 3-hour incubation period. We recognize that quantifying internalized Dl would enhance our analysis, and we will include the corresponding statistical graphs in the revised version of the manuscript. In addition, we agree that during the 3-hour incubation, the potential internalization of unbound anti-Dl cannot be ruled out, as it may influence the observed distribution of intracellular Dl. We therefore attempted to supplement our findings with live imaging experiments to investigate the dynamics of Dl/Notch endocytosis in both normal and GlcT mutant ISCs. However, we found that the GFP expression level of Dl-GFP (either in the knock-in or transgenic line) was too low to be reliably tracked. During the three-hour observation period, the weak GFP signal remained largely unchanged regardless of the GlcT mutation status, and the signal resolution under the microscope was insufficient to clearly distinguish membrane-associated from intracellular Dl. Therefore, we were unable to obtain a dynamic view of Dl trafficking through live imaging. Nevertheless, our Dl antibody uptake and endosomal retention analyses collectively support the notion that MacCer influences Notch signaling by regulating Dl endocytosis.

      Overall, the proposed working model needs to be solidified as important questions remain open, including: is the endo-lysosomal system, i.e. steady-state distribution of endo-lysosomal markers, affected by the Mac-Cer deficiency? Is the trafficking of Notch also affected by the Mac-Cer deficiency? is the rate of Delta endocytosis also affected by the Mac-Cer deficiency? are the levels of cell-surface Delta reduced upon the loss of Mac-Cer?

      Regarding the impact on the endo-lysosomal system, this is indeed an important aspect to explore. While we did not conduct experiments specifically designed to evaluate the steady-state distribution of endo-lysosomal markers, our analyses utilizing Rab5-GFP overexpression and Rab7 staining did not indicate any significant differences in endosome distribution in MacCer deficient conditions. Moreover, we still observed high expression of the NRE-LacZ reporter specifically at the boundaries of clones in GlcT mutant cells (Fig. 4A), indicating that GlcT mutant EBs remain responsive to Dl produced by normal ISCs located right at the clone boundary. Therefore, we propose that MacCer deficiency may specifically affect Dl trafficking without impacting Notch trafficking.

      In our 3-hour antibody uptake experiments, we observed a notable decrease in cell-surface Dl, which was accompanied by an increase in intracellular accumulation. These findings collectively suggest that Dl may be unstable on the cell surface, leading to its accumulation in early endosomes.

      Third, while the mouse results are potentially interesting, they seem to be relatively preliminary, and future studies are needed to test whether the level of Notch receptor activation is reduced in this model.

      In the mouse small intestine, Olfm4 is a well-established target gene of the Notch signaling pathway, and its staining provides a reliable indication of Notch pathway activation. While we attempted to evaluate Notch activation using additional markers, such as Hes1 and NICD, we encountered difficulties, as the corresponding antibody reagents did not perform well in our hands. Despite these challenges, we believe that our findings with Olfm4 provide an important starting point for further investigation in the future.

      Reviewer #3 (Public review):

      Summary:

      In this paper, Tang et al report the discovery of a glucosylceramide synthase gene, GlcT, which they found in a genetic screen for mutations that generate tumorous growth of stem cells in the gut of Drosophila. The screen was expertly done using a classic mutagenesis/mosaic method. Their initial characterization of the GlcT alleles, which generate endocrine tumors much like mutations in the Notch signaling pathway, is also very nice. Tang et al checked other enzymes in the glycosylceramide pathway and found that the loss of one gene just downstream of GlcT (Egh) gives similar phenotypes to GlcT, whereas three genes further downstream do not replicate the phenotype. Remarkably, dietary supplementation with a predicted GlcT/Egh product, Lactosyl-ceramide, was able to substantially rescue the GlcT mutant phenotype. Based on the phenotypic similarity of the GlcT and Notch phenotypes, the authors show that activated Notch is epistatic to GlcT mutations, suppressing the endocrine tumor phenotype and that GlcT mutant clones have reduced Notch signaling activity. Up to this point, the results are all clear, interesting, and significant. Tang et al then go on to investigate how GlcT mutations might affect Notch signaling, and present results suggesting that GlcT mutation might impair the normal endocytic trafficking of Delta, the Notch ligand. These results (Fig X-XX), unfortunately, are less than convincing; either more conclusive data should be brought to support the Delta trafficking model, or the authors should limit their conclusions regarding how GlcT loss impairs Notch signaling. Given the results shown, it's clear that GlcT affects EE cell differentiation, but whether this is via directly altering Dl/N signaling is not so clear, and other mechanisms could be involved. Overall the paper is an interesting, novel study, but it lacks somewhat in providing mechanistic insight. With conscientious revisions, this could be addressed. We list below specific points that Tang et al should consider as they revise their paper.

      Strengths:

      The genetic screen is excellent.

      The basic characterization of GlcT phenotypes is excellent, as is the downstream pathway analysis.

      Weaknesses:

      (1) Lines 147-149, Figure 2E: here, the study would benefit from quantitations of the effects of loss of brn, B4GalNAcTA, and a4GT1, even though they appear negative.

      We have incorporated the quantifications for the effects of the loss of brn, B4GalNAcTA, and a4GT1 in the updated Figure 2.

      (2) In Figure 3, it would be useful to quantify the effects of LacCer on proliferation. The suppression result is very nice, but only effects on Pros+ cell numbers are shown.

      We have now added quantifications of the number of EEs per clone to the updated Figure 3.

      (3) In Figure 4A/B we see less NRE-LacZ in GlcT mutant clones. Are the data points in Figure 4B per cell or per clone? Please note. Also, there are clearly a few NRE-LacZ+ cells in the mutant clone. How does this happen if GlcT is required for Dl/N signaling?

      In Figure 4B, the data points represent the fluorescence intensity per single cell within each clone. It is true that a few NRE-LacZ+ cells can still be observed within the mutant clone; however, this does not contradict our conclusion. As noted, high expression of the NRE-LacZ reporter was specifically observed around the clone boundaries in MacCer deficient cells (Fig. 4A), indicating that the mutant EBs can normally receive Dl signal from the normal ISCs located at the clone boundary and activate the Notch signaling pathway. Therefore, we believe that, although affecting Dl trafficking, MacCer deficiency does not significantly affect Notch trafficking.

      (4) Lines 222-225, Figure 5AB: The authors use the NRE-Gal4ts driver to show that GlcT depletion in EBs has no effect. However, this driver is not activated until well into the process of EB commitment, and RNAis take several days to work, so the authors' conclusion that GlcT is "specifically required in ISCs" and not at all in EBs may be erroneous.

      As previously mentioned, we acknowledge that a role for GlcT in early EBs or EEPs cannot be completely ruled out. We have revised our manuscript to present a more cautious conclusion and described this possibility in the updated version.

      (5) Figure 5C-F: These results relating to Delta endocytosis are not convincing. The data in Fig 5C are not clear and not quantitated, and the data in Figure 5F are so widely scattered that it seems these co-localizations are difficult to measure. The authors should either remove these data, improve them, or soften the conclusions taken from them. Moreover, it is unclear how the experiments tracing Delta internalization (Fig 5C) could actually work. This is because for this method to work, the anti-Dl antibody would have to pass through the visceral muscle before binding Dl on the ISC cell surface. To my knowledge, antibody transcytosis is not a common phenomenon.

      We thank the reviewer for these insightful comments and suggestions. In our in vivo experiments, we observed increased co-localization of Rab5 and Dl in GlcT mutant ISCs, indicating that Dl trafficking is delayed at the transition to Rab7⁺ late endosomes, a finding that is further supported by our antibody uptake experiments. We acknowledge that the data presented in Fig. 5C are not fully quantified and that the co-localization data in Fig. 5F may appear somewhat scattered; therefore, we have included additional quantification and enhanced the data presentation in the revised manuscript.

      Regarding the concern about antibody internalization, we appreciate this point. We currently do not know if the antibody reaches the cell surface of ISCs by passing through the visceral muscle or via other routes. Given that the experiment was conducted with fragmented gut, it is possible that the antibody may penetrate into the tissue through mechanisms independent of transcytosis.

      As mentioned earlier, we attempted to supplement our findings with live imaging experiments to investigate the dynamics of Dl/Notch endocytosis in both normal and GlcT mutant ISCs. However, we found that the GFP expression level of Dl-GFP (either in the knock-in or transgenic line) was too low to be reliably tracked. During the three-hour observation period, the weak GFP signal remained largely unchanged regardless of the GlcT mutation status, and the signal resolution under the microscope was insufficient to clearly distinguish membrane-associated from intracellular Dl. Therefore, we were unable to obtain a dynamic view of Dl trafficking through live imaging. Nevertheless, our Dl antibody uptake and endosomal retention analyses collectively support the notion that MacCer influences Notch signaling by regulating Dl endocytosis.

      (6) It is unclear whether MacCer regulates Dl-Notch signaling by modifying Dl directly or by influencing the general endocytic recycling pathway. The authors say they observe increased Dl accumulation in Rab5+ early endosomes but not in Rab7+ late endosomes upon GlcT depletion, suggesting that the recycling endosome pathway, which retrieves Dl back to the cell surface, may be impaired by GlcT loss. To test this, the authors could examine whether recycling endosomes (marked by Rab4 and Rab11) are disrupted in GlcT mutants. Rab11 has been shown to be essential for recycling endosome function in fly ISCs.

      We agree that assessing the state of recycling endosomes, especially by using markers such as Rab11, would be valuable in determining whether MacCer regulates Dl-Notch signaling by directly modifying Dl or by influencing the broader endocytic recycling pathway. In the newly added experiments, we found that in GlcT-IR flies, Dl still exhibits partial colocalization with Rab11, and the overall expression pattern of Rab11 is not affected by GlcT knockdown (Fig. 5E-F). These observations suggest that MacCer specifically regulates Dl trafficking rather than broadly affecting the recycling pathway.

      (7) It remains unclear whether Dl undergoes post-translational modification by MacCer in the fly gut. At a minimum, the authors should provide biochemical evidence (e.g., Western blot) to determine whether GlcT depletion alters the protein size of Dl.

      While we propose that MacCer may function as a component of lipid rafts, facilitating Dl membrane anchorage and endocytosis, we also acknowledge the possibility that MacCer could serve as a substrate for protein modifications of Dl necessary for its proper function. Conducting biochemical analyses to investigate potential post-translational modifications of Dl by MacCer would indeed provide valuable insights. We have performed Western blot analysis to test whether GlcT depletion affects the protein size of Dl. As shown below, we did not detect any apparent changes in the molecular weight of the Dl protein. Therefore, it is unlikely that MacCer regulates post-translational modifications of Dl.

      Author response image 1.

      To investigate whether MacCer modifies Dl by Western blot: (A) Four lanes were loaded; the first two contained 20 μL of membrane extract (lane 1: GlcT-IR, lane 2: control), while the last two contained 10 μL of membrane extract. (B) Full blot images are shown under both long and short exposure conditions.

      (8) It is unfortunate that GlcT doesn't affect Notch signaling in other organs on the fly. This brings into question the Delta trafficking model and the authors should note this. Also, the clonal marker in Figure 6C is not clear.

      In the revised working model, we have explicitly described that the events occur in intestinal stem cells. Regarding Figure 6C, we have delineated the clone with a white dashed line to enhance its clarity and visual comprehension.

      (9) The authors state that loss of UGCG in the mouse small intestine results in a reduced ISC count. However, in Supplementary Figure C3, Ki67, a marker of ISC proliferation, is significantly increased in UGCG-CKO mice. This contradiction should be clarified. The authors might repeat this experiment using an alternative ISC marker, such as Lgr5.

      Previous studies have indicated that dysregulation of the Notch signaling pathway can result in a reduction in the number of ISCs. While we did not perform a direct quantification of ISC numbers in our experiments, our Olfm4 staining—which serves as a reliable marker for ISCs—demonstrates a clear reduction in the number of positive cells in UGCG-CKO mice.

      The increased Ki67 signal we observed reflects enhanced proliferation in the transit-amplifying region, and it does not directly indicate an increase in ISC number. Therefore, in UGCG-CKO mice, we observe a decrease in the number of ISCs, while there is an increase in transit-amplifying (TA) cells (progenitor cells). This increase in TA cells is probably a secondary consequence of the loss of barrier function associated with the UGCG knockout.

    1. Reviewer #1 (Public review):

      The study analyzes gastric fluid DNA content, identified as a potential biomarker for human gastric cancer. However, the study lacks overall logical coherence, and several key issues require improvement and clarification. In the opinion of this reviewer, some major revisions are needed:

      (1) This manuscript lacks a comparison of gastric cancer patients' stages with PN and N+PD patients, especially T0-T2 patients.

      (2) The comparison between gastric cancer stages seems to reveal only a difference between T3 patients and early-stage gastric cancer patients. This casts doubt on the reported differences between gastric cancer patients and normal controls, which may simply reflect the higher number of T3 patients.

      (3) The prognosis evaluation is too simplistic, only considering staging factors, without taking into account other factors such as tumor pathology and the time from onset to tumor detection.

      (4) The comparison between gfDNA and conventional pathological examination methods should be mentioned, reflecting advantages such as accuracy and patient comfort.

      (5) There are many problems in the figures and tables. Please make the titles, figure legends, footnotes, alphabetical ordering, etc. consistent.

      (6) The overall logic of the manuscript is not rigorous enough, too few factors are discussed, and the evidence presented cannot support the conclusions drawn.

    1. On the erosion of middle-class America. The poverty line is around $140k if actual costs are taken into account. The 1960s benchmark assumed the cost of food to be 1/3 of overall costs. Now it is around 7%, i.e. roughly 1/15 of overall costs. That pushes the poverty line up to about 5 times the level currently used, or some $150k USD per year.

      An example of a proxy being used as a 'measurement', with the assumptions behind the proxy never re-evaluated.
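
      A minimal worked version of that arithmetic, for readers who want to check the multiplier (the $30k official threshold below is a round illustrative figure, not a number from the note):

      ```latex
      % 1960s-style benchmark: poverty line = 3 x food budget, because food was ~1/3 of spending.
      % If food is now ~7% (about 1/15) of spending, the same logic scales the line by a factor of 5:
      \[
      \text{line}_{\text{updated}} = 15 \times \text{food}
                                   = 5 \times (3 \times \text{food})
                                   = 5 \times \text{line}_{\text{official}}
      \]
      % e.g. 5 x \$30k is roughly \$150k per year.
      ```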

    1. Reviewer #3 (Public review):

      Summary:

      This study concerns how macaque visual cortical area MT represents stimuli composed of more than one speed of motion.

      Strengths:

      The study is valuable because little is known about how the visual pathway segments and preserves information about multiple stimuli. The study presents compelling evidence that (on average) MT neurons shift from faster-speed-takes-all at low speeds to representing the average of the two speeds at higher speeds. An additional strength of the study is the inclusion of perceptual reports from both humans and one monkey participant performing a task in which they judged whether the stimuli involved one vs two different speeds. Ultimately, this study raises intriguing questions about how exactly the response patterns in visual cortical area MT might preserve information about each speed, since such information is potentially lost in an average response as described here.

      Reviewing Editor comment on revised version:

      The remaining concern was resolved.

    2. Author response:

      The following is the authors’ response to the previous reviews

      Reviewer #3 (Recommendations for the authors):

      The authors have done an excellent job of addressing most comments, but my concerns about Figure 5 remain. I appreciate the authors' efforts to address the problem involving Rs being part of the computation on both the x and y axes of Figure 5, but addressing this via simulation addresses statistical significance but overlooks effect size. I think the authors may have misunderstood my original suggestion, so I will attempt to explain it better here. Since "Rs" is an average across all trials, the trials could be subdivided in two halves to compute two separate averages - for example, an average of the even numbered trials and an average of the odd numbered trials. Then you would use the "Rs" from the even numbered trials for one axis and the "Rs" from the odd numbered trials for the other. You would then plot R-Rs_even vs Rf-Rs_odd. This would remove the confound from this figure, and allow the text/interpretation to be largely unchanged (assuming the results continue to look as they do).

      We have added a description and the result of the new analysis (line #321 to #332), and a supplementary figure (Suppl. Fig. 1) (line #1464 to #1477). 

      “We calculated R_s in the ordinate and abscissa of Figure 5A-E using responses averaged across different subsets of trials, such that R_s was no longer a common term in the ordinate and abscissa. For each neuron, we determined R_s1 by averaging the firing rates of R_s across half of the recorded trials, selected randomly. We also determined R_s2 by averaging the firing rates of R_s across the rest of the trials. We regressed (R − R_s1) on (R_f − R_s2), as well as (R − R_s2) on (R_f − R_s1), and repeated the procedure 50 times. The averaged slopes obtained with R_s from the split trials showed the same pattern as those using R_s from all trials (Table 1 and Supplementary Fig. 1), although the coefficient of determination was slightly reduced (Table 1). For ×4 speed separation, the slopes were nearly identical to those shown in Figure 5F1. For ×2 speed separation, the slopes were slightly smaller than those in Figure 5F2, but followed the same pattern (Supplementary Fig. 1). Together, these analysis results confirmed the faster-speed bias at the slow stimulus speeds, and the change of the response weights as stimulus speeds increased.”
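
      For readers who want to reproduce this kind of split-half control, a minimal sketch follows. It is illustrative only: the function name, array layout, and use of scipy.stats.linregress are assumptions, not the authors' actual analysis code.

      ```python
      import numpy as np
      from scipy.stats import linregress

      rng = np.random.default_rng(0)

      def split_half_slope(R, Rf_trials, Rs_trials, n_repeats=50):
          """Regress the bi-speed response on the faster-speed response, using
          slower-speed averages (Rs) from two disjoint halves of trials so that
          Rs is no longer a shared term on the two axes.

          R         : (n_neurons,) mean responses to the bi-speed stimulus
          Rf_trials : per-neuron arrays of single-trial responses, faster speed alone
          Rs_trials : per-neuron arrays of single-trial responses, slower speed alone
          """
          R = np.asarray(R, dtype=float)
          Rf = np.array([np.mean(t) for t in Rf_trials])
          slopes = []
          for _ in range(n_repeats):
              Rs1, Rs2 = [], []
              for trials in Rs_trials:
                  trials = np.asarray(trials, dtype=float)
                  idx = rng.permutation(len(trials))
                  half = len(trials) // 2
                  Rs1.append(trials[idx[:half]].mean())   # Rs from a random half of the trials
                  Rs2.append(trials[idx[half:]].mean())   # Rs from the remaining trials
              Rs1, Rs2 = np.asarray(Rs1), np.asarray(Rs2)
              # Both pairings, so neither half is systematically tied to one axis
              slopes.append(linregress(Rf - Rs2, R - Rs1).slope)
              slopes.append(linregress(Rf - Rs1, R - Rs2).slope)
          return float(np.mean(slopes))
      ```

      A slope near 1 would indicate a faster-speed-takes-all response; a slope near 0.5 would indicate an average of the two component responses.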

      An additional remaining item concerns the terminology weighted sum, in the context of the constraint that wf and ws must sum to one. My opinion is that it is non-standard to use weighted sum when the computation is a weighted average, but as long as the authors make their meaning clear, the reader will be able to follow. I suggest adding some phrasing to explain to the reader the shift in interpretation from the more general weighted sum to the more constrained weighted average. Specifically, "weighted sum" first appears on line 268, and then the additional constraint of ws + wf =1 is introduced on line 278. Somewhere around line 278, it would be useful to include a sentence stating that this constraint means the weighted sum is constrained to be a weighted average.

      Thanks for the suggestion. We have modified the text as follows. Since we made other modifications in the text, the line numbers are slightly different from the last version. 

      Line #274 to 275: 

      “Since it is not possible to solve for both variables, w_s and w_f, from a single equation (Eq. 5) with three data points, we introduced an additional constraint: w_s + w_f = 1. With this constraint, the weighted sum becomes a weighted average.”
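
      Restated symbolically (our paraphrase of the constraint in the quoted passage, not text from the manuscript), with R the response to the bi-speed stimulus and R_s, R_f the responses to the slower and faster component alone:

      ```latex
      \[
      R = w_s R_s + w_f R_f , \qquad w_s + w_f = 1
      \;\Longrightarrow\;
      R = w_f R_f + (1 - w_f)\, R_s
      \]
      % i.e. under the constraint the weighted sum collapses to a weighted average of
      % R_f and R_s, parameterised by the single weight w_f in [0, 1].
      ```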

      Also on line #309:

      “First, at each speed pair and for each of the 100 neurons in the data sample shown in Figure 5, we simulated the response to the bi-speed stimuli (R_e) as a randomly weighted average of R_f and R_s of the same neuron:

      R_e = a · R_f + (1 − a) · R_s

      in which a was a randomly generated weight (between 0 and 1) for R_f, and the weights for R_f and R_s summed to one.”

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)): The authors map the ZFP36L1 protein interactome in human T cells using UltraID proximity labeling combined with quantitative mass spectrometry. They optimize labeling conditions in primary T cells, profile resting and activated cells, and include a time course at 2, 5, and 16 hours. They complement the interactome with co-immunoprecipitation in the presence or absence of RNase to assess RNA dependence. They then test selected candidates using CRISPR knockouts in primary T cells, focusing on UPF1 and GIGYF1/2, and report effects on global translation, stress, activation markers, and ZFP36L1 protein levels. The work argues that ZFP36L1 sits at the center of multiple post-transcriptional pathways in T cells (which in itself is not a novel finding) and that UPF1 supports ZFP36L1 expression at the mRNA and protein level. The main model system is primary human T cells, with some data in Jurkat cells.

      The core datasets show thousands of identified proteins in total lysates and enriched biotinylated fractions. Known partners from CCR4-NOT, decapping, stress granules, and P-bodies appear, with additional candidates like GIGYF1/2, PATL1, DDX6, and UPF1. Time-resolved labeling suggests shifts in proximity during early activation. Co-IP with and without RNase suggests both RNA-dependent and RNA-independent contacts. CRISPR loss of UPF1 or GIGYF1/2 increases translation at rest and elevates activation markers, and UPF1 loss reduces ZFP36L1 protein and mRNA while MG132 does not rescue protein levels; UPF1 RIP enriches ZFP36L1 mRNA.

      Among the patterns worth noting: the activation state drives the principal variance in both the proteome and proximity datasets. Deadenylation, decapping, and granule proteins are consistently near ZFP36L1 across conditions, while some contacts dip at 2 hours and recover by 5 to 16 hours. Mitochondrial ribosomal proteins become more proximal later. UPF1 and GIGYF1 show time-linked behavior and RNase sensitivity that fits roles in mRNA surveillance and translational control. These observations support a dynamic hub model, though they remain proximity-based rather than direct binding maps.

      We thank the reviewer for their careful reading and thoughtful summary. Please find our point-to-point response below.

      Major comments

      1) The key conclusions are directionally convincing for a broad and dynamic ZFP36L1 neighborhood in human T cells. The data robustly recover established complexes and add plausible candidates. The time-course and RNase experiments strengthen the claim that interactions shift with activation state and RNA context. The functional tests around UPF1 and GIGYF1/2 point to biological relevance. That said, some statements could be qualified. The statement that ZFP36L1 "coordinates" multiple pathways implies mechanism and directionality that proximity data alone cannot prove. I suggest reframing as "positions ZFP36L1 within" or "supports a model where ZFP36L1 sits within" these networks.

      We thank the reviewer for considering our data ‘directionally convincing’ and robust, and for recognising the plausible new candidate interactors of ZFP36L1 it adds. We agree that the proposed wording is more appropriate and will change it accordingly.

      2) UPF1, as an upstream regulator of ZFP36L1 expression, is a promising lead. The reduction of ZFP36L1 protein and mRNA in UPF1 knockout, the non-rescue by MG132, and the UPF1 RIP on ZFP36L1 mRNA together argue that UPF1 influences ZFP36L1 transcript output or processing. This claim would read stronger with one short rescue or perturbation that pins the mechanism. A compact test would be UPF1 re-expression in UPF1-deficient T cells with wild-type and helicase-dead alleles. This is realistic in primary T cells using mRNA electroporation or virus-based systems. Approximate time 2 to 3 weeks, including guide design check and expansion. Reagents and sequencing about 2 to 4k USD depending on donor numbers. This would help separate viability or stress effects from a direct role in ZFP36L1 mRNA handling.

      We agree that a rescue experiment with wild-type and helicase-dead UPF1 in UPF1-deficient primary T cells would be interesting. Unfortunately, however, UPF1 knockout T cells are less viable and divide less (Supp Figure 6B), making further manipulations such as re-expression by viral transduction technically impossible. We will clarify this limitation in the Discussion and will state more explicitly that UPF1 promotes ZFP36L1 mRNA and protein expression, while acknowledging that the precise mechanistic contribution of UPF1 (e.g. to transcript processing, export, or surveillance) remains to be fully resolved.

      3) The inference that ZFP36L1 proximity to decapping and deadenylation complexes reflects pathway engagement is reasonable and, frankly, expected. Still, where the manuscript moves from proximity to function, the narrative works best when supported by orthogonal validation. Two compact additions would raise confidence without opening new lines of work. First, a small set of reciprocal co-IPs for PATL1 or DDX6 at endogenous levels in activated T cells, run with and without RNase, would tie the RNase-class assignments to biochemistry. Second, a short-pulse proximity experiment using a reduced biotin dose and shorter labeling window in activated cells would address whether long incubations drive non-specific labeling. Both are feasible in 2 to 3 weeks with minimal extra cost for antibodies and MS runs if the facility is in-house.

      We fully agree with the reviewer that orthogonal biochemical validation is valuable. Therefore, we already combined time-resolved proximity labeling (between 0-2h, 2-5h, and 5-16 hours) with time-resolved ZFP36L1 co-IPs ± RNase, to address the dynamic behavior and potential temporal broadening of the interactome.

      As to running reciprocal co-IPs for PATL1 or DDX6: we had in fact already considered following up on PATL1. However, we failed to identify specific antibodies; those we tested revealed many non-specific bands (see below). As to DDX6, antibodies suitable for IP have been reported, and we can therefore provide such a reciprocal IP as requested.

      To further address the raised points, we will (i) clarify how we define and interpret RNase-sensitive versus RNase-resistant classes, (ii) emphasize that some key factors (including PATL1) are already detected under shorter labeling conditions (2 h) in activated T cells (Fig 4C), and (iii) better highlight that our data provide strong candidates and pathway hypotheses that warrant further mechanistic experimentation in follow-up studies when moving from proximity to function.

      As to the suggested lower dose of biotin: as described in Figure S1, this proved unsuccessful. We attribute this to the reported dependence of primary T cells on biotin and their use of it (Refs 31-33 of this manuscript). For the same reason, we could not culture T cells in biotin-free medium prior to labeling, as most protocols do for cell lines.

      The reviewer also suggested shorter labeling times. Please be advised that the labeling times chosen were based on the reported protein induction and activity on target mRNAs: 1) ZFP36L1 expression peaks at 2h of T cell activation (Zandhuis et al. 2025; 10.1002/eji.202451641, Petkau et al. 2024; 10.1002/eji.202350700), 2) shows the strongest effects on T cell function between 4-5h, and 3) displays a late phase of activity at 5-16h (Popovic et al. Cell Reports 2023; 10.1016/j.celrep.2023.112419). We realize that additional explanation is warranted for this rationale, which we will provide.

      4) Reproducibility is helped by donor pooling, repeated T-cell screens, Jurkat confirmation, and detailed methods including MaxQuant, LIMMA, and supervised patterning. Deposition of MS data is listed. The authors should consider adding a brief, stand-alone analysis notebook in SI or on GitHub with exact filtering thresholds and "shape" definitions, since the supervised profiles are central to claims. This would let others reproduce figures from raw tables with the same code and workflows.

      We thank the reviewer for his or her suggestion and we have done as suggested. We will include the following link in the manuscript: https://github.com/ajhoogendijk/ZFP36L1_UltraID

      5) Replication and statistics are mostly adequate for discovery proteomics. The thresholds are clear, and PCA and correlation frameworks are appropriate. For functional readouts in edited T cells, please make the number of donors and independent experiments explicit in figure legends, and indicate whether statistics are paired by donor. Where viability differs (UPF1), note any gating strategies used to avoid bias in puromycin or activation marker measurements. These clarifications are quick to add.

      Please be advised that the current figure legends already contain the requested information at the bottom (which test was used, donor numbers, etc.). To highlight this better, we will indicate this point more explicitly in the methods section.

      Minor comments 6) The UltraID optimization in primary T cells is useful, but the long 16-hour labeling and high biotin should be framed as a compromise rather than a standard. A short statement about potential off-target labeling during extended incubations would set expectations and justify the RNase and time-course controls.

      Please be advised that 1) high biotin was required because primary T cells depend on biotin and 2) T cells increase their biotin absorption 2-7-fold upon activation (Refs 31-33 of the paper). For better time resolution, we included labeling windows of 2h (from 0-2h of activation), 3h (from 2-5h) and 9h (from 5-16h) of T cell activation. Nevertheless, we agree that we cannot exclude the risk of off-target labeling, which is in fact inherent to any labeling and pulldown method. We will include such a statement in the discussion.

      7) The overlap across T-cell screens and with HEK293T APEX datasets is discussed, but a compact quantitative reconciliation would help. A table that lists shared versus cell-type-specific interactors with brief notes on known expression patterns would make this point concrete.

      We thank the reviewer for this suggestion. We agree and we will include such table.

      8) Figures are generally clear. Where proximity and total proteome PCA are shown, consider adding sample-wise annotations for donor pools and activation time to help readers link variance to biology. Ensure all volcano plots and heatmaps display the exact cutoffs used in text.

      We agree that sample-wise annotations would be a nice addition. However, when testing this for e.g. Figure 1D&E, such differentiation into individual donors becomes illegible due to the many different variables already present. We therefore decided against it.

      9) Prior work on ZFP36 family roles in decay, deadenylation via CCR4-NOT, granules, and translational control is cited within the manuscript. In a few places, recent proximity and interactome papers could be more explicitly integrated when comparing overlap, especially where conclusions differ by cell type. A concise paragraph in Discussion that lays out what is truly new in primary T cells would help clarify the contribution of this work to the field.

      We appreciate this suggestion and will revise the Discussion accordingly. As to what is new in primary T cells, we would also like to mention that adding H2O2 (required for APEX labeling) to T cells results in immediate cell death; APEX-based labeling can therefore not be employed in T cells. This technical limitation further underscores the valuable contribution of the UltraID-based approach we present here.

      Reviewer #1 (Significance (Required)):

      Nature and type of advance. The study is a technical and contextual advance in mapping ZFP36L1 proximity partners directly in human primary T cells during activation. The combination of time-resolved labeling and RNase-class assignments is informative. The CRISPR perturbations provide an initial functional bridge from proximity to phenotype, especially for UPF1.

      Context in the literature. ZFP36 family proteins have long been linked to ARE-mediated decay, CCR4-NOT recruitment, and granule localization. The present work confirms those cores and extends them to include decapping and GIGYF1/2-4EHP scaffolds in primary T cells with temporal resolution. The UPF1 link to ZFP36L1 expression adds a plausible surveillance angle that merits follow-up. The cell-type specificity analysis versus HEK293T underscores that proximity networks vary with context.

      Audience. Readers in RNA biology, T-cell biology, and proteomics will find the dataset valuable. Groups studying post-transcriptional regulation in immunity can use the resource to prioritize candidate nodes for mechanistic work.

      Expertise and scope. I work on post-transcriptional regulation, RNA-protein complexes, and T-cell effector biology. I am comfortable evaluating the conceptual claims, experimental design, and statistical treatment. I am not a mass spectrometry specialist, so I rely on the presented parameters and deposited data for MS acquisition specifics.

      To conclude, the manuscript delivers a substantive proximity map of ZFP36L1 in human T cells, with useful temporal and RNA-class information. The UPF1 observations are promising and would benefit from a compact rescue to secure causality. A few minor additions for biochemical validation and transparency in replication would further strengthen the paper.

      We thank the reviewer for this comprehensive and constructive assessment. We agree that our study primarily provides a substantive and well-annotated proximity map of ZFP36L1 in human T cells, including temporal and RNA-class information, and that the UPF1 observations constitute a promising lead that merits more detailed mechanistic analysis in follow-up studies.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)): The manuscript by Wolkers and colleagues describes the protein interactome of the RNA-binding protein ZFP36L1 in primary human T-cells. There is inherent value in the use of primary cells of human origin, but there is also value in that the study is quite complete, as it is performed in a variety of conditions: T-cells that have been activated or not, at different time points after activation, and by two methods (co-IP and proximity labeling). One might imagine that this basically covers all what can be detected for this protein in T-cells. The authors report a large amount of new interactors involved at all steps in post-transcriptional regulation. In addition, the authors show that UPF1, a known interactor of ZFP36L1, actually binds to ZFP36L1 mRNA and enhances its levels. In sum, the work provides a valuable resource of ZFP36L1 interactors. Yet, how the data add to the mechanistic understanding of ZFP36L1 functions and/or regulation of ZFP36L1 remains unclear.

      We thank the reviewer for this positive assessment of our experimental setup, for considering the use of primary T cells of inherent value, and for regarding our study, with its variety of conditions, as comprehensive.

      Major comments: 1) Fig 2: It is confusing that the Pearson correlation to define ZFP36L1 interactors is changed depending on figure panel. In panels A-C, a correlation {greater than or equal to} 0.6 is used, while panel D uses a correlation > 0.5, which changes the nº of interactors. Then, this is changed again in Fig 3A for some cell types but not for others. Why has this been done? It would be better to stick to the same thresholds throughout the manuscript.

      Please be advised that the different correlation thresholds arise from the composition of the individual datasets: they differ in depth, in the number of controls, and in overall dynamic range. The initial proximity labeling experiment (Figure 2A–C) had a higher depth and a larger number of suitable control samples, which allowed us to apply a stricter cutoff (r ≥ 0.6). The time-course experiment and some of the cross-cell-type comparisons have fewer controls and somewhat lower depth, which required a more permissive threshold (e.g. r > 0.5) to retain known core interactors.

      We fully agree that this rationale needs to be explicit. In the revised manuscript we (i) clearly state for each dataset which correlation cutoff is used, (ii) emphasize that these thresholds are somewhat arbitrary and should not be directly compared across experiments, and (iii) highlight that our key biological conclusions do not depend on the exact boundary chosen but rather on the consistent enrichment of core complexes and pathways across datasets.
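
      For illustration only, the kind of bait-correlation filtering described above could be sketched as follows; this is a minimal sketch assuming a simple table of log2 intensities, and the function and column names are hypothetical rather than our exact analysis code.

```python
# Minimal sketch of a bait-correlation filter for proximity-labeling data.
# Assumes a pandas DataFrame of log2 LFQ intensities (proteins as rows,
# samples as columns) and a bait intensity profile over the same samples.
# All names are illustrative only.
import pandas as pd
from scipy.stats import pearsonr

def correlation_filter(intensities: pd.DataFrame, bait_profile: pd.Series,
                       r_cutoff: float = 0.6) -> pd.DataFrame:
    """Return proteins whose profile correlates with the bait above r_cutoff."""
    records = []
    for protein, profile in intensities.iterrows():
        r, p = pearsonr(profile.values, bait_profile.values)
        records.append((protein, r, p))
    result = pd.DataFrame(records, columns=["protein", "pearson_r", "p_value"])
    # A stricter cutoff (e.g. 0.6) is feasible in deeper datasets with more
    # controls; a more permissive one (e.g. 0.5) retains core interactors
    # in shallower experiments.
    return result[result["pearson_r"] >= r_cutoff].sort_values(
        "pearson_r", ascending=False)
```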

      2) Fig 3A: It would be nice to have the information of this Figure panel as a Table (protein name, molecular process(es), known or novel, previously detected in which cells) in addition to the figure.

      We agree that this would increase the value of our work as a resource for the community, and we will include such a table and merge it with the table requested by Reviewer 1.

      3) Fig 6: To what extent are the effects of UPF1 and GIGFYF1 knock-out on translation and T-cell hyper-activation mediated by ZFP36L1? If deletion of ZFP36L1 itself has no effect on these processes, it seems unlikely that it is involved. In this respect, I am not sure that Fig 6 contributes to the understanding of ZFP36L.

      We appreciate this conceptual question. In our dataset, ZFP36L1 knockout affects T-cell activation markers but does not recapitulate the increased global translation observed upon UPF1 or GIGYF1/2 deletion. We will discuss this finding more explicitly in the Results and Discussion. We will also discuss the possibility that other ZFP36 family members (e.g. ZFP36/TTP, ZFP36L2) may partially compensate for the absence of ZFP36L1 in some readouts. Moreover, we will emphasize that at this point it is not clear whether ZFP36L1’s contribution to UPF1 and GIGYF1 protein levels is direct or indirect.

      We nonetheless consider Fig. 6 an important component of the story, as it demonstrates that proximity partners emerging from the interactome (UPF1, GIGYF1/2) have measurable functional consequences on T cell activation and translational control, thereby illustrating how the resource can guide mechanistic hypotheses. We will now more carefully phrase this as “first indications of mechanism” and avoid implying that these phenotypes are mediated exclusively via ZFP36L1.

      4) Fig 7E: Differences in ZFP36L1 mRNA expression are claimed as a consequence of UPF1 deletion, and indeed there is a clear tendency to reduction of ZFP36L1 mRNA levels upon UPF1 KO. Yet the difference is statistically non-significant. Please, repeat this experiment to increase statistical significance. In addition, a clear discussion on how UPF1 -generally associated to mRNA degradation- contributes to increase ZFP36L1 mRNA levels would be appreciated.

      We would prefer to refrain from adding further repeats solely to increase statistical power. We find similar trends with n=3 at 0h as with n=7 at 3h of activation (Fig. 7E). We would rather stress that, despite the wide spread in overall expression levels, which most probably stems from using primary human material, the overall levels of ZFP36L1 mRNA are consistently lower in UPF1 KO T cells. We will include a point on how loss of UPF1 may possibly contribute to the decreased ZFP36L1 mRNA levels, as suggested.
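
      For transparency, the donor-paired comparison we refer to can be illustrated with a minimal sketch; the numbers below are placeholders rather than our measurements, and the paired non-parametric test shown is one reasonable choice, not necessarily the exact test reported in the figure legend.

```python
# Hypothetical sketch of a donor-paired comparison of ZFP36L1 mRNA levels
# in control vs UPF1 KO T cells. Real data are donor-matched qPCR values;
# the numbers below are placeholders, not measurements.
import numpy as np
from scipy.stats import wilcoxon

control = np.array([1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00])  # normalized per donor
upf1_ko = np.array([0.70, 0.85, 0.60, 0.90, 0.75, 0.65, 0.80])  # placeholder ratios

# Paired test across donors; with small n the power is limited, which is
# why the consistent direction of the effect matters as much as the p-value.
stat, p = wilcoxon(control, upf1_ko)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p:.3f}")
```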

      5) Fig 6A: The decrease in global translation by GIGFYF1 knock-out upon activation claimed by the authors is not clear in Fig 6A and is non-significant upon quantification. Please, modify narrative accordingly.

      Indeed, this was not phrased well. We will correct our description to match the statistical analysis.

      6) Page 6: The authors state 'This included the PAN2/3 complex proteins which trim poly(A) tails prior to mRNA degradation through the CCR4/NOT complex'. To the best of my knowledge, the CCR4/NOT complex does not degrade the body of the mRNA. Both PAN2/3 and CCR4/NOT are deadenylases that function independently.

      We thank the reviewer for highlighting this inaccuracy. PAN2/3 and CCR4–NOT are indeed both deadenylase complexes that act independently, rather than one acting strictly upstream of the other. We will correct this statement to say that PAN2/3 and CCR4–NOT cooperate in poly(A) tail shortening and do not themselves degrade the mRNA body, which is instead handled by the downstream decay machinery.

      7) Please, label all Table sheets. Right now one has to guess what is being shown in most of them. Furthermore, it would be convenient to join all Tables related to the same Figure in one unique Excel with several sheets, rather than having many Tables with only one sheet each.

      We appreciate this suggestion. In the revised supplementary files, all table sheets are clearly labeled to indicate the corresponding figure and dataset, and tables relating to the same figure are combined into a single Excel file with multiple sheets. We have already implemented this change.

      Minor comments: 8) Fig 1E: Shouldn't there be a better separation by biotinylation in the UltraID IP principal component analysis? In theory, only biotinylated proteins should be immunoprecipitated.

      In theory this should indeed be the case. In practice, however, pull-down experiments always suffer from background binding of proteins to tubes, beads, and other surfaces. These known background issues underscore the critical importance of control samples, which allow an unequivocal call of proteins that are enriched above background.

      In addition, as we indicated in the manuscript, primary T cells depend on biotin. This prevented us from using biotin-free medium, even for a short culture period (it resulted in cell death). Such biotin-free culture steps are routinely included in proximity labeling assays performed in cell lines. Owing to the continuous presence of biotin, some of the ‘background’ biotinylation signal may even be ‘real’. Nevertheless, the higher levels of biotin added during labeling result in increased signals, and statistical analysis against these controls identifies which proteins are above background, irrespective of the source. We will include a short note on this in the manuscript.
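
      To make explicit how the control samples are used, a simplified sketch of calling proteins above background is shown below. Our actual pipeline uses moderated statistics (LIMMA, as noted in the Methods); the simple Welch t-test here is only illustrative, and all names are hypothetical.

```python
# Illustrative sketch of calling proteins above background: compare bait
# samples against GFP-UltraID control samples per protein on log2 intensities.
# This is not the manuscript's moderated (LIMMA) analysis, only the principle.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def call_enriched(bait: pd.DataFrame, control: pd.DataFrame,
                  min_log2fc: float = 1.0, max_p: float = 0.05) -> pd.DataFrame:
    rows = []
    for protein in bait.index.intersection(control.index):
        b, c = bait.loc[protein].values, control.loc[protein].values
        log2fc = np.mean(b) - np.mean(c)          # intensities are already log2
        _, p = ttest_ind(b, c, equal_var=False)   # Welch t-test
        rows.append((protein, log2fc, p))
    res = pd.DataFrame(rows, columns=["protein", "log2fc", "p_value"])
    return res[(res["log2fc"] >= min_log2fc) & (res["p_value"] <= max_p)]
```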

      9) Fig 3B-E: Is the labeling not swapped, top (always +) is Biotin and bottom (- or +) is aCD3/aCD28?

      We thank the reviewer for catching this mistake; we have corrected it.

      10) Fig 7A data is from another paper, so I suggest to move this panel to Supplementary materials.

      We respectfully disagree. Please be advised that this figure panel results from our own re-analysis of published datasets. Re-analysis is a widely accepted approach and is certainly used for main figure panels. Our re-analysis of Bestehorn et al. 2025 (10.1016/j.molcel.2025.01.001) confirms that ZFP36L1 interacts with UPF1 and GIGYF1/2 in the RAW 264.7 macrophage cell line, which we consider an important consolidation of our findings. To highlight that this panel is a re-analysis of published data, we will indicate this (including the reference) below the data, as ‘extracted from Bestehorn et al’.

      11) Fig S1A: Why is there so much labeling in the UltraID only lane without biotin?

      This is a phenomenon also reported by others (Kubitz et al. 2022; 10.1038/s42003-022-03604-5: Figure 5A). UltraID alone is a small protein of 19.7 kDa, comparable to TurboID and related enzymes (Kubitz et al. 2022; 10.1038/s42003-022-03604-5). If not tethered to a specific compartment, such proximity labeling moieties can diffuse through the cytoplasm, biotinylating any protein they ‘bump’ into. Please be advised that we included this control precisely to show this effect and to substantiate why we use GFP-UltraID as control, which limits such background effects. We will articulate this reasoning more clearly in the Results section.

      12) Fig S1E: Please, explain better. What is WT?

      We thank the reviewer for catching this inconsistency. We will explicitly define “WT” as wild-type primary T cells (non-edited, non-transduced) and clarify how this relates to the other conditions.

      13) Fig S4B: Please, explain the labels on top of the shapes.

      We will update the figure, explaining how the labels above each shape are chosen (e.g. indicating specific clusters, functional categories, or experimental conditions, as appropriate). This should make the figure easier to interpret.

      14) Page 3: A time-course of incubation with biotin is lacking in Fig S1B, and thereby it is confusing that the authors direct readers to this figure when an increased to 16h incubation is claimed to be better.

      Please be advised that short labeling times yielded disappointing results in primary human T cells. Therefore, all initial analyses were performed with 16h of biotinylation, as depicted in Figure S1B. Only after achieving good results (presented in Figure 1B) did we perform time-course experiments (presented in Figure 4), lowering incubation times to 2h, 3h and 9h. We realize that this is confusing and will rephrase this point on page 3.

      Reviewer #2 (Significance (Required)): Strengths: A thorough repository of ZFP36L1 interactors in primary human T-cells. A valuable resource for the community. Weaknesses: There is little mechanistic insight on ZFP36L1 function or regulation.

      We would like to highlight that the purpose of our study was to provide a comprehensive interactome of ZFP36L1 and to study the dynamics of these interactions. In addition to known interactors, we identified novel putative interactors of ZFP36L1. We have indeed not followed up on all interactions, which we consider beyond the scope of this manuscript. Rather, we consider our study a toolbox for the community that can support their own studies.

      Nevertheless, in Fig 6-7, we show first indications of mechanistic insights on ZFP36L1 interactors, exemplifying how the findings of this resource paper can be used by the community.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      The authors have analyzed the interactome of ZFP36L1 in primary human T cells using a biotin-based proximity labeling method. In addition to proteins that are known to interact with ZFP36L1, the authors defined a multitude of novel interactions involved in mRNA decapping, mRNA degradation pathways, translation repressors, stress granule/p-body formation, and other regulatory pathways. Time-lapse proximity labeling revealed that the ZFP36L1 interactome undergoes remodeling during T cell activation. Co-IP for ZFP36L1 executed in the presence/absence of RNA further revealed the interactome and possible regulators of ZFP36L1, including the helicase UPF1. In addition to interacting with ZFP36L1, UPF1 promotes the ZFP36L1 protein expression, seemingly by binding to the ZFP36L1 mRNA transcript, and in some way stabilizing it. This comprehensive interactome map highlights the widespread interactions of ZFP36L1 with proteins of many types, and its potential roles in diverse T cell processes. Although somewhat descriptive, rather than hypothesis-testing, this work represents an important contribution to understanding the potential roles of the ZFP36 family proteins, and sets up many future experiments which could test molecular details.

      We thank the reviewer for these thoughtful points, and for recognizing our paper as an important resource contribution to the field that should support future experiments.

      Major points: 1) Can the authors discuss the specificity of the antibody for ZFP36L1 used in the Co-IP experiments? The antibody listed in Appendix A is abcam catalog number ab42473, although the catalog number for this antibody (unlike the others major ones used) is not listed in the Methods section - please add this to the Methods to make it easier for readers to find this detail. Could this antibody also be immunoprecipitating ZFP36 or ZFP36L2? Other antibodies have had cross-reactivity for the different family members. It is also notable that this antibody has been discontinued by the manufacturer (https://www.abcam.com/en-us/products/unavailable/zfp36l1-antibody-ab42473). Have the authors tried the current abcam anti-ZFP36L1 antibody being sold, catalog number ab230507?

      We appreciate the opportunity to clarify this important technical point. We have now added the catalog number (ab42473, Abcam) of the anti-ZFP36L1 antibody used for co-IP to the Methods section, in addition to Appendix A, to facilitate reproducibility. The antibody ab42473 has indeed been discontinued by the manufacturer. We have contacted the manufacturer on multiple occasions with no luck.

      We have evaluated multiple alternative anti-ZFP36L1 antibodies, including the currently available Abcam antibody ab230507. In our hands, these alternatives showed weaker or less specific detection of ZFP36L1 compared to the original antibody; only antibody 1A3 recognized ZFP36L1, and we therefore used this antibody for the co-IP. Importantly, even though its signal is lower than that of the original antibody, the migration patterns observed with ab42473 in our co-IP experiments match the expected molecular weight of ZFP36L1 and do not suggest substantial cross-reactivity with ZFP36 or ZFP36L2, which display distinct sizes (we will add the sizes to the western blots in the figures). We discuss this point briefly in the revised Methods/Results.

      2) On this point, the authors report interactions between ZFP36L1 and its related proteins ZFP36 and ZFP36L2 in the Co-IP experiment (Supp 5C). Did these proteins interact in the proximity labeling? Ideally this could be discussed in the Discussion section.

      ZFP36 and ZFP36L2 were indeed detected as co-precipitating with ZFP36L1 in the co-IP experiments, but were not found as high-confidence interactors in the UltraID proximity labeling datasets. Similarly, in the APEX proximity labeling performed by Bestehorn et al. in RAW macrophage cells, ZFP36 and ZFP36L2 were not found to interact with ZFP36L1. We now explicitly mention this in the Results and discuss it in the Discussion.

      3) Can the authors discuss more fully the limited overlap in identified interactors across the two proximity labeling screens performed in primary T cells (Fig 2C)? Likewise, can the authors comment on the very limited overlap between the screens in T cells and the published ZFP36L1-APEX proximity labelling experiment performed in the HEK293T cell line by Bestehorn et al. (ref 42)? Only 6.8% of proteins found in either T cell screen were found as interactors in this cell line. The authors comment that this may be because "...either expression of certain proteins is cell-type specific, or [because] ZFP36L1 has cell-type specific protein interactions, in addition to its core interactome". While I agree that cell-type specific interactions may be at play, I would think most of the interactors found in the T cell screens are widely expressed proteins necessary for central cell functions.

      First, the apparent overlap percentage depends on depth and filtering. As noted above and now detailed in a new Supplementary table, a core set of decapping, deadenylation, and granule-associated factors is consistently recovered across our T-cell screens and the HEK293T APEX dataset. Beyond this core set, however, the overlap is reduced, reflecting several factors: (i) differences in expression levels of many interactors between HEK293T cells and primary T cells; (ii) the activation-dependent nature of ZFP36L1 function in T cells, which cannot be fully mimicked in HEK293T cells; (iii) different proximity labeling enzymes and fusion constructs (APEX vs UltraID, different tags, expression levels); and (iv) distinct experimental designs and control strategies, which influence statistical filtering and the effective “depth” of each interactome.

      In the revised Discussion and in the new comparative table, we now emphasize that while many of the ZFP36L1 proximity partners identified in T cells are indeed widely expressed, their effective labeling and enrichment are strongly context dependent. We therefore interpret the relatively limited overlap as highlighting both a robust core interactome and substantial context-specific remodeling, rather than as evidence of artifacts in one or the other dataset.
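
      The overlap comparison itself is straightforward set arithmetic; a minimal sketch is shown below, in which the gene names are placeholders standing in for the interactor lists in the supplementary tables.

```python
# Sketch of quantifying overlap between interactor lists when comparing the
# T-cell screens with the HEK293T APEX dataset. The gene sets below are
# placeholders, not the actual supplementary tables.
tcell_screen_1 = {"CNOT1", "DCP1A", "DDX6", "UPF1", "GIGYF2", "PATL1"}
tcell_screen_2 = {"CNOT1", "DCP2", "DDX6", "UPF1", "GIGYF1", "LSM14A"}
hek293t_apex   = {"CNOT1", "DDX6", "EDC4", "GIGYF2"}

tcell_union = tcell_screen_1 | tcell_screen_2          # any T-cell interactor
core = tcell_screen_1 & tcell_screen_2 & hek293t_apex  # robust core across screens
shared_with_hek = tcell_union & hek293t_apex

print(f"Core interactors across all screens: {sorted(core)}")
print(f"Overlap with HEK293T: {len(shared_with_hek) / len(tcell_union):.1%} "
      f"of T-cell interactors")
```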


      Minor comments: 4) In Figure 3D, the legend states that black circles indicate significantly enriched proteins in biotin samples, while grey circles indicate non-significant enrichment. However, some genes, including DCP1A, DDX6, YBX1, have black circles in the -biotin group and grey in the +biotin group, which creates confusion in interpretation.

      We thank the reviewer for this comment. We had accidentally switched the biotin and activation labels, as pointed out by Reviewer 2. Once this is corrected, this point will also be resolved.

      5) Did the authors find any interactors whose expression is known to be specific to CD4 or CD8 T cells?

      In our current dataset we did not identify interactors whose presence was clearly restricted to CD4 or CD8 T cells. We agree that differential ZFP36L1 interactomes in defined T-cell subsets represent an interesting avenue for future targeted studies, and we will outline this in the Discussion.

      Reviewer #3 (Significance (Required)):

      The authors present the first comprehensive analysis of the ZFP36L1 interactome in primary T cells. The use of biotin-based proximity labeling enables detection of physiologically relevant interactions in live cells. This approach revealed many novel interactors.

      Strengths include the overall richness of the dataset, and the hypothesis-provoking experiments that could follow in the future. Limitations include somewhat limited overlap with a published proximity labeling dataset performed in a different cell line, suggesting that there may be artifacts in one or both datasets.

      The audience for this article would include those interested broadly in RNA binding proteins and those interested in post-transcriptional and translational regulation.

      I have immunology expertise on T cell activation and differentiation and expertise on transcriptional and post-transcriptional regulation of gene expression in T cells.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      The authors have analyzed the interactome of ZFP36L1 in primary human T cells using a biotin-based proximity labeling method. In addition to proteins that are known to interact with ZFP36L1, the authors defined a multitude of novel interactions involved in mRNA decapping, mRNA degradation pathways, translation repressors, stress granule/p-body formation, and other regulatory pathways. Time-lapse proximity labeling revealed that the ZFP36L1 interactome undergoes remodeling during T cell activation. Co-IP for ZFP36L1 executed in the presence/absence of RNA further revealed the interactome and possible regulators of ZFP36L1, including the helicase UPF1. In addition to interacting with ZFP36L1, UPF1 promotes the ZFP36L1 protein expression, seemingly by binding to the ZFP36L1 mRNA transcript, and in some way stabilizing it. This comprehensive interactome map highlights the widespread interactions of ZFP36L1 with proteins of many types, and its potential roles in diverse T cell processes. Although somewhat descriptive, rather than hypothesis-testing, this work represents an important contribution to understanding the potential roles of the ZFP36 family proteins, and sets up many future experiments which could test molecular details.

      Major points:

      1) Can the authors discuss the specificity of the antibody for ZFP36L1 used in the Co-IP experiments? The antibody listed in Appendix A is abcam catalog number ab42473, although the catalog number for this antibody (unlike the others major ones used) is not listed in the Methods section - please add this to the Methods to make it easier for readers to find this detail. Could this antibody also be immunoprecipitating ZFP36 or ZFP36L2? Other antibodies have had cross-reactivity for the different family members. It is also notable that this antibody has been discontinued by the manufacturer (https://www.abcam.com/en-us/products/unavailable/zfp36l1-antibody-ab42473). Have the authors tried the current abcam anti-ZFP36L1 antibody being sold, catalog number ab230507?

      2) On this point, the authors report interactions between ZFP36L1 and its related proteins ZFP36 and ZFP36L2 in the Co-IP experiment (Supp 5C). Did these proteins interact in the proximity labeling? Ideally this could be discussed in the Discussion section.

      3) Can the authors discuss more fully the limited overlap in identified interactors across the two proximity labeling screens performed in primary T cells (Fig 2C)? Likewise, can the authors comment on the very limited overlap between the screens in T cells and the published ZFP36L1-APEX proximity labelling experiment performed in the HEK293T cell line by Bestehorn et al. (ref 42)? Only 6.8% of proteins found in either T cell screen were found as interactors in this cell line. The authors comment that this may be because "...either expression of certain proteins is cell-type specific, or [because] ZFP36L1 has cell-type specific protein interactions, in addition to its core interactome". While I agree that cell-type specific interactions may be at play, I would think most of the interactors found in the T cell screens are widely expressed proteins necessary for central cell functions.

      Minor comments:

      4) In Figure 3D, the legend states that black circles indicate significantly enriched proteins in biotin samples, while grey circles indicate non-significant enrichment. However, some genes, including DCP1A, DDX6, YBX1, have black circles in the -biotin group and grey in the +biotin group, which creates confusion in interpretation.

      5) Did the authors find any interactors whose expression is known to be specific to CD4 or CD8 T cells?

      Significance

      The authors present the first comprehensive analysis of the ZFP36L1 interactome in primary T cells. The use of biotin-based proximity labeling enables detection of physiologically relevant interactions in live cells. This approach revealed many novel interactors.

      Strengths include the overall richness of the dataset, and the hypothesis-provoking experiments that could follow in the future. Limitations include somewhat limited overlap with a published proximity labeling dataset performed in a different cell line, suggesting that there may be artifacts in one or both datasets.

      The audience for this article would include those interested broadly in RNA binding proteins and those interested in post-transcriptional and translational regulation.

      I have immunology expertise on T cell activation and differentiation and expertise on transcriptional and post-transcriptional regulation of gene expression in T cells.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      The manuscript by Wolkers and colleagues describes the protein interactome of the RNA-binding protein ZFP36L1 in primary human T-cells. There is inherent value in the use of primary cells of human origin, but there is also value in that the study is quite complete, as it is performed in a variety of conditions: T-cells that have been activated or not, at different time points after activation, and by two methods (co-IP and proximity labeling). One might imagine that this basically covers all what can be detected for this protein in T-cells. The authors report a large amount of new interactors involved at all steps in post-transcriptional regulation. In addition, the authors show that UPF1, a known interactor of ZFP36L1, actually binds to ZFP36L1 mRNA and enhances its levels. In sum, the work provides a valuable resource of ZFP36L1 interactors. Yet, how the data add to the mechanistic understanding of ZFP36L1 functions and/or regulation of ZFP36L1 remains unclear.

      Major comments:

      1) Fig 2: It is confusing that the Pearson correlation to define ZFP36L1 interactors is changed depending on figure panel. In panels A-C, a correlation {greater than or equal to} 0.6 is used, while panel D uses a correlation > 0.5, which changes the nº of interactors. Then, this is changed again in Fig 3A for some cell types but not for others. Why has this been done? It would be better to stick to the same thresholds throughout the manuscript.

      2) Fig 3A: It would be nice to have the information of this Figure panel as a Table (protein name, molecular process(es), known or novel, previously detected in which cells) in addition to the figure.

      3) Fig 6: To what extent are the effects of UPF1 and GIGFYF1 knock-out on translation and T-cell hyper-activation mediated by ZFP36L1? If deletion of ZFP36L1 itself has no effect on these processes, it seems unlikely that it is involved. In this respect, I am not sure that Fig 6 contributes to the understanding of ZFP36L.

      4) Fig 7E: Differences in ZFP36L1 mRNA expression are claimed as a consequence of UPF1 deletion, and indeed there is a clear tendency to reduction of ZFP36L1 mRNA levels upon UPF1 KO. Yet the difference is statistically non-significant. Please, repeat this experiment to increase statistical significance. In addition, a clear discussion on how UPF1 -generally associated to mRNA degradation- contributes to increase ZFP36L1 mRNA levels would be appreciated.

      5) Fig 6A: The decrease in global translation by GIGFYF1 knock-out upon activation claimed by the authors is not clear in Fig 6A and is non-significant upon quantification. Please, modify narrative accordingly.

      6) Page 6: The authors state 'This included the PAN2/3 complex proteins which trim poly(A) tails prior to mRNA degradation through the CCR4/NOT complex'. To the best of my knowledge, the CCR4/NOT complex does not degrade the body of the mRNA. Both PAN2/3 and CCR4/NOT are deadenylases that function independently.

      7) Please, label all Table sheets. Right now one has to guess what is being shown in most of them. Furthermore, it would be convenient to join all Tables related to the same Figure in one unique Excel with several sheets, rather than having many Tables with only one sheet each.

      Minor comments:

      8) Fig 1E: Shouldn't there be a better separation by biotinylation in the UltraID IP principal component analysis? In theory, only biotinylated proteins should be immunoprecipitated.

      9) Fig 3B-E: Is the labeling not swapped, top (always +) is Biotin and bottom (- or +) is aCD3/aCD28?

      10) Fig 7A data is from another paper, so I suggest to move this panel to Supplementary materials.

      11) Fig S1A: Why is there so much labeling in the UltraID only lane without biotin?

      12) Fig S1E: Please, explain better. What is WT?

      13) Fig S4B: Please, explain the labels on top of the shapes.

      14) Page 3: A time-course of incubation with biotin is lacking in Fig S1B, and thereby it is confusing that the authors direct readers to this figure when an increased to 16h incubation is claimed to be better.

      Significance

      Strengths: A thorough repository of ZFP36L1 interactors in primary human T-cells. A valuable resource for the community.

      Weaknesses: There is little mechanistic insight on ZFP36L1 function or regulation.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      The authors map the ZFP36L1 protein interactome in human T cells using UltraID proximity labeling combined with quantitative mass spectrometry. They optimize labeling conditions in primary T cells, profile resting and activated cells, and include a time course at 2, 5, and 16 hours. They complement the interactome with co-immunoprecipitation in the presence or absence of RNase to assess RNA dependence. They then test selected candidates using CRISPR knockouts in primary T cells, focusing on UPF1 and GIGYF1/2, and report effects on global translation, stress, activation markers, and ZFP36L1 protein levels. The work argues that ZFP36L1 sits at the center of multiple post-transcriptional pathways in T cells (which in itself is not a novel finding) and that UPF1 supports ZFP36L1 expression at the mRNA and protein level. The main model system is primary human T cells, with some data in Jurkat cells.

      The core datasets show thousands of identified proteins in total lysates and enriched biotinylated fractions. Known partners from CCR4-NOT, decapping, stress granules, and P-bodies appear, with additional candidates like GIGYF1/2, PATL1, DDX6, and UPF1. Time-resolved labeling suggests shifts in proximity during early activation. Co-IP with and without RNase suggests both RNA-dependent and RNA-independent contacts. CRISPR loss of UPF1 or GIGYF1/2 increases translation at rest and elevates activation markers, and UPF1 loss reduces ZFP36L1 protein and mRNA while MG132 does not rescue protein levels; UPF1 RIP enriches ZFP36L1 mRNA.

      Among patterns worth noting are that the activation state drives the principal variance in both proteome and proximity datasets. Deadenylation, decapping, and granule proteins are consistently near ZFP36L1 across conditions, while some contacts dip at 2 hours and recover by 5 to 16 hours. Mitochondrial ribosomal proteins become more proximal later. UPF1 and GIGYF1 show time-linked behavior and RNase sensitivity that fits roles in mRNA surveillance and translational control. These observations support a dynamic hub model, though they remain proximity-based rather than direct binding maps.

      Major comments

      The key conclusions are directionally convincing for a broad and dynamic ZFP36L1 neighborhood in human T cells. The data robustly recover established complexes and add plausible candidates. The time-course and RNase experiments strengthen the claim that interactions shift with activation state and RNA context. The functional tests around UPF1 and GIGYF1/2 point to biological relevance. That said, some statements could be qualified. The statement that ZFP36L1 "coordinates" multiple pathways implies mechanism and directionality that proximity data alone cannot prove. I suggest reframing as "positions ZFP36L1 within" or "supports a model where ZFP36L1 sits within" these networks.

      UPF1, as an upstream regulator of ZFP36L1 expression, is a promising lead. The reduction of ZFP36L1 protein and mRNA in UPF1 knockout, the non-rescue by MG132, and the UPF1 RIP on ZFP36L1 mRNA together argue that UPF1 influences ZFP36L1 transcript output or processing. This claim would read stronger with one short rescue or perturbation that pins the mechanism. A compact test would be UPF1 re-expression in UPF1-deficient T cells with wild-type and helicase-dead alleles. This is realistic in primary T cells using mRNA electroporation or virus-based systems. Approximate time 2 to 3 weeks, including guide design check and expansion. Reagents and sequencing about 2 to 4k USD depending on donor numbers. This would help separate viability or stress effects from a direct role in ZFP36L1 mRNA handling.

      The inference that ZFP36L1 proximity to decapping and deadenylation complexes reflects pathway engagement is reasonable and, frankly, expected. Still, where the manuscript moves from proximity to function, the narrative works best when supported by orthogonal validation. Two compact additions would raise confidence without opening new lines of work. First, a small set of reciprocal co-IPs for PATL1 or DDX6 at endogenous levels in activated T cells, run with and without RNase, would tie the RNase-class assignments to biochemistry. Second, a short-pulse proximity experiment using a reduced biotin dose and shorter labeling window in activated cells would address whether long incubations drive non-specific labeling. Both are feasible in 2 to 3 weeks with minimal extra cost for antibodies and MS runs if the facility is in-house.

      Reproducibility is helped by donor pooling, repeated T-cell screens, Jurkat confirmation, and detailed methods including MaxQuant, LIMMA, and supervised patterning. Deposition of MS data is listed. The authors should consider adding a brief, stand-alone analysis notebook in SI or on GitHub with exact filtering thresholds and "shape" definitions, since the supervised profiles are central to claims. This would let others reproduce figures from raw tables with the same code and workflows.

      Replication and statistics are mostly adequate for discovery proteomics. The thresholds are clear, and PCA and correlation frameworks are appropriate. For functional readouts in edited T cells, please make the number of donors and independent experiments explicit in figure legends, and indicate whether statistics are paired by donor. Where viability differs (UPF1), note any gating strategies used to avoid bias in puromycin or activation marker measurements. These clarifications are quick to add.

      Minor comments

      The UltraID optimization in primary T cells is useful, but the long 16-hour labeling and high biotin should be framed as a compromise rather than a standard. A short statement about potential off-target labeling during extended incubations would set expectations and justify the RNase and time-course controls.

      The overlap across T-cell screens and with HEK293T APEX datasets is discussed, but a compact quantitative reconciliation would help. A table that lists shared versus cell-type-specific interactors with brief notes on known expression patterns would make this point concrete.

      Figures are generally clear. Where proximity and total proteome PCA are shown, consider adding sample-wise annotations for donor pools and activation time to help readers link variance to biology. Ensure all volcano plots and heatmaps display the exact cutoffs used in text.

      Prior work on ZFP36 family roles in decay, deadenylation via CCR4-NOT, granules, and translational control is cited within the manuscript. In a few places, recent proximity and interactome papers could be more explicitly integrated when comparing overlap, especially where conclusions differ by cell type. A concise paragraph in Discussion that lays out what is truly new in primary T cells would help clarify the contribution of this work to the field.

      Significance

      Nature and type of advance. The study is a technical and contextual advance in mapping ZFP36L1 proximity partners directly in human primary T cells during activation. The combination of time-resolved labeling and RNase-class assignments is informative. The CRISPR perturbations provide an initial functional bridge from proximity to phenotype, especially for UPF1.

      Context in the literature. ZFP36 family proteins have long been linked to ARE-mediated decay, CCR4-NOT recruitment, and granule localization. The present work confirms those cores and extends them to include decapping and GIGYF1/2-4EHP scaffolds in primary T cells with temporal resolution. The UPF1 link to ZFP36L1 expression adds a plausible surveillance angle that merits follow-up. The cell-type specificity analysis versus HEK293T underscores that proximity networks vary with context.

      Audience. Readers in RNA biology, T-cell biology, and proteomics will find the dataset valuable. Groups studying post-transcriptional regulation in immunity can use the resource to prioritize candidate nodes for mechanistic work.

      Expertise and scope. I work on post-transcriptional regulation, RNA-protein complexes, and T-cell effector biology. I am comfortable evaluating the conceptual claims, experimental design, and statistical treatment. I am not a mass spectrometry specialist, so I rely on the presented parameters and deposited data for MS acquisition specifics.

      To conclude, the manuscript delivers a substantive proximity map of ZFP36L1 in human T cells, with useful temporal and RNA-class information. The UPF1 observations are promising and would benefit from a compact rescue to secure causality. A few minor additions for biochemical validation and transparency in replication would further strengthen the paper.

    1. Briefing Document: Media Coverage of Violence Against Women

      Executive Summary

      This briefing document summarizes the discussions of a round table on media coverage of violence against women, bringing together an investigative journalist, a popularizer and a feminist activist.

      It shows that while media coverage of this societal issue is growing, it is marred by significant biases and problematic practices. The key points are as follows:

      The Ambivalent Role of the Media: The media play a crucial role in making public forms of violence that are often confined to the private sphere, which helps shift attitudes and leads to recognition of the systemic nature of the problem.

      Every societal advance on the subject is linked to the media coverage of an emblematic case (Mazneff, Depardieu, etc.).

      Main Criticisms of Media Coverage: Coverage is criticized for its tendency to racialize perpetrators, serving a racist political agenda by over-representing foreign or racialized aggressors attacking white victims.

      There is also a major difference in treatment between the national press, which sometimes approaches the subject from a systemic angle, and the local press (PQR), which often confines it to the sensationalism of the "fait divers" (minor news item).

      Journalistic Ethics and Protection of Victims: Rigorous treatment of a case of sexist and sexual violence (VSS) rests on strict ethical principles.

      The priority is to believe and protect the victim, notably through anonymity, and to respect her choice to speak out or not.

      The investigation must be irreproachable, to avoid the risk of defamation and to guarantee the credibility of the account; this includes fact-checking and the "contradictoire" procedure (contacting the alleged aggressor for comment).

      The Blind Spots of Media Coverage: Many forms of violence remain largely invisible.

      This is the case for psychological violence (control, digital harassment via trackers) and, above all, for violence against the most marginalized groups: children, sex workers and trans women, whose assaults are often ignored, or even justified, by transphobic and dehumanizing media coverage.

      --------------------------------------------------------------------------------

      1. Introduction and Key Definitions

      The discussion establishes a conceptual framework for analyzing media coverage of violence against women, a subject increasingly present in public debate, often through the prism of highly publicized cases involving public figures (PPDA, Gérard Depardieu, Léo Grasset).

      Defining Patriarchy and the Notion of "Woman"

      To analyze this violence, the panelists adopt a materialist and sociological approach.

      Woman: In this context, a "woman" is not defined by biology or gender identity, but as a person subjected to specific social conditions, notably sexism, violence and exploitation under the patriarchal system.

      Patriarchy: It is defined as a social system that establishes a hierarchy between the social groups "men" and "women".

      This system organizes the exploitation (notably economic, via domestic labor) and oppression of women, and punishes anyone who deviates from the norms it imposes (e.g. heteronormativity, enforced through homophobia).

      2. Forms of Violence and the Role of the Media

      Typology of Sexist and Sexual Violence (VSS)

      VSS encompass a wide range of violence, often under-represented in its diversity.

      Most-covered forms of violence: Rape and sexual assault are the most visible in the media, as they are perceived as the most serious.

      Physical domestic violence is also mentioned, but psychological violence remains largely ignored.

      Statistics and Binarity: Available statistics on VSS are mostly binary (men/women), which renders non-binary victims invisible.

      Pauline Bouty stresses that while most victims are women and most perpetrators are men, it is crucial to remember that people of all genders can be victims.

      It is recalled that nearly 90% of victims know their aggressor, who is often a family member or partner, contradicting the myth of the unknown attacker in a dark alley.

      The Crucial Importance of the Media's Role

      Media coverage of VSS is regarded as a major public issue, not a private matter.

      The "Fifth Power": Jade Bourgerie, a journalist, describes the media as a "fifth power" whose role is to reflect society's ills.

      Covering a VSS case is a matter of public interest, because such violence is the symptom of a "sick society".

      Visibility and Existence: According to Pauline Bouty, "what we do not see does not exist".

      Media coverage allows the public to become aware of the existence and scale of this violence.

      Every step forward in understanding this phenomenon is directly linked to the media coverage of a symbolic case.

      Deconstructing Stereotypes: Media coverage helps humanize victims and aggressors, breaking the image of the "monster".

      It shows that the aggressor can be "your neighbor, your brother, your uncle", someone perceived as likeable in social settings.

      3. Journalistic Practices and Ethics in Covering VSS

      Journalist Jade Bourgerie details the ethical rules she imposes on herself when covering these sensitive subjects, in the absence of universal formal rules within the profession.

      Ethical Rules and Investigative Rigor

      1. Respect and Believe the Victim: The starting point is to believe the victim's account and respect her wishes.

      2. Investigative Rigor: The article must be "perfect" and "solid".

      This means meticulously verifying every element provided by the victim in order to build an unassailable case and guard against accusations of defamation.

      Example given: tracking down a gynecologist consulted by a victim in the 1990s to corroborate part of her account.

      3. The "Contradictoire": An essential step is to contact the person implicated (the alleged aggressor) to lay out the facts gathered and give them the opportunity to respond.

      The Role of Anonymity in Protecting Victims

      Anonymity is an essential protective tool for victims, particularly in small professional circles (e.g. classical music) where everyone knows each other. It allows the victim to avoid:

      • Being permanently labeled as a "rape victim".

      • Suffering professional or social reprisals in a society that has made little progress on these issues.

      4. Major Criticisms of Current Media Coverage

      The panelists identify several recurring problems in the coverage of VSS.

      The Racialization of Narratives

      Lou Girard denounces a major racial bias: the media, particularly those owned by right-wing and far-right groups (citing the "Bolloré and Drahi empires"), tend to over-represent cases in which white women are assaulted by racialized or migrant men.

      This treatment serves a "racist narrative" that presents "the pure, white, French woman" as being attacked by "the migrant, the foreigner".

      This obscures the statistical reality: the vast majority of violence is intra-community and intra-family.

      Disparities Between the National Press and the Regional Daily Press (PQR)

      A significant divide exists between types of media outlets.

      Treatment:
      National press (e.g. Le Monde, Libération): tends to cover cases from a more systemic angle, often linked to well-known figures or large-scale events.
      Regional daily press (PQR, e.g. La Dépêche): covers the subject mostly through the prism of the "fait divers" and sensationalism.

      Racial bias:
      National press: the racializing narrative is "fairly absent" from the major national outlets.
      PQR: the "white woman victimized by a racialized aggressor" framing is much more frequent.

      Causes:
      National press: younger journalists, trained on current VSS issues in journalism schools.
      PQR: journalists often in post for decades, less trained on these specific issues.

      The Evolution of Vocabulary: From "Crime of Passion" to "Femicide"

      The language used has evolved, but problematic terms persist.

      Progress: The term "femicide" emerged and became widespread after the #MeToo movement. Its use is political: it underlines that the victim was killed because she is a woman, not as part of an ordinary homicide.

      Persistence: Euphemistic or inappropriate terms such as "crime of passion", or descriptions of rape as "imposed sexual relations", are still used, minimizing the notions of violence and domination.

      5. Invisibilized Violence and the Criteria for Media Coverage

      Psychological Violence and Violence Against Marginalized Groups

      Certain forms of violence are systematically absent from media coverage.

      Psychological violence: Insidious control, which "leaves no bruises", is very rarely depicted. Pauline Bouty cites Marine Périn's documentary Traquée, about men installing trackers on their partners' phones.

      This control can also be financial or social.

      Violence against children: Children are particularly vulnerable because they depend on the adults who are often their abusers.

      Violence against trans women: Lou Girard highlights their extreme vulnerability. "As a woman you are afraid of being raped; as a trans woman you are afraid of being raped and then killed."

      Media coverage, when it exists at all, is often appalling, using transphobic terms ("cross-dressing man") and presenting the assault as an "almost funny" news item.

      Victims are misgendered, even after their death.

      Violence against sex workers: Assaults against them are often rendered invisible or justified by their profession, denying the very notion of consent.

      The Criteria for Media Coverage of a Case

      For a case to receive solid media coverage, several criteria are often necessary from a journalistic standpoint:

      Multiple victims: This avoids a "word against word" situation.

      At least one victim willing to speak without anonymity: This strengthens the credibility of the account.

      Documentable facts supported by evidence: A case resting solely on a testimony, with no complaint filed and no evidence, is nearly impossible for a journalist to cover.

      The victim's consent: Respecting the victim's word is paramount. Many cases never come out because the victims do not wish to speak, a choice that must be absolutely respected.

      6. The Impact on Victims and the Question of Language

      The Lack of Coverage of the Consequences for Victims

      The media focus on the facts and the aggressors, but very rarely on the long-term impact of the violence on victims' lives (psychological, social, professional).

      Political analysis: Lou Girard interprets this gap as a political choice.

      Dwelling on the aggressor's "broken career" is common, but speaking of the "terrible consequences of rape" on women's lives would be a "highly feminist" act that many outlets avoid.

      The role of books: Pauline Bouty adds nuance, arguing that it is perhaps not the role of journalists to speak about victims' feelings in their place.

      She defends the importance of spaces where victims can express themselves in their own voice, such as books (citing Florence Porcel) or films (Les Chatouilles).

      The Importance of Terminological Precision

      The use of precise terms is a political issue.

      Child sexual offending ("pédocriminalité") vs. pedophilia: It is crucial to distinguish pedophilia (a paraphilia, an attraction) from child sexual offending (acting on that attraction).

      Most people with pedophilic attractions do not act on them and seek professional help. A child sexual offender seeks above all to exert control and is not necessarily a "pedophile".

      The active voice: It is recommended to use the active voice to name the aggressor and his responsibility: "a man raped a woman" rather than "a woman got raped".

      How the facts are presented is a political choice: either it is done with euphemisms, or the violence is named for what it is.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      The authors focus on the molecular mechanisms by which EMT cells confer resistance to cancer cells. The authors use a wide range of methods to reveal that overexpression of Snail in EMT cells induces cholesterol/sphingomyelin imbalance via transcriptional repression of biosynthetic enzymes involved in sphingomyelin synthesis. The study also revealed that ABCA1 is important for cholesterol efflux and thus for counterbalancing the excess of intracellular free cholesterol in these snail-EMT cells. Inhibition of ACAT, an enzyme catalyzing cholesterol esterification, also seems essential to inhibit the growth of snail-expressing cancer cells.

      However, it seems important to analyze the localization of ABCA1, as it is possible that in the event of cholesterol/sphingomyelin imbalance, for example, the intracellular trafficking of the pump may be altered.

      The authors should also analyze ACAT levels and/or activity in snail-EMT cells that should be increased. Overall, the provided data are important to better understand cancer biology.

      We thank the reviewer for recognizing the significance of our study. Consistent with the hypothesis that ABCA1 contributes to chemoresistance in hybrid E/M cells, we agree that demonstrating the localization of ABCA1 at the plasma membrane is important, and we have included additional experiments to address this point.

      We also examined the expression of the major ACAT isoform in the kidney, SOAT1, across RCC cell lines. However, its expression did not correlate with that of Snail (Figure 4B), suggesting that SOAT1 is constitutively expressed at a certain level regardless of Snail expression. The details of these additional experiments are provided in the point-by-point responses below.

      Reviewer #2 (Public review):

      Summary:

      In this study, the authors discovered that the chemoresistance in RCC cell lines correlates with the expression levels of the drug transporter ABCA1 and the EMT-related transcription factor Snail. They demonstrate that Snail induces ABCA1 expression and chemoresistance, and that ABCA1 inhibitors can counteract this resistance. The study also suggests that Snail disrupts the cholesterol-sphingomyelin (Chol/SM) balance by repressing the expression of enzymes involved in very long-chain fatty acid-sphingomyelin synthesis, leading to excess free cholesterol. This imbalance activates the cholesterol-LXR pathway, inducing ABCA1 expression. Moreover, inhibiting cholesterol esterification suppresses Snail-positive cancer cell growth, providing potential lipid-targeting strategies for invasive cancer therapy.

      Strengths:

      This research presents a novel mechanism by which the EMT-related transcription factor Snail confers drug resistance by altering the Chol/SM balance, introducing a previously unrecognized role of lipid metabolism in the chemoresistance of cancer cells. The focus on lipid balance, rather than individual lipid levels, is a particularly insightful approach. The potential for targeting cholesterol detoxification pathways in Snail-positive cancer cells is also a significant therapeutic implication.

      Weaknesses:

      The study's claim that Snail-induced ABCA1 is crucial for chemoresistance relies only on pharmacological inhibition of ABCA1, lacking additional validation. The causal relationship between the disrupted Chol/SM balance and ABCA1 expression or chemoresistance is not directly supported by data. Some data lack quantitative analysis.

      We thank the reviewer for his/her insightful and constructive comments. In response, we have performed additional experiments using complementary approaches to further substantiate the contribution of Snail-induced ABCA1 expression to chemoresistance. Furthermore, to clarify the causal relationship between reduced sphingomyelin biosynthesis and ABCA1 expression, we conducted new experiments showing that supplementation with sphingolipids attenuates ABCA1 upregulation (Figure 3H). The details of these additional experiments are described in the point-by-point responses below.

      Reviewer #1 (Recommendations for the authors):

      In this paper, the authors reveal that snail expression in EMT-cells leads to an imbalance between cholesterol and sphingomyelin via a transcriptional repression of enzymes involved in the biosynthesis of sphingomyelin.

      This paper is interesting and highlights how the imbalance of lipids would impact chemotherapy resistance. However, I have a few comments.

      In Figure 2 in Eph4 cells, while filipin staining appears exclusively at the plasma membrane in the case of EpH4-snail cells filipin staining is also intracellular. It seems plausible that all filipin-positive intracellular staining is not exclusively in LDs, authors should therefore try to colocalize filipin with other intracellular markers. To this aim, authors might want to use topfluocholesterol-probe for instance.

      We examined the distribution of TopFluor-cholesterol in hybrid E/M cells (Figure 2H) and found that TopFluor-cholesterol colocalizes with lipid droplets. In addition, we analyzed the colocalization between intracellular filipin signals and organelle-specific proteins, ADRP (lipid droplets) and LAMP1 (lysosomes) (Figure 2I). Since filipin binds exclusively to unesterified cholesterol, filipin signals did not colocalize with ADRP. Instead, we observed colocalization of filipin with LAMP1, suggesting that cholesterol accumulates in hybrid E/M cells in both esterified and unesterified forms.
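
      As an illustration of how such pixel-wise colocalization can be quantified, a minimal sketch using Manders-style coefficients is shown below; the thresholds, image names and example data are hypothetical, and this is not necessarily the exact quantification pipeline used for Figure 2I.

```python
# Minimal sketch of quantifying colocalization between a filipin channel and
# an organelle marker channel (e.g. LAMP1) with Manders-style coefficients.
import numpy as np

def manders_coefficients(ch1, ch2, thr1, thr2):
    """M1: fraction of total ch1 intensity located in ch2-positive pixels;
    M2: fraction of total ch2 intensity located in ch1-positive pixels."""
    mask1, mask2 = ch1 > thr1, ch2 > thr2
    m1 = ch1[mask2].sum() / ch1.sum()
    m2 = ch2[mask1].sum() / ch2.sum()
    return float(m1), float(m2)

# Example with random placeholder images standing in for real channels.
rng = np.random.default_rng(0)
filipin = rng.random((512, 512))   # e.g. unesterified cholesterol channel
lamp1 = rng.random((512, 512))     # e.g. lysosomal marker channel
m1, m2 = manders_coefficients(filipin, lamp1, thr1=0.5, thr2=0.5)
print(f"M1 = {m1:.2f}, M2 = {m2:.2f}")
```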

      In Figure 3, the authors reveal that exogenous expression of Snail alters the ratio of cholesterol to sphingomyelin. The authors should show where intracellular cholesterol and intracellular sphingomyelin are located within these EpH4-Snail cells.

      To investigate the lipid composition of the plasma membrane, we utilized lipid-binding protein probes, D4 (for cholesterol) and lysenin (for sphingomyelin) (Figures 2L and 2M). We found that the plasma membrane cholesterol content was not affected by EMT, whereas sphingomyelin levels were markedly decreased. In addition, intracellular cholesterol was visualized (Comment 1-1; Figures 2E–2K). On the other hand, because visualization of intracellular sphingomyelin is technically challenging, we were unable to include this analysis in the present study. We consider this an important direction for future investigation.

      Regarding the model described in panel K of Figure 3: I would expect the changes in lipid-membrane organization depicted in panel K to affect, for instance, the pattern of GM1 toxin binding or the mobility of raft-associated proteins. The authors could perform these experiments to support the proposed change in plasma membrane lipid organization.

      We attempted staining with FITC–cholera toxin to visualize GM1, but both EpH4 and EpH4–Snail cells exhibited very low levels of GM1, resulting in minimal or no detectable staining (data not shown). Instead, to assess the impact of decreased sphingomyelin on the overall biophysical properties of the plasma membrane, we used a plasma membrane–specific lipid-order probe, FπCM–SO₃ (Figures 2N–2P and Figure 2—figure supplement 3). We found that the plasma membrane of EpH4–Snail cells was more disordered (fluidized), suggesting that the overall properties of the plasma membrane are altered by ectopic expression of Snail.

      Another issue is the intracellular localization of ABCA1 in EpH4-Snail cells. Given that a change in the cholesterol/sphingomyelin ratio can also modify intracellular protein trafficking, it seems important to analyze the intracellular localization of ABCA1 in EpH4-Snail cells.

      We performed immunofluorescence microscopy for ABCA1 and found that ABCA1 was mainly localized at the plasma membrane in EpH4–Snail cells (Figure 1M).

      As for the data on ACAT inhibition, we expect an increase in ACAT activity and protein levels in EMT cells overexpressing Snail. The authors should also investigate this point.

      As noted in our response to the public review, we examined the expression of the major ACAT isoform in the kidney, SOAT1, across RCC cell lines. However, its expression did not correlate with Snail (Figure 4B), suggesting that SOAT1 is expressed at sufficient levels even in cells with low Snail expression. We agree that measuring ACAT activity would be important, as ACATs are regulated at multiple levels. However, we consider this to be beyond the scope of the present study and plan to address it in future work.

      Minor comments

      I do not understand why in the text, Figure S1 appears after Figure S2. The authors might want to change the numbering of these two figures.

      We thank the reviewer for pointing this out. We have corrected the numbering of the supplementary figures so that Figure S1 now appears before Figure S2 in both the text and the revised figure legends.

      Page 5, line 20: Figure 1I instead of 1H.

      Page 6, line 2: Figure 1J instead of 1I; and line 9: Figure 1H instead of 1I.

      We thank the reviewer for carefully checking the figure references. We have corrected the figure numbering errors in the text as suggested.

      Reviewer #2 (Recommendations for the authors):

      For Figures 1B, 1H, 1J, 2B, 2C, 3G, S3A, and S3B, to enhance data reliability, it is necessary to conduct a quantitative analysis of the Western blot data. The average values from at least three biological replicates should be calculated, with statistical significance assessed.

      We have conducted quantitative analyses of the Western blot data for Figures 1B, 1H, 1J, 2B, 2C, 3G, S3A, and S3B. Band intensities from at least three independent biological replicates were quantified, and the mean values with statistical significance are now presented in the revised figures.
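
      For readers unfamiliar with this kind of densitometry workflow, the sketch below illustrates one common way to quantify Western blots across biological replicates: band intensities are normalized to a loading control and compared between conditions with a two-sided t-test. This is a minimal illustration only; the array values, condition names, and the choice of test are assumptions for demonstration, not data or methods taken from the revised figures.

```python
# Minimal sketch of Western blot densitometry analysis across biological
# replicates. The numbers below are placeholders, not data from the study.
import numpy as np
from scipy import stats

# Band intensities (arbitrary densitometry units) for the protein of interest
# and a loading control, three biological replicates per condition.
control_target = np.array([1520.0, 1480.0, 1610.0])
control_loading = np.array([2010.0, 1990.0, 2100.0])
snail_target = np.array([3050.0, 2890.0, 3140.0])
snail_loading = np.array([2050.0, 1970.0, 2080.0])

# Normalize each replicate's target band to its own loading control.
control_norm = control_target / control_loading
snail_norm = snail_target / snail_loading

# Report means +/- SD and a two-sided Welch's t-test between conditions.
print(f"control: {control_norm.mean():.2f} +/- {control_norm.std(ddof=1):.2f}")
print(f"snail:   {snail_norm.mean():.2f} +/- {snail_norm.std(ddof=1):.2f}")
t_stat, p_value = stats.ttest_ind(control_norm, snail_norm, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```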

      For Figures 1D, 2A, 2D, and S2, the images of cells or tissues should not rely solely on selected fields. Quantitative analysis is required, and the mean values from at least three biological replicates should be provided with statistical significance testing.

      We have performed quantitative analyses for Figures 1D, 2A, 2D, and S2. The quantification was based on data from at least three independent biological replicates, and the mean values with statistical significance are now included in the revised figures.

      For Figures 1A, 1G, 4, and S5, evaluating ABCA1's involvement in drug resistance based solely on CsA treatment is insufficient. Demonstrating the loss of drug resistance through ABCA1 knockdown or knockout is necessary.

      We generated ABCA1 knockout EpH4–Snail cells and examined their resistance to nitidine chloride. However, knockout of ABCA1 alone did not affect resistance to the compound (Figure 2—figure supplement 2). This may be due to secondary metabolic alterations induced by ABCA1 loss or compensatory upregulation of other LXR-induced cholesterol efflux transporters. Instead, we demonstrated that treatment with the LXR inhibitor GSK2033 reduced the nitidine chloride resistance of EpH4–Snail cells (Figure 2C), supporting the idea that enhanced efflux of antitumor agents through the LXR–ABCA1–mediated cholesterol efflux pathway contributes to nitidine chloride resistance.

      For Figure 3, to establish a causal relationship between changes in the Chol/SM balance and ABCA1 expression, it is important to test whether modifying cholesterol and SM levels to disrupt this balance affects ABCA1 expression.

      Regarding causality, as shown in Figure 2, we have already demonstrated that reducing cholesterol levels in EpH4–Snail cells decreases ABCA1 expression. To further explore this relationship, we examined whether increasing sphingomyelin levels by adding ceramide to the culture medium—thereby restoring the sphingomyelin-to-cholesterol ratio—would reduce ABCA1 expression (Figure 3H). Indeed, supplementation with C22:0 ceramide decreased ABCA1 expression, suggesting that downregulation of the VLCFA-sphingomyelin biosynthetic pathway triggers ABCA1 upregulation. Collectively, these findings support a causal relationship between the Chol/SM balance and ABCA1 expression.

      In Figure 3, if there is any information on differences in cholesterol affinity between LCFA-SM and VLCFA-SM, it would be beneficial to include it in the manuscript.

      Differences in cholesterol affinity between LCFA-SM and VLCFA-SM in cellular membranes remain controversial and have yet to be fully elucidated. The decrease in cell surface sphingomyelin content, evaluated by lysenin staining (Figure 2L), was more pronounced than that of total sphingomyelin (Figure 3A). Given that VLCFA-SMs have been suggested to undergo distinct trafficking during recycling from endosomes to the plasma membrane (Koivusalo et al. Mol Biol Cell 2007), their reduction may lead to decreased plasma membrane sphingomyelin content by altering its intracellular distribution. We have added this discussion to the revised manuscript.

      In Figure 3F, it is recommended to assess housekeeping gene expression as a control. Quantitative real-time PCR should be performed, and the average values from at least three biological replicates should be presented.

      We have performed quantitative RT-PCR analysis. The average values from at least three independent biological replicates are presented in Figure 3G.

      For Figure 3F, to show whether the reduction of CERS3 or ELOVL7 affects the Chol/SM balance and ABCA1 expression, it is necessary to investigate the phenotypes following the knockdown or knockout of these enzymes.

      We fully agree that phenotypic analyses of epithelial cells lacking CerS3 or ELOVL7 would provide valuable insights. However, we consider such investigations to be beyond the scope of the present study and plan to pursue them in future work.

      Clarifying whether similar phenotypes are induced by other EMT-related transcription factors, or if they are specific to Snail, would be beneficial.

      We agree that examining whether similar phenotypes are induced by other EMT-related transcription factors would be highly valuable for understanding the broader EMT network. However, as the focus of the present study is on lipid metabolic alterations associated with EMT—particularly the imbalance between sphingomyelin and cholesterol—we consider this investigation to be beyond the scope of the current work and plan to address it in future studies.

      There are errors in figure citations within the text that need correction:

      p.9 l.18 Fig. 3D → Fig. 3G

      p.9 l.22 Fig. 3I → Fig. 3H

      p.9 l.23 Fig. S2 → Fig. S4

      p.10 l.6 Fig. 3J → Fig. 1J

      p.10 l.8 Fig. 3J → Fig. 1J

      p.10 l.9 Fig. 3K → Fig. 3I

      p.10 l.12 Fig. 3H → Fig. 3J

      p.10 l.14 Fig. 2D and Fig. S4 → Fig. 2G and Fig. S4D

      We thank the reviewer for carefully pointing out these citation errors. We have corrected all figure references in the text as suggested.

    1. Reviewer #3 (Public review):

      Summary:

      This study aims to develop and characterize phenylhydrazone-based small molecules that selectively activate the ATF6 arm of the unfolded protein response by covalently modifying a subset of ER-resident PDIs. The authors identify AA263 as a lead scaffold and optimize its structure to generate analogs with improved potency and ATF6 selectivity, notably AA263-20. These compounds are shown to restore proteostasis and functional expression of disease-associated misfolded proteins in cellular models involving both secretory (AAT-Z) and membrane (GABAA receptor) proteins. The findings provide valuable chemical tools for modulating ER proteostasis and may serve as promising leads for therapeutic development targeting protein misfolding diseases.

      Strengths:

      The study presents a well-defined chemical biology framework integrating proteomics, transcriptomics, and disease-relevant functional assays.

      Identification and optimization of a new electrophilic scaffold (AA263) that selectively activates ATF6 represents a valuable advance in UPR-targeted pharmacology.

      SAR studies are comprehensive and logically drive the development of more potent and selective analogs such as AA263-20.

      Functional rescue is demonstrated in two mechanistically distinct disease models of protein misfolding (one involving a secretory protein and the other a membrane protein), underscoring the translational relevance of the approach.

      Weaknesses:

      ATF6 activation is primarily inferred from reporter assays and transcriptional profiling; direct biochemical evidence of ATF6 cleavage or nuclear translocation remains missing. However, the authors have added supporting data showing that co-treatment with the ATF6 inhibitor CP7 suppresses target gene induction, which partially strengthens the evidence for ATF6-dependent activity.

      Although the proposed mechanism involving PDI modification and ATF6 activation is plausible, it is still not experimentally demonstrated and remains incompletely characterized.

      In vivo validation is absent, and thus the pharmacological feasibility, selectivity, and bioavailability of these compounds in physiological systems remain untested.

      Comments on revisions:

      The authors have generally addressed my comments.

    2. Author response:

      The following is the authors’ response to the previous reviews.

      Reviewer #1 (Public review):

      Summary: 

      This study builds off prior work that focused on the molecule AA147 and its role as an activator of the ATF6 arm of the unfolded protein response. In prior manuscripts, AA147 was shown to enter the ER, covalently modify a subset of protein disulfide isomerases (PDIs), and improve ER quality control for the disease-associated mutants of AAT and GABAA. Unsuccessful attempts to improve the potency of AA147 have led the authors to characterize a second hit from the screen in this study: the phenylhydrazone compound AA263. The focus of this study on enhancing the biological activity of the AA147 molecule is compelling, and overcomes a hurdle of the prior AA147 drug that proved difficult to modify. The study successfully identifies PDIs as a shared cellular target of AA263 and its analogs. The authors infer, based on the similar target hits previously characterized for AA147, that PDI modification accounts for a mechanism of action for AA263. 

      Strengths: 

      The authors are able to establish that, like AA147, AA263 covalently targets ER PDIs. The work establishes the ability to modify the AA263 molecule to create analogs with more potency and efficacy for ATF6 activation. The "next generation" analogs are able to enhance the levels of functional AAT and GABAA receptors in cellular models expressing the Z-variant of AAT or an epilepsy-associated variant of the GABAA receptor, outlining the therapeutic potential for this molecule and laying the foundation for future organism-based studies. 

      We thank the reviewer for the positive comments on our manuscript. We address the reviewer's remaining comments on our work, as described below.

      Weaknesses: 

      Arguably, the work does not fully support the statement provided in the abstract that the study "reveals a molecular mechanism for the activation of ATF6". The identification of targets of AA263 and its analogs is clear. However, it is a presumption that the overlap in PDIs as targets of both AA263 and AA147 means that AA263 works through the PDIs. While a likely mechanism, this conclusion would be bolstered by establishing that knockdown of the PDIs lessens drug impact with respect to ATF6 activation. 

      We thank the reviewer for this comment. We previously showed that genetic depletion of individual PDIs only modestly impacts ATF6 activation afforded by ATF6-activating compounds such as AA147 (see Paxman et al (2018) ELIFE). However, as discussed in this manuscript, the ability of AA147 and AA263 to activate ATF6 signaling is mediated through polypharmacologic targeting of multiple different PDIs involved in regulating the redox state of ATF6. Thus, individual knockdowns are predicted to only minimally impact the ability of AA263 and its analogs to activate ATF6 signaling. 

      To address this comment, we have tempered our language regarding the mechanism of AA263-dependent ATF6 activation through PDI targeting described herein to better reflect the fact that we have not explicitly proven that PDI targeting is responsible for this activity, as highlighted below:

      “Page 7, Line 158: “Intriguingly, 12 proteins were shared between these two conditions, including 7 different ER-localized PDIs (Fig. 1H). This includes PDIs previously shown to regulate ATF6 activation including TXNDC12/ERP18.[45,46] These results are similar to those observed when comparing proteins modified by the selective ATF6 activating compound AA147<sup>yne</sup> and AA132<sup>yne</sup>.[38] Further, we found that the extent of labeling for PDIs including PDIA1, PDIA4, PDIA6, and TMX1, but not TXNDC12, showed greater modification by AA132<sup>yne</sup>, as compared to AA263<sup>yne</sup> (Fig. 1I). Similar results were observed for AA147<sup>yne</sup>.[38] This suggests that, like AA147, the selective activation of ATF6 afforded by AA263 is likely attributed to the modifications of a subset of multiple different ER-localized PDIs by this compound.”

      Alternatively, it has previously been suggested that the cell-type-dependent activity of AA263 may be traced to the presence of cell-type-specific P450s that allow for the metabolic activation of AA263, or to cell-type-specific PDIs (Plate et al 2016; Paxman et al 2018). If the PDI target profile is distinct in different cell types, and these target differences correlate with ATF6-induced activity by AA263, that would also bolster the authors' conclusion. 

      As highlighted by the reviewer, different ER oxidases (e.g., P450s) could differentially influence activation of compounds such as AA263 to promote PDI modification and subsequent ATF6 activation. The specific ER oxidases responsible for AA263 activation are currently unknown; however, we anticipate that multiple different enzymes can promote this activity making it difficult to discern the specific contributions of any one oxidase. We have made this point clearer in the revised submission, as below:

      Page 7, Line 169: “This specificity for ER proteins instead suggests the localized generation of AA263 quinone methides at the ER membrane, likely through metabolic activation by different ER localized oxidases, which has previously been shown to contribute to the selective modification of ER proteins afforded by other compounds such as AA147 [49]”   

      Reviewer #2 (Public review):

      Modulating the UPR by pharmacological targeting of its sensors (or regulators) provides mostly uncharted opportunities in diseases associated with protein misfolding in the secretory pathway. Spearheaded by the Kelly and Wiseman labs, ATF6 modulators were developed in previous years that act on ER PDIs as regulators of ATF6. However, hurdles in their medicinal chemistry have hampered further development. In this study, the authors provide evidence that the small molecule AA263 also targets and covalently modifies ER PDIs, with the effect of activating ATF6. Importantly, AA263 turned out to be amenable to chemical optimization while maintaining its desired activity. Building on this, the authors show that AA263 derivatives can improve the aggregation, trafficking, and function of two disease-associated mutants of secretory pathway proteins. Together, this study provides compelling evidence for AA263 (and its derivatives) being interesting modulators of ER proteostasis. Mechanistic details of its mode of action will need more attention in future studies that can now build on this.

      We thank the reviewer for their positive comments on our manuscript. We address the reviewer’s specific queries on our work, as outlined below. 

      In detail, the authors provide strong evidence that AA263 covalently binds to ER PDIs, which will inhibit the protein disulfide isomerase activity. ER PDIs regulate ATF6, and thus their finding provides a mechanistic interpretation of AA263 activating the UPR. It should be noted, however, that AA263 shows broad protein labeling (Figure 1G), which may suggest additional targets, beyond the ones defined as MS hits in this study. 

      This is true. We do show broad proteome-wide labeling with AA263<sup>yne</sup>, which is largely reflected in the hits identified by MS beyond PDI family members. It is possible that other engaged targets, in addition to PDIs, may contribute to the activation of ATF6 signaling. Regardless, our MS analysis clearly shows that the proteins modified by AA263 are enriched for PDIs, further supporting our model whereby AA263-dependent PDI modification is likely responsible for ATF6 activation. 

      Also, a further direct analysis of the IRE1 and PERK pathways (activated or not by AA263) would have been a benefit, as e.g., PDIA1, a target of AA263, directly regulates IRE1 (Yu et al., EMBOJ, 2020), and other PDIs also act on PERK and IRE1. The authors interpret modest activation of IRE1/PERK target genes (Figure 2C) as an effect on target gene overlap, indeed the most likely explanation based on their selective analyses on IRE1 (ERdj4) and PERK (CHOP) downstream genes, but direct activation due to the targeting of their PDI regulators is also a possible explanation. 

      While we do observe mild increases in IRE1/XBP1s target genes, we do not observe significant increases in PERK/ISR target genes in cells treated with optimized AA263 analogs (see Fig. 2C). We previously showed that genetic ATF6 activation leads to a modest increase in IRE1/XBP1s target genes, reflecting the overlap in target genes of the IRE1/XBP1s and ATF6 pathways (see Shoulders et al (2013) Cell Reports). However, with our data, we cannot explicitly rule out the possibility that the mild increase in IRE1/XBP1s target genes reflects direct IRE1/XBP1s activation, as suggested by the reviewer. To address this, we have adapted the text to highlight this point, now specifically referring to preferential ATF6 activation afforded by these compounds, as below:

      Page 5, Line 100: “In addition to finding AA147, our original high-throughput screen also identified the phenylhydrazone compound AA263 as a compound that preferentially activates the ATF6 arm of the UPR [26]”  

      Further key findings of this paper are the observed improvement of AAT behavior and GABAA trafficking and function. Further strength to the mechanistic conclusion that ATF6 activation causes this could be obtained by using ATF6 inhibitors/knockouts in the presence of AA263 (as the target PDIs may directly modulate the behavior of AAT and/or GABAA). 

      AA263 and related compounds could influence ER proteostasis of destabilized proteins through multiple mechanisms, including ATF6 activation or direct modification of a subset of PDIs. We previously showed that AA263-dependent enhancement of A1AT-Z secretion and activity can be largely attributed to ATF6 activation (see Sun et al (2023) Cell Chem Biol). In the revised submission, we now show that the increased levels of γ2(R177G) afforded by treatment with AA263<sup>yne</sup> are partially blocked by co-treatment with the ATF6 inhibitor Ceapin-A7 (CP7), highlighting the contribution of ATF6 activation to this phenotype (Fig. S5B,C). Intriguingly, this result also demonstrates the benefit of targeting ER proteostasis using compounds such as our optimized AA263 analogs, as this approach allows us to enhance ER proteostasis of destabilized proteins through multiple mechanisms. We further expand on this specific point in the revised manuscript, as below:

      Page 14, Line 375: “AA263 and its related analogs can influence ER proteostasis in these models through different mechanisms including ATF6-dependent remodeling of ER proteostasis and direct alterations to the activity of specific PDIs.(*) Consistent with this, we show that pharmacologic inhibition of ATF6 only partially blocks increases of γ2(R177G) afforded by treatment with AA263<sup>yne</sup>, highlighting the benefit for targeting multiple aspects of ER proteostasis to enhance ER proteostasis of this disease-relevant GABA<sub>A</sub> variant. While additional studies are required to further deconvolute the relative contributions of these two mechanisms on the protection afforded by our optimized compounds, our results demonstrate the potential for these compounds to enhance ER proteostasis in the context of different protein misfolding diseases.”  

      Along the same line, it also warrants further investigation why the different compounds, even if all were used at concentrations above their EC50, had different rescuing capacities on the clients.

      This is an interesting question that we are continuing to study. While, in general, we observe fairly good correlation between ATF6 activation and correction of diseases of ER proteostasis linked to proteins such as A1AT-Z or GABA<sub>A</sub> receptors, as the reviewer points out, we do find that some compounds are more efficient at correcting proteostasis than others that activate ATF6 to similar levels. We attribute this to differences in either the labeling efficiency of PDIs or the differential regulation of various ER proteostasis factors, although this remains to be further defined. As we continue working with these (and other) compounds, we will focus on defining a more molecular basis for these findings. 

      Together, the study now provides a strong basis for such in-depth mechanistic analyses.

      We agree and we are continuing to pursue the mechanistic basis of ER proteostasis remodeling afforded by these and related compounds. 

      Reviewer #3 (Public review):

      Summary: 

      This study aims to develop and characterize phenylhydrazone-based small molecules that selectively activate the ATF6 arm of the unfolded protein response by covalently modifying a subset of ER-resident PDIs. The authors identify AA263 as a lead scaffold and optimize its structure to generate analogs with improved potency and ATF6 selectivity, notably AA263-20. These compounds are shown to restore proteostasis and functional expression of disease-associated misfolded proteins in cellular models involving both secretory (AAT-Z) and membrane (GABAA receptor) proteins. The findings provide valuable chemical tools for modulating ER proteostasis and may serve as promising leads for therapeutic development targeting protein misfolding diseases.

      Strengths: 

      (1) The study presents a well-defined chemical biology framework integrating proteomics, transcriptomics, and disease-relevant functional assays. 

      (2) Identification and optimization of a new electrophilic scaffold (AA263) that selectively activates ATF6 represents a valuable advance in UPR-targeted pharmacology.

      (3) SAR studies are comprehensive and logically drive the development of more potent and selective analogs such as AA263-20.

      (4) Functional rescue is demonstrated in two mechanistically distinct disease models of protein misfolding (one involving a secretory protein and the other a membrane protein), underscoring the translational relevance of the approach. 

      We thank the reviewer for their positive comments related to our work. We address specific weaknesses highlighted by the reviewer, as outlined below. 

      Weaknesses: 

      (1) ATF6 activation is primarily inferred from reporter assays and transcriptional profiling; however, direct evidence of ATF6 cleavage is lacking.

      While ATF6 trafficking and processing can be visualized in cell culture models following severe ER insults (e.g., Tg, Tm), we showed previously that the more modest activation afforded by pharmacologic activators such as AA147 and AA263 cannot be easily visualized by monitoring ATF6 processing (see Plate et al (2016) ELIFE). As we have shown in numerous other manuscripts, we have established a transcriptional profiling approach that accurately defines ATF6 activation. We use that approach to confirm preferential ATF6 activation in this manuscript. We feel that this is sufficient for confirming ATF6 activation. However, we also now include data showing that co-treatment with ATF6 inhibitors (e.g., CP7) blocks increased expression of ATF6 target genes induced by our prioritized compound AA263<sup>yne</sup> (Fig. S1B). This further supports our assertion that this compound activates ATF6 signaling.  

      (2) While the mechanism involving PDI modification and ATF6 activation is plausible, it remains incompletely characterized. 

      We thank the reviewer for this comment. We previously showed that genetic depletion of individual PDIs only modestly impacts ATF6 activation afforded by ATF6-activating compounds such as AA147. However, as discussed in this manuscript, the ability of AA147 and AA263 to activate ATF6 signaling is mediated through polypharmacologic targeting of multiple different PDIs involved in regulating the ATF6 redox state. Thus, individual knockdowns are predicted to only minimally impact the ability of AA263 and its analogs to activate ATF6 signaling. 

      To address this comment, we have tempered our language regarding the mechanism of AA263-dependent ATF6 activation through PDI targeting described herein to better reflect the fact that we have not explicitly proven that PDI targeting is responsible for this activity, as highlighted below:

      Page 7, Line 158: “Intriguingly, 12 proteins were shared between these two conditions, including 7 different ER-localized PDIs (Fig. 1H). This includes PDIs previously shown to regulate ATF6 activation including TXNDC12/ERP18.[45,46] These results are similar to those observed when comparing proteins modified by the selective ATF6 activating compound AA147<sup>yne</sup> and AA132<sup>yne</sup>.[38] Further, we found that the extent of labeling for PDIs including PDIA1, PDIA4, PDIA6, and TMX1, but not TXNDC12, showed greater modification by AA132<sup>yne</sup>, as compared to AA263<sup>yne</sup> (Fig. 1I). Similar results were observed for AA147<sup>yne</sup>[38] This suggests that, like AA147, the selective activation of ATF6 afforded by AA263 is likely attributed to the modifications of a subset of multiple different ER-localized PDIs by this compound.”

      (3) No in vivo data are provided, leaving the pharmacological feasibility and bioavailability of these compounds in physiological systems unaddressed.

      We are continuing to test the in vivo activity of these compounds in work outside the scope of this initial study. 

      Reviewer #1 (Recommendations for the authors): 

      (1) First page of the discussion, last sentence. "We previously showed the relatively labeling of PDI modification directly impacts..." should be reworded.

      Thank you. We have corrected this in the revised manuscript. 

      (2) What is the rationale for measuring ERSE-Fluc activity at 18 h but RNAseq at 6 h? What is known about the timing of action for AA263?

      Compound-dependent activation of luciferase reporters requires the translation and accumulation of the luciferase protein for sufficient signal, while qPCR does not. We normally use longer incubations for reporter assays to ensure that we have sufficient quantity of reporter protein to accurately monitor activation. We have found that AA263 can rapidly increase ATF6 activity, with gene expression increases being observed after only a few hours of treatment. This is consistent with the proposed mechanism of ATF6 activation discussed herein involving metabolic activation and subsequent PDI modification.   

      (3) Figure 1 panel E and Figure S2 panel B. Are these the same data for AA263 and AA263yne, with AA263-5 added to the plot for Figure S2? If so, it would be nice to note that panel B represents data from 3 of the replicates that are shown in Figure 1 (n=6).

      Yes. The AA263 and AA263<sup>yne</sup> data shown in Fig. 1E and Fig. S2B are the same data, as these experiments were performed at the same time. We apologize for this oversight, which has now been corrected in the revised version. Note that there were n=3 replicates for the dose response shown in Fig. 1E, which we corrected in the figure legend as below:

      Fig. S2B Figure Legend: “B. Activation of the ERSE-FLuc ATF6 reporter in HEK293T cells treated for 18 h with the indicated concentration of AA263, AA263<sup>yne</sup>, or AA263-5. Error bars show SEM for n= 3 replicates. The data for AA263 and AA263<sup>yne</sup> is the same as that shown in Fig. 1E and are shown for comparison.” 
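
      As background for the dose-response data discussed above, the sketch below shows how an EC50 can be estimated from reporter-activation data by fitting a four-parameter Hill equation. It is a generic, minimal illustration under assumed synthetic values; the concentrations and responses are placeholders, not the ERSE-FLuc measurements from Fig. 1E or Fig. S2B.

```python
# Minimal sketch: estimate an EC50 by fitting a four-parameter Hill curve
# to reporter-activation data. All values below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

# Hypothetical compound concentrations (uM) and fold-activation readings.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
fold_activation = np.array([1.1, 1.3, 2.0, 3.6, 4.8, 5.1])

# Fit the curve; p0 gives rough starting guesses for the four parameters.
params, _ = curve_fit(hill, conc, fold_activation,
                      p0=[1.0, 5.0, 2.0, 1.0], maxfev=10000)
bottom, top, ec50, slope = params
print(f"estimated EC50 = {ec50:.2f} uM (Hill slope {slope:.2f})")
```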

      (4) Figure S3. The legend notes 5 µM AA263-yne and 20 µM analog, whereas the figure itself outlines the same ratio but different concentrations: 10 µM and 40 µM.

      We apologize for this mistake in the legend, which has been corrected. The information in the figure is correct. 

      Reviewer #2 (Recommendations for the authors): 

      (1) The activation mechanism of ATF6 is still debated (really trafficking as a monomer?); the authors may want to word more carefully here. 

      We agree. We have corrected this in the revised manuscript to indicate that increased populations of reduced ATF6 traffic for proteolytic processing. 

      (2) In Figure 1B, below the figure, mM is written for BME, but micromolar is meant.

      Thank you. This has been corrected in the revised manuscript. 

      (3) The authors may want to make clearer, why BME does not completely inhibit AA263 and does not cause ER stress itself under the conditions tested.

      The addition of BME in our experiments is designed to shift the redox potential of the cell by increasing intracellular thiol reagents, such as glutathione, that can quench ‘activated’ AA263 and its analogs. However, BME is actively oxidized upon addition, and the intracellular redox environment can rapidly re-equilibrate following BME addition. Thus, we do not expect that AA263 or other metabolically activated compounds will be fully quenched using this approach, as is observed. This is consistent with other experiments showing that the use of these types of reducing agents does not fully suppress the activity of reactive molecules, but instead shifts their dose-dependent activation of specific pathways.  

      (4) The data in Figure 4C seems to disagree with the other data on the tested compounds; this should be clarified. 

      It is unclear to what the reviewer is referring. The data in 4C shows that treatment with our optimized AA263 analogs improved elastase inhibition afforded by secreted A1AT, as would be predicted. 

      (5) PDIs that have been shown to regulate ATF6 should be discussed in more detail in the light of the presented data/interactome (e.g., ERp18).

      Thank you for the suggestion. We now explicitly note that AA263<sup>yne</sup> covalently modifies TXNDC12/ERP18 in our proteomic dataset. However, we also note that there is no difference in labeling of this specific PDI between AA263<sup>yne</sup> and AA132<sup>yne</sup>. This may indicate that targeting of this protein is responsible for the larger levels of ATF6 activation afforded by both of these compounds relative to AA147, with the activation of other UPR pathways afforded by AA132 resulting from increased labeling of other PDIs. We are now exploring this possibility in work outside the scope of the current manuscript. 

      Page 7 Line 158: “Intriguingly, 12 proteins were shared between these two conditions, including 7 different ER-localized PDIs (Fig. 1H). This includes PDIs previously shown to regulate ATF6 activation including TXNDC12/ERP18.[45,46] These results are similar to those observed when comparing proteins modified by the selective ATF6 activating compound AA147<sup>yne</sup> and AA132<sup>yne</sup>.[38] Further, we found that the extent of labeling for PDIs including PDIA1, PDIA4, PDIA6, and TMX1, but not TXNDC12, showed greater modification by AA132<sup>yne</sup>, as compared to AA263<sup>yne</sup> (Fig. 1I). Similar results were observed for AA147<sup>yne</sup> [38] This suggests that, like AA147, the selective activation of ATF6 afforded by AA263 is likely attributed to the modifications of a subset of multiple different ER-localized PDIs by this compound.”

      Reviewer #3 (Recommendations for the authors):

      (1) Please consider adding detection of ATF6 cleavage by Western blot as direct evidence of AA263-induced ATF6 activation, to substantiate the central mechanistic claim.

      While ATF6 trafficking and processing can be visualized in cell culture models following severe ER insults (e.g., Tg, Tm), we showed previously that the more modest activation afforded by pharmacologic activators such as AA147 and AA263 cannot be easily visualized through monitoring ATF6 proteolytic processing by western blotting (see Plate et al (2016) ELIFE). As we have shown in numerous other manuscripts, we have established a transcriptional profiling approach that accurately defines ATF6 activation. We use that approach to confirm preferential ATF6 activation in this manuscript. We feel that this is sufficient for confirming ATF6 activation. However, we also now include qPCR data showing that co-treatment with ATF6 inhibitors (e.g., CP7) blocks increased expression of ATF6 target genes induced by our prioritized compounds. 

      (2) To strengthen causal inference, loss-of-function experiments such as PDI knockdown, cysteine mutant inactivation, or reconstitution studies may be informative.

      We thank the reviewer for this comment. We previously showed that genetic depletion of individual PDIs only modestly impacts ATF6 activation afforded by ATF6-activating compounds such as AA147. However, as discussed in this manuscript, the ability of AA147 and AA263 to activate ATF6 signaling is mediated through polypharmacologic targeting of multiple different PDIs involved in regulating the ATF6 redox state, rather than through a single PDI family member. Thus, individual knockdowns are predicted to only minimally impact the ability of AA263 and its analogs to activate ATF6 signaling. 

      To address this comment, we have tempered our language regarding the mechanism of AA263-dependent ATF6 activation through PDI targeting described herein to better reflect the fact that we have not explicitly proven that PDI targeting is responsible for this activity.

      (3) Since β-mercaptoethanol inhibits ATF6 activation, it would be helpful to examine whether DTT also suppresses the activity of AA263 or its analogs, to clarify the redox sensitivity of the mechanism.

      The use of reducing agents stronger than BME, such as DTT, globally activates the UPR, including the ATF6 arm of the UPR. Thus, we are unable to perform the requested experiments. We specifically use BME because it is a sufficiently mild reducing agent that can quench reactive metabolites (e.g., activated AA263 analogs) through alterations in cellular glutathione levels without globally activating the UPR.  

      (4) Given the electrophilic nature of AA263, which may allow it to react with endogenous thiols (e.g., glutathione or cysteine), a brief discussion or experimental validation of this potential liability would enhance the interpretation of in vivo applicability.

      Metabolically activated AA263, like AA147, can be quenched by endogenous thiols such as glutathione. However, treatment with our metabolically activatable electrophiles AA147 and AA263, either in vitro or in vivo, does not seem to induce activation of the NRF2-regulated oxidative stress response (OSR) in the cell lines used in this manuscript (e.g., Fig. S2C). This suggests that treatment with these compounds does not globally disrupt the intracellular redox state, at least in the tested cell lines. While AA147 has been shown to activate NRF2 in specific neuronal cell lines and in primary neurons, AA147 does not activate NRF2 signaling in other non-neuronal cell lines or other tissues (see Rosarda et al (2021) ACS Chem Bio). We are currently testing the potential for AA263 to similarly activate adaptive NRF2 signaling in neuronal cells. Regardless, AA147, which functions through a similar mechanism to that proposed for AA263, has been shown to be beneficial in multiple models of disease both in vitro and in vivo. This indicates that this mechanism of action is suitable for continued translational development to mitigate the pathologic ER proteostasis disruption observed in diverse types of human disease.  

      (5) Evaluation of in vivo activity, such as BiP induction in the liver following intraperitoneal administration of AA263-20 or related analogs, could substantially increase the translational impact of the work.

      We are continuing to probe the activity of our optimized AA263 analogs in vivo in work outside the scope of this current manuscript. We thank the reviewer for this suggestion. 

      (6) The degree of BiP induction may also be contextualized by comparison with known ER stress inducers such as thapsigargin or tunicamycin, ideally by providing relative dose-equivalent responses.

      We are not sure to what the reviewer is referring. We show comparative activation of ATF6 in cells treated with the ER stressor Tg and our compounds by both reporter assay (e.g., Fig. 2B) and qPCR of the ATF6 target gene BiP (HSPA5) (Fig. S2A). We feel that this provides context for the more physiologic levels of ATF6 activation afforded by these compounds.

    1. Abstract: Root hairs play a key role in plant nutrient and water uptake. Historically, root hair traits have been largely quantified manually. As such, this process has been laborious and low-throughput. However, given their importance for plant health and development, high-throughput quantification of root hair morphology could help underpin rapid advances in the genetic understanding of these traits. With recent increases in the accessibility and availability of artificial intelligence (AI) and machine learning techniques, the development of tools to automate plant phenotyping processes has been greatly accelerated. Here, we present pyRootHair, a high-throughput, AI-powered software application to automate root hair trait extraction from images of plant roots grown on agar plates. pyRootHair is capable of batch processing over 600 images per hour without manual input from the end user. In this study, we deploy pyRootHair on a panel of 24 diverse wheat cultivars and uncover a large, previously unresolved amount of variation in many root hair traits. We show that the overall root hair profile falls under two distinct shape categories, and that different root hair traits often correlate with each other. We also demonstrate that pyRootHair can be deployed on a range of plant species, including arabidopsis (Arabidopsis thaliana), brachypodium (Brachypodium distachyon), medicago (Medicago truncatula), oat (Avena sativa), rice (Oryza sativa), teff (Eragrostis tef) and tomato (Solanum lycopersicum). The application of pyRootHair enables users to rapidly screen large numbers of plant germplasm resources for variation in root hair morphology, supporting high-resolution measurements and high-throughput data analysis. This facilitates downstream investigation of the impacts of root hair genetic control and morphological variation on plant performance.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf141), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Wanneng Yang

      This paper introduces an artificial intelligence-driven software named pyRootHair, which enables high-throughput automated extraction of root hair traits from plant root images, thereby facilitating rapid analysis of root hair morphological variations in various plants, including wheat. However, the following issues remain:

      1) Compared to previously published work, the contributions and innovations of this study are not sufficiently highlighted. For instance, the work by Lu, Wei, Xiaochan Wang, and Wei Jia, titled "Root hair image processing based on deep learning and prior knowledge" (Comput. Electron. Agric. 202, 2022: 107397), should be explicitly referenced to clarify the advancements presented here.

      2) Although the study demonstrates that pyRootHair can be applied to multiple plant species, including Arabidopsis, Brachypodium, rice, and tomato, the primary validation and analysis are conducted on wheat. For other species, only segmentation results and trait extraction figures are presented, lacking detailed comparative validation with manual measurements as thoroughly as for wheat.

      3) The process of "straightening" curved roots is implemented, but the potential introduction of new errors by this procedure is not discussed.

      4) In the trait validation section, the correlation analysis between automated and manual measurements shows strong agreement for root hair length and root length, but weaker correlation for elongation zone length. The study should provide a more in-depth discussion on the possible reasons for this lower correlation (see the sketch after this list).

      5) The details of the core algorithms (CNN architecture, random forest classifier) are insufficiently described. Key aspects such as parameter selection, optimization, training procedures, and the division ratios of the training/validation/test sets are not clearly specified. Additionally, the specific strategies for data augmentation are not mentioned.

      6) No quantitative comparisons with similar tools (e.g., in terms of speed and accuracy) are provided.
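
      As a pointer for the comparative validation requested in points 2 and 4, the snippet below sketches the usual agreement analysis between automated and manual measurements: a Pearson correlation plus a mean absolute error on paired values. It is a generic illustration with placeholder numbers, not pyRootHair code or data from the study.

```python
# Minimal sketch of validating automated trait extraction against manual
# measurements. The paired values below are placeholders, not study data.
import numpy as np
from scipy import stats

# Paired measurements (e.g., root hair length in mm) for the same samples.
manual = np.array([0.42, 0.55, 0.61, 0.38, 0.70, 0.48, 0.66, 0.52])
automated = np.array([0.40, 0.58, 0.59, 0.41, 0.73, 0.47, 0.62, 0.55])

# Pearson correlation quantifies linear agreement between the two methods.
r, p_value = stats.pearsonr(manual, automated)

# Mean absolute error gives the typical deviation in the trait's own units.
mae = np.mean(np.abs(automated - manual))

print(f"Pearson r = {r:.3f} (p = {p_value:.3g}), MAE = {mae:.3f} mm")
```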

    1. RNA-Seq analysis has become a routine task in numerous genomic research labs, driven by the reduced cost of bulk RNA sequencing experiments. These generate billions of reads that require accurate, efficient, effective, and reproducible analysis. But the time required for comprehensive analysis remains a bottleneck. Many labs rely on in-house scripts, making standardization and reproducibility challenging. To address this, we developed RNA-SeqEZPZ, an automated pipeline with a user-friendly point-and-click interface, enabling rigorous and reproducible RNA-Seq analysis without requiring programming or bioinformatics expertise. For advanced users, the pipeline can also be executed from the command line, allowing customization of steps to suit specific requirements. This pipeline includes multiple steps from quality control, alignment, filtering, read counting to differential expression and pathway analysis. We offer two different implementations of the pipeline using either (1) bash and SLURM or (2) Nextflow. The two implementation options allow for straightforward installation, making it easy for individuals familiar with either language to modify and/or run the pipeline across various computing environments. RNA-SeqEZPZ provides an interactive visualization tool using R shiny to easily select the FASTQ files for analysis and compare differentially expressed genes and their functions across experimental conditions. The tools required by the pipeline are packaged into a Singularity image for ease of installation and to ensure replicability. Finally, the pipeline performs a thorough statistical analysis and provides an option to perform batch adjustment to minimize effects of noise due to technical variations across replicates. RNA-SeqEZPZ is freely available and can be downloaded from https://github.com/cxtaslim/RNA-SeqEZPZ.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf133), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Yang Yang

      The manuscript describes RNA-SeqEZPZ, an automated RNA-Seq analysis pipeline with a user-friendly point-and-click interface. It aims to make comprehensive transcriptomics analyses more accessible to researchers who lack extensive bioinformatics skills by addressing common issues with standardization and usability that arise from using in-house scripts. The pipeline's main features are the use of a Singularity container to simplify software installation and a Nextflow version to support scalability across different computing environments like clouds and clusters. However, I'm not sure if this manuscript fits the journal's scope in its current form. It seems to be just an integration of existing tools without offering new methods or findings.

      Major comments:

      1. The manuscript mentions several existing RNA-Seq pipelines, such as ENCODE, nf-core, ROGUE, Shiny-Seq, bulkAnalyseR, Partek™ flow, RaNA-Seq, and RASflow. A more detailed comparison of RNA-SeqEZPZ with these tools is needed, especially regarding specific features, performance metrics, and ease of use. For example, it would be helpful to compare the computational resources required by each pipeline or the statistical methods used for differential expression analysis.

      2. The manuscript emphasizes reproducibility through Singularity containers and Nextflow. However, it would be stronger if it included a more rigorous demonstration of reproducibility. This could involve running the pipeline on multiple datasets and comparing the results, or providing a detailed protocol for other researchers to reproduce the findings.

      3. The manuscript highlights the scalability and portability of RNA-SeqEZPZ due to its Nextflow version. It would be useful to include specific examples of how the pipeline has been used in different computing environments (e.g., cloud, cluster) and to provide performance data to demonstrate its scalability.

      4. The point-and-click interface is a key feature, but the manuscript could benefit from a more detailed description of the interface and its functionalities. Including screenshots or a video demonstration would be valuable for potential users.

      5. The manuscript shows the effects of batch adjustment using a public dataset. It would be beneficial to expand this section with a discussion of the limitations of batch adjustment methods and to provide guidance on when and how to apply them.

    2. RNA-Seq analysis has become a routine task in numerous genomic research labs, driven by the reduced cost of bulk RNA sequencing experiments. These generate billions of reads that require accurate, efficient, effective, and reproducible analysis. But the time required for comprehensive analysis remains a bottleneck. Many labs rely on in-house scripts, making standardization and reproducibility challenging. To address this, we developed RNA-SeqEZPZ, an automated pipeline with a user-friendly point-and-click interface, enabling rigorous and reproducible RNA-Seq analysis without requiring programming or bioinformatics expertise. For advanced users, the pipeline can also be executed from the command line, allowing customization of steps to suit specific requirements. This pipeline includes multiple steps from quality control, alignment, filtering, read counting to differential expression and pathway analysis. We offer two different implementations of the pipeline using either (1) bash and SLURM or (2) Nextflow. The two implementation options allow for straightforward installation, making it easy for individuals familiar with either language to modify and/or run the pipeline across various computing environments. RNA-SeqEZPZ provides an interactive visualization tool using R shiny to easily select the FASTQ files for analysis and compare differentially expressed genes and their functions across experimental conditions. The tools required by the pipeline are packaged into a Singularity image for ease of installation and to ensure replicability. Finally, the pipeline performs a thorough statistical analysis and provides an option to perform batch adjustment to minimize effects of noise due to technical variations across replicates. RNA-SeqEZPZ is freely available and can be downloaded from https://github.com/cxtaslim/RNA-SeqEZPZ.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf133), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Unitsa Sangket

      This research presents a well-designed and powerful program for comprehensive transcriptomics analysis with interactive visualizations. The tool is conceptually strong and user-friendly, requiring only raw reads in FASTQ format to initiate the analysis, with no need for manual quality checks. However, a limitation is that the software must be installed manually, which typically requires access to a high-performance computing (HPC) system and support from a system administrator for installation and server maintenance. As such, non-technical users may find it difficult to install and operate the program independently.

      With appropriate revisions based on the comments below, the manuscript has the potential to be significantly improved.

      • Page 8, lines 158-160: "DESeq2 was selected based on findings by Rapaport et al. (2013)40, which demonstrated its superior specificity and sensitivity as well as good control of false positive errors." The findings in the paper titled "bestDEG: a web-based application automatically combines various tools to precisely predict differentially expressed genes (DEGs) from RNA-Seq data" (https://peerj.com/articles/14344) show that DESeq2 achieves higher sensitivity than other tools when applied to newer human RNA-Seq datasets. This finding should be included in the manuscript. For example: DESeq2 was selected based on findings by Rapaport et al. (2013)⁴⁰, which demonstrated its superior specificity and sensitivity as well as good control of false positive errors. Additionally, recent findings from the bestDEG study (cite bestDEG) further support the higher sensitivity of DESeq2 compared with other tools when applied to newer human RNA-Seq datasets.

      • Page 6, lines 124-125: "Raw reads quality control are then performed using FASTQC18 and QC reports are compiled using MultiQC19." The quality of the trimmed reads can be assessed using FastQC, as demonstrated and summarized in the paper titled "VOE: automated analysis of variant epitopes of SARS-CoV-2 for the development of diagnostic tests or vaccines for COVID-19" (https://peerj.com/articles/17504/) (page 4, last paragraph): "(1) Per base sequence quality (median value of each base greater than 25), (2) per sequence quality (median quality greater than 27), (3) per-base N content (N base less than 5% at each read position) and (4) adapter content (adapter sequences at each position less than 5% of all reads)". This point should be mentioned in the manuscript, including the cutoff values for each of the FastQC metrics used in RNA-SeqEZPZ, as these thresholds may vary. For example: the quality of the trimmed FASTQ reads was assessed based on the four FastQC metrics, as summarized by Lee et al. (2024). The cutoffs for RNA-SeqEZPZ were set as follows: the median value of each base must be greater than [x], the median quality score must be above [y], the percentage of N bases at each read position must be less than [z]%, and the proportion of adapter sequences at each position must be below [xx]% of all reads.

      • The programs used for counts table creation and alignment process should be mentioned in the manuscript.

      • The default cutoffs for FDR and log₂ fold change, as well as instructions on how to modify these thresholds, should be clearly stated in the manuscript.
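
      To illustrate the kind of cutoffs the last point refers to, the snippet below sketches how differential expression results are typically filtered by an FDR-adjusted p-value and a log₂ fold-change threshold. The column names follow DESeq2's standard output conventions, but the thresholds (0.05 and 1) and the file names are assumptions for illustration; they are not documented defaults of RNA-SeqEZPZ.

```python
# Minimal sketch: filter a differential-expression results table by FDR and
# log2 fold change. Thresholds and file names are illustrative assumptions,
# not RNA-SeqEZPZ defaults.
import pandas as pd

FDR_CUTOFF = 0.05      # adjusted p-value (padj) threshold
LFC_CUTOFF = 1.0       # absolute log2 fold-change threshold (2-fold)

# DESeq2-style results exported as CSV, one row per gene.
results = pd.read_csv("deseq2_results.csv", index_col=0)

significant = results[
    (results["padj"] < FDR_CUTOFF)
    & (results["log2FoldChange"].abs() >= LFC_CUTOFF)
]

print(f"{len(significant)} genes pass padj < {FDR_CUTOFF} "
      f"and |log2FC| >= {LFC_CUTOFF}")
significant.sort_values("padj").to_csv("significant_genes.csv")
```

      Changing the two constants at the top adjusts the stringency of the gene list, which is the kind of user-facing control the reviewer asks to be documented.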

    1. Reviewer #2 (Public review):

      Summary:

      This paper formulates an individual-based model to understand the evolution of division of labor in vertebrates. The model considers a population subdivided into groups; each group has a single asexually reproducing breeder, other group members (subordinates) can perform two types of tasks called "work" or "defense", individuals have different ages, individuals can disperse between groups, each individual has a dominance rank that increases with age, and upon death of the breeder a new breeder is chosen among group members depending on their dominance. "Workers" pay a reproduction cost by having their dominance decreased, and "defenders" pay a survival cost. Every group member receives a survival benefit with increasing group size. There are 6 genetic traits, each controlled by a single locus, that control propensities to help and disperse, and how task choice and dispersal relate to dominance. To study the effect of group augmentation without kin selection, the authors cross-foster individuals to eliminate relatedness. The paper allows the 6 genetic traits to evolve under different parameter values to study the conditions under which division of labour evolves, defined as the occurrence of different subordinates performing "work" and "defense" tasks. The authors envision the model as one of vertebrate division of labor.

      The main conclusion of the paper is that group augmentation is the primary factor causing the evolution of vertebrate division of labor, rather than kin selection. This conclusion is drawn because, for the parameter values considered, when the benefit of group augmentation is set to zero, no division of labor evolves and all subordinates perform "work" tasks but no "defense" tasks.

      Strengths:

      The model incorporates various biologically realistic details, including the possibility of evolving age polyethism, where individuals switch from "work" to "defense" tasks as they age or vice versa, as well as the possibility of comparing the action of group augmentation alone with that of kin selection alone.

      Weaknesses:

      The model and its analysis are limited, which in my view makes the results insufficient to reach the main conclusion that group augmentation and not kin selection is the primary cause of the evolution of vertebrate division of labour. There are several reasons.

      First, although the main claim is that group augmentation drives the evolution of division of labour in vertebrates, the model is rather conceptual in that it doesn't use quantitative empirical data that applies to all or most vertebrates, and to vertebrates only. So, I think the approach has a conceptual reach rather than being able to support such a conclusion about a real taxon.

      Second, I think that the model strongly restricts the possibility that kin selection is relevant. The two tasks considered essentially differ only by whether they are costly for reproduction or survival. "Work" tasks are those costly for reproduction and "defense" tasks are those costly for survival. The two tasks provide the same benefits for reproduction (eqs. 4, 5) and survival (through group augmentation, eq. 3.1). So, whether one, the other, or both helper types evolve presumably only depends on which task is less costly, not really on which benefits it provides. As the two tasks give the same benefits, there is no possibility that the two tasks act synergistically, where performing one task increases a benefit (e.g., increasing someone's survival) that is going to be compounded by someone else performing the other task (e.g., increasing that someone's reproduction). So, there is very little scope for kin selection to cause the evolution of labour in this model. Note that synergy between tasks is not something unusual in division of labour models, but is in fact a basic element in them, so excluding it from the start in the model and then making general claims about division of labour is unwarranted. In their reply, the authors point out that they only consider fertility benefits as this, according to them, is what happens in cooperative breeders with alloparental care; however, alloparental care entails that workers can increase others' survival *without group augmentation*, such as via workers feeding young or defenders reducing predator-caused mortality, as I mentioned in my previous review, but these potentially kin-selected benefits are not allowed here.

      Third, the parameter space is understandably little explored. This is necessarily an issue when trying to make general claims from an individual-based model, where only a very narrow parameter region of a necessarily particular model can be feasibly explored. As in this model the two tasks ultimately only differ by their costs, the parameter values specifying their costs should be varied to determine their effects. In the main results, the model sets a very low survival cost for work (yh=0.1) and a very high survival cost for defense (xh=3), the latter of which can be compensated by the benefit of group augmentation (xn=3). Some limited variation of xh and xn is explored, always for very high values, effectively making defense unevolvable except if there is group augmentation. In this revision, additional runs have been included varying yh while keeping xh and xn constant (Fig. S6), which does not address my comment since xn remains very high. Consequently, the main conclusion that "division of labor" needs group augmentation seems essentially enforced by the limited parameter exploration, in addition to the second reason above.

      Fourth, my view is that what is called "division of labor" here is an overinterpretation. When the two helper types evolve, what exists in the model is some individuals that do reproduction-costly tasks (so-called "work") and survival-costly tasks (so-called "defense"). However, there are really no two tasks that are being completed, in the sense that completing both tasks (e.g., work and defense) is not necessary to achieve a goal (e.g., reproduction). In this model there is only one task (reproduction, equation 4,5) to which both helper types contribute equally and so one task doesn't need to be completed if completing the other task compensates for it; instead, it seems more fitting to say that there are two types of helpers, one that pays a fertility cost and another one a survival cost, for doing the same task. So, this model does not actually consider division of labor but the evolution of different helper types where both helper types are just as good at doing the single task but perhaps do it differently and so pay different types of costs. In this revision, the authors introduced a modified model where "work" and "defense" must be performed to a similar extent. Although I appreciate their effort, this model modification is rather unnatural and forces the evolution of different helper types if any help is to evolve.

      I should end by saying that these comments don't aim to discourage the authors, who have worked hard to put together a worthwhile model and have patiently attended to my reviews. My hope is that these comments can be helpful to build upon what has been done to address the question posed.

    2. Author response:

      The following is the authors’ response to the previous reviews

      Reviewer #1 (Public review):

      This paper presents a computational model of the evolution of two different kinds of helping ("work," presumably denoting provisioning, and defense tasks) in a model inspired by cooperatively breeding vertebrates. The helpers in this model are a mix of previous offspring of the breeder and floaters that might have joined the group, and can either transition between the tasks as they age or not. The two types of help have differential costs: "work" reduces "dominance value," (DV), a measure of competitiveness for breeding spots, which otherwise goes up linearly with age, but defense reduces survival probability. Both eventually might preclude the helper from becoming a breeder and reproducing. How much the helpers help, and which tasks (and whether they transition or not), as well as their propensity to disperse, are all evolving quantities. The authors consider three main scenarios: one where relatedness emerges from the model, but there is no benefit to living in groups, one where there is no relatedness, but living in larger groups gives a survival benefit (group augmentation, GA), and one where both effects operate. The main claim is that evolving defensive help or division of labor requires the group augmentation; it doesn't evolve through kin selection alone in the authors' simulations.

      This is an interesting model, and there is much to like about the complexity that is built in. Individual-based simulations like this can be a valuable tool to explore the complex interaction of life history and social traits. Yet, models like this also have to take care of both being very clear on their construction and exploring how some of the ancillary but potentially consequential assumptions affect the results, including robust exploration of the parameter space. I think the current manuscript falls short in these areas, and therefore, I am not yet convinced of the results. In this round, the authors provided some clarity, but some questions still remain, and I remain unconvinced by a main assumption that was not addressed.

      Based on the authors' response, if I understand the life history correctly, dispersers either immediately join another group (with probability 1 minus the dispersal probability), or remain floaters until they successfully compete for a breeder spot or die? Is that correct? I honestly cannot decide, because this seems implicit in the first response, but the response to my second point raises the possibility that individuals do not work while floating but can work if they later join a group as subordinates. If it is the case that floaters can have multiple opportunities to join groups as subordinates (not as breeders; I assume that this is the case for breeding competition), this should be stated, along with more detail about how it happens. So there is still some clarification to be done, and, more to the point, the clarification that happened only happened in the response. The authors should add these details to the main text. Currently, the main text only says vaguely that joining a group after dispersing "is also controlled by the same genetic dispersal predisposition" without saying how.

      In each breeding cycle, individuals have the opportunity to become a breeder, a helper, or a floater. Social role is really just a state, and that state can change in each breeding cycle (see Figure 1). Therefore, floaters may join a group as subordinates at any point in time depending on their dispersal propensity, and subordinates may also disperse from their natal group at any given time. In the "Dominance-dependent dispersal propensities" section in the SI, this dispersal or philopatric tendency varies with dominance rank.

      We have added: “In each breeding cycle” (L415) to clarify this further.

      In response to my query about the reasonableness of the assumption that floaters are in better condition (in the KS treatment) because they don't do any work, the authors have done some additional modeling, but I fail to see how that addresses my point. The additional simulations do not touch the feature I was commenting on, and arguably make it stronger (since assuming a positive beta_r, which by the way is listed as 0 in Table 1, would make floaters on average even stronger than subordinates). It also again confuses me with regard to the previous point, since it implies that now dispersal is also potentially a lifetime event. Is that true?

      We are not quite sure where the reviewer gets this idea, because we have never assumed a competitive advantage of floaters over helpers. As stated in the previous revision, floaters can potentially outcompete subordinates of the same age if they attempt to breed without first queuing as a subordinate (step 5 in Figure 1), provided the subordinates are engaged in work tasks. However, floaters also have higher mortality rates than group members, which lowers their average age. In addition, helpers have the advantage of always competing for an open breeding position in the group, while floaters do not have this preferential access (in Figure S2 we further reduce the likelihood that a floater will try to compete for a breeding position).

      Moreover, in the previous revision (section: “Dominance-dependent dispersal propensities” in the SI) we specifically addressed this concern by adding the possibility that individuals, either floaters or subordinate group members, react to their rank or dominance value to decide whether to disperse (if subordinate) or join a group (if floater). Hence, individuals may choose to disperse when low ranked and then remain on the territory they dispersed to as helpers, OR they may remain as helpers in their natal territory as low ranked individuals and then disperse later when they attain a higher dominance value. The new implementation, therefore, allows individuals to choose when to become floaters or helpers depending on their dominance value. This change to the model affects the relative competitiveness between floaters and helpers, which avoids the assumption that either low- or high-quality individuals are the dispersing phenotype and, instead, allows rank-based dispersal as an emergent trait. As shown in Figure S5, this change had no qualitative impact on the results.

      To make this all clearer, we have now added to all of the relevant SI tables a new row with the relative rank of helpers vs floaters. As shown, floaters do not consistently outrank helpers. Rather, which role is most dominant depends on the environment and fitness trade-offs that shape their dispersing and helping decisions.

      Some further clarifications: beta_r is a gene that may evolve either positive or negative values; 0 (no reaction norm of dispersal to dominance rank) is only the initial value in the simulations, before evolution takes place, and it may shift in either direction depending on evolutionary trade-offs. Also, as clarified in the previous comment, the decision to disperse or not occurs at each breeding cycle, so becoming a floater, for example, is not a lifetime event unless a fixed strategy evolves (dispersal = 0 or 1).

      Meanwhile, the simplest and most convincing robustness check, which I had suggested last round, is not done: simply reduce the increase in the R of the floater with age relative to subordinates. I suspect this will actually change the results. It seems fairly transparent to me that an average floater in the KS scenario will have R about 15-20% higher than the subordinates (given that no defense evolves, y_h=0.1, H_work evolves to be around 5, and the average lifespan of both floaters and subordinates is roughly in the range of 2.5-3.7, depending on m). That could be a substantial advantage in competition for breeding spots, depending on how that scramble competition actually works. I asked about this function in the last round (how non-linear is it?) but the authors seem to have neglected to answer.

      As we mentioned in the previous comment above, we have now added the relative rank between helpers and floaters to all the relevant SI tables, to provide a better idea of the relative competitiveness of residents versus dispersers for each parameter combination. As seen in Table S1, the competitive advantage is only marginally in favor of floaters in the "Only kin selection" implementation. This advantage only becomes more pronounced when individuals can choose whether to disperse or remain philopatric depending on their rank. In this case, the difference in rank between helpers and floaters is driven by the high levels of dispersal, with only a few newborns (low rank) remaining briefly in the natal territory (Table S6). Instead, the high dispersal rates observed under the "Only kin selection" scenario appear to result from the low incentives to remain in the group when direct fitness benefits are absent, unless indirect fitness benefits are substantially increased. This effect is reinforced by the need for task partitioning to occur in an all-or-nothing manner (see the new implementation added to the "Kin selection and the evolution of division of labor" section in the Supplementary materials; more details in the following comments).

      In addition, we specifically chose not to impose the constraint of forcing floaters to be lower ranked than helpers, because doing so would require strong assumptions about how floater rank is determined. These assumptions are unlikely to be universally valid across natural populations (and probably not commonly met in most species) and could vary considerably among species. Therefore, it would add complexity to the model while reducing generalizability.

      As stated in the previous revision, no scramble competition takes place; that was an implementation, not included in the final version of the manuscript, in which age did not influence dominance. Results were equivalent, and we decided to remove it for simplicity prior to the original submission, as the model is already very complex at this stage; we simply forgot to remove it from Table 1, as we explained in the previous round of revisions.

      More generally, I find the assumption (and it is an assumption) that floaters are better off than subordinates in a territory still questionable. There is no attempt to justify this with any data, and any data I can find points the other way (though typically they compare breeders and floaters, e.g.: https://bioone.org/journals/ardeola/volume-63/issue-1/arla.63.1.2016.rp3/The-Unknown-Life-of-Floaters--The-Hidden-Face-of/10.13157/arla.63.1.2016.rp3.full concludes that "the current preliminary consensus is that floaters are 'making the best of a bad job'."). I think if the authors really want to assume that floaters have higher dominance than subordinates, they should justify it. This is driving at least one and possibly most of the key results, since it affects the reproductive value of subordinates (and therefore the costs of helping).

      We explicitly addressed this in the previous revision, in a long response about resource holding potential (RHP). Once again, we do NOT assume that dispersers are at a competitive advantage over anyone else. Floaters lack access to a territory unless they either disperse into an established group or colonize an unoccupied territory. Therefore, floaters endure higher mortality due to the lack of access to territories and group-living benefits in the model, and are not always able to compete for a breeding position.

      The literature reports mixed evidence regarding the quality of dispersing individuals, with some studies identifying them as low-quality and others as high-quality, attributing this to them experiencing fewer constraints when dispersing than their counterparts (e.g. Stiver et al. 2007 Molecular Ecology; Torrents‐Ticó et al. 2018 Journal of Zoology). Additionally, dispersal can provide end-of-queue individuals in their natal group an opportunity to join a queue elsewhere that offers better prospects, outcompeting current group members (Nelson‐Flower et al. 2018 Journal of Animal Ecology). Moreover, in our model floaters do not consistently have lower dominance values or ranks than helpers, and dominance value is often only marginally different.

      In short, we previously addressed the concern regarding the relative competitiveness of floaters compared to subordinate group members. To further clarify this point here, we have now included additional data on relative rank in all of the relevant SI tables. We hope that these additions will help alleviate any remaining concerns on this matter.

      Regarding division of labor, I think I was not clear, so I will try again. The authors assume that group reproduction is 1+H_total/(1+H_total), where H_total is the sum of all the defense and work help, but with the proviso that if one of the totals is higher than "H_max", the average of the two totals (plus k_m, but that's set to a low value, so we can ignore it), it is replaced by that. That means, for example, if total "work" help is 10 and "defense" help is 0, total help is given by 5 (well, 5.1, but I will ignore k_m). That's what I meant by "marginal benefit of help is only reduced by a half" last round, since in this scenario, adding 1 to work help would make total help go to 5.5 vs. adding 1 to defense help, which would make it go to 6. That is a pretty weak form of modeling "both types of tasks are necessary to successfully produce offspring", as the newly added passage says (which I agree with), since if you were getting no defense but a lot of food, adding more food should plausibly have no effect on your production whatsoever (not just half the effect of adding a little defense). This probably explains why the "division of labor" condition is often not that different from the no-DoL condition.

      The model incorporates division of labor as the optimal strategy for maximizing breeder productivity, while penalizing helping efforts that are limited to either work or defense alone. Because the model does not intend to force the evolution of help as an obligatory trait (breeders may still reproduce in the absence of help; k_0 ≠ 0), we assume that the performance of both types of task by the helpers is a non-obligatory trait that complements parental care.

      That said, we recognize the reviewer’s concern that the selective forces modeled for division of labor might not be sufficient in the current simulations. To address this, we have now introduced a new implementation, as discussed in the “Kin selection and the evolution of division of labor” section in the SI. In this implementation, division of labor becomes obligatory for breeders to gain a productivity boost from the help of subordinate group members. The new implementation tests whether division of labor can arise solely from kin selection benefits. Under these premises, philopatry and division of labor do emerge through kin selection, but only when there is a tenfold increase in productivity per unit of help compared to the default implementation. Thus, even if such increases are biologically plausible, they are more likely to reflect the magnitudes characteristic of eusocial insects rather than of cooperatively breeding vertebrates (the primary focus of this model). Such extreme requirements for productivity gains and need for coordination further suggest that group augmentation, and not kin selection, is probably the primary driving force particularly in harsh environments. This is now discussed in L210-213.

      Reviewer #2 (Public review):

      Summary:

      This paper formulates an individual-based model to understand the evolution of division of labor in vertebrates. The model considers a population subdivided in groups, each group has a single asexually-reproducing breeder, other group members (subordinates) can perform two types of tasks called "work" or "defense", individuals have different ages, individuals can disperse between groups, each individual has a dominance rank that increases with age, and upon death of the breeder a new breeder is chosen among group members depending on their dominance. "Workers" pay a reproduction cost by having their dominance decreased, and "defenders" pay a survival cost. Every group member receives a survival benefit with increasing group size. There are 6 genetic traits, each controlled by a single locus, that control propensities to help and disperse, and how task choice and dispersal relate to dominance. To study the effect of group augmentation without kin selection, the authors cross-foster individuals to eliminate relatedness. The paper allows for the evolution of the 6 genetic traits under some different parameter values to study the conditions under which division of labour evolves, defined as the occurrence of different subordinates performing "work" and "defense" tasks. The authors envision the model as one of vertebrate division of labor.

      The main conclusion of the paper is that group augmentation is the primary factor causing the evolution of vertebrate division of labor, rather than kin selection. This conclusion is drawn because, for the parameter values considered, when the benefit of group augmentation is set to zero, no division of labor evolves and all subordinates perform "work" tasks but no "defense" tasks.

      Strengths:

      The model incorporates various biologically realistic details, including the possibility of evolving age polyethism, where individuals switch from "work" to "defence" tasks as they age or vice versa, as well as the possibility of comparing the action of group augmentation alone with that of kin selection alone.

      Weaknesses:

      The model and its analysis are limited, which makes the results insufficient to reach the main conclusion that group augmentation and not kin selection is the primary cause of the evolution of vertebrate division of labor. There are several reasons.

      First, the model strongly restricts the possibility that kin selection is relevant. The two tasks considered essentially differ only by whether they are costly for reproduction or survival. "Work" tasks are those costly for reproduction and "defense" tasks are those costly for survival. The two tasks provide the same benefits for reproduction (eqs. 4, 5) and survival (through group augmentation, eq. 3.1). So, whether one, the other, or both tasks evolve presumably only depends on which task is less costly, not really on which benefits it provides. As the two tasks give the same benefits, there is no possibility that the two tasks act synergistically, where performing one task increases a benefit (e.g., increasing someone's survival) that is going to be compounded by someone else performing the other task (e.g., increasing that someone's reproduction). So, there is very little scope for kin selection to cause the evolution of labour in this model. Note synergy between tasks is not something unusual in division of labour models, but is in fact a basic element in them, so excluding it from the start in the model and then making general claims about division of labour is unwarranted. I made this same point in my first review, although phrased differently, but it was left unaddressed.

      The scope of this paper was to study division of labor in cooperatively breeding species with fertile workers, in which help is exclusively directed towards breeders to enhance offspring production (i.e., alloparental care), as we stated in the previous review. Therefore, in this context, helpers may only obtain fitness benefits, directly or indirectly, by increasing the productivity of the breeders. This benefit is maximized when division of labor occurs between group members, as there is a higher return for less per-capita effort. Our focus is in line with previous work in most other social animals, including eusocial insects and humans, which emphasizes how division of labor maximizes group productivity. This is not to suggest that the model does not favor synergy, as engaging in two distinct tasks enhances the breeders' productivity more than if group members were to perform only one type of alloparental care task. We have expanded on the need for division of labor by making the performance of each type of task a requirement to boost the breeders' productivity; see more details in a following comment.

      Second, the parameter space is very little explored. This is generally an issue when trying to make general claims from an individual-based model, where only a very narrow parameter region of a necessarily particular model has been explored. However, in this paper, the issue is more evident. As in this model the two tasks ultimately only differ by their costs, the parameter values specifying their costs should be varied to determine their effects. Instead, the model sets a very low survival cost for work (yh=0.1) and a very high survival cost for defense (xh=3), the latter of which can be compensated by the benefit of group augmentation (xn=3). Some very limited variation of xh and xn is explored, always for very high values, effectively making defense unevolvable except if there is group augmentation. Hence, as I stated in my previous review, a more extensive parameter exploration addressing this should be included, but this has not been done. Consequently, the main conclusion that "division of labor" needs group augmentation is essentially enforced by the limited parameter exploration, in addition to the first reason above.

      We systematically explored the parameter landscape and report in the body of the paper only those ranges that lead to changes in the reaction norms of interest (other ranges are explored in the SI). When looking into the relative magnitude of the costs of work and defense tasks, it is important to note that cost values are not directly comparable because they affect different traits. However, the ranges of values capture changes in the reaction norms that lead to rank-dependent task specialization.

      To illustrate this more clearly, we have added a new section in the SI ("Variation in the cost of work tasks instead of defense tasks") showing variation in y_h, which highlights how individuals trade off the relative costs of different tasks. As shown, the results remain consistent with everything we showed previously: a higher cost of work (high y_h) shifts investment toward defense tasks, while a higher cost of defense (high x_h) shifts investment toward work tasks.

      Importantly, additional parameter values were already included in the SI of the previous revision, specifically to favor the evolution of division of labor under only kin selection. Basically, division of labor under only kin selection does happen, but only under conditions that are very restrictive, as discussed in the “Kin selection and the evolution of division of labor” section in the SI. We have tried to make this point clearer now (see comments to previous reviewer above, and to this reviewer right below).

      Third, what is called "division of labor" here is an overinterpretation. When the two tasks evolve, what exists in the model is some individuals that do reproduction-costly tasks (so-called "work") and survival-costly tasks (so-called "defense"). However, there are really no two tasks that are being completed, in the sense that completing both tasks (e.g., work and defense) is not necessary to achieve a goal (e.g., reproduction). In this model there is only one task (reproduction, equation 4,5) to which both "tasks" contribute equally and so one task doesn't need to be completed if the other task compensates for it. So, this model does not actually consider division of labor.

      Although it is true that we did not make the evolution of help obligatory and, therefore, did not impose division of labor by definition, the assumptions of the model nonetheless create conditions that favor the emergence of division of labor. This is evident when comparing the equilibria between scenarios where division of labor was favored versus not favored (Figure 2 triangles vs circles).

      That said, we acknowledge the reviewer's concern that the selective forces modeled in our simulations may not, on their own, be sufficient to drive the evolution of division of labor under only kin selection. Therefore, we have now added a section where we restrict the evolution of help to instances in which division of labor is necessary to have an impact on the dominant breeder's productivity. Under this scenario, we do find division of labor (as well as philopatry) evolving under only kin selection. However, this behavior only evolves when help greatly increases the breeders' productivity (by a factor of 10 relative to what is needed for the evolution of division of labor under group augmentation). Therefore, group augmentation still appears to be the primary driver of division of labor, while kin selection facilitates it and may, under certain restrictive circumstances, also promote division of labor independently (discussed in L210-213).

      Reviewer #1 (Recommendations for the authors):

      I really think you should do the simulations where floaters do not come out ahead by floating. That will likely change the result, but if it doesn't, you will have a more robust finding. If it does, then you will have understood the problem better.

      As we outlined in the previous round of revisions, implementing this change would be challenging without substantially increasing model complexity and reducing its general applicability, as it would require strong assumptions that could heavily influence dispersal decisions. For instance, by how much should helpers outcompete floaters? Would a floater be less competitive than a helper regardless of age, or only if age is equal? If competitiveness depends on equal age, what is the impact of performing work tasks given that workers always outcompete immigrants? Conversely, if floaters are less competitive regardless of age, is it realistic that a young individual would outcompete all immigrants? If a disperser finds a group immediately after dispersal versus floating for a while, is the dominance value reduced less (as would happen to individuals doing prospections before dispersal)? 

      Clearly it is not as simple as the referee suggests, because there are many scenarios that would need to be considered and many assumptions made in doing this. As we explained in the points above, we think our treatment of floaters is consistent with the definition of floaters in the literature, and our model takes a general approach without making too many assumptions.

      Reviewer #2 (Recommendations for the authors):

      The paper's presentation is still unclear. A few instances include the following. It is unclear what is plotted on the vertical axes of Figure 2, which is T; but T is a function of age t, so this T is presumably being plotted at a specific t, and which one is not said.

      The values graphed are the averages of the phenotypically expressed tasks, not the reaction norms per se. We have now rewritten the axis label as "Expressed task allocation T (0 = work, 1 = defense)" to increase clarity across the manuscript.

      The section titled "The need for division of labor" in the methods is still very unclear.

      We have rephrased this whole section to improve clarity.

    1. Reviewer #1 (Public review):

      Summary:

      The authors investigate how the Drosophila TNF receptor-associated factor Traf4 - a multifunctional adaptor protein with potential E3 ubiquitin ligase activity - regulates JNK signaling and adherens junctions (AJs) in wing disc epithelium. When they overexpress Traf4 in the posterior compartment of the wing disc, many posterior cells express the JNK target gene puckered (puc), apoptose, and are basally extruded from the epithelium. The authors term this process "delamination", but I think that this is an inaccurate description, especially since they can suppress the "delamination" by blocking programmed cell death (by concomitantly overexpressing p35). Through Y2H assays using Traf4 as bait, they identified the Bearded family proteins E(spl)m4 (and, to a lesser extent, E(spl)m2) as Traf4 interactors. They use AlphaFold to computationally model the interaction between Traf4 and E(spl)m4. They show that co-overexpression of Traf4 with E(spl)m4 in the posterior domain of the wing disc reduces death of posterior cells. They generate a new, weaker hypomorphic allele of Traf4 that is viable (as opposed to the homozygous lethality of null Traf4 alleles). There is some effect of these mutations on wing margin bristles; fewer wing margin bristle defects are seen when E(spl)m4 is overexpressed, suggesting opposite effects of Traf4 and E(spl)m4. Finally, they use the Minute model of cell competition to show that Rp/+ loser clones have greater clone area (indicating increased survival) when they are depleted for Traf4 or when they overexpress E(spl)m4. Only the cell competition results are quantified. Because most of the data in the preprint are not quantified, it is impossible to know how penetrant the phenotypes are. The authors conclude that E(spl)m4 binds the Traf4 MATH/TRAF domain, disrupts Traf4 trimerization, and selectively suppresses Traf4-mediated JNK and caspase activation without affecting its role in AJ destabilization. However, I believe that this is an overstatement. First, there is no biochemical evidence showing that Traf4 binds E(spl)m4 and that E(spl)m4 disrupts Traf4 trimerization. Second, the data on AJs are weak and not quantified; additionally, cells that are being basally extruded lose contact with neighboring cells, hence changes in adhesion proteins. Related to this, the authors, in my opinion, inaccurately describe basal extrusion of dying cells from the wing disc epithelium as delamination.

      Strengths:

      (1) The authors use multiple approaches to test the model that overexpressed E(spl)m4 inhibits Traf4, including genetics, cell biological imaging, yeast two-hybrid assays, and molecular modeling.

      (2) The authors generate a new Traf4 hypomorphic mutant and use this mutant in cell competition studies, which supports the concept that E(spl)m4 (when overexpressed) can antagonize Traf4.

      Weaknesses:

      (1) Conflation of "delamination" with "basal extrusion of apoptotic cells": Over-expression of Traf4 causes apoptosis in wing disc cells, and this is a distinct process from delamination of viable cells from an epithelium. However, the two processes are conflated by the authors, and this weakens the premise of the paper.

      (2) Dependence on overexpression: The conclusions rely heavily on ectopic expression of Traf4 and E(spl)m4. Thus, the physiological relevance of the interaction remains inferred rather than demonstrated.

      (3) Lack of quantitative rigor: Except for the cell competition studies, phenotypic descriptions (e.g., number of apoptotic cells, puc-LacZ intensity) are qualitative; additional quantification, inclusion of sample size, and statistical testing would strengthen the conclusions.

      (4) Limited biochemical validation: The Traf4-E(spl)m4 binding is inferred from Y2H and in silico models, but no co-immunoprecipitation or in vitro binding assays confirm direct interaction or the predicted disruption of trimerization.

      (5) Specificity within the Bearded family: While E(spl)m2 shows partial binding and Tom shows none, the mechanistic basis for this selectivity is not deeply explored experimentally, leaving questions about motif-context contributions unresolved.

    2. Reviewer #2 (Public review):

      Summary:

      This manuscript analyzes the contribution of Traf4 to the fate of epithelial cells in the developing wing imaginal disc tissue. The manuscript is direct and concise and suggests an interesting and valuable hypothesis with dual functions of Traf4 in JNK pathway activation and cell delamination. However, the text is partially speculative, and the evidence is incomplete as the main claims are only partially supported. Some results require validation to support the conclusions.

      Strengths:

      (1) The manuscript is direct and concise, with a well-written and precise introduction.

      (2) It presents an interesting and valuable hypothesis regarding the dual role of Traf4 in JNK pathway activation and cell delamination.

      (3) The study addresses a relevant biological question in epithelial tissue development using a genetically tractable model.

      (4) The use of newly generated Traf4 mutants adds novelty to the experimental approach.

      (5) The manuscript includes multiple experimental strategies, such as genetic manipulation and imaging, to explore Traf4 function.

      Weaknesses:

      (1) The evidence supporting key claims is incomplete, and some conclusions are speculative.

      (2) The use of GFP-tagged Traf4 lacks validation regarding its functional integrity.

      (3) Orthogonal views and additional imaging data are needed to confirm changes in apicobasal localization and cell delamination.

      (4) Experimental conditions and additional methods should be further detailed.

      (5) The interaction between Traf4 and E(spl)m4 remains speculative in Drosophila.

      (6) New mutants require deeper analysis and validation.

      (7) The elimination of Traf4 mutant clones may be due to cell competition, which requires further experimental clarification.

      (8) The role of Traf4 in cell competition is contradictory and needs to be resolved.

    3. Reviewer #3 (Public review):

      Summary:

      This is an important and well-conceived study that identifies the Bearded-type small protein E(spl)m4 as a physical and genetic interactor of TRAF4 in Drosophila. By combining classical genetics, yeast two-hybrid assays, and AlphaFold in silico modeling, the authors convincingly demonstrate that E(spl)m4 acts as an inhibitor of TRAF4-mediated induction of JNK-driven apoptosis in developing larval imaginal wing discs, while not affecting TRAF4's role in adherens junction remodeling.

      Based primarily on modeling, the authors propose that the specificity of E(spl)m4 towards TRAF4-mediated signaling arises from its interference with TRAF4 trimerization, which is likely required for the activation of the JNK signaling arm but not for the maintenance of adherens junctions and the stability of the E-cadherin/β-catenin complex.

      Overall, this study is of broad interest to cell and developmental biologists. It also holds potential biomedical relevance, particularly for strategies aimed at modulating TRAF protein activities to dissect and modulate canonical versus non-canonical signaling functions.

      Strengths:

      (1) The work identifies the Bearded-type small protein E(spl)m4 as a physical and genetic interactor of TRAF4 in Drosophila, extending the understanding of E(spl)m4 beyond its established functions in Notch signaling.

      (2) The study is experimentally solid, well-executed, and written, combining classical genetics with protein-protein interaction assays and modeling to reveal E(spl)m4 as a new regulator of TRAF4 signaling.

      (3) The genetic and biochemical data convincingly show the ability of E(spl)m4 overexpression to inhibit TRAF4-induced JNK-dependent apoptosis, while leaving the TRAF4 role in adherens junction remodeling unaffected.

      (4) The findings have important implications for the regulation of cell signaling and apoptosis and may guide pharmacological targeting of TRAF proteins.

      Weaknesses:

      The study is overall strong; however, several aspects could be clarified or expanded to strengthen the proposed mechanism and data presentation:

      (1) The proposed mechanism that E(spl)m4 inhibits TRAF4 activation of JNK signaling by affecting TRAF4 trimerization relies mainly on modeling. Experimental evidence would strengthen this claim. For example, a native or non-denaturing SDS-PAGE could be used to assess TRAF4 oligomerization states in the absence or presence of E(spl)m4 overexpression, testing whether E(spl)m4 interferes with high-molecular-weight TRAF4 assemblies.

      (2) The study depends largely on E(spl)m4 overexpression, which may not reflect physiological conditions. It would be valuable to test, or at least discuss, whether loss-of-function or knockdown of E(spl)m4 modulates the strength or duration of JNK-mediated signaling, potentially accelerating apoptosis. Such data would reinforce the model that E(spl)m4 acts as a physiological modulator of TRAF4-JNK signaling in vivo.

      (3) The authors initially identify both E(spl)m4 and E(spl)m2 as TRAF4 interactors, but subsequently focus on E(spl)m4. It would be helpful to clarify or discuss the rationale for prioritizing E(spl)m4 for detailed functional analysis.

      (4) E(spl)m4 overexpression appears to protect RpS3 loser clones (Figure 6H-K), yet caspase-3-positive cells are still visible in mosaic wing discs. Please comment on the nature of these caspase-3-positive cells: are they cell-autonomous to the clone or non-autonomous (Figure 6K)?

      (5) This is a clear, well-executed, and conceptually strong study that significantly advances understanding of TRAF4 signaling specificity and its modulation by the Bearded-type protein E(spl)m4.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Nielsen et al. have identified a new disease mechanism underlying hypoplastic left heart syndrome due to variants in ribosomal protein genes that lead to impaired cardiomyocyte proliferation. This detailed study starts with an elegant screen in stem cell-derived cardiomyocytes and whole genome sequencing of human patients and extends to careful functional analysis of RP gene variants in fly and fish models. Striking phenotypic rescue is seen by modulating known regulators of proliferation, including the p53 and Hippo pathways. Additional experiments suggest that the cell type specificity of the variants in these ubiquitously expressed genes may result from genetic interactions with cardiac transcription factors. This work positions RPs as important regulators of cardiomyocyte proliferation and differentiation involved in the etiology of HLHS, although the downstream mechanisms are unclear.

      We thank Reviewer 1 for the thoughtful assessment of our manuscript. Our point-by-point responses to the recommendations are provided (Reviewer 1, "Recommendations for the authors").

      Reviewer #2 (Public review):

      Tanja Nielsen et al. present a novel strategy for the identification of candidate genes in Congenital Heart Disease (CHD). Their methodology, which is based on comprehensive experiments across cell models, Drosophila and zebrafish models, represents an innovative, refreshing and very useful set of tools for the identification of disease genes, in a field which is struggling with exactly this problem. The authors have applied their methodology to investigate the pathomechanisms of Hypoplastic Left Heart Syndrome (HLHS) - a severe and rare subphenotype in the large spectrum of CHD malformations. Their data convincingly implicate ribosomal proteins (RPs) in growth and proliferation defects of cardiomyocytes, a mechanism which is suspected to be associated with HLHS.

      By whole genome sequencing analysis of a small cohort of trios (25 HLHS patients and their parents), the authors investigated a possible association between RP encoding genes and HLHS. Although the possible association between defective RPs and HLHS needs to be verified, the results suggest a novel disease mechanism in HLHS, which is a potentially substantial advance in our understanding of HLHS and CHD. The conclusions of the paper are based on solid experimental evidence from appropriate high- to medium-throughput models, while additional genetic results from an independent patient cohort are needed to verify an association between RP encoding genes and HLHS in patients.

      We thank Reviewer 2 for the thoughtful assessment of our manuscript. Our point-by-point responses to the recommendations are provided (Reviewer 2, “Recommendations for the authors”).

      Reviewer #1 (Recommendations for the authors): 

      (1) Despite an interesting surveillance model, the disease-causing mechanisms directly downstream of the RP variants remain unclear. Can the authors provide any evidence for abnormal ribosomes or defects in translation in cells harboring such variants? The possibility that reduced translation of cardiac transcription factors such as TBX5 and NKX2-5 may contribute to the functional interactions observed should be considered. How do the authors consider that the RP variants are affecting transcript levels as observed in the study?

      Our model implies that cell cycle arrest does not require abnormal ribosomes or translational defects but instead relies on the sensing of RP levels or mutations as a fitness-sensing mechanism that activates TP53/CDKN1A-dependent arrest. Supporting this framework, we observed no significant changes in TBX5 or NKX2-5 expression (data not shown), but rather an upregulation of CDKN1A levels upon RP KD.

      (2) The authors suggest that a nucleolar stress program is activated in cells harboring RP gene variants. Can they provide additional evidence for this beyond p53 activation? 

      We added additional data to support nucleolar stress (Suppl. Fig. 6) and text (lines 526-35):

      To determine whether cardiac KD of RpS15Aa causes nucleolar stress in the Drosophila heart, we stained larval hearts for Fibrillarin, a marker for nucleoli and nucleolar integrity. We found that RpS15Aa KD causes expansion of nucleolar Fibrillarin staining in cardiomyocytes, which is a hallmark of nucleolar stress (Suppl. Fig. 6A-C). As a control, we also performed cardiac KD of Nopp140, which is known to cause nucleolar stress upon loss-of-function. We found a similar expansion of Fibrillarin staining in larval cardiomyocyte nuclei (Suppl. Fig. 6C,D). This suggests that RpS15Aa KD indeed causes nucleolar stress in the Drosophila heart, which likely contributes to the dramatic heart loss in adults.

      Other recommendations: 

      (3) Concerning the cell type specificity, in the proliferation screen, were similar effects seen in the actinin-negative as in the actinin-positive EdU+ cells? It would be helpful to refer to the fibroblast result shown in Supplementary Figure 1C in the results section.

      As suggested by reviewer #1, we have added a reference to Supplementary Fig. 1C, D and noted that RP knockdown exerts a non–CM-specific effect on proliferation.

      (4) The authors refer to HLHS patients with atrial septal defects and reduced right ventricular ejection fraction. Please clarify the specificity of the new findings to HLHS versus other forms of CHD, as implied in several places in the manuscript, including the abstract.

      This study focused on a cohort of 25 HLHS proband-parent trios selected for poor clinical outcome, including restrictive atrial septal defect and reduced right ventricular ejection fraction. We have revised the following sentence in response to the Reviewer's comment (lines 567-571): "While our study highlights the potential of this approach for gene prioritization, additional research is needed to directly demonstrate the functional consequence of the identified genetic variants, verify an association between RP encoding genes and HLHS in other patient cohorts with and without poor outcome, and determine if RP variants have a broader role in CHD susceptibility."

      (5) The multi-model approach taken by the authors is clearly a good system for characterizing disease-causing variants. Did the authors score for cardiomyocyte proliferation or the time of phenotypic onset in the zebrafish model? 

      We used an antibody against phospho-histone H3 to identify proliferating cells and DAPI to identify all cardiac cells in control-injected embryos, rps15a morphants, and rps15a crispants. We found that cell numbers and proliferating cells were significantly reduced at 24 and 48 hpf. By 72 hpf, cardiac cell proliferation is greatly diminished even in controls, where proliferation typically declines.

      Reduced ventricular cardiomyocyte numbers could potentially result from impaired addition of LTBP3-expressing progenitors. In experiments where altered cardiac rhythm is observed, please comment on the possible links to proliferation.

      Heart function data showed that heart period (R-R interval) was unaffected in morphants and crispants at 72 hpf where we also observed significant reductions in cell numbers. This suggests that the bradycardia observed in the rps15a + nkx2.5 or tbx5a double KD (Sup. Fig. 5D & E) was not due to the reduction in cell numbers alone. 

      Author response image 1.

      Finally, the use of the mouse to model HLHS in potential follow-up studies should be discussed. 

      We have added a mouse model comment to the discussion (lines 571-74): “In conclusion, we propose that the approach outlined in this study provides a novel framework for rapidly prioritizing candidate genes and systematically testing them, individually or in combination, using a CRISPR/Cas9 genome-editing strategy in mouse embryos (PMID: 28794185)”.

      (6) When the authors scored proliferation in cells from the proband in family 75H, did they validate that RPS15A expression is reduced, consistent with a regulatory region defect? 

      Good point. We examined RPS15A expression in these cells and found no significant reduction in gene expression in day 25 cardiomyocytes (data not shown). One possible explanation is that this variant may regulate RPS15A expression in a stage-specific manner during differentiation or under additional stress conditions.

      (7) Minor point. Typo on line 494: comma should be placed after KD, not before.

      Thank you, this has now been corrected (new line 490).

      Reviewer #2 (Recommendations for the authors):  

      (1) The authors are invited to revise the part of the manuscript that describes the genetic analysis and provide a more balanced discussion of the WGS data, with a conclusion that aligns with the strength of the human genetic data. 

      We disagree with reviewer #2’s assessment. The goal of our study is not to apply a classical genetic approach to establish variant pathogenicity, but rather to employ a multidisciplinary framework to prioritize candidate genes and variants and to examine their roles in heart development using model systems. In this context, genetic analysis serves primarily as a filtering tool rather than as a means of definitively establishing causality.

      (2) The genetic analysis of patients does not appear to provide strong evidence for an association between RP gene variants and HLHS. More information regarding methodology and the identified variants is needed. 

      HLHS is widely recognized as an oligogenic and heterogeneous genetic disease in which traditional genetic analyses have consistently failed to prioritize any specific gene class, as reviewer #2 points out. Therefore, relying solely on genetic analysis is unlikely to yield strong evidence for association with a given gene class. This limitation provides the rationale for our multidisciplinary gene prioritization strategy, which leverages model systems to interrogate candidate gene function. Ultimately, definitive validation of this approach will require studies in relevant in vivo models to establish causality within the context of a four-chambered heart (see also Discussion).

      In Table S2, it would be appropriate to provide information on sequence, MAF, and CADD. Please note the source of MAF% (gnomAD version? which population?).

      As summarized in Figure 2A, the 292 genes from the families of the 25 probands with poor outcome displayed in Supplemental Table 2 fulfilled a comprehensive candidate gene prioritization algorithm based on the variant, gene, inheritance, and enrichment, which required all of the following: 1) variants identified by whole genome sequencing with minor allele frequency <1%; 2) missense, loss-of-function, canonical splice, or promoter variants; 3) upper quartile fetal heart expression; and 4) de novo or recessive inheritance. Unbiased network analysis of these 292 genes, which are displayed in Supplemental Table 2 for completeness, identified statistically significant enrichment of ribosomal proteins. The details about MAF, CADD score, and sequence highlighted by the Reviewer are provided for the RP genes in Table 1, which are central to the focus and findings of the manuscript.
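
      For readers, a minimal sketch of these four filtering criteria might look like the following; the field names, input layout, and helper function are hypothetical illustrations and are not taken from the study's actual pipeline.

      # Hypothetical illustration (Python) of the four-step prioritization
      # described above; field names and the input layout are assumptions.
      QUALIFYING_TYPES = {"missense", "loss-of-function", "canonical-splice", "promoter"}

      def prioritize(variants, fetal_heart_quartile, maf_cutoff=0.01):
          """Keep variants that satisfy all four criteria:
          1) minor allele frequency < 1%,
          2) qualifying variant type,
          3) gene in the upper quartile of fetal heart expression,
          4) de novo or recessive inheritance.
          Each variant is a dict, e.g. {"gene": ..., "maf": ..., "type": ..., "inheritance": ...}.
          """
          return [v for v in variants
                  if v["maf"] < maf_cutoff
                  and v["type"] in QUALIFYING_TYPES
                  and fetal_heart_quartile.get(v["gene"]) == 4   # upper quartile
                  and v["inheritance"] in {"de novo", "recessive"}]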

      It would also be helpful for the reader if genome coordinates (e.g., 16-11851493-G-A for RSL1D1 p.A7V) were provided for each variant in both Table 1 and S2.

      Genome coordinates have been added to Table 1.

      (3) The dataset from the hPSC-CM screen could be of high value for the community. It would be appropriate if the complete dataset were made available in a usable format. 

      The dataset from the hPSC-CM screen has been added to the manuscript as Supplemental Table 1.

      The "rare predicted-damaging promoter variant in RPS15A" (c.-95G>A) does not appear so rare. Considering the MAF of 0.00662, the frequency of heterozygous carriers of this variant is 1 out of 76 individuals in the general population. Thus, considering the frequency of HLHS in the population (2-3 out of 10,000) and the small size of family 75H, the data do not appear to indicate any association between this particular variant and HLHS. The variants in Table 1 also appear to have relatively mild effects on the gene product, judging from the MAF and CADD scores. The authors are invited to discuss why they find these variants disease-causing in HLHS.

      Our study design is based on the widely held premise that HLHS is an oligogenic disorder. Our multi-model systems platform centered on comprehensive filtering of coding and regulatory variants identified by whole genome sequencing of HLHS probands to identify candidate genes associated with susceptibility to this rare developmental phenotype. 75H proved to be a high-value family for generating a relatively short list of candidate genes for left-sided CHD. Given the rarity of both left-sided CHD and the RPS15A variant identified in the HLHS proband and his 5th degree relative, with a frequency consistent with a risk allele for an oligogenic disorder, we made the reasonable assumption that this was a bona fide genotype-phenotype association rather than a chance occurrence. Moreover, incomplete penetrance and variable expression is consistent with a genetically complex basis of disease whereby the shared variant is risk-conferring and acts in conjunction with additional genetic, epigenetic, and/or environmental factors that lead to a left-sided CHD phenotype. In sum, we do not claim these variants are definitively disease causing, but rather potentially contributing risk factors.

      (5) Information is lacking on how clustering of RP genes was demonstrated using STRING (with P-values that support the conclusions). What is meant by "when the highest stringency filter was applied"? Does this refer to the STRING interaction score or something else? The authors could also explain which genes were used to search STRING (e.g., all 292 candidate genes) and provide information on the STRING interaction score used in the analysis, the number of nodes and edges in the network.

To determine whether certain gene networks were over-represented, two online bioinformatics tools were used. First, genes were inputted into STRING (Author response table 2 below) to investigate experimental and predicted protein-protein and genetic interactions. Clustering of ribosomal protein genes was demonstrated when the highest stringency filter was applied. Next, genes were analyzed for enrichment by ontology classification using PANTHER. Applying Fisher's exact test and false discovery rate corrections, ribosomal proteins were the most enriched class when compared to the reference proteome, including data annotated by molecular function (4.84-fold, p=0.02), protein class (6.45-fold, p=0.00001), and cellular component (9.50-fold, p=0.001). A majority of the identified RP candidate genes harbored variants that fit a recessive inheritance disease model.

      Author response image 2.
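To make the enrichment statistics above concrete, the following is a hedged sketch of the kind of over-representation test PANTHER applies: a per-class Fisher's exact test against a reference proteome, followed by Benjamini-Hochberg FDR correction. The counts are illustrative placeholders, except for the candidate-list size of 292; they are not the study's actual numbers.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def class_enrichment(hits, list_size, class_size, proteome_size):
    """Fold enrichment and one-sided Fisher p-value for one gene class."""
    table = [
        [hits, list_size - hits],
        [class_size - hits, proteome_size - class_size - (list_size - hits)],
    ]
    _, p = fisher_exact(table, alternative="greater")
    fold = (hits / list_size) / (class_size / proteome_size)
    return fold, p

# Illustrative numbers: 9 ribosomal-protein hits among the 292 candidates,
# against a hypothetical class of 250 genes in a 20,000-gene reference proteome.
fold, p = class_enrichment(hits=9, list_size=292, class_size=250, proteome_size=20000)
p_adj = multipletests([p], method="fdr_bh")[1][0]   # FDR correction across tested classes
print(f"fold={fold:.2f}, p={p:.3g}, FDR={p_adj:.3g}")
```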

1. Debate Synthesis: Does Gender Precede Sex?

Executive Summary

This synthesis analyzes the adversarial debate on the claim "Gender precedes sex," pitting Lou Girard (affirmative position) against Franck Ramus (negative position).

The debate highlights a fundamental divergence between two analytical frameworks:

• one, rooted in gender studies and sociology, holds that social structures (gender) shape the scientific conceptualization of biology (sex);

• the other, grounded in evolutionary biology, maintains that biological realities (sex) form the substrate on which cultural constructions (gender) develop.

Drawing on the work of Christine Delphy and Thomas Laqueur, Lou Girard argues that the notion of binary sex is a recent scientific construction (18th century), historically contingent and shaped by the patriarchal system it served to justify.

For Girard, gender, as a hierarchical social system, therefore comes first.

Franck Ramus counters on three levels: ontological (the biological phenomenon of sex has existed for a billion years), developmental (an individual is sexed from conception, long before any influence of gender), and evolutionary (differences in reproductive strategies between males and females explain the emergence of recurrent gender roles across human societies).

The main divergence lies not only in the conclusion but in the epistemology:

how much weight should be given to evidence from historical sociology relative to evidence from evolutionary biology?

The debate shows that even when the two speakers share common sources, their radically different interpretive frameworks lead them to opposite conclusions, notably on the binary nature of sex and on the validity of historical reconstructions of scientific concepts.

      --------------------------------------------------------------------------------

1. Context and Framing of the Debate

The debate was organized in a "constructive debate" format aimed at clarifying points of agreement and disagreement rather than determining a winner.

The two speakers were invited to defend opposing positions on the proposition "Gender precedes sex."

Affirmative position ("Yes"): defended by Lou Girard.

Negative position ("No"): defended by Franck Ramus.

The format included distinct phases:

• an initial statement of position and a clarification session to ensure mutual understanding,

• a "steel man" phase in which each speaker restated the other's position charitably,

• discussions of the roots of their convictions and the limits of their respective approaches,

• and, finally, the points of convergence and divergence.

2. Affirmative Position (Lou Girard): Gender as Organizing Principle

Lou Girard's position is anchored in the multidisciplinary field of gender studies (sociology, philosophy, feminist studies).

Her central argument is that our understanding of biological "sex" is a social construction shaped by the pre-existing gender system.

Origin and Key Definitions

Source of the claim: the sociologist Christine Delphy.

Definition of gender: a "two-category (men/women), hierarchical system" in which women are subordinated to men, notably through the exploitation of their domestic and reproductive labor (patriarchy).

Definition of sex: not the genital organs, but the concept of sex as used in biology, that is, the "antagonistic distinction between males and females."

The Main Argument: A Social Construction of Biological Sex

The claim "gender precedes sex" means that the scientific concept of biological sex was epistemologically built on the foundations of patriarchy.

It amounts to a "scientific justification of a social system."

Science did not discover binary sex in a neutral vacuum; it formalized a category that served to rationalize a social organization already in place.

Historical Evidence (Thomas Laqueur)

Girard draws heavily on the work of the historian Thomas Laqueur (La fabrique du sexe) to show that the binary conception of sex is a recent idea.

Before the 18th century: sex was not conceived as two distinct categories.

Antiquity: a "one-sex" model prevailed, in which female organs were seen as an inverted version of male organs.

Middle Ages: sex was perceived as a continuum based on "vital heat," with men embodying the highest degree of that heat.

From the 18th century onward: the binary model takes hold, coinciding with a drive to naturalize social roles.

Implications and Continuity of the Patriarchal Bias

Once established, the binary model had concrete consequences, serving as a tool of social normalization.

Intersex people: rather than questioning the binary model when faced with cases that do not conform to it, medicine has historically "mutilated" intersex people to make them fit one of the two categories.

Homosexual and trans people: because their existence contravened the biomedical model, they were psychiatrized and institutionalized.

Present-day bias: according to Girard, this patriarchal bias continues to influence scientific research, which tends to unconsciously justify patriarchal norms rather than describe the facts neutrally.

3. Negative Position (Franck Ramus): Sex as Biological Prerequisite

Franck Ramus's position rests on a clear distinction between the biological phenomenon of sex and the human concept of sex.

He maintains that sex, as a fundamental biological reality, precedes and influences the emergence of social constructions such as gender.

Fundamental Definition of Sex

Sex as reproductive strategy: Ramus defines sex at its most fundamental level, as stabilized in biology, as the distinction between two sexual types in anisogamous sexual reproduction:

◦ Females: carriers of large gametes (oocytes).

◦ Males: carriers of small gametes (sperm).

• This definition is primary, and the other aspects (genetic, hormonal) follow from it.

The Main Argument: Three Levels of Analysis

Ramus argues that sex precedes gender at three distinct scales:

1. Ontological level: the phenomenon of sex has existed in nature for roughly a billion years, long before the appearance of humanity, of patriarchy, or of any human conceptualization of sex.

2. Developmental level (individual): an individual has a sex from conception (sex chromosomes).

The influence of gender and social representations only comes into play after birth. For the fetus, sex therefore clearly precedes gender.

3. Evolutionary level (species): gender, as a social phenomenon, does not emerge out of nothing.

It develops on the basis of biological predispositions shaped by evolution.

The Evolutionary Model: From Anisogamy to Male Domination

Ramus offers an evolutionary explanation for the origin of gender roles.

Differential parental investment: anisogamy (the difference in gamete size) entails a higher initial reproductive investment for females.

This pushes them to invest more in offspring survival (gestation, breastfeeding, rearing).

Male investment can remain minimal.

Behavioral consequences:

◦ Males compete for access to females, which selects for traits such as aggressiveness, size, and strength.

◦ Females, having more to lose, are more selective in their choice of partners.

Origin of male domination: selection for greater size and strength in males (for male-male competition) has the "side effect" of making them physically stronger than females, thereby making male domination possible.

Division of labor: reproductive constraints (pregnancy, breastfeeding) make females more sedentary, while males are more mobile.

This favors a "relatively natural distribution of roles and tasks," found across many cultures.

Ramus stresses that this is not a moral justification but a causal explanation.

4. Fundamental Points of Divergence

The debate crystallized several deep points of disagreement, which are less factual than epistemological.

Primacy of Nature vs. Culture

This is the central opposition of the debate.

For Girard: culture precedes nature. Social systems (gender) determine how we conceptualize, and even perceive, biological reality (sex).

For Ramus: nature precedes culture. Human biological predispositions form the foundation on which cultures develop.

The Binarity of Sex: Concept vs. Biological Reality

For Ramus: sex, defined by reproductive strategy (the production of two types of gametes), is fundamentally binary.

For Girard: biological sex is not binary. That view is the product of a social model imposed on a more complex reality (as intersex people attest).

Interpreting Historical and Scientific Evidence

The case of Thomas Laqueur is emblematic of this divergence.

Girard accepts Laqueur's conclusions as valid historical evidence that the binary conception of sex is a recent construction.

Ramus expresses "incredulity" at this claim, finding it counter-intuitive.

He finds it hard to imagine that, before the 18th century, humans were unaware of the existence of two sexes.

For him, the arbitration criterion should be the scholarly consensus among historians, not the thesis of a single author.

Epistemological Weight of Disciplines and Data

Initially framed as an opposition between sociology (Girard) and biology (Ramus), the divergence is more subtle.

Girard places great value on gender-studies analyses for deconstructing the biases inherent in the production of scientific knowledge.

Ramus does not reject the humanities and social sciences, but says he is "not convinced" by certain specific arguments and data from gender studies, which he weighs against data from biology or psychology.

The debate showed that even when reading the same authors (e.g., Anne Fausto-Sterling), they draw radically opposed conclusions, revealing irreconcilable analytical frameworks.

5. Roots of the Positions and Acknowledged Limits

Personal Backgrounds and Motivations

Franck Ramus: his interest in the topic stems from his research in cognitive science, where he repeatedly, and without looking for them, observed sex differences (prevalence of autism, dyslexia, language development, neuroanatomy), prompting him to seek their origins.

Lou Girard: her position is shaped by her experience as a transgender woman.

Facing sexism and transphobia led her to feminism, then to gender studies, whose materialist analytical framework she adopted as the most relevant for understanding society.

Admitted Limits and Uncertainties

Franck Ramus: admits that the evolutionary approach is an "inference to the best explanation" and that he cannot provide "irrefutable proof" for every detail of this historical narrative.

Its strength lies in its coherence and overall explanatory power.

Lou Girard: acknowledges her personal limits as someone without formal credentials in the field, which could limit her understanding of the theories she presents.

She also admits the possibility of epistemological weaknesses in the gender-studies approach itself, as well as the existence of limits she does not perceive.

6. Identified Points of Convergence

Despite the deep divergences, a few points of agreement were established:

• The existence of patriarchy as a social system that disadvantages women.

• The pre-existence of biological phenomena ("nature") before the emergence of human culture.

• The fact that individuals are biologically sexed before they are socialized.

• A shared disagreement with the validity of Anne Fausto-Sterling's first "five sexes" model, although their analyses of how her work evolved subsequently diverge.

    1. Reviewer #1 (Public review):

      The study analyzes the gastric fluid DNA content identified as a potential biomarker for human gastric cancer. However, the study lacks overall logicality, and several key issues require improvement and clarification. In the opinion of this reviewer, some major revisions are needed:

      (1) This manuscript lacks a comparison of gastric cancer patients' stages with PN and N+PD patients, especially T0-T2 patients.

      (2) The comparison between gastric cancer stages seems only to reveal the difference between T3 patients and early-stage gastric cancer patients, which raises doubts about the authenticity of the previous differences between gastric cancer patients and normal patients, whether it is only due to the higher number of T3 patients.

      (3) The prognosis evaluation is too simplistic, only considering staging factors, without taking into account other factors such as tumor pathology and the time from onset to tumor detection.

      (4) The comparison between gfDNA and conventional pathological examination methods should be mentioned, reflecting advantages such as accuracy and patient comfort.

      (5) There are many questions in the figures and tables. Please match the Title, Figure legends, Footnote, Alphabetic order, etc.

      (6) The overall logicality of the manuscript is not rigorous enough, with few discussion factors, and cannot represent the conclusions drawn.

      Comments on revisions:

      The authors have addressed all concerns in the revision.

    2. Reviewer #2 (Public review):

      Summary

      The authors aimed to evaluate whether total DNA concentration in gastric fluid (gfDNA) collected during routine endoscopy could serve as a diagnostic and prognostic biomarker for gastric cancer. Using a large cohort (n=941), they reported elevated gfDNA in gastric cancer patients, an unexpected association with improved survival, and a positive correlation with immune cell infiltration.

      Strengths

      The study benefits from a substantial sample size, clear patient stratification, and control of key clinical confounders. The method is simple and clinically feasible, with preliminary evidence linking gfDNA to immune infiltration.

      Weaknesses

      (1) While the study identifies gfDNA as a potential prognostic tool, the evidence remains preliminary. Unexplained survival associations and methodological gaps weaken support for the conclusions.

      (2) The paradoxical association between high gfDNA and better survival lacks mechanistic validation. The authors acknowledge but do not experimentally distinguish tumor vs. immune-derived DNA, leaving the biological basis speculative.

      (3) Pre-analytical variables were noted but not systematically analyzed for their impact on gfDNA stability.

      Comments on revisions:

      To enhance the completeness and credibility of this research, it is essential to clarify the biological origin of gastric fluid DNA and validate these preliminary findings through a prospective, longitudinal study design.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review): 

      “The study analyzes the gastric fluid DNA content identified as a potential biomarker for human gastric cancer. However, the study lacks overall logicality, and several key issues require improvement and clarification. In the opinion of this reviewer, some major revisions are needed:” 

      (1) “This manuscript lacks a comparison of gastric cancer patients' stages with PN and N+PD patients, especially T0-T2 patients.”

      We are grateful for this astute remark. A comparison of gfDNA concentration among the diagnostic groups indicates a trend of increasing values as the diagnosis progresses toward malignancy. The observed values for the diagnostic groups are as follows:

      Author response table 1.

The chart below presents the statistical analyses of the same diagnostic/tumor-stage groups (one-way ANOVA followed by Tukey's multiple comparison tests). It shows that gastric fluid gfDNA concentrations gradually increase with malignant progression. The initial tumor stages (T0 to T2) exhibit intermediate gfDNA levels, significantly lower than in advanced disease (p = 0.0036) but not statistically different from non-neoplastic disease (p = 0.74).

      Author response image 1.
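For transparency about the test reported above, here is a minimal Python sketch of a one-way ANOVA followed by Tukey's multiple comparisons across the diagnostic/tumor-stage groups. The group labels follow the text; the group sizes and simulated gfDNA values are placeholders, not the cohort data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Simulated gfDNA concentrations (ng/uL) per diagnostic group (placeholder sizes and values).
groups = {
    "PN": rng.lognormal(2.3, 0.8, 60),
    "N+PD": rng.lognormal(2.4, 0.8, 600),
    "T0-T2": rng.lognormal(2.7, 0.8, 65),
    "T3-T4": rng.lognormal(3.4, 0.8, 154),
}

print(f_oneway(*groups.values()))            # global one-way ANOVA across groups

values = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels))     # Tukey's pairwise multiple comparisons
```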

      (2) “The comparison between gastric cancer stages seems only to reveal the difference between T3 patients and early-stage gastric cancer patients, which raises doubts about the authenticity of the previous differences between gastric cancer patients and normal patients, whether it is only due to the higher number of T3 patients.”

      We appreciate the attention to detail regarding the numbers analyzed in the manuscript. Importantly, the results are meaningful because the number of subjects in each group is comparable (T0-T2, N = 65; T3, N = 91; T4, N = 63). The mean gastric fluid gfDNA values (ng/µL) increase with disease stage (T0-T2: 15.12; T3-T4: 30.75), and both are higher than the mean gfDNA values observed in non-neoplastic disease (10.81 ng/µL for N+PD and 10.10 ng/µL for PN). These subject numbers in each diagnostic group accurately reflect real-world data from a tertiary cancer center.

      (3) “The prognosis evaluation is too simplistic, only considering staging factors, without taking into account other factors such as tumor pathology and the time from onset to tumor detection.”

Histopathological analyses were performed throughout the study, not only for the initial diagnosis of tissue biopsies but also for the classification of Lauren's subtypes, tumor staging, and the assessment of the presence and extent of immune cell infiltrates. Regarding the time of disease onset, this variable is, by definition, unknown at the time of a diagnostic EGD. While the prognosis definition is indeed straightforward, we believe that a simple, cost-effective, and practical approach is advantageous for patients across diverse clinical settings and is more likely to be effectively integrated into routine EGD practice.

      (4) “The comparison between gfDNA and conventional pathological examination methods should be mentioned, reflecting advantages such as accuracy and patient comfort. “

      We wish to reinforce that EGD, along with conventional histopathology, remains the gold standard for gastric cancer evaluation. EGD under sedation is routinely performed for diagnosis, and the collection of gastric fluids for gfDNA evaluation does not affect patient comfort. Thus, while gfDNA analysis was evidently not intended as a diagnostic EGD and biopsy replacement, it may provide added prognostic value to this exam.

      (5) “There are many questions in the figures and tables. Please match the Title, Figure legends, Footnote, Alphabetic order, etc. “

      We are grateful for these comments and apologize for the clerical oversight. All figures, tables, titles and figure legends have now been double-checked.

      (6) “The overall logicality of the manuscript is not rigorous enough, with few discussion factors, and cannot represent the conclusions drawn. “

We assume that the remark regarding "overall logicality" pertains to the rationale and reasoning of this investigational study. Our working hypothesis was that during neoplastic disease progression, tumor cells continuously proliferate and, depending on various factors, attract immune cell infiltrates. Consequently, both tumor cells and immune cells, as well as tumor-derived DNA, are released into the fluids surrounding the tumor at its various locations, including blood, urine, saliva, gastric fluids, and others. Increases in DNA levels within some of these fluids have been documented and are clinically meaningful. The concurrent observation of elevated gastric fluid gfDNA levels and immune cell infiltration supports the hypothesis that increased gfDNA, which may originate not only from tumor cells but also from immune cells, could be associated with better prognosis, as suggested by this study of a large real-world patient cohort.

      In summary, we thank Reviewer #1 for his time and effort in a constructive critique of our work.

      Reviewer #2 (Public review):

      Summary: 

      “The authors investigated whether the total DNA concentration in gastric fluid (gfDNA), collected via routine esophagogastroduodenoscopy (EGD), could serve as a diagnostic and prognostic biomarker for gastric cancer. In a large patient cohort (initial n=1,056; analyzed n=941), they found that gfDNA levels were significantly higher in gastric cancer patients compared to non-cancer, gastritis, and precancerous lesion groups. Unexpectedly, higher gfDNA concentrations were also significantly associated with better survival prognosis and positively correlated with immune cell infiltration. The authors proposed that gfDNA may reflect both tumor burden and immune activity, potentially serving as a cost-effective and convenient liquid biopsy tool to assist in gastric cancer diagnosis, staging, and follow-up.”

      Strengths: 

      “This study is supported by a robust sample size (n=941) with clear patient classification, enabling reliable statistical analysis. It employs a simple, low-threshold method for measuring total gfDNA, making it suitable for large-scale clinical use. Clinical confounders, including age, sex, BMI, gastric fluid pH, and PPI use, were systematically controlled. The findings demonstrate both diagnostic and prognostic value of gfDNA, as its concentration can help distinguish gastric cancer patients and correlates with tumor progression and survival. Additionally, preliminary mechanistic data reveal a significant association between elevated gfDNA levels and increased immune cell infiltration in tumors (p=0.001).”

      Reviewer #2 has conceptually grasped the overall rationale of the study quite well, and we are grateful for their assessment and comprehensive summary of our findings.

      Weaknesses: 

      (1) “The study has several notable weaknesses. The association between high gfDNA levels and better survival contradicts conventional expectations and raises concerns about the biological interpretation of the findings.“

      We agree that this would be the case if the gfDNA was derived solely from tumor cells. However, the findings presented here suggest that a fraction of this DNA would be indeed derived from infiltrating immune cells. The precise determination of the origin of this increased gfDNA remains to be achieved in future follow-up studies, and these are planned to be evaluated soon, by applying DNA- and RNA-sequencing methodologies and deconvolution analyses.

      (2) “The diagnostic performance of gfDNA alone was only moderate, and the study did not explore potential improvements through combination with established biomarkers. Methodological limitations include a lack of control for pre-analytical variables, the absence of longitudinal data, and imbalanced group sizes, which may affect the robustness and generalizability of the results.“

      Reviewer #2 is correct that this investigational study was not designed to assess the diagnostic potential of gfDNA. Instead, its primary contribution is to provide useful prognostic information. In this regard, we have not yet explored combining gfDNA with other clinically well-established diagnostic biomarkers. We do acknowledge this current limitation as a logical follow-up that must be investigated in the near future.

      Moreover, we collected a substantial number of pre-analytical variables within the limitations of a study involving over 1,000 subjects. Longitudinal samples and data were not analyzed here, as our aim was to evaluate prognostic value at diagnosis. Although the groups are imbalanced, this accurately reflects the real-world population of a large endoscopy center within a dedicated cancer facility. Subjects were invited to participate and enter the study before sedation for the diagnostic EGD procedure; thus, samples were collected prospectively from all consenting individuals.

      Finally, to maintain a large, unbiased cohort, we did not attempt to balance the groups, allowing analysis of samples and data from all patients with compatible diagnoses (please see Results: Patient groups and diagnoses).

      (3) “Additionally, key methodological details were insufficiently reported, and the ROC analysis lacked comprehensive performance metrics, limiting the study's clinical applicability.“

We are grateful for this useful suggestion. In the current version, each ROC curve (Supplementary Figures 1A and 1B) now includes the top 10 gfDNA thresholds, along with their corresponding sensitivity and specificity values (please see Suppl. Table 1). The thresholds are ordered from best to worst based on the classic Youden's J statistic, as follows:

      Youden Index = specificity + sensitivity – 1 [Youden WJ. Index for rating diagnostic tests. Cancer 3:32-35, 1950. PMID: 15405679]. We have made an effort to provide all the key methodological details requested, but we would be glad to add further information upon specific request.
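As an illustration of the threshold ranking described above, the following Python sketch derives per-threshold sensitivity and specificity from a ROC curve and ranks thresholds by Youden's J. The labels and gfDNA values are invented placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder data: 1 = gastric cancer, 0 = non-cancer; gfDNA in ng/uL.
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
gfdna = np.array([5.0, 8.0, 11.0, 14.0, 9.0, 22.0, 31.0, 12.0, 18.0, 40.0])

fpr, tpr, thresholds = roc_curve(y_true, gfdna)
youden_j = tpr + (1 - fpr) - 1               # sensitivity + specificity - 1
ranked = np.argsort(youden_j)[::-1]          # thresholds ordered best to worst

for i in ranked[:10]:
    print(f"threshold={thresholds[i]:.1f}  sens={tpr[i]:.2f}  "
          f"spec={1 - fpr[i]:.2f}  J={youden_j[i]:.2f}")
```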

      Reviewer #1 (Recommendations for the authors):

      The authors should pay attention to ensuring uniformity in the format of all cited references, such as the number of authors for each reference, the journal names, publication years, volume numbers, and page number formats, to the best extent possible. 

Thank you for pointing out this inconsistency. All cited references have now been revisited and adjusted accordingly. We apologize for this clerical oversight.

      Reviewer #2 (Recommendations for the authors):

      (1) “High gfDNA levels were surprisingly linked to better survival, which conflicts with the conventional understanding of cfDNA as a tumor burden marker. Was any qualitative analysis performed to distinguish DNA derived from immune cells versus tumor cells?“

Tumor-derived DNA is certainly present in gfDNA, as our group has unequivocally demonstrated in a previous publication [Pizzi M. P., et al. (2019) Identification of DNA mutations in gastric washes from gastric adenocarcinoma patients: Possible implications for liquid biopsies and patient follow-up. Int J Cancer 145:1090–1097. DOI: 10.1002/ijc.32114]. However, in the present manuscript, our data suggest that gfDNA may also contain DNA derived from infiltrating immune cells. This may also be the case for other malignancies, and qualitative deconvolution studies could be highly informative. To achieve this, DNA sequencing and RNA-Seq analyses may offer relevant evidence. Our study should be viewed as an original and preliminary analysis that may encourage such quantitative and qualitative studies in biofluids from cancer patients. Currently, this is a simple approach (which might be its essential beauty), but we hope to investigate this aspect further in future studies.

      (2) “The ROC curve AUC was 0.66, indicating only moderate discrimination ability. Did the authors consider combining gfDNA with markers such as CEA or CA19-9 to improve diagnostic accuracy?“

      This is indeed a logical idea, which shall certainly be explored in planned follow-up studies.

      (3) “DNA concentration could be influenced by non-biological factors, including gastric fluid pH, sampling location, time delay, or freeze-thaw cycles. Were these operational variables assessed for their effect on data stability?“

We appreciate the rigor of the evaluation. Yes, information regarding gastric fluid pH was collected. All samples were collected from the stomach during the EGD procedure. Samples were divided into aliquots and were thawed only once. This information is now provided in the updated manuscript text.

      (4) “This cross-sectional study lacks data on gfDNA changes over time, limiting conclusions on its utility for monitoring treatment response or predicting recurrence.“

      Again, temporal evaluation is another excellent point, and it will be the subject of future analyses. In this exploratory study, samples were collected at diagnosis, at a single point. We have not obtained serial samples, as participants received appropriate therapy soon following diagnosis.

(5) “The normal endoscopy group included only 10 patients, the precancerous lesion group 99 patients, while the gastritis group had 596 patients. Such uneven sample sizes may affect statistical reliability and generalizability. Has weighted analysis or optimized sampling been considered for future studies?“

      Yes, in future studies this analysis will be considered, probably by employing stratified random sampling with relevant patient attributes recorded.

      (6) “The SciScore was only 2 points, indicating that key methodological details such as inclusion/exclusion criteria, randomization, sex variables, and power calculation were not clearly described. It is recommended that these basic research elements be supplemented in the Methods section. “

This was an exploratory study, the first of its kind, to evaluate the prognostic potential of gfDNA in the context of gastric cancer. Patients were not included if they did not sign the informed consent form, and were excluded if they withdrew after consenting. Other exclusion criteria included conditions such as previous gastrectomy or esophagectomy, or the presence of non-gastric malignancies. Randomization and power analyses were not applicable, as no prior data were available regarding gfDNA concentration values or their diagnostic/prognostic potential. All subjects, regardless of sex, were invited to participate without discrimination or selection.

(7) “Although a ROC curve was provided in the supplementary materials (Supplementary Figure 1), only the curve and AUC value were shown without sensitivity, specificity, predictive values, or cutoff thresholds. The authors are advised to provide a full ROC performance assessment to strengthen the study's clinical relevance.”

      These data are now given alongside the ROC curves in the Supplementary Information section, specifically in Supplementary Figure 1 and in the newly added Supplementary Table 1.

      We thank Reviewer #2 for an insightful and positive overall assessment of our work.

1. Ideology and Critical Thinking: Debate Synthesis

Executive Summary

This document synthesizes the arguments and conclusions of the debate on the compatibility of ideology with critical thinking, pitting Gwen Pallarès (positive position) against Pascal Wagner-Egger (negative position).

Gwen Pallarès maintains that ideology is not only compatible with critical thinking but is often a prerequisite and a driver of it, arguing that every individual holds an ideology that structures their thinking and motivates their curiosity.

Pascal Wagner-Egger defends the position that ideology is fundamentally an obstacle to critical thinking and to the scientific method, a set of preconceptions that must be actively minimized by relying on empirical data.

Despite their opposed starting positions, a significant consensus emerged on several points.

Both speakers agree on the existence of a "tipping point" or "qualitative leap" at which ideology becomes incompatible with critical thinking, notably in cases of fanaticism or radicalization, or when fundamental identity-related beliefs are threatened.

They also recognize that ideology can act as a powerful "epistemic motivation," prompting inquiry and analysis.

The main divergence lies in the nature of this relationship.

For Pascal, the motivation induced by ideology is a double-edged sword that demands heightened epistemic vigilance to counter bias.

For Gwen, this motivation is a fundamental driver, and the will to place oneself in a "centrist" position to avoid bias is itself an ideological position.

This difference in perspective stems from deeper epistemological divergences about the nature of science, the construction of data, and the porousness of the boundary between the scientific and political domains.

1. Introduction to the Debate

The debate, moderated by Peter Barret, set out to explore the question "Is ideology compatible with critical thinking?" in a format meant to be constructive and to clarify positions rather than encourage counter-argumentation.

The two speakers are:

Gwen Pallarès: lecturer (maîtresse de conférences) in science education at the Université de Reims Champagne-Ardenne, defending the positive position.

Pascal Wagner-Egger: social psychologist at the Université de Fribourg, defending the negative position.

2. Key Definitions

The speakers agreed on the following definitions to frame the debate.

Term: Ideology

Definition from Gwen Pallarès (social psychology): a system of attitudes, beliefs, and stereotypes that coordinates the actions of institutions and individuals. This system serves in particular to justify or criticize existing social hierarchies (e.g., feminism vs. masculinism).

Definition from Pascal Wagner-Egger (Larousse): a system of general ideas forming a body of philosophical and political doctrine underlying individual or collective behavior (e.g., Marxist or nationalist ideology).

Critical thinking: defined by Gwen Pallarès as a set of skills (analysis, evaluation of arguments and information) and dispositions (intellectual humility, curiosity, reflexivity).

This set is oriented toward reasoned decision-making ("What should we believe or do?") and is often operationalized through good-quality argumentation.

3. Initial Positions

3.1. Gwen Pallarès's Position (Positive): Ideology as a Compatible Prerequisite

Gwen Pallarès's central argument rests on the universality of ideology:

Everyone has an ideology: each individual's thinking is structured by systems of beliefs, attitudes, and stereotypes.

To deny this would be to deny a fundamental reality of how humans function.

Incompatibility would make critical thinking impossible: if ideology were incompatible with critical thinking, then, since everyone has an ideology, no one could have critical thinking.

Critical thinking is a spectrum: everyone has at least minimal skills of analysis and argumentation, even if their application can be biased (e.g., confirmation bias, whereby we criticize information that contradicts our beliefs more harshly).

Limit of compatibility: she concedes that extreme forms of ideology (radicalization, cult-like control, fanaticism) are, by contrast, incompatible with critical thinking because they push toward uncritical acceptance of information.

3.2. Pascal Wagner-Egger's Position (Negative): Ideology as an Obstacle to Science

Pascal Wagner-Egger anchors his position in the history of science and in social psychology:

Science was built against ideology: he cites the example of science struggling against religious ideology, which he describes as a "totalitarian regime."

The "ideological method": it posits that truth is contained in a founding text (the Bible, Le Capital) and that every observation must conform to it. This is the inverse of the scientific method.

The enemy within and without: ideology is an institutional (external) obstacle but also an internal obstacle within researchers themselves.

He cites Gaston Bachelard and his "epistemological obstacles" (opinion, general knowledge) as precursors of the notion of cognitive biases.

The role of empirical data: the scientific method is the main tool for limiting the effects of our ideologies and testing our preconceptions against reality.

He cites studies showing more dogmatism and conspiracy belief at the political extremes.

4. Roots of the Convictions: Academic Trajectories

The positions of the two debaters are strongly influenced by their personal and academic experiences.

Pascal Wagner-Egger: his path took him from the "hard" sciences to the social sciences.

He was struck by what he perceived as dogmatic ideological positions among some colleagues, notably the rejection of quantitative methods dismissed as "Anglo-Saxon imperialism."

This experience forged his conviction that ideology can harm the pursuit of scientific truth and that one must guard against it.

Gwen Pallarès: her path ran the other way, from mathematics to science education.

In-depth study of socio-scientific controversies (AI, gender, ecology) for her thesis progressively politicized her.

Her political engagement became a driver for producing scientific research that is more rigorous and socially useful, particularly for education.

For her, ideology is not an obstacle to rigor but what motivates it.

5. Analysis of Convergence and Divergence

The debate revealed broader common ground than expected, while sharpening the nature of the disagreements.

5.1. Fundamental Points of Convergence

1. The "tipping point": both speakers agree that there is a threshold at which ideology becomes incompatible with critical thinking.

This threshold is reached in cases of fanaticism or radicalization, or when fundamental beliefs tied to a person's identity are threatened, making dialogue and self-questioning impossible.

2. Epistemic motivation: both parties accept that ideology is a powerful driver.

An ideological commitment (e.g., environmentalist, feminist) can stimulate intellectual curiosity, information-seeking, and the willingness to analyze arguments, which are central dispositions of critical thinking.

3. The universality of ideology: both debaters share the premise that every individual, scientists included, holds one or more ideologies that structure their worldview.

5.2. Key Points of Divergence

The main divergence is not so much about compatibility per se as about the nature of the relationship between ideology and critical thinking.

Point of divergence: Nature of the link

Position of Pascal Wagner-Egger: a double-edged sword. Ideology motivates, but it simultaneously biases. It is therefore crucial to exercise heightened epistemic vigilance and to seek to minimize the influence of one's own ideologies, notably by confronting them with empirical data.

Position of Gwen Pallarès: a fundamental driver. Ideology is the main engine of research and of critical engagement. Trying to cancel it out is illusory. The stance of placing oneself "in the center" in order to be less biased is itself an ideology (the "golden-mean bias").

Point of divergence: Underlying epistemology

Position of Pascal Wagner-Egger: closer to empiricism and critical rationalism (citing Popper and claiming the lineage of Lakatos). Data, although partly constructed, allow one, through triangulation, to approach a reality independent of the method.

Position of Gwen Pallarès: closer to constructivism and pragmatism. Data are fundamentally constructed by the methodology, which is itself derived from theoretical frameworks. The distinction between science and politics is more porous.

Point of divergence: Relationship between science and politics

Position of Pascal Wagner-Egger: aims to maintain a clear distinction. In the scientific domain, data must take precedence over preconceptions. In the political domain, ideology and activism are useful and necessary.

Position of Gwen Pallarès: the distinction is less clear-cut. Scientific work is intrinsically tied to societal issues and can be motivated by political engagement, and that engagement can be a guarantee of rigor in making science useful.

1. Abstract

Identifying differentially expressed genes associated with genetic pathologies is crucial to understanding the biological differences between healthy and diseased states and identifying potential biomarkers and therapeutic targets. However, gene expression profiles are controlled by various mechanisms including epigenomic changes, such as DNA methylation, histone modifications, and interfering microRNA silencing. We developed a novel Shiny application for transcriptomic and epigenomic change identification and correlation using a combination of Bioconductor and CRAN packages. The developed package, named EMImR, is a user-friendly tool with an easy-to-use graphical user interface to identify differentially expressed genes, differentially methylated genes, and differentially expressed interfering miRNA. In addition, it identifies the correlation between transcriptomic and epigenomic modifications and performs the ontology analysis of genes of interest. The developed tool could be used to study the regulatory effects of epigenetic factors. The application is publicly available in the GitHub repository (https://github.com/omicscodeathon/emimr).

This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.168), and the reviews are published under the same license.

      Reviewer 1. Haikuo Li

      Is there a clear statement of need explaining what problems the software is designed to solve and who the target audience is? No. Should be made more clear.

      Comments: The authors developed EMImR as an R toolkit and open-sourced software for analysis of bulk RNA-seq as well as epigenomic sequencing data including DNA methylation seq and non-coding RNA profiling. This work is very interesting and should be of interest to people interested in transcriptomic and epigenomic data analysis but without computational background. I have two major comments: 1. Results presented in this manuscript were only from microarray datasets and are kind of “old” data. Although these data types and sequencing platforms are still very valuable, I don’t think they are widely used as of today, and therefore, it may be less compelling to the audience. It is suggested to validate EMImR using additional more recently published datasets. 2. The authors studied bulk transcriptomic and epigenomic sequencing data. In fact, single-cell and spatially resolved profiling of these modalities are becoming the mainstream of biomedical research since those methods offer much better resolution and biological insights. The authors are encouraged to discuss some key references of this field (for example, PMIDs: 34062119 and 38513647 for single-cell multiomics; PMID: 40119005 for spatial multiomics sequencing), potentially as the future direction of package development. Re-review: The authors have answered my questions and added new content in the Discussion section as suggested.

      Reviewer 2. Weiming He

Dear Editor-in-Chief, The EMImR developed by the author is a Shiny application designed for the identification of transcriptomic and epigenomic changes and data association. This program is mainly targeted at Windows UI users who do not possess extensive computational skills. Its core function is to identify the intersections between genetic and epigenetic modifications.

Review Recommendation: I recommend that the article be accepted after the appropriate revisions under the current "Minor Revision" recommendation. However, the author needs to address the following issues.

Major Issue: The article does not provide specific information on the resource consumption (memory and time) of the program. This is crucial for new users. Although we assume that the resource consumption is minimal, users need to know the machine configuration required to run the program. Therefore, I suggest adding two columns for "Time" and "Memory" in Table 1.

Minor Issues

1. GitHub Page: The Table of Contents on the GitHub page provides a Demonstration Video. However, due to restricted access to YouTube in some regions, it is recommended to also upload a manual in PDF format named "EMImR_manual.pdf" on GitHub. In step 4 of the Installation Guide, it states that "All dependencies will be installed automaticly". It is advisable to add a step: if the installation fails, prompt the user about the specific error location and guide the user to install the dependent packages manually first to ensure successful installation. Currently, the command "source('Dependencies_emimr.R')" does not return any error messages, which is extremely inconvenient for novice users. The author can provide the maintainer's email address so that users can seek timely solutions when encountering problems.

2. R Version: The author recommends using R 4.2.1 (2022), which was released three years ago. The current latest version is R 4.5.1. It is suggested that the author test the program with the latest version to ensure its adaptability to future developments.

3. Flowchart Suggestion: It is recommended to add a flowchart to illustrate the sequential relationships among packages such as DESeq2 for differential analysis, clusterProfiler for clustering, enrichplot for plotting, and miRNA-related packages (this is optional).

4. Function Addition: Currently, the program seems to lack a button for saving PDFs, as well as functions for batch uploading, saving sessions, and one-click exporting of PDF/PNG files. It is recommended to add the "shinysaver" and "downloadHandler" functions to fulfill these requirements.

5. Personalized Features and Upgrade Plan: To attract more users, more personalized features should be added. The author can mention the future upgrade plan in the discussion section. For example, currently, DESeq2 is used for differential analysis, and in future upgrades, more methods such as PossionDis, NOIseq, and EBseq could be provided for users to choose from.

6. Text Polishing Suggestions: 6.1 Unify the usage of "down-regulated" and "downregulated", preferably using the latter. 6.2 "R - studio version" → "RStudio". 6.3 "Lumian," → "Lumian". 6.4 "no login wall" → "does not require user registration". 6.5 Rewrite "genes were simultaneously differentially expressed and methylated" as "genes that were both differentially expressed and differentially methylated". 6.6 Ensure that Latin names of species are in italics. 6.7 Make corresponding modifications to other sentences to improve the accuracy and professionalism of the language in the article.

      The above are my detailed review comments on this article. I hope they can provide a reference for your decision - making.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Manuscript number: RC -2025-03175

      Corresponding author(s): Gernot Längst


      1. General Statements [optional]


      2. Point-by-point description of the revisions


      We thank the reviewers for their efforts and detailed evaluation of our manuscript. We think that the comments of the reviewers allowed us to significantly improve the manuscript.

      With best regards

      The authors of the manuscript

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

Summary: Holzinger et al. present a new automated pipeline, nucDetective, designed to provide accurate nucleosome positioning, fuzziness, and regularity from MNase-seq data. The pipeline is structured around two main workflows, Profiler and Inspector, and can also be applied to time-series datasets. To demonstrate its utility, the authors re-analyzed a Plasmodium falciparum MNase-seq time-series dataset (Kensche et al., 2016), aiming to show that nucDetective can reliably characterize nucleosomes in challenging AT-rich genomes. By integrating additional datasets (ATAC-seq, RNA-seq, ChIP-seq), they argue that the nucleosome positioning results from their pipeline have biological relevance.

      Major Comments:

      Despite being a useful pipeline, the authors draw conclusions directly from the pipeline's output without integrating necessary quality controls. Some claims either contradict existing literature or rely on misinterpretation or insufficient statistical support. In some instances, the pipeline output does not align with known aspects of Plasmodium biology. I outline below the key concerns and suggested improvements to strengthen the manuscript and validate the pipeline:

Clarification of +1 Nucleosome Positioning in P. falciparum: The authors should acknowledge that +1 nucleosomes have been previously reported in P. falciparum. For example, Kensche et al. (2016) used MNase-seq to map ~2,278 TSSs (based on enriched 5′-end RNA data) and found that the +1 nucleosome is positioned directly over the TSS in most genes:

      "Analysis of 2278 start sites uncovered positioning of a +1 nucleosome right over the TSS in almost all analysed regions" (Figure 3A).

They also described a nucleosome-depleted region (NDR) upstream of the TSS, which varies in size, while the +1 nucleosome frequently overlaps the TSS. The authors should nuance their claims accordingly. Nevertheless, I do agree that the +1 positioning in P. falciparum may be fuzzier as compared to yeast or mammals. Moreover, the correlation between +1 nucleosome occupancy and gene expression is often weak, and several genes show similar nucleosome profiles regardless of expression level. This raises my question: did the authors observe any of these patterns in their new data?

We appreciate the reviewer’s insightful comment and agree that +1 nucleosomes and nucleosome depleted promoter regions have been previously reported in P. falciparum, notably by the Bartfai and Le Roch groups, including Kensche et al. (PMID: 26578577). Our study advances this understanding by providing, for the first time, a comprehensive view of the entirety of a canonical eukaryotic promoter architecture in P. falciparum—encompassing the NDR, the well-positioned +1 nucleosome, and the downstream phased nucleosome array. This downstream nucleosome array structure has not been characterized before, as prior studies noted a “lack of downstream nucleosomal arrays” (PMID: 26578577) or “relatively random” nucleosome organization within gene bodies (PMID: 24885191). We have revised the manuscript to more clearly acknowledge previous work and highlight our contributions. The changes applied in the manuscript are highlighted in yellow and are also shown below.

      In the Abstract L26-L230: Contrary to the current view of irregular chromatin, we demonstrate for the first time regular phased nucleosome arrays downstream of TSSs, which, together with the established +1 nucleosome and upstream nucleosome-depleted region, reveal a complete canonical eukaryotic promoter architecture in Pf.

Introduction L156-L159: For example, we identify a phased nucleosome array downstream of the TSS, together with a well-positioned +1 nucleosome and an upstream nucleosome-free region. These findings support a promoter architecture in Pf that resembles classical eukaryotic promoters (Bunnik et al. 2014, Kensche et al. 2016).

Results L181-L183: These new Pf nucleosome maps reveal a nucleosome organisation at transcription start sites (TSS) reminiscent of the general eukaryotic chromatin structure, featuring the reported well-positioned +1 nucleosome, an upstream nucleosome-free region (NFR; Bunnik et al. 2014, Kensche et al. 2016), and, shown for the first time in Pf, a phased nucleosome array downstream of the TSS.

Discussion L414-L419: Previous analyses of Pf chromatin have identified +1 nucleosomes and NFRs (Bunnik et al. 2014, Kensche et al. 2016). Here we extend this understanding by demonstrating phased nucleosome array structures throughout the genome. This finding provides evidence for spatial regulation of nucleosome positioning in Pf, challenging the notion that nucleosome positioning is relatively random in gene bodies (Bunnik et al. 2014, Kensche et al. 2016). Consequently, our results contribute to the understanding that Pf exhibits a typical eukaryotic chromatin structure, including well-defined nucleosome positioning at the TSS and regularly spaced nucleosome arrays (Schones et al. 2008; Yuan et al. 2005).

Regarding the reviewer’s question on +1 nucleosome dynamics: our data agree with the reviewer and with other studies (e.g., PMID: 31694866) that the +1 nucleosome position is robust and does not correlate with gene expression strength. In the manuscript we show that dynamic nucleosomes are preferentially detected at the –1 nucleosome position (Figure 2C). In line with this, we show that the +1 nucleosome position does not markedly change during transcription initiation of a subset of late transcribed genes (Figure 5A). However, we observe an opening of the NDR and, within the gene body, increased fuzziness and decreased nucleosome array regularity (Figure S4A). To illustrate the relationship between +1 nucleosome positioning and expression strength, we have included a heatmap showing nucleosome occupancy at the TSS, ordered according to expression strength (NEW Figure S4C):

We included a sentence describing the relationship between +1 nucleosome position and gene expression in L257-L258: Furthermore, the +1 nucleosome positioning is unaffected by the strength of gene expression (Figure S2C).

Lack of Quality Control in the Pipeline

The authors claim (lines 152-153) that QC is performed at every stage, but this is not supported by the implementation. On the GitHub page (GitHub - uschwartz/nucDetective), QC steps are only marked at the Profiler stage using standard tools (FastQC, MultiQC). The Inspector stage, which is crucial for validating nucleosome detection, lacks QC entirely. The authors should implement additional steps to assess the quality of nucleosome calls. For example, how are false positives managed? ROC curves should be used to evaluate true positive vs. false positive rates when defining dynamic nucleosomes. How are sequencing biases addressed?

      The workflow overview chart on GitHub was not properly color coded. Therefore, we changed the graphics and highlighted the QC steps in the overview charts accordingly:

Based on our long-standing expertise in analysing MNase-seq data (PMID: 38959309, PMID: 37641864, PMID: 30496478, PMID: 25608606), the best quality metrics for assessing the performance of the challenging MNase experiment are the fragment size distributions, which reveal the typical nucleosomal DNA lengths, and the TSS plots, which show a positioned +1 nucleosome and regularly phased nucleosome arrays downstream of it. Additionally, visual inspection of the nucleosome profiles in a genome browser is advisable. We make these quality metrics easily available in the nucDetective Profiler workflow (insert size histogram, TSS plot, and nucleosome profile bigwig files). Furthermore, the PCA and correlation analyses based on nucleosome occupancy in the Inspector workflow allow the user to evaluate replicate reproducibility or the integrity of time series data, as shown for the data evaluated in this manuscript.
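For illustration, a fragment-size histogram of the kind used here as a QC metric can be computed directly from the aligned read pairs. The minimal pysam sketch below is not part of nucDetective; it assumes a coordinate-sorted, indexed, properly paired MNase-seq BAM, and the file name is a placeholder.

```python
import pysam
from collections import Counter

# Placeholder BAM; any coordinate-sorted, indexed, properly paired MNase-seq BAM works.
bam = pysam.AlignmentFile("mnase_sample.bam", "rb")

fragment_sizes = Counter()
for read in bam.fetch():
    # Count each properly paired fragment once, via the first mate.
    if read.is_proper_pair and read.is_read1 and not read.is_secondary:
        tlen = abs(read.template_length)
        if 0 < tlen <= 1000:
            fragment_sizes[tlen] += 1

# A dominant peak around the mono-nucleosomal length (~147-170 bp)
# indicates a well-behaved digestion.
for length in sorted(fragment_sizes):
    print(length, fragment_sizes[length])
```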

The Inspector workflow uses the well-established DANPOS toolkit to call nucleosome positions. In our experience this step is particularly robust and well established in DANPOS (PMID: 23193179), so there is no need to reinvent it. Nevertheless, appropriate pre-processing of the data, as done in the nucDetective pipeline, is crucial to obtain highly resolved nucleosome positions. Using the final nucleosome profiles (bigwig) and the nucleosome reference positions (bed) produced by the Inspector workflow allows visual inspection of the called nucleosomes in a genome viewer. Furthermore, to avoid using false positive nucleosome positions in the dynamic nucleosome analysis, we take only the 20% best-positioned nucleosomes of each sample, as determined by the fuzziness score.
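As a minimal illustration of this filtering step (not the nucDetective code itself), the 20% cutoff could be applied to a positions table as sketched below; the file name and the fuzziness column name are assumptions and may differ from the actual DANPOS output.

```python
import pandas as pd

# Hypothetical tab-separated positions table from the Inspector workflow.
positions = pd.read_csv("sample.positions.xls", sep="\t")

# Keep the 20% of nucleosomes with the lowest fuzziness score,
# i.e. the best-positioned ones (column name is an assumption).
cutoff = positions["fuzziness_score"].quantile(0.20)
well_positioned = positions[positions["fuzziness_score"] <= cutoff]

print(f"{len(well_positioned)} of {len(positions)} nucleosomes retained")
```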

We understand the value of a gold standard of dynamic nucleosomes for testing performance using ROC curves. However, we are not aware that such a gold standard exists in the nucleosome analysis field, especially not for multi-sample settings such as time series data. One alternative would be to use simulated data; however, this has several limitations:

• Lack of biological complexity: simulated data often fails to capture the full complexity of biological systems, including the heterogeneity, variability, and subtle dependencies present in real-world data. Simplifications and omissions in simulation models can result in test datasets that are more tractable but less realistic, causing software to appear robust or accurate under idealized conditions while underperforming on actual experimental data.

• Risks of overfitting: software may be tuned to perform well on simulated datasets, leading to overfitting and falsely inflated performance metrics. This undermines the predictive or diagnostic value of the results for real biological data.

• Poor model fidelity and hidden assumptions: the authenticity of simulated data is bounded by the fidelity of the underlying models. If those models are inaccurate or make untested assumptions, the generated data may not reflect real experimental or clinical scenarios. This can mask software shortcomings or bias validation toward specific, perhaps irrelevant, scenarios.

Therefore, we decided to validate the performance of the pipeline in the biological context of the analyzed data:

• PCA of the individual nucleosome features shows a cyclic structure, as expected for the IDC (Fig. 1D-G).

• Nucleosome occupancy changes anti-correlate with chromatin accessibility (Fig. 3B), as expected.

• Dynamic nucleosome features correlate with expression changes (Fig. 5C).

We are aware that MNase-seq experiments might have a sequence bias caused by the enzyme's endonuclease sequence preference (PMID: 30496478). However, the main aim of the nucDetective pipeline is to identify dynamic nucleosome features genome-wide. Therefore, we compare the nucleosome features across multiple samples to find the positions in the genome with the highest variability. Comparisons are performed between the same nucleosome positions at the same genomic sites across multiple conditions, so the sequence context is constant and does not confound the analysis. This is analogous to differential expression analysis of RNA-seq data, where gene counts are not normalized by gene length. Introducing a sequence normalization step might distort and bias the results for dynamic nucleosomes.

We included a paragraph describing this limitation in the discussion (L447-L457):

Depending on the degree of MNase digestion, nucleosomes from GC-rich regions are preferentially recovered in MNase-seq experiments (Schwartz et al. 2019). However, no sequence or gDNA normalisation step was included in the nucDetective pipeline. To identify dynamic nucleosomes, comparisons are performed between the same nucleosome positions at the same genomic sites across multiple samples. Hence, the sequence context is constant and does not confound the analysis. Introducing a sequence normalization step might even distort and bias the results. Nevertheless, it is highly advisable to use low MNase concentrations in chromatin digestions to reduce the sequence bias in nucleosome extractions. This turned out to be a crucial condition for obtaining a homogeneous nucleosome distribution in the AT-rich intergenic regions of eukaryotic genomes, and especially in the AT-rich genome of Pf (Schwartz et al. 2019, Kensche et al. 2016).

Use of Mono-nucleosomes Only

      The authors re-analyze the Kensche et al. (2016) dataset using only mono-nucleosomes and claim improved nucleosome profiles, including identification of tandem arrays previously unreported in P. falciparum. Two key issues arise: 1. Is the apparent improvement due simply to focusing on mono-nucleosomes (as implied in lines 342-346)?

The default setting in nucDetective is to use fragment sizes of 140-200 bp, which corresponds to the main mono-nucleosome fraction in standard MNase-seq experiments. However, the correct selection of fragment sizes may vary depending on the organism and on variations in MNase-seq protocols. Therefore, the pipeline offers the option of changing the cutoff parameters (--minLen; --maxLen) accordingly. Kensche et al. thoroughly tested and established the best parameters for this dataset. We agree with their selected parameters and used the same cutoffs (75-175 bp) in this manuscript. For this particular dataset, the fragment size selection is not the reason why we obtain a better resolution. MNase-seq analysis is a multistep process that is optimized in the nucDetective pipeline. The differences from the Kensche et al. analysis lie in the pre-processing and alignment steps:

      Kensche et al. : “Paired-end reads were clipped to 72 bp and all data was mapped with BWA sample (Version 0.6.2-r126)”

      nucDetective:

      • Trimming using TrimGalore --paired -q 10 --stringency 2
• Mapping using bowtie2 --very-sensitive --dovetail --no-discordant
• MAPQ >= 20 filtering of aligned read-pairs (samtools)

The manuscript text in L379 was changed to:

This is achieved using MNase-seq optimized alignment settings and proper selection of the fragment sizes corresponding to mono-nucleosomal DNA, to obtain high-resolution nucleosome profiles.

      How does the pipeline perform with di- or tri-nucleosomes, which are also biologically relevant (Kensche et al., 2016 and others)? Furthermore, the limitation to mono-nucleosomes is only mentioned in the methods, not in the results or discussion, which could mislead readers.

The pipeline is optimized for mono-nucleosome analysis. However, the cutoffs for fragment size selection can be adjusted to analyse other fragment populations in MNase-seq data (--minLen; --maxLen). For example, we know from previous studies that the settings in the pipeline can also be used for sub-nucleosome analysis (PMID: 38959309). We have not explicitly tested di- or tri-nucleosome analysis. However, in a previous study (PMID: 30496478) we observed that the inherent MNase sequence bias is more pronounced in di-nucleosomes, which are preferentially isolated from GC-rich regions. This is in line with the depletion of di-nucleosomes in AT-rich intergenic regions in Pf, as already described by Kensche et al.

Changes to the manuscript text: We included a paragraph describing this limitation in the discussion (L428-L434):

The nucDetective pipeline has been optimized for the analysis of mono-nucleosomes. However, the selection of fragment sizes can be adjusted manually, enabling the pipeline to be used for other nucleosome categories. The pipeline is also suitable for mapping and annotating sub-nucleosomal particles (PMID: 38959309).

Reference Nucleosome Numbers

      The authors identify 49,999 reference nucleosome positions. How does this compare to previous analyses of similar datasets? This should be explicitly addressed.

We thank the reviewer for this suggestion. To put our results in perspective, it is important to distinguish between reference nucleosome positions (what we reported in the manuscript) and all detectable nucleosomes. The reference positions are our attempt to build a set of nucleosome positions with strong evidence, allowing confident further analysis across timepoints. Selecting a well-positioned subset of nucleosomes for downstream analysis has been done previously (PMID: 26578577), and the merging algorithm we used across timepoints is also used by DANPOS to decide whether an MNase-seq peak is a new nucleosome position or belongs to an existing position (PMID: 23193179).
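As an illustration of what such a merging step does (not the exact DANPOS/nucDetective rule; the 100 bp merge distance below is an assumption for the example), dyad positions called at slightly different coordinates in different timepoints can be collapsed into a single reference position:

```python
import numpy as np

def merge_positions(dyads_per_timepoint, max_dist=100):
    """Greedy 1-D clustering of nucleosome dyads pooled over all timepoints.
    Dyads closer than `max_dist` bp to the current cluster centre are merged;
    the returned reference position is the mean dyad of each cluster.
    Illustration only; DANPOS applies its own merging rule."""
    pooled = np.sort(np.concatenate(dyads_per_timepoint))
    references, cluster = [], [pooled[0]]
    for d in pooled[1:]:
        if d - np.mean(cluster) <= max_dist:
            cluster.append(d)
        else:
            references.append(int(np.mean(cluster)))
            cluster = [d]
    references.append(int(np.mean(cluster)))
    return references

# toy example: the same two nucleosomes called at slightly shifted dyads in 3 timepoints
t1, t2, t3 = [1000, 1500], [1010, 1490], [995, 1520]
print(merge_positions([np.array(t) for t in (t1, t2, t3)]))  # -> [1001, 1503]
```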

To address the reviewer's suggestion, we prepared and added a table to the supplementary data, including the total number of nucleosomes detected by our pipeline at each timepoint. We adjusted the results as follows (L223-226):

“The pipeline identified a total of 127,370 ± 1,151 (mean ± SD) nucleosomes at each timepoint (Supplementary Data X). To exclude false positive positions from our analysis, we conservatively selected 49,999 reference nucleosome positions, representing sites with a well-positioned nucleosome at at least one time point (see Methods). Among these, 1,192 nucleosomes exhibited […]”

Several groups have reported nucleosome positioning data for P. falciparum (PMID: 20015349, PMID: 20054063, PMID: 24885191, PMID: 26578577); however, only Ponts et al. (2010) reported resolved numbers (~45,000-90,000 nucleosomes depending on developmental stage), and Bunnik et al. reported ~75,000 nucleosomes in a graph. Although we do not know why the other studies did not include specific numbers, we speculate that the data quality did not allow them to confidently report one. In fact, nucleosomal reads are severely depleted in AT-rich intergenic regions in the Ponts and Bunnik datasets. In contrast, Kensche et al. (and our analysis) show that nucleosomes can be identified throughout the genome of Pf. Therefore, the nucleosome numbers reported by Ponts et al. and Bunnik et al. are very likely underestimates.

      We included the following text in the discussion, addressing previously published datasets (L404 – 405):

“For example, our pipeline was able to identify a total of ~127,000 nucleosomes per timepoint (≈5.4 per kb), in the range of nucleosome densities observed in other eukaryotes (typically 5 to 6 per kb). From these, we extracted 49,999 reference nucleosome positions with strong positioning evidence across all timepoints, which we used to characterize nucleosome dynamics of Pf longitudinally. Previous studies of P. falciparum chromatin organization did not report a total number of nucleosomes (Westenberger et al. 2009, Kensche et al. 2016) or estimated approximately 45,000-90,000 nucleosomes across the genome at different developmental stages (Bunnik et al. 2014, Ponts et al. 2010). However, these values likely represent underestimates due to the depletion of nucleosomal reads in AT-rich intergenic regions observed in those datasets.”

Figure 1B and Nucleosome Spacing

      The authors claim that Figure 1B shows developmental stage-specific variation in nucleosome spacing. However, only T35 shows a visible upstream change at position 0. In A4, A6, and A8 (Figure S4), no major change is apparent. Statistical tests are needed to validate whether the observed differences are significant and should be described in the figure legends and main text.

We would like to thank the reviewer for bringing this issue to our attention and apologize for an error in the figure numbering. The differences in nucleosome spacing across time are visible in Figure 1C. Figure 1B shows the precise array structure of the Pf nucleosomes when centered on the +1 nucleosome, and is referred to earlier in the text. The mistake is now corrected.

In Figure 1C the mean NRL and the 95% confidence interval are depicted, allowing a visual assessment of significance (non-overlapping 95% confidence intervals correspond to p < 0.05). Taken together, we corrected this mistake and edited the text as follows (L194-L199):

“With this +1 nucleosome annotation, regularly spaced nucleosome arrays downstream of the TSS were detected, revealing a precise nucleosome organization in Pf (Figure 1B). Due to the high-resolution nucleosome maps, we can now observe significant variations in nucleosome spacing depending on the developmental stage (Figure 1C; ANOVA on bootstrapped values (3 per timepoint), F₇,₇₂ = 35.10, p < 0.001).”

Genome-wide Occupancy Claims

      The claim that nucleosomes are "evenly distributed throughout the genome" (Figure S2A) is questionable. Chromosomes 3 and 11 show strong peaks mid-chromosome, and chromosome 14 shows little to no signal at the ends. This should be discussed. Subtelomeric regions, such as those containing var genes, are known to have unique chromatin features. For instance, Lopez-Rubio et al. (2009) show that subtelomeric regions are enriched for H3K9me3 and HP1, correlating with gene silencing. Should these regions not display different nucleosome distributions? Do you expect the Plasmodium genome (or any genome) to have uniform nucleosome distribution?

On a global scale (>10 kb) we would expect a homogeneous distribution of nucleosomes genome-wide, regardless of euchromatin or heterochromatin. We have shown this in a previous study for human cells (PMID: 30496478), which was later confirmed for Drosophila melanogaster (PMID: 31519205, PMID: 30496478) and yeast (PMID: 39587299).

However, Figure S2A shows the distribution of the dynamic nucleosome features during the IDC, as called with our pipeline. We agree with the reviewer that there are a few exceptions to the uniform distribution, which we now address in the manuscript.

Furthermore, we agree with the reviewer that the H3K9me3/HP1 subtelomeric regions are special. These regions are depleted of dynamic nucleosomes during the IDC, as shown in Fig. 2D and now mentioned in L280-L282.

      We included an additional genome browser snapshot in Supplemental Figure S2B and changed the text accordingly (L245-249):

We observed a few exceptions to the even distribution of the nucleosomes in the centers of chromosomes 3, 11 and 12, where nucleosome occupancy changes accumulate at centromeric regions (Figure S2B). Furthermore, the ends of the chromosomes are rather depleted of dynamic nucleosome features.

Genome browser snapshot illustrating the accumulation of nucleosome occupancy changes at a centromeric site. Centered nucleosome coverage tracks (T5-T40, colored coverage tracks), nucleosome occupancy changes (yellow bar) and annotated centromeres (grey bar) taken from Hoeijmakers et al. (2012).

      Dependence on DANPOS

      The authors criticize the DANPOS pipeline for its limitations but use it extensively within nucDetective. This contradiction confuses the reader. Is nucDetective an original pipeline, or a wrapper built on existing tools?

One unique feature of the nucDetective pipeline is the identification of dynamic nucleosomes (occupancy, fuzziness, regularity, shifts) in complex experimental designs, such as time series data (Inspector workflow). To our knowledge, there is no other tool for MNase-seq data that allows multi-condition/time-series comparisons (PMID: 35061087). For example, DANPOS allows only pair-wise comparisons, which cannot be used for time-series data. The analysis of dynamic nucleosome features requires nucleosome profiles and positions at high resolution. For this purpose, several tools already exist (PMID: 35061087). However, researchers without experience in MNase-seq analysis often find the plethora of available tools overwhelming, which makes it challenging to select the most appropriate ones. Here we share our experience and provide the user with an automated workflow (Profiler), which builds on existing tools.

In summary, the Profiler workflow is a wrapper built on existing tools, while the Inspector workflow is partly a wrapper (it uses DANPOS to normalize nucleosome profiles and call nucleosome positions) and additionally implements our original algorithm for detecting dynamic nucleosome features across multiple conditions or time series.

Control Data Usage

      The authors should clarify whether gDNA controls were used throughout the analysis, as done in Kensche et al. (2016). Currently, this is mentioned only in the figure legend for Figure 5, not in the methods or results.

We used gDNA normalisation to optimize the visualization of the nucleosome-depleted region upstream of the TSS in Fig. 5A. Otherwise, we did not normalize the data by the gDNA control. The reasoning is the same as for not including a sequence normalization step in the pipeline (see comment above).

We included a paragraph describing this limitation in the discussion (L447-L457):

Depending on the degree of MNase digestion, nucleosomes from GC-rich regions are preferentially recovered in MNase-seq experiments (Schwartz et al. 2019). However, no sequence or gDNA normalisation step was included in the nucDetective pipeline. To identify dynamic nucleosomes, comparisons are performed between the same nucleosome positions at the same genomic sites across multiple samples. Hence, the sequence context is constant and does not confound the analysis. Introducing a sequence normalization step might even distort and bias the results. Nevertheless, it is highly advisable to use low MNase concentrations in chromatin digestions to reduce the sequence bias in nucleosome extractions. This turned out to be a crucial condition for obtaining a homogeneous nucleosome distribution in the AT-rich intergenic regions of eukaryotic genomes, and especially in the AT-rich genome of Pf (Schwartz et al. 2019, Kensche et al. 2016).

We added the following statement to the Methods section: Additionally, the TSS profile shown in Figure 5A was normalized by the gDNA control for better NDR visualization.

Lack of Statistical Power for Time-Series Analyses

      Although the pipeline is presented as suitable for time-series data, it lacks statistical tools to determine whether differences in nucleosome positioning or fuzziness are significant across conditions. Visual interpretation alone is insufficient. Statistical support is essential for any differential analysis.

We understand the value of statistical support in such an analysis. However, in biology we often face limitations in the sample sizes needed to accurately estimate the variance parameters required for statistical modeling. As MNase-seq experiments require a large amount of input material and high sequencing depth, the number of samples in most experiments is low, often with only two replicates (PMID: 23193179). Therefore, we decided that the nucDetective pipeline should rather be treated as a screening method to identify nucleosome features with high variance across all conditions. This prevents the misuse of p-values. A common misinterpretation we have observed is the use of non-significant p-values to conclude that no biological change exists, despite inadequate statistical power to detect such changes. We included a paragraph in the limitations section discussing the statistical limitations of MNase-seq data analysis.

Changes to the manuscript text: We included a paragraph describing this limitation in the discussion (L435-L446).

As MNase-seq experiments require a large amount of input material and high sequencing depths, most published MNase-seq experiments do not provide the sample sizes required to accurately estimate the variance parameters necessary for statistical modelling (Chen et al. 2013). Therefore, dynamic nucleosomes are not identified through statistical testing but rather by ranking nucleosome features according to their variance across all samples and applying a variance threshold to distinguish them. This concept is well established for identifying super-enhancers (Whyte et al. 2013). In this study we set the variance cutoff to a slope of 3, resulting in high-confidence calls. However, other datasets might require further adjustment of the variance cutoff, depending on data quality or sequencing depth. The nucDetective identification of dynamic nucleosomes can be seen as a screening approach that gives a holistic overview of nucleosome dynamics in the system and forms a basis for further research.
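To make the ranking-and-cutoff strategy concrete, the sketch below ranks per-nucleosome variances and applies a slope-based threshold in the spirit of super-enhancer calling. It is a conceptual illustration only; the exact rescaling and cutoff logic implemented in nucDetective may differ.

```python
import numpy as np

def select_high_variance(variances, slope_cutoff=3.0):
    """Rank features by their variance across samples and keep those beyond the
    point where the rescaled ranked curve becomes steeper than `slope_cutoff`.
    Conceptual sketch of a super-enhancer-style cutoff, not the nucDetective code."""
    variances = np.asarray(variances, dtype=float)
    order = np.argsort(variances)                       # ascending by variance
    ranked = variances[order]
    x = np.linspace(0.0, 1.0, ranked.size)              # rescale ranks to [0, 1]
    y = (ranked - ranked.min()) / (ranked.max() - ranked.min())
    slope = np.gradient(y, x)                           # numerical derivative of the curve
    steep = np.where(slope > slope_cutoff)[0]
    if steep.size == 0:
        return np.array([], dtype=int)
    return order[steep[0]:]                             # indices of the high-variance candidates

# toy usage: variance of one nucleosome feature (e.g. occupancy) across 8 timepoints
rng = np.random.default_rng(42)
feature = rng.normal(size=(50000, 8))                   # rows = nucleosomes, columns = timepoints
dynamic_idx = select_high_variance(feature.var(axis=1))
print(len(dynamic_idx), "candidate dynamic nucleosomes")
```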

      Reproducibility of Methods

The Methods section is not sufficient to reproduce the results. The GitHub repository lacks the necessary code to generate the paper's figures and focuses on an exemplary yeast dataset. The authors should either: (i) update the repository with relevant scripts and examples, (ii) clearly state the repository's purpose, or (iii) remove the link entirely. Readers must understand that nucDetective is dedicated to assessing nucleosome fuzziness, occupancy, shift, and regularity dynamics, not the downstream analyses presented in the paper.

      We thank the reviewer for this helpful comment. In addition to the main nucDetective repository, a second GitHub link is provided in the Data Availability section, which contains the scripts used to generate the figures presented in the paper. This separation was intentional to distinguish the general-purpose nucDetective tool from the project-specific analyses performed for this study. We acknowledge that this may not have been sufficiently clear.

To have all resources available at a single, citable, permanent location, we included a link to the corresponding Zenodo repository (https://doi.org/10.5281/zenodo.16779899) in the Data and materials availability statement.

      The Zenodo repository contains:

      Code (scripts.zip) and annotation of Plasmodium falciparum (Annotation.zip) to reproduce the nucDetective v1.1 (nucDetective-1.1.zip) analysis as done in the research manuscript entitled "Deciphering chromatin architecture and dynamics in Plasmodium falciparum using the nucDetective pipeline".

The folder "output_nucDetective" contains the complete output of the nucDetective analysis pipeline as generated by the "01_nucDetective_profiler.sh" and "02_nucDetective_inspector.sh" scripts.

Nucleosome coverage tracks, the annotation of nucleosome positions and dynamic nucleosomes are additionally deposited in the folder "Pf_nucleosome_annotation_of_nucDetective".

To make this clearer, we added the following text to Material and Methods, in the "The nucDetective pipeline" section:

      Changes in the manuscript text (L518-519):

      The code, software and annotations used to run the nucDetective pipeline along with the output have been deposited on Zenodo (https://doi.org/10.5281/zenodo.16779899).

Supplementary Tables

      Including supplementary tables showing pipeline outputs (e.g., nucleosome scores, heatmaps, TSS extraction) would help readers understand the input-output structure and support figure interpretations.

      See comments above.

      We included a link to the corresponding Zenodo repository (https://doi.org/10.5281/zenodo.16779899) in the Data and materials availability statement.

      The repository contains:

      Code (scripts.zip) and annotation of Plasmodium falciparum (Annotation.zip) to reproduce the nucDetective v1.1 (nucDetective-1.1.zip) analysis as done in the research manuscript entitled "Deciphering chromatin architecture and dynamics in Plasmodium falciparum using the nucDetective pipeline".

The folder "output_nucDetective" contains the complete output of the nucDetective analysis pipeline as generated by the "01_nucDetective_profiler.sh" and "02_nucDetective_inspector.sh" scripts.

      Minor Comments:

      The authors should moderate claims such as "no studies have reported a well-positioned +1 nucleosome" in P. falciparum, as this contradicts existing literature. Similarly, avoid statements like "poorly understood chromatin architecture of Pf," which undervalue extensive prior work (e.g., discovery of histone lactylation in Plasmodium, Merrick et al., 2023).

We would like to clarify that we neither wrote that “no studies have reported a well-positioned +1 nucleosome” in P. falciparum nor intended to imply such a thing. However, we acknowledge that our original wording may have been unclear. To address this, we have revised the manuscript to explicitly acknowledge prior studies on chromatin organization and to highlight our contribution.

      In the Abstract L26-L30: Contrary to the current view of irregular chromatin, we demonstrate for the first time regular phased nucleosome arrays downstream of TSSs, which, together with the established +1 nucleosome and upstream nucleosome-depleted region, reveal a complete canonical eukaryotic promoter architecture in Pf.

Introduction L156-L159: For example, we identify a phased nucleosome array downstream of the TSS, together with a well-positioned +1 nucleosome and an upstream nucleosome-free region. These findings support a promoter architecture in Pf that resembles classical eukaryotic promoters (Bunnik et al. 2014, Kensche et al. 2016).

Results L180-L183: These new Pf nucleosome maps reveal a nucleosome organisation at transcription start sites (TSS) reminiscent of the general eukaryotic chromatin structure, featuring a reported well-positioned +1 nucleosome, an upstream nucleosome-free region (NFR; Bunnik et al. 2014, Kensche et al. 2016), and, shown for the first time in Pf, a phased nucleosome array downstream of the TSS.

Discussion L412-L421: Previous analyses of Pf chromatin have identified +1 nucleosomes and NFRs (Bunnik et al. 2014, Kensche et al. 2016). Here we extend this understanding by demonstrating phased nucleosome array structures throughout the genome. This finding provides evidence for a spatial regulation of nucleosome positioning in Pf, challenging the notion that nucleosome positioning is relatively random in gene bodies (Bunnik et al. 2014, Kensche et al. 2016). Consequently, our results contribute to the understanding that Pf exhibits a typical eukaryotic chromatin structure, including well-defined nucleosome positioning at the TSS and regularly spaced nucleosome arrays (Schones et al. 2008; Yuan et al. 2005).

The phrase “poorly understood chromatin architecture” has been changed to “underexplored chromatin architecture” to more accurately reflect the potential for further analyses and contributions to the field, while avoiding any impression that we undervalue previous work.

      Track labels in figures (e.g., Figure 5B) are too small to be legible.

We have increased the size of the labels.

      Several figures (e.g., Figure 5B, S4B) lack statistical significance tests. Are the differences marked with stars statistically significant or just visually different?

      We added statistics to S4B.

Differences in Fig. 5B were identified by visual inspection. To clarify this, we replaced the asterisks with arrows in Fig. 5B and changed the text in the legend:

      Arrows mark descriptive visual differences in nucleosome occupancy.

      Figure S3 includes a small black line on top of the table. Is this an accidental crop?

We checked the figure carefully; however, the black line does not appear in our PDF viewer or on the printed page.

      The authors should state the weaknesses and limitations of this pipeline.

We added a limitations section to the discussion; see comments above.

      Reviewer #1 (Significance (Required)):

The proposed pipeline is useful and timely. It can benefit research groups willing to analyse MNase-seq data of complex genomes such as P. falciparum. The tool requires users to have extensive experience in coding, as the authors didn't include clear and explicit code on how to start processing the data from raw files. Nevertheless, there are multiple tools that can detect nucleosome occupancy that are not cited or mentioned by the authors. I have included for the authors a link to a large list of tools/pipelines developed for the analysis of nucleosome positioning experiments (Software to analyse nucleosome positioning experiments - Gene Regulation - Teif Lab). I think it would be useful for the authors to reference this.

      We appreciate the reviewer’s valuable suggestion. We included a citation to the comprehensive database of nucleosome analysis tools curated by the Teif lab (Shtumpf et al., 2022). We chose to reference only selected tools in addition to this resource rather than listing all individual tools to maintain clarity and avoid overloading the manuscript with numerous citations.

Although valid, I still believe that controlling their pipeline by filtering out false positives and including more QC steps at the Inspector stage is strongly needed. That would boost the significance of this pipeline.

      We thank the reviewer for the assessment of our study and for recognizing that our MNase-seq analysis pipeline nucDetective can be a useful tool for the chromatin community utilizing MNase-Seq in complex settings.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      In this manuscript, Holzinger and colleagues have developed a new pipeline to assess chromatin organization in linear space and time. They used this pipeline to reevaluate nucleosome organization in the malaria parasite, P. falciparum. Their analysis revealed typical arrangement of nucleosomes around the transcriptional start site. Furthermore, it further strengthened and refined the connection between specific nucleosome dynamics and epigenetic marks, transcription factor binding sites or transcriptional activity.

      Major comments

      • I am wondering what is the main selling point of this manuscript is. If it is the development of the nucDetective pipeline, perhaps it would be best to first benchmark it and directly compare it to existing tools on a dataset where nucleosome fussiness, shifting and regularity has been analyzed before. If on the other hand, new insights into Plasmodium chromatin biology is the primary target validation of some of the novel findings would be advantageous (e.g. refinement of TSS positions, relevance of novel motifs, etc).

nucDetective is a novel pipeline for identifying dynamic nucleosome properties across different datasets, such as time series or developmental stages, as analysed here for the erythrocytic cycle. As no comparable pipeline allowing such direct comparisons exists for MNase-seq data, we used the existing analysis and high-quality dataset of Kensche et al. to visualize the strong improvements this kind of analysis brings. Accordingly, we combined pipeline development with research into chromatin structure, allowing us to showcase the utility of this new pipeline.

• The authors identify a strong positioning of the +1 nucleosome by searching for a positioned nucleosome in the vicinity of the assigned TSS. Given the ill-defined nature of TSSs, this approach sounds logical at first glance. However, given the rather broad search space from -100 till +300 bp, I am wondering whether it is a sort of "self-fulfilling prophecy". Conversely, it would be good to validate that this approach indeed helps to refine TSS positions.

      We thank the reviewer for raising this important point. We would like to clarify that we do not claim to redefine or precisely determine TSS positions in our study. Instead, we use annotated TSS coordinates as a reference to identify nucleosomes that correspond to the +1 nucleosome, based on their proximity to the TSS.

We selected the search window from -100 to +300 bp to account for the known variability in Pf TSS annotation. For example, dominant transcription start sites identified by 5'UTR-seq tag clusters can differ by several hundred base pairs within a single time point (Chappell et al., 2020). The broad window thus allows us to capture the principal nucleosome positions near a TSS, even when the TSS itself is imprecise or heterogeneous. Based on the TSS-centered plots (Figure 2C and Figure S1B), we reasoned that a window of -100 to +300 bp is sufficient to capture the majority of +1 nucleosomes, many of which would have been missed with smaller windows. This strategy aligns with well-established conventions in yeast chromatin biology, where the +1 nucleosome is defined relative to the TSS (Jiang and Pugh, 2009; Zhang et al. 2011) and is commonly used as an anchor point to visualize downstream phased nucleosome arrays and upstream nucleosome-depleted regions (Rossi et al., 2021; Oberbeckmann et al., 2019; Krietenstein et al., 2016, and many more). Accordingly, our approach leverages these accepted standards to interpret nucleosome positioning without re-defining TSS annotations.
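As a conceptual sketch of this assignment (not the exact rule used in the manuscript; strand handling and tie-breaking are simplified), one can take, for each annotated TSS, the called nucleosome dyad that falls inside the -100 to +300 bp window and lies closest to the TSS:

```python
import numpy as np

def assign_plus_one(tss_positions, dyads, upstream=100, downstream=300):
    """For each TSS, return the index of the closest called nucleosome dyad within
    the -100/+300 bp window, or -1 if no dyad falls into the window."""
    dyads = np.sort(np.asarray(dyads))
    assigned = []
    for tss in tss_positions:
        in_window = np.where((dyads >= tss - upstream) & (dyads <= tss + downstream))[0]
        if in_window.size == 0:
            assigned.append(-1)                      # no candidate +1 nucleosome
        else:
            closest = in_window[np.argmin(np.abs(dyads[in_window] - tss))]
            assigned.append(int(closest))
    return assigned

# toy example: two plus-strand TSSs and four called dyads
print(assign_plus_one([1000, 5000], [850, 1120, 1290, 5400]))  # -> [1, -1]
```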

• Figure 1C: I am wondering how the reader should interpret the changes in nucleosomal repeat length throughout the cycle. Is linker DNA on average 10 nucleotides shorter at T30 compared to the T5 timepoint? If so, how could such a "dramatic reorganization" be achieved at the molecular level in the absence of a known linker DNA-binding protein? More importantly, is this observation supported by additional evidence (e.g. dinucleosomal fragment length) or could it be due to slightly different digestion of the chromatin at the different stages or other technical variables?

We thank the reviewer for this insightful question regarding the interpretation of NRL changes across the cell cycle. The reviewer's interpretation is correct: linker DNA is on average ~10 bp shorter at T30 than at T5.

To address the concerns about additional evidence and potential MNase digestion variability, we have now analyzed MNase-seq fragment sizes after shifting the mono-nucleosome peak of each time point to the canonical 147 bp length, to correct for differences in MNase digestion. After this normalisation, the di-nucleosome fragment length distributions revealed the shortest linker lengths at T30 and T35, whereas T5 and T10 showed longer DNA linkers. These results confirm our previous NRL measurements based on mono-nucleosomal read distances while controlling for MNase digestion bias.
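A rough sketch of this correction is shown below, under the simplifying assumptions that the mono- and di-nucleosome peaks can be located as histogram modes within fixed size ranges and that the linker length equals the di-nucleosome peak minus 2 x 147 bp; the actual analysis may differ in detail.

```python
import numpy as np

def estimate_linker(fragment_sizes, mono_range=(120, 200), di_range=(250, 450)):
    """Estimate the average linker length from MNase-seq fragment sizes after
    re-centring the mono-nucleosome peak on the canonical 147 bp.
    Simplified sketch: peaks are taken as histogram modes within fixed ranges."""
    sizes = np.asarray(fragment_sizes)

    def mode_in(lo, hi, values):
        window = values[(values >= lo) & (values <= hi)].astype(int)
        return np.bincount(window).argmax()

    mono_peak = mode_in(*mono_range, sizes)
    shifted = sizes + (147 - mono_peak)          # correct for digestion differences
    di_peak = mode_in(*di_range, shifted)
    return di_peak - 2 * 147                     # di-nucleosome = 2 x 147 bp + 1 linker

# toy usage: one timepoint digested such that the mono peak sits at ~155 bp
rng = np.random.default_rng(0)
mono = rng.normal(155, 10, 50000)                       # mono-nucleosomal fragments
di = rng.normal(2 * 147 + 10 + 8, 20, 5000)             # ~10 bp linker plus the same 8 bp offset
print(estimate_linker(np.concatenate([mono, di])))      # roughly 10
```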

The molecular basis of this reorganization is still unclear. While linker histone H1 is considered absent in Plasmodium falciparum, the presence of an uncharacterized linker DNA-binding protein or alternative factors fulfilling a similar role cannot be excluded (Gill et al. 2010). However, the absence of H1 across all developmental stages fails to explain stage-specific chromatin changes. We hypothesize that Apicomplexans evolved specialized chromatin remodelers to compensate for the missing H1, which may also drive the dynamic NRL changes observed. The fact that the low NRL coincides with high transcriptional activity in Pf during the trophozoite stage is consistent with previous reports linking elevated transcription to reduced NRL in other eukaryotes (Baldi et al. 2018). In addition, the schizont stage involves multiple rounds of DNA replication, requiring large amounts of histones to be produced during that time. It may well be that high levels of histone synthesis and DNA amplification result in a short time period with increased nucleosome density and shorter NRL, until the system reaches equilibrium again (Beshnova et al. 2014). Although speculative, we suggest a model wherein increased transcription promotes elevated nucleosome turnover and re-assembly by specialized remodeling enzymes, which, combined with a high abundance of histones, results in higher nucleosome density and decreased NRL. Unfortunately, absolute quantification of nucleosome levels from this MNase-seq dataset is not possible without spike-in controls, which makes it infeasible to test this hypothesis with the available dataset (Chen et al. 2016).

      Minor comments

• I am wondering whether fuzziness and occupancy changes are truly independent categories. I ask because both could lead to a reduction of the signal at the nucleosome dyad, and because they show markedly similar distributions in relation to the TSS and associate with identical epigenetic features (Figure 2B-D). Figure 2A indicates minimal overlap between them, but this could be because the criteria used to define these subtypes are set up to place nucleosomes in one category or the other, when in the end they may represent two flavors of the same thing.

Indeed, changes in occupancy and fuzziness can appear related, because both features may reduce signal intensity at the nucleosome dyad and both are connected to "poor nucleosome positioning". However, their definitions and measurements are clearly distinct and technically independent. Occupancy reflects the peak height at the nucleosome dyad, while fuzziness quantifies the spread of reads around the peak, measured as the standard deviation of read positions within each nucleosome peak (Jiang and Pugh, 2009; Chen et al., 2013). Although a reduction in occupancy can contribute to increased fuzziness by diminishing the dyad axis signal, fuzziness primarily arises from increased variability in the flanking regions around the nucleosome position center. While this distinction is established in the field, it is often blurred by the concept of well-positioned (high occupancy, low fuzziness) and poorly positioned (high fuzziness, low occupancy) nucleosomes, in which both features are considered together.
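To illustrate how the two measures can dissociate, the toy sketch below computes both from simulated read-center positions; it only mirrors the definitions cited above and is not the DANPOS implementation.

```python
import numpy as np

def occupancy_and_fuzziness(read_centers, dyad, halfwidth=73):
    """Illustration only: occupancy as the number of read centers around the dyad,
    fuzziness as the standard deviation of read-center positions within the peak.
    DANPOS computes these measures on smoothed profiles; details differ."""
    centers = np.asarray(read_centers)
    in_peak = centers[np.abs(centers - dyad) <= halfwidth]
    occupancy = in_peak.size                                  # proxy for signal height at the dyad
    fuzziness = in_peak.std() if in_peak.size > 1 else 0.0    # positional spread of the reads
    return occupancy, fuzziness

# a well-positioned nucleosome: reads tightly clustered around the dyad
well = np.random.default_rng(1).normal(1000, 5, size=200)
# a fuzzy nucleosome: a similar number of reads, but widely spread centers
fuzzy = np.random.default_rng(2).normal(1000, 40, size=200)
print(occupancy_and_fuzziness(well, 1000))    # high occupancy, low fuzziness
print(occupancy_and_fuzziness(fuzzy, 1000))   # similar occupancy, much higher fuzziness
```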

      • Do the authors detect spatial relationship between fuzzy and repositioned/evicted nucleosomes at the level of individual nucleosomes pairs. With other words, can fuzziness be the consequence of repositioning/eviction of the neighboring nucleosome?

In Figure 2A we analyse the spatial overlap of all features with each other. The analysis clearly shows that fuzziness, occupancy changes and position changes occur mostly at distinct spatial sites (overlaps between 3 and 10%, Fig. 2A). Therefore, we suggest that the features correspond to independent processes. Likewise, we do observe an overlap between occupancy changes and ATAC-seq peaks, but not for nucleosome position shifts, clearly discriminating the different processes.

      • Figure 4: enrichment values and measure of statistical significance for the different motifs are missing. Also have there been any other motifs identified.

This information is provided in Supplemental Figure S3, where we show the top 3 hits in each cluster. In the figure legend of Figure 4 we refer to Fig. S3:

L1054-L1055:

      “Additional enriched motifs along with the significance of motif enrichment and the fraction of motifs at the respective nucleosome positions are shown in Figure S3”

• The M&M would benefit from some more details, e.g. the settings used in the pipeline, or which fragment sizes were used to map the MNase-seq data?

      We included a link to the corresponding Zenodo repository (https://doi.org/10.5281/zenodo.16779899) in the Data and materials availability statement.

      The repository contains:

      Code (scripts.zip) and annotation of Plasmodium falciparum (Annotation.zip) to reproduce the nucDetective v1.1 (nucDetective-1.1.zip) analysis as done in the research manuscript entitled "Deciphering chromatin architecture and dynamics in Plasmodium falciparum using the nucDetective pipeline".

The folder "output_nucDetective" contains the complete output of the nucDetective analysis pipeline as generated by the "01_nucDetective_profiler.sh" and "02_nucDetective_inspector.sh" scripts.

Nucleosome coverage tracks, the annotation of nucleosome positions and dynamic nucleosomes are additionally deposited in the folder "Pf_nucleosome_annotation_of_nucDetective".

To make this clearer, we added the following text to Material and Methods, in the "The nucDetective pipeline" section:

      Changes in the manuscript (L518-519):

      The code, software and annotations used to run the nucDetective pipeline along with the output have been deposited on Zenodo (https://doi.org/10.5281/zenodo.16779899).

      which fragment sizes were used to map the MNase-seq data?

The default setting in nucDetective is to use fragment sizes of 140-200 bp, which corresponds to the main mono-nucleosome fraction in standard MNase-seq experiments. However, the correct selection of fragment sizes may vary depending on the organism and on variations in MNase-seq protocols. Therefore, the pipeline offers the option of changing the cutoff parameters (--minLen; --maxLen) accordingly. Kensche et al. thoroughly tested the best selection of fragment sizes for the dataset used in this manuscript. We agree with their selection and used the same cutoffs (75-175 bp).

This is stated in lines 535-536:

      The fragments are further filtered to mono-nucleosome sized fragments (here we used 75 – 175 bp)

      We changed the text:

The fragments are further filtered to mono-nucleosome-sized fragments (default setting 140-200 bp; changed in this study to 75-175 bp).

We highlighted the other parameters used in this study in the Materials and Methods section.

      Reviewer #2 (Significance (Required)):

Overall, the manuscript is well written and the findings are clearly and elegantly presented. The manuscript describes a new pipeline to map and analyze MNase-seq data across different stages or conditions, though the broader applicability of the pipeline and its advancements over existing tools could be better demonstrated. Importantly, the manuscript makes use of this pipeline to provide a refined and likely more accurate view on (the dynamics of) nucleosome positioning over the AT-rich genome of P. falciparum. While these observations make sense, they remain rather descriptive/associative and lack further experimental validation. Overall, this manuscript could be of interest to both researchers working on chromatin biology and those working on Plasmodium gene regulation.

      We thank the reviewer for the assessment of our study and for recognizing that the results of our MNase-seq analysis pipeline nucDetective contribute to a better understanding of Pf chromatin biology.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

The manuscript "Deciphering chromatin architecture and dynamics in Plasmodium falciparum using the nucDetective pipeline" describes a computational analysis of previously published data on P. falciparum chromatin. This work corrects the prevailing view that this parasitic organism has an unusually disorganized chromatin organization, which had been attributed to its high genomic AT content, lack of histone H1, and ancient derivation. The authors show that P. falciparum instead has a very typical chromatin organization. Part of the refinement is due to aligning data on +1 nucleosome positions instead of TSSs, which have been poorly mapped. The computational tools corral some useful features for querying epigenomic structure that make visualization straightforward, especially for fuzzy nucleosomes.

      Reviewer #3 (Significance (Required)):

      As a computational package this is a nice presentation of fairly central questions. The assessment and display of fuzzy nucleosomes is a nice feature.

      We thank the reviewer for the assessment of our study and are pleased that the reviewer acknowledges the value and usability of our pipeline.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

The manuscript "Deciphering chromatin architecture and dynamics in Plasmodium falciparum using the nucDetective pipeline" describes a computational analysis of previously published data on P. falciparum chromatin. This work corrects the prevailing view that this parasitic organism has an unusually disorganized chromatin organization, which had been attributed to its high genomic AT content, lack of histone H1, and ancient derivation. The authors show that P. falciparum instead has a very typical chromatin organization. Part of the refinement is due to aligning data on +1 nucleosome positions instead of TSSs, which have been poorly mapped. The computational tools corral some useful features for querying epigenomic structure that make visualization straightforward, especially for fuzzy nucleosomes.

      Significance

      As a computational package this is a nice presentation of fairly central questions. The assessment and display of fuzzy nucleosomes is a nice feature.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      Summary:

      Holzinger et al. present a new automated pipeline, nucDetective, designed to provide accurate nucleosome positioning, fuzziness, and regularity from MNase-seq data. The pipeline is structured around two main workflows-Profiler and Inspector-and can also be applied to time-series datasets. To demonstrate its utility, the authors re-analyzed a Plasmodium falciparum MNase-seq time-series dataset (Kensche et al., 2016), aiming to show that nucDetective can reliably characterize nucleosomes in challenging AT-rich genomes. By integrating additional datasets (ATAC-seq, RNA-seq, ChIP-seq), they argue that the nucleosome positioning results from their pipeline have biological relevance.


      Major Comments:

      Despite being a useful pipeline, the authors draw conclusions directly from the pipeline's output without integrating necessary quality controls. Some claims either contradict existing literature or rely on misinterpretation or insufficient statistical support. In some instances, the pipeline output does not align with known aspects of Plasmodium biology. I outline below the key concerns and suggested improvements to strengthen the manuscript and validate the pipeline:

• Clarification of +1 Nucleosome Positioning in P. falciparum The authors should acknowledge that +1 nucleosomes have been previously reported in P. falciparum. For example, Kensche et al. (2016) used MNase-seq to map ~2,278 TSSs (based on enriched 5′-end RNA data) and found that the +1 nucleosome is positioned directly over the TSS in most genes: "Analysis of 2278 start sites uncovered positioning of a +1 nucleosome right over the TSS in almost all analysed regions" (Figure 3A). They also described a nucleosome-depleted region (NDR) upstream of the TSS, which varies in size, while the +1 nucleosome frequently overlaps the TSS. The authors should nuance their claims accordingly. Nevertheless, I do agree that the +1 positioning in P. falciparum may be fuzzier as compared to yeast or mammals. Moreover, the correlation between +1 nucleosome occupancy and gene expression is often weak, and several genes show similar nucleosome profiles regardless of expression level. This raises my question: did the authors observe any of these patterns in their new data?
• Lack of Quality Control in the Pipeline The authors claim (lines 152-153) that QC is performed at every stage, but this is not supported by the implementation. On the GitHub page (GitHub - uschwartz/nucDetective), QC steps are only marked at the Profiler stage using standard tools (FastQC, MultiQC). The Inspector stage, which is crucial for validating nucleosome detection, lacks QC entirely. The authors should implement additional steps to assess the quality of nucleosome calls. For example, how are false positives managed? ROC curves should be used to evaluate true positive vs. false positive rates when defining dynamic nucleosomes. How are sequencing biases addressed?
      • Use of Mono-nucleosomes Only The authors re-analyze the Kensche et al. (2016) dataset using only mono-nucleosomes and claim improved nucleosome profiles, including identification of tandem arrays previously unreported in P. falciparum. Two key issues arise:
      • Is the apparent improvement due simply to focusing on mono-nucleosomes (as implied in lines 342-346)?
      • How does the pipeline perform with di- or tri-nucleosomes, which are also biologically relevant (Kensche et al., 2016 and others)? Furthermore, the limitation to mono-nucleosomes is only mentioned in the methods, not in the results or discussion, which could mislead readers.
      • Reference Nucleosome Numbers The authors identify 49,999 reference nucleosome positions. How does this compare to previous analyses of similar datasets? This should be explicitly addressed.
      • Figure 1B and Nucleosome Spacing The authors claim that Figure 1B shows developmental stage-specific variation in nucleosome spacing. However, only T35 shows a visible upstream change at position 0. In A4, A6, and A8 (Figure S4), no major change is apparent. Statistical tests are needed to validate whether the observed differences are significant and should be described in the figure legends and main text.
      • Genome-wide Occupancy Claims The claim that nucleosomes are "evenly distributed throughout the genome" (Figure S2A) is questionable. Chromosomes 3 and 11 show strong peaks mid-chromosome, and chromosome 14 shows little to no signal at the ends. This should be discussed. Subtelomeric regions, such as those containing var genes, are known to have unique chromatin features. For instance, Lopez-Rubio et al. (2009) show that subtelomeric regions are enriched for H3K9me3 and HP1, correlating with gene silencing. Should these regions not display different nucleosome distributions? Do you expect the Plasmodium genome (or any genome) to have uniform nucleosome distribution?
      • Dependence on DANPOS The authors criticize the DANPOS pipeline for its limitations but use it extensively within nucDetective. This contradiction confuses the reader. Is nucDetective an original pipeline, or a wrapper built on existing tools?
      • Control Data Usage The authors should clarify whether gDNA controls were used throughout the analysis, as done in Kensche et al. (2016). Currently, this is mentioned only in the figure legend for Figure 5, not in the methods or results.
      • Lack of Statistical Power for Time-Series Analyses Although the pipeline is presented as suitable for time-series data, it lacks statistical tools to determine whether differences in nucleosome positioning or fuzziness are significant across conditions. Visual interpretation alone is insufficient. Statistical support is essential for any differential analysis.
      • Reproducibility of Methods The Methods section is not sufficient to reproduce the results. The GitHub repository lacks the necessary code to generate the paper's figures and focuses on an exemplary yeast dataset. The authors should either:
        • Update the repository with relevant scripts and examples,
        • Clearly state the repository's purpose, or
• Remove the link entirely. Readers must understand that nucDetective is dedicated to assessing nucleosome fuzziness, occupancy, shift, and regularity dynamics, not the downstream analyses presented in the paper.
      • Supplementary Tables Including supplementary tables showing pipeline outputs (e.g., nucleosome scores, heatmaps, TSS extraction) would help readers understand the input-output structure and support figure interpretations.

      Minor Comments:

      • The authors should moderate claims such as "no studies have reported a well-positioned +1 nucleosome" in P. falciparum, as this contradicts existing literature. Similarly, avoid statements like "poorly understood chromatin architecture of Pf," which undervalue extensive prior work (e.g., discovery of histone lactylation in Plasmodium, Merrick et al., 2023).
      • Track labels in figures (e.g., Figure 5B) are too small to be legible.
      • Several figures (e.g., Figure 5B, S4B) lack statistical significance tests. Are the differences marked with stars statistically significant or just visually different?
      • Figure S3 includes a small black line on top of the table. Is this an accidental crop?
      • The authors should state the weaknesses and limitations of this pipeline.

      Significance

• The proposed pipeline is useful and timely. It can benefit research groups willing to analyse MNase-seq data of complex genomes such as P. falciparum. The tool requires users to have extensive experience in coding, as the authors didn't include clear and explicit code on how to start processing the data from raw files. Nevertheless, there are multiple tools that can detect nucleosome occupancy that are not cited or mentioned by the authors. I have included for the authors a link to a large list of tools/pipelines developed for the analysis of nucleosome positioning experiments (Software to analyse nucleosome positioning experiments - Gene Regulation - Teif Lab). I think it would be useful for the authors to reference this.
• Although valid, I still believe that controlling their pipeline by filtering out false positives and including more QC steps at the Inspector stage is strongly needed. That would boost the significance of this pipeline.
    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

This manuscript reports a dual-task experiment intended to test whether language prediction relies on executive resources, using surprisal-based measures of predictability and an n-back task to manipulate cognitive load. While the study addresses a question under debate, the current design and modeling framework fall short of supporting the central claims. Key components of cognitive load, such as task switching and word prediction vs. integration, are not adequately modeled. Moreover, the weak consistency in replication undermines the robustness of the reported findings. Each point is unpacked below.

Cognitive load is a broad term. In the present study, it can be decomposed into at least the following components:

      (1)  Working memory (WM) load: news, color, and rank. 

      (2)  Task switching load: domain of attention (color vs semantics), sensorimotor rules (c/m vs space).

      (3)  Word comprehension load (hypothesized against): prediction, integration. 

The components of task-switching load should be directly included in the statistical models. Switching of sensorimotor rules may be captured by the "n-back reaction" (binary) predictor. However, the switching of attended domains and the interaction between domain switching and rule complexity (1-back or 2-back) were not included. The attention control experiment (1) avoided useful statistical variation from the Read Only task, and (2) did not address interactions. More fundamentally, task-switching components should be directly modeled in both the performance and full RT models to minimize selection bias. This principle also applies to other confounding factors, such as education level. While missing these important predictors, the current models have an abundance of predictors that are not so well motivated (see later comments). In sum, with the current models one cannot determine whether the reduced performance or prolonged RT was due to effects on word prediction load (if it exists) or merely on task-switching load.

      The entropy and surprisal measures need to be more clearly interpreted and modeled in the context of the word comprehension process. Entropy concerns the "prediction" part of word comprehension (before seeing the next word), whereas surprisal concerns the "integration" part, as a posterior. This interpretation is similar to the authors' statement in the Introduction that "Graded language predictions necessitate the active generation of hypotheses on upcoming words as well as the integration of prediction errors to inform future predictions [1,5]." However, the Results of this study largely ignore entropy (treating it as a fixed effect) and focus only on surprisal, without clear justification. 
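      For reference, the two quantities the reviewer distinguishes can be written out explicitly (these are standard information-theoretic definitions; the exact base and context length used in the manuscript are not restated here): entropy is computed over the model's next-word distribution before word $w_t$ is observed, surprisal from the probability of the word actually encountered,

      $$H_t = -\sum_{w \in V} P(w \mid w_{<t}) \log_2 P(w \mid w_{<t}), \qquad S(w_t) = -\log_2 P(w_t \mid w_{<t}),$$

      where $V$ is the model's vocabulary and $w_{<t}$ the preceding context.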

      In Table S3, with original and replicated model fitting results, the only consistent interaction is surprisal x age x cognitive load [2-back vs. Reading Only]. None of the two-way interactions can be replicated. This is puzzling and undermines the robustness of the main claims of this paper. 

      Reviewer #2 (Public review):

      Summary

      This paper considers the effects of cognitive load (using an n-back task related to font color), predictability, and age on reading times in two experiments. There were main effects of all predictors, but more interesting effects of load and age on predictability. The effect of load is very interesting, but the manipulation of age is problematic, because we don't know what is predictable for different participants (in relation to their age). There are some theoretical concerns about prediction and predictability, and a need to address literature (reading time, visual world, ERP studies). 

      Strengths/weaknesses 

      It is important to be clear that predictability is not the same as prediction. A predictable word is processed faster than an unpredictable word (something that has been known since the 1970/80s), e.g., Rayner, Schwanenfluegel, etc. But this could be due to ease of integration. I think this issue can probably be dealt with by careful writing (see point on line 18 below). To be clear, I do not believe that the effects reported here are due to integration alone (i.e., that nothing happens before the target word), but the evidence for this claim must come from actual demonstrations of prediction. 

      The effect of load on the effects of predictability is very interesting (and, I note, the fairly novel way of assessing load is itself valuable). Assuming that the experiments do measure prediction, this suggests that predictions are not cost-free, as is sometimes assumed. I think the researchers need to look closely at the visual world literature, most particularly the work of Huettig. (There is an isolated reference to Ito et al., but this is one of a large and highly relevant set of papers.) 

      There is a major concern about the effects of age. See the Results (161-5): this depends on what is meant by word predictability. It's correct if it means the predictability in the corpus. But it may or may not be correct if it refers to how predictable a word is to an individual participant. The texts are unlikely to be equally predictable to different participants, and in particular to younger vs. older participants, because of their different experiences. To put it informally, the newspaper articles may be more geared to the expectations of younger people. But there is also another problem: the LLM may have learned on the basis of language that has largely been produced by young people, and so its predictions are based on what young people are likely to say. Both of these possibilities strike me as extremely likely. So it may be that older adults are affected more by words that they find surprising, but it is also possible that the texts are not what they expect, or the LLM predictions from the text are not the ones that they would make. In sum, I am not convinced that the authors can say anything about the effects of age unless they can determine what is predictable for different ages of participants. I suspect that this failure to control is an endemic problem in the literature on aging and language processing and needs to be systematically addressed. 

      Overall, I think the paper makes enough of a contribution with respect to load to be useful to the literature. But for discussion of age, we would need something like evidence of how younger and older adults would complete these texts (on a word-by-word basis) and that they were equally predictable for different ages. I assume there are ways to get LLMs to emulate different participant groups, but I doubt that we could be confident about their accuracy without a lot of testing. But without something like this, I think making claims about age would be quite misleading. 

      We thank both reviewers for their constructive feedback and for highlighting areas where our theoretical framing and analyses could be clarified and strengthened. We have carefully considered each of the points raised and made substantial additions and revisions.

      In summary, we have directly addressed the reviewers' concerns by incorporating task-switching predictors into the statistical models, complementing our focus on surprisal with a full analysis and interpretation of entropy, clarifying the robustness (and limitations) of the replicated findings, and addressing potential limitations in our Discussion.

      We believe these revisions substantially strengthen the manuscript and improve the reading flow, while also clarifying the scope of our conclusions. We now illustrate these changes in more detail:

      (1) Cognitive load and task-switching components.

      We agree that cognitive load is a multifaceted construct, particularly since our secondary task broadly targets executive functioning. In response to Reviewer 1, we therefore examined task-switching demands more closely by adding the interaction term n-back reaction × cognitive load to a model restricted to 1-back and 2-back Dual Task blocks (as there were no n-back reactions in the Reading Only condition). This analysis showed significantly longer reading times in the 2-back than in the 1-back condition, both for trials with and without an n-back reaction. Interestingly, the difference between reaction and no-reaction trials was smaller in the 2-back condition (β = -0.132, t(188066.09) = -34.269, p < 0.001), which may simply reflect the general increase in reading time for all trials so that the effect of the button press time decreases in comparison to the 1-back. In that sense, these findings are not unexpected and largely mirror the main effect of cognitive load. Crucially, however, the three-way interaction of cognitive load, age, and surprisal remained robust (β = 0.00004, t(188198.86) = 3.540, p < 0.001), indicating that our effects cannot be explained by differences in task-switching costs across load conditions. To maintain a streamlined presentation, we opted not to include this supplementary analysis in the manuscript.
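      As an illustration only (the authors do not state their software here; lme4 in R would be a common alternative), a model of the kind described above could be specified in Python with statsmodels roughly as follows. All column names (subject, load, nback_reaction, surprisal, age_c, log_rt, log_rt_prev) and the input file are assumptions, not the authors' actual variables.

```python
# Sketch: mixed model restricted to the 1-back and 2-back Dual Task blocks,
# with an n-back reaction x cognitive load term alongside surprisal x age x load.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reading_times.csv")             # hypothetical trial-level data
dual = df[df["load"].isin(["1-back", "2-back"])]  # drop the Reading Only baseline

model = smf.mixedlm(
    "log_rt ~ nback_reaction * C(load, Treatment('1-back'))"
    " + surprisal * age_c * C(load, Treatment('1-back'))"
    " + log_rt_prev",
    data=dual,
    groups=dual["subject"],      # by-subject random intercepts
    re_formula="~surprisal",     # plus a by-subject random slope for surprisal
)
print(model.fit().summary())
```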

      (2) Entropy analyses.

      Reviewer 1 pointed out that our initial manuscript placed more emphasis on surprisal. In the revised manuscript, we now report a full set of entropy analyses in the supplementary material. In brief, these analyses show that participants generally benefit from lower entropy across cognitive load conditions, with one notable exception: young adults in the Reading Only condition, where higher entropy was associated with faster reading times. We have added these results to the manuscript to provide a more complete picture of the prediction versus integration distinction highlighted in the review (see sections “Control Analysis: Disentangling the Effect of Cognitive Load on Pre- and Post-Stimulus Predictive Processing” in the Methods and “Disentangling the Effect of Cognitive Load on Pre- and Post-Stimulus Predictive Processing” in the Results).

      (3) Replication consistency.

      Reviewer 1 noted that the results of the replication analysis were somewhat puzzling. We take this point seriously and agree that the original model was likely underpowered to detect the effect of interest. To address this, we excluded the higher-level three-way interaction of age, cognitive load, and surprisal, focusing instead on the primary effect examined in this paper: the modulatory influence of cognitive load on surprisal. Using this approach, we observed highly consistent results between the original online subsample and the online replication sample.

      (4) Potential age bias in GPT-2.  

      We thank Reviewer 2 for their thoughtful and constructive feedback and agree that a potential age bias in GPT-2’s next-token predictions warrants caution. We thus added a section in the Discussion explicitly considering this limitation, and explain why it should not affect the implications of our study.

      Reviewer #1 (Recommendations for the authors):

      The d-prime model operates at the block level. How many observations go into the fitting (about 175*8=1050)? How can the degrees of freedom of a certain variable go up to 188435? 

      We thank the reviewer for spotting this issue. Indeed, there was an error in our initial calculations, which we have now corrected in the manuscript. Importantly, the correction does not meaningfully affect the results for the analysis of d-primes or the conclusions of the study (see line 102).  

      “A linear mixed-effects model revealed n-back performance declined with cognitive load (β = -1.636, t(173.13) = -26.120, p < 0.001), with more pronounced effects with advancing age (β = -0.014, t(169.77) = -3.931, p < 0.001; Fig. 3b, Table S1)”.
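      For readers less familiar with the measure, a minimal sketch of how a per-block d′ is commonly computed from n-back hits and false alarms is shown below. The counts and the log-linear correction are illustrative assumptions, not values or choices taken from the study.

```python
# d' = z(hit rate) - z(false-alarm rate), computed once per block.
# The +0.5 / +1 correction avoids infinite z-scores when a rate is exactly 0 or 1.
from scipy.stats import norm

def block_dprime(hits, targets, false_alarms, nontargets):
    hit_rate = (hits + 0.5) / (targets + 1)
    fa_rate = (false_alarms + 0.5) / (nontargets + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative block: 14 hits out of 16 targets, 3 false alarms on 64 non-target trials
print(round(block_dprime(14, 16, 3, 64), 2))
```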

      Consider spelling out all the "simple coding schemes" explicitly. 

      We thank the reviewer for this helpful suggestion. In the revised manuscript, we have now included the modelled contrasts in brackets after each predictor variable.

      “Example from line 527: In both models, we included recording location (online vs. lab), cognitive load (1-back and 2-back Dual Task vs. Reading Only as the reference level) and continuously measured age (centred), as well as the interaction of age and cognitive load, as fixed effects”.

      The relationship between comprehension accuracy and strategies for color judgement is unclear or not intuitive. 

      We thank the reviewer for this helpful comment. The n-back task, which required participants to judge colours, was administered at the single-trial level, with colours pseudorandomised to prevent any specific colour - or sequence of colours - from occurring more frequently than others. In contrast, comprehension questions were presented at the end of each block, meaning that trial-level stimulus colour was unrelated to accuracy on the block-level comprehension questions. However, we agree that this distinction may not have been entirely clear, and we have now added a brief clarification in the Methods section to address this point (see line 534):  

      “Please note that we did not control for trial-level stimulus colour here. The n-back task, which required participants to judge colours, was administered at the single-trial level, with colours pseudorandomised to prevent any specific colour - or sequence of colours - from occurring more frequently than others. In contrast, comprehension questions were presented at the end of each block, meaning that trial-level stimulus colour was unrelated to accuracy on the block-level comprehension questions”.

      Could you explain why comprehension accuracy is not modeled in the same way as d-prime, i.e., with a similar set of predictors? 

      This is a very good point. After each block, participants answered three comprehension questions that were intentionally designed to be easy: they could all be answered correctly after having read the corresponding text, but not by common knowledge alone. The purpose of these questions was primarily to ensure participants paid attention to the texts and to allow exclusion of participants who failed to understand the material even under minimal cognitive load. As comprehension accuracy was modelled at the block level with 3 questions per block, participants could achieve only discrete scores of 0%, 33.3%, 66.7%, or 100%. Most participants showed uniformly high accuracy across blocks, as expected if the comprehension task fulfilled its purpose. However, this limited variance in performance caused convergence issues when fitting a comprehension-accuracy model at the same level of complexity as the d′ model. To model comprehension accuracy nonetheless, we therefore opted for a reduced model complexity in this analysis.

      RT of previous word: The motivations described in the Methods, such as post-error-slowing and sequential modulation effects, lack supporting evidence. The actual scope of what this variable may account for is unclear.  

      We are happy to elaborate further regarding the inclusion of this predictor. Reading times, like many sequential behavioral measures, exhibit strong autocorrelation (Schuckart et al., 2025, doi: 10.1101/2025.08.19.670092). That is, the reading time of a given word is partially predictable from the reading time of the previous word(s). Such spillover effects can confound attempts to isolate trial-specific cognitive processes. As our primary goal was to model single-word prediction, we explicitly accounted for this autocorrelation by including the log reading time of the preceding trial as a covariate. This approach removes variance attributable to prior behavior, ensuring that the estimated effects reflect the influence of surprisal and cognitive load on the current word, rather than residual effects of preceding trials. We now added this explanation to the manuscript (see line 553):

      “Additionally, it is important to consider that reading times, like many sequential behavioural measures, exhibit strong autocorrelation (Schuckart et al., 2025), meaning that the reading time of a given word is partially predictable from the reading time of the previous word. Such spillover effects can confound attempts to isolate trial-specific cognitive processes. As our primary goal was to model single-word prediction, we explicitly accounted for this autocorrelation by including the reading time of the preceding trial as a covariate”.  
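      A minimal sketch of the kind of lagged covariate described above, computed within participants so the lag never crosses subject boundaries (column names and values are illustrative, not the study's data):

```python
import numpy as np
import pandas as pd

# Toy trial-level data: two participants, per-word reading times in ms
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "rt_ms":   [310, 295, 420, 280, 305, 350],
})
df["log_rt"] = np.log(df["rt_ms"])
# Previous word's log reading time, per participant; NaN for each subject's first word
df["log_rt_prev"] = df.groupby("subject")["log_rt"].shift(1)
df = df.dropna(subset=["log_rt_prev"])  # the first trial per participant has no predecessor
print(df)
```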

      Block-level d-prime: It was shown with the d-prime performance model that block-level d-prime is a function of many of the reading-related variables. Therefore, it is not justified to use them here as "a proxy of each participant's working memory capacity."

      We thank the reviewer for their comment. We would like to clarify that the d-prime performance model indeed included only dual-task d-primes (i.e., d-primes obtained while participants were simultaneously performing the reading task). In contrast, the predictor in question is based on single-task d-primes, which are derived from the n-back task performed in isolation. While dual- and single-task d-primes may be correlated, they capture different sources of variance, justifying the use of single-task d-primes here as a measure of each participant’s working memory capacity.

      Word frequency is entangled with entropy and surprisal. Suggest removal.

      We appreciate the reviewer’s comment. While word frequency is correlated with word surprisal, its inclusion does not affect the interpretation of the other predictors and does not introduce any bias. Moreover, it is a theoretically important control variable in reading research. Since we are interested in the effects of surprisal and entropy beyond potential biases through word length and frequency, we believe these are important control variables in our model. In addition, checks for collinearity confirmed that word frequency was not strongly correlated with either surprisal or entropy. In this sense, including it is largely pro forma: it neither harms the model nor materially changes the results, but it ensures that the analysis appropriately accounts for a well-established influence on word processing.
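      One simple way to run the kind of collinearity check mentioned above is to inspect pairwise rank correlations and variance inflation factors; the sketch below uses random toy data in place of the actual word-level predictors.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
# Toy stand-ins for the word-level predictors (replace with the real columns)
df = pd.DataFrame({
    "word_freq": rng.normal(size=500),
    "surprisal": rng.normal(size=500),
    "entropy":   rng.normal(size=500),
})

print(df.corr(method="spearman"))                        # pairwise Spearman correlations

X = np.column_stack([np.ones(len(df)), df.to_numpy()])   # intercept column for VIF
for i, name in enumerate(df.columns, start=1):
    print(name, round(variance_inflation_factor(X, i), 2))
```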

      Entropy reflects the cognitive load of word prediction. It should be investigated in parallel and with similar depth as surprisal (which reflects the load of integration).

      This is an excellent point that warrants further investigation, especially since the previous literature on the effects of entropy on reading time is scarce and somewhat contradictory. We have thus added additional analyses and now report the effects of cognitive load, entropy, and age on reading time (see sections “Disentangling the Effect of Cognitive Load on Pre- and Post-Stimulus Predictive Processing” in the Results, “Control Analysis: Disentangling the Effect of Cognitive Load on Pre- and Post-Stimulus Predictive Processing” in the Methods as well as Fig. S7 and Table S6 in the Supplements for full results). In brief, we observe a significant three-way interaction among age, cognitive load, and entropy. Specifically, while all participants benefit from low entropy under high cognitive load, reflected by shorter reading times, in the baseline condition this benefit is observed only in older adults. Interestingly, in the baseline condition with minimal cognitive load, younger adults even show a benefit from high entropy. Thus, although the overall pattern for entropy partly mirrors that for surprisal – older adults showing increased reading times when word entropy is high and generally greater sensitivity to entropy variations – the effects differ in one important respect. Unlike for surprisal, the detrimental impact of increased word entropy is more pronounced under high cognitive load across all participants.

      Reviewer #2 (Recommendations for the authors):

      I agree in relation to prediction/load, but I am concerned (actually very concerned) that prediction needs to be assessed with respect to age. I suspect this is one reason why there is so much inconsistency in the effects of age in prediction and, indeed, comprehension more generally. I think the authors should either deal with it appropriately or drop it from the manuscript.

      Thank you for raising this important concern. It is true that prediction is a highly individual, complex process as it depends upon the experiences a person has made with language over their lifespan. As such, one-size-fits-all approaches are not sufficient to model predictive processing. In our study, we thus took particular care to ensure that our analyses captured both age-related and other interindividual variability in predictive processing.

      First, in our statistical models, we included age not only as a nuisance regressor, but also assessed age-related effects in the interplay of surprisal and cognitive load. By doing so, we explicitly model potential age-related differences in how individuals of different ages predict language under different levels of cognitive load.

      Second, we hypothesised that predictive processing might also be influenced by a range of interindividual factors beyond age, including language exposure, cognitive ability, and more transient states such as fatigue. To capture such variability, all models included by-subject random intercepts and slopes, ensuring that unmodelled individual differences were statistically accommodated.

      Together, these steps allow us to account for both systematic age-related differences and residual individual variability in predictive processing. We are therefore confident that our findings are not confounded by unmodelled age-related variability.

      Line 18, do not confuse prediction (or pre-activation) with predictability. Predictability effects can be due to integration difficulty. See Pickering and Gambi 2018 for discussion. The discussion then focuses on graded parallel predictions, but there is also a literature concerned with the prediction of one word, typically using the "visual world" paradigm (which is barely cited - Reference 60 is an exception). In the next paragraph, I would recommend discussing the N400 literature (particularly Federmeier). There are a number of reading time studies that investigate whether there is a cost to a disconfirmed prediction - often finding no cost (e.g., Frisson, 2017, JML), though there is some controversy and apparent differences between ERP and eye-tracking studies (e.g., Staub). This literature should be addressed. In general, I appreciate the value of a short introduction, but it does seem too focused on neuroscience rather than the very long tradition of behavioural work on prediction and predictability.

      We thank the reviewer for this suggestion. In the revised manuscript, we have clarified the relevant section of the introduction to avoid confusion between predictability and predictive processing, thereby improving conceptual clarity (see line 16).

      “Instead, linguistic features are thought to be pre-activated broadly rather than following an all-or-nothing principle, as there is evidence for predictive processing even for moderately- or low-restraint contexts (Boston et al., 2008; Roland et al., 2012; Schmitt et al., 2021; Smith & Levy, 2013)”.  

      We also appreciate the reviewer’s comment regarding the introduction. While our study is behavioural, we frame it in a neuroscience context because our findings have direct implications for understanding neural mechanisms of predictive processing and cognitive load. We believe that this framing is important for situating our results within the broader literature and highlighting their relevance for future neuroscience research.

      I don't think a two-word context is enough to get good indicators of predictability. Obviously, almost anything can follow "in the", but the larger context about parrots presumably gives a lot more information. This seems to me to be a serious concern - or am I misinterpreting what was done? 

      This is a very important point and we thank the reviewer for raising it. Our goal was to generate word surprisal scores that closely approximate human language predictions. In the manuscript, we report analyses using a 2-word context window, following recommendations by Kuribayashi et al. (2022).

      To evaluate the impact of context length, we also tested longer windows of up to 60 words (not reported). While previous work (Goldstein et al., 2022) shows that GPT-2 predictions can become more human-like with longer context windows, we found that in our stimuli – short newspaper articles of only 300 words – surprisal scores from longer contexts were highly correlated with the 2-word context, and the overall pattern of results remained unchanged. To illustrate, surprisal scores generated with a 10-word context window and surprisal scores generated with the 2-word context window we used in our analyses correlated with Spearman’s ρ = 0.976.

      Additionally, on a more technical note, using longer context windows reduces the number of analysable trials, since surprisal cannot be computed for the first k words of a text with a k-word context window (e.g., a 50-word context would exclude ~17% of the data).  

      Importantly, while a short 2-word context window may introduce additional noise in the surprisal estimates, this would only bias effects toward zero, making our analyses conservative rather than inflating them. Critically, the observed effects remain robust despite this conservative estimate, supporting the validity of our findings.

      However, we agree that this is a particularly important and sensitive point, and have now added a discussion of it to the manuscript (see line 476).

      “Entropy and surprisal scores were estimated using a two-word context window. While short contexts have been shown to enhance GPT-2’s psychometric alignment with human predictions, making next-word predictions more human-like (Kuribayashi et al., 2022), other work suggests that longer contexts can also increase model–human similarity (Goldstein et al., 2022). To reconcile these findings in our stimuli and guide the choice of context length, we tested longer windows and found surprisal scores were highly correlated with the 2-word context (e.g., 10-word vs. 2-word context: Spearman’s ρ = 0.976), with the overall pattern of results unchanged. Additionally, employing longer context windows would have also reduced the number of analysable trials, since surprisal cannot be computed for the first k words of a text with a k-word context window. Crucially, any additional noise introduced by the short context biases effect estimates toward zero, making our analyses conservative rather than inflating them”.
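      For readers who wish to reproduce the general approach, a minimal sketch of per-word surprisal under a fixed k-word context with GPT-2 (via Hugging Face transformers) is given below. This is not the authors' pipeline: the handling of word boundaries and punctuation, and the choice of bits as the unit, are assumptions made for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisal(words, k=2):
    """Surprisal (in bits) of each word given only the k preceding words."""
    values = []
    for i in range(k, len(words)):                       # the first k words have no full context
        ctx_ids = tokenizer.encode(" ".join(words[i - k:i]), return_tensors="pt")
        tgt_ids = tokenizer.encode(" " + words[i])       # leading space: GPT-2 BPE convention
        ids = torch.cat([ctx_ids, torch.tensor([tgt_ids])], dim=1)
        with torch.no_grad():
            log_probs = torch.log_softmax(model(ids).logits, dim=-1)
        # Sum log-probabilities of the target word's tokens, each predicted from its left context
        nats = -sum(log_probs[0, ctx_ids.shape[1] - 1 + j, tok].item()
                    for j, tok in enumerate(tgt_ids))
        values.append(nats / torch.log(torch.tensor(2.0)).item())
    return values

print(word_surprisal("the parrot sat quietly on its perch".split(), k=2))
```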

      Line 92, task performance, are there interactions? Interactions would fit with the experimental hypotheses. 

      Yes, we did include an interaction term of age and cognitive load and found significant effects on n-back task performance (d-primes; b = -0.014, t(169.8) = -3.913, p < 0.001), but not on comprehension question accuracy (see Table S1 and Fig. S2 in the supplementary material).

      Line 149, what were these values?

      We found surprisal values ranged between 3.56 and 72.19. We added this information in the manuscript (see line 143).

    1. Briefing document: Rencontres interprofessionnelles de la Miprof 2025

      Executive Summary

      This document summarises the key analyses, data and strategies presented at the Rencontres interprofessionnelles de la Miprof 2025.

      The conference underlined the systemic scale of sexist and sexual violence in France, while taking stock of legislative progress, judicial challenges and emerging threats. The main points are as follows:

      1. An ambition of eradication and a strengthened legislative framework: The stated political objective is not to reduce violence but to eradicate it entirely.

      Major legislative advances have been made, notably the introduction of the notion of non-consent into the criminal definition of rape, the recognition of coercive control, and the extension of limitation periods for sexual crimes against minors. A cross-party framework law is being prepared to unify the institutional response.

      2. Alarming data confirming mass-scale violence: The 2023-2024 statistics reveal a massive prevalence of violence. Every day, 3.5 women are victims of femicide (direct or indirect) or attempted femicide by a partner or ex-partner.

      Children account for more than half of recorded victims of sexist and sexual violence. The analysis confirms that women are disproportionately victimised (85% of victims of sexual violence) and that the perpetrators, mostly men, are most often people close to the victim, making the home the most dangerous place.

      3. The urgency of preventing femicides and protecting child co-victims: Analysis of intimate-partner homicides (case reviews, or "retex") shows that in half of all cases warning signs already existed.

      Experts call for a paradigm shift: focus on the perpetrator, flag high-risk situations as "critical" by identifying key markers such as strangulation and death threats, and use protection orders preventively.

      "Forced suicide", a blind spot among femicides, accounts for nearly 300 women's deaths per year. Children exposed to domestic violence are recognised as direct victims suffering severe trauma, requiring coordinated judicial protection and targeted prevention tools such as the film "Selma".

      4. The emergence of new battlegrounds, cyberviolence and masculinist movements: Sexist and sexual cyberviolence massively affects young people, with serious psychological consequences and a very low rate of complaints filed (12%).

      At the same time, the rise of organised, professionalised and very well funded masculinist movements (more than one billion dollars in Europe) constitutes a direct threat. These movements attack support services such as the 3919 helpline, instrumentalise children's rights to weaken those of mothers, and seek to undermine the foundations of equality through political lobbying and an increased media presence.

      In conclusion, the day highlighted the need for constant vigilance, ongoing training for all professionals, better inter-institutional coordination, and a firm, structured response to the new strategies of perpetrators and their ideological allies.

      --------------------------------------------------------------------------------

      1. Political Vision and Strategic Framework for Action

      The meetings were opened by the Minister for Equality between Women and Men, who set a clear course: the objective is not to reduce or mitigate violence, but to eradicate it completely and definitively. This ambition translates into a strengthened legal arsenal and constant adaptation of intervention strategies.

      1.1. A Phenomenon with Many Faces

      The minister recalled the diversity of the ever-evolving forms of violence against women:

      • Physical, sexual, psychological

      • Economic, digital, chemical

      • Linked to human trafficking, often hidden behind fronts such as supposed massage parlours.

      This adaptability of the violence demands an innovative and proactive response from public authorities.

      1.2. Recent Legislative Advances

      2025 is presented as the year of "strengthening and clarity", marked by several major legislative advances:

      Definition of rape and non-consent: The bill introducing the notion of non-consent into the criminal definition of rape is a historic advance. It enshrines in law that "not saying no is not saying yes", ending an ambiguity that protected perpetrators. Silence, a freeze response or fear do not constitute consent.

      Limitation periods for the rape of minors: A law has extended limitation periods, recognising that it can take decades before victims are able to speak out. The ultimate objective nonetheless remains making sexual crimes committed against children imprescriptible.

      Recognition of coercive control: For the first time, French law recognises coercive control, a decisive step towards identifying domestic violence before physical blows occur.

      Such violence begins with acts such as confiscating a phone, social isolation, the instilling of fear, control of bank accounts, hyper-control and repeated humiliation.

      1.3. Towards a Framework Law and National Mobilisation

      To ensure an overall, coherent vision, a cross-party parliamentary working group has been set up to prepare a framework law against sexual and intra-family violence.

      The aim is to build a "mobilised nation" in which detection, listening, protection and coordination become reflexes for all professionals and citizens.

      1.4. Vigilance towards Masculinist Movements

      A warning was issued about the rise of masculinist movements that seek to relativise violence and trivialise inequality.

      Their rhetoric, often disguised as "freedom of expression", aims to roll back women's rights.

      The response must be firm: "Freedom of expression has never been the freedom to harm", and equality between women and men is a founding principle of the Republic, not an opinion.

      --------------------------------------------------------------------------------

      2. Key 2024 Data: Mass-Scale, Systemic and Gendered Violence

      The presentation of Letter No. 25 of the Observatoire national des violences faites aux femmes quantified the scale of the phenomenon using multi-source data (Ministries of the Interior and of Justice, associations).

      2.1. General Violence Statistics

      | Category of violence | Key figure | Source |
      | --- | --- | --- |
      | Frequency | Every 23 seconds, a woman is subjected to harassment, indecent exposure or the unsolicited sending of sexual content. | Miprof |
      | Frequency | Every 2 minutes, a woman is the victim of rape, attempted rape or sexual assault. | Miprof |
      | Sexual violence (self-reported victimisation, 2023) | 1,809,000 adults reported being victims. | VRS survey (SSMSI) |
      | Breakdown for women | Sexual harassment: 1,155,000 | VRS survey (SSMSI) |
      | Breakdown for women | Indecent exposure / unsolicited sexual content: 369,000 | VRS survey (SSMSI) |
      | Breakdown for women | Rape or attempted rape: 159,000 | VRS survey (SSMSI) |
      | Breakdown for women | Sexual assault: 222,000 | VRS survey (SSMSI) |
      | Intimate-partner violence (self-reported victimisation, 2023) | 376,000 adult women reported being victims. | VRS survey (SSMSI) |
      | Violence recorded by law enforcement (2024) | Sexual violence: 94,900 girls and women victims (52% minors). | Police / Gendarmerie |
      | Violence recorded by law enforcement (2024) | Intimate-partner violence: 228,000 women victims. | Police / Gendarmerie |

      2.2. Femicides and Attempted Femicides (2024)

      The analysis of femicides now includes "indirect femicides", i.e. harassment leading to suicide.

      Direct femicides: 107 women killed.

      Attempted direct femicides: 270 women.

      Harassment by a partner or ex-partner leading to suicide or attempted suicide: 906 women.

      Combined total: 1,283 women killed, targeted in a murder attempt, or driven to suicide by their partner or ex-partner. That is 3.5 women per day.

      Children orphaned in 2024: 94. Since 2011 the figure stands at 1,473.

      2.3. The Judicial Response and Protection Mechanisms

      | Indicator | 2024 / 2025 figure | Source |
      | --- | --- | --- |
      | Prosecutions (sexual violence) | 11,200 suspects prosecuted (out of 43,700 cases processed). | SDSE (Justice) |
      | Convictions (sexual violence) | 7,000 final convictions. | SDSE (Justice) |
      | Prosecutions (intimate-partner violence) | 54,400 suspects prosecuted (out of 145,400 cases processed). | SDSE (Justice) |
      | Convictions (intimate-partner violence) | 42,200 final convictions. | SDSE (Justice) |
      | Victims received in forensic medical units (UMJ) | 74,000 victims of sexist and sexual violence. | Administrative data |
      | Dedicated accommodation and housing | 11,300 places as of 31 December 2024. | Administrative data |
      | Protection orders | 4,200 issued. | SDSE (Justice) |
      | Active "grave danger" phones (Téléphone Grave Danger, TGD) | 5,400 (early November 2025). | Administrative data |
      | Active anti-approach bracelets (Bracelet Anti-Rapprochement, BAR) | 660 (early November 2025). | Administrative data |
      | Calls handled by the 3919 helpline | More than 100,000. | FNSF |
      | Reports handled by the 119 helpline (child co-victims) | 5,200. | SNATED |

      2.4. Analysis: Systemic Violence and Danger Close to Home

      Gendered dimension: Women account for 85% of victims of sexual violence.

      For 9 out of 10 victims, whatever their sex, the perpetrator is a man. 84% of victims of intimate-partner violence are women (98% for sexual violence within the couple).

      Danger within the home: Public discourse often focuses on danger from outside, but the data show the opposite. 46% of recorded rapes of women were committed within the couple. 58% of the women killed in 2024 were killed by a family member or their partner/ex-partner.

      Massive under-reporting: The code of silence persists. Only 2% of women subjected to sexual harassment or indecent exposure file a complaint. The rate rises to only 7% for rape and sexual assault.

      --------------------------------------------------------------------------------

      3. Focus: Sexist and Sexual Cyberviolence

      A national survey conducted by a consortium of associations (Point de contact, Féministes contre le cyberharcèlement, Stop Fisha) revealed the scale and specific features of online violence.

      3.1. Victim Profile and Nature of the Acts

      Main targets: Women and girls, more than half of whom are minors.

      The image as a weapon: More than a quarter of victims experienced the non-consensual sharing of their intimate content. The figure reaches 36% among minors.

      Proximity of the perpetrator: In 85% of cases where the perpetrator is known, he is a man. Two thirds of victims knew their perpetrator, who most often came from their close circle (an intimate partner in 52% of cases, classmates in a third).

      3.2. Devastating Consequences and Low Recourse to Justice

      Psychological impact: The consequences are severe, even without physical contact.

      Suicidal thoughts: 1 victim in 10 (cyberviolence alone); 1 in 3 (if the violence continues offline).

      Suicide attempts: 7% (cyberviolence alone); 1 in 4 (if the violence continues offline).

      Complaint rate: Only 12% of victims file a complaint (10% among minors).

      Barriers to filing a complaint:

      Lack of awareness: A third of minors did not know they could file a complaint.

      Sense of futility: A third of victims believe that a complaint would not help them.

      Victim-blaming: Two thirds of victims who did file a complaint report having been made to feel guilty during the process.

      3.3. Recommendations

      Prevention: Massively strengthen prevention, awareness-raising and training in schools and among the general public, with a harm-reduction, non-blaming message.

      Training: Train all professionals (justice, police, health, education) from a gender perspective.

      Support: Create a single, holistic platform for adult victims.

      Regulation: Generalise the preventive removal of reported content by platforms, without waiting for the final moderation decision.

      --------------------------------------------------------------------------------

      4. Focus: Protecting French Women Victims of Violence Abroad

      A round table highlighted the often invisible situation of French women who are victims of violence abroad, a population estimated at between 3 and 3.5 million people.

      4.1. Specific Vulnerabilities

      The official figures (186 cases followed in 2024) greatly underestimate the reality. Women abroad face additional difficulties:

      Dependence: Economic and administrative dependence on the partner (the visa is often tied to him).

      Isolation: Language barriers and social isolation, far from any support network.

      Legal risks: Local contexts in which the violence is not always recognised or prosecuted, and the risk of wrongful removal of children if the woman leaves the country.

      Stereotypes: The image of the "privileged expatriate" masks the reality of the violence and hinders awareness and action.

      4.2. Response Strategies and Model Initiatives

      Feminist diplomacy roadmap: The Ministry for Europe and Foreign Affairs has integrated the protection of French women abroad into its strategy, around three pillars: better information, better protection, better support.

      The Singapore model: A pilot initiative was presented: a free, bilingual legal clinic, the result of a partnership between the Paris Bar, the Law Society of Singapore and the French Embassy.

      It offers secure, anonymous access to legal advice, bridges the French and local legal systems, and refers women to a network of partners (accommodation, psychologists).

      Training of the consular network: Specific training modules, developed with the Miprof, are being rolled out for the 186 focal-point officers in consulates.

      Access to national services: The arretonslesviolences.gouv.fr platform is now accessible from abroad, but the 3919 helpline is not yet, which remains a priority battle.

      --------------------------------------------------------------------------------

      5. Focus: Preventing Femicides

      A round table of experts (judges, a forensic doctor, a lawyer) analysed the levers for better preventing lethal violence.

      5.1. Lessons from Case Reviews ("Retex")

      Systematic analysis of intimate-partner homicides by public prosecutors' offices has identified areas for improvement:

      • In 50% of cases, warning signs or prior judicial history existed.

      • The failures often lie in the handling of initial reports, in communication between judicial actors and in the assessment of danger.

      5.2. Towards a Judicial Paradigm Shift

      Focus on the perpetrator: Judge Gwenola Joly-Coz insisted on the need to shift attention from the victim to the perpetrator and his strategies, notably through the notion of coercive control.

      Flagging situations as "critical": Judges must identify "very high intensity" situations on the basis of objective, predictive criteria.

      Markers of imminent danger:

      1. Strangulation: A "sex-specific" act intended to silence the victim and stop her breathing, which must be treated as a criterion of absolute seriousness.

      2. Death threats: These must never be euphemised or minimised, as they manifest criminal intent.

      5.3. The Key Role of Protection Orders and of Detecting Forced Suicides

      Protection orders: Ernestine Ronai recalled that this tool (4,200 issued in France versus 33,000 in Spain) is under-used and comes too late.

      It should become a first step of protection, accessible before a complaint is filed, as soon as violence appears "plausible".

      Forced suicide: Yael Mellul stressed that this "blind spot" accounts for around 300 femicides per year.

      The law exists but is very rarely applied. She advocates a systematic "psychological autopsy" in suicide cases to look for a context of harassment and violence.

      --------------------------------------------------------------------------------

      6. Focus: Child Co-victims

      Children exposed to domestic violence are now recognised as direct victims, but their protection remains a major challenge.

      6.1. The Traumatic Impact

      • Children are deeply affected even when they are not directly struck. 60% are diagnosed with post-traumatic stress disorder.

      • The child is often used as a weapon within the coercive control exercised over the mother.

      6.2. The Challenges of Protection

      Institutional silos: The complexity of the judicial system (family court judge, children's judge, criminal judge) can lead to contradictory decisions and a fragmented view of the family situation.

      Initiatives such as the "VIF chambers" (for intra-family violence) in courts of appeal aim to break down these silos by hearing civil and criminal matters in a coordinated way.

      Exercise of parental authority: This is a central issue, because parental authority is a major lever of post-separation coercive control.

      The law has evolved to allow its suspension or withdrawal, but its application remains complex.

      Role of child protection services (ASE): Professionals must be trained not to treat the violence as symmetrical and to always refocus the analysis on the context of violence, even when the intervention concerns the child's symptoms.

      6.3. The Film "Selma": A Prevention Tool

      Purpose: A short fiction film commissioned by the youth directorate (DJEPVA) and directed by Johanna Benaïnous to raise awareness among activity leaders and directors of collective facilities for minors.

      Themes: The film addresses how difficult it is for a young professional to report concerns, the perpetrator's strategy of destabilising others and reversing guilt, and a model of supportive reception by the police.

      Roll-out: It is accompanied by a training booklet and will be deployed nationally to train trainers and field workers, with an emphasis on background checks, the duty to report and consent education.

      --------------------------------------------------------------------------------

      7. Focus: The Rise of Masculinist Movements

      The final round table warned of the increasing organisation and professionalisation of masculinist movements, which represent an organised counter-offensive against feminist progress.

      7.1. Ideology and Strategy

      Basic premise: Feminism has supposedly gone too far, and men are now said to be the main victims, threatened with eradication by a feminist "plot".

      Tactics: They present themselves as "support groups" for men in distress, offering them a scapegoat (women, feminists) and simplistic solutions to complex problems (self-confidence, relationships).

      Recruitment: They particularly target young men in search of an identity, via social media influencers, capitalising financially and politically on their distress.

      7.2. A Funded, Professionalised Offensive

      Funding: The report "La Nouvelle Vague" reveals that at least 1.2 billion dollars financed anti-gender movements in Europe between 2019 and 2023.

      The funds come from the United States (the Christian right) and from Russia, but are mostly European.

      Professionalisation: This money has built a high-level lobbying infrastructure, an ecosystem of think tanks, a strong media presence and the creation of "anti-gender services" (e.g. "pregnancy crisis" centres designed to dissuade women from abortion).

      7.3. Concrete Manifestations and Impacts

      Attacks on support services: The FNSF testified to targeted attacks on the 3919 helpline: attempts to saturate the line, harassment of its staff, and political lobbying to "open the line to men" in a logic of false symmetry that denies the systemic nature of the violence.

      Instrumentalisation of children's rights: Bills (such as PPL 819 on shared residence by default) are promoted by masculinist groups under the guise of "defending children", when their real aim is to strengthen the rights of fathers, including violent ones, at the expense of the safety of mothers and children.

      Political infiltration: These movements are no longer marginal. They come "in suits and ties" and obtain meetings in ministries and parliaments, breaching the "republican firewalls".

      7.4. Avenues for Response

      Media: Treat masculinism as a fact and a terrorist threat, not as an "opinion".

      Prevention: Strengthen education for equality from the earliest age, relying on grassroots actors.

      Regulation: Legally compel digital platforms to moderate this hateful content.

      Listening to associations: Take seriously the warnings issued by feminist associations about the normalisation of hate speech and the revictimisation of women in the justice system (e.g. counter-complaints, courses intended for perpetrators being imposed on victims).

    1. Reviewer #1 (Public review):

      Summary:

      Wojnowska et al. report structural and functional studies of the interaction of Streptococcus pyogenes M3 protein with collagen. They show through X-ray crystallographic studies that the N-terminal hypervariable region of M3 protein forms a T-like structure, and that the T-like structure binds a three-stranded collagen-mimetic peptide. They indicate that the T-like structure is predicted by AlphaFold3 with moderate confidence level in other M proteins that have sequence similarity to M3 protein and M-like proteins from group C and G streptococci. For some, but not all, of these related M and M-like proteins, AlphaFold3 predicts, with moderate confidence level, complexes similar to the one observed for M3-collagen. Functionally, the authors show that emm3 strains form biofilms with more mass when surfaces are coated with collagen, and this effect can be blocked by an M3 protein fragment that contains the T-structure. They also show the co-occurrence of emm3 strains and collagen in patient biopsies and a skin tissue organoid. Puzzlingly, M1 protein has been reported to bind collagen, yet collagen inhibits biofilm formation in a particular emm1 strain, while that same emm1 strain colocalizes with collagen in a patient biopsy sample. The implications of the variable actions of collagen on biofilm formation are not clear.

      Strengths:

      The paper is well written and the results are presented in a logical fashion.

      Weaknesses:

      A major limitation of the paper is that it is almost entirely observational and lacks detailed molecular investigation. Insufficient details or controls are provided to establish the robustness of the data.

      Comments on revisions:

      The authors' response to this reviewer's Major issue #1 is inadequate. Their argument is essentially that if they denature the protein, then there is no activity. This does not address the specificity of the structure or its interactions.

      They went only part way to addressing this reviewer's Major issue #2. While Figure 8 - supplement 3 shows 1D NMR spectra for M3 protein (what temperature?), it does not establish that stability is unaltered (to a significant degree).

      This reviewer's Major issue #3 is one of the major reasons for considering this study to be observational. This reviewer agrees that structural biology is by its nature observational, but modern standards require validation of structural observations. The authors' response is that a mechanistic investigation involving mutant bacterial strains and validation involving mutated proteins is beyond their scope. Therefore, the study remains observational.

      Major issue 4 was addressed suitably, but brings up the problematic point that the emm1 2006 strain colocalizes quite well with collagen in a patient biopsy sample but not in other assays. This calls into question the overall interpretability of the patient biopsy data.

      The authors have not provided a point-by-point response. Issues that were indicated to be minor previously were deemed to be minor because this reviewer thought that they could easily be addressed in a revision. It appears that the authors have ignored many of these comments, and these issues are therefore now considered to be major issues. For example, no errors are given for Kd measurements, Table 2 is sloppy and lacks the requested information, negative controls are missing (Figure 10 - figure supplement 1), and there is no indication of how many independent times each experiment was done.

      And "C4-binding protein" should be corrected to "C4b-binding protein."

    2. Author response:

      The following is the authors’ response to the current reviews.

      We thank the reviewers for their comments on the initial submission, which helped us improve and extend the paper. We would like to respond specifically to reviewer #1.

      We disagree with the broad criticism of this study as being “almost entirely observational” and lacking “detailed molecular investigation”. We report structures and binding data, show mechanistic detail, identify critical residues and structural features underlying biological activity, and present biologically meaningful data demonstrating a role of the interaction of the M3 protein with collagens. We disagree that insufficient details or controls are included. We agree that our report has limitations, such as an incomplete understanding of potential emm1 strain binding to collagen, which might play a role in host tissue colonization but not in biofilm formation.

      In response to issues raised in the initial review, we conducted several new experiments for the revised manuscript. We believe these strengthen what we report. Firstly, as the reviewer suggested, we conducted a binding experiment where the tertiary fold of M3-NTD was disrupted to confirm the T-shaped fold is indeed required for binding to collagen, as might be expected based on the crystal structure of the complex. To achieve this, we did not, as the reviewer states, use denatured protein in the ITC binding experiment. Instead, we used a monomeric form of M3-NTD, which does not adopt a well-defined tertiary structure, but retains all residues in the context of alpha helices. Secondly, we added more evidence for the importance of structural features (amino acid side chains defining the collagen binding site) by analysing the role of Trp103. Together, we provide clear evidence for the specific role of the T-shaped fold of M3-NTD for collagen binding.

      Responding to a constructive criticism by reviewer #1 we characterised M3-NTD mutants to demonstrate conservation of overall structure. NMR is an exquisite tool for this as it is highly sensitive to structural changes. It is not clear why the reviewer suggested we should have measured the stability of the proteins, which is irrelevant here. What matters is that the fold is conserved between mutated variants at the chosen experimental temperature (now added to the Methods section), which NMR demonstrates.

      We added errors for the ITC-derived dissociation constants.

      In the submitted versions of the paper we did not include the negative control requested by reviewer #1 for experiments shown in Figure 10 - figure supplement 1B. In our view this does not add information supporting our findings. However, we have now added two negative controls, staining of emm1 and emm28 strains. As expected, no reactivity was found with the type-specific M3 HVR antiserum while the M3 BCW antiserum showed weak reactivity, in line with some sequence similarity of the C-terminal regions of M proteins.

      Table 2 contains essential information, in line with what generally is shown in crystallographic tables in this journal. All other information can be found in the depositions of our data at the PDB. The structures have been scrutinised and checked by the PDB and passed all quality tests.

      We stated how many times experiments were done where appropriate. We now added this information for CLC assays (as given in the previously published protocol, refs. 45, 47). ITC was carried out more than once for optimization but the results of single experiments are shown (as is common practice).


      The following is the authors’ response to the original reviews.

      Many thanks for assessing our submission. We are grateful for the reviews that have informed a revised version of the paper, which includes additional data and modified text to take into account the reviewers’ comments. 

      We addressed the major limitation identified by Reviewer #1 by including data to demonstrate that collagen binding is indeed dependent on the T-shaped fold (major issue 1). Reviewer #1 suggested this needs to be done through extensive mutational work. This in our view was neither feasible nor necessary. Instead, we used ITC to measure collagen peptide binding using a monomeric form of M3, which preserves all residues including the ones involved in binding, but cannot form the T-shaped structure. This achieves the same as unravelling the T fold through mutations, but without the risk of affecting binding through altering residues that are involved in both binding and definition of the T fold. The experiment shows a very weak interaction, confirming the fold of the M3-NTD is required for binding activity.

      Reviewer #1 finds the study limited for being “almost entirely observational”. Structural biology is by its nature observational, which is not a limitation but the very purpose of this approach. Our study goes beyond observing structures. In the first version of our paper, we identified a critical residue within a previously mapped binding site, and demonstrated through mutagenesis a causal link between the presence of this residue on a tertiary fold and collagen binding activity. However, we agree this analysis could have been strengthened by additional mutagenesis, which we carried out and describe in the revised manuscript. This identifies a second residue that is critical for collagen binding. We firmed up these mutational experiments with a characterisation of mutated forms of M3 by NMR spectroscopy to confirm that these mutations did not affect the overall fold, addressing major issue no. 2 of reviewer #1. We further demonstrate that the interaction between M3 and collagen is the cause of greatly enhanced biofilm formation as observed in patient biopsies and a tissue model of infection. We show that other streptococci that do not possess a surface protein presenting collagen binding sites like M3 do not form collagen-dependent biofilm. We therefore do not think that criticising our study for being almost entirely observational is valid.

      Major issue 3:

      We agree with the reviewer that it would be useful to carry out experiments with k.o. and complemented strains. Such experiments go beyond the scope of our study, but might be carried out by us or others in the future. We disagree that emm1 is used “as a negative”. Instead, we established that, in contrast to emm3 strains, emm1 strain biofilm formation is not enhanced by collagen. 

      We addressed major issue 4 by quantifying colocalizations in the patient biopsies and 3D tissue model experiments.
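      For orientation, the revision reports a Manders overlap coefficient for this colocalization analysis (see the summary of changes below); the sketch that follows computes the related Manders split coefficients M1 and M2 for two toy image channels. The arrays and thresholds are hypothetical, the exact coefficient used in the paper may differ, and this is not the authors' image-analysis pipeline.

      ```python
      # Minimal sketch: Manders' split colocalization coefficients M1 and M2 for
      # two fluorescence channels. Illustrative only; the toy images and thresholds
      # are hypothetical and this is not the analysis pipeline used in the paper.
      import numpy as np

      def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
          """M1: fraction of channel-1 intensity located where channel 2 exceeds its threshold.
             M2: fraction of channel-2 intensity located where channel 1 exceeds its threshold."""
          ch1 = np.asarray(ch1, dtype=float)
          ch2 = np.asarray(ch2, dtype=float)
          m1 = ch1[ch2 > thr2].sum() / ch1.sum()
          m2 = ch2[ch1 > thr1].sum() / ch2.sum()
          return m1, m2

      # Toy 4x4 "images": a bacterial signal and a collagen signal that partially overlap
      bacteria = np.zeros((4, 4)); bacteria[1:3, 1:3] = 1.0
      collagen = np.zeros((4, 4)); collagen[2:4, 1:3] = 1.0
      m1, m2 = manders_coefficients(bacteria, collagen)
      print(f"M1 = {m1:.2f}, M2 = {m2:.2f}")  # both 0.50 for this toy overlap
      ```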

      We thank Reviewer #2 for the thorough analysis of our reported findings. The main criticism here (issue 1) concerns the question of whether binding of emm3 streptococci would differ between different types of collagen. Our collagen peptide binding assays together with the structural data identify the collagen triple helix as the binding site for M3. While collagen types differ in their distribution, functions and morphology in different tissues, they all have in common triple-helical (COL) regions with high sequence similarity that are non-specifically recognised by M3. Therefore, our data, in conjunction with the body of published work showing binding of M3 to collagens I, II, III and IV, suggest it is highly likely that emm3 streptococci will indeed bind to all types of collagen in the same manner. We added a statement to the manuscript to make this point more clearly. We also added a prediction of a complex between M3 and a collagen I triple-helical peptide, which supports the idea of a conserved binding mechanism for all collagen types. Whether this means all collagen types in the various tissues where they occur are targeted by emm3 streptococci is a very interesting question, however one that goes beyond the scope of our study.

      Minor issues identified by the reviewers were addressed through changes in the text and addition of figures.

      Summary of changes:

      (1) Two new authors have been added due to inclusion of additional data and analysis.

      (2) New experimental data included in section "M3-NTD harbors the collagen binding site".

      (3) Figure 3 panels A and B assigned and swapped.

      (4) Figure 4 changed to include new data and move mutant M3-NTD ITC graphs to supplement.

      (5) Table 2 corrected and amended.

      (6) AlphaFold3 quality parameters ipTM and pTM added to all figures showing predicted structures.

      (7) New supplementary figure added showing crystal packing of M3-NTD/collagen peptide complex.

      (8) Figure supplement of predicted M-protein/collagen peptide complexes includes new panel for a type I collagen peptide bound to M3.

      (9) New figure supplement showing mutant M3-NTD ITC data.

      (10) New figure supplement showing 1D ¹H NMR spectra of M3-NTD mutants.

      (11) Included data for additional M3-NTD mutants assessing role of Trp103 in collagen binding. Text extended to describe and place into context findings from ITC binding studies using these mutants.

      (12) Added quantitative analysis of biopsy and tissue model data (Manders' overlap coefficient).

      (13) Corrected and extended table 3 to take into account new primers.

      (14) Added experimental details for new NMR and ITC experiments as well as new quantitative image analysis.

      (15) Minor adjustments to the text to improve clarity and correct errors.

    1. Life-changing eye implant helps blind patients read again
      • New "Prima" eye implant is a breakthrough for the blind – allows regaining the ability to read.
      • Involves implanting a microprocessor under the retina, with patients wearing glasses with a camera that transmits the image to the implant.
      • Of 32 people implanted, 27 could read using central vision; after a year, they improved by 5 lines on the vision test chart.
      • 70-year-old patient Sheila Irvine, who lost her sight over 30 years ago and has had the implant for 3 years, now solves crosswords.
      • Rights to PRIMA acquired in 2024 by U.S. company Science Corporation. (video in source)
    1. Where a transfer goes wrong on the power of disposition (beschikkingsbevoegdheid) requirement

      NOTE: only test against art. 3:88 when there is a lack of power of disposition (beschikkingsonbevoegdheid). If one or more of the other requirements of art. 3:84 BW are not met, you do not need to test against this provision.

    2. Where a transfer goes wrong on the power of disposition (beschikkingsbevoegdheid) requirement

      NOTE: only test against art. 3:86 when there is a lack of power of disposition (beschikkingsonbevoegdheid). If one or more of the other requirements of art. 3:84 BW are not met, you do not need to test against this provision.

    1. Reviewer #3 (Public review):

      Summary:

      In this well-written manuscript, Unitt and colleagues propose a new, hierarchical nomenclature system for the pathogen Neisseria gonorrhoeae. The proposed nomenclature addresses a longstanding problem in N. gonorrhoeae genomics, namely that the highly recombinant population complicates typing schemes based on only a few loci and that previous typing systems, even those based on the core genome, group strains at only one level of genomic divergence without a system for clustering sequence types together. In this work, the authors have revised the core genome MLST scheme for N. gonorrhoeae and devised life identification numbers (LIN) codes to describe the N. gonorrhoeae population structure.

      Strengths:

      The LIN codes proposed in this manuscript are congruent with previous typing methods for Neisseria gonorrhoeae like cgMLST groups, Ng-STAR, and NG-MAST. Importantly, they improve upon many of these methods as the LIN codes are also congruent with the phylogeny and represent monophyletic lineages/sublineages. Additionally, LIN code cluster assignment is fixed, and clusters are not fused as is common in other typing schemes.

      The LIN code assignment has been implemented in PubMLST allowing other researchers to assign LIN codes to new assemblies and put genomes of interest in context with global datasets, including in private datasets.

      Weaknesses:

      The authors have defined higher resolution thresholds for the LIN code scheme. However, they do not investigate how these levels correspond to previously identified transmission clusters from genomic epidemiology studies. This will be an important focus of future work, but it may be beyond the scope of the current manuscript.

      Comments on revisions:

      The authors have addressed my previous comments. I have no additional recommendations.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      Bacterial species that frequently undergo horizontal gene transfer events tend to have genomes that approach linkage equilibrium, making it challenging to analyze population structure and establish the relationships between isolates. To overcome this problem, researchers have established several effective schemes for analyzing N. gonorrhoeae isolates, including MLST and NG-STAR. This report shows that Life Identification Number (LIN) Codes provide for a robust and improved discrimination between different N. gonorrhoeae isolates.

      Strengths:

      The description of the system is clear, the analysis is convincing, and the comparisons to other methods show the improvements offered by LIN Codes.

      Weaknesses:

      No major weaknesses were identified by this reviewer.

      We thank the reviewer for their assessment of our paper.

      Reviewer #2 (Public review):

      Summary:

      This paper describes a new approach for analyzing genome sequences.

      Strengths:

      The work was performed with great rigor and provides much greater insights than earlier classification systems.

      Weaknesses:

      A minor weakness is that the clinical application of LIN coding could be articulated in a more in-depth way. The LIN coding system is very impressive and is certainly superior to other protocols. My recommendation, although not necessary for this paper, is that the authors expand their analysis to noncoding sequences, especially those upstream of open reading frames. In this respect, important cis-acting regulatory mutations that might help to further distinguish strains could be identified.

      We thank the reviewer for their comments. LIN code could be applied clinically, for example in the analysis of antibiotic resistant isolates, or to investigate outbreaks associated with a particular lineage. We have updated the text to note this, starting at line 432.

      Regarding non-coding sequences: unfortunately, intergenic regions are generally unsuitable for use in typing systems as (i) they are subject to phase variation, which can occlude relationships based on descent; and (ii) they are inherently difficult to assemble and can therefore introduce variation due to the sequencing procedure rather than biology. For the type of variant typing that LIN code represents, which aims to replicate phylogenetic clustering, protein-encoding sequences are the best choice for convenience, stability, and accuracy. This is not to say that a nomenclature based on intergenic regions would not be valid, and it might be especially suitable for predicting some phenotypic characters, but it would still be subject to problem (ii), depending on the sequencing technology used. Such a nomenclature system should stand beside, rather than be combined with or used in place of, phylogenetic typing. However, we could certainly investigate the relationship between an isolate's LIN code and regulatory mutations in the future.

      Reviewer #3 (Public review):

      Summary:

      In this well-written manuscript, Unitt and colleagues propose a new, hierarchical nomenclature system for the pathogen Neisseria gonorrhoeae. The proposed nomenclature addresses a longstanding problem in N. gonorrhoeae genomics, namely that the highly recombinant population complicates typing schemes based on only a few loci and that previous typing systems, even those based on the core genome, group strains at only one level of genomic divergence without a system for clustering sequence types together. In this work, the authors have revised the core genome MLST scheme for N. gonorrhoeae and devised life identification numbers (LIN) codes to describe the N. gonorrhoeae population structure.

      Strengths:

      The LIN codes proposed in this manuscript are congruent with previous typing methods for Neisseria gonorrhoeae, like cgMLST groups, Ng-STAR, and NG-MAST. Importantly, they improve upon many of these methods as the LIN codes are also congruent with the phylogeny and represent monophyletic lineages/sublineages.

      The LIN code assignment has been implemented in PubMLST, allowing other researchers to assign LIN codes to new assemblies and put genomes of interest in context with global datasets.

      Weaknesses:

      The authors correctly highlight that cgMLST-based clusters can be fused due to "intermediate isolates" generated through processes like horizontal gene transfer. However, the LIN codes proposed here are also based on single linkage clustering of cgMLST at multiple levels. It is unclear whether future recombination or sequencing of previously unsampled diversity within N. gonorrhoeae will merge higher-level clusters, and if so, how this will impact the stability of the nomenclature.

      The authors have defined higher resolution thresholds for the LIN code scheme. However, they do not investigate how these levels correspond to previously identified transmission clusters from genomic epidemiology studies. It would be useful for future users of the scheme to know the relevant LIN code thresholds for these investigations.

      We thank the reviewer for their insightful comments. LIN codes do use multi-level single linkage clustering to define the cluster number of isolates. However, unlike previous applications of simple single linkage clustering such as N. gonorrhoeae core genome groups (Harrison et al., 2020), once assigned in LIN code, these cluster numbers are fixed within an unchanging barcode assigned to each isolate. Therefore, the nomenclature is stable, as the addition of new isolates cannot change previously established LIN codes.

      Cluster stability was considered during the selection of allelic mismatch thresholds. By choosing thresholds based on natural breaks in population structure (Figure 3), applying clustering statistics such as the silhouette score, and by assessing where cluster stability has been maintained within the previous core genome groups nomenclature, we can have confidence that the thresholds which we have selected will form stable clusters. For example, with core genome groups there has been significant group fusion with clusters formed at a threshold of 400 allelic differences, while clustering at a threshold of 300 allelic differences has remained cohesive over time (supported by a high silhouette score) and so was selected as an important threshold in the gonococcal LIN code. LIN codes have now been applied to >27,000 isolates in PubMLST, and the nomenclature has remained effective despite the continual addition of new isolates to this collection. The manuscript emphasises these points at lines 96 and 346.
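      To make the fixed-barcode idea concrete, the sketch below shows, under simplifying assumptions, how a new isolate's LIN code prefix could be inherited from its closest already-coded neighbour using a descending series of allelic-mismatch thresholds. The threshold list is an illustrative subset of the values discussed here, the data structures and function names are hypothetical, and this is not the PubMLST implementation.

      ```python
      # Minimal sketch of multi-level LIN code assignment by single linkage to the
      # closest existing isolate. Hypothetical data structures; not the PubMLST code.
      THRESHOLDS = [300, 25, 10, 7, 5, 3, 1, 0]  # illustrative subset; the real scheme has more levels

      def allelic_mismatches(profile_a, profile_b):
          """Count loci at which two cgMLST allele profiles differ."""
          return sum(1 for a, b in zip(profile_a, profile_b) if a != b)

      def assign_lin_code(new_profile, coded_isolates):
          """coded_isolates: list of (profile, lin_code) pairs already in the database.
          The new isolate inherits the code of its closest neighbour down to the deepest
          threshold it still satisfies; remaining positions would get fresh cluster numbers.
          Inherited positions are fixed and never change when more isolates are added."""
          best_profile, best_code = min(
              coded_isolates, key=lambda item: allelic_mismatches(new_profile, item[0]))
          dist = allelic_mismatches(new_profile, best_profile)
          shared = sum(1 for t in THRESHOLDS if dist <= t)  # thresholds are in descending order
          return best_code[:shared] + ["<new>"] * (len(THRESHOLDS) - shared)

      # Hypothetical example with three loci and two previously coded isolates
      existing = [((1, 2, 3), [4, 1, 1, 1, 1, 1, 1, 1]),
                  ((9, 9, 9), [7, 2, 2, 2, 2, 2, 2, 2])]
      print(assign_lin_code((1, 2, 4), existing))  # shares all levels except the 0-difference level
      ```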

      Work is in progress to explore what LIN code thresholds are generally associated with transmission chains. These will likely be the last 7 thresholds (25, 10, 7, 5, 3, 1, and 0 allelic differences), as previous work has suggested that isolates linked by transmission within one year are associated with <14 single nucleotide polymorphism differences (De Silva et al., 2016). The results of this analysis will be described in a future article, currently in preparation.

      Harrison, O.B., et al. Neisseria gonorrhoeae Population Genomics: Use of the Gonococcal Core Genome to Improve Surveillance of Antimicrobial Resistance. The Journal of Infectious Diseases 2020.

      De Silva, D., et al. Whole-genome sequencing to determine transmission of Neisseria gonorrhoeae: an observational study. The Lancet Infectious Diseases 2016;16(11):1295-1303.

      Reviewer #3 (Recommendations for the authors):

      (1) Data/code availability: While the genomic data and LIN codes are available in PubMLST and new isolates uploaded to PubMLST can be assigned a LIN code, it is also important to have software version numbers reported in the methods section and code/commands associated with the analysis in this manuscript (e.g. generation of core genome, statistical analysis, comparison with other typing methods) documented in a repository like GitHub.

      Software version numbers have been added to the manuscript. Scripts used to run the software have been compiled and documented on protocols.io, DOI: dx.doi.org/10.17504/protocols.io.4r3l21beqg1y/v1

      (2) Line 37: Missing "a" before "multi-drug resistant pathogen".

      This has been corrected in the text.

      (3) Line 60: Typo in geoBURST.

      The text refers to a tool called goeBURST (global optimal eBURST) as described in Francisco, A.P. et al., 2009. DOI: 10.1186/1471-2105-10-152. Therefore, “geoBURST” would be incorrect.

      (4) Line 136-138: It might be helpful to discuss how premature stop codons are treated in this scheme. Often in isolates with alleles containing early premature stop codons, annotation software like prokka will annotate two separate ORFs, which are then clustered with pangenome software like PIRATE. How does the cgMLST scheme proposed here treat premature stop codons? Are sequences truncated at the first stop codon, or is the nucleotide sequence for the entire gene used even if it is out of frame?

      In PubMLST, alleles with premature stop codons are flagged, but otherwise annotated from the typical start to the usual stop codon, if still present. This also applies to frameshift mutations – a new unique allele will be annotated, but flagged as frameshift. In both cases, each new allele with a premature stop codon or frameshift will require human curator involvement to be assigned, to ensure rigorous allele assignment. As the Ng cgMLST v2 scheme prioritised readily auto-annotated genes, loci which are prone to internal stop codons or frameshifts with inconsistent start/end codons are excluded from the scheme. The text has been updated at line 128 to mention this.
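      A minimal sketch of the kind of flagging described here, assuming a simple in-frame scan for internal stop codons and a length check for frameshifts; the function name and flag labels are hypothetical, and this is not the PubMLST curation code, which additionally involves a human curator.

      ```python
      # Minimal sketch: flag an allele sequence containing a premature (internal)
      # stop codon, or whose length suggests a frameshift. Hypothetical helper only.
      STOP_CODONS = {"TAA", "TAG", "TGA"}

      def flag_allele(seq):
          seq = seq.upper()
          flags = []
          if len(seq) % 3 != 0:
              flags.append("frameshift")  # length not a multiple of 3 (simplified check)
          codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
          # Any stop codon before the final codon is treated as premature.
          if any(c in STOP_CODONS for c in codons[:-1]):
              flags.append("internal stop codon")
          return flags or ["ok"]

      print(flag_allele("ATGAAATAAGGGTGA"))  # -> ['internal stop codon']
      ```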

      (5) Line 213-214: What were the versions of software and parameters used for phylogenetic tree construction?

      Version numbers have been added to the text between lines 214-219. Parameters have been included with the scripts documented at protocols.io DOI: dx.doi.org/10.17504/protocols.io.4r3l21beqg1y/v1

      (6) Line 249: K. pneumoniae may also be a more diverse/older species than N. gonorrhoeae.

      The text has been updated at line 252-253 to emphasize the difference in diversity. The age of N. gonorrhoeae as a species is a matter of scientific debate, and out of the scope of this paper to discuss.

      (7) Line 278-279: Were some isolates unable to be typed, or have they just been added since the LIN code assignment occurred?

      Some genomes cannot be assigned a LIN code due to poor genome quality. A minimum of 1405/1430 core genes must have an allele designated for a LIN code to be assigned. Genomes with large numbers of contigs may not meet this requirement. LIN code assignment is an ongoing process that occurs on a weekly basis in PubMLST, performed in batches starting at 23:00 (UK local time) on Sundays. The text has been updated to describe this at lines 196 and 282-283.
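      As a simple illustration of the quality gate described above (at least 1405 of the 1430 core loci must have a designated allele), a hypothetical check might look like the sketch below; it is not the PubMLST implementation.

      ```python
      # Minimal sketch of the quality gate for LIN code assignment. Hypothetical helper.
      REQUIRED_DESIGNATED = 1405
      TOTAL_LOCI = 1430

      def lin_code_assignable(allele_profile):
          """allele_profile: dict mapping locus name -> allele id, or None if undesignated."""
          designated = sum(1 for allele in allele_profile.values() if allele is not None)
          return designated >= REQUIRED_DESIGNATED

      # Example: a fragmented assembly with 30 undesignated loci fails the gate.
      profile = {f"locus_{i}": (i if i < TOTAL_LOCI - 30 else None) for i in range(TOTAL_LOCI)}
      print(lin_code_assignable(profile))  # False (only 1400 loci designated)
      ```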

      (8) Line 314-315: Was BAPS rerun on the dataset used in this manuscript, or is this based on previously assigned BAPS groups?

      This was based on previously assigned BAPS groups, as described between lines 315-320.

      (9) Line 421-423: Are there options for assigning LIN codes that do not require uploading genomes to PubMLST? I can imagine that there may be situations where researchers or public health institutions cannot share genomic data prior to publication.

      Isolate data does not need to be shared publicly to be uploaded and assigned a LIN code in PubMLST. Data owners can create a private dataset within PubMLST, viewable only to them, on which automated assignment will be performed. LIN coding does require a central repository of genomes against which new codes are assigned. The text has been updated to emphasize this at lines 197 and 427.

      (10) Figure 6: How is this tree rooted? Additionally, do isolates that have unannotated LIN codes represent uncommon LIN codes or were those isolates not typed?

      The tree has been left unrooted, as it is being used to visualise the relationships between the isolates rather than to explore ancestry. Detail on what LIN codes have been annotated can be found in the figure legend, which describes that the 21 most common LIN code lineages in this 1000 isolate dataset have been labelled. All 1000 isolates used in the tree had a LIN code assigned, but to ensure good legibility not all lineages were annotated on the tree. The legend has been updated to improve clarity.

    1. Reviewer #2 (Public review):

      Summary:

      In this paper, the authors describe a novel function involving the cell cycle protein kinase CDK2, which binds to TBK1 (an essential component of the innate immune response) leading to its degradation in a ubiquitin/proteasome-dependent manner. Moreover, the E3 ubiquitin ligase, Dtx4, is implicated in the process by which CDK2 increases the K48-linked ubiquitination of TBK1. This paper presents intriguing findings on the function of CDK2 in lower vertebrates, particularly its regulation of IFN expression and antiviral immunity.

      Strengths:

      (1) The research employs a variety of experimental approaches to address a single question. The data are largely convincing and appear to be well executed.

      (2) The evidence is strong and includes a combination of in vivo and in vitro experiments, including knockout models, protein interaction studies, and ubiquitination analyses.

      (3) This study significantly impacts the field of immunology and virology, particularly concerning the antiviral mechanisms in lower vertebrates. The findings provide new insights into the regulation of IFN expression and the broader role of CDK2 in immune responses. The methods and data presented in this paper are highly valuable for the scientific community, offering new avenues for research into antiviral strategies and the development of therapeutic interventions targeting CDK2 and its associated pathways.

    1. Reviewer #1 (Public review):

      Summary:

      The microbiota of Dactylorhiza traunsteineri, an endangered marsh orchid, forms complex root associations that support plant health. Using 16S rRNA sequencing, we identified dominant bacterial phyla in its rhizosphere, including Proteobacteria, Actinobacteria, and Bacteroidota. Deep shotgun metagenomics revealed high-quality MAGs with rich metabolic and biosynthetic potential. This study provides key insights into root-associated bacteria and highlights the rhizosphere as a promising source of bioactive compounds, supporting both microbial ecology research and orchid conservation.

      Strengths:

      The manuscript presents an investigation of the bacterial communities in the rhizosphere of D. traunsteineri using advanced metagenomic approaches. The topic is relevant, and the techniques are up-to-date; however, the study has several critical weaknesses.

      Weaknesses:

      (1) Title: The current title is misleading. Given that fungi are the primary symbionts in orchids and were not analyzed in this study (nor were they included among other microbial groups), the use of the term "microbiome" is not appropriate. I recommend replacing it with "bacteriome" to better reflect the scope of the work.

      (2) Line 124: The phrase "D. traunsteineri individuals were isolated" seems misleading. A more accurate description would be "individuals were collected", as also mentioned in line 128.

      (3) Experimental design: The major limitation of this study lies in its experimental design. The number of plant individuals and soil samples analyzed is unclear, making it difficult to assess the statistical robustness of the findings. It is also not well explained why the orchids were collected two years before the rhizosphere soil samples. Was the rhizosphere soil collected from the same site and from remnants of the previously sampled individuals in 2018? This temporal gap raises serious concerns about the validity of the biological associations being inferred.

      (4) Low sample size: In lines 249-251 (Results section), the authors mention that only one plant individual was used for identifying rhizosphere bacteria. This is insufficient to produce scientifically robust or generalizable conclusions.

      (5) Contextual limitations: Numerous studies have shown that plant-microbe interactions are influenced by external biotic and abiotic factors, as well as by plant age and population structure. These elements are not discussed or controlled for in the manuscript. Furthermore, the ecological and environmental conditions of the site where the plants and soil were collected are poorly described. The number of biological and technical replicates is also not clearly stated.

      (6) Terminology: Throughout the manuscript, the authors refer to the "microbiome," though only bacterial communities were analyzed. This terminology is inaccurate and should be corrected consistently.

      Considering the issues addressed, particularly regarding experimental design and data interpretation, significant improvements to the study are needed.

    2. Author response:

      Reviewer #1 (Public review):

      The microbiota of Dactylorhiza traunsteineri, an endangered marsh orchid, forms complex root associations that support plant health. Using 16S rRNA sequencing, we identified dominant bacterial phyla in its rhizosphere, including Proteobacteria, Actinobacteria, and Bacteroidota. Deep shotgun metagenomics revealed high-quality MAGs with rich metabolic and biosynthetic potential. This study provides key insights into root-associated bacteria and highlights the rhizosphere as a promising source of bioactive compounds, supporting both microbial ecology research and orchid conservation.  

      The manuscript presents an investigation of the bacterial communities in the rhizosphere of D. traunsteineri using advanced metagenomic approaches. The topic is relevant, and the techniques are up-to-date; however, the study has several critical weaknesses.  

      We thank the reviewer for their careful reading of our manuscript and for the constructive comments. We will revise the manuscript substantially. Our responses to the specific points are below:

      (1) Title: The current title is misleading. Given that fungi are the primary symbionts in orchids and were not analyzed in this study (nor were they included among other microbial groups), the use of the term "microbiome" is not appropriate. I recommend replacing it with "bacteriome" to better reflect the scope of the work.

      In the revised manuscript, we will expand the Results (shotgun sequencing) and Discussion to also include fungal taxa. With these additions, the use of the term microbiome will accurately reflect the inclusion of both bacterial and fungal components.

      (2) Line 124: The phrase "D. traunsteineri individuals were isolated" seems misleading. A more accurate description would be "individuals were collected", as also mentioned in line 128.

      This ambiguity will be corrected in the revised manuscript.

      (3) Experimental design: The major limitation of this study lies in its experimental design. The number of plant individuals and soil samples analyzed is unclear, making it difficult to assess the statistical robustness of the findings. It is also not well explained why the orchids were collected two years before the rhizosphere soil samples. Was the rhizosphere soil collected from the same site and from remnants of the previously sampled individuals in 2018? This temporal gap raises serious concerns about the validity of the biological associations being inferred.

      In the revised manuscript, we will explicitly state the number of individuals and soil samples included in the study, and we will more clearly describe the sequence of sampling events. We will also add a dedicated statement in the Discussion addressing the temporal gap between plant sampling and rhizosphere soil collection, acknowledging that this is a limitation of the study.

      (4) Low sample size: In lines 249-251 (Results section), the authors mention that only one plant individual was used for identifying rhizosphere bacteria. This is insufficient to produce scientifically robust or generalizable conclusions.

      In the revised manuscript, we will clearly state that only one rhizosphere sample was available and will frame the study as exploratory in nature. We will explicitly acknowledge this limitation in both the Methods and Discussion, and we will temper our conclusions accordingly.

      (5) Contextual limitations: Numerous studies have shown that plant-microbe interactions are influenced by external biotic and abiotic factors, as well as by plant age and population structure. These elements are not discussed or controlled for in the manuscript. Furthermore, the ecological and environmental conditions of the site where the plants and soil were collected are poorly described. The number of biological and technical replicates is also not clearly stated.

      In the revised manuscript, we will expand the description of the collection site and environmental conditions to the extent supported by our records. We will also clearly state the number of biological and technical replicates used for each analysis. In the Discussion, we will explicitly acknowledge that plant age, environmental variables, and other biotic/abiotic factors may influence plant–microbe interactions and were not directly assessed in this study.

      (6) Terminology: Throughout the manuscript, the authors refer to the "microbiome," though only bacterial communities were analyzed. This terminology is inaccurate and should be corrected consistently.

      As noted in our response to point (1), we will revise terminology throughout the manuscript to ensure consistency and to accurately reflect the expanded bacterial and fungal coverage in the revised version.

      Reviewer #2 (Public review):

      The authors aim to provide an overview of the D. traunsteineri rhizosphere microbiome on a taxonomic and functional level, through 16S rRNA amplicon analysis and shotgun metagenome analysis. The amplicon sequencing shows that the major phyla present in the microbiome belong to phyla with members previously found to be enriched in rhizospheres and bulk soils. Their shotgun metagenome analysis focused on producing metagenome assembled genomes (MAGs), of which one satisfies the MIMAG quality criteria for high-quality MAGs and three those for medium-quality MAGs. These MAGs were subjected to functional annotations focusing on metabolic pathway enrichment and secondary metabolic pathway biosynthetic gene cluster analysis. They find 1741 BGCs of various categories in the MAGs that were analyzed, with the high-quality MAG being claimed to contain 181 SM BGCs. The authors provide a useful, albeit superficial, overview of the taxonomic composition of the microbiome, and their dataset can be used for further analysis.

      The conclusions of this paper are not well-supported by the data, as the paper only superficially discusses the results, and the functional interpretation based on taxonomic evidence or generic functional annotations does not allow drawing any conclusions on the functional roles of the orchid microbiota.  

      We thank the reviewer for their thoughtful and constructive assessment of our manuscript. The comments have been very helpful in identifying areas where the clarity, structure, and interpretation of our work can be improved. Our responses to the specific points are below:

      (1) The authors only used one individual plant to take samples. This makes it hard to generalize about the natural orchid microbiome.

      We agree with the reviewer that the limited number of plant individuals restricts the generality of the conclusions. In the revised manuscript, we will clearly state that only one rhizosphere sample was available for analysis and will frame the study as exploratory. We will also explicitly acknowledge this limitation in the Discussion and ensure that our interpretations and conclusions remain appropriately cautious.

      (2) The authors use both 16S amplicon sequencing and shotgun metagenomics to analyse the microbiome. However, the authors barely discuss the similarities and differences between the results of these two methods, even though comparing these results may be able to provide further insights into the conclusions of the authors. For example, the relative abundance of the ASVs from the amplicon analysis is not linked to the relative abundances of the MAGs.

      In the revised manuscript, we will expand the Results and Discussion to include a clearer comparison between the taxonomic profiles derived from 16S amplicon sequencing and those obtained from shotgun metagenomic binning.

      (3) Furthermore, the authors discuss that phyla present in the orchid microbiome are also found in other microbiomes and are linked to important ecological functions. However, their results reach further than the phylum level, and a discussion of genera or even species is lacking. The phyla that were found have very large within-phylum functional variability, and reliable functional conclusions cannot be drawn based on taxonomic assignment at this level, or even the genus level (Yan et al. 2017).

      In the revised manuscript, we will incorporate taxonomic discussion at finer resolution where reliable assignments are available. We will also revise the Discussion to avoid overinterpreting phylum-level taxonomy in terms of ecological function.

      (4) Additionally, although the authors mention their techniques used, their method section is sometimes not clear about how samples or replicates were defined. There are also inconsistencies between the methods and the results section, for example, regarding the prediction of secondary metabolite biosynthetic gene clusters (BGCs).

      In the revised Methods section, we will clearly define the number and type of samples included in each analysis, specify the number of replicates and how they were handled, and provide a clearer description of the biosynthetic gene cluster (BGC) prediction workflow, including the tools used and how results were interpreted. 

      (5) The BGC prediction was done with several tools, and the unusually high number of found BGCs (181 in their high-quality MAG) is likely due to false positives or fragmented BGCs. The numbers are much higher than any numbers ever reported in literature supported by functional evidence (Amos et al, 2017), even in a prolific genus like Streptomyces (Belknap et al., 2020). This caveat is not discussed by the authors.

      We thank the reviewer for this important point. Our original intention was to present the BGC predictions as a resource for future exploration, which is why multiple tools were used. However, we understand how this approach may lead to confusion, particularly regarding the confidence level of the predicted clusters and the potential inflation of counts due to assembly fragmentation or tool sensitivity. In the revised manuscript, we will thoroughly revise this section to clearly distinguish high-confidence predictions from more exploratory findings. We will focus on results supported by stronger evidence, explicitly qualify lower-confidence predictions as putative, and temper any functional interpretations accordingly.

      (6) The authors have generated one high-quality MAG and three medium-quality MAGs. In the discussion, they present all four of these as high-quality, which could be misleading. The authors discuss what was found in the literature about the role of the bacterial genera/phyla linked to these MAGs in plant rhizospheres, but they do not sufficiently link their own analysis results (metabolic pathway enrichment and biosynthetic gene cluster prediction) to this discussion. The results of these analyses are only presented in tables without further explanation in either the results section or the discussion, even though there may be interesting findings. For example, the authors only discuss the class of the BGCs that were found, but don't search for experimentally verified homologs in databases, which could shed more light on the possible functional roles of BGCs in this microbiome.

      In the revised manuscript, we will ensure that MAG quality is described accurately and consistently throughout, distinguishing clearly between high-quality and medium-quality bins according to accepted standards.

      (7) In the conclusions, the authors state: "These analyses uncovered potential metabolic capabilities and biosynthetic potentials that are integral to the rhizosphere's ecological dynamics." I don't see any support for this. Mentioning that certain classes of BGCs are present is not enough to make this claim, in my opinion. Any BGC is likely important for the ecological niche the bacteria live in. The fact that rhizosphere bacteria harbour BGCs is not surprising, and it doesn't tell us more than is already known.

      In the revised manuscript, we will rewrite the conclusion to reflect a more cautious interpretation, focusing on the potential metabolic and biosynthetic capabilities suggested by the data without asserting ecological roles that cannot be directly supported. These capabilities will be presented as hypotheses for future investigation rather than established ecological features.

    1. Reviewer #1 (Public review):

      Summary:

      This manuscript addresses an important methodological issue - the fragility of meta-analytic findings - by extending fragility concepts beyond trial-level analysis. The proposed EOIMETA framework provides a generalizable and analytically tractable approach that complements existing methods such as the traditional Fragility Index and Atal et al.'s algorithm. The findings are significant in showing that even large meta-analyses can be highly fragile, with results overturned by very small numbers of event recodings or additions. The evidence is clearly presented, supported by applications to vitamin D supplementation trials, and contributes meaningfully to ongoing debates about the robustness of meta-analytic evidence. Overall, the strength of evidence is moderate to strong, though some clarifications would further enhance interpretability.
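      For orientation, the trial-level Fragility Index that EOIMETA generalises can be illustrated with a short sketch: starting from a significant 2x2 table, non-events are recoded as events in one arm until a Fisher exact test loses significance, and the number of recodings is the index. The counts below are hypothetical and the code is not the manuscript's analysis.

      ```python
      # Minimal sketch of the traditional (trial-level) Fragility Index: the smallest
      # number of non-events recoded as events in one arm that turns a significant
      # Fisher exact test non-significant. Hypothetical counts; not the paper's code.
      from scipy.stats import fisher_exact

      def fragility_index(events_a, total_a, events_b, total_b, alpha=0.05):
          flips = 0
          e_a = events_a
          while e_a < total_a:
              _, p = fisher_exact([[e_a, total_a - e_a], [events_b, total_b - events_b]])
              if p >= alpha:
                  return flips          # significance lost after this many recodings
              e_a += 1                  # recode one non-event as an event in arm A
              flips += 1
          return flips

      # Hypothetical trial: 10/100 events in arm A vs 25/100 in arm B
      print(fragility_index(10, 100, 25, 100))
      ```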

      Strengths:

      (1) The manuscript tackles a highly relevant methodological question on the robustness of meta-analytic evidence.

      (2) EOIMETA represents an innovative extension of fragility concepts from single trials to meta-analyses.

      (3) The applications are clearly presented and highlight the potential importance of fragility considerations for evidence synthesis.

      Weaknesses:

      (1) The rationale and mathematical details behind the proposed EOI and ROAR methods are insufficiently explained. Readers are asked to rely on external sources (Grimes, 2022; 2024b) without adequate exposition here. At a minimum, the definitions, intuition, and key formulas should be summarized in the manuscript to ensure comprehensibility.

      (2) EOIMETA is described as being applicable when heterogeneity is low, but guidance is missing on how to interpret results when heterogeneity is high (e.g., large I²). Clarification in the Results/Discussion is needed, and ideally, a simulation or illustrative example could be added.

      (3) The manuscript would benefit from side-by-side comparisons between the traditional FI at the trial level and EOIMETA at the meta-analytic level. This would contextualize the proposed approach and underscore the added value of EOIMETA.

      (4) Scope of FI: The statement that FI applies only to binary outcomes is inaccurate. While originally developed for dichotomous endpoints, extensions exist (e.g., Continuous Fragility Index, CFI). The manuscript should clarify that EOIMETA focuses on binary outcomes, but FI, as a concept, has been generalized.

    2. Reviewer #3 (Public review):

      Summary and strengths:

      In this manuscript, Grimes presents an extension of the Ellipse of Insignificance (EOI) and Region of Attainable Redaction (ROAR) metrics to the meta-analysis setting as metrics for fragility and robustness evaluation of meta-analyses. The author applies these metrics to three meta-analyses of vitamin D and cancer mortality, finding substantial fragility in their conclusions. Overall, I think the extension/adaptation is a conceptually valuable addition to meta-analysis evaluation, and the manuscript is generally well-written.

      Specific comments:

      (1) The manuscript would benefit from a clearer explanation of in what sense EOIMETA is generalizable. The author mentions this several times, but without a clear explanation of what they mean here.

      (2) The author mentions that the proposed tools assume low between-study heterogeneity. Could the author illustrate mathematically in the paper how the between-study heterogeneity would influence the proposed measures? Moreover, the between-study heterogeneity is high in Zhang et al.'s 2022 study. It would be a good place to comment on the influence of such high heterogeneity on the results, and specifying a practical heterogeneity cutoff would better guide future users.

      (3) I think clarifying the concepts of "small effect", "fragile result", and "unreliable result" would be helpful for preventing misinterpretation by future users. I am concerned that the audience may be confusing these concepts. A small effect may be related to a fragile meta-analysis result. A fragile meta-analysis doesn't necessarily mean wrong/untrustworthy results. A fragile but precise estimate can still reflect a true effect, but whether that size of true effect is clinically meaningful is another question. Clarifying the effect magnitude, fragility, and reliability in the discussion would be helpful.

    1. Reviewer #3 (Public review):

      This important study by Bohorquez et al examines the determinants necessary for concentrating the spatial modulator of cell division, MinD, at the future site of division and the cell poles. Proper localization of MinD is necessary to bring the division inhibitor, MinC, in proximity to the cell membrane and cell poles where it prevents aberrant assembly of the division machinery. In contrast to E. coli, in which MinD oscillates from pole to pole courtesy of a third protein, MinE, how MinD localization is achieved in B. subtilis, which does not encode a MinE analog, has remained largely a mystery. The authors present compelling data indicating that MinD dimerization is dispensable for membrane localization but required for concentration at the cell poles. Dimerization is also important for interactions between MinD and MinC, leading to the formation of large protein complexes. Computational modeling, specifically a Monte Carlo simulation, supports a model in which differences in diffusion rates between MinD monomers and dimers lead to concentration of MinD at cell poles. Once there, interaction with MinC increases the size of the complex, further reinforcing diffusion differences. Notably, interactions with MinJ, which has previously been implicated in MinCD localization, are dispensable for concentrating MinD at cell poles, although MinJ may help stabilize the MinCD complex at those locations.

      Comments on revisions:

      I believe the authors put respectable effort into revisions and addressing reviewer comments, particularly those that focused on the strengths of the original conclusions. The language in the current version of the manuscript is more precise and the overall product is stronger.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      The authors used fluorescence microscopy, image analysis, and mathematical modeling to study the effects of membrane affinity and diffusion rates of MinD monomer and dimer states on MinD gradient formation in B. subtilis. To test these effects, the authors experimentally examined MinD mutants that lock the protein in specific states, including the apo monomer (K16A), ATP-bound monomer (G12V), and ATP-bound dimer (D40A, hydrolysis defective), and compared them to wild-type MinD. Overall, the experimental results support the conclusion that reversible membrane binding of MinD is critical for the formation of the MinD gradient, but that the binding affinities between monomers and dimers are similar.

      The modeling part is a new attempt to use the Monte Carlo method to test the conditions for the formation of the MinD gradient in B. subtilis. The modeling results provide good support for the observations and find that the MinD gradient is sensitive to different diffusion rates between monomers and dimers. This simulation is based on several assumptions and predictions, which raises new questions that need to be addressed experimentally in the future. However, the current story is sufficient without testing these assumptions or predictions.
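      To give a feel for the kind of kinetic Monte Carlo model discussed here (and not the authors' actual simulation), the sketch below moves particles on a one-dimensional lattice, lets monomers hop more often than dimers, and biases dimerisation near the poles as a loose stand-in for MinJ stimulation; all rates, sizes and rules are hypothetical.

      ```python
      # Minimal kinetic Monte Carlo sketch (1D lattice) illustrating how a slowly
      # diffusing dimer species plus pole-biased dimerisation can produce a polar
      # gradient. Illustrative only; rates, sizes and rules are hypothetical and
      # this is not the authors' simulation.
      import numpy as np

      rng = np.random.default_rng(1)
      L = 100                                   # lattice sites along the cell axis
      N = 500                                   # particles
      P_HOP = {"monomer": 0.9, "dimer": 0.15}   # hop probability per update (diffusion)
      POLE = 10                                 # sites counted as "polar" at each end

      pos = rng.integers(0, L, size=N)
      state = np.array(["monomer"] * N, dtype=object)

      for _ in range(400_000):
          i = rng.integers(N)
          at_pole = pos[i] < POLE or pos[i] >= L - POLE
          # Dimerisation is strongly favoured at the poles (a MinJ-like stimulation),
          # and dimers slowly revert to monomers everywhere.
          if state[i] == "monomer" and rng.random() < (0.5 if at_pole else 0.01):
              state[i] = "dimer"
          elif state[i] == "dimer" and rng.random() < 0.02:
              state[i] = "monomer"
          # Diffusion: monomers hop more often than dimers.
          if rng.random() < P_HOP[state[i]]:
              pos[i] = np.clip(pos[i] + rng.choice([-1, 1]), 0, L - 1)

      polar_fraction = np.mean((pos < POLE) | (pos >= L - POLE))
      print(f"fraction of particles in polar zones: {polar_fraction:.2f} "
            f"(uniform expectation: {2 * POLE / L:.2f})")
      ```

      In this toy model the polar enrichment arises purely because particles near the poles spend more time in the slowly diffusing dimer state, which is the qualitative mechanism under discussion.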

      Reviewer #2 (Public review): 

      Summary:  

      Bohorquez et al. investigate the molecular determinants of intracellular gradient formation in the B. subtilis Min system. To this end, they generate B. subtilis strains that express MinD mutants that are locked in the monomeric or dimeric states, and also MinD mutants with amphipathic helices of varying membrane affinity. They then assess the mutants' ability to bind to the membrane and form gradients using fluorescence microscopy in different genetic backgrounds. They find that, unlike in the E. coli Min system, the monomeric form of MinD is already capable of membrane binding. They also show that MinJ is not required for MinD membrane binding and only interacts with the dimeric form of MinD. Using kinetic Monte Carlo simulations, the authors then test different models for gradient formation, and find that a MinD gradient along the cell axis is only formed when the polarly localized protein MinJ stimulates dimerization of MinD, and when the diffusion rate of monomeric and dimeric MinD differs. They also show that differences in the membrane affinity of MinD monomers and dimers are not required for gradient formation.

      Strengths:  

      The paper offers a comprehensive collection of the subcellular localization and gradient formation of various MinD mutants in different genetic backgrounds. In particular, the comparison of the localization of these mutants in a delta MinC and MinJ background offers valuable additional insights. For example, they find that only dimeric MinD can interact with MinJ. They also provide evidence that MinD locked in a dimer state may co-polymerize with MinC, resulting in a speckled appearance.  

      The authors introduce and verify a useful measure of membrane affinity in vivo.  

      The modulation of the membrane affinity by using distinct amphipathic helices highlights the robustness of the B. subtilis MinD system, which can form gradients even when the membrane affinity of MinD is increased or decreased.  

      Weaknesses:  

      The main claim of the paper, that differences in the membrane affinity between MinD monomers and dimers are not required for gradient formation, does not seem to be supported by the data. The only measure of membrane affinity presented is extracted from the transverse fluorescence intensity profile of cells expressing the mGFP-tagged MinD mutants. The authors measure the valley-to-peak ratio of the profile, which is lower than 1 for proteins binding to the membrane and higher than 1 for cytosolic proteins. To verify this measure of membrane affinity, they use a membrane dye and a soluble GFP, which results in values of ~0.75 and ~1.25, respectively. They then show that all MinD mutants have a value - roughly in the range of 0.8-0.9 - and they use this to claim that there are no differences in membrane affinity between monomeric and dimeric versions.  
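      To make the measure under discussion concrete, here is a small, hypothetical sketch of a valley-to-peak calculation on a transverse intensity profile (mean intensity at the cell centre divided by the mean of the membrane peaks); the profile and window sizes are invented, and this is not the authors' image-analysis code.

      ```python
      # Minimal sketch of a valley-to-peak ratio from a transverse fluorescence
      # intensity profile across a cell: values < 1 indicate membrane enrichment,
      # values > 1 a cytosolic distribution. Hypothetical profile and window sizes.
      import numpy as np

      def valley_to_peak(profile, edge=3, centre=3):
          """Mean of the central `centre` points divided by the mean of the `edge`
          brightest points in each half of the profile (the membrane peaks)."""
          profile = np.asarray(profile, dtype=float)
          mid = len(profile) // 2
          valley = profile[mid - centre // 2: mid + centre // 2 + 1].mean()
          left_peak = np.sort(profile[:mid])[-edge:].mean()
          right_peak = np.sort(profile[mid:])[-edge:].mean()
          return valley / ((left_peak + right_peak) / 2.0)

      # Hypothetical membrane-bound profile: bright at both edges, dimmer in the middle
      membrane_like = [0.2, 0.9, 1.0, 0.7, 0.6, 0.6, 0.7, 1.0, 0.9, 0.2]
      print(f"valley-to-peak ratio: {valley_to_peak(membrane_like):.2f}")  # < 1
      ```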

      While this way to measure membrane affinity is useful to distinguish between binders and non-binders, it is unclear how sensitive this assay is, and whether it can resolve more subtle differences in membrane affinity, beyond the classification into binders and non-binders. A dimer with two amphipathic helices should have a higher membrane affinity than a monomer with only one such copy. Thus, the data does not seem to support the claim that "the different monomeric mutants have the same membrane affinity as the wildtype MinD". The data only supports the claim that B. subtilis MinD monomers already have a measurable membrane affinity, which is indeed a difference from the E. coli Min system.  

      While their data does show that a stark difference between monomer and dimer membrane affinity may not be required for gradient formation in the B. subtilis case, it is also not prevented if the monomer is unable to bind to the membrane. They show this by replacing the native MinD amphipathic helix with the weak amphipathic helix NS4AB-AH. According to their membrane affinity assay, NS4AB-AH does not bind to the membrane as a monomer (Figure 4D), but when this helix is fused to MinD, MinD is still capable of forming a gradient (albeit a weaker one). Since the authors make a direct comparison to the E. coli MinDE systems, they could have used the E. coli MinD MTS instead or in addition to the NS4AB-AH amphipathic helix. The reviewer suspects that a fusion of the E. coli MinD MTS to B. subtilis MinD may also support gradient formation.  

      The paper contains insufficient data to support the many claims about cell filamentation and minicell formation. In many cases, statements like "did not result in cell filamentation" or "restored cell division" are only supported by a single fluorescence image instead of a quantitative analysis of cell length distribution and minicell frequency, as the one reported for a subset of the data in Figure 5.  

      The paper would also benefit from a quantitative measure of gradient formation of the distinct MinD mutants, instead of relying on individual fluorescent intensity profiles.  

      The authors compare their experimental results with the oscillating E. coli MinDE system and use it to define some of the rules of their Monte Carlo simulation. However, the description of the E. coli Min system is sometimes misleading or based on outdated findings.

      The Monte Carlo simulation of the gradient formation in B. subtilis could benefit from a more comprehensive approach:

      (1) While most of the initial rules underlying the simulation are well justified, the authors do not implement or test two key conditions:

      (a) Cooperative membrane binding, which is a key component of mathematical models for the oscillating E. coli Min system. This cooperative membrane binding has recently been attributed to MinD or MinCD oligomerization on the membrane and has been experimentally observed in various instances; in fact, the authors themselves show data supporting the formation of MinCD copolymers.  

      (b) Local stimulation of the ATPase activity of MinD which triggers the dimer-to-monomer transition; E. coli MinD ATP hydrolysis is stimulated by the membrane and by MinE, so B. subtilis MinD may also be stimulated by the membrane and/or other components like MinJ. Instead, the authors claim that (a) would only increase differences in diffusion between the monomer and different oligomeric species, and that a 2-fold increase in dimerization on the membrane could not induce gradient formation in their simulation, in the absence of MinJ stimulating gradient formation. However, a 2-fold increase in dimerization is likely way too low to explain any cooperative membrane binding observed for the E. coli Min system. Regarding (b), they also claim that implementing stimulation of ATP hydrolysis on the membrane (dimer-to-monomer transition) would not change the outcome, but no simulation result for this condition is actually shown.

      (2) To generate any gradient formation, the authors claim that they would need to implement stimulation of dimer formation by MinJ, but they themselves acknowledge the lack of any experimental evidence for this assertion. They then test all other conditions (e.g., differences in membrane affinity, diffusion, etc.) in addition to the requirement that MinJ stimulates dimer formation. It is unclear whether the authors tested all other conditions independently of the "MinJ induces dimerization" condition, and whether either of those alone or in combination could also lead to gradient formation. This would be an important test to establish the validity of their claims.

      Reviewer #3 (Public review): 

      This important study by Bohorquez et al examines the determinants necessary for concentrating the spatial modulator of cell division, MinD, at the future site of division and the cell poles. Proper localization of MinD is necessary to bring the division inhibitor, MinC, in proximity to the cell membrane and cell poles where it prevents aberrant assembly of the division machinery. In contrast to E. coli, in which MinD oscillates from pole to pole courtesy of a third protein, MinE, how MinD localization is achieved in B. subtilis, which does not encode a MinE analog, has remained largely a mystery. The authors present compelling data indicating that MinD dimerization is dispensable for membrane localization but required for concentration at the cell poles. Dimerization is also important for interactions between MinD and MinC, leading to the formation of large protein complexes. Computational modeling, specifically a Monte Carlo simulation, supports a model in which differences in diffusion rates between MinD monomers and dimers lead to the concentration of MinD at cell poles. Once there, interaction with MinC increases the size of the complex, further reinforcing diffusion differences. Notably, interactions with MinJ, which has previously been implicated in MinCD localization, are dispensable for concentrating MinD at cell poles, although MinJ may help stabilize the MinCD complex at those locations.

      Reviewer #1 (Recommendations for the authors):  

      (1) The title could be modified to better reflect the emphasis on MinD monomer and dimer diffusion rather than the fact that membrane affinity is not important in MinD gradient formation. In addition, because membrane association requires affinity for the membrane, this title seems inconsistent with statements in the main text, such as Lines 246-247: a reversible membrane association is important for the formation of a MinD gradient along the cell axis.

      We agree with the reviewer that the title can be more accurate, and we have now changed it to “Membrane affinity difference between MinD monomer and dimer is not crucial to MinD gradient formation in Bacillus subtilis”.

      (2) This paper reports that the difference in diffusion rates between MinD monomers and dimers is an important factor in the formation of Bs MinD gradients. However, one can argue for the importance of MinD monomers in the cellular context. Since the abundance of ATP in cells often far exceeds the abundance of MinD protein molecules under experimental conditions, MinD can easily form dimers in the cytoplasm. How does the author address this problem?  

      It is a good point that the ATP concentration in the cell likely favours dimers in the cytoplasm. However, what is important in our model is that there is cycling between monomer and dimer, rather than where exactly this happens. In fact, the gradient works essentially equally well if dimers can become monomers only whilst they are at the membrane, as we have mentioned in the manuscript (lines 324-326 in the original manuscript). However, in the original manuscript this simulation was not shown, and we have now included it in the new Fig. 8D & E.

      (3) The claim "This oscillating gradient requires cycling of MinD between a monomeric cytosolic and a dimeric membrane attached state." (Lines 46, 47) is not well supported by most current studies and needs to be revised since, to my knowledge, most proposed models do not consider the monomer state. The basic reaction steps of Ec Min oscillations include ATP-bound MinD dimers attaching to the membrane that subsequently recruit more MinD dimers and MinE dimers to the membrane; MinE interactions stimulate ATP hydrolysis in MinD, leading to dissociation of ADP-bound MinD dimers from the membrane; nucleotide exchange occurs in the cytoplasm.

      Here the reviewer refers to a sentence in a short “Importance” abstract that we had added. In fact, such an abstract is not necessary, so we have removed it. Of note, the E. coli MinD oscillation, including the role of MinE, is described in detail in the Introduction.

      A recent reference is a paper by Heermann et al. (2020; doi: 10.1016/j.jmb.2020.03.012), which considers the MinD monomer state, which is not mentioned in this work. How do their observations compare to this work?  

      The Heermann paper mentions that MinD bound to the membrane displays an interface for multimerization, and that this contributes to the local self-enhancement of MinD at the membrane. In our Discussion, we do mention that E. coli MinD can form polymers in vitro and that any multimerization of MinD dimers will further increase the diffusion difference between monomer and dimer, and might contribute to the formation of a protein gradient (lines 459-467). We have now included a reference to the Heermann paper (line 461).

      (4) Throughout the manuscript, errors in citing references were found in several places.                 

      We have corrected this where suggested.

      (5) The introduction may be somewhat misleading due to mixed information from experimental cellular results, in vitro reconstructions, and theoretical models in cells or in vitro environments. Some models consider space constraints, while others do not. Modifications are recommended to clarify differences.  

      See below for our responses.

      (6) The citation for MinD monomers:

      The paper by Hu and Lutkenhaus (2003, doi: 10.1046/j.1365-2958.2003.03321.x.) contains experimental evidence showing monomer-dimer transition using purified proteins. Another paper by the same laboratory (Park et al. 2012, doi: 10.1111/j.1365-2958.2012.08110.x.) explained how ATP-induced dimerization, but this paper is not cited.  

      The Park et al. 2012 paper focuses on the asymmetric activation of the MinD ATPase by MinE, which goes beyond the scope of our work. However, we have cited several other papers from the Lutkenhaus lab, including the Wu et al. 2011 paper describing the structure of the MinD-ATP complex.

      Other evidence comes from structural studies of Archaea Pyrococcus furiosus (1G3R) and Pyrococcus horikoshii (1ION), and thermophilic Aquifex aeolicus (4V01, 4V02, 4V03). As they may function differently from Ec MinD, they are less relevant to this manuscript.

      We agree. 

      (7) Lines 65, 66: Using the term 'a reaction-diffusion couple' to describe the biochemical facts by citing references of Hu and Lutkenhaus (1999) and Raskin and de Boer (1999) is not appropriate. The idea that the Min system behaves as a reaction-diffusion system was started by Howard et al. (2001), Meinhardt and de Boer (2001), and Huang et al. (2003) et al. In addition, references for MinE oscillation are missing. 

      We have now corrected this (line 52).

      (8) Lines 77-79: Citations are incorrect.

      ATP-induced dimerization: Hu and Lutkenhaus (2003, DOI: 10.1046/j.1365-2958.2003.03321.x), Park et al. (2012). C-terminal amphipathic helix formation: Szeto et al. (2003), Hu and Lutkenhaus (2003, DOI: 10.1046/j.1365-2958.2003.03321.x).

      Citations have been corrected.

      (9) Line 78: The C-terminal amphipathic helix is not pre-formed and then exposed upon conformational change induced by ATP-binding. This alpha-helical structure is an induced fold upon interaction with membranes as experimentally demonstrated by Szeto et al. (2003).  

      We have adjusted the text to correct this (lines 64-66).

      (10) Line 102: 'cycles between membrane association and dissociation of MinD' also requires MinE in addition to ATP.

      We believe that in the context of this sentence and following paragraph it is not necessary to again mention MinE, since it is focused on parallels between the E. coli and B. subtilis MinD membrane binding cycles.

      (11) In the introduction, could the author briefly explain to a general audience the difference between Monte Carlo and reaction-diffusion methods? How do different algorithms affect the results?

      The main difference between the kinetic Monte Carlo and typical reaction-diffusion methods that is relevant to our work is that the first is particle-based and naturally includes statistical fluctuations (noise), whereas the second is field-based and, in the normal implementation, deterministic, so it does not include noise. While one can in principle include noise in field-based reaction-diffusion methods, this is rarely done. Additionally, although we do not do this here, the kinetic Monte Carlo approach can in principle also account for particle shape (sphere versus rod), or for localized interactions (such as sticky patches on the surface); the kinetic Monte Carlo method is therefore more microscopic in nature. We have now briefly described the difference in lines 102-105.
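
      To make this distinction concrete for a general audience, the toy sketch below (our own illustration for this response, not the simulation code used in the manuscript; particle numbers, rates and lattice size are arbitrary) contrasts a particle-based random walk, whose density profile fluctuates from run to run, with a deterministic finite-difference update of the diffusion equation, which always returns exactly the same profile.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_particles, n_steps = 100, 200, 1000   # arbitrary toy values

# Particle-based (kinetic Monte Carlo flavour): each particle hops one lattice
# site left or right per step; the resulting density carries intrinsic noise.
positions = np.full(n_particles, L // 2)
for _ in range(n_steps):
    positions = np.clip(positions + rng.choice([-1, 1], size=n_particles), 0, L - 1)
particle_profile = np.bincount(positions, minlength=L) / n_particles

# Field-based reaction-diffusion analogue: deterministic explicit finite-difference
# diffusion of a concentration field; repeated runs give identical profiles.
c = np.zeros(L)
c[L // 2] = 1.0
r = 0.25                                   # D * dt / dx^2, kept below 0.5 for stability
for _ in range(n_steps):
    c = c + r * (np.roll(c, 1) + np.roll(c, -1) - 2 * c)   # periodic boundaries for simplicity

print(particle_profile[L // 2 - 3:L // 2 + 4])   # noisy, changes with the random seed
print(c[L // 2 - 3:L // 2 + 4])                  # smooth and reproducible
```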

      (12)  Lines 126-128: The second part of the sentence uses the protein structure of Pyrococcus furiosus MinD (Ref 37) to support a protein sequence comparison between Ec and Bs MinD. However, the structure of the dimeric E. coli MinD-ATP complex (3Q9L) is available, which is Reference 38 that is more suited for direct comparison.

      To discuss monomeric MinD from P. furiosus, it will be useful to include it in the primary sequence alignment in Figure S1.

      We do not think that this detailed information is necessary to add to Figure S1, since the mutants have been described before (appropriate citations present in the text).

      (13) Lines 127, 166: Where Figure S1 is discussed, a structural model of MinD will be useful alongside with the primary sequence alignment.

      We do not think that this detailed information is necessary to understand the experiments since the mutants have been described before.

      (14) Lines 131-132: Reference is missing for the sentence "the conserved..."; it should be Reference 38. In Reference 38, there is no experimental evidence on G12; this is inferred from structure analysis. Reference 26 discusses ATP and MinE regulation of the interactions between MinD and phospholipid bilayers, not MinD dimerization.

      We have corrected this and added the proper references. 

      For easy reading, the mutant MinD phenotypes can be indicated here instead of in the figure legends, including K16A (apo monomer), MinD G12V (ATP-bound monomer), and MinD D40A (ATP-bound dimer, ATP hydrolysis deficient).  

      We have added the suggested descriptions of the mutants in the main text.

      (15) Lines 150-151: Unlike Ec MinD, which forms a clear gradient in one half of the cell, Bs MinD (wild type) mainly accumulates at the hemispheric poles. What percentage of a cell (or cell length) can be covered by the Bs MinD gradient? How does the shaded area in the longitudinal FIP compare to the area of the bacterial hemispherical pole? If possible, it might be interesting to compare with the range of nucleoid occlusion mechanisms that occur.

      Part of the MinD gradient covers the nucleoid area, since the fluorescence signal is still visible along the cell length, and there is no sudden drop in fluorescence, suggesting that nucleoid occlusion does not play a role.

      (16)  Line 160: In addition to summarizing the membrane-binding affinity, descriptions of the differences in the gradient distribution or formation will be useful.  

      We have done this in lines 155-156 of the original manuscript: “The monomeric ATP binding G12V variant shows the same absence of a protein gradient as the K16A variant”.

      (17) Line 262: 'distribution' is not shown.  

      We do not understand this remark. This information is shown in Fig. 5B (now Fig. 6B).

      (18)  Line 287: Wrong citation for reference 31.

      Reference has been corrected.

      (19)  Line 288 and lines 596 regarding the Monte Carlo simulation:

      (a)  An illustration showing the reaction steps for MinD gradient formation will help understand the rationale and assumptions behind this simulation.

      We have added an illustration depicting the different modelling steps in the new Fig. 8.

      (b)  Equations are missing.

      (c)   A table summarizing the parameters used in the simulation and their values.

      (d)  For general readers, it will be helpful to convert the simulation units to real units.

      (e)  Indicate real experimental data with a citation or the reason for any speculative value.

      The Methods section provides a discussion of all parameters used in the potentials on which our kinetic Monte Carlo algorithm is based. We have now also provided a Table in the SI (Table S1) with typical parameter values in both simulation units and real units. The experimental data and reasoning behind the values chosen are discussed in the Methods section (see “Kinetic Monte Carlo simulation”).

      (20)  Lines 320-321: Reference missing.

      The interaction between MinJ and the dimer form of MinD is based on our findings shown in the original Fig. S4, and this information has not been published before. We have rephrased the sentence to make it more clear. Of note, Fig. S4 has been moved to the main manuscript, at the request of reviewer #2, and is now new Fig. 2. 

      (21)  Lines 355-359: Is the statement specifically made for the Bs Min system? Is there any reference for the statement? Isn't the differences in diffusion rates between molecules 'at different locations' in the system more important than reducing their diffusion rates alone? It is unclear about the meaning of the statement "the Min system uses attachment to the membrane to slow down diffusion". Is this an assumption in the simulation?

      The statement is generic, however the reviewer has a good point and we have made this statement more clear by changing “considerably reduced diffusion rate” to “locally reduced diffusion rate” (line 359).

      (22) Line 403: Citation format.

      We have corrected the text and citation.

      (23) Lines 442-444: The parameters are not defined anywhere in the manuscript.

      Discussed in the M&M and in the new Table S1.

      (24) Lines 464-465: Regarding the final sentence, what does 'this prediction' refer to? Hasn't the author started with experimental observations, predicted possible factors of membrane affinity and diffusion rates, and used the simulation approach to disapprove or support the prediction?

      We have changed “prediction” to “suggestion”, to make it clear that it is related to the suggestion in the previous sentence that  “our modelling suggests that stimulation of MinD-dimerization at cell poles and cell division sites is needed.” (line 471).

      (25) Materials and Methods: Statistical methods for data analyses are missing.

      Added to “Microscopy” section.

      (26) References: References 34, 40, 51 are incomplete.

      References 34 and 40 have been corrected. Reference 51 is a book.

      (27)  Figures: The legends (Figures 1-7) can be shortened by removing redundant details in Material and Methods. Make sure statistical information is provided. The specific mutant MinD states, including Apo monomer, ATP-bound dimer, ATP hydrolysis deficient, and non-membrane binding etc can be specified in the main text. They are repeated in the legends of Figures 1 and 2.

      We have removed redundant details from the legends and provided statistical information.

      (28)  Supporting information:

      Table S1: Content of the acknowledgment statement may be moved to materials and methods and the acknowledgment section. Make sure statistical information is provided in the supporting figure legends.

      We are not sure what the reviewer means by the acknowledgement statement in Table S1 (now Table S2). Statistical information has been added.

      Figure S1. Adding a MinD structure model will be useful.

      We do not think that a structural model would add insight to our results, since our work is not focused on structural mutagenesis. The mutants that we use have been described in other papers that we have cited.

      Reviewer #2 (Recommendations for the authors):  

      The authors should cite and relate their data to the preprint by Feddersen & Bramkamp, BioRxiv 2024. ATPase activity of B. subtilis MinD is activated solely by membrane binding.

      We have now discussed this paper in relation to our data in lines 407-409. 

      I am not convinced the authors are able to make the statement in lines 160-161 based on their assay: "This confirmed that the different monomeric mutants have the same membrane affinity as wild-type MinD". It is unclear if measuring valley-to-peak ratios in their longitudinal profiles can resolve small differences in membrane affinity. Wildtype MinD should at least be dimeric, or (as the authors also note elsewhere) may even be present in higher-order structures and as such have a higher membrane affinity than a monomeric MinD mutant. The authors should rephrase the corresponding sections in the manuscript to state that the MinD monomer already has detectable membrane affinity, instead of stating that the monomer and dimer membrane affinity are the same.

      We agree that “the same affinity” is too strongly worded, and we have now rephrased this by saying that the different monomeric mutants have a membrane affinity comparable to that of wild-type MinD (line 152).

      According to the authors' analysis, MinD-NS4B would not bind to the membrane as it has a valley-to-peak ratio higher than 1, similar to the soluble GFP. However, the protein is clearly forming a gradient, and as such probably binding to the membrane. The authors should discuss this as a limitation of their membrane binding measure.

      The ratio value of 1 is not a cutoff for membrane binding. As shown in Fig. 1F, GFP has a valley-to-peak ratio close to 1.25, whereas the FM5-95 membrane dye has a ratio close to 0.75. In Fig. 3C (now Fig. 4C) we have shown that GFP fused with the NS4B membrane anchor has a lower ratio than free GFP, and we have shown the same in Fig. 4D (now Fig. 5D) for GFP-MinD-NS4B. The differences are small but clear, and not similar to free GFP.
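
      For readers who want to reproduce this kind of measure, a minimal sketch of how a valley-to-peak ratio could be computed from a transverse intensity profile is given below; this is our own illustration of the general idea rather than the analysis pipeline used in the manuscript, and the profile values are invented.

```python
import numpy as np

# Hypothetical transverse fluorescence profile across one cell: membrane peaks
# near the cell edges, cytoplasmic "valley" in the middle.
profile = np.array([0.20, 0.90, 1.00, 0.60, 0.50, 0.55, 0.65, 0.95, 1.00, 0.25])

n = len(profile)
peak = np.mean([profile[:n // 2].max(), profile[n // 2:].max()])   # mean of the two edge maxima
valley = profile[n // 3: 2 * n // 3].min()                         # minimum of the central third

print(f"valley-to-peak ratio: {valley / peak:.2f}")   # values well below 1 indicate membrane enrichment
```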

      The observation that MinD dimers are localized by MinJ is interesting and key to the rule of the Monte Carlo simulation that dimers attach to MinJ. However, the data is hidden in the supplementary information and is not analysed as comprehensively, e.g., it lacks the analysis of the membrane binding. The paper would benefit from moving the fluorescence images and accompanying analysis into the main text.  

      We have moved this figure to the main text and added an analysis of the fluorescence intensities (new Fig. 2).

      The authors should show the data for cell length and minicell formation, not only for the MinD amphipathic helix versions (Fig. 5), but also for the GFP-MinD, and all the MinD mutants. They do refer to some of this data in lines 145-148 but do not show it anywhere. They also refer to "did not result in cell filamentation" in line 213 and to "resulted in highly filamentous cells" and "Introduction of a minC deletion restored cell division" in lines 167-160 without showing the cell length and minicell data, but instead refer to the fluorescence image of the respective strain. I would suggest the authors include this data either in a subpanel in the respective figure or in the supplementary information.

      The effect of uncontrolled MinC activity is very apparent and leads to long filamentous cells. The occurrence of minicells is also apparent. The cell length distribution of wild-type cells is shown in Fig. 6B, and minicell formation is negligibly small in wild-type cells.

      The transverse fluorescence intensity profiles used as a measure for membrane binding are an average profile from ~30 cells. In the case of the longitudinal profiles that display the gradient, only individual profiles are displayed. I understand that because of distinct cell length, the longitudinal profiles cannot simply be averaged. However, it is possible to project the profiles onto a unit length for averaging (see for example the projection of profiles in McNamara. et al., BioRxiv (2023)). It would be more convincing to average these profiles, which would allow the authors to also quantify the gradients in more detail. If that is impossible, the authors may at least quantify individual valley-to-peak ratios of the longitudinal fluorescence profiles as a measure of the gradient.

      We agree that in future work it would be better to average the profiles as suggested. However, due to limited time and resources, we cannot do this for the current manuscript.
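
      For completeness, a minimal sketch of what such unit-length averaging might look like is given below; this is our own illustration under the assumption that each profile is a 1D intensity array, and it is not code from either our manuscript or the cited McNamara et al. preprint.

```python
import numpy as np

def average_profiles(profiles, n_points=100):
    """Resample per-cell longitudinal profiles onto a common unit-length axis and average them.

    profiles: list of 1D arrays of fluorescence intensity, one per cell,
              each potentially of a different length (different cell lengths).
    """
    unit_axis = np.linspace(0.0, 1.0, n_points)
    resampled = []
    for p in profiles:
        cell_axis = np.linspace(0.0, 1.0, len(p))        # relative position along the cell
        resampled.append(np.interp(unit_axis, cell_axis, p))
    resampled = np.asarray(resampled)
    return unit_axis, resampled.mean(axis=0), resampled.std(axis=0)

# Toy usage with made-up profiles of different lengths:
rng = np.random.default_rng(1)
toy_profiles = [np.abs(np.sin(np.linspace(0, np.pi, n))) + 0.1 * rng.standard_normal(n)
                for n in (80, 95, 110)]
x, mean_profile, sd_profile = average_profiles(toy_profiles)
```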

      Regarding the rules and parameters used for the Monte Carlo simulation (see also the corresponding sections in the public review):

      (1) The authors mention that they have not included multimerization of MinD in their simulation but argue in the discussion that it would only strengthen the differences in the diffusion between monomers and multimers. This is correct, but it may also change the membrane residence time and membrane affinity drastically.

      Simulation of multimerization is difficult, but we have now included a simulation whereby MinD dimers can also form tetramers (lines 341-348), shown in the new Fig. 8K. This did not alter the MinD gradient much. 

      (2) The authors implement a dimer-to-monomer transition rate that they equate with the stochastic ATP hydrolysis rate occurring with a half-life of approximately 1/s (line 305). They claim that this rate is based on information from E. coli and cite Huang and Wingreen. However, the Huang paper only mentions the nucleotide exchange rate from ADP to ATP at 1/s. Later that paper cites their use of an ATP hydrolysis rate of 0.7/s to match the E. coli MinDE oscillation rate of 40s. From the authors' statement, it is unclear to me whether they refer to the actual ATP hydrolysis rate in Huang and Wingreen or something else. For E. coli MinD, both the membrane and MinE stimulate ATPase activity. Even if B. subtilis lacks MinE, ATP hydrolysis may still be stimulated by the membrane, which has also been reported in another preprint (Feddersen & Bramkamp, BioRxiv 2024). It may also be stimulated by other components of the Min system like MinJ. The authors should include in the manuscript the Monte Carlo simulation implementing dimer to monomer transition on the membrane only, which is currently referred to only as "(data not shown)". 

      The exact value of the ATP hydrolysis rate is not so important here, so 1/s only gives the order of magnitude (in line with the 0.7/s above), which we have now clarified in lines 631-632. We have now also added the “(data not shown)” results to Fig. 8, i.e. simulations where dimer to monomer transitions (i.e. ATPase activity) occur only at the membrane (Fig. 8D & E, and lines 319-322).

      (3) How long did the authors simulate for? How many steps? What timesteps does the average pictured in Figure 7 correspond to?

      We simulated 10^7 time steps (corresponding to 100 s in real time). We have checked that the simulation steps over which we average are in steady state. Typical snapshots are recorded after 10^6-10^7 time steps, when the system is in steady state. We have added this information in lines 299-300.

      There are several misconceptions about the (oscillating E. coli) Min system in the main text:

      (1) Lines 77-78: "In case of the E. coli MinD, ATP binding leads to dimerization of MinD, which induces a conformational change in the C-terminal region, thereby exposing an amphiathic helix that functions as a membrane binding domain" and "This shows a clear difference with the E. coli situation, where dimerization of MinD causes a conformational change of the C-terminal region enabling the amphipathic helix to insert into the lipid bilayer" in lines 400-403 are incorrect. There is no evidence that the amphipathic helix at the C-terminus of MinD changes conformation upon ATP binding; several studies have shown instead that a single copy of the amphipathic helix is too weak to confer efficient membrane binding but that the dimerization confers increased membrane binding as now two amphipathic helices are present leading to an avidity effect in membrane binding. Please refer to the following papers (Szeto et al., JBC (2003); Wu et al., Mol Microbiol (2011); Park et al., Cell (2011); Heermann et al., JMB (2020); Loose et al., Nat Struct Mol Biol (2011); Kretschmer et al., ACS Syn Biol (2021); Ramm et al., Nat Commun (2018) or for a better overview the following reviews on the topic of the E. coli Min system Wettmann and Kruse, Philos Trans R Soc B Biol (2018), Ramm et al., Cell and Mol Life Sci (2019); Halatek et al., Philos Trans R SocB Biol Sci (2018).

      This is indeed incorrectly formulated, and we have now amended this in lines 64-66 and lines 403-406. Key papers are cited in the text.

      (2) The authors mention that E. coli MinD may multimerize, citing a study where purified MinD was found to polymerize, and then suggest that this is unlikely to be the case in B. subtilis as FRAP recovery of MinD is quick. However, cooperativity in membrane binding is essential to the mathematical models reproducing E. coli Min oscillations, and there is more recent experimental evidence that E. coli MinD forms smaller oligomers that differ in their membrane residence time and diffusion (e.g., Heermann et al., Nat Methods (2023); Heermann et al., JMB (2020);) I would suggest the authors revise the corresponding text sections and test the multimerization in their simulation (see above).

      As mentioned above, simulating oligomerization is difficult, but in order to approximate related cooperative effects, we have simulated a situation whereby MinD dimers can form tetramers. This simulation did not show a large change in MinD gradient formation. We have added the result of this simulation to Fig. 8 (Fig. 8K), and discuss this further in lines 341-348 and 459-467.

      (3) Lines 75-76 and lines 79-80: The sentences "MinC ... and needs to bind to the Walker A-type ATPase MinD for its activity" and "The MinD dimer recruits MinC ... and stimulates its activity" are misleading. MinC is localized by MinD, but MinD does not alter MinC activity, as MinC mislocalization or overexpression also prevents FtsZ ring formation leading to minicell or filamentous cells, as also later described by the authors (line 98). There is also no biochemical evidence that the presence of MinD somehow alters MinC activity towards FtsZ other than a local enrichment on the membrane. I would rephrase the sentence to emphasize that MinD is only localizing MinC but does not alter its activity.   

      We have rephrased this sentence to prevent misinterpretation (lines 66-67).

      Minor points:  

      (1)  I am not quite sure what the experiment with the CCCP shows. The authors explain that MinD binding via the amphipathic helix requires the presence of membrane potential and that the addition of CCCP disturbs binding. They then show that the MinD with two amphipathic helices is not affected by CCCP but the wildtype MinD is. What is the conclusion of this experiment? Would that mean that the MinD with two amphipathic helices binds more strongly, very differently, perhaps non-physiologically?  

      This experiment was “To confirm that the tandem amphipathic helix increased the membrane affinity of MinD”, as mentioned in the beginning of the paragraph (line 224).  

      (2) Lines 456-457: Please cite the FRAP experiment that shows a quick recovery rate of MinD.

      Reference has been added. 

      (3) Figure 4D: It is unclear to me to which condition the p-value brackets point.

      This is related to a statistical t-test. We have added this information to the legend of the figure.

      (4) Line 111, "in the membrane affinity of the MinD". I think that the "the" before MinD should be removed.  

      Corrected

      (5) Typo in line 199 "indicting" instead of indicating.

      Corrected

      (6) Typo in line 220 "reversable" instead of reversible.

      Corrected

      (7) Lines 279, 284, 905: "Monte-Carlo" should read Monte Carlo.

      Corrected

      Reviewer #3 (Recommendations for the authors):  

      Introduction: As written, the introduction does not provide sufficient background for the uninitiated reader to understand the function of the MinCD complex in the context of assembly and activation of cell division in B. subtilis. The introduction is also quite long and would benefit from condensing the description of the Min oscillation mechanism in E. coli to one or two sentences. While highlighting the role of MinE in this system is important for understanding how it works, it is only needed as a counterpoint to the situation in B. subtilis.

      Since the Min system of E. coli is by far the best understood Min system, we feel that it is important to provide detailed information on this system. However, we have added an introductory sentence to explain the key function of the Min system (lines 46-48).

      Line 248: Increasing MinD membrane affinity increases the frequency of minicells - however it is unclear if cells are dividing too much or if it is just a Min mutant (i.e. occasionally dividing at the cell pole vs the middle)? Cell length measurements should be included to clarify this point (Figures 4 and 5).

      This information is presented in Fig. 5B (Cell length distribution), which is now Fig. 6B, indicating that the average cell length increases in the tandem alpha helix mutant, a phenotype that is comparable to a MinD knockout. 

      Figure 5: I am a bit confused as to whether increasing MinD affinity doesn't lead to a general block in division by MinCD rather than phenocopying a minD null mutant.

      Although the tandem alpha helix mutant has a cell length distribution comparable to a minD knockout, the tandem mutant produces far fewer minicells than the minD knockout, indicating that there is still some cell division regulation.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      The study has carefully controlled and rigorous data. For the most part, the results are consistent with their claims. Except for a few modifications, it should be published. My suggestions are:

      1. Fig 2A. I cannot see the red line in the plot that is mentioned in the legend. Please add it.
      2. Fig 2A. The Manhattan plot shows a number of loci in the genome that have peaks of significant SNPs, not just the locus encompassing Malt-A. It might be worth highlighting the loci or peaks better in the plot. It is pretty minimalist as is.
      3. Linkage disequilibrium is a problem in Drosophila. Many SNPs are hitchhikers riding along with a single causative SNP due to infrequent recombination between hitchhiker and causative SNPs. How many SNPs are significant? Please list the SNPs or intervals considered significant in the GWAS. The text is vague and brief. The plot in Fig 2A is problematic by being overly minimal.
      4. Regarding the GWAS loci they found. It would be worth comparing these regions of the genome with significant GWAS scores to those regions identified in an earlier study. In 2013, Cassidy et al performed artificial selection on Drosophila populations using the same trait (scutellar bristle number) as this study. They did whole genome sequencing of the population before and after selection, and found loci in the genome that exhibited signs of selection through having altered allele frequencies at some loci. Are some of the loci identified in that study the same as in this GWAS study? Are some of the genes implicated in that study the same? The old data is publicly available and so could be easily mined.
      5. Table 1 is cut apart in its format. Please format it properly.
      6. Across the work, there is a lack of statistical testing of significance in bristle number between treated groups. These phenotypes need testing. The number of animals assayed in each experiment is listed, but no tests for statistical significance are presented. A chi-square or, better yet, a Fisher's exact test would be appropriate (a minimal sketch of such a test is given after this list). Some of the sample numbers seem low for the claims made, i.e. 8 animals scored for the UAS-MalA1 control group. This testing should be done for all data in Table 1, Fig 2C, Supp Fig 2 A, Fig 4E and any others I might have missed.
      7. Fig 3A, are the individual datapoints single replicates of metabolomic samples? The description of how the PCA was done is minimal and needs more detail. I assume they performed PCA using metabolites as variables, but they did not say, nor did they explain how the PCA was performed except for naming the software. They "normalized" the data to the median. Did they center the matrix of variable values to the median before doing PCA - is that what they mean? Why not center to the mean values? Typically one calculates the mean value for a given variable, i.e. a single metabolite, across all samples, and then calculates the difference between the measured value from one sample and the mean value for that variable. That needs to be done; it is not standard to center to the median. They should also normalize the data to eliminate bias in the PCA results arising from the variance of very abundant metabolites. Variables with large values (i.e. abundant metabolites) contribute disproportionately to the explanatory variance in a PCA analysis unless one normalizes. This normalization is typically done by taking the difference between measured and mean values (as described above) and dividing that difference by the standard deviation of the variable's measurements. Think of it as a Z-score. The matrix data are then centered around zero for each variable, and each variable's values range from roughly -5 to +5. Then perform PCA. Otherwise highly abundant metabolites bias the analysis. Again, this type of normalization is standard for PCA (see the normalization sketch after this list).
      8. How many metabolites were measured? What were they? Please provide the list.
      9. Results described in Fig 5A are the weakest in the manuscript and really could be supplemental. They provide only weak circumstantial evidence for the claim being made. Temperature affects so many things; it could be coincidence that dilp levels change and that this change correlates with bristle number. Many things change with temperature. They definitely should not end the results section with such weak data.
      10. Carthew and colleagues showed that IPC ablation suppressed the scutellar bristle phenotypes of miR9a and scute mutants. Does Mal-A1 knockdown have similar effects on these mutants? One would predict yes.
      11. The authors mention the 2019 paper by Cassidy et al and some of the results therein regarding inhibiting carbohydrate metabolism and phenotype suppression (robustness). However, not only miR-9a and scutellar bristles were tested in that paper but also a wide variety of mutations in TFs, signaling proteins and other miRNAs. All their results were consistent with the findings of the current ms. The authors could discuss this more in depth. Also, Cassidy et al put forth a quantitative model that explained how limiting glucose metabolism could provide robustness for a wide variety of developmental decisions. It might be worth discussing this model in light of their results.
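
      Regarding point 6, a minimal sketch of such a test is shown below; the 2x2 counts are invented for illustration and are not the manuscript's data.

```python
from scipy.stats import fisher_exact

# Hypothetical contingency table of bristle phenotypes:
#                 abnormal   normal
# control              2        38
# knockdown            2       109
table = [[2, 38], [2, 109]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```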
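
      And regarding point 7, here is a minimal sketch of the z-score normalization described above, applied before PCA; the matrix shape and the toy data are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(12, 50))   # 12 samples x 50 metabolites (toy data)

# Z-score each metabolite (column): subtract its mean across samples and divide by its
# standard deviation, so highly abundant metabolites do not dominate the variance.
X_z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_z)                          # sample coordinates on PC1/PC2
print(pca.explained_variance_ratio_)
```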

      Significance

      This manuscript describes an interesting study of developmental robustness and its intersection with organismal metabolism. It builds upon prior papers that have addressed the link between metabolism and development. It describes an ingenious approach to the problem and uncovers maltose metabolism in Drosophila as one such connection to sensory organ development and patterning. The important take home message for me is that they found natural genetic variants from the wild that confer greater robustness to the fly's morphological development, and these genetic variants are found in an enzyme that broadly metabolizes maltose, a simple sugar. Whereas previous studies used genetic manipulation to impact metabolism, this study shows that genetic variants in the wild exhibit effects on robustness. It suggests there might be a tradeoff between more vigorous carbohydrate metabolism and fidelity in morphological development.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      Summary:

      In this study, the authors performed GWAS to identify associations between the mean bristle number in Drosophila melanogaster adults and different SNPs present in 95 lines of the DGRP panel reared at 18C. They selected genes harboring those SNPs linked to bristle number that also had moderate or high expression at the third instar larval stage to perform an RNAi screen. This RNAi screen, which included 43 genes, identified Maltase-A1 (Mal-A1) as a contributor to bristle number. The authors then focused on investigating possible metabolic and transcriptional changes underlying the effect of Mal-A1 knockdown on bristle number. After whole-body knockdown using the da-gal4 driver, the authors identified decreased glucose in the whole body and hemolymph, and decreased dilp3 mRNA expression in the whole body, intestine, and insulin-producing cells (IPCs) in the larval brain. Similar to whole-body Mal-A1 knockdown, a gut epithelial cell-specific gal4 driver (NP1) also decreased dilp3 mRNA expression in the whole body and larval brain. The authors suggest that Mal-A1 activity in the intestine may affect bristle number by lowering available glucose in the intestine, which decreases circulating glucose levels in the hemolymph and in turn decreases dilp3 mRNA expression in the larval brain, leading to decreased bristle number. Finally, to validate the influence on bristle number of dilp3-mediated insulin signaling in the brain, the authors reared larvae at 18C, which they showed increased bristle number. Supporting their proposed model, rearing larvae at 18C increased dilp3 mRNA expression in the brain, which correlated with increased bristle number.

      Major comments:

      1. The main finding of this paper is the identification of the Mal-A1 gene as a regulator of bristle number in Drosophila adults. However, the authors do not show clear phenotypes, which could stem from a lack of experimental rigor. As an example, in Fig. 2C (source data not provided) the UAS-Mal-A1-RNAi line V15789 in the absence of GAL4 shows 5% abnormal bristle number compared with 2% upon knockdown. If I'm understanding the data provided, this means that abnormal bristle number was observed in 2 flies (out of 40) in the UAS-line alone compared with ~2 flies (out of 111) in the presence of GAL4. For line V106220, 2% (n=56) showed abnormal bristles compared with 0% (n=37) in the presence of GAL4. In absolute numbers this would mean that abnormal bristle number was observed in ~1 fly (out of 56) in the UAS-line alone compared with 0 flies (out of 37) upon knockdown. None of these experiments uses a sufficient n; according to this reviewer's calculations, to show a 3% increase with 80% confidence the n should be around 750-800 (a sketch of such a sample-size calculation follows this list). In addition, no information on statistical tests or on whether biological replicates were performed is included. Because the main finding relies heavily on this phenotype of abnormal bristle number, this reviewer is not confident that the conclusions of the manuscript are supported. This problem also applies to other experiments presented in the manuscript, which suffer from low n, significantly decreasing the enthusiasm for the presented results.
      2. The authors do not show that Drosophila insulin-like peptide 3 (dilp3) levels affect the SOPs in a non-autonomous manner. The only experiments included show indirect effects.
      3. There are important statistical details missing in some of the figures (see comments below)
      4. Important details are missing from the methods for results or analysis to be reproduced. For example, the method section for GWAS analysis is lacking details, a script should be provided as supplemental information, as well as a table similar to the one provided for the RNAi screen.
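
      Regarding the sample-size point in comment 1, a sketch of how such a number can be estimated for a difference between two proportions is shown below (using statsmodels; the 2% and 5% rates come from the comment, the other settings are conventional defaults, and the result depends on these assumptions rather than exactly reproducing the reviewer's 750-800 figure).

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Required n per group to detect a change in abnormal-bristle frequency from 5% to 2%
effect = proportion_effectsize(0.05, 0.02)   # Cohen's h for the two proportions
n_per_group = NormalIndPower().solve_power(effect_size=effect, power=0.8,
                                           alpha=0.05, ratio=1.0)
print(f"roughly {n_per_group:.0f} flies per group under these assumptions")
```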

      Minor comments

      • There are some typos like referring to 'using w118 male mice' in the 'Phenotypic Analysis of Maltase Knockdown; (1) Bristle number count'
      • Details in methods. For the GWAS experiments, could the authors define what their cutoffs were for selecting genes harboring SNPs linked to bristle number? How many base pairs from a gene or enhancer? They selected only those genes with moderate or high expression, but what does that mean?
      • In Fig. 2A, could the authors provide all significant SNPs identified by their GWAS analysis as supplemental material?
      • In Fig. 2A, it is stated in the legend " and the red line represents the significance threshold calculated using Bonferroni correction...". This might be a problem with the pdf document but I did not find the red line in the Manhattan plot that the authors refer to.
      • In Fig. 4E, could the authors provide the n number as in other figures?
      • Check citations. Some references have missing parts. For example; Ref 5 is missing the last 2 words of the title. In Manuscript it reads: "Trehalose metabolism confers developmental robustness and stability in Drosophila by regulating.". It should be "Trehalose metabolism confers developmental robustness and stability in Drosophila by regulating glucose homeostasis."

      Significance

      While the significance of identifying a novel regulatory mechanism for developmental robustness in Drosophila melanogaster is high and would be interesting for a broad audience, the authors do not present convincing experimental evidence to support their hypothesis. This is due to the insufficient number of replicates as well as the lack of experiments showing a direct role of insulin signaling.

    1. inferior

      1. ADJECTIVE inferior to, of lower quality than, or lesser than (something)

      2. ADJECTIVE formal lower in rank or position; junior

      3. Noun a person who is less able (in talent etc.) than another; a person of lower status or rank, a subordinate

    2. solely

      1. ADJECTIVE only, one and only

      2. ADJECTIVE alone, single-handed

      3. Noun the sole of the foot

      4. Noun the sole (of a shoe or sock) (→heel n. (3))

    3. stark

      1. ADJECTIVE often disapproving bare, bleak or desolate (without any colour or decoration)

      2. ADJECTIVE harsh or grim (unpleasant but unavoidable) (=bleak)

      3. ADVERB often disapproving completely (naked)

    4. audit

      1. Noun an audit of accounts

      2. Noun an inspection (of quality or standards) (→green audit)

      3. Verb to audit accounts

      4. Verb US to audit (sit in on) a class

    5. external

      1. ADJECTIVE on the outside or exterior (of an object or person)

      2. ADJECTIVE external (to a place, organization or situation)

      3. ADJECTIVE external (to a university or institution)

    6. mortality

      mortality 1. Noun the fact of being mortal; the inevitability of death

      2. Noun the number of deaths or the death rate (in a particular period or situation)

      3. Noun technical death

    7. transplant

      1. Verb to transplant (living tissue etc.) (→implant)

      2. Verb to transplant (a plant); to move and replant

      3. Noun a transplant (of living tissue etc.)

      4. Noun a transplanted organ or tissue (→implant)

    1. AWS is 10x slower than a dedicated server for the same price
      • Video Title: AWS is 10x slower than a dedicated server for the same price
      • Core Argument: Cloud providers, particularly AWS, charge significantly more for base-level compute instances than traditional Virtual Private Server (VPS) providers while delivering substantially less performance. The video argues that horizontal scaling is often unnecessary for 95% of businesses.
      • Comparison Setup: The video compared an entry-level AWS instance (EC2 and ECS Fargate) with a similarly specced VPS (1 vCPU, 2 GB RAM) from a popular German provider (Hetzner, referred to as HTNA in the video) using the Sysbench tool.
      • AWS EC2 Results: The base EC2 instance cost almost 3 times more than the VPS but delivered poor performance:
        • CPU: Approximately 20% of the VPS performance.
        • Memory: Only 7.74% of the VPS performance.
      • AWS ECS Fargate Results: Using the "serverless" Fargate option, setup was complex and involved many AWS services (ECS, ECR, IAM).
        • Cost: The instance was 6 times more expensive than the VPS.
        • Performance: Performance improved over EC2 but was still slower and less consistent: 23% (CPU), 80% (Memory), and 84% (File I/O) of the VPS's performance, with fluctuations up to 18%.
      • Cost Efficiency: A dedicated VPS server with 4vCPU and 16 GB of RAM was found to be cheaper than the 1 vCPU ECS Fargate task used in the test.
      • Conclusion: For a similar price point, a dedicated server is about 10 times faster than an equivalent AWS cloud instance (the rough price-performance arithmetic is sketched below). The video concludes that AWS's dominance is due to its large marketing spend, not superior technical or cost efficiency. A real-world example cited is Lichess, which supports 5.2 million chess games per day on a single dedicated server [00:12:06].
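
      A rough sketch of the price-performance arithmetic behind a headline number like this, using only the approximate figures quoted above (so the output is no more precise than those figures):

```python
# Relative benchmark scores and prices, normalized to the VPS (= 1.0),
# taken from the approximate figures quoted in the summary above.
offers = {
    "VPS (baseline)":  {"price": 1.0, "cpu": 1.00, "memory": 1.000},
    "AWS EC2":         {"price": 3.0, "cpu": 0.20, "memory": 0.077},
    "AWS ECS Fargate": {"price": 6.0, "cpu": 0.23, "memory": 0.800},
}

for name, o in offers.items():
    cpu_per_dollar = o["cpu"] / o["price"]       # performance per unit of money
    mem_per_dollar = o["memory"] / o["price"]
    print(f"{name:16s} CPU perf/$ = {cpu_per_dollar:.3f}   memory perf/$ = {mem_per_dollar:.3f}")

# For EC2 the CPU figure is 0.20 / 3.0 ≈ 0.067, i.e. roughly 15x less CPU
# performance per unit of money than the baseline VPS - the same order of
# magnitude as the video's "10x" headline.
```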

      Hacker News Discussion

      The discussion was split between criticizing the video's methodology and debating the fundamental value proposition of hyperscale cloud providers versus traditional hosting.

      • Criticism of Methodology: Several top comments argued the video was a "low effort 'ha ha AWS sucks' video" with an "AWFUL analysis." Critics suggested the author did not properly configure or understand ECS/Fargate and that comparing the lowest-end shared instances isn't a "proper comparison," which should involve mid-range hardware and careful configuration.
      • The Value of AWS Services: Many users defended AWS by stating that customers rarely choose it just for the base EC2 instance price. The true value lies in the managed ecosystem of services like RDS, S3, EKS, ELB, and Cognito, which abstract away operational complexity and allow large customers to negotiate off-list pricing.
      • Complexity and Cost Rebuttals: Counter-arguments highlighted that managing AWS complexity often requires hiring expensive "cloud wizards" (Solutions Architects or specialized DevOps staff), shifting the high cost of a SysAdmin team to high cloud management costs. Anecdotes about sudden huge AWS bills and complex debugging were common.
      • The "Nobody Gets Fired" Factor: The most common justification for choosing AWS, even at a higher cost, is risk aversion and the avoidance of personal liability. If a core AWS region (like US-East-1) goes down, it's a shared industry failure, but if a self-hosted server fails, the admin is solely responsible for fixing it at 3 a.m.
      • Alternative Recommendations: The discussion frequently validated the use of non-hyperscale providers like Hetzner and OVH for significant cost savings and comparable reliability for many non-"cloud native" workloads.
    1. Cylinder seal. Near Eastern, Iranian, Elamite (Proto-Elamite), 3100–2900 B.C. Medium/Technique: black basalt.

      After looking at other art pieces and objects from this era and location, I have noticed that black basalt is a common medium. This medium can be used in many ways; in this era it was popular to carve designs and artwork that showed civilization during that time.

    1. Each Works Cited entry has 9 components. You may not use each component in the reference; however, they all serve a function in helping the reader find the source you have cited. Note the punctuation after each element: Author. Title of Source. Title of Container, Other Contributors, Version, Number, Publisher, Publication date, Location.

      You may not use every one of these components, but make sure to keep them in mind when you are citing a source.

    2. However, hyperlinks are not very useful for academic papers. Here are some reasons: Links change: The internet changes every day. Websites add and remove articles, on-line magazines and newspapers change their links. If there is only a link to a source and if that link changes, then the reader cannot find the source. Inaccessible Databases: Some of the information you will use will be from CNM databases. The readers of your article may not have access to the same database; therefore, a link is not sufficient. The reader needs to know pertinent information, such as the author’s name, title, etc., to be able to find the source.

      It's better to cite the source instead of just adding a hyperlink. Think about internet changes, websites that require you to sign up in order to read, and being able to print out the document for someone to use as a reference. Set the reader up for success, in a way.

    3. Start the Works Cited page on a separate page. This should be the last page of your paper. Margins and pagination (last name and page number on the top right) remain the same as the rest of the paper. Title the page Works Cited. Center the title. Do not italicize the title. Only the title is centered; the rest of the page is left justified. The entire Works Cited should be double-spaced. Do not add a space between citations (i.e., do not add an extra double space between citations). Citations should be in alphabetical order.

      Format guidelines to be aware of

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerizing fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data, indicating that the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division. Overall, I have only a few minor suggestions.

      We appreciate the reviewers’ support of our study.

      1) In wild-type, Pds1 levels are high during M1 and A1, but low in MII. Can the authors comment on this? In line 217, what is meant by "slightly attenuated"? Can the authors comment on how anaphase occurs in the presence of high Pds1? There is even a low but significant level in MII.

      The higher levels of Pds1 in meiosis I compared to meiosis II have been observed previously using immunofluorescence and live imaging1–3. Although the reasons are not completely clear, we speculate that there is insufficient time between the two divisions to re-accumulate Pds1 prior to separase re-activation.

      We agree that “slightly attenuated” was confusing and we have re-worded this sentence to read “Addition of ABA at the time of prophase release resulted in Pds1 (securin) stabilisation throughout the time course, consistent with delays in both metaphase I and II”.

      We do not believe that either anaphase I or II occurs in the presence of high Pds1. Western blotting represents the amount of Pds1 in the population of cells at a given time point. The time between meiosis I and II is very short even when treated with ABA. For example, in Figure 2B, spindle morphology counts show that the anaphase I peak is around 40% at its maximum (105 min) and around 40% of cells are in either metaphase I or metaphase II, and will be Pds1 positive. In contrast, due to the better efficiency of the arrest in meiosis II, anaphase II hardly occurs at all in these conditions, since anaphase II spindles (and the second nuclear division) are observed at very low frequency (maximum 10%) from 165 minutes onwards. Instead, metaphase II spindles partially or fully break down, without undergoing anaphase extension. Taking the Pds1 levels from the western blot and the spindle data together leads to the conclusion that at the end of the time-course these cells are biochemically in metaphase II, but unable to maintain a robust spindle. Spindle collapse is also observed in other situations where meiotic exit fails, and potentially reflects an uncoupling of the cell cycle from the programme governing gamete differentiation3–5. We will explain this point in a revised version while referring to representative images that provide evidence for this, as also requested by the reviewer below.

      2) The figures with data characterizing the system are mostly graphs showing the time course of MI and MII. There is no cytology, which is a little surprising since the stage is determined by spindle morphology. It would help to see sample sizes (i.e. in the figure legends) and also representative images. It would also be nice to see images comparing the same stage in the SynSAC cells versus normal cells. Are there any differences in the morphology of the spindles or chromosomes when in the SynSAC system?

      This is an excellent suggestion and will also help clarify the point above. We will provide images of cells at the different stages. For each timepoint, 100 cells were scored; we have already included this information in the figure legends.

      3) A possible criticism of this system could be that the SAC signal promoting arrest is not coming from the kinetochore. Are there any possible consequences of this? In vertebrate cells, the RZZ complex streams off the kinetochore. Yeast don't have RZZ but this is an example of something that is SAC dependent and happens at the kinetochore. Can the authors discuss possible limitations such as this? Does the inhibition of the APC affect the native kinetochores? This could be good or bad. A bad possibility is that the cell is behaving as if it is in MII, but the kinetochores have made their microtubule attachments and behave as if in anaphase.

      In our view, the fact that SynSAC does not come from kinetochores is a major advantage as this allows the study of the kinetochore in an unperturbed state. It is also important to note that the canonical checkpoint components are all still present in the SynSAC strains, and perturbations in kinetochore-microtubule interactions would be expected to mount a kinetochore-driven checkpoint response as normal. Indeed, it would be interesting in future work to understand how disrupting kinetochore-microtubule attachments alters kinetochore composition (presumably checkpoint proteins will be recruited) and phosphorylation but this is beyond the scope of this work. In terms of the state at which we are arresting cells – this is a true metaphase because cohesion has not been lost but kinetochore-microtubule attachments have been established. This is evident from the enrichment of microtubule regulators but not checkpoint proteins in the kinetochore purifications from metaphase I and II. While this state is expected to occur only transiently in yeast, since the establishment of proper kinetochore-microtubule attachments triggers anaphase onset, the ability to capture this properly bioriented state will be extremely informative for future studies. We appreciate the reviewers’ insight in highlighting these interesting discussion points which we will include in a revised version.

      Reviewer #1 (Significance (Required)):

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. The ABA promotes dimerizing fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data, indicating that the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.

      We appreciate the reviewer’s enthusiasm for our work.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      The manuscript submitted by Koch et al. describes a novel approach to collect budding yeast cells in metaphase I or metaphase II by synthetically activating the spindle checkpoint (SAC). The arrest is transient and reversible. This synchronization strategy will be extremely useful for studying meiosis I and meiosis II, and for comparing the two divisions. The authors characterized this so-named SynSAC approach and could confirm previous observations that the SAC arrest is less efficient in meiosis I than in meiosis II. They found that downregulation of the SAC response through PP1 phosphatase is stronger in meiosis I than in meiosis II. The authors then went on to purify kinetochore-associated proteins from metaphase I and II extracts for proteome and phosphoproteome analysis. Their data will be of significant interest to the cell cycle community (they also compared their datasets to kinetochores purified from cells arrested in prophase I and - with SynSAC - in mitosis).

      I have only a couple of minor comments:

      1) I would add the Suppl Figure 1A to main Figure 1A. What is really exciting here is the arrest in metaphase II, so I don't understand why the authors characterize metaphase I in the main figure, but not metaphase II. But this is only a suggestion.

      This is a good suggestion, we will do this in our full revision.

      2) Line 197, the authors state: "...SynSAC induced a more pronounced delay in metaphase II than in metaphase I". However, in lines 229 and 240 the authors talk about a "longer delay in metaphase...".

      Thank you for pointing this out, this is indeed a typo and we have corrected it.

      3) The authors describe striking differences for both protein abundance and phosphorylation for key kinetochore associated proteins. I found one very interesting protein that seems to be very abundant and phosphorylated in metaphase I but not metaphase II, namely Sgo1. Do the authors think that Sgo1 is not required in metaphase II anymore? (Top hit in suppl Fig 8D).

      This is indeed an interesting observation, which we plan to investigate as part of another study in the future. Indeed, data from mouse indicates that shugoshin-dependent cohesin deprotection is already absent in meiosis II in mouse oocytes6, though whether this is also true in yeast is not known. Furthermore, this does not rule out other functions of Sgo1 in meiosis II (for example promoting biorientation). We will include this point in the discussion.

      Reviewer #2 (Significance (Required)):

      The technique described here will be of great interest to the cell cycle community. Furthermore, the authors provide data sets on purified kinetochores of different meiotic stages and compare them to mitosis. This paper will thus be highly cited, for the technique, and also for the application of the technique.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase-promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner.

      We are grateful to the reviewer for their support.

      Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed: 1.- In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?

      For many purposes, the enrichment and extended time for sample collection are sufficient, as we demonstrate here. However, as pointed out by the reviewer below, the system can be improved by use of the 4A-RASA mutations to provide a stronger arrest (see our response below). We did not experiment with higher ABA concentrations or repeated addition, since the very robust arrest achieved with the 4A-RASA mutant made this unnecessary.

      2.- Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.

      We agree that the 4A-RASA mutant is the best tool to use for the arrest and going forward this will be our approach. We collected the proteomics data and the data on the SynSAC mutant variants concurrently, so we did not know about the improved arrest at the time the proteomics experiment was done. Because very good arrest was already achieved with the unmutated SynSAC construct, we could not justify repeating the proteomics experiment which is a large amount of work using significant resources. However, we will highlight the potential of the 4A-RASA mutant more prominently in our full revision.

      3.- The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports, as previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.

      We agree that these are intriguing findings, which highlight key differences in the wiring of the spindle checkpoint between meiosis and mitosis and point to potential future studies; however, currently we can only speculate as to the underlying cause. The effect of the RASA mutation in mitosis is unexpected and unexplained. However, the fact that the 4A-RASA mutation causes a stronger delay in meiosis I compared to mitosis can be explained by a greater prominence of PP1 phosphatase in meiosis. Indeed, our data (Figure 4A) show that the PP1 phosphatase Glc7 and its regulatory subunit Fin1 are highly enriched on kinetochores at all meiotic stages compared to mitosis.

      We agree that the improved growth of the RVAF mutant is intriguing and points to a role of Aurora B-mediated phosphorylation, though previous work has not supported such a role (ref. 7).

      We will include a discussion of these important points in a revised version.

      4.- To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged-samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association to kinetochores.

      While we agree with the reviewer that, at first glance, normalising to the no-tag control makes the most sense, in practice the background signal in the no-tag sample is very low, which means that any random fluctuations have a large impact on the final fold change. This approach therefore introduces artefacts into the data rather than improving normalisation.
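
      A toy calculation (with invented intensities, purely to illustrate this argument) shows how dividing by a near-zero no-tag background makes fold changes unstable:

```python
# Invented intensities (arbitrary units); for illustration only.
tagged_ip = 10_000.0      # signal for one kinetochore protein in the Dsn1-FLAG IP

# Two hypothetical measurements of the no-tag control, both near the noise floor:
background_a = 5.0
background_b = 15.0       # an absolute fluctuation of only 10 units

print(tagged_ip / background_a)   # 2000-fold apparent enrichment
print(tagged_ip / background_b)   # ~667-fold apparent enrichment
# The same IP signal gives a ~3-fold swing in apparent enrichment, driven
# entirely by noise in the tiny background term.
```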

      To provide reassurance that our kinetochore immunoprecipitations are specific, and that the background (no-tag) signal is indeed very low, we will provide a new supplemental figure showing volcano plots comparing kinetochore purifications at each stage with their corresponding no-tag control. These volcano plots show very clearly that, in all cases, the major enriched proteins are kinetochore proteins and associated factors.

      It is also important to note that our experiment looks at relative changes of the same protein over time, which we expect to be relatively small in the whole-cell lysate. We previously documented proteins that change in abundance in whole-cell lysates throughout meiosis (ref. 8). In this study, we found that relatively few proteins significantly change in abundance, supporting this view.

      Our aim in the current study was to understand how the relative composition of the kinetochore changes, and for this we believe that a direct comparison to Dsn1, the central kinetochore protein that we immunoprecipitated, is the most appropriate normalisation.
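
      A minimal sketch of this normalisation, assuming a table of IP intensities with one row per protein and one column per arrested stage; the placeholder protein names and numbers below are hypothetical, not the authors' data:

```python
import pandas as pd

# Hypothetical IP intensities: rows are proteins, columns are arrested stages.
ip = pd.DataFrame(
    {"prophase_I": [5000.0, 800.0, 120.0],
     "metaphase_I": [5200.0, 950.0, 300.0],
     "metaphase_II": [4800.0, 400.0, 60.0]},
    index=["Dsn1", "ProteinX", "ProteinY"],
)

# Scale every protein to Dsn1 within the same sample, so values report abundance
# relative to the immunoprecipitated core kinetochore subunit rather than to
# total signal or to a noisy no-tag background.
relative_to_dsn1 = ip.div(ip.loc["Dsn1"], axis="columns")
print(relative_to_dsn1.round(3))
```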

      5.- Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.

      We strongly agree with this point and we will re-frame the discussion to focus on the novel findings, as also raised by the other reviewers.

      Finally, minor concerns are: 1.- Meiotic progression in SynSAC strains lacking Mad1, Mad2 or Mad3 is severely affected (Fig. 1D and Supp. Fig. 1), making it difficult to assess whether, as the authors state, the metaphase delays depend on the canonical SAC cascade. In addition, as a general note, graphs displaying meiotic time courses could be improved for clarity (e.g., thinner data lines, addition of axis gridlines and external tick marks, etc.).

      We will generate the data to include a checkpoint mutant +/- ABA for direct comparison. We will take steps to improve the clarity of presentation of the meiotic timecourse graphs, though our experience is that uncluttered graphs make it easier to compare trends.

      2.- Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.

      Spore viability is a much more sensitive way of analysing segregation defects than GFP-labelled chromosomes. This is because GFP labelling allows only a single chromosome to be followed. On the other hand, if any of the 16 chromosomes mis-segregates in a given meiosis, this would result in one or more aneuploid spores in the tetrad, which are typically inviable. The fact that spore viability is not significantly different from wild type in this analysis indicates that there are no major chromosome segregation defects in these strains, and we therefore do not plan to do this experiment.
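
      A back-of-envelope calculation, using an assumed and purely illustrative per-chromosome mis-segregation rate, makes the sensitivity argument concrete:

```python
# Illustrative only: assume each of the 16 chromosomes mis-segregates
# independently at rate r per meiosis.
r = 0.01
n_chromosomes = 16

# Following a single GFP-labelled chromosome detects only that chromosome's errors:
detectable_by_gfp = r                                   # 1% of meioses

# Spore viability responds to mis-segregation of any chromosome, since even one
# aneuploid chromosome typically renders the affected spores inviable:
meioses_with_any_error = 1 - (1 - r) ** n_chromosomes   # ~15% of meioses

print(detectable_by_gfp, round(meioses_with_any_error, 3))
```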

      3.- It is surprising that, although SAC activity is proposed to be weaker in metaphase I, the levels of CPC/SAC proteins seem to be higher at this stage of meiosis than in metaphase II or mitotic metaphase (Fig. 4A-B).

      We agree that this is surprising, and we will point this out in the revised discussion. We speculate that the challenge of biorienting homologs, which are held together by chiasmata rather than by back-to-back sister kinetochores, results in a greater requirement for error correction in meiosis I. Interestingly, the data with the RASA mutant also point to increased PP1 activity in meiosis I, and we additionally observed increased levels of PP1 (Glc7 and Fin1) on meiotic kinetochores, consistent with the idea that cycles of error correction and silencing are elevated in meiosis I.

      4.- Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).

      We agree that this is beyond the scope of the current study but will form the start of future projects from our group, and hopefully others.

      5.- Several typographical errors should be corrected (e.g., "Knetochores" in Fig. 4 legend, "250uM ABA" in Supp. Fig. 1 legend, etc.)

      Thank you for pointing these out, they have been corrected.

      Reviewer #3 (Significance (Required)):

      Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies originally developed in fission yeast and human cell models to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve an MI or MII delay. The ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing, consistent with published data, and indicates that the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division. Overall, I have only a few minor suggestions.

      1) In wild type, Pds1 levels are high during MI and AI, but low in MII. Can the authors comment on this? In line 217, what is meant by "slightly attenuated"? Can the authors comment on how anaphase occurs in the presence of high Pds1? There is even a low but significant level in MII.

      2) The figures with data characterizing the system are mostly graphs showing time courses of MI and MII. There is no cytology, which is a little surprising since the stage is determined by spindle morphology. It would help to see sample sizes (i.e., in the figure legends) and also representative images. It would also be nice to see images comparing the same stage in the SynSAC cells versus normal cells. Are there any differences in the morphology of the spindles or chromosomes when in the SynSAC system?

      3) A possible criticism of this system could be that the SAC signal promoting arrest is not coming from the kinetochore. Are there any possible consequences of this? In vertebrate cells, the RZZ complex streams off the kinetochore. Yeast don't have RZZ, but this is an example of something that is SAC-dependent and happens at the kinetochore. Can the authors discuss possible limitations such as this? Does the inhibition of the APC affect the native kinetochores? This could be good or bad. A bad possibility is that the cell is behaving as if it is in MII, but the kinetochores have made their microtubule attachments and behave as if in anaphase.

      Significance

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve an MI or MII delay. The ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing, consistent with published data, and indicates that the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary

      This work performed Raman spectral microscopy at the single-cell level for 15 different culture conditions in E. coli. The Raman signature is systematically analyzed and compared with the proteome dataset of the same culture conditions. With a linear model, the authors revealed a correspondence between Raman patterns and proteome expression stoichiometry, indicating that Raman spectrometry could be used to infer proteome composition in the future. With both Raman spectra and proteome datasets, the authors categorized co-expressed genes and illustrated how proteome stoichiometry is regulated among different culture conditions. Co-expressed gene clusters were investigated and identified as the homeostatic core, carbon-source-dependent, and stationary-phase-dependent genes. Overall, the authors demonstrate a strong and solid data analysis scheme for the joint analysis of Raman and proteome datasets.

      Strengths and major contributions

      (1) Experimentally, the authors contributed Raman datasets of E. coli with various growth conditions.

      (2) In data analysis, the authors developed a scheme to compare proteome and Raman datasets. Protein co-expression clusters were identified, and their biological meaning was investigated.

      Weaknesses

      The experimental measurements of Raman microscopy were conducted at the single-cell level; however, the analysis was performed by averaging across the cells. The authors did not discuss whether Raman microscopy can be used to detect cell-to-cell variability under the same condition.

      We thank the reviewer for raising this important point. Though this topic is beyond the scope of our study, some of our authors have addressed the application of single-cell Raman spectroscopy to characterizing phenotypic heterogeneity in individual Staphylococcus aureus cells in another paper (Kamei et al., bioRxiv, doi: 10.1101/2024.05.12.593718). Additionally, one of our authors demonstrated that single-cell RNA sequencing profiles can be inferred from Raman images of mouse cells (Kobayashi-Kirschvink et al., Nat. Biotechnol. 42, 1726–1734, 2024). Therefore, detecting cell-to-cell variability under the same conditions has been shown to be feasible. Whether averaging single-cell Raman spectra is necessary depends on the type of analysis and the available dataset. We will discuss this in more detail in our response to Comment (1) by Reviewer #1 (Recommendation for the authors).

      Discussion and impact on the field

      Raman signature contains both proteomic and metabolomic information and is an orthogonal method to infer the composition of biomolecules. It has the advantage that single-cell level data could be acquired and both in vivo and in vitro data can be compared. This work is a strong initiative for introducing the powerful technique to systems biology and providing a rigorous pipeline for future data analysis.

      Reviewer #2 (Public review):

      Summary and strengths:

      Kamei et al. observe the Raman spectra of a population of single E. coli cells in diverse growth conditions. Using LDA, Raman spectra for the different growth conditions are separated. Using previously available protein abundance data for these conditions, a linear mapping from Raman spectra in LDA space to protein abundance is derived. Notably, this linear map is condition-independent and is consequently shown to be predictive for held-out growth conditions. This is a significant result and, in my understanding, extends the Raman-to-RNA connection that has been reported earlier.
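
      A structural sketch of this kind of linear map, using random placeholder data and an ordinary least-squares fit; the variable names, shapes, and fitting choice are illustrative assumptions rather than the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: m growth conditions, d LDA dimensions (at most m - 1), p proteins.
m, d, p = 15, 14, 1000
raman_lda = rng.normal(size=(m, d))      # condition-averaged Raman spectra in LDA space
log_proteome = rng.normal(size=(m, p))   # condition-averaged (log) protein abundances

# Fit a single condition-independent linear map B (d x p) on all but one
# condition, then predict the held-out condition's proteome from its Raman
# LDA coordinates alone.
train = np.arange(m) != 0
B, *_ = np.linalg.lstsq(raman_lda[train], log_proteome[train], rcond=None)
predicted_held_out = raman_lda[0] @ B

print(B.shape, predicted_held_out.shape)   # (14, 1000) (1000,)
```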

      They further show that this linear map reveals something akin to bacterial growth laws (a la Scott/Hwa): a certain collection of proteins shows stoichiometric conservation, i.e., the group (called an SCG, for stoichiometrically conserved group) maintains its stoichiometry across conditions while the overall scale depends on the conditions. Analyzing the changes in protein mass and Raman spectra under these conditions, the abundance ratios of information-processing proteins (a large group, many of whose members belong to the "information storage and processing" (ISP) COG category) remain constant. The mass of these proteins, deemed the homeostatic core, increases linearly with growth rate. Other SCGs and other proteins are condition-specific.

      Notably, beyond the ISP COG, the other SCGs were identified directly using the proteome data. Taking the analysis further, they then show how the centrality of a protein (roughly measured as how many proteins it is stoichiometric with) relates to function and evolutionary conservation. These are again significant results, but I am not sure whether these ideas have been reported earlier, for example by the community that built protein-protein interaction maps.

      As pointed out, past studies have revealed that the function, essentiality, and evolutionary conservation of genes are linked to the topology of gene networks, including protein-protein interaction networks. However, to the best of our knowledge, their linkage to stoichiometry conservation centrality of each gene has not yet been established.

      Previously analyzed networks, such as protein-protein interaction networks, depend on known interactions. Therefore, as our understanding of the molecular interactions evolves with new findings, the conclusions may change. Furthermore, analysis of a particular interaction network cannot account for effects from different types of interactions or multilayered regulations affecting each protein species.

      In contrast, the stoichiometry conservation network in this study focuses solely on expression patterns as the net result of interactions and regulations among all types of molecules in cells. Consequently, the stoichiometry conservation networks are not affected by the detailed knowledge of molecular interactions and naturally reflect the global effects of multilayered interactions. Additionally, stoichiometry conservation networks can easily be obtained for non-model organisms, for which detailed molecular interaction information is usually unavailable. Therefore, analysis with the stoichiometry conservation network has several advantages over existing methods from both biological and technical perspectives.

      We added a paragraph explaining this important point to the Discussion section, along with additional literature.

      Finally, the paper builds a lot of "machinery" to connect the Ω<sub>LE</sub> space, built directly from the proteome, and the Ω<sub>B</sub> space, built from the Raman data. I am unsure how that helps and have not been able to digest the 50 or so pages devoted to this.

      The mathematical analyses in the supplementary materials form the basis of the argument in the main text. Without the rigorous mathematical discussions, Fig. 6E — one of the main conclusions of this study — and Fig. 7 could never be obtained. Therefore, we believe the analyses are essential to this study. However, we clarified why each analysis is necessary and significant in the corresponding sections of the Results to improve the manuscript's readability.

      Please see our responses to comments (2) and (7) by Reviewer #1 (Recommendations for the authors) and comments (5) and (6) by Reviewer #2 (Recommendations for the authors).

      Strengths:

      The rigorous analysis of the data is the real strength of the paper. Alongside this, the discovery of SCGs that are condition-independent and that are condition-dependent provides a great framework.

      Weaknesses:

      Overall, I think it is an exciting advance but some work is needed to present the work in a more accessible way.

      We edited the main text to make it more accessible to a broader audience. Please see our responses to comments (2) and (7) by Reviewer #1 (Recommendations for the authors) and comments (5) and (6) by Reviewer #2 (Recommendations for the authors).

      Reviewer #1 (Recommendations for the authors):

      (1) The Raman spectral data is measured from single-cell imaging. In the current work, most of the conclusions are from averaged data. From my understanding, once the correspondence between LDA and proteome data is established (i.e. the matrix B) one could infer the single-cell proteome composition from B. This would provide valuable information on how proteome composition fluctuates at the single-cell level.

      We can calculate single-cell proteomes from single-cell Raman spectra in the manner suggested by the reviewer. However, we cannot evaluate the accuracy of their estimation without single-cell proteome data under the same environmental conditions. Likewise, we cannot verify variations of estimated proteomes of single cells. Since quantitatively accurate single-cell proteome data is unavailable, we concluded that addressing this issue was beyond the scope of this study.

      Nevertheless, we agree with the reviewer that investigating how proteome composition fluctuates at the single-cell level based on single-cell Raman spectra is an intriguing direction for future research. In this regard, some of our authors have studied the phenotypic heterogeneity of Staphylococcus aureus cells using single-cell Raman spectra in another paper (Kamei et al., bioRxiv, doi: 10.1101/2024.05.12.593718), and one of our authors has demonstrated that single-cell RNA sequencing profiles can be inferred from Raman images of mouse cells (Kobayashi-Kirschvink et al., Nat. Biotechnol. 42, 1726–1734, 2024). Therefore, it is highly plausible that single-cell Raman spectroscopy can also characterize proteomic fluctuations in single cells. We have added a paragraph to the Discussion section to highlight this important point.

      (2) The establishment of matrix B is quite confusing for readers who only read the main text. I suggest adding a flow chart in Figure 1 to explain the data analysis pipeline, as well as state explicitly what is the dimension of B, LDA matrix, and proteome matrix.

      We thank the reviewer for the suggestion. Following the reviewer's advice, we have explicitly stated the dimensions of the vectors and matrices in the main text. We have also added descriptions of the dimensions of the constructed spaces. Rather than adding another flow chart to Figure 1, we added a new table (Table 1) to explain the various symbols representing vectors and matrices, thereby improving the accessibility of the explanation.

      (3) One of the main contributions of this work is to demonstrate how proteome stoichiometry is regulated across different conditions. A total of m=15 conditions were tested in this study, and this limits the rank of the LDA matrix to 14. Therefore, at most 14 "modes" of differential proteome composition can be detected.

      As a general reader, I am wondering what information can be extracted if, in the future, one increases or decreases the number of conditions (say, m=5 or m=50). It is conceivable that increasing the number of conditions with distinct cellular physiology would be beneficial to "explore" different modes of regulation for cells. As proof of principle, I am wondering if the authors could test a lower number (by sub-sampling from the m=15 conditions, e.g., picking five of the most distinct conditions) and see how this would affect proteome stoichiometry inference.

      We thank the reviewer for bringing an important point to our attention. To address the issue raised, we conducted a new subsampling analysis (Fig. S14).

      As we described in the main text (Fig. 6E) and the supplementary materials, the m x m orthogonal matrix Θ represents to what extent the two spaces Ω<sub>LE</sub> and Ω<sub>B</sub> are similar (m is the number of conditions; in our main analysis, m = 15). Thus, the low-dimensional correspondence between the two spaces connected by an orthogonal transformation, such as an m-dimensional rotation, can be evaluated by examining the elements of the matrix Θ. Specifically, large off-diagonal elements of the matrix Θ mix higher dimensions and lower dimensions, making the two spaces spanned by the first few major axes appear dissimilar. Based on this property, we evaluated the vulnerability of the low-dimensional correspondence between Ω<sub>LE</sub> and Ω<sub>B</sub> to the reduced number of conditions by measuring how close Θ was to the identity matrix when the analysis was performed on the subsampled datasets.

      In the new figure (Fig. S14), we first created all possible smaller condition sets by subsampling the conditions. Next, to evaluate the closeness between the matrix Θ and the identity matrix for each smaller condition set, we generated 10,000 random orthogonal matrices of the same size as Θ. We then evaluated the probability of obtaining a higher level of low-dimensional correspondence than that of the experimental data by chance (see section 1.8 of the Supplementary Materials). This analysis was already performed in the original manuscript for the non-subsampled case (m = 15) in Fig. S9C; the new analysis systematically evaluates the correspondence for the subsampled datasets.
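
      A compact sketch of this kind of randomisation; the closeness measure below (Frobenius distance to the identity) and the placeholder Θ are assumptions for illustration and need not match the exact statistic used in the supplement:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(m: int) -> np.ndarray:
    # Random orthogonal matrix via QR of a Gaussian matrix, with the standard
    # column sign correction so the draw is uniform over the orthogonal group.
    q, r = np.linalg.qr(rng.normal(size=(m, m)))
    return q * np.sign(np.diag(r))

def closeness_to_identity(theta: np.ndarray) -> float:
    # Assumed closeness measure: larger values mean closer to the identity.
    return -np.linalg.norm(theta - np.eye(theta.shape[0]))

m = 15
theta_obs = random_orthogonal(m)   # placeholder standing in for the experimental Theta
null = np.array([closeness_to_identity(random_orthogonal(m)) for _ in range(10_000)])

# Fraction of random orthogonal matrices at least as close to the identity as the
# observed matrix; a small value means the observed low-dimensional correspondence
# is unlikely to arise by chance.
p_chance = np.mean(null >= closeness_to_identity(theta_obs))
print(p_chance)
```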

      The results clearly show that low-dimensional correspondence is more likely to be obtained with more conditions (Fig. S14). In particular, when the number of conditions used in the analysis exceeds five, the median of the probability that random orthogonal matrices were closer to the identity matrix than the matrix Θ calculated from subsampled experimental data became lower than 10<sup>-4</sup>. This analysis provides insight into the number of conditions required to find low-dimensional correspondence between Ω<sub>LE</sub> and Ω<sub>B</sub>.

      Which conditions are used in the analysis can change the low-dimensional structures of Ω<sub>LE</sub> and Ω<sub>B</sub>. Therefore, it is important to clarify whether including more conditions in the analysis reduces the dependence of the low-dimensional structures on conditions. We leave this issue as a subject for future study. This issue relates to the effective dimensionality of omics profiles needed to establish the diverse physiological states of cells across conditions. Determining the minimum number of conditions needed to attain the condition-independent low-dimensional structures of Ω<sub>LE</sub> and Ω<sub>B</sub> would provide insight into this fundamental problem. Furthermore, such an analysis would identify the range of applications of Raman spectra as a tool for capturing macroscopic properties of cells at the system level.

      We now discuss this point in the Discussion section, referring to this analysis result (Fig. S14). Please also see our reply to the comment (1) by Reviewer #2 (Recommendations for the authors).

      (4) In E. coli cells, the total proteome is at mM concentration, while total metabolites are between 10 and 100 mM. Since proteins are large molecules with more functional groups, they may contribute more Raman signal (per molecule) than metabolites. Still, the meaningful quantity here is the "differential Raman signal" across different conditions, not the absolute signal. I am wondering what percentage of the differential Raman signature comes from the proteome and how much comes from the metabolome.

      It is an important and interesting question to what extent changes in the proteome and metabolome contribute to changes in Raman spectra. Though we concluded that answering this question is beyond the scope of this study, we believe it is an important topic for future research.

      Raman spectral patterns convey the comprehensive molecular composition spanning the various omics layers of target cells. Changes in the composition of these layers can be highly correlated, and identifying their contributions to changes in Raman spectra would provide insight into the mutual correlation of different omics layers. Addressing the issue raised by the reviewer would expand the applications of Raman spectroscopy and highlight the advantage of cellular Raman spectra as a means of capturing comprehensive multi-omics information.

      We note that some studies have evaluated the contributions of proteins, lipids, nucleic acids, and glycogen to the Raman spectra of mammalian cells and how these contributions change in different states (e.g., Mourant et al., J Biomed Opt, 10(3), 031106, 2005). Additionally, numerous studies have imaged or quantified metabolites in various cell types (see, for example, Cutshaw et al., Chemical Reviews, 123(13), 8297–8346, 2023, for a comprehensive review). Extending these approaches to multiple omics layers in future studies would help resolve the issue raised by the reviewer.

      (5) It is known that E. coli cells in different conditions have different cell sizes, where cell width increases with carbon-source quality and growth rate. Is this effect normalized when processing the Raman signal?

      Each spectrum was normalized by subtracting the average and dividing it by the standard deviation. This normalization minimizes the differences in signal intensities due to different cell sizes and densities. This information is shown in the Materials and Methods section of the Supplementary Materials.
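
      In code, this per-spectrum normalisation amounts to standardising each spectrum (a minimal sketch):

```python
import numpy as np

def normalize_spectrum(spectrum: np.ndarray) -> np.ndarray:
    """Subtract the spectrum's mean and divide by its standard deviation,
    suppressing overall intensity differences caused by, e.g., cell size or
    density before downstream analysis."""
    return (spectrum - spectrum.mean()) / spectrum.std()
```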

      (6) I have a question about the interpretation of the centrality index. A higher centrality indicates that the protein's expression pattern is more aligned with the "mainstream" of the other proteins in the proteome. However, it is possible that the proteome has multiple "mainstream modes" (with possibly different contributions in magnitude), and the centrality seems to only capture the "primary mode". A small group of proteins could all have low centrality but have very consistent patterns with high conservation of stoichiometry. I am wondering if the authors could discuss and clarify this.

      We thank the reviewer for drawing our attention to the insufficient explanation in the original manuscript. First, we note that stoichiometry conserving protein groups are not limited to those composed of proteins with high stoichiometry conservation centrality. The SCGs 2–5 are composed of proteins that strongly conserve stoichiometry within each group but have low stoichiometry conservation centrality (Fig. 5A, 5K, 5L, and 7A). In other words, our results demonstrate the existence of the "primary mainstream mode" (SCG 1, i.e., the homeostatic core) and condition-specific "non-primary mainstream modes" (SCGs 2–5). These primary and non-primary modes are distinguishable by their position along the axis of stoichiometry conservation centrality (Fig. 5A, 5K, and 5L).

      However, a single one-dimensional axis (centrality) cannot capture all characteristics of stoichiometry-conserving architecture. In our case, the "non-primary mainstream modes" (SCGs 2–5) were distinguished from each other by multiple csLE axes.

      To clarify this point, we modified the first paragraph of the section where we first introduce csLE (Revealing global stoichiometry conservation architecture of the proteomes with csLE). We also added a paragraph to the Discussion section regarding the condition-specific SCGs 2–5.

      (7) Figures 3, 4, and 5A-I are analyses on proteome data and are not related to Raman spectral data. I am wondering if this part of the analysis can be re-organized and not disrupt the mainline of the manuscript.

      We agree that the structure of this manuscript is complicated. Before submitting this manuscript to eLife, we seriously considered reorganizing it. However, we concluded that this structure was most appropriate because our focus on stoichiometry conservation cannot be explained without analyzing the coefficients of the Raman-proteome correspondence using the COG classification (see Fig. 3; note that Fig. 3A relates to Raman data). This analysis led us to examine the global stoichiometry conservation architecture of proteomes (Figs. 4 and 5) and to discover the unexpected similarity between the low-dimensional structures of Ω<sub>LE</sub> and Ω<sub>B</sub>.

      Therefore, we decided to keep the structure of the manuscript as it is. To partially resolve this issue, however, we added references to Fig. S1, the diagram of this paper’s mainline, to several places in the main text so that readers can more easily grasp the flow of the manuscript.

      (8) Supplementary Equation (2.6) could be wrong. From my understanding of the coordinate transformation definition here, it should be [w1 ... ws] X := RHS terms in big parenthesis.

      We checked the equation and confirmed that it is correct.

      Reviewer #2 (Recommendations for the authors):

      (1) The first main result, the linear map between Raman and proteome linked via B, is intriguing in the sense that the map is condition-independent. A speculative question I have is whether this relationship may become more complex or acquire more condition-dependent corrections as the number of conditions goes up. The 15 or so conditions are great, but it is not clear how general they are; they may be quite restrictive. For example, they assume an abundance of most other nutrients. Now, if you include a growth-rate decrease due to nitrogen or other limitations, do you expect this to work?

      In our previous paper (Kobayashi-Kirschvink et al., Cell Systems 7(1): 104–117.e4, 2018), we statistically demonstrated a linear correspondence between cellular Raman spectra and transcriptomes for fission yeast under 10 environmental conditions. These conditions included nutrient-rich and nutrient-limited conditions, such as nitrogen limitation. Since the Raman-transcriptome correspondence was only statistically verified in that study, we analyzed the data from the standpoint of stoichiometry conservation in this study. The results (Fig. S11 and S12) revealed a correspondence in lower dimensions similar to that observed in our main results. In addition, similar correspondences were obtained even for different E. coli strains under common culture conditions (Fig. S11 and S12). Therefore, it is plausible that the stoichiometry-conservation low-dimensional correspondence between Raman and gene expression profiles holds for a wide range of external and internal perturbations.

      We agree with the reviewer that it is important to understand how Raman-omics correspondences change with the number of conditions. To address this issue, we examined how the correspondence between Ω<sub>LE</sub> and Ω<sub>B</sub> changes by subsampling the conditions used in the analysis. We focused on Θ, which was introduced in Fig. 5E, because the closeness of Θ to the identity matrix represents correspondence precision. We found a general trend that the low-dimensional correspondence becomes more precise as the number of conditions increases (Fig. S14). This suggests that increasing the number of conditions generally improves the correspondence rather than disrupting it.

      We added a paragraph to the Discussion section addressing this important point. Please also refer to our response to Comment (3) of Reviewer #1 (Recommendations for the authors).

      (2) A little more explanation in the text for 3C/D would help. I am imagining 3D is the control for 3C. Minor comment - 3B looks identical to S4F but the y-axis label is different.

      We thank the reviewer for pointing out the insufficient explanation of Fig. 3C and 3D in the main text. Following this advice, we added explanations of these plots to the main text. We also added labels ("ISP COG class" and "non-ISP COG class") to the top of these two figures.

      Fig. 3B and S4F are different. For simplicity, we used the Pearson correlation coefficient in Fig. 3B. However, cosine similarity is a more appropriate measure for evaluating the degree of conservation of abundance ratios. Thus, we presented the result using cosine similarity in a supplementary figure (Fig. S4F). Please note that each point in Fig. S4F is calculated between proteome vectors of two conditions. The dimension of each proteome vector is the number of genes in each COG class.
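
      For reference, the two measures differ only in mean-centring: cosine similarity compares two proteome vectors up to a positive scale factor, which is what conservation of abundance ratios requires, whereas the Pearson correlation first subtracts each vector's mean:

```latex
\cos(x, y) = \frac{\sum_i x_i y_i}{\sqrt{\sum_i x_i^2}\;\sqrt{\sum_i y_i^2}},
\qquad
r(x, y) = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
               {\sqrt{\sum_i (x_i - \bar{x})^2}\;\sqrt{\sum_i (y_i - \bar{y})^2}}
```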

      (3) Can we see a log-log version of 4C to see how the low-abundant proteins are behaving? In fact, the same is in part true for Figure 3A.

      We added the semi-log version of the graph for SCG1 (the homeostatic core) in Fig. 4C to make low-abundant proteins more visible. Please note that the growth rates under the two stationary-phase conditions were zero; therefore, plotting this graph in log-log format is not possible.

      Fig. 3A cannot be shown as a log-log plot because many of the coefficients are negative. The insets in the graphs clarify the points near the origin.

      (4) In 5L, how should one interpret the other dots that are close to the center but not part of the SCG1? And this theme continues in 6ACD and 7A.

      The SCGs were obtained by setting a cosine similarity threshold. Therefore, proteins that are close to SCG 1 (the homeostatic core) but do not belong to it have a cosine similarity below the threshold with any protein in SCG 1. Fig. 7 illustrates the expression patterns of the proteins in question.
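
      One plausible reading of such a threshold-based grouping, sketched with toy data; the threshold value, the graph construction, and the use of connected components are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)

# Toy expression matrix: rows are proteins, columns are conditions.
X = rng.lognormal(size=(50, 15))

# Cosine similarity between every pair of protein expression profiles.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
cosine = Xn @ Xn.T

# Link proteins whose profiles exceed the similarity threshold and group linked
# proteins; a protein linked to no group member falls outside every group.
threshold = 0.95
n_groups, labels = connected_components(csr_matrix(cosine > threshold), directed=False)
print(n_groups)
```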

      (5) Finally, I do not fully appreciate the whole analysis connecting Ω<sub>csLE</sub> and Ω<sub>B</sub> and the plots in 6 and 7. This corresponds to a lot of linear algebra in the 50 or so pages of section 1.8 in the supplementary. If the authors feel this is crucial in some way, it needs to be better motivated and explained. I philosophically appreciate developing more formalism to establish these connections, but I did not understand how this could (maybe even in the future) lead to a new interpretation or analysis or theory.

      The mathematical analyses included in the supplementary materials are important for readers who are interested in understanding the mathematics behind our conclusions. However, we also thought these arguments were too detailed for many readers when preparing the original submission and decided to show them in the supplemental materials.

      To better explain the motivation behind the mathematical analyses, we revised the section “Representing the proteomes using the Raman LDA axes”.

      Please also see our reply to the comment (6) by Reviewer #2 (Recommendations for the authors) below.

      (6) Along the lines of the previous point, there seem to be two separate points being made: a) there is a correspondence between Raman and proteins, and b) we can use the protein data to look at centrality, generality, SCGs, etc. And the two don't seem to be linked until the formalism of the Ω spaces?

      The reviewer is correct that we can calculate and analyze some of the quantities introduced in this study, such as stoichiometry conservation centrality and expression generality, without Raman data. However, it is difficult to justify introducing these quantities without analyzing the correspondence between the Raman and proteome profiles. Moreover, the definition of expression generality was derived from the analysis of Raman-proteome correspondence (see section 2.2 of the Supplementary Materials). Therefore, point b) cannot stand alone without point a) from its initial introduction.

      To partially improve the readability and resolve the issue of complicated structure of this manuscript, we added references to Fig. S1, which is a diagram of the paper’s mainline, to several places in the main text. Please also see our reply to the comment (7) by Reviewer #1 (Recommendations for the authors).

    1. Reviewer #1 (Public review):

      Summary:

      The authors recorded neural activity using laminar probes while mice engaged in a global/local visual oddball paradigm. The focus of the article is on oscillatory activity, and found activity differences in theta, alpha/beta, and gamma bands related to predictability and prediction error.

      I think this is an important paper, providing more direct evidence for the role of signals in different frequency bands related to predictability and surprise in the sensory cortex.

      Comments:

      Below are some comments that may hopefully help further improve the quality of this already very interesting manuscript.

      (1) Introduction:

      The authors write in their introduction: "H1 further suggests a role for θ oscillations in prediction error processing as well." Without being fleshed out further, it is unclear what role this would be, or why. Could the authors expand this statement?

      (2) Limited propagation of gamma band signals:

      Some recent work (e.g. https://www.cell.com/cell-reports/fulltext/S2211-1247(23)00503-X) suggests that gamma-band signals mainly reflect entrainment of fast-spiking interneurons and don't propagate from V1 to downstream areas. Could the authors connect their findings to these emerging findings, which suggest no role for gamma-band activity in communication outside of the cortical column?

      (3) Paradigm:

      While I agree that the paradigm tests whether a specific type of temporal prediction can be formed, it is not a type of prediction that one would easily observe in mice, or even humans. The regularity that must be learned, in order to be able to see a reflection of predictability, integrates over 4 stimuli, each shown for 500 ms with a 500 ms blank in between (and a 1000 ms interval separating the 4th stimulus from the 1st stimulus of the next sequence). In other words, the mouse must keep in working memory three stimuli, which partly occurred more than a second ago, in order to correctly predict the fourth stimulus (and signal a 1000 ms interval as evidence for starting a new sequence).

      A problem with this paradigm is that positive findings are easier to interpret than negative findings. If mice do not show a modulation to the global oddball, is it because "predictive coding" is the wrong hypothesis, or simply because the authors generated a design that operates outside of the boundary conditions of the theory? I think the latter is more plausible. Even in more complex animals, (eg monkeys or humans), I suspect that participants would have trouble picking up this regularity and sequence, unless it is directly task-relevant (which it is not, in the current setting). Previous experiments often used simple pairs (where transitional probability was varied, eg, Meyer and Olson, PNAS 2012) of stimuli that were presented within an intervening blank period. Clearly, these regularities would be a lot simpler to learn than the highly complex and temporally spread-out regularity used here, facilitating the interpretation of negative findings (especially in early cortical areas, which are known to have relatively small temporal receptive fields).

      I am, of course, not asking the authors to redesign their study. I would like to ask them to discuss this caveat more clearly, in the Introduction and Discussion, and situate their design in the broader literature. For example, Jeff Gavornik has used much more rapid stimulus designs and observed clear modulations of spiking activity in early visual regions. I realize that this caveat may be more relevant for the spiking paper (which does not show any spiking activity modulation in V1 by global predictability) than for the current paper, but I still think it is an important general caveat to point out.

      (4) Reporting of results:

      I did not see any quantification of the strength of evidence of any of the results, beyond a general statement that all reported results pass significance at an alpha=0.01 threshold. It would be informative to know, for all reported results, what exactly the p-value of the significant cluster is; as well as for which performed tests there was no significant difference.

      (5) Cluster test:

      The authors use a three-dimensional cluster test, clustering across time, frequency, and location/channel. I am wondering how meaningful this analytical approach is. For example, there could be clusters that show an early difference at some location in low frequencies, and then a later difference in a different frequency band at another (adjacent) location. It seems a priori illogical to me to want to cluster across all these dimensions together, given that this kind of clustering does not appear neurophysiologically plausible or meaningful. Can the authors motivate their choice of three-dimensional clustering, or better, facilitating interpretability, cluster eg at space and time within specific frequency bands (2d clustering)?

    2. Reviewer #2 (Public review):

      Summary:

      Sennesh and colleagues analyzed LFP data from 6 regions of rodents while they were habituated to a stimulus sequence containing a local oddball (xxxy) and later exposed to either the same (xxxY) or a deviant global oddball (xxxX). Subsequently, they were exposed to a controlled random sequence (XXXY) or a controlled deterministic sequence (xxxx or yyyy). From these, the authors looked for differences in spectral properties (both oscillatory and aperiodic) between three contrasts (only for the last stimulus of the sequence).

      (1) Deviance detection: unpredictable random (XXXY) versus predictable habituation (xxxy)

      (2) Global oddball: unpredictable global oddball (xxxX) versus predictable deterministic (xxxx), and

      (3) "Stimulus-specific adaptation:" locally unpredictable oddball (xxxY) versus predictable deterministic (yyyy).

      They found evidence for an increase in gamma (and theta in some cases) for unpredictable versus predictable stimuli, and a reduction in alpha/beta, which they consider evidence towards the "predictive routing" scheme.

      While the dataset and analyses are well-suited to test evidence for predictive coding versus alternative hypotheses, I felt that the formulation was ambiguous, and the results were not very clear. My major concerns are as follows:

      (1) The authors set up three competing hypotheses, in which H1 and H2 make directly opposite predictions. However, it must be noted that H2 is proposed for spatial prediction, where the predictability is computed from the part of the image outside the RF. This is different from the temporal prediction that is tested here. Evidence in favor of H2 is readily observed when large gratings are presented, for which there is substantially more gamma than in small images. Actually, there are multiple features in the spectral domain that should not be conflated, namely (i) the transient broadband response, which includes all frequencies, (ii) contribution from the evoked response (ERP), which is often in frequencies below 30 Hz, (iii) narrow-band gamma oscillations which are produced by large and continuous stimuli (which happen to be highly predictive), and (iv) sustained low-frequency rhythms in theta and alpha/beta bands which are prominent before stimulus onset and reduce after ~200 ms of stimulus onset. The authors should be careful to incorporate these in their formulation of PC, and in particular should not conflate narrow-band and broadband gamma.

      (2) My understanding is that any aspect of predictive coding must be present before the onset of stimulus (expected or unexpected). So, I was surprised to see that the authors have shown the results only after stimulus onset. For all figures, the authors should show results from -500 ms to 500 ms instead of zero to 500 ms.

      (3) In many cases, some change is observed in the initial ~100 ms of stimulus onset, especially for the alpha/beta and theta ranges. However, the evoked response contributes substantially in the transient period in these frequencies, and this evoked response could be different for different conditions. The authors should show the evoked responses to confirm the same, and if the claim really is that predictions are carried by genuine "oscillatory" activity, show the results after removing the ERP (as they had done for the CSD analysis).

      (4) I was surprised by the statistics used in the plots. Anything that is even slightly positive or negative is turning out to be significant. Perhaps the authors could use a more stringent criterion for multiple comparisons?

      (5) Since the design is blocked, there might be changes in global arousal levels. This is particularly important because the more predictive stimuli in the controlled deterministic stimuli were presented towards the end of the session, when the animal is likely less motivated. One idea to check for this is to do the analysis on the 3rd stimulus instead of the 4th? Any general effect of arousal/attention will be reflected in this stimulus.

      (6) The authors should also acknowledge/discuss that typical stimulus presentation/attention modulation involves both (i) an increase in broadband power early on and (ii) a reduction in low-frequency alpha/beta power. This could be just a sensory response, without having a role in sending prediction signals per se. So the predictive routing hypothesis should involve testing for signatures of prediction while ruling out other confounds related to stimulus/cognition. It is, of course, very difficult to do so, but at the same time, simply showing a reduction in low-frequency power coupled with an increase in high-frequency power is not sufficient to prove PR.

      (7) The CSD results need to be explained better - you should explain on what basis they are being called feedforward/feedback. Was LFP taken from Layer 4 LFP (as was done by van Kerkoerle et al, 2014)? The nice ">" and "<" CSD patterns (Figure 3B and 3F of their paper) in that paper are barely observed in this case, especially for the alpha/beta range.

      (8) Figure 4a-c, I don't see a reduction in the broadband signal in a compared to b in the initial segment. Maybe change the clim to make this clearer?

      (9) Figure 5 - please show the same for all three frequency ranges, show all bars (including the non-significant ones), and indicate the significance (p-values or by *, **, ***, etc) as done usually for bar plots.

      (10) Their claim of alpha/beta oscillations being suppressed for unpredictable conditions is not as evident. A figure akin to Figure 5 would be helpful to see if this assertion holds.

      (11) To investigate the prediction and violation or confirmation of expectation, it would help to look at both the baseline and stimulus periods in the analyses.

    3. Reviewer #3 (Public review):

      Summary:

      In their manuscript entitled "Ubiquitous predictive processing in the spectral domain of sensory cortex", Sennesh and colleagues perform spectral analysis across multiple layers and areas in the visual system of mice. Their results are timely and interesting as they provide a complement to a study from the same lab focussed on firing rates, instead of oscillations. Together, the present study argues for a hypothesis called predictive routing, which proposes that non-predictable stimuli are gated by gamma oscillations, while alpha/beta oscillations are related to predictions.

      Strengths:

      (1) The study contains a clear introduction, which provides a clear contrast between a number of relevant theories in the field, including their hypotheses in relation to the present data set.

      (2) The study provides a systematic analysis across multiple areas and layers of the visual cortex.

      Weaknesses:

      (1) It is claimed in the abstract that the present study supports predictive routing over predictive coding; however, this claim is nowhere in the manuscript directly substantiated. Not even the differences are clearly laid out, much less tested explicitly. While this might be obvious to the authors, it remains completely opaque to the reader, e.g., as it is also not part of the different hypotheses addressed. I guess this result is meant in contrast to reference 17, by some of the same authors, which argues against predictive coding, while the present work finds differences in the results, which they relate to spectral vs firing rate analysis (although without direct comparison).

      (2) Most of the claims about a direction of propagation of certain frequency-related activities (made in the context of Figures 2-4) are - to the eyes of the reviewer - not supported by actual analysis but glimpsed from the pictures, sometimes, with very little evidence/very small time differences to go on. To keep these claims, proper statistical testing should be performed.

      (3) Results from different areas are barely presented. While I can see that presenting them in the same format as Figures 2-4 would be quite lengthy, it might be a good idea to contrast the right columns (difference plots) across areas, rather than just the overall averages.

      (4) Statistical testing is treated very generally, which can help to improve the readability of the text; however, in the present case, this is a bit extreme, with even obvious tests not reported or not even performed (in particular in Figure 5).

      (5) The description of the analysis in the methods is rather short and, to my eye, was missing one of the key descriptions, i.e., how the CSD plots were baselined (which was hinted at in the results, but, as far as I know, not clearly described in the analysis methods). Maybe the authors could section the methods more to point out where this is discussed.

      (6) While I appreciate the efforts of the authors to formulate their hypotheses and test them clearly, the text is quite dense at times. Partly this is due to the compared conditions in this paradigm; however, it would help a lot to show a visualization of what is being compared in Figures 2-4, rather than just showing the results.

    4. Author response:

      We would like to thank the three Reviewers for their thoughtful comments and detailed feedback. We are pleased to hear that the Reviewers found our paper to be “providing more direct evidence for the role of signals in different frequency bands related to predictability and surprise” (R1), “well-suited to test evidence for predictive coding versus alternative hypotheses” (R2), and “timely and interesting” (R3).

      We understand that the reviewers have an overall positive impression of the experiments and analyses, but find the text somewhat dense and would like to see additional statistical rigor and, in some cases, additional analyses included in the supplementary material. We therefore provide here a provisional letter addressing the revisions we have already made and outlining, point by point, the revisions we are planning. Each enumerated point begins with the Reviewer’s quoted text, followed by our response.

      Reviewer 1:

      (1) Introduction:

      The authors write in their introduction: "H1 further suggests a role for θ oscillations in prediction error processing as well." Without being fleshed out further, it is unclear what role this would be, or why. Could the authors expand this statement?”

      We have edited the text to indicate that theta-band activity has been related to prediction error processing as an empirical observation; we must regrettably leave inferences about its functional role to future work, with experiments designed specifically to elicit theta-band activity.

      (2) Limited propagation of gamma band signals:

      Some recent work (e.g. https://www.cell.com/cell-reports/fulltext/S2211-1247(23)00503-X) suggests that gamma-band signals reflect mainly entrainment of the fast-spiking interneurons, and don't propagate from V1 to downstream areas. Could the authors connect their findings to these emerging findings, suggesting no role for gamma-band activity in communication outside of the cortical column?”

      We have not specifically claimed that gamma propagates between columns/areas in our recordings, only that it synchronizes synaptic current flows between laminar layers within a column/area. We nonetheless suggest that gamma can locally synchronize a column, and potentially local columns within an area via entrainment of local recurrent spiking, to update an internal prediction/representation upon onset of a prediction error. We also point the Reviewer to our Discussion section, where we state that our results fit with a model “whereby θ oscillations synchronize distant areas, enabling them to exchange relevant signals during cognitive processing.” In our present work, we therefore remain agnostic about whether theta or gamma or both (or alternative mechanisms) are at play in terms of how prediction error signals are transmitted between areas.

      (3) Paradigm:

      While I agree that the paradigm tests whether a specific type of temporal prediction can be formed, it is not a type of prediction that one would easily observe in mice, or even humans. The regularity that must be learned, in order to be able to see a reflection of predictability, integrates over 4 stimuli, each shown for 500 ms with a 500 ms blank in between (and a 1000 ms interval separating the 4th stimulus from the 1st stimulus of the next sequence). In other words, the mouse must keep in working memory three stimuli, which partly occurred more than a second ago, in order to correctly predict the fourth stimulus (and signal a 1000 ms interval as evidence for starting a new sequence).

      A problem with this paradigm is that positive findings are easier to interpret than negative findings. If mice do not show a modulation to the global oddball, is it because "predictive coding" is the wrong hypothesis, or simply because the authors generated a design that operates outside of the boundary conditions of the theory? I think the latter is more plausible. Even in more complex animals, (eg monkeys or humans), I suspect that participants would have trouble picking up this regularity and sequence, unless it is directly task-relevant (which it is not, in the current setting). Previous experiments often used simple pairs (where transitional probability was varied, eg, Meyer and Olson, PNAS 2012) of stimuli that were presented within an intervening blank period. Clearly, these regularities would be a lot simpler to learn than the highly complex and temporally spread-out regularity used here, facilitating the interpretation of negative findings (especially in early cortical areas, which are known to have relatively small temporal receptive fields).

      I am, of course, not asking the authors to redesign their study. I would like to ask them to discuss this caveat more clearly, in the Introduction and Discussion, and situate their design in the broader literature. For example, Jeff Gavornik has used much more rapid stimulus designs and observed clear modulations of spiking activity in early visual regions. I realize that this caveat may be more relevant for the spiking paper (which does not show any spiking activity modulation in V1 by global predictability) than for the current paper, but I still think it is an important general caveat to point out.”

      We appreciate the Reviewer’s concern about working memory limitations in mice. Our paradigm and training followed on from previous paradigms such as Gavornik and Bear (2014), in which predictive effects were observed in mouse V1 with presentation times of 150ms and interstimulus intervals of 1500ms. In addition, we note that Jamali et al. (2024) recently utilized a similar global/local paradigm in the auditory domain with inter-sequence intervals as long as 28-30 seconds, and still observed effects of a predicted sequence (https://elifesciences.org/articles/102702). For the revised manuscript, we plan to expand on this in the Discussion section.

      That being said, as the Reviewer also pointed out, this would be a greater concern had we not found any positive findings in our study. However, even with the rather long sequence periods we used, we did find positive evidence for predictive effects, supporting the use of our current paradigm. We agree with the reviewer that these positive effects are easier to interpret than negative effects, and plan to expand upon this in the Discussion when we resubmit.

      (4) Reporting of results:

      I did not see any quantification of the strength of evidence of any of the results, beyond a general statement that all reported results pass significance at an alpha=0.01 threshold. It would be informative to know, for all reported results, what exactly the p-value of the significant cluster is; as well as for which performed tests there was no significant difference.”

      For the revised manuscript, we can include the p-values after cluster-based testing for each significant cluster, as well as show data that passes a more stringent threshold of p<0.001 (1/1000) or p<0.005 (1/200) rather than our present p<0.01 (1/100).

      (5) Cluster test:

      The authors use a three-dimensional cluster test, clustering across time, frequency, and location/channel. I am wondering how meaningful this analytical approach is. For example, there could be clusters that show an early difference at some location in low frequencies, and then a later difference in a different frequency band at another (adjacent) location. It seems a priori illogical to me to want to cluster across all these dimensions together, given that this kind of clustering does not appear neurophysiologically plausible or meaningful. Can the authors motivate their choice of three-dimensional clustering, or better, facilitating interpretability, cluster eg at space and time within specific frequency bands (2d clustering)?”

      We are happy to include a 3D plot of a time-channel-frequency cluster in the revised manuscript to clarify our statistical approach for the reviewer. We consider our current three-dimensional cluster-testing an “unsupervised” way of uncovering significant contrasts with no theory-driven assumptions about which bounded frequency bands or layers do what.
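
      For concreteness, the cluster-forming and permutation steps of such a test can be sketched as follows. This is a generic illustration of three-dimensional (channel, frequency, time) cluster-based permutation testing, with assumed array shapes, an assumed cluster-forming threshold, and a sign-flip permutation scheme; it is not the exact pipeline used in this study.

      ```python
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)

      # Illustrative data: per-trial condition differences with shape
      # (n_trials, n_channels, n_freqs, n_times); real values would come
      # from the spectral decomposition of the LFP.
      diff = rng.standard_normal((40, 16, 30, 50))

      def cluster_masses(stat_map, threshold):
          """Label supra-threshold points that are adjacent in the
          channel/frequency/time grid and return each 3-D cluster's
          summed absolute statistic ("cluster mass")."""
          labels, n = ndimage.label(np.abs(stat_map) > threshold)
          if n == 0:
              return np.array([])
          return ndimage.sum(np.abs(stat_map), labels, index=np.arange(1, n + 1))

      def t_map(d):
          """One-sample t-statistic of the difference against zero."""
          return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

      obs = cluster_masses(t_map(diff), threshold=2.0)

      # Permutation null: flip the sign of each trial's difference at random
      # and keep the largest cluster mass from each permutation.
      null = np.empty(500)
      for i in range(500):
          flips = rng.choice([-1.0, 1.0], size=diff.shape[0])[:, None, None, None]
          masses = cluster_masses(t_map(diff * flips), threshold=2.0)
          null[i] = masses.max() if masses.size else 0.0

      # Cluster-level p-value for each observed cluster.
      p_values = [(null >= m).mean() for m in obs]
      ```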

      Reviewer 2:

      Sennesh and colleagues analyzed LFP data from 6 regions of rodents while they were habituated to a stimulus sequence containing a local oddball (xxxy) and later exposed to either the same (xxxY) or a deviant global oddball (xxxX). Subsequently, they were exposed to a controlled random sequence (XXXY) or a controlled deterministic sequence (xxxx or yyyy). From these, the authors looked for differences in spectral properties (both oscillatory and aperiodic) between three contrasts (only for the last stimulus of the sequence).

      (1) Deviance detection: unpredictable random (XXXY) versus predictable habituation (xxxy)

      (2) Global oddball: unpredictable global oddball (xxxX) versus predictable deterministic (xxxx), and

      (3) "Stimulus-specific adaptation:" locally unpredictable oddball (xxxY) versus predictable deterministic (yyyy).

      They found evidence for an increase in gamma (and theta in some cases) for unpredictable versus predictable stimuli, and a reduction in alpha/beta, which they consider evidence towards the "predictive routing" scheme.

      While the dataset and analyses are well-suited to test evidence for predictive coding versus alternative hypotheses, I felt that the formulation was ambiguous, and the results were not very clear. My major concerns are as follows:”

      We appreciate the reviewer’s concerns and outline how we will address them below:

      (1) The authors set up three competing hypotheses, in which H1 and H2 make directly opposite predictions. However, it must be noted that H2 is proposed for spatial prediction, where the predictability is computed from the part of the image outside the RF. This is different from the temporal prediction that is tested here. Evidence in favor of H2 is readily observed when large gratings are presented, for which there is substantially more gamma than in small images. Actually, there are multiple features in the spectral domain that should not be conflated, namely (i) the transient broadband response, which includes all frequencies, (ii) contribution from the evoked response (ERP), which is often in frequencies below 30 Hz, (iii) narrow-band gamma oscillations which are produced by large and continuous stimuli (which happen to be highly predictive), and (iv) sustained low-frequency rhythms in theta and alpha/beta bands which are prominent before stimulus onset and reduce after ~200 ms of stimulus onset. The authors should be careful to incorporate these in their formulation of PC, and in particular should not conflate narrow-band and broadband gamma.”

      We have clarified in the manuscript that while the gamma-as-prediction hypothesis (our H2) was originally proposed in a spatial prediction domain, further work (specifically Singer (2021)) has extended the hypothesis to cover temporal-domain predictions as well.

      To address the reviewer’s point about multiple features in the spectral domain: Our analysis has specifically separated aperiodic components using FOOOF analysis (Supp. Fig. 1) and explicitly fit and tested aperiodic vs. periodic components (Supp. Figs 1&2). We did not find strong effects in the aperiodic components but did in the periodic components (Supp. Fig. 2), allowing us to be more confident in our conclusions in terms of genuine narrow-band oscillations. In the revised manuscript, we will include analysis of the pre-stimulus time window to address the reviewer’s point (iv) on sustained low frequency oscillations.
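
      As a concrete illustration of how periodic and aperiodic components are separated, the sketch below fits the FOOOF spectral parameterisation model to a synthetic power spectrum. The synthetic spectrum, frequency range, and parameter settings are illustrative assumptions, not the settings used in our analyses.

      ```python
      import numpy as np
      from fooof import FOOOF

      # Synthetic power spectrum: a 1/f aperiodic component plus a
      # narrow-band "oscillation" modelled as a bump around 40 Hz.
      freqs = np.linspace(1, 100, 200)
      aperiodic = 10.0 / freqs**1.5
      peak = 0.5 * np.exp(-((freqs - 40.0) ** 2) / (2 * 2.0**2))
      spectrum = aperiodic + peak

      # Fit the spectral parameterisation model over 3-100 Hz.
      fm = FOOOF(peak_width_limits=(1.0, 8.0), max_n_peaks=4)
      fm.fit(freqs, spectrum, freq_range=(3.0, 100.0))

      # Aperiodic parameters (offset, exponent) and detected periodic peaks
      # (centre frequency, power above the aperiodic fit, bandwidth).
      print("aperiodic (offset, exponent):", fm.aperiodic_params_)
      print("peaks (CF, PW, BW):", fm.peak_params_)
      ```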

      (2) My understanding is that any aspect of predictive coding must be present before the onset of stimulus (expected or unexpected). So, I was surprised to see that the authors have shown the results only after stimulus onset. For all figures, the authors should show results from -500 ms to 500 ms instead of zero to 500 ms.

      In our revised manuscript we will include a pre-stimulus analysis and supplementary figures with time ranges from -500 ms to 500 ms. We refrained from doing so in the initial manuscript only because our paradigm’s short interstimulus interval makes it difficult to interpret whether activity in the ISI reflects post-stimulus dynamics or pre-stimulus prediction. Nonetheless, we can easily show that in our paradigm, alpha/beta-band activity is elevated during the interstimulus interval after the offset of the previous stimulus, provided we baseline to the pre-trial period.

      (3) In many cases, some change is observed in the initial ~100 ms of stimulus onset, especially for the alpha/beta and theta ranges. However, the evoked response contributes substantially in the transient period in these frequencies, and this evoked response could be different for different conditions. The authors should show the evoked responses to confirm the same, and if the claim really is that predictions are carried by genuine "oscillatory" activity, show the results after removing the ERP (as they had done for the CSD analysis).

      We have included an extra sentence in our Materials and Methods section clarifying that the evoked potential/ERP was removed in our existing analyses, prior to performing the spectral decomposition of the LFP signal. We also note that the FOOOF analysis we applied separates aperiodic components of the spectral signal from the strictly oscillatory ones.

      In our revised manuscript we will include an analysis of the evoked responses as suggested by the reviewer.
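
      For readers unfamiliar with the procedure, removing the evoked response before spectral decomposition amounts to subtracting each condition's trial-averaged waveform from the corresponding single trials. A minimal sketch with made-up array shapes is given below; it illustrates the general technique rather than our exact code.

      ```python
      import numpy as np

      def subtract_erp(lfp, condition_labels):
          """Remove the evoked response (ERP) from single-trial LFPs by
          subtracting each condition's trial-averaged waveform.

          lfp: array of shape (n_trials, n_channels, n_times)
          condition_labels: array of shape (n_trials,)
          Returns the "induced" residual with the same shape.
          """
          residual = lfp.copy()
          for cond in np.unique(condition_labels):
              idx = condition_labels == cond
              erp = lfp[idx].mean(axis=0, keepdims=True)  # evoked response
              residual[idx] = lfp[idx] - erp
          return residual

      # Example with random data standing in for recorded LFPs; the spectral
      # decomposition (e.g. wavelets or multitapers) is then applied to the
      # residual rather than to the raw single-trial LFP.
      rng = np.random.default_rng(1)
      induced = subtract_erp(rng.standard_normal((120, 16, 500)),
                             rng.integers(0, 3, size=120))
      ```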

      (4) I was surprised by the statistics used in the plots. Anything that is even slightly positive or negative is turning out to be significant. Perhaps the authors could use a more stringent criterion for multiple comparisons?

      As noted above to Reviewer 1 (point 4), we are happy to include supplemental figures in our resubmission showing the effects on our results of setting the statistical significance threshold with considerably greater stringency.

      (5) Since the design is blocked, there might be changes in global arousal levels. This is particularly important because the more predictive stimuli in the controlled deterministic stimuli were presented towards the end of the session, when the animal is likely less motivated. One idea to check for this is to do the analysis on the 3rd stimulus instead of the 4th? Any general effect of arousal/attention will be reflected in this stimulus.

      In order to check for the brain-wide effects of arousal, we plan to perform similar analyses to our existing ones on the 3rd stimulus in each block, rather than just the 4th “oddball” stimulus. Clusters that appear significantly contrasting in both the 3rd and 4th stimuli may be attributable to arousal.  We will also analyze pupil size as an index of arousal to check for arousal differences between conditions in our contrasts, possibly stratifying our data before performing comparisons to equalize pupil size within contrasts. We plan to include these analyses in our resubmission.

      (6) The authors should also acknowledge/discuss that typical stimulus presentation/attention modulation involves both (i) an increase in broadband power early on and (ii) a reduction in low-frequency alpha/beta power. This could be just a sensory response, without having a role in sending prediction signals per se. So the predictive routing hypothesis should involve testing for signatures of prediction while ruling out other confounds related to stimulus/cognition. It is, of course, very difficult to do so, but at the same time, simply showing a reduction in low-frequency power coupled with an increase in high-frequency power is not sufficient to prove PR.

      Since the many different predictive coding and predictive processing hypotheses make very different claims about how predictions might be encoded in neurophysiological recordings, we have focused on prediction error encoding in this paper.

      For the hypothesis space we have considered (H1-H3), each hypothesis makes clearly distinguishable predictions about the spectral response during the time period in the task when prediction errors should be present. As noted by the reviewer, a transient increase in broadband frequencies would be a signature of H3. Changes to oscillatory power in the gamma band in distinct directions (e.g., increasing or decreasing with prediction error) would support either H1 or H2, depending on the direction of change. We believe our data, especially our use of FOOOF analysis and separation of periodic from aperiodic components, coupled with the three experimental contrasts, speaks clearly in favor of the Predictive Routing model, but we do not claim we have “proved” it. This study provides just one datapoint, and we will acknowledge this in our revised Discussion in our resubmission.

      (7) The CSD results need to be explained better - you should explain on what basis they are being called feedforward/feedback. Was LFP taken from Layer 4 LFP (as was done by van Kerkoerle et al, 2014)? The nice ">" and "<" CSD patterns (Figure 3B and 3F of their paper) in that paper are barely observed in this case, especially for the alpha/beta range.

      We consider a feedforward pattern as flowing from L4 outwards to L2/3 and L5/6, and a feedback pattern as flowing in the opposite direction, from L1 and L6 to the middle layers. We will clarify this in the revised manuscript.

      Since gamma-band oscillations are strongest in L2/3, we re-epoched LFPs to the oscillation troughs in L2/3 in the initial manuscript. We can include in the revised manuscript equivalent plots after finding oscillation troughs in L4 instead, as well as calculating the difference in trough times within-band between layers to quantify the transmission delay and add additional rigor to our feedforward vs. feedback interpretation of the CSD data.
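
      For context, a common estimator of CSD from a laminar probe is the negative second spatial derivative of the LFP across contacts, which yields physical units up to a conductivity factor. The sketch below shows this standard estimator with an assumed contact spacing and conductivity; it is not necessarily the exact implementation used here.

      ```python
      import numpy as np

      def csd_second_derivative(lfp, spacing_um=100.0, conductivity=0.3):
          """Estimate current source density as the negative second spatial
          derivative of the LFP along the probe depth.

          lfp: array of shape (n_channels, n_times), channels ordered by depth
          spacing_um: inter-contact spacing in micrometres (assumed value)
          conductivity: extracellular conductivity in S/m (assumed value)
          Returns an array of shape (n_channels - 2, n_times).
          """
          h = spacing_um * 1e-6  # contact spacing in metres
          second_diff = lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]
          return -conductivity * second_diff / h**2

      # Example with synthetic data standing in for a 16-contact laminar LFP.
      rng = np.random.default_rng(2)
      csd = csd_second_derivative(rng.standard_normal((16, 1000)))
      # Under this convention, sinks are negative and sources positive; the
      # depth-by-time pattern of sinks and sources is what gets read as
      # feedforward (L4 outward) vs feedback (L1/L6 toward middle layers).
      ```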

      (8) Figure 4a-c, I don't see a reduction in the broadband signal in a compared to b in the initial segment. Maybe change the clim to make this clearer?

      We are looking into the clim/colorbar and plot-generation code to figure out the visibility issue that the Reviewer has kindly pointed out to us.

      (9) Figure 5 - please show the same for all three frequency ranges, show all bars (including the non-significant ones), and indicate the significance (p-values or by *, **, ***, etc) as done usually for bar plots.

      We will add the requested bar plots for all frequency ranges, though we note that the bars shown here sum the spectral power within channel-time-frequency clusters that already passed significance testing, so adding secondary significance tests at this stage may not prove informative.

      (10) Their claim of alpha/beta oscillations being suppressed for unpredictable conditions is not as evident. A figure akin to Figure 5 would be helpful to see if this assertion holds.

      As noted above, we will include the requested bar plot, as well as examining alpha/beta in the pre-stimulus time-series rather than after the onset of the oddball stimulus.

      (11) To investigate the prediction and violation or confirmation of expectation, it would help to look at both the baseline and stimulus periods in the analyses.

      We will include a supplementary figure showing the spectrograms for the baseline and full-trial periods, in order to examine the difference between baseline activity and pre-stimulus expectation.

      Reviewer 3:

      Summary:

      In their manuscript entitled "Ubiquitous predictive processing in the spectral domain of sensory cortex", Sennesh and colleagues perform spectral analysis across multiple layers and areas in the visual system of mice. Their results are timely and interesting as they provide a complement to a study from the same lab focussed on firing rates, instead of oscillations. Together, the present study argues for a hypothesis called predictive routing, which proposes that non-predictable stimuli are gated by gamma oscillations, while alpha/beta oscillations are related to predictions.

      Strengths:

      (1) The study contains a clear introduction, which provides a clear contrast between a number of relevant theories in the field, including their hypotheses in relation to the present data set.

      (2) The study provides a systematic analysis across multiple areas and layers of the visual cortex.”

      We thank the Reviewer for their kind comments.

      Weaknesses:

      (1) It is claimed in the abstract that the present study supports predictive routing over predictive coding; however, this claim is nowhere in the manuscript directly substantiated. Not even the differences are clearly laid out, much less tested explicitly. While this might be obvious to the authors, it remains completely opaque to the reader, e.g., as it is also not part of the different hypotheses addressed. I guess this result is meant in contrast to reference 17, by some of the same authors, which argues against predictive coding, while the present work finds differences in the results, which they relate to spectral vs firing rate analysis (although without direct comparison).

      We agree that in this manuscript we should restrict ourselves to the hypotheses that were directly tested. We have revised our abstract accordingly, and softened our claim to note only that our LFP results are compatible with predictive routing.

      (2) Most of the claims about a direction of propagation of certain frequency-related activities (made in the context of Figures 2-4) are - to the eyes of the reviewer - not supported by actual analysis but glimpsed from the pictures, sometimes, with very little evidence/very small time differences to go on. To keep these claims, proper statistical testing should be performed.

      In our revised manuscript, we will either substantiate (with quantification of CSD delays between layers) or soften the claims about feedforward/feedback direction of flow within the cortical column.

      (3) Results from different areas are barely presented. While I can see that presenting them in the same format as Figures 2-4 would be quite lengthy, it might be a good idea to contrast the right columns (difference plots) across areas, rather than just the overall averages.

      In our revised manuscript we will gladly include a supplementary figure showing the right-column difference plots across areas, in order to make sure to include aspects of our dataset that span up and down the cortical hierarchy.

      (4) Statistical testing is treated very generally, which can help to improve the readability of the text; however, in the present case, this is a bit extreme, with even obvious tests not reported or not even performed (in particular in Figure 5).

      We appreciate the Reviewer’s concern for statistical rigor, and as noted to the other reviewers, we can add different levels of statistical description and describe the p-values associated with specific clusters. Regarding Figure 5, we must protest, as the bar heights were computed from clusters already subjected to statistical testing and found significant. We could add a supplementary figure which considers untested narrowband activity and tests it only in the “bar height” domain, if the Reviewer would like.

      (5) The description of the analysis in the methods is rather short and, to my eye, was missing one of the key descriptions, i.e., how the CSD plots were baselined (which was hinted at in the results, but, as far as I know, not clearly described in the analysis methods). Maybe the authors could section the methods more to point out where this is discussed.

      We have added some elaboration to our Materials and Methods section, especially to specify that CSD, having physical rather than arbitrary units, does not require baselining.

      (6) While I appreciate the efforts of the authors to formulate their hypotheses and test them clearly, the text is quite dense at times. Partly this is due to the compared conditions in this paradigm; however, it would help a lot to show a visualization of what is being compared in Figures 2-4, rather than just showing the results.

      In the revised manuscript we will add a visual aid for the three contrasts we consider.

      We are happy to inform the editors that we have implemented, for the Reviewed Preprint, the direct textual Recommendations for the Authors given by Reviewers 2 and 3. We will implement the suggested Figure changes in our revised manuscript. We thank them for their feedback in strengthening our manuscript.

    1. Reviewer #1 (Public review):

      Summary:

      This study develops and validates a neural subspace similarity analysis for testing whether neural representations of graph structures generalize across graph size and stimulus sets. The authors show the method works in rat grid and place cell data, finding that grid but not place cells generalize across different environments, as expected. The authors then perform additional analyses and simulations to show that this method should also work on fMRI data. Finally, the authors test their method on fMRI responses from entorhinal cortex (EC) in a task that involves graphs that vary in size (and stimulus set) and statistical structure (hexagonal and community). They find neural representations of stimulus sets in lateral occipital complex (LOC) generalize across statistical structure and that EC activity generalizes across stimulus sets/graph size, but only for the hexagonal structures.

      Strengths:

      (1) The overall topic is very interesting and timely and the manuscript is well written.

      (2) The method is clever and powerful. It could be important for future research testing whether neural representations are aligned across problems with different state manifestations.

      (3) The findings provide new insights into generalizable neural representations of abstract task states in entorhinal cortex.

      Weaknesses:

      (1) There are two design confounds that are not sufficiently discussed.

      (1.1) First, hexagonal and community structures are confounded by training order. All subjects learned the hexagonal graph always before the community graph. As such, any differences between the two graphs could be explained (in theory) by order effects (although this is unlikely). However, because community and hexagonal structures shared the same stimuli, it is possible that subjects had to find ways to represent the community structures separately from the hexagonal structures. This could potentially explain why there was no generalization across graph size for community structures.

      (1.2) Second, subjects had more experience with the hexagonal and community structures before and during fMRI scanning. This is another possible reason why there was no generalization for the community structure.

      (2) The authors include the results from a searchlight analysis to show specificity of the effects for EC. A more convincing way (in my opinion) to show specificity would be to test for (and report the results) of a double dissociation between the visual and structural contrast in two independently defined regions (e.g., anatomical ROIs of LOC and EC). This would substantiate the point that EC activity generalizes across structural similarity while sensory regions like LOC generalize across visual similarity.

    2. Reviewer #3 (Public review):

      Summary:

      The article explores the brain's ability to generalize information, with a specific focus on the entorhinal cortex (EC) and its role in learning and representing structural regularities that define relationships between entities in networks. The research provides empirical support for the longstanding theoretical and computational neuroscience hypothesis that the EC is crucial for structure generalization. It demonstrates that EC codes can generalize across non-spatial tasks that share common structural regularities, regardless of the similarity of sensory stimuli and network size.

      Strengths:

      At first glance, a potential limitation of this study appears to be its application of analytical methods originally developed for high-resolution animal electrophysiology (Samborska et al., 2022) to the relatively coarse and noisy signals of human fMRI. Rather than sidestepping this issue, however, the authors embrace it as a methodological challenge. They provide compelling empirical evidence and biologically grounded simulations to show that key generalization properties of entorhinal cortex representations can still be robustly detected. This not only validates their approach but also demonstrates how far non-invasive human neuroimaging can be pushed. The use of multiple independent datasets and carefully controlled permutation tests further underscores the reliability of their findings, making a strong case that structural generalization across diverse task environments can be meaningfully studied even in abstract, non-spatial domains that are otherwise difficult to investigate in animal models.

      Weaknesses:

      While this study provides compelling evidence for structural generalization in the entorhinal cortex (EC), several limitations remain that pave the way for promising future research. One issue is that the generalization effect was statistically robust in only one task condition, with weaker effects observed in the "community" condition. This raises the question of whether the null result genuinely reflects a lack of EC involvement, or whether it might be attributable to other factors such as task complexity, training order, or insufficient exposure, possibilities that the authors acknowledge as open questions. Moreover, although the study leverages fMRI to examine EC representations in humans, it does not clarify which specific components of EC coding (such as grid cells versus other spatially tuned but non-grid codes) underlie the observed generalization. While electrophysiological data in animals have begun to address this, the human experiments do not disentangle the contributions of these different coding types. This leaves unresolved the important question of what makes EC representations uniquely suited for generalization, particularly given that similar effects were not observed in other regions known to contain grid cells, such as the medial prefrontal cortex (mPFC) or posterior cingulate cortex (PCC). These limitations point to important future directions for better characterizing the computational role of the EC and its distinctiveness within the broader network supporting learning and decision making based on cognitive maps.

    3. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This study develops and validates a neural subspace similarity analysis for testing whether neural representations of graph structures generalize across graph size and stimulus sets. The authors show the method works in rat grid and place cell data, finding that grid but not place cells generalize across different environments, as expected. The authors then perform additional analyses and simulations to show that this method should also work on fMRI data. Finally, the authors test their method on fMRI responses from the entorhinal cortex (EC) in a task that involves graphs that vary in size (and stimulus set) and statistical structure (hexagonal and community). They find neural representations of stimulus sets in lateral occipital complex (LOC) generalize across statistical structure and that EC activity generalizes across stimulus sets/graph size, but only for the hexagonal structures.

      Strengths:

      (1) The overall topic is very interesting and timely and the manuscript is well-written.

      (2) The method is clever and powerful. It could be important for future research testing whether neural representations are aligned across problems with different state manifestations.

      (3) The findings provide new insights into generalizable neural representations of abstract task states in the entorhinal cortex.

      We thank the reviewer for their kind comments and clear summary of the paper and its strengths.

      Weaknesses:

      (1) The manuscript would benefit from improving the figures. Moreover, the clarity could be strengthened by including conceptual/schematic figures illustrating the logic and steps of the method early in the paper. This could be combined with an illustration of the remapping properties of grid and place cells and how the method captures these properties.

      We agree with the reviewer and have added a schematic figure of the method (figure 1a).

      (2) Hexagonal and community structures appear to be confounded by training order. All subjects learned the hexagonal graph always before the community graph. As such, any differences between the two graphs could thus be explained (in theory) by order effects (although this is practically unlikely). However, given community and hexagonal structures shared the same stimuli, it is possible that subjects had to find ways to represent the community structures separately from the hexagonal structures. This could potentially explain why the authors did not find generalizations across graph sizes for community structures.

      We thank the reviewer for their comments. We agree that the null result regarding the community structures does not mean that EC doesn’t generalise over these structures, and that the training order could in theory contribute to the lack of an effect. The decision to keep the asymmetry of the training order was deliberate: we chose this order based on our previous study (Mark et al. 2020), where we show that learning a community structure first changes the learning strategy for subsequent graphs. We could perhaps have overcome this by increasing the training periods, but 1) the training period is already very long; 2) there would still be asymmetry, because the group that first learns the community structure will struggle more in learning the hexagonal graph than vice versa, as shown in Mark et al. 2020.

      We have added the following sentences on this decision to the Methods section:

      “We chose to teach the hexagonal graphs first for all participants, rather than randomizing the order, because of previous results showing that first learning a community structure changes participants’ learning strategy (Mark et al. 2020).”

      (3) The authors include the results from a searchlight analysis to show the specificity of the effects of EC. A better way to show specificity would be to test for a double dissociation between the visual and structural contrast in two independently defined regions (e.g., anatomical ROIs of LOC and EC).

      Thanks for this suggestion. We indeed tried to run the analysis in a whole-ROI approach, but this did not result in a significant effect in EC. Importantly, we disagree with the reviewer that this is a “better way to show specificity” than the searchlight approach. In our view, the two analyses differ with respect to the spatial extent of the representation they test for. The searchlight approach is testing for a highly localised representation on the scale of small spheres with only 100 voxels. The signal of such a localised representation is likely to be drowned in the noise in an analysis that includes thousands of voxels which mostly don’t show the effect - as would be the case in the whole-ROI approach.

      (4) Subjects had more experience with the hexagonal and community structures before and during fMRI scanning. This is another confound, and possible reason why there was no generalization across stimulus sets for the community structure.

      See our response to comment (2).

      Reviewer #2 (Public review):

      Summary:

      Mark and colleagues test the hypothesis that entorhinal cortical representations may contain abstract structural information that facilitates generalization across structurally similar contexts. To do so, they use a method called "subspace generalization" designed to measure abstraction of representations across different settings. The authors validate the method using hippocampal place cells and entorhinal grid cells recorded in a spatial task, then perform simulations that support that it might be useful in aggregated responses such as those measured with fMRI. Then the method is applied to fMRI data that required participants to learn relationships between images in one of two structural motifs (hexagonal grids versus community structure). They show that the BOLD signal within an entorhinal ROI shows increased measures of subspace generalization across different tasks with the same hexagonal structure (as compared to tasks with different structures) but that there was no evidence for the complementary result (ie. increased generalization across tasks that share community structure, as compared to those with different structures). Taken together, this manuscript describes and validates a method for identifying fMRI representations that generalize across conditions and applies it to reveal entorhinal representations that emerge across specific shared structural conditions.

      Strengths:

      I found this paper interesting both in terms of its methods and its motivating questions. The question asked is novel and the methods employed are new - and I believe this is the first time that they have been applied to fMRI data. I also found the iterative validation of the methodology to be interesting and important - showing persuasively that the method could detect a target representation - even in the face of a random combination of tuning and with the addition of noise, both being major hurdles to investigating representations using fMRI.

      We thank the reviewer for their kind comments and the clear summary of our paper.

      Weaknesses:

      In part because of the thorough validation procedures, the paper came across to me as a bit of a hybrid between a methods paper and an empirical one. However, I have some concerns, both on the methods development/validation side, and on the empirical application side, which I believe limit what one can take away from the studies performed.

      We thank the reviewer for the comment. We agree that the paper comes across as a bit of a methods-empirical hybrid. We chose to do this because we believe (as the reviewer also points out) that there is value in both aspects of the paper.

      Regarding the methods side, while I can appreciate that the authors show how the subspace generalization method "could" identify representations of theoretical interest, I felt like there was a noticeable lack of characterization of the specificity of the method. Based on the main equation in the results section of the paper, it seems like the primary measure used here would be sensitive to overall firing rates/voxel activations, variance within specific neurons/voxels, and overall levels of correlation among neurons/voxels. While I believe that reasonable pre-processing strategies could deal with the first two potential issues, the third seems a bit more problematic - as obligate correlations among neurons/voxels surely exist in the brain and persist across context boundaries that are not achieving any sort of generalization (for example neurons that receive common input, or voxels that share spatial noise). The comparative approach (ie. computing difference in the measure across different comparison conditions) helps to mitigate this concern to some degree - but not completely - since if one of the conditions pushes activity into strongly spatially correlated dimensions, as would be expected if univariate activations were responsive to the conditions, then you'd expect generalization (driven by shared univariate activation of many voxels) to be specific to that set of conditions.

      We thank the reviewer for their comments. We would like to point out that we demean each voxel within all states/piles (3-picture sequences) in a given graph/task (what the reviewer is calling “a condition”). Hence there is no shared univariate activation of many voxels in response to a graph going into the computation, and no sensitivity to the overall firing rate/voxel activation. Our calculation captures the variance across state conditions within a task (here a graph), over and above the univariate effect of graph activity. In addition, we spatially pre-whiten the data within each searchlight, meaning that noisy voxels with high noise variance will be downweighted and noise correlations between voxels are removed prior to applying our method.
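
      To make the demeaning and projection steps concrete, the sketch below reconstructs the subspace generalisation computation as described in the text: per-voxel demeaning across states within a task, PCA on one task's voxel-by-voxel covariance, projection of the other task's data onto those components, and the area under the resulting cumulative-variance curve. Variable names, shapes, and the AUC normalisation are illustrative assumptions, and the spatial pre-whitening step is omitted.

      ```python
      import numpy as np

      def subspace_generalisation(B1, B2):
          """Subspace generalisation from task 1 to task 2.

          B1, B2: voxel-by-state activity matrices (n_voxels, n_states),
                  one per task/graph; the state counts may differ.
          Returns the normalised area under the curve of B2 variance
          captured by B1's principal components, accumulated in order of
          decreasing B1 eigenvalue.
          """
          # Demean each voxel across states within each task, so no shared
          # univariate (whole-graph) activation enters the computation.
          B1 = B1 - B1.mean(axis=1, keepdims=True)
          B2 = B2 - B2.mean(axis=1, keepdims=True)

          # Eigenvectors of task 1's voxel-by-voxel covariance, sorted by
          # decreasing eigenvalue (i.e. PCA on B1).
          cov1 = B1 @ B1.T / (B1.shape[1] - 1)
          eigvals, eigvecs = np.linalg.eigh(cov1)
          eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]

          # Variance of B2 explained by each of B1's components, accumulated
          # and normalised so the curve ends at 1.
          proj_var = np.sum((eigvecs.T @ B2) ** 2, axis=1)
          cum = np.cumsum(proj_var) / np.sum(proj_var)

          # Area under the cumulative curve, scaled to lie between 0 and 1.
          return np.trapz(cum, dx=1.0 / (len(cum) - 1))

      # Example with random matrices standing in for two graphs' data.
      rng = np.random.default_rng(3)
      auc_12 = subspace_generalisation(rng.standard_normal((100, 36)),
                                       rng.standard_normal((100, 42)))
      ```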

      A second issue in terms of the method is that there is no comparison to simpler available methods. For example, given the aims of the paper, and the introduction of the method, I would have expected the authors to take the Neuron-by-Neuron correlation matrices for two conditions of interest, and examine how similar they are to one another, for example by correlating their lower triangle elements. Presumably, this method would pick up on most of the same things - although it would notably avoid interpreting high overall correlations as "generalization" - and perhaps paint a clearer picture of exactly what aspects of correlation structure are shared. Would this method pick up on the same things shown here? Is there a reason to use one method over the other?

      We thank the reviewer for this important and interesting point. We agree that calculating the correlation between the upper triangular elements of the covariance or correlation matrices picks up similar, but not identical, aspects of the data (see below the mathematical explanation that was added to the supplementary information). When we repeated the searchlight analysis and calculated the correlation between the upper triangular entries of the Pearson correlation matrices, we obtained an effect in the EC, though weaker than with our subspace generalization method (t=3.9; the effect did not survive multiple comparisons). Similar results were obtained with the correlation between the upper triangular elements of the covariance matrices (t=3.8; the effect did not survive multiple comparisons).

      The difference between the two methods is twofold: 1) Our method is based on the covariance matrix and not the correlation matrix - i.e. a difference in normalisation. We realised that in the main text of the original paper we mistakenly wrote “correlation matrix” rather than “covariance matrix” (though our equations did correctly show the covariance matrix). We have corrected this mistake in the revised manuscript. 2) The weighting of the variance explained in the direction of each eigenvector is different between the methods, with some benefits of our method for identifying low-dimensional representations and for robustness to strong spatial correlations.  We have added a section “Subspace Generalisation vs correlating the Neuron-by-Neuron correlation matrices” to the supplementary information with a mathematical explanation of these differences.
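
      For comparison, the alternative measure discussed above (correlating the off-diagonal elements of the two tasks' voxel-by-voxel matrices) can be sketched as follows; again, this is purely illustrative.

      ```python
      import numpy as np

      def matrix_similarity(B1, B2, use_correlation=False):
          """Correlate the upper-triangular (off-diagonal) elements of two
          tasks' voxel-by-voxel covariance (or Pearson correlation) matrices.

          B1, B2: voxel-by-state matrices with the same number of voxels;
          np.cov/np.corrcoef demean each voxel across states internally.
          """
          if use_correlation:
              M1, M2 = np.corrcoef(B1), np.corrcoef(B2)
          else:
              M1, M2 = np.cov(B1), np.cov(B2)
          iu = np.triu_indices_from(M1, k=1)  # off-diagonal entries only
          return np.corrcoef(M1[iu], M2[iu])[0, 1]

      # Example with random matrices standing in for two graphs' data.
      rng = np.random.default_rng(4)
      sim = matrix_similarity(rng.standard_normal((100, 36)),
                              rng.standard_normal((100, 42)))
      ```

      Unlike the subspace generalisation AUC, this measure weights all matrix entries equally, which is one aspect of the weighting difference described in point 2) above.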

      Regarding the fMRI empirical results, I have several concerns, some of which relate to concerns with the method itself described above. First, the spatial correlation patterns in fMRI data tend to be broad and will differ across conditions depending on variability in univariate responses (ie. if a condition contains some trials that evoke large univariate activations and others that evoke small univariate activations in the region). Are the eigenvectors that are shared across conditions capturing spatial patterns in voxel activations? Or, related to another concern with the method, are they capturing changing correlations across the entire set of voxels going into the analysis? As you might expect if the dynamic range of activations in the region is larger in one condition than the other?

      This is a searchlight analysis; it therefore captures the activity patterns within nearby voxels. Indeed, as we show in our simulations, areas with high activity, and therefore high signal-to-noise, will have better signal in our method as well. Note that this is true of most measures.

      My second concern is, beyond the specificity of the results, they provide only modest evidence for the key claims in the paper. The authors show a statistically significant result in the Entorhinal Cortex in one out of two conditions that they hypothesized they would see it. However, the effect is not particularly large. There is currently no examination of what the actual eigenvectors that transfer are doing/look like/are representing, nor how the degree of subspace generalization in EC may relate to individual differences in behavior, making it hard to assess the functional role of the relationship. So, at the end of the day, while the methods developed are interesting and potentially useful, I found the contributions to our understanding of EC representations to be somewhat limited.

      We agree with this point, yet believe that the results still shed light on EC functionality. Unfortunately, we could not find a correlation between behavioral measures and the fMRI effect.

      Reviewer #3 (Public review):

      Summary:

      The article explores the brain's ability to generalize information, with a specific focus on the entorhinal cortex (EC) and its role in learning and representing structural regularities that define relationships between entities in networks. The research provides empirical support for the longstanding theoretical and computational neuroscience hypothesis that the EC is crucial for structure generalization. It demonstrates that EC codes can generalize across non-spatial tasks that share common structural regularities, regardless of the similarity of sensory stimuli and network size.

      Strengths:

      (1) Empirical Support: The study provides strong empirical evidence for the theoretical and computational neuroscience argument about the EC's role in structure generalization.

      (2) Novel Approach: The research uses an innovative methodology and applies the same methods to three independent data sets, enhancing the robustness and reliability of the findings.

      (3) Controlled Analysis: The results are robust against well-controlled data and/or permutations.

      (4) Generalizability: By integrating data from different sources, the study offers a comprehensive understanding of the EC's role, strengthening the overall evidence supporting structural generalization across different task environments.

      Weaknesses:

      A potential criticism might arise from the fact that the authors applied innovative methods originally used in animal electrophysiology data (Samborska et al., 2022) to noisy fMRI signals. While this is a valid point, it is noteworthy that the authors provide robust simulations suggesting that the generalization properties in EC representations can be detected even in low-resolution, noisy data under biologically plausible assumptions. I believe this is actually an advantage of the study, as it demonstrates the extent to which we can explore how the brain generalizes structural knowledge across different task environments in humans using fMRI. This is crucial for addressing the brain's ability in non-spatial abstract tasks, which are difficult to test in animal models.

      While focusing on the role of the EC, this study does not extensively address whether other brain areas known to contain grid cells, such as the mPFC and PCC, also exhibit generalizable properties. Additionally, it remains unclear whether the EC encodes unique properties that differ from those of other systems. As the authors noted in the discussion, I believe this is an important question for future research.

      We thank the reviewer for their comments. We agree with the reviewer that this is a very interesting question. We looked for effects in the mPFC but did not obtain results strong enough to report in the main manuscript; we do, however, report a small effect in the supplementary material.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) I wonder how important the PCA on B1 (voxel-by-state matrix from environment 1) and the computation of the AUC (from the projection on B2 [voxel-by-state matrix from environment 2]) is for the analysis to work. Would you not get the same result if you correlated the voxel-by-voxel correlation matrix based on B1 (C1) with the voxel-by-voxel correlation matrix based on B2 (C2)? I understand that you would not have the subspace-by-subspace resolution that comes from the individual eigenvectors, but would the AUC not strongly correlate with the correlation between C1 and C2?

      We agree with the reviewer's comment - see our response to reviewer 2's second issue above.

      (2) There is a subtle difference between how the method is described for the neural recording and fMRI data. Line 695 states that principal components of the neuron x neuron intercorrelation matrix are computed, whereas line 888 implies that principal components of the data matrix B are computed. Of note, B is a voxel x pile rather than a pile x voxel matrix. Wouldn't this result in U being pile x pile rather than voxel x voxel?

      The PCs are calculated on the neuron x neuron (or voxel x voxel) covariance matrix of the activation matrix. We’ve added the following clarification to the relevant part of the Methods:

      “We calculated noise-normalized GLM betas within each searchlight using the RSA toolbox. For each searchlight and each graph, we had an nVoxels (100) by nPiles (10) activation matrix (B) that describes the activation of a voxel as a result of a particular pile (a sequence of three pictures). We exploited the (voxel x voxel) covariance matrix of this matrix to quantify the manifold alignment within each searchlight.”
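      For readers unfamiliar with the procedure, here is a minimal sketch of how a subspace generalisation score could be computed for a single searchlight from two such activation matrices. This is an illustrative approximation of the measure described in the paper, not the authors' analysis code; the function name is ours, and details such as noise normalisation and cross-validation are omitted.

      ```python
      # Hedged sketch of subspace generalisation for one searchlight (illustrative only):
      # eigenvectors of the voxel-by-voxel covariance of B1 (graph 1) are used to explain
      # the variance of B2 (graph 2); the cumulative variance-explained curve over the
      # fraction of eigenvectors used is summarised by its area under the curve (AUC).
      import numpy as np

      def subspace_generalisation_auc(B1, B2):
          """B1, B2: nVoxels x nPiles activation matrices from the two graphs."""
          B1c = B1 - B1.mean(axis=1, keepdims=True)        # centre each voxel's responses
          B2c = B2 - B2.mean(axis=1, keepdims=True)
          evals, evecs = np.linalg.eigh(np.cov(B1c))       # voxel x voxel covariance of B1
          evecs = evecs[:, np.argsort(evals)[::-1]]        # order eigenvectors by eigenvalue
          proj_var = np.sum((evecs.T @ B2c) ** 2, axis=1)  # B2 variance along each B1 eigenvector
          cum = np.cumsum(proj_var) / np.sum(B2c ** 2)     # cumulative fraction of B2 variance
          frac = np.arange(1, cum.size + 1) / cum.size     # fraction of eigenvectors used
          return np.trapz(cum, frac)                       # area under the curve

      rng = np.random.default_rng(1)
      B1, B2 = rng.standard_normal((100, 10)), rng.standard_normal((100, 10))
      print(subspace_generalisation_auc(B1, B2))
      ```

      In the paper, such AUCs are then compared within versus between conditions and against permutation-based chance levels; the sketch above only shows the core projection step.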

      (3) It would be very helpful to the field if the authors would make the code and data publicly available. Please consider depositing the code for data analysis and simulations, as well as the preprocessed/extracted data for the key results (rat data/fMRI ROI data) into a publicly accessible repository.

      The code is publicly available on GitHub (https://github.com/ShirleyMgit/subspace_generalization_paper_code/tree/main).

      (4) Line 219: "Kolmogorov Simonov test" should be "Kolmogorov Smirnov test".

      Thanks!

      (5) Please put plots in Figure 3F on the same y-axis.

      (6) Were large and small graphs of a given statistical structure learned on the same days, and if so, sequentially or simultaneously? This could be clarified.

      The graphs were learned on the same day. We have clarified this in the Methods section.

      Reviewer #2 (Recommendations for the authors):

      Perhaps the advantage of the method described here is that you could narrow things down to the specific eigenvector that is doing the heavy lifting in terms of generalization... and then you could look at that eigenvector to see what aspect of the covariance structure persists across conditions of interest. For example, is it just the highest eigenvalue eigenvector that is likely picking up on correlations across the entire neural population? Or is there something more specific going on? One could start to get at this by looking at Figures 1A and 1C - for example, the primary difference for within/between condition generalization in 1C seems to emerge with the first component, and not much changes after that, perhaps suggesting that in this case, the analysis may be picking up on something like the overall level of correlations within different conditions, rather than a more specific pattern of correlations.

      The nature of the analysis means the eigenvectors are ordered by their contribution to the variance; the first eigenvector therefore accounts for more variance than the others. We did not check rigorously whether the remaining variance is split equally among the other eigenvectors, but it does not seem to be the case.

      Why is variance explained above zero for fraction EVs = 0 for figure 1C (but not 1A) ? Is there some plotting convention that I'm missing here?

      There was a small bug in this plot and it was corrected - thank you very much!

      The authors say:

      "Interestingly, the difference in AUCs was also 190 significantly smaller than chance for place cells (Figure 1a, compare dotted and solid green 191 lines, p<0.05 using permutation tests, see statistics and further examples in supplementary 192 material Figure S2), consistent with recent models predicting hippocampal remapping that is 193 not fully random (Whittington et al. 2020)."

      But my read of the Whittington model is that it would predict slight positive relationships here, rather than the observed negative ones, akin to what one would expect if hippocampal neurons reflect a nonlinear summation of a broad swath of entorhinal inputs.

      Smaller differences than chance imply that the remapping of place cells is not completely random.

      Figure 2:

      I didn't see any description of where noise amplitude values came from - or any justification at all in that section. Clearly, the amount of noise will be critical for putting limits on what can and cannot be detected with the method - I think this is worthy of characterization and explanation. In general, more information about the simulations is necessary to understand what was done in the pseudovoxel simulations. I get the gist of what was done, but these methods should be clear enough that someone could repeat them, and they currently are not.

      Thanks, we added noise amplitude to the figure legend and Methods.

      What does flexible mean in the title? The analysis only worked for the hexagonal grid - doesn't that suggest that whatever representations are uncovered here are not flexible in the sense of being able to encode different things?

      Flexible here means flexible over stimulus characteristics that are not related to the structural form, such as the identity of the stimuli, the size of the graph, etc.

      Reviewer #3 (Recommendations for the authors):

      I have noticed that the authors have updated the previous preprint version to include extensive simulations. I believe this addition helps address potential criticisms regarding the signal-to-noise ratio. If the authors could share the code for the fMRI data and the simulations in an open repository, it would enhance the study's impact by reaching a broader readership across various research fields. Except for that, I have nothing to ask for revision.

      Thanks, the code will be publicly available: (https://github.com/ShirleyMgit/subspace_generalization_paper_code/tree/main).

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Manuscript number: RC-2025-03160

      Corresponding author(s) Padinjat, Raghu

      [The “revision plan” should delineate the revisions that authors intend to carry out in response to the points raised by the referees. It also provides the authors with the opportunity to explain their view of the paper and of the referee reports.


      The document is important for the editors of affiliate journals when they make a first decision on the transferred manuscript. It will also be useful to readers of the reprint and help them to obtain a balanced view of the paper.


      If you wish to submit a full revision, please use our "Full Revision" template. It is important to use the appropriate template to clearly inform the editors of your intentions.]

      1. General Statements [optional]

      We thank all three reviewers for appreciating the novelty of our analysis of CERT function in a physiological context in vivo. While many studies have been published on the biochemistry and function of CERT in cultured cells, there are limited studies, if any, relating the impact of CERT function at the biochemical level to its function in a physiological process, in our case the electrical response to light.

      We also thank all reviewers for commenting on the importance of our rescue of dcert mutants with hCERT and the scientific insights raised by this experiment. All reviewers have also noted the importance of strengthening our observation that hCERT, in these cells, is localized at ER-PM MCS rather than the more widely reported localization at the Golgi. We highlight that many excellent studies that have localized CERT at the Golgi were performed in cultured, immortalized mammalian cells. There are limited studies on the localization of this protein in primary cells, neurons or polarized cells. With the additional experiments we have proposed in the revision for this aspect of the manuscript, we believe the findings will be of great novelty and widespread interest.

      We believe we can address almost all points raised by reviewers thereby strengthening this exciting manuscript.

      2. Description of the planned revisions

      Insert here a point-by-point reply that explains what revisions, additional experimentations and analyses are planned to address the points raised by the referees.

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      This manuscript dissects the physiological function of ceramide transfer protein (CERT) by studying the phenotype of CERT null Drosophila.

      dCERT null animals have a reduced electrical response to light in their photoreceptors, reduced baseline PIP2 accumulation in the cells and delayed re-synthesis of PIP2 and its precursor, PI4P after light stimulation. There are also reduced ER:PM contact sites at the rhabdomere and a corresponding reduction in the localization of PI/PA exchange protein, RDGB at this site. Therefore, the animals seem to have an impaired ability for sustaining phototransduction, which is nonetheless milder than that seen after loss of RDGB, for example. In terms of biochemical function, there is no overall change in ceramides, with some minor increases in specific short chain pools. There is however a large decrease in PE-ceramide species, again selective for a few molecular species. Curiously, decreasing ceramides with a mutant in ceramide synthesis is able to partially rescue both the electrical response and RDGB localization in dCERT flies, implying the increased ceramide species contribute to the phenotype. In addition, a mutation in PE-ceramide synthase largely phenocopies the dCERT null, exhibiting both increased ceramides and decreased PE-ceramide.

      In addition, dCERT flies were shown to have reduced localization of some plasma membrane proteins to detergent-resistant membrane fractions, as well as up regulation of the IRE1 and PERK stress-response pathways. Finally, dCERT nulls could be rescued with the human CERT protein, demonstrating conservation of core physiological function between these animals. Surprisingly, CERT is reported to localize to the ER:PM junctions at rhabdomeres, as opposed to the expected ER:Golgi contact sites. Specific areas where the manuscript could be strengthened include:

      Figure 2 studies the phototransduction system. Although clear changes in PI4P and PIP2 are seen, it would be interesting to see if changed PA accumulation occurs in the dCERT animals, since RDGB localization is disrupted: this is expected to cause PM PA accumulation along with reduced PIP2 synthesis.

      The reviewer raises an important question about checking PA levels. In the present study we have noticed that the localization of RDGB at the base of the rhabdomere in dcert1 is reduced but not completely abolished. Consequently, one may consider the situation in dcert1 as a partial loss of function of RDGB and, consistent with this, the delay in PI4P and PI(4,5)P2 resynthesis is not as severe as in rdgB9, which is a strong hypomorph (PMID: 26203165).

      rdgB9 mutants also show an elevation in PA levels, and the reviewer is right that one might expect changes in PA levels too, as RDGB is a PI/PA transfer protein. We expect that, if measured, there will be a modest elevation in PA levels. However, previous work has shown that elevation of PA levels at or close to the rhabdomere leads to retinal degeneration. Specifically, elevation of PA levels by dPLD overexpression disrupts rhabdomere biogenesis and leads to retinal degeneration (PMID: 19349583). Similarly, loss of the lipid transfer protein RDGB leads to photoreceptor degeneration (PMID: 26203165). In this study, we report that retinal degeneration is not a phenotype of dcert1. Thus, measurements of PA levels, though interesting, may not be that informative in the context of the present study. However, if necessary, we can measure PA levels in dcert1.

      Lines 228-230 state: "These findings suggest an important contribution for reduced PE - Cer levels in the eye phenotypes of dcert". Does it not also suggest a contribution of the elevated ceramide species, since these are also observed in the CPES animals?

      We agree with the reviewer that not only reduced PE-Ceramide but also elevated ceramide levels in GMR>CPESi could contribute to the eye phenotype. This statement will be revised to reflect this conclusion.

      Figure 6D is a key finding that human CERT localizes to the rhabdomere at ER:PM contact sites, though the reviewer was not convinced by these images. Is the protein truly localized to the contact sites, or is there simply a pool of over-expressed protein in the surrounding cytoplasm? It also does not rule out localization (and therefore function) at ER:PM contact sites.

      Since hCERT completely rescued the eye phenotype of dcert1, the localization we observe for hCERT must be at least partly relevant. We will perform additional IHC experiments to

      • Co-localize hCERT with an ER-PM MCS marker, e.g. RDGB, in wild type flies
      • Co-localize hCERT with VAP-A, which is enriched at the ER-PM MCS. This should help to determine if there are MCS and non-MCS pools of hCERT in these cells.
      • Test if there is a pool of hCERT in these cells that also localizes (or not) with the Golgi marker Golgin 84.

      These experiments will be included in the revision to strengthen this important point.

      Statistics: There are a large number of t-tests employed that do not correct for multiple comparisons, for example in figures 3B, 3D, 3H, 4C, 6C, S2A, S2B, S3B and S3C.

      We will perform multiple-comparison corrections for the mentioned data and incorporate them in the revised manuscript.

      There are two Western blotting sections in the methods.

      The first Western blotting section describes the general blots in the paper. The second Western blotting section relates to samples from the detergent-resistant membrane (DRM) fractions. We will clearly explain this in the Methods section of the manuscript.

      Reviewer #1 (Significance (Required)):

      Overall, the manuscript is clearly and succinctly written, with the data well presented and mostly convincing. The paper demonstrates clear phenotypes associated with loss of dCERT function, with surprising consequences for the function of a signaling system localized to ER:PM contact sites. To this reviewer, there seem to be three cogent observations of the paper: (i) loss of dCERT leads to accumulation of ceramides and loss of PE-ceramide, which together drive the phenotype. (ii) this ceramide alteration disrupts ER:PM contact sites and thus impairs phototransduction and (iii) rescue by human CERT and its apparent localization to ER:PM contact sites implies a potential novel site of action. Although surprising and novel, the significance of these observations is a little unclear: there is no obvious mechanism by which the elevated ceramide species and decreased PE-ceramide cause the specific failure in phototransduction, and the evidence for a novel site of action of CERT at the ER:PM contact sites is not compelling. Therefore, although an interesting and novel set of observations, the manuscript does not reveal a clear mechanistic basis for CERT physiological function.

      We thank the reviewer for appreciating the quality of our manuscript while also highlighting points through which its impact can be enhanced. To our knowledge, this is one of the first studies to tackle the challenging problem of a role for CERT in physiological function. We would like to highlight two points raised:

      • We do understand that the localisation of hCERT at ER-PM MCS is unusual compared to the traditional reported localization to ER-Golgi sites. This is important for the overall interpretation of the results in the paper on how dCERT regulates phototransduction. As indicated in response to an earlier comment by the reviewer we will perform additional experiments to strengthen our conclusion of the localization of hCERT.
      • With regard to how loss of dCERT affects phototransduction, we feel two likely mechanisms contribute. If the localization of hCERT to ER-PM MCS is verified through additional experiments (see proposal above), then it is important to note that the ER-PM MCS in these cells includes the SMC (smooth endoplasmic reticulum), the major site of lipid synthesis. It is possible that loss of dCERT leads to ceramide accumulation in the smooth ER and disruption of ER-PM contacts. That may explain why reducing the levels of ceramide at this site partially rescues the eye phenotype.

      The multi-protein INAD-TRP-NORPA complex, central to phototransduction, has previously been shown to localise to DRMs in photoreceptors. PE-Ceramides are important contributors to the formation of plasma membrane DRMs, and we have presented biochemical evidence that the formation of these DRMs is reduced in dcert1. This may be a mechanism contributing to reduced phototransduction. This latter mechanism has been proposed as a physiological function of DRMs, but we think our data may be the first to show it in a physiological model.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      Summary Non-vesicular lipid transfer by lipid transfer proteins regulates organelle lipid compositions and functions. CERT transfers ceramide from the ER to Golgi to produce sphingomyelin, although CERT function in animal development and physiology is less clear. Using dcert1 (a protein-null allele), this paper shows a disruption of the sole Drosophila CERT gene causes reduced ERG amplitude in photoreceptors. While the level and localization of phototransduction machinery appears unaffected, the level of PIP2 and the localization of RDGB are perturbed. Collectively, these observations establish a novel link between CERT and phospholipase signaling in phototransduction. To understand the molecular mechanism further, the authors performed lipid chromatography and mass spec to characterize ceramide species in dcert1. This analysis reveals that whereas the total ceramide remains unaffected, most PE-ceramide species are reduced. The authors use lace mutant (serine palmitoyl transferase) and CPES (ceramide phosphoethanolamine synthase) RNAi to distinguish whether it is the accumulation of ceramide in the ER or the reduction of sphingolipid derivates in the Golgi that is the cause for the reduced ERG amplitude. Mutating one copy of lace reduces ceramide level by 50% and partially rescues the ERG defect, suggesting that the accumulation of ceramide in the ER is a cause. CPES RNAi phenocopies the reduced ERG amplitude, suggesting the production of certain sphingolipid is also relevant.

      Major comments: 1. By showing the reduced PIP2 level, the decreased SMC sites at the base of rhabdomeres, and the diffused RDGB localization in dcert1, the authors favor the model, in which the disruption of ceramide metabolism affects PIP transport. However, it is unclear if the reduced PIP2 level (i.e., reduced PH-PLCd::GFP staining) is specific to the rhabdomeres. It should be possible to compare PH-PLCd::GFP signals in different plasma membranes between wildtype and dcert1. If PH-PLCd::GFP signal is specifically reduced at the rhabdomeres, this conclusion will be greatly strengthened. In addition, the photoreceptor apical plasma membrane includes rhabdomere and stalk membrane. Is the PH-PLCd::GFP signal at the stalk membrane also affected?

      Due to the physical organization of optics in the fly eye, the pseudopupil imaging method used in this study collects the signal for the PIP2 probe (PH-PLCd::GFP) mainly from the apical rhabdomere membrane of photoreceptors in live imaging experimental mode. Therefore, the PIP2 signal from these experiments cannot be used to interpret the level of PIP2 either at the stalk membrane or indeed the basolateral membrane.

      The point raised by the reviewer, i.e whether CERT selectively controls PIP2 levels at the rhabdomere membrane or not, is an interesting one. To do this, we will need to fix fly photoreceptors and determine the PH-PLCd::GFP signal using single slice confocal imaging. When combined with a stalk marker such as CRUMBS, it should be possible to address the question of which are the membrane domains at which dCERT controls PIP2 levels. If the sole mechanism of action of dCERT is via disruption of ER-PM MCS then only the apical rhabdomere membrane PIP2 should be affected leaving the stalk membrane and basolateral membrane unaffected.

      Thank you very much for raising this specific point.

      The analysis of RDGB localization should be done in mosaic dcert1 retinas, which will be more convincing with internal control for each comparison. In addition, the phalloidin staining in Figure 2J shows distinct patterns of adherens junctions, indicating that the wildtype and dcert1 were imaged at different focal planes.

      We understand that mosaic analysis is an alternative and elegant way to perform a side-by-side comparison of control and mutant. However, this would require a significant investment of time and effort. Moreover, performing a mosaic analysis would compromise our ERG analysis, since the ERG is an extracellular recording. We feel that this is beyond the scope of this study and perhaps may not be necessary as such (see below).

      In the revision we will present equivalent sections of control and dcert1 taken from the nuclear plane of the photoreceptor. This should resolve the reviewer’s concerns.

      The significance of ceramide species levels in dcert1 and GMR>CPESRNAi needs to be explained better. Do certain alterations represent accumulation of ceramides in the ER?

      Species level analysis of changes in ceramides reveal that elevations in dcert1 are seen mainly in the short chain ceramides (14 and 16 carbon chains). These most likely represent the short chain ceramides synthesised in the ER and accumulating due to the block in further metabolism to PE-Cer due to depletion in CPES.

      Species-level analysis of ceramide changes reveals that in dcert1 there is a ceramide-transport-related defect leading to an elevation primarily in the short-chain ceramides (14 and 16 carbon chains); this selective supply defect leads to a reduction in PE-Cer levels, with the largest change in the ratio of short-chain Cer:PE-Cer (Figure 3A-D). Although there is no apparent change in the total ceramide level, the species-specific elevation in ceramides disturbs the fine balance between the short-chain ceramides and the long and very-long chain ceramides. Since long and very-long chain ceramides are implicated in dendrite development and neuronal morphology (doi: 10.1371/journal.pgen.1011880), this alteration in the balance between ceramide species probably impacts the integrity and fluidity of the membrane environment. It also raises the possibility of a defined function for the short-chain ceramides in the electrical response to light in the eye, especially with respect to the PE-ceramides, which are reduced by around 50%.

      In contrast, GMR>CPESRNAi leads to more of a substrate accumulation, showing an increase in ceramides (14, 16, 18, 20 carbon chains) and a decrease in PE-Cer levels (Figure 4D, E). In this case, Cer accumulation is due to the block in further metabolism to PE-Cer arising from the depletion of CPES.

      We will include this in the discussion of a revised version.

      The suppression by lace is interpreted as evidence that the reduced ERG amplitude in dcert1 is caused by ceramide accumulation in the ER. This interpretation seems preliminary as lace may interact with dcert genetically by other mechanisms.

      The dcert1 mutant exhibits increased levels of short-chain ceramides (Fig 3B), whereas the lace heterozygous mutant (laceK05305/+) displays reduced short-chain ceramide levels (Supp Fig 2B). In the laceK05305/+; dcert1 double mutant, ceramide levels are lower than those observed in the dcert1 mutant alone (Supp Fig 2B), indicating a partial genetic rescue of the elevated ceramide phenotype.

      Furthermore, through multiple independent genetic manipulations that modulate ceramide metabolism (alterations of dcert, cpes and lace), we consistently observe that increased ceramide levels correlate with a reduction in ERG amplitude, suggesting that ceramide accumulation negatively impacts photoreceptor function. Taken together, these observations indicate that the reduction in ceramide levels in the laceK05305/+; dcert1 double mutant likely contributes to the suppression of the ERG defect observed in the dcert1 mutant.

      The authors show that ERG amplitude is reduced in GMR>CPESRNAi. While this phenocopying is consistent with the reduced ERG amplitude in dcert1 being caused by reduced production of PE-ceramide, GMR>CPESRNAi also shows an increase in total ceramide level. Could this support the hypothesis that reduced ERG amplitude is caused by an accumulation of ceramide elsewhere? In addition, is the ERG amplitude reduction in GMR>CPESRNAi sensitive to lace?

      We agree that, in addition to reduced PE-Ceramide, the elevated ceramide levels in GMR>CPESi could contribute to the eye phenotype. We will introduce the lace heterozygous mutant into the GMR>CPESi background to test the contribution of elevated ceramide levels and incorporate the data in the revision. Thank you for this suggestion.

      Along the same line, while the total ceramide level is significantly reduced in lace heterozygotes, is the PE-ceramide level also reduced? If yes, wouldn't this be contradictory to PE-ceramide production being important for ERG amplitude?

      Mass spec measurements show that levels of PE-Cer were not reduced in lacek05305/+ compared to wild type. These data will be included in the revised manuscript. Furthermore, the ERG amplitude of these flies, and also of those with lace depletion using two independent RNAi lines, was not reduced.

      What is the explanation and significance for the age-dependent deterioration of ERG amplitude in dcert1? Likewise, the significance of no retinal degeneration is not clearly presented.

      There could be multiple reasons for the age-dependent deterioration of the ERG amplitude in the absence of retinal degeneration. These may include instability of the DRMs due to reduced PE-Cer, lower ATP levels due to mitochondrial dysfunction, and perhaps others. The Drosophila phototransduction cascade depends heavily on ATP production, so an age-dependent reduction in ATP synthesis could lead to deterioration of the ERG amplitude. A previous study has shown that ATP production is highly reduced, along with oxidative stress and metabolic dysfunction, in dcert1 flies aged to 10 days and beyond (PMID: 17592126). The same study also found no neuronal degeneration in dcert1, which phenocopies the absence of photoreceptor degeneration in the present study. We will attempt a few experiments to rule these possibilities in or out and revise the discussion accordingly.

      The rescue of dcert1 phenotype by the expression of human CERT is a nice result. In addition to demonstrating a functional conservation, it allows a determination of CERT protein localization. However, the quality of images in Figure 6D should be improved. The phalloidin staining was rather poor, and the CNX99A in the lower panel was over-exposed, generating bleed-through signals at the rhabdomeres. In addition, the localization of hCERT should be explored further. For instance, does hCERT colocalize with RDGB? Is the hCERT localization altered in lace or GMR>CPESRNAi background?

      As indicated in response to reviewer 1:

      We will perform additional IHC experiments to

      • Co-localize hCERT with an ER-PM MCS marker, e.g. RDGB, in wild type flies
      • Co-localize hCERT with VAP-A, which is enriched at the ER-PM MCS. This should help to determine if there are MCS and non-MCS pools of hCERT in these cells.
      • Test if there is a pool of hCERT in these cells that also localizes (or not) with the Golgi marker Golgin 84.

      These experiments will be included in the revision to strengthen this important point.

      We will also attempt to examine hCERT localization in the lace or GMR>CPESRNAi backgrounds.

      Minor comments: 1. In Line 128, Df(732) should be Df(3L)BSC732.

      Changes will be incorporated in the main manuscript.

      GMR-SMSrRNAi shows an increase in ERG peak amplitude. Is there an explanation for this?

      GMR-SMSrRNAi did show a slight increase in ERG peak amplitude, but this was not statistically significant.

      Reviewer #2 (Significance (Required)):

      Significance As CERT mutations are implicated in human learning disability, a better understanding of CERT function in neuronal cells is certainly of interest. While the link between ceramide transport and phospholipase signaling is novel and interesting, this paper does not clearly explain the mechanism. In addition, as the ERG were measured long after the retinal cells were deficient in CERT or CPES, it is difficult to assess whether the observed phenotype is a primary defect. Furthermore, the quality of some images needs to be improved. Thus, I feel the manuscript in its current form is too preliminary.

      We thank the reviewer for highlighting the importance and significance of our work in the light of recent studies of CERT function in ID. As with all genetic studies, it is difficult to completely disentangle the role of a gene during development from a role only in the adult. However, we will attempt to use the GAL80ts system to uncouple these two potential components of CERT function in photoreceptors. The goal will be to determine if CERT has a specific role only in adult photoreceptors or if this is coupled to a developmental role. Since ID is a neurodevelopmental disorder, a developmental role for CERT would be equally interesting.

      As previously indicated, the images will be improved, bearing in mind the reviewer's comments.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Summary: Lipid transfer proteins (LTPs) shuttle lipids between organelle membranes at membrane contact sites (MCSs). While extensive biochemical and cell culture studies have elucidated many aspects of LTP function, their in vivo physiological roles are only beginning to be understood. In this manuscript, the authors investigate the physiological role of the ceramide transfer protein (CERT) in Drosophila adult photoreceptors-a model previously employed by this group to study LTP function at ER-PM contact sites under physiological conditions. Using a combination of genetic, biochemical, and physiological approaches, they analyze a protein-null mutant of dcert. They show that loss of dcert causes a reduction in electrical response to light with progressive decrease in electroretinogram (ERG) amplitude with age but no retinal degeneration. Lipidomic analysis shows that while the total levels of ceramides are not changed in dcert mutants, they do observe significant change in certain species of ceramides and depletion of downstream metabolite phosphoethanolamine ceramide (PE-Cer). Using fluorescent biosensors, the authors demonstrate reduced PIP2 levels at the plasma membrane, unchanged basal PI4P levels and slower resynthesis kinetics of both lipids following depletion. Electron microscopy and immunolabeling further reveal a reduced density of ER-PM MCSs and mislocalization of the MCS-resident lipid transfer protein RDGB. Genetic interaction studies with lace and RNAi-mediated knockdown of CPES support the conclusion that both ER ceramide accumulation and PM PE-Cer depletion contribute to the observed defects in dcert mutants. In addition, detergent-resistant membrane fractionation indicates altered plasma membrane organization in the absence of dcert. The study also reports upregulation of unfolded protein response transcripts, including IRE1 and PERK, suggesting increased ER stress. Finally, expression of human CERT rescues the reduced electrical response, demonstrating functional conservation across species. Overall the manuscript is well written that builds on established work and experiments are technically rigorous. The results are clearly presented and provide valuable insights into the physiological role of CERT.

      Major comments: 1.The reduced ERG amplitude appears to be the central phenotype associated with the loss of dcert, and most of the experiments in this manuscript effectively build a mechanistic framework to explain this observation. However, the experiments addressing detergent-resistant membrane domains (DRMs) and the unfolded protein response (UPR) seem somewhat disconnected from the main focus of the study. The DRM and UPR data feel peripheral and could benefit from few experiments for functional linkage to the ERG defect or should be moved to supplementary.

      We agree with the reviewer that further experiments are needed to link the DRM data to the ERG defects. That would require specific biochemical alterations at the PM to modulate PE-Cer species and assess their effect on the scaffolding proteins required for phototransduction, which is beyond the scope of the present study. We will consider moving these data to the supplementary section as suggested by the reviewer.

      2.The changes in ceramide species and reduction in PE-Cer are key findings of the study. These results should be further validated by performing a genetic rescue using the BAC or hCERT fly line to confirm that the lipidomic changes are specifically due to loss of CERT function.

      Thank you for this comment. We will include this in the revised manuscript.

      3.Figure 2B-C and 2E-F: Representative images corresponding to the quantified data should be included to illustrate the changes in PIP2 and PI4P reporters. Given that the fluorescence intensity of the PIP2 reporter at the PM is reduced in the dcert mutant relative to control, the authors should also verify that the reporter is expressed at comparable levels across genotypes.

      • As mentioned by the reviewer, we will include representative images alongside our quantified data, both for the basal measurements and for the kinetic study.
      • A Western blot of the reporters (PH-PLCd::GFP and P4M::GFP) across genotypes will be added to the revised manuscript.

      4.Figure 2J-K: The partial mislocalization of RDGB represents an important observation that could mechanistically explain the reduced resynthesis of PI4P and PIP2 and consequently, the decreased ERG amplitude in dcert mutants. However, this result requires further validation. First, the authors should confirm whether this mislocalization is specific to RDGB by performing co-staining with another ER-PM MCS marker, such as VAP-A, to assess whether overall MCS organization is disrupted. Second, the quantification of RDGB enrichment at ER-PM MCSs should be refined. From the representative images, RDGB appears redistributed toward the photoreceptor cell body, but the presented quantification does not clearly reflect this shift. The authors should therefore include an analysis comparing RDGB levels in the cell body versus the submicrovillar region across genotypes. This analysis should be repeated for similar experiments across the study. Additionally, the total RDGB protein level should be quantified and reported. Finally, since RDGB mislocalization could directly contribute to the decreased ERG amplitude, it would be valuable to test whether overexpression of RDGB in dcert mutants can rescue the ERG phenotype.

      • In our ultrastructural studies (Fig. 2H, 2I and Sup. Fig. 1A, 1B) we did see a reduction in PM-SMC MCS, which was corroborated by the RDGB staining.

      • Comparative ratio analysis of RDGB localisation at ER-PM MCS vs cell body will be included in the manuscript for all RDGB staining.
      • We have done western analysis for total RDGB protein level in ROR and dcert1. This data will be included in the revised manuscript.
      • This is a very interesting suggestion and we will test if RDGB overexpression can rescue ERG phenotype in dcert1.

      5.Figure 3F and I-J: Inclusion of appropriate WT and laceK05205/+ controls is necessary to allow proper interpretation of the results. These controls would strengthen the conclusions regarding the functional relationship between dcert and lace.

      Changes will be incorporated as per the suggestion.

      6.Figure 5C: The representative images shown here appear to contradict the findings described in Figure 2A. In Figure 5C, Rhodopsin 1 levels seem markedly reduced in the dcert mutants, whereas the text states that Rh1 levels are comparable between control and mutant photoreceptors. The authors should replace or reverify the representative images to ensure that they accurately reflect the conclusions presented in the text.

      We will reverify the representative images and changes will be accordingly incorporated.

      7.Figure 6D: The reported localization of hCERT to ER-PM MCSs is a key and potentially insightful observation, as it suggests the subcellular site of dcert activity in photoreceptors. However, the representative images provided are not sufficiently conclusive to support this claim. The authors should validate hCERT localization by co-staining with established markers like RDGB for ER-PM CNX99A for the ER and a Golgi marker since mammalian CERT is classically localized to ER-Golgi interfaces. Optionally, the authors could also quantify the relative distribution of hCERT among these compartments to provide a clearer assessment of its subcellular localization.

      As indicated in response to reviewer 1:

      We will perform additional IHC experiments to

      • Co-localize hCERT with an ER-PM MCS marker, e.g. RDGB, in wild type flies
      • Co-localize hCERT with VAP-A, which is enriched at the ER-PM MCS. This should help to determine if there are MCS and non-MCS pools of hCERT in these cells.
      • Test if there is a pool of hCERT in these cells that also localizes (or not) with the Golgi marker Golgin 84.

      These experiments will be included in the revision to strengthen this important point.

      Minor comments: 1.In the first paragraph of introduction, authors should consider citing few of the key MCS literature.

      Additional literature will be included as per the suggestion.

      2.Line 132: data not shown is not acceptable. Authors should consider presenting the findings in the supplemental figure.

      Data will be added in supplement as per the suggestion.

      3.The authors should include a comprehensive table or Excel sheet summarizing all statistical analyses. This should include the sample size, type of statistical test used and exact p-values. Providing this information will improve the transparency, reproducibility and overall rigor of the study.

      We will provide all the statistical analyses in mentioned format as per the suggestion.

      4.The materials and methods section can be reorganized to include citation for flystocks which do not have stock number or RRIDs if the stocks were previously described but are not available from public repositories. They should expand on the details of various quantification methods used in the study. Finally including a section of Statistical analyses would further enhance transparency and reproducibility

      • Stock details will be added wherever missing as per the suggestion.
      • Statistical analyses section will be included in the material and methods.

      **Referee cross-commenting**

      1.I concur with Reviewer 1 regarding the need for more detailed reporting of statistical analyses.

      We will perform multiple-comparison corrections for the mentioned data and incorporate them in the revised manuscript.

      2.I also agree with Reviewer 3 that the discussion should be expanded to address the age-dependent deterioration of ERG amplitude observed in the dcert mutants. This progressive decline could provide valuable insight into the long-term requirement of CERT function and signaling capacity at the photoreceptor membrane.

      An expanded discussion of the age-dependent decline in ERG amplitude will be incorporated as per the suggestion.

      Reviewer #3 (Significance (Required)):

      This study explores the physiological function of CERT, a LTP localized at MCSs in Drosophila photoreceptors and uncovers a novel role in regulating plasma membrane PE-Cer levels and GPCR-mediated signaling. These findings significantly advances our understanding of how CERT-mediated lipid transport regulates G-protein coupled phospholipase C signaling in vivo. This work also highlights Drosophila photoreceptors as a powerful system to analyze the physiological significance of lipid-dependent signaling processes. This work will be of interest to researchers in neuronal cell biology, membrane dynamics and lipid signaling community. This review is based on my expertise in neuronal cell biology.

      We thank the reviewer for appreciating the significance of our work from a neuroscience perspective.


      3. Description of the revisions that have already been incorporated in the transferred manuscript

      Please insert a point-by-point reply describing the revisions that were already carried out and included in the transferred manuscript. If no revisions have been carried out yet, please leave this section empty.


      4. Description of analyses that authors prefer not to carry out

      Please include a point-by-point response explaining why some of the requested data or additional analyses might not be necessary or cannot be provided within the scope of a revision. This can be due to time or resource limitations or in case of disagreement about the necessity of such additional data given the scope of the study. Please leave empty if not applicable.


      We can address all reviewer points in the revision. However, we will not be able to perform a mosaic analysis of the impact of the dcert1 mutant in the retina. We feel this is beyond the scope of this revision. In our response, we have highlighted how the controls included in the revision offset the need for a mosaic analysis at this stage.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      Summary:

      Lipid transfer proteins (LTPs) shuttle lipids between organelle membranes at membrane contact sites (MCSs). While extensive biochemical and cell culture studies have elucidated many aspects of LTP function, their in vivo physiological roles are only beginning to be understood. In this manuscript, the authors investigate the physiological role of the ceramide transfer protein (CERT) in Drosophila adult photoreceptors-a model previously employed by this group to study LTP function at ER-PM contact sites under physiological conditions. Using a combination of genetic, biochemical, and physiological approaches, they analyze a protein-null mutant of dcert. They show that loss of dcert causes a reduction in electrical response to light with progressive decrease in electroretinogram (ERG) amplitude with age but no retinal degeneration. Lipidomic analysis shows that while the total levels of ceramides are not changed in dcert mutants, they do observe significant change in certain species of ceramides and depletion of downstream metabolite phosphoethanolamine ceramide (PE-Cer). Using fluorescent biosensors, the authors demonstrate reduced PIP2 levels at the plasma membrane, unchanged basal PI4P levels and slower resynthesis kinetics of both lipids following depletion. Electron microscopy and immunolabeling further reveal a reduced density of ER-PM MCSs and mislocalization of the MCS-resident lipid transfer protein RDGB. Genetic interaction studies with lace and RNAi-mediated knockdown of CPES support the conclusion that both ER ceramide accumulation and PM PE-Cer depletion contribute to the observed defects in dcert mutants. In addition, detergent-resistant membrane fractionation indicates altered plasma membrane organization in the absence of dcert. The study also reports upregulation of unfolded protein response transcripts, including IRE1 and PERK, suggesting increased ER stress. Finally, expression of human CERT rescues the reduced electrical response, demonstrating functional conservation across species.Overall the manuscript is well written that builds on established work and experiments are technically rigorous. The results are clearly presented and provide valuable insights into the physiological role of CERT.

      Major comments:

      1.The reduced ERG amplitude appears to be the central phenotype associated with the loss of dcert, and most of the experiments in this manuscript effectively build a mechanistic framework to explain this observation. However, the experiments addressing detergent-resistant membrane domains (DRMs) and the unfolded protein response (UPR) seem somewhat disconnected from the main focus of the study. The DRM and UPR data feel peripheral and could benefit from few experiments for functional linkage to the ERG defect or should be moved to supplementary. 2.The changes in ceramide species and reduction in PE-Cer are key findings of the study. These results should be further validated by performing a genetic rescue using the BAC or hCERT fly line to confirm that the lipidomic changes are specifically due to loss of CERT function. 3.Figure 2B-C and 2E-F: Representative images corresponding to the quantified data should be included to illustrate the changes in PIP2 and PI4P reporters. Given that the fluorescence intensity of the PIP2 reporter at the PM is reduced in the dcert mutant relative to control, the authors should also verify that the reporter is expressed at comparable levels across genotypes. 4.Figure 2J-K: The partial mislocalization of RDGB represents an important observation that could mechanistically explain the reduced resynthesis of PI4P and PIP2 and consequently, the decreased ERG amplitude in dcert mutants. However, this result requires further validation. First, the authors should confirm whether this mislocalization is specific to RDGB by performing co-staining with another ER-PM MCS marker, such as VAP-A, to assess whether overall MCS organization is disrupted. Second, the quantification of RDGB enrichment at ER-PM MCSs should be refined. From the representative images, RDGB appears redistributed toward the photoreceptor cell body, but the presented quantification does not clearly reflect this shift. The authors should therefore include an analysis comparing RDGB levels in the cell body versus the submicrovillar region across genotypes. This analysis should be repeated for similar experiments across the study. Additionally, the total RDGB protein level should be quantified and reported. Finally, since RDGB mislocalization could directly contribute to the decreased ERG amplitude, it would be valuable to test whether overexpression of RDGB in dcert mutants can rescue the ERG phenotype. 5.Figure 3F and I-J: Inclusion of appropriate WT and laceK05205/+ controls is necessary to allow proper interpretation of the results. These controls would strengthen the conclusions regarding the functional relationship between dcert and lace. 6.Figure 5C: The representative images shown here appear to contradict the findings described in Figure 2A. In Figure 5C, Rhodopsin 1 levels seem markedly reduced in the dcert mutants, whereas the text states that Rh1 levels are comparable between control and mutant photoreceptors. The authors should replace or reverify the representative images to ensure that they accurately reflect the conclusions presented in the text. 7.Figure 6D: The reported localization of hCERT to ER-PM MCSs is a key and potentially insightful observation, as it suggests the subcellular site of dcert activity in photoreceptors. However, the representative images provided are not sufficiently conclusive to support this claim. 
The authors should validate hCERT localization by co-staining with established markers like RDGB for ER-PM CNX99A for the ER and a Golgi marker since mammalian CERT is classically localized to ER-Golgi interfaces. Optionally, the authors could also quantify the relative distribution of hCERT among these compartments to provide a clearer assessment of its subcellular localization.

      Minor comments:

      1.In the first paragraph of introduction, authors should consider citing few of the key MCS literature. 2.Line 132: data not shown is not acceptable. Authors should consider presenting the findings in the supplemental figure. 3.The authors should include a comprehensive table or Excel sheet summarizing all statistical analyses. This should include the sample size, type of statistical test used and exact p-values. Providing this information will improve the transparency, reproducibility and overall rigor of the study. 4.The materials and methods section can be reorganized to include citation for flystocks which do not have stock number or RRIDs if the stocks were previously described but are not available from public repositories. They should expand on the details of various quantification methods used in the study. Finally including a section of Statistical analyses would further enhance transparency and reproducibility

      Referee cross-commenting

      1.I concur with Reviewer 1 regarding the need for more detailed reporting of statistical analyses. 2.I also agree with Reviewer 3 that the discussion should be expanded to address the age-dependent deterioration of ERG amplitude observed in the dcert mutants. This progressive decline could provide valuable insight into the long-term requirement of CERT function and signaling capacity at the photoreceptor membrane.

      Significance

      This study explores the physiological function of CERT, a LTP localized at MCSs in Drosophila photoreceptors and uncovers a novel role in regulating plasma membrane PE-Cer levels and GPCR-mediated signaling. These findings significantly advances our understanding of how CERT-mediated lipid transport regulates G-protein coupled phospholipase C signaling in vivo. This work also highlights Drosophila photoreceptors as a powerful system to analyze the physiological significance of lipid-dependent signaling processes. This work will be of interest to researchers in neuronal cell biology, membrane dynamics and lipid signaling community. This review is based on my expertise in neuronal cell biology.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      Summary

      Non-vesicular lipid transfer by lipid transfer proteins regulates organelle lipid compositions and functions. CERT transfers ceramide from the ER to Golgi to produce sphingomyelin, although CERT function in animal development and physiology is less clear. Using dcert1 (a protein-null allele), this paper shows a disruption of the sole Drosophila CERT gene causes reduced ERG amplitude in photoreceptors. While the level and localization of phototransduction machinery appears unaffected, the level of PIP2 and the localization of RDGB are perturbed. Collectively, these observations establish a novel link between CERT and phospholipase signaling in phototransduction. To understand the molecular mechanism further, the authors performed lipid chromatography and mass spec to characterize ceramide species in dcert1. This analysis reveals that whereas the total ceramide remains unaffected, most PE-ceramide species are reduced. The authors use lace mutant (serine palmitoyl transferase) and CPES (ceramide phosphoethanolamine synthase) RNAi to distinguish whether it is the accumulation of ceramide in the ER or the reduction of sphingolipid derivates in the Golgi that is the cause for the reduced ERG amplitude. Mutating one copy of lace reduces ceramide level by 50% and partially rescues the ERG defect, suggesting that the accumulation of ceramide in the ER is a cause. CPES RNAi phenocopies the reduced ERG amplitude, suggesting the production of certain sphingolipid is also relevant.

      Major comments:

      1. By showing the reduced PIP2 level, the decreased SMC sites at the base of rhabdomeres, and the diffused RDGB localization in dcert1, the authors favor the model, in which the disruption of ceramide metabolism affects PIP transport. However, it is unclear if the reduced PIP2 level (i.e., reduced PH-PLC::GFP staining) is specific to the rhabdomeres. It should be possible to compare PH-PLC::GFP signals in different plasma membranes between wildtype and dcert1. If PH-PLC::GFP signal is specifically reduced at the rhabdomeres, this conclusion will be greatly strengthened. In addition, the photoreceptor apical plasma membrane includes rhabdomere and stalk membrane. Is the PH-PLC::GFP signal at the stalk membrane also affected?
      2. The analysis of RDGB localization should be done in mosaic dcert1 retinas, which will be more convincing with internal control for each comparison. In addition, the phalloidin staining in Figure 2J shows distinct patterns of adherens junctions, indicating that the wildtype and dcert1 were imaged at different focal planes.
      3. The significance of ceramide species levels in dcert1 and GMR>CPESRNAi needs to be explained better. Do certain alterations represent accumulation of ceramides in the ER?
      4. The suppression by lace is interpreted as evidence that the reduced ERG amplitude in dcert1 is caused by ceramide accumulation in the ER. This interpretation seems preliminary as lace may interact with dcert genetically by other mechanisms.
      5. The authors show that ERG amplitude is reduced in GMR>CPESRNAi. While this phenocopying is consistent with the reduced ERG amplitude in dcert1 being caused by reduced production of PE-ceramide, GMR>CPESRNAi also shows an increase in total ceramide level. Could this support the hypothesis that reduced ERG amplitude is caused by an accumulation of ceramide elsewhere? In addition, is the ERG amplitude reduction in GMR>CPESRNAi sensitive to lace?
      6. Along the same line, while the total ceramide level is significantly reduced in lace heterozygotes, is the PE-ceramide level also reduced? If yes, wouldn't this be contradictory to PE-ceramide production being important for ERG amplitude?
      7. What is the explanation and significance for the age-dependent deterioration of ERG amplitude in dcert1? Likewise, the significance of no retinal degeneration is not clearly presented.
      8. The rescue of dcert1 phenotype by the expression of human CERT is a nice result. In addition to demonstrating a functional conservation, it allows a determination of CERT protein localization. However, the quality of images in Figure 6D should be improved. The phalloidin staining was rather poor, and the CNX99A in the lower panel was over-exposed, generating bleed-through signals at the rhabdomeres. In addition, the localization of hCERT should be explored further. For instance, does hCERT colocalize with RDGB? Is the hCERT localization altered in lace or GMR>CPESRNAi background?

      Minor comments:

      1. In Line 128, Df(732) should be Df(3L)BSC732.
      2. GMR-SMSrRNAi shows an increase in ERG peak amplitude. Is there an explanation for this?

      Significance

      As CERT mutations are implicated in human learning disability, a better understanding of CERT function in neuronal cells is certainly of interest. While the link between ceramide transport and phospholipase signaling is novel and interesting, this paper does not clearly explain the mechanism. In addition, as the ERGs were measured long after the retinal cells became deficient in CERT or CPES, it is difficult to assess whether the observed phenotype is a primary defect. Furthermore, the quality of some images needs to be improved. Thus, I feel the manuscript in its current form is too preliminary.

    1. turns around

      1. to turn or rotate (or cause to turn)

      2. to change direction; to turn around or look back (or make someone do so)

      3. to turn against, attack or criticize, become hostile toward ((on, upon))

    2. evade

      evade 1. Verb: to avoid or escape (a task or a person)

      2. Verb: to evade (especially a legal or moral obligation)

      3. Verb: to avoid or sidestep (handling or discussing something)

    1. Reviewer #1 (Public review):

      The authors present exciting new experimental data on the antigenic recognition of 78 H3N2 strains (from the beginning of the 2023 Northern Hemisphere season) against a set of 150 serum samples. The authors compare protection profiles of individual sera and find that the antigenic effect of amino acid substitutions at specific sites depends on the immune class of the sera, differentiating between children and adults. Person-to-person heterogeneity in the measured titers is strong, specifically in the group of children's sera. The authors find that the fraction of sera with low titers correlates with the inferred growth rate using maximum likelihood regression (MLR), a correlation that does not hold for pooled sera. The authors then measure the protection profile of the sera against historical vaccine strains and find that it can be explained by birth cohort for children. Finally, the authors present data comparing pre- and post- vaccination protection profiles for 39 (USA) and 8 (Australia) adults. The data shows a cohort-specific vaccination effect as measured by the average titer increase, and also a virus-specific vaccination effect for the historical vaccine strains. The generated data is shared by the authors and they also note that these methods can be applied to inform the bi-annual vaccine composition meetings, which could be highly valuable.

      Thanks to the authors for the revised version of the manuscript. A few concerns remain after the revision:

      (1) We appreciate the additional computational analysis the authors have performed on normalizing the titers with the geometric mean titer for each individual, as shown in the new Supplemental Figure 6. We agree with the authors' statement that, after averaging again within specific age groups, "there are no obvious age group-specific patterns." A discussion of this should be added to the revised manuscript, for example in the section "Pooled sera fail to capture the heterogeneity of individual sera," referring to the new Supplemental Figure 6.

      However, we also suggested that after this normalization, patterns might emerge that are not necessarily defined by birth cohort. This possibility remains unexplored and could provide an interesting addition to support potential effects of substitutions at sites 145 and 275/276 in individuals with specific titer profiles, which as stated above do not necessarily follow birth cohort patterns.

      (2) Thank you for elaborating further on the method used to estimate growth rates in your reply to the reviewers. To clarify: the reason that we infer from Fig. 5a that A/Massachusetts has a higher fitness than A/Sydney is not because it reaches a higher maximum frequency, but because it seems to have a higher slope. The discrepancy between this plot and the MLR inferred fitness could be clarified by plotting the frequency trajectories on a log-scale.

      For the MLR, we understand that the initial frequency matters in assessing a variant's growth. However, when starting points of two clades differ in time (i.e., in different contexts of competing clades), this affects comparability, particularly between A/Massachusetts and A/Ontario, as well as for other strains. We still think that mentioning these time-dependent effects, which are not captured by the MLR analysis, would be appropriate. To support this, it could be helpful to include the MLR fits as an appendix figure, showing the different starting and/or time points used.

      (3) Regarding my previous suggestion to test an older vaccine strain than A/Texas/50/2012 to assess whether the observed peak in titer measurements is virus-specific: We understand that the authors want to focus the scope of this paper on the relative fitness of contemporary strains, and that this additional experimental effort would go beyond the main objectives outlined in this manuscript. However, the authors explicitly note that "Adults across age groups also have their highest titers to the oldest vaccine strain tested, consistent with the fact that these adults were first imprinted by exposure to an older strain." This statement gives the impression that imprinting effects increase titers for older strains, whereas this does not seem to be true from their results, but only true for A/Texas. It should be modified accordingly.

    2. Reviewer #3 (Public review):

      The authors use high throughput neutralisation data to explore how different summary statistics for population immune responses relate to strain success, as measured by growth rate during the 2023 season. The question of how serological measurements relate to epidemic growth is an important one, and I thought the authors present a thoughtful analysis tackling this question, with some clear figures. In particular, they found that stratifying the population based on the magnitude of their antibody titres correlates more with strain growth than using measurements derived from pooled serum data. The updated manuscript has a stronger motivation, and there is substantial potential to build on this work in future research.

      Comments on revisions:

      I have no additional recommendations. There are several areas where the work could be further developed, which were not addressed in detail in the responses, but given this is a strong manuscript as it stands, it is fine that these aspects are for consideration only at this point.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      The authors present exciting new experimental data on the antigenic recognition of 78 H3N2 strains (from the beginning of the 2023 Northern Hemisphere season) against a set of 150 serum samples. The authors compare protection profiles of individual sera and find that the antigenic effect of amino acid substitutions at specific sites depends on the immune class of the sera, differentiating between children and adults. Person-to-person heterogeneity in the measured titers is strong, specifically in the group of children's sera. The authors find that the fraction of sera with low titers correlates with the inferred growth rate using maximum likelihood regression (MLR), a correlation that does not hold for pooled sera. The authors then measure the protection profile of the sera against historical vaccine strains and find that it can be explained by birth cohort for children. Finally, the authors present data comparing pre- and post- vaccination protection profiles for 39 (USA) and 8 (Australia) adults. The data shows a cohort-specific vaccination effect as measured by the average titer increase, and also a virus-specific vaccination effect for the historical vaccine strains. The generated data is shared by the authors and they also note that these methods can be applied to inform the bi-annual vaccine composition meetings, which could be highly valuable.

      Thanks for this nice summary of our paper.

      The following points could be addressed in a revision:

      (1) The authors conclude that much of the person-to-person and strain-to-strain variation seems idiosyncratic to individual sera rather than age groups. This point is not yet fully convincing. While the mean titer of an individual may be idiosyncratic to the individual sera, the strain-to-strain variation still reveals some patterns that are consistent across individuals (the authors note the effects of substitutions at sites 145 and 275/276). A more detailed analysis, removing the individual-specific mean titer, could still show shared patterns in groups of individuals that are not necessarily defined by the birth cohort.

      As the reviewer suggests, we normalized the titers for all sera to the geometric mean titer for each individual in the US-based pre-vaccination adults and children. This is only for the 2023-circulating viral strains. We then faceted these normalized titers by the same age groups we used in Figure 6, and the resulting plot is shown. Although there are differences among virus strains (some are better neutralized than others), there are no obvious age group-specific patterns (e.g., the trends in the two facets are similar). This observation suggests that at least for these relatively closely related recent H3N2 strains, the strain-to-strain variation does not obviously segregate by age group. Obviously, it is possible (we think likely) that there would be more obvious age-group specific trends if we looked at a larger swath of viral strains covering a longer time range (e.g., over decades of influenza evolution). We have added the new plots shown as a Supplemental Figure 6 in the revised manuscript.
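      For readers who want to apply this kind of per-serum normalization to their own data, here is a minimal sketch of the geometric-mean normalization described above. This is an illustrative reimplementation, not the authors' actual pipeline, and the column names ("serum", "virus", "titer", "age_group") are hypothetical placeholders.

        import numpy as np
        import pandas as pd

        def normalize_by_serum_gmt(df: pd.DataFrame) -> pd.DataFrame:
            """Divide each titer by the geometric mean titer of its serum."""
            log_titer = np.log(df["titer"])
            # geometric mean per serum = exp(mean of log titers within that serum)
            gmt = np.exp(log_titer.groupby(df["serum"]).transform("mean"))
            return df.assign(normalized_titer=df["titer"] / gmt)

        # Example usage (hypothetical input file and columns):
        # df = pd.read_csv("titers.csv")
        # summary = (normalize_by_serum_gmt(df)
        #            .groupby(["age_group", "virus"])["normalized_titer"]
        #            .median())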

      (2) The authors show that the fraction of sera with a titer below 138 correlates strongly with the inferred growth rate using MLR. However, the authors also note that there exists a strong correlation between the MLR growth rate and the number of HA1 mutations. This analysis does not yet show that the titers provide substantially more information about the evolutionary success. The actual relation between the measured titers and fitness is certainly more subtle than suggested by the correlation plot in Figure 5. For example, the clades A/Massachusetts and A/Sydney both have a positive fitness at the beginning of 2023, but A/Massachusetts has substantially higher relative fitness than A/Sydney. The growth inference in Figure 5b does not appear to map that difference, and the antigenic data would give the opposite ranking. Similarly, the clades A/Massachusetts and A/Ontario both have positive relative fitness, as correctly identified by the antigenic ranking, but at quite different times (i.e., in different contexts of competing clades). Other clades, like A/St. Petersburg, are assigned high growth and high escape but remain at low frequency throughout. Some mention of these effects not mapped by the analysis may be appropriate.

      Thanks for the nice summary of our findings in Figure 5. However, the reviewer is misreading the growth charts when they say that A/Massachusetts/18/2022 has a substantially higher fitness than A/Sydney/332/2023. Figure 5a (reprinted in the left panel) shows the frequency trajectory of different variants over time. While A/Massachusetts/18/2022 reaches a higher frequency than A/Sydney/332/2023, the trajectory is similar, and the reason that A/Massachusetts/18/2022 reached a higher maximum frequency is that it started at a higher frequency at the beginning of 2023. The MLR growth rate estimates differ from the maximum absolute frequency reached: instead, they reflect how rapidly each strain grows relative to others. In fact, A/Massachusetts/18/2022 and A/Sydney/332/2023 have similar growth rates, as shown in Supplemental Figure 6b (reprinted on the right). Similarly, A/Saint-Petersburg/RII-166/2023 starts at a low initial frequency but then grows even as A/Massachusetts/18/2022 and A/Sydney/332/2023 are declining, and so has a higher growth rate than both of those.

      In the revised manuscript, we have clarified how viral growth rates are estimated from frequency trajectories, and how growth rate differs from max frequency in the text below:

      “To estimate the evolutionary success of different human H3N2 influenza strains during 2023, we used multinomial logistic regression, which analyzes strain frequencies over time to calculate strain-specific relative growth rates [51–53]. There were sufficient sequencing counts to reliably estimate growth rates in 2023 for 12 of the HAs for which we measured titers using our sequencing-based neutralization assay libraries (Figure 5a,b and Supplemental Figure 9a,b). Note that these growth rates estimate how rapidly each strain grows relative to the other strains, rather than the absolute highest frequency reached by each strain.”

      (3) For the protection profile against the vaccine strains, the authors find for the adult cohort that the highest titer is always against the oldest vaccine strain tested, which is A/Texas/50/2012. However, the adult sera do not show an increase in titer towards older strains, but only a peak at A/Texas. Therefore, it could be that this is a virus-specific effect, rather than a property of the protection profile. Could the authors test with one older vaccine virus (A/Perth/16/2009?) whether this really can be a general property?

      We are interested in studying immune imprinting more thoroughly using sequencing-based neutralization assays, but we note that the adults in the cohorts we studied would have been imprinted with much older strains than included in this library. As this paper focuses on the relative fitness of contemporary strains with minor secondary points regarding imprinting, these experiments are beyond the scope of this study. We’re excited for future work (from our group or others) to explore these points by making a new virus library with strains from multiple decades of influenza evolution. 

      Reviewer #2 (Public review):

      This is an excellent paper. The ability to measure the immune response to multiple viruses in parallel is a major advancement for the field, which will be relevant across pathogens (assuming the assay can be appropriately adapted). I only have a few comments, focused on maximising the information provided by the sera.

      Thanks very much!

      Firstly, one of the major findings is that there is wide heterogeneity in responses across individuals. However, we could expect that individuals' responses should be at least correlated across the viruses considered, especially when individuals are of a similar age. It would be interesting to quantify the correlation in responses as a function of the difference in ages between pairs of individuals. I am also left wondering what the potential drivers of the differences in responses are, with age being presumably key. It would be interesting to explore individual factors associated with responses to specific viruses (beyond simply comparing adults versus children).

      We thank the reviewer for this interesting idea. We performed this analysis (and the related analyses described) and added this as a new Supplemental Figure 7, which is pasted after the response to the next related comment by the reviewer. 

      For 2023-circulating strains, we observed essentially no relationship between the strength of the correlation between pairs of sera and the difference in age between those pairs (Supplemental Figure 7), which was unsurprising given the high degree of heterogeneity between individual sera (Figure 3, Supplemental Figure 6, and Supplemental Figure 8). For vaccine strains, there is a moderate negative correlation only in the children, but not in the adults or the combined group of adults and children. This could be because the children are younger, with limited and potentially more similar vaccine and exposure histories than the adults. It could also be because the children are overall closer in age than the adults.

      Relatedly, is the phylogenetic distance between pairs of viruses associated with similarity in responses?

      For 2023-circulating strains, across sera cohorts we observed a weak-to-moderate correlation between the strength of correlation between the neutralizing titers across all sera to pairs of viruses and the Hamming distances between virus pairs. For the same comparison with vaccine strains, we observed moderate correlations, but this must be caveated with the slightly larger range of Hamming distances between vaccine strains. Notably, many of the points on the negative correlation slope are a mix of egg- and cell-produced vaccine strains from similar years, but there are some strain comparisons where the same year’s egg- and cell-produced vaccine strains correlate poorly.
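      For readers who want to reproduce this kind of comparison, here is a minimal sketch (hypothetical data layout and function names, not the authors' code) of correlating titer-profile similarity with HA1 Hamming distance across virus pairs:

        from itertools import combinations
        import numpy as np

        def hamming(seq1: str, seq2: str) -> int:
            """Number of differing positions between two aligned, equal-length sequences."""
            return sum(a != b for a, b in zip(seq1, seq2))

        def distance_vs_titer_similarity(log_titers: dict, sequences: dict) -> float:
            """log_titers: virus -> array of log titers across the same ordered sera.
            sequences: virus -> aligned HA1 protein sequence.
            Returns the correlation, across virus pairs, between Hamming distance
            and the similarity (Pearson r) of the two viruses' titer profiles."""
            dists, sims = [], []
            for v1, v2 in combinations(sorted(log_titers), 2):
                dists.append(hamming(sequences[v1], sequences[v2]))
                sims.append(np.corrcoef(log_titers[v1], log_titers[v2])[0, 1])
            return float(np.corrcoef(dists, sims)[0, 1])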

      Figure 5C is also a really interesting result. To be able to predict growth rates based on titers in the sera is fascinating. As touched upon in the discussion, I suspect it is really dependent on the representativeness of the sera of the population (so, e.g., if only elderly individuals provided sera, it would be a different result than if only children provided samples). It may be interesting to compare different hypotheses - so e.g., see if a population-weighted titer is even better correlated with fitness - so the contribution from each individual's titer is linked to a number of individuals of that age in the population. Alternatively, maybe only the titers in younger individuals are most relevant to fitness, etc.

      We’re very interested in these analyses, but suggest they may be better explored in subsequent works that could sample more children, teenagers and adults across age groups. Our sera set, as the reviewer suggests, may be under-powered to perform the proposed analysis on subsetted age groups of our larger age cohorts. 

      In Figure 6, the authors lump together individuals within 10-year age categories - however, this is potentially throwing away the nuances of what is happening at individual ages, especially for the children, where the measured viruses cross different groups. I realise the numbers are small and the viruses only come from a small number of years; however, it may be preferable to order all the individuals by age (y-axis) and the viral responses in ascending order (x-axis) and plot the response as a heatmap. As currently plotted, it is difficult to compare across panels.

      This is a good suggestion. In the revised manuscript we have included a heatmap of the children and pre-vaccination adults, ordered by the year of birth of each individual, as Supplemental figure 8. That new figure is also pasted in this response.

      Reviewer #3 (Public review):

      The authors use high-throughput neutralisation data to explore how different summary statistics for population immune responses relate to strain success, as measured by growth rate during the 2023 season. The question of how serological measurements relate to epidemic growth is an important one, and I thought the authors present a thoughtful analysis tackling this question, with some clear figures. In particular, they found that stratifying the population based on the magnitude of their antibody titres correlates more with strain growth than using measurements derived from pooled serum data. However, there are some areas where I thought the work could be more strongly motivated and linked together. In particular, how the vaccine responses in US and Australia in Figures 6-7 relate to the earlier analysis around growth rates, and what we would expect the relationship between growth rate and population immunity to be based on epidemic theory.

      Thank you for this nice summary. This reviewer also notes that the text related to figures 6 and 7 is more secondary to the main story presented in figures 3-5. The main motivation for including figures 6 and 7 was to demonstrate the wide-ranging applications of sequencing-based neutralization data. We have tried to clarify this with the following minor text revisions, which do not add new content but we hope smooth the transition between results sections.

      While the preceding analyses demonstrated the utility of sequencing-based neutralization assays for measuring titers of currently circulating strains, our library also included viruses with HAs from each of the H3N2 influenza Northern Hemisphere vaccine strains from the last decade (2014 to 2024, see Supplemental Table 1). These historical vaccine strains cover a much wider span of evolutionary diversity than the 2023-circulating strains analyzed in the preceding sections (Figure 2a,b and Supplemental Figure 2b-e). For this analysis, we focused on the cell-passaged strains for each vaccine, as these are more antigenically similar to their contemporary circulating strains than the egg-passaged vaccine strains since they lack the mutations that arise during growth of viruses in eggs [55–57] (Supplemental Table 1).

      Our sequencing-based assay could also be used to assess the impact of vaccination on neutralization titers against the full set of strains in our H3N2 library. To do this, we analyzed matched 28-day post-vaccination samples for each of the above-described 39 pre-vaccination samples from the cohort of adults based in the USA (Table 1). We also analyzed a smaller set of matched pre- and post-vaccination sera samples from a cohort of eight adults based in Australia (Table 1). Note that there are several differences between these cohorts: the USA-based cohort received the 2023-2024 Northern Hemisphere egg-grown vaccine whereas the Australia-based cohort received the 2024 Southern Hemisphere cell-grown vaccine, and most individuals in the USA-based cohort had also been vaccinated in the prior season whereas most individuals in the Australia-based cohort had not. Therefore, multiple factors could contribute to observed differences in vaccine response between the cohorts.

      Reviewer #3 (Recommendations for the authors):

      Main comments:

      (1) The authors compare titres of the pooled sera with the median titres across individual sera, finding a weak correlation (Figure 4). I was therefore interested in the finding that geometric mean titre and median across a study population are well correlated with growth rate (Supplemental Figure 6c). It would be useful to have some more discussion on why estimates from a pool are so much worse than pooled estimates.

      We thank this reviewer for this point. We would clarify that pooling sera is roughly equivalent to taking the arithmetic mean of the individual sera, rather than the geometric mean or median, and this tends to bias the pool's measurements toward the highest-titer sera within it. To address this reviewer's point, we've added the following text to the manuscript:

      “To confirm that sera pools are not reflective of the full heterogeneity of their constituent sera, we created equal volume pools of the children and adult sera and measured the titers of these pools using the sequencing-based neutralization assay. As expected, neutralization titers of the pooled sera were always higher than the median across the individual constituent sera, and the pool titers against different viral strains were only modestly correlated with the median titers across individual sera (Figure 4). The differences in titers across strains were also compressed in the serum pools relative to the median across individual sera (Figure 4). The failure of the serum pools to capture the median titers of all the individual sera is especially dramatic for the children sera (Figure 4) because these sera are so heterogeneous in their individual titers (Figure 3b). Taken together, these results show that serum pools do not fully represent individual-level heterogeneity, and are similar to taking the arithmetic mean of the titers for a pool of individuals, which tends to be biased by the highest titer sera”.
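      As a toy numerical illustration of this point (hypothetical titer values, not data from the study): a pool behaves roughly like an arithmetic mean of its constituent sera and so is pulled toward a single high-titer serum, whereas the median is not.

        import statistics

        titers = [20, 40, 40, 80, 2560]  # hypothetical: one serum has a very high titer

        print(statistics.mean(titers))                   # 548 -> dominated by the outlier, like a pool
        print(round(statistics.geometric_mean(titers)))  # ~92
        print(statistics.median(titers))                 # 40  -> reflects the typical individual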

      (2) Perhaps I missed it, but are growth rates weekly growth rates? (I assume so?)

      The growth rates are relative exponential growth rates calculated assuming a serial interval of 3.6 days. We also added clarifying language and a citation for the serial interval to the methods section:

      The analysis performing H3 HA strain growth rate estimates using the evofr[51] package is at https://github.com/jbloomlab/flu_H3_2023_seqneut_vs_growth. Briefly, we sought to make growth rate estimates for the strains in 2023 since this was the same timeframe when the sera were collected. To achieve this, we downloaded all publicly-available H3N2 sequences from the GISAID[88] EpiFlu database, filtering to only those sequences that closely matched a library HA1 sequence (within one HA1 amino-acid mutation) and were collected between January 2023 and December 2023. If a sequence was within one HA1 amino-acid mutation of multiple library HA1 proteins then it was assigned to the closest one; if there were multiple equally close matches then it was assigned fractionally to each match. We only made growth rate estimates for library strains with at least 80 sequencing counts (Supplemental Figure 9a), and ignored counts for sequences that did not match a library strain (equivalent results were obtained if we instead fit a growth rate for these sequences as an “other” category). We then fit multinomial logistic regression models using the evofr[51] package assuming a serial interval of 3.6 days[101]  to the strain counts. For the plot in Figure 5a the frequencies are averaged over a 14-day sliding window for visual clarity, but the fits were to the raw sequencing counts. For most of the analyses in this paper we used models based on requiring 80 sequencing counts to make an estimate for strain growth rates, and counting a sequence as a match if it was within one amino-acid mutation; see https://jbloomlab.github.io/flu_H3_2023_seqneut_vs_growth/ for comparable analyses using different reasonable sequence count cutoffs (e.g., 60, 50, 40 and 30, as depicted in Supplemental Figure 9).  Across sequence cutoffs, we found that the fraction of individuals with low neutralization titers and number of HA1 mutations correlated strongly with these MLR-estimated strain growth rates.
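      To make the estimation procedure concrete, below is a hedged, generic sketch of a multinomial logistic regression fit to strain counts over time. It is an illustrative reimplementation for readers, not the evofr code used in the paper, and the function name and data layout are invented for this example. Each strain i is modeled with frequency softmax(a_i + b_i * t); the fitted slope b_i, scaled by the serial interval, gives a relative growth rate with respect to a pivot strain.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import logsumexp

        def fit_mlr_growth_rates(counts: np.ndarray, days: np.ndarray, serial_interval: float = 3.6):
            """counts: (n_timepoints, n_strains) sequence counts; days: (n_timepoints,) collection times.
            Returns relative fitness per serial interval for each strain, relative to strain 0 (the pivot)."""
            n_strains = counts.shape[1]

            def neg_log_lik(params):
                a = np.concatenate([[0.0], params[:n_strains - 1]])   # intercepts (pivot fixed at 0)
                b = np.concatenate([[0.0], params[n_strains - 1:]])   # slopes per day (pivot fixed at 0)
                logits = a[None, :] + b[None, :] * days[:, None]
                log_p = logits - logsumexp(logits, axis=1, keepdims=True)  # log softmax frequencies
                return -np.sum(counts * log_p)                             # multinomial log-likelihood

            fit = minimize(neg_log_lik, np.zeros(2 * (n_strains - 1)), method="L-BFGS-B")
            slopes = np.concatenate([[0.0], fit.x[n_strains - 1:]])
            return np.exp(slopes * serial_interval)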

      (3)  I found Figure 3 useful in that it presents phylogenetic structure alongside titres, to make it clearer why certain clusters of strains have a lower response. In contrast, I found it harder to meaningfully interpret Figure 7a beyond the conclusion that vaccines lead to a fairly uniform rise in titre. Do the 275 or 276 mutations that seem important for adults in Figure 3 have any impact?

      We are certainly interested in the questions this reviewer raises, and in trying to understand how well a seasonal vaccine protects against the most successful influenza variants that season. However, these post-vaccination sera were taken when neutralizing titers peak ~30 days after vaccination. Because of this, in the larger cohort of US-based post-vaccination adults, the median titers across sera to most strains appear uniformly high. In the Australian-based post-vaccination adults, there was some strain-to-strain variation in median titers across sera, but of course this must be caveated with the much smaller sample size. It might be more relevant to answer this question with longitudinally sampled sera, when titers begin to wane in the following months.

      (4)  It could be useful to define a mechanistic relationship about how you would expect susceptibility (e.g. fraction with titre < X, where X is a good correlate) to relate to growth via the reproduction number: R = R0 x S. For example, under the assumption the generation interval G is the same for all, we have R = exp(r*G), which would make it possible to make a prediction about how much we would expect the growth rate to change between S = 0.45 and 0.6, as in Fig 5c. This sort of brief calculation (or at least some discussion) could add some more theoretical underpinning to the analysis, and help others build on the work in settings with different fractions with low titres. It would also provide some intuition into whether we would expect relationships to be linear.

      This is an interesting idea for future work! However, the scope of our current study is to provide these experimental data and show a correlation with growth; we hope this can be used to build more mechanistic models in future.
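      Purely as an illustration of the magnitude the reviewer's suggested relationship implies (a sketch under assumed values: the $R_0$ below is an arbitrary placeholder, and treating the 3.6-day serial interval as the generation interval $G$ is also an assumption; neither comes from this study):

      $$R = R_0 S = e^{rG} \;\Rightarrow\; r = \frac{\ln(R_0 S)}{G}$$

      With an assumed $R_0 = 1.5$ and $G = 3.6$ days, $r(S{=}0.45) = \ln(0.675)/3.6 \approx -0.11\ \text{day}^{-1}$ and $r(S{=}0.60) = \ln(0.90)/3.6 \approx -0.03\ \text{day}^{-1}$, i.e., a difference of roughly 0.08 per day under these assumptions.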

      (5) A key conclusion from the analysis is that the fraction above a threshold of ~140 is particularly informative for growth rate prediction, so would it be worth including this in Figure 6-7 to give a clearer indication of how much vaccination reduces contribution to strain growth among those who are vaccinated? This could also help link these figures more clearly with the main analysis and question.

      Although our data do find ~140 to be the threshold that gives the maximal correlation with growth rate, we are not comfortable strongly concluding that 140 is a correlate of protection, as titers could influence viral fitness without completely protecting against infection. In addition, inspection of Figure 5d shows that while ~140 does give the maximal correlation, a good correlation is observed for most cutoffs in the range from ~40 to 200, so we are not sure how robustly ~140 can be identified as the optimal threshold.
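      For readers interested in how such a cutoff sweep can be carried out in practice, here is an illustrative sketch (hypothetical arrays and function name, not the study's data or code): for each candidate cutoff, compute the per-strain fraction of sera with titers below it and correlate that fraction with the MLR growth rates.

        import numpy as np

        def sweep_cutoffs(titers: np.ndarray, growth_rates: np.ndarray, cutoffs) -> dict:
            """titers: (n_sera, n_strains) neutralization titers; growth_rates: (n_strains,).
            Returns {cutoff: Pearson correlation between fraction-below-cutoff and growth rate}."""
            results = {}
            for cutoff in cutoffs:
                frac_low = (titers < cutoff).mean(axis=0)  # fraction of sera with low titer to each strain
                results[cutoff] = float(np.corrcoef(frac_low, growth_rates)[0, 1])
            return results

        # e.g. sweep_cutoffs(titers, growth_rates, cutoffs=range(20, 320, 20))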

      (6)  In Figure 5, the caption doesn't seem to include a description for (e).

      Thank you to the reviewer for catching this – this is fixed now.

      (7)  The US vs Australia comparison could have benefited from more motivation. The authors conclude, "Due to the multiple differences between cohorts we are unable to confidently ascribe a cause to these differences in magnitude of vaccine response" - given the small sample sizes, what hypotheses could have been tested with these data? The comparison isn't covered in the Discussion, so it seems a bit tangential currently.

      Thank you to the reviewer for this comment, but we should clarify our aim was not to directly compare US and Australian adults. We are interested in regional comparisons between serum cohorts, but did not have the numbers to adequately address those questions here. This section (and the preceding question) were indeed both intended to be tangential to the main finding, and hopefully this will be clarified with our text additions in response to Reviewer #3’s public reviews.

    1. Reviewer #1 (Public review):

      Summary:

      This study provides evidence that neuropeptide signaling, particularly via the CRH-CRHBP pathway, plays a key role in regulating the precision of vocal motor output in songbirds. By integrating gene expression profiling with targeted manipulations in the song vocal motor nucleus RA, the authors demonstrate that altering CRH and CRHBP levels bidirectionally modulates song variability. These findings reveal a previously unrecognized neuropeptidergic mechanism underlying motor performance control, supported by molecular and functional evidence.

      Strengths:

      Neural circuit mechanisms underlying motor variability have been intensively studied, yet the molecular bases of such variability remain poorly understood. The authors address this important gap using the songbird (Bengalese finch) as a model system for motor learning, providing experimental evidence that neuropeptide signaling contributes to vocal motor variability. They comprehensively characterize the expression patterns of neuropeptide-related genes in brain regions involved in song vocal learning and production, revealing distinct regulatory profiles compared to non-vocal regions, as well as developmental and behavioral dependencies, including altered expression following deafening and correlations with singing activity over the two days preceding sampling. Through these multi-level analyses spanning anatomy, development, and behavior, the authors identify the CRH-CRHBP pathway in the vocal motor nucleus RA as a candidate regulator of song variability. Functional manipulations further demonstrate that modulation of this pathway bidirectionally alters song variability.

      Overall, this work makes effective use of songbirds: through a well-established neuroethological framework, it uncovers how previously uncharacterized molecular pathways shape behavioral output at the individual level.

      Weaknesses:

      (1) This study uses Bengalese finches (BFs) for all experiments-bulk RNA-seq, in situ hybridization across developmental stages, deafening, gene manipulation, and CRH microinfusion-except for the sc/snRNA-seq analysis. BFs differ from zebra finches (ZFs) in several important ways, including faster song degradation after deafening and greater syllable sequence complexity. This study makes effective use of these unique BF characteristics and should be commended for doing so.

      However, the major concern lies in the use of the single-cell/single-nucleus RNA-seq dataset from Colquitt et al. (2021), which combines data from both ZFs and BFs for cell-type classification. Based on our reanalysis of the publicly available dataset used in both Colquitt et al. (2021) and the present study, my lab identified two major issues:

      (a) The first concern is that the quality of the single-cell RNA-seq data from BFs is extremely poor, and the number of BF-derived cells is very limited. In other words, most of the gene expression information at the single-cell (or "subcellular type") level in this study likely reflects ZF rather than BF profiles. In our verification of the authors' publicly annotated data, we found that in the song nucleus RA, only about 18 glutamatergic cells (2.3%) of a total of 787 RA_Glut (RA_Glut1+2+3) cells were derived from BFs. Similarly, in HVC, only 53 cells (4.1%) out of 1,278 Glut1+Glut4 cells were BF-derived. This clearly indicates that the cell-subtype-level expression data discussed in this study are predominantly based on ZF, not BF, expression profiles.

      Recent studies have begun to report interspecies differences in the expression of many genes in the song control nuclei. It is therefore highly plausible that the expression patterns of CRHBP and other neuropeptide-signaling-related genes differ between ZFs and BFs. Yet, the current study does not appear to take this potential species difference into account. As a result, analyses such as the CellChat results (Fig. 2F and G) and the model proposed in Fig. 6G are based on ZF-derived transcriptomic information, even though the rest of the experimental data are derived from BF, which raises a critical methodological inconsistency.

      (b) The second major concern involves the definition of "subcellular types" in the sc/snRNA-seq dataset. Specifically, the RA_Glut1, 2, and 3 and HVC_Glu1 and 4 clusters-classified as glutamatergic projection neuron subtypes-may in fact represent inter-individual variation within the same cell type rather than true subtypes. Following Colquitt et al. (2021), Toji et al. (PNAS, 2024) demonstrated clear individual differences in the gene expression profiles of glutamatergic projection neurons in RA.

      In our reanalysis of the same dataset, we also observed multiple clusters representing the same glutamatergic projection neurons in UMAP space. This likely occurs because Seurat integration (anchor-based mutual nearest neighbor integration) was not applied, and because cells were not classified based on individual SNP information using tools such as Souporcell. When classified by individual SNPs, we confirmed that the RA_Glut1-3 and HVC_Glu1 and 4 clusters correspond simply to cells from different individuals rather than distinct subcellular types. (Although images cannot be attached in this review system, we can provide our analysis results if necessary.)

      This distinction is crucial, as subsequent analyses and interpretations throughout the manuscript depend on this classification. In particular, Figure 6G presents a model based on this questionable subcellular classification. Similarly, the ligand-receptor relationships shown in Figure 2G - such as the absence of SST-SSTR1 signaling in RA_Glut3 but its presence in RA_Glut1 and 2-are more plausibly explained by inter-individual variation rather than subcellular-type specificity.

      Whether these differences are interpreted as individual variation within a single cell type or as differences in projection targets among glutamatergic neurons has major implications for understanding the biological meaning of neuropeptide-related gene expression in this system.

      (2) Based on the important finding that "CRHBP expression in the song motor pathway is correlated with singing," it is necessary to provide data showing that the observed changes in CRHBP and other neuropeptide-related gene expression during the song learning period or after deafening are not merely due to differences in singing amount over the two days preceding brain sampling.

      Without such data, the following statement cannot be justified: "CRHBP expression in the song motor pathway increases during song acquisition and decreases following deafening."

      (3) In Figure 5B, the authors should clearly distinguish between intact and deafened birds and show the singing amount for each group. In practice, deafening often leads to a reduction in both the number of song bouts and the total singing time. If, in this experiment, deafened birds also exhibited reduced singing compared to intact birds, then the decreased CRHBP expression observed in HVC and RA (Figures 3 and 4) may not reflect song deterioration, but rather a simple reduction in singing activity.

      As a similar viewpoint, the authors report that CRHBP expression levels in RA and HVC increase with age during the song learning period. However, this change may not be directly related to age or the decline in vocal plasticity. Instead, it could correlate with the singing amount during the one to two days preceding brain sampling. The authors should provide data on the singing activity of the birds used for in situ hybridization during the two days prior to sampling.

    2. Reviewer #3 (Public review):

      Summary:

      The stable production of learned vocalizations like human language and birdsong requires auditory feedback. What happens in the brain areas that generate stable vocalizations as performance deteriorates is not well understood. Using a species of songbird, the current study investigates individual cells within the evolutionarily-conserved brain regions that generate learned vocalizations to describe that the complement of neuropeptide (short proteins) signals may be a key feature of behavioral change. Because neuropeptides are important across species, these findings may help explain diminishing stability in learned behaviors even in humans.

      Strengths:

      The experiments are solid and follow a strong progression from description through manipulation. The songbird model is appropriate and powerful to inform on generalizable biological mechanisms of precisely learned behaviors, including human speech.

      Weaknesses:

      While it is always possible to perform more experiments, most of the weaknesses are in the presentation of the project, not in the evidence or analysis, which are leading-edge and appropriate. Generally, the ability to follow the findings and to independently assess rigor would be enhanced with increased explicit mention of the statistical thresholds and subjective descriptions. In addition, two prior pieces of relevant work seem to be omitted, including one performing deafening, gene expression measures, and behavioral assessment in zebra finches, and another describing neuropeptide complements in zebra finch singing nuclei based largely on mass spectrometry. The former in particular should be related to the current findings.

    1. Reviewer #2 (Public review):

      Summary:

      In this manuscript, He et al. set out to investigate the mechanisms behind Kupffer Cell death in MASLD. As has been previously shown, they demonstrate a loss of resident KCs in MASLD in different mouse models. They then go on to show that this correlates with alterations in genes/metabolites associated with glucose metabolism in KCs. To investigate the role of glucose metabolism further, they subject isolated KCs in vitro to different metabolic treatments and assess cleaved caspase 3 staining, demonstrating that KCs show increased Cl. Casp 3 staining upon stimulation of glycolysis. Finally, they use a genetic mouse model (Chil1KO) where they have previously reported that loss of this gene leads to increased glycolysis and validate this finding in BMDMs (KO). They then remove this gene specifically from KCs (Clec4fCre) and show that this leads to increased macrophage death compared with controls.

      Strengths:

      As we do not yet understand why KCs die in MASLD, this manuscript provides some explanation for this finding. The metabolomics is novel and provides insight into KC biology. It could also lead to further investigation; here, it will be important that the full dataset is made available.

      Weaknesses:

      Different diets are known to induce different amounts of KC loss, yet here, all models examined appear to result in 60% KC death. One small field of view of liver tissue is shown as representative to make these claims, but this is not sufficient, as anything can be claimed based on one field of view. Rather, a full tissue slice should be included to allow readers to really assess the level of death. Additionally, there is no consistency between the markers used to define KCs and moMFs, with CLEC4F being used in microscopy, TIM4 in flow, while the authors themselves acknowledge that moKCs are CLEC4F+TIM4-. As moKCs are induced in MASLD, this limits interpretation. Additionally, Iba1 is referred to as a moMF marker but is also expressed by KCs, which again prevents an accurate interpretation of the data. Indeed, the authors show that 60% of KCs are dying but only 30% of IBA1+ moMFs; since KCs are also IBA1+, this would mean that KCs die much more than moMFs, which would then limit the relevance of the BMDM studies performed if the phenotype is KC specific. Therefore, this needs to be clarified. The claim that periportal KCs die preferentially is not supported, given that the majority of KCs are peri-portal. Rather, these results would need to be normalised to KC numbers in PP vs PC regions to make meaningful conclusions. Additionally, KCs are known to be notoriously difficult to keep alive in vitro, and for these studies, the authors only examine cl. Casp 3 staining. To fully understand these data, a full analysis of the viability of the cells and whether they retain the KC phenotype in all conditions is required. Finally, in the Cre-driven KO model, there does not seem to be any death of KCs in the controls (rather, numbers trend towards an increase with time on diet, Figure 6E), contrary to what had been claimed in the rest of the paper, again making it difficult to interpret the overall results. Additionally, there is no validation that the increased death observed in vivo in KCs is due to further promotion of glycolysis.

    2. Reviewer #3 (Public review):

      This manuscript provides novel insights into altered glucose metabolism and KC status during early MASLD. The authors propose that hyperactivated glycolysis drives a spatially patterned KC depletion that is more pronounced than the loss of hepatocytes or hepatic stellate cells. This concept significantly enhances our understanding of early MASLD progression and KC metabolic phenotype.

      Through a combination of TUNEL staining and MS-based metabolomic analyses of KCs from HFHC-fed mice, the authors show increased KC apoptosis alongside dysregulation of glycolysis and the pentose phosphate pathway. Using in vitro culture systems and KC-specific ablation of Chil1, a regulator of glycolytic flux, they further show that elevated glycolysis can promote KC apoptosis.

      However, it remains unclear whether the observed metabolic dysregulation directly causes KC death or whether secondary factors, such as low-grade inflammation or macrophage activation, also contribute significantly. Nonetheless, the results, particularly those derived from the Chil1-ablated model, point to a new potential target for the early prevention of KC death during MASLD progression.

      The manuscript is clearly written and thoughtfully addresses key limitations in the field, especially the focus on glycolytic intermediates rather than fatty acid oxidation. The authors acknowledge the missing mechanistic link between increased glycolysis and KC death. Still, several interpretations require moderation to avoid overstatement, and certain experimental details, particularly those concerning flow cytometry and population gating, need further clarification.

      Strengths:

      (1) The study presents the novel observation of profound metabolic dysregulation in KCs during early MASLD and identifies these cells as undergoing apoptosis. The finding that Chil1 ablation aggravates this phenotype opens new avenues for exploring therapeutic strategies to mitigate or reverse MASLD progression.

      (2) The authors provide a comprehensive metabolic profile of KCs following HFHC diet exposure, including quantification of individual metabolites. They further delineate alterations in glycolysis and the pentose phosphate pathway in Chil1-deficient cells, substantiating enhanced glycolytic flux through 13C-glucose tracing experiments.

      (3) The data underscore the critical importance of maintaining balanced glucose metabolism in both in vitro and in vivo contexts to prevent KC apoptosis, emphasizing the high metabolic specialization of these cells.

      (4) The observed increase in KC death in Chil1-deficient KCs demonstrates their dependence on tightly regulated glycolysis, particularly under pathological conditions such as early MASLD.

      Weaknesses:

      (1) The novelty is questionable. The presented work has considerable overlap with a study by the same lab, which is currently under review (citation 17), and it should be considered whether the data should not be presented in one paper.

      (2) The authors report that 60% of KCs are TUNEL-positive after 16 weeks of HFHC diet and confirm this by cleaved caspase-3 staining. Given that such marker positivity typically indicates imminent cell death within hours, it is unexpected that more extensive KC depletion or monocyte infiltration is not observed. Since Timd4 expression on monocyte-derived macrophages takes roughly one month to establish, the authors should consider whether these TUNEL-positive KCs persist in a pre-apoptotic state longer than anticipated. Alternatively, fate-mapping experiments could clarify the dynamics of KC death and replacement.

      (3) The mechanistic link between elevated glycolytic flux and KC death remains unclear.

      (4) The study does not address the polarization or ontogeny of KCs during early MASLD. Given that pro-inflammatory macrophages preferentially utilize glycolysis, such data could provide valuable insight into the reason for increased KC death beyond the presented hyperreliance on glycolysis.

      (5) The gating strategy for monocyte-derived macrophages (moMFs) appears suboptimal and may include monocytes. A more rigorous characterization of myeloid populations by including additional markers would strengthen the study's conclusions.

      (6) While BMDMs from Chil1 knockout mice are used to demonstrate enhanced glycolytic flux, it remains unclear whether Chil1 deficiency affects macrophage differentiation itself.

      (7) The authors use the PDK activator PS48 and the ATP synthase inhibitor oligomycin to argue that increased glycolytic flux at the expense of OXPHOS promotes KC death. However, given the high energy demands of KCs and the fact that OXPHOS yields 15-16 times more ATP per glucose molecule than glycolysis, the increased apoptosis observed in Figure 4C-F could primarily reflect energy deprivation rather than a glycolysis-specific mechanism.

      (8) In Figure 1C, KC numbers are significantly reduced after 4 and 16 weeks of HFHC diet in WT male mice, yet no comparable reduction is seen in Clec4fCre control mice, which should theoretically exhibit similar behavior under identical conditions.

    1. Reviewer #1 (Public review):

      Summary:

      This study addresses the emerging role of fungal pathogens in colorectal cancer and provides mechanistic insights into how Candida albicans may influence tumor-promoting pathways. While the work is potentially impactful and the experiments are carefully executed, the strength of evidence is limited by reliance on in vitro models, small patient sample size, and the absence of in vivo validation, which reduces the translational significance of the findings.

      Strengths:

      (1) Comprehensive mechanistic dissection of intracellular signaling pathways.

      (2) Broad use of pharmacological inhibitors and cell line models.

      (3) Inclusion of patient-derived organoids, which increases relevance to human disease.

      (4) Focus on an emerging and underexplored aspect of the tumor microenvironment, namely fungal pathogens.

      Weaknesses:

      (1) Clinical association data are inconsistent and based on very small sample numbers.

      (2) No in vivo validation, which limits the translational significance.

      (3) Species- and cell type-specificity claims are not well supported by the presented controls.

      (4) Reliance on colorectal cancer cell lines alone makes it difficult to judge whether findings are specific or general epithelial responses.

    2. Reviewer #2 (Public review):

      The authors in this manuscript studied the role of Candida albicans in Colorectal cancer progression. The authors have undertaken a thorough investigation and used several methods to investigate the role of Candida albicans in Colorectal cancer progression. The topic is highly relevant, given the increasing burden of colon cancer globally and the urgent need for innovative treatment options.

      However, there are some inconsistencies in the figures and some missing details in the figures, including:

      (1) The authors should clearly explain in the results section which patient samples are shown in Figure 1B.

      (2) What do a, ab, b, b written above the bars in Figure 1F represent? Maybe authors should consider removing them, because they create confusion. Also, there is no explanation for those letters in the figure legend.

      (3) The authors should submit all the raw images of Western blot with appropriate labels to indicate the bands of protein of interest along with molecular weight markers.

      (4) The authors should do the quantification of data in Figure 2d and include it in the figure.

      (5) In Figure 2h, the authors should indicate if the quantification represents VEGF expression after 6h or 12h of C. albicans co-culture with cells.

      (6) In Figure 2i, quantification of VEGF should be done and data from three independent experiments should be submitted. The authors should also mention the time point.

    1. Superstars are even more valuable than they seem, but you have to evaluate people on their net impact on the performance of the organization.

      It seems to be saying: performance evaluation should not be based on organizational rank; you should measure the value the person creates (by solving real problems).

      A few other flawed metrics that, as it happens, get a lot of attention in the typical employee model: 1. hours worked 2. how hard the person toils (admirable, but still) 3. connections, favoritism, and so on...

    1. Briefing: News, Innovations, and Parenting Strategies for ADHD with the PEPS Program

      Summary

      This briefing document summarizes the key points of a webinar on Attention Deficit Disorder with or without Hyperactivity (ADHD) and presents the "PEPS" parenting skills training program (PEHP).

      Developed by the team at the CHU de Montpellier, the PEPS program is a modernized, adapted evolution of the Barkley program, enriched by 15 years of clinical practice.

      The 2024 recommendations of the Haute Autorité de Santé (HAS) position psychoeducation and parenting skills training programs as the first-line interventions for ADHD in children, ahead even of individual psychological follow-up.

      ADHD, a neurodevelopmental disorder affecting 5% of children and often persisting into adulthood, has a major impact on quality of life, health, and family functioning.

      The PEPS program stands out for several major innovations:

      1. Addition of essential modules: It includes sessions dedicated to managing screen time, regulating emotions and anger outbursts, time management, and parental well-being ("taking care of yourself").

      2. Adaptation for adolescents: A dedicated section addresses the challenges of adolescence (autonomy, risky situations), drawing on nonviolent resistance strategies.

      3. Flexibility and accessibility: The program drops the rigid, "classroom-style" approach of some models in favor of greater flexibility, and avoids making parents feel guilty.

      It is designed to be delivered in a variety of formats, notably by videoconference, a model considered more practical, more inclusive (encouraging fathers to participate), and essential for large-scale rollout.

      The program's main objective is not to eliminate ADHD symptoms but to improve relationships within the family, reduce parental stress, and increase parents' sense of competence.

      By breaking the cycle of coercive interactions, it aims to strengthen the child's self-esteem and prevent long-term complications such as conduct disorders.

      --------------------------------------------------------------------------------

      1. ADHD Context and Official Recommendations

      1.1. Definition and Impact of ADHD

      Nature: ADHD is a neurodevelopmental disorder, in the same category as autism spectrum disorders (ASD) or the "dys" disorders.

      Prevalence: It affects around 5% of children and adolescents, a figure considered stable and internationally recognized.

      Persistence: Symptoms frequently persist into adulthood, which is a major issue for supporting families.

      Impact: ADHD has a significant impact on quality of life and health (psychiatric comorbidities, mortality) and generates considerable economic costs.

      1.2. The 2024 Recommendations of the Haute Autorité de Santé (HAS)

      In 2024, the HAS published best-practice recommendations for the management of ADHD, setting out a clear algorithm for interventions in children and adolescents.

      The care algorithm:

      1. Essential First Step: Psychoeducation

      ◦ This is the starting point of any care plan. It is essential to explain to the parents and to the child or adolescent the nature of ADHD, its causes, and the possible strategies.

      This step cannot be skipped.

      2. First-Line Interventions

      Environmental accommodations: Mainly school accommodations.

      Parenting Skills Training Programs (PEHP): These are the first thing to put in place to work on family dynamics and the environment.

      3. Pharmacological Treatment

      ◦ It can be considered from the outset in severe forms of ADHD.

      ◦ In other cases, it is discussed after the first-line interventions have been put in place.

      It is not an "exceptional" or last-resort intervention.

      Important point: The current recommendations do not place individual psychological follow-up of the child in the first line, because its efficacy does not have a sufficient level of evidence.

      The emphasis is on the environment (family, school).

      2. Les Programmes d'Entraînement aux Habiletés Parentales (PEHP)

      2.1. Définition et Caractéristiques

      Les PEHP ne sont pas de simples "groupes de parole". Ce sont des programmes structurés et validés scientifiquement.

      Objectif : Transmettre des techniques et stratégies éducatives concrètes aux parents.

      Structure : Ils comportent un nombre de séances défini à l'avance, chacune avec des objectifs précis (ex: mettre en place un système de points, gérer le time out).

      Cadre : Ils s'appuient sur un manuel de référence et ont fait l'objet d'une validation scientifique.

      2.2. Exemples de Programmes

      Plusieurs programmes existent en France, partageant une base commune inspirée des thérapies comportementales et cognitives :

      Programme de Barkley : Le plus répandu et le premier importé en France.

      Incredible Years

      Triple P (programme souvent en ligne)

      Mieux vivre avec un TDAH

      Programme PEPS (objet du webinaire)

      3. Le Programme PEPS : Une Évolution du Programme de Barkley

      Le programme PEPS a été développé par l'équipe du CHU de Montpellier (Nathalie Franc, Jessica Chan-Chee et Sylvie Borona) sur la base de plus de 15 ans d'expérience avec le programme de Barkley.

      Il vise à moderniser et adapter ce dernier aux réalités contemporaines et aux besoins spécifiques des familles.

      3.1. Les Limites du Programme de Barkley et les Innovations de PEPS

      Limitations of the Barkley programme (a programme from the 1980s) vs. innovations of the PEPS programme:

      • Barkley does not address the question of screens. → PEPS adds a session on screen-time management, a major concern for parents.

      • Barkley places less emphasis on emotional regulation. → PEPS puts the emphasis on emotion regulation and the management of angry outbursts, with dedicated sessions.

      • Barkley's approach is seen as too "school-like", rigid and sometimes guilt-inducing. → PEPS introduces more flexibility, accepting that parents will not always apply the "homework" to the letter; the aim is to avoid guilt and loss of motivation.

      • Barkley offers no specific tools for violent outbursts. → PEPS implements tools drawn from non-violent resistance to address this problem.

      • Barkley has no content specific to adolescents. → PEPS adds an entire section dedicated to adolescents, with adapted strategies.

      3.2. Delivery Formats of the PEPS Programme

      The programme is designed to be flexible in how it is delivered:

      One-to-one: often in private practice, for families who do not wish to, or cannot, take part in a group.

      In a group: the classic format (10-12 families), with one session every two weeks.

      As an intensive workshop: all the sessions condensed into two days.

      By videoconference (online): this format, developed since the health crisis, is presented as the future of PEHPs.

      Advantages of the videoconference format:

      Practicality: avoids the constraints of travel, parking and time.

      Accessibility: reaches geographically distant families.

      Inclusiveness: a notable increase in fathers' participation, and easier access for more socially reserved parents.

      Flexibility: allows parents to take part while managing other tasks.

      4. Detailed Structure and Content of the PEPS Programme

      The programme is built around two main phases: psychoeducation and the 13 parental guidance sessions.

      4.1. Psychoeducation: A Fundamental Step

      This phase is indispensable and aims to turn parents into "expert parents" on their child.

      Objectives:

      ◦ Explain the diagnosis, the disorder and its comorbidities.

      ◦ Confront preconceived ideas with the medical data.

      ◦ Relieve families' guilt and reassure them.

      ◦ Avoid misinterpretations ("he does it on purpose", "he's lazy").

      ◦ Point families towards effective solutions so that they do not "waste time and money".

      ◦ Allow parents to ask themselves whether they might have ADHD too.

      This step alone often leads parents to tolerate the symptoms better, even before any techniques are learned.

      4.2. The 13 Sessions of the Guidance Programme

      The sessions follow a logical progression, from reinforcing positive behaviours to managing crisis situations.

      The 13 sessions, with their themes and objectives:

      1. Understanding non-compliance and positive reinforcement: shift the balance of attention towards positive behaviours in order to increase their frequency.

      2. Setting up special one-on-one time: improve the parent-child relationship through quality time with no educational agenda.

      3. Making instructions more effective: learn to give clear, effective instructions.

      4. Improving time management (new): provide tools for a major, persistent difficulty in ADHD.

      5. Teaching the child not to interrupt: value the moments when the child plays alone, to teach them to keep themselves occupied.

      6. Introducing a points system (token economy): motivate the child to automate daily routines through a reward system.

      7. Managing problem behaviours with time-out: use a non-punitive attention-withdrawal technique for refusals to comply; most effective with younger children.

      8. Managing emotional outbursts (new): understand the mechanics of a meltdown (the "pressure-cooker" effect) and learn to handle the "plateau" phase, during which communication is pointless.

      9. Repairing rather than punishing: replace punishments (often toxic and ineffective) with acts of reparation that make up for the harm without damaging the relationship.

      10. Taking care of yourself as a parent (new): prevent parental burnout, an essential step for the other strategies to work.

      11. Teaching the child to behave well in public places: strategies for managing outings (better suited to younger children).

      12. Supporting homework and liaising with the school: manage a major point of friction and collaborate with the teaching team.

      13. Managing screens (new): communicate, understand how screens are used, and lead by example.

      4.3. The Adaptation for Adolescents

      This section recognises that the issues change after the age of 12.

      Understanding the adolescent with ADHD: explain the specific challenges of this period.

      Putting compromises in place: replace the points system (infantilising) with negotiation, to increase autonomy.

      Managing risk situations: tackle head-on subjects such as addiction or risk-taking, which are frequent in adolescents with ADHD.

      Theoretical basis: the strategies draw on the principles of non-violent resistance and of the "new authority".

      5. Efficacy, Objectives and Conclusion

      5.1. The Demonstrated Efficacy of PEPS

      The efficacy of programmes like PEPS is widely documented.

      What does not change: the level of the child's core ADHD symptoms (inattention, hyperactivity).

      What improves:

      ◦ The family's tolerance of the symptoms.

      ◦ Relationships within the family.

      ◦ Reduced parental stress.

      ◦ An increased sense of parental competence.

      ◦ Indirectly, the child's self-esteem, as they are punished less and valued more.

      5.2. Breaking the Spiral of Coercion

      A central point is that coercive parenting (punishment, shouting, violent discipline) is the main risk factor for the development of conduct disorders in children, particularly those with ADHD.

      The aim of PEHPs is therefore to break this "vicious spiral" by offering positive, caring strategies that change the child's developmental trajectory.

      5.3. Positive Role Models and Resources

      Destigmatisation: public figures (Louane, Amir, Squeezie, Pomme) speaking about their ADHD is a powerful way of offering young people and their parents positive models to identify with, showing that ADHD does not stop you from succeeding.

      Recommended resources:

      ◦ The book detailing the PEPS programme.

      ◦ The website of the association TDAH France (HyperSupers), for its reliable resources and scientific news.

      ◦ The HAS document listing parental guidance programmes for neurodevelopmental disorders.

    1. 450 g full-fat cream cheese, 225 g double/heavy cream, 180 g dark chocolate (around 60% cocoa), 90 g caster sugar, 3 eggs, 15 g cocoa powder, 1 tsp vanilla extract, pinch of salt

      Chocolate Basque top: cream cheese = 2 packs; 225 g heavy cream = 1 cup; 90 g caster sugar = 1/2 cup; 15 g cocoa powder = a little less than 1/4 cup
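
      For anyone scaling or converting the quantities above, here is a small TypeScript sketch of the gram-to-cup arithmetic. The grams-per-cup densities are assumed, rounded kitchen values, not figures taken from the recipe itself.

      // Approximate grams-per-cup densities (assumed, rounded kitchen values).
      const gramsPerCup = {
        heavyCream: 240,
        casterSugar: 200,
        cocoaPowder: 100,
      } as const;

      type Ingredient = keyof typeof gramsPerCup;

      // Convert a metric weight to an approximate US-cup volume.
      function gramsToCups(ingredient: Ingredient, grams: number): number {
        return grams / gramsPerCup[ingredient];
      }

      console.log(gramsToCups('heavyCream', 225).toFixed(2)); // ~0.94, i.e. roughly 1 cup
      console.log(gramsToCups('casterSugar', 90).toFixed(2)); // 0.45, i.e. about 1/2 cup
      console.log(gramsToCups('cocoaPowder', 15).toFixed(2)); // 0.15, i.e. under 1/4 cup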

    1. Which dietary supplements are worth taking in autumn and winter? Dr Tadeusz Oleszczuk [Sekrety Długowieczności]
      • Vitamin D3 (Witamina D3):
        • Crucial Supplement: Highly recommended for the autumn/winter season (September to April in Poland) because skin synthesis of D3 is inactive and most people have low levels (safe level is 50-80 ng/mL) [00:00:12], [00:00:33], [00:01:32].
        • Benefits: Supports immunity, reduces infection risk, and is vital for hormone production [00:01:17], [00:01:39].
        • Actionable Advice: Always check your current level before supplementation, and retest after 3-6 months to ensure the optimal level (50-60 ng/mL) is reached [00:01:24], [00:01:59].
      • Omega-3 Fatty Acids (Kwasy omega-3):
        • Component: Provides EPA and DHA, which are essential for brain structure (60% fat), nervous system function, and myelin sheaths [00:03:30], [00:03:38].
        • Functions: Exhibits anti-inflammatory effects and supports the heart, brain, and overall immunity [00:03:38].
        • Storage Tip: Liquid form should be consumed within one month of opening and kept in the refrigerator to prevent oxidation; capsules are more stable [00:03:48], [00:04:01].
      • Magnesium (Magnez):
        • Role: Helps manage stress, improves memory, and supports muscle function [00:07:05].
        • Essential Cofactor: Magnesium is required as a "motor" for the majority of enzymes in the body; deficiency impairs the function of the entire organism [00:07:35], [00:07:42].
        • Consumption: Choose easily absorbable and safe forms like magnesium glycinate [00:07:23]. Be mindful that diuretics like coffee and tea can deplete magnesium levels [00:07:46].
      • Other Key Supplements:
        • Vitamin C and Zinc: Support the immune system and shorten the duration of infections [00:05:03]. It's important to test your zinc level first to avoid harmful excess [00:04:18], [00:04:21].
        • Probiotics and Prebiotics: A healthy gut microbiota is the foundation of immunity [00:06:14]. Probiotics need prebiotics (e.g., resistant starch like cold potatoes) to thrive and create beneficial conditions [00:06:24], [00:05:39].
        • B Vitamins: A B-complex should be considered if the diet is poor, especially since B12 deficiency can be linked to nervous system issues and stomach problems [00:08:14], [00:08:29].
      • General Supplementation Rules:
        • Supplements should be individually chosen based on a person's lifestyle and real, confirmed deficiencies [00:09:16], [00:09:21].
        • When buying, always check the dosage on the label to ensure the amounts are effective and not just minimal [00:08:44], [00:08:56].
        • The foundation of strong immunity remains sleep, diet, and exercise [00:09:26].
    1. Here is a time-stamped summary of the transcript, highlighting the key ideas:

      • 0:00-0:06: Introduction of coercive control as a new criminal offence in France, following the adoption of the bill by the Assemblée Nationale.
      • 0:07-0:30: Introduction of Andréa Gruev-Vintila, a specialist on the subject and author of a reference book on coercive control.
      • 0:31-1:22: Origin of the concept: the notion of coercive control emerged from American psychology in the 1950s, following observations of American prisoners of war in Korea.

      Researchers were trying to understand why they had collaborated with the enemy; this led to the studies on brainwashing, and then to the work of Albert Biderman, who examined the methods torturers use to obtain submission.

      • 1:23-1:51: Coercive control is a form of submission without physical violence, as demonstrated in Milgram's experiments on obedience to authority.

      • 1:52-2:07: The application of the concept to violence within the family, and the need to understand the behaviours that structure coercive control.

      • 2:08-2:32: Domestic violence mainly affects women and children.

      In France, 82% of the victims of domestic violence are mothers. The failure to prevent harm to these victims and to protect them underlines the importance of a comprehensive approach to domestic violence.

      • 2:33-3:24: Key behaviours of coercive control: isolation, intimidation, harassment, threats and, above all, attacks on the victim's relationship with the child.

      The abuser imposes strict rules within the family space, controlling trivial aspects of daily life in order to obtain submission.

      • 3:25-3:49: Examples of micro-regulation: controlling how the victim dresses, how long they spend in the shower, the children's interactions, and so on.

      • 3:50-4:02: Coercive control focuses on the abuser's behaviour and on how he prevents the victim from leaving, shifting the question from "why didn't she leave?" to "how did he stop her from leaving?".

      • 4:03-4:31: Identifying minor acts which, taken in isolation, usually escape the justice system makes it possible to grasp the climate within the couple or the family.

      Not every instance of coercive control leads to femicide, but every femicide is preceded by coercive control.

      • 4:32-4:50: Coercive control as "captivity": domestic violence is a situation of permanent terror and captivity, more than a series of assaults.
      • 4:51-5:28: Femicide as a failure of control: when the abuser fails to control his victim, the violence escalates and can lead to femicide, forced suicides and the killing of children. Coercive control is a major precursor of this violence.

      • 5:29-5:50: Children are also victims of the captivity, and the control does not stop with separation; it is often exercised at the children's expense.

      • 5:51-6:20: International research shows that men's coercive control of women is the leading cause of violence against children.

      • 6:21-6:46: Control can be exercised in particular through the legal proceedings linked to separation, with the abuser using his parental rights to the detriment of the children's safety.

      The child becomes a target, an informant or a spy.

      • 6:47-7:04: Tragic cases such as that of little Chloé, killed by her father, underline the importance of protecting children even after a separation and a protection order.

      • 7:05-7:25: Scotland incorporated coercive control into its law as early as 2018, followed by the European Court of Human Rights and by the first rulings in France, notably those of the Poitiers court of appeal.

      • 7:26-7:34: Writing coercive control into the law aims at earlier detection and tougher penalties.
      • 7:35-8:02: The French law seeks to give judges a legal tool to act on the reality of domestic violence, not only in cases of physical violence, and to better protect victims.
      • 8:03-8:38: The French law is pioneering because it is designed with a cross-cutting approach covering both criminal and civil law. An amendment on mandatory training for judges was rejected but will be reintroduced in the Senate.
      • 8:39-8:47: A call for the law to be evaluated once adopted, and for the resources needed to apply it.
    1. Primary sources are original documents, data, or images: the law code of the Le Dynasty in Vietnam, the letters of Kurt Vonnegut, data gathered from an experiment on color perception, an interview, or Farm Service Administration photographs from the 1930s.[3] Secondary sources are produced by analyzing primary sources. They include news articles, scholarly articles, reviews of films or art exhibitions, documentary films, and other pieces that have some descriptive or analytical purpose. Some things may be primary sources in one context but secondary sources in another.

      This section clarifies something many students, including me, often misunderstand: the difference between primary and secondary sources depends on how the source is used. I found the example about news articles especially helpful. A news article can function as a secondary source when it reports or interprets events, but it becomes a primary source if we use it as raw data for patterns or frequency. This made me realize that choosing sources is not just about finding information, but about understanding the purpose each source serves in our research.

    2. A step below the well-developed reports and feature articles that make up Tier 2 are the short tidbits that one finds in newspapers and magazines or credible websites. How short is a short news article? Usually, they’re just a couple paragraphs or less, and they’re often reporting on just one thing: an event, an interesting research finding, or a policy change.

      This section explains which sources are the most trustworthy in research (Tier 1) and which are least suited for citation (Tier 4). Freshmen need this to avoid using weak sources in their papers. Tier 1 = best (used by experts and carefully checked). Tier 2 = still good, from places like government agencies or major newspapers. Tier 3 = short news snippets; not bad, but not great. Tier 4 = opinion pieces or websites where anyone can write anything, like Wikipedia. You can read Tier 4, but you shouldn't cite it in a serious school paper.

    1. Component testing is three-dimensional, covering interactions, visuals, and accessibility. Interaction testing checks the program's functionality and that it behaves the way you intended. Visual testing checks the look of the program and that the UI matches the design you want. Finally, accessibility testing ensures the program follows regulations and that all users can actually use it. Testing matters because it ensures you're shipping a product that consumers can actually use.

      Storybook lets you build UI components in a practical, isolated way and work on them at different scales, from the smallest element up to larger compositions.
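
      As a concrete illustration of the interaction dimension, here is a minimal Storybook story sketch in TypeScript. It assumes a React project on Storybook 8 with the @storybook/test package; the Button component, its label prop and the file layout are hypothetical, not taken from the course material.

      import type { Meta, StoryObj } from '@storybook/react';
      import { expect, userEvent, within } from '@storybook/test';
      import { Button } from './Button'; // hypothetical component under test

      const meta: Meta<typeof Button> = {
        title: 'Example/Button',
        component: Button,
      };
      export default meta;

      type Story = StoryObj<typeof Button>;

      // Interaction dimension: simulate a click and assert on the visible result.
      export const ClickedOnce: Story = {
        args: { label: 'Save' },
        play: async ({ canvasElement }) => {
          const canvas = within(canvasElement);
          await userEvent.click(canvas.getByRole('button', { name: /save/i }));
          await expect(canvas.getByRole('button', { name: /save/i })).toBeEnabled();
        },
      };

      The same story can then be reused for visual snapshots and accessibility checks, which is what makes component testing "three-dimensional".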

    2. Yann Braga | Storybook Vitest | ViteConf 2025 (ViteConf, YouTube video, 23:51)

      This development tool is very useful for applying all of the user-interface principles we have discussed as important. For example, it was interesting that it provides accessibility tests that expose issues to creators. This is very important because many creators might not spot these issues on their own, so it can go a long way towards making your app or website successful.
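
      As a sketch of that accessibility dimension, a story can opt into specific checks via the @storybook/addon-a11y addon, which runs axe-core under the hood. The Card component and the exact rule shown here are assumptions based on the addon's documented usage, not something demonstrated in the talk.

      import type { Meta, StoryObj } from '@storybook/react';
      import { Card } from './Card'; // hypothetical component

      const meta: Meta<typeof Card> = {
        title: 'Example/Card',
        component: Card,
      };
      export default meta;

      type Story = StoryObj<typeof Card>;

      // Accessibility dimension: ask the a11y addon to flag contrast problems
      // that an author might not notice by eye.
      export const ContrastChecked: Story = {
        parameters: {
          a11y: {
            config: {
              rules: [{ id: 'color-contrast', enabled: true }],
            },
          },
        },
      };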

    1. How to drive through one. To go through a roundabout, the driver must:
      1. Slow down. On approach, reduce speed and check the signs. Be ready to come to a complete stop: ▷ if a pedestrian is crossing or about to cross; ▷ if a vehicle is already inside the roundabout, to the left.
      2. Yield. Before entering, give way to vehicles already in the roundabout, as they have priority.
      3. Enter to the right, once the way is clear.
      4. Travel in the direction of traffic, without overtaking or stopping, except in an emergency such as avoiding a collision.
      5. Exit the roundabout: ▷ signal the intention with the turn signal; ▷ leave the roundabout (watch for pedestrians).

      How do you drive through a roundabout?

    1. Reviewer #1 (Public review):

      This paper examines how geometric regularities in abstract shapes (e.g., parallelograms, kites) are perceived and processed in the human brain. The manuscript contains multimodal data (behavior, fMRI, MEG) from adults and additional fMRI data from 6-year-old children. The key findings show that (1) processing geometric shapes leads to reduced activity in ventral areas in comparison to complex stimuli and increased activity in intraparietal and inferior temporal regions, (2) the degree of geometric regularity modulates activity in intraparietal and inferior temporal regions, (3) similarity in the neural representation of geometric shapes can be captured early by CNN models and later by models of geometric regularity. In addition to these novel findings, the paper also includes a replication of behavioral data, showing that the perceptual similarity structure amongst the geometric stimuli used can be explained by a combination of visual similarities (as indexed by a feedforward CNN model of the ventral visual pathway) and geometric features. The paper comes with openly accessible code in a well-documented GitHub repository and the data will be published with the paper on OpenNeuro.

      In the revised version of this manuscript, the authors clarified certain aspects of the task design, added critical detail to the description of the methods, and updated the figures to show unsmoothed data and variability across participants. Importantly, the authors thoroughly discussed potential task effects (for the fMRI data only) and added additional analyses that indicate that the effects are unlikely to be driven by linguistic labels/name availability of the stimuli.

      Comments on the revision:

      Thank you for carefully addressing all my concerns and especially for clarifying the task design.

    2. Reviewer #3 (Public review):

      Summary:

      The authors report converging evidence from behavioral studies as well as several brain-imaging techniques that geometric figures, notably quadrilaterals, are processed differently in visual (lower activation) and spatial (greater) areas of the human brain than representative figures. Comparison of mathematical models to fit activity for geometric figures shows the best fit for abstract geometric features like parallelism and symmetry. The brain areas active for geometric figures are also active in processing mathematical concepts even in blind mathematicians, linking geometric shapes to abstract math concepts. The effects are stronger in adults than in 6-year-old Western children. Similar phenomena do not appear in great apes, suggesting that this is uniquely human and developmental.

      Strengths:

      Multiple converging techniques of brain imaging and testing of mathematical models showing special status of perception of abstract forms. Careful reasoning at every step of research and presentation of research, anticipating and addressing possible reservations. Connecting these findings to other findings, brain, behavior, and historical/anthropological to suggest broad and important fundamental connections between abstract visual-spatial forms and mathematical reasoning.

      Weaknesses:

      I have reservations about the authors' use of "symbolic." They seem to interpret "symbolic" as relying on "discrete, exact, rule-based features." Words are generally considered to be symbolic (that is their major function), yet words do not meet those criteria. Depictions of objects can be regarded as symbolic because they represent real objects; they are not the same as the object (as Magritte observed). If so, then perhaps depictions of quadrilaterals are also symbolic, but then they do not differ from depictions of objects on that quality. Relatedly, calling abstract or generalized representations of forms a distinct "language of thought" doesn't seem supportable by the current findings. Minimally, a language has elements that are combined more or less according to rules. The authors present evidence for geometric forms as elements but nowhere is there evidence for combining them into meaningful strings.

      Further thoughts

      Incidentally, there have been many attempts at constructing visual languages from visual elements combined by rules, that is, mapping meaning to depictions. Many written languages like Egyptian hieroglyphics or Mayan or Chinese, began that way; there are current attempts using emoji. Apparently, mapping sound to discrete letters, alphabets, is more efficient and was invented once but spread. That said, for restricted domains like maps, circuit diagrams, networks, chemical interactions, mathematics, and more, visual "languages" work quite well.

      The findings are striking and as such invite speculation about their meaning and limitations. The images of real objects seem to be interpreted as representations of 3D objects as they activate the same visual areas as real objects. By contrast, the images of 2D geometric forms are not interpreted as representations of real objects but rather seemingly as 2D abstractions. It would be instructive to investigate stimuli that are on a continuum from representational to geometric, e. g., real objects that have simple geometric forms like table tops or boxes under various projections or balls or buildings that are rectangular or triangular. Objects differ from geometric forms in many ways: 3D rather than 2D, more complicated shapes; internal features as well as outlines. The geometric figures used are flat, 2-D, but much geometry is 3-D (e. g. cubes) with similar abstract features. The feature space of geometry is more than parallelism and symmetry; angles are important for example. Listing and testing features would be fascinating.

      Can we say that mathematical thinking began with the regularities of shapes or with counting, or both? External representations of counting go far back into prehistory; tallies are frequent and wide-spread. Infants are sensitive to number across domains as are other primates (and perhaps other species). Finding overlapping brain areas for geometric forms and number is intriguing but doesn't show how they are related.

      Categories are established in part by contrast categories; are quadrilaterals and triangles and circles different categories? As for quadrilaterals, the authors say some are "completely irregular." Not really; they are still quadrilaterals, if atypical. See Eleanor Rosch's insightful work on (visual) categories. One wonders about distinguishing squashed quadrilaterals from squashed triangles.

      What in human experience but not the experience of close primates would drive the abstraction of these geometric properties? It's easy to make a case for elaborate brain processes for recognizing and distinguishing things in the world, shared by many species, but the case for brain areas sensitive to abstracting geometric figures is harder. The fact that these areas are active in blind mathematicians and that they are parietal areas suggest that what is important is spatial far more than visual. Could these geometric figures and their abstract properties be connected in some way to behavior, perhaps with fabrication, construction or use of objects? Or with other interactions with complex objects and environments where symmetry and parallelism (and angles and curvature--and weight and size) would be important? Manual dexterity and fabrication also distinguish humans from great apes (quantitatively not qualitatively) and action drives both visual and spatial representations of objects and spaces in the brain. I certainly wouldn't expect the authors to add research to this already packed paper, but raising some of the conceptual issues would contribute to the significance of the paper.

    3. Author response:

      The following is the authors’ response to the original reviews

      Reviewer #1 (Public review):

      Weakness:

      I wonder how task difficulty and linguistic labels interact with the current findings. Based on the behavioral data, shapes with more geometric regularities are easier to detect when surrounded by other shapes. Do shape labels that are readily available (e.g., "square") help in making accurate and speedy decisions? Can the sensitivity to geometric regularity in intraparietal and inferior temporal regions be attributed to differences in task difficulty? Similarly, are the MEG oddball detection effects that are modulated by geometric regularity also affected by task difficulty?

      We see two aspects to the reviewer’s remarks.

      (1) Names for shapes.

      On the one hand, is the question of the impact of whether certain shapes have names and others do not in our task. The work presented here is not designed to specifically test the effect of formal western education; however, in previous work (Sablé-Meyer et al., 2021), we noted that the geometric regularity effect remains present even for shapes that do not have specific names, and even in participants who do not have names for them. Thus, we replicated our main effects with both preschoolers and adults that did not attend formal western education and found that our geometric feature model remained predictive of their behavior; we refer the reader to this previous paper for an extensive discussion of the possible role of linguistic labels, and the impact of the statistics of the environment on task performance.  

      What is more, in our behavior experiments we can discard data from any shape that has a name in English and run our model comparison again. Doing so diminished the effect size of the geometric feature model, but it remained predictive of human behavior: indeed, if we removed all shapes but kite, rightKite, rustedHinge, hinge and random (i.e., more than half of our data, and shapes for which we came up with names but there are no established names), we nevertheless find that both models significantly correlate with human behavior; see the plot in Author response image 1, the equivalent of our Fig. 1E with the remaining shapes.

      Author response image 1.

      An identical analysis on the MEG leads to two noisy but significant clusters (CNN: 64.0ms to 172.0ms, then 192.0ms to 296.0ms, both p<.001; Geometric Features: 312.0ms to 364.0ms with p=.008). We have improved our manuscript thanks to the reviewer's observation by adding a figure with the new behavior analysis to the supplementary figures and to the results section of the behavior task. We now refer to these analyses where appropriate:

      (intro) “The effect appeared as a human universal, present in preschoolers, first-graders, and adults without access to formal western math education (the Himba from Namibia), and thus seemingly independent of education and of the existence of linguistic labels for regular shapes.”

      (behavior results) “Finally, to separate the effect of name availability and geometric features on behavior, we replicated our analysis after removing the square, rectangle, trapezoids, rhombus and parallelogram from our data (Fig. S5D). This left us with five shapes, and an RDM with 10 entries. When regressing it in a GLM with our two models, we find that both models are still significant predictors (p<.001). The effect size of the geometric feature model is greatly reduced, yet remained significantly higher than that of the neural network model (p<.001).”

      (meg results) “This analysis yielded similar clusters when performed on a subset of shapes that do not have an obvious name in English, as was the case for the behavior analysis (CNN Encoding: 64.0ms to 172.0ms; then 192.0ms to 296.0ms; both p<.001: Geometric Features: 312.0ms to 364.0ms with p=.008).”

      (discussion, end of behavior section) “Previously, we only found such a significant mixture of predictors in uneducated humans (whether French preschoolers or adults from the Himba community, mitigating the possible impact of explicit western education, linguistic labels, and statistics of the environment on geometric shape representation) (Sablé-Meyer et al., 2021).”

      Perhaps the referee’s point can also be reversed: we provide a normative theory of geometric shape complexity which has the potential to explain why certain shapes have names: instead of seeing shape names as the cause of their simpler mental representation, we suggest that the converse could occur, i.e. the simpler shapes are the ones that are given names.

      (2) Task difficulty

      On the other hand is the question of whether our effect is driven by task difficulty. First, we would like to point out that this point could apply to the fMRI task, which asks for an explicit detection of deviants, but does not apply to the MEG experiment. In MEG, participants passively looked at sequences of shapes which, for a given block, comprised many instances of a fixed standard shape and rare deviants; even if they noticed deviants, they had no task related to them. Yet two independent findings validated the geometric features model: there was a large effect of geometric regularity on the MEG response to deviants, and the MEG dissimilarity matrix between standard shapes correlated with a model based on geometric features, better than with a model based on CNNs. While the response to rare deviants might perhaps be attributed to “difficulty” (assuming that, in spite of the absence of an explicit task, participants try to spot the deviants and find this self-imposed task more difficult in runs with less regular shapes), it seems very hard to explain the representational similarity analysis (RSA) findings based on difficulty. Indeed, what motivated us to use RSA analysis in both fMRI and MEG was to stop relying on the response to deviants, and use solely the data from standard or “reference” shapes, and model their neural response with theory-derived regressors.
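
      A minimal sketch of that regression, in notation not used by the authors: each entry of the empirical dissimilarity matrix is modelled as a weighted sum of the two candidate model matrices,

      \mathrm{RDM}_{\mathrm{neural}}(i,j) \approx \beta_{\mathrm{CNN}}\,\mathrm{RDM}_{\mathrm{CNN}}(i,j) + \beta_{\mathrm{geo}}\,\mathrm{RDM}_{\mathrm{geo}}(i,j) + \beta_{0} + \varepsilon_{ij}

      with the fitted \beta weights (and their significance) indicating how much each model contributes, independently of any deviant-detection difficulty.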

      We have updated the manuscript in several places to make our view on these points clearer:

      (experiment 4) “This design allowed us to study the neural mechanisms of the geometric regularity effect without confounding effects of task, task difficulty, or eye movements.”

      (figure 4, legend) “(A) Task structure: participants passively watch a constant stream of geometric shapes, one per second (presentation time 800ms). The stimuli are presented in blocks of 30 identical shapes up to scaling and rotation, with 4 occasional deviant shapes. Participants do not have a task to perform besides fixating.”

      Reviewer #2 (Public review):

      Weakness:

      Given that the primary take-away from this study is that geometric shape information is found in the dorsal stream rather than the ventral stream, there is very little discussion of prior work in this area (for reviews, see Freud et al., 2016; Orban, 2011; Xu, 2018). Indeed, there is extensive evidence of shape processing in the dorsal pathway in human adults (Freud, Culham, et al., 2017; Konen & Kastner, 2008; Romei et al., 2011), children (Freud et al., 2019), patients (Freud, Ganel, et al., 2017), and monkeys (Janssen et al., 2008; Sereno & Maunsell, 1998; Van Dromme et al., 2016), as well as of the similarity between models and dorsal shape representations (Ayzenberg & Behrmann, 2022; Han & Sereno, 2022).

      We thank the reviewer for this opportunity to clarify our writing. We want to use this opportunity to highlight that our primary finding is not about whether the shapes of objects or animals (in general) are processed in the ventral versus the dorsal pathway, but rather about the much more restricted domain of geometric shapes such as squares and triangles. We propose that simple geometric shapes afford additional levels of mental representation that rely on their geometric features, on top of the typical visual processing. To the best of our knowledge, this point has not been made in the above papers.

      Still, we agree that it is useful to better link our proposal to previous ones. We have updated the discussion section titled “Two Visual Pathways” to include more specific references to the literature that have reported visual object representations in the dorsal pathway. Following another reviewer’s observation, we have also updated our analysis to better demonstrate the overlap in activation evoked by math and by geometry in the IPS, as well as include a novel comparison with independently published results.

      Overall, to address this point, we (i) show the overlap between our “geometry” contrast (shape > word+tools+houses) and our “math” contrast (number > words); (ii) display these ROIs side by side with ROIs found in previous work (Amalric and Dehaene, 2016); and (iii) in each math-related ROI reported in that article, test our “geometry” (shape > word+tools+houses) contrast and find almost all of them to be significant in both populations; see Fig. S5.

      Finally, within the ROIs identified with our geometry localizer, we also performed similarity analyses: for each region we extracted the betas of every voxel for every visual category, and estimated the distance (cross-validated Mahalanobis) between different visual categories. In both ventral ROIs, in both populations, numbers were closer to shapes than to the other visual categories including text and Chinese characters (all p<.001). In adults, this result also holds for the right ITG (p=.021) and the left IPS (p=.014) but not the right IPS (p=.17). In children, this result did not hold in these areas.

      Naturally, overlap in brain activation does not suffice to conclude that the same computational processes are involved. We have added an explicit caveat about this point. Indeed, throughout the article,  we have been careful to frame our results in a way that is appropriate given our evidence, e.g. saying “Those areas are similar to those active during number perception, arithmetic, geometric sequences, and the processing of high-level math concepts” and “The IPS areas activated by geometric shapes overlap with those active during the comprehension of elementary as well as advanced mathematical concepts”. We have rephrased the possibly ambiguous “geometric shapes activated math- and number-related areas, particular the right aIPS.” into “geometric shapes activated areas independently found to be activated by math- and number-related tasks, in particular the right aIPS”.

      Reviewer #3 (Public review):

      Weakness:

      Perhaps the manuscript could emphasize that the areas recruited by geometric figures but not objects are spatial, with reduced processing in visual areas. It also seems important to say that the images of real objects are interpreted as representations of 3D objects, as they activate the same visual areas as real objects. By contrast, the images of geometric forms are not interpreted as representations of real objects but rather perhaps as 2D abstractions.

      This is an interesting possibility. Geometric shapes are likely to draw attention to spatial dimensions (e.g. length) and to do so in a 2D spatial frame of reference rather than the 3D representations evoked by most other objects or images. However, this possibility would require further work to be thoroughly evaluated, for instance by comparing usual 3D objects with rare instances of 2D ones (e.g. a sheet of paper, a sticker etc). In the absence of such a test, we refrained from further speculation on this point.

      The authors use the term "symbolic." That use of that term could usefully be expanded here.  

      The reviewer is right in pointing out that “symbolic” should have been more clearly defined. We now added in the introduction:

      (introduction) “[…] we sometimes refer to this model as “symbolic” because it relies on discrete, exact, rule-based features rather than continuous representations (Sablé-Meyer et al., 2022). In this representational format, geometric shapes are postulated to be represented by symbolic expressions in a “language-of-thought”, e.g. “a square is a four-sided figure with four equal sides and four right angles”, or equivalently by a computer-like program for drawing them in a Logo-like language (Sablé-Meyer et al., 2022).”

      Here, however, the present experiments do not directly probe this format of representation. We have therefore simplified our wording and removed many of our uses of the word “symbolic” in favor of the more specific “geometric features”.

      Pigeons have remarkable visual systems. According to my fallible memory, Herrnstein investigated visual categories in pigeons. They can recognize individual people from fragments of photos, among other feats. I believe pigeons failed at geometric figures and also at cartoon drawings of things they could recognize in photos. This suggests they did not interpret line drawings of objects as representations of objects.

      The comparison of geometric abilities across species is an interesting line of research. In the discussion, we briefly mention several lines of research that indicate that non-human primates do not perceive geometric shapes in the same way as we do – but for space reasons, we are reluctant to expand this section to a broader review of other more distant species. The referee is right that there is evidence of pigeons being able to perceive an invariant abstract 3D geometric shape in spite of much variation in viewpoint (Peissig et al., 2019) – but there does not seem to be evidence that they attend to geometric regularities specifically (e.g. squares versus non-squares). Also, the referee’s point bears on the somewhat different issue of whether humans and other animals may recognize the object depicted by a symbolic drawing (e.g. a sketch of a tree). Again, humans seem to be vastly superior in this domain, and research on this topic is currently ongoing in the lab. However, the point that we are making in the present work is specifically about the neural correlates of the representation of simple geometric shapes which by design were not intended to be interpretable as representations of objects.

      Categories are established in part by contrast categories; are quadrilaterals, triangles, and circles different categories?

      We are not sure how to interpret the referee’s question, since it bears on the definition of “category” (Spontaneous? After training? With what criterion?). While we are not aware of data that can unambiguously answer the reviewer’s question, categorical perception in geometric shapes can be inferred from early work investigating pop-out effects in visual search, e.g. (Treisman and Gormican, 1988): curvature appears to generate strong pop-out effects, and therefore we would expect e.g. circles to indeed be a different category than, say, triangles. Similarly, right angles, as well as parallel lines, have been found to be perceived categorically (Dillon et al., 2019).

      This suggests that indeed squares would be perceived as categorically different from triangles and circles. On the other hand, in our own previous work (Sablé-Meyer et al., 2021) we have found that the deviants that we generated from our quadrilaterals did not pop out from displays of reference quadrilaterals. Pop-out is probably not the proper criterion for defining what a “category” is, but this is the extent to which we can provide an answer to the reviewer’s question.

      It would be instructive to investigate stimuli that are on a continuum from representational to geometric, e.g., table tops or cartons under various projections, or balls or buildings that are rectangular or triangular. Building parts, inside and out. like corners. Objects differ from geometric forms in many ways: 3D rather than 2D, more complicated shapes, and internal texture. The geometric figures used are flat, 2-D, but much geometry is 3-D (e. g. cubes) with similar abstract features.

      We agree that there is a whole line of potential research here. We decided to start by focusing on the simplest set of geometric shapes that would give us enough variation in geometric regularity while being easy to match on other visual features. We agree with the reviewer that our results should hold not only for more complex 2-D shapes but also for 3-D shapes. Indeed, generative theories of shapes in higher dimensions, following principles similar to ours, have been devised (I. Biederman, 1987; Leyton, 2003). We now mention this in the discussion:

      “Finally, this research should ultimately be extended to the representation of 3-dimensional geometric shapes, for which similar symbolic generative models have indeed been proposed (Irving Biederman, 1987; Leyton, 2003).”

      The feature space of geometry is more than parallelism and symmetry; angles are important, for example. Listing and testing features would be fascinating. Similarly, looking at younger or preferably non-Western children, as Western children are exposed to shapes in play at early ages.

      We agree with the reviewer on all points. While we do not list and test the different properties separately in this work, we would like to highlight that angles are part of our geometric feature model, which includes the features “right-angle” and “equal-angles”, as suggested by the reviewer.

      We also agree about the importance of testing populations with limited exposure to formal training with geometric shapes. This was in fact a core aspect of a previous article of ours which tests both preschoolers, and adults with no access to formal western education – though no non-Western children (Sablé-Meyer et al., 2021). It remains a challenge to perform brain-imaging studies in non-Western populations (although see Dehaene et al., 2010; Pegado et al., 2014).

      What in human experience but not the experience of close primates would drive the abstraction of these geometric properties? It's easy to make a case for elaborate brain processes for recognizing and distinguishing things in the world, shared by many species, but the case for brain areas sensitive to processing geometric figures is harder. The fact that these areas are active in blind mathematicians and that they are parietal areas suggests that what is important is spatial far more than visual. Could these geometric figures and their abstract properties be connected in some way to behavior, perhaps with fabrication and construction as well as use? Or with other interactions with complex objects and environments where symmetry and parallelism (and angles and curvature--and weight and size) would be important? Manual dexterity and fabrication also distinguish humans from great apes (quantitatively, not qualitatively), and action drives both visual and spatial representations of objects and spaces in the brain. I certainly wouldn't expect the authors to add research to this already packed paper, but raising some of the conceptual issues would contribute to the significance of the paper.

      We refrained from speculating about this point in the previous version of the article, but share some of the reviewers’ intuitions about the underlying drive for geometric abstraction. As described in (Dehaene, 2026; Sablé-Meyer et al., 2022), our hypothesis, which isn’t tested in the present article, is that the emergence of a pervasive ability to represent aspects of the world as compact expressions in a mental “language-of-thought” is what underlies many domains of specific human competence, including some listed by the reviewer (tool construction, scene understanding) and our domain of study here, geometric shapes.

      Recommendations for the Authors:

      Reviewer #1 (Recommendations for the authors):

      Overall, I enjoyed reading this paper. It is clearly written and nicely showcases the amount of work that has gone into conducting all these experiments and analyzing the data in sophisticated ways. I also thought the figures were great, and I liked the level of organization in the GitHub repository and am looking forward to seeing the shared data on OpenNeuro. I have some specific questions I hope the authors can address.

      (1) Behavior

      - Looking at Figure 1, it seemed like most shapes are clustering together, whereas square, rectangle, and maybe rhombus and parallelogram are slightly more unique. I was wondering whether the authors could comment on the potential influence of linguistic labels. Is it possible that it is easier to discard the intruder when the shapes are readily nameable versus not?

      This is an interesting observation, but the existence of names for shapes does not suffice to explain all of our findings; see our reply to the public comment.

      (2) fMRI

      - As mentioned in the public review, I was surprised that the authors went with an intruder task because I would imagine that performance depends on the specific combination of geometric shapes used within a trial. I assume it is much harder to find, for example, a "Right Hinge" embedded within "Hinge" stimuli than a "Right Hinge" amongst "Squares". In addition, the rotation and scaling of each individual item should affect regular shapes less than irregular shapes, creating visual dissimilarities that would presumably make the task harder. Can the authors comment on how we can be sure that the differences we pick up in the parietal areas are not related to task difficulty but are truly related to geometric shape regularities?

      Again, please see our public review response for a larger discussion of the impact of task difficulty. There are two aspects to answering this question.

      First, the task is not as the reviewer describes: the intruder task is to find a deviant shape within several slightly rotated and scaled versions of the regular shape it came from. During brain imaging, we did not ask participants to find an exemplar of one of our reference shapes amidst copies of another, but rather a deviant version of one shape against copies of its reference version. We only used this intruder task with all pairs of shapes to generate the behavioral RSA matrix.

      Second, we agree that some of the fMRI effect may stem from task difficulty, and this motivated our use of RSA analysis in fMRI, and a passive MEG task. RSA results cannot be explained by task difficulty.

      Overall, we have tried to make the limitations of the fMRI design, and the motivation for turning to passive presentation in MEG, clearer by stating the issues more clearly when we introduce experiment 4:

      “The temporal resolution of fMRI does not allow us to track the dynamics of mental representations over time. Furthermore, the previous fMRI experiment suffered from several limitations. First, we studied six quadrilaterals only, compared to 11 in our previous behavioral work. Second, we used an explicit intruder detection task, which implies that the geometric regularity effect was correlated with task difficulty, and we cannot exclude that this factor alone explains some of the activations in figure 3C (although it is much less clear how task difficulty alone would explain the RSA results in figure 3D). Third, the long display duration, which was necessary for good task performance especially in children, afforded the possibility of eye movements, which were not monitored inside the 3T scanner and again could have affected the activations in figure 3C.”

      - How far in the periphery were the stimuli presented? Was eye-tracking data collected for the intruder task? Similar to the point above, I would imagine that a harder trial would result in more eye movements to find the intruder, which could drive some of the differences observed here.

      A 1-degree bar was added to Figure 3A, which faithfully illustrates how the stimuli were presented in fMRI. Eye-tracking data was not collected during fMRI. Although the participants were explicitly instructed to fixate at the center of the screen and avoid eye movements, we fully agree with the referee that we cannot exclude that eye movements were present, perhaps more so for more difficult displays, and could therefore have contributed to the observed fMRI activations in experiment 3 (figure 3C). We now mention this limitation explicitly at the end of experiment 3. However, crucially, this potential problem cannot apply to the MEG data. During the MEG task, the stimuli were presented one by one at the center of the screen, without any explicit task, thus avoiding issues of eye movements. We therefore consider the MEG geometric regularity effect, which emerges at a relatively early latency (starting at ~160 ms) even in a passive task, to provide the strongest evidence of geometric coding, unaffected by potential eye movement artefacts.

      - I was wondering whether the authors would consider showing some un-thresholded maps just to see how widespread the activation of the geometric shapes is across all of the cortex.

      We share the uncorrected threshold maps in Fig. S3 for both adults and children in the category localizer, copied here as well. For the geometry task, most of the identified clusters are fairly large and survive cluster-corrected permutation tests; the uncorrected statistical maps look almost identical to the p<.001 map presented in Fig. 3.

      - I'm missing some discussion on the role of early visual areas that goes beyond the RSA-CNN comparison. I would imagine that early visual areas are not only engaged due to top-down feedback (line 258) but may actually also encode some of the geometric features, such as parallel lines and symmetry. Is it feasible to look at early visual areas and examine what the similarity structure between different shapes looks like?

      If early visual areas encoded the geometric features that we propose, then even early sensor-level RSA matrices should show a strong impact of geometric-feature similarity, which is not what we find (figure 4D). We do, however, appreciate the referee’s request to examine more closely what this similarity structure looks like. We now provide a movie showing the significant correlation between neural activity and our two models (uncorrected, across participants); indeed, while the early occipital activity (around 110 ms) is dominated by a significant correlation with the CNN model, there are already scattered significant sources associated with the symbolic model around these time points.

      To test this further, we used beamformers to reconstruct the source-localized activity in the calcarine cortex and performed an RSA analysis across that ROI. We find that the CNN model is indeed strongly significant at t=110 ms (t=3.43, df=18, p=.003) while the geometric feature model is not (t=1.04, df=18, p=.31), and the CNN model fits significantly better than the geometric feature model (t=4.25, df=18, p<.001). However, this result is not very stable across time: there are significant temporal clusters around these time points associated with each model, with no significant cluster for the CNN > geometric contrast (CNN: significant cluster from 88 ms to 140 ms, p<.001 in a permutation test with 10,000 permutations; geometric feature model: significant cluster from 80 ms to 104 ms, p=.0475; no significant cluster on the difference between the two).
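      For illustration, the logic of this ROI analysis is sketched below in Python (synthetic data and illustrative array sizes; not our actual analysis code): a time-resolved RSA within a source-space ROI, followed by a one-sample cluster-based permutation test across subjects.

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr
      from mne.stats import permutation_cluster_1samp_test

      rng = np.random.default_rng(0)
      n_subjects, n_shapes, n_sources, n_times = 19, 11, 40, 120   # illustrative sizes
      model_rdm = pdist(rng.normal(size=(n_shapes, 4)))            # stand-in model dissimilarities

      def timecourse_rsa(roi_data, model_rdm):
          """roi_data: (n_shapes, n_sources, n_times) shape-averaged source activity in the ROI."""
          rhos = np.empty(roi_data.shape[-1])
          for t in range(roi_data.shape[-1]):
              neural_rdm = pdist(roi_data[:, :, t], metric="correlation")  # empirical RDM at time t
              rhos[t] = spearmanr(neural_rdm, model_rdm).correlation
          return rhos

      # Per-subject correlation time courses (synthetic data stand in for the beamformer output here).
      rhos_all = np.stack([timecourse_rsa(rng.normal(size=(n_shapes, n_sources, n_times)), model_rdm)
                           for _ in range(n_subjects)])

      # Cluster-based permutation test against zero (sign flips across subjects, 10,000 permutations).
      t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(rhos_all, n_permutations=10000,
                                                                      tail=1, seed=0)
      significant_clusters = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]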

      (3) MEG

      - Similar to the fMRI set, I am a little worried that task difficulty has an effect on the decoding results, as the oddball should pop out more in more geometric shapes, making it easier to detect and easier to decode. Can the authors comment on whether it would matter for the conclusions whether they are decoding varying task difficulty or differences in geometric regularity, or whether they think this can be considered similarly?

      See above for an extensive discussion of the task difficulty effect. We point out that there is no task during MEG data collection. We have clarified the design by updating our Fig. 4. Additionally, the fact that oddballs are perceived more or less easily as a function of their geometric regularity is, in part, exactly the point that we are making – but, in MEG, this occurs even in the absence of any task requiring participants to look for them.

      - The authors discuss that the inflated baseline/onset decoding/regression estimates may occur because the shapes are being repeated within a mini-block, which I think is unlikely given the long ISIs and the fact that the geometric features model is not >0 at onset. I think their second possible explanation, that this may have to do with smoothing, is very possible. In the text, it said that for the non-smoothed result, the CNN encoding correlates with the data from 60ms, which makes a lot more sense. I would like to encourage the authors to provide readers with the unsmoothed beta values instead of the 100-ms smoothed version in the main plot to preserve the reason they chose to use MEG - for high temporal resolution!

      We fully agree with the reviewer and have accordingly updated the figures to show the unsmoothed data (see below). Indeed, there is now no significant CNN effect before ~60 ms (up to the accuracy of identifying onsets with our method).
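      The point about smoothing can be illustrated with a toy computation (illustrative numbers only, not our pipeline): a centered 100 ms boxcar average pulls the apparent onset of a step-like effect earlier by up to half the window, which is why the unsmoothed curves give a more faithful latency.

      import numpy as np

      sfreq = 250                                   # Hz, illustrative sampling rate
      times = np.arange(-0.2, 0.6, 1 / sfreq)
      effect = (times >= 0.06).astype(float)        # a true effect starting at 60 ms

      win = int(0.1 * sfreq)                        # 100 ms boxcar window
      smoothed = np.convolve(effect, np.ones(win) / win, mode="same")

      def onset(x, thresh=1e-6):
          """Latency of the first sample exceeding threshold."""
          return times[np.argmax(x > thresh)]

      print(f"true onset: {onset(effect) * 1000:.0f} ms; "
            f"apparent onset after smoothing: {onset(smoothed) * 1000:.0f} ms")
      # The centered boxcar shifts the apparent onset roughly 50 ms earlier than the true onset.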

      - In Figure 4C, I think it would be useful to either provide error bars or show variability across participants by plotting each participant's beta values. I think it would also be nice to plot the dissimilarity matrices based on the MEG data at select timepoints, just to see what the similarity structure is like.

      Following the reviewer’s recommendation, we plot the time series with the SEM as a shaded area and thicker lines for statistically significant clusters, and we provide the unsmoothed version in Fig. 4. As for the dissimilarity matrices at selected time points, these have now been added to Fig. 4.

      - To evaluate the source model reconstruction, I think the reader would need a little more detail on how it was done in the main text. How were the lead fields calculated? Which data was used to estimate the sources? How are the models correlated with the source data?

      We have moved some of these details into the main text as follows (and have also expanded the methods section slightly):

      “To understand which brain areas generated these distinct patterns of activations, and probe whether they fit with our previous fMRI results, we performed a source reconstruction of our data. We projected the sensor activity onto each participant's cortical surface estimated from T1-images. The projection was performed using eLORETA and empty-room recordings acquired on the same day to estimate noise covariance, with the default parameters of mne-bids-pipeline. Sources were spaced using a recursively subdivided octahedron (oct5). Group statistics were performed after alignment to fsaverage. We then replicated the RSA analysis […]”
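      For completeness, the quoted pipeline corresponds to the following MNE-Python steps (a condensed sketch with placeholder file names and subjects, not the exact mne-bids-pipeline configuration):

      import mne

      subject, subjects_dir = "sub-01", "/path/to/freesurfer"             # placeholders

      raw = mne.io.read_raw_fif("sub-01_task-shapes_meg.fif", preload=True)
      empty_room = mne.io.read_raw_fif("sub-01_emptyroom_meg.fif", preload=True)
      noise_cov = mne.compute_raw_covariance(empty_room)                  # same-day empty-room noise

      # Source space on a recursively subdivided octahedron (oct5), BEM and forward model.
      src = mne.setup_source_space(subject, spacing="oct5", subjects_dir=subjects_dir)
      bem = mne.make_bem_solution(mne.make_bem_model(subject, subjects_dir=subjects_dir))
      fwd = mne.make_forward_solution(raw.info, trans="sub-01-trans.fif", src=src, bem=bem)

      # eLORETA inverse applied to the shape-evoked responses.
      inv = mne.minimum_norm.make_inverse_operator(raw.info, fwd, noise_cov)
      evoked = mne.read_evokeds("sub-01_ave.fif")[0]
      stc = mne.minimum_norm.apply_inverse(evoked, inv, method="eLORETA")

      # Morph to fsaverage so that group statistics can be computed on a common surface.
      morph = mne.compute_source_morph(stc, subject_from=subject, subject_to="fsaverage",
                                       subjects_dir=subjects_dir)
      stc_fsaverage = morph.apply(stc)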

      - In addition to fitting the CNN, which is used here to model differences in early visual cortex, have the authors considered looking at their fMRI results and localizing early visual regions, extracting a similarity matrix, and correlating that with the MEG and/or comparing it with the CNN model?

      We ultimately decided against comparing the empirical similarity matrices from the MEG and fMRI experiments, first because the stimuli and tasks are different, and second because this would not be directly relevant to our goal, which is to evaluate whether a geometric-feature model accounts for the data. Thus, we systematically model the empirical similarity matrices from fMRI and from MEG with our two models, derived from different theories of shape perception, in order to test predictions about their spatial and temporal dynamics. As for comparing the similarity matrix from early visual regions in fMRI with that predicted by the CNN model, this is effectively visible in our Fig. 3D, where we perform searchlight RSA analysis and modeling with both the CNN and the geometric feature model; bilaterally, we find a correlation with the CNN model, although it sometimes overlaps with the predictions of the geometric feature model as well. We now include a section explaining this reasoning in the appendix:

      “Representational similarity analysis also offers a way to directly compare similarity matrices measured in MEG and fMRI, thus allowing for fusion of those two modalities and tentatively assigning a “time stamp” to distinct MRI clusters. However, we did not attempt such an analysis here for several reasons. First, distinct tasks and block structures were used in MEG and fMRI. Second, a smaller list of shapes was used in fMRI, as imposed by the slower modality of acquisition. Third, our study was designed as an attempt to decide between two models of geometric shape recognition. We therefore focused all analyses on this goal, which could not have been achieved by direct MEG-fMRI fusion, but required correlation with independently obtained model predictions.”

      Minor comments

      - It's a little unclear from the abstract that there is children's data for fMRI only.

      We have reworded the abstract to make this unambiguous.

      - Figures 4a & b are missing y-labels.

      We can see how our labels could be confused with (sub-)plot titles and have moved them to make the interpretation clearer.

      - MEG: are the stimuli always shown in the same orientation and size?

      They are not: each shape has a random orientation and scaling. In addition to the task example now shown at the top of Fig. 4, we have included a clearer mention of this in the main text when we introduce the task:

      “shapes were presented serially, one at a time, with small random changes in rotation and scaling parameters, in miniblocks with a fixed quadrilateral shape and with rare intruders with the bottom right corner shifted by a fixed amount (Sablé-Meyer et al., 2021)”

      - To me, the discussion section felt a little lengthy, and I wonder whether it would benefit from being a little more streamlined, focused, and targeted. I found that the structure was a little difficult to follow as it went from describing the result by modality (behavior, fMRI, MEG) back to discussing mostly aspects of the fMRI findings.

      We have tried to re-organize and streamline the discussion following these comments.

      Then, later on, I found that especially the section on "neurophysiological implementation of geometry" went beyond the focus of the data presented in the paper and was comparatively long and speculative.

      We have reexamined the discussion. The citation of papers emphasizing a representation of non-accidental geometric properties in non-human animals was requested by other commentators on our article, and we think that they are relevant to our prior suggestion that the composition of geometric features might be uniquely human: these papers suggest that individual features may not be, and that it is therefore compositionality which might be special to the human brain. We have nevertheless shortened this section.

      Furthermore, we think that this section is important because symbolic models are often criticized for lack of a plausible neurophysiological implementation. It is therefore important to discuss whether and how the postulated symbolic geometric code could be realized in neural circuits. We have added this justification to the introduction of this section.

      Reviewer #2 (Recommendations for the authors):

      (1) If the authors want to specifically claim that their findings align with mathematical reasoning, they could at least show the overlap between the activation maps of the current study and those from prior work.

      This was added to the fMRI results. See our answers to the public review.

      (2) I wonder if the reason the authors only found aIPS in their first analysis (Figure 2) is because they are contrasting geometric shapes with figures that also have geometric properties. In other words, faces, objects, and houses also contain geometric shape information, and so the authors may have essentially contrasted out other areas that are sensitive to these features. One indication that this may be the case is that the geometric regularity effect and searchlight RSA (Figure 3) contains both anterior and posterior IPS regions (but crucially, little ventral activity). It might be interesting to discuss the implications of these differences.

      Indeed, we cannot exclude that the few symmetry, perpendicularity and parallelism cues present in faces, objects or houses were processed as such, perhaps within the ventral pathway, and that these representations were subtracted out. We emphasize that our subtraction isolates the geometrical features present in simple regular geometric shapes, over and above those that might exist in other categories. We have added this point to the discussion:

      “[… ] For instance, faces possess a plane of quasi-symmetry, and so do many other man-made tools and houses. Thus, our subtraction isolated the geometrical features that are present in simple regular geometric shapes (e.g. parallels, right angles, equality of length) over and above those that might already exist, in a less pure form, in other categories.”

      (3) I had a few questions regarding the MEG results.

      a. I didn't quite understand the task. What is a regular or oddball shape in this context? It's not clear what is being decoded. Perhaps a small example of the MEG task in Figure 4 would help?

      We now include an additional sub-figure in Fig. 4 to explain the paradigm. In brief: there is no explicit task; participants are simply asked to fixate. The shapes come in miniblocks of 30 identical reference shapes (up to rotation and scaling), among which occasional deviant shapes randomly appear (created by moving one corner of the reference shape by some amount).
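      The structure of a miniblock can be summarized with the following toy generator (the deviant rate and jitter ranges are placeholders; only the overall design, repeated rotated and scaled copies of one reference shape with rare shifted-corner intruders, reflects the actual paradigm):

      import random

      def make_miniblock(shape_name, n_items=30, p_deviant=0.2, seed=None):
          """Toy generator for one miniblock: repeated presentations of a single reference
          quadrilateral with small random rotation/scaling, plus rare deviants whose corner
          is displaced. All numerical parameters are illustrative placeholders."""
          rng = random.Random(seed)
          trials = []
          for _ in range(n_items):
              trials.append({
                  "shape": shape_name,
                  "rotation_deg": rng.uniform(-15.0, 15.0),   # small random rotation
                  "scale": rng.uniform(0.9, 1.1),             # small random scaling
                  "deviant": rng.random() < p_deviant,        # rare intruder (shifted corner)
              })
          return trials

      block = make_miniblock("rectangle", seed=1)
      print(sum(t["deviant"] for t in block), "deviants out of", len(block), "presentations")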

      b. In Figure 4A/B they describe the correlation with a 'symbolic model'. Is this the same as the geometric model in 4C?

      It is. We have removed this ambiguity by calling it the “geometric model” and setting its color to the one associated with this model throughout the article.

      c. The author's explanation for why geometric feature coding was slower than CNN encoding doesn't quite make sense to me. As an explanation, they suggest that previous studies computed "elementary features of location or motor affordance", whereas their study examines "high-level mathematical information of an abstract nature." However, looking at the studies the authors cite in this section, it seems that these studies also examined the time course of shape processing in the dorsal pathway, not "elementary features of location or motor affordance." Second, it's not clear how the geometric feature model reflects high-level mathematical information (see point above about claiming this is related to math).

      We thank the referee for pointing out this inappropriate phrase, which we removed. We rephrased the rest of the paragraph to clarify our hypothesis in the following way:

      “However, in this work, we specifically probed the processing of geometric shapes that, if our hypothesis is correct, are represented as mental expressions that combine geometrical and arithmetic features of an abstract categorical nature, for instance representing “four equal sides” or “four right angles”. It seems logical that such expressions, combining number, angle and length information, take more time to be computed than the first wave of feedforward processing within the occipito-temporal visual pathway, and therefore only activate thereafter.”

      One explanation may be that the authors' geometric shapes require finer-grained discrimination than the object categories used in prior studies. i.e., the odd-ball task may be more of a fine-grained visual discrimination task. Indeed, it may not be a surprise that one can decode the difference between, say, a hammer and a butterfly faster than two kinds of quadrilaterals.

      We do not disagree with this intuition, although note that we do not have data on this point (we are reporting and modelling the MEG RSA matrix across geometric shapes only – in this part, no other shapes such as tools or faces are involved). Still, the difference between squares, rectangles, parallelograms and other geometric shapes in our stimuli is not so subtle. Furthermore, CNNs do make very fine-grained distinctions, for instance between many different breeds of dogs in the ImageNet corpus. Yet those sorts of distinctions capture the initial part of the MEG response, while the geometric model is needed only for the later part. Thus, we think that it is a genuine finding that geometric computations associated with the dorsal parietal pathway are slower than the image analysis performed by the ventral occipito-temporal pathway.

      d. CNN encoding at time 0 is a little weird, but the author's explanation, that this is explained by the fact that the data were temporally smoothed using a 100 ms window, makes sense. However, smoothing by 100 ms is quite a lot, and it doesn't seem accurate to present continuous time course data when the decoding or RSA result at each time point reflects a 100 ms bin. It may be more accurate to simply show unsmoothed data. I'm less convinced by the explanation about shape prediction.

      We agree. Following the reviewer’s advice, as well as the recommendation from reviewer 1, we now display unsmoothed plots, and the effects now exhibit a more reasonable timing (Figure 4D), with effects starting around ~60 ms for CNN encoding.

      (4) I appreciate the author's use of multiple models and their explanation for why DINOv2 explains more variance than the geometric and CNN models (that it represents both types of features). A variance partitioning analysis may help strengthen this conclusion (Bonner & Epstein, 2018; Lescroart et al., 2015).

      However, one difference between DINOv2 and the CNN used here is that it is trained on a dataset of 142 million images vs. the 1.5 million images used in ImageNet. Thus, DINOv2 is more likely to have been exposed to simple geometric shapes during training, whereas standard ImageNet trained models are not. Indeed, prior work has shown that lesioning line drawing-like images from such datasets drastically impairs the performance of large models (Mayilvahanan et al., 2024). Thus, it is unlikely that the use of a transformer architecture explains the performance of DINOv2. The authors could include an ImageNet-trained transformer (e.g., ViT) and a CNN trained on large datasets (e.g., ResNet trained on the Open Clip dataset) to test these possibilities. However, I think it's also sufficient to discuss visual experience as a possible explanation for the CNN and DINOv2 results. Indeed, young children are exposed to geometric shapes, whereas ImageNet-trained CNNs are not.

      We agree with the reviewer’s observation. In fact, new and ongoing work from the lab is also exploring this; we have included in supplementary materials exactly what the reviewer is suggesting, namely the time course of the correlation with ViT and with ConvNeXT. In line with the reviewer’s prediction, these networks, trained on much larger datasets and with many more parameters, can also fit the human data as well as DINOv2. We ran additional analyses of the MEG data with ViT and ConvNeXT, which we now report in Fig. S6 as well as in an additional sentence in that section:

      “[…] similar results were obtained by performing the same analysis, not only with another vision transformer network, ViT, but crucially using a much larger convolutional neural network, ConvNeXT, which comprises ~800M parameters and has been trained on 2B images, likely including many geometric shapes and human drawings. For the sake of completeness, RSA analysis in sensor space of the MEG data with these two models is provided in Fig. S6.”

      We conclude that the size and nature of the training set could be as important as the architecture – but also note that humans do not rely on such a huge training set. We have accordingly updated the text and Fig. S6, revising the section now entitled “Vision Transformers and Larger Neural Networks” as well as the discussion section on theoretical models.
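      The generic procedure behind these model comparisons can be sketched as follows (using an ImageNet-trained ConvNeXt from torchvision as a stand-in checkpoint; the exact models and weights used in the article are described in the methods, not here): extract pooled features for each shape image, build a pairwise-dissimilarity matrix, and correlate it with the empirical RDM.

      import torch
      from torchvision.models import convnext_base, ConvNeXt_Base_Weights
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      weights = ConvNeXt_Base_Weights.DEFAULT            # ImageNet-trained stand-in checkpoint
      model = convnext_base(weights=weights).eval()
      preprocess = weights.transforms()

      def model_rdm(images):
          """images: list of PIL images of the shape stimuli. Returns a condensed RDM
          (correlation distance) computed on the network's pooled penultimate features."""
          with torch.no_grad():
              batch = torch.stack([preprocess(im) for im in images])
              feats = model.avgpool(model.features(batch)).flatten(1)
          return pdist(feats.numpy(), metric="correlation")

      # Comparison with an empirical RDM (e.g., from MEG sensors), assumed available as neural_rdm:
      # rho = spearmanr(model_rdm(shape_images), neural_rdm).correlation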

      (5) The authors may be interested in a recent paper from Arcaro and colleagues that showed that the parietal cortex is greatly expanded in humans (including infants) compared to non-human primates (Meyer et al., 2025), which may explain the stronger geometric reasoning abilities of humans.

      A very interesting article indeed! We have updated our article to incorporate this reference in the discussion, in the section on visual pathways, as follows:

      “Finally, recent work shows that within the visual cortex, the strongest relative difference in growth between human and non-human primates is localized in parietal areas (Meyer et al., 2025). If this expansion reflected the acquisition of new processing abilities in these regions, it  might explain the observed differences in geometric abilities between human and non-human primates (Sablé-Meyer et al., 2021).”

      Also, the authors may want to include this paper, which uses a similar oddity task and compellingly shows that crows are sensitive to geometric regularity:

      Schmidbauer, P., Hahn, M., & Nieder, A. (2025). Crows recognize geometric regularity. Science Advances, 11(15), eadt3718. https://doi.org/10.1126/sciadv.adt3718

      We have ongoing discussions with the authors of this work and have prepared a response to their findings (Sablé-Meyer and Dehaene, 2025). Ultimately, we think that this discussion, which we agree is important, does not belong in the present article. They used a reduced version of our design, with amplified differences in the intruders. While they did not test the fit of their data with CNN or geometric feature models, we did, and found that a simple CNN suffices to account for crow behavior. Thus, we disagree that their conclusions follow from their results. But the present article does not seem to be the right platform to engage in this discussion.

      References

      Ayzenberg, V., & Behrmann, M. (2022). The Dorsal Visual Pathway Represents Object-Centered Spatial Relations for Object Recognition. The Journal of Neuroscience, 42(23), 4693-4710. https://doi.org/10.1523/jneurosci.2257-21.2022

      Bonner, M. F., & Epstein, R. A. (2018). Computational mechanisms underlying cortical responses to the affordance properties of visual scenes. PLoS Computational Biology, 14(4), e1006111. https://doi.org/10.1371/journal.pcbi.1006111

      Bueti, D., & Walsh, V. (2009). The parietal cortex and the representation of time, space, number and other magnitudes. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1525), 1831-1840.

      Dehaene, S., & Brannon, E. (2011). Space, time and number in the brain: Searching for the foundations of mathematical thought. Academic Press.

      Freud, E., Culham, J. C., Plaut, D. C., & Behrmann, M. (2017). The large-scale organization of shape processing in the ventral and dorsal pathways. eLife, 6, e27576.

      Freud, E., Ganel, T., Shelef, I., Hammer, M. D., Avidan, G., & Behrmann, M. (2017). Three-dimensional representations of objects in dorsal cortex are dissociable from those in ventral cortex. Cerebral Cortex, 27(1), 422-434.

      Freud, E., Plaut, D. C., & Behrmann, M. (2016). 'What' is happening in the dorsal visual pathway. Trends in Cognitive Sciences, 20(10), 773-784.

      Freud, E., Plaut, D. C., & Behrmann, M. (2019). Protracted developmental trajectory of shape processing along the two visual pathways. Journal of Cognitive Neuroscience, 31(10), 1589-1597.

      Han, Z., & Sereno, A. (2022). Modeling the Ventral and Dorsal Cortical Visual Pathways Using Artificial Neural Networks. Neural Computation, 34(1), 138-171. https://doi.org/10.1162/neco_a_01456

      Janssen, P., Srivastava, S., Ombelet, S., & Orban, G. A. (2008). Coding of shape and position in macaque lateral intraparietal area. Journal of Neuroscience, 28(26), 6679-6690.

      Konen, C. S., & Kastner, S. (2008). Two hierarchically organized neural systems for object information in human visual cortex. Nature Neuroscience, 11(2), 224-231.

      Lescroart, M. D., Stansbury, D. E., & Gallant, J. L. (2015). Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas. Frontiers in Computational Neuroscience, 9(135), 1-20. https://doi.org/10.3389/fncom.2015.00135

      Mayilvahanan, P., Zimmermann, R. S., Wiedemer, T., Rusak, E., Juhos, A., Bethge, M., & Brendel, W. (2024). In search of forgotten domain generalization. arXiv Preprint arXiv:2410.08258.

      Meyer, E. E., Martynek, M., Kastner, S., Livingstone, M. S., & Arcaro, M. J. (2025). Expansion of a conserved architecture drives the evolution of the primate visual cortex. Proceedings of the National Academy of Sciences, 122(3), e2421585122. https://doi.org/10.1073/pnas.2421585122

      Orban, G. A. (2011). The extraction of 3D shape in the visual system of human and nonhuman primates. Annual Review of Neuroscience, 34, 361-388.

      Romei, V., Driver, J., Schyns, P. G., & Thut, G. (2011). Rhythmic TMS over Parietal Cortex Links Distinct Brain Frequencies to Global versus Local Visual Processing. Current Biology, 21(4), 334-337. https://doi.org/10.1016/j.cub.2011.01.035

      Sereno, A. B., & Maunsell, J. H. R. (1998). Shape selectivity in primate lateral intraparietal cortex. Nature, 395(6701), 500-503. https://doi.org/10.1038/26752

      Summerfield, C., Luyckx, F., & Sheahan, H. (2020). Structure learning and the posterior parietal cortex. Progress in Neurobiology, 184, 101717. https://doi.org/10.1016/j.pneurobio.2019.101717

      Van Dromme, I. C., Premereur, E., Verhoef, B.-E., Vanduffel, W., & Janssen, P. (2016). Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision. PLoS Biology, 14(4), e1002445. https://doi.org/10.1371/journal.pbio.1002445

      Xu, Y. (2018). A tale of two visual systems: Invariant and adaptive visual information representations in the primate brain. Annual Review of Vision Science, 4, 311-336.

      Reviewer #3 (Recommendations for the authors):

      Bring into the discussion some of the issues outlined above, especially a) the spatial rather than visual nature of the geometric figures and b) the non-representational aspects of geometric form.

      We thank the reviewer for their recommendations – see our response to the public review for more details.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      Reviewer #1

      Evidence, reproducibility and clarity

      This paper addresses a very interesting problem of non-centrosomal microtubule organization in developing Drosophila oocytes. Using genetics and imaging experiments, the authors reveal an interplay between the activity of kinesin-1, together with its essential cofactor Ensconsin, and microtubule organization at the cell cortex by the spectraplakin Shot, minus-end binding protein Patronin and Ninein, a protein implicated in microtubule minus end anchoring. The authors demonstrate that the loss of Ensconsin affects the cortical accumulation of non-centrosomal microtubule organizing center (ncMTOC) proteins, microtubule length and vesicle motility in the oocyte, and show that this phenotype can be rescued by a constitutively active kinesin-1 mutant, but not by Ensconsin mutants deficient in microtubule or kinesin binding. The functional connection between Ensconsin, kinesin-1 and ncMTOCs is further supported by a rescue experiment with Shot overexpression. Genetics and imaging experiments further implicate Ninein in the same pathway. These data are a clear strength of the paper; they represent a very interesting and useful addition to the field.

      The weaknesses of the study are two-fold. First, the paper seems to lack a clear molecular model, uniting the observed phenomenology with the molecular functions of the studied proteins. Most importantly, it is not clear how kinesin-based plus-end directed transport contributes to cortical localization of ncMTOCs and regulation of microtubule length.

      Second, not all conclusions and interpretations in the paper are supported by the presented data.

      We thank the reviewer for recognizing the impact of this work. In response to the insightful suggestions, we performed extensive new experiments that establish a well-supported cellular and molecular model (Figure 7). The discussion has been restructured to directly link each conclusion to its corresponding experimental evidence, significantly strengthening the manuscript.

      Below is a list of specific comments, outlining the concerns, in the order of appearance in the paper/figures.

      Figure 1. The statement: "Ens loading on MTs in NCs and their subsequent transport by Dynein toward ring canals promotes the spatial enrichment of the Khc activator Ens in the oocyte" is not supported by data. The authors do not demonstrate that Ens is actually transported from the nurse cells to the oocyte while being attached to microtubules. They do show that the intensity of Ensconsin correlates with the intensity of microtubules, that the distribution of Ensconsin depends on its affinity to microtubules and that an Ensconsin pool locally photoactivated in a nurse cell can redistribute to the oocyte (and throughout the nurse cell) by what seems to be diffusion. The provided images suggest that Ensconsin passively diffuses into the oocyte and accumulates there because of higher microtubule density, which depends on dynein. To prove that Ensconsin is indeed transported by dynein in the microtubule-bound form, one would need to measure the residence time of Ensconsin on microtubules and demonstrate that it is longer than the time needed to transport microtubules by dynein into the oocyte; ideally, one would like to see movement of individual microtubules labelled with photoconverted Ensconsin from a nurse cell into the oocyte. Since microtubules are not enriched in the oocyte of the dynein mutant, analysis of Ensconsin intensity in this mutant is not informative and does not reveal the mechanism of Ensconsin accumulation.

      As noted by Reviewer 3, the directional movement of microtubules traveling at ~140 nm/s from nurse cells toward the oocyte through ring canals was previously reported by Lu et al. (2022) using a tagged Ens MT-binding domain reporter line. We have therefore cited this crucial work in the new version of the manuscript (lines 155-157) and removed the photoconversion panel.

      Critically, however, our study provides mechanistic insight that was missing from this earlier work: this transport mechanism is also crucial to enrich MAPs in the oocyte. The fact that Dynein mutants fail to enrich Ensconsin is a key piece of evidence: it supports a model of Ensconsin-loaded MT transport (Figure 1D-1F).

      Figure 2. According to the abstract, this figure shows that Ensconsin is "maintained at the oocyte cortex by Ninein". However, the figure doesn't seem to prove it - it shows that oocyte enrichment of Ensconsin is partially dependent on Ninein, but this applies to the whole cell and not just to the cell cortex. Furthermore, it is not clear whether Ninein mutation affects microtubule density, which in turn would affect Ensconsin enrichment, and therefore, it is not clear whether the effect of Ninein loss on Ensconsin distribution is direct or indirect.

      Ninein plays a critical role in Ensconsin enrichment and microtubule organization in the oocyte (new Figure 2, Figure 3, Figure S3). Quantification of total Tubulin signal shows no difference between control and Nin mutant oocytes (new Figure S3 panels A, B). We found decreased Ens enrichment in the oocyte, as well as decreased Ens localization on MTs and at the cell cortex (Figure 2E, 2F, and Figure S3C and S3D).

      New quantitative analyses of microtubule orientation at the anterior cortex, where MTs are normally preferentially oriented toward the posterior pole (Parton et al., 2011), demonstrate that Nin mutants exhibit randomized MT orientation compared to wild-type oocytes (new Figure 3C-3E). These findings establish that Ninein, although not essential, favors Ensconsin localization on MTs, Ens enrichment in the oocyte, ncMTOC cortical localization, and more robust MT orientation toward the posterior cortex. They also suggest that Ens levels in the oocyte act as a rheostat to control Khc activation.

      The observation that the aggregates formed by overexpressed Ninein accumulate other proteins, including Ensconsin, supports, though does not prove, their interaction. Furthermore, there is absolutely no proof that Ninein aggregates are "ncMTOCs". Unless the authors demonstrate that these aggregates nucleate or anchor microtubules (for example, by detailed imaging of microtubules and EB1 comets), the text and labels in the figure would need to be altered.

      We have modified the manuscript: we now refer to an accumulation of these components in large puncta, rather than aggregates, consistent with previous observations (Rosen et al., 2000). In the revised version, we state that these puncta recruit Shot, Patronin and Ens without claiming a direct interaction (line 218).

      Importantly, we conducted a more detailed characterization of these Ninein/Shot/Patronin/Ens-containing puncta in a new Figure S4. To rigorously assess their nucleation capacity, we analyzed Eb1-GFP-labeled MT comets, a robust readout of MT nucleation (Parton et al., 2011; Nashchekin et al., 2016). While a few Eb1-positive comets occasionally emanate from these structures, confirming their identity as putative ncMTOCs, these puncta function as surprisingly weak nucleation centers (new Figure S4 E, Video S1), and their presence does not alter the overall MT architecture (new Figure S4 F). Moreover, these puncta disappear over time, are barely visible by stage 10B, and do not impair oocyte development or fertility (Figure S4 G and Table 1).

      Minor comment: Note that a "ratio" (Figure 2C) is just a ratio, and should not be expressed in arbitrary units.

      We have amended this point in all the figures.

      Figure 3B: immunoprecipitation results cannot be interpreted because the immunoprecipitated proteins (GFP, Ens-GFP, Shot-YFP) are not shown. It is also not clear that this biochemical experiment is useful. If the authors would like to suggest that Ensconsin directly binds to Patronin, the interaction would need to be properly mapped at the protein domain level.

      This is a good point: the immunoprecipitated GFP and Ens-GFP proteins are now much more clearly identified on the blots and in the figure legend (new Figure 4G). Shot-YFP, used as a positive control in the IP, is difficult to detect by Western blot on conventional acrylamide gels due to its large size (>10^6 Da) (Nashchekin et al., 2016).

      We now explicitly state that immunoprecipitations were performed at 4°C, where microtubules are fully depolymerized, thereby excluding indirect, microtubule-mediated interactions. We agree with this reviewer: we cannot formally rule out interactions bridged by other protein components. This is stated in the revised manuscript (lines 238-239).

      One of the major phenotypes observed by the authors in the Ens mutant is the loss of long microtubules. The authors make strong conclusions about the independence of this phenotype from the parameters of microtubule plus-end growth, but in fact, the quality of their data does not allow such a conclusion, because they only measured the number of EB1 comets and their growth rate but not the catastrophe, rescue or pausing frequency. Note that kinesin-1 has been implicated in promoting microtubule damage and rescue (doi: 10.1016/j.devcel.2021). In the absence of such measurements, one cannot conclude whether short microtubules arise through defects in the minus-end, plus-end or microtubule shaft regulation pathways.

      We thank the reviewer for raising this important point. Our data demonstrate that microtubule (MT) nucleation and polymerization rates remain unaffected under Khc RNAi and ens mutant conditions, indicating that MT dynamics alterations must arise through alternative mechanisms.

      As the reviewer suggested, recent studies on Kinesin activity and MT network regulation are indeed highly relevant. Two key studies from the Verhey and Aumeier laboratories examined Kinesin-1 gain-of-function conditions and revealed that constitutively active Kinesin-1 induces MT lattice damage (Budaitis et al., 2022). While damaged MTs can undergo self-repair, Aumeier and colleagues demonstrated that GTP-tubulin incorporation generates "rescue shafts" that promote MT rescue events (Andreu-Carbo et al., 2022). Extrapolating from these findings, loss of Kinesin-1 activity could plausibly reduce rescue shaft formation, thereby decreasing MT rescue frequency and stability. Although this hypothesis is challenging to test directly in our system, it provides a mechanistic framework for the observed reduction in MT number and stability.

      Additionally, the reviewer highlighted the role of Khc in transporting the dynactin complex, an anti-catastrophe factor, to MT plus ends (Nieuwburg et al., 2017), which could further contribute to MT stabilization. This crucial reference is now incorporated into the revised Discussion.

      Importantly, our work also demonstrates the contribution of Ens/Khc to ncMTOC targeting to the cell cortex. Our new quantitative analyses of MT organization (new Figure 5 B) reveal a defective anteroposterior orientation of cortical MTs in mutant conditions, pointing to a critical role for cortical ncMTOCs in organizing the MT network.

      Taken together, we propose that the observed MT reduction and disorganization result from multiple interconnected mechanisms: (1) reduced rescue shaft formation affecting MT stability; (2) impaired transport of anti-catastrophe factors to MT plus ends; and (3) loss of cortical ncMTOCs, which are essential for minus-end MT stabilization and network organization. The Discussion has been revised to reflect this integrated model in a dedicated paragraph (“A possible regulation of MT dynamics in the oocyte at both plus and minus MT ends by Ens and Khc”, lines 415-432).

      It is important to note that a spectraplakin, like Shot, can potentially affect different pathways, particularly when overexpressed.

      We agree that Shot harbors multiple functional domains and acts as a key organizer of both actin and microtubule cytoskeletons. Overexpression of such a cytoskeletal cross-linker could indeed perturb both networks, making interpretation of Ens phenotype rescue challenging due to potential indirect effects.

      To address this concern, we selected an appropriate Shot isoform for our rescue experiments that displayed similar localization to “endogenous” Shot-YFP (a genomic construct harboring shot regulatory sequences) and importantly that was not overexpressed.

      Elevated expression of the Shot.L(A) isoform (see Western blot in Figure S8 A), considered the wild-type form with both the CH1 and CH2 actin-binding motifs (Lee and Kolodziej, 2002), led to abnormal localization, such as strong binding to microtubules in nurse cells and the oocyte, confirming the risk of gain-of-function artifacts and inappropriate conclusions (Figure S8 B, arrows).

      By contrast, our rescue experiments using the Shot.L(C) isoform (which harbors only the CH2 motif) provide strong evidence against such artifacts for several reasons. First, Shot-L(C) is expressed at slightly lower levels than a Shot-YFP genomic construct (not overexpressed), and at much lower levels than Shot-L(A), despite using the same driver (Figure S8 A). Second, Shot-L(C) localization in the oocyte is similar to that of endogenous Shot-YFP, concentrating at the cell cortex (Figure S8 B, compare lower and top panels). Taken together, these controls suggest that the rescue obtained with Shot-L(C) is specific.

      Note that this Shot-L(C) isoform is sufficient to complement the absence of the shot gene in other cell contexts (Lee and Kolodziej, 2002).

      Unjustified conclusions should be removed: the authors do not provide sufficient data to conclude that "ens and Khc oocytes MT organizational defects are caused by decreased ncMTOC cortical anchoring", because the actual cortical microtubule anchoring was not measured.

      This is a valid point. We acknowledge that we did not directly measure microtubule anchoring in this study. In response, we have revised the discussion to more accurately reflect our observations. Throughout the manuscript, we now refer to "cortical microtubule organization" rather than "cortical microtubule anchoring," which better aligns with the data presented.

      Minor comment: Microtubule growth velocity must be expressed in units of length per time, to enable evaluating the quality of the data, and not as a normalized value.

      This is now amended in the revised version (modified Figure S7).

      A significant part of the Discussion is dedicated to the potential role of Ensconsin in cortical microtubule anchoring and potential transport of ncMTOCs by kinesin. It is obviously fine that the authors discuss different theories, but it would be very helpful if the authors would first state what has been directly measured and established by their data, and what are the putative, currently speculative explanations of these data.

      We have carefully considered the reviewer's constructive comments and are confident that this revised version fully addresses their concerns.

      First, we have substantially strengthened the connection between the Results and Discussion sections, ensuring that our interpretations are more directly anchored in the experimental data. This restructuring significantly improves the overall clarity and logical flow of the manuscript.

      Second, we have added a new comprehensive figure presenting a molecular-scale model of Kinesin-1 activation upon release of autoinhibition by Ensconsin (new Figure 7D). Critically, this figure also illustrates our proposed positive feedback loop mechanism: Khc-dependent cytoplasmic advection promotes cortical recruitment of additional ncMTOCs, which generates new cortical microtubules and further accelerates cytoplasmic transport (Figure 7 A-C). This self-amplifying cycle provides a mechanistic framework consistent with emerging evidence that cytoplasmic flows are essential for efficient intracellular transport in both insect and mammalian oocytes.

      Minor comment: The writing and particularly the grammar need to be significantly improved throughout, which should be very easy with current language tools. Examples: "ncMTOCs recruitment" should be "ncMTOC recruitment"; "Vesicles speed" should be "Vesicle speed", "Nin oocytes harbored a WT growth,"- unclear what this means, etc. Many paragraphs are very long and difficult to read. Making shorter paragraphs would make the authors' line of thought more accessible to the reader.

      We have amended and shortened the manuscript according to this reviewer's feedback. In particular, we have built shorter, more focused paragraphs to facilitate reading.

      Significance

      This paper represents significant advance in understanding non-centrosomal microtubule organization in general and in developing Drosophila oocytes in particular by connecting the microtubule minus-end regulation pathway to the Kinesin-1 and Ensconsin/MAP7-dependent transport. The genetics and imaging data are of good quality, are appropriately presented and quantified. These are clear strengths of the study which will make it interesting to researchers studying the cytoskeleton, microtubule-associated proteins and motors, and fly development.

      The weaknesses of this study are due to the lack of clarity of the overall molecular model, which would limit the impact of the study on the field. Some interpretations are not sufficiently supported by data, but this can be solved by more precise and careful writing, without extensive additional experimentation.

      We thank the reviewer for raising these important concerns regarding clarity and data interpretation. We have thoroughly revised the manuscript to address these issues on multiple fronts. First, we have substantially rewritten key sections to ensure that our conclusions are clearly articulated and directly supported by the data. Second, we have performed several new experiments that now allow us to propose a robust mechanistic model, presented in new figures. These additions significantly strengthen the manuscript and directly address the reviewer's concerns.

      My expertise is cell biology and biochemistry of the microtubule cytoskeleton, including both microtubule-associated proteins and microtubule motors.

      Reviewer #2

      Evidence, reproducibility and clarity

      In this manuscript, Berisha et al. investigate how microtubule (MT) organization is spatially regulated during Drosophila oogenesis. The authors identify a mechanism in which the Kinesin-1 activator Ensconsin/MAP7 is transported by dynein and anchored at the oocyte cortex via Ninein, enabling localized activation of Kinesin-1. Disruption of this pathway impairs ncMTOC recruitment and MT anchoring at the cortex. The authors combine genetic manipulation with high-resolution microscopy and use three key readouts to assess MT organization during mid-to-late oogenesis: cortical MT formation, localization of posterior determinants, and ooplasmic streaming. Notably, Kinesin-1, in concert with its activator Ens/MAP7, contributes to organizing the microtubule network it travels along. Overall, the study presents interesting findings, though we have several concerns we would like the authors to address.

      Ensconsin enrichment in the oocyte

      1. Enrichment in the oocyte

      • Ensconsin is a MAP that binds MTs. Given that microtubule density in the oocyte significantly exceeds that in the nurse cells, its enrichment may passively reflect this difference. To assess whether the enrichment is specific, could the authors express a non-Drosophila MAP (e.g., mammalian MAP1B) to determine whether it also preferentially localizes to the oocyte?

      To address this point, we performed a new series of experiments analyzing the enrichment of other Drosophila and non-Drosophila MAPs, including Jupiter-GFP, Eb1-GFP, and bovine Tau-GFP, all widely used markers of the microtubule cytoskeleton in flies (see new Figure S2). Our results reveal that Jupiter-GFP, Eb1-GFP, and bovine Tau-GFP all exhibit significantly weaker enrichment in the oocyte compared to Ens-GFP. Khc-GFP also shows lower enrichment. These findings indicate that MAP enrichment in the oocyte is MAP-dependent, rather than solely reflecting microtubule density or organization. Of note, we cannot exclude that microtubule post-translational modifications contribute to differential MAP binding between nurse cells and the oocyte, but this remains a question for future investigation.

      The ability of ens-wt and ens-LowMT to induce tubulin polymerization according to the light scattering data (Fig. S1J) is minimal and does not reflect dramatic differences in localization. The authors should verify that, in all cases, the polymerization product in their in vitro assays is microtubules rather than other light-scattering aggregates. What is the control in these experiments? If it is just purified tubulin, it should not form polymers at physiological concentrations.

      The critical concentration (Cr) for microtubule self-assembly in classical BRB80 buffer, found by us and others, is around 20 µM (see Fig. 2c in Weiss et al., 2010). Here, microtubules were assembled at a tubulin concentration of 40 µM, i.e., well above the Cr. As stated in the Materials and Methods section, we systematically induced cooling at 4°C after assembly to assess the presence of aggregates, since those do not fall apart upon cooling. The decrease in optical density upon cooling is a direct control showing that the initial increase in OD is due to the formation of microtubules. Finally, aggregation and polymerization curves are markedly different, the former displaying an exponential shape and the latter a sigmoidal assembly phase (see Fig. 3A and 3B in Weiss et al., 2010).
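      As a purely illustrative way of formalizing this distinction, one can fit both functional forms to a turbidity (OD versus time) curve and ask which describes the data better; the numbers below are synthetic and this is not the analysis of Weiss et al. (2010), whose cooling control remains the decisive test.

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(t, od_max, k, t_half):
          """Sigmoidal assembly curve expected for MT polymerization (lag, elongation, plateau)."""
          return od_max / (1.0 + np.exp(-k * (t - t_half)))

      def saturating_exp(t, od_max, k):
          """Exponential-like rise expected for aggregation (no lag phase)."""
          return od_max * (1.0 - np.exp(-k * t))

      # Synthetic OD-versus-time data (minutes); a real curve would come from the spectrophotometer.
      t = np.linspace(0, 30, 61)
      od = sigmoid(t, 0.35, 0.6, 10.0) + np.random.default_rng(0).normal(0, 0.005, t.size)

      p_sig, _ = curve_fit(sigmoid, t, od, p0=(0.3, 0.5, 10.0))
      p_exp, _ = curve_fit(saturating_exp, t, od, p0=(0.3, 0.1))
      rss_sig = np.sum((od - sigmoid(t, *p_sig)) ** 2)
      rss_exp = np.sum((od - saturating_exp(t, *p_exp)) ** 2)
      print(f"residual sum of squares: sigmoid {rss_sig:.4f} vs exponential {rss_exp:.4f}")
      # A clear lag phase (much lower residuals for the sigmoid) argues for polymerization rather
      # than aggregation; reversibility upon cooling to 4°C is the experimental control.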

      Photoconversion caveats

      MAPs are known to dynamically associate with and dissociate from microtubules. Therefore, interpretation of the Ens photoconversion data should be made with caution. The expanding red signal from the nurse cells to the oocyte may reflect any combination of dynein-mediated MT transport and passive diffusion of unbound Ensconsin. Notably, photoconversion of a soluble protein in the nurse cells would also result in a gradual increase in red signal in the oocyte, independent of active transport. We encourage the authors to more thoroughly discuss these caveats. It may also help to present the green and red channels side by side rather than as merged images, to allow readers to assess signal movement and spatial patterns better.

      This is a valid point that mirrors the comment of Reviewers 1 and 3. The directional movement of microtubules traveling at ~140 nm/s from nurse cells toward the oocyte via the ring canals was previously reported by Lu et al. (2022) with excellent spatial resolution. Notably, this MT transport was measured using a fusion protein containing the Ens MT-binding domain. We now cite this relevant study in our revised manuscript and have removed this redundant panel in Figure 1.

      Reduction of Shot at the anterior cortex

      • Shot is known to bind strongly to F-actin, and in the Drosophila ovary, its localization typically correlates more closely with F-actin structures than with microtubules, despite being an MT-actin crosslinker. Therefore, the observed reduction of cortical Shot in ens, nin mutants, and Khc-RNAi oocytes is unexpected. It would be important to determine whether cortical F-actin is also disrupted in these conditions, which should be straightforward to assess via phalloidin staining.

      As requested by the reviewer, we performed actin staining experiments, which are now presented in a new Figure S5. These data demonstrate that the cortical actin network remains intact in all mutant backgrounds analyzed, ruling out any indirect effect of actin cytoskeleton disruption on the observed phenotypes.

      MTs are barely visible in Fig. 3A, which is meant to demonstrate Ens-GFP colocalization with tubulin. Higher-quality images are needed.

      The revised version now provides significantly improved images to show the different components examined. Our data show that Ens and Ninein localize at the cell cortex where they co-localize with Shot and Patronin (Figure 2 A-C). In addition, novel images show that Ens extends along microtubules (new Figure 4 A).

      MT gradient in stage 9 oocytes

      In ens-/-, nin-/-, and Khc-RNAi oocytes, is there any global defect in the stage 9 microtubule gradient? This information would help clarify the extent to which cortical localization defects reflect broader disruptions in microtubule polarity.

      We now provide quantitative analysis of microtubule (MT) array organization in novel figures (Figure 3D and Figure 5B). Our data reveal that both Khc RNAi and ens mutant oocytes exhibit severe disruption of MT orientation toward the posterior (new Figure 5B). Importantly, this defect is significantly less pronounced in Nin-/- oocytes, which retain residual ncMTOCs at the cortex (new Figure 3D). This differential phenotype supports our model that cortical ncMTOCs are critical for maintaining proper MT orientation toward the posterior side of the oocyte.
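      Our quantification is detailed in the Methods; purely as an illustration of how such an orientation analysis can be performed on cortical MT images, a structure-tensor approach (sketched below with scikit-image, on a hypothetical image file) yields a weighted distribution of local fiber orientations relative to one image axis, which can be taken as the anteroposterior axis.

      import numpy as np
      from skimage import io, filters
      from skimage.feature import structure_tensor

      def orientation_histogram(image_path, sigma=2.0, n_bins=18):
          """Distribution of local MT orientations from the structure tensor of a 2D tubulin image.
          Angles are expressed relative to the image column axis (assumed anteroposterior axis)."""
          img = io.imread(image_path, as_gray=True).astype(float)
          img = filters.gaussian(img, sigma=1.0)                  # mild denoising
          Arr, Arc, Acc = structure_tensor(img, sigma=sigma)      # smoothed tensor components
          theta_grad = 0.5 * np.arctan2(2.0 * Arc, Acc - Arr)     # dominant local gradient orientation
          theta_fiber = theta_grad + np.pi / 2.0                  # fibers run perpendicular to gradients
          theta_fiber = (theta_fiber + np.pi / 2.0) % np.pi - np.pi / 2.0   # wrap to (-90, 90] degrees
          coherence = np.hypot(Acc - Arr, 2.0 * Arc)              # weight pixels by local anisotropy
          hist, edges = np.histogram(np.degrees(theta_fiber), bins=n_bins,
                                     range=(-90, 90), weights=coherence)
          return hist / hist.sum(), edges

      # Example call on a hypothetical file:
      # fractions, bin_edges = orientation_histogram("stage9_oocyte_tubulin.tif")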

      Role of Ninein in cortical anchoring

      The requirement for Ninein in cortical anchorage is the least convincing aspect of the manuscript and somewhat disrupts the narrative flow. First, it is unclear whether Ninein exhibits the same oocyte-enriched localization pattern as Ensconsin. Is Ninein detectable in nurse cells? Second, the Ninein antibody signal appears concentrated in a small area of the anterior-lateral oocyte cortex (Fig. 2A), yet Ninein loss leads to reduced Shot signal along a much larger portion of the anterior cortex (Fig. 2F), a spatial mismatch that weakens the proposed functional relationship. Third, Ninein overexpression results in cortical aggregates that co-localize with Shot, Patronin, and Ensconsin. Are these aggregates functional ncMTOCs? Do microtubules emanate from these foci?

      We now provide a more comprehensive analysis of Ninein localization. Similar to Ensconsin (Ens), endogenous Ninein is enriched in the oocyte during the early stages of oocyte development but is also detected in NCs (see modified Figure 2 A and Lasko et al., 2016). Improved imaging of Ninein further shows that the protein partially co-localizes with Ens and ncMTOCs at the anterior cortex, as well as with Ens-bound MTs (Figure 2B, 2C).

      Importantly, loss of Ninein (Nin) only partially reduces the enrichment of Ens in the oocyte (Figure 2E). Both Ens and Kinesin heavy chain (Khc) remain partially functional and continue to target non-centrosomal microtubule-organizing centers (ncMTOCs) to the cortex (Figure 3A). In Nin-/- mutants, a subset of long cortical microtubules (MTs) is present, thereby generating cytoplasmic streaming, although less efficiently than under wild-type (WT) conditions (Figure 3F and 3G). Since Nin is a non-essential gene, we envisage Ninein as a facilitator of MT organization during oocyte development.

      Finally, our new analyses demonstrate that the large puncta containing Ninein, Shot, Patronin and Ens, despite their size, appear to be relatively weak nucleation centers (revised Figure S4 E and Video 1). In addition, their presence neither biases overall MT architecture (Figure S4 F) nor impairs oocyte development and fertility (Figure S4 G and Table 1).

      Inconsistency of Khc^MutEns rescue

      The Khc^MutEns variant partially rescues cortical MT formation and restores a slow but measurable cytoplasmic flow, yet it fails to rescue Staufen localization (Fig. 5). This raises questions about the consistency and completeness of the rescue. Could the authors clarify this discrepancy or propose a mechanistic rationale?

      This is a good point. The cytoplasmic flows (the consequence of cargo transport by Khc on MTs) generated by a constitutively active KhcMutEns in an ens mutant condition are less efficient than those driven by Khc activated by Ens in a control condition (Figure 6C). The rescued flow is probably not efficient enough to completely rescue the Staufen localization at stage 10.

      Additionally, this KhcMutEns variant rescues the viability of embryos from Khc27 mutant germline clone oocytes but not from ens mutants (Table 1). One hypothesis is that Ens harbors additional functions beyond Khc activation.

      This incomplete rescue of the ens phenotype by an active Khc variant could also be a consequence of the “paradox of co-dependence”: Kinesin-1 also transports the antagonizing motor Dynein, which promotes cargo transport in the opposite direction (Hancock et al., 2016). The phenotype of a gain-of-function variant is therefore complex to interpret. Consistent with this, both KhcMutEns-GFP and KhcΔhinge2, two active Khc variants, only partially rescue centrosome transport in ens mutant neural stem cells (Figure S10).

      Minor points: 1. The pUbi-attB-Khc-GFP vector was used to generate the Khc^MutEns transgenic line, presumably under control of the ubiquitous ubi promoter. Could the authors specify which attP landing site was used? Additionally, are the transgenic flies viable and fertile, given that Kinesin-1 is hyperactive in this construct?

      All transgenic constructs were integrated at defined genomic landing sites to ensure controlled expression levels. Specifically, both GFP-tagged KhcWT and KhcMutEns were inserted at the VK05 (attP9A) site using PhiC31-mediated integration. Full details of the landing sites are provided in the Materials and Methods section. Both transgenes are homozygous lethal and are maintained over TM6B balancers.

      On page 11 (Discussion, section titled "A dual Ensconsin oocyte enrichment mechanism achieves spatial relief of Khc inhibition"), the statement "many mutations in Kif5A are causal of human diseases" would benefit from a brief clarification. Since not all readers may be familiar with kinesin gene nomenclature, please indicate that KIF5A is one of the three human homologs of Kinesin heavy chain.

      We clarified this point in the revised version (lines 465-466).

      On page 16 (Materials and Methods, "Immunofluorescence in fly ovaries"), the sentence "Ovaries were mounted on a slide with ProlonGold medium with DAPI (Invitrogen)" should be corrected to "ProLong Gold."

      This is corrected.

      Significance

      This study shows that enrichment of MAP7/Ensconsin in the oocyte is the mechanism of kinesin-1 activation there and is important for cytoplasmic streaming and for the localization of non-centrosomal microtubule-organizing centers to the oocyte cortex.

      We thank the reviewers for their thorough review of our manuscript and their positive feedback.

      Reviewer #3

      Evidence, reproducibility and clarity

      The manuscript of Berisha et al. investigates the role of Ensconsin (Ens), Kinesin-1 and Ninein in the organisation of microtubules (MTs) in the Drosophila oocyte. In stage 9 oocytes, Kinesin-1 transports oskar mRNA, a posterior determinant, along MTs that are organised by ncMTOCs. At stage 10b, Kinesin-1 induces cytoplasmic advection to mix the contents of the oocyte. Ensconsin/Map7 is a MT-associated protein (MAP) that uses its MT-binding domain (MBD) and kinesin-binding domain (KBD) to recruit Kinesin-1 to microtubules and to stimulate the motility of MT-bound Kinesin-1. Using various new Ens transgenes, the authors demonstrate the requirement of the Ens MBD and of Ninein for Ens localisation to the oocyte, where Ens activates Kinesin-1 using its KBD. The authors also claim that Ens, Kinesin-1 and Ninein are required for the accumulation of ncMTOCs at the oocyte cortex and argue that the detachment of the ncMTOCs from the cortex accounts for the reduced localisation of oskar mRNA at stage 9 and the lack of cytoplasmic streaming at stage 10b. Although the manuscript contains several interesting observations, the authors' conclusions are not sufficiently supported by their data. The structure-function analysis of Ensconsin (Ens) is potentially publishable, but the conclusions on ncMTOC anchoring and cytoplasmic streaming are not convincing.

      We are grateful that the regulation of Khc activity by MAP7 was well received by all reviewers. While our study focuses on Drosophila oogenesis, we believe this mechanism may have broader implications for understanding kinesin regulation across biological systems.

      For the novel function of the MAP7/Khc complex in organizing its own microtubule networks through ncMTOC recruitment, we have carefully considered the reviewers' constructive recommendations. We now provide additional experimental evidence supporting a model of flux self-amplification in which ncMTOC recruitment plays a key role. It is well established that cytoplasmic flows are essential for posterior localization of cell fate determinants at stage 10B. Slow flows have also been described at earlier oogenesis stages by the groups of Saxton and St Johnston. Building on these early publications and our new experiments, we propose that these flows are essential to promote a positive feedback loop that reinforces ncMTOC recruitment and MT organization (Figure 7).

      1) The main conclusion of the manuscript is that "MT advection failure in Khc and ens in late oogenesis stems from defective cortical ncMTOCs recruitment". This completely overlooks the abundant evidence that Kinesin-1 directly drives cytoplasmic streaming by transporting vesicles and microtubules along microtubules, which then move the cytoplasm by advection (Palacios et al., 2002; Serbus et al, 2005; Lu et al, 2016). Since Kinesin-1 generates the flows, one cannot conclude that the effect of khc and ens mutants on cortical ncMTOC positioning has any direct effect on these flows, which do not occur in these mutants.

      We regret the lack of clarity in the first version of the manuscript and the missing references. We propose a model in which the Kinesin-1-dependent slow flows (described by Serbus/Saxton and Palacios/St Johnston) play a central role in amplifying ncMTOC anchoring and cortical MT network formation (see the model in the new Figure 7).

      2) The authors claim that streaming phenotypes of ens and khc mutants are due to a decrease in microtubule length caused by the defective localisation of ncMTOCs. In addition to the problem raised above, I am not convinced that they can make accurate measurements of microtubule length from confocal images like those shown in Figure 4. Firstly, they are measuring the length of bundles of microtubules and cannot resolve individual microtubules. This problem is compounded by the fact that the microtubules do not align into parallel bundles in the mutants. This will make the "microtubules" appear shorter in the mutants. In addition, the alignment of the microtubules in wild-type allows one to choose images in which the microtubules lie in the imaging plane, whereas the more disorganized arrangement of the microtubules in the mutants means that most microtubules will cross the imaging plane, which precludes accurate measurements of their length.

      As mentioned by Reviewer 4, we have been transparent about the methodology and its limitations, which were fully described in the Materials and Methods section.

      Cortical microtubules in oocytes are highly dynamic and move rapidly, making it technically impossible to capture their entire length using standard Z-stack acquisitions. We therefore adopted a compromise approach: measuring microtubules within a single focal plane positioned just below the oocyte cortex. This strategy is consistent with established methods in the field, such as those used by Parton et al. (2011) to track microtubule plus-end directionality. To avoid overinterpretation, we explicitly refer to these measurements as "minimum detectable MT length," acknowledging that microtubules may extend beyond the focal plane, particularly at stage 10, where long, tortuous bundles frequently exit the plane of focus. These methodological considerations and potential biases are clearly described in the Materials and Methods section, and the text now mentions the possible disorganization of the MT network in the mutant conditions (lines 272-273).
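      For illustration only, a minimal sketch of how such a single-plane "minimum detectable MT length" measurement could be implemented is given below (Python with scikit-image). This is not the pipeline used in the manuscript; the file name, pixel size, filter scales and size threshold are placeholder assumptions.

          # Minimal sketch (illustrative, not the authors' pipeline): measure filament
          # lengths in a single focal plane just below the oocyte cortex.
          import numpy as np
          from skimage import io, filters, morphology, measure

          PIXEL_SIZE_UM = 0.05                                   # assumed pixel size in microns
          img = io.imread("cortical_plane.tif").astype(float)    # hypothetical single-plane image

          ridges = filters.sato(img, sigmas=range(1, 4), black_ridges=False)   # enhance thin bright filaments
          mask = ridges > filters.threshold_otsu(ridges)                       # binarize
          mask = morphology.remove_small_objects(mask, min_size=50)            # discard speckle
          skeleton = morphology.skeletonize(mask)                              # 1-pixel-wide traces

          # Approximate each connected trace's length by its pixel count; this is a lower
          # bound ("minimum detectable MT length") and does not resolve bundled filaments.
          labels = measure.label(skeleton)
          lengths_um = [r.area * PIXEL_SIZE_UM for r in measure.regionprops(labels)]
          print(f"n = {len(lengths_um)}, mean = {np.mean(lengths_um):.2f} um")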

      In this revised version, we now provide complementary analyses of MT network organization. Beyond length measurements (and the limitations mentioned above), we also quantified microtubule network orientation at stage 9, assessing whether cortical microtubules are preferentially oriented toward the posterior axis as observed in controls (revised Figure 3D and Figure 5B). While this analysis is also subject to the same technical limitations, it reveals a clear biological difference: microtubules exhibit posterior-biased orientation in control oocytes, similar to a previous study (Parton et al., 2011), but adopt a randomized orientation in Nin-/-, ens, and Khc RNAi-depleted oocytes (revised Figure 3D and Figure 5B).
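      As a complementary illustration (again a sketch under stated assumptions, not the analysis used for the figures), the per-pixel orientation of the filament signal can be estimated from the image gradient and summarized as the fraction of signal aligned with an assumed anterior-posterior axis taken along the image x-axis; the file name, smoothing scale and the 30-degree criterion are arbitrary choices.

          # Illustrative sketch: local MT orientation in a single cortical plane relative
          # to an assumed A-P axis (image x-axis). Not the authors' implementation.
          import numpy as np
          from skimage import io, filters

          img = io.imread("cortical_plane.tif").astype(float)       # hypothetical input image
          smooth = filters.gaussian(img, sigma=2.0)                 # suppress pixel noise

          gy, gx = np.gradient(smooth)                              # gradients along rows (y) and columns (x)
          theta = (np.arctan2(gy, gx) + np.pi / 2) % np.pi          # filaments run perpendicular to the gradient

          mask = smooth > filters.threshold_otsu(smooth)            # keep only filament-containing pixels
          deviation = np.minimum(theta[mask], np.pi - theta[mask])  # angular distance from the x-axis
          print(f"fraction of MT signal within 30 deg of the A-P axis: "
                f"{np.mean(deviation < np.deg2rad(30)):.2f}")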

      Taken together, these complementary approaches, despite their technical constraints, provide convergent evidence for the role of the Khc/Ens complex in organizing cortical microtubule networks during oogenesis.

      3) "To investigate whether the presence of these short microtubules in ens and Khc RNAi oocytes is due to defects in microtubule anchoring or is also associated with a decrease in microtubule polymerization at their plus ends, we quantified the velocity and number of EB1comets, which label growing microtubule plus ends (Figure S3)." I do not understand how the anchoring or not of microtubule minus ends to the cortex determines how far their plus ends grow, and these measurements fall short of showing that plus end growth is unaffected. It has already been shown that the Kinesin-1-dependent transport of Dynactin to growing microtubule plus ends increases the length of microtubules in the oocyte because Dynactin acts as an anti-catastrophe factor at the plus ends. Thus, khc mutants should have shorter microtubules independently of any effects on ncMTOC anchoring. The measurements of EB1 comet speed and frequency in FigS2 will not detect this change and are not relevant for their claims about microtubule length. Furthermore, the authors measured EB1 comets at stage 9 (where they did not observe short MT) rather than at stage 10b. The authors' argument would be better supported if they performed the measurements at stage 10b.

      We thank the reviewer for raising this important point. The short microtubule (MT) length observed at stage 10B could indeed result from limited plus-end growth. Unfortunately, we were unable to test this hypothesis directly: strong endogenous yolk autofluorescence at this stage prevented reliable detection of EB1-GFP comets, precluding velocity measurements.

      At least during stage 9, our data demonstrate that MT nucleation and polymerization rates are not reduced in either Khc RNAi or ens mutant conditions, indicating that the observed MT alterations must arise through alternative mechanisms.

      In the discussion, we propose the following interconnected explanations, supported by recent literature and the reviewers’ suggestions:

      1- Reduced MT rescue events. Two seminal studies from the Verhey and Aumeier laboratories have shown that constitutively active Kinesin-1 induces MT lattice damage (Budaitis et al., 2022), which can be repaired through GTP-tubulin incorporation into "rescue shafts" that promote MT rescue (Andreu-Carbo et al., 2022). Extrapolating from these findings, loss of Kinesin-1 activity could plausibly reduce rescue shaft formation, thereby decreasing MT stability. While challenging to test directly in our system, this mechanism provides a plausible framework for the observed phenotype.

      2- Impaired transport of stabilizing factors. As the reviewer astutely points out, Khc transports the dynactin complex, an anti-catastrophe factor, to MT plus ends (Nieuwburg et al., 2017). Loss of this transport could further compromise MT plus-end stability. We now discuss this important mechanism in the revised manuscript.

      3- Loss of cortical ncMTOCs. Critically, our new quantitative analyses (revised Figure 3 and Figure 5) also reveal defective anteroposterior orientation of cortical MTs in mutant conditions. These experiments suggest that Ens/Khc-mediated localization of ncMTOCs to the cortex is essential for proper MT network organization, and possibly for minus-end stabilization, as suggested in several studies (Feng et al., 2019; Goodwin and Vale, 2011; Nashchekin et al., 2016).

      Altogether, we now propose an integrated model in which MT reduction and disorganization may result from multiple complementary mechanisms operating downstream of Kinesin-1/Ensconsin loss. While some aspects remain difficult to test directly in our in vivo system, the convergence of our data with recent mechanistic studies provides an interesting conceptual framework. The Discussion has been revised to reflect this comprehensive view in a dedicated paragraph ("A possible regulation of MT dynamics in the oocyte at both plus and minus MT ends by Ens and Khc", lines 415-432).

      4) The Shot overexpression experiments presented in Fig. 3 E-F, Fig. 4D and Table S1 are very confusing. Originally, the authors used Shot-GFP overexpression at stage 9 to show that there is a decrease of ncMTOCs at the cortex in ens mutants (Fig. 3 E-F) and speculated that this caused the defects in MT length and cytoplasmic advection at stage 10B. However, the authors later state on page 8 that: "Shot overexpression (Shot OE) was sufficient to rescue the presence of long cortical MTs and ooplasmic advection in most ens oocytes (9/14), resembling the patterns observed in controls (Figures 4B right panel and 4D). Moreover, while ens females were fully sterile, overexpression of Shot was sufficient to restore that loss of fertility (Table S1)". Is this the same UAS Shot-GFP and VP16 Gal4 used in both experiments? If so, this contradiction puts the authors' conclusions in question.

      This is an important point that requires clarification regarding our experimental design.

      The Shot-YFP construct is a genomic insertion on chromosome 3. The ens mutation is also located on chromosome 3 and we were unable to recombine this transgene with the ens mutant for live quantification of cortical Shot. To circumvent this technical limitation, we used a UAS-Shot.L(C)-GFP transgenic construct driven by a maternal driver, expressed in both wild-type (control) and ens mutant oocytes. We validated that the expression level and subcellular localization of UAS-Shot.L(C)-GFP were comparable to those of the genomic Shot-YFP (new Figure S8 A and B).

      From these experiments, we drew two key conclusions. First, cortical Shot.L(C)-GFP is less abundant in ens mutant oocytes compared to wild-type (the quantification has been removed from this version). Second, despite this reduced cortical accumulation, Shot.L(C)-GFP expression partially rescues ooplasmic flows and microtubule streaming in stage 10B ens mutant oocytes, and restores fertility to ens mutant females.

      5) The authors based their conclusions about the involvement of Ens, Kinesin-1 and Ninein in ncMTOC anchoring on the decrease in cortical fluorescence intensity of Shot-YFP and Patronin-YFP in the corresponding mutant backgrounds. However, there is a large variation in average Shot-YFP intensity between control oocytes in different experiments. In Fig. 2F-G the average level of Shot-YFP in the control is 130 AU, while in Fig. 3G-H it is only 55 AU. This makes me worry about the reliability of such measurements and the conclusions drawn from them.

      To clarify this point, we have harmonized the method used to quantify the Shot-YFP signals in Figure 4E with the methodology used in Figure 3B, based on the original images. The levels are not strictly identical (control in Figure 2B: 132.7 ± 36.2 AU versus control in Figure 4E: 164.0 ± 37.7 AU). Such differences are expected when experiments are performed several months apart and by different users.
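      To make the harmonized procedure concrete, a minimal sketch of one way to measure mean cortical fluorescence reproducibly is shown below; it assumes a hand-traced oocyte mask, a fixed-width cortical band and a simple background subtraction, all of which are illustrative choices rather than the exact protocol used for the figures.

          # Illustrative sketch: mean Shot-YFP intensity in a fixed-width band just inside
          # a traced oocyte outline. File names, band width and background handling are
          # placeholder assumptions.
          import numpy as np
          from skimage import io, morphology

          img = io.imread("shot_yfp_plane.tif").astype(float)     # hypothetical fluorescence image
          oocyte = io.imread("oocyte_mask.tif").astype(bool)      # hypothetical traced oocyte mask

          BAND_PX = 10                                            # assumed cortical band width (pixels)
          interior = morphology.binary_erosion(oocyte, morphology.disk(BAND_PX))
          cortical_band = oocyte & ~interior                      # ring of pixels just inside the outline

          background = np.median(img[~oocyte])                    # crude background estimate outside the cell
          values = img[cortical_band] - background
          print(f"cortical signal: {values.mean():.1f} +/- {values.std(ddof=1):.1f} AU")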

      6) The decrease in the intensity of Shot-YFP and Patronin-YFP cortical fluorescence in ens mutant oocytes could be because of problems with ncMTOC anchoring or with ncMTOC formation. The authors should find a way to distinguish between these two possibilities. The authors could express Ens-Mut (described in Sung et al., 2008), which localises at the oocyte posterior, and test whether it recruits Shot/Patronin ncMTOCs to the posterior.

      We tried to obtain the fly stocks described in the 2008 paper by contacting former members of Pernille Rørth's laboratory. Unfortunately, we learned that the lab no longer exists and that all reagents, including the requested stocks, were either discarded or lost over time. To our knowledge, these materials are no longer available from any source. We regret that this limitation prevented us from performing the straightforward experiments suggested by the reviewer using these specific tools.

      7) According to the Materials and Methods, the Shot-GFP used in Fig.3 E-F and Fig.4 was the BDSC line 29042. This is Shot L(C), a full-length version of Shot missing the CH1 actin-binding domain that is crucial for Shot anchoring to the cortex. If the authors indeed used this version of Shot-GFP, the interpretation of the above experiments is very difficult.

      The Shot.L(C) isoform lacks the CH1 domain but retains the CH2 actin-binding motif. Truncated proteins containing this domain fused to GST retain a weak ability to bind actin in vitro. Importantly, the function of this isoform is context-dependent: it cannot rescue shot loss-of-function in neuron morphogenesis but fully restores Shot-dependent tracheal cell remodeling (Lee and Kolodziej, 2002).

      In our experiments, when the Shot.L(C) isoform was expressed under the control of a maternal driver, its localization to the oocyte cortex was comparable to that of the genomic Shot-YFP construct (new Figure S8). This demonstrates unambiguously that the CH1 domain is dispensable for Shot cortical localization in oocytes, and that CH2-mediated actin binding is sufficient for this localization. Of note, a recent study showed that actin networks are not equivalent, highlighting the need for specific Shot isoforms harboring specialized actin-binding domains (Nashchekin et al., 2024).

      We note that the expression level of Shot.L(C)-GFP in the oocyte appeared slightly lower than that of Shot-YFP (expressed under endogenous Shot regulatory sequences), as assessed by Western blot (Figure S8 A).

      Critically, Shot.L(C)-GFP expression was substantially lower than that of Shot.L(A)-GFP (which harbors both the CH1 and CH2 domains). Shot.L(A)-GFP was overexpressed (Figure S8 A) and ectopically localized on MTs in both nurse cells and the ooplasm (Figure S8 B, middle panel and arrow). These observations indicate that the Shot.L(C)-GFP rescue experiment was performed at near-physiological expression levels, strengthening the validity of our conclusions.

      8) Page 6 "converted in NCs, in a region adjacent to the ring canals, Dendra-Ens-labeled MTs were found in the oocyte compartment indicating they are able to travel from NC toward the oocyte through ring canals". I have difficulty seeing the translocation of MT through the ring canals. Perhaps it would be more obvious with a movie/picture showing only one channel. Considering that f Dendra-Ens appears in the oocyte much faster than MT transport through ring canals (140nm/s, Lu et al 2022), the authors are most probably observing the translocation of free Ens rather than Ens bound to MT. The authors should also mention that Ens movement from the NC to the oocyte has been shown before with Ens MBD in Lu et al 2022 with better resolution.

      We fully agree with the caveat raised by this reviewer: we may be observing the translocation of free Dendra-Ensconsin. The experiment was therefore removed and replaced by a reference to the work of the Gelfand lab. The movement of MTs travelling at ~140 nm/s from nurse cells toward the oocyte through the ring canals was reported by Lu et al. (2022) with very good resolution. Notably, this directed movement of MTs was measured using a fusion protein encompassing the Ens MT-binding domain. We decided to remove this inconclusive experiment and instead refer to this relevant study.

      9) Page 6: The co-localization of Ninein with Ens and Shot at the oocyte cortex (Figure 2A). I have difficulty seeing this co-localisation. Perhaps it would be more obvious in merged images of only two channels and with higher-resolution images.

      10) "a pool of the Ens-GFP co-localized with Ch-Patronin at cortical ncMTOCs at the anterior cortex (Figure 3A)". I also have difficulty seeing this.

      We have performed new high-resolution acquisitions that provide clearer and more convincing evidence for the cortical distribution of these proteins (revised Figure 2A-2C and Figure 4A). These improved images demonstrate that Ens, Ninein, Shot, and Patronin partially colocalize at cortical ncMTOCs, as initially proposed. Importantly, the new data also reveal a spatial distinction: while Ens localizes along microtubules extending from these cortical sites, Ninein appears confined to small puncta adjacent to, but also present on, cortical microtubules.

      11) "Ninein co-localizes with Ens at the oocyte cortex and partially along cortical microtubules, contributing to the maintenance of high Ens protein levels in the oocyte and its proper cortical targeting". I could not find any data showing the involvement of Ninein in the cortical targeting of Ens.

      We found decreased Ens localization to MTs and to the cell cortex region in the Nin mutant condition (new Figure S3 A-B).

      12) "our MT network analyses reveal the presence of numerous short MTs cytoplasmic clustered in an anterior pattern." "This low cortical recruitment of ncMTOCs is consistent with poor MT anchoring and their cytoplasmic accumulation." I could not find any data showing that short cortical MT observed at stage 10b in ens mutant and Khc RNAi were cytoplasmic and poorly anchored.

      The sentence was removed from the revised manuscript.

      13) "The egg chamber consists of interconnected cells where Dynein and Khc activities are spatially separated. Dynein facilitates transport from NCs to the oocyte, while Khc mediates both transport and advection within the oocyte." Dynein is involved in various activities in the oocyte. It anchors the oocyte nucleus and transports bcd and grk mRNA to mention a few.

      The text was amended to reflect Dynein's involvement in transport activities in the oocyte, with the appropriate references (lines 105-107).

      14) The cartoons in Fig.2H and 3I exaggerate the effect of Ninein and Ens on cortical ncMTOCs. According to the corresponding graphs, there is a 20 and 50% decrease in each case.

      The new cartoons (now revised Figures 3E and 4F) have been amended to reflect the ncMTOC values and also the MT orientation (Figure 3E).

      Significance

      Given the important concerns raised, the significance of the findings is difficult to assess at this stage.

      We sincerely thank the reviewer for their thorough evaluation of our manuscript. We have carefully addressed their concerns through substantial new experiments and analyses. We hope that the revised manuscript, in its current form, now provides the clarifications and additional evidence requested, and that our responses demonstrate the significance of our findings.

      Reviewer #4 (Evidence, reproducibility and clarity (Required)):

      Summary: This manuscript presents an investigation into the molecular mechanisms governing spatial activation of Kinesin-1 motor protein during Drosophila oogenesis, revealing a regulatory network that controls microtubule organization and cytoplasmic transport. The authors demonstrate that Ensconsin, a MAP7 family protein and Kinesin-1 activator, is spatially enriched in the oocyte through a dual mechanism involving Dynein-mediated transport from nurse cells and cortical maintenance by Ninein. This spatial enrichment of Ens is crucial for locally relieving Kinesin-1 auto-inhibition. The Ens/Khc complex promotes cortical recruitment of non-centrosomal microtubule organizing centers (ncMTOCs), which are essential for anchoring microtubules at the cortex, enabling the formation of long, parallel microtubule streams or "twisters" that drive cytoplasmic advection during late oogenesis. This work establishes a paradigm where motor protein activation is spatially controlled through targeted localization of regulatory cofactors, with the activated motor then participating in building its own transport infrastructure through ncMTOC recruitment and microtubule network organization.

      There's a lot to like about this paper! The data are generally lovely and nicely presented. The authors also use a combination of experimental approaches, combining genetics, live and fixed imaging, and protein biochemistry.

      We thank the reviewer for this enthusiastic and supportive review, which helped us further strengthen the manuscript.

      Concerns: Page 6: "to assay if elevation of Ninein levels was able to mis-regulate Ens localization, we overexpressed a tagged Ninein-RFP protein in the oocyte. At stage 9 the overexpressed Ninein accumulated at the anterior cortex of the oocyte and also generated large cortical aggregates able to recruit high levels of Ens (Figures 2D and 2H)... The examination of Ninein/Ens cortical aggregates obtained after Ninein overexpression showed that these aggregates were also able to recruit high levels of Patronin and Shot (Figures 2E and 2H)." Firstly, I'm not crazy about the use of "overexpressed" here, since there isn't normally any Ninein-RFP in the oocyte. In these experiments it has been therefore expressed, not overexpressed. Secondly, I don't understand what the reader is supposed to make of these data. Expression of a protein carrying a large fluorescent tag leads to large aggregates (they don't look cortical to me) that include multiple proteins - in fact, all the proteins examined. I don't understand this to be evidence of anything in particular, except that Ninein-RFP causes the accumulation of big multi-protein aggregates. While I can understand what the authors were trying to do here, I think that these data are inconclusive and should be de-emphasized.

      We have revised the manuscript by replacing "overexpressed" with "expressed" (lines 211 and 212). In addition, we now provide new localization data in both cortical (new Figure S4 A, top) and medial focal planes (new Figure S4 A, bottom), demonstrating that Ninein puncta (the term used in Rosen et al., 2019), rather than aggregates, are located cortically. We also show that live IRP-labelled MTs do not colocalize with Ninein-RFP puncta. In light of the new experiments and the comments from the other reviewers, the corresponding text has been revised and de-emphasized accordingly.

      Page 7: "Co-immunoprecipitations experiments revealed that Patronin was associated with Shot-YFP, as shown previously (Nashchekin et al., 2016), but also with EnsWT-GFP, indicating that Ens, Shot and Patronin are present in the same complex (Figure 3B)." I do not agree that association between Ens-GFP and Patronin indicates that Ens is in the same complex as Shot and Patronin. It is also very possible that there are two (or more) distinct protein complexes. This conclusion could therefore be softened. Instead of "indicating" I suggest "suggesting the possibility."

      We have toned down this conclusion and now write "suggesting the possibility" (lines 238-239).

      Page 7: "During stage 9, the average subcortical MT length, taken at one focal plane in live oocytes (see methods)..." I appreciate that the authors have been careful to describe how they measured MT length, as this is a major point for interpretation. I think the reader would benefit from an explanation of why they decided to measure in only one focal plane and how that decision could impact the results.

      We appreciate this helpful suggestion. Cortical microtubules are indeed highly dynamic and extend in multiple directions, including along the Z-axis. Moreover, their diameter is extremely small (approximately 25 nm), making it technically challenging to accurately measure their full length over several microns with our Zeiss Airyscan confocal microscope: the acquisition of Z-stacks is relatively slow and therefore not well suited to capturing the rapid dynamics of these microtubules. Consequently, our length measurements represent a compromise and most likely underestimate the actual lengths of microtubules growing outside the focal plane. We note that other groups have encountered similar technical limitations (Parton et al., 2011).
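      To give a rough sense of the magnitude of this underestimation, the short calculation below assumes an idealized straight filament of true length L tilted by an angle theta out of the focal plane, so that only L*cos(theta) is visible in a single optical section; real bundles are curved and may leave the section entirely, so the true error can be larger.

          # Back-of-envelope projection error for a straight filament tilted out of the
          # focal plane (illustrative assumption, not a measurement).
          import numpy as np

          for theta_deg in (15, 30, 45):
              underestimate = 1 - np.cos(np.deg2rad(theta_deg))
              print(f"tilt {theta_deg:>2} deg: in-plane length underestimated by {underestimate:.0%}")
          # tilt 15 deg: ~3%; tilt 30 deg: ~13%; tilt 45 deg: ~29%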

      Page 7: "... the MTs exhibited an orthogonal orientation relative to the anterior cortex (Figures 4A left panels, 4C and 4E)." This phenotype might not be obvious to readers. Can it be quantified?

      We have now analyzed the orientation of microtubules (MTs) along the dorso-ventral axis. Our analysis shows that ens, Khc RNAi oocytes (new Figure 5B), and, to a lesser extent, Nin mutant oocytes (new Figure 3D), display a more random MT orientation compared to wild-type (WT) oocytes. In WT oocytes, MTs are predominantly oriented toward the posterior pole, consistent with previous findings (Parton et al., 2011).

      Page 8: "Altogether, the analyses of Ens and Khc defective oocytes suggested that MT organization defects during late oogenesis (stage 10B) were caused by an initial failure of ncMTOCs to reach the cell cortex. Therefore, we hypothesized that overexpression of the ncMTOC component Shot could restore certain aspects of microtubule cortical organization in ens-deficient oocytes. Indeed, Shot overexpression (Shot OE) was sufficient to rescue the presence of long cortical MTs and ooplasmic advection in most ens oocytes (9/14)..." The data are clear, but the explanation is not. Can the authors please explain why adding in more of an ncMTOC component (Shot) rescues a defect of ncMTOC cortical localization?

      We propose that cytoplasmic ncMTOCs can bind the cell cortex via the Shot subunit, which is so far the only component that harbors actin-binding motifs. Elevating cytoplasmic Shot levels therefore increases the probability that Shot encounters the cortex by diffusion when flows are absent. This is now explained in lines 282-285.

      I'm grateful to the authors for their inclusion of helpful diagrams, as in Figures 1G and 2H. I think the manuscript might benefit from one more of these at the end, illustrating the ultimate model.

      We have carefully considered and followed the reviewer’s suggestions. In response, we have included a new figure illustrating our proposed model: the recruitment of ncMTOCs to the cell cortex through low Khc-mediated flows at stage 9 enhances cortical microtubule density, which in turn promotes self-amplifying flows (new Figure 7, panels A to C). Note that this Figure also depicts activation of Khc by loss of auto-inhibition (Figure 7, panel D).

      I'm sorry to say that the language could use quite a bit of polishing. There are missing and extraneous commas. There is also regular confusion between the use of plural and singular nouns. Some early instances include:

      1. Page 3: thought instead of "thoughted."
      2. Page 5: "A previous studies have revealed"
      3. Page 5: "A significantly loss"
      4. Page 6: "troughs ring canals" should be "through ring canals"
      5. Page 7: lives stage 9 oocytes
      6. Page 7: As ens and Khc RNAi oocytes exhibits
      7. Page 7: we examined in details
      8. Page 7: This average MT length was similar in Khc RNAi and ens mutant oocyte..

      We apologize for these errors and have made the appropriate corrections to the manuscript.

      Reviewer #4 (Significance (Required)):

      This work makes a nice conceptual advance by showing that motor activation controls its own transport infrastructure, a paradigm that could extend to other systems requiring spatially regulated transport.

      We thank the reviewer for their evaluation of the manuscript and their helpful comments.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #4

      Evidence, reproducibility and clarity

      Summary: This manuscript presents an investigation into the molecular mechanisms governing spatial activation of Kinesin-1 motor protein during Drosophila oogenesis, revealing a regulatory network that controls microtubule organization and cytoplasmic transport. The authors demonstrate that Ensconsin, a MAP7 family protein and Kinesin-1 activator, is spatially enriched in the oocyte through a dual mechanism involving Dynein-mediated transport from nurse cells and cortical maintenance by Ninein. This spatial enrichment of Ens is crucial for locally relieving Kinesin-1 auto-inhibition. The Ens/Khc complex promotes cortical recruitment of non-centrosomal microtubule organizing centers (ncMTOCs), which are essential for anchoring microtubules at the cortex, enabling the formation of long, parallel microtubule streams or "twisters" that drive cytoplasmic advection during late oogenesis. This work establishes a paradigm where motor protein activation is spatially controlled through targeted localization of regulatory cofactors, with the activated motor then participating in building its own transport infrastructure through ncMTOC recruitment and microtubule network organization.

      There's a lot to like about this paper! The data are generally lovely and nicely presented. The authors also use a combination of experimental approaches, combining genetics, live and fixed imaging, and protein biochemistry.

      Concerns:

      Page 6: "to assay if elevation of Ninein levels was able to mis-regulate Ens localization, we overexpressed a tagged Ninein-RFP protein in the oocyte. At stage 9 the overexpressed Ninein accumulated at the anterior cortex of the oocyte and also generated large cortical aggregates able to recruit high levels of Ens (Figures 2D and 2H)... The examination of Ninein/Ens cortical aggregates obtained after Ninein overexpression showed that these aggregates were also able to recruit high levels of Patronin and Shot (Figures 2E and 2H)." Firstly, I'm not crazy about the use of "overexpressed" here, since there isn't normally any Ninein-RFP in the oocyte. In these experiments it has been therefore expressed, not overexpressed. Secondly, I don't understand what the reader is supposed to make of these data. Expression of a protein carrying a large fluorescent tag leads to large aggregates (they don't look cortical to me) that include multiple proteins - in fact, all the proteins examined. I don't understand this to be evidence of anything in particular, except that Ninein-RFP causes the accumulation of big multi-protein aggregates. While I can understand what the authors were trying to do here, I think that these data are inconclusive and should be de-emphasized.

      Page 7: "Co-immunoprecipitations experiments revealed that Patronin was associated with Shot-YFP, as shown previously (Nashchekin et al., 2016), but also with EnsWT-GFP, indicating that Ens, Shot and Patronin are present in the same complex (Figure 3B)." I do not agree that association between Ens-GFP and Patronin indicates that Ens is in the same complex as Shot and Patronin. It is also very possible that there are two (or more) distinct protein complexes. This conclusion could therefore be softened. Instead of "indicating" I suggest "suggesting the possibility."

      Page 7: "During stage 9, the average subcortical MT length, taken at one focal plane in live oocytes (see methods)..." I appreciate that the authors have been careful to describe how they measured MT length, as this is a major point for interpretation. I think the reader would benefit from an explanation of why they decided to measure in only one focal plane and how that decision could impact the results.

      Page 7: "... the MTs exhibited an orthogonal orientation relative to the anterior cortex (Figures 4A left panels, 4C and 4E)." This phenotype might not be obvious to readers. Can it be quantified?

      Page 8: "Altogether, the analyses of Ens and Khc defective oocytes suggested that MT organization defects during late oogenesis (stage 10B) were caused by an initial failure of ncMTOCs to reach the cell cortex. Therefore, we hypothesized that overexpression of the ncMTOC component Shot could restore certain aspects of microtubule cortical organization in ens-deficient oocytes. Indeed, Shot overexpression (Shot OE) was sufficient to rescue the presence of long cortical MTs and ooplasmic advection in most ens oocytes (9/14)..." The data are clear, but the explanation is not. Can the authors please explain why adding in more of an ncMTOC component (Shot) rescues a defect of ncMTOC cortical localization?

      I'm grateful to the authors for their inclusion of helpful diagrams, as in Figures 1G and 2H. I think the manuscript might benefit from one more of these at the end, illustrating the ultimate model.

      I'm sorry to say that the language could use quite a bit of polishing. There are missing and extraneous commas. There is also regular confusion between the use of plural and singular nouns. Some early instances include:

      1. Page 3: thought instead of "thoughted."
      2. Page 5: "A previous studies have revealed"
      3. Page 5: "A significantly loss"
      4. Page 6: "troughs ring canals" should be "through ring canals"
      5. Page 7: lives stage 9 oocytes
      6. Page 7: As ens and Khc RNAi oocytes exhibits
      7. Page 7: we examined in details
      8. Page 7: This average MT length was similar in Khc RNAi and ens mutant oocyte..

      Significance

      This work makes a nice conceptual advance by showing that motor activation controls its own transport infrastructure, a paradigm that could extend to other systems requiring spatially regulated transport.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #3

      Evidence, reproducibility and clarity

      The manuscript of Berisha et al., investigates the role of Esconsin (Ens), Kinesin-1 and Ninein in organisation of microtubules (MT) in Drosophila oocyte. At stage 9 oocytes Kinesin-1 transports oskar mRNA, a posterior determinant, along MT that are organised by ncMTOCs. At stage 10b, Kinesin-1 induces cytoplasmic advection to mix the contents of the oocyte. Ensconsin/Map7 is a MT associated protein (MAP) that uses its MT-binding domain (MBD) and kinesin binding domain (KBD) to recruit Kinesin-1 to the microtubules and to stimulate the motility of MT-bound Kinesin-1. Using various new Ens transgenes, the authors demonstrate the requirement of Ens MBD and Ninein in Ens localisation to the oocyte where Ens activates Kinesin-1 using its KBD. The authors also claim that Ens, Kinesin-1 and Ninein are required for the accumulation of ncMTOCs at the oocyte cortex and argue that the detachment of the ncMTOCs from the cortex accounts for the reduced localisation of oskar mRNA at stage 9 and the lack of cytoplasmic streaming at stage 10b.

      Although the manuscript contains several interesting observations, the authors' conclusions are not sufficiently supported by their data. The structure-function analysis of Ensconsin (Ens) is potentially publishable, but the conclusions on ncMTOC anchoring and cytoplasmic streaming are not convincing.

      1. The main conclusion of the manuscript is that "MT advection failure in Khc and ens in late oogenesis stems from defective cortical ncMTOCs recruitment". This completely overlooks the abundant evidence that Kinesin-1 directly drives cytoplasmic streaming by transporting vesicles and microtubules along microtubules, which then move the cytoplasm by advection (Palacios et al., 2002; Serbus et al, 2005; Lu et al, 2016). Since Kinesin-1 generates the flows, one cannot conclude that the effect of khc and ens mutants on cortical ncMTOC positioning has any direct effect on these flows, which do not occur in these mutants.
      2. The authors claim that streaming phenotypes of ens and khs mutants are due to a decrease in microtubule length caused by the defective localisation of ncMTOCs. In addition to the problem raised above, However, I am not convinced that they can make accurate measurements of microtubule length from confocal images like those shown in Figure 4. Firstly, they are measuring the length of bundles of microtubules and cannot resolve individual microtubules. This problem is compounded by the fact that the microtubules do not align into parallel bundles in the mutants. This will make the "microtubules" appear shorter in the mutants. In addition, the alignment of the microtubules in wild-type allows one to choose images in which the microtubule lie in the imaging plane, whereas the more disorganised arrangement of the microtubules in the mutants means that most microtubules will cross the imaging plane, which precludes accurate measurements of their length.
      3. "To investigate whether the presence of these short microtubules in ens and Khc RNAi oocytes is due to defects in microtubule anchoring or is also associated with a decrease in microtubule polymerization at their plus ends, we quantified the velocity and number of EB1comets, which label growing microtubule plus ends (Figure S3)." I do not understand how the anchoring or not of microtubule minus ends to the cortex determines how far their plus ends grow, and these measurements fall short of showing that plus end growth is unaffected. It has already been shown that the Kinesin-1-dependent transport of Dynactin to growing microtubule plus ends increases the length of microtubules in the oocyte because Dynactin acts as an anti-catastrophe factor at the plus ends. Thus, khc mutants should have shorter microtubules independently of any effects on ncMTOC anchoring. The measurements of EB1 comet speed and frequency in FigS2 will not detect this change and are not relevant for their claims about microtubule length. Furthermore, the authors measured EB1 comets at stage 9 (where they did not observe short MT) rather than at stage 10b. The authors' argument would be better supported if they performed the measurements at stage 10b.
      4. The Shot overexpression experiments presented in Fig.3 E-F, Fig.4D and TableS1 are very confusing. Originally , the authors used Shot-GFP overexpression at stage 9 to show that there is a decrease of ncMTOCs at the cortex in ens mutants (Fig.3 E-F) and speculated that this caused the defects in MT length and cytoplasmic advection at stage 10B. However the authors later state on page 8 that : "Shot overexpression (Shot OE) was sufficient to rescue the presence of long cortical MTs and ooplasmic advection in most ens oocytes (9/14), resembling the patterns observed in controls (Figures 4B right panel and 4D). Moreover, while ens females were fully sterile, overexpression of Shot was sufficient to restore that loss of fertility (Table S1)". Is this the same UAS Shot-GFP and VP16 Gal4 used in both experiments? If so, this contradictions puts the authors conclusions in question.
      5. The authors based their conclusions about the involvement of Ens, Kinesin-1 and Ninein in ncMTOC anchoring on the decrease in cortical fluorescence intensity of Shot-YFP and Patronin-YFP in the corresponding mutant backgrounds. However, there is a large variation in average Shot-YFP intensity between control oocytes in different experiments. In Fig. 2F-G the average level of Shot-YFP in the control is 130 AU while in Fig. 3G-H it is only 55 AU. This makes me worry about reliability of such measurements and the conclusions drawn from them.
      6. The decrease in the intensity of Shot-YFP and Patronin-YFP cortical fluorescence in ens mutant oocytes could be because of problems with ncMTOC anchoring or with ncMTOC formation. The authors should find a way to distinguish between these two possibilities. The authors could express Ens-Mut (described in Sung et al 2008), which localises at the oocyte posterior and test whether it recruits Shot/Patronin ncMTOCs to the posterior.
      7. According to the Materials and Methods, the Shot-GFP used in Fig.3 E-F and Fig.4 was the BDSC line 29042. This is Shot L(C), a full-length version of Shot missing the CH1 actin-binding domain that is crucial for Shot anchoring to the cortex. If the authors indeed used this version of Shot-GFP, the interpretation of the above experiments is very difficult.
      8. Page 6 "converted in NCs, in a region adjacent to the ring canals, Dendra-Ens-labeled MTs were found in the oocyte compartment indicating they are able to travel from NC toward the oocyte through ring canals". I have difficulty seeing the translocation of MT through the ring canals. Perhaps it would be more obvious with a movie/picture showing only one channel. Considering that Dendra-Ens appears in the oocyte much faster than MT transport through ring canals (140 nm/s, Lu et al 2022), the authors are most probably observing the translocation of free Ens rather than Ens bound to MT. The authors should also mention that Ens movement from the NC to the oocyte has been shown before with Ens MBD in Lu et al 2022 with better resolution.
      9. Page 6: The co-localization of Ninein with Ens and Shot at the oocyte cortex (Figure 2A). I have difficulty seeing this co-localisation. Perhaps it would be more obvious in merged images of only two channels and with higher resolution images
      10. "a pool of the Ens-GFP co-localized with Ch-Patronin at cortical ncMTOCs at the anterior cortex (Figure 3A)". I also have difficulty seeing this.
      11. "Ninein co-localizes with Ens at the oocyte cortex and partially along cortical microtubules, contributing to the maintenance of high Ens protein levels in the oocyte and its proper cortical targeting". I could not find any data showing the involvement of Ninein in the cortical targeting of Ens.
      12. "our MT network analyses reveal the presence of numerous short MTs cytoplasmic clustered in an anterior pattern." "This low cortical recruitment of ncMTOCs is consistent with poor MT anchoring and their cytoplasmic accumulation." I could not find any data showing that short cortical MT observed at stage 10b in ens mutant and Khc RNAi were cytoplasmic and poorly anchored.
      13. "The egg chamber consists of interconnected cells where Dynein and Khc activities are spatially separated. Dynein facilitates transport from NCs to the oocyte, while Khc mediates both transport and advection within the oocyte." Dynein is involved in various activities in the oocyte. It anchors the oocyte nucleus and transports bcd and grk mRNA to mention a few.
      14. The cartoons in Fig.2H and 3I exaggerate the effect of Ninein and Ens on cortical ncMTOCs. According to the corresponding graphs, there is a 20 and 50% decrease in each case.

      Significance

      Given the important concerns raised, the significance of the findings is difficult to assess at this stage.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      In this manuscript, Berisha et al. investigate how microtubule (MT) organization is spatially regulated during Drosophila oogenesis. The authors identify a mechanism in which the Kinesin-1 activator Ensconsin/MAP7 is transported by dynein and anchored at the oocyte cortex via Ninein, enabling localized activation of Kinesin-1. Disruption of this pathway impairs ncMTOC recruitment and MT anchoring at the cortex. The authors combine genetic manipulation with high-resolution microscopy and use three key readouts to assess MT organization during mid-to-late oogenesis: cortical MT formation, localization of posterior determinants, and ooplasmic streaming. Notably, Kinesin-1, in concert with its activator Ens/MAP7, contributes to organizing the microtubule network it travels along. Overall, the study presents interesting findings, though we have several concerns we would like the authors to address.

      Ensconsin enrichment in the oocyte

      1. Enrichment in the oocyte
        • Ensconsin is a MAP that binds MTs. Given that microtubule density in the oocyte significantly exceeds that in the nurse cells, its enrichment may passively reflect this difference. To assess whether the enrichment is specific, could the authors express a non-Drosophila MAP (e.g., mammalian MAP1B) to determine whether it also preferentially localizes to the oocyte?
        • The ability of ens-wt and ens-LowMT to induce tubulin polymerization according to the light scattering data (Fig. S1J) is minimal and does not reflect dramatic differences in localization. The authors should verify that, in all cases, the polymerization product in their in vitro assays is microtubules rather than other light-scattering aggregates. What is the control in these experiments? If it is just purified tubulin, it should not form polymers at physiological concentrations.
      2. Photoconversion caveats: MAPs are known to dynamically associate and dissociate from microtubules. Therefore, interpretation of the Ens photoconversion data should be made with caution. The expanding red signal from the nurse cells to the oocyte may reflect any combination of dynein-mediated MT transport and passive diffusion of unbound Ensconsin. Notably, photoconversion of a soluble protein in the nurse cells would also result in a gradual increase in red signal in the oocyte, independent of active transport. We encourage the authors to more thoroughly discuss these caveats. It may also help to present the green and red channels side by side rather than as merged images, to allow readers to assess signal movement and spatial patterns better.
      3. Reduction of Shot at the anterior cortex
        • Shot is known to bind strongly to F-actin, and in the Drosophila ovary, its localization typically correlates more closely with F-actin structures than with microtubules, despite being an MT-actin crosslinker. Therefore, the observed reduction of cortical Shot in ens, nin mutants, and Khc-RNAi oocytes is unexpected. It would be important to determine whether cortical F-actin is also disrupted in these conditions, which should be straightforward to assess via phalloidin staining.
        • MTs are barely visible in Fig. 3A, which is meant to demonstrate Ens-GFP colocalization with tubulin. Higher-quality images are needed.
      4. MT gradient in stage 9 oocytes In ens-/-, nin-/-, and Khc-RNAi oocytes, is there any global defect in the stage 9 microtubule gradient? This information would help clarify the extent to which cortical localization defects reflect broader disruptions in microtubule polarity.
      5. Role of Ninein in cortical anchoring The requirement for Ninein in cortical anchorage is the least convincing aspect of the manuscript and somewhat disrupts the narrative flow. First, it is unclear whether Ninein exhibits the same oocyte-enriched localization pattern as Ensconsin. Is Ninein detectable in nurse cells? Second, the Ninein antibody signal appears concentrated in a small area of the anterior-lateral oocyte cortex (Fig. 2A), yet Ninein loss leads to reduced Shot signal along a much larger portion of the anterior cortex (Fig. 2F)-a spatial mismatch that weakens the proposed functional relationship. Third, Ninein overexpression results in cortical aggregates that co-localize with Shot, Patronin, and Ensconsin. Are these aggregates functional ncMTOCs? Do microtubules emanate from these foci?
      6. Inconsistency of Khc^MutEns rescue The Khc^MutEns variant partially rescues cortical MT formation and restores a slow but measurable cytoplasmic flow yet it fails to rescue Staufen localization (Fig. 5). This raises questions about the consistency and completeness of the rescue. Could the authors clarify this discrepancy or propose a mechanistic rationale?

      Minor points:

      1. The pUbi-attB-Khc-GFP vector was used to generate the Khc^MutEns transgenic line, presumably under control of the ubiquitous ubi promoter. Could the authors specify which attP landing site was used? Additionally, are the transgenic flies viable and fertile, given that Kinesin-1 is hyperactive in this construct?
      2. On page 11 (Discussion, section titled "A dual Ensconsin oocyte enrichment mechanism achieves spatial relief of Khc inhibition"), the statement "many mutations in Kif5A are causal of human diseases" would benefit from a brief clarification. Since not all readers may be familiar with kinesin gene nomenclature, please indicate that KIF5A is one of the three human homologs of Kinesin heavy chain.
      3. On page 16 (Materials and Methods, "Immunofluorescence in fly ovaries"), the sentence "Ovaries were mounted on a slide with ProlonGold medium with DAPI (Invitrogen)" should be corrected to "ProLong Gold."

      Significance

      This study shows that enrichment of MAP7/ensconsin in the oocyte is the mechanism of kinesin-1 activation there and is important for cytoplasmic streaming and for the localization of non-centrosomal microtubule-organizing centers to the oocyte cortex.

    5. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      This paper addresses a very interesting problem of non-centrosomal microtubule organization in developing Drosophila oocytes. Using genetics and imaging experiments, the authors reveal an interplay between the activity of kinesin-1, together with its essential cofactor Ensconsin, and microtubule organization at the cell cortex by the spectraplakin Shot, minus-end binding protein Patronin and Ninein, a protein implicated in microtubule minus end anchoring. The authors demonstrate that the loss of Ensconsin affects the cortical accumulation non-centrosomal microtubule organizing center (ncMTOC) proteins, microtubule length and vesicle motility in the oocyte, and show that this phenotype can be rescued by constitutively active kinesin-1 mutant, but not by Ensconsin mutants deficient in microtubule or kinesin binding. The functional connection between Ensconsin, kinesin-1 and ncMTOCs is further supported by a rescue experiment with Shot overexpression. Genetics and imaging experiments further implicate Ninein in the same pathway. These data are a clear strength of the paper; they represent a very interesting and useful addition to the field.

      The weaknesses of the study are two-fold. First, the paper seems to lack a clear molecular model, uniting the observed phenomenology with the molecular functions of the studied proteins. Most importantly, it is not clear how kinesin-based plus-end directed transport contributes to cortical localization of ncMTOCs and regulation of microtubule length.

      Second, not all conclusions and interpretations in the paper are supported by the presented data. Below is a list of specific comments, outlining the concerns, in the order of appearance in the paper/figures.

      1. Figure 1. The statement: "Ens loading on MTs in NCs and their subsequent transport by Dynein toward ring canals promotes the spatial enrichment of the Khc activator Ens in the oocyte" is not supported by data. The authors do not demonstrate that Ens is actually transported from the nurse cells to the oocyte while being attached to microtubules. They do show that the intensity of Ensconsin correlates with the intensity of microtubules, that the distribution of Ensconsin depends on its affinity to microtubules and that an Ensconsin pool locally photoactivated in a nurse cell can redistribute to the oocyte (and throughout the nurse cell) by what seems to be diffusion. The provided images suggest that Ensconsin passively diffuses into the oocyte and accumulates there because of higher microtubule density, which depends on dynein. To prove that Ensconsin is indeed transported by dynein in the microtubule-bound form, one would need to measure the residence time of Ensconsin on microtubules and demonstrate that it is longer than the time needed to transport microtubules by dynein into the oocyte; ideally, one would like to see movement of individual microtubules labelled with photoconverted Ensconsin from a nurse cell into the oocyte. Since microtubules are not enriched in the oocyte of the dynein mutant, analysis of Ensconsin intensity in this mutant is not informative and does not reveal the mechanism of Ensconsin accumulation.
      2. Figure 2. According to the abstract, this figure shows that Ensconsin is "maintained at the oocyte cortex by Ninein". However, the figure doesn't seem to prove it - it shows that oocyte enrichment of Ensonsin is partially dependent on Ninein, but this applies to the whole cell and not just to the cell cortex. Furthermore, it is not clear whether Ninein mutation affects microtubule density, which in turn would affect Ensconsin enrichment, and therefore, it is not clear whether the effect of Ninein loss on Ensconsin distribution is direct or indirect. The observation that the aggregates formed by overexpressed Ninein accumulate other proteins, including Ensconsin, supports, though does not prove their interactions. Furthermore, there is absolutely no proof that Ninein aggregates are "ncMTOCs". Unless the authors demonstrate that these aggregates nucleate or anchor microtubules (for example, by detailed imaging of microtubules and EB1 comets), the text and labels in the figure would need to be altered.

      Minor comment: Note that a "ratio" (Figure 2C) is just a ratio, and should not be expressed in arbitrary units.

      3. Figure 3B: immunoprecipitation results cannot be interpreted because the immunoprecipitated proteins (GFP, Ens-GFP, Shot-YFP) are not shown. It is also not clear that this biochemical experiment is useful. If the authors would like to suggest that Ensconsin directly binds to Patronin, the interaction would need to be properly mapped at the protein domain level.

      4. One of the major phenotypes observed by the authors in the Ens mutant is the loss of long microtubules. The authors make strong conclusions about the independence of this phenotype from the parameters of microtubule plus-end growth, but in fact, the quality of their data does not allow such a conclusion, because they only measured the number of EB1 comets and their growth rate but not the catastrophe, rescue or pausing frequency. Note that kinesin-1 has been implicated in promoting microtubule damage and rescue (doi: 10.1016/j.devcel.2021). In the absence of such measurements, one cannot conclude whether short microtubules arise through defects in the minus-end, plus-end or microtubule shaft regulation pathways. It is important to note that a spectraplakin, like Shot, can potentially affect different pathways, particularly when overexpressed. Unjustified conclusions should be removed: the authors do not provide sufficient data to conclude that "ens and Khc oocytes MT organizational defects are caused by decreased ncMTOC cortical anchoring", because the actual cortical microtubule anchoring was not measured.

      Minor comment: Microtubule growth velocity must be expressed in units of length per time, not as a normalized value, to enable evaluation of the quality of the data.

      5. A significant part of the Discussion is dedicated to the potential role of Ensconsin in cortical microtubule anchoring and the potential transport of ncMTOCs by kinesin. It is obviously fine that the authors discuss different theories, but it would be very helpful if they first stated what has been directly measured and established by their data, and then what the putative, currently speculative explanations of these data are.

      Minor comment: The writing, and particularly the grammar, need to be significantly improved throughout, which should be easy with current language tools. Examples: "ncMTOCs recruitment" should be "ncMTOC recruitment"; "Vesicles speed" should be "Vesicle speed"; "Nin oocytes harbored a WT growth" - it is unclear what this means; etc. Many paragraphs are very long and difficult to read. Shorter paragraphs would make the authors' line of thought more accessible to the reader.

      Significance

      This paper represents a significant advance in understanding non-centrosomal microtubule organization in general, and in developing Drosophila oocytes in particular, by connecting the microtubule minus-end regulation pathway to Kinesin-1- and Ensconsin/MAP7-dependent transport. The genetics and imaging data are of good quality and are appropriately presented and quantified. These are clear strengths of the study, which will make it interesting to researchers studying the cytoskeleton, microtubule-associated proteins and motors, and fly development.

      The weaknesses of this study are due to the lack of clarity of the overall molecular model, which limits the potential impact of the study on the field. Some interpretations are not sufficiently supported by data, but this can be addressed by more precise and careful writing, without extensive additional experimentation.

      My expertise is in the cell biology and biochemistry of the microtubule cytoskeleton, including both microtubule-associated proteins and microtubule motors.

    1. Reviewer #1 (Public review):

      Summary:

      This paper presents three experiments. Experiments 1 and 3 use a target detection paradigm to investigate the speed of statistical learning. The first experiment is a replication of Batterink, 2017, in which participants are presented with streams of uniform-length, trisyllabic nonsense words and asked to detect a target syllable. The results replicate previous findings, showing that learning (in the form of response time facilitation to later-occurring syllables within a nonsense word) occurs after a single exposure to a word. In the second experiment, participants are presented with streams of variable length nonsense words (two trisyllabic words and two disyllabic words), and perform the same task. A similar facilitation effect was observed as in Experiment 1. In Experiment 3 (newly added in the Revised manuscript), an adult version of the study by Johnson and Tyler is included. Participants were exposed to streams of words of either uniform length (all disyllabic) or mixed length (two disyllabic, two trisyllabic) and then asked to perform a familiarity judgment on a 1-5 scale on two words from the stream and two part-words. Performance was better in the uniform length condition.

      The authors interpret these findings as evidence that target detection requires mechanisms different from segmentation. They present the results of a computational model that simulates performance in the target detection task, and find that a bigram model can produce facilitation effects similar to those observed in human participants in Experiments 1 and 2 (though this model was not applied to the data of Experiment 3 to test whether it would also produce human-like effects there). PARSER was also tested and produced results that differed from those observed in humans across all three experiments. The authors conclude that the mechanisms involved in the target detection task are different from those involved in the word segmentation task.

      Strengths:

      The paper presents multiple experiments that provide internal replication of a key experimental finding, in which response times are facilitated after a single exposure to an embedded pseudoword. Both experimental data and results from a computational model are presented, providing converging approaches for understanding and interpreting the main results. The data are analyzed very thoroughly using mixed effects models with multiple explanatory factors. The addition of Experiment 3 provides direct evidence that the profiles of performance for familiarity ratings and target detection differ as a function of word length variability.

      Weaknesses:

      (1) The concept of segmentation is still not quite clear. The authors seem to treat the testing procedure of Experiment 3 as synonymous with segmentation. But the ability to more strongly endorse words from the stream versus part-words as familiar does not necessarily mean that they have been successfully "segmented", as I elaborated on in my earlier review. In my view, it would be clearer to refer to segmentation as the mechanism or conceptual construct of segmenting continuous speech into discrete words. This ability to accurately segment component words could support familiarity judgments but is not necessary for above-chance familiarity or recognition judgments, which could be supported by more general memory signals. In other words, segmentation as an underlying ability is sufficient but not necessary for above-chance performance on familiarity-driven measures such as the one used in experiment 3.

      (2) The addition of Experiment 3 strengthens the revised paper and provides more direct evidence of dissociations as a function of word length on the two tasks (target detection and familiarity ratings), compared with the prior strategy of relying on previous work for this claim. However, it is not clear why the authors chose not to use the same stimuli as in Experiments 1 and 2, which would have allowed more direct comparisons. It should also be specified whether test items in the UWL and MWL conditions were matched for overall frequency during exposure. Currently, the text does not specify whether test words in the UWL condition were taken from the high-frequency or low-frequency group; if they were taken from the high-frequency group, this would of course be a confound when comparing to the MWL condition. Finally, the definition of part-words should also be clarified.

      (3) The framing and argument for a prediction/anticipation mechanism were dropped in the Revised manuscript, but there are still a few instances where this framing and interpretation remain. For example, Abstract: "we found that a prediction mechanism, rather than clustering, could explain the data from target detection"; Discussion, page 43: "Together, these results suggest that a simple prediction-based mechanism can explain the results from the target detection task, and clustering-based approaches such as PARSER cannot, contrary to previous claims."

      Minor (4) It was unclear why a conceptual replication of Batterink (2017) was conducted, given that the target syllables at the beginning and end of the streams were immediately dropped from further analysis. Why include syllable targets at these positions in the design if they are not analyzed?

      (5) Figures 3 and 4 are plotted on different scales, which makes it difficult to visually compare the effects between word length conditions.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      This paper presents two experiments, both of which use a target detection paradigm to investigate the speed of statistical learning. The first experiment is a replication of Batterink, 2017, in which participants are presented with streams of uniform-length, trisyllabic nonsense words and asked to detect a target syllable. The results replicate previous findings, showing that learning (in the form of response time facilitation to later-occurring syllables within a nonsense word) occurs after a single exposure to a word. In the second experiment, participants are presented with streams of variable-length nonsense words (two trisyllabic words and two disyllabic words) and perform the same task. A similar facilitation effect was observed as in Experiment 1. The authors interpret these findings as evidence that target detection requires mechanisms different from segmentation. They present results of a computational model to simulate results from the target detection task and find that an "anticipation mechanism" can produce facilitation effects, without performing segmentation. The authors conclude that the mechanisms involved in the target detection task are different from those involved in the word segmentation task.

      Strengths:

      The paper presents multiple experiments that provide internal replication of a key experimental finding, in which response times are facilitated after a single exposure to an embedded pseudoword. Both experimental data and results from a computational model are presented, providing converging approaches for understanding and interpreting the main results. The data are analyzed very thoroughly using mixed effects models with multiple explanatory factors.

      Weaknesses:

      In my view, the main weaknesses of this study relate to the theoretical interpretation of the results.

      (1) The key conclusion from these findings is that the facilitation effect observed in the target detection paradigm is driven by a different mechanism (or mechanisms) than those involved in word segmentation. The argument here I think is somewhat unclear and weak, for several reasons:

      First, there appears to be some blurring in what exactly is meant by the term "segmentation" with some confusion between segmentation as a concept and segmentation as a paradigm.

      Conceptually, segmentation refers to the segmenting of continuous speech into words. However, this conceptual understanding of segmentation (as a theoretical mechanism) is not necessarily what is directly measured by "traditional" studies of statistical learning, which typically (at least in adults) involve exposure to a continuous speech stream followed by a forced-choice recognition task of words versus recombined foil items (part-words or nonwords). To take the example provided by the authors, a participant presented with the sequence GHIABCDEFABCGHI may endorse ABC as being more familiar than BCG, because ABC is presented more frequently together and the learned association between A and B is stronger than between C and G. However, endorsement of ABC over BCG does not necessarily mean that the participant has "segmented" ABC from the speech stream, just as faster reaction times in responding to syllable C versus A do not necessarily indicate successful segmentation. As the authors argue on page 7, "an encounter to a sequence in which two elements co-occur (say, AB) would theoretically allow the learner to use the predictive relationship during a subsequent encounter (that A predicts B)." By the same logic, encoding the relationship between A and B could also allow for the above-chance endorsement of items that contain AB over items containing a weaker relationship.

      Both recognition performance and facilitation through target detection reflect different outcomes of statistical learning. While they may reflect different aspects of the learning process and/or dissociable forms of memory, they may best be viewed as measures of statistical learning, rather than mechanisms in and of themselves.

      Thanks for this nuanced discussion; this is an important point that R2 also raised. We agree that segmentation can refer both to an experimental paradigm and to a mechanism that accounts for learning in that paradigm. In the segmentation paradigm, participants are asked to identify which items they believe to be (whole) words from the continuous syllable stream. In the target-detection paradigm, participants are not asked to identify words from continuous streams; instead, they respond to occurrences of a particular syllable. Learners may employ one mechanism in these two tasks, or they may employ separate mechanisms. It is also true that, if all we had were positive evidence for both experimental paradigms - that is, if learners succeeded in segmentation tasks as well as in target detection tasks with different types of sequences - we would have no grounds for positing different mechanisms; as you correctly suggested, evidence for segmenting AB and for processing B faster following A is not, by itself, evidence for different mechanisms.

      However, that is not the case. When the syllable sequences contain same-length subsequences (i.e., words), learning is indeed successful in both segmentation and target detection tasks. However, studies such as Hoch et al. (2013) suggest that words from mixed-length sequences are harder to segment than words from uniform-length sequences. This finding exists in adult work (e.g., Hoch et al., 2013) as well as in infant work (Johnson & Tyler, 2010), and it is replicated here in the newly included Experiment 3; this stands in contrast to the positive finding of a facilitation effect with mixed-length sequences in the target detection paradigm (one of our main findings in the paper). Thus, if the learning mechanisms were the same, it would be difficult to explain why humans can succeed with mixed-length sequences in target detection (as shown in Experiment 2) yet struggle to segment mixed-length sequences (as shown in Hoch et al. and Experiment 3).

      In our paper, we have clarified these points and described the separate mechanisms in more detail, in both the Introduction and General Discussion sections.

      (2) The key manipulation between experiments 1 and 2 is the length of the words in the syllable sequences, with words either constant in length (experiment 1) or mixed in length (experiment 2). The authors show that similar facilitation levels are observed across this manipulation in the current experiments. By contrast, they argue that previous findings have found that performance is impaired for mixed-length conditions compared to fixed-length conditions. Thus, a central aspect of the theoretical interpretation of the results rests on prior evidence suggesting that statistical learning is impaired in mixed-length conditions. However, it is not clear how strong this prior evidence is. There is only one published paper cited by the authors - the paper by Hoch and colleagues - that supports this conclusion in adults (other mentioned studies are all in infants, which use very different measures of learning). Other papers not cited by the authors do suggest that statistical learning can occur to stimuli of mixed lengths (Thiessen et al., 2005, using infant-directed speech; Frank et al., 2010 in adults). I think this theoretical argument would be much stronger if the dissociation between recognition and facilitation through RTs as a function of word length variability was demonstrated within the same experiment and ideally within the same group of participants.

      To summarize the evidence on learning uniform-length and mixed-length sequences (which we discussed in the Introduction section): "even though infants and adults alike have shown success segmenting syllable sequences consisting of words that were uniform in length (i.e., all words were either disyllabic; Graf Estes et al., 2007; or trisyllabic, Aslin et al., 1998), both infants and adults have shown difficulty with syllable sequences consisting of words of mixed length (Johnson & Tyler, 2010; Johnson & Jusczyk, 2003a; 2003b; Hoch et al., 2013)." The newly added Experiment 3 also provides evidence for the difference between uniform-length and mixed-length sequences. Notably, we do not agree with the idea that infant work should be disregarded as evidence simply because infants were tested with habituation methods; not only were the original findings (Saffran et al., 1996) based on infant work, but so were many other studies of statistical learning.

      There are other segmentation studies in the literature that have used mixed-length sequences, which are worth discussing. In short, these studies differ from the Saffran et al. (1996) studies in several important ways, and in our view these differences explain why learning was successful. Of note, Thiessen et al. (2005), which you mentioned, was based on infant work with infant methods and demonstrated the very point we argued for: in their study, infants failed to learn when mixed-length sequences were pronounced as adult-directed speech, and succeeded given infant-directed speech, which contained much more pronounced prosodic cues. The fact that infants failed to segment mixed-length sequences without such prosodic cues is consistent with our claim that mixed-length sequences are difficult to segment in a segmentation paradigm. Another such study is Frank et al. (2010), where continuous sequences were presented in "sentences": different numbers of words were concatenated into sentences, with a 500 ms break between sentences in the training sequence. The shortest sentences contained only one or two words, and the longest contained 24 words. The results showed that participants are sensitive to sentence boundaries, which coincide with word boundaries. In the extreme, the one-word-per-sentence condition simply presents learners with segmented word forms. Even in the 24-word-per-sentence condition, there are sentence boundaries that are word boundaries, and knowing these word boundaries alone should allow learners to perform above chance in the test phase. Thus, in our view, this demonstrates that learners can use sentence boundaries to infer word boundaries, which is an interesting finding in its own right, but it does not show that a continuous syllable sequence with mixed word lengths is learnable without additional information. In summary, to our knowledge, syllable sequences containing mixed-length words are better learned when additional cues to word boundaries are present, and there is strong evidence that sequences containing uniform-length words are learned better than mixed-length ones.

      Frank, M. C., Goldwater, S., Griffiths, T. L., & Tenenbaum, J. B. (2010). Modeling human performance in statistical word segmentation. Cognition, 117(2), 107-125.

      To address your proposal of running more experiments to provide stronger evidence for our theory: we had planned to run another study in which the same group of participants would complete both the segmentation and the target detection paradigm, as suggested, but we were unable to do so because we encountered difficulties recruiting English-speaking participants. Instead, we have included a previously unpublished experiment (now Experiment 3) showing the difference between the learning of uniform-length and mixed-length sequences with the segmentation paradigm. This experiment provides further evidence for adults' difficulties in segmenting mixed-length sequences.

      (3) The authors argue for an "anticipation" mechanism in explaining the facilitation effect observed in the experiments. The term anticipation would generally be understood to imply some kind of active prediction process, related to generating the representation of an upcoming stimulus prior to its occurrence. However, the computational model proposed by the authors (page 24) does not encode anything related to anticipation per se. While it demonstrates facilitation based on prior occurrences of a stimulus, that facilitation does not necessarily depend on active anticipation of the stimulus. It is not clear that it is necessary to invoke the concept of anticipation to explain the results, or indeed that there is any evidence in the current study for anticipation, as opposed to just general facilitation due to associative learning.

      Thanks for raising this point. Indeed, the anticipation effect is indistinguishable from the facilitation effect in the reported experiments, and we have dropped this framing.

      In addition, related to the model, given that only bigrams are stored in the model, could the authors clarify how the model is able to account for the additional facilitation at the 3rd position of a trigram compared to the 2nd position?

      Thanks for the question. We believe it is an empirical question whether there is additional facilitation at the 3rd position of a trigram compared to the 2nd position. To investigate this issue, we conducted the following analysis with data from Experiment 1. First, we combined the data from the two conditions (exact/conceptual) in Experiment 1 so as to have better statistical power. Next, we ran a mixed-effects regression with data from syllable positions 2 and 3 only (i.e., data from syllable position 1 were not included). The fixed effects included the two-way interaction between syllable position and presentation, as well as stream position; the random effects were a by-subject random intercept with stream position as a random slope. This interaction was significant (χ²(3) = 11.73, p = 0.008), suggesting that there is additional facilitation at the 3rd position compared to the 2nd position.
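
      For readers who want to see the structure of such a model spelled out, here is an illustrative sketch on synthetic data. It is not the authors' analysis code (their software and data are not reproduced here); the column names, parameter values, and the use of Python's statsmodels are all assumptions made purely to illustrate the described fixed- and random-effects structure.

```python
# Illustrative sketch only - NOT the authors' analysis. Mirrors the described
# structure: fixed effects = syllable position x presentation + stream position;
# random effects = by-subject intercept with a stream-position slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_trials = 20, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_trials),
    "syllable_position": rng.choice([2, 3], size=n_subjects * n_trials),
    "presentation": rng.choice([1, 2, 3, 4], size=n_subjects * n_trials),
    "stream_position": rng.integers(3, 47, size=n_subjects * n_trials),
})
# Synthetic RTs with extra facilitation at position 3 that grows across presentations
df["rt"] = (600
            - 20 * (df["syllable_position"] - 2) * (df["presentation"] - 1)
            + 2 * df["stream_position"]
            + rng.normal(0, 30, size=len(df)))

model = smf.mixedlm(
    "rt ~ C(syllable_position) * C(presentation) + stream_position",
    data=df,
    groups="subject",               # by-subject random intercept
    re_formula="~stream_position",  # by-subject random slope for stream position
)
print(model.fit().summary())
```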

      For the model, here is an explanation of why it produces additional facilitation at the 3rd position. In our model, we proposed a simple recursive relation between the RT of a syllable occurring for the nth time and for the (n+1)th time, which is:

      [recursive equation not reproduced here]

      and

      RT(1) = RT0 + stream_pos * stream_inc,

      where the n in RT(n) represents the RT for the nth presentation of the target syllable, stream_pos is the position (3-46) in the stream, and occurrence is the number of times the syllable has occurred so far in the stream.

      What this means is that the model provides an RT value for every syllable in the stream. Thus, a target at syllable position 1 receives the RT of an unpredictable target, a target at syllable position 2 is facilitated, and a target at syllable position 3 is facilitated by the same amount again. As such, there is an additional facilitation effect at syllable position 3 because the effects of prediction are recursive.
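
      To make the qualitative behaviour of such a recursive, prediction-based account concrete, here is a minimal sketch. It is not the authors' model - the exact recursive equation is not reproduced above - so the facilitation rule, parameter names, and values (rt0, stream_inc, decrement) are illustrative assumptions only.

```python
# Minimal illustrative sketch, NOT the authors' model: the decrement rule and all
# parameter values are assumptions. A syllable that is predicted by a previously
# seen bigram inherits the (already facilitated) RT of the preceding syllable
# minus a decrement that grows with how often that bigram has occurred, so
# facilitation compounds across predictable positions within a word.
from collections import defaultdict

def simulate_rts(stream, rt0=600.0, stream_inc=2.0, decrement=40.0):
    bigram_counts = defaultdict(int)
    rts = []
    for i, syl in enumerate(stream):
        baseline = rt0 + i * stream_inc            # RT for an unpredictable target
        if i > 0 and bigram_counts[(stream[i - 1], syl)] > 0:
            # predictable target: recursive facilitation relative to the previous RT
            rt = rts[-1] - decrement * bigram_counts[(stream[i - 1], syl)]
        else:
            rt = baseline
        if i > 0:
            bigram_counts[(stream[i - 1], syl)] += 1
        rts.append(rt)
    return rts

# Example: the trisyllabic word "ABC" recurring in a stream. On later
# presentations, B is faster than A, and C is faster still (position 3 gains
# additional facilitation because facilitation is applied recursively).
stream = list("ABCGHIABCDEFABC")
for syl, rt in zip(stream, simulate_rts(stream)):
    print(syl, round(rt, 1))
```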

      (4) In the discussion of transitional probabilities (page 31), the authors suggest that "a single exposure does provide information about the transitions within the single exposure, and the probability of B given A can indeed be calculated from a single occurrence of AB." Although this may be technically true in that a calculation for a single exposure is possible from this formula, it is not consistent with the conceptual framework for calculating transitional probabilities, as first introduced by Saffran and colleagues. For example, Saffran et al. (1996, Science) describe that "over a corpus of speech there are measurable statistical regularities that distinguish recurring sound sequences that comprise words from the more accidental sound sequences that occur across word boundaries. Within a language, the transitional probability from one sound to the next will generally be highest when the two sounds follow one another within a word, whereas transitional probabilities spanning a word boundary will be relatively low." This makes it clear that the computation of transitional probabilities (i.e., Y | X) is conceptualized to reflect the frequency of XY / frequency of X, over a given language inventory, not just a single pair. Phrased another way, a single exposure to pair AB would not provide a reliable estimate of the raw frequencies with which A and AB occur across a given sample of language.

      Thanks for the discussion. We understand your argument, but we respectfully disagree that computing transitional probabilities must be conducted under a certain theoretical framework. In our view, computing transitional probabilities is a mathematical operation, and as such, it is possible to do so with the least amount of data that enables the operation, which concretely is a single exposure during learning. While it is true that a single exposure may not provide a reliable estimate of frequencies or probabilities, it does provide information with which the learner can make decisions.

      This is particularly relevant to the present discussion of the minimal amount of exposure that can enable learning. It is important to distinguish two questions: whether learners can learn from a short exposure period (in fact, from a single exposure) and how long an exposure period the learner requires to produce a reliable estimate of frequencies. Incidentally, given that learners can learn from a single exposure, based on Batterink (2017) and the current study, it does not appear that learners require a long exposure period to learn about transitional probabilities.
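
      As a concrete illustration of this point (ours, not taken from the manuscript), the forward transitional probability P(Y | X) = freq(XY) / freq(X) is well defined over whatever stream has been observed so far, including a stream in which a pair has occurred only once; the reviewer's example stream is reused below purely for illustration.

```python
# Minimal sketch: forward transitional probabilities P(Y | X) = freq(XY) / freq(X),
# computed over whatever stream has been observed so far. Syllables are single
# letters here purely for illustration.
from collections import Counter

def transitional_probabilities(stream):
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])   # X must be followed by something to count
    return {(x, y): pair_counts[(x, y)] / first_counts[x] for (x, y) in pair_counts}

print(transitional_probabilities(list("GHIABCDEFABCGHI")))
# Even after one pass, P(B|A) = 1.0 (A is always followed by B),
# whereas P(G|C) < 1.0 because C is followed by different syllables.
```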

      (5) In experiment 2, the authors argue that there is robust facilitation for trisyllabic and disyllabic words alike. I am not sure about the strength of the evidence for this claim, as it appears that there are some conflicting results relevant to this conclusion. Notably, in the regression model for disyllabic words, the omnibus interaction between word presentation and syllable position did not reach significance (p= 0.089). At face value, this result indicates that there was no significant facilitation for disyllabic words. The additional pairwise comparisons are thus not justified given the lack of omnibus interaction. The finding that there is no significant interaction between word presentation, word position, and word length is taken to support the idea that there is no difference between the two types of words, but could also be due to a lack of power, especially given the p-value (p = 0.010).

      Thanks for the comment. Firstly, we believe there is a typo in the last sentence of your comment: we think you were referring to the p-value of 0.103 (source: “The interaction was not significant (χ2(3) = 6.19, p= 0.103”). Yes, a null result under a frequentist approach cannot support a null claim, but Bayesian analyses can potentially provide evidence for the null.

      To this end, we conducted a Bayes factor analysis using the approach outlined in Harms and Lakens (2018), which generates a Bayes factor by computing a Bayesian information criterion (BIC) for a null model and an alternative model. The alternative model contained a three-way interaction of word length, word presentation, and word position, whereas the null model contained a two-way interaction between word presentation and word position as well as a main effect of word length. Thus, the two models differ only in whether there is a three-way interaction. The Bayes factor is then computed as exp[(BIC_alt − BIC_null)/2]. This analysis showed that there is strong evidence for the null: the Bayes factor was exp(25.65), which is greater than 10^11. Thus, there is no power issue here, and there is strong evidence for the null claim that word length did not interact with other factors in Experiment 2.
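
      For concreteness, the BIC-to-Bayes-factor conversion described above is a one-line computation; in the sketch below the two BIC values are placeholders chosen only so that their difference reproduces the reported exp(25.65).

```python
# Minimal sketch of the BIC-based Bayes factor described above (Harms & Lakens, 2018):
# BF_01 = exp[(BIC_alt - BIC_null) / 2], i.e., evidence in favour of the null model.
# The BIC values are placeholders, chosen so the difference matches exp(25.65).
import math

def bf01_from_bic(bic_alt, bic_null):
    return math.exp((bic_alt - bic_null) / 2)

print(bf01_from_bic(bic_alt=1251.3, bic_null=1200.0))  # exp(25.65) ~ 1.4e11
```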

      There is another issue that you mentioned: whether we should conduct pairwise comparisons if the omnibus interaction did not reach significance. This would be true under the original analysis plan, but we believe a revised analysis plan makes more sense. In the revised analysis plan for Experiment 2, we start with the three-way interaction (as just described in the last paragraph). The three-way interaction was not significant, and after dropping the three-way interaction term, the two-way interaction and the main effect of word length are both significant; we use this as the overall model. Testing the significance of the omnibus interaction between presentation and syllable position, we found that it was significant (χ²(3) = 49.77, p < 0.001). That is, in a single model using data from both disyllabic and trisyllabic words, the interaction between presentation and syllable position was significant, in addition to a significant fixed effect of word length (β = 0.018, z = 6.19, p < 0.001). This motivates the rest of the planned analysis, namely the pairwise comparisons in the different word length conditions.

      (6) The results plotted in Figure 2 seem to suggest that RTs to the first syllable of a trisyllabic item slow down with additional word presentations, while RTs to the final position speed up. If anything, in this figure, the magnitude of the effect seems to be greater for 1st syllable positions (e.g., the RT difference between presentation 1 and 4 for syllable position 1 seems to be numerically larger than for syllable position 3, Figure 2D). Thus, it was quite surprising to see in the results (p. 16) that RTs for syllable position 1 were not significantly different for presentation 1 vs. the later presentations (but that they were significant for positions 2 and 3 given the same comparison). Is this possibly a power issue? Would there be a significant slowdown to 1st syllables if results from both the exact replication and conceptual replication conditions were combined in the same analysis?

      Thanks for the suggestion and your careful visual inspection of the data. After combining the data, the slowdown to 1st syllables is indeed significant. We have reported this in the results of Experiment 1 (with an acknowledgement to this review):

      Results showed that later presentations took significantly longer to respond to compared to the first presentation (χ²(3) = 10.70, p=0.014), where the effect grew larger with each presentation (second presentation: β=0.011, z=1.82, p=0.069; third presentation: β=0.019, z=2.40, p=0.016; fourth presentation: β=0.034, z=3.23, p=0.001).

      (7) It is difficult to evaluate the description of the PARSER simulation on page 36. Perhaps this simulation should be introduced earlier in the methods and results rather than in the discussion only.

      Thanks for the suggestions. We have added two separate simulations in the paper, which should describe the PARSER simulations sufficiently, as well as provide further information on the correspondence between the simulations and the experiments. Thanks again for the great review! We believe our paper has improved significantly as a result.

    1. Reviewer #1 (Public review):

      In this manuscript, the authors aimed to identify the molecular target and mechanism by which α-Mangostin, a xanthone from Garcinia mangostana, produces vasorelaxation that could explain the antihypertensive effects. Building on prior reports of vascular relaxation and ion channel modulation, the authors convincingly show that large-conductance potassium BK channels are the primary site of action. Using electrophysiological, pharmacological, and computational evidence, the authors achieved their aims and showed that BK channels are the critical molecular determinant of mangostin's vasodilatory effects, even though the vascular studies are quite preliminary in nature.

      Strengths:

      (1) The broad pharmacological profiling of mangostin across potassium channel families, revealing BK channels - and the vascular BK-alpha/beta1 complex - as the potently activated target in a concentration-dependent manner.

      (2) Detailed gating analyses showing large negative shifts in voltage-dependence of activation and altered activation and deactivation kinetics.

      (3) High-quality single-channel recordings for open probability and dwell times.

      (4) Convincing activation in reconstituted BKα/β1-Caᵥ nanodomains mimicking physiological conditions and functional proof-of-concept validation in mouse aortic rings.

      Weaknesses are minor:

      (1) Some mutagenesis data (e.g., partial loss at L312A) could benefit from complementary structural validation.

      (2) While Cav-BK nanodomains were reconstituted, direct measurement of calcium signals after mangostin application onto native smooth muscle could be valuable.

      (3) The work has an impact on ion channel physiology and pharmacology, providing a mechanistic link between a natural product and vasodilation. Datasets include electrophysiology traces, mutagenesis scans, docking analyses, and aortic tension recordings. The latter, however, are preliminary in nature.

    2. Reviewer #2 (Public review):

      Summary:

      In the present manuscript, Cordeiro et al. show that α-mangostin, a xanthone obtained from the fruit of the Garcinia mangostana tree, behaves as an agonist of BK channels. The authors arrive at this conclusion from the effect of α-mangostin on macroscopic and single-channel currents elicited by BK channels formed by the α subunit and by α + β1 subunits, as well as αβ1 channels coexpressed with voltage-dependent Ca2+ (CaV1.2) channels. The single-channel experiments show that α-mangostin produces a robust increase in the probability of opening without affecting the single-channel conductance. The authors contend that α-mangostin activation of the BK channel is state-independent, and molecular docking and mutagenesis suggest that α-mangostin binds to a site in the internal cavity. Importantly, α-mangostin (10 μM) alleviates the contracture promoted by noradrenaline. α-Mangostin is ineffective if the contracted muscles are pretreated with the BK toxin iberiotoxin.

      Strengths:

      The set of results combining electrophysiological measurements, mutagenesis, and molecular docking reveals α-mangostin as a potent activator of BK channels and the putative location of the α-mangostin binding site. Moreover, experiments conducted on aortic preparations from mice suggest that α-mangostin can aid in developing drugs to treat a myriad of diverse diseases involving the BK channel.

      Weaknesses:

      Major:

      (1) Although the results indicate that α-mangostin is modifying the closed-open equilibrium, the conclusion that this can be due to a stabilization of the voltage sensor in its active configuration may prove to be wrong. It is more probable that, as has been demonstrated for other activators, α-mangostin is increasing the equilibrium constant that defines the closed-open reaction (L in the Horrigan-Aldrich allosteric gating model for BK). The paper would gain much if the authors determined the probability of opening over a wide range of voltages, to establish how the drug affects (or not) the channel's voltage dependence, the coupling between the voltage sensor and the pore, and the closed-open equilibrium (L).
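
      For readers unfamiliar with the notation, the open-probability expression of the Horrigan-Aldrich model is commonly written as below; this is a reviewer-side reference to the published model, not an equation from the manuscript, and the original Horrigan-Aldrich papers should be consulted for the exact formulation.

```latex
P_{o} = \frac{L\,(1 + KC + JD + JKCDE)^{4}}
             {L\,(1 + KC + JD + JKCDE)^{4} + (1 + K + J + JKE)^{4}}
```

      Here L is the intrinsic closed-open equilibrium constant, J the voltage-sensor equilibrium constant, K the Ca2+-binding equilibrium constant, and C, D, and E the allosteric factors coupling Ca2+ binding to opening, voltage-sensor activation to opening, and Ca2+ binding to voltage-sensor activation, respectively.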

      (2) Apparently, the molecular docking was performed using a truncated structure of the human BK channel. However, it is unclear which one, since the PDB ID given in the Methods (6vg3), according to what I could find, corresponds to the unliganded, inactive PTK7 kinase domain. Be that as it may, the apo and Ca2+-bound structures show that there is a rotation and a displacement of the S6 transmembrane domain. Therefore, the positions of the residues I308, L312, and A316 in the closed and open configurations of the BK channel are not the same. Hence, it is expected that the strength of binding will differ depending on whether the channel is closed or open. This point needs to be discussed.

      Minor:

      (1) From Figure 3A, it is apparent that the increase in Po is at the expense of the long periods (seconds) that the channel remains closed. One might suggest that α-mangostin increases the burst periods. It would be beneficial if the authors measured both closed and open dwell times to test whether α-mangostin primarily affects the burst periods.

      (2) In several places, the authors draw parallels between the mode of action of other BK activators and that of α-mangostin; however, the work of Gessner et al. (PNAS, 2012) indicates that NS1619 and Cym04 interact with the S6/RCK linker, and Webb et al. demonstrated that GoSlo-SR-5-6 agonist activity is abolished when residues in the S4/S5 linker and in the S6C region are mutated. These findings indicate that those agonists do not bind near the selectivity filter, which is where the authors' results suggest α-mangostin binds.

      (3) The sentence starting in line 452 states that there is a pronounced allosteric coupling between the voltage sensors and Ca2+ binding. If the authors are referring to the coupling factor E in the Horrigan-Aldrich gating model, the references cited, in particular, Sun and Horrigan, concluded that the coupling between those sensors is weak.

    3. Reviewer #3 (Public review):

      Summary:

      This research shows that α-mangostin, a proposed nutraceutical with cardiovascular protective properties, could act through the activation of large-conductance potassium-permeable (BK) channels. The authors provide convincing electrophysiological evidence that the compound binds to BK channels and induces potent activation, increasing the magnitude of potassium currents. Since these channels are important modulators of the membrane potential of smooth muscle in vascular tissue, this activation leads to muscle relaxation, possibly explaining the cardiovascular protective effects.

      Strengths:

      The authors present evidence from several lines of experiments that α-mangostin is a potent activator of BK channels. The quality of the experiments is high, and the analyses are performed at an appropriate level. This research is timely and provides a basis for understanding the physiological effects of natural compounds with proposed cardioprotective effects.

      Weaknesses:

      The identification of the binding site is not the strongest point of the manuscript. The authors show that the binding site is probably located in the hydrophobic cavity of the pore and that point mutations reduce the magnitude of the negative voltage shift of activation produced by α-mangostin. However, these experiments do not demonstrate binding to these sites, and the results could be explained by allosteric effects on gating induced by the mutations themselves.

    1. Reviewer #2 (Public review):

      Summary:

      The manuscript by Freier et al examines the impact of deletion of the glycine cleavage system (GCS) GcvPAB enzyme complex in the facultative intracellular bacterial pathogen Listeria monocytogenes. GcvPAB mediates the oxidative decarboxylation of glycine as a first step in a pathway that leads to the generation of N5,N10-methylene-tetrahydrofolate (THF) to replenish the 1-carbon THF (1C-THF) pool. 1C-THF species are important for the biosynthesis of purines and pyrimidines as well as for the formation of serine, methionine, and N-formylmethionine, and the authors have previously demonstrated that gcvPAB is important for bacterial replication within macrophages. A significant growth defect is observed for the gcvPAB deletion mutant in defined media, and this defect appears to stem from the sensitivity of the mutant strain to excess glycine, which is hypothesized to further deplete the 1C-THF pool. Selection of suppressor mutations that restored growth of gcvPAB deletion mutants in synthetic media with high glycine yielded mutants that reversed stop codon inactivation of the formate-tetrahydrofolate ligase (fhs) gene, supporting the premise that generation of N10-formyl-THF can restore growth. Mutations within the folK and codY genes, as well as within glyA, which encodes serine hydroxymethyltransferase, were also identified, although the functional impact of these mutations is somewhat less clear. Overall, the authors report that their work identifies three pathways that feed the 1C-THF pool to support the growth and virulence of L. monocytogenes and that this work represents the first example of the spontaneous reactivation of a L. monocytogenes gene that is inactivated by a premature stop codon.

      Strengths:

      This is an interesting study that takes advantage of a naturally existing fhs mutant Listeria strain to reveal the contributions of different pathways leading to 1C-THF synthesis. The defects observed for the gcvPAB mutant in terms of intracellular growth and virulence are somewhat subtle, indicating that bacteria must be able to access host sources (such as adenine?) to compensate for the loss of purine and fMet synthesis. Overall, the authors do a nice job of assessing the importance of the pathways identified for 1C-THF synthesis.

      Weaknesses:

      (1) Line 114 and Figure 1: The authors indicate that the gcvPAB deletion forms significantly fewer plaques in addition to forming smaller plaques (although this is a bit hard to see in the plaque images). A reduction in the overall number of plaques sounds like a bacterial invasion defect - has this been carefully assessed? The smaller plaque size makes sense with reduced bacterial replication, but I'm not sure I understand the reduction in plaque number.

      (2) Do other Listeria strains contain the stop codon in fhs? How common is this mutation? That would be interesting to know.

      (3) Based on the observations that the fhs+ ΔgcvPAB ΔglyA mutant can only be isolated in complex media, and that Fhs is responsible for converting formate to 1C-THF in conjunction with FolD, have the authors considered supplementing synthetic media with formate and assessing mutant growth?

    2. Reviewer #3 (Public review):

      Summary:

      In this study, Freier et al. demonstrate that 3 distinct metabolic pathways are critical for the synthesis of 1C-THF, a metabolite that is crucial for the growth and virulence of Listeria monocytogenes. Using an elegant suppressor screen, they also demonstrate the hierarchical importance of these metabolic pathways with respect to the biosynthesis of 1C-THF.

      Strengths:

      This study uses elegant bacterial genetics to confirm that 3 distinct metabolic pathways are critical for 1C-THF synthesis in L. monocytogenes, and that the loss of any one of these pathways compromises bacterial growth and virulence. The study uses a combination of in vitro growth assays, macrophage CFU assays, and murine infection models to demonstrate this.

      Weaknesses:

      (1) The primary finding of the study is that the perturbation of any of the 3 metabolic pathways important for the synthesis of 1C-THF results in reduced growth and virulence of L. monocytogenes. However, there is no evidence demonstrating the levels of 1C-THF in the various knockouts and suppressor mutants used in this study. It is important to measure the levels of this metabolite (ideally using mass spectrometry) in these strains to provide stronger evidence of causality.

      (2) The story becomes a little hard to follow since the macrophage CFU assays and murine infection model data precede the in vitro growth assays. The manuscript would benefit from a reorganization of Figures 2, 3, and 4 for better readability and flow of data.

    1. Reviewer #1 (Public review):

      Summary:

      This important study functionally profiled ligands targeting the LXR nuclear receptors using biochemical assays in order to classify ligands according to their pharmacological functions. Overall, the evidence is solid, but nuances in the reconstituted biochemical assays and cellular studies, as well as the terminology of ligand pharmacology, limit the potential impact of the study. This work will be of interest to scientists interested in nuclear receptor pharmacology.

      Strengths:

      (1) The authors rigorously tested their ligand set in CRTs for several nuclear receptors that could display ligand-dependent cross-talk with LXR cellular signaling and found that all compounds display LXR selectivity when used at ~1 µM.

      (2) The authors tested the ligand set for selectivity against two LXR isoforms (alpha and beta). Most compounds were found to be LXRbeta-specific.

      (3) The authors performed extensive LXR CRTs, correlation analyses against cellular transcription and gene expression, and classification profiling using heatmap analysis - seeking to use relatively easy-to-collect biochemical assays with purified ligand-binding domain (LBD) protein to explain the complex activity of full-length LXR-mediated transcription.

      Weaknesses:

      (1) The descriptions of some observations lack detail, which limits understanding of some key concepts.

      (2) The presence of endogenous NR ligands within cells may confound the correlation of ligand activity of cellular assays to biochemical assay data.

      (3) The normalization of biochemical assay data could confound the classification of graded activity ligands.

      (4) The presence of >1 coregulator peptide in the biplex (n=2 peptides) CRT (pCRT) format will bias the LBD conformation towards the peptide-bound form with the highest binding affinity, which will impact potency and interpretation of TR-FRET data.

      (5) Correlation graphical plots lack sufficient statistical testing.

      (6) Some of the proposed ligand pharmacology nomenclature is not clear and deviates from classifications used currently in the field (e.g., hard and soft antagonist; weak vs. partial agonist, definition of an inverse agonist that is not the opposite function to an agonist).

    1. Reviewer #1 (Public review):

      Summary:

      This study presents a high-throughput screening platform to identify nanobodies capable of recruiting chromatin regulators and modulating gene expression. The authors utilize a yeast display system paired with mammalian reporter assays to validate candidate nanobodies, aiming to create a modular resource for synthetic epigenetic control.

      Strengths:

      (1) The overall screening design combining yeast display with mammalian functional assays is innovative and scalable.

      (2) The authors demonstrate proof-of-concept that nanobody-based recruitment can repress or activate reporter expression.

      (3) The manuscript contributes to the growing toolkit for epigenome engineering.

      Weaknesses:

      (1) The manuscript does not investigate which endogenous factors are recruited by the nanobodies. While repression activity is demonstrated at the reporter level, there is no mechanistic insight into what proteins are being brought to the target site by each nanobody. This limits the interpretability and generalizability of the findings. Related to this, Figure S1B reports sequence similarity among complementarity-determining regions (CDRs) of nanobodies that scored highly in the DNMT3A screen. However, it remains unclear whether this similarity reflects convergence on a common molecular target or is coincidental. Without functional or proteomic validation, the relationship between sequence motifs and effector recruitment remains speculative.

      (2) The epigenetic consequences of nanobody recruitment are also left unexplored. Despite targeting epigenetic regulators, the study does not assess changes such as DNA methylation or histone modifications. This makes it difficult to interpret whether the observed reporter repression is due to true chromatin remodeling or secondary effects.

    1. Reviewer #1 (Public review):

      In this study, the authors investigated a specific subtype of SST-INs (layer 5 Chrna2-expressing Martinotti cells) and examined its functional role in motor learning. Using endoscopic calcium imaging combined with chemogenetics, they showed that activation of Chrna2 cells reduces the plasticity of pyramidal neuron (PyrN) assemblies but does not affect the animals' performance. However, activating Chrna2 cells during re-training improved performance. The authors claim that activating Chrna2 cells likely reduces PyrN assembly plasticity during learning and possibly facilitates the expression of already acquired motor skills.

      There are many major issues with the study. The findings across experiments are inconsistent, and it is unclear how the authors performed their analyses or why specific time points and comparisons were chosen. The study requires major re-analysis and additional experiments to substantiate its conclusions.

      Major Points:

      (1a) Behavior task - the pellet-reaching task is a well-established paradigm in the motor learning field. Why did the authors choose to quantify performance using "success pellets per minute" instead of the more conventional "success rate" (see PMID 19946267, 31901303, 34437845, 24805237)? It is also confusing that the authors describe sessions 1-5 as being performed on a spoon, while from session 6 onward, the pellets are presented on a plate. However, in lines 710-713, the authors define session 1 as "naïve," session 2 as "learning," session 5 as "training," and "retraining" as a condition in which a more challenging pellet presentation was introduced. Does "naïve session 1" refer to the first spoon session or to session 6 (when the food is presented on a plate)? The same ambiguity applies to "learning session 2," "training session 5," and so on. Furthermore, what criteria did the authors use to designate specific sessions as "learning" versus "training"? Are these definitions based on behavioral performance thresholds or some biological mechanisms? Clarifying these distinctions is essential for interpreting the behavioral results.

      (1b) Judging from Figures 1F and 4B, even in WT mice, it is not convincing that the animals have actually learned the task. In all figures, the mice generally achieve ~10-20 pellets per minute across sessions. The only sessions showing slightly higher performance are session 5 in Figure 1F ("train") and sessions 12 and 13 in Figure 4B ("CLZ"). In the classical pellet-reaching task, animals are typically trained for 10-12 sessions (approximately 60 trials per session, one session per day), and a clear performance improvement is observed over time. The authors should therefore present performance data for each individual session to determine whether there is any consistent improvement across days. As currently shown, performance appears largely unchanged across sessions, raising doubts about whether motor learning actually occurred.

      (1c) The authors also appear to neglect existing literature on the role of SST-INs in motor learning and local circuit plasticity (e.g., PMID 26098758, 36099920). Although the current study focuses on a specific subpopulation of SST-INs, the results reported here are entirely opposite to those of previous studies. The authors should, at a minimum, acknowledge these discrepancies and discuss potential reasons for the differing outcomes in the Discussion section.

      (2a) Calcium imaging - The methodology for quantifying fluorescence changes is confusing and insufficiently described. The use of absolute ΔF values ("detrended by baseline subtraction," lines 565-567) for analyses that compare activity across cells and animals (e.g., Figure 1H) is highly unconventional and problematic. Calcium imaging is typically reported as ΔF/F₀ or z-scores to account for large variations in baseline fluorescence (F₀) due to differences in GCaMP expression, cell size, and imaging quality. Absolute ΔF values are uninterpretable without reference to baseline intensity - for example, a ΔF of 5 corresponds to a 100% change in a dim cell (F₀ = 5) but only a 1% change in a bright cell (F₀ = 500). This issue could confound all subsequent population-level analyses (e.g., mean or median activity) and across-group comparisons. Moreover, while some figures indicate that normalization was performed, the Methods section lacks any detailed description of how this normalization was implemented. The critical parameters used to define the baseline are also omitted. The authors should reprocess the imaging data using a standardized ΔF/F₀ or z-score approach, explicitly define the baseline calculation procedure, and revise all related figures and statistical analyses accordingly.
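
      To illustrate the normalization point with a toy example (this is the reviewer's illustration, not the authors' pipeline; the sliding-percentile baseline is just one common choice), the same absolute ΔF of 5 corresponds to a 100% change in a dim cell but a 1% change in a bright one:

```python
# Minimal sketch of standard normalizations for calcium traces. The baseline
# definition (running 10th percentile over a sliding window) is one common
# choice and is an assumption, not the authors' procedure.
import numpy as np

def dff(trace, window=300, percentile=10):
    """Compute dF/F0 for a 1D fluorescence trace."""
    half = window // 2
    f0 = np.array([
        np.percentile(trace[max(0, i - half):i + half + 1], percentile)
        for i in range(len(trace))
    ])
    return (trace - f0) / f0

def zscore(trace):
    return (trace - trace.mean()) / trace.std()

# A dim and a bright cell with the same absolute dF show very different
# relative changes once normalized to their baselines.
dim = np.full(1000, 5.0);    dim[500:520] += 5      # 100% change
bright = np.full(1000, 500.0); bright[500:520] += 5  # 1% change
print(dff(dim).max(), dff(bright).max())
```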

      (2b) Figure 1G - It is unclear why neural activity during successful trials is already lower one second before movement onset. Full traces with longer duration before and after movement onset should also be shown. Additionally, only data from "session 2 (learning)" and a single neuron are presented. The authors should present data across all sessions and multiple neurons to determine whether this observation is consistent and whether it depends on the stage of learning.

      (2c) Figure 1H - The authors report that chemogenetic activation of Chrna2 cells induces differential changes in PyrN activity between successful and failed trials. However, one would expect that activating all Chrna2 cells would strongly suppress PyrN activity rather than amplifying the activity differences between trials. The authors should clarify the mechanism by which Chrna2 cell activation could exaggerate the divergence in PyrN responses between successful and failed trials. Perhaps, performing calcium imaging of Chrna2 cells themselves during successful versus failed trials would provide insight into their endogenous activity patterns and help interpret how their activation influences PyrN activity during successful and failed trials.

      (2d) Figure 1H - Also, in general, the Cre⁺ (red) data points appear consistently higher in activity than the Cre⁻ (black) points. This is counterintuitive, as activating Chrna2 cells should enhance inhibition and thereby reduce PyrN activity. The authors should clarify how Cre⁺ animals exhibit higher overall PyrN activity under a manipulation expected to suppress it. This discrepancy raises concerns about the interpretation of the chemogenetic activation effects and the underlying circuit logic.

      (3) The statistical comparisons throughout the manuscript are confusing. In many cases, the authors appear to perform multiple comparisons only among the N, L, T, and R conditions within the WT group. However, the central goal of this study should be to assess differences between the WT and hM3D groups. In fact, it is unclear why the authors only provide p-values for some comparisons but not for the majority of the groups.

      (4a) Figure 4 - It is hard to understand why the authors introduce LFP experiments here, and the results are difficult to interpret in isolation. The authors should consider combining LFP recordings with calcium imaging (as in Figure 1) or, alternatively, repeating calcium imaging throughout the entire re-training period. This would provide a clearer link between circuit activity and behavior and strengthen the conclusions regarding Chrna2 cell function during re-training.

      (4b) It is unclear why CLZ has no apparent effect in session 11, yet induces a large performance increase in sessions 12 and 13. Even then, the performance in sessions 12 and 13 (~30 successful pellets) is roughly comparable to that in session 5 of Figure 1F. Given this, it is questionable whether the authors can conclude that Chrna2 cell activation truly facilitates previously acquired motor skills.

      (5) Figure 5 - The authors report decreased performance in the pasta-handling task (presumably representing a newly learned skill) but observe no difference in the pellet-reaching task (presumably an already acquired skill). This appears to contradict the authors' main claim that Chrna2 cell activation facilitates previously acquired motor skills.

      (6) Supplementary Figure 1 - The c-fos staining appears unusually clean. Previous studies have shown that even in home-cage mice, there are substantial numbers of c-fos⁺ cells in M1 under basal conditions (PMID 31901303). Additionally, the authors should present Chrna2 cell labeling and c-fos staining in separate channels. As currently shown, it is difficult to determine whether the c-fos⁺ cells are truly Chrna2⁺ cells.

      Overall, the authors selectively report statistical comparisons only for findings that support their claims, while most other potentially informative comparisons are omitted. Complete and transparent reporting is necessary for proper interpretation of the data.

    2. Reviewer #2 (Public review):

      Summary:

      In this manuscript, Malfatti et al. study the role of Chrna2 Martinotti cells (Mα2 cells), a subset of SST interneurons, for motor learning and motor cortex activity. The authors trained mice on a forelimb prehension task while recording neuronal activity of pyramidal cells using calcium imaging with a head-mounted miniscope. While chemogenetically increasing Mα2 cell activity did not affect motor learning, it changed pyramidal cell activity such that activity peaks became sharper and differently timed than in control mice. Moreover, co-active neuronal assemblies become more stable with a smaller spatial distribution. Increasing Mα2 cell activity in previously trained mice did increase performance on the prehension task and led to increased theta and gamma band activity in the motor cortex. On the other hand, genetic ablation of Mα2 cells affected fine motor movements on a pasta handling task while not affecting the prehension task.

      Strengths:

      The proposed question of how Chrna2-expressing SST interneurons affect motor learning and motor cortex activity is important and timely. The study employs sophisticated approaches to record neuronal activity and manipulate the activity of a specific neuronal population in behaving mice over the course of motor learning. The authors analyze a variety of neuronal activity parameters, comparing different behavior trials, stages of learning, and the effects of Mα2 cell activation. The analysis of neuronal assembly activity and stability over the course of learning, by tracking individual neurons throughout the imaging sessions, is notable because it is technically challenging, and it yielded the interesting result that neuronal assemblies are more stable when activating Mα2 cells.

      Overall, the study provides compelling evidence that Mα2 cells regulate certain aspects of motor behaviors, likely by shaping circuit activity in the motor cortex.

      Weaknesses:

      The main limitation of the study lies in its small sample sizes and the absence of key control experiments, which substantially weaken the strength of the conclusions.

      Core findings of this paper, such as the lack of effect of Mα2 cell activation on motor learning, as well as the altered neuronal activity, rely on a sample size of n=3 mice per condition, which is likely underpowered to detect differences in behavior and contributes to the somewhat disconnected results on calcium activity, activity timing, and neuronal assembly activity.

      More comprehensive analyses and data presentation are also needed to substantiate the results. For example, examining calcium activity and behavioral performance on a trial-by-trial basis could clarify whether closely spaced reaching attempts influence baseline signals and skew interpretation.

      The study uses cre-negative mice as controls for hM3Dq-mediated activation, which does not account for potential effects of Cre-dependent viral expression that occur only in Cre-positive mice.

      This important control would be necessary to substantiate the conclusion that it is increased Mα2 cell activity that drives the observed changes in behavior and cortical activity.

    1. Reviewer #1 (Public review):

      Summary:

      This study investigates how human temporal voice areas (TVA) respond to vocalizations from nonhuman primates. Using functional MRI during a species-categorization task, the authors compare neural responses to calls from humans, chimpanzees, bonobos, and macaques while modeling both acoustic and phylogenetic factors. They find that bilateral anterior TVA regions respond more strongly to chimpanzee than to other nonhuman primate vocalizations, suggesting that these regions are sensitive not only to human voices but also to acoustically and evolutionarily related sounds.

      The work provides important comparative evidence for continuity in primate vocal communication and offers a strong empirical foundation for modeling how specific acoustic features drive TVA activity.

      Strengths:

      (1) Comparative scope: The inclusion of four primate species, including both great apes and monkeys, provides a rare and valuable cross-species perspective on voice processing.

      (2) Methodological rigor: Acoustic and phylogenetic distances are carefully quantified and incorporated into the analyses.

      (3) Neuroscientific significance: The finding of TVA sensitivity to chimpanzee calls supports the view that human voice-selective regions are evolutionarily tuned to certain acoustic features shared across primates.

      (4) Clear presentation: The study is well organized, the stimuli well controlled, and the imaging analyses transparent and replicable.

      (5) Theoretical contribution: The results advance understanding of the neural bases of voice perception and the evolutionary roots of voice sensitivity in the human brain.

      Weaknesses:

      (1) Acoustic-phylogenetic confound: The design does not fully disentangle acoustic similarity from phylogenetic proximity, as species co-vary along both dimensions. A promising way to address this would be to include an additional model focusing on the acoustic features that specifically differentiate bonobo from chimpanzee calls, which share equal phylogenetic distance to humans.

      (2) Selectivity vs. sensitivity: Without non-vocal control sounds, the study cannot determine whether TVA responses reflect true selectivity for primate vocalizations or general auditory sensitivity.

      (3) Task demands: The use of an active categorization task may engage additional cognitive processes beyond auditory perception; a passive listening condition would help clarify the contribution of attention and task performance.

      (4) Figures and presentation: Some results are partially redundant; keeping only the most representative model figure in the main text and moving others to the Supplementary Material would improve clarity.

    2. Reviewer #3 (Public review):

      Summary:

      Ceravolo et al. employed functional magnetic resonance imaging (fMRI) to examine how the temporal voice areas (TVA) in the human brain respond to vocalizations from different nonhuman primate species. Their findings reveal that the human TVA is not only responsive to human vocalizations but also exhibits sensitivity to the vocalizations of other primates, particularly chimpanzee vocalizations sharing acoustic similarities with human voices, which offers compelling evidence for cross-species vocal processing in the human auditory system. Overall, the study presents intellectually stimulating hypotheses and demonstrates methodological originality. However, the current findings are not yet solid enough to fully support the proposed claims, and the presentation could be enhanced for clarity and impact.

      Strengths:

      The study presents intellectually stimulating hypotheses and demonstrates methodological originality.

      Weaknesses:

      (1) The analysis of the fMRI data does not account for the participants' behavioral performance, specifically their reaction times (RTs) during the species categorization task.

      (2) The figure organization/presentation requires significant revision to avoid confusion and redundancy.

    1. Reviewer #3 (Public review):

      Summary:

      In this study, the authors utilize biophysical modeling to investigate differences in free energies and nucleosomal configuration probability density of CpG islands and nonmethylated regions in the genome. Toward this goal, they develop and apply the cgNA+ coarse-grained model, an extension of their prior molecular modeling framework.

      Strengths:

      The study utilizes biophysical modeling to gain mechanistic insight into nucleosomal occupancy differences in CpG and nonmethylated regions in the genome.

      Weaknesses:

      Although the overall study is interesting, the manuscript needs more clarity in places. Moreover, the rationale and conclusion for some of the analyses are not well described.

      Comments on revised version:

      The authors have addressed my concerns.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary:

      In this manuscript, the authors used a coarse-grained DNA model (cgNA+) to explore how DNA sequences and CpG methylation/hydroxymethylation influence nucleosome wrapping energy and the probability density of optimal nucleosomal configuration. Their findings indicate that both methylated and hydroxymethylated cytosines lead to increased nucleosome wrapping energy. Additionally, the study demonstrates that methylation of CpG islands increases the probability of nucleosome formation.

      Strengths:

      The major strength of this method is that the model explicitly includes phosphate groups as DNA-histone binding site constraints, enhancing CG model accuracy and computational efficiency and allowing comprehensive calculations of DNA mechanical properties and deformation energies.

      Weaknesses:

      A significant limitation of this study is that the parameter sets for the methylated and hydroxymethylated CpG steps in the cgNA+ model are derived from all-atom molecular dynamics (MD) simulations that use previously established force field parameters for modified cytosines (Pérez A, et al. Biophys J. 2012; Battistini, et al. PLOS Comput Biol. 2021). These parameters suggest that both methylated and hydroxymethylated cytosines increase DNA stiffness and nucleosome wrapping energy, which could predispose the coarse-grained model to replicate these findings. Notably, conflicting results from other all-atom MD simulations, such as those by Ngo T in Nat. Commun. 2016, show that hydroxymethylated cytosines increase DNA flexibility, contrary to methylated cytosines. If the cgNA+ model were trained on these latter parameters or other all-atom MD force fields, different conclusions might be obtained regarding the effects of methylation and hydroxymethylation on nucleosome formation.

      Despite the training parameters of the cgNA+ model, the results presented in the manuscript indicate that methylated cytosines increase both DNA stiffness and nucleosome wrapping energy. However, when comparing nucleosome occupancy scores with predicted nucleosome wrapping energies and optimal configurations, the authors find that methylated CGIs exhibit higher nucleosome occupancies than unmethylated ones, which seems to contradict the expected relationship where increased stiffness should reduce nucleosome formation affinity. In the manuscript, the authors also admit that these conclusions “apparently runs counter to the (perhaps naive) intuition that high nucleosome forming affinity should arise for fragments with low wrapping energy”. Previous all-atom MD simulations (Pérez A, et al. Biophys J. 2012; Battistini, et al. PLOS Comput Biol. 2021; Ngo T, et al. Nat. Commun. 2016) show that the stiffer DNA upon CpG methylation reduces the affinity of DNA to assemble into nucleosomes or destabilizes nucleosomes. Given these findings, the authors need to address and reconcile these seemingly contradictory results, as the influence of epigenetic modifications on DNA mechanical properties and nucleosome formation is a critical aspect of their study.

      Understanding the influence of sequence-dependent and epigenetic modifications of DNA on mechanical properties and nucleosome formation is crucial for comprehending various cellular processes. The authors’ study, focusing on these aspects, definitely will garner interest from the DNA methylation research community.

      Training the cgNA+ model on alternative MD simulation datasets is certainly of interest to us. However, due to the significant computational cost, this remains a goal for future work. The relationship between nucleosome occupancy scores and nucleosome wrapping energy is still debated, as noted in our Discussion section. The conflicting results may reflect differences in experimental conditions and the contribution of cellular factors other than DNA mechanics to nucleosome formation in vivo. For instance, Pérez et al. (2012), Battistini et al. (2021), and Ngo et al. (2016) concluded that DNA methylation reduces nucleosome formation based on experiments with modified Widom 601 sequences. In contrast, the genome-wide methylation study by Collings and Anderson (2017) found the opposite effect. In our work, we also use whole-genome nucleosome occupancy data.

      Comments on revised version:

      The authors have addressed most of my comments and concerns regarding this manuscript.

      Reviewer #2 (Public Review):

      Summary:

      This study uses a coarse-grained model for double stranded DNA, cgNA+, to assess nucleosome sequence affinity. cgNA+ coarse-grains DNA on the level of bases and accounts also explicitly for the positions of the backbone phosphates. It has been proven to reproduce all-atom MD data very accurately. It is also ideally suited to be incorporated into a nucleosome model because it is known that DNA is bound to the protein core of the nucleosome via the phosphates.

      It is still unclear whether this harmonic model, parametrized for unbound DNA, is accurate enough to describe DNA inside the nucleosome. Previous models by other authors, using more coarse-grained models of DNA, have been rather successful in predicting base pair sequence dependent nucleosome behavior. This is at least the case as far as DNA shape is concerned, whereas assessing the role of DNA bendability (something this paper focuses on) has, to my knowledge, been consistently challenging in all nucleosome models.

      It is thus of major interest whether this more sophisticated model is also more successful in handling this issue. As far as I can tell the work is technically sound and properly accounts for not only the energy required in wrapping DNA but also entropic effects, namely the change in entropy that DNA experiences when going from the free state to the bound state. The authors make an approximation here which seems to me to be a reasonable first step.
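      To make this bookkeeping concrete, here is a minimal sketch of the free-energy decomposition that a harmonic (Gaussian) model of DNA implies; the notation (w, ŵ, K, Σ) is introduced here purely for illustration and is not necessarily the authors' own.

      ```latex
      % Sketch only: w are internal coordinates, \hat{w} the free ground state,
      % K the stiffness matrix (in units of k_B T), \Sigma = K^{-1} the covariance,
      % and w^* the nucleosome-constrained minimiser.
      U_{\mathrm{wrap}} = \tfrac{1}{2}\,(w^{*}-\hat{w})^{\top} K\,(w^{*}-\hat{w}),
      \qquad
      T\,\Delta S = \tfrac{1}{2}\,k_{B}T\,
        \ln\frac{\det\Sigma_{\mathrm{bound}}}{\det\Sigma_{\mathrm{free}}},
      \qquad
      \Delta F \approx U_{\mathrm{wrap}} - T\,\Delta S .
      ```

      The entropic term is the standard Gaussian expression comparing bound and free fluctuations; how the bound covariance Σ_bound is defined is precisely where the approximation mentioned above enters.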

      Of interest is also that the authors have the parameters at hand to study the effect of methylation of CpG steps. This is especially interesting as it allows one to study a scenario where changes in the physical properties of base pair steps via methylation might influence nucleosome positioning and stability in a cell-type-specific way.

      Overall, this is an important contribution to the question of how sequence affects nucleosome positioning and affinity. The findings suggest that cgNA+ has something new to offer. But the problem is complex, also on the experimental side, so many questions remain open. Despite this, I highly recommend publication of this manuscript.

      Strengths:

      The authors use their state-of-the-art coarse grained DNA model which seems ideally suited to be applied to nucleosomes as it accounts explicitly for the backbone phosphates.

      Weaknesses:

      The authors introduce penalty coefficients cᵢ to avoid steric clashes between the two DNA turns in the nucleosome. This requires cᵢ values that are so high that standard deviations in the fluctuations of the simulation are smaller than in the experiments.

      Indeed, smaller cᵢ values lead to steric clashes between the two turns of DNA. A possible improvement of our optimisation method, and a direction of future work, would be to add a penalty that prevents steric clashes to the objective function, as sketched below. The cᵢ values could then be reduced to allow larger fluctuations that are even closer to the experimental structures.
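      As a rough sketch of the kind of penalised objective described above (our notation, not the manuscript's, and assuming the cᵢ weight soft phosphate-binding constraints), an explicit steric term acting on phosphate pairs from the two DNA turns could be added:

      ```latex
      % Sketch only: p_i(w) are phosphate positions, p_i^0 their target binding
      % sites, d_{jk}(w) distances between phosphates on the two turns, d_0 a
      % clash cutoff, and \lambda the weight of the proposed steric penalty.
      J(w) = \tfrac{1}{2}\,(w-\hat{w})^{\top} K\,(w-\hat{w})
           + \sum_{i} c_{i}\,\lVert p_{i}(w)-p_{i}^{0}\rVert^{2}
           + \lambda \sum_{j<k} \max\bigl(0,\; d_{0}-d_{jk}(w)\bigr)^{2} .
      ```

      With such a term in place, the cᵢ could be relaxed without the two turns collapsing onto each other.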

      Reviewer #3 (Public Review):

      Summary:

      In this study, the authors utilize biophysical modeling to investigate differences in free energies and nucleosomal configuration probability density of CpG islands and nonmethylated regions in the genome. Toward this goal, they develop and apply the cgNA+ coarse-grained model, an extension of their prior molecular modeling framework.

      Strengths:

      The study utilizes biophysical modeling to gain mechanistic insight into nucleosomal occupancy differences in CpG and nonmethylated regions in the genome.

      Weaknesses:

      Although the overall study is interesting, the manuscript needs more clarity in places. Moreover, the rationale and conclusion for some of the analyses are not well described.

      We have revised the manuscript in accordance with the reviewer’s latest suggestions.

      Comments on revised version:

      Authors have attempted to address previously raised concerns.

      Reviewer #1 (Recommendations for the authors):

      The authors have addressed most of my comments and concerns regarding this manuscript. Among them, the most significant pertains to fitting the coarse-grained model using a different all-atom force field to verify the conclusions. The authors acknowledged this point but noted the computational cost involved and proposed it as a direction for future work. Overall, I recommend the revised version for publication.

      Reviewer #2 (Recommendations for the authors):

      My previous comments were addressed satisfactorily.

      Reviewer #3 (Recommendations for the authors):

      Authors have attempted to address previously raised concerns. However, some concerns listed below remain that need to be addressed.

      (1) The first reviewer makes a valid point regarding the reconciliation of conflicting observations related to nucleosome-forming affinity and wrapping energy. Unfortunately, the authors don’t seem to address this and state that this will be the goal for the future study.

      Training the cgNA+ model on alternative MD simulation datasets remains future work. However, we revised the Discussion section to more clearly address the conflicting experimental findings in the literature on how DNA methylation influences nucleosome formation.

      (2) Please report the effect size and statistical significance value for Figures 7 and 8, as this information is currently not provided, despite the authors’ claim that these observations are statistically significant.

      This information is now presented in Supplementary Tables S1-S4.

      (3) In response to the discrepancy in cell lines for correlating nucleosome occupancy and methylation analyses, the authors claim that there is no publicly available nucleosome occupancy and methylation data for a human cell type within the human genome. This claim is confusing, as the GM12878 cell line has been extensively characterized with MNase-seq and WGBS.

      We thank the reviewer for this remark. We have removed the statement regarding the lack of data from the manuscript; we intend to examine the suggested cell line in future research.

      (4) In response to my question, the authors claimed that they selected regions from chromosome 1 exclusively; however, the observation remains unchanged when considering sequence samples from different genomic regions. They should provide examples from different chromosomes as part of the supplementary information to further support this.

      The examples of corresponding plots for other nucleosomes are now shown in Supplementary Figure S9.

    1. Reviewer #2 (Public review):

      Summary:

      In this study, the authors propose that there are two types of letter knowledge: knowledge about letter sound and knowledge about letter shape. Based on previous studies on implicit statistical learning in adults and babies, the authors hypothesized that passive exposure to letters in the environment allows early readers to acquire knowledge of letter shapes even before knowledge of letter-sound association. Children performed a set of experiments that measures letter shape familiarity, letter-sound association performance, visual processing of letters, and a reading-related cognitive skill. The results show that even the children who have little to no knowledge of letter names are familiar with letter shapes, and that this letter shape familiarity is predictive of performance in visual processing of letters.

      Strengths:

      The authors' hypothesis is based on widely accepted findings in vision science that repeated exposure to certain stimuli promotes implicit learning of, for example, statistical properties of the stimuli. They used simple and well-established tasks in large-scale experiments with a special population (i.e., children). The data analysis is quite comprehensive, accounting for any alternative explanations when needed. The data support at least a part of their hypothesis that the knowledge of letter shapes is distinct from, and precedes, the knowledge of letter-sound association, and is associated with performance in visual processing of the letters. This study shed light on a rather overlooked aspect of letter knowledge, i.e., letter shapes, challenging the idea that letters are learned only through formal instruction and calling for future research on the role of passive exposure to letters in reading acquisition.

      Weaknesses:

      Although the authors have successfully identified the knowledge of letter shapes as another type of letter knowledge other than the knowledge of letter-sound association, the question of whether it drives the subsequent reading acquisition remains largely unanswered, despite it being strongly implied in the Introduction. The authors collected a RAN score, which is known to robustly predict future reading fluency, but it did not show a significant partial correlation with familiarity accuracy (i.e., familiarity accuracy is not necessary to predict RAN score). The authors discussed that the performance in visual processing of letters might capture unique variance in reading fluency unexplained by RAN scores, but currently, this claim seems speculative.

      Since even children without formal literacy instruction were highly familiar with letter shapes, it would be reasonable to assume that they had obtained the knowledge through passive exposure. However, the role of passive exposure was not directly tested in the study.

      Given the superimposed straight lines in Figure 2, I assume the authors computed Pearson correlation coefficients. Testing the statistical significance of the Pearson correlation coefficient requires the assumption of bivariate normality (and therefore constant variance of a variable across the range of the other). According to Figure 2, this doesn't seem to be met, as the familiarity accuracy is hitting the ceiling. The ceiling effect might not be critical in Figure 2, since it tends to attenuate correlation, not inflate it. But in Figures 3 and 4, the authors' conclusion depends on the non-significant partial correlation. In fact, the authors themselves wrote that the ceiling effect might lead to a non-significant correlation even if there is an actual effect (line 404).
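      To make the attenuation and ceiling points concrete, here is a small illustrative simulation on synthetic data (the variable names are hypothetical and do not correspond to the study's measurements):

      ```r
      # Synthetic illustration of how a ceiling on one measure attenuates the
      # Pearson correlation; a rank-based coefficient is shown as a robustness check.
      set.seed(1)
      n <- 200
      familiarity_true <- rnorm(n)                      # latent familiarity
      ran_score <- 0.5 * familiarity_true + rnorm(n)    # correlated outcome
      familiarity_obs <- pmin(familiarity_true, 0.5)    # ceiling-censored measure

      cor(familiarity_true, ran_score)                       # without ceiling
      cor(familiarity_obs, ran_score)                        # attenuated Pearson r
      cor(familiarity_obs, ran_score, method = "spearman")   # rank-based alternative
      ```

      Censoring the predictor tends to shrink the Pearson r relative to the uncensored case, illustrating the direction of attenuation described above.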

    2. Reviewer #3 (Public review):

      Summary:

      This study examined how young children with minimal reading instruction process letters, focusing on their familiarity with letter shapes, knowledge of letter names, and visual discrimination of upright versus inverted letters. Across four experiments, kindergarten and Grade 1 children could identify the correct orientation of letters even without knowing their names.

      Strengths:

      This study addresses an important research gap by examining whether children develop letter familiarity prior to formal literacy instruction and how this skill relates to reading-related cognitive abilities. By emphasizing letter familiarity alongside letter recognition, the study highlights a potentially overlooked yet important component of emergent literacy development.

      Weaknesses:

      The study's methods and results do not effectively test its stated research goals. Reading ability was not directly measured; instead, the authors inferred its relationship with reading from correlations between letter familiarity and reading-related cognitive measures, which limits the validity of their conclusions. Furthermore, the analytical approach was rather limited, relying primarily on simple and partial correlations without employing more advanced statistical methods that could better capture the underlying relationships.

      Major Comments:

      (1) Limited Novelty and Unclear Theoretical Contribution:

      The authors aim to challenge the view that children acquire letter shape knowledge only through formal literacy instruction, but similar questions regarding letter familiarity have already been explored in previous research. The manuscript does not clearly articulate how the present study advances beyond existing findings or why examining letter familiarity specifically before formal instruction provides new theoretical insight. Moreover, if letter familiarity and letter recognition are treated as distinct constructs, the authors should better justify their differentiation and clarify the theoretical significance of focusing on familiarity as an independent component of emergent literacy.

      (2) Overgeneralization to Reading Ability:

      Although the study measured several literacy-related cognitive skills and examined correlations with letter familiarity, it did not directly assess children's reading ability, as participants had not yet received formal literacy instruction. Therefore, the conclusion that letter familiarity influences reading skills (e.g., Line 519: "Our results are broadly consistent with previous work that has highlighted print letter knowledge as a strong predictor of future reading skills") is not fully supported and should be clarified or revised. To draw conclusions about the impact on reading ability, a longitudinal study would be more appropriate, assessing the relationship between letter familiarity and reading skills after children have received formal literacy instruction. If a longitudinal study is not feasible, measuring familial risk for dyslexia could provide an alternative approach to infer the potential influence of letter familiarity on later reading development.

      (3) Confusing and Limited Analytical Approach with Potential for More Sophisticated Modeling:

      The study employs a confusing analytical approach, alternating between simple correlational analyses and group-based comparisons, which may introduce circularity - for example, defining high vs. low familiarity groups partly based on performance differences in upright versus inverted letters and then observing a visual search advantage for upright letters within these groups. Moreover, the analyses are relatively simple: although multiple linear regression is mentioned, the results are not fully reported. These approaches may not fully capture the complex relationships among letter familiarity, recognition, visual search performance, RAN, and other covariates. More sophisticated modeling, such as mixed-effects models to account for repeated measures, structural equation modeling to examine latent constructs, or multivariate approaches jointly modeling familiarity and recognition effects, could provide a clearer understanding of the unique contribution of letter shape familiarity to early literacy outcomes. In addition, a large number of correlations were conducted without correction for multiple comparisons, which may increase the risk of false positives and raise concerns about the reliability of some significant findings.
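      As one concrete possibility along these lines, a minimal sketch of a mixed-effects specification is given below; the data are synthetic, the variable names (subject, orientation, familiarity, rt) are hypothetical stand-ins, and the lme4 package is assumed to be available.

      ```r
      library(lme4)

      # Synthetic stand-in for trial-level data: reaction time (rt) by letter
      # orientation, a per-child familiarity score, and a random intercept per
      # subject to account for repeated measures.
      set.seed(2)
      trials <- expand.grid(subject = factor(1:30), item = 1:20,
                            orientation = c("upright", "inverted"))
      trials$familiarity <- rep(rnorm(30), length.out = nrow(trials))   # per subject
      trials$rt <- 600 + 25 * (trials$orientation == "inverted") +
        rep(rnorm(30, sd = 40), length.out = nrow(trials)) +            # subject effects
        rnorm(nrow(trials), sd = 60)                                    # trial noise

      fit <- lmer(rt ~ orientation * familiarity + (1 | subject), data = trials)
      summary(fit)

      # Holm correction for a family of correlation p-values (p_values is hypothetical):
      # p.adjust(p_values, method = "holm")
      ```

      The same logic extends to structural-equation or joint multivariate models; the key point is that repeated measures and the family of tests are handled explicitly.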

    1. Reviewer #2 (Public review):

      Summary:

      This work investigates transcriptional responses to varying levels of transcription factors (TFs). The authors aim for gradual up- and down-regulation of three transcription factors GFI1B, NFE2 and MYB in K562 cells, by using a CRISPRa- and a CRISPRi line, together with sgRNAs of varying potency. Targeted single-cell RNA sequencing is then used to measure gene expression of a set of 90 genes, which were previously shown to be downstream of GFI1B and NFE2 regulation. This is followed by an extensive computational analysis of the scRNA-seq dataset. By grouping cells with the same perturbations, the authors can obtain groups of cells with varying average TF expression levels. The achieved perturbations are generally subtle, not reaching half or double doses for most samples, and up-regulation is generally weak below 1.5-fold in most cases. Even in this small range, many target genes exhibit a non-linear response. Since this is rather unexpected, it is crucial to rule out technical reasons for these observations.

      Strengths:

      The work showcases how a single dataset of CRISPRi/a perturbations with scRNA-seq readout and an extended computational analysis can be used to estimate transcriptome dose-responses, a general approach that likely can be built upon in the future.

      Moreover, the authors highlight tiling of sgRNAs +/- 1000 bp around the TSS as a useful approach. Compared with conventional direct TSS-targeting (+/- 200 bp), the larger sequence window allows placing more sgRNAs. It also requires little prior knowledge of CREs, and avoids using "attenuated" sgRNAs, which would require specialized sgRNA design.

      Weaknesses:

      The experiment was performed in a single replicate and it would have been reassuring to see an independent validation of the main findings, for example through measuring individual dose-response curves.

      Much of the analysis depends on the estimation of log-fold changes between groups of single cells with non-targeting controls and those carrying a guide RNA driving a specific knockdown. Generally, biological replicates are recommended for differential gene expression testing (Squair et al. 2021, https://doi.org/10.1038/s41467-021-25960-2). When using the FindMarkers function from the Seurat package, the authors deviate from the recommendations for pseudo-bulk analysis to aggregate the raw counts (https://satijalab.org/seurat/articles/de_vignette.html). Furthermore, differential gene expression analysis of scRNA-seq data can suffer from mis-estimations (Nguyen et al. 2023, https://doi.org/10.1038/s41467-023-37126-3), and different computational tools or versions can affect these estimates strongly (Pullin et al. 2024, https://doi.org/10.1186/s13059-024-03183-0 and Rich et al. 2024, https://doi.org/10.1101/2024.04.04.588111). Therefore, it would be important to describe more precisely in the Methods how this analysis was performed, any deviations from default parameters, package versions, and at which point which values were aggregated to form "pseudobulk" samples.

      Two different cell lines are used to construct dose-response curves, where a CRISPRi line allows gene down-regulation and the CRISPRa line allows gene upregulation. Although both lines are derived from the same parental line (K562), the expression analysis of Tet2, which is absent in the CRISPRi line but expressed in the CRISPRa line (Fig. S1F, S3A), suggests clonal differences between the two lines. Similarly, the UMAP in S3C and the PCA in S4A suggest batch effects between the two lines. These might confound this analysis, even though all fold changes are calculated relative to the baseline expression in the respective cell line (NTC cells). Combining log2-fold changes from the two cell lines with different baseline expression into a single curve (e.g. Fig. 3) remains misleading, because different data points could be normalized to different baseline expression levels.

      The study estimates the relationship between TF dose and target gene expression. This requires a system that allows quantitative changes in TF expression. The data provided does not convincingly show that this condition is met, which however is an essential prerequisite for the presented conclusions. Specifically, the data shown in Fig. S3A shows that upon stronger knock-down, a subpopulation of cells appears where the targeted TF is not detected any more (drop-outs). Also, Fig. 3B (top) suggests that the knock-down is either subtle (similar to NTCs) or strong, but intermediate knock-down (log2-FC of 0.5-1) does not occur. Although the authors argue that this is a technical effect of the scRNA-seq protocol, it is also possible that this represents a binary behavior of the CRISPRi system. Previous work has shown that CRISPRi systems with the KRAB domain largely result in binary repression and not in gradual down-regulation as suggested in this study (Bintu et al. 2016 (https://doi.org/10.1126/science.aab2956), Noviello et al. 2023 (https://doi.org/10.1038/s41467-023-38909-4)).

      One of the major conclusions of the study is that non-linear behavior is common. It would be helpful to show that this observation does not arise from the technical concerns described in the previous points. This could be done for instance with independent experimental validations.

      Did the authors achieve their aims? Do the results support the conclusions?:

      Some of the most important conclusions, such as the claim that non-linear responses are common, are not well supported because they rely on accurately determining the quantitative responses of trans genes, which suffers from the previously mentioned concerns.

      Discussion of the likely impact of the work on the field, and the utility of the methods and data to the community:

      Together with other recent publications, this work emphasizes the need to study transcription factor function with quantitative perturbations. The computational code repository contains all the valuable code with inline comments, but would have benefited from a readme file explaining the repository structure, package versions, and instructions to reproduce the analyses, including which input files or directory structure would be needed.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      In this manuscript, Domingo et al. present a novel perturbation-based approach to experimentally modulate the dosage of genes in cell lines. Their approach is capable of gradually increasing and decreasing gene expression. The authors then use their approach to perturb three key transcription factors and measure the downstream effects on gene expression. Their analysis of the dosage response curve of downstream genes reveals marked non-linearity.

      One of the strengths of this study is that many of the perturbations fall within the physiological range for each cis gene. This range is presumably between a single-copy state of heterozygous loss-of-function (log fold change of -1) and a three-copy state (log fold change of ~0.6). This is in contrast with CRISPRi or CRISPRa studies that attempt to maximize the effect of the perturbation, which may result in downstream effects that are not representative of physiological responses.

      Another strength of the study is that various points along the dosage-response curve were assayed for each perturbed gene. This allowed the authors to effectively characterize the degree of linearity and monotonicity of each dosage-response relationship. Ultimately, the study revealed that many of these relationships are non-linear, and that the response to activation can be dramatically different than the response to inhibition.

      To test their ability to gradually modulate dosage, the authors chose to measure three transcription factors and around 80 known downstream targets. As the authors themselves point out in their discussion about MYB, this biased sample of genes makes it unclear how this approach would generalize genome-wide. In addition, the data generated from this small sample of genes may not represent genome-wide patterns of dosage response. Nevertheless, this unique data set and approach represents a first step in understanding dosage-response relationships between genes.

      Another point of general concern in such screens is the use of the immortalized K562 cell line. It is unclear how the biology of these cell lines translates to the in vivo biology of primary cells. However, the authors do follow up with cell-type-specific analyses (Figures 4B, 4C, and 5A) to draw a correspondence between their perturbation results and the relevant biology in primary cells and complex diseases.

      The conclusions of the study are generally well supported with statistical analysis throughout the manuscript. As an example, the authors utilize well-known model selection methods to identify when there was evidence for non-linear dosage response relationships.

      Gradual modulation of gene dosage is a useful approach to model physiological variation in dosage. Experimental perturbation screens that use CRISPR inhibition or activation often use guide RNAs targeting the transcription start site to maximize their effect on gene expression. Generating a physiological range of variation will allow others to better model physiological conditions.

      There is broad interest in the field to identify gene regulatory networks using experimental perturbation approaches. The data from this study provides a good resource for such analytical approaches, especially since both inhibition and activation were tested. In addition, these data provide a nuanced, continuous representation of the relationship between effectors and downstream targets, which may play a role in the development of more rigorous regulatory networks.

      Human geneticists often focus on loss-of-function variants, which represent natural knock-down experiments, to determine the role of a gene in the biology of a trait. This study demonstrates that dosage response relationships are often non-linear, meaning that the effect of a loss-of-function variant may not necessarily carry information about increases in gene dosage. For the field, this implies that others should continue to focus on both inhibition and activation to fully characterize the relationship between gene and trait.

      We thank the reviewer for their thoughtful and thorough evaluation of our study. We appreciate their recognition of the strengths of our approach, particularly the ability to modulate gene dosage within a physiological range and to capture non-linear dosage-response relationships. We also agree with the reviewer’s points regarding the limitations of gene selection and the use of K562 cells, and we are encouraged that the reviewer found our follow-up analyses and statistical framework to be well-supported. We believe this work provides a valuable foundation for future genome-wide applications and more physiologically relevant perturbation studies.

      Reviewer #2 (Public review):

      Summary:

      This work investigates transcriptional responses to varying levels of transcription factors (TFs). The authors aim for gradual up- and down-regulation of three transcription factors GFI1B, NFE2, and MYB in K562 cells, by using a CRISPRa- and a CRISPRi line, together with sgRNAs of varying potency. Targeted single-cell RNA sequencing is then used to measure gene expression of a set of 90 genes, which were previously shown to be downstream of GFI1B and NFE2 regulation. This is followed by an extensive computational analysis of the scRNA-seq dataset. By grouping cells with the same perturbations, the authors can obtain groups of cells with varying average TF expression levels. The achieved perturbations are generally subtle, not reaching half or double doses for most samples, and up-regulation is generally weak below 1.5-fold in most cases. Even in this small range, many target genes exhibit a non-linear response. Since this is rather unexpected, it is crucial to rule out technical reasons for these observations.

      We thank the reviewer for their detailed and thoughtful assessment of our work. We are encouraged by their recognition of the strengths of our study, including the value of quantitative CRISPR-based perturbation coupled with single-cell transcriptomics, and its potential to inform gene regulatory network inference. Below, we address each of the concerns raised:

      Strengths:

      The work showcases how a single dataset of CRISPRi/a perturbations with scRNA-seq readout and an extended computational analysis can be used to estimate transcriptome dose responses, a general approach that likely can be built upon in the future.

      Weaknesses:

      (1) The experiment was only performed in a single replicate. In the absence of an independent validation of the main findings, the robustness of the observations remains unclear.

      We acknowledge that our study was performed in a single pooled experiment. While additional replicates would certainly strengthen the findings, in high-throughput single-cell CRISPR screens, individual cells with the same perturbation serve as effective internal replicates. This is a common practice in the field. Nevertheless, we agree that biological replicates would help control for broader technical or environmental effects.

      (2) The analysis is based on the calculation of log-fold changes between groups of single cells with non-targeting controls and those carrying a guide RNA driving a specific knockdown. How the fold changes were calculated exactly remains unclear, since it is only stated that the FindMarkers function from the Seurat package was used, which is likely not optimal for quantitative estimates. Furthermore, differential gene expression analysis of scRNA-seq data can suffer from data distortion and mis-estimations (Heumos et al. 2023 (https://doi.org/10.1038/s41576-023-00586-w), Nguyen et al. 2023 (https://doi.org/10.1038/s41467-023-37126-3)). In general, the pseudo-bulk approach used is suitable, but the correct treatment of drop-outs in the scRNA-seq analysis is essential.

      We thank the reviewer for highlighting recent concerns in the field. A study benchmarking association testing methods for perturb-seq data found that among existing methods, Seurat’s FindMarkers function performed the best (T. Barry et al. 2024).

      In the revised Methods, we now specify the formula used to calculate fold change and clarify that the estimates are derived from the Wilcoxon test implemented in Seurat’s FindMarkers function. We also employed pseudo-bulk grouping to mitigate single-cell noise and dropout effects.

      (3) Two different cell lines are used to construct dose-response curves, where a CRISPRi line allows gene down-regulation and the CRISPRa line allows gene upregulation. Although both lines are derived from the same parental line (K562) the expression analysis of Tet2, which is absent in the CRISPRi line, but expressed in the CRISPRa line (Figure S3A) suggests substantial clonal differences between the two lines. Similarly, the PCA in S4A suggests strong batch effects between the two lines. These might confound this analysis.

      We agree that baseline differences between CRISPRi and CRISPRa lines could introduce confounding effects if not appropriately controlled for. We emphasize that all comparisons are made as fold changes relative to non-targeting control (NTC) cells within each line, thereby controlling for batch- and clone-specific baseline expression. See figures S4A and S4B.

      (4) The study uses pseudo-bulk analysis to estimate the relationship between TF dose and target gene expression. This requires a system that allows quantitative changes in TF expression. The data provided does not convincingly show that this condition is met, which however is an essential prerequisite for the presented conclusions. Specifically, the data shown in Figure S3A shows that upon stronger knock-down, a subpopulation of cells appears, where the targeted TF is not detected anymore (drop-outs). Also Figure 3B (top) suggests that the knock-down is either subtle (similar to NTCs) or strong, but intermediate knock-down (log2-FC of 0.5-1) does not occur. Although the authors argue that this is a technical effect of the scRNA-seq protocol, it is also possible that this represents a binary behavior of the CRISPRi system. Previous work has shown that CRISPRi systems with the KRAB domain largely result in binary repression and not in gradual down-regulation as suggested in this study (Bintu et al. 2016 (https://doi.org/10.1126/science.aab2956), Noviello et al. 2023 (https://doi.org/10.1038/s41467-023-38909-4)).

      Figure S3A shows normalized expression values, not fold changes. A pseudobulk approach reduces single-cell noise and dropout effects. To test whether dropout events reflect true binary repression or technical effects, we compared trans-effects across cells with zero versus low-but-detectable target gene expression (Figure S3B). These effects were highly concordant, supporting the interpretation that dropout is largely technical in origin. We agree that KRAB-based repression can exhibit binary behavior in some contexts, but our data suggest that cells with intermediate repression exist and are biologically meaningful. In ongoing unpublished work, we pursue further analysis of these data at the single cell level, and show that for nearly all guides the dosage effects are indeed gradual rather than driven by binary effects across cells.

      (5) One of the major conclusions of the study is that non-linear behavior is common. This is not surprising for gene up-regulation, since gene expression will reach a plateau at some point, but it is surprising to be observed for many genes upon TF down-regulation. Specifically, here the target gene responds to a small reduction of TF dose but shows the same response to a stronger knock-down. It would be essential to show that this observation does not arise from the technical concerns described in the previous point and it would require independent experimental validations.

      This phenomenon—where trans gene responses to relatively small changes in cis gene dosage can exceed the magnitude of the cis gene perturbation itself—is not unique to our study. This also makes biological sense, since transcription factors are known to be highly dosage sensitive and generally show a smaller range of variation than many other genes (that are regulated by TFs). Empirically, these effects have been observed in previous CRISPR perturbation screens conducted in K562 cells, including those by Morris et al. (2023), Gasperini et al. (2019), and Replogle et al. (2022), to name a few studies whose data our lab has examined directly.

      (6) One of the conclusions of the study is that guide tiling is superior to other methods such as sgRNA mismatches. However, the comparison is unfair, since different numbers of guides are used in the different approaches. Relatedly, the authors point out that tiling sometimes surpassed the effects of TSS-targeting sgRNAs; however, this was the least fair comparison (2 TSS vs 10 tiling guides) and additionally depends on the accurate annotation of the TSS in the relevant cell line.

      We do not draw this conclusion simply from observing the range achieved but from a more holistic assessment. We would like to clarify that the number of sgRNAs used in each approach is proportional to the number of base pairs that can be targeted in each region: while the TSS-targeting strategy is typically constrained to a small window of a few dozen base pairs, tiling covers multiple kilobases upstream and downstream, resulting in more guides by design rather than by experimental bias. The guides with mismatches do not perform well for gradual upregulation.

      We would also like to point out that the observation that the strongest effects can arise from regions outside the annotated TSS is not unique to our study and has been demonstrated in prior work (referenced in the text).

      To address this concern, we have revised the text to clarify that we do not consider guide tiling to be inherently superior to other approaches such as sgRNA mismatches. Rather, we now describe tiling as a practical and straightforward strategy to obtain a wide range of gene dosage effects without requiring prior knowledge beyond the approximate location of the TSS. We believe this rephrasing more accurately reflects the intent and scope of our comparison.

      (7) Did the authors achieve their aims? Do the results support the conclusions?: Some of the most important conclusions are not well supported because they rely on accurately determining the quantitative responses of trans genes, which suffers from the previously mentioned concerns.

      We appreciate the reviewer’s concern, but we would have wished for a more detailed characterization of which conclusions are not supported, given that we believe our approach actually accounts for the major concerns raised above. We believe that the observation of non-linear effects is a robust conclusion that is also consistent with known biology, with this paper introducing new ways to analyze this phenomenon.

      (8) Discussion of the likely impact of the work on the field, and the utility of the methods and data to the community:

      Together with other recent publications, this work emphasizes the need to study transcription factor function with quantitative perturbations. Missing documentation of the computational code repository reduces the utility of the methods and data significantly.

      Documentation is included as inline comments within the R code files to guide users through the analysis workflow.

      Reviewer #1 (Recommendations for the authors):

      In Figure 3C (and similar plots of dosage response curves throughout the manuscript), we initially misinterpreted the plots because we assumed that the zero log fold change on the horizontal axis was in the middle of the plot. This gives the incorrect interpretation that the trans genes are insensitive to loss of GFI1B in Figure 3C, for instance. We think it may be helpful to add a line to mark the zero log fold change point, as was done in Figure 3A.

      We thank the reviewer for this helpful suggestion. To improve clarity, we have added a vertical line marking the zero log fold change point in Figure 3C and all similar dosage-response plots. We agree this makes the plots easier to interpret at a glance.

      Similarly, for heatmaps in the style of Figure 3B, it may be nice to have a column for the non-targeting controls, which should be a white column between the perturbations that increase versus decrease GFI1B.

      We appreciate the suggestion. However, because all perturbation effects are computed relative to the non-targeting control (NTC) cells, explicitly including a separate column for NTC in the heatmap would add limited interpretive value and could unnecessarily clutter the figure. For clarity, we have emphasized in the figure legend that the fold changes are relative to the NTC baseline.

      We found it challenging to assess the degree of uncertainty in the estimation of log fold changes throughout the paper. For example, the authors state the following on line 190: "We observed substantial differences in the effects of the same guide on the CRISPRi and CRISPRa backgrounds, with no significant correlation between cis gene fold-changes." This claim was challenging to assess because there are no horizontal or vertical error bars on any of the points in Figure 2A. If the log fold change estimates are very noisy, the data could be consistent with noisy observations of a correlated underlying process. Similarly, to our understanding, the dosage response curves are fit assuming that the cis log fold changes are fixed. If there is excessive noise in the estimation of these log fold changes, it may bias the estimated curves. It may be helpful to give an idea of the amount of estimation error in the cis log fold changes.

      We agree that assessing the uncertainty in log fold change estimates is important for interpreting both the lack of correlation between CRISPRi and CRISPRa effects (Figure 2A) and the robustness of the dosage-response modeling.

      In response, we have now updated Figure 2A to include both vertical and horizontal error bars, representing the standard errors of the log2 fold-change estimates for each guide in the CRISPRi and CRISPRa conditions. These error estimates were computed based on the differential expression analysis performed using the FindMarkers function in Seurat, which models gene expression differences between perturbed and control cells. We also now clarify this in the figure legend and methods.

      The authors mention hierarchical clustering on line 313, which identified six clusters. Although a dendrogram is provided, these clusters are not displayed in Figure 4A. We recommend displaying these clusters alongside the dendrogram.

      We have added colored bars indicating the clusters to improve the clarity. Thank you for the suggestion.

      In Figures 4B and 4C, it was not immediately clear what some of the gene annotations meant. For example, neither the text nor the figure legend discusses what "WBCs", "Platelets", "RBCs", or "Reticulocytes" mean. It would be helpful to include this somewhere other than only the methods to make the figure more clear.

      To improve clarity, we have updated the figure legends for Figures 4B and 4C to explicitly define these abbreviations.

      We struggled to interpret Figure 4E. Although the authors focus on the association of MYB with pHaplo, we would have appreciated some general discussion about the pattern of associations seen in the figure and what the authors expected to observe.

      We have changed the paragraph to add more exposition and clarification:

      “The link between selective constraint and response properties is most apparent in the MYB trans network. Specifically, the probability of haploinsufficiency (pHaplo) shows a significant negative correlation with the dynamic range of transcriptional responses (Figure 4G): genes under stronger constraint (higher pHaplo) display smaller dynamic ranges, indicating that dosage-sensitive genes are more tightly buffered against changes in MYB levels. This pattern was not reproduced in the other trans networks (Figure 4E)”.

      Line 71: potentially incorrect use of "rending" and incorrect sentence grammar.

      Fixed

      Line 123: "co-expression correlation across co-expression clusters" - authors may not have intended to use "co-expression" twice.

      Original sentence was correct.

      Line 246: "correlations" is used twice in "correlations gene-specific correlations."

      Fixed.

      Reviewer #2 (Recommendations for the authors):

      (1) To show that the approach indeed allows gradual down-regulation it would be important to quantify the knock-down strength with a single-cell readout for a subset of sgRNAs individually (e.g. flowFISH/protein staining flow cytometry).

      We agree that single-cell validation of knockdown strength using orthogonal approaches such as flowFISH or protein staining would provide additional support. However, such experiments fall outside the scope of the current study and are not feasible at this stage. We note that the observed transcriptomic changes and dosage responses across multiple perturbations are consistent with effective and graded modulation of gene expression.

      (2) Similarly, an independent validation of the observed dose-response relationships, e.g. with individual sgRNAs, can be helpful to support the conclusions about non-linear responses.

      Fig. S4C includes replication of trans-effects for a handful of guides used both in this study and in Morris et al. While further orthogonal validation of dose-response relationships would be valuable, such extensive additional work is not currently feasible within the scope of this study. Nonetheless, the high degree of replication in Fig. S4C as well as consistency of patterns observed across multiple sgRNAs and target genes provides strong support for the conclusions drawn from our high-throughput screen.

      (3) The calculation of the log2 fold changes should be documented more precisely. To perform a pseudo-bulk analysis, the raw UMI counts should be summed up in each group (NTC, individual targeting sgRNAs), including zero counts, then the data should be normalized and the fold change should be calculated. The DESeq package for example would be useful here.
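      For concreteness, a minimal sketch of this pseudo-bulk recipe follows. The object names are assumptions for illustration (counts: a genes-by-cells raw UMI matrix; guide and lane: per-cell labels, with "NTC" marking non-targeting controls), lanes stand in as pseudo-replicates only because the screen has a single biological replicate, and DESeq2 is used as one possible implementation of the suggestion above.

      ```r
      library(DESeq2)

      # Assumed inputs: `counts` (genes x cells raw UMIs), `guide` and `lane`
      # (per-cell labels; guide == "NTC" for non-targeting controls).
      meta  <- data.frame(guide = guide, lane = lane)
      group <- factor(paste(meta$guide, meta$lane, sep = "_"))

      # Sum raw UMI counts (including zeros) within each guide-by-lane group.
      pseudobulk <- t(rowsum(t(as.matrix(counts)), group = group))

      coldata <- unique(meta)
      rownames(coldata) <- paste(coldata$guide, coldata$lane, sep = "_")
      coldata <- coldata[colnames(pseudobulk), ]
      coldata$guide <- relevel(factor(coldata$guide), ref = "NTC")

      dds <- DESeqDataSetFromMatrix(countData = pseudobulk,
                                    colData   = coldata,
                                    design    = ~ guide)
      dds <- DESeq(dds)

      # Normalised log2 fold change of one (hypothetical) guide versus NTC.
      res <- results(dds, contrast = c("guide", "sgGFI1B_1", "NTC"))
      ```

      The results() call then reports per-gene log2 fold changes and adjusted p-values relative to the NTC baseline.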

      We have updated the methods in the manuscript to provide more exposition of how the logFC was calculated:

      “In our differential expression (DE) analysis, we used Seurat’s FindMarkers() function, which computes the log fold change as the difference between the average normalized gene expression in each group on the natural log scale:

      logFC = log_e(mean(expression in group 1)) - log_e(mean(expression in group 2))

      This is calculated in pseudobulk, where cells with the same sgRNA are grouped together and their mean expression is compared to the mean expression of cells harbouring NTC guides. To calculate the per-gene differential expression p-value between the two cell groups (cells with sgRNA vs cells with NTC), a Wilcoxon rank-sum test was used”.
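      As an illustration of that definition (a sketch only: norm_expr, is_sgrna, and is_ntc are assumed objects, and the small pseudocount is our addition to avoid log(0)):

      ```r
      # Natural-log fold change of per-group mean normalized expression,
      # matching the quoted definition; eps guards against log(0).
      eps <- 1e-9
      logfc <- log(rowMeans(norm_expr[, is_sgrna, drop = FALSE]) + eps) -
               log(rowMeans(norm_expr[, is_ntc,   drop = FALSE]) + eps)
      ```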

      (4) A more careful characterization of the cell lines used would be helpful. First, it would be useful to include the quality controls performed when the clonal lines were selected, in the manuscript. Moreover, a transcriptome analysis in comparison to the parental cell line could be performed to show that the cell lines are comparable. In addition, it could be helpful to perform the analysis of the samples separately to see how many of the response behaviors would still be observed.

Details of the quality control steps used during the selection of the CRISPRa clonal line are already included in the Methods section, and Fig. S4A shows the transcriptome comparison of the CRISPRi and CRISPRa lines, including for non-targeting guides. Regarding the transcriptomic comparison with the parental cell line, we agree that such an analysis would be informative; however, it would require additional experiments that are not feasible within the scope of the current study. Finally, while analyzing the samples separately could provide further insight into response heterogeneity, we focused on identifying robust patterns across perturbations that are reproducible in our pooled screening framework. We believe these aggregate analyses capture the major response behaviors and support the conclusions drawn.

(5) In general we were surprised to see such strong responses in some of the trans genes, in some cases exceeding the fold changes of the cis gene perturbation by more than 2x, even at relatively modest cis gene perturbations (Figures S5-S8). How can this be explained?

      This phenomenon—where trans gene responses can exceed the magnitude of cis gene perturbations—is not unique to our study. Similar effects have been observed in previous CRISPR perturbation screens conducted in K562 cells, including those by Morris et al. (2023), Gasperini et al. (2019), and Replogle et al. (2022).

Several factors may contribute to this pattern. One possibility is that certain trans genes are highly sensitive to transcription factor dosage and therefore exhibit amplified expression changes in response to relatively modest upstream perturbations. Transcription factors are known to be highly dosage sensitive and generally vary over a smaller range than many of the genes they regulate. Mechanistically, this may involve non-linear signal propagation through regulatory networks, in which intermediate regulators or feedback loops amplify the downstream transcriptional response. While our dataset cannot fully disentangle these indirect effects, the consistency of this observation across multiple studies suggests it is a common feature of transcriptional regulation in K562 cells.
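To make the amplification argument concrete, the toy calculation below (purely illustrative, not an analysis from the manuscript) shows how a Hill-type response with a coefficient above one turns a modest change in TF level into a larger fold change in a downstream target when the TF sits below its half-maximal point; the hill() function and all numbers are invented for illustration.

# Purely illustrative: a Hill-type dose-response with coefficient n > 1
# amplifies fold changes when TF levels are well below the half-maximal
# constant k, since output then scales approximately as (TF/k)^n.
hill <- function(tf, n = 2, k = 10) tf^n / (k^n + tf^n)

tf_baseline  <- 2    # arbitrary TF expression units
tf_knockdown <- 1    # a 2-fold reduction of the cis gene
cis_log2fc   <- log2(tf_knockdown / tf_baseline)              # -1
trans_log2fc <- log2(hill(tf_knockdown) / hill(tf_baseline))  # about -2
c(cis = cis_log2fc, trans = trans_log2fc)

In this toy setting the trans log2 fold change is roughly double the cis log2 fold change, mirroring the kind of amplification the reviewer notes.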

      (6) In the analysis shown in Figure S3B, the correlation between cells with zero count and >0 counts for the cis gene is calculated. For comparison, this analysis should also show the correlation between the cells with similar cis-gene expression and between truly different populations (e.g. NTC vs strong sgRNA).

The intent of Figure S3B was not to compare biologically distinct populations or perform differential expression analyses—which we have already conducted and reported elsewhere in the manuscript—but rather to assess whether fold change estimates could be biased by differences in the baseline expression of the target gene across individual cells. Specifically, we sought to determine whether cells with zero versus non-zero expression (as can result from dropouts or binary on/off repression by the KRAB-based CRISPRi system) exhibit systematic differences that could distort fold change estimation. As such, the comparisons suggested by the reviewer do not directly address the question that Figure S3B was intended to answer.
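In practice this check amounts to correlating per-gene mean normalized expression between the two subsets of cells. A minimal sketch is shown below; obj and "TARGET_GENE" are placeholder names rather than identifiers from our pipeline.

library(Seurat)

# Sketch of the bias check: correlate per-gene mean normalized expression
# between cells with zero vs non-zero counts of the cis target gene.
expr    <- GetAssayData(obj, slot = "data")   # normalized expression matrix
has_cis <- expr["TARGET_GENE", ] > 0          # placeholder target gene name
mean_zero    <- Matrix::rowMeans(expr[, !has_cis, drop = FALSE])
mean_nonzero <- Matrix::rowMeans(expr[,  has_cis, drop = FALSE])
cor(mean_zero, mean_nonzero)                  # high correlation -> little bias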

(7) It is unclear why the correlation between different lanes is assessed as a quality control metric in Figure S1C. This does not substitute for replicates.

      The intent of Figure S1C was not to serve as a general quality control metric, but rather to illustrate that the targeted transcript capture approach yielded consistent and specific signal across lanes. We acknowledge that this may have been unclear and have revised the relevant sentence in the text to avoid misinterpretation.

      “We used the protein hashes and the dCas9 cDNA (indicating the presence or absence of the KRAB domain) to demultiplex and determine the cell line—CRISPRi or CRISPRa. Cells containing a single sgRNA were identified using a Gaussian mixture model (see Methods). Standard quality control procedures were applied to the scRNA-seq data (see Methods). To confirm that the targeted transcript capture approach worked as intended, we assessed concordance across capture lanes (Figure S1C)”.
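For reference, the single-sgRNA assignment step can be sketched as a two-component Gaussian mixture on log-transformed guide UMI counts. The code below is illustrative only, run on simulated counts rather than our data, and the mclust call simply stands in for the mixture model described in the Methods.

library(mclust)

# Illustrative only: separate ambient background from genuine guide-carrying
# cells for one sgRNA using a two-component GMM on log10(UMI + 1).
set.seed(1)
guide_umis <- c(rpois(500, 1), rpois(200, 50))   # simulated background + real cells
fit <- Mclust(log10(guide_umis + 1), G = 2)      # two-component Gaussian mixture
positive <- fit$classification == which.max(fit$parameters$mean)
table(positive)                                  # cells assigned to this guide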

(8) Figures and legends often miss important information. Figure 3B and S5-S8: what do the transparent bars represent? Figure S1A: color bar label missing. Figure S4D: what are the lines? Figure S9A: what is the red line? In Figure S8 some of the fitted curves do not overlap with the data points, e.g. PKM. Fig. 2C: why are there more than 96 guide RNAs (see y-axis)?

      We have addressed each point as follows:

      Figure 3B: The figure legend has been updated to clarify the meaning of the transparent bars.

      Figures S5–S8: There are no transparent bars in these figures; we confirmed this in the source plots.

      Figure S1A: The color bar label is already described in the figure legend, but we have reformulated the caption text to make this clearer.

      Figure S4D: The dashed line represents a linear regression between the x and y variables. The figure caption has been updated accordingly.

      Figure S9A: We clarified that the red line shows the median ∆AIC across all genes and conditions.

      Figure S8: We agree that some fitted curves (e.g., PKM) do not closely follow the data points. This reflects high noise in these specific measurements; as noted in the text, TET2 is not expected to exert strong trans effects in this context.

      Figure 2C: Thank you for catching this. The y-axis numbers were incorrect because the figure displays the proportion of guides (summing to 100%), not raw counts. We have corrected the y-axis label and updated the numbers in the figure to resolve this inconsistency.

      (9) The code is deposited on Github, but documentation is missing.

      Documentation is included as inline comments within the R code files to guide users through the analysis workflow.

      (10) The methods miss a list of sgRNA target sequences.

      We thank the reviewer for this observation. A complete table containing all processed data, including the sequences of the sgRNAs used in this study, is available at the following GEO link:

      https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE257547&format=file&file=GSE257547%5Fd2n%5Fprocessed%5Fdata%2Etxt%2Egz
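Assuming the file is tab-delimited, as its .txt extension suggests, the table can be loaded directly in R, for example:

# Download and read the processed data table (includes the sgRNA sequences).
url <- "https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE257547&format=file&file=GSE257547%5Fd2n%5Fprocessed%5Fdata%2Etxt%2Egz"
tmp <- tempfile(fileext = ".txt.gz")
download.file(url, tmp, mode = "wb")
d2n <- read.delim(gzfile(tmp))   # adjust sep/header if the format differs
head(d2n)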

      (11) In some parts, the language could be more specific and/or the readability improved, for example:

      Line 88: "quantitative landscape".

      Changed to “quantitative patterns”.

      Lines 88-91: long sentence hard to read.

      This complex sentence was broken up into two simpler ones:

      “We uncovered quantitative patterns of how gradual changes in transcription dosage lead to linear and non-linear responses in downstream genes. Many downstream genes are associated with rare and complex diseases, with potential effects on cellular phenotypes”.

      Line 110: "tiling sgRNAs +/- 1000 bp from the TSS", could maybe be specified by adding that the average distance was around 100 or 110 bps?

      Lines 244-246: hard to understand.

      We struggle to see the issue here and are not sure how it can be reworded.

      Lines 339-342: hard to understand.

      These sentences have been reworded to provide more clarity.

      (12) A number of typos, and errors are found in the manuscript:

      Line 71: "SOX2" -> "SOX9".

      FIXED

      Line 73: "rending" -> maybe "raising" or "posing"?

      FIXED

      Line 157: "biassed".

      FIXED

      Line 245: "exhibited correlations gene-specific correlations with".

      FIXED

      Multiple instances, e.g. 261: "transgene" -> "trans gene".

      FIXED

      Line 332: "not reproduced with among the other".

      FIXED

      Figure S11: betweenness.

This is the correct spelling.

      There are more typos that we didn't list here.

      We went through the manuscript and corrected all the spelling errors and typos.