10,000 Matching Annotations
  1. Dec 2025
    1. Digital citizenship in schools: nine elements all students should know, by Mike Ribble. Eugene, Oregon: International Society for Technology in Education, 2015. x, 212 pages; includes bibliographical references and index. Contents: Section I. Understanding digital citizenship -- chapter 1. A brief history of digital citizenship -- chapter 2. The nine elements of digital citizenship -- Section II. Digital citizenship in schools -- chapter 3. Creating a digital citizenship program -- chapter 4. Professional development activities in digital citizenship -- Section III. Digital citizenship in the classroom -- chapter 5. Teaching digital citizenship to students -- chapter 6. Foundational lessons in digital citizenship -- chapter 7. Guided lessons in digital citizenship -- Conclusion. (Internet Archive, English.)

      Digital

    1. https://www.facebook.com/groups/1794856020751839/?multi_permalinks=4308967442674005

      A reasonable-sounding account of why not to use some of the commonly suggested methods for rejuvenating platens.

      If you wish to temporarily lower the Shore A hardness of your typewriter platen, I would recommend applying a more compatible mixture of xylene (a non-polar solvent), methyl alcohol, and methyl salicylate (wintergreen oil) in a 3:1:1 ratio, such as is found in the product Rubber Renue from M.G. Chemicals. All the necessary chemicals are available on Amazon, and you can make it by the litre for pennies compared to the commercial product.
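      As a quick arithmetic aid, a small helper that splits a target batch volume into the 3:1:1 parts described above; the function name and the millilitre units are my own illustration, not from the source.

```python
def mix_volumes(total_ml, ratio=(3, 1, 1)):
    """Split a total volume (ml) into xylene / methyl alcohol /
    methyl salicylate parts for a 3:1:1 mix (hypothetical helper)."""
    parts = sum(ratio)
    return tuple(round(total_ml * r / parts, 1) for r in ratio)

# One litre batch: 600 ml xylene, 200 ml methyl alcohol, 200 ml methyl salicylate
print(mix_volumes(1000))  # (600.0, 200.0, 200.0)
```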

    1. Three insights

      Three insights / results from doing it for a month: 1) Small patterns emerge after two weeks. Repeated observations bring them to the fore, and they are no longer dismissed as incidental. The weekly review was key in this. 2) More awareness of positive moments (a pattern in itself, imo); again, the weekly review was key. Cf. #microsucces 3) Reflection changed his practice. A small feedback loop meant doing something slightly different the next session, based on the action formulated the previous one. The review served to show the impact of these micro-interventions. The reflection provided agency as a professional.

    1. examine power as an emergent consequence of deployment and incentives, not intent.

      Intent is definitely there too, though: much of this is entrenching, and much of it is a power grab (especially US tech at the moment) to get from capital/tech concentration to co-opting governance structures.

      AI is a tech that by design does not lower the participation threshold; it positions itself as bigger-than-us, like nuclear reactors: not just anyone can run with it. That only after three years are we seeing a budding DIY / individual-agency angle shows as much. It was designed only to create and entrench power (or transform it into another form); other digital technologies originate as challenges to power, while this one is clearly the opposite. The companies involved fight against things that push towards smaller-than-us AI tech, like local offline-first. E.g. the DMA/DSA.

    1. Practical guide: Committing without burning out, cultivating sustainable activism

      Introduction: The two faces of modern engagement

      Faced with an increasingly palpable ecological and social emergency, we are witnessing a multiplication of forms of citizen engagement.

      From acts of civil disobedience to awareness-raising initiatives to the creation of independent media, this collective momentum is vital for facing the challenges of our era.

      However, this intense mobilization exposes individuals and organizations to a high risk of physical and psychological exhaustion, a phenomenon often called "activist burnout".

      Far from being a sign of weakness, this exhaustion is a logical consequence of a demanding struggle against deeply entrenched systems.

      This guide is meant as a pragmatic and encouraging resource, synthesizing the strategies, shifts in perspective, and lessons shared by experienced activists for preserving one's energy and sustaining one's motivation over the long term.

      Written from the perspective of a psychologist observing these dynamics, this guide aims to equip agents of change to align their outward action with their inner resilience.

      --------------------------------------------------------------------------------

      1. Understanding the flame of engagement: The roots of action

      Before trying to protect the flame of engagement, it is essential to understand what lit it.

      Identifying your deep motivations, the initial "spark" that drives you to act, is the first step toward building a resilient and authentic commitment.

      It is by reconnecting with this visceral "why" that one can find the strength to get through moments of doubt and fatigue.

      This section explores the various triggers of action, as lived and shared by committed people from varied backgrounds.

      1.1. The initial spark: Identifying your "why"

      The paths that lead to engagement are many, often personal, and deeply transformative. They arise from an encounter between an individual sensibility and a reality that becomes intolerable.

      The sudden realization: For some, engagement is born of a shock, of a piece of information that shatters certainties.

      That was the case for arborist and tree climber Thomas Braille, who was "cut down at the knees" upon realizing that the climate emergency's deadline was no longer a distant projection but an imminent reality:

      "20 years is tomorrow."

      This realization was catalyzed by a visceral fear for his son's future.

      The feeling of personal injustice: The lived experience of injustice is a powerful and lasting driver.

      For filmmaker Flore Vasseur, the "hearth of the flame" lies in a personal injustice experienced during childhood.

      This initial wound, though long buried, became the source of a quest for repair and of an acute sensitivity to the world's injustices.

      Passion confronted with reality: Engagement can also emerge when a lifelong passion collides with inaction and the absurdity of the system.

      Agroclimatologist Serge Zaka, passionate about the weather since childhood, tipped into public engagement upon observing the concrete impacts of climate change (plants scorched at 46°C) and the ignorance of political decision-makers toward studies they themselves had commissioned.

      The quest for coherence and the end of solitude: Sometimes engagement is a flame that has smoldered for a long time but struggled to find an outlet.

      For Anaïs Terrien, president of La Fresque du Climat, an early but solitary commitment found new momentum thanks to a tool that finally allowed her to structure dialogue, break her isolation, and be understood in her concerns.

      1.2. The psychological engine of action

      According to ecopsychologist Emmanuel Delrieu's analysis, engagement is not a simple intellectual choice but a deep transformation that answers to specific psychological mechanisms.

      1. The interplay of forces: To persevere, a commitment must mobilize a synergy of three types of forces.

      Affective forces (what touches us, sensitivity to the world's suffering), behavioral forces (the capacity to act and to persevere over time), and cognitive forces (the capacity to analyze and reconcile the positive and negative aspects of the struggle).

      2. Resolving cognitive dissonance: Getting involved is often a way to reduce the internal tension between one's values and society's dominant paradigms (capitalism, patriarchy, colonialism).

      Faced with this dissonance, action makes it possible to "put one's life back in order" by aligning one's behavior with one's deep convictions.

      3. Transformation through rooting: The deeper the commitment, the more the individual is transformed and "radicalized", in the etymological sense of the term:

      they take root in their convictions. This rooting creates links, a "mycelium" with other struggles, strengthening each person's solidarity and position.

      However, this same power that anchors individuals in their convictions also makes them more vulnerable.

      By aligning so deeply with their cause, they expose themselves head-on to the resistance, inertia, and violence of the system they are fighting, creating fertile ground for wear and tear.

      --------------------------------------------------------------------------------

      2. Navigating the storms: Recognizing and managing the risk of burnout

      Far from being a personal failure or a sign of weakness, moments of fatigue, doubt, and even collapse are near-inevitable stages of the activist journey.

      They reflect the intensity of the struggle and the violence of what is being fought.

      The strategic challenge is therefore not to avoid these moments at all costs, but to learn to recognize their warning signs and respond to them constructively and kindly.

      2.1. The early warning signs of activist burnout

      Listening to yourself is the first line of defense. Here are a few warning signals, based on the analyses and testimonies, that should prompt caution:

      Physical and mental fatigue: Growing irritability and persistent fatigue that does not subside with rest are clear first signs that energy reserves are running out (Emmanuel Delrieu).

      Loss of meaning and urge to withdraw: After an extreme action, 40 days of hunger strike followed by a thirst strike, Thomas Braille felt the need to isolate himself: "I no longer wanted to see human beings."

      The feeling that the sacrifice is in vain and that "nobody cares" is a critical symptom.

      Feeling of being overwhelmed: The impression that "the vase was almost full and threatening to break" drove Anaïs Terrien to cancel her commitments.

      This sensation of being submerged by responsibilities and emergencies is a major indicator.

      Confronting indifference and cynicism: Frustration at widespread inaction, as Flore Vasseur experienced after Edward Snowden's revelations, can erode motivation and lead to a destructive sense of powerlessness.

      2.2. Burnout as a cycle, not an ending

      It is crucial to deconstruct the idea that burnout is a full stop. It is above all a signal and a stage of transformation.

      Collapse is a "necessary transformative moment."

      Ecopsychologist Emmanuel Delrieu insists: the more one resists fatigue and the need for change, the more painful the collapse.

      Accepting it as a necessary stage makes it possible to get through it more serenely.

      Engagement is not linear but cyclical. It resembles a spiral.

      The "down" phases are not regressions but moments when one dives down to "seek even greater anchoring strength."

      Each cycle allows you to transform yourself and start again on more solid foundations.

      The mistake is to "always want to be perfect and to be doing well." As Flore Vasseur points out, society pushes us to mask our vulnerabilities.

      Yet the release that emotions, tears, and the acceptance of one's flaws represent is an immense source of resilience.

      The strategic challenge is therefore to cultivate a solid support network, able to take you in during these phases of collapse so that they become sources of transformation rather than destruction.

      2.3. Aggravating factors specific to activism

      Beyond ordinary overwork, activism exposes people to unique sources of stress that accelerate the risk of exhaustion.

      1. The violence of personal attacks: Public exposure is often accompanied by uninhibited violence.

      The constant insults Serge Zaka received about his physique (up to the coining of the nickname "Grosaka") or his credibility (his hat) are a form of harassment meant to destabilize and psychologically wear him down.

      2. Institutional invisibilization: As Emmanuel Delrieu analyzes, political and social structures systematically deny or minimize struggles.

      This lack of recognition is a source of deep injustice and exhaustion, because it forces people to fight not only for their cause but also for the very legitimacy of their fight.

      3. Confronting the system's strength: Activists run up against the system's capacity to absorb and neutralize criticism.

      Flore Vasseur observed that "the harder you hit it, the stronger it gets."

      The system can turn denunciation into spectacle, emptying it of its substance and leaving the activist with a sense of powerlessness.

      --------------------------------------------------------------------------------

      3. Keeping the flame alive: Strategies for sustainable engagement

      Sustainable engagement is not just about managing burnout crises.

      It rests on putting proactive strategies in place to nourish your motivation, protect your energy, and build your own resilience.

      The following four pillars, complementary and interdependent, offer concrete avenues for achieving this.

      3.1. Strategy 1: The strength of the collective and of support

      The first and most powerful bulwark against burnout is the quality of human bonds. Isolation is the breeding ground of burnout.

      Leaning on the collective: Anaïs Terrien says it plainly: she was saved from burnout by her board of directors.

      The group acts as a safety net, stepping in when one of its members falters.

      Knowing how to ask for help: Recognizing your own limits and daring to ask for support is not a weakness but an essential strategic skill for lasting.

      It is an act of trust in the collective.

      Cultivating "care for the bond": As Emmanuel Delrieu proposes, it is crucial to establish within groups an active practice of mutual support.

      This means creating spaces where vulnerability is accepted and where people care for one another as much as for the cause being defended.

      3.2. Strategy 2: The right perspective

      The way you perceive your action and your goals can radically reduce the pressure and the risk of burnout.

      Adopting the "spirit of the cathedral builders": Shared by Flore Vasseur via Edward Snowden, this metaphor is liberating.

      It invites you to accept not seeing the final result of your actions, and to focus instead on your contribution: laying your "brick" with confidence that others will build on it.

      Fighting "for" rather than "against": This paradigm shift, also proposed by Flore Vasseur, makes engagement more positive and less self-destructive.

      It means fighting for a desirable world, for life, for your children's future: drivers that generate positive, renewable energy, unlike the fight against a system, which can prove corrosive.

      Letting go of the expectation of immediate results:

      Expecting a quick victory is one of the main sources of depression and disillusionment for activists.

      The spirit of the cathedral builders helps detach from this tyranny of results.

      3.3. Strategy 3: Alignment and authentic action

      An engagement that lasts is one that comes from the heart, not from the ego.

      Connecting to your deep injustice: As Flore Vasseur advises, the personal wounds, humiliations, and betrayals you have lived through are the most durable "fuel."

      It is by going after what touches us viscerally that we find inexhaustible energy.

      Getting involved to repair yourself: Rather than engaging for social recognition or image, which inevitably leads to exhaustion, the most durable engagement is one that is also an intimate undertaking.

      As Flore Vasseur explains, "you go there to repair yourself. What you come to repair is yourself, and by repairing yourself you repair the world."

      Diversifying your projects and sources of energy: So as not to depend on a single source of gratification, it is healthy not to "put all your eggs in one basket," as Anaïs Terrien practices.

      Having other projects (a co-housing collective, gardening, art) lets you recharge and maintain balance.

      3.4. Strategy 4: A culture of self-care

      Taking care of yourself is not a luxury or a selfish act; it is an indispensable condition for being able to keep taking care of the world.

      "Going silent of humans": This advice from Emmanuel Delrieu invites you to reconnect regularly and deeply with nature, far from human noise and agitation, to recharge and regain a broader perspective.

      Detaching from the fear of judgment: Thomas Braille illustrates an immense source of strength:

      "I am not afraid of the judgment of men; I fear only the judgment of my son."

      Freeing yourself from the fear of the social gaze allows you to act with greater freedom and greater strength.

      The greatest renunciation: giving up on pleasing. This powerful phrase from Flore Vasseur sums up an essential act of liberation.

      An activist cannot please everyone. Accepting that means freeing yourself of an immense weight.

      Feeding on joy: Despite the difficulties, engagement is also a source of intense joys.

      Flore Vasseur recalls encountering "moments of almost ecstatic joy more often than moments of burnout."

      Connection, solidarity, and small victories are essential nourishment.

      --------------------------------------------------------------------------------

      4. Conclusion: Engagement, a marathon for life

      Ultimately, long-term engagement is far more like a marathon than a sprint. The strategies for lasting are not distractions or luxuries but essential components of the struggle itself.

      Taking care of yourself, cultivating the strength of the collective, adjusting your perspective, and acting from a place of authenticity are the conditions for victory.

      By accepting the cyclical nature of energy and constantly reminding yourself of your "why," it becomes possible not only to hold on but to flourish in action.

      As Baden-Powell said, quoted by Anaïs Terrien, the goal is perhaps not to save the world alone and right away, but more humbly and more durably to "try to leave this world a little better than you found it."

    1. The officer then said that even a swift return of America to its former role won’t matter. Because “we will never fucking trust you again.” The Americans at the table seemed somewhat startled by the heat of that pronouncement. I agreed with it entirely. So, it seemed to me, did most of the non-Americans. This wasn’t the only such moment at the forum this year, but it was, to me, the most interesting. And it was still being talked about the next day. “Thank God,” one allied official said to me. “Someone had to tell them.”

      Whatever happens in the USA in the coming 3 yrs: "We will never trust you again". This has very deep reaching impacts.

    1. economic and socialreturns

      Economic returns (what do I gain?) = all measurable economic benefits. 1. Direct monetary returns. 2. Employment stability and options (higher chances of getting a job, ability to switch and negotiate, less fear of losing a job, access to better-quality work). 3. Productivity and lifetime earning capacity: within the job, either directly through the nature of the work or by using it to progress outside of your job, make sure to become more productive and gain skills, so that the job not only brings you money for the current year but also increases your earning chances over your lifetime. 4. Economic returns also encapsulate better health, financial literacy, mobility, and networks (access to opportunities).

      Social returns (what do we gain?) What does a society gain when an individual is well educated? 1. Public health improvements. Education changes the decision quality of people. Basic literacy helps deepen understanding of the importance of hygiene, medical instructions, etc.

      2. Social order and safety. Education helps people control their impulses, improves conflict-resolution skills, improves understanding of consequences, steers people towards lawful income paths, and gives confidence to engage with all kinds of institutions.

      3. Civic and democratic participation. Basic political literacy and critical thinking: the ability to think beyond immediate self-interest.

      4. Intergenerational human capital. Educated parents talk more with their children using a good vocabulary, they themselves value schooling, and they intervene early when learning gaps appear.

      5. Social cohesion and equality. Education creates a common language, not only linguistically but also by creating common ways in which people can understand each other's reasoning. A common base = numbers, terms, concepts, and some standard ways and references to explain things.

      Education brings about social mobility, meaning a person's ability to move beyond the social and economic status they are born into. Education does not erase inequality, but it weakens the link between birth = destiny.

    1. visualize_token2token_scores(norm_fn(output_attentions_all, dim=2).squeeze().detach().cpu().numpy(), x_label_name='Layer')

      Dimension-change chain: output_attentions_all: (layer, batch, head, seq_len, seq_len) → norm_fn(…, dim=2): aggregates the head dimension → (layer, batch, seq_len, seq_len) → squeeze(): removes the batch dimension → (layer, seq_len, seq_len) → finally used for visualization: a per-layer token-to-token attention-strength matrix (summarizing information from all heads).


      Dimension format: output_attentions_all.shape = (layer, batch, head, seq_len, seq_len) (the notebook makes the stacking logic explicit via output_attentions_all = torch.stack(output_attentions), and the comment in cell [29] verifies the dimension layout).

      Meaning of each dimension

      • layer (dimension index 0, example value 12): the number of encoder layers in the BERT model. For bert-base-uncased, the model contains 12 Transformer encoder layers by default.

      • batch (dimension index 1, example value 1): the batch size of the input. The notebook example uses a single question-answer pair as input, so this dimension is 1.

      • head (dimension index 2, example value 12): the number of attention heads in each layer. For bert-base-uncased, each encoder layer contains 12 attention heads by default.

      • seq_len (dimension index 3, row dimension, example value 26): the length of the input sequence, including special tokens such as [CLS] and [SEP]. This dimension corresponds to the "sender" (query) token of the attention.

      • seq_len (dimension index 4, column dimension, example value 26): same meaning as the previous dimension, also the sequence length, but corresponding to the "receiver" (key) token. Each element $[l, b, h, i, j]$ of the tensor is the attention weight (softmax-normalized) that token $i$ assigns to token $j$ in layer $l$, sample $b$, attention head $h$.


      In the notebook, norm_fn is an L2-norm function (chosen between torch.linalg.norm and torch.norm depending on the PyTorch version), called as norm_fn(output_attentions_all, dim=2). The key point is that the norm is computed over the attention-head dimension, summarizing the attention information of all heads in each layer.

      Operation logic - Input: output_attentions_all with shape (layer, batch, head, seq_len, seq_len) - Key argument: dim=2 means the L2 norm is computed over dimension 2 (the head dimension): for each layer, each sample, and each sender-receiver token pair (i, j), the weights of the 12 attention heads are treated as a vector and its L2 norm is computed ( \(\sqrt{\sum_{h=1}^{12} w_{l,b,h,i,j}^2}\) ).

      Output dimensions and meaning - Output shape (after norm_fn): (layer, batch, seq_len, seq_len) (aggregating over the head dimension (dim=2) reduces the tensor from 5 to 4 dimensions, removing head) - Subsequent processing: squeeze().detach().cpu().numpy() are tensor-format conversions that do not change the meaning of the dimensions: - squeeze(): removes dimensions of size 1 (here batch=1, so the batch dimension is dropped), giving a final shape of (layer, seq_len, seq_len); - detach().cpu().numpy(): converts the PyTorch tensor to a NumPy array for visualization.

      Final dimensions

      • layer (dimension index 0, example value 12): unchanged from the input, the 12 BERT encoder layers.

      • seq_len (dimension index 1, row dimension, example value 26): the input sequence length, corresponding to the "sender" (query) token, matching dimension 3 of the original attention tensor.

      • seq_len (dimension index 2, column dimension, example value 26): also the input sequence length, corresponding to the "receiver" (key) token, matching dimension 4 of the original attention tensor. Each element $[l, i, j]$ is the aggregated multi-head attention norm for token $i$ attending to token $j$ in layer $l$, characterizing that token pair's overall attention strength at that layer without distinguishing individual heads.
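      A minimal, self-contained sketch of this dimension chain, using random dummy attentions with the bert-base-uncased shapes rather than real model output:

```python
import torch

# Dummy attentions with bert-base-uncased shapes:
# 12 layers, batch of 1, 12 heads, sequence length 26.
layers, batch, heads, seq_len = 12, 1, 12, 26
output_attentions = [
    torch.softmax(torch.rand(batch, heads, seq_len, seq_len), dim=-1)
    for _ in range(layers)
]

# Stack per-layer tensors -> (layer, batch, head, seq_len, seq_len)
output_attentions_all = torch.stack(output_attentions)
assert output_attentions_all.shape == (layers, batch, heads, seq_len, seq_len)

# L2 norm over the head dimension (dim=2) -> (layer, batch, seq_len, seq_len)
norm_fn = torch.linalg.norm
scores = norm_fn(output_attentions_all, dim=2)
assert scores.shape == (layers, batch, seq_len, seq_len)

# Drop the singleton batch dimension and convert for plotting
scores_np = scores.squeeze().detach().cpu().numpy()
assert scores_np.shape == (layers, seq_len, seq_len)
```

The resulting (layer, seq_len, seq_len) array is what visualize_token2token_scores receives, one token-to-token heatmap per layer.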

    1. R0:

      Review Comments to the Author

      Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

      Reviewer #1: Full Title:

      The manuscript's full title does not match the short title. The full title reads "Climate change, livelihoods, gender and violence in Rukiga, Uganda: intersections and pathways", while the short title reads "Climate Change and Gender Based Violence". 'Gender-based violence' does not necessarily mean the same as 'gender and violence'. The authors should consider revising the wording in the full title if they meant gender-based violence.

      Abstract:

      Inconsistency in FGD size; harmonize to a consistent range across the manuscript. The authors said "Between April and July 2021, we conducted 28 focus group discussions (FGDs), comprising 6-8 participants each" (lines 29-30), while in the methods they said "From 20 April 2021 - 02 July 2021 five focus group discussions (FGDs) were conducted in each community (28 in total) each consisting of four to six participants" (lines 135-136).

      Clarify the GBV emergent theme. You said "This study, though not originally intended to focus on GBV, examines how it interconnects with poverty, shifting gender roles, alcoholism, environmental stress, and family planning dynamics" (lines 26-28). Consider adding a statement signalling that GBV emerged inductively during data collection and/or analysis.

      Methods: Revise the methods section to ensure the study is reproducible and to signal the reliability of the findings.

      What study design did you use? Not clear.

      The authors said participants were "purposively selected... with the help of community leaders" (lines 140-141). Clearly elaborate the eligibility criteria and how the gatekeepers' influence was mitigated, and provide proper justification for why 28 FGDs and 40 KIIs were sufficient. Discuss saturation; was maximum variation considered, and how?

      Results:

      Tag all quotes with data source (FGD or KII), sex, and age to evidence diversity across the groups.

      Make sure all quotes are in clear quotation marks (lines 220-222). Fix that for the entire results section and be consistent.

      The authors said "When describing their experiences and perceptions of poverty and its associated consequences including poor diets, sickness, and lack of ability to pay for healthcare and transport to medical facilities, most respondents explicitly identified poverty as a direct cause of GBV:" (lines 311-314). Revise the wording on participants' perceptions to avoid implying causality from qualitative data. Check the entire document for this, including the abstract, lines 36 to 41.

      Ethics: Include the name of the ethics committee that gave ethical clearance for the study, along with the reference number and date.

      Describe the safeguarding and referral procedures followed in the study, if any.

      Conclusion: The concept for this paper is timely and relevant. However, several important elements require revision before the manuscript can meet PLOS Global Public Health standards. Work on the clarity and consistency of the methods (the study design was not clearly mentioned; there are several qualitative designs one can use, e.g. phenomenology, case study, etc. What design did you use?). PLOS Global Public Health guidelines on data sharing require that you provide some de-identified data; nevertheless, the authors stated that they would share data, and the justification for that leaves much to be desired.

      Reviewer #2: 1. Kindly mention the methodological orientation adopted for the study. 2. Discrepancy between the number of FGD participants mentioned in the abstract and in the methods (6-8 in the abstract and 4-6 in the methods); kindly make it uniform. 3. Additional context on domestic violence and related statistics could be added to the study setting. 4. Mention details of steps taken to ensure internal validity/rigor: member checking, reflexivity. 5. Briefly give details of the parent project. 6. Was any conceptual model/framework adopted to guide data generation/analysis? 7. What efforts were taken to address/refer victims of GBV once disclosed? 8. Socio-demographic details of the respondents could be added for better interpretation. 9. Key themes are restated multiple times; many dimensions of GBV (more details on each typology, coping strategies, prevention, etc.) are not elicited.

      Reviewer #3: Overall Comments The paper takes a qualitative approach to “examine locally held perceptions of the relationships between climate and livelihood-related stressors and changing dynamcis, including the risk of Rukiga district. Climate change remains a global threat, with many countries and communities within Africa, ill prepared to adapt and mitigate the consequences. The paper is an attempt to paint a picture of climate-related impacts, particularly how gender-based violence, a persistent public health, socioeconomic and development issue is shaped by and influencing social, economic and environmental stressors.

      In its current form, the paper need to be strengthened to get it to be sufficiently robust for publication in PLOS Global Public Health. The paper needs to be strengthened in at least three ways:

      1) Overall, the paper needs to better contextualise their goal. Authors state in line 115 to 117, that their purpose is to understand locally held perceptions of the relationship between climate and livelihood-related stressors, and in several other sections, indicate make clear that, their original intention was not GBV, but undertook a thematic analysis on the latter. This can be confusing making it difficult for readers to follow. Authors need to clarify their focus – if it is on GBV, they may consider better contextualising their paper, especially in the introduction.

      As part of contextualising, authors may consider highlighting the initial primary research focus – this helps to provide context for readers to begin to appreciate how and why GBV took center-stage during the analysis. In doing so, it also provides an opportunity for authors to properly situate their contributions to the literature.

      Other minor issues include: • Authors make claims about projected exponential increase (line 51-52) and yet, do not support with any data. Similarly, authors may want to consider revising the sentence, as it appears redundant.

      • In line 55-57, it argued “Uganda’s vulnerability to climate change and climate-sensitive disasters is extremely high – it is not immediately clear to readers what this means. By which benchmark or metric are authors assessing Uganda’s vulnerability. Authors may consider revising to ensure clarity (also see lines 108-110 for punctuation issues).

      • Lastly, the study takes place in Rukiga District – it would be helpful if authors provided some additional background context. Would the results be different if the study were conducted in a different district rather than Rukiga? Basically, some discussion of the rationale and/or choice of the selected district would be useful.

      2) Overall, authors need to improve their methods by revising and clarifying some of the sections. For example, under study setting (lines 128-130), it is not clear whether the concluding sentence is providing additional context for the prior statement. Authors may want to revise for clarity.

      I. Reconcile the number of participants for FGDs – in the abstract, authors indicate 6-8 people per FGD, while line 136 says "…each consisting of four to six participants,…".

      II. For both FGDs and KIIs, it is useful to indicate and/or describe the demographics/characteristics of the people participating in the study. Perhaps authors could outline demographics by sex, age, and any other stratifier in the results section in tabular format. How were participants selected, especially among the FGD participants?

      III. On the ethics statement, although the data emanate from key informants and community members, authors do not indicate whether they sought ethical approval for their study. If ethics approval was obtained, it is useful to indicate so.

      IV. Regarding data collection (lines 172 to 173), authors indicate that "discrepancies in the coding were re-examined…". It is useful to explain how the independent assessor resolved discrepancies and reached consensus.

      V. In the data collection section (lines 155 to 157), authors indicate that they "undertook a specific analysis of what participants said about GBV". However, in the results, it is often not clear which specific thematic issues or results arise from this analysis. Related to this and linked to the analysis, it is not clear to readers how the two main clusters (lines 188 to 191) link to GBV. While lines 193 to 212 describe the nature of GBV, for the most part (for example, lines 213 to 308) it is difficult to follow how GBV is an interconnector in the results being discussed. At times, it is difficult to see where the analysis departs from its original intended goal. Were the issues around climate change and the environment, among others, emergent from the data?

      3) Overall, the results section outlines some very interesting insights. However, I do feel this section can be deepened. In many instances, the narratives are often not immediately supported by relevant quotes linking to GBV.

      • In lines 230 – 323, authors reflect that the disruption to livelihoods leading to family instabilities and conflict demonstrates how GBV is triggered. This assumption is challenging to sustain, considering that "unrest in families" and not having "peace in a home" do not necessarily connote GBV. Similar reflections are presented at line 306 ("...they both resort to quarrels…"), lines 316 to 320 ("…start quarrelling and fighting…") and ("…you fight with the woman").

      • Although authors indicate these are "euphemisms for GBV" (line 208) that participants use, without critical analysis we risk painting a picture that may not be correct. For example, would readers be correct to assume that, in the Ugandan context, such references always mean GBV? To avoid readers assuming without appropriate understanding of context, authors may consider making explicit any additional nuances related to the quotations or contexts for these phrases, to clarify and make the links to GBV much clearer.

      Minor
      • Line 199 – please clarify how and why unintended pregnancies are considered a form of GBV.
      • Lines 208 to 209 – revise the sentence – it is not clear what authors mean by "throughout their experiences and perceptions".
      • Line 211 – "GBV was raised during the discussions of a wide range of factors" – perhaps it would be useful to outline the contexts in which GBV was raised.

    1. Comments to the Author

      1. Does this manuscript meet PLOS Global Public Health’s publication criteria? Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe methodologically and ethically rigorous research with conclusions that are appropriately drawn based on the data presented.

      Reviewer #1: Partly

      1. Has the statistical analysis been performed appropriately and rigorously?

      Reviewer #1: N/A

      1. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?

      The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

      Reviewer #1: No

      1. Is the manuscript presented in an intelligible fashion and written in standard English?

      PLOS Global Public Health does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

      Reviewer #1: Yes

      1. Review Comments to the Author

      Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

      Reviewer #1: Thank you for the opportunity to review this manuscript. Overall, it makes an important contribution to understanding climate and health policy in Argentina, but several issues should be addressed before it is suitable for publication:

      The manuscript addresses an important and timely topic, analyzing climate and health policy in Argentina through stakeholder perspectives.

      The qualitative design (interviews, document analysis, stakeholder workshop) is appropriate for the research question.

      Valuable insights are provided on governance, financing, technical networks, federalism, and awareness gaps, with lessons for Latin America more broadly.

      Inconsistencies in sample reporting: text mentions both 31 interviews and 26 interviews with 31 participants. This must be clarified and reconciled with Table 1.

      The analysis section requires more detail on how coding disagreements were resolved and how workshop data were integrated.

      The rationale for merging WHO framework dimensions should be better explained to ensure analytical nuance is not lost.

      The Data Availability Statement does not comply with PLOS requirements. Data are not publicly available and no concrete mechanism for controlled access is provided. At minimum, de-identified excerpts or a codebook should be shared.

      Ethics approvals are described but approval identifiers/protocol numbers should be included for transparency.

      The manuscript is intelligible and written in standard English but contains issues that should be corrected:

      Abstract is too long and must be shortened to ~250–300 words.

      “Intersectionality” should be corrected to “intersectorality.”

      “Precarized personnel” should be rephrased as “temporary personnel with insecure contracts.”

      “Professionals and non-professionals” should be replaced with clearer wording (e.g., “clinical and support staff”).

      Redundancy around “technical teams” and “federalism” should be reduced.

      References require major correction:

      Multiple broken Zotero placeholders are present.

      Several entries are incomplete or missing DOIs/URLs.

      Reference formatting must be standardized to PLOS style.

      Discussion section:

      Some statements overgeneralize from interviewee quotes (e.g., physicians not sensitized); these should be framed more cautiously.

      Financing section should explore in more depth why mitigation dominates international funding.

      References to political events (2024–2025) should be time-stamped as “at the time of data collection” to avoid rapid obsolescence.

      Overall, the study is methodologically appropriate and conclusions are mostly supported by the data.

      Revisions are necessary to ensure methodological clarity, compliance with data availability policy, correction of references, and refinement of language before publication.

      1. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

      Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public.

      For information about this choice, including consent withdrawal, please see our Privacy Policy.

      Reviewer #1: No

      [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

      Figure Resubmissions:

      While revising your submission, we strongly recommend that you use PLOS’s NAAS tool (https://ngplosjournals.pagemajik.ai/artanalysis) to test your figure files. NAAS can convert your figure files to the TIFF file type and meet basic requirements (such as print size, resolution), or provide you with a report on issues that do not meet our requirements and that NAAS cannot fix.

      After uploading your figures to PLOS’s NAAS tool - https://ngplosjournals.pagemajik.ai/artanalysis, NAAS will process the files provided and display the results in the "Uploaded Files" section of the page as the processing is complete. If the uploaded figures meet our requirements (or NAAS is able to fix the files to meet our requirements), the figure will be marked as "fixed" above. If NAAS is unable to fix the files, a red "failed" label will appear above. When NAAS has confirmed that the figure files meet our requirements, please download the file via the download option, and include these NAAS processed figure files when submitting your revised manuscript.

    1. R0:

      Reviewer #1:

      The article “Profiling Zero-Dose Measles-Rubella Children in Zambia: Insights from the 2024 Post-Campaign Coverage Survey” addresses an urgent global health issue aligned with IA2030 and Gavi’s zero-dose priorities. The title is concise, descriptive, and fits the scope of PLOS Global Public Health (PGPH).

      The study employs a cross-sectional, two-stage cluster survey following WHO guidelines, with robust sample size (n=8,634) and weighting for representativeness. Statistical analyses—survey-weighted logistic regression and confidence intervals—are appropriate. Ethical standards and data quality controls are well-documented. However, heavy reliance on caregiver recall (88.3%) introduces recall bias, and the absence of district-level disaggregation limits local applicability. The manuscript’s use of WHO standards and analytical transparency strengthens credibility.

      It provides novel national evidence on MR zero-dose prevalence and systemic immunization failures in Zambia, filling a gap between administrative and survey estimates. The identification of access and awareness barriers (e.g., 42.6% unaware of campaigns) adds actionable insights for health policy. The article follows a clear IMRaD structure with strong coherence between results, discussion, and policy recommendations. Figures and tables are informative, though data visualization could be simplified for readability. Language is clear and professional, though some sections (e.g., policy implications) could be condensed to reduce redundancy.

      No conflicts of interest or funding bias reported. Data availability upon request aligns with journal policy, though full open-access data would enhance transparency. Overall, the manuscript is methodologically sound, policy-relevant, and well-aligned with PGPH’s thematic focus on equity and immunization coverage. Minor revisions are recommended—clarify recall bias mitigation, improve data visualization, and emphasize data accessibility. With these revisions, it is highly suitable for publication in PLOS Global Public Health.

      Reviewer #2:

      1. Is this a nationwide survey? Please explain.

      2. Does this survey cover the whole population or only part of it? What is the percentage of coverage?

      3. The survey is a prospective study. How did it happen to miss data? Please explain.

      4. The statistical part needs more elaboration considering the variables.

      5. In the discussion section, avoid bullet points. Rather than listing the policy implications, write them as paragraphs.

      6. Rewrite the conclusion. Avoid frequencies and percentages; mention only the findings in relation to the objective of the study.

      Reviewer #3:

      In the current era of a changing global and public health landscape, this manuscript is very timely in helping Zambia to improve vaccination coverage and address the inequities that cause children to miss vaccinations. The manuscript is nearly perfect for publication, with the exception of a few editorial areas, which I request the authors to address before the manuscript is published. The areas are highlighted below:

      ABSTRACT

      Background
      • Line 22: I suggest adding the abbreviation "MR" in brackets after "Rubella".
      • Line 23: I suggest replacing "and" with "can" between "communities" and "sustain".

      Methods
      • Line 27: I suggest inserting "was conducted from" before "27th" and replacing "-" with "to" between "2024" and "16th".

      Conclusions
      • Line 42: I suggest writing "RI" in its long form, with the abbreviation in brackets.
      • Line 44: I suggest editing "IA2030" to "Immunization Agenda 2030".

      INTRODUCTION
      • Line 86: I suggest inserting the abbreviation "RI" in brackets after "immunisation", before "performance".

      METHODS

      Study Design
      • Line 97: I suggest replacing "Post-Campaign Coverage Survey (PCCS)" with its abbreviation PCCS.
      • Lines 98-99: I suggest replacing "Measles–Rubella (MR) Supplementary Immunisation Activity (SIA)" with the abbreviation "MR-SIA".

      RESULTS

      Zero-Dose Prevalence
      • Line 172: I suggest writing "DPT" in its long form, with the abbreviation in brackets.

      DISCUSSION
      • Line 262: I suggest replacing "routine immunisation" with the abbreviation "RI".

      CONCLUSION
      • Line 325: I suggest replacing "routine immunisation" with the abbreviation "RI".

    1. AWS CEO Explains 3 Reasons AI Can’t Replace Junior Devs
      • AWS CEO Matt Garman argues against replacing junior developers with AI, calling it "one of the dumbest ideas."
      • Juniors excel with AI tools due to recent exposure; more of them use the tools daily than seniors do (55.5% per a Stack Overflow survey).
      • They are cheapest to employ, so not ideal for cost-cutting; true savings require broader optimization.
      • Cutting juniors disrupts talent pipeline, stifling fresh ideas and future leaders; tech workforce demand grows rapidly.
      • AI boosts productivity, enabling more software creation, but jobs will evolve—fundamentals remain key.

      Hacker News Discussion

      • AI accelerates junior ramp-up by handling boilerplate, APIs, imports, freeing time for system understanding and learning.
      • Juniors ask "dumb questions" that reveal flaws and useless abstractions; seniors may hesitate due to face-saving or experience.
      • Need juniors for talent pipeline; skipping them creates senior shortages in 4-5 years as workloads pile up.
      • Team leads foster vulnerability by modeling questions, identifying "superpowers" to build confidence.
      • Debate over AI vs. struggling through docs: AI speeds answers but may skip broader discovery; friction aids deep learning.
    1. Jak obniżyć CHOLESTEROL? Dieta, suplementy czy statyny? — lipidolog Magdalena Kaczan

      Summary of "How to Lower Cholesterol? Diet, Supplements, or Statins?"

      Guest: Magdalena Kaczan (Lipidologist)

      The video provides an extensive overview of cholesterol management, the mechanism of atherosclerosis, and the roles of lifestyle, genetics, and medication in cardiovascular health.

      1. Understanding Cholesterol and Lipoproteins

      • The Nature of Cholesterol: Cholesterol is an essential fatty substance required for building cell membranes and producing hormones [00:02:55].
      • The Role of Lipoproteins: Since cholesterol is a fat, it cannot travel alone in the blood. It is carried by "packages" called lipoproteins. The most problematic ones contain Apolipoprotein B (ApoB), which allows them to penetrate arterial walls [00:04:30].
      • LDL vs. HDL:
        • LDL (Low-Density Lipoprotein): Often called "bad" cholesterol. High levels are a primary driver of plaque buildup [00:05:31].
        • HDL (High-Density Lipoprotein): Generally "good" as it transports cholesterol back to the liver, though it can become dysfunctional in some cases [00:06:04].
      • The Importance of ApoB: ApoB is increasingly seen as a more accurate marker than LDL alone because it counts the total number of atherogenic (plaque-forming) particles [00:32:09].

      2. The Process of Atherosclerosis

      • Infiltration: Lipoproteins (like LDL) enter the arterial wall (intima) through a process called transcytosis [00:07:45].
      • Oxidation and Inflammation: Once inside the wall, LDL particles oxidize. The immune system views them as intruders; macrophages "eat" them and turn into "foam cells," triggering chronic inflammation [00:08:13].
      • Plaque Formation: Over time, a "lipid core" forms, surrounded by a fibrous cap. If this plaque ruptures, a blood clot forms, which can lead to a heart attack or stroke [00:13:07].

      3. Risk Factors and Individual Norms

      • Personalized Norms: There is no single "normal" cholesterol level. Targets depend on an individual's 10-year cardiovascular risk (based on age, smoking, blood pressure, etc.) [00:20:01].
      • Lipoprotein(a) [Lp(a)]: This is a genetically determined, highly aggressive form of LDL. It acts as an "accelerator" for heart disease and should be tested at least once in a lifetime, as it isn't lowered by traditional diet or exercise [00:36:10].
      • Metabolic Factors: High triglycerides, insulin resistance, and obesity significantly worsen the quality of LDL particles, making them smaller, denser, and more dangerous [00:28:22].

      4. Dietary Strategies

      • Saturated Fats: High intake of animal fats (butter, lard, fatty meats) and certain plant fats (coconut/palm oil) increases LDL levels [00:43:04].
      • The Power of Fiber: Soluble fiber (found in oats, legumes, and psyllium) binds bile acids in the gut, preventing the reabsorption of cholesterol [00:45:24].
      • Plant-Based Fats: Replacing saturated fats with polyunsaturated and monounsaturated fats (olive oil, nuts, fatty fish) is a primary dietary intervention [00:44:46].
      • Carbohydrates and Triglycerides: Excess simple sugars and alcohol are the main drivers of high triglycerides [00:47:48].

      5. Pharmacological Treatment (Statins)

      • Safety Profile: Statins are described as some of the safest drugs in cardiology [00:01:08].
      • Beyond Lowering LDL: Statins do more than lower cholesterol; they have "pleiotropic" effects, meaning they stabilize existing plaques and reduce systemic inflammation [00:56:33].
      • Side Effects and the "Nocebo" Effect:
        • Muscle pain occurs in about 9% of patients in clinical trials, but many subjective complaints are due to the nocebo effect (expecting side effects because of negative publicity) [01:03:06].
        • True statin intolerance is rare; switching to a different type or dose of statin often resolves issues [01:01:15].
      • Liver Impact: Serious liver damage is extremely rare (1 in 100,000). Minor elevations in liver enzymes are usually temporary as the liver adapts [01:04:05].

      6. Supplements and "Nutraceuticals"

      • Supplements vs. Medication: Supplements like berberine or red yeast rice (monacolin K) are not substitutes for medication in high-risk patients (e.g., those who have already had a heart attack) [01:09:46].
      • Red Yeast Rice: Contains monacolin K, which is chemically identical to lovastatin. While "natural," it can still cause the same side effects as prescription statins [01:11:14].
      • Coenzyme Q10: While statins can lower CoQ10 levels, clinical studies do not definitively show that supplementing it reduces muscle pain [01:06:19].

      7. Key Takeaways for Longevity

      • Start Early: Prevention is more effective than treating advanced disease.
      • Test Extensively: Go beyond a basic lipid panel; request ApoB and Lp(a) tests [01:13:05].
      • Continuity: Lifestyle changes and medications are long-term commitments. If you stop the intervention, the risk levels typically return to their baseline [01:14:11].
    1. 5 zaskakujących LEKÓW długowieczności — w tym… Viagra

      5 Surprising Longevity Drugs – Comprehensive Summary

      1. Study Background & Methodology

      • The Cohort: The study analyzed data from the UK Biobank, involving 501,169 participants aged 37 to 73, followed over a period of approximately 14 years [00:03:42].
      • Prescription Data: Researchers examined nearly 56 million prescriptions issued to roughly 222,000 patients [00:03:58].
      • Control Pairing: To determine the effect of a drug, patients taking a specific medication were paired with "control" subjects of similar age, sex, and health status (e.g., matching two diabetic males) who did not take the drug [00:06:46].
      • Endpoint: The study used mortality (death) as the primary hard endpoint, as it is the most objective and difficult to manipulate in medical research [00:01:27].

      2. Key Risk Factors for Mortality

      • Smoking: The highest risk factor, with a Hazard Ratio (HR) of 2.0 (doubling the risk of death) [00:04:42].
      • Cancer: HR of 1.88 [00:05:00].
      • Age: HR of 1.72 [00:06:05].
      • Diabetes: HR of 1.65 [00:05:22].
      • Sex: Being male carried an HR of 1.64 [00:05:56].

      3. The Most Correlated Drugs with Longevity (The "Winners")

      • SGLT2 Inhibitors (Flozins): The top performer, with a 36% reduction in mortality risk (HR 0.64). These drugs cause the body to excrete glucose through urine independently of insulin. They also act as a "weak ketosis," increasing ketones and LDL cholesterol while protecting blood vessels [00:15:50], [00:23:03].
      • PDE5 Inhibitors (e.g., Viagra/Sildenafil, Cialis/Tadalafil):
        • Tadalafil (Cialis): Showed up to a 28% reduction in mortality risk at a 10mg dose (HR 0.72) [00:19:51].
        • Sildenafil (Viagra): Showed a 15% reduction at a 50mg dose (HR 0.85) [00:20:19].
        • Mechanism: These drugs stabilize Nitric Oxide (NO) levels, maintaining healthy arteries and preventing cardiovascular incidents [00:18:21].
      • Estrogens (Hormone Replacement Therapy): Women taking estrogens saw a 24% reduction in mortality risk (HR 0.76). Positive results were seen across various forms, including oral, transdermal, and vaginal [00:13:50].
      • Naproxen: A non-steroidal anti-inflammatory drug (NSAID) that showed a 10-11% reduction in mortality risk. Unlike Ibuprofen (2-hour half-life), Naproxen stays in the body for 17 hours, effectively blocking COX enzymes and reducing blood clotting (thromboxane) [00:17:36], [00:25:26].
      • Atorvastatin (Statins): While statins as a group had a minimal effect (3% reduction), Atorvastatin specifically showed a 13% reduction at 20mg. However, higher doses (80mg) actually increased the risk of death [00:16:31].
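      A hazard ratio converts to a percent change in risk as (HR − 1) × 100, which is how the figures above map HR 0.64 to a 36% reduction. A quick sketch of that arithmetic (the function name is ours, for illustration only):

```python
def hr_to_percent_change(hr):
    """Convert a hazard ratio to a percent change in risk.

    Negative values mean reduced mortality risk; positive values mean
    increased risk relative to matched controls.
    """
    return (hr - 1.0) * 100.0

# Hazard ratios quoted in the summary above:
for name, hr in [("SGLT2 inhibitors", 0.64),
                 ("Tadalafil 10mg", 0.72),
                 ("Estrogens (HRT)", 0.76),
                 ("Paracetamol", 1.48)]:
    print(f"{name}: {hr_to_percent_change(hr):+.0f}%")
```

      So HR 0.64 prints as −36% (the quoted "36% reduction") and HR 1.48 as +48% (the quoted "48% increase").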

      4. Surprising "Losers" or Neutral Drugs

      • Metformin: Long considered a longevity staple, it showed no significant effect on lifespan in this specific cohort (HR 1.01) [00:11:22].
      • ACE Inhibitors: Despite being common for blood pressure, they correlated with an 11% increase in mortality risk [00:10:36].
      • Morphine & Opioids: Correlated with a 400%+ increase in mortality risk (HR ~5.5), likely due to the terminal conditions (cancer, post-surgery) for which they are prescribed [00:08:16].
      • Paracetamol: Correlated with a 48% increase in mortality risk (HR 1.48) [00:08:50].

      5. Critical Insights

      • Correlation vs. Causation: Most drugs (92% of the 169 significant ones) showed a negative correlation with lifespan, largely because people who need medication are generally in poorer health [00:07:42].
      • Flozin Paradox: SGLT2 inhibitors protect the heart and extend life significantly even though they increase LDL cholesterol, challenging the traditional view that lowering cholesterol is the only path to heart health [00:23:13].
      • The Role of Nitric Oxide: PDE5 inhibitors are highlighted as "longevity drugs" of the future because they restore physiological arterial regulation [00:19:35].

    1. Jak uczyć się 10x szybciej? Dieta, mózg, pamięć - Bartosz Czekała

      How to Learn 10x Faster? – Summary of Bartosz Czekała’s Insights

      1. The Failures of Traditional Learning

      • The "Sieve" Effect: Traditional learning methods (reading textbooks, filling in blanks) are highly inefficient, resembling an attempt to carry water in a sieve [00:03:48].
      • The Forgetting Curve: Based on Ebbinghaus's research, without deliberate reviews we lose about 80% of new information within a month [00:05:10].
      • Passive vs. Active: Reading and highlighting are "passive encoding" methods that rarely result in long-term retention [00:03:52].
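      The "about 80% lost within a month" figure follows the shape of Ebbinghaus's exponential forgetting curve, R(t) = e^(−t/s). A minimal Python sketch; the stability constant s here is back-calculated purely for illustration so that retention hits 20% at day 30, not a value measured in any study:

```python
import math

def retention(days_elapsed, stability):
    """Ebbinghaus-style exponential forgetting curve: R = e^(-t/s)."""
    return math.exp(-days_elapsed / stability)

# Illustrative stability: solve e^(-30/s) = 0.2  =>  s = 30 / ln(5) ~ 18.6 days,
# matching the "~80% lost within a month" figure quoted above.
s = 30 / math.log(5)

for t in (1, 7, 30):
    print(f"day {t:2d}: retention = {retention(t, s):.0%}")
```

      The curve drops fastest right after learning, which is why spaced repetition front-loads the first reviews.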

      2. The Foundation: Spaced Repetition Systems (SRS)

      • Algorithms over Intuition: Manual planning of reviews is impossible for large amounts of data. Using software like Anki is essential [00:19:12].
      • How it Works: The program calculates the optimal interval for the next review (e.g., 1 day, 3 days, 1 week, 1 month) based on your self-assessment of how well you remembered the item [00:13:44].
      • Reducing Decision Fatigue: The system makes learning "binary"—you simply open the app and complete whatever tasks are scheduled for that day [00:14:54].
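      The interval-growth idea described above can be sketched as a toy SM-2-style scheduler. This is a simplified illustration only: the grade scale, multipliers, and ease bounds below are our assumptions, not Anki's actual algorithm.

```python
def next_interval(prev_interval_days, ease, grade):
    """Toy SM-2-style spaced-repetition step (illustrative, not Anki's algorithm).

    grade: 0 = forgot, 1 = hard, 2 = good, 3 = easy (assumed scale).
    Returns (new_interval_days, new_ease).
    """
    if grade == 0:
        # Lapse: restart the card at a short interval and penalise its ease.
        return 1, max(1.3, ease - 0.2)
    # Successful recall: grow the interval multiplicatively.
    multiplier = {1: 1.2, 2: ease, 3: ease * 1.3}[grade]
    new_ease = min(3.0, ease + {1: -0.15, 2: 0.0, 3: 0.15}[grade])
    return round(prev_interval_days * multiplier, 1), new_ease

# A card answered "good" repeatedly spreads out: 1 -> 2.5 -> 6.2 -> 15.5 days...
interval, ease = 1, 2.5
for _ in range(4):
    interval, ease = next_interval(interval, ease, grade=2)
    print(interval, ease)
```

      The point of the sketch is the shape of the schedule: successful reviews multiply the gap, lapses reset it, so total review effort per card shrinks over time.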

      3. Techniques for Creating Effective Flashcards

      • Atomization: Each flashcard should contain exactly one question and one specific piece of information in the answer [00:26:09].
      • Deep Encoding: Creating your own flashcards (rather than using pre-made decks) forces the brain to manipulate information, building stronger neural pathways [00:35:47].
      • Contextualization: For language learning, the deepest encoding comes from creating sentences using the new word rather than just memorizing a definition [00:30:13].

      4. Language Learning Strategy (Case Study: Czech in One Month)

      • Pareto Principle: Start with frequency lists—memorize the words used most often in daily communication [00:46:36].
      • Reference Points: Use analogies from languages you already know (e.g., using Polish or Russian roots to learn Czech) to drastically speed up the process [00:52:38].
      • Self-Talk: Actively producing speech out loud, even to yourself, is the deepest form of active encoding [00:50:27].

      5. Diet and Lifestyle for Brain Optimization

      • The Danger of Sugar: Glucose spikes and high glycemic index meals hinder memory. Chronic high blood sugar can even lead to brain atrophy [00:02:52].
      • Intermittent Fasting (16/8): Fasting increases blood flow and oxygen to the prefrontal cortex, enhancing logical thinking and concentration [00:14:10].
      • Ketones: Low-carb diets and ketosis stabilize neuronal networks and provide "mental clarity" often missing in high-carb diets [01:13:14].

      6. Critique of Supplements and "Nootropics"

      • False Hopes: Most "smart drugs" provide negligible benefits (around 1%) compared to the massive gains from a proper learning system and diet [01:16:47].
      • The Real Nootropic: The best way to learn faster is to accumulate knowledge. The more you know, the easier it is to "attach" new information to your existing mental framework [01:17:34].

    1. Sposoby by czuć się dobrze i być zdrowym za grosze | Bartosz Czekała

      EXTENDED SUMMARY: How to Feel Good and Be Healthy on a Budget

      In this deep-dive conversation, Bartosz Czekała explores the intersection of biology, psychology, and lifestyle, providing practical advice on how to optimize health without spending a fortune.

      1. The Biological Root of Mental Health

      • Inflammation and the Brain: Czekała argues that mental health issues like depression and anxiety are often driven by systemic inflammation. Chronic inflammation increases the permeability of the blood-brain barrier, allowing pro-inflammatory molecules to affect the brain [00:00:47].
      • Serotonin Inhibition: Inflammation doesn't just make you feel physically ill; it actively blocks the uptake of serotonin and lowers its overall levels, mimicking or causing clinical depression [00:00:36].
      • Therapy vs. Medication: He notes that while millions rely on antidepressants, psychotherapy often shows better long-term results. He emphasizes BDNF (Brain-Derived Neurotrophic Factor) as a critical marker for brain health and recovery [00:01:07].

      2. Hormonal Health and Body Composition

      • Fat as a Hormonal Organ: Adipose tissue (body fat) is not just stored energy; it is an active endocrine organ. The more body fat a person has, the higher the activity of an enzyme called aromatase [02:37:30].
      • The Testosterone-Estradiol Balance: In men, aromatase converts testosterone into estradiol (estrogen). High levels of body fat can lead to low testosterone and physical symptoms like gynecomastia ("man boobs") [02:37:48].
      • Risks of Steroid Use: Czekała warns against the misuse of exogenous testosterone (steroids), noting that supra-physiological doses are hepatotoxic (liver-damaging) and can damage the heart, often leading the body to convert excess testosterone into estrogen as a defense mechanism [02:38:09].

      3. Low-Cost "Biohacking" and Lifestyle

      • Ergonomics for Longevity: One of the cheapest health interventions is changing how you work. He suggests working from the floor or a mat rather than a traditional chair to maintain mobility and cardiovascular health during home office hours [00:00:26].
      • Nutrition as a Foundation: He advocates for a diet rich in high-quality animal products and nutrient-dense meats as a way to prevent deficiencies and maintain hormonal balance [00:03:36].
      • Nature and Circadian Rhythms: Simple, free practices like spending time outdoors, grounding, and aligning with natural light cycles are cited as powerful tools for reducing systemic inflammation.

      4. Diagnostics and Critical Thinking

      • Recommended Testing: To truly understand one's health, Czekała recommends testing not just Total Testosterone, but also Estradiol, DHEA-S, Androstenedione, and markers of systemic inflammation [02:38:54].
      • Evaluating Science: He draws a distinction between "hard" sciences (physics/math) and "soft" sciences (psychology/sociology). In human biology, results are rarely black-and-white; the answer is almost always "it depends" on the individual context [00:22:30].

      5. Conclusion

      The central takeaway is that health is a result of low inflammation, balanced hormones, and intentional movement. By focusing on biological fundamentals—diagnostics, diet, and environment—one can achieve significant health improvements without relying on expensive supplements or "magic pill" solutions.

    1. KAWA: powolny ubytek mózgu czy neuroprotekcja? Oto co naprawdę pokazuje MRI ("Coffee: slow brain loss or neuroprotection? Here is what MRI really shows")
      • Main Thesis: While coffee is often marketed as "neuroprotective," there is significant scientific evidence suggesting it may have negative effects on brain health, including a reduction in gray matter.
      • Antioxidants: Coffee is a major source of dietary antioxidants, but the video argues that exogenous (external) antioxidants can interfere with the body's more effective endogenous (internal) antioxidant systems [00:04:54].
      • Cerebral Blood Flow: Caffeine acts as a vasoconstrictor. Studies using PET scans show that consuming 200-250 mg of caffeine (about 2-3 cups) can reduce blood flow throughout the brain by approximately 30% [00:22:24].
      • Gray Matter Impact: Research indicates that even short-term regular caffeine consumption (e.g., 10 days) can lead to a detectable decrease in gray matter volume in the medial temporal lobe [00:26:50].
      • Adenosine Blocking: Caffeine works by blocking adenosine receptors, which normally signal the brain to rest. This leads to an artificial increase in stimulating neurotransmitters like adrenaline and glutamate [00:17:53].
      • Genetic Variability: The speed of caffeine metabolism is largely determined by the CYP1A2 enzyme. "Slow metabolizers" can experience up to ten times higher concentrations of caffeine in their system compared to "fast metabolizers" [00:13:49].
      • Toxins and Quality: Many commercial coffees, especially instant varieties, contain detectable levels of mycotoxins (like ochratoxin A). The cumulative effect of these toxins across different food sources is a potential health concern [00:10:32].
      • Neuroprotection Claims: Most evidence for coffee's benefits is based on epidemiological correlations rather than clinical trials. While there may be a link to reduced Parkinson's risk, large meta-analyses have found no significant link between coffee and Alzheimer's prevention [00:33:15].
      • The "U-Shaped" Rule: Any potential benefits from coffee appear to follow a U-shaped curve; consuming more than four cups a day generally eliminates any statistical health advantages and may increase risks [00:35:06].
    1. if

      Hint

      The condition of this if statement specifies when to print the #.

      i == 0 is the condition for the # on the left edge, and j == 0 is the condition for the # along the top.

      In blanks 7 and 8, write the conditions for the # on the right edge and along the bottom.

    1. if

      Hint

      This if statement holds the condition for breaking out of the infinite while(1) loop.

      In this problem, the loop should be exited when -1 is entered.

    1. char a[3][20]; strcpy(a[0], "Nagasawa Masami");

      About arrays of strings

      A two-dimensional char array is easiest to understand if you first draw it as a diagram.

      How to store strings in a two-dimensional array

      Line 6, char a[3][20], prepares three arrays that can each hold up to 20 characters; the initialization step then decides which characters go into each of the three arrays.

      Here the string "Nagasawa Masami" is stored in the array a[0].

      Refer to the lecture materials for the diagram.

      Reference: lecture materials (Session 10), page 25

    1. for (i = 0; i < DIM; i++) { z[i] = x[i] - y[i]; }

      Hint

      How the arrays are processed as the for loop runs:

      Iteration 1: z[0] = x[0] - y[0]    z[0] = 1 - 2

      Iteration 2: z[1] = x[1] - y[1]    z[1] = (-2) - 0

      Iteration 3: z[2] = x[2] - y[2]    z[2] = 1 - (-2)

      In this way, the results of the subtraction are stored in the array z[ ].

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1* (Evidence, reproducibility and clarity (Required)):

      Summary: In this study, the authors used proximity proteomics in U2OS cells to identify several E3 ubiquitin ligases recruited to stress granules (SGs), and they focused on MKRN2 as a novel regulator. They show that MKRN2 localization to SGs requires active ubiquitination via UBA1. Functional experiments demonstrated that MKRN2 knockdown increases the number of SG condensates, reduces their size, slightly raises SG liquidity during assembly, and slows disassembly after heat shock. Overexpression of MKRN2-GFP combined with confocal imaging revealed co-localization of MKRN2 and ubiquitin in SGs. By perturbing ubiquitination (using a UBA1 inhibitor) and inducing defective ribosomal products (DRiPs) with O-propargyl puromycin, they found that both ubiquitination inhibition and MKRN2 depletion lead to increased accumulation of DRiPs in SGs. The authors conclude that MKRN2 supports granulostasis (the maintenance of SG homeostasis) through its ubiquitin ligase activity, preventing pathological DRiP accumulation within SGs.

      Major comments: - Are the key conclusions convincing? The key conclusions are partially convincing. The data supporting the role of ubiquitination and MKRN2 in regulating SG condensate dynamics are coherent, well controlled, and consistent with previous literature, making this part of the study solid and credible. However, the conclusions regarding the ubiquitin-dependent recruitment of MKRN2 to SGs, its relationship with UBA1 activity, and the functional impact of the MKRN2 knockdown on DRiP accumulation are less thoroughly supported. These aspects would benefit from additional mechanistic evidence, validation in complementary model systems, or the use of alternative methodological approaches to strengthen the causal connections drawn by the authors. - Should the authors qualify some of their claims as preliminary or speculative, or remove them altogether? The authors should qualify some of their claims as preliminary. 1) MKRN2 recruitment to SGs (ubiquitin-dependent): The proteomics and IF data are a reasonable starting point, but they do not yet establish that MKRN2 is recruited from its physiological localization to SGs in a ubiquitin-dependent manner. To avoid overstating this point, the authors should qualify the claim and/or provide additional controls: show baseline localization of endogenous MKRN2 under non-stress conditions (which is reported in the literature to be nuclear and cytoplasmic), include quantification of nuclear/cytoplasmic distribution, and demonstrate a shift into bona fide SG compartments after heat shock. Moreover, co-localization of overexpressed GFP-MKRN2 with poly-Ub (FK2) should be compared to a non-stress control and to UBA1-inhibition conditions to support claims of stress- and ubiquitination-dependent recruitment. *

      Authors: We will stain cells for endogenous MKRN2 and quantify nuc/cyto ratio of MKRN2 without heat stress, without heat stress + TAK243, with HS and with HS + TAK243. We will do the same in the MKRN2-GFP overexpressing line while also staining for FK2.

      *2) Use and interpretation of UBA1 inhibition: UBA1 inhibition effectively blocks ubiquitination globally, but it is non-selective. The manuscript should explicitly acknowledge this limitation when interpreting results from both proteomics and functional assays. Proteomics hits identified under UBA1 inhibition should be discussed as UBA1-dependent associations rather than as evidence for specific E3 ligase recruitment. The authors should consider orthogonal approaches before concluding specificity. *

      Authors: We have acknowledged the limitation of using only TAK243 in our study by rephrasing statements about dependency on “ubiquitination” to “UBA1-dependent associations”.

      * 3) DRiP accumulation and imaging quality: The evidence presented in Figure 5 is sufficient to substantiate the claim that DRiPs accumulate in SGs upon ubiquitination inhibition or MKRN2 depletion, but more experiments would be needed to show that their localization to SGs and their clearance from SGs during stress are promoted by MKRN2 ubiquitin ligase activity. *

      Authors: We have acknowledged the fact that our experiments do not include DRiP and SG dynamics assays using ligase-dead mutants of MKRN2 by altering our statement regarding MKRN2-mediated ubiquitination of DRiPs in the text (as proposed by reviewer 1).

      *- Would additional experiments be essential to support the claims of the paper? Request additional experiments only where necessary for the paper as it is, and do not ask authors to open new lines of experimentation. Yes, a few targeted experiments would strengthen the conclusions without requiring the authors to open new lines of investigation. 1) Baseline localization of MKRN2: It would be important to show the baseline localization of endogenous and over-expressed MKRN2 (nuclear and cytoplasmic) under non-stress conditions and prior to ubiquitination inhibition. This would provide a reference to quantify redistribution into SGs and demonstrate recruitment in response to heat stress or ubiquitination-dependent mechanisms. *

      Authors: We thank the reviewer for bringing up this important control. We will address it in revisions.

      We will quantify the nuclear/cytoplasmic distribution of endogenous and GFP-MKRN2 under control, TAK243, heat shock, and combined conditions, and assess MKRN2–ubiquitin colocalization by FK2 staining in unstressed cells.

      * 2) Specificity of MKRN2 ubiquitin ligase activity: to address the non-specific effects of UBA1 inhibition and validate that observed phenotypes depend on MKRN2's ligase activity, the authors could employ a catalytically inactive MKRN2 mutant in rescue experiments. Comparing wild-type and catalytic-dead MKRN2 in the knockdown background would clarify the causal role of MKRN2 activity in SG dynamics and DRiP clearance. *

      Authors: We thank the reviewer for this suggestion and have altered the phrasing of some of our statements in the text accordingly.


      * 3) Ubiquitination linkage and SG marker levels: While the specific ubiquitin linkage type remains unknown, examining whether MKRN2 knockdown or overexpression affects total levels of key SG marker proteins would be informative. This could be done via Western blotting of SG markers along with ubiquitin staining, to assess whether MKRN2 influences protein stability or turnover through degradative or non-degradative ubiquitination. Such data would strengthen the mechanistic interpretation while remaining within the current study's scope. *

      Authors: We thank the reviewer for this request and will address it by performing the MKRN2 KD and a Western blot for G3BP1.

      *

      • Are the suggested experiments realistic in terms of time and resources? It would help if you could add an estimated cost and time investment for substantial experiments. The experiments suggested in points 1 and 3 are realistic and should not require substantial additional resources beyond those already used in the study. • Point 1 (baseline localization of MKRN2): This involves adding two control conditions (no stress and no ubiquitination inhibition) for microscopy imaging. The setup is essentially the same as in the current experiments, with time requirements mainly dependent on cell culture growth and imaging. Overall, this could be completed within a few weeks. • Point 3 (SG marker levels and ubiquitination): This entails repeating the existing experiment and adding a Western blot for SG markers and ubiquitin. The lab should already have the necessary antibodies, and the experiment could reasonably be performed within a couple of weeks. • Point 2 (catalytically inactive MKRN2 mutant and rescue experiments): This is likely more time-consuming. Designing an effective catalytic-dead mutant depends on structural knowledge of MKRN2 and may require additional validation to confirm loss of catalytic activity. If this expertise is not already present in the lab, it could significantly extend the timeline. Therefore, this experiment should be considered only if similarly recommended by other reviewers, as it represents a higher resource and time investment.

      Overall, points 1 and 3 are highly feasible, while point 2 is more substantial and may require careful planning.

      • Are the data and the methods presented in such a way that they can be reproduced? Yes. The methodologies used in this study to analyze SG dynamics and DRiP accumulation are well-established in the field and should be reproducible, particularly by researchers experienced in stress granule biology. Techniques such as SG assembly and disassembly assays, use of G3BP1 markers, and UBA1 inhibition are standard and clearly described. The data are generally presented in a reproducible manner; however, as noted above, some results would benefit from additional controls or complementary experiments to fully support specific conclusions.

      • Are the experiments adequately replicated and statistical analysis adequate? Overall, the experiments in the manuscript appear to be adequately replicated, with most assays repeated between three and five times, as indicated in the supplementary materials. The statistical analyses used are appropriate and correctly applied to the datasets presented. However, for Figure 5 the number of experimental replicates is not reported. This should be clarified, and if the experiment was not repeated sufficiently, additional biological replicates should be performed. Given that this figure provides central evidence supporting the conclusion that DRiP accumulation depends on ubiquitination-and partly on MKRN2's ubiquitin ligase activity-adequate replication is essential. *

      Authors: We thank the reviewer for noting this accidental omission. We now clarify in the legend of Figure 5 that the experiments with DRiPs were replicated three times.

      Minor comments: - Specific experimental issues that are easily addressable. • For the generation and validation of the MKRN2 knockdown in U2OS cells, no data are presented in the results or methods sections to demonstrate effective knockdown of the protein of interest. This point is essential to demonstrate the validity of the system used.

      Authors: We thank the reviewer for this request and will address it by performing the MKRN2 KD and validating it by Western blot and RT-qPCR.

      • * In Supplementary Figure 2 it would be useful to mention whether the Western blot represents the input (total cell lysates) before the APEX pulldown or the APEX pulldown itself loaded for WB. There is no consistency in the differences in biotinylation between the replicates shown in the two blots. For example, in R1 and R2 G3BP1-APX TAK243 biotinylation is one of the strongest conditions, while on the left blot, in the same condition comparison, samples R3 and R4 are less biotinylated compared to the others. It would be useful to provide an explanation for this to avoid any confusion for readers. * Authors: We have added a mention in the legend of Figure S2 that these are total cell lysates before pulldown. The apparent differences in biotin staining are small and not sufficient to question the results of our APEX proteomics.

      • * In Figure 2D, endogenous MKRN2 localization to SGs appears reduced following UBA1 inhibition. However, it is not clear whether this reduction reflects a true relocalization or a decrease in total MKRN2 protein levels. To support the interpretation that UBA1 inhibition specifically affects MKRN2 recruitment to SGs rather than its overall expression, the authors should provide data showing total MKRN2 levels remain unchanged under UBA1 inhibition, for example via Western blot of total cell lysates. * Authors: Based on first principles in regulation of gene expression, it is unlikely that total MKRN2 expression levels would decrease appreciably through transcriptional or translational regulation within the short timescale of these experiments (1 h TAK243 pretreatment followed by 90 min of heat stress).

      • * DRiP accumulation is followed during assembly, but the introduction highlights that ubiquitination events, other reported E3 ligases, and, in this study, MKRN2 play a crucial role in the disassembly of SGs, which is also related to the clearance of DRiPs. The authors could add tracking of DRiP accumulation during disassembly to Figure 5. I am not sure about the timeline required for this, so I am adding it as optional, if it can be addressed easily. * Authors: We thank the reviewer for proposing this experimental direction. However, in a previous study (Ganassi et al., 2016; 10.1016/j.molcel.2016.07.021), we demonstrated that DRiP accumulation during the stress granule assembly phase drives conversion to a solid-like state and delays stress granule disassembly. It is therefore critical to assess DRiP enrichment within stress granules immediately after their formation, rather than during the stress recovery phase, as done here.

      • * The authors should clarify in the text why the cutoff used for the quantification in Figure 5D (PC > 3) differs from the cutoff used elsewhere in the paper (PC > 1.5). Providing a rationale for this choice will help the reader understand the methodological consistency and ensure that differences in thresholds do not confound interpretation of the results. * Authors: We thank the reviewer for this question. The population of SGs with a DRiP enrichment > 1.5 represents SGs with a significant DRiP enrichment compared to the surrounding (background) signal. As explained in the methods, the intensity of DRiPs inside each SG is corrected by the intensity of DRiPs two pixels outside of each SG. Thus, differences in thresholds between independent experimental conditions (5B versus 5D) do not confound interpretation of the results but depend on the overall staining intensity, which can differ between experimental conditions. Choosing the cut-off > 3 allows us to specifically highlight the population of SGs that are strongly enriched with DRiPs. MKRN2 silencing caused a strong DRiP enrichment in the majority of the SGs analyzed and therefore we chose this way of data representation. Note that the results represent the average of the analysis of 3 independent experiments with high numbers of SGs automatically segmented and analyzed per experiment. Figure 5A, B: n = 3 independent experiments; number of SGs analyzed per experiment: HS + OP-puro (695; 1216; 952); TAK-243 + HS + OP-puro (1852; 2214; 1774). Figure 5C, D: n = 3 independent experiments; number of SGs analyzed per experiment: siRNA control, HS + OP-puro (1984; 1400; 1708); siRNA MKRN2, HS + OP-puro (912; 1074; 1532).

      • * For Figure 3G, the authors use over-expressed MKRN2-GFP to assess co-localization with ubiquitin in SGs. Given that a reliable antibody for endogenous MKRN2 is available and that a validated MKRN2 knockdown line exists as an appropriate control, this experiment would gain significantly in robustness and interpretability if co-localization were demonstrated using endogenous MKRN2. In the current over-expression system, MKRN2-GFP is also present in the nucleus, whereas the endogenous protein does not appear nuclear under the conditions shown. This discrepancy raises concerns about potential over-expression artifacts or mislocalization. Demonstrating co-localization using endogenous MKRN2 would avoid confounding effects associated with over-expression. If feasible, this would be a relatively straightforward experiment to implement, as it relies on tools (antibody and knockdown line) already described in the manuscript.

      * Authors: We thank the reviewer for this request and will address it by performing the MKRN2 KD, FK2 immunofluorescence microscopy, and SG partition coefficient analysis.

      * - Are prior studies referenced appropriately? • From line 54 to line 67, the manuscript in total cites eight papers regarding the role of ubiquitination in SG disassembly. However, given the use of UBA1 inhibition in the initial MS-APEX experiment and the extensive prior literature on ubiquitination in SG assembly and disassembly under various stress conditions, the manuscript would benefit from citing additional relevant studies to provide more specific examples. Expanding the references would provide stronger context, better connect the current findings to prior work, and emphasize the significance of the study in relation to established literature *

      Authors: We have added citations for the relevant studies.

      • *

      At line 59, it would be helpful to note that G3BP1 is ubiquitinated by TRIM21 through a Lys63-linked ubiquitin chain. This information provides important mechanistic context, suggesting that ubiquitination of SG proteins in these pathways is likely non-degradative and related to functional regulation of SG dynamics rather than protein turnover. * Authors: The reviewer is correct. We have added to the text that G3BP1 is ubiquitinated through a Lys63-linked ubiquitin chain.

      • *

      When citing references 16 and 17, which report that the E3 ligases TRIM21 and HECT regulate SG formation, the authors should provide a plausible explanation for why these specific E3 ligases were not detected in their proteomics experiments. Differences could arise from the stress stimulus used, cell type, or experimental conditions. Similarly, since MKRN2 and other E3 ligases identified in this study have not been reported in previous works, discussing these methodological or biological differences would help prevent readers from questioning the credibility of the findings. It would also be valuable to clarify in the Conclusion that different types of stress may activate distinct ubiquitination pathways, highlighting context-dependent regulation of SG assembly and disassembly. * Authors: We thank the reviewer for this suggestion. We added to the discussion plausible explanations for why our study identified new E3 ligases.

      • *

      Line 59-60: when referring to the HECT family of E3 ligases involved in ubiquitination and SG disassembly, it would be more precise to report the specific E3 ligase identified in the cited studies rather than only the class of ligase. This would provide clearer mechanistic context and improve accuracy for readers. * Authors: We have added this detail to the discussion.

      • *

      The specific statement on line 182 "SG E3 ligases that depend on UBA1 activity are RBULs" should be supported by reference. * Authors: We have added citations to back up our claim that ZNF598, CNOT4, MKRN2, TRIM25 and TRIM26 exhibit RNA-binding activity.

      *- Are the text and figures clear and accurate?

      • In Supplementary Figure 1, DMSO is shown in green and the treatment in red, whereas in the main figures (Figure 1B and 1F) the colours in the legend are inverted. To avoid confusion, the colour coding in figure legends should be consistent across all figures throughout the manuscript. *

      Authors: We have made the colors consistent across the main and supplementary figures.

      • *

      At line 79, the manuscript states that "inhibition of ubiquitination delayed fluorescence recovery dynamics of G3BP1-mCherry, relative to HS-treated cells (Figure 1F, Supplementary Fig. 6A)." However, the data shown in Figure 1F appear to indicate the opposite effect: the TAK243-treated condition (green curve) shows a faster fluorescence recovery compared to the control (red curve). This discrepancy between the text and the figure should be corrected or clarified, as it may affect the interpretation of the role of ubiquitination in SG dynamics. * Authors: Good catch. We now fixed the graphical mistake (Figure 1F and S6).

      • * Line 86: adjust a missing bracket * Authors: Thank you, we fixed it.

      • *

      There appears to be an error in the legend of Supplementary Figure 3: the legend states that the red condition (MKRN2) forms larger aggregates, but both the main Figure 3C of the confocal images and the text indicate that MKRN2 (red) forms smaller aggregates. Please correct the legend and any corresponding labels so they are consistent with the main figure and the text. The authors should also double-check that the figure panel order, color coding, and statistical annotations match the legend and the descriptions in the Results section to avoid reader confusion.

      * Authors: This unfortunate graphical mistake has been corrected.

      • * At lines 129-130, the manuscript states that "FRAP analysis demonstrated that MKRN2 KD resulted in a slight increase in SG liquidity (Fig. 3F, Supplementary Fig. 6B)." However, the data shown in Figure 3F appear to indicate the opposite trend: the MKRN2 KD condition (red curve) exhibits a faster fluorescence recovery compared to the control (green curve). This discrepancy between the text and the figure should be corrected or clarified, as it directly affects the interpretation of MKRN2's role in SG disassembly. Ensuring consistency between the written description and the plotted FRAP data is essential for accurate interpretation. * Authors: We thank the reviewer and clarify in the legend of Figure 3F and the Results the correct labels: indeed faster fluorescence recovery seen in MKRN2 KD is correctly interpreted as increased liquidity in the text.

      • *

      At lines 132-133, the manuscript states: "Then, to further test the impact of MKRN2 on SG dynamics, we overexpressed MKRN2-GFP and observed that it was recruited to SG (Fig. 3G)." This description should be corrected or clarified, as the over-expressed MKRN2-GFP also appears to localize to the nucleus. * Authors: The text has been modified to reflect both the study of MKRN2 localization to SGs and of nuclear localization.

      • *

      At lines 134-135, the manuscript states that the FK2 antibody detects "free ubiquitin." This is incorrect. FK2 does not detect free ubiquitin; it recognizes only ubiquitin conjugates, including mono-ubiquitinated and poly-ubiquitinated proteins. The text should be corrected accordingly to avoid misinterpretation of the immunostaining data. * Authors: Thank you for pointing out this error. We have corrected it.

      • * Figure 5A suffers from poor resolution, and no scale bar is provided, which limits interpretability. Additionally, the ROI selected for the green channel (DRiPs) appears to capture unspecific background staining, while the most obvious DRiP spots are localized in the nucleus. The authors should clarify this in the text, improve the image quality if possible, and ensure that the ROI accurately represents DRiP accumulation in SGs rather than background signal. * Authors: We thank the reviewer for pointing out the sub-optimal presentation of this figure. We modified Figure 5A to improve image quality and interpretation. Concerning the comment that “the most obvious DRIP spots are localized in the nucleus”, this is in line with our previous findings demonstrating that a fraction of DRiPs accumulates in nucleoli (Mediani et al. 2019 10.15252/embj.2018101341). To avoid misinterpretation, we modified Figure 5A as follows: (i) we provide a different image for control cells, exposed to heat shock and OP-puro; (ii) we select a ROI that only shows a few stress granules; (iii) we added arrowheads to indicate the nucleoli that are strongly enriched for DRiPs; (iv) we include a dotted line to show the nuclear membrane, helping to distinguish cytoplasm and nucleus in the red and green channel. We also include the scale bars (5 µm) in the image.

      * Do you have suggestions that would help the authors improve the presentation of their data and conclusions?

      • In the first paragraph following the APEX proteomics results, the authors present validation data exclusively for MKRN2, justifying this early focus by stating that MKRN2 is the most SG-depleted E3 ligase. However, in the subsequent paragraph they introduce the RBULs and present knockdown data for MKRN2 along with two additional E3 ligases identified in the screen, before once again emphasizing that MKRN2 is the most SG-depleted ligase and therefore the main focus of the study. For clarity and logical flow, the manuscript would benefit from reordering the narrative. Specifically, the authors should first present the validation data for all three selected E3 ligases, and only then justify the decision to focus on MKRN2 for in-depth characterization. In addition to the extent of its SG depletion, the authors may also consider providing biologically relevant reasons for prioritizing MKRN2 (e.g., domain architecture, known roles in stress responses, or prior evidence of ubiquitination-related functions). Reorganizing this section would improve readability and better guide the reader through the rationale for the study's focus.*

      Authors: We thank the reviewer for this suggested improvement to our “storyline”. As suggested by the reviewer, we have moved the IF validation of MKRN2 to the following paragraph in order to improve the flow of the manuscript. We added additional justification to prioritizing MKRN2 citing (Youn et al. 2018 and Markmiller et al. 2018).

      • *

      At lines 137-138, the manuscript states: "Together these data indicate that MKRN2 regulates the assembly dynamics of SGs by promoting their coalescence during HS and can increase SG ubiquitin content." While Figure 3G shows some co-localization of MKRN2 with ubiquitin, immunofluorescence alone is insufficient to claim an increase in SG ubiquitin content. This conclusion should be supported by orthogonal experiments, such as Western blotting, in vitro ubiquitination assays, or immunoprecipitation of SG components. Including a control under no-stress conditions would also help demonstrate that ubiquitination increases specifically in response to stress. The second part of the statement should therefore be rephrased to avoid overinterpretation, for example:"...and may be associated with increased ubiquitination within SGs, as suggested by co-localization, pending further validation by complementary assays." * Authors: The statement has been rephrased in a softer way as suggested by the reviewer.

      • At line 157, the statement: "Therefore, we conclude that MKRN2 ubiquitinates a subset of DRiPs, avoiding their accumulation inside SGs" should be rephrased as a preliminary observation. While the data support a role for MKRN2 in SG disassembly and a reduction of DRiPs, direct ubiquitination of DRiPs by MKRN2 has not been demonstrated. A more cautious phrasing would better reflect the current evidence and avoid overinterpretation. * Authors: We thank the reviewer for this suggestion and have altered the phrasing of this statement accordingly.

      *Reviewer #1 (Significance (Required)):

      General assessment: provide a summary of the strengths and limitations of the study. What are the strongest and most important aspects? What aspects of the study should be improved or could be developed?

      • This study provides a valuable advancement in understanding the role of ubiquitination in stress granule (SG) dynamics and the clearance of SGs formed under heat stress. A major strength is the demonstration of how E3 ligases identified through proteomic screening, particularly MKRN2, influence SG assembly and disassembly in a ubiquitination- and heat stress-dependent manner. The combination of proteomics, imaging, and functional assays provides a coherent mechanistic framework linking ubiquitination to SG homeostasis. Limitations of the study include the exclusive use of a single model system (U2OS cells), which may limit generalizability. Additionally, some observations-such as MKRN2-dependent ubiquitination within SGs and changes in DRIP accumulation under different conditions-would benefit from orthogonal validation experiments (e.g., Western blotting, immunoprecipitation, or in vitro assays) to confirm and strengthen these findings. Addressing these points would enhance the robustness and broader applicability of the conclusions.

      Advance: compare the study to the closest related results in the literature or highlight results reported for the first time to your knowledge; does the study extend the knowledge in the field and in which way? Describe the nature of the advance and the resulting insights (for example: conceptual, technical, clinical, mechanistic, functional,...).

      • The closest related result in literature is - Yang, Cuiwei et al. "Stress granule homeostasis is modulated by TRIM21-mediated ubiquitination of G3BP1 and autophagy-dependent elimination of stress granules." Autophagy vol. 19,7 (2023): 1934-1951. doi:10.1080/15548627.2022.2164427 - demonstrating that TRIM21, an E3 ubiquitin ligase, catalyzes K63-linked ubiquitination of G3BP1, a core SG nucleator, under oxidative stress. This ubiquitination by TRIM21 inhibits SG formation, likely by altering G3BP1's propensity for phase separation. In contrast, the MKRN2 study identifies a different E3 (MKRN2) that regulates SG dynamics under heat stress and appears to influence both assembly and disassembly. This expands the role of ubiquitin ligases in SG regulation beyond those previously studied (like TRIM21).

      • Gwon and colleagues (Gwon Y, Maxwell BA, Kolaitis RM, Zhang P, Kim HJ, Taylor JP. Ubiquitination of G3BP1 mediates stress granule disassembly in a context-specific manner. Science. 2021;372(6549):eabf6548. doi:10.1126/science.abf6548) have shown that K63-linked ubiquitination of G3BP1 is required for SG disassembly after heat stress. This ubiquitinated G3BP1 recruits the segregase VCP/p97, which helps extract G3BP1 from SGs for disassembly. The MKRN2 paper builds on this by linking UBA1-dependent ubiquitination and MKRN2's activity to SG disassembly. Specifically, they show MKRN2 knockdown affects disassembly, and suggest MKRN2 helps prevent accumulation of defective ribosomal products (DRiPs) in SGs, adding a new layer to the ubiquitin-VCP model.

      • Ubiquitination's impact is highly stress- and context-dependent (different chain types, ubiquitin linkages, and recruitment of E3s). The MKRN2 work conceptually strengthens this idea: by showing that MKRN2's engagement with SGs depends on active ubiquitination via UBA1, and by demonstrating functional consequences (SG dynamics and DRiP accumulation), the study highlights how cellular context (e.g., heat stress) can recruit specific ubiquitin ligases to SGs and modulate their behavior.

      • There is a gap in the literature: very few (if any) studies explicitly combine the biology of DRiPs, stress granules, and E3 ligase-mediated ubiquitination, especially in mammalian cells. There are relevant works about DRiP biology in stress granules, but those studies focus on chaperone-based quality control, not ubiquitin ligase-mediated ubiquitination of DRiPs. This study seems to be one of the first to make that connection in mammalian (or human-like) SG biology. A work on the plant DRIP-E3 ligase TaSAP5 (Zhang N, Yin Y, Liu X, et al. The E3 Ligase TaSAP5 Alters Drought Stress Responses by Promoting the Degradation of DRIP Proteins. Plant Physiol. 2017;175(4):1878-1892. doi:10.1104/pp.17.01319) shows that DRIPs can be directly ubiquitinated by E3s in other biological systems, which supports the plausibility of the MKRN2 mechanism, but it's not the same context.

      • A very recent review (Yuan, Lin et al. "Stress granules: emerging players in neurodegenerative diseases." Translational neurodegeneration vol. 14,1 22. 12 May. 2025, doi:10.1186/s40035-025-00482-9) summarizes and reinforces the relationship between SGs and the pathogenesis of different neurodegenerative diseases (NDDs). By identifying MKRN2 as a new ubiquitin regulator in SGs, the current study could have relevance for neurodegeneration and proteotoxic diseases, providing a new candidate to explore in disease models.

      Audience: describe the type of audience ("specialized", "broad", "basic research", "translational/clinical", etc...) that will be interested or influenced by this research; how will this research be used by others; will it be of interest beyond the specific field?

      The audience for this paper is primarily specialized, including researchers in stress granule biology, ubiquitin signaling, protein quality control, ribosome biology, and cellular stress responses. The findings will also be of interest to scientists working on granulostasis, nascent protein surveillance, and proteostasis mechanisms. Beyond these specific fields, the study provides preliminary evidence linking ubiquitination to DRIP handling and SG dynamics, which may stimulate new research directions and collaborative efforts across complementary areas of cell biology and molecular biology.

      • Please define your field of expertise with a few keywords to help the authors contextualize your point of view. Indicate if there are any parts of the paper that you do not have sufficient expertise to evaluate.

      I work in ubiquitin biology, focusing on ubiquitination signaling in physiological and disease contexts, with particular expertise in the identification of E3 ligases and their substrates across different cellular systems and in vivo models. I have less expertise in stress granule dynamics and DRiP biology, so my evaluation of those aspects is more limited and relies on interpretation of the data presented in the manuscript.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      This study identifies the E3 ubiquitin ligase Makorin 2 (MKRN2) as a novel regulator of stress granule (SG) dynamics and proteostasis. Using APEX proximity proteomics, the authors demonstrate that inhibition of the ubiquitin-activating enzyme UBA1 with TAK243 alters the SG proteome, leading to depletion of several E3 ligases, chaperones, and VCP cofactors. Detailed characterization of MKRN2 reveals that it localizes to SGs in a ubiquitination-dependent manner and is required for proper SG assembly, coalescence, and disassembly. Functionally, MKRN2 prevents the accumulation of defective ribosomal products (DRiPs) within SGs, thereby maintaining granulostasis. The study provides compelling evidence that ubiquitination, mediated specifically by MKRN2, plays a critical role in surveilling stress-damaged proteins within SGs and maintaining their dynamic liquid-like properties.

      Major issues:

      1. Figures 1-2: Temporal dynamics of ubiquitination in SGs. The APEX proteomics was performed at a single timepoint (90 min heat stress), yet the live imaging data show that SG dynamics and TAK243 effects vary considerably over time:
      • The peak of SG nucleation was actually at 10-30 min (Figure 1B).
      • TAK243 treatment causes earlier SG nucleation (Figure 1B) but delayed disassembly (Figure 1A-B, D).
      A temporal proteomic analysis at multiple timepoints (e.g., 30 min, 60 min, 90 min of heat stress, and during recovery) would reveal whether MKRN2 and other ubiquitination-dependent proteins are recruited to SGs dynamically during the stress response. It would also delineate whether different E3 ligases predominate at different stages of the SG lifecycle. While such experiments may be beyond the scope of the current study, the authors should at minimum discuss this limitation and acknowledge that the single-timepoint analysis may miss dynamic changes in SG composition. *

      Authors: We thank the reviewer for identifying this caveat in our methodology. We now discuss this limitation and acknowledge that the single-timepoint analysis may miss dynamic changes in SG composition.

      * Figures 2D-E, 3G: MKRN2 localization mechanism requires clarification. The authors demonstrate that MKRN2 localization to SGs is dependent on active ubiquitination, as TAK243 treatment significantly reduces MKRN2 partitioning into SGs (Figure 2D-E). However, several mechanistic questions remain:
      • Does MKRN2 localize to SGs through binding to ubiquitinated substrates within SGs, or does MKRN2 require its own ubiquitination activity to enter SGs?
      • The observation that MKRN2 overexpression increases SG ubiquitin content (Figure 3G-H) could indicate either: (a) MKRN2 actively ubiquitinates substrates within SGs, or (b) MKRN2 recruitment brings along pre-ubiquitinated substrates from the cytoplasm.
      • Is MKRN2 localization to SGs dependent on its E3 ligase activity? A catalytically inactive mutant of MKRN2 would help distinguish whether MKRN2 must actively ubiquitinate proteins to remain in SGs or whether it binds to ubiquitinated proteins independently of its catalytic activity.
      The authors should clarify whether MKRN2's SG localization depends on its catalytic activity or on binding to ubiquitinated proteins, as this would fundamentally affect the interpretation of its role in SG dynamics. *

      Authors: We thank the reviewer for this experimental suggestion. We will perform an analysis of the SG partitioning coefficient between WT-MKRN2 and a RING mutant of MKRN2.

      * Figures 3-4: Discrepancy between assembly and disassembly phenotypes. MKRN2 knockdown produces distinct phenotypes during SG assembly versus disassembly. During assembly: smaller, more numerous SGs that fail to coalesce (Figure 3A-E), while during disassembly: delayed SG clearance (Figure 4A-D). These phenotypes may reflect different roles for MKRN2 at different stages, but the mechanism underlying this stage-specificity is unclear:
      • Does MKRN2 have different substrates or utilize different ubiquitin chain types during assembly versus disassembly?
      • The increased SG liquidity upon MKRN2 depletion (Figure 3F) seems paradoxical with delayed disassembly; typically, more liquid condensates disassemble faster. The authors interpret this as decreased coalescence into "dense and mature SGs," but this requires clarification.
      • How does prevention of DRiP accumulation relate to the assembly defect? One would predict that DRiP accumulation would primarily affect disassembly (by reducing liquidity), yet MKRN2 depletion impacts both assembly dynamics and DRiP accumulation.
      The authors should discuss how MKRN2's role in preventing DRiP accumulation mechanistically connects to both the assembly and disassembly phenotypes. *

      Authors: We thank the reviewer and will add to the Discussion a mention of a precedent for this precise phenotype from our previous work (Seguin et al., 2014).

      * Figure 5: Incomplete characterization of MKRN2 substrates. While the authors convincingly demonstrate that MKRN2 prevents DRiP accumulation in SGs (Figure 5C-D), the direct substrates of MKRN2 remain unknown. The authors acknowledge in the limitations that "the direct MKRN2 substrates and ubiquitin-chain types (K63/K48) are currently unknown." However, several approaches could strengthen the mechanistic understanding:
      • Do DRiPs represent direct MKRN2 substrates? Co-immunoprecipitation of MKRN2 followed by ubiquitin chain-specific antibodies (K48 vs K63) could reveal whether MKRN2 mediates degradative (K48) or non-degradative (K63) ubiquitination. *

      Authors: The DRiPs generated in the study represent truncated versions of all the proteins that were in the process of being synthesized by the cell at the moment of the stress, and therefore include both MKRN2 specific substrates and MKRN2 independent substrates. Identifying specific MKRN2 substrates, while interesting as a new research avenue, is not within the scope of the present study.

      • * Given that VCP cofactors (such as UFD1L, PLAA) are depleted from SGs upon UBA1 inhibition (Figure 2C) and these cofactors recognize ubiquitinated substrates, does MKRN2 function upstream of VCP recruitment? Testing whether MKRN2 depletion affects VCP cofactor localization to SGs would clarify this pathway. *

      Authors: We thank the reviewer for this suggestion and will address it by performing MKRN2 KD followed by VCP immunofluorescence microscopy and SG partition coefficient analysis.

      • * The authors note that MKRN2 knockdown produces a phenotype reminiscent of VCP inhibition: smaller, more numerous SGs with increased DRiP partitioning. This similarity suggests MKRN2 may function in the same pathway as VCP. Direct epistasis experiments would strengthen this connection. *

      Authors: This experiment is conditional on the results of the above study. If VCP partitioning to SGs is reduced upon MKRN2 KD, which we do not know at this point, then an MKRN2/VCP double KD experiment will be performed to strengthen this connection.

      * Alternative explanations for the phenotype of delayed disassembly with TAK243 or MKRN2 depletion: the authors attribute this to DRiP accumulation, but TAK243 affects global ubiquitination. Could impaired degradation of other SG proteins (not just DRiPs) contribute to delayed disassembly? Does proteasome inhibition (MG-132 treatment) phenocopy the MKRN2 depletion phenotype? This would support that MKRN2-mediated proteasomal degradation (via K48 ubiquitin chains) is key to the phenotype. *

      Authors: We are happy to provide alternative explanations in the Discussion, in line with Reviewer #2's suggestion. The role of the proteasome is outside the scope of our study.

      • Comparison with other E3 ligases (Supplementary Figure 5): The authors show that CNOT4 and ZNF598 depletion also affect SG dynamics, though to lesser extents than MKRN2. However:
      • Do these E3 ligases also prevent DRiP accumulation in SGs? Testing OP-puro partitioning in CNOT4- or ZNF598-depleted cells would reveal whether DRiP clearance is a general feature of SG-localized E3 ligases or specific to MKRN2. *

      • * Are there redundant or compensatory relationships between these E3 ligases? Do double knockdowns have additive effects? *

      Authors: Our paper presents a study of the E3 ligase MKRN2. Generalizing these observations to ZNF598, CNOT4, and perhaps an even longer list of E3s may be an interesting question, but it is outside the scope of our study.

      • * The authors note that MKRN2 is "the most highly SG-depleted E3 upon TAK243 treatment": does this mean MKRN2 has the strongest dependence on active ubiquitination for its SG localization, or simply that it has the highest basal level of SG partitioning? *

      Authors: We thank the reviewer for this insightful question. MKRN2 has the strongest dependence on active ubiquitination, as we now clarify in the Results.

      Reviewer #2 (Significance (Required)):

      This is a well-executed study that identifies MKRN2 as an important regulator of stress granule dynamics and proteostasis. The combination of proximity proteomics, live imaging, and functional assays provides strong evidence for MKRN2's role in preventing DRiP accumulation and maintaining granulostasis. However, key mechanistic questions remain, particularly regarding MKRN2's direct substrates, the ubiquitin chain types it generates, and how its enzymatic activity specifically prevents DRiP accumulation while promoting both SG coalescence and disassembly. Addressing the suggested revisions, particularly those related to MKRN2's mechanism of SG localization and substrate specificity, would significantly strengthen the manuscript and provide clearer insights into how ubiquitination maintains the dynamic properties of stress granules under proteotoxic stress.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      In this paper, Amzallag et al. investigate the relationship between ubiquitination and the dynamics of stress granules (SGs). They utilize proximity ligation-coupled mass spectrometry to identify SG components under conditions where ubiquitination is inhibited by a small-molecule drug that targets UBiquitin-like modifier Activating enzyme 1 (UBA1), which is crucial for the initial step in the ubiquitination of misfolded proteins. Their findings reveal that the E3 ligase Makorin2 (MKRN2) is a novel component of SGs. Additionally, their data suggest that MKRN2 is necessary for processing defective ribosomal products (DRiPs) during heat shock (HS). In the absence of MKRN2, DRiPs accumulate in SGs, which affects their dynamics.

      Major comments:

      Assess the knockdown efficiency (KD) for CNOT1, ZNF598, and MKRN2 to determine if the significant effect observed on SG dynamics upon MKRN2 depletion is due to the protein's function rather than any possible differences in KD efficiency. *

      Authors: To address potential variability in knockdown efficiency, we will quantify CNOT4, ZNF598, and MKRN2 mRNA levels by RT-qPCR following siRNA knockdown.

      * Since HS-induced stress granules (SGs) are influenced by the presence of TAK-243 or MKRN2 depletion, could it be that these granules become more mature and thus acquire more defective ribosomal products (DRIPs)? Do HS cells reach the same level of DRIPs, as assessed by OP-Puro staining, at a later time point? *

      Authors: An interesting question. Mateju et al. carefully characterized the time course of DRiP accumulation in stress granules during heat shock, with accumulation decreasing after the 90-minute point (Appendix Figure S7; 10.15252/embj.201695957). We therefore interpret DRiP accumulation in stress granules following TAK243 treatment as a pathological state, reflecting impaired removal and degradation of DRiPs, rather than a normal, more “mature” stress granule state.

      * Incorporating OP-Puro can lead to premature translation termination, potentially confounding results. Consider treating cells with a short pulse (i.e., 5 minutes) of OP-Puro just before fixation. *

      Authors: Thank you for this suggestion. Treating cells with a short pulse of OP-Puro just before fixation would label only a small amount of protein, likely undetectable using conventional microscopy or Western blotting. Furthermore, it would lead to the unwanted labeling of stress-responsive proteins that are translated by non-canonical, cap-independent mechanisms upon stress.

      * Is MKRN2's dependence limited to HS-induced SGs? *

      Authors: We will test sodium arsenite–induced stress and use immunofluorescence at discrete time points to assess whether the heat shock–related observations generalize to other stress types.

      *

      Minor comments:
      Abstract: Introduce UBA1.
      Introduction: The reference [2] should be replaced with 25719440.
      Results: Line 70, 'G3BP1 and 2 genes,' is somewhat misleading. Consider rephrasing to 'G3BP1 and G3BP2 genes'. Line 103: consider rephrasing 'we orthogonally validated the ubiquitin-dependent interaction' to 'we orthogonally validated the ubiquitin-dependent stress granule localization'. Line 125: '(fig.3C, EI Supplementary fig. 3)' Remove 'I'.
      Methods: Line 260: the reference is not linked (it should be ref. [26]). Line 225: Are all the KDs being performed using the same method? Please specify. *

      Authors: The text has been altered to reflect the reviewer’s suggestions.

      *Fig.2C: Consider adding 'DEPLETED' on top of the scheme.

      Reviewer #3 (Significance (Required)):

      The study offers valuable insights into the degradative processes associated with SGs. The figures are clear, and the experimental quality is high. The authors do not overstate or overinterpret their findings, and the results effectively support their claims. However, the study lacks orthogonal methods to validate the findings and enhance the results. For instance, incorporating biochemical and reporter-based methods to measure defective ribosomal products (DRiPs) would be beneficial. Additionally, utilizing multiple methods to block ubiquitination, studying the dynamics of MKRN2 on SGs, and examining the consequences of excessive DRiPs on cell fitness would further strengthen the research. *

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      In this paper, Amzallag et al. investigate the relationship between ubiquitination and the dynamics of stress granules (SGs). They utilize proximity ligation-coupled mass spectrometry to identify SG components under conditions where ubiquitination is inhibited by a small-molecule drug that targets UBiquitin-like modifier Activating enzyme 1 (UBA1), which is crucial for the initial step in the ubiquitination of misfolded proteins. Their findings reveal that the E3 ligase Makorin2 (MKRN2) is a novel component of SGs. Additionally, their data suggest that MKRN2 is necessary for processing defective ribosomal products (DRiPs) during heat shock (HS). In the absence of MKRN2, DRiPs accumulate in SGs, which affects their dynamics.

      Major comments:

      1. Assess the knockdown efficiency (KD) for CNOT1, ZNF598, and MKRN2 to determine if the significant effect observed on SG dynamics upon MKRN2 depletion is due to the protein's function rather than any possible differences in KD efficiency.
      2. Since HS-induced stress granules (SGs) are influenced by the presence of TAK-243 or MKRN2 depletion, could it be that these granules become more mature and thus acquire more defective ribosomal products (DRiPs)? Do HS cells reach the same level of DRiPs, as assessed by OP-Puro staining, at a later time point?
      3. Incorporating OP-Puro can lead to premature translation termination, potentially confounding results. Consider treating cells with a short pulse (i.e., 5 minutes) of OP-Puro just before fixation.
      4. Is MKRN2's dependence limited to HS-induced SGs?

      Minor comments:

      Abstract:

      Introduce UBA1.

      Introduction:

      The reference [2] should be replaced with 25719440.

      Results:

      Line 70, 'G3BP1 and 2 genes,' is somewhat misleading. Consider rephrasing to 'G3BP1 and G3BP2 genes'. Line 103: consider rephrasing 'we orthogonally validated the ubiquitin-dependent interaction' to 'we orthogonally validated the ubiquitin-dependent stress granule localization'. Line 125: '(fig.3C, EI Supplementary fig. 3)' Remove 'I'.

      Methods:

      Line 260: the reference is not linked (it should be ref. [26]). Line 225: Are all the KDs being performed using the same method? Please specify.

      Fig.2C: Consider adding 'DEPLETED' on top of the scheme.

      Significance

      The study offers valuable insights into the degradative processes associated with SGs. The figures are clear, and the experimental quality is high. The authors do not overstate or overinterpret their findings, and the results effectively support their claims. However, the study lacks orthogonal methods to validate the findings and enhance the results. For instance, incorporating biochemical and reporter-based methods to measure defective ribosomal products (DRiPs) would be beneficial. Additionally, utilizing multiple methods to block ubiquitination, studying the dynamics of MKRN2 on SGs, and examining the consequences of excessive DRiPs on cell fitness would further strengthen the research.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      This study identifies the E3 ubiquitin ligase Makorin 2 (MKRN2) as a novel regulator of stress granule (SG) dynamics and proteostasis. Using APEX proximity proteomics, the authors demonstrate that inhibition of the ubiquitin-activating enzyme UBA1 with TAK243 alters the SG proteome, leading to depletion of several E3 ligases, chaperones, and VCP cofactors. Detailed characterization of MKRN2 reveals that it localizes to SGs in a ubiquitination-dependent manner and is required for proper SG assembly, coalescence, and disassembly. Functionally, MKRN2 prevents the accumulation of defective ribosomal products (DRiPs) within SGs, thereby maintaining granulostasis. The study provides compelling evidence that ubiquitination, mediated specifically by MKRN2, plays a critical role in surveilling stress-damaged proteins within SGs and maintaining their dynamic liquid-like properties.

      Major issues:

      1. Figures 1-2: Temporal dynamics of ubiquitination in SGs. The APEX proteomics was performed at a single timepoint (90 min heat stress), yet the live imaging data show that SG dynamics and TAK243 effects vary considerably over time:
        • The peak of SG nucleation was actually at 10-30 min (Figure 1B).
        • TAK243 treatment causes earlier SG nucleation (Figure 1B) but delayed disassembly (Figure 1A-B, D). A temporal proteomic analysis at multiple timepoints (e.g., 30 min, 60 min, 90 min of heat stress, and during recovery) would reveal whether MKRN2 and other ubiquitination-dependent proteins are recruited to SGs dynamically during the stress response. It would also delineate whether different E3 ligases predominate at different stages of the SG lifecycle. While such experiments may be beyond the scope of the current study, the authors should at minimum discuss this limitation and acknowledge that the single-timepoint analysis may miss dynamic changes in SG composition.
      2. Figures 2D-E, 3G: MKRN2 localization mechanism requires clarification. The authors demonstrate that MKRN2 localization to SGs is dependent on active ubiquitination, as TAK243 treatment significantly reduces MKRN2 partitioning into SGs (Figure 2D-E). However, several mechanistic questions remain:
        • Does MKRN2 localize to SGs through binding to ubiquitinated substrates within SGs, or does MKRN2 require its own ubiquitination activity to enter SGs?
        • The observation that MKRN2 overexpression increases SG ubiquitin content (Figure 3G-H) could indicate either: (a) MKRN2 actively ubiquitinates substrates within SGs, or (b) MKRN2 recruitment brings along pre-ubiquitinated substrates from the cytoplasm.
        • Is MKRN2 localization to SGs dependent on its E3 ligase activity? A catalytically inactive mutant of MKRN2 would help distinguish whether MKRN2 must actively ubiquitinate proteins to remain in SGs or whether it binds to ubiquitinated proteins independently of its catalytic activity. The authors should clarify whether MKRN2's SG localization depends on its catalytic activity or on binding to ubiquitinated proteins, as this would fundamentally affect the interpretation of its role in SG dynamics.
      3. Figures 3-4: Discrepancy between assembly and disassembly phenotypes. MKRN2 knockdown produces distinct phenotypes during SG assembly versus disassembly. During assembly: smaller, more numerous SGs that fail to coalesce (Figure 3A-E), while during disassembly: delayed SG clearance (Figure 4A-D). These phenotypes may reflect different roles for MKRN2 at different stages, but the mechanism underlying this stage-specificity is unclear:
        • Does MKRN2 have different substrates or utilize different ubiquitin chain types during assembly versus disassembly?
        • The increased SG liquidity upon MKRN2 depletion (Figure 3F) seems paradoxical with delayed disassembly; typically, more liquid condensates disassemble faster. The authors interpret this as decreased coalescence into "dense and mature SGs," but this requires clarification.
        • How does prevention of DRiP accumulation relate to the assembly defect? One would predict that DRiP accumulation would primarily affect disassembly (by reducing liquidity), yet MKRN2 depletion impacts both assembly dynamics and DRiP accumulation. The authors should discuss how MKRN2's role in preventing DRiP accumulation mechanistically connects to both the assembly and disassembly phenotypes.
      4. Figure 5: Incomplete characterization of MKRN2 substrates. While the authors convincingly demonstrate that MKRN2 prevents DRiP accumulation in SGs (Figure 5C-D), the direct substrates of MKRN2 remain unknown. The authors acknowledge in the limitations that "the direct MKRN2 substrates and ubiquitin-chain types (K63/K48) are currently unknown." However, several approaches could strengthen the mechanistic understanding:
        • Do DRiPs represent direct MKRN2 substrates? Co-immunoprecipitation of MKRN2 followed by ubiquitin-chain specific antibodies (K48 vs K63) could reveal whether MKRN2 mediates degradative (K48) or non-degradative (K63) ubiquitination.
        • Given that VCP cofactors (such as UFD1L, PLAA) are depleted from SGs upon UBA1 inhibition (Figure 2C) and these cofactors recognize ubiquitinated substrates, does MKRN2 function upstream of VCP recruitment? Testing whether MKRN2 depletion affects VCP cofactor localization to SGs would clarify this pathway.
        • The authors note that MKRN2 knockdown produces a phenotype reminiscent of VCP inhibition: smaller, more numerous SGs with increased DRiP partitioning. This similarity suggests MKRN2 may function in the same pathway as VCP. Direct epistasis experiments would strengthen this connection.
      5. Alternative explanations for the phenotype of delayed disassembly with TAK243 or MKRN2 depletion: the authors attribute this to DRiP accumulation, but TAK243 affects global ubiquitination. Could impaired degradation of other SG proteins (not just DRiPs) contribute to delayed disassembly? Does proteasome inhibition (MG-132 treatment) phenocopy the MKRN2 depletion phenotype? This would support that MKRN2-mediated proteasomal degradation (via K48 ubiquitin chains) is key to the phenotype.
      6. Comparison with other E3 ligases (Supplementary Figure 5): The authors show that CNOT4 and ZNF598 depletion also affect SG dynamics, though to lesser extents than MKRN2. However:
        • Do these E3 ligases also prevent DRiP accumulation in SGs? Testing OP-puro partitioning in CNOT4- or ZNF598-depleted cells would reveal whether DRiP clearance is a general feature of SG-localized E3 ligases or specific to MKRN2.
        • Are there redundant or compensatory relationships between these E3 ligases? Do double knockdowns have additive effects?
        • The authors note that MKRN2 is "the most highly SG-depleted E3 upon TAK243 treatment": does this mean MKRN2 has the strongest dependence on active ubiquitination for its SG localization, or simply that it has the highest basal level of SG partitioning?

      Significance

      This is a well-executed study that identifies MKRN2 as an important regulator of stress granule dynamics and proteostasis. The combination of proximity proteomics, live imaging, and functional assays provides strong evidence for MKRN2's role in preventing DRiP accumulation and maintaining granulostasis. However, key mechanistic questions remain, particularly regarding MKRN2's direct substrates, the ubiquitin chain types it generates, and how its enzymatic activity specifically prevents DRiP accumulation while promoting both SG coalescence and disassembly. Addressing the suggested revisions, particularly those related to MKRN2's mechanism of SG localization and substrate specificity, would significantly strengthen the manuscript and provide clearer insights into how ubiquitination maintains the dynamic properties of stress granules under proteotoxic stress.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      Summary:

      In this study, the authors used proximity proteomics in U2OS cells to identify several E3 ubiquitin ligases recruited to stress granules (SGs), and they focused on MKRN2 as a novel regulator. They show that MKRN2 localization to SGs requires active ubiquitination via UBA1. Functional experiments demonstrated that MKRN2 knockdown increases the number of SG condensates, reduces their size, slightly raises SG liquidity during assembly, and slows disassembly after heat shock. Overexpression of MKRN2-GFP combined with confocal imaging revealed co-localization of MKRN2 and ubiquitin in SGs. By perturbing ubiquitination (using a UBA1 inhibitor) and inducing defective ribosomal products (DRiPs) with O-propargyl puromycin, they found that both ubiquitination inhibition and MKRN2 depletion lead to increased accumulation of DRiPs in SGs. The authors conclude that MKRN2 supports granulostasis, the maintenance of SG homeostasis, through its ubiquitin ligase activity, preventing pathological DRiP accumulation within SGs.

      Major comments:

      • Are the key conclusions convincing?

      The key conclusions are partially convincing. The data supporting the role of ubiquitination and MKRN2 in regulating SG condensate dynamics are coherent, well controlled, and consistent with previous literature, making this part of the study solid and credible. However, the conclusions regarding the ubiquitin-dependent recruitment of MKRN2 to SGs, its relationship with UBA1 activity, and the functional impact of the MKRN2 knockdown on DRiP accumulation are less thoroughly supported. These aspects would benefit from additional mechanistic evidence, validation in complementary model systems, or the use of alternative methodological approaches to strengthen the causal connections drawn by the authors.

      • Should the authors qualify some of their claims as preliminary or speculative, or remove them altogether?

      The authors should qualify some of their claims as preliminary.

      1) MKRN2 recruitment to SGs (ubiquitin-dependent): The proteomics and IF data are a reasonable starting point, but they do not yet establish that MKRN2 is recruited from its physiological localization to SGs in a ubiquitin-dependent manner. To avoid overstating this point, the authors should qualify the claim and/or provide additional controls: show baseline localization of endogenous MKRN2 under non-stress conditions (which is reported in the literature to be nuclear and cytoplasmic), include quantification of nuclear/cytoplasmic distribution, and demonstrate a shift into bona fide SG compartments after heat shock. Moreover, co-localization of overexpressed GFP-MKRN2 with poly-Ub (FK2) should be compared to a non-stress control and to UBA1-inhibition conditions to support claims of stress- and ubiquitination-dependent recruitment.

      2) Use and interpretation of UBA1 inhibition: UBA1 inhibition effectively blocks ubiquitination globally, but it is non-selective. The manuscript should explicitly acknowledge this limitation when interpreting results from both proteomics and functional assays. Proteomics hits identified under UBA1 inhibition should be discussed as UBA1-dependent associations rather than as evidence for specific E3 ligase recruitment. The authors should consider orthogonal approaches before concluding specificity.

      3) DRiP accumulation and imaging quality: The evidence presented in Figure 5 is sufficient to substantiate the claim that DRiPs accumulate in SGs upon ubiquitination inhibition or MKRN2 depletion. However, additional experiments would be needed to show that DRiP localization to SGs, and their clearance from SGs during stress, is promoted by MKRN2 ubiquitin ligase activity.

      • Would additional experiments be essential to support the claims of the paper? Request additional experiments only where necessary for the paper as it is, and do not ask authors to open new lines of experimentation.

      Yes, a few targeted experiments would strengthen the conclusions without requiring the authors to open new lines of investigation.

      1) Baseline localization of MKRN2: It would be important to show the baseline localization of endogenous and over-expressed MKRN2 (nuclear and cytoplasmic) under non-stress conditions and prior to ubiquitination inhibition. This would provide a reference to quantify redistribution into SGs and demonstrate recruitment in response to heat stress or ubiquitination-dependent mechanisms.

      2) Specificity of MKRN2 ubiquitin ligase activity: to address the non-specific effects of UBA1 inhibition and validate that observed phenotypes depend on MKRN2's ligase activity, the authors could employ a catalytically inactive MKRN2 mutant in rescue experiments. Comparing wild-type and catalytic-dead MKRN2 in the knockdown background would clarify the causal role of MKRN2 activity in SG dynamics and DRiP clearance.

      3) Ubiquitination linkage and SG marker levels: While the specific ubiquitin linkage type remains unknown, examining whether MKRN2 knockdown or overexpression affects total levels of key SG marker proteins would be informative. This could be done via Western blotting of SG markers along with ubiquitin staining, to assess whether MKRN2 influences protein stability or turnover through degradative or non-degradative ubiquitination. Such data would strengthen the mechanistic interpretation while remaining within the current study's scope.

      • Are the suggested experiments realistic in terms of time and resources? It would help if you could add an estimated cost and time investment for substantial experiments.

      The experiments suggested in points 1 and 3 are realistic and should not require substantial additional resources beyond those already used in the study.
        • Point 1 (baseline localization of MKRN2): This involves adding two control conditions (no stress and no ubiquitination inhibition) for microscopy imaging. The setup is essentially the same as in the current experiments, with time requirements mainly dependent on cell culture growth and imaging. Overall, this could be completed within a few weeks.
        • Point 3 (SG marker levels and ubiquitination): This entails repeating the existing experiment and adding a Western blot for SG markers and ubiquitin. The lab should already have the necessary antibodies, and the experiment could reasonably be performed within a couple of weeks.
        • Point 2 (catalytically inactive MKRN2 mutant and rescue experiments): This is likely more time-consuming. Designing an effective catalytic-dead mutant depends on structural knowledge of MKRN2 and may require additional validation to confirm loss of catalytic activity. If this expertise is not already present in the lab, it could significantly extend the timeline. Therefore, this experiment should be considered only if similarly recommended by other reviewers, as it represents a higher resource and time investment.

      Overall, points 1 and 3 are highly feasible, while point 2 is more substantial and may require careful planning.

      • Are the data and the methods presented in such a way that they can be reproduced?

      Yes. The methodologies used in this study to analyze SG dynamics and DRiP accumulation are well-established in the field and should be reproducible, particularly by researchers experienced in stress granule biology. Techniques such as SG assembly and disassembly assays, use of G3BP1 markers, and UBA1 inhibition are standard and clearly described. The data are generally presented in a reproducible manner; however, as noted above, some results would benefit from additional controls or complementary experiments to fully support specific conclusions.

      • Are the experiments adequately replicated and statistical analysis adequate?

      Overall, the experiments in the manuscript appear to be adequately replicated, with most assays repeated between three and five times, as indicated in the supplementary materials. The statistical analyses used are appropriate and correctly applied to the datasets presented. However, for Figure 5 the number of experimental replicates is not reported. This should be clarified, and if the experiment was not repeated sufficiently, additional biological replicates should be performed. Given that this figure provides central evidence supporting the conclusion that DRiP accumulation depends on ubiquitination, and partly on MKRN2's ubiquitin ligase activity, adequate replication is essential.

      Minor comments:

      • Specific experimental issues that are easily addressable.
        • For the generation and validation of the MKRN2 knockdown in U2OS cells, no data are presented in the Results or Methods sections demonstrating effective knockdown of the protein of interest. This point is essential to demonstrate the validity of the system used.
        • In Supplementary Figure 2, it would be useful to state whether the Western blot represents the input (total cell lysates) before the APEX pulldown or the APEX pulldown itself loaded for WB. The degree of biotinylation is also not consistent between the replicates shown in the two blots. For example, in R1 and R2 the G3BP1-APEX TAK243 condition is one of the most strongly biotinylated, while in the left blot, for the same condition comparison, samples R3 and R4 are less biotinylated than the others. It would be useful to provide an explanation for this to avoid any confusion for the readers.
        • In Figure 2D, endogenous MKRN2 localization to SGs appears reduced following UBA1 inhibition. However, it is not clear whether this reduction reflects a true relocalization or a decrease in total MKRN2 protein levels. To support the interpretation that UBA1 inhibition specifically affects MKRN2 recruitment to SGs rather than its overall expression, the authors should provide data showing total MKRN2 levels remain unchanged under UBA1 inhibition, for example via Western blot of total cell lysates.
        • DRiP accumulation is followed during assembly, but the introduction highlights that ubiquitination events and other reported E3 ligases, together with the data on MKRN2 in this study, play a crucial role in the disassembly of SGs, which is also related to DRiP clearance. The authors could add tracking of DRiP accumulation during disassembly to Figure 5. I am not sure about the timeline this would require, so I add it only as an optional point if it can be addressed easily.
        • The authors should clarify in the text why the cutoff used for the quantification in Figure 5D (PC > 3) differs from the cutoff used elsewhere in the paper (PC > 1.5). Providing a rationale for this choice will help the reader understand the methodological consistency and ensure that differences in thresholds do not confound interpretation of the results.
        • For Figure 3G, the authors use over-expressed MKRN2-GFP to assess co-localization with ubiquitin in SGs. Given that a reliable antibody for endogenous MKRN2 is available and that a validated MKRN2 knockdown line exists as an appropriate control, this experiment would gain significantly in robustness and interpretability if co-localization were demonstrated using endogenous MKRN2. In the current over-expression system, MKRN2-GFP is also present in the nucleus, whereas the endogenous protein does not appear nuclear under the conditions shown. This discrepancy raises concerns about potential over-expression artifacts or mislocalization. Demonstrating co-localization using endogenous MKRN2 would avoid confounding effects associated with over-expression. If feasible, this would be a relatively straightforward experiment to implement, as it relies on tools (antibody and knockdown line) already described in the manuscript.
      • Are prior studies referenced appropriately?

        • From line 54 to line 67, the manuscript in total cites eight papers regarding the role of ubiquitination in SG disassembly. However, given the use of UBA1 inhibition in the initial MS-APEX experiment and the extensive prior literature on ubiquitination in SG assembly and disassembly under various stress conditions, the manuscript would benefit from citing additional relevant studies to provide more specific examples. Expanding the references would provide stronger context, better connect the current findings to prior work, and emphasize the significance of the study in relation to the established literature.
        • At line 59, it would be helpful to note that G3BP1 is ubiquitinated by TRIM21 through a Lys63-linked ubiquitin chain. This information provides important mechanistic context, suggesting that ubiquitination of SG proteins in these pathways is likely non-degradative and related to functional regulation of SG dynamics rather than protein turnover.
        • When citing references 16 and 17, which report that the E3 ligases TRIM21 and HECT regulate SG formation, the authors should provide a plausible explanation for why these specific E3 ligases were not detected in their proteomics experiments. Differences could arise from the stress stimulus used, cell type, or experimental conditions. Similarly, since MKRN2 and other E3 ligases identified in this study have not been reported in previous works, discussing these methodological or biological differences would help prevent readers from questioning the credibility of the findings. It would also be valuable to clarify in the Conclusion that different types of stress may activate distinct ubiquitination pathways, highlighting context-dependent regulation of SG assembly and disassembly.
        • Line 59-60: when referring to the HECT family of E3 ligases involved in ubiquitination and SG disassembly, it would be more precise to report the specific E3 ligase identified in the cited studies rather than only the class of ligase. This would provide clearer mechanistic context and improve accuracy for readers.
        • The specific statement on line 182, "SG E3 ligases that depend on UBA1 activity are RBULs", should be supported by a reference.
        • Are the text and figures clear and accurate?
        • In Supplementary Figure 1, DMSO is shown in green and the treatment in red, whereas in the main figures (Figure 1B and 1F) the colours in the legend are inverted. To avoid confusion, the colour coding in figure legends should be consistent across all figures throughout the manuscript.
        • At line 79, the manuscript states that "inhibition of ubiquitination delayed fluorescence recovery dynamics of G3BP1-mCherry, relative to HS-treated cells (Figure 1F, Supplementary Fig. 6A)." However, the data shown in Figure 1F appear to indicate the opposite effect: the TAK243-treated condition (green curve) shows a faster fluorescence recovery compared to the control (red curve). This discrepancy between the text and the figure should be corrected or clarified, as it may affect the interpretation of the role of ubiquitination in SG dynamics.
        • Line 86: add the missing bracket.
        • There appears to be an error in the legend of Supplementary Figure 3: the legend states that the red condition (MKRN2) forms larger aggregates, but both the main Figure 3C of the confocal images and the text indicate that MKRN2 (red) forms smaller aggregates. Please correct the legend and any corresponding labels so they are consistent with the main figure and the text. The authors should also double-check that the figure panel order, color coding, and statistical annotations match the legend and the descriptions in the Results section to avoid reader confusion.
        • At lines 129-130, the manuscript states that "FRAP analysis demonstrated that MKRN2 KD resulted in a slight increase in SG liquidity (Fig. 3F, Supplementary Fig. 6B)." However, the data shown in Figure 3F appear to indicate the opposite trend: the MKRN2 KD condition (red curve) exhibits a faster fluorescence recovery compared to the control (green curve). This discrepancy between the text and the figure should be corrected or clarified, as it directly affects the interpretation of MKRN2's role in SG disassembly. Ensuring consistency between the written description and the plotted FRAP data is essential for accurate interpretation.
        • At lines 132-133, the manuscript states: "Then, to further test the impact of MKRN2 on SG dynamics, we overexpressed MKRN2-GFP and observed that it was recruited to SG (Fig. 3G)." This description should be corrected or clarified, as the over-expressed MKRN2-GFP also appears to localize to the nucleus.
        • At lines 134-135, the manuscript states that the FK2 antibody detects "free ubiquitin." This is incorrect. FK2 does not detect free ubiquitin; it recognizes only ubiquitin conjugates, including mono-ubiquitinated and poly-ubiquitinated proteins. The text should be corrected accordingly to avoid misinterpretation of the immunostaining data.
        • Figure 5A suffers from poor resolution, and no scale bar is provided, which limits interpretability. Additionally, the ROI selected for the green channel (DRiPs) appears to capture unspecific background staining, while the most obvious DRiP spots are localized in the nucleus. The authors should clarify this in the text, improve the image quality if possible, and ensure that the ROI accurately represents DRiP accumulation in SGs rather than background signal.
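      Several of the points above (e.g. the comments on Figures 1F and 3F) hinge on whether one FRAP curve recovers faster or slower than another, which is easiest to adjudicate from fitted recovery half-times rather than visual inspection. A minimal sketch with synthetic single-exponential traces (illustrative values only, not data from the manuscript):

```python
import numpy as np

def frap_half_time(t, f):
    """Half-time of recovery from a background-corrected, normalized
    FRAP trace (0 at the bleach, rising toward the mobile fraction)."""
    plateau = f[-1]                       # mobile-fraction estimate
    # first time the trace crosses half of the plateau (linear interpolation)
    return float(np.interp(0.5 * plateau, f, t))

# Illustrative single-exponential traces, F(t) = M * (1 - exp(-t / tau)):
t = np.linspace(0.0, 60.0, 601)               # seconds after photobleaching
control   = 0.8 * (1.0 - np.exp(-t / 10.0))   # tau = 10 s
knockdown = 0.8 * (1.0 - np.exp(-t / 5.0))    # tau = 5 s

t_half_ctrl = frap_half_time(t, control)      # ~ 10 * ln(2) s
t_half_kd   = frap_half_time(t, knockdown)    # ~ 5 * ln(2) s
```

Reporting half-times (and mobile fractions) alongside the curves would make statements such as "slight increase in SG liquidity" directly quantifiable and would resolve the text-figure discrepancies noted above.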

      Do you have suggestions that would help the authors improve the presentation of their data and conclusions?

      • In the first paragraph following the APEX proteomics results, the authors present validation data exclusively for MKRN2, justifying this early focus by stating that MKRN2 is the most SG-depleted E3 ligase. However, in the subsequent paragraph they introduce the RBULs and present knockdown data for MKRN2 along with two additional E3 ligases identified in the screen, before once again emphasizing that MKRN2 is the most SG-depleted ligase and therefore the main focus of the study. For clarity and logical flow, the manuscript would benefit from reordering the narrative. Specifically, the authors should first present the validation data for all three selected E3 ligases, and only then justify the decision to focus on MKRN2 for in-depth characterization. In addition to the extent of its SG depletion, the authors may also consider providing biologically relevant reasons for prioritizing MKRN2 (e.g., domain architecture, known roles in stress responses, or prior evidence of ubiquitination-related functions). Reorganizing this section would improve readability and better guide the reader through the rationale for the study's focus.
      • At lines 137-138, the manuscript states: "Together these data indicate that MKRN2 regulates the assembly dynamics of SGs by promoting their coalescence during HS and can increase SG ubiquitin content." While Figure 3G shows some co-localization of MKRN2 with ubiquitin, immunofluorescence alone is insufficient to claim an increase in SG ubiquitin content. This conclusion should be supported by orthogonal experiments, such as Western blotting, in vitro ubiquitination assays, or immunoprecipitation of SG components. Including a control under no-stress conditions would also help demonstrate that ubiquitination increases specifically in response to stress. The second part of the statement should therefore be rephrased to avoid overinterpretation, for example: "...and may be associated with increased ubiquitination within SGs, as suggested by co-localization, pending further validation by complementary assays."
      • At line 157, the statement: "Therefore, we conclude that MKRN2 ubiquitinates a subset of DRiPs, avoiding their accumulation inside SGs" should be rephrased as a preliminary observation. While the data support a role for MKRN2 in SG disassembly and a reduction of DRIPs, direct ubiquitination of DRIPs by MKRN2 has not been demonstrated. A more cautious phrasing would better reflect the current evidence and avoid overinterpretation.

      Significance

      General assessment: provide a summary of the strengths and limitations of the study. What are the strongest and most important aspects? What aspects of the study should be improved or could be developed?

      • This study provides a valuable advancement in understanding the role of ubiquitination in stress granule (SG) dynamics and the clearance of SGs formed under heat stress. A major strength is the demonstration of how E3 ligases identified through proteomic screening, particularly MKRN2, influence SG assembly and disassembly in a ubiquitination- and heat stress-dependent manner. The combination of proteomics, imaging, and functional assays provides a coherent mechanistic framework linking ubiquitination to SG homeostasis. Limitations of the study include the exclusive use of a single model system (U2OS cells), which may limit generalizability. Additionally, some observations, such as MKRN2-dependent ubiquitination within SGs and changes in DRiP accumulation under different conditions, would benefit from orthogonal validation experiments (e.g., Western blotting, immunoprecipitation, or in vitro assays) to confirm and strengthen these findings. Addressing these points would enhance the robustness and broader applicability of the conclusions.

      Advance: compare the study to the closest related results in the literature or highlight results reported for the first time to your knowledge; does the study extend the knowledge in the field and in which way? Describe the nature of the advance and the resulting insights (for example: conceptual, technical, clinical, mechanistic, functional,...).

      • The closest related result in the literature is Yang, Cuiwei et al. "Stress granule homeostasis is modulated by TRIM21-mediated ubiquitination of G3BP1 and autophagy-dependent elimination of stress granules." Autophagy vol. 19,7 (2023): 1934-1951. doi:10.1080/15548627.2022.2164427, which demonstrates that TRIM21, an E3 ubiquitin ligase, catalyzes K63-linked ubiquitination of G3BP1, a core SG nucleator, under oxidative stress. This ubiquitination by TRIM21 inhibits SG formation, likely by altering G3BP1's propensity for phase separation. In contrast, the MKRN2 study identifies a different E3 (MKRN2) that regulates SG dynamics under heat stress and appears to influence both assembly and disassembly. This expands the role of ubiquitin ligases in SG regulation beyond those previously studied (like TRIM21).
      • Gwon and colleagues (Gwon Y, Maxwell BA, Kolaitis RM, Zhang P, Kim HJ, Taylor JP. Ubiquitination of G3BP1 mediates stress granule disassembly in a context-specific manner. Science. 2021;372(6549):eabf6548. doi:10.1126/science.abf6548) have shown that K63-linked ubiquitination of G3BP1 is required for SG disassembly after heat stress. This ubiquitinated G3BP1 recruits the segregase VCP/p97, which helps extract G3BP1 from SGs for disassembly. The MKRN2 paper builds on this by linking UBA1-dependent ubiquitination and MKRN2's activity to SG disassembly. Specifically, they show MKRN2 knockdown affects disassembly, and suggest MKRN2 helps prevent accumulation of defective ribosomal products (DRiPs) in SGs, adding a new layer to the ubiquitin-VCP model.
      • Ubiquitination's impact is highly stress- and context-dependent (different chain types, ubiquitin linkages, and recruitment of E3s). The MKRN2 work conceptually strengthens this idea: by showing that MKRN2's engagement with SGs depends on active ubiquitination via UBA1, and by demonstrating functional consequences (SG dynamics + DRIP accumulation), the study highlights how cellular context (e.g., heat stress) can recruit specific ubiquitin ligases to SGs and modulate their behavior.
      • There is a gap in the literature: very few (if any) studies explicitly combine the biology of DRiPs, stress granules, and E3 ligase-mediated ubiquitination, especially in mammalian cells. There are relevant works about DRiP biology in stress granules, but those studies focus on chaperone-based quality control, not ubiquitin ligase-mediated ubiquitination of DRiPs. This study seems to be one of the first to make that connection in mammalian (or human-like) SG biology. A work on the plant DRIP-E3 ligase TaSAP5 (Zhang N, Yin Y, Liu X, et al. The E3 Ligase TaSAP5 Alters Drought Stress Responses by Promoting the Degradation of DRIP Proteins. Plant Physiol. 2017;175(4):1878-1892. doi:10.1104/pp.17.01319) shows that DRIPs can be directly ubiquitinated by E3s in other biological systems, which supports the plausibility of the MKRN2 mechanism, although it is not the same context.
      • A very recent review (Yuan, Lin et al. "Stress granules: emerging players in neurodegenerative diseases." Translational neurodegeneration vol. 14,1 22. 12 May. 2025, doi:10.1186/s40035-025-00482-9) summarizes and reinforces the relationship among SGs and the pathogenesis of different neurodegenerative diseases (NDDs). By identifying MKRN2 as a new ubiquitin regulator in SGs, the current study could have relevance for neurodegeneration and proteotoxic diseases, providing a new candidate to explore in disease models.

      Audience: describe the type of audience ("specialized", "broad", "basic research", "translational/clinical", etc...) that will be interested or influenced by this research; how will this research be used by others; will it be of interest beyond the specific field?

      The audience for this paper is primarily specialized, including researchers in stress granule biology, ubiquitin signaling, protein quality control, ribosome biology, and cellular stress responses. The findings will also be of interest to scientists working on granulostasis, nascent protein surveillance, and proteostasis mechanisms. Beyond these specific fields, the study provides preliminary evidence linking ubiquitination to DRIP handling and SG dynamics, which may stimulate new research directions and collaborative efforts across complementary areas of cell biology and molecular biology.

      Please define your field of expertise with a few keywords to help the authors contextualize your point of view. Indicate if there are any parts of the paper that you do not have sufficient expertise to evaluate.

      I work in ubiquitin biology, focusing on ubiquitination signaling in physiological and disease contexts, with particular expertise in the identification of E3 ligases and their substrates across different cellular systems and in vivo models. I have less expertise in stress granule dynamics and DRiP biology, so my evaluation of those aspects is more limited and relies on interpretation of the data presented in the manuscript.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Englert et al. proposed a functional connectome-based Hopfield artificial neural network (fcHNN) architecture to reveal attractor states and activity flows across various conditions, including resting state, task-evoked, and pathological conditions. The fcHNN can reconstruct characteristics of resting-state and task-evoked brain activities. Additionally, the fcHNN demonstrates differences in attractor states between individuals with autism and typically developing individuals.

      Strengths:

      (1) The study used seven datasets, which somewhat ensures robust replication and validation of generalization across various conditions.

      (2) The proposed fcHNN improves upon existing activity flow models by mimicking artificial neural networks, thereby enhancing the representational ability of the model. This advancement enables the model to more accurately reconstruct the dynamic characteristics of brain activity.

      (3) The fcHNN projection offers an interesting visualization, allowing researchers to observe attractor states and activity flow patterns directly.

      We are grateful to the reviewer for highlighting the robustness of our findings across multiple datasets and for appreciating the novelty and representational advantages of our fcHNN model (which has been renamed to fcANN in the revised manuscript).

      Weaknesses:

      (1) The fcHNN projection can offer low-dimensional dynamic visualizations, but its interpretability is limited, making it difficult to make strong claims based on these projections. The interpretability should be enhanced in the results and discussion.

      We thank the reviewer for these important points. We agree that the interpretability of the low-dimensional projection is limited. In the revised manuscript, we have reframed the fcANN projection primarily as a visualization tool (see e.g. line 359) and moved the corresponding part of Figure 2 to the Supplementary Material (Supplementary Figure 2). We have also implemented a substantial revision of the manuscript, which now directly links our analysis to the novel theoretical framework of self-orthogonalizing attractor networks (Spisak & Friston, 2025), opening several new avenues in terms of interpretation and shedding light on the computational principles underlying attractor dynamics in the brain (see the revised introduction and the new section “Theoretical background”, starting at lines 128, but also the Mathematical Appendices 1-2 in the Supplementary Material for a comprehensive formal derivation). As part of these efforts, we now provide evidence for the brain’s functional organization approximating a special, computationally efficient class of attractor networks, the so-called Kanter-Sompolinsky projector network (Figure 2A-C, line 346, see also our answer to your next comment). This is exactly what the theoretical framework of free-energy-minimizing attractor networks predicts.

      (2) The presentation of results is not clear enough, including figures, wording, and statistical analysis, which contributes to the overall difficulty in understanding the manuscript. This lack of clarity in presenting key findings can obscure the insights that the study aims to convey, making it challenging for readers to fully grasp the implications and significance of the research.

      We have thoroughly revised the manuscript for clarity in wording, figures (see e.g. lines 257, 482, 529 in the Results and lines 1128, 1266, 1300, 1367 in the Methods). We carefully improved statistical reporting and ensured that we always report test statistics, effect sizes and clearly refer to the null modelling approach used (e.g. lines 461, 542, 550, 565, 573, 619, as well as Figures 2-4). As absolute effect sizes, in many analyses, do not have a straightforward interpretation, we provided Glass’ Δ as a standardized effect size measure, expressing the distance of the true observation from the null distribution as a ratio of the null standard deviation. To further improve clarity, we now clearly define our research questions and the corresponding analyses and null models in the revised manuscript, both in the main text and in two new tables (Tables 1 and 2). We denoted the research questions and null models with Q1-7 and NM1-5, respectively, and refer to them at multiple instances when detailing the analyses and the results.
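      For readers unfamiliar with this effect size, the computation reduces to standardizing the observed statistic against the null distribution. A minimal sketch with synthetic values (illustrative only, not data from the study):

```python
import numpy as np

def glass_delta(observed, null_samples):
    """Glass' delta: distance of the observed statistic from the null
    distribution, expressed in units of the null standard deviation."""
    null = np.asarray(null_samples, dtype=float)
    return float((observed - null.mean()) / null.std(ddof=1))

# Stand-in for a null-model distribution of the test statistic:
rng = np.random.default_rng(0)
null = rng.normal(loc=0.0, scale=2.0, size=10_000)

delta = glass_delta(5.0, null)   # roughly (5 - 0) / 2 = 2.5
```

Standardizing against the null standard deviation, rather than a pooled one, is what makes the measure comparable across analyses whose raw statistics live on different scales.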

      Reviewer #2 (Public Review):

      Summary:

      Englert et al. use a novel modelling approach called functional connectome-based Hopfield Neural Networks (fcHNN) to describe spontaneous and task-evoked brain activity and the alterations in brain disorders. Given its novelty, the authors first validate the model parameters (the temperature and noise) with empirical resting-state functional data and against null models. Through the optimisation of the temperature parameter, they first show that the optimal number of attractor states is four before fixing the optimal noise that best reflects the empirical data, through stochastic relaxation. Then, they demonstrate how these fcHNN-generated dynamics predict task-based functional activity relating to pain and self-regulation. To do so, they characterise the different brain states (here as different conditions of the experimental pain paradigm) in terms of the distribution of the data on the fcHNN projections and flow analysis. Lastly, a similar analysis was performed on a population with autism. Through Hopfield modelling, this work proposes a comprehensive framework that links various types of functional activity under a unified interpretation with high predictive validity.
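      For orientation, the "stochastic relaxation" referred to here can be sketched in a few lines: a continuous-state Hopfield update with an inverse-temperature parameter and optional additive Gaussian noise. This is a generic illustration with synthetic Hebbian weights, not the authors' fcHNN implementation, in which the weights are taken from the empirical functional connectome:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64

# Two stored activity patterns (stand-ins for large-scale activation maps).
patterns = rng.choice([-1.0, 1.0], size=(2, n))

# Hebbian weight matrix; in the fcHNN/fcANN, weights come instead from
# the empirical functional connectome.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def relax(state, W, beta=4.0, noise=0.0, steps=100):
    """Smooth Hopfield updates with inverse temperature `beta` and
    optional additive Gaussian noise (stochastic relaxation)."""
    s = state.copy()
    for _ in range(steps):
        s = np.tanh(beta * (W @ s) + noise * rng.normal(size=s.shape))
    return s

# A noisy cue relaxes toward the nearest attractor (stored pattern):
cue = patterns[0] + 0.5 * rng.normal(size=n)
attractor = relax(cue, W, beta=4.0, noise=0.0)
match = np.mean(np.sign(attractor) == np.sign(patterns[0]))
```

With noise = 0 the dynamics settle into an attractor state; a nonzero noise term yields the kind of noisy sampling of the attractor landscape that the authors use to mimic resting-state dynamics.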

      Strengths:

      The phenomenological nature of the Hopfield model and its validation across multiple datasets presents a comprehensive and intuitive framework for the analysis of functional activity. The results presented in this work further motivate the study of phenomenological models as an adequate mechanistic characterisation of large-scale brain activity.

      Following up on Cole et al. 2016, the authors put forward a hypothesis that many of the changes to the brain activity, here, in terms of task-evoked and clinical data, can be inferred from the resting-state brain data alone. This brings together neatly the idea of different facets of brain activity emerging from a common space of functional (ghost) attractors.

      The use of the null models motivates the benefit of non-linear dynamics in the context of phenomenological models when assessing the similarity to the real empirical data.

      We thank the reviewer for recognizing the comprehensive and intuitive nature of our framework and for acknowledging the strength of our hypothesis that diverse brain activity facets emerge from a common resting state attractor landscape.

      Weaknesses:

      While the use of the Hopfield model is neat and very well presented, it still begs the question of why to use the functional connectome (as derived by activity flow analysis from Cole et al. 2016). Deriving the functional connectome on the resting-state data that are then used for the analysis reads as circular.

      We agree that starting from functional couplings to study dynamics is in stark contrast with the common practice of estimating the interregional couplings based on structural connectome data. We now explicitly discuss how this affects the scope of the questions we can address with the approach, with explicit notes on the inability of this approach to study the structure-function coupling and its limitations in deriving mechanistic insights at the level of biophysical implementation.

      Line 894:

“The proposed approach is not without limitations. First, the proposed approach does not incorporate information about anatomical connectivity and does not explicitly model biophysical details. Thus, in its present form, the model is not suitable to study structure-function coupling and cannot yield mechanistic explanations of (altered) polysynaptic connections at the level of biophysical detail.”

We are confident, however, that our approach is not circular. At a high level, our approach can be considered a function-to-function generative model, with twofold aims.

First, we link large-scale brain dynamics to theoretical artificial neural network models and show that the functional connectome displays characteristics that render it an exceptionally “well-behaving” attractor network (e.g. superior convergence properties, as contrasted against appropriate null models). In the revised manuscript, we have significantly improved upon this aspect by explicitly linking the fcANN model to the theoretical framework of self-orthogonalizing attractor networks (Spisak & Friston, 2025) (see the revised Introduction and the new section “Theoretical background”, starting at line 128, as well as Mathematical Appendices 1-2 in the Supplementary Material for a comprehensive formal derivation). As part of these efforts, we now provide evidence that the brain’s functional organization approximates a special, computationally efficient class of attractor networks, the so-called Kanter-Sompolinsky projector network (Figure 2A-C, line 346; see also our answer to your next comment). This is exactly what the theoretical framework of free-energy-minimizing attractor networks predicts. This result is not circular, as the empirical model does not use the key mechanism (the Hebbian/anti-Hebbian learning rule) that induces self-orthogonalization in the theoretical framework. We clarify this in the revised manuscript, e.g. in line 736.

Second, we benchmark the ability of the proposed function-to-function generative model to predict unseen data (new datasets) or data characteristics that are not directly encompassed in the connectivity matrix (e.g. non-Gaussian conditional dependencies, temporal autocorrelation, dynamical responses to perturbations of the system). These benchmarks are constructed against well-defined null models, which provide reasonable references. We have now significantly improved the discussion of these null models in the revised manuscript (Tables 1 and 2, line 257). We not only show that our model - when reconstructing resting-state dynamics - can generalize to unseen data above and beyond what is possible with baseline descriptive measures (e.g. covariance measures and PCA), but also demonstrate the ability of the framework to reconstruct the effects of perturbations on these dynamics (such as task-evoked changes), based solely on resting-state data from another sample.

      If the fcHNN derives the basins of four attractors that reflect the first two principal components of functional connectivity, it perhaps suffices to use the empirically derived components alone and project the task and clinical data on it without the need for the fcHNN framework.

We are thankful to the reviewer for highlighting this important point, which encouraged us to develop a detailed understanding of the origins of the close alignment between attractors and principal components (eigenvectors of the coupling matrix) and the corresponding (approximate) orthogonality. Here, we would like to emphasize that the attractor-eigenvector correspondence is by no means a general feature of any arbitrary attractor network. In fact, such networks are a very special class of attractor neural networks (the so-called Kanter-Sompolinsky projector neural network (Kanter & Sompolinsky, 1987)), with a high degree of computational efficiency, maximal memory capacity and perfect memory recall. It has been rigorously shown that in such networks, the eigenvectors of the coupling matrix (i.e. PCA on the timeseries data) and the attractors become equivalent (Kanter & Sompolinsky, 1987). This in turn led us to ask what learning and plasticity rules drive attractor networks towards developing approximately orthogonal attractors. We found that this is a general tendency of networks obeying the free energy principle (Figure 2A-C, line 346; see also our answer to your next comment). The formal derivation of this framework is now presented in an accompanying theoretical piece (Spisak & Friston, 2025). In the revised manuscript, we provide a short, high-level overview of these results (in the Introduction from line 55 and in the new section “Theoretical background”, line 128, as well as in Mathematical Appendices 1-2 in the Supplementary Material for a comprehensive formal derivation). According to this new theoretical model, attractor states can be understood as a set of priors (in the Bayesian sense) that together constitute an optimal orthogonal basis, equipping the update process (which is akin to Markov-chain Monte Carlo sampling) to find posteriors that generalize effectively within the spanned subspace.
Thus, in sum, understanding brain function in terms of attractor dynamics - instead of PCA-like descriptive projections - provides important links towards a Bayesian interpretation of brain activity. At the same time, the eigenvector-attractor correspondence also explains why descriptive decomposition approaches like PCA or ICA are so effective at capturing the dynamics of the system in the first place.
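To make the eigenvector-attractor correspondence concrete, the following numpy sketch builds a toy Kanter-Sompolinsky projector network from a handful of random patterns (the sizes and patterns are illustrative, not the paper's data): every stored pattern is an exact fixed point of the sign dynamics, and the coupling matrix has unit eigenvalues exactly on the subspace the patterns span, so its leading eigenvectors and its attractors coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 122, 4                             # illustrative sizes: 122 regions, 4 attractors
X = rng.choice([-1.0, 1.0], size=(n, p))  # columns = hypothetical attractor patterns

# Kanter-Sompolinsky projector rule: W is the orthogonal projection
# onto the subspace spanned by the stored patterns.
W = X @ np.linalg.pinv(X)

# Perfect recall: every stored pattern is an exact fixed point of the sign dynamics.
recalled = np.sign(W @ X)

# Eigenvector-attractor correspondence: W has eigenvalue 1 on the pattern
# subspace (p times) and 0 elsewhere, so its leading eigenvectors span the
# same space as the attractors.
evals = np.linalg.eigvalsh(W)
n_unit = int(np.sum(np.isclose(evals, 1.0)))
```

This is only a demonstration of the projector-network property itself; the paper's claim is that the empirically fitted fcANN approximates this regime without being constructed this way.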

As presented here, the Hopfield model is excellent in its simplicity and power, and it seems suited to tackle the structure-function relationship, with the potential to go further and explain task-evoked and clinical data. The work could be strengthened if that were taken into consideration. As such, the model would not suffer from circularity problems, and it would be possible to claim its mechanistic properties. Furthermore, as mentioned above, in the current setup, the connectivity matrix is based on statistical properties of functional activity amongst regions, and as such it is difficult to talk about a certain mechanism. This contention has, for example, been addressed in the Cole et al. 2016 paper with the use of a biophysical model linking structure and function, thus strengthening the mechanistic claim of the work.

We agree that investigating how the structural connectome constrains macro-scale dynamics is a crucial next step. Linking our results with the theoretical framework of self-orthogonalizing attractor networks provides a principled approach to this, as the “self-orthogonalizing” learning rule in the accompanying theoretical work provides the opportunity to fit attractor networks with structural constraints to functional data, shedding light on the plastic processes that maintain the observed approximate orthogonality even in the presence of these structural constraints. We have revised the manuscript to clarify that our phenomenological approach is inherently limited in its ability to answer mechanistic questions at the level of biophysical details (line 894) and discuss this promising direction as follows:

      Lines 803:

“A promising application of this is to consider structural brain connectivity (as measured by diffusion MRI) as a sparsity constraint for the coupling weights and then train the fcANN model to match the observed resting-state brain dynamics. If the resulting structural-functional ANN model is able to closely match the observed functional brain substate dynamics, it can be used as a novel approach to quantify and understand structure-function coupling in the brain.”

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (1) The statistical analyses are poorly described throughout the manuscript. The authors should provide more details on the statistical methods used for each comparison, as well as the corresponding statistics and degrees of freedom, rather than solely reporting p-values.

We thank the reviewer for pointing this out. We have revised the manuscript to include the specific test statistics, precise p-values and raw effect sizes for all reported analyses to ensure full transparency and replicability, see e.g. lines 461, 542, 550, 565, 573, 619, as well as Figures 2-4. Additionally, as absolute effect sizes do not have a straightforward interpretation in many analyses, we provide Glass’ Δ as a standardized effect size measure, expressing the distance of the true observation from the null distribution as a ratio of the null standard deviation.
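For readers unfamiliar with Glass’ Δ as used here, the computation is a simple standardization against the null distribution; a minimal sketch with made-up numbers (not values from the paper):

```python
import numpy as np

def glass_delta(observed, null_samples):
    """Distance of the true observation from the null distribution,
    expressed as a ratio of the null standard deviation."""
    null_samples = np.asarray(null_samples, dtype=float)
    return (observed - null_samples.mean()) / null_samples.std(ddof=1)

# Illustrative numbers only:
null = np.array([0.9, 1.0, 1.1, 1.0, 0.95, 1.05])  # null-model test statistics
delta = glass_delta(1.5, null)                      # observed statistic vs. null
```

A large Δ means the observed statistic lies many null standard deviations away from the null mean, which is more informative than a bare p-value when the null distribution is simulated.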

      We have also improved the description of the statistical methods used in the manuscript (lines 1270, 1306, 1339, 1367, 1404) and added two overview tables (Tables 1 and 2) that summarize the methodological approaches and the corresponding null models.

Furthermore, we have fully revised the analysis corresponding to noise optimization. We retained only null model 2 (covariance-matched Gaussian) in the main text and in Figure 3, and moved null model 1 (spatial phase randomization) to the Supplementary Material (Supplementary Figure 6), as it is less appropriate for this analysis (trivially significant in all cases). Furthermore, as the test statistic, we now use the Wasserstein distance between the 122-dimensional empirical and simulated data (instead of focusing on the 2-dimensional projection). This analysis now directly quantifies the capacity of the fcANN model to capture non-Gaussian conditionals in the data.
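Computing an exact Wasserstein distance in 122 dimensions is expensive, and a common Monte-Carlo approximation is the sliced Wasserstein distance, which averages 1-D Wasserstein distances over random projections. The sketch below is illustrative only and is not necessarily the estimator used in the manuscript:

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, seed=0):
    """Monte-Carlo sliced 2-Wasserstein distance between two point clouds
    x, y of shape (n_samples, n_dims) with equal sample counts."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    dirs = rng.normal(size=(n_proj, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random unit directions
    # Project both clouds onto each direction; in 1-D, the optimal transport
    # plan simply matches sorted values.
    px, py = x @ dirs.T, y @ dirs.T                       # (n_samples, n_proj)
    px.sort(axis=0)
    py.sort(axis=0)
    return np.sqrt(np.mean((px - py) ** 2))

rng = np.random.default_rng(1)
a = rng.normal(size=(500, 122))            # stand-in "empirical" samples
b = rng.normal(size=(500, 122))            # same distribution
c = rng.normal(loc=2.0, size=(500, 122))   # shifted distribution
```

Here `sliced_wasserstein(a, b)` should be small (sampling noise only), while `sliced_wasserstein(a, c)` picks up the distributional shift.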

      (2) The convergence procedure is not clearly explained in the manuscript. Is this an optimization procedure to minimize energy? If so, the authors should provide more details about the optimizer used.

We apologize for the lack of clarity. The convergence is not an optimization procedure per se, in the sense that it does not involve any external optimizer. It is simply the repeated (deterministic) application of the same update rule known from Hopfield networks and Boltzmann machines. However, as detailed in the accompanying theoretical paper, this update rule (or inference rule) inherently solves an optimization problem: it performs gradient descent on the free energy landscape of the network. As such, it is guaranteed to converge to a local free energy minimum in the deterministic case. We have clarified this process in the Results and Methods sections as follows:

      Line 161:

      “Inference arises from minimizing free energy with respect to the states \sigma. For a single unit, this yields a local update rule homologous to the relaxation dynamics in Hopfield networks”.

      Line 181:

      “In the basis framework (Spisak & Friston, 2025), inference is a gradient descent on the variational free energy landscape with respect to the states σ and can be interpreted as a form of approximate Bayesian inference, where the expected value of the state σ<sub>i</sub> is interpreted as the posterior mean given the attractor states currently encoded in the network (serving as a macro-scale prior) and the previous state, including external inputs (serving as likelihood in the Bayesian sense)”.

      Line 1252:

“As the inference rule was derived as a gradient descent on free energy, iterations monotonically decrease the free energy function and therefore converge to a local free‑energy minimum without any external optimizer. Thus, convergence does not require an optimization procedure; instead, it arises as the fixed point of repeated local inference updates, which implement gradient descent on free energy in the deterministic symmetric case.”
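The relaxation described in the quoted passage can be illustrated with a toy continuous Hopfield network (the Hebbian couplings, sizes, and inverse temperature below are stand-ins, not the fcANN or a real connectome): repeatedly applying the same deterministic local update rule, with no external optimizer, drives a noisy initial state into an attractor near a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 64, 2
X = rng.choice([-1.0, 1.0], size=(n, p))   # toy "attractor" patterns
W = X @ X.T / n                            # illustrative Hebbian couplings
np.fill_diagonal(W, 0.0)
beta = 5.0                                 # illustrative inverse temperature

# Deterministic relaxation: repeated application of the same update rule.
s = X[:, 0] + 0.8 * rng.normal(size=n)     # noisy initial state near pattern 0
for _ in range(50):
    s_new = np.tanh(beta * W @ s)          # local update, no external optimizer
    if np.allclose(s_new, s, atol=1e-9):   # fixed point reached
        break
    s = s_new
```

After relaxation, the state overlaps almost perfectly with the stored pattern it started near, which is the sense in which convergence "just happens" under repeated inference updates.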

      (3) In Figure 2G, the beta values range from 0.035 to 0.06, but they are reported as 0.4 in the main text and the Supplementary Figure. Please clarify this discrepancy.

We are grateful to the reviewer for spotting this typo. The correct value for β is 0.04, as reported in the Methods section. We have corrected this inconsistency in the revised manuscript, as well as in Supplementary Figure 5.

      (4) Line 174: What type of null model was used to evaluate the impact of the beta values? The authors did not provide details on this anywhere in the manuscript.

We apologize for this omission. The null model is based on permuting the connectome weights while retaining the matrix symmetry, which destroys the specific topological structure but preserves the overall weight distribution. We have now clarified this at multiple places in the revised manuscript (line 432, Tables 1 and 2, Figure 2) and added two overview tables (Tables 1 and 2) that summarize the methodological approaches and the corresponding null models.
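One way to implement such a symmetry-preserving permutation null is to shuffle only the upper-triangular weights and mirror them below the diagonal; this is a minimal sketch of the general idea, and the paper's exact permutation scheme may differ in detail:

```python
import numpy as np

def permute_symmetric(W, rng):
    """Null model: shuffle off-diagonal weights while keeping the matrix
    symmetric, preserving the weight distribution but not the topology."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    vals = W[iu].copy()
    rng.shuffle(vals)                  # permute the unique off-diagonal weights
    W_null = np.zeros_like(W)
    W_null[iu] = vals
    W_null = W_null + W_null.T         # mirror to enforce symmetry
    np.fill_diagonal(W_null, np.diag(W))
    return W_null

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
A = (A + A.T) / 2                      # toy symmetric "connectome"
A_null = permute_symmetric(A, rng)
```

The null matrix has exactly the same multiset of off-diagonal weights and the same diagonal as the original, but the weights are reassigned to random edges.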

      (5) Figure 3B: It appears that the authors only demonstrate the reproducibility of the “internal” attractor across different datasets. What about other states?

      Thank you for noticing this. We now visualize all attractor states in Figure 3B (note that these essentially consist of two symmetric pairs).

      (6) Figure 3: What does “empirical” represent in Figure 3? Is it PCA? If the “empirical” method, which is a much simpler method, can achieve results similar to those of the fcHNN in terms of state occupancy, distribution, and activity flow, what are the benefits of the proposed method? Furthermore, the authors claim that the explanatory power of the fcHNN is higher than that of the empirical model and shows significant differences. However, from my perspective, this difference is not substantial (37.0% vs. 39.9%). What does this signify, particularly in comparison to PCA?

This is a crucial point that is now a central theme of our revised manuscript. The reviewer is correct that the “empirical” method is PCA. PCA - by identifying variance-heavy orthogonal directions - aims to explain the highest amount of variance possible in the data (under the assumption of Gaussian conditionals). While the empirical attractors are closely aligned with the PCs (i.e. eigenvectors of the inverse covariance matrix, as shown in the new analysis Q1), the alignment is only approximate. We take advantage of this small “gap” to quantify whether attractor states are a better fit to the unseen data than the PCs. Obviously, due to the otherwise strong PC-attractor correspondence, this is expected to be only a small improvement. However, it is an important piece of evidence for the validity of our framework, as it shows that attractors are not just a complementary, perhaps “noisier” variety of the PCs, but a “substrate” that generalizes better to unseen data than the PCs themselves. We have revised the manuscript to clarify this point (line 528).

      Reviewer #2 (Recommendations For The Authors):

For clarity, it might be useful to define and use certain key terms consistently. “Connectome” often refers to structural (anatomical) connectivity; unless defined specifically, this should be considered (in the Figure 1B title, for example). “Brain state” often refers to different conditions, i.e. autism, neurotypical, sleep, etc. (see for review Kringelbach et al. 2020, Cell Reports). When referring to attractors of brain activity, they might be called substates.

      We thank the reviewer for these helpful suggestions. We have carefully revised the manuscript to ensure our terminology is precise and consistent. We now explicitly refer to the “functional connectome” (including the title) and avoid using the too general term “brain state” and use “substates” instead.

In Figure 2, some terms are not defined. Noise is sigma in the text but epsilon in the figure; the link only becomes clear in the Methods. Perhaps define epsilon in the caption for clarity. The same applies to μ in the Methods: it is only described earlier in the Methods, so I suggest repeating the definition for clarity.

      We appreciate this feedback and apologize for the inconsistency. We have revised all figures and the Methods section to ensure that all mathematical symbols (including ε, σ, and μ) are clearly and consistently defined upon their first appearance and in all figure captions. For instance, noise level is now consistently referred to as ϵ. We improved the consistency and clarity for other terms, too, including:

functional connectome-based Hopfield network (fcHNN) => functional connectivity-based attractor network (fcANN);

      temperature => inverse temperature;

      And improved grammar and language throughout the manuscript.

      References

      Kanter, I., & Sompolinsky, H. (1987). Associative recall of memory without errors. Physical Review A, 35(1), 380–392. 10.1103/physreva.35.380

Spisak, T., & Friston, K. (2025). Self-orthogonalizing attractor neural networks emerging from the free energy principle. arXiv preprint arXiv:2505.22749.

1. Like many people I have been reading a lot less over the past ~5y, but since I made a Goodreads account earlier this year, I’ve read tens of books. Reading in public has helped to motivate me. You may say reading in public is performative. I say reading in private is solipsistic. Dante, in De Monarchia, writes:

      All men on whom the Higher Nature has stamped the love of truth should especially concern themselves in laboring for posterity, in order that future generations may be enriched by their efforts, as they themselves were made rich by the efforts of generations past. For that man who is imbued with public teachings, but cares not to contribute something to the public good, is far in arrears of his duty, let him be assured; he is, indeed, not “a tree planted by the rivers of water that bringeth forth his fruit in his season,” [Psalms 1:3] but rather a destructive whirlpool, always engulfing, and never giving back what it has devoured.

      My default mode is solipsism. I read in private, build in private, learn in private. And the problem with that is self-doubt and arbitrariness. I’m halfway through a textbook and think: why? Why am I learning geology? Why this topic, and not another? There is never an a priori reason. I take notes, but why tweak the LaTeX if no-one, probably not even future me, will read them? If I stop reading this book, what changes? And doing things in public makes them both more real and (potentially) useful. If you publish your study notes, they might be useful to someone. Maybe they get slurped up in the training set of the next LLM, marginally improving performance.

      The LLM leaves a bitter taste but...

1. At the time she published these essays, she was chief of the reference service at the Bibliotheque Nationale in Paris. She had already been heavily involved in the development of the documentation profession, including being one of the founders and leaders of the Union Francaise des Organismes de Documentation. However, only three years after publishing Qu’est-ce que la documentation?, Briet took early retirement

Briet published Qu'est-ce que la documentation? at the height of her professional life, working at the national library as head of the reference service, and three years before her early retirement.

    1. Reviewer #1 (Public review):

      Summary:

      Zhou and colleagues developed a computational model of replay that heavily builds on cognitive models of memory in context (e.g., the context-maintenance and retrieval model), which have been successfully used to explain memory phenomena in the past. Their model produces results that mirror previous empirical findings in rodents and offers a new computational framework for thinking about replay.

      Strengths:

      The model is compelling and seems to explain a number of findings from the rodent literature. It is commendable that the authors implement commonly used algorithms from wakefulness to model sleep/rest, thereby linking wake and sleep phenomena in a parsimonious way. Additionally, the manuscript's comprehensive perspective on replay, bridging humans and non-human animals, enhanced its theoretical contribution.

      Weaknesses:

      This reviewer is not a computational neuroscientist by training, so some comments may stem from misunderstandings. I hope the authors would see those instances as opportunities to clarify their findings for broader audiences.

      (1) The model predicts that temporally close items will be co-reactivated, yet evidence from humans suggests that temporal context doesn't guide sleep benefits (instead, semantic connections seem to be of more importance; Liu and Ranganath 2021, Schechtman et al 2023). Could these findings be reconciled with the model or is this a limitation of the current framework?

      (2) During replay, the model is set so that the next reactivated item is sampled without replacement (i.e., the model cannot get "stuck" on a single item). I'm not sure what the biological backing behind this is and why the brain can't reactivate the same item consistently. Furthermore, I'm afraid that such a rule may artificially generate sequential reactivation of items regardless of wake training. Could the authors explain this better or show that this isn't the case?

      (3) If I understand correctly, there are two ways in which novelty (i.e., less exposure) is accounted for in the model. The first and more talked about is the suppression mechanism (lines 639-646). The second is a change in learning rates (lines 593-595). It's unclear to me why both procedures are needed, how they differ, and whether these are two different mechanisms that the model implements. Also, since the authors controlled the extent to which each item was experienced during wakefulness, it's not entirely clear to me which of the simulations manipulated novelty on an individual item level, as described in lines 593-595 (if any).

      As to the first mechanism - experience-based suppression - I find it challenging to think of a biological mechanism that would achieve this and is selectively activated immediately before sleep (somehow anticipating its onset). In fact, the prominent synaptic homeostasis hypothesis suggests that such suppression, at least on a synaptic level, is exactly what sleep itself does (i.e., prune or weaken synapses that were enhanced due to learning during the day). This begs the question of whether certain sleep stages (or ultradian cycles) may be involved in pruning, whereas others leverage its results for reactivation (e.g., a sequential hypothesis; Rasch & Born, 2013). That could be a compelling synthesis of this literature. Regardless of whether the authors agree, I believe that this point is a major caveat to the current model. It is addressed in the discussion, but perhaps it would be beneficial to explicitly state to what extent the results rely on the assumption of a pre-sleep suppression mechanism.

      (4) As the manuscript mentions, the only difference between sleep and wake in the model is the initial conditions (a0). This is an obvious simplification, especially given the last author's recent models discussing the very different roles of REM vs NREM. Could the authors suggest how different sleep stages may relate to the model or how it could be developed to interact with other successful models such as the ones the last author has developed (e.g., C-HORSE)? Finally, I wonder how the model would explain findings (including the authors') showing a preference for reactivation of weaker memories. The literature seems to suggest that it isn't just a matter of novelty or exposure, but encoding strength. Can the model explain this? Or would it require additional assumptions or some mechanism for selective endogenous reactivation during sleep and rest?

      (5) Lines 186-200 - Perhaps I'm misunderstanding, but wouldn't it be trivial that an external cue at the end-item of Figure 7a would result in backward replay, simply because there is no potential for forward replay for sequences starting at the last item (there simply aren't any subsequent items)? The opposite is true, of course, for the first-item replay, which can't go backward. More generally, my understanding of the literature on forward vs backward replay is that neither is linked to the rodent's location. Both commonly happen at a resting station that is further away from the track. It seems as though the model's result may not hold if replay occurs away from the track (i.e. if a0 would be equal for both pre- and post-run).

      (6) The manuscript describes a study by Bendor & Wilson (2012) and tightly mimics their results. However, notably, that study did not find triggered replay immediately following sound presentation, but rather a general bias toward reactivation of the cued sequence over longer stretches of time. In other words, it seems that the model's results don't fully mirror the empirical results. One idea that came to mind is that perhaps it is the R/L context - not the first R/L item - that is cued in this study. This is in line with other TMR studies showing what may be seen as contextual reactivation. If the authors think that such a simulation may better mirror the empirical results, I encourage them to try. If not, however, this limitation should be discussed.

      (7) There is some discussion about replay's benefit to memory. One point of interest could be whether this benefit changes between wake and sleep. Relatedly, it would be interesting to see whether the proportion of forward replay, backward replay, or both correlated with memory benefits. I encourage the authors to extend the section on the function of replay and explore these questions.

      (8) Replay has been mostly studied in rodents, with few exceptions, whereas CMR and similar models have mostly been used in humans. Although replay is considered a good model of episodic memory, it is still limited due to limited findings of sequential replay in humans and its reliance on very structured and inherently autocorrelated items (i.e., place fields). I'm wondering if the authors could speak to the implications of those limitations on the generalizability of their model. Relatedly, I wonder if the model could or does lead to generalization to some extent in a way that would align with the complementary learning systems framework.

    2. Reviewer #3 (Public review):

In this manuscript, Zhou et al. present a computational model of memory replay. Their model (CMR-replay) draws from temporal context models of human memory (e.g., TCM, CMR) and claims replay may be another instance of a context-guided memory process. During awake learning, CMR-replay (like its predecessors) encodes items alongside a drifting mental context that maintains a recency-weighted history of recently encoded contexts/items. In this way, the presently encoded item becomes associated with other recently learned items via their shared context representation - giving rise to typical effects in recall such as primacy, recency and contiguity. Unlike its predecessors, CMR-replay has built-in replay periods. These replay periods are designed to approximate sleep or wakeful quiescence, in which an item is spontaneously reactivated, causing a subsequent cascade of item-context reactivations that further update the model's item-context associations.

      Using this model of replay, Zhou et al. were able to reproduce a variety of empirical findings in the replay literature: e.g., greater forward replay at the beginning of a track and more backwards replay at the end; more replay for rewarded events; the occurrence of remote replay; reduced replay for repeated items, etc. Furthermore, the model diverges considerably (in implementation and predictions) from other prominent models of replay that, instead, emphasize replay as a way of predicting value from a reinforcement learning framing (i.e., EVB, expected value backup).

      Overall, I found the manuscript clear and easy to follow, despite not being a computational modeller myself. (Which is pretty commendable, I'd say). The model also was effective at capturing several important empirical results from the replay literature while relying on a concise set of mechanisms - which will have implications for subsequent theory building in the field.

      The authors addressed my concerns with respect to adding methodological detail. I am satisfied with the changes.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Zhou and colleagues developed a computational model of replay that heavily builds on cognitive models of memory in context (e.g., the context-maintenance and retrieval model), which have been successfully used to explain memory phenomena in the past. Their model produces results that mirror previous empirical findings in rodents and offers a new computational framework for thinking about replay.

      Strengths:

      The model is compelling and seems to explain a number of findings from the rodent literature. It is commendable that the authors implement commonly used algorithms from wakefulness to model sleep/rest, thereby linking wake and sleep phenomena in a parsimonious way. Additionally, the manuscript's comprehensive perspective on replay, bridging humans and non-human animals, enhanced its theoretical contribution.

      Weaknesses:

      This reviewer is not a computational neuroscientist by training, so some comments may stem from misunderstandings. I hope the authors would see those instances as opportunities to clarify their findings for broader audiences.

      (1) The model predicts that temporally close items will be co-reactivated, yet evidence from humans suggests that temporal context doesn't guide sleep benefits (instead, semantic connections seem to be of more importance; Liu and Ranganath 2021, Schechtman et al 2023). Could these findings be reconciled with the model or is this a limitation of the current framework?

      We appreciate the encouragement to discuss this connection. Our framework can accommodate semantic associations as determinants of sleep-dependent consolidation, which can in principle outweigh temporal associations. Indeed, prior models in this lineage have extensively simulated how semantic associations support encoding and retrieval alongside temporal associations. It would therefore be straightforward to extend our model to simulate how semantic associations guide sleep benefits, and to compare their contribution against that conferred by temporal associations across different experimental paradigms. In the revised manuscript, we have added a discussion of how our framework may simulate the role of semantic associations in sleep-dependent consolidation.

      “Several recent studies have argued for dominance of semantic associations over temporal associations in the process of human sleep-dependent consolidation (Schechtman et al., 2023; Liu and Ranganath 2021; Sherman et al., 2025), with one study observing no role at all for temporal associations (Schechtman et al., 2023). At first glance, these findings appear in tension with our model, where temporal associations drive offline consolidation. Indeed, prior models have accounted for these findings by suppressing temporal context during sleep (Liu and Ranganath 2024; Sherman et al., 2025). However, earlier models in the CMR lineage have successfully captured the joint contributions of semantic and temporal associations to encoding and retrieval (Polyn et al., 2009), and these processes could extend naturally to offline replay. In a paradigm where semantic associations are especially salient during awake learning, the model could weight these associations more and account for greater co-reactivation and sleep-dependent memory benefits for semantically related than temporally related items. Consistent with this idea, Schechtman et al. (2023) speculated that their null temporal effects likely reflected the task’s emphasis on semantic associations. When temporal associations are more salient and task-relevant, sleep-related benefits for temporally contiguous items are more likely to emerge (e.g., Drosopoulos et al., 2007; King et al., 2017).”

      The reviewer’s comment points to fruitful directions for future work that could employ our framework to dissect the relative contributions of semantic and temporal associations to memory consolidation.

      (2) During replay, the model is set so that the next reactivated item is sampled without replacement (i.e., the model cannot get "stuck" on a single item). I'm not sure what the biological backing behind this is and why the brain can't reactivate the same item consistently.

      Furthermore, I'm afraid that such a rule may artificially generate sequential reactivation of items regardless of wake training. Could the authors explain this better or show that this isn't the case?

      We appreciate the opportunity to clarify this aspect of the model. We first note that this mechanism has long been a fundamental component of this class of models (Howard & Kahana, 2002). Many classic memory models (Brown et al., 2000; Burgess & Hitch, 1991; Lewandowsky & Murdock, 1989) incorporate response suppression, in which activated items are temporarily inhibited. The simplest implementation, which we use here, removes activated items from the pool of candidate items. Alternative implementations achieve this through transient inhibition, often conceptualized as neuronal fatigue (Burgess & Hitch, 1991; Grossberg, 1978). Our model adopts a similar perspective, interpreting this mechanism as mimicking a brief refractory period that renders reactivated neurons unlikely to fire again within a short physiological event such as a sharp-wave ripple. Importantly, this approach does not generate spurious sequences. Instead, the model’s ability to preserve the structure of wake experience during replay depends entirely on the learned associations between items (without these associations, item order would be random). Similar assumptions are common in models of replay. For example, reinforcement learning models of replay incorporate mechanisms such as inhibition to prevent repeated reactivations (e.g., Diekmann & Cheng, 2023) or prioritize reactivation based on ranking to limit items to a single replay (e.g., Mattar & Daw, 2018). We now discuss these points in the section titled “A context model of memory replay”:

      “This mechanism of sampling without replacement, akin to response suppression in established context memory models (Howard & Kahana 2002), could be implemented by neuronal fatigue or refractory dynamics (Burgess & Hitch, 1991; Grossberg 1978). Non-repetition during reactivation is also a common assumption in replay models that regulate reactivation through inhibition or prioritization (Diekmann & Cheng 2023; Mattar & Daw 2018; Singh et al., 2022).”
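
      To make the non-repetition mechanism concrete, here is a minimal sketch of softmax-based reactivation with response suppression. This is not the manuscript's actual implementation; the weight matrix, cue, and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def replay_step(W, context, visited, tau=1.0):
    """One reactivation: softmax over context-cued item activations,
    excluding items already reactivated (response suppression)."""
    act = W @ context
    act[list(visited)] = -np.inf        # suppressed items get probability 0
    p = np.exp(act / tau)
    p /= p.sum()
    return int(rng.choice(len(act), p=p))

# Toy chain of 4 items: without suppression, the strongest item could
# dominate every step; with it, each item is reactivated exactly once.
W = np.eye(4) + 0.5 * np.eye(4, k=1)    # hypothetical item-context weights
context = np.array([1.0, 0.2, 0.0, 0.0])
visited, order = set(), []
for _ in range(4):
    i = replay_step(W, context, visited)
    visited.add(i)
    order.append(i)
    context = W[i]                       # retrieved context cues the next item
```

      Because suppression only zeroes the sampling probability of already-reactivated items, any sequential structure in the resulting order still comes entirely from the learned associations in W.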

      (3) If I understand correctly, there are two ways in which novelty (i.e., less exposure) is accounted for in the model. The first and more talked about is the suppression mechanism (lines 639-646). The second is a change in learning rates (lines 593-595). It's unclear to me why both procedures are needed, how they differ, and whether these are two different mechanisms that the model implements. Also, since the authors controlled the extent to which each item was experienced during wakefulness, it's not entirely clear to me which of the simulations manipulated novelty on an individual item level, as described in lines 593-595 (if any).

      We agree that these mechanisms and their relationship would benefit from clarification. As noted, novelty influences learning through two distinct mechanisms. First, the suppression mechanism is essential for capturing the inverse relationship between the amount of wake experience and the frequency of replay, as observed in several studies. This mechanism ensures that items with high wake activity are less likely to dominate replay. Second, the decrease in learning rates with repetition is crucial for preserving the stochasticity of replay. Without this mechanism, weights would grow linearly with repetition, and the softmax choice rule would then exponentially amplify the probability of successive wake items being reactivated back-to-back, producing near-deterministic replay patterns that are inconsistent with experimental observations.

      We have revised the Methods section to explicitly distinguish these two mechanisms:

      “This experience-dependent suppression mechanism is distinct from the reduction of learning rates through repetition; it does not modulate the update of memory associations but exclusively governs which items are most likely to initiate replay.”

      We have also clarified our rationale for including a learning rate reduction mechanism:

      “The reduction in learning rates with repetition is important for maintaining a degree of stochasticity in the model’s replay during task repetition, since linearly increasing weights would, through the softmax choice rule, exponentially amplify differences in item reactivation probabilities, sharply reducing variability in replay.”

      Finally, we now specify exactly where the learning-rate reduction applied, namely in simulations where sequences are repeated across multiple sessions:

      “In this simulation, the learning rates progressively decrease across sessions, as described above.”
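
      A toy calculation illustrates why the learning-rate reduction matters. The numbers here are purely illustrative, not the model's fitted parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Two competing items; item 0's weight grows with each of 10 task repetitions.
reps = np.arange(1, 11)
w_fixed   = 1.0 * reps               # constant learning rate: linear weight growth
w_decayed = np.cumsum(1.0 / reps)    # decaying learning rate: diminishing updates

# Probability of reactivating item 0 over a fixed competitor after each session
p_fixed   = [softmax(np.array([w, 1.0]))[0] for w in w_fixed]
p_decayed = [softmax(np.array([w, 1.0]))[0] for w in w_decayed]
```

      With a constant learning rate, item 0's reactivation probability approaches 1 after a handful of repetitions (the softmax turns a linear weight difference into an exponential probability ratio); with decaying updates, meaningful variability remains.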

      As to the first mechanism - experience-based suppression - I find it challenging to think of a biological mechanism that would achieve this and is selectively activated immediately before sleep (somehow anticipating its onset). In fact, the prominent synaptic homeostasis hypothesis suggests that such suppression, at least on a synaptic level, is exactly what sleep itself does (i.e., prune or weaken synapses that were enhanced due to learning during the day). This begs the question of whether certain sleep stages (or ultradian cycles) may be involved in pruning, whereas others leverage its results for reactivation (e.g., a sequential hypothesis; Rasch & Born, 2013). That could be a compelling synthesis of this literature. Regardless of whether the authors agree, I believe that this point is a major caveat to the current model. It is addressed in the discussion, but perhaps it would be beneficial to explicitly state to what extent the results rely on the assumption of a pre-sleep suppression mechanism.

      We appreciate the reviewer raising this important point. Unlike the mechanism proposed by the synaptic homeostasis hypothesis, the suppression mechanism in our model does not suppress items based on synapse strength, nor does it modify synaptic weights. Instead, it determines the level of suppression for each item based on activity during awake experience. The brain could implement such a mechanism by tagging each item according to its activity level during wakefulness. During subsequent consolidation, the initial reactivation of an item during replay would reflect this tag, influencing how easily it can be reactivated.

      A related hypothesis has been proposed in recent work, suggesting that replay avoids recently active trajectories due to spike frequency adaptation in neurons (Mallory et al., 2024). Similarly, the suppression mechanism in our model is critical for explaining the observed negative relationship between the amount of recent wake experience and the degree of replay.

      We discuss the biological plausibility of this mechanism and its relationship with existing models in the Introduction. In the section titled “The influence of experience”, we have added the following:

      “Our model implements an activity‑dependent suppression mechanism that, at the onset of each offline replay event, assigns each item a selection probability inversely proportional to its activation during preceding wakefulness. The brain could implement this by tagging each memory trace in proportion to its recent activation; during consolidation, that tag would then regulate starting replay probability, making highly active items less likely to be reactivated. A recent paper found that replay avoids recently traversed trajectories through awake spike‑frequency adaptation (Mallory et al., 2025), which could implement this kind of mechanism. In our simulations, this suppression is essential for capturing the inverse relationship between replay frequency and prior experience. Note that, unlike the synaptic homeostasis hypothesis (Tononi & Cirelli 2006), which proposes that the brain globally downscales synaptic weights during sleep, this mechanism leaves synaptic weights unchanged and instead biases the selection process during replay.”
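
      A minimal sketch of the suppression mechanism described above; the inverse-proportionality form and the activity values are illustrative assumptions, not the manuscript's exact equations:

```python
import numpy as np

def initial_replay_probs(wake_activity, eps=1e-6):
    """Probability that each item initiates a replay event, set inversely
    proportional to its activation during preceding wakefulness."""
    inv = 1.0 / (np.asarray(wake_activity, dtype=float) + eps)
    return inv / inv.sum()

# Items with more wake experience are less likely to start a replay event;
# note that synaptic weights themselves are left untouched.
p = initial_replay_probs([4.0, 2.0, 1.0])
```

      This is what distinguishes the mechanism from synaptic downscaling: it biases only the selection of the item that starts a replay event, without modifying any learned association.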

      (4) As the manuscript mentions, the only difference between sleep and wake in the model is the initial conditions (a0). This is an obvious simplification, especially given the last author's recent models discussing the very different roles of REM vs NREM. Could the authors suggest how different sleep stages may relate to the model or how it could be developed to interact with other successful models such as the ones the last author has developed (e.g., C-HORSE)? 

      We appreciate the encouragement to comment on the roles of different sleep stages in the manuscript, especially since, as noted, the lab is very interested in this and has explored it in other work. We chose to focus on NREM in this work because the vast majority of electrophysiological studies of sleep replay have identified these events during NREM. In addition, our lab’s theory of the role of REM (Singh et al., 2022, PNAS) is that it is a time for the neocortex to replay remote memories, in complement to the more recent memories replayed during NREM. The experiments we simulate all involve recent memories. Indeed, our view is that part of the reason that there is so little data on REM replay may be that experimenters are almost always looking for traces of recent memories (for good practical and technical reasons).

      Regarding the simplicity of the distinction between simulated wake and sleep replay, we view it as an asset of the model that it can account for many of the different characteristics of awake and NREM replay with very simple assumptions about differences in the initial conditions. There are of course many other differences between the states that could be relevant to the impact of replay, but the current target empirical data did not necessitate us taking those into account. This allows us to argue that differences in initial conditions should play a substantial role in an account of the differences between wake and sleep replay.

      We have added discussion of these ideas and how they might be incorporated into future versions of the model in the Discussion section:

      “Our current simulations have focused on NREM, since the vast majority of electrophysiological studies of sleep replay have identified replay events in this stage. We have proposed in other work that replay during REM sleep may provide a complementary role to NREM sleep, allowing neocortical areas to reinstate remote, already-consolidated memories that need to be integrated with the memories that were recently encoded in the hippocampus and replayed during NREM (Singh et al., 2022). An extension of our model could undertake this kind of continual learning setup, where the student but not teacher network retains remote memories, and the driver of replay alternates between hippocampus (NREM) and cortex (REM) over the course of a night of simulated sleep. Other differences between stages of sleep and between sleep and wake states are likely to become important for a full account of how replay impacts memory. Our current model parsimoniously explains a range of differences between awake and sleep replay by assuming simple differences in initial conditions, but we expect many more characteristics of these states (e.g., neural activity levels, oscillatory profiles, neurotransmitter levels, etc.) will be useful to incorporate in the future.”

      Finally, I wonder how the model would explain findings (including the authors') showing a preference for reactivation of weaker memories. The literature seems to suggest that it isn't just a matter of novelty or exposure, but encoding strength. Can the model explain this? Or would it require additional assumptions or some mechanism for selective endogenous reactivation during sleep and rest?

      We appreciate the encouragement to discuss this, as we do think the model could explain findings showing a preference for reactivation of weaker memories, as in Schapiro et al. (2018). In our framework, memory strength is reflected in the magnitude of each memory’s associated synaptic weights, so that stronger memories yield higher retrieved-context activity during wake encoding than weaker ones. Because the model’s suppression mechanism reduces an item’s replay probability in proportion to its retrieved-context activity, items with larger weights (strong memories) are more heavily suppressed at the onset of replay, while those with smaller weights (weaker memories) receive less suppression. When items have matched reward exposure, this dynamic biases offline replay toward weaker memories, preferentially reactivating them.

      In the section titled “The influence of experience”, we updated a sentence to discuss this idea more explicitly: 

      “Such a suppression mechanism may be adaptive, allowing replay to benefit not only the most recently or strongly encoded items but also to provide opportunities for the consolidation of weaker or older memories, consistent with empirical evidence (e.g., Schapiro et al. 2018; Yu et al., 2024).”

      (5) Lines 186-200 - Perhaps I'm misunderstanding, but wouldn't it be trivial that an external cue at the end-item of Figure 7a would result in backward replay, simply because there is no potential for forward replay for sequences starting at the last item (there simply aren't any subsequent items)? The opposite is true, of course, for the first-item replay, which can't go backward. More generally, my understanding of the literature on forward vs backward replay is that neither is linked to the rodent's location. Both commonly happen at a resting station that is further away from the track. It seems as though the model's result may not hold if replay occurs away from the track (i.e. if a0 would be equal for both pre- and post-run).

      In studies where animals run back and forth on a linear track, replay events are decoded separately for left and right runs, identifying both forward and reverse sequences for each direction, for example using direction-specific place cell sequence templates. Accordingly, in our simulation of Ambrose et al. (2016), for example, we use two independent sequences, one for left runs and one for right runs (an approach taken in prior replay modeling work). Crucially, our model assumes a context reset between running episodes, preventing the final item of one traversal from acquiring contextual associations with the first item of the next. As a result, learning in the two sequences remains independent. When an external cue is presented at the track’s end, replay therefore predominantly unfolds in the backward direction, producing forward segments only occasionally, when the cue briefly reactivates an earlier sequence item before proceeding forward.

      We added a note to the section titled “The context-dependency of memory replay” to clarify this:

      “In our model, these patterns are identical to those in our simulation of Ambrose et al. (2016), which uses two independent sequences to mimic the two run directions. This is because the drifting context resets before each run sequence is encoded, with the pause between runs acting as an event boundary that prevents the final item of one traversal from associating with the first item of the next, thereby keeping learning in each direction independent.”
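
      The effect of the context reset can be sketched as follows. This is a deliberately simplified binding rule with hypothetical parameters, not the manuscript's exact update equations:

```python
import numpy as np

beta = 0.6
n = 4  # items 0-1 form the left run; items 2-3 form the right run

M = np.zeros((n, n))               # item-to-item associations via shared context
for run in ([0, 1], [2, 3]):
    context = np.zeros(n)          # context resets at the event boundary
    for i in run:
        M[i] += context            # bind item to the current context state
        context = (1 - beta) * context + beta * np.eye(n)[i]
```

      With the reset, the first item of the right run (item 2) acquires no association with the last item of the left run (item 1), while within-run associations (e.g., item 1 to item 0) are learned normally, keeping the two directions independent.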

      To our knowledge, no study has observed a similar asymmetry when animals are fully removed from the track, although both types of replay can be observed when animals are away from the track. For example, Gupta et al. (2010) demonstrated that when animals replay trajectories far from their current location, the ratio of forward vs. backward replay appears more balanced. We now highlight this result in the manuscript and explain how it aligns with the predictions of our model:

      “For example, in tasks where the goal is positioned in the middle of an arm rather than at its end, CMR-replay predicts a more balanced ratio of forward and reverse replay, whereas the EVB model still predicts a dominance of reverse replay due to backward gain propagation from the reward. This contrast aligns with empirical findings showing that when the goal is located in the middle of an arm, replay events are more evenly split between forward and reverse directions (Gupta et al., 2010), whereas placing the goal at the end of a track produces a stronger bias toward reverse replay (Diba & Buzsaki 2007).” 

      Although no studies, to our knowledge, have observed a context-dependent asymmetry between forward and backward replay when the animal is away from the track, our model does posit conditions under which it could. Specifically, it predicts that deliberation on a specific memory, such as during planning, could generate an internal context input that biases replay: actively recalling the first item of a sequence may favor forward replay, while thinking about the last item may promote backward replay, even when the individual is physically distant from the track.

      We now discuss this prediction in the section titled “The context-dependency of memory replay”:

      “Our model also predicts that deliberation on a specific memory, such as during planning, could serve to elicit an internal context cue that biases replay: actively recalling the first item of a sequence may favor forward replay, while thinking about the last item may promote backward replay, even when the individual is physically distant from the track. While not explored here, this mechanism presents a potential avenue for future modeling and empirical work.”

      (6) The manuscript describes a study by Bendor & Wilson (2012) and tightly mimics their results. However, notably, that study did not find triggered replay immediately following sound presentation, but rather a general bias toward reactivation of the cued sequence over longer stretches of time. In other words, it seems that the model's results don't fully mirror the empirical results. One idea that came to mind is that perhaps it is the R/L context - not the first R/L item - that is cued in this study. This is in line with other TMR studies showing what may be seen as contextual reactivation. If the authors think that such a simulation may better mirror the empirical results, I encourage them to try. If not, however, this limitation should be discussed.

      Although our model predicts that replay is triggered immediately by the sound cue, it also predicts a sustained bias toward the cued sequence. Replay in our model unfolds across the rest phase as multiple successive events, so the bias observed in our sleep simulations indeed reflects a prolonged preference for the cued sequence.

      We now discuss this issue, acknowledging the discrepancy:

      “Bendor and Wilson (2012) found that sound cues during sleep did not trigger immediate replay, but instead biased reactivation toward the cued sequence over an extended period of time. While the model does exhibit some replay triggered immediately by the cue, it also captures the sustained bias toward the cued sequence over an extended period.”

      Second, within this framework, context is modeled as a weighted average of the features associated with items. As a result, cueing the model with the first R/L item produces outcomes qualitatively similar to cueing it with a more extended R/L cue that incorporates features of additional items. This is because both approaches ultimately use context features unique to the two sides.
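
      A minimal sketch of this point, with hypothetical drift parameters and feature assignments (feature 0 stands in for a side-specific feature shared by all items on one side):

```python
import numpy as np

rho, beta = 0.8, 0.6   # hypothetical drift and input weights

def drift(context, item_features):
    """Context evolves as a recency-weighted average of item features."""
    c = rho * context + beta * item_features
    return c / np.linalg.norm(c)

# Feature 0 is shared by all 'right-run' items; features 1-2 are item-specific.
item_R1 = np.array([1.0, 1.0, 0.0])
item_R2 = np.array([1.0, 0.0, 1.0])

cue_first    = drift(np.zeros(3), item_R1)   # cue with the first R item only
cue_extended = drift(cue_first, item_R2)     # extended cue spanning R items
```

      Both cues load on the shared side-specific feature, so either one biases replay toward the same side's sequence.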

      (7) There is some discussion about replay's benefit to memory. One point of interest could be whether this benefit changes between wake and sleep. Relatedly, it would be interesting to see whether the proportion of forward replay, backward replay, or both correlated with memory benefits. I encourage the authors to extend the section on the function of replay and explore these questions.

      We thank the reviewer for this suggestion. Regarding differences in the contribution of wake and sleep to memory, our current simulations predict that compared to rest in the task environment, sleep is less biased toward initiating replay at specific items, leading to a more uniform benefit across all memories. Regarding the contributions of forward and backward replay, our model predicts that both strengthen bidirectional associations between items and contexts, benefiting memory in qualitatively similar ways. Furthermore, we suggest that the offline learning captured by our teacher-student simulations reflects consolidation processes that are specific to sleep.

      We have expanded the section titled The influence of experience to discuss these predictions of the model: 

      “The results outlined above arise from the model's assumption that replay strengthens bidirectional associations between items and contexts to benefit memory. This assumption leads to several predictions about differences across replay types. First, the model predicts that sleep yields different memory benefits compared to rest in the task environment: Sleep is less biased toward initiating replay at specific items, resulting in a more uniform benefit across all memories. Second, the model predicts that forward and backward replay contribute to memory in qualitatively similar ways but tend to benefit different memories. This divergence arises because forward and backward replay exhibit distinct item preferences, with backward replay being more likely to include rewarded items, thereby preferentially benefiting those memories.”

      We also updated the “The function of replay” section to include our teacher-student speculation:

      “We speculate that the offline learning observed in these simulations corresponds to consolidation processes that operate specifically during sleep, when hippocampal-neocortical dynamics are especially tightly coupled (Klinzing et al., 2019).”

      (8) Replay has been mostly studied in rodents, with few exceptions, whereas CMR and similar models have mostly been used in humans. Although replay is considered a good model of episodic memory, it is still limited due to limited findings of sequential replay in humans and its reliance on very structured and inherently autocorrelated items (i.e., place fields). I'm wondering if the authors could speak to the implications of those limitations on the generalizability of their model. Relatedly, I wonder if the model could or does lead to generalization to some extent in a way that would align with the complementary learning systems framework.

      We appreciate these insightful comments. Traditionally, replay studies have focused on spatial tasks with autocorrelated item representations (e.g., place fields). However, an increasing number of human studies have demonstrated sequential replay using stimuli with distinct, unrelated representations. Our model is designed to accommodate both scenarios. In our current simulations, we employ orthogonal item representations while leveraging a shared, temporally autocorrelated context to link successive items. We anticipate that incorporating autocorrelated item representations would further enhance sequence memory by increasing the similarity between successive contexts. Overall, we believe that the model generalizes across a broad range of experimental settings, regardless of the degree of autocorrelation between items. Moreover, the underlying framework has been successfully applied to explain sequential memory in both spatial domains, capturing place cell firing properties (e.g., Howard et al., 2004), and non-spatial domains, such as free recall experiments where items are arbitrarily related.

      In the section titled “A context model of memory replay”, we added this comment to address this point:

      “Its contiguity bias stems from its use of shared, temporally autocorrelated context to link successive items, despite the orthogonal nature of individual item representations. This bias would be even stronger if items had overlapping representations, as observed in place fields.”

      Since CMR-replay learns distributed context representations where overlap across context vectors captures associative structure, and replay helps strengthen that overlap, this could indeed be viewed as consonant with complementary learning systems integration processes. 
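
      As an illustration of how orthogonal items acquire a contiguity bias through the drifting context (the simplified drift rule and parameter values here are illustrative, not the model's fitted values):

```python
import numpy as np

rho, beta = 0.85, 0.53   # hypothetical context-drift parameters

n = 6
items = np.eye(n)        # fully orthogonal item representations
context = np.zeros(n)
encoded = []             # the context state each item was bound to
for i in range(n):
    context = rho * context + beta * items[i]
    context /= np.linalg.norm(context)
    encoded.append(context.copy())

# Contiguity bias: contexts of nearby items overlap more than distant ones,
# even though the items themselves share no features.
lag1 = float(encoded[2] @ encoded[3])
lag3 = float(encoded[2] @ encoded[5])
```

      Overlapping item representations (as with place fields) would only add to this context-mediated similarity, which is why the contiguity bias survives with orthogonal items.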

      Reviewer #2 (Public Review):

      This manuscript proposes a model of replay that focuses on the relation between an item and its context, without considering the value of the item. The model simulates awake learning, awake replay, and sleep replay, and demonstrates parallels between memory phenomena driven by encoding strength, replay of sequence learning, and activation of nearest neighbor to infer causality. There is some discussion of the importance of suppression/inhibition to reduce activation of only dominant memories to be replayed, potentially boosting memories that are weakly encoded. Very nice replications of several key replay findings including the effect of reward and remote replay, demonstrating the equally salient cue of context for offline memory consolidation.

      I have no suggestions for the main body of the study, including methods and simulations, as the work is comprehensive, transparent, and well-described. However, I would like to understand how the CMR-replay model fits with the current understanding of the importance of excitation vs inhibition, remembering vs forgetting, activation vs deactivation, strengthening vs elimination of synapses, and even NREM vs REM as Schapiro has modeled. There seems to be a strong association with the efforts of the model to instantiate a memory as well as how that reinstantiation changes across time. But that is not all there is to consolidation. The specific roles of different brain states and how they might change replay is also an important consideration.

      We are gratified that the reviewer appreciated the work, and we agree that the paper would benefit from comment on the connections to these other features of consolidation.

      Excitation vs. inhibition: CMR-replay does not model variations in the excitation-inhibition balance across brain states (as in other models, e.g., Chenkov et al., 2017), since it does not include inhibitory connections. However, we posit that the experience-dependent suppression mechanism in the model might, in the brain, involve inhibitory processes. Supporting this idea, studies have observed increased inhibition with task repetition (Berners-Lee et al., 2022). We hypothesize that such mechanisms may underlie the observed inverse relationship between task experience and replay frequency in many studies. We discuss this in the section titled “A context model of memory replay”:

      “The proposal that a suppression mechanism plays a role in replay aligns with models that regulate place cell reactivation via inhibition (Malerba et al., 2016) and with empirical observations of increased hippocampal inhibitory interneuron activity with experience (Berners-Lee et al., 2022). Our model assumes the presence of such inhibitory mechanisms but does not explicitly model them.”

      Remembering/forgetting, activation/deactivation, and strengthening/elimination of synapses: The model does not simulate synaptic weight reduction or pruning, so it does not forget memories through the weakening of associated weights. However, forgetting can occur when a memory is replayed less frequently than others, leading to reduced activation of that memory compared to its competitors during context-driven retrieval. In the Discussion section, we acknowledge that a biologically implausible aspect of our model is that it implements only synaptic strengthening: 

      “Aspects of the model, such as its lack of regulation of the cumulative positive weight changes that can accrue through repeated replay, are biologically implausible (as biological learning results in both increases and decreases in synaptic weights) and limit the ability to engage with certain forms of low level neural data (e.g., changes in spine density over sleep periods; de Vivo et al., 2017; Maret et al., 2011). It will be useful for future work to explore model variants with more elements of biological plausibility.”

      Different brain states and NREM vs REM: Reviewer 1 also raised this important issue (see above). We have added the following thoughts on differences between these states and the relationship to our prior work to the Discussion section:

      “Our current simulations have focused on NREM, since the vast majority of electrophysiological studies of sleep replay have identified replay events in this stage. We have proposed in other work that replay during REM sleep may provide a complementary role to NREM sleep, allowing neocortical areas to reinstate remote, already-consolidated memories that need to be integrated with the memories that were recently encoded in the hippocampus and replayed during NREM (Singh et al., 2022). An extension of our model could undertake this kind of continual learning setup, where the student but not teacher network retains remote memories, and the driver of replay alternates between hippocampus (NREM) and cortex (REM) over the course of a night of simulated sleep. Other differences between stages of sleep and between sleep and wake states are likely to become important for a full account of how replay impacts memory. Our current model parsimoniously explains a range of differences between awake and sleep replay by assuming simple differences in initial conditions, but we expect many more characteristics of these states (e.g., neural activity levels, oscillatory profiles, neurotransmitter levels, etc.) will be useful to incorporate in the future.”

      We hope these points clarify the model’s scope and its potential for future extensions.

      Do the authors suggest that these replay systems are more universal to offline processes beyond episodic memory? What about procedural memories and working memory?

      We thank the reviewer for raising this important question. We have clarified in the manuscript:

      “We focus on the model as a formulation of hippocampal replay, capturing how the hippocampus may replay past experiences through simple and interpretable mechanisms.”

      With respect to other forms of memory, we now note that:

      “This motor memory simulation using a model of hippocampal replay is consistent with evidence that hippocampal replay can contribute to consolidating memories that are not hippocampally dependent at encoding (Schapiro et al., 2019; Sawangjit et al., 2018). It is possible that replay in other, more domain-specific areas could also contribute (Eichenlaub et al., 2020).”

      Though this is not a biophysical model per se, can the authors speak to the neuromodulatory milieus that give rise to the different types of replay?

      Our work aligns with the perspective proposed by Hasselmo (1999), which suggests that waking and sleep states differ in the degree to which hippocampal activity is driven by external inputs. Specifically, high acetylcholine levels during waking bias activity to flow into the hippocampus, while low acetylcholine levels during sleep allow hippocampal activity to influence other brain regions. Consistent with this view, our model posits that wake replay is more biased toward items associated with the current resting location due to the presence of external input during waking states. In the Discussion section, we have added a comment on this point:

      “Our view aligns with the theory proposed by Hasselmo (1999), which suggests that the degree of hippocampal activity driven by external inputs differs between waking and sleep states: High acetylcholine levels during wakefulness bias activity into the hippocampus, while low acetylcholine levels during slow-wave sleep allow hippocampal activity to influence other brain regions.”

      Reviewer #3 (Public Review):

In this manuscript, Zhou et al. present a computational model of memory replay. Their model (CMR-replay) draws from temporal context models of human memory (e.g., TCM, CMR) and claims that replay may be another instance of a context-guided memory process. During awake learning, CMR-replay (like its predecessors) encodes items alongside a drifting mental context that maintains a recency-weighted history of recently encoded contexts/items. In this way, the presently encoded item becomes associated with other recently learned items via their shared context representation - giving rise to typical effects in recall such as primacy, recency, and contiguity. Unlike its predecessors, CMR-replay has built-in replay periods. These replay periods are designed to approximate sleep or wakeful quiescence, in which an item is spontaneously reactivated, causing a subsequent cascade of item-context reactivations that further update the model's item-context associations.

      Using this model of replay, Zhou et al. were able to reproduce a variety of empirical findings in the replay literature: e.g., greater forward replay at the beginning of a track and more backward replay at the end; more replay for rewarded events; the occurrence of remote replay; reduced replay for repeated items, etc. Furthermore, the model diverges considerably (in implementation and predictions) from other prominent models of replay that, instead, emphasize replay as a way of predicting value from a reinforcement learning framing (i.e., EVB, expected value backup).

      Overall, I found the manuscript clear and easy to follow, despite not being a computational modeller myself. (Which is pretty commendable, I'd say). The model also was effective at capturing several important empirical results from the replay literature while relying on a concise set of mechanisms - which will have implications for subsequent theory-building in the field.

      With respect to weaknesses, additional details for some of the methods and results would help the readers better evaluate the data presented here (e.g., explicitly defining how the various 'proportion of replay' DVs were calculated).

      For example, for many of the simulations, the y-axis scale differs from the empirical data despite using comparable units, like the proportion of replay events (e.g., Figures 1B and C). Presumably, this was done to emphasize the similarity between the empirical and model data. But, as a reader, I often found myself doing the mental manipulation myself anyway to better evaluate how the model compared to the empirical data. Please consider using comparable y-axis ranges across empirical and simulated data wherever possible.

We appreciate this point. As in many replay modeling studies, our primary goal is to provide a qualitative fit that captures the direction of effects in the empirical data, rather than engaging in detailed parameter fitting for a precise quantitative match. Still, we agree that where possible, it is useful to better match the axes. We have updated Figures 2B and 2C so that the y-axis scales are more directly comparable between the empirical and simulated data.

      In a similar vein to the above point, while the DVs in the simulations/empirical data made intuitive sense, I wasn't always sure precisely how they were calculated. Consider the "proportion of replay" in Figure 1A. In the Methods (perhaps under Task Simulations), it should specify exactly how this proportion was calculated (e.g., proportions of all replay events, both forwards and backwards, combining across all simulations from Pre- and Post-run rest periods). In many of the examples, the proportions seem to possibly sum to 1 (e.g., Figure 1A), but in other cases, this doesn't seem to be true (e.g., Figure 3A). More clarity here is critical to help readers evaluate these data. Furthermore, sometimes the labels themselves are not the most informative. For example, in Figure 1A, the y-axis is "Proportion of replay" and in 1C it is the "Proportion of events". I presumed those were the same thing - the proportion of replay events - but it would be best if the axis labels were consistent across figures in this manuscript when they reflect the same DV.

      We appreciate these useful suggestions. We have revised the Methods section to explain in detail how DVs are calculated for each simulation. The revisions clarify the differences between related measures, such as those shown in Figures 1A and 1C, so that readers can more easily see how the DVs are defined and interpreted in each case. 

      Reviewer #4/Reviewing Editor (Public Review):

      Summary:

      With their 'CMR-replay' model, Zhou et al. demonstrate that the use of spontaneous neural cascades in a context-maintenance and retrieval (CMR) model significantly expands the range of captured memory phenomena.

      Strengths:

      The proposed model compellingly outperforms its CMR predecessor and, thus, makes important strides towards understanding the empirical memory literature, as well as highlighting a cognitive function of replay.

      Weaknesses:

Competing accounts of replay are acknowledged but there are no formal comparisons and only CMR-replay predictions are visualized. Indeed, other than the CMR model, only one alternative account is given serious consideration: a variant of the 'Dyna-replay' architecture, originally developed in the machine learning literature (Sutton, 1990; Moore & Atkeson, 1993) and modified by Mattar et al. (2018) such that previously experienced event sequences get replayed based on their relevance to future gain. Mattar et al. acknowledged that a realistic Dyna-replay mechanism would require a learned representation of transitions between perceptual and motor events, i.e., a 'cognitive map'. While Zhou et al. note that the CMR-replay model might provide such a complementary mechanism, they emphasize that their account captures replay characteristics that Dyna-replay does not (though it is unclear to what extent the reverse is also true).

We thank the reviewer for these thoughtful comments and appreciate the opportunity to clarify our approach. Our goal in this work is to contrast two dominant perspectives in replay research: replay as a mechanism for learning reward predictions and replay as a process for memory consolidation. These models were chosen as representatives of their respective classes because they use simple and interpretable mechanisms that can simulate a wide range of replay phenomena, making them ideal for contrasting these two perspectives.

      Although we implemented CMR-replay as a straightforward example of the memory-focused view, we believe the proposed mechanisms could be extended to other architectures, such as recurrent neural networks, to produce similar results. We now discuss this possibility in the revised manuscript (see below). However, given our primary goal of providing a broad and qualitative contrast of these two broad perspectives, we decided not to undertake simulations with additional individual models for this paper.

      Regarding the Mattar & Daw model, it is true that a mechanistic implementation would require a mechanism that avoids precomputing priorities before replay. However, the "need" component of their model already incorporates learned expectations of transitions between actions and events. Thus, the model's limitations are not due to the absence of a cognitive map.

      In contrast, while CMR-replay also accumulates memory associations that reflect experienced transitions among events, it generates several qualitatively distinct predictions compared to the Mattar & Daw model. As we note in the manuscript, these distinctions make CMR-replay a contrasting rather than complementary perspective.

Another important consideration, however, is how CMR-replay compares to alternative mechanistic accounts of cognitive maps. For example, recurrent neural networks are adept at detecting spatial and temporal dependencies in sequential input; these networks are increasingly used to capture psychological and neuroscientific data (e.g., Zhang et al., 2020; Spoerer et al., 2020), including hippocampal replay specifically (Haga & Fukai, 2018). Another relevant framework is provided by Associative Learning Theory, in which bidirectional associations between static and transient stimulus elements are commonly used to explain contextual and cue-based phenomena, including associative retrieval of absent events (McLaren et al., 1989; Harris, 2006; Kokkola et al., 2019). Without proper integration with these modeling approaches, it is difficult to gauge the innovation and significance of CMR-replay, particularly since the model is applied post hoc to the relatively narrow domain of rodent maze navigation.

First, we would like to clarify that our principal aim in this work is to characterize the nature of replay, rather than to model cognitive maps per se. Accordingly, CMR-replay is not designed to simulate head-direction signals, perform path integration, or explain the spatial firing properties of neurons during navigation. Instead, it focuses squarely on sequential replay phenomena, simulating classic rodent maze reactivation studies and human sequence-learning tasks. These simulations span a broad array of replay experimental paradigms to ensure extensive coverage of the replay findings reported across the literature. As such, the contribution of this work is in explaining the mechanisms and functional roles of replay, and demonstrating that a model that employs simple and interpretable memory mechanisms not only explains replay phenomena traditionally interpreted through a value-based lens but also accounts for findings not addressed by other memory-focused models.

      As the reviewer notes, CMR-replay shares features with other memory-focused models. However, to our knowledge, none of these related approaches have yet captured the full suite of empirical replay phenomena, suggesting the combination of mechanisms employed in CMR-replay is essential for explaining these phenomena. In the Discussion section, we now discuss the similarities between CMR-replay and related memory models and the possibility of integrating these approaches:

“Our theory builds on a lineage of memory-focused models, demonstrating the power of this perspective in explaining phenomena that have often been attributed to the optimization of value-based predictions. In this work, we focus on CMR-replay, which exemplifies the memory-centric approach through a set of simple and interpretable mechanisms that we believe are broadly applicable across memory domains. Elements of CMR-replay share similarities with other models that adopt a memory-focused perspective. The model learns distributed context representations whose overlaps encode associations among items, echoing associative learning theories in which overlapping patterns capture stimulus similarity and learned associations (McLaren & Mackintosh, 2002). Context evolves through bidirectional interactions between items and their contextual representations, mirroring the dynamics found in recurrent neural networks (Haga & Fukai, 2018; Levenstein et al., 2024). However, these related approaches have not been shown to account for the present set of replay findings and lack mechanisms—such as reward-modulated encoding and experience-dependent suppression—that our simulations suggest are essential for capturing these phenomena. While not explored here, we believe these mechanisms could be integrated into architectures like recurrent neural networks (Levenstein et al., 2024) to support a broader range of replay dynamics.”

      Recommendations For The Authors

      Reviewer #1 (Recommendations For The Authors):

      (1) Lines 94-96: These lines may be better positioned earlier in the paragraph.

      We now introduce these lines earlier in the paragraph.

      (2) Line 103 - It's unclear to me what is meant by the statement that "the current context contains contexts associated with previous items". I understand why a slowly drifting context will coincide and therefore link with multiple items that progress rapidly in time, so multiple items will be linked to the same context and each item will be linked to multiple contexts. Is that the idea conveyed here or am I missing something? I'm similarly confused by line 129, which mentions that a context is updated by incorporating other items' contexts. How could a context contain other contexts?

      In the model, each item has an associated context that can be retrieved via Mfc. This is true even before learning, since Mfc is initialized as an identity matrix. During learning and replay, we have a drifting context c that is updated each time an item is presented. At each timestep, the model first retrieves the current item’s associated context cf by Mfc, and incorporates it into c. Equation #2 in the Methods section illustrates this procedure in detail. Because of this procedure, the drifting context c is a weighted sum of past items’ associated contexts. 

      We recognize that these descriptions can be confusing. We have updated the Results section to better distinguish the drifting context from items’ associated context. For example, we note that:

      “We represent the drifting context during learning and replay with c and an item's associated context with cf.”

      We have also updated our description of the context drift procedure to distinguish these two quantities: 

“During awake encoding of a sequence of items, for each item f, the model retrieves its associated context cf via Mfc. The drifting context c incorporates the item's associated context cf and downweights its representation of previous items' associated contexts (Figure 1c). Thus, the context layer maintains a recency-weighted sum of past and present items' associated contexts.”
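As a rough sketch of the drift step described above (illustrative only: the beta value, the identity initialization of Mfc before learning, and the exact normalization are toy assumptions, not the paper's implementation), the update can be written as:

```python
import numpy as np

def drift_context(c, c_f, beta=0.6):
    """One step of context drift: blend the drifting context c with the
    retrieved item context c_f while keeping unit length, as in
    retrieved-context models (rho is chosen so the result has norm 1)."""
    dot = float(c @ c_f)
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot
    c_new = rho * c + beta * c_f
    return c_new / np.linalg.norm(c_new)  # guard against numerical drift

# Toy run: three orthogonal item contexts (identity M_fc, as before learning).
n_items = 3
M_fc = np.eye(n_items)   # item -> context map; identity prior to learning
c = M_fc[0].copy()       # start in item 0's associated context
for f in (1, 2):
    c = drift_context(c, M_fc[f])
# c is now a recency-weighted blend of the items' associated contexts.
```

After the loop, the most recent item's context carries more weight in c than the one before it, matching the recency-weighted sum described in the quoted passage.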

      (3) Figure 1b and 1d - please clarify which axis in the association matrices represents the item and the context.

      We have added labels to show what the axes represent in Figure 1.

      (4) The terms "experience" and "item" are used interchangeably and it may be best to stick to one term.

      We now use the term “item” wherever we describe the model results. 

      (5) The manuscript describes Figure 6 ahead of earlier figures - the authors may want to reorder their figures to improve readability.

      We appreciate this suggestion. We decided to keep the current figure organization since it allows us to group results into different themes and avoid redundancy. 

      (6) Lines 662-664 are repeated with a different ending, this is likely an error.

      We have fixed this error.

      Reviewer #3 (Recommendations For The Authors):

      Below, I have outlined some additional points that came to mind in reviewing the manuscript - in no particular order.

      (1) Figure 1: I found the ordering of panels a bit confusing in this figure, as the reading direction changes a couple of times in going from A to F. Would perhaps putting panel C in the bottom left corner and then D at the top right, with E and F below (also on the right) work?

      We agree that this improves the figure. We have restructured the ordering of panels in this figure. 

(2) Simulation 1: When reading the intro/results for the first simulation (Figure 2a; Diba & Buzsáki, 2007; "When animals traverse a linear track...", page 6, line 186), it wasn't clear to me why pre-run rest would have any forward replay, particularly if pre-run implied that the animal had no experience with the track yet. But in the Methods this becomes clearer, as the model encodes the track eight times prior to the rest periods. Making this explicit in the text would make it easier to follow. Also, was there any reason why specifically eight sessions of awake learning, in particular, were used?

      We now make more explicit that the animals have experience with the track before pre-run rest recording:

      “Animals first acquire experience with a linear track by traversing it to collect a reward. Then, during the pre-run rest recording, forward replay predominates.”

We included eight sessions of awake learning to match the number of sessions in Shin et al. (2019), since this simulation attempts to explain data from that study. After each repetition, the model engages in rest. We have revised the Methods section to indicate the motivation for this choice: 

“In the simulation that examines context-dependent forward and backward replay through experience (Figs. 2a and 5a), CMR-replay encodes an input sequence shown in Fig. 7a, which simulates a linear track run with no ambiguity in the direction of inputs, over eight awake learning sessions (as in Shin et al., 2019).”

      (3) Frequency of remote replay events: In the simulation based on Gupta et al, how frequently overall does remote replay occur? In the main text, the authors mention the mean frequency with which shortcut replay occurs (i.e., the mean proportion of replay events that contain a shortcut sequence = 0.0046), which was helpful. But, it also made me wonder about the likelihood of remote replay events. I would imagine that remote replay events are infrequent as well - given that it is considerably more likely to replay sequences from the local track, given the recency-weighted mental context. Reporting the above mean proportion for remote and local replay events would be helpful context for the reader.

      In Figure 4c, we report the proportion of remote replay in the two experimental conditions of Gupta et al. that we simulate. 

      (4) Point of clarification re: backwards replay: Is backwards replay less likely to occur than forward replay overall because of the forward asymmetry associated with these models? For example, for a backwards replay event to occur, the context would need to drift backwards at least five times in a row, in spite of a higher probability of moving one step forward at each of those steps. Am I getting that right?

      The reviewer’s interpretation is correct: CMR-replay is more likely to produce forward than backward replay in sleep because of its forward asymmetry. We note that this forward asymmetry leads to high likelihood of forward replay in the section titled “The context-dependency of memory replay”: 

      “As with prior retrieved context models (Howard & Kahana 2002; Polyn et al., 2009), CMR-replay encodes stronger forward than backward associations. This asymmetry exists because, during the first encoding of a sequence, an item's associated context contributes only to its ensuing items' encoding contexts. Therefore, after encoding, bringing back an item's associated context is more likely to reactivate its ensuing than preceding items, leading to forward asymmetric replay (Fig. 6d left).”
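The asymmetry described in this passage can be checked with a toy calculation (a minimal sketch under stated assumptions: the beta value, the simplified drift rule, and the Hebbian outer-product binding are illustrative choices, not the paper's exact equations):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Encode items A=0, B=1, C=2 in order, binding each item to the drifting
# context present at study.
n, beta = 3, 0.6
M_fc = np.eye(n)             # item -> context (identity before learning)
M_cf = np.zeros((n, n))      # context -> item, learned below
c = np.zeros(n)
for f in range(n):
    c = normalize((1 - beta) * c + beta * M_fc[f])
    M_cf += np.outer(np.eye(n)[f], c)   # bind item f to its study context

# Cue with B's associated context: C (the item after B) is activated,
# but A is not, because B's context entered only C's study context.
a = M_cf @ M_fc[1]
forward, backward = a[2], a[0]
```

Here `forward` is positive while `backward` is zero, reproducing in miniature why reinstating an item's associated context preferentially reactivates the items that followed it.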

      (5) On terminating a replay period: "At any t, the replay period ends with a probability of 0.1 or if a task-irrelevant item is reactivated." (Figure 1 caption; see also pg 18, line 635). How was the 0.1 decided upon? Also, could you please add some detail as to what a 'task-irrelevant item' would be? From what I understood, the model only learns sequences that represent the points in a track - wouldn't all the points in the track be task-relevant?

      This value was arbitrarily chosen as a small value that allows probabilistic stopping. It was not motivated by prior modeling or a systematic search. We have added: “At each timestep, the replay period ends either with a stop probability of 0.1 or if a task-irrelevant item becomes reactivated. (The choice of the value 0.1 was arbitrary; future work could explore the implications of varying this parameter).” 

      In addition, we now explain in the paper that task irrelevant items “do not appear as inputs during awake encoding, but compete with task-relevant items for reactivation during replay, simulating the idea that other experiences likely compete with current experiences during periods of retrieval and reactivation.”
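To make the stopping rule concrete, here is a toy sketch of a replay period (hedged: the matrices, the beta value, the simplified drift step, and the four-item task chain are illustrative assumptions; the full model includes additional mechanisms such as experience-dependent suppression):

```python
import numpy as np

rng = np.random.default_rng(0)

def replay_period(M_cf, M_fc, c, task_items, p_stop=0.1, beta=0.6, max_steps=50):
    """Toy sketch of one replay period: items are reactivated from the
    drifting context until a probabilistic stop (p_stop = 0.1, as in the
    paper) or until a task-irrelevant item wins the competition."""
    sequence = []
    for _ in range(max_steps):
        if rng.random() < p_stop:               # probabilistic stopping
            break
        a = np.clip(M_cf @ c, 0.0, None)        # context-cued activations
        if a.sum() == 0.0:
            break
        f = int(rng.choice(len(a), p=a / a.sum()))  # sample an item
        if f not in task_items:                 # task-irrelevant item: stop
            break
        sequence.append(f)
        c = beta * M_fc[f] + (1 - beta) * c     # simplified drift step
        c = c / np.linalg.norm(c)
    return sequence

# Toy setup: a 4-item task chain plus one task-irrelevant item (index 4).
n = 5
M_fc = np.eye(n)
M_cf = np.eye(n) + 0.5 * np.eye(n, k=-1)        # weak forward associations
seq = replay_period(M_cf, M_fc, M_fc[0].copy(), task_items={0, 1, 2, 3})
```

The returned sequence contains only task-relevant items, since sampling the task-irrelevant item immediately terminates the replay period.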

      (6) Minor typos:

      Turn all instances of "nonlocal" into "non-local", or vice versa

      "For rest at the end of a run, cexternal is the context associated with the final item in the sequence. For rest at the end of a run, cexternal is the context associated with the start item." (pg 20, line 663) - I believe this is a typo and that the second sentence should begin with "For rest at the START of a run".

      We have updated the manuscript to correct these typos. 

      (7) Code availability: I may have missed it, but it doesn't seem like the code is currently available for these simulations. Including the commented code in a public repository (Github, OSF) would be very useful in this case.

      We now include a Github link to our simulation code: https://github.com/schapirolab/CMR-replay.

    1. I am making three predictions. What I would like to see in the next 5 years, what I expect to happen, and what I think the biggest risks are.

      Cameron Jones (Austr) 3 views on IndieWeb 1) wish 2) expectation 3) risk

    1. Author response:

      The following is the authors’ response to the previous reviews

      Public Reviews: 

      Reviewer #1 (Public review): 

      Summary: 

The manuscript by Raices et al. provides some novel insights into the roles of and interactions between SPO-11 accessory proteins in C. elegans. The authors propose a model of meiotic DSB regulation, critical to our understanding of DSB formation and ultimately crossover regulation and accurate chromosome segregation. The work also emphasizes the commonalities and species-specific aspects of DSB regulation. 

      Strengths: 

This study capitalizes on the strengths of the C. elegans system to uncover genetic interactions between SPO-11 accessory proteins. In combination with physical interactions, the authors synthesize their findings into a model, which will serve as the basis for future work to determine mechanisms of DSB regulation. 

      Weaknesses: 

      The methodology, although standard, still lacks some rigor, especially with the IPs. 

      Reviewer #2 (Public review): 

      Summary: 

Meiotic recombination initiates with the formation of DNA double-strand breaks (DSBs), catalyzed by the conserved topoisomerase-like enzyme Spo11. Spo11 requires accessory factors that are poorly conserved across eukaryotes. Previous genetic studies have identified several proteins required for DSB formation in C. elegans to varying degrees; however, how these proteins interact with each other to recruit the DSB-forming machinery to chromosome axes remains unclear. 

In this study, Raices et al. characterized the biochemical and genetic interactions among proteins that are known to promote DSB formation during C. elegans meiosis. The authors examined pairwise interactions using yeast two-hybrid (Y2H) and co-immunoprecipitation and revealed an interaction between the chromatin-associated protein HIM-17 and the transcription factor XND-1. They further confirmed the previously known interaction between DSB-1 and SPO-11 and showed that DSB-1 also interacts with the nematode-specific protein HIM-5, which is essential for DSB formation on the X chromosome. They also assessed genetic interactions among these proteins, categorizing them into four epistasis groups by comparing phenotypes in double vs. single mutants. Combining these results, the authors proposed a model of how these proteins interact with chromatin loops and are recruited to chromosome axes, offering insights into the process in C. elegans compared to other organisms. 

      Weaknesses: 

      This work relies heavily on Y2H, which is notorious for having high rates of false positives and false negatives. Although the interactions between HIM-17 and XND-1 and between DSB-1 and HIM-5 were validated by co-IP, the significance of these interactions was not tested in vivo. Cataloging Y2H and genetic interactions does not yield much more insight. The model proposed in Figure 4 is also highly speculative. 

      Reviewer #3 (Public review): 

The goal of this work is to understand the regulation of double-strand break formation during meiosis in C. elegans. The authors have analyzed physical and genetic interactions among a subset of factors that have been previously implicated in DSB formation or the number or timing of DSBs: CEP-1, DSB-1, DSB-2, DSB-3, HIM-5, HIM-17, MRE-11, REC-1, PARG-1, and XND-1. 

The 10 proteins that are analyzed here include a diverse set of factors with different functions, based on prior analyses in many published studies. The term "Spo11 accessory factors" has been used in the meiosis literature to describe proteins that directly promote Spo11 cleavage activity, rather than factors that are important for the expression of meiotic proteins or that influence the genome-wide distribution or timing of DSBs. Based on this definition, the known SPO-11 accessory factors in C. elegans include DSB-1, DSB-2, DSB-3, and the MRN complex (at least MRE-11 and RAD-50). These are all homologs of proteins that have been studied biochemically and structurally in other organisms. DSB-1 and DSB-2 are homologs of Rec114, while DSB-3 is a homolog of Mei4. Biochemical and structural studies have shown that Rec114 and Mei4 directly modulate Spo11 activity by recruiting Spo11 to chromatin and promoting its dimerization, which is essential for cleavage. The other factors analyzed in this study affect the timing, distribution, or number of RAD-51 foci, but they likely do so indirectly. As elaborated below, XND-1 and HIM-17 are transcription factors that modulate the expression of other meiotic genes, and their role in DSB formation is parsimoniously explained by this regulatory activity. The roles of HIM-5 and REC-1 remain unclear; the reported localization of HIM-5 to autosomes is consistent with a role in transcription (the autosomes are transcriptionally active in the germline, while the X chromosome is largely silent), but its loss-of-function phenotypes are much more limited than those of HIM-17 and XND-1, so it may play a more direct role in DSB formation. The roles of CEP-1 (a p53 homolog) and PARG-1 are also ambiguous, but their homologs in other organisms contribute to DNA repair rather than DSB formation. 

      We appreciate the reviewer’s clarification. However, the definition of Spo11 accessory factors varies across the literature. Only Keeney and colleagues define these as proteins that physically associate with and activate Spo11 to catalyze DSB formation (Keeney, Lange & Mohibullah, 2014; Lam & Keeney, 2015). In contrast, other authors have used the term more broadly to refer to proteins that promote or regulate Spo11-dependent DSB formation, without necessarily implying a direct interaction with Spo11 (e.g., Panizza et al., 2011; Robert et al., 2016; Stanzione et al., 2016; Li et al., 2021; Lange et al., 2016). Thus, our usage of the term follows this broader functional definition.

      An additional significant limitation of the study, as stated in my initial review, is that much of the analysis here relies on cytological visualization of RAD-51 foci as a proxy for DSBs. RAD-51 associates transiently with DSB sites as they undergo repair and is thus limited in its ability to reveal details about the timing or abundance of DSBs since its loading and removal involve additional steps that may be influenced by the factors being analyzed. 

We agree with the reviewer that counting RAD-51 foci provides only an indirect measure of SPO-11–dependent DSBs, as RAD-51 marks sites of repair rather than the breaks themselves. However, we would like to clarify that our current study does not rely on RAD-51 foci quantification for any of the analyses or conclusions presented. None of the figures or datasets in this manuscript are based on RAD-51 cytology. Instead, our conclusions are drawn from genetic interactions, biochemical assays, and protein–protein interaction analyses.

The paper focuses extensively on HIM-5, which was previously shown through genetic and cytological analysis to be important for breaks on the X chromosome. The revised manuscript still claims that "HIM-5 mediates interactions with the different accessory factors sub-groups, providing insights into how components on the DNA loops may interact with the chromosome axis." The weak interactions between HIM-5 and DSB-1/2 detected in the Y2H assay do not convincingly support such a role. The idea that HIM-5 directly promotes break formation is also inconsistent with genetic data showing that him-5 mutants lack breaks on the X chromosomes, while HIM-5 has been shown to be enriched on autosomes. Additionally, as noted in my comment to the authors, the localization data for HIM-5 shown in this paper are discordant with prior studies; this discrepancy should be addressed experimentally. 

      We appreciate the reviewer’s concerns regarding the interpretation of HIM-5 function.  The weak Y2H interactions between HIM-5 and DSB-1 are not interpreted as direct biochemical evidence of a strong physical interaction, but rather as a potential point of regulatory connection between these pathways. Importantly, these Y2H data are further supported by co-immunoprecipitation experiments, genetic interactions, and the observed mislocalization of HIM-5 in the absence of DSB-1. Together, these complementary results strengthen our conclusion that HIM-5 functionally associates with DSB-promoting complexes.

Regarding HIM-5 localization, the pattern we observe using both anti-GFP staining of the eaIs4 transgene (Phim-5::him-5::GFP) and anti-HA staining of the HIM-5::HA strain is consistent with that reported by McClendon et al. (2016), who validated the same eaIs4 transgene. Although the pattern differs slightly from Meneely et al. (2012), that study used a HIM-5 antibody that is no longer functional and has been discontinued by the commercial source. In that prior study, a weak signal was detected in the mitotic region and late pachytene, but a stronger signal was seen in early to mid-pachytene. Our imaging, optimized for low background and stable signal, similarly shows robust HIM-5 localization in early and mid-pachytene, supporting the reliability of our GFP- and HA-tagged analyses.

The recent analysis of DSB formation in C. elegans males (Engebrecht et al.; PLoS Genetics; PMID: 41124211) shows that in the absence of him-5 there is a significant reduction of CO designation (measured as COSA-1 foci) on autosomes. This study strongly supports a direct and general role for HIM-5 in crossover formation on both autosomes and the hermaphrodite X.

      This paper describes REC-1 and HIM-5 as paralogs, based on prior analysis in a paper that included some of the same authors (Chung et al., 2015; DOI 10.1101/gad.266056.115). In my initial review I mentioned that this earlier conclusion was likely incorrect and should not be propagated uncritically here. Since the authors have rebutted this comment rather than amending it, I feel it is important to explain my concerns about the conclusions of previous study. Chung et al. found a small region of potential homology between the C. elegans rec-1 and him-5 genes and also reported that him-5; rec-1 double mutants have more severe defects than either single mutant, indicative of a stronger reduction in DSBs. Based on these observations and an additional argument based on microsynteny, they concluded that these two genes arose through recent duplication and divergence. However, as they noted, genes resembling rec-1 are absent from all other Caenorhabditis species, even those most closely related to C. elegans. The hypothesis that two genes are paralogs that arose through duplication and divergence is thus based on their presence in a single species, in the absence of extensive homology or evidence for conserved molecular function. Further, the hypothesis that gene duplication and divergence has given rise to two paralogs that share no evident structural similarity or common interaction partners in the few million years since C. elegans diverged from its closest known relatives is implausible. In contrast, DSB-1 and DSB-2 are both homologs of Rec114 that clearly arose through duplication and divergence within the Caenorhabditis lineage, but much earlier than the proposed split between REC-1 and HIM-5. 
Two genes that can be unambiguously identified as dsb-1 and dsb-2 are present in genomes throughout the Elegans supergroup and absent in the Angaria supergroup, placing the duplication event at around 18-30 MYA, yet DSB-1 and DSB-2 share much greater similarity in their amino acid sequence, predicted structure, and function than HIM-5 and REC-1. Further, Raices et al. place HIM-5 and REC-1 in different functional complexes (Figure 3B). 

      We respectfully disagree with the reviewer’s characterization of the relationship between HIM-5 and REC-1. Our use of the term “paralog” follows the conclusions of Chung et al. (2015), a peer-reviewed study that provided both sequence and microsynteny evidence supporting this relationship. While we acknowledge that the degree of sequence conservation is limited, the evolutionary scenario proposed by Chung et al. remains the only published framework addressing this question. Further, the degree of homology between either HIM-5 or REC-1 and the ancestral locus is similar to that observed for DSB-1 and DSB-2 with REC-114 (Hinman et al., 2021). We therefore retain the use of the term “paralog” in reference to these genes. Importantly, our conclusions regarding their distinct molecular and functional roles are independent of this classification.

      The authors acknowledge that HIM-17 is a transcription factor that regulates many meiotic genes. Like HIM-17, XND-1 is cytologically enriched along the autosomes in germline nuclei, suggestive of a role in transcription. The Reinke lab performed ChIP-seq in a strain expressing an XND-1::GFP fusion protein and showed that it binds to promoter regions, many of which overlap with the HIM-17-regulated promoters characterized by the Ahringer lab (doi: 10.1126/sciadv.abo4082). Work from the Yanowitz lab has shown that XND-1 influences the transcription of many other genes involved in meiosis (doi: 10.1534/g3.116.035725) and work from the Colaiacovo lab has shown that XND-1 regulates the expression of CRA-1 (doi: 10.1371/journal.pgen.1005029). Additionally, loss of HIM-17 or XND-1 causes pleiotropic phenotypes, consistent with a broad role in gene regulation. Collectively, these data indicate that XND-1 and HIM-17 are transcription factors that are important for the proper expression of many germline-expressed genes. Thus, as stated above, the roles of HIM-17 and XND-1 in DSB formation, as well as their effects on histone modification, are parsimoniously explained by their regulation of the expression of factors that contribute more directly to DSB formation and chromatin modification. I feel strongly that transcription factors should not be described as "SPO-11 accessory factors." 

      The ChIP analysis of XND-1 binding sites (using the XND-1::GFP transgene we provided to the Reinke lab) was performed, and Table S3 in the Ahringer paper suggests it is found at germline promoters, although the analysis is not actually provided. We completely agree that at least a subset of XND-1 functions is explained by its regulation of transcriptional targets (as we previously showed for HIM-5). However, like the MES proteins, a subset of which are also autosomal and impact X chromosome gene expression, XND-1 could also be directly regulating chromatin architecture which could have profound effects on DSB formation.  As stated in our prior comments, precedent for the involvement of a chromatin factor in DSB formation is provided by yeast Spp1. 

      Recommendations for the authors: 

      Editor comments: 

      As you can see, the reviewers have additional comments, and the authors can include revisions to address those points prior to publicizing 'a version of record' (e.g. hatching rate assay mentioned by reviewer #1). This type of study, trying to catalog interactions of many factors, inevitably has loose ends, but in my opinion, it does not reduce the value of the study, as long as statements are not misleading. I suggest that the authors address issues by making changes to the main text. After the next round of adjustments by authors, I feel that it will be ready for a version of record, based on the spirit of the current eLife publication model. 

      Reviewer #1 (Recommendations for the authors): 

      I still have concerns about the HIM-17 IP and immunoblot probing with XND-1 antibodies. While the newly provided whole extract immunoblot clearly shows an XND-1-specific band that goes away in the mutant extracts, there are additional bands that are recognized; the pattern looks different from that in the input in Figure 1B. Additionally, there is still a band of the corresponding size in the IPs from extracts not containing the tagged allele of HIM-17, calling into question whether XND-1 is specifically pulled down. 

      The authors did not include the hatching rate as pointed out in the original reviews. In the rebuttal: 

      "Great question. I guess we need to do this while back out for review. If anyone has suggestions of what to say here. Clearly we overlooked this point but do have the strain." 

      We thank the reviewer for this suggestion. We had intended to include a hatching analysis; however, during the course of this work we discovered that our him-17 stock had acquired an additional linked mutation(s) that altered its phenotype and led to inconsistent results. This strain was used to rederive the him-17; eaIs4 double mutant after our original did not survive freeze/thaw. Given the abnormal behavior observed in this line, we concluded that proceeding with the hatching assays could yield unreliable data. We are currently reestablishing a verified him-17 strain, but in the interest of accuracy and reproducibility, we have restricted our analysis in this manuscript to validated datasets derived from confirmed strains.

      Reviewer #2 (Recommendations for the authors): 

      The authors have addressed most of the previous concerns and substantially improved the manuscript. The new data demonstrate that HIM-5 localization depends on DSB-1, and together with the Y2H and co-IP results, strengthen the link between HIM-5 and the DSB-forming machinery in C. elegans. The remaining points are outlined below: 

      Specific comments: 

      The font size of the text and labels in the figures is very small and hardly legible. Please enlarge them and make them clearly visible (Fig 1A, 1B, 2A, 2B, 2C, 2D, 2E, 3A, 3B, 3C, 3D, 3F).

      Done

      Although the authors have addressed the specificity of the XND-1 antibody, it remains unclear whether the boxed band is specific to the him-17::3xHA IP, since the same band appears in the control IP, albeit with lower intensity (Fig 1B). Is the ~100 kDa band in the him-17::3xHA IP a modified form of XND-1? While antibody specificity was previously demonstrated by IF using xnd-1 mutants, it would be ideal to confirm this on a western blot as well. 

      A western blot performed using whole-cell extracts and probed with the anti-XND-1 antibody has been provided in the revised version of the manuscript (Fig. S1A). This confirms that the antibody specifically recognizes the XND-1 protein. We believe that the ~100 kDa band mentioned by the reviewer is likely to be a non-specific cross-reacting band detected by the antibody, since an identical band of the same MW was also detected in xnd-1 null mutants (Fig. S1A).

      Regarding the IP negative controls, we are firmly convinced that the boxed band is specific, and the fact that a (very) low-intensity band is also found in the negative control should not undermine the validity of the specific HIM-17-XND-1 interaction. There is a constellation of similar examples across the literature, as it is widely acknowledged amongst biochemists that some proteins may “stick” to the beads due to their intrinsic biochemical properties despite the use of highly stringent IP buffers. However, the high level of enrichment detected in the IP (as also underlined by the reviewer) corroborates that XND-1 specifically immunoprecipitates with HIM-17 despite the presence of a low level of non-specific binding to the HA beads. If the interaction between XND-1 and HIM-17 were non-specific, we would logically have found the band in the IP and the band in the negative control to be of very similar intensity, which is clearly not the case. 

      Although co-IP assays are generally considered not strictly quantitative, we want to emphasize that a comparable amount of nuclear extract was employed in both samples, as also evidenced by the inputs, in which it is possible to see that, if anything, slightly less nuclear extract was employed in the him-17::3xHA; him-5::GFP::3xFLAG sample vs. the him-5::GFP::3xFLAG negative control, corroborating the above-mentioned points.

      Lastly, it is crucial to mention that mass spectrometry analyses performed on HIM-17::3xHA pulldowns show XND-1 as a highly enriched interacting protein (Blazickova et al., 2025, Nature Comms.), which strongly supports our co-IP results.

      The subheading "HIM-5 is the essential factor for meiotic breaks in the X chromosome" does not accurately represent the work described in the Results or in Figure 1. I disagree with the authors' response to the earlier criticism. The issue is not merely semantic. The data do not demonstrate that HIM-5 is required for DSB formation on the X chromosome - this conclusion can only be inferred. What Figure 1 shows is that XND-1 and HIM-17 interact, and that pie-1p-driven HIM-5 expression can partially rescue meiotic defects of him-17 mutants. This supports the conclusion that him-5 is a target of HIM-17/XND-1 in promoting CO formation on the X chromosome. However, the data provide no direct evidence for the claim stated in the subheading. I strongly encourage authors to revise the subheading to more accurately represent the findings presented in the paper. 

      After considering the reviewer’s comments, we have revised the subheading to more accurately describe our findings.

      In Fig1C, please fix the typo in the last row - "pie1p::him5-::GFP" to "pie-1p::him- 5::GFP".

      Done

      In Fig 2C, "p" is missing from the label on the right for Phim-5::him-5::GFP.

      Done

      In Fig 3I, bring the labels (DSB-1/2/3) at the lower right to the front.

      Done

      In Concluding Remarks, please fix the typo "frequently".

      Done

      Reviewer #3 (Recommendations for the authors): 

      The experiments that analyze HIM-5 in dsb-1 mutants should be repeated using antibodies against the endogenous HIM-5 protein, and localization of the HIM-5::HA and HIM-5::GFP proteins should be compared directly to antibody staining. This work uses an epitope-tagged protein and a GFP-tagged protein to analyze the localization of HIM-5, while prior work (Meneely et al., 2012) used an antibody against the endogenous protein. In Figures 2 and S4 of this paper, neither HIM-5::HA nor HIM-5::GFP appears to localize strongly to chromatin, and autosomal enrichment of HIM-5, as previously reported for the endogenous protein based on antibody staining, is not evident. Moreover, HIM-5::GFP and HIM-5::HA look different from each other, and neither resembles the low-resolution images shown in Figure 6 in Meneely et al. 2012, which showed nuclear staining throughout the germline, including in the mitotic zone, and also in somatic sheath cells. Given the differences in localization between the tagged transgenes and the endogenous protein, it is important to analyze the behavior of the endogenous, untagged protein. A minor issue: a wild-type control should also be shown for HIM-5::HA in Figure S4. 

      Wild type control added to figure S4

      Evidence that XND-1 and HIM-17 form a complex is weak; it is supported by the Y2H and co-IP data but opposed by the functional and localization analyses. The diversity of proteins found in the co-IP of HIM-17::GFP (Table S2) indicates that these interactions are unlikely to be specific. The independent localization of these proteins to chromatin is clear evidence that they do not form an obligate complex; additionally, they have been found to regulate distinct (although overlapping) sets of genes. The predicted structure generated by AlphaFold3 has very low confidence and should not be taken as evidence for an interaction. The newly added argument that the apparent lack of overlap between HIM-17 and XND-1 is due to the distance between the HA tag on HIM-17 and XND-1 is flawed and should be removed: the extended C-terminus in the AlphaFold3-predicted structure of HIM-17 has been interpreted as if it were a structured domain. Moreover, the predicted distance of 180 Å (18 nm) is comparable to the distance between a fluorophore on a secondary antibody and the epitope recognized by the primary antibody (~20-25 nm) and is far below the resolution limit of light microscopy. 

      We appreciate the reviewer’s thoughtful comment. The evidence supporting a physical interaction between XND-1 and HIM-17 comes not only from our co-IP experiments; it has also been recently shown in an independent study in which MS analyses were conducted on HIM-17::3xHA pull-downs to identify novel HIM-17 interactors (Blazickova et al., 2025, Nature Comms). As shown in the data provided in that study, under these independent experimental settings XND-1 was again identified as a highly enriched putative HIM-17 interactor. We do acknowledge that their chromatin localization patterns are distinct and that they regulate overlapping but not identical sets of genes; however, it is worth noting that protein-protein interactions in meiosis are often transient or context-dependent, and may not necessarily result in co-localization detectable by microscopy. In line with this, a similar situation was reported for BRA-2 and HIM-17 in the same work cited above, as they were shown to interact biochemically despite the absence of overlapping staining patterns. 

      Minor issues: 

      The images shown in Panel D in Figure 1 seem to have very different resolutions; the HTP3/HIM-17 colocalization image is particularly blurry/low-resolution and should be replaced. The contrast between blue and green cannot be seen clearly; colors with stronger contrast should be used, and grayscale images should also be shown for individual channels. High-resolution images should probably be included for all of the factors analyzed here to facilitate comparisons.

    1. Reviewer #1 (Public review):

      Summary:

      In their article, Guo and coworkers investigate the Ca²⁺ signaling responses induced by Enteropathogenic Escherichia coli (EPEC) in epithelial cells and how these responses regulate NF-κB activation. The authors show that EPEC induces rapid, spatially coordinated Ca²⁺ transients mediated by extracellular ATP released through the type III secretion system (T3SS). Using high-speed Ca²⁺ imaging and stochastic modeling, they propose that low ATP levels trigger "Coordinated Ca²⁺ Responses from IP₃R Clusters" (CCRICs) via fast Ca²⁺ diffusion and Ca²⁺-induced Ca²⁺ release. These responses may dampen TNF-α-induced NF-κB activation through Ca²⁺-dependent modulation of O-GlcNAcylation of p65. The interdisciplinary work suggests a new perspective on calcium-mediated immune response by combining quantitative imaging, bacterial genetics, and computational modeling.

      Strengths:

      The study provides a new concept for host responses to bacterial infections and introduces the concept of Coordinated Ca²⁺ Responses from IP₃R Clusters (CCRICs) as synchronized, whole-cell-scale Ca²⁺ transients with the fast kinetics typical of local events. This is elegantly done by an interdisciplinary approach using quantitative measurements and mechanistic modelling.

      Weaknesses:

      (1) The effect of coordination by fast diffusion for small eATP concentrations is explained by the resulting low Ca2+ concentration that is not as strongly affected by calcium buffers compared to higher concentrations. While I agree with this statement on the relative level, CICR is based on the resulting absolute concentration at neighboring IP3Rs (to activate them). Thus, I do not fully agree with the explanation, or at least would expect to use the modelling approach to demonstrate this effect. Simulations for different activation and buffer concentrations could strengthen this point and exclude potential inhibition of channels at higher stimulation levels.

      In this respect, I would also include the details of the modelling, such as implementation environment, parameters, and benchmarking. The description in the Supplementary Methods is very similar to the description in the main text. In terms of reproducibility, it would be important to at least provide simulation parameters, and providing the code would align with the emerging standards for reproducible science.

      (2) Quantitative characterization of CCRICs:

      The paper would benefit from a clearer definition of the term CCRICs and quantitative descriptors like duration, amplitude distribution, frequency, and spatial extent (also in relation to the comment on the EGTA measurements below). Furthermore, it remains unclear to me whether CCRICs represent a population of rapidly propagating micro-waves or truly simultaneous events. Kymographs or wave-front propagation analyses (at least from simulations, if the experimental resolution is insufficient) would strengthen this point.

      (3) Specificity of pharmacological tools:

      Suramin and U73122 are known to have off-target effects. Control experiments using alternative P2 receptor antagonists like PPADS or inactive U73343 analogs would strengthen the causal link.

    1. Ellis Island

      "Immigrants entered the United States through several ports. Those from Europe generally came through East Coast facilities, while those from Asia generally entered through West Coast centers. More than 70 percent of all immigrants, however, entered through New York City, which came to be known as the "Golden Door." Throughout the late 1800s, most immigrants arriving in New York entered at the Castle Garden depot near the tip of Manhattan. In 1892, the federal government opened a new immigration processing center on Ellis Island in New York harbor." https://www.loc.gov/classroom-materials/united-states-history-primary-source-timeline/rise-of-industrial-america-1876-1900/immigration-to-united-states-1851-1900/

      Ellis Island was chosen as the first federal facility in which immigrants were processed because of its strategic position: it was isolated, far from the mainland, and therefore considered fitting for carefully inspecting immigrants and preventing them from entering the country without being registered. The inspection process was not detached from class distinctions: only indigent (that is, poor) third-class passengers (also referred to as "steerage") were required to undergo the inspection process at Ellis Island. What was the criterion, then? Those who boarded the ship in first or second class were presumed to be wealthy people, less likely to "become a public charge in America due to medical or legal reasons". After a long trip, which entailed staying for days in unsanitary conditions and overly crowded spaces, poor passengers were subjected to a minimum of 3-5 hours of inspection in the Great Hall: their health condition was examined and their origins as well as destinations were investigated. https://klagenfurtmigrationstudies.home.blog/understanding-barriers-to-immigration-by-listening-to-ellis-island-oral-histories/ https://www.statueofliberty.org/ellis-island/overview-history/.

      Ellis Island is now the seat of a National Museum of Immigration, which can be visited in person (https://www.statueofliberty.org/visit/); otherwise, the official website offers numerous online resources if you are interested in the topic.

    2. The 1980s: Bruce Hornsby and the Range—The Way It Is

      Setting the scene: the song was released in July 1986 as a single from the band's debut album The Way It Is. It was a great success, and the band won the 1987 Grammy Award in the Best New Artist category. The success of the song has had a long-lasting effect on the music industry: it was sampled by other artists and included in songs such as 2Pac's Changes and Polo G's Wishing for a Hero. The singer has "never counted it" but he has read that his song "has now been recorded 17 times by hip-hop artists" (https://www.rollingstone.com/music/music-features/bruce-hornsby-interview-way-it-is-non-secure-connection-1036032/). In order to understand the following lyrics, it is necessary to place the song in its historical context. The 1980s were years in which several issues emerged: * The process of de-industrialization (that is, the process in which American companies moved their operations abroad, outside the country) deeply affected the job market: tens of thousands of workers lost their jobs. In particular, Black workers were the ones who suffered the most, since the majority of them were employed in various industrial fields. As a consequence, poverty spread: 30% of the Black workforce was jobless in 1982. * The conservative Reagan presidency (1981-1989) reduced federal (that is, governmental) economic support to people in need by 20%. The cut to financial measures, combined with the ongoing industrial crisis, was disastrous. * White supremacy movements and groups (such as the Ku Klux Klan) reignited and engaged in violent acts against African Americans, the firebombing of churches, and campaigns against affirmative action programs and integration in schools. "Millions of white Americans had become convinced that “too much” had been given to blacks". * Poverty, hunger and hopelessness paved the way to the abuse of drugs; crack was especially consumed by poor Americans as it was inexpensive and easily available. 
As a consequence of the combination of low employment, educational poverty and drug popularity, drug dealing became the source of income for young people, and violence increased significantly in Black neighborhoods.

      What was the government's response? Aggravated levels of violence and crime were met with the "War on Drugs", which entailed: 1. the elimination of parole (that is, the conditional release of a prisoner, often on the basis of good behavior in prison); 2. stricter penalties for drug sale and possession; 3. the building of a larger network of prisons.

      Needless to say, African Americans were the most heavily targeted. Mass incarceration as a system of control (see the "home" page of the website for more on the topic) began to take hold.

      https://www.amistadresource.org/the_future_in_the_present/social_and_economic_issues.html

    3. war in the Middle East

      The 1990s were a decade of unrest and, sadly, of great military violence. The main conflict that occurred in the Middle East was the Persian Gulf War (1990-1991), an international war fought between Iraq, Kuwait and the United States, which intervened when Kuwait was invaded by Iraqi forces. During the 1990s, other Middle East conflicts included: 1. the Iraqi Kurdish Civil War (1994-1997); 2. the Yemeni Civil War (1994); 3. Operation Desert Fox (1998), which consisted of the U.S. bombing of Iraq.

      https://www.britannica.com/event/Persian-Gulf-War#:~:text=The%20Persian%20Gulf%20War%2C%20also,Kuwait%20on%20August%202%2C%201990 https://www.bbc.co.uk/bitesize/guides/zhsssk7/revision/5

    1. if

      Hint

      The condition of this if should express when to print the '#'

      i == 0 is the condition for the '#' on the left edge; j == 0 is the condition for the '#' along the top

      In blanks 7 and 8, write the conditions for the '#' on the right edge and along the bottom
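
      As a sketch of the idea (assuming a w-by-h grid with i as the column and j as the row, following the hint's convention; `is_border` and `print_frame` are hypothetical helper names, not part of the assignment), the four border conditions can be combined like this:

```c
#include <stdio.h>

/* Returns 1 when position (i, j) lies on the frame of a w-by-h grid.
 * The last two conditions correspond to blanks 7 and 8 in the hint. */
int is_border(int i, int j, int w, int h)
{
    return i == 0          /* left edge           */
        || j == 0          /* top edge            */
        || i == w - 1      /* right edge (blank 7)  */
        || j == h - 1;     /* bottom edge (blank 8) */
}

/* Print the frame: '#' on the border, ' ' inside. */
void print_frame(int w, int h)
{
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++)
            putchar(is_border(i, j, w, h) ? '#' : ' ');
        putchar('\n');
    }
}
```

      Calling `print_frame(5, 3)` prints a 5-wide, 3-tall rectangle of '#' with a hollow interior.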

    1. if

      Hint

      This if statement holds the condition for leaving the while(1) infinite loop

      In this problem, the loop should be exited when -1 is entered
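
      A minimal sketch of this pattern (to keep it testable, input comes from an array rather than scanf; `sum_until_sentinel` is a hypothetical function name, not from the assignment):

```c
/* Sum values until the sentinel -1 appears, using a while(1) loop
 * that is left via an if + break, as in the hint. */
int sum_until_sentinel(const int *input, int len)
{
    int sum = 0, k = 0;
    while (1) {
        if (k >= len)      /* safety: ran out of simulated input */
            break;
        int n = input[k++];
        if (n == -1)       /* the if from the hint: exit on -1 */
            break;
        sum += n;
    }
    return sum;
}
```

      In the real exercise the value of n would come from scanf inside the loop; the if on -1 works the same way.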

    1. char a[3][20]; strcpy(a[0], "Nagasawa Masami");

      About arrays of strings

      To understand a two-dimensional char array, it helps to start by picturing it as a diagram

      How to store strings in a two-dimensional array

      Line 6, "char a[3][20]", prepares three arrays that each hold up to 20 characters; the initialization step then decides which characters go into each of the three arrays

      Here the string "Nagasawa Masami" is stored in the array "a[0]"

      See the lecture materials for the diagram

      Reference: lecture slides (session 10), page 25
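
      A small sketch of this pattern (the helper name `fill_names` and the strings in rows 1 and 2 are made up for illustration):

```c
#include <string.h>

/* char a[3][20] reserves three rows of 20 chars each;
 * strcpy fills a chosen row with a string, as in the hint. */
void fill_names(char a[3][20])
{
    strcpy(a[0], "Nagasawa Masami");  /* row 0 holds this name */
    strcpy(a[1], "row one");
    strcpy(a[2], "row two");
}
```

      Each row a[0], a[1], a[2] behaves like an ordinary char array of 20 elements, so any string of up to 19 characters (plus the terminating '\0') fits.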

    1. Reviewer #3 (Public review):

      Summary:

      Overall, this is a well-done study, and the conclusions are largely supported by the data, which will be of interest to the field.

      Strengths:

      Strengths of this study include experiments with solution NMR that can resolve high-resolution interactions of the highly flexible C-terminal tail of arr2 with clathrin and AP2. Although mainly confirmatory in defining the arr2 CBL ³⁷⁶LIELD³⁸⁰ as the clathrin binding site, the use of the NMR is of high interest (Fig. 1). The ¹⁵N-labeled CLTC-NTD experiment with arr2 titrations reveals a span from 39-108 that mediates an arr2 interaction, which corroborates previous crystal data, but does not reveal a second area in CLTC-NTD that in previous crystal structures was observed to interact with arr2.

      SEC and NMR data suggest that full-length arr2 (1-418) binding to the β2-adaptin subunit of AP2 is enhanced in the presence of CCR5 phospho-peptides (Fig. 3). The pp6 peptide shows the highest degree of arr2 activation and β2-adaptin binding, compared to less phosphorylated or non-phosphorylated peptides. It is interesting that the arr2 interaction with CLTC-NTD and pp6 cannot be detected using the SEC approach, further suggesting that clathrin binding is not dependent on arrestin activation. Overall, the data suggest that receptor activation promotes arrestin binding to AP2, not clathrin, suggesting that the AP2 interaction is necessary for CCR5 endocytosis.

      To validate the solid biophysical data, the authors pursue validation experiments in a HeLa cell model by confocal microscopy. This requires transient transfection of tagged receptor (CCR5-Flag) and arr2 (arr2-YFP). CCR5 displays a "class B"-like behavior in that arr2 is rapidly recruited to the receptor at the plasma membrane upon agonist activation, which forms a stable complex that internalizes onto endosomes (Fig. 4). The data suggest that complex internalization is dependent on AP2 binding not clathrin (Fig. 5).

      The addition of the antagonist experiment/data adds rigor to the study.

      Overall, this is a solid study that will be of interest to the field.

    2. Author response:

      The following is the authors’ response to the previous reviews

      Public Reviews: 

      Reviewer #1 (Public review): 

      Petrovic et al. investigate CCR5 endocytosis via arrestin 2, with a particular focus on clathrin and AP2 contributions. The study is thorough and methodologically diverse. The NMR titration data clearly demonstrate chemical shift changes at the canonical clathrin-binding site (LIELD), present in both the 2S and 2L arrestin splice variants. 

      To assess the effect of arrestin activation on clathrin binding, the authors compare: truncated arrestin (1-393), full-length arrestin, and 1-393 incubated with CCR5 phosphopeptides. All three bind clathrin comparably, whereas controls show no binding. These findings are consistent with prior crystal structures showing peptide-like binding of the LIELD motif, with disordered flanking regions. The manuscript also evaluates a non-canonical clathrin binding site specific to the 2L splice variant. Though this region has been shown to enhance beta2-adrenergic receptor binding, it appears not to affect CCR5 internalization. 

      Similar analyses applied to AP2 show a different result. AP2 binding is activation-dependent and influenced by the presence and level of phosphorylation of CCR5-derived phosphopeptides. These findings are reinforced by cellular internalization assays. 

In sum, the results highlight splice-variant-dependent effects and phosphorylation-sensitive arrestin-partner interactions. The data argue against a (rapidly disappearing) one-size-fits-all model for GPCR-arrestin signaling and instead support a nuanced, receptor-specific view, with one example summarized effectively in the mechanistic figure.

We thank the referee for this positive assessment of our manuscript. Indeed, by stepping away from the common receptor models for understanding internalization (β2AR and V2R), we revealed the phosphorylation level of the receptor as a key factor in driving the sequestration of the receptor from the plasma membrane. We hope that the proposed mechanistic model will aid further studies to obtain an even more detailed understanding of forces driving receptor internalization.

      Weaknesses: 

Figure 1 shows regions of the AlphaFold model that are intrinsically disordered without making it clear that this is not an expected stable position. The authors' NMR titration data are n=1. Many figure panels require that readers pinch and zoom to see the data.

In the “Recommendations for the Authors” section, we addressed the reviewer’s stated weaknesses. In short, for the AlphaFold representation in Figure 1A, we added explicit labeling and revised the legend and main text to clearly state that the depicted loops are intrinsically disordered, absent from crystal structures due to flexibility, and shown only for visualization of their location. We also clarified that the NMR titration experiments inherently have n = 1 due to technical limitations, and that this is standard practice in the field, while ensuring individual data points remain visible. The supplementary NMR figures now have more vibrant coloring, allowing easier data assessment. However, we have not changed the zooming of the microscopy and NMR spectra. We believe that the presentation of microscopy data, which already show zoomed-in regions of interest, follows standard practices in the field. Furthermore, we strongly believe that we should display full NMR spectra in the supplementary figures to allow the reader to assess the overall quality and behavior. As indicated previously, the reader can zoom in to very high resolution, since the spectra are provided as vector graphics. Zoomed regions of the relevant details are provided in the main figures.

      Reviewer #2 (Public review): 

      Summary: 

Based on extensive live cell assays, SEC, and NMR studies of reconstituted complexes, these authors explore the roles of clathrin and the AP2 protein in facilitating clathrin-mediated endocytosis via activated arrestin-2. NMR, SEC, proteolysis, and live cell tracking confirm a strong interaction between AP2 and activated arrestin using a phosphorylated C-terminus of CCR5. At the same time, a weak interaction between clathrin and arrestin-2 is observed, irrespective of activation.

      These results contrast with previous observations of class A GPCRs and the more direct participation by clathrin. The results are discussed in terms of the importance of short and long phosphorylated bar codes in class A and class B endocytosis. 

      Strengths: 

The 15N,1H and 13C-methyl TROSY NMR and assignments represent a monumental amount of work on arrestin-2, clathrin, and AP2. Weak NMR interactions between arrestin-2 and clathrin are observed irrespective of activation of arrestin. A second interface, proposed by crystallography, was suggested to be a possible crystal artifact. NMR establishes realistic information on the clathrin and AP2 affinities to activated arrestin with both KD values and a description of the interfaces.

      We sincerely thank the referee for this encouraging evaluation of our work and appreciate the recognition of the NMR efforts and insights into the arrestin–clathrin–AP2 interactions.

      Weaknesses: 

      This reviewer has identified only minor weaknesses with the study. 

(1) I don't observe two overlapping spectra of Arrestin2 (1-393) +/- CLTC NTD in Supp Figure 1.

      We believe the referee is referring to Figure 1 – figure supplement 2. We have now made the colors of the spectra more vibrant and used different contouring to make the differences between the two spectra clearer. The spectra are provided as vector graphics, which allows zooming in to the very fine details.

(2) Arrestin-2 1-418 resonances all but disappear with CCR5pp6 addition. Are they recovered with AP2β2 addition, and is this what is shown in Supp Fig 2D?

We believe the reviewer is referring to Figure 3 - figure supplement 1. In this figure, panels E and F show that resonances of arrestin2<sup>1-418</sup> (apo state shown with black outline) disappear upon the addition of CCR5pp6 (arrestin2<sup>1-418</sup>•CCR5pp6 complex spectrum in red). Panels C and D show resonances of arrestin2<sup>1-418</sup> (apo state shown with black outline), which remain unchanged upon addition of AP2β2<sup>701-937</sup> (orange), indicating no complex formation. We also recorded a spectrum of the arrestin2<sup>1-418</sup>•CCR5pp6 complex under addition of AP2β2<sup>701-937</sup> (not shown), but the arrestin2 resonances in the arrestin2<sup>1-418</sup>•CCR5pp6 complex were already too broad for further analysis. This had already been explained in the text.

“In agreement with the AP2β2 NMR observations, no interaction was observed in the arrestin2 methyl and backbone NMR spectra upon addition of AP2β2 in the absence of phosphopeptide (Figure 3-figure supplement 1C, D). However, the significant line broadening of the arrestin2 resonances upon phosphopeptide addition (Figure 3-figure supplement 1E, F) precluded a meaningful assessment of the effect of the AP2β2 addition on arrestin2 in the presence of phosphopeptide”.

      (3) I don't understand how methyl TROSY spectra of arrestin2 with phosphopeptide could look so broadened unless there are sample stability problems?

      We thank the referee for this comment. We would like to clarify that in general a broadened spectrum beyond what is expected from the rotational correlation time does not necessarily correlate with sample stability problems. It is rather evidence of conformational intermediate exchange on the micro- to millisecond time scale.

      The displayed <sup>1</sup>H-<sup>15</sup>N spectra of apo arrestin2 already suffer from line broadening due to such intrinsic mobility of the protein. These spectra were recorded with acquisition times of 50 ms (<sup>15</sup>N) and 55 ms (<sup>1</sup>H) and resolution-enhanced by a 60˚-shifted sine-bell filter for <sup>15</sup>N and a 60˚-shifted squared sine-bell filter for <sup>1</sup>H, respectively, which leads to the observed resolution with still reasonable sensitivity. The <sup>1</sup>H-<sup>15</sup>N resonances in Fig. 1b (arrestin2<sup>1-393</sup>) look particularly narrow. However, this region contains a large number of flexible residues. The full spectrum, e.g. Figure 1-figure supplement 2, shows the entire situation with a clear variation of linewidths and intensities. The linewidth variation becomes stronger when omitting the resolution enhancement filters.

The addition of the CCR5pp6 phosphopeptide does not change protein stability, which we assessed by measuring the melting temperature of arrestin2<sup>1-418</sup> and the arrestin2<sup>1-418</sup>•CCR5pp6 complex (Tm = 57°C in both cases). We believe that the explanation for the increased broadening of the arrestin2 resonances is that addition of the CCR5pp6, possibly due to the release of the arrestin2 strand β20, amplifies the mentioned intermediate timescale protein dynamics. This results in the disappearance of arrestin2 resonances.

      We have now included the assessment of arrestin2<sup>1-418</sup> and arrestin2<sup>1-418</sup>•CCR5pp6 stability in the manuscript:

      “The observed line broadening of arrestin2 in the presence of phosphopeptide must be a result of increased protein motions and is not caused by a decrease in protein stability, since the melting temperature of arrestin2 in the absence and presence of phosphopeptide are identical (56.9 ± 0.1 °C)”.

      (4) At one point the authors added excess fully phosphorylated CCR5 phosphopeptide (CCR5pp6). Does the phosphopeptide rescue resolution of arrestin2 (NH or methyl) to the point where interaction dynamics with clathrin (CLTC NTD) are now more evident on the arrestin2 surface?

Unfortunately, when we titrate arrestin2 with CCR5pp6 (please see Isaikina & Petrovic et al., Mol. Cell, 2023 for more details), the arrestin2 resonances undergo fast-to-intermediate exchange upon binding. In the presence of phosphopeptide excess, very few resonances remain, the majority of which are in the disordered region, including resonances from the clathrin-binding loop. Due to the peak overlap, we could not unambiguously assign arrestin2 resonances in the bound state, which precluded our assessment of the arrestin2-clathrin interaction in the presence of phosphopeptide. We have made this clearer in the paragraph ‘The arrestin2-clathrin interaction is independent of arrestin2 activation’:

“Due to significant line broadening and peak overlap of the arrestin2 resonances upon phosphopeptide addition, the influence of arrestin activation on the clathrin interaction could not be detected on either backbone or methyl resonances”.

      (5) Once phosphopeptide activates arrestin-2 and AP2 binds can phosphopeptide be exchanged off? In this case, would it be possible for the activated arrestin-2 AP2 complex to re-engage a new (phosphorylated) receptor?

This would be an interesting mechanism. In principle, this should be possible as long as the other (phosphorylated) receptor outcompetes the initial phosphopeptide with higher affinity towards the binding site. However, we do not have experiments to assess this process directly and therefore prefer not to speculate further.

      (6) I'd be tempted to move the discussion of class A and class B GPCRs and their presumed differences to the intro and then motivate the paper with specific questions. 

We appreciate the referee’s suggestion and had a similar idea previously. However, as we do not have data on other class-A or class-B receptors, we would rather not motivate the entire manuscript with this question.

(7) Did the authors ever try SEC measurements of arrestin-2 + AP2beta2 + CCR5pp6 with and without PIP2, and with and without clathrin (CLTC NTD)? The question becomes what the active complex is and how PIP2 modulates this cascade of complexation events in class B receptors.

      We thank the referee for this question. Indeed, we tested whether PIP2 can stabilize the arrestin2•CCR5pp6•AP2 complex by SEC experiments. Unfortunately, the addition of PIP2 increased the formation of arrestin2 dimers and higher oligomers, presumably due to the presence of additional charges. The resolution of SEC experiments was not sufficient to distinguish arrestin2 in oligomeric form or in arrestin2•CCR5pp6•AP2 complex. We now mention this in the text:

“We also attempted to stabilize the arrestin2-AP2β2-phosphopeptide complex through the addition of PIP2, which can stabilize arrestin complexes with the receptor (Janetzko et al., 2022). The addition of PIP2 increased the formation of arrestin2 dimers and higher oligomers, presumably due to the presence of additional charges. Unfortunately, the resolution of the SEC experiments was not sufficient to separate the arrestin2 oligomers from complexes with AP2β2”.

      Reviewer #3 (Public review): 

      Summary: 

      Overall, this is a well-done study, and the conclusions are largely supported by the data, which will be of interest to the field. 

      Strengths: 

Strengths of this study include experiments with solution NMR that can resolve high-resolution interactions of the highly flexible C-terminal tail of arr2 with clathrin and AP2. Although mainly confirmatory in defining the arr2 CBL <sup>376</sup>LIELD<sup>380</sup> as the clathrin binding site, the use of NMR is of high interest (Fig. 1). The 15N-labeled CLTC-NTD experiment with arr2 titrations reveals a span from residues 39-108 that mediates an arr2 interaction, which corroborates previous crystal data, but does not reveal a second area in CLTC-NTD that in previous crystal structures was observed to interact with arr2.

SEC and NMR data suggest that full-length arr2 (1-418) binding to the β2-adaptin subunit of AP2 is enhanced in the presence of CCR5 phospho-peptides (Fig. 3). The pp6 peptide shows the highest degree of arr2 activation and β2-adaptin binding, compared to less phosphorylated or non-phosphorylated peptides. It is interesting that the arr2 interaction with CLTC NTD and pp6 cannot be detected using the SEC approach, further suggesting that clathrin binding is not dependent on arrestin activation. Overall, the data suggest that receptor activation promotes arrestin binding to AP2, not clathrin, suggesting the AP2 interaction is necessary for CCR5 endocytosis.

To validate the solid biophysical data, the authors pursue validation experiments in a HeLa cell model by confocal microscopy. This requires transient transfection of tagged receptor (CCR5-Flag) and arr2 (arr2-YFP). CCR5 displays a "class B"-like behavior in that arr2 is rapidly recruited to the receptor at the plasma membrane upon agonist activation, which forms a stable complex that internalizes onto endosomes (Fig. 4). The data suggest that complex internalization is dependent on AP2 binding, not clathrin (Fig. 5).

      The addition of the antagonist experiment/data adds rigor to the study. 

      Overall, this is a solid study that will be of interest to the field.

      We thank the referee for the careful and encouraging evaluation of our work. We appreciate the recognition of the solidity of our data and the support for our conclusions regarding the distinct roles of AP2 and clathrin in arrestin-mediated receptor internalization.

      Recommendations for the authors: 

      Reviewer #1 (Recommendations for the authors): 

I believe that the authors have made efforts to improve the accessibility to a broader audience. In a few cases, I believe that the authors' response either did not truly address the concern or made the problem worse. I am grouping these as 'very strong opinions' and 'sticking point'.

      Very strong opinion 1: 

While data presentation is somewhat at the authors' discretion, there were several figures where the presentation did not make the work approachable, including microscopy insets and NMR spectra. A suggestion to 'pinch and zoom' does not really address this. For the overlapping NMR spectra in supporting Figure 1, I actually -can- see this on zooming, but I did not recognize this on first pass because the colors are almost identical for the two spectra. This is an easy fix. Changing the presentation by coloring these distinctly would alleviate this. The Supplemental figure to Fig. 2 looks strange with pinch and zoom. But at the end of the day, data presentation where the reader is to infer that they must zoom in is not very approachable and may prevent readers from being able to independently assess the data. In this case, there doesn't seem to be a strong rationale to not make these panels easier to see at 100% size.

      We appreciate the reviewer’s thoughtful comments regarding figure accessibility and agree that data presentation should be clear and interpretable without requiring readers to zoom in extensively. However, we must note that the presentation of the microscopy data follows standard practices in the field and that the panels already include zoomed-in regions, which enable easier access to key results and observations.

      Regarding the NMR data, we have revised Figure 1—figure supplement 2 and Figure 2— figure supplement 1 to match the presentation style of Figure 3—figure supplement 1, which the reviewer apparently found more accessible. We also made the colors of the spectra more vibrant, as the referee suggested. We would like to emphasize that it is absolutely necessary to display the full NMR spectra in order to allow independent assessment of signal assignment, data quality, and overall protein behavior. Zoomed regions of the relevant details are provided in the main figures.

      Very strong opinion 2: 

The authors' response to the lack of individual data points and error bars is that this is an n=1 experiment. I do not believe this meets the minimum standard for best practices in the field.

      We respectfully disagree with the reviewer’s assessment. The Figure already displays individual data points, as shown already in the initial submission. Performing NMR titrations with isotopically labeled protein samples is inherently resource-intensive, and single-sample (n = 1) experiments are widely accepted and routinely reported in the field. Numerous studies have used the same approach, including Rosenzweig et al., Science (2013); Nikolaev et al., Nat. Methods (2019); and Hobbs et al., J. Biomol. NMR (2022), as well as our own recent work (Isaikina & Petrovic et al., Mol. Cell, 2023). These studies demonstrate that such NMR-based affinity measurements, even when performed on a single sample, are highly reproducible, precise, and consistent with orthogonal evidence and across different sample conditions.

      Sticking point:

Figure 1A - the AlphaFold model of arrestin2L depicts the disordered loops as ordered. The depiction is misleading at best, and inaccurate in truth. To use an analogy, what the authors depict is equivalent to publishing an LLM hallucination in the text. Unlike LLMs, AlphaFold will actually flag its hallucination with the confidence (pLDDT) in the output. Both for LLMs and for AlphaFold, we are spending much time teaching our students in class how to use computation appropriately - both to improve efficiency but also to ensure accuracy by removing hallucinations.

The original review indicated that confidences needed to be shown and that this needed to be depicted in a way that clarifies that this is NOT a structural state of the loops. The newly added description ("The model was used to visualize the clathrin-binding loop and the 344-loop of the arrestin2 C-domain, which are not detected in the available crystal structures...") worsens the concern because it even more strongly implies that a zero-confidence computational output is a likely structural state. It also indicates that these regions were 'not detected' in crystal structures. These regions of arrestin are intrinsically disordered. AlphaFold (by its nature) must put out something in terms of coordinates, even if the pLDDT suggests that the region cannot be predicted or is not in a stable position, which is the case here. In crystal structures, these regions are not associated with interpretable electron density, meaning that coordinates are omitted in these regions because adding them would imply that under the conditions used, the protein adopts a low energy structural state in this region. This region is instead intrinsically disordered.

A good description of why showing disordered loops in a defined position is incorrect and how to instead depict disorder correctly is in Brotzakis et al., Nature Communications 16, 1632 (2025), "AlphaFold prediction of structural ensembles of disordered proteins", where figures 3A, 4A, and 5A show one AlphaFold prediction colored by confidence and 3B, 4B and 5B are more accurate depictions of the structural ensemble.

      Coming back to the original comment "The AlphaFold model could benefit from a more transparent discussion of prediction confidence and caveats. The younger crowd (part of the presumed intended readership) tends to be more certain that computational output is 'true'...." Right now, the authors are still showing in Fig 1A a depiction of arrestin with models for the loops that are untrue. They now added text indicating that these loops are visualized in an AlphaFold prediction and 'true' but 'not detected in crystal structures'. There is no indication in the text that these are intrinsically disordered. The lack of showing the pLDDT confidence and the lack of any indication that these are disordered regions is simply incorrect. 

      We appreciate the concern of the reviewer towards AlphaFold models. As NMR spectroscopists we are highly aware of intrinsic biomolecular motions. However, our AlphaFold2 model is used as a graphical representation to display the interaction sites of loops; it is not intended to depict the loops as fixed structural states. The flexibility of the loops had been clearly described in the main text before:

“Arrestin2 consists of two consecutive (N- and C-terminal) β-sandwich domains (Figure 1A), followed by the disordered clathrin-binding loop (CBL, residues 353–386), strand β20 (residues 386–390), and a disordered C-terminal tail after residue 393”.

      and

“Figure 1B depicts part of a <sup>1</sup>H-<sup>15</sup>N TROSY spectrum (full spectrum in Figure 1-figure supplement 2A) of the truncated <sup>15</sup>N-labeled arrestin2 construct arrestin2<sup>1-393</sup> (residues 1-393), which encompasses the C-terminal strand β20, but lacks the disordered C-terminal tail. Due to intrinsic microsecond dynamics, the assignment of the arrestin2<sup>1-393</sup> <sup>1</sup>H-<sup>15</sup>N resonances by triple resonance methods is largely incomplete, but 16 residues (residues 367-381, 385-386) within the mobile CBL could be assigned. This region of arrestin is typically not visible in either crystal or cryo-EM structures due to its high flexibility”.

      as well as in the legend to Figure 1:

“The model was used to visualize the clathrin-binding loop and the 344-loop of the arrestin2 C-domain, which are not detected in the available crystal structures of apo arrestin2 [bovine: PDB 1G4M (Han et al., 2001), human: PDB 8AS4 (Isaikina et al., 2023)]. In the other structured regions, the model is virtually identical to the crystal structures”.

      We have now further added a label ‘AlphaFold2 model’ to Figure 1A and amended the respective Figure legend to

      “The model was used to visualize the clathrin-binding loop and the 344-loop of the arrestin2 C-domain, which are not detected in the available crystal structures of apo arrestin2 [bovine: PDB 1G4M (Han et al., 2001), human: PDB 8AS4 (Isaikina et al., 2023)] due to flexibility. In the other structured regions, the model is virtually identical to the crystal structures”.

      Reviewer #2 (Recommendations for the authors): 

I appreciated the response by the authors to all of my questions. I have no further comments.

      We thank the referee for the raised questions, which we believe have improved the quality of the manuscript.

1. Philips 27E2F7901: when the USB4 one-cable link and the C-to-DP cable are connected to the Mac's two Thunderbolt 3 ports at the same time, use a betterdisplaycli script to disable the USB-C (30 Hz) signal source. USB4 is used only for 10 Gb/s data transfer; the external display runs over C-to-DP (4K 60 Hz).

    1. These “elite” classes had privileges and power, thanks to their control of wealth. In order to protect those privileges, elites pioneered the development of the state—rules, laws, government structures, and military that protected people in a society, but especially the wealthy.

Interesting point: from the very beginning, the state was a project that served to secure the privileges of the powerful and the wealthy.

    1. Reviewer #1 (Public review):

      Summary

      The manuscript presents EIDT, a framework that extracts an "individuality index" from a source task to predict a participant's behaviour in a related target task under different conditions. However, the evidence that it truly enables cross-task individuality transfer is not convincing.

      Strengths

      The EIDT framework is clearly explained, and the experimental design and results are generally well-described. The performance of the proposed method is tested on two distinct paradigms: a Markov Decision Process (MDP) task (comparing 2-step and 3-step versions) and a handwritten digit recognition (MNIST) task under various conditions of difficulty and speed pressure. The results indicate that the EIDT framework generally achieved lower prediction error compared to baseline models and that it was better at predicting a specific individual's behaviour when using their own individuality index compared to using indices from others.

      Furthermore, the individuality index appeared to form distinct clusters for different individuals, and the framework was better at predicting a specific individual's behaviour when using their own derived index compared to using indices from other individuals.

      Comments on revisions:

      I thank the author for the additional analyses. They have fully addressed all of my previous concerns, and I have no further recommendations.

    2. Reviewer #2 (Public review):

      This paper introduces a framework for modeling individual differences in decision-making by learning a low-dimensional representation (the "individuality index") from one task and using it to predict behaviour in a different task. The approach is evaluated on two types of tasks: a sequential value-based decision-making task and a perceptual decision task (MNIST). The model shows improved prediction accuracy when incorporating this learned representation compared to baseline models.

      The motivation is solid, and the modelling approach is interesting, especially the use of individual embeddings to enable cross-task generalization. That said, several aspects of the evaluation and analysis could be strengthened.

(1) The MNIST SX baseline appears weak. RTNet isn't directly comparable in structure or training. A stronger baseline would involve training the GRU directly on the task without using the individuality index, e.g., by fixing the decoder head. This would provide a clearer picture of what the index contributes.

      (2) Although the focus is on prediction, the framework could offer more insight into how behaviour in one task generalizes to another. For example, simulating predicted behaviours while varying the individuality index might help reveal what behavioural traits it encodes.

      (3) It's not clear whether the model can reproduce human behaviour when acting on-policy. Simulating behaviour using the trained task solver and comparing it with actual participant data would help assess how well the model captures individual decision tendencies.

      (4) Figures 3 and S1 aim to show that individuality indices from the same participant are closer together than those from different participants. However, this isn't fully convincing from the visualizations alone. Including a quantitative presentation would help support the claim.

      (5) The transfer scenarios are often between very similar task conditions (e.g., different versions of MNIST or two-step vs three-step MDP). This limits the strength of the generalization claims. In particular, the effects in the MNIST experiment appear relatively modest, and the transfer is between experimental conditions within the same perceptual task. To better support the idea of generalizing behavioural traits across tasks, it would be valuable to include transfers across more structurally distinct tasks.

      (6) For both experiments, it would help to show basic summaries of participants' behavioural performance. For example, in the MDP task, first-stage choice proportions based on transition types are commonly reported. These kinds of benchmarks provide useful context.

      (7) For the MDP task, consider reporting the number or proportion of correct choices in addition to negative log-likelihood. This would make the results more interpretable.

(8) In Figure 5, what is the difference between "% correct" and "% match to behaviour"? If these are distinct metrics, it would help to clarify the distinction in the text or figure captions.

      (9) For the cognitive model, it would be useful to report the fitted parameters (e.g., learning rate, inverse temperature) per individual. This can offer insight into what kinds of behavioural variability the individuality index might be capturing.

      (10) A few of the terms and labels in the paper could be made more intuitive. For example, the name "individuality index" might give the impression of a scalar value rather than a latent vector, and the labels "SX" and "SY" are somewhat arbitrary. You might consider whether clearer or more descriptive alternatives would help readers follow the paper more easily.

      (11) Please consider including training and validation curves for your models. These would help readers assess convergence, overfitting, and general training stability, especially given the complexity of the encoder-decoder architecture.
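The quantitative comparison requested in point (4) could take the form of mean within- vs. between-individual distances in the embedding space. A minimal sketch, under the assumption that each participant contributes several individuality-index vectors (the variable names and toy data are hypothetical, not from the manuscript):

```python
from itertools import combinations
from math import dist
from statistics import mean

def within_between_distances(indices_by_subject):
    """Mean pairwise Euclidean distance within each participant's set of
    individuality indices vs. between different participants' indices."""
    within, between = [], []
    subjects = list(indices_by_subject)
    for i, s in enumerate(subjects):
        # All pairs drawn from the same participant.
        within += [dist(a, b) for a, b in combinations(indices_by_subject[s], 2)]
        # All pairs drawn from two different participants.
        for t in subjects[i + 1:]:
            between += [dist(x, y)
                        for x in indices_by_subject[s]
                        for y in indices_by_subject[t]]
    return mean(within), mean(between)

# Two well-separated participants: within-distance 0, between-distance 1.
data = {"s1": [(0.0, 0.0), (0.0, 0.0)], "s2": [(1.0, 0.0), (1.0, 0.0)]}
print(within_between_distances(data))  # (0.0, 1.0)
```

Reporting these two numbers (or their ratio, with a permutation test over participant labels) would make the clustering claim quantitative rather than purely visual.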

      Comments on revisions:

      Thank you to the authors for the updated manuscript. The authors have addressed the majority of my concerns, and the paper is now in a much better form.

Regarding my previous Comment 6, I still believe it would be helpful to include a graph similar to what is typically reported for these tasks: specifically, a breakdown of choices based on rare versus common transitions (see Model-Based Influences on Humans' Choices and Striatal Prediction Errors, Figure 2). Presenting this for both the actual behaviour and the simulated data would strengthen the paper and allow for clearer comparison.
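The breakdown referred to here is the standard stay-probability analysis of the two-step task: the probability of repeating the first-stage choice, split by whether the previous trial was rewarded and whether its transition was common or rare. A minimal sketch, assuming hypothetical trial records with `rewarded`, `common_transition`, and `stayed` fields (field names are illustrative, not from the paper):

```python
from collections import defaultdict

def stay_probabilities(trials):
    """P(repeat first-stage choice) for each of the four
    reward x transition conditions of the two-step task."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [stays, total]
    for t in trials:
        cond = ("rewarded" if t["rewarded"] else "unrewarded",
                "common" if t["common_transition"] else "rare")
        counts[cond][0] += t["stayed"]
        counts[cond][1] += 1
    return {c: stays / total for c, (stays, total) in counts.items()}

# Toy pattern of a purely model-based agent: stay after rewarded-common
# and unrewarded-rare trials, switch otherwise.
trials = [
    {"rewarded": True,  "common_transition": True,  "stayed": 1},
    {"rewarded": True,  "common_transition": False, "stayed": 0},
    {"rewarded": False, "common_transition": True,  "stayed": 0},
    {"rewarded": False, "common_transition": False, "stayed": 1},
]
print(stay_probabilities(trials))
```

Plotting these four proportions side by side for real and simulated data would give exactly the comparison requested.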

    3. Reviewer #3 (Public review):

      Summary:

      This work presents a novel neural network-based framework for parameterizing individual differences in human behavior. Using two distinct decision-making experiments, the author demonstrates the approach's potential and claims it can predict individual behavior (1) within the same task, (2) across different tasks, and (3) across individuals. While the goal of capturing individual variability is compelling and the potential applications are promising, the claims are weakly supported, and I find that the underlying problem is conceptually ill-defined.

      Strengths:

      The idea of using neural networks for parameterizing individual differences in human behavior is novel, and the potential applications can be impactful.

      Weaknesses:

(1) To demonstrate the effectiveness of the approach, the authors compare a Q-learning cognitive model (for the MDP task) and RTNet (for the MNIST task) against the proposed framework. However, as I understand it, neither the cognitive model nor RTNet is designed to fit or account for individual variability. If that is the case, it is unclear why these models serve as appropriate baselines. Isn't it expected that a model explicitly fitted to individual data would outperform models that do not? If so, does the observed superiority of the proposed framework simply reflect the unsurprising benefit of fitting individual variability? I think the authors should either clarify why these models constitute a fair control or validate the proposed approach against stronger and more appropriate baselines.

      (2) It is not very clear in the Results section what it means to have shorter within-individual distances than between-individual distances. Related to the comment above, is there any control analysis performed for this? Also, this analysis appears to have nothing to do with predicting individual behavior. Is this evidence toward successfully parameterizing individual differences? Could this be task-dependent, especially since the transfer is evaluated on exceedingly similar tasks in both experiments? I think a bit more discussion of the motivation and implications of these results would help the reader make sense of this analysis.

      (3) The authors have to better define what exactly they mean by transferring across different "tasks" and testing the framework in "more distinctive tasks". All presented evidence, taken at face value, demonstrates transfer across different "conditions" of the same task within the same experiment. It is unclear to me how generalizable the framework will be when applied to different tasks.

      (4) Conceptually, it is also unclear to me how plausible it is that the framework could generalize across tasks spanning multiple cognitive domains (if that's what is meant by more distinctive). For instance, how can an individual's task performance on a Posner task predict task performance on the Cambridge face memory test? Which part of the framework could have enabled such a cross-domain prediction of task performance? I think these have to be at least discussed to some extent, since without it the future direction is meaningless.

      (5) How is the negative log-likelihood, which seems to be the main metric for comparison, computed? Is it based on trial-by-trial response prediction or on the probability of responses, as is usually done in cognitive modelling?

      (6) None of the presented evidence is cross-validated. The authors should consider performing K-fold cross-validation on the train, test, and evaluation split of subjects to ensure robustness of the findings.

      (7) The authors excluded 25 subjects (20% of the data) for different reasons. This is a substantial proportion, especially by the standards of what is typically observed in behavioral experiments. The authors should provide a clear justification for these exclusion criteria and, if possible, cite relevant studies that support the use of such stringent thresholds.

      (8) The authors should do a better job of creating the figures and writing the figure captions. It is unclear which specific claim the authors are addressing with each figure. For example, what is the key message of Figure 2C regarding transfer within and across participants? Why is the statistical presentation different between the Cognitive model and the EIDT framework plots? In Figure 3, it is unclear what the dots and clusters represent and how they support the authors' claim that the same individual forms clusters. And doesn't this experiment have 98 subjects after exclusion? This plot has far fewer than 98 dots as far as I can tell. Furthermore, I find Figure 5 particularly confusing, as the underlying claim it is meant to illustrate is unclear. Clearer figures and more informative captions are needed to guide the reader effectively.

      (9) I also find the writing somewhat difficult to follow. The subheadings are confusing, and it's often unclear which specific claim the authors are addressing. The presentation of results feels disorganized, making it hard to trace the evidence supporting each claim. Also, the excessive use of acronyms (e.g., SX, SY, CG, EA, ES, DA, DS) makes the text harder to parse. I recommend restructuring the results section to be clearer and significantly reducing the use of unnecessary acronyms.

      Comments on revisions:

      The authors have addressed my previous comments with great care and detail. I appreciate the additional analyses and edits. I have no further comments.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Because the "source" and "target" tasks are merely parameter variations of the same paradigm, it is unclear whether EIDT achieves true crosstask transfer. The manuscript provides no measure of how consistent each participant's behaviour is across these variants (e.g., two- vs threestep MDP; easy vs difficult MNIST). Without this measure, the transfer results are hard to interpret. In fact, Figure 5 shows a notable drop in accuracy when transferring between the easy and difficult MNIST conditions, compared to transfers between accuracy-focused and speedfocused conditions. Does this discrepancy simply reflect larger withinparticipant behavioural differences between the easy and difficult settings? A direct analysis of intra-individual similarity for each task pair and how that similarity is related to EIDT's transfer performance is needed.

      Thank you for your insightful comment. We agree that the tasks used in our study are variations of the same paradigm. Accordingly, we have revised the manuscript to consistently frame our findings as demonstrating individuality transfer "across task conditions" rather than "across distinct tasks."

      In response to your suggestion, we have conducted a new analysis to directly investigate the relationship between individual behavioural patterns and transfer performance. As shown in the new Figures 4, 11, S8, and S9, we found a clear relationship between the distance in the space of individual latent representations (called the individuality index in the previous manuscript) and prediction performance. Specifically, prediction accuracy for a given individual's behaviour degrades as the latent representation of the model's source individual becomes more distant. This result directly demonstrates that our framework captures meaningful individual differences that are predictive of transfer performance across conditions.

      We have also expanded the Discussion (Lines 332--343) to address the potential for applying this framework to more structurally distinct tasks, hypothesizing that this would rely on shared underlying cognitive functions.

      Related to the previous comment, the individuality index is central to the framework, yet remains hard to interpret. It shows much greater within-participant variability in the MNIST experiment (Figure S1) than in the MDP experiment (Figure 3). Is such a difference meaningful? It is hard to know whether it reflects noisier data, greater behavioural flexibility, or limitations of the model.

      Thank you for raising this important point about interpretability. To enhance the interpretability of the individual latent representation, we have added a new analysis for the MDP task (see Figures 6 and S4). By applying our trained encoder to data from simulated Q-learning agents with known parameters, we demonstrate that the dimensions of the latent space systematically map onto the agents' underlying cognitive parameters (learning rate and inverse temperature). This analysis provides a clearer interpretation by linking our model's data-driven representation to established theoretical constructs.
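
      The simulation approach described above can be illustrated with a minimal sketch, assuming a simple two-armed bandit setting; the function name `simulate_q_agent` and the task structure are illustrative assumptions, not the paper's actual agents or MDP design:

```python
import numpy as np

def simulate_q_agent(alpha, beta, n_trials=200, reward_probs=(0.7, 0.3), seed=0):
    """Simulate a Q-learning agent with a known learning rate (alpha)
    and inverse temperature (beta) on a two-armed bandit.

    Returns the agent's chosen actions and received rewards, which could
    then be fed to a trained encoder to inspect the latent space.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)  # action values
    actions, rewards = [], []
    for _ in range(n_trials):
        # Softmax action selection governed by the inverse temperature
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        a = rng.choice(2, p=p)
        r = rng.random() < reward_probs[a]
        # Prediction-error update governed by the learning rate
        q[a] += alpha * (r - q[a])
        actions.append(int(a))
        rewards.append(int(r))
    return np.array(actions), np.array(rewards)

# Sweep known parameters to produce labelled behaviour for the encoder
grid = [(a, b) for a in (0.1, 0.5, 0.9) for b in (1.0, 5.0)]
datasets = {params: simulate_q_agent(*params) for params in grid}
```

      Because each simulated dataset carries known ground-truth parameters, the encoder's latent coordinates can be compared against them directly, which is the logic behind linking the latent space to learning rate and inverse temperature.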

      Regarding the greater within-participant variability observed in the MNIST task (visualized now in Figure S7), this could be attributed to several factors, such as greater behavioural flexibility in the perceptual task. However, disentangling these potential factors is complex and falls outside the primary scope of the current study, which prioritizes demonstrating robust prediction accuracy across different task conditions.

      The authors suggest that the model's ability to generalize to new participants "likely relies on the fact that individuality indices form clusters and individuals similar to new participants exist in the training participant pool". It would be helpful to directly test this hypothesis by quantifying the similarity (or distance) of each test participant's individuality index to the individuals or identified clusters within the training set, and assessing whether greater similarity (or closer proximity) to the clusters in the training set is associated with higher prediction accuracy for those individuals in the test set.

      Thank you for this excellent suggestion. We have performed the analysis you proposed to directly test this hypothesis. Our new results, presented in Figures 4, 11, S5, S8, and S9, quantify the distance between the latent representation of a test participant and that of the source participant used to generate the prediction model.

      The results show a significant negative correlation: prediction accuracy consistently decreases as the distance in the latent space increases. This confirms that generalization performance is directly tied to the similarity of behavioural patterns as captured by our latent representation, strongly supporting our hypothesis.
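
      The distance-versus-accuracy analysis described here can be outlined as a minimal sketch; `distance_vs_error` and its inputs are hypothetical names for illustration, not the authors' code:

```python
import numpy as np
from itertools import product

def distance_vs_error(latents, errors):
    """Correlate latent-space distance with cross-individual prediction error.

    latents: dict mapping participant id -> latent vector (np.ndarray)
    errors:  dict mapping (source, target) -> prediction error on the
             target's data when using the source's model (e.g. NLL)
    Returns the Pearson correlation between pairwise distance and error.
    """
    ids = sorted(latents)
    d, e = [], []
    for s, t in product(ids, ids):
        if s == t or (s, t) not in errors:
            continue
        d.append(np.linalg.norm(latents[s] - latents[t]))
        e.append(errors[(s, t)])
    return float(np.corrcoef(d, e)[0, 1])
```

      A significant positive correlation under this scheme would correspond to the reported finding that prediction error grows with distance in the latent space.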

      Reviewer #2 (Public review):

      The MNIST SX baseline appears weak. RTNet isn't directly comparable in structure or training. A stronger baseline would involve training the GRU directly on the task without using the individuality index, e.g., by fixing the decoder head. This would provide a clearer picture of what the index contributes.

      We agree that a more direct baseline is crucial for evaluating the contribution of our transfer mechanism. For the Within-Condition Prediction scenario, the comparison with RTNet was intended only to validate that our task solver architecture could achieve average human-level task performance (Figure 7).

      For the critical Cross-Condition Transfer scenario, we have now implemented a stronger and more appropriate baseline, which we call "task solver (source)". This model has the same architecture as our EIDT task solver but is trained directly on the source task data of the specific test participant. As shown in revised Figure 9, our EIDT framework significantly outperforms this direct-training approach, clearly demonstrating the benefit of the individuality transfer mechanism.

      Although the focus is on prediction, the framework could offer more insight into how behaviour in one task generalizes to another. For example, simulating predicted behaviours while varying the individuality index might help reveal what behavioural traits it encodes.

      Thank you for this valuable suggestion. To provide more insight into the encoded behavioural traits, we have conducted a new analysis linking the individual latent representation to a theoretical cognitive model. As detailed in the revised manuscript (Figures 6 and S4), we applied our encoder to simulated data from Q-learning agents with varying parameters. The results show a systematic relationship between the latent space coordinates and the agents' learning rates and inverse temperatures, providing a clearer interpretation of what the representation captures.

      It's not clear whether the model can reproduce human behaviour when acting on-policy. Simulating behaviour using the trained task solver and comparing it with actual participant data would help assess how well the model captures individual decision tendencies.

      We have added the suggested on-policy evaluation (Lines 195--207). In the revised manuscript (Figure 5), we present results from simulations where the trained task solvers performed the MDP task. We compared their performance (total reward and rate of the highly-rewarding action selected) against their corresponding human participants. The strong correlations observed demonstrate that our model successfully captures and reproduces individual-specific behavioural tendencies in an on-policy setting.

      Figures 3 and S1 aim to show that individuality indices from the same participant are closer together than those from different participants. However, this isn't fully convincing from the visualizations alone. Including a quantitative presentation would help support the claim.

      We agree that the original visualizations of inter- and intra-participant distances were not sufficiently convincing. We have therefore removed that analysis. In its place, we have introduced a more direct and quantitative analysis that explicitly links the individual latent representation to prediction performance (see Figures 4, 11, S5, S8, and S9). This new analysis demonstrates that prediction error for an individual is a function of distance in the latent space, providing stronger evidence that the representation captures meaningful, individual-specific information.

      The transfer scenarios are often between very similar task conditions (e.g., different versions of MNIST or two-step vs three-step MDP). This limits the strength of the generalization claims. In particular, the effects in the MNIST experiment appear relatively modest, and the transfer is between experimental conditions within the same perceptual task. To better support the idea of generalizing behavioural traits across tasks, it would be valuable to include transfers across more structurally distinct tasks.

      We agree with this limitation and have revised the manuscript to be more precise. We now frame our contribution as "individuality transfer across task conditions" rather than "across tasks" to accurately reflect the scope of our experiments. We have also expanded the Discussion section (Lines 332--343) to address the potential and challenges of applying this framework to more structurally distinct tasks, noting that it would likely depend on shared underlying cognitive functions.

      For both experiments, it would help to show basic summaries of participants' behavioural performance. For example, in the MDP task, first-stage choice proportions based on transition types are commonly reported. These kinds of benchmarks provide useful context.

      We have added behavioural performance summaries as requested. For the MDP task, Figure 5 now compares the total reward and rate of highly-rewarding action selected between humans and our model. For the MNIST task, Figure 7 shows the rate of correct responses for humans, RTNet, and our task solver across all conditions. These additions provide better context for the model's performance.

      For the MDP task, consider reporting the number or proportion of correct choices in addition to negative log-likelihood. This would make the results more interpretable.

      Thank you for the suggestion. To make the results more interpretable, we have added a new prediction performance metric: the rate for behaviour matched. This metric measures the proportion of trials where the model's predicted action matches the human's actual choice. This is now included alongside the negative log-likelihood in Figures 2, 3, 4, 8, 9, and 11.
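
      A metric of this kind can be sketched in a few lines; the function name and array layout below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def behaviour_match_rate(predicted_probs, actual_actions):
    """Proportion of trials where the model's most probable action
    matches the participant's actual choice.

    predicted_probs: (n_trials, n_actions) array of action probabilities
    actual_actions:  (n_trials,) array of chosen action indices
    """
    predicted = np.argmax(predicted_probs, axis=1)
    return float(np.mean(predicted == actual_actions))
```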

      In Figure 5, is there a difference between "% correct" and "% match to behaviour"? If so, it would help to clarify the distinction in the text or figure captions.

      We have clarified these terms in the revised manuscript. As defined in the Result section (Lines 116--122, 231), "%correct" (now "rate of correct responses") is a measure of task performance, whereas "%match to behaviour" (now "rate for behaviour matched") is a measure of prediction accuracy.

      For the cognitive model, it would be useful to report the fitted parameters (e.g., learning rate, inverse temperature) per individual. This can offer insight into what kinds of behavioural variability the individual latent representation might be capturing.

      We have added histograms of the fitted Q-learning parameters for the human participants in Supplementary Materials (Figure S1). This analysis revealed which parameters varied most across the population and directly informed the design of our subsequent simulation study with Q-learning agents (see response to Comment 2-2), where we linked these parameters to the individual latent representation (Lines 208--223).

      A few of the terms and labels in the paper could be made more intuitive. For example, the name "individuality index" might give the impression of a scalar value rather than a latent vector, and the labels "SX" and "SY" are somewhat arbitrary. You might consider whether clearer or more descriptive alternatives would help readers follow the paper more easily.

      We have adopted the suggested changes for clarity.

      "Individuality index" has been changed to "individual latent representation".

      "Situation SX" and "Situation SY" have been renamed to the more descriptive "Within-Condition Prediction" and "Cross-Condition Transfer", respectively.

      We have also added a table in Figure 7 to clarify the MNIST condition acronyms (EA/ES/DA/DS).

      Please consider including training and validation curves for your models. These would help readers assess convergence, overfitting, and general training stability, especially given the complexity of the encoder-decoder architecture.

      Training and validation curves for both the MDP and MNIST tasks have been added to Supplementary Materials (Figure S2 and S6) to show model convergence and stability.

      Reviewer #3 (Public review):

      To demonstrate the effectiveness of the approach, the authors compare a Q-learning cognitive model (for the MDP task) and RTNet (for the MNIST task) against the proposed framework. However, as I understand it, neither the cognitive model nor RTNet is designed to fit or account for individual variability. If that is the case, it is unclear why these models serve as appropriate baselines. Isn't it expected that a model explicitly fitted to individual data would outperform models that are not? If so, does the observed superiority of the proposed framework simply reflect the unsurprising benefit of fitting individual variability? I think the authors should either clarify why these models constitute a fair control or validate the proposed approach against stronger and more appropriate baselines.

      Thank you for raising this critical point. We wish to clarify the nature of our baselines:

      For the MDP task, the cognitive model baseline was indeed designed to account for individual variability. We estimated its parameters (e.g., learning rate) from each individual's source task behaviour and then used those specific parameters to predict their behaviour in the target task. This makes it a direct, parameter-based transfer model and thus a fair and appropriate baseline for individuality transfer.

      For the MNIST task, we agree that the RTNet baseline was insufficient for evaluating individual-level transfer in the "Cross-Condition Transfer" scenario. We have now introduced a much stronger baseline, the "task solver (source)," which is trained specifically on the source task data of each test participant. Our results (Figure 9) show that the EIDT framework significantly outperforms this more appropriate, individualized baseline, highlighting the value of our transfer method over direct, within-condition fitting.

      It is not very clear in the Results section what it means to have shorter within-individual distances than between-individual distances. Related to the comment above, is there any control analysis performed for this? Also, this analysis appears to have nothing to do with predicting individual behavior. Is this evidence toward successfully parameterizing individual differences? Could this be task-dependent, especially since the transfer is evaluated on exceedingly similar tasks in both experiments? I think a bit more discussion of the motivation and implications of these results would help the reader make sense of this analysis.

      We agree that the previous analysis on inter- and intra-participant distances was not sufficiently clear or directly linked to the model's predictive power. We have removed this analysis from the manuscript. In its place, we have introduced a new, more direct analysis (Figures 4, 11, S5, S8, and S9) that demonstrates a quantitative relationship between the distance in the latent space and prediction accuracy. This new analysis shows that prediction error for an individual increases as a function of this distance, providing much stronger and clearer evidence that our framework successfully parameterizes meaningful individual differences.

      The authors have to better define what exactly they mean by transferring across different "tasks" and testing the framework in "more distinctive tasks". All presented evidence, taken at face value, demonstrates transfer across different "conditions" of the same task within the same experiment. It is unclear to me how generalizable the framework will be when applied to different tasks.

      Conceptually, it is also unclear to me how plausible it is that the framework could generalize across tasks spanning multiple cognitive domains (if that's what is meant by more distinctive). For instance, how can an individual's task performance on a Posner task predict task performance on the Cambridge face memory test? Which part of the framework could have enabled such a cross-domain prediction of task performance? I think these have to be at least discussed to some extent, since without it the future direction is meaningless.

      We agree with your assessment and have corrected our terminology throughout the manuscript. We now consistently refer to the transfer as being "across task conditions" to accurately describe the scope of our findings.

      We have also expanded our Discussion (Lines 332--343) to address the important conceptual point about cross-domain transfer. We hypothesize that such transfer would be possible if the tasks, even if structurally different, rely on partially shared underlying cognitive functions (e.g., working memory). In such a scenario, the individual latent representation would capture an individual's specific characteristics related to that shared function, enabling transfer. Conversely, we state that transfer between tasks with no shared cognitive basis would not be expected to succeed with our current framework.

      How is the negative log-likelihood, which seems to be the main metric for comparison, computed? Is it based on trial-by-trial response prediction or on the probability of responses, as is usually done in cognitive modelling?

      The negative log-likelihood is computed on a trial-by-trial basis. It is based on the probability the model assigned to the specific action that the human participant actually took on that trial. This calculation is applied consistently across all models (cognitive models, RTNet, and EIDT). We have added sentences to the Results section to clarify this point (Lines 116--122).
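
      A minimal sketch of this trial-by-trial computation follows; the function name and the small epsilon for numerical stability are illustrative assumptions:

```python
import numpy as np

def negative_log_likelihood(predicted_probs, actual_actions, eps=1e-12):
    """Mean negative log-likelihood of the actions a participant took.

    For each trial, take the probability the model assigned to the action
    actually chosen, then average -log(p) over trials.

    predicted_probs: (n_trials, n_actions) array of action probabilities
    actual_actions:  (n_trials,) array of chosen action indices
    """
    p_chosen = predicted_probs[np.arange(len(actual_actions)), actual_actions]
    return float(-np.mean(np.log(p_chosen + eps)))
```

      Lower values mean the model placed higher probability on the choices the participant actually made, which is the sense in which it serves as a prediction metric here.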

      None of the presented evidence is cross-validated. The authors should consider performing K-fold cross-validation on the train, test, and evaluation split of subjects to ensure robustness of the findings.

      All prediction performance results reported in the revised manuscript are now based on a rigorous leave-one-participant-out cross-validation procedure to ensure the robustness of our findings. We have updated the Methods section to reflect this (Lines 127--129 and 229).
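
      The splitting scheme behind leave-one-participant-out cross-validation can be sketched as follows (an illustrative helper, not the authors' code):

```python
def leave_one_participant_out(participants):
    """Yield (train_ids, test_id) splits: each participant is held out
    exactly once while the model is trained on all the others."""
    for held_out in participants:
        train = [p for p in participants if p != held_out]
        yield train, held_out

# Example: three participants produce three train/test splits
splits = list(leave_one_participant_out(["p1", "p2", "p3"]))
```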

      For some purely illustrative visualizations (e.g., plotting the entire latent space in Figures S3 and S7), we used a model trained on all participants' data to provide a single, representative example and avoid clutter. We have explicitly noted this in the relevant figure captions.

      The authors excluded 25 subjects (20% of the data) for different reasons. This is a substantial proportion, especially by the standards of what is typically observed in behavioral experiments. The authors should provide a clear justification for these exclusion criteria and, if possible, cite relevant studies that support the use of such stringent thresholds.

      We acknowledge the concern regarding the exclusion rate. The previous criteria were indeed empirical. We have now implemented a more systematic exclusion procedure based on the interquartile range of performance metrics, which is detailed in Section 4.2.2 (Lines 489--498). This revised, objective criterion resulted in the exclusion of 42 participants (34% of the initial sample). While this rate is high, we attribute it to the online nature of the data collection, where participant engagement can be more variable. We believe applying these strict criteria was necessary to ensure the quality and reliability of the behavioural data used for modeling.
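
      An interquartile-range exclusion rule of the kind described can be sketched as follows; the Tukey fence factor `k=1.5` is an illustrative assumption, and the paper's exact threshold is the one given in its Section 4.2.2:

```python
import numpy as np

def iqr_exclusion_mask(scores, k=1.5):
    """Return a boolean mask over participants: True means keep.

    Participants whose performance score falls outside
    [Q1 - k*IQR, Q3 + k*IQR] are flagged for exclusion.
    """
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (scores >= lo) & (scores <= hi)
```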

      The authors should do a better job of creating the figures and writing the figure captions. It is unclear which specific claim the authors are addressing with each figure. For example, what is the key message of Figure 2C regarding transfer within and across participants? Why is the statistical presentation different between the Cognitive model and the EIDT framework plots? In Figure 3, it is unclear what the dots and clusters represent and how they support the authors' claim that the same individual forms clusters. And doesn't this experiment have 98 subjects after exclusion? This plot has far fewer than 98 dots as far as I can tell. Furthermore, I find Figure 5 particularly confusing, as the underlying claim it is meant to illustrate is unclear. Clearer figures and more informative captions are needed to guide the reader effectively.

      We agree that several figures and analyses in the original manuscript were unclear, and we have thoroughly revised our figures and their captions to improve clarity.

      The confusing analyses in the old Figures 2C and 5 (the Original/Others comparison) have been completely removed. The unclear visualization of the latent space for the test pool (old Figure 3, showing representations only from test participants) has also been removed to avoid confusion. For visualization of the overall latent space, we now use models trained on all data (Figures S3 and S7) and have clarified this in the captions. In place of these removed analyses, we have introduced a new, more intuitive "cross-individual" analysis (presented in Figures 4, 11, S5, S8, and S9). As explained in the new, more detailed captions, this analysis directly plots prediction performance as a function of the distance in latent space, providing a much clearer demonstration of how the latent representation relates to predictive accuracy.

      I also find the writing somewhat difficult to follow. The subheadings are confusing, and it's often unclear which specific claim the authors are addressing. The presentation of results feels disorganized, making it hard to trace the evidence supporting each claim. Also, the excessive use of acronyms (e.g., SX, SY, CG, EA, ES, DA, DS) makes the text harder to parse. I recommend restructuring the results section to be clearer and significantly reducing the use of unnecessary acronyms.

      Thank you for this feedback. We have made significant revisions to improve the clarity and organization of the manuscript. We have renamed confusing acronyms: "Situation SX" is now "Within-Condition Prediction," and "Situation SY" is now "Cross-Condition Transfer." We also added a table to clarify the MNIST condition acronyms (EA/ES/DA/DS) in Figure 7.

      The Results section has been substantially restructured with clearer subheadings.

    1. Reviewer #1 (Public review):

      Summary:

      This study presents compelling new data combining two FTD-tau mutations, P301L/S320F (PL-SF), that reliably induce spontaneous full-length tau aggregation across multiple cellular systems. However, several conclusions would benefit from more validation. Key findings rely on quantification of overexposed immunoblots, and in some experiments the tau bands shift in molecular weight in ways that are not explained (and in some instances vary between experiments). The effect seems to be driven by the S320F mutation, with the P301L mutation enhancing the effect observed with S320F alone. Although the observation that Hsp70, but not the related Hsc70, enhances aggregation is intriguing, the mechanistic basis for these differences remains unclear despite both Hsp70 and Hsc70 binding to tau. Additional experiments clarifying which PL-SF tau species Hsp70 engages, how this interaction alters tau conformational landscapes, and whether other chaperones or cofactors contribute to this effect would help solidify the conclusions and build a mechanistic picture. Overexpression of Hsp70 in the context of PL tau did not increase tau aggregation, which raises questions about whether the observed effects are specific to the SF mutation. Hsp70 functions in the context of a larger network of chaperones and has been proposed to cooperate with other proteins/machinery to disassemble tau amyloids, perhaps to produce more seeds. This would be consistent with the presented observations. For example, co-IP experiments using Hsp70 as bait combined with proteomics could really help build a more complete picture of what tau species Hsp70 binds and what other factors cooperate to yield the observed increases in aggregation. As it stands, the Hsp70 component of the paper is not fully developed, and additional experiments to address these questions would strengthen this manuscript beyond simply presenting a new tool to study spontaneous tau aggregation.

      Strengths:

      (1) The PL-SF FL tau mutant aggregates spontaneously in different cellular systems and shows hallmarks of tau pathology linked to disease.

      (2) PL-SF 4delta mutant reverses the spontaneous aggregation phenotype, consistent with these residues being critical for tau aggregation.

      (3) PL-SF 4delta also loses the ability to recruit Hsp70/Hsc70, consistent with these residues also being critical for chaperone recruitment.

      (4) The PL-SF tau mutant establishes a new system to study spontaneous tau assembly and to begin to compare it to seeded tau aggregation processes.

      Weaknesses:

      (1) Mechanistic insight into how Hsp70, but not Hsc70, increases PL-SF FL tau aggregation/pathology is missing. This is despite both chaperones binding to PL-SF FL tau. What species of tau does Hsp70 bind, and what cofactors are important in this process?

      (2) The study relies heavily on densitometry of bands to draw conclusions; in several instances, the blots are overexposed to accurately quantify the signal.

    2. Reviewer #2 (Public review):

      Summary:

      This study developed a novel tauopathy model combining two mutations, P301L and S320F, termed the PL-SF model. This model shows rapid tau protein aggregation.

      Strengths:

      The authors demonstrated pathogenicity through solid in vivo and in vitro experiments. Simultaneously, they used this model to investigate the role of the heat shock protein Hsp70 in tau protein aggregation, finding that Hsp70 promotes rather than inhibits tau pathology, which differs from previous findings.

      Weaknesses:

      (1) Although the PL-SF model can accelerate tau aggregation, it is crucial to determine whether this aligns with the temporal progression and spatial distribution of tau pathology in the brains of patients with tauopathies.

      (2) The authors did not elucidate the specific molecular mechanism by which Hsp70 promotes tau aggregation.

      (3) Some figures in this study show large error bars in the quantitative data (some statistical analysis figures, MEA recordings, etc.), indicating significant inter-sample variability. It is recommended to label individual data points in all quantitative figures and clearly indicate them in figure legends.

    3. Author response:

      Reviewer #1

      (1) Mechanistic insight into how Hsp70, but not Hsc70, increases PL-SF FL tau aggregation/pathology is missing. This is despite both chaperones binding to PL-SF FL tau. What species of tau does Hsp70 bind, and what cofactors are important in this process?

      We agree that explaining why Hsp70, but not Hsc70, promotes tau aggregation would strengthen the study. Although both chaperones bind tau, they diverge slightly in 1) protein sequence, 2) biochemical activity, and 3) co-chaperone engagement.

      Sequence: Hsp70 has an extra cysteine residue (Cys306) that is highly reactive to oxidation and a glycine residue that is critical for cysteine oxidation (Gly557). Both residues are specific to Hsp70 (not present in Hsc70) and may alter Hsp70 conformation or client handling (Hong et al., 2022).

      Biochemical activity: Prior studies indicate that Hsp70’s ATPase domain (NBD) is critical for tau interactions (Jinwal et al., 2009; Fontaine et al., 2015; Young et al., 2016) and can be disrupted with point mutations including K71E and E175S for ATPase and A406G/V438G for substrate binding (Fontaine et al., 2015).

      Co-chaperone engagement: Hsp70 recruits the co-chaperone and E3 ubiquitin ligase CHIP/Stub1 more strongly than Hsc70, suggesting co-chaperone engagement could lead to differences in tau processing (Jinwal et al., 2013).

      To directly test how the two closely related chaperones could differentially impact tau, we plan to perform the following experiments:

      (a) We will mutate residues responsible for cysteine reactivity in Hsp70 including the cysteine itself (Cys306) and the critical glycine that facilitates cysteine reactivity (Gly557). These residues will be deleted from Hsp70 or alternatively inserted into Hsc70 to determine whether cysteine reactivity is the reason for Hsp70’s ability to drive tau aggregation.

      (b) We will generate Hsp70 mutants lacking ATPase or substrate-binding activity to determine which Hsp70 domains are responsible for driving tau aggregation.

      (c) We will perform seeding assays in stable tau-expressing cell lines to determine whether Hsp70/Hsc70 overexpression or depletion alters seeded tau aggregation.

      (d) We will perform confocal microscopy to determine the extent of co-localization of Hsp70 or Hsc70 with phospho-tau, oligomeric tau, or Thioflavin-S (ThioS) to identify which tau species are engaged by Hsp70/Hsc70.

      (e) We will perform immunoprecipitation pull-downs followed by mass spectrometry to globally identify any relevant Hsp70/Hsc70 interacting factors that might account for the differences in tau aggregation.
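      For the colocalization analysis planned in (d), the field commonly summarizes channel overlap with Pearson's or Manders' coefficients. The sketch below is a minimal, hypothetical illustration of those two metrics (the helper names are ours, not from the authors' planned pipeline):

      ```python
      import numpy as np

      def pearson_coloc(ch1, ch2):
          # Pixel-wise Pearson correlation between two fluorescence channels.
          a = np.asarray(ch1).astype(float).ravel()  # astype() copies, so the
          b = np.asarray(ch2).astype(float).ravel()  # inputs are never mutated
          a -= a.mean()
          b -= b.mean()
          return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

      def manders_m1(ch1, ch2, thresh2):
          # Manders' M1: fraction of channel-1 intensity found in pixels where
          # channel 2 exceeds a chosen threshold.
          ch1 = np.asarray(ch1, dtype=float)
          mask = np.asarray(ch2) > thresh2
          return float(ch1[mask].sum() / ch1.sum())
      ```

      In practice the threshold for Manders' coefficients would be set per channel (e.g. by Costes' automatic thresholding) rather than chosen by hand.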

      (2) The study relies heavily on densitometry of bands to draw conclusions; in several instances, the blots are too overexposed to quantify the signal accurately.

      All immunoblots were acquired as 16-bit TIFFs with exposure settings chosen to prevent pixel saturation, and quantification was performed on raw, unsaturated images. Brightness and contrast adjustments were applied only for visualization and did not alter pixel values used for analysis. All quantified bands fell within the linear range of the detector, with one exception in Figure 7B, which we removed from quantification. We will add both low- and high-exposure versions of immunoblots to the revised figures to demonstrate signal linearity and dynamic range.
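      The saturation check described above can be made explicit in analysis code. The sketch below is a hedged illustration, not the authors' actual pipeline; the function names and tolerance are ours:

      ```python
      import numpy as np

      SAT_16BIT = 2**16 - 1  # ceiling of a 16-bit detector (65535)

      def band_is_saturated(band, sat_value=SAT_16BIT, tol_fraction=0.001):
          # Flag a band ROI whose pixels sit at the detector ceiling more often
          # than tol_fraction, i.e. a band outside the linear range.
          band = np.asarray(band)
          return bool((band >= sat_value).mean() > tol_fraction)

      def band_density(band, background):
          # Background-subtracted integrated density of a band ROI.
          diff = np.asarray(band, dtype=float) - float(background)
          return float(np.clip(diff, 0.0, None).sum())
      ```

      Bands flagged by such a check would be excluded from quantification, as the authors did for the exception in Figure 7B.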

      Reviewer #2

      (1) Although the PL-SF model can accelerate tau aggregation, it is crucial to determine whether this aligns with the temporal progression and spatial distribution of tau pathology in the brains of patients with tauopathies.

      No single tauopathy model fully recapitulates the temporal and spatial progression of human tauopathies. The PL-SF system is not intended to model the disease course. Rather, it is an excellent model for mechanistic studies of mature tau aggregation, which is otherwise challenging to study. We note that prior studies showed that PL-SF tau expression in transgenic mice (Xia et al., 2022 and Smith et al., 2025) and rhesus monkeys (Beckman et al., 2021) led to prion-like tau seeding and aggregation in hippocampal and cortical regions. Indeed, the spatial and temporal tau aggregation patterns aligned with features of human tauopathies. So far, these findings all support PL-SF as a valid accelerated model of tauopathy that can be used to interrogate pathogenic mechanisms that impact tau processing, degradation, and/or aggregation.

      (2) The authors did not elucidate the specific molecular mechanism by which Hsp70 promotes tau aggregation.

      We agree that a deeper understanding of the molecular mechanism is needed. The revision experiments outlined above (Reviewer #1, point #1) will define how Hsp70 promotes tau aggregation by testing sequence contributions, dissecting ATPase and substrate-binding domain requirements, and mapping Hsp70/Hsc70 interactors to directly address this mechanistic question.

      (3) Some figures in this study show large error bars in the quantitative data (some statistical analysis figures, MEA recordings, etc.), indicating significant inter-sample variability. It is recommended to label individual data points in all quantitative figures and clearly indicate them in figure legends.

      We acknowledge the inter-sample variability in some of the quantitative datasets. This level of variability can occur in primary neuronal cultures (e.g., MEA recordings), which are sensitive to growth and surface-adhesion conditions and therefore subject to many technical sources of variation. To improve transparency and interpretation, we will revise all quantitative figures to display individual data points overlaid on summary statistics and will update figure legends to clearly indicate sample sizes and statistical tests used.

      References

      Hong Z, Gong W, Yang J, Li S, Liu Z, Perrett S, Zhang H. Exploration of the cysteine reactivity of human inducible Hsp70 and cognate Hsc70. J Biol Chem. 2023 Jan;299(1):102723. doi: 10.1016/j.jbc.2022.102723. Epub 2022 Nov 19. PMID: 36410435; PMCID: PMC9800336.

      Jinwal UK, Miyata Y, Koren J 3rd, Jones JR, Trotter JH, Chang L, O'Leary J, Morgan D, Lee DC, Shults CL, Rousaki A, Weeber EJ, Zuiderweg ER, Gestwicki JE, Dickey CA. Chemical manipulation of hsp70 ATPase activity regulates tau stability. J Neurosci. 2009 Sep 30;29(39):12079-88. doi: 10.1523/JNEUROSCI.3345-09.2009. PMID: 19793966; PMCID: PMC2775811.

      Fontaine SN, Rauch JN, Nordhues BA, Assimon VA, Stothert AR, Jinwal UK, Sabbagh JJ, Chang L, Stevens SM Jr, Zuiderweg ER, Gestwicki JE, Dickey CA. Isoform-selective Genetic Inhibition of Constitutive Cytosolic Hsp70 Activity Promotes Client Tau Degradation Using an Altered Co-chaperone Complement. J Biol Chem. 2015 May 22;290(21):13115-27. doi: 10.1074/jbc.M115.637595. Epub 2015 Apr 11. PMID: 25864199; PMCID: PMC4505567

      Young ZT, Rauch JN, Assimon VA, Jinwal UK, Ahn M, Li X, Dunyak BM, Ahmad A, Carlson G, Srinivasan SR, Zuiderweg ERP, Dickey CA, Gestwicki JE. Stabilizing the Hsp70-Tau Complex Promotes Turnover in Models of Tauopathy. Cell Chem Biol. 2016 Aug 4;23(8):992-1001. doi: 10.1016/j.chembiol.2016.04.014.

      Jinwal UK, Akoury E, Abisambra JF, O'Leary JC 3rd, Thompson AD, Blair LJ, Jin Y, Bacon J, Nordhues BA, Cockman M, Zhang J, Li P, Zhang B, Borysov S, Uversky VN, Biernat J, Mandelkow E, Gestwicki JE, Zweckstetter M, Dickey CA. Imbalance of Hsp70 family variants fosters tau accumulation. FASEB J. 2013 Apr;27(4):1450-9. doi: 10.1096/fj.12-220889. Epub 2012 Dec 27. PMID: 23271055; PMCID: PMC3606536.

      Xia, Y., Prokop, S., Bell, B.M. et al. Pathogenic tau recruits wild-type tau into brain inclusions and induces gut degeneration in transgenic SPAM mice. Commun Biol 5, 446 (2022). https://doi.org/10.1038/s42003-022-03373-1.

      Smith ED, Paterno G, Bell BM, Gorion KM, Prokop S, Giasson BI. Tau from SPAM Transgenic Mice Exhibit Potent Strain-Specific Prion-Like Seeding Properties Characteristic of Human Neurodegenerative Diseases. Neuromolecular Med. 2025 May 30;27(1):44. doi: 10.1007/s12017-025-08850-4. PMID: 40447946; PMCID: PMC12125038.

      Beckman D, Chakrabarty P, Ott S, Dao A, Zhou E, Janssen WG, Donis-Cox K, Muller S, Kordower JH, Morrison JH. A novel tau-based rhesus monkey model of Alzheimer's pathogenesis. Alzheimers Dement. 2021 Jun;17(6):933-945. doi: 10.1002/alz.12318. Epub 2021 Mar 18. PMID: 33734581; PMCID: PMC8252011.

    1. Reviewer #1 (Public review):

      Summary:

      This study presents convincing findings that oligodendrocytes play a regulatory role in spontaneous neural activity synchronization during early postnatal development, with implications for adult brain function. Utilizing targeted genetic approaches, the authors demonstrate how oligodendrocyte depletion impacts Purkinje cell activity and behaviors dependent on cerebellar function. Delayed myelination during critical developmental windows is linked to persistent alterations in neural circuit function, underscoring the lasting impact of oligodendrocyte activity.

      Strengths:

      (1) The research leverages the anatomically distinct olivocerebellar circuit, a well-characterized system with known developmental timelines and inputs, strengthening the link between oligodendrocyte function and neural synchronization.

      (2) Functional assessments, supported by behavioral tests, validate the findings of in vivo calcium imaging, enhancing the study's credibility.

      (3) Extending the study to assess long-term effects of early life myelination disruptions adds depth to the implications for both circuit function and behavior.

      Weaknesses:

      (1) The study would benefit from a closer analysis of myelination during the periods when synchrony is recorded. Direct correlations between myelination and synchronized activity would substantiate the mechanistic link and clarify if observed behavioral deficits stem from altered myelination timing.

      (2) Although the study focuses on Purkinje cells in the cerebellum, neural synchrony typically involves cross-regional interactions. Expanding the discussion on how localized Purkinje synchrony affects broader behaviors - such as anxiety, motor function, and sociality - would enhance the findings' functional significance.

      (3) The authors discuss the possibility of oligodendrocyte-mediated synapse elimination as a possible mechanism behind their findings, drawing from relevant recent literature on oligodendrocyte precursor cells. However, there are no data presented supporting these assumptions. The authors should explain why they think the mechanism behind their observation extends beyond the contribution of myelination or remove this point from the discussion entirely.

      Comment for resubmission: Although the argument on synaptic elimination has been removed, it has been replaced with similarly unclear speculation about roles for oligodendrocytes outside of conventional myelination or metabolic support, again without clear evidence. The authors measured MBP area but have not performed detailed analysis of oligodendrocyte biology to support the claims made in the discussion. Please consider removing this section or rephrasing it to align with the data presented.

      (4) It would be valuable to investigate secondary effects of oligodendrocyte depletion on other glial cells, particularly astrocytes or microglia, which could influence long-term behavioral outcomes. Identifying whether the lasting effects stem from developmental oligodendrocyte function alone or also involve myelination could deepen the study's insights.

      (5) The authors should explore the use of different methods to disturb myelin production for a longer time, in order to further determine if the observed effects are transient or if they could have longer-lasting effects.

      (6) Throughout the paper, there are concerns about statistical analyses, particularly on the use of the Mann-Whitney test or using fields of view as biological replicates.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review): 

      Summary: 

      This study presents convincing findings that oligodendrocytes play a regulatory role in spontaneous neural activity synchronisation during early postnatal development, with implications for adult brain function. Utilising targeted genetic approaches, the authors demonstrate how oligodendrocyte depletion impacts Purkinje cell activity and behaviours dependent on cerebellar function. Delayed myelination during critical developmental windows is linked to persistent alterations in neural circuit function, underscoring the lasting impact of oligodendrocyte activity. 

      Strengths: 

      (1) The research leverages the anatomically distinct olivocerebellar circuit, a well-characterized system with known developmental timelines and inputs, strengthening the link between oligodendrocyte function and neural synchronization. 

      (2) Functional assessments, supported by behavioral tests, validate the findings of in vivo calcium imaging, enhancing the study's credibility. 

      (3) Extending the study to assess the long-term effects of early-life myelination disruptions adds depth to the implications for both circuit function and behavior.

      We appreciate this positive evaluation.

      Weaknesses: 

      (1) The study would benefit from a closer analysis of myelination during the periods when synchrony is recorded. Direct correlations between myelination and synchronized activity would substantiate the mechanistic link and clarify if observed behavioral deficits stem from altered myelination timing. 

      We appreciate the reviewer’s thoughtful suggestion and have expanded the manuscript to clarify how oligodendrocyte maturation relates to the development of Purkinje-cell synchrony. The developmental trajectory of Purkinje-cell synchrony has already been comprehensively characterized by Good et al. (2017, Cell Reports 21: 2066–2073): synchrony drops from a high level at P3–P5 to adult-like values by P8. We found that myelination in the cerebellum first appears at P5–P7 (Figure S1A, B), indicating that the timing of Purkinje cell desynchronization coincides with the initial appearance of oligodendrocytes and myelin in the cerebellum. To determine whether myelin growth could nevertheless modulate this process, we quantified ASPA-positive oligodendrocyte density and MBP-positive bundle thickness and area at P10, P14, P21, and adulthood (Fig. 1J, K, Fig. S1E). Both metrics increase monotonically and clearly lag behind the rapid drop in synchrony, indicating that myelination is unlikely to be the primary trigger for the desynchronization. When oligodendrocytes were ablated during the second postnatal week, synchrony was reduced (new Fig. 2). Thus, once myelination is underway, oligodendrocytes become critical for maintaining synchrony, acting not as the initiators but as the stabilizers and refiners of the mature network state.
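      The manuscript's exact synchrony metric is not spelled out in this exchange; one common definition for two-photon calcium data is the mean pairwise Pearson correlation across cells, sketched below under that assumption (the function name is ours):

      ```python
      import numpy as np

      def population_synchrony(dff):
          # dff: (n_cells, n_frames) array of dF/F traces, one row per cell.
          r = np.corrcoef(dff)                # cell-by-cell correlation matrix
          iu = np.triu_indices_from(r, k=1)   # unique pairs, diagonal excluded
          return float(r[iu].mean())
      ```

      A perfectly synchronized population would score 1.0 and fully desynchronized traces would score near 0, so a developmental drop in this value mirrors the desynchronization described by Good et al. (2017).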

      We have added a new subsection to the Discussion (lines 451–467) in which we propose a two-phase model. Phase I (P3–P8): high early synchrony is generated by non-myelin mechanisms (e.g., transient gap junctions, shared climbing-fiber input). Phase II (P8 onward): as oligodendrocytes proliferate and ensheath axons, they fine-tune conduction velocity and stabilize the mature, low-synchrony network state.

      We believe these additions fully address the reviewer’s concerns.

      (2) Although the study focuses on Purkinje cells in the cerebellum, neural synchrony typically involves cross-regional interactions. Expanding the discussion on how localized Purkinje synchrony affects broader behaviors - such as anxiety, motor function, and sociality - would enhance the findings' functional significance.

      We appreciate the reviewer’s helpful suggestion and have expanded the Discussion (lines 543–564) to clarify how localized Purkinje-cell synchrony can influence broader behavioral domains. In the revised text we note that changes in PC synchrony propagate into thalamic, prefrontal, limbic, and parietal targets, thereby impacting distributed networks involved in motor coordination, affect, and social interaction. Our optogenetic rescue experiments further support this framework, as transient resynchronization of PCs normalized sociability and motor coordination while leaving anxiety-like behavior impaired. This dissociation highlights that different behavioral domains rely to varying degrees on precise cerebellar synchrony and underscores how even localized perturbations in Purkinje timing can acquire system-level significance.

      (3) The authors discuss the possibility of oligodendrocyte-mediated synapse elimination as a possible mechanism behind their findings, drawing from relevant recent literature on oligodendrocyte precursor cells. However, there are no data presented supporting this assumption. The authors should explain why they think the mechanism behind their observation extends beyond the contribution of myelination or remove this point from the discussion entirely.

      We thank the reviewer for pointing out that our original discussion of oligodendrocyte-mediated synapse elimination was not directly supported by data in the present manuscript. Because we are actively analyzing this question in a separate, follow-up study, we have deleted the speculative passage to keep the current paper focused on the demonstrated, myelination-dependent effects. We believe this change sharpens the mechanistic narrative and fully addresses the reviewer’s concern.

      (4) It would be valuable to investigate the secondary effects of oligodendrocyte depletion on other glial cells, particularly astrocytes or microglia, which could influence long-term behavioral outcomes. Identifying whether the lasting effects stem from developmental oligodendrocyte function alone or also involve myelination could deepen the study's insights. 

      We thank the reviewer for raising this point and have performed the requested analyses. Using IBA1 immunostaining for microglia and S100β for Bergmann glia, we quantified cell density and marker signal intensity at P14 and P21. Neither microglial nor Bergmann-glial measures differed between control and oligodendrocyte-ablated mice at either time point (new Figure S2). These results indicate that the behavioral phenotypes we report are unlikely to arise from secondary activation or loss of other glial populations.

      We have added these results (lines 275–286) and also discuss myelination and other oligodendrocyte functions (lines 443–450). It remains difficult to disentangle conduction-related effects from myelination-independent trophic roles of oligodendrocytes. We therefore note explicitly that future work employing stage-specific genetic tools or acute metabolic manipulations will be required to parse these contributions more definitively.

      (5) The authors should explore the use of different methods to disturb myelin production for a longer time, in order to further determine if the observed effects are transient or if they could have longer-lasting effects.

      We agree that distinguishing transient from enduring effects is critical. Importantly, our original submission already included data demonstrating a persistent deficit of PC population synchrony (Fig. 4, previous Fig. 3): (i) at P14—the early age after oligodendrocyte ablation—population synchrony is reduced, and (ii) the same deficit is still present in adults (P60–P70) despite full recovery of ASPA-positive cell density and MBP-area and -thickness (Fig. 2H-K, Fig. S1E, and Fig. 4). We also performed the ablation of oligodendrocytes after the third postnatal week. Despite a similar acute drop in ASPA-positive cells, neither population synchrony nor anxiety-, motor-, or social behaviors differed from littermate controls. Thus, extending myelin disruption beyond the developmental window does not exacerbate or prolong the phenotype, whereas a short perturbation within that window leaves a permanent timing defect. These findings strengthen our conclusion that it is the developmental oligodendrocyte/myelination program itself—rather than ongoing adult myelin production—that is essential for establishing stable network synchrony. We now highlight this point explicitly in the revised Discussion (lines 507–522).

      (6) Throughout the paper, there are concerns about statistical analyses, particularly on the use of the Mann-Whitney test or using fields of view as biological replicates.

      We appreciate the reviewer’s guidance on appropriate statistical treatment. To address these concerns we have re-analyzed all datasets that contained multiple measurements per animal (e.g., fields of view, lobules, or trials) using nested statistics with animal as the higher-order unit. Specifically, we applied a two-level nested ANOVA when more than two groups were compared and a nested t-test when two conditions were present. The re-analysis confirmed all original conclusions. Because the nested models yielded comparable effect sizes to the Mann–Whitney tests, we have retained the mean ± SEM for ease of comparison with prior literature but now also report all values for each mouse in Table 1. In cases where a single measurement per mouse was compared between two groups, we used the Mann–Whitney test and present the results in the graphs as median values.
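      The per-animal aggregation described above can be illustrated with a short sketch: collapse the repeated measurements (fields of view, lobules, trials) to one mean per animal, then compare animal means between groups. This is a hedged stand-in for a full nested/mixed-effects model, not the authors' actual analysis code, and `nested_two_group_test` is a hypothetical helper:

      ```python
      import numpy as np
      from scipy import stats

      def nested_two_group_test(values, animal_ids, group_ids):
          # Collapse repeated measurements to a single mean per animal, then
          # compare the two groups of animal means with Welch's t-test.
          values = np.asarray(values, dtype=float)
          animal_ids = np.asarray(animal_ids)
          group_ids = np.asarray(group_ids)
          means, groups = [], []
          for a in np.unique(animal_ids):
              sel = animal_ids == a
              means.append(values[sel].mean())
              groups.append(group_ids[sel][0])  # each animal sits in one group
          means, groups = np.array(means), np.array(groups)
          g1, g2 = np.unique(groups)
          return stats.ttest_ind(means[groups == g1], means[groups == g2],
                                 equal_var=False)
      ```

      Averaging within animal discards the within-animal distribution (as the reviewer's alternative also notes), which is why the authors additionally ran true two-level nested models; the sketch simply shows why the animal, not the field of view, is the biological replicate.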

      Major

      (1) The authors present compelling evidence that early loss of myelination disrupts synchronous firing prematurely. However, synchronous neuronal firing does not equate to circuit synchronization. It is improbable that myelination directly generates synchronous firing in Purkinje cells (PCs). For instance, Foran et al. (1992) identified that cerebellar myelination begins around postnatal day 6 (P6), while Good et al. (2017) recorded a developmental decline in PC activity correlation from P5-P11. To clarify myelin's role, we recommend detailed myelin imaging through light microscopy (MBP staining at higher magnification) to assess the extent of myelin removal accurately. Myelin sheaths, as shown by Snaidero et al. (2020), can persist after oligodendrocyte (OL) death, particularly following DTA induction (Pohl et al. 2011). Quantification of MBP+ area, rather than mean MBP intensity, is necessary to accurately measure myelin coverage.

      We appreciate the reviewer’s concern that residual sheaths might remain after oligodendrocyte ablation and have therefore re-examined myelin at higher spatial resolution. Two independent metrics were then extracted: MBP⁺ area fraction in the white matter and MBP⁺ bundle thickness (new Figure 1J, K, and Fig. S1E). We confirm a robust, transient loss of myelin at P10 and P14, as shown by the reduction of MBP⁺ area and MBP⁺ bundle thickness. Both parameters recovered to control values by P21 and adulthood, indicating effective remyelination. These data demonstrate that, in our paradigm, oligodendrocyte ablation is accompanied by substantial sheath loss rather than the persistent myelin reported after acute toxin exposure. We have added these data to the Results (lines 266–271).

      The results reinforce the view that myelin removal and/or loss of trophic support during a narrow developmental window drive the long-term hyposynchrony and behavioral phenotypes we report. We have added a new subsection to the Discussion (lines 443–450) in which we propose a two-phase model. Phase I (P3–P8): high early synchrony is generated by non-myelin mechanisms (e.g., transient gap junctions, shared climbing-fiber input). Phase II (P8 onward): as oligodendrocytes proliferate and ensheath axons, they fine-tune conduction velocity and stabilize the mature, low-synchrony network state. We believe these additions fully address the reviewer’s concerns.

      (2) Surprisingly, the authors speculate about oligodendrocyte-mediated synaptic pruning without supportive data, shifting the focus away from the potential impact of myelination. Even if OLs perform synaptic pruning, OL depletion would likely maintain synchrony, yet the opposite was observed. Further characterisation of the model and the potential source of the effect is needed. 

      We thank the reviewer for pointing out that our original discussion of oligodendrocyte-mediated synapse elimination was not directly supported by data in the present manuscript. Because we are actively analyzing this question in a separate, follow-up study, we have deleted the speculative passage to keep the current paper focused on the demonstrated, myelination-dependent effects. We believe this change sharpens the mechanistic narrative and fully addresses the reviewer’s concern.

      (3) Improved characterization of the DTA model would add clarity. Although almost all infected cells are reported as OLs, quantification of infected OL-lineage cells (e.g., via Olig2 staining) would verify this. It remains possible that observed activity changes are driven by OL-independent demyelination effects. We suggest cross-staining with Iba1 and GFAP to rule out inflammation or gliosis. 

      We thank the reviewer for this important suggestion and have expanded our histological characterization accordingly. First, to verify that DTA expression is confined to mature oligodendrocytes, we co-stained cerebellar sections collected 7 days after AAV-hMAG-mCherry injection with Olig2 (pan-OL lineage) and ASPA (mature OL marker), as shown in Figure S1C-D. Quantitative analysis revealed that 100% of mCherry⁺ cells were Olig2⁺/ASPA⁺, whereas mCherry signal was virtually absent in Olig2⁺/ASPA⁻ immature oligodendrocytes. These data confirm that our DTA manipulation targets mature myelinating OLs rather than earlier lineage stages. We have added these data to the Results (lines 260–262).

      Second, to examine indirect effects mediated by other glia, we performed cross-staining with IBA1 (microglia) and S100β (Bergmann glia). Cell density and fluorescence intensity for each marker were indistinguishable between control and DTA groups at P14 and P21 (Figure S2A-H). Thus, neither inflammation nor astro-/microgliosis accompanies OL ablation. We have added these data to the Results (lines 275–286).

      Collectively, these results demonstrate that the observed desynchronization and behavioral phenotypes arise from specific loss of mature oligodendrocytes and their myelin, rather than from off-target viral expression or secondary glial responses.

      (4) The use of an independent model of myelin loss, such as the inducible Myrf knockout mouse with a MAG promoter, to assess if oligodendrocyte loss causes temporary or sustained impacts, employing an extended knockout model like Myrf cKO with MAG-Cre viral methods would be advantageous.

      We agree that distinguishing transient from enduring effects is critical. Importantly, our original submission already included data demonstrating a persistent deficit of PC population synchrony (Fig. 4, previous Fig. 3): (i) at P13-15—the early age after oligodendrocyte ablation—population synchrony is reduced, and (ii) the same deficit is still present in adults (P60–P70) despite full recovery of ASPA-positive cell density and MBP-area and -thickness (Fig. 2H-K, Fig. S1E, and Fig. 4). We also performed the ablation of oligodendrocytes after the third postnatal week. Despite a similar acute drop in ASPA-positive cells, neither population synchrony nor anxiety-, motor-, or social behaviors differed from littermate controls. Thus, extending myelin disruption beyond the developmental window does not exacerbate or prolong the phenotype, whereas a short perturbation within that window leaves a permanent timing defect. These findings strengthen our conclusion that it is the developmental oligodendrocyte/myelination program itself—rather than ongoing adult myelin production—that is essential for establishing stable network synchrony. We now highlight this point explicitly in the revised Discussion (lines 507–522).

      (5) For statistical robustness, the use of non-parametric tests (Mann-Whitney) necessitates reporting the median instead of the mean as the authors do. Furthermore, as repeated measurements within the same animal are not independent, the authors should ideally use nested ANOVA (or nested t-test comparing two conditions) to validate their findings (Aarts et al., Nat. Neuroscience 2014). Alternatively use one-way ANOVA with each animal as a biological replicate, although this means that the distribution in the data sets per animal is lost.

      We appreciate the reviewer’s guidance on appropriate statistical treatment. To address these concerns we have re-analyzed all datasets that contained multiple measurements per animal (e.g., fields of view, lobules, or trials) using nested statistics with animal as the higher-order unit. Specifically, we applied a two-level nested ANOVA when more than two groups were compared and a nested t-test when two conditions were present. The re-analysis confirmed all original conclusions. Because the nested models yielded comparable effect sizes to the Mann–Whitney tests, we have retained the mean ± SEM for ease of comparison with prior literature but now also report all values for each mouse in Table 1. In cases where a single measurement per mouse was compared between two groups, we used the Mann–Whitney test and present the results in the graphs as median values.

      Minor Points 

      (1) In all figures, please specify the ages at which each procedure was conducted, as demonstrated in Figure 2A.

      All main and supplementary figures now specify the exact postnatal age.

      (2) Clarify the selection criteria for regions of interest (ROI) in calcium imaging, and provide representative ROIs.

      We appreciate the reviewer’s guidance. We have clarified that our ROI detection followed the protocol described in our previous paper (Tanigawa et al., 2024, Communications Biology) (lines 177–178), and representative Purkinje cell ROIs are now shown in Fig. 2B.

      (3) Include data on the proportion of climbing fiber or inferior olive neurons expressing Kir and the total number of neurons transfected, which would help contextualize the observed effects on PC synchronization and its broader implications for cerebellar circuit function.

      We appreciate the reviewer’s guidance. New Fig. 7C summarizes the efficiency of AAV-GFP and AAV-Kir2.1-GFP injections into the inferior olive. Across 4 mice, GFP-labeled CFs were detected on 19.3 ± 11.9% (mean ± S.D.) of PCs for control and 26.2 ± 11.8% (mean ± S.D.) of PCs for Kir2.1. These numbers are reported in the Results (lines 373–375).

      (4) Higher magnification images in Figures 1 and S3 would improve visual clarity. 

      We have addressed the request for higher-magnification images in two ways. First, all panels in Figure S3 were placed on a larger canvas. Second, in Figure 1 we adjusted panel sizes to emphasize fine structure: panel 1C already represents an enlargement of the RFP-positive cells shown in 1B, and panels 1H and 1J now occupy a wider span so that every ASPA-positive cell body can be distinguished. Should the reviewer still require an even closer view, we have additional high-magnification images ready for upload.

      (5) Consider language editing to enhance overall clarity and readability.

      The entire manuscript was edited to improve flow, consistency, and readability.

      (6) Refine the discussion to align with the presented data.

      We have refined the discussion.

      Thank you once again for your constructive suggestions and comments. We believe these changes have improved the clarity and readability of our manuscript.

      Reviewer #2 (Public review):

      We appreciate Reviewer #2’s positive evaluation of our work and thank them for the constructive suggestions and comments. We followed these suggestions and comments and conducted additional experiments. We have rewritten the manuscript and revised the figures according to the points Reviewer #2 raised. Our point-by-point responses to the comments are as follows.

      Summary:

      In this manuscript, the authors use genetic tools to ablate oligodendrocytes in the cerebellum during postnatal development. They show that the oligodendrocyte numbers return to normal post-weaning. Yet, the loss of oligodendrocytes during development seems to result in decreased synchrony of calcium transients in Purkinje neurons across the cerebellum. Further, there were deficits in social behaviors and motor coordination. Finally, they suppress activity in a subset of climbing fibers to show that it results in similar phenotypes in the calcium signaling and behavioral assays. They conclude that the behavioral deficits in the oligodendrocyte ablation experiments must result from loss of synchrony. 

      Strengths:

      Use of genetic tools to induce perturbations in a spatiotemporally specific manner.

      We appreciate this positive evaluation.

      Weaknesses: 

      The main weakness in this manuscript is the lack of a cohesive causal connection between the experimental manipulation performed and the phenotypes observed. Though they have taken great care to induce oligodendrocyte loss specifically in the cerebellum and at specific time windows, the subsequent experiments do not address specific questions regarding the effect of this manipulation.

      Calcium transients in Purkinje neurons are caused to a large extent by climbing fibers, but there is evidence for simple spikes to also underlie the dF/F signatures (Ramirez and Stell, Cell Reports, 2016).

      We thank the reviewer for drawing attention to the work of Ramirez & Stell (2016), which showed that simple-spike bursts can elicit Ca²⁺ rises, but only in the soma and proximal dendrites of adult Purkinje cells. In our study, Regions of Interest were restricted to the dendritic arbor, where SS-evoked signals are essentially undetectable (Ramirez & Stell, 2016), whereas climbing-fiber complex spikes generate large, all-or-none transients (Good et al., 2017). Accordingly, even if a rare SS-driven event reached threshold it would likely fall outside our ROIs.

      In addition, we directly imaged CF population activity by expressing GCaMP7f in inferior-olive neurons. Correlation analysis of CF boutons revealed that DTA ablation lowers CF–CF synchrony at P14 (new Fig. 3A–D). Because CF synchrony is a principal driver of Purkinje-cell co-activation, this reduction provides a mechanistic link between oligodendrocyte loss and the hyposynchrony we observe among Purkinje cells. Consistent with this interpretation, electrophysiological recordings showed that parallel-fiber EPSCs and inhibitory synaptic inputs onto Purkinje cells were unchanged by DTA treatment (Fig. 3E-H), which makes it less likely that the reduced synchrony simply reflects changes in other excitatory or inhibitory synaptic drive.

      That said, SS-dependent somatic Ca²⁺ signals could still influence downstream plasticity and long-term cerebellar function. In future work we therefore plan to combine somatic imaging with stage-specific oligodendrocyte manipulations to test whether SS-evoked Ca²⁺ dynamics are themselves modulated by oligodendrocyte support. We have added these descriptions in the Results (lines 288–294) and Discussion (lines 423–434).

      Also, it is erroneous to categorize these calcium signals as signatures of "spontaneous activity" of Purkinje neurons as they can have dual origins.

      Thank you for pointing out the potential ambiguity. In the revised manuscript we have clarified how we use the term “spontaneous activity” in the context of our measurements (lines 289-290). Our calcium imaging was restricted to the dendritic arbor of Purkinje cells, where calcium transients are dominated by climbing-fiber (CF) inputs (Ramirez & Stell, 2016; Good et al., 2017). Thus, the synchrony values reported here primarily reflect CF-driven complex spikes rather than mixed signals of dual origin. We have revised the Results section accordingly (lines 289–293) to make this measurement-specific limitation explicit.

      Further, the effect of developmental oligodendrocyte ablation on the cerebellum has been previously reported by Mathis et al., Development, 2003. They report very severe effects such as the loss of molecular layer interneurons, stunted Purkinje neuron dendritic arbors, abnormal foliations, etc. In this context, it is hardly surprising that one would observe a reduction of synchrony in Purkinje neurons (perhaps due to loss of synaptic contacts, not only from CFs but also from granule cells).

      We appreciate the reviewer’s comparison to Mathis et al. (2003). Mathis et al. used MBP–HSV-TK transgenic mice in which systemic FIAU treatment eliminates oligodendrocytes. When ablation began at P1, they observed severe dysmorphology—loss of molecular-layer interneurons, Purkinje-cell (PC) dendritic stunting, and abnormal foliation. Crucially, however, the same study reports that starting the ablation later (FIAU from P6-P20) left cerebellar cyto-architecture entirely normal.

      Our AAV MAG-DTA paradigm resembles this later window. Our temporally restricted DTA protocol produces the same ‘late-onset’ profile—robust yet reversible hypomyelination with no loss of Purkinje cells, interneurons, dendritic length, or synaptic input (new Fig. S1–S2, Fig. 3E-H). The enduring hyposynchrony we report therefore cannot be attributed to the dramatic anatomical defects seen after prenatal ablation, but instead reveals a specific requirement for early-postnatal myelin in stabilizing PC synchrony, especially affecting CF-CF synchrony.

      This clarification shows that we have carefully considered the Mathis model and that our findings not only replicate but also extend the earlier work. We have added these descriptions in the Results (lines 273–286).

      The last experiment with the expression of Kir2.1 in the inferior olive is hardly convincing.

      We appreciate the reviewer’s concern and have reinforced the causal link between Purkinje-cell synchrony and behavior. To test whether restoring PC synchrony is sufficient to rescue behavior, we introduced a red-shifted opsin (AAV-L7-rsChrimine) into PCs of DTA mice raised to adulthood. During testing we delivered 590-nm light pulses (10 ms, 1 Hz) to the vermis, driving brief, population-wide spiking (new Fig. 8). This periodic re-synchronization left anxiety measures unchanged (open-field center time remained low) but rescued both motor coordination (rotarod latency normalized to control levels) and sociability (time spent with a novel mouse restored). This dissociation implies that distinct behavioral domains differ in their sensitivity to PC timing precision and confirms that reduced synchrony, not cell loss or gross circuit damage (Fig. S1F, S2), is the primary driver of the motor and social deficits. Together, the optogenetic rescue establishes a bidirectional, mechanistic link between PC synchrony and behavior, addressing the reviewer’s reservations about the original experiment. We have added these descriptions in the Results (lines 394–415).

      In summary, while the authors used a specific tool to probe the role of developmental oligodendrocytes in cerebellar physiology and function, they failed to answer specific questions regarding this role, which they could have done with more fine-grained experimental analysis.

      Thank you once again for your constructive suggestions and comments. We believe these changes have improved the clarity and readability of our manuscript.

      Recommendations for the authors:

      Reviewer #2 (Recommendations for the authors):

      (1) Show that ODC loss is specific to the cerebellum.

      We thank the reviewer for requesting additional evidence. To verify that oligodendrocyte ablation was confined to the cerebellum, we injected an AAV carrying mCherry under the human MAG promoter (AAV-hMAG-mCherry) into the cerebellum and screened the whole brain one week later. As shown in the new Figure 1E–G, mCherry-positive cells were present throughout the injected cerebellar cortex (Fig. 1E), but no fluorescent cells were detected in extracerebellar regions, including the cerebral cortex, medulla, pons, and midbrain. These data demonstrate that our viral approach is specific to the cerebellum, ruling out off-target demyelination elsewhere in the CNS as a contributor to the behavioral and synchrony phenotypes. We have added these descriptions in the Results (lines 262–264).

      (2) Characterize the gross morphology of the cerebellum at different developmental stages. Are major cell types all present? Major pathways preserved? 

      We thank the reviewer for requesting additional evidence. To ensure that the developmental loss of oligodendrocytes did not globally disturb cerebellar architecture, we performed a comprehensive histological and electrophysiological survey during development. New data are presented (new Fig. S1–S2, Fig. 3E-H).

      (1) Overall morphology. Low-magnification parvalbumin counterstaining revealed similar cerebellar area in DTA versus control mice at every age (Fig. S1F, G).

      (2) Major neuronal classes. Quantification of parvalbumin-positive Purkinje cells and interneurons showed no differences in density between control and DTA (Fig. S2E, H, M, N, P). Purkinje dendritic arbors were not different between control and DTA (Fig. S2G, O).

      (3) Excitatory and inhibitory synaptic inputs. Miniature IPSCs and parallel-fiber EPSCs onto Purkinje cells were quantified; neither differed between groups (Fig. 3E-G).

      (4) Glial populations. IBA1-positive microglia and S100β-positive astrocytes exhibited normal density and marker intensity (Fig. S2).

      Taken together, these analyses show that all major cell types are present at normal density, synaptic inputs from excitatory and inhibitory neurons are preserved, and gross cerebellar morphology is intact after DTA-mediated oligodendrocyte ablation.

      (3) Recording of PNs to see whether the lack of synchrony is due to CFs or simple spikes.

      We thank the reviewer for drawing attention to the work of Ramirez & Stell (2016), which showed that simple-spike bursts can elicit Ca²⁺ rises, but only in the soma and proximal dendrites of adult Purkinje cells. In our study, Regions of Interest were restricted to the dendritic arbor, where SS-evoked signals are essentially undetectable (Ramirez & Stell, 2016), whereas climbing-fiber complex spikes generate large, all-or-none transients (Good et al., 2017). Accordingly, even if a rare SS-driven event reached threshold it would likely fall outside our ROIs.

      In addition, we directly imaged CF population activity by expressing GCaMP7f in inferior-olive neurons. Correlation analysis of CF boutons revealed that DTA ablation lowers CF–CF synchrony at P14 (new Fig. 3A–D). Because CF synchrony is a principal driver of Purkinje-cell co-activation, this reduction provides a mechanistic link between oligodendrocyte loss and the hyposynchrony we observe among Purkinje cells. Consistent with this interpretation, electrophysiological recordings showed that parallel-fiber EPSCs and inhibitory synaptic inputs onto Purkinje cells were unchanged by DTA treatment (Fig. 3E-H), which makes it less likely that the reduced synchrony simply reflects changes in other excitatory or inhibitory synaptic drive.

      That said, SS-dependent somatic Ca²⁺ signals could still influence downstream plasticity and long-term cerebellar function. In future work we therefore plan to combine somatic imaging with stage-specific oligodendrocyte manipulations to test whether SS-evoked Ca²⁺ dynamics are themselves modulated by oligodendrocyte support. We have added these descriptions in the Results (lines 301–312) and Discussion (lines 423–434).

      (4) Is CF synapse elimination altered? Test using evoked EPSCs or staining methods.

      We agree that directly testing whether oligodendrocyte loss disturbs climbing-fiber synapse elimination would provide a full mechanistic picture. We are already quantifying climbing fiber terminal number with vGluT2 immunostaining and recording evoked CF-EPSCs in the same DTA model; these data, together with an analysis of how population synchrony is involved in synapse elimination, will form the basis of a separate manuscript now in preparation. To keep the present paper focused on the phenomena we have rigorously documented—transient oligodendrocyte loss and the resulting long-lasting hyposynchrony and abnormal behaviors—we have removed the speculative sentence on oligodendrocyte-mediated synapse elimination. We believe this revision meets the reviewer’s request without over-extending the current dataset.

      Thank you once again for your constructive suggestions and comments. We believe these changes have improved the clarity and readability of our manuscript.

    1. Reviewer #2 (Public review):

      Okabe and colleagues build on a super-resolution-based technique that they have previously developed in cultured hippocampal neurons, improving the pipeline and using it to analyze spine nanostructure differences across 8 different mouse lines with mutations in autism or schizophrenia (Sz) risk genes/pathways. It is a worthy goal to try to use multiple models to examine potential convergent (or not) phenotypes, and the authors have made a good selection of models. They identify some key differences between the autism versus the Sz risk gene models, primarily that dendritic spines are smaller in Sz models and (mostly) larger in autism risk gene models. They then focus on three models (2 Sz - 22q11.2 deletion, Setd1a; 1 ASD - Nlgn3) for time-lapse imaging of spine dynamics, and together with computational modelling provide a mechanistic rationale for the smaller spines in Sz risk models. Bulk RNA sequencing of all 8 model cultures identifies several differentially expressed genes, which they go on to test in cultures, finding that ecgr4 is upregulated in several Sz models and its misexpression recapitulates spine dynamics changes seen in the Sz mutants, while knockdown rescues spine dynamics changes in the Sz mutants. Overall, these have the potential to be very interesting findings and useful for the field. However, I do have a number of major concerns.

      (1) The main finding of spine nanostructure changes is done by carrying out a PCA on various structural parameters, creating spine density plots across PC1 and PC2, and then subtracting the WT density plot from the mutant. Then, spines in the areas with obvious differences only are analyzed, from which they derive the finding that, for example, spine sizes are smaller. However, this seems a circular approach. It is like first identifying where there might be a difference in the data, then only analyzing that part of the data. I welcome input from a statistician, but to me, this is at best unconventional and potentially misleading. I assume the overall means are not different (although this should be included), but could they look at the distribution of sizes and see if these are shifted?

      (2) Despite extracting 64 parameters describing spine structure, only 5 of these seemed to be used for the PCA. It should be possible to use all parameters and show the same results. More information on PC1 and PC2 would be helpful, given that the rest of the paper is based on these - what features are they related to? These specific features could then be analyzed in the full dataset, without doing the cherry picking above. It would also be helpful to demonstrate whether PC1 and 2 differ across groups - for example, the authors could break their WT data into 2 subsets and repeat the analysis.

      (3) Throughout the paper, the 'n' used for statistical analysis is often spine, which is not appropriate. At a minimum, cell should be used, but ideally a nested mixed model, which would take into account factors like cell, culture, and animal, would be preferable. Also, all of these factors should be listed, with sufficient independent cultures.

      (4) The authors should confirm that all mutants are also on the C57BL/6J background, and clarify whether control cultures are from littermates (this would be important). Also, are control versus mutant cultures done simultaneously? There can be significant batch effects with cultures.

      (5) The spine analysis uses cultures from 18-22 DIV - this is quite a large range. It would be worth checking whether age is a confounder or correlated with any parameters / principal components.

      (6) The computational modelling is interesting, but again, I am concerned about some circularity. Parameter optimization was used to identify the best fit model that replicated the spine turnover rates, so it is somewhat circular to say that this matched the observations when one of these is the turnover rate. It is more convincing for spine density and size, but why not go back and test whether parameter differences are actually seen - for example, it would be possible to extract the probability of nascent spine loss, etc. More compelling would be to repeat the experiments and see if the model still fits the data. In the interpretation (line 314-318) it is stated that '... reduced spine maturation rate can account for the three key properties of schizophrenia-related spines...', which is interesting if true, but it has just been stated that the probability of spine destabilization is also higher in mutants (line 303) - the authors should test whether if the latter is set to be the same as controls whether all the findings are replicated.

      (7) No validation for overexpression or knockdown is shown, although it is mentioned in the methods - please include. Also, for the knockdown, a scrambled shRNA control would be preferable.

      (8) The finding regarding ecgr4 is interesting, but showing that some ecgr4 is expressed at boutons and spines and some in DCVs is not enough evidence to suggest that it is actively involved in the regulation of synapse formation and maturation (line 356).

      (9) The same caveats that apply to the analysis also apply to the ecgr4 rescue. In addition, while for 22q the control shRNA mutant vs WT looks vaguely like Figure 2, setd1a looks completely different. And if rescued, surely shRNA in the mutant should now resemble control in WT, so there shouldn't be big differences, but in fact, there are just as many differences as comparing mutant vs wildtype? Plus, for spine features, they only compare mutant rescue with mutant control, but this is not ideal - something more like a 2-way ANOVA is really needed. Maybe input from a statistician might be useful here?

      (10) Although this is a study entirely focused on spine changes in mouse models for Sz, there is no discussion (or citation) of the various studies that have examined this in the literature. For example, for Setd1a, smaller spines or reduced spine densities have been described in various papers (Mukai et al, Neuron 2019; Chen et al, Sci Adv 2022; Nagahama et al, Cell Rep 2020).

      (11) There is a conceptual problem with the models if being used to differentiate autism risk from Sz risk genes. It is difficult to find good mouse models for Sz, so the choice of 22q11.2del and Setd1a haploinsufficiency is completely reasonable. However, these are both syndromic. 22qdel syndrome involves multiple issues, including hearing loss, delayed development, and learning disabilities, and is associated with autism (20% have autism, as compared to 25% with Sz). Similarly, Setd1a is also strongly associated with autism as well as Sz (and also involves global developmental delay and intellectual disability). While I think this is still the best we can do, and it is reasonable to say that these models show biased risk for these developmental disorders, it definitely can't be used as an explanation for the higher variability seen in the autism risk models.

      (12) I am not convinced that using dissociated cultures is 'more likely to reflect the direct impact of schizophrenia-related gene mutations on synaptic properties' - first, cultures do have non-neuronal cells, although here glial proliferation was arrested at 2 days, glia will be present with the protocol used (or if not, this needs demonstrating). Second, activity levels will affect spine size, and activity patterns are very abnormal in dissociated cultures, so it is very possible that spine changes may not translate into in vivo scenarios. Overall, it is a weakness that the dissociated culture system has been used, which is not to say that it is not useful, and from a technical and practical perspective, there are good justifications.

      (13) As a minor comment, the spine time-lapse imaging is a strength of the paper. I wonder about the interpretation of Figure 5. For example, the results in Figure 5G and J look as if they may be more that the spines grow to a smaller size and start from a smaller size, rather than necessarily the rate of growth.

    2. Author response:

      Reviewer #1

      (1) The main weakness is that the study is wholly in vitro, using cultured hippocampal neurons.

      We appreciate this reviewer's concern about the limitation of cultured hippocampal neurons in extracting disease-related spine phenotypes. While we fully recognize this limitation, we consider that this in vitro system has several advantages that contribute to translational research on mental disorders.

      First, our culture system has been shown to support the development of spine morphology similar to that of the hippocampal CA1 excitatory synapse in vivo. High-resolution imaging techniques confirmed that the in vitro spine structure was highly preserved compared with in vivo preparations (Kashiwagi et al., Nature Communications, 2019). The present study used the same culture system and SIM imaging. Therefore, the difference we detected in samples derived from disease models is likely to reflect impairment of molecular mechanisms underlying native structural development in vivo.

      Second, super-resolution imaging of thousands of spines in tissue preparations under precisely controlled conditions cannot be practically applied using currently available techniques. The advantage of our imaging and analytical pipeline is its reproducibility, which enabled us to compare the spine population data from eight different mouse models without normalization.

      Third, a reduced culture system can demonstrate the direct effects of gene mutations on synapse phenotypes, independent of environmental influences. This property is highly advantageous for screening chemical compounds that rescue spine phenotypes. Neuronal firing patterns and receptor functions can also be easily controlled in a culture system. The difference in spine structure between ASD and schizophrenia mouse models is valuable information to establish a drug screening system.

      Fourth, establishing an in vitro system for evaluating synapse phenotypes could reduce the need for animal experiments. Researchers should be aware of the 3Rs principles. In the future, combined with differentiation techniques for human iPS cells, our in vitro approach will enable the evaluation of disease-related spine phenotypes without the need for animal experiments. The effort to establish a reliable culture system should not be eliminated.

      (2) Another weakness is that CaMKIIα K42R/K42R mutant mice are presented as a schizophrenia model.

      We agree with this reviewer that CAMK2A mutations in humans are linked to multiple mental disorders, including developmental disorders, ASD, and schizophrenia. Association of gene mutations with the categories of mental disorders is not straightforward, as the symptoms of these disorders also overlap with each other. For the CaMKIIα K42R/K42R mutant, we considered the following points in its characterization as a model of mental disorder. Analysis of CaMKIIα +/- mice in Dr. Tsuyoshi Miyakawa's lab has provided evidence linking reduced CaMKIIα to schizophrenia-related phenotypes (Yamasaki et al., Mol Brain 2008; Frankland et al., Mol Brain Editorial 2008). It is also known that the CaMKIIα R8H mutation in the kinase domain is linked to schizophrenia (Brown et al., 2021). Both the CaMKIIα R8H and CaMKIIα K42R mutations are located in the N-terminal domain and eliminate kinase activity. On the other hand, the representative CaMKIIα E183V mutation identified in ASD patients exhibits unique characteristics, including reduced kinase activity, decreased protein stability and expression levels, and disrupted interactions with ASD-associated proteins such as Shank3 (Stephenson et al., 2017). Importantly, the reduced dendritic spine density in neurons expressing CaMKIIα E183V is a property opposite to that of the CaMKIIα K42R/K42R mutant, which showed increased spine density (Koeberle et al. 2017).

      Different CAMK2A mutations likely cause distinct phenotypes observed in the broad spectrum of mental disorders. In the revised manuscript, we will include a discussion of the relevant literature to categorize this mouse model appropriately.

      References related to this discussion.

      (1) Yamasaki et al., Mol Brain. 2008 DOI: 10.1186/1756-6606-1-6

      (2) Frankland et al. Mol Brain. 2008 DOI: 10.1186/1756-6606-1-5

      (3) Stephenson et al., J Neurosci. 2017 DOI: 10.1523/JNEUROSCI.2068-16.2017

      (4) Koeberle et al. Sci Rep. 2017 DOI: 10.1038/s41598-017-13728-y

      (5) Brown et al., iScience. 2021 DOI: 10.1016/j.isci.2021.103184

      Reviewer #2

      We recognize the reviewer's comments as important for improving our manuscript. We outline our general approach to addressing major concerns. Detailed responses to each point, along with additional data, will be provided in a formal revised manuscript.

      (1) Demonstrating the robustness of statistical analyses

      We appreciate this reviewer's concern about our strategies for the quantitative analysis of the large spine population. For the PCA analysis (Point 2), our preliminary results indicated that including all parameters or the selected five parameters did not make a significant difference in the relative placement of spines with specific morphologies in the feature space defined by the principal components. This point will be discussed in the revised manuscript. The potential problem of selecting a particular region within a feature space for spine shape analysis (Point 1) can be addressed by using alternative simulation-based approaches, such as bootstrap or permutation tests. These analyses will be included in the revised manuscript. The use of sample numbers in statistical analyses should align with the analysis's purpose (Point 3). When analyzing the distribution of samples in the feature space, it is necessary to use spine numbers for statistical assessment. We will recheck the statistical methods and apply the appropriate method for each analysis. The spine population data in Figures 2 and 8 cannot be directly compared, as the spine visualization methods differ (Figure 2 with membrane DiI labeling; Figure 8 with cytoplasmic GFP labeling) (Point 9). Spine populations of the same size are inevitably plotted in different feature spaces. This point will be discussed more clearly in the revised manuscript.

      (2) Clarification of experimental conditions and data reliability

      Per this reviewer's suggestion, we will provide more information on the genetic background of mice and the differences in spine structure from DIV 18-22 (Points 4 and 5). We will also provide additional validation data for the functional analyses using knockdown and overexpression methods, for which we already have preliminary data (Point 7). Concerns about the interpretation of data obtained from in vitro culture (Point 12), raised by this reviewer, are also noted by reviewer #1. As explained in the response to reviewer #1, we intentionally selected an in vitro culture system to analyze multiple samples derived from mouse models of mental disorders for several reasons. Nevertheless, we will revise the discussion and incorporate the points this reviewer raised regarding the disadvantages of in vitro systems.

      (3) Validation of biological mechanisms and interpretation

      In the computational modeling (Point 6), we started from the data of spine turnover (excluding the data of spine volume increase/decrease), fitted the model with the data, and found that the best-fit model showed three features: fast spine turnover, lower spine density, and smaller size of transient spines in schizophrenia mouse models. As the reviewer noted, information about spine turnover is already present in the input data. However, the other two properties are generated independently of the input data, indicating the value of this model. We plan to add additional confirmatory analyses to this model in the revised manuscript.

      In response to Point 8, we will provide supporting data on the functional role of Ecgr4 in synapse regulation. We will also refine our discussion on the ASD and Schizophrenia phenotypes based on the suggested literature (Points 10 and 11). Quantification of the initial growth of spines is technically demanding, as it requires higher imaging frequency and longer time-lapse recordings to capture rare events. It is difficult to conclude which of the two possibilities, slow spine growth or initial size differences, is correct, based on our available data. This point will be discussed in the revised manuscript (Point 13).

    1. Reviewer #1 (Public review):

      Summary of goals:

      The authors' stated goal (line 226) was to compare gene expression levels for gut hormones between males and females. As female flies contain more fat than males, they also sought to identify hormones that control this sex difference. Finally, they attempted to place their findings in the broader context of what is already known about established underlying mechanisms.

      Strengths:

      (1) The core research question of this work is interesting. The authors provide a reasonable hypothesis (neuro/entero-peptides may be involved) and well-designed experiments to address it.

      (2) Some of the data are compelling, especially positive results that clearly implicate enteropeptides in sex-biased fat contents (Figures 1 and 3).

      Weaknesses:

      (1) The greatest weakness of this work is that it falls short of providing a clear mechanism for the regulation of sex-biased fat content by AstC and Tk. By and large, feminization of neurons or enteroendocrine cells with UAS-traF did not increase fat in males (Figure 2). The authors mention that ecdysone, juvenile hormone or Sex-lethal may instead play a role (lines 258-270), but this is speculative, making this study incomplete.

      (2) Related to the above point, the cellular mechanisms by which AstC and Tk regulate fat content in males and females are only partially characterized. For example, knockdown of TkR99D in insulin-producing neurons (Figure 4E) but not pan-neuronally (Figure 4B) increases fat in males, but Tk itself only shows a tendency (Figure 3B). In females, the situation is even less clear: again, Tk only shows a tendency (Figure 3B), and pan-neuronal, but not IPC-specific knockdown of TkR99D decreases fat.

      (3) The text sometimes misrepresents or contradicts the Results shown in the figures. UAS-traF expression in neurons or enteroendocrine cells did sometimes alter fat contents (Figure 2H, S), but the authors report that sex differences were unaffected (lines 164-166). On the other hand, although knockdown of Tk in enteroendocrine cells caused no significant effect (Figure 3B), the authors report this as a trend towards reduction (lines 182-183). This biased representation raises concerns about the interpretation of the data and the authors' conclusions.

      (4) The authors find that in males, neuropeptide expression in the head is higher (Figure 1F-J). This may also play an important role in maintaining lower levels of fat in males, but this finding is not explored in the manuscript.

      Appraisal of goal achievement & conclusions:

      The authors were successful in identifying hormones that show sex bias in their expression and also control the male vs. female difference in fat content. However, elucidation of the relevant cellular pathways is incomplete. Additionally, some of their conclusions are not supported by the data (see Weaknesses, point 3).

      Impact:

It is difficult to evaluate the impact of this study. This is in large part because the authors do not attempt to systematically place their findings about AstC/Tk in the broader context of their previous studies of the same phenomenon (Wat et al., 2021, eLife and Biswas et al., 2025, Cell Reports). As the underlying mechanisms are complex and likely redundant, a visual model explaining the pathways that regulate fat content in males and females is needed.

    1. Layer organization: Can you describe the three types of layers (input, hidden, and output) and how they transform data in sequence?

      Data transformation in a neural network follows a clear, linear path: it begins by receiving raw data, passes through processing stages, and finally produces a prediction. The three main layer types work together as follows.

      1. Input layer. Role: the network's entry point. It performs no computation or transformation; it simply receives raw external data. Data form: data enters the network as a vector or matrix of numbers (for example, in the image provided, the input is the vector [1.0, 5.0, 9.0]). Data flow: the raw input signal is passed directly on to the first hidden layer.

      2. Hidden layer. Role: the hidden layers are the network's "brain", responsible for most of the complex computation and pattern recognition. Data form: data is transformed here. Each neuron receives weighted inputs from the previous layer, adds a bias, and passes the result through a nonlinear activation function. Data flow: hidden layers extract and transform the raw input into more abstract, more meaningful feature representations and pass these new representations to the next layer (another hidden layer or the output layer). The network's depth (the number of hidden layers) determines how complex the patterns it can learn are.

      3. Output layer. Role: the network's exit point, responsible for producing the final prediction or decision. Data form: it receives signals from the last hidden layer and formats them into the output the user needs (for example, a probability, a class label, or a continuous value). Data flow: the output layer delivers the network's final answer to the outside world.

      Summary of the transformation order. Data is transformed in sequence from left to right (as in the image): raw data \(\rightarrow \) input layer (reception) \(\rightarrow \) hidden layer (feature extraction/transformation) \(\rightarrow \) output layer (final prediction). Combined, these three layers let the network build complex decisions from simple data points.
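      As a minimal sketch of the pipeline above: the weights, biases, and ReLU activation below are invented for illustration; only the input vector [1.0, 5.0, 9.0] comes from the text.

      ```java
      // Minimal single-hidden-layer forward pass: input -> hidden (ReLU) -> output.
      // All weight and bias values are made-up illustration numbers.
      class TinyNet {
          // Hidden layer: 2 neurons, each with 3 input weights and a bias.
          static final double[][] W1 = {{0.1, 0.2, 0.3}, {-0.2, 0.1, 0.05}};
          static final double[] B1 = {0.0, 0.1};
          // Output layer: 1 neuron over the 2 hidden activations.
          static final double[] W2 = {0.5, -0.4};
          static final double B2 = 0.2;

          static double relu(double x) { return Math.max(0.0, x); }

          static double forward(double[] input) {
              double[] hidden = new double[W1.length];
              for (int i = 0; i < W1.length; i++) {
                  double sum = B1[i];                       // bias
                  for (int j = 0; j < input.length; j++) {
                      sum += W1[i][j] * input[j];           // weighted inputs
                  }
                  hidden[i] = relu(sum);                    // nonlinear activation
              }
              double out = B2;
              for (int i = 0; i < hidden.length; i++) {
                  out += W2[i] * hidden[i];                 // output layer combines features
              }
              return out;                                   // final prediction
          }

          public static void main(String[] args) {
              // The input vector from the example in the text.
              System.out.println(forward(new double[]{1.0, 5.0, 9.0}));
          }
      }
      ```

      The input layer corresponds to the `input` array (no computation), the hidden layer to the weighted-sum-plus-ReLU loop, and the output layer to the final combination.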

    1. By convention, enum values are given names made up of upper-case letters, but that is a style guideline and not a syntax rule. An enum value is a constant; that is, it represents a fixed value that cannot be changed. The possible values of an enum type are usually referred to as enum constants.

      Note that these classes are special because: 1. instead of storing variables, they store constants; 2. the constants are automatically static and final; 3. each constant is itself a value of type Season. Therefore, the static constants behave like objects. We can conclude that classes are not limited to storing variables; they can also store constants. An enum definition can combine its constants with subroutines, which is what makes the constants behave like objects.
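      A minimal sketch of the Season example; the `next()` subroutine is illustrative, not from the textbook:

      ```java
      // An enum defines a fixed set of constants; each constant is itself
      // an object of the enum type (here, type Season).
      enum Season {
          SPRING, SUMMER, FALL, WINTER; // enum constants, upper case by convention

          // Enums may also integrate subroutines (methods) with the constants,
          // which is what lets the constants behave like objects.
          public Season next() {
              Season[] all = values();
              return all[(ordinal() + 1) % all.length];
          }
      }
      ```

      For example, `Season.WINTER.next()` evaluates to `Season.SPRING`, and `Season.SUMMER` cannot be reassigned: it is a constant.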

    Annotators

    1. DEGREES OF DEPENDENCY

      The degree of dependency is determined by the number of basic life needs that a person is unable to meet independently. The basic life needs the person cannot meet independently, and their number, are established using a questionnaire assessing the person's dependency on the assistance of another person.

      | Degree of dependency | Basic life needs a person over 15 cannot meet independently | Basic life needs a person under 15 cannot meet independently |
      | --- | --- | --- |
      | I. mild dependency | 1–2 | 1 |
      | II. moderately mild dependency | 3–4 | 2–3 |

    1. By Strawperson:Level 1: “There’s a lion across the river.” = There’s a lion across the river.Level 2: “There’s a lion across the river.” = I don’t want to go (or have other people go) across the river.Level 3: “There’s a lion across the river.” = I’m with the popular kids who are too cool to go across the river.Level 4: “There’s a lion across the river.” = A firm stance against trans-river expansionism focus grouped well with undecided voters in my constituency.

      I never realized a simple statement could be viewed from so many perspectives. I wonder if an AI Prompt Ecology could do a similar level of analysis from various perspectives and then synthesize action items.

    2. One way to test which level someone is on is what would make them say the opposite of what they say now:Level 1: If they see enough evidence in the opposite direction.Level 2: If people begin responding the opposite way to the same statement.Level 3: If your group starts saying the opposite.Level 4: If you benefit more from saying the opposite.

      The idea of saying stuff people will object to in conversation or debate to check if they are engaging is a pretty great strategy.

      Taking strong positions is required for the Thesis, Antithesis, Synthesis cognitive pattern.

    1. Why these 4 professions? We chose contrasting examples across the spectrum of real purchasing power (2016–2024): Geriatric Care (+24% real) and Hospitality (+14.3% real) represent the biggest winners, while Software Development (+3% real) and Electrical Engineering (−3.3% real) show how even highly skilled professions failed to keep pace with inflation. Apartment size calculated as 30% of net disposable professional income divided by the local rent per m². Net income calculated using a progressive German tax model (deductions of 30%, 35%, and 40% depending on income level). Data Note: Professional salary data is available at the state level. For non-city-states, the average salaries of the respective state were used (e.g., Frankfurt = Hesse, Munich = Bavaria). Rent data is city-specific.

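      As a sketch of the stated apartment-size formula: only the 30%-of-net rule and the 30%/35%/40% deduction tiers come from the text; the bracket cutoffs, salary, and rent figures below are invented for illustration.

      ```java
      // Affordable apartment size = (30% of net income) / (rent per m²),
      // where net income applies a flat deduction of 30%, 35%, or 40%
      // depending on gross income. The bracket cutoffs here are assumptions.
      class RentMetric {
          static double netIncome(double grossMonthly) {
              double deduction;
              if (grossMonthly < 3000) deduction = 0.30;
              else if (grossMonthly < 5000) deduction = 0.35;
              else deduction = 0.40;
              return grossMonthly * (1 - deduction);
          }

          static double affordableSqm(double grossMonthly, double rentPerSqm) {
              // 30% of net disposable income divided by the local rent per m².
              return 0.30 * netIncome(grossMonthly) / rentPerSqm;
          }

          public static void main(String[] args) {
              // e.g. a hypothetical 4,000 gross monthly salary at 15 per m²:
              System.out.println(affordableSqm(4000, 15));
          }
      }
      ```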

    1. Pierre-François Bouchard’s men discovered the ancient stone slab
      <center>

      Rosetta Stone (RS)

      </center>

      Useful Links

      1. Rosetta Stone (Wikipedia)
      2. Explore the Rosetta Stone (British Museum)
      3. Rosetta Stone (Britannica)
      4. What is the Rosetta Stone and why is it important?
      5. Rosetta Stone (Smithsonian)

      On July 19, 1799, Pierre-François Bouchard's men discovered an ancient "basalt" slab in Rosetta (local name Rashid), Egypt. It was covered with three types of writing: Demotic, hieroglyphics, and ancient Greek. Scholars traced the origin of the RS to 196 BCE, in Egypt's Ptolemaic era.

      Click map of the Ptolemaic dynasty

      <center>The Rosetta Stone decoded by AI</center> Click this YouTube Link

    1. | Plan | Price |
       | --- | --- |
       | 6 Months | $2,100 (350/month) |
       | 3 Months | $1,200 (400/month) |
       | 1 Month | $500 |

      This pricing is incorrect for the US, and I don't think it has been thought through or supported for the US yet. My suggestion is to hide this page for the US. Shubham should confirm.

    1. pytest==9.0.1 # installed because this package is listed in pyproject.toml and uv.lock

      I don't think pytest gets installed by `uv sync`.

      It seems the correct way is to install it with `uv sync --group dev`.

    2. uv pip install python-dateutil

      Is `uv pip` used here deliberately, just so the package can be uninstalled later? It looked like mixing `uv add` and `uv pip install` was being recommended; is this done as a cautionary example? I felt it could be omitted.

    3. packages are written

      The full sentence reads "Python packages with dependencies are written", and since a comment elsewhere already highlights the first half, I am highlighting and commenting on only the latter half.

      I think this should say "the Python packages you installed" rather than "packages with dependencies".

    4. because it is always used when setting up an environment with uv

      Do you mean "because it creates a virtual environment just like one made with `uv venv`, and then uses that virtual environment"? The part saying "always used" didn't quite click for me.

    1. Disguise structural and sentence-level faults as intentional strategies. In this light, Infinite Jest is no longer poorly-plotted and inconclusive, but ‘fractally structured like a Sierpiński gasket.’3 The hundreds of pages of solecistic flummery in his story collections are not really a grating catalogue of cliches, but an incisive parody of corporate-speak and other modern argots (George Saunders, another basically talentless writer, employs this strategy constantly, besides much else from the Wallace playbook). When it comes time to swoon into obvious sentimentality and Hallmark-style kitsch, just point out you’re aware that’s what it is and are doing it intentionally too. This will let the reader think they’re in on a complicated post-ironic work with real feeling behind it, rather than simply reading bad writing.

      Nicely put.

    1. Algorithms Against Society: A Synthesis of Hubert Guillaud's Analyses

      Executive Summary

      This synthesis presents the main arguments developed by Hubert Guillaud, journalist and essayist, on the societal impact of algorithmic systems.

      The analysis shows that, far from being neutral tools, algorithms constitute a new systemic logic that profoundly transforms public services and social relations.

      Their primary function is to calculate, sort, and match, translating social facts into a mere "combination of numbers".

      The critical points to retain are the following:

      Discrimination as a feature: By its very nature, calculation is a machine for differentiating.

      Systems such as Parcoursup or the "risk score" of the Caisse d'Allocations Familiales (CAF) generate often aberrant, fictitious distinctions in order to rank individuals, institutionalizing discrimination under the guise of mathematical objectivity.

      Targeting of precarious populations: The automation of public services disproportionately targets and surveils the most vulnerable.

      The CAF, for instance, hunts not so much fraud as "indus" (overpayments), mainly affecting people with fragmented, complex incomes, such as single mothers.

      Threat to democratic principles:

      The growing interconnection of data between administrations (CAF, tax authorities, France Travail, police) threatens the separation of powers by creating a system of generalized surveillance in which an individual's weaknesses in one domain can have repercussions in all the others.

      Massification in disguise: Contrary to the idea of fine-grained personalization, algorithms carry out a massification of individuals.

      They do not target unique persons but constantly group them into broad, standardized categories for purposes of control or advertising.

      A risk of fascist drift: By systematizing discrimination and making it opaque and invisible, these technologies create fertile ground for authoritarian drift, a risk Hubert Guillaud describes as "fascist".

      In conclusion, although these technologies pose a serious threat, Hubert Guillaud places them in a broader context, arguing that the paramount issues remain global warming and the logics of financial capitalism, of which algorithms are merely an amplifying tool.

      --------------------------------------------------------------------------------

      1. Introduction: Algorithmic Logic and Its Societal Stakes

      The discussion, introduced by Marine Placa, a doctoral candidate in public law, centers on Hubert Guillaud's book Les algorithmes contre la société.

      The central issue is "the intrusion of a new algorithmic logic, more insidious and more systemic, into the delivery of public services".

      This logic, which "translates social facts as a combination of numbers", increasingly governs individuals' environments, with tangible consequences.

      Several major criticisms are raised from the introduction onward:

      Opacity and injustice: AI systems are often too opaque and discriminatory, and it is impossible to explain the decisions that result from them.

      Disconnection from reality: While massive investments continue (109 billion euros released by the French government), field reports warn of "social, democratic, and ecological damage".

      Private technology: The technology is private, developed with private capital and dictated by the "economic behemoths of Silicon Valley".

      Its use is therefore largely driven by profit rather than the common good.

      AI is not autonomous: AI "decides nothing. It does not reason." It is the product of human design, and its impact depends less on its essence than on its use.

      2. Definition and Operation of Algorithms

      According to Hubert Guillaud, algorithmic systems, from the simple algorithm to complex AI, should be understood as a "technological continuity" of calculation systems applied to society. Their operation rests on three fundamental functions:

      | Function | Description | Example |
      | --- | --- | --- |
      | 1\. Produce scores | Turn qualitative information (words, behaviors) into quantitative data (numbers, grades). | A dating-app profile is "scored"; a welfare application receives a risk score. |
      | 2\. Sort | Rank individuals or information according to the scores produced. | Candidates on Parcoursup are ranked from first to last. |
      | 3\. Match (the "marriage") | Pair a demand with an offer on the basis of the sorting performed. | A student is matched to a program, a job seeker to a position, a claimant to receiving (or not) a benefit. |

      This simple mechanism lies at the heart of every such system, from social networks to public-service platforms, the central stake being to rank, sort, and match.
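      The score / sort / match mechanic described above can be sketched in a few lines; the candidates, scores, and capacity below are invented for illustration.

      ```java
      import java.util.*;

      // Score -> sort -> match: the three functions the text attributes to
      // algorithmic systems, applied to a toy admissions example.
      class ScoreSortMatch {
          static List<String> admit(Map<String, Double> scores, int capacity) {
              // 1. Produce scores: here given directly (candidate -> score).
              // 2. Sort: rank every candidate from first to last, no ties kept.
              List<String> ranked = new ArrayList<>(scores.keySet());
              ranked.sort(Comparator.comparingDouble((String k) -> scores.get(k)).reversed());
              // 3. Match: pair the top of the ranking with the available places.
              return ranked.subList(0, Math.min(capacity, ranked.size()));
          }

          public static void main(String[] args) {
              Map<String, Double> scores = new HashMap<>();
              scores.put("A", 14.003); // micro-differences decide the outcome
              scores.put("B", 14.001);
              scores.put("C", 12.500);
              System.out.println(admit(scores, 2)); // prints [A, B]
          }
      }
      ```

      Note how a 0.002-point difference, meaningless academically, fully determines who is matched, which is exactly the aberration the text describes.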

      3. The Shift in Societal Power Relations

      3.1. Calculation as a Discrimination Machine: The Example of Parcoursup

      Hubert Guillaud uses the example of Parcoursup to illustrate how calculation generates systemic discrimination.

      Context: A national platform directing 900,000 final-year secondary students toward more than 25,000 programs.

      Mechanism: Each program must rank all of its applicants from first to last, with no ties allowed.

      The main criterion: grades. The system relies almost exclusively on school transcripts, ignoring essential criteria such as motivation, even though it is a key factor in success in higher education.

      The creation of aberrant distinctions: To separate the mass of students with homogeneous records (for example, an average of 14/20), the system generates complex calculations to create micro-differences.

      Final scores are computed to three decimal places (e.g., 14.001 versus 14.003). Guillaud underscores the absurdity of this distinction:

      "I cannot make any academic difference between them. [...] But through calculation, for calculation's sake, we end up generating differences between these students."

      Equivalence to a lottery: For 80% of applicants, this allocation system based on insignificant differences is "fully equivalent to a lottery", camouflaged by the scientific appearance of numbers.

      3.2. The Normalization of Elite Selection

      Unlike a simple lottery, Parcoursup introduces no randomness.

      On the contrary, it spreads and normalizes the selection methods of elite institutions (grandes écoles, Sciences Po) across the entire education system, including technical programs (BTS) where this type of selection is ill-suited.

      This standardization rules out alternative assessment methods (interviews, projects) and reinforces social biases.

      The result is a high rate of dissatisfaction:

      2% of applicants receive no offer at all.

      20% receive a single offer, which they decline.

      20% try again the following year.

      In total, roughly 45-46% of students are left dissatisfied by the platform each year.

      4. The Automation of Life and the Illusory Neutrality of Technology

      4.1. The CAF "Risk Score": Surveillance of the Most Precarious

      Hubert Guillaud rejects the idea that technology is neutral. The example of the Caisse d'Allocations Familiales (CAF) epitomizes this non-neutrality.

      Stated objective: Use AI to detect the risk of fraud among benefit recipients.

      Reality: The system does not measure fraud (often tied to employers' declarations) but what is called the "indu": an overpayment in one month that must be repaid the next.

      Targeting: The system penalizes people with complex situations and non-linear incomes: single mothers, widows, precarious workers.

      Calculating their entitlements is difficult, which mechanically generates overpayments.

      Absurd scoring criteria: Behavioral data are used.

      For example, logging into one's CAF account more than a certain number of times per month raises the risk score, even though this behavior simply reflects the anxiety of people in need.

      Consequences: Already precarious populations, representing less than 20% of beneficiaries, undergo the majority of audits.

      Some single mothers are audited "four to five times in the same year".

      4.2. Threat to the Separation of Powers

      The interconnection of data between administrations, under the pretext of "streamlining information", threatens the democratic principle of the separation of powers.

      • The CAF has access to data from the tax authorities, from France Travail, and to the national register of bank accounts (FICOBA).

      • The level of access is opaque: some agents can see account balances, or even six months of detailed spending.

      • This collusion creates extensive, problematic forms of surveillance.

      Example: police reporting individuals to the CAF (around 3,000 cases per year), establishing an "exchange of favors" outside any clear legal framework.

      • This produces what one sociologist calls a "lumpen scorariat": individuals constantly mis-scored and penalized by the cross-referencing of systems.

      4.3. The Risk of a Fascist Drift

      The discussion highlights a striking line from Guillaud's book: "Denial of democracy as a principle, discrimination as a feature, fascism as a possibility."

      The fascist risk lies in the fact that these systems make it possible to implement massive discrimination, objective in appearance but based on political choices and invisible biases.

      The recruitment example: CV-screening software analyzes words to produce scores.

      It favors profiles that are "average everywhere" over profiles with both weaknesses and strong points.

      Geographic and ethnic discrimination:

      These systems make it very easy for employers to exclude candidates on unstated criteria, such as their geographic location (via IP address) or their origin (via terms associated with certain countries).

      5. Psychosocial Implications: Massification Disguised as Personalization

      The idea that algorithms offer us a "personalized" experience ("filter bubbles") is an illusion. In reality, they carry out a massification.

      Advertising logic: The goal is not to understand an individual but to fit them into pre-existing categories in order to sell them mass advertising.

      A concrete example: If a user "likes" a post criticizing football in which the word "PSG" appears, the algorithm retains only the keyword "PSG".

      The user is then lumped in with the mass of all other "PSG"-related profiles and will receive advertising targeted at football fans, even though their original intent was the opposite.

      • The individual is thus constantly regrouped "from one mass to another", caught up in data profiles that exceed them.

      6. Conclusion: Putting Technological Threats in Perspective

      Asked about a quotation from the newspaper Le Postillon claiming that the "great technological cooling" is the greatest threat of our era, Hubert Guillaud disagrees.

      • He considers this view too "techno-centered".

      • In his view, more fundamental and urgent issues take precedence:

      1. Global warming.    2. Financial concentration and the logics of capitalism.

      • Technology and its excesses are not the root cause of social problems (isolation, withdrawal into oneself) but rather an amplifier of dynamics already at work, such as the "dissolution of social bonds brought about by capitalism".

      • He concludes that we must "keep a sense of proportion".

      The issue is not merely reforming a system like Parcoursup, but tackling the underlying problem: "how do we create places in public higher education".

      Technology is not destiny, but a prism through which broader social, political, and economic forces are expressed.

    1. וְאָבְד֞וּ

      (Qal) to perish, die, be exterminated; to perish, vanish (fig.); to be lost, go astray

      (Piel) to destroy, kill, cause to perish; to give up (as lost), exterminate; to blot out, do away with, cause to vanish; (fig.) to cause to stray, lose

    2. וְהִכֵּיתִ֥י

      (Hiphil) to smite, strike, beat, scourge, clap, applaud, give a thrust; to smite, kill, slay (man or beast); to smite, attack, attack and destroy, conquer, subjugate, ravage; to smite, chastise, send judgment upon, punish, destroy

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      *The authors have a longstanding focus and reputation on single cell sequencing technology development and application. In this current study, the authors developed a novel single-cell multi-omic assay termed "T-ChIC" so that to jointly profile the histone modifications along with the full-length transcriptome from the same single cells, analyzed the dynamic relationship between chromatin state and gene expression during zebrafish development and cell fate determination. In general, the assay works well, the data look convincing and conclusions are beneficial to the community. *

      Thank you for your positive feedback.

      *There are several single-cell methodologies all claim to co-profile chromatin modifications and gene expression from the same individual cell, such as CoTECH, Paired-tag and others. Although T-ChIC employs pA-Mnase and IVT to obtain these modalities from single cells which are different, could the author provide some direct comparisons among all these technologies to see whether T-ChIC outperforms? *

      In a separate technical manuscript describing the application of T-ChIC in mouse cells (Zeller, Blotenburg et al 2024, bioRxiv, 2024.05. 09.593364), we have provided a direct comparison of data quality between T-ChIC and other single-cell methods for chromatin-RNA co-profiling (Please refer to Fig. 1C,D and Fig. S1D, E, of the preprint). We show that compared to other methods, T-ChIC is able to better preserve the expected biological relationship between the histone modifications and gene expression in single cells.

      *In current study, T-ChIC profiled H3K27me3 and H3K4me1 modifications, these data look great. How about other histone modifications (eg H3K9me3 and H3K36me3) and transcription factors? *

      While we haven't profiled these other modifications using T-ChIC in Zebrafish, we have previously published high quality data on these histone modifications using the sortChIC method, on which T-ChIC is based (Zeller, Yeung et al 2023). In our comparison, we find that histone modification profiles between T-ChIC and sortChIC are very similar (Fig. S1C in Zeller, Blotenburg et al 2024). Therefore the method is expected to work as well for the other histone marks.

      *T-ChIC can detect full length transcription from the same single cells, but in FigS3, the authors still used other published single cell transcriptomics to annotate the cell types, this seems unnecessary? *

      We used the published scRNA-seq dataset with a larger number of cells to homogenize our cell type labels with these datasets, but we also cross-referenced our cluster-specific marker genes with ZFIN and homogenized the cell type labels with ZFIN ontology. This way our annotation is in line with previous datasets but not biased by them. Due to the relatively small size of our data, we didn't expect to identify unique, rare cell types, but our full-length total RNA assay helps us identify non-coding RNAs such as miRNA previously undetected in scRNA assays, which we have now highlighted in new Figure S1c.

      *Throughout the manuscript, the authors found some interesting dynamics between chromatin state and gene expression during embryogenesis, independent approaches should be used to validate these findings, such as IHC staining or RNA ISH? *

      We appreciate that the ISH staining could be useful to validate the expression pattern of genes identified in this study. But to validate the relationships between the histone marks and gene expression, we need to combine these stainings with functional genomics experiments, such as PRC2-related knockouts. Due to their complexity, such experiments are beyond the scope of this manuscript (see also reply to reviewer #3, comment #4 for details).

      *In Fig2 and FigS4, the authors showed H3K27me3 cis spreading during development, this looks really interesting. Is this zebrafish specific? H3K27me3 ChIP-seq or CutTag data from mouse and/or human embryos should be reanalyzed and used to compare. The authors could speculate some possible mechanisms to explain this spreading pattern? *

      Thanks for the suggestion. In this revision, we have reanalysed a dataset of mouse ChIP-seq of H3K27me3 during mouse embryonic development by Xiang et al (Nature Genetics 2019) and find similar evidence of spreading of H3K27me3 signal from their pre-marked promoter regions at E5.5 epiblast upon differentiation (new Figure S4i). This observation, combined with the fact that the mechanism of pre-marking of promoters by PRC1-PRC2 interaction seems to be conserved between the two species (see (Hickey et al., 2022), (Mei et al., 2021) & (Chen et al., 2021)), suggests that the dynamics of H3K27me3 pattern establishment is conserved across vertebrates. But we think a high-resolution profiling via a method like T-ChIC would be more useful to demonstrate the dynamics of signal spreading during mouse embryonic development in the future. We have discussed this further in our revised manuscript.

      Reviewer #1 (Significance (Required)):

      *The authors have a longstanding focus and reputation on single cell sequencing technology development and application. In this current study, the authors developed a novel single-cell multi-omic assay termed "T-ChIC" so that to jointly profile the histone modifications along with the full-length transcriptome from the same single cells, analyzed the dynamic relationship between chromatin state and gene expression during zebrafish development and cell fate determination. In general, the assay works well, the data look convincing and conclusions are beneficial to the community. *

      Thank you very much for your supportive remarks.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      *Joint analysis of multiple modalities in single cells will provide a comprehensive view of cell fate states. In this manuscript, Bhardwaj et al developed a single-cell multi-omics assay, T-ChIC, to simultaneously capture histone modifications and full-length transcriptome and applied the method on early embryos of zebrafish. The authors observed a decoupled relationship between the chromatin modifications and gene expression at early developmental stages. The correlation becomes stronger as development proceeds, as genes are silenced by the cis-spreading of the repressive marker H3k27me3. Overall, the work is well performed, and the results are meaningful and interesting to readers in the epigenomic and embryonic development fields. There are some concerns before the manuscript is considered for publication. *

      We thank the reviewer for appreciating the quality of our study.

      *Major concerns: *

      1. *A major point of this study is to understand embryo development, especially gastrulation, with the power of scMulti-Omics assay. However, the current analysis didn't focus on deciphering the biology of gastrulation, i.e., lineage-specific pioneer factors that help to reform the chromatin landscape. The majority of the data analysis is based on the temporal dimension, but not the cell-type-specific dimension, which reduces the value of the single-cell assay. *

      We focused on the lineage-specific transcription factor activity during gastrulation in Figure 4 and S8 of the manuscript and discovered several interesting regulators active at this stage. During our analysis of the temporal dimension for the rest of the manuscript, we also classified the cells by their germ layer and "latent" developmental time by taking the full advantage of the single-cell nature of our data. Additionally, we have now added the cell-type-specific H3K27-demethylation results for 24hpf in response to your comment below. We hope that these results, together with our openly available dataset would demonstrate the advantage of the single-cell aspect of our dataset.

      2. *The cis-spreading of H3K27me3 with developmental time is interesting. Considering H3k27me3 could mark bivalent regions, especially in pluripotent cells, there must be some regions that have lost H3k27me3 signals during development. Therefore, it's confusing that the authors didn't find these regions (30% spreading, 70% stable). The authors should explain and discuss this issue. *

      Indeed we see that ~30% of the bins enriched in the pluripotent stage spread, while 70% do not seem to spread. In line with earlier observations(Hickey et al., 2022; Vastenhouw et al., 2010), we find that H3K27me3 is almost absent in the zygote and is still being accumulated until 24hpf and beyond. Therefore the majority of the sites in the genome still seem to be in the process of gaining H3K27me3 until 24hpf, explaining why we see mostly "spreading" and "stable" states. Considering most of these sites are at promoters and show signs of bivalency, we think that these sites are marked for activation or silencing at later stages. We have discussed this in the manuscript ("discussion"). However, in response to this and an earlier comment, we went back and searched for genes that show H3K27-demethylation in the most mature cell types (at 24 hpf) in our data, and found a subset of genes that show K27 demethylation after acquiring the mark earlier. Interestingly, most of the top genes in this list are well-known as developmentally important for their corresponding cell types. We have added this new result and discussed it further in the manuscript (Fig. 2d,e, Supplementary Table 3).

      *Minors: *

      1. *The authors cited two scMulti-omics studies in the introduction, but there have been lots of single-cell multi-omics studies published recently. The authors should cite and consider them. *

      We have now cited more single-cell chromatin and multiome studies focused on early embryogenesis in the introduction.

      *2. T-ChIC seems to have been presented in a previous paper (ref 15). Therefore, Fig. 1a is unnecessary to show. *

      Figure 1a shows a summary of our zebrafish T-ChIC workflow, including the unique sample multiplexing and sorting strategy used to reduce batch effects, which was not part of the original T-ChIC workflow. We have now clarified this in "Results".

      3. *It's better to show the percentage of cell numbers (30% vs 70%) for each heatmap in Figure 2C. *

      We have added the numbers to the corresponding legends.

      4. *Please double-check the citation of Fig. S4C, which may not relate to the conclusion of signal differences between lineages. *

      The citation is correct (Fig. S4C supplements Fig. 2C but shows cells of the mesodermal lineage); however, the legend description was somewhat misleading. We have clarified this now.

      *5. Figure 4C has not been cited or mentioned in the main text. Please check. *

      Thanks for pointing it out. We have cited it in Results now.

      Reviewer #2 (Significance (Required)):

      *Strengths: This work utilized a new single-cell multi-omics method and generated abundant epigenomics and transcriptomics datasets for cells covering multiple key developmental stages of zebrafish. *

      *Limitations: The data analysis was superficial and mainly focused on the correspondence between the two modalities. The discussion of developmental biology was limited. *

      *Advance: The zebrafish single-cell datasets are valuable. The T-ChIC method is new and interesting. *

      *The audience will be specialized and from basic research fields, such as developmental biology, epigenomics, bioinformatics, etc. *

      *I'm more specialized in the direction of single-cell epigenomics, gene regulation, 3D genomics, etc. *

      Thank you for your remarks.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      *This manuscript introduces T‑ChIC, a single‑cell multi‑omics workflow that jointly profiles full‑length transcripts and histone modifications (H3K27me3 and H3K4me1) and applies it to early zebrafish embryos (4-24 hpf). The study convincingly demonstrates that chromatin-transcription coupling strengthens during gastrulation and somitogenesis, that promoter‑anchored H3K27me3 spreads in cis to enforce developmental gene silencing, and that integrating TF chromatin status with expression can predict lineage‑specific activators and repressors. *

      *Major concerns *

      1. *Independent biological replicates are absent, so the authors should process at least one additional clutch of embryos for key stages (e.g., 6 hpf and 12 hpf) with T‑ChIC and demonstrate that the resulting data match the current dataset. *

      Thanks for pointing this out. We had, in fact, performed T-ChIC experiments in four rounds of biological replicates (independent clutches of embryos) and merged the data to create our resource. Although not all timepoints were profiled in each replicate, two timepoints (10 and 24 hpf) are present in all four, and the cell-type composition at these two timepoints is very similar across replicates. We have added new plots in Figure S2f and a new supplementary table (#1) to highlight the presence of biological replicates.

      2. *The TF‑activity regression model uses an arbitrary R² {greater than or equal to} 0.6 threshold; cross‑validated R² distributions, permutation‑based FDR control, and effect‑size confidence intervals are needed to justify this cut‑off. *

      Thank you for this suggestion. We did use 10-fold cross-validation during training and obtained the R² values of TF motifs from the independent test set as an unbiased estimate. However, the cutoff of R² > 0.6 used to select TFs for classification was indeed arbitrary. In the revised version, we now report FDR-adjusted p-values for these R² estimates based on permutation tests and select TFs using an FDR cutoff instead; supplementary table #4 has been updated to include the p-values for all tested TFs. We found that our arbitrary cutoff of 0.6 was in fact too stringent, and many more TFs can be classified based on the FDR cutoff. We have updated the numbers reported in Fig. 4c to reflect this. Moreover, supplementary table #4 contains the complete list of TFs used in the analysis, allowing others to choose their own cutoff.
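To illustrate the statistics involved, here is a minimal sketch of permutation-based FDR control for per-TF R² values. This is not the manuscript's actual pipeline: `fit_r2`, `permutation_pvalues`, and `benjamini_hochberg` are hypothetical helper names, and in-sample R² from ordinary least squares stands in for their cross-validated estimates.

```python
import numpy as np

def r2_score(y, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def fit_r2(X, y):
    # Ordinary least squares with an intercept; returns in-sample R²
    design = np.c_[np.ones(len(X)), X]
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return r2_score(y, design @ coef)

def permutation_pvalues(X, y, n_perm=200, seed=0):
    # Null distribution of R² from refitting on permuted responses
    rng = np.random.default_rng(seed)
    observed = fit_r2(X, y)
    null = np.array([fit_r2(X, rng.permutation(y)) for _ in range(n_perm)])
    # One-sided p-value with the standard +1 correction
    p = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, p

def benjamini_hochberg(pvals):
    # Benjamini-Hochberg FDR adjustment of a vector of p-values
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    # Enforce monotonicity from the largest rank downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(adj)
    out[order] = np.clip(adj, 0, 1)
    return out
```

TFs would then be kept when their adjusted p-value falls below the chosen FDR threshold, rather than by a fixed R² cutoff.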

      3. *Predicted TF functions lack empirical support, making it essential to test representative activators (e.g., Tbx16) and repressors (e.g., Zbtb16a) via CRISPRi or morpholino knock‑down and to measure target‑gene expression and H3K4me1 changes. *

      We agree that independent validation of the effects of our predicted TFs on target gene activity would be important. During this revision, we analysed the recently published scRNA-seq data of Saunders et al. (2023), which includes CRISPR-mediated F0 knockouts of a couple of our predicted TFs; however, that scRNA-seq was performed at later stages (24 hpf onward) than our H3K4me1 analysis (4-12 hpf). Consequently, we saw off-target genes being affected in lineages where these TFs are clearly not expressed (attached Fig. 1), and we therefore did not include these results in the manuscript. In the future, we aim to systematically test the TFs predicted in our study with CRISPRi or similar experiments.

      4. *The study does not prove that H3K27me3 spreading causes silencing; embryos treated with an Ezh2 inhibitor or prc2 mutants should be re‑profiled by T‑ChIC to show loss of spreading along with gene re‑expression. *

      We appreciate the suggestion that PRC2 disruption followed by T-ChIC, or another form of validation, would be needed to confirm that H3K27me3 spreading is causally linked to silencing of the identified target genes. However, performing this validation is complicated for multiple reasons: 1) due to the maternal RNA contribution of EZH2 and the contradictory effects of various zygotic EZH2 mutations (depending on where the mutation occurs), the only properly validated PRC2-related mutant appears to be the maternal-zygotic mutant MZezh2, which requires germ cell transplantation (see Rougeot et al., 2019 and San et al., 2019 for details); 2) the use of inhibitors has been described in other studies (den Broeder et al., 2020; Huang et al., 2021), but these did not validate the loss of H3K27me3 or show a phenotype similar to the MZezh2 mutants, and inhibitors can present unwanted side effects and toxicity at high doses, affecting gene expression results. Moreover, in an attempt to validate, we performed our own trials with the EZH2 inhibitor (GSK123) and found that this time window might be too short to see an effect within 24 hpf (attached Fig. 2). This validation is therefore a complex endeavor beyond the scope of this study. Nevertheless, our further analysis of H3K27me3 demethylation at developmentally important genes (new Fig. 2e-f, Sup. Table 3) adds confidence that polycomb repression plays an important role and provides ground for future follow-up studies.

      *Minor concerns *

      1. *Repressive chromatin coverage is limited, so profiling an additional silencing mark such as H3K9me3 or DNA methylation would clarify cooperation with H3K27me3 during development. *

      We agree that H3K27me3 alone is not sufficient to fully understand the repressive chromatin state. Extension to other chromatin marks and DNA methylation will be the focus of our follow-up work.

      *2. Computational transparency is incomplete; a supplementary table listing all trimming, mapping, and peak‑calling parameters (cutadapt, STAR/hisat2, MACS2, histoneHMM, etc.) should be provided. *

      As mentioned in the manuscript, we provide an open-source pre-processing pipeline, "scChICflow", that performs all these steps (github.com/bhardwaj-lab/scChICflow). We have now also provided the configuration files on our Zenodo repository (see below), which can simply be plugged into this pipeline together with the fastq files from GEO to reproduce the processed dataset described in the manuscript. Additionally, we have clarified the peak-calling and post-processing steps in the manuscript.

      *3. Data‑ and code‑availability statements lack detail; the exact GEO accession release date, loom‑file contents, and a DOI‑tagged Zenodo archive of analysis scripts should be added. *

      We have now publicly released the .h5ad files with raw counts, normalized counts, and complete gene- and cell-level metadata, along with signal tracks (bigwigs) and peaks, on GEO. Additionally, we have released the source datasets and notebooks (.Rmarkdown format) on Zenodo, which can be used to replicate the figures in the manuscript, and we have updated our statements on "Data and code availability".

      *4. Minor editorial issues remain, such as replacing "critical" with "crucial" in the Abstract, adding software version numbers to figure legends, and correcting the SAMtools reference. *

      Thank you for spotting them. We have fixed these issues.

      Reviewer #3 (Significance (Required)):

      The method is technically innovative and the biological insights are valuable; however, several issues, mainly concerning experimental design, statistical rigor, and functional validation, must be addressed to solidify the conclusions.

      Thank you for your comments. We hope to have addressed your concerns in this revised version of our manuscript.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      This manuscript introduces T‑ChIC, a single‑cell multi‑omics workflow that jointly profiles full‑length transcripts and histone modifications (H3K27me3 and H3K4me1) and applies it to early zebrafish embryos (4-24 hpf). The study convincingly demonstrates that chromatin-transcription coupling strengthens during gastrulation and somitogenesis, that promoter‑anchored H3K27me3 spreads in cis to enforce developmental gene silencing, and that integrating TF chromatin status with expression can predict lineage‑specific activators and repressors.

      Major concerns

      1. Independent biological replicates are absent, so the authors should process at least one additional clutch of embryos for key stages (e.g., 6 hpf and 12 hpf) with T‑ChIC and demonstrate that the resulting data match the current dataset.
      2. The TF‑activity regression model uses an arbitrary R² {greater than or equal to} 0.6 threshold; cross‑validated R² distributions, permutation‑based FDR control, and effect‑size confidence intervals are needed to justify this cut‑off.
      3. Predicted TF functions lack empirical support, making it essential to test representative activators (e.g., Tbx16) and repressors (e.g., Zbtb16a) via CRISPRi or morpholino knock‑down and to measure target‑gene expression and H3K4me1 changes.
      4. The study does not prove that H3K27me3 spreading causes silencing; embryos treated with an Ezh2 inhibitor or prc2 mutants should be re‑profiled by T‑ChIC to show loss of spreading along with gene re‑expression.

      Minor concerns

      1. Repressive chromatin coverage is limited, so profiling an additional silencing mark such as H3K9me3 or DNA methylation would clarify cooperation with H3K27me3 during development.
      2. Computational transparency is incomplete; a supplementary table listing all trimming, mapping, and peak‑calling parameters (cutadapt, STAR/hisat2, MACS2, histoneHMM, etc.) should be provided.
      3. Data‑ and code‑availability statements lack detail; the exact GEO accession release date, loom‑file contents, and a DOI‑tagged Zenodo archive of analysis scripts should be added.
      4. Minor editorial issues remain, such as replacing "critical" with "crucial" in the Abstract, adding software version numbers to figure legends, and correcting the SAMtools reference.

      Significance

      The method is technically innovative and the biological insights are valuable; however, several issues, mainly concerning experimental design, statistical rigor, and functional validation, must be addressed to solidify the conclusions.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      Joint analysis of multiple modalities in single cells will provide a comprehensive view of cell fate states. In this manuscript, Bhardwaj et al developed a single-cell multi-omics assay, T-ChIC, to simultaneously capture histone modifications and full-length transcriptome and applied the method on early embryos of zebrafish. The authors observed a decoupled relationship between the chromatin modifications and gene expression at early developmental stages. The correlation becomes stronger as development proceeds, as genes are silenced by the cis-spreading of the repressive mark H3K27me3. Overall, the work is well performed, and the results are meaningful and interesting to readers in the epigenomic and embryonic development fields. There are some concerns that should be addressed before the manuscript is considered for publication.

      Major concerns:

      1. A major point of this study is to understand embryo development, especially gastrulation, with the power of scMulti-Omics assay. However, the current analysis didn't focus on deciphering the biology of gastrulation, i.e., lineage-specific pioneer factors that help to reform the chromatin landscape. The majority of the data analysis is based on the temporal dimension, but not the cell-type-specific dimension, which reduces the value of the single-cell assay.
      2. The cis-spreading of H3K27me3 with developmental time is interesting. Considering H3k27me3 could mark bivalent regions, especially in pluripotent cells, there must be some regions that have lost H3k27me3 signals during development. Therefore, it's confusing that the authors didn't find these regions (30% spreading, 70% stable). The authors should explain and discuss this issue.

      Minors:

      1. The authors cited two scMulti-omics studies in the introduction, but there have been lots of single-cell multi-omics studies published recently. The authors should cite and consider them.
      2. T-ChIC seems to have been presented in a previous paper (ref 15). Therefore, Fig. 1a is unnecessary to show.
      3. It's better to show the percentage of cell numbers (30% vs 70%) for each heatmap in Figure 2C.
      4. Please double-check the citation of Fig. S4C, which may not relate to the conclusion of signal differences between lineages.
      5. Figure 4C has not been cited or mentioned in the main text. Please check.

      Significance

      Strengths: This work utilized a new single-cell multi-omics method and generated abundant epigenomics and transcriptomics datasets for cells covering multiple key developmental stages of zebrafish.

      Limitations: The data analysis was superficial and mainly focused on the correspondence between the two modalities. The discussion of developmental biology was limited.

      Advance: The zebrafish single-cell datasets are valuable. The T-ChIC method is new and interesting.

      The audience will be specialized and from basic research fields, such as developmental biology, epigenomics, bioinformatics, etc.

      I'm more specialized in the direction of single-cell epigenomics, gene regulation, 3D genomics, etc.

    1. Suppose that each additional coalition member reduces the tax rate by 0.5 percentage points (for example, if the coalition grows to 3 members, the tax rate falls from 50% to 49.5%) and increases national income by 1%. Suppose further that each additional coalition member increases government spending on public goods by 2%.

      An economics trick: the conclusion is already hidden in the assumptions.
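To make the annotation's point concrete, here is a quick arithmetic check under the quoted assumptions; the baseline income of 100 units and the assumption that spending starts out equal to tax revenue are illustrative, not from the source.

```python
# Quoted assumptions: each extra coalition member cuts the tax rate by
# 0.5 points, raises national income by 1%, and raises public-goods
# spending by 2%.
base_rate = 0.50      # 50% tax rate with the initial coalition
base_income = 100.0   # arbitrary units (illustrative)
base_spending = base_rate * base_income  # assume spending starts at revenue

def outcomes(extra_members):
    rate = base_rate - 0.005 * extra_members
    income = base_income * (1.01 ** extra_members)
    spending = base_spending * (1.02 ** extra_members)
    revenue = rate * income
    return revenue, spending

for k in (0, 5, 10):
    rev, spend = outcomes(k)
    print(k, round(rev, 2), round(spend, 2))
```

Because spending grows at 2% per member while revenue barely moves (a 1% income gain largely offset by the rate cut), spending outpaces revenue by construction, which is exactly the annotation's complaint.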

    1. community-identified provider competencies.

      Summary: Community-identified provider competencies include 1) being comfortable working with LGBTQI patients ("be" rather than "seem" = intentionality), 2) shared medical-decision-making (know patient's preferences), 3) avoid assumptions (provide the correct BEST care), 4) apply knowledge (know how to provide specific individualized care), 5) acknowledge and address social marginalization (destigmatize and humanize).

    1. Stretching pulls on the muscle fibers and results in an increased blood flow to the muscles being worked

      Do we want to discuss that dynamic stretching prior to exercise provides more benefit than static stretching based on current understanding?

    2. The tension is released from the biceps brachii and the angle of the elbow joint increases.

      It may be helpful to contrast this with relaxation, as my students initially struggle with this concept. I use the controlled descent portion of a pushup as an example of an eccentric contraction of triceps brachii or losing an arm-wrestling match as another example.

    3. much like a key unlocking a lock. This allows the myosin heads to attach to actin.

      FWIW, the analogy I like to use here is a garage door opener (troponin) pulling open the garage door (tropomyosin), allowing the car (myosin head) to enter the garage (myosin binding site on actin).

    1. What is the most important characteristic of life in this city?

      5. An Overview of the History of the Old Testament

      To understand the Bible better, it is often helpful to know something about the original historical situation of the biblical text or book in question. It is even more important, however, to be able to relate the main events of the Bible to one another, that is, to know the sequence of events and where the key figures fit into the overall structure. Unit 1 briefly summarized the message of the whole Bible from Genesis to Revelation and highlighted important events. To conclude this unit, some of these events are now presented again graphically, together with some dates and the names of important figures. As the course proceeds, it may be helpful to return to this overview again and again and add further details.

      The History of the Old Testament: The figure is taken, with minor changes, from Graeme Goldsworthy's The Goldsworthy Trilogy (Cumbria: Paternoster Press, 2000, p. 36) and reproduced with kind permission. Further details on biblical history can be found in a Bible dictionary or a corresponding treatment of Old Testament history. The dates for Abraham and Moses depend on the dating of the Exodus. Unfortunately, the archaeological evidence concerning the Exodus is inconclusive. Most scholars today prefer to date the Exodus to the 13th century (ca. 1280–1240 BC), but the chronological information within the Old Testament suggests a date in the 15th century (ca. 1450 BC; cf. 1 Kings 6:1; Judges 11:26; Exodus 12:40).

      Given the ambiguity of the archaeological material, it seems wiser to trust the explicit statements of the biblical text and to take the earlier dating ("long chronology" in the following table) as correct.

      Important dates (there are two possible datings for this early period):¹

      Event / Long chronology / Short chronology
      Abraham: ca. 2165–1990 BC / ca. 2000–1825 BC
      Isaac: ca. 2065–1885 BC / ca. 1900–1720 BC
      Jacob: ca. 2000–1860 BC / ca. 1840–1700 BC
      Joseph: ca. 1910–1800 BC / ca. 1750–1640 BC
      Arrival in Egypt: ca. 1875 / ca. 1700
      Exodus from Egypt: ca. 1450 / ca. 1260
      Period of the Judges: ca. 1380–1050 BC / ca. 1200–1050 BC

      Timeline: It is sometimes difficult to relate different historical events to one another. A timeline can help provide a better overview. Look at the timeline below together with the table above to get a sense of the vast span of time we are dealing with. (Timeline: a human perspective on history.)

      Exercises: What did the Flood accomplish if humanity's situation deteriorated again so quickly after Noah? What purpose was attached to it? Take the time to read Isaiah 65:17–25 carefully. Try to express in your own words what the imagery means. What message is the prophet trying to convey? Further reading: look up "Adam", "Eve", and "Fall" in a Bible dictionary.

      Reflection: How many of the problems in our world can be connected to the events of Genesis 3–11? In this unit we have thought a great deal about what is wrong with the world. What hopes were awakened in you at the same time?

      ¹ Cf. the articles "Archaeological sites: Late Bronze Age" and "Time Charts: Biblical History from Abraham to Saul" in New Bible Atlas, Leicester: IVP 1985.

      God the Lord himself will be present and will fill everything with his glorious presence.

    1. Reviewer #3 (Public review):

      Summary:

      The authors propose a new version of idTracker.ai for animal tracking. Specifically, they apply contrastive learning to embed cropped images of animals into a feature space where clusters correspond to individual animal identities. By doing this, they address the requirement for so-called global fragments - segments of the video, in which all entities are visible/detected at the same time. In general, the new method reduces the long tracking times from the previous versions, while also increasing the average accuracy of assigning the identity labels.
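For background, a generic pairwise contrastive loss pulls embeddings of the same individual together and pushes embeddings of different individuals at least a margin apart. This is an illustrative sketch of the general technique, not idtracker.ai's actual objective or code.

```python
import numpy as np

def contrastive_loss(emb, labels, margin=1.0):
    """Mean pairwise contrastive loss over all embedding pairs.

    emb:    (n, d) array of embeddings
    labels: length-n identity label per embedding
    Same-identity pairs are penalized by their squared distance;
    different-identity pairs by a squared hinge on (margin - distance).
    """
    n = len(emb)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(emb[i] - emb[j])
            if labels[i] == labels[j]:
                total += d ** 2                    # pull same identity together
            else:
                total += max(0.0, margin - d) ** 2  # push different identities apart
            pairs += 1
    return total / pairs
```

Minimizing such a loss yields a feature space where clusters correspond to individual identities, which is the property the review describes.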

      Strengths and weaknesses:

      The authors have reorganized and rewritten a substantial portion of their manuscript, which has improved the overall clarity and structure to some extent. In particular, omitting the different protocols enhanced readability. However, all technical details are now in the appendix, which is referred to frequently in the manuscript, as was already the case in the initial submission; there are even references to appendices from previous versions. These frequent references make it difficult to read and fully understand the method and the evaluations in detail. A more self-contained description of the method within the main text would be highly appreciated.

      Furthermore, the authors state that they changed their evaluation metric from accuracy to IDF1. However, throughout the manuscript they continue to refer to "accuracy" when evaluating and comparing results. It is unclear which accuracy metric was used or whether the authors are confusing the two metrics. This point needs clarification, as IDF1 is not an "accuracy" measure but rather an F1-score over identity assignments.

      The authors compare the speedups of the new version with those of the previous ones by taking the average. However, it appears that there are striking outliers in the tracking performance data (see Supplementary Table 1-4). Therefore, using the average may not be the most appropriate way to compare. The authors should consider using the median or providing more detailed statistics (e.g., boxplots) to better illustrate the distributions.
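As a toy illustration of this point (the numbers below are hypothetical, not taken from the supplementary tables), a single extreme outlier can dominate the mean while leaving the median essentially unchanged:

```python
import statistics

# Hypothetical per-video speedup factors with one extreme outlier
speedups = [2.1, 2.4, 2.0, 2.3, 40.0]

mean = statistics.mean(speedups)      # pulled far upward by the outlier
median = statistics.median(speedups)  # robust to it
print(mean, median)
```

Here the mean suggests a near-10x typical speedup although four of the five values sit near 2x, which is why the median (or a boxplot of the full distribution) is the safer summary.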

      The authors did not provide any conclusion or discussion section. Including a concise conclusion that summarizes the main findings and their implications would help to convey the message of the manuscript.

      The authors report an improvement in the mean accuracy across all benchmarks from 99.49% to 99.82% (with crossings). While this represents a slight improvement, the datasets used for benchmarking seem relatively simple and already largely "solved". Therefore, the impact of this work on the field may be limited. It would be more informative to evaluate the method on more challenging datasets that include frequent occlusions, crossings, or animals with similar appearances. The accuracy reported in the main text is "without crossings"; this seems like an incomplete evaluation, especially since tracking objects that do not cross seems a straightforward task. It is not explained why crossings are a problem and why they are dealt with separately. There are several videos with a much lower tracking accuracy; explaining what the challenges of these videos are and why the method fails in such cases would help readers understand the method's usability and weak points.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary

      This is a strong paper that presents a clear advance in multi-animal tracking. The authors introduce an updated version of idtracker.ai that reframes identity assignment as a contrastive learning problem rather than a classification task requiring global fragments. This change leads to gains in speed and accuracy. The method eliminates a known bottleneck in the original system, and the benchmarking across species is comprehensive and well executed. I think the results are convincing and the work is significant.

      Strengths

      The main strengths are the conceptual shift from classification to representation learning, the clear performance gains, and the fact that the new version is more robust. Removing the need for global fragments makes the software more flexible in practice, and the accuracy and speed improvements are well demonstrated. The software appears thoughtfully implemented, with GUI updates and integration with pose estimators.

      Weaknesses

      I don't have any major criticisms, but I have identified a few points that should be addressed to improve the clarity and accuracy of the claims made in the paper.

      (1) The title begins with "New idtracker.ai," which may not age well and sounds more promotional than scientific. The strength of the work is the conceptual shift to contrastive representation learning, and it might be more helpful to emphasize that in the title rather than branding it as "new."

      We considered using “Contrastive idtracker.ai”. However, we were concerned that readers could then think that both the old idtracker.ai and this contrastive version are valid options to use. What we want to convey is that the new version is the one to use, as it is better in both accuracy and tracking times. We think “New idtracker.ai” communicates better that this is the version we recommend.

      (2) Several technical points regarding the comparison between TRex (a system evaluated in the paper) and idtracker.ai should be addressed to ensure the evaluation is fair and readers are fully informed.

      (2.1) Lines 158-160: The description of TRex as based on "Protocol 2 of idtracker.ai" overlooks several key additions in TRex, such as posture image normalization, tracklet subsampling, and the use of uniqueness feedback during training. These features are not acknowledged, and it's unclear whether TRex was properly configured - particularly regarding posture estimation, which appears to have been omitted but isn't discussed. Without knowing the actual parameters used to make comparisons, it's difficult to assess how the method was evaluated.

      We added the information about the key additions of TRex in the section “The new idtracker.ai uses representation learning”, lines 153-157. Posture estimation in TRex was not explicitly used but neither disabled during the benchmark; we clarified this in the last paragraph of “Benchmark of accuracy and tracking time”, lines 492-495.

      (2.2) Lines 162-163: The paper implies that TRex gains speed by avoiding Protocol 3, but in practice, idtracker.ai also typically avoids using Protocol 3 due to its extremely long runtime. This part of the framing feels more like a rhetorical contrast than an informative one.

      We removed this, see new lines 153-157.

      (2.3) Lines 277-280: The contrastive loss function is written using the label l, but since it refers to a pair of images, it would be clearer and more precise to write it as l_{I,J}. This would help readers unfamiliar with contrastive learning understand the formulation more easily.

      We added this change in lines 613-620.

      (2.4) Lines 333-334: The manuscript states that TRex can fail to track certain videos, but this may be inaccurate depending on how the authors classify failures. TRex may return low uniqueness scores if training does not converge well, but this isn't equivalent to tracking failure. Moreover, the metric reported by TRex is uniqueness, not accuracy. Equating the two could mislead readers. If the authors did compare outputs to human-validated data, that should be stated more explicitly.

      We observed TRex crashing without outputting any trajectories on some occasions (Appendix 1—figure 1), and this is what we labeled as “failure”. These failures happened in the most difficult videos of our benchmark, which is why we treated them the same way as idtracker.ai going to P3. We clarified this in new lines 464-469.

      The accuracy measured in our benchmark is not estimated but human-validated (see section Computation of tracking accuracy in Appendix 1). Both software packages report quality estimators at the end of a tracking run (“estimated accuracy” for idtracker.ai and “uniqueness” for TRex), but these were not used in the benchmark.

      (2.5) Lines 339-341: The evaluation approach defines a "successful run" and then sums the runtime across all attempts up to that point. If success is defined as simply producing any output, this may not reflect how experienced users actually interact with the software, where parameters are iteratively refined to improve quality.

      Yes, our benchmark was designed to be agnostic to the different experience levels of users. It was also designed for users who do not inspect the trajectories to choose parameters again, so as to leave no room for potential subjectivity.

      (2.6) Lines 344-346: The simulation process involves sampling tracking parameters 10,000 times and selecting the first "successful" run. If parameter tuning is randomized rather than informed by expert knowledge, this could skew the results in favor of tools that require fewer or simpler adjustments. TRex relies on more tunable behavior, such as longer fragments improving training time, which this approach may not capture.

      Indeed, we used the TRex parameter track_max_speed to elongate fragments for optimal tracking. Rather than randomized parameter tuning, we defined the “valid range” for this parameter so that all values in it would produce a decent fragment structure. We used this procedure to avoid penalizing those methods that use more parameters.

      (2.7) Line 354 onward: TRex was evaluated using two varying parameters (threshold and track_max_speed), while idtracker.ai used only one (intensity_threshold). With a fixed number of samples, this asymmetry could bias results against TRex. In addition, users typically set these parameters based on domain knowledge rather than random exploration.

      idtracker.ai and TRex have several parameters. Some of them have a single correct value (e.g. number of animals), or the default value that the system computes is already good (e.g. minimum blob size). For a second type of parameter, the system finds a value that is in general not as good, so users need to modify it. In general, users find that for this second type of parameter there is a valid interval of possible values, from which they need to choose a single value to run the system. idtracker.ai has intensity_threshold as the only parameter of this second type, and TRex has two: threshold and track_max_speed. For these parameters, choosing one value or another within the valid interval can give different tracking results. Therefore, when we model a user that wants to run the system once, unless it goes to P3 (idtracker.ai) or crashes (TRex), it is these parameters we sample from within the valid interval to get a different value for each run of the system. We clarify this in lines 452-469 of the section “Benchmark of accuracy and tracking time”.

      Note that if we chose to simply run old idtracker.ai (v4 or v5) or TRex a single time, this would benefit the new idtracker.ai (v6). This is because old idtracker.ai can enter the very slow Protocol 3 and TRex can fail to track. So running old idtracker.ai or TRex up to 5 times, until old idtracker.ai does not use Protocol 3 and TRex does not fail, makes them as good as they can be with respect to the new idtracker.ai.
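      The simulated-user procedure described above can be sketched as follows; `run_tracker` is an illustrative stand-in for an actual tracking run (returning whether it succeeded and how long it took), and the 5-attempt cap matches the benchmark description:

```python
import random

def simulate_user(run_tracker, valid_interval, max_attempts=5, rng=None):
    """Model of the benchmark's simulated user: draw a tracking
    parameter uniformly from its valid interval and run the tracker;
    if the run fails (e.g. falls back to Protocol 3, or crashes),
    retry with a freshly drawn parameter, up to max_attempts times.
    The reported tracking time is the sum over all attempts."""
    rng = rng or random.Random()
    total_time = 0.0
    for _ in range(max_attempts):
        value = rng.uniform(*valid_interval)
        success, runtime = run_tracker(value)
        total_time += runtime
        if success:
            return True, total_time
    return False, total_time
```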

      (2.8) Figure 2-figure supplement 3: The memory usage comparison lacks detail. It's unclear whether RAM or VRAM was measured, whether shared or compressed memory was included, or how memory was sampled. Since both tools dynamically adjust to system resources, the relevance of this comparison is questionable without more technical detail.

      We modified the text in the caption (new Figure 1-figure supplement 2) adding the kind of memory we measured (RAM) and how we measured it. We already have a disclaimer for this plot saying that memory management depends on the machine's available resources. We agree that this is a simple analysis of the usage of computer resources.

      (3) While the authors cite several key papers on contrastive learning, they do not use the introduction or discussion to effectively situate their approach within related fields where similar strategies have been widely adopted. For example, contrastive embedding methods form the backbone of modern facial recognition and other image similarity systems, where the goal is to map images into a latent space that separates identities or classes through clustering. This connection would help emphasize the conceptual strength of the approach and align the work with well-established applications. Similarly, there is a growing literature on animal re-identification (ReID), which often involves learning identity-preserving representations across time or appearance changes. Referencing these bodies of work would help readers connect the proposed method with adjacent areas using similar ideas, and show that the authors are aware of and building on this wider context.

      We have now added a new section in Appendix 3, “Differences with previous work in contrastive/metric learning” (lines 792-841) to include references to previous work and a description of what we do differently.

      (4) Some sections of the Results text (e.g., lines 48-74) read more like extended figure captions than part of the main narrative. They include detailed explanations of figure elements, sorting procedures, and video naming conventions that may be better placed in the actual figure captions or moved to supplementary notes. Streamlining this section in the main text would improve readability and help the central ideas stand out more clearly.

      Thank you for pointing this out. We have rewritten the Results, for example streamlining the old lines 48-74 (new lines 42-48) by moving the comments about names, files and order of videos to the caption of Figure 1.

      Overall, though, this is a high-quality paper. The improvements to idtracker.ai are well justified and practically significant. Addressing the above comments will strengthen the work, particularly by clarifying the evaluation and comparisons.

      We thank the reviewer for the detailed suggestions. We believe we have taken all of them into consideration to improve the ms.

      Reviewer #2 (Public review):

      Summary:

      This work introduces a new version of the state-of-the-art idtracker.ai software for tracking multiple unmarked animals. The authors aimed to solve a critical limitation of their previous software, which relied on the existence of "global fragments" (video segments where all animals are simultaneously visible) to train an identification classifier network, in addition to addressing concerns with runtime speed. To do this, the authors have both re-implemented the backend of their software in PyTorch (in addition to numerous other performance optimizations) as well as moving from a supervised classification framework to a self-supervised, contrastive representation learning approach that no longer requires global fragments to function. By defining positive training pairs as different images from the same fragment and negative pairs as images from any two co-existing fragments, the system cleverly takes advantage of partial (but high-confidence) tracklets to learn a powerful representation of animal identity without direct human supervision. Their formulation of contrastive learning is carefully thought out and comprises a series of empirically validated design choices that are both creative and technically sound. This methodological advance is significant and directly leads to the software's major strengths, including exceptional performance improvements in speed and accuracy and a newfound robustness to occlusion (even in severe cases where no global fragments can be detected). Benchmark comparisons show the new software is, on average, 44 times faster (up to 440 times faster on difficult videos) while also achieving higher accuracy across a range of species and group sizes. This new version of idtracker.ai is shown to consistently outperform the closely related TRex software (Walter & Couzin, 2021), which, together with the engineering innovations and usability enhancements (e.g., outputs convenient for downstream pose estimation), positions this tool as an advancement on the state-of-the-art for multi-animal tracking, especially for collective behavior studies.
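      The pair-generation idea summarized above (positives from within a fragment, negatives across co-existing fragments) can be sketched as follows; the fragment format and the 50/50 positive/negative split are assumptions for illustration, not the paper's exact implementation:

```python
import random

def coexist(frag_a, frag_b):
    """Two fragments that overlap in time must be different animals."""
    return frag_a["start"] < frag_b["end"] and frag_b["start"] < frag_a["end"]

def sample_pair(fragments, rng):
    """Sample one labeled training pair from heuristically tracked
    fragments: a positive pair (label 1) is two images of the same
    fragment; a negative pair (label 0) is one image from each of two
    time-overlapping fragments. Each fragment here is a dict
    {'start', 'end', 'images'} (illustrative format)."""
    if rng.random() < 0.5:  # positive pair
        frag = rng.choice([f for f in fragments if len(f["images"]) >= 2])
        img_a, img_b = rng.sample(frag["images"], 2)
        return img_a, img_b, 1
    # negative pair: keep drawing until the two fragments co-exist
    while True:
        frag_a, frag_b = rng.sample(fragments, 2)
        if coexist(frag_a, frag_b):
            return rng.choice(frag_a["images"]), rng.choice(frag_b["images"]), 0
```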

      Despite these advances, we note a number of weaknesses and limitations that are not well addressed in the present version of this paper:

      Weaknesses

      (1) The contrastive representation learning formulation. Contrastive representation learning using deep neural networks has long been used for problems in the multi-object tracking domain, popularized through ReID approaches like DML (Yi et al., 2014) and DeepReID (Li et al., 2014). More recently, contrastive learning has become more popular as an approach for scalable self-supervised representation learning for open-ended vision tasks, as exemplified by approaches like SimCLR (Chen et al., 2020), SimSiam (Chen et al., 2020), and MAE (He et al., 2021) and instantiated in foundation models for image embedding like DINOv2 (Oquab et al., 2023). Given their prevalence, it is useful to contrast the formulation of contrastive learning described here relative to these widely adopted approaches (and why this reviewer feels it is appropriate):

      (1.1) No rotations or other image augmentations are performed to generate positive examples. These are not necessary with this approach since the pairs are sampled from heuristically tracked fragments (which produces sufficient training data, though see weaknesses discussed below) and the crops are pre-aligned egocentrically (mitigating the need for rotational invariance).

      (1.2) There is no projection head in the architecture, like in SimCLR. Since classification/clustering is the only task that the system is intended to solve, the more general "nuisance" image features that this architectural detail normally affords are not necessary here.

      (1.3) There is no stop gradient operator like in BYOL (Grill et al., 2020) or SimSiam. Since the heuristic tracking implicitly produces plenty of negative pairs from the fragments, there is no need to prevent representational collapse due to class asymmetry. Some care is still needed, but the authors address this well through a pair sampling strategy (discussed below).

      (1.4) Euclidean distance is used as the distance metric in the loss rather than cosine similarity as in most contrastive learning works. While cosine similarity coupled with L2-normalized unit hypersphere embeddings has proven to be a successful recipe to deal with the curse of dimensionality (with the added benefit of bounded distance limits), the authors address this through a cleverly constructed loss function that essentially allows direct control over the intra- and inter-cluster distance (D_pos and D_neg). This is a clever formulation that aligns well with the use of K-means for the downstream assignment step.
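      To make the D_pos/D_neg idea concrete, here is a hedged sketch using the classic two-margin hinge formulation in the spirit of Hadsell et al. (2006); the exact functional form of idtracker.ai's loss may differ:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, label, d_pos=1.0, d_neg=10.0):
    """Margin loss in Euclidean embedding space: positive pairs
    (label 1) are pulled to within d_pos of each other, negative
    pairs (label 0) pushed beyond d_neg. Controlling both margins
    directly bounds intra-cluster spread and inter-cluster gap,
    which plays well with a downstream K-means assignment."""
    d = np.linalg.norm(emb_a - emb_b)
    if label == 1:
        return max(0.0, d - d_pos) ** 2   # penalize spread beyond d_pos
    return max(0.0, d_neg - d) ** 2       # penalize gap smaller than d_neg
```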

      No concerns here, just clarifications for readers who dig into the review. Referencing the above literature would enhance the presentation of the paper to align with the broader computer vision literature.

      Thank you for this detailed comparison. We have now added a new section in Appendix 3, “Differences with previous work in contrastive/metric learning” (lines 792-841) to include references to previous work and a description of what we do differently, including the points raised by the reviewer.

      (2) Network architecture for image feature extraction backbone. As most of the computations that drive up processing time happen in the network backbone, the authors explored a variety of architectures to assess speed, accuracy, and memory requirements. They land on ResNet18 due to its empirically determined performance. While the experiments that support this choice are solid, the rationale behind the architecture selection is somewhat weak. The authors state that: "We tested 23 networks from 8 different families of state-of-the-art convolutional neural network architectures, selected for their compatibility with consumer-grade GPUs and ability to handle small input images (20 × 20 to 100 × 100 pixels) typical in collective animal behavior videos."

      (2.1) Most modern architectures have variants that are compatible with consumer-grade GPUs. This is true of, for example, HRNet (Wang et al., 2019), ViT (Dosovitskiy et al., 2020), SwinT (Liu et al., 2021), or ConvNeXt (Liu et al., 2022), all of which report single GPU training and fast runtime speeds through lightweight configuration or subsequent variants, e.g., MobileViT (Mehta et al., 2021). The authors may consider revising that statement or providing additional support for that claim (e.g., empirical experiments) given that these have been reported to outperform ResNet18 across tasks.

      Following the recommendation of the reviewer, we tested the architectures SwinT, ConvNeXt and ViT. We found that none of them outperformed ResNet18, since they all showed a slower learning curve, which would result in higher tracking times. These tests are now included in the section “Network architecture” (lines 550-611).

      (2.2) The compatibility of different architectures with small image sizes is configurable. Most convolutional architectures can be readily adapted to work with smaller image sizes, including 20x20 crops. With their default configuration, they lose feature map resolution through repeated pooling and downsampling steps, but this can be readily mitigated by swapping out standard convolutions with dilated convolutions and/or by setting the stride of pooling layers to 1, preserving feature map resolution across blocks. While these are fairly straightforward modifications (and are even compatible with using pretrained weights), an even more trivial approach is to pad and/or resize the crops to the default image size, which is likely to improve accuracy at a possibly minimal memory and runtime cost. These techniques may even improve the performance with the architectures that the authors did test out.

      The only two tested architectures that require a minimum image size are AlexNet and DenseNet. DenseNet proved to underperform ResNet18 in the videos where the images are sufficiently large. We have tested AlexNet with padded images and found that it also performs worse than ResNet18 (see Appendix 3—figure 1).

      We also tested the initialization of ResNet18 with pre-trained weights from ImageNet (in Appendix 3—figure 2) and it proved to bring no benefit to the training speed (added in lines 591-592).

      (2.3) The authors do not report whether the architecture experiments were done with pretrained or randomly initialized weights.

      We adapted the text to make it clear that the networks are always randomly initialized (lines 591-592, lines 608-609 and the captions of Appendix 3—figure 1 and 2).

      (2.4) The authors do not report some details about their ResNet18 design, specifically whether a global pooling layer is used and whether the output fully connected layer has any activation function. Additionally, they do not report the version of ResNet18 employed here, namely, whether the BatchNorm and ReLU are applied after (v1) or before (v2) the conv layers in the residual path.

      We use ResNet18 v1 with no activation function or bias in its last layer (this has been clarified in lines 606-608). Also, by design, ResNet has a global average pool right before the last fully connected layer, which we did not remove. In response to the reviewer, ResNet18 v2 was tested and its performance is the same as that of v1 (see Appendix 3—figure 1 and lines 590-591).

      (3) Pair sampling strategy. The authors devised a clever approach for sampling positive and negative pairs that is tailored to the nature of the formulation. First, since the positive and negative labels are derived from the co-existence of pretracked fragments, selection has to be done at the level of fragments rather than individual images. This would not be the case if one of the newer approaches for contrastive learning were employed, but it serves as a strength here (assuming that fragment generation/first pass heuristic tracking is achievable and reliable in the dataset). Second, a clever weighted sampling scheme assigns sampling weights to the fragments that are designed to balance "exploration and exploitation". They weigh samples both by fragment length and by the loss associated with that fragment to bias towards different and more difficult examples.

      (3.1) The formulation described here resembles and uses elements of online hard example mining (Shrivastava et al., 2016), hard negative sampling (Robinson et al., 2020), and curriculum learning more broadly. The authors may consider referencing this literature (particularly Robinson et al., 2020) for inspiration and to inform the interpretation of the current empirical results on positive/negative balancing.

      Following this recommendation, we added references on hard negative mining in the new section “Differences with previous work in contrastive/metric learning”, lines 792-841. Regarding curriculum learning, even though in spirit it might have parallels with our sampling method, in the sense that there is a guided training of the network, we believe the approach is more similar to an exploration-exploitation paradigm.
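      A plausible sketch of such a length-and-loss weighted fragment sampling follows; the multiplicative combination of the two factors is an assumption for illustration, not necessarily the paper's exact scheme:

```python
import random

def fragment_weights(lengths, losses, eps=1e-6):
    """Exploration/exploitation weighting: a fragment's sampling
    probability grows with its length (more images to learn from)
    and with the loss its pairs produced recently (harder, more
    informative examples). eps keeps zero-loss fragments samplable."""
    raw = [length * (loss + eps) for length, loss in zip(lengths, losses)]
    total = sum(raw)
    return [w / total for w in raw]

def sample_fragment(lengths, losses, rng):
    """Draw one fragment index according to the weights above."""
    weights = fragment_weights(lengths, losses)
    return rng.choices(range(len(lengths)), weights=weights)[0]
```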

      (4) Speed and accuracy improvements. The authors report considerable improvements in speed and accuracy of the new idTracker (v6) over the original idTracker (v4?) and TRex. It's a bit unclear, however, which of these are attributable to the engineering optimizations (v5?) versus the representation learning formulation.

      (4.1) Why is there an improvement in accuracy in idTracker v5 (L77-81)? This is described as a port to PyTorch and improvements largely related to the memory and data loading efficiency. This is particularly notable given that the progression went from 97.52% (v4; original) to 99.58% (v5; engineering enhancements) to 99.92% (v6; representation learning), i.e., most of the new improvement in accuracy owes to the "optimizations" which are not the central emphasis of the systematic evaluations reported in this paper.

      V5 was a two-year effort designed to improve the time efficiency of v4. It was also a surprise to us that accuracy was higher, but that likely comes from the fact that the replaced v4 code contained one or more small bugs. The improvements in v5 are retained in v6 (contrastive learning), and v6 has higher accuracy and shorter tracking times. What accounts for this extra accuracy and the shorter tracking times in v6 is contrastive learning.

      (4.2) What about the speed improvements? Relative to the original (v4), the authors report average speed-ups of 13.6x in v5 and 44x in v6. Presumably, the drastic speed-up in v6 comes from a lower Protocol 2 failure rate, but v6 is not evaluated in Figure 2 - figure supplement 2.

      idtracker.ai v5 runs an optimized Protocol 2 and, sometimes, Protocol 3, but v6 runs neither. While P2 is still present in v6 as a fallback protocol for when contrastive learning fails, in our v6 benchmark P2 was never needed. So the v6 speedup comes from replacing both P2 and P3 with the contrastive algorithm.

      (5) Robustness to occlusion. A major innovation enabled by the contrastive representation learning approach is the ability to tolerate the absence of a global fragment (contiguous frames where all animals are visible) by requiring only co-existing pairs of fragments owing to the paired sampling formulation. While this removes a major limitation of the previous versions of idtracker.ai, its evaluation could be strengthened. The authors describe an ablation experiment where an arc of the arena is masked out to assess the accuracy under artificially difficult conditions. They find that the v6 works robustly up to significant proportions of occlusions, even when doing so eliminates global fragments.

      (5.1) The experiment setup needs to be more carefully described.

      (5.1.1) What does the masking procedure entail? Are the pixels masked out in the original video or are detections removed after segmentation and first pass tracking is done?

      The mask is defined as a region of interest in the software. This means that it is applied at the segmentation step where the video frame is converted to a foreground-background binary image. The region of interest is applied here, converting to background all pixels not inside of it. We clarified this in the newly added section Occlusion tests, lines 240-244.

      (5.1.2) What happens at the boundary of the mask? (Partial segmentation masks would throw off the centroids, and doing it after original segmentation does not realistically model the conditions of entering an occlusion area.)

      Animals at the boundaries of the mask are partially detected. This can change the location of their detected centroid. That is why, when computing the ground-truth accuracy for these videos, only the groundtruth centroids that were at least 15 pixels away from the mask were considered. We clarified this in the newly added section Occlusion tests, lines 248-251.

      (5.1.3) Are fragments still linked for animals that enter and then exit the mask area?

      No artificial fragment linking was added in these videos. Detected fragments are linked the usual way. If an animal hides inside the mask, it disappears and its fragment breaks. We clarified this in the newly added section Occlusion tests, lines 245-247.

      (5.1.4) How is the evaluation done? Is it computed with or without the masked region detections?

      The groundtruth used to validate these videos contains the positions of all animals at all times. But only the positions outside the mask at each frame were considered to compute the tracking accuracy. We clarified this in the newly added section Occlusion tests, lines 248-251.

      (5.2) The circular masking is perhaps not the most appropriate for the mouse data, which is collected in a rectangular arena.

      We wanted to show the same proof of concept in different videos. For that reason, in all cases we covered the arena with a mask parametrized by an angle. In the rectangular arena, the circular masking uses an external circle, so it covers the rectangle parametrized by an angle.

      (5.3) The number of co-existing fragments, which seems to be the main determinant of performance that the authors derive from this experiment, should be reported for these experiments. In particular, a "number of co-existing fragments" vs accuracy plot would support the use of the 0.25(N-1) heuristic and would be especially informative for users seeking to optimize experimental and cage design. Additionally, the number of co-existing fragments can be artificially reduced in other ways other than a fixed occlusion, including random dropout, which would disambiguate it from potential allocentric positional confounds (particularly relevant in arenas where egocentric pose is correlated with allocentric position).

      We included the requested analysis about the fragment connectivity in Figure 3-figure supplement 1. We agree that there can be additional ways of reducing co-existing fragments, but we think the occlusion tests have the additional value that there are many real experiments similar to this test.

      (6) Robustness to imaging conditions. The authors state that "the new idtracker.ai can work well with lower resolutions, blur and video compression, and with inhomogeneous light (Figure 2 - figure supplement 4)." (L156). Despite this claim, there are no speed or accuracy results reported for the artificially corrupted data, only examples of these image manipulations in the supplementary figure.

      We added this information in the same image, new Figure 1 - figure supplement 3.

      (7) Robustness across longitudinal or multi-session experiments. The authors reference idmatcher.ai as a compatible tool for this use case (matching identities across sessions or long-term monitoring across chunked videos), however, no performance data is presented to support its usage. This is relevant as the innovations described here may interact with this setting. While deep metric learning and contrastive learning for ReID were originally motivated by these types of problems (especially individuals leaving and entering the FOV), it is not clear that the current formulation is ideally suited for this use case. Namely, the design decisions described in point 1 of this review are at times at odds with the idea of learning generalizable representations owing to the feature extractor backbone (less scalable), low-dimensional embedding size (less representational capacity), and Euclidean distance metric without hypersphere embedding (possible sensitivity to drift). It's possible that data to support point 6 can mitigate these concerns through empirical results on variations in illumination, but a stronger experiment would be to artificially split up a longer video into shorter segments and evaluate how generalizable and stable the representations learned in one segment are across contiguous ("longitudinal") or discontiguous ("multi-session") segments.

      We have now added a test to prove the reliability of idmatcher.ai in v6. In this test, 14 videos are taken from the benchmark and split into two non-overlapping parts (with a 200-frame gap in between). idmatcher.ai is run between the two parts, achieving 100% identity-matching accuracy in all of them (see section “Validity of idmatcher.ai in the new idtracker.ai”, lines 969-1008).
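      As a toy illustration of what cross-session identity matching involves (this is not idmatcher.ai's actual procedure), one can match per-identity embedding centroids between the two video parts by minimizing the total pairwise distance; brute force over permutations suffices for small group sizes:

```python
from itertools import permutations

def match_identities(centroids_a, centroids_b):
    """Match identities across two sessions given one embedding
    centroid per identity in each session. Returns the mapping
    {index in session A: index in session B} that minimizes the
    total Euclidean distance between matched centroids."""
    n = len(centroids_a)

    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(dist(centroids_a[i], centroids_b[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return {i: j for i, j in enumerate(best_perm)}
```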

      We thank the reviewer for the detailed suggestions. We believe we have taken all of them into consideration to improve the ms.

      Reviewer #3 (Public review):

      Summary

      The authors propose a new version of idTracker.ai for animal tracking. Specifically, they apply contrastive learning to embed cropped images of animals into a feature space where clusters correspond to individual animal identities.

      Strengths

      By doing this, the new software alleviates the requirement for so-called global fragments - segments of the video, in which all entities are visible/detected at the same time - which was necessary in the previous version of the method. In general, the new method reduces the tracking time compared to the previous versions, while also increasing the average accuracy of assigning the identity labels.

      Weaknesses

      The general impression of the paper is that, in its current form, it is difficult to disentangle the old from the new method and understand the method in detail. The manuscript would benefit from a major reorganization and rewriting of its parts. There are also certain concerns about the accuracy metric and reducing the computational time.

      We have made the following modifications in the presentation:

      (1) We have added section titles to the main text so it is clearer which tracking system we are referring to. For example, we now have the sections “Limitation of the original idtracker.ai”, “Optimizing idtracker.ai without changes in the learning method” and “The new idtracker.ai uses representation learning”.

      (2) We have completely rewritten all the text of the ms up to the start of the contrastive learning section. The old lines 20-89 are now lines 20-66, much shorter and easier to read.

      (3) We have rewritten the first 3 paragraphs in the section “The new idtracker.ai uses representation learning” (lines 68-92).

      (4) We have now expanded Appendix 3 to discuss the details of our approach (lines 539-897). It discusses in detail the steps of the algorithm, the network architecture, the loss function, the sampling strategy, the clustering and identity assignment, and the stopping criteria used in training.

      (5) To cite previous work in detail and explain what we do differently, we have now added in Appendix 3 the new section “Differences with previous work in contrastive/metric learning” (lines 792-841).

      Regarding accuracy metrics, we have replaced our own accuracy metric with the standard IDF1 metric. IDF1 is the standard metric for systems whose goal is to maintain consistent identities across time. See also the section "Computation of tracking accuracy" in Appendix 1 (lines 414-436) explaining IDF1 and why it is an appropriate metric for our goal.

      Using IDF1 we obtain slightly higher accuracies for the idtracker.ai systems. This is the comparison of the mean accuracy over our whole benchmark, between our previous accuracy score and the new one, for the full trajectories:

      v4:   97.42% -> 98.24%

      v5:   99.41% -> 99.49%

      v6:   99.74% -> 99.82%

      trex: 97.89% -> 97.89%
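For readers unfamiliar with IDF1, here is a toy sketch of its definition, IDF1 = 2·IDTP / (2·IDTP + IDFP + IDFN). This is an illustration only, not the benchmark implementation: real evaluations find the optimal identity mapping with bipartite matching, while this sketch brute-forces it, which is fine for tiny examples with equal numbers of identities.

```python
from itertools import permutations

def idf1(gt_tracks, pred_tracks):
    """Toy IDF1: each argument maps an identity label to a set of
    (frame, detection) pairs.  We brute-force the identity mapping
    that maximizes ID true positives (assumes at least as many
    predicted identities as groundtruth identities)."""
    gt_ids, pred_ids = list(gt_tracks), list(pred_tracks)
    n_gt = sum(len(v) for v in gt_tracks.values())
    n_pred = sum(len(v) for v in pred_tracks.values())
    best_idtp = 0
    for perm in permutations(pred_ids, len(gt_ids)):
        idtp = sum(len(gt_tracks[g] & pred_tracks[p])
                   for g, p in zip(gt_ids, perm))
        best_idtp = max(best_idtp, idtp)
    idfp, idfn = n_pred - best_idtp, n_gt - best_idtp
    return 2 * best_idtp / (2 * best_idtp + idfp + idfn)
```

For example, two 3-frame tracks whose identities swap in the last frame score IDF1 = 2/3, whereas a per-frame accuracy that ignores identity consistency could still look high.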

      We thank the reviewer for the suggestions about presentation and about the use of more standard metrics.

      Recommendations for the authors:

      Reviewer #2 (Recommendations for the authors):

      (1) Figure 1a: A graphical legend inset would make it more readable since there are multiple colors, line styles, and connecting lines to parse out.

      Following this recommendation, we added a graphical legend in the old Figure 1 (new Figure 2).

      (2) L46: "have images" → "has images".

      We applied this correction. Line 35.

      (3) L52: "videos start with a letter for the species (z,**f**,m)", but "d" is used for fly videos.

      We applied this correction in the caption of Figure 1.

      (4) L62: "with Protocol 3 a two-step process" → "with Protocol 3 being a two-step process".

      We rewrote this paragraph without mentioning Protocol 3, lines 37-41.

      (5) L82-89: This is the main statement of the problems that are being addressed here (speed and relaxing the need for global fragments). This could be moved up, emphasized, and made clearer without the long preamble and results on the engineering optimizations in v5. This lack of linearity in the narrative is also evident in the fact that after Figure 1a is cited, inline citations skip to Figure 2 before returning to Figure 1 once the contrastive learning is introduced.

      We have rewritten all the text until the contrastive learning, (old lines 20-89 are now lines 20-66). The text is shorter, more linear and easier to read.

      (6) L114: "pairs until the distance D_{pos}" → "pairs until the distance approximates D_{pos}".

      We rewrote this as “pairs until the distance 𝐷pos (or 𝐷neg) is reached” in line 107.

      (7) L570: Missing a right parenthesis in the equation.

      We no longer have this equation in the ms.

      (8) L705: "In order to identify fragments we, not only need" → "In order to identify fragments, we not only need".

      We applied this correction, Line 775.

      (9) L819: "probably distribution" → "probability distribution".

      We applied this correction, Line 776.

      (10) L833: "produced the best decrease the time required" → "produced the best decrease of the time required".

      We applied this correction, Line 746.

      Reviewer #3 (Recommendations for the authors):

      (1) We recommend rewriting and restructuring the manuscript. The paper includes a detailed explanation of the previous approaches (idTracker and idTracker.ai) and their limitations. In contrast, the description of the proposed method is short and unstructured, which makes it difficult to distinguish between the old and new methods as well as to understand the proposed method in general. Here are a few examples illustrating the problem. 

      (1.1) Only in line 90 do the authors start to describe the work done in this manuscript. The previous 3 pages list limitations of the original method.

      We have now divided the main text into sections, so it is clearer which part describes the previous method (“Limitation of the original idtracker.ai”, lines 28-51), which describes our new optimization of that method (“Optimizing idtracker.ai without changes in the learning method”, lines 52-66), and which describes the new contrastive approach that also includes those optimizations (“The new idtracker.ai uses representation learning”, lines 66-164). Following your suggestion, the new text has also been streamlined up to the contrastive section. In the new writing the three sections are 25, 15 and 99 lines, respectively. The most detailed section is the one on the new system; the other two are needed as reference, to describe which problem we are solving and the extra new optimizations.

      (1.2) The new method does not have a distinct name, and it is hard to follow which idtracker.ai is a specific part of the text referring to. Not naming the new method makes it difficult to understand.

      We use the name “new idtracker.ai (v6)” so that it becomes the current default version; v4 and v5 are now obsolete. From the point of view of the end user, no new name is needed, since v6 is just an evolution of the same software they have been using. We have also added section titles in the main text to clarify the ideas and indicate which version of idtracker.ai we are referring to.

      (1.3) There are "Protocol 2" and "Protocol 3" mixed with various versions of the software scattered throughout the text, which makes it hard to follow. There should be some systematic naming of approaches and a listing of results introduced.

      Following this recommendation, we no longer talk about the specific protocols of the old version of idtracker.ai in the main text. We have rewritten the explanation of these versions in a clearer and more straightforward way, lines 29-36.

      (2) To this end, the authors leave some important concepts either underexplained or only referenced indirectly via prior work. For example, the explanation of how the fragments are created (line 15) is only explained by the "video structure" and the algorithm that is responsible for resolving the identities during crossings is not detailed (see lines 46-47, 149-150). Including summaries of these elements would improve the paper's clarity and accessibility.

      We listed the specific sections from our previous publication where the reader can find information about the entire tracking pipeline (lines 539-549). This way, we keep the ms clear and focused on the new identification algorithm while indicating where to find such information.

      (3) Accuracy metrics are not clear. In line 319, the authors define it as based on "proportion of errors in the trajectory". This proportion is not explained. How is the error calculated if a trajectory is lost or there are identity swaps? Multi-object tracking has a range of accuracy metrics that account for such events but none of those are used by the authors. Estimating metrics that are common for MOT literature, for example, IDF1, MOTA, and MOTP, would allow for better method performance understanding and comparison.

      In the new ms, we have replaced our accuracy metric with the standard IDF1 metric. IDF1 is the standard metric for systems whose goal is to maintain consistent identities across time. See also the section "Computation of tracking accuracy" in Appendix 1 (lines 416-436) explaining why IDF1, and not MOTA or MOTP, is the appropriate metric for a system whose goal is correct tracking by identification over time.

      Using IDF1 we obtain slightly higher accuracies for the idtracker.ai systems. This is the comparison of the mean accuracy between our previous metric and the new one, for the full trajectories:

      v4:   97.42% -> 98.24%

      v5:   99.41% -> 99.49%

      v6:   99.74% -> 99.82%

      trex: 97.89% -> 97.89%

      (4) Additionally, the authors distinguish between tracking with and without crossings, but do not provide statistics on the frequency of crossings per video. It is also unclear how the crossings are considered for the final output. Including information such as the frame rate of the videos would help to better understand the temporal resolution and the differences between consecutive frames of the videos.

      We added this information in Appendix 1, “Benchmark of accuracy and tracking time”, lines 445-451. The framerate in our benchmark videos goes from 25 to 60 fps (average of 37 fps). On average, 2.6% of the blobs are crossings (1.1% for zebrafish, 0.7% for drosophila, and 9.4% for mice).

      (5) In the description of the dataset used for evaluation (lines 349-365), the authors describe the random sampling of parameter values for each tracking run. However, it is unclear whether the same values were used across methods. Without this clarification, comparisons between the proposed method, older versions, and TRex might be biased due to lucky parameter combinations. In addition, the ranges from which the values were randomly sampled were also not described.

      Only one parameter is shared between idtracker.ai and TRex: intensity_threshold (in idtracker.ai) and threshold (in TRex). The two are conceptually equivalent but differ in their numerical values since they affect different algorithms. For v4, v5, and TRex, the valid value range was selected by the same process of independent expert visual inspection of the segmentation. Since versions 5 and 6 use exactly the same segmentation algorithm, they share the same parameter ranges.

      All the ranges of valid values used in our benchmark are public here https://drive.google.com/drive/folders/1tFxdtFUudl02ICS99vYKrZLeF28TiYpZ as stated in the section “Data availability”, lines 227-228.

      (6) Lines 122-123, Figure 1c. "batches" - is an imprecise metric of training time as there is no information about the batch size.

      We clarified the Figure caption, new Figure 2c.

      (7) Line 145 - "we run some steps... For example..." leaves the method description somewhat unclear. It would help if you could provide more details about how the assignments are carried out and which metrics are being used.

      Following this recommendation, we listed the specific sections from our previous publication where the reader can find information about the entire tracking pipeline (lines 539-549). This way, we keep the ms clear and focused on the new identification algorithm while indicating where to find such information.

      (8) Figure 3. How is tracking accuracy assessed with occlusions? Are the individuals correctly recognized when they reappear from the occluded area?

      The groundtruth for this video contains the positions of all animals at all times. Only the groundtruth points inside the region of interest are taken into account when computing the accuracy. When the tracking reaches high accuracy, it means that animals are successfully relabeled every time they enter the non-masked region. Note that this software works at all times by identification of animals, so crossings and occlusions are treated the same way. What is new here is that the occlusions are so large that there are no global fragments. We clarified this in the new section “Occlusion tests” in Methods, lines 239-251.
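The masking rule described above can be illustrated with a minimal sketch (all names here are hypothetical, not idtracker.ai's actual API): groundtruth points outside the region of interest are simply excluded from scoring, so an animal is counted only once it re-enters the visible region.

```python
def masked_accuracy(gt, pred, in_roi):
    """gt and pred are index-aligned lists of (x, y, identity) tuples;
    in_roi is a predicate (x, y) -> bool defining the visible region.
    Only groundtruth points inside the region of interest are scored."""
    scored = [(g, p) for g, p in zip(gt, pred) if in_roi(g[0], g[1])]
    if not scored:
        return float("nan")  # nothing inside the ROI to evaluate
    return sum(g[2] == p[2] for g, p in scored) / len(scored)
```

With this rule, a detection made while the animal is occluded contributes nothing, and only the identity assigned on reappearance affects the score.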

      (9) Lines 185-187 this part of the sentence is not clear.

      We rewrote this part in a clearer way, lines 180-182.

      (10) The authors also highlight the improved runtime performance. However, they do not provide a detailed breakdown of the time spent on each component of the tracking/training pipeline. A timing breakdown would help to compare the training duration with the other components. For example, the calculation of the Silhouette Score alone can be time-consuming and could be a bottleneck in the training process. Including this information would provide a clearer picture of the overall efficiency of the method.

      We measured that, on average in our benchmark, the training of the ResNet takes 47% of the tracking time (we added this information in line 551, section “Network Architecture”). In this training stage the bottleneck is the network's forward and backward passes, limited by GPU performance. All other processes happening during training have been deeply optimized and parallelized where needed, so their contribution to the training time is minimal. Apart from training, we also measured that 24.4% of the total tracking time is spent reading and segmenting the video files and 11.1% processing the identification images and detecting crossings.
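On the reviewer's specific concern about the silhouette score: its exact computation requires all pairwise distances and is quadratic in the number of embeddings, but its cost can be capped by scoring a random subsample. A hedged sketch on synthetic data, using scikit-learn's `sample_size` option (this is an illustration of the general mitigation, not the actual idtracker.ai code):

```python
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# synthetic 8-D embeddings: two well-separated identity clusters
# (toy stand-ins for low-dimensional identity embeddings)
X = np.vstack([rng.normal(0.0, 0.1, (5000, 8)),
               rng.normal(3.0, 0.1, (5000, 8))])
labels = np.repeat([0, 1], 5000)

# exact score needs all O(n^2) pairwise distances;
# sample_size=1000 scores a random subsample instead
score = silhouette_score(X, labels, sample_size=1000, random_state=0)
```

Subsampling trades a small amount of estimator variance for a large reduction in cost, which keeps the score usable as a periodic training diagnostic rather than a bottleneck.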

      (11) An important part of the computational cost is related to model training. It would be interesting to test whether a model trained on one video of a specific animal type (e.g., zebrafish_5) generalizes to another video of the same type (e.g., zebrafish_7). This would assess the model's generalizability across different videos of the same species and spare a lot of compute. Alternatively, instead of training a model from scratch for each video, the authors could also consider training a base model on a superset of images from different videos and then fine-tuning it with a lower learning rate for each specific video. This could potentially save time and resources while still achieving good performance.

      Already before v6, users could start training the identification network by copying the final weights from another tracking session. This knowledge-transfer feature is still present in v6 and still decreases training times significantly. This information has been added in Appendix 4, lines 906-909.
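The weight-copying workflow mentioned above can be sketched in PyTorch with a toy network (a minimal illustration, not idtracker.ai's code: the real identification network is a ResNet, and the file name here is hypothetical):

```python
import torch
import torch.nn as nn

def make_embedder(out_dim=8):
    # toy stand-in for the identification network
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64),
                         nn.ReLU(), nn.Linear(64, out_dim))

# session A: after training (omitted here), keep the final weights
net_a = make_embedder()
torch.save(net_a.state_dict(), "session_a_weights.pt")  # hypothetical path

# session B: initialize from session A's weights instead of randomly,
# which shortens training when the animals or setup are similar
net_b = make_embedder()
net_b.load_state_dict(torch.load("session_a_weights.pt"))
```

Starting from a nearby point in weight space is what saves time; the architecture must match between sessions for `load_state_dict` to succeed.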

      We have already begun working on the interesting idea of a general base model but it brings some complex challenges. It could be a very useful new feature for future idtracker.ai releases.

      We thank the reviewer for the many suggestions. We have implemented all of them.

    1. Injury, exercise, and other activities lead to remodeling, but even without injury or exercise, about 5 to 10 percent of the skeleton is remodeled annually just by destroying old bone and replacing it with fresh bone.

      A discussion of Wolff's Law seems appropriate here, especially as we earlier referenced changes in bone density due to force placed upon them.

    2. the arms (i.e., humerus, ulna, and radius) and legs (i.e., femur, tibia, fibula), as well as in the fingers

      We might want to say upper and lower extremity rather than arms and legs here to avoid later confusion when we discuss muscle action (thigh vs leg, for example).