10,000 Matching Annotations
  1. Mar 2025
    1. One of the things I have set up for myself is a website that looks like Twitter, so I can type things and hit "post", and it just gets sent to /dev/null. It's great, one of the best things I've ever set up.

      cf. my eras of voidposting on Mastodon

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      (1) The questions after reading this manuscript are what novel insights have been gained that significantly go beyond what was already known about the interaction of these receptors and, more importantly, what are the physiological implications of these findings? The proposed significance of the results in the last paragraph of the Discussion section is speculative since none of the receptor interactions have been investigated in TNBC cell lines. Moreover, no physiological experiments were conducted using the PRLR and GHR knockout T47D cells to provide biological relevance for the receptor heteromers. The proposed role of JAK2 in the cell surface distribution and association of both receptors as stated in the title was only derived from the analysis of box 1 domain receptor mutants. A knockout of JAK2 was not conducted to assess heteromer formation.

      We thank the reviewer for these comments. The novel insight is that two different cytokine receptors can interact in an asymmetric, ligand-dependent manner, such that one receptor regulates the other receptor’s surface availability, mediated by JAK2. To our knowledge, this has not been reported before. Beyond our observations, open questions remain as to whether this could be a much more common regulatory mechanism and whether it has therapeutic relevance. However, answering these questions is beyond the scope of this work.

      Along the same line, the question regarding the biological relevance of our receptor heteromers and JAK2’s role in cell surface distribution is undoubtedly very important. Studying GHR-PRLR cell surface distributions in JAK2 knockout cells and certain TNBC cell lines as proposed by the reviewer could perhaps be insightful. However, most TNBCs down-regulate PRLR [1], so we would first have to identify TNBC cell lines that actually express PRLR at sufficiently high levels. Moreover, knocking out JAK2 is known to significantly reduce GHR surface availability [2,3], such that the proposed experiment would probably provide only limited insights.

      Unfortunately, our team is currently not in a position to perform any experiments (due to lack of funding and shortage of personnel). However, to address the reviewer’s comment as much as possible, we have revised the respective paragraph of the Discussion section to emphasize the speculative nature of our statement and have added another paragraph discussing shortcomings and future experiments (see revised manuscript, pages 23-24).

      (1) López-Ozuna, V., Hachim, I., Hachim, M. et al. Prolactin Pro-Differentiation Pathway in Triple Negative Breast Cancer: Impact on Prognosis and Potential Therapy. Sci Rep 6, 30934 (2016). https://www.nature.com/articles/srep30934

      (2) He, K., Wang, X., Jiang, J., Guan, R., Bernstein, K.E., Sayeski, P.P., Frank, S.J. Janus kinase 2 determinants for growth hormone receptor association, surface assembly, and signaling. Mol Endocrinol. 2003;17(11):2211-27. doi: 10.1210/me.2003-0256. PMID: 12920237.

      (3) He, K., Loesch, K., Cowan, J.W., Li, X., Deng, L., Wang, X., Jiang, J., Frank, S.J. Janus Kinase 2 Enhances the Stability of the Mature Growth Hormone Receptor. Endocrinology, Volume 146, Issue 11, 2005, Pages 4755–4765. https://doi.org/10.1210/en.2005-0514

      (2) Except for some investigation of γ2A-JAK2 cells, most of the experiments in this study were conducted on a single breast cancer cell line. In terms of rigor and reproducibility, this is somewhat borderline. The CRISPR/Cas9 mutant T47D cells were not used for rescue experiments with the corresponding full-length receptors and the box1 mutants. A missed opportunity is the lack of an investigation correlating the number of receptors with physiological changes upon ligand stimulation (e.g., cellular clustering, proliferation, downstream signaling strength).

      We appreciate the reviewer’s comments. While we are confident in the reproducibility of our findings, including those obtained in the T47D cell line, we acknowledge that testing in additional cell lines would have strengthened the generalizability of our results. We also recognize that performing a rescue experiment using our T47D hPRLR or hGHR KO cells would have been valuable. Furthermore, examining physiological changes, such as proliferation rates and downstream signaling responses, would have provided additional insights. Unfortunately, these experiments were not conducted at the time, and we currently lack the resources to carry them out.

      (3) An obvious shortcoming of the study that was not discussed seems to be that the main methodology used in this study (super-resolution microscopy) does not distinguish the presence of various isoforms of the PRLR on the cell surface. Is it possible that the ligand stimulation changes the ratio between different isoforms? Which isoforms besides the long form may be involved in heteromer formation, presumably all that can bind JAK2?

      This is a very good point. We fully agree with the reviewer that a discussion of the results in the light of different PRLR isoforms is appropriate. We have added information on PRLR isoforms to the Introduction (see revised manuscript, page 2) and Discussion sections (see revised manuscript, pages 23-24).

      (4) Changes in the ligand-inducible activation of JAK2 and STAT5 were not investigated in the T47D knockout models for the PRL and GHR. It is also a missed opportunity to use super-resolution microscopy as a validation tool for the knockouts on the single cell level and how it might affect the distribution of the corresponding other receptor that is still expressed.

      We thank the reviewer for this comment. We fully agree that such additional experiments could be very valuable. We are sorry but, as already mentioned above, this is not something we are able to address at this stage due to lack of personnel and funding. However, we do hope to address these and other proposed experiments in the future.

      (5) Why does the binding of PRL not cause a similar decrease (internalization and downregulation) of the PRLR, and instead, an increase in cell surface localization? This seems to be contrary to previous observations in MCF-7 cells (J Biol Chem. 2005 October 7; 280(40): 33909-33916).

      It has recently been reported for GHR that not only JAK2 but also LYN binds to the box1-box2 region, creating competition that results in divergent signaling cascades and affects GHR nanoclustering [1]. So, it is reasonable to assume that similar mechanisms regulating PRLR cell surface availability may be at work. Differences in cells’ expression of such kinases could perhaps explain the perceived inconsistency. Also, Lu et al. [2] studied the downregulation of the long PRLR isoform in response to PRL; all other PRLR isoforms were not detectable in MCF-7 cells. So, differences between MCF-7 and T47D cells may underlie this perceived contradiction.

      At this stage, we can only speculate about the actual reasons for these seemingly contradictory results. However, for full transparency, we are now mentioning this apparent contradiction in the Discussion section (see page 23) and have added the references below.

      (1) Chhabra, Y., Seiffert, P., Gormal, R.S., et al. Tyrosine kinases compete for growth hormone receptor binding and regulate receptor mobility and degradation. Cell Rep. 2023;42(5):112490. doi: 10.1016/j.celrep.2023.112490. PMID: 37163374.

      https://www.cell.com/cell-reports/pdf/S2211-1247(23)00501-6.pdf

      (2) Lu, J.C., Piazza, T.M., Schuler, L.A. Proteasomes mediate prolactin-induced receptor down-regulation and fragment generation in breast cancer cells. J Biol Chem. 2005 Oct 7;280(40):33909-16. doi: 10.1074/jbc.M508118200. PMID: 16103113; PMCID: PMC1976473.

      (6) Some figures and illustrations are of poor quality and were put together without paying attention to detail. For example, in Fig 5A, the GHR was cut off, possibly to omit other nonspecific bands, the WB images look 'washed out'. 5B, 5D: the labels are not in one line over the bars, and what is the point of showing all individual data points when the bar graphs with all annotations and SD lines are disappearing? As done for the γ2A cells, the illustrations in 5B-5E should indicate what cell lines were used. No loading controls in Fig 5F, is there any protein in the first lane? No loading controls in Fig 6B and 6H.

      We thank the reviewer for pointing this out. We have amended Fig. 5A to now show larger crops of the two GHR and PRLR Western Blot images and thus a greater range of proteins present in the extracts. Please note that the bands in the WBs other than what is identified as GHR and PRLR are non-specific and reflect roughly equivalent loading of protein in each lane.

      We also made some changes to Figures 5B-5E.

      (7) The proximity ligation method was not described in the M&M section of the manuscript.

      We thank the reviewer for pointing this out. We have added a description of the PL method to the Methods section.

      Reviewer #1 (Recommendations for the Authors):

      A final suggestion for future investigations: Instead of focusing on the heteromer formation of the GHR/PRLR, which both signal through the same downstream effectors (JAK2, STAT5), it would have been more cancer-relevant, and perhaps even more interesting, to look for heteromers between the PRLR and receptors of the IL-6 family since it had been shown that PRL can stimulate STAT3, which is a unique feature of cancer cells. If that is the case, this would require a different modality of the interaction between different JAK kinases.

      We highly appreciate the reviewer’s recommendation and hope to follow up on it in the near future.

      Reviewer #2 (Public Review):

      (1) I could not fully evaluate some of the data, mainly because several details on acquisition and analysis are lacking. It would be useful to know what the background signal was in dSTORM and how the authors distinguished the specific signal from unspecific background fluorescence, which can be quite prominent in these experiments. Typically, one would evaluate the signal coming from antibodies randomly bound to a substrate around the cells to determine the switching properties of the dyes in their buffer and the average number of localisations representing one antibody. This would help evaluate if GHR or PRLR appeared as monomers or multimers in the plasma membrane before stimulation, which is currently a matter of debate. It would also provide better support for the model proposed in Figure 8.

      We are grateful for the reviewer’s comment. In our experience, the background signal is more relevant in dSTORM when imaging proteins that are located at deeper depths (> 3 μm) above the coverslip surface. In our experiments, cells are attached to the coverslip surface and the proteins being imaged are on the cell membrane. In addition, we employed dSTORM’s TIRF (total internal reflection fluorescence) microscopy mode to image membrane receptor proteins. TIRFM exploits the unique properties of an induced evanescent field in a limited specimen region immediately adjacent to the interface between two media having different refractive indices. It thereby dramatically reduces background by rejecting fluorescence from out-of-focus areas in the detection path and illuminating only the area right near the surface.

      Having said that, a few other sources, such as auto-fluorescence, scattering, and non-bleached fluorescent molecules close to and distant from the focal plane, can contribute to the background signal. We tried to reduce auto-fluorescence in several ways: cells were grown in phenol-red-free media; imaging was performed in STORM buffer, which reduces autofluorescence; and our immunostaining protocol includes a quenching step as well as a blocking buffer containing serum in addition to BSA. Moreover, we employed extensive washing steps following antibody incubations to eliminate non-specifically bound antibodies. Ensuring a uniform TIRF illumination field helps reduce scatter. Additionally, an extended bleaching step prior to the acquisition of the localization frames further reduced the probability of non-bleached fluorescent molecules.

      In short, due to the experimental design we do not expect much background. However, in the future, we will address this concern and estimate background in a type-dependent manner. To this end, we will distinguish two types of background noise: (A) background with a small change between subsequent frames, which mainly consists of auto-fluorescence and non-bleached out-of-focus fluorescent molecules; and (B) background that changes every imaging frame, which mainly comes from non-bleached fluorescent molecules near the focal plane. For type (A) background, temporal filters must be used for background estimation [1]; for type (B) background, low-pass filters (e.g., wavelet transform) should be used for background estimation [2].

      (1) Hoogendoorn, Crosby, Leyton-Puig, Breedijk, Jalink, Gadella, and Postma (2014). The fidelity of stochastic single-molecule super-resolution reconstructions critically depends upon robust background estimation. Scientific reports, 4, 3854. https://doi.org/10.1038/srep03854

      (2) Patel, Williamson, Owen, and Cohen (2021). Blinking statistics and molecular counting in direct stochastic reconstruction microscopy (dSTORM). Bioinformatics, Volume 37, Issue 17, September 2021, Pages 2730–2737, https://doi.org/10.1093/bioinformatics/btab136
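      To make the type (A) strategy concrete, here is a minimal, illustrative sketch (not our actual analysis pipeline) of temporal-filter background estimation: a per-pixel running median over many frames captures the slowly varying background, and subtracting it leaves the fast blinking signal. The function name and the synthetic data are our own, for illustration only.

```python
import numpy as np

def temporal_median_background(stack, window=101):
    """Estimate slowly varying (type A) background with a per-pixel
    running temporal median over `window` frames.
    `stack` has shape (n_frames, height, width)."""
    n = stack.shape[0]
    half = window // 2
    bg = np.empty_like(stack, dtype=float)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        bg[t] = np.median(stack[lo:hi], axis=0)
    return bg

# Synthetic check: constant background plus one single-frame blink.
stack = np.full((50, 4, 4), 10.0)
stack[25, 1, 1] += 100.0                      # the blink
signal = stack - temporal_median_background(stack, window=11)
# The blink survives subtraction; the static background is removed.
```

A type (B) background that fluctuates from frame to frame would instead call for a spatial low-pass (e.g., wavelet) filter, as in reference [2] above.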

      (2) Since many of the findings in this work come from the evaluation of localisation clusters, an image showing actual localisations would help support the main conclusions. I believe that the dSTORM images in Figures 1 and 2 are density maps, although this was not explicitly stated. Alexa 568 and Alexa 647 typically give a very different number of localisations, and this is also dependent on the concentration of BME. Did the authors take that into account when interpreting the results and creating the model in Figures 2 and 8?

      I believe that including this information is important as findings in this paper heavily rely on the number of localisations detected under different conditions.

      Including information on proximity labelling and CRISPR/Cas9 in the methods section would help with the reproducibility of these findings by other groups.

      Figures 1 and 2 show Gaussian interpolations of actual localizations, not density maps. Imaging captured the fluorophores’ blinking events, and an event was counted as a true localization when at least 5 consecutive blinking events had been observed. Nikon software was used for Gaussian fitting. In other words, we show reconstructed images based on true localizations identified by Gaussian fitting under strict blinking criteria. This allowed us to identify true localizations with high confidence and to generate high-resolution images of the membrane receptors.
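      The acceptance rule described above can be sketched as follows. This is an illustrative reimplementation, not the Nikon software's actual code, and the function name and input representation are our own:

```python
def is_true_localization(frame_indices, min_consecutive=5):
    """Accept a candidate emitter only if it was detected in at least
    `min_consecutive` consecutive frames.
    `frame_indices`: sorted frame numbers in which the candidate's
    fitted position appeared."""
    if not frame_indices:
        return False
    run = best = 1
    for prev, cur in zip(frame_indices, frame_indices[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best >= min_consecutive

# Five consecutive detections pass; scattered detections do not.
```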

      Indeed, Alexa 568 and Alexa 647 give different numbers of localizations, owing to the intrinsic photo-physics of the fluorophores: each fluorophore has a different duty cycle, switching cycle, and survival fraction. However, we note that we focused on capturing the relative changes in receptor numbers over time, before and after stimulation by ligands, not the absolute numbers of surface GHR and PRLR. We are not comparing absolute numbers of localizations or drawing comparisons between 568 and 647. Across all conditions and time points, the photo-physics of a particular fluorophore remains the same, which allows us to make relative comparisons.

      As far as the effect of BME is concerned, the concentration of mercaptoethanol needs to be carefully optimized, as too high a concentration can quench the fluorescence or affect the overall stability of the sample. However, we used an optimized concentration that has been validated across multiple previous STORM experiments, which makes concerns about the BME concentration irrelevant to the current experimental design. Moreover, the BME concentration was kept the same across all experimental conditions.

      We have added information regarding PL and CRISPR/Cas9 for generating hGHR KO and hPRLR KO cells in two new subsections to the Methods section.

      Reviewer #2 (Recommendations for the authors):

      In the methods please include:

      (1) A section with details on proximity ligation assays.

      We have added a description of the PL method to the Methods section.

      (2) A section on CRISPR/Cas9 technology.

      We have added two new sections on “Generating hGHR knockout and hPRLR knockout T47D cells” and “Design of sgRNAs for hGHR or hPRLR knockout” to the Methods section.

      (3) List the precise composition of the buffer or cite the paper that you followed.

      We used the buffer recipe described in this protocol [1] and have added the components with concentrations as well as the following reference to the manuscript.

      (1) Beggs, R.R., Dean, W.F., Mattheyses, A.L. (2020). dSTORM Imaging and Analysis of Desmosome Architecture. In: Turksen, K. (eds) Permeability Barrier. Methods in Molecular Biology, vol 2367. Humana, New York, NY. https://doi.org/10.1007/7651_2020_325

      (4) Exposure time used for image acquisition to put 40 000 frames in the context of total imaging time and clarify why you decided to take 40 000 images per channel.

      Our Nikon Ti2 N-STORM microscope is equipped with an iXon DU-897 Ultra EMCCD camera from Andor (Oxford Instruments). According to the camera’s manufacturer, this camera platform uses a back-illuminated 512 x 512 frame transfer sensor and overclocks readout to 17 MHz, pushing speed performance to 56 fps (in full frame mode). We note that we always tried to acquire STORM images at the maximal frame rate. As for the exposure time, according to the manufacturer it can be as short as 17.8 ms. We would like to emphasize that we did not specify/alter the exposure time.

      See also: https://andor.oxinst.com/assets/uploads/products/andor/documents/andor-ixon-ultra-emccd-specifications.pdf

      The decision to take 40,000 frames per channel was based on our intention to capture the true population of the molecules of interest, localized and accurately represented in the final reconstructed image. The required number of frames depends on sample complexity, labeling density, and the desired resolution. We tested a range of 20,000 to 60,000 frames and found that, for our experimental design and output requirements, 40,000 frames provided the best balance between maximal resolution and the number of localizations needed for consistent and accurate estimates across the different stimulation conditions compared to basal controls.
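      For context, the stated full-frame rate lets one estimate the raw acquisition time per channel (a back-of-the-envelope calculation, assuming the maximal 56 fps is sustained throughout):

```python
frames_per_channel = 40_000
fps = 56  # manufacturer's full-frame rate for the iXon Ultra 897
total_seconds = frames_per_channel / fps
print(f"~{total_seconds:.0f} s, i.e. about {total_seconds / 60:.1f} min per channel")
```

That is roughly 12 minutes of raw acquisition per channel at the maximal rate; slower effective frame rates would lengthen this proportionally.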

      (5) The lasers used to switch Alexa 568 and Alexa 647. Were you alternating between the lasers for switching and imaging of dyes? Intermittent and continuous illumination will produce very different unspecific background fluorescence.

      Yes, we used an alternating approach for the lasers exciting Alexa 647 and Alexa 568, for both switching and imaging of the dyes.

      (6) A paragraph with a detailed description of methods used to differentiate the background fluorescence from the signal.

      We have addressed the background fluorescence under Point 1 (Public Review). We have added a paragraph in the Methods section on this issue.

      (7) Minor corrections to the text:

      It appears as though there is a large difference in the expression level of GHR and PRLR in basal conditions in Figure 1. This can be due to the switching properties of the dyes, which is related to the amount of BME in the buffer, or it can be because there is indeed more PRLR. Would the authors be able to comment on this?

      We thank the reviewer for this comment. According to expression data available online, there is indeed more PRLR than GHR in T47D cells. According to CellMiner [1], T47D cells have an RNA-Seq gene expression level log2(FPKM + 1) of 6.814 for PRLR and 3.587 for GHR, strongly suggesting that there is more PRLR than GHR in basal conditions, matching the reviewer’s interpretation of our images in Fig. 1 (basal). However, we would advise against using STORM images for direct comparisons of receptor expression. First, with TIRF images, we are only looking at the membrane fraction attached to the coverslip (within ~150 nm of the coverslip–membrane interface). Secondly, as discussed above, our data represent relative cell surface receptor levels that allow for comparison of different conditions (basal vs. stimulation) and do not represent absolute quantifications. Everything is relative and in comparison to controls.
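      For illustration, the CellMiner values can be converted back to linear FPKM to estimate the transcript-level ratio (a rough calculation; transcript ratios only approximate relative surface-receptor levels):

```python
def fpkm_from_log2(x):
    """Invert CellMiner's log2(FPKM + 1) transform."""
    return 2 ** x - 1

prlr_fpkm = fpkm_from_log2(6.814)   # PRLR in T47D
ghr_fpkm = fpkm_from_log2(3.587)    # GHR in T47D
ratio = prlr_fpkm / ghr_fpkm        # roughly ten-fold more PRLR transcript
```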

      Also, BME does not change the level of expression. The differences in receptor levels as estimated by relative comparison reflect actual changes in surface receptors and are not an artifact of the amount of BME in the buffer or the properties of the dyes. These factors were maintained across all experimental conditions and do not influence the final outcome.

      (1) https://discover.nci.nih.gov/cellminer/

      (8) I would encourage the authors to use unspecific binding to characterize the signal coming from single antibodies bound to the substrate. This would provide a mean number of localizations that a single antibody generates. With this information, one can evaluate how many receptors there are per cluster, which would strengthen the findings and potentially provide additional support for the model presented in Figure 8. It would also explain why the distributions of localisations per cluster in Fig. 3B look very different for hGHR and hPRLR. As the authors point out in the discussion, the results on predimerization of these receptors in basal conditions are conflicting and therefore it is important to shed more light on this topic.

      We thank the reviewer for this suggestion. While we are unable to perform this experiment at this stage, we will keep it in mind for future experiments.

      (9) Minor corrections to the figures:

      Figure 1:

      In the legend, please say what representation was used. Are these density maps or another representation? Please provide examples of actual localisations (either as dots or crosses representing the peaks of the Gaussians). Most findings of this work rely on the characterisation of the clusters of localisations and therefore it is of essence to show what the clusters look like. This could potentially go to the supplemental info to minimise additional work. It's very hard to see the puncta in this figure.

      If the authors created zoomed regions in each of the images (as in Figure 3), it would be much easier to evaluate the expression level and the extent of colocalisation. Halfway through GHR 3 min green pixels become grey, but this may be the issue with the document that was created. Please check. Either increase the font on the scale bars in this figure or delete it.

      As described above, Figure 1 does not show density maps. Imaging captured the fluorophores’ blinking events, and an event was counted as a true localization when at least 5 consecutive blinking events had been observed. Nikon software was used for Gaussian fitting and smoothing.

      We have generated zoomed regions. In our files (original as well as pdf) we do not see pixels become grey. We increased the font size above one of the scale bars and removed all others.

      Figure 3:

      In A, the GHR clusters are colour coded but PRLR are not. Are both DBSCAN images? Explain the meaning of the colour coding or show it in black and white. Was brightness also increased in the PRLR image? The font on the scale bars is too small. In B, right panels, the font on the axes is too small. In the figure legend, explain the meaning of 33.3 and 16.7.

      In our document, both GHR and PRLR are color coded, but the hGHR clusters are certainly bigger and therefore appear brighter than the hPRLR clusters. Both are DBSCAN images. The color coding serves only to distinguish different clusters (it has no other meaning). We have kept the color coding but have added a sentence to the caption addressing this. Brightness was increased equally in both images of Panel B. 33.3 and 16.7 are the median cluster sizes; we have added a sentence to the caption explaining this. We have also increased the font on the axes in B (right panels).

      Figure 4:

      I struggled to see any colocalization in the 2nd and the 3rd image. Please show zoomed-in sections. In panels B and C, the data are presented as fractions. Is this per cell? My interpretation is that ~80% of PRL clusters also contain GHR.

      Is this in agreement with Figures 1 and 2? In Figure 1, PRL 3 min, Merge, colocalization seems much smaller. Could the authors give the total numbers of GHR and PRLR from which the fractions were calculated at least in basal conditions?

      We have provided zoomed-in views. As for panels B and C, the fractions are the number of clusters containing both receptors divided by the total number of clusters. We used the same strategy that we had used for calculating the localization changes: we randomly selected 4 ROIs (regions of interest) per cell to calculate fractions and then averaged over three different cells from independently repeated experiments. We did not calculate total numbers of GHR/PRLR; the numbers are fractions of cluster counts.

      Moreover, the reviewer interprets the results in panels B and C to mean that ~80% of PRLR clusters also contain GHR (we assume the reviewer refers to the basal state). This interpretation is not correct for the following reason: ~80% of all clusters contain both receptors, but the panels do not reveal how many of the remaining ~20% of clusters contain only PRLR or only GHR. Only if 100% of clusters contained PRLR could we conclude that 80% of PRLR clusters also contain GHR.
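      A small numerical sketch makes the distinction explicit. The 80% figure is from the panels; the 15/5 split of the remaining clusters is purely hypothetical:

```python
# Hypothetical counts for 100 clusters; only `both` (80%) is observed.
both, prlr_only, ghr_only = 80, 15, 5

# Fraction of ALL clusters containing both receptors (what the panels show):
frac_both = both / (both + prlr_only + ghr_only)
# Fraction of PRLR-containing clusters that also contain GHR:
frac_prlr_with_ghr = both / (both + prlr_only)
# 0.80 of all clusters vs. ~0.84 of PRLR clusters: not the same quantity.
```

Under this assumed split, about 84% of PRLR clusters contain GHR even though 80% of all clusters contain both; the panels alone cannot distinguish these two quantities.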

      Also, while Figures 1 and 2 show localizations based on dSTORM images, Figure 4 indicates and quantifies co-localization based on proximity ligation assays followed by DBSCAN analysis using Clus-DoC. We do not think that the results are directly comparable.

      Reviewer #3 (Public Review):

      (1) The manuscript suffers from a lack of detail, which in places makes it difficult to evaluate the data and would make it very difficult for the results to be replicated by others. In addition, the manuscript would very much benefit from a full discussion of the limitations of the study. For example, the manuscript is written as if there is only one form of the PRLR while the anti-PRLR antibody used for dSTORM would also recognize the intermediate form and short forms 1a and 1b on the T47D cells. Given the very different roles of these other PRLR forms in breast cancer (Dufau, Vonderhaar, Clevenger, Walker and other labs), this limitation should at the very least be discussed. Similarly, the manuscript is written as if Jak2 essentially only signals through STAT5 but Jak2 is involved in multiple other signaling pathways from the multiple PRLRs, including the long form. Also, while there are papers suggesting that PRL can be protective in breast cancer, the majority of publications in this area find that PRL promotes breast cancer. How then would the authors interpret the effect of PRL on GHR in light of all those non-protective results? [Check papers by Hallgeir Rui]

      We thank the reviewer for such thoughtful comments. We have added a paragraph to the Discussion section on the limitations of our study, including the sole focus on T47D and γ2A-JAK2 cells and the lack of PRLR isoform-specific data. Also, we now mention that these isoforms play different roles in breast cancer, citing papers by the Dufau, Vonderhaar, Clevenger, and Walker labs.

      We did not mean to imply that JAK2 signals only via STAT5 or by only binding the long form. We have made this point clear in the Introduction as well as in our revised Discussion section. Moreover, we have added information and references on JAK2 signaling and PRLR isoform specific signaling.

      In our Discussion section, we also mention the findings that PRL promotes breast cancer. We would like to point out that it is conceivable that PRL is protective in BC by reducing surface hGHR availability, but that this effect may depend on JAK2 levels as well as on the expression levels of other kinases that competitively bind Box1 and/or Box2 [1]. Besides, could it not be that PRL’s effect is BC-stage dependent? In any case, we have emphasized the speculative nature of our statement.

      (1) Chhabra, Y., Seiffert, P., Gormal, R.S., et al. Tyrosine kinases compete for growth hormone receptor binding and regulate receptor mobility and degradation. Cell Rep. 2023;42(5):112490. doi: 10.1016/j.celrep.2023.112490. PMID: 37163374.

      Reviewer #3 (Recommendations for the authors):

      Points for improvement of the manuscript:

      (1) Method details -

      a) "we utilized CRISPR/Cas9 to generate hPRLR knockout T47D cells ......" Exactly how? Nothing is said under methods. Can we be sure that you knocked out the whole gene?

      We have addressed this point by adding two new sections on “Generating hGHR knockout and hPRLR knockout T47D cells” and “Design of sgRNAs for hGHR or hPRLR knockout” to the Methods section.

      b) Some of the Western blots are missing mol wt markers. How specific are the various antibodies used for Westerns? For example, the previous publications quoted as providing characterization of the antibodies also seem to use just band cutouts and do not show the full molecular weight range of whole cell extracts blotted. Anti-PRLR antibodies are notoriously bad, so this is important.

      There is an antibody referred to in Figure 5 that is not listed under "antibodies" in the methods.

      We have modified Figure 5a, showing the entire gel as well as molecular weight markers. As for specificity of our antibodies, we used monoclonal antibodies Anti-GHR-ext-mAB 74.3 and Anti-PRLR-ext-mAB 1.48, which have been previously tested and used. In addition, we did our own control experiments to ensure specificity. We have added some of our many control results as Supplementary Figures S2 and S3.

      We thank the reviewer for noticing the missing antibody in the Methods section. We have now added information about this antibody.

      c) There is no description of the proximity ligation assay.

      We have addressed this by adding a paragraph on PLA in the Methods section.

      d) What is the level of expression of GHR, PRLR, and Jak2 in the gamma2A-JAK2 cells compared to the T47D cells? Artifacts of overexpression are always a worry.

      The γ2A-JAK2 cell series overexpress the receptors. That is why we did not rely solely on observations in the γ2A-JAK2 cell lines but also performed the experiments in T47D cell lines.

      e) There are no concentrations given for components of the dSTORM imaging buffer. On line 380, I think the authors mean alternating lasers not alternatively.

      Thank you. Indeed, we meant alternating lasers. We are referring to [1] (the protocol we followed) for information on the imaging buffer.

      (1) Beggs, R.R., Dean, W.F., Mattheyses, A.L. (2020). dSTORM Imaging and Analysis of Desmosome Architecture. In: Turksen, K. (eds) Permeability Barrier. Methods in Molecular Biology, vol 2367. Humana, New York, NY. https://doi.org/10.1007/7651_2020_325

      f) In general, a read-through to determine whether there is enough detail for others to replicate is required. 4% PFA in what? Do you mean PBS or should it be Dulbecco's PBS etc., etc.?

      We prepared a 4% PFA in PBS solution. We mean Dulbecco's PBS.

      (2) There are no controls shown or described for the dSTORM. For example, non-specific primary antibody and second antibodies alone for non-specific sticking. Do the second antibodies cross-react with the other primary antibody? Is there only one band when blotting whole cell extracts with the GHR antibody so we can be sure of specificity?

      We used monoclonal antibodies Anti-GHR-ext-mAB 74.3 and Anti-PRLR-ext-mAB 1.48 (but also tested several other antibodies). While these antibodies have been previously tested and used, we performed additional control experiments to ensure specificity of our primary antibodies and absence of non-specific binding of our secondary antibodies. We have added some of our many control results as Supplementary Figures S2 and S3.

      (3) Writing/figures-

      a) As discussed in the public review regarding different forms of the PRLR and the presence of other Jak2-dependent signaling

      We have added paragraphs on PRLR isoforms and other JAK2-dependent signaling pathways to the Introduction. Also, we have added a paragraph on PRLR isoforms (in the context of our findings) to the Discussion section.

      b) What are the units for figure 3c and d?

      The figures show numbers of localizations (obtained from fluorophore blinking events). In the figure caption to 3C and 3D, we have specified the unit (i.e. counts).

      c) The wheat germ agglutinin stains more than the plasma membrane and so this sentence needs some adjustment.

      We thank the reviewer for this comment. We have rephrased this sentence (see caption to Fig. 4).

      d) It might be better not to use the term "downregulation" since this is usually associated with expression and not internalization.

      While we understand the reviewer’s discomfort with the use of the word “downregulation”, we still think that it best describes the observed effect. Moreover, we would like to note that in the field of receptorology “downregulation” is a specific term for trafficking of cell surface receptors in response to ligands. That said, to address the reviewer’s comment, we are now using the terms “cell surface downregulation” or “downregulation of cell surface [..] receptor” throughout the manuscript in order to explicitly distinguish it from gene downregulation.

      e) Line 420 talks about "previous work", a term that usually indicates work from the same lab. My apologies if I am wrong, but the reference doesn't seem to be associated with the authors.

      At the end of the sentence containing the phrase “previous work”, we are referring to reference [57], which has Dr. Stuart Frank as senior and corresponding author. Dr. Frank is also a co-corresponding author on this manuscript. While in our opinion, “previous work” does not imply some sort of ownership, we are happy to confirm that one of us was responsible for the work we are referencing.

      Reviewing Editor's recommendations:

      The reviewers have all provided a very constructive assessment of the work and offered many useful suggestions to improve the manuscript. I'd advise thinking carefully about how many of these can be reasonably addressed. Most will not require further experiments. I consider it essential to improve the methods to ensure others could repeat the work. This includes adding methods for the PLA and including detail about the controls for the dSTORM. The reviewers have offered suggestions about types of controls to include if these have not already been done.

      We thank the editor for their recommendations. We have revised the methods section, which now includes a paragraph on PLA as well as on CRISPR/Cas9-based generation of mutant cell lines. We have also added information on the dSTORM buffer to the manuscript. Data of controls indicating antibody specificity (using confocal microscopy) have been added to the manuscript’s supplementary material (see Fig. S2 and S3).

      I agree with the reviewers that the different isoforms of the prolactin receptor need to be considered. I think this could be done as an acknowledgment and point of discussion.

      We have revised the Discussion section and have added a paragraph on the different PRLR isoforms, among other changes.

      For Figure 2E, make it clear in the figure (or at least in legend) that the middle line is the basal condition.

      We thank the editor for their comment. We have made changes to Fig 2E and have added a sentence to the legend making it clear that the middle line depicts the basal condition.

      My biggest concern overall was the fact that this is all largely conducted in a single cell line. This was echoed by at least one of the reviewers. I wonder if you have replicated this in other breast cancer cell lines or mammary epithelial cells? I don't think this is necessary for the current manuscript but would increase confidence if available.

      We thank the editor for their comment and fully agree with their assessment. Unfortunately, we have not replicated these experiments in other BC cell lines nor mammary epithelial cells but would certainly want to do so in the near future.

    1. At the access row, the processing is often unconscious and automatic. At the executive function row, the processing is often conscious and intentional

      I like that it's intentional now instead of just going through the motions.

    1. States are not legally required to separate youth from adults in adult facilities.292 There are some non-mandatory standards like the Prison Rape Elimination Act ("PREA") standards that do require that youth be separated from adults but they are only implemented through a funding incentive. While the federal law for juvenile justice, i.e., the Juvenile Justice and Delinquency Prevention Act (JJDPA) as reauthorized in 2002, does establish the separation of youth from adults as one of its core custody-related requirements, its provisions do not apply to children and adolescents in the adult system.

      !!!!!!!!!!!!!!!!!! JJDPA DOES NOT COVER CHILDREN TRIED AS ADULTS IN THE ADULT SYSTEM!!!!!! NEED TO SAY THIS IMMEDIATELY!!!!! AND OTHER THAN THAT IT'S JUST FUNDING INCENTIVES THAT DON'T WORK!!!!!!!

    1. But the attempt to do without hope, in the struggle to improve the world, as if that struggle could be reduced to calculated acts alone, or a purely scientific approach, is a frivolous illusion

      I really agree with this idea. Without hope, change feels impossible. If we only focus on logic and planning without believing things can get better, we lose the energy to keep going. Hope is what makes people fight for a better future, even when things seem hard. It’s not just about facts or strategies, it’s about believing that change is worth the struggle. I think this is really important for education too. Teachers and students need hope to push through challenges and work toward something better.

    1. Thus, public audiences are not monolithic, but rather quite diverse, and so no single, “rationalistic” formula will suffice to define them or their interests

      Oftentimes, informative scientific media that actively involves academia, from press conferences to opinion pieces, doesn't consider different audiences beyond "not academia." Even within the categories of general public you still need to tailor your writing to convey its point and persuade the audience by meeting them where they are in their understanding, and not just scientifically, but also politically or morally. I understand why it isn't done often, it's hard. However, it is still important for getting the truth out there. I just feel like this is a point that isn't often addressed when talking about science writing.

    2. In 2006, Samson, a science communication expert, argued that despite the increasing prominence and importance of science via television and the Internet, most Americans still get the bulk of their news from print sources.

      Considering how much the state of public media has changed in just the last two decades, I wonder how this assessment has changed. Many still get their science news in writing, but the vast majority of it is still online, either through online versions of print media like newspapers, or through social media. I especially wonder how science communication can even be spread through social media. It's very common to see misinformation on these platforms, but spreading actual science information might be a viable strategy in the future (or even currently).

    1. Cosmic order is the balance and stability of the universe and the gods are the ones who have oversight over that or just the god of Israel with their divine counsel. Social order is the stability and the balance of society and it's usually human rulers as representatives of the gods who have control over that.

      Cosmic Order and Social Order

      One is maintained at the hands of the universe, or the gods that have oversight over it. The other is at the hands of human rulers as representatives of the gods.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Experiments in model organisms have revealed that the effects of genes on heritable traits are often mediated by environmental factors---so-called gene-by-environment (or GxE) interactions. In human genetics, however, where indirect statistical approaches must be taken to detect GxE, limited evidence has been found for pervasive GxE interactions. The present manuscript argues that the failure of statistical methods to detect GxE may be due to how GxE is modelled (or not modelled) by these methods.

      The authors show, via re-analysis of an existing dataset in Drosophila, that a polygenic ‘amplification’ model can parsimoniously explain patterns of differential genetic effects across environments. (Work from the same lab had previously shown that the amplification model is consistent with differential genetic effects across the sexes for several traits in humans.) The parsimony of the amplification model allows for powerful detection of GxE in scenarios in which it pertains, as the authors show via simulation.

      Before the authors consider polygenic models of GxE, however, they present a very clear analysis of a related question around GxE: When one wants to estimate the effect of an individual allele in a particular environment, when is it better to stratify one’s sample by environment (reducing sample size, and therefore increasing the variance of the estimator) versus using the entire sample (including individuals not in the environment of interest, and therefore biasing the estimator away from the true effect specific to the environment of interest)? Intuitively, the sample-size cost of stratification is worth paying if true allelic effects differ substantially between the environment of interest and other environments (i.e., GxE interactions are large), but not worth paying if effects are similar across environments. The authors quantify this trade-off in a way that is both mathematically precise and conveys the above intuition very clearly. They argue on its basis that, when allelic effects are small (as in highly polygenic traits), single-locus tests for GxE may be substantially underpowered.
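      The tradeoff described in this paragraph can be made concrete with a small Monte Carlo sketch (our own illustration, not code from the paper; all parameter values below are arbitrary choices): we compare the mean squared error of the environment-A-only estimator against the pooled estimator, under both small and large true GxE.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_stratified_vs_pooled(beta_a, beta_b, n_a, n_b, sigma=1.0, reps=1000):
    """Monte Carlo MSE of two estimators of the environment-A allelic effect:
    (1) regression in the A sample only (unbiased, higher variance), and
    (2) regression in the pooled A+B sample (biased toward the sample-size-
    weighted average effect, lower variance)."""
    err_strat, err_pool = [], []
    for _ in range(reps):
        g_a = rng.binomial(2, 0.5, n_a).astype(float)  # allele counts, freq 0.5
        g_b = rng.binomial(2, 0.5, n_b).astype(float)
        y_a = beta_a * g_a + rng.normal(0.0, sigma, n_a)
        y_b = beta_b * g_b + rng.normal(0.0, sigma, n_b)
        # stratified estimate: slope of y on genotype within environment A
        bh_strat = np.polyfit(g_a, y_a, 1)[0]
        # pooled estimate: slope of y on genotype in the combined sample
        g = np.concatenate([g_a, g_b])
        y = np.concatenate([y_a, y_b])
        bh_pool = np.polyfit(g, y, 1)[0]
        err_strat.append((bh_strat - beta_a) ** 2)
        err_pool.append((bh_pool - beta_a) ** 2)
    return np.mean(err_strat), np.mean(err_pool)

# Small true GxE: the pooled estimator wins despite its bias.
small = mse_stratified_vs_pooled(beta_a=0.05, beta_b=0.04, n_a=500, n_b=4500)
# Large true GxE: the bias dominates and stratifying wins.
large = mse_stratified_vs_pooled(beta_a=0.5, beta_b=-0.5, n_a=500, n_b=4500)
```

      With similar effects across environments, the pooled estimator's small bias is more than offset by its much lower sampling variance; with strongly differing effects, the bias dominates.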

      The paper is an important further demonstration of the plausibility of the amplification model of GxE, which, given its parsimony, holds substantial promise for the detection and characterization of GxE in genomic datasets. However, the empirical and simulation examples considered in the paper (and previous work from the same lab) are somewhat “best-case” scenarios for the amplification model, with only two environments, and with these environments amplifying equally the effects of only a single set of genes. It would be an important step forward to demonstrate the possibility of detecting amplification in more complex scenarios, with multiple environments each differentially modulating the effects of multiple sets of genes. This could be achieved via simulations similar to those presented in the current manuscript.

      Reviewer #2 (Public Review):

      Summary:

      Wine et al. describe a framework to view the estimation of gene-context interaction analysis through the lens of bias-variance tradeoff. They show that, depending on trait variance and context-specific effect sizes, effect estimates may be estimated more accurately in context-combined analysis rather than in context-specific analysis. They proceed by investigating, primarily via simulations, implications for the study or utilization of gene-context interaction, for testing and prediction, in traits with polygenic architecture. First, the authors describe an assessment of the identification of context-specificity (or context differences) focusing on “top hits” from association analyses. Next, they describe an assessment of polygenic scores (PGSs) that account for context-specific effect sizes, showing, in simulations, that often the PGSs that do not attempt to estimate context-specific effect sizes have superior prediction performance. An exception is a PGS approach that utilizes information across contexts.

      Strengths:

      The bias-variance tradeoff framing of GxE is useful, interesting, and rigorous. The PGS analysis under pervasive amplification is also interesting and demonstrates the bias-variance tradeoff.

      Weaknesses:

      The weakness of this paper is that the first part -- the bias-variance tradeoff analysis -- is not tightly connected to, i.e. not sufficiently informing, the later parts, that focus on polygenic architecture. For example, the analysis of “top hits” focuses on the question of testing, rather than estimation, and testing was not discussed within the bias-variance tradeoff framework. Similarly, while the PGS analysis does demonstrate (well) the bias-variance tradeoff, the reader is left to wonder whether a bias-variance deviation rule (discussed in the first part of the manuscript) should or could be utilized for PGS construction.

      We thank the editors and the reviewers for their thoughtful critique and helpful suggestions throughout. In our revision, we focused on tightening the relationship between the analytical single variant bias-variance tradeoff derivation and the various empirical analyses that follow.

      We improved discussion of our scope and what is beyond our scope. For example, our language was insufficiently clear if it suggested to the editor and reviewers that we are developing a method to characterize polygenic GxE. Developing a new method that does so (let alone evaluating performance across various scenarios) is beyond the scope of this manuscript.

      Similarly, we clarify that we use amplification only as an example of a mode of GxE that is not adequately characterized by current approaches. We do not wish to argue it is an omnibus explanation for all GxE in complex traits. In many cases, a mixture of polygenic GxE relationships seems most fitting (as observed, for example, in Zhu et al., 2023, for GxSex in human physiology).

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      MAJOR COMMENT

      The amplification model is based on an understanding of gene networks in which environmental variables concertedly alter the effects of clusters of genes, or modules, in the network (e.g., if an environmental variable alters the effect of some gene, it indirectly and proportionately alters the effects of genes downstream of that gene in the network---or upstream if the gene acts as a bottleneck in some pathway). It is clear in this model that (i) multiple environmental variables could amplify distinct modules, and (ii) a single environmental variable could itself amplify multiple separate modules, with a separate amplification factor for each module.

      However, perhaps inspired by their previous work on GxSex interactions in humans, the authors’ focus in the present manuscript is on cases where there are only two environments (“control” and “high-sugar diet” in the Drosophila dataset that they reanalyze, and “A” and “B” in their simulations [and single-locus mathematical analysis]), and they consider models where these environments amplify only a single set of genes, i.e., with a single amplification factor. While it is of course interesting that a single-amplification-factor model can generate data that resemble those in the Drosophila dataset that the authors re-analyze, most scenarios of amplification GxE will presumably be more complex. It seems that detecting amplification in these more complex scenarios using methods such as the authors do in their final section will be correspondingly more difficult. Indeed, in the limit of sufficiently many environmental variables amplifying sufficiently many modules, the scenario would resemble one of idiosyncratic single-locus GxE which, as the authors argue, is very difficult to detect. That more complex scenarios of amplification, with multiple environments separately amplifying multiple modules each, might be difficult to detect statistically is potentially an important limitation to the authors’ approach, and should be tested in their simulations.

      We agree that characterizing GxE when there is a mixture of drivers of context-dependency is difficult. Developing a method that does so across multiple (and perhaps not pre-defined) contexts is of high interest to us but beyond the scope of the current manuscript.

      We note that for GxSex, modeling this mixture does generally improve phenotypic prediction, and more so in traits where we infer amplification as a major mode of GxE.

      MINOR COMMENTS

      Lines 88-90: “This estimation model is equivalent to a linear model with a term for the interaction between context and reference allele count, in the sense that context-specific allelic effect estimators have the same distributions in the two models.”

      Does this equivalence require the model with the interaction term also to have an interaction term for the intercept, i.e., the slope on a binary variable for context (since the generative model in Eq. 1 allows for context-specific intercepts)?

      It does require an interaction term for the intercept. This is e_i (and its effect beta_E) in Eq. S2 (line 70 of the supplement).

      Lines 94-96: Perhaps just a language thing, but in what sense does the estimation model described in lines 92-94 “assume” a particular distribution of trait values in the combined sample? It’s just an OLS regression, and one can analyze its expected coefficients with reference to the generative model in Eq. 1, or any other model. To say that it “assumes” something presupposes its purpose, which is not clear from its description in lines 92-94.

      We corrected “assume” to “posit”.

      Lines 115-116: It should perhaps be noted that the weights wA and wB need not sum to 1.

      Indeed; it is now explicitly stated.

      Lines 154-160: I think the role of r could be made even clearer by also discussing why, when VA>>VB, it is better to use the whole-sample estimate of betaA than the sample-A-specific estimate (since this is a more counterintuitive case than the case of VA<<VB discussed by the authors).

      This is addressed in lines 153-154, stating: “Typically, this (VA<<VB) will also imply that the additive estimator is greatly preferable for estimating β_B, as β_B will be extremely noisy”

      Line 243 and Figure 4 caption: The text states that the simulated effects in the high-sugar environment are 1.1x greater than those in the control environment, while the caption states that they are 1.4x greater.

      We have corrected the text to be consistent with our simulations.

      TYPOS/WORDING

      Line 14: “harder to interpret” --> “harder-to-interpret”

      Line 22: We --> we

      Line 40: “as average effect” -> “as the average effect”?

      Line 57: “context specific” --> “context-specific”

      Line 139: “re-parmaterization” --> “re-parameterization”

      Lines 140, 158, 412: “signal to noise” --> “signal-to-noise”

      Figure 3C,D: “pule rate” --> “pulse rate”

      The caption of Figure 3: “conutinous” --> “continuous”

      Line 227: “a variant may fall” --> “a variant may fall into”

      Line 295: “conferring to more GxE” --> “conferring more GxE” or “corresponding to more GxE”?

      This is very pedantic, but I think “bias-variance” should be “bias--variance” throughout, i.e., with an en-dash rather than a hyphen.

      We have corrected all of the above typos.

      Reviewer #2 (Recommendations For The Authors):

      (This section repeats some of what I wrote earlier).

      - First polygenic architecture part: the manuscript focuses on “top hits” in trying to identify sets of variants that are context-specific. This “top hits” approach seems somewhat esoteric and, as written, not connected tightly enough to the bias-variance tradeoff issue. The first section of the paper, which focuses on the bias-variance tradeoff, mostly deals with estimation. The “top hits” section deals with testing, which introduces additional issues due to thresholding. Perhaps the authors can think of ways to make the connection stronger between the bias-variance tradeoff part and the “top hits” part, e.g., by introducing testing earlier on and/or discussing estimation in addition to testing in the “top hits” part of the manuscript.

      - Second polygenic architecture part: polygenic scores that account for interaction terms. Here the authors focused (well, also here) on pervasive amplification in simulations. This part combines estimation and testing (both the choice of variants and their estimated effects are important). Since in pervasive amplification the idea is that causal variants are shared, the results may differ from a model with context-specific effects, and variant selection may have a large impact. Still, I think that these simulations demonstrate the idea developed in the bias-variance tradeoff part of the paper, though the reader is left to wonder whether a bias-variance decision rule should or could be utilized for PGS construction.

      In both of these sections we discuss how the consideration of polygenic GxE patterns alters the conclusions based on the single-variant tradeoff. In the “top hits” section, we show that single-variant classification itself, based on a series of marginal hypothesis tests alone, can be misleading. The PGS prediction accuracy analysis shows that both approaches are beaten by the polygenic GxE estimation approach. Intuitively, this is because the consideration of polygenic GxE can mitigate both the bias and variance, as it leverages signals from many variants.

      We agree that the links between these sections of the paper were not sufficiently clear, and have added signposting to help clarify them (lines 176-180; lines 275-277; lines 316-321).

      - Simulation of GxDiet effects on longevity: the methods of the simulation are strange, or communicated unclearly. The authors’ report (page 17) poses a joint distribution of genetic effects (line 439), but then, they simulated effect estimates standard errors by sampling from summary statistics (line 445) rather than simulated data and then estimating effect and effect SE. Why pose a true underlying multivariate distribution if it isn’t used?

      We rewrote the Methods section “Simulation of GxDiet effects on longevity in Drosophila” to make our simulation approach clearer (lines 427-449). We are indeed simulating the true effects from the joint distribution proposed. However, in order to mimic the noisiness of the experiment in our simulations, we sample estimated effects from the true simulated effects, with estimation noise corresponding to that estimated in the Pallares et al. dataset (i.e., sampling estimation variances from the squares of empirical SEs).
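      A minimal sketch of this two-step scheme (our illustration only — the effect-size scale, the amplification factor, and the stand-in SE distribution below are assumptions, not the values used in the paper): true effects are drawn from a joint distribution in which high-sugar effects are amplified copies of control effects, and noisy effect estimates are then sampled around them.

```python
import numpy as np

rng = np.random.default_rng(1)

n_variants = 1000
amplification = 1.1  # illustrative amplification factor

# Step 1: draw true per-variant effects. Under pure amplification, the
# high-sugar effect is a scaled copy of the control effect (perfectly
# correlated, larger magnitude).
beta_ctrl = rng.normal(0.0, 0.1, n_variants)
beta_hs = amplification * beta_ctrl

# Step 2: mimic experimental noise by sampling effect *estimates* around the
# true effects, with per-variant standard errors drawn from a stand-in
# log-normal pool (in the paper, SEs come from the empirical dataset).
se_pool = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n_variants)
beta_ctrl_hat = rng.normal(beta_ctrl, se_pool)
beta_hs_hat = rng.normal(beta_hs, se_pool)
```

      The estimated effects then serve as the input to any downstream classification or prediction analysis, so that estimation noise propagates exactly as it would in the real experiment.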

      - How were the “most significantly associated variants” selected into the PGS in the polygenic prediction part? Based on a context-specific test? A combined-context test of effect size estimates?

      For the “Additive” and “Additive ascertainment, GxE estimation” models (red and orange in Fig. 5, respectively), we ascertain the combined-context set. For the “GxE” and “polygenic GxE” (green and blue in Fig. 5, respectively) models, we ascertain in a context-specific test. We now state this explicitly in lines 280-288 and lines 507-526.

      - As stated, I find the conclusion statement not specific enough in light of the rest of the manuscript. “the consideration of polygenic GxE trends is key” - this is very vague. What does it mean “to consider polygenic GxE trends” in the context of this paper? I can’t tell. “The notion that complex trait analyses should combine observations at top associated loci” - I don’t think the authors really refer to combining “observations”, rather perhaps combine information from top associated loci. But this does not represent the “top hits” approach that merely counts loci by their testing patterns. “It may be a similarly important missing piece...” What does “it” refer to? The top loci? What makes it an important missing piece?

      We rewrote the conclusion paragraph to address these concerns (lines 316-321).

    1. st War Theory point of view, the opponents argue that it is very dangerous to allow the gravity of the just cause to determine the outcome of the Legitimate Authority analysis. It not only implies the end of Legitimate Authority as an independent principle, it also puts us on a slippery slope heading to the hell of a holy war ... The international security system provided for by the United Nations and international law could be profoundly destabilized if the legal restraints on the use of force are loosened by an appeal to vague and general moral principles. Thus, for some NATO members such as Belgium there was reluctance to treat the Kosovo war as a precedent for other forms of intervention

      So it's bad because of the harm it would cause? A moral principle?

    1. We hope that by the end of this book, you have a familiarity with applying different ethics frameworks, and considering the ethical tradeoffs of uses of social media and the design of social media systems. Again, our goal has been not necessarily to come to the “right” answer, but to ask good questions and better understand the tradeoffs, unexpected side-effects, etc.

      I find this sentence particularly striking because it encapsulates the dual goals of knowledge and critical insight in understanding social media. It’s not just about memorizing terms but also about recognizing how these concepts influence both user behavior and platform design. In my experience, learning this vocabulary has empowered me to see the hidden mechanisms behind viral content and algorithmic curation, prompting me to ask deeper questions about who benefits from these design choices. I’m curious whether future revisions of the course might include more hands-on projects to test these concepts in real-world scenarios.

    1. ALL FOUR CONCEPTS OF WHY YOU SHOULD FOLLOW THE LAW:

       1. gratitude = your country & law was the source of great benefits for you, so you should at least obey the law; but again, you could argue that you can be grateful to many people, but it doesn't mean you have to obey everything they say

       2. promise-keeping: citizens promise to obey the law in exchange for protection & other benefits (kind of a "social contract" like in Rawls' theory)

       3. fairness: different from promise-keeping, because it's extended to all citizens, as a moral ground to everyone, not just to those who choose to participate in politics. SO, you should obey the law, because it would be unfair not to; you owe your fellow citizens: "if they all comply and you benefit, it is unfair if you benefit without complying"

       4. public good = if people break the law, the welfare of society is diminished, thus we're all morally obliged to obey

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In this detailed study, Cohen and Ben-Shaul characterized the AOB cell responses to various conspecific urine samples in female mice across the estrous cycle. The authors found that AOB cell responses vary with the strains and sexes of the samples. Between estrous and non-estrous females, no clear or consistent difference in responses was found. The cell response patterns, as measured by the distance between pairs of stimuli, are largely stable. When some changes do occur, they are not consistent across strains or male status. The authors concluded that AOB detects the signals without interpreting them. Overall, this study will provide useful information for scientists in the field of olfaction.

      Strengths:

      The study uses electrophysiological recording to characterize the responses of AOB cells to various urines in female mice. AOB recording is not trivial as it requires activation of the VNO pump. The team uses a unique preparation to activate the VNO pump with electric stimulation, allowing them to record AOB cell responses to urines in anesthetized animals. The study comprehensively described the AOB cell responses to social stimuli and how the responses vary (or not) with features of the urine source and the reproductive state of the recording females. The dataset could be a valuable resource for scientists in the field of olfaction.

      Weaknesses:

      (1) The figures could be better labeled.

      Figures will be revised to provide more detailed labeling.

      (2) For Figure 2E, please plot the error bar. Are there any statistics performed to compare the mean responses?

      We did not perform statistical comparisons (between the mean rates across the population). We will add this analysis and the corresponding error bars. 

      (3) For Figure 2D, it will be more informative to plot the percentage of responsive units.

      We will do it.

      (4) Could the similarity in response be explained by the similarity in urine composition? The study will be significantly strengthened by understanding the "distance" of chemical composition in different urine.

      We agree. As we wrote in the Discussion: “Ultimately, lacking knowledge of the chemical space associated with each of the stimuli, this and all the other ideas developed here remain speculative.”

      A better understanding of the chemical distance is an important aspect that we aim to include in our future studies. However, this is far from trivial, as it is not chemical distance per se (which in itself is hard to define), but rather the “projection” of chemical space on the vomeronasal receptor neurons array. That is, knowledge of the chemical composition of the stimuli, lacking full knowledge of which molecules are vomeronasal system ligands, will only provide a partial picture. Despite these limitations, this is an important analysis which we would have done had we access to this data.

      (5) If it is not possible for the authors to obtain these data first-hand, published data on MUPs and chemicals found in these urines may provide some clues.

      Measurements about some classes of molecules may be found for some of the stimuli that we used here, but not for all. We are not aware of any single dataset that contains this information for any type of molecules (e.g., MUPs) across the entire stimulus set that we have used. More generally, pooling results from different studies has limited validity because of the biological and technical variability across studies. In order to reliably interpret our current recordings, it would be necessary to measure the urinary content of the very same samples that were used for stimulation. Unfortunately, we are not able to conduct this analysis at this stage.

      (6) It is not very clear to me whether the female overrepresentation is because there are truly more AOB cells that respond to females than males or because there are only two female samples but 9 male samples.

      It is true that the number of neurons fulfilling each of the patterns depends on the number of individual stimuli that define it. However, our measure of “over-representation” aims to overcome this bias by using bootstrapping to reveal whether the observed number of patterns is larger than expected by chance. We also note that, more generally, a higher frequency of responses to female, as compared to male, stimuli has been observed in other studies by us and by others, including when the number of male and female stimuli is matched (e.g., Bansal et al., BMC Biol 2021; Ben-Shaul et al., PNAS 2010; Hendrickson et al., J Neurosci 2008).
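The bootstrap logic described above can be sketched as follows (a minimal illustration, assuming a units × stimuli response matrix and a “female” pattern defined as every female response exceeding every male response; the variable names and toy data are hypothetical, not the study’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def count_pattern(resp, female_idx, male_idx):
    """Count units whose weakest female response exceeds their strongest male response."""
    return int(np.sum(resp[:, female_idx].min(axis=1) > resp[:, male_idx].max(axis=1)))

def overrepresentation_p(resp, female_idx, male_idx, n_boot=1000):
    """One-sided bootstrap p-value: is the observed pattern count larger than
    expected when stimulus labels are shuffled independently for each unit?"""
    observed = count_pattern(resp, female_idx, male_idx)
    null_counts = np.empty(n_boot)
    for b in range(n_boot):
        shuffled = np.array([rng.permutation(row) for row in resp])
        null_counts[b] = count_pattern(shuffled, female_idx, male_idx)
    p = (np.sum(null_counts >= observed) + 1) / (n_boot + 1)
    return observed, p

# Toy data: 50 units, 2 female stimuli (columns 0-1), 9 male stimuli (columns 2-10)
resp = rng.normal(size=(50, 11))
obs, p = overrepresentation_p(resp, female_idx=[0, 1], male_idx=list(range(2, 11)))
```

Because the null distribution is built from label shuffles within each unit, the test corrects for the different numbers of stimuli defining each pattern.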

      (7) If the authors only select two male samples, let's say ICR Naïve and ICR DOM, combine them with responses to two female samples, and do the same analysis as in Figure 3, will the female response still be overrepresented?

      We believe that the answer is positive, but we can, and will, perform this analysis to check.

      (8) In Figure 4B and 4C, the pairwise distance during non-estrus is generally higher than that during estrus, although they are highly correlated. Does it mean that the cells respond to different urines more distinctively during diestrus than in estrus?

      This is an important observation. For the Euclidean distance there might be a simple explanation, as the distance depends on the number of units (and there are more units recorded in non-estrus females). However, this simple explanation does not hold for the correlation distance. A higher distance implies higher discrimination during the non-estrus stage, but our other analyses of sparseness and the selectivity indices do not support this idea. We note that absolute values of distance measures should generally be interpreted cautiously, as they may depend on multiple factors, including sample size. Also, a small number of non-selective units could increase the correlation in responses among stimuli, and thus globally shift the distances. For these reasons, we focus on comparisons, rather than on the absolute values of the correlation distances. In the revised manuscript, we will note and discuss this important observation.
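The point that a few non-selective units can globally shift correlation distances can be illustrated with a small sketch (purely illustrative; the response matrix, unit counts, and scaling are made up):

```python
import numpy as np

def correlation_distance_matrix(resp):
    """Pairwise correlation distance (1 - Pearson r) between stimuli,
    computed across the population: resp is a (units x stimuli) matrix."""
    return 1.0 - np.corrcoef(resp.T)

rng = np.random.default_rng(1)
resp = rng.normal(size=(100, 11))          # 100 selective units, 11 stimuli
D = correlation_distance_matrix(resp)

# Appending units that respond equally to all stimuli adds a shared
# component to every stimulus vector, raising correlations and lowering
# all pairwise distances at once.
nonselective = 2.0 * rng.normal(size=(20, 1)) * np.ones((20, 11))
D_mixed = correlation_distance_matrix(np.vstack([resp, nonselective]))
off = ~np.eye(11, dtype=bool)              # off-diagonal mask
```

Here the mean off-diagonal distance of `D_mixed` drops below that of `D`, even though the selective units are unchanged, which is why comparisons rather than absolute values are the safer readout.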

      (9) The correlation analysis is not entirely intuitive when just looking at the figures. Some sample heatmaps showing the response differences between estrous states will be helpful.

      If we understand correctly, the idea is to show the correlation matrices from which the values in 4B and 4C are taken. We can and will do this, probably as a supplementary figure.

      Reviewer #2 (Public review):

      Summary:

      Many aspects of the study are carefully done, and in the grand scheme this is a solid contribution. I have no "big-picture" concerns about the approach or methodology. However, in numerous places the manuscript is unnecessarily vague, ambiguous, or confusing. Tightening up the presentation will magnify their impact.

      We will revise the text with the aim of tightening the presentation.

      Strengths:

      (1) The study includes urine donors from males of three strains each with three social states, as well as females in two states. This diversity significantly enhances their ability to interpret their results.

      (2) Several distinct analyses are used to explore the question of whether AOB MCs are biased towards specific states or different between estrus and non-estrus females. The results of these different analyses are self-reinforcing about the main conclusions of the study.

      (3) The presentation maintains a neutral perspective throughout while touching on topics of widespread interest.

      Weaknesses:

      (1) Introduction:

      The discussion of the role of the VNS and preferences for different male stimuli should perhaps include Wysocki and Lepri 1991

      Agreed. We will refer to this work in our discussion.

      (2) Results:

      a) Given the 20s gap between them, the distinction between sample application and sympathetic nerve trunk stimulation needs to be made crystal clear; in many places, "stimulus application" is used in places where this reviewer suspects they actually mean sympathetic nerve trunk stimulation.

      In this study, we have considered both responses that are triggered by sympathetic trunk activation, and those that occur (as happens in some preparations) immediately following stimulus application (and prior to nerve trunk stimulation). An example of the latter is provided in the second unit shown in Figure 1D (and this is indicated also in the figure legend). In our revision, we will further clarify this confusing point.

      b) There appears to be a mismatch between the discussion of Figure 3 and its contents. Specifically, there is an example of an "adjusted" pattern in 3A, not 3B.

      True. Thanks for catching this error. We will correct this.

      c) The discussion of patterns neglects to mention whether it's possible for a neuron to belong to more than one pattern. For example, it would seem possible for a neuron to simultaneously fit the "ICR pattern" and the "dominant adjusted pattern" if, e.g., all ICR responses are stronger than all others, but if simultaneously within each strain the dominant male causes the largest response.

      This is true. In the legend to Figure 3B, we actually write: “A neuron may fulfill more than one pattern and thus may appear in more than one row.” We will discuss this point in the main text as well.
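The overlap scenario the reviewer describes can be checked with a toy example (the stimulus layout, index conventions, and response values below are invented for illustration and do not reflect the study’s actual data):

```python
import numpy as np

# Hypothetical layout: one neuron's responses to 9 male stimuli,
# 3 strains x 3 states; index 1 within each triplet is the dominant male.
strains = {"ICR": [0, 1, 2], "C57": [3, 4, 5], "BALB": [6, 7, 8]}
DOM = 1

def fits_strain_pattern(r, strain_idx, other_idx):
    """All responses to one strain exceed all responses to the others."""
    return bool(r[strain_idx].min() > r[other_idx].max())

def fits_dominant_adjusted(r):
    """Within every strain, the dominant male evokes the largest response."""
    return all(r[idx[DOM]] == max(r[idx]) for idx in strains.values())

r = np.array([5.0, 9.0, 4.0, 1.0, 3.0, 0.5, 1.5, 2.0, 1.0])
icr = strains["ICR"]
others = [i for i in range(9) if i not in icr]
both = fits_strain_pattern(r, icr, others) and fits_dominant_adjusted(r)
```

This neuron satisfies both the “ICR pattern” and the “dominant adjusted pattern” simultaneously, exactly as the reviewer suggests is possible.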

      (3) Discussion:

      a) The discussion of chemical specificity in urine focuses on volatiles and MUPs (citation #47), but many important molecules for the VNS are small, nonvolatile ligands. For such molecules, the corresponding study is Fu et al 2015.

      We fully agree. We will expand our discussion and refer to Fu et al.

      b) "Following our line of reasoning, this scarcity may represent an optimal allocation of resources to separate dominant from naïve males": 1 unit out of 215 is roughly consistent with a single receptor. Surely little would be lost if there could be more computational capacity devoted to this important axis than that? It seems more likely that dominance is computed from multiple neuronal types with mixed encoding.

      We agree, and we are not claiming that dominance, or any other feature, is derived using dedicated feature-selective neurons. Our discussion of resource allocation is inevitably speculative. Our main point in this context is that a lack of overrepresentation does not imply that a feature is not important. We will revise our discussion to better clarify our view of this issue.

      (4) Methods:

      a) Male status, "were unambiguous in most cases": is it possible to put numerical estimates on this? 55% and 99% are both "most," yet they differ substantially in interpretive uncertainty.

      This sentence is actually misleading and irrelevant. Ambiguous cases were not considered dominant for urine collection. We only classified mice as dominant if they “won” the tube test and exhibited dominant behavior in the subsequent observation period in the cage. We will correct the wording in the revised manuscript.

      b) Surgical procedures and electrode positioning: important details of probes are missing (electrode recording area, spacing, etc).

      True. We will add these details.

      c) Stimulus presentation procedure: Are stimuli manually pipetted or delivered by apparatus with precise timing?

      They are delivered manually. We will clarify this as well.

      d) Data analysis, "we applied more permissive criteria involving response magnitude": it's not clear whether this is what's spelled out in the next paragraph, or whether that's left unspecified. In either case, the next paragraph appears to be about establishing a noise floor on pattern membership, not a "permissive criterion."

      True, the next paragraph is not the explanation for the more permissive criteria. The more permissive criteria involving response magnitude are actually those described in Figure 3A and 3B. The sentence that was quoted above merely states that before applying those criteria, we had also searched for patterns defined by binary designation of neurons as responsive, or not responsive, to each of the stimuli (this is directly related to the next comment below). Using those binary definitions, we obtained a very small number of neurons for each pattern and thus decided to apply the approach actually used and described in the manuscript.

      e) Data analysis, method for assessing significance: there's a lot to like about the use of pooling to estimate the baseline and the use of an ANOVA-like test to assess unit responsiveness.

      But:

      i) for a specific stimulus, at 4 trials (the minimum specified in "Stimulus presentation procedure") kruskalwallis is questionable. They state that most trials use 5, however, and that should be okay.

      The number of cases with 4 trials is truly a minority, and we will provide the exact numbers in our revision.

      ii) the methods statement suggests they are running kruskalwallis individually for each neuron/stimulus, rather than once per neuron across all stimuli. With 11 stimuli, there is a substantial chance of a false-positive if they used p < 0.05 to assess significance. (The actual threshold was unstated.) Were there any multiple comparison corrections performed? Or did they run kruskalwallis on the neuron, and then if significant assess individual stimuli? (Which is a form of multiple-comparisons correction.)

      First, we indeed failed to mention that our criterion was 0.05. We will correct that in our revision. We did not apply any multiple comparison corrections. We consider each neuron-stimulus pair as an independent entity, and we are aware that this leads to a higher false positive rate. On the other hand, applying such corrections would be problematic, as we do not always use the same number of stimuli in different studies, and doing so would therefore lead to different response criteria across studies. Notably, most, if not all, of our conclusions involve comparisons across conditions, and for this purpose we think that our procedure is valid. We do not attach any special meaning to the significance threshold, but rather think of it as a basic criterion that allows us to exclude non-responsive neurons and to compare the frequencies of neurons that fulfill it.
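The per-pair procedure described above can be sketched as follows (a sketch only; SciPy’s `kruskal` stands in for MATLAB’s `kruskalwallis`, and the spike counts are simulated rather than taken from the recordings):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
ALPHA = 0.05  # the criterion stated above; no multiple-comparison correction

def responsive(baseline_counts, trial_counts, alpha=ALPHA):
    """Kruskal-Wallis test for one neuron-stimulus pair: pooled baseline
    spike counts versus the evoked counts of that stimulus's trials."""
    _, p = kruskal(baseline_counts, trial_counts)
    return p < alpha, p

baseline = rng.poisson(2, size=40)   # pooled baseline periods
evoked = rng.poisson(10, size=5)     # 5 trials of one stimulus
is_resp, p = responsive(baseline, evoked)
```

Running this once per neuron-stimulus pair at alpha = 0.05 yields the uncorrected false-positive rate the reviewer notes; the authors’ point is that this rate is constant across the conditions being compared.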

    2. Author response:

      Reviewer #1 (Public review):

      Summary:

      This work considers the biases introduced into pathogen surveillance due to congregation effects, and also models homophily and variants/clades. The results are primarily quantitative assessments of this bias but some qualitative insights are gained e.g. that initial variant transmission tends to be biased upwards due to this effect, which is closely related to classical founder effects.

      Strengths:

      The model considered involves a simplification of the process of congregation using multinomial sampling that allows for a simpler and more easily interpretable analysis.

      Weaknesses:

      This simplification removes some realism, for example, detailed temporal transmission dynamics of congregations.

      We appreciate Reviewer #1's comments. We hope our framework, like the classic SIR model, can be adapted in the future to build more complex and realistic models.

      Reviewer #2 (Public review):

      Summary:

      In "Founder effects arising from gathering dynamics systematically bias emerging pathogen surveillance" Bradford and Hang present an extension to the SIR model to account for the role of larger than pairwise interactions in infectious disease dynamics. They explore the impact of accounting for group interactions on the progression of infection through the various sub-populations that make up the population as a whole. Further, they explore the extent to which interaction heterogeneity can bias epidemiological inference from surveillance data in the form of IFR and variant growth rate dynamics. This work advances the theoretical formulation of the SIR model and may allow for more realistic modeling of infectious disease outbreaks in the future.

      Strengths:

      (1) This work addresses an important limitation of standard SIR models. While this limitation has been addressed previously in the form of network-based models, those are, as the authors argue, difficult to parameterize to real-world scenarios. Further, this work highlights critical biases that may appear in real-world epidemiological surveillance data. Particularly, over-estimation of variant growth rates shortly after emergence has led to a number of "false alarms" about new variants over the past five years (although also to some true alarms).

      (2) While the results presented here generally confirm my intuitions on this topic, I think it is really useful for the field to have it presented in such a clear manner with a corresponding mathematical framework. This will be a helpful piece of work to point to to temper concerns about rapid increases in the frequency of rare variants.

      (3) The authors provide a succinct derivation of their model that helps the reader understand how they arrived at their formulation starting from the standard SIR model.

      (4) The visualizations throughout are generally easy to interpret and communicate the key points of the authors' work.

      (5) I thank the authors for providing detailed code to reproduce manuscript figures in the associated GitHub repo.

      Weaknesses:

      (1) The authors argue that network-based SIR models are difficult to parameterize (line 66), however, the model presented here also has a key parameter, namely P_n, or the distribution of risk groups in the population. I think it is important to explore the extent to which this parameter can be inferred from real-world data to assess whether this model is, in practice, any easier to parameterize.

      (2) The authors explore only up to four different risk groups, accounting for only four-wise interactions. But, clearly, in real-world settings, there can be much larger gatherings that promote transmission. What was the justification for setting such a low limit on the maximum group size? I presume it's due to computational efficiency, which is understandable, but it should be discussed as a limitation.

      (3) Another key limitation that isn't addressed by the authors is that there may be population structure beyond just risk heterogeneity. For example, there may be two separate (or, weakly connected) high-risk sub-groups. This will introduce temporal correlation in interactions that are not (and cannot easily be) captured in this model. My instinct is that this would dampen the difference between risk groups shown in Figure 2A. While I appreciate the authors' desire to keep their model relatively simple, I think this limitation should be explicitly discussed as it is, in my opinion, relatively significant.

      We appreciate Reviewer 2's thoughtful comments and wish to address some of the weaknesses:

      We agree that inferring P_n from real data will be challenging, but think this is an important direction for future research. Further, we’d like to reframe our claim that our approach is "easier to parameterize" than network models. Rather, P_n has fewer degrees of freedom than analogous network models, just as many different networks can share the same degree distribution. Fewer degrees of freedom mean that we expect our model to suffer from fewer identifiability issues when fitting to data, though non-identifiability is often inescapable in models of this nature (e.g., \beta and \gamma in the SIR model are not uniquely identifiable during exponential growth). Whether this is more or less accurate is another question. Classic bias-variance tradeoffs argue that a model of moderate complexity trained on one data set can better fit future data than an overly simple or overly complex one.

      We chose four risk groups for purposes of illustration, but this number can be increased arbitrarily. It should be noted that the simulation bottleneck when increasing the number of risk groups is numerical, due to the stiffness of the ODEs. This arises because the nonlinearity of the infection terms scales with the number of risk groups (e.g., ~ \beta * S * I^3 for 4 risk groups). As such, a careful choice of numerical solvers may be required when integrating the ODEs. Meanwhile, this is not an issue for stochastic, individual-based implementations (e.g., Gillespie). As for how well this captures super-spreading, we believe choosing smaller risk groups does not hinder modeling disease spread at large gatherings. Consider a statistical interpretation, where individuals at a large gathering engage in a series of smaller interactions over time (e.g., 2/3/4/etc. person conversations). The key determinants of the resulting gathering size distribution at any one large gathering are the number of individuals within some shared proximity over time and the infectiousness/dispersal of the pathogen. Of course, whether this interpretation is a sufficient approximation for classic super-spreading events (e.g., funerals during the 2014-2015 West Africa Ebola outbreak) is a matter of debate. Our framework is best interpreted at a population level, where the effects of any single gathering are washed out by the overall gathering distribution, P_n. As the prior weakness highlighted, establishing P_n is challenging, but we believe empirically measuring proxies of it may provide future insight into how behavior impacts disease spread. For example, prior work has combined contact tracing and co-location data from connection to WiFi networks to estimate the distribution of contacts per individual, and its degree of overdispersion (Petros et al. Med 2022).
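The stiffness point can be illustrated with a minimal sketch (not the paper’s actual equations; the rate constants are invented). With infection terms up to four-wise interactions, the force of infection contains an I^3 term, and an implicit solver such as Radau handles the resulting stiffness:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy fractions-based SIR with pairwise through 4-wise infection terms;
# rate constants are arbitrary illustrative values.
b2, b3, b4, gamma = 0.3, 2.0, 8.0, 0.1

def rhs(t, y):
    S, I, R = y
    force = S * (b2 * I + b3 * I**2 + b4 * I**3)  # nonlinearity grows as I^3
    return [-force, force - gamma * I, gamma * I]

# Radau is an implicit method suited to stiff systems; an explicit method
# like RK45 may need far smaller steps as higher-order terms are added.
sol = solve_ivp(rhs, (0, 200), [0.999, 0.001, 0.0], method="Radau",
                rtol=1e-8, atol=1e-10)
```

Since the right-hand side sums to zero, S + I + R stays conserved up to solver tolerance, a quick sanity check on the integration.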

      We chose to introduce our framework in a simple SIR context familiar to many readers. This decision does not in any way limit applying it to settings with more population structure. Rather, we believe our framework is easily adaptable and that our presentation (hopefully) makes it clear how to do this. For example, two weakly connected groups could be easily achieved by (for each gathering) first sampling the preferred group and then sampling from the population in a biased manner. The biased sampling could even be a function of gathering sizes, time, etc. The resulting infection terms are still (sums of) multinomials. More generally, the sampling probabilities for an individual of some type need not be its frequency (e.g., S/N, I/N). Indeed, we believe generating models with complex social interactions is both simplified and made more robust by focusing on modeling the generative process of attending gatherings.
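The biased-sampling extension sketched in the paragraph above could look something like this (purely illustrative; the subgroup sizes, bias parameter, and function names are assumptions, not part of the model in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_gathering(pop_group, size, within_bias=0.9):
    """Draw one gathering: pick a preferred subgroup for the gathering,
    then draw each attendee mostly from that subgroup."""
    host = int(rng.integers(2))                    # preferred subgroup (0 or 1)
    members = np.where(pop_group == host)[0]
    outsiders = np.where(pop_group != host)[0]
    attendees = np.array([
        rng.choice(members if rng.random() < within_bias else outsiders)
        for _ in range(size)
    ])
    return host, attendees

pop_group = np.repeat([0, 1], 500)                 # two weakly connected subgroups
host, att = sample_gathering(pop_group, size=4)
frac_within = float(np.mean(pop_group[att] == host))
```

Because attendees are still drawn independently given the host group, the resulting infection terms remain sums of multinomials, which is the robustness property the response emphasizes.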

As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate. We hope with this you can be a more informed user of social media, better able to participate, protect yourself, and make it a valuable experience for you and others you interact with. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      I find it really important to understand how algorithms and engagement-driven strategies shape our social media experience. The chapter made me reflect on how many times I've mindlessly scrolled through my feed, only to be served sensationalized content that seems to be designed just to provoke an emotional reaction, whether it's anger or excitement. It’s unsettling to realize that much of the content we interact with online is engineered to generate reactions, sometimes to the detriment of our mental health.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews:  

      Reviewer # 1 (Public Review): 

      Summary:

      The authors use an innovative behavior assay (chamber preference test) and standard calcium imaging experiments on cultured dorsal root ganglion (DRG) neurons to evaluate the consequences of global knockout of TRPV1 and TRPM2, and overexpression of TRPV1, on warmth detection. They find a profound effect of TRPM2 elimination in the behavioral assay, whereas the elimination of TRPV1 has the largest effect on the neuronal responses. These findings are very important, as there is substantial ongoing discussion in the field regarding the contribution of TRP channels to different aspects of thermosensation.

      Strengths:

      The chamber preference test is an important innovation compared to the standard two-plate test, as it depends on thermal information sampled from the entire skin, as opposed to only the plantar side of the paws. With this assay, and the detailed analysis, the authors provide strong supporting evidence for a role of TRPM2 in warmth avoidance. The conceptual framework using the Drift Diffusion Model provides a first glimpse of how this decision of a mouse to change between temperatures can be interpreted and may form the basis for further analysis of thermosensory behavior.

      Weaknesses:

      The authors juxtapose these behavioral data with calcium imaging data using isolated DRG neurons. As the authors acknowledge, it remains unclear whether the clear behavioral effect seen in the TRPM2 knockout animals is directly related to TRPM2 functioning as a warmth sensor in sensory neurons. The effects of the TRPM2 KO on the proportion of warmth sensing neurons are very subtle, and TRPM2 may also play a role in the behavioral assay through its expression in thermoregulatory processes in the brain. Future behavioral experiments on sensory-neuron specific TRPM2 knockout animals will be required to clarify this important point.

      Reviewer # 1 (Recommendations for the authors):

      (1) I have no further suggestions for the authors, and congratulate them with their excellent study.

      For the authors' information, ref. 42 does contain behavioral data from both male (Fig. 4 and Extended Figure 7) and female (Extended Figure 8) mice.

      We thank the referee for pointing out that both male and female mice were tested in the Vandewauw et al. 2018 study. We deliberated whether to include this in the appropriate section of our manuscript (“Limitations of the Study”). But since Vandewauw et al. assessed noxious heat temperatures and we here assess innocuous warmth temperatures, we felt that this reference would not add to the clarification of whether there are sex differences in TRP channel-based warmth temperature sensing. In particular, we did not want to “use” the argument and to suggest that there are no sex differences in the warmth range just because Vandewauw et al. did not observe major sex differences in the noxious temperature range. 

      Reviewer #3 (Public Review):  

      Summary and strengths:

      In the manuscript, Abd El Hay et al investigate the role of thermally sensitive ion channels TRPM2 and TRPV1 in warm preference and their dynamic response features to thermal stimulation. They develop a novel thermal preference task, where both the floor and air temperature are controlled, and conclude that mice likely integrate floor with air temperature to form a thermal preference. They go on to use knockout mice and show that TRPM2-/- mice play a role in the avoidance of warmer temperatures. Using a new approach for culturing DRG neurons they show the involvement of both channels in warm responsiveness and dynamics. This is an interesting study with novel methods that generate important new information on the different roles of TRPV1 and TRPM2 on thermal behavior.

      Comments on revisions:

      Thanks to the authors for addressing all the points raised. They now include more details about the classifier, better place their work in context of the literature, corrected the FOVs, and explained the model a bit further. The new analysis in Figure 2 has thrown up some surprising results about cellular responses that seem to reduce the connection between the cellular and behavioral data and there are a few things to address because of this:

      (1) TRPM2 deficient responses: The differences in the proportion of TRPM2 deficient responders compared to WT are only observed at one amplitude (39C), and even at this amplitude the effect is subtle. Most surprisingly, TRPM2 deficient cells have an enhanced response to warm compared to WT mice to 33C, but the same response amplitude as WT at 36C and 39C. The authors discuss why this disconnect might be the case, but together with the lack of differences between WT and TRPM2 deficient mice in Fig 3, the data seem in good agreement with ref 7 that there is little effect of TRPM2 on DRG responses to warm in contrast to a larger effect of TRPV1. This doesn't take away from the fact there is a behavioral phenotype in the TRPM2 deficient mice, but the impact of TRPM2 on DRG cellular warm responses is weak and the authors should tone down or remove statements about the strength of TRPM2's impact throughout the manuscript, for example:

      "Trpv1 and Trpm2 knockouts have decreased proportions of WSNs."

      "this is the first cellular evidence for the involvement of TRPM2 on the response of DRG sensory neurons to warm-temperature stimuli"

      "we demonstrate that TRPV1 and TRPM2 channels contribute differently to temperature detection, supported by behavioural and cellular data"

      "TRPV1 and TRPM2 affect the abundance of WSNs, with TRPV1 mediating the rapid, dynamic response to warmth and TRPM2 affecting the population response of WSNs."

      "Lack of TRPV1 or TRPM2 led to a significant reduction in the proportion of WSNs, compared to wildtype cultures".

      We agree with the referee that the somewhat surprising subtle phenotype in the Trpm2 knock-out DRG culture experiments, which only became detectable in the course of the new analysis, was overemphasized in the previous version of the manuscript. As suggested, we have toned down or removed the statements in the revised manuscript (for the referee to find those changes easily, they are indicated in “track-changes mode” in the submitted document).  

      (2) The new analysis also shows that the removal of TRPV1 leads to cellular responses with smaller responses at low stimulus levels but larger responses with longer latencies at higher stimulus levels. Authors should discuss this further and how it fits with the behavioral data.

      Because the changes shown in Fig. 2E are also subtle (similar to the cellular Trpm2 phenotype discussed above), and because both the “% Responders” (Fig. 2D) and the AUC analysis (Fig. 2F) show a reduction in Trpv1 knock-out cultures, at both lower and higher stimulus levels, we did not want to overstate this difference and therefore did not further discuss this aspect in the context of the behavioral differences observed in the Trpv1 knock-out animals.  

      (3) Analysis clarification: authors state that TRPM2 deficient WSNs show "Their response to the second and third stimulus, however, are similar to wildtype WSNs, suggesting that tuning of the response magnitude to different warmth stimuli is degraded in Trpm2-/- animals." but is there a graded response in WT mice? It looks like there is in terms of the %responders but not in terms of response amplitude or AUC. Authors could show stats on the figure showing differences in response amplitude/AUC/responders% to different stimulus amplitudes within the WT group.

      We have added the statistics in the main text; you can find them on page 7 (also in “track changes mode”).

      (4) New discussion point: sex differences are "similar to what has been shown for an operant-based thermal choice assay (11,56)", but in their rebuttal, they mention that ref 11 did not report sex differences. 56 does. Check this.

      Thank you for pointing out this mishap. We have now corrected this in the “Limitations of the study” section of the discussion, removed the Paricio-Montesinos et al. study from that section, and slightly revised the text (see “track-changes” on page 16).

      (5) The authors added in new text about the drift diffusion model in the results, however it's still not completely clear whether the "noise" is due to a perceptual deficit or some other underlying cause. Perhaps authors could discuss this further in the discussion.

      We have now included more discussion concerning this (page 14):

      “However, the increased noise in the drift-diffusion model points to a less reliable temperature detection mechanism. Although noise in drift-diffusion models can encompass various sources of variability—ranging from peripheral sensory processing to central mechanisms like attention or motor initiation—the most parsimonious interpretation in our study aligns with a perceptual deficit, given the altered temperature-responsive neuronal populations we observed. This implies that, despite the substantial loss of WSNs, the remaining neuronal population provides sufficient information for the detection of warmer temperatures, albeit with reduced precision”

      Within the limits of the data that is available, we hope the referee agrees with us that we have now adequately discussed this aspect; we feel that any further discussion would be too speculative.

    1. One night in Curitiba, I went out with my translator, a documentary filmmaker, and the filmmaker’s “gonzo lawyer.” We stayed up late into the night eating and drinking at a Japanese izakaya and becoming immediate friends. “To the capybara!” we toasted, as round after round of banana cachaça cocktails and frothy chopes were deposited at our table and the edamame flowed. We had met that day, but some species of animal and human are just meant for companionship. When we were introduced, a part of me wondered, “Will they eat me?” And then I answered, “No, they are in the arts.” “Will they pet me?” I asked myself. “Yes,” I answered, “if I am as charming as a capybara.” And a few hours later our friendship was complete.

      it's meant to make fun of himself but genuinely I don't care how twee it is

    1. How does tension play out across the three acts? When you search online, it’s easy to find diagrams that show story structure. The following diagram is a synopsis of several of the most common structures.

      The classic three-act structure provides a helpful framework for understanding tension: the setup introduces conflict, the confrontation escalates it, and the resolution offers some form of release. However, I also believe that these diagrams can sometimes oversimplify the nuanced and often non-linear way that tension actually unfolds in a well-crafted narrative. For example, many stories interweave subplots, character arcs, and thematic elements that create multiple peaks and valleys in tension, not just a single climactic rise and fall.

    1. I wouldn't say it's meant to represent all Caucasians, but certainly they are meant to portray a certain demographic. In this case, a segment of white people that I personally believe the posters accurately reflect. The statistics are there that a lot of Trump supporters who wear the MAGA hats come from red states, from the south, and are Christian Evangelicals. Those are the concepts that I included on these posters. I didn't make that up. I'm just reflecting something that is a quantifiable fact.

      Again and again, Tseng shows his complete ignorance of everything cultural and political. Which is typical of NYC progressives. They project their own ignorance and low-information state onto everyone else. So much so that they create these delusional characterizations of others to preserve their own and their city's ego and self-importance.

      The people they critique are far more informed and in tune than they could ever be. Progressives, especially NYC progressives, latch on to whatever stance their corporate-owned media tells them to. It's only when they do research that they move to the right. There are endless examples of this shift in views once a progressive is exposed to wider facts.

    1. As policymakers around the world struggle to deal with the new coronavirus and its aftermath, they will have to confront the fact that the global economy doesn’t work as they thought it did. Globalization calls for an ever-increasing specialization of labor across countries, a model that creates extraordinary efficiencies but also extraordinary vulnerabilities. Shocks such as the COVID-19 pandemic reveal these vulnerabilities. Single-source providers, or regions of the world that specialize in one particular product, can create unexpected fragility in moments of crisis, causing supply chains to break down. In the coming months, many more of these vulnerabilities will be exposed. The result may be a shift in global politics. With the health and safety of their citizens at stake, countries may decide to block exports or seize critical supplies, even if doing so hurts their allies and neighbors. Such a retreat from globalization would make generosity an even more powerful tool of influence for states that can afford it. So far, the United States has not been a leader in the global response to the new coronavirus, and it has ceded at least some of that role to China. This pandemic is reshaping the geopolitics of globalization, but the United States isn’t adapting. Instead, it’s sick and hiding under the covers.

      Annotation #2- Vyju: This passage suggests that while globalization has helped many out of poverty, it has also deepened inequalities. Developing nations that have integrated into global markets have seen economic growth, but the benefits are often unevenly distributed, as suggested by the phrase "confront the fact that the global economy doesn't work as they thought it did". Wealth tends to concentrate in urban centers, leaving rural areas behind. Additionally, workers in low-income countries often face exploitation as multinational corporations seek the cheapest labor costs. A question that arises from this passage is: How can global trade policies be structured to ensure that economic development benefits all sectors of society rather than just a select few? This is relevant to the inquiry question of the relationship between trade and development because trade is a major driver of development, but without safeguards, it can reinforce existing inequalities rather than reduce them. While trade aids economic expansion, the passage suggests that generosity goes hand in hand with development, which should include equitable access to resources, fair wages, and protections for vulnerable populations. According to the United Nations Development Programme (2021), “Inclusive Growth & Innovation,” policies like fair trade agreements, labor rights protections, and investments in rural infrastructure can help close the gap between economic growth and social development.

      Citation: United Nations Development Programme. (n.d.). Inclusive Growth & Innovation. Retrieved from https://www.undp.org/egypt/inclusive-growth-innovation

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      *Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      Summary: This manuscript authored by Kakui and colleagues aims to understand how mitotic chromosomes get their characteristic, condensed X shape, which is functionally important to ensure faithful chromosome segregation and genome inheritance to both daughter cells. The authors focus on the condensin complex, a central player in chromosome condensation. They ask whether it condenses chromosomes through the now broadly popular "loop-extrusion" mechanism, in which a chromatin-bound condensin complex reels chromatin into loops until it dissociates or encounters a roadblock on the polymer (another condensin or some other protein complex), or through an alternative, "diffusion-capture" mechanism, in which a chromatin-bound condensin complex forms loops by encountering another chromatin-bound condensin until they dissociate from DNA (or from each other). The authors measured the progressive changes in the shape of mitotic chromosomes by taking samples at given time points from synchronized and mitotically arrested cells and found that while all chromosomes become more condensed and shorter, their width correlated with the length of the chromosome arms. They also observed that chromosome compaction/shortening evolves on a time scale much longer than the interval between the onset of chromosome condensation and the start of chromosome segregation, suggesting that chromatin condensation does not reach its steady-state during an unperturbed mitosis. The observed width-length correlation could be described by a power law with an exponent that increases with time (i.e. chromosome condensation). The authors also performed polymer simulations of the diffusion-capture mechanism and found that the simulations semi-quantitatively recapitulate their experimental observations.

      Major Comments: My most substantial comments focus on somewhat technical details of the image analysis approaches taken and the polymer models employed. However, as all reported data are derived from those details, I feel it is crucial to address them. *

      We thank the reviewer for their suggestions on how to improve our image analysis and polymer modelling experiments. We are keen to develop both aspects of our manuscript with additional experiments as detailed below.

      1. * Definition/measurement of chromatin arms width and length. The approach taken to manually threshold an "arm" object and then fitting it with a same-area ellipse is not an ideal approach to gauge length and width of the arm, for the following reasons: (1) An ellipse appears to do a poor job approximating many of the objects that we see in the zoom-in insets of Fig.1. Importantly, for somewhat bent shapes we see in the insets it likely strongly underestimates the length of the arms; this approach also presents potential problems for measuring width as well (see 2 and 3 here). (2) One concern is that, due to the diffraction limit, a cylindrical fluorescent object could appear somewhat wider at the mid-length than the real underlying cylinder or the poles; this effect could become more pronounced as the object gets brighter and shorter. (3) Forcing the fit to an ellipse to objects that are not truly rod-shaped can drive an overestimation of the width of the object, and I suspect that this effect also might correlate with the length and brightness of the object. (4) Given 1-3 above, I think the approach the authors used for the first two time points, while not perfect, is better suited and likely more robust while avoiding these caveats. Moreover, why the authors cannot use this same approach (but just for each arm separately) for the later (30+ min) time points as they used for first two is unclear. This point is underscored by the observation that there is a drastic difference in the results between the first two and all subsequent points. When the authors compared the two approaches at the 30 min time point (where width-length dependence is still weak) in different cell lines they did indeed see different results (Fig. S2), although they concluded that the difference was acceptable. *

      While the manuscript was under review, we have developed an improved pipeline to measure chromosome widths. As suggested by the reviewer, this approach is based on the method used for the first two time points. An additional improvement allows us to take automated measurements along the entire chromosome arm length, instead of being restricted to straight segments. We propose to use the improved algorithm to repeat the measurements at later time points.

      * Along these lines, the difference between short and long arms for the chromosome in the insets of Fig.1 are quite subtle, except maybe at 180 and 240 min. On a related note, it might be informative to compare data for the two sister chromatid arms (as the underlying polymer has the same length) long vs long and short vs short and long vs short to help establish the robustness of the approach. *

      The chromosome arm width differences are clear and measurable. We will select insets that illustrate the arm width differences in a more representative way, and we will furthermore conduct the suggested analyses on subsets of chromosome arms to test the robustness of our approach.

      * Regarding the power-law distribution, it is hard to judge based on the presented data whether it is a really good description of the data or not. In Fig.1c, the points for a given time can barely be distinguished, while in Fig.1b the authors plot individual time points in the panels, but the fits and points are overlapping so much that it is challenging to see the main trends described by the clouds. The most informative approach for the reader would be to provide confidence intervals of the best fit parameters for all parameters that were varied in the fit. As the authors make some conclusions based on the power-law exponent values they observed, it would be helpful to know how confident we are in those values. *

      Confidence intervals of the power law exponents will be provided.
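      To illustrate how such confidence intervals can be obtained, here is a bootstrap sketch (Python/NumPy; the function name and the resampling scheme are our illustrative choices, not a description of the published analysis pipeline):

```python
import numpy as np

def powerlaw_exponent_ci(length, width, n_boot=2000, alpha=0.05, seed=0):
    """Fit width = c * length**b by least squares in log-log space and
    return the exponent b with a bootstrap percentile confidence interval."""
    rng = np.random.default_rng(seed)
    log_l, log_w = np.log(length), np.log(width)
    b_hat = np.polyfit(log_l, log_w, 1)[0]
    n = len(log_l)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample chromosome arms with replacement
        boot[i] = np.polyfit(log_l[idx], log_w[idx], 1)[0]
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return b_hat, (lo, hi)
```

      Resampling the measured arms with replacement and refitting the power law yields an empirical distribution of the exponent, whose percentiles provide the reported interval.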

      * The conclusion that short arms equilibrate faster based on Fig.3a is not fully convincing. For example, in a scenario where ~1.5 microns is the equilibrium length for all arms, and that the longest arms equilibrate the fastest - you would see the same qualitative pattern for quantiles, not much change in low percentiles, while you would observe a decrease in the values for the high percentiles. The authors might be right, but Fig. 3A does not unambiguously demonstrate that it is so based on this evidence alone. *

      Our reasoning is based on the observation that the shortest percentiles do not change or do not change rapidly after 30 minutes, while the longest percentiles are clearly still relaxing towards a steady state. We will repeat this analysis with the new measurements, obtained in response to point 1.

      * As for chromosome roundness, typically in image analysis, roundness is defined through the ratio of (perimeter)^2/area; it might be better to use "aspect ratio" for the metrics used by the authors. And, perhaps, one should expect that shorter (measured, not necessarily by polymer contour length) arms should have a higher width/length ratio? If one selects for more round objects, there should be no surprise that the width and length get almost proportional. Given all of this, I am not sure whether width/aspect ratio serves as a good proxy for the chromatin condensation progression, which is how the authors are employing this data in the manuscript as written. *

      We thank the reviewer for alerting us to an alternatively used definition of ‘roundness’. We will consider this concern, with one solution being to use ‘width-length ratio’ in its place.
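      To make the distinction concrete, here is a minimal sketch of the two descriptors (Python/NumPy; the function and its inputs are illustrative, not part of our measurement pipeline):

```python
import numpy as np

def shape_metrics(perimeter, area, major_axis, minor_axis):
    """Two descriptors that are easily conflated: 'circularity'
    (4*pi*area/perimeter^2, equal to 1.0 for a perfect circle) and the
    'aspect ratio' of a fitted ellipse (major over minor axis)."""
    circularity = 4.0 * np.pi * area / perimeter ** 2
    aspect_ratio = major_axis / minor_axis
    return circularity, aspect_ratio
```

      A circle scores a circularity of exactly 1 and an aspect ratio of 1; elongated arms score lower on the first metric and higher on the second, which is why the two should carry distinct names.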

      * For the diffusion-capture model simulations, I think the results of the simulation would strongly depend on the assumptions of the probability to associate and the time scale of dissociation of the beads representing the condensin complex. For example, for a very strong association one might expect that all condensin will end up in one big condensate, even in the case of a long polymer. This is not explored/discussed at all. Did the authors optimize their model in any way? If not, how have they estimated the values they used? Moreover, perhaps this is an opportunity to learn/predict something about condensin properties, but the authors do not take advantage of this opportunity. *

      We in fact explored the consequences of altering diffusion capture on and off rates when we initially developed the loop capture simulations, and we will report on the robustness of our model to the probability of dissociation as part of our revisions.

      * In addition, the authors did some checks to show that the steady-state results of the simulations do not depend on the initial conditions. However, as some of the results reported concern the polymer evolution to the steady state (Fig.6b-c), they also need to examine whether these results depend on the chosen initial conditions (or not), and if they do, what is the rationale for the choices the authors have made? *

      The current manuscript contains a comparison of steady states reached after simulations were started from elongated or random walk initial states (see Supplementary Figure 4). We will provide better justification for the choice of a 4x elongated initial state, which approximates the initial state observed in vivo.

      * A more thorough discussion of other possible models, beyond the diffusion-capture model considered here, would be beneficial to the reader. First, the authors practically discard the possibility of the loop-extrusion model to explain their observations (although they never explicitly state this in the abstract or discussion). However, they neither leveraged simulations to rigorously compare models nor included some other substantiated arguments to explain why they prefer their model. This is important, as one of the major findings here is that the chromatin never reaches steady state for condensation, making it challenging to intuit what one should expect in this very dynamic state. Second, the authors, while briefly mentioning that there might be some other mechanisms contributing to the mitotic chromosome reshaping, do not really discuss those possibilities in a scholarly way. For example, work by the Kleckner group has suggested an involvement of bridges between sister chromatids into their shortening dynamics (Chu et al. Mol Cell 2020). Third, the authors do not discuss how they envision the interplay between the different SMC complexes - cohesin, condensin I and condensin II - as they act on the same chromatin polymer, or at least acknowledge a possible role that this interplay might contribute to the observed time dependencies. *

      The reviewer raises important points, which we are keen to explore by performing loop extrusion simulations, as well as in an expanded discussion section.

      Reviewer #1 (Significance (Required)):

      Significance: The question the authors are trying to address is fundamental and important. While loop extrusion-driven mitotic chromosome organization is a popular model, considering alternative models is always crucial, especially when one can find experimental observations that allow us to discriminate between possible models. The main limitations are: 1) the performance of the approach the authors take to measure chromosome shape is in question and 2) the main competitive model (loop extrusion) is not modeled. If all shortcomings are addressed this work may provide strong evidence for the diffusion-capture model and thus advance our mechanistic understanding of mitotic processes, which will be of broad interest to the fields of genome and chromosome biology.

      We are happy to hear that the reviewer agrees that our work ‘may provide strong evidence for the diffusion-capture model and thus advance our mechanistic understanding of mitotic processes’. See above for how we propose to address the two main limitations.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      SUMMARY

      The authors tracked the progression of mitotic chromosome compaction over time by imaging chromatin spreads from HeLa cells that were released from G2/M arrest. By measuring the mitotic chromosome arms' width and length at different times post-release, the authors demonstrated that the speed at which the chromosome arms reach an equilibrium state is dependent on their length. The authors were able to recapitulate this observation using polymer simulations that they previously developed, supporting the model of loop capture as the mechanism for mitotic chromosome compaction.

      MAIN COMMENTS

      This is a straightforward paper that supports an alternative mechanism (relative to the highly popular loop-extrusion model) for chromosome compaction. My comments are meant to help the manuscript reach a wider audience.

      I suggest that "equilibrium" be replaced with "equilibrium length" since it is the only equilibrium parameter of concern. *

      The reviewer is correct, and we will implement this change, also taking into account the reasoning of reviewer 3 that ‘steady state’ is a better term to describe a final shape that is maintained by an active process.

      In the results, it may help to describe how loop capture and loop extrusion are incorporated into the simulations, using terminology that non-experts can understand. Such a description should be accompanied by figures that can be related to the other figures (color scheme, nomenclature if possible). *

      Following from the reviewer’s suggestion, we will provide schematics of the loop capture and loop extrusion mechanisms.

      OTHER COMMENTS

      P5: Is it possible the chromosome-spread processing may distort the structures of the chromosomes? *

      We will compare chromosome dimensions in live cells with those following spreading to investigate this possibility.

      Please clarify whether mitosis can complete after drug removal at the various treatment intervals. *

      Drug treatment and removal is often used as an experimental tool. We will perform a control experiment to explore whether mitosis can indeed complete after drug removal under our experimental conditions.

      P6: "Our records are not, therefore, meant as an accurate absolute measure of individual arms. Rather, fitting allows us to sample all chromosome arms and deduce overall trends of chromosome shape changes over time" It would be better to state this sentence earlier in this paragraph, or earlier in the section so that readers' expectations are curbed when they're reading the detailed analysis plan. *

      Note that we will employ an additional image analysis method, in response to comments from reviewer 1, which should lead to more reliable width measurements.

      P6: "As soon as individual chromosome arms become discernible (30 minutes), longer chromosome arms were wider, a trend that became more pronounced as time progressed." Implies that at early time points, when the lengths of the arms were unknown, the longer arms were equal or narrower than the short arms. I think it's more accurate to say that as soon as the arms were resolved, the longer arms appeared wider. *

      We will adopt the reviewer’s more accurate wording.

      P7: Is there a functional consequence to the long arms not equilibrating before anaphase onset? *

      The reviewer raises an interesting question, which we will explore in our revised discussion. One consequence of not reaching ‘steady state’ is that ‘time in mitosis’ becomes a key parameter that defines compaction at anaphase onset.

      P13: "In a loop capture scenario, we can envision how condensin II sets up a coarse rosette architecture, with condensin I inserting a layer of finer-grained rosettes." This should be illustrated in a figure. *

      We will consider such a figure, though the roles of the two condensin complexes are peripheral to our current study. Investigating the consequences of two distinct condensins for chromosome formation will provide fertile ground for future investigations.

      FIGURES

      Fig. 1: "...while insets show chromosomes at increasing magnification over time" sounds like the microscope magnification is changing over time. Please change "magnification" to "enlargement". Alternatively, if the goal of the figure is to illustrate the shape/dimensions change of the chromosomes over time, wouldn't it be better to keep all the enlargements at the same scale? *

      During the revisions, we will explore whether to show the insets at the same magnification, or to adjust the wording as suggested by the reviewer.

      Fig. 2a plot: Does the distribution of normalized intensities really justify a Gaussian fit? I see a double Gaussian. *

      The chosen example indeed resembles a double Gaussian. We will explore whether this is due to noise in the measurement and a poor choice of an example, or whether a double Gaussian fit is indeed merited.
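      One way to make that decision formally is to fit both models and compare information criteria. Here is a sketch (Python, with NumPy and SciPy assumed available; the synthetic data, function names, and starting values are illustrative, not our measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss1(x, a, mu, sigma):
    """Single Gaussian component."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def gauss2(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian components."""
    return gauss1(x, a1, mu1, s1) + gauss1(x, a2, mu2, s2)

def compare_gaussian_fits(x, y, p0_single, p0_double):
    """Fit one- and two-component Gaussians to binned intensities and
    return BIC scores (lower = preferred), penalising extra parameters."""
    n = len(x)
    bic = {}
    for name, model, p0 in [("single", gauss1, p0_single),
                            ("double", gauss2, p0_double)]:
        popt, _ = curve_fit(model, x, y, p0=p0, maxfev=10000)
        rss = float(np.sum((y - model(x, *popt)) ** 2))
        bic[name] = n * np.log(rss / n) + len(p0) * np.log(n)
    return bic
```

      If the data are genuinely bimodal, the double-Gaussian model wins despite its three extra parameters; if the second bump is noise, the BIC penalty favours the single Gaussian.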

      Please label the structures that resemble "rosettes". *

      Good idea, which we will implement.

      Lu Gan

      Reviewer #2 (Significance (Required)):

      General - This is a simulation-centric study of mammalian chromosome compaction that supports the loop-capture mechanism. It may be viewed as provocative by some readers because loop-extrusion has dominated the chromosome-compaction literature in the past decade. The only limitation, which is best addressed by future studies, is the absence of more direct molecular evidence of loop capture in situ. Though this same limitation applies to studies of the loop-extrusion mechanism.

      Advance - It is valuable for the field to consider alternative mechanisms. In my opinion, the dominant one has been studied to death by indirect methods without a direct molecular-resolution readout in situ. While the field awaits better experimental tools, more mechanisms should be explored.

      Audience - The chromosome-biology community (both bacterial and eukaryotic) will be interested.

      Expertise - My lab uses cryo-ET to study chromatin in situ.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      In this manuscript, Kakui et al. measured the length/width relationships of mitotic chromosomes in human cells that had entered mitosis for different durations. This simple measurement revealed very interesting behaviors of mitotic chromosomes. They found that longer chromosome arms were wider than shorter ones. Mitotic chromosomes became progressively wider over time, with shorter ones reaching the final state faster than the longer ones. They then built a loop-capture polymer model, which explained the time-dependent increase of the width/length ratio rather well, but did not quite explain the final roundness of chromosomes.

      I suggest the following points for the authors to consider.

      Major points

      (1) There is no experimental evidence that the loop capture mechanism is condensin-dependent. Can the authors deplete condensin I or II or both and measure chromosome length and width in similar assays? This will link their models to molecular players. *

      Such analyses have been conducted by others, and we will provide a brief survey with relevant references to the literature in our revised introduction.

      (2) It seems rather intuitive to me that if one defines the spacing of the condensin-binding sites, then the loop sizes will be the same between shorter and longer chromosomes. It then follows that shorter chromosomes are rounder. Is it that simple? If not, can the authors provide a better explanation? *

      The reviewer makes an interesting point that roundness (width-length ratio) is greater for shorter chromosome arms, even if chromosome width is constant. We will make this clear in the revised manuscript.

      (3) If the loop sizes are the same between shorter and longer chromosomes, why can't the loop extrusion model explain this phenomenon? If one assumes that condensin is stopped by the same barrier element and has the same distribution at the loop base, this should produce the same outcome as loop capture. *

      The key feature of loop extrusion is the formation of a linear condensin backbone, resulting in a bottle brush-shaped chromosome. This arrangement prevents further equilibration of loops into a wider structure, as occurs in the loop capture mechanism by rosette rearrangements. These differences will be better explained, using a schematic, in the revised manuscript.

      Minor points

      (1) "We are aware that this approximation underestimates the length of the longest chromosome arms and overestimates the length of the shortest arms." should be "We are aware that this approximation underestimates the length of the longer chromosome (q) arms and overestimates the length of the shorter (p) arms." Right? *

      In fact, this comparison applies to all longer and shorter arms, not only pairs of p and q arms, which we will clarify.

      (2) Some scientists argue that the final chromosome conformation might be kinetically driven. Even if the short chromosomes have reached the final roundness, this doesn't necessarily mean that they have reached equilibrium in cells. "Steady state" might be a better term to describe the chromosomes in vivo, as there are clearly energy-burning processes. *

      The reviewer is right that the term ‘equilibrium’ can be seen as misleading, which we will replace with ‘steady state’.

      Reviewer #3 (Significance (Required)):

      I find the paper intellectually stimulating and a pleasure to read. It suggests a plausible explanation for mitotic chromosome formation. As such, it will be of great interest to scientists in the chromatin field.

      Reviewer #4 (Evidence, reproducibility and clarity (Required)):

      The take home message of this study is that chromosome structure can be attained through mechanisms of looping that do not require an explicit loop extrusion function. As the authors state, alternative models of loop capture have been proposed, dating from 2015-2016. These models show that DNA chains, through simple Brownian diffusion, can adopt a loop structure (citation 27, 28 and similarly Entropy gives rise to topologically associating domains, Vasquez et al 2016, DOI: 10.1093/nar/gkw510).*

      The reviewer makes an excellent point in that entropy considerations, e.g. depletion attraction, likely contribute to the efficiency of loop capture. We will refer to this principle, including a citation to the Vasquez et al. study, in the revised manuscript.

      * In this study, the authors go through careful and well-documented chromosome length measurements through prophase and metaphase. The modeling studies clearly show that loop capture provides a tenable mechanism that accounts for the biological results. The results are clearly written and propose an important alternative narrative for the foundation of chromosome organization.

      Reviewer #4 (Significance (Required)):

      The study is important because it takes a reductionist approach using just Brownian motion and loop capture to ask how well the fundamental processes will recapitulate the biological outcome. The fact that loop capture can account for the arm length to width relationships on biological time scales is important to report to the community. The work is extremely well done and the analysis of chromosome features is thorough and well-documented.*

    1. Audio FeedbackAudio feedback is another way to differentiate the methods of feedback in the classroom. Mote is a great digital tool that enables teachers to deliver audio feedback to students. It is a downloadable Chrome extension that allows teachers to record their responses to student work. By simply clicking on the purple “M” icon, you can record your feedback.Teachers can use this tool on many different applications. It can be used on all Google Tools, like Docs and Slides, but it can also be used on email. Students can utilize this method of feedback to hear it as many times as needed. One drawback of using Mote is that there’s a limited amount of recording time (about 30 seconds) for each section. However, this short time frame can motivate educators to really think about the quality of their feedback to ensure that it’s concise and meaningful.Mote is also a helpful tool for universal design for learning in that it provides an alternative method of receiving information. Instead of just reading feedback that they get, students can listen to it—as many times as needed.

      I find it challenging to envision this method being more beneficial than written or visual feedback, unless the student is visually impaired. This approach necessitates a location where the student can listen to audio out loud or have access to headphones. It appears to be a confined format that offers limited advantages, aside from the capability to incorporate the tone and intended cadence of your feedback.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      The authors used a subset of a very large, previously generated 16S dataset to: (1) Assess age-associated features; and (2) develop a fecal microbiome clock, based on an extensive longitudinal sampling of wild baboons for which near-exact chronological age is known. They further seek to understand deviation from age-expected patterns and uncover if and why some individuals have an older or younger microbiome than expected, and the health and longevity implications of such variation. Overall, the authors compellingly achieved their goals of discovering age-associated microbiome features and developing a fecal microbiome clock. They also showed clear and exciting evidence for sex and rank-associated variation in the pace of gut microbiome aging and impacts of seasonality on microbiome age in females. These data add to a growing understanding of modifiers of the pace of age in primates, and links among different biological indicators of age, with implications for understanding and contextualizing human variation. However, in the current version, there are gaps in the analyses with respect to the social environment, and in comparisons with other biological indicators of age. Despite this, I anticipate this work will be impactful, generate new areas of inquiry, and fuel additional comparative studies.

      Thank you for the supportive comments and constructive reviews.

      Strengths:

      The major strengths of the paper are the size and sampling depth of the study population, including the ability to characterize the social and physical environments, and the application of recent and exciting methods to characterize the microbiome clock. An additional strength was the ability of the authors to compare and contrast the relative age-predictive power of the fecal microbiome clock to other biological methods of age estimation available for the study population (dental wear, blood cell parameters, methylation data). Furthermore, the writing and support materials are clear, informative and visually appealing.

      Weaknesses:

      It seems clear that more could be done in the area of drawing comparisons among the microbiome clock and other metrics of biological age, given the extensive data available for the study population. It was confusing to see this goal (i.e. "(i) to test whether microbiome age is correlated with other hallmarks of biological age in this population"), listed as a future direction, when the authors began this process here and have the data to do more; it would add to the impact of the paper to see this more extensively developed.

      Comparing the microbiome clock to other metrics of biological age in our population is a high priority (these other metrics of biological age are in Table S5 and include epigenetic age measured in blood, the non-invasive physiology and behavior clock (NPB clock), dentine exposure, body mass index, and blood cell counts (Galbany et al. 2011; Altmann et al. 2010; Jayashankar et al. 2003; Weibel et al. 2024; Anderson et al. 2021)). However, we have opted to test these relationships in a separate manuscript. We made this decision because of the complexity of the analytical task: these metrics were not necessarily collected on the same subjects, and when they were, each metric was often measured at a different age for a given animal. Further, two of the metrics (microbiome clock and NPB clock) are measured longitudinally within subjects but on different time scales (the NPB clock is measured annually while microbiome age is measured in individual samples). The other metrics are cross-sectional. Testing the correlations between them will require exploration of how subject inclusion and time scale affect the relationships between metrics.

      We now explain the complexity of this analysis in the discussion in lines 447-450. In addition, we have added the NPB clock (Weibel et al. 2024) to the text in lines 260-262 and to Table S5.

      An additional weakness of the current set of analyses is that the authors did not explore the impact of current social network connectedness on microbiome parameters, despite the landmark finding from members of this authorship studying the same population that "Social networks predict gut microbiome composition in wild baboons" published here in eLife some years ago. While a mother's social connectedness is included as a parameter of early life adversity, overall the authors focus strongly on social dominance rank, without discussion of that parameter's impact on social network size or directly assessing it.

      Thank you for raising this important point, which was not well explained in our manuscript. We find that the signatures of social group membership and social network proximity are only detectable in our population for samples collected close in time. All of the samples analyzed in Tung et al. 2015 (“Social networks predict gut microbiome composition in wild baboons”) were collected within six weeks of each other. By contrast, the data set analyzed here spans 14 years, with very few samples from close social partners collected close in time. Hence, the effects of social group membership and social proximity are weak or undetectable. We described these findings in Grieneisen et al. 2021 and Bjork et al. 2022, and we now explain this logic on line 530, which states, “We did not model individual social network position because prior analyses of this data set find no evidence that close social partners have more similar gut microbiomes, probably because we lack samples from close social partners sampled close in time (Grieneisen et al. 2021; Björk et al. 2022).”

      We do find small effects of social group membership, which is included as a random effect in our models of how each microbiome feature is associated with host age (line 529) and our models predicting microbiome Δage (line 606; Table S6).

      Reviewer #2 (Public review):

      Summary:

      Dasari et al present an interesting study investigating the use of 'microbiota age' as an alternative to other measures of 'biological age'. The study provides several curious insights into biological aging. Although 'microbiota age' holds potential as a proxy of biological age, it comes with limitations considering the gut microbial community can be influenced by various non-age related factors, and various age-related stressors may not manifest in changes in the gut microbiota. The work would benefit from a more comprehensive discussion, that includes the limitations of the study and what these mean to the interpretation of the results.

      We agree and have added text to the discussion that expands on the limitations of this study and what those limitations mean for the interpretation of the results. For instance, lines 395-400 read, “Despite the relative accuracy of the baboon microbiome clock compared to similar clocks in humans, our clock has several limitations. First, the clock’s ability to predict individual age is lower than for age clocks based on patterns of DNA methylation—both for humans and baboons (Horvath 2013; Marioni et al. 2015; Chen et al. 2016; Binder et al. 2018; Anderson et al. 2021). One reason for this difference may be that gut microbiomes can be influenced by several non-age-related factors, including social group membership, seasonal changes in resource use, and fluctuations in microbial communities in the environment.”

      In addition, lines 405-411 now reads, “Third, the relationships between potential socio-environmental drivers of biological aging and the resulting biological age predictions were inconsistent. For instance, some sources of early life adversity were linked to old-for-age gut microbiomes (e.g., males born into large social groups), while others were linked to young-for-age microbiomes (e.g., males who experienced maternal social isolation or early life drought), or were unrelated to gut microbiome age (e.g., males who experienced maternal loss; any source of early life adversity in females).”

      Strengths:

      The dataset this study is based on is impressive, and can reveal various insights into biological ageing and beyond. The analysis implemented is extensive and high-level.

      Weaknesses:

      The key weakness is the use of microbiota age instead of e.g., DNA-methylation-based epigenetic age as a proxy of biological ageing, for reasons stated in the summary. DNA methylation levels can be measured from faecal samples, and as such epigenetic clocks too can be non-invasive. I will provide authors a list of minor edits to improve the read, to provide more details on Methods, and to make sure study limitations are discussed comprehensively.

      Thank you for this point. In response, we have deleted the text from the discussion that stated that non-invasive sampling is an advantage of microbiome clocks. In addition, we now propose a non-invasive epigenetic clock from fecal samples as an important future direction for our population (see line 450).

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Abstract - The opening 2 sentences are not especially original or reflective of the potential value/ premise of the study. Members of this team have themselves measured variation in biological age in many different ways, and the implication that measuring a microbiome clock is easy or straightforward is not compelling. This paper is very interesting and provides unique insight, but I think overall there is a missed opportunity in the abstract to emphasize this, given the innovative science presented here. Furthermore, the last 2 sentences of the abstract are especially interesting - but missing a final statement on the broader significance of research outside of baboons.

      We appreciate these comments and have revised the Abstract accordingly. The introductory sentences now read, “Mammalian gut microbiomes are highly dynamic communities that shape and are shaped by host aging, including age-related changes to host immunity, metabolism, and behavior. As such, gut microbial composition may provide valuable information on host biological age.” (lines 31-34). The last two sentences of the abstract now read, “Hence, in our host population, gut microbiome age largely reflects current, as opposed to past, social and environmental conditions, and does not predict the pace of host development or host mortality risk. We add to a growing understanding of how age is reflected in different host phenotypes and what forces modify biological age in primates.” (lines 40-43).

      If possible, it would be highly useful to present some comments on concordance in patterns at different levels. Are all ASVs assessed at both the family and genus levels? Do they follow similar patterns when assessed at different levels? What can we learn about the system by looking at different levels of taxonomic assignment?

      The section on relationships between host age and individual microbiome features is already lengthy, so we have not added an analysis of concordance between different taxonomic levels. However, we added a justification for why we tested for age signatures in different levels of taxa to line 171, which reads, “We tested these different taxonomic levels in order to learn the degree to which coarse and fine-grained taxonomic categories were associated with host age.”

      To calculate the delta age - please clarify if this was done at the level of years, as suggested in Figure 3C, or at the level of months or portion months, etc?

      Delta age is measured in years. This is now clarified in lines 294, 295, and 578.

      Spelling mistake in table S12, cell B4 (Octovber)

      Thank you. This typo has been corrected.

      Given the start intro with vertebrates, the second paragraph needs some tweaking to be appropriate. Perhaps, "At least among mammals, one valuable marker of biological aging may lie in the composition and dynamics of the mammalian gut microbiome (7-10)." Or simply remove "mammalian".

      We have updated this sentence based on your suggestions in line 54. It reads, “In mammals, one valuable marker of biological aging may lie in the composition and dynamics of the gut microbiome (Claesson et al. 2012; Heintz and Mair 2014; O’Toole and Jeffery 2015; Sadoughi et al. 2022).”

      A rewrite at the end of the introduction is needed to avoid the almost direct repetition in lines 115-118 and 129-131 (including lit cited). One potentially effective way to approach this is to keep the predictions in the earlier paragraph and then more clearly center the approach and the overarching results statement in the latter paragraph. (I.e., "we find that season and social rank have stronger effects on microbiome age than early life events. Further, microbiome age does not predict host development or mortality.").

      Thank you for pointing this out. We have re-organized the predictions in the introduction based on your suggestion. The alternative “recency effects” model now appears in the paragraph that starts in line 110. The final paragraph then centers on the overall approach and the results statement (lines 128-140).

      Be clear in each case where taxon-level trends are discussed if it's at Family, Genus, or other level. It's there most, but not all, of the time.

      We have gone through the text and clarified what taxa or microbiome feature was the subject of our analyses in any places where this was not clear.

      In the legend for Figure 2, add clarification for how values to right versus left of the centered value should be interpreted with respect to age (e.g. "values to x of the center are more abundant in older individuals").

      We now clarify in Figure 2C and 2D that “Positive values are more abundant in older hosts”.

      Figure 3 - Are Panels A, B, and C all needed - can the value for all individuals not also be overlaid in the panel showing sex differences and the same point showing individuals with "old" and "young" microbiomes be added in the same plot if it was slightly larger?

      We agree and have simplified Figure 3. We reduced the number of panels from three to two, and we added the information about how to calculate delta age to Panel A. We also moved the equation from the top of Panel C to the bottom right of Panel A.

      Reviewer #2 (Recommendations for the authors):

      Dasari et al present an interesting study investigating the use of 'microbiota age' as an alternative to other measures of 'biological age'. The study provides several curious insights which in principle warrant publication. However, I do think the manuscript should be carefully revised. Below I list some minor revisions that should be implemented. Importantly, the authors should discuss in the Discussion the pros and cons of using 'microbiota age' as a proxy of 'biological age'. Further, the authors should provide more information on Methods, to make sure the study can be replicated.

      Thank you for these important points. Based on your comments and those of the first reviewer, we have expanded our discussion of the limitations of using microbiota age as a proxy for biological age (see edits to the paragraph starting in line 395).

      We have also expanded our methods around sample collection, DNA extraction, and sequencing to describe our sampling methods, strategies to mitigate and address possible contamination, and batch effects. See lines 483-490 and our citations to the original papers where these methods are described in detail.

      (1) Lines 85-99: I think this paragraph could be revisited to make the assumptions clearer. For instance, the last sentence is currently a little confusing: are authors expecting males to exhibit old-for-age microbiomes already during the juvenile period?

      This prediction has been clarified. Line 96 now reads, “Hence, we predicted that adult male baboons would exhibit gut microbiomes that are old-for-age, compared to adult females (by contrast, we expected no sex effects on microbiome age in juvenile baboons).”

      (2) Lines 118-121: Could the authors discuss this assumption in relation to what has been observed e.g., in humans in terms of delays in gut microbiome development? Delayed/accelerated gut microbiome development has been studied before, so this assumption would be stronger if related to what we know from previous studies.

      This comment refers to the sentence which originally stated, “However, we also expected that some sources of early life adversity might be linked to young-for-age gut microbiota. For instance, maternal social isolation might delay gut microbiome development due to less frequent microbial exposures from conspecifics.” We have slightly expanded the text here (line 117) to explain our logic. We now include citations for our predictions. We did not include a detailed discussion of prior literature on microbiome development in the interest of keeping the same level of detail across all sections on our predictions.

      (3) As the authors discuss, various adversities can lead to old-for-age but also young-for-age microbiome composition. This should be discussed in the limitations.

      We agree. This is now discussed in the sentence starting at line 371, which reads, “…deviations from microbiome age predictions are explained by socio-environmental conditions experienced by individual hosts, especially recent conditions, although the effect sizes are small and are not always directionally consistent.” In addition, the text starting at line 405 now reads, “Third, the relationships between potential socio-environmental drivers of biological aging and the resulting biological age predictions were inconsistent. For instance, some sources of early life adversity were linked to old-for-age gut microbiomes (e.g., males born into large social groups), while others were linked to young-for-age microbiomes (e.g., males who experienced maternal social isolation or early life drought), or were unrelated to gut microbiome age (e.g., males who experienced maternal loss; any source of early life adversity in females).”

      (4) In various places, e.g., lines 129-131, it is a little unclear at what chronological age authors are expecting microbiota to appear young/old-for-age.

      This sentence was removed while responding to the comments from the first reviewer.

      (5) Lines 132-133: this statement could be backed by stating that this is because the gut microbiota can change rapidly e.g., when diet changes (or whatever the authors think could be behind this).

      We have added an expository sentence at line 123, including new citations. This sentence reads, “Indeed, gut microbiomes are highly dynamic and can change rapidly in response to host diet or other aspects of host physiology, behavior, or environments”.

      We now cite:

      · Hicks, A.L., et al. (2018). Gut microbiomes of wild great apes fluctuate seasonally in response to diet. Nature Communications 9, 1786.

      · Kolodny, O., et al. (2019). Coordinated change at the colony level in fruit bat fur microbiomes through time. Nature Ecology & Evolution 3, 116-124.

      · Risely, A., et al. (2021) Diurnal oscillations in gut bacterial load and composition eclipse seasonal and lifetime dynamics in wild meerkats. Nat Commun 12, 6017.

      (6) Lines 135-137: current or past season and social rank? This paragraph introduces the idea that it could be past rather than current socio-environmental factors that might predict microbiota age, so the authors should clarify this sentence.

      We have clarified the information in this sentence. Line 135 now reads, “In general, our results support the idea that a baboon’s current socio-environmental conditions, especially their current social rank and the season of sampling, have stronger effects on microbiome age than early life events—many of which occurred many years prior to sampling.”

      (7) Lines 136-137: this sentence could include some kind of a conclusion of this finding. What might this mean?

      We have added a sentence at line 138, which speculates that, “…the dynamism of the gut microbiome may often overwhelm and erase early life effects on gut microbiome age.”

      (8) Use 'microbiota' or 'microbiome' across the manuscript; currently, the terms are used interchangeably. I don't have a strong opinion on this, although typically 'microbiota' is used when data comes from 16S rRNA.

      We have updated the text to replace any instance of “microbiota” with “microbiome”. We use the term microbiome in the sense of this definition from the National Human Genome Research Institute, which defines a microbiome as “the community of microorganisms (such as fungi, bacteria and viruses) that exists in a particular environment”.

      (9) Figure 1 legend: make sure to unify formatting; e.g., present sample sizes as N= or n=, rather than both, and either include or do not include commas in 4-digit values (sample sizes).

      We have checked the formatting related to sample sizes and the use of commas in 4-digits in the main text and supplement. The formats are now consistent.

      (10) Line 166: relative abundances surely?

      Following Gloor et al. (2017), our analyses use centered log-ratio (CLR) transformations of read counts, which is the recommended approach for compositional data such as 16S rRNA amplicon read counts. CLR transformations are scale-invariant, so the same ratio is obtained in a sample with few reads versus many reads. We now cite Gloor et al. (2017) at line 169 and in the methods in line 517, which reads “centered log ratio (CLR) transformed abundances (i.e., read counts) of each microbial phyla (n=30), family (n=290), genus (n=747), and amplicon sequence variant (ASV) detected in >25% of samples (n=358). CLR transformations are a recommended approach for addressing the compositional nature of 16S rRNA amplicon read count data (Gloor et al. 2017).”
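      As an illustration of the CLR transformation and its scale invariance described above, here is a minimal Python sketch. This is not the authors' actual pipeline; the function name and the pseudocount handling are assumptions for illustration only.

```python
import numpy as np

def clr_transform(counts, pseudocount=0.5):
    """Centered log-ratio (CLR) transform of a vector of read counts.

    A small pseudocount can be added so that zero counts do not break the
    logarithm. Subtracting the mean of the logs is equivalent to dividing
    by the geometric mean of the sample before taking logs.
    """
    x = np.asarray(counts, dtype=float) + pseudocount
    log_x = np.log(x)
    return log_x - log_x.mean()

# Scale invariance (the property noted above): a sample sequenced 10x
# deeper yields the same CLR values, because multiplying all counts by a
# constant shifts every log by the same amount, which centering removes.
shallow = clr_transform([10, 40, 50], pseudocount=0)
deep = clr_transform([100, 400, 500], pseudocount=0)
# shallow and deep are numerically identical, and CLR values sum to zero.
```

      Because CLR values depend only on ratios within a sample, sequencing depth cancels out, which is why depth can be modeled separately as a fixed effect rather than by rarefying the counts.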

      (11) Lines 167-172: were technical factors, e.g., read depth or sequencing batch, included as random effects?

      Thank you for catching this oversight in the text. We did model sequencing depth and batch effects. The sentence starting at line 173 now reads, “For each of these 1,440 features, we tested its association with host age by running linear mixed effects models that included linear and quadratic effects of host age and four other fixed effects: sequencing depth, the season of sample collection (wet or dry), the average maximum temperature for the month prior to sample collection, and the total rainfall in the month prior to sample collection (Grieneisen et al. 2021; Björk et al. 2022; Tung et al. 2015). Baboon identity, social group membership, hydrological year of sampling, and sequencing plate (as a batch effect) were modeled as random effects.”

      (12) Lines 175-180: When discussing how these alpha diversity results relate to previous findings, the authors should be clear about whether they talk about weighted or non-weighted measures of alpha diversity. - also maybe this should be included in the discussion rather than the results? Please consider this when revisiting the manuscript (see how it reads after edits).

      Richness is the only unweighted metric, which we now clarify in line 181. We opted to retain the interpretation in the text in its original location to maintain the emphasis in the discussion on the microbiome clock results.

      (13) Table S1 is very hard to interpret in the provided PDF format as columns are not presented side-by-side. It is currently hard to check model output for e.g., specific families. This needs to be revisited.

      We agree. We believe that eLife’s submission portal automatically generates a PDF for any supplementary item. However, we also include the supplementary tables as an Excel workbook which has the columns presented side-by-side.

      (14) Line 184: taxa meaning what? Unclear what authors refer to with this sentence, taxa across taxonomic levels, or ASVs, or what does the 51.6% refer to?

      We have edited line 191 to clarify that this sentence refers to taxa at all taxonomic levels (phyla to ASVs).

      (15) Line 191: a punctuation mark missing after ref (81).

      We have added the missing period at the end of this sentence.

      (16) Lines 189-197: this should go into the discussion in my opinion.

      We have opted to retain this interpretation, now at line 183.

      (17) Lines 215-219: Not sure what this means; do the authors mean features were not restricted to age-associated taxa, ie also e.g., diversity and other taxa-independent patterns were included? If so, the rest of the highlighted lines should be revisited to make this clear, currently to me it is very unclear what 'These could include features that are not strongly age-correlated in isolation' means. Currently, that sounds like some features included were only age-associated in combination with other features, but unclear how this relates to taxa-dependency/taxa-independency.

      We agree this was not clear. We have revised line 224 to read, “We included all 9,575 microbiome features in our age predictions, as opposed to just those that were statistically significantly associated with age because removing these non-significant features could exclude features that contribute to age prediction via interactions with other taxa.”

      (18) Line 403-407: There is now a paper showing epigenetic clocks can be built with faecal samples, so this argument is not valid. Please revisit in light of this publication: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mec.17330

      Thank you for bringing this paper to our attention. We deleted the text that describes epigenetic clocks as invasive, and we now cite this paper in line 450, which reads, “We also hope to measure epigenetic age in fecal samples, leveraging methods developed in Hanski et al. 2024.”

      (19) Line 427: a punctuation mark/semicolon missing before However.

      We have corrected this typo.

      (20) Lines 419-428: I don't quite understand this speculation. Why would the priority of access to food lead to an old-looking gut microbiome? This paragraph needs stronger arguments, currently unclear and also not super convincing.

      We agree this was confusing. We have revised this text to clarify the explanation. The text starting at line 424 now reads, “This outcome points towards a shared driver of high social status in shaping gut microbiome age in both males and females. While it is difficult to identify a plausible shared driver, one benefit shared by both high-ranking males and females is priority of access to food. This access may result in fewer foraging disruptions and a higher quality, more stable diet. At the same time, prior research in Amboseli suggests that as animals age, their diets become more canalized and less variable (Grieneisen et al. 2021). Hence aging and priority of access to food might both be associated with dietary stability and old-for-age microbiomes. However, this explanation is speculative and more work is needed to understand the relationship between rank and microbiome age.”

      (21) Line 434: remove 'be'.

      We have corrected this typo.

      (22) Line 478: add information on how samples were collected; e.g., were samples collected from the ground? How was cross-contamination with soil microbiota minimised? Were samples taken from the inner part of depositions? These factors can influence microbiota samples quite drastically so detailed info is needed. Also what does homogenisation mean in this context? How soon were samples freeze-dried after sample collection?

      We have expanded our methods with respect to sample collection. This text starts in line 483 and reads, “Samples were collected from the ground within 15 minutes of defecation. For each sample, approximately 20 g of feces was collected into a paper cup, homogenized by stirring with a wooden tongue depressor, and a 5 g aliquot of the homogenized sample was transferred to a tube containing 95% ethanol. While a small amount of soil was typically present on the outside of the fecal sample, mammalian feces contains 1000 times the number of microbial cells in a typical soil sample (Sender, Fuchs, and Milo 2016; Raynaud and Nunan 2014), which overwhelms the signal of soil bacteria in our analyses (Grieneisen et al. 2021). Samples were transported from the field in Amboseli to a lab in Nairobi, freeze-dried, and then sifted to remove plant matter prior to long term storage at -80°C.”

      (23) Line 480 onwards: were negative controls included in extraction batches? Were samples randomised into extraction batches?

      Yes, we included extraction blanks. These are now described in lines 495-500. This text reads, “We included one extraction blank per batch, which had significantly lower DNA concentrations than sample wells (t-test; t=-50, p < 2.2x10<sup>-16</sup>; Grieneisen et al. 2021). We also included technical replicates, which were the same fecal sample sequenced across multiple extraction and library preparation batches. Technical replicates from different batches clustered with each other rather than with their batch, indicating that true biological differences between samples are larger than batch effects.”

      (24) Were extraction, library prep, and sequencing negative controls included? Is data available?

      We included extraction blanks (described above) and technical replicates, which were the same sample sequenced across multiple extraction and library preparation batches. Technical replicates from different batches clustered with each other rather than with their batch, indicating that true biological differences between samples are larger than batch effects.

      We have updated the data availability statement to read, “All data for these analyses are available on Dryad at https://doi.org/10.5061/dryad.b2rbnzspv. The 16S rRNA gene sequencing data are deposited on EBI-ENA (project ERP119849) and Qiita (study 12949). Code is available at the following GitHub repository: https://github.com/maunadasari/Dasari_etal-GutMicrobiomeAge”.

      (25) Line 562: how were corrected microbiome delta ages calculated? Currently, the authors state x, y and z factors were corrected for, but it is unclear how this was done.

      The paragraph starting at line 577 describes how microbiome delta age was calculated. We have made only a few changes to this text because we were not sure which aspects of these methods confused the reviewer. However, briefly, we calculated sample-specific microbiome Δage in years as the difference between a sample’s microbial age estimate, age<sub>m</sub>, from the microbiome clock, and the host’s chronological age in years at the time of sample collection, age<sub>c</sub>. Higher microbiome Δages indicate old-for-age microbiomes, as age<sub>m</sub> > age<sub>c</sub>, and lower values (which are often negative) indicate a young-for-age microbiome, where age<sub>c</sub> > age<sub>m</sub> (see Figure 3).
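      The subtraction described above can be written out explicitly; a minimal illustration follows (the function name and the ages below are hypothetical, not values from the study):

```python
def microbiome_delta_age(age_m, age_c):
    """Microbiome delta age in years: predicted microbiome age (age_m)
    minus chronological age (age_c). Positive values indicate an
    old-for-age microbiome; negative values a young-for-age microbiome."""
    return age_m - age_c

# Hypothetical examples (not values from the study):
old_for_age = microbiome_delta_age(12.0, 10.0)    # 2.0 -> old-for-age
young_for_age = microbiome_delta_age(8.5, 10.0)   # -1.5 -> young-for-age
```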

      (26) Line 579: typo 'as'.

      We have corrected this typo.

      Works Cited

      Altmann, Jeanne, Laurence Gesquiere, Jordi Galbany, Patrick O Onyango, and Susan C Alberts. 2010. “Life History Context of Reproductive Aging in a Wild Primate Model.” Annals of the New York Academy of Sciences 1204:127–38. https://doi.org/10.1111/j.1749-6632.2010.05531.x.

      Anderson, Jordan A, Rachel A Johnston, Amanda J Lea, Fernando A Campos, Tawni N Voyles, Mercy Y Akinyi, Susan C Alberts, Elizabeth A Archie, and Jenny Tung. 2021. “High Social Status Males Experience Accelerated Epigenetic Aging in Wild Baboons.” Edited by George H Perry. eLife 10 (April):e66128. https://doi.org/10.7554/eLife.66128.

      Binder, Alexandra M., Camila Corvalan, Verónica Mericq, Ana Pereira, José Luis Santos, Steve Horvath, John Shepherd, and Karin B. Michels. 2018. “Faster Ticking Rate of the Epigenetic Clock Is Associated with Faster Pubertal Development in Girls.” Epigenetics 13 (1): 85–94. https://doi.org/10.1080/15592294.2017.1414127.

      Björk, Johannes R., Mauna R. Dasari, Kim Roche, Laura Grieneisen, Trevor J. Gould, Jean-Christophe Grenier, Vania Yotova, et al. 2022. “Synchrony and Idiosyncrasy in the Gut Microbiome of Wild Baboons.” Nature Ecology & Evolution, June, 1–10. https://doi.org/10.1038/s41559-022-01773-4.

      Chen, Brian H., Riccardo E. Marioni, Elena Colicino, Marjolein J. Peters, Cavin K. Ward-Caviness, Pei-Chien Tsai, Nicholas S. Roetker, et al. 2016. “DNA Methylation-Based Measures of Biological Age: Meta-Analysis Predicting Time to Death.” Aging (Albany NY) 8 (9): 1844–59. https://doi.org/10.18632/aging.101020.

      Claesson, Marcus J., Ian B. Jeffery, Susana Conde, Susan E. Power, Eibhlís M. O’Connor, Siobhán Cusack, Hugh M. B. Harris, et al. 2012. “Gut Microbiota Composition Correlates with Diet and Health in the Elderly.” Nature 488 (7410): 178–84. https://doi.org/10.1038/nature11319.

      Galbany, Jordi, Jeanne Altmann, Alejandro Pérez-Pérez, and Susan C. Alberts. 2011. “Age and Individual Foraging Behavior Predict Tooth Wear in Amboseli Baboons.” American Journal of Physical Anthropology 144 (1): 51–59. https://doi.org/10.1002/ajpa.21368.

      Gloor, Gregory B., Jean M. Macklaim, Vera Pawlowsky-Glahn, and Juan J. Egozcue. 2017. “Microbiome Datasets Are Compositional: And This Is Not Optional.” Frontiers in Microbiology 8. https://doi.org/10.3389/fmicb.2017.02224.

      Grieneisen, Laura E., Mauna Dasari, Trevor J. Gould, Johannes R. Björk, Jean-Christophe Grenier, Vania Yotova, David Jansen, et al. 2021. “Gut Microbiome Heritability Is Nearly Universal but Environmentally Contingent.” Science 373 (6551): 181–86. https://doi.org/10.1126/science.aba5483.

      Hanski, Eveliina, Susan Joseph, Aura Raulo, Klara M. Wanelik, Áine O’Toole, Sarah C. L. Knowles, and Tom J. Little. 2024. “Epigenetic Age Estimation of Wild Mice Using Faecal Samples.” Molecular Ecology 33 (8): e17330. https://doi.org/10.1111/mec.17330.

      Heintz, Caroline, and William Mair. 2014. “You Are What You Host: Microbiome Modulation of the Aging Process.” Cell 156 (3): 408–11. http://dx.doi.org/10.1016/j.cell.2014.01.025.

      Horvath, Steve. 2013. “DNA Methylation Age of Human Tissues and Cell Types.” Genome Biology 14 (10): R115. https://doi.org/10.1186/gb-2013-14-10-r115.

      Jayashankar, Lakshmi, Kathleen M. Brasky, John A. Ward, and Roberta Attanasio. 2003. “Lymphocyte Modulation in a Baboon Model of Immunosenescence.” Clinical and Vaccine Immunology 10 (5): 870–75. https://doi.org/10.1128/CDLI.10.5.870-875.2003.

      Marioni, Riccardo E., Sonia Shah, Allan F. McRae, Brian H. Chen, Elena Colicino, Sarah E. Harris, Jude Gibson, et al. 2015. “DNA Methylation Age of Blood Predicts All-Cause Mortality in Later Life.” Genome Biology 16 (1): 25. https://doi.org/10.1186/s13059-015-0584-6.

      O’Toole, Paul W., and Ian B. Jeffery. 2015. “Gut Microbiota and Aging.” Science 350 (6265): 1214–15. https://doi.org/10.1126/science.aac8469.

      Raynaud, Xavier, and Naoise Nunan. 2014. “Spatial Ecology of Bacteria at the Microscale in Soil.” PLOS ONE 9 (1): e87217. https://doi.org/10.1371/journal.pone.0087217.

      Sadoughi, Baptiste, Dominik Schneider, Rolf Daniel, Oliver Schülke, and Julia Ostner. 2022. “Aging Gut Microbiota of Wild Macaques Are Equally Diverse, Less Stable, but Progressively Personalized.” Microbiome 10 (1): 95. https://doi.org/10.1186/s40168-022-01283-2.

      Sender, Ron, Shai Fuchs, and Ron Milo. 2016. “Revised Estimates for the Number of Human and Bacteria Cells in the Body.” PLoS Biology 14 (8): e1002533. https://doi.org/10.1371/journal.pbio.1002533.

      Tung, J, L B Barreiro, M B Burns, J C Grenier, J Lynch, L E Grieneisen, J Altmann, S C Alberts, R Blekhman, and E A Archie. 2015. “Social Networks Predict Gut Microbiome Composition in Wild Baboons.” Elife 4. https://doi.org/10.7554/eLife.05224.

      Weibel, Chelsea J., Mauna R. Dasari, David A. Jansen, Laurence R. Gesquiere, Raphael S. Mututua, J. Kinyua Warutere, Long’ida I. Siodi, Susan C. Alberts, Jenny Tung, and Elizabeth A. Archie. 2024. “Using Non-Invasive Behavioral and Physiological Data to Measure Biological Age in Wild Baboons.” GeroScience 46 (5): 4059–74. https://doi.org/10.1007/s11357-024-01157-5.

    1. If you think this is only an arcane linguistic matter, just look to the North Dakota prairie where, as I write this, there are hundreds of people camping out in a blizzard enduring bitter cold to continue the protective vigil for their river, which is threatened by the construction of the Dakota Access Pipeline and the pipeline’s inevitable oil spills. The river is not an it for them—the river lies within their circle of moral responsibility and compassion and so they protect ki fiercely, as if the river were their relative, because ki is. But the ones they are protecting ki from speak of the river and the oil and the pipe all with the same term, as if “it” were their property, as if “it” were nothing more than resources for them to use. As if it were dead.

      So, to be clear: Kimmerer's advice is not that simply creating a pronoun will be sufficient, or that a pronoun alone can instinctively create a respectful and reciprocal mindset among those who use it. Rather, a whole culture needs to be created, as she illustrates here with Native Americans' relationship with nature. It's a relationship that's deeply cultural and which manifests itself on the linguistic level. So language and culture go hand in hand.

    1. Git ships with built-in tools for collaborating over email.

      Just as a point of fact: many distributions' package repositories don't have a Git package that "ships with built-in tools for collaborating over email", which can be seen in the suggested steps given for the systems in this list—the distros do generally provide packages that back the git send-email command, but it's a separate package.

    1. In modern times, Islam has been fictionalized in a less deferential vein as well, through satire and other forms of trenchant critique regarding religious beliefs and their worldly effects.

      I find this quote interesting because the author is calling out how modern times tend to fictionalize Islam and its history rather than take them seriously. There aren't a lot of moments in this text that "call out" or critique modern times and their feelings toward religious beliefs.

      I think this is something I'd actually like to see more within the text or just overall, within everyday life. The idea of fictionalizing a religious belief is highly disrespectful but it does happen more often than not.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In this article, Nedbalova et al. investigate the biochemical pathway that acts in circulating immune cells to generate adenosine, a systemic signal that directs nutrients toward the immune response, and S-adenosylmethionine (SAM), a methyl donor for lipid, DNA, RNA, and protein synthetic reactions. They find that SAM is largely generated through the uptake of extracellular methionine, but that recycling of adenosine to form ATP contributes a small but important quantity of SAM in immune cells during the immune response. The authors propose that adenosine serves as a sensor of cell activity and nutrient supply, with adenosine secretion dominating in response to increased cellular activity. Their findings of impaired immune action but rescued larval developmental delay when the enzyme Ahcy is knocked down in hemocytes are interpreted as due to effects on methylation processes in hemocytes and reduced production of adenosine to regulate systemic metabolism and development, respectively. Overall this is a strong paper that uses sophisticated metabolic techniques to map the biochemical regulation of an important systemic mediator, highlighting the importance of maintaining appropriate metabolite levels in driving immune cell biology.

      Strengths:

      The authors deploy metabolic tracing - no easy feat in Drosophila hemocytes - to assess flux into pools of the SAM cycle. This is complemented by mass spectrometry analysis of total levels of SAM cycle metabolites to provide a clear picture of this metabolic pathway in resting and activated immune cells.

      The experiments show that the recycling of adenosine to ATP, and ultimately SAM, contributes meaningfully to the ability of immune cells to control infection with wasp eggs.

      This is a well-written paper, with very nice figures showing metabolic pathways under investigation. In particular, the italicized annotations, for example, "must be kept low", in Figure 1 illustrate a key point in metabolism - that cells must control levels of various intermediates to keep metabolic pathways moving in a beneficial direction.

      Experiments are conducted and controlled well, reagents are tested, and findings are robust and support most of the authors' claims.

      Weaknesses:

      The authors posit that adenosine acts as a sensor of cellular activity, with increased release indicating active cellular metabolism and insufficient nutrient supply. It is unclear how generalizable they think this may be across different cell types or organs.

      In the final part of the Discussion, we elaborate slightly more on a possible generalization of our results, while being aware of the limited space in this experimental paper and therefore intend to address this in more detail and comprehensively in a subsequent perspective article.

      The authors extrapolate the findings in Figure 3 of decreased extracellular adenosine in ex vivo cultures of hemocytes with knockdown of Ahcy (panel B) to the in vivo findings of a rescue of larval developmental delay in wasp egg-infected larvae with hemocyte-specific Ahcy RNAi (panel C). This conclusion (discussed in lines 545-547) should be somewhat tempered, as a number of additional metabolic abnormalities characterize Ahcy-knockdown hemocytes, and the in vivo situation may not mimic the ex vivo situation. If adenosine (or inosine) measurements were possible in hemolymph, this would help bolster this idea. However, adenosine at least has a very short half-life.

      We agree with the reviewer, and in the 4th paragraph of the Discussion we now discuss more extensively the limitations of our study in relation to ex vivo adenosine measurements and the importance of the SAM pathway on adenosine production.

      Reviewer #2 (Public review):

      Summary:

      In this work, the authors wish to explore the metabolic support mechanisms enabling lamellocyte encapsulation, a critical antiparasitic immune response of insects. They show that S-adenosylmethionine metabolism is specifically important in this process through a combination of measurements of metabolite levels and genetic manipulations of this metabolic process.

      Strengths:

      The metabolite measurements and the functional analyses are generally very strong and clearly show that the metabolic process under study is important in lamellocyte immune function.

      Weaknesses:

      The gene expression data are a potential weakness. Not enough is explained about how the RNAseq experiments in Figures 2 and 4 were done, and the representation of the data is unclear.

      The RNAseq data have already been described in detail in our previous paper (doi.org/10.1371/journal.pbio.3002299), but we agree with the reviewer that we should describe the necessary details again here. The replicate numbers for the RNAseq data were added to the figure legends, the TPM values for the selected genes shown in the figures are in S1_Data, and a new S4_Data file with the complete RNAseq data (TPM and DESeq2) has been added to this revised version.

      The paper would also be strengthened by the inclusion of some measure of encapsulation effectiveness: the authors show that manipulation of the S-adenosylmethionine pathway in lamellocytes affects the ability of the host to survive infection, but they do not show direct effects on the ability of the host to encapsulate wasp eggs.

      The reviewer is correct that wasp egg encapsulation and host survival may be different (the host can encapsulate and kill the wasp egg and still not survive) and we should also include encapsulation efficiency. This is now added to Figure 3D, which shows that encapsulation efficiency is reduced upon Ahcy-RNAi, which is consistent with the reduced number of lamellocytes.

      Reviewer #3 (Public review):

      Summary:

      The authors of this study provide evidence that Drosophila immune cells show upregulated SAM transmethylation pathway and adenosine recycling upon wasp infection. Blocking this pathway compromises the lamellocyte formation, developmental delay, and host survival, suggesting its physiological relevance.

      Strengths:

      Snapshot quantification of the metabolite pool does not provide evidence that the metabolic pathway is active or not. The authors use an ex vivo isotope labelling to precisely monitor the SAM and adenosine metabolism. During infection, the methionine metabolism and adenosine recycling are upregulated, which is necessary to support the immune reaction. By combining the genetic experiment, they successfully show that the pathway is activated in immune cells.

      Weaknesses:

      The authors knocked down Ahcy to prove the importance of SAM methylation pathway. However, Ahcy-RNAi produces a massive accumulation of SAH, in addition to blocking adenosine production. To further validate the phenotypic causality, it is necessary to manipulate other enzymes in the pathway, such as Sam-S, Cbs, SamDC, etc.

      We are aware of this weakness and have addressed it in a much more detailed discussion of the limitations of our study in the 6th paragraph of the Discussion.

      The authors do not demonstrate how infection stimulates the metabolic pathway given the gene expression of metabolic enzymes is not upregulated by infection stimulus.

      Although the goal of this work was to test by 13C tracing whether SAM pathway activity is upregulated, not to analyze how its activity is regulated, we certainly agree with the reviewer that an explanation of possible regulation, especially in the context of the enzyme expression we show, should be included in our work. Therefore, we have supplemented the data with methyltransferase expression (Figure 2-figure supplement 3 and S3_Data) and better describe the changes in expression of some SAM pathway genes, which also support stimulation of this pathway by changes in expression. The enzymes of the SAM transmethylation pathway are highly expressed in hemocytes, and it is known that the activity of this pathway is primarily regulated by (1) increased methionine supply to the cell and (2) the actual utilization of SAM by methyltransferases. Therefore, a possible increase in the SAM transmethylation pathway in our work is suggested by (1) increased expression of 4 transporters capable of transporting methionine, (2) decreased expression of AhcyL2 (a dominant-negative regulator of Ahcy), and (3) increased expression of 43 out of 200 methyltransferases. This has now been added to the first section of the Results.

      Recommendations for the authors:

      Reviewing Editor Comments:

      In the discussion with the reviewers, two points were underlined as very important:

      (1) Knocking down Ahyc and other enzymes in the SAM methylation pathway may give very distinct phenotypes. Generalising the importance of "SAM methyaltion" only by Ahcy-RNAi is a bit cautious. The authors should be aware of this issue and probably mention it in the Discussion part.

      We are aware of this weakness and have addressed it in a much more detailed discussion of the limitations of our study in the 6th paragraph of the Discussion.

      (2) Sample sizes should be indicated in the Figure Legends. Replicate numbers on the RNAseq are important - were these expression levels/changes seen more than once?

      Sample sizes are shown as scatter plots with individual values wherever possible, and all graphs are supplemented with the S1_Data table of raw data. The RNAseq data have already been described in detail in our previous paper (doi.org/10.1371/journal.pbio.3002299), but we agree with the reviewers that we should describe the necessary details again here. The replicate numbers for the RNAseq data were added to the figure legends, the TPM values for the selected genes shown in the figures are in S1_Data, and a new S4_Data file with the complete RNAseq data (TPM and DESeq2) has been added to this revised version.

      Reviewer #1 (Recommendations for the authors):

      Major points:

      (1) Please provide sample sizes in the legends rather than in a supplementary table.

      Sample sizes are shown either as scatter plots with individual values or added to figure legends now.

      (2) More details in the methods section are needed:

      For hemocyte counting, are sessile and circulating hemocytes measured?

      We counted circulating hemocytes (upon infection, most sessile hemocytes are released into the circulation). While for metabolomics all hemocyte types were included, for hemocyte counting we were mainly interested in lamellocytes. Therefore, we counted them 20 hours after infection, when most of the lamellocytes from the first wave are fully differentiated but still mostly in circulation, as they are just starting to adhere to the wasp egg. This was added to the Methods section.

      How were levels of methionine and adenosine used in ex vivo cultures selected? This is alluded to in lines 158-159, but no references are provided.

      The concentrations are based on measurements of actual hemolymph concentrations in wild-type larvae in the case of methionine, and in the case of adenosine, we used a slightly higher concentration than measured in the adgf-a mutant to have a sufficiently high concentration to allow adenosine to flow into the hemocytes. This is now added to the Methods section.

      Minor points:

      Response to all minor points: Thank you, the errors have now been fixed.

      (1) Line 186 - spell out MTA - 5-methylthioadenosine.

      (2) Lines 196-212 (and elsewhere) - spelling out cystathione rather than using the abbreviation CTH is recommended because the gene cystathione gamma-lyase (Cth) is also discussed in this paragraph. Using the full name of the metabolite will reduce confusion.

      We instead used the full name cystathionine γ-lyase, since it appears only three times, while CTH appears many more times, including in the figures.

      (3) Figure 2 - supplement 2: please include scale bars.

      (4) Line 303 - spelling error: "trabsmethylation" should be "transmethylation".

      (5) Line 373 - spelling error: "higer" should be "higher".

      Reviewer #2 (Recommendations for the authors):

      For the RNAseq data, it's unclear whether the gene expression data in Figures 2 and 4 include biological replicates, so it's unclear how much weight we should place on them.

      The replicate numbers for the RNAseq data were added to the figure legends, the TPM values for the selected genes shown in the figures are in S1_Data, and a new S4_Data file with the complete RNAseq data (TPM and DESeq2) has been added to this revised version.

      The representation of these data is also a weakness: Figure 2 shows measurements of transcripts per million, but we don't know what would be high or low expression on this scale.

      We have added the actual TPM values for each cell in the RNAseq heatmaps in Figure 2, Figure 2-figure supplement 3, and Figure 4 to make them more readable. Although it is debatable what counts as high or low expression, to provide at least a point of comparison we have added to the figure legends the information that only 20% of the genes in the presented RNAseq data show expression higher than 15 TPM.

      Figure 4 is intended to show expression changes with treatment, but expression changes should be shown on a log scale (so that increases and decreases in expression are shown symmetrically) and should be normalized to some standard level (such as uninfected lamellocytes).

      The bars in Figure 4C,D show the fold change (this is now stated in the y-axis legend) compared to the 0 h (=uninfected) Adk3 samples. The reason for this visualization is that we wanted to show (1) the differences in levels between Adk3 and Adk2, and between Ak1 and Ak2, respectively, and at the same time (2) the differences between uninfected and infected Adk3 and Ak1. In our opinion, these fold-change differences are also much more visible on a linear rather than a log scale.

      Reviewer #3 (Recommendations for the authors):

      (1) It might be interesting to test how general this finding would be. How about Bacterial or fungal infection? The authors may also try genetic activation of immune pathways, e.g. Toll, Imd, JAK/STAT.

      Although we would also like to support our results in different systems, we believe that our results are already strong enough to propose the final hypothesis and publish it as soon as possible, so that it can be tested by other researchers in systems and contexts other than the Drosophila immune response.

      (2) How does the metabolic pathway get activated? Enzyme activity? Transporters? Please test or at least discuss the possible mechanism.

      The response is already provided above in the Reviewer #3 (Public review) section.

      (3) The authors might test overexpression or genetic activation of the SAM transmethylation pathway.

      Although we agree that this would potentially strengthen our study, it may not be easy to increase the activity of the SAM transmethylation pathway; simply overexpressing the enzymes may not be enough, since the regulation occurs primarily through the utilization of SAM by methyltransferases, of which there are hundreds, affecting numerous processes.

      (4) Supplementation of adenosine to the Ahcy-RNAi larvae would also support their conclusion.

      Again, this is not an easy experiment: dietary supplementation would not work, and direct injection of adenosine into the hemolymph would not last long enough, as adenosine would be quickly removed.

      (5) It is interesting to test genetically the requirement of some transporters, especially for gb, which is upregulated upon infection.

      Although this would be an interesting experiment, it is beyond the scope of this study; we did not aim to study the role of the SAM transmethylation pathway itself or its regulation, only its overall activity and its role in adenosine production.

    1. Reviewer #2 (Public review):

      Brickwedde et al. investigate the role of alpha oscillations in allocating intermodal attention. A first EEG study is followed up with a MEG study that largely replicates the pattern of results (with small to be expected differences). They conclude that a brief increase in the amplitude of auditory and visual stimulus-driven continuous (steady-state) brain responses prior to the presentation of an auditory - but not visual - target speaks to the modulating role of alpha that leads them to revise a prevalent model of gating-by-inhibition.

      Overall, this is an interesting study on a timely question, conducted with methods and analysis that are state-of-the-art. I am particularly impressed by the author's decision to replicate the earlier EEG experiment in MEG following the reviewer's comments on the original submission. Evidently, great care was taken to accommodate the reviewer's suggestions.

      Nevertheless, I am struggling with the report for two main reasons: It is difficult to follow the rationale of the study, due to structural issues with the narrative and missing information or justifications for design and analysis decisions, and I am not convinced that the evidence is strong, or even relevant enough for revising the mentioned alpha inhibition theory. Both points are detailed further below.

      Strength/relevance of evidence for model revision: The main argument rests on 1) a rather sustained alpha effect following the modality cue, 2) a rather transient effect on steady-state responses just before the expected presentation of a stimulus, and 3) a correlation between those two. Wouldn't the authors expect a sustained effect on sensory processing, as measured by steady-state amplitude irrespective of which of the scenarios described in Figure 1A (original vs revised alpha inhibition theory) applies? Also, doesn't this speak to the role of expectation effects due to consistent stimulus timing? An alternative explanation for the results may look like this: Modality-general increased steady-state responses prior to the expected audio stimulus onset are due to increased attention/vigilance. This effect may be exclusive (or more pronounced) in the attend-audio condition due to higher precision in temporal processing in the auditory sense or, vice versa, too smeared in time due to the inferior temporal resolution of visual processing for the attend-vision condition to be picked up consistently. As expectation effects will build up over the course of the experiment, i.e., while the participant is learning about the consistent stimulus timing, the correlation with alpha power may then be explained by a similar but potentially unrelated increase in alpha power over time.

      Structural issues with the narrative and missing information: Here, I am mostly concerned with how this makes the research difficult to access for the reader. I list the major points below:

      In the introduction, the authors pit the original idea about alpha's role in gating against some recent contradictory results. If it is the aim of the study to provide evidence for either/or, predictions for the results from each perspective are missing. Also, it remains unclear how this relates to the distinction between the original vs revised alpha inhibition theory (Fig. 1A). Relatedly, if this revision is an outcome rather than a postulation for this study, it shouldn't be featured in the first figure.

      The analysis of the intermodulation frequency makes a surprise entrance at the end of the Results section without an introduction as to its relevance for the study. This is provided only in the discussion, but with reference to multisensory integration, whereas the main focus of the study is attention focussed on one sense. (Relatedly, the reference to "theta oscillations" in this section seems unclear without a reference to the overlapping frequency range, and potentially more explanation.) Overall, if there's no immediate relevance to this analysis, I would suggest removing it.

  2. brightspace.ccc.edu brightspace.ccc.edu
    1. You do notneed to be in love with that fellow human being. You do not need to make him your partner nor yourfriend, nor nothing. Just respect his right for life. That’s it

      Knowing what's happening now in Palestine, this song is even more emotional and jarring to hear. And knowing that it was sung in 2009? I had no idea how long this conflict had been enduring, and this song really brings out the heartache that the conflict causes. It also portrays how both sides are suffering and humanizes everyone - something that is really missing in common discourse today and on social media, where you often see people villainizing one side or the other.

    1. Works Cited:

      At first, I thought that each page would feature citations to the sources where the info was found, but I see that it's just repeated on each page, and the artifacts cited here are also repeated on the "Artifacts" page.

      I also only see two secondary sources, both of which aren't the most credible or rigorous. Would exhibit-goers be all that impressed with this? And why no use of assigned readings?

    1. The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing.

      I think it's interesting how the definition of this term holds so much meaning, because it can cover anything from simple misinterpretations of something to literal sexual assault. But I also think that "cancel culture" doesn't really address the problem of the person's actions. I've seen a lot of people on social media be "canceled" but then go off social media for a bit and come back like nothing happened.

    2. The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing.

      It's interesting how the term "cancel culture" has evolved and is used in different contexts. While some view it as a way to hold individuals and institutions accountable for harmful actions, others see it as an unfair form of public shaming that can have disproportionate consequences. This makes me wonder: where should we draw the line between necessary accountability and excessive punishment?

    3. The term “cancel culture” can be used for public shaming and criticism, but is used in a variety of ways, and it doesn’t refer to just one thing. The offense that someone is being canceled for can range from sexual assault of minors (e.g., R. Kelly, Woody Allen, Kevin Spacey), to minor offenses or even misinterpretations. The consequences for being “canceled” can range from simply the experience of being criticized, to loss of job or criminal charges. Given the huge range of things “cancel culture” can be referring to, we’ll mostly stick to talking here about “public shaming,” and “public criticism.”

      I find it really interesting how the term “cancel culture” has evolved and been used in so many different ways. It’s fascinating how the term can refer to something as serious as criminal offenses like sexual assault, but it can also apply to someone being criticized for something that might not be as severe, or even a misunderstanding. It’s also a bit troubling to think about how a single comment or mistake can lead to someone being “canceled,” especially when it’s based on a misinterpretation or something that wasn’t meant to harm.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer 1:

      Point 1 of public reviews and point 2 of recommendations to authors. 

      Temporal ambiguity in credit assignment: While the current design provides clear task conditions, future studies could explore more ambiguous scenarios to further reflect real-world complexity…. The role of ambiguity is very important for the credit assignment process. However, in the current task design, the instruction of the task design almost eliminates the ambiguity of which the trial's choice should be assigned credit to. The authors claim the real-world complexity of credit assignment in this task design. However, the real-world complexity of this type of temporal credit assignment involves this type of temporal ambiguity of responsibility as causal events. I am curious about the consequence of increasing the complexity of the credit assignment process, which is closer to the complexity in the real world.

      We agree that the structure of causal relationships can be more ambiguous in real-world contexts. However, we also believe that there are multiple ways in which a task might approach “real-world complexity”. One way is by increasing the ambiguity in the relationships between choices and outcomes (as done by Jocham et al., 2016). Another is by adding interim decisions that must be completed before viewing the outcome of a first choice, which mimics task structures such as the cooking tasks described in the introduction. In such tasks, the temporal structure of the actions may be irrelevant, but the relationship between choice identities and the actions is critical for being effective in the task (e.g., it doesn’t matter whether I add spice before or after the salt; all I need to know is that adding spice will result in spicy soup). While ambiguity about either form of causal relation is clearly an important part of real-world complexity, and would make credit assignment harder, our study focuses on how links between outcomes and specific past choice identities are created at the neural level when they are known to be causal.

We consequently felt it necessary to resolve temporal ambiguity for participants. Instructing participants on the structure of the task allowed us to make assumptions about how credit assignment for choice identities should proceed (assign credit to the choice made N trials back) and allowed us to make positive predictions about the content of representations in OFC when viewing an outcome. This gave us the highest power to detect multivariate information about the causal choice and the highest interpretability of such findings.

      In contrast, if we had not resolved this ambiguity, it would be difficult to tell if incorrect decoding from the classifier resulted from noise in the neural signal, or if on that trial participants were assigning credit to non-causal choices that they erroneously believed to have caused the outcome due to the perceived temporal structure. We believe this would have ultimately decreased our power to determine whether representations of the causal choice were present at the time of outcome because we would have to make assumptions about what counts as a “true” causal representation. 

We have commented on this in the discussion (p. 13):

“While our study was designed to focus on the complexity of assigning credit in tasks with different known causal structures, another important component of real-world credit assignment is temporal ambiguity. To isolate the mechanisms which create associations between specific choices and specific outcomes, we instructed participants on the causal structure of each task, removing temporal ambiguity about the causal choice. However, our results are largely congruent with previously reported results in tasks that dissolved the typical experimental trial structure, producing temporal ambiguity, and which observed more pronounced spreading of effect, in addition to appropriate credit assignment (Jocham et al., 2016). Namely, this study found that activation in the lOFC increased only when participants received rewards contingent on a previous action, an effect that was more pronounced in subjects whose behavior reflected more accurate credit assignment. This suggests a shared lOFC mechanism for credit assignment in different types of complex environments. Whether these mechanisms extend to situations where the temporal causal structure is completely unknown remains an important question.”

      Point 2 of public reviews and point 1 of recommendations to authors

      Role of task structure understanding: The difference in task comprehension between human subjects in this study and animal subjects in previous studies offers an interesting point of comparison…. The credit assignment involves the resolution of the ambiguity in which the causal responsibility of an outcome event is assigned to one of the preceding events. In the original study of Walton and his colleagues, the monkey subjects could not be instructed on the task structure defining the causal relationships of the events. Then, the authors of the original study observed the spreading of the credit assignments to the "irrelevant" events, which did not occur in the same trial of the outcome event but to the events (choices) in neighbouring trials. This aberrant pattern of the credit assignment can be due to the malfunctions of the credit assignment per se or the general confusion of the task structure on the part of the monkey subjects. In the current study design, the subjects are humans and they are not confused about the task structure. Consistently, it is well known that human subjects rarely show the same patterns of the "spreading of credit assignment". So the implicit mechanism of the credit assignment process involves the understanding of the task structure. In the current study, there are clearly demarked task conditions that almost resolve the ambiguity inherent in the credit assignment process. Yet, the focus of the current analysis stops short of elucidating the role of understanding the task structure. It would be great if the authors could comment on the general difference in the process between the conditions, whether it is behavioral or neural.

      We would like to thank the reviewer for making this important point. We believe that understanding the structure of the credit-assignment problem above is quite important, at least for the type of credit assignment described here. That is, because participants know that the outcome viewed is caused by the choice they made, 0 or 1 trials into the past, they can flexibly link choice identities to the newly observed outcomes as the probabilities change. Note, however, that this is already very challenging in the 1-back condition because participants need to track the two independently changing probabilities. We believe this is critical to address the questions we aimed to answer with this experiment, as described above. 

We agree that this might be quite different from previous studies done with non-human primates, which also included many more training trials and lesions to the lOFC. Both of these aspects could manifest as differences in task performance and processing at the behavioral and neural levels, respectively. Consistent with this possibility, in our task we found no differences in credit spreading between conditions, suggesting that humans were quite precise in both, despite causal relationships being harder to track in the “indirect transition condition”. This lack of credit spreading could be because humans better understood the task structure compared to macaques, or be due to differences in functioning of the OFC and other regions. Because all participants were trained to understand, and were cued with explicit knowledge of, the task structure, it is difficult to isolate its role; we would need another condition in which participants were not instructed about the task structure. This would also be an interesting study, and we leave it to future research to parse the contributions of task-structure ambiguity to credit assignment.

      Point 3 of public reviews. 

      The authors used a sophisticated method of multivariate pattern analysis to find the neural correlate of the pending representation of the previous choice, which will be used for the credit assignment process in the later trials. The authors tend to use expressions that these representations are maintained throughout this intervening period. However, the analysis period is specifically at the feedback period, which is irrelevant to the credit assignment of the immediately preceding choice. This task period can interfere with the ongoing credit assignment process. Thus, rather than the passive process of maintaining the information of the previous choice, the activity of this specific period can mean the active process of protecting the information from interfering and irrelevant information. It would be great if the authors could comment on this important interpretational issue.

We agree that the lFPC is likely actively protecting the pending choice representation from interference from the most recent choice, for future credit assignment. This interpretation is largely congruent with the idea of “prospective memory” (e.g., Burgess, Gonen-Yaacovi, & Volle, 2011), in which the lFPC can be thought of as protecting information that will be needed in the future but is not currently needed for ongoing behavior. That said, from our study alone it is difficult to make claims about whether the lFPC is actively protecting this information from potentially interfering processes. Our “indirect transition condition” only contains trials where there is incoming, potentially interfering information about new outcomes, but no trials that might avoid interference (e.g., where an interim choice is made but there is nothing to be learned from it). We comment on this important future direction on page 14:

      “One interpretation of these results is that the lFPC actively protects information about causal choices when potentially interfering information must be processed. Future studies will be needed to determine if the lFPC’s contributions are specific to these instances of potential interference, and whether this is a passive or active process”

      Point 3 of recommendation to authors 

      A slightly minor, but still important issue is the interpretation of the role of lOFC. The authors compared the observed patterns of the credit assignment to the ideal patterns of credit assignment. Then, the similarity between these two matrices is used to find the associated brain region. In the assumption that lOFC is involved in the optimal credit assignment, the result seems reasonable. But as mentioned above, the current design involves the heavy role of understanding the task structure, it is debatable whether the lOFC is just involved in the credit assignment process or a more general role of representing the task structure.

We agree that this is an important distinction to make, and it is very likely that multiple regions of the OFC carry information about the task structure; the extent to which participants understood this structure may be reflected in behavioral estimates of credit assignment or the overall patterns of the matrices (though all participants verbalized the correct structure prior to the task). However, we believe that in our task the lOFC is specifically involved in credit assignment because of the content of the information we decoded. We demonstrated that the lOFC and HPC carry information about the causal choice during the outcome. These results cannot be explained by differences in understanding of the task structure, because that understanding would have been consistent across trials where participants chose either shape identity. Thus, a classifier could not use this to separate these types of trials and would reflect chance decoding.

One interpretation of the lOFC’s role in credit assignment is that it is particularly important when a model of the task structure has to be used to assign credit appropriately. Here, we show that the lOFC reinstates specific causal representations precisely at the time credit needs to be assigned, representations which are appropriate to participants’ knowledge of the task structure. These representations may exist alongside representations of the task structure, in the lOFC and other regions of the brain (Park et al., 2020; Boorman et al., 2021; Seo and Lee, 2010; Schuck et al., 2016). We have added the following sentences to clarify our perspective on this point in the discussion (p. 13):

      “Our results from the “indirect transition” condition show that these patterns are not merely representations of the most recent choice but are representations of the causal choice given the current task structure, and may exist alongside representations of the task structure, in the lOFC and elsewhere (Boorman et al., 2021; Park et al., 2020; Schuck et al., 2016; Seo & Lee, 2010).”

      Point 4 of public reviews and point 4 of recommendation to authors

      Broader neural involvement: While the focus on specific regions of interest (ROIs) provided clear results, future studies could benefit from a whole-brain analysis approach to provide a more comprehensive understanding of the neural networks involved in credit assignment… Also, given the ROI constraint of the analysis, the other neural structure may be involved in representing the task structure but not detected in the current analysis

Given our strong a priori hypotheses about regions of interest (ROIs) in this study, we focused our analyses on these specific areas. This choice was based on the theoretical and empirical grounds that guided our investigation. However, we thank the reviewer for pointing this out and agree that there could be other areas critical to credit assignment that we did not examine.

      We conducted the same searchlight decoding procedure on a whole brain map and corrected for multiple comparisons using TFCE. We found no significant regions of the brain in the “direct transition condition” but did find other significant regions in our information connectivity analysis of the “indirect transition condition”. In addition to replicating the effects in lOFC and HPC, we also found a region of mOFC which showed a strong correlation with pending choice in lFPC. It’s difficult to say whether this region is involved in credit assignment per se, because we did not see this region in the “direct transition condition” and so we cannot say that it is consistently related to this process. However, the mOFC is thought to be critical to representing the current task state (Schuck et al., 2016), and the task structure (Park et al., 2020). In our task, it could be a critical region for communicating how to assign credit given the more complex task structure of the “indirect transition condition” but more evidence would be needed to support this interpretation. 

      For now, we have added the results of this whole brain analysis to a new supplementary figure S7 (page 41), and all unthresholded maps have been deposited in a Neurovault repository, which is linked in the paper, for interested readers to assess.  

      Minor points:

      There are some missing and confusing details in the Figure reference in the main text. For example, references to Figure 3 are almost missing in the section "Pending item representations in FPl during indirect transitions predict credit assignment in lOFC". For readability, the authors should improve this point in this section and other sections.

      Thank you to the reviewer for pointing this out. We have now added references to Figure 3 on page 8:

      “Our analysis revealed a cluster of voxels specifically within the right lFPC ([x,y,z] = [28, 54, 8], t(19) = 3.74, pTFCE <0.05 ROI-corrected; left hemisphere all pTFCE > 0.1, Fig. 3A)”

      And on page 10: 

“Specifically, we found significant correlations in decoding distance between lFPC and bilateral lOFC ([x,y,z] = [-32, 24, -22], t(19) = 3.81; [x,y,z] = [20, 38, -14], t(19) = 3.87; pTFCE <0.05 ROI-corrected) and bilateral HC ([x,y,z] = [-28, -10, -24], t(19) = 3.41; [x,y,z] = [22, -10, -24], t(19) = 4.21; pTFCE <0.05 ROI-corrected; Fig. 3C).”

      Task instructions for the two conditions (direct and indirect) play important roles in the study. If possible, please include the following parts in the figures and descriptions in the introduction and/or results sections.

      We have now included a short description of the condition instructions beginning on page 5: 

      “Participants were instructed about which condition they were in with a screen displaying “Your latest choice” in the direct transition condition, and “Your previous choice” in the indirect condition.”

      And have modified Figure 1 to include the instructions in the title of each condition. We thought this to be the most parsimonious solution so that the choice options in the examples were not occluded. 

      The subject sample size might be slightly too small in the current standards. Please give some justifications.

      We originally selected the sample size for this study to be commensurate with previous studies that looked for similar behavioral and neural effects (see Boorman et al., 2016; Howard et al., 2015; Jocham et al., 2016). This has been mentioned in the “methods” section on page 24.  

However, to be thorough, we performed a power analysis of this sample size using simulations based on an independently collected, unpublished data set. In this data set, 28 participants completed an associative learning task similar to the task in the current manuscript. We trained a classifier to decode the causal choice option at the time of feedback, using the same searchlight and cross-validation procedures described in the current manuscript, for the same lateral OFC ROI. We calculated power for various sample sizes by drawing N participants with replacement 1000 times, for values of N ranging from 15 to 25. After sampling the participants, we tested for significant decoding of the causal choice within the subset of data, using small-volume TFCE correction to correct for multiple comparisons. Finally, we calculated the proportion of these samples that were significant at a level of pTFCE < .05.
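For concreteness, the resampling procedure described above can be sketched as follows. This is an illustrative simplification, not the actual pipeline: the pilot data are synthetic, and a fixed one-sample t threshold stands in for the small-volume TFCE-corrected group test.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sample_t(sample):
    """One-sample t statistic against zero (i.e., chance-level decoding)."""
    sample = np.asarray(sample, dtype=float)
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    return sample.mean() / se

def bootstrap_power(subject_stats, n, n_boot=1000, t_crit=1.73):
    """Estimate power at sample size n by resampling subjects with replacement.

    subject_stats holds one effect estimate per pilot subject (e.g., ROI
    decoding accuracy minus chance). t_crit is a stand-in for the TFCE-
    corrected group test (roughly one-tailed p < .05 for df near 19).
    """
    hits = 0
    for _ in range(n_boot):
        # Draw n subjects with replacement and re-run the group test.
        sample = rng.choice(subject_stats, size=n, replace=True)
        if one_sample_t(sample) > t_crit:
            hits += 1
    # Power = proportion of resampled groups showing a significant effect.
    return hits / n_boot

# Hypothetical pilot data: 28 subjects, above-chance decoding on average.
pilot = rng.normal(loc=0.05, scale=0.06, size=28)
power_at_20 = bootstrap_power(pilot, n=20)
```

Sweeping `n` from 15 to 25 and plotting the resulting power values reproduces the shape of the analysis reported here; the precise numbers depend entirely on the pilot effect sizes.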

      The results of this procedure show that an N of 20 would result in 84.2% power, which is slightly above the typically acceptable level of 80%. We have added the following sentences to the methods section on page 25: 

“Using an independent, unpublished data set, we conducted a power analysis for the desired neural effect in lOFC. We found that this number of participants had 84% power to detect this effect (Fig. S8).”

      We also added the following figure to the supplemental figures page (42):

      Reviewer 2:

      I have several concerns regarding the causality analyses in this study. While Multivariate analyses of information connectivity between regions are interesting and appear rigorous, they make some assumptions about the nature of the input data. It is unclear if fMRI with its poor temporal resolution (in addition to possible region-specific heterogeneity in the readouts), can be coupled with these casual analysis methods to meaningfully study dynamics on a decision task where temporal dynamics is a core component (i.e., delay). It would be helpful to include more information/justification on the methods for inferring relationships across regions from fMRI data. Along this line, discussing the reported findings in light of these limitations would be essential.

We agree that fMRI is limited for capturing fast neural dynamics, and that it can be difficult to separate events that occur within a few seconds. However, we designed the information connectivity analysis to maximally separate the events in question – the representations of the causal choice being held in a pending state, and the representation of the causal choice during credit assignment. These events were separated by at least 10 seconds, and by 15 seconds on average, which is commensurate with recommended intervals for disentangling information in such analyses (Mumford et al., 2012, 2014; see also van Loon et al., 2018, eLife, for an example of fluctuations in decodability over time). This feature of our task design may not have been clear because information connectivity analyses are typically performed within the same task period. We clarify this point on page 32:

      “Note that the decoding fidelity metric at each time point represents the decodability of the same choice at different phases of the task. These phases were separated by at least 10 seconds and 15 seconds on average, which can be sufficient for disentangling unique activity (Mumford et al., 2012, 2014).”
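The trial-wise correlation at the heart of this information connectivity measure can be sketched roughly as below. The data, region names, and scaling here are illustrative assumptions rather than the exact pipeline; the key point is that signed hyperplane distances from two classifiers, taken from task phases separated by 10+ seconds, are correlated across trials.

```python
import numpy as np

rng = np.random.default_rng(1)

def information_connectivity(dist_a, dist_b):
    """Pearson correlation of trial-wise signed classifier distances.

    dist_a: signed hyperplane distances for the pending-choice classifier
    in lFPC during the interim-choice phase; dist_b: distances for the
    causal-choice classifier in lOFC (or HC) at feedback on the same
    trials, at least 10 s later. Signs are aligned so positive means
    evidence for the true choice label, which lets decoding failures in
    one region covary with failures in the other, not just successes.
    """
    a = np.asarray(dist_a, dtype=float)
    b = np.asarray(dist_b, dtype=float)
    # z-score each region's distances, then average the products
    # (equivalent to the Pearson correlation coefficient).
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Hypothetical distances for 100 trials sharing trial-wise fluctuations.
shared = rng.normal(size=100)
lfpc_dist = 0.7 * shared + rng.normal(scale=0.7, size=100)
lofc_dist = 0.7 * shared + rng.normal(scale=0.7, size=100)
r = information_connectivity(lfpc_dist, lofc_dist)
```

Because the two distance vectors come from well-separated task phases, a positive `r` reflects shared trial-by-trial information fidelity rather than overlap of the hemodynamic responses to a single event.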

      However, we agree with the reviewer that the limitations of fMRI make it difficult to precisely determine how roles of the OFC and lFPC might change over time, and whether other regions may contribute to information transfer at times scales which cannot be detected by fMRI. Further, we do not wish to imply causality between lFPC and lOFC (something we believe we do not claim in the paper), only that information strength in lFPC predicts subsequent strength of the same information in the OFC and HC. We have clarified this limitation on page 14:

“Although we show evidence that lFPC is involved in maintaining specific content about causal choices during interim choices, the limited temporal resolution of fMRI makes it difficult to tell if other regions may be supporting the learning processes at timescales not detectable in the BOLD response. Thus, it is possible that the network of regions supporting credit assignment in complex tasks may be much larger. Our results provide a critical first step in discerning the nature of interactions between cognitive subsystems that make different contributions to the learning process in these complex tasks.”

      Reviewer 3:  

      Point 1 of public reviews:

      They do find (not surprisingly) that the one-back task is harder. It would be good to ensure that the reason that they had more trouble detecting direct HC & lOFC effects on the harder task was not because the task is harder and thus that there are more learning failures on the harder oneback task. (I suspect their explanation that it is mediated by FPl is likely to be correct. But it would be nice to do some subsampling of the zero-back task [matched to the success rate of the one-back task] to ensure that they still see the direct HC and lOFC there).

We would like to thank the reviewer for this comment and agree that the “indirect transition condition” is more difficult than the “direct transition condition”. However, in this task it is difficult to have an explicit measure of learning failures per se, because the “correctness” of a choice is to some extent subjective (i.e., based on the gift card preference and the computational model). We could infer when learning failures occur through the computational model by looking at trials in which participants made choices that the model would consider improbable (i.e., non-reward-maximizing) while accounting for outcome preference. However, there are also a myriad of other possible explanations for these choices, such as exploratory/confirmatory strategies, lapses in attention, etc. Thus, we could not guarantee that the two conditions would be uniquely matched in difficulty with specific regard to learning even if we subsampled these trials. We feel this issue would be better left to future experiments that can specifically compare learning failures. We have now addressed this point when discussing the model on page 31:

      “Note that learning failures are not trivial to identify in our paradigm and model, because every choice is based on a participant’s preference between gift card outcomes, and the ability of the computational model to accurately estimate participants’ beliefs in the stimulus-outcome transition probabilities.”

      Point 2 of public reviews:

      The evidence that they present in the main text (Figure 3) that the HC and lOFC are mediated by FPl is a correlation. I found the evidence presented in Supplemental Figure 7 to be much more convincing. As I understand it, what they are showing in SF7 is that when FPl decodes the cue, then (and only then) HC and lOFC decode the cue. If my understanding is correct, then this is a much cleaner explanation for what is going on than the secondary correlation analysis. If my understanding here is incorrect, then they should provide a better explanation of what is going on so as to not confuse the reader.

SF7 (now Figures 3C and 3D) does show that positive decoding in the HC and lOFC is more likely to occur when there is positive decoding in lFPC. However, the analyses shown in these figures are only meant to be control analyses that further characterize what is being captured, but not necessarily implied, by the information connectivity analysis. For example, in principle the classifier might never correctly decode a choice label in the lOFC or HC while still getting closer to the hyperplane when the lFPC patterns are correctly decoded. This would lead to a positive correlation, but a difficult-to-interpret result, since patterns in lOFC and HPC are incorrect. Figure SF7A (now Fig. 3C) shows that this is not the case: lateral OFC and HC have higher-than-chance positive decoding when lFPC has positive decoding. Figure SF7B (now Fig. 3D) shows that we can decode that information even if a new hyperplane is constructed. However, both cases carry less information about the relationship between these regions because they do not include the trials where the lOFC/HC and lFPC classifiers were incorrect at the same time. The correlation in Figure 3B includes these failures, giving a more holistic picture of the data. We therefore try to concisely clarify this point on page 10:

      “These signed distances allow us to relate both success in decoding information, as well as failures, between regions.”

      And here on page 10: 

      “Subsequent analyses confirmed that this effect was due to these regions showing a significant increase in positive (correct) decoding in trials where pending information could be positively (correctly) decoded in lFPC, and not simply due to a reduction in incorrect information fidelity (see Fig. 3C & 3D).”

      And have integrated these figures on page 9:

      Point 3 of public reviews:

      I like the idea of "credit spreading" across trials (Figure 1E). I think that credit spreading in each direction (into the past [lower left] and into the future [upper right]) is not equivalent. This can be seen in Figure 1D, where the two tasks show credit spreading differently. I think a lot more could be studied here. Does credit spreading in each of these directions decode in interesting ways in different places in the brain?

We agree that this is an interesting question, because each component of the off-diagonal (upper and lower triangles) may reflect qualitatively different processes of credit spreading. However, we believe this analysis is difficult to carry out with the current dataset for two reasons. First, we designed this study to ask specifically about the information represented in key credit assignment regions during precise credit assignment, meaning we did not optimize the task to induce credit spreading at any point. Indeed, our efforts to train participants on the task were meant to ensure they would correctly assign credit as much as possible. Figure 1F shows that the regression coefficients representing credit spreading in each condition are near zero (in the negative direction), with little individual variability compared to the credit assignment coefficients. Thus, any analysis aiming to test for credit spreading would unfortunately be poorly powered. Studies such as Jocham et al. (2016), with more variability in causal structures, or studies that create ambiguity about the causal structure by dissolving the typical trial structure, would be better suited to address this interesting question. The second reason why such an analysis would be challenging is that, due to our design, it is difficult to intuitively determine what kind of information should be coded by neural regions when credit spreads to the upper diagonal, since these cells reflect current outcomes that are being linked to future choices.

      Replace all the FPl with LFPC (lateral frontal polar cortex)

We have now replaced “FPl” with “LFPC” throughout the text and figures.

    1. Reviewer #1 (Public review):

      Summary:

      This article investigates the phenotype of macrophages with a pathogenic role in arthritis, particularly focusing on arthritis induced by immune checkpoint inhibitor (ICI) therapy.

      Building on prior data from monocyte-macrophage coculture with fibroblasts, the authors hypothesized a unique role for the combined actions of prostaglandin PGE2 and TNF. The authors studied this combined state using an in vitro model with macrophages derived from monocytes of healthy donors. They complemented this with single-cell transcriptomic and epigenetic data from patients with ICI-RA, specifically, macrophages sorted out of synovial fluid and tissue samples. The study addressed critical questions regarding the regulation of PGE2 and TNF: Are their actions co-regulated or antagonistic? How do they interact with IFN-γ in shaping macrophage responses?

      This study is the first to specifically investigate a macrophage subset responsive to the PGE2 and TNF combination in the context of ICI-RA, describes a new and easily reproducible in vitro model, and studies the role of IFNgamma regulation of this particular Mф subset.

      Strengths:

      Methodological quality: The authors employed a robust combination of approaches, including validation of bulk RNA-seq findings through complementary methods. The methods description is excellent and allows for reproducible research. Importantly, the authors compared their in vitro model with ex vivo single-cell data, demonstrating that their model accurately reflects the molecular mechanisms driving the pathogenicity of this macrophage subset.

      Weaknesses:

      Introduction: The introduction lacks a paragraph providing an overview of ICI-induced arthritis pathogenesis and a comparison with other types of arthritis. Including this would help contextualize the study for a broader audience.

      Results Section: At the beginning of the results section, the experimental setup should be described in greater detail to make an easier transition into the results for the reader, rather than relying just on references to Figure 1 captions.

      There is insufficient comparison between single-cell RNA-seq data from ICI-induced arthritis and previously published single-cell RA datasets. Such a comparison may include DEGs and GSEA, pathway analysis comparison for similar subsets of cells. Ideally, an integration with previous datasets with RA-tissue-derived primary monocytes would allow for a direct comparison of subsets and their transcriptomic features.

      While it's understandable that arthritis samples are limited in numbers and myeloid cell numbers, it would still be interesting to see the results of PGE2+TNF in vitro stimulation on the primary RA or ICI-RA macrophages. It would be valuable to see RNA-Seq signatures of patient cell reactivation in comparison to primary stimulation of healthy donor-derived monocytes.

Discussion: Prior single-cell studies of RA and RA macrophage subpopulations from 2019, 2020, and 2023 publications deserve more discussion. A thorough comparison with these datasets would place the study in a broader scientific context.

Creating an integrated RA myeloid cell atlas that combines ICI-RA data into the RA landscape would be ideal to add value to the field.

As one of the next research goals, TNF blockade data in RA and ICI-RA patients would be interesting to add to such an integrated atlas. Combining responders and non-responders to TNF blockade would help to understand patient stratification with the myeloid pathogenic phenotypes. It would be great to read the authors' opinion on this in the Discussion section.

Conclusion: The authors demonstrated that while PGE2 maintains the inflammatory profile of macrophages, it also induces a distinct phenotype under simultaneous PGE2 and TNF treatment. The study of this specific subset in single-cell data from ICI-RA patients sheds light on the pathogenic mechanisms underlying this condition; however, how it compares with conventional RA is not clear from the manuscript.

Given the substantial incidence of ICI-induced autoimmune arthritis, understanding the unique macrophage subsets involved, in order to target them therapeutically in the future, is an important challenge. The findings are significant for immunologists, cancer researchers, and specialists in autoimmune diseases, making the study relevant to a broad scientific audience.

    2. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This article investigates the phenotype of macrophages with a pathogenic role in arthritis, particularly focusing on arthritis induced by immune checkpoint inhibitor (ICI) therapy.

      Building on prior data from monocyte-macrophage coculture with fibroblasts, the authors hypothesized a unique role for the combined actions of prostaglandin PGE2 and TNF. The authors studied this combined state using an in vitro model with macrophages derived from monocytes of healthy donors. They complemented this with single-cell transcriptomic and epigenetic data from patients with ICI-RA, specifically, macrophages sorted out of synovial fluid and tissue samples. The study addressed critical questions regarding the regulation of PGE2 and TNF: Are their actions co-regulated or antagonistic? How do they interact with IFN-γ in shaping macrophage responses?

      This study is the first to specifically investigate a macrophage subset responsive to the PGE2 and TNF combination in the context of ICI-RA, describes a new and easily reproducible in vitro model, and studies the role of IFN-γ in regulating this particular Mф subset.

      Strengths:

      Methodological quality: The authors employed a robust combination of approaches, including validation of bulk RNA-seq findings through complementary methods. The methods description is excellent and allows for reproducible research. Importantly, the authors compared their in vitro model with ex vivo single-cell data, demonstrating that their model accurately reflects the molecular mechanisms driving the pathogenicity of this macrophage subset.

      Weaknesses:

      Introduction: The introduction lacks a paragraph providing an overview of ICI-induced arthritis pathogenesis and a comparison with other types of arthritis. Including this would help contextualize the study for a broader audience.

      Thank you for this suggestion, we will add a paragraph on ICI-arthritis to intro.

      Results Section: At the beginning of the results section, the experimental setup should be described in greater detail to make an easier transition into the results for the reader, rather than relying just on references to Figure 1 captions.

      We will clarify the experimental setup.

      There is insufficient comparison between single-cell RNA-seq data from ICI-induced arthritis and previously published single-cell RA datasets. Such a comparison may include DEGs and GSEA, pathway analysis comparison for similar subsets of cells. Ideally, an integration with previous datasets with RA-tissue-derived primary monocytes would allow for a direct comparison of subsets and their transcriptomic features.

      This is a great idea; we will integrate the data sets and, if batch correction is successful, will present this analysis.

      While it's understandable that arthritis samples are limited in numbers and myeloid cell numbers, it would still be interesting to see the results of PGE2+TNF in vitro stimulation on the primary RA or ICI-RA macrophages. It would be valuable to see RNA-Seq signatures of patient cell reactivation in comparison to primary stimulation of healthy donor-derived monocytes.

      We agree that this would be interesting, but given the limited samples and their distribution amongst many studies and investigators, this is beyond the scope of the current study.

      Discussion: Prior single-cell studies of RA and RA macrophage subpopulations from 2019, 2020, 2023 publications deserve more discussion. A thorough comparison with these datasets would place the study in a broader scientific context.

      Creating an integrated RA myeloid cell atlas that combines ICI-RA data into the RA landscape would be ideal to add value to the field.

      As one of the next research goals, TNF blockade data in RA and ICI-RA patients would be interesting to add to such an integrated atlas. Combining responders and non-responders to TNF blockade would help to understand patient stratification with the myeloid pathogenic phenotypes. It would be great to read the authors' opinion on this in the Discussion section.

      We will be happy to improve the discussion by including these topics.

      Conclusion: The authors demonstrated that while PGE2 maintains the inflammatory profile of macrophages, it also induces a distinct phenotype in simultaneous PGE2 and TNF treatment. The study of this specific subset in single-cell data from ICI-RA patients sheds light on the pathogenic mechanisms underlying this condition, however, how it compares with conventional RA is not clear from the manuscript.

      Given the substantial incidence of ICI-induced autoimmune arthritis, understanding the unique macrophage subsets involved for future targeting them therapeutically is an important challenge. The findings are significant for immunologists, cancer researchers, and specialists in autoimmune diseases, making the study relevant to a broad scientific audience.

      Reviewer #2 (Public review):

      Summary/Significance of the findings:

      The authors have done a great job by extensively carrying out transcriptomic and epigenomic analyses in the primary human/mouse monocytes/macrophages to investigate TNF-PGE2 (TP) crosstalk and their regulation by IFN-γ in the Rheumatoid arthritis (RA) synovial macrophages. They proposed that TP induces inflammatory genes via a novel regulatory axis whereby IFN-γ and PGE2 oppose each other to determine the balance between two distinct TNF-induced inflammatory gene expression programs relevant to RA and ICI-arthritis.

      Strengths:

      The authors have done a great job on RT-qPCR analysis of gene expression in primary human monocytes stimulated with TNF, using selective agonists of the PGE2 receptors EP2 and EP4 that signal predominantly via cAMP. They have beautifully shown that IFN-γ opposes the effects of PGE2 on TNF-induced gene expression. They found that TP signature genes are activated by cooperation of PGE2-induced AP-1, CEBP, and NR4A with TNF-induced NF-κB activity. On the other hand, they found that IFN-γ suppressed induction of AP-1, CEBP, and NR4A activity to ablate induction of IL-1, Notch, and neutrophil chemokine genes but promoted expression of distinct inflammatory genes such as TNF and T cell chemokines like CXCL10, indicating how TP-induced inflammatory genes are regulated by IFN-γ in RA and ICI-arthritis.

      Weaknesses:

      (1) The authors carried out most of the assays in the monocytes/macrophages. How do APCs like dendritic cells behave with respect to this TP treatment at similar dosing?

      We agree that this is an interesting topic especially as TNF + PGE2 is one of the standard methods of maturing in vitro generated human DCs. As DC maturation is quite different from monocyte activation this would represent an entire new study and is beyond the scope of the current manuscript. We will instead describe and cite the literature on DC maturation by TNF + PGE2 including one of our older papers (PMID: 18678606; 2008)

      (2) The authors studied transcriptomics and epigenomics at 3h and 24h post-treatment. What happens to TP-induced inflammatory genes at 12h, 36h, 48h, and 72h post-treatment? It is critical to see whether the upregulated/downregulated genes normalise or stay the same throughout the innate immune response.

      We will clarify that the gene response is mostly subsiding at the 24 hour time point, which is in line with in vitro stimulation of primary monocytes in other systems.

      (3) The authors showed the IL-1 axis in response to TP treatment. Do other cytokine axes get modulated? If yes, then how do they cooperate to reduce/induce inflammatory responses along this proposed axis?

      We will analyze the data for other pathways that are modulated.

      Overall, the data looks good and acceptable but I need to confirm the above-mentioned criticisms.

    1. That might be personal to me, as someone who grew up with a dad who was what you might call a campfire guitarist — not a performer, just a dad who used to entertain us with songs like "Dark as a Dungeon," a little folk tune about the lethal dangers of coal mining. Maybe to you, it's not the guitar.

      This refers to more personal opinions and aspects of the ad, but can help the reader relate to the topic. It mentions how many people are crushed (haha) because of the unintended offense that this ad portrayed.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews:

      Reviewer #1 (Public Review):  

      Summary:  

      Satoshi Yamashita et al. investigate the physical mechanisms driving tissue bending using the cellular Potts Model, starting from a planar cellular monolayer. They argue that apical length-independent tension control alone cannot explain bending phenomena in the cellular Potts Model, contrasting with previous works, particularly Vertex Models. They conclude that an apical elastic term, with zero rest value (due to endocytosis/exocytosis), is necessary to achieve apical constriction, and that tissue bending can be enhanced by adding a supracellular myosin cable. Additionally, a very high apical elastic constant promotes planar tissue configurations, opposing bending.  

      Strengths:  

      - The finding of the required mechanisms for tissue bending in the cellular Potts Model provides a natural alternative for studying bending processes in situations with highly curved cells. 

      - Despite viewing cellular delamination as an undesired outcome in this particular manuscript, the model's capability to naturally allow T1 events might prove useful for studying cell mechanics during out-of-plane extrusion. 

      We thank the reviewer for the careful comments and suggestions.

      Weaknesses: 

      - The authors claim that the cellular Potts Model (CPM) is unable to achieve the results of the vertex model (VM) simulations due to naturally non-straight cellular junctions in the CPM versus the VM. The lack of a substantial comparison undermines this assertion. None of the references mentioned in the manuscript are from a work using a vertex model with straight cellular junctions, simulating apical constriction purely by enhancing a length-independent apical tension. Sherrard et al. and Pérez-González et al. use 2D and 3D Vertex Models, respectively, with a "contractility" force driving apical constriction. However, their models allow cell curvature. Both references suggest that the cell side flexibility of the CPM shouldn't be the main issue of the "contractility model" for apical constriction. 

      We appreciate the comment.

      For the reports by Sherrard et al. and Pérez-González et al., the lack of cell rearrangement (T1 transitions) might have caused the difference. Other than these, Muñoz et al. (doi:10.1016/j.jbiomech.2006.05.006), Polyakov et al. (doi:10.1016/j.bpj.2014.07.013), Inoue et al. (doi:10.1007/s10237-016-0794-1), Sui et al. (doi:10.1038/s41467-018-06497-3), and Guo et al. (doi:10.7554/eLife.69082) used simulation models with straight lateral surfaces.

      We updated an explanation about the difference between the vertex model and the cellular Potts model in the discussion.

      P12L318 “An edge in the vertex model can be bent by interpolating vertices or can be represented with an arc of a circle (Brakke, 1992). Even in cases where vertex models were extended to allow bent lateral surfaces, the model still limited cell rearrangement and neighbor changes (Pérez-González et al., 2021), limiting the cell delamination. Thus the difference in simulation results between the models could be due to whether the cell rearrangement was included or not. However, it is not clear how the absence of the cell rearrangement affected cell behaviors in the simulation, and it shall be studied in the future. In contrast to the vertex model, the cellular Potts model included the curved cell surface and the cell rearrangement innately, and it elucidated the importance of those factors.”

      - The myosin cable is assumed to encircle the invaginated cells. Therefore, it is not clear why the force acts over the entire system (even when decreasing towards the center), and not locally in the contour of the group of cells under constriction. The specific form of the associated potential is missing. It is unclear how dependent the results of the manuscript are on these not-well-motivated and model-specific rules for the myosin cable.

      A circle's radius decreases when its perimeter shrinks, and this was simulated with the myosin cable moving toward the midline in the cross section.

      We added an explanation in the introduction and the results.

      P2L74 “In the same way as the contracting circumferential myosin belt in a cell decreasing the cell apical surface, the circular supracellular myosin cable contraction decreases the perimeter, the radius of the circle, and an area inside the circle.”

      P6L197 “In the cross section, the shrinkage of the circular supracellular myosin cable was simulated with a move of adherens junction under the myosin cable toward the midline.”

      - The authors are using different names than the conventional ones for the energy terms. Their current attempt to clarify what is usually done in other works might lead to further confusion. 

      The reviewer is correct. However, we named the energy terms differently because the conventional naming would be misleading in our simulation model.

      We added an explanation in the results.

      P4L140 “Note that the naming for the energy terms differs from preceding studies. For example, Farhadifar et al. (2007) named a surface energy term expressed by a proportional function "line tensions" and a term expressed by a quadratic function "contractility of the cell perimeter". In this study, however, calling the quadratic term "contractility" would be misleading since it prevents the contraction when 𝑙 < 𝑙_0. Therefore we renamed the terms accordingly.”
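To make the renaming concrete, the two kinds of surface energy term can be sketched in generic notation (the symbols J, k, ℓ, and ℓ_0 below are illustrative placeholders, not the manuscript's own notation). The tension on a surface is the derivative of its energy with respect to the surface length ℓ:

```latex
% Proportional term: constant, length-independent tension ("contractility")
E_{\mathrm{contract}} = J\,\ell,
\qquad
T_{\mathrm{contract}} = \frac{\partial E_{\mathrm{contract}}}{\partial \ell} = J > 0 .

% Quadratic term: tension vanishes at the rest length \ell_0 ("elasticity")
E_{\mathrm{elastic}} = \frac{k}{2}\,\bigl(\ell - \ell_0\bigr)^{2},
\qquad
T_{\mathrm{elastic}} = \frac{\partial E_{\mathrm{elastic}}}{\partial \ell} = k\,\bigl(\ell - \ell_0\bigr) .
```

Since T_elastic is negative whenever ℓ < ℓ_0, the quadratic term resists rather than drives contraction below the rest length, which motivates renaming it away from "contractility".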

      Reviewer #2 (Public Review): 

      Summary: 

      In their work, the Authors study local mechanics in an invaginating epithelial tissue. The work, which is mostly computational, relies on the Cellular Potts model. The main result shows that an increased apical "contractility" is not sufficient to properly drive apical constriction and subsequent tissue invagination. The Authors propose an alternative model, where they consider an alternative driver, namely the "apical surface elasticity". 

      Strengths: 

      It is surprising that despite the fact that apical constriction and tissue invagination are probably most studied processes in tissue morphogenesis, the underlying physical mechanisms are still not entirely understood. This work supports this notion by showing that simply increasing apical tension is perhaps not sufficient to locally constrict and invaginate a tissue. 

      We thank the reviewer for the careful comments.

      Weaknesses: 

      Although the Authors have improved and clarified certain aspects of their results as suggested by the Reviewers, the presentation still mostly relies on showing simulation snapshots. Snapshots can be useful, but when there are too many, the results are hard to read. The manuscript would benefit from more quantitative plots like phase diagrams etc. 

      We agree with the comment.

      However, we could not devise a quantitative measurement for the phase diagram, since 1) the measurement must be applicable to all simulation results, and 2) the measured values must match the interpretation of the results. To do so, the measurement must distinguish a bent tissue, delaminated cells, a tissue with a curved basal surface and flat apical surface, and a tissue with a closed invagination. Such a measurement is hard to design.

      Recommendations for the authors: 

      Reviewing Editor (Recommendations For The Authors): 

      I see that the authors have worked on improving their paper in the revision. However, I agree with both reviewer #1 and reviewer #2 that the presentation and discussion of findings could be clearer. 

      Concrete recommendations for improvement: 

      (1) I find the observation by reviewer #1 on cell rearrangement very illuminating: It is indeed another key difference between the Cellular Potts Model that the authors use compared to typical Vertex Models, and could very well explain the different model outcomes. The authors could expand on the discussion of this point. 

      We updated an explanation about the difference between the vertex model and the cellular Potts model in the discussion.

      P12L318 “An edge in the vertex model can be bent by interpolating vertices or can be represented with an arc of a circle (Brakke, 1992). Even in cases where vertex models were extended to allow bent lateral surfaces, the model still limited cell rearrangement and neighbor changes (Pérez-González et al., 2021), limiting the cell delamination. Thus the difference in simulation results between the models could be due to whether the cell rearrangement was included or not. However, it is not clear how the absence of the cell rearrangement affected cell behaviors in the simulation, and it shall be studied in the future. In contrast to the vertex model, the cellular Potts model included the curved cell surface and the cell rearrangement innately, and it elucidated the importance of those factors.”

      (2) In lines 161-164, the authors write "Some preceding studies assumed that the apical myosin generated the contractile force (Sherrard et al, 2010: Conte et al., 2012; Perez-Mockus et al., 2017; Perez-Gonzalez et al., 2021), while others assumed the elastic force (Polyakov et al., 2014; Inoue et al. 2016; Nematbakhsh et al., 2020)." 

      Similarly, in lines 316-319 the authors write "In the preceding studies, the apically localized myosin was assumed to generate either the contractile force (Sherrard et al, 2010: Conte et al., 2012; Perez-Mockus et al., 2017; Perez-Gonzalez et al., 2021), or the elastic force (Polyakov et al., 2014; Inoue et al. 2016; Nematbakhsh et al., 2020)." 

      The phrasing here is poor, as it suggests that the latter three studies (Polyakov et al., 2014; Inoue et al. 2016; Nematbakhsh et al., 2020) do not use the assumption that apical myosin generated contractile forces. This is wrong. All three of these studies do in fact assume apical surface contractility mediated by myosin. In addition, they also include other factors such as elastic restoring forces from the cell membrane (but not mediated by myosin as far as I understand). 

      These statements should be corrected. 

      We named the energy term expressed with the proportional function “contractility” and the energy term expressed with the quadratic function “elasticity”. Here we did not define what biological molecules correspond with the contractility or the elasticity.

      For the three studies, the effect of myosin was expressed by the quadratic function, and Polyakov et al. (2014) named it “springlike elastic properties”, Inoue et al. (2016) named it “Apical circumference elasticity”, and Nematbakhsh et al. (2020) named it “Actomyosin contractility”. To explain that the force generated by myosin was expressed with the quadratic function in these studies, we wrote that they “assumed the elastic force”.

      We assumed the myosin activity to be approximated with the proportional function in later parts, and proposed, based on other studies, that the membrane might be expressed with the quadratic function and be responsible for the apical constriction.

      To clarify this, we added it to the results.

      P4L175 “Some preceding studies assumed that the apical myosin generated the contractile force (Sherrard et al., 2010; Conte et al., 2012; Perez-Mockus et al., 2017; Pérez-González et al., 2021), while the others assumed the myosin to generate the elastic force (Polyakov et al., 2014; Inoue et al., 2016; Nematbakhsh et al., 2020).”

      (3) Lines 294-296: The phrasing suggests that the "alternative driving mechanism" consists of apical surface elasticity remodelling alone. This is not true, it's an additional mechanism, not an alternative. The authors' model works by the combined action of increased apical surface contractility and apical surface elasticity remodelling (and the effect can be strengthened by including a supracellular actomyosin cable). 

      We agree with the comment that the surface remodeling does not drive the apical constriction alone but together with the myosin activity. However, if we described it as an additional mechanism, it might look as though both the myosin activity alone and the surface remodeling alone could drive the apical constriction, and that they would drive it better when combined. So we replaced “mechanism” with “model”.

      P12L311 “In this study, we demonstrated that the increased apical surface contractility could not drive the apical constriction, and proposed the alternative driving model with the apical surface elasticity remodeling.”

      (4) In general, the part of the results section encompassing equations 1-5 should more explicitly state which equations were used in all simulations (Eqs1+5), and which ones were used only for certain conditions (Eqs2+3+4). 

      We added it as follows.

      P4L153 “While the terms Equation 1 and Equation 5 were included in all simulations since they were fundamental and designed in the original cellular Potts model (Graner and Glazier, 1992), the other terms Equation 2-Equation 4 were optional and employed only for certain conditions.”
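For orientation, the two terms included in all simulations — a boundary adhesion energy and a quadratic bulk (area) constraint, in the spirit of the original cellular Potts model of Graner and Glazier (1992) — can be sketched in a minimal 2D Metropolis step. This is an illustrative simplification (a single adhesion constant J, medium label 0, and recomputation of the full energy per step), not the authors' implementation:

```python
import math
import random

def cpm_energy(grid, J, lam, target_area):
    """Energy of a 2D label grid: adhesion J per unlike-neighbor bond,
    plus lam * (area - target_area)^2 for every cell (medium label 0 excluded)."""
    h, w = len(grid), len(grid[0])
    adhesion = 0.0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):  # count each neighbor pair once
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and grid[y][x] != grid[ny][nx]:
                    adhesion += J
    areas = {}
    for row in grid:
        for label in row:
            areas[label] = areas.get(label, 0) + 1
    bulk = sum(lam * (a - target_area) ** 2
               for label, a in areas.items() if label != 0)
    return adhesion + bulk

def metropolis_step(grid, J, lam, target_area, temperature=1.0):
    """Attempt one label copy from a random neighbor; accept with the
    Boltzmann criterion. Returns True if the copy was accepted."""
    h, w = len(grid), len(grid[0])
    y, x = random.randrange(h), random.randrange(w)
    dy, dx = random.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
    ny, nx = y + dy, x + dx
    if not (0 <= ny < h and 0 <= nx < w) or grid[ny][nx] == grid[y][x]:
        return False
    energy_before = cpm_energy(grid, J, lam, target_area)
    old_label = grid[y][x]
    grid[y][x] = grid[ny][nx]
    delta = cpm_energy(grid, J, lam, target_area) - energy_before
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return True
    grid[y][x] = old_label  # reject the move and restore the old label
    return False
```

Repeated `metropolis_step` calls relax the configuration toward lower energy; production CPM codes instead compute only the local energy difference and use label-type-dependent adhesion constants.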

      (5) Lines 150-152: Please state which parameters were examined. I assume Equation 4 was also left out of this initial simulation, as it is the potential energy of the actomyosin cable that was only included in some simulations. 

      We added it as follows.

      P4L163 “The term Equation 4 was not included either. For a cell, its compression was determined by a balance between the pressure and the surface tension, i.e., a higher surface tension would compress the cell more. The bulk modulus 𝜆 was set to 1, the lateral cell-cell junction contractility 𝐽_𝑙 was varied for different cell compressions, and the apical and basal surface contractilities 𝐽_𝑎 and 𝐽_𝑏 were varied proportional to 𝐽_𝑙.”

      (6) Lines 118-122: The sentence is very long and hard to parse. I suggest the following rephrasing: 

      “In this study, we assumed that the cell surface tension consisted of contractility and elasticity. We modelled the contractility as constant to decrease the surface, but not dependent on surface width or strain. We modelled the elasticity as proportional to the surface strain, working to return the surface to its original width." 

      We updated the explanation as follows.

      P3L121 “In this study, we assumed that the cell surface tension consisted of contractility and elasticity. We modeled the contractility as a constant force to decrease the surface, but not dependent on surface width or strain. We modeled the elasticity as a force proportional to the surface strain, working to return the surface to its original width.”

      (7) Lines 270-274: Another long sentence that is difficult to understand.

      Suggested rephrasing: 

      "Note that the supracellular myosin cable alone could not reproduce the apical constriction (Figure 2c), and cell surface elasticity in isolation caused the tissue to stay almost flat. However, combining both the supracellular myosin cable and the cell surface elasticity was sufficient to bend the tissue when a high enough pulling force acted on the adherens junctions." 

      We updated the sentence as follows.

      P9L287 “Note that the supracellular myosin cable alone could not reproduce the apical constriction (Figure 2c), and that with some parameters the modified cell surface elasticity kept the tissue almost flat (Figure 4). However, combining both the supracellular myosin cable and the cell surface elasticity made a sharp bending when the pulling force acting on the adherens junction was sufficiently high.”

      (8) Lines 434-435: Unclear what is meant with sentence starting with "Rest of sites" 

      We updated the sentence as follows.

      P17L456 “At the initial configuration and during the simulation, sites adjacent to medium and not marked as apical are marked as basal.”

      (9) Fixing typos and other minor grammar and wording changes would improve readability. Following is a list in order of appearance in the text with suggestions for improvement. 

      We greatly appreciate the careful editing, and corrected the manuscript accordingly.

      Line 14: "a" is not needed in the phrase "increased a pressure" 

      Line 15: "cell into not the wedge shape" --"cell not into the wedge shape"  In fact it might be better to flip the sentence around to say, e.g. "making the cells adopt a drop shape instead of the expected wedge shape". 

      Line 24: "cells decrease its apical surface" --"cells decrease their apical surface" 

      Line 25: instead of "turn into wedge shape", a more natural-sounding expression could be "adopt a wedge shape" 

      Line 28: "which crosslink and contract" --because the subject is the singular "motor protein", the verb tense needs to be changed to "crosslinks and contracts" 

      Line 29: I suggest to use the definite article "the" before "actin filament network" as this is expected to be a known concept to the reader. 

      Line 31: "adherens junction and tight junction" --use the plural, because there are many per cell: "adherens junctions and tight junctions" 

      Line 42: "In vertebrate" --"In vertebrates" 

      Line 46: "Since the interruption to" --"Since the interruption of" 

      Line 56: "the surface tension of the invaginated cells were" --since the subject is "the surface tension", the verb "were" needs to be changed to "was" 

      Line 63: "extra cellular matrix" --generally written as "extracellular matrix" without the first space 

      Line 66: "many epithelial tissues" --"in many epithelial tissues" 

      Line 70: "This supracellular cables" --"These supracellular cables" 

      Line 72: "encircling salivary gland" --either "encircling the salivary gland" or "encircling salivary glands" 

      Lines 76-77: "investigated a cell physical property required" --"investigated what cell physical properties were required" 

      Line 78: "was another framework" --"is another framework" (it is a generally and currently valid true statement, so use the present tense) 

      Line 79: "simulated an effect of the apically localized myosin" --for clarity, I suggest rephrasing as "simulated the effect of increased apical contractility mediated by apically localized myosin" 

      Similarly, in Line 80: "did not reproduce the apical constriction" --"did not reproduce tissue invagination by apical constriction", as technically the cells in the model do reduce their apical area, but fail to invaginate as a tissue. 

      Line 82: "we found that a force" --"we found that the force" 

      Line 101: "apico-basaly" --"apico-basally" 

      Lines 107-108: "in order to save a computational cost" --"in order to save on computational cost" 

      Line 114: "Therefore an area of the cell" --"Therefore the interior area of the cell" 

      Line 139: "formed along adherens junction" --"formed along adherens junctions" 

      Line 166: "we ignored an effect" --"we ignored the effect" 

      Line 167: "and discussed it later" --"and discuss it later" 

      Lines 167-168: "an experiment with a cell cultured on a micro pattern showed that the myosin activity was well corresponded by the contractility" --"an experiment with cells cultured on a micro pattern showed that the myosin activity corresponded well to the contractility" 

      Line 172: "success of failure" --"success or failure" 

      Figure 1 caption: "none-polar" --"non-polarized"; "reg" --"red" 

      Line 179: "To prevented the surface" --"To prevent the surface" 

      Line 180: "It kept the cells surface" --"It kept the cells' surface" (apostrophe missing) 

      Line 181: "cells were delaminated and resulted in similar shapes" --"cells were delaminated and adopted similar shapes" 

      Line 190: "To investigate what made the difference" --"To investigate the origin of the difference" 

      Line 203: For clarity, I would suggest to add more specific wording. "the pressure, and a difference in the pressure between the cells resulted in" --"the internal pressure due to cell volume conservation, and a difference in the pressure between the contracting and non-contracting cells resulted in" 

      Line 206: "by analyzing the energy with respect to a cell shape" --"by analyzing the energy with respect to cell shape" 

      Line 220: "indicating that cell could shrink" --"indicating that a cell could shrink" 

      Line 224: For clarity, I would suggest more specific wording "lateral surface, while it seems not natural for the epithelial cells" --"lateral surface imposed on the vertex model, a restriction that seems not natural for epithelial cells" 

      Line 244: "succeeded in invaginating" --"succeeding in invaginating" 

      Line 247: "were checked whether the cells" --"were checked to assess whether the cells" 

      Line 250: "cells became the wedge shape" --"cells adopted the wedge shape" 

      Line 286: "there were no obvious change in a distribution pattern" --"there was no obvious change in the distribution pattern" 

      Lines 296-297: "When the cells were assigned the high apical surface contractility, the cells were rounded" --"When the cells were assigned a high apical surface contractility, the cells became rounded" 

      Line 298: "This simulation results" --"These simulation results" 

      Lines 301-302: I suggest to increase clarity by somewhat rephrasing.  "Even when the vertex model allowed the curved lateral surface, the model did not assume the cells to be rearranged and change neighbors" --"Even in cases where vertex models were extended to allow curved lateral surfaces, the model still limited cell rearrangement and neighbor changes" 

      Line 326: "high surface tension tried to keep" --"high surface tension will keep" 

      Line 334: "In many tissue" --"In many tissues" 

      Line 345: "turned back to its original shape" --"turned back to their original shape" (subject is the plural "cells") 

      Lines 348-349: "resembles the result of simulation" --"resembles the result of simulations" 

      Line 352: "how the myosin" --"how do the myosin" 

      Line 356: "it bears the surface tension when extended and its magnitude" What does the last "its" refer to? The surface tension? 

      Line 365: "the endocytosis decrease" --"the endocytosis decreases" 

      Line 371: "activatoin" --"activation" 

      Line 374 "the cells undergoes" --"the cells undergo" 

      Line 378: "entier" --"entire" 

      Line 389: "individual tissue accomplish" --"individual tissues accomplish" 

      Line 423: "is determined" --"are determined" (subject is the plural "labels") 

      Line 430: "phyisical" --"physical" 

      Table 6 caption: "cell-ECN" --cell-ECM 

      Line 557: "do not confused" --"should not be confused" 

      Reviewer #1 (Recommendations For The Authors): 

      - The phrase "In addition, the encircling supracellular myosin cable largely promoted the invagination by the apical constriction, suggesting that too high apical surface tension may keep the epithelium apical surface flat." is not clear to me. It sounds contradictory. 

      This finding was unexpected and surprising for us too. However, it is actually not contradictory, since stronger surface tension will make the surface flatter in general. Figure 4 shows the flat apical surface with wedge-shaped cells when the apical surface tension is too strong. On the other hand, the supracellular myosin cable promoted the cell shape changes without raising the surface tension, and thus it could make a sharp bending (Figure 5).

      We updated the explanation for the effect of the supracellular myosin cable as follows.

      P2L74 “In the same way as the contracting circumferential myosin belt in a cell decreasing the cell apical surface, the circular supracellular myosin cable contraction decreases the perimeter, the radius of the circle, and an area inside the circle.”

      P6L197 “In the cross section, the shrinkage of the circular supracellular myosin cable was simulated with a move of adherens junction under the myosin cable toward the midline.”

      - Even when the authors now avoid to say "in contrast to vertex model simulations" in pg.4, in the next section there is still the intention to compare VM to CPM. Idem in the Discussion section. The conclusion in that section is that the difference between the results arising with VM (achieving the constriction) and the CPM (not achieving the constriction, and leading to cell delamination) are due to the straight lateral surfaces. However, Sherrard et al. could achieve the constriction with an enhanced apical surface contractility using a 2D VM that allows curvatures. Therefore, I don't think the main difference is given by the deformability of the lateral surfaces. Instead, it might be due to the facility of the CPM to drive cellular rearrangements, coupled to specific modeling rules such as the permanent loss of the "apical side" once a delamination occurs and the boundary conditions. A clear example is the observation of loss of cell-cell adherence when all the tensions are set the same. Instead, in a VM cells conserve their lateral neighbors in the uniform tension regime (Sherrard et al.). It is noteworthy that the two mentioned works using vertex models to achieve apical constriction (Sherrard et al. (2D) and Pérez-González et al. (3D)) seem to neglect T1 transitions. I specifically think the added discussion on the impact of the T1 events (fundamental for cell delamination) is quite poor. A more detailed description would help justify the differences between model outcomes. 

      We updated the explanation of the difference between the vertex model and the cellular Potts model in the discussion.

      P12L318 “An edge in the vertex model can be bent by interpolating vertices or can be represented with an arc of a circle (Brakke, 1992). Even in cases where vertex models were extended to allow bent lateral surfaces, the models still limited cell rearrangement and neighbor changes (Pérez-González et al., 2021), limiting cell delamination. Thus the difference in simulation results between the models could be due to whether cell rearrangement was included or not. However, it is not clear how the absence of cell rearrangement affected cell behaviors in the simulations, and this should be studied in the future. In contrast to the vertex model, the cellular Potts model innately includes curved cell surfaces and cell rearrangement, and it therefore elucidated the importance of those factors.”

      - Fig6c: cell boundary colors are quite difficult to see. 

      The images were drawn by custom scripts, and those scripts do not implement a method for drawing wider lines.

      - Title Table 1: "epitherila". 

      We corrected the typo.

      Reviewer #2 (Recommendations For The Authors): 

      The Authors have addressed most of my initial comments. In my opinion, the results could be better represented. Overall, the manuscript contains too many snapshots that are hard to read. I am sure the Authors could come up with a parameter that would tell the overall shape of the tissue and distinguish between a proper invagination and delamination. Then they could plot this parameter in a phase diagram using color plots to show how varying values of model parameters affects the shape. Presentation aside, I believe the manuscript will be a valuable piece of work that will be very useful for the community of computational tissue mechanics. 

      We agree with the comment. However, we could not devise a suitable quantitative measurement.

      For the phase diagrams, a single measurement must be applicable to all of the simulation results; otherwise each figure would introduce a new measurement, and a color representation would merely redraw the snapshots without enabling comparison between the figures. The differing measurements would therefore make the figures more difficult to read.

      The single measurement must also distinguish the cell delamination caused by the increased surface contractility from the invagination caused by the modified surface elasticity and the supracellular contractile ring, even though in both cases the center cells were covered by the surrounding cells and lost contact with the apical extracellular medium.

      With the center of mass, the delaminated cells would return large values because they were moved basally. With the tissue basal surface curvature, the measurement would not capture whether the tissue apical surface was also curved or kept flat. If the phase diagram and the interpretation of the simulation results did not match each other, the diagram would be misleading.

      We were unable to design a measurement meeting all of these conditions.
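To illustrate the point above, the following sketch implements one such candidate measurement, the mean basal depth of the center cells' centroids. This is purely illustrative code written for this response (the function name and coordinate convention are assumptions, not taken from the simulation scripts); as argued above, the metric grows under both invagination and delamination and therefore cannot separate the two outcomes on its own.

```python
def mean_basal_depth(center_cells, reference_y):
    """Mean basal displacement of the center cells' centroids
    relative to the initial apical surface at reference_y.

    center_cells: list of (x, y) centroids; larger y = more basal.
    Returns a scalar that grows whenever the center cells move
    basally, under invagination AND under delamination, which is
    why it cannot distinguish the two outcomes.
    """
    if not center_cells:
        return 0.0
    return sum(y - reference_y for _, y in center_cells) / len(center_cells)

# Both a shallow invagination and a deep delamination give positive depths.
invaginated = [(0.0, 2.0), (1.0, 3.0), (2.0, 2.0)]
delaminated = [(0.0, 8.0), (1.0, 9.0), (2.0, 8.0)]
```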

    Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      Reviewer #1

      Evidence, reproducibility and clarity

      Summary: Bhatt et al. seek to define factors that influence H3.3 incorporation in the embryo. They test various hypotheses, pinpointing the nuclear/cytoplasmic ratio and Chk1, which affects cell cycle state, as influencers. The authors use a variety of clever Drosophila genetic manipulations in this comprehensive study. The data are presented well and conclusions reasonably drawn and not overblown. I have only minor comments to improve readability and clarity. I suggest two OPTIONAL experiments below.

      We thank the reviewer for their positive and helpful comments.

      Major comments: We found this manuscript well written and experimentally thorough, and the data are meticulously presented. We have one modification that we feel is essential to reader understanding and one experimental concern: The authors provide the photobleaching details in the methodology, but given how integral this measurement is to the conclusions of the paper, we feel that this should be addressed in clear prose in the body of the text. The authors explain briefly how nuclear export is assayed, but not import (line 99). Would help tremendously to clarify the methods here. This is especially important as import is again measured in Fig 4. This should also be clarified (also in the main body and not solely in the methods).

      We have added the following sentences to the main body of the text to clarify how photobleaching and import were assayed.
      “We note that these differences are not due to photobleaching as our measurements on imaged and unimaged embryos indicate that photobleaching is negligible under our experimental conditions (see methods, Figure S1G-H)” lines 98-101 and “Since nuclear export is effectively zero, we attribute the increase in total H3.3 over time solely to import and therefore the slope of total H3.3 over time corresponds to the import rate.” lines 111-113

      Revision Plan

      In addition, we have clarified how import was calculated in the figure legends of Figure 5D (formerly 4D) and S1F, which now read: “Initial slopes of nuclear import curves (change in total nuclear intensity over time for the first 5 timepoints) …” We also added the following explanation of how nuclear import rates were calculated to the methods section: “Import rates were calculated by using a linear regression for the total nuclear intensity over time for the first 5 timepoints in the nuclear import curves.” lines 471-473, methods

      If the embryos appeared "reasonably healthy" (line 113) after slbp RNAi, how do the authors know that the RNAi was effective, especially in THESE embryos, given siblings had clear and drastic phenotype? This is especially critical given that the authors find no effect on H3.3 incorporation after slbp RNAi (and presumably H3 reduction), but this result would also be observed if the slbp RNAi was just not effective in these embryos.

      We apologize for the confusion caused by our word choice. The “healthy” slbp-RNAi embryos had measurable phenotypes consistent with histone depletion that we have reported previously (Chari et al., 2019), including cell cycle elongation and early cell cycle arrest (Figure S4D). However, they did not have the catastrophic mitosis observed in more severely affected embryos. We agree with the reviewer that a concern of this experiment is that the less severely affected embryos likely have more remaining RD histones including H3.
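The import-rate calculation quoted above (a linear fit to total nuclear intensity over the first 5 timepoints) can be sketched as follows. This is an illustrative reimplementation, not the authors' analysis code, and the sample trace is invented.

```python
def import_rate(total_intensity, dt=1.0, n_points=5):
    """Slope of an ordinary least-squares line fit to the first
    n_points of a total-nuclear-intensity time series (intensity
    per unit time), used as a proxy for the nuclear import rate."""
    y = total_intensity[:n_points]
    x = [i * dt for i in range(len(y))]
    n = len(y)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    den = sum((xi - mean_x) ** 2 for xi in x)
    return num / den

# Hypothetical trace: intensity rises by ~2 units per timepoint,
# then plateaus; only the first 5 timepoints enter the fit.
trace = [10.0, 12.0, 14.0, 16.0, 18.0, 19.0, 19.5]
rate = import_rate(trace)  # -> 2.0 for this linear initial segment
```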
      To address this we also tested H3.3 incorporation in the embryos that fail to progress to later cell cycles in the cycles that we could measure. Even in these more severely affected embryos we were not able to detect a change in H3.3 incorporation relative to controls (lines 240-243 and Fig S4B). Unfortunately, it is impossible to conduct the ideal experiment, which would be a complete removal of H3, since this is incompatible with oogenesis and embryo survival. To address this confusion we have added supplemental videos of control, moderately affected and severely affected SLBP-RNAi embryos as Movies 3-5 and modified the text to read: “All embryos that survive through at least NC12 had elongated cell cycles in NC12 and 60% arrested in NC13 as reported previously, indicating the effectiveness of the knockdown (Figure S4C, Movie 3-5)39. In these embryos, H3.3 incorporation is largely unaffected by the reduction in RD H3 (Figure 6B).” lines 236-240

      Finally, to characterize the range of SLBP knockdown in the RNAi embryos, we propose to do single-embryo RT-qPCRs for SLBP mRNA for multiple individual embryos. This will provide a measure of the range of knockdown that we observed in our H3.3 movies.

      Minor comments:

      Introduction:

      Consider using "replication dependent" (RD) rather than "replication coupled." Both are used in the field, but RD parallels RI ("replication independent").

      We thank the reviewer for this suggestion. We have made the text edits to change "replication coupled" (RC) to "replication dependent" (RD) throughout the manuscript.

      Would help for clarity if the authors noted that H3 is equivalent to H3.2 in Drosophila. Also it is relevant that there are two H3.3 loci as the authors knock mutations into the H3.3A locus, but leave the H3.3B locus intact. The authors should clarify that there are two H3.3 genes in the Drosophila genome.
      We have changed the text as follows to increase clarity as suggested:

      “Similarly, we have previously shown that RD H3.2 (hereafter referred to as H3) is replaced by RI H3.3 during these same cycles, though the cause remains unclear29” lines 52-54

      “There are ~100 copies of H3 in the Drosophila genome, but only 2 of H3.3 (H3.3A and H3.3B)26. To determine which factor controls nuclear availability and chromatin incorporation, we genetically engineered flies to express Dendra2-tagged H3/H3.3 chimeras at the endogenous H3.3A locus, keeping the H3.3B locus intact.” lines 127-131

      Please add information and citation (line 58): H3.3 is required to complete development when H3.2 copy number is reduced (PMID: 37279945, McPherson et al. 2023)

      We have added the suggested information. The text now reads “Nonetheless, H3.3 is required to complete development when H3.2 copy number is reduced54.” lines 61-62

      Results:

      Embryo genotype is unclear (line 147): Hira[ssm] haploid embryos inherit the Hira mutation maternally? Are Hira homozygous mothers crossed to homozygous fathers to generate these embryos, or are mothers heterozygous? This detail should be in the main text for clarity.

      The Hira mutants are maternal effect. We crossed homozygous Hirassm females to their hemizygous Hirassm or FM7C brothers. However, the genotype of the male is irrelevant since the Hira phenotype prevents sperm pronuclear fusion and therefore there is no paternal contribution to the embryonic genotype. We have clarified this point in the text: “We generated embryos lacking functional maternal Hira using Hirassm-185b (hereafter Hirassm) homozygous mothers which have a point mutation in the Hira locus57.” lines 160-162

      Line 161: Shkl affects nuclear density, but it also appears from Fig 3 to affect nuclear size? The authors do not address this, but it should at least be mentioned.

      We thank the reviewer for the astute observation.
      More dense regions of the Shkl embryos do in fact have smaller nuclei. We believe that this is a direct result of the increased N/C ratio since nuclear size also falls during normal development as the N/C ratio increases. We have added a new Figure 1 in which we more carefully describe the events of early embryogenesis in flies including a quantification of nuclear size and number in the pre-ZGA cell cycles (Figure 1C). We also note the correlation of nuclear size with nuclear density in the text:

      “During the pre-ZGA cycles (NC10-13), the maximum volume that each nucleus attains decreases in response to the doubling number of nuclei with each division (Figure 1C).” lines 86-87

      “To test this, we employed mutants in the gene Shackleton (shkl) whose embryos have non-uniform nuclear densities and therefore a gradient of nuclear sizes across the anterior/posterior axis (Figure 3A-B, Movie 1-2)58.” lines 180-183

      The authors often describe nuclear H3/H3.3 as chromatin incorporated, but these image-based methods do not distinguish between chromatin-incorporated and nuclear protein.
We also edited the text to read: “We have previously shown that with each NC, the pool of free H3 in the nucleus is depleted and its levels on chromatin during mitosis decrease (Figure 1D, S1C-D)29. In contrast, H3.3 mitotic chromatin levels increase during the same cycles (Figure 1D, S1C-D)29.” lines 89-92 I very much appreciate how the authors laid out their model in Fig 3 and then used the same figure to explain which part of the model they are testing in Figs 4 and 5. This is not a critique- we can complement too! Thank you! Revision Plan OPTIONAL experimental suggestion: The experiments in Figure 4 and 5 are clever. One would expect that H3 levels might exhaust faster in embryos lacking all H3.2 histone genes (Gunesdogan, 2010, PMID: 20814422), allowing a comparison testing the H3 availability > H3.3 incorporation portion of the hypothesis without manipulating the N/C ratio. This might also result in a more consistent system than slbp RNAi (below). We thank the reviewer for the experimental suggestion. We also considered this experimental manipulation to decrease RD histone H3.2. We chose not to do this experiment because in the Gunesdogan paper they show that the zygotic HisC nulls have normal development until after NC14 (unlike the maternal SLBP-RNAi that we used) suggesting that maternal H3.2 supplies do not become limiting until after the stages under consideration in our paper. Maternal HisC-nulls are, of course, impossible to generate since histones are essential. O'Haren 2024 (PMID: 39661467) did not find increased Pol II at the HLB after zelda RNAi (line 227). Might also want to mention here that zelda RNAi does not result in changes to H3 at the mRNA level (O'Haren 2024), as that would confound the model. We thank the reviewer for the suggestion. 
      We have removed the discussion of Pol II localization and replaced it with the information about histone mRNA: “zelda controls the transcription of the majority of Pol II genes during ZGA but disruption of zelda does not change RD histone mRNA levels67–70”. lines 249-251

      Discussion:

      Should discuss results in context of McPherson et al. 2023 (PMID: 37279945), who showed that decreasing H3.2 gene numbers does not increase H3.3 production at the mRNA or protein levels.

      We expanded our discussion to include the following: “Given the fact that H3.3 pool size does not respond to H3 copy number in other Drosophila tissues,54 our results suggest that H3.3 incorporation dynamics are likely independent of H3 availability.” lines 278-280

      The Shackleton mutation is a clever way to alter N/C ratio, but the authors should point out that it is difficult (impossible?) to directly and cleanly manipulate the N/C ratio. For example, Shkl mutants seem to also have various nuclear sizes.

      As discussed above, we think that nuclear size is a direct response to the N/C ratio. We have added the following sentence to the discussion, along with a citation to a paper which discusses how the N/C ratio might contribute to nuclear import in early embryos: “This may be due to N/C ratio-dependent changes in nuclear import dynamics which may also contribute to the observed changes in nuclear size across the shkl embryo75.” lines 307-309
Figures: While I appreciate the statistical summaries in tables, it is still helpful to display standard significance on the figures themselves. We have added statistical comparisons in Figure 3 (formerly Figure 2). We do not feel that it is appropriate to directly compare the intensities of the H3-Dendra2 construct expressed from the pseudo-endogenous locus to the H3.3 and chimeric proteins expressed from the H3.3A locus as they were imaged using different settings. Although we plot H3 on the same graph as the other proteins to allow for ease of comparison of their trends over time it is not appropriate to directly compare their normalized intensities which including statistical tests would encourage. We have added a note to the legend of Figure 1 explaining this which reads: “Note that statistical comparisons between the two Dendra2 constructs have not been done as they were expressed from different loci and imaged under different experimental settings.” Fig 1: A: Is it possible to label panels with the nuclear cycle? We have done this. B: Statistics required - caption suggests statistics are in Table S2, but why not put on graph? Please see the explanation above for why we do not feel that it is appropriate to perform this comparison. C/D: Would be helpful if authors could plot H3/H3.3 on same graph because what we really need to compare is NC13 between H3/H3.3 (and statistics between these curves) Please see the explanation above for why we do not feel that it is appropriate to perform this comparison. These curves can be directly compared within a construct and we can evaluate their trends over time, but the normalized values should not be directly compared in the way that would be encouraged by plotting the data as suggested. E: The comparison in the text is between H3.3 and H3, but only H3.3 data is shown. I realize that it is published prior, but the comparison in figure would be helpful. We have added the previously published values to the text. 
      “These changes in nuclear import and incorporation result in a less complete loss of the free nuclear H3.3 pool (~70% free in NC11 to ~30% in NC13) than previously seen for H3 (~55% free in NC11 to ~20% in NC13)” lines 116-119

      Fig 2:

      A: A very helpful figure. Slightly unclear that the H3 that is not Dendra tagged is at the H3.3 locus. Also unclear that the H3.3A-Dendra2 line exists and used as control, as is not shown in figure. Should show H3 and H3.3 controls (Figure S2)

      We have edited the figure to add Dendra2 to all of the constructs and made clear the location of each construct, including adding the landing site for H3-Dendra2. We have also cited Figure S1 in the legend, which contains a more detailed diagram of the integration strategy.

      F/H: As the comparison is between H3 and ASVM, it would help to combine these data onto the same graph. As the color is currently used unnecessarily to represent nuclear cycle, the authors could use their purple/pink color coding to represent H3/ASVM.

      We have combined these data onto a single graph as requested and changed the colors appropriately. We have not added statistical comparisons to this graph as we again believe that they would be inappropriate.

      In the legend of Fig 2 the authors write "in the absence of Hira." Technically, there is only a point mutation in Hira. It is not absent.

      Good catch! We have changed this to “in Hirassm mutants”.

      Fig 3:

      G: Please show WT for comparison. Can use data in Fig 3A.

      We have added the color-coded number-of-neighbors embryo representations for WT and Shkl embryos underneath the example embryo images in 4A-B (formerly 3A-B, G).

      Model in H is very helpful (compliment)!

      Thank you.

      Fig 4:

      B/C/F/G: The authors use a point size scale to represent the number of nuclei, but the graphs are so overlaid that it is not particularly useful. Is there a better way to display this dimension?
      We chose to represent the data in this way so that the visual impact of each line is representative of the amount of data (number of nuclei in each bin) that underlies it. This helps to prevent sparsely populated outlier bins at the edges of the distribution from dominating the interpretation of the data. If the reviewer has a suggestion for a better way to visualize this information we would welcome it, but we cannot think of a better way at this time.

      D/E/H/I: What does "min volume" mean on the X axis?

      Since the uneven N/C ratio in the shkl embryos results in a wavy cell cycle pattern, there is no single time point where we can calculate the number of neighbors for the whole embryo (since not all nuclei are in the same cell cycle at a given point). Therefore, we had to choose a criterion for when we would calculate the number of neighbors for each nucleus. We chose nuclear size as a proxy for nuclear age since nuclear size increases throughout interphase (see new Figure 1B). So, the minimum volume is the newly formed nucleus in a given cell cycle. We also tested other timepoints for the number of neighbors (maximum nuclear volume, just before nuclear envelope breakdown, and midway between these two points) and found similar results. We chose to use minimum volume in this paper because this is the time point when the nucleus is growing most quickly and nuclear import is at its highest. We have added the following explanation to the methods:

      “For shkl embryos, as the nuclear cycles are asynchronous, nuclear divisions start at different timepoints within the same cell cycle and the nuclear density changes as the neighboring nuclei divide.
      Therefore, the total intensity traces were aligned to match their minimum volumes (as shown in Figure 1B) to T0.” lines 485-488, methods

      And the following detail to the figure legend: “...plotted by the number of nuclear neighbors at their minimum nuclear volume…” Figure 5 legend

      We also added a depiction of the lifecycle of the nucleus in which we marked the minimum volume as the new Figure 1B.

      Fig 5:

      F: OPTIONAL Experimental request: Here I would like to see H3 as a control.

      This is a very good suggestion, and we are currently imaging H3-Dendra2 in the Chk1 background. However, our preliminary results suggest that there may be some synthetic early lethality between the tagged H3-Dendra2 and Chk1 since these embryos are much less healthy than H3.3-Dendra2 Chk1 embryos or Chk1 with other reporters. In addition, we have observed a much higher level of background fluorescence in this cross than in the H3-Dendra2 control. We are uncertain if we will be able to obtain usable data from this experiment, but will continue to try to find conditions that allow us to analyze this data. As an orthogonal approach to answer the question, we will perform immunostaining with a pan-H3 antibody in Chk1 mutant embryos to measure total H3 levels under these conditions. Since the majority of H3-type histone is H3.2 and we know how H3.3 changes, this staining will give us insight into the dynamics of H3 in Chk1 mutant embryos.

      Significance

      General assessment: Many long-standing mysteries surround zygotic genome activation, and here the authors tackle one: what are the signals to remodel the zygotic chromatin around ZGA? This is a tricky question to answer, as basically all manipulations done to the embryo
Their model is thorough and they address most aspects of it. The hurdle this study struggles to overcome is the same that all ZGA studies have, which is that manipulation of the embryo causes cascading disasters (for example, one cannot manipulate the nuclear:cytoplasmic ratio without also altering cell cycle timing), so it's challenging to attribute molecular phenotypes to a single cause. This doesn't diminish the utility of the study. Advance: The conceptual advance of this study is that it implicates the nuclear:cytoplasmic ratio and Chk1 in H3.3 incorporation. The authors suggest these factors influence cell cycle closing, which then affects H3.3 incorporation, although directly testing the granularity of this model is beyond the scope of the study. The authors also provide technical advancement in their use of measuring histone dynamics and using changes in the dynamics upon treatment as a useful readout. I envision this strategy (and the dendra transgenes) to be broadly useful in the cell cycle and developmental fields. Audience: The basic research presented in this study will likely attract colleagues from the cell cycle and embryogenesis fields. It has broader implications beyond Drosophila and even zygotic genome activation. This reviewer's expertise: Chromatin, Drosophila, Gene Regulation Reviewer #2 (Evidence, reproducibility and clarity (Required)): This manuscript investigates the regulation of H3.3 incorporation during zygotic genome activation (ZGA) in Drosophila, proposing that the nuclear-to-cytoplasmic (N/C) ratio plays a central role in this process. While the study is conceptually interesting, several concerns arise regarding the lack of proper control experiments and the clarity of the writing. The manuscript is difficult to follow due to vague descriptions, insufficient distinctions between established knowledge and novel findings, and a lack of rigorous statistical analyses. 
      These issues need to be addressed before the study can be considered for publication.

      We thank the reviewers for their careful reading of this manuscript. We have sought to clarify the concerns regarding clarity through numerous text edits detailed below. We did include ANOVA analysis for all of the relevant statistical comparisons in the supplemental table. However, to increase clarity we have also added some statistical comparisons in the main figures. We note that we do not feel that it is appropriate to directly compare the intensities of the H3-Dendra2 construct expressed from the pseudo-endogenous locus to the H3.3 and chimeric proteins expressed from the H3.3A locus as they were imaged using different settings. Although we plot H3 on the same graph as the other proteins to allow for ease of comparison of their trends over time, it is not appropriate to directly compare their normalized intensities, which including statistical tests would encourage. We have added a note to the legend of the new Figure 1 explaining this which reads: “Note that statistical comparisons between the two Dendra2 constructs have not been done as they were expressed from different loci and imaged under different experimental settings.”

      Major Concerns

      The manuscript would benefit from a clearer introduction that explicitly distinguishes between previously known mechanisms of histone regulation during ZGA and the novel contributions of this study. Currently, the introduction lacks sufficient background on early embryonic chromatin regulation, making it difficult for readers unfamiliar with the field to grasp the significance of the findings. The authors should also be more precise when discussing the timing of ZGA. While they state that ZGA occurs after 13 nuclear divisions, it is well established that a minor wave of ZGA begins at nuclear cycle 7-8, whereas the major wave occurs after cycle 13.
      Clarifying this distinction will improve the manuscript's accessibility to a broader audience.

      We have added a new Figure 1 to make the timing and nuclear behaviors of the embryo during ZGA in Drosophila more clear. We have also added information about how the chromatin changes during Drosophila ZGA in the following sentence: “In Drosophila, these changes include refinement of nucleosomal positioning, partitioning of euchromatin and heterochromatin and formation of topologically associated domains20–22,24.” lines 39-41

      We have clarified the major and minor waves of ZGA in the introduction and results by adding the following sentences to the introduction and results respectively: “In most organisms ZGA happens in multiple waves but the chromatin undergoes extensive remodeling to facilitate bulk transcription during the major wave of ZGA (hereafter referred to as ZGA)18–20,22–25.” lines 36-39 and “In Drosophila, ZGA occurs in 2 waves. The minor wave starts as early as the 7th cycle, while major ZGA occurs after 13 rapid syncytial nuclear cycles (NCs) and is accompanied by cell cycle slowing and cellularization (Figure 1A-B).” lines 83-85

      We hope that these changes help to reduce confusion and make the paper more accessible. However, we are happy to add additional information if the reviewer can provide specific points which require further attention.

      One of the primary weaknesses of this study is the lack of adequate control experiments. In Figure 1, the authors suggest that the levels of H3 and H3.3 are influenced by the N/C ratio, but it is unclear whether transcription itself plays a role in these dynamics. To properly test this, RNA-seq or Western blot analyses should be performed at nuclear cycles 10 and 13-14 to compare the levels of newly transcribed H3 or H3.3 against maternally supplied histones. Without such data, the authors cannot rule out transcriptional regulation as a contributing factor.
      In the pre-ZGA cell cycles the vast majority of protein, including histones, is maternally loaded. Gunesdogan et al. (2010) showed that the zygotic RD histone cluster nulls survive past NC14 (well past ZGA) with no discernible defects, indicating that maternal RD histone supplies are sufficient for normal development during the cell cycles under consideration. Therefore, new transcription of replication coupled histones is not needed for apparently normal development during this period. Moreover, we have done the western blot analysis using a pan-H3 antibody as suggested by the reviewer in our previously published paper (Shindo and Amodeo, 2019, supplemental figure S3A-B) and found that total H3-type histone proteins only increase moderately during this period of development, nowhere near the rate of the nuclear doublings. We have added the following sentence to clarify this point: “These divisions are driven by maternally provided components and the total amount of H3 type histones do not keep up with the pace of new DNA produced29.” lines 88-89

      We have also previously done RNA-seq on wild-type embryos (and those with altered maternal histone levels) (Chari et al., 2019). In this RNA-seq (like most RNA-seq in flies) we used poly-A selection and therefore cannot detect the RD histone mRNAs (which have a stem-loop instead of a poly-A tail). We have plotted the mRNA concentrations for both H3.3 variants from that dataset below for the reviewers' reference (we have not included this in the revised manuscript). The total H3.3 mRNA levels are nearly constant from egg laying (NC0; these are from unfertilized embryos) until after ZGA (NC14). These data, combined with the westerns discussed above, give us confidence that what we are observing is the partitioning of large pools of maternally provided histones with only a relatively small contribution of new histone synthesis.
      In Figure 2, the manuscript introduces chimeric embryos expressing modified histone variants, but their developmental viability is not addressed. It is essential to determine whether these embryos survive and whether they exhibit any phenotypic consequences such as altered hatching rates, defects in nuclear division, or developmental arrest.

      Tagging histones is often deleterious to organismal health. In Drosophila there are two H3.3 loci (H3.3A and H3.3B). In all of our chimera experiments we have left the H3.3B and one copy of the H3.3A locus unperturbed to provide a supply of untagged H3.3. This allows us to study H3.3 and chimera dynamics without compromising organism health. All of our chimeras are viable and fertile with no obvious morphological defects. We have added the following sentences to the text to clarify this point: “There are ~100 copies of H3 in the Drosophila genome, but only 2 of H3.3 (H3.3A and H3.3B)26. To determine which factor controls nuclear availability and chromatin incorporation, we genetically engineered flies to express Dendra2-tagged H3/H3.3 chimeras at the endogenous H3.3A locus, keeping the H3.3B locus intact…. These chimeras were all viable and fertile.” lines 127-131, 136

      In addition, we propose performing hatch rate assays for embryos of the S31A, SVM and ASVM chimeric lines to assess if there is any decrease in fecundity due to the presence of the chimeras.

      Moreover, given that H3.3 is associated with actively transcribed genes, an RNA-seq analysis of chimeric embryos should be included to assess transcriptional changes linked to H3.3 incorporation.

      This is an excellent suggestion and will definitely be a future project for the lab. However, to do this experiment correctly we will need to generate untagged chimeric lines that will (hopefully) allow for the full replacement of H3.3 with the chimeric histones instead of a single copy among 4. This is beyond the scope of this paper.
Figures 3 and 4 raise additional concerns about whether histone cluster transcription is altered in shkl mutant embryos. The authors propose that the shkl mutation affects the N/C ratio, yet it remains unclear whether this leads to changes in the transcription of histone clusters. Furthermore, since HIRA is a key chaperone for H3.3, it would be important to assess whether its levels or function are compromised in shkl mutants. To address these gaps, RT-qPCR or RNA-seq should be performed to quantify histone cluster transcription, and Western blot analysis should be used to determine if HIRA protein levels are affected.

The changes in the N/C ratio that are observed in the shkl mutant are within a SINGLE embryo (differences in nuclear spacing). In these experiments we are comparing nuclei within a common cytoplasm that have different local nuclear densities (N/C ratios). Therefore, if Shkl were somehow affecting the transcription of histones or their chaperones, we would expect all of the nuclei within the same mutant embryo to be equally affected, since they are genetically identical and share a common cytoplasm. We do not directly compare the behavior of shkl embryos to wildtype except to demonstrate that there is no positional effect on the import of H3 and H3.3 across the length of the embryo in wildtype. To clarify our experimental system for these experiments we have added additional panels to Figure 4A and B that depict the number of neighbors for both control and Shkl embryos.

Nonetheless, to address the reviewer's concern that shkl may change the amount of H3 present in the embryo, we propose to conduct a western blot comparison of wildtype and shkl embryos using a pan-H3 antibody. There are no tools (antibodies or fluorescently tagged proteins) to assess HIRA protein levels in Drosophila. We therefore propose to perform RT-qPCR for HIRA in wildtype and shkl embryos.
A similar issue arises in Figure 5, where the authors claim that H3.3 incorporation is dependent on cell cycle state but do not sufficiently test whether this is linked to changes in HIRA levels. Given the importance of HIRA in H3.3 deposition, its levels should be examined in Slbp, Zelda, and Chk1 RNAi embryos to verify whether changes in H3.3 incorporation correlate with HIRA function. Without this, it is difficult to conclude that the observed effects are strictly due to cell cycle regulation rather than histone chaperone dynamics.

Since H3.3 incorporation is unaffected in the Slbp and Zelda-RNAi lines there is no reason to suspect a change in HIRA function. There are no available tools (antibodies or fluorescently tagged proteins) to directly measure HIRA protein in Drosophila. To test if changes in HIRA loading might contribute to the decreased H3.3 incorporation in the Chk1 mutant we propose to perform RT-qPCR for HIRA in wildtype and Chk1 embryos.

Several figures require additional statistical analyses to support the claims made. In Figure 1B, statistical testing should be included to validate the reported differences. Figure 1C-D states that "H3.3 accumulation reduces more slowly than H3," yet there is no quantitative comparison to substantiate this claim. Similarly, Figure 1E presents the conclusion that "These changes in nuclear import and incorporation result in a less dramatic loss of the free nuclear H3.3 pool than previously seen for H3," despite the fact that H3 data are not included in this figure. The conclusions drawn from these data need to be supported with appropriate statistical comparisons and more precise descriptions of what is being measured.
For Figure 1B (now 2B), we do not feel that it is appropriate to directly compare the intensities of the H3-Dendra2 construct expressed from the pseudo-endogenous locus to the H3.3 and chimeric proteins expressed from the H3.3A locus, as they were imaged using different settings; therefore, direct statistical tests are not appropriate. Rather, we plot the two histones on the same graph normalized to their own NC10 values so that the trend in their decrease over time may be compared. The statistical tests for H3.3 compared to the chimeras, which were originally in the supplemental table, have been added to Figure 3 (formerly Figure 2). It is important to note that in this directly comparable situation the ASVM mutant (whose trends closely mirror H3) is highly statistically distinct from H3.3. We have added a note to the legend of the new Figure 1 explaining this, which reads: “Note that statistical comparisons between the two Dendra2 constructs have not been done as they were expressed from different loci and imaged under different experimental settings.”

For Figure 1C-D (now 2C-D) we have removed this claim from the text. We were referring to the plateau in nuclear import for H3 that is less dramatic in H3.3, but this is more carefully discussed in the next paragraph and its addition at that point generated confusion. The text now reads: “To further assess how nuclear uptake dynamics changed during these cycles, we tracked total nuclear H3 and H3.3 in each cycle (Figure 2C-D). Since nuclear export is effectively zero, we attribute the increase in total H3.3 over time solely to import and therefore the slope of total H3.3 over time corresponds to the import rate. Though the changes in initial import rates between NC10 and NC13 are similar between the two histones (Figure S1F), we observed a notable difference in their behavior in NC13.
H3 nuclear accumulation plateaus ~5 minutes into NC13, whereas H3.3 nuclear accumulation merely slows (Figure 2C-D).” lines 109-116

For Figure 1E (now 2E), to address the difference between H3 and H3.3 free pools we have added the previously published values to the text and changed the phrasing from “less dramatic” to “less complete”. The sentence now reads: “These changes in nuclear import and incorporation result in a less complete loss of the free nuclear H3.3 pool (~70% free in NC11 to ~30% in NC13) than previously seen for H3 (~55% free in NC11 to ~20% in NC13)” lines 116-119

Figure 2 presents additional concerns regarding data interpretation. The comparisons between H3.3 and H3.3S31A to H3 and H3.3SVM/ASVM lack statistical analysis, making it difficult to determine the significance of the observed differences.

As discussed above, it is not appropriate to directly compare H3 to H3.3 and the chimeras at the H3.3A locus since they are expressed from different promoters and imaged with different settings. The ANOVA comparisons between all of the constructs in the H3.3A locus can be found in the supplemental table. We have also added the statistical significance between each chimera and H3.3 within a cell cycle to the figure. Including the full set of comparisons for all genotypes and timepoints makes the figure nearly impossible to interpret, but they remain available in the supplemental table.

The disappearance of H3.3 from mitotic chromosomes in Figure 2E is also not explained. If this phenomenon is functionally relevant, the authors should provide a mechanistic interpretation, or at the very least, discuss potential explanations in the text. In Figures 2F-H, the reasoning behind comparing the nuclear intensity of H3.3 to H3 in Hira mutants is unclear. To properly assess the role of HIRA in H3.3 chromatin accumulation, a more appropriate comparison would be between wild-type H3.3 and H3.3 levels in Hira knockdown embryos.
As explained in the text and depicted in Figure 3D (formerly 2D), the HIRAssm mutant is a point mutation that prevents observable H3.3 chromatin incorporation, but not nuclear import. This is what is depicted in Figure 3E (formerly 2E). The loss of H3.3 from mitotic chromatin is due to the inability to incorporate H3.3 into chromatin, as expected for a HIRA mutant. We have edited the Figure 3 legend to make this more clear. It now reads: “Hirassm mutation nearly abolishes the observable H3.3 on mitotic chromatin (E).”

In Figure 3F (formerly 2F-H) we ask what happens to H3 chromatin incorporation when there is almost no incorporation of H3.3 due to the HIRA mutation. In this mutant there is so little H3.3 incorporation that we cannot quantify H3.3 levels on mitotic chromatin (see the new Figure 1B for the stage where chromatin levels are quantified). This experiment was done to test if H3.3ASVM (expressed at the H3.3A locus) is incorporated into chromatin in embryos lacking the function of H3.3's canonical chaperone. We have edited the text to make this more clear: “Since the chromatin incorporation of the H3/H3.3 chimeras appears to depend on their chaperone binding sites, we asked if impairing the canonical H3.3 chaperone, Hira, would affect the incorporation of H3.3ASVM expressed from the H3.3A locus.” lines 158-160

A broader concern is that the authors only test HIRA as a histone chaperone but do not consider alternative chaperones that could influence H3.3 deposition. Since multiple chaperone systems regulate histone incorporation, it would strengthen the conclusions if additional chaperones were tested.

Since HIRAssm reduced H3.3-Dendra2 incorporation to nearly undetectable levels (Figure 3E), we believe that it is the primary H3.3 incorporation pathway during this period of development. Therefore, we believe that removing HIRA function is a sufficient test of the dependence of H3.3ASVM on the major H3.3 chaperone at this time.
Although it would be interesting to fully map how all H3 and H3.3 chimera constructs respond to all histone chaperone pathways, we believe that this is beyond the scope of this manuscript.

Additionally, the manuscript does not include any validation of the RNAi knockdown efficiencies used throughout the study. This raises concerns about whether the observed phenotypes are truly due to target gene depletion or off-target effects. RT-qPCR or Western blot analyses should be performed to confirm knockdown efficiency.

Both the Zelda and slbp-RNAi lines used for knockdowns have been used and validated in the early fly embryo in previously published works (Yamada et al., 2019; Duan et al., 2021; O'Haren et al., 2025; Chari et al., 2019), and the phenotypes that we observe in our embryos are consistent with the published data, including altered cell cycle durations (Figure S4C) and lack of cellularization/gastrulation. We note that the zelda RNAi phenotypes are also highly consistent with the effects of Zelda germline clones. To validate that slbp-RNAi knocks down histones we included a western blot for Pan-H3 in slbp-RNAi embryos that demonstrates a large effect on total H3 levels (Figure S4A). To further demonstrate the phenotypic effects of the slbp-RNAi we have added supplemental movies (Videos 4 and 5). To fully characterize the RNAi efficiency under our conditions we propose to perform RT-qPCR for slbp in slbp-RNAi and Zelda in Zelda-RNAi compared to control (w) RNAi embryos.

Finally, the section discussing "H3.3 incorporation depends on cell cycle state, but not cell cycle duration" is unclear. The term "cell cycle state" is vague and should be explicitly defined. Does this refer to a specific phase of the cell cycle, changes in chromatin accessibility, or another regulatory mechanism?

The term cell cycle state is deliberately vague.
We know that Chk1 regulates many aspects of cell cycle progression and cannot determine from our data which aspect(s) of cell cycle regulation by Chk1 are important for H3.3 incorporation. Our data indicate that it is not simply interphase duration as we originally hypothesized. We have expanded our discussion section to underscore some aspects of Chk1 regulation that we speculate may be responsible for the change in H3.3 behavior: “Chk1 mutants decrease H3.3 incorporation even before the cell cycle is significantly slowed. Cell cycle slowing has been previously reported to regulate the incorporation of other histone variants in Drosophila15. However, our results indicate that cell cycle state and not duration per se, regulates H3.3 incorporation. In most cell types, the primary role of Chk1 is to stall the cell cycle to protect chromatin in response to DNA damage. Therefore, Chk1 activity directly or indirectly affects the chromatin state in a variety of ways. We speculate that Chk1's role in regulating origin firing may be particularly important in this context73,74. Late replicating regions and heterochromatin first emerge during ZGA, and Chk1 mutants proceed into mitosis before the chromatin is fully replicated22,23,25,71. Since H3.3 is often associated with heterochromatin, the decreased H3.3 incorporation in Chk1 mutants may be an indirect result of increased origin firing and decreased heterochromatin formation73,74.” lines 287-298

Reviewer #2 (Significance (Required)):

This manuscript investigates the regulation of H3.3 incorporation during zygotic genome activation (ZGA) in Drosophila, proposing that the nuclear-to-cytoplasmic (N/C) ratio plays a central role in this process. While the study is conceptually interesting, several concerns arise regarding the lack of proper control experiments and the clarity of the writing.
The manuscript is difficult to follow due to vague descriptions, insufficient distinctions between established knowledge and novel findings, and a lack of rigorous statistical analyses. These issues need to be addressed before the study can be considered for publication.

Reviewer #3 (Evidence, reproducibility and clarity (Required)):

Summary:

Based on previous findings of the changing ratios of histone H3 to its variant H3.3, the authors test how H3.3 incorporation into chromatin is regulated for ZGA. They demonstrate here that H3 nuclear availability drops and replacement by H3.3 relies on chaperone binding, though not on its typical chaperone Hira. Furthermore, they show that nuclear-cytoplasmic (N/C) ratios can influence this histone exchange likely by influencing cell cycle state.

We thank the reviewer for their thoughtful comments. We note that our data ARE consistent with H3.3 incorporation depending on Hira through its chaperone binding site.

Major comments:

1. The claims are largely supported by the data but I think a couple more experiments could help bolster the claims about cell cycle and chk1 regulation.

a. Creating a phosphomimetic of the chk1 phosphorylation site on H3.3 to see if it can overcome the defects seen in chk1 mutants

b. Assessing heterochromatin of embryos without chk1 (or ASVM mutants) for example, by looking at H3K9me3 levels

The first experiments could take several months if the flies haven't already been generated by the authors but the second should be quicker.

a. This is an excellent experimental suggestion which is bolstered by the fact that in frogs H3.3 S31A cannot rescue H3.3 morpholino during gastrulation, but H3.3S31D can (Sitbon et al., 2020). However, to correctly conduct this experiment would require generating and validating multiple additional endogenous H3.3 replacement lines, likely without a fluorescent tag as they can interfere with histone rescue constructs in most species.
As the reviewer notes, this would take several months of work (we have not generated the critical flies yet) and may not yield a satisfying answer, since there are reports that H3.3 may be dispensable in flies aside from as a source of H3-type histone outside of S-phase (Hödl and Bassler, 2012). While we hope to continue experiments along these lines in the future, we feel that this is beyond the scope of the current manuscript.

b. To address this we propose to stain for H3K9me3 in wildtype and Chk1-/- embryos. Since the ASVM line is not a full replacement of all H3.3, we think that staining for H3K9me3 in this line is unlikely to yield a detectable difference.

2. It would also be interesting to see what the health of the flies with some mutations in this paper are beyond the embryo stage if they are viable (e.g., development to adulthood, fertility etc.)

a. the SVM, ASVM mutations

b. the hira + ASVM mutations

The authors might already have this data but if not they have the flies and it shouldn't take long to get these data.

a. To address this concern we propose to conduct hatch rate assays for embryos from the Dendra-tagged H3.3, S31A, SVM, and ASVM flies. However, we do note that in our experiments only one copy of the H3.3A locus was mutated and tagged with Dendra2, leaving one copy of H3.3A and both copies of H3.3B untouched to ensure normal development, as tagging all copies of histone genes can lead to lethality.

b. All Hira mutants develop as haploids due to the inability to decondense the sperm chromatin (which is dependent on Hira). This leads to one extra division to restore the N/C ratio prior to cell cycle slowing and ZGA. These embryos go on to gastrulate and die late in development after cuticle formation (presumably due to their decreased ploidy) (Loppin et al., 2000). The addition of ASVM into the Hira background does not appear to rescue the ploidy defect, as these embryos also undergo the extra division (Figure 3H).
We are therefore confident that these embryos will not hatch. We have added the information about the development of Hira mutants to the text as follows: “These embryos develop as haploids and undergo one additional syncytial division before ZGA (NC14). Hirassm embryos develop otherwise phenotypically normally through organogenesis and cuticle formation, but die before hatching57.” lines 164-167

3. In the discussion section, can the authors speculate on how they think H3.3 ASVM is getting incorporated if not through Hira. Are there other known H3 variant chaperones, or can the core histone chaperone substitute?

We have expanded our discussion to include the following: “In the case of the chimeric histone proteins the incorporation behavior was dependent on the chaperone binding site. For example, H3.3ASVM import and incorporation was similar to H3 in control embryos and H3.3ASVM was still incorporated in Hirassm mutants. This is consistent with the chaperone binding site determining the chromatin incorporation pathway and suggests that H3.3ASVM likely interacts with H3 chaperones such as Caf1.” lines 280-285

Minor comments:

While the paper is well written, I found the figures very confusing and difficult to interpret. Comments here are meant to make it easier to interpret.

1. Fig 1 and most of the paper would benefit from a schematic of early embryo transitions labelled with time and stages of cell cycle to make interpreting data easier

This is an excellent suggestion! We have added a new figure (Figure 1) to explain both the biological system and the way that we measured many properties in this paper.

2. Fig 1- same green color is used for nuclear cycle 12 and for H3.3 making it confusing when reading graphs. Please check other figures where there is a similar use of color for two different things

We have changed the colors so that they are more distinct.

3.
Fig 1C,D might benefit more from being split up into 3 graphs by cell cycle with H3 and H3.3 plotted on the same graphs rather than the way it is now

We do not feel that it is appropriate to directly compare the intensities of the H3-Dendra2 construct expressed from the pseudo-endogenous locus to the H3.3 and chimeric proteins expressed from the H3.3A locus as they were imaged using different settings. These curves can be directly compared within a construct and we can evaluate their trends over time, but the normalized values should not be directly compared in the way that would be encouraged by plotting the data as suggested.

4. Line 130-133: can they also comment on the difference between SVM and ASVM. It seems like SVM might be even worse than ASVM (Fig 2C). Is this related to chk1 phosphorylation?

We think that this is a property of the mixed chimeras since S31A is also imported less efficiently than H3.3 (though we cannot be sure without further experiments). We have added this explanation to the text: “We speculate that chimeric histone proteins (H3.3S31A and H3.3SVM) are not as efficiently handled by the chaperone machinery as species that are normally found in the organism including H3.3ASVM which is protein-identical to H3.” lines 150-152

5. Fig 2F-G: It is very difficult to compare between histones when they are on different graphs, please consider putting H3, H3.3 and H3.3ASVM in a hirassm background on the same graph.

We have done this in the new Figure 3F.

6. Fig 3- move G to become A and then have A and B.

We have restructured this figure to include the nuclear density map of control in response to a comment from Reviewer 1. Although not exactly what the reviewer has envisioned, we hope that this adds clarity to the figure.

7. The initial slope graphs in 4D, E, H and I are not easy to understand and would benefit from an explanation in the legend.
We have edited the legend of Figure 5D (formerly 4D) and S1F, which now read: “Initial slopes of nuclear import curves (change in total nuclear intensity over time for the first 5 timepoints) …” In addition we have updated the methods to include: “Import rates were calculated by using a linear regression for the total nuclear intensity over time for the first 5 timepoints in the nuclear import curves.” lines 471-473, methods

Reviewer #3 (Significance (Required)):

This paper addresses an important and understudied question- how do histones and their variants mediate chromatin regulation in the early embryo before zygotic genome activation? The authors follow up on some previous findings and provide new insights using clever genetics and cell biology in Drosophila melanogaster. However, the authors do not directly look at chromatin structural changes using existing genomic tools. This may be beyond the scope of this work but would make for a nice addition to strengthen their claims if they can implement these chromatin accessibility techniques in the early embryo.

Histones affect a majority of biological processes and understanding their role in the early embryo is key to understanding development. I believe this study applies to a broad audience interested in basic science. However, I do think the authors might benefit from a more broad discussion of their results to attract a broad readership.
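As an aside, the initial-slope measurement described in the revised methods (a linear regression of total nuclear intensity over the first five timepoints) can be sketched in a few lines. This is a minimal illustration, not the authors' analysis code; the function name and the example intensity trace are hypothetical.

```python
import numpy as np

def initial_import_rate(times, total_nuclear_intensity, n_points=5):
    """Slope of a linear fit to total nuclear intensity over the first
    n_points timepoints, used as a proxy for the nuclear import rate
    (nuclear export is assumed to be effectively zero)."""
    t = np.asarray(times[:n_points], dtype=float)
    y = np.asarray(total_nuclear_intensity[:n_points], dtype=float)
    slope, _intercept = np.polyfit(t, y, 1)
    return slope

# Hypothetical trace (arbitrary units): intensity rises ~2 units/min
# early in the cycle, then plateaus; only the first five points enter
# the fit, so the plateau does not bias the initial rate.
rate = initial_import_rate([0, 1, 2, 3, 4, 5, 6],
                           [10, 12, 14, 16, 18, 19, 19.5])
```

Such slopes are comparable within a construct (e.g., across nuclear cycles), but not as absolute values between constructs imaged with different settings, consistent with the normalization caveat raised above.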

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      Summary:

      Bhatt et al. seek to define factors that influence H3.3 incorporation in the embryo. They test various hypotheses, pinpointing the nuclear/cytoplasmic ratio and Chk1, which affects cell cycle state, as influencers. The authors use a variety of clever Drosophila genetic manipulations in this comprehensive study. The data are presented well and conclusions reasonably drawn and not overblown. I have only minor comments to improve readability and clarity. I suggest two OPTIONAL experiments below.

      Major comments:

We found this manuscript well written and experimentally thorough, and the data are meticulously presented. We have one modification that we feel is essential to reader understanding and one experimental concern:

The authors provide the photobleaching details in the methodology, but given how integral this measurement is to the conclusions of the paper, we feel that this should be addressed in clear prose in the body of the text. The authors explain briefly how nuclear export is assayed, but not import (line 99). Would help tremendously to clarify the methods here. This is especially important as import is again measured in Fig 4. This should also be clarified (also in the main body and not solely in the methods).

      If the embryos appeared "reasonably healthy" (line 113) after slbp RNAi, how do the authors know that the RNAi was effective, especially in THESE embryos, given siblings had clear and drastic phenotype? This is especially critical given that the authors find no effect on H3.3 incorporation after slbp RNAi (and presumably H3 reduction), but this result would also be observed if the slbp RNAi was just not effective in these embryos.

      Minor comments:

      Introduction:

Consider using "replication dependent" (RD) rather than "replication coupled." Both are used in the field, but RD parallels RI ("replication independent").

Would help for clarity if the authors noted that H3 is equivalent to H3.2 in Drosophila. Also it is relevant that there are two H3.3 loci as the authors knock mutations into the H3.3A locus, but leave the H3.3B locus intact. The authors should clarify that there are two H3.3 genes in the Drosophila genome.

Please add information and citation (line 58): H3.3 is required to complete development when H3.2 copy number is reduced (PMID: 37279945, McPherson et al. 2023)

      Results:

Embryo genotype is unclear (line 147): Hira[ssm] haploid embryos inherit the Hira mutation maternally? Are Hira homozygous mothers crossed to homozygous fathers to generate these embryos, or are mothers heterozygous? This detail should be in the main text for clarity.

Line 161: Shkl affects nuclear density, but it also appears from Fig 3 to affect nuclear size? The authors do not address this, but it should at least be mentioned.

The authors often describe nuclear H3/H3.3 as chromatin incorporated, but these image-based methods do not distinguish between chromatin-incorporated and nuclear protein.

I very much appreciate how the authors laid out their model in Fig 3 and then used the same figure to explain which part of the model they are testing in Figs 4 and 5. This is not a critique- we can compliment too!

OPTIONAL experimental suggestion: The experiments in Figure 4 and 5 are clever. One would expect that H3 levels might exhaust faster in embryos lacking all H3.2 histone genes (Gunesdogan, 2010, PMID: 20814422), allowing a comparison testing the H3 availability > H3.3 incorporation portion of the hypothesis without manipulating the N/C ratio. This might also result in a more consistent system than slbp RNAi (below).

O'Haren 2024 (PMID: 39661467) did not find increased Pol II at the HLB after zelda RNAi (line 227). Might also want to mention here that zelda RNAi does not result in changes to H3 at the mRNA level (O'Haren 2024), as that would confound the model.

      Discussion:

Should discuss results in context of McPherson et al. 2023 (PMID: 37279945), who showed that decreasing H3.2 gene numbers does not increase H3.3 production at the mRNA or protein levels.

The Shackleton mutation is a clever way to alter the N/C ratio, but the authors should point out that it is difficult (impossible?) to directly and cleanly manipulate the N/C ratio. For example, Shkl mutants seem to also have various nuclear sizes.

How is H3.3 expression controlled? Is it possible that H3.3 biosynthesis is affected in Chk1 mutants?

Figures:

      While I appreciate the statistical summaries in tables, it is still helpful to display standard significance on the figures themselves.

      Fig 1:

A: Is it possible to label panels with the nuclear cycle?

B: Statistics required - caption suggests statistics are in Table S2, but why not put on graph?

C/D: Would be helpful if authors could plot H3/H3.3 on same graph because what we really need to compare is NC13 between H3/H3.3 (and statistics between these curves)

E: The comparison in the text is between H3.3 and H3, but only H3.3 data is shown. I realize that it is published prior, but the comparison in figure would be helpful.

      Fig 2:

A: A very helpful figure. Slightly unclear that the H3 that is not Dendra tagged is at the H3.3 locus. Also unclear that the H3.3A-Dendra2 line exists and is used as a control, as it is not shown in the figure. Should show H3 and H3.3 controls (Figure S2)

F/H: As the comparison is between H3 and ASVM, it would help to combine these data onto the same graph. As the color is currently used unnecessarily to represent nuclear cycle, the authors could use their purple/pink color coding to represent H3/ASVM.

In the legend of Fig 2 the authors write "in the absence of Hira." Technically, there is only a point mutation in Hira. It is not absent.

      Fig 3:

G: Please show WT for comparison. Can use data in Fig 3A.

Model in H is very helpful (compliment)!

      Fig 4:

B/C/F/G: The authors use a point size scale to represent the number of nuclei, but the graphs are so overlaid that it is not particularly useful. Is there a better way to display this dimension?

D/E/H/I: What does "min volume" mean on the X axis?

      Fig 5:

      F: OPTIONAL Experimental request: Here I would like to see H3 as a control.

      Significance

      General assessment: Many long-standing mysteries surround zygotic genome activation, and here the authors tackle one: what are the signals to remodel the zygotic chromatin around ZGA? This is a tricky question to answer, as basically all manipulations done to the embryo have widespread effects on gene expression in general, confounding any conclusions. The authors use clever novel techniques to address the question. Using photoconvertible H3 and H3.3, they can compare the nuclear dynamics of these proteins after embryo manipulation. Their model is thorough and they address most aspects of it. The hurdle this study struggles to overcome is the same that all ZGA studies have, which is that manipulation of the embryo causes cascading disasters (for example, one cannot manipulate the nuclear:cytoplasmic ratio without also altering cell cycle timing), so it's challenging to attribute molecular phenotypes to a single cause. This doesn't diminish the utility of the study.

      Advance: The conceptual advance of this study is that it implicates the nuclear:cytoplasmic ratio and Chk1 in H3.3 incorporation. The authors suggest these factors influence cell cycle closing, which then affects H3.3 incorporation, although directly testing the granularity of this model is beyond the scope of the study. The authors also provide technical advancement in their use of measuring histone dynamics and using changes in the dynamics upon treatment as a useful readout. I envision this strategy (and the dendra transgenes) to be broadly useful in the cell cycle and developmental fields.

      Audience: The basic research presented in this study will likely attract colleagues from the cell cycle and embryogenesis fields. It has broader implications beyond Drosophila and even zygotic genome activation.

      This reviewer's expertise: Chromatin, Drosophila, Gene Regulation

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This important study proposes a framework to understand and predict generalization in visual perceptual learning in humans based on form invariants. Using behavioral experiments in humans and by training deep networks, the authors offer evidence that the presence of stable invariants in a task leads to faster learning. However, this interpretation is promising but incomplete. It can be strengthened through clearer theoretical justification, additional experiments, and by rejecting alternate explanations.

      We sincerely thank the editors and reviewers for their thoughtful feedback and constructive comments on our study. We have taken significant steps to address the points raised, particularly the concern regarding the incomplete interpretation of our findings.

      In response to Reviewer #1, we have included long-term learning curves from the human experiments to provide a clearer demonstration of the differences in learning rates across invariants, and have incorporated a new experiment to investigate location generalization within each invariant stability level. These new findings have shifted the focus of our interpretation from learning rates to the generalization patterns both within and across invariants, which, alongside the observed weight changes across DNN layers, support our proposed framework based on the Klein hierarchy of geometries and the Reverse Hierarchy Theory (RHT).

      We have also worked to clarify the conceptual foundation of our study and strengthen the theoretical interpretation of our results in light of the concerns raised by Reviewers #1 and #2. We have further expanded the discussion linking our findings to previous work on VPL generalization, and addressed alternative explanations raised by Reviewer #1.

      Reviewer #1 (Public Review):

      Summary:

      Visual Perceptual Learning (VPL) results in varying degrees of generalization to tasks or stimuli not seen during training. The question of which stimulus or task features predict whether learning will transfer to a different perceptual task has long been central in the field of perceptual learning, with numerous theories proposed to address it. This paper introduces a novel framework for understanding generalization in VPL, focusing on the form invariants of the training stimulus. Contrary to a previously proposed theory that task difficulty predicts the extent of generalization - suggesting that more challenging tasks yield less transfer to other tasks or stimuli - this paper offers an alternative perspective. It introduces the concept of task invariants and investigates how the structural stability of these invariants affects VPL and its generalization. The study finds that tasks with high-stability invariants are learned more quickly. However, training with low-stability invariants leads to greater generalization to tasks with higher stability, but not the reverse. This indicates that, at least based on the experiments in this paper, an easier training task results in less generalization, challenging previous theories that focus on task difficulty (or precision). Instead, this paper posits that the structural stability of stimulus or task invariants is the key factor in explaining VPL generalization across different tasks.

      Strengths:

      - The paper effectively demonstrates that the difficulty of a perceptual task does not necessarily correlate with its learning generalization to other tasks, challenging previous theories in the field of Visual Perceptual Learning. Instead, it proposes a significant and novel approach, suggesting that the form invariants of training stimuli are more reliable predictors of learning generalization. The results consistently bolster this theory, underlining the role of invariant stability in forecasting the extent of VPL generalization across different tasks.

      - The experiments conducted in the study are thoughtfully designed and provide robust support for the central claim about the significance of form invariants in VPL generalization.

      Weaknesses:

      - The paper assumes a considerable familiarity with the Erlangen program and the definitions of invariants and their structural stability, potentially alienating readers who are not versed in these concepts. This assumption may hinder the understanding of the paper's theoretical rationale and the selection of stimuli for the experiments, particularly for those unfamiliar with the Erlangen program's application in psychophysics. A brief introduction to these key concepts would greatly enhance the paper's accessibility. The justification for the chosen stimuli and the design of the three experiments could be more thoroughly articulated.

      We appreciate your feedback regarding the accessibility of our paper, particularly concerning the Erlangen Program and its associated concepts. We have revised the manuscript to include a more detailed introduction to Klein’s Erlangen Program in the second paragraph of the Introduction section. It provides clear descriptions and illustrative examples for the three invariants within the Klein hierarchy of geometries, as well as the nested relationships among them (see revised Figure 1). We believe this addition will enhance the accessibility of the theoretical framework for readers who may not be familiar with these concepts.

      In the revised manuscript, we have also expanded the descriptions of the stimuli and experimental design for psychophysics experiments. These additions aim to clarify the rationale behind our choices, ensuring that readers can fully understand the connection between our theoretical framework and experimental approach.

      - The paper does not clearly articulate how its proposed theory can be integrated with existing observations in the field of VPL. While it acknowledges previous theories on VPL generalization, the paper falls short in explaining how its framework might apply to classical tasks and stimuli that have been widely used in the VPL literature, such as orientation or motion discrimination with Gabors, vernier acuity, etc. It also does not provide insight into the application of this framework to more naturalistic tasks or stimuli. If the stability of invariants is a key factor in predicting a task's generalization potential, the paper should elucidate how to define the stability of new stimuli or tasks. This issue ties back to the earlier mentioned weakness: namely, the absence of a clear explanation of the Erlangen program and its relevant concepts.

      We thank you for highlighting the necessity of integrating our proposed framework with existing observations in VPL research.

      Prior VPL studies have not concurrently examined multiple geometrical invariants with varying stability levels, making direct comparisons challenging. However, we have identified tasks from the literature that align with specific invariants. For example, orientation discrimination with Gabors (e.g., Dosher & Lu, 2005) and texture discrimination task (e.g., Wang et al., 2016) involve Euclidean invariants, and circle versus square discrimination (e.g., Kraft et al., 2010) involves affine invariants. On the other hand, our framework does not apply to studies using stimuli that are unrelated to geometric transformations, such as motion discrimination with Gabors or random dots, depth discrimination, vernier acuity, spatial frequency discrimination, contrast detection or discrimination.

      By focusing on geometrical properties of stimuli, our work addresses a gap in the field and introduces a novel approach to studying VPL through the lens of invariant extraction, echoing Gibson’s ecological approach to perceptual learning.

      In the revised manuscript, we have added a clearer explanation of Klein’s Erlangen Program, including the definition of geometrical invariants and their stability (the second paragraph of the Introduction section). Additionally, we have expanded the Discussion section to draw more explicit comparisons between our results and previous studies on VPL generalization, highlighting both similarities and differences, as well as potential shared mechanisms.

      - The paper does not convincingly establish the necessity of its introduced concept of invariant stability for interpreting the presented data. For instance, consider an alternative explanation: performing in the collinearity task requires orientation invariance. Therefore, it's straightforward that learning the collinearity task doesn't aid in performing the other two tasks (parallelism and orientation), which do require orientation estimation. Interestingly, orientation invariance is more characteristic of higher visual areas, which, consistent with the Reverse Hierarchy Theory, are engaged more rapidly in learning compared to lower visual areas. This simpler explanation, grounded in established concepts of VPL and the tuning properties of neurons across the visual cortex, can account for the observed effects, at least in one scenario. This approach has previously been used/proposed to explain VPL generalization, as seen in (Chowdhury and DeAngelis, Neuron, 2008), (Liu and Pack, Neuron, 2017), and (Bakhtiari et al., JoV, 2020). The question then is: how does the concept of invariant stability provide additional insights beyond this simpler explanation?

      We appreciate your thoughtful alternative explanation. While this explanation accounts for why learning the collinearity task does not transfer to the orientation task—which requires orientation estimation—it does not explain why learning the collinearity task fails to transfer to the parallelism task, which requires orientation invariance rather than orientation estimation. Instead, the asymmetric transfer observed in our study could be perfectly explained by incorporating the framework of the Klein hierarchy of geometries.

      According to the Klein hierarchy, invariants with higher stability are more perceptually salient and detectable, and they are nested hierarchically, with higher-stability invariants encompassing lower-stability invariants (as clarified in the revised Introduction). In our invariant discrimination tasks, participants need only extract and utilize the most stable invariant to differentiate stimuli, optimizing their ability to discriminate that invariant while leaving the less stable invariants unoptimized.

      For example:

      • In the collinearity task, participants extract the most stable invariant, collinearity, to perform the task. Although the stimuli also contain differences in parallelism and orientation, these lower-stability invariants are not utilized or optimized during the task.

      • In the parallelism task, participants optimize their sensitivity to parallelism, the highest-stability invariant available in this task, while orientation, a lower-stability invariant, remains irrelevant and unoptimized.

      • In the orientation task, participants can only rely on differences in orientation to complete the task. Thus, the least stable invariant, orientation, is extracted and optimized.

      This hierarchical process explains why training on a higher-stability invariant (e.g., collinearity) does not transfer to tasks involving lower-stability invariants (e.g., parallelism or orientation). Conversely, tasks involving lower-stability invariants (e.g., orientation) can aid in tasks requiring higher-stability invariants, as these higher-stability invariants inherently encompass the lower ones, resulting in a low-to-high-stability transfer effect.

      This unique perspective underscores the importance of invariant stability in understanding generalization in VPL, complementing and extending existing theories such as the Reverse Hierarchy Theory. To help the reader understand our proposed theory, we revised the Introduction and Discussion section.

      - While the paper discusses the transfer of learning between tasks with varying levels of invariant stability, the mechanism of this transfer within each invariant condition remains unclear. A more detailed analysis would involve keeping the invariant's stability constant while altering a feature of the stimulus in the test condition. For example, in the VPL literature, one of the primary methods for testing generalization is examining transfer to a new stimulus location. The paper does not address the expected outcomes of location transfer in relation to the stability of the invariant. Moreover, in the affine and Euclidean conditions one could maintain consistent orientations for the distractors and targets during training, then switch them in the testing phase to assess transfer within the same level of invariant structural stability.

      We thank you for this good suggestion. Using one of the primary methods for testing generalization, we performed a new psychophysics experiment to specifically examine how VPL generalizes to a new test location within a single invariant stability level (see Experiment 3 in the revised manuscript). The results show that the collinearity task exhibits greater location generalization compared to the parallelism task. This finding suggests the involvement of higher-order visual areas during high-stability invariant training, aligning with our theoretical framework based on the Reverse Hierarchy Theory (RHT). We attribute the unexpected location generalization observed in the orientation task to an additional requirement for spatial integration in its specific experimental design (as explained in the revised Results section “Location generalization within each invariant”). Moreover, based on previous VPL studies that have reported location specificity in orientation discrimination (Fiorentini and Berardi, 1980; Schoups et al., 1995; Shiu and Pashler, 1992), along with the substantial weight changes observed in lower layers of DNNs trained on the orientation task (Figure 9B, C), we infer that under a more controlled experimental design—such as the two-interval, two-alternative forced choice (2I2AFC) task employed in DNN simulations, where spatial integration is not required for any of the three invariants—the plasticity for orientation tasks would more likely occur in lower-order areas.

      In the revised manuscript, we have discussed how these findings, together with the observed asymmetric transfer across invariants and the distribution of learning across DNN layers, collectively reveal the neural mechanisms underlying VPL of geometrical invariants.

      - In the section detailing the modeling experiment using deep neural networks (DNN), the takeaway was unclear. While it was interesting to observe that the DNN exhibited a generalization pattern across conditions similar to that seen in the human experiments, the claim made in the abstract and introduction that the model provides a 'mechanistic' explanation for the phenomenon seems overstated. The pattern of weight changes across layers, as depicted in Figure 7, does not conclusively explain the observed variability in generalizations. Furthermore, the substantial weight change observed in the first two layers during the orientation discrimination task is somewhat counterintuitive. Given that neurons in early layers typically have smaller receptive fields and narrower tunings, one would expect this to result in less transfer, not more.

      We appreciate your suggestion regarding the clarity of DNN modeling. While the DNN employed in our study recapitulates several known behavioral and physiological VPL effects (Manenti et al., 2023; Wenliang and Seitz, 2018), we acknowledge that the claim in the abstract and introduction suggesting the model provides a ‘mechanistic’ explanation for the phenomenon may have been overstated. The DNN serves primarily as a tool to generate important predictions about the underlying neural substrates and provides a promising testbed for investigating learning-related plasticity in the visual hierarchy.

      In the revised manuscript, we have made significant improvements in explaining the weight change across DNN layers and its implication for understanding “when” and “where” learning occurs in the visual hierarchy. Specifically, in the Results ("Distribution of learning across layers") and Discussion sections, we have provided a more explicit explanation of the weight change across layers, emphasizing its implications for understanding the observed variability in generalizations and the underlying neural mechanisms.

      Regarding the substantial weight change observed in the first two layers during the orientation discrimination task, we interpret this as evidence that VPL of this least stable invariant relies more on the plasticity of lower-level brain areas, which may explain the poorer generalization performance to new locations or features observed in the previous literature (Fiorentini and Berardi, 1980; Schoups et al., 1995; Shiu and Pashler, 1992). However, this does not imply that learning effects of this least stable invariant cannot transfer to more stable invariants. From the perspective of Klein’s Erlangen program, the extraction of more stable invariants is implicitly required when processing less stable ones, which leads to their automatic learning. Additionally, within the framework of the Reverse Hierarchy Theory (RHT), plasticity in lower-level visual areas affects higher-level areas that receive the same low-level input, due to the feedforward anatomical hierarchy of the visual system (Ahissar and Hochstein, 2004, 1997; Markov et al., 2013; McGovern et al., 2012). Therefore, the improved signal from lower-level plasticity resulted from training on less stable invariants can enhance higher-level representations of more stable invariants, facilitating the transfer effect from low- to high-stability invariants.

      Reviewer #2 (Public Review):

      The strengths of this paper are clear: The authors are asking a novel question about geometric representation that would be relevant to a broad audience. Their question has a clear grounding in pre-existing mathematical concepts, that, to my knowledge, have been only minimally explored in cognitive science. Moreover, the data themselves are quite striking, such that my only concern would be that the data seem almost *too* clean. It is hard to know what to make of that, however. From one perspective, this is even more reason the results should be publicly available. Yet I am of the (perhaps unorthodox) opinion that reviewers should voice these gut reactions, even if it does not influence the evaluation otherwise. Below I offer some more concrete comments:

      (1) The justification for the designs is not well explained. The authors simply tell the audience in a single sentence that they test projective, affine, and Euclidean geometry. But despite my familiarity with these terms -- familiarity that many readers may not have -- I still had to pause for a very long time to make sense of how these considerations led to the stimuli that were created. I think the authors must, for a point that is so central to the paper, thoroughly explain exactly why the stimuli were designed the way that they were and how these designs map onto the theoretical constructs being tested.

      We thank you for reminding us to better justify our experimental designs. In response, we have provided a detailed introduction to Klein’s Erlangen Program, describing projective, affine, and Euclidean geometries, their associated invariants, and the hierarchical relationships among them (see revised Introduction and Figure 1).

      All experiments in our study employed stimuli with varying structural stability (collinearity, parallelism, orientation; see revised Figures 2 and 4), enabling us to investigate the impact of invariant stability on visual perceptual learning. Experiment 1 was adapted from paradigms studying the "configural superiority effect," commonly used to assess the salience of geometric invariants. This paradigm was chosen to align with and build upon related research, thereby enhancing comparability across studies. To address the limitations of Experiment 1 (as detailed in our Results section), Experiments 2, 3, and 4 employed a 2AFC (two-alternative forced choice)-like paradigm, which is more common in visual perceptual learning research. Additionally, we have expanded the descriptions of our stimuli and designs, aiming to ensure clarity and accessibility for all readers.

      (2) I wondered if the design in Experiment 1 was flawed in one small but critical way. The goal of the parallelism stimuli, I gathered, was to have a set of items that is not parallel to the other set of items. But in doing that, isn't the manipulation effectively the same as the manipulation in the orientation stimuli? Both functionally involve just rotating one set by a fixed amount. (Note: This does not seem to be a problem in Experiment 2, in which the conditions are more clearly delineated.)

      We appreciate your insightful observation regarding the design of Experiment 1 and the potential similarity between the manipulations of the parallelism and orientation stimuli.

      The parallelism and orientation stimuli in Experiment 1 were originally introduced by Olson and Attneave (1970) to support line-based models of shape coding and were later adapted by Chen (1986) to measure the relative salience of different geometric properties. In the parallelism stimuli, the odd quadrant differs from the others in line slope, while in the orientation stimuli, the odd quadrant contains identical line segments but differs in the direction pointed by their angles. The faster detection of the odd quadrant in the parallelism stimuli compared to the orientation stimuli has traditionally been interpreted as evidence supporting line-based models of shape coding. However, as Chen (1986, 2005) proposed, the concept of invariants over transformations offers a different interpretation: in the parallelism stimuli, the fact that line segments share the same slope essentially implies that they are parallel, and the discrimination may be actually based on parallelism. This reinterpretation suggests that the superior performance with parallelism stimuli reflects the relative perceptual salience of parallelism (an affine invariant property) compared to the orientation of angles (a Euclidean invariant property).

      In the collinearity and orientation tasks, the odd quadrant and the other quadrants differ in their corresponding geometries, such as being collinear versus non-collinear. However, in the parallelism task, participants could rely either on the non-parallel relationship between the odd quadrant and the other quadrants or on the difference in line slope to complete the task, which can be seen as effectively similar to the manipulation in the orientation stimuli, as you pointed out. Nonetheless, this set of stimuli and the associated paradigm have been used in prior studies to address questions about Klein’s hierarchy of geometries (Chen, 2005; Wang et al., 2007; Meng et al., 2019). Given its historical significance and the importance of ensuring comparability with previous research, we adopted this set of stimuli despite its imperfections. Other limitations of this paradigm are discussed in the Results section (“The paradigm of ‘configural superiority effects’ with reaction time measures”), and optimized experimental designs were implemented in Experiment 2, 3, and 4 to produce more reliable results.

      (3) I wondered if the results would hold up for stimuli that were more diverse. It seems that a determined experimenter could easily design an "adversarial" version of these experiments for which the results would be unlikely to replicate. For instance: In the orientation group in Experiment 1, what if the odd-one-out was rotated 90 degrees instead of 180 degrees? Intuitively, it seems like this trial type would now be much easier, and the pattern observed here would not hold up. If it did hold up, that would provide stronger support for the authors' theory.

      It is not enough, in my opinion, to simply have some confirmatory evidence of this theory. One would have to have thoroughly tested many possible ways that theory could fail. I'm unsure that enough has been done here to convince me that these ideas would hold up across a more diverse set of stimuli.

      Thanks for your nice suggestion to validate our results using more diverse stimuli. However, the limitations of Experiment 1 make it less suitable for rigorous testing of diverse or "adversarial" stimuli. In addition to the limitation discussed in response to (2), another issue is that participants may rely on grouping effects among shapes in the quadrants, rather than solely extracting the geometrical invariants that are the focus of our study. As a result, the reaction times measured in this paradigm may not exclusively reflect the extraction time of geometrical invariants but could also be influenced by these grouping effects.

      Therefore, we have shifted our focus to the improved design used in Experiment 2 to provide stronger evidence for our theory. Building on this more robust design, we have extended our investigations to study location generalization (revised Experiment 3) and long-term learning effects (revised Figure 6—figure supplement 2). These enhancements allow us to provide stronger evidence for our theory while addressing potential confounds present in Experiment 1.

      While we did not explicitly test the 90-degree rotation scenario in Experiment 1, future studies could employ more diverse set of stimuli within the Experiment 2 framework to better understand the limits and applicability of our theoretical predictions. We appreciate this suggestion, as it offers a valuable direction for further research.

      Reviewer #1 (Recommendations For The Authors):

      Major comments:

      - A concise introduction to the Erlangen program, geometric invariants, and their structural stability would greatly enhance the paper. This would not only clarify these concepts for readers unfamiliar with them but also provide a more intuitive explanation for the choice of tasks and stimuli used in the study.

      - I recommend adding a section that discusses how this new framework aligns with previous observations in VPL, especially those involving more classical stimuli like Gabors, random dot kinematograms, etc. This would help in contextualizing the framework within the broader spectrum of VPL research.

      - Exploring how each level of invariant stability transfers within itself would be an intriguing addition. Previous theories often consider transfer within a condition. For instance, in an orientation discrimination task, a challenging training condition might transfer less to a new stimulus test location (e.g., a different visual quadrant). Applying a similar approach to examine how VPL generalizes to a new test location within a single invariant stability level could provide insightful contrasts between the proposed theory and existing ones. This would be particularly relevant in the context of Experiment 2, which could be adapted for such a test.

      - I suggest including some example learning curves from the human experiment for a more clear demonstration of the differences in the learning rates across conditions. Easier conditions are expected to be learned faster (i.e. plateau faster to a higher accuracy level). The learning speed is reported for the DNN but not for the human subjects.

      - In the modeling section, it would be beneficial to focus on offering an explanation for the observed generalization as a function of the stability of the invariants. As it stands, the neural network model primarily demonstrates that DNNs replicate the same generalization pattern observed in human experiments. While this finding is indeed interesting, the model currently falls short of providing deeper insights or explanations. A more detailed analysis of how the DNN model contributes to our understanding of the relationship between invariant stability and generalization would significantly enhance this section of the paper.

      Minor comments:

      - Line 46: "it is remains" --> "it remains"

      - Larger font sizes for the vertical axis in Figure 6B would be helpful.

      We thank you for your detailed and constructive comments, which have significantly helped us improve the clarity and rigor of our manuscript. Below, we provide a response to each point raised.

      Major Comments

      (1) A concise introduction to the Erlangen program, geometric invariants, and their structural stability:

      We appreciate your suggestion to provide a clearer introduction to these foundational concepts. In the revised manuscript, we have added a dedicated section in the Introduction that offers a concise explanation of Klein’s Erlangen Program, including the concept of geometric invariants and their structural stability. This addition aims to make the theoretical framework more accessible to readers unfamiliar with these concepts and to better justify the choice of tasks and stimuli used in the study.

      (2) Contextualizing the framework within the broader spectrum of VPL research:

      We have expanded the Discussion section to better integrate our framework with previous VPL studies that reported generalization, including those using classical stimuli such as Gabors (Dosher and Lu, 2005; Hung and Seitz, 2014; Jeter et al., 2009; Liu and Pack, 2017; Manenti et al., 2023) and random dot kinematograms (Chang et al., 2013; Chen et al., 2016; Huang et al., 2007; Liu and Pack, 2017). In particular, we now discuss the similarities and differences between our findings and these earlier studies, exploring potential shared mechanisms underlying VPL generalization across different types of stimuli. These additions aim to contextualize our framework within the broader field of VPL research and highlight its relevance to existing literature.

      (3) Exploring transfer within each invariant stability level:

      In response to this insightful suggestion, we have added a new psychophysics experiment in the revised manuscript (Experiment 3) to examine how VPL generalizes to a new test location within the same invariant stability level. This experiment provides an opportunity to further explore the neural substrates underlying VPL of geometrical invariants, offering a contrast to existing theories and strengthening the connection between our framework and location generalization findings in the VPL literature.

      (4) Including example learning curves from the human experiments:

      We appreciate your suggestion to include learning curves for human subjects. In the revised manuscript, we have added learning curves of long-term VPL (see revised Figure 6—figure supplement 2) to track the temporal learning processes across invariant conditions. Interestingly, and in contrast to the results reported in the DNN simulations, these curves show that less stable invariants are learned faster and exhibit greater magnitudes of learning. We interpret this discrepancy as a result of differences in initial performance levels between humans and DNNs, as discussed in the revised Discussion section.

      (5) Offering a deeper explanation of the DNN model's findings:

      We acknowledge your concern that the modeling section primarily demonstrates that DNNs replicate human generalization patterns without offering deeper mechanistic insights. To address this, we have expanded the Results and Discussion sections to more explicitly interpret the weight change patterns observed across DNN layers in relation to invariant stability and generalization. We discuss how the model contributes to understanding the observed generalization within and across invariants with different stability, focusing on the neural network's role in generating predictions about the neural mechanisms underlying these effects.

      Minor Comments

      (1) Line 46: Correction of “it is remains” to “it remains”:

      We have corrected this typo in the revised manuscript.

      (2) Vertical axis font size in Figure 6B:

      We have increased the font size of the vertical axis labels in revised Figure 8B for improved readability.

      Reviewer #2 (Recommendations For The Authors):

      (1) There are many details throughout the paper that are confusing, such as the caption for Figure 4, which does not appear to correspond to what is shown (and is perhaps a copy-paste of the caption for Experiment 1?). Similarly, I wasn't sure about many methodological details, like: How participants made their second response in Experiment 2? It says somewhere that they pressed the corresponding key to indicate which one was the target, but I didn't see anything explaining what that meant. Also, I couldn't tell if the items in the figures were representative of all trials; the stimuli were described minimally in the paper.

      (2) The language in the paper felt slightly off at times, in minor but noticeable ways. Consider the abstract. The word "could" in the first sentence is confusing, and, more generally, that first sentence is actually quite vague (i.e., it just states something that would appear to be true of any perceptual system). In the following sentence, I wasn't sure what was meant by "prior to be perceived in the visual system". Though I was able to discern what the authors were intending to say most times, I was required to "read between the lines" a bit. This is not to fault the authors. But these issues need to be addressed, I think.

      (1) We sincerely apologize for the oversight regarding the caption for (original) Figure 4, and thank you for pointing out this error. In the revised manuscript, we have corrected the caption for Figure 4 (revised Figure 5) and ensured it accurately describes the content of the figure. Additionally, we have strengthened the descriptions of the stimuli and tasks in both the Materials and Methods section and the captions for (revised) Figures 4 and 5 to provide a clearer and more comprehensive explanation of Experiment 2. These revisions aim to help readers fully understand the experimental design and methodology.

      (2) We appreciate your feedback regarding the clarity and precision of the language in the manuscript. We acknowledge that some expressions, particularly in the abstract, were unclear or imprecise. In the revised manuscript, we have rewritten the abstract to improve clarity and ensure that the statements are concise and accurately convey our intended meaning. Additionally, we have thoroughly reviewed the entire manuscript to address any other instances of ambiguous language, aiming to eliminate the need for readers to "read between the lines." We are grateful for your suggestions, which have helped us enhance the overall readability of the paper.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

Reviewer #1 (Public review):

      Strengths:

Utilization of both human placental samples and multiple mouse models to explore the mechanisms linking inflammatory macrophages and T cells to preeclampsia (PE).

Incorporation of advanced techniques such as CyTOF, scRNA-seq, bulk RNA-seq, and flow cytometry.

Identification of specific immune cell populations and their roles in PE, including the IGF1-IGF1R ligand-receptor pair in macrophage-mediated Th17 cell differentiation.

Demonstration of the adverse effects of pro-inflammatory macrophages and T cells on pregnancy outcomes through transfer experiments.

      Weaknesses:

      Comment 1. Inconsistent use of uterine and placental cells, which are distinct tissues with different macrophage populations, potentially confounding results.

Response 1: We thank the reviewers' comments. We have performed a green fluorescent protein (GFP) pregnant mice-related animal experiment, which was not shown in this manuscript. Wild-type (WT) female mice were mated with either transgenic male mice genetically modified to express GFP, or with WT male mice, in order to generate either GFP-expressing pups (GFP-pups) or their genetically unmodified counterparts (WT-pups), respectively. Mice were euthanized on day 18.5 of gestation, and the uteri of the pregnant females and the placentas of the offspring were analyzed using flow cytometry. The majority of macrophages in the uterus and placenta are of maternal origin, defined as GFP-negative. In contrast, fetal-derived macrophages, distinguished by their expression of GFP, represent a mere fraction of the total macrophage population. We have added the GFP pregnant mice-related data on uterine and placental cells (Line 204-212).

      Comment 2. Missing observational data for the initial experiment transferring RUPP-derived macrophages to normal pregnant mice.

      Response 2: We thank the reviewers' comments. We have added the observational data (Figure 4-figure supplement 1D, 1E) and a corresponding description of the data (Line 198-203).

      Comment 3. Unclear mechanisms of anti-macrophage compounds and their effects on placental/fetal macrophages.

Response 3: We thank the reviewers' comments. PLX3397 is an inhibitor of CSF1R, which is needed for macrophage development (Nature. 2023, PMID: 36890231; Cell Mol Immunol. 2022, PMID: 36220994), as we have stated on Line 227-230. However, PLX3397 is a small-molecule compound with the potential to cross the placental barrier and affect fetal macrophages. We have discussed the impact of this factor on the experiment in the Discussion section (Line 457-459).

      Comment 4. Difficulty in distinguishing donor cells from recipient cells in murine single-cell data complicates interpretation.

Response 4: We thank the reviewers' comments. Upon analysis, we observed a notable elevation in the frequency of total macrophages within the CD45<sup>+</sup> cell population. We subsequently performed macrophage clustering and uncovered a marked increase in the frequency of Cluster 0, implying a potential correlation between Cluster 0 and donor-derived cells. RNA sequencing revealed that the F480<sup>+</sup>CD206<sup>-</sup> pro-inflammatory donor macrophages exhibited a Folr2<sup>+</sup>Ccl7<sup>+</sup>Ccl8<sup>+</sup>C1qa<sup>+</sup>C1qb<sup>+</sup>C1qc<sup>+</sup> phenotype, consistent with the phenotype of Cluster 0 observed in the single-cell RNA sequencing of macrophages (Figure 4D and Figure 5E). Therefore, we believe that the donor cells correspond to Cluster 0 of the macrophages.

      Comment 5. Limitation of using the LPS model in the final experiments, as it more closely resembles systemic inflammation seen in endotoxemia rather than the specific pathology of PE.

Response 5: We thank the reviewers' comments. Firstly, our other animal experiments in this manuscript used the Reduction in Uterine Perfusion Pressure (RUPP) mouse model to simulate the pathology of PE. However, the RUPP model requires ligation of the uterine arteries in pregnant mice on day 12.5 of gestation, which hinders T cells transferred via the tail vein from reaching the maternal-fetal interface. In addition, this experiment aims to prove that CD4<sup>+</sup> T cells differentiate into memory-like Th17 cells through IGF-1R signaling to affect pregnancy, by clearing CD4<sup>+</sup> T cells in vivo with an anti-CD4 antibody and then injecting IGF-1R inhibitor-treated CD4<sup>+</sup> T cells. Moreover, we proved that injection of RUPP-derived memory-like CD4<sup>+</sup> T cells into pregnant mice induces PE-like symptoms (Figure 6F-6H). In summary, the application of the LPS model in the final experiments does not affect the conclusions.

      Reviewer #2 (Public review):

      Strengths:

      (1) This study combines human and mouse analyses and allows for some amount of mechanistic insight into the role of pro-inflammatory and anti-inflammatory macrophages in the pathogenesis of pre-eclampsia (PE), and their interaction with Th17 cells.

      (2) Importantly, they do this using matched cohorts across normal pregnancy and common PE comorbidities like gestation diabetes (GDM).

      (3) The authors have developed clear translational opportunities from these "big data" studies by moving to pursue potential IGF1-based interventions.

      Weaknesses:

(1) Clearly the authors generated vast amounts of multi-omic data using CyTOF and single-cell RNA-seq (scRNA-seq), but their central message becomes muddled very quickly. The reader has to do a lot of work to follow the authors' multiple lines of inquiry rather than smoothly following along with their unified rationale. The title description tells fairly little about the substance of the study. The manuscript is very challenging to follow. The paper would benefit from substantial reorganizations and editing for grammatical and spelling errors. For example, RUPP is introduced in Figure 4 but is not defined or discussed in the text until Figure 6. (The figure comparing pro- and anti-inflammatory macrophages does not add much to the manuscript as this is an expected finding).

Response 1: We thank the reviewers' comments. According to the reviewer's suggestions, we have made the necessary revisions. Firstly, the title of the article has been modified to be more specific. Secondly, we introduce the RUPP mouse model when interpreting Figure 4-figure supplement 1. Thirdly, we have moved the images of Figure 7 to Figure 6-figure supplement 2 to make them easier to follow. Finally, we diligently corrected the grammatical and spelling errors in the article. As for the figure comparing pro- and anti-inflammatory macrophages, the Editor requested a more comprehensive description of the macrophage phenotype during the initial submission. As a result, we conducted transcriptome RNA-seq of both uterine-derived pro-inflammatory and anti-inflammatory macrophages and performed a detailed analysis of macrophages in the scRNA-seq data.

      Comment 2. The methods lack critical detail about how human placenta samples were processed. The maternal-fetal interface is a highly heterogeneous tissue environment and care must be taken to ensure proper focus on maternal or fetal cells of origin. Lacking this detail in the present manuscript, there are many unanswered questions about the nature of the immune cells analyzed. It is impossible to figure out which part of the placental unit is analyzed for the human or mouse data. Is this the decidua, the placental villi, or the fetal membranes? This is of key importance to the central findings of the manuscript as the immune makeup of these compartments is very different. Or is this analyzed as the entirety of the placenta, which would be a mix of these compartments and significantly less exciting?

Response 2: We thank the reviewers' comments. Placental villi, rather than fetal membranes and decidua, were used for CyTOF in this study. This detail about how human placenta samples were processed has been added to the Materials and Methods section (Line 564-576).

      Comment 3. Similarly, methods lack any detail about the analysis of the CyTOF and scRNAseq data, much more detail needs to be added here. How were these clustered, what was the QC for scRNAseq data, etc? The two small paragraphs lack any detail.

      Response 3: We thank the reviewers' comments. The details about the analysis of the CyTOF (Line577-586) and scRNAseq (Line600-615) data have been added in the Materials and Methods section.

      Comment 4. There is also insufficient detail presented about the quantities or proportions of various cell populations. For example, gdT cells represent very small proportions of the CyTOF plots shown in Figures 1B, 1C, & 1E, yet in Figures 2I, 2K, & 2K there are many gdT cells shown in subcluster analysis without a description of how many cells are actually represented, and where they came from. How were biological replicates normalized for fair statistical comparison between groups?

Response 4: We thank the reviewers' comments. In our study, approximately 8×10<sup>5</sup> cells were collected per group for analysis using CyTOF. Of these, about 10% (8×10<sup>4</sup> cells per group) were utilized to generate Figure 1B. As depicted in Figure 1B, gdT cells constitute roughly 1% of each group, with specific percentages as follows: NP group (1.23%), PE group (0.97%), GDM group (0.94%), and GDM&PE group (1.26%), which equates to approximately 800 cells per group. For the subsequent gdT cell analysis presented in Figure 2I, we employed data from all cells within each group to construct the tSNE maps, comprising approximately 8000 cells per group. Consequently, it may initially appear that the number of gdT cells is significantly higher than what is shown in Figure 1B. To clarify this, we have included pertinent explanations in the figure legend. Given the relatively low proportions of gdT cells, we did not pursue further investigations of these cells in subsequent experiments. Following your suggestion, we have relocated this result to the supplementary materials, where it is now presented as Figure 2-figure supplement 1D-E.

      The number of biological replicates (samples) is consistent with Figure 1, and this information has been added to the figure legend.

      Comment 5. The figures themselves are very tricky to follow. The clusters are numbered rather than identified by what the authors think they are, the numbers are so small, that they are challenging to read. The paper would be significantly improved if the clusters were clearly labeled and identified. All the heatmaps and the abundance of clusters should be in separate supplementary figures.

      Response 5: We thank the reviewers' comments. Based on your suggestions, we have labeled and defined the Clusters (Figure 2A, 2F, Figure 3A, Figure 5C and Figure 6A). Additionally, we have moved most of the heatmaps to the supplementary materials.

Comment 6. The authors should take additional care when constructing figures that their biological replicates (and all replicates) are accurately represented. Figure 2H-2K shows N=10 data points for the normal pregnant (NP) samples when clearly their Table 1 and text denote they only studied N=9 normal subjects.

Response 6: We thank the reviewers' careful checking. During our verification, we found that one sample in the NP group had pregnancy complications other than PE and GDM. The data in Figure 2H-2K were not updated in a timely manner. We have promptly updated these data and reanalyzed them.

      Comment 7. There is little to no evaluation of regulatory T cells (Tregs) which are well known to undergird maternal tolerance of the fetus, and which are well known to have overlapping developmental trajectory with RORgt+ Th17 cells. We recommend the authors evaluate whether the loss of Treg function, quantity, or quality leaves CD4+ effector T cells more unrestrained in their effect on PE phenotypes. References should include, accordingly: PMCID: PMC6448013 / DOI: 10.3389/fimmu.2019.00478; PMC4700932 / DOI: 10.1126/science.aaa9420.

Response 7: We thank the reviewers' comments. We have performed the Treg-related animal experiment, which was not shown in this manuscript. We have added the Treg-related data in Figure 6F-6H. The injection of CD4<sup>+</sup>CD44<sup>+</sup> T cells derived from RUPP mice, characterized by a reduced frequency of Tregs, could induce PE-like symptoms in pregnant mice (Line 297-304). Additionally, we have added a necessary discussion of Tregs and cited the literature you mentioned (Line 433-439).

      Comment 8. In discussing gMDSCs in Figure 3, the authors have missed key opportunities to evaluate bona fide Neutrophils. We recommend they conduct FACS or CyTOF staining including CD66b if they have additional tissues or cells available. Please refer to this helpful review article that highlights key points of distinguishing human MDSC from neutrophils: https://doi.org/10.1038/s41577-024-01062-0. This will both help the evaluation of potentially regulatory myeloid cells that may suppress effector T cells as well as aid in understanding at the end of the study if IL-17 produced by CD4+ Th17 cells might recruit neutrophils to the placenta and cause ROS immunopathology and fetal resorption.

Response 8: We thank the reviewers' comments. Although we do not have additional tissues or cells available to conduct FACS or CyTOF staining, including for CD66b, we have utilized CD15 and CD66b antibodies for immunofluorescence staining of placental tissue, and our findings revealed a pronounced increase in the proportion of neutrophils among PE patients, supporting the hypothesis that IL-17A produced by Th17 cells might orchestrate the migration of neutrophils towards the placental milieu (Figure 6-figure supplement 2F; Line 325-328). We have cited these references and discussed them in the Discussion section (Line 459-465).

      Comment 9. Depletion of macrophages using several different methodologies (PLX3397, or clodronate liposomes) should be accompanied by supplementary data showing the efficiency of depletion, especially within tissue compartments of interest (uterine horns, placenta). The clodronate piece is not at all discussed in the main text. Both should be addressed in much more detail.

Response 9: We thank the reviewers' comments. We have additional data on the efficiency of macrophage depletion with PLX3397 and clodronate liposomes, which were not present in this manuscript, and we will add them to Figure 4-figure supplement 2A, 2B. The clodronate experiment is mentioned in the main text (Line 236-239), but only briefly described, because the results we obtained using clodronate were similar to those using PLX3397.

      Comment 10. There are many heatmaps and tSNE / UMAP plots with unhelpful labels and no statistical tests applied. Many of these plots (e.g. Figure 7) could be moved to supplemental figures or pared down and combined with existing main figures to help the authors streamline and unify their message.

      Response 10: We thank the reviewers' comments. We have moved the images of Figure 7 to the Figure 6-figure supplement 2. We also have moved most of the heatmaps to the supplementary materials.

      Comment 11. There are claims that this study fills a gap that "only one report has provided an overall analysis of immune cells in the human placental villi in the presence and absence of spontaneous labor at term by scRNA-seq (Miller 2022)" (lines 362-364), yet this study itself does not exhaustively study all immune cell subsets...that's a monumental task, even with the two multi-omic methods used in this paper. There are several other datasets that have performed similar analyses and should be referenced.

Response 11: We thank the reviewers' comments. We have searched for more literature and referenced additional studies that have conducted similar analyses (Line 382-393).

      Comment 12. Inappropriate statistical tests are used in many of the analyses. Figures 1-2 use the Shapiro-Wilk test, which is a test of "goodness of fit", to compare unpaired groups. A Kruskal-Wallis or other nonparametric t-test is much more appropriate. In other instances, there is no mention of statistical tests (Figures 6-7) at all. Appropriate tests should be added throughout.

Response 12: We thank the reviewers' comments. As stated in the Statistical Analysis section (lines 672-676), the Kruskal-Wallis test was used to compare the results of experiments with multiple groups. Comparisons between the two groups in Figure 5 were conducted using Student's t-test. The aforementioned statistical methods have been included in the figure legends.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Overall, the study has several strengths, including the use of human samples and animal models, as well as the incorporation of multiple cutting-edge techniques. However, there are some significant issues with the murine model experiments that need to be addressed:

      Comment 1. The authors are not consistent in their use of or focus on uterine and placental cells. These are distinct tissues, and numerous prior reports have indicated differences in the macrophage populations of these tissues, due in part to the predominantly maternal origin of macrophages in the uterus and the largely fetal origin of those in the placenta. The rationale for switching between uterine and placental cells in different experiments is not clear, and the inclusion of cells from both (such as in the bulk RNAseq experiments) could be potentially confounding.

Response 1: We thank the reviewers' comments. We have performed a green fluorescent protein (GFP) pregnant mice-related animal experiment, which was not shown in this manuscript. Wild-type (WT) female mice were mated with either transgenic male mice genetically modified to express GFP, or with WT male mice, in order to generate either GFP-expressing pups (GFP-pups) or their genetically unmodified counterparts (WT-pups), respectively. Mice were euthanized on day 18.5 of gestation, and the uteri of the pregnant females and the placentas of the offspring were analyzed using flow cytometry. The majority of macrophages in the uterus and placenta are of maternal origin, defined as GFP-negative. In contrast, fetal-derived macrophages, distinguished by their expression of GFP, represent a mere fraction of the total macrophage population, signifying their inconsequential or restricted presence amidst the broader cellular landscape. We have added the GFP pregnant mice-related data in Figure 4-figure supplement 1D-1E to explain the different macrophage populations in the uterine and placental cells.

      Comment 2. The observational data for the initial experiment transferring RUPP-derived macrophages to normal pregnant mice (without any other manipulations) seems to be missing. They do not seem to be presented in Figure 4 where they are expected based on the results text.

Response 2: We thank the reviewers' comments. We have added the observational data (Figure 4-figure supplement 1D, 1E) and a corresponding description of the data (Line 198-203).

      Comment 3. The action of the anti-macrophage compounds is not well explained, nor are their mechanisms validated as affecting or not affecting the placental/fetal macrophage populations. It is important to clarify whether the macrophages are depleted or merely inhibited by these treatments, and it is absolutely critical to determine whether these treatments are affecting placental/fetal macrophage populations (the latter indicative of placental transfer), given the focus on placental macrophages.

Response 3: We thank the reviewers' comments. PLX3397 is an inhibitor of CSF1R, which is needed for macrophage development (Nature. 2023, PMID: 36890231; Cell Mol Immunol. 2022, PMID: 36220994), as we have stated on Line 227-230. However, PLX3397 is a small-molecule compound with the potential to cross the placental barrier and affect fetal macrophages. We will discuss the impact of this factor on the experiment in the Discussion section (Line 457-459).

      Comment 4. The interpretation of the murine single-cell data is hampered by the lack of means for distinguishing donor cells from recipient cells, which is important when seeking to identify the influence of the donor cells.

Response 4: We thank the reviewers' comments. Upon analysis, we observed a notable elevation in the frequency of total macrophages within the CD45<sup>+</sup> cell population. We subsequently performed macrophage clustering and uncovered a marked increase in the frequency of Cluster 0, implying a potential correlation between Cluster 0 and donor-derived cells. RNA sequencing revealed that the F480<sup>+</sup>CD206<sup>-</sup> pro-inflammatory donor macrophages exhibited a Folr2<sup>+</sup>Ccl7<sup>+</sup>Ccl8<sup>+</sup>C1qa<sup>+</sup>C1qb<sup>+</sup>C1qc<sup>+</sup> phenotype, consistent with the phenotype of Cluster 0 observed in the single-cell RNA sequencing of macrophages (Figure 4D and Figure 5E). Therefore, the donor cells should correspond to Cluster 0 of the macrophages.

      Comment 5. The switch to the LPS model in the final experiments is a limitation, as this model more closely resembles the systemic inflammation seen in endotoxemia rather than the specific pathology of preeclampsia (PE). While this is not an exhaustive list, the number of weaknesses in the experimental design makes it difficult to evaluate the findings comprehensively.

Response 5: We thank the reviewers' comments. Firstly, our other animal experiments in this manuscript used the RUPP mouse model to simulate the pathology of PE. However, the RUPP model requires ligation of the uterine arteries in pregnant mice on day 12.5 of gestation, which hinders T cells transferred via the tail vein from reaching the maternal-fetal interface. In addition, this experiment aims to prove that CD4<sup>+</sup> T cells differentiate into memory-like Th17 cells through IGF-1R signaling to affect pregnancy, by clearing CD4<sup>+</sup> T cells in vivo with an anti-CD4 antibody and then injecting IGF-1R inhibitor-treated CD4<sup>+</sup> T cells. We proved that injection of RUPP-derived memory-like CD4<sup>+</sup> T cells into pregnant mice induces PE-like symptoms (Figure 6F-6H). In summary, applying the LPS model in the final experiments does not affect the conclusions.

      Minor comments:

      Comment 1. Introduction, Lines 67-74: The phrasing here is unclear as to the roles that each mentioned immune cell subset is playing in preeclampsia. Given the statement "Elevated levels of maternal inflammation...", does this imply that the numbers of all mentioned immune cell subsets are increased in the maternal circulation? If not, please consider rewording this.

Response 1: We thank the reviewers' comments. We have revised the manuscript as follows: Currently, the pivotal mechanism underpinning the pathogenesis of preeclampsia is widely acknowledged to involve an increased frequency of pro-inflammatory M1-like maternal macrophages, along with an elevation in granulocytes capable of superoxide generation, CD56<sup>+</sup>CD94<sup>+</sup> natural killer (NK) cells, CD19<sup>+</sup>CD5<sup>+</sup> B1 lymphocytes, and activated γδ T cells. Conversely, this pathological process is accompanied by a notable decrease in the frequency of anti-inflammatory M2-like macrophages and NKp46<sup>+</sup> NK cells (Line 67-77).

      Comment 2. Introduction, Lines 67-80: Is the involvement of the described immune cell subsets largely ubiquitous to preeclampsia? Recent multi-omic studies suggest that preeclampsia is a heterogeneous condition with different subsets, some more biased towards systemic immune activation than others. Thus, it is important to clarify whether the involvement of specific immune subsets is generally observed or more specific.

Response 2: We thank the reviewers' comments. We have added a new paragraph as follows: Moreover, PE can be subdivided into early-onset and late-onset PE, diagnosed before 34 weeks or from 34 weeks of gestation, respectively. Research has revealed that among the myriad of cellular alterations in PE, pro-inflammatory M1-like macrophages and intrauterine B1 cells display an augmented presence at the maternal-fetal interface of both early-onset and late-onset PE patients. Decidual natural killer (dNK) cells and neutrophils emerge as paramount contributors, playing a more crucial role in the pathogenesis of early-onset PE than late-onset PE (Front Immunol. 2020. PMID: 33013837) (Line 83-89).

      Comment 3. Introduction, Lines 81-86: The point of this short paragraph is not clear; the authors mention two very specific cellular interactions without explaining why.

Response 3: In the previous paragraph, we uncovered a heightened inflammatory response among multiple immune cells in patients with PE, yet the intricate interplay between these individual immune cells has seldom been elucidated in the context of PE patients. This is precisely why we delve into specific immune cellular interactions in relation to other pregnancy complications in this paragraph (Line 91-98).

      Comment 4. Methods: What placental tissues (e.g., villous tree, chorionic plate, extraplacental membranes) were included for CyTOF analysis? Was any decidual tissue (e.g., basal plate) included? Please clarify.

Response 4: Placental villi, rather than the chorionic plate and extraplacental membranes, were used for CyTOF in this study. The relevant content has been incorporated into the "Materials and Methods" section (Line 564-576).

      Comment 5. Results, Table 1: The authors should clarify that all PE samples were not full term (i.e., were less than 37 weeks of gestation), which is to be expected. In addition, were the PE cases all late-onset PE?

Response 5: All PE samples enumerated in Table 1 represent late-onset preeclampsia, with placental specimens procured from patients at more than 35 but less than 38 weeks of gestation. The relevant content has been incorporated into the "Materials and Methods" section (Line 574-576).

      Comment 6. Results, Figure 1: Are the authors considering the identified Macrophage cluster as being largely fetal (e.g., Hofbauer cells)? This also depends on whether any decidual tissue was included in the placental samples for CyTOF.

Response 6: Firstly, the specimens subjected to CyTOF analysis were devoid of decidual tissue and exclusively comprised placental villi. Secondly, the macrophage cluster in Figure 1 undeniably encompasses Hofbauer cells, and we consider fetal-derived macrophages likely to constitute a substantial proportion of the cellular population. However, a limitation of the CyTOF technique lies in its inability to discern between maternal and fetal origins of these cells, thereby precluding a definitive distinction.

      Comment 7. Results, Figure 2C: Did the authors validate other T-cell subset markers (e.g., Th1, Th2, Th9, etc.)?

      Response 7: In this study, we did not validate additional T-cell subset markers presented in Figure 2C, recognizing the potential for deeper insights. As we embark on our subsequent research endeavors, we aim to meticulously explore and characterize the intricate changes in diverse T-cell populations at the maternal-fetal interface, with a particular focus on preeclampsia patients, thereby advancing our understanding of this complex condition.

      Comment 8. Results, Figure 2D: Where were the detected memory-like T cells located in the placenta? Did they cluster in certain areas or were they widely distributed?

Response 8: Upon a thorough re-evaluation of the immunofluorescence images specific to the placenta, we observed a notable preponderance of memory-like T cells residing within the placental sinusoids (Line 135-139).

      Comment 9. Results, Figure 2E: I would suggest separating the two plots so that the Y-axis can be expanded for TIM3, as it is impossible to view the medians currently.

      Response 9: We thank the reviewers' comments. We have made the adjustment to Figure 2E according to the reviewers' suggestions.

      Comment 10. Results, Lines 138-140: Do the authors consider that the altered T-cells are largely resident cells of the placenta or newly invading/recruited cells? The clarification of distribution within the placental tissues as mentioned above would help answer this.

Response 10: Our analysis revealed the presence of memory-like T cells within the placental sinusoids, as evident from the immunofluorescence examination of placental tissues. Consequently, these T cells may represent recently recruited cellular entities, traversing the placental vasculature and integrating into this unique maternal-fetal microenvironment (Line 135-139).

      Comment 11. Results, Figure 3C: Has a reduction of gMDSCs (or MDSCs in general) been previously reported in PE?

Response 11: Myeloid-derived suppressor cells (MDSCs) constitute a diverse population of myeloid-derived cells that exhibit immunosuppressive functions under various conditions. Previous reports have documented a decrease in the levels of gMDSCs from peripheral blood or umbilical cord blood among patients with preeclampsia (Am J Reprod Immunol. 2020, PMID: 32418253; J Reprod Immunol. 2018, PMID: 29763854; Biol Reprod. 2023, PMID: 36504233). Nevertheless, there have been no documented reports thus far on the alterations and specific characteristics of gMDSCs within the placenta of PE patients.

      Comment 12. Results, Figure 3D-E: It is not clear what new information is added by the correlations, as the increase of both cluster 23 in CD11b+ cells and cluster 8 in CD4+ T cells in PE cases was already apparent. Are these simply to confirm what was shown from the quantification data?

      Response 12: Despite the evident increase in both cluster 23 within CD11b<sup>+</sup> cells and cluster 8 within CD4<sup>+</sup> T cells in PE cases, the existence of a potential correlation between these two clusters remains elusive. To gain insight into this question, we conducted a Pearson correlation analysis, which is presented in Figure 3D-E, revealing a positive correlation between the two clusters.

      Comment 13. Results, Figure 4A: Please clarify in the results text that the RNA-seq of macrophages from RUPP mice was performed prior to their injection into normal pregnant mice.

      Response 13: We thank the reviewers' comments. We have updated Figure 4A according to the reviewers' suggestions.

      Comment 14. Results / Methods, Figure 4: For the transfer of macrophages from RUPP mice into normal mice, why were the uterine tissues included to isolate cells? The uterine macrophages will be almost completely maternal, as opposed to the largely fetal placental macrophages, and despite the sorting for specific markers these are likely distinct subsets that have been combined for injection. This could potentially impact the differential gene expression analysis and should be accounted for. In addition, did murine placental samples include decidua? This should be clarified.

      Response 14: We thank the reviewers for these comments. For the human samples, we focused on placental tissue, and we initially considered using mouse placenta for consistency. However, the data from GFP pregnant mice (Figure 4-figure supplement 1D, 1E) showed that both the uterus and placenta of mice are populated almost exclusively by maternal macrophages, with fetal macrophages virtually absent, in contrast to the situation in humans. Moreover, macrophages make up more than 20% of total cells in the mouse uterus but less than 5% in the mouse placenta. Given these differences, we included mouse uterine tissues when isolating cells, to obtain a more comprehensive picture that acknowledges the inherent differences between human and mouse placental biology.

      Comment 15. Results, Lines 186-187: I think the figure citation should be Figure 4D here.

      Response 15: We thank the reviewers' careful checking. We have revised and updated Figure 4 accordingly.

      Comment 16. Results, Figure 4: Where are the results of the injection of anti-inflammatory and pro-inflammatory macrophages into normal mice? This experiment is mentioned in Figure 4A, but the only results shown in Figure 4 are with the PLX3397 depletion.

      Response 16: The aim of the experiment in Figure 4 is to determine conclusively how pro-inflammatory and anti-inflammatory macrophages influence the other immune cells at the maternal-fetal interface, as well as pregnancy outcomes. To achieve this, we administered PLX3397, a compound capable of eliminating the preexisting macrophages in mice, and subsequently injected anti-inflammatory or pro-inflammatory macrophages into these mice, thereby removing the confounding influence of the native macrophage population. This approach allows a clearer observation of the specific effects these two macrophage types exert on the immune landscape at the maternal-fetal interface and their ultimate impact on pregnancy outcomes.

      Comment 17. Results, Lines 189-190: Does PLX3397 inhibit macrophage development/signaling/etc. or result in macrophage depletion? This is an important distinction. If depletion is induced, does this affect placental/fetal macrophages or just maternal macrophages?

      Response 17: We thank the reviewers' comments. We have updated the additional data on the efficiency of macrophage depletion involving PLX3397 in Figure 4-figure supplement 2A. PLX3397 is a small molecule compound that possesses the potential to cross the placental barrier and affect fetal macrophages. We have discussed the impact of this factor on the experiment in the Discussion section (Line457-459).

      Comment 18. Results, Lines 197-198: Similarly, does clodronate liposome administration affect only maternal macrophages, or also placental/fetal macrophages?

      Response 18: We thank the reviewers for these comments. We have added data on the efficiency of macrophage depletion by clodronate liposomes in Figure 4-figure supplement 2B. Clodronate liposomes are vesicle formulations, whereas only small-molecule compounds can cross the placental barrier. Consequently, we consider that their influence is likely confined to maternal macrophages (Artif Cells Nanomed Biotechnol. 2023. PMID: 37594208).

      Comment 19. Results, Line 206: A minor point, but consider continuing to refer to the preeclampsia model mice as RUPP mice rather than PE mice.

      Response 19: We thank the reviewers' comments. We have revised and updated this section accordingly.

      Comment 20. Results / Methods, Figure 5: For these experiments, why did the authors focus on the mouse uterus?

      Response 20: We have previously addressed this query in our Response 14. We incorporated mouse uterine tissues for cell isolation due to the profound differences in placental biology between humans and mice.

      Comment 21. Results, Figure 5: Did the authors have a means of distinguishing the transferred donor cells from the recipient cells for their single-cell analysis? If the goal is to separate the effects of the macrophage transfer on other uterine immune cells, then it would be important to identify and separate the donor cells.

      Response 21: We thank the reviewers for these comments. Upon analysis, we observed a notable elevation in the frequency of total macrophages within the CD45<sup>+</sup> cell population. We subsequently performed macrophage clustering and found a marked increase in the frequency of Cluster 0, suggesting a correlation between Cluster 0 and the donor-derived cells. RNA sequencing revealed that the F480<sup>+</sup>CD206<sup>-</sup> pro-inflammatory donor macrophages exhibited a Folr2<sup>+</sup>Ccl7<sup>+</sup>Ccl8<sup>+</sup>C1qa<sup>+</sup>C1qb<sup>+</sup>C1qc<sup>+</sup> phenotype, consistent with the phenotype of macrophage Cluster 0 observed in single-cell RNA sequencing (Figure 4D and Figure 5E). The donor cells should therefore fall within macrophage Cluster 0.

      Comment 22. Results, Lines 247-248: While the authors have prudently noted that the observed T-cell phenotypes are merely suggestive of immunosuppression, any claims regarding changes in the immunosuppressive function after macrophage transfer would require functional studies of the T cells.

      Response 22: We thank the reviewers' comments. Upon revisiting and meticulously reviewing the pertinent literature, we have refined our terminology, transitioning from 'immunosuppression' to 'immunomodulation', thereby enhancing the accuracy and precision of our Results (Line285-287).

      Comment 23. Results, Figure 6G: The observation of worsened outcomes and PE-like symptoms after T-cell transfer is interesting, but other models of PE induced by the administration of Th1-like cells have already been reported. Are the authors' findings consistent with these reports? These findings are strengthened by the evaluation of second-pregnancy outcomes following the transfer of T cells in the first pregnancy.

      Response 23: We thank the reviewers for these comments. As verified in Figure 6F-6H, injection of CD4<sup>+</sup>CD44<sup>+</sup> T cells derived from RUPP mice, characterized by a reduced frequency of Tregs and an increased frequency of Th17 cells, could induce PE-like symptoms in pregnant mice. In line with other studies implicating Th1-like cells in the manifestation of PE-like symptoms, we posit that, beyond Th1 cells, Th17 cells also have the potential to induce PE-like symptoms.

      Comment 24. Results, Lines 327-337: The disease model implied by the authors here is not clear. Given that the authors' human findings are in the placental macrophages, are the authors proposing that placental macrophages are induced to an M1 phenotype by placenta-derived EVs? Please elaborate on and clarify the proposed model.

      Response 24: In the article authored by our team, titled "Trophoblast-Derived Extracellular Vesicles Promote Preeclampsia by Regulating Macrophage Polarization" published in Hypertension (Hypertension. 2022, PMID: 35993233), we employed trophoblast-derived extracellular vesicles isolated from PE patients to induce an M1-like phenotype in macrophages from human peripheral blood in vitro. Consequently, in the present study, we directly leveraged this established methodology to induce pro-inflammatory macrophages.

      Comment 25. Results / Methods, Figure 8E-H: What is the reasoning for switching to an LPS model in this experiment? LPS is less specific to PE than the RUPP model.

      Response 25: We thank the reviewers for these comments. First, the other animal experiments in this manuscript used the RUPP mouse model to mimic the pathology of PE. However, the RUPP model requires ligation of the uterine arteries on day 12.5 of gestation, which hinders T cells injected via the tail vein from reaching the maternal-fetal interface. In addition, the aim of this experiment was to show that CD4<sup>+</sup> T cells differentiate into memory-like Th17 cells through IGF-1R signaling to affect pregnancy, by depleting CD4<sup>+</sup> T cells in vivo with an anti-CD4 antibody and then injecting IGF-1R inhibitor-treated CD4<sup>+</sup> T cells. We had already shown that injection of RUPP-derived memory-like CD4<sup>+</sup> T cells into pregnant mice induces PE-like symptoms (Figure 6). In summary, the use of the LPS model in the final experiments does not affect the conclusions.

      Comment 26. Discussion: What do the authors consider to be the origins of the inflammatory cells associated with PE onset? Are these maternal cells invading the placental tissues, or are these placental resident (likely fetal) cells?

      Response 26: We thank the reviewers' comments. Numerous reports have consistently observed the presence of inflammatory cells and factors in the maternal peripheral blood and placenta tissues of PE patients, fostering the prevailing notion that the progression of PE is intricately linked to the maternal immune system's inflammatory response towards the fetus. Nevertheless, intriguing findings from single-cell RNA sequencing, analyzed through bioinformatic methods, have challenged this perspective (Elife. 2019. PMID: 31829938; Proc Natl Acad Sci U S A. 2017. PMID: 28830992). These studies reveal that the placenta harbors not just immune cells of maternal origin but also those of fetal origin, raising questions about whether these are maternal cells infiltrating placental tissues or resident (possibly fetal) placental cells. Further investigation is imperative to elucidate this complex interplay.

      Comment 27. Discussion: Given the observed lack of changes in the GDM or GDM+PE groups, do the authors consider that GDM represents a distinct pathology that can lead to secondary PE, and thus is different from primary PE without GDM?

      Response 27: It is possible. Although previous studies have reported that GDM is associated with aberrant maternal immune cell adaptation, the findings remain controversial. In our study, GDM did not induce significant alterations in the placental immune cell profile, which led us to focus on the immune mechanisms underlying PE. However, it remains unclear why individuals with GDM&PE were protected from the immune alterations at the maternal-fetal interface. The limited number of placental samples in the GDM&PE group may partly explain this, as it is difficult to collect clean samples free of confounding factors. One study reported that macrophages in the human placenta maintain anti-inflammatory properties despite GDM (Front Immunol, 2017, PMID: 28824621). Barke et al. also found more CD163<sup>+</sup> cells in GDM placentas compared with normal controls (PLoS One, 2014, PMID: 24983948). Thus, GDM may have a protective effect on the placental immune environment when individuals are also complicated with PE.

      Reviewer #2 (Recommendations for the authors):

      Comment 1. IF images need to be quantified.

      Response 1: We thank the reviewers' comments. We have quantified and calculated the fluorescence intensity and added it in Figure 2D.

      Comment 2. Cluster 12 in Figure 3 is labeled as granulocytes but listed under macrophages.

      Response 2: We thank the reviewers' careful checking. We have revised and updated Figure 3A.

      Comment 3. Figure 4 labels in the text and figure do not match, no 4G in the figure.

      Response 3: We thank the reviewers' careful checking. The figure labels of Figure 4 have been revised and updated.

    1. Reviewer #1 (Public review):

      This work derives a general theory of optimal gain modulation in neural populations. It demonstrates that population homeostasis is a consequence of optimal modulation for information maximization with noisy neurons. The developed theory is then applied to the distributed distributional code (DDC) model of the primary visual cortex to demonstrate that homeostatic DDCs can account for stimulus-specific adaptation.

      What I consider to be the most important contribution of this work is the unification of efficient information transmission in neural populations with population homeostasis. The former is an established theoretical framework, and the latter is a well-known empirical phenomenon - the relationship between them has never been fully clarified. I consider this work to be an interesting and relevant step in that direction.

      The theory proposed in the paper is rigorous and the analysis is thorough. The manuscript begins with a general mathematical setting to identify normative solutions to the problem of information maximization. It then gradually builds towards questions about approximate solutions, neural implementation and plausibility of these solutions, applications of the theory to specific models of neural computation (DDC), and finally comparisons to experimental data in V1. Such a connection of different levels of abstraction is an obvious strength of this work.

      Overall I find this contribution interesting and assess it positively. At the same time, I have three major points of criticism, which I believe the authors should address. I list them below, followed by a number of more specific comments and feedback.

      Major comments:

      (1) Interpretation of key results and relationship between different parts of the manuscript. The manuscript begins with an information-transmission ansatz which is described as "independent of the computational goal" (e.g. p. 17). While information theory indeed is not concerned with what quantity is being encoded (e.g. whether it is sensory periphery or hippocampus), the goal of the studied system is to *transmit* the largest amount of bits about the input in the presence of noise. In my view, this does not make the proposed framework "independent of the computational goal". Furthermore, the derived theory is then applied to a DDC model which proposes a very specific solution to inference problems. The relationship between information transmission and inference is deep and nuanced. Because the writing is very dense, it is quite hard to understand how the information transmission framework developed in the first part applies to the inference problem. How does the neural coding diagram in Figure 3 map onto the inference diagram in Figure 10? How does the problem of information transmission under constraints from the first part of the manuscript become an inference problem with DDCs? I am certain that authors have good answers to these questions - but they should be explained much better.

      (2) Clarity of writing for an interdisciplinary audience. I do not believe that in its current form, the manuscript is accessible to a broader, interdisciplinary audience such as eLife readers. The writing is very dense and technical, which I believe unnecessarily obscures the key results of this study.

      (3) Positioning within the context of the field and relationship to prior work. While the proposed theory is interesting and timely, the manuscript omits multiple closely related results which in my view should be discussed in relationship to the current work. In particular:

      A number of recent studies propose normative criteria for gain modulation in populations:

      - Duong, L., Simoncelli, E., Chklovskii, D. and Lipshutz, D., 2024. Adaptive whitening with fast gain modulation and slow synaptic plasticity. Advances in Neural Information Processing Systems.
      - Tring, E., Dipoppa, M. and Ringach, D.L., 2023. A power law describes the magnitude of adaptation in neural populations of primary visual cortex. Nature Communications, 14(1), p.8366.
      - Młynarski, W. and Tkačik, G., 2022. Efficient coding theory of dynamic attentional modulation. PLoS Biology.
      - Haimerl, C., Ruff, D.A., Cohen, M.R., Savin, C. and Simoncelli, E.P., 2023. Targeted V1 co-modulation supports task-adaptive sensory decisions. Nature Communications.

      The Ganguli and Simoncelli framework has been extended to a multivariate case and analyzed for a generalized class of error measures:

      - Yerxa, T.E., Kee, E., DeWeese, M.R. and Cooper, E.A., 2020. Efficient sensory coding of multidimensional stimuli. PLoS Computational Biology.
      - Wang, Z., Stocker, A.A. and Lee, D.D., 2016. Efficient neural codes that minimize LP reconstruction error. Neural Computation, 28(12).

      More detailed comments and feedback:

      (1) I believe that this work offers the possibility to address an important question about novelty responses in the cortex (e.g. Homann et al, 2021 PNAS). Are they encoding novelty per-se, or are they inefficient responses of a not-yet-adapted population? Perhaps it's worth speculating about.

      (2) Clustering in populations - typically in efficient coding studies, tuning curve distributions are a consequence of input statistics, constraints, and optimality criteria. Here the authors introduce randomly perturbed curves for each cluster - how to interpret that in light of the efficient coding theory? This links to a more general aspect of this work - it does not specify how to find optimal tuning curves, just how to modulate them (already addressed in the discussion).

      (3) Figure 8 - where do Hz come from as physical units? As I understand there are no physical units in simulations.

      (4) Inference with DDCs in changing environments. To perform efficient inference in a dynamically changing environment (as considered here), an ideal observer needs some form of posterior-prior updating. Where does that enter here?

      (5) Page 6 - "We did this in such a way that, for all ν, the correlation matrices, ρ(ν), were derived from covariance matrices with a 1/n power-law eigenspectrum (i.e., the ranked eigenvalues of the covariance matrix fall off inversely with their rank), in line with the findings of Stringer et al. (2019) in the primary visual cortex." This is a very specific assumption, taken from a study of a specific brain region - how does it relate to the generality of the approach?

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      The study by Jena et al. addresses important questions on the fundamental mechanisms of genetic adaptation, specifically, does adaptation proceed via changes of copy number (gene duplication and amplification "GDA") or by point mutation. While this question has been worked on (for example by Tomanek and Guet) the authors add several important aspects relating to resistance against antibiotics and they clarify the ability of Lon protease to reduce duplication formation (previous work was more indirect).

      A key finding Jena et al. present is that point mutations after significant competition displace GDA. A second one is that alternative GDA constantly arise and displace each other (see work on GDA-2 in Figure 3). Finally, the authors found epistasis between resistance alleles that was contingent on lon. Together this shows an intricate interplay of lon proteolysis for the evolution and maintenance of antibiotic resistance by gene duplication.

      Strengths:

      The study has several important strengths: (i) the work on GDA stability and competition of GDA with point mutations is a very promising area of research and the authors contribute new aspects to it, (ii) rigorous experimentation, (iii) very clearly written introduction and discussion sections. To me, the best part of the data is that deletion of lon stimulates GDA, which has not been shown with such clarity until now.

      Weaknesses:

      The minor weaknesses of the manuscript are a lack of clarity in parts of the results section (Point 1) and the methods (Point 2).

      We thank the reviewer for their comments and suggestions on our manuscript. We also appreciate the succinct summary of primary findings that the Reviewer has taken cognisance of in their assessment, in particular the association of the Lon protease with the propensity for GDAs as well as its impact on their eventual fate. We have now revised the manuscript for greater clarity as suggested by Reviewer #1.

      Reviewer #2 (Public review):

      Summary:

      In this strong study, the authors provide robust evidence for the role of proteostasis genes in the evolution of antimicrobial resistance, and moreover, for stabilizing the proteome in light of gene duplication events.

      Strengths:

      This strong study offers an important interaction between findings involving GDA, proteostasis, experimental evolution, protein evolution, and antimicrobial resistance. Overall, I found the study to be relatively well-grounded in each of these literatures, with experiments that spoke to potential concerns from each arena. For example, the literature on proteostasis and evolution is a growing one that includes organisms (even micro-organisms) of various sorts. One of my initial concerns involved whether the authors properly tested the mechanistic bases for the rule of Lon in promoting duplication events. The authors assuaged my concern with a set of assays (Figure 8).

      More broadly, the study does a nice job of demonstrating the agility of molecular evolution, with responsible explanations for the findings: gene duplications are a quick-fix, but can be out-competed relative to their mutational counterparts. Without Lon protease to keep the proteome stable, the cell allows for less stable solutions to the problem of antibiotic resistance.

      The study does what any bold and ambitious study should: it contains large claims and uses multiple sorts of evidence to test those claims.

      Weaknesses:

      While the general argument and conclusion are clear, this paper is written for a bacterial genetics audience that is familiar with the manner of bacterial experimental evolution. From the language to the visuals, the paper is written in a boutique fashion. The figures are even difficult for me - someone very familiar with proteostasis - to understand. I don't know if this is the fault of the authors or the modern culture of publishing (where figures are increasingly packed with information and hard to decipher), but I found the figures hard to follow with the captions. But let me also consider that the problem might be mine, and so I do not want to unfairly criticize the authors.

      For a generalist journal, more could be done to make this study clear, and in particular, to connect to the greater community of proteostasis researchers. I think this study needs a schematic diagram that outlines exactly what was accomplished here, at the beginning. Diagrams like this are especially important for studies like this one that offer a clear and direct set of findings, but conduct many different sorts of tests to get there. I recommend developing a visual abstract that would orient the readers to the work that has been done.

      The reviewer’s comments regarding data presentation are well-taken. Since we already had a diagrammatic model that sums up the chief findings of our study (Figure 9), we have now provided schematics in Figures 1, 3, 5 and 8 to clarify the workflow of smaller sections of the study. We hope that these diagrams provide greater clarity with regards to the experiments we have conducted.

      Next, I will make some more specific suggestions. In general, this study is well done and rigorous, but doesn't adequately address a growing literature that examines how proteostasis machinery influences molecular evolution in bacteria.

      While this paper might properly test the authors' claims about protein quality control and evolution, the paper does not engage a growing literature in this arena and is generally not very strong on the use of evolutionary theory. I recognize that this is not the aim of the paper, however, and I do not question the authors' authority on the topic. My thoughts here are less about the invocation of theory in evolution (which can be verbose and not relevant), and more about engagement with a growing literature in this very area.

      The authors mention Rodrigues 2016, but there are many other studies that should be engaged when discussing the interaction between protein quality control and evolution.

      A 2015 study demonstrated how proteostasis machinery can act as a barrier to the usage of novel genes: Bershtein, S., Serohijos, A. W., Bhattacharyya, S., Manhart, M., Choi, J. M., Mu, W., ... & Shakhnovich, E. I. (2015). Protein homeostasis imposes a barrier to functional integration of horizontally transferred genes in bacteria. PLoS genetics, 11(10), e1005612

      A 2019 study examined how Lon deletion influenced resistance mutations in DHFR specifically: Guerrero RF, Scarpino SV, Rodrigues JV, Hartl DL, Ogbunugafor CB. The proteostasis environment shapes higher-order epistasis operating on antibiotic resistance. Genetics. 2019 Jun 1;212(2):565-75.

      A 2020 study did something similar: Thompson, Samuel, et al. "Altered expression of a quality control protease in E. coli reshapes the in vivo mutational landscape of a model enzyme." Elife 9 (2020): e53476.

      And there's a new review (preprint) on this very topic that speaks directly to the various ways proteostasis shapes molecular evolution:

      Arenas, Carolina Diaz, Maristella Alvarez, Robert H. Wilson, Eugene I. Shakhnovich, C. Brandon Ogbunugafor, and C. Brandon Ogbunugafor. "Proteostasis is a master modulator of molecular evolution in bacteria."

      I am not simply attempting to list studies that should be cited, but rather, this study needs to be better situated in the contemporary discussion on how protein quality control is shaping evolution. This study adds to this list and is a unique and important contribution. However, the findings can be better summarized within the context of the current state of the field. This should be relatively easy to implement.

      We thank the reviewer for their encouraging assessment of our manuscript as well as this important critique regarding the context of other published work that relates proteostasis and molecular evolution. Indeed, this was a particularly difficult aspect for us given the different kinds of literature that were needed to make sense of our study. We have now added the references suggested by the reviewer, as well as others, to the manuscript. We have also added a paragraph to the discussion section (Lines 463-476) that addresses this aspect and hopefully fills the lacuna that the reviewer points out in this comment.

      Reviewer #3 (Public review):

      Summary:

      This paper investigates the relationship between the proteolytic stability of an antibiotic target enzyme and the evolution of antibiotic resistance via increased gene copy number. The target of the antibiotic trimethoprim is dihydrofolate reductase (DHFR). In Escherichia coli, DHFR is encoded by folA and the major proteolysis housekeeping protease is Lon (lon). In this manuscript, the authors report the results of the experimental evolution of a lon mutant strain of E. coli in response to sub-inhibitory concentrations of the antibiotic trimethoprim and then investigate the relationship between proteolytic stability of DHFR mutants and the evolution of folA gene duplication. After 25 generations of serial passaging in a fixed concentration of trimethoprim, the authors found that folA duplication events were more common during the evolution of the lon strain, than the wt strain. However, with continued passaging, some folA duplications were replaced by a single copy of folA containing a trimethoprim resistance-conferring point mutation. Interestingly, the evolution of the lon strain in the setting of increasing concentrations of trimethoprim resulted in evolved strains with different levels of DHFR expression. In particular, some strains maintained two copies of a mutant folA that encoded an unstable DHFR. In a lon+ background, this mutant folA did not express well and did not confer trimethoprim resistance. However, in the lon- background, it displayed higher expression and conferred high-level trimethoprim resistance. The authors concluded that maintenance of the gene duplication event (and the absence of Lon) compensated for the proteolytic instability of this mutant DHFR. In summary, they provide evidence that the proteolytic stability of an antibiotic target protein is an important determinant of the evolution of target gene copy number in the setting of antibiotic selection.

      Strengths:

      The major strength of this paper is identifying an example of antibiotic resistance evolution that illustrates the interplay between the proteolytic stability and copy number of an antibiotic target in the setting of antibiotic selection. If the weaknesses are addressed, then this paper will be of interest to microbiologists who study the evolution of antibiotic resistance.

      Weaknesses:

      Although the proposed mechanism is highly plausible and consistent with the data presented, the analysis of the experiments supporting the claim is incomplete and requires more rigor and reproducibility. The impact of this finding is somewhat limited given that it is a single example that occurred in a lon strain and compensatory mutations for evolved antibiotic resistance mechanisms are described. In this case, it is not clear that there is a functional difference between the evolution of copy number versus any other mechanism that meets a requirement for increased "expression demand" (e.g. promoter mutations that increase expression and protein stabilizing mutations).

      We thank the reviewer for their in-depth assessment of our work and appreciate their concerns regarding reproducibility and rigor in analysis of our data. We have now incorporated this feedback and provided necessary clarifications/corrections in the revised version of our manuscript.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Major Points:

      (1) The authors show that a deletion of lon increases the ability for GDA and they argue that this is adaptive during TMP treatment because it increases the dosage of folA (L. 129). However, the highest frequency of GDA occurred in drug-free conditions (see Figure 1C). This indicates either that GDA is selected in drug-free media and potentially selected against by certain antibiotics. It would help for the authors to discuss this possibility more clearly.

We thank the reviewer for this astute observation. It is indeed striking that the GDA mutation selected in the lon-deficient background (i.e. the GDA-2 mutation) does not come up in the presence of antibiotics. To probe this further, we have now measured the relative fitness of a representative lon-knockout population from the short-term evolution in drug-free LB (population #3), which harbours GDA-2, against its ancestor (marked with ΔlacZ). These competition experiments were performed in LB (in which GDA-2 emerged spontaneously), as well as in LB supplemented with antibiotics at the concentrations used during the short-term evolution.

      Values of relative fitness, w (mean ± SD from 3 measurements), are provided below:

      LB: 1.4 ± 0.2

      LB + Trimethoprim: 1.6 ± 0.2

      LB + Spectinomycin: 0.9 ± 0.2

      LB + Erythromycin: 1.3 ± 0.3

      LB + Nalidixic acid: 1.5 ± 0.2

      LB + Rifampicin: 1.4 ± 0.2
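For readers less familiar with competition assays, relative fitness w in pairwise competitions of this kind is conventionally defined (e.g. in Lenski-style experiments) as the ratio of the competitors' realized Malthusian parameters over the assay; the authors' exact calculation may differ from this sketch:

```latex
% Conventional (Lenski-style) definition of relative fitness w from a
% pairwise competition; N(0) and N(t) are competitor densities at the
% start and end of the assay. The authors' exact formula may differ.
w \;=\; \frac{\ln\!\left[\,N_{\mathrm{GDA\text{-}2}}(t)\,/\,N_{\mathrm{GDA\text{-}2}}(0)\,\right]}
             {\ln\!\left[\,N_{\mathrm{anc}}(t)\,/\,N_{\mathrm{anc}}(0)\,\right]}
```

On this definition, w > 1 indicates that the GDA-2-bearing population outgrew the marked ancestor, while w ≈ 1 (as for spectinomycin, within measurement error) indicates near-neutrality.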

These data show an increase in relative fitness in drug-free LB, as would be expected. Interestingly, we also observe an increase in relative fitness in LB supplemented with antibiotics, except spectinomycin. This result supports the idea that GDA-2 is a “media adaptation” that provides a general fitness advantage to the lon knockout. However, as the reviewer pointed out, we should then expect GDA-2 to emerge spontaneously in antibiotic-supplemented media as well. We think that this does not happen because the fitness advantage of drug-specific mutations (GDAs or point mutations) far exceeds that of a media-adaptation GDA. As a result, we only see the specific mutations that provide a high benefit against the antibiotic, at least over the relatively short duration of 20-25 generations. It is noteworthy that the GDA-2 mutation does come up in LTMPR1 when it is passaged over >200 generations in drug-free media, but shows fluctuating frequency over time. We expect, therefore, that given enough time we may detect the GDA-2 mutation even in antibiotic-supplemented media.

We note, however, that a major caveat in the above fitness calculations is that we cannot be sure that the competing ancestor did not itself acquire GDA-2 mutations during the course of the experiment. Thus, the above fitness values are only indicative, not definitive. We have therefore not included these data in the revised manuscript.

      (2) It is unclear if the isolates WTMPR1 - 5 and LTMPR1 - 5 were pure clones. The authors write in L.488 "Colonies were randomly picked, cultured overnight in drug-free LB and frozen in 50% glycerol at -80C until further use." And in L. 492 "For long-term evolution, trimethoprim-resistant isolates LTMPR1, WTMPR4 and WTMPR5 were first revived from frozen stocks in drug-free LB overnight." From these descriptions, it is possible that the isolates contained a fraction of cells of other genotypes since colonies are often formed by more than one cell and thus, unless pure-streaked, a subpopulation is present and would in drug-free media be maintained. The possibility of pre-existing subpopulations is important for all statements relating to "reversal".

This is indeed a valid concern. As far as we can tell, all our initial isolates (i.e. WTMPR1-5 and LTMPR1-5) are pure clones, at least as far as SNPs are concerned. This is based on whole genome sequencing data reported earlier in Patel and Matange, eLife (2021), where we described the evolution and isolation of WTMPR1-5, and in the present study for LTMPR1-5. All SNPs detected were present at a frequency of 100%. For clones with GDAs, however, there is no way to eliminate a sub-population with a lower or higher gene copy number than average from an isolate. This is because the inherent instability of GDAs will inevitably result in heterogeneous gene copy numbers during standard growth. In this sense, there is most certainly a possibility of a pre-existing subpopulation within each of the clones that may have reversed the GDA. Indeed, we believe that it is this inherent instability that contributes to their rapid loss during growth in drug-free media.

      Minor Points:

      (1) L. 406. "allowing accumulation of IS transposases in E. coli" Please specify that it is the accumulation of transposase proteins (and not genes).

      We have made this change.

      (2) L. 221 typo. Known "to" stabilize.

      We have made this change.

      Reviewer #2 (Recommendations for the authors):

      Most of my suggestions are found in the public review. I believe this to be a strong study, and some slight fixes can solidify its presence in the literature.

      We have attempted to address the two main critiques by Reviewer 2. To simplify the understanding of our data, we have provided small schematics at various points in the paper to clarify the experimental pipelines used by us. We have also provided additional discussion situating our study in the emerging area of proteostasis and molecular evolution. We hope that our revisions have addressed these lacunae in our manuscript.

      Reviewer #3 (Recommendations for the authors):

      Major Points:

      (1) The manuscript is generally a bit difficult to follow. The writing is overly complicated and lacks clarity at times. It should be simplified and improved.

      We have made several revisions to the text, as well as provided schematics in some of our figures which hopefully make our paper easier to understand.

      (2) I cannot find the raw variant summary data for the lon strain evolution experiment in trimethoprim (after 25 generations). Were there any other mutations identified? If not, this should be explicitly stated in the text and the variant output summary from sequencing included as supplemental data.

      We apologise for this oversight. We have now provided these data as Table 1.

      (3) What is the trimethoprim IC50 of the starting (pre-evolution) strains (i.e. wt and lon)? I can't find this information, but it is critical to interpretation.

We had reported these values earlier in Matange N., J Bact (2020). Wild type and the lon-knockout have similar MIC values for trimethoprim, though the lon mutant shows a higher IC50 value. We have now mentioned this in the results section (Lines 100-101) and also provided the reference for these data.

      (4) What was the average depth of coverage for WGS? This information is necessary to assess the quality of the variant calling, especially for the population WGS.

All genome sequencing data have a coverage of at least 100x. We have added this detail to the methods section (Lines 580-581).

(5) Five replicate evolution experiments (25 generations, or 7x 10% daily batch transfers) were performed in trimethoprim for the wt and lon strains. Duplication of the folA locus occurred in 1/5 and 4/5 experiments, respectively. It is not entirely clear what type of sampling was actually done to arrive at these numbers (this needs to be stated more clearly), but presumably 1 random colony was chosen at the end of the passaging protocol for each replicate. Based on this result, the authors conclude that folA duplication occurred more frequently in the lon strain; however, this is not rigorously supported by a statistical evaluation. With N=5, one cannot rigorously conclude that a 20% frequency and 80% frequency are significantly different. Furthermore, it's not entirely clear what the mechanism of resistance is for these strains. For example, in one colony sequenced (LTMPR5), it appears no known resistance mechanism (or mutations?) were identified, and yet the IC50 = 900 nM, which is also similar to other strains.

Indeed, we agree with the reviewer that we do not have the statistical power to rigorously make this claim. However, since the lon-knockout showed a greater frequency of GDA across 3 different environments, we are fairly confident that loss of lon enhances the overall frequency of GDA mutations. This idea is also supported by a number of previous papers that link GDAs and IS-element transpositions to Lon, viz. Nicoloff et al., Antimicrob Agents Chemother (2007), Derbyshire et al., PNAS (1990), and Derbyshire and Grindley, Mol Microbiol (1996). We have therefore not provided further justification in the revised manuscript.

We had indeed sampled a random isolate from each of the 5 populations and have added a schematic to Figure 1 that provides greater clarity.

Having re-examined the sequencing data for the LTMPR1-5 isolates (Table 1), we realised that both LTMPR4 and LTMPR5 harbour mutations in the pitA gene. We had missed this locus in the previous iteration of this manuscript and misidentified an mgrB mutation in LTMPR4. PitA codes for a metal-phosphate symporter. We have observed mutations in pitA in earlier evolution experiments with trimethoprim as well (Vinchhi and Yelpure et al., mBio 2023). Interestingly, in LTMPR5 there was a deletion of pitA, along with 17 other contiguous genes, mediated by IS5. To test if loss of pitA is beneficial in trimethoprim, we tested the ability of a pitA knockout to grow on trimethoprim-supplemented plates. Indeed, loss of pitA conferred a growth advantage to E. coli on trimethoprim, comparable to loss of mgrB, indicating that the resistance of LTMPR5 may be due to loss of pitA. We have added these data to Supplementary Figure 1 of the revised manuscript and provided a brief description in Lines 103-108. How pitA deficiency confers trimethoprim resistance is yet to be investigated. The mechanism is likely to involve activation of some intrinsic resistance mechanism, as loss of pitA also conferred a fitness benefit against other antibiotics. This work is currently underway in our lab and hence we do not provide any further mechanism in the present manuscript.

      (6) Although measurement error/variance is reported, statistical tests were not performed for any of the experiments. This is critical to support the rigor and reproducibility of the conclusions.

      We have added statistical testing wherever appropriate to the revised manuscript.

      (7) Lines 150-155 and Figure 2E: Putting a wt copy of mgrB back into the WTMPR4 and LTMPR1 strains would be a better experiment to dissect out the role of mgrB versus the other gene duplications in these strains on fitness. Without this experiment, you cannot confidently attribute the fitness costs of these strains to the inactivation of mgrB alone.

We agree with the reviewer that our claim was based on a correlation alone. We have now added new data to confirm our model (Figure 2E, F). The costs of mgrB mutations come from hyperactivation of PhoQP. In earlier work we have shown that the costs (and benefits) of mgrB mutations can be abrogated in media supplemented with Mg²⁺, which turns off the PhoQ receptor (Vinchhi and Yelpure et al., mBio, 2023). We used this strategy to show that, like those of the mgrB-knockout, the costs of WTMPR4, WTMPR5 and LTMPR1 can be almost completely alleviated by adding Mg²⁺ to the growth media. These results confirm that the fitness cost of TMP-resistant bacteria was not linked to GDA mutations, but to hyperactivation of PhoQP.

      (8) Figure 3F and G: Does the top symbol refer to the starting strain for the 'long-term' evolution? If so, why does WTMPR4 not have the mgrB mutation (it does in Figure 1)? Based on your prior findings, it seems odd that this strain would evolve an mgrB loss of function mutation in the absence of trimethoprim exposure.

      We thank the reviewer for pointing this error out. We have made the correction in the revised manuscript.

      (9) Figure 6A: If the marker is neutral, it should be maintained at 0.1% throughout the 'neutrality' experiment. In both plots, the proportion of some marked strains goes up and then down. This suggests either ongoing evolution (these competitions take place over 105 generations), or noisy data. I suspect these data are just inherently noisy. I don't see error bars in the plots. Were these experiments ever replicated? It seems that replicating the experiments might be able to separate out noise from signal and perhaps clarify this point and better confirm the hypothesis that the point mutants are more fit.

      These experiments were indeed noisy and the apparent enrichment is most likely a measurement error rather than a real change in frequency of competing genotypes. We have now provided individual traces for each of the competing pairs with mean and SD from triplicate observations at each time point.

      (10) Figure 6A: Please indicate which plotted line refers to which 'point mutant' using different colors. These mutants have different trimethoprim IC50s and doubling times, so it would be nice to be able to connect each mutant to its specific data plot.

      We thank the reviewer for this suggestion. We have now colour coded the different strain combinations as suggested.

      (11) Lines 284-285: I disagree that the IC50s are similar. The C-35T mutant has IC50 that is 2x that of LTMPR1. Perhaps more telling is that, compared to the folA duplication strain from the same time-point (which also carries the rpoS mutation), all of the point mutants have greater IC50s (~2x greater). 2-fold changes in IC50 are significant. It would seem that the point-mutants were likely not competing against LTMPR1 at the time they arose, so LTMPR1 might not be the best comparator if it was extinguished from the population early. I'm assuming this is why you chose a contemporary isolate (and, also, rpoS mutant) for the competition experiments. This should be explained more clearly.

We thank the reviewer for this comment. Indeed, the reviewer is correct about the rationale behind the use of a contemporary isolate, and we have provided this clarification in the revised manuscript (Lines 287-289). The reviewer is also correct in pointing out that a two-fold difference in IC50 cannot be ignored. However, the key point here is assessing the differences in growth rates at the antibiotic concentration used during competition (i.e. 300 ng/mL). We are unable to see a direct correlation between growth rates and enrichment in culture, indicating that the observed trends are unlikely to be driven by ‘level of resistance’ alone. We have added these clarifications to the modified manuscript (Lines 299-301).

      Minor Points:

      (1) Line 13: Add a comma before 'Escherichia'

      We have made this change.

      (2) Line 14: Consider changing "mutations...were beneficial in trimethoprim" to "mutations...were beneficial under trimethoprim exposure"

      We have made this change.

      (3) Line 32: Is gene dosage really only "relative to the genome"? Is it not simply its relative copy number generally? Consider changing to "The dosage of a gene, or its relative copy number, can impact its level of expression..."

      We have made this change.

      (4) Line 38: The idea that GDAs are 1000x more frequent than point mutations seems an overgeneralization.

      We agree with the reviewer and have softened our claim.

      (5) Line 50: The term "hard-wired" is confusing. Please be more specific.

      We have modified this statement to “…GDAs are less stable than point mutations….”.

      (6) Line 52-53: What do you mean by "there is also evidence to suggest that...more common in bacteria than appreciated"? Are you implying the field is naïve to this fact? If there is "evidence" of this, then a reference should be included. However, it's not clear why this is important to state in the article. I would consider simply removing this sentence. Less is more in this case.

      We have removed this statement.

(7) Lines 59-60: Enzymes catalyze reactions. Please also state the substrates for DHFR. Consider, "It catalyzes the NADPH-dependent reduction of dihydrofolate to tetrahydrofolate, an important co-factor for..."

      We have made this change.

      (8) Line 72: Please change to, "In E. coli, DHFR is encoded by folA." You do not need to state this is a gene, as it is implicit with lowercase italics.

      We have made this change.

      (9) Lines 72-86: This paragraph is a bit confusing to read, as it has several different ideas in it. Consider breaking it into two paragraphs at Line 80, "In this study,...". The first paragraph could just review the trimethoprim resistance mechanisms in E. coli and so would change the first sentence (Line 72) to reflect this topic: "In E. coli, DHFR is encoded by folA and several different resistance mechanisms have been characterized." Then, just describe each mechanism in turn. Also, by "hot spots" it would seem you are referring to "point mutations" in the gene that alter the protein sequence and cluster onto the 3D protein structure when mapped? Please be more specific with this sentence for clarity.

      We have made these changes.

      (10) Lines 92-93: Please also state the MIC value of the strain to specifically define "sub-MIC". Alternatively, you could also state the fraction MIC (e.g. 0.1 x MIC).

      We have modified this statement to “…in 300 ng/mL of trimethoprim (corresponding to ~0.3 x MIC) for 25 generations.”

      (11) Lines 95-96. Remove, "These sequencing have been reported earlier, ...(2021)". You just need to cite the reference.

      We have made this change.

      (12) Line 96: Remove the word "gene".

      We have made this change.

      (13) Figure 1 and Figure 4C: The color scheme is tough for those with the most common type of color blindness. Red/green color deficiency causes a lot of difficulty with Red/gray, red/green, green/gray. Consider changing.

      We thank the reviewer for bringing this to our notice. We have modified the colour scheme throughout the manuscript.

      (14) Figure 1: Was there a trimethoprim resistance mechanism identified for LTMPR5?

As stated by us in response to major comment #5, LTMPR5’s resistance seems to come from a novel mechanism involving loss of the pitA gene.

      (15) Line 349-351: Please briefly define "lower proteolytic stability" as a relative susceptibility to proteolytic degradation and make sure it is clear to the reader that this causes less DHFR. This needs to be clarified because it is confusing how a mutation that causes DHFR proteolytic instability would lead to an increase in trimethoprim IC50. So, you also need to mention that some mutations can cause both increased trimethoprim inhibition and lower proteolytic stability simultaneously. It seems the Trp30Arg mutation is an example of this, as this mutation is associated with a net increase in trimethoprim resistance despite the competing effects of the mutation on enzyme inhibition and DHFR levels.

      We thank the reviewer for this comment and agree that the text in the original manuscript did not fully convey the message. We have made modifications to this section (Lines 359-363) in the revised manuscript in agreement with the reviewer’s suggestions.

    1. 17.1. Individual harassment# Individual harassment (one individual harassing another individual) has always been part of human cultures, bur social media provides new methods of doing so. There are many methods by which through social media. This can be done privately through things like: Bullying: like sending mean messages through DMs Cyberstalking: Continually finding the account of someone, and creating new accounts to continue following them. Or possibly researching the person’s physical location. Hacking: Hacking into an account or device to discover secrets, or make threats. Tracking: An abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim’s phone. Death threats / rape threats Etc. Individual harassment can also be done publicly before an audience (such as classmates or family). For example: Bullying: like posting public mean messages Impersonation: Making an account that appears to be from someone and having that account say things to embarrass or endanger the victim. Doxing [q1]: Publicly posting identifying information about someone (e.g., full name, address, phone number, etc.). Revenge porn / deep-fake porn Etc. 17.1.1. Reflections# Have you experienced or witnessed harassment on social media (that you are willing to share about)? 17.1.2. Learn more# I’ve Had a Cyberstalker Since I Was 12 [q2] Chrissy Teigen’s fall from grace: The rise and fall of Chrissy Teigen shows how drastically Twitter changed in 10 years. [q3]

      It’s honestly disturbing how easy social media has made harassment and how hard it is to escape. Cyberstalking is especially scary because even if you block someone, they can just make a new account and keep coming back. And the worst part? Reporting doesn’t always do much, so victims are often left to deal with it on their own. It makes me wonder—what more could social media companies actually do to stop this? The part about impersonation and doxing also stuck with me because it shows how anonymity online can be both a blessing and a curse. On one hand, it protects people who need privacy, but on the other, it lets harassers get away with things they never would in real life. Should social media platforms require people to verify their identity, or would that just create a whole new set of problems?

    1. Take somebody off the street who’s never written any code before and ask them to build an iPhone app with ChatGPT. They are going to run into so many pitfalls, because programming isn’t just about can you write code—it’s about thinking through the problems, understanding what’s possible and what’s not, understanding how to QA, what good code is, having good taste. There’s so much depth to what we do as software engineers. I’ve said before that generative AI probably gives me like two to five times productivity boost on the part of my job that involves typing code into a laptop. But that’s only 10 percent of what I do. As a software engineer, most of my time isn’t actually spent with the typing of the code. It’s all of those other activities. The AI systems help with those other activities, too. They can help me think through architectural decisions and research library options and so on. But I still have to have that agency to understand what I’m doing. So as a software engineer, I don’t feel threatened.

      Deep software engineering

It's complicated though: working on your own projects in the morning creates an emotional state! It makes you feel like the day belongs to you, not just your boss.

      I wear makeup every day that I go into the office. There is no one I see more than monthly to whom it is useful to me to send cosmetic social signals – but the ritual enforces an idea of myself in the morning that's critical to being able to handle 8AM social interaction.

  3. Feb 2025
    1. It is very unfair how some boys are able to live such pleasurable lives while I never had any taste of it, and now it has been confirmed to me that my little brother will become one of them. He will become a popular kid who gets all the girls. Girls will love him. He will become one of my enemies.That was the day that I decided I would have to kill him on the Day of Retribution. I will not allow the boy to surpass me at everything, to live the life I’ve always wanted. It’s not fair that he has the chance tohave a pleasurable life while I’ve been denied it. It will be a hard thing to do, because I had really bonded with my little brother in the last year, and he respected and looked up to me. But I would have to do it. If I can’t live a pleasurable life, then neither will he! I will not let him put my legacy to shame.

      If Jazz, who shared his bloodline and parenting, could still naturally achieve the life Elliot wanted, then it threatened to mean that Elliot’s suffering wasn’t due to an unfair world—or even bad parenting or bad circumstances, it was a reflection of his own personal failure and worthlessness- if Jazz could figure it out in similar circumstances while he couldn’t. What was wrong with him? (The answer is whatever happened to Elliot early that caused a false self to develop.)

      Elliot saw himself as the only one who truly understood the world—the one who saw through the façade of society and recognized everyone else as primitive, animalistic, and undeserving. His suffering and alienation was what made him special, enlightened, superior. Jazz was on track to become one of “them”—the shallow, unthinking, socially successful people Elliot despised. In Elliot’s worldview, Jazz’s success wouldn’t just be unfair—it would be a betrayal of the “bond” they had. Jazz, the little brother who once looked up to Elliot, was now outgrowing him, surpassing him, joining the enemy. And since Elliot only “loved” Jazz because Jazz admired and validated him, that love wasn’t unconditional. The moment Jazz was no longer a reflection of Elliot’s own superiority, the connection became worthless. Jazz’s success meant losing him as someone Elliot could see as an extension of himself.

    2. For more than half of the conversation, the doctor spent time resolving this petty conflict instead of addressing the troubles that I was going through.When we finally did get to my situation, Dr. Sophy ended up giving me the same useless advice that every other psychiatrist, psychologist, and counsellor had given me in the past. I don’t know why my parents wasted money on therapy, as it will never help me in my struggle against such a cruel and unjust world.

I agree that Elliot needed way more help than normal therapists give. Unfortunately this is why people write off therapy rather than understanding how deeply the right therapy can help: they see therapy as just “basic talk that goes nowhere” when, from experience, a specialist can really shift your entire world. We can also see how difficult it would be even with a specialist to convince the patient that it’s not the world that’s causing the agony, it’s a mental illness; this goes against the false self’s need to feel dominant and destabilizes it - and it will fight back. This is why most narcs that get into therapy are collapsed or in a narc crash.

    3. They have no sexual attraction towards me. It is such an injustice, and I vehemently questioned why things had to be this way. Why do women behave like vicious, stupid, cruel animals who take delight in my suffering and starvation? Why do they have a perverted sexual attraction for the most brutish of men instead of gentlemen of intelligence?I concluded that women are flawed. There is something mentally wrong with the way their brains are wired, as if they haven’t evolved from animal-like thinking. They are incapable of reason or thinking rationally.

      All this to avoid the fear that it’s just him who is worthless. An attempt to erase the power women have over his worth and life.

    4. People having a high opinion of me is what I’ve always wanted in life. It has always been of the utmost importance. This is why my life has been so miserable, because no one has ever had a high opinion of me. My little brother Jazz was the only one who had such an opinion, and that is why I enjoyed spending so much time with him, despite my envy of his social advantages

He says he was bonding with Jazz without realizing he’s just getting narc gratification. It’s not based on real connection because he’s too mentally ill; his mind is too preoccupied with getting needs met. I run into this problem so often in my relationships. All my new relationships become about how the other person can make me feel because I’m so hungry and it’s so hard to stop it. Especially when my mind tries to paint everything else as boring. It only rewards you when you do what the false self wants. Then it gives you a shot of meaning, relief, significance. But this is all to stop you from developing real connections based on the authentic self, which the false self considers so dangerous.

    5. My little brother really looked up to me. He was one of the few people who treated me with adoration, and that made me feel at least a small twinge of self-worth. It was quite surprising that he respected me so much, since I had nothing in my life to boast about to him.

I wish he could have taken this and healed. I learned at some point in my life that sometimes people just sense something about you and want to be around your energy - and that’s it. They just like your vibe, your soul energy, which you can’t hide even under masks. I was always confused and even angry as to why people would still say they liked me when I didn’t do anything “spectacular” that the false self says will get me admiration. This is because the false self operates on this fantasy of control to feel safe: that if you “do” specific things it will make you worthy and no one will be able to resist you, and if they dislike you, you’ll know for sure at that point it’s because they’re jealous.

But in reality, people admired and couldn’t get enough of me even when I felt I had acted in a mundane way, and it threw me for a loop. They were acting outside of my control, my worldview. It’s both relieving (your standards don’t need to be so high) and scary. I was only able to recognize it when I was finally able to tolerate my fears better. The better I was able to tolerate my fears and the more solid proof I had that I was rare no matter what, the safer I felt to “see” reality and allow the idea that people could move outside of my control. This all comes back to the infant/early trauma (abandonment, rejection, abuse) and the baby developing a pathological need to believe it can control the absent mother and “cause” certain reactions to get needs met if it just develops certain traits - or performs certain impressive behaviors (but the mother becomes everyone). It’s all to avoid baby PTSD.

    6. By the month of April, I had driven to Arizona three more times, making a total of four trips to Arizona in my lifetime, just to buy lottery tickets out of intense desperation, believing it to be my only hope of attaining the life I desire

      God this is so sad…Jesus Christ. He’s so mentally ill. This also shows just how hard he fought not to end his life or carry out his plan, even after saying he gave up all hope. I think it shows how there doesn’t have to be “one major event” that snaps a person into malignancy. It’s a slow spiral sometimes. I do believe him when he says he was terrified to die.

He’s trying to find any other way out, but what’s really sad about it is how all his solutions are really detached from the reality of what would actually help - and so they inevitably fail, which only damages his psyche further. It’s a horrible loop. But his false self has literally discarded healthy solutions as even an option because it considers them too unsafe.

    7. Everything looked so small, and the people and cars looked like little insects. I briefly fantasized about being a god as I looked down upon them all. I imagined having the power to destroy everything below with destructive, supernatural powers. It made for a fine scenario, worthy of being discussed with James Ellis,

This defense comes in to reverse such deep feelings of helplessness and shame and invisibility. It’s like flipping the script for a moment to gain some relief in your inner world. I’ve had fantasies like this but without the destruction part. It’s always benevolent. The destruction part of his particular fantasy seems proof of the malignancy and level of damage in Elliot’s psyche. He’s not just flipping helplessness onto others, but the constant torment and narcissistic injury he felt others caused him since early middle school. He wants to now flip his agony and damage onto others as well, projecting it because it is too deep to bear, nor does he have the coping mechanisms to handle it.

He genuinely sees the world as his rejector, abuser, and the cause of his pain, because the alternative would seem like accepting he was treated this way because he was truly worthless. In being a victim he can also feel uniquely targeted and therefore still significant rather than face the terror that maybe: no one even thought about or noticed him; that he was just an invisible ghost too pathetic to gain attention, forced to watch the greats being seen and loved and admired.

8. I didn’t win. I sat very quiet and still in my desk chair for a long time, all of the emotion swept out of me. I didn’t react with rage or anguish. I just sat there, cold and dead, mentally trying to contemplate what I had just done.

This is huge - it’s the final loss of hope. Hope to be a part of the world and be loved and admired and experience the good, to be safe and superior through being a “creator” as Elliot described it - and all that’s left is the rage of an entire life lost, years of connection, love, experience the false self stole. This loss of hope can be a huge transition into malignancy. And, the brink of annihilation (truly facing the idea that you were shut out of life because you were just truly worthless and weak, and couldn’t attract any eyes, you’re so replaceable and forgettable). The only thing that covered up that fear and shame in this case was rage, and opting out of life altogether rather than living watching everyone else live and be adored.

    9. After I picked up the handgun, I brought it back to my room and felt a new sense of power. I was now armed. Who’s the alpha male now, bitches? I thought to myself, regarding all of the girls who’ve looked down on me in the past.

Here we see a huge part of his false self switch more fully to power over admiration or love. But even though the malignant side of him grows stronger, his root feelings are that of a vulnerable narcissist. It’s like even as the false self grows more malignant, his narcissistic defenses are still so weak, and the rage is one last attempt to ward off the fear that the real reason he didn’t succeed or wasn’t included is because he was truly born worthless. (Not true - it was just a false self that blocked him from ever discovering what made him unique.) Plus nothing in his life fueled his false self - he wasn’t praised for any unique external trait - and so his defenses and grandiosity had such a weak foundation, while racking up way more proof his “fears” might be right - causing him to end up as an incredibly vulnerable narcissist.

    10. I have craved power and significance all my life, and I will stop at nothing to find ways of attaining it.

I feel like I need to express that after a lifetime of not being seen and starving inside yourself it’s NORMAL to be incredibly hungry for attention and admiration and what you didn’t get (and yes this starvation still happens even if you’re physically paid attention to bc the false self doesn’t let anything reach the true self and it doesn’t use any building blocks of your authentic self when constructing your identity, so you grow up feeling like you have nothing unique that was seen or praised or connected to and nurtured).

You don’t get relationships based on the person’s love and admiration of your true self, and that fucks you up and causes so much shame. So yes a part of the constant “I’m starving for admiration” feeling comes from the false self rejecting all other forms of connection and even rejecting some types of praise that it’s been decided are too unsafe. But even if you start healing and getting connected with and seen for your true feelings, I’ve found part of the starvation for admiration just comes from going ALL those years without being seen when you should have been getting connected to, adored, admired, included, for who you really are, to have that nurtured, when it was normal in development. I missed out on the feeling of having a true identity reinforced, adored, and mirrored during so many crucial stages - elementary, middle school, high school, even college.

11. Most other men have huge drinking parties with their friends and girlfriends to mark their passing over the legal age limit to drink alcohol. I’ve read stories online of how exciting other men’s 21st birthdays are.

He’s only able to get his view of the world from media since he doesn’t have a lot of social experience, and it skews things (not “everyone” has this 21st birthday experience; in fact most people just have a party at their house or somewhere with friends). But for him it’s highlighting the lack of connection AND admiration he’s experienced, probably leaving him trying to ward off fears that the real reason his life is empty is that he has absolutely nothing for anyone to even admire or want - and in that I relate so deeply.

There’s also the fact that even if he realized not everyone’s birthdays are like the ideal, his false self would likely STILL only focus on those who have “that type of party” because in his mind that’s a sign you’re superior, and that’s who “to be.” Even the idea of normal and average is humiliating, like: “Ok, so this is all I’m worth? I have to settle for a basic party because I can never get to the level truly great people are at? I’ll always be outshone by them, always watching them getting adored living the “high life.” People would replace me immediately with them if they could choose, this is why I have to be better, to ward off any chance of being rejected or discarded or abandoned shamefully for someone truly superior. I need to make up for all the time spent being that disgusting pathetic empty creature in the shadows. I’ll do anything to live as the popular powerful one.” It’s the only way the false self allows you to feel alive, valuable, or safe. Being average is punished and shamed by the false self because sometime in very early childhood a belief developed that if you were just yourself you’d be rejected and left in grave danger, so “your crib has to literally shine” “above and beyond” in order for people to choose you, to stick around, meet your needs, love you. The problem is sometimes the standard the false self sets is literally impossible for any human. There can be resentment and fear in that too. How will I stand out then? Standing out = safe.

12. refused to talk to me ever again. That was the last time I ever spoke to him. It was the ultimate betrayal. I thought he was the one friend I had in the whole world who truly understood me, who truly understood my views and the reasons why I thought the way I did about the world. I confided everything to him, because I thought we were on the same page

James rejecting Elliot (which he couldn’t seem to understand was because of the horrific things he described wanting to do to people) seemed to deal a final blow towards any softness left in him. He’s left alone in a world where even other virgins reject him. It’s really interesting that he takes the rejection so personally rather than reflecting on why he might have driven James away. The same thing happened to me in high school. I manipulated a scenario to get people to feel sorry for me and scared away the only real friend I felt I had, who I was mirroring anyway (so not like it was healthy). This was during a more borderline phase. I did everything I could to get her back, even explaining that I faked the scenario so she wouldn’t be as scared. When she rejected me, it felt so personal and I couldn’t see how she might have felt totally manipulated and horrified by the way I’d treated her, trying to provoke reactions in her. My mind literally couldn’t see how that wasn’t normal and I just felt like she was rejecting me because of my very core. I shut down very greatly after this and became cold and angry and it was like I had never cared about her. I had fantasies of her seeing me moving on, becoming successful and superior, and imagining how she would regret her decision.

    13. Who else deserved such a victory? I had been through so much rejection, suffering, and injustice in my life, and this was to be my salvation. With my whole body filled with feverish hope, I spent $700 dollars on lottery tickets for this drawing.

It’s really sad, but I can relate to this sensation. Wondering why the hell all this suffering happened with no breaks in the clouds, and fearing deep down that you are truly just a forgettable pathetic unremarkable being and that’s why you were made to just suffer in the background, watching greater people live life. Fearing, I suppose, that you got that suffering and rejection because you were truly worthless and couldn’t hold anyone’s attention.

    14. I planned to go back to college once I had bolstered myself with all this wealth, and lord myself over all the other students there, finally fulfilling my dream of being the coolest and most popular kid at school. As I sat meditating in my room, I imagined the ecstasy I would feel as scores of beautiful girls look at me with admiration as I drive up to college in a Lamborghini. Such an experience would make up for everything. I had to win this jackpot.

I wonder what he would do if he actually had won the lottery and pulled up in the Lamborghini only to once again get barely any attention because everyone is wrapped up in their own lives, maybe glancing once or twice, but hurrying into class. It’s not like Elliot’s social skills would have changed. What I’m trying to say here is that this fantasy is so unrealistic that he would’ve found disappointment either way. It’s the idea that all he would have to do is have a certain type of car or a mansion, and suddenly people would flock to him. He would still have to be somewhat socially charming or even just friendly, interested and interesting - in order to have people feel comfortable enough to come home with him or to want to get to know him. And his false self just shoots him in the foot because it doesn’t allow him to realize the extent of his social issues and work on them, because it thinks doing so would be way too dangerous for how fragile he is - it’s protecting him from more overt humiliations or failures.

    15. I had a particular burning hatred for the actor Alexander Ludwig, who I saw sitting arrogantly on a couch as people crowded around him in adoration. I hated everything about him; his golden blonde hair; his tall, muscular frame; his cocky, masculine face. That boy could get any girl he wanted. His life was completely opposite from my own. If only I could get a taste of how he lived for just one day...

This is such a vNPD feeling, and I relate to it so much… I feel like I would literally die in happiness and disbelief if I suddenly woke up to a life like that. I can’t even imagine how much ecstasy I would feel, being surrounded and adored, everyone vying for my attention. I’ve gotten so little of it in my life, similar to Elliot, that I am so incredibly envious and can’t even envision what it might feel like. That type of envy also leads me to difficult thoughts, such as “I would do anything immoral to obtain that”… “If I could trade places with that person right now but they died or lost everything, I would instantly do it”… or “The things I would do if I had the power of a celeb, of Trump, or Elon Musk… I would indulge in as much as possible, I don’t care whether it’s good or bad.” I’m kind of convinced that this is why the universe gave me a psychological setup closer to Elliot’s, to make sure I wouldn’t succeed, that I would struggle with vulnerable narcissism because of the cards I’ve been given and actually learn some lessons.

    16. After some deep contemplation, I had the revelation that the Day of Retribution wasn’t the only way I could make up for all of the suffering I’ve had to experience. If I could somehow become a multi- millionaire at a young age, then my lifestyle would instantly become better than most people my age. I would be able to get revenge on my enemies just by living above them and lording over them. That was a form of happy, peaceful revenge, and it became my only hope.

This is so fucking sad… You can see the kid didn’t want to do this. Whatever was left of him. He’s fighting like hell not to do it, but of course, he doesn’t understand what he’s even fighting against, so he’ll ultimately fail. This also shows how early he started contemplating malignant outcomes - but it also shows a fight against the malignancy.

Also, what Elliot is mentioning about the peaceful form of “revenge” is what I was talking about in my previous annotation. It’s not even really revenge, just superiority, simply being above everyone else and having them watch you, with you knowing that they’re inferior to you. It feels so safe and it is peaceful. There’s no active harm. That’s my favorite because it’s so peaceful, and I always imagine choosing to heap wealth or praise upon the few people who are lucky enough, worthy enough - for me to pay attention to them and include them in my grand life. Unfortunately, this never works in reality because you’ll always end up feeling triggered in some way while trying to be peaceful, and then things go to shit. Nothing about this disorder’s vision turns out the way that you want, the NPD response doesn’t work. I go into why in my personal notes.

    17. They showed me no mercy, and in turn I will show them no mercy.

It’s sad because I’ve mentioned before why it would seem to Elliot like everyone around him was out to get him, considering all the humiliation and bullying and how little support he had. His parents also betrayed him or took away his control or safety several times in different ways. But the truth is, everybody else was just living their lives around him and yes - while nobody helped him, and he did deserve help out of his suffering - that wasn’t their responsibility. The reason he sees it as their responsibility is because he sees them as his torturers. He sees them as actively preventing him from feeling any sort of relief from his agony, shutting him out at every turn. He feels truly attacked. And he feels like everyone’s done this, because no one came up to him. He dismisses the people who did try and reach out to him, and the saddest part of all of this for me is that it was all his false self literally strangling him of life. I really want to help people with these mental illnesses so badly, and I barely survived my own strangulation. If the world never learns the ins and outs of this, we will have many repeat cases because we won’t be able to truly help.

    18. My life was devoid of friends, devoid of girls, devoid of sex, and devoid of love. I realized that I will never be able to look back on my youth, the time that I should be having a blast, and feel satisfied about all of the happy memories I have. There were no happy memories; only misery, loneliness, rejection, and pain. The only thing I could do was even the score. I wanted to make everyone else suffer just as they made me suffer. I wanted revenge.

This makes me sad. I’ve had similar feelings, although of less of an intensity, and the only thing that has helped me cope with missing out on my childhood, teenage years, and early adulthood is honestly spirituality and reincarnation. I truly believe I will get many more chances to live, and so I can endure the pain of this lifetime and all the things I missed out on, and try to make the most of these lessons. Sometimes I picture myself in the temporary role of a narcissist. If I can survive this, who knows what I will be next life. It really helps put things in perspective for me at least and makes me less bitter. It’s the only thing that I found to have worked because, as Elliot describes, the pain and loss is immense and I feel like otherwise I would also want revenge.

      Even with my coping skills, I still have extreme trouble having any happiness or empathy for people who have the life I desired. I still struggle with fantasies of tearing down people’s happiness, or at the very least, feelings that they are the type of people I would have no remorse stepping on to get somewhere. I have fantasies that if I ever got successful, how I would make those people watch me from below or not help them at all and feel passively satisfied by their troubles.

      For me, coping means allowing these to be passing thoughts, of course to not act on them- and not really give them fuel. I can’t really control my mind so I allow them to just be symptoms that come and go while I continue to try and cultivate good karma. I am scared of losing control if I feed rage-filled or cold thoughts too much.

19. As I looked at all the pictures of the two of them together, I shivered with pure hatred. I could physically feel the hatred burn through my entire body. I wanted to kill both of them, and I was capable of doing it. Brittany Story should have been mine, and if I can’t have her, no one should! I fantasized about capturing the two of them and stripping the skin off her boyfriend’s flesh while making her watch. Why must my life be so full of torment and hatred? I questioned to the universe with turmoil roiling inside me. I screamed and cried with anguish that day.

      Malignant NPD is NPD with ASPD traits. I already mentioned how Elliot felt so raw and sick from all the constant narcissistic injuries that now when he had an injury, his mind turned way more cruel to combat the deepening damage. Also, self-esteem or admiration was not working to make him feel stable. So it starts to escalate to power.

Also, when he asks the question “Why must my life be so full of torment and hatred?” I so badly want to be there and just tell him “It’s because of a disorder called NPD” and sit him down on his bed and explain it to him. I feel like if people can understand what they’re suffering from, they can then try and take back their power and actually try things that go somewhere and get real results. Otherwise they just suffer helplessly and grab for extremes because they are confused as to what’s even happening.

    20. I still didn’t make any friends, and I still didn’t talk to any girls. By the end of the month, I began to question what I was doing so wrong. I saw obnoxious slobs who dressed in basketball shorts and T-shirts walking with hot girls. And there I was, decked out in Armani, all by myself. It was preposterous!

His false self keeps blocking him from realizing the real reason is his social skills. If he had to actually put in effort, he’s so fragile he couldn’t handle the direct humiliation, just like how he chose to sit away from that girl to protect himself even though he wanted her attention so bad. Sometimes he’ll recognize that he has bad social skills, but even then he thinks “Don’t they understand how hard it is with social anxiety?” He’s protecting himself from realizing it’s something he’d have to work on, because it would collapse him.

21. I immediately went to the restroom to look at myself in the mirror a few times, just so that I can feel more assured of myself. Yes, I thought. I am the image of beauty and supremacy. I kept saying it over and over again, as if it was a mantra. When I crossed the renowned bridge that connected the two halves of the campus, I felt as if everyone was admiring me. As I passed by groups of girls, I pretended to imagine that they secretly adored and wanted me. After all, that was how it was meant to be. The more I walked around the campus, the more I tried to convince myself that that was the case. My first class was sociology, and I waited until everyone was seated before I walked in. I came in through the front entrance so that everyone could look at my fabulous self. To my utter dismay, I saw that no one turned their head to look at me at all. No girl tilted a head or lifted a pretty little eyebrow at my approach. After all that effort, I was still being treated like I was invisible.

You can see how hard he has to fight to convince himself any of this is real. Even though he says he feels like a superior gentleman, it really comes across more like he’s desperately convincing himself. It’s probably very hard for him to believe he actually comes across that way with no one to validate it. Maybe he was afraid of not even living up to his own image of what he believed he was. What if that wasn’t real either?

Also, he’s trapped in such a horrible loop where he keeps not understanding he has to put in social effort in order to fully get interacted with, but all he does is walk around, and then, when no one really reacts, he uses it to further his humiliation, as we will see again.

    22. The mere mention of Leo put me in a bad mood. I couldn’t believe that Vincent, too, was now experiencing the pleasures of partying with young people while I sat all alone at the adult’s party, sipping my wine in lonely depression. I should be partying with my own friends, and my own girlfriends, but I had NONE.

      But yet he rejects the only friends he could’ve partied with. This is one of the worst parts about NPD. The false self does not allow you to exist or partake in anything unless you meet its standards. Only then does it deem you safe to partake in life. He could’ve partied with the friends that he considered losers, but it wouldn’t make him feel alive, happy, or good at all -just humiliated and as empty as he felt at the dinner because it’s not the exact picture the false self demands/ it’s not one of those BIG parties and he doesn’t have a girl. The false self starves you inside a cage even as you may be offered things (remember Elliot previously feeling starved and unseen even when he had James listening to him). If the false self doesn’t consider the things offered safe enough to enjoy, or it doesn’t meet the standard of significance, it won’t even let you have them.

It’s almost like the false self says “I won’t allow anything to be a part of the self (whether through association or experience) if it isn’t significant enough by my standard. We need to be (this) significant in order to feel safe and be assured we won’t be replaced, overlooked, or abandoned, and if we can’t prevent it at that point - at least we can blame it on someone else because we will know we are superior for sure.” Although that mindset may be faulty, it’s how the false self operates. It believes if it can portray itself in a certain way, it can control everything, including avoiding rejection. When the child was VERY young, for whatever reason, there was a scenario in which they were abandoned, harmed, or neglected, and their mind decided it was because they were “bad,” that the authentic self was not worthy enough to be attended to. That means it was dangerous to be authentic at all, because it won’t get your needs met and it will lead to terrible things. The false self develops then, to create a version of you worthy or shining enough to never get ignored, overlooked, or harmed again. It tries to build you an identity without ever having to be vulnerable or authentic. The only way to do that, though, is through external traits…

    1. African

      I agree, though the reason these ethnicities are at higher risk is due to a number of social determinants of health that have not been addressed.

It's important for students to know that it's not just because a person is Latino that they have a higher risk; it's because of the SDOHs that impact this and other populations and increase their risk. So assessing for poverty, transportation, access to care, etc. all adds to risk.

You might refer them back to the SDOH section.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      Critique

      In this manuscript, the authors examine the biochemistry of two protein domains that are, on the basis of sequence similarity, predicted to function autonomously as binders of histone H3 tails or methylated DNA. They present solid data to suggest that neither domain in fact has this function, but that they act as protein interaction domains that form a heterodimer mediated by the presence of a zinc ion (two ligands from each protein).

In the first part of the Results, the authors note that ASXL PHD doesn't contain aromatics that are characteristic of methylated lysine binding. I would just note that they don't mention at this point that some PHDs bind unmethylated H3 - and that aromatics are not required for that binding activity. The lack of H3K4me3-binding aromatics doesn't at all make a case that the domain doesn't bind histones. The lack of the Ala1 binding residues does make this case, but that's separate...

Anyway, they then go on to show convincingly by ITC that ASXL doesn't bind the N-terminal H3 tail - unmodified or methylated. They also show modified-H3 ELISA data that make the same point (though it would be nice to know what the points were on the single ELISA that exceeded 2 SDs, even if they weren't reproduced - especially given there is a lot of scatter in the ELISA). I note in passing that I don't think I could find a Supp table 1.

      The authors then use AF3 to show that what would typically be the N-terminal zinc-binding site is not well predicted by the software (and the site ends up being square planar), suggesting that something might be amiss. (They were also unable to obtain an experimental structure.) It would have been helpful to gain more insight into what led them to the conclusion that the protein forms a weak homodimer based on the NMR data. Typically, it can be challenging to determine by NMR whether a dimer is forming or if non-specific soluble aggregates or other factors are contributing to line broadening.

Next, the authors show nicely that MBD5/6 - two proteins shown in a previous paper to form a complex with ASXL - are predicted by AF3 to dimerize with ASXL - and form an intermolecular zinc-binding module in doing so. This is a nice result and there are very few examples of this in the literature (eg the zinc hook formed by Rad50 proteins). They confirm the zinc-binding prediction biochemically. They also show an HSQC of the complex (both subunits 15N labelled) and they count what they say is roughly the right number of peaks. To me, the lineshapes in the HSQC look good and, as the authors say, there are no clearly disordered residues. I do make some additional comments below about the NMR data - suggesting what I think would be some valuable follow-up experiments. Overall, this study is a nice piece of biochemistry that recognizes an anomaly in the classification of examples of not one, but two, domain types well-known in the field of epigenetics. Going further than that, they not only show that the domains are mis-annotated but also demonstrate what their real function is and put forward a very likely model for their structure.

      The work is a good combination of AF based computational prediction with corroborating biochemistry and the experiments look technically well done to me. It is definitely of publishable quality and represents an advance in our understanding both of the particular proteins that they have studied and of the quirkiness of protein structure in general - there is always a new wrinkle to be discovered. I would make a couple of comments and suggestions that I think could improve the manuscript. I also have a number of minor comments below.

      Regarding the NMR data, the HSQC of the heterodimer that they show has nice lineshapes, as I mentioned above. However, the spectrum looks a little curious and closer inspection makes me wonder whether we are actually looking at two or more species with related structures. Many of the peaks appear to have a second peak nearby and it looks to me as if there is a consistent intensity ratio between the two forms (maybe 3:1 or 4:1?). It would be beneficial to explore this further, as understanding this aspect more clearly could have important implications for their analysis. I think the overall conclusions would probably still hold, but there would be far fewer signals than expected, suggesting likely some sort of slow-intermediate conformational exchange process that is giving two signals for a chunk of the residues and giving no signals for some of the others. Some comparison with the HSQC of the PHD domain alone might be helpful here.

      Some simple backbone triple resonance experiments would also be very helpful. Not only would they allow assignments to be made - and therefore a comparison of predicted secondary structure with the AF3-predicted fold - but also would help confirm whether there are two conformers. Often in these cases, the Ca and Cb chemical shifts for an exchanging system are much more similar than the HN and 15N signals, and it is therefore often clear that two peaks are actually the same residue in two different conformations. ZZ exchange experiments could help too, though these can sometimes be challenging.

      Finally, it would be reassuring to see SEC-MALS data for the heterodimer. Given that the interaction is mediated by covalent bonds, I'd expect to see a dimer molecular weight. It would also be reassuring to see a nice-looking SEC peak - and it would be useful data to have as part of the interrogation of possible chemical exchange mentioned above.

      Specific points

      • Intro: A nucleosome wraps less than two turns of DNA
      • I'm not a fan of this sentence: "The quaternary structure of the nucleosome forces the N- and C-terminal tails from histone proteins to protrude for covalent chemical modification". Not clear to me that the nucleosome 'forces' the tails to protrude...
      • The authors state that "Attachment of ubiquitin to histone H2A at K119 limits gene expression" - but they don't give any context. Which genes are limited in their expression? Nearby ones? Ones on the same chromosome? Just the gene that has an H2A-Ub in a specific position?
      • No need for capital Z in zinc.
      • "After purification, the protein solution was concentrated to 42.5 uM". The authors would not know the protein concentration to three significant figures. They would be unlikely to know it to 2 figures, given the inherent uncertainty in protein concentration measurement.
      • I like that they show purification gels for their proteins - almost no one does...
      • The authors state that "The domain, however, proved too small and flexible to produce crystals". However, the authors don't (as far as I can see) have any data to support the notion that either of these was the reason that no suitable crystals were obtained. I bet there are plenty of large, well-ordered proteins that haven't been able to have their crystal structures determined...
      • Supp fig 3 - the authors could label N and C termini.
      • "The 1H15N HSQC spectra revealed the presence of about 95 backbone amide peaks, which is in agreement with the overall protein complex." The authors could tell us how many peaks are expected, to make the comparison more useful! (and it should be spectrum).
      • "and form a tight, stable protein complex". Too many adjectives... The data don't show that the complex is tight, nor really say anything about its stability (is the Tm 35 degrees or 95 degrees - can't really say). The data do show that the two proteins form a complex.
      • I'd say that 633 A2 buried surface area isn't 'large'. It's small by protein complex standards, I think. But still perfectly reasonable.
      • Figure S6 - would be good to label N and C termini.

      Significance

      In this manuscript, the authors examine the biochemistry of two protein domains that are, on the basis of sequence similarity, predicted to function autonomously as binders of histone H3 tails or methylated DNA. They present solid data to suggest that neither domain in fact has this function, but that they act as protein interaction domains that form a heterodimer mediated by the presence of a zinc ion (two ligands from each protein).

      I am a structural biologist and biochemist who has worked on zinc-binding domains - including PHD domains - on and off over 30 years.

    1. it emphasizes the development of communities of shared inquiry and action

      I totally agree with this approach to Participatory Action Research. It makes sense that communities should be involved in the research process, not just as subjects but as active partners who help shape the work. This makes the solutions feel more real and relevant to the people they impact, and it makes the whole process more inclusive and meaningful. It also makes the research more impactful, because it’s based on the real needs and experiences of the community rather than just the perspectives of outsiders.

    2. We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems.

      I see design as more than just creating things—it’s a powerful tool for sustaining, healing, and empowering our communities. It helps us challenge and break free from exploitative and oppressive systems, shaping a future that is more just and inclusive. When used intentionally, design can be a force for real change.

    1. people can make themselves feel better by adopting hostile attitudes toward another group, and, unfortunately, some meat eaters do this by characterizing

      I feel like this may be why we can become so defensive when our behaviors are challenged by others. We don't always fully disagree; rather, we are just hurt and feel personally attacked. Like in the Crash Course cigarette example: you recognize it's wrong, but you still smoke, so you just continue on.

    1. nt - is not, according to the powers theory of causation, a cause of the vase's breaking. The vase's breaking is a mutual manifest

      Right, so the double preventer is not a cause of the breaking because the breaking is a mutual manifestation of the dispositional properties of the objects. The cause is the fact of being fragile and that of being hard, not the explosive device destroying the barrier. But isn't this just a roundabout way of saying it's an indirect cause?

    1. Although it’s difficult to handle complexity in software, it’s much easier to handle it there than elsewhere in a system. A good engineer therefore moves as much complexity as possible into software.

      Tangent: I've likened system designs for Web tech that requires deep control and/or configuration of the server (versus a design that lets someone just dump content onto a static site) to the leap in productivity/malleability of doing something in hardware versus doing it in software.

      Compare: Mastodon-compatible, ActivityPub-powered fediverse nodes that are required to implement WebFinger versus RSS-/Atom-based blog feeds (or podcasts—both of which you could in theory author by hand if you wanted to).

    Annotators

    1. There is a good chance that at some point in your career you will find yourself in a situation that involves unethical behavior at your workplace.

      Everyone thinks that could never happen to them or they would never act like that. But the reality is, once you're in that situation, some people freeze up and will just follow along with others even if it is unethical. This is why it's so important to be able to stand up for yourself and your beliefs.

  4. resu-bot-bucket.s3.ca-central-1.amazonaws.com
    1. Created a version of the popular New York Times web browser game Connections that allows players tocreate and share their own custom puzzles

      Don't be afraid to exaggerate on your bullets a little bit. You can probably add another bullet about streamlining data from CSV files; even if it's just importing a CSV as a game, really emphasize how useful this feature is.

    1. It’s probable that he has changed his name

      Did he legally change his name or just tell people to call him something else? Could he prove that he was this new person?

    1. Unfortunately for Google, ChatGPT is a better-looking crystal ball. Let’s say I want to replace the rubber on my windshield wipers. A Google Search return for “replace rubber windscreen wiper” shows me a wide variety of junk, starting with the AI overview. Next to it is a YouTube video. If I scroll down further, there’s a snippet; next to it is a photo. Below that are suggested searches, then more video suggestions, then Reddit forum answers. It’s busy and messy.Now let’s go over to ChatGPT. Asking “How do I replace rubber windscreen wiper?” gets me a cleaner layout: a response with sub-headings and steps. I don’t have any immediate link to sources and no way to evaluate whether I’m getting good advice — but I have a clear, authoritative-sounding answer on a clean interface. If you don’t know or care how things work, ChatGPT seems better.

      I got a moisturizer recommendation from Anthropic's LLM because Google search results were full of advertorial garbage. I put in time trying to go through them, and it sucked. Infinite product carousels with nothing even in the right category of what I was looking for (lightweight moisturizers shouldn't be cream-like!!). Even just getting some names to go look at actual reviews for – more than the SEO-adversarial Internet could provide me.

      The moisturizer is troublingly good.

    1. This is compounded by the feedback people get on social media, in the form of likes and retweets and so on. “Our hypothesis is that the design of these platforms could make expressing outrage into a habit, and a habit is something that’s done without regard to its consequences – it’s insensitive to what happens next, it’s just a blind response to a stimulus,” Crockett explains.

      Highlight how we also base some forms of approval off of virtual "likes" and "retweets"

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Fuchs describes a novel method of enzymatic protein-protein conjugation using the enzyme Connectase. The author is able to make this process irreversible by screening different Connectase recognition sites to find an alternative sequence that is also accepted by the enzyme. They are then able to selectively render the byproduct of the reaction inactive, preventing the reverse reaction, and add the desired conjugate with the alternative recognition sequence to achieve near-complete conversion. I agree with the authors that this novel enzymatic protein fusion method has several applications in the field of bioconjugation, ranging from biophysical assay conduction to therapeutic development. Previously the author has published on the discovery of the Connectase enzymes and has shown its utility in tagging proteins and detecting them by in-gel fluorescence. They now extend their work to include the application of Connectase in creating protein-protein fusions, antibody-protein conjugates, and cyclic/polymerized proteins. As mentioned by the author, enzymatic protein conjugation methods can provide several benefits over other non-specific and click chemistry labeling methods. Connectase specifically can provide some benefits over the more widely used Sortase, depending on the nature of the species that is desired to be conjugated. However, due to a similar lengthy sequence between conjugation partners, the method described in this paper does not provide clear benefits over the existing SpyTag-SpyCatcher conjugation system.  Additionally, specific disadvantages of the method described are not thoroughly investigated, such as difficulty in purifying and separating the desired product from the multiple proteins used. Overall, this method provides a novel, reproducible way to enzymatically create protein-protein conjugates.

      The manuscript is well-written and will be of interest to those who are specifically working on chemical protein modifications and bioconjugation.

      I'd like to comment on two points.

      (1) The benefits over the SpyTag-SpyCatcher system. Here, the conjugation partners are fused via the 12.3 kDa SpyCatcher protein, which is considerably larger than the Connectase fusion sequence (19 aa). This is mentioned in the introduction (p. 1 ln 24-26). Furthermore, SpyTag-SpyCatcher fusions are truly irreversible, while Connectase/BcPAP fusions may be reversed (p. 8, ln 265-273). For example, target proteins (e.g., AGAFDADPLVVEI-Protein) may be covalently fused to functionalized magnetic beads (e.g., Bead-ELASKDPGAFDADPLVVEI) in order to perform a pulldown assay. After the assay, the target protein and any bound interactors could be released from the beads by the addition of a Connectase / peptide (AGAFDAPLVVEI) mixture.

      In a related technology, the SpyTag-SpyCatcher system was split into three components, SpyLigase, SpyTag and KTag  (Fierer et al., PNAS 2014). The resulting method introduces a sequence between the fusion partners (SpyTag (13aa) + KTag (10aa)), which is similar in length to the Connectase fusion sequence (p. 8, ln 297 - 298). Compared to the original method, however, this approach seems to require longer incubation times, while yielding less fusion product (Fierer et al., Figure 2).

      (2) Purification of the fusion product. The method is actually advantageous in this respect, as described in the discussion (p. 8, ln 258-264). Examples are now provided in Figure 6.

      Reviewer #2 (Public review):

      Summary:

      Unlike previous traditional protein fusion protocols, the author claims their proposed new method is fast, simple, specific, reversible, and results in a complete 1:1 fusion. A multi-disciplinary approach from cloning and purification, biochemical analyses, and proteomic mass spec confirmation revealed fusion products were achieved.

      Strengths:

      The author provides convincing evidence that an alternative to traditional protein fusion synthesis is more efficient with 100% yields using connectase. The author optimized the protocol's efficiency with assays replacing a single amino acid and identification of a proline aminopeptidase from Bacillus coagulans (BcPAP) as a usable enzyme for the fusion reaction. Multiple examples including Ubiquitin, GST, and antibody fusion/conjugations reveal how this method can be applied to a diverse range of biological processes.

      Weaknesses:

      Though the ~100% ligation efficiency is an advancement, the long recognition linker may be the biggest drawback. For large native proteins that are challenging/cannot be synthesized and require multiple connectase ligation reactions to yield a complete continuous product, the multiple interruptions with long linkers will likely interfere with protein folding, resulting in non-native protein structures. This method will be a good alternative to traditional approaches as the author mentioned but limited to generating epitope/peptide/protein tagged proteins, and not for synthetic protein biology aimed at examining native/endogenous protein function in vitro.

      The assessment is fair, and I have no further comments to add.

      Reviewer #1 (Recommendations for the authors):

      Major/Experimental Suggestions:

      (1) Throughout the paper only one reaction shown via gels had 100% conversion to desired product (Figure 3C). It is misleading to title a paper with absolutes such as "100% product yield", when the majority of reactions show >95% product yield, without any purification. Please change the title of the manuscript to something along the lines of "Novel Irreversible Enzymatic Protein Fusions with Near-Complete Product Yield".

      The conjugation reaction is thermodynamically favored. It is driven by the hydrolysis of a peptide bond (P|GADFDADPLVVEI), which typically releases 8 - 16 kJ/mol of energy. This should result in a >99.99% complete reaction (ΔG° = -RT ln (Product/Educt)). In line with this, 99% - 100% of the less abundant educts (LysS, Figure 3A; MBP, Figure 3B; Ub-Strep, Figure 3C) are converted in the time courses (Figure 3D-F show different reaction conditions, which slow down conjugate formation). 100% conversion is also shown in Figure 5, Figure 6, and Figure S4. Likewise, 99.6% relative fusion product signal intensity was measured in an LC-MS analysis (Figure S2) after 4 h reaction time (0.13% and 0.25% educts). In this experiment, the proline had been removed from 99.8% of the peptide byproducts (P|GADFDADPLVVEI). It is clear that this reaction is still ongoing and that >99.99% of the prolines will be removed from the peptides in time. These findings suggest that the conjugation reaction gradually slows down as less educt remains available, but eventually reaches completion.
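The equilibrium arithmetic above can be sanity-checked in a few lines. This is an illustrative sketch only: the ΔG° = -RT ln(Product/Educt) relation and the 8 - 16 kJ/mol range are taken from the response, while the temperature (298.15 K) and the function name are my own assumptions.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # assumed temperature, K

def equilibrium_ratio(delta_g):
    """Product/educt ratio at equilibrium, from ΔG° = -RT ln(Product/Educt).

    delta_g: standard free-energy change in J/mol (negative = favorable).
    """
    return math.exp(-delta_g / (R * T))

# Peptide-bond hydrolysis releasing 8 - 16 kJ/mol pushes the ratio strongly
# toward product; the subsequent proline removal by BcPAP then eliminates
# the reverse reaction entirely.
for dg in (-8_000, -16_000):
    print(f"ΔG° = {dg / 1000:.0f} kJ/mol -> product/educt ≈ {equilibrium_ratio(dg):.0f}")
```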

      For some experiments, lower product yields (e.g. 97% in Figure 3B) are reported in the paper. These were calculated with Yield = 100% x Product / (Educt 1 + Educt 2 + Product). With this formula, 100% conjugation can only be achieved with exactly equimolar educt quantities, because both educt 1 and educt 2 need to be converted entirely. If educt 1 is available in excess, for example because of protein concentration measurement inaccuracies or pipetting errors, some of it will be left without a fusion partner. In the case of Figure 3B, 3% more GST seemed to have been in the mixture. These are methodological inaccuracies.
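The yield formula is easy to check numerically; a minimal sketch of the Figure 3B scenario, with illustrative quantities and a function name of my own:

```python
def conjugation_yield(educt1_left, educt2_left, product):
    """Yield = 100% x Product / (Educt 1 + Educt 2 + Product), as quoted above."""
    return 100.0 * product / (educt1_left + educt2_left + product)

# Exactly equimolar educts, complete conversion: nothing is left over.
print(conjugation_yield(0.0, 0.0, 1.0))   # 100.0

# A ~3% excess of educt 1 (e.g. GST in Figure 3B) cannot find a fusion
# partner, capping the computed yield near 97% even at full conversion.
print(conjugation_yield(0.03, 0.0, 1.0))  # ≈ 97.1
```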

      (2) Please provide at least one example of a purified desired product, and mention the difficulties involved as a disadvantage to this particular method. Separating BcPAP, Connectase, and the desired protein-protein conjugate may prove to be quite difficult, especially when Connectase cleaves off affinity tags.

      Examples are now provided in Figure 6. As described in the discussion (p. 8, ln 258-264), the simple product purification is one of the advantages of the method.

      (3) For the antibody conjugate, please provide an example of conjugating an educt that would prove to be more useful in the context of antibodies. For example, as you mention in the introduction, conjugation of fluorophores, immobilization tags such as biotin, and small molecule linker/drugs are useful bioconjugates to antibodies.

      Antibody-biotinylation is now shown in Figure S6; Antibody-fluorophore conjugates are part of Figures S5 and S7.

      (4) Please assess the stability of these protein-protein conjugates under various conditions (temperature, pH, time) to ensure that the ligation via Connectase is stable over a broad array of conditions. In particular, a relevant antibody-conjugate stability assay should be done over the period of 1-week in both buffer and plasma to show applicability for potential therapeutics.

      The stability of an antibody-biotin conjugate in blood plasma over 7 days at different temperatures is now shown in Figure S7.

      Generally, Connectase introduces a regular peptide bond (Asp-Ala) with a high chemical and physical stability (e.g. 10 min incubation at 95°C in SDS-PAGE loading buffer; H2O-formic acid / acetonitrile gradients for LC-MS). The sequence may be susceptible to proteases, although this is not the case in HEK293 cells (antibody expression), E. coli, or blood plasma (Figure S7).

      (5) Please conduct functional assays with the antibody-protein/peptide conjugates to show that the antibody retains binding capabilities to the HER-2 antigen and the modification was site-selective, not interfering with the binding paratope or binding ability of the antibody in any way. This can be done through bio-layer interferometry, surface plasmon resonance, ELISA, etc.

      We plan the immobilization of the HER2 antibody on microplates and its use in an ELISA. However, this experiment requires significant testing and optimizations. It will be part of a future paper on the use of Connectase for protein immobilization.

      For now, the mass spectrometry data provide clear evidence of a single site-selective conjugation, as the C-terminal ELASKDPGAFDADPLVVEI-Strep sequence is replaced by ELASKDAGAFDADPLVVEI(-Ub). Given that the conjugation sites at the C-termini are far from the antigen binding sites, and have already been used in a number of other approaches (e.g., SpyTag, SnapTag, Sortase), it appears unlikely that these conjugations interfere with antigen binding.

      (6) Please include gels of all proteins used in ligation reactions after purification steps in the SI to show that each species was pure.

      The pure proteins are now shown in Figure S9.

      (7) Please provide the figures (not just tables) of LC/MS deconvoluted mass spectra graphs for all conjugates, either in the main text or the SI.

      Please specify which spectra you are missing. I believe all relevant spectra are shown in Figures 4, 5, and S3. The primary data can be found in Dataset S2.

      (8) Please provide more information in the methods section on exactly how the densitometry quantification of gel bands was performed with ImageJ.

      Details on the quantification with Image Studio Lite 5.2 were added in the method section (p. 17, ln 461-463).

      Minor Suggestions:

      (1) Page 1, line 19: can include one sentence on what assays these particular bioconjugations are useful for (e.g. internalization cell studies, binding assays, etc.)

      I prefer not to provide additional details here to keep the text concise and focused.

      (2) Page 1, line 22: "three to ten equivalents" instead of 3x-10x.

      Done.

      (3) Page 1, line 23: While NHS labeling is widely considered non-specific, maleimide conjugation to free cysteines is generally considered specific for engineered free cysteine residues, since native proteins often do not have free cysteine residues available for conjugation. If you are referring to the potential of maleimides to label lysines as well, that should be specifically stated.

      I modified the sentence; it now states that these methods "can be" unspecific.

      As pointed out, it is possible to achieve specificity by eliminating all other free cysteines and/or engineering a cysteine in an appropriate position. In many other cases, however (e.g., natural antibodies), several cysteines are available, or the sample contains other proteins/peptides. I did not want to go into more detail here and refer to the cited review.

      (4) Page 1, line 31: "and an oligoglycine G(1-5)-B"

      Done.

      (5) Page 1, line 34: It is not clear where in the source these specific Km values are coming from, considering these are variable based on specific conditions/substrates and tend to be reaction-specific.

      I cited another review, which lists the same values, along with a few other measurements (Jacobitz et al., Adv Protein Chem Struct Biol 2017, Table 2). It is clear that each of these measurements differs somewhat, but they are generally comparable (K<sub>M</sub>(LPETG) = 5500 - 8760 µM; K<sub>M</sub>(GGGGG) = 140 - 196 µM). I chose the cited study (Frankel et al., Biochemistry 2005), because it also investigated hydrolysis rates. In this study, the measurements are derived from the plots in Figure 2.

      (6) Page 1, line 47: the comparison to western blots feels a little like apples to oranges, even though this comparison was made in previous literature. Engineering an expressed protein to have this tag and then using the tag to detect and quantify it, feels more akin to a tagging/pull down assay than a western blot in which unmodified proteins are easily detected.

      It is akin to a frequently used type of western blot with tag-specific antibodies, e.g. Anti-His<sub>6</sub>, -Streptavidin, -HA, -cMyc, -Flag. I modified the sentence to clarify this.

      (7) Page 2, line 51: "Connectase cleaves between the first D and P amino acids in the recognition sequence, resulting in an N-terminal A-ELASKD-Connectase intermediate and a C-terminal PGAFDADPLVVEI peptide."

      I prefer the current sentence, because we assume that a bond between the aspartate and Connectase is formed before PGAFDADPLVVEI is cleaved off.

      (8) Page 3, line 94: "Exact determination is not possible due to reversibility of the reaction", the way it is stated now sounds like it is a flaw in the methods. Also, update Figure 2 to read "Estimated relative ligation rate".

      Done.

      (9) Page 3, lines 101-107: This is worded in a confusing way. It can either be X<sub>1</sub> or X<sub>2</sub> that is inactivated, depending on whether the altered amino acid is on the original protein sequence or on the desired educt to conjugate. You first give examples of how to render other amino acids inactive, but then ultimately state that proline was made inactive, so separate the two distinct possibilities a bit more clearly.

      The reaction requires the inactivation of X<sub>1</sub>, without affecting X<sub>2</sub> (ln 100 - 102). This is true, no matter whether it is X<sub>1</sub> = A, C, S, or P that is inactivated. I added a sentence to clarify this (ln 102 – 103).

      (10) Page 4, line 118: Give a one-sentence justification for why these proteins were chosen to work with (easy to express, stable, etc).

      Done.

      (11) Page 5, line 167: "payload molecules".

      Done.

      (12) Page 5, lines 170-173: Word this more clearly- "full conversion with many of these methods is difficult on antibodies due to each heavy and light chain being modified separately, resulting in only a total yield of 66% DAR4 even when 90% of each chain is conjugated."

      I rephrased the section.

      (13) Page 8, line 290: Discuss other disadvantages of this method including difficulties purifying and in incorporating such a long sequence into proteins of interest.

      Product purification is shown in the new Figure 6. As stated above, I consider the simple purification process an advantage of the method. The genetic incorporation of the sequence into proteins is a routine process and should not pose any difficulties. The disadvantages of long linker sequences between fusion partners are now discussed (p.8 – 9, ln 300-302).

      (14) Page 10, line 341: 'The experiment is described and discussed in detail in a previously published paper.31"

      Done.

      Reviewer #2 (Recommendations for the authors):

      Minor Points:

      (1) It's unclear how the author derived 100% ligation rate with X = Proline in Figure 2 when there is still residual unligated UB-Strep at 96h. Please provide an expanded explanation for those not familiar with the protocol. Is the assumption made that there will be no UB-Strep if the assay was carried out beyond 96h?

      I clarified the figure legend. The assay shows the formation of an equilibrium between educts and products. Therefore, only ~50% Ub-Strep is used with X = Proline (see p. 2, ln 79 - 81). The "relative ligation rate" refers to the relative speed with which this equilibrium is established. The highest rate is seen with X = Proline, and it is set to 100%. The other rates are given relative to the product formation with X = Proline.

      (2) Though the qualitative depiction of the data in Figure 3 is appreciated, an accompanying graphical representation of the data in the same figure will greatly enhance reception and better comprehension of several of the author's conclusions.

      Graphs are now shown in Figure S1.

      (3) Figure 3 panel E is misaligned. Please align it with panel B above it.

      Done, thank you.

      (4) The author refers to 'The resulting circular assemblies (37% UB2...)' in the text but identifies it as UB-C2 in Figure 5B. Is this a mistake or does UB2 refer to another assembly not mentioned in the Figures? Please check for inconsistencies.

      All circular assemblies are now labeled Ub-C<sub>1-6</sub>.

      (5) Finishing with a graphical schematic that depicts the entire protocol in a simple image would be much appreciated and well-received by readers. Including the scheme with A and B proteins, the recognition linkers, the addition of connectase and BcPAP, etc. to the final resulting protein with connected linker.

      A graphical summary of the reaction is now included in Figure 6.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      In this manuscript, Fuchsberger et al. demonstrate a set of experiments that ultimately identifies the de novo synthesis of GluA1-, but not GluA2-containing Ca2+ permeable AMPA receptors as a key driver of dopamine-dependent LTP (DA-LTP) during conventional post-before-pre spike-timing dependent (t-LTD) induction. The authors further identify adenylate cyclase 1/8, cAMP, and PKA as the crucial mediators of these actions. While some comments have been identified below, the experiments presented are thorough and address the aims of the manuscript, figures are presented clearly (with minor comments), and experimental sample sizes and statistical analyses are suitable. Suitable controls have been utilized to confirm the role of Ca2+ permeable AMPAR. This work provides a valuable step forward built on convincing data toward understanding the underlying mechanisms of spike-timing-dependent plasticity and dopamine.

      Strengths:

      Appropriate controls were used.

      The flow of data presented is logical and easy to follow.

      The quality of the data, except for a few minor issues, is solid.

      Weaknesses:

      The drug treatment duration of anisomycin is longer than the standard 30-45 minute duration (as is the 500 µM vs 40 µM concentration) typically used in the field. Given the toxicity of these kinds of drugs long term, it's unclear why the authors used such a long and intense drug treatment.

      In an initial set of control experiments (Figure S1C-D) we wanted to ensure that protein synthesis was definitely blocked and therefore used a relatively high concentration of anisomycin and a relatively long pre-incubation period. We agree with the Reviewer that we cannot exclude the possibility that this treatment could compromise cell health in addition to the protein synthesis block. Therefore, we carried out an additional experiment with an alternative protein synthesis inhibitor, cycloheximide, at a lower standard concentration (10 µM), which confirmed a significant reduction in the puromycin signal (Figure S1A-B). Together these results support the conclusion that the puromycin signal is specific to protein synthesis in our labelling assay.

      Furthermore, in the electrophysiology experiments, we used 500 μM anisomycin in the patch pipette solution. Under these conditions, we recorded a stable EPSP baseline for 60 minutes, indicating that the treatment did not cause toxic effects to the cell (Figure S1F). This high concentration would ensure an effective block of local translation at dendritic sites. Nevertheless, we also carried out this experiment with cycloheximide at a lower standard concentration (10 µM) and observed a similar result with both protein synthesis inhibitors (Figure 1F).

      With some of the normalizations (such as those in S1) there are dramatic differences in the baseline "untreated" puromycin intensities - raising some questions about the overall health of slices used in the experiments.

      We agree with the Reviewer that there is a large variability in the normalised puromycin signal which might be due to variability in the health of slices. However, we assume that the same variability would be present in the treated slices, which showed, despite the variability, a significant inhibition of protein synthesis. To avoid any bias by excluding slices with low puromycin signal in the control condition, we present the full dataset.

      The large set of electrophysiology experiments carried out in our study (all recorded cells were evaluated for healthy resting membrane potential, action potential firing, and synaptic responses) confirmed that, generally, the vast majority of our slices were indeed healthy. 

      Reviewer #2 (Public Review):

      Summary:

      The aim was to identify the mechanisms that underlie a form of long-term potentiation (LTP) that requires the activation of dopamine (DA).

      Strengths:

      The authors have provided multiple lines of evidence that support their conclusions; namely that this pathway involves the activation of a cAMP / PKA pathway that leads to the insertion of calcium-permeable AMPA receptors.

      Weaknesses:

      Some of the experiments could have been conducted in a more convincing manner.

      We carried out additional control experiments and analyses to address the specific points that were raised.

      Reviewer #3 (Public Review):

      The manuscript of Fuchsberger et al. investigates the cellular mechanisms underlying dopamine-dependent long-term potentiation (DA-LTP) in mouse hippocampal CA1 neurons. The authors conducted a series of experiments to measure the effect of dopamine on the protein synthesis rate in hippocampal neurons and its role in enabling DA-LTP. The key results indicate that protein synthesis is increased in response to dopamine and neuronal activity in the pyramidal neurons of the CA1 hippocampal area, mediated via the activation of adenylate cyclases subtypes 1 and 8 (AC1/8) and the cAMP-dependent protein kinase (PKA) pathway. Additionally, the authors show that postsynaptic DA-induced increases in protein synthesis are required to express DA-LTP, while not required for conventional t-LTP.

      The increased expression of the newly synthesized GluA1 receptor subunit in response to DA supports the formation of homomeric calcium-permeable AMPA receptors (CP-AMPARs). This evidence aligns well with data showing that DA-LTP expression requires the GluA1 AMPA subunit and CP-AMPARs, as DA-LTP is absent in the hippocampus of a GluA1 genetic knock-out mouse model. Overall, the study is solid, and the evidence provided is compelling. The authors clearly and concisely explain the research objectives, methodologies, and findings. The study is scientifically robust, and the writing is engaging. The authors' conclusions and interpretation of the results are insightful and align well with the literature. The discussion effectively places the findings in a meaningful context, highlighting a possible mechanism for dopamine's role in the modulation of protein-synthesis-dependent hippocampal synaptic plasticity and its implications for the field. Although the study expands on previous works from the same laboratory, the findings are novel and provide valuable insights into the dynamics governing hippocampal synaptic plasticity.

      The claim that GluA1 homomeric CP-AMPA receptors mediate the expression of DA-LTP is fascinating, and although the electrophysiology data on GluA1 knock-out mice are convincing, more evidence is needed to support this hypothesis. Western blotting provides useful information on the expression level of GluA1, which is not necessarily associated with cell surface expression of GluA1 and therefore CP-AMPARs. Validating this hypothesis by localizing the protein using immunofluorescence and confocal microscopy detection could strengthen the claim. The authors should briefly discuss the limitations of the study.

      Although it would be possible to quantify the surface expression of GluA1 using immunofluorescence, it would not be possible to distinguish between GluA1 homomers and GluA1-containing heteromers. It would therefore not be informative as to whether these are indeed CP-AMPARs. This is an interesting problem, which we have briefly discussed in the Discussion section.

      Additional comments to address:

      (1) In Figure 2A, the representative image with PMY alone shows a very weak PMY signal. Consequently, the image with TTX alone seems to potentiate the PMY signal, suggesting a counterintuitive increase in protein synthesis.

      We agree with the Reviewer that the original image was not representative and have replaced it with a more representative image.

      (2) In Figures 3A-B, the Western blotting representative images have poor quality, especially regarding GluA1 and α-actin in Figure 3A. The quantification graph (Figure 3B) raises some concerns about a potential outlier in both the DA alone and DA+CHX groups. The authors should consider running a statistical test to detect outlier data. Full blot images, including ladder lines, should be added to the supplementary data.

      We have replaced the western blot image in Figure 3A and have also presented full blot images including ladder lines in supplementary Figure S3.

      Using the ROUT method (Q=1%) we identified one outlier in the DA+CHX group of the western blot quantification. The quantification for this blot was then removed from the dataset and the experiment was repeated to ensure a sufficient number of repeats.
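For readers unfamiliar with the ROUT procedure, the sketch below illustrates the idea behind a ROUT-style outlier test on a single sample in Python. It is our simplified illustration only, not GraphPad Prism's implementation: the robust centre (median), the normal approximation in place of the t-distribution, and the step-down FDR loop are assumptions made for brevity.

```python
import math
import statistics

def rout_outliers(values, q=0.01):
    """Simplified, single-sample sketch of a ROUT-style outlier test:
    robust centre and robust SD, then a false-discovery-rate cut at rate q."""
    n = len(values)
    center = statistics.median(values)
    abs_resid = sorted(abs(v - center) for v in values)
    # Robust SD estimate: 68.27th percentile of the absolute residuals
    # (for Gaussian data this approximates the SD without being inflated
    # by outliers), obtained by linear interpolation.
    rank = 0.6827 * (n - 1)
    lo, hi = math.floor(rank), math.ceil(rank)
    frac = rank - lo
    rsdr = abs_resid[lo] * (1 - frac) + abs_resid[hi] * frac
    # Two-tailed p-value for each point (normal approximation; rsdr == 0
    # would mean near-identical values, so treat any deviation as extreme).
    scored = []
    for i, v in enumerate(values):
        z = abs(v - center) / rsdr if rsdr > 0 else math.inf
        scored.append((math.erfc(z / math.sqrt(2)), i))
    # Step-down FDR at rate q: walk p-values from smallest up, flagging
    # until one fails its threshold.
    flagged = set()
    for k, (p, i) in enumerate(sorted(scored), start=1):
        if p < q * k / n:
            flagged.add(i)
        else:
            break
    return flagged
```

On western-blot quantifications, such a test flags only points whose residuals are implausibly large relative to the robust scatter of the group, at the chosen false discovery rate Q.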

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

(1) As the authors perform these experiments with puromycin, these are puromycylation experiments, not SuNSET. The SuNSET protocol (surface sensing of translation) specifically refers to the detection of newly synthesized proteins externally at the plasma membrane. I'd advise updating the terminology used.

      We thank the Reviewer for pointing this out. We have updated this to ‘puromycin-based labelling assay’.

      (2) The legend presented in Figure 2F suggests WT is green and ACKO is orange, however, in Figure 2G the WT LTP trace is orange, consider changing this to green for consistency.

      We thank the Reviewer for this suggestion and agree that a matching colour scheme makes the Figure clearer. This has been updated.

      (3) In the results section, it is recommended to include units for the values presented at the first instance and only again when the units change thereafter.

      The units of the electrophysiology data were [%], this is included in the Results section. Results of western blots and IHC images were presented as [a.u.]. While we included this in the Figures, we have not specifically added this to the text of individual results. 

(4) Two hours of pre-treatment with anisomycin vs 30 minutes of pre-treatment with cycloheximide seems hard to directly compare, as the pharmacokinetics of translational inhibition should be similar for both drugs. What was the rationale for the extremely long anisomycin pretreatment? What controls were taken to assess slice health either prior to or following fixation? This is relevant to point (5) below.

In an initial set of control experiments (Figure S1C-D) we wanted to ensure that protein synthesis was definitely blocked and therefore used a relatively high concentration of anisomycin and a relatively long pre-incubation period. We agree with the Reviewer that we cannot exclude the possibility that this treatment could compromise cell health in addition to the protein synthesis block. Therefore, we carried out an additional experiment with an alternative protein synthesis inhibitor, cycloheximide, at a lower standard concentration (10 µM), which confirmed a significant reduction in the puromycin signal (Figure S1A-B). Together these results support the conclusion that the puromycin signal is specific to protein synthesis in our labelling assay.

      IHC slices were visually assessed for health. The large set of electrophysiology experiments carried out in our study (all recorded cells were evaluated for healthy resting membrane potential, action potential firing, and synaptic responses) also confirmed that, generally, the vast majority of our slices were indeed healthy. 

      (5) In Supplementary Figure 1, there is a dramatic difference in the a.u. intensities across CHX (B) and AM (D), please explain the reason for this. It is understood these are normalised values to nuclear staining, please clarify if this is a nuclear area.

      We agree with the Reviewer that there is a large variability in normalised puromycin signal which may be due to variability in the health of the slices. However, we assume that the same variability would be present in the treated slices, which showed, despite the variability, a significant effect of protein synthesis inhibition. To prevent introducing bias by excluding slices with low puromycin signal in the control condition, we present the full dataset.

The CA1 region of the hippocampus consists of a dense layer of neuronal somata (the pyramidal cell layer). We normalized against the nuclear area as it provides a reliable estimate of the number of neurons present in the image. This approach minimizes bias by accounting for variation in the number of neurons within the visual field, ensuring consistency and accuracy in our analysis.

      (6) Please clarify the decision to average both the last 5 minutes of baseline recordings and the last 5 minutes of the recording for the normalisation of EPSP slopes.

      The baseline usually stabilises after a few minutes of recording, thus the last 5 minutes were used for baseline measurement, which are the most relevant datapoints to compare synaptic weight change to. After induction of STDP, potentiation or depression of synaptic weights develops gradually. Based on previous results, evaluating the EPSP slopes at 30-40 minutes after the induction protocol gives a reliable estimate of the amount of plasticity.
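For concreteness, the convention described above can be sketched as follows. This is our illustration of the stated windows, not the authors' analysis code; the function name and the assumption that times are in minutes relative to induction are ours.

```python
import statistics

def plasticity_percent(times, slopes, baseline_window=(-5, 0), test_window=(30, 40)):
    """Normalise EPSP slopes as described in the text: average the last
    5 min of baseline (t in [-5, 0) min, induction at t = 0) and express
    the 30-40 min post-induction average as a percentage of it."""
    base = [s for t, s in zip(times, slopes)
            if baseline_window[0] <= t < baseline_window[1]]
    test = [s for t, s in zip(times, slopes)
            if test_window[0] <= t < test_window[1]]
    return 100.0 * statistics.fmean(test) / statistics.fmean(base)
```

A value of 100 % then means no change in synaptic weight, values above 100 % potentiation, and values below 100 % depression.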

      Reviewer #2 (Recommendations For The Authors):

      The concentration of anisomycin used (0.5 mM) is very high.

As described above, in an initial set of control experiments (Figure S1C-D) we wanted to ensure that protein synthesis was definitely blocked and therefore used a relatively high concentration of anisomycin and a relatively long pre-incubation period. We agree with the Reviewer that this is higher than the standard concentration used for this drug and we cannot exclude the possibility that this treatment could compromise cell health in addition to the protein synthesis block. Therefore, we carried out an additional experiment with an alternative protein synthesis inhibitor, cycloheximide, at a lower standard concentration (10 µM), which confirmed a significant reduction in the puromycin signal (Figure S1A-B). Together these results support the conclusion that the puromycin signal is specific to protein synthesis in our labelling assay.

      Furthermore, in the electrophysiology experiments, we also used 500 µM anisomycin in the patch pipette solution. Under these conditions, we recorded a stable EPSP baseline for 60 minutes, indicating that the treatment did not cause toxic effects to the cell (Figure S1F). This high concentration would ensure an effective block of local translation at dendritic sites. Nevertheless, we also carried out this experiment with cycloheximide at a lower standard concentration (10 µM) and observed a similar result with both protein synthesis inhibitors (Figure 1F).

      The authors conclude that the effect of DA is mediated via D1/5 receptors, which based on previous work seems likely. But they cannot conclude this from their current study which used a combination of a D1/D5 and a D2 antagonist.

      We thank the Reviewer for pointing this out. We agree and have updated this in the Discussion section to ‘dopamine receptors’, without specifying subtypes.

      There is no mention that I can see that the KO experiments were conducted in a blinded manner (which I believe should be standard practice). Did they verify the KOs using Westerns?

      Only a subset of the experiments was conducted in a blinded manner. However, the results were collected by two independent experimenters, who both observed significant effects in KO mice compared to WTs (TF and ZB).

      We received the DKO mice from a former collaborator, who verified expression levels of the KO mice (Wang et al., 2003). We verified DKO upon arrival in our facility using genotyping.

      Maybe I'm misunderstanding but it appears to me that in Figure 1F there is LTP prior to the addition of DA. (The first point after pairing is already elevated). I think the control of pairing without DA should be added.

We thank the Reviewer for pointing this out. Based on previous results (Brzosko et al., 2015) we would expect potentiation to develop over time once DA is added after pairing; however, it indeed appears in the Figure here as if there was an immediate increase in synaptic weights after pairing. It should be noted, however, that when comparing the first 5 minutes after pairing to the baseline, this increase was not significant (t(9)=1.810, p=0.1037). Nevertheless, we rechecked our data and noticed that this initial potentiation was biased by one cell with an increasing baseline, which had both the test and control pathway strongly elevated. We had mistakenly included this cell in the dataset, despite the unstable conditions (as stated in the Methods section, the unpaired control pathway served as a stability control). We apologise for the error; this has now been corrected (Figure 1F). In addition, we present the control pathway in Figure S1G and I.

      We have also now included the control for post-before-pre pairing (Δt = -20 ms) without dopamine in a supplemental figure (Figure S1E and F).

      The Westerns (Figure 3A) are fairly messy. Also, it is better to quantify with total protein. Surface biotinylation of GluA1 and GluA2 would be more informative.

      We carried out more repeats of Western blots and have exchanged blots in Figure 3A.

      We observed that DA increases protein synthesis, we therefore cannot exclude the possibility that application of DA could also affect total protein levels. Thus quantifying with total protein may not be the best choice here. Quantification with actin is standard practice.

      While we agree with the Reviewer that surface biotinylation of GluA1 and GluA2 could in principle be more informative, we do not think it would work well in our experimental setup using acute slice preparation, as it strictly requires intact cells. Slicing generates damaged cells, which would take up the surface biotin reagents. This would cause unspecific biotinylation of the damaged cells, leading to a strong background signal in the assay.

      In Figure 4 panels D and E the baselines are increasing substantially prior to induction. I appreciate that long stable baselines with timing-dependent plasticity may not be possible but it's hard to conclude what happened tens of minutes later when the baseline only appears stable for a minute or two. Panels A and B show that relatively stable baselines are achievable.

We agree with the Reviewer that the baselines are increasing; however, when looking at the baseline for the 5 minutes prior to induction (the last 5 datapoints of the baseline), which is what we used for quantification, the baselines appeared stable. Unfortunately, longer baselines are not suitable for timing-dependent plasticity. In addition, all experiments were carried out with a control pathway, which showed stable conditions throughout the recording.

      In general, the discussion could be better integrated with the current literature. Their experiments are in line with a substantial body of literature that has identified two forms of LTP, based on these signalling cascades, using more conventional induction patterns.

      We thank the Reviewer for this suggestion and have added more discussion of the two forms of LTP in the Discussion section.

      It would be helpful to include the drug concentrations when first described in the results.

Drug concentrations have now been included in the Results section.

      It is now more common to include absolute t values (not just <0.05 etc).

      While we indicate significance in Figures using asterisks when p values are below the indicated significance levels, we report absolute values of p and t values in the Results section.

      Similarly full blots should be added to an appendix / made available.

      We have now included full blot images in Supplementary Figure S3.

      A 30% tolerance for series resistance seems generous to me. (10-20% would be more typical).

      We thank the Reviewer for their suggestion, and will keep this in mind for future studies. However, the error introduced by the higher tolerance level is likely to be small and would not influence any of the qualitative conclusions of the manuscript.

      Whereas series resistance is of course extremely important in voltage-clamp experiments, changes in series resistance would be less of a concern in current-clamp recordings of synaptic events. We use the amplifier as a voltage follower, and there are two problems with changes in the electrode, or access, resistance. First, there is the voltage drop across the electrode resistance. Clearly this error is zero if no current is injected and is also negligible for the currents we use in our experiments to maintain the membrane voltage at -70 mV. For example, the voltage drop would be 0.2 mV for 20 pA current through a typical 10 MOhm electrode resistance, and a change in resistance of 30% would give less than 0.1 mV voltage change even if the resistance were not compensated. The second problem is distortion of the EPSP shape due to the low-pass filtering properties of the electrode set up by the pipette capacitance and series resistance (RC). This can be a significant problem for fast events, such as action potentials, but less of a problem for the relatively slow EPSPs recorded in pyramidal cells. Nevertheless, we take on board the advice provided by the Reviewer and will use the conventional tolerance of 20% in future experiments.
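The voltage-drop figures quoted above follow directly from Ohm's law; the snippet below simply reproduces that back-of-the-envelope arithmetic with the values given in the text.

```python
# Ohm's-law estimate of the voltage error from uncompensated electrode
# (access) resistance in current clamp, using the figures from the text.
I_hold = 20e-12      # holding current: 20 pA
R_el = 10e6          # electrode resistance: 10 MOhm

v_drop = I_hold * R_el               # baseline voltage drop across the electrode
v_extra = I_hold * (0.30 * R_el)     # additional drop if R_el drifts by 30 %

print(f"baseline drop: {v_drop * 1e3:.2f} mV")          # 0.20 mV
print(f"drop from 30 % drift: {v_extra * 1e3:.2f} mV")  # 0.06 mV, i.e. < 0.1 mV
```

Even with a fully uncompensated 30 % change in electrode resistance, the error stays well below 0.1 mV, consistent with the argument above.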

      Reviewer #3 (Recommendations For The Authors):

      In the references, the entry for Burnashev N et al. has a different font size. Please ensure that all references are formatted consistently.

      We thank the Reviewer for spotting this and have updated the font size of this reference.

1. 16.2.1. Crowdsourcing Platforms# Some online platforms are specifically created for crowdsourcing. For example: Wikipedia [p12]: Is an online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don't get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages [p13]), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute. Quora [p14]: A crowdsourced question-and-answer site. Stack Overflow [p15]: A crowdsourced question-and-answer site specifically for programming questions. Amazon Mechanical Turk [p16]: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid). Upwork [p17]: A site that lets people find and contract work with freelancers (generally larger and more specialized tasks than Amazon Mechanical Turk). Project Sidewalk [p18]: Crowdsourcing sidewalk information for mobility needs (e.g., wheelchair users).

Crowdsourcing is actually used in many parts of life: roadside interviews and company product questionnaires, for example, are widely used forms of crowdsourcing. It's just that the information uploaded to sites such as Wikipedia needs to be as rigorous as possible, so crowdsourcing on the web needs to be more strictly scrutinized. Of course, most resource sites are open to error correction, and try to keep the information accurate under the scrutiny of many Internet users. At the same time, I believe that when more important information is posted on the web, the website should require the person behind the information to provide real information, to ensure order on the web and provide legal protection for users.

1. 16.1. Crowdsourcing Definition# When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren't necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves. When a crowd is providing financial contributions, that is called crowdfunding (e.g., patreon [p1], kickstarter [p2], gofundme [p3]). Humans have always collaborated on tasks, and crowds have been enlisted in performing tasks long before the internet existed [p4]. What social media (and other internet systems) have done is expand the options for how people can collaborate on tasks. 16.1.1. Different Ways of Collaborating and Communicating# There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots [p5]. But there are ways that attempts to recreate in-person interactions inevitably fall short and don't feel the same. Instead though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There [p6] (pdf here [p7]). Some of the different characteristics that means of communication can have include (but are not limited to): Location: Some forms of communication require you to be physically close, some allow you to be located anywhere with an internet signal. Time delay: Some forms of communication are almost instantaneous, some have small delays (you might see this on a video chat system), or have significant delays (like shipping a package). Synchronicity: Some forms of communication require both participants to communicate at the same time (e.g., video chat), while others allow the person to respond when convenient (like a mailed physical letter). Archiving: Some forms of communication automatically produce an archive of the communication (like a chat message history), while others do not (like an in-person conversation). Anonymity: Some forms of communication make anonymity nearly impossible (like an in-person conversation), while others make it easy to remain anonymous. Audience: Communication could be private or public, and could be one-way (no ability to reply), or two+-way where others can respond. Because of these (and other) differences, different forms of communication might be preferable for different tasks. For example, you might send an email to the person sitting next to you at work if you want to keep an archive of the communication (which is also conveniently grouped into email threads). Or you might send a text message to the person sitting next to you if you are criticizing the teacher, but want to do so discreetly, so the teacher doesn't notice. These different forms of communication can then support different methods of crowdsourcing. 16.1.2. Learn More# If you want to learn more about crowdsourcing, you can look at the research from the ACM Conference On Computer-Supported Cooperative Work And Social Computing [p8]. For example, you can see: Best paper awards from 2022 [p9] Best paper awards from 2021 [p10] Best paper awards from 2020 [p11]

      I was extremely struck by how much we depend on crowdsourcing without even realizing it, as demonstrated by Wikipedia, where people from all around the world contribute little things that add up to something enormous. Since I've texted classmates who sat just next to me in class when I needed to discuss something private without the professor noticing, I could also relate to the section about various communication methods. It's fascinating how the internet has altered not only what we communicate but also how we choose to do it depending on factors like convenience, privacy, and timing. It makes me question if internet communication will ever be as good as face-to-face contact or if we should just be grateful for the special things it allows us to do.

1. - The book argues regenerative agriculture could feed the U.S. – while being better for the long-term environment (p. 232): 'Do we have the land for it? Diana consulted with a few experts to run the numbers, including Dr. Allen Williams, an ecosystem and soil health consultant, farmer, and former agriculture professor ... [List of many other people consulted] ... One critical piece of information to keep in mind is to remember that we're comparing industrial monocropping to regenerative agriculture, which have drastically different impacts on the land. Even though it takes more land to produce well-managed grass-finished beef, it could be argued that the regenerative solution is a smarter one for our future than the chemical one. At a recent conference about grass-fed beef, Rowntree said in his presentation, "I'd rather have 2.5 acres of regenerative agriculture than 1 acre of extractive agriculture." [And that regenerative agriculture leads to more utilisable land] 'Let's dive into what sort of acres we'd need in the US to finish all our beef herd on grass… the numbers are rough and could certainly be challenged, but… If we look at the current amount of idle grassland, underutilized pasture, and cropland that would be freed up from grain production in an all-grass-finished scenario, the short answer to our question is yes. We do have the land to finish all our current beef cattle on pasture in the US.' 'If we are now grass-finishing all beef cattle produced annually in the US, we can reduce the ninety to ninety-four million acres of corn planted. Approximately 36–40 percent of today's corn crop actually goes into livestock feed (cattle, pigs, and chickens)…' 'If we take just fifteen million acres of cornfields and consider these productive (after all, they once were thriving grasslands), each of them can finish 1.25 steers per acre. Altogether, these acres finish 18.75 million cattle. In addition to converting some of our corn acres back to grassland, there are over five hundred million acres of privately owned pastureland in the US, and many experts we've spoken to estimate it's only being utilized at 30 percent capacity. This leaves enormous potential for better grazing management.' 'And again, these acres will be a net gain to our agricultural land because they would be beneficial to our ecosystems instead of destroying them… soil, water cycles, and mineral cycles, and more wildlife.'

Sure, but can we really economically scale regenerative agriculture to the millions of acres required to reap its benefits while maintaining high animal welfare? I'd like to better understand how this would work in practice, given how intensively cows are farmed in factory farms in order to meet consumer demand. And what about chickens and pigs?

    1. Author response:

      The following is the authors’ response to the current reviews.

      Reviewer #1 (Public review):

      The authors have strengthened their conclusions by providing additional information about the specificity of their antibodies, but at the same time the authors have revealed concerning information about the source of their antibodies.

It appears that many of the antibodies used in this study have been discontinued because the supplier company was involved in a scandal of animal cruelty and all their goats and rabbits used for antibody production were sacrificed. The authors acknowledge that this is unfortunate, but they also claim that the issue is out of their hands.

The authors' statement is false; the authors ought not to use these antibodies, just as the providing company chose to discontinue them, as those antibodies are tied to animal cruelty. That the authors feel comfortable using them is of concern. In short, please remove any results from unethical antibodies.

Removal of such results also best serves science. That is, any results obtained with the discontinued antibodies are non-reproducible, and we should be striving to publish good, reproducible science.

      For the antibodies that do not have unethical origins the authors claim that their antibodies have been appropriately validated, by "testing in positive control tissue and/or Western blot or in situ hybridization". This is good but needs to be expanded upon. It is a strong selling point that the Abs are validated and I want to see additional information in their Supplementary Table 2 stating for each Ab specifically:

(1) What +ve control tissue was used in the validation of each Ab, and which species that +ve control came from. Likewise, if competition assays were used to confirm validity, please also specify.

(2) Which assay the Ab was validated for (WB, IHC, ELISA, all, etc.).

(3) For antibodies that were validated for, or using, WBs, please let the reader know whether additional bands were visible.

      (4) Include references to the literature that supports these validations. That is, please make it easy for the reader to appreciate the hard work that went into the validation of the Antibodies.

Finally, for the Abs, when the authors write that "All antibodies used have been validated by testing in positive control tissue and/or Western blot or in situ hybridization", I fail to understand what in situ hybridisation means in this context. I am under the impression that in situ hybridisation involves a nucleic acid probe hybridising to an organ or tissue slice, not polypeptide binding.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Remove results that have been obtained by unethically-sourced antibody reagents.

      Strengthen the readers' confidence about the appropriateness & validity of your antibodies.

First, we want to stress that reviewer 1 has raised his critique related to the use of antibodies from Santa Cruz Biotechnology not only through the journal. The head of our department and two others were contacted by reviewer 1 directly, without going through the journal or informing/approaching the corresponding or first author. It is our opinion that this debate and critique should be handled through the journal and editorial office, and not with people without actual involvement in the project.

      It is correct that we have purchased antibodies from Santa Cruz Biotechnologies both mouse, rabbit and goat antibodies as stated in the correspondence with the reviewer.

      As stated in our previous rebuttal – the goat antibodies from Santa Cruz were discontinued due to inadequate treatment of goats after settling with the authorities in 2016.

      https://www.nature.com/articles/nature.2016.19411

      https://www.science.org/content/blog-post/trouble-santa-cruz-biotechnology

We have used 11 mouse, rabbit or goat antibodies from Santa Cruz Biotechnology in the manuscript, as listed in supplementary table 2, and all of them have been carefully validated in other control tissues supported by ISH and/or WB, and many of them have already been used in several publications by our group (https://pubmed.ncbi.nlm.nih.gov/34612843/, https://pubmed.ncbi.nlm.nih.gov/33893301/, https://pubmed.ncbi.nlm.nih.gov/32931047/, https://pubmed.ncbi.nlm.nih.gov/32729975/, https://pubmed.ncbi.nlm.nih.gov/30965119/, https://pubmed.ncbi.nlm.nih.gov/29029242/, https://pubmed.ncbi.nlm.nih.gov/23850520/, https://pubmed.ncbi.nlm.nih.gov/23097629/, https://pubmed.ncbi.nlm.nih.gov/22404291/, https://pubmed.ncbi.nlm.nih.gov/20362668/, https://pubmed.ncbi.nlm.nih.gov/20172873/) and by other research groups. All antibodies used in this manuscript were purchased before the mistreatment of goats, which became evident only several years later, was publicly known.

We do not support animal cruelty in any way, but the purchase of antibodies from Santa Cruz Biotechnology was conducted long before the mistreatment was reported. Moreover, antibodies from Santa Cruz Biotechnology are being used in thousands of publications annually. The company has been punished for its misconduct, and was subsequently granted permission by the relevant authorities to produce antibodies again.


      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Despite the study being a collation of important results likely to have an overall positive effect on the field, methodological weaknesses and suboptimal use of statistics make it difficult to give confidence to the study's message.

      Strengths:

      Relevant human and mouse models approached with in vivo and in vitro techniques.

      Weaknesses:

      The methodology, statistics, reagents, analyses, and manuscripts' language all lack rigour.

      (1) The authors used statistics to generate P-values and Rsquare values to evaluate the strength of their findings.

However, it is unclear how stats were used and/or whether stats were used correctly. For instance, the authors write: "Gaussian distribution of all numerical variables was evaluated by QQ plots". But why? For statistical tests that fall under the umbrella of General Linear Models (like ANOVA, t-tests, and correlations (Pearson's)), there are several assumptions that ought to be checked, including typically:

      (a) Gaussian distribution of residuals.

      (b) Homoskedasticity of the residuals.

      (c) Independence of Y, but that's assumed to be valid due to experimental design.

      So what is the point of evaluating the Gaussian distribution of the data themselves? It is not necessary. In this reviewer's opinion, it is irrelevant, not a good use of statistics, and we ought to be leading by example here.

Additionally, it is not clear whether the homoscedasticity of the residuals was checked. Many of the data appear to have particularly heteroskedastic residuals. In many respects, homoscedasticity matters more than the normal distribution of the residuals. In GraphPad analyses, if ANOVA is used with equal variances assumed when variances among groups are in fact unequal, then the standard deviations assigned to each group will be wrong and incorrect p-values will be calculated.

      Based on the incomplete and/or wrong statistical analyses it is difficult to evaluate the study in greater depth.

We agree with the reviewer that we should lead by example and improve clarity on the use of the different statistical tests and their application. In response to the reviewer's suggestion, we have extended the statistical section, focusing on the analyses used. Additionally, we have specified the statistical test used in the figure legends for each figure. We did check for Gaussian distribution and homoskedasticity of residuals before conducting a general linear model test, and this has now been specified in the revised manuscript. If the assumptions were not met, we specified which non-parametric test was used.

      While on the subject of stats, it is worth mentioning this misuse of statistics in Figure 3D, where the authors added the Slc34a1 transcript levels from controls in the correlation analyses, thereby driving the intercept down. Without the Control data there does not appear to be a correlation between the Slc34a1 levels and tumor size.

      We agree with the reviewer that a correlation analysis is inappropriate here and have removed this part of the figure.

There is more. The authors make statements (e.g. in the figure legends): "Correlations indicated by R2." What does that mean? In a simple correlation, the P value is used to evaluate the strength of the slope being different from zero. The authors also give R2 values for the correlations, but they do not provide R2 values for the other stats (like ANOVAs). Why not?

      We agree with the reviewer and have replaced the R2 values with the Pearson correlation coefficient in combination with the P value.
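      For readers who want to reproduce this kind of reporting, the Pearson coefficient and its P value can be obtained together in one call. The paired values below are made-up illustrations, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (e.g., transcript level vs. tumor size).
x = np.array([1.2, 2.4, 3.1, 4.8, 5.5, 6.9, 8.0])
y = np.array([2.0, 2.9, 3.8, 5.1, 5.9, 7.2, 8.4])

# Report r together with its P value, rather than R^2 alone.
r, p = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```

      Reporting r with its P value conveys both the direction of the association and whether the slope differs from zero, which R2 by itself does not.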

      (2) The authors used antibodies for immunos and WBs. I checked those antibodies online and it was concerning:

      (a) Many are discontinued.

      Many of the antibodies we have used were from the major antibody provider Santa Cruz Biotechnology (SCBT). SCBT was involved in an animal cruelty scandal and all their goats and rabbits were sacrificed, which explains why several antibodies were discontinued, while the mouse antibodies remained available. This is unfortunate but out of our hands.

      (b) Many are not validated.

      We agree with the reviewer that antibody validation is essential. All antibodies used in this manuscript have been validated. The minimal validation has been to evaluate cellular expression in positive control tissue, for instance bone, kidney, or mammary tissue. Moreover, many of the antibodies have been used and validated in previous publications (doi: 10.1593/neo.121164, doi:10.1096/fj.202000061RR, doi: 10.1093/cvr/cvv187), including knockout models. Many antibodies, but not all, have also been validated by western blot or in situ hybridization. We have included the following in the Materials and Methods section: “All antibodies used have been validated by testing in positive control tissue and/or Western blot or in situ hybridization”.

      (c) Many performed poorly in the Immunos, e.g. FGF23, FGFR1, and Klotho are not really convincing. POU5F1 (gene: OCT4) is the one that looks convincing, as it is expressed in the correct cell types.

      We fail to understand the criticism raised by the reviewer regarding the specificity of these specific antibodies. We believe the FGF23 and Klotho antibodies are performing exceptionally well, and FGFR1 is abundantly expressed in many cell types in the testis. As illustrated in Figure 2E, the expression of Klotho, FGF23, and FGFR1 is very clear, specific, and convincing. FGF23 is not expressed in normal testis – which is in accordance with no RNA present there either. However, it is abundantly expressed in GCNIS where RNA is present. On the other hand, Klotho is abundantly expressed in germ cells from normal testis but not expressed in GCNIS.

      (d) Others like NPT2A (product of gene SLC34A1) are equally unconvincing. Shouldn't the immuno show them to be in the plasma membrane?

      If there is some brown staining, this does not mean the antibodies are working. If your antibodies are not validated then you ought to omit the immunos from the manuscript.

      We acknowledge your concerns regarding the NPT2A, NPT2B, and NPT2C staining. While the NPT2A antibody is performing well, we understand your reservations about the other antibodies. It's worth noting that NPT2A is not expressed in normal testis (no RNA either) but is expressed in GCNIS where the RNA is also present. Although it is typically present in the plasma membrane, cytoplasmic expression can be acceptable as membrane availability is crucial for regulating NPT2A function, particularly in the kidney where FGF23 controls membrane availability. We are currently involved in a comprehensive study exploring these phosphate transporters in the organs lining the male reproductive tract. In functional animal models, we have observed very specific staining with this NPT2A antibody following exposure to high phosphate or FGF23. Additionally, we are conducting Western blot analyses with this antibody, which reinforces our belief that the antibody binds specifically.

      Reviewer #2 (Public Review):

      Summary:

      This study set out to examine microlithiasis associated with an increased risk of testicular germ cell tumors (TGCT). This reviewer considers this to be an excellent study. It raises questions regarding exactly how aberrant Sertoli cell function could induce osteogenic-like differentiation of germ cells but then all research should raise more questions than it answers.

      Strengths:

      Data showing the link between a disruption in testicular mineral (phosphate)homeostasis, FGF23 expression, and Sertoli cell dysfunction, are compelling.

      Weaknesses:

      Not sure I see any weaknesses here, as this study advances this area of inquiry and ends with a hypothesis for future testing.

      We thank the reviewer for the acknowledgment and highlighting that this is an important message that addresses several ways to develop testicular microlithiasis, which indicates that it is not only due to malignant disease but also frequent in benign conditions.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      I applaud the authors' approach to nomenclature for rodent and human genes and proteins (italicised for genes, all caps for humans, capitalised only for rodents, etc), but the authors frequently got it wrong when referring to genes or proteins. A couple of examples include:

      (1) SLC34A1 (italics) refers to the gene (correct use by the authors) but then again the authors use e.g. SLC34A1 (not italics) to refer to the protein product of the SLC34A1 (italics) gene. In fact, the protein product of the SLC34A1 (italics) gene is called NPT2A (non-italics).

      (2) OCT4 (italics) refers to the gene (correct use by the authors) but then again the authors use e.g. OCT4 (not italics) to refer to the protein product of the OCT4 (italics) gene. In fact, the protein product of the OCT4 (italics) gene is called POU5F1 (non-italics).

      The problem with their incorrect and inconsistent nomenclature is widespread in the manuscript making further evaluation difficult.

      Please consult a reliable protein-based database like Uniprot to derive the correct protein names for the genes. You got NANOG correct though.

      We thank the reviewer for addressing this important point. We have corrected the nomenclature throughout the manuscript as suggested.

      (3) The authors use the word "may" too many times. Also often in conjunction with words like "indicates", and "suggests". Examples of phrases that reflect that the authors lack confidence in their own results, conclusions, and understanding of the literature are:

      "...which could indicate that the bone-specific RUNX2 isoform may also be expressed... "

      "...which indicates that the mature bone may have been..."

      Are we shielding ourselves from being wrong in the future because "may" also means "may not"? It is far more engaging to read statements that have a bit more tooth to them, and some assertion too. How about turning the above statements around, to :

      "...which shows that the bone-specific RUNX2 isoform is also expressed... "

      "...which reveals that the mature bone was..."

      ...then revisit ambiguous language ("may", "might" "possibly", "could", "indicate" etc.) throughout the manuscript?

      It's OK to make a statement and be found wrong in the future. Being wrong is integral to Science.

      Thank you for addressing this. We agree with the reviewer that it is fair to be more direct and have revised many of these vague phrases throughout the manuscript.

      (4) The authors use the word "transporter" which in itself is confusing. For instance, is SLC34A1 an importer or an exporter of phosphate? Or both? Do SLC34As move phosphate in or out of the cells or cellular compartments? "Transporter" sounds too vague a word.

      We understand that the term "importer" might be easier for the reader. However, we should use the specific nomenclature or "wording" that applies to these transporters. The exact terminology is co-transporter or sodium-dependent phosphate cotransporter, as reported here (doi: 10.1152/physrev.00008.2019). Thus, we will use the terms “co-transporter” and “transporter” throughout the revised manuscript.

    1. What’s Mary Shelley up to then? Her monster doesn’t carry the specific historical baggage of a Jake Barnes, so what does his deformity represent? Let’s look at where he comes from. Victor Frankenstein builds his spare-parts masterpiece not only out of a graveyard but also out of a specific historical situation. The industrial revolution was just starting up, and this new world would threaten everything people had known during the Enlightenment; at the same time, the new science and the new faith in science – including anatomical research, of course – imperiled many religious and philosophical tenets of English society in the first decades of the nineteenth century. Thanks to Hollywood, the monster looks like Boris Karloff or Lon Chaney and intimidates us by its sheer physical menace. But in the novel it’s the idea of the monster that is frightening, or perhaps it’s really the idea of the man, the scientist-sorcerer, forging an unholy alliance with dark knowledge that scares us. The monster represents, among other things, forbidden insights, a modern pact with the devil, the result of science without ethics. You don’t need me to tell you this, naturally. Every time there’s an advance in the state of knowledge, a movement into a brave new world (another literary reference, of course), some commentator or other informs us that we’re closer to meeting a Frankenstein (meaning, of course, the monster).

      IM GEEKING OUTT!!! I freaking love Frankenstein don't even get me started.... BUT the commentary here is straight facts, especially about what the monster is representative of, is literally sticking to my soul. I wish Foster would have expanded a bit more on how the industrial revolution affects this story, but I digress... STILL i'm geeking. The monster represents everything, it represents that human fear of the unknown and at the same time, human emotion and all of its monstrous parts.

    1. boys reclaimed a sense of mastery, indeed masculinity itself, through the control of technology

      It's interesting to hear about this perception of masculinity, especially when comparing it to the modern understanding of masculinity. I don't think that most people today would describe technological proficiency as 'masculine', but it just shows how values have shifted over time. I also believe they're talking more about masculinity in reference to the scientific world, but still.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      DiPeso et al. develop two tools to (i) classify micronucleated (MN) cells, which they call VCS MN, and (ii) segment micronuclei and nuclei with MMFinder. They then use these tools to identify transcriptional changes in MN cells.

      The strengths of this study are:

      (1) Developing highly specialized tools to speed up the analysis of specific cellular phenomena such as MN formation and rupture is likely valuable to the community and neglected by developers of more generalist methods.

      (2) A lot of work and ideas have gone into this manuscript. It is clearly a valuable contribution.

      (3) Combining automated analysis, single-cell labeling, and cell sorting is an exciting approach to enrich phenotypes of interest, which the authors demonstrate here.

      Weaknesses:

      (1) Images and ground truth labels are not shared for others to develop potentially better analysis methods.

      We regret this omission and thank the reviewer for pointing it out. Both the images and ground truth labels for VCS MN and MNFinder are now available on the lab’s github page and described in the README.txt files. VCS MN: https://github.com/hatch-lab/fast-mn. MNFinder: https://github.com/hatch-lab/mnfinder.

      (2) Evaluations of the methods are often not fully explained in the text.

      The text has been extensively updated to include a full description of the methods and choices made to develop the VCS MN and MNFinder image segmentation modules.

      (3) To my mind, the various metrics used to evaluate VCS MN reveal it not to be terribly reliable. Recall and PPV hover in the 70-80% range except for the PPV for MN+. It is what it is - but do the authors think one has to spend time manually correcting the output or do they suggest one uses it as is?

      VCS MN attempts to balance precision and recall with speed to reduce the fraction of MN changing state from intact to ruptured during a single cell cycle during a live-cell isolation experiment. In addition, we chose to prioritize inclusion of small MN adjacent to the nucleus in our positive calls. This meant that there were more false positives (lower PPV) than obtained by other methods but allowed us to include this highly biologically relevant class of MN in our MN+ population. Thus, for a comprehensive understanding of the consequences of MN formation and rupture, we recommend using the finder as is. However, for other visual cell sorting applications where a small number of highly pure MN positive and negative cells is preferred, such as clonal outgrowth or metastasis assays, we would recommend using the slower, but more precise, MNFinder to get a higher precision at a cost of temporal resolution. In addition, MNFinder, with its higher flexibility and object coverage, is recommended for all fixed cell analyses.
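      The trade-off described in this response can be made concrete with the standard definitions of PPV (precision) and recall. The counts below are hypothetical, chosen only to illustrate the permissive-versus-strict behavior, and are not the paper's measured values:

```python
# Minimal sketch of the precision/recall trade-off between a permissive
# classifier (VCS MN-like) and a stricter one (MNFinder-like).
def ppv_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    ppv = tp / (tp + fp)     # positive predictive value (precision)
    recall = tp / (tp + fn)  # sensitivity
    return ppv, recall

# Permissive: more small, nucleus-adjacent MN found, more false positives.
print(ppv_recall(tp=80, fp=30, fn=10))  # higher recall, lower PPV
# Strict: fewer false positives, but more MN missed.
print(ppv_recall(tp=60, fp=5, fn=30))   # higher PPV, lower recall
```

      Which point on this curve to operate at depends on the application: bulk enrichment tolerates lower PPV, while clonal outgrowth assays favor the stricter setting.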

      Reviewer #2 (Public review):

      Summary:

      Micronuclei are aberrant nuclear structures frequently seen following the missegregation of chromosomes. The authors present two image analysis methods, one robust and another rapid, to identify micronuclei (MN) bearing cells. The authors induce chromosome missegregation using an MPS1 inhibitor to check their software outcomes. In missegregation-induced cells, the authors do not distinguish cells that have MN from those that have MN with additional segregation defects. The authors use RNAseq to assess the outcomes of their MN-identifying methods: they do not observe a transcriptomic signature specific to MN but find changes that correlate with aneuploidy status. Overall, this work offers new tools to identify MN-presenting cells, and it sets the stage with clear benchmarks for further software development.

      Strengths:

      Currently, there are no robust MN classifiers with a clear quantification of their efficiency across cell lines (mIoU score). The software presented here tries to address this gap. GitHub material (tools, protocols, etc) provided is a great asset to naive and experienced computational biologists. The method has been tested in more than one cell line. This method can help integrate cell biology and 'omics' studies.

      Weaknesses:

      Although the classifier outperforms available tools for MN segmentation by providing mIOU, it's not yet at a point where it can be reliably applied to functional genomics assays where we expect a range of phenotypic penetrance.

      We agree that the MNFinder module has limitations with regards to the degree of nuclear atypia and cell density that can be tolerated. Based on the recall and PPV values and their consistency across the majority of conditions analyzed, we believe that MNFinder can provide reliable results for MN frequency, integrity, shape, and label characteristics in a functional genomics assay in many commonly used adherent cell lines. We also added a discussion of caveats for these analyses, including the facts that highly lobulated nuclei will have higher false positive rates and that high cell confluency may require additional markers to ensure highly accurate assignment of MN to nuclei.

      Spindle checkpoint loss (e.g., MPS1 inhibition) is expected to cause a variety of nuclear atypia: misshapen, multinucleated, and micronucleated cells. It may be difficult to obtain a pure MN population following MPS1 inhibitor treatment, as many cells are likely to present MN among multinucleated or misshapen nuclear compartments. Given this situation, the transcriptomic impact of MN is unlikely to be retrieved using this experimental design, but this does not negate the significance of the work. The discussion will have to consider the nature, origin, and proportion of MN/rupture-only states - for example, lagging chromatids and unaligned chromosomes can result in different states of micronuclei and also distinct cell fates.

      We appreciate the reviewer’s comments and now quantify the frequency of other nuclear atypias and MN chromosome content in RPE1 cells after 24 h Mps1 inhibition (Fig. S1). In summary, we find only small increases in nuclear atypia, including multinucleate cells, misshapen nuclei, and chromatin bridges, compared to the large increase in MN formation. This contrasts with what is observed when mitosis is delayed using nocodazole or CENPE inhibitors where nuclear atypia is much more frequent. Importantly, after Mps1 inhibition, RPE1 cells with MN were only slightly more likely to have a misshapen nucleus compared to cells without MN (Fig. S1C).

      Interestingly, this analysis showed that the VCS MN pipeline, which uses the Deep Retina segmenter to identify nuclei, has a strong bias against lobulated nuclei and frequently fails to find them (Fig. S2B). Therefore, the cell populations analyzed by RNAseq were largely depleted of highly misshapen nuclei and differences in nuclear atypia frequency between MN+ and MN- cells in the starting population were lost (Fig. S9A, compare to Fig. S1C). This strongly suggests that the transcript changes we observed reflect differences in MN frequency and aneuploidy rather than differences in nuclei morphology.

      We agree with the reviewer that MN rupture frequency and formation, and downstream effects on cell proliferation and DNA damage, are sensitive to the source of the missegregated chromatin. In the revised manuscript we make clear that we chose Mps1 inhibition because it is strongly biased towards whole chromosome MN (Fig. S1E), limiting signal from DNA damage products, including chromosome fragments and chromatin bridges. This provides a baseline to disambiguate the consequences of micronucleation and DNA damage in more complex chromosome missegregation processes, such as DNA replication disruption and irradiation.

      Reviewer #3 (Public review):

      Summary:

      The authors develop a method to visually analyze micronuclei using automated methods. The authors then use these methods to isolate MN post-photoactivation and analyze transcriptional changes in cells with and without micronuclei of RPE-1 cells. The authors observe in RPE-1 cells that MN-containing cells show similar transcriptomic changes as aneuploidy, and that MN rupture does not lead to vast changes in the transcriptome.

      Strengths:

      The authors develop a method that allows for automating measurements and analysis of micronuclei. This has been something that the field has been missing for a long time. Using such a method has the potential to advance micronuclei biology. The authors also develop a method to identify cells with micronuclei in real time and mark them using photoconversion and then isolate them via FACS. The authors use this method to study the transcriptome. This method is very powerful as it allows for the sorting of a heterogenous population and subsequent analysis with a much higher sample number than could be previously done.

      Weaknesses:

      The major weakness of this paper is that the results from the RNA-seq analysis are difficult to interpret as very few changes are found to begin with between cells with MN and cells without. The authors have to use a 1.5-fold cut-off to detect any changes in general. This is most likely due to the sequencing read depth used by the authors. Moreover, there are large variances between replicates in experiments looking at cells with ruptured versus intact micronuclei. This limits our ability to assess if the lack of changes is due to truly not having changes between these populations or experimental limitations. Moreover, the authors use RPE-1 cells which lack cGAS, which may contribute to the lack of changes observed. Thus, it is possible that these results are not consistent with what would occur in primary tissues or just in general in cells with a proficient cGAS/STING pathway.

      We agree with the reviewer’s assessment of the limitations of our RNA-Seq analysis. After additional analysis, we propose an alternative explanation for the lower expression changes we observe in the MN+ and Mps1 inhibitor RNA-Seq experiments. In summary, we find that VCS MN has a strong bias against highly lobulated nuclei that depletes this class of cells from both the bulk analysis and the micronucleated cell populations (Fig. S9A). Based on this result, we propose that our analysis reduces the contribution of nuclear atypia to these transcriptional changes and that nuclear morphology changes are likely a signaling trigger associated with aneuploidy.

      We believe that this finding strengthens our overall conclusion that MN formation and rupture do not cause transcriptional changes, as suppressing the signaling associated with nuclei atypia should increase sensitivity to changes from the MN. However, we cannot completely rule out that MN formation or rupture cause a broad low-level change in transcription that is obscured by other signals in the dataset.

      As to cGAS signaling, several follow-up papers and even the initial studies from the Greenberg lab show that MN rupture does not activate cGAS and does not cause cGAS/STING-dependent signaling in the first cell cycle (see citations and discussion in text). Therefore, we expect the absence of cGAS in RPE1 cells will have no effect in the first cell cycle, but could alter the transcriptional profile after mitosis. Although analysis of RPE1 cGAS+ cells or primary cells in these experiments will be required to definitively address this point, we believe that our interpretation of our RNAseq results is sufficiently backed up by the literature to warrant our conclusion that MN formation and rupture do not induce a transcriptional response in the first cell cycle.

      Reviewer #1 (Recommendations for the authors):

      I do not recommend additional experimental or computational work. Instead, I just recommend adapting the claims of the manuscript to what has been done. I am just asking for further clarification and minor rewriting.

      (1) The manuscript is written like a molecular biology paper with sparse explanations of the authors' reasoning, especially in the development of their algorithms. I was often lost as to why they did things in one way or another.

      The revised manuscript has thorough explanations and additional data and graphics defining how and why the VCS MN and MNFinder modules were developed. We hope that this clears up many of the questions the reviewer had and appreciate their guidance on making it more readable for scientists from different backgrounds.

      (2) Evaluations of their method are often not fully explained, for example:

      "On average, 75% of nuclei per field were correctly segmented and cropped."

      "MN segments were then assigned to 'parent' nuclei by proximity, which correctly associated 97% of MN."

      Were there ground truth images and labels created? How many? For example, I don't know how the authors could even establish a ground-truth for associating MNs to nuclei if MNs happened to be almost equidistant between two nuclei in their images.

      I suggest a separate subsection early in the Results section where the underlying imaging data + labels are presented.

      We added new sections to the text and figures at the beginning of the VCS MN and MNFinder subsections (Fig. S2 and Fig. S5) with specific information about how ground truth images and labels were generated for both modules and how these were broken up for training, validation, and testing.

      We also added information and images to explain how ground truth MN/nucleus associations were derived. In summary, we took advantage of the fact that 2xDendra-NLS is present at low levels in the cytoplasm to identify cell boundaries. This, combined with a subconfluent cell population, allowed us to unambiguously group MN and nuclei for an estimated 98% of MN. These identifications were used to generate ground truth labels and to analyze how well proximity defines MN/nucleus groups (Figs. S1 and S2).

      (3) Overall, I find the sections long and more subtitles would help me better navigate the manuscript.

      Where possible, we have added subtitles.

      (4) Everything following "To train the model, H2B channel images were passed to a Deep Retina neural net ..." is fully automated, it seems to me. Thus, there seems to be no human intervention to correct the output before it is used to train the neural network. Therefore, I do not understand why a neural network was trained at all if the pipeline for creating ground truth labels worked fully automatically. At least, the explanations are insufficient.

      We apologize for the initial lack of clarity in the text and included additional details in the revision. We used the Deep Retina segmenter to crop the raw images to areas around individual nuclei to accelerate ground truth labeling of MN. A trained user went through each nucleus crop and manually labeled pixels belonging to MN to generate the ground truth dataset for training, validation, and imaging in VCS MN (Fig. S2A).

      (5) To my mind, the various metrics used to evaluate VCS MN reveal it not to be terribly reliable. Recall and PPV hover in the 70-80% range except for the PPV for MN+. It is what it is - but do the authors think one has to spend time manually correcting the output or do they suggest one uses it as is? I understand that for bulk transcriptomics, enrichment may be sufficient but for many other questions, where the wrong cell type could contaminate the population, it is not.

      Remarks in the Results section on what the various accuracies mean for different applications would be good (so one does not need to wait for the Discussion section).

      One of the strengths of the visual cell sorting system is that any image analysis pipeline can be used with it. We used VCS MN for the transcriptomics experiment, but for other applications a user could run visual cell sorting in conjunction with MNFinder for increased purity while maintaining a reasonable recall or use a pre-existing MN segmentation program that gives 100% purity but captures only a specific subgroup of micronucleated cells (e.g. PIQUE). 

      To maintain readability, especially with the expansion of the results sections, we kept the discussion of how we envision using visual cell sorting for other MN-based applications in the discussion section.

      (6) I am confused about what "cell" is referring to in much of the manuscript. Is it the nucleus + MNs only? Is it the whole cell, which one would ordinarily think it is? If so, are there additional widefield images, where one can discern cell boundaries? I found the section "MNFinder accurately ..." very hard to read and digest for this reason and other ambiguous wording. I suggest the authors take a fresh look at their manuscript and see whether the text can be improved for clarity. I did not find it an easy read overall, especially the computational part.

      After re-examining how “cell” was used, we updated the text to limit its use to the MNFinder arm tasked with identifying MN-nucleus associations where the convex hull defined by these objects is used to determine the “cell” boundary. In all other cases we have replaced cell with “nucleus” because, as the reviewer points out, that is what is being analyzed and converted. We hope this is clearer.

      (7) Post-FACS PPVs are not that great (Figure 3c). It depends on the question one wants to answer whether ~70% PPV is good enough. Again, would be good to comment on.

      We added discussion of this result to the revision. In summary, a likely reason for the reduced PPV is that, although we maintain the cells in buffer with a Cdk1 inhibitor, we know that some proportion of the cells go through mitosis post-sorting. Since MN are frequently reincorporated into the nucleus after mitosis (Hatch et al, 2013; Zhang et al., 2015), we expect this to reduce the MN+ population. Thus, we expect that the PPV in the RNAseq population is higher than what we can measure by analyzing post-sorted cells that have been plated and analyzed later.

      (8) I am thoroughly confused as to why the authors claim that their system works in the "absence of genetic perturbations" and why they emphasize the fact that their cells are non-transformed: They still needed a fluorescent label and they induce MNs with a chemical Mps1 inhibitor. (The latter is not a genetic manipulation, of course, but they still need to enrich MNs somehow. That is, their method has not been tested on a cell population in which MNs occur naturally, presumably at a very low rate, unless I missed something.) A more careful description of the benefits of their method would be good.

      We apologize for the confusion on these points and hope this is clarified in the revision. We were comparing our system, which can be made using transient transfection, if desired, to current tools that disambiguate aneuploidy and MN formation by deleting parts of chromosomes or engineering double strand breaks with CRISPR to generate single chromosome-specific missegregation events. Most of these systems require transformed cancer cells to obtain high levels of recombination. In contrast, visual cell sorting can isolate micronucleated cells from any cell line that can exogenously express a protein, including primary cells and non-transformed cells like RPE1s.

      Other minor points:

      (1) The authors should not refer to "H2B channels" but to "H2B-emiRFP703 channels". It may seem obvious to the authors but for someone reading the manuscript for the very first time, it was not. I was not sure whether there were additional imaging modalities used for H2B/nucleus/chromatin detection before I went back and read that only fluorescence images of H2B-emiRFP703 were used. To put it another way, the authors are detecting fluorescence, not histones -- unless I misunderstood something.

      To address this point, we altered the text to read “H2B-emiRFP703” when discussing images of this construct. For MNFinder some images were of cells expressing H2B-GFP, which has also been clarified.

      (2) If the level of zoom on my screen is such that I can comfortably read the text, I cannot see much in the figure panels. The features that I should be able to see are the size of a title. The image panels should be magnified.

      In the revision, the images are appended to the end at full resolution to overcome this difficulty. Thank you for your forbearance.

      Reviewer #2 (Recommendations for the authors):

      The methods are adequately explained. The Results text narrating experiments and data analysis is clear. Interpretation of a few results could be clarified and strengthened as explained below.

      (1) RNAseq experiments are a good proof of principle. To strengthen their interpretation in Figures 4 and 6, I would recommend the authors cite published work on checkpoint/MPS1 loss-induced chromosome missegregation (PMID: 18545697, PMID: 33837239, PMC9559752) and consider in their discussion the 'origin' and 'proportion' of micronucleated cells and irregularly shaped nuclei expected in RPE1 lines. This will help interpret Figure 6 findings on aneuploidy signature accurately. Not being able to see an MN-specific signature could be due to the way the biological specimen is presented with a mixture of cells with 'MN only' or 'rupture' or 'MN along with misshapen nuclei'. These features may all link to aneuploidy rather than 'MN' specifically.

      We appreciate the reviewer’s suggestion and added a new analysis of nuclear atypia after Mps1 inhibition in RPE1 cells to Fig. S1. Overall, we found that Mps1 inhibition significantly, but modestly, increased the proportion of misshapen nuclei and chromatin bridges. Multinucleate cells were so rare that instead of giving them their own category we included them in “misshapen nuclei.” These results are consistent with images of Mps1i-treated RPE1 cells from He et al. 2019 and Santaguida et al. 2017 and distinct from the stronger changes in nuclear morphology observed after delaying mitosis by nocodazole or CENPE inhibition.

We also found that the Deep Retina segmenter used to identify nuclei in VCS MN had a significant bias against highly lobulated nuclei (Fig. S2B) that led to misshapen nuclei being largely excluded from the RNAseq analyses. As a result, we found no enrichment of misshapen nuclei, chromatin bridges, or dead/mitotic nuclear morphologies in MN+ compared to MN- nuclei in our RNAseq experiments (Fig. S9A).

      (2) As the authors clarify in the response letter, one round of ML is unlikely to result in fully robust software; additional rounds of ML with other markers will make the work robust. It will be useful to indicate other ML image analysis tools that have improved through such reiterations. They could use reviews on challenges and opportunities using ML approaches to support their statement. Also in the introduction, I would recommend labelling as 'rapid' instead of 'rapid and precise' method.

      We updated the text to reference review articles that discuss the benefit of additional training for increasing ML accuracy and changed the text to “rapid.”

      (3) The lack of live-cell studies does not allow the authors to distinguish the origin of MN (lagging chromatids or unaligned chromosomes). As explained in 1, considering these aspects in discussion would strengthen their interpretation. Live-cell studies can help reduce the dependencies on proximity maps (Figure S2).

      The revised text includes new references and data (Fig. S1E) demonstrating that Mps1 inhibition strongly biases towards whole chromosome missegregation and that MN are most likely to contain a single centromere positive chromosome rather than chromatin fragments or multiple chromosomes.

      (4) Mean Intersection over Union (mIOU) is a good measure to compare outcomes against ground truth. However, the mIOU is relatively low (Figure 2D) for HeLa-based functional genomics applications. It will help to discuss mIOU for other classifiers (non-MN classifiers) so that they can be used as a benchmark (this is important since the authors state in their response that they are the first to benchmark an MN classifier). There are publications for mitochondria, cell cortex, spindle, nuclei, etc. where IOU has been discussed.

We added references to classifiers for other small cellular structures. We also evaluated major sources of error in MNFinder and found that false negatives are enriched in very small MN (3 to 9 pixels, or about 0.4 µm<sup>2</sup> – 3 µm<sup>2</sup>, Fig. S6B). A similar result was obtained for VCS MN (Fig. S3B). Because small changes in the number of pixels identified in small objects can have outsized effects on mIoU scores, we suspect that this is exerting downward pressure on the mIoU value. Based on the PPV and recall values we identified, we believe that MNFinder is robust enough to use for functional genomics and screening applications with reasonable sample sizes.
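
The size sensitivity described above can be sketched numerically. The snippet below is an illustration of the general IoU metric on pixel-coordinate sets, not the MNFinder evaluation code: the same single-pixel error costs a tiny object far more IoU than a large one.

```python
# Illustrative sketch (not the actual MNFinder evaluation code): IoU on
# pixel sets, showing why one missed pixel penalizes small objects more.

def iou(pred: set, truth: set) -> float:
    """Intersection-over-union of two pixel-coordinate sets."""
    union = pred | truth
    return len(pred & truth) / len(union) if union else 1.0

def square(side: int) -> set:
    """A solid square object of side x side pixels."""
    return {(x, y) for x in range(side) for y in range(side)}

for side in (2, 10):
    truth = square(side)
    pred = truth - {(0, 0)}          # identical mask except one missed pixel
    print(f"{len(truth)} px object, IoU = {iou(pred, truth):.2f}")
# prints IoU 0.75 for the 4 px object but 0.99 for the 100 px object
```

With many MN in the 3-9 pixel range, per-object IoU values like 0.75 drag the mean down even when detections are essentially correct.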

      (5) Figure 5 figure legend title is an overinterpretation. MN and rupture-initiated transcriptional changes could not be isolated with this technique where several other missegregation phenotypes are buried (see point 1 above).

We decided to keep the figure legend title based on our analysis of known missegregation phenotypes in Fig. S1 and S9 showing that there is no difference in major classes of nuclear atypia between MN+ and MN- populations in this analysis. Although we cannot rule out that other correlated changes exist, we believe that the title represents the most parsimonious interpretation.

      Minor comments

      (1) The sentence in the introduction needs clarification and reference. "However, these interventions cause diverse "off-target" nuclear and cellular changes, including chromatin bridges, aneuploidy, and DNA damage." Off-target may not be the correct description since inhibiting MPS1 is expected to cause a variety of problems based on its role as a master kinase in multiple steps of the chromosome segregation process. Consider one of the references in point 1 for a detailed live-cell view of MPS1 inhibitor outcomes.

      We have changed “off-target” to “additional” for clarity.

      (2) In Figure 3 or S3, did the authors notice any association between the cell cycle phase and MN or rupture presence? Is this possible to consider based on FACS outcomes or nuclear shapes?

Previous work by our lab and others has shown that MN rupture frequency increases during the cell cycle (Hatch et al., 2013; Joo et al., 2023). Whether this is stochastic or regulated by the cell cycle may depend on what chromosome is in the MN (Mammel et al., 2021) and likely the cell line. Unfortunately, the H2B-emiRFP703 fluorescence in our population is too variable to identify cell cycle stage from FACS or nuclear fluorescence analysis.

      (3) Figure 5 - Please explain "MA plot".

An MA plot, or log fold change (M) versus average (A) gene expression, is a way to visualize differentially expressed genes between two conditions in an RNAseq experiment and is used as an alternative to volcano plots. We chose MA plots for our paper because most of the expression changes we observed were small and of similar significance; the MA plot spreads out the data compared to a volcano plot and allowed better visualization of trends across the population.
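
For readers unfamiliar with the convention, the two coordinates can be sketched as follows (toy normalized counts, invented for illustration; not data from the paper):

```python
import math

# Toy normalized counts: gene -> (control, treated). Values are invented.
counts = {"GENE_A": (100.0, 110.0), "GENE_B": (2000.0, 500.0)}

for gene, (ctrl, trt) in counts.items():
    m = math.log2(trt / ctrl)                      # M: log2 fold change (y-axis)
    a = 0.5 * (math.log2(trt) + math.log2(ctrl))   # A: mean log2 expression (x-axis)
    print(gene, round(m, 2), round(a, 2))
# → GENE_A 0.14 6.71
# → GENE_B -2.0 9.97
```

Because A rather than significance is on the x-axis, many genes with small M values spread out horizontally instead of piling up near the origin as they would in a volcano plot.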

      (4) Page 7: "our results strongly suggest that protein expression changes in MN+ and rupture+ cells are driven mainly by increased aneuploidy rather than cellular sensing of MN formation and rupture.". This is an overstatement considering the mIOU limits of the software tool and the non-exclusive nature of MN in their samples.

We agree that we cannot rule out that an unknown masking effect is inhibiting our ability to observe small, broad changes in transcription after MN formation or rupture. However, we believe we have minimized the most likely sources of masking effects, including nuclear atypia and large-scale aneuploidy differences, and thus our interpretation is the most likely one.

      Reviewer #3 (Recommendations for the authors):

      Overall, the authors need to explain their methods better, define some technical terms used, and more thoroughly explain the parameters and rationale used when implementing these two protocols for identifying micronuclei; primarily as this is geared toward a more general audience that does not necessarily work with machine learning algorithms.

      (1) A clearer description in the methods as to how accuracy was calculated. Were micronuclei counted by hand or another method to assess accuracy?

We significantly expanded the section on how the machine learning models were trained and tested, including how sensitivity and specificity metrics were calculated, in both the results and the methods sections. The code used to compare ground truth labels to computed masks is also now included in the MNFinder module available on the lab GitHub page.

      (2) Define positive predictive value.

      The text now says “the positive predictive value (PPV, the proportion of true positives, i.e. specificity) and recall (the proportion of MN found by the classifier, i.e. sensitivity)…”.
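
For concreteness, the two metrics can be sketched as follows (the counts are hypothetical, not values from the paper):

```python
# Minimal sketch of the two metrics named in the text, on hypothetical
# counts: TP = true positives, FP = false positives, FN = false negatives.

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: proportion of detections that are real."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Recall: proportion of true objects that were detected."""
    return tp / (tp + fn)

# e.g. 90 MN correctly labeled, 10 spurious detections, 30 missed MN
print(ppv(90, 10), recall(90, 30))   # → 0.9 0.75
```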

      (3) Why is it a problem to use the VCS MN at higher magnifications where undersegmentation occurs? What do the authors mean by diminished performance (what metrics are they using for this?).

      We have included a representative image and calculated mIoU and recall for 40x magnification images analyzed by MNFinder after rescaling in Fig. 2A. In summary, VCS MN only correctly labeled a few pixels in the MN, which was sufficient to call the adjacent nucleus “MN+” but not sufficient for other applications, such as quantifying MN area. In addition, VCS MN did much worse at identifying all the MN in 40x images with a recall, or sensitivity, metric of 0.36. We are not sure why. Developing MNFinder provided a module that was well suited to quantify MN characteristics in fixed cell images, an important use case in MN biology.

      (4) The authors should compare MN that are analyzed and not analyzed using these methods and define parameters. Is there a size limitation? Closeness to the main nucleus?

      We added two new figures defining what contributes to module error for both VCS MN (Fig. S3) and MNFinder (Fig. S6). For VCS MN, false negatives are enriched in very large or very small MN and tend to be dimmer and farther from the nucleus than true positives. False positives are largely misclassification of small dim objects in the image as MN. For MNFinder, the most missed class of MN are very small ones (3-9 px in area) and the majority of false positives are misclassifications of elongated nuclear blebs as MN.

      (5) Are there parameters in how confluent an image must be to correctly define that the micronucleus belongs to the correct cell? The authors discussed that this was calculated based on predicted distance. However, many factors might affect proper calling on MN. And the authors should test this by staining for a cytosolic marker and calculating accuracy.

      We updated the text with more information about how the cytoplasm was defined using leaky 2x-Dendra2-NLS signal to analyze the accuracy of MN/nucleus associations (Fig. S2G-H). In addition, we quantified cell confluency and distance to the first and second nearest neighbor for each MN in our training and testing image datasets. We found that, as anticipated, cells were imaged at subconfluent concentrations with most fields having a confluency around 30% cell coverage (Fig. S2E) and that the average difference in distance between the closest nucleus to an MN and the next closest nucleus was 3.3 fold (Fig. S2F). We edited the discussion section to state that the ability of MN/nuclear proximity to predict associations at high cell confluencies would have to be experimentally validated.

      (6) The authors measure the ratio of Dendra2(Red) v. Dendra2 (Green) in Figure 3B to demonstrate that photoconversion is stable. This measurement, to me, is confusing, as in the end, the authors need to show that they have a robust conversion signal and are able to isolate these data. The authors should directly demonstrate that the Red signal remains by analyzing the percent of the Red signal compared to time point 0 for individual cells.

      We found a bulk analysis to be more powerful than trying to reidentify individual cells due to how much RPE1 cells move during the 4 and 8 hours between image acquisitions. In addition, we sort on the ratio between red and green fluorescence per cell, rather than the absolute fluorescence, to compensate for variation in 2xDendra-NLS protein expression between cells. Therefore, demonstrating that distinct ratios remained present throughout the time course is the most relevant to the downstream analysis.

      To address the reviewer’s concern, we replotted the data in Fig. 3B to highlight changes over time in the raw levels of red and green Dendra fluorescence (Fig. S7D). As expected, we see an overall decrease in red fluorescence intensity, and complementary increase in green fluorescence intensity, over 8 hours, likely due to protein turnover. We also observe an increase in the number of nuclei lacking red fluorescence. This is expected since the well was only partially converted and we expect significant numbers of unconverted cells to move into the field between the first image and the 8 hour image.

      (7) The authors isolate and subsequently use RNA-sequencing to identify changes between Mps1i and DMSO-treated cells. One concern is that even with the less stringent cut-off of 1.5 fold there is a very small change between DMSO and MPS1i treated cells, with only 63 genes changing, none of which were affected above a 2-fold change. The authors should carefully address this, including why their dataset sees changes in many more pathways than in the He et al. and Santaguida et al. studies. Is this due to just having a decreased cut-off?

The reviewer correctly points out that we observed an overall reduction in the strength of gene expression changes in our dataset of DMSO versus Mps1i treated RPE1 cells compared to similar studies. We suggest a couple of reasons for this. One is that the log<sub>2</sub> fold changes observed in the other studies are not huge and vary between 2.5 and -3.8 for He et al., 3.3 and -2.3 for Santaguida et al., and -0.8 and 1.6 for our study. This variability is within a reasonable range for different experimental conditions and library prep protocols. A second is that our protocol minimizes a potential source of transcriptional change – nuclear lobulation – that is present in the other datasets.

      For the pathway analysis we did not use a fold-change cut-off for any data set, instead opting to include all the genes found to be significantly different between control and Mps1i treated cells for all three studies. Our read-depth was higher than that of the two published experiments, which could contribute to an increased DEG number. However, we hypothesize that our identification of a broader number of altered pathways most likely arises from increased sensitivity due to the loss of covering signal from transcriptional changes associated with increased nuclear atypia. Additional visual cell sorting experiments sorting on misshapen nuclei instead of MN would allow us to determine the accuracy of this hypothesis.

      (8) Moreover, clustering (in Figure 5E) of the replicates is a bit worrisome as the variances are large and therefore it is unclear if, with such large variance and low screening depth, one can really make such a strong conclusion that there are no changes. The authors should prove that their conclusion that rupture does not lead to large transcriptional changes, is not due to the limitations of their experimental design.

We agree with the reviewers that additional rounds of RNAseq would improve the accuracy of our transcriptomic analysis and could uncover additional DEGs. However, we believe the overall conclusion to be correct based on the results of our attempt to validate changes in gene expression by immunofluorescence. We analyzed two of the most highly upregulated genes in the ruptured MN dataset, ATF3 and EGR1. Although we saw a statistically significant increase in ATF3 intensity between cells without MN and those with ruptured MN, the fold change was so small compared to our positive control (100x less) that we believe it is more consistent with a small increase in the probability of aneuploidy rather than a specific signature of MN rupture.

      (9) The authors also need to address the fact that they are using RPE-1 cells more clearly and that the lack of effect in transcriptional changes may be simply due to the loss of cGAS-STING pathway (Mackenzie et al., 2017; Harding et al., 2017; etc.).

      As we discuss above in the public comments section, the literature is clear that MN do not activate cGAS in the first cell cycle after their formation, even upon rupture. Therefore, we do not expect any changes in our results when applied to cGAS-competent cells. However, this expectation needs to be experimentally validated, which we plan to address in upcoming work.

    1. One concept that comes up in a lot of different ethical frameworks is moderation. Famously, Confucian thinkers prized moderation as a sound principle for living, or as a virtue, and taught the value of the ‘golden mean’, or finding a balanced, moderate state between extremes. This golden mean idea got picked up by Aristotle—we might even say ripped off by Aristotle—as he framed each virtue as a medial state between two extremes. You could be cowardly at one extreme, or brash and reckless at the other; in the golden middle is courage. You could be miserly and penny-pinching, or you could be a reckless spender, but the aim is to find a healthy balance between those two. Moderation, or being moderate, is something that is valued in many ethical frameworks, not because it comes naturally to us, per se, but because it is an important part of how we form groups and come to trust each other for our shared survival and flourishing.

      This idea of moderation as a key ethical principle makes a lot of sense, especially in how it applies to real life. Whether in decision-making, relationships, or even personal habits, extremes tend to cause instability, while balance leads to sustainability. It’s interesting to think about how this concept shows up across different cultures and philosophies, reinforcing the idea that moderation is not just a moral principle but a practical one for living well.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      The paper explored cross-species variance in albumin glycation and blood glucose levels in the function of various life-history traits. Their results show that

      (1) blood glucose levels predict albumin gylcation rates

      (2) larger species have lower blood glucose levels

      (3) lifespan positively correlates with blood glucose levels and

      (4) diet predicts albumin glycation rates.

      The data presented is interesting, especially due to the relevance of glycation to the ageing process and the interesting life-history and physiological traits of birds. Most importantly, the results suggest that some mechanisms might exist that limit the level of glycation in species with the highest blood glucose levels.

      While the questions raised are interesting and the amount of data the authors collected is impressive, I have some major concerns about this study:

(1) The authors combine many databases and samples of various sources. This is understandable when access to data is limited, but I expected more caution when combining these. E.g. glucose is measured in all samples without any description of how handling stress was controlled for. E.g. glucose levels can easily double in a few minutes in birds, potentially introducing variation in the data generated. The authors report no caution of this effect, or any statistical approaches aiming to check whether handling stress had an effect here, either on glucose or on glycation levels.

(2) The database with the predictors is similarly problematic. There is information pulled from captivity and the wild (e.g. on lifespan) without any confirmation that the different databases are comparable or not (and here I'm not just referring to the correlation between the databases, but also to a potential systematic bias (e.g. captivity-based sources likely consistently report longer lifespans). This is even more surprising, given that the authors raise the possibility of captivity effects in the discussion, and exploring this question would be extremely easy in their statistical models (a simple covariate in the MCMCglmms).

(3) The authors state that one of the primary response variables (glycation) was measured without any replicability test or reference to the replicability of the measurement technique.

      (4) The methods and results are very poorly presented. For instance, new model types and variables are popping up throughout the manuscript, already reporting results, before explaining what these are e.g. results are presented on "species average models" and "model with individuals", but it's not described what these are and why we need to see both. Variables, like "centered log body mass", or "mass-adjusted lifespan" are not explained. The results section is extremely long, describing general patterns that have little relevance to the questions raised in the introduction and would be much more efficiently communicated visually or in a table.

      Reviewer #2 (Public review):

      Summary

In this extensive comparative study, Moreno-Borrallo and colleagues examine the relationships between plasma glucose levels, albumin glycation levels, diet, and life-history traits across birds. Their results confirmed the expected positive relationship between plasma blood glucose level and albumin glycation rate but also provided findings that are somewhat surprising or contradicting findings of some previous studies (relationships with lifespan, clutch mass, or diet). This is the first extensive comparative analysis of glycation rates and their relationships to plasma glucose levels and life history traits in birds that are based on data collected in a single study and measured using unified analytical methods.

      Strengths

This is an emerging topic gaining momentum in evolutionary physiology, which makes this study a timely, novel, and very important contribution. The study is based on a novel data set collected by the authors from 88 bird species (67 in captivity, 21 in the wild) of 22 orders, which itself greatly contributes to the pool of available data on avian glycemia, as previous comparative studies either extracted data from various studies or a database of veterinary records of zoo animals (therefore potentially containing much more noise due to different methodologies or other unstandardised factors), or only collected data from a single order, namely Passeriformes. The data further represents the first comparative avian data set on albumin glycation obtained using a unified methodology. The authors used LC-MS to determine glycation levels, which does not have problems with specificity and sensitivity that may occur with assays used in previous studies. The data analysis is thorough, and the conclusions are mostly well-supported (but see my comments below). Overall, this is a very important study representing a substantial contribution to the emerging field of evolutionary physiology focused on the ecology and evolution of blood/plasma glucose levels and resistance to glycation.

      Weaknesses

My main concern is about the interpretation of the coefficient of the relationship between glycation rate and plasma glucose, which reads as follows: "Given that plasma glucose is logarithm transformed and the estimated slope of their relationship is lower than one, this implies that birds with higher glucose levels have relatively lower albumin glycation rates for their glucose, fact that we would be referring as higher glycation resistance" (lines 318-321) and "the logarithmic nature of the relationship, suggests that species with higher plasma glucose levels exhibit relatively greater resistance to glycation" (lines 386-388). First, only plasma glucose (predictor) but not glycation level (response) is logarithm transformed, and this semi-logarithmic relationship assumed by the model means that an increase in glycation always slows down when blood glucose goes up, irrespective of the coefficient. The coefficient thus does not carry information that could be interpreted as higher (when <1) or lower (when >1) resistance to glycation (this only can be done in a log-log model, see below) because the semi-log relationship means that glycation increases by a constant amount (expressed by the coefficient of plasma glucose) for every tenfold increase in plasma glucose (for example, with glucose values 10 and 100, the model would predict glycation values 2 and 4 if the coefficient is 2, or 0.5 and 1 if the coefficient is 0.5). Second, the semi-logarithmic relationship could indeed be interpreted such that glycation rates are relatively lower in species with high plasma glucose levels. However, the semi-log relationship is assumed here a priori and forced to the model by log-transforming only glucose level, while not being tested against alternative models, such as: (i) a model with a simple linear relationship (glycation ~ glucose); or (ii) a log-log model (log(glycation) ~ log(glucose)) assuming power function relationship (glycation = a * glucose^b).
The latter model would allow for the interpretation of the coefficient (b) as higher (when <1) or lower (when >1) resistance in glycation in species with high glucose levels as suggested by the authors.
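
The contrast the reviewer draws between the two model forms can be sketched numerically (the coefficients below are invented for illustration, not fitted values):

```python
# Illustrative sketch of the reviewer's point, with invented coefficients:
# a semi-log model adds a constant amount of glycation per tenfold
# increase in glucose, whereas in a log-log (power) model the exponent b
# directly encodes relative resistance (b < 1: glycation rises more
# slowly than glucose in proportional terms).
import math

def semilog(glucose: float, a: float = 0.0, b: float = 2.0) -> float:
    """glycation = a + b * log10(glucose)"""
    return a + b * math.log10(glucose)

def powerlaw(glucose: float, a: float = 1.0, b: float = 0.5) -> float:
    """glycation = a * glucose ** b"""
    return a * glucose ** b

for g in (10.0, 100.0):
    # semi-log: +2 per tenfold glucose (2.0 then 4.0, matching the
    # reviewer's worked example); power law: multiplicative scaling
    print(g, semilog(g), round(powerlaw(g), 3))
```

Under the semi-log form the +2 step per decade is fixed by construction, so the slope cannot be read as a degree of glycation resistance; under the power-law form, b < 1 versus b > 1 is exactly the comparison the reviewer describes.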

      Besides, a clear explanation of why glucose is log-transformed when included as a predictor, but not when included as a response variable, is missing.

      We apologize for missing an answer to this part before. Indeed, glucose is always log transformed and this is explained in the text.

The models in the study do not control for the sampling time (i.e., time latency between capture and blood sampling), which may be an important source of noise because blood glucose increases due to stress following capture. Although the authors claim that "this change in glucose levels with stress is mostly driven by an increase in variation instead of an increase in average values" (ESM6, line 46), their analysis of Tomasek et al.'s (2022) data set in ESM1 using the Kruskal-Wallis rank sum test shows that, compared to baseline glucose levels, stress-induced glucose levels have higher median values, not only higher variation.

      Although the authors calculated the variance inflation factor (VIF) for each model, it is not clear how these were interpreted and considered. In some models, GVIF^(1/(2*Df)) is higher than 1.6, which indicates potentially important collinearity; see for example https://www.bookdown.org/rwnahhas/RMPH/mlr-collinearity.html). This is often the case for body mass or clutch mass (e.g. models of glucose or glycation based on individual measurements).

      It seems that the differences between diet groups other than omnivores (the reference category in the models) were not tested and only inferred using the credible intervals from the models. However, these credible intervals relate to the comparison of each group with the reference group (Omnivore) and cannot be used for pairwise comparisons between other groups. Statistics for these contrasts should be provided instead. Based on the plot in Figure 4B, it seems possible that terrestrial carnivores differed in glycation level not only from omnivores but also from herbivores and frugivores/nectarivores.

      Given that blood glucose is related to maximum lifespan, it would be interesting to also see the results of the model from Table 2 while excluding blood glucose from the predictors. This would allow for assessing if the maximum lifespan is completely independent of glycation levels. Alternatively, there might be a positive correlation mediated by blood glucose levels (based on its positive correlations with both lifespan and glycation), which would be a very interesting finding suggesting that high glycation levels do not preclude the evolution of long lifespans.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) Line 84: "glycation scavengers" such as polyamines - can you specify what these polyamines do exactly?

      A clarification of what we mean with "glycation scavengers" is added.

      (2) Line 87-89: specify that the work of Wein et al. and this sentence is about birds.

      This is now clarified.

      (3) Line 95: "88 species" add "OF BIRDS". Also, I think it would be nice if you specified here that you are relying on primary data.

      This is now clarified (line 96).

      (4) Line 90-119: I find this paragraph very long and complex, with too many details on the methodology. For instance, I agree with listing your hypothesis, e.g. that with POL, but then what variables you use to measure the pace of life can go in the materials and methods section (so all lines between 112-119).

This is explained in the introduction because a previous reviewer considered that this presentation was needed there.

      (5) Line 122-124: The first sentence should state that you collected blood samples from various sources, and list some examples: zoos? collaborators? designated wild captures? Stating the sample size before saying what you did to get them is a bit weird. Besides, you skipped a very important detail about how these samples were collected, when, where, and using what protocols. We know very well, that glucose levels can increase quickly with handling stress. Was this considered during the captures? Moreover, you state that you had 484 individuals, but how many samples in total? One per individual or more?

We kindly ask the reviewer to read the multiple supplementary materials provided, in which the questions of the source of the samples, potential stress effects, and sample sizes for each model are addressed. Each individual contributed one sample. More details about the general sources employed are now given in lines 125-127.

      (6) Line 135-36: numbers below 10 should be spelled out.

      Ok. Now that is changed.

      (7) Line 136: the first time I saw that you had both wild and captive samples. This should be among the first things to be described in the methods, as mentioned above.

      As stated above, details on this are included in the supplementary materials, but further clarifications have now been included in the main text (question 5).

      (8) Line 137-138: not clear. So you had 46 samples and 9 species. But what does the 3-3-3 sample mean? or for each species you chose 9 samples (no, cause that would be 81 samples in total)?

      This has now been clarified (lines 139-140).

      (9) Line 139-141: what methodological constraints? Too high glucose levels? Too little plasma?

There were cases in which the device (glucometer) produced a nonspecific error. This did not correspond to glucose levels that were too high or too low, as those produce differently signalled errors. Neither the manual nor the client service provided useful information to discern the cause. This may be related to the composition of the plasma of certain species interfering with the measurement. Some clarifications have been added (lines 143-146).

      (10) Line 143: should be ZIMS.

      Corrected.

      (11) Line 120-148: you generally talk about individuals here, but I feel it would be more precise to use 'samples'.

The terms are interchangeable here, as we never measured more than one sample per individual within this study. Moreover, in some cases, saying “sample” would be less informative.

      (12) Line 150: missing the final number of measurements for glucose and glycation.

Please read ESM6 (Table ESM6.1), where this information is given.

      (13) Line 154-155: so you took multiple samples from the same individual? It's the first time the text indicates so. Or do you mean technical replicates were not performed on the same samples?

As previously indicated, each individual contributed only one sample. Replicates were done only for some individuals to validate the technique, as it would be unfeasible to perform replicates for all of them. This part of the text refers to the fact that not all samples were analysed at the same time, as the analysis takes a considerable amount of time and the mass spectrometry devices are shared with other teams and projects. Clarifications in this sense are now added (lines 160-163).

      (14) Line 171-172: "After realizing that diet classifications from AVONET were not always suitable for our purpose" - too informal. Try rephrasing, like "After determining that AVONET diet classifications did not align with our research needs...", but you still need to specify what was wrong with it and what was changed, based on what argument?

      The new formulation suggested by the reviewer has now been applied (lines 181-183). The details are given in the ESM6, as indicated in the text. 

      (15) Line 174-176: You start a new paragraph, talking about missing values, but you do not specify what variable are you talking about. you talk about calculating means, but the last variable you mentioned was diet, so it's even more strange.

      We refer to life history traits. It has now been clarified in the text (line 185).

      (16) Line 177: what longevity records? Coming from where? How did you measure longevity? Maximum lifespan ever recorded? 80-90% longevity, life expectancy???

      We refer to maximum lifespan, as indicated in the introduction and in every other case throughout the manuscript. Clarifications have now been introduced (188-190).

      (17) Line 180-183: using ZIMS can be problematic, especially for maximum longevity. There are often individuals who had a wrong date of birth entered or individuals that were failed to be registered as dead. The extremes in this database are often way off. If you want to combine though, you can check the correlation of lifespans obtained from different sources for the overlapping species. If it's a strong correlation it can be ok, but intuitively this is problematic.

The species for which we used ZIMS were those for which no other databases reported any values. We could try correlations for other species, but this issue is not necessarily restricted to ZIMS, as the primary origin of the data in other databases is often difficult to trace. Also, ZIMS is potentially more up to date than some of the other databases, mainly the Amniotes database, on which we rely the most, as it includes the highest number of species in the most easily accessible format.

      (18) Line 181-186: in ZIMS you calculate the average of the competing records, otherwise you choose the max. Why use different preferences for the same data?

This was a misunderstanding, and clarifications are now included (line 196). We were referring here to the fact that for maximum lifespan the maximum is always chosen, while for other variables an average is calculated.

      (19) Line 198: Burn-in and thinning interval is quite low compared to your number of iterations. How were model convergences checked?

      Please, check ESM1.

      (20) Line 201-203: What's the argument using these priors? Why not use noninformative ones? Do you have some a priori expectations? If so, it should be explained.

The models have now been rerun with less informative priors that place no expectations on the variance partitions, given the lack of firm expectations, and the results are similar. Smaller nu values were also tried.

      (21) Line 217: "carried" OUT.

      Corrected (now in line 229).

      (22) Line 233-234: "species average model" - what is this? it was not described in the methods.

      Please, read the ESM6.

      (23) Line 232-246: (a) all this would be better described by a table or plot. You can highlight some interesting patterns, but describing it all in the text is not very useful I think, (b) statistically comparing orders represented by a single species is a bit odd.

(a) Figure 1 shows this graphically, but previous reviewers found this part too short without written descriptions. (b) We recognise this limitation, but this part is not presented as one of the main results of the article; it is only an attempt to illustrate very general patterns in order to guide future research. Glycation has never been measured in most groups, so this still constitutes the best illustration of such patterns in the literature.

      (24) Line 281: the first time I saw "mass-adjusted maximum lifespan" - what is this, and how was it calculated? It should be described in the methods. But in any case, neither ratios, nor residuals should be used, but preferably the two variables should be entered side by side in the model.

      Please, see ESM6 for the explanations and justifications for all of this.

      (25) Line 281: there was also no mention of quadratic terms so far. How were polynomial effects tested/introduced in the models? Orthogonal polynomials? or x+ x^2?

      Please, read ESM6.

      (26) Table 1. What is 'Centred Log10Body mass', should be added in the methods.

      Please, read ESM6.

      (27) Table 1: what's the argument behind separating terrestrial and aquatic carnivores?

This was mostly based on the a priori separation made in AVONET, but a similar separation is used by Szarka and Lendvai 2024 (a comparative study on glucose in birds), where differences in glucose levels between piscivorous and carnivorous species are reported. We had reasons to think that certain differences in dietary nutrient composition, as discussed later, could make this distinction relevant.

      (28) Table 1: The variable "Maximum lifespan" is discussed and plotted as 'mass-adjusted maximum lifespan' and 'residual maximum lifespan'. First, this is confusing, the same name should be used throughout and it should be defined in the methods section. Second, it seems that non-linear effects were tested by using x + x^2. This is problematic statistically, orthogonal polynomials should be used instead (check the poly function in R). Also, how did you decide to test for non-linear effects in the case of lifespan but not the other continuous predictors? Should be described in the methods again.

Please, read ESM6. Data exploration was performed prior to carrying out these models. Orthogonal polynomials were considered to complicate the interpretation of the estimates, and therefore of the patterns predicted by the models, so raw polynomials were used. Clarifications have now been included in line 297.
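As a generic illustration of this trade-off (a sketch with simulated data, not the models of the paper; all variable names and coefficients are invented), raw polynomial terms are strongly collinear while orthogonalized terms are not, yet both parameterizations produce exactly the same fitted curve:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 50.0, 200)   # a lifespan-like predictor (hypothetical)
y = 3.0 + 0.5 * x - 0.004 * x**2 + rng.normal(0.0, 0.5, 200)

# Raw design matrix [1, x, x^2]: x and x^2 are strongly correlated.
raw = np.column_stack([np.ones_like(x), x, x**2])
print("corr(x, x^2):", round(float(np.corrcoef(x, x**2)[0, 1]), 3))

# Orthogonal polynomials (what R's poly(x, 2) returns, up to scaling):
# QR-orthogonalize the same columns.
ortho, _ = np.linalg.qr(raw)

beta_raw, *_ = np.linalg.lstsq(raw, y, rcond=None)
beta_ortho, *_ = np.linalg.lstsq(ortho, y, rcond=None)

# Both parameterizations span the same column space, so the fitted
# values are identical; the choice only affects how directly the
# individual coefficients can be interpreted.
fitted_same = bool(np.allclose(raw @ beta_raw, ortho @ beta_ortho))
print("same fitted values:", fitted_same)
```

Because the fits coincide, using raw terms sacrifices nothing in terms of the predicted pattern; it only re-expresses the same curve with coefficients that map directly onto the x and x² scales, at the cost of collinearity between the terms.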

      (29) Figure 2. From the figure label, now I see that relative lifespan is in fact residual. This is problematic, see Freckleton, R. P. (2009). The seven deadly sins of comparative analysis. Journal of evolutionary biology, 22(7), 1367-1375. Using body mass and lifespan side by side is preferred. This would also avoid forcing more emphasis on body mass over lifespan meaning that you subjectively introduce body mass as a key predictor, but lifespan and body size are highly correlated, so by this, you remove a large portion of variance that might in fact be better explained by lifespan.

      Please, read ESM6 for justifications on the use of residuals.

      Reviewer #2 (Recommendations for the authors):

      (1) If the semi-logarithmic relationship (glycation ~ log10(glucose)) is to be used to support the hypothesis about higher glycation resistance in species with high blood glucose (lines 318-321 and 386-388), it should be tested whether it is significantly better than the model assuming a simple linear relationship (i.e., glycation ~ glucose). Alternatively, if the coefficient is to be used to determine whether glycation rate slows down or accelerates with increasing glucose levels, log-log model (log10(glycation) ~ log10(glucose)) assuming power function relationship (glycation = a * glucose^b) should be used (as is for example in the literature about relationships between metabolic rates and body size). Probably the best approach would be to compare all three models (linear, semi-logarithmic, and log-log) and test if one performs significantly better. If none of them, then the linear model should be selected as the most parsimonious.

Different options (linear, both semi-logarithmic combinations, and log-log) have now been tested, with similar results. All of the models confirm a significant positive relationship between glucose and glycation. Moreover, when the variables are standardized (both glucose and glycation, either log-transformed or not), the estimated slope is almost identical across models. It is also lower than one, which for both the linear and the log-log model confirms the stated prediction. The log-log model, which shows a much lower DIC than the linear version, is now presented as the final model.
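To make the comparison concrete (a minimal sketch with simulated data, not the actual Bayesian models; the power-law exponent 0.6 and all other numbers are invented), the three functional forms can be fit on standardized variables, where the OLS slope equals the Pearson correlation and can therefore be compared directly across forms:

```python
import numpy as np

rng = np.random.default_rng(1)
glucose = rng.uniform(5.0, 30.0, 300)
# hypothetical power-law relationship glycation = a * glucose**b with b < 1
glycation = 2.0 * glucose**0.6 * rng.lognormal(0.0, 0.1, 300)

def std_slope(y, x):
    """OLS slope of standardized y on standardized x (equals Pearson r)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(np.polyfit(xs, ys, 1)[0])

slopes = {
    "linear":   std_slope(glycation, glucose),
    "semi-log": std_slope(glycation, np.log10(glucose)),
    "log-log":  std_slope(np.log10(glycation), np.log10(glucose)),
}
# With standardized variables all three slopes come out similar, positive,
# and below one -- the pattern described in the response.
print({k: round(v, 2) for k, v in slopes.items()})
```

On the log-log scale the slope estimates the exponent b of the power function glycation = a · glucoseᵇ, so a value below one directly indicates that glycation increases less than proportionally with glucose.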

      (2) ESM6, line 46: Please note that Kruskal-Wallis rank sum test in ESM1 shows that, compared to baseline glucose levels, stress-induced glucose levels have higher median values (not only higher variation). With this in mind, what is the argument here about increased variation being the main driver of stress-induced change in glucose levels based on? It seems that both the median values and variation differ between baseline and stress-induced levels, and this should be acknowledged here.

As discussed in the public answers, the Kruskal-Wallis test does not determine differences in means; it only indicates that the groups differ (implicitly, in their rank sums, which does not necessarily mean in their means), while the Levene test performed signals heteroskedasticity. This puts this feature of the data on firmer analytical ground. Of course, when looking at the data, a higher mean can be perceived, but nothing can be said about its statistical significance. Still, some subtle changes have been introduced in the corresponding section of ESM6.
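The distinction being drawn here can be illustrated with the two tests on simulated data (a sketch only, not the actual glucose measurements; the group sizes and distribution parameters are invented):

```python
import numpy as np
from scipy.stats import kruskal, levene

rng = np.random.default_rng(2)
# hypothetical baseline vs stress-induced glucose: the stressed group has
# both a slightly shifted location and a much larger spread
baseline = rng.normal(10.0, 1.0, 100)
stressed = rng.normal(11.0, 3.0, 100)

# Kruskal-Wallis only says the groups differ in their rank distribution;
# it does not establish a difference in means specifically.
_, p_kw = kruskal(baseline, stressed)

# Levene's test targets unequal variances (heteroskedasticity) directly.
_, p_lev = levene(baseline, stressed)

print(f"Kruskal-Wallis p = {p_kw:.3g}, Levene p = {p_lev:.3g}")
```

A significant Kruskal-Wallis result here is compatible with a shift in location, a change in spread, or both, which is why the Levene test is needed to attribute part of the difference specifically to increased variation.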

      (3) Have you recorded the sampling times? If yes, why not control them in the models? It is at least highly advisable to include the sampling times in the data (ESM5).

As indicated in ESM6 (lines 42-43), we do not have sampling times for most of the individuals (only for zebra finches and swifts), so this cannot be accounted for in the models.

      (4) If sampling times will remain uncontrolled statistically, I recommend mentioning this fact and its potential consequences (i.e., rather conservative results) in the Methods section of the main text, not only in ESM6.

A brief description of this has now been included in the main text (lines 129-132), referencing the more detailed discussion in the supplementary materials. Some subtle changes have also been made in the “Possible effects of stress” section of ESM6.

      (5) ESM6, lines 52-53: The lower repeatability in Tomasek et al.' study compared to your study is irrelevant to the argument about the conservative nature of your results (the difference in repeatability between both studies is most probably due to the broader taxonomic coverage of the current study). The important result in this context is that repeatability is lower when sampling time is not considered within Tomasek et al's data set (ESM1). Therefore, I suggest rewording "showing a lower species repeatability than that from our data" to "showing lower species repeatability when sampling time is not considered" to avoid confusion. Please also note that you refer here to species repeatability but, in ESM1, you calculate individual repeatability. Nevertheless, both individual and species repeatabilities are lower when not controlling for sampling time because the main driver, in that case, is an increased residual variance.

We recognize the confusion in the way the explanation was presented and have substantially reworded the section. However, we would like to point out that ESM1 shows both species and individual repeatability (for the Tomasek et al. 2022 data; for ours, only species repeatability, as we do not have repeated measurements per individual). Changes have now been made to make this more evident.

      (6) I recommend providing brief guidelines for the interpretation of VIFs to the readers, as well as a brief discussion of the obtained values and their potential importance.

Thank you for the recommendation. We have included a brief description in lines 230-231, as well as in the results section (lines 389-393).

      (7) Line: 264: Please note that the variance explained by phylogeny obtained from the models with other (fixed) predictors does not relate to the traits (glucose or glycation) per se but to model residuals.

      We appreciate the indication, and this has been rephrased accordingly (lines 280-286).

      (8) Change the term "confidence intervals" to "credible intervals" throughout the paper, since confidence interval is a frequentist term and its interpretations are different from Bayesian credible interval.

      Thank you for the remark, this has now been changed.

      (9) Besides lifespan, have you also considered quadratic terms for body mass? The plot in Figure 2A suggests there might be a non-linear relationship too.

A quadratic component of body mass did not show any significant effect on glucose in an alternative model. Also, a model with linear instead of log glucose (as used in other studies) did not perform better when comparing DICs, despite both showing a significant relationship between glucose and body mass. Therefore, the model presented in the manuscript remains the best of the options considered.

      (10) ESM6, lines 115-116: It is usually recommended that only factors with at least 6 or 8 levels are included as random effects because a lower number of levels is insufficient for a good estimation of variance.

      In a Bayesian approach this does not apply, as random and fixed factors are estimated similarly. 

      (11) Typos and other minor issues:

      a) Line 66: Delete "related".

      b) Figure 2: "B" label is missing in the plot.

      c) Reference 9: Delete "Author".

      d) References 15 and 83 are duplicated. Keep only ref. 83, which has the correct citation details.

      e) ESM6, line 49: Change "GLLM" to "GLMM".

Thank you for indicating these. They have now been corrected.

    1. Reviewer #1 (Public review):

      Summary:

      For each of the three key transcription factor (TF) proteins in E. coli, the authors generate a large library of TF binding site (TFBS) sequences on plasmids, such that each TFBS is coupled to the expression of a fluorescence reporter. By sorting the fluorescence of individual cells and sequencing their plasmids to identify each cell's TFBS sequence (sort-seq), they are able to map the landscape of these TFBSs to the gene expression level they regulate. The authors then study the topographical features of these landscapes, especially the number and distribution of local maxima, as well as the statistical properties of evolutionary paths on these landscapes. They find the landscapes to be highly rugged, with about as many local peaks as a random landscape would have, and with those peaks distributed approximately randomly in sequence space. The authors find that there are a number of peaks that produce regulation stronger than that of the wild-type sequence for each TF and that it is not too unlikely to reach one of those "high peaks" from a random starting sequence. Nevertheless, the basins of attractions for different peaks have significant overlap, which means that chance plays a major role in determining which peak a population will evolve to.

      Strengths:

      (1) The experiments and analysis of this paper are very well-executed and, by and large, very thorough (with an important exception identified below). I appreciated the systematic nature of the project, both the large-scale experiments done on three TFs with replicates and the systematic analysis of the resulting landscapes. This not only makes the paper easy to follow but also inspires confidence in their results since there is so much data and so many different ways of analyzing it. It's a great recipe for other studies of genotype-phenotype landscapes to follow.

      (2) Considering how technical the project was, I am really impressed at how easy to read I found the paper, and the authors deserve a lot of credit for making it so. They do a great job of building up the experiments and analyses step-by-step and explaining enough of the basics of the experimental design and the essence of each analysis in the main text without getting too complicated with details that can be left to the Methods or SI. Compared to other big data papers, this one was refreshingly not overwhelming.

      Weaknesses:

      (1) The main weakness of this paper, in my view, is that it felt disconnected from the larger body of work on fitness and genotype-phenotype landscapes, including previous data on TFBSs in E. coli, genotype-phenotype maps of TFBSs in other systems, protein sequence landscapes (e.g., from mutational scans or combinatorially-complete libraries), and fitness landscapes of genomic mutations (e.g., combinatorially-complete landscapes of antibiotic resistance alleles). I have no doubt the authors are experts in this literature, and they probably cite most of it already given the enormous number of references. But they don't systematically introduce and summarize what was already known from all that work, and how their present study builds on it, in the Abstract and Introduction, which left me wondering for most of the paper why this project was necessary. Eventually, the authors do address most of these points, but not until the end, in the Discussion. Readers who have no familiarity with this literature might read this paper thinking that it's the first paper ever to study topography and evolutionary paths on genotype-phenotype landscapes, which is not true.

      There were two points that made this especially confusing for me. First, in order to choose which nucleotides in the binding sites to vary, the authors invoke existing data on the diversity of these sequences (position-weight matrices from RegulonDB). But since those PWMs can imply a genotype-phenotype map themselves, an obvious question I think the authors needed to have answered right away in the Introduction is why it is insufficient for their question. They only make a brief remark much later in the Results that the PWM data is just observed sequence diversity and doesn't directly reflect the regulation strength of every possible TFBS sequence. But that is too subtle in my opinion, and such a critical motivation for their study that it should be a major point in the Introduction.

      The second point where the lack of motivation in the Introduction created confusion for me was that they report enormous levels of sign epistasis in their data, to the point where these landscapes look like random uncorrelated landscapes. That was really surprising to me since it contrasts with other empirical landscape data I'm familiar with. It was only in the Discussion that I found some significant explanation of this - namely that this could be a difference between prokaryotic TFBSs, as this paper studies, and the eukaryotic TFBSs that have been the focus of many (almost all?) previous work. If that is in fact the case - that almost all previous studies have focused on eukaryotic TFBSs or other kinds of landscapes, and this is the first to do a systematic test of prokaryotic TFBS, then that should be a clear point made in the Abstract and Introduction. (I find a comparable statement only in the very last paragraph of the Discussion.) If that's the case, then I would also find that point to be a much stronger, more specific conclusion of this paper to emphasize than the more general result of observing epistasis and contingency (as is currently emphasized in the Abstract), which has been discussed in tons of other papers. This raises all sorts of exciting questions for future studies - why do the landscapes of prokaryotic TFBSs differ so dramatically from almost all the other landscapes we've observed in biology? What does that mean for the evolutionary dynamics of these different systems?

      (2) I am a bit concerned about the lack of uncertainties incorporated into the results. The authors acknowledge several key limitations of their approach, including the discreteness of the sort-seq bins in determining possible values of regulation strength, the existence of a large number of unsampled sequences in their genotype space, as well as measurement noise in the fluorescence readouts and sequencing. While the authors acknowledge the existence of these factors, I do not see much attempt to actually incorporate the effect of these uncertainties into their conclusions, which I suspect may be important. For example, given the bin size for the fluorescence in sort-seq, how confident are they that every sequence that appears to be a peak is actually a peak? Is it possible that many of the peak sequences have regulation strengths above all their neighbors but within the uncertainty of the fluorescence, making it possible that it's not really a peak? Perhaps such issues would average out and not change the statistical nature of their results, which are not about claiming that specific sequences are peaks, just how many peaks there are. Nevertheless, I think the lack of this robustness analysis makes the results less convincing than they otherwise would be.

    1. Highest price you’d spend

      reply to u/Pope_Shady at https://old.reddit.com/r/typewriters/comments/1iwrlij/highest_price_youd_spend/

      Generally my cap for typewriter purchases is in the $20-35 range. Most of my favorite machines (the standards) were acquired for $5-10 and they're so much better than the portables. At these prices I'm not too worried about the level of work required. I regularly spend 3-4 times more money on a full reel of bulk typewriter ribbon than I do on a typical typewriter.

A few of my more expensive acquisitions:

      * I went as high as $100 on a machine (including shipping) to get a Royal Quiet De Luxe with a Vogue typeface that turned out to be in about as stunning a condition as one could hope for.
      * I went to $130 on an Olympia SM3, in part for its Congress elite typeface as well as an uncommon set of mathematical characters. I'm sure I could have gotten it for significantly less, but I wanted to help out the seller, and it was in solid condition except for worn bushings.
      * I also went to around $150 for an (uncommon in the US) early 30's Orga Privat 5 that was in solid shape. I've yet to run into another Orga in the wild in the US since.

It also bears saying that I don't mind buying "barn machines," as a large portion of the fun in collecting for me is cleaning, adjusting, and restoring them to full functionality. I was disappointed once to have bought a Remington Quiet-Riter for $10 only to discover it was in near mint condition and didn't need any work at all.

      I am at the point where I'm going to need to start selling machines, work at a local shop, or start my own shop if I'm going to keep up with the "hobby" and maintain a sane spouse simultaneously. If I didn't enjoy wrenching on machines so much, I would definitely be buying them from local shops for significantly more money, and I'd probably have far fewer.

      It's not talked about in great length in some typewriter collector spaces, but I think some of the general pricing "game", beyond just getting a "deal", is the answer to the questions: "What am I into this space for anyway? What makes it fun and interesting?" If you don't have the time, talent, tools, or inclination to do your own cleaning and restoration work, then paying $300-$600 for a nice machine in exceptional clean/restored condition from a shop is a totally valid choice and shouldn't be dismissed. Some are in it for the discussions of typewriters. Some are in it for the bargain hunt. Some just want to write. Some want rare gems. Some want common machines from famous writers. Others just want one "good" machine while others want all the machines. It's a multi-faceted space.

Whether or not these school districts win this case, the attention brought to the issue is a win in itself. But this is definitely a bigger issue than just schools. Social media is toxic to anyone, but most importantly to kids. These apps need to be kept in check with regard to what content is being posted, but the capitalist companies will forever claim that it's the consumers' fault and not theirs. These apps have been proven to be addictive, yet they're not regulated like other addictive substances.

Interestingly enough, the biggest ones that complain the most and refuse are the African-Americans. khiara: Really? rose: Absolutely. I can’t— I don’t know why. I just think it’s sort of an entitlement thing that comes with being a New York Medicaid recipient. They sort of figure that they’re special

I found this comment from Dr. Rose very alarming. The generalization of the many groups she listed beforehand, such as Hispanic and Bangladeshi patients, and the labels placed on these groups change the way people will treat them. As a teacher to her medical students, she should keep bias and stereotypes out of the medical field so all patients get equal treatment. This can be very harmful, as patients of a certain minority trying to receive treatment will get grouped with people of their own race.

m sure it is. I think it’s cultural. Somebody coming from the middle of Africa someplace is going to have a lot more issues than somebody coming from eastern Long Island is going to have. Plus, you’re going to have issues of indigency, lack of education, the whole. . . . It’s just poor people. Poor people don’t have the same level of education obviously. They don’t eat as well.

      This stereotype emphasizes a cultural and socioeconomic health stereotype. It suggests that an African person has more health issues than a rich person from Long Island. It also ignores any structural realities, from medical access to environmental circumstances to socioeconomic oppression. In addition, this operates under the false assumption that poor people are uneducated and unhealthy. Still, many educated and healthy people live in these unfortunate situations, and many communities medical professionals should be against such stereotyping.

Ask for the parent’s perspective. Clarify the parent’s feelings and beliefs on the issue. Ask questions to learn, not to pass judgment: “What are acceptable ways to you for Erica to express her angry feelings? What do you do at home? What do you find works? What doesn’t work? Would you be open to finding ways to discipline her other than hitting?”

      This part really got me thinking about why it’s so important to ask parents what they believe and how they feel about discipline instead of just judging them. I like how it suggests asking questions like “What works for you at home?” because it shows that everyone’s experience is different. It makes me wonder if the way they were raised affects what they think is okay for their kids. I also think this kind of honest talk could help with other tough issues in families too.

I was sitting down with this receptacle in front of me and each time I coughed, blood just came out that way. I said, ‘God, if you want, if this is curtains for me, then it’s okay.’

Crazy to think about! The archbishop had faced so much tragedy and so many near-death moments but can still share his powerful, joy-inspiring words

“As you just mentioned,” the Dalai Lama added, getting quite animated, “people think about money or fame or power. From the point of view of one’s own personal happiness, these are shortsighted. The reality, as the Archbishop mentioned, is that human beings are social animals. One individual, no matter how powerful, how clever, cannot survive without other human beings. So the best way to fulfill your wishes, to reach your goals, is to help others, to make more friends.

      “How do we create more friends?” he now asked rhetorically. “Trust. How do you develop trust? It’s simple: You show your genuine sense of concern for their well-being. Then trust will come. But if behind an artificial smile, or a big banquet, is a self-centered attitude deep inside of you, then there will never be trust. If you are thinking how to exploit, how to take advantage of them, then

      Page Topper

    1. . ‘Great man’ theory, as constructed by Marconi did not exclude marginal groups from the narrative; they were very much present, necessary even, but disempowered, denied anything other than the passive agency of an enthusiastic audience.

I would say it's arguable that marginal groups were truly included in this "great man" narrative if they were only relegated to being a passive audience. This role of passiveness can be applied to the relationship between corporations and consumers: one could argue major corporations view consumers as a passive audience, since their primary goal is profit above all else, stripping them of their agency. Additionally, as this sentence mentions marginal groups, I am reminded of current inclusion efforts for marginalized communities in our society. If they are merely present and have no power or agency, how truly included are they?

    1. In practice, most people find the heuristics themselves much more useful than the process of applying the heuristics. This is probably because exhaustively analyzing an interface is literally exhausting. Instead, most practitioners learn these heuristics and then apply them as they design ensuring that they don’t violate the heuristics as they make design choices. This incremental approach requires much less vigilance.

      I can see how this would make a lot of sense because trying to review every little detail of a design is just tiring. I also believe that it's just easier to simply have the heuristics at the back of your head as you design as opposed to having to go back and look over everything later. It's almost like being cognizant of good habits beforehand so that you don't need to make adjustments afterwards. But simultaneously, I do think that if you rely solely on heuristics, you will miss some of the problems that real users would spot. So although the approach is helpful, I do think that testing out the design with users is equally important.

major studios don’t spend that kind of money on movies like “Here.” “There’s no capes or explosions or aliens or superheroes or creatures,” Ulbrich explained. “It’s people talking, it’s families, it’s their loves and their joys and their sorrows. It’s their life.”

Fuck, people, man, and studios. We need to support movies in general, not just Marvel and Disney films.


    1. Usability tests can help you learn about lower level problems in a user interface (layout, labeling, flow, etc.), but they generally can’t help you learn about whether the design achieves its larger goals (whether it’s useful, valuable, meaningful, etc.). This is because a usability test doesn’t occur in the context of someone’s actual life, where those larger goals are relevant.

      I think this is an important point about usability testing. It helps find small issues like layout and navigation, but it doesn’t show if the design is actually useful in real life. I agree that testing in a lab setting can’t fully show if a product is valuable to users in their daily lives. Just because something is easy to use doesn’t mean people will actually want to use it. This makes me realize that other types of research, like real world testing or user feedback, are needed to truly understand if a design works.

    2. First, you need to decide who is representative of the stakeholders you are designing for and then find several people to invite to participate.

I really agree with how the paragraph underscores the importance of carefully selecting participants who genuinely represent your target users. It’s a reminder that the success of a usability test largely hinges on involving the right people—those who experience the problem researchers are trying to solve. I also think the practical challenges of recruiting are real; it is not always straightforward, and sometimes researchers have to get inventive about finding willing participants. That willingness to approach strangers or tap into mailing lists shows how usability testing isn’t just a theoretical exercise; it’s about real users and real feedback.

    3. The ease with which A/B tests can run, and the difficulty of measuring meaningful things, can lead designers to overlook the importance of meaningful things.

      This quote highlights a key challenge in empirical evaluation—just because something is easy to measure doesn’t mean it’s the right thing to measure. It made me think about how businesses often optimize for engagement metrics (like clicks or views) instead of deeper, more meaningful goals (like user satisfaction or ethical design). This reinforces the importance of balancing quantitative and qualitative evaluation methods to ensure that design decisions align with real-world impact.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews

      Reviewer #1 (Public Review):

      Summary:

      The authors have created a system for designing and running experimental pipelines to control and coordinate different programs and devices during an experiment, called Heron. Heron is based around a graphical tool for creating a Knowledge Graph made up of nodes connected by edges, with each node representing a separate Python script, and each edge being a communication pathway connecting a specific output from one node to an input on another. Each node also has parameters that can be set by the user during setup and runtime, and all of this behavior is concisely specified in the code that defines each node. This tool tries to marry the ease of use, clarity, and self-documentation of a purely graphical system like Bonsai with the flexibility and power of a purely code-based system like Robot Operating System (ROS).

      Strengths:

      The underlying idea behind Heron, of combining a graphical design and execution tool with nodes that are made as straightforward Python scripts, seems like a great way to get the relative strengths of each approach. The graphical design side is clear, self-explanatory, and self-documenting, as described in the paper. The underlying code for each node tends to also be relatively simple and straightforward, with a lot of the complex communication architecture successfully abstracted away from the user. This makes it easy to develop new nodes, without needing to understand the underlying communications between them. The authors also provide useful and well-documented templates for each type of node to further facilitate this process. Overall this seems like it could be a great tool for designing and running a wide variety of experiments, without requiring too much advanced technical knowledge from the users.

      The system was relatively easy to download and get running, following the directions and already has a significant amount of documentation available to explain how to use it and expand its capabilities. Heron has also been built from the ground up to easily incorporate nodes stored in separate Git repositories and to thus become a large community-driven platform, with different nodes written and shared by different groups. This gives Heron a wide scope for future utility and usefulness, as more groups use it, write new nodes, and share them with the community. With any system of this sort, the overall strength of the system is thus somewhat dependent on how widely it is used and contributed to, but the authors did a good job of making this easy and accessible for people who are interested. I could certainly see Heron growing into a versatile and popular system for designing and running many types of experiments.

      Weaknesses:

      (1) The number one thing that was missing from the paper was any kind of quantification of the performance of Heron in different circumstances. Several useful and illustrative examples were discussed in depth to show the strengths and flexibility of Heron, but there was no discussion or quantification of performance, timing, or latency for any of these examples. These seem like very important metrics to measure and discuss when creating a new experimental system.

      Heron is practically a thin layer of abstraction over signal passing across processes. Given its design approach, it is up to the code of each Node to deal with issues of timing, syncing and latency, and thus up to each user to make sure the Nodes they author fulfil their experimental requirements. Having said that, Heron provides a large number of tools to allow users to optimise the generated Knowledge Graphs for their use cases. To showcase these tools, we have expanded the third experimental example in the paper with three extra sections, two of which relate to Heron’s performance and syncing capabilities. One focuses on Heron’s CPU load requirements (and existing Heron tools to keep those at acceptable limits) and another on post-experiment synchronisation of all the different data sets a multi-Node experiment generates.

      (2) After downloading and running Heron with some basic test Nodes, I noticed that many of the nodes were each using a full CPU core on their own. Given that this basic test experiment was just waiting for a keypress, triggering a random number generator, and displaying the result, I was quite surprised to see over 50% of my 8-core CPU fully utilized. I don’t think that Heron needs to be perfectly efficient to accomplish its intended purpose, but I do think that some level of efficiency is required. Some optimization of the codebase should be done so that basic tests like this can run with minimal CPU utilization. This would then inspire confidence that Heron could deal with a real experiment that was significantly more complex without running out of CPU power and thus slowing down.

      The original Heron allowed the OS to choose how to manage resources over the required processes. We were aware that this could lead to significant use of CPU time, as well as occasionally significant dropping of packets (which was dependent on the OS and its configuration). This dropping happened mainly when the Node was running a secondary process (like the Unity game process in the 3rd example). To mitigate these problems, we have now implemented a feature allowing the user to choose the CPU core that each Node’s worker function runs on, as well as any extra processes the worker process initialises. This is accessible from the Saving secondary window of the Node. This stops the OS from swapping processes between CPUs and eliminates the dropping of packets due to OS behaviour. It also significantly reduces the utilised CPU time. To showcase this, we first ran the simple example mentioned by the reviewer. The computer running only background services was using 8% of CPU (8 cores). With the Heron GUI running but with no active Graph, the CPU usage went to 15%. With the Graph running and Heron’s processes running on OS-attributed CPU cores, the total CPU was at 65% (so very close to the reviewer’s 50%). By choosing a different CPU core for each of the three worker processes the CPU went down to 47%, and finally, when all processes were forced to run on the same CPU core, the CPU load dropped to 30%. So, Heron in its current implementation, running its GUI and 3 Nodes, takes 22% of CPU load. This is still not ideal but is a consequence of the overhead of running multiple processes vs multiple threads. We believe that, given Heron’s latest optimisation offering more control of system management to the user, the benefits of multi-process applications outweigh this hit in system resources.
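      The mechanism underneath this feature can be sketched as follows. This is an illustrative stand-in, not Heron's actual code (Heron exposes the choice through the Node's Saving secondary window): on Linux, a process can be pinned to a core with the standard-library call `os.sched_setaffinity`, which stops the scheduler from migrating it between cores.

```python
import os

def pin_to_core(pid: int, core: int) -> None:
    # Restrict the scheduler to a single CPU core for this process, so
    # the OS stops migrating it between cores. Available on Linux;
    # other OSes need their platform equivalent (e.g.
    # SetProcessAffinityMask on Windows).
    os.sched_setaffinity(pid, {core})

if hasattr(os, "sched_setaffinity"):
    pin_to_core(0, 0)  # pid 0 means "the calling process"; pin to core 0
    print(sorted(os.sched_getaffinity(0)))
```

Pinning each worker process to its own core (or forcing several onto one shared core) is what produced the 47% and 30% CPU figures above.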

      We have also increased the scope of the third example we provide in the paper, and there we describe in detail how a full-scale experiment with 15 Nodes (which is at the upper limit of the number of Nodes usually required in most experiments) impacts CPU load.

      Finally, we have added extra tasks to Heron’s roadmap focusing solely on optimisation (profiling, and using Numba for the time-critical parts of the Heron code).

      (3) I was also surprised to see that, despite being meant specifically to run on and connect diverse types of computer operating systems and being written purely in Python, the Heron Editor and GUI must be run on Windows. This seems like an unfortunate and unnecessary restriction, and it would be great to see the codebase adjusted to make it fully cross-platform compatible.

      This point was also mentioned by reviewer 2. This was a mistake on our part and has now been corrected in the paper. Heron (GUI and underlying communication functionality) can run on any machine that the underlying Python libraries run on, which means Windows, Linux (both x86 and Arm architectures) and MacOS. We have tested it on Windows (10 and 11, both x64), a Linux PC (Ubuntu 20.04.6, x64) and a Raspberry Pi 4 (Debian GNU/Linux 12 (bookworm), aarch64). The Windows and Linux versions of Heron have undergone extensive debugging, and all of the available Nodes (that are not OS specific) run on those two systems. We are in the process of debugging the Nodes’ functionality for the RasPi. The MacOS version, although functional, requires further work to make sure all of the basic Nodes are functional (which is not the case at the moment). We have also updated our manuscript (Multiple machines, operating systems and environments) to include the above information.

      (4) Lastly, when I was running test experiments, sometimes one of the nodes, or part of the Heron editor itself would throw an exception or otherwise crash. Sometimes this left the Heron editor in a zombie state where some aspects of the GUI were responsive and others were not. It would be good to see a more graceful full shutdown of the program when part of it crashes or throws an exception, especially as this is likely to be common as people learn to use it. More problematically, in some of these cases, after closing or force quitting Heron, the TCP ports were not properly relinquished, and thus restarting Heron would run into an "address in use" error. Finding and killing the processes that were still using the ports is not something that is obvious, especially to a beginner, and it would be great to see Heron deal with this better. Ideally, code would be introduced to carefully avoid leaving ports occupied during a hard shutdown, and furthermore, when the address in use error comes up, it would be great to give the user some idea of what to do about it.

      A lot of effort has been put into Heron to achieve a graceful shutdown of processes, especially when these run on different machines that do not know when the GUI process has closed. The code being suggested, to avoid leaving ports open, has been implemented, and this works properly when processes do not crash (Heron is terminated by the user) and almost always when there is a bug in a process that forces it to crash. In the version of Heron available during the reviewing process there were bugs that caused the above behaviour (Node code hanging and leaving zombie processes) on MacOS systems. These have now been fixed. There are very seldom instances, though, especially during Node development, where crashing processes will hang and need to be terminated manually. We have taken on board the reviewer’s comment that users should be made more aware of these issues and have also described this situation in the Debugging part of Heron’s documentation. There we explain the logging and other tools Heron provides to help users debug their own Nodes and how to deal with hanging processes.

      Heron is still in alpha (usable, but with bugs) and the best way to debug it and iron out all the bugs in all use cases is through usage by multiple users and error reporting (we would be grateful if the errors the reviewer mentions could be reported on Heron’s GitHub Issues page). We are always addressing and closing any reported errors, since this is the only way for Heron to transition from alpha to beta and eventually to production code quality.

      Overall I think that, with these improvements, this could be the beginning of a powerful and versatile new system that would enable flexible experiment design with a relatively low technical barrier to entry. I could see this system being useful to many different labs and fields. 

      We thank the reviewer for the positive and supportive words and for the constructive feedback. We believe we have now addressed all the raised concerns.

      Reviewer #2 (Public Review):

      Summary:

      The authors provide an open-source graphic user interface (GUI) called Heron, implemented in Python, that is designed to help experimentalists to

      (1) design experimental pipelines and implement them in a way that is closely aligned with their mental schemata of the experiments,

      (2) execute and control the experimental pipelines with numerous interconnected hardware and software on a network.

      The former is achieved by representing an experimental pipeline using a Knowledge Graph and visually representing this graph in the GUI. The latter is accomplished by using an actor model to govern the interaction among interconnected nodes through messaging, implemented using ZeroMQ. The nodes themselves execute user-supplied code in, but not limited to, Python.

      Using three showcases of behavioral experiments on rats, the authors highlighted three benefits of their software design:

      (1) the knowledge graph serves as a self-documentation of the logic of the experiment, enhancing the readability and reproducibility of the experiment,

      (2) the experiment can be executed in a distributed fashion across multiple machines that each has a different operating system or computing environment, such that the experiment can take advantage of hardware that sometimes can only work on a specific computer/OS, a commonly seen issue nowadays,

      (3) the users supply their own Python code for node execution, which is supposed to be more friendly to those who do not have a strong programming background.

      Strengths:

      (1) The software is light-weight and open-source, provides a clean and easy-to-use GUI,

      (2) The software answers the need of experimentalists, particularly in the field of behavioral science, to deal with the diversity of hardware that becomes restricted to run on dedicated systems.

      (3) The software has a solid design that seems to be functionally reliable and useful under many conditions, demonstrated by a number of sophisticated experimental setups.

      (4) The software is well documented. The authors pay special attention to documenting the usage of the software and setting up experiments using this software.

      Weaknesses:

      (1) While the software implementation is solid and has proven effective in designing the experiment showcased in the paper, the novelty of the design is not made clear in the manuscript. Conceptually, both the use of graphs and visual experimental flow design have been key features in many widely used software packages, as suggested in the background section of the manuscript. In particular, contrary to the authors’ claim that only pre-defined elements can be used in Simulink or LabView, Simulink introduced the MATLAB Function Block back in 2011, and Python code can be used in LabView since 2018. Such customization of nodes is akin to what the authors presented.

      In the Heron manuscript we have provided an extensive literature review of existing systems from which Heron has borrowed ideas. We never wished to claim that graphs and visual code are what set Heron apart, since these are technologies predating Heron by many years and implemented by a large number of software packages. Nor do we believe we have said that LabView or Simulink can utilise only predefined nodes. What we have said is that in such systems (like LabView, Simulink and Bonsai) the focus of the architecture is on prespecified low-level elements, while the ability for users to author their own is there, but only as an afterthought. The difference with Heron is that in the latter the focus is on the users developing their own elements. One could think of LabView-style software as node-based languages (with low-level visual elements like loops and variables) that also allow extra scripting, while Heron is a graphical wrapper around Python in which Nodes are graphical representations of whole processes. To our knowledge there is no other software that allows the very fast generation of graphical elements representing whole processes whose communication can also be defined graphically. Apart from this distinction, Heron also allows a graphical approach to writing code for processes that span different machines, which again to our knowledge is a novelty of our approach and one of its strongest points towards ease of experimental pipeline creation (without sacrificing expressivity).

      (2) The authors claim that the knowledge graph can be considered as a self-documentation of an experiment. I found it to be true to some extent. Conceptually it’s a welcome feature, and the fact that the same visualization of the knowledge graph can be used to run and control experiments is highly desirable (but see point 1 about novelty). However, I found it largely inadequate for a person to understand an experiment from the knowledge graph as visualized in the GUI alone. While the information flow is clear, and it seems easier to navigate a codebase for an experiment using this method, the design of the GUI does not make it a one-stop place to understand the experiment. Take the Knowledge Graph in Supplementary Figure 2B as an example, it is associated with the first showcase in the result section highlighting this self-documentation capability. I can see what the basic flow is through the disjoint graph where 1) one needs to press a key to start a trial, and 2) camera frames are saved into an avi file presumably using FFMPEG. Unfortunately, it is not clear what the parameters are and what each block is trying to accomplish without the explanation from the authors in the main text. Neither is it clear about what the experiment protocol is without the help of Supplementary Figure 2A.

      In my opinion, text/figures are still key to documenting an experiment, including its goals and protocols, but the authors could take advantage of the fact that they are designing a GUI where this information, with properly designed API, could be easily displayed, perhaps through user interaction. For example, in Local Network -> Edit IPs/ports in the GUI configuration, there is a good tooltip displaying additional information for the "password" entry. The GUI for the knowledge graph nodes can very well utilize these tooltips to show additional information about the meaning of the parameters, what a node does, etc, if the API also enforces users to provide this information in the form of, e.g., Python docstrings in their node template. Similarly, this can be applied to edges to make it clear what messages/data are communicated between the nodes. This could greatly enhance the representation of the experiment from the Knowledge graph.

      In the first showcase example in the paper, “Probabilistic reversal learning. Implementation as self-documentation”, we go through the steps that one would follow in order to understand the functionality of an experiment through Heron’s Knowledge Graph. The Graph is not just the visual representation of the Nodes in the GUI but also their corresponding code bases. We mention that the way Heron’s API constrains how a Node’s code is constructed (through an Actor-based paradigm) allows experimenters to easily go to the code base of a specific Node and understand its two functions (initialisation and worker) without getting bogged down in the code base of the whole Graph (since these two functions never call code from any other Nodes). Newer versions of Heron facilitate this easy access to the appropriate code by also allowing users to attach their favourite IDE to Heron and open in it any Node’s two scripts (worker and com) when they double click on the Node in Heron’s GUI. On top of this, Heron now (in the versions developed in answer to the reviewers’ comments) allows Node creators to add extensive comments on a Node, as well as separate comments on the Node’s parameters and input and output ports. Those can be seen as tooltips when one hovers over the Node (a feature that can be turned on or off with the Info button on every Node).

      As Heron stands at the moment, we have not claimed that the Heron GUI gives the full picture in the self-documentation of a Graph. We do, however, take note of the reviewer’s desire for the GUI to be the only tool a user would need in order to understand an experimental implementation. The solution to this is the same as the one described by the reviewer: using the GUI to show the user the parts of the code relevant to a specific Node, without the user having to go to a separate IDE or code editor. The reason this has not been implemented yet is the lack of a text editor widget in the underlying GUI library (DearPyGUI). This is in their roadmap for their next large release, and when it exists we will use it to implement exactly the idea the reviewer is suggesting, but also with the capability to not only read comments and code but also directly edit a Node’s code (see Heron’s roadmap). Heron’s API at the moment is ideal for providing such a text editor straight from the GUI.

      (3) The design of Heron was primarily with behavioral experiments in mind, in which highly accurate timing is not a strong requirement. Experiments in some other areas that this software is also hoping to expand to, for example, electrophysiology, may need very strong synchronization between apparatus, for example, the record timing and stimulus delivery should be synced. The communication mechanism implemented in Heron is asynchronous, as I understand it, and the code for each node is executed once upon receiving an event at one or more of its inputs. The paper, however, does not include a discussion, or example, about how Heron could be used to address issues that could arise in this type of communication. There is also a lack of information about, for example, how nodes handle inputs when their ability to execute their work function cannot keep up with the frequency of input events. Does the publication/subscription handle the queue intrinsically? Will it create problems in real-time experiments that make multiple nodes run out of sync? The reader could benefit from a discussion about this if they already exist, and if not, the software could benefit from implementing additional mechanisms such that it can meet the requirements from more types of experiments.

      In order to address the above lack of explanation (which the first reviewer also pointed out), we expanded the third experimental example in the paper with three more sections. One focuses solely on explaining how, in this example (which acquires and saves large amounts of data from separate Nodes running on different machines), one would be able to time-align the different data packets generated in different Nodes to each other. The techniques described there are directly implementable in experiments where the syncing requirements are more stringent than in the behavioural experiment we showcase (like in ephys experiments).
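      The kind of post-hoc time alignment described above can be sketched generically (this is an illustrative stand-in with made-up timestamps, not Heron's actual tooling): given the sorted timestamp log of each Node, every packet from one stream is matched to the nearest-in-time packet of the other by binary search.

```python
from bisect import bisect_left

def nearest_indices(ref_ts, other_ts):
    """For each timestamp in ref_ts, return the index of the closest
    timestamp in the sorted list other_ts (nearest-neighbour alignment)."""
    out = []
    for t in ref_ts:
        i = bisect_left(other_ts, t)  # first entry >= t
        if i == 0:
            out.append(0)
        elif i == len(other_ts):
            out.append(len(other_ts) - 1)
        else:
            # pick whichever neighbour is closer in time
            out.append(i if other_ts[i] - t < t - other_ts[i - 1] else i - 1)
    return out

# Hypothetical logs: camera frames at ~30 Hz vs. another Node's packets.
camera = [0.000, 0.033, 0.066, 0.100]
other  = [0.001, 0.034, 0.051, 0.068, 0.099]
print(nearest_indices(camera, other))  # -> [0, 1, 3, 4]
```

The same matching works whether the timestamps come from a shared clock or from hardware sync pulses recorded by both Nodes.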

      Regarding what happens to packets when the worker function of a Node is too slow to handle its traffic, this is mentioned in the paper (Code architecture paragraph): “Heron is designed to have no message buffering, thus automatically dropping any messages that come into a Node’s inputs while the Node’s worker function is still running.” This is also explained in more detail in Heron’s documentation. The reasoning for a no-buffer system (as described in the documentation) is that for the use cases Heron is designed to handle, we believe there is no situation where a Node would receive large amounts of data in bursts while receiving very little data the rest of the time (in which case a buffer would make sense). Nodes in most experiments will either be data intensive but with a constant or near-constant data receiving speed (e.g. input from a camera or ephys system) or will have a variable data load reception but always with small data loads (e.g. buttons). The second case is not an issue, and the first case cannot be dealt with by a buffer but only by appropriate code design, since buffering data coming into a Node that is too slow for its input rate will just postpone the inevitable crash. Heron’s architecture principle in this case is to allow these ‘mistakes’ (i.e. packet dropping) to happen so that the pipeline continues to run, and to transfer the responsibility of making Nodes fast enough to the author of each Node. At the same time, Heron provides tools (see the Debugging section of the documentation and the time alignment paragraph of the “Rats playing computer games” example in the manuscript) that make it easy to detect packet drops and either correct them or allow them, but also to allow time alignment between incoming and outgoing packets. In the very rare case where a buffer is required, Heron’s do-it-yourself logic makes it easy for a Node developer to implement their own Node-specific buffer.
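      The drop-rather-than-buffer policy can be illustrated with a size-one mailbox where a fresh message evicts an unprocessed one (an illustrative stand-in, not Heron's actual implementation, which does this at the ZeroMQ socket level):

```python
import queue

# Size-one "mailbox": a slow worker only ever sees the newest message;
# everything that arrived while it was busy is silently dropped.
latest = queue.Queue(maxsize=1)

def push(msg):
    """Offer a message; if the worker has not consumed the previous
    one yet, drop the stale message instead of buffering."""
    try:
        latest.put_nowait(msg)
    except queue.Full:
        try:
            latest.get_nowait()  # discard the unprocessed message
        except queue.Empty:
            pass
        latest.put_nowait(msg)

for i in range(5):  # a producer that outruns the (absent) worker
    push(i)
print(latest.get_nowait())  # -> 4
```

With such a policy, a too-slow worker degrades gracefully (it skips messages) instead of accumulating an ever-growing backlog and crashing later.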

      (4) The authors mentioned in "Heron GUI’s multiple uses" that the GUI can be used as an experimental control panel where the user can update the parameters of the different Nodes on the fly. This is a very useful feature, but it was not demonstrated in the three showcases. A demonstration could greatly help to support this claim.

      As the reviewer mentions, we have found the Heron GUI’s double role as an on-line experimental controller a very useful capability during our experiments. We have expanded the last experimental example to also showcase this, by showing how in the “Rats playing computer games” experiment we used the parameters of two Nodes to change the arena’s behaviour while the experiment was running, depending on how the subject was behaving at the time (thus exploring a much larger set of parameter combinations faster during the exploratory periods of constructing our shaping protocols).

      (5) The API for node scripts can benefit from having a better structure as well as having additional utilities to help users navigate the requirements, and provide more guidance to users in creating new nodes. A more standard practice in the field is to create three abstract Python classes, Source, Sink, and Transform that dictate the requirements for initialisation, work_function, and on_end_of_life, and provide additional utility methods to help users connect between their code and the communication mechanism. They can be properly docstringed, along with templates. In this way, the com and worker scripts can be merged into a single unified API. A simple example that can cause confusion in the worker script is the "worker_object", which is passed into the initialise function. It is unclear what this object this variable should be, and what attributes are available without looking into the source code. As the software is also targeting those who are less experienced in programming, setting up more guidance in the API can be really helpful. In addition, the self-documentation aspect of the GUI can also benefit from a better structured API as discussed in point 2 above.

      The reviewer is right that using abstract classes to expose the required API to users would be a more standard practice. The reason we chose not to do this was to keep Heron easily accessible to entry-level Python programmers who are not yet familiar with object-oriented programming ideas. So, instead of providing abstract classes, we expose only the implementation of three functions which are part of the worker classes, while the classes themselves are not seen by the users of the API. The point about users’ access to more information regarding a few objects used in the API (the worker object, for example) has been taken on board, and we have now addressed this by type hinting all these objects, both in the templates and, more importantly, in the automatically generated code that Heron now creates when a user chooses to create a Node graphically (a feature of Heron not present in the version available at the initial submission of this manuscript).
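      The shape of such a type-hinted worker script might look like the sketch below. The three function names (initialise, work_function, on_end_of_life) and the worker_object come from the discussion above, but the class body and signatures here are only indicative; Heron's real templates define the exact types.

```python
from typing import Any, Dict, List, Optional

class WorkerObject:
    """Hypothetical stand-in for the object Heron passes to initialise();
    the real class lives in Heron's code base. Type hinting it in the
    templates lets users see which attributes are available without
    reading Heron's source."""
    def __init__(self) -> None:
        self.parameters: Optional[List[Any]] = None

def initialise(worker_object: WorkerObject) -> bool:
    # Read the Node's GUI parameters once, before the work loop starts.
    worker_object.parameters = worker_object.parameters or []
    return True

def work_function(data: List[Any], parameters: List[Any]) -> Dict[str, Any]:
    # Called on every message arriving at the Node's input(s);
    # the returned value is sent out of the Node's output(s).
    return {'output': data}

def on_end_of_life() -> None:
    # Release resources (files, serial ports, etc.) at shutdown.
    pass
```

The hints make it immediately visible in any IDE what the worker_object offers and what each function is expected to return.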

      (6) The authors should provide more pre-defined elements. Even though the ability for users to run arbitrary code is the main feature, the initial adoption of a codebase by a community, in which many members are not so experienced with programming, is the ability for them to use off-the-shelf components as much as possible. I believe the software could benefit from a suite of commonly used Nodes.

      There are currently 12 Node repositories in the Heron-repositories project on GitHub with more than 30 Nodes, 20 of which are for general use (not implementing a specific experiment’s logic). This list will continue to grow, but we fully appreciate the truth of the reviewer’s comment that adoption will depend on the existence of a large number of commonly used Nodes (for example Numpy and OpenCV Nodes), and we are working towards this goal.

      (7) It is not clear to me if there is any capability or utilities for testing individual nodes without invoking a full system execution. This would be critical when designing new experiments and testing out each component.

      There is no capability to run the code of an individual Node outside Heron’s GUI. A user could potentially design and test parts of a Node’s code before adding them to the Node, but we have found this to be a highly inefficient way of developing new Nodes. In our hands, the best approach for Node development was to quickly generate test inputs and/or outputs using the “User Defined Function 1I 1O” Node, where one can quickly write a function and make it accessible from a Node. Those test outputs can then be pushed into the Node under development, or its outputs can be pushed into the test function, allowing for incremental development without having to connect the Node to the Nodes it would be connected to in an actual pipeline. For example, one can easily create a small function that, when a user presses a key, generates the same output (if run from a “User Defined Function 1I 1O” Node) as an Arduino Node reading some buttons. This output can then be passed into an experiment-logic Node under development that needs to do something with this input. In this way, during Node development, Heron allows the generation of simulated hardware inputs and outputs without actually running the actual hardware. We have added this way of developing Nodes to our manuscript as well (Creating a new Node).
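      The simulated-hardware trick described above might look like the following pair of functions. The message format, key-to-button mapping and function names are hypothetical; the real Arduino Node defines its own output format.

```python
# Hypothetical stand-in, of the kind one would run from a
# "User Defined Function 1I 1O" Node: on a key press it emits the same
# kind of message an Arduino Node reading buttons would, so the
# downstream experiment-logic Node can be developed without hardware.
def simulated_button_press(key: str):
    key_to_button = {'a': 0, 'b': 1}  # assumed key-to-button mapping
    if key in key_to_button:
        return {'button': key_to_button[key], 'state': 'pressed'}
    return None  # ignore other keys, as the real Node would ignore noise

# The experiment-logic Node under development can then be fed directly:
def experiment_logic(message):
    if message and message['state'] == 'pressed':
        return f"reward port {message['button']} armed"
    return "no-op"

print(experiment_logic(simulated_button_press('a')))  # -> reward port 0 armed
```

Swapping the simulated function for the real Arduino Node later requires no change to the experiment-logic Node, since both emit the same message shape.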

      Reviewer #3 (Public Review):

      Summary:

      The authors present a Python tool, Heron, that provides a framework for defining and running experiments in a lab setting (e.g. in behavioural neuroscience). It consists of a graphical editor for defining the pipeline (interconnected nodes with parameters that can pass data between them), an API for defining the nodes of these pipelines, and a framework based on ZeroMQ, responsible for the overall control and data exchange between nodes. Since nodes run independently and only communicate via network messages, an experiment can make use of nodes running on several machines and in separate environments, including on different operating systems.

      Strengths:

      As the authors correctly identify, lab experiments often require a hodgepodge of separate hardware and software tools working together. A single, unified interface for defining these connections and running/supervising the experiment, together with flexibility in defining the individual subtasks (nodes) is therefore a very welcome approach. The GUI editor seems fairly intuitive, and Python as an accessible programming environment is a very sensible choice. By basing the communication on the widely used ZeroMQ framework, they have a solid base for the required non-trivial coordination and communication. Potential users reading the paper will have a good idea of how to use the software and whether it would be helpful for their own work. The presented experiments convincingly demonstrate the usefulness of the tool for realistic scientific applications.

      Weaknesses:

      (1) In my opinion, the authors somewhat oversell the reproducibility and "self-documentation" aspect of their solution. While it is certainly true that the graph representation gives a useful high-level overview of an experiment, it can also suffer from the same shortcomings as a "pure code" description of a model - if a user gives their nodes and parameters generic/unhelpful names, reading the graph will not help much.

      To our understanding, this is a problem that no software solution can fully address. Even so, we argue that a visual representation of how the different inputs and outputs connect to each other is a substantial benefit over "pure code", especially when the developer of the experiment has used poorly chosen variable names.

      (2) Making the link between the nodes and the actual code is also not straightforward, since the code for the nodes is spread out over several directories (or potentially even machines), and not directly accessible from within the GUI. 

      This is not accurate. The obligatory code of a Node always lives within a single folder, and Heron’s API makes it rather cumbersome to spread scripts relating to a Node across separate folders. The Node folder can of course be copied across machines, which is why Heron is tightly integrated with git practices (it even politely asks the user, with popup windows, to create git repositories of any Nodes they create with Heron’s automatic Node generator). Heron’s documentation is also very clear on the folder structure of a Node, which keeps the required code in the same place across machines and, more importantly, across experiments and labs. Regarding direct accessibility of the code from the GUI, we took the reviewer’s comment on board and have taken a first step towards correcting this. Users can now attach their favourite IDE to Heron and double-click on any Node to open its two main scripts (com and worker) in that IDE, embedded in whatever code project they choose (also set in Heron’s settings window). On top of this, Heron now allows notes to be added both to a Node and to all of its parameters, inputs and outputs, which can be viewed by hovering the mouse over them in the Node’s GUI. The final step towards GUI-code integration will be an in-GUI code editor, but this must wait for further development of Heron’s underlying GUI library, DearPyGUI.
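For illustration, a Node's obligatory layout is a single folder holding the two main scripts mentioned above (the file names here are schematic; the canonical structure is described in Heron's documentation):

```
my_node/                  # one folder per Node, kept as its own git repository
├── my_node_com.py        # communication process: parameters, inputs, outputs
└── my_node_worker.py     # worker process: the experimenter's actual logic
```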

      (3) The authors state that "[Heron’s approach] confers obvious benefits to the exchange and reproducibility of experiments", but the paper does not discuss how one would actually exchange an experiment and its parameters, given that the graph (and its json representation) contains user-specific absolute filenames, machine IP addresses, etc, and the parameter values that were used are stored in general data frames, potentially separate from the results. Neither does it address how a user could keep track of which versions of files were used (including Heron itself).

      Heron’s Graphs, like any experimental implementation, must contain machine-specific strings. These are accessible either from Heron’s GUI when a Graph json file is opened or from the json file itself. In this regard Heron does nothing different from other software, except that it saves graphs as human-readable json files that users can easily edit directly.

      Heron provides a method for users to save every change to the Node parameters that happens during an experiment, so that it can be fully reproduced. The generated dataframes are saved in the folders specified by the user in each of the Nodes (and all those paths are saved in the Graph’s json file). We understand that Heron gives the user considerable freedom to generate data files wherever they want (this versatility is Heron’s main reason to exist), but it makes sure every file path gets recorded for subsequent reproduction. In this respect Heron behaves much like any other open-source software. What we wanted to emphasise as Heron’s benefit for exchange and reproducibility is the ability of experimenters to take a Graph from another lab (with its machine-specific file paths and IP addresses), examine its graphical representation, and quickly tweak it to run on their own systems. This is feasible because a Heron experiment is constructed from a small number of Nodes (usually 5 to 15), whose file paths can be trivially changed in the GUI or directly in the json file, while the LAN setup of the machines used can easily be reconstructed from the information saved in the secondary GUIs.
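Because a Graph is saved as plain json, adapting another lab's file paths can even be scripted. A minimal sketch follows; the key names (`nodes`, `save_path`) are hypothetical, not Heron's actual json schema:

```python
import json

# Hypothetical graph snippet -- the key names below are illustrative,
# not Heron's actual json schema.
graph = {
    "nodes": [
        {"name": "Camera", "save_path": "C:/lab_a/data/session1"},
        {"name": "Levers", "save_path": "C:/lab_a/data/session1"},
    ]
}

def retarget_paths(graph, old_prefix, new_prefix):
    """Return a copy of the graph with every save path re-rooted."""
    new_graph = json.loads(json.dumps(graph))  # deep copy via json round-trip
    for node in new_graph["nodes"]:
        node["save_path"] = node["save_path"].replace(old_prefix, new_prefix, 1)
    return new_graph

local = retarget_paths(graph, "C:/lab_a", "/home/lab_b")
print(local["nodes"][0]["save_path"])  # /home/lab_b/data/session1
```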

      Where Heron needs to improve (and this is a major point on Heron’s roadmap) is in integrating saved experiments with the git versions of Heron and of the Nodes that were used for that specific save. This, we appreciate, is very important for full reproducibility of an experiment, and it is a feature we will soon implement. More specifically, a saved graph will record the versions of all repositories used, and on load the code base will be taken from those recorded versions rather than from the current head of each repository. We are currently working on this feature and, as our roadmap indicates, it will be implemented by the release of Heron 1.0. 
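As a sketch of the kind of version recording planned (this is not Heron's implementation, just one way it could work with the standard library and the `git` command line), a saved Graph could bundle the HEAD commit of every repository involved:

```python
import subprocess
import tempfile

def head_commit(repo_path):
    """Return the HEAD commit hash of a git repository, or None if the
    path is not a repository (or git is not installed)."""
    try:
        result = subprocess.run(
            ["git", "-C", repo_path, "rev-parse", "HEAD"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:   # git executable not on PATH
        return None
    if result.returncode != 0:  # not inside a git repository
        return None
    return result.stdout.strip()

# A saved Graph could then store the version of every Node repository:
versions = {repo: head_commit(repo) for repo in [".", tempfile.gettempdir()]}
```

On load, each recorded hash could then be checked out before the corresponding Node's code is imported.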

      (4) Another limitation that in my opinion is not sufficiently addressed is the communication between the nodes, and the effect of passing all communications via the host machine and SSH. What does this mean for the resulting throughput and latency - in particular in comparison to software such as Bonsai or Autopilot? The paper also states that "Heron is designed to have no message buffering, thus automatically dropping any messages that come into a Node’s inputs while the Node’s worker function is still running."- it seems to be up to the user to debug and handle this manually?

      There are a few points raised here that require addressing. The first is Heron’s requirement to pass all communication through the main (GUI) machine. We understand (and also state in the manuscript) that this is a limitation that needs to be addressed. We plan to do this by adding the ability for Heron to run headless (see our roadmap). This will allow whole Heron pipelines to run on a second machine, communicating with the main pipeline (running on the GUI machine) through special Nodes. Experimenters will thus be able to define whole pipelines on secondary machines, where the data exchanged between Nodes stays on the machine running the pipeline. This is an important feature for Heron and will be one of the first to be implemented next (after the integration of the saving system with git). 

      The second point concerns Heron’s throughput and latency. Our original manuscript did not describe Heron’s capabilities in this respect, and both other reviewers mentioned this as a limitation. As mentioned above, we have now addressed this by adding a section to our third experimental example that fully describes how much CPU is required to run a full experimental pipeline spanning two machines and also utilising non-Python executables (a Unity game). This gives an overview of how computationally heavy pipelines can run on ordinary computers, given adequate optimisation and use of Heron’s feature of forcing a Node’s Worker process onto a specific core. At the same time, Heron’s use of the 0MQ protocol ensures there are no other delays or speed limitations in message passing: messages passed within the same machine amount to an exchange of memory pointers, while messages passed between machines face only the standard speed limitations of the Local Area Network’s ethernet cards. 
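As a generic illustration of this kind of measurement (standard library only, not Heron's actual benchmark code), the per-hop cost of an in-machine transport can be estimated as follows:

```python
import time
from multiprocessing import Pipe

# Generic sketch, not Heron-specific: time 1000 one-hop messages of
# 1 KB each through a multiprocessing Pipe held in a single process.
parent, child = Pipe()
payload = b"x" * 1024
n = 1000

t0 = time.perf_counter()
for _ in range(n):
    parent.send(payload)   # enqueue one message...
    child.recv()           # ...and immediately consume it
elapsed = time.perf_counter() - t0

print(f"mean per-message cost: {elapsed / n * 1e6:.1f} microseconds")
```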

      Finally, regarding Heron’s message-dropping feature: as mentioned above, this is an architectural decision based on the message-passing use cases we expect Heron to encounter. For a full explanation of the logic, please see our answer to the 3rd comment by Reviewer 2.
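To make the drop-on-busy policy concrete, here is a minimal standard-library sketch (not Heron's actual implementation): a Node holds at most one pending message and silently discards anything that arrives while its worker is still occupied.

```python
import queue

inbox = queue.Queue(maxsize=1)   # at most one message waiting for the worker
dropped = 0

def deliver(message):
    """Offer a message to the Node; drop it if the worker is behind."""
    global dropped
    try:
        inbox.put_nowait(message)
    except queue.Full:
        dropped += 1             # message silently discarded, never buffered

# Five frames arrive while the worker has not yet consumed any of them:
for frame in range(5):
    deliver(frame)

print(inbox.get_nowait(), dropped)  # 0 4 -- only the first frame survives
```

This keeps a slow worker operating on the freshest data it has accepted instead of falling ever further behind a growing buffer.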

      (5) As a final comment, I have to admit that I was a bit confused by the use of the term "Knowledge Graph" in the title and elsewhere. In my opinion, the Heron software describes "pipelines" or "data workflows", not knowledge graphs - I’d understand a knowledge graph to be about entities and their relationships. As the authors state, it is usually meant to make it possible to "test propositions against the knowledge and also create novel propositions" - how would this apply here?

      We have described Heron as a Knowledge Graph instead of a pipeline, data workflow or computation graph in order to emphasise Heron’s distinct operation in contrast to what one would consider a standard pipeline or data workflow generated by other visual-programming software (like LabView and Bonsai). The difference lies in what a user should think of as the base element of a graph, i.e. the Node. In other visual programming paradigms, the Node is defined as a low-level computation, usually a language keyword, a flow-control construct or some simple function, and the logic is generated by composing the visual elements (Nodes) together. In Heron the Node is to be thought of as a process, which can be of arbitrary complexity, and the logic of the graph is composed by the user both within each Node and in the way the Nodes are combined. This is an important distinction in Heron’s basic operating logic and it is, we argue, the main way Heron allows flexibility in what can be achieved while retaining ease of graph composition (users define for themselves the level of complexity and functionality encompassed within each Node). We found that calling this approach a computation graph (which it is), or a pipeline, or a data workflow would not accentuate this difference. The term Knowledge Graph seemed the most appropriate, as it captures the essence of the variable information complexity (even in terms of the length of the shortest string required to describe it) defined by a Node.

      Recommendations for the authors:  

      Reviewer #1 (Recommendations For The Authors):

      -  No buffering implies dropped messages when a node is busy. It seems like this could be very problematic for some use cases... 

      This is a design principle of Heron. We have now provided a detailed explanation of the reasoning behind it in our answer to Reviewer 2 (Paragraph 3) as well as in the manuscript. 

      -  How are ssh passwords stored, and is it secure in some way or just in plain text?  

      For now, passwords are stored in plain text in an unencrypted file that is not part of the repo (if one gets Heron from the repo). Eventually we would like to move to private/public key pairs, but this is not a priority due to the local nature of Heron’s use cases (all machines in an experiment are expected to connect over a LAN).  

      Minor notes / copyedits:

      -  Figure 2A: right and left seem to be reversed in the caption. 

      They were. This is now fixed. 

      -  Figure 2B: the text says that proof of life messages are sent to each worker process but in the figure, it looks like they are published by the workers? Also true in the online documentation.  

      The Figure caption was wrong. This is now fixed.

      -  psutil package is not included in the requirements for GitHub

      We have now included psutil in the requirements.

      -  GitHub readme says Python >=3.7 but Heron will not run as written without python >= 3.9 (which is alluded to in the paper)

      The new Heron updates require Python 3.11. We have now updated GitHub and the documentation to reflect this.

      -  The paper mentions that the Heron editor must be run on Windows, but this is not mentioned in the Github readme.  

      This was an error in the manuscript that we have now corrected.

      -  It’s unclear from the readme/manual how to remove a node from the editor once it’s been added.  

      We have now added an X button on each Node to complement the Del key on the keyboard (for macOS users, whose keyboards often lack this key).

      -  The first example experiment is called the Probabilistic Reversal Learning experiment in text, but the uncertainty experiment in the supplemental and on GitHub.  

      We have now used the correct name (Probabilistic Reversal Learning) in both the supplemental material and on GitHub

      -  Since Python >=3.9 is required, consider using fstrings instead of str.format for clarity in the codebase  

      Thank you for the suggestion. Recent Heron development has been using f-strings, and we will refactor the rest of the codebase in the near future.

      -  Grasshopper cameras can run on linux as well through the spinnaker SDK, not just Windows.  

      Fixed in the manuscript. 

      -  Figure 4: Square and star indicators are unclear.

      Increased the size of the indicators to make them clear.

      -  End of page 9: "an of the self" presumably a typo for "off the shelf"?  

      Corrected.

      -  Page 10 first paragraph. "second root" should be "second route"

      Corrected.

      -  When running Heron, the terminal constantly spams Blowfish encryption deprecation warnings, making it difficult to see the useful messages.  

      The solution to this problem is to either update paramiko or install Heron through pip. This possible issue is mentioned in the documentation.

      -  Node input /output hitboxes in the GUI are pretty small. If they could be bigger it would make it easier to connect nodes reliably without mis-clicks.

      We have redone the Node GUI, also increasing the size of the In/Out points.

      Reviewer #2 (Recommendations For The Authors):

      (1) There are quite a few typos in the manuscript, for example: "one can accessess the code", "an of the self", etc.  

      Thanks for the comment. We have now screened the manuscript for possible typos.

      (2) Heron’s GUI can only run on Windows! This seems to be the opposite of the key argument about the portability of the experimental setup.  

      As explained in the answers to Reviewer 1, Heron can run on most machines supported by its underlying Python libraries, i.e. Windows and Linux (both x86 and Arm architectures). We have tested it on Windows (10 and 11, both x64), a Linux PC (Ubuntu 20.04.6, x64) and a Raspberry Pi 4 (Debian GNU/Linux 12 (bookworm), aarch64). We have now revised the manuscript and the GitHub repo to reflect this.

      (3) Currently, the output is displayed along the left edge of the node, but the yellow dot connector is on the right. It would make more sense to have the text displayed next to the connectors.  

      We have redesigned the Node GUI and have now placed the Out connectors on the right side of the Node.

      (4) The edges are often occluded by the nodes in the GUI. Sometimes it leads to some confusion, particularly when the number of nodes is large, e.g., Fig 4.

      This is something that is dependent on the capabilities of the DearPyGUI module. At the moment there is no way to control the way the edges are drawn.

      Reviewer #3 (Recommendations For The Authors):

      A few comments on the software and the documentation itself:

      - From a software engineering point of view, the implementation seems to be rather immature. While I get the general appeal of "no installation necessary", I do not think that installing dependencies by hand and cloning a GitHub repository is easier than installing a standard package.

      We have now added a pip install capability which also creates a Heron command line command to start Heron with. 

      -The generous use of global variables to store state (minor point, given that all nodes run in different processes), boilerplate code that each node needs to repeat, and the absence of any kind of automatic testing do not give the impression of a very mature software (case in point: I had to delete a line from editor.py to be able to start it on a non-Windows system).  

      As mentioned, the use of global variables in the worker scripts is acceptable partly due to the multi-process nature of the design, and we have found it a friendly approach for MATLAB users who are just starting with Python (a serious consideration for Heron). Also, the parts of the code that would require a singleton (the Editor, for example) are treated as scripts with global variables, while the parts that require the construction of objects (the Node, for example) are fully embedded in classes. A future refactoring might also make all parts of the code not seen by the user fully object-oriented, but this is a decision whose pros and cons need to be weighed first. 

      Absence of testing is an important issue we recognise, but Heron is a GUI app, and non-trivial unit tests would require a keystroke/mouse-movement emulator (like QTest or pytest-qt for Qt-based GUIs). This will be dealt with in the near future (using more general solutions like PyAutoGUI), but it needs a serious amount of effort (quite a bit more than writing unit tests for non-GUI software). More importantly, such tests are nowhere near as robust as standard unit tests (because the GUI keeps changing throughout development), making automated test authoring almost as laborious a process as the one it is supposed to automate.

      -  From looking at the examples, I did not quite see why it is necessary to write the ..._com.py scripts as Python files, since they only seem to consist of boilerplate code and variable definitions. Wouldn’t it be more convenient to represent this information in configuration files (e.g. yaml or toml)?  

      The com script is not a configuration file; it is a script that launches the Node’s communication process. We could move the variable definitions to a separate toml file (which the com script would then read). The pros and cons of such a setup should be considered in a future refactoring.

      Minor comments for the paper:

      -  p.7 (top left): "through its return statement" - the worker loop is an infinite loop that forwards data with a return statement?  

      This is now corrected. The worker loop is an infinite loop and does not return anything; at each iteration it pushes data to the Node’s output.

      -  p.9 (bottom right): "of the self" → "off-the-shelf"  

      Corrected.

      -  p.10 (bottom left): "second root" → "second route"  

      Corrected.

      -  Supplementary Figure 3: Green start and square seem to be swapped (the green star on top is a camera image and the green star on the bottom is value visualization - inversely for the green square).  

      The star and square have been swapped around.

      -  Caption Supplementary Figure 4 (end): "rashes to receive" → "rushes to receive"  

      Corrected.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1:

      The entire study is based on only 2 adult animals, that were used for both the single cell dataset and the HCR. Additionally, the animals were caught from the ocean preventing information about their age or their life history. This makes the n extremely small and reduces the confidence of the conclusions. 

      This statement is incorrect.  While the scRNAseq was indeed performed in two animals (n=2), the HCR-FISH was performed in 3-5 animals (depending on the probe used).  These were different animals from those used for the scRNAseq.  The number of animals used has now been included in the manuscript.

      All the fluorescent pictures present in this manuscript present red nuclei and green signals being not color-blind friendly. Additionally, many of the images lack sufficient quality to determine if the signal is real. Additional images of a control animal (not eviscerated) and of a negative control would help data interpretation. Finally, in many occasions a zoomed out image would help the reader to provide context and have a better understanding of where the signal is localized. 

      Fluorescent photos have been changed to color-blind-friendly colors. Diagrams, arrows, and new photos have been included to guide readers to the signal or labeling in cells. Controls for HCR-FISH and labeling in normal intestines have also been included.  

      Reviewer #2:

      The spatial context of the RNA localization images is not well represented, making it difficult to understand how the schematic model was generated from the data. In addition, multiple strong statements in the conclusion should be better justified and connected to the data provided.

      As explained above we have made an effort to provide a better understanding of the cellular/tissue localization of the labeled cells. Similarly, we have revised the conclusions so that the statements made are well justified.

      Reviewer #3:

      Possible theoretical advances regarding lineage trajectories of cells during sea cucumber gut regeneration, but the claims that can be made with this data alone are still predictive.

      We are conscious that the results from these lineage trajectories are still predictive and have emphasized this in the text. Nonetheless, they are an important part of our analyses and provide the theoretical basis for future experiments.

      Better microscopy is needed for many figures to be convincing. Some minor additions to the figures will help readers understand the data more clearly.

      As explained above we have made an effort to provide a better understanding of the cellular/tissue localization of the labeled cells.  

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      -  Page 4, line 70-81: if the reader is not familiar with holothurian anatomy and regeneration process, this section can be complicated to fully understand. An illustration, together with clear definitions of mesothelium, coelomic epithelium, celothelium and luminal cells would help the reader. 

      A figure (now Figure 1) detailing the holothurian anatomy of normal and regenerating animals has been added. A figure detailing the intestinal regeneration process has also been included (S1).

      -  Page 5 line 92-104: this paragraph could be shortened. It would be more important to explain what the main question is the Authors would like to answer and why single cell would be the best technique to answer it, than listing previous studies that used scRNA-Seq. 

      The paragraph has been shortened and the focus has been shifted to the question of cellular components of regenerative tissues in holothurians.

      -  Page 6, line 125-127 and line 129-132: this belongs to the method section. 

      This information is now provided in the Materials and Methods section.

      -  Page 11, line 210-217: this belongs to the discussion. 

      This section has now been included in the Discussion.

      -  How many mesenteries are present in one animal? 

      This has now been included as part of Figure S1.

      -  In the methods there are no information about the quality of the dataset and the sequencing and the difference between the 2 samples coming from the 2 animals. How many cells from each sample and which is the coverage? The Authors provided this info only between mesentery and anlage but not between animals. 

      We have added additional information about the sequencing statistics in S4 Fig and S15 Table. Description has also been added in the methods in lines 922-926 under Single Cell RNA Sequencing and Data Analysis section.

      -  The result section "An in-depth analysis of the various cluster..." is particularly long and very repetitive. I would encourage to Authors to remove a lot of the details (list of genes and GO terms) that can be found in the figures and stressed only the most important elements that they will need to support their conclusions. Having full and abbreviated gene names and the long list of references makes the text difficult to read and it is challenging to identify the main point that the Authors are trying to highlight. 

      This section has been abbreviated.

      -  Figure 1: I would suggest adding a graph of holothurian anatomy before and after the evisceration to provide more context of the process we are looking at and remove 1C. 

      Information on the holothurian anatomy has been included in a new Fig 1 and in supplementary figure S1

      -  Figure 2: I would suggest removing this figure that is redundant with Figure 3 and several genes are not cluster specific. Figure 3 is doing a better job in showing similar concepts. 

      Figure 2 was removed and placed in the Supplement section. 

      - In figure 3 how were the 3 cell types defined? Was this done manually or through a bioinformatic analysis? 

      The cell definition was done following the analysis of the highly expressed transcripts and comparisons to what has been shown in the scientific literature.

      -  Figure 2O shows that one of the supra-cluster is made of C2, C7, C6 and C10. This contradicts the text page 9, line 195. 

      The transcript chosen for this figure gives the wrong idea that these 4 clusters are similar. We have now addressed this in the manuscript.

      -  Figure 4A and 4C: if these are representing a subset of Figure 3, they should be removed in one or the other. The same comment is valid also for Figures 5, 6 and 7. In general the manuscript is very redundant both in terms of Figures and text. 

      These are indeed subsets of Fig 3 that were added with the purpose of clarifying the findings, however, in view of the reviewer’s comment we have deleted the redundant information from all figures.

      -  Figure 9: since the panels are not in order, it is difficult to follow the flow of the figure.  - All UMAP should have the number of the cluster on the UMAP itself instead of counting only on the color code in order to be color-blind friendly. 

      The figure has been modified and clusters are now identified in the UMAP by their number.

      -  Figure S1F seems acquired in very different conditions compared to the other images in the same figure. 

      Fig S1F (now S2 Fig) is an overlay of fluorescent immunohistochemistry (detected under UV light) with “classical” toluidine blue labeling (detected under visible light). This has now been explained in the figure legend.

      -  Table S7 is lacking some product numbers. 

      The toluidine blue product number has now been added to the table. The antibodies that lack a product number were generated in our lab and are described in the references provided.

      -  The discussion is pretty long and partially redundant with the result section. I would encourage the Authors to shorten the text and shorten paragraphs that have repeating information.  - It might be out of the scope of the Authors but the readers would benefit from having a manuscript that focuses more on the novel aspects discovered with the single-cell RNA-Seq and then have a review that will bring together all the literature published on this topic and integrating the single-cell data with everything that is known so far. 

      We have tried to shorten the discussion by eliminating redundant text.

      Reviewer #2 (Recommendations For The Authors): 

      -  An intriguing finding is the lack of significant difference in the cell clusters between the anlage and mesentery during regeneration. This discovery raises important questions about the regenerative process. The authors should provide a more detailed explanation of the implications of this finding. For example, does it suggest that both organs contribute equally to the regenerated tissues? 

      The lack of significant differences in the cell clusters between the anlage and the mesentery is somewhat surprising but can be explained by two facts. First, we have previously shown that many of the cellular processes that take place in the anlage, including cell proliferation, apoptosis, dedifferentiation and ECM remodeling, occur in a gradient that begins at the tip of the mesentery where the anlage forms and extends to various degrees into the mesentery. Similarly, migrating cells move along the connective tissue of the mesentery to the anlage. Thus, there is no clear partition of the two regions that would account for distinct cell populations associated with the regenerative stage. Second, the two cell populations that would have been found in the mesentery but not in the regenerating anlage, mature muscle and neurons, were not dissociated by our experimental protocol and therefore could not be sequenced. Our current experiments use single-nucleus RNA sequencing to overcome this hurdle. This has now been included in the discussion.

      -  Proliferating cells are obviously important to the study of regeneration as it is assumed these form the regenerating tissue. The authors describe cluster 8 as the proliferative cells. Is there evidence of proliferation in other cell types or are these truly the only dividing cells? Is c8 of multiple cell types but the clustering algorithm picks up on the markers of cell division i.e. what happens if you mask cell division markers - does this cluster collapse into other cluster types? This is important as if there is only one truly proliferating cell type then this may be the origin of the regenerative tissues and is important for this study to know this. 

      As the reviewer highlights, we also believe this to be an important aspect to discuss. We have addressed it in the manuscript discussion with the following: “Our data suggest that there is a specific population of only proliferative cells (C8), characterized by a large number of cell proliferation genes, which can be visualized in the top genes shown in Fig 3. These cell proliferation genes are specific to C8, with minimal representation in other populations. Interestingly, as mentioned before, C8 expresses at lower levels many of the genes of other coelomic epithelium populations. Nevertheless, even if we mask the top 38 proliferation genes (not shown), this cluster is maintained as an independent cluster, suggesting that its identity is conferred by a complex transcriptomic profile rather than only a few proliferation-related genes. Therefore, the identity and potential role of C8 could be further described by two distinct alternatives: (1) cells of C8 could be an intermediate state between the anlage precursor cells (discussed below) and the specialized cell populations, or (2) cells of C8 are the source of the anlage precursor populations from which all other populations arise. The pseudotime data is certainly complex and challenging to interpret with our current dataset, yet the RNA velocity analysis shown in Fig 11B suggests that cells of C8 transition into the anlage precursor populations, rather than being an intermediate state. This is also supported by the Slingshot pseudotime analysis that incorporates C8 (S13 Fig).

      Nevertheless, additional experiments are needed to confirm this hypothesis.”

      -  The schematic model presented in Fig 10 is essential for clarifying the paper's findings and will provide a crucial baseline model for future research. However, the comparison of the data shown in the HCR figures with the schematic is challenging due to the lack of spatial context in the HCR figures. The authors should find a way to provide better context in the figures, such as providing two-color in situ images to compare spatial relationships of cell types and/or including lower resolution and side-by-side fluorescent and bright field images if possible. 

      The figure has been modified to explain the spatial arrangement of the tissues.

      The authors make several strong statements in the discussion that weren't well connected to the findings in the data. Specifically: 

      “Regardless of which cell population is responsible for giving rise to the cells of the regenerating intestine, our study reveals that the coelomic epithelium, as a tissue layer, is pluripotent.” 

      This has now been expanded to better explain the statement.

      738 “…we postulate that cells from C1 stand as the precursor cell population from which the rest of the cells in the coelomic epithelium arise”. 

      This has now been expanded to better explain the statement.

      748 “differentiation: muscle, neuroepithelium, and coelomic epithelium cells. We also propose the presence of undifferentiated and proliferating cell populations in the coelomic epithelia, which give rise to the cells in this layer…”

      This has now been expanded to better explain the statement.

      777 “amphibians, the cells of the holothurian anlage coelomic epithelium are proliferative undifferentiated cells and originated via a dedifferentiation process…”

      This has now been expanded to better explain the statement.

      Reviewer #3 (Recommendations For The Authors): 

      Specific questions: 

      - Is there any way to systematically compare these cells to evolutionarily-diverged cells in distant relatives to sea cucumbers? Or even on a case-by-case basis? For example, is there evidence for any of these transitory cell types to have correlate(s) in vertebrate gut regeneration? 

This is a most interesting question but one that is perhaps a bit premature to answer, for multiple reasons. First, most of the studies in vertebrates focus on the regeneration of the luminal epithelium, a layer that we are not studying in our system since it appears later in the regeneration process. Second, there is still too little data from adult echinoderms to fully comprehend which cells are orthologues of vertebrate cells. Third, we are only analyzing one regenerative stage. It is our hope that this is just the start of a full description of what cell types/stages are found and how they function in regeneration, and that this will lead us to identify the cellular orthologues among animal species.

      Major revisions: 

      - If lineage tracing is within the scope of this paper, it would provide more definitive evidence to the conclusions made about the precursor populations of the regenerating anlage. 

This is certainly one of the next steps; however, at present, it is not possible due to technical limitations.

      Minor revisions: 

      - Line 47: "for decades" even longer! Could the authors also cite some other amphibians, such as other salamanders (newts) and larval frogs? 

      References have been added.

      - Line 85: "specially"-could authors potentially change to "specifically" 

      Corrected

      - Line 122: Authors should add the full words of what these abbreviations stand for in the caption for Figure 1 or in Figure 1A itself. 

      Corrected

      - Lines 153: What conclusions are the authors trying to make from one type of tubulin presence compared to the others? It's unclear from the text. 

The authors are not trying to reach any particular conclusion. They are simply stating what was found using several markers and raising the possibility that what might at first be viewed as a single cell population could be more heterogeneous. Although the tubulin-type information might not be relevant for the conclusions in the present manuscript, it might be important for future work on the cell types involved in the regeneration process.

      - Line 226: Could the authors clarify if "WNT9" is "WNT9a". Figure 3 lists WNT9a but authors refer to WNT9 in the text. 

      The gene names in Fig 3 are based on the human identifiers. H. glaberrima only has one sequence of Wnt9 (Auger et al. 2023) and this sequence shares the highest similarity to human Wnt9a, thus the name in the list. We have now identified the gene as Wnt9 to avoid confusion.

      - Lines 236-237: Can authors rule out that some immune cells might infiltrate the mesenchymal population? 

No, this cannot be ruled out. In fact, we believe that most of the immune cells found in our scRNA-seq are indeed cells that have infiltrated the anlage and are part of the mesenchyme. This has been reported by us previously (see Garcia-Arraras et al. 2006). We have now included this in the text.

      - Line 452-453: The over-representation of ribosomal genes not shown. Would it be possible to show this information in the supplementary figures? 

The sentence has been modified; the data are being prepared as part of a separate publication that focuses on the ribosomal genes.

      - Line 480: Could authors clarify if it's WNT9a or just WNT9?

      It is indeed Wnt9. See previous response above.

      - Line 500: In future experiments, it would be interesting to compare to populations at different timepoints in order see how the populations are changing or if certain precursors are activated at different times. 

      We fully agree with the reviewer. These are ongoing experiments or are part of new grant proposals.

      - Line 567-568: Choosing 9-dpe allowed for 13 clusters, but do authors expect a different number of clusters at different timepoints as things become more terminally differentiated? 

      Definitely, we believe that clusters related to the different regenerative stages of cells can be found by looking at earlier or later regeneration stages of the organ.  A clear example is that if the experiment is done at 14-dpe, when the lumen is forming, cells related to luminal epithelium populations will appear. It is also possible that different immune cells will be associated with the different regeneration stages.

      - Line 653: References Figure 10D (not in this manuscript). Are authors referring to only 1D or 9D or an old draft figure number? 

      As the reviewer correctly points out, this was a mistake where the reference is to a previous draft. It has now been corrected.

      - Line 701: "our study reveals that the coelomic epithelium, as a tissue layer, is pluripotent." Phrasing may be better as referring to the cell population making up the tissue layer as pluripotent/multipotent or that the cells it contains would likely be pluripotent or multipotent. Additionally, lineage tracing may be needed to definitively demonstrate this. 

      This has been modified.

      - Line 808: The authors may make a more accurate conclusion by saying that the characteristics are similar to blastemas or behave like a blastema rather than it is blastema. There is ambiguity about the meaning of this term in the field, but most researchers seem to currently have in mind that the "blastema" definition includes a discrete spatial organization of cells, and here these cells are much more spread out. This could be a good opportunity for the authors to engage in this dialogue, perhaps parsing out the nuances of what a "blastema" is, what the term has traditionally referred to, and how we might consider updating this term or at least re-framing the terminology to be inclusive of functions that "blastemas" have traditionally had in the literature and how they may be dispersed over geographical space in an organism more so than the more rigid, geographically-restricted definition many researchers have in mind. However, if the authors choose to elaborate on these issues, those elaborations do belong in the discussion, and the more provisional terminology we mention here could be used throughout the paper until that element of the revised discussion is presented. We would welcome the authors to do this as a way to point the field in this direction as this is also how we view the matter. For example, some of the genes whose expression has been observed to be enriched following removal of brain tissue in axolotls (such as kazald2, Lust et al.), are also upregulated in traditional blastemas, for instance, in the limb, but we appreciate that the expression domain may not be as localized as in a limb blastema. 
Additionally, since there is now evidence that some aspects of progenitor cell activation even in limb regeneration extend far beyond the local site of amputation injury (Johnson et al., Payzin-Dogru et al.), there is an opportunity to connect the dots and make the claim that there could be more dispersion of "blastema function" than previously appreciated in the field. Diving a bit more into these nuances may also enable better conceptual framework of how blastema function may evolve across vast evolutionary time and between different injury contexts in super-regenerative organisms. 

      We have followed the reviewer’s suggestion and stated that the holothurian anlage behaves as a blastema. Though we would love to elaborate on the blastema topic, as suggested by the reviewer, we believe that it would extend the discussion too much and that the topic might be better served in a different publication.

      - In the discussion, it would be important not to leave the reader with the impression that all amphibian blastema cells originate via dedifferentiation. This is not the case. For example, in axolotls (Sandoval-Guzman et al.) and in larval/juvenile newts, muscle progenitors within the blastema structure have been shown to originate from muscle satellite cells, a kind of stem cell, in stump tissues (while adult newts use dedifferentiation of myofibers to generate muscle progenitors in the blastema). Most cell lineages simply have not been evaluated in the level of detail that would be required to definitively conclude one way or the other, and the door is open for a more substantial contribution from stem cell populations than previously appreciated especially because new tools exist to detect and study them. Providing the reader with a more nuanced view of this situation will not negatively impact the findings in this paper, but it will show that there is biological complexity still waiting to be discovered and that we don't have all the answers at this point. 

      This has now been corrected. 

      Figures: Overall, the figures need minor work. 

      - Figure 1A: Can the authors draw a smaller, full-body cartoon and feature the current high-mag cartoon as an inset to that? Can they label the axes and make it clear how the geometry works here?

      Fig 1 has been re-done and now is split into Fig 1 and Fig 2.

      - Figure 1B: Can the authors label the UMAP with cluster identities on the map itself? This will make it easier to identify each cluster (especially to make sure cluster 11 is easier to find). 

      This has been corrected.

      - Figure 2: Could the authors put boxes/clearly distinguish panel labels around each cluster (AO), so that there are clear boundaries? 

Fig 2 has been moved to the Supplement, following another reviewer's recommendation.

      - "Gene identifiers starting with "g" correspond to uncharacterized gene models of H. glaberrima." - The sentence is from another figure caption but this figure would benefit from having this sentence in the figure caption as well. 

      This has been added to other figures as suggested.

      - Figure 3A: Can the authors potentially bold, highlight, or underline genes you discuss in text, so it's easier for the reader to reference? 

      This has been added as suggested.

      - Figure 3C: Can the authors please label the cell types directly on the UMAP here as well? 

      The changes were made following the reviewer’s recommendation.

      - Figure 4D-E: There's not much context here to determine if this HCR-FISH validation can tell us anything about these cells besides some of them appear to be there. Do authors expect the coelomocyte morphology to look different in regenerating/injured tissue versus normal animals? Can the authors provide some double in situs, as well as some lower-magnification views showing where the higher-magnification insets are located? Is there any spatial pattern to where these cells are found? Counter stains would be helpful. 

      - Figure 6C: If clusters C5, C8, C9 are part of the coelomic epithelium, then authors could show a smaller diagram above with blue and grey to show types and then show clusters separately to help get their point across better. 

      - Figure 6G: This image appears to have high background- would it be possible for authors to repeat phalloidin stain or reimage with a lower exposure/gain. Additionally, imaging with Zstacks would help to obtain maximum intensity projections. It would greatly aid the reader if each image was labeled with HCR probes/antibodies that have been applied to the sample. 

      - Figure 7E: The cells appear to be out of focus and have high background. Additionally, they are lacking the speckled appearance expected to be seen with HCR-FISH. Would it be possible for authors to collect another image utilizing z-stacks? 

      HCR-FISH figures identifying the gene expression characteristic of cell clusters have been modified following the reviewer’s concerns.  The changes include:

      (1) Additional clusters have been verified with probes to gene identifiers. These include clusters 8, 9 and 12.

      (2) Redundant information has been removed.

      (3) Colors have been changed to make figures friendlier to color-impaired readers.

      (4) Spatial context has been added or identified.

(5) In some cases, improved photos have been added.

(6) Better labels have been included.

      (7) When necessary individual photos used for the overlay have been included.

      - Figure 9A: Could authors add cluster labels onto UMAP directly? 

This change was made to Fig 2A. The UMAP in Fig 9A is the same and is used only as a reference for the subset.

      - Figure 10: It could be useful if authors put a small map of the sea cucumber like in other images so that readers know where in the anlage this zoomed in model represents. 

      Added as suggested by the reviewer.

      - Supplementary figure 1F: Could authors add an arrow to the dark cell that's being pointed out? 

Change made as suggested by the reviewer.

      - Supplementary figure 1: Could authors label clearly what color is labeled with what marker? 

Change made as suggested by the reviewer.

    1. Let’s just stop thinking data is perfect. It’s not. Data is primarily human-made. “Data-driven” doesn’t mean “unmistakably true,” and it never did.

I would say that's not only due to human error, but possibly also the unusual cases that lie outside the "big net" of a large sample size, assuming the data set in question was lucky enough to have one.

1. The disproportionate representation of students of color in special education is a serious concern that has lasted for forty years.

      It’s unfair that students of color are often placed in special education just because of bias. Many of these students come from low-income backgrounds and don’t get the right support. Instead of giving them extra help, schools label them with disabilities. This can affect their confidence and future opportunities.

1. She's just a girl, and she's on fire / Hotter than a fantasy, longer like a highway / She's living in a world, and it's on fire / Feeling the catastrophe, but she knows she can fly away / Oh, she got both feet on the ground / And she's burning it down / Oh, she got her head in the clouds / And she's not backing down

Metaphorically, 'hotter than a fantasy' describes the girl as possessing an exceptional character and radiance, while 'longer like a highway' could entail that she has a multitude of positive qualities and strengths. Another interpretation could suggest that she is a person who has gone through many challenges and setbacks in life, with 'longer like a highway' placing an emphasis on the complexity and length of the challenges and setbacks she has faced, much like the layers of an onion.

1. Located at the top of the Illinois River valley, the village is not normally considered a significant part of American history, so it has remained relatively unknown. In many accounts, the location is discussed merely as a refugee center to which desperate, beleaguered Algonquians fled ahead of a series of mid-seventeenth-century Iroquois conquests that were part of the violence known as the Beaver Wars. Reeling from violence and constrained by necessity, the Illinois speakers who predominated in the place belonged to a “fragile, disordered world,” “made of fragments” and dependent on French support.

      The author brings up how the Grand Village of the Kaskaskia is often overlooked in history, which is interesting. It's framed not just as a refugee settlement but as a significant place in its own right, showing the Illinois weren't just victims but had their own agency during this time.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      The manuscript by Dr. Shinkai and colleagues is about the posttranslational modification of a highly important protein, MT3, also known as the growth inhibitory factor. Authors postulate that MT3, or generally all MT isoforms, are sulfane sulfur binding proteins. The presence of sulfane sulfur at each Cys residue has, according to the authors, a critical impact on redox protein properties and almost does not affect zinc binding. They show a model in which 20 Cys residues with sulfane sulfur atoms can still bind seven zinc ions in the same clusters as unmodified protein. They also show that recombinant MT3 (but also MT1 and MT2) protein can react with HPE-IAM, an efficient trapping reagent of persulfides/polysulfides. This reaction performed in a new approach (high temperature and high reagent concentration) resulted in the formation of bis-S-HPE-AM product, which was quantitatively analyzed using LC-MS/MS. This analysis indicated that all Cys residues of MT proteins are modified by sulfane sulfur atoms. The authors performed a series of experiments showing that such protein can bind zinc, which dissociates in the reaction with hydrogen peroxide or SNAP. They also show that oxidized MT3 is reduced by thioredoxin. It gives a story about a new redox-dependent switching mechanism of zinc/persulfide cluster involving the formation of cystine tetrasulfide bridge.

      The whole story is hard to follow due to the lack of many essential explanations or full discussion. What needs to be clarified is the conclusion (or its lack) about MT3 modification proven by mass spectrometry. Figure 1B shows the FT-ICR-MALDI-TOF/MS spectrum of recombinant MT3. It clearly shows the presence of unmodified MT3 protein without zinc ions. Ions dissociate in acidic conditions used for MALDI sample preparation. If the protein contained all Cys residues modified, its molecular weight would be significantly higher. Then, they show the MS spectrum (low quality) of oxidized protein (Fig. 1C), in which new signals (besides reduced apo-MT3) are observed. They conclude that new signals come from protein oxidation and modification with one or two sulfur atoms. If the conclusion on Cys residue oxidation is reasonable, how this protein contains sulfur is unclear. What is the origin of the sulfur if apo-MT does not contain it? Oxidized protein was obtained by acidification of the protein, leading to zinc dissociation and subsequent neutralization and air oxidation. Authors should perform a detailed isotope analysis of the isotopic envelope to prove that sulfur is bound to the protein. They say that the +32 mass increase is not due to the appearance of two oxygen donors. They do not provide evidence. This protein is not a sulfane sulfur binding protein, or its minority is modified. Moreover, it is unacceptable to write that during MT3 oxidation are "released nine molecules of H2". How is hydrogen molecule produced? Moreover, zinc is not "released", it dissociates from protein in a chemical process.

Thank you for your comment. According to your suggestion, we have rewritten the corresponding sentences below, together with the addition of the new Fig. 1D.

      First, the sentence “which corresponded to the mass of zinc-free apo-GIF/MT3 and indicated that zinc was removed during MS analysis.” was changed to “which corresponded to the mass of zinc-free apo-GIF/MT3 and indicated that zinc dissociates from protein in acidic conditions used for MALDI sample preparation.” in the introduction section. Second, we have added the following sentence “However, FT-ICR-MALDI-TOF/MS analysis failed to detect sulfur modifications in GIF/MT-3 (Fig. 1B), suggesting that sulfur modifications in the protein were dissociated during laser desorption/ionization. Therefore, we postulate that the small amount of sulfur detected in oxidized apo-GIF/MT-3 is derived from the effect of laser desorption/ionization rather than any actual modification of the minority component.” in the discussion section. Third, we have added new Fig. 1D and the corresponding citation in the introduction. Fourth, the sentence “An increase in mass of 32 Da can also result from addition of two oxygen atoms, but we attributed it to one sulfur atom for reasons described later.” was changed to “Note that an increase in mass of 32 Da can also result from addition of two oxygen atoms.”.

      Another important point is a new approach to the HPE-IAM application. Zinc-binding MT3 was incubated with 5 mM reagent at 60°C for 36 h. Authors claim that high concentration was required because apoMT3 has stable conformation. Figure 2B shows that product concentration increases with higher temperature, but it is unclear why such a high temperature was used. Figure 1D shows that at 37°C, there is almost no reaction at 5 mM reagent. Changing parameters sounds reasonable only when the reaction is monitored by mass spectrometry. In conclusion, about 20 sulfane sulfur atoms present in MT3 would be clearly visible. Such evidence was not provided. Increased temperature and reagent concentration could cause modification of cysteinyl thiol/thiolates as well, not only persulfides/polysulfides. Therefore, it is highly possible that non-modified MT3 protein could react with HPE-IAM, giving false results. Besides mass spectrometry, which would clearly prove modifications of 20 Cys, authors should use very important control, which could be chemically synthesized beta- or alfa-domain of MT3 reconstituted with zinc (many protocols are present in the literature). Such models are commonly used to test any kind of chemistry of MTs. If a non-modified chemically obtained domain would undergo a reaction with HPE-IAM under such rigorous conditions, then my expectation would be right.

Thank you for your comments. Although we have already confirmed that no false-positive results were observed using this method in Fig. 5 (previously Fig. 4), we have conducted additional experiments by preparing chemically synthesized α- and β-domains of GIF/MT-3, as well as recombinant α- and β-domains of GIF/MT-3. As shown in the new Fig. S2A, almost no sulfane sulfur (less than 1 molecule per protein) was detected in the chemically synthesized α- and β-domains of GIF/MT-3, whereas several molecules of sulfane sulfur (more than 5 molecules per protein) were detected in the recombinant α- and β-domains (Fig. S2A). Therefore, we would like to emphasize here that the cysteine residue itself cannot be the source of the bis-S-HPE-AM product (sulfane sulfur derivative).

Accordingly, we have added the following sentence in the results section: “Because this assay was performed at relatively high temperatures (60°C), we also examined the sulfane sulfur levels of several mutant proteins using chemically synthesized α- and β-domains of GIF/MT-3 to eliminate false-positive results. As shown in Fig. S2A, sulfane sulfur (less than 1 molecule per protein) was undetectable in chemically synthesized α- and β-domains of GIF/MT-3, whereas several molecules of sulfane sulfur per protein were detected in recombinant α- and β-domains (Fig. S2B, left panel). These findings indicated that the sulfane sulfur detected in our assay was derived from biological processes executed during the production of GIF/MT-3 protein. We further analyzed mutant proteins with β-Cys-to-Ala and α-Cys-to-Ala substitutions and found that their sulfane sulfur levels were comparable with those of the α- and β-domains of GIF/MT-3, respectively (Fig. S2B, left panel). Additionally, Ser-to-Ala mutation did not affect the sulfane sulfur levels of GIF/MT-3. The zinc content of each mutant protein was also determined under these conditions (Fig. S2B, right panel).”

      - The remaining experiments provided in the manuscript can also be applied for non-modified protein (without sulfane sulfur modification) and do not provide worthwhile evidence. For instance, hydrogen peroxide or SNAP may interact with non-modified MTs. Zinc ions dissociate due to cysteine residue modification, and TCEP may reduce oxidized residue to rescue zinc binding. Again, mass spectrometry would provide nice evidence.

      Thank you for your comment. We understand that such experiments can also be applied to non-modified proteins (without sulfane sulfur modification). However, the experiments shown in Fig. 4 and Fig. 6 were conducted to investigate the role of sulfane sulfur under oxidative stress conditions, rather than to examine sulfur modification in the protein itself. As mentioned previously, it is difficult to detect sulfur modifications directly in the protein using MALDI-TOF/MS (Fig. 1), as sulfur modifications appear to dissociate during the laser desorption/ionization process.

      - The same is thioredoxin (Fig. 7) and its reaction with oxidized MT3. Nonmodified and oxidized MT3 would react as well.

Thank you for your comment. We understand that such experiments can also be applied to non-modified MT-3 protein. However, to the best of our knowledge, this is the first report demonstrating that apo-MT-3 can serve as a good substrate for the Trx system. In fact, this experiment is not intended to prove that MT-3 is a sulfane sulfur-binding protein. Rather, it demonstrates the novel finding that apo-MT-3 serves as an excellent substrate for Trx and that the sulfane sulfur (persulfide structure) remains intact throughout the reduction process.

      - If HPE-IAM reacts with Cys residues with unmodified MT3, which is more likely the case under used conditions, the protein product of such reaction will not bind zinc. It could be an explanation of the cyanolysis experiment (Fig. 6).

      Thank you for your comment. As you pointed out, HPE-IAM reacts with cysteine residues in unmodified MT-3, thereby preventing zinc from binding to the protein. However, we did not use HPE-IAM prior to measuring zinc binding. Instead, HPE-IAM was used solely for determining the sulfane sulfur content in the protein, and thus it cannot explain the results of the cyanolysis experiment.

      - Figure 4 shows the reactivity of (pol)sulfides with TCEP and HPE-IAM. What are redox potentials? Do they correlate with the obtained results?

Thank you for your comment. However, we must apologize, as we do not fully understand the rationale for determining redox potentials in this experiment. We believe the data themselves to be very clear and to present convincing results.

      - Raman spectroscopy experiments would illustrate the presence of sulfane sulfur in MT3 only if all Cys were modified.

      Yes, that is correct. Since approximately 20 sulfane sulfur atoms are detected in the protein with 20 cysteine residues, we believe that nearly all cysteine residues are modified by sulfane sulfur. Therefore, Raman spectroscopy is considered applicable to our current study.

      - The modeling presented in this study is very interesting and confirms the flexibility of metallothioneins. MT domains are known to bind various metal ions of different diameters. They adopt in this way to larger size the ions. The same mechanism could be present from the protein site. The presence of 9 or 11 sulfur atoms in the beta or alfa domain would increase the size of the domains without changing the cluster structure.

      We truly appreciate your positive evaluation of this work.

      - Comment to authors. Apo-MT is not present in the cell. It exists as a partially metallated species. The term "apo-MT" was introduced to explain that MTs are not fully saturated by metals and function as a metal buffer system. Apo-MT comes from old ages when MT was considered to be present only in two forms: apo-form and fully saturated forms.

      Thank you for your insightful comments. We find it reasonable to understand that apo-MT exists as a partially metallated species within the cell.

      Reviewer #2 (Public Review):

      Summary:

In this manuscript, the authors reveal that GIF/MT-3 regulates zinc homeostasis depending on the cellular redox status. The manuscript is technically sound, and their data concretely suggest that the recombinant MTs, not only GIF/MT-3 but also canonical MTs such as MT-1 and MT-2, contain sulfane sulfur atoms for Zn-binding. The scenario proposed by the authors seems reasonable to explain the regulation of Zn homeostasis by the cellular redox balance.

      Strengths:

      The data presented in the manuscript solidly reveal that recombinant GIF/MT-3 contains sulfane sulfur.

      Weaknesses:

It is still unclear whether native MTs, in particular induced MTs in vivo, contain sulfane sulfur or not.

      Thank you for pointing out the strengths and weaknesses of this manuscript. Based on your suggestions, we have determined the sulfane sulfur content in the native GIF/MT-3 protein, as explained in our response to "Recommendations for the Authors #2."

      Reviewer #3 (Public Review):

      Summary:

      The authors were trying to show that a novel neuronal metallothionein of poorly defined function, GIF/MT3, is actually heavily persulfidated in both the Zn-bound and apo (metal-free) forms of the molecule as purified from a heterologous or native host. Evidence in support of this conclusion is compelling, with both spectroscopic and mass spectrometry evidence strongly consistent with this general conclusion. The authors would appear to have achieved their aims.

      Strengths:

      The analytical data are compelling in support of the author's primary conclusions are strong. The authors also provide some modeling evidence that strongly supports the contention that MT3 (and other MTs) can readily accommodate sulfane sulfur on each of the 20 cysteines in the Zn-bound structure, with little perturbation of the structure. This is not the case with Cys trisulfides, which suggests that the persulfide-metallated state is clearly positioned at lower energy relative to the immediately adjacent thiolate- or trisulfidated metal coordination complexes.

      Weaknesses:

The biological significance of the findings is not entirely clear. On the one hand, the analytical data are clearly solid (albeit using a protein derived from a bacterial over-expression experiment), and yes, it's true that sulfane S can protect Cys from overoxidation, but everything shown in the summary figure (Fig. 8D) can be done with Zn release from a thiol by ROS, and subsequent reduction by the Trx/TR system. In addition, it's long been known that Zn itself can protect Cys from oxidation. I view this as a minor weakness that will motivate follow-up studies. Fig. 1 was incomplete in its discussion and only suggests that a few S atoms may be covalently bound to MT3 as isolated. This is in contrast to the sulfane S "release" experiment, which I find quite compelling.

      Impact:

The impact will be high since the finding is potentially disruptive to the metals-in-biology field in general and the MT field for sure. The sulfane sulfur counting experiment (the HPE-IAM electrophile trapping experiment) may well be widely adopted by the field. Those of us in the metals field always knew that this was a possibility, and it will be interesting to see the extent to which metal-binding thiolates broadly incorporate sulfane sulfur into their first coordination shells.

Thank you for pointing out the strengths and weaknesses of this manuscript. As you noted, the explanations and discussions regarding Fig. 1 were missing. To address this, we have added the following sentences to the discussion section: “However, FT-ICR-MALDI-TOF/MS analysis failed to detect sulfur modifications in GIF/MT-3 (Fig. 1B), suggesting that sulfur modifications in the protein were dissociated during laser desorption/ionization. Therefore, we postulate that the small amount of sulfur detected in oxidized apo-GIF/MT-3 is derived from the effect of laser desorption/ionization rather than any actual modification of the minority component.”

      Reviewer #1 (Recommendations For The Authors):

      Overall, the topic of the study is interesting, but the provided evidence is insufficient to claim that MT3 is a sulfane sulfur-binding protein. Indeed, some recent studies showed that natural and recombinant MT proteins can be modified, but only one or a few cysteine residues were modified. Authors should follow my suggestion and apply mass spectrometry to all performed reactions and, first of all, to freshly obtained protein. I strongly suggest using chemically synthesized and reconstituted domains to test whether the home-developed approach is appropriate. Moreover, native MS and ICP-MS analysis of MT3 would support their claims.

      Thank you for your insightful comments. Following your suggestions, we have prepared chemically synthesized proteins of the α- and β-domains of GIF/MT-3 and conducted additional experiments, as explained in response comments to “Public Review #1”. Regarding the MS analysis, we have also added a discussion on the difficulty of detecting sulfur modifications in the protein.

      Reviewer #2 (Recommendations For The Authors):

      I have some minor points which should be considered by the authors.

      (1) Table 1: In the simulation by MOE, the authors speculated 7 atoms of metal bound to GIF/MT-3. Although a total of 7 atoms of Zn or Cd are actually bound to MTs as a divalent ion, the number of Cu and Hg bound to MTs as a monovalent ion is scientifically controversial. Several ideas have been proposed in the literature, however, "7 atoms of Cu or Hg" could be inappropriate as far as I know. The authors should simulate again using a more appropriate number of Cu or Hg in MTs.

      Thank you for providing this valuable information. We reviewed several papers by the Stillman group and found that the relative binding constants of Cu4-MT, Cu6-MT, and Cu10-MT were determined after the addition of Cu(I) to apo MT-1A, MT-2, and MT-3 (Melenbacher and Stillman, Metallomics, 2024). However, incorporating these copper numbers into our GIF/MT-3 simulation model proved challenging. Therefore, we decided to omit the score value for copper in Table 1.

On the other hand, some researchers have reported that mercury binds to MT as a divalent ion and that the Hg<sub>7</sub>MT form is possible. Therefore, we decided to continue using the score value for mercury shown in Table 1.

      (2) If possible, native MT samples isolated from an experimental animal should be evaluated for the sulfane sulfur content. Canonical MTs, MT-1 and MT-2, are highly inducible by not only heavy metals but also oxidative stress. Under the oxidative stress condition such as the exposure of hydrogen peroxide, it is questionable whether the induced Zn-MTs contain sulfane sulfur or not.

      According to your suggestion, we evaluated the sulfane sulfur content in native GIF/MT-3 samples isolated from mouse brain cytosol (Fig. 10). The measured amount was 3.3 per protein. This suggests that sulfane sulfur in GIF/MT-3 could be consumed under oxidative conditions, as you anticipated. Another possible explanation for the discrepancy between the native form and recombinant protein is likely related to metal binding in the protein. It is generally understood that both zinc and copper bind to GIF/MT-3 in approximately equal proportions in vivo. When we prepared recombinant copper-binding GIF/MT-3 protein, the sulfane sulfur content in the protein was significantly different (approximately 4.0 per protein) compared to the Zn<sub>7</sub>GIF/MT-3 form. Further studies are needed to clarify the relationship between sulfane sulfur binding and the types of metals in the future.

      (3) The biological significance of sulfane sulfur in MTs is still unclear to me.

      Thank you for your comments. To address this question, we have added the following sentence to the discussion section: “The biological significance of sulfane sulfur in MTs lies in its ability to 1) contribute to metal binding affinity, 2) provide a sensing mechanism against oxidative stress, and 3) aid in the regeneration of the protein.”

      (4) According to the widely accepted nomenclature of MT, "MT3" should be amended to "MT-3".

      According to your suggestion, we have amended from MT3 to MT-3 throughout the manuscript.

      Reviewer #3 (Recommendations For The Authors):

      Most of my comments are editorial in nature, largely focused on what I perceive as overinterpretation or unnecessary speculation.

      The authors state in the abstract that the intersection of sulfane sulfur and Zn enzymes "has been overlooked." This is not actually true - please tone down to "under investigated" or something like this.

      Based on your suggestion, we have replaced the term “has been overlooked” with “has been under investigated” in the abstract.

      Line 228: The discussion of Fig. 6C involved too much speculation. I cannot see a quantitative experiment that supports this.

      Based on your suggestion, we have removed Fig. 6C (currently referred to as Fig. 7C). Additionally, we have revised the sentence from “implying that the sulfane sulfur is an essential zinc ligand in apo-GIF/MT3 and that an asymmetric SSH or SH ligand is insufficient for native zinc binding (Fig. 6C)” to “implying the contribution of sulfane sulfur to zinc binding in GIF/MT-3”.

      Line 247 "persulfide in apo-GIF/MT3 seems.." I think the authors mean that the Zn form of the protein is resistant to Trx or TCEP.

      Thank you for pointing this out. We realized that the term “persulfide in apo-GIF/MT3” might be confusing. Therefore, we have replaced it with “persulfide formation derived from apo-GIF/MT3” in the corresponding sentence.

      Molecular modeling: We need more details- were these structures energy-minimized in any way? Can the authors comment on the plethora of S-S dihedral angles in these structures, and whether they are consistent with expectations of covalent geometry? Please add text to explain or even a table that compiles these data.

      Thank you for your comment. Yes, energy minimization calculations for structural optimization were conducted during homology modeling in MOE. In fact, we have already stated in the Methods section that “Refinement of the model with the lowest generalized Born/volume integral (GBVI) score was achieved through energy minimization of outlier residues in Ramachandran plots generated within MOE.” In this model, covalent geometry, including the S-S dihedral angles, is also taken into consideration.

      What is a thermostability score? Perhaps a bit more discussion here and what relationship this has to an apparent (or macroscopic) metal affinity constant.

The thermostability score is used to compare the thermal stability of the wild-type and mutant proteins. As shown in Equation (1) in the Methods section, it is calculated by subtracting the energy of the hypothetical unfolded state from the energy of the folded state. Since obtaining the structure of the unfolded state requires extensive computational effort, MOE employs an empirical formula based on two-dimensional structural features to estimate it. The ΔΔG value represents the difference between ΔGf(WT) and ΔGf(Mut). However, because it is difficult to directly determine ΔGf(Mut) and ΔGf(WT), MOE calculates ΔΔG using the thermodynamic cycle equivalence ΔΔGs = ΔGsf(WT→Mut) − ΔGsu(WT→Mut), as expressed in Equation (1).
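As a numerical illustration of the thermodynamic-cycle equivalence described above, the two routes around the cycle can be checked with a short sketch. All free-energy values below are hypothetical placeholders, not MOE outputs:

```python
# Hypothetical free energies (arbitrary units) for a thermodynamic cycle:
# folded/unfolded states of a wild-type (WT) and mutant (Mut) protein.
g = {
    ("WT", "folded"): -30.0,
    ("WT", "unfolded"): -5.0,
    ("Mut", "folded"): -26.0,
    ("Mut", "unfolded"): -4.0,
}

# Direct route: folding free energy of each variant, then their difference.
dG_f_WT = g[("WT", "folded")] - g[("WT", "unfolded")]
dG_f_Mut = g[("Mut", "folded")] - g[("Mut", "unfolded")]
ddG_direct = dG_f_Mut - dG_f_WT  # positive = mutation is destabilizing

# Cycle route: WT -> Mut "mutation legs" in the folded and unfolded states.
dG_mut_folded = g[("Mut", "folded")] - g[("WT", "folded")]
dG_mut_unfolded = g[("Mut", "unfolded")] - g[("WT", "unfolded")]
ddG_cycle = dG_mut_folded - dG_mut_unfolded

# Both routes around the cycle give the same stability change.
assert abs(ddG_direct - ddG_cycle) < 1e-9
print(ddG_direct)  # 3.0 with these placeholder values
```

The cycle route is the tractable one in practice: mutating a residue in a fixed (folded or modeled unfolded) structure is computationally cheap, whereas simulating the unfolding of each variant directly is not.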

      On the other hand, the affinity score represents the interaction energy between the target ligand and the protein. In this study, we calculated the affinity score by selecting metal atoms as the ligands. The interaction energy (E int) is defined as:

E_int = E_complex − E_receptor − E_ligand

where each term is defined as follows:

E_complex: potential energy of the complex.

E_receptor: potential energy of the receptor alone.

E_ligand: potential energy of the ligand alone.

Each potential energy term includes contributions from bonded interactions such as bond lengths and bond angles. However, since the receptor and ligand structures are identical in the complex and in their isolated states, the bonded energy components cancel out. Consequently, E_int reduces to:

E_int = ΔE_ele + ΔE_vdW + ΔE_sol

Here, a negative E_int indicates that the complex is more stable, while a positive E_int implies that the receptor and ligand are more stable in their dissociated states.
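A minimal sketch of this bookkeeping, with entirely hypothetical energy values (real terms would come from the force-field evaluation in MOE), shows why the bonded contributions cancel:

```python
def interaction_energy(e_complex, e_receptor, e_ligand):
    """E_int = E_complex - E_receptor - E_ligand (hypothetical units)."""
    return e_complex - e_receptor - e_ligand

# Decompose each potential energy into bonded + nonbonded parts.
# The bonded contribution is identical in the complex and in the isolated
# receptor, so it cancels in E_int, leaving only the nonbonded differences
# (electrostatic, van der Waals, solvation).
bonded = 120.0                          # bonds/angles term, same in both states
e_receptor = bonded + (-50.0)           # bonded + nonbonded energy of the protein
e_ligand = 0.0                          # a bare metal ion: no bonded terms here
e_complex = bonded + (-50.0) + (-15.0)  # extra -15.0 = dE_ele + dE_vdW + dE_sol

e_int = interaction_energy(e_complex, e_receptor, e_ligand)
print(e_int)  # -15.0: negative, so the complex is the more stable state
```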

      We have revised the sentence "The affinity score was also calculated using MOE software as the difference between the ΔΔGs values of the protein, free zinc, and metal–protein complex” to "The affinity score was also calculated using MOE software as the difference between the potential energy values of the protein, free zinc, and metal–protein complex” to correct the misdescription.

Lines 278-280: The authors state that they observe a "marked enhancement of metal binding affinity, and rearrangement of zinc ions." I don't see support for this rather provocative conclusion. This is the expectation of course. I would love to see actual experimental data on this point, direct binding titrations with metals performed before and after the release of the sulfane sulfur atoms.

Thank you for your comments. Although this statement is based on the 3D modeling simulation, we have also experimentally observed that the diminishment of sulfane sulfur in GIF/MT-3 resulted in a decrease in zinc binding levels, as shown in Fig. 7. However, direct binding titration experiments were not feasible for us because of the difficulty of preparing pure GIF/MT-3 protein with or without sulfane sulfur. Therefore, we have revised the sentence "marked enhancement of metal binding affinity, and rearrangement of zinc ions" to simply "enhancement of metal binding affinity" to avoid over-speculation.

Table I- quantitatively lower stability for the Cu complex- the stoichiometry is clearly wrong in this simulation- please redo this simulation with the right stoichiometry of Cu to MT3- consult a Stillman paper.

      Thank you for providing this valuable information. We reviewed several papers by the Stillman group and found that the relative binding constants of Cu4-MT, Cu6-MT, and Cu10-MT were determined after the addition of Cu(I) to apo MT-1A, MT-2, and MT-3 (Melenbacher and Stillman, Metallomics, 2024). However, incorporating these copper numbers into our GIF/MT-3 simulation model proved challenging. Therefore, we decided to omit the score value for copper in Table 1.

I like the model for reversible metal release mediated by the thioredoxin system (Fig. 8D)- but you can also do this with thiols- nothing really novel here. Has it been generally established that tetrasulfides are better substrates for the Trx/TR system? The data shown in Fig. 7B seems to suggest this, but is this broadly true, from the literature?

There are reports describing that persulfides and polysulfides are reduced by the thioredoxin system. However, it is not well-established that tetrasulfides are better substrates for the Trx/TR system. To the best of our knowledge, this is the first report demonstrating that apo-MT-3 can serve as a good substrate for the Trx/TR system. Further research is required to compare the catalytic efficiency between proteins containing disulfide and those with tetrasulfide moieties.

      Line 380: Many groups have reported that many proteins are per- or polysulfidated in a whole host of cells using mass spectrometry workflows, and that terminal persulfides can be readily reduced by general or specific Trx/TR systems. This work could be better acknowledged in the context of the authors' demonstration of the reduction of the tetrasulfides, which itself would appear to be novel (and exciting!).

      We truly appreciate your positive evaluation of this work.

    1. Reviewer #1 (Public review):

      Summary:

      This manuscript investigates a mechanism between the histone reader protein YEATS2 and the metabolic enzyme GCDH, particularly in regulating epithelial-to-mesenchymal transition (EMT) in head and neck cancer (HNC).

      Strengths:

      Great detailing of the mechanistic aspect of the above axis is the primary strength of the manuscript.

      Weaknesses:

      Several critical points require clarification, including the rationale behind EMT marker selection, the inclusion of metastasis data, the role of key metabolic enzymes like ECHS1, and the molecular mechanisms governing p300 and YEATS2 interactions.

      Major Comments:

      (1) The title, "Interplay of YEATS2 and GCDH mediates histone crotonylation and drives EMT in head and neck cancer," appears somewhat misleading, as it implies that YEATS2 directly drives histone crotonylation. However, YEATS2 functions as a reader of histone crotonylation rather than a writer or mediator of this modification. It cannot itself mediate the addition of crotonyl groups onto histones. Instead, the enzyme GCDH is the one responsible for generating crotonyl-CoA, which enables histone crotonylation. Therefore, while YEATS2 plays a role in recognizing crotonylation marks and may regulate gene expression through this mechanism, it does not directly catalyse or promote the crotonylation process.

      (2) The study suggests a link between YEATS2 and metastasis due to its role in EMT, but the lack of clinical or pre-clinical evidence of metastasis is concerning. Only primary tumor (PT) data is shown, but if the hypothesis is that YEATS2 promotes metastasis via EMT, then evidence from metastatic samples or in vivo models should be included to solidify this claim.

      (3) There seems to be some discrepancy in the invasion data with BICR10 control cells (Figure 2C). BICR10 control cells with mock plasmids, specifically shControl and pEGFP-C3 show an unclear distinction between invasion capacities. Normally, we would expect the control cells to invade somewhat similarly, in terms of area covered, within the same time interval (24 hours here). But we clearly see more control cells invading when the invasion is done with KD and fewer control cells invading when the invasion is done with OE. Are these just plasmid-specific significant effects on normal cell invasion? This needs to be addressed.

      (4) In Figure 3G, the Western blot shows an unclear band for YEATS2 in shSP1 cells with YEATS2 overexpression condition. The authors need to clearly identify which band corresponds to YEATS2 in this case.

      (5) In ChIP assays with SP1, YEATS2 and p300 which promoter regions were selected for the respective genes? Please provide data for all the different promoter regions that must have been analysed, highlighting the region where enrichment/depletion was observed. Including data from negative control regions would improve the validity of the results.

(6) The authors establish a link between H3K27Cr marks and GCDH expression, and this is an already well-known pathway. A critical missing piece is the level of ECHS1 in patient samples. This will clearly delineate if the balance shifted towards crotonylation.

      (7) The p300 ChIP data on the SPARC promoter is confusing. The authors report reduced p300 occupancy in YEATS2-silenced cells, on SPARC promoter. However, this is paradoxical, as p300 is a writer, a histone acetyltransferase (HAT). The absence of a reader (YEATS2) shouldn't affect the writer (p300) unless a complex relationship between p300 and YEATS2 is present. The role of p300 should be further clarified in this case. Additionally, transcriptional regulation of SPARC expression in YEATS2 silenced cells could be analysed via downstream events, like Pol-II recruitment. Assays such as Pol-II ChIP-qPCR could help explain this.

      (8) The role of GCDH in producing crotonyl-CoA is already well-established in the literature. The authors' hypothesis that GCDH is essential for crotonyl-CoA production has been proven, and it's unclear why this is presented as a novel finding. It has been shown that YEATS2 KD leads to reduced H3K27cr, however, it remains unclear how the reader is affecting crotonylation levels. Are GCDH levels also reduced in the YEATS2 KD condition? Are YEATS2 levels regulating GCDH expression? One possible mechanism is YEATS2 occupancy on GCDH promoter and therefore reduced GCDH levels upon YEATS2 KD. This aspect is crucial to the study's proposed mechanism but is not addressed thoroughly.

      (9) The authors should provide IHC analysis of YEATS2, SPARC alongside H3K27cr and GCDH staining in normal vs. tumor tissues from HNC patients.

    2. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This manuscript investigates a mechanism between the histone reader protein YEATS2 and the metabolic enzyme GCDH, particularly in regulating epithelial-to-mesenchymal transition (EMT) in head and neck cancer (HNC).

      Strengths:

      Great detailing of the mechanistic aspect of the above axis is the primary strength of the manuscript.

      Weaknesses:

      Several critical points require clarification, including the rationale behind EMT marker selection, the inclusion of metastasis data, the role of key metabolic enzymes like ECHS1, and the molecular mechanisms governing p300 and YEATS2 interactions.

      We would like to sincerely thank the reviewer for the detailed, in-depth, and positive response. We are committed to implementing constructive revisions to the manuscript to address the reviewer’s concerns effectively.

      Major Comments:

      (1) The title, "Interplay of YEATS2 and GCDH mediates histone crotonylation and drives EMT in head and neck cancer," appears somewhat misleading, as it implies that YEATS2 directly drives histone crotonylation. However, YEATS2 functions as a reader of histone crotonylation rather than a writer or mediator of this modification. It cannot itself mediate the addition of crotonyl groups onto histones. Instead, the enzyme GCDH is the one responsible for generating crotonyl-CoA, which enables histone crotonylation. Therefore, while YEATS2 plays a role in recognizing crotonylation marks and may regulate gene expression through this mechanism, it does not directly catalyse or promote the crotonylation process.

      We thank the reviewer for raising this concern. As stated by the reviewer, YEATS2 functions as a reader protein, capable of recognizing histone crotonylation marks and assisting in the addition of this mark to nearby histone residues, possibly by assisting the recruitment of the writer protein for crotonylation. Our data indicates the involvement of YEATS2 in the recruitment of writer protein p300 on the promoter of the SPARC gene, making YEATS2 a regulatory factor responsible for the addition of crotonyl marks in an indirect manner. Thus, we have decided to make changes in the title by replacing the word “mediates” with “regulates”. Therefore, the updated title can be read as: “Interplay of YEATS2 and GCDH regulates histone crotonylation and drives EMT in head and neck cancer”.

      (2) The study suggests a link between YEATS2 and metastasis due to its role in EMT, but the lack of clinical or pre-clinical evidence of metastasis is concerning. Only primary tumor (PT) data is shown, but if the hypothesis is that YEATS2 promotes metastasis via EMT, then evidence from metastatic samples or in vivo models should be included to solidify this claim.

      We appreciate the reviewer’s suggestion. Here, we would like to state that the primary aim of this study was to delineate the molecular mechanisms behind the role of YEATS2 in maintaining histone crotonylation at the promoter of genes that favour EMT in head and neck cancer. We have dissected the importance of histone crotonylation in the regulation of gene expression in head and neck cancer in great detail, having investigated the upstream and downstream molecular players involved in this process that promote EMT. Moreover, with the help of multiple phenotypic assays, such as Matrigel invasion, wound healing, and 3D invasion assays, we have shown the functional importance of YEATS2 in promoting EMT in head and neck cancer cells. Since EMT is known to be a prerequisite process for cancer cells undergoing metastasis(1), the evidence of YEATS2 being associated with EMT demonstrates a potential correlation of YEATS2 with metastasis. However, as part of the revision, we will use publicly available patient data to investigate the direct association of YEATS2 with metastasis by checking the expression of YEATS2 between different grades of head and neck cancer, as an increase in tumor grade is often correlated with the incidence of metastasis(2).

      (3) There seems to be some discrepancy in the invasion data with BICR10 control cells (Figure 2C). BICR10 control cells with mock plasmids, specifically shControl and pEGFP-C3 show an unclear distinction between invasion capacities. Normally, we would expect the control cells to invade somewhat similarly, in terms of area covered, within the same time interval (24 hours here). But we clearly see more control cells invading when the invasion is done with KD and fewer control cells invading when the invasion is done with OE. Are these just plasmid-specific significant effects on normal cell invasion? This needs to be addressed.

We appreciate the reviewer's thorough evaluation of the manuscript. The figure panels in question, Figure 2B and 2C, represent two different experiments performed independently: the invasion assays performed after knockdown and after overexpression of YEATS2, respectively. We would like to clarify that both panels represent results that are distinct and independent of each other, and that the methods used to knock down and to overexpress YEATS2 also differ. As stated in the Materials and Methods section, the knockdown is performed using lentivirus-mediated transduction of the cells, whereas the overexpression is done using the standard transfection method of directly mixing the transfection reagent with the respective plasmids before adding this mix to the cells. The difference in experimental conditions between these two experiments might have contributed to the differences seen in the controls, as observed previously(3). Hence, we would like to state that the results in Figure 2B and Figure 2C should be evaluated independently of each other.

      (4) In Figure 3G, the Western blot shows an unclear band for YEATS2 in shSP1 cells with YEATS2 overexpression condition. The authors need to clearly identify which band corresponds to YEATS2 in this case.

      The two bands seen in the shSP1+pEGFP-C3-YEATS2 condition correspond to the endogenous YEATS2 band (lower band, indicated by * in the shControl lane) and YEATS2-GFP band (upper band, corresponding to overexpressed YEATS2-GFP fusion protein, which has a higher molecular weight). To avoid confusion, the endogenous band will be highlighted (marked by *) in the lane representing the shSP1+pEGFP-C3-YEATS2 condition in the revised version of the manuscript.

      (5) In ChIP assays with SP1, YEATS2 and p300 which promoter regions were selected for the respective genes? Please provide data for all the different promoter regions that must have been analysed, highlighting the region where enrichment/depletion was observed. Including data from negative control regions would improve the validity of the results.

      Throughout our study, we have performed ChIP-qPCR assays to check the binding of SP1 on YEATS2 and GCDH promoter, and to check YEATS2 and p300 binding on SPARC promoter. Using transcription factor binding prediction tools and luciferase assays, we selected multiple sites on the YEATS2 and GCDH promoter to check for SP1 binding. The results corresponding to the site that showed significant enrichment were provided in the manuscript. The region of SPARC promoter in YEATS2 and p300 ChIP assay was selected on the basis of YEATS2 enrichment found in the YEATS2 ChIP-seq data. We will provide data for all the promoter regions investigated (including negative controls) in the revised version of the manuscript.

      (6) The authors establish a link between H3K27Cr marks and GCDH expression, and this is an already well-known pathway. A critical missing piece is the level of ECSH1 in patient samples. This will clearly delineate if the balance shifted towards crotonylation.

We thank the reviewer for their valuable suggestion. To support our claim, we checked the expression of GCDH and ECHS1 in TCGA HNC RNA-seq data (provided in Figure 4—figure supplement 1A and B) and found that GCDH expression was increased while ECHS1 expression was decreased in tumor compared to normal samples. We hypothesized that higher GCDH expression and decreased ECHS1 expression might lead to an increase in the levels of crotonylation in HNC. To further substantiate our claim, we will check the abundance of ECHS1 in HNC patient samples as part of the revision.

      (7) The p300 ChIP data on the SPARC promoter is confusing. The authors report reduced p300 occupancy in YEATS2-silenced cells, on SPARC promoter. However, this is paradoxical, as p300 is a writer, a histone acetyltransferase (HAT). The absence of a reader (YEATS2) shouldn't affect the writer (p300) unless a complex relationship between p300 and YEATS2 is present. The role of p300 should be further clarified in this case. Additionally, transcriptional regulation of SPARC expression in YEATS2 silenced cells could be analysed via downstream events, like Pol-II recruitment. Assays such as Pol-II ChIP-qPCR could help explain this.

      Using RNA-seq and ChIP-seq analyses, we have shown that YEATS2 affects the expression of several genes by regulating the level of histone crotonylation at gene promoters globally. The histone writer p300 is a promiscuous acyltransferase protein that has been shown to be involved in the addition of several non-acetyl marks on histone residues, including crotonylation(4). Our data provides evidence for the dependency of the writer p300 on YEATS2 in mediating histone crotonylation, as YEATS2 downregulation led to decreased occupancy of p300 on the SPARC promoter (Figure 5F). However, the exact mechanism of cooperativity between YEATS2 and p300 in maintaining histone crotonylation remains to be investigated. To address the reviewer’s concern, we will perform various experiments to delineate the molecular mechanism pertaining to the association of YEATS2 with p300 in regulating histone crotonylation. Following are the experiments that will be performed:

      (a) Co-immunoprecipitation experiments to check the physical interaction between YEATS2 and p300.

      (b) We will check H3K27cr levels on the SPARC promoter and SPARC expression in p300-depleted HNC cells.

      (c) Rescue experiments to check if the decrease in p300 occupancy on the SPARC promoter can be compensated by overexpressing YEATS2.

      (d) As suggested by the reviewer, Pol-II ChIP-qPCR at the promoter of SPARC will be performed in YEATS2-silenced cells to explain the mode of transcriptional regulation of SPARC expression by YEATS2.

      (8) The role of GCDH in producing crotonyl-CoA is already well-established in the literature. The authors' hypothesis that GCDH is essential for crotonyl-CoA production has been proven, and it's unclear why this is presented as a novel finding. It has been shown that YEATS2 KD leads to reduced H3K27cr, however, it remains unclear how the reader is affecting crotonylation levels. Are GCDH levels also reduced in the YEATS2 KD condition? Are YEATS2 levels regulating GCDH expression? One possible mechanism is YEATS2 occupancy on GCDH promoter and therefore reduced GCDH levels upon YEATS2 KD. This aspect is crucial to the study's proposed mechanism but is not addressed thoroughly.

Crotonyl-CoA, the source for histone crotonylation, can be produced by several enzymes in the cell, such as ACSS2, GCDH, ACOX3, etc(5). Since metabolic intermediates produced during several cellular pathways can act as substrates for epigenetic factors, we wanted to investigate whether such an epigenetics-metabolism crosstalk existed in the context of YEATS2. As described in the manuscript, we performed GSEA using publicly available TCGA RNA-seq data and found that patients with higher YEATS2 expression also showed a high correlation with expression levels of genes involved in the lysine degradation pathway, including GCDH. Since the preferential binding of YEATS2 to H3K27cr and the role of GCDH in producing crotonyl-CoA were known(6,7), we hypothesized that higher H3K27cr in HNC could be a result of both YEATS2 and GCDH. We found that the presence of GCDH in the nucleus of HNC cells is correlated with higher H3K27cr abundance, which could be a result of excess levels of crotonyl-CoA produced via GCDH. We also found a correlation between H3K27cr levels and YEATS2 expression, which could arise from YEATS2-mediated preferential maintenance of crotonylation. Thus, although YEATS2 is a reader protein, it affects promoter H3K27cr levels, possibly by helping in the recruitment of p300 (as shown in Figure 5F). YEATS2 and GCDH are therefore both responsible for the regulation of histone crotonylation-mediated gene expression in HNC.

      We did not find any evidence of YEATS2 regulating the expression of GCDH in HNC cells. However, we found that YEATS2 downregulation reduced the nuclear pool of GCDH in head and neck cancer cells (Figure 7F). This suggests that YEATS2 not only regulates histone crotonylation by affecting promoter H3K27cr levels (with p300), but also by affecting the nuclear localization of crotonyl-CoA producing GCDH. Also, we observed that the expression of YEATS2 and GCDH are regulated by the same transcription factor SP1 in HNC. We found that the transcription factor SP1 binds to the promoter of both genes, and its downregulation led to a decrease in their expression (Figure 3 and Figure 7).

      We would like to state that the relationship between YEATS2 and the nuclear localization of GCDH, as well as the underlying molecular mechanism, remains unexplored and presents an open question for future investigation.

      (9) The authors should provide IHC analysis of YEATS2, SPARC alongside H3K27cr and GCDH staining in normal vs. tumor tissues from HNC patients.

      We thank the reviewer for their suggestion. We are consulting our clinical collaborators to assess the feasibility of including this IHC analysis in our revision and will make every effort to incorporate it.

      Reviewer #2 (Public review):

      Summary:

      The manuscript emphasises the increased invasive potential of histone reader YEATS2 in an SP1-dependent manner. They report that YEATS2 maintains high H3K27cr levels at the promoter of EMT-promoting gene SPARC. These findings assigned a novel functional implication of histone acylation, crotonylation.

      We thank the reviewer for the constructive comments. We are committed to making beneficial changes to the manuscript in order to alleviate the reviewer’s concerns.

      Concerns:

      (1) The patient cohort is very small with just 10 patients. To establish a significant result the cohort size should be increased.

      We thank the reviewer for this suggestion. We will increase the number of patient samples to assess the levels of YEATS2 and H3K27cr in normal vs. tumor samples.

      (2) Figure 4D compares H3K27Cr levels in tumor and normal tissue samples. Figure 1G shows overexpression of YEATS2 in a tumor as compared to normal samples. The loading control is missing in both. Loading control is essential to eliminate any disparity in protein concentration that is loaded.

      In Figures 1G and 4D, we have used Ponceau S staining as a control for equal loading. Ponceau S staining is frequently used as an alternative to housekeeping proteins like GAPDH as a loading control (8). It avoids the potential variability in housekeeping gene expression, although it may be less quantitative. To address the reviewer’s concern, we will probe with an antibody against a housekeeping protein as a loading control in the revised figures, provided its expression remains stable across the conditions tested.

      (3) Figure 4D only mentions 5 patient samples checked for the increased levels of crotonylation and hence forms the basis of their hypothesis (increased crotonylation in a tumor as compared to normal). The sample size should be more and patient details should be mentioned.

      A total of 9 samples were checked for H3K27cr levels (5 of them are included in Figure 4D and the rest in Figure 4—figure supplement 1D). However, as part of the revision, we will check H3K27cr levels in more patient samples.

      (4) YEATS2 maintains H3K27Cr levels at the SPARC promoter. The p300 is reported to be hyper-activated (hyperautoacetylated) in oral cancer. Probably, the activated p300 causes hyper-crotonylation, and other protein factors cause the functional translation of this modification. The authors need to clarify this with a suitable experiment.

      In our study, we have shown that p300 is dependent on YEATS2 for its recruitment on the SPARC promoter. As a part of the revision, we propose the following experiments to further substantiate the role of p300 in YEATS2-mediated gene regulation:

      (a) Co-immunoprecipitation experiments to check the physical interaction between YEATS2 and p300.

      (b) We will check H3K27cr levels on the SPARC promoter and SPARC expression in p300-depleted HNC cells.

      (c) Rescue experiments to check if the decrease in p300 occupancy on the SPARC promoter can be compensated by overexpressing YEATS2.

      (d) Pol-II ChIP-qPCR at the promoter of SPARC will be performed in YEATS2-silenced cells to explain the mode of transcriptional regulation of SPARC expression by YEATS2.

      (5) I do not entirely agree with using GAPDH as a control in the western blot experiment since GAPDH has been reported to be overexpressed in oral cancer.

      We would like to clarify that GAPDH was not used as a loading control for protein expression comparisons between normal and tumor samples. GAPDH was used as a loading control only in experiments using head and neck cancer cell lines where shRNA-mediated knockdown or overexpression was employed. These manipulations specifically target the genes of interest and are not expected to alter GAPDH expression, making it a suitable loading control in these instances.

      (6) The expression of EMT markers has been checked in shControl and shYEATS2 transfected cell lines (Figure 2A). However, their expression should first be checked directly in the patients' normal vs. tumor samples.

      We thank the reviewer for the suggestion. To address this, we will check the expression of EMT markers alongside YEATS2 expression in normal vs. tumor samples.

      (7) In Figure 3G, knockdown of SP1 led to the reduced expression of YEATS2 controlled gene Twist1. Ectopic expression of YEATS2 was able to rescue Twist1 partially. In order to establish that SP1 directly regulates YEATS2, SP1 should also be re-introduced upon the knockdown background along with YEATS2 for complete rescue of Twist1 expression.

      To address the reviewer’s concern regarding the partial rescue of Twist1 in SP1 depleted-YEATS2 overexpressed cells, we will perform the experiment as suggested by the reviewer. In brief, we will overexpress both SP1 and YEATS2 in SP1-depleted cells and then assess the expression of Twist1.

      (8) In Figure 7G, the expression of EMT genes should also be checked upon rescue of SPARC expression.

      We thank the reviewer for the suggestion. We will check the expression of EMT markers upon YEATS2/GCDH rescue and update Figure 7G in the revised version of the manuscript.

      References

      (1) T. Brabletz, R. Kalluri, M. A. Nieto and R. A. Weinberg, Nat Rev Cancer, 2018, 18, 128–134.

      (2) P. Pisani, M. Airoldi, A. Allais, P. Aluffi Valletti, M. Battista, M. Benazzo, R. Briatore, S. Cacciola, S. Cocuzza, A. Colombo, B. Conti, A. Costanzo, L. Della Vecchia, N. Denaro, C. Fantozzi, D. Galizia, M. Garzaro, I. Genta, G. A. Iasi, M. Krengli, V. Landolfo, G. V. Lanza, M. Magnano, M. Mancuso, R. Maroldi, L. Masini, M. C. Merlano, M. Piemonte, S. Pisani, A. Prina-Mello, L. Prioglio, M. G. Rugiu, F. Scasso, A. Serra, G. Valente, M. Zannetti and A. Zigliani, Acta Otorhinolaryngol Ital, 2020, 40, S1–S86.

      (3) J. Lin, P. Zhang, W. Liu, G. Liu, J. Zhang, M. Yan, Y. Duan and N. Yang, Elife, 2023, 12, RP87510.

      (4) X. Liu, W. Wei, Y. Liu, X. Yang, J. Wu, Y. Zhang, Q. Zhang, T. Shi, J. X. Du, Y. Zhao, M. Lei, J.-Q. Zhou, J. Li and J. Wong, Cell Discov, 2017, 3, 17016.

      (5) G. Jiang, C. Li, M. Lu, K. Lu and H. Li, Cell Death Dis, 2021, 12, 703.

      (6) D. Zhao, H. Guan, S. Zhao, W. Mi, H. Wen, Y. Li, Y. Zhao, C. D. Allis, X. Shi and H. Li, Cell Res, 2016, 26, 629–632.

      (7) H. Yuan, X. Wu, Q. Wu, A. Chatoff, E. Megill, J. Gao, T. Huang, T. Duan, K. Yang, C. Jin, F. Yuan, S. Wang, L. Zhao, P. O. Zinn, K. G. Abdullah, Y. Zhao, N. W. Snyder and J. N. Rich, Nature, 2023, 617, 818–826.

      (8) I. Romero-Calvo, B. Ocón, P. Martínez-Moya, M. D. Suárez, A. Zarzuelo, O. Martínez-Augustin and F. S. de Medina, Anal Biochem, 2010, 401, 318–320.

  5. griersplagueyear.wordpress.com
    1. “Subordinates,” Garrett said. “Okay, so under ‘Communication,’ here’s the first comment. ‘He’s not good at cascading information down to staff.’ Was he a whitewater rafter, Clark? I’m just curious.” “Yes,” Clark said, “I’m certain that’s what the interviewee was talking about. Actual literal cascades.” “This one’s my other favorite. ‘He’s successful in interfacing with clients we already have, but as for new clients, it’s low-hanging fruit. He takes a high-altitude view, but he doesn’t drill down to that level of granularity where we might actionize new opportunities.’ ” Clark winced. “I remember that one. I think I may have had a minor stroke in the office when he said that.”

      This memory of who they were shows that what they did didn't give them meaning. Looking back also allows Clark to reflect on how much he has grown.

    2. “No, wait, don’t write that down. Let me rephrase that. Okay, let’s say he’ll change a little, probably, if you coach him, but he’ll still be a successful-but-unhappy person who works until nine p.m. every night because he’s got a terrible marriage and doesn’t want to go home, and don’t ask how I know that, everyone knows when you’ve got a terrible marriage, it’s like having bad breath, you get close enough to a person and it’s obvious. And you know, I’m reaching here, but I’m talking about someone who just seems like he wishes he’d done something different with his life, I mean really actually almost anything—is this too much?

      She is telling the truth of life.

    1. Contrary to expectations, there was no significant difference in sitting in the square versus the Kanizsa control. We justify that, although this study is the first to formally investigate cats’ attraction to 2D shapes, further experimental validity is needed to directly compare the stimuli. Furthermore, the Kanizsa control was likely an unsuitable comparison for contour treatment to the square. If performed again, a second control/fourth stimulus could be developed to better compare behavior towards the Kanizsa versus the square. Furthermore, to better understand cats’ elusive attraction to enclosures, future controls could introduce three-dimensional sides to the Kanizsa, square, and control

      This section highlights a key finding—cats did not significantly prefer the Kanizsa square over the control, which contradicts initial expectations. It’s interesting that the researchers acknowledge the need for improved experimental design, suggesting a new control stimulus for better comparison. This makes me wonder how different 3D elements might influence the results. Would adding slight physical barriers change the cats’ behavior, or is their attraction to enclosed spaces more complex than just visual cues?

    1. In 2019 the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram. So, what does social media do to the mental health of teenage girls, and to all its other users? The answer is of course complicated and varies. Some have argued that Facebook’s own data is not as conclusive as you think about teens and mental health [m1]. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance: People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat. Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image [m2] Comedian and director Bo Burnham has his own observations about how social media is influencing mental health: “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. 
You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.” Director Bo Burnham On Growing Up With Anxiety — And An Audience [m3] - NPR Fresh Air (10:15-11:20) It can be difficult to measure the effects of social media on mental health since there are so many types of social media, and it permeates our cultures even of people who don’t use it directly. Some researchers have found that people using social media may enter a dissociation state [m4], where they lose track of time (like what happens when someone is reading a good book). Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset [m5].

      The chapter's discussion of 'trauma dumping' resonated with me. I've noticed an increase in unfiltered sharing of personal traumas on social media platforms. While it's essential to have spaces for open expression, I'm concerned about the potential emotional burden this places on unsuspecting readers and whether such platforms are suitable for processing deep-seated issues. How can we balance authentic sharing with the need to protect the mental well-being of the broader online community?

    2. 13.1. Social Media Influence on Mental Health# In 2019 the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram. So, what does social media do to the mental health of teenage girls, and to all its other users? The answer is of course complicated and varies. Some have argued that Facebook’s own data is not as conclusive as you think about teens and mental health [m1]. Many have anecdotal experiences with their own mental health and those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance: People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat. Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image [m2] Comedian and director Bo Burnham has his own observations about how social media is influencing mental health: “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. 
You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.” Director Bo Burnham On Growing Up With Anxiety — And An Audience [m3] - NPR Fresh Air (10:15-11:20) It can be difficult to measure the effects of social media on mental health since there are so many types of social media, and it permeates our cultures even of people who don’t use it directly. Some researchers have found that people using social media may enter a dissociation state [m4], where they lose track of time (like what happens when someone is reading a good book). Researchers at Facebook decided to try to measure how their recommendation algorithm was influencing people’s mental health. So they changed their recommendation algorithm to show some people more negative posts and some people more positive posts. They found that people who were given more negative posts tended to post more negatively themselves. Now, this experiment was done without informing users that they were part of an experiment, and when people found out that they might be part of a secret mood manipulation experiment, they were upset [m5]. 13.1.1. Digital Detox?# Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox [m6], where people take some form of a break from social media platforms and digital devices. 
While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter. In her essay “The Great Offline,” [m7] Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them). In the 19th century, as wilderness tourism was taking off as an industry, natural landscapes were figured as an antidote to the social pressures of urban living, offering truth in place of artifice, interiority in place of exteriority, solitude in place of small talk. Similarly, advocates for digital detox build an idealized “offline” separate from the complications of modern life: Sherry Turkle, author of Alone Together, characterizes the offline world as a physical place, a kind of Edenic paradise. “Not too long ago,” she writes, “people walked with their heads up, looking at the water, the sky, the sand” — now, “they often walk with their heads down, typing.” […] Gone are the happy days when families would gather around a weekly televised program like our ancestors around the campfire! But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed: I’m no stranger to apps that help me curb my screen time, and I’ll admit I’ve often felt better for using them. 
But on a more communal level, I suspect that cultures of digital detox — in suggesting that the online world is inherently corrupting and cannot be improved — discourage us from seeking alternative models for what the internet could look like. I don’t want to be trapped in cycles of connection and disconnection, deleting my social media profiles for weeks at a time, feeling calmer but isolated, re-downloading them, feeling worse but connected again. For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together. So in this chapter, we will not consider internet-based social media as inherently toxic or beneficial for mental health. We will be looking for more nuance and where things go well, where they do not, and why.

      This chapter talks about how social media affects mental health, especially for teenage girls. It shows that social media can help us connect but also make us feel lonely and anxious. Bo Burnham talks about how people are always worried about how they look online, which can cause a lot of anxiety. The part about “Snapchat Dysmorphia” is pretty shocking because it shows that filters are changing how people see themselves, making them feel unhappy with their real appearance. I found it interesting when Lauren Collee argued that quitting social media isn’t the solution because it ignores the bigger issue of how social media is designed. It made me think about whether social media can be made healthier or if it’s just made to keep us hooked by playing on our fears and insecurities.

    1. Denotational Design as a Real Process

      Denotational design is a process that has been developed in depth by Conal Elliott.

      Core Principle of Stepping Back from Implementation

      We don't want to jump in and say, ‘An image is an array of pixels.’ That’s too soon, yet that’s where most of us start.

      Abstract Definition of an Image

      An image is just a function from a pixel location, so an X, Y coordinate to color, where X, Y are in the real number space.

      Emphasis on Algebraic Properties and Category Theory

      He uses algebraic properties and category theory. I think algebraic properties are a very good indicator that you are, ‘on to something’ in the design.

      Incremental, Iterative Refinement

      You have to go back and revise and you make an attempt in a certain direction, and you learn something, and you bring that back to the beginning.

      Four Steps of the Denotational Design Process

      These are the four steps that I see... This first one is to...like a Zenning out and forgetting all implementation assumptions...Then you explore...Then you align with category theory concepts...Then the final thing is actually implementing it.

      Challenges with Haskell’s Type System

      Haskell has no type for real numbers. Most languages don’t...Another thing is, when you’re talking about say, the Monad laws or the Functor laws...there’s no way to do that equality comparison.

      Similar Difficulties in Clojure

      I do think it's a little harder than in Haskell, but I also think that most of the design part is happening in your head.

      The Essence of Denotational Design

      It’s about going back to first principles, building things up, understanding how things compose, and following a different gradient from what most people use when they design.
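
      The "image as a function" denotation described above can be sketched in ordinary code. This is a hypothetical illustration, not Elliott's actual API — the names `Color`, `solid`, `disc`, `shift`, and `over` are mine: an image is just a function from real (x, y) coordinates to a color, and operations like translation and layering are defined by composing functions, with no pixel array in sight.

      ```python
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Color:
          r: float
          g: float
          b: float
          a: float  # alpha, so images can be layered

      # The denotation: an Image is any function (x: float, y: float) -> Color.

      def solid(c):
          """An image with the same color at every point."""
          return lambda x, y: c

      def disc(radius, c):
          """A colored disc centered at the origin, transparent outside."""
          return lambda x, y: c if x * x + y * y <= radius * radius else Color(0, 0, 0, 0)

      def shift(img, dx, dy):
          """Translate an image by transforming its *domain*, not its pixels."""
          return lambda x, y: img(x - dx, y - dy)

      def over(top, bottom):
          """Pointwise alpha compositing: the meaning of layering two images."""
          def blended(x, y):
              t, b = top(x, y), bottom(x, y)
              a = t.a + b.a * (1 - t.a)
              mix = lambda tc, bc: tc * t.a + bc * b.a * (1 - t.a)
              return Color(mix(t.r, b.r), mix(t.g, b.g), mix(t.b, b.b), a)
          return blended

      # A red disc shifted to the right, layered over a solid blue background.
      img = over(shift(disc(1.0, Color(1, 0, 0, 1)), 2.0, 0.0), solid(Color(0, 0, 1, 1)))
      print(img(0.0, 0.0))  # background shows through where the disc isn't
      print(img(2.0, 0.0))  # the disc's center after shifting: red
      ```

      Because images stay functions until the very end, it is natural to ask which algebraic laws the combinators satisfy (for instance, how `over` behaves under composition) before ever committing to an array-of-pixels implementation — the "stepping back" and property-checking the transcript describes.
      
      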

    1. Note: This response was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity):

      The authors describe a genome-wide CRISPR screen in mouse ES cells to identify factors and genes that positively and negatively regulate FGF/ERK signaling during differentiation. Among known and potentially novel regulators, the Mediator subunit Med12 was a strong hit in the screen, and it was clearly and extensively shown that the loss of Med12 results in impaired FGF/ERK signal responsiveness, modulation of mRNA levels and disturbed cell differentiation leading to reduced stem cell plasticity.<br /> This is a very concise and well written manuscript that demonstrates for the first time the important role of Med12 in ES cells and during early cell differentiation. The results support data that had been previously observed in Med12 mouse models and in addition show that Med12 cooperates with various signaling systems to control gene expression during early lineage decision.

      We thank the reviewer for their positive evaluation of our work.

      Fig. 3 Supp1A-B:<br /> The loci of all three independent Med12 mutant clones and the absence of Med12 should be included. Are all three Med12 loss-of-function mutants?

      In the revised version of the manuscript, we have updated the scheme in Fig. 3 Supp 1A to represent both deletions that were obtained with the CRISPR guides used. Both the more common 97 bp deletion as well as the 105 bp deletion that occurred in one clonal line result in a complete loss of the protein on the western blot (Fig. 3 Supp. 1B), suggesting that all mutant clones used for further experiments are loss-of-function mutants.

      Minor:<br /> Line 466: Should be Fig. 6F, not 6E.

      We have removed this figure panel and the corresponding text in response to the other reviewers' comments.

      Reviewer #1 (Significance):

      The CRISPR screen identified list of some novel interesting factors that regulate FGF/ERK signaling in ES cells. Med12 was then analyzed in very detail on various levels and under various differentiation conditions, resulting in a complex picture how Med12 controls stem cell plasticity. These data support results observed in mouse models and identified novel regulating mechanisms of Med12.

      Reviewer #2 (Evidence, reproducibility and clarity):

      In the manuscript "Med12 cooperates with multiple differentiation signals to enhance embryonic stem cell plasticity" Ferkorn and Schröter report on the role of Med12 in mouse embryonic stem cells. They perform an elegant genetic screen to identify regulators of Spry4 in mouse ESCs, screening for mutations that increase and decrease Spry4-reporter expression in serum/LIF conditions. They find that Med12 deletion results in defects in the exit from naïve pluripotency and in PrE-formation upon Gata-TF overexpression. Using scRNAseq experiments they report a reduction in biological noise in Med12 KO cells differentiating towards PrE upon Gata6 OE.

      Major points:<br /> 1) The title might not exactly reflect the scientific findings of the manuscript. There is little direct evidence for a decrease in plasticity upon Med12 depletion.

      We have changed the title to "Med12 cooperates with multiple differentiation signals to facilitate efficient lineage transitions in embryonic stem cells". In addition, we have toned down claims that Med12 regulates plasticity throughout the manuscript.

      2) Fig 1G: From the data provided it is not entirely clear how well screen results can be validated. Did some of the mutants identified in the screen also produce no detectable phenotypes? What would be the phenotype of knocking out an unrelated gene? In other words, are some of the weak phenotypes really showing Spry4 downregulation or are they withing the range of biological variance?

      Fluorescence levels in Fig. 1G have been normalized to control wild-type cells (dashed red line). Absence of a detectable phenotype would have resulted in normalized fluorescence values around 1. Fluorescence values of all tested mutants were significantly different from 1, as indicated in the statistical analysis given in the figure legend. Furthermore, H2B-Venus fluorescence of cells transfected with a non-targeting control vector are shown in Fig. 1F, and are not different from that of untransfected control wild-type cells. We have now added an explicit explanation how we normalized the data to the figure legend of Fig. 1G, and hope that this addresses the reviewer's concern.

      3) Rescue experiments by re-expressing Med12 in Med12 KO ESCs are missing. Can the differentiation and transcriptional phenotypes be rescued?

      We agree with the reviewer that a rescue experiment re-expressing Med12 would be ideal to ensure that the observed phenotypes are specifically due to loss of Med12. However, we could not identify commercially available full-length Med12 cDNA clones. Even though we managed to amplify full-length Med12 cDNA after reverse transcription, we were unable to clone it into expression vectors. These observations suggest that specific properties of the Med12 cds make the construction of expression vectors by conventional means difficult, and solving these issues is beyond the scope of this study.

      Throughout the study we used multiple independent clonal lines in multiple experimental readouts and obtained congruent results. The reduced expression of pluripotency genes, for example, was observed in bulk sequencing of the lines introduced in Fig. 3, and by single-cell sequencing of independently generated Med12-mutant GATA6-mCherry inducible lines (Fig. 5 Supp. 1B). We argue that this congruence makes it unlikely that the results are dominated by off-target effects.

      4) L365: The subheading "Transitions between embryonic... buffered against loss of Med12" is confusing. The data simply shows that Med12 KOs can still, albeit less efficiently generate PrE upon Gata TF OE. Is there evidence for some active buffering? I think the authors could simply report the data as is, stating that the phenotypes are not a complete block but an impairment of differentiation.

      Prompted by the reviewer's comment as well as remarks along similar lines by reviewer #4, we have completely reorganized this section and now present all the analysis pertaining to PrE differentiation in a new figure 4. In the revised text (lines 316 - 378), we refrain from any speculations about possible buffering and simply report the data as is, as suggested by the reviewer.

      5) L386: Would it not make more sense to reduce dox concentrations in control cells to equalize Gata6 OE to equalize levels between Med12 KO and controls? A shorter pulse of Gata6 does not really directly address unequal expression levels due to loss of Med12. Different pulse length of OE might have consequences that the authors do not control for. This also impacts scRNAseq experiments which suffer from the same, in my opinion, suboptimal experimental setup. This is a point that needs to be addressed.

      We agree with the reviewer that it would have been desirable to equalize GATA6 overexpression levels between wild-type and Med12-mutant cells while keeping induction time the same. In our experience however, reducing the dox concentration is not suitable to achieve this: Rather than reducing transgene expression levels across the board, lower dox concentrations tend to increase the variability within the population - see Fig. 2 in PMID: 16400644 for an example. Since we agree with the reviewer that the setup of the scRNAseq experiment limits our ability to draw conclusions regarding the separation of cell states, we have decided to remove these analyses in the revised manuscript. In doing so, we have reorganized the previous figures 5 and 6 into a new single figure 4. This has made the manuscript more concise and allowed us to focus on the main phenotype of the Med12 mutant cells, namely their delayed exit from pluripotency.

      6) The reduced transcript number in Med12 KOs is interesting, but how does it come about. Is there indeed less transcriptional activity or is reduced transcript numbers a side effect of slower growth or the different cell states between WT and Med12 mutants. Appropriate experiments to address this should be performed.

      To address this point, we have performed EU labeling experiments to compare RNA synthesis rates between wild-type and Med12-mutant cells during the exit from pluripotency. These experiments confirmed an increase in mRNA production upon differentiation for both wild-type and Med12-mutant cells, but the method was not sensitive enough to detect any differences between wild-type and Med12-mutant cells within the same condition. The EU labeling thus supports the notion that overall transcriptional rate increases during differentiation, but leaves open the possibility that reduced mRNA levels in Med12-mutant cells arise from effects other than reduced transcriptional output. These new analyses are shown in Fig. 4 Supp. 3 and described in the main text in lines 373 - 378.

      7) Is the proposed reduction of biological noise a feature of the PrE differentiation experiments, or can it also be observed in epiblast differentiation?

      To address this question, we have carried out single-cell measurements of Spry4 and Nanog mRNA numbers to compare transcriptional variability between wild-type and Med12-mutant cells during epiblast differentiation (new Fig. 3 Supp. 1G, H). These measurements confirmed the differences between genotypes in mean expression levels detected by RNA sequencing. However, this analysis did not reveal strong differences in mRNA number distributions. Furthermore, as discussed in point 6 above, our interpretations of noise levels in the PrE differentiation paradigm could have been influenced by the unequal GATA6 induction times. Finally, reviewer #4 pointed out that 10x Genomics scRNAseq is not ideal for comparing noise levels when total mRNA content differs between samples, as is the case in our dataset. We therefore decided to tone down our conclusions regarding altered noise levels in Med12-mutant cells.

      8) I cannot follow the authors' logic that Med12 loss results in enhanced separation between lineages. How is this experimentally supported?

      As discussed in point 6 above, this result could have been influenced by the unequal induction times between wild type and Med12-mutant cells. We have therefore decided to remove this analysis in the revised version of the manuscript.

      Minor points:<br /> Fig 3, Supp1 A: What exactly are the black and blue highlighted letters?

      The black and blue highlighted letters indicate whether bases are part of an intron or an exon. Exon 7 is now explicitly labelled in the figure, and the meaning of the highlighting is explained in the figure legend.

      Reviewer #2 (Significance):

      Overall, this is an interesting study. The screen has been performed to a high technical standard and differentiation defects were appropriately analyzed. The manuscript has some weaknesses in investigating the molecular mode of action of Med12 which could be improved to provide more significant insights.

      Reviewer #3 (Evidence, reproducibility and clarity):

      The authors sought to identify genes important for the transcriptional changes needed during mouse ES cell differentiation. They identified a number of genes and focussed on Med12, as it was the strongest hit from a cluster of Mediator components.

      Using knockout ES cells, differentiation assays, bulk and scRNAseq, they clearly show that Med12 is important for transgene activation and for gene activation generally during exit from self-renewal, but it is not specifically influencing differentiation efficacy per se. Rather, cells lacking Med12 display "a reduced ability to react to changing culture conditions" and, by inference, to environmental changes. They conclude that Med12 "contributes to the maintenance of cellular plasticity during differentiation and lineage transitions."

      Med12 is a structural component of the kinase module of Mediator, but it is not clear what this study tells us about Mediator function. The authors state that their results contrast with those obtained using a Cdk8 inhibitor, which resulted in increased self-renewal (lines 577-580). I'm not sure where their results show "...that loss of Med12 leads to reduced pluripotency." (lines 579-580). They do not test the potency of these cells. There is reduced expression of some pluripotency-associated markers and fewer colonies formed in a plating assay, but these assays do not test cellular potency.

      We agree with the reviewer that our RNA sequencing and colony formation assays do not exhaustively test cellular potency. We have therefore changed the wording in the paragraphs that describe these assays and now talk about "reduced pluripotency gene expression" (e.g. lines 20, 228, 461, 512).

      While their phenotype certainly appears different from that reported in cells treated with Cdk8 inhibitor, it's not clear to me what to make of it, or what it might tell us about the function of the Mediator Kinase module or of Mediator. That a co-activator is important for gene expression in general, or even for gene activation upon receipt of some signal, is not really surprising.

      We believe that reporting differences in the phenotypes obtained with Cdk8 inhibition versus knock-out of Med12 is relevant, because it yields new insight into the different functions that the components of the Mediator kinase module have in pluripotent cells. We have previously discussed possible reasons for these functional differences (discussion line 519 - 528), and further expand on them in the revised manuscript.

      Minor points:

      It is surprising they don't relate their work to that of Hamilton et al (https://doi.org/10.1038/s41586-019-1732-z) who conclude that differentiation from the ES cell state towards primitive endoderm is compromised without Med24.

      Thank you for pointing out this omission. We now cite the work of Hamilton et al., in line 317 (related to new Fig. 4) and 537 - 538 in the discussion.

      Stylistic point: please make the separation between paragraphs more obvious. With no indentation or extra spacing between paragraphs it looks like one solid mass of words.

      Reviewer #3 (Significance):

      There is a lot of careful work here, but I'm not getting a big conclusion here. Perhaps the authors could argue their main points somewhat more stridently and what we've learned beyond this current system.

      Prompted by the reviewer's comment, we have re-organized the functional analyses of Med12 function in the manuscript by condensing the previous figures 5 and 6 into a new single figure 4. We have removed all discussions of transcriptional noise and plasticity, and now focus more strongly on the slowed pluripotency transitions as the main phenotype of the Med12 mutant cells. These changes make the manuscript more concise, and we hope that they help to deliver a single, clear message to the reader.

      Reviewer #4 (Evidence, reproducibility and clarity):

      Fernkorn and Schröter report the results of a screen in mESCs based on modulation of the fluorescent intensity of the Spry4:H2B-Venus reporter. They identify candidate genes that both positively and negatively modulate the expression of the reporter. Amongst those are several known regulators of the FGF pathway (a transcriptional activator of Spry4) that serve as a positive control for the screen. The manuscript focuses on characterisation of Med12, and the authors conclude that Med12 does not specifically affect FGF targets. Paradoxically, the authors show that, based on the expression of key naïve markers, Med12 cells show delayed differentiation. Functionally, however, Med12 mutant cells at 48hrs can form fewer colonies when plated back in naïve conditions (which would normally indicate accelerated differentiation). The authors conclude that Med12 mutants have "a reduced ability to react to changing culture conditions". Next, they examine how the Med12 mutation affects embryonic/extraembryonic differentiation using an inducible Gata6 expression system. They show that transgene induction is slower and dampened in mutant cells and that overall the balance of fates is skewed towards embryonic cells. Finally, they use single cell RNA sequencing and observe differences in the number of mRNAs detected, as well as the separation between clusters in the mutant cells. They conclude that the mutants have reduced transcriptional noise levels.

      Overall, it was an interesting article exploring the molecular consequences of knocking out a subunit of the Mediator complex. The characterisation focuses primarily on the description of the screen and the more functional consequences of the KO, rather than delving into the molecular aspects (e.g. whether Mediator complex assembly is affected, or its binding, etc.). The analysis of the transcriptional noise will be of particular interest to the community, although I have some suggestions to exclude the possibility that the analysis simply reflects changes in global transcription levels. I have a small number of concerns and requests for clarification on the data, but all of them should be relatively easy to address.

      Major points:

      • Med12, transcription levels and noise (Figure 6G, J-L). This is an intriguing observation. The labelling and multiplexing helped resolve many of the issues typically associated with comparing 10x datasets. I have two observations about this analysis:<br /> 1) Clarify how the number of mRNA counts per cell is calculated (figure 6F) - the methods only described a value normalised by the total number of counts per cell.

      The mRNA counts shown in the figure correspond to the raw number of UMIs detected per cell. We now explicitly state this in the figure legend. Please note that after re-organizing the manuscript, former Fig. 6F has become Fig. 4 Supp. 3A.

      I feel this observation is key and has repercussions for the interpretation of the data (see point below) and should be independently validated (although I recognise it's difficult!). Since the authors observed differences in a randomly integrated transgene (iGata experiments), it's possible/likely that the dysregulation of transcription output is more generic. A possible suggestion is measuring global mRNA synthesis and degradation rates, either using inhibitors or by adding modified nucleotides and measuring incorporation rate and loss through pulse/chase labelling.

      We have performed an EU labeling experiment to address this point, which is shown in Fig. 4 Supp. 3 and described in the main text in lines 373 - 378 of the revised manuscript. Please refer to our response to reviewer #2, point 6 for a short description of the results.

      2) 10x is not ideal for looking at heterogeneity/noise since it has a low capture efficiency and there are a lot of gaps/zeros in the lower expression range. Therefore, it's possible that mutant cells simply have a dampened transcriptional output, meaning lowly expressed genes which in the WT contribute to the apparent heterogeneity (because there is a higher chance of not being captured) are below the 10x detection range in the mutant. This can be seen by plotting the cumulative sum of the mean gene count across each sample - the 50% mark (=mean gene count at 50% detection) reflects a measure of the "capture efficiency" (either because of technical reasons or lower mRNA input). Generally (e.g. also seen across technical repeats), the mean coefficient of variation, entropy and other measures of population heterogeneity directly scale with this "mean gene count at 50% detection", while the cell-cell correlation inversely scales with the "mean gene count at 50% detection". If these scaling relationships are observed for the WT and mutant, then it is impossible to say from the single cell RNA-seq whether the differences in heterogeneity are due to biological or technical reasons. Unfortunately, down-sampling the reads does not generally correct or normalise for this type of technical noise since the technical errors accumulate at every step of sample prep. Of course, it's possible that the technical noise in the RNAseq obfuscates real differences in the level of noise. The failure of mutant cells to re-establish the naïve network certainly suggests there is something going on. Therefore, I suggest performing the analysis of capture efficiency vs CV2 mentioned above and adjusting the discussion accordingly, and potentially performing single molecule FISH of key variable genes at the interface of the two clusters to validate the difference in heterogeneity.

      As suggested by the reviewer, we have performed single molecule FISH measurements of variable genes (Fig. 3 Supp. 1 G, H), but these did not provide independent evidence for increased noise levels in Med12 mutant cells. In light of the caveats raised by reviewer #4 when estimating noise levels from 10x scRNAseq data, and the suggestion of reviewer #3 to sharpen the focus of the manuscript, we have decided to remove any strong conclusions about different noise levels between the genotypes. Instead, we focus on the slowed pluripotency transitions as the main phenotype of the Med12 mutant cells to make the manuscript more concise, to deliver a single, clear message.

      • Are Oct4 levels affected? Reduction of Oct4 is sufficient to block differentiation (Radzisheuskaya et al. 2013 - PMID: 23629142).

      We thank the reviewer for this idea. We measured OCT4 expression levels in single cells via quantitative immunostaining and found that there is no difference between wild-type and Med12-mutant cells. It is therefore unlikely that lowered OCT4 levels block differentiation in the mutant. These new results are shown in Fig. 5, Supp. 1 D, E.

      • Med12 mutants showing transcriptionally delayed differentiation (related to figure 4C). Is this delay also reflected in the expression of formative genes? If I understand correctly, Figure 4C is made from a panel of naïve markers. It would be good to determine if the formative network is equally affected (and in the same direction - suggesting a delay), or if the transcriptional changes speak to a global dysregulation/dampened expression.

      Prompted by the reviewer's suggestion, we have extended our analysis of the differentiation delays to genes that are upregulated during differentiation, such as formative genes. Rather than trying to come up with a new set of formative markers to produce a variation of the original Fig. 4C (Fig. 5C in the revised manuscript), we have taken an unbiased approach and extended Fig. 5E with a panel showing the distribution of expression slopes of the 100 most upregulated genes determined as in Fig. 5D. This analysis demonstrates a lower upregulation slope in Med12-mutant cells. This result confirms that both the upregulation and downregulation of genes is less efficient upon the loss of MED12, in line with our conclusion of delayed differentiation.

      • Control for the re-plating experiments in 2i/LIF (Figure 4B). Replating in 2iLIF + FBS can have a large selective effect in certain mutant backgrounds (e.g. Nodal mutants) which don't accurately reflect the differentiation status. To exclude such effects, it would be good to repeat the replating assays in serum-free conditions (laminin coating can help with attachment) and include undifferentiated controls to ensure that the mutant doesn't have a clonal disadvantage.

      The reason we have included FBS in the re-plating assays is that in our experience, Fgf4-mutant cells show strongly impaired growth in standard 2i+LIF medium. We anticipate that using laminin coating to help with attachment would not overcome this requirement. We have therefore decided against repeating the re-plating assays. Instead, we state the reason why we used FBS in the main text, and also explicitly acknowledge the reviewer's concern about the risk of selective effects of the FBS and the possible clonal disadvantages of the Med12-mutant line.

      Minor points:<br /> - I found figure 3D and the corresponding text and caption difficult to understand. It is unclear what a "footprint", "relative pathway activity" or "spearman correlation of footprint" mean. Were all the genes listed below Med12 knocked out and sequenced in this study? I suggest re-working and maybe simplifying the text and figure.

      We re-worked the description of the pathway analysis and stated more clearly that:

      • The footprint is a quantitative measure of the differences in gene expression change of a defined list of target genes between wild-type and perturbation.
      • Only the Med12 mutant data is new data produced in this manuscript and all examples below are from Lackner et al., 2021.

      We think that a more extensive explanation of the terms "relative pathway activity" and "spearman correlation of footprint" would disturb the flow of the manuscript too much. Therefore, we now cite the original paper just next to the sentence these terms are mentioned.

      In figure S1 Sup1 the authors report the dose response of targets to FGF - are those affected in the mutant?

      In this manuscript we have not tested if the dose response of FGF target genes changes upon perturbation of Med12. We argue that such an experiment would be beyond the scope of the current manuscript, since - as acknowledged by the reviewer - "Med12 does not specifically affect FGF-targets".

      • Similarly, it would be helpful to guide the reader through figure 5H-I and the corresponding text and caption since it's not immediately obvious how the analysis/graphs lead to the conclusion stated.

      As a consequence of our reorganization of the manuscript, the original figure 5H-I has been moved to Fig. 4, Supp. 1 in the revised version. The analysis strategy has been described in more detail in one of our previous publications (PMID: 26511924). In keeping with our general decision to make the manuscript more focused and concise, we have decided against further expanding on these data, but instead refer the reader to the original publication.

      • Role of Med12 in regulating FGF signalling. There are two observations that seem a bit at odds with the text description, and it would be helpful to clarify: "ppERK levels were indistinguishable between wild-type and Med12-mutant lines" (line 222) - 5/6 datapoints show an increase. "[...] overall these results argue against a strong and specific role of Med12 in regulation of FGF target genes." (line 274). If I understood correctly, ~50% of genes are differentially transcribed because of Med12 KO.

      To address the reviewer's first question, we have performed a statistical test on the quantifications of the western blots. This test indicates that there is no significant change in ppERK levels upon loss of MED12, which is now stated clearly in the text (line 217).

      Second, to clarify why our data argues against a strong and specific role of Med12 in regulation of FGF target genes, we now formulate an expectation (lines 276 - 277): If MED12 specifically regulated FGF target genes, the number of differentially expressed genes would be higher in the wild-type than in the Med12-mutant upon stimulation with FGF. This however is not the case.

      • "[...] as well as transitions between different pluripotent states" (line 41) - references missing.

      We have added a reference to PMID: 28174249 (line 39).

      • Line 447: "differentiation conditions" - it's unclear what it's mean by differentiation and how it relates to the diagram in figure 6A. Are those the 20hr cells? Do the -8h, -4hr and 0hr cells (if I understand the meaning of the diagram) cluster all together?

      We now specify in the text that pluripotency conditions refer to cells maintained in 2i + LIF medium, whereas differentiation refers to cells switched to N2B27 after the doxycycline pulse (lines 341 - 342).

      • The difference in dynamics of mCherry activation as a consequence of Med12 KO are not apparent from figure 5E. It might be easier to visualise this observation if x-axis was normalised to the starting point plotting "time from start of induction".

      We agree with the reviewer that the current alignment has not been optimized to compare GATA6 induction dynamics between wild-type and Med12-mutant cells. If we changed the alignment however, it would not be clear any longer that both genotypes were in N2B27 for the same amount of time before analyzing Epi and PrE differentiation. Since our focus is on the differentiation of the two lineages rather than GATA6-mCherry induction dynamics, we decided to keep the original alignment.

      • Figure 3H/I - what does "gene expression changes" and "fold change ratio" mean?

      In Fig. 3H, we plot the fold change of gene expression upon FGF4 stimulation in Med12-mutant versus that in wild-type cells; in Fig. 3I we plot the distribution of the ratio of these two fold changes across all genes. To make this strategy clearer, we have changed the axis label in Fig. 3H to "expression fold change upon FGF", to make it consistent with the axis label "fold-change ratio" in Fig. 3I.

      • Line 579-580 - please clarify what is meant by "reduced pluripotency".

      Prompted by a similar concern raised by reviewer #3, we have changed the wording throughout this paragraph and now talk of "reduced pluripotency gene expression". See also our response to reviewer #3 above.

      • Title: "enhance ESC plasticity". not sure enhance is the right word? There is no evidence that the plasticity of cells is affected.

      We have changed the title; see also our response to reviewer #2, point 1.

      Reviewer #4 (Significance):

      Overall, it was an interesting article exploring the molecular consequences of knocking out a subunit of the Mediator complex. The characterisation focuses primarily on the description of the screen and the more functional consequences of the KO, rather than delving into the molecular aspects (e.g. whether Mediator complex assembly is affected, or its binding, etc.). The analysis of the transcriptional noise will be of particular interest to the community, although I have some suggestions to exclude the possibility that the analysis simply reflects changes in global transcription levels. I have a small number of concerns and requests for clarification on the data, but all of them should be relatively easy to address.

    1. Introduction to Signals & Reactivity

      "Signals are values that change over time... The key to a reactive system is that it knows when you set the value; it looks for the set on the specific property, the specific function... and it reads with a function."

      • The speaker emphasizes that signals conceptually hold a current value rather than providing a continuous stream of intermediate states.
      • Signals update synchronously, ensuring that any consumer remains consistent with the current snapshot of data.
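
      The read/write tracking described above can be sketched in a few lines. This is an illustrative toy runtime, not any framework's actual implementation; `createSignal` and `createEffect` are hypothetical names echoing Solid-style APIs:

      ```typescript
      // Toy signal runtime (hypothetical names): reads performed inside an
      // effect register that effect as a subscriber; writes notify all
      // subscribers synchronously, so consumers always see the current value.
      type Effect = () => void;
      let currentEffect: Effect | null = null;

      function createSignal<T>(value: T): [() => T, (v: T) => void] {
        const subscribers = new Set<Effect>();
        const read = () => {
          if (currentEffect) subscribers.add(currentEffect); // track dependency
          return value;
        };
        const write = (next: T) => {
          value = next;
          subscribers.forEach((fn) => fn()); // synchronous notification
        };
        return [read, write];
      }

      function createEffect(fn: Effect): void {
        currentEffect = fn;
        fn(); // first run reads its dependencies, registering them
        currentEffect = null;
      }

      const [count, setCount] = createSignal(0);
      const log: number[] = [];
      createEffect(() => log.push(count()));
      setCount(1);
      setCount(2);
      // log is now [0, 1, 2]: one synchronous effect run per write.
      ```

      Because the notification is synchronous, the effect never observes a stale snapshot, which is the consistency property the speaker emphasizes.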

      Immutable vs. Mutable Structures

      "We used to learn the framework and then learn the fundamentals kind of thing... in 2014, JavaScript had already taken off and I admittedly didn’t know very much."

      • The conversation highlights the contrast between immutable data (where changes create new references) and mutable data (where changes happen in place).
      • Immutable updates trigger re-diffing, whereas mutable updates allow granular changes at the specific location.
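
      The contrast can be shown with plain data, no framework required; the point is what a reference (identity) check can and cannot detect under each model:

      ```typescript
      // Plain-data contrast (no framework): identity checks detect immutable
      // updates but miss in-place mutation, which is why immutable systems
      // re-diff new references while mutable systems need per-property tracking.
      type User = { name: string };

      // Immutable update: a new object, hence a new reference.
      const before: User = { name: "Jack" };
      const after: User = { ...before, name: "Janet" };
      const immutableChanged = before !== after; // true: visible to a reference check

      // Mutable update: same object, changed in place.
      const user: User = { name: "Jack" };
      const sameRef = user;
      user.name = "Janet";
      const mutableChanged = user !== sameRef; // false: invisible to a reference check
      ```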

      Nested Signals & Efficiency

      "If we just go in and change Jack’s name to Janet by just setting the name, we only need to re-run the internal effect…it’s only the nearest effect that runs."

      • By nesting signals inside signals, individual property changes can avoid re-running the entire component or the entire data structure.
      • This nested approach demonstrates how specific effects update only the parts that need to change, improving performance.
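
      A minimal sketch of the nesting idea, using a toy runtime (hypothetical `sig`/`effect` helpers, not a real API): the list and an individual name are read by separate effects, so renaming Jack re-runs only the inner one:

      ```typescript
      // Toy runtime for nested signals: the user list is one signal, each
      // user's name another. Renaming a user re-runs only the effect that
      // actually read that name.
      let active: (() => void) | null = null;

      function sig<T>(v: T) {
        const subs = new Set<() => void>();
        return {
          get: () => { if (active) subs.add(active); return v; },
          set: (n: T) => { v = n; subs.forEach((f) => f()); },
        };
      }

      function effect(fn: () => void): void {
        active = fn;
        fn();
        active = null;
      }

      const users = sig([{ name: sig("Jack") }, { name: sig("Jill") }]);
      let listRuns = 0;
      let nameRuns = 0;
      effect(() => { users.get(); listRuns++; });               // reads the list only
      effect(() => { users.get()[0].name.get(); nameRuns++; }); // reads one name

      users.get()[0].name.set("Janet");
      // nameRuns is now 2 (initial run + update); listRuns stays 1,
      // because only the nearest effect that read the name re-ran.
      ```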

      Store Proxies as a “Best of Both Worlds”

      "React basically tells you that this is where you end up—when you know that you can cheat it a little bit, you just get past it."

      • Using proxies allows for deep mutation with minimal overhead, merging the developer simplicity of an immutable interface with the fine-grained updates of mutable change.
      • The speaker points out that many frameworks lack built-in “derived mutable” structures and rely on bridging solutions such as user-space code or specialized stores.
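
      A rough sketch of the proxy idea (illustrative only; real stores are far more involved): assignments stay plain mutations, but the proxy's `set` trap records the exact path that changed, which is what enables fine-grained notification without re-diffing:

      ```typescript
      // Illustrative proxy store (hypothetical `createStore`, not a real API):
      // deep reads return nested proxies, and the set trap records which path
      // changed, so a framework could notify only subscribers of that path.
      const changes: string[] = [];

      function createStore<T extends object>(target: T, path = ""): T {
        return new Proxy(target, {
          get(obj, key) {
            const value = Reflect.get(obj, key);
            if (typeof value === "object" && value !== null) {
              // Wrap nested objects so deep mutations are also intercepted.
              return createStore(value as object, `${path}${String(key)}.`);
            }
            return value;
          },
          set(obj, key, value) {
            changes.push(`${path}${String(key)}`); // record the exact mutated path
            return Reflect.set(obj, key, value);
          },
        });
      }

      const store = createStore({ user: { name: "Jack" } });
      store.user.name = "Janet"; // looks like a plain mutation...
      // ...but changes is now ["user.name"]: the store knows exactly what moved.
      ```

      This is the "best of both worlds" trade: the developer writes mutable-looking code, while the system still learns precisely which leaf changed.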

      Map, Filter & Reduce with Signals

      "We can also avoid allocations here by only storing the final results... but only if you care about final results."

      • Mapping over large datasets illustrates the trade-offs between immutable approaches (always re-map) and mutable approaches (apply partial updates or diffs).
      • The speaker notes that “filter” and “reduce” often involve more complete re-runs and may need specialized logic or custom diffs rather than a one-size-fits-all operator.

      Convergent Nodes & Reactive Graphs

      "Because signals are a value conceptually, not a stream—...the goal of effects isn’t to serve as a log... it’s a way of doing external synchronization with the current state."

      • Computed values (memos) serve as convergence points in the reactive graph. Multiple sources merge into derived data that updates automatically.
      • Fine-grained systems need to track only minimal dependencies, but the conversation repeatedly underscores that different transformations (map, filter, reduce) pose unique challenges.
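
      One way to picture a convergence point with a toy runtime (hypothetical names; real memos are lazy and scheduled, whereas this sketch recomputes eagerly on every source write): two source signals merge into a single derived node that downstream consumers would depend on:

      ```typescript
      // Toy convergence sketch: two sources feed one memo; the memo holds its
      // derived value in its own signal and recomputes when either source changes.
      let running: (() => void) | null = null;

      function signal<T>(v: T) {
        const subs = new Set<() => void>();
        return {
          get: () => { if (running) subs.add(running); return v; },
          set: (n: T) => { v = n; subs.forEach((f) => f()); },
        };
      }

      function memo<T>(calc: () => T): () => T {
        const out = signal(calc());      // derived value lives in its own signal
        running = () => out.set(calc()); // recompute whenever a source changes
        calc();                          // tracked run: subscribes to each source
        running = null;
        return out.get;
      }

      const first = signal("Jack");
      const last = signal("Smith");
      const full = memo(() => `${first.get()} ${last.get()}`);
      first.set("Janet");
      // full() is now "Janet Smith": both sources converge into the memo node.
      ```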

      Async & Suspense Insights

      "Run once doesn’t work—there’s no way to avoid scheduling. We need to always throw or force undefined, so we need suspense."

      • Lazy asynchronous signals can lead to “waterfalls,” where the second request only starts after the first completes.
      • Suspending or temporarily showing old data can prevent blank states but risks double fetching and inconsistent chaining unless carefully guarded.

      Early Returns vs. Control Flow Components

      "Early returns... push decisions upwards where further positions are pushed down, so it might not impact React, but it doesn’t lead to a way forward."

      • The speaker critiques patterns that rely on returning early in components, arguing they duplicate layout logic and sometimes break optimal reactivity.
      • The recommendation is to align with data flow and push condition checks closer to where data is rendered instead of scattering them at multiple return statements.

      Syntax Debates & Framework Convergence

      "Syntax in JS frameworks is overrated... we’re at a point now where the looks are so superficial that you can be looking at complete opposites and it looks identical."

      • Despite the prevalent notion that "runes" syntax in various frameworks resembles React, the underlying reactivity mechanics differ significantly.
      • Discussion highlights that every major framework—React, Vue, Svelte, Solid—converges on signals or reactivity, yet each approach’s details (mutable vs. immutable, compiler vs. runtime) vary widely.

      Conclusion: Future of Signals & Reactive Systems

      "People are starting to wonder if we should just have one super framework now that we mostly agree... but each framework’s identity is in how it approaches these details."

      • The speaker underscores ongoing exploration into data diffing, nesting, and push-pull mechanisms to improve performance while simplifying the developer experience.
      • Signals and granular reactivity are core to bridging a user-friendly interface with minimal overhead, a goal each framework pursues in its own unique, evolving way.
    1. These days, stillness is the new hustle, the new collective goal. I’m just as tired as we all are, just as ready to exhale. I fantasize about moving to the Valley, a suburb outside the city—settling into the aloneness I know so well, before it’s too late to get comfortable at all. Nobody wants a single artist living at the end of their suburban cul-de-sac, front porch blasting Fela in the morning and wafting weed smoke in the afternoon. Planned communities have no tables for one. Protection is built that way.

      I believe that this passage speaks to the moment of the article. The author asserts that "stillness is the new hustle." This suggests the author's belief that, as a society, we are striving for peace in our lives. For most, the idea of stability includes a long-term partner. This article was published in 2022. In the year prior, the world was opening back up from COVID isolation. I believe this context explains both the frustration with dating and the yearning for "stillness." When everything opened back up, dating became more feasible and popular. Presumably, the pressure to find a partner rose alongside it. It is also plausible that the goal of stability is a reaction to the instability of the pandemic, a symptom of the need to move on.

    1. 13.2.2. Trauma Dumping# While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping [m11]. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?). Trauma dumping can be bad for the mental health of those who have this trauma unexpectedly thrown at them, and it also often isn’t helpful for the person doing the trauma dumping either:

      I think it's an interesting concept. I see this all the time on some social platforms. I always get a lot of negative feelings and extreme comments from people. In fact, I don't like the idea of having to listen to other people's traumas when I open a social media platform, and it seems like I have to persuade others even when I'm not in a good mood myself. It seems like it just aggravates the negative emotions instead of being a healthy way to get rid of them and find a solution.

    1. Designed for ceremony as well as warfare, it would impress senior guests with Henry’s royal power and authority.

      It’s fascinating that medieval castles were not just military strongholds but also symbols of political power. I wonder how the architectural details reinforced this image.

    1. Even if a child had a voucher the previous year, the family must go through the whole process again. And in practical terms, advocates and providers say, the relatively tight timeline is a mirage. Getting a voucher often takes months. The therapists also must wait. “I barely get paid for any of the kids in September,” said Carol Schaeffler, a speech language pathologist. “Sometimes it’s months. More often than not, it’s certainly through the end of October

      I think this just goes to show how the whole system is messed up and does not have the students' best interests in mind. I think that creating barriers to entry is important for safety and regulatory purposes, but creating SO many that students cannot even have access to basic needs makes it apparent that something needs to change. I think that the system has failed so many kids and continues to do so, as well as the families and parents that want their children to get the help that they need.

    1. made by queer designers, positing that “permanent living represents a particularly potent trope for expressing both hopes and concerns about contemporary queer life in the face of an uncertain future.” But Ruberg resists the reading of this mechanic as purely utopian:

      In many games I have played in the past, there have been permadeath options, where if you die in a game, it sets the player back to the beginning (ex: "Don't Starve"). Yet, in the article, it is discussed that more liberal leaning individuals (especially ones who are a part of the lgbtq community) have created narratives with “permalife”. Unlike permadeath, these games place the user into inescapable real world issues. This approach to game design makes me wonder about how games can evolve from purely escapist experiences into tools for social commentary. It’s not just about surviving in the game world, but confronting the darker realities that many face in real life.

    1. as a teen growing up in this society, being LGBTQ and being Asian - you could not be both at the same time. That was what I was raised to believe in

      Some cultures put a lot of pressure on people to fit into certain roles, and that makes it even harder for LGBTQ+ youth to be themselves. It’s upsetting that some kids feel like they have to hide who they are just to meet family expectations. No one should have to choose between their culture and their identity. Schools and communities need to support students who face this kind of struggle. Everyone deserves to feel accepted for who they truly are.

    2. I thought for a very long time that I was introverted. I realized that I just wanted to be my true and genuine self - and that’s difficult if people act like it’s weird

      Being true to yourself is important, even if it feels challenging when others don't understand. Embracing your genuine self is a powerful step towards meaningful connection and fighting loneliness. Stay true to yourself and the right people will appreciate you for who you are.

    1. nearly all LGBTQ students of color experienced similar rates of racist harassment, but Black LGBTQ students were more likely than nearly all others to feel unsafe about their race/ethnicity.

      This section shows that being LGBTQ and a person of color makes school even harder. It’s not just about bullying, these students don’t feel safe in a place where they are supposed to learn. Schools should do more to protect all students and make sure no one feels like they have to hide who they are. If teachers and staff understood these struggles better, maybe they could help create a safer environment.

    1. ‘Thank God E.I.S. was spared,’” said Dr. Michael Iademarco, who helped create the Laboratory Leadership Service when he was at the C.D.C. “And my response will be, ‘Yeah, but we just killed the promising half of field investigation, because nobody knows about it.’” The agency has also lost its presidential management fellows, who were assigned to the C.D.C. under a decades-old government initiative that describes itself as “the premier leadership development program for advanced degree holders across all academic disciplines.” Veterans of the health agencies said they were troubled by the seemingly random nature of the cuts.

      These cuts seemed to be happening without order or reason. It's almost like they are drawing names from a hat.

    1. Drawing on theories discussing gender as a process, homophobia, and intersectionality, this chapter examines the pervasiveness of heteronormativity and the varieties of queerness to help readers understand where bias comes from, as well as be attuned to differences in the experiences of gender diverse, creative, and/or nonconforming students and/or sexual minority students.

      This chapter really made me think about how deep gender and sexuality biases go in schools. It’s not just about individual prejudice, but about a whole system that reinforces certain norms while pushing others to the margins. I found it interesting how the reading connects homophobia, transphobia, and sexism, showing how they all come from the same roots. It makes sense because society often expects people to fit into rigid categories, and those who don’t are seen as different or even a threat.

    2. By suggesting to adults that there are more possible identities for students to inhabit than adults might consider normal or even possible, such play may indicate not only adult insufficiency of understanding but perhaps also adult lack of control of young people's identities.

      I think it's interesting how this idea of "control" over young people's identities is applied to all adults, and not just from parents onto their kids. It shows how not just parents, but teachers, counselors, administrators, and other adult authority figures also feel they need to control the identities of young people, and when they can't, they find it alarming. This whole idea just seems strange to me, though. Why do some adults seem to care so much about how a young person identifies on a private and personal level? Is it purely because it's seen as breaking the norm, and is therefore wrong? It just seems like a waste of time and effort to punish these students, especially when they aren't hurting anybody or themselves.

    1. Now, there is a central aspect of UI that we have not discussed yet, and yet is likely one of the most important aspects of designing clear user interfaces: typography.

      Many people think these aesthetics are small details, but their impact is huge. I've noticed the key to good typography is that it seems invisible, or that it blends naturally into the entire design because it's easy to process and seems fitting. This is similar to design features discussed in this—and the last few—chapters, where functions that are necessary or intuitive are often 'invisible' because the user finds them naturally fitting. Overall, I find that this aligns with a lot of what Ko discusses on anticipating user needs when making the experience seamless for them, and typography is just another way of streamlining the UX.

    2. Throughout these choice of inputs are critical issues of diversity, equity, and inclusion. For example, if Google could only be used with a mouse, it would immediately exclude all people who cannot use a mouse because of a disability such as a motor impairment or blindness.

      This point about accessibility really stood out to me. It’s a reminder that good design isn’t just about efficiency or aesthetics—it’s about inclusivity. Many digital tools assume a certain type of user, but not everyone interacts with technology in the same way. The example of Google being inaccessible if it only allowed mouse input is a great way to highlight how small design decisions can have big consequences. I think this ties into the broader discussion of how defaults and implicit inputs shape user experience—sometimes in ways that unintentionally exclude people. I wonder if tech companies are doing enough to actively design with accessibility in mind, rather than just retrofitting it after complaints. Are there examples of companies that have truly prioritized accessibility from the start, rather than as an afterthought?

    1. What would it take for you to move to the mountains? MountainBlog, Annina UZH, Tuesday, 28 January 2025. Written by Tamar Kutubidze, Nini Lagvilava, Sonja Lussi & Charlene Zehnder. A collaboration between students from Tbilisi State University and the University of Zurich. Imagine a serene village nestled in the Swiss Alps, with breathtaking views and quiet streets that seem straight out of a storybook. Now, imagine this village isn't just a fairytale, it is a place willing to pay you to call it home. Welcome to Albinen, a small village in the Valais mountains of Switzerland. Perched 1'300 meters above sea level, Albinen has only 240 residents (SWI swissinfo, 2017). In 2017, facing a bleak future, Albinen took a bold step. The plan? Offer monetary incentives to attract new residents. To qualify, applicants needed to be under 45, commit to staying at least 10 years, and invest 200'000 Swiss Francs in property development (Siebrecht, 2017). Fast forward to seven years later: has the plan worked? Albinen's goal was modest, to attract five families in five years, with the hope of ten families in ten years. By 2022, the initiative looked promising on paper. Albinen approved 17 applications, supported 31 adults and 16 children, and spent CHF 710'000. However, the head of the municipality remains unconvinced (Lynch 2023). Despite the program's success in applications, Albinen's population dropped from 273 to 262 between 2017-2023 (Metry 2024). Infrastructure challenges remain a significant issue, and integration has been slow. A local of Albinen reported that newly arrived residents are rarely seen in the village (Lynch 2023), sparking concerns that they might view Albinen as a second-home destination rather than a permanent community. This leads us to ask: are these newcomers committed to revitalizing Albinen, or are they simply seeking a picturesque retreat? Svaneti, Georgia (image source: https://www.caucasus-trekking.com/regions/svaneti); Albinen, Switzerland (image source: https://www.borghisvizzera.ch/de/scheda/albinen). Depopulation of mountainous regions isn't unique to Albinen. It's also a challenge in Georgia's Caucasus Mountains, where issues like limited infrastructure, rural economies, and poor connectivity drive people to seek better opportunities in the lowlands (Telbisz et al., 2020). The Georgian government addresses this by offering financial aid, agricultural subsidies, and housing support in remote areas. In regions like Svaneti and Tusheti, eco-tourism initiatives are combined with efforts to encourage permanent settlement. Mountain regions in both countries, Georgia and Switzerland, therefore face similar issues with depopulation. Almost a quarter of the population lives in the Alps, yet many mountain villages are seeing dwindling numbers (Alpenkonvention, 2015). While the approaches differ, both countries share the same goal: revitalization. Albinen's initiative drew international media attention and still receives up to 100 applications daily from Germany, Austria, Croatia, Sri Lanka, Mexico, and Brazil (Hess 2017). The problem: the press omitted key details, giving people from around the world false hope for a better life in Switzerland. Most applications fail to meet the requirements, creating unnecessary work for the municipality (Lynch 2023). While Albinen achieved its target of attracting families, its deeper goal of transforming into a thriving, cohesive community remains elusive. Research suggests that successful revitalization initiatives require more than financial incentives. They need robust infrastructure, opportunities for community engagement, and long-term planning (Telbisz et al., 2020). In Georgia, the stakes are high. Mountain villages are more than homes; they are living monuments to ancient traditions, music, and architecture. Revitalizing these areas could preserve a unique cultural heritage while supporting ecological sustainability. However, achieving this requires a balanced approach that ensures both integration and sustainable development. With the right strategies, Georgia's mountain villages could thrive again as vibrant, self-sustaining communities. So, what would it take for you to move to the mountains? Would breathtaking views and monetary incentives be enough, or does it take something deeper, like a sense of belonging? The examples of Albinen, Svaneti and Tusheti offer no easy solutions but invite us to reflect on what truly makes a place feel like home.

      This blog compares the high-mountain regions of Georgia and Switzerland with respect to their shared problem (why people leave high-mountain regions and villages). The text presents various methods of addressing this problem; for example, to grow the population of the Swiss village of Albinen, the authorities decided to give new residents financial motivation (attracting residents with monetary incentives). This plan partly worked: Albinen approved 17 applications, supported 31 adults and 16 children, and spent CHF 710'000. Yet although the program managed to attract applicants, the village's population fell from 273 to 262 between 2017 and 2023, since the main challenge remains infrastructural development. Looking at Georgia's example as well, its high-mountain regions such as Svaneti and Tusheti face the same problems: limited infrastructure, a weak economy, and insufficient connections. Research shows that financial incentives alone will not keep people in the mountains. Robust infrastructure, opportunities for community engagement, and a long-term strategy are essential. A financial bonus alone is not enough to create a genuine community.

    2. What would it take for you to move to the mountains?

      This article, created through the joint work of students from Tbilisi State University and the University of Zurich, makes the reader reflect on the common problem facing the high-mountain regions of Georgia and Switzerland. As the article makes clear, the Swiss village of Albinen, and in Georgia, Tusheti and Svaneti, are experiencing a rather high rate of population decline. It is interesting how they deal with this. For example, the Swiss village of Albinen tries to grow its population through financial incentives, such as an investment of 200'000 Swiss francs in real estate in exchange for a commitment to live in the village for 10 years, while in Georgia the government uses financial assistance and the promotion of eco-tourism to counter the outflow of population. Despite all this, the fact remains that the problem persists, which is tied to challenges of infrastructure and social integration. Regardless of the financial offers, it is hard for people to live in conditions that are uncomfortable and far from good.

      I think that so-called financial incentives alone are not enough to retain the population. In my view, improving village life requires developing infrastructure, creating employment opportunities, and promoting local culture. If new residents do not have a high quality of life and opportunities to connect with the community, they will not stay for long and will never come to see the place as their own home.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review): 

      Summary: 

      In this work, Noorman and colleagues test the predictions of the "four-stage model" of consciousness by combining psychophysics and scalp EEG in humans. The study relies on an elegant experimental design to investigate the respective impact of attentional and perceptual blindness on visual processing. 

      The study is very well summarised, the text is clear and the methods seem sound. Overall, a very solid piece of work. I haven't identified any major weaknesses. Below I raise a few questions of interpretation that may possibly be the subject of a revision of the text. 

      We thank the reviewer for their positive assessment of our work and for their extremely helpful and constructive comments that helped to significantly improve the quality of our manuscript.

      (1) The perceptual performance on Fig1D appears to show huge variation across participants, with some participants at chance levels and others with performance > 90% in the attentional blink and/or masked conditions. This seems to reveal that the procedure to match performance across participants was not very successful. Could this impact the results? The authors highlight the fact that they did not resort to postselection or exclusion of participants, but at the same time do not discuss this equally important point. 

      Performance was indeed highly variable between observers, as is commonly found in attentional-blink (AB) and masking studies. For some observers, the AB pushes performance almost to chance level, whereas for others it has almost no effect. A similar spread can be seen in masking. We did our best to match accuracy over participants, while also matching accuracy within participants as well as possible, adjusting mask contrast manually during the experimental session. Naturally, the participants who are strongly affected by masking need not be the same as those who are strongly affected by the AB, given that the two rely on different mechanisms (which is also one of the main points of the manuscript). To answer the research question, what mattered most was that, at the group level, performance was well matched between the two key conditions, as all our statistical inferences, both for behavior and EEG decoding, rest on this group level. We do not think that variability at the individual-subject level detracts from this general approach.

      In the Results, we added that our goal was to match performance across participants:

      “Importantly, mask contrast in the masked condition was adjusted using a staircasing procedure to match performance in the AB condition, ensuring comparable perceptual performance in the masked and the AB condition across participants (see Methods for more details).”

      In the Methods, we added:

      “Second, during the experimental session, after every 32 masked trials, mask contrast could be manually updated in accordance with our goal to match accuracy over participants, while also matching accuracy within participants as well as possible.”
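      To make the procedure concrete, the block-wise adjustment could be sketched as follows. This is a minimal illustration only: the target accuracy and step size are hypothetical stand-ins, not the values used in the experiment.

```python
# Illustrative sketch of block-wise, accuracy-based adjustment of mask
# contrast. The target accuracy (0.75) and step size (0.05) are
# hypothetical values, not the study's actual settings.

def update_contrast(contrast, block_responses, target=0.75, step=0.05):
    """Increase mask contrast (making the task harder) when block
    accuracy exceeds the target, decrease it when accuracy falls
    below the target; clamp the result to [0, 1]."""
    accuracy = sum(block_responses) / len(block_responses)
    if accuracy > target:
        return min(1.0, contrast + step)
    if accuracy < target:
        return max(0.0, contrast - step)
    return contrast

# A block of 32 trials with 28 correct responses (87.5% correct):
# accuracy is above target, so contrast goes up by one step.
new_contrast = update_contrast(0.5, [1] * 28 + [0] * 4)
```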

      (2) In the analysis on collinearity and illusion-specific processing, the authors conclude that the absence of a significant effect of training set demonstrates collinearity-only processing. I don't think that this conclusion is warranted: as the illusory and nonillusory share the same shape, so more elaborate object processing could also be occurring. Please discuss. 

      We agree with this qualification of our interpretation, and included the reviewer’s account as an alternative explanation in the Discussion section:  

      “It should be noted that not all neurophysiological evidence unequivocally links processing of collinearity and of the Kanizsa illusion to lateral and feedback processing, respectively (Angelucci et al., 2002; Bair et al., 2003; Chen et al., 2014), so that overlap in decoding the illusory and non-illusory triangle may reflect other mechanisms, for example feedback processes representing the triangular shapes as well.”

      (3) Discussion, lines 426-429: It is stated that the results align with the notion that processes of perceptual segmentation and organization represent the mechanism of conscious experience. My interpretation of the results is that they show the contrary: for the same visibility level in the attentional blind or masking conditions, these processes can be implicated or not, which suggests a role during unconscious processing instead. 

      We agree with the reviewer that the interpretation of this result depends on the definition of consciousness that one adheres to. If one takes report as the leading metric for consciousness (= conscious access), one can indeed conclude that perceptual segmentation/organization can also occur unconsciously. However, if the processing that results in the qualitative nature of an image (rather than whether it is reported) is taken as leading (= phenomenal consciousness) – such as the processing that results in the formation of an illusory percept – the conclusion can be quite different. This speaks to the still ongoing debate regarding the existence of phenomenal vs. access consciousness, and to the literature on no-report paradigms, among others (see the last paragraph of the Discussion). Because the current data do not speak directly to this debate, we decided to remove the sentence about “conscious experience”, and edited this part of the manuscript (also addressing a comment about preserved unconscious processing during masking by Reviewer 2) by limiting the interpretation of unconscious processing to those aspects that are uncontroversial:

      “Such deep feedforward processing can be sufficient for unconscious high-level processing, as indicated by a rich literature demonstrating high-level (e.g., semantic) processing during masking (Kouider & Dehaene, 2007; Van den Bussche et al., 2009; van Gaal & Lamme, 2012). Thus, rather than enabling deep unconscious processing, preserved local recurrency during inattention may afford other processing advantages linked to its proposed role in perceptual integration (Lamme, 2020), such as integration of stimulus elements over space or time.”

      (4) The two paradigms developed here could be used jointly to highlight nonidiosyncratic NCCs, i.e. EEG markers of visibility or confidence that generalise regardless of the method used. Have the authors attempted to train the classifier on one method and apply it to another (e.g. AB to masking and vice versa)? What perceptual level is assumed to transfer? 

      To avoid issues with post-hoc selection of (visible vs. invisible) trials (discussed in the Introduction), we did not divide our trials into conscious and unconscious trials, and thus did not attempt to reveal NCCs, or NCCs generalizing across the two paradigms. Note also that this approach alone would not resolve the debate regarding the ‘true’ NCC as it hinges on the operational definition of consciousness one adheres to; also see our response to the previous point the reviewer raised. Our main analysis revealed that the illusory triangle could be decoded with above-chance accuracy during both masking and the AB over extended periods of time with similar topographies (Fig. 2B), so that significant cross-decoding would be expected over roughly the same extended period of time (except for the heightened 200-250 ms peak). However, as our focus was on differences between the two manipulations and because we did not use post-hoc sorting of trials, we did not add these analyses.

      (5) How can the results be integrated with the attentional literature showing that attentional filters can be applied early in the processing hierarchy? 

      Compared to certain manipulations of spatial attention, the AB phenomenon is generally considered to represent an instance of “late” attentional filtering. In the Discussion section we included a paragraph on classic load theory, where early and late filtering depend on perceptual and attentional load. Just preceding this paragraph, we added this:

      “Clearly, these findings do not imply that unconscious high-level (e.g., semantic) processing can only occur during inattention, nor do they necessarily generalize to other forms of inattention. Indeed, while the AB represents a prime example of late attentional filtering, other ways of inducing inattention or distraction (e.g., by manipulating spatial attention) may filter information earlier in the processing hierarchy (e.g., Luck & Hillyard, 1994 vs. Vogel et al., 1998).”

      Reviewer #2 (Public Review): 

      Summary: 

      This is a very elegant and important EEG study that unifies within a single set of behaviorally equated experimental conditions conscious access (and therefore also conscious access failures) during visual masking and attentional blink (AB) paradigms in humans. By a systematic and clever use of multivariate pattern classifiers across conditions, they could dissect, confirm, and extend a key distinction (initially framed within the GNWT framework) between 'subliminal' and 'pre-conscious' unconscious levels of processing. In particular, the authors could provide strong evidence to distinguish here within the same paradigm these two levels of unconscious processing that precede conscious access : (i) an early (< 80ms) bottom-up and local (in brain) stage of perceptual processing ('local contrast processing') that was preserved in both unconscious conditions, (ii) a later stage and more integrated processing (200-250ms) that was impaired by masking but preserved during AB. On the basis of preexisting studies and theoretical arguments, they suggest that this later stage could correspond to lateral and local recurrent feedback processes. Then, the late conscious access stage appeared as a P3b-like event. 

      Strengths: 

      The methodology and analyses are strong and valid. This work adds an important piece in the current scientific debate about levels of unconscious processing and specificities of conscious access in relation to feed-forward, lateral, and late brain-scale top-down recurrent processing. 

      Weaknesses: 

      - The authors could improve clarity of the rich set of decoding analyses across conditions. 

      - They could also enrich their Introduction and Discussion sections by taking into account the importance of conscious influences on some unconscious cognitive processes (revision of traditional concept of 'automaticity'), that may introduce some complexity in Results interpretation 

      - They should discuss the rich literature reporting high-level unconscious processing in masking paradigms (culminating in semantic processing of digits, words or even small group of words, and pictures) in the light of their proposal (deeper unconscious processing during AB than during masking). 

      We thank the reviewer for their positive assessment of our study and for their insightful comments and helpful suggestions that helped to significantly strengthen our paper. We provide a more detailed point-by-point response in the “recommendations for the authors” section below. In brief, we followed the reviewer’s suggestions and revised the Results/Discussion to include references to influences on unconscious processes and expanded our discussion of unconscious effects during masking vs. AB.  

      Reviewer #3 (Public Review): 

      Summary: 

      This work aims to investigate how perceptual and attentional processes affect conscious access in humans. By using multivariate decoding analysis of electroencephalography (EEG) data, the authors explored the neural temporal dynamics of visual processing across different levels of complexity (local contrast, collinearity, and illusory perception). This is achieved by comparing the decodability of an illusory percept in matched conditions of perceptual (i.e., degrading the strength of sensory input using visual masking) and attentional impairment (i.e., impairing top-down attention using attentional blink, AB). The decoding results reveal three distinct temporal responses associated with the three levels of visual processing. Interestingly, the early stage of local contrast processing remains unaffected by both masking and AB. However, the later stages of collinearity and illusory percept processing are impaired by the perceptual manipulation but remain unaffected by the attentional manipulation. These findings contribute to the understanding of the unique neural dynamics of perceptual and attentional functions and how they interact with the different stages of conscious access.

      Strengths: 

      The study investigates perceptual and attentional impairments across multiple levels of visual processing in a single experiment. Local contrast, collinearity, and illusory perception were manipulated using different configurations of the same visual stimuli. This clever design allows for the investigation of different levels of visual processing under similar low-level conditions. 

      Moreover, behavioural performance was matched between perceptual and attentional manipulations. One of the main problems when comparing perceptual and attentional manipulations on conscious access is that they tend to impact performance at different levels, with perceptual manipulations like masking producing larger effects. The study utilizes a staircasing procedure to find the optimal contrast of the mask stimuli to produce a performance impairment to the illusory perception comparable to the attentional condition, both in terms of perceptual performance (i.e., indicating whether the target contained the Kanizsa illusion) and metacognition (i.e., confidence in the response). 

      The results show a clear dissociation between the three levels of visual processing in terms of temporal dynamics. Local contrast was represented at an early stage (~80 ms), while collinearity and illusory perception were associated with later stages (~200-250 ms). Furthermore, the results provide clear evidence in support of a dissociation between the effects of perceptual and attentional processes on conscious access: while the former affected both neuronal correlates of collinearity and illusory perception, the latter did not have any effect on the processing of the more complex visual features involved in the illusion perception. 

      Weaknesses: 

      The design of the study and the results presented are very similar to those in Fahrenfort et al. (2017), reducing its novelty. Similar to the current study, Fahrenfort et al. (2017) tested the idea that if both masking and AB impact perceptual integration, they should affect the neural markers of perceptual integration in a similar way. They found that behavioural performance (hit/false alarm rate) was affected by both masking and AB, even though only the latter was significant in the unmasked condition. An early classification peak was instead only affected by masking. However, a late classification peak showed a pattern similar to the behavioural results, with classification affected by both masking and AB. 

      The interpretation of the results mainly centres on the theoretical framework of the recurrent processing theory of consciousness (Lamme, 2020), which leads to the assumption that local contrast, collinearity, and the illusory perception reflect feedforward, local recurrent, and global recurrent connections, respectively. It should be mentioned, however, that this theoretical prediction is not directly tested in the study. Moreover, the evidence for the dissociation between illusion and collinearity in terms of lateral and feedback connections seems at least limited. For instance, Kok et al. (2016) found that, whereas bottom-up stimulation activated all cortical layers, feedback activity induced by illusory figures led to a selective activation of the deep layers. Lee & Nguyen (2001), instead, found that V1 neurons respond to illusory contours of the Kanizsa figures, particularly in the superficial layers. They all mention feedback connections, but none seem to point to lateral connections.

      Moreover, the evidence in favour of primarily lateral connections driving collinearity seems mixed as well. On one hand, Liang et al. (2017) showed that feedback and lateral connections closely interact to mediate image grouping and segmentation. On the other hand, Stettler et al. (2002) showed that, whereas the intrinsic connections link similarly oriented domains in V1, V2 to V1 feedback displays no such specificity. Furthermore, the other studies mentioned in the manuscript did not investigate feedback connections but only lateral ones, making it difficult to draw any clear conclusions. 

      We thank the reviewer for their careful review and positive assessment of our study, as well as for their constructive criticism and helpful suggestions. We provide a more detailed point-by-point response in the “recommendations for the authors” section below. In brief, we addressed the reviewer’s comments and suggestions by better relating our study to Fahrenfort et al.’s (2017) paper and by highlighting the limitations inherent in linking our findings to distinct neural mechanisms (in particular, to lateral vs. feedback connections).

      Recommendations for the authors:  

      Reviewer #1 (Recommendations For The Authors): 

      -  Methods: it states that "The distance between the three Pac-Man stimuli as well as between the three aligned two-legged white circles was 2.8 degrees of visual angle". It is unclear what this distance refers to. Is it the shortest distance between the edges of the objects? 

      It is indeed the shortest distance between the edges of the objects. This is now included in the Methods.

      -  Methods: It's unclear to me if the mask updating procedure during the experimental session was based on detection rate or on the perceptual performance index reported in Fig. 1D. Please clarify. 

      It was based on accuracy calculated over 32 trials. We have included this information in the Methods.

      -  Methods and Results: I did not understand why the described procedure used to ensure that confidence ratings are not contaminated by differences in perceptual performance was necessary. To me, it just seems to make the "no manipulations" and "both manipulations" less comparable to the other 2 conditions. 

      To calculate accurate estimates of metacognitive sensitivity for the two matched conditions, we wanted participants to make use of the full confidence scale (asking them to distribute their responses evenly over all ratings within a block). By mixing all conditions in the same block, we would have run the risk of participants anchoring their confidence ratings to the unmatched very easy and very difficult conditions (no and both manipulations condition). We made this point explicit in the Results section and in the Methods section:

      “To ensure that the distribution of confidence ratings in the performance-matched masked and AB condition was not influenced by participants anchoring their confidence ratings to the unmatched very easy and very difficult conditions (no and both manipulations condition, respectively), the masked and AB condition were presented in the same experimental block, while the other block type included the no and both manipulations condition.”

      “To ensure that confidence ratings for these matched conditions (masked, long lag and unmasked, short lag) were not influenced by participants anchoring their confidence ratings to the very easy and very difficult unmatched conditions (no and both manipulations, respectively), one type of block only contained the matched conditions, while the other block type contained the two remaining, unmatched conditions (masked, short lag and unmasked, long lag).”

      - Methods: what priors were used for Bayesian analyses? 

      Bayesian statistics were calculated in JASP (JASP Team, 2024) with default prior scales (Cauchy distribution, scale 0.707). This is now added to the Methods.

      - Results, line 162: It states that classifiers were applied on "raw EEG activity" but the Methods specify preprocessing steps. "Preprocessed EEG activity" seems more appropriate. 

      We changed the term to “preprocessed EEG activity” in the Methods and to “(minimally) preprocessed EEG activity (see Methods)” in the  Results, respectively.

      - Results, line 173: The effect of masking on local contrast decoding is reported as "marginal". If the alpha is set at 0.05, it seems that this effect is significant and should not be reported as marginal. 

      We changed the wording from “marginal” to “small but significant.”  

      - Fig. 1: The fixation cross is not displayed. 

      Because adding the fixation cross would have made the figure of the trial design look crowded and less clear, we decided to exclude it from this schematic trial representation. We are now stating this also in the legend of figure 1.  

      - Fig 3A: In the upper left panel, isn't there a missing significant effect of the "local contrast training and testing" condition in the first window? If not, this condition seems oddly underpowered compared to the other two conditions. 

      Thanks for the catch! The highlighting in bold and the significance bar were indeed lacking for this condition in the upper left panel (blue line). We corrected the figure in our revision.

      - Supplementary text and Fig S6: It is unclear to me why the two control analyses (the black lines vs. the green and purple lines) are pooled together in the same figure. They seem to test for different, non-comparable contrasts (they share neither training nor testing sets), and I find it confusing to find them on the same figure. 

      We agree that this may be confusing, and deleted the results from one control analysis from the figure (black line, i.e., training on contrast, testing on illusion), as the reviewer correctly pointed out that it displayed a non-comparable analysis. Given that this control analysis did not reveal any significant decoding, we now report its results only in the Supplementary text.  

      - Fig S6: I think the title of the legend should say testing on the non-illusory triangle instead of testing on the illusory triangle to match the supplementary text. 

      This was a typo – thank you! Corrected.  

      Reviewer #2 (Recommendations For The Authors): 

      Issue #1: One key asymmetry between the three levels of T2 attributes (i.e.: local contrast; non-illusory triangle; illusory Kanizsa triangle) is related to the top-down conscious posture driven by the task that was exclusively focusing on the last attribute (illusory Kanizsa triangle). Therefore, any difference in EEG decoding performance across these three levels could also depend on this asymmetry. For instance, if participants were engaged to report local contrast or non-illusory triangle, one could wonder if decoding performance could differ from the one used here. This potential confound was addressed by the authors by using decoders trained in different datasets in which the main task was to report one of the two other attributes. They could then test how classifiers trained on the task-related attribute behave on the main dataset. However, this part of the study is crucial but not 100% clear, and the links with the results of these control experiments are not fully explicit. Could the authors better clarify this important point (see also Issue #1 and #3). 

      The reviewer raises an important point, alluding to potential differences between decoded features regarding task relevance. There are two separate sets of analyses where task relevance may have been a factor, our main analyses comparing illusion to contrast decoding, and our comparison of collinearity vs. illusion-specific processing.  

      In our main analysis, we are indeed reporting decoding of a task-relevant feature (illusion) and of a task-irrelevant feature (local contrast, i.e., rotation of the Pac-Man inducers). Note, however, that the Pac-Man inducers were always task-relevant, as they needed to be processed to perceive illusory triangles, so that local contrast decoding was based on task-relevant stimulus elements, even though participants did not respond to local contrast differences in the main experiment. However, we also ran control analyses testing the effect of task-relevance on local contrast decoding in our independent training data set and in another (independent) study, where local contrast was, in separate experimental blocks, task-relevant or task-irrelevant. The results are reported in the Supplementary Text and in Figure S5. In brief, task-relevance did not improve early (70–95 ms) decoding of local contrast. We are thus confident that the comparison of local contrast to illusion decoding in our main analysis was not substantially affected by differences in task relevance. In our previous manuscript version, we referred to these control analyses only in the collinearity-vs-illusion section of the Results. In our revision, we added the following in the Results section comparing illusion to contrast decoding:

      “In the light of evidence showing that unconscious processing is susceptible to conscious top-down influences (Kentridge et al., 2004; Kiefer & Brendel, 2006; Naccache et al., 2002), we ran control analyses showing that early local contrast decoding was not improved by rendering contrast task-relevant (see Supplementary Information and Fig. S5), indicating that these differences between illusion and contrast decoding did not reflect differences in task-relevance.”

      In addition to our main analysis, there is the concern that our comparison of collinearity vs. illusion-specific processing may have been affected by differences in task-relevance between the stimuli inducing the non-illusory triangle (the “two-legged white circles”, collinearity-only) and the stimuli inducing the Kanizsa illusion (the Pac-Man inducers, collinearity-plus-illusion). We would like to emphasize that in our main analysis classifiers were always used to decode T2 illusion presence vs. absence (collinearity-plus-illusion), and never to decode T2 collinearity-only. To distinguish collinearity-only from collinearity-plus-illusion processing, we only varied the training data (training classifiers on collinearity-only or collinearity-plus-illusion), using the independent training data set, where collinearity-only and collinearity-plus-illusion (and rotation) were task-relevant (in separate blocks). As discussed in the Supplementary Information, for this analysis approach to be valid, collinearity-only processing should be similar for the illusory and the non-illusory triangle, and this is what control analyses demonstrated (Fig. S7). In any case, general task-relevance was equated for the collinearity-only and the collinearity-plus-illusion classifiers.  

      Finally, in supplementary Figure 6 we also show that our main results reported in Figure 2 (discussed at the top of this response) were very similar when the classifiers were trained on the independent localizer dataset in which each stimulus feature could be task-relevant.  

      Together, for the reasons described above, we believe that differences in EEG decoding performance across these three stimulus levels are unlikely to depend on a “task-relevance” asymmetry.

      Issue #2: Following on my previous point the authors should better mention the concept of conscious influences on unconscious processing that led to a full revision of the notion of automaticity in cognitive science [1, 2, 3, 4]. For instance, the discovery that conscious endogenous temporal and spatial attention modulate unconscious subliminal processing paved the way to this revision. This concept raises the importance of Issue #1: equating performance on the main task across AB and masking is not enough to guarantee that differences of neural processing of the unattended attributes of T2 (i.e.: task-unrelated attributes) are not, in part, due to this asymmetry rather than to a systematic difference of unconscious processing strength [5, 6-8]. Obviously, the reported differences for real-triangle decoding between AB and masking cannot be totally explained by such a factor (because this is a task-unrelated attribute for both AB and masking conditions), but still this issue should be better introduced, addressed, clarified (Issue #1 and #3) and discussed. 

      We would like to refer to our response to the previous point: Control analyses for local contrast decoding showed that task relevance had no influence on our marker for feedforward processing. Most importantly, as outlined above, we did not perform real-triangle decoding – all our decoding analyses focused on comparing collinearity-only vs. collinearity-plus-illusion were run on the task-relevant T2 illusion (decoding its presence vs. absence). The key difference was solely the training set, where the collinearity-only classifier was trained on the (task-relevant) real triangle and the collinearity-plus-illusion classifier was trained on the (task-relevant) Kanizsa triangle. Thus, overall task relevance was controlled in these analyses.  

      In our revision, we are now also citing the studies proposed by the reviewer, when discussing the control analyses testing for an effect of task-relevance on local contrast decoding:

      “In the light of evidence showing that unconscious processing is susceptible to conscious top-down influences (Kentridge et al., 2004; Kiefer & Brendel, 2006; Naccache et al., 2002), we ran control analyses showing that early local contrast decoding was not improved by rendering contrast task-relevant (see Supplementary Information and Fig. S5), indicating that these differences between illusion and contrast decoding did not reflect differences in task-relevance.”

      Issue #3: In terms of clarity, I would suggest the authors add a synthetic figure providing an overall view of all pairs of intra- and cross-condition decoding analyses, mentioning the main task for the training and testing sets of each analysis (see my previous and related points). Indeed, at one point, the reader can get lost, and such a figure would not only strengthen accessibility to the detailed picture of results, but also pinpoint the limits of the work (see previous point). 

      We understand the point the reviewer is raising and acknowledge that some of our analyses, in particular those using different training and testing sets, may be difficult to grasp. But given the variety of different analyses using different training and testing sets, different temporal windows, as well as different stimulus features, it was not possible to design an intuitive synthetic figure summarizing the key results. We hope that the added text in the Results and Discussion section will be sufficient to guide the reader through our set of analyses.  

      In our revision, we are now more clearly highlighting that, in addition to presenting the key results in our main text that were based on training classifiers on the T1 data, “we replicated all key findings when training the classifiers on an independent training set where individual stimuli were presented in isolation (Fig. 3A, results in the Supplementary Information and Fig. S6).” For this, we added a schematic showing the procedure of the independent training set to Figure 3, more clearly pointing the reader to the use of a separate training data set.  

      Issue #4: In the light of these findings the authors should discuss more thoroughly the question of unconscious high-level representations in masking versus AB: in particular, a longstanding issue relates to unconscious semantic processing of words, numbers or pictures. According to their findings, they tend to suggest that semantic processing should be more enabled in AB than in masking. However, a rich literature provided a substantial number of results (including results from the last author, Simon van Gaal) that tend to support the notion of unconscious semantic processing in subliminal processing (see in particular: [9, 10, 11, 12, 13]). So, and as mentioned by the authors, while there is evidence for semantic processing during AB they should better discuss how they would explain unconscious semantic subliminal processing. While a possibility could be to question the unconscious attribute of several subliminal results, the same argument also holds for AB studies. Another possible track of discussion would be to differentiate AB and subliminal perception in terms of strength and durability of the corresponding unconscious representations, but not necessarily in terms of cognitive richness. Indeed, one may discuss that semantic processing of stimuli that do not need complex spatial integration (e.g.: words or digits as compared to the illusory Kanizsa figure tested here) can still be observed under subliminal conditions. 

      We thank the reviewer for pointing us to this shortcoming of our previous Discussion. Note that our data does not directly speak to the question of high-level unconscious representations in masking vs AB, because such conclusions would hinge on the operational definition of consciousness one adheres to (also see response to Reviewer 1). Nevertheless, we do follow the reviewer’s suggestions and added the following in the Discussion (also addressing a point about other forms of attention raised by Reviewer 1):

      “Clearly, these findings do not imply that unconscious high-level (e.g., semantic) processing can only occur during inattention, nor do they necessarily generalize to other forms of inattention. Indeed, while the AB represents a prime example of late attentional filtering, other ways of inducing inattention or distraction (e.g., by manipulating spatial attention) may filter information earlier in the processing hierarchy (e.g., Luck & Hillyard, 1994 vs. Vogel et al., 1998).”

      And, in a following paragraph in the Discussion:

      “Such deep feedforward processing can be sufficient for unconscious high-level processing, as indicated by a rich literature demonstrating high-level (e.g., semantic) processing during masking (Kouider & Dehaene, 2007; Van den Bussche et al., 2009; van Gaal & Lamme, 2012). Thus, rather than enabling high-level unconscious processing, preserved local recurrency during inattention may afford other processing advantages linked to its proposed role in perceptual integration (Lamme, 2020), such as integration of stimulus elements over space or time.”

      Reviewer #3 (Recommendations For The Authors): 

      (1) The objective of Fahrenfort et al., 2017 seems very similar to that of the current study. What are the main differences between the two studies? Moreover, Fahrenfort et al., 2017 conducted similar decoding analyses to those performed in the current study.

      Which results were replicated in the current study, and which ones are novel? Highlighting these differences in the manuscript would be beneficial. 

      We now provide a more comprehensive coverage of the study by Fahrenfort et al., 2017. In the Introduction, we added a brief summary of the key findings, highlighting that this study’s findings could have reflected differences in task performance rather than differences between masking and AB:

      “For example, Fahrenfort and colleagues (2017) found that illusory surfaces could be decoded from electroencephalogram (EEG) data during the AB but not during masking. This was taken as evidence that local recurrent interactions, supporting perceptual integration, were preserved during inattention but fully abolished by masking. However, masking had a much stronger behavioral effect than the AB, effectively reducing task performance to chance level. Indeed, a control experiment using weaker masking, which resulted in behavioral performance well above chance similar to the main experiment’s AB condition, revealed some evidence for preserved local recurrent interactions also during masking. However, these conditions were tested in separate experiments with small samples, precluding a direct comparison of perceptual vs. attentional blindness at matched levels of behavioral performance. To test …”

      In the Results , we are now also highlighting this key advancement by directly referencing the previous study:

      “Thus, whereas in previous studies task performance was considerably higher during the AB than during masking (e.g., Fahrenfort et al., 2017), in the present study the masked and the AB condition were matched in both measures of conscious access.” When reporting the EEG decoding results in the Results section, we continuously cite the Fahrenfort et al. (2017) study to highlight similarities in the study’s findings. We also added a few sentences explicitly relating the key findings of the two studies:

      “This suggests that the AB allowed for greater local recurrent processing than masking, replicating the key finding by Fahrenfort and colleagues (2017). Importantly, the present result demonstrates that this effect reflects the difference between the perceptual vs. attentional manipulation rather than differences in behavior, as the masked and the AB condition were matched for perceptual performance and metacognition.”

      “This similarity between behavior and EEG decoding replicates the findings of Fahrenfort and colleagues  (2017) who also found a striking similarity between late Kanizsa decoding (at 406 ms) and behavioral Kanizsa detection. These results indicate that global recurrent processing at these later points in time reflected conscious access to the Kanizsa illusion.”  

      We also more clearly highlighted where our study goes beyond Fahrenfort et al.’s (2017), e.g., in the Results:

      “The addition of this element of collinearity to our stimuli was a key difference to the study by Fahrenfort and colleagues (2017), allowing us to compare non-illusory triangle decoding to illusory triangle decoding in order to distinguish between collinearity and illusion-specific processing.”

      And in the Discussion:

      “Furthermore, the addition of line segments forming a non-illusory triangle to the stimulus employed in the present study allowed us to distinguish between collinearity and illusion-specific processing.”

      Also, in the Discussion, we added a paragraph “summarizing which results were replicated in the current study, and which ones are novel”, as suggested by the reviewer:

      “This pattern of results is consistent with a previous study that used EEG to decode Kanizsa-like illusory surfaces during masking and the AB (Fahrenfort et al., 2017). However, the present study also revealed some effects where Fahrenfort and colleagues (2017) failed to obtain statistical significance, likely reflecting the present study’s considerably larger sample size and greater statistical power. For example, in the present study the marker for feedforward processing was weakly but significantly impaired by masking, and the marker for local recurrency was significantly impaired not only by masking but also by the AB, although to a lesser extent. Most importantly, however, we replicated the key findings that local recurrent processing was more strongly impaired by masking than by the AB, and that global recurrent processing was similarly impaired by masking and the AB and closely linked to task performance, reflecting conscious access. Crucially, having matched the key conditions behaviorally, the present finding of greater local recurrency during the AB can now unequivocally be attributed to the attentional vs. perceptual manipulation of consciousness.”

      Finally, we changed the title to “Distinct neural mechanisms underlying perceptual and attentional impairments of conscious access despite equal task performance” to highlight one of the crucial differences between the Fahrenfort et al. study and this study, namely the fact that we equalized task performance between the two critical conditions (AB and masking).

      (2) It is not clear from the text the link between the current study and the literature on the role of lateral and feedback connections in consciousness (Lamme, 2020). A better explanation is needed. 

      To our knowledge, consciousness theories such as the recurrent processing theory by Lamme currently make no distinction between the role of lateral and feedback connections for consciousness. The principled distinction lies between unconscious feedforward processing and phenomenally conscious or “preconscious” local recurrent processing, where local recurrency refers to both lateral (or horizontal) and feedback connections. We added a sentence in the Discussion:

      “As current theories do not distinguish between the roles of lateral vs. feedback connections for consciousness, the present findings may enrich empirical and theoretical work on perceptual vs. attentional mechanisms of consciousness …”

      (3) When training on T1 and testing on T2, EEG data showed an early peak in local contrast classification at 75-95 ms over posterior electrodes. The authors stated that this modulation was only marginally affected by masking (and not at all by AB); however, the main effect of masking is significant. Why was this effect interpreted as not relevant? 

      Following this and Reviewer 1’s comment, we changed the wording from “marginal” to “weak but significant.” We considered this effect “weak” and of lesser relevance, because its Bayes factor indicated that the alternative hypothesis was only 1.31 times more likely than the null hypothesis of no effect, representing only “anecdotal” evidence, which is in sharp contrast to the robust effects of the consciousness manipulations on illusion decoding reported later. Furthermore, later ANOVAs comparing the effect of masking on contrast vs. illusion decoding revealed much stronger effects on illusion decoding than on contrast decoding (BFs > 3.59×10⁴).

      (4) The decoding analysis on the illusory percept yielded two separate peaks of decoding, one from 200 to 250 ms and another from 275 to 475 ms. The early component was localized occipitally and interpreted as local sensory processing, while the late peak was described as a marker for global recurrent processing. This latter peak was localized in the parietal cortex and associated with the P300. Can the authors show the topography of the P300 evoked response obtained from the current study as a comparison? Moreover, source reconstruction analysis would probably provide a better understanding of the cortical localization of the two peaks. 

      Figure S4 now shows the P300 from electrode Pz, demonstrating a stronger positivity between 375 and 475 ms when the illusory triangle was present than when it was absent. We did not run a source reconstruction analysis.  

      (5) The authors mention that the behavioural results closely resembled the pattern of the second decoding peak results. However, they did not show any evidence for this relationship. For instance, is there a correlation between the two measures across or within participants? Does this relationship differ between the illusion report and the confidence rating? 

      This relationship became evident from simply eyeballing the results figures: both behavioral performance and EEG decoding dropped from the no-manipulations condition to the AB and masked conditions, while these conditions did not differ significantly. Following a similar observation of a close similarity between behavior and the second/late illusion decoding peak in the study by Fahrenfort et al. (2017), we adopted their analysis approach and ran two additional ANOVAs, adding “measure” (behavior vs. EEG) as a factor. For this analysis, we dropped the both-manipulations condition due to scale restrictions (as noted in footnote 1: “We excluded the both-manipulations condition from this analysis due to scale restrictions: in this condition, EEG decoding at the second peak was at chance, while behavioral performance was above chance, leaving more room for behavior to drop from the masked and AB condition.”). The analysis revealed that there were no interactions with condition:

      “The pattern of behavioral results, both for perceptual performance and metacognitive sensitivity, closely resembled the second decoding peak: sensitivity in all three metrics dropped from the no-manipulations condition to the masked and AB conditions, while sensitivity did not differ significantly between these performance-matched conditions (Fig. 2C). Two additional rm ANOVAs with the factors measure (behavior, second EEG decoding peak) and condition (no-manipulations, masked, AB)¹ for perceptual performance and metacognitive sensitivity revealed no significant interaction (performance: F(2,58) = 0.27, P = 0.762, BF₀₁ = 8.47; metacognition: F(2,58) = 0.54, P = 0.586, BF₀₁ = 6.04). This similarity between behavior and EEG decoding replicates the findings of Fahrenfort and colleagues (2017) who also found a striking similarity between late Kanizsa decoding (at 406 ms) and behavioral Kanizsa detection. These results indicate that global recurrent processing at these later points in time reflected conscious access to the Kanizsa illusion.”

      (6) The marker for illusion-specific processing emerged later (200-250 ms), with the no-manipulation decoding performing better after training on the illusion than the non-illusory triangle. This difference emerged only in the AB condition, and it was fully abolished by masking. The authors confirmed that the illusion-specific processing was not affected by the AB manipulations by running a rm ANOVA which did not result in a significant interaction between condition and training set. However, unlike the other non-significant results, a Bayes Factor is missing here. 

      We added Bayes factors to all (significant and non-significant) rm ANOVAs.

      (7) The same analysis yielded a second illusion decoding peak at 375-475 ms. This effect was impaired by both masking and AB, with no significant differences between the two conditions. The authors stated that this result was directly linked to behavioural performance. However, it is not clear to me what they mean (see point 5). 

      We added analyses comparing behavior and EEG decoding directly (see our response to point 5).

      (8) The introduction starts by stating that perceptual and attentional processes differently affect consciousness access. This differentiation has been studied thoroughly in the consciousness literature, with a focus on how attention differs from consciousness (e.g., Koch & Tsuchiya, TiCS, 2007; Pitts, Lutsyshyna & Hillyard, Phil. Trans. Roy. Soc. B Biol. Sci., 2018). The authors stated that "these findings confirm and enrich empirical and theoretical work on perceptual vs. attentional mechanisms of consciousness clearly distinguishing and specifying the neural profiles of each processing stage of the influential four-stage model of conscious experience". I found it surprising that this aspect was not discussed further. What was the state of the art before this study was conducted? What are the mentioned neural profiles? How did the current results enrich the literature on this topic? 

      We would like to point out that our study is not primarily concerned with the conceptual distinction between consciousness and attention, which has been the central focus of, e.g., Koch and Tsuchiya (2007). While this literature was concerned with ways to dissociate consciousness and attention, we tacitly assumed that attention and consciousness are now generally considered as different constructs. Our study is thus not dealing with dissociations between attention and consciousness, nor with the distinction between phenomenal consciousness and conscious access, but is concerned with different ways of impairing conscious access (defined as the ability to report about a stimulus), either via perceptual or via attentional manipulations. For the state of the art before the study was conducted, we would like to refer to the motivation of our study in the Introduction, e.g., previous studies’ difficulties in unequivocally linking greater local recurrency during attentional than perceptual blindness to the consciousness manipulation, given performance confounds (we expanded this Introduction section). We also expanded a paragraph in the Discussion to remind the reader of the neural profiles of the 4-stage model and to highlight the novelty of our findings related to the distinction between lateral and feedback processes:

      “As current theories do not distinguish between the roles of lateral vs. feedback connections for consciousness, the present findings may enrich empirical and theoretical work on perceptual vs. attentional mechanisms of consciousness (Block, 2005; Dehaene et al., 2006; Hatamimajoumerd et al., 2022; Lamme, 2010; Pitts et al., 2018; Sergent & Dehaene, 2004), clearly distinguishing the neural profiles of each processing stage of the influential four-stage model of conscious experience (Fig. 1A). Along with the distinct temporal and spatial EEG decoding patterns associated with lateral and feedback processing, our findings suggest a processing sequence from feedforward processing to local recurrent interactions encompassing lateral-to-feedback connections, ultimately leading to global recurrency and conscious report.”

      (9) When stating that this is the first study in which behavioural measures of conscious perception were matched between the attentional blink and masking, it would be beneficial to highlight the main differences between the current study and the one from Fahrenfort et al., 2017, with which the current study shares many similarities in the experimental design (see point 1). 

      We would like to refer the reviewer to our response to point 1), where we detail how we expanded the discussion of similarities and differences between our present study and Fahrenfort et al. (2017).

      (10) The discussion emphasizes how the current study "suggests a processing sequence from feedforward processing to local recurrent interactions encompassing lateral-to-feedback connections, ultimately leading to global recurrency and conscious report". For transparency, it is, though, important to highlight that one limitation of the current study is that it does not provide direct evidence for the specified types of connections (see point 6). 

      We added a qualification in the Discussion section:

      “Although the present EEG decoding measures cannot provide direct evidence for feedback vs. lateral processes, based on neurophysiological evidence, …”

      Furthermore, we added this qualification in the Discussion section:

      “It should be noted that not all neurophysiological evidence unequivocally links processing of collinearity and of the Kanizsa illusion to lateral and feedback processing, respectively (Angelucci et al., 2002; Bair et al., 2003; Chen et al., 2014), so that overlap in decoding the illusory and non-illusory triangle may reflect other mechanisms, for example feedback processing as well.”

      References

      Angelucci, A., Levitt, J. B., Walton, E. J. S., Hupe, J.-M., Bullier, J., & Lund, J. S. (2002). Circuits for local and global signal integration in primary visual cortex. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 22(19), 8633–8646.

      Bair, W., Cavanaugh, J. R., & Movshon, J. A. (2003). Time course and time-distance relationships for surround suppression in macaque V1 neurons. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(20), 7690–7701.

      Block, N. (2005). Two neural correlates of consciousness. Trends in Cognitive Sciences, 9(2), 46–52.

      Chen, M., Yan, Y., Gong, X., Gilbert, C. D., Liang, H., & Li, W. (2014). Incremental integration of global contours through interplay between visual cortical areas. Neuron, 82(3), 682–694.

      Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J., & Sergent, C. (2006). Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends in Cognitive Sciences, 10(5), 204–211.

      Hatamimajoumerd, E., Ratan Murty, N. A., Pitts, M., & Cohen, M. A. (2022). Decoding perceptual awareness across the brain with a no-report fMRI masking paradigm. Current Biology: CB. https://doi.org/10.1016/j.cub.2022.07.068

      JASP Team. (2024). JASP (Version 0.19.0) [Computer software]. https://jasp-stats.org/

      Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (2004). Spatial attention speeds discrimination without awareness in blindsight. Neuropsychologia, 42(6), 831–835.

      Kiefer, M., & Brendel, D. (2006). Attentional Modulation of Unconscious “Automatic” Processes: Evidence from Event-related Potentials in a Masked Priming Paradigm. Journal of Cognitive Neuroscience, 18(2), 184–198.

      Kouider, S., & Dehaene, S. (2007). Levels of processing during non-conscious perception: a critical review of visual masking. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481), 857–875.

      Lamme, V. A. F. (2010). How neuroscience will change our view on consciousness. Cognitive Neuroscience, 1(3), 204–220.

      Luck, S. J., & Hillyard, S. A. (1994). Electrophysiological correlates of feature analysis during visual search. Psychophysiology, 31(3), 291–308.

      Naccache, L., Blandin, E., & Dehaene, S. (2002). Unconscious masked priming depends on temporal attention. Psychological Science, 13(5), 416–424.

      Pitts, M. A., Lutsyshyna, L. A., & Hillyard, S. A. (2018). The relationship between attention and consciousness: an expanded taxonomy and implications for ‘no-report’ paradigms. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 373(1755), 20170348.

      Sergent, C., & Dehaene, S. (2004). Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychological Science, 15(11), 720–728.

      Van den Bussche, E., Van den Noortgate, W., & Reynvoet, B. (2009). Mechanisms of masked priming: a meta-analysis. Psychological Bulletin, 135(3), 452–477.

      van Gaal, S., & Lamme, V. A. F. (2012). Unconscious high-level information processing: implication for neurobiological theories of consciousness. The Neuroscientist: A Review Journal Bringing Neurobiology, Neurology and Psychiatry, 18(3), 287–301.

      Vogel, E. K., Luck, S. J., & Shapiro, K. L. (1998). Electrophysiological evidence for a postperceptual locus of suppression during the attentional blink. Journal of Experimental Psychology. Human Perception and Performance, 24(6), 1656–1674.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary:

      This study shows a new mechanism of GS regulation in the archaeon Methanosarcina mazei and clarifies the direct activation of GS activity by 2-oxoglutarate, thus featuring another way in which 2-oxoglutarate acts as a central status reporter of C/N sensing.

      Mass photometry and single-particle cryo-EM structure analysis convincingly show the direct regulation of GS activity by 2-OG-promoted formation of the dodecameric structure of GS. The previously recognized small proteins GlnK1 and Sp26 seem to play a subordinate role in GS regulation, which is in good agreement with previous data. Although these data are quite clear now, one major open question remains: how does 2-OG further increase GS activity once the full dodecameric state is achieved (at 5 mM)? This point needs to be reconsidered.

      Weaknesses:

      It is not entirely clear, how very high 2-OG concentrations activate GS beyond dodecamer formation.

      The data presented in this work are in stark contrast to the previously reported structure of M. mazei GS by the Schumacher lab. This is very confusing for the scientific community and requires clarification. The discussion should consider possible reasons for the contradictory results.

      Importantly, it is puzzling how Schumacher could achieve an apo-structure of dodecameric GS. If 2-OG is necessary for dodecamer formation, this should be discussed. If GlnK1 doesn't form a complex with the dodecameric GS, how could such a complex be resolved there?

      In addition, the text is in principle clear but could be improved by professional editing. Most obviously there is insufficient comma placement.

      We thank Reviewer #1 for the professional evaluation and for raising important points. We will address these comments in the updated manuscript and especially improve the discussion with respect to the two points of concern.

      (1) How can GlnA1 activity be further stimulated by increasing 2-OG once the dodecamer is already fully assembled at 5 mM 2-OG?

      We assume a two-step requirement for 2-OG: dodecameric assembly and priming of the active sites. The assembly step is based on cooperative effects of 2-OG and does not require the presence of 2-OG in all 2-OG-binding pockets: 2-OG binding to one pocket also causes a domino effect of conformational changes in the adjacent 2-OG-unbound subunit, as also described for the Methanothermococcus thermolithotrophicus GS in Müller et al. 2023. Because these conformational changes are introduced, the dodecameric form becomes more favourable even without all 2-OG-binding sites being occupied. With higher 2-OG concentrations present (>5 mM), the activity increases further until finally all 2-OG-binding pockets are occupied, priming all active sites (all subunits) and thereby reaching the maximal activity.

      (2) The contradictory results with previously published data on the structure of M. mazei by Schumacher et al. 2023.

      We certainly agree that it is confusing that Schumacher et al. 2023 obtained a dodecameric structure without the addition of 2-OG, which we claim to be essential for the dodecameric form. 2-OG is a cellular metabolite that is naturally present in E. coli, the heterologous expression host both groups used. Since our main question focused on analysing the 2-OG effect on GS, we have performed thorough dialysis of the purified protein to remove all 2-OG before performing MP experiments. In the absence of 2-OG we never observed significant enzyme activity and always detected a fast disassembly after incubation on ice. We thus assume that a dodecamer without 2-OG in Schumacher et al. 2023 is an inactive oligomer of a once 2-OG-bound form, stabilized e.g. by the presence of 5 mM MgCl2.

      The GlnA1-GlnK1-structure (crystallography) by Schumacher et al. 2023 is in stark contrast to our findings that GlnK1 and GlnA1 do not interact as shown by mass photometry with purified proteins. A possible reason for this discrepancy might be that at the high protein concentrations used in the crystallization assay, complexes are formed based on hydrophobic or ionic protein interactions, which would not form under physiological concentrations.

      Reviewer #2 (Public Review):

      Summary:

      Herdering et al. introduced research on an archaeal glutamine synthetase (GS) from Methanosarcina mazei, which exhibits sensitivity to the environmental presence of 2-oxoglutarate (2-OG). While previous studies have indicated 2-OG's ability to enhance GS activity, the precise underlying mechanism remains unclear. Initially, the authors utilized biophysical characterization, primarily employing a nanomolar-scale detection method called mass photometry, to explore the molecular assembly of Methanosarcina mazei GS (M. mazei GS) in the absence or presence of 2-OG. Similar to other GS enzymes, the target M. mazei GS forms a stable dodecamer, with two hexameric rings stacked in tail-to-tail interactions. Despite approximately 40% of M. mazei GS existing as monomeric or dimeric entities in the detectable solution, the majority spontaneously assemble into a dodecameric state. Upon mixing 2-OG with M. mazei GS, the population of the dodecameric form increases proportionally with the concentration of 2-OG, indicating that 2-OG either promotes or stabilizes the assembly process. The cryo-electron microscopy (cryo-EM) structure reveals that 2-OG is positioned near the interface of two hexameric rings. At a resolution of 2.39 Å, the cryo-EM map vividly illustrates 2-OG forming hydrogen bonds with two individual GS subunits as well as with solvent water molecules. Moreover, local side-chain reorientation and conformational changes of loops in response to 2-OG further delineate the 2-OG-stabilized assembly of M. mazei GS.

      Strengths & Weaknesses:

      The investigation studies the impact of 2-oxoglutarate (2-OG) on the assembly of Methanosarcina mazei glutamine synthetase (M mazei GS). Utilizing cutting-edge mass photometry, the authors scrutinized the population dynamics of GS assembly in response to varying concentrations of 2-OG. Notably, the findings demonstrate a promising and straightforward correlation, revealing that dodecamer formation can be stimulated by 2-OG concentrations of up to 10 mM, although GS assembly never reaches 100% dodecamerization in this study. Furthermore, catalytic activities showed a remarkable enhancement, escalating from 0.0 U/mg to 7.8 U/mg with increasing concentrations of 2-OG, peaking at 12.5 mM. However, an intriguing gap arises between the incomplete dodecameric formation observed at 10 mM 2-OG, as revealed by mass photometry, and the continued increase in activity from 5 mM to 10 mM 2-OG for M mazei GS. This prompts questions regarding the inability of M mazei GS to achieve complete dodecamer formation and the underlying factors that further enhance GS activity within this concentration range of 2-OG.

      Moreover, the cryo-electron microscopy (cryo-EM) analysis provides additional support for the biophysical and biochemical characterization, elucidating the precise localization of 2-OG at the interface of two GS subunits within two hexameric rings. The observed correlation between GS assembly facilitated by 2-OG and its catalytic activity is substantiated by structural reorientations at the GS-GS interface, confirming the previously reported phenomenon of "funnel activation" in GS. However, the authors did not present the cryo-EM structure of M. mazei GS in complex with ATP and glutamate in the presence of 2-OG, which could have shed light on the differences in glutamine biosynthesis between previously reported GS enzymes and the 2-OG-bound M. mazei GS.

      Furthermore, besides revealing the cryo-EM structure of 2-OG-bound GS, the study also observed the filamentous form of GS, suggesting that filament formation may be a universal stacking mechanism across archaeal and bacterial species. However, efforts to enhance resolution to investigate whether the stacked polymer is induced by 2-OG or other factors such as ions or metabolites were not undertaken by the authors, leaving room for further exploration into the mechanisms underlying filament formation in GS.

      We thank Reviewer #2 for the detailed assessment and valuable input. We will address those comments in the updated manuscript and clarify the message.

      (1) The discrepancy between dodecamer formation (maximal at 5 mM 2-OG) and the enzyme activity (maximal at 12.5 mM 2-OG). We assume that there are two effects caused by 2-OG: 1. cooperativity of binding (less 2-OG is needed to facilitate dodecamer formation) and 2. priming of each active site (see also the response to Reviewer #1, point 1). We assume this is the reason why the activity of dodecameric GlnA1 can be further enhanced by increased 2-OG concentrations until all catalytic sites are primed.

      (2) The lack of the structure of a 2-OG and ATP-bound GlnA1. Although we strongly agree that this would be a highly interesting structure, it seems out of the scope of a typical revision to request new cryo-EM structures. We evaluate the findings of our present study concerning the 2-OG effects as important insights into the strongly discussed field of glutamine synthetase regulation, even without the requested additional structures.

      (3) The observed GlnA1-filaments are an interesting finding. We certainly agree with the referee on that point, that the stacked polymers are potentially induced by 2-OG or ions. However, it is out of the main focus of this manuscript to further explore those filaments. Nevertheless, this observation could serve as an interesting starting point for future experiments.

      Reviewer #3 (Public Review):

      Summary:

      The current manuscript investigates the effect of 2-oxoglutarate and the Glk1 protein as modulators of the enzymatic reactivity of glutamine synthetase. To do this, the authors rely on mass photometry, specific activity measurements, and single-particle cryo-EM data.

      From the results obtained, the authors convey that glutamine synthetase from Methanosarcina mazei exists in a non-active monomeric/dimeric form under low concentrations of 2-oxoglutarate, and its oligomerization into a dodecameric complex is triggered by higher concentration of 2-oxoglutarate, also resulting in the enhancement of the enzyme activity.

      Strengths:

      Glutamine synthetase is a crucial enzyme in all domains of life. The dodecameric fold of GS is recurrent amongst prokaryotic and archaea organisms, while the enzyme activity can be regulated in distinct ways. This is a very interesting work combining protein biochemistry with structural biology.

      The role of 2-OG is here highlighted as a crucial effector for enzyme oligomerization and full reactivity.

      Weaknesses:

      Various opportunities to enhance the current state-of-the-art were missed. In particular, omissions of the ligand-bound state of GnK1 leave unexplained the lack of its interaction with GS (in contradiction with previous results from the authors). A finer dissection of the effect and role of 2-oxoglurate are missing and important questions remain unanswered (e.g. are dimers relevant during early stages of the interaction or why previous GS dodecameric structures do not show 2-oxoglutarate).

      We thank Reviewer #3 for the expert evaluation and inspiring criticism.

      (1) Encouragement to examine ligand-bound states of GlnK1. We agree and plan to perform the suggested experiments exploring the conditions under which GlnA1 and GlnK1 might interact; in particular, we will perform the MP experiments in the presence of ATP. In the GlnA1 activity assays evaluating the effects of GlnK1 on GlnA1 activity, however, ATP was always present at high concentrations, and still we did not observe a significant effect of GlnK1 on the GlnA1 activity.

      (2) The exact role of 2-OG could have been dissected much better. We agree on that point and will improve the clarity of the manuscript (see also the response to Reviewer #1, point 1).

      (3) The lack of studies on dimers. This is actually an interesting point, which we did not consider while writing the manuscript. Re-analysing all our MP data in this respect, the smallest GlnA1 species is likely a dimer. Consequently, we will add more supplementary data supporting this observation and change the text accordingly.

      (4) Previous studies and structures did not show the 2-OG. We assume that for other structures, no additional 2-OG was added and the groups did not specifically analyse for this metabolite either. All methanoarchaea perform methanogenesis and contain the oxidative part of the TCA cycle exclusively for the generation of glutamate (anabolism), but not a closed TCA cycle, enabling them to use the internal 2-OG concentration as an internal signal for nitrogen availability. In the case of bacterial GS from organisms with a closed TCA cycle used for energy metabolism (oxidation of acetyl-CoA), such as E. coli, the formation of the active dodecameric GS form follows another mechanism, independent of 2-OG. In the case of the recent M. mazei GS structures published by Schumacher et al. 2023, the dodecameric structure is probably a result of the heterologous expression and purification from E. coli (see also Reviewer #1, point 2). One example of a methanoarchaeal glutamine synthetase that does in fact contain 2-OG in the structure is given by Müller et al. 2023.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Specific issues:

      L 141: 2-OG levels increase due to slowing GOGAT reaction (due to Gln limitation as a consequence of N-starvation).... (2-OG also increases in bacteria that lack GDH...)

      As the GS-GOGAT cycle is the major route of ammonium assimilation, consumption of 2-OG by GDH is probably only relevant under high ammonium concentrations.

      In methanoarchaea, GS is strictly regulated and its expression strongly repressed under nitrogen sufficiency; thus, glutamate for anabolism is mainly generated by GDH under N sufficiency, consuming 2-OG delivered by the oxidative part of the TCA cycle (methanogenesis is the energy metabolism in methanoarchaea; a closed TCA cycle is not present). 2-OG therefore increases under nitrogen limitation, when no NH3 is available for GDH.

      L148: it is not clear what is meant by: "and due to the indirect GS activity assay"

      We apologize for not being clear here. The GS activity assay used is the classical assay by Shapiro & Stadtman 1970 and is a coupled optical test assay (coupling the ATP consumption of the GS reaction to the oxidation of NADH via lactate dehydrogenase). Because of this coupling, measurements of low activities show a high deviation. We have now added this information in the revised manuscript.

      L: 177: arguing about 2-OG affinities: more precisely, the 0.75 mM 2-OG is the EC50 concentration of 2-OG for triggering dodecameric formation; it might not directly reflect the total 2-OG affinity, since the affinity may be modulated by (anti)cooperative effects, or by additional sites... as there may be different 2-OG binding sites involved... (same in line 201)

      Thank you for the valuable input. We changed KD to EC50 within the entire manuscript. Concerning possible additional 2-OG binding sites: we did not see any other 2-OG in the cryo-EM structure aside from the described one and we therefore assume that the one described in the manuscript is the main and only one. Considering the high amounts of 2-OG (12.5 mM) used in the structure, it is quite unlikely that additional 2-OG sites exist since they would have unphysiologically low affinities.

      In this respect, instead of the rather poor assay shown in Figure 1D, a more detailed determination of catalytic activation by different 2-OG concentrations should be done (similar to 1A)... This would allow a direct comparison between dodecamerization and enzymatic activation.

      We agree and performed the respective experiments, which are now presented in revised Fig. 1D
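      Dose-response comparisons of this kind (fraction dodecamer vs. 2-OG, and activity vs. 2-OG) are commonly summarized by fitting a Hill equation to extract an EC50 and an apparent cooperativity coefficient. As a minimal sketch of such a fit, assuming made-up data points that merely mimic the reported shape (plateau near 62 % dodecamer, EC50 below 1 mM) and not the authors' actual measurements:

      ```python
      import numpy as np
      from scipy.optimize import curve_fit

      def hill(c, top, ec50, n):
          """Hill equation: fractional response at effector concentration c."""
          return top * c**n / (ec50**n + c**n)

      # Illustrative (made-up) data: fraction dodecamer vs. 2-OG concentration (mM)
      conc = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])
      frac = np.array([0.05, 0.15, 0.35, 0.55, 0.60, 0.62, 0.62])

      # Fit plateau, EC50 and Hill coefficient; p0 gives rough starting guesses
      popt, _ = curve_fit(hill, conc, frac, p0=[0.6, 0.75, 1.5])
      top, ec50, n = popt
      print(f"plateau={top:.2f}, EC50={ec50:.2f} mM, Hill n={n:.2f}")
      ```

      Fitting the assembly curve and the activity curve separately and comparing the resulting EC50 and Hill values would directly quantify the proposed two-step (assembly, then priming) behaviour.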

      Discussion: the role of 2-OG as a direct activator, comparison with other prokaryotic GS: in other cases, 2-OG affects GS indirectly by being sensed by PII proteins or other 2-OG sensing mechanisms (like 2OG-NtcA-mediated repression of IF factors in cyanobacteria)

      We agree and have added that information in the discussion as suggested.

      290. Unclear: As a second step of activation, the allosteric binding of 2-OG causes a series of conformational.... where is this site located? According to the catalytic effects (compare 1A and 1D) this site should have a lower affinity …

      Thank you very much for pointing this out. Binding of 2-OG occurs only in one specific allosteric binding site. Binding, however, has two effects on GlnA1: dodecamer assembly and priming of the active site (with two distinct EC50 values, which are now shown in Fig. 1A and D).

      See also public comment #1 (1).

      Reviewer #2 (Recommendations For The Authors):

      The primary concern for me is that mass photometry might lead to incorrect conclusions. The differences in the forms of GS seen in SEC and MP suggest that GS can indeed form a stable dodecamer when the concentration of GS is high enough, as shown in Figure S1B. I strongly suggest using an additional biophysical method to explore the connection between GS and 2-OG in terms of both assembly and activity, to truly understand 2-OG's role in the process of assembly and catalysis.

      We apologize if we did not present this clearly enough; however, the MP analysis of GlnA1 in the absence of 2-OG always showed (monomers/)dimers, and dodecamers were only present in the presence of 2-OG. The SEC analysis in Fig. S1B was performed in the presence of 12.5 mM 2-OG; we realized this information was missing in the figure legend and have now added it in the revised version. In addition, the 2-OG is visible in the cryo-EM structure. Thus, we do not agree that additional biophysical methods are required.

      As for the other experimental findings, they appear satisfactory to me, and I have no reservations regarding the cryoEM data.

      (1) Mass photometry is a fancy technique that uses only a tiny amount of protein to study how they come together. However, the concentration of the protein used in the experiment might be lower than what's needed for them to stick together properly. So, the authors saw a lot of single proteins or pairs instead of bigger groups. They showed in Figure S1B that the M. mazei GS came out earlier than a 440-kDa reference protein, indicating it's actually a dodecamer. But when they looked at the dodecamer fraction using mass photometry, they found smaller bits, suggesting the GS was breaking apart because the concentration used was too low. To fix this, they could try using a technique called analytic ultracentrifuge (AUC) with different amounts of 2-OG to see if they can spot single proteins or pairs when they use a bit more GS. They could also try another technique called SEC-MALS to do similar tests. If they do this, they could replace Figure 1A with new data showing fully formed GS dodecamers when they use the right amount of 2-OG.

      Thank you for this input. In MP we looked at dodecamer formation after removing the 2-OG entirely and re-adding it at the respective concentration. We think that GlnA1 is much more unstable in its monomeric/dimeric fraction and that the complete and harsh removal of 2-OG results in some dysfunctional protein, which does not recover the dodecameric conformation after dialysis and re-addition of 2-OG. Looking at the dodecamer peak right after SEC, however, we exclusively see dodecamers, which is now included as an additional supplementary figure (suppl. Fig. 1C). Consequently, we did not perform additional experiments.

      (2) Building on the last point, the estimated binding strength (Kd) between 2-OG and GS might be lower than it really is, because the GS often breaks apart from its dodecameric form in this experiment, even though 2-OG helps keep the pairs together, as seen with cryoEM. What if they used 5-10 times more GS in the mass photometry experiment? Would the estimated bond strength stay the same? Could they use AUC or other techniques like ITC to find out the real, not just estimated, strength of the bond?

      We agree that the term KD is not suitable. We have changed the term KD to EC50 as suggested by reviewer #1, which describes the effective concentration required for 50 % dodecamer assembly. Furthermore, we disagree that the dodecamer breaks apart when the concentrations are as low as in MP experiments. The actual reason for the disassembly is rather the harsh dialysis to remove all 2-OG before the MP experiments. Right after SEC, we exclusively see dodecamers in MP (suppl. Fig. S1C). See also #2 (1).

      (3) The fact that the GS hardly works without 2-OG is interesting. I tried to understand the experiment setup, but it wasn't clear as the protocol mentioned in the author's 2021 FEBS paper referred to an old paper from 1970. The "coupled optical test assay" they talked about wasn't explained well. I found other papers that used phosphometry assays to see how much ATP was used up. I suggest the authors give a better, more detailed explanation of their experiments in the methods section. Also, it's unclear why the GS activity keeps going up from 5 to 12.5 mM 2-OG, even though they said it's saturated. They suggested there might be another change happening from 5 to 12.5 mM 2-OG. If that's the case, they should try to get a cryo-EM picture of the GS with lots of 2-OG, both with and without ATP/glutamate (or the Met-Sox-P-ADP inhibitor), to see what's happening at a structural level during this change caused by 2-OG.

      We agree with the reviewer that the GS assay was not explained in detail (since it has been published and known for several years). However, we have now added a more detailed description of the assay in the revised manuscript. The assay also measures the ATP used up by GS, but couples the generation of ADP to an optical test assay: pyruvate kinase present in the assay uses the generated ADP to produce pyruvate from PEP, and this pyruvate is finally reduced to lactate by the lactate dehydrogenase also present, consuming NADH, the oxidation of which is monitored at 340 nm.
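      As a worked example of how rates from such an NADH-coupled assay are converted into specific activity: the extinction coefficient is the standard literature value for NADH at 340 nm, but the slope, path length, volume, and protein amount below are made-up illustration values, not the authors' data.

      ```python
      # Convert a measured NADH oxidation rate (slope of A340 vs. time) into GS
      # specific activity. In the coupled assay, each ADP produced by GS leads to
      # one NADH oxidized, so NADH consumption tracks GS turnover 1:1.

      EPSILON_NADH = 6.22   # mM^-1 cm^-1, standard value for NADH at 340 nm
      path_cm = 1.0         # cuvette path length (assumed)
      dA_per_min = 0.15     # measured A340 decrease per minute (illustrative)
      assay_ml = 1.0        # assay volume in mL (illustrative)
      protein_mg = 0.02     # amount of GS in the assay (illustrative)

      # Beer-Lambert: rate in mM/min, then micromol/min (= U) in the assay volume
      rate_mM_per_min = dA_per_min / (EPSILON_NADH * path_cm)
      units = rate_mM_per_min * assay_ml      # mM/min * mL = micromol/min = U
      specific_activity = units / protein_mg  # U per mg enzyme

      print(f"{specific_activity:.2f} U/mg")  # -> 1.21 U/mg with these values
      ```

      One unit (U) here is the usual definition of 1 µmol of substrate turned over per minute, so the specific activity is directly comparable to the U/mg values quoted in the reviews.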

      Regarding the activity that still increases after dodecamer formation is complete (maximal at 5 mM 2-OG) up to the activity maximum (at 12.5 mM 2-OG): see also the public reviews; we assume that there are two effects caused by 2-OG: 1. cooperativity of binding (less 2-OG is needed to facilitate dodecamer formation) and 2. priming of each active site.

      The suggested additional experiments with and without ATP/Glutamate: Although we strongly agree that this would be a highly interesting structure, it seems out of the scope of a typical revision to request new cryo-EM structures. We evaluate the findings of our present study concerning the 2-OG effects as important insights into the strongly discussed field of glutamine synthetase regulation, even without the requested additional structures.

      (4) Please remake Figure S2, the panels are too small to read the words. At least I have difficulty doing so.

      We assume the reviewer is pointing to Suppl. Fig S3, we now changed this figure accordingly.

      Line 153, the reference Schumacher et al. 23, should be 2023?

      Yes, thank you. We corrected that.

      Line 497. I believe it's UCSF ChimeraX, not Chimera.

      We apologize and corrected accordingly.

      Reviewer #3 (Recommendations For The Authors):

      Recent studies on the Methanothermococcus thermolithotrophicus glutamine synthetase, published by Müller et al., 2024, have identified the binding site for 2-oxoglutarate as well as the conformational changes that were induced in the protein by its presence. In the present study, the authors confirm these observations and additionally establish a link between the presence of 2-oxoglutarate and the dodecameric fold and full activation of GS.

      Curiously, here, the authors could not confirm their own findings that the dodecameric GS can directly interact with the PII-like GlnK1 protein and the small peptide sP26. However, the lack of mention of the GlnK-bound state in these studies is very alarming since it certainly is highly relevant here.

      We agree with the reviewer that we have not observed the interaction with GlnK1 and sP26 in the present study. Consequently, we speculate that as-yet-unknown cellular factor(s) might be required for an interaction of GlnA1 with GlnK1 and sP26; these were not present in the in vitro experiments using purified proteins but were present in the previous pull-down approaches (Ehlers et al. 2005, Gutt et al. 2021). Another reason might be that post-translational modifications occur in M. mazei that are important for the interaction and that are likewise absent from purified proteins expressed in E. coli.

      The interest of the manuscript could have been substantially increased if the authors had done finer biochemical and enzymatic analyses of the oligomerization process of GS, used GlnK1 bound to known effectors in their assays, and made more effort to extrapolate their findings (even if a small niche) to related glutamine synthetases.

      We thank the reviewer for their valuable encouragement to explore ligand-bound-states of GlnK1. However, in this manuscript we mainly focused on 2-OG as activator of GlnA1 and decided to dedicate future experiments to the exploration of conditions that possibly favor GlnK1-binding.

      In principle, we have explored the effects of ATP-bound GlnK1 on GlnA1 activity in the activity assays (Fig. 2E), since ATP (3.6 mM) is present. GlnK1, however, showed no effect on GlnA1 activity.

      In general, the manuscript is poorly written, with grammatically incorrect sentences that, at times, stand in the way of conveying the manuscript's message.

      Particular points:

      (1) It is mentioned that 2-OG induces the active oligomeric (dodecamer, 12-mer) state of GlnA1 without detectable intermediates. However, only 62 % of the starting inactive enzyme yields active 12-mers. Note that this is contradicted in line 212.

      Thanks for pointing out this discrepancy. After removing all 2-OG, as we did before the MP experiments, GlnA1 no longer reaches full dodecamer assembly when 2-OG is re-added. This is not because the amount of 2-OG is insufficient to trigger full assembly, but because the protein is much less stable in the absence of 2-OG, so we presume that some GlnA1 breaks down during dialysis. See also our answer to reviewer #2 (1) and supplementary figure S1C.

      Is there any protein precipitation upon the addition of 2-OG? Is all protein being detected in the assay, meaning, do the monomer/dimer and dodecamer yields together account for close to 100% of the total enzyme in the assay?

      There is no protein precipitation upon the addition of 2-OG; indeed, GlnA1 is much more stable in the presence of 2-OG. In the mass photometry experiments, all particles are measured; precipitated protein would be visible as large entities in the MP data.

      Please add to Figure 1 the amount of monomer/dimer during titration. Some discussion of why there is no full conversion should also be provided.

      We agree with the reviewer and included the amount of monomer/dimer in the figure, as well as some discussion of why conversion is not complete. GlnA1 is unstable without 2-OG, and it was dialysed against buffer without 2-OG before the MP measurements. This sample mistreatment resulted in incomplete re-assembly after re-adding 2-OG, although the sample was fully dodecameric before dialysis (suppl. Fig. S1C).

      (2) Figure 1B reflects an exemplary result. Here, the addition of 0.1 mM 2-OG seems to promote monomer to dimer transition. Why was this not studied in further detail? It seems highly relevant to know from which species the dodecamer is assembled.

      We thank the reviewer for their comment. However, we would like to point out that, although not shown in the figure, the smallest entity of GlnA1 is almost always the dimer. As suggested earlier, we have added the amount of monomers/dimers to Figure 1A, which shows low monomer counts at all 2-OG concentrations (Fig. 1A). Although the graph starts at 0.01 mM 2-OG, we also see mainly dimers at 0 mM 2-OG (not depicted).

      How does the y-axis compare to the number and percentage of counts assigned to the peaks? In line 713, it is written that the percentage of dodecamer considers the total number of counts, and this was plotted against the 2-OG concentration.

      We thank the reviewer for pointing out this ambiguity. Line 713 corresponds to Figure 1A, where we indeed plotted the percentage of dodecamer against the 2-OG concentration. There, the percentage of dodecamer corresponds to the percentage calculated from the Gaussian fit of the MP dodecamer peak. In Figure 1B, however, the y-axis displays the relative number of counts per mass; multiple similar masses then add up to the percentage of the respective peak (Gaussian fit over similar masses).
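To make the relationship between per-mass counts and peak percentages concrete, here is a minimal, purely illustrative Python sketch. All landing-event masses, the ~50 kDa monomer mass, and the fixed-window peak quantification are invented stand-ins; the actual analysis fits Gaussians to the mass histogram rather than counting events in windows.

```python
# Illustrative only: invented mass-photometry events (kDa) and a crude
# window-based stand-in for the Gaussian-fit peak areas described above.
import random

random.seed(0)
MONOMER = 50.0  # kDa, assumed purely for illustration

# Simulated landing events: mostly dimers (~100 kDa) and dodecamers (~600 kDa).
events = ([random.gauss(2 * MONOMER, 8) for _ in range(300)] +
          [random.gauss(12 * MONOMER, 30) for _ in range(700)])

def peak_fraction(masses, center, half_width):
    """Fraction of all counts falling in a window around one peak
    (a crude stand-in for the area of a fitted Gaussian)."""
    in_peak = sum(1 for m in masses if abs(m - center) <= half_width)
    return in_peak / len(masses)

dodecamer_pct = 100 * peak_fraction(events, 12 * MONOMER, 90)
dimer_pct = 100 * peak_fraction(events, 2 * MONOMER, 25)
print(f"dodecamer: {dodecamer_pct:.1f} %, dimer: {dimer_pct:.1f} %")
```

Each event contributes one count at its measured mass (the Figure 1B view); summing counts near a peak and dividing by the total gives the peak percentage (the Figure 1A view).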

      (3) Lines 714 and 721 (and elsewhere): Why only partial data is used for statistical purposes?

      In general, we only show one exemplary biological replicate, since the quality of the respective GlnA1 purification sometimes varied (maximum activity ranging from 5 to 10 U/mg). Therefore, we only compared activities within the same protein purification. For the EC50 calculations of all measurements, we refer to the supplement.
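For readers unfamiliar with how an EC50 is extracted from such titration data, the following is a minimal, hypothetical Python sketch using a Hill-type dose-response model with a crude grid search standing in for proper non-linear least squares. The data values are invented for illustration and are not the manuscript's measurements.

```python
# Hypothetical sketch: estimating an EC50 from a dose-response curve
# such as "% dodecamer vs. [2-OG]". Data values are invented.

def hill(conc, top, ec50, n):
    """Hill equation: response = top * c^n / (ec50^n + c^n)."""
    return top * conc**n / (ec50**n + conc**n)

def fit_ec50(concs, responses):
    """Crude grid search over EC50 and Hill coefficient n
    (a stand-in for proper non-linear least-squares fitting)."""
    top = max(responses)
    best = None
    for ec50 in [0.05 * i for i in range(1, 201)]:   # 0.05 .. 10 mM
        for n in [0.25 * j for j in range(1, 17)]:   # 0.25 .. 4
            sse = sum((hill(c, top, ec50, n) - r) ** 2
                      for c, r in zip(concs, responses))
            if best is None or sse < best[0]:
                best = (sse, ec50, n)
    return best[1], best[2]

# Invented example data: dodecamer fraction saturating around 1 mM 2-OG.
concs = [0.01, 0.1, 0.3, 1.0, 3.0, 10.0]        # mM 2-OG
responses = [2.0, 15.0, 45.0, 80.0, 92.0, 95.0]  # % dodecamer

ec50, n = fit_ec50(concs, responses)
print(f"EC50 ~ {ec50:.2f} mM, Hill n ~ {n:.2f}")
```

A Hill coefficient above 1 in such a fit would be consistent with the cooperative 2-OG binding discussed later in this response.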

      (4) Lines 192-193: It is claimed that GlnK1 was previously shown to both regulate the activity of GlnA1 and form a complex with GlnA1. Please mention the ratio between GlnK1 and GlnA1 in this complex.

      We now included the requested information (GlnA1:GlnK1 = 1:1, Ehlers et al. 2005; His6-GlnA1 (0.95 μM) to His6-GlnK1 (0.65 μM), i.e. approximately 2:1.4, Gutt et al. 2021).

      It is also known that PII proteins such as GlnK1 can bind ADP, ATP, and 2-OG. Interestingly, however, for various described PII proteins, 2-OG can only bind after the binding of ATP.

      So, the crucial question here is what is the binding state of GlnK1? 

      Were these assays performed in the absence of ATP? This is key to fully understand and connect the results to the previous observations. For example, if the GlnK1 used was bound to ADP but not to ATP, then the added 2-OG might indeed only be able to affect GlnA1 (leading to its activation/oligomerization). If this were true and according to the data reported, ADP would prevent GlnK1 from interacting with any oligomeric form of GlnA1. However, if GlnK1 bound to ATP is the form that interacts with GlnA1 (potentially validating previous results?) then, 2-OG would first bind to GlnK1 (assuming a higher affinity of 2-OG to GlnK1), eventually causing its release from GlnA1 followed by binding and activation of GlnA1.

      These experiments need to be done as they are essential to further understand the process. Given the ability of the authors to produce the protein and run such assays, it is unclear why they were not done here. As written in line 203, in this case, "under the conditions tested" is not a good enough statement, considering what is known in the field and how many more conclusions could easily be taken from such a setup.

      Thanks for the encouragement to investigate the ligand-bound states of GlnK1. We agree and plan to perform the suggested mass photometry experiments exploring the conditions under which GlnA1 and GlnK1 might interact in future work. In the GlnA1 activity assays evaluating the effects of GlnK1, however, ATP was always present at high concentration, and we still did not observe a significant effect of GlnK1 on GlnA1 activity.

      (5) Figure 2D legend claims that the graphic shows the percentage of dodecameric GlnA1 as a function of the concentration of 2-OG. This is not what the figure shows; Figure 2D shows the dodecamer/dimer (although legend claims monomer was used, in line 732) ratio as a function of 2-OG (stated in line 736!). If this is true, a ratio of 1 means 50 % of dodecamers and dimers co-exist. This appears to be the case when GlnK1 was added, while in the absence of GlnK1 higher ratios are shown for higher 2-OG concentration implying that about 3 times more dodecamers were formed than dimers. However, wouldn´t a 50 % ratio be physiologically significant?

      We apologize for the partially incorrect and misleading figure legend, which we have corrected. Indeed, the ratio of dodecamers to dimers is shown. Furthermore, we did not use monomeric GlnA1 (the smallest entity is mainly a dimer, see Fig. 1A); the molarity, however, was calculated based on the monomer mass. Concerning the significance of the difference between the maximum ratios with and without GlnK1: the ratio does appear higher, but this is mostly because adding large quantities of GlnK1 broadens all peaks at low molecular weight. This happens because the GlnK1 signal starts overlapping with the signal from GlnA1, leading to inflated GlnA1 dimer counts. We therefore do not think that this is biologically significant, especially as the activities do not differ under these conditions.

      (6) Is it possible that the uncleaved GlnA1 tag is preventing interaction with GlnK1? This should be discussed.

      This is of course a very important point. We realized, however, that Schumacher et al. also used an N-terminal His-tag, so we assume that the N-terminal tag does not hamper the interaction.

      (7) Line 228: Please detail the reported discrepancies in rmsd between the current protein and the gram-negative enzymes.

      The differences in rmsd between our M. mazei GlnA1 structure and the structures of gram-negative enzymes are caused by a) sequence similarity: e.g., M. mazei GlnA1 and B. subtilis GlnA share 58.47% sequence identity; b) ligands in the structure: the B. subtilis structure contains L-methionine-S-sulfoximine phosphate, a transition-state inhibitor, while the M. mazei structure contains 2-OG; c) methodology: the structure-determination methods also contribute to these differences. B. subtilis GlnA was determined using X-ray crystallography, while the M. mazei GlnA1 structure was resolved using cryo-EM, where the protein behaves differently in ice compared to a crystal.

      (8) Line 747: The figure title claims "dimeric interface" although the manuscript body only refers to "hexameric interface" or "inter-hexamer interface" (line 224). Moreover, the figure 4 legend uses terms such as vertical and horizontal dimers and this too should be uniformized within the manuscript.

      Thank you for your valuable feedback. We have updated both the figure title and the figure legend as well in the main text to ensure consistency in the description.

      (9) Line 752: The description of the color scheme used here is somehow unclear.

      Thanks for pointing this out. We changed the description to make it more comprehensive.

      (10) Please label H14/15 and H14´/H15´in Fig 4C zoom.

      We agree that this has not been very clear. We added helix labels.

      (11) In Figure 4D legend, make sure to note that the binding sites for the substrate are based on homologies with another enzyme poised with these molecules.

      The same should be clear in the text: sites are not known, they are assumed to be, based on homologies (paragraph starting at line 239).

      Concerning this comment we want to point out that we studied the exact same enzyme as the Schumacher group, except that we used 2-OG in our experiments, which they did not.

      (12) Figure 3 appears redundant in light of Figure 4. 

      (13) Line 235: When mentioning F24, please refer to Figure 5.

      Thank you, we changed that accordingly.

      (14) Please provide the distances for the bonds depicted in Figure 4B.

      Thanks for pointing this out; we added distance labels to Figure 4B. For reasons of clarity, distances were added only for three H-bonds.

      (15) Line 241: D57 is likely serving to abstract a proton from ammonium, what is residue Glu307 potentially doing? The information seems missing in light of how the sentence is built.

      Thanks for pointing this out. According to previous studies, both residues are likely involved in proton abstraction, first from ammonium and then from the formed gamma-ammonium group. Additionally, they contribute to shielding the active site from bulk solvent to prevent hydrolysis of the formed phospho-glutamate.

      (16) Why do the authors assume that increased concentrations of 2-OG are a signal for N starvation only in M. mazei and not in all prokaryotic equivalent systems (line 288)?

      In line 288, we did not claim that this is a signal unique to M. mazei. 2-OG is also the central N-starvation signal in cyanobacteria, but there it is not perceived through direct binding to GS.

      The authors should look into the residues that bind 2-OG and check if they are conserved in other GS. The results of this sequence analysis should be discussed in line with the variable prokaryotic glutamine synthetase types of activity modulation that were exposed in the introduction and Figure 7.

      Please refer to supplementary figure S5, where we have already aligned the mentioned glutamine synthetase sequences. Since this was also already discussed in Müller et al. 2024, we did not want to repeat their observations and therefore refer to our supplementary figure without going into too much detail.

      (17) Figure 5 title: Replace TS by transition state structures of homology enzymes, or alike.

      Thank you for this suggestion. We did not change the title, however, since the structure is not from a homologue but from the exact same glutamine synthetase from Methanosarcina mazei.

      (18) Line 249: D170 is not shown in Figure 5A or elsewhere in Figure 5.

      Thank you for pointing this out. We added D170 to figure 5A.

      (19) Representative density for the residues binding 2-OG should be provided, maybe in a supplemental figure.

      Thank you for the suggestion. We added the densities of the 2-OG-binding residues to Figure 4B.

      (20) Line 260: Please add a reference when describing the phosphoryl transfer.

      We thank the reviewer for this important point and added that accordingly.

      (21) Line 296: The binding of 2-OG indeed appears to be cooperative, such that at concentrations above its binding affinity to the protein, only dodecamers are seen (under experimental conditions). However, claiming that the oligomerization is fast is not correct when the experimental setup includes 10 minutes of incubation before measurements are done. Please correct this within the entire manuscript.

      A (fast) continuous kinetic assay could have confirmed this point and revealed the oligomerization steps and the intermediaries in the process (maybe monomer/dimers, then dimers/hexamers, and then hexamers/dodecamers). Such assays would have been highly valuable to this study.

      We thank the reviewer for this suggestion but disagree. It is indeed a rather fast regulation: activity assays without pre-incubation take only 1 min longer to reach full activity (see the newly included suppl. Fig. S6). Compared with other regulatory mechanisms, e.g. transcriptional or translational regulation, an activation that takes only 60 s is actually quite quick.

      (22) Line 305 (and elsewhere in the manuscript): the authors state that 2-OG primes the active site for a transition state. This appears incorrect. The transition state is the highest energy state in an enzymatic reaction progressing from substrate to product. Meaning, the transition state is a state that has a more or less modified form of the original substrate bound to the active site. This is not the case.

      In line 366 an "active open state" appears much more adequate to use. 

      We agree and changed accordingly throughout the manuscript.

      (23) Line 330: Please delete "found". Eventually replace it with "confirmed": As the authors write, others have described this residue as a ligand to glutamine.

      Thanks, we changed that accordingly, although previous descriptions were based solely on homologies, without experimental validation.

      (24) The discussion at various points summarizes the results again. It should be trimmed and improved.

      (25) Line 381: replace "two fast" with "fast"?

      We thank the reviewer for this suggestion but disagree on this point. We especially wanted to highlight that there are two central nitrogen metabolites involved in the direct regulation of GlnA1, i.e. TWO fast, direct processes mediated by 2-OG and glutamine.

    1. It's not just that the training sets simply don't have examples of people who look like me. It's that the system is now explicitly engineered to resist imagining me. …Hey, is now a good a time to mention that in an effort to create a welcoming and inclusive community for all users, the Midjourney Community Guidelines consider deformed bodies a form of gore, and thus forbidden? It is something of an amusing curiosity that some AI models were perplexed by a giraffe without spots. But it's these same tools and paradigms that enshrine normativity of all kinds, sanding away the unusual.

      Not just statistically, incidentally homogeneous - but homogenized, carefully tuned for a desired tone

    1. Helpful feedback is best provided when quizzes are graded in class, right after they are completed. With this procedure students learn right away what they know and what they missed. With just a few questions on the quiz, it’s easy for them to relate their performance to how they learned the material, and to see what they do not know or misunderstand.

      I annotated this section because it is something I had seen my CT use in my JI. Specifically, she used it for a pretest that was for the district-wide assessment. Giving that feedback immediately was crucial for them to correct any misconceptions and be able to succeed the next class period.

    1. Introduction to the Stream and Purpose

      “I'm uh pretty excited about this one because I'm going to get to finally show off some of the stuff I've been working on for months.”

      • Highlights the speaker’s enthusiasm to demonstrate months of development progress.
      • Establishes that the stream will cover new reactive features and mechanisms.

      Parallel and Nested Async Fetching

      “What if we want to do nested fetching where each component fetches data, but we don’t want to cause waterfalls? … We’ve basically solved waterfalls because promises do not throw out too early.”

      • Emphasizes that asynchronous tasks in Solid can now run in parallel.
      • Shows how nested components fetch data without blocking each other.

      createAsync as a Core Signal Primitive

      “createAsync… if we look at the signature here… it expects a computation… and then returns an accessor of a number… it will just give you the resolved async value without returning undefined.”

      • Introduces createAsync as an “async signal” that never yields undefined.
      • Allows a direct, non-nullable way to fetch and use async data in the component.

      Local vs. Global Suspense Boundaries

      “We don’t have to throw away our render tree just because we have something async… it only throws exactly where we read it, and that means everything else is fine.”

      • Suspense is granular: only the part that reads an unresolved value suspends.
      • Other parts of the UI remain interactive rather than unmounting the entire tree.

      Self-Healing Error Boundaries

      “We basically collect the nodes that fail, and then, if they become unfailed or get disposed, the boundary can remove itself—self-heal.”

      • Explains that failed async or errors get tracked locally by boundaries.
      • Once the failure resolves or is disposed, the error boundary automatically resets.

      Avoiding Unpredictable Tearing

      “You basically never want your async data to just flicker in or out. We can choose to throw or to keep ‘stale’ data. Suspense can opt into that.”

      • Details the importance of consistent state during asynchronous updates.
      • Introduces a mechanism (isStale or latest) to avoid jarring UI replacements.

      Splitting createEffect for Predictability

      “If we just let you read signals in the same function where we do side effects, we get unpredictable re-runs… so we split it into two halves.”

      • Shows how Solid 2.0 separates the tracking (pure) side from the side-effect (impure) side.
      • Ensures that data retrieval and side-effects remain consistent, avoiding “zalgo” outcomes.

      Mutable Reactivity and Store Projections

      “I realized a store approach is a general solution… the idea is you have a single source signal and can ‘project’ it out to many places… only the fields that change update.”

      • Describes a new technique called “projections” to handle large data sets efficiently.
      • Allows per-field reactivity, so only the row or property that changes triggers updates.

      Granular Handling of Async and Errors

      “Error boundaries and suspense handle each failing effect locally. The rest of the system doesn’t even know something failed.”

      • Illustrates that errors remain localized, preventing a full unmount.
      • Reflects the fine-grained reactivity approach, making error handling more targeted.

      Impact on Ecosystem Comparisons

      “React can’t do this because… they don’t have the semantics to pull from signals. It’s not the same model.”

      • States the fundamental difference from React’s component rendering.
      • Emphasizes that granular updates and specialized async signals differ sharply from React’s design.

      Future Plans: SSR, Hydration, Transitions

      “We still need transitions. I haven’t implemented them yet, but they’re part of the equation. … Also looking at SSR so we can skip hydration IDs.”

      • Points to upcoming work for Solid 2.0: concurrency transitions, improved server rendering, and more efficient hydration.
      • Aims to unify the new runtime mechanisms with advanced features like streaming.

      Concluding Observations

      “We’ve basically… proven we can handle async, error boundaries, and concurrency all purely at the reactive level. This changes everything.”

      • Summarizes the significance of these new developments in Solid’s reactivity engine.
      • Stresses that purely runtime-based solutions enable advanced use-cases without a compiler-centric approach.
    1. You don’t always have to prototype. If the cost of just implementing the solution is less than prototyping, perhaps it’s worth it to just create it. That cost depends on the skills you have, the tools you have access to, and what knowledge you need from the prototype.

      I feel like this is a statement that we should all consider. I agree that prototyping is a very important process that we all need to do when it comes to designing something. But if there's a method that has been working for a long time and you want to implement it, the need to prototype shouldn't be as important. I feel a lot of companies need to consider this to cut cost and time.

    2. You don’t make a prototype in the hopes that you’ll turn it into the final implemented solution. You make it to acquire knowledge, and then discard it, using that knowledge to make another better prototype.

      I really like this point because it shifts the focus of prototyping away from just making and towards learning. It reminds me of how people sometimes get too attached to an early design and resist discarding it, even when it doesn’t fully solve the problem. I wonder, though—what happens when a prototype works well enough that stakeholders push to turn it into a final product, even if it’s not meant to be? Has anyone seen a case where a prototype became the end product, for better or worse?

    3. Of course, after all of this discussion of making, it’s important to reiterate: the purpose of a prototype isn’t the making of it, but the knowledge gained from making and testing it. This means that what you make has to be closely tied to how you test it.

      Interesting. If prototyping is about making decisions, at what level should those decisions be made? For example, if I’m designing an app for pizza delivery on campus, should I test every page and feature of the app? Drawing from the reading on surveys, who should the testers be: just developers, or also potential users? Should we be mindful of biases, such as leading or loaded questions?

      Additionally, how should we handle consecutive critiques? For instance, someone might suggest that the pizza app should also include grocery delivery. To what extent should we revise our plan in response to such feedback?

    1. This calibration looks very good: no obvious under- or over-fitting, nor clear L-shaped patterns.

      I don't know if I agree with this; it does look like there are some calibration issues. I also imagine the plot would look worse if the x and y axes had the same scales.

      One question: is this run for ALL batter/seasons in the training set? I would be interested in what this looks like if we restrict the population to player/seasons above some sample-size threshold (maybe \(PA > 50\)). I don't know if that's the right way to evaluate the model, but it's just something I'm curious about. My prior is that it would make the calibration look even worse, since the model will be more confident about those players' true talent and the larger sample size reduces the expected noise.

      Regardless - we need to think more about what this is telling us. In my mind, it's saying that the model is overconfident. It's estimating true talent too close to the observed values in some cases (too much coverage of low probabilities), and that's likely what's hurting the top end as well (not enough coverage of high probabilities).
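The overconfidence described above can be made concrete with a small simulation. This is a minimal Python sketch on invented data (not the actual batter/season model): an intentionally overconfident model's predictions are binned, and each bin's mean prediction is compared with the observed rate.

```python
# Illustrative only: simulated overconfident predictions, not real data.
import random

random.seed(1)

def calibration_table(preds, outcomes, n_bins=10):
    """Return (mean prediction, observed rate, count) per occupied bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, y))
    table = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            obs = sum(y for _, y in b) / len(b)
            table.append((mean_p, obs, len(b)))
    return table

# Overconfident model: predictions pushed away from the overall rate.
true_p = [random.uniform(0.2, 0.5) for _ in range(5000)]
preds = [min(0.99, max(0.01, 0.35 + 1.8 * (p - 0.35))) for p in true_p]
outcomes = [1 if random.random() < p else 0 for p in true_p]

for mean_p, obs, n in calibration_table(preds, outcomes):
    print(f"pred {mean_p:.2f}  obs {obs:.2f}  n={n}")
```

The signature of overconfidence is exactly what the annotation describes: low-probability bins come in above their predictions and high-probability bins below them.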


    1. This can be fixed, and you know exactly how. “I can give you a name. Something you can call yourself when you need to feel strong. It’s authentic,” you add enthusiastically. “From a real Indian.” That much is true.

      Traditionally, no one can just give people Indian names. One has to be given that right within their tribe, and the person receiving the name has to follow the sacred traditions that have been passed down. I think it is also very important to point out that this has been commercialized.

    1. But, to me, the flip side is, if it takes almost no effort to like something, really, the whole act has little value. And unless it is something that garners likes on the order of tens, hundreds of thousands, the 1, 2, maybe 5 I might get feel pretty cheap as gestures to me. I’m really interested more in hearing from you something more substantial than the 200 milliseconds you spent liking my status.

      Indeed! I've long said that the like button is the conversation killer. I looked back through my blog posts and tweets to see what I wrote about this. Oddly, I don't have a record (that I could find) of me saying the like button is the conversation killer.

      I know I've typed out how frustrating it is when you make a comment on a post, and you think the conversation can continue. Yet the recipient of your comment merely hits the like button.

      It's like the person just said, "yeah, i'm done with the conversation. Good bye."

    1. Reviewer #3 (Public review):

      Summary:

      This intriguing paper addresses a special case of a fundamental statistical question: how to distinguish between stochastic point processes that derive from a single "state" (or single process) and more than one state/process. In the language of the paper, a "state" (perhaps more intuitively called a strategy/process) refers to a set of rules that determine the temporal statistics of the system. The rules give rise to probability distributions (here, the probability for turning events). The difficulty arises when the sampling time is finite, and hence, the empirical data is finite and affected by the sampling of the underlying distribution(s). The specific problem being tackled is the foraging behavior of C. elegans nematodes, removed from food. Such foraging has been studied for decades, and described by a transition over time from 'local'/'area-restricted' search (roughly in the initial 10-30 minutes of the experiments, in which animals execute frequent turns) to 'dispersion', or 'global search' (characterized by a low frequency of turns). The authors propose an alternative to this two-state description - a potentially more parsimonious single 'state' with time-changing parameters, which they claim can account for the full time course of these observations.

      Figure 1a shows the mean rate of turning events as a function of time (averaged across the population). Here, we see a rapid transient, followed by a gradual 4-5 fold decay in the rate, and then levels off. This picture seems consistent with the two-state description. However, the authors demonstrate that individual animals exhibit different "transition" statistics (Figure 1e) and wish to explain this. They do so by fitting this mean with a single function (Equations 1-3).

      Strengths:

      As a qualitative exercise, the paper might have some merit. It demonstrates that apparently discrete states can sometimes be artifacts of sampling from smoothly time-changing dynamics. However, as a generic point, this is not novel, and so without the grounding in C. elegans data, is less interesting.

      Weaknesses:

      (1) The authors claim that only about half the animals tested exhibit discontinuity in turning rates. Can they automatically separate the empirical and model population into these two subpopulations (with the same method), and compare the results?

      (2) The equations consider an exponentially decaying rate of turning events. If so, Figure 2b should be shown on a semi-logarithmic scale.

      (3) The variables in Equations 1-3 and the methods for simulating them are not well defined, making the method difficult to follow. Assuming my reading is correct, Omega should be defined as the cumulative number of turning events over time (Omega(t)), not as a "turn" or "reorientation", which has no derivative. The relevant entity in Figure 1a is apparently ⟨Omega(t)⟩, i.e. the mean number of events across a population, which can be modelled by an expectation value. The time derivative would then give the expected rate of turning events as a function of time.

      (4) Equations 1-3 are cryptic. The authors need to spell out up front that they are using a pair of coupled stochastic processes, sampling a hidden state M (to model the dynamic turning rate) and the actual turn events, Omega(t), separately, as described in Figure 2a. In this case, the model no longer appears more parsimonious than the original 2-state model. What then is its benefit or explanatory power (especially since the process involving M is not observable experimentally)?

      (5) Further, as currently stated in the paper, Equations 1-3 are only for the mean rate of events. However, the expectation value is not a complete description of a stochastic system. Instead, the authors need to formulate the equations for the probability of events, from which they can extract any moment (they write something in Figure 2a, but the notation there is unclear, and this needs to be incorporated here).

      (6) Equations 1-3 have three constants (alpha and gamma which were fit to the data, and M0 which was presumably set to 1000). How does the choice of M0 affect the results?

      (7) M decays to near 0 over 40 minutes, abolishing omega turns by the end of the simulations. Are omega turns entirely abolished in worms after 30-40 minutes off food? How do the authors reconcile this decay with the leveling of the turning rate in Figure 1a?

      (8) The fit given in Figure 2b does not look convincing. No statistical test was used to compare the two functions (empirical and fit). No error bars were given (to either). These should be added. In the discussion, the authors explain the discrepancy away as experimental limitations. This is not unreasonable, but on the flip side, makes the argument inconclusive. If the authors could model and simulate these limitations, and show that they account for the discrepancies with the data, the model would be much more compelling. To do this, I would imagine that the authors would need to take the output of their model (lists of turning times) and convert them into simulated trajectories over time. These trajectories could be used to detect boundary events (for a given size of arena), collisions between individuals, etc. in their simulations and to see their effects on the turn statistics.

      (9) The other figures similarly lack any statistical tests and by eye, they do not look convincing. The exception is the 6 anecdotal examples in Figure 2e. Those anecdotal examples match remarkably closely, almost suspiciously so. I'm not sure I understood this though - the caption refers to "different" models of M decay (and at least one of the 6 examples clearly shows a much shallower exponential). If different M models are allowed for each animal, this is no longer parsimonious. Are the results in Figure 2d for a single M model? Can Figure 2e explain the data with a single (stochastic) M model?

      (10) The left axes of Figure 2e should be reverted to cumulative counts (without the normalization).

      (11) The authors give an alternative model of a Levy flight, but do not give the obvious alternative models:

      a) the 1-state model in which P(t) = alpha exp(-gamma t) dt (i.e., a single stochastic process, without a hidden M, collapsing Equations 1-3 into a single equation);

      b) the originally proposed 2-state model (with 3 parameters: a high turn rate, a low turn rate, and the local-to-global search transition time, which can be taken from the data, or sampled from the empirical probability distributions).

      Why not? The former seems necessary to justify the more complicated 2-process model, and the latter seems necessary since it's the model they are trying to replace. Including these two controls would allow them to compare the number of free parameters as well as the model results. I am also surprised by the Levy model since Levy is a family of models. How were the parameters of the Levy walk chosen?
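For concreteness, the 1-state control model in a) is straightforward to simulate: turns form an inhomogeneous Poisson process whose rate alpha * exp(-gamma * t) decays deterministically, which can be sampled exactly by Lewis-Shedler thinning. The parameter values below are placeholders, not fits to the worm data.

```python
import math
import random

def sample_turn_times(alpha, gamma, t_max, rng):
    """Sample turn times from an inhomogeneous Poisson process with
    rate(t) = alpha * exp(-gamma * t), via Lewis-Shedler thinning."""
    times = []
    t = 0.0
    lam_max = alpha  # the rate is maximal at t = 0 and decays monotonically
    while True:
        # candidate event from the bounding homogeneous process
        t += rng.expovariate(lam_max)
        if t > t_max:
            return times
        # accept with probability rate(t) / lam_max = exp(-gamma * t)
        if rng.random() < math.exp(-gamma * t):
            times.append(t)

rng = random.Random(0)
turns = sample_turn_times(alpha=2.0, gamma=0.1, t_max=40.0, rng=rng)
print(len(turns), "turns in 40 minutes")
```

Fitting alpha and gamma for this control and comparing its likelihood (2 parameters) against the 2-process model (3 parameters) would make the parsimony argument concrete.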

      (12) One point that is entirely missing in the discussion is the individuality of worms. It is by now well known that individual animals have individual behaviors. Some are slow/fast, and similarly, their turn rates vary. This makes this problem even harder. Combined with the tiny number of events concerned (typically 20-40 per experiment), it seems daunting to determine the underlying model from behavioral statistics alone.

      (13) That said, it's well-known which neurons underpin the suppression of turning events (starting already with Gray et al 2005, which, strangely, was not cited here). Some discussion of the neuronal predictions for each of the two (or more) models would be appropriate.

      (14) An additional point is the reliance entirely on simulations. A rigorous formulation (of the probability distribution rather than just the mean) should be analytically tractable (at least for the first moment, and possibly higher moments). If higher moments are not obtainable analytically, then the equations should be numerically integrable. It seems strange not to do this.
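As a sketch of what (14) is asking for: if the model is an inhomogeneous Poisson process with rate alpha * M(t), where M(t) = M0 * exp(-gamma * t) (the form suggested by the description of Equations 1-3 — an assumption here, since the equations themselves are not reproduced), then the count N(t) is exactly Poisson-distributed with mean Lambda(t) = (alpha * M0 / gamma) * (1 - exp(-gamma * t)), so the first and all higher moments are available in closed form (for a Poisson count, variance equals mean). The closed form is easily checked against direct numerical integration of the mean equations; the parameter values are placeholders.

```python
import math

# Placeholder parameters, not the paper's fitted values.
alpha, gamma, M0, t_max = 0.002, 0.1, 1000.0, 40.0

# Closed-form mean count: Lambda(t) = (alpha*M0/gamma) * (1 - exp(-gamma*t)).
analytic = alpha * M0 / gamma * (1.0 - math.exp(-gamma * t_max))

# Forward-Euler integration of the mean-field equations
# dM/dt = -gamma * M,  dN/dt = alpha * M.
dt = 1e-3
M, N = M0, 0.0
for _ in range(int(t_max / dt)):
    N += alpha * M * dt
    M -= gamma * M * dt

print(f"analytic mean {analytic:.3f}, numeric {N:.3f}")
```

Since the full count distribution is Poisson under this assumption, the model's predictions for the variance and tails of turn counts follow with no extra simulation, which is exactly the kind of check that would strengthen the comparison with data.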

      In summary, while sample simulations do nicely match the examples in the data (of discontinuous vs continuous turning rates), this is not sufficient to demonstrate that the transition from ARS to dispersion in C. elegans is, in fact, likely to be a single 'state', or this (eq 1-3) single state. Of course, the model can be made more complicated to better match the data, but the approach of the authors, seeking an elegant and parsimonious model, is in principle valid, i.e. avoiding a many-parameter model-fitting exercise.

      As a qualitative exercise, the paper might have some merit. It demonstrates that apparently discrete states can sometimes be artifacts of sampling from smoothly time-changing dynamics. However, as a generic point, this is not novel, and so without the grounding in C. elegans data, is less interesting.

    1. he notes. “It’s been very spotty, and there are dead whales, but it’s not the continuous elevated mortality that we saw in 2000.” Because of the lack of continuity in the deaths, Gulland agrees with the others that what’s killing gray whales is multifaceted, with climate change acting as an accomplice. “I think the mistake is to be looking for just one thing,” she says.

      quote

    1. "This place safe," the woman says, in a voice that is so soft it sounds like a whisper. "Them not going to small-small shop, only big-big shop and market."

      It's interesting how the author illustrates the differences between these two characters both in terms of class and religion. Yet despite these differences, when it comes down to basic survival and need, these ideas of categorization fall away, and it simply becomes a matter of helping others. I feel it's also very telling how fixated Chika is on these categories and how important it seems to her to fit everyone she meets into these neat little boxes, while the woman is just like "Hey, maybe let's not die."

  6. www.sevanoland.com
    1. (do you realthink it's a mind in there or just a little buzz like a bee in a glass jar?

      The speaker is reflecting common sexist sentiments that men hold toward women, ironically asking if there's anything in girls' heads or just an echoed buzz, calling them ditzy and dumb.

    1. The decision to have a speak-out turned out to be brilliant; according to Ellen Willis, for the three hundred people in the audience, the personal testimony “evoke[d] strong reactions . . . empathy, anger, pain.” Just as protesters of the Vietnam War used the teach-in, women’s liberationists saw the speak-out, with its reliance on personal voices, as a way to sway public opinion.

      I found the decision to hold a speak-out, where women publicly shared their personal experiences with abortion, to be a very smart move. I think it's interesting that the authors highlight the importance of breaking the silence surrounding such a personal and controversial issue. This reminds me of some of the strategies used in the Civil Rights and anti-slavery movements, because personal testimony was used to gain public support for social change. In the context of the 1960s, discussing abortion publicly was seen as unacceptable, so I think Redstockings' approach was revolutionary in some ways.

    2. What had been happening in women’s lives that they felt the need—fifty years after gaining the vote—to demand their rights in a similarly public way?

      I believe this might be because it's not just about legal rights anymore; it's about totally changing the way society views women in everyday settings. I wonder how much impact this had on women outside of urban, educated areas. Did the suburban housewife feel as represented or understood by these kinds of protests?

    1. A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of:

      Memes evolve like living things, spreading and changing based on what grabs people's attention, just like how internet trends explode overnight. This makes me think about why some memes last for years, like the "Rickroll," while others fade quickly. The idea that memes follow evolutionary forces also explains how misinformation spreads when it's catchy or emotionally charged.

    1. By claiming that the language was somehow “frozen in history,” he helped perpetuate the stereotype that Appalachians were a retrograde people.

      Which is also just interesting to think about. Having a family and a parent who grew up in Appalachia, I have constantly been surrounded by such aspects of the language of Appalachia and never thought about how this came to be, or about how others perceive Appalachia, but I should have. I knew that people thought of Appalachians as "hillbillies" or something like that; I just thought it was because the region was more rural, but regardless, it is a negative stereotype. It's interesting to think that others perceive Appalachia as "frozen in history"; this had never crossed my mind before.

    2. Whether it’s hillbilly hooch, hillbilly hot dogs or hillbilly mascots, there’s probably no other cultural trope that’s so widely and derisively employed as hillbilly, a term broadly used to refer to the people of Appalachia.

      Perhaps it sounds a bit silly, but I had really never considered the 'hillbilly' stereotype before reading these articles on Appalachian dialect. These stereotypes are seemingly so ingrained in our culture that I hadn't even seen them for the harm they could cause -- it almost felt just like a character archetype. However, in reading these articles, I can really see the harm that discrimination against one's dialect or language can cause to groups of people.

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment 

      This study presents valuable findings regarding the role of life history differences in determining population size and demography. The evidence for the claims is still partially incomplete, with concerns about generation times and population structure. Nonetheless, the work will be of considerable interest to biologists thinking about the evolutionary consequences of life history changes.  

      Thank you. We have addressed the generation time and population structure issues in detail in our revision and hope that you, like us, find them to be of sufficiently low concern (i.e., they are not driving the results) that they do not overshadow the main findings and conclusions.

      The opportunity to make in-depth revisions also helped the manuscript in two ways unanticipated by both us and the reviewers. First, KW made a mistake in the original analysis of phylogenetic signal, and catching that error simplifies that aspect of the study (there is none in our measured variables). Second, in June 2024 Hilgers et al. (2024; https://doi.org/10.1101/2024.06.17.599025) posted an important manuscript to bioRxiv noting the possibility of false population size peaks in PSMC analyses using the standard default settings. Our results had three of those, which we have eliminated. Neither of these issues affects the overall conclusions, but their resolution improves the work.  

      Public Reviews: 

      Reviewer #1 (Public Review): 

      Summary: 

      This interesting study applies the PSMC model to a set of new genome sequences for migratory and nonmigratory thrushes and seeks to describe differences in the population size history among these groups. The authors create a set of summary statistics describing the PSMC traces - mean and standard deviation of N<sub>e</sub>, plus a set of metrics describing the shape of the oldest N<sub>e</sub> peak - and use these to compare across migratory and resident species (taking single samples sequenced here as representative of the species). The analyses are framed as supporting or refuting aspects of a biogeographic model describing colonization dynamics from tropical to temperate North and South America. 

      Strengths: 

      At a technical level, the sequencing and analysis up through PSMC looks good and the paper is engaging and interesting to read as an introduction to some verbal biogeographic models of avian evolution in the Pleistocene.

      The core findings - higher and more variable N<sub>e</sub> in migratory species - seem robust, and the biogeographic explanation is plausible.  

      Thanks. We thought so as well. Our analyses go beyond being simply descriptive and test some simple hypotheses, including a biogeographic+ecological expansion opportunity gained in some lineages through the adoption of a seasonal migration life-history strategy.  

      Weaknesses: 

      I did not find the analyses particularly persuasive in linking specific aspects of clade-level PSMC patterns causally to evolutionary driving forces. To their credit, the authors have anticipated my main criticism in the discussion. This is that variation in population size inferred by methods like PSMC is in "effective" terms, and the link between effective and census population size is a morass of bias introduced by population structure and selection, so robustly connecting specific aspects of PSMC traces to causal evolutionary forces is somewhere between extremely difficult and impossible.  

      As R1 notes, we do not attempt to link effective population sizes and census sizes (though we do discuss this), and we are also careful to discuss correlated rather than causative factors when going beyond the overarching hypotheses regarding life-history strategy.

      Population structure is the most obvious force that can generate large N<sub>e</sub> changes mimicking the census-size-focused patterns the authors discuss. The authors argue in the discussion that since they focus on relatively deep time (>50kya at least, with most analyses focusing on the 5mya - 500kya range) population structure is "likely to become less important", and the resident species are usually more structured today (true), which might bias the findings against the observed higher N<sub>e</sub> in migrants.  

      To clarify, the patterns we discuss are entirely related to effective population size, not census size. But, yes, this is why we’ve given population structure its own section in the Discussion.

      But is structure really unimportant in driving PSMC results at these specific timescales? There is no numerical analysis presented to support the claim in this paper. The biogeographic model of increased temperate-latitude land area supporting higher populations could yield high N<sub>e</sub> via high census size, but shifts in population structure (for example, from one large panmictic population to a series of isolated refugial populations as a result of glaciation-linked climate changes) could plausibly create elevated and more variable N<sub>e</sub>. Is it more land area and ecological release leading to a bigger and faster initial N<sub>e</sub> bump, or is it changes in population connectivity over time at expanding range edges, or is the whole single-bump PSMC trace an artifact of the dataset size, or what? The authors have convinced me that the N<sub>e</sub> history of migratory thrushes is on average very different from nonmigrant thrushes, but beyond that it's unclear what exactly we've learned here about the underlying process.  

      We do not argue that population structure is unimportant, only that it is less important as one goes into deeper time. Further, we agree with the reviewer’s observation above that structure is more likely to bias nonmigrant estimates of N<sub>e</sub>. In other words, following Li & Durbin’s (2011) simulations, we interpret that an inflated N<sub>e</sub> due to structure should occur more often among residents. We have clarified this in the revision. We also agree that what we’ve learned about the underlying process is not entirely clear, but as we stated, population structure does not seem to be the main driver, and there is evidence that both biogeographic and ecological factors are involved. With this being the first time that these questions have been asked, we think we’ve made an important advance and that we’ve opened a number of avenues for future study.

      It is also important to consider the time scales involved and the sampling regime. Glacial-interglacial cycles averaged ~100 Kyr back to 0.74 Mya and then averaged ~41 Kyr from then back to 2.47 Mya; about 50-60 of these cycles occurred (Lisiecki & Raymo 2005: fig. 4). This probably caused a lot of population structuring and mixing in these lineages. In addition, in the PSMC output from one of our lineages, C. ustulatus swainsonii, we find that there are 54 time segments sampled for the Pleistocene, indicating the inadequacy of this method to reflect fine-scale changes and suggesting that each estimate is capturing a lot of both phenomena, structuring and mixing. We have added this to the revision.

      I generally agree with the authors that "at present there is no way to fully disentangle the effects of population structure and geographic space on our results". But given that, I think there are two options - either we can fully acknowledge that oversimplified demographic models like PSMC cannot be interpreted as supporting evidence of any particular mechanistic or biogeographic hypothesis and stop trying to use them to do that, or we have to do our best to understand specifically which models can be distinguished by the analyses we're employing. 

      Short of developing some novel theory deep in the PSMC model, I think readers would need to see simulations showing that the analyses employed in this paper are capable of supporting or refuting their biogeographic hypothesis before viewing them as strongly supporting a specific biogeographic model. Tools like msprime and stdpopsim can be used to simulate genome-scale data with fairly complex biogeographic models. Running simulations of a thrush-like population under different biogeographic scenarios and then using PSMC to differentiate those patterns would be a more convincing argument for the biogeographic aspects of this paper. The other benefit of this approach would be to nail down a specific quantitative version of the taxon cycles model referenced in the abstract, and it would allow the authors to better study and explain the motivation behind the specific summary statistics they develop for PSMC posthoc analysis.  

      These could very well be fruitful pursuits for future work, but they are beyond the scope of this paper. The impossibility of reconstructing ranges through deep time makes anything other than the very general biogeographic hypothesis we’ve posed an uncertain pursuit. Also, a purely biogeographic approach neglects the likelihood of ecological expansion also being involved. We get at the importance of the latter in the “Geography and evolutionary ecology” section of the Discussion. Below, the editor states that discussions among reviewers indicate that simulations are not warranted at this time. We agree that the complexities involved are substantial, to the point of making direct relevance to this empirical study uncertain (especially in such an among-lineage context). Regarding taxon cycles, we merely point out that that conceptual framework seems relevant given our findings. This was not even remotely anticipated at the outset of the study, so we are reluctant to do anything more than point out its possible relevance in several aspects of the results. Finally, the motivation for the study’s summary statistics were entirely driven by the hypotheses, as given in Methods, and due to an earlier error (noted above), there are no post-hoc analyses in the revision. Sorry for the needless confusion.

      Reviewer #2 (Public Review): 

      Summary: 

      Winker and Delmore present a study on the demographic consequences of migratory versus resident behavior by contrasting the evolutionary history of lineages within the same songbird group (thrushes of the genus Catharus). 

      Strengths: 

      I appreciate the test-of-hypothesis design of the study and the explicit formulation of three main expectations to test. The data analysis has been done with appropriate available tools. 

      Weaknesses: 

      The current version of the paper, with the case study chosen, the results, and the relative discussion, is not satisfying enough to support or reject the hypotheses here considered.  

      Given the stated strengths, the weaknesses noted seem a little incongruous, but we understand from the comments below that the reviewer would like to see the study redesigned and expanded.  

      The authors hypothesized that the wider realized breeding and ecological range characterising migrants versus resident lineages could be a major drive for increased effective population size and population expansion in migrants versus residents. I understand that this pattern (wider range in migrants) is a common characteristic across bird lineages and that it is viewed as a result of adapting to migration. A problem that I see in their dataset is that the breeding grounds range of the two groups are located in very different geographic areas (mainly South versus North America). The authors could have expanded their dataset to include species whose breeding grounds are from the two areas, regardless of their migratory behaviour, as a comparison to disentangle whether ecological differences of these two areas can affect the population sizes or growth rates.

      Because the questions are about the migratory life history strategy and the best way to get at this is in a phylogenetic framework, we’re not sure how we could effectively add species “regardless of their migratory behavior.” Further, we know that migration causes lineages to experience variable ecological conditions that include breeding, migration, and wintering conditions. Obligate migrants are going to have different breeding ranges from their close relatives, and the more distantly related species are, the less likely it is that they respond to particular ecological conditions the same way. So we do not think that an approach that included miscellaneous species from northern and southern regions would strengthen this study. Here, the comparative framework of closely related lineages that possess or lack the trait of interest is a study design strength. We do agree, however, that future work is needed that does encompass more lineages (we would argue in a phylogenetic context), and that disentangling the effects of geography and ecology will also be an important future endeavor. 

      As I understand from previous literature, the time-scale to population growth and estimates of effective population sizes considered in the present paper for the resident versus migratory clades seem to widely predate the times to speciation for the same lineages, which were reported in previous work of the same authors (Everson et al 2019) and others (Termignoni-Garcia et al 2022). This piece of information makes the calculation of species-specific population size changes difficult to interpret in the light of lineages' comparison. It is unclear what the authors consider to be lineage-specific in these estimates, as the clades were likely undergoing substantial admixture during the time predating full isolation.  

      We do recognize that timing estimates vary among studies. Differences among studies in important variables like markers, methods, generation time, and mutation or substitution rates create much of this uncertainty. Also, we are not confident in prior dating efforts in this group, largely because of gene flow and its effects on bringing estimates closer to the present. As we point out (line 485), differences among studies on these issues do not detract from the strengths here for within-study, among-lineage contrasts. In short, the timing could be off in an among-study context (and likely is with prior work, given gene flow), but relative performance of among-lineage N<sub>e</sub> differences is less susceptible to these factors. This was shown fairly well in Li & Durbin’s initial use of the method among human populations. Regarding substantial admixture, PSMC curves often unite at their origins with sister lineages (when they were the same lineage). A good example is with the two C. guttatus E & W curves in Fig. S3, which still have substantial gene flow today (they are subspecies and in contact), yet they show remarkably different N<sub>e</sub> curves through their history. It is not possible to mark a cutoff point for each lineage that represents the cessation of admixture with another lineage (e.g., Everson et al. 2019 showed substantial admixture between three full species in this group); that period can be very long (Price et al. 2008), varies among lineages, and will not be available for deeper lineage divergences in the phylogeny. We therefore chose to use all of the time intervals retrievable from the genomic data in each lineage, considering that this uniform treatment is the best approach for our among-lineage comparison. And note that we were careful to label these as “the lineages’ PSMC inception” (line 190).  

      Regarding the methodological difficulties in interpreting the impact of population structure on the estimates of effective population sizes with the PSMC approach, I would think that performing simulations to compare different scenarios of different degrees of structured populations would have helped substantially understand some of the outcomes.  

      The complexities of such modeling in a system like this are daunting. The different degrees of structuring among all of these lineages across just a single glacial-interglacial cycle would necessitate a lot of guesswork; projecting that back across 50-60 such cycles just in the Pleistocene would probably end up being fiction. Disentangling the effects of structure versus changes in N<sub>e</sub> in a system like this would probably not be possible with that approach and these data. As noted above and below, there was agreement among reviewers and the editor that simulations in this case are not warranted for revision. We have added the nature of the glacial-interglacial cycles and the PSMC sampling time segments to help readers understand this better (see above in response to R1, and lines 272-278).

      Additionally, I have struggled to understand if migratory behaviour in birds is considered to be acquired to relieve species competition, or as a consequence of expanded range (i.e., birds expand their range but their feeding ground is kept where speciation occurred as to exploit a ground with higher quality and abundance of seasonal local resources).  

      The origins of migration have been a struggle for researchers since the subject was taken up. But how the trait was acquired among these species does not really matter for our study. Here, migratory lineages possess different biogeographic+ecological attributes than their close relatives that are sedentary. Our focus is on the presence and absence of this life-history trait.

      The points raised above could be considered to improve the current version of the paper. 

      Thank you. We appreciate the opportunity to guide our revision using your comments.  

      Reviewer #3 (Public Review): 

      Summary: 

      This paper applies PSMC and genomic data to test interesting questions about how life history changes impact long-term population sizes. 

      Strengths: 

      This is a creative use of PSMC to test explicit a priori hypotheses about season migration and N<sub>e</sub>. The PSMC analyses seem well done and the authors acknowledge much of the complexity of interpretation in the discussion. 

      Weaknesses: 

      The authors use an average generation time for all taxa, but the citations imply generation time is known for at least some of them. Are there differences in generation time associated with migration? I am not a bird biologist, but quick googling suggests maybe this is the case (https://doi.org/10.1111/1365-2656.13983). I think it important the authors address this, as differences in generation time I believe should affect estimates of N<sub>e</sub> and growth.  

      Good point. The study cited by the reviewer encompasses a much higher degree of variation in body size and thus generation time. Differences in generation time in similarly sized close relatives, as in our study, should be small, and our approach has been to average those that are known. Unfortunately, generation times are not known for all of these species, but given their similarity in size we can have reasonable confidence in their being similar. We used data from the life-history research available (as cited) to obtain our average; there are not appropriate data for the residents, though. However, there is thought to be a generation time cost to seasonal migration in birds, and Bird et al. (2020) included this in their estimates to provide modeled values for all of the lineages we studied. We’re leery of using modeled values where good data for the nonmigrants in this group don’t exist (and the basis for quantifying this cost is tiny), but we recognize that this second approach is available and could leave some doubt in our results if not pursued. So we re-did everything with the modeled generation times of Bird et al. (2020). As expected, most of the differences are time-related. Importantly, our overall results are not different. We present them as Table S2 and have added the details on this to the Methods.

      The writing could be improved, both in the introduction for readers not familiar with the system and in the clarity and focus of the discussion.  

      We have added a phylogeny (new Fig. 1) to help readers better understand the system, and we’ve re-worked the Discussion to make it clearer what is clarified by our results and what remains unclear.  

      Recommendations for the authors:

      Reviewing Editor comment: 

      I note that discussion among the reviewers made clear that simulations are probably not the right answer given the complexity of the modeling required.  

      We appreciate this conclusion, with which we agree.  

      Reviewer #2 (Recommendations For The Authors): 

      Apologies for the delay with the review, which came at a very busy time. I hope you will find my comments helpful.

      Thanks. Your comments are helpful, and we fully understand how reviews (and our revisions!) have to wait until more pressing needs are addressed.

      I enjoyed reading the manuscript but I believe that the discussion sections could be heavily rewritten for better clarity. The discussion is sometimes redundant and lacks some flow/clarity. In a nutshell, I had the feeling that a bit of everything is thrown in the discussion but clear conclusions are not made.  

      Yes, the Discussion has been difficult to write, because more issues arose in the Results than we anticipated at the outset. We feel that discussing them is relevant, but we agree that much remains unclear. This coupling of paleodemographics with geography and ecology is a new area, which opens some important new (and relevant) areas to consider. So clarity is not possible in some areas. We’ve revised to point out where we do have clarity (e.g., in migrant lineages having different paleodemographic attributes than nonmigrants) and where only further study can provide clarity (e.g., in the roles of geography versus ecology). The journal format does not seem to have secondary subheaders, but we’ve used bold in one place to highlight ‘ecological mechanisms’ to offset that section, one of the more complex. We’ve also added a paragraph in the conclusions to clarify where we have clear takeaways and where uncertainties remain. 

      Reviewer #3 (Recommendations For The Authors): 

      The introduction should engage the reader with biology, not the use of demographic methods or genomics (both of which have been around for more than a decade). I would drop the first paragraph and considerably expand the second. What has previous research on ecology/behavior/genetics found regarding the demographic effects of seasonal migration?

      There are two important aspects to our study: 1) using paleodemographic methods to test hypotheses about adoption of a major life-history trait—an important biological question regardless of system, and so far (surprisingly) unaddressed; and 2) using this novel approach to study the effects of one such trait, seasonal migration. At these timescales, nothing exists on this subject, so there is really nothing to expand with. If there is relevant literature that we’ve missed, we’d be happy to add it.

      What is the missing bit of information or angle the current study addresses (other than just doing it larger and fancier with genomics)?  

      The effects of major life-history traits on paleodemographics has not been addressed before, to our knowledge. The whole context is new, so we’re not doing something “larger and fancier” with genomics. We are doing something that has not been done before: testing hypotheses about the effects of a major life-history trait on population sizes in evolutionary time. We’re not sure how this can be made clearer. To us this seems like a very engaging biological question with wide applicability. We hope that this study is just the first of many to come, in a diversity of biological systems.

      A figure showing the phylogenetic relationships of these taxa which are migratory would help the reader immensely. Although this is shown in Fig S3 I think it might be nice to have a map of the species and their ranges alongside a phylogeny as a main figure early on.  

      Thank you. This is a good suggestion. We can’t fit a phylogeny and all the distribution maps (Fig. S1) onto a page, but we can include a phylogeny as one of the main figures with nonmigrants highlighted. We’ve inserted this as a new Fig. 1. 

      If I understand correctly, the authors' arguments for why migratory species should show more growth hinge on large range size and geographic expansion. Yet they argue in the discussion that these forces are unlikely to be important (L226). I found the discussion on this confusing (e.g. L231 then says maybe it does matter). I think more clarity here would be helpful.

      Our argument and predictions are based both on geographic and ecological expansion. This was clearly stated as our third prediction “3) early population growth would be higher as seasonal migration opens novel ecological and geographic space…” We have gone back through and reiterated the coupling of these two factors. The line mentioned concludes the first paragraph in the section ‘Geography and evolutionary ecology,’ which focuses on the difficulty of decoupling these in this system. As the paragraph relates, geography alone does not seem to be driving our results (we do not argue that it is unimportant). 

      I also would have liked more time in the discussion addressing why variation in N<sub>e</sub> may be higher in migratory lineages.

      In addition to re-clarifying this in the Introduction, we have touched back on this now at line 221: “We attribute the higher variation in N<sub>e</sub> among migrants to be the result of the relative instability of northern biomes compared with tropical ones through glacial-interglacial cycles (e.g., Colinvaux et al., 2000; Pielou, 1991).”

      Minor comments: 

      L 62: Presumably PSMC is limited by the coalescent depth of the genelaogy, which may be younger or older than population "origins" depending on the history of colonization, lineage splitting, gene flow, etc.  

      We were careful to phrase these as “the lineages’ PSMC inception” (line 190), and responded to this issue in more detail above in response to R2’s public review. 

      L 338: I think a few more details on PSMC would be helpful. Was no maskfile used?  

      We did not use a mask file, choosing instead to generate data with decent coverage and to align reads against a single close relative. 

      Did the consensus fasta include all species?  

      No, we used a single high-quality reference fasta of Catharus ustulatus, as reported (lines 434-37). We have added that “Identical treatment of all lineages in these respects should provide a strong foundation for a comparative study like this among close relatives.” 

      L 361: Fair to assume the authors used a weighted average of N<sub>e</sub> from the output, rather than just averaging the N<sub>e</sub> values from each time segment?  

      No – we used all the values of N<sub>e</sub> produced by the PSMC output. The PSMC method uses nonoverlapping portions of the genome in its analyses (which we’ve added to make that clear), and adjacent portions often inform very different periods in the time segments. Further, time segments are uneven within and among taxa, so it is not clear how a uniform and comparable weighting scheme could be implemented. We consider a uniform approach to be of primary importance, including for future comparisons among studies. 
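      To make the distinction concrete, here is a minimal sketch (with purely hypothetical segment values, not actual PSMC output) of unweighted versus duration-weighted averaging of N<sub>e</sub> over uneven time segments:

```python
# Hypothetical PSMC-style output: (segment start, segment end, Ne),
# with uneven segment durations as in real PSMC time discretization.
segments = [
    (0, 10_000, 50_000),
    (10_000, 50_000, 80_000),
    (50_000, 200_000, 120_000),
]

# Unweighted mean: every time segment contributes equally,
# regardless of how long it spans (the uniform approach used here).
unweighted = sum(ne for _, _, ne in segments) / len(segments)

# Duration-weighted mean: long segments dominate the average.
total_span = sum(end - start for start, end, _ in segments)
weighted = sum((end - start) * ne for start, end, ne in segments) / total_span

print(round(unweighted))  # 83333
print(round(weighted))    # 108500
```

      With uneven segments the two schemes can diverge substantially, which is why applying one uniform convention across all taxa matters for comparability.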

      L 383 "delta" typo

      Thank you for catching this.

      L 93: I'd be tempted to present the questions (how does seasonal migration affect population size trajectory, means, and variation) and rationale before presenting the hypotheses. I found myself reading the hypotheses and wondering "why?"  

      We’ve tried this change in the revision. It makes the hypotheses a little harder to pull out (they are no longer numbered in a short sequence), but it is shorter and solves this concern.  

      L 337 read depth is usually expressed as X (e.g. "23X") rather than bp.

      Changed.

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This important study further validates DNAH12 as a causative gene for asthenoteratozoospermia and male infertility in humans and mice. The data supporting the notion that DNAH12 is required for proper axonemal development are generally convincing, although more experiments would solidify the conclusions. This work will interest reproductive biologists working on spermatogenesis and sperm biology, as well as andrologists working on male fertility.

      We thank the editor and the two reviewers for their time and careful evaluation of our manuscript. We sincerely appreciate their encouraging feedback and insightful guidance on improving our study. In the revised manuscript, we have performed additional experiments and provided quantitative data regarding the reviewers' comments.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Even though this is not the first report that the mutation in the DNAH12 gene causes asthenoteratozoospermia, the current study explores the sperm phenotype in-depth. The authors show experimentally that the said mutation disrupts the proper axonemal arrangement and recruitment of DNALI1 and DNAH1 - proteins of inner dynein arms. Based on these results, the authors propose a functional model of DNAH12 in proper axonemal development. Lastly, the authors demonstrate that the male infertility caused by the studies mutation can be rescued by ICSI treatment at least in the mouse. This study furthers our understanding of male infertility caused by a mutation of axonemal protein DNAH12, and how this type of infertility can be overcome using assisted reproductive therapy.

      Strengths:

      This is an in-depth functional study, employing multiple, complementary methodologies to support the proposed working model.

      Thank you for your recognition of the strength of this study. Your positive feedback motivates us to continue refining our research and methodological rigor in future studies.

      Weaknesses:

      The study strength could be increased by including more controls such as peptide blocking of the inhouse raised mouse and rat DNAH12 antibodies, and mass spectrometry of control IP with beads/IgG only to exclude non-specific binding. Objective quantifications of immunofluorescence images and WB seem to be missing. At least three technical replicates of western blotting of sperm and testis extracts could have been performed to demonstrate that the decrease of the signal intensity between WT and mutant was not caused by a methodological artifact.

      Thank you for your comments. To support an in-depth study, we analyzed the sequence features of the DNAH12 protein and selected amino acids 1-200 as the antigen on the basis of (1) high immunogenicity, (2) high hydrophilicity, (3) good surface-exposed groups, and (4) sequence homology analysis to avoid non-specific recognition of other proteins. The two anti-DNAH12 antibodies were developed with the help of Dia-An Biotech in 2022. We tried to obtain the immunizing peptides for peptide blocking, but the material had been discarded after the service. Nevertheless, western blotting detected a strong DNAH12 band that was absent in the knockout group, and the immunofluorescence signals of DNAH12 were strong in wild-type but absent in knockout mice. Furthermore, the in-house rabbit antibody proved suitable for IP: it immunoprecipitated the DNAH12 band from Dnah12<sup>+/+</sup> mice but not from Dnah12<sup>-/-</sup> mice. Collectively, these data support the specificity of the DNAH12 antibodies. For the IP assay, we included an IgG group in the IP-mass spectrometry to exclude non-specific binding; the experimental design is described in Figure 6B. The raw data were deposited in the iProX partner repository (accession number: PXD051681), and we have coordinated with the repository manager to make the data publicly accessible (https://www.iprox.cn/page/subproject.html?id=IPX0008674001).  

      In addition, we have performed western blotting of sperm and testis extracts with at least three replicates and have added objective quantifications of the immunofluorescence signals and western blot images. These quantifications are shown in the figures to help readers interpret the results.

      Reviewer #2 (Public Review):

      Summary:

      The authors first conducted whole exome sequencing for infertile male patients and families where they co-segregated the biallelic mutations in the Dynein Axonemal Heavy Chain 12 (DNAH12) gene.

      Sperm from patients with biallelic DNAH12 mutations exhibited a wide range of morphological abnormalities in both tails and heads, reminiscing a prevalent cause of male infertility, asthenoteratozoospermia. To deepen the mechanistic understanding of DNAH12 in axonemal assembly, the authors generated two distinct DNAH12 knockout mouse lines via CRISPR/Cas9, both of which showed more severe phenotypes than observed in patients. Ultrastructural observations and biochemical studies revealed the requirement of DNAH12 in recruiting other axonemal proteins and that the lack of DNAH12 leads to the aberrant stretching in the manchette structure as early as stage XI-XII. At last, the authors proposed intracytoplasmic sperm injection as a potential measure to rescue patients with DNAH12 mutations, where the knockout sperm culminated in the blastocyst formation with a comparable ratio to that in WT.

      Strengths:

      The authors convincingly showed the importance of DNAH12 in assembling cilia and flagella in both human and mouse sperm. This study is not a mere enumeration of the phenotypes, but a strong substantiation of DNAH12's essentiality in spermiogenesis, especially in axonemal assembly.

      The analyses conducted include basic sperm characterizations (concentration, motility), detailed morphological observations in both testes and sperm (electron microscopy, immunostaining, histology), and biochemical studies (co-immunoprecipitation, mass-spec, computational prediction). Molecular characterizations employing knockout animals and recombinant proteins beautifully proved the interactions with other axonemal proteins.

      Many proteins participate in properly organizing flagella, but the exact understanding of the coordination is still far from conclusive. The present study gives the starting point to untangle the direct relationships and order of manifestation of those players underpinning spermatogenesis. Furthermore, comparing flagella and trachea provides a unique perspective that attracts evolutional perspectives.

      Thank you for your thoughtful and positive feedback. We are delighted that you found our study to be a strong substantiation of DNAH12's essential role in spermiogenesis, particularly in axonemal assembly. We believe that this study represents a meaningful step toward unraveling the intricate coordination of axonemal proteins during spermatogenesis, and your comments further inspire us to continue exploring these complex mechanisms in future work. Thank you once again for your valuable insights and summary of this work.

      Weaknesses:

      Seemingly minor, but the discrepancies found in patients and genetically modified animals were not fully explained. For example, both knockout mice vastly reduced the count of sperm in the epididymis and the motility, while phenotypes in patients were rather milder. Addressing the differences in the roles that the orthologs play in spermatogenesis would deepen the comprehensive understanding of axonemal assembly.

      This is an interesting question. Although humans and mice share male infertility phenotypes when dynein proteins essential for sperm flagellar development are deficient, they differ in some respects. For instance, it has been reported that deficiency in DNAH17 (Clin Genet. 2021. PMID: 33070343) or DNAH8 (Am J Hum Genet. 2020. PMID: 32619401; PMCID: PMC7413861), two other members of the Dynein Axonemal Heavy Chain family, also causes more severe phenotypes in mice than in human patients carrying bi-allelic DNAH17 or DNAH8 loss-of-function mutations. In knockout mice, sperm counts are lower and the proportion of morphologically abnormal sperm is higher, whereas the phenotypes in human patients tend to be milder. These observations suggest that orthologs may influence spermatogenesis to slightly different extents in humans and mice. We plan to investigate the mechanisms underlying these discrepancies in future studies, which should provide deeper insights into axonemal assembly and the evolutionary aspects of spermatogenesis. Thank you again for raising this important issue.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      This reviewer is impressed by the study's depth and the extent of the methodology used in the study. The study is well-designed, and the results are very interesting. The reviewer's enthusiasm was reduced by the lack of some controls (provided that the reviewer did not miss them). Further are point-to-point suggestions that this reviewer believes will increase the merit of the present study.

      Title:

      (1) Why a "special" dynein? What makes it special when compared to other dyneins? I suggest removing the word special.

      Through phylogenetic and protein domain analyses of the DNAH family, we found that DNAH12 is the shortest member and the only one lacking a typical microtubule-binding domain (MTBD); this is why we described it as a “special” dynein. Having fully considered your valuable suggestion, we have removed the word from the title.

      Abstract:

      (2) L23: same as above, why special?

      We identified DNAH12 as the shortest member of the DNAH family, uniquely lacking the typical microtubule-binding domain (MTBD). This distinct characteristic prompted us to describe it as a “special” dynein in the Abstract.

      (3) L37: the reviewer did not find a figure (neither main nor supplementary) that would demonstrate the proper organization of microtubules in cilia. Figure S11 only shows the presence of cilia in DNAH12-/- mouse. A TEM image of cilia is required to confirm or reject the claim that DNAH12 does not play a crucial role in proper microtubule organization in cilia.

      We have now added TEM images of cilia from wild-type and Dnah12<sup>-/-</sup> mice. The ultrastructure of the ciliary axonemes was comparable between the two groups, suggesting that DNAH12 does not play a crucial role in proper microtubule organization in cilia. These results have been added to Supplemental Figure 11F.

      (4) L122-6: Did the authors also confirm these structures by cryo-EM? If not, this needs to be pointed out as a shortcoming in the discussion, that the structures and interactions are predicted in silico only.

      Thank you for your comment. Owing to resource limitations, we did not perform cryo-EM to confirm these structures; we will pursue atomic-resolution structural details in a future study. We understand this point and have now noted it as a shortcoming in the Discussion.

      (5) L134: Be more specific about what characteristics of DNAH12 were analyzed.

      Thank you for your comment. We have now described this in the Methods section. The characteristics of DNAH12 that were analyzed include regional immunogenicity, hydrophilicity, surface-exposed groups, and sequence homology.

      (6) L137: Be more specific about how the antibodies validated were. Were the antibodies validated for both immunofluorescence and western blotting? I suggest doing peptide blocking of the antibody, for instance for ICC, preincubation of ab with immunizing peptide followed by primary ab incubation with studied cells/tissues.

      Thank you for your comments and suggestions. We validated the antibodies for both immunofluorescence and western blotting to ensure their effectiveness in our experiments. The two anti-DNAH12 antibodies were developed with the help of Dia-An Biotech in 2022; we attempted to obtain the immunizing peptides for peptide blocking, but the material had been disposed of after the service. Nevertheless, western blotting detected a strong DNAH12 band that was absent in the knockout group, and the immunofluorescence signals of DNAH12 were strong in wild-type but absent in knockout mice. In addition, the IP experiment showed that the rabbit antibody immunoprecipitated the DNAH12 band from Dnah12<sup>+/+</sup> mice but not from Dnah12<sup>-/-</sup> mice. Collectively, these data support the specificity of the DNAH12 antibodies. We sincerely appreciate your suggestion and will request the peptide material if we develop new antibodies.

      (7) L142: This reviewer is unfamiliar with using TRIzol for sperm protein extraction. Is there a specific reason for not using PAGE loading buffer for human sperm protein extraction?

      Thanks for your suggestions. TRIzol reagent can be used for small (5×10<sup>6</sup> cells) as well as large (>10<sup>7</sup> cells) samples and allows simultaneous extraction of RNA and protein. Our lab adopted this method in previous work (Hum Reprod Open. 2023; PMID: 37325547; PMCID: PMC10266965), and it is very useful for processing small amounts of valuable sample. The human sperm protein extract was mixed with SDS sample buffer (PAGE loading buffer) before SDS-PAGE separation. We have added this detail to the Methods section; we apologize for the confusion.

      (8) L144: Were these the final concentrations of the SDS loading buffer? 1 × Laemmli buffer contains 62.5 mM TRIS, 2% (w/w) SDS, 10 % (w/v) glycerol, and 5% 2-mercaptoethanol. Please, amend accordingly.

      Thanks for your suggestions. We apologize for the incorrect labelling of concentrations (the previous buffer was 3× SDS loading buffer). We have now amended the description to 1× Laemmli buffer as suggested.

      (9) L151: Table S2 contains other homemade antibodies than DNAH12. Please, include references to the studies where the generation and validation of these antibodies is described.

      Thank you for your suggestions. We have developed a DNAH1 antibody for use in Western blot assays, with its generation and validation detailed in Frontiers in Endocrinology (Lausanne), 2021 (PMID: 34867808; PMCID: PMC8635859). Additionally, we have produced a DNAH17 antibody for both immunofluorescence (IF) and Western blot, as described in Journal of Experimental Medicine, 2020 (PMID: 31658987; PMCID: PMC7041708). These references have now been included.

      (10) L167: Please, spell out ICR at its first appearance.

      Done as suggested; thank you. The full name of ICR is Institute of Cancer Research.

      (11)L169: This reviewer is confused. It seems that the mouse encodes DNAH12 on exons 5 and 18 simultaneously. Each mouse model has only one exon targeted for a knockout. Would not this mean that the expression of DNAH12 in both models is not completely knocked down? Please, give more background in this paragraph for those less familiar with CRISPR/Cas9.

      Thank you for your insightful comment. We appreciate your attention to detail. To clarify, although the mouse Dnah12 locus encodes both exon 5 and exon 18, each model targets only one of these key exons (exon 5 or exon 18), giving two independent knockout strategies. This approach allows us to assess any residual DNAH12 expression in both models. We checked DNAH12 expression in both models: DNAH12 protein was undetectable in each, indicating that both are complete knockouts at the protein level. Additionally, we will revise the manuscript to include further details on the CRISPR/Cas9 methodology, ensuring accessibility for readers less familiar with this technique. Thank you again for your valuable feedback, which we believe will greatly enhance our manuscript.

      (12) L201: 50 % PBS? As in 0.5 x concentrated PBS? Please, rewrite for clarity.

      The term "50% PBS" refers to a 1:1 dilution of phosphate-buffered saline (PBS) with an appropriate diluent, resulting in a final concentration of 0.5x PBS. We will revise the text to explicitly clarify this, ensuring it is clear to all readers. Thank you for highlighting this point.

      (13) L224: Please, state what beads those were (magnetic/agarose, conjugated to protein A/G...) Include catalog # and manufacturer.

      Thank you for your suggestion. We have updated the manuscript to include this information. The beads used were Protein A/G Magnetic Beads (Catalog #B23202, Bimake, Texas, USA).

      (14) L227: What was the reason for adding a proteasomal inhibitor? What concentration was used? Please, add this information to the text.

      We added MG132 to the cell immunoprecipitation (IP) experiments to inhibit proteasomal activity and thereby prevent degradation of the target protein. This helps maintain the stability of the target protein during the experiment (Sci Adv. 2022. PMID: 35020426; PMCID: PMC8754306), enhancing its detectability in subsequent analyses. MG132 was used at 5 μM. We have added this information to the revised manuscript.

      (15) L233: in vivo IP of mouse testis lysate? This does not make sense. I suggest removing "in vivo".

      Thank you for your careful review and comments on our manuscript. We have modified as suggested.

      (16) L317: Supplemental Figure 6 precedes Supplemental Figure 5 in the text, which is neither logical nor orderly.

      Thank you for your suggestion. Since the N-terminal DNAH12 antibody is already described in the Methods section (L317), we propose removing Supplemental Figure 6 from the content to improve the logical flow and maintain an orderly presentation.

      (17) L345 and elsewhere: how did the authors quantify the decrement of the signal? This needs to be measured objectively.

      Thank you for your valuable suggestion. We quantified the signal intensity using Fiji (Nat Methods. 2012. PMID: 22743772; PMCID: PMC3855844), which allows precise analysis of pixel intensity. The results are presented in the figures to illustrate the decrement in signal intensity, and a description of the method has been added to the Methods section.
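      As an illustration of this kind of measurement (a plain-Python sketch on a synthetic image, not the actual Fiji workflow), a background-subtracted mean intensity can be computed as:

```python
# Synthetic 8-bit grayscale "image" (nested lists): a bright signal
# region on a dim background, standing in for a fluorescence image.
img = [[20] * 100 for _ in range(100)]   # uniform background level
for r in range(40, 60):
    for c in range(40, 60):
        img[r][c] = 200                  # signal region

pixels = [p for row in img for p in row]
signal = [p for p in pixels if p > 100]       # simple intensity threshold
background = [p for p in pixels if p <= 100]

mean_signal = sum(signal) / len(signal)
mean_background = sum(background) / len(background)

# Background-subtracted intensity, the quantity typically compared
# between wild-type and mutant samples.
print(mean_signal - mean_background)  # 180.0
```

      Fiji computes the same underlying quantity (mean gray value over a region of interest), with the threshold and ROI chosen interactively or by macro.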

      (18) L371: I recommend: ...and elongated spermatids; the abnormal...

      Done as suggested. Thank you.

      (19) L412-4: Cilia in both Dnah12<sup>mut/mut</sup> and Dnah12<sup>-/-</sup> are developed, but are they motile or immotile? This needs to be investigated. Is the DNAH12 in cilia truncated while still fulfilling its function?

      Thanks for your comment. We checked ciliary motility using an inverted microscope and observed no significant difference between the knockout and control groups, indicating that ciliary motility is not affected by DNAH12 deficiency. The N-terminal DNAH12 antibody was developed to detect any truncated protein in mouse tissues; however, we did not detect DNAH12 signals by immunofluorescence on trachea sections of Dnah12<sup>-/-</sup> mice. These results indicate that DNAH12 may exert little influence on cilia, in contrast to its important function in flagella.

      (20) L414-6: The results do not support this claim as the authors do not show that cilia are motile.

      Thanks for your comment. Supplemental Videos 3-4, showing live trachea from Dnah12<sup>+/+</sup> and Dnah12<sup>-/-</sup> mice, have been uploaded to support this conclusion.

      (21) L421-3: Did the authors perform a negative test, where they let the testis lysate interact with beads/IgG only and performed the MS to identify non-specific binding? This is a crucial specificity test for this approach.

      We performed a negative test: in the IP assay, we included an IgG group in the IP-mass spectrometry to exclude non-specific binding. The experimental design is described in Figure 6B. The raw data were deposited in the iProX partner repository (PXD051681), and we have asked the repository manager to update the status to public so that the data will be visible to readers. 

      (22) L462: same as #18 the authors need to show that cilia are also motile. The mere presence of cilia in DNAH12-/- as shown in Fig S11C&D is not sufficient to conclude that the mice do not manifest PCD symptoms.

      Thanks for your comment. We did not observe obvious differences between the cilia of Dnah12<sup>+/+</sup> and Dnah12<sup>-/-</sup> mice. Supplemental Videos 3-4, showing live trachea from Dnah12<sup>+/+</sup> and Dnah12<sup>-/-</sup> mice, have been uploaded to show tracheal ciliary motility.

      (23) L529: MTBD region instead of domain, as "domain" is already part of the abbreviation.

      Done as suggested.

      (24) L875: Sperm is both the singular and plural form. Spermatozoon vs spermatozoa can be used where the distinction between singular and plural needs to be made.

      Thanks for your suggestion. We have checked and changed this usage.

      (25) Figure 3H: Is there a specific reason why P11 is not shown?

      Because only limited smear slides of P11 were available, P11 had not previously been stained with the DNAH17 antibody. We have now performed this experiment, which showed that DNAH17 expression was not affected in patient P11. This result has been added to Figure 3H.

      (26) Figure 8H: The authors in their MS do not describe what is happening to N-DRC proteins, yet they suggest in their model that it's unaffected in the mutant mouse/human. Please, address this in the MS and clearly state in the model that N-DRC needs further exploration in future studies.

      Thanks for your suggestion. We checked the MS data but did not observe enrichment of nexin-dynein regulatory complex (N-DRC) proteins; only one known N-DRC protein, DRC1, was present, with a single unique peptide. Instead, we observed enrichment of inner dynein arm proteins and radial spoke proteins. However, we cannot determine whether the N-DRC structure is affected. We have stated this in the Discussion and will pursue it in the future with high-resolution technology such as cryo-EM.

      (27) Figure 5F: Is it possible to choose a different Dnah12<sup>-/-</sup> spermatozoon to see a reduced level of DNALI1 so that it corresponds with the WB detection in Fig 5B?

      Thanks for your suggestion, we have chosen a Dnah12<sup>-/-</sup> spermatozoon with faint remnants of the DNALI1 signal as the representative picture.

      (28) Figure S2 and elsewhere: How were the authors able to resolve and calibrate 356 kDa protein using SDS PAGE? Agarose electrophoresis protein electrophoresis is more suitable for resolution of high molecular proteins. Most of the protein standards have as high molecular standard as 250 kDa.

      We found that high-molecular-weight proteins (such as the 356 kDa DNAH12) can be resolved on 4-12% gradient polyacrylamide gels by using appropriate voltages and longer electrophoresis times. The DNAH12 protein was calibrated using the HiMark™ Pre-Stained High Molecular Weight Protein Standard (30-460 kDa). We have now updated the blot images to show the size of the DNAH12 protein (Fig S6B); the target band lies clearly between 268 kDa and 460 kDa, which makes the DNAH12 band easy to identify elsewhere. Thanks for your suggestion.

      (29) Figure S5: similar to #24: Why P10 and P11 are not shown?

      Because only limited smear slides of P10 and P11 were available, we had not previously stained them with the ODF2 antibody. We have now performed these experiments, which showed that ODF2 expression was not affected in patients P10 and P11. This result has been added to Figure S5.

      (30) Figure S6B: The specificity of the anti-DNAH12 antibody against mouse DNAH12 seems to be questionable since the authors detect multiple bands on WB. I recommend doing peptide blocking to show that these are non-specific binding as opposed to off-target binding.

      Thank you for your comments. We analyzed the sequence features of the DNAH12 protein and selected amino acids 1-200 as the antigen on the basis of (1) high immunogenicity, (2) high hydrophilicity, (3) good surface-exposed groups, and (4) sequence homology analysis to avoid non-specific recognition of other proteins. The two anti-DNAH12 antibodies were developed with the help of Dia-An Biotech in 2022; we attempted to obtain the immunizing peptides for peptide blocking, but the material had been disposed of after the service. Nevertheless, western blotting detected a strong DNAH12 band that was absent in the knockout group, and the immunofluorescence signals of DNAH12 were strong in wild-type but absent in knockout mice. In addition, the in-house rabbit antibody proved suitable for IP: it immunoprecipitated the DNAH12 band from Dnah12<sup>+/+</sup> mice but not from Dnah12<sup>-/-</sup> mice. Collectively, these data support the specificity of the DNAH12 antibodies. We appreciate your suggestion and will request the peptide material if we develop new antibodies.

      Reviewer #2 (Recommendations For The Authors):

      Recruitment of DNAH1 and DNALI1 to the flagella is dependent on DNAH12 expression, according to the data. What would be the mechanism that locates DNAH12 which lacks MTBD to the flagella?

      Thank you for your insightful question. We are currently investigating the mechanisms that facilitate the loading of DNAH12 to the flagella. Based on existing data, we hypothesize that CCDC39 and/or CCDC40 may play a critical role in the recruitment of DNAH12 to sperm flagella during spermiogenesis (Nat Genet. 2011, PMID: 21131972; PMCID: PMC3509786; Nat Genet. 2011, PMID: 21131974; PMCID: PMC3132183). Furthermore, a structural study by Walton et al. showed that DNAH12 associates with CCDC39/CCDC40 proteins (Nature. 2023, PMID: 37258679; PMCID: PMC10266980). These findings suggest that CCDC39 and/or CCDC40 may play a role in facilitating the localization of DNAH12 to the flagella. Additional studies are needed to identify other potential factors involved in this process and to further elucidate the mechanisms underlying this complex biological phenomenon.

  7. social-media-ethics-automation.github.io
    1. Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on 2023-12-08).

      The idea that whales, these enormous ocean monsters, were once terrestrial animals with legs is astounding, as documented in the Wikipedia article on cetacean evolution. One detail that really stood out to me is that Pakicetus, an early progenitor, still had legs and likely spent some time walking around before its descendants completely acclimated to the water. The transition via species like Ambulocetus, which could both walk and swim, gives the impression of evolution unfolding in real time. I was astounded to discover that modern whales still have small, concealed vestiges of hind limbs buried inside their bodies. It’s crazy how much evidence we have of this transformation, from fossils to DNA, and it really shows how evolution isn’t just an idea—it’s written into the bones of living creatures.

    2. Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on 2023-12-08).

      This Wikipedia article on the evolution of cetaceans is fascinating. It's amazing to think that whales and dolphins started out as land animals related to hippos around 50 million years ago. Over time, they adapted to life in the ocean but still kept mammal traits like breathing air and nursing their young. The way they evolved into two groups, baleen whales and toothed whales, with some even developing echolocation, is just mind-blowing.

  8. laulima.hawaii.edu
    1. This is something I found very important to point out. Many people give and make lei for their loved ones, whether it's to say congratulations or just out of love, which I personally never deeply reflected on until now. Also, my tūtū always said that if you're in a bad mood you should not make lei, because you'll put the bad energy into the lei.

    1. Even as many of the show’s details are doing double duty as hints and feints—Kathryn Hahn’s nosy neighbor isn’t just a brash character cracking endless jokes at her husband’s expense, she’s probably someone else; the commercials that talk so much about being in and out of time are presumably hinting at some big themes—but it’s more interested in the sitcom as a sitcom than it has to be.

      Even though the show hints at bigger mysteries, it is more focused on being a fun sitcom rather than on revealing everything early on.

    1. This suffers from an insufficient formalisation of the concept of "similarity". Everything is so similar that its characterisation as "identical", similar, different, or very different depends on the frame of reference. By pointing out some resemblance, you cannot make a justified judgement about the similarity or difference of anything.

      I would suggest that Luhmann didn't write more about his method himself because it would have been generally fruitless for him, as everyone around him was doing exactly the same thing. I asked ca. two dozen professors about their method (btw. at the very university that Luhmann was a professor at). NONE had anything remotely resembling a Luhmann-Zettelkasten. During his lifetime there was quite some interest in his Zettelkasten, hence the visitors, hence the disappointment of the visitors (people made an effort to review his Zettelkasten): (9/8,3) "Geist im Kasten? Zuschauer kommen. Sie bekommen alles zu sehen, und nichts als das – wie beim Pornofilm. Und entsprechend ist die Enttäuschung." ["Mind in the box? Spectators come. They get to see everything, and nothing but that – as with a porn film. And the disappointment is accordingly great."] - from his own Zettelkasten.

      So: the statement that his practice was basically commonplace (or even a commonplace book) is not based on sound reasoning (it is not sufficiently precise in its use of the concept "similarity"), and there is empirical evidence that his practice was very uncommon. (Which is obvious if you consider that his theoretical reasoning about his Zettelkasten was heavily informed by the very systems theory that he developed. So, a reasoning unique to him.)

      Reply to u/FastSascha at https://old.reddit.com/r/Zettelkasten/comments/1ilvvnc/you_need_to_first_define_the_zettlekasten_methoda/mc01tsr/

      The primary and really only "innovation" of Luhmann's system was his numbering and filing scheme (which he most likely borrowed and adapted from prior sources). His particular scheme only serves to provide specific addresses for finding his notes. Regardless of whether one does this explicitly, everyone's notes have a physical address and can be cross-referenced or linked in any variety of ways. In John Locke's commonplacing method of 1685/1706 he provided an alternate (but equivalent) method of addressing and finding notes. Whether you address them specifically or not doesn't change their shape, only the speed at which they may be found. This may shift an affordance of using such a system, but it is invariant with respect to the form of the system. What I'm saying is that the form and shape of Luhmann's notes are identical to the huge swath of prior art within intellectual history. He was not doing something astoundingly new or different. By analogy, he was making the same Acheulean hand axe everyone else was making; it's not as if he figured out a way to lash his axe to a stick and then subsequently threw it to invent the spear.

      When I say the method was commonplace at the time, I mean that a broad variety of people used it for similar reasons, for similar outputs, and in incredibly similar methods. You can find a large number of treatises on how to do these methods over time and space, see a variety of examples I've collected in Zotero which I've mentioned several times in the past. Perhaps other German professors weren't using the method(s) as they were slowly dying out over the latter half of the 20th century with the rise and ultimate ubiquity of computers which replaced many of these methods. I'll bet that if probed more deeply they were all doing something and the something they were doing (likely less efficiently and involving less physically evident means) could be seen to be equivalent to Luhmann's.

      This also doesn't mean that these methods weren't actively used in a variety of equivalent forms by people as diverse as Aristotle, Cicero, Quintilian, Seneca, Boethius, Thomas Aquinas, Desiderius Erasmus, Rodolphus Agricola, Philip Melancthon, Konrad Gessner, John Locke, Carl Linnaeus, Thomas Harrison, Vincentius Placcius, Gottfried Wilhelm Leibniz, S. D. Goitein, Gotthard Deutsch, Beatrice Webb, Sir James Murray, Marcel Mauss, Claude Lévi-Strauss, Mortimer J. Adler, Niklas Luhmann, Roland Barthes, Umberto Eco, Jacques Barzun, Vladimir Nabokov, George Carlin, Twyla Tharp, Gertrud Bauer, and even Eminem to name but a few better known examples. If you need additional examples to look at, try searching my Hypothesis account for tag:"zettelkasten examples". Take a look at their examples and come back to me and tell me that beyond the idiosyncrasies of their individual use that they weren't all doing the same thing in roughly the same ways and for roughly the same purposes. While the modalities (digital or analog) and substrates (notebooks, slips, pen, pencil, electrons on silicon, other) may have differed, the thing they were doing and the forms it took are all equivalent.

      Beyond this, the only thing really unique about Luhmann's notes were that he made them on subjects that he had an interest, the same way that your notes are different from mine. But broadly speaking, they all have the same sort of form, function, and general topology.

      If these general methods were so uncommon, how is it that all the manuals on note taking are all so incredibly similar in their prescriptions? How is it that Marbach can do an exhibition in 2013 featuring 6 different zettelkasten, all ostensibly different, but all very much the same?

      Perhaps the easier way to see it all is to call them indexed databases. Yours touches on your fiction, exercise, and nutrition; Luhmann's focuses on sociology and systems theory; mine looks at intellectual history, information theory, evolution, and mathematics; W. K. Kellogg's 640 drawer system in 1906 focused on manufacturing, distributing and selling Corn Flakes; Jonathan Edwards' focused on Christianity. They all have different contents, but at the end of the day, they're just indexed databases with the same forms and functionalities. Their time periods, modalities, substrates, and efficiencies have differed, but at their core they're all far more similar in structure than they are different.
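      The "indexed database" framing above can be made concrete with a small sketch (the addresses, note texts, and function names here are hypothetical, loosely echoing the bread example discussed in this thread): every note sits at a fixed address, notes cite other addresses, and a subject index maps topics back to addresses.

```python
# Minimal "zettelkasten as indexed database" sketch (hypothetical data):
# each note has a fixed address, notes cite other addresses, and a
# subject index maps topics back to addresses.

notes = {
    "1":  {"text": "On bread in general.", "links": ["1a", "1b"]},
    "1a": {"text": "Banana bread.", "links": ["2"]},
    "1b": {"text": "Flour bread.", "links": []},
    "2":  {"text": "Fermentation and yeast.", "links": []},
}

index = {"bread": ["1", "1a", "1b"], "fermentation": ["2"]}

def lookup(topic):
    """Resolve a topic via the index to (address, text) pairs."""
    return [(addr, notes[addr]["text"]) for addr in index.get(topic, [])]

def follow(addr):
    """Follow a note's cross-references, as one would pull linked cards."""
    return [notes[target]["text"] for target in notes[addr]["links"]]

print(lookup("fermentation"))  # -> [('2', 'Fermentation and yeast.')]
print(follow("1"))             # -> ['Banana bread.', 'Flour bread.']
```

      Whether the lookup is done by a program or by a person walking an index to a drawer of slips, the structure being traversed is the same.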

      Perhaps one day, I'll write a deeper treatise with specific definitions and clearer arguments laying out the entire thing, but in the meanwhile, anyone saying that Luhmann's instantiation is somehow more unique than all the others, beyond the meaning expressed by Antoine de Saint-Exupéry in The Little Prince, is fooling themselves. Instead, I suspect that by realizing you're part of a longer, tried-and-true tradition, your own practice will be far easier and more useful.

      The simplicity of the system (or these multiply-named methods) allows for the rise of a tremendous amount of complexity. This resultant complexity can in turn hide the simplicity of the root system.

      “To me, you are still nothing more than a little boy who is just like a hundred thousand other little boys. And I have no need of you. And you, on your part, have no need of me. To you, I am nothing more than a fox like a hundred thousand other foxes. But if you tame me, then we shall need each other. To me, you will be unique in all the world. To you, I shall be unique in all the world..."

      I can only hope people choose to tame more than Luhmann.

    2. Explain your definition of a hierarchical reference system. How is one note in his system higher, better, or more important than another? Where do you see hierarchies? Let's say Luhmann were doing something on bread. First off he has 3 notes and these end up sequenced 1, 2, 3. Then he does the equivalent of a block link on 1 by creating 1a = banana bread, 1b = flour bread. A good discussion: https://yannherklotz.com/zettelkasten/

      "If there weren't direct mappings, it should be impossible to copy & paste Luhmann's notes into Obsidian, Logseq, OneNote, Evernote, Excel, or even Wikipedia."

      That's not true at all. One can dump from one structure into another structure; you just potentially lose structure in the mapping. Those systems don't have similar capabilities. Obsidian has folders; Logseq does not. Logseq has block-level linking; Obsidian does not. I can't even reliably map between the first two elements of your list. Now we throw in OneNote, which directly takes OLE embeds, meaning linked information can dynamically change after being embedded. That is, say I'm tracking "current BLS inflation data": it will remain permanently current in my note. Neither Obsidian nor Logseq supports that. Etc. Excel, OneNote, and Logseq allow for computations in the note (i.e. the note can contain information not directly entered); Obsidian and Wikipedia do not.

      "We might argue about efficiencies, affordances, or speed, but at the end of the day they're all still structurally similar."

      We are totally disagreeing here, the OLE example being the clearest-cut example.

      reply to u/JeffB1517 at https://old.reddit.com/r/Zettelkasten/comments/1ilvvnc/you_need_to_first_define_the_zettlekasten_methoda/mc1y4oj/

      I'm not new here: https://boffosocko.com/research/zettelkasten-commonplace-books-and-note-taking-collection/

      Your example of a hierarchy was not a definition. In practice Luhmann eschewed hierarchies, though one could easily modify his system to create them. This has been covered ad nauseam here in conversations on top-down and bottom-up thinking.

      When "dumping" from one program to another, one can almost always easily get around a variety of affordances supplied by one and not another simply by adding additional data, text, references, links, etc. As an example, my paper system can do Logseq's block level linking by simply writing a card address down and specifying word 7, sentence 3, paragraph 4, etc. One can also do this in Obsidian in a variety of other technical means and syntaxes including embedding notes. Block level linking is a nice affordance when available but can be handled in a variety of different (and structurally similar) ways. Books as a technology have been doing block level linking for centuries; in that context it's called footnotes. In more specialized and frequently referenced settings like scholarship on Plato there is Stephanus pagination or chapter and verse numberings in biblical studies. Roam and Logseq aren't really innovating here.
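      As a sketch of that claim (the card address, contents, and function name here are invented), a paper "block reference" reduces to an address plus a position within the card, which any system, digital or analog, can resolve:

```python
# Hypothetical sketch: block-level linking as address + position.
# A card is stored as paragraphs of sentences; a reference names the
# card address plus 1-based paragraph and sentence numbers, as one
# might write "card 21/3d7, paragraph 2, sentence 1" on a slip.

cards = {
    "21/3d7": [
        ["First sentence.", "Second sentence."],
        ["Third sentence.", "Fourth sentence."],
    ],
}

def resolve(card, paragraph, sentence):
    """Resolve a block reference to the exact sentence it names."""
    return cards[card][paragraph - 1][sentence - 1]

print(resolve("21/3d7", 2, 1))  # -> Third sentence.
```

      The digital systems merely automate the resolution step; the reference itself has the same shape either way.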

      Similarly, your OLE example is a clever and useful affordance, but it could be gotten around by providing an equation that is carried out by hand each time it's needed; sure, it may take more time, but it's doable in every system. This may actually be useful in some contexts, as one would then have the time sequences captured and logged in their files for later analysis and display. These affordances may make things easier and simpler in some cases, but they generally don't change the root structure of what is happening. Digital search is an example of a great affordance, except in cases when it returns thousands of hits which then need to be subsequently searched. Short indexing methods with pen and paper can in some cases do the same search more quickly, because one's notes provide a lot of other contextual clues (colored cards, wear on cards, physical location of cards, etc.) that a pure digital search does not. I can often do manual searches through 30,000 index cards more quickly and accurately than through an equivalent number of digital notes.

      There is a structural equivalence between folders and tags/links in many programs. This is more easily seen in digital contexts, where a folder can be programmatically generated by executing a search on a string or tag, which then results in a "folder" of results. These searches are a quick affordance versus actively maintaining explicit folders, but the same result could be had even in pen-and-paper contexts with careful indexing and manual searches (which may just take longer, but that doesn't mean they can't be done). Edge-notched cards were heavily used in the mid-20th century to great effect for doing these sorts of searches.
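      The folder/tag equivalence can be sketched in a few lines (the note titles and tags are made up, nodding to the W. K. Kellogg and Luhmann examples mentioned earlier): a "folder" is just the result of filtering on a tag, whether a program, or a person with an index, does the filtering.

```python
# Sketch: a "folder" generated programmatically by a tag search
# (hypothetical note titles and tags).

notes = [
    {"title": "Corn Flakes distribution", "tags": {"manufacturing", "sales"}},
    {"title": "Systems theory",           "tags": {"sociology"}},
    {"title": "Factory layout",           "tags": {"manufacturing"}},
]

def virtual_folder(tag):
    """Return the titles an explicit folder for `tag` would contain."""
    return [n["title"] for n in notes if tag in n["tags"]]

print(virtual_folder("manufacturing"))
# -> ['Corn Flakes distribution', 'Factory layout']
```

      An explicitly maintained folder and this generated one hold the same members; only the bookkeeping differs.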

      When people here are asking or talking about a variety of note taking programs, the answer almost always boils down to which one you like best because, in large part, a zettlkasten can be implemented in all of them. Some may just take more work and effort or provide fewer shortcuts or affordances.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      This paper contains what could be described as a "classic" approach towards evaluating a novel taste stimulus in an animal model, including standard behavioral tests (some with nerve transections), taste nerve physiology, and immunocytochemistry of the tongue. The stimulus being tested is ornithine, from a class of stimuli called "kokumi", which enhance other canonical tastes, essentially increasing the hedonic attributes of these other stimuli; the mechanism for ornithine detection is thought to be GPRC6A receptors expressed in taste cells. The authors showed evidence for this in an earlier paper with mice; this paper evaluates ornithine taste in a rat model.

      Strengths:

      The data show the effects of ornithine on taste: in two-bottle and briefer intake tests, adding ornithine results in a higher intake of most, but not all, stimuli tests. Bilateral nerve cuts or the addition of GPRC6A antagonists decrease this effect. Small effects of ornithine are shown in whole-nerve recordings.

      Weaknesses:

      The conclusion seems to be that the authors have found evidence for ornithine acting as a taste modifier through the GPRC6A receptor expressed on the anterior tongue. It is hard to separate their conclusions from the possibility that any effects are additive rather than modulatory. Animals did prefer ornithine to water when presented by itself. Additionally, the authors refer to evidence that ornithine is activating the T1R1-T1R3 amino acid taste receptor, possibly at higher concentrations than they use for most of the study, although this seems speculative. It is striking that the largest effects on taste are found with the other amino acid (umami) stimuli, leading to the possibility that these are largely synergistic effects taking place at the tas1r receptor heterodimer.

      We would like to thank Reviewer #1 for the valuable comments. Our basis for considering ornithine as a taste modifier stems from our observation that a low concentration of ornithine (1 mM), which does not elicit a preference on its own, enhances the preference for umami substances, sucrose, and soybean oil through the activation of the GPRC6A receptor. Notably, this receptor is not typically considered a taste receptor. The reviewer suggested that the enhancement of umami taste might be due to potentiation occurring at the TAS1R receptor heterodimer. However, we propose that a different mechanism may be at play, as an antagonist of GPRC6A almost completely abolished this enhancement. In the revised manuscript, we will endeavor to provide additional information on the role of ornithine as a taste modifier acting through the GPRC6A receptor.

      Reviewer #2 (Public review):

      Summary:

      The authors used rats to determine the receptor for a food-related perception (kokumi) that has been characterized in humans. They employ a combination of behavioral, electrophysiological, and immunohistochemical results to support their conclusion that ornithine-mediated kokumi effects are mediated by the GPRC6A receptor. They complemented the rat data with some human psychophysical data. I find the results intriguing, but believe that the authors overinterpret their data.

      Strengths:

      The authors examined a new and exciting taste enhancer (ornithine). They used a variety of experimental approaches in rats to document the impact of ornithine on taste preference and peripheral taste nerve recordings. Further, they provided evidence pointing to a potential receptor for ornithine.

      Weaknesses:

      The authors have not established that the rat is an appropriate model system for studying kokumi. Their measurements do not provide insight into any of the established effects of kokumi on human flavor perception. The small study on humans is difficult to compare to the rat study because the authors made completely different types of measurements. Thus, I think that the authors need to substantially scale back the scope of their interpretations. These weaknesses diminish the likely impact of the work on the field of flavor perception.

      We would like to thank Reviewer #2 for the valuable comments and suggestions. Regarding the question of whether the rat is an appropriate model system for studying kokumi, we have chosen this species for several reasons: it is readily available as a conventional experimental model for gustatory research; the calcium-sensing receptor (CaSR), known as the kokumi receptor, is expressed in taste bud cells; and prior research has demonstrated the use of rats in kokumi studies involving gamma Glu-Val-Gly (Yamamoto and Mizuta, Chem. Senses, 2022).

      We acknowledge that fundamentally different types of measurements were conducted in the human psychophysical study and the rat study. Kokumi can indeed be assessed and expressed in humans; however, we do not currently have the means to confirm that animals experience kokumi in the same way that humans do. Therefore, human studies are necessary to evaluate kokumi, a conceptual term denoting enhanced flavor, while animal studies are needed to explore the potential underlying mechanisms of kokumi. We believe that a combination of both human and animal studies is essential, as is the case with research on sugars. While sugars are known to elicit sweetness, it is unclear whether animals perceive sweetness identically to humans, even though they exhibit a strong preference for sugars. In the revised manuscript, we will incorporate additional information to address the comments raised by the reviewer. We will also carefully review and revise our previous statements to ensure accuracy and clarity.

      Reviewer #3 (Public review):

      Summary:

      In this study, the authors set out to investigate whether GPRC6A mediates kokumi taste initiated by the amino acid L-ornithine. They used Wistar rats, a standard laboratory strain, as the primary model and also performed an informative taste test in humans, in which miso soup was supplemented with various concentrations of L-ornithine. The findings are valuable and overall the evidence is solid. L-Ornithine should be considered to be a useful test substance in future studies of kokumi taste and the class C G protein-coupled receptor known as GPRC6A (C6A) along with its homolog, the calcium-sensing receptor (CaSR) should be considered candidate mediators of kokumi taste.

      Strengths:

      The overall experimental design is solid, based on two-bottle preference tests in rats. After determining the optimal concentration of L-ornithine (1 mM) in the presence of MSG, it was added to various tastants, including inosine 5'-monophosphate (IMP); monosodium glutamate (MSG); mono-potassium glutamate (MPG); Intralipos (a soybean oil emulsion); sucrose; sodium chloride (NaCl); citric acid; and quinine hydrochloride. Robust effects of ornithine were observed in the cases of IMP, MSG, MPG, and sucrose, and little or no effects were observed in the cases of sodium chloride, citric acid, and quinine HCl. The researchers then focused on the preference for ornithine-containing MSG solutions. The inclusion of the C6A inhibitors Calindol (0.3 mM but not 0.06 mM) or the gallate derivative EGCG (0.1 mM but not 0.03 mM) eliminated the preference for solutions that contained ornithine in addition to MSG. The researchers next performed transections of the chorda tympani nerves (with sham-operation controls) in anesthetized rats to identify the role of the chorda tympani branches of the facial nerves (cranial nerve VII) in the preference for ornithine-containing MSG solutions. This finding implicates the anterior half to two-thirds of the tongue in ornithine-induced kokumi taste. They then used electrical recordings from intact chorda tympani nerves in anesthetized rats to demonstrate that ornithine enhanced MSG-induced responses following the application of tastants to the anterior surface of the tongue. They went on to show that this enhanced response was insensitive to amiloride, selected to inhibit 'salt tastant' responses mediated by the epithelial Na+ channel, but was eliminated by Calindol. Finally, they performed immunohistochemistry on sections of rat tongue, demonstrating C6A-positive spindle-shaped cells in fungiform papillae whose distribution partially overlapped with that of the type-3 IP3 receptor (IP3R3), used as a marker of type II cells, but not with (i) gustducin, the G protein partner of Tas1 receptors (T1Rs), used as a marker of a subset of type II cells; or (ii) 5-HT (serotonin) and synaptosome-associated protein 25 kDa (SNAP-25), used as markers of type III cells.

      Weaknesses:

      The researchers undertook what turned out to be largely confirmatory studies in rats with respect to their previously published work on Ornithine and C6A in mice (Mizuta et al Nutrients 2021).

      The authors point out that animal models pose some difficulties of interpretation in studies of taste and raise the possibility in the Discussion that umami substances may enhance the taste response to ornithine (Line 271, Page 9). They miss an opportunity to outline the experimental results from the study that favor their preferred interpretation that ornithine is a taste enhancer rather than a tastant.

      At least two other receptors in addition to C6A might mediate taste responses to ornithine: (i) the CaSR, which binds and responds to multiple L-amino acids (Conigrave et al, PNAS 2000), and which has been previously reported to mediate kokumi taste (Ohsu et al., JBC 2010) as well as responses to Ornithine (Shin et al., Cell Signaling 2020); and (ii) T1R1/T1R3 heterodimers which also respond to L-amino acids and exhibit enhanced responses to IMP (Nelson et al., Nature 2001). While the experimental results as a whole favor the authors' interpretation that C6A mediates the Ornithine responses, they do not make clear either the nature of the 'receptor identification problem' in the Introduction or the way in which they approached that problem in the Results and Discussion sections. It would be helpful to show that a specific inhibitor of the CaSR failed to block the ornithine response. In addition, while they showed that C6A-positive cells were clearly distinct from gustducin-positive, and thus T1R-positive cells, they missed an opportunity to clearly differentiate C6A-expressing taste cells and CaSR-expressing taste cells in the rat tongue sections.

      It would have been helpful to include a positive control kokumi substance in the two-bottle preference experiment (e.g., one of the known gamma-glutamyl peptides such as gamma-glu-Val-Gly or glutathione), to compare the relative potencies of the control kokumi compound and Ornithine, and to compare the sensitivities of the two responses to C6A and CaSR inhibitors.

      The results demonstrate that enhancement of the chorda tympani nerve response to MSG occurs at substantially greater Ornithine concentrations (10 and 30 mM) than were required to observe differences in the two bottle preference experiments (1.0 mM; Figure 2). The discrepancy requires careful discussion and if necessary further experiments using the two-bottle preference format.

      We would like to thank Reviewer #3 for the valuable comments and helpful suggestions. We propose that ornithine has two stimulatory actions: one acting on GPRC6A, particularly at lower concentrations, and another on amino acid receptors such as T1R1/T1R3 at higher concentrations. Consequently, ornithine is not preferable at lower concentrations but becomes preferable at higher concentrations. For our study on kokumi, we used a low concentration (1 mM) of ornithine. The possibility mentioned in the Discussion that 'the umami substances may enhance the taste response to ornithine' is entirely speculative. We will reconsider including this description in the revised version. As the reviewer suggested, in addition to GPRC6A, ornithine may bind to CaSR and/or T1R1/T1R3 heterodimers. However, we believe that ornithine mainly binds to GPRC6A, as a specific inhibitor of this receptor almost completely abolished the enhanced response to umami substances, and our immunohistochemical study indicated that GPRC6A-expressing taste cells are distinct from CaSR-expressing taste cells (see Supplemental Fig. 3). We conducted essentially the same experiments using gamma-Glu-Val-Gly in Wistar rats (Yamamoto and Mizuta, Chem. Senses, 2022) and compared the results in the Discussion. The reviewer may have misunderstood the chorda tympani results: we added the same concentration (1 mM) used in the two-bottle preference test to MSG (Fig. 5-B). Fig. 5-A shows nerve responses to five concentrations of plain ornithine. In the revised manuscript, we will strive to provide more precise information reflecting the reviewer’s comments.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) The behavioral effects found with the CPRC6A antagonists are not entirely convincing, as the antagonist is seemingly just mixed up in the solution with the stimuli. There are no control experiments demonstrating that the antagonists do not have a taste themselves.

      We mixed the antagonists into both liquids used in the two-bottle preference test to eliminate any potential taste effects of the antagonists themselves. In the electrophysiological experiments, the antagonist was incorporated into the solution after confirming that it did not elicit any appreciable response in the taste nerve.

      (2) The effects of ornithine found with quinine did not have a satisfying explanation - if there is some taste cell-taste cell modulation that accounts for the taste enhancement, why is the quinine less aversive? Why is it not enhanced like the other compounds?

      The effects of ornithine on quinine responses remain difficult to explain. A previous study (Tokuyama et al., Chem Pharm Bull, 2006) proposed that ornithine prevents bitter substances from binding to bitter receptors, although this hypothesis lacks definitive evidence. In the present study, our findings suggest that the binding of quinine to bitter receptors is essential, as another agonist, gallate, also enhanced the preference for quinine, but this effect was abolished by EGCG, a GPRC6A antagonist (see Supplemental Fig. 2).

      (3) Unless I am missing something, there appears to be no quantitative analysis of the immunocytochemical data, just assertions.

      We have added quantitative analyses to the revised text, including the following sentences: "Approximately 11% of GPRC6A-positive cells overlapped with IP3R3 (9 double-positive cells/80 GPRC6A-positive cells), while approximately 8.3% of IP3R3-positive cells expressed GPRC6A (9 double-positive cells/109 IP3R3-positive cells). In addition, GPRC6A-positive cells were unlikely to colocalize with α-gustducin, another marker for a subset of type II cells, in single taste cells (0 double-positive cells/93 GPRC6A-positive cells). Regarding type III cell markers, GPRC6A-positive cells were unlikely to colocalize with 5-HT in single taste cells (0 double-positive cells/75 GPRC6A-positive cells)."

      (4) The hallmarks of Kokumi taste include descriptors such as "thickness", and "mouthfeel", which sound like potential somatosensory attributes. Perhaps the authors should consider this possibility for at least some of the effects found.

      The term kokumi, a Japanese word, refers to a phenomenon in which the flavor of complexly composed food is enhanced through certain processes, making them more delicious. To date, kokumi has been described using the representative terms thickness, mouthfulness, and continuity, originally introduced in the first paper on kokumi by Ueda et al. (1990). However, these terms are derived from Japanese and may not fully convey the nuances of the original language when translated into these simple English words. In particular, thickness is often interpreted as referring to physical properties such as viscosity or somatosensory sensations. Since kokumi inherently lacks somatosensory elements, this revised paper adopts alternative terms and explanations for the three components of kokumi to prevent misunderstanding and confusion.

      Therefore, to clarify that kokumi attributes are inherently gustatory, thickness is replaced with intensity of whole complex tastes (rich flavor with complex tastes), emphasizing the synergistic effects of a variety of tastes rather than the mere enhancement of a single flavor. Mouthfulness is clarified as not referring to mouthfeel (the tactile sensation a food gives in the mouth) but rather as spread of taste and flavor throughout the oral cavity, describing how the flavor fills the mouth. Continuity is replaced with persistence of taste (lingering flavor).

      (5) I don't think the human experiment (S1) belongs to the paper, even as a supplementary bit of data. It's only 17 subjects, they are all female, and we don't know anything about how they were selected, even though it states they are all students/staff at Kio. Were any of them lab members? Were they aware of the goals of the experiment? Could simply increasing the amount of solute in the soup make it seem thicker? This (sparse) data seems to have been shoehorned into the paper without enough detail/justification.

      Despite the reviewer’s suggestion, we would like to include the human experiment because the rationale of the present study is to confirm, through a human sensory test, that the kokumi of a complex solution (in this case, miso soup) is enhanced by the addition of ornithine. This is followed by basic animal experiments to investigate the underlying mechanisms. Therefore, this human study serves an important role.

      The total number of participants increased to 22 (19 women and three men) following an additional experiment with 5 new participants. New results have been shown in Supplemental Figure 1 with statistical analyses. The rewritten parts are as follows:

      We recruited 22 participants (19 women and three men, aged 21-28 years) from Kio University who were not affiliated with our laboratory, including students and staff members. All participants passed a screening test based on taste sensitivity. According to the responses obtained from a pre-experimental questionnaire, we confirmed that none of the participants had any sensory abnormalities, eating disorders, or mental disorders, or were taking any medications that may potentially affect their sense of taste. All participants were instructed not to eat or drink anything for 1 hour prior to the start of the experiment. We provided them with a detailed explanation of the experimental procedures, including safety measures and personal data protection, without revealing the specific goals of the study.

      (6) The introduction could be more concise - for example, when describing Kokumi stimuli such as ornithine and its possible receptors, the authors do not need to add the detail about how this stimulus was deduced from adding clams to the soup. Details like this can be reserved for the discussion.

      Thank you for this comment. We have tried to shorten the Introduction.

      (7) Line 86: awkward phrasing - this doesn't need to be a rhetorical question.

      We have deleted the sentence.

      (8) Supplementary Figure 1: The labels on the figure say "Miso soup in 1 mM Orn" when the Orn is dissolved into the soup.

Thank you for pointing out our mistake. We have changed the description accordingly, e.g., "1 mM Orn in miso soup".

      Reviewer #2 (Recommendations for the authors):

      Major concerns

      (1) The impact of "kokumi" taste ligands on food perception appears to be profound in humans. This observation is fascinating because it implies that molecules like ornithine impact a variety of flavor perceptions, some of which are non-gustatory in nature (e.g., spread, mouthfulness and harmony). What remains unclear is whether "kokumi" ligands produce analogous sensations in rodents. If they don't, then rodents are an inappropriate model system for studying the impact of kokumi on flavor perceptions. The authors fail to address this key issue, and uncritically assume that kokumi ligands produce sensations like thickness, mouthfulness, and continuity in rodents. For this reason, the authors' reference to GPRC6A as a kokumi receptor is inappropriate.

      Thank you very much for the valuable comments. The term kokumi refers to a phenomenon in which the flavor of complexly composed foods is enhanced through certain processes, making them more delicious. It is an important concept in the field of food science, which studies how to make prepared dishes more enjoyable. Kokumi is also considered a higher-order, profound cognitive function evaluated by humans who experience a wide variety of foods. However, it is unclear whether animals, particularly experimental animals, can perceive kokumi in the same way humans do.

      To date, kokumi has been described using the representative terms thickness, mouthfulness, and continuity, originally introduced in the first paper on kokumi by Ueda et al. (1990). However, these terms are derived from Japanese and may not fully convey the nuances of the original language when translated into these simple English words. In particular, thickness is often interpreted as referring to physical properties such as viscosity or somatosensory sensations. Since kokumi inherently lacks somatosensory elements, this revised paper adopts alternative terms and explanations for the three components of kokumi to prevent misunderstanding and confusion.

      Therefore, to clarify that kokumi attributes are inherently gustatory, thickness is replaced with intensity of whole complex tastes (rich flavor with complex tastes), emphasizing the synergistic effects of a variety of tastes rather than the mere enhancement of a single flavor. Mouthfulness is clarified as not referring to mouthfeel (the tactile sensation a food gives in the mouth) but rather as spread of taste and flavor throughout the oral cavity, describing how the flavor fills the mouth. Continuity is replaced with persistence of taste (lingering flavor).

      Rodents are thought to possess basic taste functions similar to humans, such as the expression of taste receptors, including kokumi receptors, in taste cells. Regardless of whether rodents can perceive kokumi, findings from studies on rodents may provide insights into aspects of the kokumi concept as experienced by humans.

Indeed, the results of this study indicate that ornithine enhances umami, sweetness, fat taste, and saltiness, leading to the enhancement of complex flavors, referred to as the intensity of whole complex tastes. The activation of various taste cells, resulting in the enhancement of multiple tastes, may contribute to the sensation of flavors spreading throughout the oral cavity. Furthermore, the strong enhancement of MSG and MPG suggests that glutamate contributes to the mouthfulness and persistence of taste characteristic of kokumi.

      (2) A related concern is that the authors did not make any measurements that model kokumi sensations documented in the literature. For example, they would need to develop behavioral/electrophysiological measurements that reflect the known effects of kokumi ligands on flavor perception (i.e., increases in intensity, spread, continuity, richness, harmony, and punch). For example, ornithine is thought to produce more "punch" (i.e., a more rapid rise in intensity). This could be manifested as a more rapid rise in peripheral taste response or a more rapid fMRI response in the taste cortex. Alternatively, ornithine is thought to increase "continuity" (i.e., make the taste response more persistent). This response would presumably be manifested as a peripheral taste response that adapts more slowly or a more persistent fMRI response. As it stands, the authors have documented that ornithine increases (i) the preference of rats for some chemical stimuli, but not others; and (ii) the response of the CT nerve to some but not all taste stimuli.

In animal experiments, it is challenging to examine each attribute of kokumi. The enhancement of complex tastes can be investigated through behavioral experiments and neural activity recordings. However, phenomena such as spread or harmony, which arise from profound human judgments, are difficult to validate in animal studies.

While it was possible to examine persistence through neural responses to tastants, all stimuli were rinsed off 30 seconds after the onset of stimulation, so the exact duration of persistence was not investigated. However, since the MSG response was enhanced approximately 1.5-fold by the addition of ornithine, it is likely that the duration was also prolonged.

      Regarding punch, no differences were observed in the neural responses when ornithine was added, likely because the phasic response already had a rapid onset.

In the context of fMRI studies, there has been a report that adding glutathione to mixtures of umami and salt solutions increases responses (Goto et al., Chem Senses, 2016). However, research specifically examining the attributes of kokumi has not yet been reported.

      (3) The quality of the SNAP-25 immunohistochemistry is poor (see Figure 7D), with lots of seemingly nonspecific staining in and outside the taste bud.

The quality of the SNAP-25 immunohistochemistry is not poor. SNAP-25 is known to label not only type III cells but also the dense network of intragemmal nerve fibers (Tizzano et al., Immunohistochemical Analysis of Human Vallate Taste Buds. Chem Senses. 40:655-60, 2015). Therefore, the seemingly nonspecific staining reflects the intense SNAP-25 immunoreactivity of these nerve fibers.

      (4) The authors need to drastically scale back the scope of their conclusions. What they can say is that ornithine appears to enhance the taste responses of rats to a variety of taste stimuli and that this effect appears to be mediated by the GPRC6A receptor. They cannot use their data to address kokumi effects in humans, as they have not attempted to model any of these effects. Given the known problems with pharmacological blocking agents (e.g., nonspecificity), the authors would significantly strengthen their case if they could generate similar results in a GPRC6A knockout mouse.

      Our research approach begins with confirming in humans that the addition of ornithine to complex foods (such as miso soup) induces kokumi. Based on this confirmation, we conduct fundamental studies using animal models to investigate the peripheral taste mechanisms underlying the expression of kokumi.

      It is possible that the key to kokumi expression lies in the enhancement of desirable tastes (particularly umami) and the suppression of unpleasant tastes. Moving forward, we will deepen our fundamental research on the action of ornithine mediated through GPRC6A, including studies using knockout mice.

      (5) The introduction is too long. Much of the discussion of kokumi perception in humans should either be removed or shortened considerably.

      Following the reviewer’s suggestion, the introduction has been shortened.

      (6) I recommend that the authors break up the Methods and Results sections into different experiments. This would enable the authors to provide separate rationales for each procedure. For instance, the authors conducted a variety of different behavioral procedures (e.g., long- and short-term preference tests, and preference tests with and without GPRC6A receptor antagonists).

      Rather than following the reviewer’s suggestion, we have added subheadings to describe the purpose of each experiment. This approach would help readers better understand the experimental flow, as each experiment is relatively straightforward.

      (7) The inclusion of the human data is odd for two reasons. First, the measurements used to assess the impact of ornithine on flavor perception in humans were totally different than those used in rats. This makes it impossible to compare the human and rat datasets. Second, the human study was rather limited in scope, had small effect sizes, and had a lot of individual variation. For these reasons, the human data are not terribly helpful. I recommend that the authors remove the human data from this paper, and publish them as part of a more extensive study on humans.

      Despite the reviewer’s suggestion, we would like to include the human experiment because the rationale of the present study is to confirm, through a human sensory test, that the kokumi of a complex solution (in this case, miso soup) is enhanced by the addition of ornithine. This is followed by basic animal experiments to investigate the underlying mechanisms. Therefore, this human study serves an important role. The considerable variation in the scores suggests that evaluating the three kokumi attributes is challenging and likely influenced by differences in judgment criteria among participants.

The total number of participants increased to 22 (19 women and three men) following an additional experiment with five new participants. The new results, with statistical analyses, are shown in Supplemental Figure 1. The rewritten parts are as follows:

      We recruited 22 participants (19 women and three men, aged 21-28 years) from Kio University who were not affiliated with our laboratory, including students and staff members. All participants passed a screening test based on taste sensitivity. According to the responses obtained from a pre-experimental questionnaire, we confirmed that none of the participants had any sensory abnormalities, eating disorders, or mental disorders, or were taking any medications that may potentially affect their sense of taste. All participants were instructed not to eat or drink anything for 1 hour prior to the start of the experiment. We provided them with a detailed explanation of the experimental procedures, including safety measures and personal data protection, without revealing the specific goals of the study.

      (8) While the use of English is generally good, there are many instances where the English is a bit awkward. I recommend that the authors ask a native English speaker to edit the text.

      Thank you for this comment. The text has been edited by a native English speaker.

      Minor concerns

      (1) Lines 13-14: The authors state that "the concept of 'kokumi' has garnered significant attention in gustatory physiology and food science." This is an exaggeration. Kokumi has generated considerable interest in food science but has yet to generate much interest in gustatory physiology.

We have rewritten this part: "The concept of "kokumi" has generated considerable interest in food science, but kokumi has not been well studied in gustatory physiology."

      (2) Line 20: The use of "specific taste" is unclear in this context. The authors indicate (in Figure 5A) that 1 mM ornithine generates a CT nerve response. They also reveal (in Figure 1A) that rats do not prefer 1 mM ornithine over water. The results from a preference test do not provide insight into whether a solution can be tasted; they merely demonstrate a lack of preference for that solution. Based on these data, the authors cannot infer that 1 mM ornithine cannot be tasted.

      We agree with the reviewer’s comment. Ornithine at 1 mM concentration may have a weak taste because this solution elicited a small neural response (Fig. 5-A). We have rewritten the text: “… at a concentration without preference for this solution.”

      (3) Line 44: Sensory information from foods enters the oral and the nasal cavity.

      The nasal cavity has been added.

      (5) Lines 59: The terms "thickness", "mouthfulness" and "continuity" are not intuitive in English, and may reflect, at least in part, a failure in translation. The word thickness implies a tactile sensation (e.g., owing to high viscosity), but the authors use it to indicate a flavor that is more intense and onsets more quickly. The word mouthfulness is supposed to indicate that a flavor is experienced throughout the oral cavity. The problem here is that this happens with all tastants, independent of the presence of substances like ornithine. Indeed, taste buds occur in a limited portion of the oral epithelium, but we nevertheless experience tastes throughout the oral cavity, owing to a phenomenon called tactile referral (see the following reference: Todrank and Bartoshuk, 1991, A taste illusion: taste sensation localized by touch" Physiology & Behavior 50:1027-1031). The word continuity does not imply that the taste is long-lasting or persistent.

      These three attributes were originally introduced by Ueda et al. (1990), who translated Japanese terms describing the profound characteristics of kokumi, which are deeply rooted in Japanese culinary culture. However, these simply translated terms have caused global misunderstanding and confusion, because they sound like somatosensory rather than gustatory descriptions. Therefore, to clarify that kokumi attributes are inherently gustatory, in the revised version we use the terms “intensity of whole complex tastes (rich flavor with complex tastes)” instead of thickness, “mouthfulness (spread of taste and flavor throughout the oral cavity),” and “persistence of taste (lingering flavor)” instead of continuity.

The results of this study indicate that ornithine enhances umami, sweetness, fat taste, and saltiness, leading to the enhancement of complex flavors, referred to as the intensity of whole complex tastes. The activation of various taste cells, resulting in the enhancement of multiple tastes, may contribute to the sensation of flavors spreading throughout the oral cavity. Furthermore, the strong enhancement of MSG and MPG suggests that glutamate contributes to the mouthfulness and persistence of taste characteristic of kokumi.

      (6) Figure legends: The authors provide results of statistical comparisons in several of the figures. They need to explain what statistical procedures were performed. As it stands, it is impossible to interpret the asterisks provided.

      We have explained statistical procedures in each Figure legend.

      (7) I did not see any reference to the sources of funding or any mention of potential conflicts of interest.

      We have added the following information:

Funding: JSPS KAKENHI Grant Numbers JP17K00935 (to TY) and JP22K11803 (to KU).

      Declaration of interests: The authors declare that they have no competing interests.

      Reviewer #3 (Recommendations for the authors):

      (1) I suggest that the authors increase their level of interest in glutathione and gamma-glutamyl peptides. This might include an appropriate gamma-glutamyl control substance in the two-bottle preference study (see Public Review). It might also include more careful attention to the work that identified glutathione as an activator of the CaSR (Wang et al., JBC 2006) and the nature of its binding site on the CaSR which overlaps with its site for L-amino acids (Broadhead et al., JBC 2011). This latter article also identified S-methyl glutathione, in which the free-SH group is blocked, as a high-potency activator of the CaSR. It would be expected to show comparable potency to gamma-glu-Val-Gly in assays of kokumi taste.

      We have appropriately referenced glutathione and gamma-Glu-Val-Gly, potent agonists of CaSR, where necessary. In our previous study (Yamamoto and Mizuta, Chem Senses, 2022), we examined the additive effects of these substances on basic taste stimuli in rodents, and the results were compared in greater detail with those obtained from the addition of ornithine in the present study. We have also discussed the potential binding of ornithine to other receptors, including CaSR and T1R1/T1R3 heterodimers.

      (2) Figures:

      -None of the figures were labelled with their Figure numbers. I have inferred the Figure numbers from the legends and their positions in the pdf.

      We are sorry for this inconvenience.

- The labelling of Figure 1 and Figure 2 is problematic. In Figure 1 it should be made clear that the horizontal axes refer to the Ornithine concentration. In Figure 2 it should be made clear that the horizontal axes refer to the tastant concentrations (MSG, IMP, etc) and that the Ornithine concentrations were fixed at either zero or 1.0 mM.

      We are sorry for the lack of information about the horizontal axes. We have explained the horizontal axes in figure legends in Figs. 1 and 2. The labelling of both figures has also been modified to make this clear.

      - Figure 3B: 'Control' should appear at the top of this panel since the panels that follow all refer to it.

      Following the reviewer’s suggestion, we have added ‘Control’ at the top of Figure 3B.

      - Figure 5A. Provide a label for the test substance, presumably Ornithine.

      Yes, we have added ‘Ornithine’.

      - Figure 7 would be strengthened by the inclusion of immunohistochemistry analyses of the CaSR.

We apologize for not performing immunohistochemistry for the CaSR; a previous study had already analyzed CaSR expression on taste cells in rats in detail. We have instead analyzed the co-expression of GPRC6A and CaSR (see Supplemental Figure 3).

      (3) Other Matters:

      - Line 38: list the five basic taste modalities here.

      Yes, we have included the five basic taste modalities here.

      - Line 107: 'even if ... kokumi ... is less developed in rodents' - if there is evidence that kokumi is less developed in rodents it should be cited here.

      We cannot cite any references here because no studies have compared the perception of kokumi between humans and rodents.

      - Line 308: 'recently we conducted experiments in rats using gallate ...' - the authors appear to imply that they performed the research in Reference 43, however, I was unable to find an overlap between the two lists of authors.

We did not perform a study similar to the research in Reference 43 (Reference 40 in the revised paper). Rather, following the finding of Reference 43 that gallate is a GPRC6A agonist, we were interested in conducting similar behavioral experiments using gallate instead of ornithine.

      The sentences have been rewritten to avoid misunderstanding.

      - Line 506: the sections are said to be 20 mm thick - should this read 20 micrometers?

Thank you. We have changed this to 20 micrometers.

Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      Manuscript number: RC-2024-02767

      Corresponding author(s): Kazuaki Maruyama

      1. General Statements

      Response to Reviewer #1:

      We sincerely appreciate your thoughtful review of our manuscript. Our primary objective is to elucidate the pathogenic mechanisms underlying congenital low-flow vascular malformations, thereby informing the development of novel therapeutic strategies. We recognize that, given the dual nature of our study encompassing both fundamental and clinical science, the presentation may have appeared somewhat convoluted. In response, we have revised the manuscript to clarify these points and have reformatted the text corresponding to your comments—originally presented as a single continuous block—into defined, numbered sections to enhance readability.

      Response to Reviewer #2:

      We are deeply grateful for the time and effort you have dedicated to reviewing our manuscript despite your busy schedule. Your comments have been particularly insightful, especially regarding the section on the preclinical mouse model. In light of your suggestions, we have conducted additional experiments and revised the manuscript accordingly. We trust that these modifications address your concerns and contribute to the overall improvement of our work.

      The revised sections have been highlighted in red in the text.

      2. Point-by-point description of the revisions

Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      The authors investigate the pathogenesis of congenital vascular malformations by overexpressing the Pik3caH1047R mutation under the R26 locus in different cell populations and developmental stages using various Cre and CreERT2 lines, including endothelial-specific and different mesoderm precursor lines. The authors provide a thorough characterization of the vascular malformation phenotypes across models. Specifically, they claim that expressing Pik3caH1047R in the cardiopharyngeal mesoderm (CPM) precursors results in vascular abnormalities localized to the head and neck region of the embryo. The study also includes scRNAseq data analyses, including from previously published data and new data generated by the authors. Trajectory inference analysis of a previous scRNA-seq dataset revealed that Isl1+ mesodermal cells can differentiate into ETV2+ cells, directly giving rise to Prox1+ lymphatic endothelial cell progenitors, bypassing the venous stage. Single-cell RNA sequencing of their CPM model and other in vitro datasets show that Pik3caH1047R upregulates VEGF-A via HIF-1α-mediated hypoxia signaling, findings further corroborated in human samples. Finally, preclinical studies in adult mice confirm that pharmacological inhibition of HIF-1α and VEGF-A reduces the number and size of mutant vessels.

      Major comments

1. While the study provides a nice characterization of Pik3caH1047R-derived vascular phenotypes induced by expressing this mutation in different cells, the main message of the study is unclear. What is the main question that the authors want to address with this manuscript?

      Response:

      Our main message is as follows:

1. __Elucidation of pathogenesis based on developmental cellular origins:__ This study focuses on using embryonic models to elucidate the mechanism by which the Pik3caH1047R mutation induces low-flow vascular malformations. Specifically, we demonstrate that expression of Pik3caH1047R in cells derived from the cardiopharyngeal mesoderm (CPM) induces vascular abnormalities that are confined to the head and neck region. Furthermore, vascular malformations originating from another cell type—for example, Pax3+ cells—are confined to the lower body. This suggests that the embryonic origin of endothelial cells may determine the anatomical location of vascular malformations, with important implications for clinical severity and treatment strategies.

2. __Molecular signaling pathways and targeted therapeutic approaches:__

      Through single-cell RNA sequencing, we have identified hypoxia signaling—particularly via HIF-1α and VEGF-A—as central to the pathogenesis of these malformations. Moreover, preclinical mouse model experiments demonstrate that pharmacological inhibition of HIF-1α and VEGF-A significantly reduces lesion formation, supporting the potential of targeting these pathways as a novel therapeutic strategy.

In summary, our main message is that by elucidating the developmental and molecular mechanisms underlying Pik3caH1047R-driven low-flow vascular malformations—especially the pivotal role of hypoxia signaling via HIF-1α/VEGF-A—we provide a strong rationale for novel therapeutic strategies aimed at these challenging conditions.

      To further clarify these points, we have revised the manuscript by incorporating additional experiments and reorganizing the text into clearly defined sections.

2. The precursor type from which these lesions appear, whether venous and lymphatic malformations emerge independently, and when and where this phenotype appears?

      Response:

In Tie2-Cre; R26R-Pik3caH1047R mutant embryos, no prominent phenotype was observed at E9.5 or E11.5. Vascular (venous) malformations are evident from E12.5, whereas lymphatic malformations become prominent from E13.5. We propose that the emergence of the lymphatic phenotype after E13.5 is due to the fact that lymphatic vessels, particularly in the upper body, begin forming a luminal structure mainly from E13.5 onward (Maruyama et al., 2022). For further details, please refer to the explanation provided in Question 6.

      To address this, we have newly included Supplemental Figure 2 and revised the Results section as follows:

      Whereas clear phenotypes were evident at E12.5 and E13.5, no pronounced external abnormalities were observed at E9.5 or E11.5 (Supplemental Figure 2A–B). Similarly, histological examination revealed no significant differences in the short-axis diameter of the PECAM+ CV or in the number of Prox1+ LECs surrounding the CV between control and mutant embryos at E11.5 (Supplemental Figure 2C–F). We also assessed Tie2-Cre; R26R-Pik3caH1047R mutant embryos at E14.0 from five pregnant mice. Only two embryos were alive at this stage, and both showed severe edema and hemorrhaging, indicating they were nearly moribund. These observations suggest that the critical point for survival of these mutant embryos lies between E13.5 and E14.0 (Supplemental Figure 2G). (Page 5, lines 157–165)

3. The manuscript needs some work to make the sections more cohesive and to structure better the main findings and the rationale for choosing the models. Authors should explain better when and where the pathogenic phenotypes refer to blood and/or lymphatic malformations. From the quantifications provided in Figure 1, Pik3caH1047R leads to different phenotypes in blood and lymphatic vessels. These are larger diameters with no difference in the number of blood vessels (are you quantifying all pecam1 positive? Vein, arteries, capillaries?), and an increase in the number of lymphatic vessels. Please clarify and discuss.

      Response:

      We interpreted this as a question regarding which vessels were quantified. The answer to this question is provided in Question 4.

4. Which vessel types are considered for the quantifications shown in Fig. 1I, M, Q? All Pecam1+ vessels, including lymphatic, vein, capillaries and arteries, or which ones? Provide clarifications.

__Response:__

      Vessel types were characterized based on anatomical and histological features. For the anatomical details, we referred to The Atlas of Mouse Development by M.H. Kaufman.

      This aspect is described in the Methods section, as follows:

Veins and arteries were classified based on anatomical criteria. Vessels demonstrating continuity with a clearly identifiable vein (e.g., the anterior cardinal vein) in serial sections were defined as veins. In contrast, the aorta and pulmonary artery, each exhibiting a distinct wall structure indicative of a direct connection to the heart, were designated as arteries. Lymphatic vessels were identified based on the combined expression of Prox1, VEGFR3, and PECAM, along with the developmental stage, morphology, and anatomical location as described in our previous studies (Maruyama et al., 2019, 2021, 2022). PECAM+ vessels that lacked a definitive wall structure, did not express lymphatic markers, or did not exhibit clearly identifiable continuity necessary for classification as veins or capillaries were collectively designated as blood vessels or vasculatures. (Page 16, lines 530-539)

      Regarding Figure 1I:

      In the tongue and mandible, the facial vein—which branches from the anterior cardinal vein—is dilated, and its continuity with the venous system is confirmed. In contrast, Figure 1J shows the number of PECAM+ vasculatures; however, for smaller vessels, continuity is not always demonstrable, so these are designated as vasculatures according to the criteria.

      Regarding Figures 1M and N:

      In the liver, the dilated vessels are classified as veins because they exhibit continuity with the inferior vena cava. Even in the control group, the central veins tend to have relatively large diameters. Therefore, we compared the average area and quantified the number of abnormal central veins—defined as those contiguous with a vein and exceeding a specified area.

      Regarding Figures 1Q and R:

      Cerebral vessels are classified as veins due to their continuity with the common cardinal and jugular veins. However, as these vessels extend into the periphery, this continuity becomes less distinct, and they are consequently designated as blood vessels lacking Prox1 expression.

5. The authors propose that the CPM model results in localized head and neck vascular malformations. However, I am not convinced. The images supporting the neck defects are evident, but it is unclear whether there are phenotypes in the head.

      Response:

      Perhaps the discrepancy arises from a terminological issue. According to the WHO Classification of Tumours, commonly used in clinical settings, the term "Head and Neck" refers to the facial and cervical regions (including the oral cavity, larynx, pharynx, salivary glands, nasal cavity, etc.) and excludes the central nervous system. The inclusion of the brain in Figure 1O-R may have led to some confusion. We included the brain because cerebral cavernous malformations are classified as venous malformations, and thus serve as an example of common sites for venous malformations in humans. To clarify this point, we have made slight revisions to the first part of the Introduction, as follows:

They frequently manifest in the head and neck region—here defined as the orofacial and cervical areas, excluding the brain. (Page 2, lines 52-53)

6. Why are half of the experiments with the Tie2-Cre model conducted at E12.5 (e.g., validation of recombination, signaling, proliferation) and the others at E13.5? It becomes confusing for the reader why the authors start the results section with E13.5 and then study E12.5.

      Response:

      This is also related to the previous question (Question 4). We decided to include extensive anatomical information in a single figure. In Supplemental Figure 1, sagittal sections at E12.5 were used so that the pulmonary artery, aorta, and dilated common cardinal vein could be visualized within one sample. This allowed us to demonstrate that the Pik3caH1047R mutation does not affect arteries by contrasting them with the dilated veins. At E13.5, in addition to the dilation observed at E12.5, the common cardinal vein becomes markedly dilated and compresses the surrounding structures. Capturing both veins and arteries simultaneously would require multiple images, which could potentially confuse the reader. Moreover, lymphatic and other organ phenotypes (e.g., in the liver) are more prominent at E13.5. Therefore, we selectively employed both E12.5 and E13.5 stages to suit our specific objectives.

      The quantifications provided do not clarify what the "n" represents or how many embryos or litters were analyzed. 

      Response:

      Thank you for your feedback. We have now incorporated the sample size (n) directly into the graphs and figure legends.

      Blasio et al. (2018) and Hare et al. (2015) reported that Pik3caH1047R with Tie2-Cre embryos die before E10.5. How do the authors explain the increase in survival here? Were embryos at E13.5 alive? What was the Mendelian ratio observed by the authors? Please provide this information and discuss this point.

      Response:

      Two types of Tie2-Cre lines are widely used worldwide. The mouse line employed by Blasio et al. (2018) differs from that used in our study (their manuscript did not specify whether the background was B6 or a mixed strain). In contrast, although Hare et al. (2015) used the same mouse line as we did, they maintained a C57BL/6 background. We selected a mixed background of B6 and ICR, as we believe that a heterogeneous genetic background more accurately reflects the diversity of human pathology. We examined five pregnant females, which yielded approximately 30 embryos, of which only two survived until E14.0. Based on these observations, we consider E13.5 to be the appropriate survival limit (see Supplemental Figure 2G for additional details). In our breeding strategy, mice in the Tie2-Cre or Tie2-Cre; R26R-eYFP line were maintained as heterozygotes for Tie2-Cre and homozygotes for R26R-eYFP, whereas those carrying the R26R-Pik3caH1047R allele were homozygous. This approach produced control (Cre(-)) and heterozygous offspring in an expected 1:1 ratio at all examined stages: E9.5 (mutant n = 4, control n = 4 from two pregnant females), E11.5 (mutant n = 8, control n = 8 from two pregnant females), E12.5 (mutant n = 4, control n = 4 from two pregnant females), and E13.5 (mutant n = 5, control n = 5 from two pregnant females), with no deviation from the anticipated Mendelian ratio.

      Regarding this point, we have described it in the Results section as follows:

      Whereas clear phenotypes were evident at E12.5 and E13.5, no pronounced external abnormalities were observed at E9.5 or E11.5 (Supplemental Figure 2A–B). Similarly, histological examination revealed no significant differences in the short-axis diameter of the PECAM+ CV or in the number of Prox1+ LECs surrounding the CV between control and mutant embryos at E11.5 (Supplemental Figure 2C–F). We also assessed Tie2-Cre; R26R-Pik3caH1047R mutant embryos at E14.0 from five pregnant mice. Only two embryos were alive at this stage, and both showed severe edema and hemorrhaging, indicating they were nearly moribund. These observations suggest that the critical point for survival of these mutant embryos lies between E13.5 and E14.0 (Supplemental Figure 2G). (Page 5, lines 157-165)

      Please explain the rationale for using the Cdh5-CreERT2. It is likely due to the lethality observed with Tie2Cre, but this was not mentioned.

      Response:

      Thank you very much for your comment. As mentioned above, nearly all Tie2‐Cre;Pik3caH1047R embryos fail to survive past E14.0.

      The lethality observed with Tie2‐Cre mice is described as follows:

      We also assessed Tie2-Cre; R26R-Pik3caH1047R mutant embryos at E14.0 from five pregnant mice. Only two embryos were alive at this stage, and both showed severe edema and hemorrhaging, indicating they were nearly moribund. These observations suggest that the critical point for survival of these mutant embryos lies between E13.5 and E14.0 (Supplemental Figure 2G). (Page 5, lines 161-165)

      The rationale for using CDH5-CreERT2 mice is described as follows:

      To investigate whether the resulting human disease subtype (e.g., lesions confined to the head and neck region) is determined by the specific embryonic stage at which Pik3caH1047R is expressed, we crossed tamoxifen-inducible, pan-endothelial CDH5-CreERT2 mice with R26R-Pik3caH1047R mice and analyzed the embryos at E16.5 or E17.5. (Page 5, lines 169-172)

      Why were tamoxifen injections done at various time points (E9.5, E12.5, E15.5)? Please clarify the reasoning behind administering tamoxifen at these specific times. Explaining the rationale will help the reader follow the experimental design more easily. Additionally, including an initial diagram summarizing all the strategies to guide the reader from the beginning would be helpful.

      Response:

      Martinez‐Corral et al. (Nat. Commun., 2020) focused on lymphatic malformations, arguing that the timing of tamoxifen administration during the embryonic period determines the anatomical features of these lesions. They stated, “The majority of lesions appeared as large isolated cysts that were localized mainly to the cervical, and less frequently to the sacral region of the skin (Figure 2)”. Although not stated definitively, their data suggest that early embryonic tamoxifen administration results in the formation of large‐caliber lymphatic vessels with region‐specific distribution in the cervical skin (Figure 2C, Supplemental Figure 2). This description likely reflects an intention to model human vascular malformations, implying that the anatomical characteristics of these malformations are influenced by the developmental stage at which the Pik3caH1047R somatic mutation occurs.

      Inspired by these findings, we conducted experiments to determine whether altering the timing of tamoxifen administration would yield region-specific anatomical patterns in vascular malformation development. However, our results indicate that changing the timing of tamoxifen administration does not lead to an anatomical bias similar to that observed in human vascular malformations. Instead, we propose that the embryological cellular origin plays a more significant role in the formation of these human pathologies.

      Regarding this section, we have slightly revised the introductory part of the Figure 2 explanation as follows:

      To investigate whether the resulting human disease subtype (e.g., lesions confined to the head and neck region) is determined by the specific embryonic stage at which Pik3caH1047R is expressed, we crossed tamoxifen-inducible, pan-endothelial CDH5-CreERT2 mice with R26R-Pik3caH1047R mice and analyzed the embryos at E16.5 or E17.5. (Page 5, lines 169-172)

      Additionally, we have added a schematic diagram of the tamoxifen administration schedule at the beginning of Figure 2 and Supplemental Figure 3.

      Why do you use the Isl1-Cre constitutive line (instead of the CreERT2)? The former does not allow control of the timing of recombination (targeting specifically your population of interest) and loses the ability to trace the mutant cell behaviors over time. Is the constitutive expression of Pik3caH1047R in Isl1+ cells lethal at any embryonic time, or do the animals survive into adulthood? When you later use the Isl1-CreERT2 line, why do you induce recombination specifically at E8.5? It would be helpful for the reader to have an explanation for this choice, along with a reference to your previous paper.

      Response:

      Thank you for your comments. We did attempt the same experiments using Isl1-CreERT2 under various conditions. However, administering tamoxifen earlier than E8.5 invariably caused embryonic lethality, likely due to both Pik3ca activity and tamoxifen toxicity, leaving no embryos for analysis. In our previous study, repeated attempts from E6.5 to E16.5 resulted in only two surviving embryos (Maruyama et al., eLife, 2022, Supplemental Figure 3). We also failed to recover any live embryos with tamoxifen administration at E7.5.

      Even reducing the tamoxifen dose to one-fifth did not succeed when given before E8.5. Although E8.5 administration was feasible, the observed phenotype remained mild, and no phenotype was detected at E9.5, E11.5, E12.5, or later stages. These findings align with our earlier observations that moving tamoxifen injection from E8.5 to E9.5 markedly diminishes the Isl1+ contribution to the endothelial lineage.

      Furthermore, Supplemental Figures 5 and 6 suggest that a decrease in Isl1 mRNA, which occurs as early as E8.0–E8.25, triggers the shift toward endothelial differentiation. Considering these data and the mild phenotype at E8.5, earlier administration would be ideal for impacting Isl1+ cell fate. However, technical constraints prevented us from doing so, leading us to utilize the constitutive Isl1-Cre line instead.

      This section was already included in the Discussion; however, for clarity, we have revised it as follows:

      Given that Isl1 expression disappears at a very early stage and contributes to endothelial differentiation, experiments using Isl1-Cre or Isl1-CreERT2 mice cannot clearly distinguish between LMs, VMs, and capillary malformations. In other words, Isl1+ cells likely label a common progenitor population for multiple endothelial subtypes. Consequently, the diverse vascular malformations in the head and neck—including mixed venous-lymphatic and capillary malformations, as well as the macro- and microcystic subtypes of LMs—cannot be fully accounted for by this study alone. (Page 13, lines 419-425)

      What is the purpose of using this battery of CreERT2 lines (for example, the Myf5-CreERT2)?

      Response:

      The head and neck mesoderm arises primarily from the cardiopharyngeal mesoderm and the cranial paraxial mesoderm. Myf5-CreERT2 labels the cranial paraxial mesoderm in the facial region, which gives rise to facial skeletal muscles. Stone et al. (Dev Cell, 2019) reported that a subset of this lineage contributes to head and neck lymphatic vessels, whereas our study (Maruyama et al., eLife, 2022) found no such contribution—an ongoing point of debate. Nevertheless, expressing Pik3caH1047R in this lineage did not induce any vascular malformations.

      Pax3-CreERT2 mice label Pax3⁺ paraxial mesoderm (including cranial paraxial mesoderm), which reportedly contributes to the common cardinal vein and subsequently forms trunk lymphatics (Stone & Stainier, 2019; Lupu et al., 2022). When Pik3caH1047R was expressed in Pax3⁺ cells, we observed abnormal vasculature in the lower trunk and around the vertebrae, consistent with that report.

      Synthesizing these observations with our results from Isl1-Cre, Isl1-CreERT2, and Mef2c-AHF-Cre lines, we propose that Pik3caH1047R mutations within the cardiopharyngeal mesoderm underlie the clinically significant vascular malformations seen in the head and neck region.

      We have also incorporated the following explanation into the main text.

      Regarding the Pax3-CreERT2:

      The head and neck mesoderm arises primarily from the cardiopharyngeal mesoderm and the cranial paraxial mesoderm. In Pax3-CreERT2; R26R-Pik3caH1047R embryos, Pax3+ paraxial mesoderm (including cranial paraxial mesoderm) is labeled; this lineage reportedly contributes to the common cardinal vein and subsequently forms trunk lymphatics (Lupu et al., 2022). (Page 8, lines 247-250)

      Regarding the Myf5-CreERT2:

      In Myf5-CreERT2; R26R-tdTomato mice—which label the cranial paraxial mesoderm, particularly muscle satellite cells—crossed with R26R-Pik3caH1047R, tamoxifen was administered to pregnant mice at E9.5. (Page 8, lines 255-257)

      I find the scRNAseq data in Fig S4 and S5 results very interesting, although I am unsure how they fit with the rest of the story. In principle, a subset of Isl1+ cardiopharyngeal mesoderm (CPM) derivatives into lymphatic endothelial cells was already demonstrated in a previous publication from the group. What is the novelty and purpose here?

      Response:

      This also addresses Question 11. Our aim in using the Isl1⁺ lineage was to determine the extent of analysis possible with this experimental system. Through reanalysis, we found that the downregulation of Isl1 triggers a switch toward endothelial cell differentiation, with this cell fate decision occurring at a very early embryonic stage. Consequently, our single‐cell analysis supports the conclusion that, regardless of the Isl1-CreERT2 line used or the timing of tamoxifen administration, it is challenging to precisely recapitulate the fine clinical phenotypes observed in humans (e.g., lymphatic or venous malformations) with this experimental system. We believe that this single‐cell analysis provides a theoretical basis for the notion that our Isl1-Cre-based developmental model can only generate a mixed phenotype of vascular and lymphatic malformations.

      This section is explained in a similar manner in the revised Discussion for Question 11 as follows:

      Given that Isl1 expression disappears at a very early stage and contributes to endothelial differentiation, experiments using Isl1-Cre or Isl1-CreERT2 mice cannot clearly distinguish between LMs, VMs, and capillary malformations. In other words, Isl1+ cells likely label a common progenitor population for multiple endothelial subtypes. Consequently, the diverse vascular malformations in the head and neck—including mixed venous-lymphatic and capillary malformations, as well as the macro- and microcystic subtypes of LMs—cannot be fully accounted for by this study alone. (Page 13, lines 419-425)

      Why in Fig. 4 ECs were not subclustered for further analysis (as in Fig. S4,5)? This is a missed opportunity to understand the pathogenic phenotypes.

      Response:

      Thank you for your question. We performed sub-clustering analysis, particularly focusing on why no phenotype is observed in arteries, as we believed this approach could provide molecular-level insights. Accordingly, we conducted the analysis presented in Figure 1 for Reviewer 1.





      Figure legend for Figure 1 for Reviewer 1. The number of endothelial cells was insufficient, making subclustering ineffective.

      (Figure for Reviewer 1A, B) Left: UMAP plot showing color-coded clusters (0–3). Subcluster analysis of the Endothelium (Cluster 1) from Fig. 4B. Right: UMAP plot color-coded by condition. (Figure for Reviewer 1C) Heatmap showing the average gene expression of marker genes for each cluster by condition. After cluster annotation, subclusters 0, 1, 2, and 3 were defined as Vein, Capillary, Artery, and Lymphatics, respectively. (Figure for Reviewer 1D) Cell type proportions. (Figure for Reviewer 1E) Number of differentially expressed genes (DEGs) in each subcluster of the PIK3CAH1047R group relative to Control. (Figure for Reviewer 1F) Comparison of enrichment analysis between EC subclusters from scRNA-seq. The bar graph shows the top 20 significantly altered Hallmark gene sets in EC subclusters from scRNA-seq using ssGSEA (escape R package). Red bars represent significantly upregulated Hallmark gene sets in mutants (FDR

      Initially, we performed sub-clustering on endothelial cells; however, this resulted in a considerably reduced number of cells per sub-cluster, especially in the control group (Figure for Reviewer 1A, B). In the control group, there were only approximately 149 endothelial cells in total, and dividing these into four clusters led to very few cells per cluster, thereby introducing statistical instability. Although arterial endothelial cells were relatively well defined by their high expression of Hey1 and Hey2 and lower levels of Nr2f2 and Aplnr, the boundaries between venous, capillary, and lymphatic endothelial cells were less distinct. In particular, defining lymphatic endothelial cells solely by Prox1 expression yielded a very small population; even after incorporating additional lymphatic markers such as Flt4 and Lyve1, it remained challenging to clearly separate the venous, capillary, and lymphatic populations (Figure for Reviewer 1C). Consequently, the proportion of lymphatic endothelial cells was markedly low, and discrepancies with the histological findings further reduced our confidence in this dataset (Figure for Reviewer 1D, E). Moreover, the number of differentially expressed genes (DEGs) increased with the number of cells, and the results of the enrichment analysis as well as the volcano plot were nearly identical to those shown in Figure 4 (Figure for Reviewer 1F, G). In other words, the subclustering process itself had limitations, resulting in the overall outcome being dominated by the most abundant venous cluster.

      It is possible that these limitations in sub-clustering are due to the relatively small number of endothelial cells. Nonetheless, a major strength of our single-cell analysis is its ability to compare various cell types derived from Isl1+ lineages, not just endothelial cells. Therefore, the relative scarcity of endothelial cells represents a limitation of this experimental system. For these reasons, we decided to omit this figure from the final version of the manuscript.

      This point is described in the Discussion section as follows:

      Additionally, we performed endothelial subclustering to explore potential differences in gene expression among arterial, venous, capillary, and lymphatic endothelium. However, in the control embryos, the number of endothelial cells was too low to yield reliable data (data not shown). (Page 13, lines 434-437)

      Hypoxia and glycolysis signatures are not specific to mutant ECs. Do the authors have an explanation for this? It is well known that PI3K overactivation increases glycolysis; please acknowledge this.

      Response:

      Thank you for your important comment. We have now incorporated a discussion, along with relevant references, on the section addressing that PI3K overactivation increases glycolysis into the Discussion section as follows:

      It is well known that overactivation of PI3K enhances glycolysis (Hu et al., 2016). In our study, the elevated expression of glycolytic enzymes, including Ldha, suggests a shift toward aerobic glycolysis, consistent with the Warburg effect. (Page 13, lines 447-450)

      Do you have an explanation for the expression of VEGFA by lymphatic mutant cells?

      Response:

      VEGF-A acts on VEGFR2 expressed on LECs, thereby promoting their proliferation and migration (Hong et al., 2004; Dellinger & Brekken, 2011). To clarify this point, we have revised the text accordingly and added additional references as follows:

      We focused on Vegf-a, a key regulator of EC proliferation and a downstream target of Hif-1α. Vegf-a likely drives both cell-autonomous and non-cell-autonomous effects on blood ECs, as well as LECs (Hong et al., 2004; Dellinger & Brekken, 2011). (Page 13, lines 445-447)

      Likewise, why mesenchymal cells traced from the Islt1-Cre decreased upon expression of Pik3caH1047R?

      Response:

      When comparing the mesenchyme cluster with other mesoderm-derived cells, we observed a marked downregulation of signaling pathways—notably those involved in inhibiting EMT, such as TGF-β, Wnt/β-catenin, and MYC target genes (Supplemental Figure 7B). Many of these pathways are associated with decreased epithelial-to-mesenchymal transition (Xu et al., 2009; Singh et al., 2012; Larue & Bellacosa, 2005; Yu et al., 2015), which could explain the reduction in the number of mesenchymal cells. However, PI3K activation is generally considered to promote EMT, which is at odds with previous studies.

      On the other hand, several investigations—including those using ES cells—suggest that PI3K activation could suppress TGF-β signaling via SMAD2/3 (Yu et al., 2015), and in some undifferentiated cell contexts, it may also inhibit the Wnt/β-catenin pathway via Smad2/3 (Singh et al., 2012). These multifaceted roles of PI3K could be particularly important during embryonic development (Larue & Bellacosa, 2005).

      Understanding how mesenchymal cell changes under PI3K activation affect endothelial cells is an important issue that requires further study. Accordingly, we have added these points to the Discussion section as follows:

      In our data, the mesenchymal cell population was decreased, and within this cluster, pathways typically promoting epithelial-mesenchymal transition (EMT) (e.g., TGF-β, Wnt, and MYC target genes) were downregulated (Supplemental Figure 7B). Although PI3K activation is generally thought to enhance EMT, several studies in undifferentiated cells have reported that PI3K can suppress these signals via SMAD2/3 (Singh et al., 2012; Yu et al., 2015). Elucidating how these changes in the mesenchyme contribute to vascular malformation pathogenesis remains an important avenue for future research. (Page 13, lines 437-444)

      Authors need to characterize the preclinical model before conducting any preclinical study. No controls are provided, including wild-type mice and phenotypes, before starting the treatment (day 4).

      Response:

      Thank you very much for your comment. We have now added new images illustrating skin under three conditions: untreated skin at Day 7, skin from Cre-negative animals that received tamoxifen, and skin from Cre-positive animals examined 4 days after tamoxifen administration. Additionally, we have included the corresponding statistical data for these skin samples (Figure 6C–E).

      Why did the authors not use their developmental model of head and neck malformation model for preclinical studies? This would be much more coherent with the first part of the manuscript. Also, how many animals were treated and quantified for the different conditions?

      Response:

      We have now indicated the number of animals (n) used under each condition directly on the graphs for clarity. As for why we did not use the Isl1-Cre model, we observed that—similar to the Tie2-Cre line—all Isl1-Cre mutant embryos died between E13.5 and E14.0 (indeed, none survived beyond E14.0; see our newly added Figure 3N). Consequently, we could not perform any postnatal treatment experiments. Moreover, as previously noted, the Isl1-CreERT2 line has an extremely narrow developmental window for vascular malformation formation, making it less suitable as a general model.

      Although we considered potential in utero or maternal interventions (e.g., direct uterine injection or placental transfer), these approaches demand extensive technical optimization and remain an area for future investigation. From a clinical standpoint, postnatal therapy meets a more immediate need: while vascular malformations are congenital, they often enlarge over time (Ryu et al., 2023), becoming more apparent and more likely to require treatment.

      In this study, because embryonic Pik3caH1047R expression was lethal before birth, we generated and treated postnatal cutaneous vascular malformations instead. Although this model does not strictly recapitulate the embryonic disease state, previous studies assessing drug efficacy have similarly employed postnatal tamoxifen-inducible mouse models (Martinez-Corral et al., 2020), lending validity to this approach. Moreover, because lesions typically become evident later in life rather than in utero, this method more closely aligns with clinical reality and may be more readily translated into practice.

      Minor Comments

      References in the introduction need to be revised. Specifically, how authors reached the stats on head and neck vascular malformations needs to be clarified. For instance, one of the cited papers refers to all types of vascular malformation, while the other focuses exclusively on lymphatic malformations with PIK3CA mutations. Moreover, in the latter, the groups are divided into orofacial and neck and body categories. How do authors substrate the information from the neck and head here?

      Response:

      We have clarified our definition of the “head and neck” region early in the Introduction and separated the discussion on anatomical localization from that on PIK3CA genetics. Additionally, we removed the percentage data of localization to avoid potential confusion with the genetic aspects.

      In Japan, lymphatic and other vascular malformations of the head and neck typically require complex, multidisciplinary management. Consequently, these conditions are officially designated as “intractable diseases,” and the government provides financial assistance for their treatment. Although most of the information is available only in Japanese, we refer reviewers to the following websites for details on head and neck vascular malformations:

      https://www.nanbyou.or.jp/entry/4893 https://www.nanbyou.or.jp/entry/4631 https://www.nanbyou.or.jp/entry/4758.

      (Please read with English translator, e.g., Google chrome translator)

      We are not aware of a comparable system in other countries. However, it is well recognized that vascular malformations frequently occur in the head and neck region (Nair, 2018; Alsuwailem et al., 2020; Sadick et al., 2017), as evidenced by over 250 PubMed hits when searching for “vascular malformation” and “head and neck.”

      Incorporating this comment, we have revised the early part of the Introduction as follows:

      They frequently manifest in the head and neck region—here defined as the orofacial and cervical areas, excluding the brain (Zenner et al., 2019; Lee & Chung, 2018; Nair, 2018; Alsuwailem et al., 2020). (Page 2, lines 52-53)

      Also, in line 79, I need clarification on ref 24 about fibrosis.

      Response:

      Thank you very much for pointing out the error. We have corrected the placement of the reference accordingly.

      Include references: Studies in mice have shown that p110α is essential for normal blood and lymphatic vessel development. Please clarify and correct. 

      Response:

      Thank you very much. We have now added the references (Graupera et al., 2008; Gupta et al., 2007; Stanczuk et al., 2015).

      Please define PIP2 and PIP3

      Response:

      Thank you very much for your comment. We have now added the following definitions to the Introduction:

      PIP2: Phosphatidylinositol 4,5-bisphosphate

      PIP3: Phosphatidylinositol 3,4,5-trisphosphate


      Why is Prox1 showing positivity in erythrocytes in Figure 1?

      Response:

      We used paraffin-embedded sections to preserve tissue morphology. Although we applied a reagent to suppress autofluorescence, some spillover from excitation around 488 nm was unavoidable. Moreover, in the mutant mice, blood remained within the abnormal vessels rather than being completely flushed out, which further increased the autofluorescence. Despite our efforts to mitigate this, some residual autofluorescence persisted. Consequently, we also employed DAB-based staining to confirm the specificity of Prox1 labeling in other Figures.

      Regarding Figure 1, I suggest organizing the quantifications in the same order to facilitate phenotype comparisons. For example, I, J vs. Q, R. What is the difference between M and N?

      Response:

      To facilitate the comparison between Figures 1I, J and 1Q, R, we have swapped Figures 1Q and R. Regarding Figures 1M and N, these panels represent the average cross-sectional area of an enlarged malformed vessel and the number of vessels exceeding a defined size, respectively. Although some central veins appeared slightly enlarged in the control group, the liver exhibits both a significant dilation of malformed vessels and an increased number of such vessels.

      Add the reference of the Bulk RNseq data.

      Response:

      We have added the following reference: (Jauhiainen et al., 2023)

      Mark in the Fig. 4F that the volcano plots are from cluster one of the scRNASeq (this is explained in text and legend, but when you go to the figure, it isn't very clear).

      Response:

      We have added the label “Cluster 1: Volcano Plot (genes associated with hypoxia/glycolysis)” to Figure 4F.

      Please label Figure 6D/E with the proper labels.

      Response:

      We have provided appropriate labels for Figure 6.

      In Fig. 6, it is mentioned that vacuoles are from the tamoxifen injection, how do you know? Do you also see them if you add oil alone (without tamoxifen) or tamoxifen in a WT background?

      Response:

      In Figure 6C, we have included both the image at Day 4 and the condition of Cre(–) animals 7 days after tamoxifen injection.

      **Referees cross-commenting**

      I completely agree with referee #2 regarding the preclinical studies. Bevacizumab does not neutralize murine VEGFA. This is a major issue.

      Response:

      As noted in the Reviewer #2 section, there appears to be some effect on mouse vasculature (Lin et al., 2022). However, given the ongoing debate regarding this issue, we performed additional experiments using a neutralizing antibody against mouse VEGF-A (clone 2G11). This antibody has been shown to suppress the proliferation of mouse vascular endothelial cells in vivo, for example (Mashima et al., 2021; Wuest & Carr, 2010). Our results demonstrate that it more sharply suppresses the proliferation of malformed vasculatures (both blood and lymphatic vessels) than bevacizumab. Based on these additional experiments, we revised the figures and updated them as Figure 6.

      Reviewer #1 (Significance (Required)):

      This study addresses a timely and relevant question: the origins, onset and progression of congenital vascular malformations, a field with limited understanding. The work is novel in its approach, employing complex embryonic models that aim to mimic the disease in its native context. By focusing on the effects of Pik3caH1047R mutations in cardiopharyngeal mesoderm-derived endothelial cells, it sheds light on how these mutations drive phenotypic outcomes through specific pathways, such as HIF-1α and VEGF-A signaling, while also identifying potential therapeutic targets. A strong aspect of the study is the use of embryonic models, which enables the investigation of disease onset in a context that closely resembles the in vivo environment. This is particularly valuable for congenital disorders, where native developmental cues are an integral aspect of disease progression. The study also integrates advanced techniques, including single-cell RNA sequencing, to dissect the cellular and molecular responses induced by the Pik3caH1047R mutation. Moreover, from a translational perspective, it provides novel therapeutic strategies for these diseases. Limitations of the study are (1) lack of clarity about the main question the authors try to address, and the main conclusions derived thereof; (2) the different parts of the manuscript are not well connected, and the rationale is not clear; (3) the scRNAseq analysis is underdeveloped; (4) characterization of the preclinical model is not provided.

      Audience:

      The findings presented here interest specialized audiences within developmental biology, vascular biology, and congenital disease research fields, and clinicians by providing new therapies to treat vascular anomalies. Moreover, the study's integration of single-cell and in vivo models could inspire further research in other contexts where understanding clonal behavior and signaling pathways is critical.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      This paper focuses on vascular malformations driven by PI3K mutation, with particular interest in the vascular defects localized at head and neck anatomical sites. The authors exploit the H1047R mutant, which has been widely demonstrated to induce both vascular and lymphatic malformations. To limit the effect of H1047R to tissues originating from the cardiopharyngeal mesoderm, Pik3caH1047R mice were crossed with mice expressing Cre under the control of the promoter of Isl1, a transcription factor that contributes to the development of cardiopharyngeal mesoderm-derived tissues. By comparing the embryo phenotype of this model with that observed when Pik3caH1047R expression is induced at different times of development, the authors conclude that the Isl1-Cre; Pik3caH1047R; R26R-eYFP model better recapitulates the anatomical features of human vascular malformations, in particular those localized at the head and neck. In my opinion the newly proposed model represents significant progress for studying human vascular malformations. Furthermore, scRNA-seq analysis has allowed the authors to propose a mechanism focused on the role of HIF and VEGFA. The authors provide partial evidence that HIF and VEGFA inhibitors halt the development of vascular malformations in VeCadCre; Pik3caH1047R mice. This experiment is characterized by a conceptual mistake, because bevacizumab does not recognize murine VEGFA (see for instance 10.1073/pnas.0611492104; 10.1167/iovs.07-1175). This error dampens my enthusiasm.

      CRITICISM

      1. Fig 1A. E13.5 corresponds to the early phase of vascular remodelling. What is the phenotype at the earliest stages (e.g. E9.5 or E10.5)?

      Response:

      Thank you very much for your comment. We have created a new Supplemental Figure 2, which demonstrates that no obvious phenotype is observed in mutant embryos at E9.5 and E11.5, and that the survival limit of these mutant embryos is around E13.5 to E14.0.

      In response to Reviewer 1’s question, a previous study (Hare et al, 2015) has shown that on a B6 background, this mouse model exhibits an earlier onset of phenotype, resulting in early lethality. However, we selected a mixed background of B6 and ICR, as we believe that a heterogeneous genetic background more accurately reflects the diversity of human pathology. We examined five pregnant females, which yielded approximately 30 embryos, of which only two survived until E14.0. Based on these observations, we consider E13.5 to E14.0 to be the appropriate survival limit (see Supplemental Figure 2G for additional details).

      We have described this in the Results section as follows:

      Whereas clear phenotypes were evident at E12.5 and E13.5, no pronounced external abnormalities were observed at E9.5 or E11.5 (Supplemental Figure 2A–B). Similarly, histological examination revealed no significant differences in the short-axis diameter of the PECAM+ CV or in the number of Prox1+ LECs surrounding the CV between control and mutant embryos at E11.5 (Supplemental Figure 2C–F). We also assessed Tie2-Cre; R26R-Pik3caH1047R mutant embryos at E14.0 from five pregnant mice. Only two embryos were alive at this stage, and both showed severe edema and hemorrhaging, indicating they were nearly moribund. These observations suggest that the critical point for survival of these mutant embryos lies between E13.5 and E14.0 (Supplemental Figure 2G). (Page 5, lines 157-165)

      Fig 1, 2, 3. The analysis of VEGFR2 expression is required. This request is important because of the paradigmatic and non-overlapping role of this receptor in early and late vascular development. Furthermore, these data better clarify the mechanism suggested by the experiments reported in Fig 5 (VEGFA and HIF expression).

      Response:

      Thank you very much for your comment. For each mouse presented in Figures 1, 2, and 3, we performed VEGFR2 immunostaining on serial sections corresponding to each figure and created a new Supplemental Figure 9. VEGFR2 was broadly expressed in both vascular and lymphatic endothelial cells in control and mutant embryos.

      We have described this in the Results section as follows:

      Furthermore, to verify whether VEGF‐A can act via VEGFR2, we performed VEGFR2 immunostaining on several mouse models: Tie2‐Cre; R26R‐Pik3caH1047R embryos (E13.5, corresponding to Figure 1), CDH5‐CreERT2; R26R‐Pik3caH1047R embryos (tamoxifen administered at E9.5 and analyzed at E16.5, corresponding to Figure 2), and Isl1‐Cre; R26R‐Pik3caH1047R embryos (E11.5 and E13.5, corresponding to Figure 3). In all cases, both control and mutant embryos exhibited widespread VEGFR2 expression in blood and lymphatic vessels at early and late developmental stages (Supplemental Figure 9A-R’). These findings suggest that Pik3caH1047R may act in an autocrine manner, at least in part via the VEGF‐A/VEGFR2 axis in endothelial cells, potentially explaining the observed phenotype. (Page 11, lines 352-361)

      As done in Figs 1, 2 and 3, data quantification by morphometric analysis is also required for the results reported in Supplemental Figure 3.

      Response:

      Thank you for your comment. We have now added additional statistics and graphs for clarity, which are presented as Supplemental Figure 4.

      Lines 166-174. I suppose that the reported observations were done at E16.5. What happens later? It's crucial to support the statement at lines 187-190.

      Response:

      At E9.5 and E12.5, we reduced the tamoxifen dose to one-fifth of the standard dose. From approximately 10 pregnant females, we obtained only three embryos at these stages. When tamoxifen was administered at E15.5, three embryos were obtained from two litters. In most cases, miscarriages occurred by E16.5, making further observation difficult. We focused on the time point around E16.5 because it is generally believed that the basic distribution of the lymphatic system throughout the body is established around this stage (Srinivasan et al, 2007; Maruyama et al, 2022).

      A similar experiment has been reported using T-CreERT2 to induce mosaic expression of Pik3caH1047R in the mesoderm, which resulted in subcutaneous venous malformations in mice at P1–P5 (Castillo et al, 2016). However, that study did not report whether the mice survived normally after birth. In fact, regarding the survival rate, the authors stated, “Our observations on the lethality and vascular defects in MosMes-Pik3caH1047R (T-CreERT2;R26R-Pik3caH1047R) embryos are similar to the previously reported phenotypes of ubiquitous or EC-specific expression of Pik3caH1047R in the developing embryo (Hare et al, 2015),” suggesting a high mortality rate when Pik3caH1047R is expressed using Tie2-Cre. Moreover, according to Hare et al., analysis of 250 Tie2-Cre; R26R-Pik3caH1047R embryos revealed that all died by E11.5. Thus, considering our results in conjunction with those from previous studies, it appears that expression of Pik3caH1047R in the mesoderm or endothelial cells during embryonic development results in the death of most embryos before birth.

      We have supplemented the Results section with the following details:

      Since the standard tamoxifen dose (125 mg/kg body weight) leads to miscarriage or embryonic death within 1–2 days, we diluted it to one-fifth of the original concentration. (Pages 5-6, lines 175-177)

      scRNA-seq was performed at E13.5 (Fig 4). It's mandatory to perform the same analysis at E16.5, which corresponds to the phenotypic analysis shown in Fig 3. This experiment is required to understand how hypoxia and glycolysis genes change along the development of the vascular malformation.

      Response:

      Thank you very much for your comment. First, regarding the experiments using Isl1‐Cre, we would like to clarify that embryo survival was not adequately addressed in our original manuscript. Our Isl1‐Cre embryos die between E13.5 and E14.0, which makes it practically impossible to perform single‐cell analysis beyond this stage (please refer to the newly added Figure 4N). Similarly, for experiments using CDH5‐CreERT2, the limited number of embryos obtained renders further analysis extremely challenging. Additionally, we have supplemented the Results section with the following description:

      These Isl1-Cre; R26R-Pik3caH1047R mutant embryos likely died from facial hemorrhaging between E13.5 and E14.0 (Figure 3N). (Page 7, lines 236-237)

      Further analysis at later embryonic stages proved challenging. Consequently, we aimed to investigate the effects of Pik3caH1047R on endothelial cells by comparing gene expression at E10.5 with that at E13.5. We performed single‐cell RNA sequencing on E10.5 embryos from both the control (Isl1-Cre; R26R-eYFP) and mutant (Isl1-Cre; R26R-eYFP; R26R-Pik3caH1047R) embryos. Unfortunately, the quality of both datasets was insufficient for reliable analysis. In the control sample, only 40.3% of reads were assigned to cell‐associated barcodes—substantially below the ideal threshold of >70%—with an estimated 790 cells and a median of 598 genes per cell. Similarly, in the mutant sample, only 37.0% of reads were associated with cells, despite an estimated cell count of 7,326 and a median of only 526 genes per cell. These metrics indicate that both datasets were severely compromised by high levels of ambient RNA or by a significant number of cells with low RNA content, precluding robust downstream analysis. This may be due to the fact that immature cells are particularly susceptible to damage incurred during FACS sorting and transportation to the analysis facility. Moreover, the relatively low number of control endothelial cells at E13.5 led us to conclude that performing similar experiments at earlier stages would be difficult. Despite our best efforts, we acknowledge this as a limitation of the present study.
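      The cutoff logic above can be restated as a small check. A minimal sketch: the 70% reads-in-cells threshold comes from our text, while the median-genes threshold, the helper function, and its name are purely illustrative assumptions, not any pipeline's defaults.

```python
# Illustrative QC gate for the single-cell metrics quoted above.
# The 0.70 reads-in-cells cutoff follows the text; the median-genes
# cutoff is an assumed placeholder value, not a Cell Ranger default.
def passes_qc(frac_reads_in_cells, median_genes_per_cell,
              min_frac=0.70, min_median_genes=1000):
    return (frac_reads_in_cells >= min_frac
            and median_genes_per_cell >= min_median_genes)

print(passes_qc(0.403, 598))  # control E10.5 sample: fails both cutoffs
print(passes_qc(0.370, 526))  # mutant E10.5 sample: fails both cutoffs
```

Under these assumed thresholds, both E10.5 datasets fail the gate, which is why we did not proceed with downstream analysis.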

      Lines 326-343. In this section the authors provide pharmacological evidences that HIF and VEGFa are involved in vascular malformation caused by H1047R . However , I'm surprised of efficacy of bevacizumab, which neutralizes human but not murine VEGFA. Genetech has developed B20 mAb that specifically neutralizes murine VEGFA. So the data shown require a. clarification by the authors and the experiments must be done with the appropriate reagent. Furthermore, which is the pharmacokynetics of these compounds topically applied?

      Response:

      Thank you very much for your comment. There are reports that bevacizumab exerts an in vivo inhibitory effect on neovascularization mediated by mouse Vegf-A (Lin et al, 2022). However, given the contentious nature of this issue, we conducted additional experiments. Due to the requirement for an MTA to obtain B20 mAb from Genentech—and considering the time constraints during revision—we opted to use a neutralizing antibody against mouse VEGF-A (clone 2G11) instead. This antibody has been shown to suppress the proliferation of mouse vascular endothelial cells in vivo (Mashima et al, 2021; Wuest & Carr, 2010).

      The dosing regimen for 2G11 was determined based on previous studies (Surve et al, 2024; Churchill et al, 2022). Moreover, an example of effective local administration is provided by Nagao et al (2017). Since this product is an antibody drug, it is metabolized and does not function as a prodrug. Although the precise half-life of 2G11 is unknown, rat IgG2a antibodies generally have a circulating half-life of approximately 7–10 days in rats. However, when administered to mice, the half-life is often significantly reduced due to interspecies differences in neonatal Fc receptor (FcRn) binding affinity, with estimates in murine models typically around 2–4 days (Abdiche et al, 2015; Medesan et al, 1998). However, in our model the injection is subcutaneous—almost equivalent to an intradermal injection (Figure 6B, C). Because this method is expected to provide a more sustained, slow-release effect (similar to the tuberculin reaction), the half-life should be longer than that achieved with intravenous administration. Consequently, we believe that sufficient efficacy is maintained in this model.

      Regarding LW-6:

      LW-6 is a small molecule that, due to its hydrophobic nature, is believed to freely cross cell membranes. Once inside the cell, it facilitates the degradation of HIF-1α, leading to reduced expression of its downstream targets (Lee et al, 2010). Although its half-life is estimated to be around 30 minutes, the active metabolites may exert sustained secondary effects (Lee et al, 2021). When administered intravenously, peak blood concentrations are reached within 5 minutes, making Cmax a critical parameter due to the rapid onset of action. In our experiments, we based the dosing regimen on previous studies (Lee et al, 2010; Song et al, 2016; Xu et al, 2022, 2024). While those studies administered doses comparable to or twice as high as ours via intravenous, intraperitoneal, or oral routes, our experimental design—in which a single dose was administered on Day 4 and samples were collected on Day 7—necessitated a single-dose protocol.

      Regarding Rapamycin:

      Several studies have demonstrated that local administration yields anti-inflammatory effects (Takayama et al, 2014; Tyler et al, 2011). Similar outcomes have been observed in vascular malformations (Boscolo et al, 2015; Martinez-Corral et al, 2020). Although the half-life of rapamycin is estimated to be approximately 6 hours following intravenous administration, it may be even shorter (Comas et al, 2012; Popovich et al, 2014).

      In light of these comments, we have revised Figure 6. Furthermore, the Results section pertaining to Figure 6 has been updated as follows:

      Hif-1α and Vegf-A inhibitors suppress the progression of vascular malformations.

      We next examined whether administering Hif-1α and Vegf-A inhibitors could effectively treat vascular malformations. Tamoxifen was administered to 3–4-week-old CDH5-CreERT2;R26R-Pik3caH1047R mice to induce mutations in the dorsal skin. Anti-VEGF-A, a Vegf-A neutralizing antibody; LW6, a Hif-1α inhibitor; and rapamycin, an mTOR inhibitor, were topically applied, and their effects were analyzed (Figure 6A). Both anti-VEGF-A and LW6 reduced the visible swelling in the dorsal skin, whereas the difference between the drug-treated and control groups was less pronounced with rapamycin (Figure 6B). In tamoxifen-treated Cre(–) mice, inflammatory cell infiltration and fibrosis were observed from the dermis to the subcutaneous tissue; however, there were no changes in the number of PECAM⁺ vasculatures or VEGFR3⁺ lymphatic vessels, including their enlarged forms, compared to the untreated control (Figure 6C–E). In contrast, tamoxifen administration to CDH5-CreERT2;R26R-Pik3caH1047R mice resulted in an increase in these vascular structures by day 4 (Figure 6C–E). At day 7, comparing mice with or without treatment using anti-VEGF-A, LW6, or rapamycin, the number of PECAM⁺ vasculatures was reduced in the treated groups; however, in the rapamycin group, the number of enlarged PECAM⁺ vasculatures did not differ from that in the untreated group (Figure 6F–M). Similarly, for VEGFR3⁺ lymphatic vessels, both anti-VEGF-A and LW6 induced a reduction, whereas rapamycin did not produce a statistically significant decrease (Figure 6N–U). (Page 11, lines 363-381)

      **Referees cross-commenting**

      The issues raised by referee #1 related to the phenotype analysis are right. In my opinion the Isl1 model proposed here mimics human pathology well, even if the vascular damage at the head is not so evident.

      Response:

      Perhaps the discrepancy arises from a terminological issue. According to the WHO Classification of Tumours, commonly used in clinical settings, the term "Head and Neck" refers to the facial and cervical regions (including the oral cavity, larynx, pharynx, salivary glands, nasal cavity, etc.) and excludes the central nervous system. The inclusion of the brain in Figure 1O-R may have led to some confusion. We included the brain because cerebral cavernous malformations are classified as venous malformations, and thus serve as an example of common sites for venous malformations in humans.

      To clarify this point, we have made slight revisions to the first part of the Introduction, as follows:

      They frequently manifest in the head and neck region—here defined as the orofacial and cervical areas, excluding the brain. (Page2, lines 52-53)

      Reviewer #2 (Significance (Required)):

      General assessment

      STRENGTH: the new mouse model seems to recapitulate human vascular malformations well. Possible key molecules have been identified.

      WEAKNESS: the pharmacological approach used to support the roles of VEGFA and HIF is not appropriate.

      References for the review:

      Abdiche YN, Yeung YA, Chaparro-Riggers J, Barman I, Strop P, Chin SM, Pham A, Bolton G, McDonough D, Lindquist K, et al (2015) The neonatal Fc receptor (FcRn) binds independently to both sites of the IgG homodimer with identical affinity. mAbs 7: 331–343

      Alsuwailem A, Myer CM & Chaudry G (2020) Vascular anomalies of the head and neck. Semin Pediatr Surg 29: 150968

      Boscolo E, Limaye N, Huang L, Kang K-T, Soblet J, Uebelhoer M, Mendola A, Natynki M, Seront E, Dupont S, et al (2015) Rapamycin improves TIE2-mutated venous malformation in murine model and human subjects. J Clin Investig 125: 3491–3504

      Castillo SD, Tzouanacou E, Zaw-Thin M, Berenjeno IM, Parker VER, Chivite I, Milà-Guasch M, Pearce W, Solomon I, Angulo-Urarte A, et al (2016) Somatic activating mutations in Pik3ca cause sporadic venous malformations in mice and humans. Sci Transl Med 8: 332ra43

      Churchill MJ, Bois H du, Heim TA, Mudianto T, Steele MM, Nolz JC & Lund AW (2022) Infection-induced lymphatic zippering restricts fluid transport and viral dissemination from skin. J Exp Med 219: e20211830

      Comas M, Toshkov I, Kuropatwinski KK, Chernova OB, Polinsky A, Blagosklonny MV, Gudkov AV & Antoch MP (2012) New nanoformulation of rapamycin Rapatar extends lifespan in homozygous p53−/− mice by delaying carcinogenesis. Aging (Albany NY) 4: 715–722

      Dellinger MT & Brekken RA (2011) Phosphorylation of Akt and ERK1/2 Is Required for VEGF-A/VEGFR2-Induced Proliferation and Migration of Lymphatic Endothelium. PLoS ONE 6: e28947

      Graupera M, Guillermet-Guibert J, Foukas LC, Phng L-K, Cain RJ, Salpekar A, Pearce W, Meek S, Millan J, Cutillas PR, et al (2008) Angiogenesis selectively requires the p110α isoform of PI3K to control endothelial cell migration. Nature 453: 662–666

      Gupta S, Ramjaun AR, Haiko P, Wang Y, Warne PH, Nicke B, Nye E, Stamp G, Alitalo K & Downward J (2007) Binding of Ras to Phosphoinositide 3-Kinase p110α Is Required for Ras- Driven Tumorigenesis in Mice. Cell 129: 957–968

      Hare LM, Schwarz Q, Wiszniak S, Gurung R, Montgomery KG, Mitchell CA & Phillips WA (2015) Heterozygous expression of the oncogenic Pik3ca H1047R mutation during murine development results in fatal embryonic and extraembryonic defects. Dev Biol 404: 14–26

      Hong Y, Lange‐Asschenfeldt B, Velasco P, Hirakawa S, Kunstfeld R, Brown LF, Bohlen P, Senger DR & Detmar M (2004) VEGF‐A promotes tissue repair‐associated lymphatic vessel formation via VEGFR‐2 and the α1β1 and α2β1 integrins. FASEB J 18: 1111–1113

      Hu H, Juvekar A, Lyssiotis CA, Lien EC, Albeck JG, Oh D, Varma G, Hung YP, Ullas S, Lauring J, et al (2016) Phosphoinositide 3-Kinase Regulates Glycolysis through Mobilization of Aldolase from the Actin Cytoskeleton. Cell 164: 433–446

      Jauhiainen S, Ilmonen H, Vuola P, Rasinkangas H, Pulkkinen HH, Keränen S, Kiema M, Liikkanen JJ, Laham-Karam N, Laidinen S, et al (2023) ErbB signaling is a potential therapeutic target for vascular lesions with fibrous component. eLife 12: e82543

      Larue L & Bellacosa A (2005) Epithelial–mesenchymal transition in development and cancer: role of phosphatidylinositol 3′ kinase/AKT pathways. Oncogene 24: 7443–7454

      Lee JW & Chung HY (2018) Vascular anomalies of the head and neck: current overview. Arch Craniofacial Surg 19: 243–247

      Lee K, Kang JE, Park S-K, Jin Y, Chung K-S, Kim H-M, Lee K, Kang MR, Lee MK, Song KB, et al (2010) LW6, a novel HIF-1 inhibitor, promotes proteasomal degradation of HIF-1α via upregulation of VHL in a colon cancer cell line. Biochem Pharmacol 80: 982–989

      Lee K, Lee J-Y, Lee K, Jung C-R, Kim M-J, Kim J-A, Yoo D-G, Shin E-J & Oh S-J (2021) Metabolite Profiling and Characterization of LW6, a Novel HIF-1α Inhibitor, as an Antitumor Drug Candidate in Mice. Molecules 26: 1951

      Lin Y, Dong M, Liu Z, Xu M, Huang Z, Liu H, Gao Y & Zhou W (2022) A strategy of vascular‐targeted therapy for liver fibrosis. Hepatology 76: 660–675

      Lupu I-E, Kirschnick N, Weischer S, Martinez-Corral I, Forrow A, Lahmann I, Riley PR, Zobel T, Makinen T, Kiefer F, et al (2022) Direct specification of lymphatic endothelium from non-venous angioblasts. Biorxiv: 2022.05.11.491403

      Martinez-Corral I, Zhang Y, Petkova M, Ortsäter H, Sjöberg S, Castillo SD, Brouillard P, Libbrecht L, Saur D, Graupera M, et al (2020) Blockade of VEGF-C signaling inhibits lymphatic malformations driven by oncogenic PIK3CA mutation. Nat Commun 11: 2869

      Maruyama K, Miyagawa-Tomita S, Haneda Y, Kida M, Matsuzaki F, Imanaka-Yoshida K & Kurihara H (2022) The cardiopharyngeal mesoderm contributes to lymphatic vessel development in mouse. Elife 11

      Maruyama K, Miyagawa-Tomita S, Mizukami K, Matsuzaki F & Kurihara H (2019) Isl1-expressing non-venous cell lineage contributes to cardiac lymphatic vessel development. Dev Biol 452: 134–143

      Maruyama K, Naemura K, Arima Y, Uchijima Y, Nagao H, Yoshihara K, Singh MK, Uemura A, Matsuzaki F, Yoshida Y, et al (2021) Semaphorin3E-PlexinD1 signaling in coronary artery and lymphatic vessel development with clinical implications in myocardial recovery. Iscience: 102305

      Mashima T, Wakatsuki T, Kawata N, Jang M-K, Nagamori A, Yoshida H, Nakamura K, Migita T, Seimiya H & Yamaguchi K (2021) Neutralization of the induced VEGF-A potentiates the therapeutic effect of an anti-VEGFR2 antibody on gastric cancer in vivo. Sci Rep 11: 15125

      Medesan C, Cianga P, Mummert M, Stanescu D, Ghetie V & Ward ES (1998) Comparative studies of rat IgG to further delineate the Fc : FcRn interaction site. Eur J Immunol 28: 2092–2100

      Nagao M, Hamilton JL, Kc R, Berendsen AD, Duan X, Cheong CW, Li X, Im H-J & Olsen BR (2017) Vascular Endothelial Growth Factor in Cartilage Development and Osteoarthritis. Sci Rep 7: 13027

      Nair SC (2018) Vascular Anomalies of the Head and Neck Region. J Maxillofac Oral Surg 17: 1–12

      Popovich IG, Anisimov VN, Zabezhinski MA, Semenchenko AV, Tyndyk ML, Yurova MN & Blagosklonny MV (2014) Lifespan extension and cancer prevention in HER-2/neu transgenic mice treated with low intermittent doses of rapamycin. Cancer Biol Ther 15: 586–592

      Ryu JY, Chang YJ, Lee JS, Choi KY, Yang JD, Lee S-J, Lee J, Huh S, Kim JY & Chung HY (2023) A nationwide cohort study on incidence and mortality associated with extracranial vascular malformations. Sci Rep 13: 13950

      Sadick M, Wohlgemuth WA, Huelse R, Lange B, Henzler T, Schoenberg SO & Sadick H (2017) Interdisciplinary Management of Head and Neck Vascular Anomalies: Clinical Presentation, Diagnostic Findings and Minimalinvasive Therapies. Eur J Radiol Open 4: 63–68

      Singh AM, Reynolds D, Cliff T, Ohtsuka S, Mattheyses AL, Sun Y, Menendez L, Kulik M & Dalton S (2012) Signaling Network Crosstalk in Human Pluripotent Cells: A Smad2/3-Regulated Switch that Controls the Balance between Self-Renewal and Differentiation. Cell Stem Cell 10: 312–326

      Song JG, Lee YS, Park J-A, Lee E-H, Lim S-J, Yang SJ, Zhao M, Lee K & Han H-K (2016) Discovery of LW6 as a new potent inhibitor of breast cancer resistance protein. Cancer Chemother Pharmacol 78: 735–744

      Srinivasan RS, Dillard ME, Lagutin OV, Lin F-J, Tsai S, Tsai M-J, Samokhvalov IM & Oliver G (2007) Lineage tracing demonstrates the venous origin of the mammalian lymphatic vasculature. Gene Dev 21: 2422–2432

      Stanczuk L, Martinez-Corral I, Ulvmar MH, Zhang Y, Laviña B, Fruttiger M, Adams RH, Saur D, Betsholtz C, Ortega S, et al (2015) cKit Lineage Hemogenic Endothelium-Derived Cells Contribute to Mesenteric Lymphatic Vessels. Cell Reports 10: 1708–1721

      Stone OA & Stainier DYR (2019) Paraxial Mesoderm Is the Major Source of Lymphatic Endothelium. Dev Cell 50: 247-255.e3

      Surve CR, Duran CL, Ye X, Chen X, Lin Y, Harney AS, Wang Y, Sharma VP, Stanley ER, Cox D, et al (2024) Signaling events at TMEM doorways provide potential targets for inhibiting breast cancer dissemination. bioRxiv: 2024.01.08.574676

      Takayama K, Kawakami Y, Kobayashi M, Greco N, Cummins JH, Matsushita T, Kuroda R, Kurosaka M, Fu FH & Huard J (2014) Local intra-articular injection of rapamycin delays articular cartilage degeneration in a murine model of osteoarthritis. Arthritis Res Ther 16: 482

      Tyler B, Wadsworth S, Recinos V, Mehta V, Vellimana A, Li K, Rosenblatt J, Do H, Gallia GL, Siu I-M, et al (2011) Local delivery of rapamycin: a toxicity and efficacy study in an experimental malignant glioma model in rats. Neuro-Oncol 13: 700–709

      Wuest TR & Carr DJJ (2010) VEGF-A expression by HSV-1–infected cells drives corneal lymphangiogenesis. J Exp Med 207: 101–115

      Xu H, Chen Y, Li Z, Zhang H, Liu J & Han J (2022) The hypoxia-inducible factor 1 inhibitor LW6 mediates the HIF-1α/PD-L1 axis and suppresses tumor growth of hepatocellular carcinoma in vitro and in vivo. Eur J Pharmacol 930: 175154

      Xu J, Lamouille S & Derynck R (2009) TGF-β-induced epithelial to mesenchymal transition. Cell Res 19: 156–172

      Xu Q, Liu H, Ye Y, Wuren T & Ge R (2024) Effects of different hypoxia exposure on myeloid-derived suppressor cells in mice. Exp Mol Pathol 140: 104932

      Yu JSL, Ramasamy TS, Murphy N, Holt MK, Czapiewski R, Wei S-K & Cui W (2015) PI3K/mTORC2 regulates TGF-β/Activin signalling by modulating Smad2/3 activity via linker phosphorylation. Nat Commun 6: 7212

      Zenner K, Cheng CV, Jensen DM, Timms AE, Shivaram G, Bly R, Ganti S, Whitlock KB, Dobyns WB, Perkins J, et al (2019) Genotype correlates with clinical severity in PIK3CA-associated lymphatic malformations. Jci Insight 4

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users. Some recommendation algorithms can be simple, such as reverse chronological order, meaning they show users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as:

       - Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
       - Whether the post was made or liked by my friends or people I’m following
       - How much this post has been liked, interacted with, or hovered over
       - Which other posts I’ve been liking, interacting with, or hovering over
       - What people connected to me or similar to me have been liking, interacting with, or hovering over
       - What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely)

       This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actually recording what you are saying and recommending based on that.

      Yeah, this definitely lines up with what I’ve noticed using social media. Sometimes I see posts or ads that feel way too specific, almost like the app “knows” what I was just talking about. Living in different countries (China, Japan, and now the U.S.), I’ve also seen how recommendation algorithms change based on region—like how WeChat, LINE, and Instagram push different types of content. It’s wild how much data these platforms collect, and since they keep their algorithms secret, we can only guess how deep it really goes.
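      The factor list in the quoted passage can be sketched as a toy weighted scoring function. A minimal sketch: all the weights, field names, and example posts below are invented for illustration, not any platform's actual algorithm.

```python
import time

# Toy recommendation score: a weighted sum of a few of the factors the
# passage lists. All weights and field names are invented for illustration.
def score(post, now=None):
    now = now if now is not None else time.time()
    hours_old = (now - post["posted_at"]) / 3600.0
    recency = 1.0 / (1.0 + hours_old)               # newer posts score higher
    return (3.0 * recency
            + 2.0 * post["by_friend"]               # made/liked by people I follow
            + 1.0 * post["like_count"] / 100.0      # overall popularity
            + 1.5 * post["similar_users_liked"])    # engagement by people like me

now = time.time()
posts = [
    {"posted_at": now - 60,    "by_friend": 0, "like_count": 5,   "similar_users_liked": 0},
    {"posted_at": now - 86400, "by_friend": 1, "like_count": 300, "similar_users_liked": 1},
]

# Reverse chronological (the "simple" algorithm) is just a different sort key:
chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)
recommended = sorted(posts, key=score, reverse=True)
```

      With these made-up weights, the day-old post from a friend outranks the minute-old stranger's post under the weighted score, while reverse chronological order puts the newest post first.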

  9. social-media-ethics-automation.github.io
    1. Elon Musk [@elonmusk]. Trashing accounts that you hate will cause our algorithm to show you more of those accounts, as it is keying off of your interactions. Basically saying if you love trashing *that* account, then you will probably also love trashing *this* account. Not actually wrong lol. January 2023. URL: https://twitter.com/elonmusk/status/1615194151737520128 (visited on 2023-12-07).

      I think this tweet from Elon Musk is interesting because I kind of agree that if you keep trashing accounts you hate, the algorithm thinks you enjoy engaging with them and will show you more. It’s a bit ironic—like the internet version of "if you don’t have anything nice to say, don’t say anything at all." Sometimes, the best way to make something disappear from your feed is to just ignore it.

    1. I heard a Fly buzz - when I died - The Stillness in the Room Was like the Stillness in the Air - Between the Heaves of Storm -

      In these lines from "I heard a Fly buzz - when I died -", Dickinson compares the stillness before death to the quiet right before a storm. She writes, “The Stillness in the Room / Was like the Stillness in the Air / Between the Heaves of Storm.” This comparison gives the stillness a sense of tension, almost like something important is about to happen, but hasn't yet. It's not the calm of peace; it’s more like the pause before chaos or change. Dickinson seems to capture how death isn’t just peaceful-it’s a moment filled with a strange sort of energy, a quiet waiting. That comparison to the storm builds an eerie sense of anticipation, as if the world is just holding its breath in that last moment before death takes over.

    2. The Stillness in the Room Was like the Stillness in the Air -

      In "I heard a Fly buzz - when I died" (591), Dickinson talks about her own death. Emily Dickinson makes a simile by comparing: "The Stillness in the Room Was like the Stillness in the Air". She is sharing the emotions of the last night of her life: she is lying peacefully in a bed in a dark, empty, quiet room while a storm is coming, with no one beside her, only a fly buzzing around the whole room. Dickinson's last wish was just to lie down peacefully before the storm on its way.

    1. All I suggest is that looking away on the internet is the wrong impulse. Exit is unproductive and won't make any difference. Just like real life, there is no escape. You can't opt out.

      This is true and it's not true. You can't choose to live in a society where the thing doesn't exist just because you're not plugged into it. But it doesn't follow that you're obligated to attach the thing to your brain and maximize what it wants from you.

    1. A persona is only useful if it’s valid. If these details are accurate with respect to the data from your research, then you can use personas as a tool for imagining how any of the design ideas might fit into a person’s life. If you just make someone up and their details aren’t grounded in someone’s reality, your persona will be useless, because what you’re imagining will be fantasy.

      This is interesting, and I agree to some extent. I'm curious about the process of thinking up a persona; if I understand a problem in theory because I hear about it through the grapevine, do I need to do specific interviews to identify a person with this experience? It's possible that I would gain an understanding of the problem, but not its underlying factors, so in theory I know the symptoms but not the diagnosis. I think that's where the "grounded in reality" part comes in.

    1. But design will always require you to make a value judgement about who does and who does not deserve your design help. Let that choice be a just one, that centers people’s actual needs. And let that choice be an equitable one, that focuses on people who actually need help

      I agree with this statement. I think it's important to acknowledge the limitations of our choices regarding design, and that even the most well-meaning people will fall short. But it is important to mean well regardless, and be intentional about serving those who need the most help instead of assuming we are choosing the best option and patting ourselves on the back for meeting a very narrow need.

    1. Furthermore, it’s generally a bad idea to think too deeply or quantitatively about your friendships—a certain degree of irrationality and forgetfulness is essential to the whole enterprise. It’s usually not good for a friendship when you start thinking things like “Bob is kind of a shitty person, why exactly am I still buddies with him?” (because you’ve known him for a long time and he really got you out of a jam back in college, and that’s plenty enough reason to stay friends with him; also people change and you might be really glad that you stayed friends with him in the future) or “Nancy has to initiate the next hang out because I’ve done it nine times this year compared to her three” (she’s going through some things you don’t know about so cut her a break, just like she’s cut you one in the past). Friendship drama caused by weddings is another example—shit can really get awkward when you have to basically rank your friends by choosing a best man/maid of honor or deciding who makes it into the wedding party and who doesn’t.

      Dissect the frog, and the frog dies.

    1. The definition above implies that the “World Wide Web” uses the http protocol to send its data. Why then, do we still need to add the “WWW” subdomain? It’s a waste of time to type it. Wouldn’t it be easier to just type in the domain name, without the “WWW”?

      It feels like there's a good opportunity for some kind of joke project here: wwww subdomains for the Worse World-Wide Web. Programmatic conventions explicitly designed around checking for such subdomains in a magnificent violation of separation of concerns; perhaps client software would be enlightened or unenlightened, know to parse or display things differently conditionally...

    1. I think this is how most games view the purpose of their loops. It’s for the player to master a skill. But most of these games aren’t really about that. In Assassin’s Creed Odyssey you have an RPG skill tree that unlocks new abilities, lets you get better and better at fighting and sneaking. And, I suppose you could make a case that the game is about an assassin honing her craft. But...actually it really isn't. It’s about someone trying to find who her real family is. Or it’s about exploring ancient Greece. Or it’s about choosing sides in the Peloponnesian War. Or...something else. I’m not really sure what it’s about, honestly, and anyways it doesn’t matter. Let’s say it’s about an assassin honing her craft. Nothing in the game really supports that. The world doesn’t feel oppressive or vulnerable, it hardly matters if you get better or not, it’s quite easy; it’s impossible to get lost; you never fail. I don’t really feel like my skill in that game improved, as I played it. It felt more like…the game just kept going. It takes a lot of work to make this structure meaningful! But let’s suppose these games did make this work. Let’s say all these games achieve the difficult task of creating meaning through play, feeling mastery through repetition…it’s not that this is a bad use of play, but I have to believe it is not all play can offer us. I hope it is just a small fraction of what play is capable of! So why is this all we’re doing? Why can’t we hope for more?

      Inscryption: genuinely top-tier loop, but the weight of the game is in how weird it feels to go outside that loop structure

    1. The bottom line is that they have exchanged any interest in reducing treatments for the goal of increasing them. No matter how obvious this might seem now, I didn’t see the connections right away, even when pharmaceutical researchers said it directly: “No one is thinking about the patients, just market share.”25

      Dumit’s argument on maximizing prescriptions over cures makes me question the way we define medical success. If the goal of medicine is to improve health, why do we measure progress by the number of people taking medications rather than by recovery rates? It feels like we've normalized the idea that being on multiple prescriptions is just a part of life, rather than questioning if it's necessary. Are we treating diseases or just maintaining them for profit? This makes me wonder whether pharmaceutical companies should have this much control over what counts as “good” healthcare.

    2. Normal and healthy are severed, and this is anxiously funny because it didn’t used to be that way

      I found this line of the passage interesting. It's making it seem as though being healthy and being normal aren't the same. For example, even if you feel fine and go to the doctor, they may say you are at risk for certain diseases. It makes me wonder, does knowing more about health make us more worried? The more we hear and learn about risks, the more we fear sickness, which can result in more frequent lifestyle changes. But although caring about your health is great, is all of this making people healthy or just more anxious? This part of the passage makes me wonder if we as a society are becoming progressively more anxious and worried about our health.

    1. With a smaller training size of ∼1M examples and just a single GPU, training times ranged from 6-26 hours for 100 epochs for most proteins (4 to 16 minutes per epoch). Pretraining METL-Global with 20M parameters took ∼50 hours on 4x A100s and ∼142 hours with 50M parameters.

      Given the performance, I'm impressed with the affordability of the models' pre-training.

      In a world of foundation models that cost millions of dollars to train, I think it's definitely worth mentioning the frugality of these models in the discussion (if not already mentioned).

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer 1:

      (1) The overall conclusion, as summarized in the abstract as "Together, our study documents the diversification of locomotor and oculomotor adaptations among hunting teleost larvae" is not that compelling. What would be much more interesting would be to directly relate these differences to different ecological niches (e.g. different types of natural prey, visual scene conditions, height in water column etc), and/or differences in neural circuit mechanisms. While I appreciate that this paper provides a first step on this path, by itself it seems on the verge of stamp collecting, i.e. collecting and cataloging observations without a clear, overarching hypothesis or theoretical framework.

      There are limited studies on the prey capture behaviors of larval fishes, and ours is the first to compare multiple species systematically using a common analysis framework. Our analysis approach could have uncovered a common set of swim kinematics and capture strategies shared by all species; but instead, we found that medaka used a monocular strategy rather than the binocular strategy of cichlids and zebrafish. Our analysis similarly could have revealed first-feeding larvae of all species go through a “bout” stage, which was previously proposed as important for sensorimotor decision making (Bahl et al., 2019), but instead we found that medaka and some cichlids have more continuous swimming from an early life stage. Finally, the rate at which prey capture kinematics evolves is not known. Our approach could have revealed rapid diversification of feeding strategies in cichlids (similarly to how adult feeding behavior evolves), but instead we found smaller differences within cichlids than between cichlids and medaka.

      (2) The data to support some of the claims is either weak or lacking entirely.

      Highlighted timestamps in videos, new stats in fig 1H and fig 2, updated supplementary figures now provide additional support for claims.

      - It would be helpful to include previously published data from zebrafish for comparison.

      We appreciate the suggestion. Mearns et al. (2020) provided a comprehensive account of prey capture in zebrafish larvae in an almost identical setup with similar analyses. We do not feel it is necessary to recount all the findings in that paper here. There are many studies on prey capture in zebrafish from the past 20 years, and reproducing these here would not add anything to that extensive pre-existing literature.

      - Justification is required for why it is meaningful to compare hunting strategies when both fish species and prey species are being varied. For instance, artemia and paramecia are different sizes and have different movement statistics.

      We added text explaining why different food was chosen for medaka/cichlids. There is no easy way to stage match fishes as evolutionarily diverged as cichlids, medaka, and zebrafish. Size is a reasonable metric within a species, but there is no guarantee that size-matched larvae of two different species are at the same level of maturity. Therefore, we thought the most appropriate stage to address is when larvae first start feeding, as this enables us to study innate prey capture behavior before any learning or experience-dependent changes have taken place. Given that zebrafish, medaka and cichlid larvae are different sizes when they first start feeding, it was necessary to study their hunting behavior with different prey items.

      - It would be helpful in Figure 1A to add the abbreviations used elsewhere in the paper. I found it slightly distracting that the authors switch back and forth in the paper between using "OL" and "medaka" to refer to the same species: please pick one and then remain consistent.

      Medaka is the common name for the Japanese rice fish, O. latipes. Cichlids do not have common names and are referred to only by their scientific names. Since readers are more likely to be familiar with the common name, medaka, we now use medaka (OL) throughout the manuscript, which we hope makes the text clearer.

      - The conceptual meaning of behavioral segmentation is somewhat unclear. For zebrafish, the bouts already come temporally segmented. However in medaka for instance, swimming is more continuous, and the segmentation is presumably more in terms of "behavioral syllables" as have been discussed for example mouse or drosophila behavior (in the last row of Figure S1 it is not at all obvious why some of the boundaries were placed at their specific locations). It's not clear whether it's meaningful to make an equivalence between syllables and bouts, and so whether for instance Figure 1H is making an apples-to-apples comparison.

      We clarified the text to say we are comparing syllables, rather than bouts.

      - The interpretation of 1H is that "medaka exhibited significantly longer swims than cichlids"; however this is not supported by the appropriate statistical test. The KS test only says that two probability distributions are different; to say that one quantity is larger than another requires a comparison of means.

      Updated Fig 1H; we now use a bootstrap test (difference of medians) and re-plotted the data as violin plots.
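      For readers unfamiliar with the approach, a bootstrap test for a difference of medians can be sketched as below. This is a minimal illustration with made-up swim-duration numbers, not the paper's measurements or code: each group is resampled with replacement, and the bootstrap distribution of the median difference yields a percentile confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_median_diff(a, b, n_boot=10_000, rng=rng):
    """Bootstrap the difference of medians, median(a) - median(b).

    Returns the observed difference and a 95% percentile CI built from
    n_boot resamples of each group (with replacement).
    """
    a, b = np.asarray(a), np.asarray(b)
    observed = np.median(a) - np.median(b)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (np.median(rng.choice(a, a.size, replace=True))
                    - np.median(rng.choice(b, b.size, replace=True)))
    ci = np.percentile(diffs, [2.5, 97.5])
    return observed, ci

# Toy stand-ins for swim-syllable durations (hypothetical data):
medaka = rng.exponential(0.6, 200) + 0.2   # longer, more continuous swims
cichlid = rng.exponential(0.3, 200) + 0.1
obs, ci = bootstrap_median_diff(medaka, cichlid)
print(obs, ci)  # a CI excluding 0 supports a genuine median difference
```

Unlike the KS test, which only detects that two distributions differ, the sign of the median difference (with a CI excluding zero) supports a directional claim such as "medaka swims are longer".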

      - I think the evidence that there are qualitatively different patterns of eye convergence between species is weak. In Figure 2A I admire the authors addressing this using BIC, and the distributions are clearly separated in LA (the Hartigan dip test could be a useful additional test here). However for LO, NM, and AB the distributions only have one peak, and it's therefore unclear why it's better to fit them with two Gaussians rather than e.g. a gamma distribution. Indeed the latter has fewer parameters than a two-gaussian model, so it would be worthwhile to use BIC to make that comparison. The positions of the two Gaussians for LO, NM, and AB are separated by only a handful of degrees (cf LA, where the separation is ~20 degrees), which further supports the idea that there aren't really two qualitatively different convergence states here.

      Added explanation to text.
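      The model comparison the reviewer proposes can be sketched as below: compute BIC for a two-component Gaussian mixture and for a single gamma distribution, and prefer the model with the lower BIC. The data here are hypothetical eye-convergence angles invented for illustration, not the paper's measurements.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical angles: a mix of an "unconverged" and a "converged" state.
angles = np.concatenate([rng.normal(15, 3, 800), rng.normal(35, 4, 200)])
X = angles.reshape(-1, 1)
n = angles.size

# BIC of a two-component Gaussian mixture (sklearn counts its own parameters).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
bic_gmm = gmm.bic(X)

# BIC of a single gamma fit with location fixed at 0 (2 free parameters:
# shape and scale). BIC = k*ln(n) - 2*log-likelihood.
shape, loc, scale = stats.gamma.fit(angles, floc=0)
loglik = stats.gamma.logpdf(angles, shape, loc, scale).sum()
bic_gamma = 2 * np.log(n) - 2 * loglik

print(bic_gmm, bic_gamma)  # the lower BIC is the preferred model
```

For clearly bimodal data like this toy example the mixture wins despite its extra parameters; for a unimodal, right-skewed distribution the gamma can come out ahead, which is the reviewer's point.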

      - Figure S2 is unfortunately misleading in this regard. I don't claim the authors aimed to mislead, but they have made the well-known error of using colors with very different luminances in a plot where size matters (see e.g. https://www.r-project.org/conferences/DSC2003/Proceedings/Ihaka.pdf). Thus, to the eye, it appears there's a big valley between the red and blue regions, but actually, that valley is full of points: it's really just one big continuous blob.

      Kernel density estimates of eye convergence angles were added to Figure S2. The point we wish to make is that there is higher density when both eyes are rotated inwards (converged) in cichlids, but not medaka (O. latipes). The valley between converged and unconverged states being full of points is due to (1) slight variation in the placement of key points in SLEAP, which blurs the boundary between states, and (2) the eye convergence angle must pass through the valley in order to become converged, so necessarily there are points in between the two extremes of eye convergence.

      - In Figure 2D please could the authors double-check the significance of the difference between LO and NM: they certainly don't look different in the plot.

      Thank you for flagging this. We realize the way we previously reported the stats was open to misinterpretation. We have updated figures 2C, D and F to use letters to indicate statistical groupings, which hopefully makes it clearer which species are statistically different from each other.

      - In Figure 2G it's not clear why AB is not included. It is mentioned that the artemia was hard to track in the AB videos, but the supplementary videos provided do not support this.

      The contrast of the artemia in the AB videos is sufficiently different from the other cichlid videos that our pre-trained YOLO model fails. Retraining the model would be a lot of extra work and we feel like a comparison of three species is sufficient to address the sensorimotor transformations that occur over the course of prey capture in cichlids.

      - The statement "Zebrafish larvae have a unique swim repertoire during prey capture, which is distinct from exploratory swim bouts" is not supported by the work of others or indeed the authors' own work. In Figure 4F all types of bouts can occur at any time, it's just the probability at which they occur that varies during prey capture versus other times (see also Mearns et al (2020) Figure S4B).

      The point is well taken that there probably is not a hard separation between spontaneous and prey capture swims based on tail kinematics alone, which is also shown in Marques et al. (2018). However, we think that figure 2I of Mearns et al., which plots the probability of swims being drawn from different parts of the behavior space during prey capture (eyes converged) or not (eyes unconverged), shows that the repertoire of swims during the two states is substantially different. Points are blue or red; there are very few pale blue/pale red points in that figure panel. Figure S4B is showing clustered data, and clustering is a notoriously challenging problem for which there exists no perfect solution (Kleinberg, 2002). The clusters in Mearns et al. incorporated information about transition structure, as this was necessary for obtaining interpretable clusters for subsequent analyses. However, a different clustering approach could have yielded different boundaries, which may have shown more (or less) separation of bout types during prey capture/exploratory swimming. Therefore, we have updated the text to say that zebrafish preferentially perform different swim types during prey capture and exploration, and re-interpreted the behavior of cichlids similarly.

      - More discussion is warranted of the large variation in the number of behavioral clusters found between species (11-32). First, how much is this variation really to be trusted? I appreciate the affinity propagation parameters were the same in all cases, but what parameters "make sense" is somewhat dependent on the particular data set. Second, if one does believe this represents real variation, then why? This is really the key question, and it's unsatisfying to merely document it without trying to interpret it.

      Extended paragraph with more interpretation.

      - What is the purpose of "hovers"? Why not stay motionless? Could it be a way of reducing the latency of a subsequent movement? Is this an example of the scallop theorem?

      Added a couple of sentences speculating on function.

      - I'm not sure "spring-loaded" is a good term here: the tension force of a coiled tail is fairly negligible since there's little internal force actively trying to straighten it.

      Rewrote this part to highlight that fish spring toward the prey, without the implication that tension forces in the tail are responsible for the movement. However, we are not aware of any literature measuring passive forces within the tail of fishes. Presumably the notochord is relatively stiff and may provide an internal force trying to straighten the tail.

      - There are now several statements for which no direct evidence is presented. We shouldn't have to rely on the author's qualitative impressions of what they observed: show us quantitative analysis.

      * "often hover"

      * "cichlids often alternate between approaches and hover swims"

      * "over many hundreds of milliseconds"

      * "we have also observed suction captures and ram-like attacks"

      * "may swim backwards"

      * "may expel prey from their mouth"

      * "cichlid captures often occur in two phases"

      Added references to supplementary videos with timestamps to highlight these behaviors.

      - I don't find it plausible that sated fish continue hunting prey that they know they're not going to eat just for the practice.

      Removed the speculation.

      - In Figure 3 is it not possible to include medaka, based on the hand-tracked paramecia?

      The videos are recorded at high frame rate, so it would be a lot of additional work to track these manually. Furthermore, earlier in prey capture it is very difficult to tell by watching videos which prey the medaka are tracking, especially as single paramecia can drift in and out of focus in the videos. Since there is no eye convergence, it is very difficult to ascertain when tracking of a given prey begins. In Fig 4, it was only possible to track paramecia by hand since it is immediately prior to the strike and from the video it is possible to see which paramecium the fish targeted. Our analyses of heading changes were performed over the 200 ms prior to a strike, which we think is a conservative enough cutoff to say that fish were probably pursuing prey in this window (it is shorter than the average behavioral syllable duration in medaka).

      - Figure 3 (particularly 3D) suggests the interesting finding that LA essentially only hunt prey that is directly in front of them (unlike LO and NM, the distribution of prey azimuth actually seems to broaden slightly over the duration of hunting events).

      This is worthy of discussion.

      We offer a suggestion for the many instances of prey capture being initiated in the central visual field in LA later in the manuscript when we discuss spitting behavior. We have added text to make this point earlier in the manuscript. The increase in azimuthal range at the end of prey capture may be due to abort swims (e.g. supp. vid. 1, 00:21). The widening of azimuthal angles is present in LO and NM also and is not unique to LA.

      - The reference Ding et al (2016) is not in the reference list.

      The wrong paper was referenced; it should be Ding 2019, which has been added to the bibliography.

      - I am not convinced that medaka exhibit a unique side-swing behavior. I agree there is this tendency in the example movie, however, the results of the quantification (Figure 4) are underwhelming. First, cluster 5 in 4K appears to include a proportion of cases from LA and AB. These proportions may be small, but anything above zero means this is not unique to medaka. Second, the heading angle (4N) starts at 4 degrees for LA and 8 degrees for medaka. This difference is genuine but very small, much smaller than what's drawn in the schematic (4M). I'm not sure it's justifiable to call a difference of 4 degrees a qualitatively different strategy.

      We have changed the text to highlight that side swing is highly enriched in medaka. Comparing 4J to 3B, we would argue that there is a qualitative difference in the strategy used to capture prey between the cichlid larvae we study here and medaka. We agree that further work is required to understand distance estimation behaviors in different species. In this manuscript, we use heading angle as a proxy for how prey position might change on the retina over a hunting sequence. But as the heading and distance are changing over time, the actual change in angle on the retina for prey may be much larger than the ~8 degree shift reported here. The actual position of the prey is also important here, which, for reasons mentioned above, we could not track. Given the final location of prey in the visual field prior to the strike (Fig 4J), the most parsimonious explanation of the data is that the prey is always in the monocular visual field. In cichlids, the prey is more-or-less centered in the 200 ms preceding the strike. While it is true that the absolute difference in heading is 4 degrees, when converted to an angular velocity (4N, right), the medaka (OL) effectively rotate twice as fast as LA (40 deg/s vs 20 deg/s), which we think is a substantial difference and evidence of a different targeting strategy.

      - 4K: This is referred to in the caption as a confusion matrix, which it's not.

      Fixed.

      - 4N right panel: how many fish contributed to the points shown?

      Added to figure legend (n=113, LA; n=36, OL). Same data in left and right panels.

      - In the Discussion it is hypothesized that medaka use their lateral line in hunting more than in other species. Testing this hypothesis (even just compared to one other species) would be fairly straightforward, and would add significant interest to the paper overall.

      We agree that this is an interesting experiment for follow up studies, but it is beyond the scope of the current manuscript as we do not have the appropriate animal license for this experiment.

      Reviewer 2:

      The paper is rather descriptive in nature, although more context is provided in the discussion. Most figures are great, but I think the authors could add a couple of visual aids in certain places to explain how certain components were measured.

      Added new supplemental figure (Supp Fig 2)

      Figure 1B- it could be useful to add zebrafish and medaka to the scientific names (I realize it's already in Figure A but I found myself going back and forth a couple of times, mostly trying to confirm that O. latipes is medaka).

      Added common names to 1B, sprinkled reminders of OL/medaka throughout text.

      Figure 1G. I wasn't sure how to interpret the eye angle relative to the midline. Can they rotate their eyes or is this due to curvature in the 'upper' body of the fish? Adding a schematic figure or something like that could help a reader who is not familiar with these methods. Related to this, I was a bit confused by Figure 2A. After reading the methods section, I think I understand - but a little cartoon to describe this would help. It also reminds the reader (especially if they don't work with fish) that fish eyes can rotate. I also wanted to note that initially, I thought convergence was a measure of how the two eyes were positioned relative to the prey given the emphasis given on binocular vision, and only after reading certain sections again did I realize convergence was a measure of eye rotation/movement.

      New supplemental figure explaining how eye tracking is performed

      Figure 3. It was not immediately clear to me what onset, middle, and end represented - although it is explained in the caption. I think what tripped me up is the 'eye convergence' title in the top right corner of Figure 3A.

      Updated figure with schematic illustrating that time is measured relative to eye convergence onset and end.

      The result section about attack swim, S-strike, capture spring, etc. was a bit confusing to read and could benefit from a couple of concise descriptions of these behaviors. For example, I am not familiar with the S strike but a couple of paragraphs into this section, the reader learns more about the difference between S strike vs. attack swim. This can be mentioned in the first paragraph when these distinct behaviors are mentioned.

      Added description of behavior earlier in text.

      Figure 4. Presents lots of interesting data! I wonder if using Figure 1E could help the reader better understand how these measurements were taken.

      New supplemental figure added, explaining how tail tracking is performed.

      I probably overlooked this, but I wonder why so many panels are just focused on one species.

      Added explanation to the text.

      Is the S-shaped capture strategy the same as an S strike?

      Clarified in text to say "S-strike-like". This is a description of prey capture from adult largemouth bass in New et al. (2002). From the still frames shown in that paper, the kinematics look similar to an S-strike or capture spring. The important point we wish to make is that the tail is coiled in an S-shape prior to a strike, which indicates that a kinematically similar behavior exists in fishes beyond larval cichlids and zebrafish.

      At the end of the page, when continuous swimming versus interrupted swimming is discussed, please remind the reader that medaka shows more continuous swimming (longer bouts).

      Added "while medaka swim continuously with longer bouts ("gliding")".

      After reading the discussion, it looks like many findings are unique. For example, given that medaka is such a popular model species in biology, it strikes me that nobody has ever looked into their hunting movements before. If their findings are novel, perhaps they should state so it is clear that the authors are not ignoring the literature.

      We have highlighted what we believe to be the novelty of our findings (first description of prey capture in larval cichlids and medaka). To our knowledge, we are first to describe hunting in medaka; but there is an extensive literature on medaka dating back to the early 20th century, some of which is only published in Japanese. We have done our best to review the literature, but we cannot rule out that there are papers that we missed. No English language article or review we found mentions literature on hunting behavior in medaka larvae.

      Reviewer 3:

      More evidence is needed to assess the types of visual monocular depth cues used by medaka fish to estimate prey location, but that is beyond the scope of this compelling paper. For example, medaka may estimate depth through knowledge of expected prey size, accommodation, defocus blur, ocular parallax, and/or other possible algorithms to complement cues from motion parallax.

      Added sentence to discussion highlighting that other cues may also contribute to distance estimation in cichlids and medakas. Follow-up studies will require new animal license.

      None. It's quite nice, timely, and thorough work! For future work, one could use 3D pose estimation of eye and prey kinematics to assess the dynamics of the 2D image (prey and background) cast onto the retina. This sort of representation could be useful to infer which monocular depth cues may be used by medaka during hunting.

      Great suggestion for follow up studies. Bolton et al. and Mearns et al. both find changes in z associated with prey capture, and it would be interesting to see how other fish species use the full 3-dimensional water column during prey capture, especially considering the diversity of hunting strategies in adult cichlids (ranging from piscivorous species, like LA, to algae grazers).

      In Figure 4N, you use "change in heading leading up to a strike as a proxy for the change in visual angle of the prey for cichlids and medaka." This proxy makes sense, but you also have the eye angles and (in some cases) the prey positions. One could estimate the actual change in visual angle from this information, which would also allow one to measure whether the fish are trying to stabilize the position of the prey on a high-acuity patch of the retina during the final moments of the hunt. This information may also shed light on which monocular depth cues are used.

      As addressed in comment to reviewer 1, this would require actually manually tracking individual paramecia over hundreds of frames. It is not possible to determine exactly when hunting begins in medaka, and it is prone to errors if medaka switch between targets over the course of a hunting episode. This question is better addressed with psychophysics experiments in embedded animals where it is possible to precisely control the stimulus, but this requires new animal licenses and is beyond the scope of this paper.

      In Figure 5, you could place the prey object a little farther from the D. rerio fish for the S-strike diagram.

      Fixed.

      Figure 4F legend should read "...at the peak of each bout."

      Fixed.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Thank you for your constructive feedback and recognition of our work. We followed your suggestion and improved the accuracy of the language used to interpret some of our findings. 

      Summary:

      The present study by Mikati et al demonstrates an improved method for in-vivo detection of enkephalin release and studies the impact of stress on the activation of enkephalin neurons and enkephalin release in the nucleus accumbens (NAc). The authors refine their pipeline to measure met and leu enkephalin using liquid chromatography and mass spectrometry. The authors subsequently measured met and leu enkephalin in the NAc during stress induced by handling, and fox urine, in addition to calcium activity of enkephalinergic cells using fiber photometry. The authors conclude that this improved tool for measuring enkephalin reveals experimenter handling stress-induced enkephalin release in the NAc that habituates and is dissociable from the calcium activity of these cells, whose activity doesn't habituate. The authors subsequently show that NAc enkephalin neuron calcium activity does habituate to fox urine exposure, is activated by a novel weigh boat, and that fox urine acutely causes increases in met-enk levels, in some animals, as assessed by microdialysis.

      Strengths:

      A new approach to monitoring two distinct enkephalins and a more robust analytical approach for more sensitive detection of neuropeptides. A pipeline that potentially could help for the detection of other neuropeptides.

      Weaknesses:

Some of the interpretations are not fully supported by the existing data or would require further testing to draw those conclusions. This can be addressed by appropriately tempering interpretations and acknowledging other limitations, brought about by procedural differences between experiments, that the authors did not cover.

      We have taken time to go through the manuscript ensuring we are more detailed and precise with our interpretations as well as appropriately acknowledging limitations. 

      Reviewer #2 (Public Review):

      Thank you for your constructive and thorough assessment of our work. In our revised manuscript, we adjusted the text to reflect the references you mentioned regarding the methionine oxidation procedure. Additionally, we expanded the methods section to include the key details of the statistical tests and procedures that you outlined. 

      Summary:

      The authors aimed to improve the detection of enkephalins, opioid peptides involved in pain modulation, reward, and stress. They used optogenetics, microdialysis, and mass spectrometry to measure enkephalin release during acute stress in freely moving rodents. Their study provided better detection of enkephalins due to the implementation of previously reported derivatization reaction combined with improved sample collection and offered insights into the dynamics and relationship between Met- and Leu-Enkephalin in the Nucleus Accumbens shell during stress.

      Strengths:

      A strength of this work is the enhanced opioid peptide detection resulting from an improved microdialysis technique coupled with an established derivatization approach and sensitive and quantitative nLC-MS measurements. These improvements allowed basal and stimulated peptide release with higher temporal resolution, lower detection thresholds, and native-state endogenous peptide measurement.

      Weaknesses:

      The draft incorrectly credits itself for the development of an oxidation method for the stabilization of Met- and Leu-Enk peptides. The use of hydrogen peroxide reaction for the oxidation of Met-Enk in various biological samples, including brain regions, has been reported previously, although the protocols may slightly vary. Specifically, the manuscript writes about "a critical discovery in the stabilization of enkephalin detection" and that they have "developed a method of methionine stabilization." Those statements are incorrect and the preceding papers that relied on hydrogen peroxide reaction for oxidation of Met-Enk and HPLC for quantification of oxidized Enk forms should be cited. One suggested example is Finn A, Agren G, Bjellerup P, Vedin I, Lundeberg T. Production and characterization of antibodies for the specific determination of the opioid peptide Met5-Enkephalin-Arg6-Phe7. Scand J Clin Lab Invest. 2004;64(1):49-56. doi: 10.1080/00365510410004119. PMID: 15025428.

Thank you for highlighting this. It was not our intention to imply that we developed the oxidation method, but rather that we were able to improve the detection of Met-Enkephalin by oxidation of the methionine without compromising the detection resolution of Leu-Enkephalin, enabling the simultaneous detection of both peptides. We have addressed this in the manuscript and included the suggested citation.

      Another suggestion for this draft is to make the method section more comprehensive by adding information on specific tools and parameters used for statistical analysis:

      (1) Need to define "proteomics data" and explain whether calculations were performed on EIC for each m/z corresponding to specific peptides or as a batch processing for all detected peptides, from which only select findings are reported here. What type of data normalization was used, and other relevant details of data handling? Explain how Met- and Leu-Enk were identified from DIA data, and what tools were used.

Thank you for pointing out this source of confusion. We believe it arises because we use a different DIA method than is typically used in the literature. Briefly, we use a DIA method with a targeted inclusion list to ensure MS2 triggering, as opposed to using large isolation widths to capture all precursors for fragmentation, as is typically done with MS1 features. In our method, MS2 is triggered based on the 4 selected m/z values (heavy and light versions of the Leu- and Met-Enkephalin peptides) at specific retention time windows with an isolation width of 2 Da, regardless of the intensity of the MS1 signal of the peptides.

      (2) Simple Linear Regression Analysis: The text mentions that simple linear regression analysis was performed on forward and reverse curves, and line equations were reported, but it lacks details such as the specific variables being regressed (although figures have labels) and any associated statistical parameters (e.g., R-squared values). 

Additional detail about the linear regression process was added to the methods section; please see lines 614-618. The R-squared values are also now shown on the figure.

      ‘For the forward curves, the regression was applied to the measured concentration of the light standard as the theoretical concentration was increased. For plotting purposes, we show the measured peak area ratios for the light standards in the forward curves. For the reverse curves, the regression was applied to the measured concentration of the heavy standard, as the theoretical concentration was varied.’
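The forward-curve fitting described in the quoted passage can be sketched as follows. This is a minimal illustration with made-up concentration values (not the authors' data), assuming a simple regression of measured concentration on theoretical concentration as stated:

```python
import numpy as np
from scipy import stats

# Hypothetical forward-curve data: theoretical concentrations of the
# light standard (amol/sample) vs. the concentrations measured by LC-MS.
theoretical = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
measured = np.array([9.1, 19.5, 41.2, 78.9, 158.3])

# Simple linear regression of measured on theoretical concentration,
# as described for the forward curves.
fit = stats.linregress(theoretical, measured)

print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.2f}")
print(f"R^2 = {fit.rvalue**2:.4f}")
```

A near-unity slope and high R-squared indicate that the measured concentrations track the theoretical ones across the calibration range.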

      (3) Violin Plots: The proteomics data is represented as violin plots with quartiles and median lines. This visual representation is mentioned, but there is no detail regarding the software/tools used for creating these plots.

      We used Graphpad Prism to create these plots. This detail has been added to the statistical analysis section. See line 630.

      (4) Log Transformation: The text states that the data was log-transformed to reduce skewness, which is a common data preprocessing step. However, it does not specify the base of the logarithm used or any information about the distribution before and after transformation.

We have added the requested details about the log transformation, and how the data looked before and after, into the statistical analysis section. We followed the convention that "log" refers to base 10 unless otherwise specified as natural log (base e) or a different base. See lines 622-625

      ‘The data was log10 transformed to reduce the skewness of the dataset caused by the variable range of concentrations measured across experiments/animals. Prior to log transformation, the measurements failed normality testing for a Gaussian distribution. After the log transformation, the data passed normality testing, which provided the rationale for the use of statistical analyses that assume normality.’
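The transform-then-test procedure in the quoted passage can be illustrated with synthetic data. This sketch uses a log-normal sample and SciPy's Shapiro-Wilk test, not the authors' measurements or software:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical right-skewed concentration measurements: log-normally
# distributed raw values typically fail a normality test...
raw = rng.lognormal(mean=3.0, sigma=1.0, size=60)
p_raw = stats.shapiro(raw).pvalue

# ...while their log10 transform is consistent with a Gaussian,
# justifying downstream statistics that assume normality.
logged = np.log10(raw)
p_logged = stats.shapiro(logged).pvalue

print(f"Shapiro-Wilk p: raw = {p_raw:.2g}, log10 = {p_logged:.2g}")
```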

      (5) Two-Way ANOVA: Two-way ANOVA was conducted with peptide and treatment as independent variables. This analysis is described, but there is no information regarding the software or statistical tests used, p-values, post-hoc tests, or any results of this analysis.

      Information about the two-way ANOVA analysis has been added to the statistical analysis section. Additionally, more detailed information has been added to the figure legends about the statistical results. Please see lines 625-628.

‘Two-way ANOVA testing was performed with peptide (Met-Enk or Leu-Enk) and treatment (buffer or stress, for example) as the two independent variables. Post-hoc testing was done using Šídák's multiple comparisons test, and the p values for each of these analyses are shown in the figures (Figs. 1F, 2A).’

      (6) Paired T-Test: A paired t-test was performed on predator odor proteomic data before and after treatment. This step is mentioned, but specific details like sample sizes, and the hypothesis being tested are not provided.

      The sample size is included in the figure legend to which we have included a reference. We have also included the following text to highlight the purpose of this test. See lines 628-630

‘A paired t-test was performed on the predator odor proteomic data before and after odor exposure to test the hypothesis that Met-Enk increases following exposure to predator odor (Fig. 3F). These analyses were conducted using Graphpad Prism.’

      (7) Correlation Analysis: The text mentions a simple linear regression analysis to correlate the levels of Met-Enk and Leu-Enk and reports the slopes. However, details such as correlation coefficients, and p-values are missing.

      We apologize for the use of the word correlation as we think it may have caused some confusion and have adjusted the language accordingly. Since this was a linear regression analysis, there is no correlation coefficient. The slope of the fitted line is reported on the figures to show the fitted values of Met-Enk to Leu-Enk. 

(8) Fiber Photometry Data: Z-scores were calculated for fiber photometry data, and a reference to a cited source is provided. This section lacks details about the calculation of z-scores, and their use in the analysis.

      These details have been added to the statistical analysis section. See lines 634-637

‘For the fiber photometry data, the z-scores were calculated using GuPPy, an open-source Python toolbox for fiber photometry analysis. The z-score equation used in GuPPy is z = (ΔF/F - mean of ΔF/F) / (standard deviation of ΔF/F), where F refers to the fluorescence of the GCaMP6s signal.’
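As a minimal sketch of this z-scoring (a hypothetical ΔF/F trace is used here; GuPPy's full pipeline includes additional steps such as baseline handling that are omitted):

```python
import numpy as np

def zscore_dff(dff: np.ndarray) -> np.ndarray:
    """Z-score a dF/F trace: subtract its mean, divide by its standard deviation."""
    return (dff - dff.mean()) / dff.std()

# Hypothetical dF/F trace: flat baseline with a transient response.
dff = np.concatenate([np.zeros(100), np.full(20, 0.5), np.zeros(100)])
z = zscore_dff(dff)

# By construction, the z-scored trace has mean 0 and standard deviation 1,
# so deviations from baseline are expressed in standard-deviation units.
print(f"mean = {z.mean():.2e}, std = {z.std():.2f}")
```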

      (9) Averaged Plots: Z-scores from individual animals were averaged and represented with SEM. It is briefly described, but more details about the number of animals, the purpose of averaging, and the significance of SEM are needed.

      We have added additional information about the averaging process in the statistical analysis section. See lines 639-643.

      ‘The purpose of the averaged traces is to show the extent of concordance of the response to experimenter handling and predator odor stress among animals with the SEM demonstrating that variability. The heatmaps depict the individual responses of each animal. The heatmaps were plotted using Seaborn in Python and mean traces were plotted using Matplotlib in Python.’

      A more comprehensive and objective interpretation of results could enhance the overall quality of the paper.

      We have taken this opportunity to improve our manuscript following comments from all the reviewers that we hope has resulted in a manuscript with a more objective interpretation of results. 

      Reviewer #3 (Public Review):

Thank you for your thoughtful review of our work. To clarify some of the points you raised, we revised the manuscript to include more detail on how we distinguish between the oxidized endogenous and standard signals, as well as to refine the language concerning the spatial resolution. We also edited the manuscript regarding the concentration measurements. We conducted technical replicates, so we appreciate you raising this point and have clarified that in the main text.

      Summary:

      This important paper describes improvements to the measurement of enkephalins in vivo using microdialysis and LC-MS. The key improvement is the oxidation of met- to prevent having a mix of reduced and oxidized methionine in the sample which makes quantification more difficult. It then shows measurements of enkephalins in the nucleus accumbens in two different stress situations - handling and exposure to predator odor. It also reports the ratio of released met- and leu-enkephalin matching what is expected from the digestion of proenkephalin. Measurements are also made by photometry of Ca2+ changes for the fox odor stressor. Some key takeaways are the reliable measurement of met-enkephalin, the significance of directly measuring peptides as opposed to proxy measurements, and the opening of a new avenue into the research of enkephalins due to stress based on these direct measurements.

      Strengths:

      -Improved methods for measurement of enkephalins in vivo.

      -Compelling examples of using this method.

      -Opening a new area of looking at stress responses through the lens of enkephalin concentrations.

      Weaknesses:

      (1) It is not clear if oxidized met-enk is endogenous or not and this method eliminates being able to discern that.

We clarified our wording in the text copied below to explain how we distinguish between the two. Even after oxidation, the standard signal has a higher m/z ratio due to the presence of the carbon and nitrogen isotopes, as described in the Chemicals section of the methods: ‘For Met-Enkephalin, a fully labeled L-Phenylalanine (<sup>13</sup>C<sub>9</sub>, <sup>15</sup>N) was added (YGGFM). The resulting mass shifts between the endogenous (light) and heavy isotope-labeled peptides are 7 Da and 10 Da, respectively.’ The standards can therefore still be differentiated from the endogenous signal. We have clarified the language in the results section. See lines 82-87.

      ‘After each sample collection, we add a consistent known concentration of isotopically labeled internal standard of Met-Enk and Leu-Enk of 40 amol/sample to the collected ISF for the accurate identification and quantification of endogenous peptide. These internal standards have a different mass/charge (m/z) ratio than endogenous Met- and Leu-Enk. Thus, we can identify true endogenous signal for Met-Enk and Leu-Enk (Suppl Fig. 1A,C) versus noise, interfering signals, and standard signal (Suppl. Fig. 1B,D).’

      (2) It is not clear if the spatial resolution is really better as claimed since other probes of similar dimensions have been used.

Apologies for any confusion here. To clarify, we primarily state that our approach improves temporal resolution and only in a few cases refer to improved spatiotemporal resolution, which we believe we demonstrate. The dimensions of the microdialysis probe used in these experiments allow us to target the nucleus accumbens shell, and the probe is smaller, especially at the membrane level, than a fiber photometry probe.

      (3) Claims of having the first concentration measurement are not quite accurate.

      Thank you for your feedback. To clarify, we do not claim that we have the first concentration measurements, rather we are the first to quantify the ratio of Met-Enk to Leu-Enk in vivo in freely behaving animals in the NAcSh. 

(4) Without a report of technical replicates, the reliability of the method is not as well-evaluated as might be expected.

      We have added these details in the methods section, please see lines 521-530. 

      ‘Each sample was run in two technical replicates and the peak area ratio was averaged before concentration calculations of the peptides were conducted. Several quality control steps were conducted prior to running the in vivo samples. 1) Two technical replicates of a known concentration were injected and analyzed – an example table from 4 random experiments included in this manuscript is shown below. 2) The buffers used on the day of the experiment (aCSF and high K+ buffer) were also tested for any contaminating Met-Enk or Leu-Enk signals by injecting two technical replicates for each buffer. Once these two criteria were met, the experiment was analyzed through the system. If either step failed, which happened a few times, the samples were frozen and the machines were cleaned and restarted until the quality control measures were met.’
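The replicate-averaging step in the quoted passage can be sketched as follows. The ratio-to-amount conversion shown is an assumption for illustration (standard isotope-dilution quantification against the 40 amol/sample spike mentioned earlier); the authors' exact calibration calculation is not reproduced here:

```python
import numpy as np

# Hypothetical light/heavy peak area ratios from the two technical
# replicates of one sample, and the known internal-standard spike.
replicate_ratios = np.array([0.52, 0.48])
standard_amol = 40.0  # amol/sample of heavy standard added (from the methods)

# Average the peak area ratio across replicates first, as described...
mean_ratio = replicate_ratios.mean()

# ...then convert to an endogenous amount against the known spike
# (assumed isotope-dilution calculation, for illustration only).
endogenous_amol = mean_ratio * standard_amol

print(f"mean ratio = {mean_ratio:.2f} -> {endogenous_amol:.1f} amol/sample")
```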

      Recommendations For The Authors:

      Reviewer #1 (Recommendations For The Authors):

      • The authors should provide appropriate citations of a study that has validated the Enkephalin-Cre mouse line in the nucleus accumbens or provide verification experiments if they have any available.

      Thank you for your comment. We have added a reference validating the Enk-Cre mouse line in the nucleus accumbens to the methods section and is copied here. 

D.C. Castro, C.S. Oswell, E.T. Zhang, C.E. Pedersen, S.C. Piantadosi, M.A. Rossi, A.C. Hunker, A. Guglin, J.A. Morón, L.S. Zweifel, G.D. Stuber, M.R. Bruchas, An endogenous opioid circuit determines state-dependent reward consumption, Nature 598 (2021) 646–651. https://doi.org/10.1038/s41586-021-04013-0

      • Better definition of the labels y1,y2,b3 in Figures 1 and S1 would be useful. I may have missed it but it wasn't described in methods, results, or legends.

Thank you for this comment. We have added this information to the Fig. 1 legend: ‘y1, y2, b3 refer to the different fragment ions resulting from Met-Enk during LC-MS.’

      • It is interesting that the ratio of KCl-evoked release is what changes differentially for Met- vs Leu. Leu enk increases to the range of met-enk. There is non-detectable or approaching being non-detectable leu-enk (below the 40 amol / sample limit of quantification) in most of the subjects that become apparent and approach basal levels of met-enkephalin. This suggests that the K+ evoked response may be more pronounced for leu-enk. This is something that should be considered for further analysis and should be discussed.

Thank you for this astute observation, and you make a great point. We have added some discussion of this finding in the results and discussion sections; see lines 111-112 and lines 253-257.

      ‘Interestingly, Leu-Enk showed a greater fold change compared to baseline than did Met-Enk with the fold changes being 28 and 7 respectively based on the data in Fig.1F.’

      ‘We also noted that Leu-Enk showed a greater fold increase relative to baseline after depolarization with high K+ buffer as compared to Met-Enk. This may be due to increased Leu-Enk packaging in dense core vesicles compared to Met-Enk or due to the fact that there are two distinct precursor sources for Leu-Enk, namely both proenkephalin and prodynorphin while Met-Enk is mostly cleaved from proenkephalin (see Table 1 [48]).’

      • For example in 2E, it would be helpful to label in the graph axis what samples correspond to the manipulation and also in the text provide the reader with the sample numbers. The authors interpret the relationship between the last two samples of baseline and posthandling stress as the following in the figure legend "the concentration released in later samples is affected; such influence suggests that there is regulation of the maximum amount of peptide to be released in NAcSh. E. The negative correlation in panel d is reversed by using a high K+ buffer to evoke Met-Enk release, suggesting that the limited release observed in D is due to modulation of peptide release rather than depletion of reserves." However, the correlations are similar between 2D and E and it appears that two mice are mediating the difference between the two groups. The appropriate statistical analysis would be to compare the regressions of the two groups. Statistics for the high K+ (and all other graphs where appropriate) need to be reported, including the r2 and p-value.

      Thank you for your constructive critique. To elucidate the effect of high K+, we have plotted the regression line and reported the slope for Fig. 2E. Notably, the slope is reduced by a factor of 2 and appears to be driven by a large subset of the animals. The statistics for the high K+ graph are shown on the figure (Fig 1F) which test the hypothesis of whether high K+ leads to the release of Leu-Enk and Met-Enk respectively compared to baseline with aCSF. We have added the test statistics to the figure legend for additional clarity. Fig. 1G has no statistics because it is only there to elucidate the ratio between Met-Enk and Leu-Enk in the same samples. We did not test any hypotheses related to whether there are differences between their levels as that is not relevant to our question. The correlation on the same data is depicted in Fig. 1H, and we have added the R<sup>2</sup> value per your request. 

      • The interpretation that handling stress induces enkephalin release from microdialysis experiments is also confounded by other factors. For instance, from the methods, it appears that mice were connected and sample collection started 30 min after surgery, therefore recovery from anesthesia is also a confounding variable, among other technical aspects, such as equilibration of the interstitial fluid to the aCSF running through the probe that is acting as a transmitter and extracellular molecule "sink". Did the authors try to handle the mice post hookup similar to what was done with photometry to have a more direct comparison to photometry experiments? This procedural difference, recording from recently surgerized animals (microdialysis) vs well-recovered animals with photometry should be mentioned in addition to the other caveats the authors mention.

      Thank you for your comment. We are aware of this technical limitation, and it is largely why we sought to conduct the fiber photometry experiments to get at the same question. As you requested, we have included additional language in the discussion to acknowledge this limitation and how we chose to address it by measuring calcium activity in the enkephalinergic neurons, which would presumably be the same cell population whose release we are quantifying using microdialysis. See lines 262-273.  

      ‘Our findings showed a robust increase in peptide release at the beginning of experiments, which we interpreted as due to experimenter handling stress that directly precedes microdialysis collections. However, there are other technical limitations to consider such as the fact that we were collecting samples from mice that were recently operated on. Another consideration is that the circulation of aCSF through the probe may cause a sudden shift in oncotic and hydrostatic forces, leading to increased peptide release to the extracellular space. As such, we wanted to examine our findings using a different technique, so we chose to record calcium activity from enkephalinergic neurons - the same cell population leading to peptide release. Using fiber photometry, we showed that enkephalinergic neurons are activated by stress exposure, both experimenter handling and fox odor, thereby adding more evidence to suggest that enkephalinergic neurons are activated by stress exposure which could explain the heightened peptide levels at the beginning of microdialysis experiments.’

      • The authors should provide more details on handling stress manipulation during photometry. For photometry what was the duration of the handling bout, what was the interval between handling events, and can the authors provide a description of what handling entailed? Were mice habituated to handling days before doing photometry recording experiments?

      Thank you for your suggestion. We have addressed all of your points in the methods section. See lines 564-570. 

      ‘The handling bout which mimicked traditional scruffing lasted about 3-5 seconds. The mouse was then let go and the handling was repeated another two times in a single session with a minimum of 1-2 minutes between handling bouts. Mice were habituated to this manipulation by being attached to the fiber photometry rig, for 3-5 consecutive days prior to the experimental recording. Additionally, the same maneuver was employed when attaching/detaching the fiber photometry cord, so the mice were subjected to the same process several times.’

      • For the novel weigh boat experiments, the authors should explicitly state when these experiments were done in relation to the fox urine, was it a different session or the same session? Were they the same animals? Statements like the following (line 251) imply it was done in the same animals in the same session but it should be clarified in the methods "We also showed using fiber photometry that the novelty of the introduction of a foreign object to the cage, before adding fox odor, was sufficient to activate enkephalinergic neurons."

      As shown in supplementary figure 4, individual animal data is shown for both water and fox urine exposure (overlaid) to depict whether there were differences in their responses to each manipulation – in the same animal. And yes, you are correct, the animals were first exposed to water 3 times in the recording session and then exposed to fox urine 3 times in the same session. We have added that to the methods section describing in vivo fiber photometry. See lines 575-576.  

• Statistical testing would be needed to affirm the conclusions the authors draw from the fox urine and novel weigh boat experiments: for example, statistics showing that the response attenuates, or that it is not different between fox urine and the novel object (it looks like the response is stronger to the fox urine when looking at the individual animals), etc. These data look clear, but stats are formally needed. Formal statistics are also missing in other parts of the manuscript where conclusions are drawn from the data but direct statistical comparisons are not included (e.g. Fig. 2G-I).

The photometry data is shown as z-scores, which is a formal statistical analysis. ANOVA would be inappropriate to run to compare z-scores. We understand that this is erroneously done in the fiber photometry literature; however, it remains incorrect. The z-scores alone provide all the information needed about the deviation from baseline. We understand that this is not immediately clear to readers, and we thank you for allowing us to explain why this is the case. We have added test statistics to the figure legends where hypothesis testing was done and p-values were reported.

      • Did the authors try to present the animals with repeated fox urine exposure to see if this habituates like the photometry?

      No, we did not do that experiment due to the constrained timing within which we had to run our microdialysis/LC-MS timeline, but it is a great point for future exploration. 

      • It would be useful to present the time course of the odor experiment for the microdialysis experiment.

      The timeline is shown in Fig.1a and Fig.3e. To reiterate, each sample is 13 minutes long.

• Can the authors determine if differences in behavior (e.g. excessive avoidance in animals with one type of response) or microdialysis probe location dictate whether animals fall into categories of increased release, no release, or no-detection? From the breakdown, it looks like it is almost equally split into three parts but the authors' descriptions of this split are somewhat misleading (line 210). " The response to predator odor varies appreciably: although most animals show increased Met-Enk release after fox odor exposure, some show continued release with no elevation in Met-Enk levels, and a minority show no detectable release".

Thank you for your constructive feedback. We do not believe the difference in behavior is correlated with probe placement. The hit map can be found in Suppl. Fig. 3 and shows that all mice included in the manuscript had probes in the NAcSh. We purposely did not distinguish between dorsal and ventral placements because our 1 mm membrane makes it hard to presume exclusive sampling from one subregion. That is a great point, though, and we have thought about it extensively for future studies. We have edited the language to reflect the almost even split of responses for Met-Enk and appreciate you pointing that out.

      • Overall, given the inconsistencies in experimental design and overall caveats associated, I think the authors are unable to draw reasonable conclusions from the repeated stressor experiments and something they should either consider is not trying to draw strong conclusions from these observations or perform additional experiments that provide the grounds to derive those conclusions.

We have included additional language on the caveats of our study, and our use of a dual approach combining fiber photometry and microdialysis was largely driven by a desire to offer additional support for our conclusions. We expected pushback about our conclusions, so we wanted to offer a secondary analysis using a different technique to test our hypothesis. To be honest, the tone and content of this comment are not particularly constructive (especially for trainees), nor does it offer a space to realistically address anything. This work took multiple years to optimize, it was led by a graduate student, and it required a multidisciplinary team. As highlighted, we believe it offers an important contribution to the literature and pushes the field of peptide detection forward.

      Reviewer #2 (Recommendations For The Authors):

      A more comprehensive and objective interpretation of results could enhance the overall quality of the paper. The manuscript contains statements like "we are the first to confirm," which can be challenging to substantiate and may not significantly enhance the paper. It's essential to ensure that novelty statements are well-founded. For example, the release of enkephalins from other brain regions after stress exposure is well-documented but not addressed in the paper. Similarly, the role of the NA shell in stress has been extensively studied but lacks coverage in this manuscript.

      We have edited the language to reflect your feedback. We have also included relevant literature expanding on the demonstrated roles of enkephalins in the literature. We would like to note that most studies have focused on chronic stress, and we were particularly interested in acute stress. See lines 129-134.

      ‘These studies have included regions such as the locus coeruleus, the ventral medulla, the basolateral nucleus of the amygdala, and the nucleus accumbens core and shell. Studies using global knockout of enkephalins have shown varying responses to chronic stress interventions where male knockout mice showed resistance to chronic mild stress in one study, while another study showed that enkephalin-knockout mice showed delayed termination of corticosteroid release. [33,34]’ 

      Finally, not a weakness but a clarification suggestion: the method description mentions the use of 1% FA in the sample reconstitution solution and LC solvents, which is an unusually high concentration of acid. If this concentration is intentional for maintaining the peptides' oxidation state, it would be beneficial to mention this in the text to assist readers who might want to replicate the method.

This is correct and has been clarified in the methods section.

      Reviewer #3 (Recommendations For The Authors):

      -The Abstract should state the critical improvements that are made. Also, quantify the improvements in spatiotemporal resolution.

      Thank you for your comment. We have edited the abstract to reflect this. 

      - The use of "amol/sample" as concentration is less informative than an SI units (e.g., pM concentration) and should be changed. Especially since the volume used was the same for in vivo sampling experiments.

Thank you for your comment. We chose to report amol/sample because we are measuring such small concentrations and wanted to account for any slight errors in volume, which can make drastic differences in reported concentrations, especially since samples are dried and resuspended.

-Please check this sentence: "After each collection, the samples were spiked with 2 µL of 12.5 fM isotopically labeled Met-Enkephalin and Leu-Enkephalin". This dilution would yield a concentration of ~2 fM. In a 12 µL sample, that would be ~0.02 amol, well below the detection limit. (Note that fM would be femtomolar concentration, and fmol would be femtomoles added.)

      -"liquid chromatography/mass spectrometry (LC-MS) [9-12]"... Reference 9 is a RIA analysis paper, not LC-MS as stated.

      Thank you for catching these. We have corrected the unit and citation. 
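The reviewer's unit check can be reproduced with a short back-of-the-envelope calculation (a sketch only; the spike and sample volumes are taken at face value from the reviewer's comment):

```python
# 2 µL of a 12.5 fM standard spiked into a ~12 µL dialysate sample,
# reading "fM" literally as femtomolar (mol/L).
spike_volume_L = 2e-6                        # 2 µL spike
sample_volume_L = 12e-6                      # ~12 µL total sample
conc_M = 12.5e-15                            # 12.5 fM as a molar concentration

amount_mol = conc_M * spike_volume_L         # moles of labeled peptide added
amount_amol = amount_mol / 1e-18             # ~0.025 amol, far below detection
final_conc_fM = 12.5 * spike_volume_L / sample_volume_L   # ~2.1 fM after dilution

print(f"spiked amount: {amount_amol:.3f} amol; final conc: {final_conc_fM:.2f} fM")
```

If the intended unit was instead fmol (an amount), the spiked quantity would be orders of magnitude larger, which is why the distinction matters for replication.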

      -Given that improvements in temporal resolution are claimed, the lack of time course data with a time axis is surprising. Rather, data for baseline and during treatment appear to be combined in different plots. Time course plots of individuals and group averages would be informative.

      Due to the expected variability in individual animal time course data (for example, detectable levels in one sample followed by no detection in the next), it was very difficult to combine data across time. Therefore, to maximize data inclusion from all animals that showed baseline measurements and responses to individual manipulations, we opted to report snapshot data. Our improvement in temporal resolution refers to the duration of each sample rather than to continuous sampling, so the two are unrelated. Thank you for your feedback and for allowing us to clarify this.

      - I do not understand this claim "We use custom-made microdialysis probes, intentionally modified so they are similar in size to commonly used fiber photometry probes to avoid extensive tissue damage caused by traditional microdialysis probes (Fig. 1B)." The probes used are 320 um OD and 1 mm long. This is not an uncommon size of microdialysis probes and indeed many are smaller, so is their probe really causing less damage than traditional probes?

      Thank you for your comment. We are only making the point that the tissue damage from these probes is comparable to that from commonly used fiber photometry probes. We note this because tissue damage is sometimes cited in the literature to dissuade the use of microdialysis, and we wanted to disambiguate that. We have clarified the statement you pointed out.  

      -The oxidation procedure is a good idea, as mentioned above. It would be interesting to compare met-enk with and without the oxidation procedure to see how much it affects the result (I would not say this is necessary though). It is not uncommon to add antioxidants to avoid losses like this. Also, it should be acknowledged that the treatment does prevent the detection of any in vivo oxidation, perhaps that is important in met-enk metabolism?

      The comparison between oxidized and unoxidized Met-Enk detection is in figure 1C. 

      -It would be a best practice to report the standard deviation of signal for technical replicates (say near in vivo concentrations) of standards and repeated analysis of a dialysate sample to be able to understand the variability associated with this method. Similarly, an averaged basal concentration from all rats.

      Thank you for your comment. We have included a table showing example quality control standard injections from 4 randomly selected experiments included in the manuscript, which were run before and after each experiment, along with descriptive statistics for these technical replicates. We also added detail to the methods section describing how quality control is done. See lines 521-530. 

      ‘Each sample was run in two technical replicates and the peak area ratio was averaged before concentration calculations of the peptides were conducted. Several quality control steps were conducted prior to running the in vivo samples. 1) Two technical replicates of a known concentration were injected and analyzed – an example table from 4 random experiments included in this manuscript is shown below. 2) The buffers used on the day of the experiment (aCSF and high K+ buffer) were also tested for any contaminating Met-Enk or Leu-Enk signals by injecting two technical replicates for each buffer. Once these two criteria were met, the experiment was analyzed through the system. If either step failed, which happened a few times, the samples were frozen and the machines were cleaned and restarted until the quality control measures were met.’

      EDITOR'S NOTE

      Should you choose to revise your manuscript, please include full statistical reporting including exact p-values wherever possible alongside the summary statistics (test statistic and df) and 95% confidence intervals. These should be reported for all key questions and not only when the p-value is less than 0.05.

      Thank you for your suggestion. We have included more detail about statistical analysis in the figure legends per this comment and reviewer comments.

    Reviewer #1 (Public review):

      The results of these experiments support a modest but important conclusion: If sub-optimal methods are used to collect retrospective reports, such as simple yes/no questions, inattentional blindness (IB) rates may be overestimated by up to ~8%.

      (1) In experiment 1, data from 374 subjects were included in the analysis. As shown in figure 2b, 267 subjects reported noticing the critical stimulus and 107 subjects reported not noticing it. This translates to a 29% IB rate if we were to only consider the "did you notice anything unusual Y/N" question. As reported in the results text (and figure 2c), when asked to report the location of the critical stimulus (left/right), 63.6% of the "non-noticer" group answered correctly. In other words, 68 subjects were correct about the location while 39 subjects were incorrect. Importantly, because the location judgment was a 2-alternative-forced-choice, the assumption was that if 50% (or at least not statistically different than 50%) of the subjects answered the location question correctly, everyone was purely guessing. Therefore, we can estimate that ~39 of the subjects who answered correctly were simply guessing (because 39 guessed incorrectly), leaving 29 subjects from the non-noticer group who were correct on the 2AFC above and beyond the pure guess rate. If these 29 subjects are moved from the non-noticer to the noticer group, the corrected rate of IB for Experiment 1 is 20.86% instead of the original 28.61% rate that would have been obtained if only the Y/N question was used. In other words, relying only on the "Y/N did you notice anything" question led to an overestimate of IB rates by 7.75% in Experiment 1.
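The correction described above can be sketched in a few lines (a minimal illustration using the Experiment 1 counts quoted in this paragraph; the function name is ours):

```python
def corrected_ib_rate(total, non_noticers, afc_correct, afc_incorrect):
    """Guess-corrected IB rate: assumes incorrect 2AFC responses index pure
    guessing, so an equal number of correct responses are also guesses; the
    excess correct responses are reclassified as noticers."""
    excess_correct = afc_correct - afc_incorrect     # correct above the guess rate
    return (non_noticers - excess_correct) / total

# Experiment 1 counts from above: 374 subjects, 107 Y/N non-noticers,
# of whom 68 were correct and 39 incorrect on the 2AFC location question
raw_rate = 107 / 374                                 # Y/N question alone: ~28.61%
corrected = corrected_ib_rate(total=374, non_noticers=107,
                              afc_correct=68, afc_incorrect=39)
print(f"raw: {raw_rate:.2%}  corrected: {corrected:.2%}")
```

The same function applied to the counts from Experiments 2-5 yields the corrected rates listed below.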

      In the revised version of their manuscript, the authors provided the data that was missing from the original submission, which allows this same exercise to be carried out on the other 4 experiments. Using the same logic as above, i.e., calculating the pure-guess rate on the 2AFC, moving the number of subjects above this pure-guess rate to the non-noticer group, and then re-calculating a "corrected IB rate", the other experiments demonstrate the following:

      Experiment 2: IB rates were overestimated by 4.74% (original IB rate based only on Y/N question = 27.73%; corrected IB rate that includes the 2AFC = 22.99%)

      Experiment 3: IB rates were overestimated by 3.58% (original IB rate = 30.85%; corrected IB rate = 27.27%)

      Experiment 4: IB rates were overestimated by ~8.19% (original IB rate = 57.32%; corrected IB rate for color* = 39.71%, corrected IB rate for shape = 52.61%, corrected IB rate for location = 55.07%)

      Experiment 5: IB rates were overestimated by ~1.44% (original IB rate = 28.99%; corrected IB rate for color = 27.56%, corrected IB rate for shape = 26.43%, corrected IB rate for location = 28.65%)

      *note: the highest overestimate of IB rates was from Experiment 4, color condition, but the authors admitted that there was a problem with 2AFC color guessing bias in this version of the experiment which was a main motivation for running experiment 5 which corrected for this bias.

      Taken as a whole, this data clearly demonstrates that even with a conservative approach to analyzing the combination of Y/N and 2AFC data, inattentional blindness was evident in a sizeable portion of the subject populations. An important (albeit modest) overestimate of IB rates was demonstrated by incorporating these improved methods.

      (2) One of the strongest pieces of evidence presented in this paper was the single data point in Figure 3e showing that in Experiment 3, even the super subject group that rated their non-noticing as "highly confident" had a d' score significantly above zero. Asking for confidence ratings is certainly an improvement over simple Y/N questions about noticing, and if this result were to hold, it could provide a key challenge to IB. However, this result can most likely be explained by measurement error.

      In their revised paper, the authors reported data that was missing from their original submission: the confidence ratings on the 2AFC judgments that followed the initial Y/N question. The most striking indication that this data is likely due to measurement error comes from the number of subjects who indicated that they were highly confident that they didn't notice anything on the critical trial, but then when asked to guess the location of the stimulus, indicated that they were highly confident that the stimulus was on the left (or right). There were 18 subjects (8.82% of the high-confidence non-noticer group) who responded this way. To most readers, this combination of responses (high confidence in correctly judging a stimulus feature that one is highly confident in having not seen at all) indicates that a portion of subjects misunderstood the confidence scales (or just didn't read the questions carefully or made mistakes in their responses, which is common for experiments conducted online).

      In the authors' rebuttal to the first round of peer review, they wrote, "it is perfectly rationally coherent to be very confident that one didn't see anything but also very confident that if there was anything to be seen, it was on the left." I respectfully disagree that such a combination of responses is rationally coherent. The more parsimonious interpretation is that a measurement error occurred, and it's questionable whether we should trust any responses from these 18 subjects.

      In their rebuttal, the authors go on to note that 14 of the 18 subjects who rated their 2AFC with high confidence were correct in their location judgment. If these 14 subjects were removed from analysis (which seems like a reasonable analysis choice, given their contradictory responses), d' for the high-confidence non-noticer group would most likely fall to chance levels. In other words, we would see a data pattern similar to that plotted in Figure 3e, but with the first data point on the left moving down to zero d'. This corrected Figure 3e would then provide a very nice evidence-based justification for including confidence ratings along with Y/N questions in future inattentional blindness studies.

      (3) In most (if not all) IB experiments in the literature, a partial attention and/or full attention trial is administered after the critical trial. These control trials are very important for validating IB on the critical trial, as they must show that, when attended, the critical stimuli are very easy to see. If a subject cannot detect the critical stimulus on the control trial, one cannot conclude that they were inattentionally blind on the critical trial, e.g., perhaps the stimulus was just too difficult to see (e.g., too weak, too brief, too far in the periphery, too crowded by distractor stimuli, etc.), or perhaps they weren't paying enough attention overall or failed to follow instructions. In the aggregate data, rates of noticing the stimuli should increase substantially from the critical trial to the control trials. If noticing rates are equivalent on the critical and control trials, one cannot conclude that attention was manipulated in the first place.

      In their rebuttal to the first round of peer review, the authors provided weak justification for not including such a control condition. They cite one paper that argues such control conditions are often used to exclude subjects from analysis (those who fail to notice the stimulus on the control trial are either removed from analysis or replaced with new subjects) and such exclusions/replacements can lead to underestimations of inattentional blindness rates. However, the inclusion of a partial or full attention condition as a control does not necessitate the extra step of excluding or replacing subjects. In the broadest sense, such a control condition simply validates the attention manipulation, i.e., one can easily compare the percent of subjects who answered "yes" or who got the 2AFC judgment correct during the critical trial versus the control trial. The subsequent choice about exclusion/replacement is separate, and researchers can always report the data with and without such exclusions/replacements to remain more neutral on this practice.

      If anyone were to follow up on this study, I highly recommend including a partial or full attention control condition, especially given the online nature of data collection. It's important to know the percent of online subjects who answer yes and who get the 2AFC question correct when the critical stimulus is attended, because that is the baseline (in this case, the "ceiling level" of performance) to which the IB rates on the critical trial can be compared.

    Author response:

      The following is the authors’ response to the current reviews.

      Responses to Reviewer #1:

      We thank the reviewer for these additional comments, and more generally for their extensive engagement with our work, which is greatly appreciated. Here, we respond to the three points in their latest review in turn.

      The results of these experiments support a modest but important conclusion: If sub-optimal methods are used to collect retrospective reports, such as simple yes/no questions, inattentional blindness (IB) rates may be overestimated by up to ~8%.

      It is true, of course, that we think the field has overstated the extent of IB, and we appreciate the reviewer characterizing our results as important along these lines. Nevertheless, we respectfully disagree with the framing and interpretation the reviewer attaches to them. As explained in our previous response, we think this interpretation — and the associated calculations of IB overestimation ‘rates’ — perpetuates a binary approach to perception and awareness which we regard as mistaken.

      A graded approach to IB and visual awareness 

      Our sense is that many theorists interested in IB have conceived of perception and awareness as ‘all or nothing’: You either see a perfectly clear gorilla right in front of you, or you see nothing at all. This is implicit in the reviewer’s characterization of our results as simply indicating that fewer subjects fail to see the critical stimulus than previously assumed. To think that way is precisely to assume the orthodox binary position about perception, i.e., that any given subject can neatly be categorized into one of two boxes, saw or didn’t see.

      Our perspective is different. We think there can be degraded forms of perception and awareness that fall neatly into neither of the categories “saw the stimulus perfectly clearly” or “saw nothing at all”. On this graded conception, the question is not: “What proportion of subjects saw the stimulus?” but: “What is the sensitivity of subjects to the stimulus?” This is why we prefer signal detection measures like d′ over % noticing and % correct. This powerful framework has been successful in essentially every domain to which it has been applied, and we think perception and visual awareness are no exception. We understand that the reviewer may not think the same way about this foundational issue, but since part of our goal is to promote a graded approach to perception, we are keen to highlight our disagreement here and so resist the reviewer’s interpretation of our results (even to the extent that it is a positive one!).

      Finally, we note that given this perspective, we are correspondingly inclined to reject many of the summary figures following below in Point (1) by the reviewer. These calculations (given in terms of % noticing and not noticing) make sense on the binary conception of awareness, but not on the SDT-based approach we favor. We say more about this below. 

      (1) In experiment 1, data from 374 subjects were included in the analysis. As shown in figure 2b, 267 subjects reported noticing the critical stimulus and 107 subjects reported not noticing it. This translates to a 29% IB rate if we were to only consider the "did you notice anything unusual Y/N" question. As reported in the results text (and figure 2c), when asked to report the location of the critical stimulus (left/right), 63.6% of the "non-noticer" group answered correctly. In other words, 68 subjects were correct about the location while 39 subjects were incorrect. Importantly, because the location judgment was a 2-alternative-forced-choice, the assumption was that if 50% (or at least not statistically different than 50%) of the subjects answered the location question correctly, everyone was purely guessing. Therefore, we can estimate that ~39 of the subjects who answered correctly were simply guessing (because 39 guessed incorrectly), leaving 29 subjects from the nonnoticer group who were correct on the 2AFC above and beyond the pure guess rate. If these 29 subjects are moved from the non-noticer to the noticer group, the corrected rate of IB for Experiment 1 is 20.86% instead of the original 28.61% rate that would have been obtained if only the Y/N question was used. In other words, relying only on the "Y/N did you notice anything" question led to an overestimate of IB rates by 7.75% in Experiment 1.

      In the revised version of their manuscript, the authors provided the data that was missing from the original submission, which allows this same exercise to be carried out on the other 4 experiments.  

      (To briefly interject: All of these data were provided in our public archive since our original submission and remain available at https://osf.io/fcrhu. The difference now is only that they are included in the manuscript itself.)

      Using the same logic as above, i.e., calculating the pure-guess rate on the 2AFC, moving the number of subjects above this pure-guess rate to the non-noticer group, and then re-calculating a "corrected IB rate", the other experiments demonstrate the following:

      Experiment 2: IB rates were overestimated by 4.74% (original IB rate based only on Y/N question = 27.73%; corrected IB rate that includes the 2AFC = 22.99%)

      Experiment 3: IB rates were overestimated by 3.58% (original IB rate = 30.85%; corrected IB rate = 27.27%)

      Experiment 4: IB rates were overestimated by ~8.19% (original IB rate = 57.32%; corrected IB rate for color* = 39.71%, corrected IB rate for shape = 52.61%, corrected IB rate for location = 55.07%)

      Experiment 5: IB rates were overestimated by ~1.44% (original IB rate = 28.99%; corrected IB rate for color = 27.56%, corrected IB rate for shape = 26.43%, corrected IB rate for location = 28.65%)

      *note: the highest overestimate of IB rates was from Experiment 4, color condition, but the authors admitted that there was a problem with 2AFC color guessing bias in this version of the experiment which was a main motivation for running experiment 5 which corrected for this bias.

      Taken as a whole, this data clearly demonstrates that even with a conservative approach to analyzing the combination of Y/N and 2AFC data, inattentional blindness was evident in a sizeable portion of the subject populations. An important (albeit modest) overestimate of IB rates was demonstrated by incorporating these improved methods.

      We appreciate the work the reviewer has put into making these calculations. However, as noted above, such calculations implicitly reflect the binary approach to perception and awareness that we reject. 

      Consider how we’d think about the single subject case where the task is 2afc detection of a low contrast stimulus in noise. Suppose that this subject achieves 70% correct. One way of thinking about this is that the subject fully and clearly sees the stimulus on 40% of trials (achieving 100% correct on those) and guesses completely blindly on the other 60% (achieving 50% correct on those) for a total of 40% + 30% = 70% overall. However, this is essentially a ‘high threshold’ approach to the problem, in contrast to an SDT approach. On an SDT approach — an approach with tremendous evidential support — on every trial the subject receives samples from probabilistic distributions corresponding to each interval (one noise and one signal + noise) and determines which is higher according to the 2afc decision rule. Thus, across trials, they have access to differentially graded information about the stimulus. Moreover, on some trials they may have significant information from the stimulus (perhaps, well above their single interval detection criterion) but still decide incorrectly because of high noise from the other spatial interval. From this perspective, there is no nonarbitrary way of saying whether the subject saw/did not see on a given trial. Instead, we must characterize the subject’s overall sensitivity to the stimulus/its visibility to them in terms of a parameter such as d′ (here, ~ 0.7).
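As a sketch of the conversion implicit here, the standard unbiased-observer relation d′ = √2 · z(Pc) maps 2AFC proportion correct to sensitivity (this is the conventional SDT formula, not something specific to this paper):

```python
from math import sqrt
from statistics import NormalDist

def dprime_2afc(p_correct):
    """d' for an unbiased observer in 2AFC: d' = sqrt(2) * z(Pc),
    where z is the inverse standard normal CDF."""
    return sqrt(2) * NormalDist().inv_cdf(p_correct)

# 70% correct in 2AFC corresponds to a d' of roughly 0.74
print(round(dprime_2afc(0.70), 2))
```

Chance performance (Pc = 0.5) maps to d′ = 0, which is why sensitivity significantly above zero indicates residual information about the stimulus.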

      We take the same attitude to the subjects in our experiments (and specifically to our ‘super subject’). Instead of calculating the proportion of subjects who saw or failed to see the stimulus (with some characterized as aware and some as unaware), we think the best way to characterize our results is that, across subjects (and so trials also), there was differential graded access to information from the stimulus, and this is best represented in terms of the group-level sensitivity parameter d′. This is why we frame our results as demonstrating that subjects traditionally considered inattentionally blind exhibit significant residual visual sensitivity to the critical stimulus.

      (2) One of the strongest pieces of evidence presented in this paper was the single data point in Figure 3e showing that in Experiment 3, even the super subject group that rated their non-noticing as "highly confident" had a d' score significantly above zero. Asking for confidence ratings is certainly an improvement over simple Y/N questions about noticing, and if this result were to hold, it could provide a key challenge to IB. However, this result can most likely be explained by measurement error.

      In their revised paper, the authors reported data that was missing from their original submission: the confidence ratings on the 2AFC judgments that followed the initial Y/N question. The most striking indication that this data is likely due to measurement error comes from the number of subjects who indicated that they were highly confident that they didn't notice anything on the critical trial, but then when asked to guess the location of the stimulus, indicated that they were highly confident that the stimulus was on the left (or right). There were 18 subjects (8.82% of the high-confidence non-noticer group) who responded this way. To most readers, this combination of responses (high confidence in correctly judging a stimulus feature that one is highly confident in having not seen at all) indicates that a portion of subjects misunderstood the confidence scales (or just didn't read the questions carefully or made mistakes in their responses, which is common for experiments conducted online).

      In the authors' rebuttal to the first round of peer review, they wrote, "it is perfectly rationally coherent to be very confident that one didn't see anything but also very confident that if there was anything to be seen, it was on the left." I respectfully disagree that such a combination of responses is rationally coherent. The more parsimonious interpretation is that a measurement error occurred, and it's questionable whether we should trust any responses from these 18 subjects.

      In their rebuttal, the authors go on to note that 14 of the 18 subjects who rated their 2AFC with high confidence were correct in their location judgment. If these 14 subjects were removed from analysis (which seems like a reasonable analysis choice, given their contradictory responses), d' for the high-confidence non-noticer group would most likely fall to chance levels. In other words, we would see a data pattern similar to that plotted in Figure 3e, but with the first data point on the left moving down to zero d'. This corrected Figure 3e would then provide a very nice evidence-based justification for including confidence ratings along with Y/N questions in future inattentional blindness studies.

      We appreciate the reviewer’s highlighting of this particular piece of evidence as amongst our strongest. (At the same time, we must resist its characterization as a “single data point”: it derives from a large pre-registered experiment involving some 7,000 subjects total, with over 200 subjects in the relevant bin — both figures being far larger than a typical IB experiment.) We also appreciate their raising the issue of measurement error.

      Specifically, the reviewer contends that our finding that even highly confident non-noticers exhibit significant sensitivity is “most likely … explained by measurement error” due to subjects mistakenly inverting our confidence scale in giving their response. In our original reply, we gave two reasons for thinking this quite unlikely; the reviewer has not addressed these in this revised review. First, we explicitly labeled our confidence scale (with 0 labeled as ‘Not at all confident’ and 3 as ‘Highly confident’) so that subjects would be very unlikely simply to invert the scale. This is especially so as it is very counterintuitive to treat “0” as reflecting high confidence. More importantly, however, we reasoned that any measurement error due to inverting or misconstruing the confidence scale should be symmetric. That is: If subjects are liable to invert the confidence scale, they should do so just as often when they answer “yes” as when they answer “no” – after all the very same scale is being used in both cases. This allows us to explore evidence of measurement error in relation to the large number of high-confidence “yes” subjects (N = 2677), thus providing a robust indicator as to whether subjects are generally liable to misconstrue the confidence scale. Looking at the number of such high confidence noticers who subsequently respond to the 2afc question with low confidence (a pattern which might, though need not, suggest measurement error), we found that the number was tiny. Only 28/2677 (1.05%) of high-confidence noticers subsequently gave the lowest level of confidence on the 2afc question, and only 63/2677 (2.35%) subjects gave either of the two lower levels of confidence. For these reasons, we consider any measurement error due to misunderstanding the confidence scale to be extremely minimal.

      The reviewer is correct to note that 18/204 (9%) subjects reported both being highly confident that they didn't notice anything and highly confident in their 2afc judgment, although only 14/18 were correct in this judgment. Should we exclude these 14? Perhaps if we agree with the reviewer that such a pattern of responses is not “rationally coherent” and so must reflect a misconstrual of the scale. But such a pattern is in fact perfectly and straightforwardly intelligible. Specifically, in a 2afc task, two stimuli can individually fall well below a subject’s single interval detection criterion — leading to a high confidence judgment that nothing was presented in either interval. Quite consistent with this, the lefthand stimulus may produce a signal that is much higher than the right-hand stimulus — leading to a high confidence forced-choice judgment that, if something was presented, it was on the left. (By analogy, consider how a radiologist could look at a scan and say the following: “We’re 95% confident there’s no tumor. But even on the 5% chance that there is, our tests completely rule out that it’s a malignant one, so don’t worry.”) 

      (3) In most (if not all) IB experiments in the literature, a partial attention and/or full attention trial is administered after the critical trial. These control trials are very important for validating IB on the critical trial, as they must show that, when attended, the critical stimuli are very easy to see. If a subject cannot detect the critical stimulus on the control trial, one cannot conclude that they were inattentionally blind on the critical trial, e.g., perhaps the stimulus was just too difficult to see (e.g., too weak, too brief, too far in the periphery, too crowded by distractor stimuli, etc.), or perhaps they weren't paying enough attention overall or failed to follow instructions. In the aggregate data, rates of noticing the stimuli should increase substantially from the critical trial to the control trials. If noticing rates are equivalent on the critical and control trials, one cannot conclude that attention was manipulated in the first place.

      In their rebuttal to the first round of peer review, the authors provided weak justification for not including such a control condition. They cite one paper that argues such control conditions are often used to exclude subjects from analysis (those who fail to notice the stimulus on the control trial are either removed from analysis or replaced with new subjects) and such exclusions/replacements can lead to underestimations of inattentional blindness rates. However, the inclusion of a partial or full attention condition as a control does not necessitate the extra step of excluding or replacing subjects. In the broadest sense, such a control condition simply validates the attention manipulation, i.e., one can easily compare the percent of subjects who answered "yes" or who got the 2AFC judgment correct during the critical trial versus the control trial. The subsequent choice about exclusion/replacement is separate, and researchers can always report the data with and without such exclusions/replacements to remain more neutral on this practice.

      If anyone were to follow up on this study, I highly recommend including a partial or full attention control condition, especially given the online nature of data collection. It's important to know the percent of online subjects who answer yes and who get the 2AFC question correct when the critical stimulus is attended, because that is the baseline (in this case, the "ceiling level" of performance) to which the IB rates on the critical trial can be compared.

      We agree with the reviewer that future studies could benefit from including a partial or full attention condition. They are surely right that we might learn something additional from such conditions. 

      Where we differ from the reviewer is in thinking of these conditions as “controls” appropriate to our research question. This is why we offered the justification we did in our earlier response. When these conditions are used as controls, they are used to exclude subjects in ways that serve to inflate the biases we are concerned with in our work. For our question, the absence of these conditions does not impact the significance of the findings, since such conditions are designed to answer a question which is not the one at the heart of our paper. Our key claim is that subjects who deny noticing an unexpected stimulus in a standard inattentional blindness paradigm nonetheless exhibit significant residual sensitivity (as well as a conservative bias in their response to the noticing question); the presence or absence of partial- or full-attention conditions is orthogonal to that question.

      Moreover, we note that our tasks were precisely chosen to be classic tasks widely used in the literature to manipulate attention. Thus, by common consensus in the field, they are effective means to soak up attention, and have in effect been tested in partial- and full-attention control settings in a huge number of studies. Second, we think it very doubtful that subjects in a full-attention trial would not overwhelmingly have detected our critical stimuli. The reviewer worries that they might have been “too weak, too brief, too far in the periphery, too crowded by distractor stimuli, etc.” But consider E5 where the stimulus was a highly salient orange or green shape, present on the screen for 5 seconds. The reviewer also suggests that subjects in the full-attention control might not have detected the stimulus because they “weren't paying enough attention overall”. But evidently if they weren’t paying attention even in the full-attention trial this would be reason for thinking that there was inattentional blindness even in this condition (a point made by White et al. 2018) and certainly not a reason for thinking there was not an attentional effect in the critical trial. Lastly, the reviewer suggests that a full-attention condition would have helped ensure that subjects were following instructions. But we ensured this already by (as per our pre-registration) excluding subjects who performed poorly in the relevant primary tasks.

      Thus, both in principle and in practice, we do not see the absence of such conditions as impacting the interpretation of our findings, even as we agree that future work posing a different research question could certainly learn something from including such conditions.

      Responses to Reviewer #2:

      We note that this report is unchanged from an earlier round of review, and not a response to our significantly revised manuscript. We believe our latest version fully addresses all the issues which the reviewer originally raised. The interested reader can see our original response below. We again thank the reviewer for their previous report which was extremely helpful.

      ---

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This study presents valuable findings to the field interested in inattentional blindness (IB), reporting that participants indicating no awareness of unexpected stimuli through yes/no questions, still show above-chance sensitivity to specific properties of these stimuli through follow-up forced-choice questions (e.g., its color). The results suggest that this is because participants are conservative and biased to report not noticing in IB. The authors conclude that these results provide evidence for residual perceptual awareness of inattentionally blind stimuli and that therefore these findings cast doubt on the claim that awareness requires attention. Although the samples are large and the analysis protocol novel, the evidence supporting this interpretation is still incomplete, because effect sizes are rather small, the experimental design could be improved and alternative explanations have not been ruled out.

      We are encouraged to hear that eLife found our work “valuable”. We also understand, having closely looked at the reviews, why the assessment also includes an evaluation of “incomplete”. We gave considerable attention to this latter aspect of the assessment in our revision. In addition to providing additional data and analyses that we believe strengthen our case, we also include a much more substantial review and critique of existing methods in the IB literature to make clear exactly the gap our work fills and the advance it makes. (Indeed, if it is appropriate to say this here, we believe one key aspect of our work that is missing from the assessment is our inclusion of ‘absent’ trials, which is what allows us to make the crucial claims about conservative reporting of awareness in IB for the first time.) Moreover, we refocus our discussion on only our most central claims, and weaken several of our secondary claims so that the data we’ve collected are better aligned with the conclusions we draw, to ensure that the case we now make is in fact complete. Specifically, our two core claims are (1) that there is residual sensitivity to visual features for subjects who would ordinarily be classified as inattentionally blind (whether this sensitivity is conscious or not), and (2) that there is a tendency to respond conservatively on yes/no questions in the context of IB. We believe we have very compelling support for these two core claims, as we explain in detail below and also through revisions to our manuscript.

      Given the combination of strengthened and clarified case, as well as the weakening of any conclusions that may not have been fully supported, we believe and hope that these efforts make our contribution “solid”, “convincing”, or even “compelling” (especially because the “compelling” assessment characterizes contributions that are “more rigorous than the current state-of-the-art”, which we believe to be the case given the issues that have plagued this literature and that we make progress on).

      Reviewer #1 (Public review):

      Summary:

      In the abstract and throughout the paper, the authors boldly claim that their evidence, from the largest set of data ever collected on inattentional blindness, supports the views that "inattentionally blind participants can successfully report the location, color, and shape of stimuli they deny noticing", "subjects retain awareness of stimuli they fail to report", and "these data...cast doubt on claims that awareness requires attention." If their results were to support these claims, this study would overturn 25+ years of research on inattentional blindness, resolve the rich vs. sparse debate in consciousness research, and critically challenge the current majority view in cognitive science that attention is necessary for awareness.

      Unfortunately, these extraordinary claims are not supported by extraordinary (or even moderately convincing) evidence. At best, the results support the more modest conclusion: If sub-optimal methods are used to collect retrospective reports, inattentional blindness rates will be overestimated by up to ~8% (details provided below in comment #1). This evidence-based conclusion means that the phenomenon of inattentional blindness is alive and well as it is even robust to experiments that were specifically aimed at falsifying it. Thankfully, improved methods already exist for correcting the ~8% overestimation of IB rates that this study successfully identified.

      We appreciate here the reviewer’s recognition of the importance of work on inattentional blindness, and the centrality of inattentional blindness to a range of major questions. We also recognize their concerns with what they see as a gap between our data and the claims made on their basis. We address this in detail below (as well as, of course, in our revised manuscript). However, from the outset we are keen to clarify that our central claim is only the first one the reviewer mentions — and the one which appears in our title — namely that, as a group, participants can successfully report the location, color, and shape of stimuli they deny noticing, and thus that there is “Sensitivity to visual features in inattentional blindness”. This is the claim that we believe is strongly supported by our data, and all the more so after revising the manuscript in light of the helpful comments we’ve received.

      By contrast, the other claims the reviewer mentions, concerning awareness (as opposed to residual sensitivity, which might be conscious or unconscious), were intended as both secondary and tentative. We agree with the referee that these are not as strongly supported by our data (and indeed we say so in our manuscript), whereas we do think our data strongly support the more modest — and, to us, central — claim that, as a group, inattentionally blind participants can successfully report the location, color, and shape of stimuli they deny noticing.

      We also feel compelled to resist somewhat the reviewer’s summary of our claims. For example, the reviewer attributes to us the claim that “subjects retain awareness of stimuli they fail to report”; but while that phrase does appear in our abstract, what we in fact say is that our data are “consistent with an alternative hypothesis about IB, namely that subjects retain awareness of stimuli they fail to report”. We do in fact believe that our data are consistent with that hypothesis, whereas earlier investigations seemed not to be. We mention this only because we had used that careful phrasing precisely for this sort of reason, so that we wouldn’t be read as saying that our results unequivocally support that alternative.

      Still, looking back, we see how we may have given more emphasis than we intended to some of these more secondary claims. So, we’ve now gone through and revised our manuscript throughout to emphasize that our main claim is about residual sensitivity, and to make clear that our claims about awareness are secondary and tentative. Indeed, we now say precisely this, that although we favor an interpretation of “our results in terms of residual conscious vision in IB … this claim is tentative and secondary to our primary finding”. We also weaken the statements in the abstract that the reviewer mentions, to better reflect our key claims.

      Finally, we note one further point: Dialectically, inattentional blindness has been used to argue (e.g.) that attention is required for awareness. We think that our data concerning residual sensitivity at least push back on the use of IB to make this claim, even if (as we agree) they do not provide decisive evidence that awareness survives inattention. In other words, we think our data call that claim into question, such that it’s now genuinely unclear whether awareness does or does not survive inattention. We have adjusted our claims on this point accordingly as well.

      Comments:

      (1) In experiment 1, data from 374 subjects were included in the analysis. As shown in figure 2b, 267 subjects reported noticing the critical stimulus and 107 subjects reported not noticing it. This translates to a 29% IB rate, if we were to only consider the "did you notice anything unusual Y/N" question. As reported in the results text (and figure 2c), when asked to report the location of the critical stimulus (left/right), 63.6% of the "non-noticer" group answered correctly. In other words, 68 subjects were correct about the location while 39 subjects were incorrect. Importantly, because the location judgment was a 2-alternative-forced-choice, the assumption was that if 50% (or at least not statistically different than 50%) of the subjects answered the location question correctly, everyone was purely guessing. Therefore, we can estimate that ~39 of the subjects who answered correctly were simply guessing (because 39 guessed incorrectly), leaving 29 subjects from the non-noticer group who may have indeed actually seen the location of the stimulus. If these 29 subjects are moved to the noticer group, the corrected rate of IB for experiment 1 is 21% instead of 29%. In other words, relying only on the "Y/N did you notice anything" question leads to an overestimate of IB rates by 8%. This modest level of inaccuracy in estimating IB rates is insufficient for concluding that "subjects retain awareness of stimuli they fail to report", i.e. that inattentional blindness does not exist.

      In addition, this 8% inaccuracy in IB rates only considers one side of the story. Given the data reported for experiment 1, one can also calculate the number of subjects who answered "yes, I did notice something unusual" but then reported the incorrect location of the critical stimulus. This turned out to be 8 subjects (or 3% of the "noticer" group). Some would argue that it's reasonable to consider these subjects as inattentionally blind, since they couldn't even report where the critical stimulus they apparently noticed was located. If we move these 8 subjects to the non-noticer group, the 8% overestimation of IB rates is reduced to 6%.

      The same exercise can and should be carried out on the other 4 experiments, however, the authors do not report the subject numbers for any of the other experiments, i.e., how many subjects answered Y/N to the noticing question and how many in each group correctly answered the stimulus feature question. From the limited data reported (only total subject numbers and d' values), the effect sizes in experiments 2-5 were all smaller than in experiment 1 (d' for the non-noticer group was lower in all of these follow-up experiments), so it can be safely assumed that the ~6-8% overestimation of IB rates was smaller in these other four experiments. In a revision, the authors should consider reporting these subject numbers for all 5 experiments.

      We now report, as requested, all these subject numbers in our supplementary data (see Supplementary Tables 1 and 2 in our Supplementary Materials).

      However, we wish to address the larger question the reviewer has raised: Do our data only support a relatively modest reduction in IB rates? Even if they did, we still believe that this would be a consequential result, suggesting a significant overestimation of IB rates in classic paradigms. However, part of our purpose in writing this paper is to push back against a certain binary way of thinking about seeing/awareness. Our sense is that the field has conceived of awareness as “all or nothing”: You either see a perfectly clear gorilla right in front of you, or you see nothing at all. Our perspective is different: We think there can be degraded forms of awareness that fall into neither of those categories. For that reason, we are disinclined to see our results in the way that the reviewer suggests, namely as simply indicating that fewer subjects fail to see the stimulus than previously assumed. To think that way is, in our view, to assume the orthodox binary position about awareness. If, instead, one conceives of awareness as we do (and as we believe the framework of signal detection theory should compel us to), then it isn’t quite right to think of the proportion of subjects who were aware, but rather (e.g.) the sensitivity of subjects to the relevant stimulus. This is why we prefer measures like d′ over % noticing and % correct. We understand that the reviewer may not think the same way about this issue as we do, but part of our goal is to promote that way of thinking in general, and so some of our comments below reflect that perspective and approach.

      For example, consider how we’d think about the single subject case where the task is 2afc detection of a low contrast stimulus in noise. Suppose that this subject achieves 70% correct. One way of thinking about that is that the subject sees the stimulus on 40% of trials (achieving 100% correct on those) and guesses blindly on the other 60% (achieving 50% correct on those) for a total of 40% + 30% = 70% overall. However, this is essentially a “high threshold” approach to the problem, in contrast to an SDT approach. On an SDT approach (an approach with tremendous evidential support), on every trial the subject receives samples from probabilistic distributions corresponding to each interval (one noise and one signal + noise) and determines which is higher according to the 2afc decision rule. Thus, across trials they have access to differentially graded information about the stimulus. Moreover, on some trials they may have significant information from the stimulus (perhaps, well above their single interval detection criterion) but still decide incorrectly because of high noise from the other spatial interval. From this perspective, there is no non-arbitrary way of saying whether the subject saw/did not see on a given trial. Instead, we must characterize the subject’s overall sensitivity to the stimulus/its visibility to them in terms of a parameter such as d′ (here, ~ 0.7).
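      The arithmetic in this example can be made concrete in a few lines of Python; the only assumption is the standard 2afc relation d′ = √2 · z(Pc), where z is the inverse of the standard normal CDF:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF


def dprime_2afc(proportion_correct):
    # Standard relation between 2afc proportion correct and sensitivity:
    # d' = sqrt(2) * z(Pc)
    return (2 ** 0.5) * z(proportion_correct)


# The hypothetical subject above, at 70% correct:
print(round(dprime_2afc(0.70), 2))  # ~0.74, the "~0.7" cited in the text
```

At chance performance (Pc = 0.5) this relation gives d′ = 0, as it should; there is no non-arbitrary way to partition such a score into "seen" and "guessed" trials.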

      We take the same attitude to our super subject. Instead of saying that some subjects saw/failed to see the stimuli, instead we suggest that the best way to characterize our results is that across subjects (and so trials also) there was differential graded access to information from the stimulus best represented in terms of the group-level sensitivity parameter d′.
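      By way of contrast, the "high-threshold" guessing correction in the reviewer's comment above, which treats every non-noticer as either a true seer (always correct) or a blind guesser (correct half the time), can be reproduced in a few lines, using the Experiment 1 counts the reviewer cites:

```python
# The reviewer's high-threshold "guessing correction": assume every
# non-noticer either truly saw the stimulus (always correct) or guessed
# blindly (correct half the time), then reclassify the implied seers.
def corrected_ib_rate(n_total, n_nonnoticers, n_correct):
    n_incorrect = n_nonnoticers - n_correct
    n_true_seers = n_correct - n_incorrect  # guessers split 50/50
    return (n_nonnoticers - n_true_seers) / n_total


# Experiment 1 counts cited by the reviewer: 374 subjects, 107 non-noticers,
# 68 of whom reported the stimulus location correctly.
uncorrected = 107 / 374                      # ~0.29
corrected = corrected_ib_rate(374, 107, 68)  # ~0.21
print(round(uncorrected, 2), round(corrected, 2))
```

This makes the reviewer's 29% versus 21% figures explicit, but on our view the correction presupposes exactly the all-or-nothing model of awareness that the SDT analysis rejects.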

      We acknowledge that (despite ourselves) we occasionally fell into an all-too-natural binary/high threshold way of thinking, as when we suggested that our data show that “inattentionally blind subjects consciously perceive these stimuli after all” and “the inattentionally blind can see after all.” (p.17) We have removed this phrasing, along with other problematic wording, as noted below.

      (2) Because classic IB paradigms involve only one critical trial per subject, the authors used a "super subject" approach to estimate sensitivity (d') and response criterion (c) according to signal detection theory (SDT). Some readers may have issues with this super subject approach, but my main concern is with the lack of precision used by the authors when interpreting the results from this super subject analysis.

      Only the super subject had above-chance sensitivity (and it was quite modest, with d' values between 0.07 and 0.51), but the authors over-interpret these results as applying to every subject. The methods and analyses cannot determine if any individual subject could report the features above-chance. Therefore, the following list of quotes should be revised for accuracy or removed from the paper as they are misleading and are not supported by the super subject analysis: "Altogether this approach reveals that subjects can report above-chance the features of stimuli (color, shape, and location) that they had claimed not to notice under traditional yes/no questioning" (p.6)

      "In other words, nearly two-thirds of subjects who had just claimed not to have noticed any additional stimulus were then able to correctly report its location." (p.6)

      "Even subjects who answer "no" under traditional questioning can still correctly report various features of the stimulus they just reported not having noticed, suggesting that they were at least partially aware of it after all." (p.8)

      "Why, if subjects could succeed at our forced-response questions, did they claim not to have noticed anything?" (p.8)

      "we found that observers could successfully report a variety of features of unattended stimuli, even when they claimed not to have noticed these stimuli." (p.14)

      "our results point to an alternative (and perhaps more straightforward) explanation: that inattentionally blind subjects consciously perceive these stimuli after all... they show sensitivity to IB stimuli because they can see them." (p.16)

      "In other words, the inattentionally blind can see after all." (p.17)

      We thank the reviewer for pointing out how these quotations may be misleading as regards our central claim. We intended them all to be read generically as concerning the group, and not universally as claiming that all subjects could report above-chance/see the stimuli etc. We agree entirely that the latter universal claim would not be supported by our data. In contrast, we do contend that our super-subject analysis shows that, as a group, subjects traditionally considered intentionally blind exhibit residual sensitivity to features of stimuli (color, shape, and location) that they had all claimed not to notice, and likewise that as a group they could succeed at our forced-choice questions. 

      To ensure this claim is clear throughout the paper, and that we are not interpreted as making an unsupported universal claim we have revised the language in all of the quotations above, as follows, as well as in numerous other places in the paper.

      “Altogether this approach reveals that subjects can report above-chance the features of stimuli (color, shape, and location) that they had claimed not to notice under traditional yes/no questioning” (p.6) => “Altogether this approach reveals that as a group subjects can report above-chance the features of stimuli (color, shape, and location) that they had all claimed not to notice under traditional yes/no questioning” (p.6)

      “Even subjects who answer “no” under traditional questioning can still correctly report various features of the stimulus they just reported not having noticed, suggesting that they were at least partially aware of it after all.” (p.8) => “... even subjects who answer “no” under traditional questioning can, as a group, still correctly report various features of the stimuli they just reported not having noticed, indicating significant group-level sensitivity to visual features. Moreover, these results are even consistent with an alternative hypothesis about IB, that as a group, subjects who would traditionally be classified as inattentionally blind are in fact at least partially aware of the stimuli they deny noticing.” (p.8)

      “Why, if subjects could succeed at our forced-response questions, did they claim not to have noticed anything?” (p.8) => “Why, if subjects could succeed at our forced-response questions as a group, did they all individually claim not to have noticed anything?” (p.8)

      “we found that observers could successfully report a variety of features of unattended stimuli, even when they claimed not to have noticed these stimuli.” (p.14) => “we found that groups of observers could successfully report a variety of features of unattended stimuli, even when they all individually claimed not to have noticed those stimuli.” (p.14)

      “our results point to an alternative (and perhaps more straightforward) explanation: that inattentionally blind subjects consciously perceive these stimuli after all... they show sensitivity to IB stimuli because they can see them.” (p.16) => “our results just as easily raise an alternative (and perhaps more straightforward) explanation: that inattentionally blind subjects may retain a degree of awareness of these stimuli after all.” (p.16) Here deleting: “they show sensitivity to IB stimuli because they can see them.”

      “In other words, the inattentionally blind can see after all.” (p.17) => “In other words, as a group, the inattentionally blind enjoy at least some degraded or partial sensitivity to the location, color and shape of stimuli which they report not noticing.” (p.17)

      In one case, we felt the sentence was correct as it stood, since it simply reported a fact about our data:

      “In other words, nearly two-thirds of subjects who had just claimed not to have noticed any additional stimulus were then able to correctly report its location.” (p.6)

      After all, if subjects were entirely blind and simply guessed, it would be true to say that 50% of subjects would be able to correctly report the stimulus location (by guessing).

      In addition to these and numerous other changes, we also added the following explicit statement early in the paper to head-off any confusion on this point: “Note that all analyses reported here relate to this super subject as opposed to individual subjects”. 

      (3) In addition to the d' values for the super subject being slightly above zero, the authors attempted an analysis of response bias to further question the existence of IB. By including in some of their experiments critical trials in which no critical stimulus was presented, but asking subjects the standard Y/N IB question anyway, the authors obtained false alarm and correct rejection rates. When these FA/CR rates are taken into account along with hit/miss rates when critical stimuli were presented, the authors could calculate c (response criterion) for the super subject. Here, the authors report that response criteria are biased towards saying "no, I didn't notice anything". However, the validity of applying SDT to classic Y/N IB questioning is questionable.

      For example, with the subject numbers provided in Box 1 (the 2x2 table of hits/misses/FA/CR), one can ask, 'how many subjects would have needed to answer "yes, I noticed something unusual" when nothing was presented on the screen in order to obtain a non-biased criterion estimate, i.e., c = 0?' The answer turns out to be 800 subjects (out of the 2761 total subjects in the stimulus-absent condition), or 29% of subjects in this condition.

      In the context of these IB paradigms, it is difficult to imagine 29% of subjects claiming to have seen something unusual when nothing was presented. Here, it seems that we may have reached the limits of extending SDT to IB paradigms, which are very different than what SDT was designed for. For example, in classic psychophysical paradigms, the subject is asked to report Y/N as to whether they think a threshold-level stimulus was presented on the screen, i.e., to detect a faint signal in the noise. Subjects complete many trials and know in advance that there will often be stimuli presented and the stimuli will be very difficult to see. In those cases, it seems more reasonable to incorrectly answer "yes" 29% of the time, as you are trying to detect something very subtle that is out there in the world of noise. In IB paradigms, the stimuli are intentionally designed to be highly salient (and unusual), such that with a tiny bit of attention they can be easily seen. When no stimulus is presented and subjects are asked about their own noticing (especially of something unusual), it seems highly unlikely that 29% of them would answer "yes", which is the rate of FAs that would be needed to support the null hypothesis here, i.e., of a non-biased criterion. For these reasons, the analysis of response bias in the current context is questionable and the results claiming to demonstrate a biased criterion do not provide convincing evidence against IB.

      We are grateful to the reviewer for highlighting this aspect of our data. We agree with several of these points. For example, it is indeed striking that — given the corresponding hit rate — a false alarm rate of 29% would be needed to obtain an unbiased criterion. At the same time, we would respectfully push back on other points above. In our first experiment that uses the super-subject analysis, for example, d′ is 0.51 and highly significant; to describe that figure, as the reviewer does, as “slightly above zero” seemed not quite right to us (and all the more so given that these experiments involve very large samples and preregistered analysis plans). 

      We also respectfully disagree that our data call into question the validity of applying SDT to classic yes/no IB questioning. The mathematical foundations of SDT are rock solid, and have been applied far more broadly than we have applied them here. In fact, in a way we would suggest that exactly the opposite attitude is appropriate: rather than thinking that IB challenges an immensely well-supported, rigorously tested and broadly applicable mathematical model of perception, we think that the conflict between our SDT-based model of IB and the standard interpretation constitutes strong reason to disfavor the standard interpretation. Several points are worth making here.

      First, it is already surprising that 11.03% of our subjects in E2 (46/417) and 7.24% of our subjects in E5 (200/2761) reported noticing a stimulus when no stimulus was present. But while this may have seemed unlikely in advance of inquiry, this is in fact what the data show, and it forms the basis of our criterion calculations. Thus, our criterion calculations already factor in a surprising but empirically verified high false alarm rate of subjects answering “yes” when asked about their noticing even though no stimulus was presented. (We also note that the only paper we know of to report a false alarm rate in an IB paradigm, though not one used to calculate a response criterion, found a very consistent false alarm rate of 10.4%. See Devue et al. 2009.)
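      For concreteness, the criterion computation at issue, and the counterfactual false-alarm count the reviewer derives, can be sketched as follows. The hit rate here is illustrative only (we borrow the Experiment 1 noticing rate, 267/374, purely for the sketch; the per-experiment figures are in our Supplementary Tables):

```python
from statistics import NormalDist

norm = NormalDist()  # standard normal
z = norm.inv_cdf     # z-transform


def criterion(hit_rate, fa_rate):
    # Yes/no SDT response criterion: c = -(z(H) + z(F)) / 2.
    # c > 0 indicates a conservative bias toward answering "no".
    return -(z(hit_rate) + z(fa_rate)) / 2


H = 267 / 374    # illustrative hit rate (Experiment 1 noticing rate)
F = 200 / 2761   # E5 false-alarm rate reported above

print(round(criterion(H, F), 2))  # positive, i.e. conservative (~0.45)

# False-alarm count that would make this criterion unbiased: c = 0
# requires z(F) = -z(H), i.e. F must equal the miss rate 1 - H.
fa_needed = norm.cdf(-z(H)) * 2761
print(round(fa_needed))  # on the order of the ~800 the reviewer computes
```

Note that the false-alarm rate required for c = 0 is simply the miss rate, which is why a hit rate near 71% implies the ~29% figure the reviewer discusses.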

      Second, while the reviewer is of course correct that a common psychophysical paradigm involves detection of a “threshold-level”/faint stimulus in noise, it is widely recognized that SDT has an extremely broad application, being applicable to any situation in which two kinds of event are to be discriminated (Pastore & Scheirer 1974) and being “almost universally accepted as a theoretical account of decision making in research on perceptual detection and recognition and in numerous extensions to applied domains” quite generally (Estes 2002; see also Wixted 2020). Indeed, cases abound in which SDT has been successfully applied to situations which do not involve near-threshold stimuli in noise. To pick two examples at random, SDT has been used in studying acceptability judgments in linguistics (Huang and Ferreira 2020) and the assessment of physical aggression in child-student interactions (Lerman et al. 2010; for more general discussion of practical applications, see Swets et al. 2000). Given that the framework of SDT is so widely applied and well supported, and that we see no special reason to make an exception, we believe it can be relied on in the present context.

      Finally, we note that inattentional blindness can in many ways be considered analogous to “near threshold” detection since inattention is precisely thought to degrade or even abolish awareness of stimuli, meaning that our stimuli can be construed as near threshold in the relevant sense. Indeed, our relatively modest d′ values suggest that under inattention stimuli are indeed hard to detect. Thus, even were SDT more limited in its application, we think it still would be appropriate to apply to the case of IB.

      (4) One of the strongest pieces of evidence presented in the entire paper is the single data point in Figure 3e showing that in Experiment 3, even the super subject group that rated their non-noticing as "highly confident" had a d' score significantly above zero. Asking for confidence ratings is certainly an improvement over simple Y/N questions about noticing, and if this result were to hold, it could provide a key challenge to IB. However, this result hinges on a single data point, it was not replicated in any of the other 4 experiments, and it can be explained by methodological limitations. I strongly encourage the authors (and other readers) to follow up on this result, in an in-person experiment, with improved questioning procedures.

      We agree that our finding that even the super-subject group that rated their non-noticing as “highly confident” had a d' score significantly above zero is an especially strong piece of evidence, and we thank the reviewer for highlighting that here. At the same time, we note that while the finding is represented by a single marker in Figure 3e, it seemed not quite right to call this a “single data point”, as the reviewer does, given that it derives from a large pre-registered experiment involving some 7,000 subjects total, with over 200 subjects in the relevant bin — both figures being far larger than a typical IB experiment. It would of course be tremendous to follow up on this result – and we certainly hope our work inspires various follow-up studies. That said, we note that recruiting the necessary numbers of in person subjects would be an absolutely enormous, career-level undertaking – it would involve bringing more than the entire undergraduate population at our own institution, Johns Hopkins, into our laboratory! While those results would obviously be extremely valuable, we wouldn’t want to read the reviewer’s comments as implying that only an experiment of that magnitude — requiring thousands upon thousands of in-person subjects — could make progress on these issues. Indeed, because every subject can only contribute one critical trial in IB, it has long been recognized as an extremely challenging paradigm to study in a sufficiently well-powered and psychophysically rigorous way. We believe that our large preregistered online approach represents a major leap forward here, even if it involves certain trade-offs.

      In the current Experiment 3, the authors asked the standard Y/N IB question, and then asked how confident subjects were in their answer. Asking back-to-back questions, the second one with a scale that pertains to the first one (including a tricky inversion, e.g., "yes, I am confident in my answer of no"), may be asking too much of some subjects, especially subjects paying half-attention in online experiments. This procedure is likely to introduce a sizeable degree of measurement error.

      An easy fix in a follow-up study would be to ask subjects to rate their confidence in having noticed something with a single question using an unambiguous scale:

      On the last trial, did you notice anything besides the cross?

      (1): I am highly confident I didn't notice anything else

      (2): I am confident I didn't notice anything else

      (3): I am somewhat confident I didn't notice anything else

      (4): I am unsure whether I noticed anything else

      (5): I am somewhat confident I noticed something else

      (6): I am confident I noticed something else

      (7): I am highly confident I noticed something else

If we were to re-run this same experiment, in the lab where we can better control the stimuli and the questioning procedure, we would most likely find a d' of zero for subjects who were confident or highly confident (1-2 on the improved scale above) that they didn't notice anything. From there on, the d' values would gradually increase, tracking along with the confidence scale (from 3-7 on the scale). In other words, we would likely find a data pattern similar to that plotted in Figure 3e, but with the first data point on the left moving down to zero d'. In the current online study with the successive (and potentially confusing) retrospective questioning, a handful of subjects could have easily misinterpreted the confidence scale (e.g., inverting the scale), leading to a mixture of genuine high-confidence ratings and mistaken ratings, which would result in a super-subject d' that falls between zero and the other extreme of the scale (which is exactly what the data in Fig 3e shows).

One way to check on this potential measurement error using the existing dataset would be to conduct additional analyses that incorporate the confidence ratings from the 2AFC location judgment task. For example, were there any subjects who reported being confident or highly confident that they didn't see anything, but then reported being confident or highly confident in judging the location of the thing they didn't see? If so, how many? In other words, how internally (in)consistent were subjects' confidence ratings across the IB and location questions? Such an analysis could help screen out subjects who made a mistake on the first question and corrected themselves on the second, as well as subjects who weren't reading the questions carefully enough.

      As far as I could tell, the confidence rating data from the 2AFC location task were not reported anywhere in the main paper or supplement.

      We are grateful to the reviewer for raising this issue and for requesting that we report the confidence rating data from our 2afc location task in Experiment 3. We now report all this data in our Supplementary Materials (see Supplementary Table 3).

      We of course agree with the reviewer’s concern about measurement error, which is a concern in all experiments. What, then, of the particular concern that some subjects might have misunderstood our confidence question? It is surely impossible in principle to rule out this possibility; however, several factors bear on the plausibility of this interpretation. First, we explicitly labeled our confidence scale (with 0 labeled as ‘Not at all confident’ and 3 as ‘Highly confident’) so that subjects would be very unlikely simply to invert the scale. This is especially so as it is very counterintuitive to treat “0” as reflecting high confidence. However, we accept that it is a possibility that certain subjects might nonetheless have been confused in some other way.

      So, we also took a second approach. We examined the confidence ratings on the 2afc question of subjects who reported being highly confident that they didn't notice anything.

Reassuringly, the large majority of these high-confidence “no” subjects (~80%) reported low confidence of 0 or 1 on the 2afc question, and the majority (51%) reported the lowest confidence of 0. Only 18/204 (9%) subjects reported high confidence on both questions.

Still, the numbers of subjects here are small and so may not be reliable. This led us to take a third approach. We reasoned that any measurement error due to inverting or misconstruing the confidence scale should be symmetric. That is: If subjects are liable to invert the confidence scale, they should do so just as often when they answer “yes” as when they answer “no” – after all the very same scale is being used in both cases. This allows us to explore evidence of measurement error in relation to the much larger number of high-confidence “yes” subjects (N = 2677), thus providing a much more robust indicator as to whether subjects are generally liable to misconstrue the confidence scale. Looking at the number of such high-confidence noticers who subsequently respond to the 2afc question with low confidence, we found that the number was tiny. Only 28/2677 (1.05%) of high-confidence noticers subsequently gave the lowest level of confidence on the 2afc question, and only 63/2677 (2.35%) subjects gave either of the two lower levels of confidence. In this light, we consider any measurement error due to misunderstanding the confidence scale to be extremely minimal.
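For concreteness, the consistency checks described in this and the preceding paragraphs can be reproduced directly from the reported counts. This is a minimal sketch; the variable names are ours, for illustration, and are not taken from the original analysis:

```python
# Counts reported in the text (variable names are ours, for illustration).
hi_conf_no = 204              # subjects highly confident they noticed nothing
hi_conf_no_hi_2afc = 18       # ...who then gave high confidence on the 2afc question

hi_conf_yes = 2677            # subjects highly confident they noticed something
hi_conf_yes_lowest_2afc = 28  # ...who then gave the lowest 2afc confidence
hi_conf_yes_low_2afc = 63     # ...who gave either of the two lowest levels

# If inverting the confidence scale were common, "inconsistent" follow-up
# confidence should be roughly as frequent among high-confidence noticers
# as among high-confidence non-noticers; instead it is tiny (~1%).
print(f"{hi_conf_no_hi_2afc / hi_conf_no:.1%}")        # 8.8% (the ~9% reported)
print(f"{hi_conf_yes_lowest_2afc / hi_conf_yes:.2%}")  # 1.05%
print(f"{hi_conf_yes_low_2afc / hi_conf_yes:.2%}")     # 2.35%
```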

      What should we make of the 18 subjects who were highly confident non-noticers but then only low-confidence on the 2afc question? Importantly, we do not think that these 18 subjects necessarily made a mistake on the first question and so should be excluded. There is no a priori reason why one’s confidence criterion in a yes/no question should carry over to a 2afc question. After all, it is perfectly rationally coherent to be very confident that one didn’t see anything but also very confident that if there was anything to be seen, it was on the left. Moreover, these 18 subjects were not all correct on the 2afc question despite their high confidence (4/18 or 22% getting the wrong answer). 

      Nonetheless, and again reassuringly, we found that the above-chance patterns in our data remained the same even excluding these 18 subjects. We did observe a slight reduction in percent correct and d′ but this is absolutely what one should expect since excluding the most confident performers in any task will almost inevitably reduce performance.

In this light, we consider it unlikely that measurement error fully explains the residual sensitivity found even amongst highly confident non-noticers. That said, we appreciate this concern. We now raise the issue and the analysis of high-confidence noticers that addresses it in our revised manuscript. We also thank the reviewer for pressing us to think harder about this issue, which led directly to these new analyses that we believe have strengthened the paper.

      (5) In most (if not all) IB experiments in the literature, a partial attention and/or full attention trial (or set of trials) is administered after the critical trial. These control trials are very important for validating IB on the critical trial, as they must show that, when attended, the critical stimuli are very easy to see. If a subject cannot detect the critical stimulus on the control trial, one cannot conclude that they were inattentionally blind on the critical trial, e.g., perhaps the stimulus was just too difficult to see (e.g., too weak, too brief, too far in the periphery, too crowded by distractor stimuli, etc.), or perhaps they weren't paying enough attention overall or failed to follow instructions. In the aggregate data, rates of noticing the stimuli should increase substantially from the critical trial to the control trials. If noticing rates are equivalent on the critical and control trials one cannot conclude that attention was manipulated.

      It is puzzling why the authors decided not to include any control trials with partial or full attention in their five experiments, especially given their online data collection procedures where stimulus size, intensity, eccentricity, etc. were uncontrolled and variable across subjects. Including such trials could have actually helped them achieve their goal of challenging the IB hypothesis, e.g., excluding subjects who failed to see the stimulus on the control trials might have reduced the inattentional blindness rates further. This design decision should at least be acknowledged and justified (or noted as a limitation) in a revision of this paper.

      We acknowledge that other studies in the literature include divided and full attention trials, and that they could have been included in our work as well. However, we deliberately decided not to include such control trials for an important reason. As the referee comments, the main role of such trials in previous work has been to exclude from analysis subjects who failed to report the unexpected stimulus on the divided and/or full attention control trials.

(For example, as Most et al. 2001 write: “Because observers should have seen the object in the full-attention trial (Mack & Rock, 1998), we used this trial as a control … Accordingly, 3 observers who failed to see the cross on this trial were replaced, and their data were excluded from the analyses.”) As the reviewer points out, excluding such subjects would very likely have “helped” us. However, the practice is controversial. Indeed, in a review of 128 experiments, White et al. 2018 argue that the practice has “problematic consequences” and “may lead researchers to understate the pervasiveness of inattentional blindness”. Since we wanted to offer as simple and demanding a test of residual sensitivity in IB as possible, we thus decided not to use any such exclusions, and for that reason decided not to include divided/full attention trials.

As recommended, we discuss this decision not to include divided/full attention trials and our logic for not doing so in the manuscript. As we explain, not having those conditions makes it more impressive, not less impressive, that we observed the results we in fact did — it makes our results more interpretable, not less interpretable, and so the absence of such conditions from our manuscript should not (in our view) be considered any kind of weakness.

      (6) In the discussion section, the authors devote a short paragraph to considering an alternative explanation of their non-zero d' results in their super subject analyses: perhaps the critical stimuli were processed unconsciously and left a trace such that when later forced to guess a feature of the stimuli, subjects were able to draw upon this unconscious trace to guide their 2AFC decision. In the subsequent paragraph, the authors relate these results to above-chance forced-choice guessing in blindsight subjects, but reject the analogy based on claims of parsimony.

      First, the authors dismiss the comparison of IB and blindsight too quickly. In particular, the results from experiment 3, in which some subjects adamantly (confidently) deny seeing the critical stimulus but guess a feature at above-chance levels (at least at the super subject level and assuming the online subjects interpreted and used the confidence scale correctly), seem highly analogous to blindsight. Importantly, the analogy is strengthened if the subjects who were confident in not seeing anything also reported not being confident in their forced-choice judgments, but as mentioned above this data was not reported.

      Second, the authors fail to mention an even more straightforward explanation of these results, which is that ~8% of subjects misinterpreted the "unusual" part of the standard IB question used in experiments 1-3. After all, colored lines and shapes are pretty "usual" for psychology experiments and were present in the distractor stimuli everyone attended to. It seems quite reasonable that some subjects answered this first question, "no, I didn't see anything unusual", but then when told that there was a critical stimulus and asked to judge one of its features, adjusted their response by reconsidering, "oh, ok, if that's the unusual thing you were asking about, of course I saw that extra line flash on the left of the screen". This seems like a more parsimonious alternative compared to either of the two interpretations considered by the authors: (1) IB does not exist, (2) super-subject d' is driven by unconscious processing. Why not also consider: (3) a small percentage of subjects misinterpreted the Y/N question about noticing something unusual. In experiments 4-5, they dropped the term "unusual" but do not analyze whether this made a difference nor do they report enough of the data (subject numbers for the Y/N question and 2AFC) for readers to determine if this helped reduce the ~8% overestimate of IB rates.

      Our primary ambition in the paper was to establish, as our title suggests, residual sensitivity in IB. The ambition is quite neutral as to whether the sensitivity reflects conscious or unconscious processing (i.e. is akin to blindsight as traditionally conceived). We were evidently not clear about this, however, leading to two referees coming away with an impression of our claims that is different than we intended. We have revised our manuscript throughout to address this. But we also want to emphasize here that we take our data primarily to support the more modest claim that there is residual sensitivity (conscious or unconscious) in the group of subjects who are traditionally classified as inattentionally blind. We believe that this claim has solid support in our data.

We do in the discussion section offer one reason for believing that there is residual awareness in the group of subjects who are traditionally classified as inattentionally blind. However, we acknowledge that this is controversial and now emphasize in the manuscript that this claim “is tentative and secondary to our primary finding”. We also emphasize that part of our point is dialectical: Inattentional blindness has been used to argue (e.g.) that attention is required for awareness. We think that our data concerning residual sensitivity at least push back on the use of IB to make this claim, even if they do not provide decisive evidence (as we agree) that awareness survives inattention. (Cf. here, Hirschhorn et al. 2024 who take up a common suggestion in the field that awareness is best assessed by using both subjective and objective measures, with claims about lack of awareness ideally being supported by both; our data suggest at a minimum that in IB objective measures do not neatly line up with subjective measures.)

      We hope this addresses the referee’s concern that we dismiss the “the comparison of IB and blindsight too quickly”. We do not intend to dismiss that comparison at all, indeed we raise it because we consider it a serious hypothesis. Our aim is simply to raise one possible consideration against it. But, again, our main claim is quite consistent with sensitivity in IB being akin to “blindsight”.

We also agree with the referee that some subjects in IB paradigms may say they did not notice anything unusual not because they failed to notice anything, but because they did not consider the unexpected stimulus sufficiently unusual. However, the reviewer is incorrect that we did not mention this interpretation; to the contrary, it was precisely the kind of concern which led us to be dissatisfied with standard IB methods and so motivated our approach. As we wrote in our main text: “However, yes/no questions of this sort are inherently and notoriously subject to bias… For example, observers might be under-confident whether they saw anything (or whether what they saw counted as unusual); this might lead them to respond “no” out of an excess of caution.” On our view, this is exactly the kind of reason (among other reasons) that one cannot rely on yes/no reports of noticing unusual stimuli, even though the field has relied on just these sorts of questions in just this way.

      We do not, however, think that this explanation accounts for why all subjects fail to report noticing, nor do we think that it accounts for our finding of above-chance sensitivity amongst non-noticers. This is for two critical reasons. First, whereas the word “unusual” did appear in the yes/no question in our Experiments 1-3, it did not appear in our Experiments 4 and 5 on dynamic IB. (In both cases, we used the exact wording of such questions in the experiments we were basing our work on.) And, of course, we still found significant residual sensitivity amongst non-noticers in Experiments 4 and 5. Second, in relation to our confidence experiment, we think it unlikely that subjects who were highly confident that they did not notice anything unusual only said that because they thought what they had seen was insufficiently unusual. Yet even in this group of subjects who were maximally confident that they did not notice anything unusual, we still found residual sensitivity.

      (7) The authors use sub-optimal questioning procedures to challenge the existence of the phenomenon this questioning is intended to demonstrate. A more neutral interpretation of this study is that it is a critique on methods in IB research, not a critique on IB as a manipulation or phenomenon. The authors neglect to mention the dozens of modern IB experiments that have improved upon the simple Y/N IB questioning methods. For example, in Michael Cohen's IB experiments (e.g., Cohen et al., 2011; Cohen et al., 2020; Cohen et al., 2021), he uses a carefully crafted set of probing questions to conservatively ensure that subjects who happened to notice the critical stimuli have every possible opportunity to report seeing them. In other experiments (e.g., Hirschhorn et al., 2024; Pitts et al., 2012), researchers not only ask the Y/N question but then follow this up by presenting examples of the critical stimuli so subjects can see exactly what they are being asked about (recognition-style instead of free recall, which is more sensitive). These follow-up questions include foil stimuli that were never presented (similar to the stimulus-absent trials here), and ask for confidence ratings of all stimuli. Conservative, pre-defined exclusion criteria are employed to improve the accuracy of their IB-rate estimates. In these and other studies, researchers are very cautious about trusting what subjects report seeing, and in all cases, still find substantial IB rates, even to highly salient stimuli. The authors should consider at least mentioning these improved methods, and perhaps consider using some of them in their future experiments.

      The concern that we do not sufficiently discuss the range of “improved” methods in IB studies is well-taken. A similar concern is raised by Reviewer #2 (Dr. Cohen). To address the concern, we have added to our manuscript a substantial new discussion of such improved methods. However, although we do agree that these methods can be helpful and may well address some of the methodological concerns which our paper raises, we do not think that they are a panacea. Thus, our discussion of these methods also includes a substantial discussion of the problems and pitfalls with such methods which led us to favor our own simple forced-response and 2afc questions, combined with SDT analysis. We think this approach is superior both to the classic approach in IB studies and to the approach raised by the reviewers.

In particular, we have four main concerns about the follow-up questions now commonly used in the field:

First, many follow-up questions are used not to exclude people from the IB group but to include people in the IB group. Thus, Most et al. 2001 asked follow-up questions but used these to increase their IB group, only excluding subjects from the IB group if they both reported seeing and answered their follow-ups incorrectly: “Observers were regarded as having seen the unexpected object if they answered 'yes' when asked if they had seen anything on the critical trial that had not been present before and if they were able to describe its color, motion, or shape.” This means that subjects who saw the object but failed to see its color, say, would be treated as inattentionally blind. This has the effect of inflating IB rates, in exactly the way our paper is intended to critique. So, in our view this isn’t an improvement but rather part of the approach we take issue with.

Second, many follow-up questions remain yes/no questions or nearby variants, all of which are subject to response bias. For example, in Cohen’s studies which the reviewer mentions, it is certainly true that “he uses a carefully crafted set of probing questions to conservatively ensure that subjects who happened to notice the critical stimuli have every possible opportunity to report seeing them.” We agree that this improves over a simple yes/no question in some ways. However, such follow-up probes nonetheless remain yes/no questions, subject to response bias, e.g.:

      (1) “Did you notice anything strange or different about that last trial?”

      (2) “If I were to tell you that we did something odd on the last trial, would you have a guess as to what we did?”

      (3) “If I were to tell you we did something different in the second half of the last trial, would you have a guess as to what we did?”

      (4) “Did you notice anything different about the colors in the last scene?”

Indeed, follow-up questions of this kind can be especially susceptible to bias, since subjects may be reluctant to “take back” their earlier answers and so be conservative in responding positively to avoid inconsistency or acknowledgement of earlier error. This may explain why such follow-up questions produce remarkable consistency despite their rather different wording. Thus, Simons and Chabris (1999) report: “Although we asked a series of questions escalating in specificity to determine whether observers had noticed the unexpected event, only one observer who failed to report the event in response to the first question (“did you notice anything unusual?”) reported the event in response to any of the next three questions (which culminated in “did you see a ... walk across the screen?”). Thus, since the responses were nearly always consistent across all four questions, we will present the results in terms of overall rates of noticing.” Thus, while there are undoubtedly merits to these follow-ups, they do not resolve problems of bias.

This same basic issue affects the follow-up question used in Pitts et al. 2012 which the reviewer mentions. Pitts et al. write: “If a participant reported not seeing any patterns and rated their confidence in seeing the square pattern (once shown the sample) as a 3 or less (1 = least confident, 5 = most confident), she or he was placed in Group 1 and was considered to be inattentionally blind to the square patterns.” The confidence-rating follow-up question here remains subject to bias. Moreover, and strikingly, the inclusion criterion used means that subjects who were moderately confident that they saw the square pattern when shown (i.e. answered 3) were counted as inattentionally blind (!). We do not think this is an appropriate inclusion criterion.

The third problem is that follow-up questions are often free/open-response. For instance, Most et al. (2005) ask the follow-up question: "If you did see something on the last trial that had not been present during the first two trials, what color was it? If you did not see something, please guess." This is a much more difficult and to that extent less sensitive question than our binary forced-response/2afc questions. For this reason, we believe our follow-up questions are more suitable for ascertaining low levels of sensitivity.

The fourth and final issue is that whereas 2afc questions are criterion-free (in that they naturally have an unbiased decision rule), this is in fact not true of n-afc questions in general, nor is it true in general of delayed n-alternative match-to-sample designs. Thus, even when limited response options are given, they are not immune to response biases and so require SDT analysis. Moreover, some such tasks can involve decision spaces which are often poorly understood or difficult to analyze without making substantial assumptions about observer strategy.

This last point (as well as the first) is relevant to Hirschhorn et al. 2024. Hirschhorn et al. write that they “used two awareness measures. Firstly, participants were asked to rate stimulus visibility on the Perceptual Awareness Scale (PAS, a subjective measure of awareness: Ramsøy & Overgaard, 2004), and then they were asked to select the stimulus image from an array of four images (an objective measure: Jakel & Wichmann, 2006).”

While certainly an improvement on simple yes/no questioning, the PAS remains subject to response bias. On the other hand, we applaud Hirschhorn et al.’s use of objective measures in the context of IB which of course our design implements. However, while Hirschhorn et al. 2024 suggest that their task is a spatial 4afc following the recommendation of this design by Jakel & Wichmann (2006), it is strictly a 4-alternative delayed match-to-sample task, so it is doubtful whether it can be considered a preferred psychophysical task for the reasons Jakel & Wichmann offer. Regardless, the more crucial point is that observers in such a task might be biased towards one alternative as opposed to another. Thus, use of d′ (as opposed to percent correct as in Hirschhorn et al. 2024) is crucial in assessing performance in such tasks.
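To make the point about d′ versus percent correct concrete, here is a minimal sketch using the standard SDT formula for 2afc with response bias, d′ = [z(H) − z(F)]/√2 (the function names and example rates are ours, for illustration): two observers can have identical percent correct yet different sensitivity once bias is taken into account.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard normal CDF, z(p)

def dprime_2afc(hit_rate, fa_rate):
    """d' for a 2afc task, allowing response bias.

    hit_rate: P(respond "left" | target on left)
    fa_rate:  P(respond "left" | target on right)
    Standard SDT result: d' = [z(H) - z(F)] / sqrt(2).
    """
    return (_z(hit_rate) - _z(fa_rate)) / 2 ** 0.5

def bias_2afc(hit_rate, fa_rate):
    """Criterion c; zero for an unbiased observer."""
    return -(_z(hit_rate) + _z(fa_rate)) / 2

# Both observers below score 75% correct: (0.75 + 0.75)/2 and
# (0.95 + 0.55)/2. Percent correct alone would equate them, but the
# biased observer is actually the more sensitive one.
print(dprime_2afc(0.75, 0.25))  # unbiased observer, ~0.95
print(dprime_2afc(0.95, 0.45))  # biased toward "left", ~1.25
```

This is why percent correct understates (or misstates) sensitivity whenever responding is biased toward one alternative, and why the manuscript reports d′ rather than raw accuracy.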

For all these reasons, then, while we agree that the field has taken significant steps to move beyond the simple yes/no question traditionally used in IB studies (and we have revised our manuscript to make this clear), we do not think it has resolved the methodological issues which our paper seeks to highlight and address, and we believe that our approach contributes something additional that is not yet present in the literature. We have now revised our manuscript to make these points much more clearly, and we thank the reviewer for prompting these improvements.

      Reviewer #2 (Public review):

In this study, Nartker et al. examine how much observers are conscious of using variations of classic inattentional blindness studies. The key idea is that rather than simply asking observers if they noticed a critical object with one yes/no question, the authors also ask follow-up questions to determine if observers are aware of more than the yes/no questions suggest. Specifically, by having observers make forced-choice guesses about the critical object, the authors find that many observers who initially said "no" they did not see the object can still "guess" above chance about the critical object's location, color, etc. Thus, the authors claim that prior claims of inattentional blindness are mistaken and that using such simple methods has led numerous researchers to overestimate how little observers see in the world. To quote the authors themselves, these results imply that "inattentionally blind subjects consciously perceive these stimuli after all... they show sensitivity to IB stimuli because they can see them."

      Before getting to a few issues I have with the paper, I do want to make sure to explicitly compliment the researchers for many aspects of their work. Getting massive amounts of data, using signal detection measures, and the novel use of a "super subject" are all important contributions to the literature that I hope are employed more in the future.

      We really appreciate this comment and that the reviewer found our work to make these important contributions to the literature. We wrote this paper expecting not everyone to accept our conclusions, but hoping that readers would see the work as making a valuable contribution to the literature promoting an underexplored alternative in a compelling way. Given that this reviewer goes on to express some skepticism about our claims, it is especially encouraging to see this positive feedback up top!

      Main point 1: My primary issue with this work is that I believe the authors are misrepresenting the way people often perform inattentional blindness studies. In effect, the authors are saying, "People do the studies 'incorrectly' and report that people see very little. We perform the studies 'correctly' and report that people see much more than previously thought." But the way previous studies are conducted is not accurately described in this paper. The authors describe previous studies as follows on page 3:

"Crucially, however, this interpretation of IB and the many implications that follow from it rest on a measure that psychophysics has long recognized to be problematic: simply asking participants whether they noticed anything unusual. In IB studies, awareness of the unexpected stimulus (the novel shape, the parading gorilla, etc.) is retroactively probed with a yes/no question, standardly, "Did you notice anything unusual on the last trial which wasn't there on previous trials?". Any subject who answers "no" is assumed not to have any awareness of the unexpected stimulus."

If this quote were true, the authors would have a point. Unfortunately, I do not believe it is true. This is simply not how many inattentional blindness studies are run. Some of the most famous studies in the inattentional blindness literature do not simply ask observers a yes/no question (e.g., the invisible gorilla (Simons et al. 1999), the classic door study where the person changes (Simons and Levin, 1998), the study where observers do not notice a fight happening a few feet from them (Chabris et al., 2011)). Instead, these papers consistently ask a series of follow-up questions and even tell the observers what just occurred to confirm that observers did not notice that critical event (e.g., "If I were to tell you we just did XYZ, did you notice that?"). In fact, after a brief search on Google Scholar, I was able to relatively quickly find over a dozen papers that do not just use a yes/no procedure, and instead ask a series of multiple questions to determine if someone is inattentionally blind. In no particular order some papers (full disclosure: including my own):

      (1) Most et al. (2005) Psych Review

      (2) Drew et al. (2013) Psych Science

      (3) Drew et al. (2016) Journal of Vision

      (4) Simons et al. (1999) Perception

      (5) Simons and Levin (1998) Perception

      (6) Chabris et al. (2011) iPerception

      (7) Ward & Scholl (2015) Psych Bulletin and Review

      (8) Most et al. (2001) Psych Science

      (9) Todd & Marois (2005) Psych Science

      (10) Fougnie & Marois (2007) Psych Bulletin and Review

      (11) New and German (2015) Evolution and Human Behaviour

      (12) Jackson-Nielsen (2017) Consciousness and cognition

      (13) Mack et al. (2016) Consciousness and cognition

      (14) Devue et al. (2009) Perception

      (15) Memmert (2014) Cognitive Development

      (16) Moore & Egeth (1997) JEP:HPP

      (17) Cohen et al. (2020) Proc Natl Acad Sci

      (18) Cohen et al. (2011) Psych Science

This is a critical point. The authors' key idea is that when you ask more than just a simple yes/no question, you find that other studies have overestimated the effects of inattentional blindness. But none of the studies listed above only asked simple yes/no questions. Thus, I believe the authors are misrepresenting the field. Moreover, many of the studies that do much more than ask a simple yes/no question are cited by the authors themselves! Furthermore, as far as I can tell, the authors believe that if researchers do these extra steps and ask more follow-ups, then the results are valid. But since so many of these prior studies do those extra steps, I am not exactly sure what is being criticized.

      To make sure this point is clear, I'd like to use a paper of mine as an example. In this study (Cohen et al., 2020, Proc Natl Acad Sci USA) we used gaze-contingent virtual reality to examine how much color people see in the world. On the critical trial, the part of the scene they fixated on was in color, but the periphery was entirely in black and white. As soon as the trial ended, we asked participants a series of questions to determine what they noticed. The list of questions included:

      (1) "Did you notice anything strange or different about that last trial?"

      (2) "If I were to tell you that we did something odd on the last trial, would you have a guess as to what we did?"

      (3) "If I were to tell you we did something different in the second half of the last trial, would you have a guess as to what we did?"

      (4) "Did you notice anything different about the colors in the last scene?"

      (5) We then showed observers the previous trial again and drew their attention to the effect and confirmed that they did not notice that previously.

      In a situation like this, when the observers are asked so many questions, do the authors believe that "the inattentionally blind can see after all?" I believe they would not say that and the reason they would not say that is because of the follow-up questions after the initial yes/no question. But since so many previous studies use similar follow-up questions, I do not think you can state that the field is broadly overestimating inattentional blindness. This is why it seems to me to be a bit of a strawman: most people do not just use the yes/no method.

      We appreciate this reviewer raising this issue. As he (Dr. Cohen) states, his “primary issue” concerns our discussion of the broader literature (which he worries understates recent improvements made to the IB methodology), rather than, e.g., the experiments we’ve run. We take this concern very seriously and address it comprehensively here.

      A very similar issue is identified by Reviewer #1, comment (7). To review some of what we say in reply to them: To address the concern we have added to our manuscript a substantial new discussion of such improved methods. However, although we do agree that these methods can be helpful and may well address some of the methodological concerns which our paper raises, we do not think that they are a panacea. Thus, our discussion of these methods also includes a substantial discussion of the problems and pitfalls with such methods which led us to favor our own simple forced-response and 2afc questions, combined with SDT analysis. We think this approach is superior both to the classic approach in IB studies and to the approach raised by the reviewers.

      In particular, we have three main concerns about the follow up questions now commonly used in the field:

First, many follow-up questions are used not to exclude subjects from the IB group but to include subjects in the IB group. Thus, Most et al. (2001) asked follow-up questions but used these to increase their IB group, excluding subjects from the IB group only if they both reported seeing the stimulus and were able to answer the follow-up questions correctly: “Observers were regarded as having seen the unexpected object if they answered 'yes' when asked if they had seen anything on the critical trial that had not been present before and if they were able to describe its color, motion, or shape.” This means that subjects who saw the object but failed to describe it in these respects would be treated as inattentionally blind. This is problematic since failure to describe a feature (e.g., color, shape) does not imply a complete lack of information concerning that feature; and even if a subject did lack all information concerning these features of an object, this would not imply a complete failure to see the object. Similarly, Pitts et al. (2012) asked subjects to rate their confidence in their initial yes/no response from 1 = least confident to 5 = most confident, and used these ratings to include in the IB group those who rated their confidence in seeing at 3 or less. This is evidently problematic, since there is a large gap between being underconfident that one saw something and being completely blind to it. More generally, using follow-ups to inflate IB rates in these ways raises precisely the kinds of issues our paper is intended to critique. So in our view this isn’t an improvement but rather part of the approach we take issue with.

Second, many follow-up questions remain yes/no questions or nearby variants, all of which are subject to response bias. For example, in the reviewer’s own studies (Cohen et al., 2020, 2011; see also: Simons et al., 1999; Most et al., 2001, 2005; Drew et al., 2013; Memmert, 2014), a series of follow-up questions is used to try to ensure that subjects who noticed the critical stimuli are given the maximum opportunity to report doing so, e.g.:

      (1) “Did you notice anything strange or different about that last trial?”

      (2) “If I were to tell you that we did something odd on the last trial, would you have a guess as to what we did?”

      (3) “If I were to tell you we did something different in the second half of the last trial, would you have a guess as to what we did?”

      (4) “Did you notice anything different about the colors in the last scene?”

We certainly agree that such follow-up questions improve over a simple yes/no question in some ways. However, such follow-up probes nonetheless remain yes/no questions, intrinsically subject to response bias. Indeed, follow-up questions of this kind can be especially susceptible to bias, since subjects may be reluctant to “take back” their earlier answers and so be conservative in responding positively to avoid inconsistency or acknowledgement of earlier error. This may explain why such follow-up questions produce remarkable consistency despite their rather different wording. Thus, Simons and Chabris (1999) report: “Although we asked a series of questions escalating in specificity to determine whether observers had noticed the unexpected event, only one observer who failed to report the event in response to the first question (“did you notice anything unusual?”) reported the event in response to any of the next three questions (which culminated in “did you see a ... walk across the screen?”). Thus, since the responses were nearly always consistent across all four questions, we will present the results in terms of overall rates of noticing.” So while there are undoubtedly merits to these follow-ups, they do not resolve problems of bias.

It is also important to recognize that whereas 2afc questions are criterion-free (in that they naturally have an unbiased decision rule), this is not true of n-afc or delayed n-alternative match-to-sample designs in general. Performance in such tasks thus requires SDT analysis – which itself may be problematic if the decision space is not properly understood or requires making substantial assumptions about observer strategy.
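To make this concrete, here is a minimal equal-variance signal detection sketch (our own illustration; the d′ and criterion values are assumed for exposition, not estimated from any study). It shows how a group of observers with genuine sensitivity to the unexpected stimulus can still mostly answer “no” to a yes/no noticing question when their criterion is conservative, while 2afc accuracy, which depends only on d′, remains well above chance:

```python
# Equal-variance Gaussian SDT model (illustrative values only).
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

d_prime = 1.0    # assumed true sensitivity to the unexpected stimulus
criterion = 1.5  # assumed conservative criterion for saying "yes, I saw it"

# Yes/no noticing question: the report rate depends on the criterion.
p_report = phi(d_prime - criterion)   # ~0.31: most observers say "no"

# 2afc question: accuracy depends only on d', not on the criterion.
p_2afc = phi(d_prime / sqrt(2))       # ~0.76: well above chance

print(f"yes/no report rate: {p_report:.2f}, 2afc accuracy: {p_2afc:.2f}")
```

On these assumed values, roughly 69% of observers would be classified as inattentionally blind by the yes/no question even though group 2afc performance is about 76% correct – precisely the pattern our experiments are designed to detect.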

Third, and finally, many follow-up questions are insufficiently sensitive (especially with small sample sizes). For instance, Todd, Fougnie & Marois (2005) used a 12-alternative match-to-sample task (see similarly: Fougnie & Marois, 2007; Devue et al., 2009). And Most et al. (2005) asked an open-response follow-up: “If you did see something on the last trial that had not been present during the first two trials, what color was it? If you did not see something, please guess.” These questions are more difficult, and to that extent less sensitive, than the binary forced-response/2afc questions we use in our own studies – a difference which may be critical in uncovering degraded perceptual sensitivity.

For all these reasons, then, while we agree that the field has taken significant steps to move beyond the simple yes/no question traditionally used in IB studies (and we have revised our manuscript to make this clear), we do not think it has resolved the methodological issues which our paper seeks to highlight and address, and we believe that our approach of using 2afc or forced-response questions combined with signal detection analysis is an important improvement on prior methods and contributes something additional that is not yet present in the literature. We have now revised our manuscript to make these points much clearer.

      Other studies that improve on the standard methodology

This reviewer adds something else, however: a very helpful list of 18 papers which include follow-ups and which he believes overcome many of the issues we raise in our paper. To state our reaction bluntly: we are familiar with every one of these papers (indeed, one of them is a paper by one of us!), and while we think they are all very valuable contributions to the literature, it is our view that none of them resolves the worries that led us to conduct our work.

      Here we briefly comment on the relevant pitfalls in each case. We hope this serves to underscore the importance of our methodological approach.

      (1) Most et al. (2005) Psych Review

      Either a 2-item or 5-item questionnaire was used. The 2-item questionnaire ran as follows:

      (1) On the last trial, did you see anything other than the 4 circles and the 4 squares (anything that had not been present on the original two trials)? Yes No 

      (2) If you did see something on the last trial that had not been present during the original two trials, please describe it in as much detail as possible.

      This clearly does not substantially improve on the traditional simple yes/no question. Moreover, the second question (as well as being open-ended) was used to include additional subjects in the IB group, in that participants were counted as having seen the object only if they responded “yes” to Q1 and in addition “were able to report at least one accurate detail” in response to Q2. In other words, either a subject says “no” (and is treated as unaware), or says “yes” and then is asked to prove their awareness, as it were. If anything, this intensifies the concerns we raise, by inflating IB rates. 

      The 5-item questionnaire looked like this: 

      (1) On the last trial, did you see anything other than the black and white L’s and T’s (anything that had not been present on the first two trials)?

      (2) If you did see something on the last trial that had not been present during the first two trials, please describe it.

      (3) If you did see something on the last trial that had not been present during the first two trials, what color was it? If you did not see something, please guess. (Please indicate whether you did see something or are guessing)

      (4) If you did see something during the last trial that had not been present in the first two trials, please draw an arrow on the “screen” below showing the direction in which it was moving. If you did not see something, please guess. (Please indicate whether you did see something or are guessing)

      (5) If you did see something during the last trial that had not been present during the first two trials, please circle the shape of the object below [4 shapes are presented to choose from]. If you did not see anything, please guess. (Please indicate whether you did see something or are guessing)

Q5 was not used for analysis purposes. (It suffers from the second issue raised above.) Q1 is the traditional yes/no question. Qs 2 & 3 are open-ended. It is unclear how responses to Q4 were analyzed (at the limit it could be considered a helpful forced-choice question – though it again would suffer from the second issue raised above). However, as noted with respect to the 2-item questionnaire, these responses were not used to exclude people from the IB group but to include people in it. So again, this approach does not in any way address the issues we are concerned about and, if anything, only makes them worse.

      (2)  Drew et al. (2013) Psych Science

All follow-ups were yes/no: “we asked a series of questions to determine whether they noticed the gorilla: ‘Did the final trial seem any different than any of the other trials?’, ‘Did you notice anything unusual on the final trial?’, and, finally, ‘Did you see a gorilla on the final trial?’”. So this paper essentially implements the standard methodology we mention (and criticize).

      (3)  Drew et al. (2016) Journal of Vision

Follow-up questions were used, but the reported procedure does not provide sufficient details to evaluate them (we are only told: “After the final trial, they were asked: ‘On that last trial of the task, did you notice anything that was not there on previous trials?’ They then answered questions about the features of the unexpected stimulus on a separate screen (color, shape, movement, and direction of movement).”). It is not clear that these follow-ups were used to exclude any subjects from the analysis. Finally, given that the unexpected object could be the same color as the targets/distractors, it is clear that biases would have been introduced which would need to be considered (but which were not).

      (4)  Simons & Chabris (1999) Perception

All follow-ups were yes/no: “observers were … asked to provide answers to a surprise series of additional questions. (i) While you were doing the counting, did you notice anything unusual on the video? (ii) Did you notice anything other than the six players? (iii) Did you see anyone else (besides the six players) appear on the video? (iv) Did you see a gorilla [woman carrying an umbrella] walk across the screen? After any “yes” response, observers were asked to provide details of what they noticed. If at any point an observer mentioned the unexpected event, the remaining questions were skipped.” As noted previously, the analyses in fact did not use these questions to exclude subjects, since answers were so consistent.

      (5)  Simons and Levin (1998) Perception

This is a change detection paradigm, not a study of inattentional blindness. And in any case, one yes/no follow-up was used: “Did you notice that I'm not the same person who approached you to ask for directions?”

      (6)  Chabris et al. (2011) iPerception

Two yes/no questions were asked: “we asked whether the subjects had seen anything unusual along the route, and then whether they had seen anyone fighting.” It seems that follow-up questions (a request to describe the fight) were asked only of those who said yes.

This is in fact a common procedure – follow-up questions being asked only of the “yes” group. As discussed, it is sometimes used to increase rates of IB, compounding the problem we identify in our paper. So this is another example of a follow-up question that makes that problem worse, not better.

      (7) Ward & Scholl (2015) Psych Bulletin and Review

      Two yes/no questions were used: “...observers were asked whether they noticed ‘anything … that was different from the first three trials’ — and if so, to describe what was different. They were then shown the gray cross and asked if they had noticed it—and if so, to describe where it was and how it moved. Only observers who explicitly reported not noticing the cross were counted as ‘nonnoticers’ to be included in the final sample (N = 100).” In each case, combining the traditional noticing question with a request to describe and identify may have induced conservative response biases in the noticing question, since a subject might consider being able to describe or identify the unexpected stimulus a precondition of giving a positive answer to the noticing question.

      (8) Most et al. (2001) Psych Science

      The same 5-item questionnaire discussed above in relation to Most et al. (2005) was used: 

      (1) On the last trial, did you see anything other than the black and white L’s and T’s (anything that had not been present on the first two trials)?

      (2)   If you did see something on the last trial that had not been present during the first two trials, please describe it.

      (3) If you did see something on the last trial that had not been present during the first two trials, what color was it? If you did not see something, please guess. (Please indicate whether you did see something or are guessing)

      (4) If you did see something during the last trial that had not been present in the first two trials, please draw an arrow on the “screen” below showing the direction in which it was moving. If you did not see something, please guess. (Please indicate whether you did see something or are guessing)

      (5) If you did see something during the last trial that had not been present during the first two trials, please circle the shape of the object below [4 shapes are presented to choose from]. If you did not see anything, please guess. (Please indicate whether you did see something or are guessing)

Q5 was not used for analysis purposes. (It suffers from the second issue raised above.) Q1 is the traditional yes/no question. Qs 2 & 3 are open-ended. It is unclear how responses to Q4 were analyzed (at the limit it could be considered a helpful forced-choice question – though it again would suffer from the second issue raised above). However, as noted with respect to the 2-item questionnaire in Most et al. (2005), these responses were not used to exclude people from the IB group but to include people in it. So again, this approach does not in any way address the issues we are concerned about and, if anything, only makes them worse.

      (9) Todd, Fougnie & Marois (2005) Psych Science

“participants were probed with three questions to determine whether they had detected the critical stimulus ... . The first question assessed whether subjects had seen anything unusual during the trial; they responded ‘yes’ or ‘no’ by pressing the appropriate key on the keyboard. The second question asked participants to select which stimulus they might have seen among 12 possible objects and symbols selected from MacIntosh font databases. The third question asked participants to select the quadrant in which the critical stimulus may have appeared by pressing one of four keys, each of which corresponded to one of the quadrants.”

These follow-ups were used to include people in the IB group: “In keeping with previous studies (Most et al., 2001), participants were considered to have detected the critical stimulus successfully if they (a) reported seeing an unexpected stimulus and (b) correctly selected its quadrant location.” In line with our third point about sensitivity, the object identity test transpired to be “too difficult even under full-attention conditions … Thus, performance with this question was not analyzed further.”

      (10) Fougnie & Marois (2007) Psych Bulletin and Review

      Same exact methods and problems as with Todd & Marois (2005) Psych Science, just discussed.

      (11) New and German (2015) Evolution and Human Behaviour

      “After the fourth trial containing the additional experimental stimulus, the participant was asked, “Did you see anything in addition to the cross on that trial?” and which quadrant the additional stimulus appeared in. They were then asked to identify the stimulus in an array which in Experiment 1 included two variants chosen randomly from the spider stimuli and the two needle stimuli. Participants in Experiment 2 picked from all eight stimuli used in that experiment.”

Our second concern, about response biases and the need for appropriate SDT analysis of the 4- and 8-alternative tasks, applies to all these questions. We also note that analyses were only performed on groups separately (those who detected/failed to detect, those who located/failed to locate, and those who identified/failed to identify) and on the group which did all three/failed to do any one of the three. Especially given that some subjects could clearly detect the stimulus without being able to identify it, the most stringent test in light of our concerns (which were not obviously New and German’s comparative concerns) would be to consider the group which could not detect, identify, or localize.

      (12) Jackson-Nielsen (2017) Consciousness and cognition

      This is a very interesting example of a follow-up which used a 3-AFC recognition test:

“participants were immediately asked, ‘which display looks most like what you just saw?’ from 3 alternatives”. However, though such an objective test is in our view definitely to be preferred to an open-ended series of probes, the 3-AFC test administered clearly had issues with response biases, as discussed, and actually yielded significantly below-chance performance in one of the experiments.

      (13) Mack et al. (2016) Consciousness and cognition

The follow-ups here were essentially yes/no combined with an assessment of surprise. Participants were asked to enter letters into a box, and if they did so “were immediately asked by the experimenter whether they had noticed anything different about the array on this last trial and if they did not, they were told that there had been no letters and their responses to that news were recorded. Clearly, if they expressed surprise, this would be compelling evidence that they were unaware of the absence of the letters. Those observers who did not enter letters and realized there were no letters present were considered aware of the absence.” So this again has all of the same problems we identify, treating subjects as unaware because they expressed surprise.

      (14) Devue et al. (2009) Perception

An 8-alternative task was used. The authors were primarily interested in a comparative analysis and so did not use this task to exclude subjects. We note that an 8-alternative task is very demanding – compare the 12-alternative task used in Todd, Fougnie & Marois (2005). There was an attempt to investigate biases in a separate bias trial; however, SDT measures were not used.

      (15) Memmert (2014) Cognitive Development

      “After watching the video and stating the number of passes, participants answered four questions (following Simons & Chabris, 1999): (1) While you were counting, did you perceive anything unusual on the video? (2) Did you perceive anything other than the six players? (3) Did you see anyone else (besides the six players) appear on the video? (4) Did you notice a gorilla walk across the screen? After any “yes” reply, children were asked to provide details of what they noticed. If at any point a child mentioned the unexpected event, the remaining questions were omitted.” All of these follow-up questions are yes/no judgments, used to determine awareness in exactly the way we critique as problematic.

      (16) Moore & Egeth (1997) JEP:HPP

This study (which includes one of us, Egeth, as author) did use forced-choice questions. In one case the question was 2-alternative; in the other it was 4-alternative. In the latter case, SDT would have been appropriate but was not used. In the former case, it may be that a larger sample would have revealed evidence of sensitivity to the background pattern (as it stood, 55% answered the 2-alternative question correctly). Although these results have been replicated, unfortunately the replication in Wood and Simons (2019) used a 6-alternative recognition task, and this was not analyzed using SDT. We also note that the task in this study is rather difficult. Wood and Simons report: “Exclusion rates were much higher than anticipated, primarily due to exclusions when subjects failed to correctly report the pattern on the full-attention trial; we excluded 361 subjects, or 58% of our sample.”
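To illustrate the sample-size point, here is a back-of-the-envelope power calculation (our own sketch, not from Moore & Egeth or Wood & Simons; the conventional alpha = .05 and 80% power figures are assumptions we supply). Reliably distinguishing 55% correct from the 50% chance level in a 2-alternative question requires on the order of 800 subjects:

```python
# Normal-approximation power calculation for a one-sample proportion test.
from statistics import NormalDist

p0, p1 = 0.50, 0.55        # chance vs. hypothesized true accuracy (assumed)
alpha, power = 0.05, 0.80  # conventional two-sided alpha and target power

z = NormalDist().inv_cdf
n = ((z(1 - alpha / 2) * (p0 * (1 - p0)) ** 0.5
      + z(power) * (p1 * (1 - p1)) ** 0.5) / (p1 - p0)) ** 2

print(round(n))  # ~783 subjects needed
```

Typical inattentional blindness samples are one or two orders of magnitude smaller than this, which is why a non-significant 55% can easily mask real residual sensitivity.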

      (17) Cohen et al. (2020) Proc Natl Acad Sci

While this paper improves over a simple yes/no question in some ways, especially in that it used the follow-up questions to exclude subjects from the unaware (IB) group, the follow-up probes nonetheless remain yes/no questions, subject to response bias, e.g.:

      (1) “Did you notice anything strange or different about that last trial?”

      (2) “If I were to tell you that we did something odd on the last trial, would you have a guess as to what we did?”

      (3) “If I were to tell you we did something different in the second half of the last trial, would you have a guess as to what we did?”

      (4) “Did you notice anything different about the colors in the last scene?”

Follow-up questions of this kind can be especially susceptible to bias, since subjects may be reluctant to “take back” their earlier answers and so be conservative in responding positively to avoid inconsistency or acknowledgement of earlier error. This may explain why such follow-up questions can produce remarkable consistency despite their rather different wording.

      (18) Cohen et al. (2011) Psych Science

      Here are the probes used in this study:

      (1) Did you notice anything different on that trial?

      (2) Did you notice something different about the background stream of images?

      (3) Did you notice that a different type of image was presented in the background that was unique in some particular way?

      (4) Did you see an actual photograph of a natural scene in that stream?

      (5) If I were to tell you that there was a photograph in that stream, can you tell me what it was a photograph of?

Qs 1-4 are yes/no. Q5 is yes/no with an open-ended response. After this, a 5- or 6-alternative recognition test was administered. So again, this faces the same issues, since yes/no questions are subject to bias in the way we have described, and many-alternative tests are more problematic than 2afc tests.

      In summary

      We really appreciate the care that went into compiling this list, and we agree that these papers and the improved methods they contain are relevant. But as hopefully made clear above, the approaches in each of these papers simply don’t solve the foundational issues our critique is aimed at (though they may address other issues). This is why we felt our new approach was necessary. And we continue to feel this way even after reading and incorporating these comments from Dr. Cohen.

Nevertheless, there is clearly lots for us to do in light of these comments. And so, as noted earlier, we have now added a very substantial new section to our discussion to more fairly and completely portray the state of the art in this literature. This is really to our benefit in the end, since we now not only better acknowledge the diverse approaches present, but also set ourselves up to make our novel contribution exceedingly clear.

      Main point 2: Let's imagine for a second that every study did just ask a yes/no question and then would stop. So, the criticism the authors are bringing up is valid (even though I believe it is not). I am not entirely sure that above chance performance on a forced choice task proves that the inattentionally blind can see after all. Could it just be a form of subliminal priming? Could there be a significant number of participants who basically would say something like, "No I did not see anything, and I feel like I am just guessing, but if you want me to say whether the thing was to the left or right, I will just 100% guess"? I know the literature on priming from things like change and inattentional blindness is a bit unclear, but this seems like maybe what is going on. In fact, maybe the authors are getting some of the best priming from inattentional blindness because of their large sample size, which previous studies do not use.

I'm curious how the authors would relate their studies to masked priming. In masked priming studies, observers say they did not see the target (like in this study) but still are above chance when forced to guess (like in this study). Do the researchers here think that that is evidence of "masked stimuli are truly seen" even if a participant openly says they are guessing?

      We’re grateful to the reviewer for raising this question. As we say in response to Reviewer #1, our primary ambition in the paper is to establish, as our title suggests, residual sensitivity in IB. The ambition is quite neutral as to whether the sensitivity reflects conscious or unconscious processing (i.e. is akin to blindsight as traditionally conceived, or what the reviewer here suggests may be happening in masked priming). Since we were evidently insufficiently clear about this we have revised our manuscript in several places to clarify that we take our data primarily to support the more modest claim that there is residual sensitivity (conscious or unconscious) in the group of subjects who are traditionally classified as inattentionally blind. We believe that this claim has much more solid support in our data than our secondary and tentative suggestion about awareness.

      This said, we do consider masked priming studies to be susceptible to the critique that performance may reflect degraded conscious awareness which is unreported because of conservative response criteria. There is good evidence that response criteria tend to be conservative near threshold (Björkman et al. 1993; see also: Railo et al. 2020), including specifically in masked priming studies (Sand 2016, cited in Phillips 2021). So, we consider it a perfectly reasonable hypothesis that subjects who say they feel they are guessing in fact have conscious access to a degraded signal which is insufficient to reach a conservative response criterion but nonetheless sufficient to perform above chance in 2afc detection. Of course, we appreciate that this hypothesis is controversial, so it is not one we argue for in our paper (though we are happy to share our feelings about it here).

      Main point 3: My last question is about how the authors interpret a variety of inattentional blindness findings. Previous work has found that observers fail to notice a gorilla in a CT scan (Drew et al., 2013), a fight occurring right in front of them (Chabris et al., 2011), a plane on a runway that pilots crash into (Haines, 1991), and so forth. In a situation like this, do the authors believe that many participants are truly aware of these items but simply failed to answer a yes/no question correctly? For example, imagine the researchers made participants choose if the gorilla was in the left or right lung and some participants who initially said they did not notice the gorilla were still able to correctly say if it was in the left or right lung. Would the authors claim "that participant actually did see the gorilla in the lung"? I ask because it is difficult to understand what it means to be aware of something as salient as a gorilla in a CT scan, but say "no" you didn't notice it when asked a yes/no question. What does it mean to be aware of such important, ecologically relevant stimuli, but not act in response to them and openly say "no" you did not notice them?

Our view is that in such cases, observers may well have a “degraded” percept of the relevant feature (gorilla, plane, fight, etc.). But crucially we do not suggest that this percept is sufficient for observers to recognize the object/event as a gorilla, plane, fight, etc. Our claim is only that, in our studies at least, observers (as a group) do have enough information about the unexpected stimuli to locate them, and to discriminate certain low-level features better than chance. Crudely, it may be that subjects see the gorilla simply as a smudge or the plane as a shadowy patch. (One of us, who is familiar with the gorilla CT scan stimuli, notes that the gorilla is in fact rather hard to see even when you know which slide it is on, suggesting that it is not as “salient” as the reviewer suggests!)

      More precisely, in the paper we write that in our view perhaps “...unattended stimuli are encoded in a partial or degraded way. Here we see a variety of promising options for future work to investigate. One is that unattended stimuli are only encoded as part of ensemble representations or summary scene statistics (Rosenholtz, 2011; Cohen et al., 2016). Another is that only certain basic “low-level” or “preattentive” features (see Wolfe & Utochkin, 2019 for discussion) can enter awareness without attention. A final possibility consistent with the present data is that observers can in principle be aware of individual objects and higher-level features under inattention but that the precision of the corresponding representations is severely reduced. Our central aim here is to provide evidence that awareness in inattentional blindness is not abolished. Further work is needed to characterize the exact nature of that awareness.” We hope this sheds light on our perspective while still being appropriately cautious not to go too far beyond our data.

      Overall: I believe there are many aspects of this set of studies that are innovative and I hope the methods will be used more broadly in the literature. However, I believe the authors misrepresent the field and overstate what can be interpreted from their results. While I am sure there are cases where more nuanced questions might reveal inattentional blindness is somewhat overestimated, claims like "the inattentionally blind can see after all" or "Inattentionally blind subjects consciously perceive these stimuli after all" seem to be incorrect (or at least not at all proven by this data).

      Once again, we would like to thank this reviewer for his feedback, which obviously comes from a place of tremendous expertise on these issues. We appreciate his assessment that our studies are innovative and that our methodological advances will be of use more broadly. We also hear the reviewer loud and clear about the passages in question, which on reflection we agree are not as central to our case as the other claims we make (regarding residual sensitivity and conservative responding), and so we have now edited them accordingly to refocus our discussion on only those claims that are central and supported. Thank you for making our paper stronger!

      Reviewer #3 (Public review):

      Summary:

      Authors try to challenge the mainstream scientific as well as popularly held view that Inattentional Blindness (IB) signifies subjects having no conscious awareness of what they report not seeing (after being exposed to unexpected stimuli). They show that even when subjects indicate NOT having seen the unexpected stimulus, they are at above chance level for reporting features such as location, color or movement of these stimuli. Also, they show that 'not seen' responses are in part due to a conservative bias of subjects, i.e. they tend to say no more than yes, regardless of actual visibility. Their conclusion is that IB may not (always) be blindness, but possibly amnesia, uncertainty etc.

      We just thought to say that we felt this was a very accurate summary of our claims, one that in many ways underscores the modesty we had hoped to convey. This is especially true of the reviewer’s final sentence: “Their conclusion is that IB may not (always) be blindness, but possibly amnesia, uncertainty etc.”; as we noted in response to other reviewers, our claim is not that IB doesn’t exist, that subjects are always conscious of the stimulus, etc.; it is only that the cohort of IB subjects show sensitivity to the unattended stimulus in ways that suggest they are not as blind as traditionally conceived. Thank you for reading us as intended!

      Strengths:

      A huge pool of (25,000) subjects is used. They perform several versions of the IB experiments, both with briefly presented stimuli (as the classic Mack and Rock paradigm), as well as with prolonged stimuli moving over the screen for 5 seconds (a bit like the famous gorilla version), and all these versions show similar results, pointing in the same direction: above chance detection of unseen features, as well as conservative bias towards saying not seen.

      We’re delighted that the reviewer appreciated these strengths in our manuscript!

      Weaknesses:

      Results are all significant but effects are not very strong, typically a bit above chance. Also, it is unclear what to compare these effects to, as there are no control experiments showing what performance would have been in a dual-task version where subjects have to also report features etc. for stimuli that they know will appear in some trials.

      The backdrop to the experiments reported here is the “consensus view” (Noah & Mangun, 2020) according to which inattention completely abolishes perception, such that subjects undergoing IB “have no awareness at all of the stimulus object” (Rock et al., 1992) and that “one can have one’s eyes focused on an object or event … without seeing it at all” (Carruthers, 2015). In this context, we think our findings of significant above-chance sensitivity (e.g., d′ = 0.51 for location in Experiment 1; chance, of course, would be d′ = 0 here) are striking and constitute strong evidence against the consensus view. We of course agree that the residual sensitivity is far lower than amongst subjects who noticed the stimulus. For this reason, we certainly believe that inattention has a dramatic impact on perception. To that extent, our data speak in favor of a “middle ground” view on which inattention substantially degrades but crucially does not abolish perception/explicit encoding. We see this as an importantly neglected option in a literature which has overly focused on seen/not seen binaries (see our section ‘Visual awareness as graded’).
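      For readers less familiar with signal detection measures, the sensitivity index d′ and the response criterion c mentioned here can be computed from group-level hit and false-alarm rates under the standard equal-variance model. The sketch below is purely illustrative (the rates are made up, not data from these experiments); chance sensitivity corresponds to d′ = 0, and a positive c indicates the kind of conservative ("no"-biased) responding discussed above.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance signal detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response criterion c; positive values indicate conservative responding."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Equal hit and false-alarm rates -> no sensitivity (chance, d' = 0)
print(d_prime(0.5, 0.5))  # 0.0

# Few "yes" responses overall -> positive c, i.e. a conservative bias
print(criterion(0.3, 0.1) > 0)  # True
```

Note that a group can show d′ > 0 with hypothetical rates like these even while most individual responses are "not seen", which is the pattern at issue in the debate.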

      Regarding the absence of a control condition, we think those conditions wouldn’t have played the same role in our experiments as they typically play in other experiments. As Reviewer #1 comments, the main role of such trials in previous work has been to exclude from analysis subjects who failed to report the unexpected stimulus on the divided and/or full attention control trials. As Reviewer #1 points out, excluding such subjects would very likely have ‘helped’ us. However, the practice is controversial. Indeed, in a review of 128 experiments, White et al. 2018 argue that the practice has “problematic consequences” and “may lead researchers to understate the pervasiveness of inattentional blindness". Since we wanted to offer as simple and demanding a test of residual sensitivity in IB as possible, we thus decided not to use any exclusions, and for that reason decided not to include divided/full attention trials.

      As recommended, we discuss this decision not to include divided/full attention trials and our logic for not doing so in the manuscript. As we explain, not having those conditions makes it more impressive, not less impressive, that we observed the results we in fact did — it makes our results more interpretable, not less interpretable, and so absence of such conditions from our manuscript should not (in our view) be considered any kind of weakness.

      There are quite some studies showing that during IB, neural processing of visual stimuli continues up to high visual levels, for example, Vandenbroucke et al 2014 doi:10.1162/jocn_a_00530 showed preserved processing of perceptual inference (i.e. seeing a kanizsa illusion) during IB. Scholte et al 2006 doi: 10.1016/j.brainres.2005.10.051 showed preserved scene segmentation signals during IB. Compared to the strength of these neural signatures, the reported effects may be considered not all that surprising, or even weak.

      We agree that such evidence of neural processing in IB is relevant to — and perhaps indeed consistent with — our picture, and we’re grateful to the reviewer for pointing out further studies along those lines. Previously, we mentioned a study from Pitts et al., 2012 in which, as we wrote, “unexpected line patterns have been found to elicit the same Nd1 ERP component in both noticers and inattentionally blind subjects (Pitts et al., 2012).” We have added references to both the studies which the reviewer mentions – as well as an additional relevant study – to our manuscript in this context. Thank you for the helpful addition.

      We do however think that our studies are importantly different to this previous work. Our question is whether processing under IB yields representations which are available for explicit report and so would constitute clear evidence of seeing, and perhaps even conscious experience. As we discuss, evidence for this kind of processing remains wanting: “A handful of prior studies have explored the possibility that inattentionally blind subjects may retain some visual sensitivity to features of IB stimuli (e.g., Schnuerch et al., 2016; see also Kreitz et al., 2020, Nobre et al., 2020). However, a recent meta-analysis of this literature (Nobre et al., 2022) argues that such work is problematic along a number of dimensions, including underpowered samples and evidence of publication bias that, when corrected for, eliminates effects revealed by earlier approaches, concluding “that more evidence, particularly from well-powered pre-registered experiments, is needed before solid conclusions can be drawn regarding implicit processing during inattentional blindness” (Nobre et al., 2022).” Our paper is aimed at addressing this question which evidence of neural processing can only speak to indirectly.

      Recommendations for the authors:  

      Reviewer #1 (Recommendations for the authors):

      (1) Please report all of the data, especially the number of subjects in each experiment that answered Y/N and the numbers of subjects in each of the Y and N groups that guessed a feature correctly/incorrectly on the 2AFC tasks. And also the confidence ratings for the 2AFC task (for comparison with the confidence ratings on the Y/N questions).

      We now report all this data in our (revised) Supplementary Materials. We agree that this information will be helpful to readers.

      (2) Consider adding a control condition with partial attention (dual task) or full attention (single task) to estimate the rates of seeing the critical stimulus when it's expected.

      This is the only recommendation we have chosen not to implement. The reason, as we explain in detail above (especially in response to Reviewer #1 comment 5), is that this would not in fact be a “control condition” in our studies, and indeed would only inflate the biases we are concerned with in our work. As the referee comments, the main role of such trials in previous work has been to exclude from analysis subjects who failed to report the unexpected stimulus on the divided and/or full attention control trials. And the practice is controversial: Indeed, in a review of 128 experiments, White et al. 2018 argue that the practice has “problematic consequences” and “may lead researchers to understate the pervasiveness of inattentional blindness" (emphasis added). So, our choice not to have such conditions ensures an especially stringent test of our central claim. Not having those conditions (and their accompanying exclusions) makes our results more interpretable, not less interpretable, and so the absence of such conditions from our manuscript should not (in our view) be considered any kind of weakness.

      We have added a paragraph to our “Design and analytical approach” section explaining the logic behind our deliberate decision not to include divided or full attention trials in our experiments. (For even fuller discussion, see our response to Reviewer #1’s comment 5 above.)

      (3) Consider revising the interpretations to be more precise about the distinction between the super subject being above chance versus each individual subject who cannot be at chance or above chance because there was only a single trial per subject.

      We have now done this throughout the manuscript, as discussed above. We have also added a substantive additional discussion to our “Design and analytical approach” section discussing what should be said about individual subjects in light of our group level data.

      This was a very helpful point, and greatly clarifies the claims we wish to make in the paper. Thank you for this comment, which has certainly made our paper stronger.

      Reviewer #2 (Recommendations for the authors):

      I would be curious to hear the authors' response to two points:

      (1) What do they have to say about prior studies that do more than just ask yes/no questions (and ask several follow-ups)? Are those studies "valid"?

      A very substantial new discussion of this important point has been added. As you will see above, we comment on every one of the 18 papers this reviewer raised (as well as the general argument made); we contend that while many of these papers improve on past methodology in various ways, most in fact do “just ask yes/no questions”, and none of them makes the methodological advance we offer in our manuscript. However, this discussion has helped us clarify that very advance, and so working through this issue has really helped us improve our paper and make its relation to existing literature that much clearer. Thank you for raising this crucial point.

      (2) Do the authors think it is possible that in many cases, people are just guessing about a critical item's location or color and this is at least in part a form of priming?

      We have clarified our discussion in numerous places to further emphasize that our main point concerns above-chance sensitivity, not awareness. Given this, we take very seriously the hypothesis that something like priming of a kind sometimes proposed to occur in cases of blindsight or other putative cases of unconscious perception could be what is driving the responses in non-noticers.

      Reviewer #3 (Recommendations for the authors):

      (1) Control dual task version with expected stimuli would be nice

      We have added a paragraph to our “Design and analytical approach” section explaining the logic behind our deliberate decision not to include divided or full attention trials, which would not in fact be a “control” task in our experiments. For full discussion, see our response to Reviewer 3 above, as well as our summary here in the Recommendations for Authors section in responding to Reviewer 1, recommendation (2).

      (2) Please do a better job in discussing and introducing experiments about neural signatures during IB.

      A discussion of Vandenbroucke et al. 2014 and Scholte et al. 2006 has been added to our discussion of neural signatures in IB, as well as an additional reference to an important early study of semantic processing in IB (Rees et al., 1999). Thank you for these very helpful suggestions!

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In this manuscript, Dong et al. study the directed cell migration of tracheal stem cells in Drosophila pupae. The migration of these cells which are found in two nearby groups of cells normally happens unidirectionally along the dorsal trunk towards the posterior. Here, the authors study how this directionality is regulated. They show that inter-organ communication between the tracheal stem cells and the nearby fat body plays a role. They provide compelling evidence that Upd2 production in the fat body and JAK/STAT activation in the tracheal stem cells play a role. Moreover, they show that JAK/STAT signalling might induce the expression of apicobasal and planar cell polarity genes in the tracheal stem cells which appear to be needed to ensure unidirectional migration. Finally, the authors suggest that trafficking and vesicular transport of Upd2 from the fat body towards the tracheal cells might be important.

      Strengths:

      The manuscript is well written. This novel work demonstrates a likely link between Upd2-JAK/STAT signalling in the fat body and tracheal stem cells and the control of unidirectional cell migration of tracheal stem cells. The authors show that hid+rpr or Upd2 RNAi expression in the fat body or Dome RNAi, Hop RNAi, or STAT92E RNAi expression in tracheal stem cells results in aberrant migration of some of the tracheal stem cells towards the anterior. Using ChIP-seq as well as analysis of GFP-protein trap lines of planar cell polarity genes in combination with RNAi experiments, the authors show that STAT92E likely regulates the transcription of planar cell polarity genes and some apicobasal cell polarity genes in tracheal stem cells which appear to be needed for unidirectional migration. Moreover, the authors hypothesise that extracellular vesicle transport of Upd2 might be involved in this Upd2-JAK/STAT signalling in the fat body and tracheal stem cells, which, if true, would be quite interesting and novel.

      Overall, the work presented here provides some novel insights into the mechanism that ensures unidirectional migration of tracheal stem cells that prevents bidirectional migration. This might have important implications for other types of directed cell migration in invertebrates or vertebrates including cancer cell migration.

      Weaknesses:

      It remains unclear to what extent Upd2-JAK/STAT signalling regulates unidirectional migration. While there seems to be a consistent phenotype upon genetic manipulation of Upd2-JAK/STAT signalling and planar cell polarity genes, as in the aberrant anterior migration of a fraction of the cells, the phenotype seems to be rather mild, with the majority of cells migrating towards the posterior.

      We agree that the phenotype is mild, as perturbing JAK/STAT signaling in the progenitors specifically affects the coordinated migration of the cells rather than alters their direction or completely blocks migration. Our data indicate that inter-organ communication ensures coordinated behavior of the progenitor cells, although the differential responses exhibited by individual cells represent an interesting unresolved issue that awaits future in-depth investigation.

      While I am not an expert on extracellular vesicle transport, the data presented here regarding Upd2 being transported in extracellular vesicles do not appear to be very convincing.

      We performed additional PLA experiments which support the interaction between Upd2 and the core components of extracellular vesicles (revised Figure 8). Furthermore, we performed electron microscopy to visualize the Lbm-containing vesicles in the fat body (Figure 8-figure supplement 1D).

      These data are now provided in the revised manuscript.

      Major comments:

      (1) The graphs showing the quantification of anterior (and in some cases also posterior migration) are quite confusing. E.g. Figure 1F (and 5E and all others): These graphs are difficult to read because the quantification for the different conditions is not shown separately. E.g. what is the migration distance for Fj RNAi anterior at 3h in Fig5E? Around -205micron (green plus all the other colors) or around -70micron (just green, even though the green bar goes to -205micron). If it's -205micron, then the images in C' or D' do not seem to show this strong phenotype. If it's around -70, then the way the graph shows it is misleading, because some readers will interpret the result as -205. Moreover, it's also not clear what exactly was quantified and how it was quantified. The details are also not described in the methods. It would be useful, to mark with two arrowheads in the image (e.g. 5 A' -D') where the migration distance is measured (anterior margin and point zero).

      Overall, it would be better, if the graph showed the different conditions separately. Also, n numbers should be shown in the figure legend for all graphs.

      We apologize for the inappropriate presentation and insufficient description, and thank you for kindly pointing them out. We used different colors to represent different genotypes, and the columns were superimposed. We have chosen to show the quantification for the different conditions separately in the revised Figures. The anterior migration distance for Fj RNAi is around 70 µm.

      We now provide a detailed description in the revised Methods. For migration distance measurement, we took snapshots at 0 hr, 1 hr, 2 hr and 3 hr, and measured the distance from the starting point (the junction of TC and DT) to the leading edge of the progenitor clusters. The velocity was calculated as v = d (µm)/t (min). As you kindly suggested, we indicated the anterior margin and point zero in the corresponding panels. We have added n numbers in the legends.
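      In code form, the measurement described above is a simple distance-over-time calculation. The sketch below uses hypothetical numbers, not the measured data:

```python
# Snapshot times (min) and leading-edge distances (µm) from the starting
# point (the TC/DT junction); the distance values here are hypothetical.
times_min = [0, 60, 120, 180]
dist_um = [0.0, 40.0, 85.0, 120.0]

# Overall migration velocity v = d (µm) / t (min)
v = (dist_um[-1] - dist_um[0]) / (times_min[-1] - times_min[0])
print(round(v, 3))  # 0.667
```

Per-interval velocities computed the same way from successive snapshots would reveal whether migration speed is constant over the 3 hr window.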

      (2) Figure 2-figure supplement 1: C-L and M: From these images and graph it appears that Upd2 RNAi results in no aberrant anterior migration. Why is this result different from Figures 2D-F where it does?

      The fat body-specific lsp2-Gal4 was used in Figure 2-figure supplement 1C-L and Figure 2D-F, while the trachea-specific btl-Gal4 was used in Figure 2-figure supplement 1K-L. The lsp2-Gal4-driven but not btl-Gal4-driven upd2 RNAi causes aberrant anterior migration, suggesting that fat body-derived Upd2 plays a role. We have further clarified this in the text.

      (3) Figure 5F: The data on the localisation of planar cell polarity proteins in the tracheal stem cell group is rather weak. Figure 5G and J should at least be quantified for several animals of the same age for each genotype. Is there overall more Ft-GFP in the cells on the posterior end of the cell group than on the opposite side? Or is there a more classic planar cell polarity in each cell with Ft-GFP facing to the posterior side of the cell in each cell? Maybe it would be more convincing if the authors assessed what the subcellular localisation of Ft is through the expression of Ft-GFP in clones to figure out whether it localises posteriorly or anteriorly in individual cells.

      We staged the animals, measured several animals for each genotype and provided the quantifications in the revised manuscript. The level of Ft-GFP is higher in the cells at the frontal edge. We tried to examine the expression of Ft-GFP at the single-cell level. However, this turned out to be technically difficult because the tracheal stem cells are not regularly arranged as epithelial cells and the proximal-distal axis of the tracheal stem cells remains unclear. We thus decided to measure the fluorescence signal of groups of stem cells along the DT regardless of their individual polarity within cells.

      (4) Regarding the trafficking of Upd2 in the fat body, is it known, whether Grasp65, Lbm, Rab5, and 7 are specifically needed for extracellular vesicle trafficking rather than general intracellular trafficking? What is the evidence for this?

      In our experiments, knocking down rab5, rab7, grasp65 or lbm in the trachea using btl-Gal4 did not cause abnormality in the disciplined migration, which excludes an intracellular contribution of these genes in the trachea (Figure 7-figure supplement 1). Perturbation of Grasp65 or Lbm in the fat body increased intracellular Upd2-containing vesicles, indicating that intracellular production is functional (Figure 6J). Grasp65 is specifically required for Upd2 production, while Lbm, Rab5 and Rab7 are important for vesicle trafficking. Our conclusion does not distinguish between the extracellular and intracellular compartments.

      (5) Figure 8A-B: The data on the proximity of Rab5 and 7 to the Upd2 blobs are not very convincing.

      The confocal images indicate the proximity of Rab5 and Rab7 to the Upd2 vesicles. We interpret the proximity together with the results from Co-IP and PLA data (Figure 8E-K).

      (6) The authors should clarify whether or not their work has shown that "vesicle-mediated transport of ligands is essential for JAK/STAT signaling". In its current form, this manuscript does not appear to provide enough evidence for extracellular vesicle transport of Upd2.

      Lbm belongs to the tetraspanin protein family that contains four transmembrane domains, members of which are the principal components of extracellular vesicles. We show that Lbm interacts with Upd2. JAK/STAT signaling depends on Upd2 in the fat body as well as on the vesicle trafficking machinery. Furthermore, we performed electron microscopy and show the presence of Lbm-containing vesicles in the fat body (Figure 8-figure supplement 1D).

      (7) What is the long-term effect of the various genetic manipulations on migration? The authors don't show what the phenotype at later time points would be, regarding the longer-term migration behaviour (e.g. at 10h APF when the cells should normally reach the posterior end of the pupa). And what is the overall effect of the aberrant bidirectional migration phenotype on tracheal remodelling?

      We observed that the integrity of the tracheal network, especially the dorsal trunk, was impaired, which may be due to incomplete regeneration (Figure 3-figure supplement 1E-I).

      (8) The RNAi experiments in this manuscript are generally done using a single RNAi line. To rule out off-target effects, it would be important to use two non-overlapping RNAi lines for each gene.

      We validated the phenotype using several independent RNAi alleles.

      Reviewer #2 (Public review):

      Summary:

      This work by Dong and colleagues investigates the directed migration of tracheal stem cells in Drosophila pupae, essential for tissue homeostasis. These cells, found in two nearby groups, migrate unidirectionally along the dorsal trunk towards the posterior to replenish degenerating branches that disperse the FGF mitogen. The authors show that inter-organ communication between tracheal stem cells and the neighboring fat body controls this directionality. They propose that the fat body-derived cytokine Upd2 induces JAK/STAT signaling in tracheal progenitors, maintaining their directional migration. Disruption of Upd2 production or JAK/STAT signaling results in erratic, bidirectional migration. Additionally, JAK/STAT signaling promotes the expression of planar cell polarity genes, leading to asymmetric localization of Fat in progenitor cells. The study also indicates that Upd2 transport depends on Rab5- and Rab7-mediated endocytic sorting and Lbm-dependent vesicle trafficking. This research addresses inter-organ communication and vesicular transport in the disciplined migration of tracheal progenitors.

      Strengths:

      This manuscript presents extensive and varied experimental data to show a link between Upd2-JAK/STAT signaling and tracheal progenitor cell migration. The authors provide convincing evidence that the fat body, located near the trachea, secretes vesicles containing the Upd2 cytokine. These vesicles reach tracheal progenitors and activate the JAK-STAT pathway, which is necessary for their polarized migration. Using ChIP-seq, GFP-protein trap lines of planar cell polarity genes, and RNAi experiments, the authors demonstrate that STAT92E likely regulates the transcription of planar cell polarity genes and some apicobasal cell polarity genes in tracheal stem cells, which seem to be necessary for unidirectional migration.

      Weaknesses:

      Directional migration of tracheal progenitors is only partially compromised, with some cells migrating anteriorly and others maintaining their posterior migration.

      Our results suggest that Upd2-JAK/STAT signaling is required for the consistency of disciplined migration. Although only a few tracheal progenitors display anterior migration, these cells lose their commitment to directional movement. We acknowledge that the phenotype is moderate.

      Additionally, the authors do not examine the potential phenotypic consequences of this defective migration.

      We examined the long-term effects of the aberrant migration and observed an impairment of tracheal integrity and melanized tracheal branches (Figure 3-figure supplement 1E-I).

      It is not clear whether the number of tracheal progenitors remains unchanged in the different genetic conditions. If there are more cells, this could affect their localization rather than migration and may change the proposed interpretation of the data.

      We examined the progenitor cell number in bidirectional-movement samples and the control group. The results show that cell number does not differ significantly between the control and bidirectional-movement groups (Figure 3-figure supplement 1).

      Upd2 transport by vesicles is not convincingly shown.

      We performed additional PLA experiments to further support the interaction between Upd2 and the core components of extracellular vesicles. Furthermore, we performed electron microscopy and show the presence of Lbm-containing vesicles in the fat body (Figure 8-figure supplement 1D). Additional experiments such as colocalization and Co-IP assays and better quantification are provided in the revised manuscript (see revised Figure 8).

      Data presentation is confusing and incomplete.

      We used different colors to represent different genotypes, and the columns were superimposed. We have changed the graphs to show the quantification for the different conditions separately and revised the data presentation to avoid confusion.

      Reviewer #3 (Public review):

      Summary:

      Dong et al tackle the mechanism leading to polarized migration of tracheal progenitors during Drosophila metamorphosis. This work fits in the stem cell research field and its crucial role in growth and regeneration. While it has been previously reported by others that tracheal progenitors migrate in response to FGF and Insulin signals emanating from the fat body in order to regenerate tracheal branches, the authors identified an additional mechanism involved in the communication of the fat body and tracheal progenitors.

      Strengths:

      The data presented were obtained using a wide range of complementary techniques combining genetics, molecular biology, quantitative, and live imaging techniques. The authors provide convincing evidence that the fat body, found in close proximity to the trachea, secretes vesicles containing the Upd2 cytokine that reach tracheal progenitors, leading to JAK-STAT pathway activation, which is required for their polarized migration. In addition, the authors show that genes regulating planar cell polarity are also involved in this inter-organ communication.

      Weaknesses:

      (1) Affecting this inter-organ communication leads to a quite discrete phenotype where polarized migration of tracheal progenitors is partially compromised. The study lacks data showing the consequences of this phenotype on the final trachea morphology, function, and/or regeneration capacities at later pupal and adult stages. This could potentially increase the significance of the findings.

      Regarding your kind suggestion, we examined the long-term effects of the aberrant migration and observed the impairment of tracheal integrity and melanized tracheal branches (Figure 3-figure supplement 1E-I).

      (2) The conclusions of this paper are mostly well supported by data, but some aspects of data acquisition and analysis need to be clarified and corrected, such as recurrent errors in plotting of tracheal progenitor migration distance that mislead the reader regarding the severity of the phenotype.

      We used different colors to represent different genotypes, and the columns were superimposed. We have changed the graphs to show the quantification for the different conditions separately. We thank you for kindly pointing this out.

      (3) The number of tracheal progenitors should be assessed since they seem to be found in excess in some genetic conditions that affect their behavior. A change in progenitor number could lead to crowding, thus affecting their localization rather than migration capacities, thereby changing the proposed interpretation. In addition, the authors show data suggesting a reduced progenitor migration speed when the fat body is affected, which would also be consistent with a crowding of progenitors.

      We examined cell number and cell proliferation in the bidirectional-movement and control groups and observed no significant difference between them (Figure 3-figure supplement 2).

      (4) The authors claim that tracheal progenitors display a polarized distribution of PCP proteins that is controlled by JAK-STAT signaling. However, this conclusion is made from a single experiment that is not quantified and for which there is no explanation of how the plot profile measurements were performed. It also seems that this experiment was done only once. Altogether, this is insufficient to support the claim. Finally, a quantification of the number of posterior edges presenting filopodia rather than the number of filopodia at the anterior and posterior leading edges would be more appropriate.

We staged the animals, measured several animals for each genotype and provided the quantifications in the revised manuscript. The level of Ft-GFP is higher in the cells at the frontal edge. We tried to examine the expression of Ft-GFP at the single-cell level. However, this turned out to be difficult because the tracheal stem cells are not regularly patterned like epithelial cells and the proximal-distal axis of tracheal stem cells is not well defined. We thus decided to measure the fluorescence signal of groups of stem cells along the DT regardless of their individual polarity.

      (5) The authors demonstrate that Upd2 is transported through vesicles from the fat body to the tracheal progenitors where they propose they are internalized. Since the Upd2 receptor Dome ligand binding sites are exposed to the extracellular environment, it is difficult to envision in the proposed model how Upd2 would be released from vesicles to bind Dome extracellularly and activate the JAK-STAT pathway. Moreover, data regarding the mechanism of the vesicular transport of Upd2 are not fully convincing since the PLA experiments between Upd2 and Rab5, Rab7, and Lbm are not supported by proper positive and negative controls and co-immunoprecipitation data in the main figure do not always correlate to the raw data.

We used molecular modeling to show that Upd2 and Lbm intermingle and that Upd2 is not entirely encapsulated in vesicles (Figure 8-supplement 1E). We performed PLA experiments using animals not expressing upd2-mCherry as a negative control (Figure 8E-J). We corrected the Co-IP panel and apologize for this error.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Minor comments:

      (1) Figure 1-figure supplement 1: E: How was the migration velocity assessed? By live imaging individual cells or following the cell front of the group? Over what time period? Do the data points in the graph correspond to individual cells or the cell group? It would be important to show confocal images that go along with this quantification.

We took snapshots of pupae at 0 hr, 1 hr, 2 hr and 3 hr, and measured the distance covered by the migrating progenitor cells from the starting point (the junction of TC and DT) to the leading edge of the progenitor groups. We then calculated the migration rate as v = d (μm) / t (min). As the progenitor cells revolve around and migrate along the DT, tracking a single tracheoblast through the intact cuticle is technically challenging. We have therefore measured the leading edge as a proxy for the whole cell group. We agree with you that time-lapse imaging would be preferable for analysis of migration.
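The distance-over-time calculation described above can be sketched in a few lines of Python. This is a minimal illustration only: the timepoints mirror the 0-3 hr APF snapshots, but the distance values are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical leading-edge distances (micrometers) from the TC-DT junction,
# measured on snapshots at 0, 1, 2 and 3 hr APF. Values are illustrative only.
times_min = np.array([0, 60, 120, 180])          # minutes after 0 hr APF
distance_um = np.array([0.0, 24.0, 51.0, 80.0])  # leading-edge distance (um)

# Migration rate between consecutive snapshots: v = d / t (um/min)
rates = np.diff(distance_um) / np.diff(times_min)

# Overall rate over the 3 hr window
mean_rate = distance_um[-1] / times_min[-1]

print(rates)      # per-interval rates (um/min)
print(mean_rate)  # overall rate, ~0.44 um/min for these illustrative numbers
```

Because only the leading edge of the cell group is tracked, these rates describe the group front rather than individual cells, matching the proxy measurement described above.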

      (2) Figure 1-figure supplement 1: F: Why is there Gal80ts in the genotype? (and in Figure 1H). Also, what pupal age was used for this quantification?

Expression of hid and rpr at the L3 stage impaired fat body integrity and adipocyte abundance, and caused lethality. Gal80ts was used to control the timing of rpr.hid expression. Pupae at 0 hr APF were used in the EdU experiment.

      (3) Figure 2C: what is shown in the 6 columns (why 3 each for control and rpr/hid)?

We conducted three replicates each for the control and rpr.hid groups.

      (4) In the methods, several Drosophila stocks are listed as 'source:" from a particular person (e.g. Dr Ma). Please list the real source of this stock, e.g. Bloomington stock number, or the lab and publication in which the stock was originally made.

      We provide the information on these stocks in the revised methods.

      (5) The SKOV3 carcinoma cell and S2 cell work is not described in the methods.

We added a detailed description of this experiment in the revised Methods (Cell culture and transfection).

      (6) Figure 6 (F) 'Bar graph plots the abundance of Upd2-mCherry-containing vesicles in progenitors.' What does abundance mean? What was quantified, the number of vesicles, or the mean intensity? This is also not mentioned in the methods.

We counted the number of Upd2-mCherry-containing vesicles in fat body cells and tracheal progenitors, and added a description of the measurement in the Methods.

      (7) There are a few language mistakes throughout the manuscript. E.g.

      (a) Line 117 and other places: Language: 'fat body' should be 'the fat body'.

We thank you for pointing out these errors and have corrected them accordingly.

      (b) Line 1276 Language mistakes: 'Video 1 3D-view of confocal image stacks of tracheal progenitors and fat body. Scale bar: 100 μm. Genotypes: UAS-mCD8-GFP/+;lsp2-Gal4,P[B123]-RFP-moe/+.' :stacks and genotypes should be singular.

We fixed these errors and thank you for kindly pointing them out. We also proofread the entire manuscript to ensure accuracy.

      (8) In general, it is hard to figure out the exact genotypes used in experiments. This is mostly not written very clearly in the figure legends. E.g. Figure 2: genotype for A-C missing in figure legend (is B from control animals?)

We added genotypes in the figure legends. For Figure 2A and C, the genotypes are lsp2-Gal4,P[B123]-RFP-moe/+ for control and UAS-rpr-hid/+;Gal80ts/+;lsp2-Gal4,P[B123]-RFP-moe/+ for rpr.hid; B is from control animals.

      Reviewer #2 (Recommendations for the authors):

      Major comments:

      (1) The phenotype resulting from Upd2 downregulation by RNAi is subtle and shown by unconvincing images. In addition, these phenotypes are analyzed using only one RNAi line.

We used two independent alleles of upd2RNAi from THFC (THU1288 and THU1331) and observed similar phenotypes. For RNAi experiments, we always use multiple independent alleles.

      (2) The authors should analyze the phenotypic consequences of directional migration changes. Is there an effect on tracheal remodeling?

We observed that the integrity of the tracheal network, especially the dorsal trunk, was impaired and that melanized tracheal branches were present, which may be due to incomplete regeneration (Figure 3-figure supplement 1E-I).

      (3) The number of tracheal progenitors should be quantified, as some genetic conditions may affect cell numbers, as is apparent in some panels.

We examined cell number and cell proliferation and observed no significant difference between the control and bidirectional-movement groups (Figure 3-figure supplement 1).

      (4) The data on PCP protein distribution are unconvincing, unquantified, and insufficient to support one of the main conclusions of the study, which is stated in the abstract: "JAK/STAT signaling promotes the expression of genes involved in planar cell polarity, leading to asymmetric localization of Fat in progenitor cells."

We staged the animals, measured several animals for each genotype and provided the quantifications in the revised manuscript. The level of Ft-GFP is higher in the cells at the frontal edge. We tried to examine the expression of Ft-GFP at the single-cell level. However, this turned out to be difficult because the tracheal stem cells are not regularly patterned like epithelial cells and the proximal-distal axis of tracheal stem cells is not well defined. We thus decided to measure the fluorescence signal of groups of stem cells along the DT regardless of their individual polarity.

      Minor comments:

      (1) Language should be revised. In many places in the manuscript, starting in line 113, "fat body" should be "the fat body".

      Thank you for pointing out this error. We corrected it accordingly.

      (2) Genotypes used in experiments should be described.

We added all the genotypes and proofread the entire manuscript to ensure that the genotypes in the figure legends are complete.

      (3) Line 67, the reference to "The progenitor cells reside in Tr4 and Tr5 metameres and start to move along the tracheal branch" should include (Chen and Krasnow, Science 2014).

      We added the reference in the manuscript.

      (4) Line 1081, Figure 7 Legend. "Bar graph plots the abundance of Upd2-mCherry-containing vesicles" Abundance is the number of vesicles? The graph displays the average number of vesicles? Please explain and describe the quantification.

      The bar graph represents the number of Upd2-mCherry-containing vesicles in different conditions. We quantified the number of vesicles per area.

      (5) Figure 1 (I-J) What is shown on the panels? Progenitors marked with? This information is not present in the figure or figure legend. Same for Figure 2 (D-E).

Figure 1I-J show the vectors of migrating progenitors; we added this information to the legends. The tracheal cells are labeled by nls-mCherry in Figure 1I-J. In Figure 2D-E, the progenitors are marked with P[B123]-RFP-moe.

      (6) Figure 3 Q, Stat92E-GFP values in the graph are not well-explained. What do the numbers in the y-axis refer to?

The y-axis represents the intensity of Stat92E-GFP normalized to the control. We have changed the y-axis label to 'normalized Stat92E-GFP intensity' in the legends.

      (7) In general, figures and figure legends must be revised. Sometimes stainings are not well-defined, some scale bars are missing and plots do not say what the values are.

We apologize for the inadequate information and have revised the figures and legends accordingly.

      Reviewer #3 (Recommendations for the authors):

      Several points should be addressed by the authors in order to improve their manuscript.

      Major points:

      (1) The phenotype obtained from decreasing the inter-organ signaling is quite discrete. It is further weakened by the fact that the images chosen to illustrate the measures are not really convincing. No image at 1h APF shows any clear anterior migration. Based on the scale, most of the images at 3h APF do not show a striking difference compared to the control, and in any case, stronger phenotypes would be missed anteriorly since they would thus be out of frame. In addition, at 3h APF, progenitors migrating anteriorly from Tr5 position get mixed with those migrating posteriorly from Tr4 so it is not clear how measurements were made. Given that most phenotypes are observed upon the use of RNAis, it is possible that phenotypes are weak due to persistent gene expression. Using null clones for dome, hop, or stat in progenitors could therefore aggravate the phenotypes and support further the significance of the study. Finally, assessing the consequences of compromised fat body-tracheal communication on trachea morphology, function, and regeneration later in pupal development and on adult flies would also help strengthen the importance of the findings.

We agree with you that anteriorly migrated Tr5 progenitors adjoining Tr4 progenitors hinder measurements and that mutants may give stronger phenotypes than RNAi lines. We only measured Tr4 progenitors (instead of Tr5) when assessing anterior migration. We therefore performed experiments using mutant alleles, which gave aberrant migration of tracheal progenitors (Figure 3-figure supplement 1A-D). We can now show that the integrity of the tracheal network, especially the dorsal trunk, was impaired, which may be due to incomplete regeneration (Figure 3-figure supplement 1E-I).

      (2) Although the authors did not observe defects in tracheal progenitor proliferation, progenitors seem to be present in excess in some key genetic background (e.g, upon expression of rpr.hid, statRNAi, Rab-RNAi or in the presence of BFA). This excess could be the result of another mechanism than proliferation (recruitment of extra progenitors since it is not clear how they originate, defect in apoptosis...) and could impact the localization of progenitors, those being pushed anteriorly as a consequence of crowding. A proper characterization of tracheal progenitor number would thus help to discriminate between defects in migration or crowding. This point could also be addressed by performing individual tracking of tracheal progenitors, to find out whether each progenitor is indeed migrating in the wrong direction or if the movement assessed by the global tracking method that is used is just a consequence of progenitor excess.

We examined the cell number in the bidirectional-movement samples and the control group. The results show no significant difference between the groups (Figure 3-figure supplement 1). We also tried to follow every progenitor, but were unable to obtain convincing results with P[B123]-RFP-moe, as tracking a single tracheoblast through the intact cuticle is technically challenging.

      (3) Regarding the ChIP-seq experiment, an explanation of why choosing the "establishment of planar polarity" family should be provided since data indicate a quite low GeneRatio. Indeed, the "cell adhesion" family seems a more obvious candidate, which would be further supported by the fact that the JAK-STAT pathway has been shown to affect cell adhesion components such as ECadherin and FAK (Silver and Montell 2001, Mallart et al 2024). Also, have these known targets of JAK-STAT signaling been found in the ChIP-seq data? Since filopodia polarization is affected in tracheal progenitors when JAK-STAT signaling is decreased, the same question also applies to enabled, which is involved in filopodia formation and has been recently identified as a target of JAK-STAT signaling.

As you kindly suggested, we tested a number of cell adhesion-related genes, such as E-Cadherin (shg), fak, robo2 and enabled (ena). We did not observe any apparent aberrancy in the migration of tracheal progenitors (Figure 5-supplement 1J).

      (4) Data investigating PCP protein distribution is not convincing, not quantified, and not sufficient to draw one of the main conclusions of the study, which is even written in the abstract "JAK/STAT signaling promotes the expression of genes involved in planar cell polarity leading to asymmetric localization of Fat in progenitor cells."

We quantified the abundance of Ft in the progenitors at the frontal edge and in those lagging behind; the traces in the figures plot multiple replicates. The level of Ft-GFP is higher in the cells at the frontal edge.

      (5) Overall, the figures together with their caption and/or the material and methods section lack some important information for the reader to fully understand the data. In addition, some errors are found in multiple plots throughout the article and must be corrected. Here are some examples:

According to your suggestion, we revised the legends and the Methods section to include sufficient information.

      (a) Migration distance plots from Figure 3E do not match the data presented in the source data file. It seems that, when creating the plot, instead of superimposing the bars, bars were stacked. This should be corrected for all migration distance plots from Figure 3E onward, including in supplementary figures.

We apologize for the misleading representation. We revised it accordingly and now show the quantifications for different conditions separately.

      (b) The number of analyzed flies and/or clusters of tracheal progenitors from different flies should be stated for all quantification or observations made on images. This information is lacking for all migration distance plots, for progenitor migration tracking (Figure 1 I, J), for DIPF reporter in Figure 2J, for plot profiles (Figure 5G, J), for Upd2-Rab5/Rab7/Lbm co-detections, PLA, CoIP, and lbm-pHluorin experiments. This also applies to RNA seq, ChIP seq, and surface proteomics, for which the number of pupae and number of replicates is not indicated.

We changed the graphs to show the quantifications and n numbers for different conditions separately. We also added the number of replicates in the Methods.

      (c) How quantifications were performed is not sufficiently explained. For example, the reference point for migration distance measurement is not defined, and neither is whether the measures were made on fixed or live imaging samples. In fluorescence intensity measurements and Upd2 vesicle counting, information on whether measures were made on a single z slice or on a projection of several z slices should be stated together with what ROI and which FIJI tool for quantification were used. For plot profiles, the same information regarding z slices misses together with how the orientation, the thickness, and the length of the line were chosen, and again the number of times the experiment was conducted should be mentioned and error bars should appear on graphs.

      We thank this reviewer for the suggestions which help clarify the methodology of our experiments and improve presentation of our data. We have made the changes according to the suggestions and modified our methods section and the related figures to incorporate these changes.

For measuring the migration distance of tracheal progenitors, we took snapshots of living pupae at 0 hr, 1 hr, 2 hr and 3 hr APF, and measured the migration distance of tracheal progenitors from the starting point (the junction of TC and DT) to the leading edge of the progenitor groups.

For the measurements of the fluorescent intensity of stat92E-GFP and DIPF, we took z-stack confocal images of the samples and quantified the fluorescent intensity using FIJI. Specifically, intensity was quantified for regions of interest using the Analysis and Measurement tools. To quantify Upd2-mCherry vesicles, z-stack confocal images of the fat body were taken and the cell-counting function of FIJI was used to measure vesicle number.

      To quantify the fluorescent intensity of in vivo tagged Ds, Ft and Fj proteins, a single z slice was used. The expression level of the protein was assessed as the integrated fluorescent intensity normalized to area.

For the measurement of Ft-GFP distribution, a single z slice of the progenitors immediately proximal to the DT was imaged. An arbitrary line was drawn along the migration direction from the starting TC-DT junction to the leading front (the length of the line corresponds to the distribution range of the tracheal stem cell clusters). The fluorescent intensity along the line was then automatically calculated with the embedded measurement function of the Zeiss confocal software.
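For readers unfamiliar with line-profile measurements, the operation the Zeiss software performs can be sketched in Python. This is a toy sketch under stated assumptions: a 2D intensity array stands in for the confocal slice, the `line_profile` helper is hypothetical, and nearest-neighbor sampling is used for simplicity (the actual quantification was done in the Zeiss software, not with this code).

```python
import numpy as np

def line_profile(image, start, end, n_samples=100):
    """Sample pixel intensities along a straight line from `start` to `end`
    ((row, col) coordinates) using nearest-neighbor lookup."""
    rows = np.linspace(start[0], end[0], n_samples)
    cols = np.linspace(start[1], end[1], n_samples)
    r = np.clip(np.round(rows).astype(int), 0, image.shape[0] - 1)
    c = np.clip(np.round(cols).astype(int), 0, image.shape[1] - 1)
    return image[r, c]

# Toy 20x50 image with a left-to-right intensity gradient, loosely mimicking
# higher Ft-GFP signal toward the leading front.
img = np.tile(np.arange(50, dtype=float), (20, 1))

# Profile along a horizontal line at row 10, spanning the full width.
profile = line_profile(img, start=(10, 0), end=(10, 49), n_samples=50)
print(profile[0], profile[-1])  # 0.0 49.0 (intensity rises along the line)
```

A real pipeline would typically use sub-pixel interpolation and a line of finite thickness, but the principle, sampling intensity at evenly spaced points along a drawn line, is the same.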

      Minor points:

      (1) In several instances, the authors generalize that stem cells migrate to leave their niche, but this is not the case for all stem cells.

The phenomenon of stem cells leaving their niche upon activation is commonly observed. We inferred a general mechanism from our system of tracheal stem cells, but we fully agree with you that it may not be the case for all stem cells. We modified the text accordingly.

      (2) Line 122 -a reference paper or an image showing the expression pattern of the lsp2-Gal4 driver is missing.

      We added the reference in the manuscript.

      (3) Line 136 - The term "traces of individual progenitors" is overstated and should be reformulated as the method used does not seem to be individual cell tracking.

      We rephrased accordingly in the revised manuscript.

      (4) Line 146 - Fat body and tracheal progenitors are qualified as interdependent organs, in which aspect do tracheal progenitors affect the fat body?

Current knowledge suggests close inter-organ crosstalk between the trachea and the fat body: the fly trachea provides oxygen to the body and influences whole-body oxidation and metabolism. When the trachea is perturbed, the body experiences hypoxia, which causes an inflammatory response in the adipose tissue, an important immune organ (Shin et al., 2024).

      (5) Line 163 - Not all the genes tested are cytokines, so the sentence should be reformulated. In addition, in supplementary Fig2-1 C-J, the KD of hh seems to abolish completely tracheal progenitor migration, which is not commented on.

According to your suggestion, we revised the description of the genes tested. We also added comments in the revised manuscript regarding the phenotypes of hh knockdown.

      (6) Line 180 - Conclusion is made on Dome expression while using a dome-Gal4 construct, which does not necessarily recapitulate the endogenous pattern of dome expression, so it should be reformulated. Ideally, dome expression should be assessed in another way. Also, it is not clear whether GFP is present only in progenitors since images are zoomed.

We revised the statement and provided a larger view of dome>GFP, which shows enriched expression in the tracheal progenitors (Figure 2-figure supplement 2E), an expression pattern consistent with FlyBase.

      (7) Line 199 - Is it upd-Gal4 or upd2-Gal4 that is used? Since the conclusion of the experiment is made on upd2, the use of upd-gal4 would not be relevant. If upd2-gal4 is used, it should be corrected. In general, the provenance of the Gal4 lines should be provided. In addition, a strong GFP signal in the trachea is visible on the image in Supplementary Figure 2-2F but not commented on and seems contradictory with the conclusion mentioning that fat body and gut are the main source of Upd2 production.

      We removed data obtained from the use of this irrelevant upd-Gal4 line.

      (8) Figures:

      -  Figure 1 G, H - Scale bar is missing.

      We added it accordingly.

      -  Figure 1 I, J - The information on the staining is missing.

      We added it in the revised manuscript.

      -  Figure 2A - Providing explanations of the terms "Count" and "Gene ratio" in the caption would be helpful for readers who are not used to this kind of data. In addition, the color code is confusing since the same color is used for the selected gene family and for high p-values (the same applies to other similar graphs).

Gene ratio refers to the proportion of genes in a dataset that are associated with a particular biological process, function, or pathway. Count indicates the number of genes from the input gene list that are associated with a specific GO term. Deeper red indicates a smaller p-value and thus higher significance.

      -  Figure 2 B, C - What does the color scale represent? What do the columns in C correspond to, different time points, different replicates?

      The color scale represents the normalized expression. The columns in C correspond to different replicates of control and rpr.hid.

      -  Figure 2 F - The error bars on the 3h APF posterior bars are missing.

      We added error bars accordingly.

      -  Figure 2 G - The legend "Down-Stable-Up" is in comparison to what?

      The control group was generated from the reaction without H2O2. The comparison was relative to the control group.

      -  Figure 2 J - The specificity of the DIPF tool that has been created should be validated in other tissues displaying known JAK-STAT activity and/or in conditions of decreased JAK-STAT signaling. In addition, the added value of the tool as compared to the JAK-STAT activity reporter used later, which has been well characterized, is not obvious.

We added the signal of DIPF in the fat body and salivary gland, both of which harbor active JAK/STAT signaling (Figure 2-figure supplement 2F-H). As opposed to the well-characterized Stat92E-GFP reporter, which assays downstream transcriptional activity, the DIPF reporter measures the upstream event of receptor dimerization.

      -  Figure 3 I-P - Reporter tool validation in Images I-L could be moved to supplementary data. In images M-P, staining of nuclei and/or membranes would be useful to assess cell integrity.

      We revised the figures accordingly.

      -  Figure 3Q and similar plots in the following figures do not explain the normalization performed and how it can be higher than 1 in control conditions.

In these figures, we normalized the signal relative to the control groups; e.g., the value of Stat92E-GFP in the btl-GFP control group was set to 1 in the previous Figure 3Q (revised Figure 3-supplementary Figure B-J).

      -  Figure 4C - These representations lack explanations to be fully understood by a broad audience.

The figure shows that Stat92E binding was detected in the promoters and intronic regions (the orange peaks) of genes functioning in distal-to-proximal signaling, such as ds, fj, fz, stan, Vang and fat2. We added this information to the figure legend according to your suggestion.

-  Figure 5 K,L - The x-axis is missing, together with the method of tracking used.

The x-axis refers to the recording time from a t-stack series with a 5-min interval. We revised the Methods section and provided a detailed procedure for this experiment.

      -  Figures 6 and 8- The overall figures lack a wider view of the cells/tissues/organs and/or additional staining to understand what is presented.

We imaged preparations of the fat body; in order to resolve the vesicles, we used high magnification. We have now added wider views of the tissues under investigation (e.g., Figure 6-figure supplement 1).

      -  Figure 6 D,E - The scale bar is missing.

      We added it accordingly.

      -  Figure 8 O-S - What is the blue staining?

      The blue staining shows DAPI-stained nuclei. We have added the information in the legend.

      -  PLA experiments can give a lot of non-specific background. What kind of controls have been used in Figure 8 F-J? Negative controls should be done on cells that do not express upd2-mCherry using both antibodies to detect non-specific background, which does not usually appear completely black.

      If possible, a positive control using a known protein interacting with Rab5-GFP should be included.

We used control samples lacking one of the primary antibodies in the previous Figure 8. In the revised Figure 8, we conducted the experiment as you suggested, with controls that do not express upd2-mCherry (Figure 8E-J).

      -  Co-IP experiments - The raw data file for blots is quite hard to read through. Some legends are not facing the right lane and some blots presented in the main figure are difficult to track since several blots are presented in the raw data file. e.g.

(a)  Raw blot for Figure 8 K: the band for mCherry in the IP anti-GFP blot (lane one in K) is not convincing; it is not distinguishable from other nonspecific bands. On the reverse IP presented only in raw data, on the input from blot IB anti-mCherry, both lanes present exactly the same bands at 72 kDa when one of the lanes corresponds to extract from flies not expressing upd2-mCherry.

We thank you for pointing out the incorrect labels. We apologize for the errors and have corrected them accordingly.

      (b)  Raw blot for Figure 8 L: on the input blot IB anti-GFP, there is a band corresponding to Rab7-GFP in the lane of the extract from flies not expressing Rab7-GFP.

      We corrected it.

      (c)  Raw data for Figure 8 M: on the last blot, legends are missing above the input Ib anti-GFP blot.

      We added the missing legends in the figure.

      Shin, M., Chang, E., Lee, D., Kim, N., Cho, B., Cha, N., Koranteng, F., Song, J.J., and Shim, J. (2024). Drosophila immune cells transport oxygen through PPO2 protein phase transition. Nature 631, 350-359.

    1. Reviewer #2 (Public review):

      The authors inject, into the rete testes, mRNA and plasmids encoding mRNAs for GFP and then ARMC2 (into infertile Armc2 KO mice) in a gene therapy approach to express exogenous proteins in male germ cells. They do show GFP epifluorescence and ARMC2 protein in KO tissues, although the evidence presented is weak. Overall, the data do not necessarily make sense given the biology of spermatogenesis and more rigorous testing of this model is required to fully support the conclusions, that gene therapy can be used to rescue male infertility.

      In this revision, the authors attempt to respond to the critiques from the first round of reviews. While they did address many of the minor concerns, there are still a number to be addressed. With that said, the data still do not support the conclusions of the manuscript.

      (1) The authors have not satisfactorily provided an explanation for how a naked mRNA can persist and direct expression of GFP or luciferase for ~3 weeks. The most stable mRNAs in mammalian cells have half-lives of ~24-60 hours. The stability of the injected mRNAs should be evaluated and reported using cell lines. GFP protein's half-life is ~26 hours, and luciferase protein's half-life is ~2 hours.

      (2) There is no convincing data shown in Figs. 1-8 that the GFP is even expressed in germ cells, which is obviously a prerequisite for the Armc2 KO rescue experiment shown in the later figures! In fact, to this reviewer the GFP appears to be in Sertoli cell cytoplasm, which spans the epithelium and surrounds germ cells - thus, it can be oft-confused with germ cells. In addition, if it is in germ cells, then the authors should be able to show, on subsequent days, that it is present in clones of germ cells that are maturing. Due to intracellular bridges, a molecule like GFP has been shown to diffuse readily and rapidly (in a matter of minutes) between adjacent germ cells. To clarify, the authors must generate single cell suspensions and immunostain for GFP using any of a number of excellent commercially-available antibodies to verify it is present in germ cells. It should also be present in sperm, if it is indeed in the germline.

      Other comments:

      70-1 This is an incorrect interpretation of the findings from Ref 5 - that review stated there were ~2,000 testis-enriched genes, but that does not mean "the whole process involves around two thousand of genes"

      74 would specify 'male'

      79-84 Are the concerns with ICSI due to the procedure itself, or the fact that it's often used when there is likely to be a genetic issue with the male whose sperm was used? This should be clarified if possible using references from the literature, as this reviewer imagines this could be a rather contentious issue with clinicians who routinely use this procedure, even in cases where IVF would very likely have worked

      199 Codon optimization improvement of mRNA stability needs a reference; in one study using yeast transcripts, optimization improved RNA stability on the order of minutes (e.g., from ~5 minutes to ~17 minutes); is there some evidence that it could be increased dramatically to days or weeks?

      472-3 The reported half-life of EGFP is ~36 hours - so, if the mRNA is unstable (and not measured, but certainly could be estimated by qRT-PCR detection of the transcript on subsequent days after injection) and EGFP is comparatively more stable (but still hours), how does EGFP persist for 21 days after injection of naked mRNA??

      Curious why the authors were unable to get anti-GFP to work in immunostaining?

In Fig. 3-4, the GFP signals are unremarkable, in that they cannot be fairly attributed to any structure or cell type - they just look like blobs; and why, in Fig. 4D-E, does the GFP signal appear stronger at 21 days than 15 days? And why is it completely gone by 28 days? This data is unconvincing. If the authors did a single cell suspension, what types or percentage of cells would be GFP+? Since germ cells are not adherent in culture, a simple experiment could be done whereby a single cell suspension could be made, cultured for 4-6 hours, and non-adherent cells "shaken off" and imaged vs adherent cells. Cells could also be fixed and immunostained for GFP, which has worked in many other labs using anti-GFP.

      In Fig. 5, what is the half-life of luciferase? From this reviewer's search of the literature, it appears to be ~2-3 h in mammalian cells. With this said, how do the authors envision detectable protein for up to 20 days from a naked mRNA? The stability of the injected mRNAs should be shown in a mammalian cell line - perhaps this mRNA has an incredibly long half-life, which might help explain these results. However, even the most stable endogenous mRNAs (e.g., globin) are ~24-60 hrs.

      527-8 The Sertoli cell cytoplasm is not just present along the basement membrane as stated, but also projects all the way to the lumina

      529-30 This is incorrect, as round spermatids are never "localized between the spermatocytes and elongated spermatids" - if elongated spermatids are present, rounds are not - they are never coincident in the same testis section

      Fig. 7 To this reviewer, all of the GFP appears to be in Sertoli cell cytoplasm

      In Figs 1-8 there is no convincing evidence presented that GFP is expressed in germ cells! In fact, it appears to be in Sertoli cells

      Fig. 9 - alpha-tubuline?

      Fig. 11 - how was sperm morphology/motility not rescued on "days 3, 6, 10, 15, or 28 after surgery", yet rescued in some animals at 21 and 35 days? How does this make sense, given the known kinetics of male germ cell development? Also, at least one of the sperm in the KO in Fig. B5 looks relatively normal, and the flagellum may simply be out of focus in the image. With only a few sperm shown for reviewers to see, how can we know these represent the population?

    1. This is based on the assumption that wealthier people are better able to handle the risks of climate change than poorer ones.

      This is true to an extent. If the worst is to come, every human is at risk, regardless of financial status. Sure, if it's just a minor earthquake or a tornado, wealthier people have access to better protection than those on a lower income; however, if the world were to have such an extreme weather event, every human would be at risk regardless of what protection they have access to.

    1. Annotation #1: Thoughts

      "World inequality, however, cannot be explained by climate or diseases, or any version of the geography hypothesis. Just think of Nogales. What separates the two parts is not climate, geography, or disease environment, but the U.S.-Mexico border." In this passage the author is debunking various aspects of the geographic theory, which attributes differences in economic success to geographic conditions. The author is essentially saying that while theories about climate and disease impacting a country's productivity, and consequently its economic development, may seem plausible, actual events in history show that even countries with exactly the same geographies still face completely different economic circumstances. This connects to Singapore, as we can see this phenomenon occur here as well, with Singapore being significantly better off economically than its neighbours such as Thailand, Cambodia, etc.

      Annotation #2: Question “But mostly no, because those aspects of culture often emphasized—religion, national ethics, African or Latin values—are just not important for understanding how we got here and why the inequalities in the world persist.”

      This passage is introducing the culture theory, a theory that cites cultural differences as the source of inequality in economic growth across the world. This specific quote elaborates on why Robinson doesn’t believe that culture is a significant cause of the difference in economic growth across the world, stating that aspects of culture such as “religion, national ethics and African or Latin values” are not important. This idea made me wonder about the impact a society's intrinsic cultural values can have when looked at on a larger scale: How do a society's intrinsic cultural values influence its long-term economic growth and development on a global scale? When connecting this to Singapore and its own rapid economic development, I wonder if certain cultural values, such as the value placed on education in many Asian cultures or respect for the law, influenced the way Singapore as a country was able to develop economically on a global scale.

      **Annotation #3: Epiphanies** “China, despite many imperfections in its economic and political system, has been the most rapidly growing nation of the past three decades. Chinese poverty until Mao Zedong’s death had nothing to do with Chinese culture; it was due to the disastrous way Mao organized the economy and conducted politics.”

      This passage further analyzes why the culture theory doesn't properly explain the economic growth of certain countries over others. This specific quote shifts the focus from cultural explanations of poverty to highlighting how government policies and political decisions can drastically impact economic outcomes. It made me think a lot more about the significant influence that historical and political events have on countries, and how decisions from decades ago can still have a lasting impact today. What interested me most was how some countries, like China, were able to recover and achieve significant growth, while others, such as certain African nations, continue to struggle. This made me reflect on how institutions and governance propel a country towards prosperity, and connecting it to Singapore made me wonder about the unique aspects of its institutions that led it to be so economically successful.

    2. Annotation #1

      Quote: "Institutions are the key to economic growth because they determine the incentives for savings, investment, and innovation."

      Thoughts: This quote is saying that it isn't just the amount that you invest but also the institutions that govern these investments that contribute to economic growth. In other words, it adds another layer to the Solow model we discussed in class: how capital investments are managed in an economy. According to North (1990), who received the Nobel Memorial Prize in Economics, institutions shape the rules of the game in society and significantly impact economic performance. This aligns with our class discussion of the Solow growth model, which assumes that economies grow through capital accumulation and technological innovation, with each round of investment contributing less to economic growth, and it supports the claim made in this article by suggesting that institutional frameworks govern how effectively these factors operate. Our discussion in class, working to answer the question of what the sources of Singapore's success are, also supports this idea. Based on the reasoning in this quote, one of the reasons for Singapore's economic growth is its pro-investment policies in education and medical institutions that prioritize long-term benefits.

      Citation: North, D. C. (1990). Institutions, institutional change, and economic performance. Cambridge University Press.
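      For readers who want the diminishing-returns intuition behind the Solow model stated precisely, the standard textbook formulation can be written down as follows (this is a general reference sketch, not an equation taken from the article):

      ```latex
      % Capital per worker k accumulates from savings s out of output f(k)
      % and is eroded by population growth n and depreciation \delta:
      \[
        \dot{k} = s\,f(k) - (n + \delta)\,k,
        \qquad f(k) = k^{\alpha}, \quad 0 < \alpha < 1.
      \]
      % Since f''(k) = \alpha(\alpha - 1)\,k^{\alpha - 2} < 0, each additional
      % unit of capital raises output by less than the previous one (the
      % "decreasing value" of successive rounds of investment noted above),
      % so long-run growth must come from technology and, as the annotation
      % argues, from the institutions that govern investment.
      ```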

      Annotation #2

      Quote: "Capital deepening alone cannot sustain long-term growth; technological innovation and institutional improvements are equally critical."

      Thoughts: This section elaborates on the idea that the Solow model doesn't account for the effectiveness of capital investments in economic growth, by suggesting that it's how this capital investment is used - to develop technology and invest in institutions that will provide long-term benefits - that will ensure its effectiveness. This raises a question: How do institutions evolve to foster innovation in countries transitioning from low- to high-income status? In class, we discussed a possible answer to the question of what the sources of Singapore's economic development are, with one possible answer being Singapore's investments in educational institutions that help to foster innovation. However, this led me to wonder whether these strategies could be applied in countries like Ghana, where institutional weaknesses impede economic progress, or whether their success depends on specific historical and political contexts. Additionally, this also makes me want to understand whether it might be impossible for some countries to achieve long-term economic growth because of corruption that might prevent capital investments from being directed to institutions and technology. Would institutions be beneficial in promoting economic growth if the governance of the institution is corrupt?

      Annotation #3:

      Quote: "Geography is not destiny; countries with unfavorable geographic conditions can still achieve economic prosperity through sound policies and institutional reforms."

      Thoughts: Despite what the article suggested about all three factors - culture, institutions, and geography - playing an important role in a country's economic growth, this quote suggests that they can be substituted or made up for by one another. I found this quote extremely interesting since, based on the geography theory, Singapore should be poorer than many temperate regions solely because it has a tropical climate; however, this isn't true, with SG being one of the wealthiest nations in the world. This made me realize that Singapore was able to overcome geographic disadvantages (limited land, no natural resources) through integration into global trade networks and institutional reforms, which is what made it grow economically. In other words, even if its geographic location gave it a disadvantage, its cultural values and institutions enabled it to overcome this. This also supports our discussion in class relating to our inquiry question - What are the sources of Singapore's economic development? - about how Singapore was able to grow rapidly, by suggesting that its smart capital investments resulted in growth following the Solow Model. The argument in this article, shown in this quote, challenged my previous assumption that a nation must have all three factors - cultural values, institutions, and geography - in its favor, with no exceptions, to succeed, and that this was why most tropical regions were poorer on average than temperate regions.

    1. So like I said, the prime minister, I can't remember his name of Quebec, he said, we're going to take this moment and we're going to invest. We're going to invest in new ways forward, new trade relationships. We're going to invest our money in making new economic friends. I love that mentality. I'm not going to be beholden to the whims of this person. I'm not going to be beholden to the whims of this guy. I'm going to choose a new way forward. The status quo, the status quo, it's just too volatile.

      Canada looks for its own path in response to Trump

    1. That story was not unique to Minneapolis. Restrictive covenants were commonplace in American cities of the early 20th century, including New York, San Francisco, Washington, D.C., Seattle, Portland, Detroit, and just about anywhere researchers have bothered to look.

      It's shocking that this wasn't just one place; restrictive covenants were widespread across many cities.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews: 

      Reviewer #1 (Public review): 

      The conserved AAA-ATPase PCH-2 has been shown in several organisms including C. elegans to remodel classes of HORMAD proteins that act in meiotic pairing and recombination. In some organisms the impact of PCH-2 mutations is subtle but becomes more apparent when other aspects of recombination are perturbed. Patel et al. performed a set of elegant experiments in C. elegans aimed at identifying conserved functions of PCH-2. Their work provides such an opportunity because in C. elegans meiotically expressed HORMADs localize to meiotic chromosomes independently of PCH-2. Work in C. elegans also allows the authors to focus on nuclear PCH-2 functions as opposed to cytoplasmic functions also seen for PCH-2 in other organisms. 

      The authors performed the following experiments: 

      (1) They constructed C. elegans animals with SNPs that enabled them to measure crossing over in intervals that cover most of four of the six chromosomes. They then showed that double crossovers, which were common on most of the four chromosomes in wild-type, were absent in pch-2. They also noted shifts in crossover distribution in the four chromosomes.

      (2) Based on the crossover analysis and previous studies they hypothesized that PCH-2 plays a role at an early stage in meiotic prophase to regulate how SPO-11 induced double-strand breaks are utilized to form crossovers. They tested their hypothesis by performing ionizing irradiation and depleting SPO-11 at different stages in meiotic prophase in wild-type and pch-2 mutant animals. The authors observed that irradiation of meiotic nuclei in zygotene resulted in pch-2 nuclei having a larger number of nuclei with 6 or greater crossovers (as measured by COSA-1 foci) compared to wild type. Consistent with this observation, SPO-11 depletion, starting roughly in zygotene, also resulted in pch-2 nuclei having an increase in 6 or more COSA-1 foci compared to wild type. The increased number at this time point appeared beneficial because a significant decrease in univalents was observed.

      (3) They then asked if the above phenotypes correlated with the localization of MSH-5, a factor that stabilizes crossover-specific DNA recombination intermediates. They observed that pch-2 mutants displayed an increase in MSH-5 foci at early times in meiotic prophase and an unexpectedly higher number at later times. They conclude based on the differences in early MSH-5 localization and the SPO-11 and irradiation studies that PCH-2 prevents early DSBs from becoming crossovers and early loading of MSH-5. By analyzing different HORMAD proteins that are defective in forming the closed conformation acted upon by PCH-2, they present evidence that MSH-5 loading was regulated by the HIM-3 HORMAD. 

      (4) They performed a crossover homeostasis experiment in which DSB levels were reduced. The goal of this experiment was to test if PCH-2 acts in crossover assurance. Interestingly, in this background PCH-2 negative nuclei displayed higher levels of COSA-1 foci compared to PCH-2 positive nuclei. This observation and a further test of the model suggested that "PCH-2's presence on the SC prevents crossover designation." 

      (5) Based on their observations indicating that early DSBs are prevented from becoming crossovers by PCH-2, the authors hypothesized that the DNA damage kinase CHK-2 and PCH-2 act to control how DSBs enter the crossover pathway. This hypothesis was developed based on their finding that PCH-2 prevents early DSBs from becoming crossovers and previous work showing that CHK-2 activity is modulated during meiotic recombination progression. They tested their hypothesis using a mutant synaptonemal complex component that maintains high CHK-2 activity that cannot be turned off to enable crossover designation. Their finding that the pch-2 mutation suppressed the crossover defect (as measured by COSA-1 foci) supports their hypothesis.

      Based on these studies the authors provide convincing evidence that PCH-2 prevents early DSBs from becoming crossovers and controls the number and distribution of crossovers to promote a regulated mechanism that ensures the formation of obligate crossovers and crossover homeostasis. As the authors note, such a mechanism is consistent with earlier studies suggesting that early DSBs could serve as "scouts" to facilitate homolog pairing or to coordinate the DNA damage response with repair events that lead to crossing over. The detailed mechanistic insights provided in this work will certainly be used to better understand functions for PCH-2 in meiosis in other organisms. My comments below are aimed at improving the clarity of the manuscript. 

      We thank the reviewer for their concise summary of our manuscript and their assessment of our work as “convincing” and providing “detailed mechanistic insight.”

      Comments 

      (1) It appears from reading the Materials and Methods that the SNPs used to measure crossing over were obtained by mating Hawaiian and Bristol strains. It is not clear to this reviewer how the SNPs were introduced into the animals. Was crossing over measured in a single animal line? Were the wild-type and pch-2 mutations made in backgrounds that were isogenic with respect to each other? This is a concern because it is not clear, at least to this reviewer, how much of an impact crossing different ecotypes will have on the frequency and distribution of recombination events (and possibly the recombination intermediates that were studied). 

      We have clarified these issues in the Materials and Methods of our updated preprint. The control and pch-2 mutants were isogenic in either the Bristol or Hawaiian backgrounds. Control lines were the original Bristol and Hawaiian lines and pch-2 mutants were originally made in the Bristol line and backcrossed at least 3 times before analysis. Hawaiian pch-2 mutants were made by backcrossing pch-2 mutants at least 8 times to the Hawaiian background and verifying the presence of Hawaiian SNPs on all chromosomes tested in the recombination assay. To perform the recombination assays, these lines were crossed to generate the relevant F1s.

      (2) The authors state that in pch-2 mutants there was a striking shift of crossovers (line 135) to the PC end for all of the four chromosomes that were tested. I looked at Figure 1 for some time and felt that the results were more ambiguous. Map distances seemed similar at the PC end for wild type and pch-2 on Chrom. I. While the decrease in crossing over in pch-2 appeared significant for Chrom. I and III, the results for Chrom. IV and Chrom. X seemed less clear. Were map distances compared statistically? At least for this reviewer the effects on specific intervals appear less clear, and without a bit more detail on how the animals were constructed it's hard for me to follow these conclusions.

      We hope that the added details above makes the results of these assays more clear. Map distances were compared and did not satisfy statistical significance, except where indicated. While we agree that the comparisons between control animals and pch-2 mutants may seem less clear with individual chromosomes, we argue that more general, consistent patterns become clear when analyzing multiple chromosomes. Indeed, this is why we expanded our recombination analysis beyond Chromosome III and the X Chromosome, as reported in Deshong, 2014. We have edited this sentence to: “Moreover, there was a striking and consistent shift of crossovers to the PC end of all four chromosomes tested.”

      (3) Figure 2. I'm curious why non-irradiated controls were not tested side-by-side for COSA-1 staining. It just seems like a nice control that would strengthen the authors' arguments. 

      We have added these controls in the updated preprint as Figure 2B.

      (4) Figure 3. It took me a while to follow the connection between the COSA-1 staining and DAPI staining panels (12 hrs later). Perhaps an arrow that connects each set of time points between the panels or just a single title on the X-axis that links the two would make things clearer. 

      To make this figure more clear, we have generated two different cartoons for the assay that scores GFP::COSA-1 foci and the assay that scores bivalents. We have also edited this section of the results to make it more clear.

      Reviewer #2 (Public review): 

      Summary: 

      This paper has some intriguing data regarding the different potential roles of Pch-2 in ensuring crossing over. In particular, the alterations in crossover distribution and Msh-5 foci are compelling. My main issue is that some of the models are confusingly presented and would benefit from some reframing. The role of Pch-2 across organisms has been difficult to determine, the ability to separate pairing and synapsis roles in worms provides a great advantage for this paper. 

      Strengths: 

      Beautiful genetic data, clearly made figures. Great system for studying the role of Pch-2 in crossing over. 

      We thank the reviewers for their constructive and useful summary of our manuscript and the analysis of its strengths. 

      Weaknesses: 

      (1) For a general audience, definitions of crossover assurance, crossover eligible intermediates, and crossover designation would be helpful. This applies to both the proposed molecular model and the cytological manifestation that is being scored specifically in C. elegans. 

      We have made these changes in an updated preprint.

      (2) Line 62: Is there evidence that DSBs are introduced gradually throughout the early prophase? Please provide references. 

      We have referenced Woglar and Villeneuve 2018 and Joshi et. al. 2015 to support this statement in the updated preprint.

      (3) Do double crossovers show strong interference in worms? Given that the PC is at the ends of chromosomes don't you expect double crossovers to be near the chromosome ends and thus the PC? 

      Despite their rarity, double crossovers do show interference in worms. However, the PC is limited to one end of the chromosome. Therefore, even if interference ensures the spacing of these double crossovers, the preponderance of one of these crossovers toward one end (and not both ends) suggests something functionally unique about the PC end.

      (4) Line 155 - if the previous data in Deshong et al is helpful it would be useful to briefly describe it and how the experimental caveats led to misinterpretation (or state that further investigation suggests a different model etc.). Many readers are unlikely to look up the paper to find out what this means. 

      We have added this to the updated preprint: “We had previously observed that meiotic nuclei in early prophase were more likely to produce crossovers when DSBs were induced by the Mos transposon in pch-2 mutants than in control animals but experimental caveats limited our ability to properly interpret this experiment.”

      (5) Line 248: I am confused by the meaning of crossover assurance here - you see no difference in the average number of COSA-1 foci in Pch-2 vs. wt at any time point. Is it the increase in cells with >6 COSA-1 foci that shows a loss of crossover assurance? That is the only thing that shows a significant difference (at the one time point) in COSA-1 foci. The number of dapi bodies shows the loss of Pch-2 increases crossover assurance (fewer cells with unattached homologs). So this part is confusing to me. How does reliably detecting foci vs. DAPI bodies explain this? 

      We have removed this section to avoid confusion.

      (6) Line 384: I am confused. I understand that in the dsb-2/pch2 mutant there are fewer COSA-1 foci. So fewer crossovers are designated when DSBs are reduced in the absence of PCH-2.

      How then does this suggest that PCH-2's presence on the SC prevents crossover designation? Its absence is preventing crossover designation at least in the dsb-2 mutant. 

      We have tried to make this more clear in the updated preprint. In this experiment, we had identified three possible explanations for why PCH-2 persists on some nuclei that do not have GFP::COSA-1 foci: 1) PCH-2 removal is coincident with crossover designation; 2) PCH-2 removal depends on crossover designation; and 3) PCH-2 removal facilitates crossover designation. The decrease in the number of GFP::COSA-1 foci in dsb2::AID;pch-2 mutants argues against the first two possibilities, suggesting that the third might be correct. We have edited the sentence to read: “These data argue against the possibility that PCH-2’s removal from the SC is simply in response to or coincident with crossover designation and instead, suggest that PCH-2’s removal from the SC somehow facilitates crossover designation and assurance.”

      (7) Discussion Line 535: How do you know that the crossovers that form near the PCs are Class II and not the other way around? Perhaps early forming Class I crossovers give time for a second Class II crossover to form. In budding yeast, it is thought that synapsis initiation sites are likely sites of crossover designation and class I crossing over. Also, the precursors that form class I and II crossovers may be the same or highly similar to each other, such that Pch-2's actions could equally affect both pathways. 

      We do not know that the crossovers that form near the PC are Class II but hypothesize that they are based on the close, functional relationship that exists between Class I crossovers and synapsis and the apparent antagonistic relationship that exists between Class II crossovers and synapsis. We agree that Class I and Class II crossover precursors are likely to be the same or highly similar, exhibit extensive crosstalk that may complicate straightforward analysis and PCH-2 is likely to affect both, as strongly suggested by our GFP::MSH-5 analysis. We present this hypothesis based on the apparent relationship between PCH-2 and synapsis in several systems but agree that it needs to be formally tested. We have tried to make this argument more clear in the updated preprint.

      Reviewer #3 (Public review): 

      Summary: 

      This manuscript describes an in-depth analysis of the effect of the AAA+ ATPase PCH-2 on meiotic crossover formation in C. elegant. The authors reach several conclusions, and attempt to synthesize a 'universal' framework for the role of this factor in eukaryotic meiosis. 

      Strengths: 

      The manuscript makes use of the advantages of the 'conveyor belt' system within the C. elegans reproductive tract to enable a series of elegant genetic experiments.

      We thank this reviewer for the useful assessment of our manuscript and the articulation of its strengths.

      Weaknesses: 

      A weakness of this manuscript is that it heavily relies on certain genetic/cell biological assays that can report on distinct crossover outcomes, without clear and directed control over other aspects and variables that might also impact the final repair outcome. Such assays are currently out of reach in this model system. 

      In general, this manuscript could be more accessible to non-C. elegans readers. Currently, the manuscript is hard to digest for non-experts (even for meiosis researchers). In addition, the authors should be careful to consider alternative explanations for certain results. At several steps in the manuscript, results could ostensibly be caused by underlying defects that are currently unknown (for example, can we know for sure that pch-2 mutants do not suffer from altered DSB patterning, and how can we know what the exact functional and genetic interactions between pch-2 and HORMAD mutants tell us?). Alternative explanations are possible and it would serve the reader well to explicitly name and explain these options throughout the manuscript.

      We have made the manuscript more accessible to non-C. elegans readers and discuss alternate explanations for specific results in the updated preprint. 

      Recommendations for the authors:  

      Reviewing Editor Comments: 

      (1) Please provide 'n' values for each experiment. 

      n values are now included in the Figure legends for each experiment.

      (2) Line 129: Please represent the DCOs as percent or fraction (1%-9.8%, instead of 1-13). 

      We have made this change.

      (3) Figure 3A legend: the grey bar should read 20hr. COSA-1/ 32 hr DAPI. In Figure 3E, it is not clear why 36hr Auxin and 34hr Auxin show a significant difference in DAPI bodies between control and pch-2, but 32hr Auxin treatment does not. Here again 'n' values will help. 

      We have made this change. We also are not sure why the 32 hour auxin treatment did not show a significant difference in DAPI stained bodies. We have included the n values, which are not very different between timepoints and therefore are unlikely to explain the difference. The difference may reflect the time that it takes for SPO-11 function to be completely abrogated.

      (4) Line 360: Please provide the fraction of PCH-2 positive nuclei in dsb-2.

      We have made this change. 

      Please also address all reviewer comments. 

      Reviewer #1 (Recommendations for the authors): 

      (1) Page 3, line 52. While I agree that crossing over is important to generate new haplotypes, work has suggested that the contribution by an independent assortment of homologs to generate new haplotypes is likely to be significantly greater. One reference for this is: Veller et al. PNAS 116:1659. 

      We deeply appreciate this reviewer pointing us to this paper, especially since it argues that controlling crossover distribution contributes to gene shuffling and now cite it in our introduction! While we agree that this paper concludes that independent assortment likely explains the generation of new haplotypes to a greater degree than crossovers, the authors performed this analysis with human chromosomes and explicitly include the caveat that their modeling assumes uniform gene density across chromosomes. For example, we know this is not true in C. elegans. It would be interesting to perform the same analysis with C. elegans chromosomes in control and pch-2 mutants, taking into account this important difference.

      (2) Figure 2. It would really help the reader if an arrow and text were shown below each irradiation sign to indicate the stage in meiosis in which the irradiation was done as well as another arrow in the late pachytene box to show when the COSA-1 foci were analyzed. In general, having text in the figures that help stage the timing in meiosis would help the non C. elegans reader. This is also an issue where staging of C. elegans is shown (Figure 4). 

      We have made these changes to Figure 2. To help readers interpret Figure 4, we have added TZ and LP to the graphs in Figure 4B and 4D and indicated what these acronyms (transition zone and late pachytene, respectively) are in the Figure legend.

      (3) Page 12, line 288. It would be valuable to first outline why the him3-R93Y and htp-3H96Y alleles were chosen. This was eventually done on Page 13, but introducing this earlier would help the reader. 

      We have introduced these mutations earlier in the manuscript.

      (4) Page 13, line 323. A one sentence description of the OLLAS tagging system would be useful. 

      We have added this sentence: “we generated wildtype animals and pch-2 mutants with both GFP::MSH-5 and a version of COSA-1 that has been endogenously tagged at the N-terminus with the epitope tag, OLLAS, a fusion of the E. coli OmpF protein and the mouse Langerin extracellular domain”

      Reviewer #2 (Recommendations for the authors): 

      (1) The title is a little awkward. Consider: PCH-2 controls the number and distribution of crossovers in C. elegans by antagonizing their formation 

      We have made this change.

      (2) Abstract: 

      Consider removing "that is observed" from line 20. 

      We have made this change.

      I'm confused by the meaning of "reinforcement of crossover-eligible intermediates" from line 27. 

      We have removed this phrase from the abstract.

      A definition of crossover assurance would be helpful in the abstract. 

      We have added this to the abstract: “This requirement is known as crossover assurance and is one example of crossover control.”

      (3) Line 36: I know a stickler but many meioses only produce one haploid gamete (mammalian oocytes, for example) 

      Thanks for the reminder! We have removed the “four” from this sentence.

      (4) Line 284 - are you defining MSH-5 foci as crossover-eligible intermediates? If so, please state this earlier. 

      We have added this to the introduction to this section of the results: “In C. elegans, these crossover-eligible intermediates can be visualized by the loading of the pro-crossover factor MSH-5, a component of the meiosis-specific MutSγ complex that stabilizes crossover-specific DNA repair intermediates called joint molecules”

      (5) Can the control be included in Figure S1? 

      We have made this change.

      (6) Can you define that crossover designation is the formation of a COSA-1 focus? 

      We did this in the section introducing GFP::MSH-5: “In the spatiotemporally organized meiotic nuclei of the germline, a functional GFP tagged version of MSH-5, GFP::MSH-5, begins to form a few foci in leptotene/zygotene (the transition zone), becoming more numerous in early pachytene before decreasing in number in mid pachytene to ultimately colocalize with COSA-1 marked sites in late pachytene in a process called designation” 

      (7) Would it be easier to see the effect of DSBs on crossover-eligible intermediates in spo-11 pch-2 vs. spo-11 mutants with irradiation using your genetic maps? At least for early vs. late breaks? 

      Unfortunately, irradiation does not show the same bias towards genomic location that endogenous double-strand breaks do, so it is unlikely to recapitulate the effects on the genetic map.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      This paper focuses on secondary structure and homodimers in the HIV genome. The authors introduce a new method called HiCapR which reveals secondary structure, homodimer, and long-range interactions in the HIV genome. The experimental design and data analysis are well-documented and statistically sound. However, the manuscript could be further improved in the following aspects.

      Major comments:

      (1) Please give the full name of an abbreviation the first time it appears in the paper, for example, in L37, "5' UTR" and "RRE".

      Thank you for your suggestion. We have added the full name of these abbreviations.

      (2) The introduction could be strengthened by discussing the limitations of existing methods for studying HIV RNA structures and interactions and highlighting the specific advantages of the HiCapR method.

      Thank you for your insightful suggestion. We have modified sentences in the introduction section (lines 66-71 and 80-81 in the revised manuscript).

      (3) Please reorganize Results Part 1.

      Thank you for your advice. We have reorganized Results Part 1. We hope the revision provides a logical flow and clarity, making it easier for readers to follow the progression of the study and the significance of the findings regarding the HiCapR method.

      (4) Is there any reason that the authors mention "genome structure of SARS-CoV-2" in L95?

      Thank you for your insightful question. We have deleted this sentence in the revised paper.

      Initially, the mention of our previous work on SARS-CoV-2 serves two purposes: firstly, to demonstrate our capability to perform proximity ligation assays on viral samples; and secondly, to underscore the necessity of the hybridization step, which is particularly relevant for the study of HIV.

      Unlike SARS-CoV-2, which is highly abundant in infected cells and does not require post-library hybridization, HIV-1 presents a unique challenge due to its typically low viral RNA input within cells. The simplified SPLASH protocol, while effective for more abundant viral RNAs, does not provide the necessary coverage for high-resolution analysis when applied directly to HIV samples.

      Now, we have deleted this sentence according to your comments, and discuss the technical difference elsewhere.

      (5) L102: Please clarify the purpose of comparing "NL4-3" and "GX2005002." Additionally, could you explain what NL4-3 and GX2005002 are? The connection between NL4-3, GX2005002, and HIV appears to be missing.

      Thank you for your question, and we apologize for the confusion. "NL4-3" and "GX2005002" are two distinct HIV-1 strains that exhibit different prevalence patterns in various geographical regions. The NL4-3 strain is a well-characterized laboratory strain that is widely used in HIV research and is representative of HIV-1 subtype B, which is highly prevalent in Europe and the Americas. GX2005002, on the other hand, is a primary isolate of the CRF01_AE subtype, one of the most prevalent strains in Southeast Asia, particularly in China.

      The reason for comparing these two strains in our study is twofold. Firstly, it allows us to assess the applicability and versatility of our HiCapR method across different HIV-1 strains that may have distinct genetic and structural features. This is crucial for understanding the potential broad utility of our method in studying various HIV-1 strains globally. Secondly, by comparing these strains, we can begin to elucidate any strain-specific differences in RNA structure, homodimer formation, and long-range interactions, which may have implications for viral pathogenesis, transmission, and response to therapeutic interventions.

      The connection between NL4-3, GX2005002, and HIV lies in their representation of different subtypes of the HIV-1 virus, which exhibit genetic diversity and are associated with different geographical distributions. This diversity is epidemiologically and clinically relevant, as it may be associated with different pathogenesis and resistance mechanisms, and may have implications for vaccine development and treatment strategies.

      (6) Figure 1A is not able to clearly present the innovation point of HiCapR.

      Thank you for your comment. We have revised this figure to more clearly illustrate the steps and principles of the post-library capture process using HIV pooled probes hybridization and streptavidin pull down to enrich HIV RNA-derived chimeras.

      (7) Please compare the contact metrics detected by HiCapR and current techniques like SHAPE on the local interactions to assess the accuracy of HiCapR in capturing local RNA interactions relative to established methods.

      Thank you for your request to compare the contact metrics detected by HiCapR and current techniques like SHAPE on local interactions to assess the accuracy of HiCapR in capturing local RNA interactions relative to established methods.

      In this study, HiCapR has demonstrated its ability to identify key structural elements within the HIV genome, including TAR, polyA, SL1, SL2, and SL3, as well as the polyA-SL1 in the monomeric conformation. These elements are crucial for understanding the local RNA structures involved in HIV replication and pathogenesis. By visualizing the base pairing probability as a heatmap, we have identified the most stable base pairs in the 5’ UTR of HIV, which is consistent across both NL4-3 and GX2005002 strains (Figure 2D). This consistency suggests robustness in the overall structure despite sequence variations and alternative RNA conformations, indicating a high level of agreement between HiCapR and SHAPE methods in detecting local interactions.

      Furthermore, HiCapR not only confirms the presence of known structural elements but also reveals alternative conformations of the 5' UTR that corroborate those found in SHAPE analysis. This additional layer of information provides a more comprehensive view of the RNA structures, highlighting HiCapR's ability to capture local RNA interactions with a degree of accuracy comparable to established methods like SHAPE.

      (8) The paper needs further language editing.

      We have thoroughly revised the paper. We hope it has improved significantly.

      Reviewer #2 (Public review):

      Summary:

      In the manuscript "Mapping HIV-1 RNA Structure, Homodimers, Long-Range Interactions and persistent domains by HiCapR", Zhang et al. report results from an omics-type approach to mapping RNA crosslinks within the HIV RNA genome under different conditions, i.e. in infected cells and in virions. Reportedly, they used a previously published method which, in the present case, was improved for application to RNAs of low abundance.

      Their claims include the detection of numerous long-range interactions, some of which differ between cellular and virion RNA. Further claims concern the detection and analysis of homodimers.

      Strengths:

      (1) The method developed here works with extremely little viral RNA input and allows for the comparison of RNA from infected cells versus virions.

      (2) The findings, if validated properly, are certainly interesting to the community.

      Thank you for your comprehensive review and insightful comments on our manuscript. We appreciate your recognition of the strengths of our HiCapR method and the potential interest of our findings to the scientific community.

      Weaknesses:

      (1) On the communication level, the present version of the manuscript suffers from a number of shortcomings. I may be insufficiently familiar with habits in this community, but for RNA afficionados just a little bit outside of the viral-RNA-X-link community, the original method (reference 22) and the presumed improvement here are far too little explained, namely in something like three lines (98-100). This is not at all conducive to further reading.

      Thank you for your feedback on the clarity of our manuscript, particularly regarding the explanation of the HiCapR method and its improvements over the original method mentioned in reference 22.

      In response to your feedback, we have expanded the description of the HiCapR method in the revised manuscript to ensure that it is accessible to a broader audience. We provide a more thorough comparison between HiCapR and the original method, detailing the specific improvements and how they enable the analysis of low-abundance viral RNAs like HIV. This includes:

      Post-library Hybridization: Unlike the original method, HiCapR incorporates a post-library hybridization step. This innovation allows for the capture of target RNA involved in interactions after library construction, offering additional flexibility and enhancing the resolution of the analysis.

      Enhanced Sensitivity: HiCapR has been optimized to work with extremely low viral RNA input, which is a significant advancement over the original method. This is crucial for studying viruses like HIV, where obtaining high quantities of viral RNA can be challenging.

      (2) Experimentally, the manuscript seems to be based on a single biological replicate, so there is strong concern about reproducibility.

      Thank you for raising the issue of reproducibility in our study. We understand the importance of experimental replication in ensuring the reliability of our findings. In response to your concern, we would like to provide the following clarification and additional details regarding the reproducibility of our HiCapR experiments:

      Replicates in HiCapR Experiments: All ligation and control samples in our HiCapR experiments were performed in three biological replicates. This was done to ensure the high reproducibility of our results. The high degree of correlation (r > 0.99) between these replicates underscores the reliability of our findings.

      Dimer Validation Experiments: To validate the dimer formation of the RRE and the 5' UTR, we employed multiple independent methods, including native agarose gel electrophoresis, Agilent 4200 TapeStation capillary electrophoresis, and biomolecular binding kinetics assays. These methods provide complementary perspectives on dimer formation, enhancing the robustness of our validation process. The data presented in Figure 3C and Supplementary Figure S12 are representative results from these experiments, which consistently support our findings on dimer formation.

      Agreement Between Cellular and Virion RNA: Our study also demonstrates a significant similarity between virions in the supernatant and infected cells from the same viral strain, as shown in Supplementary Figure S3. This consistency further validates the reproducibility and reliability of our HiCapR method in capturing RNA structures and interactions under different conditions.

      Consistency across two strains: Our study includes a comprehensive analysis of two distinct HIV-1 strains, NL4-3 and GX2005002, which are prevalent in Europe and Southeast Asia, respectively. The consistency in our findings across these strains serves as a strong indicator of the reproducibility and general applicability of our HiCapR method. Specifically, the presence of key structural elements such as TAR, polyA, SL1, SL2, and SL3 in both the NL4-3 and GX2005002 strains suggests a robust structural framework that is conserved across different strains, despite sequence variations. Additionally, our study reveals approximately 20 candidate dimer peaks conserved between the NL4-3 and GX2005002 strains along the genome. The conservation of these dimer peaks across strains indicates a reproducible pattern of dimerization.

      (3) The authors perform an extensive computational analysis from a limited number of datasets, which are in thorough need of experimental validation

      Thank you for your comment.

      In response to your concern, we would like to clarify that while our manuscript does present an extensive computational analysis, we have also conducted a series of validation experiments. Specifically, we have validated dimer formation using multiple independent methods, as discussed above.

      Given the time-consuming nature of additional experiments, we have chosen to share the HiCapR data with the community in a timely manner. This approach allows for more immediate communication and evaluation of the data on HIV structure, which we believe is valuable for advancing the field.

      We are committed to further investigating the functional implications of our structural findings. We plan to conduct more experiments to explore the functional linking between the structural insights of HIV, which will help to deepen our understanding of the virus's replication and potential antiviral strategies.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      I suggest a major revision of the manuscript.

      Minor comments:

      (1) The article lacks consistency in its presentation; some proper nouns are written incorrectly. For example, (a) L89, "RNA:RNA interaction" → RNA-RNA interaction; (b) L431, "SARS-COV-2" → SARS-CoV-2;

      We are sorry for the inconsistency. We have corrected the mistakes.

      (2) "We identified dimers based on the methodology described in23." is not a complete sentence.

      Thank you for your insightful comment. We have revised the sentence to provide a complete and clear description of our methodology. The revised sentence is as follows: "Homodimers were identified in accordance with the methods previously reported in the literature."

      Reviewer #2 (Recommendations for the authors):

      (1) The authors perform an extensive computational analysis from a limited number of datasets, which are in thorough need of experimental validation. There is a single series of in vitro validation experiments on the interaction of a homodimerization site, described in five lines (278-283), plus Figure panel 3c with a very brief legend, and an extremely minimalist Figure S12. The panel in Figure 3c contains Kd values which have not been assessed for significant digits.

      Thank you for your constructive feedback on our manuscript.

      We acknowledge that our computational analysis is based on a limited number of datasets. Due to the initial exploratory nature of our study and the logistical challenges of generating additional datasets, we have focused on in-depth analysis of the available data. We are currently working on further validating our findings and are committed to publishing these results in a follow-up study.

      Regarding Experimental Validation:

      We agree that the initial description of our in vitro validation of the homodimerization site was concise. To address this, we have expanded the description of our experimental procedures. Specifically, we have detailed the methods used for the in vitro transcription, the preparation of RNA samples, and the use of the Octet R8 platform for biomolecular binding kinetics assays.

      For the Kd values presented in Figure 3c, we have now included the standard error of the mean and have defined the significant digits in the figure legend. This revision provides a more accurate representation of the binding affinities.

      (2) As a further example to be experimentally validated, splice sites are discussed after lines 354, for which unsophisticated validation techniques such as targeted RT-PCR are widely accepted.

      In response to your comment, we would like to clarify that the splice sites mentioned in our study are well-established and widely recognized in the literature. They have been previously characterized and are considered canonical within the HIV research community. Given their established nature, we have relied on this foundational knowledge in our analysis.

      However, we concur with the importance of validating the regulatory role of homodimers in splicing, which is a significant aspect of HIV biology. While we have provided evidence for the presence of these homodimers and their potential implications for splicing, we acknowledge the need for further functional studies to elucidate their mechanistic role.

      Due to the scope and length constraints of the current manuscript, we have chosen to focus on the structural and interaction analyses provided by HiCapR. The functional validation of these homodimers and their impact on splicing will be pursued in subsequent studies, which we plan to initiate promptly. We believe that a dedicated follow-up study will allow for a more in-depth exploration of this complex and important aspect of HIV gene regulation.

      We are committed to advancing our understanding of the functional significance of these homodimers in the context of HIV splicing and will ensure that this line of investigation is thoroughly addressed in our future work.

      Thank you again for your valuable feedback. We look forward to contributing further to the field with our ongoing research.

    1. "World inequality, however, cannot be explained by climate or diseases, or any version of the geography hypothesis. Just think of Nogales. What separates the two parts is not climate, geography, or disease environment, but the U.S.-Mexico border."

      Thoughts: The authors make a really strong point against the notion that geography is the main decider of wealth and development. The strongest and most obvious case to me is probably North and South Korea. This prompted me to think about other examples that are closer to my personal life. Having lived a significant amount of time in China, I can very much see how this is also the case: for example, the Province of Guangdong is very developed and extremely wealthy, while its neighbors Guangxi and Yunnan remain much poorer. However, I can also think of many examples that go against this argument as well. The first one that popped into my head was the oil producers in the Middle East. They have similar systems and cultural beliefs to those one would normally associate with a less developed country, yet they are extremely wealthy and hold great economic and diplomatic power. Why? Because of their natural geography, which gave them oil. This ties into today's inquiry question by challenging the idea that natural conditions alone explain why some nations fail while others succeed.

      “The real reason that the Kongolese did not adopt superior technology was because they lacked any incentives to do so. They faced a high risk of all their output being expropriated and taxed by the all-powerful king, whether or not he had converted to Catholicism.”

      Question: This really makes me wonder. How much does economic growth depend on security and trust in the government? If people fear that their wealth will be taken away, they won't invest in the future, but if that's true, how did China, a communist country, grow so fast despite having strong government control? There are so many cases where large corporations suffer big losses in capital and assets based on political disputes. Are there cases where restrictive governments have still managed to create incentives for economic development? If so, how did they do it? This connects to today’s inquiry question by highlighting the role of government policies in either encouraging or stifling long-term prosperity and domestic growth.

      “Although the ignorance hypothesis still rules supreme among most economists and in Western policymaking circles—which, almost to the exclusion of anything else, focus on how to engineer prosperity—it is just another hypothesis that doesn’t work.”

      Epiphany: I've always had the perception in politics that bad leaders are just plain stupid. Why would a president promise to lower grocery prices, then immediately place tariffs on their largest long-term importers? Wouldn't anyone with any background in economics know the consumer bears most of that tariff? So I often find myself asking: Why did they do that? Because they're just stupid: that's what I'd always thought. But now, I wonder if they really are stupid, or if they know exactly how to fix an issue but fail because those in power choose policies that serve their own interests rather than those of the people. This completely changes how I think about global inequality. It's not just about finding the right policies like investing in education, infrastructure, or specific industries, but about fixing political power and who benefits from the system staying the way it is.

    1. Today, however, she has crossed a big line. She insisted on driving. She contrived to get us inside this house, into the master bedroom, and now she’s just come back from the bathroom after dumping two jars of salts into the tub, and she’s starting to throw some products from the dressing table into the trash.

      What is it about today? We don't know what the trigger is, and it's not mentioned. This must be inferred.

    2. “I’m going to get the car out,” I say, picking the wood up again. “I want you out there with me in two minutes. You’d better be there.” The woman is in the hall talking on a cell phone, but she sees me and hangs up. “It’s my husband, he’s on his way.” I wait for an expression that will tell me whether the man is coming to help my mother and me, or to help the woman get us out of the house. But the woman just stares at me, taking care not to give me any clues. I go outside and walk to the car, and I can hear the boy running behind me. I don’t say anything as I prop the wood under the wheels and look around to see where my mother could have left the keys. Then I start the car. It takes several tries, but finally the ramp trick works. I close the car door, and the boy has to run so I don’t hit him. I don’t stop, I

      There's something right under the surface here. This is where the daughter takes charge. She is the mother now. The switch in roles is evident. There is action.

    1. The research literature in psychology is all the published research in psychology, consisting primarily of articles in professional journals and scholarly books. Early in the research process, it is important to conduct a review of the research literature on your topic to refine your research question, identify appropriate research methods, place your question in the context of other research, and prepare to write an effective research report. There are several strategies for finding previous research on your topic. Among the best is using PsycINFO, a computer database that catalogs millions of articles, books, and book chapters in psychology and related fields.

      Tiana Ruiz: When picking a research topic, it's important to find something interesting but also realistic given time and resources. This stood out because passion alone isn't enough; you need to make sure it's doable. Another key point is reviewing past research to find gaps instead of repeating what's already been done. This matters because it helps make a study more valuable and original. Understanding these ideas makes it clear that research isn't just about curiosity but also about strategy and building on existing knowledge.

    1. He has spent the last four years writing a book that he hopes will forever change the way we view the superrich’s role in our society.

      It's nice to see someone wanting to make change and help people, without just saying it to say it, and taking the next step of writing a book to inform people.