10,000 Matching Annotations
  1. Feb 2025
    1. With that in mind, I recently met Edward Conard on 57th Street and Madison Avenue, just outside his office at Bain Capital, the private-equity firm he helped build into a multibillion-dollar business by buying, fixing up and selling off companies at a profit.

      It's interesting to see that he buys companies, fixes them up, and flips them for a profit.

    1. Everyone's a winner?

      In the classic children’s book Alice’s Adventures in Wonderland, a very chaotic race takes place. When it finishes, no one knows who has won. To clear up the confusion, one of the characters proposes a startling solution: ‘EVERYBODY has won the race, and all must have prizes’. Obviously, the author wrote this scene for its comic effect. However, today, there is a growing movement in several countries to organise children’s sport so that there are no winners or losers. In Canada, for example, a regional football organisation has decided that in the future, in matches between under-12 teams, no one will keep score. Although this decision might seem strange, there is some interesting thinking behind it.

      Supporters of the idea maintain that competitive sport puts some children off exercise forever because of the intense pressure to win. Sport for children, they argue, should be fun and not about winning or losing. Another problem occurs with children who aren’t very sporty. They end up losing most of the time and feel they have let the rest of the team down. There is a real possibility that these children will develop a negative self-image which will possibly stay with them the rest of their lives. Competitive sport can also encourage kids to think of their classmates as ‘winners’ or ‘losers’ in general. These are clearly not the values we want to communicate to young children. Finally, when beating your opponent becomes the main objective in sport, there is always a danger that some children are going to want to win at any cost and will cheat.

      Not everyone, of course, is in favour of sport without winners and losers. Many people maintain that losing actually builds character because it encourages you to get over disappointments and try harder. It’s also true that an element of competition is present in many aspects of life, such as doing well in exams or getting a job, and competitive sport prepares young people for these challenges. Sport is also more exciting and challenging when there is a risk of losing. In addition, for children who don’t do well in other school subjects, sport can be their one opportunity to be really good at something. Do we really want to take this opportunity away from them?

      In Canada, and in other countries, more and more organisations are experimenting with non-competitive sport and it appears to be taking off. However, not all the kids are crazy about it. For example, in games which are in theory non-competitive, players often shout out the score when the ball goes in the back of the net and they celebrate victories on the pitch at the end of the match!

      In the end, it’s quite tricky to come to a clear conclusion about this new version of children’s sport. Kids are different, so non-competitive sport will work well with some and not with others. Perhaps a bigger question is: Can we really talk about ‘sport’ when there is no element of competition? Isn’t sport without competition just exercise?

    1. When designers and programmers don’t think to take into account different groups of people, then they might make designs that don’t work for everyone.

      This sentence underscores the fundamental importance of inclusive design, reminding us that overlooking diverse user needs can lead to products that unintentionally marginalize entire groups. It highlights why diversity in design teams isn’t just a buzzword—it’s essential to creating technology that truly serves a broad spectrum of society.

    1. Is it just a matter of time before computers take over the world? It’s not hard to envision a dystopian future where robots roam the earth and outsmart human beings (think of movies like 2001: A Space Odyssey, The Matrix, or The Terminator series).

      It is almost worrisome how advanced technology is becoming. Some people I know have switched majors because AI could take over the jobs they were aiming for.

    1. we find that Warshak plainly manifested an expectation that his emails would be shielded from outside scrutiny. As he notes in his brief, his “entire business and personal life was contained within the . . . emails seized.” Appellant’s Br. at 39-40. Given the often sensitive and sometimes damning substance of his emails, we think it highly unlikely that Warshak expected them to be made public, for people seldom unfurl their dirty laundry in plain view

      Our case is different because it was a chatroom, whose explicit purpose is to talk: it's not private, it's literally communicating. The same result could have been leaked by an individual in the chatroom; it just happened to be leaked by the ISP.

  2. social-media-ethics-automation.github.io
    1. A disability is an ability that a person doesn’t have, but that their society expects them to have.[1] For example: If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation. If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation. If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs, would have a disability in that situation.

      Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group might just be “normal” in another. There are many things we might not be able to do that won’t be considered disabilities because our social groups don’t expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn’t assume we would be able to. Or, for a more practical example, let’s look at color vision: most humans are trichromats, meaning they can see three base colors (red, green, and blue), along with all combinations of those three colors. Human societies often assume that people will be trichromats, so people who can’t see as many colors are considered to be color blind, a disability. But there are also a small number of people who are tetrachromats and can see four base colors[2] and all combinations of those four colors. In comparison to tetrachromats, trichromats (the majority of people) lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.

      Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome, contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of “faking” or “making up” their disability (e.g., someone who can walk short distances but needs to use a wheelchair when going long distances). Disabilities can be accepted as socially normal, as is sometimes the case for wearing glasses or contacts, or they can be stigmatized as socially unacceptable, inconvenient, or blamed on the disabled person. Some people (like many with chronic pain) would welcome a cure that got rid of their disability. Others (like many autistic people) are insulted by the suggestion that there is something wrong with them that needs to be “cured,” and think the only reason autism is considered a “disability” at all is because society doesn’t make reasonable accommodations for them the way it does for neurotypical people.

      Many of the disabilities we mentioned above were permanent disabilities, that is, disabilities that won’t go away. But disabilities can also be temporary disabilities, like a broken leg in a cast, which may eventually get better. Disabilities can also vary over time (e.g., “Today is a bad day for my back pain”). Disabilities can even be situational disabilities, like the loss of fine motor skills when wearing thick gloves in the cold, or trying to watch a video on your phone in class with the sound off, or trying to type on a computer while holding a baby.

      As you look through all these types of disabilities, you might discover ways you have experienced disability in your own life. Please keep in mind, though, that different disabilities can be very different, and everyone’s experience with their own disability can vary, so having some experience with disability does not make someone an expert in any other experience of disability. As for our experience with disability, Kyle has been diagnosed with generalized anxiety disorder and Susan has been diagnosed with depression. Kyle and Susan also both have near-sightedness (our eyes cannot focus on things far away unless we use corrective lenses, like glasses or contacts) and ADHD (we have difficulty controlling our focus, sometimes being hyperfocused and sometimes highly distracted, and we also have difficulties with executive dysfunction).

      This made me think about how I’ve encountered situational disabilities in my own life. For example, trying to use a smartphone in bright sunlight when the screen becomes unreadable is a form of situational disability. Similarly, being in a loud space where I can't hear a conversation well might resemble the experience of someone with hearing loss, even if it's only temporary. It’s a reminder that disability is fluid and context-dependent, not just a fixed identity that applies to a specific group of people.

    1. It’s important to note that an outline is different from a script. While a script contains everything that will be said, an outline includes the main content.

      I used to use index cards when preparing speeches. I would write everything out, and then I would break it all down into points and ideas. It eventually got easier to build a skeleton so that I could give the speech and not forget what I needed to say. It just took a ton of practice, at least for me.

    1. Second, most of the content on research databases has gone through editorial review, which means a professional editor or a peer editor has reviewed the material to make sure it is credible and worthy of publication. Most content on websites is not subjected to the same review process, as just about anyone with Internet access can self-publish information on a personal website, blog, wiki, or social media page.

      I didn't know CWI offered so many resources for study and research. I think it's amazing how technology has allowed us to just pull something up and use it in that moment.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:  

      Reviewer #1 (Public Review):  

      Summary:  

      The study by Pudlowski et al. investigates how the intricate structure of centrioles is formed by studying the role of a complex formed by delta- and epsilon-tubulin and the TEDC1 and TEDC2 proteins. For this, they employ knockout cell lines, EM, and ultrastructure expansion microscopy as well as pull-downs. Previous work has indicated a role of delta- and epsilon-tubulin in triplet microtubule formation. Without triplet microtubules centriolar cylinders can still form, but are unstable, resulting in futile rounds of de novo centriole assembly during S phase and disassembly during mitosis. Here the authors show that all four proteins function as a complex and knockout of any of the four proteins results in the same phenotype. They further find that mutant centrioles lack inner scaffold proteins and contain an extended proximal end including markers such as SAS6 and CEP135, suggesting that triplet microtubule formation is linked to limiting proximal end extension and formation of the central region that contains the inner scaffold. Finally, they show that mutant centrioles seem to undergo elongation during early mitosis before disassembly, although it is not clear if this may also be due to prolonged mitotic duration in mutants.  

      Strengths:  

      Overall this is a well-performed study, well presented, with conclusions mostly supported by the data. The use of knockout cell lines and rescue experiments is convincing.  

      Weaknesses:  

      In some cases, additional controls and quantification would be needed, in particular regarding cell cycle and centriole elongation stages, to make the data and conclusions more robust. 

      We thank the reviewer for these comments and have improved our analyses of these as detailed below.

      Reviewer #2 (Public Review):  

      Summary:  

      In this article, the authors study the function of TEDC1 and TEDC2, two proteins previously reported to interact with TUBD1 and TUBE1. Previous work by the same group had shown that TUBD1 and TUBE1 are required for centriole assembly and that human cells lacking these proteins form abnormal centrioles that only have singlet microtubules that disintegrate in mitosis. In this new work, the authors demonstrate that TEDC1 and TEDC2 depletion results in the same phenotype with abnormal centrioles that also disintegrate into mitosis. In addition, they were able to localize these proteins to the proximal end of the centriole, a result not previously achieved with TUBD1 and TUBE1, providing a better understanding of where and when the complex is involved in centriole growth.  

      Strengths:  

      The results are very convincing, particularly the phenotype, which is the same as previously observed for TUBD1 and TUBE1. The U-ExM localization is also convincing: despite a signal that's not very homogeneous, it's clear that the complex is in the proximal region of the centriole and procentriole. The phenotype observed in U-ExM on the elongation of the cartwheel is also spectacular and opens the question of the regulation of the size of this structure. The authors also report convincing results on direct interactions between TUBD1, TUBE1, TEDC1, and TEDC2, and an intriguing structural prediction suggesting that TEDC1 and TEDC2 form a heterodimer that interacts with the TUBD1-TUBE1 heterodimer.

      Weaknesses:  

      The phenotypes observed in U-ExM on cartwheel elongation merit further quantification, enabling the field to appreciate better what is happening at the level of this structure.  

      We thank the reviewer for these comments and have improved our analyses of cartwheel elongation as detailed below.

      Reviewer #3 (Public Review):  

      Summary:  

      Human cells deficient in delta-tubulin or epsilon-tubulin form unstable centrioles, which lack triplet microtubules and undergo a futile formation and disintegration cycle. In this study, the authors show that human cells lacking the associated proteins TEDC1 or TEDC2 have these identical phenotypes. They use genetics to knock out TEDC1 or TEDC2 in p53-negative RPE-1 cells and expansion microscopy to structurally characterize mutant centrioles. Biochemical methods and AlphaFold-Multimer prediction software are used to investigate interactions between tubulins and TEDC1 and TEDC2.

      The study shows that mutant centrioles are built only of A-tubules, which elongate and extend their proximal region, fail to incorporate structural components, and finally disintegrate in mitosis. In addition, they demonstrate that delta-tubulin, epsilon-tubulin, TEDC1, and TEDC2 form one complex and that TEDC1 and TEDC2 can interact independently of tubulins. Finally, they show that the localization of the four proteins is mutually dependent.

      Strengths:  

      The results presented here are mostly convincing, the study is exciting and important, and the manuscript is well-written. The study shows that delta-tubulin, epsilon-tubulin, TEDC1, and TEDC2 function together to build a stable and functional centriole, significantly contributing to the field and our understanding of the centriole assembly process.  

      Weaknesses:  

      The ultrastructural characterization of TEDC1 and TEDC2 obtained by U-ExM is inconclusive. Improving the quality of the signals is paramount for this manuscript.  

      We thank the reviewer for these comments and have improved our imaging of TEDC1 and TEDC2 localization, as detailed below.

      Recommendations for the authors:

      Reviewing Editor (Recommendations For The Authors):  

      The reviewers agreed that the conclusions are largely supported by solid evidence, but felt that improving the following aspects would make some of the data more convincing:  

      (1) The U-ExM localizations of TEDC1/2 are not very convincing, and the reviewers suggest complementing these with alternative super-resolution approaches (e.g. SIM) and/or different labeling techniques, such as pre-expansion labeling using STAR red/orange secondaries (also robust for SIM and STED), use of the Halo tag, different tag antibodies, etc.

      We thank the reviewers for these recommendations and have adopted two of these strategies to improve our imaging of TEDC1 and TEDC2 localization. First, we used an alternative super-resolution approach, a Yokogawa CSU-W1 SoRA confocal scanner (resolution = 120 nm), and imaged cells grown on coverslips (not expanded). We found that TEDC1 and TEDC2 localize to procentrioles and the proximal end of parental centrioles (Fig 2 – Supplementary Figure 1a, b). Second, we used a recently described expansion gel chemistry (Kong et al., Methods Mol Biol 2024) combined with Abberior Star red and orange secondary antibodies. This technique resulted in robust signal at centrosomes and in the cytoplasm and indicated that TEDC1 and TEDC2 localize near the centriole walls of procentrioles and the proximal region of parental centrioles, near CEP44 (Fig 2 – Supplementary Figure 1c, d). These results complement and support our initial observations (Fig 2C, D), and we have edited the text to reflect this (lines 157-163). We also note that these Flag tag and V5 tag primary antibodies are specific and have little background signal in all applications (Fig 2 – Supplementary Fig 1E-J), while other commercially available antibodies against these tags did exhibit non-specific signal.

      (2) The cell cycle classifications of centrioles would strongly benefit, apart from a better description, from adding quantifications of average centriole length at a given stage based on tubulin staining (not acTub). 

      We thank the reviewers for these recommendations. We have added an improved description of our cell cycle analyses (lines 234-237). We have also added new analyses of centriole length as measured by staining with alpha-tubulin (Fig 4 – Supp 3 and Fig 4 – Supp 4). We find that in all mutants, acetylated tubulin elongates along with alpha-tubulin in a similar way to control centrioles.

      Reviewer #1 (Recommendations For The Authors):  

      Specific points:  

      (1) The introduction is a bit oddly structured. About halfway through it summarizes what is going to be presented in the study, giving the impression that it is about to conclude, but then continues with additional, detailed introduction paragraphs. Overall, the authors may also want to consider making it more concise.

      We thank the reviewer for these suggestions and have shortened and restructured the introduction for clarity and conciseness.

      (2) The text should explain to the non-expert reader why endogenous proteins are not detected and why exogenously expressed, tagged versions are used. Related to this, the authors state overexpression, but what is this assessment based on? Does expression at the endogenous level also rescue? At least by western blot, these questions should be addressed. 

      In the text, we have added clarification about why endogenous proteins were not detected by immunofluorescence (lines 149-151). To quantify the overexpression, we have added Western blots of TEDC1 and TEDC2 to Fig 1 – Supplementary Figure 1E,F. We note that endogenous levels of both proteins are very low, and the rescue constructs are overexpressed 20- to 70-fold above endogenous levels.

      (3) The figures should clearly indicate when tagged proteins are used and detected.

      Currently, this info is only found in the legends but should be in the figure panels as well. 

      We have made these changes to the figure panels in Fig 2, Fig 2 – Supp 1, and Fig 3.

      (4) I could not find a description of or reference to Figure 2 Supplements 2 and 3.

      We have replaced these supplements with new supplementary figures for TEDC1 and TEDC2 localization (Fig 2 – Supp 1).

      (5) The multiple bands including unspecific (?) bands should be labeled to guide the reader in the western blots. 

      We have labeled nonspecific bands in our Western blots with asterisks (Fig 1 – Supp 1, Fig 3).

      (6) The AlphaFold prediction suggests that TUBD1 can bind to the TED complex in the absence of TUBE1. Can this be shown? This would be a nice validation of the predicted architecture of the complex. I also missed a bit of discussion of the predicted architecture. How could it be linked to triplet microtubule formation? Is the latest AlphaFold version 3 adding anything to this analysis?

      In our pulldown experiments, we found that TUBD1 cannot bind to TEDC1 or TEDC2 in the absence of TUBE1 (Fig 3C, D, IB: TUBD1). We performed this experiment with three biological replicates and found the same result. It is possible that TUBD1 and TUBE1 form an intact heterodimer, similar to alpha-tubulin and beta-tubulin, and this will be an exciting area of future research.

      We have added new analysis from AlphaFold3 (Fig 3 – Supp 1B). AlphaFold3 predicts a similar structure as AlphaFold Multimer.

      We have also added additional discussion about the AlphaFold prediction to the text (lines 220-222, 365-367). Thanks to the reviewer for pointing out this oversight.

      (7) I suggest briefly explaining in the text how cells and centrioles at different cell cycle stages were identified. I found some info in the legend of Figure 1, but no info for other figures or in the text. Related to this, how are procentrioles defined in de novo formation? There is no parental centriole to serve as a reference. 

      We have added a brief explanation of the synchronization and identification in lines 234-237. We have also clarified the text regarding de novo centrioles, and now term these “de novo centrioles in the first cell cycle after their formation” (lines 271-272).

      (8) Related to point 7: using acetylated tubulin as a universal length and width marker seems unreliable since it is a PTM. The authors should use general tubulin staining to estimate centriole dimensions, or at least establish that acetylated tubulin correlates well with the overall tubulin signal in all mutants. 

      We have added two supplementary data figures (Fig 4 – supp 3 and Fig 4 – supp 4) in which we co-stain control and mutant centrioles with alpha-tubulin. We found that acetylated tubulin marked mutant centrioles well and as alpha-tubulin length increased, acetylated tubulin length also increased. 

      (9) Presence and absence of various centriolar proteins. These analyses lack a clear reference for the precise centriole elongation stage. This is particularly problematic for proteins that are recruited at specific later stages (such as inner scaffold proteins). The staining should be correlated with centriole length measurements, ideally using general tubulin staining.  

      As described for point 8, we have added two supplementary data figures in which we co-stain control and mutant centrioles with alpha-tubulin and found that acetylated tubulin also increases as overall tubulin length increases in all mutants. We note that inner scaffold proteins are absent in all our mutant centrioles at all stages of the cell and centriole cycle, as also previously reported for POC5 in Wang et al., 2017.

      Reviewer #2 (Recommendations For The Authors):  

      Here's a list of points I think could be improved:  

      -  As the authors previously published, the centriole appears to have a smaller internal diameter than mature centrioles. Could the authors measure to see if the phenotype is identical? Is the centriole blocked in the bloom phase (Laporte et al. 2024)? 

      We have added an additional supplementary figure (Fig 4 – supp 5) to show that mutant centrioles have smaller diameters than mature centrioles, as we previously reported for the delta-tubulin and epsilon-tubulin mutant centrioles by EM. We thank the reviewers for the additional question about the bloom phase. Given the smaller number of centrioles analyzed in this paper compared to Laporte et al (50 to 80 centrioles per condition here, versus 800 centrioles in Laporte et al), it is difficult to definitively conclude whether there is a block in the bloom phase. This would be an interesting area for future research.

      -  The images of the centrioles in EM are beautiful. Would it be possible to apply a symmetrisation on it to better see the centriolar structures? For example, is the A-C linker present? 

      We thank the reviewer for this excellent suggestion. Using centrioleJ, we find that the A-C linker is absent from mutant centrioles. The symmetrized images have been added to Fig 1 – Supplementary Fig 2, and additional discussion has been added to the text (line 143-144, line 368-374).  

      -  How many EM images were taken? Did the centrioles have 100% A-microtubule only or sometimes with B-MT? 

      For TEM, we focused on centrioles that were positioned to give perfect cross-section images of the centriolar microtubules, and thus did not take images of off-angle or rotated centrioles. Given the difficulty of this experiment (centrioles are small structures within the cell, centrosomes are single-copy organelles, and off-angle centrioles were not imaged), we were lucky to image 3 centrioles that were in perfect cross-section – 2 for Tedc1<sup>-/-</sup> and 1 for Tedc2<sup>-/-</sup>. Our images indicate that these centrioles only have A-tubules (Fig 1 – Supp Fig 2).

      -  In Figure 2 - it would be preferable to write TEDC2-flag or TEDC1-flag and not TEDC2/1. 

      We have made this change.

      -  It seems that Figures 2C and D aren't cited, and some of the data in the supplemental data are not described in the main text. 

      We have replaced these supplements with new supplementary figures for TEDC1 and TEDC2 localization (Fig 2 – Supp 1).

      -  The signal in U-ExM with the anti-Flag antibody is heterogeneous. Did the authors test several anti-FLAG antibodies in U-ExM? 

      We tested several anti-Flag and anti-V5 antibodies for our analyses, and chose these because they have little background signal in all applications (Fig 2 – Supplementary Fig 1E-J). Other commercially available antibodies against these tags did exhibit non-specific signal.

      -  The AlphaFold prediction is difficult to interpret, the authors should provide more views and the PDB file. 

      We have added 2 additional views of the AlphaFold prediction in Fig 3 – Supp 1A.

      -  In general, but particularly for Figure 4: the length doesn't seem to be divided by the expansion factor, it is therefore difficult to compare with known EM dimensions. Can the authors correct the scale bars? 

      We have corrected the scale bars for all figures to account for the expansion factor.

      -  Concerning Gamma-tubulin that is "recruited to the lumen of centrioles by the inner scaffold, had localization defects in mutant centrioles. However, we were unable to reliably detect gamma-tubulin within the lumen of control or de novo-formed centrioles in S or G2-phase (Figure 4 - Supplement 1E), and thus were unable to test this hypothesis". In Laporte et al 2024, Gamma-tubulin arrives later than the inner scaffold and only on mature centrioles, so this result appears to be in line with previous observation. However, the authors should be able to detect a proximal signal under the microtubules of the procentriole, is this the case? 

      We agree that this is an exciting question. However, in our expansion microscopy staining, we frequently observe that gamma-tubulin surrounds centrioles, corresponding to its role in the pericentriolar material (PCM). In our hands, we find it difficult to distinguish centriolar gamma-tubulin at the base of the A-tubule from gamma-tubulin within the PCM.

      -  In the signal elongation of SAS-6, STIL, CEP135, CPAP, and CEP44, would it be possible to quantify the length of these signals (with dimensions divided by the expansion factor for comparison with known TEM distances)? 

      We have quantified the lengths of SAS-6 and CEP135 in new Fig 4 – Supp 3 and Fig 4 – Supp 4.  

      -  The authors observe that centrin is present, but only as a SFI1 dot-like localization (which is another protein that would be interesting to look at), and not an inner scaffold localization. Can the authors elaborate? These results suggest that the distal part is correctly formed with only a microtubule singlet. 

      We agree with the reviewer’s interpretation that the centriole distal tip is likely correctly formed with only singlet microtubules, as both distal centrin and CP110 are present. We have added this point to the discussion (line 415).

      -  The authors observe that CPAP is elongated, but CPAP has two locations, proximal and distal. Is it distal or proximal elongation? Is the proximal signal of CPAP longer than that of CEP44 in the mutants? The authors discuss that the elongation could come from overexpression of CPAP, but here it seems that the centriole is not overlong, just the structures around the cartwheel.

      We thank the reviewer for this point. It is difficult for us to conclude whether the proximal or distal region is extended in the mutants, as our mutant centrioles lack a visible separation between these two regions. It would be interesting to probe this question in the future by testing whether subdomains of CPAP may be differentially regulated in our mutants.

      Reviewer #3 (Recommendations For The Authors):  

      It isn't apparent to me what was counted in Figure 1C. Were all centrioles (mother centrioles and procentrioles) counted? Where is the 40% in control cells coming from? Can this set of data be presented differently? 

      We apologize for the confusion. In this figure, all centrioles were counted. We have updated the figure legend for clarity. We performed this analysis in a similar way as in Wang et al., 2017 to better compare phenotypes.  

      Figure 2C and the text lines 182-187: The ultrastructural characterization of TEDC1 and TEDC2 suffers from the low quality of the TEDC1 and TEDC2 signals obtained post-expansion. In comparison with the robust low-resolution immunosignal, it appears that most of the signal cannot be recovered after expansion. Another sub-resolution imaging method to re-analyze TEDC1 and TEDC2 localization would be essential. The same concern applies to Figures 2 - Supplement 2 and 3. Also, Figure 2 - Supplement 2 and Supplement 3 do not seem to be cited. 

      We thank the reviewer for these recommendations. As also mentioned above, we used an alternative super-resolution approach, a Yokogawa CSU-W1 SoRA confocal scanner (resolution = 120 nm), and found that TEDC1 and TEDC2 localize to procentrioles and the proximal end of parental centrioles (Fig 2 – Supplementary Figure 1a, b). Second, we used a recently described expansion gel chemistry (Kong et al., Methods Mol Biol 2024) combined with Abberior Star red and orange secondary antibodies. This technique resulted in robust signal at centrosomes and in the cytoplasm and indicated that TEDC1 and TEDC2 localize near the centriole walls of procentrioles and the proximal region of parental centrioles, near CEP44 (Fig 2 – Supplementary Figure 1c, d). These stainings complement and support our initial observations (Fig 2C, D) and we have edited the text to reflect this (lines 157-163). We have also removed the supplementary figures that were uncited in the text.

      TUBD1 and TUBE1 form a dimer and TEDC2 and TEDC1 can interact. Any speculation as to why TEDC2 does not pull down both TUBE1 and TUBD1? 

      We apologize for the confusion. TEDC2 does pull down both TUBE1 and TUBD1 (Fig 3D, pull-down, second column, Tedc2-V5-APEX2 rescuing the Tedc2<sup>-/-</sup> cells pulls down TUBD1, TUBE1, and TEDC1).  

      Figure 4A and B. The authors use acetylated tubulin to determine the length of procentrioles in the S and G2 phases. However, procentrioles are not acetylated on their distal ends in these cell cycle phases (as the authors also mention further in the text). Why has alpha tubulin not been used since it works well in U-ExM? The average size of the control G2 procentrioles seems too small in Figure 4A and not consistent with other imaging data (for instance, in Figure 4 - Supplement 1C, CP110 and CPAP staining). There is no statistical analysis in Fig 4A.  

      We have added two supplementary data figures (Fig 4 – supp 3 and Fig 4 – supp 4) in which we co-stain control and mutant centrioles with alpha-tubulin. We found that acetylated tubulin correlates well with overall tubulin signal in all mutants. We have added statistical analysis to the figure legend of Fig 4A.

      Lines 260 - 262: "These results indicate that centrioles with singlet microtubules can elongate to the same length as controls, and therefore that triplet microtubules are not essential for regulating centriole length." It is hard to agree with this statement. Mutant procentrioles show aberrantly elongated proximal signals of several tested proteins. In addition, in lines 326 - 328, the authors state that "Together, these results indicate that centrioles lacking compound microtubules are unable to properly regulate the length of the proximal end."  

      We thank the reviewer and have clarified the statement to state that these results indicate that centrioles with singlet microtubules can elongate to the same overall length as control centrioles in G2 phase.  

      Line 353: The authors suggest that elongated procentriole structure in mitosis may represent intermediates in centriole disassembly. Another interpretation, more in line with the EM data from Wang et al., 2017, would be that these mutant procentrioles first additionally elongate before they disassemble in late mitosis. The aberrant intermediate structure concept would need further exploration. For instance, anti-alpha/beta-tubulin antibodies could be used to investigate centriole microtubules.  

      We apologize for the confusion and have edited this section for clarity (lines 341-343): “We conclude that in our mutant cells, centrioles elongate in early mitosis to form an aberrant intermediate structure, followed by fragmentation in late mitosis.”

      References need to be included in lines 122, 277, 279. 

      We have added these references.

      Line 281: Add references PMID: 30559430 and PMID: 32526902.  

      We have added these references (lines 265-266).

      Line 289: "Moreover, our results suggest that centriole glutamylation is a multistep process, in which long glutamate side chains are added later during centriole maturation." This does not seem like an original observation. For instance, see PMID: 32526902.  

      We have added this reference (lines 273-274).

    1. Who would not sing for Lycidas? He knew Himself to sing, and build the lofty rhyme.

      Milton expresses sorrow not just for Lycidas but for the loss of a fellow poet. When he asks, "Who would not sing for Lycidas?", it’s not just rhetorical, it shows his own deep sense of obligation to memorialize his friend through poetry. This suggests that writing the elegy is both a duty and a way to process his own grief.

    1. Third, literacy has been described as a situated, sociocultural practice that is embedded in and shaped by social and cultural contexts (Barton, Hamilton, & Ivanič, 2000). And fourth, children create syncretic literacies when they draw on literacies from school, home, popular culture, the Internet, and religious and other community settings to create new forms and practices. Often, they blur the boundaries between these as they take texts and practices from one place to reinvent in another (Genishi & Dyson, 2009; Gregory, Volk, & Long, 2013; Volk, 2013).

      I think this idea is important because it shows that literacy isn’t just about reading and writing—it’s also shaped by people’s backgrounds, experiences, and communities. Learning doesn’t happen in isolation; it’s influenced by culture, language, and daily life. This makes me think that schools should recognize different ways students engage with literacy instead of just focusing on traditional reading and writing skills. Understanding literacy as a social practice could help make education more inclusive and meaningful.

    1. which they built and completed a fort named “Orange” with 4 bastions, on an island by them called Castle Island

      It's not a very important point of observation, but I just found "Fort Orange" to be an odd name. Apparently, it's named after the royal family of the Netherlands (House of Orange). I assume that the Dutch settlers deciding on this name ties into their goals to expand trade in the region, as discussed in the previous chapter.

    1. The first joke reminds us that being overweight and having high cholesterol is normal now because the average American has these characteristics.

      Just because a lot of things have been normalized, it doesn't mean that they are right. Being overweight is normal as long as it doesn't affect your health and life. On the other hand, having high levels of cholesterol is a risk factor for a wide variety of diseases that can lead to death. Therefore, high cholesterol levels should be taken seriously and interventions should be done. I understand that some people might prefer a non-pharmaceutical approach, which is perfectly fine as long as it's followed correctly. Exercise, balanced diets, and avoiding other dangerous behaviours are just some of the non-pharmaceutical interventions that can have an impact on decreasing these levels. This is assuming that the cholesterol level being spoken about is LDL.

    1. This passage highlights how the Black struggle for human rights has not only advanced Black communities but has also contributed to the broader humanization of the U.S. As a Latino, I see parallels in our own fight for civil rights and recognition. Just like Black activism shaped the country, Latinx movements—like the Chicano movement or immigrant rights struggles—have pushed for justice and inclusion. It’s important to revisit past victories, but also to reflect on the challenges that remain.

    1. Ms. López respects Yamaira's translanguaging space and acknowledges that even though the class is officially in English, Yamaira has opened a translanguaging space that has transformed the class. Latinx bilinguals, who make up 75% of this middle school, have begun to understand that their translanguaging is a resource, not a hindrance, for reading deeply about history and other content. This understanding is now also available to students who speak languages other than English and Spanish, as well as to African American students. The class begins to understand that the way they use language and what they know is most important in making sense of reading any text.

      This paragraph shows how important it is for teachers to support bilingual students instead of forcing them to stick to just one language. Ms. López understands that language is not just a rule to follow but a tool for learning. By letting Yamaira use Spanish, she makes history more accessible and meaningful. I also think it’s great that this doesn’t just help Yamaira—it changes the whole class. It proves that when students are allowed to use their full language skills, they can actually contribute more, not less.

    1. My new classmates, I thought with excitement. I was a bit dismayed that they didn’t pay any attention to me. They didn’t even look at me. I was sure I had an attractive appearance that day, but those girls didn’t seem to notice it. Perhaps I was deluding myself.

      The thought just never comes into his head that maybe people don't stare at others because they're also shy and there are a bunch of new people around, or that it's normal, no matter how you look, for people not to pay attention to you unless you engage them. His ability to rationalize was so low, I think, because his fear of shame was so intense: the worry that his fears of inferiority were really true. He just so easily collapsed into vulnerability.

    2. When my classes lined up for the final exams, everyone had a group to socialize with while I stood on the side, alone. Everyone must have thought I was a complete loser. Thank goodness it was the last day. The people in those classes angered me to no end. That was the last time I would ever see that college. On the drive home, I cried to myself as I listened to music on the radio, as I always did. I failed to get the life I wanted at Moorpark

      He has such a brutal pattern of not socializing, not putting in effort, avoiding stuff, and then using the subpar result to humiliate himself with…

      He was trapped in this self-destructive loop because his false self didn't allow him to acknowledge his own role in his suffering. Vulnerable narcissism creates a deep split: on one hand, he saw himself as special and deserving of effortless admiration, but on the other, he felt completely worthless and humiliated when that admiration didn't come. To admit that his isolation was partly his own doing would mean facing the painful reality that he wasn't superior or owed automatic acceptance, something his false self couldn't tolerate. Instead, he stayed in a cycle where he avoided effort, set himself up for failure, and then used that failure as proof that the world was cruel and he was a hopeless victim. What was missing was emotional responsibility: the ability to see that his pain wasn't just inflicted on him by others, but also reinforced by his own avoidance and expectations. But his narcissism likely blocked that realization, because accepting it would have shattered the fantasy that he was meant for greatness but unfairly denied it.

      His only identity at that point was "it's not that I failed, I was robbed." If he truly accepted he wasn't "robbed," he would have had to face the unbearable "truth": that he wasn't inherently special, that no "cosmic injustice" had singled him out (even the universe didn't notice him or need him for anything; utterly worthless), and that his isolation was, at least in part, a result of his own actions (or inaction). This would have shattered his entire identity, leaving him with nothing to cling to. Without the belief that he was a victim of an unfair world, he would have had to confront the terrifying reality that he was just ordinary: that he lacked social skills, had put in no effort, and wasn't owed admiration or love just for existing. For someone whose entire self-worth and sense of existence depended on feeling superior, this realization would have been annihilating. Instead of processing his role in his isolation, his mind rejected it completely, reinforcing the victim narrative to preserve what little sense of self he had left.

      Edit: no- this horrific reality of worthlessness is not the “truth.” The truth is Elliot had a true self that should’ve developed and he could’ve uncovered parts of it that would be very unique and very worthy of attention and love. The reason he felt like “nothingness” is because the false self had never allowed him to develop.

    3. Moorpark College was supposed to be a place of hope for me, but it turned into a place of despair, just like everything else. I was invisible there. Nobody knew I existed or cared who I was.

      Even these days, I keep falling back into this old mentality, and I have to wrench myself out of it. The truth is, I don't put myself out there socially because I'm scared, or I don't know how, or I don't want to risk humiliation, so I sit, and then I get bitter and even darkly vengeful and hateful when I see other people bonding or living life. "I hate everyone for not recognizing me." (If I get deep enough into the mentality, I suddenly forget that the reason is I don't put myself out there socially, and I just feel blindly enraged that people don't notice me when they should. The false self dissociates me from the social issue, so then I'm just bitter in a void.) I think to myself, "They would never do that with me." I switch from thinking "It's because I'm nothing, I have nothing to look at" to "They're beneath me, they're so boring and normal, and they could never comprehend a being like me. I'm beyond human," and I flip-flop between these feelings. But all of this certainty is a way to ward off the fear of a more personal humiliation if I actually try. So I've been working in therapy on healthy ways to not immediately assign situations to my sense of identity or worth, even if they go bad. However, if I DON'T immediately assign new situations to my identity, I feel like I have NO identity and experience existential collapse, because the false self only "allows" me to have an identity if it's externally based… It's such a mess…

      (The false self only allows you to form an external identity because it's controllable, editable, and shiftable, and it can be manipulated to get specific reactions from people. All of this feels very safe. Allowing the self to be built out of authentic pieces makes it too rigid to get "positive" responses in "every" situation (you'll be less pleasing to more people), and it also feels incredibly dangerous because of the severe trauma that happened the last time I was authentic, as an infant. The false self basically has a PTSD response when it thinks anything authentic is being exposed. It's also scary feeling so undeveloped and empty underneath; you feel like you could never capture attention or stand out or meet any needs.)

    4. I was very perplexed as to why he didn't feel any anger towards girls for denying him sex. He should be just as angry as I am. I supposed he didn't have a very high sex drive, or he was just a generally weak person. To be angry about the injustices one faces is a sign of strength. It is a sign that one has the will to fight back against those injustices, rather than bowing down and accepting it as fate.

      Giving in = weak. This mentality could explain why Elliot felt that if he ever backed down from his tirade once it started, he would have to face the ultimate fear that he was utterly worthless. It's true that fighting to prevail is strength, but he couldn't see he was fighting the wrong battle, a battle that would lead him nowhere, when he could've been spending his energy fighting the real fight. Of course, someone would have had to explain to him what was happening to him.

    5. My faith that I could write an epic story that would make me rich soon collapsed. I read so many articles online of the chances that a screenplay would be made into a movie. I also saw that most writers of even the highest budget films didn’t make as much as I thought they did... Definitely not enough to live on for the rest of their life. I also thought, with a lot of despair, of the time that it would take to achieve such a goal. Most bestselling authors or screenwriters didn’t become millionaires until they were well into their forties or fifties. I didn’t want to wait until I was forty years old to lose my virginity! The thought of spending the next twenty years working hard every day for a chance to make a million or two filled me with revulsion. By the time I’d become a millionaire from doing that, I wouldn’t even be able to get hot young girls because I’d be too old. I decided that writing was not my path to salvation, and I abandoned the idea completely.

      Oh my god… this is literally the exact thought process I had. So I wanted to be a famous popstar, but as time went on and I got older I realized that window was closing, so I dropped the fantasy because I wouldn't have been able to have the adoration of being YOUNG, desired, and talented, and it felt pointless. I didn't want to be old and just making music in some café. It's also interesting how vulnerable narcissists seem so much closer to reality and so much more aware of the work that it would take and all the things that could go wrong, but that awareness also is what causes them to freeze and keep dropping their ideas and doubting themselves. Grandiose narcissists are so blindly certain their future will work out that some of them actually end up pulling something off from sheer confidence, even if their success is usually short-lived because their disorder ruins it.

    6. The problem was that most of the jobs that were available to me at the time were jobs I considered to be beneath me. My mother wanted me to get a simple retail job, and the thought of myself doing that was mortifying. It would be completely against my character. I am an intellectual who is destined for greatness. I would never perform a low-class service job.

      HAHAHAHA oh my godddd. OK, every narcissist I've known has struggled with this, even me, but I ended up biting the bullet and getting a retail job and it actually taught me so much. I'm laughing because it's just so damn tragic and stupid and egotistical, and yet there's so much pain; what a mess. On the surface, it looks like just entitlement. But it's a lot deeper. What ended up helping me not feel humiliated for working retail was knowing that even the great people had to climb the ladder to reach the heights; it doesn't reflect their worth at all. Even the greatest people with the most intelligence have to put in work, and even menial labor, to get opportunity.

      A huge distortion in NPD is taking any immediate new "part" of your life and having it immediately define your entire identity. Since your identity is external, anything that comes into your world and is connected to "you" that seems subpar can make you instantly feel worthless. It can also cause huge fears that this is all you will ever be worth, all you could ever amount to: that even with your best effort you'll end up with an ordinary life where you barely made an impact. You will always be outshone by brighter stars, always overlooked for better people. Both humiliating and terrifying. If you can't make a mark on the world, you can't control your environment, and if you can't control your environment, you can't feel safe. If you can't stand out, how can you be seen in the crowd? How can you ever make up for all of your starvation? How will you ever get your needs met?

    7. Without hope, I just couldn’t go on any longer. I needed to feel hope. Hope for the future, hope for a better life. Upon feeling this, I realized that perhaps it is possible for me to have the things I desire; to have a great social life again, to have a girlfriend, to have sex, to have all of the pleasures I’ve desperately craved for so long. It was refreshing.

      What we’re about to witness is one of the saddest things. Elliot will fluctuate between hope and despair for quite a while, trying fantastical solutions that don’t work and aren’t connected to reality until he finally loses all hope without any real help.

      He will then snap into malignancy where he just wants to cause suffering and destroy everything because he was never allowed to be a part of the world, and in his mind, no one notices him or cares he’s dying as they flaunt and tease him daily with everything he desperately needs. It feels like you’re starving to death watching everyone else eating food and no one cares. That’s enough to cause utter hatred and coldness. Every human has basic needs- belonging, emotional connection, love, being seen and accepted, safety, self esteem. NPD starves you of all.

      It’s not to say that Elliot’s tragedy was the responsibility of the people around - nor should they have been expected to go out of their way to cater to him. It was the responsibility of the mental health system to help him integrate into society, help him understand what was happening to him and give him any fighting chance.

    8. I looked around me and saw lots of young couples holding hands and groups of good looking teenage boys and girls walking together and having fun on their Saturday night out. I saw all of those teenagers enjoying their pleasurable lives together, while I was all alone. They were enjoying everything I couldn’t have. I was filled with intense anguish, and I quickly ran all the way back to father’s house with tears pouring down my cheeks. Once I got home I had a breakdown and cried for hours and hours into the night.

      I relate so deeply… I’ve just dissociated from it and am waiting for my next life honestly. But this is my life, even right now writing this. I just watch everyone else live and get to enjoy everything that I can’t have. It’s been like this almost my whole life. (Luckily, I am in therapy so hopefully I will figure something out).

    9. I would walk to the mall and sit on the balcony overlooking the food court nextto the AMC theatres. There I would see all of the young couples lining up to see a movie, and I boiled with hatred. During father’s week, I walked to the Calabasas Commons nearby, and sometimes I rode my bicycle. I also walked up the hill near my father’s house to the Overlook. I spent a lot of time up there, contemplating about my life and fantasizing about becoming powerful enough to punish everyone I hate.

      This is a great description of vNPD, especially the contemplation and rumination. People always wonder why Elliot didn’t just stop obsessing over couples and sex and focus on hobbies or a life goal. But once again, the false self is the only self Elliot has access to. And all it does is obsess over affirmation or how to get affirmation to feed itself and make you feel real or exist. Even if you just sit there without any supply, it’s not like the true self suddenly starts showing itself, and you find all these authentic things to want and think about. You’re usually devoid of authentic interests and opinions about the world -because the false self blocks anything from forming. All that’s left to do is obsess about how to “get your needs met.” It’s the only thing that has any sort of life to it.

      Here we can also see Elliot starting to slip into a malignant narcissistic thought pattern. mNPD usually starts when the narcissist feels NPD (gaining safety and existence through admiration) isn’t working to keep them safe and the person shifts towards power.

    10. I spent more time studying the world, seeing the world for the horrible, unfair place it is. I then had the revelation that just because I was condemned to suffer a life of loneliness and rejection, doesn’t mean I am insignificant. I have an exceptionally high level of intelligence. I see the world differently than anyone else. Because of all of the injustices I went through and the worldview I developed because of them, I must be destined for greatness. I must be destined to change the world, to shape it into an image that suits me!

      Enter grandiosity as a defense with worsening trauma. He’s coping with existential terror and meaninglessness and a lot of shit I so deeply relate to and it looks like this defense finally swoops in and saves him a bit. He must have felt relief finally thinking it’s the WORLD that’s got it wrong and he’s actually the worthy one, above them all and here to change the world to something fair and “how it should be.” I can sense the relief.

    11. sex should be outlawed. It is the only way to make the world a fair and just place. If I can’t have it, I will destroy it.

      People always focus on how entitled Elliot seemed for demanding sex or wanting to legit destroy those that have it, and how "other people get rejected or stay a virgin for a while but THEY don't do this."

      But they don’t understand the deeper pathology, which needs help. What Elliot REALLY wanted was a return to a world that was simple enough where he could control it, and receive positive outcomes. He wanted to be able to feel a part of the world, and the only time he’d been able to was before social dynamics got more complicated, including sex. He felt exiled from the world and unable to participate or know how to be seen and so he was basically starving while watching everyone else eat. His only identity inside was a false self, which is a hollow construct that only feels real with social affirmation. That meant that if he couldn’t get a functioning mask and be fed socially, he would essentially feel like he didn’t exist at all. The false self covers a core emptiness that has no uniqueness or spontaneity. That means unless the person takes others’ traits, they feel they have nothing special to offer. If those traits aren’t seen and validated, the person literally feels like an empty ghost (think No Face from Spirited Away) who is forced to watch everyone else get seen and admired because they have nothing to show, nothing that stands out. You feel like you literally have nothing for anyone to look at or love. You feel utterly replaceable and humiliated. It’s a profound, all-encompassing worthlessness that can only be experienced with such an empty core.

      Elliot feels like he’s been denied identity and worth and even existence by being denied sex because sex is final proof you are “superior” and finally real, included, cool, admired, adored. If you can’t stand out and shine the brightest, you’ll have to constantly fear being replaced, discarded for something better and humiliated, and left to fall apart existentially.

      It's like he sees sex as the key to being able to be a part of life safely, or even to exist in any meaningful way.

      Elliot's NPD ensures he CAN'T feel like he exists or has any worth at all unless he gets these external things and gets proof of his own existence. If he tried to draw upon any internal interests, hobbies, or thoughts, he'd likely find emptiness, as the false self is entirely obsessed with getting supply and it's the ONLY self accessible. So getting this external stuff feels like life or death. I hope this makes sense of why sex is so dire for Elliot. It's a mental health issue. When supply is cut off, the narcissist feels like they are disintegrating. This explains why Elliot's reaction to rejection was so extreme; it wasn't just disappointment, it felt like an existential annihilation and removal of any basic needs.

    12. It was at this time that I was just beginning to realize, with a lot of clarity, how truly unfair my life is. I compared myself to other teenagers and became very angry that they were able to experience all of the things I’ve desired, while I was left out of it. I never had the experience of going to a party with other teenagers, I never had my first kiss, I never held hands with a girl, I never lost my virginity. In the past, I felt so inferior and weak from all of the bullying that I just accepted my lonely life and dealt with it by playing WoW, but at this point I started to question why I was condemned to suffer such misery.

      This is totally valid, and Elliot is right: his life was unfair. But he can't connect that it's due to severe mental health issues and social skill issues. If there were a system in place to spot and help people like us, we could have been reintegrated into society.

    13. I didn't care about having a social life at that point. All I wanted to do was hide away from the cruel world by playing my online games, and Independence High School gave me the perfect opportunity to do just that.

      This could easily be mistaken for PTSD, anxiety, or even avoidant personality disorder, but what you need to understand is that Elliot was hiding away in the fantasy world because it was the only place he could feel superior, powerful, and admired, and actually control how he was perceived, living vicariously through characters. His narcissistic pathology could easily have been missed if he had been taken to a psychiatrist for analysis. Taking his childhood history into account, all he has ever done for hobbies is try to adopt the popular kids' hobbies. He has clear issues having any identity of his own. But since he says things like he didn't care about appearances anymore, people would not have clocked his NPD, because it's vulnerable NPD.

    14. , I broke down and cried in front of my mother, begging her not to make me go to that horrible place. I was so scared that I felt physically sick. I continued crying in the car on the way there, and my mother gave in. Instead of taking me to school, we went to the café at Gelson's in Calabasas where we had a big talk. I tried to explain how much I was suffering there. She just could not take me to school after that. When we were finished with Gelson's, she drove me to my father's house and told him about what happened. They agreed to take me out of Taft.

      I see Elliot's parents are trying here, and it made me sad. So it's more likely that they just weren't aware that they were being emotionally neglectful. They did try to help him. This reminds me of my own parents. They had such good intentions, but they just didn't know how to be supportive emotionally.

    15. What kind of horrible, depraved people would poke fun at a boy younger than them who has just entered high school? I thought to myself.

      I think Elliot's mentally young age made these traumas hit a lot harder and caused a lot more damage. As I said, this all eventually culminated in an utter lack of empathy for humanity. And of course, he didn't have a healthy ego or any identity to be able to take blows healthily. I always say that the reason rejection hits cluster Bs so hard and shatters them so deeply, while other kids have trauma but don't escalate to such extremes, is BECAUSE there's no foundation to the self: you only exist through the eyes of others, and you already have early traumas that made the world seem way too scary and made you feel fundamentally worthless, to your core. It's also understated how hard it is to navigate the world without a self-concept; you don't know how to feel seen or safe unless you're able to constantly analyze and control your environment and stay "on top". Elliot had trouble analyzing and navigating his environment, which would've caused his fear to be tenfold and his need for control to eventually increase to malignant levels.

  3. www.ucpress.edu www.ucpress.edu
    1. Administrators may feel rhetorical or social pressure to respect the values of community members in how they exert their otherwise absolute authority.

      sure, but this paints it as sincere: it’s not. It’s just so people can continue to offer their data, engagement, money, etc. A millionaire doesn’t care. The antithesis: you’re trying to do revolutionary work on a site that is owned by the ruling class, which does not wish to have such a revolution: a clash of class interests.

    1. Could there be value, though, in treating an AI system as more of a partner—something or someone with whom we develop a relationship—rather than merely as a tool?It all depends on what you mean by “relationship.” If you’re a woodworker, you might develop emotional associations with a set of chisels you’ve used for years, and in some sense that’s a “relationship,” but it’s entirely different from the relationship you have with people. You might make sure you keep your chisels sharp and rust-free, and say that you’re treating them with respect, but that’s entirely different from the respect you owe to your colleagues. One way to clarify this is to remember that people have their own preferences, while things do not. To respect your colleagues means to pay attention to their preferences and interests and balance them against your own; when they do this to you in return, you have a good relationship. By contrast, your chisel has no preferences; it doesn’t want to be sharp. When you keep it sharp, you are doing so because it will help you do good work or because it gives you a feeling of satisfaction to know that it’s sharp. Either way, you are only serving your own interests, and that’s fine because a chisel is just a tool. If you don’t keep it sharp, you are only harming yourself. By contrast, if you don’t respect your colleagues, there is a problem beyond the fact that it might make your job harder; you do them harm because you are ignoring their preferences. That’s why we consider it wrong to treat a person like a tool; by acting as if they don’t have preferences, you are dehumanizing them.

      Thinking of Konmari and socks. Values embedded in an anthropomorphized relationships can extend beyond harm principle to the sock – it can encompass environmental concerns, attitudes toward repair.

  4. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. There might be some things that we just feel like aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies) We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly We might want a conversation or action that happens in one context not to be shared in another (context collapse) We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private We might have done or said something we want to be forgotten or make at least made less prominent We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private We might want to avoid physical danger from a stalker, so we might keep our location private We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad) When we use social media platforms though, we at least partially give up som

      This chapter gave a lot of insight into the complexities of privacy in our digital world. I found the section discussing how much of our "private" information is actually not as secure as we might think both eye-opening and alarming. The example about Facebook storing Instagram passwords in plain text stood out to me because it shows how companies often fail at safeguarding our most sensitive data, even when we're trusting them with it. It's easy to forget the risks we take when we freely share personal information on social media or other platforms, assuming that companies will protect us. This chapter serves as a wake-up call for individuals to take their privacy seriously and consider how much control they’re willing to give up in the digital age.

    1. Summary of DevTools FM Podcast with Juan Capa on Membrane.io

      Introduction and Background

      • Juan Capa is the creator of Membrane.io, a still-in-development platform for simplifying API automation and internal tooling.

      "Juan is the creator of membrane.io, a still-in-development platform for simplifying API automation and internal tooling."

      • He has a background in game development, having spent over a decade working on console, mobile, and web games.

      "I have a background in game development. I spent about 10 years, a little bit more than 10 years, working in game development."

      • Worked at Vercel on the CDN team after being hired through Twitter, then briefly returned to Zynga before joining Mighty under a program that allowed him to work part-time on Membrane.

      "I saw a tweet by Guillermo Rauch ... He hired me to work for Vercel ... I spent two years there as the lead in the CDN team."

      "Then I guess my last thing I did was join Mighty ... working on my startup but also working three days for them."

      • Now focusing on Membrane full-time and looking to onboard users soon.

      "So yeah, now I'm on Membrane 100%, and hoping that I can show it to the world and onboard some users in the coming week or two."

      Membrane: Concept and Vision

      • Membrane was inspired by game engines, where every entity is programmable and data is universally accessible.

      "In game development, you’re dealing with this Engine with this universe, and this universe is completely programmable."

      • Aimed at simplifying API automation and small-scale applications, particularly for personal automation.

      "It’s a place to write programs to build personal automation ... optimized for personal automation programs."

      • Membrane provides an abstraction over APIs, allowing users to interact with data and automate workflows through a graph-based system.

      "The key to Membrane is this whole concept of a graph that is the main thing that programs use to manipulate the world."

      • Designed to be highly accessible by integrating with Visual Studio Code and leveraging JavaScript/TypeScript.

      "The entire thing is built inside of Visual Studio Code ... The most used IDE is Visual Studio Code and the most used language is JavaScript."

      Durability & Orthogonal Persistence

      • Membrane implements "orthogonal persistence," ensuring program state is always durable.

      "I decided to start building what is sometimes called orthogonal persistence, which is this concept of a durable program."

      • Every Membrane program is an SQLite database, meaning all messages, state, and execution history are stored persistently.

      "Every Membrane program is actually just one SQLite database."

      • Programs execute with an event-sourcing model, where all inputs and outputs are first logged in SQLite before execution.

      "Every message that it receives, it first goes in the database and then it's processed."
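The "log first, process second" pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Membrane's actual schema or API: an `events` table stands in for its message log, and a simple counter stands in for program state. Because every input is durably recorded before any handler runs, the state can always be rebuilt by replaying the log.

```python
import json
import sqlite3

# Hypothetical event-sourcing sketch: messages are appended to an
# events table BEFORE any handler touches program state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

state = {"count": 0}

def handle(message):
    # 1. Durably record the message first.
    conn.execute("INSERT INTO events (payload) VALUES (?)",
                 (json.dumps(message),))
    conn.commit()
    # 2. Only then apply it to in-memory state.
    state["count"] += message.get("increment", 0)

def replay():
    # Rebuild state from scratch by re-applying the logged events.
    rebuilt = {"count": 0}
    for (payload,) in conn.execute("SELECT payload FROM events ORDER BY id"):
        rebuilt["count"] += json.loads(payload).get("increment", 0)
    return rebuilt

handle({"increment": 2})
handle({"increment": 3})
assert state == replay() == {"count": 5}
```

Since replay reconstructs state purely from the log, the same mechanism supports the time-travel debugging the episode describes: re-run the handler code against any prefix of the event history.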

      • Uses Linux’s soft dirty pages for memory tracking, making it highly efficient in persisting only changed memory states.

      "I use quickjs ... and there’s a constant in the Linux kernel called Soft Dirty Pages ... only serialize the pages that actually change."

      • Future improvements include optimizing serialization using WebAssembly’s linear memory model.

      "I’m saving more data than I should, so there’s even more optimizations I can do."

      Observability & Debugging

      • Membrane prioritizes perfect observability, logging every event to enable full program introspection and debugging.

      "If it’s not in the logs, it didn’t happen."

      • Allows time-travel debugging, replaying past states and executions.

      "You can go back to when that message was received and then run the code that was available back then."

      • Aims to support snapshot-based time travel for enhanced debugging.

      "The first version I’m gonna have of that type of time travel is going to be with a snapshot that is taken every hour."

      Membrane’s Graph Model

      • Membrane’s "graph" serves as a type-safe, unified interface for APIs.

      "Everything is a node, which you can think of as an object or a scalar (string, number, JSON type)."

      • Drivers enable API connectivity, converting external APIs into Membrane’s schema and providing a consistent interface.

      "The GitHub driver has a schema ... basically it mirrors the GitHub API as a Membrane schema."

      • Pagination is abstracted away, making API traversal seamless.

      "With Membrane, you have this object that’s one page, and a page has a reference to the next page."
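The page-as-graph-node idea can be sketched as follows. This is a hypothetical illustration, not Membrane's real driver API: a `Page` object exposes its items plus a lazy `next` link, so callers walk links instead of juggling cursors or offsets.

```python
# Hypothetical pagination-as-a-graph sketch: each Page node references
# the next Page, so traversal is just following links. pages_from
# chunks an in-memory list, standing in for a real paginated API.
class Page:
    def __init__(self, items, fetch_next=None):
        self.items = items
        self._fetch_next = fetch_next

    @property
    def next(self):
        # Following the link lazily "fetches" the next page.
        return self._fetch_next() if self._fetch_next else None

def pages_from(data, size):
    # Build a linked chain of pages over a list.
    def make(offset):
        if offset >= len(data):
            return None
        return Page(data[offset:offset + size],
                    lambda: make(offset + size))
    return make(0)

def all_items(page):
    # Callers never see cursors: just walk next links.
    out = []
    while page:
        out.extend(page.items)
        page = page.next
    return out

first = pages_from(list(range(7)), size=3)
assert all_items(first) == [0, 1, 2, 3, 4, 5, 6]
```

The design choice mirrors the quote: because the "next page" is itself a node in the graph, generic traversal code works for any paginated API the drivers expose.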

      • Users can mount different programs' graphs into their own, dynamically expanding their automation environment.

      "Your graph is basically the combination of all the graphs of all your programs."

      Chrome Extension & API Interfacing

      • Membrane includes a Chrome extension that recognizes API entities on webpages.

      "What it does is it asks Membrane, ‘Hey, do any of the programs under Juan’s account recognize anything on this page?’"

      • Future improvements will allow automatic driver installation when encountering unrecognized APIs.

      "Eventually, I can just offer you the option to install that driver with a click from the Chrome extension."

      • Currently requires users to provide their own API keys, but OAuth-based authentication is planned.

      "Right now, you have to bring your own keys."

      Cron & Automation Features

      • Membrane features built-in cron-like timers, which are stored in SQLite and visualized in the UI.

      "The SQLite database has a table called timers, and that table holds all scheduled actions."

      • Users can visually track when timers will execute and manually trigger actions for testing.

      "From Visual Studio Code, you can just hover on each timer and see how long until it fires."

      • Every timer execution is logged, ensuring full transparency in automation workflows.

      "If it’s not in the logs, it didn’t happen."
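The timer mechanism described in this section can be sketched with a plain SQLite table. The table and action names here are illustrative assumptions, not Membrane's actual schema: scheduled actions live in a `timers` table, each tick fires whatever is due, and every firing is appended to a log table, matching the "if it's not in the logs, it didn't happen" philosophy.

```python
import sqlite3
import time

# Hypothetical SQLite-backed timers: scheduled actions are rows, and
# every firing is recorded in a log table before the row is removed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE timers (id INTEGER PRIMARY KEY, action TEXT, fire_at REAL);
    CREATE TABLE log    (id INTEGER PRIMARY KEY, action TEXT, fired_at REAL);
""")

def schedule(action, delay_seconds):
    conn.execute("INSERT INTO timers (action, fire_at) VALUES (?, ?)",
                 (action, time.time() + delay_seconds))

def tick(now=None):
    # Fire (and log) every timer whose deadline has passed.
    now = now if now is not None else time.time()
    due = conn.execute(
        "SELECT id, action FROM timers WHERE fire_at <= ?", (now,)
    ).fetchall()
    for timer_id, action in due:
        conn.execute("INSERT INTO log (action, fired_at) VALUES (?, ?)",
                     (action, now))
        conn.execute("DELETE FROM timers WHERE id = ?", (timer_id,))
    conn.commit()
    return [action for _, action in due]

schedule("sync-github", -1)    # already due
schedule("send-digest", 3600)  # due in an hour
assert tick() == ["sync-github"]
assert conn.execute("SELECT COUNT(*) FROM timers").fetchone()[0] == 1
```

Keeping timers as ordinary rows is what makes the VS Code hover UI cheap to build: "how long until it fires" is just `fire_at` minus the current time, queried from the same database that holds everything else.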

      Potential for Expansion & Future Vision

      • Membrane’s approach is inspired by game development tooling, where objects and behaviors are always inspectable.

      "In game engines, you’re dealing with objects where you can see all their properties and control them."

      • Aims to provide a seamless developer experience, where APIs become interactable entities without custom adapters.

      "If you wanted to automate something with Twitter, you shouldn’t have to pre-install a driver."

      • Exploring self-hosting and open-source models to improve privacy and decentralization.

      "Self-hosting membrane is going to be a thing ... I think I want to make it open-source."

      • Could enable mobile implementations, particularly for interacting with on-device automation.

      "You could just access your Membrane graph from your phone."

      • Possibility of auto-generating API drivers from HAR files or OpenAPI specs.

      "There are ways to generate API specs from network traffic ... from that API spec, you can generate the driver."

      Conclusion

      • Membrane is a powerful tool aimed at making personal automation and API interaction seamless, leveraging game engine principles for maximum programmability.
      • It provides persistent execution, deep observability, and a graph-based API abstraction layer that simplifies working with external services.
      • With a focus on usability, it integrates tightly with VS Code and JavaScript while also offering innovative features like event sourcing, time travel debugging, and drag-and-drop API connections.
      • The future of Membrane includes open-source possibilities, mobile integrations, and potentially eliminating the need for manually defining API adapters.
      • It represents a new paradigm in developer tooling, where programs are durable, transparent, and universally programmable.
    1. Taking an attitude of skepticism would also mean asking what evidence supports the original claim. Is the author a scientific researcher? Is any scientific evidence cited? If the issue was important enough, it might also mean turning to the research literature to see if anyone else had studied it.

      Juliet De Leon: The idea that we can only understand or analyze a study within the context of collective knowing means that our understanding of research or findings is shaped by the knowledge that already exists in society or within a particular field. In other words, new information is interpreted based on what we already know as a group.

      Let’s take memory and eyewitness testimony as an example. For a long time, like most people, I believed in the absolute validity of eyewitness testimony. The idea was simple: if you saw someone commit a crime, then they must be guilty. This belief was rooted in the assumption that our memories are accurate and reliable reflections of the past.

      However, over time, our understanding of memory has evolved. We’ve learned that memory isn’t as reliable as we once thought. In fact, it’s incredibly malleable and prone to error. Studies on memory have shown that our recollections can be distorted by a number of factors, such as stress, the passage of time, or even leading questions. Eyewitnesses can remember things inaccurately, sometimes unknowingly, and these false memories can feel just as vivid and real as actual events.

    1. On a more benign level, while your parents may have told you that you should make your bed in the morning, making your bed provides the warm damp environment in which mites thrive. Keeping the sheets open provides a less hospitable environment for mites. These examples illustrate that the problem with using authority to obtain knowledge is that they may be wrong, they may just be using their intuition to arrive at their conclusions, and they may have their own reasons to mislead you. Nevertheless, much of the information we acquire is through authority because we don’t have time to question and independently research every piece of knowledge we learn through authority. But we can learn to evaluate the credentials of authority figures, to evaluate the methods they used to arrive at their conclusions, and evaluate whether they have any reasons to mislead us.

      Juliet De Leon: Authority is a big deal because, as kids, we’re taught to listen to our elders, and that tends to carry over when it comes to leaders. We often accept what they say without question. But the truth is, people in power are just as human as anyone else, and humans are flawed. There are tons of examples where leaders were just flat-out wrong—like how Tesla’s ideas were dismissed, the practice of slavery, and the whole Flat Earth Theory.

      Now, we live in a time where we’re hit with information all day long from social media, news outlets, podcasts, blogs, and more. AI-generated content is so realistic these days that it’s hard to tell what’s real and what’s fake. So, more than ever, we need to be on our toes, questioning, assessing, and verifying everything we come across.

    1. There are many design principles in broad use that are a bit more precise, even though they might not be universally good in all contexts:Simple. This is a design aesthetic that prizes minimalism and learnability. These can be good qualities, reducing how much people have to learn to use an interface and how long it takes to learn. But simplicity isn’t always good. Should moderation tools in social media simple? There’s nothing inherently simple about regulating speech, so they might need to be complicated, to reflect the complexity of preventing hate speech.Novel. In some design cultures (e.g., fashion design), the best design is the new design that pushes boundaries and explores undiscovered territories. Novelty is powerful in that it has the power to surprise and empower in new ways. It also has the power to convey status, because possession of new design suggests knowledge and awareness of the bleeding edge of human creativity, which can have status in some cultures. But novelty trades off against simplicity, because simplicity often requires familiarity and convention66 Norman, D. A. (1999). Affordance, conventions, and design. ACM interactions. .Powerful. This aesthetic values the ability of designs to augment human ability. Take, for example, a graphing calculator. These are exceedingly complex little devices with thousands of functions that can support almost any kind of mathematics. It’s certainly not simple or novel, but it’s extremely powerful. But power isn’t always good. Sometimes power leads to complexity that poses barriers to use and adoption. Powerful designs can also amplify harm; for example, powerful saved searches on Twitter enable trolls to quickly find people to harass by keyword. Is that harm worth whatever other positive might come from that power, such as saved time?Invisible. 
Some trends in design aesthetics value designs that “get out of the way”, trying to bring a person as close as possible to their activity, their information, and their goals. Designs that achieve invisibility don’t try to be the center of attention, but rather put the attention on the work that a person is doing with the design. Good example of designs that attempt to be invisible are the many intelligent assistants such as Siri and Alexa, which try to provide “natural” interfaces that don’t need to be learned, personalized, or calibrated. All of this may come at the expense of power and control, however, as the mechanisms we often use for invisibility are automated.Universal. The premise of universal design77 Story, M. F. (1998). Maximizing usability: the principles of universal design. Assistive Technology.  as something that all of humanity should be able to access, prizing equality over other values. For example, designing a website that is screen readable so that people who are blind can read it often constrains the type of interactivity that can be used on a site. What’s better: power and novelty or universal access? Maybe there are some types of designs that are so powerful, they should only be used by certain people with certain knowledge and skills. Of course, universal designs are rarely universal; all design exclude somehow.Just. The premise of design justice11 Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.  is the purpose of design should not be to amplify inequities and injustices in the world, but to dismantle them. This might mean that a design that ultimately serves the enrich and empower the wealthy (e.g., Facebook Ads) might be deemed worse than a design that helps dismantle an unjust system (e.g., a social media network for small-business loan networking amongst Black owned businesses)

      This reading talks about different ways to design things, like making them simple, new, powerful, hidden, fair, or useful for everyone. I agree that powerful designs can be both helpful and harmful, like how saved searches on Twitter can be used for good or bad. It made me think about how designers need to be careful about how their work affects people, not just how easy or exciting it is to use.

    2. There’s one critical aspect of critiques that we haven’t discussed yet, however. How does someone judge what makes a design “good”?In one sense, “good” is a domain-dependent idea. For example, what makes an email client “good” in our example above is shaped by the culture and use of email, and the organizations and communities in which it is used. Therefore, you can’t define good without understanding context of use.

      I wanted to touch on this specific section because I agreed with the approach, but more importantly, I believe this is the essence of what we have been learning to do in this class. Understanding what exactly is needed in a specific circumstance and context is what drives a good solution. This is why simple solutions such as the Macca Scoop can dominate their specified problems (for those who don’t know, the Macca Scoop is used to place a specific amount of chips, or fries, in the container, which improves efficiency and cuts costs dramatically, which is hilarious since it’s just a simple plastic thing). This is why I believe design and research go hand in hand: you need to understand what the problem needs for the context, especially once budgets and production get involved.

      Wicked Reading: I wanted to focus on Design and Technology on page 19. Specifically the section where he expresses systematic thinking. I think this insight is beyond useful since we as a society have begun to incorrectly define things we interact with daily such as technology. The issue with these definitions is that they affect how we approach and interact with technology. Although it seems small I can see how these definitions tend to cause impacts on design, and overall expansion of ideas since they focus on the wrong things such as product, instead of valuing how technology-based design all holds very important principles that should be maintained, and I agree with this since it reminds me of one of my favorite insights on how the methodology is usually more important than the content. It is about how you do it.

    3. One way to avoid this harm, while still sharing harsh feedback, is to follow a simple rule: if you’re going to say something sharply negative, say something genuinely positive first, and perhaps something genuinely positive after as well. Some people call this the “hamburger” rule, other people call it a “shit sandwich.”

      I really like the idea of balancing criticism with positive feedback. It makes a lot of sense because people are more open to suggestions when they don’t feel like they’re being attacked. I’ve definitely been in situations where harsh feedback made me shut down instead of actually listening. This approach makes it easier to hear the tough parts while still feeling encouraged. It’s a great reminder that critique should be about helping someone improve, not just pointing out what’s wrong.

  5. drive.google.com drive.google.com
    1. the newspaper, when you give a speech and give 'em hell, whenyou never stop believing that we can all be more than we are. Inother words, Love isn't about what we did yesterday; it's aboutwhat we do today and tomorrow and the day after.

      In the words of my therapist: “shoulda, woulda, coulda.” We cannot change the past. If we could have, we would have. All these “well, if we did this” or “if only the person in power did this thing.” THEY DIDN’T! And we can’t do shit about it. We must build. It’s never too late to start anything; you just have to start, you have to work and build now. So what if you didn’t do something in the past? You have NOW.

    1. : wild, complex with a chemistry that your body recognizes as the real food it’s been waiting for.

      This makes me feel like she is tasting more than just the fruit, like she is tasting the history and past that come along with the fruit.

      “They have a rising-tide-floats-all-ships mentality,” says Hayward. The higher up they are, the higher they can lift others. It’s not that unicorn employees don’t think about themselves, because they do; they just think about the entire group more.

      This is a great mindset to have in the workplace. As someone who has experienced having a ship-sinker on her team, I speak from experience when I say that mentality changes everything. Healthcare is stressful. Radiology departments have never had more work to do. Orders pile up, and someone who wants to just be on their own boat with their own problems will sink the department. You HAVE to lift others up and be a helper. In our jobs, people's lives can and do depend on it.

  6. Jan 2025
    1. In many ways, being critical is easier than being generative. Our society values criticism much more than it does creation, constantly engaging us in judging and analyzing rather than generating and creating things. It’s also easy to provide vague, high level critical feedback like “Yeah, it’s good” or “Not great, could be improved”. This type of critique sounds like feedback, but it’s not particularly constructive feedback, leading to alternatives or new insights.

      I totally agree that giving constructive feedback is not an easy thing. Giving a useful critique often requires some professional knowledge, and feedback like “it’s good” or “not great” doesn’t help people see how to improve their current solutions, learn what others actually think, or test whether the design is user-friendly enough. My perspective has changed: before taking this course I might have given vague feedback too, but now I understand the importance of giving constructive feedback.

    2. Our society values criticism much more than it does creation, constantly engaging us in judging and analyzing rather than generating and creating things. It’s also easy to provide vague, high level critical feedback like “Yeah, it’s good” or “Not great, could be improved”. This type of critique sounds like feedback, but it’s not particularly constructive feedback, leading to alternatives or new insights.

      The point about vague feedback like “It’s good” or “Could be improved” resonates with me because such comments lack depth and don’t provide any room for growth. I find this perspective useful because it highlights the importance of offering constructive, specific feedback that fosters creativity and problem-solving. It also reminds me to be more intentional in my own feedback, focusing on generating ideas rather than just critiquing.

    1. “One of the key special qualities of a unicorn employee is that they know it isn’t all about themselves, it’s about the team as a whole,”

      Within healthcare, it isn’t just about one person but the entire team. Imaging departments work together to achieve a smooth workflow for one another. For example, if XR has a patient with CT orders, someone will call and ask if they want them after. This allows great teamwork for a more effective imaging time. At times it may feel like you might be the only one doing things, but always remember that if it wasn’t for the entire team, things wouldn’t get done as fast. This is why our departments are so big, and they need us all to work together. As Patrick Mahomes would say, it’s not a few players that got us that win but the entire team.

    1. Yes, some people may be better at these skills than others, but that’s only because they’ve practiced more. So start practicing.

      Yes, I see this happening in my life a lot. Some people are already good at what I’m trying to do. Some may even be younger than me, and I often wonder if they’re smarter. But no, it’s simply that they have more practice. I might take five hours to complete a project while they only need one, but I’ve realized I’m just making up for the time they’ve already put in. They’ve already invested those extra four hours at some point, and now I’m playing catch-up. In the end, though, we’re all going to reach the same destination.

    2. However, most societies do not value creative thinking and so our skills in generating ideas rapidly atrophies, as we do not practice it, and instead actively learn to suppress it

      I think this point was pretty interesting. It reminds me of how, in class, we talked about how when brainstorming ideas we need to let them flow out unfiltered, because if we filter our ideas, we may lose interesting ones that could contribute to the bigger picture of how we want the project to look. Plus, stepping away for a bit and looking back at an idea later can also surface insights that make it useful, as opposed to just calling it a dumb idea and forgetting about it.

    3. First, I just argued, people are inherently creative, at least within the bounds of their experience, so you can just ask them for ideas. For example, if I asked you, as a student, to imagine improvements or alternatives to lectures, with some time to reflect, you could probably tell me all kinds of alternatives that might be worth exploring. After all, you have more experience than nearly anyone sitting through lectures that haven’t met your needs, causing you to fall asleep, be bored, or be confused.

      This part of the reading is interesting because it shows how personal experience can drive creativity. I agree that students, having sat through countless lectures, are in a great position to suggest meaningful improvements. It’s a good reminder that our frustrations and experiences can lead to valuable ideas, even in areas we might not think we’re experts in.

    1. Anthology, Blackboard by Anthology

      Exhaling deeply while grading my 47th essay of the evening, coffee gone cold beside me

      Oh, Blackboard... bitter laugh

      Let me tell you about Blackboard through the fragments of my daily struggle, through the prism of 4/4 teaching load spread across three campuses just to make rent:

      Each semester begins the same— Login attempts like scattered prayers Dashboard a maze of broken promises Features that mock with their corporate sheen While I upload syllabi at midnight Again and again and again

      They sold us dreams of streamlined workflows But my grades still vanish into digital void Support tickets float unanswered Like autumn leaves in administrative wind While students message: "Professor, I can't find..." And I drown in workarounds

      The cost? Oh, the cost... Not from my adjunct's pittance But I watch department meetings Where deans speak of budget constraints Yet somehow there's always money For another Blackboard module Another upgrade Another promise

      Canvas beckons from across the quad Where my tenure-track colleagues reside In their technical paradise While we contingent faculty Navigate this labyrinth of legacy code Because migration costs too much For our satellite campus

      Do you know what it's like To build a course shell from scratch Four times a year Because "course copy" fails While grading deadline looms? To explain to students Why their mobile app won't load?

      Rubbing temples, reaching for cold coffee

      But tomorrow I'll log in again Because what choice do we have? When you're paid by the course You dance to the tune they play Even when the music stutters Even when the platform breaks

      Don't talk to me about "bad actors" Talk to me about survival About making do About teaching despite, not because of These digital walls we're given

      ...I should get back to grading. These essays won't grade themselves, and the Blackboard SpeedGrader is down. Again.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      Summary: It has been known for many years that some peroxisomal proteins are imported by the major peroxisomal protein import receptor Pex5, which recognises the C-terminal targeting signal PTS1, despite either lacking a PTS1 or having a blocked PTS1. Some proteins are also able to 'piggyback' into peroxisomes by binding to a partner which possesses a PTS. Eci1, the subject of this study, is such a protein. This manuscript identified a PTS1-independent, non-canonical interaction interface between S. cerevisiae PEX5 and imported protein Eci1. Confocal imaging was used to observe the PTS1-independent import of Eci1 into peroxisomes and to establish dependence on Pex5 even in the absence of its piggyback partner Dci1. The authors purified the Pex5-Eci1 complex and used cryo-EM to provide a structure of the purified PEX5-Eci1 complex. In general, this manuscript is well written and easy to read.

      Major points

      Most of the experiments presented are well-designed and accompanied by appropriate controls. However, please state how many times each experiment was repeated and how many biological replicates were used in the analysis. The authors should also consider the following suggestions to substantiate their conclusions:

      Figure 1A: Include full-length Eci1 with an N-terminal fluorophore, Eci1 PTS1-deletion with N-terminal fluorophore, and the PTS1 deletion with a C-terminal fluorophore, to control for any disturbance of targeting by the C terminal NG tag.

      Figure 1C: Confirm the Eci1 and Dci1 levels (if an antibody is available for the latter) by western blot. It is difficult to compare expression levels when comparing just a small number of cells in the microscope. Western blot would give a more robust evaluation of protein levels and help corroborate the claim that Eci1 expression is decreased in the absence of Dci1 if the authors wish to stand by this conclusion.

      Figure 2: confirm the deletion and overexpression of PEX9, PEX5, and PEX7 by western blot of the relevant strains. The production of these strains is not described in the manuscript. If they have been previously described this should be referenced if not it should be included.

      Figure 2: Validate these strains by checking import of a canonical PTS1 and canonical PTS2 and pex9 dependent protein to ensure they function as they should, unless these strains have been published elsewhere in which case their characterisation can be referenced.

      Figure 3: The gel should include a standard of a known amount of the lysate used in the pull-down to enable a semi-quantitative estimation of the amount of Eci1 protein captured by Pex5 with and without its PTS1. Also include Eci1 with a C-terminal fluorophore to be comparable with the in vivo data in Figs 1 and 2. A control with no Pex5 would be useful to assess background. A full Coomassie-blue-stained gel (not a western blot) is required to demonstrate the direct interaction, since with a western blot it cannot be excluded that other proteins bridge the interaction, as this is a pull-down from lysate, not purified proteins. OPTIONAL: Interestingly, the surface on Eci1 which binds Pex5 is where CoA binds in the active enzyme. Would CoA compete for binding to Pex5? (It could be added to the pull-down experiment.)

      Figure S2: The complex between Pex5 and Eci1 is solved by cryo-EM. Eci1 is hexameric; usually one, but sometimes two or three, Pex5 molecules are bound to the complex. A size-exclusion chromatography figure with calculated molecular weight is required to support the stoichiometry. A native gel to show the complex, as well as a denaturing gel (using the complex) to show the individual proteins, would be beneficial.

      Figure S9: Would Eci1 compete with Dci1 for binding to Pex5, since they share highly conserved interfaces? If so, why did the deletion of Dci1 impair Eci1 localisation? Or is this just reduced expression in the dci1 deletion background? (See point 2.) This seems counterintuitive/contradictory, so please comment.

      OPTIONAL: As the authors acknowledge, this work is in vitro. It would have been interesting to examine the role of this interface in vivo by mutating one or more of the residues in Eci1 identified as being important for the interaction. Granted, mutation can affect the folding of the protein, but the binding region is on the surface so it may not, and this can be readily checked, e.g. by enzyme activity or limited proteolysis.

      OPTIONAL: Similarly, it would have been interesting to see whether mutating the residues of Pex5 involved in the interface affects the import of cargoes other than Eci1, or whether reciprocal mutations in Pex5 and Eci1, e.g. switching charges, could restore an import defect.

      OPTIONAL: If points 8 and 9 aren't possible, could a co-evolutionary analysis of the interface residues provide further independent evidence for their functional importance? The authors have looked at conservation of residues in Eci1, but this could be extended to a co-evolution analysis.

      Minor points

      Figure 1C and throughout the manuscript: state clearly whether the same confocal settings were used when comparing fluorescence intensities of different images/samples.

      Figure S2B: Please use different colours for PEX5 and Eci1 for clarity.

      Figure 4A: please indicate the PTS1 for the other five molecules of Eci1. Are they buried, or not resolved? Please add an explanation.

      Figure 4B, C, and D: please colour the circled helix in PEX5 so that it can be more easily seen.

      Please indicate the EBI-mediated interaction in Figure 4C. The relationship between 4C and 4D could be explained better, as they are not viewed from the same direction.

      Figure S3: As the authors indicated, Pex5 binds with multiple conformations and forms a variable interface with an Eci1 subunit. Does this mean different types of non-canonical interface are possible? Please discuss this.

      Figures 5A and B should be labelled as the Pex5 TPR domain.

      Figure S8 is very helpful in understanding the interface and could be included in Figure 5.

      Significance

      While cargo recognition via the Pex5-PTS1 interaction is well understood in molecular detail, there are proteins which either lack a PTS1 or have a non-essential PTS1 yet still require Pex5 for import into peroxisomes. This study provides a structural view of the interaction between Pex5 and its cargo Eci1, a protein that does have a PTS1 but for which the PTS1 is not essential for import. It is not the first example of a Pex5-cargo structure to show a non-canonical binding interface, and the results are compared to the human PEX5-AGT structure. It is an important addition to understanding how so-called PTS context-dependent or non-PTS1/non-PTS2 proteins can be imported. Is this the first structure showing Pex5 bound to an oligomeric cargo? Previous work is appropriately cited in the manuscript.

      The study will be of interest to audiences interested in protein-protein interactions and in protein targeting to organelles. This manuscript presents additional knowledge on how an oligomeric, PTS1-independent protein can be imported into peroxisomes. Whether other proteins use a similar import mechanism can now be tested, to understand how one receptor can use multiple apparent binding modes to import a wide range of different proteins.

    1. AI tools are incredible and can be very useful - especially in a reading and writing class. Yup! You read that correctly. I think AI can be incredibly useful. However, relying solely on AI to complete assignments and/or tasks in this class is considered misconduct and an act of academic dishonesty. Even if you take an AI-generated paragraph and change every fifth word to your own writing - it's still academic dishonesty.

      If we ever use AI as a support tool for our essays, do we have to use references for them? I just wanted to make sure that all the help that I used would be shown.

    1. Reviewer #2 (Public review):

      Summary:

      The authors perform a remarkably comprehensive, rigorous, and extensive investigation into the spatiotemporal dynamics between ribosomal accumulation, nucleoid segregation, and cell division. Using detailed experimental characterization and rigorous physical models, they offer a compelling argument that nucleoid segregation rates are determined at least in part by the accumulation of ribosomes in the center of the cell, exerting a steric force to drive nucleoid segregation prior to cell division. This evolutionarily ingenious mechanism means cells can rely on ribosomal biogenesis as the sole determinant for the growth rate and cell division rate, avoiding the need for two separate 'sensors,' which would require careful coupling.

      Strengths:

      In terms of strengths, the paper is very well written, the data are of extremely high quality, and the work is of fundamental importance to the field of cell growth and division. This is an important and innovative discovery enabled through a combination of rigorous experimental work and innovative conceptual, statistical, and physical modeling.

      Weaknesses:

      In terms of weaknesses, I have three specific thoughts.

      Firstly, my biggest question (and this may or may not be a bona fide weakness) is how unambiguously the authors can be sure their ribosomal labeling is reporting on polysomes, specifically. My reading of the work is that the loss of spatial density upon rifampicin treatment is used to infer that spatial density corresponds to polysomes, yet this feels like a relatively indirect way to get at this question, given rifampicin targets RNA polymerase and not translation. It would be good if a more direct way to confirm polysome dependence were possible.

      Second, the authors invoke a phase separation model to explain the data, yet it is unclear whether there is any particular evidence supporting such a model, whether they can exclude simpler models of entanglement/local diffusion (and/or perhaps this is what is meant by phase separation?) and it's not clear if claiming phase separation offers any additional insight/predictive power/utility. I am OK with this being proposed as a hypothesis/idea/working model, and I agree the model is consistent with the data, BUT I also feel other models are consistent with the data. I also very much do not think that this specific aspect of the paper has any bearing on the paper's impact and importance.

      Finally, the writing and the figures are of extremely high quality, but the sheer volume of data here is potentially overwhelming. I wonder if there is any way for the authors to consider stripping down the text/figures to streamline things a bit? I also think it would be useful to include visually consistent schematics of the question/hypothesis/idea each of the figures is addressing to help keep readers on the same page as to what is going on in each figure. Again, there was no figure or section I felt was particularly unclear, but the sheer volume of text/data made reading this quite the mental endurance sport! I am completely guilty of this myself, so I don't think I have any super strong suggestions for how to fix this, but just something to consider.

    1. You won't work past seven or on weekends. And I don't need you to say smart shit all the time or come up with the best most brilliant idea. I mean it's great if you do but the most important thing is that we all feel comfortable saying whatever weird shit comes into our minds. So we don't feel like we have to self-censor and we can all just sit around telling stories. Because that's where the good stuff comes from. These guys know this ... I mean you guys have been through this ... (Dave and Danny M1 nod vigorously) I'd say half the stuff on Heathens was from our lives or just stories we'd heard from other people.

      A key moment in The Antipodes is when Sandy promises the group they won't work late or on weekends. He tells them they don't need to be perfect, just creative. This makes the job seem relaxed and easy at first. But as the play goes on, this promise fades. The group loses track of time, pressure builds, and work takes over their lives. This moment is important because it shows how things can quickly spiral out of control.


    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The manuscript is dedicated heavily to cell type mapping and identification of subtype markers in the human testis but does not present enough results from cross-comparison of NOA cases versus controls. Their findings are mostly based on the transcriptome, and the authors do not make enough use of the scATAC-seq data in their analyses, as they put forward in the title. Overall, the authors should do more to characterise the differential profile of NOA cases at the molecular level - specific gene expression, chromatin accessibility, TF binding, pathways, and signaling that are perturbed in NOA patients and may be associated with azoospermia.

      Strengths:

      (1) The establishment of single-cell data (both RNA and ATAC) from the human testicular tissues is noteworthy.

      (2) The manuscript includes extensive mapping of sub-cell populations with some claimed as novel, and reports marker gene expression.

      (3) The authors present inter-cellular cross-talks in human testicular tissues that may be important in adequate sperm cell differentiation.

      Weaknesses:

      (1) A low sample size (2 OA and 3 NOA cases). There are no control samples from healthy individuals.

      Thank you for your comments. We recognize that the small sample size in this study somewhat limits its generalizability. However, in transcriptomic research, limited sample sizes are a common issue due to the complexities involved in acquiring samples, particularly in studies about the reproductive system. Healthy testicular tissue samples are difficult to obtain, and studies (doi: 10.18632/aging.203675) have used obstructive azoospermia as a control group in which spermatogenesis and development are normal.

      (2) Their argument about interactions between germ and Sertoli cells is not based on statistical testing.

      Thank you for your comments. Due to limited funding, we have not yet fully and deeply conducted validation experiments, but we plan to carry out related experiments in the later stage. We hope that the publication of this study will help to obtain more financial support to further investigate the interactions between germ cells and Sertoli cells.

      (3) Rationale/logic of the study. This study, in its present form, seems to be more about the role of sub-Sertoli population interactions in sperm cell development and does not provide enough insights about NOA.

      Thank you for your comments. In Figure 6, we conducted an in-depth analysis and comparison of the differences between the Sertoli cell subtypes and the germ cell subtypes involved in spermatogenesis in the OA and NOA groups. The results revealed that in the NOA group, especially in the NOA3 group, which has a lower sperm count compared to NOA2 and NOA1, there is a significant loss of Sertoli cell subtypes including SC3, SC4, SC5, SC6, and SC8. The NOA1 group, with a sperm count close to that of the OA group, also had a Sertoli cell profile similar to the OA group. The NOA2 group, with a sperm count between that of NOA1 and NOA3, also exhibited an intermediate profile of Sertoli cell subtypes. Therefore, we suggest that changes in Sertoli cell subtypes are a key factor affecting sperm count, rather than just the total number of Sertoli cells. We believe that through these analyses, we can provide in-depth insights into NOA, and we hope that the publication of this study will help obtain more funding support to further validate and expand on these findings.

      (4) The authors do not make full use of the scATAC-seq data.

      Thank you for your comments. We have added analysis of the scATAC-seq data, which is shown in the revised manuscript.

      Reviewer #2 (Public Review):

      Summary:

      Shimin Wang et al. investigated the role of Sertoli cells in mediating spermatogenesis disorders in non-obstructive azoospermia (NOA) through stage-specific communications. The authors utilized scRNA-seq and scATAC-seq to analyze the molecular and epigenetic profiles of germ cells and Sertoli cells at different stages of spermatogenesis.

      Strengths:

      By understanding the gene expression patterns and chromatin accessibility changes in Sertoli cells, the authors sought to uncover key regulatory mechanisms underlying male infertility and identify potential targets for therapeutic interventions. They emphasized that the absence of the SC3 subtype would be a major factor contributing to NOA.

      Weaknesses:

      Although the authors used cutting-edge techniques to support their arguments, it is difficult to find conceptual and scientific advances compared to Zeng S et al.'s paper (Zeng S, Chen L, Liu X, Tang H, Wu H, and Liu C (2023) Single-cell multi-omics analysis reveals dysfunctional Wnt signaling of spermatogonia in non-obstructive azoospermia. Front. Endocrinol. 14:1138386.). Overall, the authors need to improve their manuscript to demonstrate the novelty of their findings in a more logical way.

      Thank you for your detailed review of our work. We greatly appreciate your feedback and have made revisions to our manuscript accordingly.

      Regarding the novelty of our research, we believe our study offers conceptual and scientific advances in several ways:

      We have systematically revealed the stage-specific roles of Sertoli cell subtypes in different stages of spermatogenesis, particularly emphasizing the crucial role of the SC3 subtype in non-obstructive azoospermia (NOA). Additionally, we identified that other Sertoli cell subtypes (SC1, SC2, SC3...SC8, etc.) also collaborate in a stage-specific manner with different subpopulations of spermatogenic cells (SSC0, SSC1/SSC2/Diffed, Pa...SPT3). These findings provide new insights into the understanding of spermatogenesis disorders.

      Compared to the study by Zeng S et al., our research not only focuses on the functional alterations in Sertoli cells but also comprehensively analyzes the interaction patterns between Sertoli cells and spermatogenic cells using scRNA-seq and scATAC-seq technologies. We uncovered several novel regulatory networks that could serve as potential targets for the diagnosis and treatment of NOA.

      We sincerely appreciate your constructive comments and will continue to explore this area further, aiming to make a more significant contribution to the understanding of NOA mechanisms.

      Reviewer #3 (Public Review):

      Summary:

      This study profiled the single-cell transcriptome of human spermatogenesis and provided many potential molecular markers for developing testicular puncture-specific marker kits for NOA patients.

      Strengths:

      Perform single-cell RNA sequencing (scRNA-seq) and single-cell assay for transposase-accessible chromatin sequencing (scATAC-seq) on testicular tissues from two OA patients and three NOA patients.

      Weaknesses:

      Most results are analytical and lack specific experiments to support these analytical results and hypotheses.

      Thank you for your thorough review of our work. We highly value your feedback and have made revisions to our manuscript accordingly. Indeed, we have conducted immunofluorescence (IF) experiments to validate the data obtained from single-cell sequencing and have expanded the sample size to enhance the reliability of our results. To better present these validation experiments, we have reorganized and renamed the sample information, making it easier for you to understand which samples were used in the specific experiments. Following the publication of this paper, we plan to secure additional funding to deepen our research, particularly in the area of experimental validation. We sincerely appreciate your support and insightful suggestions, which have greatly helped guide our future research directions.

      Reviewer #1 (Recommendations For The Authors):

      (1) The authors should include results from cross-investigation comparing NOA/OA patients versus controls.

      Thank you for your comments. In this study, OA was the control group. Healthy testicular tissue samples are difficult to obtain, and studies (doi: 10.18632/aging.203675) have used OA as a control group in which spermatogenesis and development are normal.

      (2) In Table S1, the authors should also include the metric for scATAC-seq, and do more to show the findings the authors obtained in RNA is replicated with chromatin accessibility.

      Thank you for your comments. We have added Table S2, which includes the metric for scATAC-seq.

      (3) A single sample from each OA and NOA group may not be enough to confirm colocalization. The authors should include results from all available samples and use quantitative measures.

      Thank you for your comments. I apologize that the sample size in this study was less than three and we could not conduct quantitative analysis. We will increase the sample size and conduct corresponding experiments in subsequent research.

      (4) The Methods section does not include enough description to follow how the analyses were carried out, and is missing information on some of the key procedures such as velocity and cell cycle analyses.

      Thank you for your comments. The method about velocity and cell cycle analyses was added in the revised manuscript. The description is as follows:

      “Velocity analysis

      RNA velocity analysis was conducted using scVelo's (version 0.2.1) generalized dynamical model. Spliced and unspliced mRNA was quantified by velocyto (version 0.17.17).”

      “Cell cycle analysis

      To quantify the cell cycle phase of individual cells, we employed the CellCycleScoring function from the Seurat package. This function computes cell cycle scores using established marker genes for cell cycle phases, as described in a previous study by Nestorowa et al. (2016). Cells showing strong expression of G2/M-phase or S-phase markers were designated as G2/M-phase or S-phase cells, respectively. Cells that did not exhibit significant expression of markers from either category were classified as G1-phase cells.”
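      For readers unfamiliar with this scoring scheme, the assignment logic can be sketched in a few lines of Python. This is an illustrative, simplified reimplementation of the idea only (Seurat's actual CellCycleScoring subtracts the expression of binned control gene sets rather than the overall mean), and the marker genes in the example are a hypothetical subset:

```python
import numpy as np

def score_phase(expr, genes, s_genes, g2m_genes):
    """Assign a cell cycle phase to each cell of an expression matrix.

    expr  : (n_cells, n_genes) array of normalized expression values.
    genes : gene names matching the columns of expr.
    A cell whose S and G2/M marker scores are both <= 0 is called G1;
    otherwise it takes the phase with the higher marker score.
    """
    col = {g: i for i, g in enumerate(genes)}

    def score(markers):
        # marker score = mean marker expression minus mean overall
        # expression (a simplification of Seurat's binned background)
        cols = [col[g] for g in markers if g in col]
        return expr[:, cols].mean(axis=1) - expr.mean(axis=1)

    s, g2m = score(s_genes), score(g2m_genes)
    phase = np.where((s <= 0) & (g2m <= 0), "G1",
                     np.where(s > g2m, "S", "G2M"))
    return s, g2m, phase

# Hypothetical mini-example: two marker genes per phase, three cells.
genes = ["MCM5", "PCNA", "TOP2A", "MKI67", "ACTB"]
expr = np.array([[5, 5, 0, 0, 1],    # enriched for S markers
                 [0, 0, 5, 5, 1],    # enriched for G2/M markers
                 [1, 1, 1, 1, 1.0]]) # no enrichment, so called G1
s, g2m, phase = score_phase(expr, genes, ["MCM5", "PCNA"], ["TOP2A", "MKI67"])
```

      The key design point is that scoring is relative (markers versus a background), so a uniformly high-expressing cell is not mistaken for a cycling one.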

      (5) For the purpose of transparency, the authors should upload codes used for analyses so that each figure can be reproduced. All raw and processed data should be made publicly available.

      Thank you for your comments. We have deposited scRNA-seq and scATAC-seq data in NCBI. ScRNA-seq data have been deposited in the NCBI Gene Expression Omnibus with the accession number GSE202647, and scATAC-seq data have been deposited in the NCBI database with the accession number PRJNA1177103.

      Reviewer #2 (Recommendations For The Authors):

      The detailed points the authors need to improve are attached below.

      The results presented in the study have several weaknesses:

      In Figure 1A, HE staining results should be provided for all patients who underwent single-cell analysis.

      Thank you very much for your valuable suggestions. In Figure 1, we present the HE staining results paired with the single-cell data, covering all patients involved in the single-cell analysis.

      - Saying "identification of novel potential molecular markers for distinct cell types" seems unsupported by the data.

      Thank you for your comments. I'm sorry for the inaccuracy of my description. We have revised this sentence. The description is as follows: These findings indicate that the scRNA-seq data from this study can serve for cellular classification.

      - The methods suggest an integrated analysis of scRNA-seq and scATAC-seq, but from the figures, it seems like separate analyses were performed. It's necessary to have data showing the integrated analysis.

      Thank you for your comments. We have added an integrated analysis of scRNA-seq and scATAC-seq. The results were shown in Figure S2.

      Figure 2 does not seem to cover the diversity of germ cell subtypes well. The main content appears to be about the differentiation process, and it seems more focused on SSCs (stem cell types), but the intended message is not clearly conveyed.

      Thank you for your comments. Figure S1 revealed the diversity of germ cell subtypes. The second part of the results described the integrated findings from Figures 2 and S1.

      - In Figure 2B, pseudotime could be shown, and I wonder if the pseudotime in this analysis shows a similar pattern as in Figure 2D.

      Thank you for your comments. Figure 2B shows the pseudotime analysis of the 12 germ cell subpopulations, and Figure 2D shows their RNA velocity. The two methods are both used for cell trajectory analysis. The pseudotime in Figure 2B shows a similar pattern to Figure 2D.

      - While staining occurs within one tissue, saying they are co-expressed seems inaccurate as the staining locations are clearly distinct. For example, the staining patterns of A2M and DDX4 (a classical marker) are quite different, so it's hard to claim A2M as a new potential marker just because it's expressed. Also, TSSK6 was separately described as having a similar expression pattern to DDX4, but from the IF results, it doesn't seem similar.

      Thank you for your comments. We have revised the Figure.

      - It was described that A2M (expressed in SSC0-1), and ASB9 (expressed in SSC2) have open promoter sites in SSC0, SSC2, and Diffing_SPG, but it doesn't seem like they are only open in the promoters of those cell types. For example, there doesn't seem to be a peak in Diffing for either gene. The promoter region of the tracks is not very clear, so overall figure modification seems necessary.

      Thank you for your comments. We have revised the Figure.

      - The ATAC signal scale for each genomic region should be included, and clear markings for the TSS location and direction of the genes are needed.

      Thank you for your comments. We have revised the figure and shown in the revised manuscript.

      Figure 3A mostly shows the SSC2 in the G2/M phase, so it seems questionable to call SSC0/1 quiescent. Also, I wonder if the expression of EOMES and GFRA1 is well distinguished in the SSC subtypes as expected.

      Thank you for your comments. We will validate in subsequent experiments whether the expression of EOMES and GFRA1 is clearly distinguished in the SSC subtypes.

      - In Figure 3C, it would be good to have labels indicating what the x and y axes represent. The figure seems complex, and the description does not seem to fully support it.

      Thank you for your comments. We have added labels indicating what the x and y axes represent in the Figure 3C. The x and y axes represent spliced and unspliced mRNA ratios, respectively.

      - While TFs are the central focus, it's disappointing that scATAC-seq was not used.

      Thank you for your comments. TFs analysis using scATAC-seq will be carried out in the future.

      Figure 4: It would be good to have a more detailed discussion of the differences between subtypes, such as through GO analysis. The track images need modification like marking the peaks of interest and focusing more on the promoter region, similar to the previous figures.

      Thank you for your comments. GO analysis results were put in Figure S5. The description is as follows:

      As shown in Figure S5, SC1 were mainly involved in cell differentiation, cell adhesion and cell communication; SC2 were involved in cell migration, and cell adhesion; SC3 were involved in spermatogenesis, and meiotic cell cycle; SC4 were involved in meiotic cell cycle, and positive regulation of stem cell proliferation; SC5 were involved in cell cycle, and cell division; SC6 were involved in obsolete oxidation−reduction process, and glutathione derivative biosynthetic process; SC7 were involved in viral transcription and translational initiation; SC8 were involved in spermatogenesis and sperm capacitation.

      In Figure 5, it would be good to have criteria for the novel Sertoli cell subtype presented. CCDC62 is presented as a representative marker for the SC8 cluster, but from Figure 4C, it seems to be quite expressed in the SC3 cluster as well. Therefore, in Figure 5E's protein-level check, it's unclear if this truly represents a novel SC8 subtype.

      Thank you for your comments. CCDC62 expression was higher in the SC8 cluster than in SC3. Since antibodies against some molecular markers were not commercially available, CCDC62 was selected as the SC8 marker for immunofluorescence verification. Immunofluorescence results showed that CCDC62 is a novel SC8 marker.

      - It might have been more meaningful to use SOX9 as a control and show that markers in the same subtype are expressed in the same location.

      Thank you for your comments. To determine PRAP1, BST2, and CCDC62 as new markers for the SC subtype, we co-stained them with SOX9 (a well-known SC marker).

      - Figures 4 and 5 could potentially be combined into one figure.

      Thank you for your comments. Since combining Figures 4 and 5 into a single image would cause the image to be unclear, two images are used to show it.

      In Figure 6, it would be good to support the results with more NOA patient data.

      Thank you for your comments. Patient clinical and laboratory characteristics has been presented in Table 1.

      - Rather than claiming the importance of SC3 based on 3 single-cell patient data, it would be better to validate using public data with SC3 signature genes (e.g., showing the correlation between germ cell and SC3 ratios).

      Thank you for your comments. I'm sorry I didn't find public data with SC3 signature genes. In the future, we will verify the importance of SC3 through in vivo and in vitro experiments.

      - 462: It seems to be referring to Figure 6G, not 6D.

      Thank you for your comments. We have revised it. The description is as follows: As shown in Figure 6G, State 1 SC3/4/5 tended to be associated with PreLep, SSC0/1/2, and Diffing- and Diffed-SPG sperm cells (R > 0.72).

      In Figure 7, the spermatogenesis process is basically well-known, so it would be better to emphasize what novel content is being conveyed here. Additionally, emphasizing the importance of SC3 in the overall process based on GO results leaves room for a better approach.

      Thank you for your valuable suggestions. Regarding Figure 7, we recognize that the spermatogenesis process is well-known, and we will focus on highlighting the novel content, particularly the role and significance of the SC3 subtype in spermatogenesis disorders. As for the importance of SC3 in the overall process based on GO results, we have validated this in Figure 8 through co-staining experiments between Sertoli cells and spermatogenic cells in OA and NOA groups. The results demonstrate a significant correlation between the number of SC3-positive cells and SPT3 spermatogenic cells, particularly in the NOA5-P8 group, where both SC3 and SPT3 cell counts are notably lower than in the NOA4-P7 group. This further supports the critical role of SC3 in the spermatogenesis process. Your suggestions have prompted us to refine our data presentation and more clearly emphasize the novel aspects of our research. We will continue to strive to ensure that every part of our research contributes meaningfully to the academic community. Thank you again for your guidance.

      In Figure 8, only the contents of the IF-stained proteins are listed, which seems slightly insufficient to constitute a subsection on its own. It might have been better to conclude by emphasizing some subtypes.

      Thank you for your comments. We have combined this part of the results with other results into one section. The description is as follows:

      “Co-localization of subpopulations of Sertoli cells and germ cells

      To determine the interaction between Sertoli cells and spermatogenesis, we applied CellPhoneDB to infer cellular interactions according to a ligand-receptor signalling database. As shown in Figure 6G, compared with other cell types, germ cells mainly interacted with Sertoli cells. We further performed Spearman correlation analysis to determine the relationship between Sertoli cells and germ cells. As shown in Figure 6H, State 1 SC3/4/5 tended to be associated with PreLep, SSC0/1/2, and Diffing- and Diffed-SPG sperm cells (R > 0.72). Interestingly, SC3 was significantly positively correlated with all sperm subpopulations (R > 0.5), suggesting an important role for SC3 in spermatogenesis and that SC3 is involved in the entire process of spermatogenesis. Subsequently, to understand whether the functions of germ cells and Sertoli cells correspond to each other, GO term enrichment analysis of germ cells and Sertoli cells was carried out (Figure S3, S4). We found that the functions could be divided into 8 categories, namely, material and energy metabolism, cell cycle activity, the final stage of sperm cell formation, chemical reaction, signal communication, cell adhesion and migration, stem cell and sex differentiation activity, and stress reaction. These different events were labeled with different colors in order to quickly capture the important events occurring in the cells at each stage.
As shown in Figure S3, we discovered that SSC0/1/2 were involved in SRP-dependent cotranslational protein targeting to membrane and cytoplasmic translation; Diffing SPG was involved in cell division and the cell cycle; Diffed SPG was involved in the cell cycle and RNA splicing; Pre-Leptotene was involved in the cell cycle and the meiotic cell cycle; Leptotene_Zygotene was involved in the cell cycle and the meiotic cell cycle; Pachytene was involved in cilium assembly and spermatogenesis; Diplotene was involved in spermatogenesis and cilium assembly; SPT1 was involved in cilium assembly and flagellated sperm motility; SPT2 was involved in spermatid development and flagellated sperm motility; and SPT3 was involved in spermatid development and spermatogenesis. As shown in Figure S4, SC1 was mainly involved in cell differentiation, cell adhesion, and cell communication; SC2 in cell migration and cell adhesion; SC3 in spermatogenesis and the meiotic cell cycle; SC4 in the meiotic cell cycle and positive regulation of stem cell proliferation; SC5 in the cell cycle and cell division; SC6 in the obsolete oxidation-reduction process and the glutathione derivative biosynthetic process; SC7 in viral transcription and translational initiation; and SC8 in spermatogenesis and sperm capacitation. The above analysis indicated that the functions of the 8 Sertoli cell subtypes and the 12 germ cell subtypes were closely related.

      To further verify that Sertoli cell subtypes show "stage specificity" for each stage of sperm development, we first performed HE staining using testicular tissues from the OA3-P6, NOA4-P7, and NOA5-P8 samples. The OA3-P6 group showed some sperm, with reduced spermatogenesis, thickened basement membranes, and a high number of Sertoli cells without spermatogenic cells. The NOA4-P7 group had no sperm initially, but a few malformed sperm were observed after sampling, leading to the removal of the affected seminiferous tubules. The NOA5-P8 group showed no sperm in situ (Figure 7A). Immunofluorescence staining (Figure 7B) was performed using these tissues for validation. ASB9 (SSC2) was primarily expressed in a wreath-like pattern around the basement membrane of testicular tissue, particularly in the OA group, while ASB9 was barely detectable in the NOA group. SOX2 (SC2) was scattered around SSC2 (ASB9), with nuclear staining, while TF (SC1) expression was not prominent. In NOA patients, SPATS1 (SC3) expression was significantly reduced. C9orf57 (Pa) showed nuclear expression in testicular tissues, primarily extending along the basement membrane toward the spermatogenic center, and was positioned closer to the center than DDX4, suggesting its involvement in germ cell development or differentiation. BEND4, identified as a marker for SC5, showed a developmental trajectory from the basement membrane toward the spermatogenic center. ST3GAL4 was expressed in the nucleus, forming a circular pattern around the basement membrane, similar to A2M (SSC1), though A2M was more concentrated around the outer edge of the basement membrane, creating a more distinct wreath-like arrangement. In cases of impaired spermatogenesis, this arrangement became disorganized and lost its original structure. SMCP (SC6) was concentrated in the midpiece region of the bright blue sperm cell tail.
In the OA group, SSC1 (A2M) was sparsely arranged in a rosette pattern around the basement membrane, but in the NOA group, it appeared more scattered. SSC2 (ASB9) expression was not prominent. BST2 (SC7) is a transmembrane protein primarily localized on the cell membrane. In the OA group, A2M (SSC1) was distinctly arranged in a wreath-like pattern around the basement membrane, with expression levels significantly higher than those of ASB9 (SSC2). TSSK6 (SPT3) was primarily expressed in OA3-P6, while CCDC62 (SC8) was more abundantly expressed in NOA4-P7, with ASB9 (SSC2) showing minimal expression. Taken together, germ cells of a particular stage tended to co-localize with Sertoli cells of the corresponding stage. Germ cells and Sertoli cells at each differentiation stage were functionally heterogeneous and stage-specific (Figure 8). This suggests that each stage of sperm development requires the assistance of Sertoli cells to be completed.”
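As an aside for readers, the Spearman correlation used above (Figure 6H) is simply the Pearson correlation of ranks. The sketch below is illustrative only; the per-sample cell fractions are hypothetical numbers, not data from the manuscript.

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Matches scipy.stats.spearmanr when there are no ties."""
    rx = np.argsort(np.argsort(np.asarray(x)))
    ry = np.argsort(np.argsort(np.asarray(y)))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical per-sample fractions of SC3 and SPT3 cells (5 samples).
sc3 = [0.12, 0.30, 0.25, 0.08, 0.03]
spt3 = [0.10, 0.28, 0.22, 0.05, 0.02]
print(f"Spearman R = {spearman_r(sc3, spt3):.2f}")  # monotone -> R = 1.00
```

Because only the rank order matters, a perfectly monotone relationship between subtype abundances gives R = 1 even when the raw fractions differ in scale.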

      Reviewer #3 (Recommendations For The Authors):

      The authors revealed 11 germ cell subtypes and 8 Sertoli cell subtypes through single-cell analysis of two OA patients and three NOA patients, and found that the Sertoli cell SC3 subtype (marked by SPATS1) plays an important role in spermatogenesis. It also suggests that Notch1/2/3 signaling and integrins are involved in germ cell-Sertoli cell interactions. This is an interesting and useful article that at least gives us a comprehensive understanding of human spermatogenesis. It provides a powerful tool for further research on NOA. However, there are still some issues and questions that need to be addressed.<br /> (1) How to collect testicular tissue, please explain in detail. Extract which part of testicular tissue. It's better to make a schematic diagram.

      Thank you for your comments. The process is as follows: testicular tissues were obtained separately from two OA patients (OA1-P1 and OA2-P2) and three NOA patients (NOA1-P3, NOA2-P4, NOA3-P5) using microdissection testicular sperm extraction.

      (2) Whether the tissues of these patients are extracted simultaneously or separately, separated into single cells, and stored, and then single cell analysis is performed simultaneously. Please be specific.

      Thank you for your comments. The testicular tissues of these patients were extracted separately, then separated into single cells, and single cell analysis was performed simultaneously.

      (3) When performing single-cell analysis, cells from two OA patients were analyzed individually or combined. The same problem occurred in the cells of three NOA patients.

      Thank you for your comments. Cells from two OA patients and three NOA patients were analyzed individually.

      (4) Can you specifically point out the histological differences between OA and NOA in Figure 1A? This makes it easier for readers to understand the structure change between OA and NOA. Please also label representative supporting cells.

      Thank you for your comments. We have revised the description, as shown in the revised manuscript.

      (5) The authors demonstrate that "We speculate that this lack of differentiation may be due to the intense morphological changes occurring in the sperm cells during this period, resulting in relatively minor differences in gene expression." Please provide some verification of this hypothesis? For example, use immunofluorescence staining to observe morphological changes in sperm cells.

      Thank you for your comments. Due to limited funds, we will verify this hypothesis in future studies.

      (6) The authors demonstrate that " As shown in Figure 5E, we discovered that PRAP1, BST2, and CCDC62 were co-expressed with SOX9 in testes tissues." The staining in Figure 5D is unclear, and it is difficult to explain that SOX9 is co-expressed with PRAP1 BST2 CCDC62 based on the current staining results. The staining patterns of SOX9 (green) and SOX9 (red) are also different. (SOX9 (red) appears as dots, while the background for SOX9 (green) is too dark to tell whether its staining is also in the form of dots.) In summary, increasing the clarity of the staining makes it more convincing. Alternatively, use high magnification to display these results.

      Thank you for your comments. We have re-stained and updated this part of the immunofluorescence staining results. Please refer to the files named Figure 1, Figure 2, Figure 5, and Figure 8.

      (7) In Figure 8, the author emphasized the co-localization of Sertoli cells and Germ cells at corresponding stages and did a lot of staining, but it was difficult to distinguish the specific locations of co-localization, which was similar to Figure 5E. If possible, please mark specific colocalizations with arrows or use high magnification to display these results, in order to facilitate readers to better understand.

      Thank you for your comments. We have re-stained and updated this part of the data. Please refer to the immunofluorescence staining data in the updated Figure 8.

      (8) The authors emphasize that macrophages may play an important role in spermatogenesis. Therefore, adding relevant macrophage staining to observe the differences in macrophage expression between NOA and OA should better support this idea.

      Thank you for your comments. Macrophage-related experiments will be further explored in the future.

      (9) Notch1/2/3 signaling and integrin were discovered to be involved in germ cell-Sertoli cell interaction. However there are currently no concrete experiments to support this hypothesis. At least simple verification experiments are needed.

      Thank you for your comments. Due to limited funding, studies will be carried out in the future.

      (10) Data availability statements should not be limited to the corresponding author, especially for big data analysis. This is crucial to the credibility of this data (Have the scRNA-seq and scATAC-seq in this study been deposited in GEO or other databases, and when will they be released to the public?) The data for such big data analysis needs to be saved in GEO or other databases in advance so that more research can use it.

      Thank you for your comments. We have deposited scRNA-seq and scATAC-seq data in NCBI. “ScRNA-seq data have been deposited in the NCBI Gene Expression Omnibus with the accession number GSE202647, and scATAC-seq data have been deposited in the NCBI database with the accession number PRJNA1177103.”

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      This study by Wu et al. provides valuable computational insights into PROTAC-related protein complexes, focusing on linker roles, protein-protein interaction stability, and lysine residue accessibility. The findings are significant for PROTAC development in cancer treatment, particularly breast and prostate cancers.

      The authors' claims about the role of PROTAC linkers and protein-protein interaction stability are generally supported by their computational data. However, the conclusions regarding lysine accessibility could be strengthened with more in-depth analysis. The use of the term "protein functional dynamics" is not fully justified by the presented work, which focuses primarily on structural dynamics rather than functional aspects.

      Strengths:

      (1) Comprehensive computational analysis of PROTAC-related protein complexes.

      (2) Focus on critical aspects: linker role, protein-protein interaction stability, and lysine accessibility.

      Weaknesses:

      (1) Limited examination of lysine accessibility despite its stated importance.

      (2) Use of RMSD as the primary metric for conformational assessment, which may overlook important local structural changes.

      Reviewer #1 (Recommendations for the authors):

      (1) The authors' claims about the role of PROTAC linkers and protein-protein interaction stability are generally supported by their computational data. However, the conclusions regarding lysine accessibility could be strengthened with more in-depth analysis. Expand the analysis of lysine accessibility, potentially correlating it with other structural features such as linker length.

      We thank the reviewer for the suggestion! We performed a time-dependent correlation analysis correlating the dihedral angles of the PROTACs with the Lys-Gly distance (Figures 6 and S17). We included a detailed explanation on page 16:

      “To further examine the correlation between PROTAC rotation and the Lys-Gly interaction, we performed a time-dependent correlation analysis. This analysis showed that PROTAC rotation translates into motion over time, leading to the Lys-Gly interaction, with a correlation peak around 60-85 ns marking the time of the interaction (Figure 6 and Figure S17). In addition, the pseudo dihedral angles showed a high correlation with the Lys-Gly distance (0.85 in the case of dBET1). This indicates that the degradation complex undergoes structural rearrangement that drives the Lys-Gly interaction.”
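For concreteness, a time-dependent (lagged) correlation between a linker dihedral series and the Lys-Gly distance series can be sketched as below. This is an illustrative estimator on made-up series; the exact analysis in the paper may differ.

```python
import numpy as np

def lagged_corr(a, b, lag):
    """Pearson correlation between a(t) and b(t + lag); a positive lag asks
    whether changes in `a` (e.g. a linker dihedral) precede changes in
    `b` (e.g. the Lys-Gly distance)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:lag]
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical series: the "distance" lags the "dihedral" by 5 frames.
t = np.linspace(0, 10, 500)
dihedral = np.sin(t)
distance = np.roll(dihedral, 5)
best = max(range(-20, 21), key=lambda k: lagged_corr(dihedral, distance, k))
print(best)  # lag with the highest correlation -> 5
```

Scanning the lag at which the correlation peaks (here a planted 5-frame delay) is what locates the 60-85 ns correlation peak described in the quoted passage.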

      (2) The use of the term "protein functional dynamics" is not fully justified by the presented work, which focuses primarily on structural dynamics rather than functional aspects. Consider changing "protein functional dynamics" to "protein dynamics" to more accurately reflect the scope of the study.

      We thank the reviewer for suggesting more accurate terminology! We agree that keeping “protein functional dynamics” in the title would require focusing on how the overall protein dynamics link to function. Function is directly related to PROTAC-induced structural dynamics, as commonly seen in protein structure-function relationships, but it is not our main focus. Therefore, we changed the title, replacing “functional” with “structural”.

      (3) Incorporate more local and specific characterization methods in addition to RMSD for a more comprehensive conformational assessment.

      We thank the reviewer for the suggestion. We performed a time-dependent correlation analysis to understand how the rotation of the PROTACs can translate into the Lys-Gly interaction. In addition, we performed a dihedral entropy analysis for each dihedral angle in the linker of each PROTAC to better examine its flexibility.

      We included a detailed explanation on page 18: “Our dihedral entropy analysis showed that dBET57 has ~0.3 kcal/mol lower entropy than the other three linkers, suggesting that dBET57 is less flexible than the other PROTACs (Figure S18).”
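For readers unfamiliar with the metric, one common histogram-based estimate of a dihedral's configurational entropy (reported here as T·S in kcal/mol, to match the ~0.3 kcal/mol figure) can be sketched as follows. This is an illustrative estimator on synthetic angles, not necessarily the exact method used in the paper.

```python
import numpy as np

R_KCAL = 0.0019872  # gas constant, kcal/(mol·K)

def dihedral_TS(angles_deg, nbins=60, T=300.0):
    """T*S (kcal/mol) of one dihedral from the Shannon entropy of its
    histogram over a trajectory; broader distributions -> higher T*S."""
    counts, _ = np.histogram(angles_deg, bins=nbins, range=(-180.0, 180.0))
    p = counts[counts > 0] / counts.sum()
    return float(T * R_KCAL * -(p * np.log(p)).sum())

rigid = np.random.default_rng(0).normal(60.0, 2.0, 5000)      # narrow well
flexible = np.random.default_rng(1).uniform(-180, 180, 5000)  # free rotor
print(f"rigid T*S = {dihedral_TS(rigid):.2f}, "
      f"flexible T*S = {dihedral_TS(flexible):.2f}")
```

A dihedral confined to a narrow well yields a markedly lower T·S than a freely rotating one, which is the sense in which a lower-entropy linker such as dBET57 is "less flexible".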

      Reviewer #2 (Public review):

      Summary:

      The manuscript reports the computational study of the dynamics of PROTAC-induced degradation complexes. The research investigates how different linkers within PROTACs affect the formation and stability of ternary complexes between the target protein BRD4BD1 and Cereblon E3 ligase, and the degradation machinery. Using computational modeling, docking, and molecular dynamics simulations, the study demonstrates that although all PROTACs form ternary complexes, the linkers significantly influence the dynamics and efficacy of protein degradation. The findings highlight that the flexibility and positioning of Lys residues are crucial for successful ubiquitination. The results also discussed the correlated motions between the PROTAC linker and the complex.

      Strengths:

      The field of PROTAC discovery and design, characterized by its limited research, distinguishes itself from traditional binary ligand-protein interactions by forming a ternary complex involving two proteins. The current understanding of how the structure of PROTAC influences its degradation efficacy remains insufficient. This study investigated the atomic-level dynamics of the degradation complex, offering potentially valuable insights for future research into PROTAC degradability.

      Reviewer #2 (Recommendations for the authors):

      (1) Regarding the modeling of the ternary complex, the BRD4 structure (3MXF) is from humans, whereas the CRBN structure in 4CI3 is derived from Gallus gallus. Is there a specific reason for not using structures from the same species, especially considering that human CRBN structures are available in the Protein Data Bank (e.g., 8OIZ, 4TZ4)?

      We appreciate the reviewer’s insightful comment regarding the choice of BRD4 and CRBN crystal structures from two species. Our initial selection of 4CI3 for the CRBN structure was based on its high resolution and publication in a Nature journal. Furthermore, the Gallus gallus CRBN structure shares a high degree of sequence and structural similarity with Homo sapiens CRBN, especially in the ligand-binding region. At the time of our study, we were aware of 4TZ4 as a Homo sapiens CRBN structure; however, we did not use it since no publication or detailed experimental characterization was associated with it. Additionally, PDB 8OIZ was not yet publicly available for other researchers to use at the time.

      (2) Based on the crystal structure (PDB ID: 6BNB) discussed in Reference 6, the ternary complex of dBET57 exhibits a conformation distinct from other PROTACs, with CRBN adopting an "open" conformation. Using the same CRBN structure for dBET57 as for other PROTACs might result in inaccurate docking outcomes.

      Thank you for the reviewer’s comment! As noted by the authors in Reference 6, the observed open conformation of CRBN in the dBET57 ternary complex may result from the high salt crystallization conditions, which could drive structural rearrangement, and crystal contacts that may induce this conformation. The authors also mentioned that this open conformation could, in part, reflect CRBN’s intrinsic plasticity. However, they acknowledged that further studies are needed to determine whether this conformational flexibility is a characteristic feature of CRBN that enables it to accommodate a variety of substrates. Despite these observations, we believe that the compatibility of the observed BRD4<sup>BD1</sup> binding conformation with both open and closed CRBN states suggests that these conformational changes are all possible. Therefore, we believe using the same initial CRBN structure for dBET57 as for other PROTACs can still reasonably reveal the dynamic nature of the ternary complex and would not significantly affect the accuracy of our docking outcomes either.

      (3) Figure 2 displays only a single frame from the simulations, which might not provide a comprehensive representation. Could a contact frequency heatmap of PROTAC with the proteins be included to offer a more detailed view?

      We thank the reviewer for the suggestion! We performed the contact map analysis to observe the average distance between the PROTACs and BRD4<sup>BD1</sup> residues over the 400 ns of MD simulation (new Figure S4 added).

      We included a detailed explanation on pages 8 and 9: “The residue contact map throughout the 400 ns MD simulation also showed different patterns of protein-protein interactions, indicating that the linkers were able to adopt different conformations (Figure S4).”
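To illustrate the idea (with made-up coordinates, not our trajectory data), a mean-distance contact map between two residue sets can be computed by averaging pairwise distances over frames:

```python
import numpy as np

def mean_distance_map(traj_a, traj_b):
    """Mean over frames of pairwise distances between residue centroids.
    traj_a: (frames, n_a, 3), traj_b: (frames, n_b, 3) -> (n_a, n_b) map."""
    diff = traj_a[:, :, None, :] - traj_b[:, None, :, :]
    return np.linalg.norm(diff, axis=-1).mean(axis=0)

# Toy example: two frames, one "PROTAC atom" vs two "BRD4 residues".
a = np.zeros((2, 1, 3))
b = np.array([[[3.0, 4.0, 0.0], [0.0, 0.0, 1.0]]] * 2)
print(mean_distance_map(a, b))  # -> [[5. 1.]]
```

In practice the residue coordinates would come from a trajectory reader (e.g. MDAnalysis or cpptraj output); low mean distances in the resulting map mark persistent contacts.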

      (4) The conclusions in Figure 3 and S11 are based on a single 400 ns trajectory. The reproducibility of these results is therefore uncertain.

      We thank the reviewer for the suggestion! We added one more random seed MD simulation for each PROTAC to ensure the reproducibility of the results. The Result is shown in Figure S21 and the details for each MD run are updated in Table 1.

      (5) Figure 4 indicates significant differences between the first and last 100 ns of the simulations. Does this suggest that the simulations have not converged? If so, how can the statistical analysis presented in this paper be considered reliable?

      We thank the reviewer for the question. The simulation was initiated with a 10-15 Å gap between BRD4 and Ub to monitor the movement of the degradation machinery and the Lys-Gly interaction. The significant changes in the pseudo dihedral angles in Figure 4 show that large-scale movement of the degradation complex can initiate Lys-Gly binding. This does not reflect unstable sampling, because the system remains very stable once BRD4 comes close to Ub.

      (6) In Figure 5, the dihedral angle of dBET57_#9MD1 is marked on a peptide bond. Shouldn't this angle have a high energy barrier for rotation?

      We thank the reviewer for catching the error! Indeed, the dihedral angles were mistakenly marked on the peptide bond. We reworked the figure and double-checked our dihedral correlation analysis. The updated dihedral angle selection and correlation coefficients are shown in Figure 5.

      (7) Given that crystal structures for dBET 70, 23, and 57 are available, why is there a need to model the complex using protein-protein docking?

      We thank the reviewer for the feedback. Only dBET23 has a crystal structure of the complete ternary complex, containing the PROTAC and both proteins; complete ternary complexes are not available for dBET1, dBET57, and dBET70. Although dBET70 has a crystal structure, the conformation of its PROTAC is not resolved, so we still performed protein-protein docking for dBET70.

      We included the explanation on page 8: “Only the dBET23 crystal structure is available with the PROTAC and both proteins, while experimentally determined ternary complexes of dBET1, dBET57, and dBET70 are not available.”

      (8) On page 9, it is mentioned that "only one of the 12 PDB files had CRBN bound to DDB1 (PDB ID 4TZ4)." However, there are numerous structures of the DDB1-CRBN complex available, including those used for docking like 4CI3, as well as 4CI1, 4CI2, 8OIZ, etc.

      We thank the reviewers for the comment! We acknowledged the existence of several DDB1-CRBN complex crystal structures, such as PDB IDs 4CI1, 4CI2, 4CI3, and the more recent 8OIZ. For our study, we chose to use 4TZ4 to maintain consistency in complex construction and to align with the methodology established in a previously published JBC paper (https://doi.org/10.1016/j.jbc.2022.101653), which successfully utilized the same structure for a similar construct. At the time our study was conducted, the 8OIZ structure had not yet been released. We appreciate your suggestion and will consider incorporating alternative structures in future studies to further investigate our findings.

      (9) Table 2 is first referenced on page 8, while Table 1 is mentioned first on page 10. The numbering of these tables should be reversed to reflect their order of appearance in the text.

      We thank the reviewer for catching the error! We switched the order of Table 1 and Table 2.

      Reviewer #3 (Public review):

      The authors offer an interesting computational study on the dynamics of PROTAC-driven protein degradation. They employed a combination of protein-protein docking, structural alignment, atomistic MD simulations, and post-analysis to model a series of CRBN-dBET-BRD4 ternary complexes, as well as the entire degradation machinery complex. These degraders, with different linker properties, were all capable of forming stable ternary complexes but had been shown experimentally to exhibit different degradation capabilities. While in the initial models of the degradation machinery complex, no surface Lys residue(s) of BRD4 were exposed sufficiently for the crucial ubiquitination step, MD simulations illustrated protein functional dynamics of the entire complex and local side-chain arrangements to bring Lys residue(s) to the catalytic pocket of E2/Ub for reactions. Using these simulations, the authors were able to present a hypothesis as to how linker property affects degradation potency. They were able to roughly correlate the distance of Lys residues to the catalytic pocket of E2/Ub with observed DC50/5h values. This is an interesting and timely study that presents interesting tools that could be used to guide future PROTAC design or optimization.

      Reviewer #3 (Recommendations for the authors):

      (1) My most important comment refers to the MM/PBSA analysis, the results of which are shown in Figure S9: binding affinities of -40 to -50 kcal/mol are unrealistic. This would correspond to a dissociation constant of 10^-37 M. This analysis needs to be removed or corrected.

      We thank the reviewer for the comment! MM/PBSA analysis indeed cannot give realistic binding free energies. It does not include the configurational entropy loss, which should be a large positive value. In addition, while the implicit PBSA solvent model computes the solvation free energy, the absolute values may not be very accurate. However, because this is a commonly used energy calculation, and some readers may wish to see quantitative values confirming that the systems have stable intermolecular attractions, we kept the analysis in the SI. We edited the figure legend, moved Figure S10 to SI page 19, and added sentences to clearly state that the calculations do not include the configurational entropy loss: “Note that the energy calculations focus on non-bonded intermolecular interactions and solvation free energy calculations using MM/PBSA, where the configurational entropy loss during protein binding was not explicitly included.”

      (2) I think that the analysis of what in the different dBETx makes them cause different degradation potency is underdeveloped. The dihedral angle analysis (Figure 4B) did not explain the observed behavior in my opinion. Please add additional, clearer analysis as to what structural differences in the dBETx make them sample very different conformations.

      We thank the reviewer for the suggestion! Based on it, we further performed a dihedral entropy analysis for each dihedral angle in the linker part of each PROTAC to examine its flexibility. Because each PROTAC has a different linker, we now clearly label them in a new Figure S18 on SI page 27. Low dihedral entropies indicate a more rigid structure and thus less flexibility, making a PROTAC more difficult to rearrange to facilitate the protein structural dynamics necessary for ubiquitination.

      We added a detailed explanation on page 18: “Our dihedral entropy analysis showed that dBET57 has ~0.3 kcal/mol lower configurational entropy than the dBETs with the three other linkers, suggesting that dBET57 is less flexible than the other PROTACs (Figure S18).”

      (3) "The movement of the degradation machinery correlated with rotations of specific dihedrals of the linker region in dBETs (Figure 5).": this is not sufficiently clear from the figure. Definitely not in a quantitative way.

      We thank the reviewer for the suggestion! To further understand the correlation between the PROTAC dihedral angles and the movement of the degradation machinery, we performed a time-dependent correlation analysis correlating the dihedral angles of the PROTACs with the Lys-Gly distance (Figures 6 and S17).

      We included detailed explanation on page 16:

      “To further examine the correlation between PROTAC rotation and the Lys-Gly interaction, we performed a time-dependent correlation analysis. This analysis showed that PROTAC rotation translates into motion over time, leading to the Lys-Gly interaction, with a correlation peak around 60-85 ns marking the time of the interaction (Figure 6 and Figure S17). In addition, the pseudo dihedral angles showed a high correlation with the Lys-Gly distance (0.85 in the case of dBET1). This indicates that the degradation complex undergoes structural rearrangement that drives the Lys-Gly interaction.”

      (4) Cartoons are needed at multiple stages throughout the paper to enhance the clarity of what the modeled complexes looked like (e.g. which subunits they contained).

      We thank the reviewers for the suggestions. We added and remade several Figures with cartoons to better represent the stages. We also used higher resolution and included clearer labels for each protein system.

      (5) The difference between CRL4A E3 ligase and CRBN E3 ligase is not clear to the non-expert reader.

      Thanks for the reviewer’s comment! To clarify the terms "CRL4A E3 ligase" and "CRBN E3 ligase", which refer to different levels of description for the protein complexes, we added a couple of sentences in the Figure 1 legend. As a result, the non-expert readers can clearly know the differences.

      As illustrated in Figure 1,

      • CRL4A E3 ligase refers to the full E3 ligase complex, which includes all protein components such as CRBN, DDB1, CUL4A, and RBX1.

      • CRBN E3 ligase, on the other hand, is a more colloquial term typically used to describe just the CRBN protein, often in isolation from the full CRL4A complex.

      (6) Figure 1, legend: unclear why it's E3 in A and E2 in B.

      We thank the reviewer for the question! E3 ligase in Figure 1A refers to CRBN E3 ligase, which researchers also simply term CRBN. We have added a sentence to specify that CRBN E3 ligase is also termed CRBN for simplicity. In Figure 1B, “E2” was unclear. Its full name is E2 ubiquitin-conjugating enzyme; because the name is long, researchers also call it the E2 enzyme. We have corrected the legend and now use “E2 enzyme” for clarity.

      (7) "Although the protein-protein binding affinities were similar, other degraders such as dBET1 and dBET57 had a DC50/5h of about 500 nM". It's unclear what experimental data supports the assertion that the protein-protein binding affinities are similar.

      We thank reviewer for the question. Indeed, the statement is unclear.

      We corrected the sentence in page 6: “Although utilizing the exact same warheads, other degraders such as dBET1 and dBET57 had a DC<sub>50/5h</sub> of about 500 nM.”

      (8) Was the construction of the degradation machinery complex guided by experimental data (maybe cryo-EM or tomography)? If not, what is the accuracy of the starting complex for MD? This may impact the reliability of the obtained results.

      Thank you for your insightful comments! Yes, the construction of the degradation machinery complex was guided by available high-resolution crystal structures, which was selected to maintain consistency and align with the methodology established in a previously published JBC paper (https://doi.org/10.1016/j.jbc.2022.101653).

      We acknowledged that static crystal structures represent only a single snapshot of the system and may not capture the full conformational flexibility of the complex. To address this limitation, we performed MD simulations using multiple starting structures. This approach allowed us to explore a broader conformational landscape and reduced the dependence on any single starting configuration, thereby enhancing the reliability of the results.

      We hope this clarifies the robustness of our methodology and the steps taken to ensure accuracy in our simulations.

      (9) "With quantitative data, we revealed the mechanism underlying dBETx-induced degradation machinery": I think this may be too strong of an assertion. The authors may have developed a mechanistic hypothesis that can be tested experimentally in the future.

      We thank the reviewer for the suggestion. This is indeed a strong assertion and needs to be modified. We edited the sentence in page 7: “With quantitative data, we revealed the importance of the structural dynamics of dBETx-induced motions, which arrange positions of surface lysine residues of BRD4<sup>BD1</sup> and the entire degradation machinery.”

      (10) Figure S2: are the RMSDs calculated over all residues? Or just the BRD4 residues? Given that the structures are aligned with respect to CRBN, the reported RMSD numbers might be artificially low since there are many more CRBN residues than there are BRD4 residues. Also, why weren't the crystal structures used for dBET 23 and 70 for the modeling? Wouldn't you want to use the most accurate possible structures? Simulations were run for 23. Why not for 70?

      We thank the reviewer for the suggestion. We added a sentence to more clearly explain the RMSD calculations in Figure S2: “The structural superposition is performed based on the backbone of CRBN and RMSD calculation is conducted based on the backbone of BRD4<sup>BD1</sup>.”
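For concreteness, the fit-on-one-selection, measure-on-another protocol described in that sentence can be sketched with a standard Kabsch superposition. This is illustrative code on synthetic coordinates, not the analysis scripts used in the paper.

```python
import numpy as np

def kabsch(mobile, ref):
    """Least-squares rotation and centroids fitting `mobile` onto `ref`
    (both (N, 3)); standard Kabsch algorithm with reflection correction."""
    mc, rc = mobile.mean(axis=0), ref.mean(axis=0)
    H = (mobile - mc).T @ (ref - rc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T, mc, rc

def rmsd_after_fit(fit_mob, fit_ref, calc_mob, calc_ref):
    """Superpose on one selection (e.g. the CRBN backbone), then report
    RMSD on another (e.g. the BRD4 backbone), as in the Figure S2 protocol."""
    R, mc, rc = kabsch(fit_mob, fit_ref)
    moved = (calc_mob - mc) @ R.T + rc
    return float(np.sqrt(((moved - calc_ref) ** 2).sum(axis=1).mean()))
```

Fitting and measuring on different atom selections is what lets the BRD4 RMSD report genuine domain motion relative to CRBN rather than overall tumbling.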

      Although dBET70 has a crystal structure, its PROTAC conformation is not resolved, and thus we decided to still perform protein-protein docking for dBET70. dBET1 and dBET57 do not have crystal structures of their ternary complexes.

      We included the explanation on page 8: “Only the dBET23 crystal structure is available with the PROTAC and both proteins, while experimentally determined ternary complexes of dBET1, dBET57, and dBET70 are not available.”

      a. And there are no crystal structures available for 1 and 57? If so, please clearly say that. Otherwise please report the RMSD.

      We thank the reviewer for the suggestion. We included the explanation on page 8: “Only the dBET23 crystal structure is available with the PROTAC and both proteins, while experimentally determined ternary complexes of dBET1, dBET57, and dBET70 are not available.”

      (11) Table 2 is referenced before Table 1.

      We thank the reviewer for catching the error! We switched the order of Tables 1 and 2.

      (12) Figure S3 is not referenced in the main paper.

      We thank the reviewer for catching the error! We now reference Figure S3 on page 7.

      (13) Minor comments on grammar and sentence structure:

      a. It should be "binding of a ternary complex"

      b. "Our shows the importance": word missing.

      c. "...providing insights into potential orientations for ubiquitination. observe whether the preferred conformations are pre-organized for ubiquitination." Word or words missing.

      We thank the reviewer for catching the errors! We corrected grammatical errors and unclear sentences throughout the entire paper and revised the sentences to make them easier for non-expert readers to understand.

    1. Introduction and Purpose

      “I would like to tell you why fulcro is awesome and why it's much easier to learn than you might believe so we will look at what fulcro is and what it can do for you and why is it interesting...”

      • Emphasizes that the talk aims to introduce Fulcro, explain its ease of learning, and highlight its benefits.

      Speaker Background

      “So first of all who is… I've been doing back-end development since 2006 and front-end development since 2014 on and off…”

      • Establishes the speaker’s credibility with extensive development experience.

      “...I built learning materials for Fulcro beginners and I pair program with and mentor my private clients on their first Fulcro project...”

      • Demonstrates the speaker’s active role in teaching Fulcro to newcomers.

      Motivation for Fulcro

      “When I create web applications I want to be productive and I want to have fun… I don't want to have to manually track whether the data started loading or finished or failed…”

      • Highlights the desire to reduce boilerplate and tedious manual tasks.

      “I don't want to write tons of boilerplate and especially not to do that and again and again for every new type data in my application…”

      • Stresses that Fulcro removes repetitive coding patterns, enhancing developer efficiency.

      Choosing a Full-Stack Framework

      “Now there are simpler Frameworks… or you can pick a full stack framework that has all the parts you need…”

      • Explains how Fulcro’s integrated approach can be preferable to patching together multiple libraries.

      “...malleable web framework designed for sustainable development of real world full stack web applications...”

      • Defines Fulcro as a flexible system that supports complex, long-lived applications.

      Key Fulcro Capabilities

      “It can render data in the UI and it uses React so it's wraps React for that…”

      • Confirms that Fulcro uses React under the hood for rendering.

      “It can manage state… it keeps the state for you at some place… re-render the UI so it reflects that state…”

      • Describes automatic state management and reactive re-rendering.

      “It makes it easy to load data from the backend… you have full control...”

      • Emphasizes the fine-grained control over data fetching.

      “Fulcro also caches the data for you automatically and it does so in normalized form…”

      • Highlights how normalized data storage simplifies updates across the UI.

      “Fulcro has excellent developer experience for multiple reasons… the biggest is locality and navigability…”

      • Points out how Fulcro keeps relevant code together, making it easier to navigate and maintain.

      Core Principles

      1. Graph API / EQL (Edn Query Language)

      “...we use graph API instead of rest API which means that we have just a single endpoint and it's the front end which asks the back end for what data it wants by sending over a query…”

      • Simplifies data retrieval by letting the client specify exactly what it needs.
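
      To make the single-endpoint idea concrete, here is a hedged Python sketch (not actual Fulcro/EQL code; all data and attribute names are invented) of a server fulfilling a client-supplied query: the query is plain data, and the response mirrors its shape:

```python
def fulfill(query, data):
    # A query is a list of attributes; a join is a {attribute: subquery} dict.
    # The response contains only what was asked for, shaped like the query.
    result = {}
    for item in query:
        if isinstance(item, dict):                 # join into nested data
            for attr, subquery in item.items():
                child = data.get(attr)
                if isinstance(child, list):
                    result[attr] = [fulfill(subquery, c) for c in child]
                elif isinstance(child, dict):
                    result[attr] = fulfill(subquery, child)
        elif item in data:                         # plain attribute
            result[item] = data[item]
    return result

# Hypothetical data living behind the single endpoint.
person = {"person/name": "Ada",
          "person/age": 36,
          "person/address": {"address/city": "London", "address/country": "UK"}}

print(fulfill(["person/name", {"person/address": ["address/city"]}], person))
# {'person/name': 'Ada', 'person/address': {'address/city': 'London'}}
print(fulfill(["person/missing"], person))   # {} -- unknown data yields empty results
```

      The client names exactly the attributes it wants, and asking for something that does not exist simply produces an empty result rather than an error.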

      • UI as Pure Function of State

      “UI is pure function of state… components only ever get the data they need from their parent…”

      • Removes side effects from the rendering flow.

      • Locality

      “...to understand the UI component I shouldn't be forced to jump over four different files… so in Fulcro a component doesn’t have only a body but also a configuration map…”

      • Co-locates component queries, rendering, and logic in one place.

      • Normalized Client-Side State

      “...it stores that data normalized in a simple tabular form where entities contain other entities replaced with references…”

      • Ensures any update in one place is reflected throughout the UI.
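
      A hedged Python sketch of the normalization idea (a toy model, not Fulcro's real implementation; the entity shapes are invented): nested entities are stored once in per-type tables and replaced by (table, id) references, so a single update is visible everywhere the entity appears:

```python
def ident(entity):
    # An entity's ident: a (table, id) pair derived from its "<type>/id" key.
    for k, v in entity.items():
        if k.endswith("/id"):
            return (k.split("/")[0], v)
    return None

def normalize(entity, tables):
    # Store the entity (recursively) in tables and return its ident; nested
    # entities are replaced with references instead of being copied.
    flat = {}
    for k, v in entity.items():
        if isinstance(v, dict) and ident(v):
            flat[k] = normalize(v, tables)
        elif isinstance(v, list) and v and all(isinstance(c, dict) and ident(c) for c in v):
            flat[k] = [normalize(c, tables) for c in v]
        else:
            flat[k] = v
    table, eid = ident(entity)
    tables.setdefault(table, {})[eid] = flat
    return (table, eid)

tables = {}
todo = {"list/id": 1,
        "list/items": [{"item/id": 1, "item/text": "buy milk", "item/done": False},
                       {"item/id": 2, "item/text": "write talk", "item/done": False}]}
normalize(todo, tables)

tables["item"][2]["item/done"] = True     # one update...
print(tables["list"][1]["list/items"])    # ...seen via references: [('item', 1), ('item', 2)]
```

      Because the list stores references rather than copies, every component that renders item 2 sees the updated value after the single write.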

      Architecture Overview

      “...it's a full stack web framework so it has the front end and back end part… front end is Fulcro proper… the back end is Fulcro’s library Pathom…”

      • Describes the division between the Fulcro client and the Pathom-based server.

      “On the front end… we have client DB… we have a transactional subsystem… to the back end we have Pathom… as kind of adapter between the tree of data the UI wants and whatever data sources there are.”

      • Clarifies how Fulcro’s client and server components communicate via EQL queries and mutations.

      UI Rendering Process

      “...UI is a tree of components and for each component we have a query… these queries are composed up so that the root component’s query is the query for the whole page.”

      • Outlines how each component declares its data needs, culminating in a single root query.

      “...Fulcro takes this query, combines it with the client DB, and forms a tree of data that matches the query shape, then hands it off to the root to render.”

      • Demonstrates the round-trip from query to final rendered UI.
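
      The round-trip described above can be sketched in Python as a toy analogue of the "query plus client DB yields a tree" step (all names are hypothetical; this is not Fulcro's db->tree implementation): the root query is walked against normalized tables, references are resolved, and the result is a tree shaped exactly like the query:

```python
def db_to_tree(query, entity, tables):
    # Walk the query against one entity, resolving (table, id) references
    # back into nested data so the result mirrors the query's shape.
    result = {}
    for item in query:
        if isinstance(item, dict):                   # join: follow references
            for attr, subquery in item.items():
                ref = entity.get(attr)
                if isinstance(ref, list):
                    result[attr] = [db_to_tree(subquery, tables[t][i], tables)
                                    for (t, i) in ref]
                elif ref is not None:
                    t, i = ref
                    result[attr] = db_to_tree(subquery, tables[t][i], tables)
        elif item in entity:
            result[item] = entity[item]
    return result

# A hand-normalized client DB: entities live in tables, nesting is by reference.
tables = {"list": {1: {"list/id": 1, "list/items": [("item", 1), ("item", 2)]}},
          "item": {1: {"item/id": 1, "item/text": "buy milk"},
                   2: {"item/id": 2, "item/text": "write talk"}}}

root_query = ["list/id", {"list/items": ["item/text"]}]
print(db_to_tree(root_query, tables["list"][1], tables))
# {'list/id': 1, 'list/items': [{'item/text': 'buy milk'}, {'item/text': 'write talk'}]}
```

      The root component receives this tree and passes each subtree to the child whose query produced it.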

      Component Example

      “Here we can see how a Fulcro component looks in code… The most important part here is the query…”

      • Provides a code snippet showing query co-location with the component.

      “...the component also includes the queries of its child components so the parent can pass down just the needed data.”

      • Reinforces that data flows naturally down the component tree.

      Learning Fulcro

      “People have this assumption or believe that Fulcro is hard to learn but it's not…”

      • Dispels the notion of steep difficulty.

      “There are simpler frameworks that do just one thing… but you need to handle a number of tasks and that you need to work across both front end and back end…”

      • Explains why novices might find full-stack solutions initially overwhelming.

      “You need to rewire your brain… if you come in expecting that things just work the way you expect you will be running into walls…”

      • Advises a mindset shift for those accustomed to different paradigms.

      Recommended Beginner Resources

      “...the Fulcro Developer's Guide… it describes everything in great detail but it can be overwhelming…”

      • Mentions the official documentation’s comprehensive nature.

      “...start with the do it yourself Fulcro Workshop… play with the concepts in practice and see how they work...”

      • Suggests hands-on learning as the best first step.

      “...there's this minimalist Fulcro tutorial… tries to teach you the absolute minimum amount of things you need to know…”

      • Recommends a focused tutorial that avoids overload.

      Simplicity Through Principles

      “Fulcro doesn't do any magic… its operation is straightforward and very much possible to understand…”

      • Emphasizes that Fulcro’s complexity is principled, not opaque.

      “...UI is pure function of data, standard input of data is the graph API, standard output of side effects is the transaction subsystem, and data is data, meaning queries and mutations are just data.”

      • Summarizes how Fulcro simplifies data handling, state management, and side effects uniformly.

      Demo Highlights

      “So let's have a demo… a simple Fulcro application showing todo list…”

      • Introduces a working demonstration of a to-do list in Fulcro.

      “...every side effect goes through transaction subsystem so I should see data here and I do, I see that they are loading them…”

      • Illustrates how Fulcro logs and displays all transactions for debugging.

      “I can also see the response… the data mirrors the query… if I ask for something that doesn't exist I get back empty data…”

      • Demonstrates the transparency of EQL-based queries and responses.

      Conclusion and Key Takeaways

      “Takeaways… that full stack frameworks are really useful and especially that Fulcro is really worth looking into and learning is not hard if you are a little smart about it…”

      • Concludes that Fulcro offers an approachable path to building maintainable full-stack ClojureScript applications.

      “Here are some awesome resources especially the Fulcro Community guide where you find the workshop and tutorial…”

      • Reiterates the availability of community-driven materials to support new learners.

    1. Summary of the Tech Talk on Software Development Leverage

      Speaker's Background & Context

      • The speaker has experience with nine startups, with four successes (defined as acquired or still operational).

        "I've been involved in nine startups, four successes so far, success defined as either bought by somebody else or still exists."

      • Core interests include minimal degradation over time, maximum architectural clarity, and minimal boilerplate.

        "I want to build systems that have a minimal amount of that maximum architecture clarity... I want a small number of Core Concepts and I also want minimal boilerplate."

      • Prefers Clojure and ClojureScript due to Lisp features, a REPL, macros, full-stack capabilities, and immutable data.

        "The main things are that it's a Lisp, I've got a REPL, I've got macros, I've got full stack language immutable data and literals."

      Concept of Software Development Leverage

      • Defines leverage in software as maximizing efficiency while minimizing incidental complexity.

        "What’s the minimal amount of code I can write to build these things?"

      • Software generally consists of forms and reports, and optimizing these elements reduces complexity.

        "A lot of what we write are forms or reports essentially."

      • Critiques past attempts at UI and form abstraction (e.g., Informix 4GL, Visual Basic, Rails, Java Enterprise) as insufficient or overly complex.

        "Every kind of library on the planet trying to do the same sort of thing."

      • Identifies challenges in leverage: short levers, fragile systems, opposing mindsets, and complex structures.

        "You can have too short of a lever, the object that we're trying to move could be too big for the lever, or my strength... I could have a crowd of people who are just philosophically opposed to levers."

      Key Approaches to Leverage

      • Minimal Incidental Complexity: Reducing unnecessary complexity that accumulates over time.

        "We love minimal incidental complexity... other communities don’t even think about that."

      • Functional & Immutable Data Models: Advocates for a pure functional approach to state management and UI rendering.

        "The state of the world is some immutable thing, initialized somehow, then I walk from step to step running some pure function."

      • Generalized Pure Functions: Aiming for functional purity while acknowledging that some dynamism is needed.

        "To me, you're starting by breaking the ideal. You're saying, 'I’m not really going to use pure functions for that.'"

      • Component-Based Rendering: Prefers data-driven UI, minimizing reliance on React’s event-based state management.

        "A pure function, a render of some sort of transform of the world."

      Core Abstractions for Software Leverage

      1. Entity-Attribute-Value (EAV) Model: A flexible, normalized data structure for representing application state.

        "The first one is just the power of entity attribute value."

      2. Idents (Universal Entity Identifiers): Unique tuples ([type id]) for referencing entities.

        "The kind allows you to prevent collisions... useful semantic information."

      3. Graph Queries: Uses EDN-like queries to efficiently pull and update data.

        "Attach logic to graph queries that say when you get the result of this query, here's how you normalize it."

      4. Full-Stack Datified Mutations: CQRS-like abstractions over side effects and state transitions.

        "CQRS kind of idea... I’m going to make an abstract thing that says what I want to do."
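
      A minimal Python sketch of the "datified mutations" idea (hypothetical names, not any framework's API): a mutation is a plain (name, params) tuple; a pure handler produces the next local state, and the very same data value is queued for the remote:

```python
def toggle_done(state, params):
    # Pure handler: returns a new state, never mutates the old one.
    item = state[params["id"]]
    return {**state, params["id"]: {**item, "done": not item["done"]}}

HANDLERS = {"todo/toggle": toggle_done}

def transact(state, remote_queue, mutation):
    # A mutation is data: (name, params).  Apply it locally via its pure
    # handler and queue the identical data for the server (CQRS-style).
    name, params = mutation
    remote_queue.append(mutation)
    return HANDLERS[name](state, params)

state = {1: {"text": "buy milk", "done": False}}
queue = []
new_state = transact(state, queue, ("todo/toggle", {"id": 1}))
print(new_state[1]["done"], state[1]["done"], queue)
# True False [('todo/toggle', {'id': 1})]
```

      Because the mutation is just data, the same value can be logged, replayed, or shipped over the wire without any translation layer.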

      Emergent Benefits of This Approach

      • Normalized State Representation: Enables automatic merging of data, reducing complexity in state updates.

        "This gives me on my world, my immutable World in that diagram of kind of our idealized application."

      • Minimizing UI Boilerplate: Using annotated queries and data-driven components reduces manual UI code.

        "A UI location-specific way to annotate my UI... initial state is just a mirror of that."

      • Abstracting Side Effects: Remote calls and transactions become well-structured, reducing ad-hoc state management.

        "Transact things... processing system talks to remotes for side effects, talks to the database for local changes, and triggers renders."

      State Machines for Process Control

      • Advocates state machines for handling application logic, avoiding scattered imperative code.

        "Very often, process is just peppered around everywhere... having a state machine that abstracts over this is powerful."

      • Uses state charts (Harel state machines) for complex workflows like authentication.

        "State charts are way better when your state machine gets large."
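
      As a toy Python illustration of keeping process logic in one declarative table (this is a flat state machine, not a full Harel state chart, and the authentication states are invented):

```python
# All transitions for a hypothetical login flow live in one data structure.
AUTH = {
    "idle":                 {"start-login": "entering-credentials"},
    "entering-credentials": {"submit": "verifying", "cancel": "idle"},
    "verifying":            {"ok": "logged-in", "bad-password": "entering-credentials"},
    "logged-in":            {"logout": "idle"},
}

def advance(machine, state, event):
    # Look up the transition; unhandled events leave the state unchanged,
    # so "what can happen next" is defined in one place, not scattered.
    return machine.get(state, {}).get(event, state)

state = "idle"
for event in ["start-login", "submit", "bad-password", "submit", "ok"]:
    state = advance(AUTH, state, event)
print(state)  # logged-in
```

      State charts extend this flat table with nested and parallel states, which is why they scale better once the machine grows large.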

      Fulcro & RAD (Rapid Application Development)

      • Fulcro: A ClojureScript-based framework built on these principles.

        "How do I simplify F? How do I get these core pieces generic enough to reuse?"

      • RAD: Built to automate UI and backend generation, minimizing redundant work.

        "I really wanted to minimize the boilerplate right... tired of handwriting schema."

      • Plugins for Databases, Forms, Reports, and APIs: Reduces custom implementation for common application patterns.

        "Datomic support gives me my network API and integration with Datomic in 1900 lines of code."

      Key Takeaways

      • Graph-based, normalized application state leads to better leverage and scalability.
      • Functional purity where possible, and controlled side effects when necessary.
      • Automatic UI and backend generation through metadata and introspection.
      • Composable, small-core abstractions allow flexibility without unnecessary complexity.

      "A very small number of Core Concepts... it's pluggable, you can escape from everything... it's just an annotated data model."

      This approach significantly reduces the long-term maintenance cost of applications by emphasizing reusability, composition, and functional principles.

    1. Consumer goods that are taken for granted by people at all class levels in the United States, like telephones, refrigerators, and automobiles, are beyond the reach of the Dominican lower class and not a certainty for the middle class either

      I've seen this myself. Whenever my mom goes back to her home country, Bangladesh, to see her family, she brings all sorts of goods from NYC. I was under the impression that these goods didn't exist at all in Bangladesh, but it turns out that was not the case. For example, she bought a decent amount of drugstore makeup for them and mentioned that they requested some of it. She also said that they have most of these same (drugstore) makeup brands in Bangladesh; it's just way more expensive and therefore not worth paying for the quality, so they ask her to bring it for them when she visits.

    1. It’s surprising to me that this is the fourth Oura Ring and that these problems, in addition to inaccurate step counting, haven’t already been solved. Wake me up when they are.

      Reflection: This article caught my attention because my partner just got an Oura Ring after we talked about it over the holidays. We were both curious about how well it actually tracks sleep and if it’s worth the hype. I liked how the article broke down the features in a way that made sense, especially the focus on comfort and how it’s less bulky than other trackers. The idea that it measures things like heart rate and body temperature to give a full picture of sleep is really interesting. My partner has already started using it, and we have been looking at the data together, trying to see if it actually helps with better sleep habits. I also liked how the article explained the app in a simple way, since some tech reviews can feel too complicated. The battery life seems like a big plus too, since constant charging can be annoying. Reading this made me think about how good writing makes even technical topics easy to understand, which is something I want to work on in my own writing.

    1. the ending pulls the accent ahead with it: MO-dern, but mo-DERN-ity, not MO-dern-ity. That doesn’t happen with WON-der and WON-der-ful, or CHEER-y and CHEER-i-ly. But it does happen with PER-sonal, person-AL-ity.

      This is one of the most irritating things in the English language to me. Studying Japanese, one of the first things you're taught is that no syllable is stressed more than any other when saying a word. Japanese is a mora-timed language, whereas English is a stress-timed language. Mora-timed languages don't put stress on certain syllables the way we do in English. By changing the way we use a word, and thereby changing our intonation, the language just keeps getting more confusing; stress one syllable wrong and everyone in a three-mile radius will go, "Why did you say that like that?"

    2. What’s the difference? It’s that -ful and -ly are Germanic endings, while -ity came in with French.

      I had no idea that either ending came from a different language. I guess I just assumed that they were already a part of the English language.

    1. In other words, peoples are not always subjects constantly confronting history as some academics would wish, but the capacity upon which they act to become subjects is always part of their condition. This subjective capacity ensures confusion because it makes human beings doubly historical or, more properly, fully historical. It engages them simultaneously in the sociohistorical process and in narrative constructions about that process.

      I agree with this idea because it shows that people aren't just passively living through history; instead, they are also making their own stories about it. It makes sense that people can be shaped by history and at the same time try to make sense of it through their own perspective. But this can cause confusion because it’s hard to separate what’s really happening in history from the way we tell our own personal stories about it.

    1. The great unbroken silence in learning’s secret things; The lore of all the learned, the seed of all which springs. Living or lifeless, still or stirred, whatever beings be, None of them is in all the worlds, but it exists by Me!

      This particular quote's reference to "secret things" is not in regard to actual secrets but to the truth or origin of life. Examples such as "lore of all the learned, the seed of all which springs" explain that all that exists is because of and within us all. This shows that all of nature owes its existence and origin to God, but there is a bit of that God (or divine) in each of us, showing that God is not just around us but within us all as well. "All objects, both manifest and unmanifest, including nature are created out of God at the beginning of the kalpa and after completing their life cycles all objects dissolve into God at the end of the kalpa (creation)". This shows that the cycle of life ends where it begins: by being born and created from a God and ending that cycle by becoming a part of that God's divinity. Source: Mohanty, Susil Kumar. "Scientific Analysis of 'The Bhagavad Gita' on God Reflecting Ancient Indian Culture." International Journal of Science and Research (IJSR), vol. 13, 2024.

    1. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeting advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’ [h13].

      This highlights the ethical dilemma of social media’s data-driven business model. While targeted ads can be useful and even beneficial to consumers, they also create risks when used to exploit vulnerabilities—such as addictive behaviors or manipulative content. The pursuit of engagement at all costs raises concerns about user autonomy, mental health, and the broader societal impact of algorithm-driven influence. It’s a reminder that the power these platforms hold isn’t just about profit—it shapes culture, behavior, and even democracy.

    1. Words like romantic, plastic, values, human, dead, sentimental, natural, vitality, as used in art criticism, are strictly meaningless, in the sense that they not only do not point to any discoverable object, but are hardly even expected to do so by the reader.

      What makes him view these words as useless? I am going to college for animation and feel the words living, dead, and human are all great descriptors for how one's art can look. He makes a good point about it being an opinion of how the art looks, but all art is subjective. One person pours their views of the world into a visual piece, and people may interpret it in many different ways. I do think that you should include what makes the art seem alive or dead, not just use those words as sole descriptors, e.g., "The outstanding feature of Mr. X's work is its ability to imitate living quality through its _ and _." As I write this, I can understand how it could be seen as a meaningless word... interesting.

    2. euphonious

      (I just reverse-engineered the naming convention of the instrument the Euphonium... Eu/phon/ium--literally just "The good sound -er") On another note: where is the dividing line, Orwell? Does the use of this word here not fall into the category of "Pretentious Diction" as you laid out before, given that it's a Greek-based word?

    3. On the one side we have the free personality: by definition it is not neurotic, for it has neither conflict nor dream. Its desires, such as they are, are transparent, for they are just what institutional approval keeps in the forefront of consciousness; another institutional pattern would alter their number and intensity; there is little in them that is natural, irreducible, or culturally dangerous. But on the other side, the social bond itself is nothing but the mutual reflection of these self-secure integrities. Recall the definition of love. Is not this the very picture of a small academic? Where is there a place in this hall of mirrors for either personality or fraternity?

      ...Okay I can't find the flaw. I can feel it certainly, it's so wordy I can hardly follow, but... I don't know---this isn't the example of "Bad Writing" I'd have expected to find in an English class...

    4. It is often easier to make up words of this kind (deregionalize, impermissible, extramarital, non-fragmentatory and so forth) than to think up the English words that will cover one’s meaning. The result, in general, is an increase in slovenliness and vagueness.

      There isn't anything completely wrong with using words like "non-fragmentatory"; I think it just has to do with the mindset of the one writing it. Orwell seems to have a problem with people who don't actually write mindfully. To me this is still about the overuse of these words, but it's because they tend to come from people who don't absorb their own ideas.

    5. The first is staleness of imagery; the other is lack of precision. The writer either has a meaning and cannot express it, or he inadvertently says something else, or he is almost indifferent as to whether his words mean anything or not.

      I was seriously thinking the same thing. There were some snippets that invoked some sort of imagery, but they never felt engaging. On the other point, I totally agree: it's just a bunch of fluff that doesn't really mean anything.

    1. The film depicts how in times of true desperation to control their own reproductive choices women came together to create something bigger than themselves. Despite the constant threat of legal repercussions, they created a support network that prioritized women’s health and autonomy.

      The "Jane" network wasn't just about providing a medical service it was about community, trust, and the fight to ensure women didn’t have to navigate their most personal decisions in isolation. In today’s world, that sense of solidarity is just as important. Whether it’s the voices of activists, healthcare providers, or everyday women demanding better protections for reproductive rights, the power of women coming together to fight for change is the heart of the movement.

    1. I hate seeing black people take over and in my eyes ruin childhood memories I adore.

      This reminds me of the discourse that took place with The Little Mermaid live action movie. It is so bizarre that where people draw the line with a mythical and magical character like a mermaid is their race. It's okay if they have every magical power but being Black is just too unrealistic. This discourse is cyclical.

    2. The film was praised by some reviewers as a “must see,” a rare cinematic experience that “exhibit[s] the courage and perseverance that gives us all hope.

      I don't know if it's just me, but the idea of having a movie that profits off a story about the abuse of a black girl feels... wrong? Critics are talking about it like some show, and while it is a film, it is one that should provoke thought and discussion rather than hype. I don't know if that makes sense.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      This study uses single nucleus multiomics to profile the transcriptome and chromatin accessibility of mouse XX and XY primordial germ cells (PGCs) at three time-points spanning PGC sexual differentiation and entry of XX PGCs into meiosis (embryonic days 11.5-13.5). They find that PGCs can be clustered into sub-populations at each time point, with higher heterogeneity among XX PGCs and more switch-like developmental transitions evident in XY PGCs. In addition, they identify several transcription factors that appear to regulate sex-specific pathways as well as cell-cell communication pathways that may be involved in regulating XX vs XY PGC fate transitions. The findings are important and overall rigorous. The study could be further improved by a better connection to the biological system, including the addition of experiments to validate the 'omics-based findings in vivo and putting the transcriptional heterogeneity of XX PGCs in the context of findings that meiotic entry is spatially asynchronous in the fetal ovary. Overall, this study represents an advance in germ cell regulatory biology and will be a highly used resource in the field of germ cell development.

      Strengths:

      (1) The multiomics data is mostly rigorously collected and carefully interpreted.

      (2) The dataset is extremely valuable and helps to answer many long-standing questions in the field.

      (3) In general, the conclusions are well anchored in the biology of the germ line in mammals.

      Weaknesses:

      (1) The nature of replicates in the data and how they are used in the analysis are not clearly presented in the main text or methods. To interpret the results, it is important to know how replicates were designed and how they were used. Two "technical" replicates are cited but it is not clear what this means.

      The two independent technical replicates comprised different pools of paired gonads. This sentence was added to the methods section of the revised manuscript.

      (2) Transcriptional heterogeneity among XX PGCs is mentioned several times (e.g., lines 321-323) and is a major conclusion of the paper. It has been known for a long time that XX PGCs initiate meiosis in an anterior-to-posterior wave in the fetal ovary starting around E13.5. Some heterogeneity in the XX PGC populations could be explained by spatial position in the ovary without having to invoke novel subpopulations.

      We thank the reviewer for pointing out this important biological phenomenon. We also recognize that transcriptional heterogeneity among XX PGCs is likely due to the anterior-to-posterior wave of meiotic initiation in E13.5 ovaries and highlight this possibility in our manuscript. However, since our study utilizes single-nucleus RNA-sequencing and not spatial transcriptomics, we are not able to capture the spatial location of the XX PGCs analyzed in our dataset. As such, our analysis applied clustering tools to classify the populations of XX PGCs captured in our dataset. 

      (3) There is essentially no validation of any of the conclusions. Heterogeneity in the expression of a given marker could be assessed by immunofluorescence or RNAscope.

      In our revised manuscript, we included immunofluorescence staining of potential candidate factors involved in PGC sex determination, such as PORCN and TFAP2C. Testing and optimizing antibodies for the targets identified in this study are ongoing efforts in our lab and we look forward to sharing our results with the research community.

      (4) The paper sometimes suffers from a problem common to large resource papers, which is that the discussion of specific genes or pathways seems incomplete. An example here is from the analysis of the regulation of the Bnc2 locus, which seems superficial. Relatedly, although many genes and pathways are nominated for important PGC functions, there is no strong major conclusion from the paper overall.

      In this manuscript, we set out to use computational tools to identify candidate factors, some already known and many others unknown, involved in the developmental pathways of PGC sex determination. Our goal, as a research group and with future collaborators, is to screen these interesting candidates and discover their function in the primordial germ cell. The research presented in this study represents a launching pad from which to identify future projects that will investigate these factors in further detail.

      Reviewer #2 (Public Review):

      Summary:

      This manuscript by Alexander et al describes a careful and rigorous application of multiomics to mouse primordial germ cells (PGCs) and their surrounding gonadal cells during the period of sex differentiation.

      Strengths:

      In thoughtfully designed figures, the authors identify both known and new candidate gene regulatory networks in differentiating XX and XY PGCs and sex-specific interactions of PGCs with supporting cells. In XY germ cells, novel findings include the predicted set of TFs regulating Bnc2, which is known to promote mitotic arrest, as well as the TFs POU6F1/2 and FOXK2 and their predicted targets that function in mitosis and signal transduction. In XX germ cells, the authors deconstruct the regulation of the premeiotic replication regulator Stra8, which reveals TFs involved in meiosis, retinoic acid signaling, pluripotency, and epigenetics among predictions; this finding, along with evidence supporting the regulatory potential of retinoic acid receptors in meiotic gene expression is an important addition to the debate over the necessity of retinoic acid in XX meiotic initiation. In addition, a self-regulatory network of other TFs is hypothesized in XX differentiating PGCs, including TFAP2c, TCF5, ZFX, MGA, and NR6A1, which is predicted to turn on meiotic and Wnt signaling targets. Finally, analysis of PGC-support cell interactions during sex differentiation reveals more interactions in XX, via WNTs and BMPs, as well as some new signaling pathways that predominate in XY PGCs including ephrins, CADM1, Desert Hedgehog, and matrix metalloproteases. This dataset will be an excellent resource for the community, motivating functional studies and serving as a discovery platform.

      Weaknesses:

      My one major concern is that the conclusion that PGC sex differentiation (as read out by transcription) involves chromatin priming is overstated. The evidence presented in the figures includes a select handful of genes including Porcn, Rimbp1, Stra8, and Bnc2 for which chromatin accessibility precedes expression. Given that the authors performed all of their comparisons between XX versus XY datasets at each timepoint, have they missed an important comparison that would be a more direct test of chromatin priming: between timepoints for each sex? Furthermore, it remains possible that common mechanisms of differentiation to XX and XY could be missing from this analysis that focused on sex-specific differences.

      We thank the reviewer for their thoughtful assessment and suggestions. We note that chromatin priming in PGCs prior to sex determination is a well-documented research finding (see references below) that is further supported by our single-nucleus multiomics data. To support these findings previously stated in the scientific literature, we included data demonstrating the asynchronous correlation between chromatin accessibility and gene expression during PGC sex determination. Specifically, we investigated the associations of differentially accessible chromatin peaks with differentially expressed genes for each PGC type (between sexes and across embryonic stages) using computational tools and methods that are well established and widely applied by the research community. In our manuscript, we note that the patterns we identified support a potential role of chromatin priming in PGC sex determination. Nevertheless, we further highlight that a comprehensive profile of 3D chromatin structure and enhancer-promoter contacts in differentiating PGCs is needed to fully understand how changes to chromatin facilitate PGC sex determination.

      References:

      (1) Chen, M., et al. Integration of single-cell transcriptome and chromatin accessibility of early gonads development among goats, pigs, macaques, and humans. Cell Reports 41 (2022).

      (2) Huang, T.-C. et al. Sex-specific chromatin remodelling safeguards transcription in germ cells. Nature 600, 737–742 (2021).

      Reviewer #3 (Public Review):

      Summary:

      Alexander et al. reported the gene-regulatory networks underpinning sex determination of murine primordial germ cells (PGCs) through single-nucleus multiomics, offering a detailed chromatin accessibility and gene expression map across three embryonic stages in both male (XY) and female (XX) mice. It highlights how regulatory element accessibility may precede gene expression, pointing to chromatin accessibility as a primer for lineage commitment before differentiation. Sexual dimorphism in these elements and gene expression increases over time, and the study maps transcription factors regulating sexually dimorphic genes in PGCs, identifying sex-specific enrichment in various transcription factors.

      Strengths:

      The study includes step-wise multiomic analysis with some computational approach to identify candidate TFs regulating XX and XY PGC gene expression, providing a detailed timeline of chromatin accessibility and gene expression during PGC development, which identifies previously unknown PGC subpopulations and offers a multimodal reference atlas of differentiating PGC clusters. Furthermore, the study maps a complex network of transcription factors associated with sex determination in PGCs, adding depth to our understanding of these processes.

      Weaknesses:

      While the multiomics approach is powerful, it primarily offers correlational insights between chromatin accessibility, gene expression, and transcription factor activity, without direct functional validation of identified regulatory networks.

      As stated in our response above to a similar concern, we note that our research study represents a launching pad from which to identify future projects that will investigate, in further detail, the candidates that may be involved in PGC sex determination. With this rich dataset in hand, our goal in future research projects is to screen these candidates and discover their function in PGCs.

      Response to Recommendations

      Reviewer #1 (Recommendations For The Authors):

      (1) Clarify at first introduction how combined ATAC-seq/RNA-seq multiomics libraries were prepared, including whether ATAC and RNA-seq data are from the same cell.

      This information was added to the introduction of the revised manuscript.

      (2) Clarify what the two technical replicates represent. Are they two libraries from the same gonad or the same pool of gonads? Are they from 2 different gonads?

      The two independent technical replicates comprised different pools of paired gonads. This sentence was added to the methods section of the revised manuscript.

      (3) In Supplemental Figure 1, there is substantial variation in the number of unique snATAC-seq fragments between some conditions. Could this create a systematic bias that affects clustering?

      We recognize the concern that substantial variation in the number of unique snATAC-seq fragments between conditions could potentially create a systematic bias that affects clustering. However, we analyzed our snATAC-seq dataset with Signac, which performs term frequency-inverse document frequency (TF-IDF) normalization. This is a process that normalizes across cells to correct for differences in cellular sequencing depth. Given that sequencing depth was taken into account in our normalization and clustering procedures, and that the unbiased clustering of PGCs also reflects the sex and embryonic stage of PGCs, we are confident that the clustering of the snATAC-seq datasets closely reflects the biological variability present in the PGCs collected.

      References:

      Signac website: https://stuartlab.org/signac/articles/pbmc_vignette

      Stuart, T., Srivastava, A., Madad, S., Lareau, C. A., & Satija, R. (2021). Single-cell chromatin state analysis with Signac. Nature methods, 18(11), 1333-1341.
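      For intuition, the depth correction described in the response above can be sketched in a few lines. The following is a minimal NumPy sketch of the log-TF-IDF scheme that Signac's `RunTFIDF` implements in R (default method); the function name and toy matrix are illustrative, not part of the authors' pipeline.

```python
import numpy as np

def tfidf_normalize(counts, scale_factor=1e4):
    """Log-TF-IDF normalization of a peaks-by-cells count matrix,
    mirroring the default behavior of Signac's RunTFIDF.
    Rows are peaks, columns are cells."""
    # Term frequency: divide each cell's counts by that cell's total
    # depth, correcting for differences in sequencing depth per cell.
    tf = counts / counts.sum(axis=0, keepdims=True)
    # Inverse document frequency: peaks accessible across many cells
    # carry less cluster-discriminating signal and are down-weighted.
    idf = counts.shape[1] / counts.sum(axis=1, keepdims=True)
    return np.log1p(tf * idf * scale_factor)

# Toy matrix: 3 peaks x 4 cells with very different total depths.
counts = np.array([[2., 0., 8., 1.],
                   [0., 1., 4., 0.],
                   [1., 1., 4., 1.]])
norm = tfidf_normalize(counts)
```

      Because each cell's counts are first scaled by that cell's own depth, deeply sequenced cells do not end up looking systematically "more accessible", which is the bias the response argues is controlled for before clustering.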

      (4) In Figures 2a, 2e, 3a, and 3e, the visualization scheme is very difficult to follow. It's very hard to see the colors corresponding to average expression for many genes because the circles are so small. In addition, the yellow color is hard to see and makes it hard to estimate the size of the circle since the boundaries can be indistinct. I recommend using a different visualization scheme and/or set of size scales be used.

      In Figures 2a, 2e, 3a, and 3e, we chose this color palette to be inclusive of viewers who are colorblind. The chosen colors are visible on both a computer screen and on printed paper. We also included a legend of the color scale and dot size representing the average expression and percent of cells expressing the gene, respectively. If the color cannot be seen, it is because the cell population is not expressing the gene.

      (5) Perform in vivo validation (immunofluorescence or RNAscope) of at least some targets implicated in PGC development by this study.

      Such validations (immunofluorescence staining of PORCN and TFAP2C) are now included in Figure 4 and the supplement.

      (6) In line 351, the authors state that "we observed a strong demarcation between XX and XY PGCs at E12.5-E13.5." But in Figure 1j it looks like a reasonably high fraction of both XX and XY E12.5 cells are in cluster 1, which should mean that there is some overlap.

      While it is true that Figure 1j shows overlap of both XX and XY E12.5 cells in cluster 1, we were commenting on the separation of E12.5 XX (clusters 4 and 5) and E12.5 XY (clusters 8 and 9) PGCs. We have modified the sentence beginning at line 351 to state that the separation between XX and XY PGCs occurs at E13.5.

      (7) In lines 404-405: "We first linked snATAC-seq peaks to XY PGC functional genes". It is important to know how the peaks were linked to genes.

      We added the following sentence to address this comment: “Peak-to-gene linkages were determined using Signac functionalities and were derived from the correlation between peak accessibility and the intensity of gene expression.”
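      Conceptually, the correlation step behind such peak-to-gene linkage can be illustrated as follows. This is a toy Python sketch, not the authors' code; Signac's `LinkPeaks` additionally assesses significance of each correlation against a matched-background null, which is omitted here, and all names are illustrative.

```python
import numpy as np

def link_peaks_to_gene(peak_matrix, gene_expr, min_abs_r=0.5):
    """Correlate each peak's accessibility (rows of peak_matrix,
    cells in columns) with one gene's expression across the same
    cells; report peaks whose Pearson's r passes a threshold."""
    # Center both variables, then compute Pearson's r per peak.
    x = peak_matrix - peak_matrix.mean(axis=1, keepdims=True)
    y = gene_expr - gene_expr.mean()
    r = (x @ y) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y))
    return r, np.flatnonzero(np.abs(r) >= min_abs_r)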

      (8) In Supplemental Figure 5c, the XX E11.5 condition has a substantially higher fraction of ATAC peaks at promoter regions compared to the others. Does this have statistical and biological significance?

      This is an interesting observation, but one that lies beyond the scope of our manuscript. Many interesting questions arise from this study, and we plan to investigate this one further in future work.

      (9) Line 885: "The increased number of DA peaks at E13.5 may be the result of changes to chromatin structure as XX PGCs enter meiotic prophase I"; but in Figure 4b, there's only a modest increase in DAP number from E12.5 to E13.5 in XX PGCs, compared to a massive gain in XY PGCs.

      In our manuscript, we comment on both phenomena: the doubling of differentially accessible peaks in XX PGCs from E12.5 to E13.5 and the massive increase in differentially accessible peaks in XY PGCs from E12.5 to E13.5. In our description of these results, we propose several hypotheses leading to these increases in differentially accessible peaks. As such, it cannot be ruled out that the changes to chromatin structure that occur during meiotic prophase I contribute to the gain in differentially accessible peaks in XX PGCs at E13.5, and we included this statement in the manuscript accordingly.

      Reviewer #2 (Recommendations For The Authors):

      (1) The methods state at line 141 that nuclei with mitochondrial reads of more than 25% were removed, however our understanding from the Bioconductor manual and companion manuscript (Amezquita, R.A., Lun, A.T.L., Becht, E. et al. Orchestrating single-cell analysis with Bioconductor. Nat Methods 17, 137-145 (2020). https://doi.org/10.1038/s41592-019-0654-x) is that snRNA-seq approaches remove mitochondrial transcripts entirely and datasets containing mitochondrial transcripts are thought to feature incompletely stripped nuclei. It is thought that mitochondrial transcripts participating in nuclear import may remain hanging on to the nuclear envelope and get encapsulated into GEMs. If the mitochondrial read cutoff of 25% was used intentionally to keep this potentially contaminating signal, please justify why this was done for this dataset.

      We agree with the reviewer that the presence of mitochondrial transcripts may be potentially contaminating signal. In our preprocessing steps, we removed the mitochondrial genes and transcripts from our datasets so that they would not influence or affect our analyses. The following sentence was added to the methods section on snRNA-seq data processing: “Mitochondrial genes and transcripts were removed from the snRNA-seq datasets to eliminate any potentially contaminating signal.”

      (2) Methods line 227: please include log2fold change and p-adjusted value cutoffs for GO enrichment.

      We used clusterProfiler for our GO enrichment analysis. Our GO enrichment analysis did not involve a log2 fold-change cutoff, and the p-adjusted value cutoff is stated in the methods.
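      For context, the over-representation test underlying GO enrichment in tools such as clusterProfiler reduces to a hypergeometric upper-tail probability, which is why a fold-change cutoff is not part of the test itself. A minimal standard-library sketch (the function name is illustrative):

```python
from math import comb

def go_enrichment_p(k, n, K, N):
    """Hypergeometric upper-tail p-value: the probability of seeing
    at least k genes annotated to a GO term of size K when drawing a
    DEG list of n genes from a universe of N genes."""
    tail = sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1))
    return tail / comb(N, n)
```

      Significance across the many GO terms tested is then controlled with a p-value adjustment (e.g. Benjamini-Hochberg), which corresponds to the p-adjusted cutoff reported in the methods.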

      (3) Results line 310: the claim that "At E12.5-E13.5, XY PGCs converged onto a single distinct population (cluster 7), indicating less transcriptional diversity among E12.5-E13.5 XY PGCs when compared to E12.5-E13.5 XX PGCs (Fig1d)" would be strengthened if the authors quantified transcriptional distance with distance metrics such as Euclidean or cosine distance.

      We used a clustering approach to gain insights into the transcriptional diversity of PGC populations. Using an additional metric, such as Euclidean or cosine distance, would not provide meaningful information beyond what is already achieved by clustering, nor would it change the conclusions presented in the manuscript.

      (4) Results line 317: the authors allude to Lars2 defining clusters 2 & 3 as a marker gene, but it is not clear why this is highlighted until the reader reaches the discussion, which alludes to the published role of Lars2 in reproduction. Please consider moving this sentence to the results section for clarity and perhaps expanding the discussion on the meaning.

      To provide clarity, we added the statement “genes with reported roles in reproduction” to the results section.

      (5) In Figure 2a, why do the authors choose to focus on Zkscan5 in XY PGCs when it is expressed by such a small portion of cells (<25%)? Do they assume that this is due to dropouts?

      We chose to focus on Zkscan5 as an example because of its enriched and differential expression in male PGCs, because the Zkscan5 motif is not enriched in female PGCs, and because of the reported roles of Zkscan5 in regulating cellular proliferation and growth. Zkscan5 is an example of how candidate genes can be identified for further investigation.

      (6) Line 461: "the population of E13.5 XX PGCs displaying the strongest Stra8 expression levels corresponded to the same population of XX PGCs with the highest module score of early meiotic prophase I genes (Figure 3c; Supplementary Fig. 3a-b)". However, did the authors also consider examining the Stra8+ XX PGCs that do not robustly express meiotic genes to understand more about their differentiation potential?

      We are thankful to the reviewer for this suggestion. However, this research question is beyond the scope of the manuscript. We plan to investigate further in future research studies.

      (7) Line 505: "when we searched for the presence of RA receptor motifs in peaks linked to genes related to meiosis and female sex determination, we found that Stra8, Rec8, Rnf2, Sycp1, Sycp2, Ccnb3, and Zglp1 contain the RA receptor motifs in their regulatory sequences (Supplementary Figure 4g)." My read of the text is that the authors are not taking a side on the RA and meiosis controversy, but rather trying to reveal what the data can tell us, and the answer is that there is a strong signature linking RA to meiotic genes, which supports this as a valid biological pathway. But what is the strength of the RA>meiosis pathway compared to other mechanisms (which must be functioning in the triple receptor KO)? Perhaps the authors could take this analysis further with the following questions: (1) ask whether meiotic genes are more enriched in RA motifs compared to other expressed genes or other motifs (2) compare the strength of peak-gene correlations for all peaks containing RA receptor motifs vs. those with peaks for Zglp1, Rnf2, etc binding. The strengths of these correlations could provide clues to how much gene expression varies in response to RA exposure vs. modulation of these other factors and thus tell us something about how much RA is playing a role.

      We agree with the reviewer that this is a very interesting and important question. We also thank the reviewer for their thoughtful suggestions on the types of bioinformatics analyses that could answer this question. However, the section on RA signaling during PGC sex determination is only a small part of the manuscript and would be better analyzed in greater detail in a future research study or publication.

      (8) The shift from promoters in E11.5 XX PGCs to distal intergenic regions is fascinating. What can we learn about epigenetic reprogramming/methylation changes across gene bodies? 

      We agree with the reviewer that this is an interesting question about gene regulation in E11.5 XX PGCs. However, we prefer to analyze the epigenetic reprogramming changes across gene bodies in this cell population in additional research studies. Our purpose and goal for this section was to link differentially accessible chromatin peaks with differentially expressed genes to identify putative gene regulatory networks.

      (9) Line 581: why did the authors choose to highlight and validate PORCN1 in PGCs? Please elaborate.

      As stated in the manuscript, we chose to highlight and validate PORCN1 in PGCs because of its role in WNT signaling and because of the visibly strong correlation between chromatin accessibility at the XX-enriched DAP in Fig. 4c (dashed box) and the gene expression of PORCN1.

      (10) Figure 5f would be easier to interpret if presented as two columns rather than a circle; show one line of the proteins and the other line with the transcripts so that each is on the same line and there are connections between them.

      This comment is related to stylistic preferences. The purpose of Fig. 5f is to demonstrate that the candidate transcription factors may regulate the expression of other enriched transcription factors. Figure 5f accomplishes this goal.

      (11) Line 640: "The predicted target genes of TCFL5 totaled 74% (367/494) of all DEGs with peak-to-gene linkages in XX PGCs". This seems like a high number and a lot of work for just TCFL5; given the overlap between other TFs and target genes, how many of these 367 target genes overlap with other TFs?

      We agree with the reviewer that this is an important point to clarify. We added the following sentence to the results section on TCFL5: "A large majority of the predicted target genes of TCFL5 were also predicted to be the target genes of the enriched TFs presented in Fig. 5e, i.e., the predicted target genes of these TFs overlapped with 4%-100% of the predicted target genes of TCFL5."

      (12) The presentation of TCFL5 in the results section would make more sense with the additional mention of reproductive phenotypes already known (currently in the discussion Lines 914-917). I would furthermore suggest that the discussion goes into more depth on the difference between the regulatory network of TCFL5 in XX meiosis vs XY.

      We thank the reviewer for this comment; however, we already state in the results section that TCFL5 is known to influence XX PGC sex determination.

      (13) In the Methods, please state more clearly for those not familiar that the genetic background of mice is mixed.

      We described the mice with their official names, which provides the context of their genetic backgrounds.

      (14) Please specify which morphologic criteria were used to verify the stage of embryos in the methods.

      We added the following text to the methods section of the revised manuscript: “Plug date was used to determine the stage of embryos collected for single-nucleus RNA-seq and ATAC-seq. The stage of E11.5 embryos was confirmed by counting somites. The stage of embryos collected at E12.5 was confirmed by the morphological presence of the vessel and cords of the testes collected from XY embryos. Similarly, we confirmed the stage of embryos collected at E13.5 by the size of the gonads, the presence of more distinct cords in the testes of XY embryos, and the elongation of the ovaries of XX embryos.”

      (15) The total number of cells and PGCs that passed QC and are included in UMAPS should be stated.

      The requested information was added to the legend for Fig. 1 of the revised manuscript: “The number of PGCs per sex and embryonic stage are: 375 E11.5 XX PGCs; 1,106 E12.5 XX PGCs; 750 E13.5 XX PGCs; 110 E11.5 XY PGCs; 465 E12.5 XY PGCs; and 348 E13.5 XY PGCs.”

      (16) The order of timepoints changes between figures, and this is not for any obvious reason. Please make it consistent. Figures 1 and 6 list XX 11.5, 12.5, 13.5, and the same for XY, but Figures 2, 3, and 4 use the reverse order: XY E13.5, E12.5, E11.5, and then XX. 

      We thank the reviewer for this comment. However, we chose this order for each figure to match the coordinates of the graphs, i.e., where we expect the reader to begin reading each graph. For example, in Figure 3a, XX E11.5 is closest to the x-axis and would be expected to be read first.

      (17) In Figure S2 the colors of clusters are hard to distinguish, and it is suggested that the cluster numbers should be listed above each colored bar to avoid frustration.

      We made the suggested correction to Figure S2.

      (18) In Figures 2e and 3e: what do the dashed boxes indicate?

      The dashed boxes guide the reader's eye to the fact that the order of transcription factors/genes under the Cistrome DB regulatory potential score and gene expression plots is the same.

      (19) In Figure 5a: break panels into i-iv so that the in-text call-outs are not all the same.

      We made the suggested correction to Figure 5a and modified the in-text call-outs.

      (20) Please indicate XX in Figure 5e and XY in Figure 5l.

      We made the suggested correction to Figure 5e and 5l.

      (21) In Figure S5c: Please reorganize DA chromatin peak charts so that columns are XX and XY with rows at the same timepoint.

      We made the suggested correction to Figure S5c.

      (22) In Figure S7a: please make images larger so that the overlapping expression of PORCN and TRA98 is more visible, and consider adding a more magnified panel.

      This image is now included in the main text, with expanded panels.

      (23) Line 742-754: this seems like a long introduction for the results section; please consider tightening it up.

      We believe this text is important and necessary to provide context to the bioinformatics analyses of cell signaling pathways in PGCs. Not all readers will be familiar with the ligand-receptor signals between gonadal support cells and PGCs, and this text provides details on which signaling pathways are known to direct sex determination of PGCs.

      (24) For UMAP plots in Figures 2c, 3c, S3b, and S4b, the text overlaid with the timepoints and sexes onto the UMAP plots is misleading, as it allows the reader to presume that the entire group of cells for a given sex/timepoint is located in the location of the text overlay. However, from the UMAP plots in Figure 1i-j, it is clear that the cells from a given sex/timepoint are actually spread across multiple identified clusters. Thus, the overlaid text obscures the important heterogeneity detected. To better represent the actual locations on the UMAP plot of cells from each sex/timepoint, it would be better to show inset density plots alongside these UMAP plots so the reader can locate the cells for themselves. 

      We thank the reviewer for this comment. However, we chose this formatting to offer simplicity and ease of understanding to our UMAPs in addition to highlighting the general biological patterns of gene expression. If the reader is interested in discerning more of the heterogeneity of the UMAPs, they may refer back to Figure 1.

      Reviewer #3 (recommendations for the authors):

      There are some errors or places that need clarification or corrections:

      (1) Figure 1f, according to the graph, it should be 8 clusters, not 9.

      There are 9 clusters because the numbering of the clusters starts at '0'.

      (2) Why did cluster 8 have so many different states of cells from both sexes?

      The identification of cluster 8 is likely an artifact of sequencing; determining why it contains cells in so many different states from both sexes would require several additional analyses. While such analyses would address a technical issue associated with the dataset, they would not change any major conclusions of the study.

      (3) Figure 1i, shouldn't that be ten instead of eleven?

      There are 11 clusters because the numbering of the clusters starts at '0'.

      (4) Figure 2a: the Zkscan5 expression level comparison was not so obvious, as the bubble size was small. How many fold difference from XX PGCs?

      There is a 1.5-fold increase in the expression of Zkscan5 between XY and XX PGCs at E13.5. We included this information in the revised manuscript.

    Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      This paper introduces a new approach to modeling human behavioral responses using image-computable models. They create a model (VAM) that is a combination of a standard CNN coupled with a standard evidence accumulation model (EAM). The combined model is then trained directly on image-level data using human behavioral responses. This approach is original and can have wide applicability. However, many of the specific findings reported are less compelling.

      Strengths:

      (1) The manuscript presents an original approach to fitting an image-computable model to human behavioral data. This type of approach is sorely needed in the field.

      (2) The analyses are very technically sophisticated.

      (3) The behavioral data are large both in terms of sample size (N=75) and in terms of trials per subject.

      Weaknesses:

      Major

      (1) The manuscript appears to suggest that it is the first to combine CNNs with evidence accumulation models (EAMs). However, this was done in a 2022 preprint (https://www.biorxiv.org/content/10.1101/2022.08.23.505015v1) that introduced a network called RTNet. This preprint is cited here, but never really discussed. Further, the two unique features of the current approach discussed in lines 55-60 are both present to some extent in RTNet. Given the strong conceptual similarity in approach, it seems that a detailed discussion of similarities and differences (of which there are many) should feature in the Introduction.

      Thanks for pointing this out—we agree that the novel contributions of our model (the VAM) with respect to prior related models (including RTNet) should be clarified, and have revised the Introduction accordingly. We include the following clarifications in the Introduction:

      “The key feature of the VAM that distinguishes it from prior models is that the CNN and EAM parameters are jointly fitted to the RT, choice, and visual stimulus data from individual participants in a unified Bayesian framework. Thus, both the visual representations learned by the CNN and the EAM parameters are directly constrained by behavioral data. In contrast, prior models first optimize the CNN to perform the behavioral task, then separately fit a minimal set of high-level CNN parameters [RTNet, Rafiei et al., 2024] and/or the EAM parameters to behavioral data [Annis et al., 2021; Holmes et al., 2020; Trueblood et al., 2021]. As we will show, fitting the CNN with human data—rather than optimizing the model to perform a task—has significant consequences for the representations learned by the model.”

      For example, in the case of RTNet, the variability of the Bayesian CNN weight distribution, the decision threshold, and the magnitude of the noise added to the images are adjusted to match the average human accuracy (separately for each task condition). RTNet is an interesting and useful model that we believe has complementary strengths to our own work.

      Since there are several other existing models in addition to the VAM and RTNet that use CNNs to generate RTs or RT proxies (by our count, at least six that we cite earlier in the Introduction), we felt it was inappropriate to preferentially include a detailed comparison of the VAM and RTNet beyond the passage quoted above.

      (2) In the approach here, a given stimulus is always processed in the same way through the core CNN to produce activations v_k. These v_k's are then corrupted by Gaussian noise to produce drift rates d_k, which can differ from trial to trial even for the same stimulus. In other words, the assumption built into VAM appears to be that the drift rate variability stems entirely from post-sensory (decisional) noise. In contrast, the typical interpretation of EAMs is that the variability in drift rates is sensory. This is also the assumption built into RTNet where the core CNN produces noisy evidence. Can the authors comment on the plausibility of VAM's assumption that the noise is post-sensory?

      In our view, the VAM is compatible with a model in which the drift rate variability for a given stimulus is due to sensory noise, since we do not specify the origin of the Gaussian noise added to the drift rates. As the reviewer notes, the CNN component of the VAM processes a given stimulus deterministically, yielding the mean drift rates. This does not preclude us from imagining an additional (unmodeled) sensory process that adds variability to the drift rates. The VAM simply represents this and other hypothetical sources of variability as additive Gaussian noise. We agree however that it is worthwhile to think about the origin of the drift rate variability, though it is not a focus of our work.
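      To make the noise placement concrete, the generative process described above can be sketched as a toy race model: the stimulus maps deterministically to mean drift rates (as the CNN does), and all unmodeled variability, whatever its source, enters as a single additive Gaussian term on the drifts. This is an illustration under stated assumptions, not the VAM's actual parameterization; all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(mean_drifts, noise_sd, threshold, t0):
    """One trial of a ballistic race: deterministic mean drift rates
    plus additive Gaussian noise, agnostic to whether that noise is
    interpreted as sensory or post-sensory."""
    d = np.asarray(mean_drifts) + rng.normal(0.0, noise_sd, size=len(mean_drifts))
    d = np.maximum(d, 1e-6)          # keep every accumulator moving forward
    finish = threshold / d           # time for each accumulator to reach threshold
    choice = int(np.argmin(finish))  # the first accumulator to finish wins
    return choice, t0 + finish[choice]
```

      Repeated calls with the same `mean_drifts` then yield a distribution of choices and RTs for a fixed stimulus; the additive-noise formulation itself does not commit to where that variability arises.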

      (3) Figure 2 plots how well VAM explains different behavioral features. It would be very useful if the authors could also fit simple EAMs to the data to clarify which of these features are explainable by EAMs only and which are not.

      In our view, fitting simple EAMs to the data would not be especially informative and poses a number of challenges for the particular task we study (LIM) that are neatly avoided by using the VAM. In particular, as we show in Figure 2, the stimuli vary along several dimensions that all appear to influence behavior: horizontal position, vertical position, layout, target direction, and flanker direction. Since the VAM is stimulus-computable, fitting the VAM automatically discovers how all of these stimulus features influence behavior (via their effect on the drift rates outputted by the CNN). In contrast, fitting a simple EAM (e.g. the LBA model) necessitates choosing a particular parameterization that specifies the relationship between all of the stimulus features and the EAM model parameters. This raises a number of practical questions. For example, should we attempt to fit a separate EAM for each stimulus feature, or model all stimulus features simultaneously?

      Moreover, while we could in principle navigate these issues and fit simple EAMs to the data, we do not intend to claim that simple EAMs fail to explain the relationship between stimulus features and behavior as well as the VAM. Rather, the key strength of the VAM relative to simple EAMs is that it includes a detailed and biologically plausible model of human vision. The majority of the paper capitalizes on this strength by showing how behavioral effects of interest (namely congruency effects) can be explained in terms of the VAM’s visual representations.

      (4) VAM is tested in two different ways behaviorally. First, it is tested to what extent it captures individual differences (Figure 2B-E). Second, it is tested to what extent it captures average subject data (Figure 2F-J). It wasn't clear to me why for some metrics only individual differences are examined and for other metrics only average human data is examined. I think that it will be much more informative if separate figures examine average human data and individual difference data. I think that it's especially important to clarify whether VAM can capture individual differences for the quantities plotted in Figures 2F-J.

      We would like to clarify that Fig. 2J in fact already shows how well the VAM captures individual differences for the average subject data shown in Fig. 2H (stimulus layout) and Fig. 2I (stimulus position). For a given participant and stimulus feature, we calculated the Pearson's r between model/participant mean RTs across each stimulus feature value. Fig. 2J shows the distribution of these Pearson’s r values across all participants for stimulus layout and horizontal/vertical position.
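For concreteness, the per-participant correlation described above can be sketched as follows (function and variable names are illustrative, not taken from our codebase):

```python
import numpy as np
from scipy.stats import pearsonr

def feature_rt_correlation(model_rts, human_rts, feature_values):
    """Pearson's r between model and participant mean RTs, computed
    across the values of one stimulus feature (e.g. layout or position).

    model_rts, human_rts : 1D arrays of per-trial RTs (hypothetical data)
    feature_values       : 1D array giving the feature value on each trial
    """
    levels = np.unique(feature_values)
    model_means = np.array([model_rts[feature_values == v].mean() for v in levels])
    human_means = np.array([human_rts[feature_values == v].mean() for v in levels])
    r, _ = pearsonr(model_means, human_means)
    return r
```

Fig. 2J then shows the distribution of these per-participant r values across all 75 participants, separately for each stimulus feature.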

      Fig. 2G also already shows how well the VAM captures individual differences in behavior. Specifically, this panel shows individual differences in mean RT attributable to differences in age. For Fig. 2F, which shows how the model drift rates differ on congruent vs. incongruent trials, there is no sensible way to compare the models to the participants at any level of analysis (since the participants do not have drift rates). 

      (5) The authors look inside VAM and perform many exploratory analyses. I found many of these difficult to follow since there was little guidance about why each analysis was conducted. This also made it difficult to assess the likelihood that any given result is robust and replicable. More importantly, it was unclear which results are hypothesized to depend on the VAM architecture and training, and which results would be expected in performance-optimized CNNs. The authors train and examine performance-optimized CNNs later, but it would be useful to compare those results to the VAM results immediately when each VAM result is first introduced.

      Thanks for pointing this out—we apologize for any confusion caused by our presentation of the CNN analyses. We have added in additional motivating statements, methodological clarifications, and relevant references to our Results, particularly for Figure 3 in which we first introduce the analyses of the CNN representations/activity. In general, each analysis is prefaced by a guiding question or specific rationale, e.g. “How do the models' visual representations enable target selectivity for stimuli that vary along several irrelevant dimensions?” We also provide numerous references in which these analysis techniques have been used to address similar questions in CNNs or the primate visual cortex.

      We chose to maintain the current organization of our results in which the comparison between the VAM and the task-optimized models are presented in a separate figure. We felt that including analyses of both the VAM and task-optimized models in the initial analyses of the CNN representations would be overwhelming for many readers. As the reviewer acknowledges, some readers may already find these results challenging to follow. 

      (6) The authors don't examine how the task-optimized models would produce RTs. They say in lines 371-2 that they "could not examine the RT congruency effect since the task-optimized models do not generate RTs." CNNs alone don't generate RTs, but RTs can easily be generated from them using the same EAM add-on that is part of VAM. Given that the CNNs are already trained, I can't see a reason why the authors can't train EAMs on top of the already trained CNNs and generate RTs, so these can provide a better comparison to VAM.

      We appreciate this suggestion, but we judge the suggestion to “train EAMs on top of the already trained CNNs and generate RTs” to be a significant expansion of the scope of the paper with multiple possible roads forward. In particular, one must specify how the outputs of the task-optimized CNN (logits for each possible response) relate to drift rates, and there is no widely-accepted or standard way to do this. Previously proposed methods include transforming representation distances in the last layer to drift rates (https://doi.org/10.1037/xlm0000968), fitting additional subject-specific parameters that map the logits to drift rates (https://doi.org/10.1007/s42113-019-00042-1), or using the softmax-scored model outputs as drift rates directly (https://doi.org/10.1038/s41562-024-01914-8), though in the latter case the RTs are not on the same scale as human data. In our view, evaluating these different methods is beyond the scope of this paper. An advantage of the VAM is that one does not have to fit two separate models (a CNN and an EAM) to generate RTs.

      Nonetheless, we agree that it would be informative to examine something like RTs in the task-optimized models. Our revised Results section now includes an analysis of the confidence of the task-optimized models’ decisions, which we use as a proxy for RTs:

      “Since the task-optimized models do not generate RTs, it is not possible to directly measure RT congruency effects in these models without making additional assumptions about how the CNN's classification decisions relate to RTs. However, as a coarse proxy for RT, we can examine the confidence of the CNN's decisions, defined as the softmax-scored logit (probability) of the most probable direction in the final CNN layer. This choice of RT proxy is motivated by some prior studies that have combined CNNs with EAMs [Annis et al., 2021; Holmes et al., 2020; Trueblood et al., 2021]. These studies explicitly or implicitly derive a measure of decision confidence from the activity of the last CNN layer. The confidence measure is then mapped to the EAM drift rates, such that greater decision confidence generally corresponds to higher drift rates (and therefore shorter RTs).

      We calculated the average confidence of each task-optimized CNN separately for congruent vs. incongruent trials. On average, the task-optimized models showed higher confidence on congruent vs. incongruent trials (W = 21.0, p < 1e-3, Wilcoxon signed-rank test; Cohen's d = 0.99; n = 75 models). These analyses therefore provide some evidence that task-optimized CNNs have the capacity to exhibit congruency effects, though an explicit comparison of the magnitude of these effects with human data requires additional modeling assumptions (e.g., fitting a separate EAM).”
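To make this analysis concrete, the confidence proxy and the paired comparison can be sketched as follows (assuming per-trial logits from the final CNN layer are available; the function names are ours):

```python
import numpy as np
from scipy.stats import wilcoxon
from scipy.special import softmax

def decision_confidence(logits):
    """Confidence = softmax-scored probability of the most probable
    response direction in the final CNN layer.

    logits : (n_trials, n_responses) array of final-layer outputs
    """
    probs = softmax(logits, axis=1)
    return probs.max(axis=1)

def congruency_confidence_test(mean_conf_congruent, mean_conf_incongruent):
    """Paired Wilcoxon signed-rank test on per-model mean confidence,
    one (congruent, incongruent) pair per model."""
    return wilcoxon(mean_conf_congruent, mean_conf_incongruent)
```

The reported statistics (W = 21.0, p < 1e-3, n = 75 models) come from exactly this kind of paired comparison of per-model mean confidences.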

      (7) The Discussion felt very long and mostly a summary of the Results. I also couldn't shake the feeling that it had many just-so stories related to the variety of findings reported. I think that the section should be condensed and the authors should be clearer about which explanations are speculations and which are air-tight arguments based on the data.

      We have shortened the Discussion modestly and added clarifying language to indicate which arguments are speculative vs. directly supported by our data.

      Specifically, we added the phrase “we speculate that…” to two suggestions in the Discussion (paragraphs 3 and 5), and we ensured that any other more speculative suggestions contain similar qualifying language. We have also added subheadings to help readers navigate the Discussion.

      (8) In one of the control analyses, the authors train different VAMs on each RT quantile. I don't understand how it can be claimed that this approach can serve as a model of an individual's sensory processing. Which of the 5 sets of weights (5 VAMs) captures a given subject's visual processing? Are the authors saying that the visual system of a given subject changes based on the expected RT for a stimulus? I feel like I'm missing something about how the authors think about these results.

      We agree that these particular analyses may cause confusion and have removed them from our revised manuscript.

      Reviewer #2 (Public Review):

      In an image-computable model of speeded decision-making, the authors introduce and fit a combined CNN-EAM (a 'VAM') to flanker-task-like data. They show that the VAM can fit mean RTs and accuracies as well as the congruency effect that is present in the data, and subsequently analyze the VAM in terms of where in the network congruency effects arise.

      Overall, combining DNNs and EAMs appears to be a promising avenue to seriously model the visual system in decision-making tasks compared to the current practice in EAMs. Some variants have been proposed or used before (e.g., doi.org/10.1016/j.neuroimage.2017.12.078 , doi.org/10.1007/s42113-019-00042-1), but always in the context of using task-trained models, rather than models trained on behavioral data. However, I was surprised to read that the authors developed their model in the context of a conflict task, rather than a simpler perceptual decision-making task. Conflict effects in human behavior are particularly complex, and thereby, the authors set a high goal for themselves in terms of the to-be-explained human behavior. Unfortunately, the proposed VAM does not appear to provide a great account of conflict effects that are considered fundamental features of human behavior, like the shape of response time distributions, and specifically, delta plots (doi.org/10.1037/0096-1523.20.4.731). The authors argue that it is beyond the scope of the presented paper to analyze delta plots, but as these are central to studies of human conflict behavior, models that aim to explain conflict behavior will need to be able to fit and explain delta plots.

      Theories on conflict often suggest that negative/positive-trending delta plots arise through the relative timing of response activation related to relevant and irrelevant information.

      Accumulation for relevant and irrelevant information would, as a result, either start at different points in time or the rates vary over time. The current VAM, as a feedforward neural network model, does not appear to be able to capture such effects, and perhaps fundamentally not so: accumulation for each choice option is forced to start at the same time, and rates are a static output of the CNN.

      The proposed solution of fitting five separate VAMs (one for each of five RT quantiles) is not satisfactory: it does not explain how delta plots result from the model, for the same reason that fitting five evidence accumulation models (one per RT quantile) does not explain how response time distributions arise. If, for example, one would want to make a prediction about someone's response time and choice based on a given stimulus, one would first have to decide which of the five VAMs to use, which is circular. But more importantly, this way of fitting multiple models does not explain the latent mechanism that underlies the shape of the delta plots.

      As such, the extensive analyses on the VAM layers and the resulting conclusions that conflict effects arise due to changing representations across layers (e.g., "the selection of task-relevant information occurs through the orthogonalization of relevant and irrelevant representations") - while inspiring, they remain hard to weigh, as they are contingent on the assumption that the VAM can capture human behavior in the conflict task, which it struggles with. That said, the promise of combining CNNs and EAMs is clearly there. A way forward could be to either adjust the proposed model so that it can explain delta plots, which would potentially require temporal dynamics and time-varying evidence accumulation rates, or perhaps to start simpler and combine CNN-EAMs that are able to fit more standard perceptual decision-making tasks without conflict effects.

      We thank the reviewer for their thoughtful comments on our work. However, we note that the VAM does in fact capture the positive-trending RT delta plot observed in the participant data (Fig. S4A), though the intercepts for models/participants differ somewhat. On the other hand, the conditional accuracy functions (Fig. S4B) reveal a more pronounced difference between model and participant behavior. As the reviewer points out, capturing these effects is likely to require a model that can produce time-varying drift rates, whereas our model produces a fixed drift rate for a given stimulus. We also agree that fitting a separate VAM to each RT quantile is not a satisfactory means of addressing this limitation and have removed these analyses from our revised manuscript.
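For readers less familiar with this analysis, an RT delta plot of the kind shown in Fig. S4A can be sketched as follows (a minimal illustration with names of our choosing, not the paper's analysis code):

```python
import numpy as np

def delta_plot(rt_congruent, rt_incongruent, n_quantiles=5):
    """RT delta plot: mean incongruent minus mean congruent RT within
    each RT quantile bin, plotted against the bin's overall mean RT.
    A positive-trending curve means the congruency effect grows with RT."""
    qs = np.linspace(0, 1, n_quantiles + 1)

    def bin_means(rts):
        # Mean RT within each quantile bin of the distribution
        edges = np.quantile(rts, qs)
        return np.array([rts[(rts >= lo) & (rts <= hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    mc = bin_means(rt_congruent)
    mi = bin_means(rt_incongruent)
    return (mc + mi) / 2, mi - mc  # x-axis: bin mean RT; y-axis: delta
```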

      However, while we agree that accurately capturing these dynamic effects is a laudable goal, it is in our view also worthwhile to consider explanations for the mean behavioral effect (i.e. the accuracy congruency effect), which can occur independently of any consideration of dynamics. One of our main findings is that across-model variability in accuracy congruency effects is better attributed to variation in representation geometry (target/flanker subspace alignment) vs. variation in the degree of flanker suppression. This finding does not require any consideration of dynamics to be valid at the level of explanation we pursue (across-user variability in congruency effects), but also does not preclude additional dynamic processes that could give rise to more specific error patterns. Our revised discussion now includes a section where we summarize and elaborate on these ideas:

      “It is not difficult to imagine how the orthogonalization mechanism described above, which explains variability in accuracy congruency effects across individuals, could act in concert with other dynamic processes that explain variability in congruency effects within individuals (e.g., as a function of RT). In general, any process that dynamically gates the influence of irrelevant sensory information on behavioral outputs could accomplish this, for example ramping inhibition of incorrect response activation [https://doi.org/10.3389/fnhum.2010.00222], a shrinking attention spotlight [https://doi.org/10.1016/j.cogpsych.2011.08.001], or dynamics in neural population-level geometry [https://doi.org/10.1038/nn.3643]. To pursue these ideas, future work may aim to incorporate dynamics into the visual component and decision component of the VAM with recurrent CNNs [https://doi.org/10.48550/arXiv.1807.00053, https://doi.org/10.48550/arXiv.2306.11582] and the task-DyVA model [https://doi.org/10.1038/s41562-022-01510-8], respectively.”

      Reviewer #3 (Public Review):

      Summary:

      In this article, the authors combine a well-established choice-response time (RT) model (the Linear Ballistic Accumulator) with a CNN model of visual processing to model image-based decisions (referred to as the Visual Accumulator Model - VAM). While this is not the first effort to combine these modeling frameworks, it uses this combination of approaches uniquely.

      Specifically, the authors attempt to better understand the structure of human information representations by fitting this model to behavioral (choice-RT) data from a classic flanker task. This objective is made possible by using a very large (by psychological modeling standards) industry data set to jointly fit both components of this VAM model to individual-level data. Using this approach, they illustrate (among other results) (1) how the interaction between target and flanker representations influence the presence and strength of congruency effects, (2) how the structure of representations changes (distributed versus more localized) with depth in the CNN model component, and (3) how different model training paradigms change the nature of information representations. This work contributes to the ML literature by demonstrating the value of training models with richer behavioral data. It also contributes to cognitive science by demonstrating how ML approaches can be integrated into cognitive modeling. Finally, it contributes to the literature on conflict modeling by illustrating how information representations may lead to some of the classic effects observed in this area of research.

      Strengths:

      (1) The data set used for this analysis is unique and is made publicly available as part of this article. Specifically, they have access to data for 75 participants with >25,000 trials per participant. This scale of data/individual is unusual and is the foundation on which this research rests.

      (2) This is the first time, to my knowledge, that a model combining a CNN with a choice-RT model has been jointly fit to choice-RT data at the level of individual people. This type of model combination has been used before but in a more restricted context. This joint fitting, and in particular, learning a CNN through the choice-RT modeling framework, allows the authors to probe the structure of human information representations learned directly from behavioral data.

      (3) The analysis approaches used in this article are state-of-the-art. The training of these models is straightforward given the data available. The interesting part of this article (opinion of course) is the way in which they probe what CNN has learned once trained. I find their analysis of how distractor and target information interfere with each other particularly compelling as well as their demonstration that training on behavioral data changes the structure of information representations when compared to training models on standard task-optimized data.

      Weaknesses:

      (1) Just as the data in this article is a major strength, it is also a weakness. This type of modeling would be difficult, if not impossible to do with standard laboratory data. I don't know what the data floor would be, but collecting tens of thousands of decisions for a single person is impractical in most contexts. Thus this type of work may live in the realm of industry. I do want to re-iterate that the data for this study was made publicly available though!

      We suspect (but have not systematically tested) that the VAMs can be fitted with substantially less data. We use data augmentation techniques (various randomized image transformations) during training to improve the generalization capabilities of the VAMs, and these methods are likely to be particularly important when training on smaller datasets. One could consider increasing the amount of image data augmentation when working with smaller datasets, or pursuing other forms of data augmentation like resampling from estimated RT distributions (see https://doi.org/10.1038/s41562-022-01510-8 for an example of this). In general, we don’t think that prospective users of our approach should be discouraged if they have only a few hundred trials per subject (or less) - it’s worth trying!
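As a toy illustration of image-level augmentation (the specific transformations used in our training pipeline are not restated here; random translation is just one example, and the function below is purely illustrative):

```python
import numpy as np

def augment(image, rng, max_shift=4):
    """Randomly translate an image by up to max_shift pixels in each
    direction. Wrap-around via np.roll is a simplification; a real
    pipeline would pad or crop instead."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(image, (dy, dx), axis=(0, 1))
```

Applying several such randomized transformations per trial effectively multiplies the number of distinct stimulus images seen during training, which is why we expect augmentation to matter most for small datasets.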

      (2) While this article uses choice-RT data it doesn't fully leverage the richness of the RT data itself. As the authors point out, this modeling framework, the LBA component in particular, does not account for some of the more nuanced but well-established RT effects in this data. This is not a big concern given the already nice contributions of this article and it leads to an opportunity for ongoing investigation.

      We agree that fully capturing the more nuanced behavioral effects you mention (e.g. RT delta plots and conditional accuracy functions) is a worthwhile goal for future research—see our response to Reviewer #2 for a more detailed discussion.

----------

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (1) The phrase in the Abstract "convolutional neural network models of visual processing and traditional EAMs are jointly fitted" made me initially believe that the two models were fitted independently. You may want to re-word to clarify.

      We think that the phrase “jointly fitted” already makes it clear that both the CNN and EAM parameters are estimated simultaneously, in agreement with how this term is usually used. But we have nonetheless appended some additional clarifying language to that sentence (“in a unified Bayesian framework”).

      (2) Lines 27-28: EAMs "are the most successful and widely-used computational models of decision-making." This is only true for the specific type of decision-making examined here, namely joint modeling of choice and response times. Signal detection theory is arguably more widely-used when response times are not modeled.

      Thanks for pointing this out - we have revised the referenced sentence accordingly.

      (3) Could the authors clarify what is plotted in Figure 2F?

      Fig. 2F shows the drift rates for the target, flanker, and “other” (non-target/non-flanker) accumulators averaged over trials and models for congruent vs. incongruent trials. In case this was a source of confusion, we do not show the value of the flanker drift rates on congruent trials because the flanker and target accumulators are identical (i.e. the flanker/congruent drift rates are equivalent to the target/congruent drift rates).

      (4) Lines 214-7: "The observation that single-unit information for target direction decreased between the fourth and final convolutional layers while population-level decoding remained high is especially noteworthy in that it implies a transition from representing target direction with specialized "target neurons" to a more distributed, ensemble-level code." Can the authors clarify why this is the only reasonable explanation for these results? It seems like many other explanations could be construed.

      We have added additional clarification to this section and now use more tentative language:

      “The observation that single-unit information for target direction decreased between the fourth and final convolutional layers indicates that the units become progressively less selective for particular target directions. Since population-level decoding remained high in these layers, this suggests a transition from representing target direction with specialized "target neurons" to a more distributed, ensemble-level code.”

      (5) Lines 372-376: "Thus, simply training the model to perform the task is not sufficient to reproduce a behavioral phenomenon widely-observed in conflict tasks. This challenges a core (but often implicit) assumption of the task-optimized training paradigm, namely that training a model to do a task well will result in model representations that are similar to those employed by humans." While I agree with the general sentiment, I feel that its application here is strange. Unless I'm missing something, in the context of the preceding sentence, the authors seem to be saying that researchers in the field expect that CNNs can produce a behavioral phenomenon (RTs) that is completely outside of their design and training. I don't think that anyone actually expects that.

      We moved the discussion/analyses of RTs to the next paragraph. It should now be clear that this statement refers specifically to the absence of an accuracy congruency effect in the task-optimized models.

      (6) Lines 387-389: "As a result, the VAMs may learn richer representations of the stimuli, since a variety of stimulus features-layout, stimulus position, flanker direction-influence behavior (Figure 2)." That is certainly true of tasks like this one where an optimal model would only focus on a tiny part of the image, whereas humans are distracted by many features. I'm not sure that this distractibility is the same as "richer representations". When CNNs classify images based on the background, would the authors claim that they have richer representations than humans?

      We agree that “richer” may not be the best way to characterize these representations, and have changed it to “more complex”.

      (7) Is it possible that drift rate d_k for each response happens to be negative on a given trial? If so, how is the decision given on such trials (since presumably none of the accumulators will ever reach the boundary)?

      It is indeed possible for all of the drift rates to be negative, though we found that this occurred for a vanishingly small number of trials (mean ± s.e.m. percent trials/model: 0.080 ± 0.011%, n = 75 models), as reported in the Methods. These trials were excluded from analyses.
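For illustration, the exclusion criterion can be sketched as below. The mean drift rates and noise s.d. used in the example are made up; in the VAM the mean rates come from the CNN and the noise parameters are fitted.

```python
import numpy as np

def sample_drift_rates(mean_rates, noise_sd, n_trials, seed=0):
    """Draw per-trial LBA drift rates: CNN-derived mean rates plus
    additive i.i.d. Gaussian noise (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    return mean_rates + rng.normal(0.0, noise_sd, size=(n_trials, len(mean_rates)))

def fraction_all_negative(drift_samples):
    """Fraction of trials on which every accumulator's drift rate is
    negative, so no accumulator would ever reach threshold."""
    return np.mean((drift_samples < 0).all(axis=1))
```

With mean rates well above zero for at least one accumulator, this fraction is tiny, consistent with the ~0.08% of trials we excluded.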

      (8)  Can the authors comment on how they chose the CNN architecture and whether they expect that different architectures will produce similar results?

      Before establishing the seven-layer CNN architecture used throughout the paper, we conducted some preliminary experiments using other architectures that differed primarily in the number of CNN layers. We found that models with significantly fewer than seven layers typically failed to reach human-level accuracy on the task while larger models achieved human-level accuracy but (unsurprisingly) took longer to train.

      Reviewer #3 (Recommendations For The Authors):

      - In the introduction to this paper (particularly the paragraph beginning in line 33), the authors note that EAMs have typically been used in simplified settings and that they do not provide a means to account for how people extract information from naturalistic stimuli. While I agree with this, the idea of connecting CNNs of visual processing with EAMs for a joint modeling framework has been done. I recommend looking at and referencing these two articles as well as adjusting the tenor of this part of an introduction to better reflect the current state of the literature. For full disclosure, I am one of the authors on these articles. https://link.springer.com/article/10.1007/s42113-019-00042-1 https://www.sciencedirect.com/science/article/abs/pii/S0010027721001323

      We agree—thanks for pointing this out. The revised Introduction now discusses prior related models in more detail (including those referenced above) and better clarifies the novel contributions of our model. We specifically highlight that a novel contribution of the VAM is that “the CNN and EAM parameters are jointly fitted to the RT, choice, and visual stimulus data from individual participants in a unified Bayesian framework.”

      - The statement in lines 56-58 implies that this is the first article to glue CNNs together with EAMs. I would edit this accordingly based on the prior comment here and references provided. I will note that the second feature of the approach in this paper is still novel and really nice, namely the fact that the CNN and the EAM are jointly fitted. In the aforementioned references, the CNN is trained on the image set, and individual level Bayesian estimation was only applied to the EAM. Thus, it may be useful to highlight the joint estimation aspect of this investigation as well as how the uniqueness of the data available makes it possible.

      Agreed—see above.

      - Figure 3c and associated text. I understand the MI analysis you are performing here, however it is difficult to interpret as it stands. In the figure, what does a MI of 0.1 mean?? Can you give some context to that scale? I do find the interpretation of the hunchback shape in lines 210-222 to be somewhat of a stretch. The discussion that precedes (lines 199-209) this is clear and convincing. Can this discussion be strengthened more? And more interpretability of Figure 3c would be helpful; entropic scales can be hard to interpret without some context or scale associated.

      The MI analyses in Fig. 3C (and also Figs. 4C and 6E) show normalized MI, in which the raw MI has been divided by the entropy of the stimulus feature distribution. This normalization facilitates comparing the MI for different stimulus features, which is relevant for Figs. 4C and 6E. The normalized MI has a possible range of [0, 1], where 1 indicates perfect correlation between the two variables and 0 indicates complete independence. We now note in the legend of these figures that the possible normalized MI range is [0, 1], which should help with interpreting these values. Our revised results section for Fig. 3C now also includes some additional remarks on our interpretation of the hunchback shape of the MI.
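To make the normalization concrete, a rough sketch of a normalized MI computation is below (the binning of unit activity into 10 levels is an illustrative choice, not necessarily what we used):

```python
import numpy as np

def normalized_mi(unit_activity, feature, n_bins=10):
    """Normalized mutual information between a unit's activity and a
    discrete stimulus feature: I(activity; feature) / H(feature).
    Since I(X; Y) <= H(Y), the value lies in [0, 1]."""
    # Discretize continuous activity using interior histogram bin edges
    edges = np.histogram_bin_edges(unit_activity, bins=n_bins)[1:-1]
    binned = np.digitize(unit_activity, edges)
    # Estimate the joint distribution over (binned activity, feature)
    xs, ys = np.unique(binned), np.unique(feature)
    joint = np.zeros((len(xs), len(ys)))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            joint[i, j] = np.mean((binned == x) & (feature == y))
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz]))
    h_feature = -np.sum(py * np.log(py))
    return mi / h_feature
```

A value of 1 means the feature is perfectly predictable from the unit's activity; 0 means they are independent.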

      - Lines 244-248 and the analyses in Figure 3 suggest a change in the behavior of the CNN around layer 4. This is just a musing, but what would happen if you just used a 4 layer CNN, or even a 3 layer? This is not just a methods question. Your analysis suggests a transition from localized to distributed information representation. Right now, the EAM only sees the output of the distributed representation. What if it saw the results of the more local representations from early layers? Of course, a shallower network may just form the distributed representations earlier, but it would be interesting if there were a way to tease out not just the presence of distributed vs local representations, but the utility of those to the EAM.

      Thanks for this interesting suggestion. We did do some preliminary experiments in models with fewer layers, though we only examined the outputs of these models and did not assess their representations. We found that models with 3–5 layers generally failed to achieve human-level accuracy on the task. In principle, one could relate this observation to the representations of these models as a means of assessing the relative utility of distributed/local representations. However, there are confounding factors that one would ideally control for in order to compare models with different numbers of layers in this fashion (namely, the number of parameters).

      - Section Line 359 (Task optimized models) - It would be helpful to clarify here what these task-optimized models are being trained to do. As I understand it, they are being trained to directly predict the target direction. But are you asking them to learn to predict the true target direction? Or are you training them to predict what each individual responds? I think it is the second (since you have 75 of these), but it's not clear. I looked at the methods and still couldn't get a clear description of this. Also, are you just stripping the LBA off of the end of the CNN and then essentially putting a softmax in its place? If so, it would be helpful to say so.

      The task-optimized models were actually trained to output the true target direction in each stimulus, rather than trained to match the decisions of the human participants. We trained 75 such models since we wanted to use exactly the same stimuli as were used to train each VAM. The task-optimized CNNs were identical to those used in the VAMs, except that the outputs of the last layer were converted to softmax-scored probabilities for each direction rather than drift rates. The Results and Methods sections now include additional commentary that clarifies these points.

      - Line 373-376: This statement is pretty well established at this point in the similarity judgement literature. I recommend looking at and referencing https://onlinelibrary.wiley.com/doi/full/10.1111/cogs.13226 https://www.nature.com/articles/s41562-020-00951-3 https://link.springer.com/article/10.1007/s42113-020-00073-z

      Thanks for pointing this out. For reference, the statement in question is “Thus, simply training the model to perform the task is not sufficient to reproduce a behavioral phenomenon widely-observed in conflict tasks. This challenges a core (but often implicit) assumption of the task-optimized training paradigm, namely that training a model to do a task well will result in model representations that are similar to those employed by humans.”

      We agree that the first and third reference you mention are relevant, and we now cite them along with some other relevant work. In our view, the second reference you mention is not particularly relevant (that paper introduces a new computational model for similarity judgements that is fit to human data, but does not comment on training models to perform tasks vs. fitting to human data).

      - Line 387-388: "VAMs may learn richer representations". This is a bit of a philosophical point, but I'll go ahead and mention it. The standard VAM does not necessarily learn "richer" feature representations. Rather, you are asking the VAM and task-optimized models to do different things. As a result, they learn different representations. "Better" or "richer" is in the eye of the beholder. In one view, you could view the VAM performance as sub-par since it exhibits strange artifacts (congruency effects) and the expansion of dimensionality in the VAM representations is merely a side-effect of poor performance. I'm not advocating this view, just playing devil's advocate and suggesting a more nuanced discussion of the difference between the VAM and task-optimized models.

      We agree—this is a great point. We have changed this statement to read “the VAMs may learn more complex [rather than richer] representations of the stimuli”.

      - Lines 567-570: Here you discuss how the LBA backend of the VAM can't account for shrinking spotlight-like RT effects but that fitting models to different RT quantiles helps overcome this. I find this to be one of the weakest points of the paper (the whole process of fitting RT quantiles separately to begin with). This is just a limitation of the RT component of the model. This is a great paper but this is just a limitation inherent in the model. I don't see a need to qualify this limitation and think it would be better to just point out that this is a limitation of the LBA itself (be more clear that it is the LBA that is the limiting factor here) and that this leaves room for future research. From your last sentence of this paragraph, I agree that recurrent CNNs would be interesting. I will note that RNN choice-RT models are out there (though not with CNNs as part of the model).

      We agree and have revised this section of the Discussion accordingly (see our response to Reviewer #2 for more detail). We also removed the analyses of models trained on separate RT quantiles.

    1. We need to rekindle an excitement and optimism about the future of technology. Ideas need to represent play and possibility, not something to turn our noses up at because we don’t yet see the possibilities. Nothing is ever perfect when it first comes out, but that doesn’t mean that it won’t get better. And any idea that has potential will get better if it brings value to people.

      Absolutely. We're so skeptical (cynical?) about new technology now. There's a lot less "let's explore this and see if it's fun" and a lot more "watch, I bet it's just hype/it's not that useful/etc."

      I can understand it to an extent, because the hype train can be so annoying. But maybe that's part of the problem itself. Why are we focused on what others are saying or thinking about the technology rather than just trying to find out for ourselves?

    1. “I feel like we’re coming back around to an era where recaps could take off again,” he says. “Because I feel like people find the conversation on social media to be increasingly toxic, and want to just read somebody who’s well informed.”

      I would love to see recaps make a comeback and become as successful, or even more successful, than they were before. That being said, I still feel there are many factors preventing this from happening. For example, discussions on platforms like TikTok could hinder their return. However, if recaps do come back, I don’t think they need to rely on platforms like Twitter. As this generation moves away from apps like Twitter, I believe discussions on Reddit, forums, or even casual conversations about books might overshadow recaps and potentially make them obsolete. That said, I do think it’s possible for recaps to return, though perhaps not with the current generation. And if they do gain traction with this generation, it likely won’t happen for several years.

    2. But it’s not just the quantity of TV that’s changed. Much of the uptick in television production is due to the rise of streaming services like Netflix, which upload their seasons to the internet in multihour chunks instead of broadcasting them at an appointed time each week. The preferred style of viewing is now the so-called “binge”: inhaling multiple episodes at once, often consuming entire seasons in a single weekend without coming up for air

      Really, though, I only have three or four shows that I have to wait for weekly. For most of the shows that I know I have to wait for, I'll just wait until Netflix drops the entire show and then binge it. I think, if anything, this has made me realize how often I do binge shows when they drop.

    3. “I honestly feel Mad Men was the last show where everyone immediately got online to talk about it after the episode aired, en masse,” says Fitzgerald, with no small amount of nostalgia. “Because the following morning, there was this massive audience hungry for conversations about the TV show, and you just don’t see that anymore.” It’s not that people have stopped paying attention to TV; it’s that they’re no longer paying attention to the same TV, which means there’s less of an imperative to understand and discuss a given episode.

      You don't really see this anymore because, with Netflix and similar platforms, people are always watching shows. There will always be someone to talk to about a show, even if they're not the original audience who watched it when it first came out.

    4. “I honestly feel Mad Men was the last show where everyone immediately got online to talk about it after the episode aired, en masse,” says Fitzgerald, with no small amount of nostalgia. “Because the following morning, there was this massive audience hungry for conversations about the TV show, and you just don’t see that anymore.”

      I completely disagree with Fitzgerald on this. TikTok and Instagram are filled with recappers and normal people who like to talk about a certain show. For example, Outer Banks, when it first came out, got spoiled for a lot of people by these recappers. I think it’s different in today’s time but not completely gone from our society. An app like X (formerly Twitter) lets you search specific things and shows the latest updates, so when a show or movie comes out it gives people a place to see others’ thoughts and views on what they just watched.

    5. But it’s not just the quantity of TV that’s changed. Much of the uptick in television production is due to the rise of streaming services like Netflix, which upload their seasons to the internet in multihour chunks instead of broadcasting them at an appointed time each week.

      This made me realize that I have honestly never waited for episodes of a show to drop on a weekly basis. I always just wait for it to arrive on Netflix or another streaming service. Wow.

    1. It’s not just that you completed a degree; it is how you earned your degree and the cumulative effects of your education that matter.

      I am hoping for a Master's in social work. I am going to become a therapist for children in non-nuclear families.

      This class will help me grow my credits so I am one step closer to a degree.

    1. On Sunday, the DeepSeek app had risen to second place, just behind ChatGPT, in the Apple App Store’s list of top free apps.

      It's actually #1 now in the App Store. Let's lead with this fact, then summarize the rest. Also interesting that THIS isn't getting banned, even though it has the potential to siphon up WAY more information about people's personal lives or whatever they put into AI.

    1. Summary of the Talk on the Future of CMS, No-Code, Low-Code, and AI-Generated Applications

      Evolution of CMS and No-Code Tools

      Traditional CMS:

      "Back in the day it was like WordPress... the original web where we would write code and then we would just push it up to servers."

      CMS emerged to allow non-developers to contribute to web content without coding.

      No-Code Tools:

      "No code is like your drag and drop GUI... Webflow or whatever."

      Introduced drag-and-drop interfaces for broader accessibility, with pros and cons in usability.

      AI in No-Code:

      "Fast forward even further we've got AI coding right... now a person can just make an app."

      AI models like Claude 3.5 enable app generation with minimal developer intervention.

      Current No-Code/Low-Code AI Tools Landscape

      Key Tools in the Market:

      "Let's create the definitive list here... Cursor, Bolt, Lovable."

      Cursor is for developers; Bolt and Lovable cater to non-developers with different strengths.

      Strengths and Weaknesses:

      "Bolt is a great boilerplate generator... Lovable is great if you want ShadCN styling."

      Developers prefer Bolt for flexibility; Lovable is preferred for pre-styled design systems.

      Challenges with AI-Generated Code

      Integration Issues:

      "It's not your existing code base... you need to use your components, design system, and backend logic."

      AI-generated code often exists in isolation, making integration difficult for enterprise use.

      Code Quality Concerns:

      "Engineers are not going to want a pull request by a non-engineer."

      Quality control and maintainability remain significant barriers.

      Customization and Precision:

      "Webflow is hard to use... but it gives you 100% precision control."

      While AI provides convenience, fine-grained control is still preferred by professionals.

      Future of AI-Driven Development

      Combining AI with Structured CMS-like Workflows:

      "Ideally, we have something like a headless CMS where we can make updates over API."

      Future solutions should enable AI updates via APIs while maintaining design consistency.

      Ideal Workflow Vision:

      "In an ideal world, we can be editing with prompts and visually."

      The goal is a hybrid model with AI-driven automation and manual precision controls.

      AI-Based Iteration and Optimization:

      "AI should listen to your customers... iterate really fast."

      Faster feedback loops and continuous optimization through AI experimentation.

      Technical Approaches to Solving Challenges

      Meta's React Blocks:

      "What React blocks let developers do is a backend dev in Python can code up a React UI."

      An approach that allows dynamic UI changes without shipping new native app versions.

      Mitosis Framework:

      "Mitosis is a project that explores transpilation and visual manipulation."

      Enables converting JSX into structured JSON for flexible rendering and AI-based updates.

      Code-Driven Visual Editing:

      "SwiftUI allows updating code with visual previews and vice versa."

      Bidirectional code editing is a possible future solution but is still complex.

      Current Limitations and Considerations

      Performance and Feasibility Issues:

      "When I had Google bots crawling my AI-generated site, I got a $4,000/day Anthropic bill."

      Generating content in real-time is currently too expensive at scale.

      Security and Compliance Risks:

      "Dynamic code delivery is ripe with security challenges."

      Any AI-driven solutions must consider performance, security, and governance.

      Key Use Cases and Applications

      Prototyping vs. Production:

      "Phenomenal prototyping tools, but moving to production is challenging."

      AI tools excel in concept validation but require extensive refinement for production.

      Personalization Opportunities:

      "The AI could automatically scale things up or down based on performance."

      Future possibilities include hyper-personalized user experiences.

      Conclusion and Outlook

      Near-Term Expectations:

      "Webflow and Framer will likely add more AI features over time."

      Existing players are expected to incorporate AI capabilities gradually.

      Long-Term Potential:

      "AI tools will eventually iterate and personalize dynamically based on user input."

      The convergence of AI, CMS, and design systems may redefine how software is built.

      This summary captures the essence of the speaker's discussion, highlighting key concepts, industry trends, challenges, and possible future developments in the AI-powered CMS and no-code/low-code space.

    1. Ādhān al-Qāḍīآذان القاضي The Judge’s Ears Scrumptiously luscious and brittle to the bite; fit for a judge indeed, who had to have good ears, large enough for attentively listening to people’s complaints. This pastry will keep its crispness even when it cools. I froze the leftovers and heated them up in the oven and they were as crisp as ever. Deliciously addictive.

      As someone with a Hispanic background, I find the description of Ādhān al-Qāḍī, or "The Judge’s Ears," to be both evocative to the average reader and relatable to myself and others who are of a similar cultural background. In addition, this recipe serves as a history lesson, showing us the impact that centuries of Moorish rule in Spain had on the nation’s culinary practices. The appreciation for a pastry that is "scrumptiously luscious and brittle to the bite" speaks to a universal love for food that delights the senses. In Hispanic culture, we have our own array of pastries that carry rich flavors and textures, such as empanadas and arepas, which also offer that satisfying crunch and warmth. The imagery of the pastry being "fit for a judge" resonates with me. It reflects the importance of quality and craft in food, much like how our traditional dishes often carry the weight of cultural significance. In Hispanic communities, cooking is not just about sustenance; it’s an art form passed down through generations, much like the care taken in preparing a delicate pastry like Ādhān al-Qāḍī. The recipe’s mention of the pastry maintaining its crispness even after freezing and reheating reminds me of how many of our beloved recipes also stand the test of time. For instance, many Hispanic desserts, like flan or tres leches cake, can be enjoyed days later without losing their charm. This speaks to the craftsmanship involved in creating foods that are meant to be savored repeatedly. Moreover, the notion of a pastry being "deliciously addictive" is something I can wholeheartedly agree with. In Hispanic culture, gatherings often revolve around food, and the irresistible nature of treats encourages sharing and connection among family and friends. It’s fascinating how something as simple as a pastry can evoke such depth and history, bridging cultural divides and celebrating the joy that food brings to our lives. 
Your description inspires me to seek out this pastry and explore its flavors, reminding me of the beauty found in diverse culinary traditions.

    1. “I try my best to use original recipes and raw materials,” Li says. “The purpose of my video series is to show audiences what people actually ate in ancient China.”

      In this excerpt, Li not only sheds light on a misconception about her country, one she plays a part in combating through real-life examples from the history of Chinese food, but from just these couple of sentences it can be inferred that misconceptions about foreign cultures are widespread and include aspects, such as food, that are often overlooked. The phrase “I try my best to use original recipes and raw materials” shows her passion for preserving the culinary heritage of China, which is vital for understanding the cultural and historical context of the food. By sourcing original recipes, Li not only honors traditional methods of cooking but also provides a connection to the past and how it’s been carried to the present, which allows us as readers to appreciate the complexity and richness of ancient Chinese gastronomy. The mention of “raw materials” suggests an emphasis on natural and unprocessed ingredients, which aligns with contemporary trends in cooking that prioritize sustainability and minimalism. This approach can serve as a counterpoint to modern culinary practices that often rely on convenience and industrial production, thereby inviting audiences to reflect on the value of traditional food systems that have sustained communities for centuries. The second part of the excerpt, “The purpose of my video series is to show audiences what people actually ate in ancient China,” represents an educational goal. By aiming to reveal historical eating habits, Li provides insights into the daily lives and societal norms of ancient Chinese people. This not only enriches viewers’ understanding of the culture but also leads us to a deeper appreciation for the evolution of Chinese cuisine. Overall, Li's initiative can be seen as part of a broader movement to revive and celebrate traditional culinary practices. 
This effort not only preserves valuable historical knowledge but also engages a new generation of food enthusiasts who seek to connect with their heritage through authentic culinary experiences. By bridging the past and present, Li’s work contributes to the ongoing dialogue about food, culture, and identity in a rapidly changing world.

    2. Of course, you won’t see the actors making these dishes from scratch, so food vloggers have stepped in to fill this void, teaching curious minds about these traditional dishes and how to make them.

      It's amazing how digital platforms can bring together different cultures! By watching these food vloggers, we're not just learning recipes but we're also gaining insight into cultures that bring ancient traditions into our kitchens. Digital existence allows us to experience a slice of history through something as universal as cooking. I think it's quite fascinating how the internet can play such a crucial learning role in people's lives.

  7. social-media-ethics-automation.github.io
    1. Inauthenticity

      For this question, I think authenticity in the media takes many forms. For example, people often share a good life to express something or achieve some purpose, but the reality is often not so good. It's just the way people protect themselves in the media. The same applies to the authenticity of some brands. Some brand accounts present themselves on social media in a perfect way, but the truth is that those brands are not as well packaged as they appear.

    1. But in September last we sent near 80 ounces with extraordinary care and provision that we doubt not but that it will prosper and yield a plentiful return, there being sent also men skillful to instruct the planters for all things belonging to bring the silk to perfection.

      This effort demonstrates the economic ambitions driving colonization, beyond just settling new lands. It's a glimpse into the practical side of building a colony - trying to find ways to make it self-sustaining and profitable.

  8. social-media-ethics-automation.github.io
    1. These reactions make sense. Try to imagine the early days of human social life, before we started attaching our welfare to the land in terms of planting crops and building structures designed for permanence. Our nomadic forebears functioned in groups who coordinated in highly specialized ways to ensure the survival of the whole. Although such communities are often pictured as being prehistoric, primitive, and obsolete, we now know that such societies were and are highly sophisticated, often developing and depending on highly specified legal codes, some of which are still in use today in Bedouin communities in North Africa. Other nomadic groups, such as Roma people (which you may have heard derogatorily called ‘gypsies’), live within and around land-based nations and their various borders and laws. To ensure the survival of their ethnicity, cultures, and languages, they depend on being able to trust each other. The nations whose land we are living and studying on here also knew the importance of being able to know who can be trusted. These needs may not always be as obvious in highly individualized societies, like Post-Enlightenment Europe and the United States. The possibility for self-reliance has been created in part by making certain things dependable and institutionalized. You can go get yourself food without feeling like you have to trust anyone because you can just go to the store (which has to adhere to corporate legal requirements) and buy food (the supply of which is made stable by complex networks of growing, manufacturing, and transportation, covered by the assurances of FDA-compliant labeling) from people who work there (and are subject to labor laws and HR regulations, which, if they are not followed, means the staff person does not get paid, so their wellbeing depends on them doing their job). The need to trust other people is obscured by the many institutions that we have created. 
Institutions have ways, sometimes, of getting around human whims and surprises. But at the end of the day, it is still hugely important to us that we feel clear about who can be trusted, and for what.

      It’s interesting to think about how trust was such a fundamental part of survival in nomadic societies, and how that need for trust hasn’t gone away—it’s just evolved into more complex systems in modern life. I never really thought about it that way before, especially with things like buying food. Even though we don’t always feel like we need to trust the person behind the counter, so many systems rely on trust—like laws, regulations, and the people making sure everything runs smoothly. It makes me realize how much we depend on trust in ways we don’t always see.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      Now let's look at another example.

      This looks more complex, but we're going to use the same process to identify the correct answer.

      So this is a multi-select question and we're informed that we need to pick two answers, but we're still going to follow the same process.

      The first step is to check if we can eliminate any of the answers immediately.

      Do any of the answers not make sense without reading the question?

      Well, nothing immediately jumps out as wrong, but answer E does look strange to me.

      It feels like it's not a viable solution.

      I can see the word encryption mentioned and it's rare that I see Lambda and encryption mentioned in the same statement.

      So at this stage, let's just say that answer E is in doubt.

      So it's the least preferred answer at this point.

      So keep that in your mind.

      It's fine to have answers which you think are not valid.

      We don't know enough to immediately exclude it, but we can definitely say that we think there's something wrong with it.

      Given that we need to select two answers out of the five, we don't need to worry about E as long as there are two potentially correct answers.

      So let's move on.

      Now, the real step one is to identify what matters in the question text.

      So let's look at that.

      Now, the question is actually pretty simple.

      It gives you two requirements.

      The first is that all data in the cloud needs to be encrypted at rest.

      And the second is that any encryption keys are stored on premises.

      For any answers to be correct, they need to meet both of these requirements.

      So let's follow a similar process on the answer text first looking for any word fluff and then looking for keywords which can help identify either the correct answers or more answers that we can exclude.

      So the first three answers, they all state server side encryption, but the remaining two answers don't.

      And so the first thing that I'm going to try to do with this question is to analyze whether server side encryption means anything.

      Does it exclude the answers or does it point to those answers being correct?

      Well, server side encryption means that S3 performs the encryption and decryption operations.

      But depending on the type of server side encryption, it means that S3 either handles the keys or the customer handles the keys.

      But at this stage, using server side encryption doesn't mean that the answers are right or wrong.

      You can use it or you can't use it.

      That doesn't immediately point to correct versus incorrect.

      What we need to do is to look at the important keywords.

      Now, if we assume that we are excluding answer E for now unless we need it, then we have four different possible answers, each of which is using a different type of encryption.

      So I've highlighted these.

      So we've got S3 managed keys, SSE-S3, KMS managed keys, which is SSE-KMS, customer provided keys, which is SSE-C, and then using client side encryption.

      Now, the first requirement of the question states encryption at rest, and answers A, B, C, and D all provide encryption at rest.

      But it also states that encryption keys are to be stored on premises.

      Answers A and B use server side encryption where AWS handle the encryption process and the encryption keys.

      So SSE-S3 and SSE-KMS both mean that AWS are handling the encryption keys.

      And because of this, the keys are not stored on premises and so they don't meet the second criteria in the question.

      And this means that they're both invalid and can be excluded.

      Now, this leaves answers C, D and E, and we already know that we're assuming that we're ignoring answer E for now and only using it if we have to.

      So we just have to evaluate if C and D are valid.

      And if they are, then those are the answers that we select.

      So SSE-C means that the encryption is performed by S3 with the keys that the customer provides.

      So that works.

      It's a valid answer.

      So that means at the very least C is correct.

      It can be used based on the criteria presented by the question.

      Now, answer D suggests client side encryption, which means encrypting the data on the client side and just passing the encrypted data to S3.

      So that also works.

      So answers C and D are both potentially correct answers.

      So both answers C and D do meet the requirements of the question.
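      As an aside that goes beyond the lesson itself, the difference between the AWS-managed options and SSE-C is visible in the request: with SSE-C, the client supplies the 256-bit key on every call and S3 discards it after use, which is why the key can stay on premises. The sketch below is a hypothetical helper (not AWS SDK code) that builds the headers the S3 REST API documents for SSE-C:

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    """Build the request headers S3 expects for SSE-C (customer-provided keys).

    S3 performs the encryption server side, but the customer supplies the
    key with every request; S3 never stores it.
    """
    if len(key) != 32:  # SSE-C requires a 256-bit (32-byte) AES key
        raise ValueError("SSE-C keys must be exactly 32 bytes")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

# A randomly generated key stands in here for one fetched from an
# on-premises key store.
headers = sse_c_headers(os.urandom(32))
```

      In practice an SDK such as boto3 builds these headers for you when you pass the `SSECustomerKey` parameter to calls like `put_object`.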

      And because of this, we don't have to evaluate answer E at all.

      It's always been a questionable answer.

      And since the question only requires us to specify two correct answers, we can go ahead and exclude E.

      And that gives us the correct answers of C and D.

      So that's another question answered.

      And it's followed the same process that we used on the previous example.

      So really answering questions within AWS is simply following the same process.

      Try and eliminate any crazy answers.

      So any answers that you can eliminate based just on the text of those answers, then exclude them right away because it reduces the cognitive overhead of having to pick between potentially four correct answers.

      If you can eliminate the answers down to three or two, you significantly reduce the complexity of the question.

      The next step is to find what really matters in the question, find the keywords in both the preamble and the question.

      Then highlight and remove any question fluff.

      So anything in the question which doesn't matter, eliminate any of the words which aren't relevant technically to the product or products that you select.

      So this is something that comes with experience, being able to highlight what matters and what doesn't matter in questions.

      And the more practice that you do, the easier this becomes.

      Next, identify what really matters in the answers.

      So again, this comes down to identifying any shared common words and removing those and then identifying any of the keywords that occur in the answers.

      And then once you've got the keywords in the answers and the keywords in the questions, then you can eliminate any bad answers that occur in the question.

      Now, ideally at this point, what remains are correct answers.

      You might start off with four or five answers.

      You might eliminate two or three.

      The question asks for two correct answers.

      And that's it.

      You've finished the question.

      But if you have more answers than you need to provide, then you need to quickly select between what remains and you can do that by doing this keyword matching.

      So look for things which stand out.

      Look for things which aren't best practice according to AWS.

      Look for things which breach a timescale requirement in the question.

      Look for things which can't perform at the levels that the question requires or that cost too much based on the criteria and the question.

      Essentially, what you're doing is looking for that one thing that will let you eliminate any other answers and leave you with the answers that the question requires.

      Generally, when I'm answering most questions, it's a mixture between the correct answer jumping out at me and eliminating incorrect answers until I'm left with the correct answers.

      You can approach questions in two different ways.

      Either looking for the correct answers or eliminating the incorrect ones.

      Do whichever works the best for you and follow the same process throughout every question in the exam.

      The one big piece of advice that I can give is don't panic.

      Everybody thinks they're running out of time.

      Most people do run out of time.

      So follow the exam technique process that I detailed in the previous lesson to try and get you additional time, leave the really difficult questions until the end, and then just follow this logical process step by step through the exam.

      Keep an eye on the remaining amount of time that you have at every point through the exam and I know that you will do well.

      Most people fail the exam because of their exam technique, not their lack of technical capability.

      With that being said, though, that's everything I wanted to cover in this set of lessons.

      Good luck with the practice tests.

      Good luck with the final exam.

      And if you do follow this process, I know that you'll do really well.

      With that being said, though, go ahead and complete this video and then when you're ready, I'll look forward to joining you in the next.

    1. Welcome back and from the very start this course has been about more than just the technical side.

      So this is part one of a two-part lesson set and in this lesson I'll focus on some exam technique hints and tips that you might find useful in the exam.

      Now in terms of the exam itself it's going to have questions of varying levels of difficulty and this is also based on your own strengths and weaknesses.

      Conceptually though understand that on average the AWS exams will generally feel like they have 25% easy questions, 50% medium questions and 25% really difficult questions.

      Assuming that you've prepared well and have no major skill gaps this is the norm.

      For most people this is how it feels.

      The problem is that the order of difficulty is going to feel random, so you could have all of your easy questions at the start, at the end, or scattered among the other questions. Part of the technique of the AWS exams is handling question difficulty in the most efficient way possible.

      Now I recommend conceptually that my students think of exams in three phases.

      You want to spend most of your time on phase two.

      So structurally in phase one I normally try to go through all of the 65 questions and identify ones that I can immediately answer.

      You can use the exam tools including mark for review and just step through all of the questions on the exam answering anything that's immediately obvious.

      If you can answer a question within 10 seconds or have a good idea of what the answer will be and just need to consider it for a couple more seconds this is what I term a phase one question.

      Now the reason that I do these phase one questions first is that they're easy, they take very little time, and because you know the subject so well you have a very low chance of making a mistake.

      So once you've finished all of these easy questions the phase one questions what you're left with is the medium or yellow questions and the hard or red questions.

      My aim is that I want to leave the hard questions until the very end of the test.

      They're going to be tough to answer anyway, and so what I want to do at this stage, in phase two, is to go through whatever questions remain, so whatever isn't easy, and I'm looking to identify any red questions, mark them for review, and then just skip past them.

      I don't want to worry about any red questions in phase two.

      What phase two is about is powering through the medium questions.

      These will require some thought, but they don't scare you; they're not impossible.

      The medium questions so the yellow questions should make up the bulk of the time inside the exam.

      They should be your focus because these are the questions which will allow you to pass or fail the exam.

      For most people the medium questions represent the bulk of the exam questions.

      Generally your perception will be that most of the questions will be medium.

      There'll be some easy and some hard so you need to focus in phase two which represents the bulk of the exam on just these medium questions.

      So my suggestion generally is in phase two you've marked the hard questions for review and just skipped past them and then you focused on the medium questions.

      Now after you've completed these medium questions you need to look at your remaining time and it might be that you have 40 minutes left or you might only have four minutes or even less.

      In the remaining time that you have left you should be focusing on the remaining red questions the difficult questions.

      If you have 40 minutes left then you can take your time.

      If you have four minutes you might have to guess or even just click answers at random.

      Now both of these approaches are fine because at this point you've covered the majority of the questions.

      You've answered all of the easy questions and you've completed all of the medium questions.

      What remains are questions that you might get wrong regardless but because you've pushed them all the way through to the end of your time allocation whether you're considering them carefully and answering them because you have 40 minutes left or whether you're just answering them at random they won't impact your process in answering the earlier questions.

      So if you don't follow this approach, what tends to happen is you're focusing really heavily on the hard questions at the start of the exam, and that means that you run out of time towards the end. But if you follow this three-stage process, by this point all that you have left is a certain number of minutes and a certain set of really difficult questions, and you can take your time, safe in the knowledge that you've already hopefully passed the exam based on the easy and medium questions, and the hard ones are simply a bonus.

      Now at a high level this process is designed to get you to answer all of the questions that you're capable of answering as quickly as possible and leave anything that causes you to doubt yourself or that you struggle with to the end.

      So pick off the easy questions, focus on the medium and then finish up with the really hard questions at the end.

      I know that it sounds simple but unless you focus really hard on this process or one like it then your actual exam experience could be fairly chaotic.

      If you're unlucky enough to get hard questions at the start and you don't use a process like this it can really spoil your flow.

      So before we finish this lesson just some final hints and tips that I've got based on my own experiences.

      First if this is your first exam assume that you're going to run out of time.

      Most people enter the exam not having an understanding of the structure, and most people, myself included with my first exam, will run out of time.

      The way that you don't run out of time and the way that you succeed is to be efficient, have a process.

      Now assuming that you have the default amount of time you need to be aware that you have two minutes to read the question, read the answers and to make a decision.

      So this sounds like a lot but it's not a lot of time to do all of those individual components.

      You shouldn't be guessing on any answers until the end.

      If you're guessing on a question then it should be in the hard question category and you should be tackling this at the end.

      I don't want you guessing on any easy questions or any medium questions.

      If you're guessing then you shouldn't be looking at it until right at the very end.

      Another way of looking at this is if you are unsure about a question or you're forced to guess early on you need to be aware that a question that's later on so further on in the exam might prompt you to remember the correct answer for an earlier question.

      So if you do have to guess on any questions then use the mark for review feature.

      You can mark any question that you want for review as you go through the exam, and then at any point, or right at the end, you can see all the questions which are flagged for review and revisit them.

      So use that feature it can be used if you're doubtful on any of the answers or you want to prompt yourself as with the hard questions to revisit them toward the end of the exam.

      Now this should be logical but take all the practice tests that you can.

      One of my favorite test vendors in the space is the team over at TutorialsDojo.com.

      They offer a full range of practice questions for all of the major AWS exams so definitely give their site a look.

      One of the benefits of the exam questions created over at TutorialsDojo is that they are more difficult than the real exam questions so they can prepare you for a much higher level of difficulty and by the time you get into the exam you should find it relatively okay.

      So my usual method is to suggest that people take a course and then, once they've finished the course, take the practice test in the course, follow that up with the TutorialsDojo practice tests, and any questions they get wrong can identify areas that need additional study.

      So rinse and repeat that process, perform that additional study, redo the practice tests and when you're regularly scoring above 90% on those practice tests then you're ready to do the real exam.

      And those are all of my suggestions for exam technique.

      In the next lesson I want to focus on questions themselves because it's the questions and your efficiency during the process of answering questions which can mean the difference between success and failure.

      So go ahead complete this video and in the next video when you're ready we'll look at some techniques on how you can really excel when tackling exam questions.

    1. Welcome back and in this lesson I want to talk about a few related features of CloudFormation, and those are wait conditions, creation policies and the CFN signal tool.

      So let's jump in and get started straight away.

      Before we look at all of those features as a refresher I want to step through what actually happens with the traditional CloudFormation provisioning process and let's assume that we're building an EC2 instance and we're using some user data to bootstrap WordPress.

      Well if we do this, the process starts with logical resources within the template, and the template is used to create a CloudFormation stack.

      Now you know by now that it's the job of the stack to take the logical resources in a template and then create, update or delete physical resources to match them within an AWS account.

      So in this case it creates an EC2 instance within an AWS account.

      From CloudFormation's perspective, in this example it initiates the creation of an EC2 instance, so when EC2 reports back that the physical resource has completed provisioning, the logical resource changes to create complete, and that means everything's good, right?

      Well the truth is we just don't know.

      With simple provisioning when the relevant system EC2 in this case tells CloudFormation that it's finished then CloudFormation has no further access to any other information beyond the fact that EC2 is telling it that that resource has completed its provisioning process.

      With more complex resource provisions like this one where bootstrapping goes on beyond when the instance itself is ready then the completion state isn't really available until after the bootstrapping finishes and even then there's no built-in link to communicate back to CloudFormation whether that bootstrapping process was successful or whether it failed.

      An EC2 instance will be in a create complete state long before the bootstrapping finishes, and so even when the bootstrapping is finished, if it failed, the resource itself still shows create complete.

      Creation policies, wait conditions and CFN signal provide a few ways that we can get around this default limitation and allow systems to provide more detailed signals on completion or not to CloudFormation.

      So let's have a look at how this works.

      The way that this enhanced signaling is done is via the CFN signal command which is included in the AWS CFN bootstrap package.

      The principle is simple enough you configure CloudFormation to hold or pause a resource and I'll talk more about the ways that this is done next but you configure CloudFormation to wait for a certain number of success signals.

      You want to make it so that resources such as EC2 instances tell CloudFormation that they're okay.

      So in addition to configuring it to wait for a certain number of success signals you also configure a timeout.

      This is a value in hours, minutes and seconds within which those signals can be received.

      Now the maximum permitted value for this is 12 hours and once configured it means that a logical resource such as an EC2 instance will just wait.

      It won't automatically move into a create complete state once the EC2 system says that it's ready.

      Instead if the number of success signals that you define is received by CloudFormation within the timeout period then the status of that resource changes into create complete and the stack process continues with the knowledge that the EC2 instance really is finished and ready to go because on the instance you've configured something to explicitly send that signal or signals to CloudFormation.

      CFN signal is a utility running on the instance itself actually sending a signal back to the CloudFormation service.

      Now if CFN signal communicates a failure signal suggesting that the bootstrapping process didn't complete successfully then the creation of the resource in the stack fails and the stack itself fails.

      So that's important to understand CFN signal can send success signals or failure signals and a failure signal explicitly fails the process.

      Now another possible outcome of this is the timeout period can be reached without the required number of success signals and in this situation CloudFormation views this as an implicit failure.

      The resource being created fails and then logically the stack fails the entire process that it's doing.

      Now the actual thing which is being signaled using CFN signal is a logical resource, specifically a resource such as EC2 or auto scaling groups which is using a creation policy, or a specific type of separate resource called a wait condition resource.

      Now AWS suggests that for provisioning EC2 and auto scaling groups you should use a creation policy because it's tied to that specific resource that you're handling but you might have other requirements to signal outside of a specific resource.

      For example, if you're integrating CloudFormation with an external IT system of some kind, in that case you might choose to use a wait condition, and next I want to visually step through how both of these work, because it will make a lot more sense when you see the architecture visually.

      Let's start with the example of an auto scaling group which uses a launch configuration to launch three EC2 instances.

      These are within a template and that's used to create a stack.

      Because I'm using a creation policy here a few things happen which are different to how CloudFormation normally functions.

      First the creation policy here adds a signal requirement and timeout to the stack.

      In this case the stack needs three signals and it has a timeout of 15 minutes to receive them.

      So the EC2 instances are provisioned but because of the creation policy the auto scaling group doesn't move into a create complete state as normal.

      It waits.

      It can't complete until the creation policy directive is fulfilled.

      The user data for the EC2 instances contains some bootstrapping and then this CFN signal statement at the bottom.

      So once the bootstrapping process, whatever it is, has been completed, and let's say that it's installing the Categorum application, well, the CFN signal tool signals the resource, in this case the auto scaling group, that it's completed the build.

      So this CFN signal that's at the bottom left of your screen this is an actual utility which runs on the EC2 instance as part of the bootstrapping process.

      And this causes each instance to signal once and the auto scaling group resource in the stack requires three of these signals within 15 minutes.
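      As a sketch of how that might look in a template, here is a minimal fragment assuming hypothetical resource names (AppASG, AppLaunchConfig) and a placeholder AMI ID:

      ```yaml
      # Sketch only: AppASG, AppLaunchConfig and the AMI ID are hypothetical.
      Resources:
        AppASG:
          Type: AWS::AutoScaling::AutoScalingGroup
          CreationPolicy:
            ResourceSignal:
              Count: 3        # one success signal expected per instance
              Timeout: PT15M  # ISO 8601 duration: 15 minutes
          Properties:
            MinSize: "3"
            MaxSize: "3"
            DesiredCapacity: "3"
            AvailabilityZones: !GetAZs ""
            LaunchConfigurationName: !Ref AppLaunchConfig
        AppLaunchConfig:
          Type: AWS::AutoScaling::LaunchConfiguration
          Properties:
            ImageId: ami-0123456789abcdef0   # placeholder
            InstanceType: t3.micro
            UserData:
              Fn::Base64: !Sub |
                #!/bin/bash -xe
                # ... bootstrapping commands go here ...
                # Signal CloudFormation using the exit code of the bootstrapping
                /opt/aws/bin/cfn-signal -e $? \
                  --stack ${AWS::StackName} \
                  --resource AppASG \
                  --region ${AWS::Region}
      ```

      If fewer than three success signals arrive within the 15 minutes, or any signal reports failure, the AppASG resource and the stack move into a failed state.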

      If it gets them all and assuming that they're all success signals then the stack moves into a create complete state.

      If anything else happens, so maybe a timeout occurs, or maybe one of the three instances has a bug and signals a failure, then in any of those cases the stack will move into a create failed state.

      Creation policies are generally used for EC2 instances or for auto scaling groups and if you do any of the advanced demo lessons in any of my courses you're going to see that I make use of this feature to ensure resources which are being provisioned are actually provisioned correctly before moving on to the next stage.

      Now there are situations when you need some additional functionality maybe you want to pass data back to cloud formation or want to put general wait states into your template which can't be passed until a signal is received and that's where wait conditions come in handy.

      Wait conditions operate in a similar way to creation policies.

      A wait condition is a specific logical resource not something defined in an existing resource.

      A wait condition can depend on other resources and other resources can also depend on a wait condition so it can be used as a more general progress gate within a template a point which can't be passed until those signals are received.

      A wait condition will not proceed to create complete until it gets its signals or the timeout configured on that wait condition expires.

      Now a wait condition relies on a wait handle and a wait handle is another logical resource whose sole job is to generate a pre-signed URL which can be used to send signals to.

      It's pre-signed so that whatever is using it doesn't need to use any AWS credentials; they're included in the pre-signed URL.
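      A minimal sketch of that pairing, again with hypothetical resource names, might look like this:

      ```yaml
      # Sketch only: AppWaitHandle, AppWaitCondition and AppServer are hypothetical.
      Resources:
        AppWaitHandle:
          Type: AWS::CloudFormation::WaitConditionHandle  # generates the pre-signed URL
        AppWaitCondition:
          Type: AWS::CloudFormation::WaitCondition
          DependsOn: AppServer     # gate: wait until AppServer exists before waiting for signals
          Properties:
            Handle: !Ref AppWaitHandle
            Count: 1               # number of success signals required
            Timeout: "900"         # seconds, i.e. 15 minutes
      ```

      The pre-signed URL is available elsewhere in the template as !Ref AppWaitHandle, so it can be passed into user data or handed to an external system.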

      So let's say that we have an EC2 instance or external server.

      These are responsible for performing a process maybe some final detailed configuration or maybe they assign licensing something which has to happen after a part of the template but before the other part.

      So these generate a JSON document which contains some information, say about some amazing occurrence.

      This is just an example it can be as complex or as simple as needed.

      This document is passed back as the signal; it has a status, a reason, a unique ID and some data.
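      The signal document itself is a small JSON object in a documented shape; the field values here are illustrative:

      ```json
      {
        "Status": "SUCCESS",
        "Reason": "Configuration Complete",
        "UniqueId": "ID1234",
        "Data": "Application has completed configuration."
      }
      ```

      Sending it is just an HTTP PUT of that document to the wait handle's pre-signed URL, for example: curl -T signal.json "<pre-signed wait handle URL>".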

      Now what's awesome about this is that not only does this signal allow resource creation to be paused and then continued when this event has occurred but the data which has passed back can also be accessed elsewhere in the template.

      We can use the GetAtt function to query for the data attribute of the wait condition and get access to the details of the signal.

      Now this allows a small amount of data exchange and processing between whatever is signaling and the CloudFormation stack.

      So you can inject specific data about a given event into the JSON document, send this back as a signal and then access this elsewhere in the CloudFormation stack, and this might be useful for certain things like licensing, or to get additional status information about the event from the external system.
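      For example, the data passed back in those signals could be surfaced as a stack output; AppWaitCondition here is a hypothetical wait condition resource name:

      ```yaml
      Outputs:
        SignalData:
          Description: JSON map of UniqueId to Data from the received signals
          Value: !GetAtt AppWaitCondition.Data
      ```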

      And that's wait conditions.

      In many ways they're just like creation policies.

      They have the same concept.

      They allow a specific resource creation to be paused, not allowing progress until signaling is received.

      Only, wait conditions are actually a separate resource, and can use some more advanced data flow features, like I'm demonstrating here.

      AWS recommend creation policies for most situations because they're simpler to manage, but as you create more complex templates you might well need to use wait conditions as well, and for the exams it's essential that you understand both creation policies and wait conditions, which is why I wanted to go into detail on both.

      Now that's all of the theory that I wanted to cover about creation policies and wait conditions and these are both things that you're going to get plenty of practical experience of in various demo lessons in all of my courses but I wanted to cover the theory and the architecture so that you can understand them when you come across them in those demos.

      For now though thanks for watching go ahead and complete this video and when you're ready I'll look forward to you joining me in the next.

    1. By now, skateboarding wasn’t just a sport I was doing to copy the cool kids. I was truly interested in the sport. I even had hopes and dreams of becoming a professional skateboarder. That became my life goal. I loved skateboarding so much. I pictured myself doing amazing tricks in front of a cheering crowd, just like I saw Tony Hawk do in some videos. I pictured the admiration on their faces, and it was awesome.

      He says it was truly his passion but it’s not- he’s just now imagining another narcissistic fantasy. :(

  9. academic-oup-com.ezp1.lib.umn.edu
    1. it is impossible to feel entirely separate from the moment. The speakers belt out the urgent beats of Mongo Santamaria’s ’69 rendition of The Temptations’ 1968 hit “Cloud 9.” I tap into a musical groove, imagining my parents as teens dancing to this song or the original when it came out. This cross-generational connection flows through the speakers and I ride that feeling, becoming a part of the cypher’s many conversations.

      to participate as a viewer also means to draw from your own experiences (conscious or not) in your response. there is some psychic exchange between performer, viewer, movement, music.

      As an analogy, it would be hard to gain an understanding of a dish I've never had from just the raw ingredients eaten separately. Perhaps there is needed infusing, mellowing by roasting, aromas from toasting, removal of seeds and stems, fermentation, baking, stewing that makes the experience nourishing to more than just the stomach. And perhaps a recipe is incomplete without its original name, a visit to its place of procurement, attachment to its means of production, and the occasion for which it is prepared as a remedy for heartache, celebration, and expression of love.

    1. During my first field experience in Brazil, I learned firsthand how challenging cultural relativism could be. Preferences for physical proximity and comfort talking about one’s body are among the first differences likely to be noticed by U.S. visitors to Brazil. Compared to Americans, Brazilians generally are much more comfortable standing close, touching, holding hands, and even smelling one another and often discuss each other’s bodies. Children and adults commonly refer to each other using playful nick-names that refer to their body size, body shape, or skin color. Neighbors and even strangers frequently stopped me on the street to comment on the color of my skin (It concerned some as being overly pale or pink—Was I ill? Was I sunburned?), the texture of my hair (How did I get it so smooth? Did I straighten my hair?), and my body size and shape (“You have a nice bust, but if you lost a little weight around the middle you would be even more attractive!”).

      I can relate to the author's experience in Brazil. I grew up Vietnamese American, which exposed me to two different cultures and experiences. Even now, my family always comments on my body and face as if it's a normal topic of conversation. I hear comments about my weight, hair, complexion every day from my mom, specifically. Sometimes they can be harsh and rarely are they positive, but I've gotten used to it. I've always told myself it's just Vietnamese culture. American culture is very different. I learned that from a young age. I remember I made a comment on my classmate's weight when I was in the third grade and I hurt his feelings and had to speak to the teacher after class. I didn't fully understand then the impact my words had on him, but looking back, I realized the difference between what I'm used to at home and what is the cultural norm here in the states. I've had many other eye-opening experiences like this throughout the years. I understand now that there are different values and beliefs between the two cultures and I must learn to respect both.

    2. Despite those gains, somemembers of the community did not embrace indigenous status becausebeing considered Indian had a pejorative connotation in Brazil. Manyfelt that the label stigmatized them by associating them with a poor andmarginalized class of Brazilians. Others resisted the label because oflong-standing family and inter-personal conflicts in the community

      One of the most interesting things about this passage is the internal conflict within the Jenipapo-Kanindé community about embracing an "Indian" identity. It made me think about how complicated identity can be when it’s tied to social and political factors. On one hand, being recognized as indigenous brought real benefits, like better infrastructure and resources, but on the other hand, the stigma associated with the label made some people reject it. This shows how identity isn’t just personal or cultural, but is also shaped by outside pressures and perceptions. It makes me wonder: How do communities decide whether to take on an identity when it comes with both benefits and social stigma? And how much influence do outside groups, like researchers for example, have in shaping or even pressuring communities to adopt certain identities? Is it empowering, or does it create more conflict? I also think this connects to bigger questions about how marginalized groups reclaim or redefine their identities. For example, how do they balance wanting recognition for their heritage with dealing with stereotypes and discrimination? Personally, it makes me reflect on how external labels, whether cultural, ethnic, or something else, can impact how people see themselves or their place in society.

  10. social-media-ethics-automation.github.io
    1. Alex Norcia. Brand Twitter Is Absurd, and It Will Only Get Worse. Vice, February 2019. URL: https://www.vice.com/en/article/pangw8/brand-twitter-is-absurd-and-it-will-only-get-worse (visited on 2023-11-24).

      Social media brands are getting noticed for their human-like personas on the internet. There are pros, such as building engagement; however, it also raises ethical questions about these brands using serious issues, such as depression, to grab their target audience. There is some fear that young adults with mental health issues or isolation are turning to brands to relate to or create connections with. It's often hard to tell if these are real people or just simple marketing tactics to create more sales off of emotional engagement.

    1. On a website, check the name of the website and its articles for clues that they contain material relevant to your research question.

      This is important just as the P in PARRS. We need to skim through and find clues if it’s relevant to the subject

    1. Check your hearing Start 2025 by caring for your hearing health. It’s free and takes just 3 minutes. Check your hearing We are RNID

      High contrast. This website provides enough contrast between text and its background which is extremely beneficial for those with low vision and who do not have contrast-enhancing technology. This is an example of a good web accessibility practice as it adheres to the WCAG Contrast requirements, as it outlines that large-scale text and images have a contrast ratio of at least 4.5:1.

3 hrs agoUS & CanadaLIVETrump threatens to disband emergency agency during visit to hurricane-hit North CarolinaThe new president repeated criticisms of the Federal Emergency Management Agency (Fema) while visiting areas devastated by Hurricane Helene.Trump revokes security protection for Covid adviser FauciThe former top US health official has faced death threats since leading the country's COVID-19 response. 3 hrs agoUS & CanadaHamas names next Israeli hostages set to be releasedFour are due to be freed in a second exchange of hostages for Palestinian prisoners held in Israel.5 hrs agoUkraine claims drone strike on Russian oil refineryRussia says it shot down more than 120 drones overnight, in what would be one of the largest attacks of the war.3 hrs agoEuropeGiant pandas chew bamboo at Washington DC debutQing Bao and Bao Li made their first appearance at the Smithsonian National Zoo after three months of adapting to life in the US.3 hrs agoChina hands death sentence to man who killed Japanese boyIt is the latest punishment imposed by the Chinese authorities following a series of mass attacks.9 hrs agoAsiaMarilyn Manson sexual assault investigation dropped by lawyersProsecutors said the allegations against Manson exceeded the statute of limitations and they could not prove them "beyond a reasonable doubt". 46 mins agoAdvertisementOnly from the BBCThe man who revealed Auschwitz's atrocities to the worldIn 1940, Witold Pilecki infiltrated the camp and began smuggling out reports of what was happening inside, at the same time inspiring an underground resistance movement from within.9 hrs agoTravelWhat Trump has done since taking powerThe Republican has started his second term at a fast pace. 
Here's a handy guide to what he's done so far.10 hrs agoUS & CanadaArts in MotionInside the studio of Britain's most celebrated sculptorThe BBC visits the studio of British sculptor Antony Gormley to learn how art evolves as a communal practice.See moreMore newsBank of Japan raises rates to highest in 17 years15 hrs agoBusinessCaptain Cook statue vandalised ahead of Australia Day18 hrs agoAustraliaAfghan refugees feel 'betrayed' by Trump order blocking move to US22 hrs agoAsiaLIVEMan dies as Storm Éowyn batters UK and Ireland leaving one million without power'Hot garbage': Australians react to smell of 'corpse flower' in bloom16 hrs agoAustraliaMust watchDrone footage shows Canadian cargo ship trapped in iceThe ship became stuck on Lake Erie while departing Buffalo, New York on 22 January. 5 hrs agoUS & CanadaThe future of self-driving vehiclesBBC Click checks out the latest self-driving vehicle innovations on show at CES 2025 in Las Vegas.13 hrs agoInnovationWatch in 83 seconds: Storm Éowyn sweeps into ScotlandScotland has been battered with wind gusts reaching 100mph, causing disruption and damage across the country. 
7 hrs agoScotland'You can always become a state', Trump tells Canada at DavosTrump told business and political leaders at the World Economic Forum that firms manufacturing in the US would enjoy "among the lowest taxes of any nation on Earth".1 day agoUS & CanadaTaxi dashcam shows assailant before Southport attackTaxi dashcam footage shows the moment Axel Rudakubana arrived at a Taylor Swift-themed dance class.1 day agoUKWatch Air National Guard tackle Hughes wildfire in LAThe new wildfire grew quickly overnight and was dubbed an "immediate threat to life" by the California Fire Department.1 day agoUS & CanadaLGBT couples say 'I do' as Thailand legalises marriage equalityThe BBC spoke to couples who married at a Bangkok mall to ask what the legalisation means to them.2 days agoAsiaWatch: Crossbow killer leaving murder sceneKyle Clifford is seen carrying a large white sheet which had the crossbow underneath.1 day agoBeds, Herts & BucksAdvertisementCultureFrom The Apprentice to Wicked, the 2025 Oscars nominees are the most political everThe films nominated this year take on contentious topics with ferocious energy.See moreHealth and wellness'Grief apps' are turning death into dataPeople are turning to 'grief apps' to cope with the loss of family and friends. But the new world of death data raises troubling questions.See moreWhy you feel exhausted all the time17 Jan 2024FutureHow to properly brush your teeth1 Apr 2024FutureHow to keep fit during the winter months1 day agoFutureDiscover more from the BBCTech DecodedGet timely, trusted tech news from BBC correspondents around the world, every Monday and Friday.Download the BBC appClick here to download the BBC app for Apple and Android devices.US Politics UnspunNo noise. No agenda. Just expert analysis of the issues that matter most from Anthony Zurcher, every Wednesday.Listen to The Global StoryGlobal perspectives on one big story. 
In-depth insights from the BBC, the world's most trusted international news provider.Register for a BBC accountDon't have time to read everything right now? Your BBC account lets you save articles and videos for later. Subscribe to the Essential ListThe week’s best stories, handpicked by BBC editors, in your inbox every Tuesday and Friday.Sign up to News BriefingNews and expert analysis for every schedule. Get the morning and evening editions of our flagship newsletter in your inbox.US & Canada newsNew fires erupt in southern California ahead of Trump visit 3 hrs agoUS & CanadaOpening statements begin in A$AP Rocky's trial in Los Angeles6 hrs agoUS & CanadaTrump pardons anti-abortion activists ahead of rally9 hrs agoUS & CanadaEven before the LA fires, Californians fled for 'climate havens'Some are moving to so-called "climate havens" in the Great Lakes region to avoid climate disasters. 20 hrs agoUS & CanadaIsrael-Gaza warStories of the hostages taken by Hamas from Israel6 hrs agoMiddle EastWho are Israeli hostages released and rescued from Gaza?6 hrs agoMiddle EastWhy are Israel and Hamas fighting in Gaza?3 days agoMiddle EastKey events that led to Israel-Hamas ceasefire deal in GazaThe ceasefire agreement in the Gaza war follows 15 months of fighting between Israel and Hamas.6 days agoMiddle EastWar in UkraineUkraine claims drone strike on Russian oil refinery3 hrs agoEurope'I fled Ukraine as a refugee - now I've won investment on Dragons' Den'6 hrs agoWest Yorkshire'Unbelievable' race track convoy for Ukraine2 days agoEnglandDark humour for dark times: How comedy helps in UkraineUkrainian stand-up comedians say humour can help people cope and raise money for the war effort.2 days agoEuropeMore world newsMan dies after tree falls on his car during Storm Éowyn49 mins agoEuropeTens of thousands protest in Slovakia against PM Fico 1 hr agoEuropeBulgarian woman based in UK denies spying for Russia2 hrs agoEuropeRebels kill DR Congo governor as fighting intensifies 
Conflict in eastern DR Congo's has forced 400,000 people to flee their homes this year alone, the UN says.3 hrs agoAfricaVideoWhy being a 'loner' could be good for youEmerging research suggests that spending time alone is beneficial for our health and creativity.See moreSportMan City captain Walker completes AC Milan loan move2 hrs agoMan CityLIVEEFL: Hull City stun Sheffield United to move out of drop zoneDjokovic unsure of Australian Open return10 hrs agoTennisEmbracing the chaos - breaking down Bournemouth's riseWith Bournemouth challenging for a top-four place in the Premier League this season, BBC Sport analyses how the Cherries are doing it.5 hrs agoBournemouthBusinessTrump urged not to put massive tariffs on UK4 hrs agoBusinessBank of Japan raises rates to highest in 17 years15 hrs agoBusinessClampdown on fake Google reviews announced7 hrs agoTechnologyUS doesn't need Canadian energy or cars, says TrumpSpeaking to business leaders in Davos, Trump also repeated his jibe that Canada could become a US state and would avoid tariffs if it did. 
1 day agoUS & CanadaTech'A mockery': Trump's new meme-coin sparks anger in crypto world1 day agoUS & CanadaHow to make oxygen on the moon22 hrs agoChatGPT back online after outage which hit thousands worldwide1 day agoTechnologyScammers using my face to con people, warns Namibia's ex-first ladyIn a video message Monica Geingos warns people not to be duped into investing in fake schemes.1 day agoAfricaScience & health'I had anti-government views so they treated me for schizophrenia'2 days agoAsiaUnitedHealthcare names new boss after former CEO killed21 hrs agoUS & CanadaPurdue and Sackler family agree $7.4bn opioid settlement1 day agoUS & CanadaHair loss drug finasteride 'biggest mistake of my life'Some online sites prescribe a potentially risky hair loss drug without consistent safety checks, BBC finds.21 hrs agoHealthCultureLola Young's Messy hits number one: My songs are as real as it gets4 hrs agoCultureHarry v the tabloids. What next, if anything? 4 hrs agoUKTwists, turns and betrayals: The standout moments from The Traitors Series 34 hrs agoCultureHow Scandinavian dressing can make us happierNordic style is easy to wear – and can even cheer us up, say its fans. 
As Copenhagen Fashion Week approaches, we explore the fun, functional Scandi-girl style movement.1 day agoCultureArtsAlan Cumming: 'I'm the Pied Piper of Pitlochry'2 days agoTayside & Central ScotlandHow a performance lab is putting musicians to the test8 days agoInnovationThe 1645 painting that unlocks Trump's portrait4 days agoCultureHow to transform your home with art"It's about what speaks to you": Displaying paintings, prints, textiles and sculptures can all help create a fresh living space for the new year – here's how, according to the experts.9 Jan 2025CultureTravelThe nation that wants you to embrace your sleepy side1 day agoTravelA beauty mogul's guide to luxury self-care in Dubai2 days agoTravelA new life for abandoned US railway stations2 days agoTravelHow to celebrate Jane Austen on her 250th birthdayAs travellers flock to the English countryside to celebrate the renowned novelist, we asked experts to weigh in on the best Austen-themed festivals, reenactments and balls of the year.3 days agoTravelEarthRussia suffering 'environmental catastrophe' after oil spill in Kerch Strait22 hrs agoEuropeGiant iceberg on crash course with island, putting penguins and seals in danger2 days agoScience & EnvironmentStinky bloom of 'corpse flower' enthrals thousands2 days agoAustraliaWhat is hiding under Greenland's ice?The riches thought to lie beneath Greenland's icy terrain have been coveted for more than a century. But how easy are they to access, and will climate change make any difference?2 days agoFutureBritish Broadcasting CorporationHomeNewsSportBusinessInnovationCultureArtsTravelEarthVideoLiveAudioWeatherBBC ShopBBC in other languagesFollow BBC on:Terms of UseAbout the BBCPrivacy PolicyCookiesAccessibility HelpContact the BBCAdvertise with usDo not share or sell my infoContact technical supportCopyright 2025 BBC. All rights reserved.  The BBC is not responsible for the content of external sites. Read about our approach to external linking. 
{"props":{"pageProps":{"page":{"@\"home\",":{"sections":[{"type":"vermont","content":[{"title":"Hamas names next Israeli hostages set to be released","href":"/news/articles/c8xqv5rqpyjo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737739442096,"firstUpdated":1737714764216,"topics":[]},"tags":[],"description":"Four are due to be freed in a second exchange of hostages for Palestinian prisoners held in Israel.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/9e9c/live/d4b0ed30-da6d-11ef-bc01-8f2c83dad217.jpg","altText":"A composite photo of Karina Ariev, Naama Levy, Daniella Gilboa and Liri Albag","width":976,"height":549}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c8xqv5rqpyjo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Trump threatens to disband emergency agency during visit to hurricane-hit North Carolina","href":"https://www.bbc.com/news/live/ce8y3yk00yjt","isLiveNow":true,"metadata":{"contentType":"live","subtype":"news","lastUpdated":1737935160000,"firstUpdated":1737681720000,"topics":[],"isLiveNow":true},"tags":[],"description":"The new president repeated criticisms of the Federal Emergency Management Agency (Fema) while visiting areas devastated by Hurricane Helene.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/ace/standard/480/cpsprodpb/bab4/live/78867e00-da7a-11ef-902e-cf9b84dc1357.jpg","altText":"Trump speaks to reporters in a fire station","width":1000,"height":563}}},"relatedUrls":[],"id":"urn:bbc:tipo:topic:ce8y3yk00yjt","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Trump revokes security protection for Covid adviser Fauci","href":"/news/articles/cvg49jz7v8no","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737745672803,"firstUpdated":1737745672803,"topics":["US \u0026 Canada"]},"tags":[],"description":"The former top US health official has 
faced death threats since leading the country's COVID-19 response. ","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/56bd/live/0863dd50-da86-11ef-bf21-81c5146ef2ab.jpg","altText":"Anthony Fauci","width":1005,"height":565}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cvg49jz7v8no","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Ukraine claims drone strike on Russian oil refinery","href":"/news/articles/cvg84r5g8d0o","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737744509515,"firstUpdated":1737717548345,"topics":["Europe"]},"tags":[],"description":"Russia says it shot down more than 120 drones overnight, in what would be one of the largest attacks of the war.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/943e/live/dc451080-da4f-11ef-902e-cf9b84dc1357.png","altText":"A man stands in the foreground as a fireball erupts at the Ryazan oil refinery","width":821,"height":462}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cvg84r5g8d0o","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Giant pandas chew bamboo at Washington DC debut","href":"/news/videos/cy5kw1nrnqpo","isLiveNow":false,"metadata":{"contentType":"video","subtype":"clip news","lastUpdated":1737743872922,"firstUpdated":1737743872922,"topics":[]},"tags":[],"description":"Qing Bao and Bao Li made their first appearance at the Smithsonian National Zoo after three months of adapting to life in the US.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/3aba/live/76d42b60-da80-11ef-a37f-eba91255dc3d.jpg","altText":"A giant panda chews bamboo leaves.","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cy5kw1nrnqpo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"China hands death sentence to man who killed Japanese 
boy","href":"/news/articles/c9d5gq5420yo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737721957316,"firstUpdated":1737697402731,"topics":["Asia"]},"tags":[],"description":"It is the latest punishment imposed by the Chinese authorities following a series of mass attacks.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/460d/live/aee99410-da10-11ef-8059-f5274e04c93f.jpg","altText":"The Chinese national flag is raised in front of a court in China.","width":1012,"height":569}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c9d5gq5420yo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Marilyn Manson sexual assault investigation dropped by lawyers","href":"/news/articles/cge7q0z5nz0o","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737753097272,"firstUpdated":1737753097272,"topics":[]},"tags":[],"description":"Prosecutors said the allegations against Manson exceeded the statute of limitations and they could not prove them \"beyond a reasonable doubt\". 
","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/e261/live/c7b30070-da84-11ef-bc01-8f2c83dad217.jpg","altText":"Brian Warner aka Marilyn Manson","width":1024,"height":682}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cge7q0z5nz0o","brandName":"","seriesName":"","seriesId":"","brandId":""}],"collectionId":"news-home-top-stories-us-tipo","sectionTitleProps":{},"paginationData":{"page":0,"pageSize":7,"total":16},"title":"","summary":"","disableAutoPlay":false,"innerCollections":[],"playlistMode":false},{"type":"advertisement","model":{"adType":"horizontal"},"incrementedType":"mid_1"},{"type":"indiana","content":[{"title":"The man who revealed Auschwitz's atrocities to the world","href":"/travel/article/20250113-the-man-who-volunteered-for-auschwitz","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737723721145,"firstUpdated":1737723721145,"topics":["Travel"]},"tags":[],"description":"In 1940, Witold Pilecki infiltrated the camp and began smuggling out reports of what was happening inside, at the same time inspiring an underground resistance movement from within.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kj8nsh.jpg","altText":"Witold Pilecki, Auschwitz (Credit: Archive of The State Museum Auschwitz-Birkenau in Ou015bwiu0119cim)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:travel/article/20250113-the-man-who-volunteered-for-auschwitz","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"What Trump has done since taking power","href":"/news/articles/ced961egp65o","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737720366262,"firstUpdated":1731466675119,"topics":["US \u0026 Canada"]},"tags":[],"description":"The Republican has started his second term at a fast pace. 
Here's a handy guide to what he's done so far.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/684b/live/9c218b60-d8e1-11ef-902e-cf9b84dc1357.png","altText":"Donald Trump, seated, holds up an executive order that he has signed. The photo illusrtration has red and blue in the background with white stripes","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:ced961egp65o","brandName":"","seriesName":"","seriesId":"","brandId":""}],"collectionId":"59f59883-43b2-4844-b38d-7ac28b1de830","sectionTitleProps":{},"paginationData":{"page":0,"pageSize":2,"total":50},"title":"Only from the BBC","summary":"","disableAutoPlay":false,"innerCollections":[],"playlistMode":false},{"type":"montana","content":[{"title":"Inside the studio of Britain's most celebrated sculptor","href":"https://www.bbc.com/arts/arts-in-motion?id=p0kktcrp","isLiveNow":false,"metadata":{"contentType":"customCard","topics":[]},"tags":[],"description":"The BBC visits the studio of British sculptor Antony Gormley to learn how art evolves as a communal practice.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920x1080/p0kktfyp.jpg","altText":"Inside the studio of Britain's most celebrated sculptor","width":1920,"height":1080}}},"relatedUrls":[],"id":"custom-card:b8f01f89-2948-4097-96c3-9fc3ffbea163:0","brandName":"","seriesName":"","seriesId":"","brandId":""}],"collectionId":"","sectionTitleProps":{"link":"https://www.bbc.com/arts/arts-in-motion","linkType":"internal"},"paginationData":{"page":0,"pageSize":1,"total":1},"title":"Arts in Motion","summary":"","disableAutoPlay":false,"isSponsored":true,"style":"dark","innerCollections":[],"playlistMode":false},{"type":"ohio","content":[{"title":"Bank of Japan raises rates to highest in 17 
years","href":"/news/articles/cpqln2gwvxlo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737702958298,"firstUpdated":1737690724715,"topics":["Business"]},"tags":[],"description":"The move comes hours after economic data showed prices rose last month at the fastest pace in 16 months.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/1a6a/live/4a72b310-da04-11ef-bd8f-090bb3c281f3.jpg","altText":"Pedestrians carry shopping bags in the Harajuku district of Tokyo, Japan.","width":3157,"height":1776}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cpqln2gwvxlo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Captain Cook statue vandalised ahead of Australia Day","href":"/news/articles/c70qxq49ejlo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737689584820,"firstUpdated":1737680805538,"topics":["Australia"]},"tags":[],"description":"The statue in Sydney has been splashed with red paint and had its nose and hand removed.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/a840/live/fe4e0350-da02-11ef-902e-cf9b84dc1357.jpg","altText":"Red paint covers a statue of Captain Cook in Sydney","width":1261,"height":710}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c70qxq49ejlo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Afghan refugees feel 'betrayed' by Trump order blocking move to US","href":"/news/articles/cz0l97ee2xmo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737677476252,"firstUpdated":1737677476252,"topics":["Asia"]},"tags":[],"description":"Several Afghans tell the BBC the US has \"turned its back\" on them, despite years of working alongside Americans in 
Afghanistan.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/f2d3/live/62a8fee0-d9c2-11ef-9ab6-81e42a41155d.jpg","altText":"A group of people including women and children arriving at Dulles airport after fleeing the Taliban takeover of Afghanistan August 27, 2021.","width":1024,"height":576}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cz0l97ee2xmo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Man dies as Storm Éowyn batters UK and Ireland leaving one million without power","href":"https://www.bbc.com/news/live/cy5kwlpzlnkt","isLiveNow":true,"metadata":{"contentType":"live","subtype":"news","lastUpdated":1738005360000,"firstUpdated":1737695880000,"topics":[],"isLiveNow":true},"tags":[],"description":"One of the strongest storms in decades leads to cancelled flights, suspended rail services, and closed schools.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/ace/standard/480/cpsprodpb/6c57/live/e7ddee60-da8d-11ef-bc01-8f2c83dad217.jpg","altText":"A man walks his dog in front of a police car, which is parked in front of a fallen tree","width":1200,"height":674}}},"relatedUrls":[],"id":"urn:bbc:tipo:topic:cy5kwlpzlnkt","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"'Hot garbage': Australians react to smell of 'corpse flower' in bloom","href":"/news/videos/c07kdd1g0e7o","isLiveNow":false,"metadata":{"contentType":"video","subtype":"clip news","lastUpdated":1737699193491,"firstUpdated":1737699193491,"topics":["Australia"]},"tags":[],"description":"Almost 20,000 people visited Sydney's Botanic Gardens to catch a whiff of a rare plant in bloom.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/fb09/live/2ae4f3e0-da19-11ef-902e-cf9b84dc1357.jpg","altText":"Split screen of \"corpse flower\" in bloom and woman scrunching her nose in 
disgust","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c07kdd1g0e7o","brandName":"","seriesName":"","seriesId":"","brandId":""}],"collectionId":"news-home-top-stories-asia-tipo","sectionTitleProps":{"link":"https://www.bbc.com/news","linkType":"internal"},"paginationData":{"page":0,"pageSize":5,"total":9},"title":"More news","summary":"","disableAutoPlay":false,"innerCollections":[],"playlistMode":false},{"type":"texas","content":[{"title":"Drone footage shows Canadian cargo ship trapped in ice","href":"/news/videos/cy0pw3g02p8o","isLiveNow":false,"metadata":{"contentType":"video","subtype":"clip news","lastUpdated":1737737212161,"firstUpdated":1737737212161,"topics":["US \u0026 Canada"]},"tags":[],"description":"The ship became stuck on Lake Erie while departing Buffalo, New York on 22 January. ","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/7a93/live/64947290-da6f-11ef-a37f-eba91255dc3d.jpg","altText":"Shot from behind, a large cargo ship is stuck, surrounded by ice on Lake Erie","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cy0pw3g02p8o","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"The future of self-driving vehicles","href":"/reel/video/p0kl4msn/the-future-of-self-driving-vehicles","isLiveNow":false,"metadata":{"contentType":"video","subtype":"reels","lastUpdated":1737710400000,"firstUpdated":1737710400000,"topics":["Innovation"]},"tags":[],"description":"BBC Click checks out the latest self-driving vehicle innovations on show at CES 2025 in Las Vegas.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kl4r6n.jpg","altText":"The future of self-driving vehicles","width":1920,"height":1080,"duration":218}}},"relatedUrls":[],"id":"urn:pubpipe:gnlvideoproject:video:p0kl4msn","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Watch in 83 seconds: Storm Éowyn sweeps into 
schizophrenia'","href":"/news/articles/cr46npx1e73o","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737591001246,"firstUpdated":1737591001246,"topics":["Asia"]},"tags":[],"description":"Student among dozens who challenged China’s authorities to have been sent to psychiatric units, BBC finds.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/1024/cpsprodpb/0889/live/7ba33c60-d8cf-11ef-902e-cf9b84dc1357.jpg","altText":"Zhang Junjie speaking to the BBC indoors - he gazes intently at the reporter and is dressed casually. He has short brown hair, slightly shaved at the sides.","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cr46npx1e73o","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"UnitedHealthcare names new boss after former CEO killed","href":"/news/articles/cre8n8rjp5yo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737679177945,"firstUpdated":1737679177945,"topics":["US \u0026 Canada"]},"tags":[],"description":"The company's former chief executive Brian Thompson was shot and killed in December.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/3422/live/e086ad10-d9e7-11ef-87e5-f1aa7935f4d2.jpg","altText":"An outside the United Healthcare corporate headquarters on 4 December, 2024 in Minnetonka, Minnesota. 
United Healthcare CEO Brian Thompson was shot dead on the street in New York City before he was to attend the company's annual investors meeting.","width":1024,"height":576}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cre8n8rjp5yo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Purdue and Sackler family agree $7.4bn opioid settlement","href":"/news/articles/ceq97nvjv0wo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737661520928,"firstUpdated":1737657445539,"topics":["US \u0026 Canada"]},"tags":[],"description":"Under the terms, the opioid maker agreed to pay $900m, and the family behind it agreed to pay up to $6.5bn.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/6915/live/6203aa10-d9b5-11ef-abf7-8b2a99c77ef2.jpg","altText":"Bottles of prescription painkiller OxyContin pills, made by Purdue Pharma LP","width":2991,"height":1682}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:ceq97nvjv0wo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Hair loss drug finasteride 'biggest mistake of my life'","href":"/news/articles/c05p1pnvymvo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737680241566,"firstUpdated":1737680241566,"topics":["Health"]},"tags":[],"description":"Some online sites prescribe a potentially risky hair loss drug without consistent safety checks, BBC finds.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/10f5/live/28bf5000-d740-11ef-9db6-657c5dd191fb.png","altText":"Kyle stands outside, next to the canal, looking towards the 
camera","width":760,"height":427}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c05p1pnvymvo","brandName":"","seriesName":"","seriesId":"","brandId":""}],"sectionTitleProps":{"link":"https://www.bbc.com/innovation/science","linkType":"internal"}}],"playlistMode":false},{"type":"advertisement","model":{"adType":"horizontal"},"incrementedType":"mid_5"},{"type":"wyoming","content":[],"collectionId":"","sectionTitleProps":{},"paginationData":{"page":0,"pageSize":0,"total":0},"title":"","summary":"","disableAutoPlay":false,"innerCollections":[{"title":"Culture","data":[{"title":"Lola Young's Messy hits number one: My songs are as real as it gets","href":"/news/articles/ceq9ge90y1lo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737740802207,"firstUpdated":1737740802207,"topics":["Culture"]},"tags":[],"description":"The star discusses how she feels about blowing up on TikTok and the trademark honesty in her lyrics.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/06e9/live/50428720-da54-11ef-a37f-eba91255dc3d.jpg","altText":"Lola Young performing on stage with a finger raised, in front of a red and white backdrop","width":976,"height":549}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:ceq9ge90y1lo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Harry v the tabloids. What next, if anything? 
","href":"/news/articles/c07kdj8181jo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737741387353,"firstUpdated":1737741387353,"topics":["UK"]},"tags":[],"description":"As the dust settles on the prince's epic legal battle with The Sun, who has come out on top, and what happens now?","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/dd6a/live/2bed1ad0-da76-11ef-bd96-095eab67c5b7.jpg","altText":"Prince Harry, dressed in a blue suit and grey tie, gives a thumbs up to supporters as he leaves the High Court's Rolls Building in 2023, with media cameras in the background, during his evidence against the Mirror Group titles who he was suing at the time for unlawful intrusion. ","width":2056,"height":1156}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c07kdj8181jo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Twists, turns and betrayals: The standout moments from The Traitors Series 3","href":"/news/articles/ckgyngdqlveo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737741173538,"firstUpdated":1737741173538,"topics":["Culture"]},"tags":[],"description":"Here's a look back at the 13 most memorable moments of epic treachery in series three. 
","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/be92/live/9453d7d0-da7b-11ef-9da9-9f71ceaf81c9.png","altText":"Linda on The Traitors","width":1116,"height":627}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:ckgyngdqlveo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"How Scandinavian dressing can make us happier","href":"/culture/article/20250121-how-scandinavian-dressing-can-make-us-happier","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737630000000,"firstUpdated":1737630000000,"topics":["Culture"]},"tags":[],"description":"Nordic style is easy to wear – and can even cheer us up, say its fans. As Copenhagen Fashion Week approaches, we explore the fun, functional Scandi-girl style movement.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kl3vb2.jpg","altText":"Copenhagen Fashion Week guests showcase the layering and bright colours typical of Scandi-girl chic (Credit: Getty Images)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:culture/article/20250121-how-scandinavian-dressing-can-make-us-happier","brandName":"","seriesName":"","seriesId":"","brandId":""}],"sectionTitleProps":{"link":"https://www.bbc.com/culture","linkType":"internal"}},{"title":"Arts","data":[{"title":"Alan Cumming: 'I'm the Pied Piper of Pitlochry'","href":"/news/articles/cwyp6y9w4r7o","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737556151303,"firstUpdated":1737500444333,"topics":["Tayside \u0026 Central Scotland"]},"tags":[],"description":"The US Traitors host and Hollywood star took over at Pitlochry Festival Theatre last year.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/1024/cpsprodpb/3d3e/live/e9f006b0-d836-11ef-ba26-ef826ed919d4.jpg","altText":"Alan Cumming celebrates the Traitors winning two Emmys in 
2024 - he is grinning broadly while holding two Emmy awards up, one in each hand. He is wearing a suit with a tartan plaid design.","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cwyp6y9w4r7o","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"How a performance lab is putting musicians to the test","href":"/reel/video/p0kjgpgq/how-a-performance-lab-is-putting-musicians-to-the-test","isLiveNow":false,"metadata":{"contentType":"video","subtype":"reels","lastUpdated":1737103800000,"firstUpdated":1737103800000,"topics":["Innovation"]},"tags":[],"description":" BBC Click visits a simulator lab that allows musicians to practice performance in real-world conditions.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kjgrpp.jpg","altText":"How a performance lab is putting musicians to the test","width":960,"height":639,"duration":408}}},"relatedUrls":[],"id":"urn:pubpipe:gnlvideoproject:video:p0kjgpgq","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"The 1645 painting that unlocks Trump's portrait","href":"/culture/article/20250120-donald-trumps-official-portrait-the-17th-century-painting-that-unlocks-this-mysterious-image","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737378000000,"firstUpdated":1737378000000,"topics":["Culture"]},"tags":[],"description":"Following the release of the US president-elect's official portrait, an expert reveals how scouring the pages of art history can help decode its meaning.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kkw68y.jpg","altText":"Side by side portraits Philosophy and Donald Trump (Credit: The National Gallery, London/ Trump and Vance transition 
team)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:culture/article/20250120-donald-trumps-official-portrait-the-17th-century-painting-that-unlocks-this-mysterious-image","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"How to transform your home with art","href":"/culture/article/20250108-how-to-transform-your-home-with-art","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1736420400000,"firstUpdated":1736420400000,"topics":["Culture"]},"tags":[],"description":"\"It's about what speaks to you\": Displaying paintings, prints, textiles and sculptures can all help create a fresh living space for the new year – here's how, according to the experts.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0khdtwd.jpg","altText":"A decorated living space with art hanging on the walls, glass table and a yellow couch (Credit: Artfully Walls)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:culture/article/20250108-how-to-transform-your-home-with-art","brandName":"","seriesName":"","seriesId":"","brandId":""}],"sectionTitleProps":{"link":"https://www.bbc.com/arts","linkType":"internal"}},{"title":"Travel","data":[{"title":"The nation that wants you to embrace your sleepy side","href":"/travel/article/20250113-how-sweden-is-embracing-its-sleepy-side","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737637200000,"firstUpdated":1737637200000,"topics":["Travel"]},"tags":[],"description":"Sweden's long, cold nights might put you off going there in winter, unless, that is, you are in search of that elusive 21st-Century luxury: a good night's sleep.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kj8j7j.jpg","altText":"Wooden red cabin in Sweden and snow at sunset (Credit: Getty 
Images)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:travel/article/20250113-how-sweden-is-embracing-its-sleepy-side","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"A beauty mogul's guide to luxury self-care in Dubai","href":"/travel/article/20250121-huda-kattans-guide-to-self-care-in-dubai","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737565200000,"firstUpdated":1737565200000,"topics":["Travel"]},"tags":[],"description":"Dubai is synonymous with luxury and pampering, and Huda Beauty founder Huda Kattan knows where to find it. Here are her Dubai picks, from heavenly massages to crystal shopping.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kkgd1r.jpg","altText":"Huda Kattan (Credit: Courtesy of Huda Kattan)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:travel/article/20250121-huda-kattans-guide-to-self-care-in-dubai","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"A new life for abandoned US railway stations","href":"/travel/article/20250121-a-new-life-for-abandoned-us-railway-stations","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737552600000,"firstUpdated":1737552600000,"topics":["Travel"]},"tags":[],"description":"Train stations were once the centrepieces of many US cities. 
After decades of neglect, many places are now reviving them in new, creative ways.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kl4xt8.jpg","altText":"The front of the Crawford Hotel (former Union Station) in Denver, CO (Credit: The Crawford)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:travel/article/20250121-a-new-life-for-abandoned-us-railway-stations","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"How to celebrate Jane Austen on her 250th birthday","href":"/travel/article/20250116-how-to-celebrate-jane-austens-250th-birthday","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737466200000,"firstUpdated":1737466200000,"topics":["Travel"]},"tags":[],"description":"As travellers flock to the English countryside to celebrate the renowned novelist, we asked experts to weigh in on the best Austen-themed festivals, reenactments and balls of the year.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kk6pxc.jpg","altText":"People walking in Regency-era costumes (Credit: Alamy)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:travel/article/20250116-how-to-celebrate-jane-austens-250th-birthday","brandName":"","seriesName":"","seriesId":"","brandId":""}],"sectionTitleProps":{"link":"https://www.bbc.com/travel","linkType":"internal"}},{"title":"Earth","data":[{"title":"Russia suffering 'environmental catastrophe' after oil spill in Kerch Strait","href":"/news/articles/c23ngk5vgmpo","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737677768262,"firstUpdated":1737677768262,"topics":["Europe"]},"tags":[],"description":"Activists say the spill - caused after two ships ran were battered by a storm - could cover an area of 400 sq km. 
","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/f6b6/live/efc0b7e0-d9b4-11ef-abf7-8b2a99c77ef2.jpg","altText":"Volunteers work to clean up spilled oil on the shoreline following an incident involving two tankers damaged in a storm in the Kerch Strait. The men are wearing camouflage uniforms. One holds a shovel and is putting oil into a bag, which the other man holds. In the background the sea is seen. ","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:c23ngk5vgmpo","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Giant iceberg on crash course with island, putting penguins and seals in danger","href":"/news/articles/cd64vvg4z6go","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737590493450,"firstUpdated":1737590493450,"topics":["Science \u0026 Environment"]},"tags":[],"description":"More than twice the size of greater London, the expanse of ice is unpredictable and dangerous.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/a9df/live/7dd12c90-d956-11ef-9dcf-978aff2fdcba.jpg","altText":"Iceberg A23a drifting in the southern ocean having broken free from the Larsen Ice Shelf.\n","width":2090,"height":1176}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cd64vvg4z6go","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"Stinky bloom of 'corpse flower' enthrals thousands","href":"/news/articles/cvgpnqe91j1o","isLiveNow":false,"metadata":{"contentType":"article","subtype":"news","lastUpdated":1737608318551,"firstUpdated":1737539556725,"topics":["Australia"]},"tags":[],"description":"A livestream of an endangered plant's rare bloom in Sydney has captivated the internet.","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/news/480/cpsprodpb/952a/live/f061ff60-d950-11ef-902e-cf9b84dc1357.jpg","altText":"Two visitors pose for a selfie in front of 
Amorphophallus titanum, famously known as the Corpse Flower seen at the Royal Botanic Garden Sydney","width":4999,"height":2812}}},"relatedUrls":[],"id":"urn:bbc:optimo:asset:cvgpnqe91j1o","brandName":"","seriesName":"","seriesId":"","brandId":""},{"title":"What is hiding under Greenland's ice?","href":"/future/article/20250121-the-enormous-challenge-of-mining-greenland","isLiveNow":false,"metadata":{"contentType":"article","subtype":"features","lastUpdated":1737554400000,"firstUpdated":1737554400000,"topics":["Future"]},"tags":[],"description":"The riches thought to lie beneath Greenland's icy terrain have been coveted for more than a century. But how easy are they to access, and will climate change make any difference?","image":{"type":"indexImage","model":{"blocks":{"src":"https://ichef.bbci.co.uk/images/ic/1920xn/p0kl5m3l.jpg","altText":"Greenland is rich in mineral resources (Credit: Nigel Baker and GEUS)","width":1920,"height":1080}}},"relatedUrls":[],"id":"urn:pubpipe:wwverticals:article:future/article/20250121-the-enormous-challenge-of-mining-greenland","brandName":"","seriesName":"","seriesId":"","brandId":""}],"sectionTitleProps":{"link":"https://www.bbc.com/future-planet","linkType":"internal"}}],"playlistMode":false},{"type":"advertisement","model":{"adType":"horizontal"},"incrementedType":"mid_6"}],"hideBannerAdvert":false,"isPreCurated":false,"id":"urn:bbc:xpress:page:8","slug":"home","title":"","seo":{"title":"BBC Home - Breaking News, World News, US News, Sports, Business, Innovation, Climate, Culture, Travel, Video \u0026 Audio","description":"Visit BBC for trusted reporting on the latest world and US news, sports, business, climate, innovation, culture and much more.","openGraph":{"title":"BBC Home - Breaking News, World News, US News, Sports, Business, Innovation, Climate, Culture, Travel, Video \u0026 Audio","description":"Visit BBC for trusted reporting on the latest world and US news, sports, business, climate, innovation, culture and much 
more."},"twitter":{"title":"BBC Home - Breaking News, World News, US News, Sports, Business, Innovation, Climate, Culture, Travel, Video \u0026 Audio","description":"Visit BBC for trusted reporting on the latest world and US news, sports, business, climate, innovation, culture and much more."},"hreflang":{"en-gb":"https://www.bbc.co.uk"}}}},"country":"ca","isUkHeader":"no","featureFlags":{"audio-all-pages":false,"authentication":true,"bookmarking":true,"client-side-routing":false,"client-side-uk-redirect":true,"comments":false,"election-banner-us":true,"election-banner-us-client-refresh":true,"follow-brand-function":false,"legacy-api":false,"legacy-byline":false,"live-articles-in-same-tab":true,"nevada-ad":true,"optimizely-web":true,"piano-composer":false,"playlists":true,"radio-page-progress-bar":false,"seo-allow-indexing":true,"virginia-ad":true,"webvitals":true,"wntv-au":true,"wntv-au-preroll-ad":false,"wntv-us":true,"wntv-us-preroll-ad":false,"zephr":false},"authInfo":{"signInUrl":"https://session.bbc.com/session","signOutUrl":"https://session.bbc.com/session/signout?switchTld=true\u0026ptrt=https%3A%2F%2Faccount.bbc.com%2Fsignout","registerUrl":"https://session.bbc.com/session?action=register","statusUrl":"https://account.bbc.com/account","idSignedInCookieName":"ckns_id","accessTokenRefreshUrl":"https://session.bbc.com/session?ptrt=https%3A%2F%2Fsession.bbc.com%2Fsession%2Fannounce","accessTokenExpiredAt":0,"idAvailability":"GREEN","accountMaintenanceMode":"off"},"mainNavigation":[{"id":"77f6237e-1091-4956-a51f-713c715aeeea","title":"Home","isSpecial":false,"inOverlay":false,"slug":"home","path":"/home","externalTarget":"_self","subMenus":[]},{"id":"a723adca-09cd-459a-b023-0d8c1ca8050b","title":"News","isSpecial":false,"inOverlay":false,"slug":"news","path":"/news","externalTarget":"_self","subMenus":[{"id":"84a3f915-833e-447d-a5a6-157b6984e345","title":"Israel-Gaza 
War","isSpecial":false,"inOverlay":false,"slug":"israel-gaza-war","path":"/news/topics/c2vdnvdg6xxt","externalTarget":"_self","subMenus":[]},{"id":"1224d914-fc19-4128-b08f-e63393d45f08","title":"War in Ukraine","isSpecial":false,"inOverlay":false,"slug":"war-in-ukraine","path":"/news/war-in-ukraine","externalTarget":"_self","subMenus":[]},{"id":"f762f213-2f87-496a-9bce-5e695e6e106f","title":"US \u0026 Canada","isSpecial":false,"inOverlay":false,"slug":"us-canada","path":"/news/us-canada","externalTarget":"_self","subMenus":[]},{"id":"19b687a4-74af-4809-b00a-4ac7da5f587c","title":"UK","isSpecial":false,"inOverlay":false,"slug":"uk","path":"/news/uk","externalTarget":"_self","subMenus":[{"id":"ee25ad11-eb24-43bd-a14c-ecd7e767bae8","title":"UK Politics","isSpecial":false,"inOverlay":false,"slug":"uk-politics","path":"/news/politics","externalTarget":"_self","subMenus":[]},{"id":"bcace7d7-eb12-4acd-8be9-80ba578c6357","title":"England","isSpecial":false,"inOverlay":false,"slug":"england","path":"/news/england","externalTarget":"_self","subMenus":[]},{"id":"3199afcf-479c-4008-9c79-973ba54311b3","title":"N. Ireland","isSpecial":false,"inOverlay":false,"slug":"n-ireland","path":"/news/northern_ireland","externalTarget":"_self","subMenus":[{"id":"d63f54aa-4b24-45a0-879b-96e61fb967f2","title":"N. 
Ireland Politics","isSpecial":false,"inOverlay":false,"slug":"n-ireland-politics","path":"/news/northern_ireland/northern_ireland_politics","externalTarget":"_self","subMenus":[]}]},{"id":"4b3871e5-b352-4b20-9a3b-e86be0cb92d4","title":"Scotland","isSpecial":false,"inOverlay":false,"slug":"scotland","path":"/news/scotland","externalTarget":"_self","subMenus":[{"id":"78476582-8167-4636-b6af-d677ef73445e","title":"Scotland Politics","isSpecial":false,"inOverlay":false,"slug":"scotland-politics","path":"/news/scotland/scotland_politics","externalTarget":"_self","subMenus":[]}]},{"id":"496789ee-8ee7-4cef-b0d6-9492232858e3","title":"Wales","isSpecial":false,"inOverlay":false,"slug":"wales","path":"/news/wales","externalTarget":"_self","subMenus":[{"id":"bac0a782-8f9d-4c2c-a716-317c5be25244","title":"Wales Politics","isSpecial":false,"inOverlay":false,"slug":"wales-politics","path":"/news/wales/wales_politics","externalTarget":"_self","subMenus":[]}]}]},{"id":"bed54295-2815-4d9d-a091-6baf0754625e","title":"Africa","isSpecial":false,"inOverlay":false,"slug":"africa","path":"/news/world/africa","externalTarget":"_self","subMenus":[]},{"id":"8bd8f901-ebab-4d5c-aa44-937ab6136a67","title":"Asia","isSpecial":false,"inOverlay":false,"slug":"asia","path":"/news/world/asia","externalTarget":"_self","subMenus":[{"id":"73373376-cd07-4fdc-b1b0-a9ff949d13c8","title":"China","isSpecial":false,"inOverlay":false,"slug":"china","path":"/news/world/asia/china","externalTarget":"_self","subMenus":[]},{"id":"eb0c2f21-6ab5-4832-99c7-0ddc3dd3d353","title":"India","isSpecial":false,"inOverlay":false,"slug":"india","path":"/news/world/asia/india","externalTarget":"_self","subMenus":[]}]},{"id":"c52ff803-9fcb-445d-b8ca-9edfe36e1c86","title":"Australia","isSpecial":false,"inOverlay":false,"slug":"australia","path":"/news/world/australia","externalTarget":"_self","subMenus":[]},{"id":"7b1e9b68-6856-460e-93fd-db75efd8a5ac","title":"Europe","isSpecial":false,"inOverlay":false,"slug":"europe","path":"/
news/world/europe","externalTarget":"_self","subMenus":[]},{"id":"64e20d11-8335-495f-9e9f-0a0dadcd7da1","title":"Latin America","isSpecial":false,"inOverlay":false,"slug":"latin-america","path":"/news/world/latin_america","externalTarget":"_self","subMenus":[]},{"id":"9a1914e5-c139-4f65-86a0-f7dc5c959031","title":"Middle East","isSpecial":false,"inOverlay":false,"slug":"middle-east","path":"/news/world/middle_east","externalTarget":"_self","subMenus":[]},{"id":"26cfc082-ec1c-4371-af40-a60a7aa972a5","title":"In Pictures","isSpecial":false,"inOverlay":false,"slug":"in-pictures","path":"/news/in_pictures","externalTarget":"_self","subMenus":[]},{"id":"a7cca41a-fc5e-48b6-8499-c3727308ab00","title":"BBC InDepth","isSpecial":false,"inOverlay":false,"slug":"bbc-indepth","path":"/news/bbcindepth","externalTarget":"_self","subMenus":[]},{"id":"0bf8a2cf-bba6-46ba-a387-f46dce7f3274","title":"BBC Verify","isSpecial":false,"inOverlay":false,"slug":"bbc-verify","path":"/news/reality_check","externalTarget":"_self","subMenus":[]}]},{"id":"b9fef6aa-dfdf-4b8b-8571-48165bef1a93","title":"Sport","isSpecial":false,"inOverlay":false,"slug":"sport","path":"/sport","externalTarget":"_self","subMenus":[]},{"id":"feab6032-18e8-4250-bb85-08a5a6b1448b","title":"Business","isSpecial":false,"inOverlay":false,"slug":"business","path":"/business","externalTarget":"_self","subMenus":[{"id":"25423ec1-2763-4f8a-a665-b8a4602280ec","title":"Executive Lounge","isSpecial":false,"inOverlay":false,"slug":"executive-lounge","path":"/business/executive-lounge","externalTarget":"_self","subMenus":[]},{"id":"6d7c517a-8ab6-4233-95d2-48a5183f688e","title":"Technology of Business","isSpecial":false,"inOverlay":false,"slug":"technology-of-business","path":"/business/technology-of-business","externalTarget":"_self","subMenus":[]},{"id":"63d2d5d4-674d-4069-b7ae-63b85c9eff69","title":"Future of 
Business","isSpecial":false,"inOverlay":false,"slug":"future-of-business","path":"/business/future-of-business","externalTarget":"_self","subMenus":[]}]},{"id":"74d17192-88e5-4798-baef-81130a2bc5bf","title":"Innovation","isSpecial":false,"inOverlay":false,"slug":"innovation","path":"/innovation","externalTarget":"_self","subMenus":[{"id":"af9e3ee5-50a1-46ae-85bd-3d7c02f65e06","title":"Technology","isSpecial":false,"inOverlay":false,"slug":"technology","path":"/innovation/technology","externalTarget":"_self","subMenus":[]},{"id":"c45a4fd2-e162-4b48-907f-23576556be83","title":"Science \u0026 Health","isSpecial":false,"inOverlay":false,"slug":"science-health","path":"/innovation/science","externalTarget":"_self","subMenus":[]},{"id":"066613bb-6f13-475a-bc42-7681bafaa988","title":"Artificial Intelligence","isSpecial":false,"inOverlay":false,"slug":"artificial-intelligence","path":"/innovation/artificial-intelligence","externalTarget":"_self","subMenus":[]},{"id":"4e2b64a6-1a14-4523-9ac4-9fd26e57a54f","title":"AI v the Mind","isSpecial":false,"inOverlay":false,"slug":"ai-v-the-mind","path":"/innovation/ai-v-the-mind","externalTarget":"_self","subMenus":[]}]},{"id":"69b7031a-8a59-43f5-b340-6322fe92cc87","title":"Culture","isSpecial":false,"inOverlay":false,"slug":"culture","path":"/culture","externalTarget":"_self","subMenus":[{"id":"765e9772-2631-4ff9-a311-5744c62b3261","title":"Film \u0026 TV","isSpecial":false,"inOverlay":false,"slug":"film-tv","path":"/culture/film-tv","externalTarget":"_self","subMenus":[]},{"id":"a22f7b48-0faf-4522-ae17-e3c3ca8ffb93","title":"Music","isSpecial":false,"inOverlay":false,"slug":"music","path":"/culture/music","externalTarget":"_self","subMenus":[]},{"id":"14a68703-318c-44cf-9e3f-9fad5db6a2d9","title":"Art \u0026 

      Perceivability Issue – Low Contrast

      The text on the website lacks sufficient contrast against the white or light-coloured background. This makes it harder for users with visual impairments or low contrast sensitivity to read and navigate the website effectively. Improving the contrast between the text and background would align with WCAG standards and ensure that all users can perceive the content clearly.
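      As a rough illustration (not part of the original audit), WCAG 2.x defines contrast as a ratio of relative luminances. The sketch below checks that ratio in Python; the grey value used is hypothetical, chosen only to show a near-miss of the AA threshold.

```python
def _linear(c):
    # sRGB channel (0-255) -> linear value, per the WCAG 2.x
    # relative-luminance definition
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# A hypothetical mid-grey (#777777) on white lands just under the 4.5:1 AA bar
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))
```

      WCAG AA requires at least 4.5:1 for normal-size text, so grey text like the example would narrowly fail and should be darkened.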

    1. recoil

      I chose the word "recoil" because I think it begins the showing of the might of God. Elements and people are reacting to the exposure to God's abilities and control, and the word "recoil" is a summary of that, as later lines tell of the various monuments that moved, like the ocean, mountains, and streams. When I think of the word recoil, I picture someone or something curling up in either a scared or disgusted way. It's a dramatic reaction in my opinion. The drama of using it in this context alludes to the power Milton is trying to convey. He's showing the extremity of God's power and just how forbidding it was. As a reader, once I arrive at this word, the Psalm turns darker. The beginning lines recount the story of the Israelites and God's deliverance of them to the promised land; it has a more upbeat and positive feeling. Then you get to the lines where Milton describes how God's glory was unveiled, and you feel like an outsider, feeling the darkness of being on the other end of God's promise. In this way, the word "recoil" helps set up a sort of tension, displaying God's two sides, deliverer and wrathful.

    1. We note here that a lively debate has arisen over the testability of evolutionary hypotheses. Because current behaviors are thought to be adaptations to environmental conditions that existed thousands of years ago, psychologists make their best guesses about what those conditions were and how specific kinds of behaviors gave people a reproductive advantage. But these hypotheses are obviously impossible to test with the experimental method. And just because hypotheses sound plausible does not mean they are true. For example, some scientists now believe that giraffes did not acquire a long neck to eat leaves in tall trees. Instead, they suggest, long necks first evolved in male giraffes to gain an advantage in fights with other males over access to females (Simmons & Scheepers, 1996). Which of these explanations is true? It’s hard to tell. Evolutionary explanations can’t be tested directly, because after all, they involve hypotheses about what happened thousands of years ago. They can, however, suggest novel hypotheses about why people do what they do in today’s world, which can then be put to the test, as we will see in later chapters (Al-Shawa, 2020).

      We (scientists, anyway) want answers; we live for precision, correctness, and repeatability. I think this gets in the way of finding productive ways of dealing with both interpersonal problems and social issues on a larger scale. In many cases (such as our innate tendencies toward aggression, control over the less powerful, and focus on self-interest), it makes more sense to accept them as "givens" and seek ways to ameliorate them than it does to try understanding them.

    1. Good: Keyboard Navigation You can navigate the BBC News website using just your keyboard, which is awesome for people who can’t use a mouse. Just hit the ‘Tab’ key, and you can move through links, buttons, and other interactive elements. Super handy for accessibility!

      Good: Alt Text for Images They do a great job of adding descriptive alt text to their images, so people using screen readers can still “see” what’s there. This is a must-have for visually impaired users and something they’ve done right.

      Good: Color Contrast The text stands out really well against the background, making it easy to read for everyone, including people with color blindness. Good contrast is one of those simple things that can make a huge difference.

      Good: Skip to Content Link They have a “Skip to Content” link so you can jump straight to the main stuff without having to tab through all the menus. It’s a small detail, but it’s really helpful for keyboard users and screen readers.

      Bad: Video Captions While most videos have captions, some are either missing them or using auto-generated captions that aren’t always accurate. This is a bit of a letdown because good captions are so important for people who are deaf or hard of hearing.

    1. This process is why today disputes are settled by independent courts rather than family elders.

      made me think of how this is still the norm/expectation for many in Nigeria, and how my generation frowns on it so much. It's not unique to us, and is actually the way that most of the world handled disputes for a large part of history. We're just in a unique time where our traditions haven't changed as much as the West's have, but we're so influenced by their culture that we're stuck somewhere in the liminal space between both cultures.

    1. Unit 1 Socratic Seminar: Is Social Media More Beneficial or Negative to Society? Directions: Read and annotate the 2 articles listed below by making comments in the document, or in the margins if you’re doing it on paper. Use a physical highlighter or highlighter tool for quotes/ideas you want to explore more and talk about. TYPE YOUR ANSWERS IN BLUE IF DOING THIS DIGITALLY.<br /> Fill out the Summaries at the bottom. Fill out the 6 questions you will use during the Socratic to drive the conversation. Do this part LAST!

      Write out the 6 Questions you will use during the Socratic Seminar. (Do this LAST IN BLUE FONT) 1. How will social media evolve in the future? 2. How is social media affecting our outside interactions with others? 3. 4. 5. 6.

      Reading #1: Supporters Argue: Social Media Is Beneficial Overall 1a Supporters argue that social networking is a phenomenon that is beneficial overall and has changed the world for the better. Perhaps the greatest measure of social media's success, they contend, is the role it played in ousting undemocratic governments in Tunisia and Egypt. Journalist Peter Beaumont of the British newspaper the Guardian argued in 2011 that "a young woman or a young man with a smartphone" was the "defining" image of the Arab Spring. "The instantaneous nature of how social media communicate self-broadcast ideas, unlimited by publication deadlines and broadcast news slots, explains in part the speed at which these revolutions have unraveled, their almost viral spread across a region," he contended. "It explains, too, the often loose and non-hierarchical organization of the protest movements unconsciously modeled on the networks of the web." 2a Indeed, supporters argue that social media can be extremely useful in encouraging people who would not typically be politically motivated to engage in various issues or causes. While such statements are sometimes derided by critics as "hashtag activism" or "slacktivism," defenders insist that such actions really can make a difference. "What is commonly called slacktivism is not at all about 'slacking activists,'" Harvard University sociology professor Zeynep Tufekci wrote on her blog in 2012. "[R]ather it is about non-activists taking symbolic action—often in spheres traditionally engaged only by activists or professionals (governments, NGOs, international institutions.). Since these so-called 'slacktivists' were never activists to begin with, they are not in dereliction of their activist duties. On the contrary, they are acting, symbolically and in a small way, in a sphere that has traditionally been closed off to 'the masses' in any meaningful fashion." 
3a Social media has many other benefits, advocates contend, including the potential to assist during times of catastrophe. During and after the terrorist attacks that rocked Paris, France, in November 2015, supporters note, people took to Facebook, Twitter, and other social media to communicate to loved ones that they were safe, or to offer refuge to people stranded in the city. "The attacks which ravaged the French capital yesterday showed how social media can also play a much more positive role," Forbes contributor Federico Guerrini wrote. "Facebook activated its Safety Check tool…to help people in areas affected by a disaster let their Facebook friends know they are safe. Twitter was also helpful: residents used the hashtag #porteouverte [open doors] to offer shelter to people stranded in the city." Advocates of social networking contend that sites like Facebook and Twitter have brought people closer together. "It has never been easier to make friends than it is right now, mainly thanks to social networking sites," writer Dave Parrack argued on the technology website MakeUseOf.com in 2012. "Just a few decades ago it was pretty tough to connect with people unless you were the overly outgoing type able to make conversation with anyone at a party. The rise of mobile phones helped change this, connecting people in a new way, but then social networks sprang up and the whole idea of friendship changed once more and forever." 4a Supporters maintain that social networking sites increasingly function as a refuge where people can relax with their friends and family. "This is where social media become a powerful social force in the modern sphere," Taso Lagos of the University of Washington wrote in the Seattle Times in 2012. 
"Because we live in a world of constant anxiety and stress about our lives, our careers, the planet and the fate of our families and friends, trusted sites like Facebook and Twitter are places we turn to relieve this tension and allow us to live and express our humanity." Social media, he argued, are "the community centers of the future." 5a Such sites provide many valuable benefits, defenders argue, including enhancing people's sense of self-worth. The act of taking and posting selfies, they contend, helps people exert control over their self-image and the way they are viewed. "The harshly judged practice of self-picture taking," Huffington Post contributor Molly Fosco wrote in March 2014, "while perhaps excessive or annoying at times, can actually be a really simple way to feel really good about yourself…. Although our selfies might be veiled in narcissism, self-obsession, or boastfulness I think that for many it's a genuine attempt to boost self-esteem. Seeing a close-up picture of your own face and willingly showing it to thousands of people with one click is a form of self-confidence that I don't think should be quickly dismissed." 6a Supporters of social media discount many of the fears typically raised by opponents, noting that it is common for new technology to stir criticism. In the late 19th century, they note, some observers predicted that the telephone would severely damage interpersonal relationships, just as detractors of social media do today. The telephone "was going to bring down our society," Megan Moreno of the University of Wisconsin in Madison told the New York Times in 2012. "Men would be calling women and making lascivious comments, and women would be so vulnerable, and we'd never have civilized conversations again." She added, "When a new technology comes out that is something so important, there is this initial alarmist reaction." 
Write out a 100-word summary of your thoughts/ideas/opinions of the strengths and weaknesses of the Beneficial Side. (TYPE IN BLUE FONT) Social media supporters argue that it is a good thing for the world, and there is evidence that it helped movements like the Arab Spring unfold. It can draw in many people and even call on them to fight for a cause. It is useful for giving real-time help and information during an emergency, and it brings people who don't live close together closer to each other. Social media can also help people develop good self-esteem. Despite how useful it can be, these benefits come with risks, including heavy dependence on technology, misinformation, and the threat of ending up on harmful sites or with harmful people.

      Reading #2: Opponents Argue: Social Media Is Not Beneficial Overall 1b Opponents of social networking argue that such sites are not beneficial overall and that they gradually erode many essential aspects of communication and socialization. "The shortcomings of social media would not bother me awfully if I did not suspect that Facebook friendship and Twitter chatter are displacing real rapport and real conversation," New York Times commentator Bill Keller argued in 2011. "The things we may be unlearning, tweet by tweet—complexity, acuity, patience, wisdom, intimacy—are things that matter." 2b Indeed, critics contend, the rise of social networking has coincided with a decline in the quality of conversation. "As we ramp up the volume and velocity of online connections, we start to expect faster answers," MIT psychology professor Sherry Turkle wrote in the New York Times in 2012. "To get these, we ask one another simpler questions; we dumb down our communications, even on the most important matters." 3b Opponents argue that social media can contribute to feelings of sadness and loneliness. A study by researchers at the University of Michigan in 2013, they note, found that college-aged users felt worse the more they used Facebook. Because people's Facebook personas are often curated to make their lives seem fun or perfect, critics argue, browsing social media can contribute to feelings of inadequacy. "When you're on a site like Facebook, you get lots of posts about what people are doing," co-author John Jonides, a cognitive neuroscientist at the Department of Psychology at the University of Michigan, told National Public Radio in 2013. "That sets up a social comparison — you maybe feel your life is not as full and rich as those people you see on Facebook." 4b Social media, critics charge, can lead people to obsess about themselves and their self-image to the point where it can be harmful. 
People need to look deeper for self-worth, they contend, than achieving "likes" by posting selfies on social media. "[I]f you've just spent half an hour editing a photo by blurring around your eyes with one app, adding eyelashes with another, then changing the colors with a third," Teen Vogue contributor Tiffany Perry wrote in March 2016, "chances are you're giving too much merit to how others perceive you." 5b Other critics claim that the impact of social media on political phenomena like the Arab Spring has been overstated. New Yorker columnist Malcolm Gladwell noted in 2011 that many revolutions took place throughout history before the advent of social networking. "People with a grievance will always find ways to communicate with each other," he wrote. "How they choose to do it is less interesting, in the end, than why they were driven to do it in the first place." 6b Opponents also assert that promoting political or social causes on social media has little real impact other than to make the person making the post feel good about themselves. In 2013, for example, the United Nations Children's Emergency Fund (UNICEF), a U.N. organization that raises money to help and protect children throughout the world, ran an ad campaign with a slogan that read "Like us on Facebook, and we will vaccinate zero children against polio." The point of the campaign, UNICEF explained, was not to disparage "likes" but to encourage more active support, such as contributing money to buy vaccines. "Slacktivism's inherent laziness disqualifies it as a real agent of progress because it does not possess the enthusiasm necessary for change," contributor Elias Tavaras wrote for the Hill in January 2016. "How can a post on Facebook inspire necessary action, especially when sitting down on a comfy computer chair? Indeed, the passion one may feel disappears with a simple scroll or is drowned out by the other slacktivist posts." 
7b Critics charge that social media users are in danger of having their online personas co-opted by corporations eager to collect the information users share and employ it for marketing purposes. Robert Barry of the pop culture website The Quietus argues that social media is turning people into "branded products." "Online businesses which seem to be promising something for nothing—from social networking to file sharing—are really offering you, their audience, as a readymade and fully packaged item for purchase," he argued, "be that by the ghost of advertising's future, or the investor whose faith gives that ghost substance." Write out a 100-word summary of your thoughts/ideas/opinions of the strengths and weaknesses of the Against Side. (TYPE IN BLUE FONT)

    1. do want or need feedback on how they are doing with their learning. ‘Do I really understand this?’ or ‘How am I doing compared to other learners?’

      I try to emphasize that grading is not the be-all, end-all. It is just a tool to determine whether the course content is understood (it's also a good evaluation of myself and how well I'm able to impart the knowledge and skills asked for).

    1. All symbolic communication is learned, negotiated, and dynamic. We know that the letters b-o-o-k refer to a bound object with multiple written pages. We also know that the letters t-r-u-c-k refer to a vehicle with a bed in the back for hauling things. But if we learned in school that the letters t-r-u-c-k referred to a bound object with written pages and b-o-o-k referred to a vehicle with a bed in the back, then that would make just as much sense, because the letters don’t actually refer to the object and the word itself only has the meaning that we assign to it. We will learn more, in Chapter 8 “Verbal Communication”, about how language works, but communication is more than the words we use.

      It's interesting how, for those who grew up with English as their native language, the symbol for a big brown thing sticking out of the ground with green puffs is recognized as a 'tree.' In other languages, the symbols or words might be similar or completely different. As someone who is bilingual, I've noticed that the word for 'tree' in our second language is significantly different from the English term. If we were to integrate that word into the English language, it would have no meaning at all.

    1. Later, as I walked down the street to pick up my daughter from childcare, I ran into a friend who asked if I’d seen the newspaper story about a grad student we both knew. I hadn’t, so my friend explained that this woman had just filed a sexual harassment lawsuit against her dissertation director. According to the woman’s allegations, the professor manifested his misogyny by saying that women are good only for sex; he expected his student to find female sexual partners for him, and if she didn’t, he would not continue to support her research

      This shows how many men think it's okay to take advantage of anybody they hold power over, for instance, bosses withholding fair wages or promotions, because it's a controlling tactic.

    1. Most writers are somewhere between these two extreme types of drafters, and that’s the best place to be.

      Right before this I was just thinking how I'm not really either one or the other and I was kind of in the middle. So I'm glad to know that it's actually a good spot and not bad to be more on one side rather than the other

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Reviewer #1 (Public review):

      Summary:

      In the manuscript the authors describe a new pipeline to measure changes in vasculature diameter upon optogenetic stimulation of neurons. The work is useful to better understand the hemodynamic response on a network /graph level.

      Strengths:

      The manuscript provides a pipeline that allows detection of changes in vessel diameter and simultaneously locates the neurons driven by stimulation.

      The resulting data could provide interesting insights into the graph level mechanisms of regulating activity dependent blood flow.

      Weaknesses:

      (1) The manuscript contains (new) wrong statements and (still) wrong mathematical formulas.

      The symbols in these formulas have been updated to disambiguate them, and the accompanying statements have been adjusted for clarity.

      (2) The manuscript does not compare results to existing pipelines for vasculature segmentation (open-source or commercial). Comparing performance of the pipeline to a random forest classifier (ilastik) on images that are not preprocessed (i.e. corrected for background etc.) seems not a particularly useful comparison.

      We have now included comparisons to Imaris (commercial) for segmentation and VesselVio (open-source) for graph extraction.

      For the ilastik comparison, the images were preprocessed prior to ilastik segmentation, specifically by doing intensity normalization.

      Example segmentations utilizing Imaris have now been included. Imaris leaves gaps and discontinuities in the segmentation masks, as shown in Supplementary Figure 10. The Imaris segmentation masks also tend to be more circular in cross-section despite irregularities on the surface of the vessels observable in the raw data and identified in manual segmentation. This approach also requires days to months of manual work per image stack.

      “Comparison with commercial and open-source vascular analysis pipelines

      To compare our results with those achievable on these data with other pipelines for segmentation and graph network extraction, we compared segmentation results qualitatively with Imaris version 9.2.1 (Bitplane) and vascular graph extraction with VesselVio [1]. For the Imaris comparison, three small volumes were annotated by hand to label vessels. Example slices of the segmentation results are shown in Supplementary Figure 10. Imaris tended to either over- or under-segment vessels, disregard fine details of the vascular boundaries, and produce jagged edges in the vascular segmentation masks. In addition to these issues with segmentation mask quality, manual segmentation of a single volume took days for a rater to annotate. To compare to VesselVio, binary segmentation masks (one before and one after photostimulation) generated with our deep learning models were loaded into VesselVio for graph extraction, as VesselVio does not have its own method for generating segmentation masks. This also facilitates a direct comparison of the benefits of our graph extraction pipeline to VesselVio. Visualizations of the two graphs are shown in Supplementary Figure 11. VesselVio produced many hairs at both time points, and the total number of segments varied considerably between the two sequential stacks: while the baseline scan resulted in 546 vessel segments, the second scan had 642 vessel segments. These discrepancies are difficult to resolve in post-processing and preclude a direct comparison of individual vessel segments across time. As the segmentation masks we used in graph extraction derive from the union of multiple time points, we could better trace the vasculature and identify more connections in our extracted graph. 
Furthermore, VesselVio relies on the distance transform of the user-supplied segmentation mask to estimate vascular radii; consequently, these estimates are highly susceptible to variations in the input segmentation masks. We repeatedly saw slight variations between boundary placements of all of the models we utilized (ilastik, UNet, and UNETR) and those produced by raters. Our pipeline mitigates this segmentation method bias by using intensity gradient-based boundary detection from centerlines in the image (as opposed to using the distance transform of the segmentation mask, as in VesselVio).”
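      To make the sensitivity of distance-transform radius estimates concrete, here is a small illustrative sketch (Python with NumPy; the synthetic vessel cross-section is hypothetical, not data from the study, and the brute-force transform stands in for a production implementation). Moving the segmentation boundary by one voxel moves the estimated radius at the centerline by roughly one voxel:

```python
import numpy as np

def distance_transform(mask):
    # Brute-force Euclidean distance transform (fine for tiny demo arrays):
    # each foreground voxel gets its distance to the nearest background voxel.
    bg = np.argwhere(~mask)
    dt = np.zeros(mask.shape)
    for y, x in np.argwhere(mask):
        dt[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return dt

yy, xx = np.mgrid[:21, :21]
r2 = (yy - 10) ** 2 + (xx - 10) ** 2
vessel = r2 <= 25        # cross-section of a vessel, radius ~5 voxels
vessel_tight = r2 <= 16  # the same vessel segmented one voxel tighter

# Radius estimate at the centerline voxel for each segmentation:
# about 5.10 vs about 4.12, i.e. a one-voxel boundary shift changes
# the distance-transform radius estimate by roughly one voxel.
print(distance_transform(vessel)[10, 10], distance_transform(vessel_tight)[10, 10])
```

      Gradient-based boundary detection, by contrast, anchors the radius to the intensity profile of the image rather than to whichever mask a given segmentation method produced.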

      (3) The manuscript does not clearly visualize performance of the segmentation pipeline (e.g. via 2d sections, highlighting also errors etc.). Thus, it is unclear how good the pipeline is, under what conditions it fails or what kind of errors to expect.

      Following the reviewer’s comment, 2D slices have been added in Supplementary Figure 4.

      (4) The pipeline is not fully open-source due to use of Matlab. Also, the pipeline code was not made available during review, contrary to the authors' claims (the provided link did not lead to a repository). Thus, the utility of the pipeline was difficult to judge.

      All code has been uploaded to Github and is available at the following location: https://github.com/AICONSlab/novas3d

      The Matlab code for skeletonization is better at preserving centerline integrity during the pruning of hairs from centerlines than the currently available open-source methods.

      - Generalizability: The authors addressed the point of generalizability by applying the pipeline to other data sets. This demonstrates that their pipeline can be applied to other data sets and makes it more useful. However, from the visualizations it is unclear how the pipeline performs and where it fails; the 3D visualizations are not particularly helpful in this respect. In addition, the Dice measure seems quite low, indicating roughly 20-40% of voxels do not overlap between inferred and ground truth. I did not notice this high discrepancy earlier. A thorough discussion of the errors appearing in the segmentation pipeline would, in my view, be necessary to better assess the quality of the pipeline.

      2D slices from the additional datasets have been added in the Supplementary Figure 13 to aid in visualizing the models’ ability to generalize to other datasets.

      The Dice range we report (0.7-0.8) is good when compared to those (0.56-0.86) of 3D segmentations of large datasets in microscopy [2], [3], [4], [5], [6]. Furthermore, we had two additional raters segment three images from the original training set. We found that the raters had a mean interclass correlation of 0.73 [7]. Our model outperformed this score on unseen data: Dice scores from our generalizability tests on C57 mice and Fischer rats were on par with or higher than this baseline.
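      For readers unfamiliar with the metric, the Dice-Sørensen coefficient on binary masks is 2|A∩B| / (|A| + |B|). A minimal sketch (Python with NumPy; the toy masks are illustrative, not drawn from the study's data):

```python
import numpy as np

def dice(pred, truth):
    # Dice-Sørensen coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: vacuous perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Two 10x10 masks of 60 voxels each, overlapping on 40 voxels:
a = np.zeros((10, 10), dtype=bool); a[:, :6] = True
b = np.zeros((10, 10), dtype=bool); b[:, 2:8] = True
print(round(dice(a, b), 3))  # 2*40 / 120 = 0.667
```

      Note that for equally sized masks a Dice of 0.75 means 75% of each mask's voxels overlap, which is the kind of discrepancy the reviewer's 20-40% figure refers to.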

      Reviewer #2 (Public review):<br /> The authors have addressed most of my concerns sufficiently. There are still a few serious concerns I have. Primarily, the temporal resolution of the technique still makes me dubious about nearly all of the biological results. It is good that the authors have added some vessel diameter time courses generated by their model. But I still maintain that data sampling every 42 seconds - or even 21 seconds - is problematic. First, the evidence for long vascular responses is lacking. The authors cite several papers:

      Alarcon-Martinez et al. 2020 show and explicitly state that their responses (stimulus-evoked) returned to baseline within 30 seconds. The responses to ischemia are long lasting but this is irrelevant to the current study using activated local neurons to drive vessel signals.

      Mester et al. 2019 show responses that all seem to return to baseline by around 50 seconds post-stimulus.

      In Mester et al. 2019, diffuse stimulations with blue light showed a return to baseline around 50 seconds post-stimulus (cf. Figures 1E, 2C, 2D). However, focal stimulations, where the stimulation light is raster scanned over a small region focused in the field of view, show longer-lasting responses (cf. Figure 4) that have not returned to baseline by 70 seconds post-stimulus [8]. Alarcon-Martinez et al. do report that their responses return to baseline within 30 seconds; however, their physiological stimulation may lead to different neuronal and vessel response kinetics than those elicited by the optogenetic stimulations in the current work.

      O'Herron et al. 2022 and Hartmann et al. 2021 use opsins expressed in vessel walls (not neurons as in the current study) and directly constrict vessels with light. So this is unrelated to neuronal activity-induced vascular signals in the current study.

      We agree that optogenetic activation of vessel-associated cells is distinct from optogenetic activation of neurons, but we do expect the effects of such perturbations on the vasculature to have some commonalities.

      There are other papers including Vazquez et al 2014 (PMID: 23761666) and Uhlirova et al 2016 (PMID: 27244241) and many others showing optogenetically-evoked neural activity drives vascular responses that return back to baseline within 30 seconds. The stimulation time and the cell types labeled may be different across these studies which can make a difference. But vascular responses lasting 300 seconds or more after a stimulus of a few seconds are just not common in the literature and so are very suspect - likely at least in part due to the limitations of the algorithm.

      Vazquez et al. 2014 used diffuse photostimulation with a fiberoptic probe, similar to Mester et al. 2019, as opposed to the raster-scanning focal stimulation we used in this study and in the study by Mester et al. 2019, where we observed focal photostimulation to elicit vascular responses lasting longer than a minute. Uhlirova et al. 2016 used photostimulation powers between 0.7 and 2.8 mW, likely lower than our 4.3 mW/mm2 photostimulation. Further, even with focal photostimulation, we do see light-intensity dependence of the duration of the vascular responses. Indeed, in Supplementary Figure 2, 1.1 mW/mm2 photostimulation leads to briefer dilations/constrictions than does 4.3 mW/mm2; the 1.1 mW/mm2 responses are, duration-wise, in line with those in Uhlirova et al. 2016.

      Critically, as per Supplementary Figure 2, the analysis of the experimental recordings acquired at 3-second temporal resolution likewise showed responses in many vessels lasting for tens of seconds, and even hundreds of seconds in some vessels.

      Another major issue is that the time courses provided show that the same vessel constricts at certain points and dilates later. So where in the time course the data is sampled will have a major effect on the direction and amplitude of the vascular response. In fact, I could not find how the "response" window is calculated. Is it from the first volume collected after the stimulation - or an average of some number of volumes? But clearly down-sampling the provided data to 42 or even 21 second sampling will lead to problems. If the major benefit to the field is the full volume over large regions that the model can capture and describe, there needs to be a better way to capture the vessel diameter in a meaningful way.

      In the main experiment (i.e. excluding the additional experiments presented in the Supplementary Figure 2 that were collected over a limited FOV at 3s per stack), we have collected one stack every 42 seconds. The first slice of the volume starts following the photostimulation, and the last slice finishes at 42 seconds. Each slice takes ~0.44 seconds to acquire. The data analysis pipeline (as demonstrated by the Supplementary Figure 2) is not in any way limited to data acquired at this temporal resolution and - provided reasonable signal-to-noise ratio (cf. Figure 5) - is applicable, as is, to data acquired at much higher sampling rates.

      It still seems possible that if responses are bi-phasic, then depth dependencies of constrictors vs dilators may just be due to where in the response the data are being captured - maybe the constriction phase is captured in deeper planes of the volume and the dilation phase more superficially. This may also explain why nearly a third of vessels are not consistent across trials - if the direction the volume was acquired is different across trials, different phases of the response might be captured.

      Alternatively, like neuronal responses to physiological stimuli, the vascular responses elicited by increases in neuronal activity may themselves be variable in both space and time.

I still have concerns about other aspects of the responses, but these are less strong. In particular, these bi-phasic responses are not something typically seen, and I still maintain that constrictions are not common. The authors are right that some papers do show constriction. Leaving out the direct optogenetic constriction of vessels (O'Herron 2022 & Hartmann 2021), the Alarcon-Martinez et al. 2020 paper and others such as Gonzales et al. 2020 (PMID: 33051294) show different capillary branches dilating and constricting. However, these are typically found either with spontaneous fluctuations or with highly localized application of vasoactive compounds. I am not familiar with data showing activation of a large region of tissue, as in the current study, coupled with vessel constrictions in the same region. But as the authors point out, typically only a few vessels at a time are monitored, so it is possible, even if this reviewer thinks it unlikely, that this effect is real and just hasn't been seen.

      Uhlirova et al. 2016 (PMID: 27244241) observed biphasic responses in the same vessel with optogenetic stimulation in anesthetized and unanesthetized animals (cf Fig 1b and Fig 2, and section “OG stimulation of INs reproduces the biphasic arteriolar response”). Devor et al. (2007) and Lindvere et al. (2013) also reported on constrictions and dilations being elicited by sensory stimuli.

      I also have concerns about the spatial resolution of the data. It looks like the data in Figure 7 and Supplementary Figure 7 have a resolution of about 1 micron/pixel. It isn't stated so I may be wrong. But detecting changes of less than 1 micron, especially given the noise of an in vivo prep (brain movement and so on), might just be noise in the model. This could also explain constrictions as just spurious outputs in the model's diameter estimation. The high variability in adjacent vessel segments seen in Figure 6C could also be explained the same way, since these also seem biologically and even physically unlikely.

Thank you for your comment. To address this important issue, we performed an additional validation experiment in which we custom-ordered fluorescent beads with a known diameter of 7.32 ± 0.27 um, imaged them following our imaging protocol, and subsequently used our pipeline to estimate their diameter. Our analysis converged on the manufacturer-specified diameters, estimating the diameter to be 7.34 ± 0.32 um. The manuscript has been updated to detail this experiment, as below:

      Methods section insert

“Second, our boundary detection algorithm was used to estimate the diameters of fluorescent beads of a known diameter imaged under similar acquisition parameters. Polystyrene microspheres labelled with Flash Red (Bangs Laboratories, Inc., CAT# FSFR007) with a nominal diameter of 7.32 um and a specified range of 7.32 ± 0.27 um, as determined by the manufacturer using a Coulter counter, were imaged on the same multiphoton fluorescence microscope set-up used in the experiment (identical light path, resonant scanner, objective, detector, excitation wavelength, and nominal lateral and axial resolutions, with 5x averaging). The images of the beads had a higher SNR than our images of the vasculature, so Gaussian noise was added to the images to degrade the SNR to the level of that of the blood vessels. The images of the beads were segmented with a threshold, centroids were calculated for individual spheres, and planes with a random normal vector were extracted from each bead and used to estimate the diameter of the beads. The same smoothing and PSF deconvolution steps were applied in this task. We then reported the mean and standard deviation of the distribution of the diameter estimates. A variety of planes were used to estimate the diameters.”

      Results Section Insert

      “Our boundary detection algorithm successfully estimated the radius of precisely specified fluorescent beads. The bead images had a signal-to-noise ratio of 6.79 ± 0.16 (about 35% higher than our in vivo images): to match their SNR to that of in vivo vessel data, following deconvolution, we added Gaussian noise with a standard deviation of 85 SU to the images, bringing the SNR down to 5.05 ± 0.15. The data processing pipeline was kept unaltered except for the bead segmentation, performed via image thresholding instead of our deep learning model (trained on vessel data). The bead boundary was computed following the same algorithm used on vessel data: i.e., by the average of the minimum intensity gradients computed along 36 radial spokes emanating from the centreline vertex in the orthogonal plane. To demonstrate an averaging-induced decrease in the uncertainty of the bead radius estimates on a scale that is finer than the nominal resolution of the imaging configuration, we tested four averaging levels in 289 beads. Three of these averaging levels were lower than that used on the vessels, and one matched that used on the vessels (36 spokes per orthogonal plane and a minimum of 10 orthogonal planes per vessel). As the amount of averaging increased, the uncertainty on the diameter of the beads decreased, and our estimate of the bead's diameter converged upon the manufacturer's Coulter counter-based specifications (7.32 ± 0.27um), as tabulated in Table 1.”
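The SNR-matching step described above can be sketched with a toy example on a synthetic bead image. The manuscript's exact SNR definition is not restated in the text, so mean signal over background standard deviation is assumed here; the image geometry and intensities are likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "bead": a bright disc (~600 SU) on a dim background (~50 SU),
# plus acquisition noise. SNR is taken here as mean signal divided by
# the background standard deviation (an assumption, not the paper's
# necessarily exact definition).
yy, xx = np.mgrid[:64, :64]
bead = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 100, 600.0, 50.0)
img = bead + rng.normal(0, 60.0, bead.shape)

def snr(im, mask):
    """Mean intensity inside the signal mask over background std."""
    return im[mask].mean() / im[~mask].std()

mask = bead > 300  # signal region from the noiseless template
print(snr(img, mask))

# Degrade toward the vessel-like regime by adding further Gaussian
# noise, as the authors did with sigma = 85 SU.
degraded = img + rng.normal(0, 85.0, img.shape)
print(snr(degraded, mask))  # lower than before
```

Because independent Gaussian noise adds in quadrature, the background std rises from ~60 SU to ~sqrt(60² + 85²) ≈ 104 SU, lowering the SNR roughly by half.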

      Reviewer #1 (Recommendations for the authors):

      Comments to the authors replies to the reviews:

      - Supplementary Figure 13:

      As indicated before the 3d images + scale makes it impossible to judge the quality of the outputs.

As aforementioned, 2D slices have been added to Supplementary Figure 13.

      - Supplementary Table 3:

There is a significant increase in the Hausdorff and Mean Surface Distance measures for the new data. Why?

A single vessel missed by either the rater or the model would substantially inflate the Hausdorff distance (HD) and, by extension, the Mean Surface Distance. This is particularly pertinent in the LSFM image, whose much larger FOV allows much larger maximum distances to result from vessels missed in either the prediction or the ground truth. Large Hausdorff distances may therefore indicate that a vessel was missed in either the ground truth or the segmentation mask.
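This sensitivity can be illustrated with a small synthetic example (hypothetical point sets standing in for vessel centerlines; the symmetric Hausdorff distance over point sets is assumed):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n,3) and b (m,3)."""
    d = cdist(a, b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

rng = np.random.default_rng(0)
# Ground truth: two vessels, one near the origin and one 200 um away.
vessel_a = rng.normal(0, 1, (100, 3))
vessel_b = rng.normal(0, 1, (100, 3)) + np.array([200.0, 0.0, 0.0])
truth = np.vstack([vessel_a, vessel_b])

# Prediction 1: both vessels found, with sub-micron boundary jitter.
pred_full = truth + rng.normal(0, 0.5, truth.shape)
# Prediction 2: the distant vessel is missed entirely.
pred_miss = vessel_a + rng.normal(0, 0.5, vessel_a.shape)

print(hausdorff(truth, pred_full))  # small: jitter-scale
print(hausdorff(truth, pred_miss))  # huge: dominated by the missed vessel
```

One missed vessel drives the metric from the jitter scale to the inter-vessel scale, while overlap metrics such as Dice would barely change.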

Of note, a different rater annotated these additional datasets than the raters who labelled the original ground truth data. There is high variability in boundary placement between raters: in a test where three raters segmented the same three images from the original dataset, we computed an ICC of 0.73 across their segmentations. Our model's Dice scores on out-of-distribution datasets were on par with the inter-rater ICC on the Thy1ChR2 2PFM data.

- Supplementary Figure 2: The authors provide useful data on the time responses. However, looking at those figures, it is puzzling why certain vessels were selected as responding, as there seems to be almost no change after stimulation. In addition, some of the responses seem to actually start several tens of seconds before the actual stimulus (particularly in A).

      Only some traces in C and D (dark blue) seem to be actually responding vessels.

      This is not discussed and unclear.

      Supplementary Figure 2 displays the time courses of vessel calibre for all vessels in the FOV, not just those deemed responders.

The aforementioned effects are due to the loess smoothing filter having been applied across the entire time course, including the pre-stimulation period; this has been rectified in the updated figures. In particular, Supplementary Figure 2 has been updated with loess smoothing applied separately before and after photostimulation. The apparent pre-stimulation effect disappears once the smoothing windows are separated.

- R Point 7: As indicated before, and in agreement with the alternative reviewer, the quality of the results in 3d is difficult to judge. No 2d sections that compare 'ground truth' with inferred results are shown in the current manuscript, which would enable a much better judgment. The provided video is still 3d and not a video going through 2d slices. Also, in the video the overlap of vasculature and raw data seems to be very good and near 100%, so why is the Dice measure reported earlier so low? Is this a particularly good example?

      Some examples, indicating where the pipeline fails (and why) would be helpful to see, to judge its performance better (ideally in 2d slices).

As discussed in the public comments, 2D slices are now included in Supplementary Figures 4 and 13 to facilitate visual assessment. The vessels are long and thin, so slight dilations or constrictions affect the Dice scores without being easy to visualize.

- Author response images 6 and 7: From the presented data, the constrictions measured in the smaller vessels may be, at least partly, a result of noise. This seems to be particularly the case in Author response image 7, left top and bottom, for example. It would be helpful to show the actual estimates of the vessel radii overlaid on the (raw) images. In some of the pictures the noise level seems to reach higher values than the 10-20% of noise used in the tests by the authors in the revision.

The vessel radii are estimated as averages across all vertices of the individual vessels; it is thus not possible to overlay them meaningfully on 2D slices. In Figure 2B, we do show a rendering of sample vessel-wise radius estimates.

- "We tested the centerline detection in Python, scipy (1.9.3) and Matlab. We found that the Matlab implementation performed better due to its inclusion of a branch length parameter for the identification of terminal branches, which greatly reduced the number of false branches; the Python implementation does not include this feature (in any version) and its output had many more such "hair" artifacts. ClearMap skeletonization uses an algorithm by Palagyi & Kuba (1999) to thin segmentation masks, which does not include hair removal. VesselVio uses a parallelized version of the scipy implementation of the Lee et al. (1994) algorithm, which does not do hair removal based on a terminal branch length filter; instead, VesselVio performs a threshold-based hair removal that is frequently overly aggressive (it removes true positive vessel branches), as highlighted by the authors."

      This statement is wrong. The removal of small branches in skeletons is algorithmically independent of the skeletonization algorithm itself. The authors cite a reference concerned with the algorithm they are currently employing for the skeletonization. Careful assessment of that reference shows that this algorithm removes small length branches after skeletonization is performed. This feature is available in open-source packages as well, or could be easily implemented.

      We appreciate that skeletonization is distinct from hair removal and have reworded this paragraph for clarity. We are currently working with SciPy developers to implement hair removal in their image processing pipeline so as to render our pipeline fully open-source.

      The removal of hairs after skeletonization with length based thresholding leads to the possibility of removing parts of centerlines in the main part of vessels after branch points with hairs. The Matlab implementation does not do this and leaves the main branches intact.

      This text has been updated to:

“Hair” segments shorter than 20 μm and terminal on one end were iteratively removed, starting with the shortest hairs and merging the longest hairs at junctions with 2 terminal branches with the main vessel branch to reduce false positive vascular branches and minimize the amount of centerlines removed. This iterative hair removal functionality of the skeletonization algorithm is currently unavailable in Python, but is available in Matlab [9].
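The iterative pruning described above can be sketched as follows. This is an illustrative simplification in Python using networkx; unlike the Matlab routine, it does not merge branches at junctions, and the toy graph and node names are hypothetical.

```python
import networkx as nx

def prune_hairs(g, min_len=20.0):
    """Iteratively remove terminal 'hair' edges shorter than min_len (um),
    shortest first, mimicking a length-filtered skeleton cleanup."""
    g = g.copy()
    while True:
        hairs = [(u, v) for u, v in g.edges
                 if (g.degree[u] == 1 or g.degree[v] == 1)
                 and g.edges[u, v]["length"] < min_len]
        if not hairs:
            return g
        # Remove the shortest hair first, then drop orphaned endpoints.
        u, v = min(hairs, key=lambda e: g.edges[e]["length"])
        g.remove_edge(u, v)
        g.remove_nodes_from([n for n in (u, v) if g.degree[n] == 0])

# Toy skeleton: main branch A-B-C with a 5 um false-positive hair at B.
g = nx.Graph()
g.add_edge("A", "B", length=50.0)
g.add_edge("B", "C", length=60.0)
g.add_edge("B", "H", length=5.0)  # terminal "hair"
cleaned = prune_hairs(g)
print(sorted(cleaned.edges))  # hair B-H removed, main branch intact
```

Because only terminal edges below the length threshold are eligible, genuine terminal branches such as A-B (50 μm) survive, addressing the over-aggressive thresholding noted for VesselVio.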

- "On the reviewer's comment, we did try inputting normalized images into Ilastik, but this did not improve its results." This is surprising. Reasonable standard preprocessing (e.g. background removal, equalization, and vessel enhancement) would probably restore most of Ilastik's performance in the indicated panel.

While the improvement may be present in a particular set of images, in our experience the generalizability of such improvement to other patches is often poor, as reflected by the aforementioned results and the widespread uptake of DL approaches to image segmentation. The in vivo datasets also contain artifacts arising from, e.g., bleeding into the FOV, to which Ilastik is highly sensitive. This is an example of noise that is not easily removed by standard preprocessing.

      - "Typical pre-processing/standard computer vision techniques with parameter tuning do not generalize on out-of-distribution data with different image characteristics, motivating the shift to DL-based approaches."

I disagree with this statement. DL approaches can typically generalize when trained with a sufficient amount of diverse data. However, DL approaches can also fail on new out-of-distribution data. In that situation they can only be 'rescued' via new, time-intensive data generation and retraining. Simple standard image pre-processing steps (e.g. to remove background or boost vessel structures) have well-defined parameters that can be easily adapted to new out-of-distribution data, as clear interpretations are available. The time needed to adapt those parameters is typically much smaller than that needed to retrain DL frameworks.

We find that standard image processing approaches with parameter tuning work robustly only when fine-tuned on individual images; i.e., the fine-tuning does not generalize across datasets. This approach thus does not scale to experiments yielding large images or high throughput. While DL models may not generalize to out-of-distribution data, fine-tuning a DL model on a small subset of labels generally produces a model superior to parameter-tuned standard approaches, and one that can then be applied to an entire study. Moreover, DL fine-tuning is typically efficient, as very limited labelling and training time is required.

- It is still unclear how the authors' pipeline performs compared with other (open-source or commercially) available pipelines. As indicated before, comparing to Ilastik, particularly when feeding in non-preprocessed data, does not seem to be a particularly high bar.

      This question has also been raised by the other reviewer who asked to compare to commercially available pipelines.

This question was not answered by the authors; instead, the authors reply by claiming to provide an open-source pipeline. In fact, the use of Matlab in their pipeline does not make it fully open-source either. Moreover, as mentioned before, open-source pipelines for comparison do exist.

As discussed above, the manuscript now includes comparisons to Imaris for segmentation and VesselVio for graph extraction. The pipeline is on GitHub.

      -"We agree with the review that this question is interesting; however, it is not addressable using present data: activated neuronal firing will have effects on their postsynaptic neighbors, yet we have no means of measuring the spread of activation using the current experimental model."

      Distances to the closest neuron in the manuscript are measured without checking if it's active. Thus, distances to the first set of n neurons could be measured in the same way, ignoring activation effects.

      Shorter distances to an entire ensemble of neurons would still be (more) informative of metabolic demands.

This could indeed be done within the existing framework. The connected-components-3d package can be used to extract individual neurons in the FOV from the neuron segmentation mask. The distance from each neuron to each point on the vessel centerlines could then be calculated.
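A sketch of this suggestion, using scipy.ndimage.label as a stand-in for the connected-components-3d package mentioned above, on hypothetical toy masks:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

# Toy 3D data: two cubic "neurons" and a straight vessel centerline along x.
neuron_mask = np.zeros((32, 32, 32), dtype=bool)
neuron_mask[5:8, 5:8, 5:8] = True        # neuron 1
neuron_mask[20:23, 25:28, 25:28] = True  # neuron 2
centerline = np.array([[i, 16.0, 16.0] for i in range(32)])

# Label individual neurons in the segmentation mask.
labels, n = ndimage.label(neuron_mask)
centroids = ndimage.center_of_mass(neuron_mask, labels, range(1, n + 1))

# Distance from each neuron centroid to the nearest centerline vertex.
tree = cKDTree(centerline)
dists, _ = tree.query(np.array(centroids))
print(n, dists)  # 2 neurons, one distance per neuron
```

Sorting these distances per centerline vertex would then give the distance to the first n neurons, ignoring activation status, as the reviewer proposes.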

      - model architecture:

      It is unclear from the description if any positional encoding was used for the image patches.

It is unclear if the architecture/pipeline can handle any volume size or is trained on a fixed volume shape. In the latter case, how is the pipeline applied?

      The model includes positional encoding, as described in Hatamizadeh et al. 2021.

      The model can be applied to images of any size, as demonstrated on larger images in Supplementary Figure 9 and on smaller images in Supplementary Figure 2. The pipeline is applied in the same way. It will read in the size of an input image and output an image of the same size.

      - transformer models often show better results when using a learning rate scheduler that adjust the learning rate (up and down ramps typically). Did the authors test such approaches?

      We did not use a learning rate scheduler, as we found we were getting good results without using one.

- formula (4): The 95th percentile of two numbers is the max, and thus (5) is certainly not what the HD95 metric is. The formula is simply wrong as displayed.

      Thank you. The formula has been updated.

- formula (5): Formula 5 is certainly wrong: n_X and n_Y are either integer numbers, as indicated by the sum indices, or sets, when used in the distances, but they cannot be both at the same time.

      Thank you for your comment. The Formula has been updated.

      - The statement:

      "this functionality of the skeletonization algorithm is currently unavailable in any python implementation, but is available in Matlab [56]."

      is not correct (see reply above)

      Please see the response above. This text has been updated to:

      “Hair” segments shorter than 20 μm and terminal on one end were iteratively removed, starting with the shortest hairs and merging the longest hairs at junctions with 2 terminal branches with the main vessel branch to reduce false positive vascular branches and minimize the amount of centerlines removed. This iterative hair removal functionality of the skeletonization algorithm is currently unavailable in Python, but is available in Matlab [9].

- The centerline extraction is performed after taking the union of smoothed masks. The union operation can induce novel 'irregular' boundaries that degrade skeletonization performance. I would have expected smoothing to be applied after the union?

      Indeed the images were smoothed via dilation after taking the union, as described in the previous set of responses to private comments.

      - "The radius estimate defined the size of the Gaussian kernel that was convolved with the image to smooth the vessel: smaller vessels were thus convolved with narrower kernels."

It is unclear which images were filtered.

      We have updated this text for clarity:

      The radius estimate defined the size of the Gaussian kernel that was convolved with the 2D image slice to smooth the vessel: smaller vessels were thus convolved with narrower kernels.
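The radius-dependent smoothing can be sketched as below. This is illustrative only; the 0.8 scale factor follows the 80%-of-radius sigma quoted from the methods, and the white-noise test image is hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_slice(img2d, radius_px, scale=0.8):
    """Smooth a 2D slice with a Gaussian whose width tracks the estimated
    vessel radius, so smaller vessels get narrower kernels."""
    return gaussian_filter(img2d, sigma=scale * radius_px)

rng = np.random.default_rng(1)
plane = rng.normal(0, 1, (64, 64))          # stand-in image slice
small_vessel = smooth_slice(plane, radius_px=2.0)  # sigma = 1.6 px
large_vessel = smooth_slice(plane, radius_px=6.0)  # sigma = 4.8 px

# A wider kernel suppresses more high-frequency noise, so the filtered
# image variance shrinks as the radius (and hence sigma) grows.
print(plane.std(), small_vessel.std(), large_vessel.std())
```

Scaling sigma with the radius keeps fine vessels from being blurred away while still denoising large ones.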

      - Was deconvolution on the raw images applied or after Gaussian filtering ?

      The deconvolution was applied before Gaussian filtering.

      - ",we extracted image intensities in the orthogonal plane from the deconvolved raw registered image. A 2D Gaussian kernel with sigma equal to 80% of the estimated vessel-wise radius was used to low-pass filter the extracted orthogonal plane image and find the local signal intensity maximum searching, in 2D, from the center of the image to the radius of 10 pixels from the center."

Would it not be better to filter the 3d image before extracting the 2d plane, rather than filtering afterwards?

      That could be done, but would incur a significant computational speed penalty. 2D convolutions are faster, and produced excellent accuracy when estimating radii in our bead experiment.

What algorithm was used to obtain the 2d images?

      The 2d images were obtained using scipy.ndimage.map_coordinates.
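One possible use of scipy.ndimage.map_coordinates for orthogonal-plane extraction is sketched below; the pipeline's exact plane parameterization is an assumption, as is the toy volume.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_plane(vol, center, u, v, half=8, step=1.0):
    """Sample a (2*half+1)^2 plane from 3D volume `vol`, centered at
    `center` and spanned by orthonormal in-plane vectors u and v."""
    s = np.arange(-half, half + 1) * step
    # Build a (3, n, n) grid of voxel coordinates for the plane.
    grid = (center[:, None, None]
            + u[:, None, None] * s[None, :, None]
            + v[:, None, None] * s[None, None, :])
    return map_coordinates(vol, grid, order=1)  # trilinear interpolation

# Toy volume whose intensity equals the z index; sample an axial plane at z=10.
vol = np.tile(np.arange(32, dtype=float)[:, None, None], (1, 32, 32))
plane = extract_plane(vol,
                      center=np.array([10.0, 16.0, 16.0]),
                      u=np.array([0.0, 1.0, 0.0]),
                      v=np.array([0.0, 0.0, 1.0]))
print(plane.shape)  # (17, 17)
```

With an arbitrary unit normal, u and v would be chosen orthogonal to it, which yields the orthogonal-plane sampling described in the methods.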

      - Figure 2: H is this the filtered image or the raw data ?

      Panel H is raw data.

      - It would be good to see a few examples of the raw data overlaid with the radial estimates to evaluate the approach (beyond the example in K).

      Additional examples are shown in Figure 5.

      - Figure 2 K: Why are boundary points greater than 2 standard deviations away from the mean excluded ?

They are excluded to account for irregularities as vessels approach junctions [10], [11].

- Figure 2 L: What exactly is plotted here? What are vertex-wise changes; is that the difference between the minimum and maximum of all the detected radii for a single vertex? Why do some vessels (red) show high values consistently throughout the vessel?

Figure 2L displays the change in the radius of vertices in this FOV following photostimulation, relative to baseline.

- Assortativity: to calculate the assortativity, are radius changes binned in any form to account for the fact that, otherwise, $e_{xy}$ and related measures will likely be based on single data points?

Assortativity is not calculated from single data points. It can be calculated either by binning into categories or by computing it on scalars, i.e., the average radius across a vessel segment:

See here for information on calculating assortativity from binned categories (i.e., classifying a vessel as a constrictor, dilator or non-responder):

      https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.assortativity.attribute_assortativity_coefficient.html#networkx.algorithms.assortativity.attribute_assortativity_coefficient

      And see here for calculating assortativity from scalar values:

      https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.assortativity.numeric_assortativity_coefficient.html#networkx.algorithms.assortativity.numeric_assortativity_coefficient

      We calculated the assortativity using scalar values.

In both cases, one uses all nodes and calculates the correlation between each node and its neighbours over an attribute that is either binned or scalar. Binning the value on a given node would not affect the number of nodes in the graph.
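A toy illustration of both variants (assumed node values, not study data): each vessel segment is a node carrying a categorical response label and a scalar radius change, and like connects with like, so both coefficients come out perfectly assortative.

```python
import networkx as nx

# Toy vascular graph: two vessel pairs, each pair sharing a response type.
g = nx.Graph()
g.add_edges_from([(0, 1), (2, 3)])  # like responds with like
responses = {0: "dilator", 1: "dilator", 2: "constrictor", 3: "constrictor"}
radius_change = {0: 4, 1: 4, 2: 1, 3: 1}  # scalar change (arbitrary units)
nx.set_node_attributes(g, responses, "response")
nx.set_node_attributes(g, radius_change, "dr")

# Categorical (binned) variant vs. scalar variant, per the two cited APIs.
r_cat = nx.attribute_assortativity_coefficient(g, "response")
r_num = nx.numeric_assortativity_coefficient(g, "dr")
print(r_cat, r_num)  # both 1.0 for this perfectly assortative toy network
```

On real data the coefficient falls between -1 and 1, with positive values indicating that neighbouring segments respond similarly.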

- "Ilastik tended to over-segment vessels, i.e. the model returned numerous false positives, having a high recall (0.89 ± 0.19) but low precision (0.37 ± 0.33) (Figure 3, Supplementary Table 3)."

As indicated before, and looking at Figure 4, the over-segmentation seems to be due to excessive background. A suggested preprocessing step on the raw images to remove background could have avoided this.

      The images were normalized in preprocessing.

      - Figure 4: The 3d panels are not much easier to read in the revised version. As suggested by other reviewers, 2d sections indicating the differences and errors would be much more helpful to judge the pipelines quality more appropriately.

      As discussed above, 2D sections are now available in a supplementary figure.

- Figure 3: What would be the Dice score (and other measures) between two ground truths produced by two human annotators (assisted, e.g., by Ilastik)?

Two additional human raters annotated the images. We observed an ICC of 0.73 across a total of three raters on the three images.

- Figure 5: The authors only provide the absolute value in SU for the sigma noise levels. This only has meaning when compared to the mean or median SU of the images. In the text the maximal intensity of 1023 SU is mentioned, but what are those values in images with weaker/smaller vessels (as provided in the constriction examples in the revision)?

      I am unclear why this validation figure should be part of the main manuscript while generalization performance is left out.

      The manuscript has been updated with the mean SNR value of 5.05 ± 0.15 to provide context for the quality of our images.

      Bibliography

      (1) J. R. Bumgarner and R. J. Nelson, “Open-source analysis and visualization of segmented vasculature datasets with VesselVio,” Cell Rep. Methods, vol. 2, no. 4, Apr. 2022, doi: 10.1016/j.crmeth.2022.100189.

      (2) G. Tetteh et al., “DeepVesselNet: Vessel Segmentation, Centerline Prediction, and Bifurcation Detection in 3-D Angiographic Volumes,” Front. Neurosci., vol. 14, Dec. 2020, doi: 10.3389/fnins.2020.592352.

      (3) N. Holroyd, Z. Li, C. Walsh, E. Brown, R. Shipley, and S. Walker-Samuel, “tUbe net: a generalisable deep learning tool for 3D vessel segmentation,” Jul. 24, 2023, bioRxiv. doi: 10.1101/2023.07.24.550334.

      (4) W. Tahir et al., “Anatomical Modeling of Brain Vasculature in Two-Photon Microscopy by Generalizable Deep Learning,” BME Front., vol. 2020, p. 8620932, Dec. 2020, doi: 10.34133/2020/8620932.

(5) R. Damseh, P. Delafontaine-Martel, P. Pouliot, F. Cheriet, and F. Lesage, “Laplacian Flow Dynamics on Geometric Graphs for Anatomical Modeling of Cerebrovascular Networks,” arXiv:1912.10003 [q-bio], Dec. 2019, Accessed: Dec. 09, 2020. [Online]. Available: http://arxiv.org/abs/1912.10003

      (6) T. Jerman, F. Pernuš, B. Likar, and Ž. Špiclin, “Enhancement of Vascular Structures in 3D and 2D Angiographic Images,” IEEE Trans. Med. Imaging, vol. 35, no. 9, pp. 2107–2118, Sep. 2016, doi: 10.1109/TMI.2016.2550102.

      (7) T. B. Smith and N. Smith, “Agreement and reliability statistics for shapes,” PLOS ONE, vol. 13, no. 8, p. e0202087, Aug. 2018, doi: 10.1371/journal.pone.0202087.

      (8) J. R. Mester et al., “In vivo neurovascular response to focused photoactivation of Channelrhodopsin-2,” NeuroImage, vol. 192, pp. 135–144, May 2019, doi: 10.1016/j.neuroimage.2019.01.036.

      (9) T. C. Lee, R. L. Kashyap, and C. N. Chu, “Building Skeleton Models via 3-D Medial Surface Axis Thinning Algorithms,” CVGIP Graph. Models Image Process., vol. 56, no. 6, pp. 462–478, Nov. 1994, doi: 10.1006/cgip.1994.1042.

      (10) M. Y. Rennie et al., “Vessel tortuousity and reduced vascularization in the fetoplacental arterial tree after maternal exposure to polycyclic aromatic hydrocarbons,” Am. J. Physiol.-Heart Circ. Physiol., vol. 300, no. 2, pp. H675–H684, Feb. 2011, doi: 10.1152/ajpheart.00510.2010.

      (11) J. Steinman, M. M. Koletar, B. Stefanovic, and J. G. Sled, “3D morphological analysis of the mouse cerebral vasculature: Comparison of in vivo and ex vivo methods,” PLOS ONE, vol. 12, no. 10, p. e0186676, Oct. 2017, doi: 10.1371/journal.pone.0186676.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In this work, the authors present a cornucopia of data generated using deep mutational scanning (DMS) of variants in MET kinase, a protein target implicated in many different forms of cancer. The authors conducted a heroic amount of deep mutational scanning, using computational structural models to augment the interpretation of their DMS findings.

      Strengths:

      This powerful combination of computational models, experimental structures in the literature, dose-response curves, and DMS enables them to identify resistance and sensitizing mutations in the MET kinase domain, as well as consider inhibitors in the context of the clinically relevant exon-14 deletion. They then try to use the existing language model ESM1b augmented by an XGBoost regressor to identify key biophysical drivers of fitness. The authors provide an incredible study that has a treasure trove of data on a clinically relevant target that will appeal to many.

      We thank Reviewer 1 for their generous assessment of our manuscript!

      Weaknesses:

      However, the authors do not equally consider alternative possible mechanisms of resistance or sensitivity beyond the impact of mutation on binding, even though the measure used to discuss resistance and sensitivity is ultimately a resistance score derived from the increase or decrease of the presence of a variant during cell growth.

      For this resistance screen, Ba/F3 was a carefully chosen cellular selection system due to its addiction to exogenously provided IL-3, undetected expression of endogenous RTKs (including MET), and dependence on kinase transgenes to promote signaling and growth under IL-3 withdrawal. Together this allows for the readout of variants that alter kinase-driven proliferation without the caveat of bypass resistance. In our previous phenotypic screen (Estevam et al., 2024, eLife), we also carefully examined the impact of all possible MET kinase domain mutations both in the presence and absence of IL-3 withdrawal, but no inhibitors. There, we identified a small group of mutations that were associated with gain-of-function behavior located at conserved regulatory motifs outside of the catalytic site, yet these mutations were largely sensitive to inhibitors within this screen.

Here, the majority of resistance mutations were located at or near the ATP-binding pocket, suggesting an impact on resistance through direct drug interactions. However, there was also a small population of distal mutations that met our statistical definitions of resistance. Within the crizotinib selection, sites such as T1293, L1272, and T1261, amongst others, demonstrated resistance profiles but were located in the C-lobe, away from the catalytic site. While we did not experimentally validate these specific mutations, it is possible that mutations that do not directly contact the drug instead promote resistance through allosteric or conformational mechanisms that preserve kinase activity and signaling. Indeed, our ML framework explicitly included conformational and stability effects as features, which significantly improved predictions.

      We would be happy to further discuss any specific alternative resistance mechanisms Reviewer 1 has in mind! Thank you for highlighting this!

      There are also points of discussion and interpretation that rely heavily on docked models of kinase-inhibitor pairs without considering alternative binding modes or providing any validation of the docked pose. Lastly, the use of ESM1b is powerful but constrained heavily by the limited structural training data provided, which can lead to misleading interpretations without considering alternative conformations or poses.

The majority of our interpretations are grounded in the X-ray structures of WT MET bound to the inhibitors studied (or close analogs). The use of docked models (note: docking was to mutant structures predicted by UMol, not ESM, which can exhibit conformational changes) is primarily confined to the ML part of the manuscript. Indeed, in our models, conformational and binding-mode changes are taken into account as features (see Ligand RMSD, Residue RMSD). There are certainly improved methods (AF3 variants) emerging that might have even more power to model these changes, but they come with greater computational costs and are something we will be evaluating in the future.

      We added to the results section: “While our features can account for some changes in MET-mutant conformation and altered inhibitor binding pose, the prediction of these aspects can likely be improved with new methods.”

      Reviewer #2 (Public review):

      Summary:

      This manuscript provides a comprehensive overview of potential resistance mutations within MET Receptor Tyrosine Kinase and defines how specific mutations affect different inhibitors and modes of target engagement. The goal is to identify inhibitor combinations with the lowest overlap in their sensitivity to resistant mutations and determine if certain resistance mutations/mechanisms are more prevalent for specific modes of ATP-binding site engagement. To achieve this, the authors measured the ability of ~6000 single mutants of MET's kinase domain (in the context of a cytosolic TPR fusion) to drive IL-3-independent proliferation (used as a proxy for activity) of Ba/F3 cells (deep mutational profiling) in the presence of 11 different inhibitors. The authors then used co-crystal and docked structures of inhibitor-bound MET complexes to define the mechanistic basis of resistance and applied a protein language model to develop a predictive model of inhibitor sensitivity/resistance.

      Strengths:

      The major strengths of this manuscript are the comprehensive nature of the study and the rigorous methods used to measure the sensitivity of ~6000 MET mutants in a pooled format. The dataset generated will be a valuable resource for researchers interested in understanding kinase inhibitor sensitivity and, more broadly, small molecule ligand/protein interactions. The structural analyses are systematic and comprehensive, providing interesting insights into resistance mechanisms. Furthermore, the use of machine learning to define inhibitor-specific fitness landscapes is a valuable addition to the narrative. Although the ESM1b protein language model is only moderately successful in identifying the underlying mechanistic basis of resistance, the authors' attempt to integrate systematic sequence/function datasets with machine learning serves as a foundation for future efforts.

      We thank Reviewer 2 for their thoughtful assessment of our manuscript!

      Weaknesses:

      The main limitation of this study is that the authors' efforts to define general mechanisms between inhibitor classes were only moderately successful due to the challenge of uncoupling inhibitor-specific interaction effects from more general mechanisms related to the mode of ATP-binding site engagement. However, this is a minor limitation that only minimally detracts from the impressive overall scope of the study.

      We agree. We have added to the discussion: “A full landscape of mutational effects can help to predict drug response and guide small molecule design to counteract acquired resistance. The ability to define molecular mechanisms towards that goal will likely require more purposefully chosen chemical inhibitors and combinatorial mutational libraries to be maximally informative.”

      Reviewer #3 (Public review):

      Summary:

In the manuscript 'Mapping kinase domain resistance mechanisms for the MET receptor tyrosine kinase via deep mutational scanning' by Estevam et al, deep mutational scanning is used to assess the impact of ~5,764 mutants in the MET kinase domain on the binding of 11 inhibitors. Analyses were divided by individual inhibitor and kinase inhibitor subtypes (I, II, I 1/2, and III). While a number of mutants were consistent with previous clinical reports, novel potential resistance mutants were also described. This study has implications for the development of combination therapies, namely which combinations of inhibitors to avoid based on overlapping resistance mutant profiles. While one pair of inhibitors with the least overlapping resistance mutation profiles was suggested, this manuscript presents a proof of concept toward a more systematic approach for improved selection of combination therapeutics. Furthermore, in a final part of this manuscript, the data were used to train a machine learning model, the ESM-1b protein language model augmented with an XGBoost regressor framework, and the authors found that they could improve predictions of resistance mutations over the initial ESM-1b model.

      Strengths:

Overall, this paper is a tour-de-force of data collection and analysis to establish a more systematic approach for the design of combination therapies, especially in targeting MET and other kinases, a family of proteins significant to therapeutic intervention for a variety of diseases. The presentation of the work is mostly concise and clear, with thousands of data points presented neatly and clearly. The discovery of novel resistance mutants for individual MET inhibitors, kinase inhibitor subtypes within the context of MET, and all resistance mutants across inhibitor subtypes for MET has clinical relevance. However, probably the most promising outcome of this paper is the proposal of the inhibitor combination of Crizotinib and Cabozantinib as Type I and Type II inhibitors, respectively, with the least overlapping resistance mutation profiles and therefore potentially the most successful combination therapy for MET. While this specific combination is not necessarily the point, it illustrates a compelling systematic approach for deciding how to proceed in developing combination therapy schedules for kinases. In an insightful final section of this paper, the authors approach using their data to train a machine learning model, perhaps understanding that performing these experiments for every kinase for every inhibitor could be prohibitive to applying this method in practice.

We thank Reviewer 3 for their assessment of our manuscript (we are very happy to have it described as a tour-de-force!).

      Weaknesses:

This paper presents a clear set of experiments with a compelling justification. The content of the paper is overall of high quality. The comments below mostly concern clarifications in presentation.

Two places could use more computational experiments and analysis, however. Both are presented as suggestions, but at least a discussion of these topics would improve the overall relevance of this work. In the first case, it seems that while the analyses conducted on this dataset were chosen with care to be the most relevant to human health, further analyses of these results and their implications for our understanding of allosteric interactions and their effects on inhibitor binding would be a relevant addition. For example, for any given residue type found to be a resistance mutant, are there consistent amino acid mutations for which a large or small effect is found? For example, is a mutation from alanine to phenylalanine always deleterious, though one can assume the exact location of a residue matters significantly? Some of this analysis is done in dividing resistance mutants into those that are near the inhibitor binding site and those that aren't, but more of these types of analyses could help the reader understand the large amount of data presented here. At a minimum, a mention of the existing literature in this area and the presence or absence of trends would be worthwhile. For example, is there any correlation with a simpler metric like the Grantham score for predicting effects of mutations (in a way, the ESM-1b model is a better version of this, so this is somewhat implicitly discussed)?

Indeed, we experimented with including these types of features in the XGBoost scheme (particularly residue volume change and distance) to augment the predictive power of the ESM model - see Figure 8 - figure supplement 1; however, we did not find them to be significant. Therefore, the signal is likely very small and/or already incorporated into the baseline ESM model.

      Indeed, this discussion relates to the second point this manuscript could improve upon: the machine learning section. The main actionable item here is that this results section seems the least polished and could do a better job describing what was done. In the figure it looks like results for certain inhibitors were held out as test data - was this all mutants for a single inhibitor, or some other scheme? Overall I think the implications of this section could be fleshed out, potentially with more experiments.

      Figure 8A and the methods section contain a very detailed explanation of test data. We have thought about it and do not have any easy path to improve the description, which we reproduce here:

      “Experimental fitness scores of MET variants in the presence of DMSO and AMG458 were ignored in model training and testing since having just one set of data for a type I ½ inhibitor and DMSO leads to learning by simply memorizing the inhibitor type, without generalizability. The remaining dataset was split into training and test sets to further avoid overfitting (Figure 8A). The following data points were held out for testing - (a) all mutations in the presence of one type I (crizotinib) and one type II (glesatinib analog) inhibitor, (b) 20% of randomly chosen positions (columns) and (c) all mutations in two randomly selected amino acids (rows) (e.g. all mutations to Phe, Ser). After splitting the dataset into train and test sets, the train set was used for XGBoost hyperparameter tuning and cross-validation. For tuning the hyperparameters of each of the XGBoost models, we held out 20% of randomly sampled data points in the training set and used the remaining 80% data for Bayesian hyperparameter optimization of the models with Optuna (Akiba et al., 2019), with an objective to minimize the mean squared error between the fitness predictions on 20% held out split and the corresponding experimental fitness scores. The following hyperparameters were sampled and tuned: type of booster (booster - gbtree or dart), maximum tree depth (max_depth), number of trees (n_estimators), learning rate (eta), minimum leaf split loss (gamma), subsample ratio of columns when constructing each tree (colsample_bytree), L1 and L2 regularization terms (alpha and beta) and tree growth policy (grow_policy - depthwise or lossguide). After identifying the best combination of hyperparameters for each of the models, we performed 10-fold cross validation (with re-sampling) of the models on the full training set. The training set consists of data points corresponding to 230 positions and 18 amino acids. 
We split these into 10 parts such that each part corresponds to data from 23 positions and 2 amino acids. Then, at each of 10 iterations of cross-validation, models were trained on 9 of 10 parts (207 positions and 16 amino acids) and evaluated on the 1 held out part (23 positions and 2 amino acids). Through this protocol we ensure that we evaluate performance of the models with different subsets of positions and amino acids. The average Pearson correlation and mean squared error of the models from these 10 iterations were calculated and the best performing model out of 8192 models was chosen as the one with the highest cross-validation correlation. The final XGBoost models were obtained by training on the full training set and also used to obtain the fitness score predictions for the validation and test sets. These predictions were used to calculate the inhibitor-wise correlations shown in Figure 8B.“
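To make the quoted hold-out scheme concrete, a minimal Python sketch is shown below. The function name, toy data, and default parameters are hypothetical and not part of the actual pipeline, which additionally performs the Optuna hyperparameter tuning and 10-fold cross-validation described above; the sketch only reproduces the split into (a) held-out inhibitors, (b) held-out positions, and (c) held-out amino acids.

```python
import random

def split_dms_dataset(scores, holdout_inhibitors, pos_frac=0.2, n_aa=2, seed=0):
    """Split (inhibitor, position, amino_acid) -> fitness score records into
    train/test sets: hold out (a) all mutations for the chosen inhibitors,
    (b) a random fraction of positions, (c) all mutations to a random subset
    of amino acids. Illustrative sketch only."""
    rng = random.Random(seed)
    positions = sorted({pos for (_, pos, _) in scores})
    amino_acids = sorted({aa for (_, _, aa) in scores})
    held_pos = set(rng.sample(positions, int(len(positions) * pos_frac)))
    held_aa = set(rng.sample(amino_acids, n_aa))
    train, test = {}, {}
    for key, score in scores.items():
        inhibitor, pos, aa = key
        if inhibitor in holdout_inhibitors or pos in held_pos or aa in held_aa:
            test[key] = score
        else:
            train[key] = score
    return train, test

# toy example with invented scores (not data from the paper)
scores = {(inhib, pos, aa): 0.0
          for inhib in ["crizotinib", "tepotinib", "merestinib"]
          for pos in range(1059, 1069)
          for aa in "ACDEF"}
train, test = split_dms_dataset(scores, holdout_inhibitors={"crizotinib"})
assert all(key[0] != "crizotinib" for key in train)  # held-out inhibitor never trains
```

The key property, as in the manuscript, is that held-out inhibitors, positions, and amino acids never appear in the training set, so performance on them measures generalization rather than memorization.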

      As mentioned in the 'Strengths' section, one of the appealing aspects of this paper is indeed its potential wide applicability across kinases -- could you use this ML model to predict resistance mutants for an entirely different kinase? This doesn't seem far-fetched, and would be an extremely compelling addition to this paper to prove the value of this approach.

This is exactly where we want to go next! But as we see here, it is going to be hard and will require more purposeful selection of chemicals, and likely combinatorial mutations, to be maximally informative (see also the reviewer 2 response, where we have added text).

      Another area in which this paper could improve its clarity is in the description of caveats of the assay. The exact math used to define resistance mutants and its dependence on the DMSO control is interesting, it is worth discussing where the failure modes of this procedure might be. Could it be that the resistance mutants identified in this assay would differ significantly from those found in patients? That results here are consistent with those seen in the clinic is promising, but discrepancies could remain.

Thank you for pointing this out. The greatest trade-off of probing the intracellular MET kinase (juxtamembrane, kinase domain, c-tail) in the constitutively active TPR system is that while we gain cytoplasmic expression, constitutive oligomerization, and HGF-independent activation, other features like membrane-proximal effects are lost, and the translatability of some mutations in non-proliferative conditions may also be limited. Nevertheless, Ba/F3 allows IL-3 withdrawal to serve as an effective readout of transgenic kinase variant effects due to its undetectable expression of endogenous RTKs and addiction to exogenous interleukin-3 (IL-3).

In our previous study, we were also interested in comparing the phenotypic results to available patient populations in cBioPortal. We observed that our DMS captured known oncogenic MET kinase variants, in addition to a population of gain-of-function variants within clinical residue positions that have not been clinically reported. Interestingly, the population of possible novel gain-of-function mutant codons was more distant in genetic space (2-3 Hamming distance) from wild type than the clinically reported variant codons (1-2 Hamming distance).
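For readers unfamiliar with the metric, the codon Hamming distance used above counts the number of nucleotide substitutions separating two codons; a small illustrative sketch follows (the codons shown are arbitrary examples, not variants from the study):

```python
def hamming(codon_a: str, codon_b: str) -> int:
    """Number of nucleotide positions at which two codons differ."""
    assert len(codon_a) == len(codon_b) == 3
    return sum(a != b for a, b in zip(codon_a, codon_b))

# A single-nucleotide change (distance 1) is reachable by one mutation event,
# whereas distance 2-3 codons require multiple substitutions.
print(hamming("GAT", "AAT"))  # Asp -> Asn, one substitution: 1
print(hamming("GAT", "TGG"))  # Asp -> Trp, three substitutions: 3
```

Variants at Hamming distance 2-3 from the wild-type codon are thus less likely to arise clinically by a single mutational event, consistent with their absence from patient databases.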

For this inhibitor screen, we also carefully compared previously reported and validated resistance mutations across referenced publications to those of our inhibitor screen, and observed strong agreement, as noted in the text. While discrepancies could certainly remain, there is precedent for consistency.

      Furthermore a more in depth discussion of the MetdelEx14 results is warranted. For example, why is the DMSO signature in Figure 1 - supplement 4 so different from that of Figure 1?

In our previous study (Estevam et al., 2024), we more directly compared MET and METΔExon14, and while we observed several differences, especially at conserved regulatory motifs, the TPR expression system did not provide a robust differential. Therefore, we hypothesize that a membrane-bound context is likely necessary to obtain a differential that captures juxtamembrane regulatory effects for these two isoforms. For that reason, we did not place heavy emphasis on the differences between MET and METΔExon14 in this study. Nevertheless, we performed a parallel analysis of the METΔExon14 inhibitor DMS and provide all source and analyzed data in our GitHub repository (https://github.com/fraser-lab/MET_kinase_Inhibitor_DMS).

In our analysis of resistance, we used Rosace to score and compare the DMSO and inhibitor landscapes. We present the full distribution of raw scores for each condition in Figure 1. However, to visually highlight resistance mutations as a heatmap, we subtracted the raw DMSO score from each variant's score in each inhibitor condition, which makes the heatmaps in Figure 1 - supplement 4 appear more "blue."
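In code, this DMSO-subtraction step amounts to a per-variant difference of fitness scores. The sketch below is illustrative only: the variant names and scores are invented, and the sign convention (inhibitor minus DMSO, so that inhibitor-induced sensitivity reads negative/blue and resistance reads positive) is our assumption for illustration.

```python
def dmso_normalize(inhibitor_scores, dmso_scores):
    """Per-variant delta score: inhibitor condition minus DMSO baseline.
    Positive deltas flag candidate resistance; negative deltas, sensitivity."""
    return {variant: score - dmso_scores[variant]
            for variant, score in inhibitor_scores.items()}

# invented example scores (not data from the paper)
dmso = {"D1228N": 0.4, "Y1230C": 0.3, "L1062P": -1.5}
crizotinib = {"D1228N": 1.2, "Y1230C": 1.1, "L1062P": -1.7}
delta = dmso_normalize(crizotinib, dmso)
# D1228N and Y1230C show positive deltas (resistance-like),
# while L1062P drifts slightly negative (sensitive)
```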

      And finally, there is a lot of emphasis put on the unexpected results of this assay for the tivantinib "type III" inhibitor - could this in fact be because the molecule "is highly selective for the inactive or unphosphorylated form of c-Met" according to Eathiraj et al JBC 2011?

      The work presented by Eathiraj et al JBC 2011 is a key study we reference and is foundational to tivantinib. While the point brought up about tivantinib’s selective preference for an inactive conformation is valid, this is also true for type II kinase inhibitors. In our study, regardless of inhibitor conformational preference, tivantinib was the only one with a nearly identical landscape to DMSO and exhibited selection even in the absence of Ba/F3 MET-addiction (Figure 1E). This result is in closer agreement with MET agnostic behavior reported by Basilico et al., 2013 and Katayama et al., 2013.

      While this paper is crisply written with beautiful figures, the complexity of the data warrants a bit more clarity in how the results are visualized. Namely, clearly highlighting mutants that have previously reported and those identified by this study across all figures could help significantly in understanding the more novel findings of the work.

To better compare and contrast novel mutations identified in this study with others, we compiled a list of reported resistance mutations from recent clinical and experimental studies (Pecci et al., 2024; Yao et al., 2023; Bahcall et al., 2022; Recondo et al., 2020; Rotow et al., 2020; Fujino et al., 2019), since, to the best of our knowledge, a direct database with resistance annotations does not exist for MET. In total, this amounted to 31 annotated resistance mutations across crizotinib, capmatinib, tepotinib, savolitinib, cabozantinib, merestinib, and glesatinib, which we have now tabulated in a new figure (Figure 4) with commentary in the main text:

To assess the agreement between our DMS and previously annotated resistance mutations, we compiled a list of reported resistance mutations from recent clinical and experimental studies (Pecci et al., 2024; Yao et al., 2023; Bahcall et al., 2022; Recondo et al., 2020; Rotow et al., 2020; Fujino et al., 2019) (Figure 4A,B). Overall, previously discovered mutations are strongly shifted to a GOF distribution for the drugs where resistance is reported from treatment or experiment; in contrast, the distribution is centered around neutral at those sites for other drugs not reported in the literature (Figure 4C). However, even in cases such as L1195V, we observe GOF DMS scores indicative of resistance to previously reported inhibitors. Given this overall strong concordance with prior literature and clinical results, we can also provide hypotheses to clarify the role of mutations that are observed in combination with others. For example, H1094Y is a reported driver mutation that has been linked to resistance in METΔEx14 for glesatinib, either with the secondary L1195V mutation or in isolation (Recondo et al., 2020). However, in our assay H1094Y demonstrated slight sensitivity to glesatinib, suggesting that resistance is linked either to the exon 14 deletion isoform, the L1195V mutation, or a cellular factor not modeled well by the Ba/F3 system.

      Finally, the potential impacts and follow-ups of this excellent study could be communicated better - it is recommended that they advertise better this paper as a resource for the community both as a dataset and as a proof of concept. In this realm I would encourage the authors to emphasize the multiple potential uses of this dataset by others to provide answers and insights on a variety of problems.

Please see below.

      Related to this, the decision to include the MetdelEx14 results, but not discuss them at all is interesting, do the authors expect future analyses to lead to useful insights? Is it surprising that trends are broadly the same to the data discussed?

Our previous paper suggests that Ba/F3 is not a great model for measuring the differences between MET and METΔEx14, so we have not emphasized these differences beyond pointing to our previous paper. We nonetheless include the full analysis here as a resource. The greatest differences in resistance mutant behavior would potentially be observed in the full-length, membrane-bound MET and METΔEx14 receptor isoforms. While outside the scope of this study, there is great potential to use the resistance mutations identified here as a filtered group to test and map differential inhibitor sensitivities between receptor isoforms.

      And finally it could be valuable to have a small addition of introspection from the authors on how this approach could be altered and/or improved in the future to facilitate the general application of this approach for combination therapies for other targets.

      See also reviewer 2 response where we have added text.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Major points of revision:

      (1) It seems like much of the structural interpretation of the inhibitor binding mode, outside of crizotinib binding, appears to come from docked models of the inhibitor to the MET kinase domain. Given the potential variability of the docked structure to the kinase domain, it would be useful for the authors to consider alternative possible binding modes that their docking pipeline may have suggested. It could also be useful to provide some degree of validation or contextualization of their docking models.

All individual figures were very carefully inspected based on existing crystal structures of either the inhibitor or closely related inhibitors (ATP, 3DKC; crizotinib, 2WGJ; tepotinib, 4R1V; tivantinib, 3RHK; AMG-458, 5T3Q; NVP-BVU972, 3QTI; merestinib, 4EEV; savolitinib, 6SDE). In total, four structural interpretations were the result of docking onto reference experimental structures (capmatinib, cabozantinib, glumetinib, glesatinib). As we wrote above, different conformations and binding modes are possible in predicted mutant structures (which we generated here at scale) and are already included in the ML analysis.

      (2) In the first section, the authors classify an inhibitor as Type Ia on docking models, but mention the conflicting literature describing it as type Ib - it would be helpful to provide a contextualization of why this distinction between Ia and Ib matters, and what difference it might make. It would also be useful to know if their docking score only suggested poses compatible with Ia or if other poses were provided as well. Validation using other method might be beneficial, especially since they acknowledge the conflicting literature for classification. Or at least recontextualization that more evidence would be needed.

Kinase inhibitors have several canonical structural definitions on which we base the classifications in this study. Specifically, type I inhibitors are classified in MET by interactions with Y1230, D1228, and K1110, in addition to their conformation in the ATP-binding site. A type I inhibitor is further subdivided as type Ia in MET if it leverages interactions with the solvent front and residue G1163. In the prior literature referenced, tepotinib was classified as type Ib, which would imply it does not have solvent front interactions, like savolitinib (PDB 6SDE) or NVP-BVU972 (PDB 3QTI). However, in the tepotinib experimental structure (PDB 4R1V), we observed a greater structural resemblance to other type Ia inhibitors as opposed to type Ib (Figure 1 - figure supplement 1b).

      (3) The measure used to discuss resistance and sensitivity is ultimately a resistance score derived from the increase or decrease of the presence of a variant during cell growth. This is not a measure of direct binding. It would be helpful if the authors discussed alternative mechanisms through which these variants may impact resistance and/or sensitivity, such as stability, protonation effects, or kinase activity. The score itself may be convolving over all these potential mechanisms to drive GOF and LOF observed behavior.

      See the response to the public review. Indeed, our ML framework explicitly included conformational and stability effects as significant in improving predictions.

      (4) While it is promising to try and improve the predictive properties of ESM1b, it is not exactly clear why the authors considered their structural data of 11 inhibitors a sufficient dataset with which to augment the model. It would be useful for the authors to provide some additional context for why they wished to augment ESM1b in particular with their dataset, and provide any metrics indicating that their training data of 11 inhibitors provided an adequate statistical sample.

      We don’t understand what this means. Sorry!

      (5) The authors use ESM-1b to predict the fitness impact of each mutation and augment it using protein structural data of drug-target interactions. However, using an XGBoost regressor on a single set of 11 kinase-inhibitor interaction pairs is an incredibly sparse dataset to train upon. It would be useful for the authors to consider the limitations of their model, as well as its extensibility in the context of alternate binding poses, alternate conformations, or changes in protonation states of ligand or inhibitor.

      On the contrary - this is 11 chemicals across 3000 mutations. We have discussed alternative interpretations above.

      Minor points:

      (1) It would also be useful for the authors to provide more context around their choice of regressor. XGBoost is a powerful regressor but can easily overfit high dimensional data when paired with language models such as ESM-1b. This would be particularly useful since some of the features to train on were also generated using existing models such as ThermoMPNN.

      Yes - we are quite concerned about overfitting and have tried to assess overfitting by careful design of test and validation sets.

      (2) The authors also mention excluding their DMSO and AMG458 scores in the model training and testing due to overfitting issues - it would be useful to have an SI figure pointing to this data.

      No - we exclude the DMSO because that is the reference (baseline) and AMG because it has a different binding mode. This isn’t related to overfitting.

      (3) The authors mention in their docking pipeline that 5 binding modes were used for each ligand docking, but it appears that only one binding mode is considered in the main figures. It would be useful for the authors to provide additional details about what were the other binding modes used for, how different were each binding mode, and how was the "primary" mode selected (and how much better was its score than the others).

The reviewer misinterprets the difference between the poses shown in figures, which are based mostly on crystal structures or carefully selected templates, and the use of docked models in feature engineering for the ML part of the study. Where experimental structures do not exist, we performed docking of capmatinib, cabozantinib, glumetinib, and glesatinib onto reference structures bound to type I (2WGJ) and type II (4EEV) inhibitors. We selected one representative binding mode based on the reference inhibitor, and while not exact, at a minimum these models provide a basis for structural interpretation.

      Reviewer #2 (Recommendations for the authors):

      My main suggestion is for the authors to add a few sentences (in non-technical language) to the results section, specifically before the results shown in Figure 3, defining gain-of-function, loss-of-function, resistance, and sensitivity. While these definitions are present in the materials and methods section, explicitly discussing them prior to the relevant results would significantly improve the overall readability of the manuscript.

We defined “gain-of-function” and “loss-of-function” mutations as those with fitness scores statistically greater or lower than wild-type. Within the DMSO condition, gain-of-function and loss-of-function labels describe mutational perturbation of protein function, whereas within inhibitor conditions, the labels describe the difference in fitness introduced by an inhibitor.

      We have also clarified these definitions where the terms are first introduced: “As expected, the DMSO control population displayed a bimodal distribution with mutations exhibiting wild-type fitness centered around 0, with a wider distribution of mutations that exhibited loss- or gain-of-function effects, as defined by fitness scores with statistically significant lower or greater scores than wild-type, respectively.”
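The definition above can be made concrete with a small sketch. The z-score cutoff, the classification function, and the numbers are illustrative assumptions only, not the statistical procedure actually used in the manuscript (Rosace-based scoring against the wild-type-synonymous distribution):

```python
from statistics import mean, stdev

def classify(score, wt_synonymous_scores, z=2.0):
    """Label a variant GOF/LOF/neutral relative to the distribution of
    wild-type-synonymous fitness scores. The z-score cutoff is illustrative,
    not the test used in the paper."""
    mu = mean(wt_synonymous_scores)
    sd = stdev(wt_synonymous_scores)
    if score > mu + z * sd:
        return "GOF"
    if score < mu - z * sd:
        return "LOF"
    return "neutral"

# invented wild-type-synonymous scores centered around 0
wt = [0.02, -0.01, 0.00, 0.03, -0.02]
print(classify(1.5, wt))   # far above the WT distribution -> GOF
print(classify(-2.0, wt))  # far below the WT distribution -> LOF
```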

      Figure 7D. Please add a bit more detail to the legend on how fold change (y-axis) was calculated.

Here, fold change represents the number of viable cells at each inhibitor concentration relative to the TKI-free control, measured with the CellTiter-Glo® Luminescent Cell Viability Assay (Promega) as an end-point readout. We have updated the legend of Figure 7D with calculation details: “Dose-response for each inhibitor concentration is represented as the fraction of viable cells relative to the TKI free control.”
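For clarity, the fold-change calculation described here is simply a normalization to the TKI-free control. The sketch below illustrates this; the cell-viability readouts and doses are invented for illustration, not data from the assay:

```python
def fold_change(viable, control):
    """Fraction of viable cells at a given dose relative to the TKI-free control."""
    return viable / control

# hypothetical CellTiter-Glo readouts (arbitrary units); dose 0 nM = TKI-free control
readouts = {0: 1.0e6, 10: 9.0e5, 100: 4.0e5, 1000: 5.0e4}
control = readouts[0]
curve = {dose: fold_change(signal, control) for dose, signal in readouts.items()}
# curve maps each dose to the fraction of viable cells (1.0 at dose 0)
```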

      I must admit, I did not understand what "Specific inhibitor fitness landscapes also aid in identifying mutations with potential drug sensitivity, such as R1086 and C1091 in the MET P-loop" means. These are positions where most mutations lead to greater sensitivity to crizotinib. Is the idea that there are potentially clinically-relevant MET mutations that can be targeted over wild type with crizotinib?

Thank you for highlighting this! The P-loop (phosphate-binding loop) is a glycine-rich structural motif conserved in kinase domains. This motif is located in the N-lobe, where its primary role is to gate ATP entry into the active site and stabilize the phosphate groups of ATP when bound. The P-loop is therefore a common target region for ATP-competitive inhibitor design, but also a site where resistance can emerge (Roumiantsev et al., 2002). The idea we would like to convey is that identifying residues that offer the potential for drug stabilization, with the added benefit of a lower risk of resistance, is an attractive consideration for novel inhibitor design.

      We have added to the text: “Individual inhibitor resistance landscapes also aid in identifying target residues for novel drug design by providing insights into mutability and known resistance cases. This enables the selection of vectors for chemical elaboration with potential lower risk of resistance development. Sites with mutational profiles such as R1086 and C1091, located in the common drug target P-loop of MET, could be likely candidates for crizotinib.”

      Reviewer #3 (Recommendations for the authors):

      (1) Suggested Improvements to the Figures:

      a)  Figure 4A - T1261 seems to be mislabeled

      b)  In Figure 3A it's suggested to highlight mutants determined to be resistance mutants by this scheme.

      c)  In Figure 3D it would be informative to highlight which of these resistance mutants have already been previously reported and which are novel to this study

      d)  Throughout figures 3A, 3D, and 4G the graphical choices on how to highlight synonymous mutations and mutations not performed in the assay needs improvement.

The green vs. grey 'TRUE' vs. 'FALSE' boxes are confusing. Just a green box indicating synonymous mutations would be sufficient. Additionally, these green boxes are hard to see, and the edges of the green box are often missing, making it even more difficult to see and interpret.

* In Figure 4A, mutants seem to be indicated by a line or plus sign, but this is not explained in the legend or the caption. Please add.

      * In 3D and 4G it is not clear if the mutants not performed are indicated at all - perhaps they are indicated in white, making them indistinguishable from scores with 0. Please clarify.

      T1261 and G1242 are now correctly labeled.

      In text we have also highlighted reported resistance mutations for crizotinib, which are inclusive of clinical reports and in vitro characterization: “These sites, and many of the individual mutations, have been noted in prior reports, such as: D1228N/H/V/Y, Y1230C/H/N/S, G1163R.”

We have adjusted the heatmaps to improve visual clarity. Mutations with a score of 0 are white, as indicated by the scale bar, and mutations not captured by the screen are now in light yellow. The green outline distinguishing WT synonymous mutations has also been adjusted so that edges are no longer cut off. In our representations, we only distinguish mutations by the score color scale and the WT outline. What looked like a “plus” or “line” in the original figure was only the heatmap background, which should now be resolved in the updated figure and legends for Figure 3 and Figure 4.

      (2) Some Minor Suggested Improvements to the Text:

      a)  The abbreviation CBL for 'CBL docking site' is used without being defined.

      b)  Figure 3G is referenced, but it does not exist.

      c)  In the sentence 'Beyond these well characterized sites, regions with sensitivity occurred throughout the kinase, primarily in loop-regions which have the greatest mutational tolerance in DMSO, but do not provide a growth advantage in the presence of an inhibitor (Figure 1 - Figure Supplement 1; Figure 1 - Figure Supplement 2).'. It is not clear why these supplemental figures are being referenced.

      d)  In the supplement section 'Enrich2 Scoring' has what seem like placeholders for citations in [brackets]

Cbl is an E3 ubiquitin ligase that plays a role in MET regulation through engagement with exon 14, specifically at Y1003 when phosphorylated. This mode of regulation was highlighted in more detail in our previous study. However, since Cbl was only mentioned briefly in this study, we have removed the reference to it to simplify the text.

      In addition, we have removed the Figure 3G reference and corrected the in-text range. We have also removed references to figure supplements where unnecessary and edited the “Enrich2 scoring” methods section to include the previously missing citations.

    1. If the battery ever did need to be replaced, it would run between $2,200 and $2,600 from a Toyota dealer, but it's doubtful that anyone would purchase a new battery for such an old car. Most will probably choose to buy a low-mileage unit from a salvage yard, just as they would with an engine or transmission. We found many units available for around $500.

      battery tips,

    1. Categorizing anything is tricky. On one hand, we strive to simplify our world with models, categories, and taxonomies. On the other hand, simplification limits and potentially undermines the essential concepts we strive to better understand.

      This sentence shows that categorizing change methods can help simplify them, but it can also overlook critical aspects. Categorizing is just the first step toward truly understanding the methods; we need to look at them in greater depth and context.

    1. Make an effort to forgive mistakes.

      I feel this is very important. It's easy to disregard someone's work just because they made a mistake. It's important, though, to remember that all humans make mistakes, even simple ones. The right thing to do would be to politely correct them but still acknowledge their work or message.

  11. inst-fs-iad-prod.inscloudgate.net
    1. Transition to this new understanding is typically precipitated by an event or series of events that force the young person to acknowledge the personal impact of racism.

      Often, a major event or even a series of smaller ones pushes young people to see how racism personally affects them. These moments aren’t just eye opening; they change how teens view themselves and their role in the world. For Black adolescents, it’s a tough but transformative step in making sense of their racial identity.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Reviewer #1 (Public review): 

      Summary: 

      This manuscript reports the substrate-bound structure of SiaQM from F. nucleatum, which is the membrane component of a Neu5Ac-specific Tripartite ATP-independent Periplasmic (TRAP) transporter. Until recently, there was no experimentally derived structural information regarding the membrane components of TRAP transporters, limiting our understanding of the transport mechanism. Since 2022, there have been 3 different studies reporting the structures of the membrane components of Neu5Ac-specific TRAP transporters. While it was possible to narrow down the binding site location by comparing the structures to proteins of the same fold, a structure with substrate bound has been missing. In this work, the authors report the Na+-bound state and the Na+ plus Neu5Ac state of FnSiaQM, revealing information regarding substrate coordination. In previous studies, 2 Na+ ion sites were identified. Here, the authors also tentatively assign a 3rd Na+ site. The authors reconstitute the transporter to assess the effects of mutating the binding site residues they identified in their structures. Of the 2 positions tested, only one appears to be critical to substrate binding.

      Strengths: 

      The main strength of this work is the capture of the substrate bound state of SiaQM, which provides insight into an important part of the transport cycle.

      Weaknesses: 

      The main weakness is the lack of experimental validation of the structural findings. The authors identified the Neu5Ac binding site, but only test 2 residues for their involvement in substrate interactions, which is quite limited. However, comparison with previous mutagenesis studies on homologues supports the location of the Neu5Ac binding site. The authors tentatively identified a 3rd Na+ binding site, which if true would be an impactful finding, but this site was not sufficiently experimentally tested for its contribution to Na+ dependent transport. This lack of experimental validation prevents the authors from unequivocally assigning this site as a Na+ binding site. However, the reporting of these new data is important as it will facilitate follow-up studies by the authors or other researchers.

      Comments on revisions: 

      Overall, the authors have done a good job of addressing the reviewers' comments. It's good to know that the authors are working on the characterisation of the potential metal binding site mutants - characterizing just a few of these will provide much-needed experimental support for this potential Na+ site. 

      The new MD simulations provide additional support for the new Na+ site and could be included.

      However, as the authors know, direct experimental characterisation of mutants is the ideal evidence of the Na+ site.

      Aside from the characterisation of mutants, which seems to be held up by technical issues, the only remaining issue is the comparison of the Na+- and Na+/Neu5Ac-bound states with ASCT2. It still does not make sense to me why the authors are not directly comparing their Na+-only and Na+/Neu5Ac states with the structures of VcINDY in the Na+-only and Na+/succinate-bound states. These VcINDY structures also revealed no conformational changes in the HP loops upon binding succinate, as the authors see for SiaQM. Therefore, this comparison is very supportive. It is understood that the similarity to the DASS structure is mentioned on p.17, but it is also interesting and useful to note that TRAP and DASS transporters also share a lack of substrate-induced local conformational changes, to the extent these things have been measured.

      We acknowledge the point in the summary of weaknesses that experimental data supporting the third Na+ binding site are critical.

      Based on the reviewer’s suggestion, we added the following to the main text (page 13), along with a supplementary figure comparing the Na+ ion binding sites between VcINDY and SiaQM.

      “These two sodium ion binding sites are also conserved in the structure of VcINDY (Supplementary Figure 7) (Sauer et al., 2022). In both cases, the sodium ions are bound at the helix-loop-helix ends of HP1 and HP2. The binding sites utilize both side chains and main chain carbonyl groups. The number of main chain carbonyl interactions suggests that they are critical, and using main chain rather than side chain interactions minimizes the likelihood of point mutations affecting the binding.”

      Reviewer #3 (Public review): 

      The manuscript by Goyal et al report substrate-bound and substrate-free structures of a tripartite ATP independent periplasmic (TRAP) transporter from a previously uncharacterized homolog, F. nucleatum. This is one of most mechanistically fascinating transporter families, by means of its QM domain (the domain reported in his manuscript) operating as a monomeric 'elevator', and its P domain functioning as a substrate-binding 'operator' that is required to deliver the substrate to the QM domain; together, this is termed an 'elevator with an operator' mechanism.

      Remarkably, previous structures had not demonstrated the substrate Neu5Ac bound. In addition, they confirm the previously reported Na+ binding sites, and report a new metal binding site in the transporter, which seems to be mechanistically relevant. Finally, they mutate the substrate binding site and use proteoliposomal uptake assays to show the mechanistic relevance of the proposed substrate binding residues.

      Strengths: 

      The structures are of good quality, the presentation of the structural data has improved, the functional data is robust, the text is well-written, and the authors are appropriately careful with their interpretations. Determination of a substrate bound structure is an important achievement and fills an important gap in the 'elevator with an operator' mechanism.

      Weaknesses: 

      Although the possibility of the third metal site is compelling, I do not feel it is appropriate to model in a publicly deposited PDB structure without directly confirming experimentally. The authors do not extensively test the binding sites due to technical limitations of producing relevant mutants; however, their model is consistent with genetic assays of previously characterized orthologs, which will be of benefit to the field. Finally, some clarifications of EM processing would be useful to readers, and it would be nice to have a figure visualizing the unmodeled lipid densities - this would be important to contextualize to their proposed mechanism.

      Reviewer #3 (Recommendations for the authors): 

      I appreciate the authors' responses to our critiques; the revised manuscript is much improved and has addressed most of my concerns. I look forward to seeing their follow-up experiments testing mutational effects. I think MD simulations of ion-binding sites on their own are supportive but by themselves not sufficient to prove the existence of a functional Na+-binding site. Some clarifications in the methods/supplements would satisfy my concerns about data processing and analysis.

      - Unliganded map: were the 141,272 particles used for one class of ab initio? This is unusual, usually multiple ab initio classes are used to further eliminate junk particles. The authors themselves use 6 classes for the substrate-bound dataset.

      We classified the particles into multiple 3-D classes.  There was no improvement in statistics or maps on splitting these further.  Hence, we did not pursue that further. 

      - Substrate-bound map: how did the four 'identical' classes independently refine? Are similar Na+/substate densities found in each separate class?

      The other classes refined to worse than 4.5 Å resolution. We stopped characterizing them past that point. We were hoping to see multiple conformations that are different – and hopefully a class where only two sodium ions could be bound. However, any interpretation at 4.5 Å would be unreliable.

      - Both maps: all ab initio classes prior to final refinement should be displayed in the supplementary workflow, this is common for EM processing diagrams.

      We agree it is common – however, unless there is a good reason to discuss the other classes, we are not convinced of the value of crowding the figures.

      - What specific refinement package and version of Phenix are the authors using? It seems unusual that it is not possible to refine without a metal in Phenix real-space refinement, I have seen many structures where there is no issue refining without critical ions/waters. The authors should double check that they are using the appropriate scattering table for cryo-EM, which should be "electron".

      Sorry for the confusion – we did not mean to say we cannot refine without a metal. If we want to add something to the density, we cannot refine it without designating it as a metal or solvent. The site without anything added will refine without any issues, but in the absence of additional verification, we cannot be sure of the identity of the ions. We are confident of the metal binding site – but not confident of the exact metal bound. We used sodium as our first hypothesis.

      We don’t think the scattering factors will help in the identification of the ions. Servalcat as part of CCP-EM can produce difference maps, and we believe that identification of ions will require higher resolution (<2.5 Å); at the current resolution, we can say that there is a non-protein density but not more than that. We were using “electron” (which we believe is the default with phenix.real_space_refine). The refinement was performed using standard protocols and appropriate scattering factors (Phenix version 1.19x), and we have previously used similar refinement protocols for other maps/models (for example, Vinothkumar KR, Arya CK, Ramanathan G, Subramanian R. 2021. Comparison of CryoEM and X-ray structures of dimethylformamidase. Progress in Biophysics and Molecular Biology, CryoEM microscopy developments and their biological applications 160:66–78. doi:10.1016/j.pbiomolbio.2020.06.008).

      To convince the reviewer of the quality of the maps, we have added figures that show the model-to-map fit of all of the main secondary structural elements in both the unliganded and the Neu5Ac bound forms.

      - I certainly understand the authors' reluctance to not model the entirety of protein densities; however, I think it would be useful to highlight these densities in the global context of the protein. A common way to show this is to show the density proximal to protein chains in one color, and the remaining densities in a contrasting color (Figure 1 somewhat demonstrates this but it is difficult to tell). I think this would be a nice figure to show the presence and location of unmodeled densities.

      We have modified supplementary figure 3 to include unmodelled densities in panels G and H for both structures.

      - Small detail, "uniform" is misspelled as "unifrom" in supplementary Figure 3. 

      Thank you.  Corrected.

    1. The lack of guidance and training is of particular concern, experts say, because AI will soon be everywhere.

      As previously stated, AI is being accepted into society more and more every day. However, a lot of people are using AI simply because it's a convenient tool to make our lives easier, not because they understand the technology and want to use it to its full potential. This kind of technology has to be used and regarded in a way that creates a balance between using AI for its intended purpose and maintaining the integrity of the work we produce.

    2. The tension surrounding generative AI in education shows no signs of going away.

      This is something to be increasingly aware of. It's true that AI and its use will continue to grow as the technology is further developed. It's also evident that no matter who opposes AI, it is being accepted more and more into our everyday lives.

    3. “Nobody likes talking about short-term operational thinking. It’s just not an exciting thing to do,” he says. “People would rather talk about the big picture and how the world is going to change than the nuts and bolts of how to operate every day.”

      I didn't really think about this. There's been big talk about AI being the future, but not really about how we're going to get to that point or how it may affect us on the deepest level.

    4. Even AI-savvy professors, he has noticed, share this feeling of being untethered from truth when they read students’ writing. “These intense feelings come and go. You feel like you’ve got it all figured out and got a plan. Then you read something and it throws you off again.”

      I have seriously never read anything AI made, but I imagine this is what it's like. There are scraps of something good, maybe even handmade, but there's also just a lot of fluff that sounds like nothing.

    5. “I do feel a little worried about the future of our students and their training and skills moving forward. But I can’t individually do anything to shut that down or change it. It’s more providing a good example for them. It’s about being a moral compass and showing them that good direction,” he says. “If they choose to abuse it, I kind of believe in karma. ... The world will catch up to them and check them in ways that will damage their career trajectories.”

      If you think about generative AI as a cheat at creative thinking, then I think this is an overall passive takeaway. Cheating, while effective, will get you nowhere in the long run. You're only really hurting yourself if you give up all of your creative integrity.

    6. AI tools can now listen to and summarize a lecture, as well as read and summarize long academic articles.

      I think some people forget that writing isn't the only thing that can be done by AI. It's possible for even studying to be quick and easy too, further taking control out of our hands.

    7. A few didn’t know they had used generative AI because it’s embedded in so many other tools, like Grammarly.

      It's so common for me to think that most autocorrect software uses AI that I find it surprising that people don't know. I remember most advertising Grammarly runs nowadays says it is powered by AI. I also just think it's interesting that a mistake like this is so easy to make, especially now.

    8. He notes that some textbook publishers, like Macmillan, are already embedding AI tutors into their learning platforms, so why shouldn’t students take advantage of the tools at their disposal?

      This is an interesting point and I agree with it, but it's more about the way that we are taking advantage of it that is the real problem. How we apply Ai can be a big issue or a great benefit to our intellectual learning.

    9. A STEM professor may encourage students to use AI to polish their writing if that helps them better articulate scientific concepts. But a humanities professor might balk, since clear and coherent writing is central to mastering the discipline.

      There are always going to be differences in opinions. It's just going to come down to the subject and what the actual need for AI would be in the classroom, then enforcing those rules to keep people from stepping over the line, to say it simply.

    10. He liked the idea that the tool could help students with hidden disabilities or those who struggle with English as a second language. “I thought at the time this would be great,” he recalls.

      I agree with this completely. There are definitely huge benefits as to what AI can do for us as students. It's just applying it in the right ways.

    11. It’s her job to ensure students develop basic writing skills, and the noticeable uptick in AI use is impeding those efforts.

      People want the easy way out of most things. AI plays a big part in that.

    12. “I’m grading fake papers instead of playing with my own kids.”

      It's sad that this is a reality. We would be just as upset if teachers used AI to grade our assignments, so why do we think it's okay to use AI to do the assignments?

    13. A few didn’t know they had used generative AI because it’s embedded in so many other tools, like Grammarly.

      This is so interesting to me, because AI is becoming so known and such a casual thing in the world that there are times people don't even realize its being used, which can be scary moving forward.

    1. Texting and e-mail and posting let us present the self we want to be. This means we can edit. And if we wish to, we can delete. Or retouch: the voice, the flesh, the face, the body. Not too much, not too little -- just right.

      It's interesting that this sentiment is regarded as something unique to digital interaction. I would think that many people want to look right, speak right, act right, and curse under their breath when they flub the wording of a witty quip. Maybe, though, it's that this attitude about oneself is more common these days, since online we can literally change what we said, feeding that anxious instinct.

    1. What comes to mind? Sights, sounds, and scents? Something special that happened the last time you were there? Do you contemplate joining them? Do you start to work out a plan of getting from your present location to the restaurant? Do you send your friends a text asking if they want company? Until the moment when you hit the “send” button, you are communicating with yourself.

      When I think about meeting friends somewhere, I picture a scene I can almost smell and recall past memories. When I think about the weather, I consider whether I should go, how I would get there and what to say before I even text them. It’s all just my internal voice figuring things out.

    1. Historical study, in sum, is crucial to the promotion of that elusive creature, the well-informed citizen

      It's interesting to look at the concept of the well-informed citizen in the current day. I do agree that understanding history can make someone more well-informed on certain issues. When looking at our modern media landscape, there are a couple of obstacles to gathering information. With things like mis/disinformation spreading through social media, censorship in school curricula, and paywalls on news sites, obtaining factual information appears challenging. I just thought it'd be interesting to highlight this overlap here.

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This study offers a useful treatment of how the population of excitatory and inhibitory neurons integrates principles of energy efficiency in their coding strategies. The analysis provides a comprehensive characterisation of the model, highlighting the structured connectivity between excitatory and inhibitory neurons. However, the manuscript provides an incomplete motivation for parameter choices. Furthermore, the work is insufficiently contextualized within the literature, and some of the findings appear overlapping and incremental given previous work.

      We are genuinely grateful to the Editors and Reviewers for taking time to provide extremely valuable suggestions and comments, which will help us to substantially improve our paper. We decided to do our very best to implement all suggestions, as detailed in the point-by-point rebuttal letter below. We feel that our paper has improved considerably as a result. 

      Public Reviews:

      Reviewer #1 (Public Review): 

      Summary: Koren et al. derive and analyse a spiking network model optimised to represent external signals using the minimum number of spikes. Unlike most prior work using a similar setup, the network includes separate populations of excitatory and inhibitory neurons. The authors show that the optimised connectivity has a like-to-like structure, leading to the experimentally observed phenomenon of feature competition. They also characterise the impact of various (hyper)parameters, such as adaptation timescale, ratio of excitatory to inhibitory cells, regularisation strength, and background current. These results add useful biological realism to a particular model of efficient coding. However, not all claims seem fully supported by the evidence. Specifically, several biological features, such as the ratio of excitatory to inhibitory neurons, which the authors claim to explain through efficient coding, might be contingent on arbitrary modelling choices. In addition, earlier work has already established the importance of structured connectivity for feature competition. A clearer presentation of modelling choices, limitations, and prior work could improve the manuscript.

      Thanks for these insights and for this summary of our work.  

      Major comments:

      (1) Much is made of the 4:1 ratio between excitatory and inhibitory neurons, which the authors claim to explain through efficient coding. I see two issues with this conclusion: (i) The 4:1 ratio is specific to rodents; humans have an approximate 2:1 ratio (see Fang & Xia et al., Science 2022 and references therein); (ii) the optimal ratio in the model depends on a seemingly arbitrary choice of hyperparameters, particularly the weighting of encoding error versus metabolic cost. This second concern applies to several other results, including the strength of inhibitory versus excitatory synapses. While the model can, therefore, be made consistent with biological data, this requires auxiliary assumptions.

      We now describe better the ratio of numbers of E and I neurons found in real data, as suggested. The first submission already contained an analysis of how the optimal ratio of E vs I neuron numbers depends in our model on the relative weighting of the loss of E and I neurons and on the relative weighting of the encoding error vs the metabolic cost in the loss function (see Fig. 7E). We revised the text on page 12 describing Fig. 7E. 

      To allow readers to form easily a clear idea of how the weighting of the error vs the cost may influence the optimal network configuration, we now present how optimal parameters depend on the weighting in a systematic way, by always including this type of analysis when studying all other model parameters (time constants of single E and I neurons, noise intensity, metabolic constant, ratio of mean I-I to E-I connectivity). These results are shown on the Supplementary Fig. S4 A-D and H, and we comment briefly on each of them in Results sections (pages 9, 10, 11 and 12) that analyze each of these parameters.  

      Following this Reviewer’s comment, we now include a joint analysis of network performance relative to the ratio of E-I neuron numbers and the ratio of mean I-I to E-I connectivity (Fig. 7J). We found a positive correlation between the optimal values of these two ratios. This implies that a lower ratio of E-I neuron numbers, such as the 2:1 ratio in human cortex mentioned by the reviewer, predicts a lower optimal ratio of I-I to E-I connectivity and thus weaker inhibition in the network. We made sure that this finding is suitably described in the revision (page 13).

      (2) A growing body of evidence supports the importance of structured E-I and I-E connectivity for feature selectivity and response to perturbations. For example, this is a major conclusion from the Oldenburg paper (reference 62 in the manuscript), which includes extensive modelling work. Similar conclusions can be found in work from Znamenskiy and colleagues (experiments and spiking network model; bioRxiv 2018, Neuron 2023 (ref. 82)), Sadeh & Clopath (rate network; eLife, 2020), and Mackwood et al. (rate network with plasticity; eLife, 2021). The current manuscript adds to this evidence by showing that (a particular implementation of) efficient coding in spiking networks leads to structured connectivity. The fact that this structured connectivity then explains perturbation responses is, in the light of earlier findings, not new.

      We agree that the main contribution of our manuscript in this respect is to show how efficient coding in spiking networks can lead to structured connectivity implementing lateral inhibition similar to that proposed in the recent studies mentioned by the Reviewer. We apologize if this was not clear enough in the previous version. We streamlined the presentation to make it clearer in revision.  We nevertheless think it useful to report the effects of perturbations within this network because these results give information about how lateral inhibition works in our network. Thus, we kept presenting it in the revised version, although we de-emphasized and simplified its presentation. We now give more emphasis to the novelty of the derivation of this connectivity rule from the principles of efficient coding (pages 4 and 6). We also describe better (page 8) what the specific results of our simulated perturbation experiments add to the existing literature.

      (3) The model's limitations are hard to discern, being relegated to the manuscript's last and rather equivocal paragraph. For instance, the lack of recurrent excitation, crucial in neural dynamics and computation, likely influences the results: neuronal time constants must be as large as the target readout (Figure 4), presumably because the network cannot integrate the signal without recurrent excitation. However, this and other results are not presented in tandem with relevant caveats.

      We improved the Limitations paragraph in Discussion, and also anticipated caveats in tandem with results when needed, as suggested. 

      We now mention the assumption of equal time constants between the targets and readouts in the Abstract. 

      We now added the analysis of the network performance and dynamics as a function of the time constant of the target (t<sub>x</sub>) to the Supplementary Fig S5 (C-E). These results are briefly discussed in text on page 13. The only measure sensitive to t<sub>x</sub> is the encoding error of E neurons, with a minimum at t<sub>x</sub> =9 ms, while I neurons and metabolic cost show no dependency. Firing rates, variability of spiking as well as the average and instantaneous balance show no dependency on t<sub>x</sub>. We note that t<sub>x</sub> = t, with t=1/l the time constant of the population readout (Eq. 9), is an assumption we use when we derive the model from the efficiency objective (Eq. 18 to 23). In our new and preliminary work (Koren, Emanuel, Panzeri, Biorxiv 2024), we derived a more general class of models where this assumption is relaxed, which gives a network with E-E connectivity that adapts to the time constant of the stimulus. Thus, the reviewer is correct in the intuition that the network requires E-E connectivity to better integrate target signals with a different time constant than the time constant of the membrane. We now better emphasize this limitation in Discussion (page 16).

      (4) On repeated occasions, results from the model are referred to as predictions claimed to match the data. A prediction is a statement about what will happen in the future – but most of the “predictions” from the model are actually findings that broadly match earlier experimental results, making them “postdictions”.

      This distinction is important: compared to postdictions, predictions are a much stronger test because they are falsifiable. This is especially relevant given (my impression) that key parameters of the model were tweaked to match the data.

      We now comment on every result from the model as either matching earlier experimental results, or being a prediction for experiments. 

      In Section “Assumptions and emergent properties of the efficient E-I network derived from first principles”, we report (page 4) that neural networks have connectivity structure that relates to tuning similarity of neurons (postdiction). 

      In Section “Encoding performance and neural dynamics in an optimally efficient E-I network” we report (page 5) that in a network with optimal parameters, I neurons have higher firing rate than E neurons (postdiction), that single neurons show temporally correlated synaptic currents (postdiction) and that the distribution of firing rates across neurons is log-normal (postdiction). 

      In Section “Competition across neurons with similar stimulus tuning emerging in efficient spiking networks” we report (page 6)  that the activity perturbation of E neurons induces lateral inhibition on other E neurons, and that the strength of lateral inhibition depends on tuning similarity (postdiction). We show that activity perturbation of E neurons induces lateral excitation in I neurons (prediction). We moreover show that the specific effects of the perturbation of neural activity rely on structured E-I-E connectivity (prediction for experiments, but similar result in Sadeh and Clopath, 2020). We show strong voltage correlations but weak spike-timing correlations in our network (prediction for experiments, but similar result in Boerlin et al. 2013). 

      In Section “The effect of structured connectivity on coding efficiency and neural dynamics”, we report (page 7) that our model predicts a number of differences between networks with structured and unstructured (random) connectivity. In particular, structured networks differ from unstructured ones by showing better encoding performance, lower metabolic cost, weaker variance over time in the membrane potential of each neuron, lower firing rates and weaker average and instantaneous balance of synaptic currents.

      In Section “Weak or no spike-triggered adaptation optimizes network efficiency”, we report (page 9) that our model predicts better encoding performance in networks with adaptation compared to facilitation. Our results suggest that adaptation should be stronger in E compared to I (PV+) neurons (postdiction). In the same section, we report (page 10) that our results suggest that the instantaneous balance is a better predictor of model efficiency than average balance (prediction).

      In Section “Non-specific currents regulate network coding properties”, we report (page 10) that our model predicts that more than half of the distance between the resting potential and firing threshold is taken by external currents that are unrelated to feedforward processing (postdiction). We also report (page 11) that our model predicts that moderate levels of uncorrelated (additive) noise are beneficial for efficiency (prediction for experiments, but similar results in Chalk et al. 2016, Koren et al. 2017, Timcheck et al. 2022).

      In Section “Optimal ratio of E-I neuron numbers and of mean I-I to E-I synaptic efficacy coincide with biophysical measurements”, we predict the optimal ratio of E to I neuron numbers to be 4:1 (postdiction) and the optimal ratio of mean I-I to E-I connectivity to be 3:1 (postdiction). Further, we report (page 13) that our results predict that a decrease in the ratio of E-I neuron numbers is accompanied by a decrease in the ratio of mean I-I to E-I connectivity. 

      Finally, in Section “Dependence of efficient coding and neural dynamics on the stimulus statistics”, we report (page 13) that our model predicts that the efficiency of the network has almost no dependence on the time scale of the stimulus (prediction). 

      Reviewer #2 (Public Review):

      Summary:

      In this work, the authors present a biologically plausible, efficient E-I spiking network model and study various aspects of the model and its relation to experimental observations. This includes a derivation of the network into two (E-I) populations, the study of single-neuron perturbations and lateral-inhibition, the study of the effects of adaptation and metabolic cost, and considerations of optimal parameters. From this, they conclude that their work puts forth a plausible implementation of efficient coding that matches several experimental findings, including feature-specific inhibition, tight instantaneous balance, a 4 to 1 ratio of excitatory to inhibitory neurons, and a 3 to 1 ratio of I-I to E-I connectivity strength. It thus argues that some of these observations may come as a direct consequence of efficient coding.

      Strengths:

      While many network implementations of efficient coding have been developed, such normative models are often abstract and lacking sufficient detail to compare directly to experiments. The intention of this work to produce a more plausible and efficient spiking model and compare it with experimental data is important and necessary in order to test these models.

      In rigorously deriving the model with real physical units, this work maps efficient spiking networks onto other more classical biophysical spiking neuron models. It also attempts to compare the model to recent single-neuron perturbation experiments, as well as some longstanding puzzles about neural circuits, such as the presence of separate excitatory and inhibitory neurons, the ratio of excitatory to inhibitory neurons, and E/I balance. One of the primary goals of this paper, to determine if these are merely biological constraints or come from some normative efficient coding objective, is also important.

      Though several of the observations have been reported and studied before (see below), this work arguably studies them in more depth, which could be useful for comparing more directly to experiments.

      Thanks for these insights and for the kind words of appreciation of the strengths of our work.  

      Weaknesses:

      Though the text of the paper may suggest otherwise, many of the modeling choices and observations found in the paper have been introduced in previous work on efficient spiking models, thereby making this work somewhat repetitive and incremental at times. This includes the derivation of the network into separate excitatory and inhibitory populations, discussion of physical units, comparison of voltage versus spike-timing correlations, and instantaneous E/I balance, all of which can be found in one of the first efficient spiking network papers (Boerlin et al. 2013), as well as in subsequent papers. Metabolic cost and slow adaptation currents were also presented in a previous study (Gutierrez & Deneve 2019). Though it is perfectly fine and reasonable to build upon these previous studies, the language of the text gives them insufficient credit.

      We indeed built our work on these important previous studies, and we apologize if this was not clear enough. We thus improved the text to make sure that credit to previous studies is more precisely and more clearly given (see detailed reply for the list of changes made). 

      To facilitate the understanding of how we built on previous work, we expanded the comparison of our results with those of Boerlin et al. (2013) on voltage correlations and uncorrelated spiking (page 7), the comparison with the derivation of physical units of Boerlin et al. (2013) (page 3), the discussion of how our results on the ratio of the number of E to I neurons relate to Calaim et al. (2022) and Barrett et al. (2016) (page 16), and the comment on the previous work by Gutierrez and Deneve about adaptation (page 8).  

      Furthermore, the paper makes several claims of optimality that are not convincing enough, as they are only verified by a limited parameter sweep of single parameters at a time, are unintuitive and may be in conflict with previous findings of efficient spiking networks. This includes the following. 

      Coding error (RMSE) has a minimum at intermediate metabolic cost (Figure 5B), despite the fact that intuitively, zero metabolic cost would indicate that the network is solely minimizing coding error and that previous work has suggested that additional costs bias the output. 

      Coding error also appears to have a minimum at intermediate values of the ratio of E to I neurons (effectively the number of I neurons) and the number of encoded variables (Figures 6D, 7B). These both have to do with the redundancy in the network (number of neurons for each encoded variable), and previous work suggests that networks can code for arbitrary numbers of variables provided the redundancy is high enough (e.g., Calaim et al. 2022). 

      Lastly, the performance of the E-I variant of the network is shown to be better than that of a single cell type (1CT: Figure 7C, D). Given that the E-I network is performing a similar computation as to the 1CT model but with more neurons (i.e., instead of an E neuron directly providing lateral inhibition to its neighbor, it goes through an interneuron), this is unintuitive and again not supported by previous work. These may be valid emergent properties of the E-I spiking network derived here, but their presentation and description are not sufficient to determine this.

      With regard to the concern that our previous analyses considered optimal parameter sets determined with a sweep of a single parameter at a time, we have addressed this issue in two ways. First, we presented (Figures 6I and 7J and text on pages 11 and 13) results of joint sweeps over pairs of parameters whose joint variations are expected to influence optimality in a way that cannot be understood by varying one parameter at a time. These new analyses complement the joint parameter sweep of the time constants of single E and I neurons (t<sub>r</sub><sup>E</sup> and t<sub>r</sub><sup>I</sup>) that was already presented in Fig. 5A (former Fig. 4A). Second, we conducted, within a reasonable/realistic range of possible variations of each individual parameter, a Monte-Carlo random joint sampling (10,000 simulations with 20 trials each) of all 6 model parameters that we explored in the paper. We present these new results in Fig. 2 and discuss them on pages 5-6. 

      The Reviewer is correct in stating that the error (RMSE) exhibits a counterintuitive minimum as a function of the metabolic constant despite the fact that, intuitively, for vanishing metabolic constant the network is solely minimizing the coding error (Fig. 6B). In our understanding, this counterintuitive finding is due to the presence of noise in the membrane potential dynamics. In the presence of noise, a non-vanishing metabolic constant is needed to suppress “inefficient” spikes purely induced by noise that do not contribute to coding and increase the error. This gives rise to a form of “stochastic resonance”, where the noise improves detection of the signal coming from the feedforward currents. We note that the metabolic constant and the noise variance both appear in the non-specific external current (Eq. 29f in Methods), and, thus, a covariation in their optimal values is expected. Indeed, we find that the optimal metabolic constant monotonically increases as a function of the noise variance, with stronger regularization (larger beta) required to compensate for larger variability (larger sigma) (Fig. 6I). Finally, we note that a moderate level of noise (which, in turn, induces a non-trivial minimum of the coding error as a function of beta) in the network is optimal. The beneficial effect of moderate levels of noise on performance in networks with efficient coding has been shown in different contexts in previous work (Chalk et al. 2016, Koren and Deneve, 2017). The intuition is that the noise prevents the excessive synchronization of the network and insufficient single neuron variability that decrease the performance. The points above are now explained in the revised text on page 11.

      The Reviewer is also correct in stating that the network exhibits an optimal performance for intermediate values of the number of I neurons and the number of encoded features. In our understanding, the optimal number of encoded features of M=3 arises simply because all the other parameters were optimized for that value of M. The purpose of those analyses was not to state that a network optimally encodes only a given number of features, but to show that a network whose parameters are optimized for a given M performs reasonably well when M is varied. We clarify this on page 13 of Results and in the Discussion on page 16. In the same Discussion paragraph we also refer to the results of Calaim et al. mentioned by the Reviewer. 

      To address the concern about the comparison of efficiency between the E-I and the 1CT model, we took advantage of the Reviewer’s suggestions to consider this issue more deeply. In the revision, we now compare the efficiency of the 1CT model with the E population of the E-I model (Fig. 8H). This new comparison changes the conclusion about which model is more efficient, as it shows that the 1CT model is slightly more efficient than the E-I model. Nevertheless, the performance of the E-I model is more robust to small variations of the optimal parameters, e.g., it exhibits biologically plausible firing rates for non-optimal values of the metabolic constant. See also the reply to point 3 of the Public Review of Reviewer 3 for more detail. We added these results and the ensuing caveats for the interpretation of this comparison on page 14, and also revised the title of the last subsection of Results.  

      Alternatively, the methodology of the model suggests that ad hoc modeling choices may be playing a role. For example, an arbitrary weighting of coding error and metabolic cost of 0.7 to 0.3, respectively, is chosen without mention of how this affects the results. Furthermore, the scaling of synaptic weights appears to be controlled separately for each connection type in the network (Table 1), despite the fact that some of these quantities are likely linked in the optimal network derivation. Finally, the optimal threshold and metabolic constants are an order of magnitude larger than the synaptic weights (Table 1). All of these considerations suggest one of the following two possibilities. One, the model has a substantial number of unconstrained parameters to tune, in which case more parameter sweeps would be necessary to definitively make claims of optimality. Or two, parameters are being decoupled from those constrained by the optimal derivation, and the optima simply correspond to the values that should come out of the derivation.

      We thank the reviewer for bringing about these important questions.

      In the first submission, we presented the encoding error and the metabolic cost separately as a function of the parameters, so that readers could get an understanding of how stable the optimal parameters would be to a change in the relative weighting of encoding error and metabolic cost. We specified this in Results (page 5) and we continue to present the encoding and metabolic terms separately in the revision.

      However, we agree that it is important to present an explicit quantification of how the optimal parameters may depend on g<sub>L</sub>. In the first submission, we showed the analysis for all possible weightings for the two parameters for which we found this analysis most relevant – the ratio of neuron numbers (Fig. 7E, Fig. 6E in the first submission) and the optimal number of input features M (see the last paragraph on page 13 and Fig. 8D). We now show this analysis also for the rest of the studied model parameters in Supplementary Fig. S4 (A-D and H). This is discussed on pages 9, 10, 11 and 12.

      With regard to the concern that the scaling of synaptic weights should not be controlled separately for each connection type in the network, we agree, and we would like to clarify that we did not control such scaling separately. We apologize if this was not clear enough. From the optimal analytical solution, we obtained that the connectivity scales with the standard deviations of the decoding weights (s<sub>w</sub><sup>E</sup> and s<sub>w</sub><sup>I</sup>) of the pre- and postsynaptic populations (Methods, Eq. 32). We studied the network properties as a function of the ratio of average I-I to E-I connectivity (Fig. 7 F-I; Supplementary Fig. S4 D-H), which is equivalent to the ratio of standard deviations s<sub>w</sub><sup>I</sup>/s<sub>w</sub><sup>E</sup> (see Methods, Eq. 35). We clarified this in the text on page 12.
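      The equivalence between the connectivity ratio and the ratio of decoding-weight standard deviations can be illustrated numerically. The sketch below uses a bilinear overlap of decoding vectors as a hypothetical stand-in for the derived connectivity (the actual expressions are Eqs. 32 and 35 in Methods), with assumed values for the neuron numbers and standard deviations:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_E, n_I = 3, 400, 300
s_w_E, s_w_I = 1.0, 3.0  # decoding-weight standard deviations (assumed values)

# One M-dimensional decoding vector per neuron
D_E = s_w_E * rng.standard_normal((n_E, M))
D_I = s_w_I * rng.standard_normal((n_I, M))

# Hypothetical bilinear connectivity: synaptic weight = overlap of decoding vectors
J_EI = D_E @ D_I.T                   # E-to-I (and I-to-E) weights
J_II = D_I @ D_I.T                   # I-to-I weights
off_diag = ~np.eye(n_I, dtype=bool)  # exclude self-connections

ratio = np.mean(np.abs(J_II[off_diag])) / np.mean(np.abs(J_EI))
# ratio is close to s_w_I / s_w_E = 3
```

For zero-mean Gaussian decoding weights, the mean absolute I-I weight scales with the product s<sub>w</sub><sup>I</sup>·s<sub>w</sub><sup>I</sup> and the mean absolute E-I weight with s<sub>w</sub><sup>E</sup>·s<sub>w</sub><sup>I</sup>, so their ratio approaches s<sub>w</sub><sup>I</sup>/s<sub>w</sub><sup>E</sup>.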

      Next, it is correct that our synaptic weights are an order of magnitude smaller than the metabolic constant. We analysed a simpler version of the network that has coding and dynamics identical to our full model (Methods, Eq. 25) but without the external currents. We found that the optimal parameters determining the firing threshold in such a simpler network were biologically implausible (see Supplementary Text 2 and Supplementary Table S1). As another simple solution, we considered rescaling the synaptic efficacy so as to obtain a biologically plausible threshold. However, this gave an implausible mean synaptic efficacy (see Supplementary Text 2). Thus, to be able to define a network with biologically plausible firing threshold and mean synaptic efficacy, we introduced the non-specific external current. After introducing such a current, we were able to shift the firing threshold to biologically plausible values while keeping realistic values of mean synaptic efficacy. Biologically plausible values for the firing threshold are around 15–20 mV above the resting potential (Constantinople and Bruno, 2013), which is the value that we have in our model. A plausible value for the average synaptic strength is between a fraction of a millivolt and a couple of millivolts (Constantinople & Bruno, 2013; Campagnola et al. 2022), which also corresponds to the values that the synaptic weights take. The above results are briefly explained in the revised text on page 4.

      Finally, to study the optimality of the network when changing multiple parameters at a time, we added a new analysis with Monte-Carlo random joint sampling (10,000 parameter sets with 20 trials for each set) of all 6 model parameters that we explored in the paper. We compared (Fig. 2) the results of each simulation with those obtained from the understanding gained by varying one or two parameters at a time (optimal parameters reported in Table 1 and used throughout the paper). We found (Fig. 2) that the optimal configuration in Table 1 was never improved upon by any of the simulations we performed, and that the three random configurations that came closest to the optimal one of Table 1 had stronger noise intensity but also stronger metabolic cost than the configuration in Table 1. The second, third and fourth configurations had longer time constants of both E and I single neurons (adaptation time constants). The ratios of E-I neuron numbers and of I-I to E-I connectivity in the second, third and fourth best configurations were either jointly increased or jointly decreased with respect to our configuration. These results are reported in Fig. 2 and in Tables 2-3 and they are discussed in Results (page 5).
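      For concreteness, the Monte-Carlo sampling procedure can be sketched as follows. The parameter names, ranges and loss function below are illustrative placeholders only, not the network's actual objective:

```python
import numpy as np

def monte_carlo_search(loss_fn, ranges, n_samples, n_trials, rng):
    """Sample joint parameter sets uniformly within the given ranges,
    estimate the trial-averaged loss of each set, and return the sets
    sorted from lowest to highest average loss."""
    results = []
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
        losses = [loss_fn(params, rng) for _ in range(n_trials)]
        results.append((float(np.mean(losses)), params))
    results.sort(key=lambda r: r[0])
    return results

def toy_loss(p, rng):
    # Placeholder objective with a known optimum at beta=1.0, sigma=0.5,
    # plus trial-to-trial noise; NOT the network's actual loss.
    return (p["beta"] - 1.0) ** 2 + (p["sigma"] - 0.5) ** 2 + 0.01 * rng.standard_normal()

rng = np.random.default_rng(1)
ranked = monte_carlo_search(toy_loss,
                            {"beta": (0.0, 2.0), "sigma": (0.0, 1.0)},
                            n_samples=500, n_trials=20, rng=rng)
best_loss, best_params = ranked[0]
```

Ranking the sampled configurations by trial-averaged loss, as above, is what allows the closest competitors to the reference configuration to be inspected.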

      Reviewer #3 (Public Review):

      Summary:

      In their paper the authors tackle three things at once in a theoretical model: how can spiking neural networks perform efficient coding, how can such networks limit the energy use at the same time, and how can this be done in a more biologically realistic way than previous work?

      They start by working from a long-running theory on how networks operating in a precisely balanced state can perform efficient coding. First, they assume split networks of excitatory (E) and inhibitory (I) neurons. The E neurons have the task to represent some lower dimensional input signal, and the I neurons have the task to represent the signal represented by the E neurons. Additionally, the E and I populations should minimize an energy cost represented by the sum of all spikes. All this results in two loss functions for the E and I populations, and the networks are then derived by assuming E and I neurons should only spike if this improves their respective loss. This results in networks of spiking neurons that live in a balanced state, and can accurately represent the network inputs.

      They then investigate in-depth different aspects of the resulting networks, such as responses to perturbations, the effect of following Dale's law, spiking statistics, the excitation (E)/inhibition (I) balance, optimal E/I cell ratios, and others. Overall, they expand on previous work by taking a more biological angle on the theory and showing the networks can operate in a biologically realistic regime.

      Strengths:

      (1) The authors take a much more biological angle on the efficient spiking networks theory than previous work, which is an essential contribution to the field.

      (2) They make a very extensive investigation of many aspects of the network in this context, and do so thoroughly.

      (3) They put sensible constraints on their networks, while still maintaining the good properties these networks should have.

      Thanks for this summary and for these kind words of appreciation of the strengths of our work.  

      Weaknesses:

      (1) The paper has somewhat overstated the significance of their theoretical contributions, and should make much clearer what aspects of the derivations are novel. Large parts were done in very similar ways in previous papers. Specifically: the split into E and I neurons was also done in Boerlin et al (2008) and in Barrett et al (2016). Defining the networks in terms of realistic units was already done by Boerlin et al (2008). It would also be worth it to discuss Barrett et al (2016) specifically more, as there they also use split E/I networks and perform biologically relevant experiments.

      We improved the text to make sure that credit to previous studies is more precisely and more clearly given (see rebuttal to the specific suggestions of Reviewer 2 for a full list).

      We apologize if this was not clear enough in the previous version. 

      With regard to the specific point raised here about the E-I split, we revised the text on page 2. With regard to the realistic units, we revised the text on page 3. Finally, we commented on the relation between our results and those of the study by Barrett et al. (2016) on page 16.

      (2) It is not clear from an optimization perspective why the split into E and I neurons and following Dale's law would be beneficial. While the constraints of Dale's law are sensible (splitting the population in E and I neurons, and removing any non-Dalian connection), they are imposed from biology and not from any coding principles. A discussion of how this could be done would be much appreciated, and in the main text, this should be made clear.

      We indeed removed non-Dalian connections because Dale’s law is a major constraint for biological plausibility. Our logic was to consider efficient coding within the space of networks that satisfy this (and other) biological plausibility constraints. We did not intend to claim that removing the non-Dalian connections was the result of an analytical optimization. We clarified this in revision (page 4).

      (3) Related to the previous point, the claim that the network with split E and I neurons has a lower average loss than a 1 cell-type (1-CT) network seems incorrect to me. Only the E population coding error should be compared to the 1-CT network loss, or the sum of the E and I populations (not their average). In my author recommendations, I go more in-depth on this point.

      We carefully considered these possibilities and decided to compare only the E population of the E-I model with the 1-CT model. In Fig. 8G (Fig. 7C of the first submission), E neurons have slightly higher error and cost compared to the 1CT network. In the revision, we compared the loss of E neurons of the E-I model with the loss of the 1-CT model. With this comparison, we found that the 1CT network has lower loss and is more efficient than the E neurons of the E-I model. We revised Figure 8H and the text on page 14 to address this point. 

      (4) While the paper is supposed to bring the balanced spiking networks they consider in a more experimentally relevant context, for experimental audiences I don't think it is easy to follow how the model works, and I recommend reworking both the main text and methods to improve on that aspect.

      We tried to make the presentation of the model more accessible to a non-computational audience in the revised paper. We carefully edited the text throughout to make it as accessible as possible. 

      Assessment and context:

      Overall, although much of the underlying theory is not necessarily new, the work provides an important addition to the field. The authors succeeded well in their goal of making the networks more biologically realistic, and incorporating aspects of energy efficiency. For computational neuroscientists, this paper is a good example of how to build models that link well to experimental knowledge and constraints, while still being computationally and mathematically tractable. For experimental readers, the model provides a clearer link between efficient coding spiking networks to known experimental constraints and provides a few predictions.

      Thanks for these kind words. We revised the paper to make sure that these points emerge more clearly and in a more accessible way from the revised paper.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Referring to the major comments:

      (1) Be upfront about particular modelling choices and why you made them; avoid talk of a "striking/surprising", etc. ability to explain data when this actually requires otherwise-arbitrary choices and auxiliary assumptions. Ideally, this nuance is already clear from the abstract.

      We removed all the "striking/surprising" and similar expressions from the text. 

      We added to the Abstract the assumption of equal time constants of the stimulus and of the membrane of E and I neurons and the assumption of the independence of encoded stimulus features.

      In revision, we performed additional analyses (joint parameter sweeps, Monte-Carlo joint sampling of all 6 model parameters) providing additional evidence that the network parameters in Table 1 capture the optimal solution reasonably well. These are reported in Figs. 2, 6I and 7J and in Results (pages 5, 11 and 13). See the rebuttal to the weaknesses in the public review of Reviewer 2 for details.

      (2) Make even more of an effort to acknowledge prior work on the importance of structured E-I and I-E connectivity.

      We have revised the text (page 4) to better place our results within previous work on structured E-I and I-E connectivity.

      (3) Be clear about the model's limitations and mention them throughout the text. This will allow readers to interpret your results appropriately.

      We now comment more on model's limitations, in particular the simplifying assumption about the network's computation (page 16), the lack of E-E connectivity (page 3), the absence of long-term adaptation (page 10), and the simplification of only having one type of inhibitory neurons (page 16). 

      (4) Present your "predictions" for what they are: aspects of the model that can be made consistent with the existing data after some fitting. Except in the few cases where you make actual predictions, which deserve to be highlighted.

      We followed the suggestion of the reviewer and distinguished cases where the model is consistent with the data (postdictions) from actual predictions, where empirical measurements are not available or not conclusive. We compiled a list of predictions and postdictions in response to point 4 of Reviewer 1. In the revision, we now comment on every property of the model as either reproducing a known property of biological networks (postdiction) or being a prediction. We improved the text in Results on pages 4, 5, 6, 7, 9, 10, 11, 12 and 13 to accommodate these requests.

      Minor comments and recommendations

      It's a sizable list, but most can be addressed with some text edits.

      (1) The image captions should give more details about the simulations and analyses, particularly regarding sample sizes and statistical tests. In Figure 5, for example, it is unclear if the lines represent averages over multiple signals and, if so, how many. It's probably not a single realization, but if it is, this might explain the otherwise puzzling optimal number of three stimuli. Box plots visualize the distribution across simulation trials, but it's not clear how many. In Figure 7d, a star suggests statistical significance, but the caption does not mention the test or its results; the y-axis should also have larger limits.

      All statistical results were computed on 100 or 200 simulation trials, depending on the figure, with a trial duration of 1 second of simulated time. To compute the statistical results in Fig. 1, we used 10 trials of 10 seconds each. Each trial consisted of M independent realizations of Ornstein-Uhlenbeck (OU) processes as stimuli, independent noise in the membrane potential and an independent draw of tuning parameters, so that the results do not depend on specific realizations of these random variables. Realizations of the OU processes were independent across stimulus dimensions and across trials. We added this information to the caption of each figure. 
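      For reference, such OU stimuli can be generated with the exact discrete-time update of the process; the time constant, time step and stationary standard deviation below are illustrative values, not necessarily those used in the paper:

```python
import numpy as np

def ornstein_uhlenbeck(n_steps, dt, tau, sigma, rng):
    """Exact discretization of a zero-mean OU process with correlation
    time tau and stationary standard deviation sigma:
        x[t] = a * x[t-1] + sigma * sqrt(1 - a^2) * xi,   a = exp(-dt / tau)."""
    a = np.exp(-dt / tau)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = a * x[t - 1] + sigma * np.sqrt(1.0 - a**2) * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
# M = 3 independent stimulus features, 1 s of simulated time at dt = 1 ms
# (tau = 10 ms and sigma = 1 are illustrative values)
stimuli = np.stack([ornstein_uhlenbeck(1000, 1e-3, 10e-3, 1.0, rng) for _ in range(3)])
```

Using a fresh draw of the process on every trial, as described above, makes the reported statistics independent of any particular stimulus realization.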

      The optimal number of M=3 stimuli is the result of measuring the performance of the network in 100 simulation trials (for each parameter value), thus following the same procedure as for all other parameters. Boxplots in Fig. 8G-H were also generated from results computed in 100 simulation trials, which we have now specified in the caption of the figure, together with the statistical test used for assessing significance (two-tailed t-test). We also enlarged the limits of Fig. 8H (7D in the previous version).
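      The comparison underlying the boxplots can be sketched as follows; the per-trial losses below are synthetic placeholders standing in for the measured losses of the two models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-trial losses of two model variants over 100 simulation
# trials each (means and spread are made up for illustration)
loss_model_a = 1.00 + 0.05 * rng.standard_normal(100)
loss_model_b = 1.05 + 0.05 * rng.standard_normal(100)

# Two-tailed independent-samples t-test, as used for the boxplot comparison
t_stat, p_value = stats.ttest_ind(loss_model_a, loss_model_b)
significant = p_value < 0.05
```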

      (2) The Oldenburg paper (reference 62) finds suppression of all but nearby neurons in response to two-photon stimulation of small neural ensembles (instead of single neurons, as in Chettih & Harvey). This isn't perfectly consistent with the model's results, even though the Oldenburg experiments seem more relevant given the model's small size, and strong connectivity/high connection probability between similarly tuned neurons. What might explain the potential mismatch?

      We sincerely apologize for not having been precise enough on this point when comparing our model against Chettih & Harvey and Oldenburg et al. We corrected the sentence (page 6) to remove the claim that our model reproduces both. 

      We speculate that the discrepancy between perturbing our model and the Oldenburg data may arise from the lack of E-E connectivity in our model. Synaptic connections between E neurons with similar selectivity could create enhancement instead of suppression between neuronal pairs with very similar tuning. We added a sentence about this in the section with perturbation experiments, “Competition across neurons with similar stimulus tuning emerging in efficient spiking networks” (page 7), where we discuss this limitation of our model. We feel that this example shows the utility of deriving perturbation results from our model, as not all networks with some degree of lateral inhibition will show the same perturbation results. Comparing our model's perturbations with real perturbation data thus has some value for better appreciating the strengths and limitations of our approach. 

      (3) "Previous studies optogenetically stimulated E neurons but did not determine whether the recorded neurons were excitatory or inhibitory " (p. 11). I believe Oldenburg et al. did specifically image excitatory neurons.

      The reviewer is correct about Oldenburg et al. imaging specifically excitatory neurons. We have revised this part of the Discussion (page 15). 

      (4) The authors write that efficiency is particularly achieved where adaptation is stronger in E compared to I neurons (p. 7; Figure 4). Although this would be consistent with experimental data (the I neurons in the model seem akin to fast-spiking Pv+ cells), I struggle to see it in the figure. Instead, it seems like there are roughly two regimes. If either of the neuronal timescales is faster than the stimulus timescale, the optimisation fails. If both are at least as slow, optimisation succeeds.

      We agree with the reviewer that the adaptation properties of our inhibitory neurons are compatible with Pv+ cells. What is essential for determining the dynamical regime of the network is not so much the relation to the time constant of the stimulus (t<sub>x</sub>), but rather the relation between the time constant of the population readout (t, which is also the membrane time constant) and the time constant of the single neuron (t<sub>r</sub><sup>y</sup> for y=E and y=I; see Eq. 23, 25 or 29e). The relation between t and t<sub>r</sub><sup>y</sup> determines whether single neurons generate spike-triggered adaptation (t<sub>r</sub><sup>y</sup> > t) or spike-triggered facilitation (t<sub>r</sub><sup>y</sup> < t; see Table 4). In regimes with facilitation in either E or I neurons (or both), the network performance strongly deteriorates compared to regimes with adaptation (Fig. 5A). 
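      The regime criterion described above can be stated compactly; the function below is a direct transcription of the condition on the time constants, and the example values are assumed for illustration:

```python
def spike_triggered_regime(tau_r, tau):
    """Classify the single-neuron feedback regime from the time constants:
    tau_r is the single-neuron time constant, tau the membrane (and
    population-readout) time constant. Adaptation requires tau_r > tau."""
    if tau_r > tau:
        return "adaptation"
    if tau_r < tau:
        return "facilitation"
    return "none"

# Illustrative values (seconds); tau = 10 ms is an assumed membrane constant
regime_E = spike_triggered_regime(tau_r=0.020, tau=0.010)  # -> "adaptation"
regime_I = spike_triggered_regime(tau_r=0.005, tau=0.010)  # -> "facilitation"
```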

      Beyond adaptation leading to better performance, we also found different effects of adaptation in E and I neurons. We acknowledge that the difference between these effects was difficult to see in Fig. 4B of the first submission. We have now replotted the results from the previously shown Fig. 4B to focus on the adaptation regime only (since Fig. 5A already establishes that this is the regime with better performance). We also added figures showing the differential effect of adaptation in the E and I cell types on the firing rate and on the average loss (Fig. 5C-D). Fig. 5B and C (top plots) show that with adaptation in E neurons, the error and the loss increase more slowly than with adaptation in I neurons. Moreover, the firing rate in both cell types decreases with adaptation in E neurons, while this is not the case with adaptation in I neurons (Fig. 5D). These results are added to the figure panels specified above and discussed in the text on page 9.

      To clarify the relation between neuronal and stimulus timescales, we have now also added an analysis of network performance as a function of the time constant of the stimulus t<sub>x</sub> (Supplementary Fig. S5 C-E). We found that the model's performance is optimal when the time constant of the stimulus is close to the membrane time constant t. This result is expected, because the equality of these time constants was imposed in our analytical derivation of the model (t<sub>x</sub> = t). Performance decreases similarly for values of t<sub>x</sub> that are both faster and slower than the membrane time constant (Supplementary Fig. S5C, top). These results are added to the figure panels specified above and discussed in the text on page 13.

      (5) A key functional property of cortical interneurons is their lower stimulus selectivity. Does the model replicate this feature?

      We think that whether I neurons are less selective than E neurons is still an open question. A number of recent empirical studies reported that the selectivity of I neurons is comparable to the selectivity of E neurons (see, e.g., Kuan et al. Nature 2024, Runyan et al. Neuron 2010, Najafi et al. Neuron 2020). In our model, the optimal solution prescribes a precise structure in recurrent connectivity (see Eq. 24 and Fig. 1C(ii)), and this structured connectivity endows I neurons with stimulus selectivity. To show this, we added plots of example tuning curves and the distribution of the selectivity index across E and I neurons (Fig. 8E-F) and described these new results in the Results (page 14). Tuning curves in our network were similar to those computed in a previous work that addressed stimulus tuning in efficient spiking networks (Barrett et al. 2016). We evaluated tuning curves using M=3 constant stimulus features, varying one of the features while the other two were kept fixed. We provided details on how the tuning curves and the selectivity index were computed in a new Methods subsection ("Tuning curves and selectivity index") on page 50.

      (6) The final panels of Figure 4 are presented as an approach to test the efficiency of biological networks. The authors seem to measure the instantaneous (and time-averaged) E-I balance while varying the adaptation parameter and then correlate this with the loss. If that is indeed the approach (it's difficult to tell), this doesn't seem to suggest a tractable experiment. Also, the conclusion is somewhat obvious: the tighter the single neuron balance, the fewer unnecessary spikes are fired. I recommend that the authors clearly explain their analysis and how they envision its application to biological data.

      We indeed measured the instantaneous (and time-averaged) E-I balance while varying the adaptation parameters and then correlating it with the loss. We did not want to imply that the latter panels of Figure 4 are a means to test the efficiency of biological networks, or that we are suggesting new and possibly unfeasible experiments. We see this analysis as a way to better understand conceptually how spike-triggered adaptation helps the network's coding efficiency, by tightening the E-I balance in a way that reduces the number of unnecessary spikes. We apologize if the previous text was confusing in this respect. We have now removed the initial paragraph of the former Results subsection (including the subsection title) and added new text about the different effects of adaptation in E and I neurons on page 9. We also thoroughly revised Figure 5.

      (7) The external stimuli are repeatedly said to vary (or be tracked) across "multiple time scales", which might inadvertently be interpreted as (i) a single stimulus containing multiple timescales or (ii) simultaneously presented stimuli containing different timescales. These scenarios are potential targets for efficient coding through neuronal adaptation (reference 21 in the manuscript and Pozzorini et al. Nat. Neuro. 2013), but they are not addressed in the current model. I recommend the authors clarify their statements regarding timescales (and if they're up for it, acknowledge this as a limitation).

      We thank the reviewer for bringing up this interesting point. To address the second point raised by the reviewer (simultaneously presented stimuli containing multiple timescales), we performed new analyses to test the model with simultaneously presented stimuli that have different timescales. We found that the model encodes such stimuli efficiently. We tested the case of a 3-dimensional stimulus where each dimension is an Ornstein-Uhlenbeck process with a different time constant. More precisely, we kept the time constant in the first dimension fixed (at 10 ms) and varied the time constants in the second and third dimensions such that the time constant in the third dimension is double that of the second. We plotted the encoding error in every stimulus dimension for E and I neurons (Fig. 8B, left plot) as well as the encoding error and the metabolic cost averaged across stimulus dimensions (Fig. 8B, right plot). These results are briefly described in the text on page 13.
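      As an illustration of the stimulus construction described above, the following sketch generates a 3-dimensional stimulus whose dimensions are independent Ornstein-Uhlenbeck processes with different time constants. The step size, duration, noise amplitude and the concrete value of the second time constant are hypothetical choices for illustration, not the values used in the paper.

```python
import numpy as np

def ou_process(tau, dt, n_steps, sigma=1.0, rng=None):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
    dx = -(x / tau) dt + sigma * sqrt(2 dt / tau) * N(0, 1)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = x[t - 1] * (1 - dt / tau) \
               + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()
    return x

dt, n_steps = 0.1, 10000   # ms per step, number of steps (hypothetical)
tau2 = 20.0                # hypothetical time constant of the 2nd dimension (ms)

# First dimension fixed at 10 ms; third dimension's time constant
# doubled relative to the second, as in the new analysis.
stim = np.stack([ou_process(10.0, dt, n_steps),
                 ou_process(tau2, dt, n_steps),
                 ou_process(2 * tau2, dt, n_steps)])
```

      Each row of `stim` can then be fed to the network as one stimulus feature, with its own set of feedforward weights.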

      Regarding case (i) (a single stimulus containing multiple timescales), we considered two possibilities. One possibility is that the timescales of the stimulus are separable; in this case, a single stimulus containing several timescales can be decomposed into several stimuli with a single timescale each. As we assign a new set of weights to each dimension of the decomposed stimulus, this case is similar to case (ii) that we already addressed. The other possibility is that the timescales of the stimulus cannot be separated. This case is not covered by the present analysis, and we listed it among the limitations of the model. We revised the text (page 13) around the question of multiple timescales and included the citation of Pozzorini et al. (2013).

      (8) It is claimed that the model uses a mixed code to represent signals, citing reference 47 (Rigotti et al., Nature 2013). But whereas the model seems to use linear mixed selectivity, the Rigotti reference highlights the virtues of nonlinear mixed selectivity. In my understanding, a linearly mixed code does not enjoy the same benefits since it’s mathematically equivalent to a non-mixed code (simply rotate the readout matrix). I recommend that the authors clarify the type of selectivity used by their model and how it relates to the paper(s) they cite.

      The reviewer is correct that our selectivity is a linear mixing of input variables and differs from the selectivity in Rigotti et al. (2013), which is nonlinear. We revised the sentence on page 4 to clarify that the mixed selectivity we consider is linear, and we removed the citation of Rigotti et al.

      (9) Reference 46 is cited as evidence that leaky integration of sensory features is a relevant computation for sensory areas. I don’t think this is quite what the reference shows. Instead, it finds certain morphological and electrophysiological differences between single pyramidal neurons in the primary visual cortex compared to the prefrontal cortex. Reference 46 then goes on to speculate that these are differences relevant to sensory computation. This may seem like a quibble, but given the centrality of the objective function in normative theories, I think it's important to clarify why a particular objective is chosen.

      We agree that our reference to Amatrudo et al. was not the best choice and that the previous text was confusing, so we have tried to improve its clarity. We examined the previous theoretical efficient coding papers that introduced this leaky integration, and we could not find in them a justification of this assumption based on experimental papers. However, there is evidence that neurons in sensory structures and in cortical association areas respond to time-varying sensory evidence by summing stimuli over time with a weight that decreases steadily going back in time from the time of firing, which suggests that neurons integrate time-varying sensory features. In many cases, these integration kernels decay approximately exponentially going back in time, and several models that successfully explain perceptual readouts of neural activity assume leaky integration. This suggests that the mathematical approximation of leaky integration of sensory evidence, though possibly simplistic, is reasonable. We revised the text in this respect (page 2).

      (10) The definition of the objective function uses beta as a tuning parameter, but later parts of the text and figures refer to a parameter g_L which might only be introduced in the convex combination of Eq. 40a.

      This is correct. Parameter optimization was performed on a weighted sum of the average encoding error and cost as given by Eq. 39a (40a in the first submission), with the weighting g<sub>L</sub> for the error versus the cost, and not the beta that is part of the objective in Eq. 10. The convex combination in Eq. 39a allowed us to find a set of optimal parameters that is within biologically realistic parameter ranges, including realistic values for the firing threshold. The average encoding error and metabolic cost (the two terms on the right-hand side of Eq. 39a, without the weighting g<sub>L</sub>) in our network are of the same order (see Fig. 8G for the E-I model, where these values are plotted separately for the optimal network). Weighting the cost with an optimal beta in the range of ~10 would have yielded a network that optimizes almost exclusively the metabolic cost and would have biased the results towards solutions with poor encoding accuracy.

      To document more fully how the choice of weighting of the error with the cost (g<sub>L</sub>) affects the optimal parameters, we added a new analysis (Fig. 8D and Supplementary Fig. S4 A-D and H) showing the optimal parameters as a function of this weighting. We commented on these results in the text on pages 9-11 and 12. For further details, please see also the reply to point 1 of Reviewer 1.

      (11) Figure 1J: "In E neurons, the distribution of inhibitory and of net synaptic inputs overlap". In my understanding, they are in fact identical, and this is by construction. It might help the reader to state this.

      We apologize for an unclear statement. In E neurons, net synaptic current is the sum of the feedforward current and of recurrent inhibition (Eq. 29c and Eq. 42). With our choice of tuning parameters that are symmetric around zero and with stimulus features that have vanishing mean, the mean of the feedforward current is close to zero. Because of this, the mean of the net current is negative and is close to the mean of the inhibitory current. We have clarified this in the text (page 5).

      (12) A few typos:

      -  p1. "Minimizes the encoding accuracy" should be "maximizes..."

      -  p1: "as well the progress" should be something like "as well as the progress"

      -  p. 11: "In recorded neurons where excitatory or inhibitory": "where" should be "were".

      -  Fig. 3: missing parentheses (B).

      -  Fig4B: the 200 ticks on the y-scale are cut off.

      -  Panel Fig. 5a: "stimulus" should be "stimuli".

      -  Ref 24 "Efficient andadaptive sensory codes" is missing a space.

      -  p. 26: "requires" should be "required".

      -  On several occasions, the article "the" is missing.

      We thank the reviewer for kindly pointing out the typos that we now corrected.

      Reviewer #2 (Recommendations For The Authors):

      I would like to give the authors more details about the two main weaknesses discussed above, so that they may address specific points in the paper. First, there is the relation to previous work. Several published articles have presented very similar results to those discussed here, including references 5, 26, 28, 32, 33, 42, 43, 48, and an additional reference not cited by the authors (Calaim et al. 2022 eLife e73276). This includes:

      (1) Derivation of an E-I efficient spiking network, which is found in refs. 28, 42, 43, and 48. This is not reflected in the text: e.g., "These previous implementations, however, had neurons that did not respect Dale's law" (Introduction, pg. 1); "Unlike previous approaches (28, 48), we hypothesize that E and I neurons have distinct normative objectives...". The authors should discuss how their derivation compares to these.

      We have now fully clarified on page 3 that our model builds on the seminal previous works that introduced E-I networks with efficient coding (Supplementary text in Boerlin et al. 2013, Chalk et al. 2016, Barrett et al. 2016). 

      (2) Inclusion of a slow adaptation current: I believe this also appears in a previous paper (Gutierrez & Deneve 2019, ref. 33) in almost the exact same form, and is again not reflected in the text: "The strength of the current is proportional to the difference in inverse time constants ... and is thus absent in previous studies assuming that these time constants are equal (... ref. 33). Again, the authors should compare their derivation to this previous work.

      We thank the reviewer for pointing this out. We sincerely apologize if our previous version did not sufficiently acknowledge that the previous work of Gutierrez and Deneve (eLife 2019; ref. 33) first introduced the slow adaptation current that is similar to the spike-triggered adaptation in our model. We have made sure that the revised text acknowledges this more clearly. We also explained better what we changed or added with respect to this previous work (see revised text on page 8).

      The work by Gutierrez and Deneve (2019) emphasizes the interplay between a single-neuron property (an adapting current in single neurons) and a network property (network-level coding through structured recurrent connections). They use a network that does not distinguish E and I neurons. Our contribution instead focuses on adaptation in an E-I network. To improve the presentation following the reviewer's comment, we now better emphasize the differential effect of adaptation in E and in I neurons in the revision (Fig. 5B-D). Moreover, Gutierrez and Deneve studied the effect of adaptation on slower timescales (1 or 2 seconds), while we study adaptation on a finer timescale of tens of milliseconds. The revised text detailing this is on page 8.

      (3) Background currents and physical units: Pg. 26: "these models did not contain any synaptic current unrelated to feedforward and recurrent processing" and "Moreover previous models on efficient coding did not thoroughly consider physical units of variables" - this was briefly described in ref. 28 (Boerlin et al. 2013), in which the voltage and threshold are transformed by adding a common constant, and additional aspects of physical units are discussed.

      It is correct that Boerlin et al. (2013) suggested adding a common constant to introduce physical units. We have revised the text to make clearer the relation between our results and those of Boerlin et al. (2013) (page 3). In our paper, we built on Boerlin et al. (2013) and assigned physical units to the computational variables that define the model's objective (the targets, the estimates, the metabolic constant, etc.). We assigned units to computational variables in such a way that physical variables (such as membrane potentials, transmembrane currents, firing thresholds and resets) have the correct physical units. We have now clarified how we derived physical units in the section of the Results where we introduce the biophysical model (page 3) and specified how this derivation relates to the results of Boerlin et al. (2013).

      (4) Voltage correlations, spike correlations, and instantaneous E/I balance: this was already pointed out in Boerlin et al. 2013 (ref 28; from that paper: "Despite these strong correlations of the membrane potentials, the neurons fire rarely and asynchronously") and others including ref. 32. The authors mention this briefly in the Discussion, but it should be more prominent that this work presents a more thorough study of this well-known characteristic of the network.

      We agree that it is important to comment on how our results relate to those of Boerlin et al. (2013). It is correct that in Boerlin et al. (2013) neurons have strong correlations in their membrane potentials but fire asynchronously, similarly to what we observe in our model. However, asynchronous dynamics in Boerlin et al. (2013) strongly depends on the assumption of instantaneous synaptic transmission and on time discretization with a “one spike per time bin” rule in the numerical implementation. This rule enforces that at most one spike is fired in each time bin, thus actively preventing any synchronization across neurons. If this rule is removed, their network synchronizes, unless the metabolic constant is strong enough to bring the dynamics back to the asynchronous regime (see ref. 36). Our implementation does not contain any rule that would prevent synchronization across neurons. We now cite the paper by Boerlin and colleagues and briefly summarize this discussion when we describe the result of Fig. 3D on page 7.
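      The difference between the two spike-generation schemes discussed above can be sketched as follows. This is a minimal illustration of the "one spike per time bin" rule versus unrestricted thresholding, not the actual simulation code of either paper; the voltage vector and threshold are made-up values.

```python
import numpy as np

def spikes_one_per_bin(V, theta):
    """'One spike per time bin' rule (as in the discretized implementation
    of Boerlin et al. 2013): among neurons above threshold, only the one
    furthest above threshold fires in this bin."""
    above = V - theta
    s = np.zeros_like(V)
    if np.any(above > 0):
        s[np.argmax(above)] = 1.0
    return s

def spikes_unrestricted(V, theta):
    """Unrestricted rule: every neuron above threshold fires, so
    synchronization across neurons is not artificially prevented."""
    return (V > theta).astype(float)

V = np.array([1.2, 1.5, 0.3])   # hypothetical membrane potentials
theta = 1.0                      # hypothetical firing threshold
```

      With these values, the first rule lets only the neuron furthest above threshold fire, while the second lets both suprathreshold neurons fire in the same bin, which is why removing the rule can allow synchronization.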

      (5) Perturbations and parameters sweep: I found one previous paper on efficient spiking networks (Calaim et al. 2022) which the authors did not cite, but appears to be highly relevant to the work presented here. Though the authors perform different perturbations from this previous study, they should ideally discuss how their findings relate to this one. Furthermore, this previous study performs extensive sweeps over various network parameters, which the authors might discuss here, when relevant. For example, on pg. 8, the authors write “We predict that, if number of neurons within the population decreases, neurons have to fire more spikes to achieve an optimal population readout” – this was already shown in Calaim et al. 2022 Figure 5, and the authors should mention if their results are consistent.

      We apologize for not being aware of Calaim et al. (2022) when we submitted the first version of our paper. This important study is now cited in the revised version. As suggested, we have now performed sweeps of multiple parameters inspired by the work of Calaim et al. This new analysis is described extensively in the reply to the Weaknesses in the Public Review of Reviewer 2, is shown in Figs. 2, 6I and 7J, and is described on pages 5, 11 and 13.

      The reviewer is also correct that the compensation mechanism that applies when changing the ratio of E-I neuron numbers is similar to the one described in Barrett et al. (2016) and relates to our claim “if number of neurons within the population decreases, neurons have to fire more spikes to achieve an optimal population readout”. We have now added (page 11) that this prediction is consistent with the findings of Barrett et al. (2016).

      With regard to the dependence of optimal coding properties on the number of neurons, we have tried to better describe similarities and differences between our work and that of Calaim et al. (2022), as well as the work of Barrett et al. (2016), which reports highly relevant results. These additional considerations are summarized in a paragraph of the Discussion (page 16).

      (6) Overall, the authors should distinguish which of their results are novel, which ones are consistent with previous work on efficient spiking networks, and which ones are consistent in general with network implementations of efficient and sparse coding. In many of the above cases, this manuscript goes into much more depth and study of each of the network characteristics, which is interesting and commendable, but this should be made clear. In clarifying the points listed above, I hope that the authors can better contextualize their work in relation to previous studies, and highlight what are the unique characteristics of the model presented here.

      We made a number of clarifications of the text to provide better contextualization of our model within existing literature and to credit more precisely previous publications. This includes commenting on previous studies that introduced separate objective functions of E and I neurons (page 2), spike-triggered adaptation (page 8), physical units (page 3), and changes in the number of neurons in the network (page 16). 

      Next, there are the claims of optimal parameters. As explained on pg. 35 (criterion for determining optimal model parameters), it appears to me that they simply vary each parameter one at a time around the optimal value. This argument appears somewhat circular, as they would need to know the optimal parameters before starting this sweep. In general, I find these optimality considerations to be the most interesting and novel part of the paper, but the simulations are relatively limited, so I would ask the authors to either back them up with more extensive parameter sweeps that consider covariations in different parameters simultaneously (as in Calaim et al. 2022). Furthermore, the authors should make sure that they are not breaking any of the required relationships between parameters necessary for the optimization of the loss function. Again, some of the results (such as coding error not being minimized with zero metabolic cost) suggests that there might be issues here. 

      We thank the reviewer for this insightful suggestion. We have now added a joint sweep of all relevant model parameters using a Monte-Carlo parameter search with 10,000 iterations. We randomly drew parameter configurations from predetermined parameter ranges detailed in the newly added Table 2, sampling each parameter from a uniform distribution. We varied all six model parameters studied in the paper (metabolic constant, noise intensity, time constants of single E and I neurons, ratio of E to I neuron numbers, and ratio of the mean I-I to E-I connectivity). We present these results in a new Figure 2. We did not find any set of parameters with lower loss than the parameters in Table 1 when the weighting of the error with the cost was in the range 0.4 < g<sub>L</sub> < 0.81 (Fig. 2C). While our large but finite Monte-Carlo random sampling does not fully prove that the configuration we selected as optimal (Table 1) is a global optimum, it shows that this configuration is highly efficient. Further, and as detailed in the rebuttal to the Weaknesses of the Public Review of Reviewer 2, analyses of the near-optimal solutions are compatible with the notion (resulting from the joint parameter sweep studies that we added to Figures 6 and 7) that network optimality may be influenced by joint covariations in parameters. These new results are reported in the Results (pages 5, 11 and 13) and in Figures 2, 6I and 7J.
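      The Monte-Carlo search described above can be sketched as follows. The parameter names, ranges and the placeholder loss are hypothetical stand-ins (the real ranges are in Table 2 of the paper, and the real loss requires simulating the network and evaluating the weighted error-plus-cost of Eq. 39a).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ranges standing in for Table 2 of the paper
ranges = {
    "beta":    (1.0, 30.0),   # metabolic constant
    "sigma":   (0.0, 5.0),    # noise intensity
    "tau_rE":  (5.0, 50.0),   # single-neuron time constant, E (ms)
    "tau_rI":  (5.0, 50.0),   # single-neuron time constant, I (ms)
    "NE_NI":   (1.0, 8.0),    # ratio of E to I neuron numbers
    "wII_wEI": (0.5, 6.0),    # ratio of mean I-I to E-I efficacy
}

def loss(params):
    # Placeholder objective: the real study simulates the network for each
    # draw and returns the weighted sum of encoding error and metabolic cost.
    return sum(v ** 2 for v in params.values())

best, best_loss = None, np.inf
for _ in range(10_000):
    # Draw one configuration uniformly from the predetermined ranges
    draw = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    current = loss(draw)
    if current < best_loss:
        best, best_loss = draw, current
```

      Random search of this kind does not certify a global optimum, but with enough iterations it maps out the near-optimal region, which is what the new Figure 2 reports.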

      Some more specific points:

      (1) In general, I find it difficult to understand the scaling of the RMSE, cost, and loss values in Figures 4-7. Why are RMSE values in the range of 1-10, whereas loss and cost values are in the range of 0-1? Perhaps the authors can explicitly write the values of the RMSE and loss for the simulation in Figure 1G as a reference point.

      The encoding error (RMSE), metabolic cost (MC) and average loss for a well-performing network are within the range of 1-10 (see Fig. 8G, or 7C in the first submission). To ease the visualization of results, we normalized the cost and the loss in Figs. 6-8 in order to plot them in the same figure (while the computation of the optima follows Eq. 39 and is done without normalization). We have now explicitly written the values of the RMSE, MC and average loss (non-normalized) for the simulation in Fig. 1D on page 5, as suggested by the reviewer. We have also revised Fig. 4 and now show the absolute rather than the relative values of the RMSE and the MC.

      (2) Optimal E-I neuron ratio of 4:1 and efficacy ratio of 3:1: besides being unintuitive in relation to previous work, are these two optimal settings related to one another? If there are 4x more excitatory neurons than inhibitory neurons, won't this affect the efficacy ratio of the weights of the two populations? What happens if these two parameters are varied together?

      Thanks for this insightful point. Indeed, the optima of these two parameters are interdependent and positively correlated: if we decrease the E-I neuron ratio, the optimal efficacy ratio decreases as well. To better show this relation, we added figures with a 2-dimensional parameter search (Fig. 7J) where we varied the two ratios jointly. The red cross on the right figure marks the optimal ratios used in our study. These findings are discussed on page 13.

      (3) Optimal dimensionality of M=[1,4]: Again, previous work (Calaim et al. 2022) would suggest that efficient spiking networks can code for arbitrary dimensional signals, but that performance depends on the redundancy in the network - the more neurons, the better the coding. From this, I don't understand how or why the authors find a minimum in Figure 7B. Why does coding performance get worse for small M?

      We optimized all model parameters with M=3 and this is the reason why M=3 is the optimal number of inputs when we vary this parameter. Our network shows a distinct minimum of the encoding error as a function of the stimulus dimensionality for both E and I neurons (Fig. 8C, top). This minimum is reflected in the minimum of the average loss (Fig. 8C, bottom). The minimum of the loss is shifted (or biased) by the metabolic cost, with strong weighting of the cost lowering the optimal number of inputs. This is discussed on pages 13-14.

      Here are a list of other, more minor points, that the authors can consider addressing to make the results and text more clear:

      (1) Feedforward efficient coding models: in the introduction (pg. 1) and discussion (pg. 11) it is mentioned that early efficient coding models, such as that of Olshausen & Field 96, were purely feedforward, which I believe to be untrue (e.g., see Eq. 2 of O&F 96). Later models made this even more explicit (Rozell et al. 2008). Perhaps the authors can either clarify what they meant by this, or downplay this point.

      We sincerely apologize for the oversight in the previous version of the text. We agree with the reviewer that the model in Olshausen and Field (1996) indeed defines a network with recurrent connections, and the same type of recurrent connectivity has been used by Rozell et al. (2008, 2013). The structure of the connectivity in Olshausen and Field (as well as in Rozell et al. (2008)) is closely related to the structure of the connectivity that we derived in our model. We have corrected the text in the introduction (page 1) to remove these errors.

      (2) Pg. 2 - The authors state: "We draw tuning parameters from a normal distribution...", but in the methods, it states that these are then normalized across neurons, so perhaps the authors could add this here, or rephrase it to say that weights are drawn uniformly on the hypersphere.

      We rephrased the description of how weights were determined (page 2).
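      The rephrased procedure (drawing Gaussian tuning parameters and normalizing them, i.e., sampling weights uniformly on the hypersphere) can be sketched as follows; the numbers of neurons and stimulus dimensions are illustrative only.

```python
import numpy as np

def draw_tuning(N, M, rng=None):
    """Draw N tuning vectors of dimension M uniformly on the unit
    hypersphere: sample i.i.d. standard Gaussians, then normalize
    each vector to unit length (rotational symmetry of the Gaussian
    makes the normalized directions uniform on the sphere)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.standard_normal((N, M))
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# Hypothetical sizes: 400 neurons, M = 3 stimulus features
W = draw_tuning(400, 3)
```

      Normalization matters here because it decouples the direction of a neuron's tuning (its selectivity) from the overall gain of the feedforward input.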

      (3) Pg. 2 - "We hypothesize the time-resolved metabolic cost to be proportional to the estimate of a momentary firing rate of the neural population" - from what I can see, this is not the usual population rate, which would be an average or sum of rates across the population.

      Indeed, the time-dependent metabolic cost is not the population rate (in the sense of the sum of instantaneous firing rates across neurons), but is proportional to it by a factor of 1/t<sub>r</sub>. More precisely, we can define the instantaneous estimate of the firing rate of a single neuron i as z<sub>i</sub>(t) = (1/t<sub>r</sub>) r<sub>i</sub>(t), with r<sub>i</sub>(t) as in Eq. 7. We have clarified this in the revised text on page 3.

      (4) Pg. 3: "The synaptic strength between two neurons is proportional to their tuning similarity if the tuning similarity is positive" - based on the figure and results, this appears to be the case for I-E, E-I, and I-I connections, but not for E-E connections. This should be clarified in the text. Furthermore, one reference given in the subsequent sentence (Ko et al. 2011, ref. 51), is specifically about E-E connections, so doesn't appear to be relevant here.

      We have now specified that Eq. 24 does not describe E-E connections. We also agree that the reference (Ko et al. 2011) did not adequately support our claim; we have therefore removed it and revised the text on page 3 accordingly.

      (5) Pg. 3: "the relative weight of the metabolic cost over the encoding error controls the operating regime of the network" and "and an operating regime controlled by the metabolic constant" - what do you mean by operating regime here?

      We used the expression “operating regime” in the sense of a dynamical regime of the network.  However, we agree that this expression may be confusing and we removed it in revision. 

      (6) Pg. 3: "Previous studies interpreted changes of the metabolic constant beta as changes to the firing thresholds, which has less biological plausibility" - can the authors explain why this is less plausible, or ideally provide a reference for it?

      In biological networks, global variables such as brain state can strongly modulate the way neural networks respond to a feedforward stimulus. These variables influence neural activity in at least two distinct ways. One is by changing non-specific synaptic inputs to neurons, which is a network-wide effect (Destexhe and Pare, Nature Reviews Neurosci. 2003). This is captured in our model by changing the strength of the mean and fluctuations in the external currents. Beyond modulating synaptic currents, another way of modulating neural activity is by changing cell-intrinsic factors that modulate the firing threshold in biological neurons (Pozzorini et al. 2013). Previous studies on spiking networks with efficient coding interpreted the effect of the metabolic constant as changes to the firing threshold (Koren and Deneve, 2017, Gutierrez and Deneve 2019), which corresponds to cell-intrinsic factors. Here we instead propose that the metabolic constant modulates the neural activity by changing the non-specific synaptic input, homogeneously across all neurons in the network. Interpreting the metabolic constant as setting the mean of the non-specific synaptic input was necessary in our model to find an optimal set of parameters (as in Table 1) that is also biologically plausible. We revised the text accordingly (page 4).

      (7) Pg. 4: Competition across neurons: since the model lacks E-E connectivity, it seems trivial to conclude that there is competition through lateral inhibition, and it can be directly determined from the connectivity. What is gained from running these perturbation experiments?

      We agree that a reader with a good understanding of sparse / efficient coding theory can tell that there is competition across neurons with similar tuning already from the equation for the recurrent connectivity (Eq. 24). However, we presume that not all readers can see this from the equations and that it is worth showing this with simulations.

      Following the reviewer's comment, we have now downplayed the result about the model manifesting lateral inhibition in general on page 6. We have also removed its extensive elaboration in Discussion.

      One reason to run perturbation experiments was to test to what extent the optimal model qualitatively replicates empirical findings, in particular the single-neuron perturbation experiments of Chettih and Harvey (2019), without specifically tuning any of the model parameters to the data. We found that the model qualitatively reproduces the main empirical findings. We revised the text on page 5 accordingly.

A further reason to run these experiments was to refine predictions about the minimal amount of connectivity structure that generates perturbation response profiles that are qualitatively compatible with empirical observations. To establish this, we did perturbation experiments while removing the structure of particular connectivity sub-matrices (E-I, I-I or I-E; Fig. S3F). This allowed us to determine which connectivity matrix has to be structured to observe results that qualitatively match empirical findings. We found that the structure of E-I and I-E connectivity is necessary, but not the structure of I-I connectivity. Finally, we tested partial removal of the connectivity structure, where we replaced the precise (and optimal) connectivity structure with a simpler connectivity rule. In the optimal connectivity, the connection strength is proportional to the tuning similarity. The simpler connectivity rule, in contrast, only specifies that neurons with similar tuning share a connection; beyond this, the connection strength is random. Running perturbation experiments in such a network obeying the simpler connectivity rule still qualitatively replicated empirical results from Chettih and Harvey (2019). This is shown in Supplementary Fig. S2F and described on page 8.

      (8) Pg. 4: "the optimal E-I network provided a precise and unbiased estimator of the multidimensional and time-dependent target signal" - from previous work (e.g., Calaim et al. 2022), I would guess that the estimator is indeed biased by the metabolic cost. Why is this not the case here? Did you tune the output weights to remove this bias?

Output weights were not tuned to remove the bias. In Fig. 1H of the first submission we plotted the bias for the network that minimizes the encoding error. We forgot to specify this in the text and figure caption, for which we apologize. We have now replaced this figure with a new one (Fig. 1E) in which we plot the bias of the network minimizing the average loss (with parameters as in Table 1). The bias of the network minimizing the error is close to zero, B^E = 0.02 and B^I = 0.03. The bias of the network minimizing the loss is stronger and negative, B^E = -0.15 and B^I = -0.34. In the Results, we now report the bias of both networks (i.e., optimizing the encoding error and optimizing the loss). We also added a plot showing trial-averaged estimates and a time-dependent bias in each stimulus dimension (Supplementary Fig. S1F). Note that the network minimizing the encoding error requires a lower metabolic constant (β = 6) than the network optimizing the loss (β = 14); however, the optimal metabolic cost in both networks is nonzero. We revised the text and explained these points on page 5.

(9) Pg. 4: "The distribution of firing rates was well described by a log-normal distribution" - I find this quite interesting, but it isn't clear to me how much this is due to the simulation of a finite-time noisy input. If the neurons all have equal tuning on the hypersphere, I would expect that the variability in firing is primarily due to how much the input correlates with their tuning. If this is true, I would guess that if you extend the duration of the simulation, the distribution would become tighter. Can you confirm that this is the stationary distribution of the firing rates?

      We now simulated the network with longer simulation time (10 seconds of simulated time instead of 2 seconds used previously) and also iterated the simulation across 10 trials to report a result that is general across random draws of tuning parameters (previously a single set of tuning parameters was used). The reviewer is correct that the distribution of firing rates of E neurons has become tighter with longer simulation time, but distributions remain log-normal. We also recomputed the coefficient of variation (CV) using the same procedure. We updated these plots on Fig. 1F.

      (10) Pg. 4: "We observed a strong average E-I balance" - based on the plots in Figure 1J, the inputs appear to be inhibition-dominated, especially for excitatory neurons. So by what criterion are you calling this strong average balance?

The reviewer is correct that the net synaptic input to single neurons in our optimal network shows excess inhibition and that the network is inhibition-dominated; we revised this sentence (page 5) accordingly.

      (11) Pg. 4: Stronger instantaneous balance in I neurons compared to E neurons - this is curious, and I have two questions: (1) can the authors provide any intuition or explanation for why this is the case in the model? and (2) does this relate to any literature on balance that might suggest inhibitory neurons are more balanced than excitatory neurons?

In our model, I neurons receive excitatory and inhibitory synaptic currents through synaptic connections that are precisely structured. E neurons receive structured inhibition and a feedforward current. The feedforward current consists of M=3 independent OU processes projected on the tuning vectors of E neurons w<sub>i</sub><sup>E</sup>. We speculate that because the synaptic inhibition and the feedforward current are different processes, and because the 3 OU inputs are independent, it is harder for E neurons to achieve an instantaneous balance as precise as that in I neurons. While we think that the feedforward current in our model reflects biologically plausible sensory processing, it is not a mechanistic model of feedforward processing. In biological neurons, real feedforward signals arrive as a series of complex feedforward synaptic inputs from other areas, while the feedforward current in our model is a sum of stimulus features, and is thus a simplification of the biological process that generates feedforward signals. We speculate that a mechanistic implementation of the feedforward current could increase the instantaneous balance in E neurons. Furthermore, the presence of E-E connections could potentially also increase the instantaneous balance in E neurons. We could not find any empirical evidence directly comparing the instantaneous balance in E versus I neurons. These important questions touch on limitations of our model that could be addressed in future work, and we have reported these considerations in the revised Discussion (page 16).
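For illustration, a feedforward current of this kind can be sketched as follows. This is a minimal numpy sketch with arbitrary parameter values (not the optimal parameters of Table 1, nor our simulation code):

```python
import numpy as np

def feedforward_current(M=3, N_E=400, T=2.0, dt=1e-4, tau=0.01, sigma=1.0, seed=0):
    """M independent Ornstein-Uhlenbeck stimulus features s(t), projected on
    the tuning vectors w_i^E of the E neurons (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    s = np.zeros((n_steps, M))
    for t in range(1, n_steps):
        # Euler-Maruyama step of ds = -s/tau dt + sigma * sqrt(2/tau) dW
        s[t] = s[t - 1] * (1 - dt / tau) + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(M)
    w_E = rng.standard_normal((N_E, M))           # random tuning vectors w_i^E
    w_E /= np.linalg.norm(w_E, axis=1, keepdims=True)
    return s @ w_E.T                              # current to each E neuron, shape (n_steps, N_E)
```

The point of the sketch is that the current to each E neuron is a weighted sum of the same M shared features, not a train of individual synaptic events.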

(12) Pg. 5, comparison with random connectivity: "Randomizing E-I and I-E connectivity led to several-fold increases in the encoding error as well as to significant increases in the metabolic cost" and Discussion, pg. 11: "the structured network exhibits several fold lower encoding error compared to unstructured networks": I'm wondering if these comparisons are fair. First, regarding activity changes that affect the metabolic cost - it is known that random balanced networks can have global activity control, so it is not straightforward that randomizing the connectivity will change the metabolic cost. What about shuffling the weights but keeping an average balance for each neuron's input weights? Second, regarding coding error, it is trivial that random weights will not map onto the correct readout. A fairer comparison, in my opinion, would at least be to retrain the output weights to find the best-fitting decoder for the three-dimensional signal, something more akin to a reservoir network.

Thank you for raising these interesting questions. The purpose of comparing networks with and without connectivity structure was to observe causal effects of the connectivity structure on the neural activity. We agree that the effect on the encoding error is close to trivial, because shuffling of connectivity weights decouples neural dynamics from decoding weights. We have carefully considered the Reviewer's suggestions on how to better compare the performance of structured and unstructured networks.

In reply to the first point, we followed the reviewer's suggestion and compared the optimal network with a shuffled network that matched the optimal network in its average balance. This was achieved by increasing the metabolic constant, decreasing the noise intensity and slightly decreasing the feedforward stimulus (we did not find a way to match the net current in both cell types by changing a single parameter). When we compared the metabolic cost of the optimal network and the shuffled network with matched average balance, we still found a lower metabolic cost in the optimal network, even though the difference was now smaller. We replaced Fig. 3B from the first submission with these new results in Fig. 4B and commented on them in the text (page 7).

In reply to the second point, we followed the reviewer’s suggestion and compared the encoding error (RMSE) of the optimal network and the network with shuffled connectivity, where decoding weights are trained such as to optimally reconstruct the target signal. As suggested, we now analyzed the encoding error of the networks using decoding weights trained on the set of spike trains generated by the network, using linear least-squares regression to minimize the decoding error. For a fair and quantitative comparison, and because we did not train decoding weights of our structured model, we performed this same analysis using spike trains generated by networks with structured and shuffled recurrent connectivity. We found that the encoding error is smaller in the E population and much smaller in the I population in the structured compared to the random network. Decoding weights found numerically in the optimal network approach the uniform distribution of weights that we used in our model (Fig. 4A, right). In contrast, decoding weights obtained from the random network do not converge to a uniform distribution, but instead form a much sparser distribution, in particular in I neurons (Supplementary Fig. S3 A). These additional results, reported in the above-mentioned figures, are discussed in the text on page 14.
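A decoder training of this kind can be sketched as follows (a minimal numpy sketch, assuming the spike trains have already been convolved with the decoding filter; function and variable names are illustrative, not our actual analysis code):

```python
import numpy as np

def fit_decoding_weights(filtered_spikes, target):
    """Least-squares fit of decoding weights D minimizing
    || filtered_spikes @ D - target ||^2.
    filtered_spikes: (T, N) spike trains convolved with the decoding filter;
    target: (T, M) signal to reconstruct."""
    D, *_ = np.linalg.lstsq(filtered_spikes, target, rcond=None)
    return D
```

A routine of this kind, applied separately to spike trains from the structured and the shuffled network, yields the decoders whose encoding errors are compared.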

      (13) Pg. 5: "a shift from mean-driven to fluctuation-driven spiking" and Pg. 11 "a network structured as in our efficient coding solution operates in a dynamical regime that is more stimulus-driven, compared to an unstructured network that is more fluctuation driven" - I would expect that the balanced condition dictates that spiking is always fluctuation driven. I'm wondering if the authors can clarify this.

      We agree with the reviewer that networks with and without connectivity structure are fluctuation-driven, because in a mean-driven network the mean current must be suprathreshold (Ahmadian and Miller, 2021), which is not the case of either network. We removed the claim of the change from mean to fluctuation driven regime in the revised paper. We are grateful to the Reviewer for helping us tighten the elaboration of our findings.

      (14) Pg. 5: "suggesting that variability of spiking is independent of the connectivity structure" - the literature of balanced networks argues against this. Is this not simply because you have a noisy input? Can you test this claim?

We thank the reviewer for the suggestion. We tested this claim by measuring the coefficient of variation in networks receiving a constant stimulus. In particular, we set the same strength in each of the M=3 stimulus dimensions and set the stimulus amplitude such as to match the firing rate of the optimal network in response to the OU stimulus. We computed the coefficient of variation in 200 simulation trials. The removal of connectivity structure did not cause a significant change in the coefficient of variation in a network driven by a constant stimulus (Fig. 4E). These additional results are discussed in the text on page 7.
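For reference, the coefficient of variation of single-neuron spiking is the standard CV of the inter-spike intervals; a minimal sketch (illustrative, not our analysis scripts):

```python
import numpy as np

def coefficient_of_variation(spike_times):
    """CV of inter-spike intervals: std(ISI) / mean(ISI).
    CV = 0 for perfectly regular spiking, CV close to 1 for Poisson spiking."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()
```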

We have also taken up the suggestion regarding the claim that variability of spiking is independent of the connectivity structure. We removed this claim in the revision, because we only tested a few specific cases where the connectivity is structured with respect to tuning similarity (fully structured, fully unstructured and partially unstructured networks). These cases are not exhaustive of all possible structures that recurrent connectivity may have.

      (15) Pg. 6: "we also removed the connectivity structure only partially, keeping like-to-like connectivity structure and removing all structure beyond like-to-like" - can you clarify what this means, perhaps using an equation? What connectivity structure is there besides like-to-like?

In the optimal model, the strength of the synapse between a pair of neurons is proportional to the tuning similarity of the two neurons, J<sub>ij</sub> proportional to Y<sub>ij</sub> for Y<sub>ij</sub> > 0 (see Eq. 24 and Fig. 1C(ii)). Besides networks with optimal connectivity, we also tested networks with a simpler connectivity rule. Such a simpler rule prescribes a connection if the pair of neurons has similar tuning (Y<sub>ij</sub> > 0), and no connection otherwise. The strength of a connection following this simpler rule is otherwise random (and not proportional to the pairwise tuning similarity Y<sub>ij</sub>, as it is in the optimal network). We clarified this in the revision (page 8), also by avoiding the term “like-to-like” for the second type of networks, which could indeed be prone to confusion.
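The two rules can be made concrete with a small sketch (numpy; names and scalings are illustrative, and the tuning similarity Y is taken as the dot product of tuning vectors):

```python
import numpy as np

def connectivity(w_pre, w_post, rule="optimal", seed=0):
    """Optimal rule: J_ij proportional to tuning similarity Y_ij (for Y_ij > 0,
    no connection otherwise). Simpler rule: same sparsity pattern (connection
    iff Y_ij > 0), but with random connection strengths."""
    rng = np.random.default_rng(seed)
    Y = w_post @ w_pre.T                       # pairwise tuning similarity Y_ij
    mask = Y > 0                               # connect only similarly tuned pairs
    if rule == "optimal":
        return np.where(mask, Y, 0.0)          # strength proportional to Y_ij
    return np.where(mask, rng.uniform(0.0, 1.0, size=Y.shape), 0.0)  # random strengths
```

Both rules produce the same sparsity pattern; only the optimal rule ties the strength of each existing connection to the tuning similarity.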

(16) Pgs. 6-7: "we indeed found that optimal coding efficiency is achieved with weak adaptation in both cell types" and "adaptation in E neurons promotes efficient coding because it enforces every spike to be error-correcting" - this was not clear to me. First, it appears as though optimal efficiency is achieved without adaptation nor facilitation, i.e., when the time constants are all equal. Indeed, this is what is stated in Table 1. So is there really a weak adaptation present in the optimal case? Second, it seems that the network already enforces each spike to be error-correcting without adaptation, so why and how would adaptation help with this?

We agree with the Reviewer that the network without adaptation in E and I neurons is already optimal. It is also true that most spikes in an optimal network should already be error-correcting (besides some spikes that might be caused by the noise). However, regimes with weak adaptation in E neurons remain close to optimality. Spike-triggered facilitation, meanwhile, adds unnecessary spikes that decrease network efficiency. We revised Fig. 5 (Fig. 4 in the first submission) and replaced the 2-dimensional plots in Fig. 4C-F with plots that show the differential effect of adaptation in E neurons (top) and in I neurons (bottom plots) for the measures of the encoding error (RMSE), the efficiency (average loss) and the firing rate (Fig. 5B-D). In the new Fig. 5C it is evident that the loss of the E and I populations grows slowly with adaptation in E neurons (top), while it grows faster with adaptation in I neurons (bottom). These considerations are explained in the revised text on page 9.

      (17) Pg. 7: "adaptation in E neurons resulted in an increase of the encoding error in E neurons and a decrease in I neurons" - it would be nice if the authors could provide any explanation or intuition for why this is the case. Could it perhaps be because the E population has fewer spikes, making the signal easier to track for the I population?

      We agree that this could indeed be the case. We commented on it in revision (page 9).

      (18) Pg. 7: "The average balance was precise...with strong adaptation in E neurons, and it got weaker when increasing the adaptation in I neurons (Figure 4E)" - I found the wording of this a bit confusing. Didn't the balance get stronger with larger I time constants?

      By increasing the time constant of I neurons, the average imbalance got weaker (closer to zero) in E neurons (Fig. 5G, left), but stronger (further away from zero) in I neurons (Fig. 5G, right). We have revised the text on page 9 to make this clearer.

      (19) Pg. 7: Figure 4F is not directly described in the text.

      We have now added text (page 9) commenting on this figure in revision.

      (20) Pg. 8: "indicating that the recurrent network dynamics generates substantial variability even in the absence of variability in the external current" -- how does this observation relate to your earlier claim (which I noted above) that "variability of spiking is independent of connectivity structure"?

We agree that the claim about variability of spiking being independent of connectivity structure was overstated, and we have thus removed it. The observation that we wanted to report is that structured and unstructured networks have very similar levels of single-neuron spiking variability. The fact that much of the variability of the optimal network is generated by recurrent connections is not incompatible with this observation. We revised the related text (page 11) for clarity.

      (21) Pg. 9: "We found that in the optimally efficient network, the mean E-I and I-E synaptic efficacy are exactly balanced" - isn't this by design based on the derivation of the network?

      True, the I-E connectivity matrix is the transpose of the E-I connectivity matrix, and their means are the same by the analytical solution. This however remains a finding of our study. We have clarified this in the revised text (page 12).

      (22) Pg. 30, eq. 25: the authors should verify if they include all possible connectivity here, or if they exclude EE connectivity beforehand.

      We now specify that the equation for recurrent connectivity (Eq. 24, Eq. 25 in first submission) does not include the E-E connectivity in the revised text (page 41).

      Reviewer #3 (Recommendations For The Authors):

      Essential

      (1)  Currently, they measure the RMSE and cost of the E and I population separately, and the 1CT model. Then, they average the losses of the E and I populations, and compare that to the 1CT model, with the conclusion that the 1CT model has a higher average loss. However, it seems to me that only the E population should be compared to the 1CT model. The I population loss determines how well the I population can represent the E population representation (which it can do extremely well). But the overall coding accuracy of the network of the input signal itself is only represented by the E population. Even if you do combine the E and I losses, they should be summed, not averaged. I believe a more fair conclusion would be that the E/I networks have generally slightly worse performance because of needing to follow Dale's law, but are still highly efficient and precise nonetheless. Of course, I might be making a critical error somewhere above, and happy to be convinced otherwise!

We carefully considered the reviewer's comment and tested different ways of combining the losses of the E and I populations. We decided to follow the reviewer's suggestion and to compare the loss of the E population of the E-I model with the loss of the one cell type model. As is already evident from Fig. 8G, such a comparison indeed changes the result to make the 1CT model more efficient. Likewise, summing the losses of E and I neurons results in the 1CT model being more efficient than the E-I model. Note, however, the robustness of the E-I model to changes in the metabolic constant (Fig. 6C, top). The firing rates of the E-I model stay within physiological ranges for any value of the metabolic constant, while the firing rate of the 1CT model skyrockets for metabolic constants lower than optimal (Fig. 8I).

      We added to Results (page 14) a summary of these findings.

      (2) The methods and main text should make much clearer what aspects of the derivation are novel, and which are not novel (see review weaknesses for specifics).

      We specified these aspects, as discussed in more detail in the above reply to point 4 of the public review of Reviewer 1.

      Request:

      If possible, I would like to see the code before publication and give recommendations on that (is it easy to parse and reproduce, etc.)

      We are happy to share the computer code with the reviewer and the community. We added a link to our public repository containing the computer code that we used for simulations and analysis to the preprint and submission (section “Code availability” on page 17). 

      Suggestions:

      (1) I believe that for an eLife audience, the main text is too math-heavy at the beginning, and it could be much simplified, or more effort could be made to guide the reader through the math.

We did our best to improve the clarity of the description of mathematical expressions in the main text.

      (2) Generally vector notation makes network equations for spiking neurons much clearer and easier to parse, I would recommend using that throughout the paper (and not just in the supplementary methods).

      We now use vector notation throughout the paper whenever we think that this improves the intelligibility of the text. 

      (3) In the discussion or at the end of the results adding a clear section summarizing what the minimal requirements or essential assumptions are for biological networks to implement this theory would be helpful for experimentalists and theorists alike.

      We have added such a section in Discussion (page 15). 

      (5) I think the title is a bit too cumbersome and hard to parse. Might I suggest something like 'Efficient coding and energy use in biophysically realistic excitatory-inhibitory spiking networks' or 'Biophysically constrained excitatory-inhibitory spiking networks can efficiently implement efficient coding'.

      We followed reviewer’s suggestion and changed the title to “Efficient coding in biophysically realistic excitatory-inhibitory spiking networks.”

      (6) How the connections were shuffled exactly was not clear to me in how it was described now. Did they just take the derived connectivity, and shuffle the connections around? I recommend a more explicit methods section on it (I might have missed it).

Indeed, the connections of the optimal network were randomly shuffled, without repetition, between all neuronal pairs of a specific connectivity matrix. This preserves all properties of the distribution of connectivity weights and removes only the structure of the connectivity, which is precisely what we wanted to test. We now added a section in Methods (“Removal of connectivity structure”) on pages 51-52 where we explain how the connectivity structure is removed.
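Concretely, a shuffle of this kind can be sketched as follows (an illustrative numpy sketch of the permutation described above, not our simulation code):

```python
import numpy as np

def shuffle_connectivity(J, seed=0):
    """Randomly permute all entries of the connectivity matrix J without
    repetition: the weight distribution is preserved exactly, while the
    relation of each weight to the neurons' tuning is destroyed."""
    rng = np.random.default_rng(seed)
    flat = J.flatten()        # flatten() returns a copy, J is untouched
    rng.shuffle(flat)         # random permutation, each weight used once
    return flat.reshape(J.shape)
```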

      (7) Figure 1 sub-panel ordering was confusing to read (first up down, then left right). Not sure if re- arranging is possible, but perhaps it could be A, B, and C at the top, with subsublabels (i) and (ii). Might become too busy though.

      We followed this suggestion and rearranged the Fig. 1 as suggested by the reviewer. 

      (8) Equation 3 in the main text should specify that 'y' stands for either E or I.

      This has been specified in the revision (page 3). 

      (9) Figure 1D shows a rough sketch of the types of connectivities that exist, but I would find it very useful to also see the actual connection strengths and the effect of enforcing Dale's law.

      We revised this figure (now Fig. 1B (ii)) and added connection strengths as well as a sketch of a connection that was removed because of Dale’s law.

      (10) The main text mentions how the readout weights are defined (normal distributions), but I think this should also be mentioned in the methods.

Agreed. We indeed had a Methods section, “Parametrization of synaptic connectivity” (page 46), where we explain how readout weights are defined. We apologize if the pointer to this section was not salient enough in the first submission. We made sure that the revised main text contains a clear pointer to this Methods section for details.

      (11) The text seems to mix ‘decoding weights’ and ‘readout weights’.

      Thanks for this suggestion to use consistent language. We opted for ‘decoding weights’ and removed ‘readout weights’.

      (12) The way the paper is written makes it quite hard to parse what are new experimental predictions, and what results reproduce known features. I wonder if some sort of 'box' is possible with novel predictions that experimentalists could easily look at and design an experiment around.

      We now revised the text. We clarified for every property of the model if this property is a prediction of facts that were not yet experimentally tested or if it accounts for previously observed properties of biological neurons. Please see the reply to point 4 of Reviewer 1. 

      (13) Typo's etc.:

      Page 5 bottom -- ("all") should have one of the quotes change direction (common latex typo, seems to be the only place with the issue).

      We thank the reviewer for pointing out this typo that has been removed in revision.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

We thank the reviewers for their thorough assessment of our manuscript and their constructive suggestions for further improvement. We are pleased that the reviewers recognise that “this work represents an important and substantive contribution” to the field of genome organization and gene transcription.

      Reviewer 1

      1) Does the CTCF degron substantially remove CTCF from the Mnx1/Shh TAD border? In prior AID-CTCF degron studies a considerable fraction of cohesin dependent TAD borders are retained upon CTCF removal. Moreover, CTCF sites at these retained borders still have clear ChIP-seq peaks - even though the protein is >95% depleted and scarcely detectable by western. Thus, while I suspect that the authors are correct that the shorter distance of the 35 kb border deletion contributes substantially to the increased crosstalk between the Mnx1 and Shh-enhancers, I suspect part of the reason for a lack of a similar effect in the CTCF degron is due to the known challenges in removing CTCF from this border. To argue that the border but not the CTCF is important, I think it would be helpful to show the CTCF signal is sufficiently lost in the degron by ChIP-seq and/or show that this TAD border has been lost by Hi-C. Alternatively, the authors could tone down this claim to something more conservative, as I did not find it to be presented as a key conclusion of the paper as a whole.

      We used the CTCF-AID mESC line published by Nora et al (2017). In our previous manuscript (Kane et al., 2022) we presented the published Hi-C and CTCF-ChIP-seq data from these cells at the Shh TAD (Fig 2c of Kane et al) – reproduced below for the reviewer’s benefit. This shows the loss of insulation at the Shh/Mnx1 TAD boundary when CTCF is degraded, and the loss of CTCF ChIP-seq signal at this boundary.


2) In my opinion, the authors' description of existing data for the importance of TAD borders in enhancer promoter regulation is not described in a sufficiently balanced and complete manner, and overall impression given by the text is that CTCF marked borders have little serious evidence for a role in developmental enhancer specificity and are maybe a cancer thing. This is doubly unfortunate, as it undermines the impact of the authors work in expanding our view of what TAD borders are in a regulatory sense, as well as presents an unbalanced view of work in the field. This is of course easily corrected. In particular, I recommend the following revisions: The manuscript states "depletion of CTCF has only a small effect on transcription in cell culture (Nora et al., 2017; Hsieh et al., 2022)." It should be clarified that there is only a small *acute* effect on transcription (in the first 6-12 hours), which may tell us more about the timescale at which promoters sample, integrate and respond to changes in their enhancer environment than about the roles of CTCF particularly. Notably, this degradation is *lethal*, it results in massive changes in transcription after 4 days, and I suspect the authors agree that this lethal effect arises from CTCF's role in transcription regulation (if you remove some key cytoskeletal protein or metabolic enzyme the primary cause of cell death is not transcriptional, but almost all the evidence for CTCF's vital role in the cell is linked in one way or another to transcription).

      As suggested by the reviewer we have inserted the word “acute” into that sentence.

      The discussion of TAD border deletions is more one-sided than ideal. I appreciate the discussion is usually even more unbalanced when presenting the opposite view in the literature - many works only cite the examples where border deletion does lead to ectopic expression and phenotypes. The current text presented a subset of these border deletion data in such a way as to give me the impression the authors are deeply skeptical that CTCF plays a role as an insulator of E-P interactions in a developmental context (rather than just as a weird cancer thing). For example: Pennacchio's lab has analyzed a series of TAD border deletions with more examples of both lethal effects and effects with no apparent phenotype 3

I appreciate that Bickmore and colleagues found quite phenotypically normal mice upon deletion of CTCF sites from Shh, but it might be balanced to still reference the work from Ushiki et al that indicates in humans the CTCF site does play a role in Shh - ZRS communication. As the authors are doubtless aware, Andrey and colleagues show a CTCF dependent enhancement of a sensitized ZRS enhancer. Zuin et al., in an elegant experiment in which an enhancer is mobilized to different distances away from its promoter using transposon induction, reported a complete lack of detection of enhancers mobilizing outside the TAD to activate gene expression. A balanced presentation of the data on CTCF role might include some discussion of the above. In light of these earlier works, the findings the authors report about border bypass are all the more surprising.


We thank the reviewer for highlighting some of these studies, especially for drawing our attention to the interesting recent preprint from Chakraborty et al. (doi.org/10.1101/2024.08.03.606480), which we now discuss in the revised manuscript. As suggested by the reviewer, we now also cite Ushiki et al., 2021 in the Introduction in the context of CTCF-associated phenotypes, rather than just in the Discussion as in the original submission. We already cited the work of Andrey and colleagues (Paliou et al). However, we chose not to cite the Pennacchio study, because the deletions used were large – all >10kb and some as large as 80kb. Therefore, we consider it highly likely that other regulatory sequences beyond CTCF sites themselves may have been deleted, complicating conclusions drawn about the function of the TAD boundaries per se. We have also chosen to focus our discussion on studies of enhancers in their native genomic locus, and predominantly on in vivo analyses, rather than on ectopic enhancer integrations (such as Zuin et al) in cell lines.

4) By contrast, direct evidence for cross TAD interactions at endogenous loci has not to my knowledge been shown as clearly as described in the current manuscript. Recent work from Rocha and colleagues showed evidence that some enhancers upstream of Sox2 can pass ectopically induced boundaries. While recent work has described examples of 'TAD border bypass' at endogenous loci (e.g. for Pitx1 8, Hoxa regulation 9), these reports really just expand the view of regulatory boundaries rather than provide evidence against it. They invoke a 3D stacking of boundaries that allows boundary proximal enhancers and promoters to stack with (and so bypass) an intervening TAD boundary. Notably, in this view enhancers and promoters that lie away from the border of their respective TADs are still separate, and indeed intervening genes between distal enhancers for Pitx1 and Hoxa appear to follow these rules.2 Mnx1 and the Shh enhancers by contrast do not appear to be an example of border stacking. Given that Sox2 at least is also a TAD border, and the position of the bypassing enhancers is not precisely known in the work from Rocha, it is possible that that case is also an example of boundary stacking, which appears less likely in the case of Mnx1 (which does not appear to be at a CTCF-marked border, at least in mESCs).


      We thank the reviewer for highlighting some of these studies. We had already discussed the study from Rocha and colleagues (Chakraborty et al., 2023) and we had discussed the boundary stacking paper from Hung et al. (2024). However, based on the reviewer’s comment we now include a specific discussion about TAD boundary stacking and boundary proximal enhancer bypass, noting that Mnx1 is not close to a TAD boundary. This will become even more relevant in our planned revised manuscript where we will investigate possible Mnx1 activation by Shh enhancers (SBE2/3) located even further away from the Shh/Mnx1 TAD boundary.

      Statistics: Some of the bar graphs quantifying the %-expressing cells do not obviously have associated n-values, nor do some of the violin plots of the distances. I think all these bar graphs could also benefit from adding error bars (e.g. by bootstrapping from the sampled population). This will help the reader more easily appreciate how sampling error and sample size affect the variation seen in the plots.

      We will add the n-values to all graphs. Regarding error bars, we think that showing the data from the two biological replicates separately is a better way to show data reproducibility to the reader than using bootstrapping to estimate error bars.
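For context, the percentile bootstrap the reviewer proposes for the %-expressing bar graphs can be sketched in a few lines of pure Python (the function name and the 30-of-100 nuclei counts below are illustrative, not taken from the manuscript):

```python
import random

def bootstrap_ci(n_expressing, n_total, n_boot=10_000, seed=0):
    """Percentile-bootstrap 95% CI for the % of expressing nuclei:
    resample the scored nuclei with replacement, recompute the
    percentage each time, and take the 2.5th/97.5th percentiles."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    nuclei = [1] * n_expressing + [0] * (n_total - n_expressing)
    pcts = sorted(
        100 * sum(rng.choice(nuclei) for _ in range(n_total)) / n_total
        for _ in range(n_boot)
    )
    return pcts[int(0.025 * n_boot)], pcts[int(0.975 * n_boot)]

# e.g. 30 of 100 scored nuclei showing a nascent signal
lo, hi = bootstrap_ci(30, 100)
```

Note that such an interval captures only the within-sample counting error; showing the two biological replicates side by side additionally conveys between-embryo variation.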

      Figures 2 and 3: I would have preferred the authors zoom in more on the FISH spots to help the reader appreciate the proximity. I do appreciate also seeing a field of more than 1 cell (to give some sense of the variability), but these images mostly have only 1 spot pair per panel, which is exceedingly small as they contain parts of more than 1 nucleus. There is also unnecessary white space in this figure that could have been used to show zoom in panels.

      The same applies to the image panels in Figure 3 as for figure 2 - there is considerable unused whitespace, the image panels capture mostly a single nucleus and its pattern of DAPI dense heterochromatin (which isn't particularly relevant to the narrative) while the fluorescent spots that are the focus of the narrative are quite small. It is nice to have an example of the cell to see that this isn't just random background (that there is just one spot per cell) - in that sense though it's equally helpful to show it's not just 1 cell in the field that has the signal-to-noise (SNR) shown. For this figure and the panels in figure 2, I'd recommend showing a zoom out showing ~3 nuclei with transcription foci (at least in the regions where the % transcribing is >60% it should be fine to have adjacent nuclei transcribing, for those where it is 10%, 1 of 3 nuclei transcribing in the image selected would also help get the sense of the data). These zoom out images would also give a sense of the SNR in the image, and then a zoom in where the FISH spots are sizable would make it easier to see the neighboring transcripts. Extended Data Fig 3 does a better job showing the context of the limb and then zooming in to an image where the RNA spots are appreciable. It looks like the resolution of the zoom in is lower, such that zooming in further on the spots in this data may not enhance the image.


      In response to the reviewer’s comment, we will present zoomed-out and zoomed-in images as suggested.

      1. Figure 3 - DNA FISH It would be helpful to include a diagram indicating where the DNA FISH probes are located on the genome and their sizes in kb as an inset in the figure.


      We will indicate the locations of DNA-FISH probes in a revised version of Figure 1a. Probe sizes are listed in the supplementary tables. We have now made this clearer in the legend to Figure 3.


      Reviewer 2

      The authors claim that co-expression of Mnx1 and Shh in the foregut and lung buds is also driven by boundary crossing contacts with the MACS1 enhancer. However, the effect of the boundary deletion on the co-transcription of Shh and Mnx1 is only shown for the ZPA. In this sense I find the authors' statement in the following paragraph potentially misleading: "In the ZPA, the foregut, and the lung buds, the majority of Mnx1 RNA-FISH signals are at alleles that show simultaneous signal for Shh nascent transcript from the same allele (closely apposed signals) (Fig. 2a, b and Extended Data Fig. 2a). In del 35 embryos, an even higher proportion of Mnx1 transcribing alleles also transcribe Shh (Fig. 2b, Extended Data Fig. 2a, Extended Data Table 3). These data suggest that both the ZRS and MACS1 enhancers are able to simultaneously activate transcription at two gene loci on the same chromosome". In my opinion this phrasing implicitly extends the increase in Mnx1-Shh co-expressing nuclei observed in the ZPA of 35 del embryos to the expression of these two genes in the foregut and lung buds (driven by the MACS1 enhancer) while this effect has not been specifically addressed. In a previous work, the authors showed that boundary deletion does not impact Mnx1 expression in the foregut and lungs. It would be important to clarify whether more precise analyses in this study have led to different conclusions or, alternatively, appropriately discuss the results. Ideally the authors should analyse the effect of the 35 del allele in the foregut / lung buds or rephrase the statement about the sharing of the MACS1 enhancer.

      The reviewer is correct that in our previous publication (Williamson et al., 2019) we did not detect Mnx1 expression in the lungs of 35kb del embryos. However, we only examined this by in situ hybridisation so we probably lacked the sensitivity to detect weak Mnx1 expression. In response to the reviewer’s comments, we now propose to do RNA FISH for transcription at Mnx1 in other tissues of 35kb del embryos.

      The authors use the quantifications of nuclei co-expressing Mnx1 and Shh from the same allele as an indicator of simultaneous transcription of the two genes by the sharing of the enhancer, as opposed to a model of alternate transcriptional bursts. However, I am concerned that the time scale at which looping and transcriptional bursts occur is at odds with the detection of nascent transcription in FISH experiments, thus not excluding that shifting of the enhancer from one promoter to the other could still result in detection of nascent RNA of the two genes in the same allele. In any case, following the argumentation of the authors, the fraction of nuclei expressing Mnx1 alone does not appear to be significantly different from those expressing Mnx1 and Shh, and the increase of Mnx1-expressing nuclei upon boundary deletion seems proportionally similar to the increase of Mnx1+/Shh+ nuclei. In my opinion, this makes it difficult to interpret the detection of Mnx1 alone or both Mnx1-Shh expression as a reflection of alternate looping and transcriptional bursts versus enhancer sharing. Determining whether the two promoters compete for the interaction with the enhancer or share it would require estimating whether in the 35 del homozygote embryos Shh expression is reduced compared to wt, as a result of the increased interaction of the ZRS with the enhancer. The authors claim that there are no differences in the % of cells expressing Shh upon boundary deletion, but in my opinion this measurement is not sufficient to estimate a change in transcriptional rate (frequency of bursting). Nascent mRNA level detection in single cells would allow to better assess competition or concomitant activation of the two genes. Not being an expert in the RNA FISH technique, it is not clear to me whether fluorescence intensity could be used as an estimator of transcription. From the images of the authors, in some cases it seems that expression of Shh alone is higher than when both Shh and Mnx1 are transcribed from the same allele (Fig. 2a, left panel, Fig. 2c left vs right panel). However, in other cases an opposite trend can be observed (Mnx1 intensity in Fig. 2a central vs right panel). Thus, a single-nucleus PCR or RNAseq approach may be more suited for this assessment.


      We respectfully disagree with the reviewer. We argue that nascent RNA FISH, using probe pools that for the most part detect the introns of Shh and Mnx1, is a better measure of transcriptional bursting frequency (on or off) than probe signal intensity, and is therefore a measurement of transcription rate. Single-nucleus PCR or RNAseq would not assay nascent transcription and would not distinguish between alleles.

      Minor comments: 3. In the mESC model overexpressing the tZRS-VP64 construct, Shh and Mnx1 seem to be transcribed at similar rates compared to what is observed in vivo (where only a minor fraction of Shh+ cells express Mnx1). Thus, despite the fact that TAD boundary deletion increases Mnx1, but not Shh, expression, the ZRS activity seems to more easily overcome the border in this context than in vivo. Could the authors comment on this interesting observation? Might it relate to the insulation score of TAD boundaries in the mESCs compared to in vivo? Alternatively, could it reflect that combinatorial TF binding to an enhancer contributes to its directionality?


      These are interesting speculations by the reviewer, but we would argue that it is hard to compare in vivo and in vitro experiments. For example, in the limb bud, the ZPA region where the ZRS is active cannot be distinguished morphologically from the surrounding mesenchymal cells, therefore it is likely that some nuclei that are just outside the ZPA may be included in the analysis.

      Overall, figure organization and clarity could be improved. For example, RNA FISH images in Fig. 1 could be enlarged (to the same size as the broad-view image) and RNA FISH signals could be highlighted with arrowheads. Panel distribution could also be optimized.


      We will try to clarify these figures – see also response to reviewer 1 (point 6).


      Reviewer 3

      There are a couple of claims and conclusions that are not fully supported by the data, and which I think could be resolved by rephrasing them and/or qualify them as preliminary or speculative. The authors often indicate co-expression as suggestive of co-regulation by a single enhancer, when in most cases this is not formally shown; such suggestion remains one among other possibilities. For instance, co-expression of Shh and Mnx1 in the developing bud is attributed to the ZRS enhancer, co-expression of Shh and Mnx1 in the foregut is attributed to MACS1 enhancer. Do the authors have any evidence that when deleting these enhancers, Mnx1 expression is abolished (or reduced) in the respective tissues?

      If not, I think the following sentences need revision, because causality is implied by the way it is written but it is not formally shown (and the data could suggest other options too):

      "However, we have previously identified that ZRS can also drive low level expression of Mnx1, located 150kb away in the adjacent TAD, in the developing limb bud (Williamson et al., 2019)." No genetic evidence is provided in Williamson et al. 2019

      i) It is true that in Williamson et al., we did not provide genetic evidence that ZRS is the enhancer responsible for Mnx1 expression in the limb bud ZPA. However, there is no other known enhancer in biology with activity specific to the ZPA, and when the ZRS is deleted the ZPA no longer functions as a signaling centre for the limb bud. As a compromise, we have rephrased the indicated text to “However, we have previously identified that ZRS also appears to be able to drive low level expression of Mnx1, located 150kb away in the adjacent TAD, in the ZPA of the developing limb bud”.

      "However, we also detect nascent transcription from Mnx1 in the Shh expressing portions of the developing ventral foregut and the lung bud of E10.5 embryos, an activity that is driven by the Shh MACS1 enhancer, located a further 100kb into the Shh TAD from ZRS (Sagai et al., 2017) and therefore able to induce transcription at Mnx1 across a TAD boundary from a distance of >260 kb (Fig. 1a)."

      ii) We have modified the text to now read “However, we also detect nascent transcription from Mnx1 in the Shh expressing portions of the developing ventral foregut and the lung bud of E10.5 embryos, an activity that is likely to be driven by the Shh MACS1 enhancer, located a further 100kb into the Shh TAD from ZRS”.

      "These data suggest that both the ZRS and MACS1 enhancers are able to simultaneously activate transcription at two gene loci on the same chromosome."

      iii) We have modified this statement to now read that these enhancers “may be able to simultaneously activate transcription at two gene loci on the same chromosome”.

      "This is the first report of two endogenous mammalian genes transcribed simultaneously under the control of the same enhancer" (can the authors really claim this without genetic evidence, i.e., deleting the enhancer? Isn't that the golden standard in the field?).

      iv) We stand by this claim, because we have been able to provide evidence in support of our observations in tissues by using synthetic enhancer activation in cell culture, where we can be absolutely sure which enhancer is responsible for activation.

      "Therefore, the Shh ZRS enhancer can simultaneously activate transcription at two genes and across an intact, but porous, TAD boundary."

      See response (iv) above.

      "This is a consequence of ZRS-driven activation, not Mnx1 transcription per se."

      v) We stand by this claim.

      The mathematical model, even if simple, is very poorly described. In the results section, it is not easy to understand what the model takes into account, etc; it would be important for non-experts to understand as well what is at stake. In the methods section, it does not seem to be properly described; it is only stated "The association between the transcription of Shh and Mnx1 regulated by the same enhancer was done by linear modelling with binomial link function." Would this be enough to recreate / reproduce the same model? I am not a mathematician, but I suspect more details would be needed.

      We apologize if our approach was not clear. We used logistic regression, not a mathematical model. We have now expanded the relevant Methods section to read:

      “To test whether or not there is a tendency of coexpression between two loci on the same chromatid, only nuclei with exactly one signal of each locus are informative. For these nuclei, we scored how many had expression in cis and how many in trans. To assess whether there was chromatid-specific coexpression, we tested statistically whether there was an excess of nuclei showing expression in cis. We did this using logistic regression, a form of generalized linear regression model. More specifically, we tested, for each model, whether the model intercept was significantly different from zero by using the z-scaled test statistic returned by these models and converting it to a p-value.”
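For an intercept-only logistic regression, this Wald test on the intercept has a closed form, so the described analysis can be reproduced without GLM software. A minimal pure-Python sketch (the function name and the 40-cis/15-trans nuclei counts are hypothetical, for illustration only):

```python
import math

def cis_coexpression_test(n_cis, n_trans):
    """Wald z-test for the intercept of an intercept-only logistic
    regression on per-nucleus cis (1) vs trans (0) coexpression calls.
    A non-zero intercept means the cis fraction differs from 0.5."""
    n = n_cis + n_trans
    p_hat = n_cis / n                            # MLE of P(cis)
    beta = math.log(p_hat / (1 - p_hat))         # intercept = logit(p_hat)
    se = 1 / math.sqrt(n * p_hat * (1 - p_hat))  # Wald standard error
    z = beta / se                                # z-scaled test statistic
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return beta, z, p_value

# e.g. 40 informative nuclei coexpressing in cis vs 15 in trans
beta, z, p = cis_coexpression_test(40, 15)
```

An excess of cis nuclei gives a positive intercept and a small p-value, while an exact 50:50 split gives an intercept of 0 and p = 1.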

      The authors claim that an enhancer working exclusively on one gene at a time would lead to a preference in individual expression - is this really the case? Could the authors show the expected scenarios for [one enhancer - two common targets] versus [two enhancers - two independent targets] and how this compares to the data?


      Our statistical analysis is restricted to the scenario of one enhancer acting on two genes (either simultaneously or alternately). We do not test a two-enhancer, two-target-gene scenario because it is not relevant to our experimental analyses using synthetic activation of a single enhancer (with tZRS-Vp64, Extended Data Table 4).

      1. The results obtained with the VP64 activation (activation of ZRS leads to increased expression of Mnx1) are used by the authors as another piece of evidence that ZRS controls Mnx1 - but could VP64 activation be inducing chromatin opening / enhanced accessibility and therefore increased expression across the TAD boundary? I am not sure the authors need to test this, but they should at least acknowledge other possibilities (in relation to point 1).

      We have previously shown (Benabdallah et al., 2019) that tal-VP64 activators alter chromatin structure (H3K27ac) in the Shh TAD only locally at the site of binding and at the Shh gene, and that this does not spread more generally. We have clarified this in the revised text. We also note that the effect of both the 35kb deletion and cohesin degradation on Mnx1 activation from the tZRS-Vp64 activator would not be consistent with a model of general chromatin opening/accessibility. The same argument applies to the DNA-FISH experiment (Fig 3) showing Mnx1 activation in the limb bud (ZPA) occurs specifically in the context of a compact chromatin conformation.

      "In the nuclei of pre-motor neurons, where Mnx1 expression is driven from its own proximal enhancers (Fig. 1a), Mnx-ZRS and Mnx1-Shh distances are not different between Mnx1 expressing and non-expressing alleles." The authors use this as an argument to claim that Mnx1 expression per se does not explain the distance differences observed in the limb bud - but can such comparisons of expression and distances between loci be made between different cell types? Is there enough evidence for this to be a valid assumption? If not, then the assumption should be explicitly presented.


      We believe that the reviewer is confused here. We are not suggesting that Mnx1 expression per se doesn’t explain the distance differences in the limb bud, rather that these distance differences in the limb bud associated with Mnx1 transcription do not occur in the pre-motor neurons where activation is not dependent on distal enhancers, particularly in the Shh TAD.

      1. In Fig. 3b the authors show that shorter distances between the loci (Mnx1, Shh, ZRS) were associated with simultaneous transcription at Mnx1 and Shh, implying throughout that this would be associated with common activation by ZRS; but the shorter distances between the three loci are also associated with Mnx1 transcription alone. How is this explained?


      This is explained by the configuration of the Shh TAD and the general spatial proximity of Shh-ZRS in both expressing and non-expressing tissues, which is due to the CTCF-mediated loop and is apparent in Hi-C heat maps.

      1. The text could be revised to look out for "expression levels" versus "expression frequency" - in several instances the authors mention expression "levels" when they are referring to the % of cells expressing a given gene, which would thus be more appropriately called "expression frequency"?

      The reviewer makes an important point. In the revised manuscript we have removed all mention of “expression levels” and have replaced these with “expression frequency”.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      Summary and significance in the context of the field:

      In this work, the authors conduct a detailed investigation of the 'ectopic'/'bystander' activation of the gene Mnx1 by enhancers of Shh, located in the neighboring TAD. TAD borders have been shown in a number of works to contribute to the remarkable specificity of enhancer-promoter choice, and the current dogma in the field is to view them as perfect boundaries to enhancer-promoter interaction. Notably, this current dogma also highlights a conundrum in our understanding of gene regulation, as available 3D genome data from both sequencing and microscopy show that TAD borders are regions of abrupt decrease in 3D proximity, but far from perfect borders, with numerous cross-TAD interactions detected by Hi-C and its variants and by single-cell microscopy (albeit fewer than the local intra-TAD interactions).

      The authors show convincing data that Mnx1 indeed responds transcriptionally to several Shh-enhancers located over 100 kb distal and on the wrong side of the TAD boundary. The data come from developing mouse embryos, span several tissues, and include key controls for specificity of the method. This provides convincing data with which to challenge the currently widely accepted view of TADs as a significant boundary, complementing the few examples that indicate that such regulation is possible in special cases (see further discussion in 2b below). I believe this work represents an important and substantive contribution to the field and should ultimately be published, after a few notable issues have been addressed.

      Major comments:

      Does the CTCF degron substantially remove CTCF from the Mnx1/Shh TAD border? In prior AID-CTCF degron studies [1,2], a considerable fraction of cohesin-dependent TAD borders are retained upon CTCF removal. Moreover, CTCF sites at these retained borders still have clear ChIP-seq peaks - even though the protein is >95% depleted and scarcely detectable by western. Thus, while I suspect that the authors are correct that the shorter distance of the 35 kb border deletion contributes substantially to the increased crosstalk between the Mnx1 and Shh-enhancers, I suspect part of the reason for a lack of a similar effect in the CTCF degron is due to the known challenges in removing CTCF from this border. To argue that the border but not the CTCF is important, I think it would be helpful to show the CTCF signal is sufficiently lost in the degron by ChIP-seq and/or show that this TAD border has been lost by Hi-C. Alternatively, the authors could tone down this claim to something more conservative, as I did not find it to be presented as a key conclusion of the paper as a whole.

      Minor comments:

      I believe the manuscript could be strengthened by some textual revisions of the introduction: 2a) In particular, in my opinion, the authors' description of existing data for the importance of TAD borders in enhancer promoter regulation is not described in a sufficiently balanced and complete manner, and the overall impression given by the text is that CTCF-marked borders have little serious evidence for a role in developmental enhancer specificity and are maybe a cancer thing. This is doubly unfortunate, as it undermines the impact of the authors' work in expanding our view of what TAD borders are in a regulatory sense, as well as presents an unbalanced view of work in the field. This is of course easily corrected. In particular I recommend the following revisions:

      It is stated that "depletion of CTCF has only a small effect on transcription in cell culture (Nora et al., 2017; Hsieh et al., 2022)." It should be clarified that there is only a small acute effect on transcription (in the first 6-12 hours), which may tell us more about the timescale at which promoters sample, integrate and respond to changes in their enhancer environment than about the roles of CTCF particularly. Notably, this degradation is lethal, it results in massive changes in transcription after 4 days, and I suspect the authors agree that this lethal effect arises from CTCF's role in transcription regulation (if you remove some key cytoskeletal protein or metabolic enzyme the primary cause of cell death is not transcriptional, but almost all the evidence for CTCF's vital role in the cell is linked in one way or another to transcription). The discussion of TAD border deletions is more one-sided than ideal. I appreciate the discussion is usually even more unbalanced when presenting the opposite view in the literature - many works only cite the examples where border deletion does lead to ectopic expression and phenotypes. The current text presented a subset of these border deletion data in such a way as to give me the impression the authors are deeply skeptical that CTCF plays a role as an insulator of E-P interactions in a developmental context (rather than just as a weird cancer thing). For example:

      Pennacchio's lab has analyzed a series of TAD border deletions with more examples of both lethal effects and effects with no apparent phenotype [3]

      Deletion of TAD borders upstream of the FGF3/4/15 locus in mouse is embryonic lethal (particularly the border Kim et al label TB1 and didn't delete in their cancer model). https://www.biorxiv.org/content/10.1101/2024.08.03.606480v1

      I appreciate that Bickmore and colleagues found quite phenotypically normal mice upon deletion of CTCF sites from Shh, but it might be balanced to still reference the work from Ushiki et al that indicates in humans the CTCF site does play a role in Shh - ZRS communication [4]

      As the authors are doubtless aware, Andrey and colleagues show a CTCF-dependent enhancement of a sensitized ZRS enhancer [5]

      Zuin et al., in an elegant experiment in which an enhancer is mobilized to different distances away from its promoter using transposon induction, reported a complete lack of detection of enhancers mobilizing outside the TAD to activate gene expression [6].

      A balanced presentation of the data on CTCF role might include some discussion of the above. In light of these earlier works, the findings the authors report about border bypass are all the more surprising.

      2b) By contrast, direct evidence for cross TAD interactions at endogenous loci has not to my knowledge been shown as clearly as described in the current manuscript.

      Recent work from Rocha and colleagues [7] showed evidence that some enhancers upstream of Sox2 can pass ectopically induced boundaries. While recent work has described examples of 'TAD border bypass' at endogenous loci (e.g. for Pitx1 [8], Hoxa regulation [9]), these reports really just expand the view of regulatory boundaries rather than provide evidence against it. They invoke a 3D stacking of boundaries that allows boundary proximal enhancers and promoters to stack with (and so bypass) an intervening TAD boundary. Notably, in this view enhancers and promoters that lie away from the border of their respective TADs are still separate, and indeed intervening genes between distal enhancers for Pitx1 and Hoxa appear to follow these rules [2]. Mnx1 and the Shh enhancers by contrast do not appear to be an example of border stacking. Given that Sox2 at least is also a TAD border, and the position of the bypassing enhancers is not precisely known in the work from Rocha, it is possible that that case is also an example of boundary stacking, which appears less likely in the case of Mnx1 (which does not appear to be at a CTCF-marked border, at least in mESCs).

      Statistics

      Some of the bar graphs quantifying the %-expressing cells do not obviously have associated n-values, nor do some of the violin plots of the distances. I think all these bar graphs could also benefit from adding error bars (e.g. by bootstrapping from the sampled population). This will help the reader more easily appreciate how sampling error and sample size affect the variation seen in the plots.

      Recommendations for improving the figures

      Figure 2

      I would have preferred the authors zoom in more on the FISH spots to help the reader appreciate the proximity. I do appreciate also seeing a field of more than 1 cell (to give some sense of the variability), but these images mostly have only 1 spot pair per panel, which is exceedingly small as they contain parts of more than 1 nucleus. There is also unnecessary white space in this figure that could have been used to show zoom in panels.

      Figure 3 -image panels

      The same applies to the image panels in this figure as for figure 2 - there is considerable unused whitespace, the image panels capture mostly a single nucleus and its pattern of DAPI dense heterochromatin (which isn't particularly relevant to the narrative) while the fluorescent spots that are the focus of the narrative are quite small. It is nice to have an example of the cell to see that this isn't just random background (that there is just one spot per cell) - in that sense though it's equally helpful to show it's not just 1 cell in the field that has the signal-to-noise (SNR) shown. For this figure and the panels in figure 2, I'd recommend showing a zoom out showing ~3 nuclei with transcription foci (at least in the regions where the % transcribing is >60% it should be fine to have adjacent nuclei transcribing, for those where it is 10%, 1 of 3 nuclei transcribing in the image selected would also help get the sense of the data). These zoom out images would also give a sense of the SNR in the image, and then a zoom in where the FISH spots are sizable would make it easier to see the neighboring transcripts. Extended Data Fig 3 does a better job showing the context of the limb and then zooming in to an image where the RNA spots are appreciable. It looks like the resolution of the zoom in is lower, such that zooming in further on the spots in this data may not enhance the image.

      Figure 3 - DNA FISH

      It would be helpful to include a diagram indicating where the DNA FISH probes are located on the genome and their sizes in kb as an inset in the figure.

      References cited above

      1. Nora, E. P., Goloborodko, A., Valton, A.-L., Gibcus, J. H., Uebersohn, A., Abdennur, N., Dekker, J., Mirny, L. A. & Bruneau, B. G. Targeted Degradation of CTCF Decouples Local Insulation of Chromosome Domains from Genomic Compartmentalization. Cell 169, 930-944.e22 (2017).
      2. Kubo, N., Ishii, H., Gorkin, D., Meitinger, F., Xiong, X., Fang, R., Liu, T., Ye, Z., Li, B., Dixon, J., Desai, A., Zhao, H. & Ren, B. Preservation of Chromatin Organization after Acute Loss of CTCF in Mouse Embryonic Stem Cells. bioRxiv 118737 (2017).
      3. Rajderkar, S., Barozzi, I., Zhu, Y., Hu, R., Zhang, Y., Li, B., Alcaina Caro, A., Fukuda-Yuzawa, Y., Kelman, G., Akeza, A., Blow, M. J., Pham, Q., Harrington, A. N., Godoy, J., Meky, E. M., von Maydell, K., Hunter, R. D., Akiyama, J. A., Novak, C. S., Plajzer-Frick, I., Afzal, V., Tran, S., Lopez-Rios, J., Talkowski, M. E., Lloyd, K. C. K., Ren, B., Dickel, D. E., Visel, A. & Pennacchio, L. A. Topologically associating domain boundaries are required for normal genome function. Commun. Biol. 6, 435 (2023).
      4. Ushiki, A., Zhang, Y., Xiong, C., Zhao, J., Georgakopoulos-Soares, I., Kane, L., Jamieson, K., Bamshad, M. J., Nickerson, D. A., University of Washington Center for Mendelian Genomics, Shen, Y., Lettice, L. A., Silveira-Lucas, E. L., Petit, F. & Ahituv, N. Deletion of CTCF sites in the SHH locus alters enhancer-promoter interactions and leads to acheiropodia. Nat. Commun. 12, 2282 (2021).
      5. Paliou, C., Guckelberger, P., Schöpflin, R., Heinrich, V., Esposito, A., Chiariello, A. M., Bianco, S., Annunziatella, C., Helmuth, J., Haas, S., Jerković, I., Brieske, N., Wittler, L., Timmermann, B., Nicodemi, M., Vingron, M., Mundlos, S. & Andrey, G. Preformed chromatin topology assists transcriptional robustness of Shh during limb development. Proc. Natl. Acad. Sci. U. S. A. 116, 12390-12399 (2019).
      6. Zuin, J., Roth, G., Zhan, Y., Cramard, J., Redolfi, J., Piskadlo, E., Mach, P., Kryzhanovska, M., Tihanyi, G., Kohler, H., Eder, M., Leemans, C., van Steensel, B., Meister, P., Smallwood, S. & Giorgetti, L. Nonlinear control of transcription through enhancer-promoter interactions. Nature 604, 571-577 (2022).
      7. Chakraborty, S., Kopitchinski, N., Zuo, Z., Eraso, A., Awasthi, P., Chari, R., Mitra, A., Tobias, I. C., Moorthy, S. D., Dale, R. K., Mitchell, J. A., Petros, T. J. & Rocha, P. P. Enhancer-promoter interactions can bypass CTCF-mediated boundaries and contribute to phenotypic robustness. Nat. Genet. 55, 280-290 (2023).
      8. Hung, T.-C., Kingsley, D. M. & Boettiger, A. N. Boundary stacking interactions enable cross-TAD enhancer-promoter communication during limb development. Nat. Genet. 56, 306-314 (2024).
      9. Hafner, A., Park, M., Berger, S. E., Murphy, S. E., Nora, E. P. & Boettiger, A. N. Loop stacking organizes genome folding from TADs to chromosomes. Mol. Cell 83, 1377-1392.e6 (2023).

      Significance

The authors show convincing data that Mnx1 indeed responds transcriptionally to several Shh-enhancers located over 100 kb distal and on the wrong side of the TAD boundary. The data come from developing mouse embryos, span several tissues, and include key controls for specificity of the method. This provides a strong basis with which to challenge the currently widely accepted view of TADs as a significant boundary, complementing the few examples that indicate that such regulation is possible in special cases (see further discussion in 2b below). I believe this work represents an important and substantive contribution to the field and should ultimately be published, after a few notable issues have been addressed.

      Audience: I believe this work will be of general interest to the eukaryotic transcription community, the 4D genome community, and the developmental biology community.

      My expertise: developmental biology, 4D genome biology, microscopy

    1. “[With The Capital Order], we can begin to see method in the madness: austerity is a vital bulwark in defense of the capitalist system.” ― Business Recorder
       A 2022 Best Book in Economics ― Financial Times
       Fall 2022 Book Recommendation (General Interest) -- Sean Guynes
       “Shocking disparities underlie economist Clara Mattei’s topical study of austerity measures promoted over the past century. Focusing on 1920s liberal-democracy Britain and fascist Italy, she argues that the profitable application of austerity to these dissimilar nations licensed its use as a capitalist ‘tool of class control.’” ― Nature
       “She argues that forcing a recession or cutting social welfare is not really about budgets and debt. This so-called “economic pain” is inflicted deliberately to make the labour force feel insecure and to stop demanding better conditions.” ― Irish Examiner
       “Clara Mattei shows how the supposedly apolitical science of economics has served, and continues to serve, as an ideology of class oppression. The chapters exploring the birth, in Britain and Italy in the 1920s, of what the author calls ‘the technocratic project’ of austerity, and its political and economic consequences, are particularly illuminating.” -- Robert Skidelsky
       “Illuminating . . . Any reader of The Capital Order will be struck by the contemporary resonances.” ― The New Statesman
       “There is a long history of efforts to separate the political from the economic domain. . . . One very impressive recent study, by Clara Mattei, argues persuasively that this dichotomy, typically taking the form of austerity programs, has been a major instrument of class war for a century, paving the way to fascism, which was indeed welcomed by Western elite opinion.” -- Noam Chomsky ― Truth Out
       "In her book The Capital Order, economist Clara Mattei shows that austerity was thought of as a counter-offensive against experiments in economic democracy." ― Alternatives Economiques
       “A work with remarkable resonance for the moment we are living through. I found it impossible to put down.” -- James K. Galbraith
       “Austerity is not an innocent policy error, but a fallacy functional to dark interests. Mattei’s admirable new book exposes austerity’s hidden agenda.” -- Yanis Varoufakis
       “[A] message for our time.” ― Brazzil Magazine
       “Clara Mattei’s work is an important contribution to building a new economic narrative. At a time when inflation is up and governments feel inclined to once again ‘tighten their belts,’ this book is as relevant as ever.” -- Mariana Mazzucato
       "The Capital Order uses the historical record in Europe to argue that austerity—tightening the belt, cutting government programs—is less about budgets and debt and more about deliberately making the labor force feel insecure." ― APM's Marketplace Morning Report
       “A very readable and historically profound work.” -- translated from German ― H/Soz/Kult
       “Brilliantly provocative . . . powerfully argued. . . . With her history of the relationship between liberal economists and fascism, Mattei puts the skids under complacent champions of liberal democracy who today summon the fascist figure as a reassuring boogyman. . . . A round house critique of the role of liberal economics in general.” -- Adam Tooze ― Chartbook
       "Through meticulously compiled archival material, Mattei explores austerity by studying economists in the 1920s from the birthplace of liberalism (Britain) and the birthplace of fascism (Italy) to draw a provocative conclusion about its nature: 'an anti-democratic reaction to threats from bottom-up social change.'" ― Politics Today
       "The capital order asserts the primacy of capital over labor in the hierarchy of social relations within the capitalist production process. That primacy was threatened after World War I in what Mattei claims was the greatest crisis in the history of capitalism. . . . To counter these trends, Mattei argues, unelected technocratic elites 'invented' austerity as a means of re-naturalizing the capital order. . . . What Britain’s technocrats accomplished through the market, Italy’s fascists accomplished through Mussolini’s edicts. . . . Recommended." ― Choice
       "Austerity is premeditated policy. It’s a blunt instrument that preempts resistance by weakening and dividing the working class while unifying different wings of the ruling class. . . . Mattei documents austerity’s essential role in the rise of fascism." ― Counterpunch
       "Mattei shows how austerity emerged as the response of international capital to the risks to its power and wealth. Its aim was to rescue capitalism from ‘its enemies’ by taming an increasingly politicized and restive class and restoring the prewar order." ― History Today
       "There are few books that once read manage to leave a clear idea and a full-fledged thesis imprinted on the reader’s mind: Clara E. Mattei’s book is one of them." ― The Journal of European Economic History
       "Mattei reminds us that . . . austerity is a one-sided class war, conducted in numbers and defended by economists’ jargon.” -- Aditya Chakrabortty ― The Guardian
       "A wonderful book [and a] compelling story." ― Rethinking Economics
       "She [Mattei] has done an impressive amount of archival research and has skillfully mined the published literature of the interwar period. The fruit of these labors is a rich and insightful account of a pivotal moment in capitalism’s history." -- Gary Mongiovi ― Catalyst
       “It’s often been pointed out that austerity just doesn’t achieve its stated aims of balancing the books and paying down public debt. [In Mattei’s] analysis the actual aim is not the stated one, it is to discipline the working population. Over the last century it would seem to have achieved that quite successfully.” ― The National
       “A powerful critique.” ― Asiana Times
       “A serious economic history of the 1920s and its fiscal and credit policies, and you should not dismiss it.” -- Tyler Cowen ― Marginal Revolution
       “A fascinating history of the rise of austerity policies in post–World War I Europe and how it paved the way for fascism—along with many of the economic policies of today. A must-read, with key lessons for the future. Historical political economy at its best.” -- Thomas Piketty
       "Austerity’s defenders claim that any adverse impact on employment will quickly end and will be justified by eventual success. Such is the theory. Clara Mattei will have none of it. Her vigorously written and well-researched new study, The Capital Order, insists that austerity is a class strategy, not just a policy to restore economic equilibrium." ― European Review of Books
       “A decade after austerity tore British society apart, the UK government stands ready to do so again. Given that it didn’t work the first time around, one wonders why they want to try it again. This is where Mattei’s explanation illuminates brightly: if we think of austerity not as an economic policy, but as a form of capitalist crisis management for moments when the lower orders start to question the governing classes’ preferences, then its repeated dosage—despite its damages—makes much more sense.” -- Mark Blyth
       "Meticulously researched. . . . Mattei’s analysis is an exemplary work of historical political economy that seeks to steer the conversation on capitalist crisis from Keynesianism back toward Marx." ― Phenomenal World
       "In our current moment, as policymakers are once again entertaining monetary tightening as a means to impose necessary hardship & discipline on working people, The Capital Order is a potent reminder of the cruel rationality of austerity." ― Dissent Magazine

      On Mattei: The Capital Order: How Economists Invented Austerity and Paved the Way to Fascism

    1. Reader, 5.0 out of 5 stars: “Ignore ALL critics until you've read the book(s) on MMT from the MMT scholars themselves.” Reviewed in the United States on January 30, 2023. Verified Purchase.
       Read the book. If you're curious, just read the book for yourself. For now, just ignore ALL critics. Read any book on MMT from the original MMT scholars/economists. THEN you'll notice that all these critics of MMT haven't read much, IF ANYTHING, from these scholars. Critics boldly misrepresent what MMT scholars are saying...they create straw man arguments to deter the curious. It is the disingenuous critics who are clearly the "political" movement keeping the facts about money (specifically fiat money), and how it works, hidden from the public. Ignore ALL critics until you've read the book(s), THEN you'll see the critics are disingenuous, arguing against strawmen, with their own political agenda to protect their OWN interests at all costs...even at the cost of the truth. Once the FACTS are out there, like mainstream non-MMT economist Paul Samuelson said (about the balanced budget myth/superstition), once the cat is out of the bag, it's not going back in. You can find his interview on YouTube: "paul samuelson budget myth superstition." People also need to understand that in today's highly polarized political climate, telling the truth is called a "political movement" only because it exposes the extreme lies we've been told. When COVID-19 hit, congress got together and passed a spending bill. Money appeared. Did they get taxes from us or cut trillions in spending beforehand, in order to spend? No. THAT should be a clue.

      On Wray: Making Money Work For Us

    1. The key point is that Na2CrO4 (sodium chromate), Na2Cr2O7 (sodium dichromate), K2CrO4 (potassium chromate), K2Cr2O7 (potassium dichromate), and CrO3 (chromium trioxide) are all alike in one crucial manner: when they are combined with aqueous acid, each of them forms H2CrO4, and ultimately it’s H2CrO4 which does the important chemistry. Unfortunately I rarely see this point explained in textbooks. I remember this causing some confusion for me when I took the course. The K or Na ions present are just spectators.

      This is a really well-explained point about chromic acid.
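For reference, the conversions the passage alludes to can be written out explicitly (standard general-chemistry equations, added here for illustration; they are not part of the original annotation):

```
CrO3 + H2O → H2CrO4                        (chromium trioxide)
CrO4^2- + 2 H+ → H2CrO4                    (from Na2CrO4 or K2CrO4)
Cr2O7^2- + 2 H+ + H2O → 2 H2CrO4           (from Na2Cr2O7 or K2Cr2O7)
```

In each case the Na+ or K+ counterions stay in solution as spectators, and the same active oxidant, H2CrO4, results.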

    1. How is AI currently influencing higher education, and what potential benefits and challenges do you foresee in its continued integration into college classrooms?

      Using AI in education and for your career is just as important as using the Microsoft suite. This tool, among others, is crucial to be able to navigate and use. Employers expect that an employee can use Microsoft, just as they will for AI. I think AI is a great tool, but just like anything else, the user must be smart and not go crashing around like a bull in a china shop. You don’t want to put your personal information, or that of your arch-nemesis, into the system so that random hackers can ruin your life. My sister’s friend created an entire country singer out of AI. The AI “person” is releasing a country music album that’s set to come out in February. Yes, this sounds made up, but it’s true. The country singer is fake and so is her voice. The story is real.

    1. But the second one you referred to is Vannevar Bush, who wrote this beautiful essay in 1945 called “As We May Think.” In it, he basically envisions the internet. He envisions this personal computer called the “memex,” from memory and index. It’s extraordinarily prophetic — not just the technology but the relationships that we’ll have with knowledge, with information, with each other. He talks about this — he said there will be a new profession of trailblazers who will make a career out of finding useful trails through the common record. I love this notion of the common record. In a way, so much of what I do is an attempt to make sense of humanity’s common record.

      memex personal computer

      memory and index

      not just interpersonal but interplanetary and local-first, autonomous, private, secure, permanent, evergreen

      will be a new profession of trailblazers who will make a career out of finding useful trails through the common record.

      not just finding but naming and creating/naming trails, linking meaningfully the personal and the collaboratively emerging interintellect, grounded in individual indranet.work spaces

      indy.memex interpersonal computer

      spiritual reparenting

    1. Below is a concise overview of the key concepts in the article “How Real-Time Materialized Views Work with ksqlDB, Animated.” It explains:

      1. What Real-Time Materialized Views Are

         • A real-time materialized view is a continuously updated “pre-aggregated” or “read-optimized” result of incoming streaming data.
         • Instead of recalculating the entire view on demand (as in many traditional databases), stream processing incrementally updates the view with each new event (the “delta”).

      2. How ksqlDB Maintains These Views

         • Continuous Queries: When you write a SQL-like query in ksqlDB (e.g., CREATE TABLE ... SELECT ... FROM readings GROUP BY ... EMIT CHANGES;), ksqlDB creates a persistent query that runs forever, reading new events from Kafka topics and updating the view.
         • Incremental Updates + Changelog: As ksqlDB updates the materialized view in its local state store (RocksDB), it also emits a new record to a changelog topic in Kafka that captures the change.
           • This changelog topic is essentially the “audit log” of every update.
           • The local RocksDB store is fast but treated as transient; changelog topics in Kafka provide durability and fault tolerance.

      3. Push vs. Pull Queries

         • Pull Queries ask for the current state of the materialized view at the moment you run the query (e.g., SELECT * FROM avg_readings WHERE sensor=...;).
         • Push Queries subscribe to changes as they happen (e.g., SELECT * FROM avg_readings EMIT CHANGES;). You get a continuous stream of updates whenever a new change arrives.

      4. RocksDB as the Local Store

         • Each partition of the input stream(s) to a ksqlDB query is associated with its own local RocksDB instance.
         • RocksDB stores the current state needed for aggregations, joins, etc.
         • Because data is partitioned, all rows with the same key end up on the same partition (and thus the same RocksDB instance).

      5. Automatic Repartitioning

         • If your grouping key is not the same as the original Kafka key, ksqlDB must shuffle data so that rows with the same group key end up on the same partition.
         • This shuffle is handled automatically by creating a *-repartition topic.
         • If your original keys are already aligned with the grouping columns, ksqlDB skips this shuffle to save I/O.

      6. Fault Tolerance via Changelogs

         • If a ksqlDB node dies, a new node can rebuild the materialized view by replaying the changelog from Kafka.
         • Changelog topics use log compaction, which removes older updates to each key, keeping only the latest.
         • This keeps replay time manageable (rather than applying every single historical update).

      7. Latest-by-Offset Aggregations

         • Besides sum, min, max, or average, ksqlDB also supports “latest by offset” to store just the most recent value for each key, effectively creating a “recency cache.”
         • Example:

           CREATE TABLE latest_readings AS
             SELECT sensor,
                    LATEST_BY_OFFSET(area) AS area,
                    LATEST_BY_OFFSET(reading) AS last
             FROM readings
             GROUP BY sensor
             EMIT CHANGES;

           This ensures the table always reflects the last known value for each key (based on Kafka offset).
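The incremental-update mechanics described above can be sketched outside of ksqlDB. Below is a minimal Python simulation (an illustration only, not ksqlDB's actual implementation): a dict stands in for the RocksDB state store, a list stands in for the Kafka changelog topic, and each event updates only the affected key's running aggregate.

```python
# Minimal simulation of a real-time materialized view (NOT ksqlDB itself):
# the dict stands in for RocksDB, the list for the Kafka changelog topic.

def make_view():
    state = {}      # sensor -> (count, total, latest_reading)
    changelog = []  # every update is appended, like a changelog topic

    def on_event(sensor, reading):
        # Incremental update: apply only this event's delta,
        # never rescan the full history.
        count, total, _ = state.get(sensor, (0, 0.0, None))
        state[sensor] = (count + 1, total + reading, reading)
        changelog.append((sensor, state[sensor]))

    def pull(sensor):
        # "Pull query": the view's current state at call time.
        count, total, latest = state[sensor]
        return {"avg": total / count, "last": latest}

    return on_event, pull, changelog

on_event, pull, changelog = make_view()
for sensor, reading in [("a", 45.0), ("b", 41.0), ("a", 42.0)]:
    on_event(sensor, reading)

print(pull("a"))  # {'avg': 43.5, 'last': 42.0}
```

A push query would correspond to handing each appended changelog record to a subscriber as it arrives, rather than reading `state` on demand.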

      Why This Matters

      • Fast Queries: Because the materialized view is already “pre-aggregated,” queries against it are extremely fast—no need to scan or recalculate everything from scratch.
      • Real-Time Updates: The view is updated continuously as new data arrives, so you always have a near-real-time representation of what is happening.
      • Scalable & Fault-Tolerant: Using Kafka’s partitions and log compaction for changelogs, ksqlDB scales horizontally (across multiple nodes) and recovers state quickly when nodes fail.
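The fault-tolerance path (changelog, compaction, replay) can also be illustrated with a small simulation. This is a hedged sketch of the idea, not ksqlDB or Kafka code: compaction keeps only the newest record per key, and a restarted node rebuilds the same state from the shorter log.

```python
# Simulation of changelog compaction and state recovery (illustration only).
# Compaction keeps just the newest record per key, so a restarted node
# replays far fewer records when rebuilding its materialized view.

def compact(changelog):
    latest = {}
    for key, value in changelog:  # later records overwrite earlier ones
        latest[key] = value
    return list(latest.items())

def replay(changelog):
    state = {}
    for key, value in changelog:
        state[key] = value
    return state

changelog = [("a", 45.0), ("b", 41.0), ("a", 42.0), ("a", 43.0)]
compacted = compact(changelog)

print(replay(compacted) == replay(changelog))  # True: same final state
print(len(compacted))                          # 2: one record per key
```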

      Further Resources

      • Try It Out
      • The ksqlDB quickstart is a straightforward way to experiment locally.
      • Once it’s running, you can execute the code examples in the article to see real-time materialized views in action.
      • Next Steps
      • Deep dive into ksqlDB’s fault tolerance and scaling model (i.e., how queries distribute across clusters).
      • Explore additional stream processing patterns such as windowed aggregations for time-based summaries.
      • Learn how joins work between tables and streams in ksqlDB (similar incremental update logic, but with different partitioning considerations).

      In essence, real-time materialized views in ksqlDB let you maintain continually up-to-date “snapshots” of streaming data. By storing incremental results in a local state store and capturing updates in a Kafka changelog, ksqlDB can serve extremely fast queries, recover quickly from failures, and scale out for large data volumes.

    1. It’s also key to surfacing who precisely is benefiting from design, which is key to ensuring that design efforts are equitable, helping to dismantle structures of oppression through design, rather than further reinforce, or worse, amplify them.

      A professor once said in lecture: your opinions are often VERY different from the truth. In the field of design, I think it's incredibly important to constantly check whether bias or preconceptions are mixing into what our research is telling us. It's also important to differentiate ourselves and our experiences/beliefs from what our user experiences/believes. This line is a reminder that a 'bad design' isn't just one that misses the problem; it's one that adds to the detriments of the original problem. Usually, user problems are also complex and intersectional with the identities of the target demographic, and good designs need to consider the user's immediate friction in a scenario, as well as the complexities that exist due to social constructs in the problem space.

    2. persona is only useful if it’s valid. If these details are accurate with respect to the data from your research, then you can use personas as a tool for imagining how any of the design ideas might fit into a person’s life. If you just make someone up and their details aren’t grounded in someone’s reality, your persona will be useless, because what you’re imagining will be fantasy.

      I do agree with the author that a persona is only useful when it is valid. However, I feel like coming up with a persona can introduce many biases. Based on our own values and ideas, we may assume a persona for an individual that is different from how they see themselves. How did we come up with our persona? You might end up stereotyping someone into a persona that contradicts their identity just because you thought they would fit in that specific box.

    3. It’s very unlikely that one persona and one scenario is going to faithfully capture everything you learned about the problem you’re trying to address. Create as many as you need to capture the diversity of the goals, the people, and the scenarios you observed.

      I agree with this statement. Just like design justice, it's unfeasible to sufficiently represent the entire population of a community you are addressing with just one solution. However, with personas and scenarios, I wonder if there is a certain threshold you can reach to sufficiently capture the diversity of the goals, or whether this goes hand-in-hand with design justice. I found this useful for the individual project we will be doing during class, since it often takes far more than just a few interviews to create a persona that is representative of a good number of people -- of course it won't represent everyone, but what would be a good number of interviews?

    4. One simple form of knowledge is to derive goals and values from your data.

      I find it really practical to focus on understanding goals and values from data. Instead of just listening to what people say, it’s about identifying what they’re actually trying to achieve. For example, when people talk about renting, some care about affordability, others about saving time, or accessibility due to a disability. I think recognizing these goals helps in designing something that truly addresses their needs. It reminds me how important it is to dig deeper and create solutions that align with people’s real priorities.

    5. A persona is only useful if it’s valid. If these details are accurate with respect to the data from your research, then you can use personas as a tool for imagining how any of the design ideas might fit into a person’s life. If you just make someone up and their details aren’t grounded in someone’s reality, your persona will be useless, because what you’re imagining will be fantasy.

      I would be interested in learning more about the process or creating and using personas in design. Personally imagining and implementing scenarios into the design process seems like it could be unreliable. Certainly everyone has their own experiences and perspectives which, as the last chapter had mentioned, can completely change the state of the problem. I personally would feel hesitant to unintentionally project my own biases and tendencies on to these personas and would create a persona that is not representative of individuals I am hoping to help. The imaginative process can certainly help in identifying possible weaknesses or vulnerabilities for cases we might not initially consider but I again am curious as to how the personas are 'validated' and used.

    1. Focus on Application and Creativity: Projects thatrequire creative thinking, application of knowledge tonew situations, or the solving of real-world problemscan be more indicative of a student’s own work andunderstanding. A recent article 3 in the Harvard Busi-ness Review, however, states “It [GenAI] can augmentthe creativity of employees and customers and helpthem generate and identify novel ideas”.

      This passage strikes a chord with me because it highlights the importance of focusing on creativity and application in education. As an educator, I’ve seen how projects that push students to think creatively or apply knowledge in new ways reveal their true understanding and skills. It’s a reminder that education should prioritize tasks that require more than just regurgitation of information—tasks where students must think critically and solve problems.

      The Harvard Business Review quote adds an interesting dimension. If GenAI can enhance creativity, then the question isn’t whether to use it but how to use it meaningfully. I can see a scenario where students use GenAI to brainstorm ideas or analyze scenarios, but the real value would come from their ability to refine and apply those ideas in unique ways. Personally, I believe the potential for GenAI to “augment” creativity aligns well with teaching practices that emphasize innovation and collaboration.

      At the same time, this makes me wonder about the balance between GenAI’s contributions and ensuring students are genuinely demonstrating their own capabilities. Could relying on GenAI too much in creative tasks hinder the development of independent thinking? I see a great opportunity here but also a need for clear boundaries and thoughtful integration into learning experiences. How do you think we can strike that balance effectively?

    2. GenAI tools are trained on massive data sets that mayinclude inaccuracies and misconceptions. They do notthink; they create human-like responses based on prob-abilities and, in doing so, also tend to make things up (i.e.,hallucinate).

      This is something that adults often seem to forget about GenAI tools. Since AI is going to be around for a while, I believe it's important to teach our students how to correctly use it as a tool. This is something that will be hard to do, since my district (and possibly others) has already banned all AI usage for students.

      Any thoughts on just embracing AI into the curriculum and effectively teaching students how to use it?

    1. If you’re clever, perhaps you can find a design that’s useful to a large, diverse group. But design will always require you to make a value judgement about who does and who does not deserve your design help. Let that choice be a just one, that centers people’s actual needs.

      I like this point a lot. I think often people try to come up with solutions that appeal to the masses, which is a good thing, but it's very hard to execute. Like the author said, you'd have to be very clever. But I agree it's better to focus on smaller audiences first to try and solve a specific problem.

    2. most gambling addicts wish it was harder for them to gamble, but casinos are quite happy that it’s easy to gamble. That means that problems are inherently tied to specific groups of people that wish their situation was different

      After sitting with this example for a bit, I strongly agreed with the idea that no problem can be solved, only situations. In the previous reading from last week, I commented on my disinterest in universal design because it tries to solve problems in one specific way for everyone, which is utterly wrong because having multiple solutions is necessary due to varying needs. That's why this example is something I agree with, since it helps differentiate how a situation is what we are trying to address, more than just an actual problem. For some it may be helpful; for others it's not.

    3. One view, then is that a problem is just an “undesirable situations” (meaning undesirable to a human). Therefore, problems are really just situations that people don’t want.

      I do agree with this quote because often, something that is one person's problem might be another person's desire, just as the author goes on to say. A lot of times multiple people might agree that something is a problem, but that doesn't mean there aren't people who disagree. It becomes a discussion when you talk about this from a design perspective, because it's a question of whose "problem" or "desire" we uphold.

    4. If you’re clever, perhaps you can find a design that’s useful to a large, diverse group. But design will always require you to make a value judgement about who does and who does not deserve your design help. Let that choice be a just one, that centers people’s actual needs. And let that choice be an equitable one, that focuses on people who actually need help (for example, rural Americans trying to access broadband internet, or children in low income families without computers trying to learn at home during a pandemic—not urban technophiles who want a faster ride to work).

      I agree with this take because it highlights the importance of prioritizing fairness and addressing real needs in design. It’s a useful reminder that design decisions reflect values and can either help underserved communities or simply cater to convenience for those already well off. This perspective reinforces the idea that impactful design should focus on solving meaningful problems for those who truly need support, which is something I find inspiring and worth striving for.

    5. Therefore, the essence of understanding any problem is communicating with the people. That communication might involve a conversation, it might involve watching them work, it might involve talking to a group of people in a community. It might even involve becoming part of their community, so that you can experience the diversity and complexity of problems they face, and partner with them to address them.

      I think this highlights the importance of human connection in problem-solving, which I strongly agree with. Understanding a problem deeply requires more than observation; it involves real communication and empathy. This reading reminded me that partnering with communities and immersing ourselves in their experiences can lead to more meaningful and effective solutions. It’s a useful reminder that design isn’t just about creating—it’s about collaborating.

    6. problem is never “solved”

      I agree with this statement; to further elaborate, I believe that problems occur because society is susceptible to change. Society is constantly changing, and we are constantly adapting to this change. Problems arise as a way for us to adapt to the new change. Solutions will never directly solve a problem, because the problem is constantly changing. Just like in design justice, there will never be a "correct" way to create accessible designs that serve everyone, because it's unfeasible. You can't represent the needs of everyone, and society is constantly changing. What you can do is mitigate the problems that arise and attempt to solve them.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      (1) As VRMate (a component of behaviorMate) is written using Unity, what is the main advantage of using behaviorMate/VRMate compared to using Unity alone paired with Arduinos (e.g. Campbell et al. 2018), or compared to using an existing toolbox to interface with Unity (e.g. Alsbury-Nealy et al. 2022, DOI: 10.3758/s13428-021-01664-9)? For instance, one disadvantage of using Unity alone is that it requires programming in C# to code the task logic. It was not entirely clear whether VRMate circumvents this disadvantage somehow -- does it allow customization of task logic and scenery in the GUI? Does VRMate add other features and/or usability compared to Unity alone? It would be helpful if the authors could expand on this topic briefly.

      We have updated the manuscript (lines 412-422) to clarify the benefits of separating the VR system as an isolated program and a UI that can be run independently. We argue that “…the recommended behaviorMate architecture has several important advantages. Firstly, by rendering each viewing angle of a scene on a dedicated device, performance is improved by splitting the computational costs across several inexpensive devices rather than requiring specialized or expensive graphics cards in order to run…, the overall system becomes more modular and easier to debug [and] implementing task logic in Unity would require understanding Object-Oriented Programming and C# … which is not always accessible to researchers that are typically more familiar with scripting in Python and Matlab.”

      VRMate receives detailed configuration info from behaviorMate at runtime as to which VR objects to display and receives position updates during experiments. Any other necessary information about triggering rewards or presenting non-VR cues is still handled by the UI, so no editing of Unity is necessary. Scene configuration information is in the same JSON format as the settings files for behaviorMate; additionally, there are Unity Editor scripts provided in the VRMate repository that permit customizing scenes through a “drag and drop” interface and then writing the scene configuration files programmatically. Users interested in these features should see our github page to find example scene.vr files and download the VRMate repository (including the editor scripts). We provide four VR contexts, as well as a settings file that uses one of them, which can be found on the behaviorMate github page (https://github.com/losonczylab/behaviorMate) in the “vr_contexts” and “example_settigs_files” directories. These examples are provided to assist VRMate users in getting set up and provide a more detailed example of how VRMate and behaviorMate interact.

      (2) The section on "context lists", lines 163-186, seemed to describe an important component of the system, but this section was challenging to follow and readers may find the terminology confusing. Perhaps this section could benefit from an accompanying figure or flow chart, if these terms are important to understand.

      We retain the terms "context" and "context list" in order to maintain a degree of parity with the Java code. However, we have updated lines 173-175 to define the term context for the behaviorMate system: "... a context is a grouping of one or more stimuli that get activated concurrently. For many experiments it is desirable to have multiple contexts that are triggered at various locations and times in order to construct distinct or novel environments."

      a. Relatedly, "context" is used to refer to both when the animal enters a particular state in the task like a reward zone ("reward context", line 447) and also to describe a set of characteristics of an environment (Figure 3G), akin to how "context" is often used in the navigation literature. To avoid confusion, one possibility would be to use "environment" instead of "context" in Figure 3G, and/or consider using a word like "state" instead of "context" when referring to the activation of different stimuli.

      Thank you for the suggestion. We have updated Figure 3G to say “Environment” in order to avoid confusion.

      (3) Given the authors' goal of providing a system that is easily synchronizable with neural data acquisition, especially with 2-photon imaging, I wonder if they could expand on the following features:

      a. The authors mention that behaviorMate can send a TTL to trigger scanning on the 2P scope (line 202), which is a very useful feature. Can it also easily generate a TTL for each frame of the VR display and/or each sample of the animal's movement? Such TTLs can be critical for synchronizing the imaging with behavior and accounting for variability in the VR frame rate or sampling rate.

      Different experimental demands require varying levels of precision in this kind of synchronization signal. For this reason, we have opted against a one-size-fits-all approach to synchronization with physiology data in behaviorMate. Importantly, this keeps individual rig costs low, which is useful when constructing setups specifically for training animals. behaviorMate will log TTL pulses sent to GPIO pins set up as sensors, and can be configured to generate TTL pulses at regular intervals. Additionally, all UDP packets received by the UI are time stamped and logged. We also include the output of the Arduino millis() function in all UDP packets, which can be used to investigate clock drift between system components. Importantly, since the system is event driven, drift cannot accumulate over the course of an experiment between the behaviorMate UI and networked components such as the VR system.

      For these reasons, we have not needed to implement a VR frame-synchronization TTL for any of our experiments. However, one could extend VRMate to send "sync" packets back to behaviorMate to log precisely when each frame was displayed, or to emit TTL pulses (if using the same ODROID hardware we recommend in the standard setup for rendering scenes). This would be useful if it is important to account for slight changes in the frame rate at which the scenes are displayed. However, splitting the rendering of large scenes across several devices results in fast update times, and our testing and benchmarks indicate that display updates are smooth and continuous enough to appear coupled to movement updates from the behavioral apparatus and are sufficient for engaging navigational circuits in the brain.

      b. Is there a limit to the number of I/O ports on the system? This might be worth explicitly mentioning.

      We have updated lines 219-220 in the manuscript to provide this information: "Sensors and actuators can be connected to the controller using one of the 13 digital or 5 analog input/output connectors."

      c. In the VR version, if each display is run by a separate Android computer, is there any risk of clock drift between displays? Or is this circumvented by centralized control of the rendering onset via the "real-time computer"?

      This risk is mitigated by the real-time computer/UI sending position updates to the VR displays. The amount by which the scenes can fall out of sync is bounded because they all recalibrate on every position update, which occurs multiple times per second while the animal is moving. Because position updates are constantly sent by behaviorMate to VRMate, and VRMate immediately updates the scene according to the reported position, the most the scene can lag the mouse's position is bounded by the maximum update latency multiplied by the running speed of the mouse. For experiments focused on eliciting an experience of navigation, this degree of asynchrony is almost always negligible. For other experimental demands it would be possible to incorporate more precise frame timing information, but this was not necessary for our use case and likely for most others. Additionally, refer to the response to comment 3a.
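The bound described above can be made concrete with a short worked example. The numbers here are illustrative assumptions (a 10 ms worst-case update latency and a 30 cm/s running speed), not measured benchmarks from the manuscript:

```python
# Worst-case scene lag = (update latency) x (running speed).
# Integer micrometer/millisecond units keep the arithmetic exact.
latency_ms = 10            # assumed worst-case position-update latency
speed_um_per_ms = 300      # 30 cm/s running speed = 300 um per ms
max_lag_um = latency_ms * speed_um_per_ms
assert max_lag_um == 3000  # worst-case lag of 3 mm of track
```

Even under these pessimistic assumptions, the displayed scene trails the animal's true position by a few millimeters of track at most, and the error is corrected on the very next position update.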

      Reviewer #2 (Public review):

      (1) The central controlling logic is coupled with GUI and an event loop, without a documented plugin system. It's not clear whether arbitrary code can be executed together with the GUI, hence it's not clear how much the functionality of the GUI can be easily extended without substantial change to the source code of the GUI. For example, if the user wants to perform custom real-time analysis on the behavior data (potentially for closed-loop stimulation), it's not clear how to easily incorporate the analysis into the main GUI/control program.

      Without any edits to the existing source code, behaviorMate is highly customizable through the settings files, which allow users to combine the existing contexts and decorators in arbitrary combinations. Users have therefore been able to perform a wide variety of 1D navigation tasks, well beyond our anticipated use cases, simply by writing new settings files. The typical method for providing closed-loop stimulation is to set up a context that is triggered by animal behavior using decorators (e.g., based on position, lap number, and time) and then trigger the stimulation with a TTL pulse. In the rare case that a user requires a behavioral condition not currently implemented or composable from the existing decorators, custom Java code would be needed to extend the UI. Such edits require only knowledge of basic object-oriented programming in Java and the creation of a single subclass of either the BasicContextList or ContextListDecorator classes. In addition, the JavaFX version of behaviorMate (under development) incorporates a plugin system that does not require recompiling the code in order to make these changes. However, since the JavaFX software is currently under development, documentation does not yet exist. All software is open source and available on GitHub for users interested in writing plugins or altering the source code.
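To illustrate the settings-file approach to closed-loop control described above, the sketch below composes a hypothetical stimulation context from decorators. Every key here ("contexts", "decorators", "type", "location", etc.) is an illustrative assumption rather than the documented behaviorMate schema; the example settings files on GitHub show the real field names.

```python
import json

# Hypothetical settings fragment: a TTL-triggered stimulation context
# gated by both position and lap number via stacked decorators.
settings = {
    "contexts": [
        {
            "id": "opto_stim",
            "decorators": [
                # fire only within 10 cm of track position 150 cm
                {"type": "position", "location": 150, "radius": 10},
                # and only on even-numbered laps
                {"type": "lap_gated", "laps": [2, 4, 6]},
            ],
        }
    ]
}

# Settings are plain JSON, so they can be versioned and diffed as text.
assert len(settings["contexts"][0]["decorators"]) == 2
print(json.dumps(settings, indent=2))
```

Stacking decorators this way is what lets one context express compound conditions (position AND lap) without any new Java code.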

      We have added a caveat to the manuscript in order to clarify this point (lines 197-202): "However, if the available set of decorators is not enough to implement the required task logic, some modifications to the source code may be necessary. These modifications, in most cases, would be very simple, and only a basic understanding of object-oriented programming is required. A case where this might be needed would be performing novel customized real-time analysis on behavior data and activating a stimulus based on the result."

      (2) The JSON messaging protocol lacks API documentation. It's not clear what the exact syntax is, supported key/value pairs, and expected response/behavior of the JSON messages. Hence, it's not clear how to develop new hardware that can communicate with the behaviorMate system.

      The most common approach for adding novel hardware is to use TTL pulses (or to accept an emitted TTL pulse to read sensor states). This type of hardware addition is possible through the existing GPIO without the need to interact with the software or the JSON API. Users looking to set up and configure novel behavioral paradigms without writing any software are limited to hardware that can be triggered by, and report back to, the UI with TTL pulses (though fairly complex actions can be triggered this way).

      For users looking to develop more customized hardware solutions that interact closely with the UI or GPIO board, additional documentation on the JSON messaging protocol has been added to the behaviormate-utils repository (https://github.com/losonczylab/behaviormate_utils). Additionally, we have added a link to this repository in the Supplemental Materials section (line 971) and referenced this in the manuscript (line 217) to make it easier for readers to find this information.

      Furthermore, developers looking to add completely novel components to the UI can implement the interface described by Context.java in order to exchange custom messages with hardware (described in the JavaDoc: https://www.losonczylab.org/behaviorMate-1.0.0/). These messages would be defined within the custom context and interact with the custom hardware (meaning the interested developer would make a novel addition to the messaging API). Additionally, it should be noted that, without editing any software, any UDP packet sent to behaviorMate from an IP address specified in the settings is time stamped and logged in the stored behavioral data file, meaning that a large variety of hardware solutions, using both standard UDP messaging and TTL pulses, can work with behaviorMate with minimal effort. Finally, see the response to R2.1 for a discussion of the JavaFX version of the behaviorMate UI, including plugin support.

      (3) It seems the existing control hardware and the JSON messaging only support GPIO/TTL types of input/output, which limits the applicability of the system to more complicated sensor/controller hardware. The authors mentioned that hardware like Arduino natively supports serial protocols like I2C or SPI, but it's not clear how they are handled and translated to JSON messages.

      We provide an implementation for an I2C-based capacitance lick detector, which interested developers may wish to copy if support for a novel I2C or SPI device is needed. Users with less development experience who wish to expand the hardware capabilities of behaviorMate could also develop adapters that are triggered by a TTL input/output. Additionally, more information about the JSON API and how messages are transmitted to the PC by the Arduino is given in the response to point (2) and the expanded online documentation.

      a. Additionally, because it's unclear how easy to incorporate arbitrary hardware with behaviorMate, the "Intranet of things" approach seems to lose attraction. Since currently, the manuscript focuses mainly on a specific set of hardware designed for a specific type of experiment, it's not clear what are the advantages of implementing communication over a local network as opposed to the typical connections using USB.

      As opposed to the serial communication protocols typical of USB, networking protocols function seamlessly via asynchronous message passing. Messages may be routed internally (e.g., to the PC's localhost address, 127.0.0.1) or to a variety of external hardware (e.g., using IP addresses in the range 192.168.1.2 to 192.168.1.254). Furthermore, network-based communication allows modules, such as VR, to be added easily. behaviorMate systems can be expanded using low-cost Ethernet switches and consume only a single network adapter on the PC (i.e., they are not limited by the number of physical USB ports). Furthermore, UDP message passing is implemented in almost all modern programming languages in a platform-independent manner (meaning the same software can run on macOS, Windows, and Linux). Lastly, as we have pointed out (line 117), a variety of tools exist for inspecting network packets and debugging, so it is possible to run behaviorMate with simulated hardware for testing and debugging.

      The IOT nature of behaviorMate means there is no requirement for novel hardware to be implemented using an Arduino, since any system capable of UDP communication can be configured. For example, VRMate is usually run on ODROID C4s, but one could easily build a system using Raspberry Pis or even additional PCs. behaviorMate is agnostic to the format of the UDP messages, although packaging any data in the JSON format is encouraged for consistency. If the new hardware is a sensor whose input only needs to be time stamped and logged, then all that is needed is to add its IP address and port information to the 'controllers' list in a behaviorMate settings file. If more complex interactions with novel hardware are needed, then a custom implementation of ContextList.java may be required (see the response to R2.2). However, the provided UdpComms.java class can be used to easily send and receive messages from custom Context.java subclasses.
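The "any UDP-capable device" point above can be demonstrated in a few lines. This sketch runs a UI-like listener and a sensor-like sender entirely on localhost; the message fields ("lick", "millis") are illustrative assumptions, not the behaviorMate API, but the pattern (JSON payload in, time-stamped log entry out) is the one described in the text.

```python
import json
import socket
import time

# UI-like listener: binds an ephemeral UDP port and logs what arrives.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))          # OS assigns a free port
port = listener.getsockname()[1]

# Sensor-like module: any language/device could do this equally well.
sensor = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sensor.sendto(json.dumps({"lick": 1, "millis": 1234}).encode(),
              ("127.0.0.1", port))

# On loopback the datagram arrives immediately; time stamp and log it.
data, addr = listener.recvfrom(1024)
log_entry = {"timestamp": time.time(), "message": json.loads(data)}
assert log_entry["message"]["lick"] == 1
```

Swapping the sender onto an Arduino, ODROID, or Raspberry Pi changes nothing on the listener side, which is the practical advantage of the network-based design over per-device USB drivers.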

      Solutions for highly customized hardware do require basic familiarity with object-oriented programming in Java. However, in our experience most behavioral experiments do not require these kinds of modifications. The majority of 1D navigation tasks, which behaviorMate is currently best suited to control, require touch/motion sensors, LEDs, speakers, or solenoid valves, all of which are easily controlled by the existing GPIO implementation; it is unlikely that custom subclasses would even be needed.

      Reviewer #3 (Public review):

      (1) While using UDP for data transmission can enhance speed, it is thought that it lacks reliability. Are there error-checking mechanisms in place to ensure reliable communication, given its criticality alongside speed?

      The provided GPIO/behavior controller implementation sends acknowledgement packets in response to all incoming messages, as well as start and stop messages for contexts and "valves". In this way the UI can update to reflect both requested state changes and when they actually happen (although there is rarely a perceptible gap between the two unless something is unplugged or not functioning). See line 85 in the revised manuscript: "acknowledgement packets are used to ensure reliable message delivery to and from connected hardware".
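The acknowledgement scheme described above amounts to retransmitting a command until the hardware confirms it. The loopback sketch below shows the pattern; the message fields ("action", "id", "ack") and the retry count are illustrative assumptions, not the behaviorMate protocol.

```python
import json
import socket

# "Hardware" side: receives commands and replies with an ack.
hw = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
hw.bind(("127.0.0.1", 0))
hw_addr = hw.getsockname()

# "UI" side: sends a command and waits (with a timeout) for the ack.
ui = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ui.settimeout(0.25)

command = {"action": "open_valve", "pin": 5, "id": 1}
acked = False
for _ in range(3):                        # bounded retransmission attempts
    ui.sendto(json.dumps(command).encode(), hw_addr)
    msg, sender = hw.recvfrom(1024)       # hardware receives the command
    hw.sendto(json.dumps({"ack": json.loads(msg)["id"]}).encode(), sender)
    try:
        reply, _ = ui.recvfrom(1024)
        if json.loads(reply)["ack"] == command["id"]:
            acked = True                  # confirmed state change
            break
    except socket.timeout:
        continue                          # ack lost: resend the command
assert acked
```

This is how a UI can distinguish "requested" from "confirmed" states: the valve icon would only change once the matching ack arrives.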

      (2) Considering this year's price policy changes in Unity, could this impact the system's operations?

      VRMate is not affected by the recent changes to Unity's pricing structure.

      The existing compiled VRMate software does not need to be regenerated to update VR scenes or to implement new task logic (since this is handled by the behaviorMate GUI). The VRMate program is therefore robust to any future pricing changes or other restructuring of Unity and does not rely on its continued support. Additionally, while the solution presented in VRMate has many benefits, a developer could easily adapt any open-source VR maze project to receive the UDP-based position updates from behaviorMate, or develop a novel VR solution of their own.

      (3) Also, does the Arduino offer sufficient precision for ephys recording, particularly with a 10ms check?

      Electrophysiology recording hardware typically has additional I/O channels that can assist with tracking behavior and synchronizing at high resolution. While behaviorMate could still be used to trigger reward valves, either the ephys hardware or an additional high-speed DAQ is recommended to maintain accurate alignment with high-speed physiology data. behaviorMate could still be set up as normal to provide closed- and open-loop task control at behaviorally relevant timescales alongside a DAQ circuit recording events at a consistent temporal resolution. While this would increase the relative cost of the individual recording setup, identical rigs for training animals could still be configured without the DAQ circuit, avoiding unnecessary cost and complexity.

      (4) Could you clarify the purpose of the Sync Pulse? In line 291, it suggests additional cues (potentially represented by the Sync Pulse) are needed to align the treadmill screens, which appear to be directed towards the Real-Time computer. Given that event alignment occurs in the GPIO, the connection of the Sync Pulse to the Real-Time Controller in Figure 1 seems confusing.

      A number of methods exist for synchronizing recording devices such as microscopes or electrophysiology systems with behaviorMate's time-stamped logs of actuators and sensors. For example, the GPIO circuit can be configured to send sync triggers or to receive timing signals as input. Alternatively, a dedicated circuit could record frame-start signals and relay them to the PC to be logged independently of the GPIO (enabling high-resolution post hoc alignment of the time stamps). The optimal method varies with the needs of the experiment. Our setups have a dedicated BNC output, with a corresponding specification in the settings file, that sends a TTL pulse at the start of an experiment in order to trigger 2p imaging setups (see line 224, specifically that this is a detail of "our" 2p imaging setup). We provide this information as a suggestion for how to start behavior and physiology recordings at the same time; we do not intend it as the only solution for alignment. Figure 1 indicates an "optional" circuit for capturing a high-speed sync pulse and providing time stamps back to the real-time PC. This is another option that might be useful for certain setups (especially for establishing benchmarks between behavior and physiology recordings). In our setup, event alignment does not occur exclusively on the GPIO.

      a. Additionally, why is there a separate circuit for the treadmill that connects to the UI computer instead of the GPIO? It might be beneficial to elaborate on the rationale behind this decision in line 260.

      The treadmill circuit is kept separate from the GPIO to separate concerns between position tracking and more general input/output, which improves performance and simplifies debugging. This lets us maintain a single event loop on each Arduino, avoiding the need to either run multithreaded operations or rely extensively on interrupts, which can cause unpredictable code execution (e.g., when multiple interrupts occur at the same time). Our position-tracking circuit is therefore built around a separate, low-cost Arduino Mini whose sole responsibility is position tracking.

      b. Moreover, should scenarios involving pupil and body camera recordings connect to the Analog input in the PCB or the real-time computer for optimal data handling and processing?

      Pupil and body camera recordings are independent data streams that can be recorded separately from behaviorMate. Aligning these forms of full-motion video may require frame triggers, which can be configured on the GPIO board as single TTL-like outputs or by configuring a valve to be "pulsed", a customization type that is provided.

      We also note that a more advanced developer could easily leverage camera signals to provide closed-loop control by writing an independent module that sends UDP packets to behaviorMate. For example, a separate computer-vision-based position tracking module could be written in any preferred language and use UDP messaging to send body-tracking updates to the UI without editing any of the behaviorMate source code (and could even be used for updating 1D location).

      (5) Given that all references, as far as I can see, come from the same lab, are there other labs capable of implementing this system at a similar optimal level?

      To date, two additional labs have published using behaviorMate: the Soltesz and Henn labs (see revised lines 341-342). Since behaviorMate has only recently been published and made available open source, only external collaborators of the Losonczy lab have had access to the software and design files needed to do this. These collaborators did, however, build their own behavioral setups in separate locations with minimal direct support from the authors, similar to the support anyone seeking to set up a behaviorMate system would find online on our GitHub page or by posting to the message board.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (4) To provide additional context for the significance of this work, additional citations would be helpful to demonstrate a ubiquitous need for a system like behaviorMate. This was most needed in the paragraph from lines 46-65, specifically for each sentence after line 55, where the authors discuss existing variants on head-fixed behavioral paradigms. For instance, for the clause "but olfactory and auditory stimuli have also been utilized at regular virtual distance intervals to enrich the experience with more salient cues", suggested citations include Radvansky & Dombeck 2018 (DOI: 10.1038/s41467-018-03262-4), Fischler-Ruiz et al. 2021 (DOI: 10.1016/j.neuron.2021.09.055).

      We thank the reviewer for the suggested missing citations and have updated the manuscript accordingly (see line 58).

      (5) In addition, it would also be helpful to clarify behaviorMate's implementation in other laboratories. On line 304 the authors mention "other labs" but the following list of citations is almost exclusively from the Losonczy lab. Perhaps the citations just need to be split across the sentence for clarity? E.g. "has been validated by our experimental paradigms" (citation set 1) "and successfully implemented in other labs as well" (citation set 2).

      We have split the citation set as suggested (see lines 338-342).

      Minor Comments:

      (6) In the paragraph starting line 153 and in Fig. 2, please clarify what is meant by "trial" vs. "experiment". In many navigational tasks, "trial" refers to an individual lap in the environment, but here "trial" seems to refer to the whole behavioral session (i.e. synonymous with "experiment"?).

      In our software implementation we had originally used "trial" to refer to an imaging session rather than an experiment (and have begun moving to the more conventional lexicon). To avoid confusion, we have removed this use of "trial" throughout the manuscript and replaced it with "experiment" whenever possible.

      (7) This is very minor, but in Figure 3 and 4, I don't believe the gavage needle is actually shown in the image. This is likely to avoid clutter but might be confusing to some readers, so it may be helpful to have a small inset diagram showing how the needle would be mounted.

      We assessed the image both with and without the gavage needle and found the version in the original (without) to be easier to read and less cluttered and therefore maintained that version in the manuscript.

      (8) In Figure 5 legend, please list n for mice and cells.

      We have updated the Figure 5 legend to indicate that for panels C-G, n = 6 mice (all mice were recorded in both the VR and TM systems), with 3253 cells classified as significantly tuned place cells in VR and 6101 tuned cells in TM.

      (9) Line 414: It is not necessary to tilt the entire animal and running wheel as long as the head-bar clamp and objective can rotate to align the imaging window with the objective's plane of focus. Perhaps the authors can just clarify the availability of this option if users have a microscope with a rotatable objective/scan head.

      We have added the suggested caveat to the manuscript in order to clarify when the goniometers might be useful (see lines 281-288).

      (10) Figure S1 and S2 could be referenced explicitly in the main text with their related main figures.

      We have added explicit references to Figures S1 and S2 in the relevant sections (see lines 443, 460 and 570).

      (11) On line 532-533, is there a citation for "proximal visual cues and tactile cues (which are speculated to be more salient than visual cues)"?

      We have added citations to both Knierim & Rao 2003 and Renaudineau et al. 2007 which discuss the differential impact of proximal vs distal cues during navigation as well as Sofroniew et al. 2014 which describe how mice navigate more naturally in a tactile VR setup as opposed to purely visual ones.

      (12) There is a typo at the end of the Figure 2 legend, where it should say "Arduino Mini."

      This typo has been fixed.

      Reviewer #2 (Recommendations For The Authors):

      (4) As mentioned in the public review: what is the major advantage of taking the IoT approaches as opposed to USB connections to the host computer, especially when behaviorMate relies on a central master computer regardless? The authors mentioned the readability of the JSON messages, making the system easier to debug. However, the flip side of that is the efficiency of data transmission. Although the bandwidth/latency is usually more than enough for transmitting data and commands for behavior devices, the efficiency may become a problem when neural recording devices (imaging or electrophysiology) need to be included in the system.

      behaviorMate is not intended to do everything; it is limited mainly to controlling behavior and providing some synchronizing TTL-style triggers. In this way the system can easily and inexpensively be replicated across multiple recording setups, which is particularly useful for constructing additional animal-training rigs. The system is fully sufficient for capturing behavioral inputs at relevant timescales (see the benchmarks in Figures 3 and 4 as well as the position-correlated neural activity in Figures 5 and 6). Additional hardware might be needed to align behaviorMate's output with neural data; for example, a high-speed DAQ or the input channels on an electrophysiology recording setup could be used (if provided). As all recording setups differ, the ideal solution depends on details that are hard to anticipate. We do not mean to convey that the full neural data stream would be transmitted to the behaviorMate system (especially over the JSON/UDP communications that behaviorMate relies on).

      (5) The author mentioned labView. A popular open-source alternative is bonsai (https://github.com/bonsai-rx/bonsai). Both include a graphical-based programming interface that allows the users to easily reconfigure the hardware system, which behaviorMate seems to lack. Additionally, autopilot (https://github.com/auto-pi-lot/autopilot) is a very relevant project that utilizes a local network for multiple behavior devices but focuses more on P2P communication and rigorously defines the API/schema/communication protocols for devices to be compatible. I think it's important to include a discussion on how behaviorMate compares to previous works like these, especially what new features behaviorMate introduces.

      We believe that behaviorMate provides a more opinionated and complete solution than the projects mentioned. A wide variety of 1D navigational paradigms can be constructed in behaviorMate without writing any novel software. Bonsai, for example, is a "visual programming language" and would require experimenters to construct a custom implementation of each of their experiments. We have opted to use Java for the UI, with computation distributed across modules written in various languages. Given the IOT methodology, any number of programming languages or APIs could have been used; a large number of design decisions were made when building the project, and we have opted not to include this level of detail in the manuscript in order to maintain readability. We strongly believe in using non-proprietary and open-source projects when possible, which is why the comparison with LabVIEW-based solutions was included in the introduction. We have also added a citation of autopilot to the section of the introduction where this is discussed.

      (6) One of the reasons labView/bonsai are popular is they are inherently parallel and can simultaneously respond to events from different hardware sources. While the JSON events in behaviorMate are asynchronous in nature, the handling of those events seems to happen only in a main event loop coupled with GUI, which is sequential by nature. Is there any multi-threading/multi-processing capability of behaviorMate? If so it's an important feature to highlight. If not I think it's important to discuss the potential limitation of the current implementation.

      IOT solutions are inherently concurrent since the computation is distributed, and additional parallelism could be added by further distributing concerns among independent modules running on independent hardware. The UI has an event loop that aggregates inputs and then updates contexts sequentially based on the current state of those inputs. This sort of "snapshot" of the current state is necessary to reason about when to start certain contexts based on their settings and applied decorators. While the behaviorMate UI uses Java's multithreading libraries to improve performance in certain cases, the degree to which this represents true versus "virtual" concurrency depends on the PC architecture it runs on and how the operating system allocates resources. For this reason, we have argued in the manuscript that behaviorMate is sufficient for controlling experiments at behaviorally relevant timescales, and we present benchmarks and discuss different synchronization approaches so that users can determine whether this is sufficient for their needs.

      (7) The context list is an interesting and innovative approach to abstract behavior contingencies into a data structure, but it's not currently discussed in depth. I think it's worth highlighting how the context list can be used to cover a wide range of common behavior experimental contingencies with detailed examples (line 185 might be a good example to give). It's also important to discuss the limitation, as currently the context lists seem to only support contingencies based purely on space and time, without support for more complicated behavior metrics (e.g. deliver reward only after X% correct).

      To access more complex behavior metrics at runtime, custom context-list decorators would need to be implemented. While this is less common in the sort of 1D navigational behaviors the project was originally designed to control, adding novel decorators is a simple process that requires only basic object-oriented programming knowledge. As discussed, we are also implementing a plugin architecture in the JavaFX update to streamline these types of additions.

      Minor Comments:

      (8) In line 202, the author suggests that a single TTL pulse is sent to mark the start of a recording session, and this is used to synchronize behavior data with imaging data later. In other words, there are no synchronization signals for every single sample/frame. This approach either assumes the behavior recording and imaging are running on the same clock or assumes evenly distributed recording samples over the whole recording period. Is this the case? If so, please include a discussion on limitations and alternative approaches supported by behaviorMate. If not, please clarify how exactly synchronization is done with one TTL pulse.

      While the TTL pulse triggers the start of neural data acquisition in our setups, various options exist for controlling for the described clock drift across an experiment, and the appropriate one depends on the type of recording, the frame rate, the duration of the recording, and so on. behaviorMate therefore leaves open many options for synchronization at different timescales (e.g., adding a frame-sync circuit as shown in Figure 1, or sending TTL pulses to the same DAQ recording the electrophysiology data). Expanded consideration of different synchronization methods has been included in the manuscript (see lines 224-238).
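The post hoc alignment option mentioned above reduces to fitting a linear map between the two clocks using shared sync events. The sketch below uses two synthetic sync-pulse times; the numbers are illustrative assumptions, and in practice they would come from the behavior log and the recorded pulse times.

```python
# Sync pulse times observed on the behavior clock (s) and imaging clock (s).
# The imaging clock here is assumed to run slightly fast, i.e. to drift.
b0, b1 = 12.0, 612.0
i0, i1 = 3.5, 603.8

def to_imaging_time(t_behavior):
    """Map a behavior-clock timestamp onto the imaging clock.

    A two-point linear fit corrects both the offset and the clock-rate
    drift between the two devices.
    """
    scale = (i1 - i0) / (b1 - b0)
    return i0 + (t_behavior - b0) * scale

# The two anchor events map back onto themselves (up to float rounding).
assert abs(to_imaging_time(12.0) - 3.5) < 1e-9
assert abs(to_imaging_time(612.0) - 603.8) < 1e-9
```

With more than two sync pulses, a least-squares fit over all of them gives the same correction with better noise robustness.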

      (9) Is the computer vision-based calibration included as part of the GUI functionality? Please clarify. If it is part of the GUI, it's worth highlighting as a very useful feature.

      The computer vision-based benchmarking is not included in the GUI; it takes the form of a script made specifically for this paper. However, for treadmill-based experiments behaviorMate has other calibration tools built into it (see lines 301-303).

      (10) I went through the source code of the Arduino firmware, and it seems most "open X for Y duration" functions are implemented using the delay function. If this is indeed the case, it's generally a bad idea since delay completely pauses the execution and any events happening during the delay period may be missed. As an alternative, please consider approaches comparing timestamps or using interrupts.

      We have avoided the use of interrupts on the GPIO due to the potential for unpredictable code execution. A blocking delay is executed only when the duration is 10 ms or less, as we cannot guarantee that the Arduino event loop cycles faster than this; durations longer than 10 ms are timestamped and non-blocking. We have adjusted this MAX_WAIT threshold to be specified as a macro so it can be more easily adjusted (or set to 0).
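      To illustrate the non-blocking path (the pattern, not the actual firmware source), here is a minimal Python simulation of timestamp-comparison timing in an event loop. `FakeClock`, `run_valve`, and the one-millisecond-per-pass assumption are all invented for illustration.

```python
# Sketch (in Python, not Arduino C) of the non-blocking pattern used for
# durations above MAX_WAIT: record a start timestamp and compare elapsed
# time on each event-loop pass, so other events keep being serviced
# instead of being frozen by delay().
MAX_WAIT = 10  # ms; at or below this threshold a short blocking delay is tolerated

class FakeClock:
    """Stand-in for Arduino millis(); each event-loop pass advances ~1 ms."""
    def __init__(self):
        self.now = 0
    def millis(self):
        return self.now

def run_valve(clock, duration_ms):
    """Open a 'valve' and close it after duration_ms without blocking the loop."""
    open_time = clock.millis()
    valve_open = True
    passes = 0
    while valve_open:
        clock.now += 1           # one pass of the event loop (~1 ms here)
        passes += 1
        # ...other events (licks, position updates) would be serviced here...
        if clock.millis() - open_time >= duration_ms:
            valve_open = False   # close the valve; delay() is never called
    return clock.millis() - open_time, passes

elapsed, passes = run_valve(FakeClock(), 50)
print(elapsed, passes)  # → 50 50
```

      The key point is that the loop keeps cycling for all 50 passes, so sensors and messages are still handled while the valve is open.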

      (11) Figure 3 B, C, D, and Figure 4 D, E suffer from noticeable low resolution.

      We have converted Figure 3B, C, D and 4C, D, E to vector graphics in order to improve the resolution.

      (12) Figure 4C is missing, which is an important figure.

      This figure panel was present when we rendered and submitted the manuscript. We apologize that the figure was apparently generated such that it did not load properly in all PDF viewers. The panel appears correctly in the online eLife version of the manuscript. Additionally, we have checked the revision in Preview on macOS as well as Adobe Acrobat and the built-in viewer in Chrome, and all figure panels appear in each, so we hope this issue has been resolved.

      (13) There are thin white grid lines on all heatmaps. I don't think they are necessary.

      The grid lines have been removed from the heatmaps as suggested.

      (14) Line 562 "sometimes devices directly communicate with each other for performance reasons", I didn't find any elaboration on the P2P communication in the main text. This is potentially worth highlighting as it's one of the advantages of taking the IoT approaches.

      In our implementation it was not necessary to rely on P2P communication beyond what is indicated in Figure 1. The direct communication referred to in line 562 refers only to the examples expanded on in the rest of the paragraph, i.e. the behavior controller may signal the microscope directly using a TTL signal without looping back to the UI. As necessary, users could implement UDP message passing between devices, but this is outside the scope of what we present in the manuscript.
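      For readers who do want device-to-device messaging, a minimal sketch of JSON-over-UDP message passing looks like the following. The address handling and message fields are hypothetical inventions for illustration; they are not behaviorMate's actual message schema.

```python
# Minimal sketch of JSON-over-UDP message passing between two components.
# The port handling and the "valve" message fields are illustrative
# inventions; they are NOT behaviorMate's actual schema.
import json
import socket

def send_message(sock, addr, payload):
    """Serialize a dict as JSON and send it as a single UDP datagram."""
    sock.sendto(json.dumps(payload).encode("utf-8"), addr)

# a listening "device" bound to an OS-assigned free port on localhost
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
addr = listener.getsockname()

# another "device" sends it a (hypothetical) valve command
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_message(sender, addr, {"valve": {"pin": 5, "action": "open", "duration": 200}})

data, _ = listener.recvfrom(1024)
msg = json.loads(data.decode("utf-8"))
print(msg["valve"]["action"])  # → open

sender.close()
listener.close()
```

      Because UDP datagrams are connectionless, each device only needs to know its peer's address and the shared JSON vocabulary, which is what makes this style of peer-to-peer extension straightforward.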

      (15) Line 147 "Notably, due to the system's modular architecture, different UIs could be implemented in any programming language and swapped in without impacting the rest of the system.", this claim feels unsupported without a detailed discussion of how new code can be incorporated in the GUI (plugin system).

      This comment refers to the idea of implementing "different UIs". This would entail users taking advantage of the JSON messaging API and the proposed electronics while fully implementing their own interface. To facilitate this option we have improved the documentation of the messaging API posted in the README file accompanying the Arduino source code. We have also added a reference to the supplemental materials, where readers can find a link to the JSON API implementation, to clarify this point.

      Additionally, while a plugin system is available in the JavaFX version of behaviorMate, that project is currently under development, and we will update the online documentation as it matures. This is, however, unrelated to the intended claim about completely swapping out the UI.

      Reviewer #3 (Recommendations For The Authors):

      (6) Figure 1 - the terminology for each item is slightly different in the text and the figure. I think making them match exactly would make it easier for the reader.

      - Real-time computer (figure) vs real-time controller (ln88).

      The manuscript was adjusted to match figure terminology.

      - The position controller (ln565) - position tracking (Figure).

      We have updated Figure 1 to highlight that the position controller does the position tracking.

      - Maybe add a Behavior Controller next to the GPIO box in Figure 1.

      We updated Figure 1 to highlight that the Behavior Controller performs the GPIO responsibility such that "Behavior Controller" and "GPIO circuit" may be used interchangeably.

      - Position tracking (fig) and position controller (subtitle - ln209).

      We updated Figure 1 to highlight that the position controller does the position tracking.

      - Sync Pulse is not explained in the text.

      The caption for Figure 1 has been updated to better explain the Sync Pulse and additional system boxes.

      (7) For Figure 3B/C: What is the number of data points? It would be nice to see the real population, possibly using a swarm plot instead of box plots. How likely are these outliers to occur?

      In order to better characterize the distributions presented in our benchmarking data, we have added mean and standard deviation information to the plots in Figures 3 and 4. For Figure 3B: 0.0025 +/- 0.1128; Figure 3C: 12.9749 +/- 7.6581; Figure 4C: 66.0500 +/- 15.6994; Figure 4E: 4.1258 +/- 3.2558.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Time periods in which experience regulates early plasticity in sensory circuits are well established, but the mechanisms that control these critical periods are poorly understood. In this manuscript, Leier and Foden and colleagues examine early-life critical periods that regulate the Drosophila antennal lobe, a model sensory circuit for understanding synaptic organization. Using early-life (0-2 days old) exposure to distinct odorants, they show that constant odor exposure markedly reduces the volume, synapse number, and function of the VM7 glomerulus. The authors offer evidence that these changes are mediated by invasion of ensheathing glia into the glomerulus where they phagocytose connections via a mechanism involving the engulfment receptor Draper.

      This manuscript is a striking example of a study where the questions are interesting, the authors spent a considerable amount of time to clearly think out the best experiments to ask their questions in the most straightforward way, and expressed the results in a careful, cogent, and well-written fashion. It was a genuine delight to read this paper. I have two experimental suggestions that would really round out existing work to better support the existing conclusions and some instances where additional data or tempered language in describing results would better support their conclusions. Overall, though, this is an incredibly important finding, a careful analysis, and an excellent mechanistic advance in understanding sensory critical period biology.

      We thank the reviewer for their thoughtful and constructive comments on our manuscript. In response to their critiques, we conducted several new experiments and additional analyses, and made changes to the text. As requested, we carried out an electrophysiological analysis of VM7 PN firing in draper knockdown animals with and without odor exposure. To our surprise, loss of glial Draper fully suppresses the dramatic reduction in spontaneous PN activity observed following critical period ethyl butyrate exposure, arguing that the functional response is restored alongside OSN morphology. It also suggests that the Or42a OSN terminals are intact and functional until they are phagocytosed by ensheathing glia. In other words, glia are not merely clearing axon terminals that have already degenerated. This evidence provides additional support for the claim that the VM7 glomerulus will be an outstanding model for defining mechanisms of experience-dependent glial pruning. Detailed responses to the reviewers' comments follow below.

      Regarding the apparent disconnect between the near-complete silencing of PNs versus the 50% reduction in Or42a OSN infiltration volume, we agree with the reviewer that this tracks with previous data in the field. While our Imaris pipeline is relatively sensitive, it may not pick up modest changes to terminal arbor architecture. Indeed, as described in Jindal et al. (2023) and in the Methods in this manuscript, we chose conservative software settings that, if anything, would undercount the percent change in infiltration volume. We also note that increased inhibitory LN inputs onto PNs could contribute to the dramatic PN silencing we observe. While fascinating, we view LN plasticity as beyond the scope of the current manuscript. We removed any mention of 'silent synapses' and now speculate about increased inhibition.

      Reviewer #1 (Recommendations For The Authors):

      Major Elements:

      (1) The authors demonstrate that loss of draper in glia can suppress many of the pruning related phenotypes associated with EB exposure. However, they do not assess electrophysiological output in these experiments, only morphology. It would be great to see recordings from those animals to see if the functional response is also restored.

      We performed the experiment the reviewer requested (see Figure 4F-J). We are pleased to report that our recordings from VM7 PNs match our morphology measurements: in repo-GAL4>UAS-draper RNAi flies, there was no difference in the innervation of VM7 PNs between animals exposed to mineral oil or 15% EB from 0-2 DPE. This result is in sharp contrast to the near-total loss of OSN-PN innervation in flies with intact glial Draper signaling, and strongly validates the role we propose for Draper in the Or42a OSN critical period.

      (2) There is a disconnect between physiology and morphology with a near complete loss of activity from VM7 PNs but a less severe loss of ORN synapses. While not completely incongruent (previous work in the AL showed a complete loss of attractive behavior though synapse number was only reduced 40% - Mosca et al. 2017, eLife), it is curious. Can the authors comment further? Ideally, some of these synapses could be visualized by EM to determine if the remaining synapses are indeed of correct morphology. If not, this could support their assertion of silent inputs from page 7. Further, what happens to the remaining synapses? VM7 PNs should be receiving some activity from other local interneurons as well as neighboring PNs.

      We agree that on the surface, our electrophysiology results are more striking than one might expect solely from our measurements of VM7 morphology and presynaptic content. As the reviewer points out, previous studies of fly olfaction have consistently found that relatively modest shifts in glomerular volume in response to prolonged early-life odorant exposure can be accompanied by drastic changes in physiology and behavior (in addition, we would add Devaud et al., 2003; Devaud et al., 2001; Acebes et al., 2012; and Chodankar et al., 2020, as foundational examples of this phenomenon).

      A major driver of these changes appears to be remodeling of antennal lobe inhibitory LNs (see Das et al., 2011; Wilson and Laurent, 2005; Chodankar et al., 2020), especially GABAergic inhibitory interneurons. Perhaps increased LN inhibition of chronically activated PNs, on top of the reduced excitatory inputs resulting from ensheathing glial pruning of the Or42a OSN terminal arbor, would explain the near-total loss of VM7 PN activity we observe after critical period EB exposure. However, given that the scope of our study is limited to critical-period glial biology and does not address the complex topics of LN rewiring or synapse morphology, we have removed the sentence in which we raise the possibility of "silent synapses" in order to avoid confusion. The reviewer is also correct that VM7 PNs have inputs from non-ORN presynaptic partners, including LNs and PNs. So again, perhaps increased inhibitory inputs contribute to the near-complete silencing of the PNs. Given the heterogeneity of LN populations, we view this area as fertile ground for future research.

      Language / Data Considerations:

      (1) Or42a OSNs have other inputs, namely, from LNs. What are they doing here? Are they also affected?

      As discussed above, the question of how LN innervation of Or42a OSNs is altered by critical-period EB exposure is an intriguing one that fully deserves its own follow-up study, and we have tried to avoid speculation about the role of LNs when discussing our pruning phenotype. We note at multiple points throughout the text the importance of LNs and refer to previous studies of LN plasticity in response to chronic odorant exposure. 

      (2) In all of the measurements, what happens to synaptic density? Is it maintained? Does it scale precisely? This would be helpful to know.

      We have performed the analysis as requested, which is now included in a supplement to Figure 5. We found that synaptic density shows no consistent trend across conditions and glial driver genotypes.

      (3) In Figure 5, the controls for the alrm-GAL4 experiments show a much more drastic phenotype than controls in previous figures. Does this background influence how we can interpret the results? Could the response instead have hit a floor effect, such that it's just not possible to recover?

      The reviewer is correct that following EB exposure, the astrocyte vs. ensheathing glial driver backgrounds displayed modest differences in the extent of pruning by volume (0.27 for astrocytes, 0.36 for ensheathing glia). We note that the two drpr RNAi lines that we used had non-significant (but opposite) effects on the estimated Or42a OSN volume in combination with the astrocyte driver, arguing against a floor effect. In addition, a recent publication by Nelson et al. (2024) replicated our findings with a different astrocyte GAL4 driver and draper RNAi line. Thus, we are confident that this result is biologically meaningful and not an artifact of genetic background.

      (4) The estimation of infiltration measurement in Figure 6 is tricky to interpret. It implies that the projections occupy the same space, which cannot be possible. I'd advocate a tempering of some of this language and consider an intensity measurement in addition to their current volume measurements (or perhaps an "occupied space" measurement) to more accurately assess the level of resolution that can be obtained via these methods.

      We completely agree that our language in describing EG infiltration could have been more precise, and we modified our language as suggested. The combination of the Or42a-mCD8::GFP label we and others use, our use of confocal microscopy, and our Surface pipeline in Imaris combine to create a glomerular mask that traces the outline of the OSN terminal arbor, but is nonetheless not 100% “filled” by neuronal membrane and/or glial processes. 

      (5) Do the authors have the kind of resolution needed to tell whether there is indeed Or42a-positive axon fragmentation (as asserted on p16 and from their data in figures 4, 5, 7). If the authors want to say this, I would advocate for a measurement of fragmentation / total volume to prove it - if not, I would advocate tempering of the current language.

      The reviewer brings up a fair criticism: while our assertion about axon fragmentation was based on our visual observations of hundreds of EB-exposed brains, the resolution limits of confocal microscopy do not allow us to rigorously rule out fragmentation within a bundle of OSN axons. Instead, our most compelling evidence for the lack of EB-induced Or42a OSN fragmentation in the absence of glial Draper comes from our new electrophysiology data (Figure 4F-J) in repo-GAL4>UAS-draper RNAi animals. We found no difference in spontaneous release from Or42a terminals in flies exposed to mineral oil or 15% EB from 0-2 DPE, which would not be the case if there were Draper-independent fragmentation along the axons or terminal arbors upon EB exposure. We have updated our discussion of fragmentation so that our statements are based on this new evidence, and not confocal microscopy.

      (6) There is an interesting Discussion opportunity missed here. Some experiments would, ostensibly, require pupae to detect odorants within the casing via structures consistently in place for olfaction during pupation. It would be useful for the authors to discuss a little more deeply when this critical period may arise and why the experiment where pupae are exposed to EB two days before eclosion and there is no response, occurs as it does. I agree that it's clearly a time when they are not sensitive to the odorant, but that could just be because there's no ability to detect odorants at that time. Is it a question of non-sensitivity to EB or just non-sensitivity to everything?

      We share the reviewer's interest in the plasticity of the olfactory circuit during pupariation, although, as they correctly point out, it is difficult to conceive of an odorant-exposure experiment that could disentangle the barrier effects of the puparium from the sensitivity of the circuit itself, and our pre-eclosion data in Figure 3A, D, G do not distinguish between the two. While an investigation into the mechanism by which the critical period for ethyl butyrate exposure opens and closes is outside the scope of the present study, we would consider the physical barrier of the puparium to be a satisfactory explanation for why eclosion marks the functional opening of experience-dependent plasticity. As the reviewer suggests, we have added this important nuance to our discussion of the opening of the critical period in the corresponding paragraph of the Results, as well as to the Discussion section "Glomeruli exhibit dichotomous responses to critical period odor exposure."

      Minor Elements:

      (1) Page 6 bottom: "Or4a-mCD8::GFP" should be "Or42a-mCD8::GFP"

      (2) Page 15, end of last full paragraph. Remove the "e"

      Thank you for pointing out these typos. They have been corrected. 

      Reviewer #2 (Public Review):

      Sensory experiences during developmental critical periods have long-lasting impacts on neural circuit function and behavior. However, the underlying molecular and cellular mechanisms that drive these enduring changes are not fully understood. In Drosophila, the antennal lobe is composed of synapses between olfactory sensory neurons (OSNs) and projection neurons (PNs), arranged into distinct glomeruli. Many of these glomeruli show structural plasticity in response to early-life odor exposure, reflecting the sensitivity of the olfactory circuitry to early sensory experiences.

      In their study, the authors explored the role of glia in the development of the antennal lobe in young adult flies, proposing that glial cells might also play a role in experience-dependent plasticity. They identified a critical period during which both structural and functional plasticity of OSN-PN synapses occur within the ethyl butyrate (EB)-responsive VM7 glomerulus. When flies were exposed to EB within the first two days post-eclosion, significant reductions in glomerular volume, presynaptic terminal numbers, and postsynaptic activity were observed. The study further highlights the importance of the highly conserved engulfment receptor Draper in facilitating this critical period plasticity. The authors demonstrated that, in response to EB exposure during this developmental window, ensheathing glia increase Draper expression, infiltrate the VM7 glomerulus, and actively phagocytose OSN presynaptic terminals. This synapse pruning has lasting effects on circuit function, leading to persistent decreases in both OSN-PN synapse numbers and spontaneous PN activity as analyzed by perforated patch-clamp electrophysiology to record spontaneous activity from PNs postsynaptic to Or42a OSNs.

      In my view, this is an intriguing and potentially valuable set of data. However, since I am not an expert in critical periods or habituation, I do not feel entirely qualified to assess the full significance or the novelty of their findings, particularly in relation to existing research.

      We thank the reviewer for their insightful critique of our work. In response to their comments, we added additional physiological analysis and tempered our language around possible explanations for the apparent disconnect between the physiological and morphological effects of critical period odor exposure. These changes are explained in more detail in the response to the public review by Reviewer 1, and also in our responses outlined below.

      Reviewer #2 (Recommendations For The Authors):

      I do, though, have specific comments and questions concerning the presynaptic phenotype they deduce from confocal BRP stainings and electrophysiology.

      Concerning the number of active zones: this can hardly be deduced from standard-resolution confocal images and, maybe more importantly, lacking postsynaptic markers, particularly in light of their speculation about "silent synapses". There now exist tools for labeled, cell-type-specific expression of acetylcholine receptors and cholinergic postsynaptic density markers (importantly Drep2). Such markers should be included in their analysis. They should also refer to previous work on "brp-short", its original invention, and prior usage.

      We thank the reviewer for their thoughtful approach to our methodology and claims. While the use of confocal microscopy of Bruchpilot puncta to estimate numbers of presynapses is standard practice (see Furusawa et al., 2023; Aimino et al., 2022; Urwyler et al., 2019; Ackerman et al., 2021), the reviewer is correct that a punctum does not an active zone make. Bruchpilot staining and quantification is a well-validated tool for approximating the number of presynaptic active zones, not a substitute for super-resolution microscopy. We made changes to our language about active zones to make this distinction clearer. We have also removed the sentence where we discuss the possibility of “silent synapses,” which both reviewers felt was too speculative for our existing data. Finally, we are highly interested in characterizing the response of PNs and higher-order processing centers to critical-period odorant exposure as a future direction for our research. However, given the complexity of the subject, we chose to limit the scope of this study to the interactions between OSNs and glia. 

      Regarding their electrophysiological analysis and the plausibility of their findings: I am uncertain whether the moderate reduction in BRP puncta at the relevant OSN::PN synapse can fully account for the significantly reduced spontaneous PN activity they report. This seems particularly doubtful in the absence of any direct evidence for postsynaptically silent synapses. Perhaps this is my own naivety, but I wonder why they did not use antennal nerve stimulation in their experiments?

      We refer to previous studies of the AL indicating that moderate changes in glomerular volume and presynaptic content can translate to far more striking alterations in electrophysiology and behavior (Devaud et al., 2003; Devaud et al., 2001; Acebes et al., 2012; Chodankar et al., 2020; Mosca et al., 2017). This literature has demonstrated that chronic odorant exposure can result in remodeling of inhibitory local interneurons to suppress over-active inputs from OSNs. While we do not address the complex subject of interneuron remodeling in the present study, we find it highly likely that there would be significant changes in interneuron innervation of PNs, independent of glial phagocytosis of OSN excitatory inputs, resulting in additional inhibition. Moving forward, we are very interested in expanding these studies to include odor-evoked changes in PN activity.

      Additional minor point: The phrase "Soon after its molecular biology was described (et al., 1999), the Drosophila melanogaster" seems somewhat misleading. Isn't the field still actively describing the molecular biology of the fly olfactory system?

      We completely agree and have removed this sentence entirely.  

      Reviewing Editor's Note: to enhance the evidence from mostly compelling in most facets to solid would be to add physiology to the Draper analysis.

      These experiments have been completed and are presented in Figure 4F-J. 

      References

      Acebes A, Devaud J-M, Arnés M, Ferrús A. 2012. Central Adaptation to Odorants Depends on PI3K Levels in Local Interneurons of the Antennal Lobe. J Neurosci 32:417–422. doi:10.1523/jneurosci.2921-11.2012

      Ackerman SD, Perez-Catalan NA, Freeman MR, Doe CQ. 2021. Astrocytes close a motor circuit critical period. Nature592:414–420. doi:10.1038/s41586-021-03441-2

      Aimino MA, DePew AT, Restrepo L, Mosca TJ. 2022. Synaptic Development in Diverse Olfactory Neuron Classes Uses Distinct Temporal and Activity-Related Programs. J Neurosci 43:28–55. doi:10.1523/jneurosci.0884-22.2022

      Chodankar A, Sadanandappa MK, VijayRaghavan K, Ramaswami M. 2020. Glomerulus-Selective Regulation of a Critical Period for Interneuron Plasticity in the Drosophila Antennal Lobe. J Neurosci 40:5549–5560. doi:10.1523/jneurosci.2192-19.2020

      Das S, Sadanandappa MK, Dervan A, Larkin A, Lee JA, Sudhakaran IP, Priya R, Heidari R, Holohan EE, Pimentel A, Gandhi A, Ito K, Sanyal S, Wang JW, Rodrigues V, Ramaswami M. 2011. Plasticity of local GABAergic interneurons drives olfactory habituation. Proc Natl Acad Sci 108:E646–E654. doi:10.1073/pnas.1106411108

      Devaud J-M, Acebes A, Ramaswami M, Ferrús A. 2003. Structural and functional changes in the olfactory pathway of adult Drosophila take place at a critical age. J Neurobiol 56:13–23. doi:10.1002/neu.10215

      Devaud J-M, Acebes A, Ferrús A. 2001. Odor Exposure Causes Central Adaptation and Morphological Changes in Selected Olfactory Glomeruli in Drosophila. J Neurosci 21:6274–6282. doi:10.1523/jneurosci.21-16-06274.2001

      Furusawa K, Ishii K, Tsuji M, Tokumitsu N, Hasegawa E, Emoto K. 2023. Presynaptic Ube3a E3 ligase promotes synapse elimination through down-regulation of BMP signaling. Science 381:1197–1205. doi:10.1126/science.ade8978

      Mosca TJ, Luginbuhl DJ, Wang IE, Luo L. 2017. Presynaptic LRP4 promotes synapse number and function of excitatory CNS neurons. eLife 6:e27347. doi:10.7554/elife.27347

      Nelson N, Vita DJ, Broadie K. 2024. Experience-dependent glial pruning of synaptic glomeruli during the critical period. Sci Rep 14:9110. doi:10.1038/s41598-024-59942-3

      Urwyler O, Izadifar A, Vandenbogaerde S, Sachse S, Misbaer A, Schmucker D. 2019. Branch-restricted localization of phosphatase Prl-1 specifies axonal synaptogenesis domains. Science 364. doi:10.1126/science.aau9952

      Wilson RI, Laurent G. 2005. Role of GABAergic Inhibition in Shaping Odor-Evoked Spatiotemporal Patterns in the Drosophila Antennal Lobe. J Neurosci 25:9069–9079. doi:10.1523/jneurosci.2070-05.2005

    1. It's not really clear to me what this means in this context - can we provide examples of the "advantages" that you might give yourself? I imagine some students might think of using some plagiarism tools/AI etc. as merely levelling the playing field, or just enabling you to keep up if you're struggling. It's a little bit like the distinction between "privilege" in the sense of white people not having to experience racist microaggressions and "privilege" in the sense of having extreme wealth and high-profile connections.

    1. when you know and I'm looking at like how somebody mentioned something on this call

      when you know and I'm looking at like how somebody mentioned something on this call and the chat just like blows up and then all of this all of this other stuff which comes from human knowledge that's been created all of a sudden flows into this and you know to be able to click on all of these things and to learn all these things that it's it's it takes time and it's difficult right

    1. If you write to the lowest common denominator of reader, you are likely to end up with a cumbersome, tedious, book-like report that will turn off the majority of readers. However, if you do not write to that lowest level, you lose that segment of readers.

      When thinking about an audience, an issue I've seen is over-explaining, which may not only make most readers grow restless but also put too much pressure on the writer. It's good to want to explain, but over-explaining can cause different issues. Finding the right balance of context for the audience, while also being true to the writer, can only come with practice.

    1. The reader in turn thoroughly processes the information in order to give a thoughtful response or take appropriate action.

      Going into this class I understood that technical writing was a necessary ability for communicating thoroughly with my associates, but reading this shows me that it's more than just sharing information; it is also a tool to help communication run efficiently both ways, even if the person on the other end isn't aware of technical writing.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      The work analyzes how centrosomes mature before cell division. A critical aspect is the accumulation of pericentriolar material (PCM) around the centrioles to build competent centrosomes that can organize the mitotic spindle. The present work builds on the idea that the accumulation of PCM is catalyzed either by the centrioles themselves (leading to a constant accumulation rate) or by enzymes activated by the PCM itself (leading to autocatalytic accumulation). These ideas are captured by a previous model derived for PCM accumulation in C. elegans (ref. 8) and are succinctly summarized by Eq. 1. The main addition of the present work is to allow the activated enzymes to diffuse in the cell, so they can also catalyze the accumulation of PCM in other centrosomes (captured by Eqs. 2-4). The authors claim that this helps centrosomes to reach the same size, independent of potential initial mismatches.

      A strength of the paper is the simplicity of the equations, which are reduced to the bare minimum and thus allow a detailed inspection of the physical mechanism. One shortcoming of this approach is that all equations assume that the diffusion of molecules is much faster than any of the reactive time scales, although there is no experimental evidence for this.

      We appreciate the reviewer’s recognition of the strengths of our work. Indeed, the centrosome growth model incorporates multiple timescales corresponding to various reactions, and existing experimental data do not directly provide diffusion constants for the cytosolic proteins. However, we can estimate these diffusion constants using protein mass, based on the Stokes-Einstein relation, and compare the diffusion timescales with the reaction timescales obtained from FRAP analysis. For example, we estimate that the diffusion timescale for centrosomes separated by 5-10 micrometers is much smaller than the reaction timescales deduced from the FRAP experiments. Specifically, for SPD-5, a scaffold protein with a mass of ~150 kDa, the estimated diffusion constant is ~17 µm<sup>2</sup>/s, using the Stokes-Einstein relation and a reference diffusion constant of ~30 µm<sup>2</sup>/s for a 30 kDa GFP protein (reference: Bionumbers book). This results in a diffusion timescale of ~1 second for centrosomes 10 µm apart. In contrast, FRAP recovery timescales for SPD-5 in C. elegans embryos are on the order of several minutes, suggesting that scaffold protein binding reactions are much slower than diffusion. Therefore, a reaction-limited model is appropriate for studying PCM self-assembly during centrosome maturation. We have revised the manuscript to clarify this point and to include a discussion of the diffusion and reaction timescales.
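      The estimate above can be checked with a few lines of arithmetic. This is a back-of-envelope sketch using only the values quoted in the response (the reference diffusion constant for GFP, cubic-root mass scaling of the hydrodynamic radius, and a characteristic 3D diffusion time of t ≈ L²/6D):

```python
# Back-of-envelope check of the diffusion-vs-reaction timescale argument,
# using the quoted values: GFP (~30 kDa) with D ~30 um^2/s as reference.
D_gfp = 30.0                    # um^2/s, reference diffusion constant
m_gfp, m_spd5 = 30.0, 150.0     # protein masses, kDa

# Stokes-Einstein: D ~ 1/radius, and radius ~ mass**(1/3) for a globular protein
D_spd5 = D_gfp * (m_gfp / m_spd5) ** (1.0 / 3.0)

L = 10.0                        # um, separation between centrosomes
t_diff = L ** 2 / (6.0 * D_spd5)   # characteristic 3D diffusion time

print(round(D_spd5, 1), round(t_diff, 2))  # → 17.5 0.95
```

      A diffusion time of about one second over 10 µm, against FRAP recovery times of several minutes, is what justifies the reaction-limited treatment.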

      Spatially extended model with diffusion

      Both the reviewers have pointed out the importance of considering diffusion effects in centrosome size dynamics, and we agree that this is important to explore. We have developed a spatially extended 3D version of the centrosome growth model, incorporating stochastic reactions and diffusion (see Appendix 4). In this model, the system is divided into small reaction volumes (voxels), where reactions depend on local density, and diffusion is modeled as the transport of monomers/building blocks between voxels.

We find that diffusion can alter the timescales of growth, particularly when the diffusion timescale is comparable to or slower than the reaction timescale, potentially mitigating size inequality by slowing down autocatalysis. However, the main conclusions of the catalytic growth model remain unchanged, showing robust size regulation independent of the diffusion constant or centrosome separation (Figure 2—figure supplement 3). Hence, we focused on the effect of subunit diffusion on the autocatalytic growth model. We find that in the presence of diffusion, the size inequality is reduced with increasing diffusion timescale, i.e., increasing distance between centrosomes and decreasing diffusion constant (Figure 2—figure supplement 4). However, the lack of robustness in size control in the autocatalytic growth model remains: the final size difference still increases with increasing initial size difference. Notably, in the diffusion-limited regime (very small diffusion constants or large distances), the growth curve loses its sigmoidal shape, resembling the behavior in the non-autocatalytic limit (Figure 2). These findings are discussed in the revised manuscript.
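To illustrate the kind of model described above, here is a deliberately simplified, deterministic 1D sketch of two autocatalytic centrosomes drawing on a diffusing subunit pool. The model in Appendix 4 is stochastic and 3D; all rates, the voxel geometry, and the growth law below are illustrative assumptions:

```python
import numpy as np

# Toy 1D reaction-diffusion sketch: two centrosomes compete for a diffusing
# subunit pool. All parameters are illustrative, not the paper's values.
nx, h, dt = 50, 0.5, 1e-3        # voxels, voxel size (µm), time step (s)
D = 20.0                         # µm^2/s, subunit diffusion constant (assumed)
c = np.full(nx, 1.0)             # free-subunit density in each voxel
V = np.array([1.0, 1.1])         # centrosome sizes with a slight initial mismatch
sites = [15, 35]                 # voxel indices hosting the two centrosomes
k0, k1, km = 0.5, 0.05, 0.01     # base, autocatalytic, and loss rates (assumed)

for _ in range(20000):           # 20 s of growth
    # explicit diffusion step with zero-flux (reflecting) boundaries
    cp = np.pad(c, 1, mode="edge")
    c = c + dt * D * (cp[2:] - 2 * c + cp[:-2]) / h**2
    # growth depends on the LOCAL subunit density at each centriole's voxel
    for i, s in enumerate(sites):
        growth = (k0 + k1 * V[i]) * c[s] - km * V[i]
        V[i] += dt * growth
        c[s] = max(c[s] - dt * growth / h, 0.0)   # deplete the local voxel

print(V)   # autocatalysis preserves/amplifies the initial mismatch
```

Lowering `D` or moving `sites` further apart slows the autocatalytic feedback through local pool depletion, which is the qualitative effect described in the paragraph above.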

      Another shortcoming of the paper is that it is not clear what species the authors are investigating and how general the model is. There are huge differences in centrosome maturation and the involved proteins between species. However, this is not mentioned in the abstract or introduction. Moreover, in the main body of the paper, the authors mention C. elegans on pages 2 and 3, but refer to Drosophila on page 4, switching back to C. elegans on page 5, and discuss Drosophila on page 6. This is confusing and looks as if they are cherry-picking elements from various species. The original model in ref. 8 was constructed for C. elegans and it is not clear whether the autocatalytic model is more general than that. In any case, a more thorough discussion of experimental evidence would be helpful.

We believe one strength of our approach is its applicability across organisms. Our goal in comparing the theoretical model with experimental data from C. elegans and D. melanogaster is to demonstrate that the apparent qualitative differences in centrosome growth across species (see, e.g., the extent of size scaling discussed in the section “Cytoplasmic pool depletion regulates centrosome size scaling with cell size”) may arise from the same underlying mechanisms in the theoretical model, albeit with different parameter values. We acknowledge differences in regulatory molecules between species, but the core pathways remain conserved (see, e.g., Raff, Trends in Cell Biology, 2019, section: “Molecular Components of the Mitotic Centrosome Scaffold Appear to Have Been Conserved in Evolution from Worms to Humans”). In the revised manuscript, we have expanded the introduction to clarify this point and explain how our theory applies across species. We have also provided a clearer discussion of the experimental systems used throughout the manuscript and the available experimental evidence.

      The authors show convincingly that their model compensates for initial size differences in centrosomes and leads to more similar final sizes. These conclusions rely on numerical simulations, but it is not clear how the parameters listed in Table 1 were chosen and whether they are representative of the real situation. Since all presented models have many parameters, a detailed discussion on how the values were picked is indispensable. Without such a discussion, it is not clear how realistic the drawn conclusions are. Some of this could have been alleviated using a linear stability analysis of the ordinary differential equations from which one could have gotten insight into how the physical parameters affect the tendency to produce equal-sized centrosomes.

      Following the suggestion of the reviewer, we have revised the manuscript to add references and discussions justifying the choice of the parameter values used for the numerical simulations. These references and parameter choices can be found in Table 1 and Table 2, and are also discussed in relevant figure captions and within the manuscript text.

We thank the reviewer for the excellent suggestion of including a linear stability analysis of the ODE models of centrosome growth. We have included linear stability analyses of the catalytic and autocatalytic growth models in Appendix 3. The analysis of the catalytic growth model reaffirms the robustness of size equality, and the analysis of autocatalytic growth provides an approximate condition for size inequality. We have modified the revised manuscript to discuss these results.
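As an illustration of this kind of analysis, consider a toy two-centrosome autocatalytic model sharing a single conserved pool (this is an assumed illustrative form, not the manuscript's exact equations). Linearizing about the symmetric fixed point, the eigenvalue of the asymmetric (size-difference) mode comes out as k1*c* - km, giving a simple size-inequality condition:

```python
import numpy as np

k0, k1, km, N = 0.1, 0.5, 1.0, 10.0   # illustrative rate constants (assumed)

def f(V):
    # toy autocatalytic growth of two centrosomes sharing one pool:
    #   dV_i/dt = (k0 + k1*V_i)*c - km*V_i,  with  c = N - V1 - V2
    c = N - V.sum()
    return (k0 + k1 * V) * c - km * V

def g(v):
    # symmetric fixed-point condition V1 = V2 = v
    return (k0 + k1 * v) * (N - 2 * v) - km * v

# locate the symmetric fixed point by bisection
lo, hi = 0.0, N / 2
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
Vstar = 0.5 * (lo + hi)

# numerical Jacobian at the fixed point and its eigenvalues
eps, V0 = 1e-6, np.array([Vstar, Vstar])
J = np.column_stack([(f(V0 + eps * e) - f(V0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
eigs = np.sort(np.linalg.eigvals(J).real)
cstar = N - 2 * Vstar

print(f"V* = {Vstar:.3f}, eigenvalues = {eigs}")
# asymmetric mode eigenvalue equals k1*c* - km: positive means an initial
# mismatch grows; near zero means size differences relax only very slowly
assert abs(eigs[-1] - (k1 * cstar - km)) < 1e-4
```

For these illustrative parameters the asymmetric eigenvalue is negative but close to zero, i.e. the size-difference mode relaxes extremely slowly, consistent with the line-attractor-like behavior discussed below.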

      The authors use the fact that their model stabilizes centrosome size to argue that their model is superior to the previously published one, but I think that this conclusion is not necessarily justified by the presented data. The authors claim that "[...] none of the existing quantitative models can account for robustness in centrosome size equality in the presence of positive feedback." (page 1; similar sentence on page 2). This is not shown convincingly. In fact, ref 8. already addresses this problem (see Fig. 5 in ref. 8) to some extent.

The linear stability analysis shown in Fig. 5 of ref. 8 (Zwicker et al., PNAS, 2014) shows that the solutions are stable around the fixed point, and it was inferred from this result that Ostwald ripening can be suppressed by the catalytic activity of the centriole, thereby stabilizing the centrosomes (droplets) against coarsening. However, if a size discrepancy arises from the growth process itself (e.g., due to autocatalysis), the timescale over which such a discrepancy relaxes is not clear from that result. We show (in Figure 2—figure supplement 3) that for any appreciable amount of positive feedback, the solution moves very slowly around the fixed point (almost like a line attractor) and cannot reach the fixed point on a biologically relevant timescale. Hence, the model in ref. 8 does not provide a robust mechanism for size control in the presence of autocatalytic growth. We have added this discussion to the Discussion section.

      More importantly, the conclusion seems to largely be based on the analysis shown in Fig. 2A, but the parameters going into this figure are not clear (see the previous paragraph). In particular, the initial size discrepancy of 0.1 µm^3 seems quite large, since it translates to a sphere of a radius of 300 nm. A similarly large initial discrepancy is used on page 3 without any justification. Since the original model itself already showed size stability, a careful quantitative comparison would be necessary.

      We thank the reviewer for the valuable suggestions. The parameters used in Fig. 2A are listed in Table 1 with corresponding references, and we used the parameter values from Zwicker et al. (2014) for rate constants and concentrations.

      The issue of initial size differences between centrosomes is important, but quantitative data on this are not readily available for C. elegans and Drosophila. Centrosomes may differ initially due to disparities in the amount and incorporation rate of PCM between the mother and daughter centrioles. Based on available images and videos (Cabral et al, Dev. Cell, 2019, DOI: https://doi.org/10.1016/j.devcel.2019.06.004), we estimated an initial radius of ~0.5 μm for centrosomes. Accounting for a 5% radius difference would lead to a volume difference of ~0.1 μm<sup>3</sup>, which was used in our analysis (Fig. 2A). These differences likely arise from distinct growth conditions of centrosomes containing different centrioles (older mother and newer daughter).
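The volume estimate above can be verified directly (r = 0.5 µm and the 5% radius difference are the values stated in our response):

```python
import math

r = 0.5                                       # µm, estimated initial centrosome radius
V = (4 / 3) * math.pi * r**3                  # ≈ 0.524 µm^3
V_big = (4 / 3) * math.pi * (1.05 * r) ** 3   # radius 5% larger
dV = V_big - V                                # ≈ 0.083 µm^3, i.e. of order 0.1 µm^3

print(f"V ≈ {V:.3f} µm³, ΔV ≈ {dV:.3f} µm³")
assert 0.05 < dV < 0.15
```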

      More importantly, we emphasize that the initial size difference does not qualitatively alter the results presented in Figure 2. We agree that a quantitative analysis will further clarify our conclusions, and we have revised the manuscript accordingly. For example, Figure 2—figure supplement 3 provides a detailed analysis of how the final centrosome size depends on initial size differences across various parameter values. Additionally, Appendix 3 now includes analytical estimates of the onset of size inequality as a function of these parameters.

      The analysis of the size discrepancy relies on stochastic simulations (e.g., mentioned on pages 2 and 4), but all presented equations are deterministic. It's unclear what assumptions go into these stochastic equations, and how they are analyzed or simulated. Most importantly, the noise strength (presumably linked to the number of components) needs to be mentioned. How is this noise strength determined? What are the arguments for this choice? This is particularly crucial since the authors quote quantitative results (e.g., "a negligible difference in steady-state size (∼ 2% of mean size)" on page 4).

As described in the Methods, we used the exact Gillespie method (Gillespie, JPC, 1977) to simulate stochastic trajectories of the system, corresponding to the deterministic growth and reaction kinetics outlined in the manuscript. We have expanded the Methods to include further details on the stochastic simulations and refer to Appendix 1, where we describe the chemical master equations governing autocatalytic growth.
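For readers unfamiliar with the method, a minimal Gillespie (SSA) sketch for two centrosomes exchanging subunits with a shared finite pool looks like the following. The reaction scheme and all rate values are illustrative assumptions, not the manuscript's parameters (those are in Appendix 1 and Table 1):

```python
import random

# Minimal Gillespie SSA: two centrosomes attach subunits from a shared pool
# (with an autocatalytic term) and detach them back. Rates are illustrative.
random.seed(0)
k0, k1, km = 0.01, 0.002, 0.1   # base attach, autocatalytic attach, detach rates
pool, N = 500, [5, 5]           # free subunits and the two centrosome sizes
t, t_end = 0.0, 200.0

while t < t_end:
    # propensities: attach to centrosome 0/1, detach from centrosome 0/1
    a = [(k0 + k1 * N[0]) * pool, (k0 + k1 * N[1]) * pool,
         km * N[0], km * N[1]]
    a_tot = sum(a)
    if a_tot == 0:
        break
    t += random.expovariate(a_tot)       # exponential waiting time to next event
    r, pick = random.random() * a_tot, 0
    while pick < 3 and r > a[pick]:      # select which reaction fires
        r -= a[pick]
        pick += 1
    if pick < 2:
        N[pick] += 1; pool -= 1          # attachment consumes a free subunit
    else:
        N[pick - 2] -= 1; pool += 1      # detachment returns one to the pool

print(N, pool)
```

Because every event moves exactly one subunit, the total copy number is conserved, which is a useful invariant to check in any SSA implementation.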

The noise strength (fluctuations about the mean centrosome size) does depend on the total monomer concentration (the pool size), and this may affect size inequality. Total monomer concentrations of similar magnitude were used in the catalytic (0.04 µM) and autocatalytic (0.33 µM) growth simulations. These pool sizes are similar to those in previous studies (Zwicker et al., PNAS, 2014) and have been optimized to obtain a good fit with experimental growth curves from C. elegans embryo data.

To present more quantitative results, we have revised our manuscript to add data showing the effect of pool size on centrosome size inequality (Figure 3—figure supplement 2). We find that the size inequality in catalytic growth increases with decreasing pool size, as this inequality originates from stochastic fluctuations in individual centrosome sizes. The relative size inequality (the ratio dV/&lt;V&gt;) in autocatalytic growth does not depend strongly on the pool size (dV and &lt;V&gt; both increase similarly with pool size).

      Moreover, the two sets of testable predictions that are offered at the end of the paper are not very illuminative: The first set of predictions, namely that the model would anticipate an "increase in centrosome size with increasing enzyme concentration, the ability to modify the shape of the sigmoidal growth curve, and the manipulation of centrosome size scaling patterns by perturbing growth rate constants or enzyme concentrations.", are so general that they apply to all models describing centrosome growth. Consequently, these observations do not set the shared enzyme pool apart and are thus not useful to discriminate between models. The second part of the first set of predictions about shifting "size scaling" is potentially more interesting, although I could not discern whether "size scaling" referred to scaling with cell size, total amount of material, or enzymatic activity at the centrioles. The second prediction is potentially also interesting and could be checked directly by analyzing published data of the original model (see Fig. 5 of ref. 8). It is unclear to me why the authors did not attempt this.

In response to the reviewers' valuable feedback, we have revised the manuscript to include results on potential methods for distinguishing catalytic growth from autocatalytic growth. Since the growth dynamics of a single centrosome do not differ significantly between the two models, it is necessary to experimentally examine the growth dynamics of a centrosome pair under various initial size perturbations. In Figure 3—figure supplement 2, we present theoretical predictions for both catalytic and autocatalytic growth models, illustrating the correlation between initial and final sizes after maturation. The figure demonstrates that the initial and final size differences should be correlated only in autocatalytic growth, and that the relative size inequality decreases with increasing subunit pool size in catalytic growth while remaining almost unchanged in autocatalytic growth. These predictions can be tested experimentally by inducing varying centrosome sizes at the early stage of maturation for different expression levels of the scaffold-forming proteins.

A second experimentally testable feature of the catalytic growth model is the sharing of the enzyme between both centrosomes. This could be tested through immunofluorescent staining of the kinase or by constructing a FRET reporter for PLK1 activity, to determine whether the active form of PLK1 is found in the cytoplasm around the centrosomes, indicating a shared pool of active enzyme. Additionally, photoactivated localization microscopy could be employed: fluorescently tagged enzyme can be selectively photoactivated at one centrosome and the intensity measured at the other centrosome to quantify the extent of enzyme sharing between the centrosomes.

We also discuss shifts in centrosome size scaling behavior with cell size obtained by varying parameters of the catalytic growth model (Figure 4). While a quantitative analysis of size scaling in Drosophila is currently unavailable, such an investigation could enable us to distinguish the catalytic growth mode from other models. We have included this point in the Discussion section.

      “The second prediction is potentially also interesting …” We assume the reviewer is referencing the scenario in Zwicker et al. (ref 8), where differences in centriole activity lead to unequal centrosome sizes. The data in that study represent a case of centrosome growth with variable centriole activity, resulting in size differences in both autocatalytic and catalytic growth models. This differs from our proposed experiment, where we induce unequal centrosome sizes without modifying centriole activity. We have now revised the text to clarify this distinction.

Taken together, I think the shared enzyme pool is an interesting idea, but the experimental evidence for it is currently lacking. Moreover, the model seems to make few testable predictions that differ from previous models.

      We appreciate the reviewer’s interest in the core idea of our work. As mentioned earlier, we have improved the clarity in model predictions in the revised discussion section. Unfortunately, the lack of publicly available experimental data limits our ability to provide more direct experimental evidence. However, we are hopeful that our theoretical model will inspire future experiments to test these model predictions.

      Reviewer #2 (Public Review):

      Summary:

In this paper, Banerjee &amp; Banerjee argue that a solely autocatalytic assembly model of the centrosome leads to size inequality. The authors instead propose a catalytic growth model with a shared enzyme pool. Using this model, the authors predict that size control is enzyme-mediated and are able to reproduce various experimental results such as centrosome size scaling with cell size and centrosome growth curves in C. elegans.

      The paper contains interesting results and is well-written and easy to follow/understand.

      We are delighted that the reviewer finds our work interesting, and we appreciate the thoughtful suggestions provided. In response, we have revised the text and figures to incorporate these recommendations. Below, we address each of the reviewer’s comments point by point:

      Suggestions:

      ● In the Introduction, when the authors mention that their "theory is based on recent experiments uncovering the interactions of the molecular components of centrosome assembly" it would be useful to mention what particular interactions these are.

      As the reviewer suggested, we have modified the introduction section to add the experimental observations upon which we build our model.

● In the Results and Discussion sections, the authors note various similarities and differences between what is known regarding centrosome formation in C. elegans and Drosophila. It would have been helpful to already make such distinctions in the Introduction (where some phenomena that may be C. elegans specific are implied to hold for centrosomes universally). It would also be helpful to include more comments for the possible implications for other systems in which centrosomes have been studied, such as human, Zebrafish, and Xenopus.

      We thank the reviewer for this suggestion. We have modified the Introduction to motivate the comparative study of centrosome growth in different organisms and draw relevant connections to centrosome growth in other commonly studied organisms like Zebrafish and Xenopus.

      ● For Fig 1.C, the two axes are very close to being the same but are not. It makes the graph a little bit more difficult to interpret than if they were actually the same or distinctly different. It would be more useful to have them on the same scale and just have a legend.

We have modified Figure 1C in the revised manuscript. The plot now shows the growth of a single centrosome and of a pair of centrosomes on the same y-axis scale.

      ● The authors refer to Equation 1 as resulting from an "active liquid-liquid phase separation", but it is unclear what that means in this context because the rheology of the centrosome does not appear to be relevant.

      We used the term “active liquid-liquid phase separation” simply to refer to a previous model proposed by Zwicker et al (PNAS, 2014) where the underlying process of growth results from liquid-liquid phase separation. We agree with the reviewer that the rheological property of the centrosome is not very relevant in our discussions and we have thus removed the sentence from the revised manuscript to avoid any confusion.

● The authors reject the non-cooperative limit of Eq 1 because, even though it leads to size control, it does not give sigmoidal dynamics (Figure 2B). While I appreciate that this is just meant to be illustrative, I still find it to be a weak argument because I would guess a number of different minor tweaks to the model might keep size control while inducing sigmoidal dynamics, such as size-dependent addition or loss rates (which could be due to reactions happening on the surface of the centrosome instead of in its bulk, for example). Is my intuition incorrect? Is there an alternative reason to reject such possible modifications?

The reviewer raises an interesting point here. However, we disagree with the idea that minor adjustments to the model can produce sigmoidal growth curves while still maintaining size control. In the absence of an external, time-dependent increase in building block concentration (which would lead to an increasing growth rate), achieving sigmoidal growth requires a positive feedback mechanism in the growth rate. This positive feedback alone could introduce size inequality unless shared equally between the centrosomes, as it is in our model of catalytic growth in a shared enzyme pool. The proposed modification involving size-dependent addition or loss rates due to surface assembly/disassembly may result in unequal sizes precisely because of this positive feedback. A similar example is provided in Appendix 1, where assembly and disassembly across the pericentriolar material volume lead to sigmoidal growth but also generate significant size inequality and lack of robustness in size control.

      ● While the inset of Figure 3D is visually convincing, it would be good to include a statistical test for completeness.

Following the reviewer’s suggestion, we present a statistical analysis in Figure 3—figure supplement 2 of the revised manuscript to enhance clarity. We show that the final size difference values are uncorrelated with the initial size difference (Pearson’s correlation coefficient ~ 0), indicating the robustness of the size regulation mechanism.
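To illustrate the statistical test, Pearson's correlation coefficient can be computed directly; the data below are synthetic (made up for this sketch) and merely mimic the catalytic model's prediction that the final size difference is noise, independent of the initial one:

```python
import random
from statistics import mean

random.seed(1)
# synthetic (made-up) data mimicking the catalytic model's prediction:
# the final size difference is pure noise, independent of the initial one
init_diff = [random.uniform(0.0, 0.2) for _ in range(50)]
final_diff = [random.gauss(0.0, 0.01) for _ in range(50)]

def pearson(x, y):
    # Pearson correlation coefficient: cov(x, y) / (std(x) * std(y))
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

r = pearson(init_diff, final_diff)
print(f"Pearson r = {r:.3f}")   # near zero: final size forgets the initial mismatch
assert abs(r) < 0.4             # uncorrelated up to sampling noise
```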

      ● The authors note that the pulse in active enzyme in their model is reminiscent of the Polo kinase pulse observed in Drosophila. Can the authors use these published experimental results to more tightly constrain what parameter regime in their model would be relevant for Drosophila? Can the authors make predictions of how this pulse might vary in other systems such as C. elegans?

      Thank you for the insightful suggestion regarding the use of pulse dynamics in experiments to better constrain the model’s parameter regime. In our revised manuscript, we attempted this analysis; however, the data from Wong et al. (EMBO 2022) for Drosophila are presented as normalized intensity in arbitrary units, rather than as quantitative measures of centrosome size or Polo enzyme concentration. This lack of quantitative data limits our ability to benchmark the model beyond capturing qualitative trends. We thus believe that quantitative measurements of centrosome size and enzyme concentration are necessary to achieve a tighter alignment between model predictions and biological data.

We discuss the enzyme dynamics in C. elegans in the revised manuscript. We find that the enzyme dynamics corresponding to the fitted growth curves of C. elegans centrosomes are distinctly different from those observed in Drosophila. Instead of a pulse-like feature, we find a step-like increase in the cytosolic active enzyme concentration.

● The authors mention that the shared enzyme pool is likely not diffusion-limited in C. elegans embryos, but this might change in larger embryos such as Drosophila or Xenopus. It would be interesting for the authors to include a more in-depth discussion of when diffusion will or will not matter, and what the consequence of being in a diffusion-limited regime might be.

      Both the reviewers have pointed out the importance of considering diffusion effects in centrosome size dynamics, and we agree that this is important to explore. We have developed a spatially extended 3D version of the centrosome growth model, incorporating stochastic reactions and diffusion (see Appendix 4). In this model, the system is divided into small reaction volumes (voxels), where reactions depend on local density, and diffusion is modeled as the transport of monomers/building blocks between voxels.

We find that diffusion can alter the timescales of growth, particularly when the diffusion timescale is comparable to or slower than the reaction timescale, potentially mitigating size inequality by slowing down autocatalysis. However, the main conclusions of the catalytic growth model remain unchanged, showing robust size regulation independent of the diffusion constant or centrosome separation (Figure 2—figure supplement 3). Hence, we focused on the effect of subunit diffusion on the autocatalytic growth model. We find that in the presence of diffusion, the size inequality is reduced with increasing diffusion timescale, i.e., increasing distance between centrosomes and decreasing diffusion constant (Figure 2—figure supplement 4). However, the lack of robustness in size control in the autocatalytic growth model remains: the final size difference still increases with increasing initial size difference. Notably, in the diffusion-limited regime (very small diffusion constants or large distances), the growth curve loses its sigmoidal shape, resembling the behavior in the non-autocatalytic limit (Figure 2). These findings are discussed in the revised manuscript.

      ● The authors state "Firstly, our model posits the sharing of the enzyme between both centrosomes. This hypothesis can potentially be experimentally tested through immunofluorescent staining of the kinase or by constructing FRET reporter of PLK1 activity." I don't understand how such experiments would be helpful for determining if enzymes are shared between the two centrosomes. It would be helpful for the authors to elaborate.

Our results indicate that the centrosome-activated enzyme must be shared for robust regulation of centrosome size equality. If a FRET reporter of the active form of the enzyme (e.g., PLK1) can be constructed, then the localization of the active form of the enzyme in the cytosol may be determined. We propose this based on reports of studying PLK activity in subcellular compartments using FRET, as described in Allen &amp; Zhang, BBRC (2006). Such experiments would provide direct evidence for the shared enzyme pool. Following the reviewer’s suggestion, we have modified the description of the FRET-based experimental test of the shared enzyme pool hypothesis in the revised manuscript.

      Additionally, we have added another possible experimental test based on photoactivated localization microscopy (PALM), where tagged enzyme can be selectively photoactivated in one centrosome and intensity measured at the other centrosome to indicate whether the enzyme is shared between the centrosomes.

      Recommendations for the authors:

      The manuscript needs to clarify better what species the model describes, how alternative models were rejected, and how the parameters were chosen.

In the revised manuscript, we have connected the chemical species in our model to those documented in organisms like Drosophila and C. elegans. This connection is detailed in the main text under the Catalytic Growth Model section and summarized in Table 2. We discuss alternative models and our reasons for excluding them in the first results section on autocatalytic growth, with additional details provided in Appendix 1 and the accompanying supplementary figures. The selection of model parameters is addressed in the main text and methods, with references listed in Table 1. We believe that these revisions, along with our point-by-point responses to reviewer comments, comprehensively address all reviewer concerns.

      Reviewer #1 (Recommendations For The Authors):

      I think the style and structure of the paper could be improved on at least two accounts:

      (1) What's the role of the last section ("Multi-component centrosome model reveals the utility of shared catalysis on centrosome size control.")? It seems to simply add another component, keeping the essential structure of the model untouched. Not surprisingly, the qualitative features of the model are preserved and quantitative features are not discussed anyway.

      This model provides a more realistic description of centrosome growth by incorporating the dynamics of the two primary scaffold-forming subunits and their interactions with an enzyme. It is based on the observation that the major interaction pathways among centrosome components are conserved across many organisms (see Raff, Trends in Cell Biology, 2019 and Table 2), typically involving two scaffold-forming proteins and one enzyme that mediates positive feedback between them. These pathways may involve homologous proteins in different species.

      This model allows us to validate the experimentally observed spatial spread of the two subunits, Cnn and Spd-2, in Drosophila. Additionally, we used it to investigate the impact of relaxing the assumption of a shared enzyme pool on size control. Although similar insights could be obtained using a single-component model, the two-component model offers a more biologically relevant framework. We have highlighted these points in the revised manuscript to ensure clarity.

(2) The very long discussion section is not very helpful. First, it mostly reiterates points already made in the main text. Second, it makes arguments for the choice of modeling (top left column of page 8), which probably should have been made when introducing the model. Third, it introduces new results (lower left column of page 8), which should probably be moved to the main text. Fourth, the interpretation of the model in light of the known biochemistry is useful and should probably be expanded although I think it would be crucial to keep information from different organisms clearly separate (this last point actually holds for the entire manuscript).

      We thank the reviewer for the feedback. We have modified the discussion section to focus more on the interpretation of the results, model predictions and future outlook with possible experiments to validate crucial aspects of the model. We have moved most of the justifications to the main text model description.

      Here are a few additional minor points:

      * page 1: Typo "for for" → "for"

      * Page 8: Typo "to to" → "to"

      We thank the reviewer for the useful recommendations. We have corrected all the typos in the revised manuscript.

* Why can diffusion be neglected in Eq. 1? This is discussed only very vaguely in the main text (on page 3). Strangely, there is some discussion of this crucial initial step in the discussion section, although the diffusion time of PLK1 is compared to the centrosome growth time there and not the more relevant enzyme-mediated conversion rate or enzyme deactivation rate.

We now discuss the justification for neglecting diffusion when motivating the model, and we have added a more detailed discussion in the Methods section. We estimate the diffusion timescales for the scaffold formers and the enzyme and compare them with the turnover timescales of the respective proteins Spd-2, Cnn, and Polo. We find that the proteins diffuse fast compared to their FRAP recovery timescales, indicating that the reaction timescales are slower than the diffusion timescales. Nevertheless, following the reviewer’s suggestion, we have also investigated the effect of diffusion on the growth process in Appendix 4.

      * Page 3: The comparison k_0^+ ≫ k_1^+ is meaningless without specifying the number of subunits n. I even doubt that this condition is the correct one since even if k_0^+ is two orders of magnitude larger than k_1^+, the autocatalytic term can dominate if there are many subunits.

We thank the reviewer for the insightful comment on the comparison between the growth rates k_0^+ and k_1^+. Indeed, the pool size matters, and we have now included a linear stability analysis of the autocatalytic growth equations in Appendix 3 to estimate the condition for size inequality. We have commented on these new findings in the revised manuscript.

      * The Eqs. 2-4 are difficult to follow in my mind. For instance, it is not clear why the variables N_av and N_av^E are introduced when they evidently are equivalent to S_1 and E. It would also help to explicitly mention that V_c is the cell volume. Moreover, do these equations contain any centriolar activity? If so, I could not understand what term mediates this. If not, it might be good to mention this explicitly.

Following the reviewer’s suggestion, we have modified Eqs. 2-4 and added the definition of V_c (the cell volume) to enhance clarity in the revised manuscript. The centriole activity is given by the rate k^+ in the catalytic model; we now mention this explicitly.

      * Page 4: The observed peak of active enzyme (Fig 3C) is compared to experimental observation of a PLK1 peak at centrosomes in Drosophila (ref. 28). However, if I understand correctly, the peak in the model refers to active enzyme in the entire cell (and the point of the model is that this enzymatic pool is shared everywhere), whereas the experimental measurement quantified the amount of PLK1 at the centrosome (and not the activity of the enzyme). How are the quantity in the model related to the experimental measurements?

      The reviewer is correct in pointing out the difference between the quantities calculated from our model and those measured in the experiment by Wong et al. We have clarified this point in the revised manuscript. We hypothesize that if, in future experiments, the active (phosphorylated) polo can be observed by using a possible FRET reporter of activity then the cytosolic pulse can be observed too. We discuss this point in the revised manuscript.

* Page 6: The analysis of asymmetry due to differences in centriolar activity has apparently been done for both models (Eq. 1 and Eqs. 2-4), referring to a parameter k_0^+ in both cases. How does this parameter enter in the latter model? More generally, I don't really understand the difference between the two rows in Fig. 5 - is the top row referring to growth driven by centriolar activity while the lower row refers to pure autocatalytic growth? If so, what about the hybrid model where both mechanisms enter? This is particularly relevant, since ref. 8 claims that such a hybrid model explains growth curves of asymmetric centrosomes quantitatively. Along these lines, the analysis of asymmetric growth is quite vague and at most qualitative. Can the models also explain differential growth quantitatively?

We believe the reviewer’s comment on centrosome size asymmetry may stem from a lack of clarity in our initial explanation. In this section, as shown in Figure 5, we compare the full autocatalytic model (where both k_0^+ and k_1^+ are non-zero) with the catalytic model. The confusion might have arisen from an unclear definition of centriolar activity in the catalytic growth model, which we have clarified in the revised manuscript. Specifically, we use k^+ in the catalytic model and k_0^+ in the autocatalytic model as indicators of centriolar activity.

      Our findings quantitatively demonstrate that variations in centriole activity can robustly drive size asymmetry in catalytic growth, independent of initial size differences. However, in autocatalytic growth, increased initial size differences make the system more vulnerable to a loss of regulation, as positive feedback can amplify these differences, ultimately influencing the final size asymmetry. Our results do not contradict Zwicker et al. (ref 8); rather, they complement it. We show that size asymmetry in autocatalytic growth is governed by both centriole activity and positive feedback, highlighting that centriole activity alone cannot robustly regulate centrosome size asymmetry within this framework.

      * The code for performing the simulations does not seem to be available

      We have now made the main codes available in a GitHub repository. Link: https://github.com/BanerjeeLab/Centrosome_growth_model

    1. That said, Darden’s specific experience as a Black woman with a full-time job was quite different than that of a white suburban housewife—the central focus of The Feminine Mystique.

      This is crucial! It's not just 'quite different'—it's a fundamentally different experience shaped by the intersecting forces of racism and sexism. This comparison should prompt a deeper reflection. Data analysis should not ignore the intersection of race, gender, and class.

    1. part of my love for the arts comes naturally from my profession as a writer. the irony here is that i do not have the words to adequately formulate just what this craft has given me. if i was forced to, i’d sum it up as power. these words i write are little pieces of me; it’s why i’ll never buy into the ‘separate the art from the artist’ nonsense. this work did not birth itself. it could not birth itself; it needed my pen to make it be. first there were only ideas but in my hands, stories and arguments and analysis live.
    1. I thought it was bad growing up during the “just Google it” age, but as society always manages to outdo itself, the current “just use ChatGPT” mindset is so much worse. At least with Google, there was a semblance of effort: sifting through search results, evaluating sources, and piecing together information to paraphrase for your paper that was due in the next hour. Now, the expectation is instant answers with zero context, no critical thinking, and a growing dependency on AI to do the heavy lifting. It’s not just a shortcut—it’s an exit ramp off the highway of media literacy.
    1. The lesson ends for the day. Both the teacher and the students have worked hard. The students have listened to and spoken only English for the period

Since they only practice for one hour a day, once that hour is up, if they don't practice at home they will most likely not remember everything they learned or talked about, unless it is reinforced a few times throughout the day or they are given homework. So, the next day, if the teacher has to repeat some of the previous day's lesson to help them remember, would that just set them back because of time, since they only have one hour? Or would you create a new lesson that includes some of the material from the previous lesson, reinforcing what was already taught while also covering new things?

1. Entertainment: trolls often find posts amusing, whether because of the disruption or the emotional reactions. If the motivation is to amuse oneself by causing others pain, that is called doing it for fun [ g6 ]

A large part of the reason for malicious comments nowadays is so-called 'fun' or jealousy. Or maybe it's just stereotyping: many people are bored and want some excitement in their lives, so they attack others; then different opposing sides show up to argue back, discussion threads start, and the topic's popularity grows. In fact, a lot of bloggers use this method to make their own videos more popular, but some innocent people are destroyed because some netizens insult them for their own fun. A female student at a well-known school dyed her hair pink, and that girl committed suicide because of it. I think the people who go and malign others for fun should reflect on themselves.

1. now living in the UK), we tend to look for examples from the US, the UK, and other English-language social media. Regarding social media networks such as Twitter (now branded “X”), Facebook, and YouTube, we are also more

I actually think TikTok will be more accessible to students now instead. For example, I swiped through quite a few videos on Instagram and YouTube recently joking that Americans are now trying to move to China because TikTok is banned in the U.S. I know it's a joke, but young people may be more inclined to use TikTok nowadays because its coverage is very wide. For example, young people who like to dance or share pets can share those things through the app and find more people their own age, which raises the probability that they will use it; the more people use it, the more information there naturally is on it, and most of the users are students. Common topics and the direction of discussion are also more unified.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

This study asks whether the phenomenon of crossmodal temporal recalibration, i.e. the adjustment of time perception by consistent temporal mismatches across the senses, can be explained by the concept of multisensory causal inference. In particular, they ask whether causal inference explains temporal recalibration better than a model assuming that crossmodal stimuli are always integrated, regardless of how discrepant they are.

      The study is motivated by previous work in the spatial domain, where it has been shown consistently across studies that the use of crossmodal spatial information is explained by the concept of multisensory causal inference. It is also motivated by the observation that the behavioral data showcasing temporal recalibration feature nonlinearities that, by their nature, cannot be explained by a fixed integration model (sometimes also called mandatory fusion).

      To probe this the authors implemented a sophisticated experiment that probed temporal recalibration in several sessions. They then fit the data using the two classes of candidate models and rely on model criteria to provide evidence for their conclusion. The study is sophisticated, conceptually and technically state-of-the-art, and theoretically grounded. The data clearly support the authors’ conclusions.

      I find the conceptual advance somewhat limited. First, by design, the fixed integration model cannot explain data with a nonlinear dependency on multisensory discrepancy, as already explained in many studies on spatial multisensory perception. Hence, it is not surprising that the causal inference model better fits the data.

      We have addressed this comment by including an asynchrony-contingent model, which is capable of predicting the nonlinearity of recalibration effects by employing a heuristic approximation of the causal-inference process (Fig. 3). We also updated the previous competitor model with a more reasonable asynchrony-correction model as the baseline of model comparison, which assumes recalibration aims to restore synchrony whenever the sensory measurement of SOA indicates an asynchrony. The causal-inference model outperformed both models, as indicated by model evidence (Fig. 4A). Furthermore, model predictions show that the causal-inference model more accurately captures recalibration at large SOAs at both the group (Fig. 4B) and the individual levels (Fig. S4).

      Second, and again similar to studies on spatial paradigms, the causal inference model fails to predict the behavioral data for large discrepancies. The model predictions in Figure 5 show the (expected) vanishing recalibration for large delta, while the behavioral data don’t decay to zero. Either the range of tested SOAs is too small to show that both the model and data converge to the same vanishing effect at large SOAs, or the model's formula is not the best for explaining the data. Again, the studies using spatial paradigms have the same problem, but in my view, this poses the most interesting question here.

      We included an additional simulation (Fig. 5B) to show that the causal-inference model can predict non-zero recalibration for long adapter SOAs, especially in observers with a high common-cause prior and low sensory precision. This ability to predict a non-zero recalibration effect even at large SOA, such as 0.7 s, is one key feature of the causal-inference model that distinguishes it from the asynchrony-contingent model.
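The point can be illustrated with a generic causal-inference calculation. This is not our full fitted model; the prior widths below are assumed purely for illustration:

```python
import math

# Generic causal-inference posterior for a measured SOA m (Koerding-style).
# Under a common cause the true SOA is near zero (width sigma_c1); under
# separate causes it is broadly distributed (width sigma_c2). Widths are
# assumed for illustration only.
def p_common_given_soa(m, sigma_meas, p_common, sigma_c1=0.05, sigma_c2=0.5):
    def normpdf(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2 * math.pi))
    like1 = normpdf(m, math.sqrt(sigma_meas**2 + sigma_c1**2))  # C = 1
    like2 = normpdf(m, math.sqrt(sigma_meas**2 + sigma_c2**2))  # C = 2
    return p_common * like1 / (p_common * like1 + (1 - p_common) * like2)

# At a 0.7 s adapter SOA, low precision (large measurement noise) plus a
# high common-cause prior keeps the posterior well above zero:
p_low  = p_common_given_soa(0.7, sigma_meas=0.05, p_common=0.5)
p_high = p_common_given_soa(0.7, sigma_meas=0.25, p_common=0.9)
```

With high precision and a neutral prior, the common-cause posterior at 0.7 s is essentially zero; with low precision and a strong prior it remains substantial, so the recalibration it drives does not vanish.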

      In my view there is nothing generally wrong with the study, it does extend the 'known' to another type of paradigm. However, it covers little new ground on the conceptual side.

      On that note, the small sample size of n=10 is likely not an issue, but still, it is on the very low end for this type of study.

      This study used a within-subject design, which included 3 phases each repeated in 9 sessions, totaling 13.5 hours per participant. This extensive data collection allows us to better constrain the model for each participant. Our conclusions are based on the different models’ ability to fit individual data.

      Reviewer #2 (Public Review):

      Summary:

      Li et al.’s goal is to understand the mechanisms of audiovisual temporal recalibration. This is an interesting challenge that the brain readily solves in order to compensate for real-world latency differences in the time of arrival of audio/visual signals. To do this they perform a 3-phase recalibration experiment on 9 observers that involves a temporal order judgment (TOJ) pretest and posttest (in which observers are required to judge whether an auditory and visual stimulus were coincident, auditory leading or visual leading) and a conditioning phase in which participants are exposed to a sequence of AV stimuli with a particular temporal disparity. Participants are required to monitor both streams of information for infrequent oddballs, before being tested again in the TOJ, although this time there are 3 conditioning trials for every 1 TOJ trial. Like many previous studies, they demonstrate that conditioning stimuli shift the point of subjective simultaneity (pss) in the direction of the exposure sequence.

These shifts are modest - maxing out at around -50 ms for auditory leading sequences and slightly less than that for visual leading sequences. Similar effects are observed even for the longest offsets, where it seems unlikely listeners would perceive the stimuli as synchronous (and therefore under a causal inference model you might intuitively expect no recalibration; indeed, the simulations in Figure 5 seem to predict exactly that, which isn't what most of their human observers did). Overall I think their data contribute evidence that a causal inference step is likely included within the process of recalibration.

      Strengths:

      The manuscript performs comprehensive testing over 9 days and 100s of trials and accompanies this with mathematical models to explain the data. The paper is reasonably clearly written and the data appear to support the conclusions.

      Weaknesses:

      While I believe the data contribute evidence that a causal inference step is likely included within the process of recalibration, this to my mind is not a mechanism but might be seen more as a logical checkpoint to determine whether whatever underlying neuronal mechanism actually instantiates the recalibration should be triggered.

      We have addressed this comment by replacing the fixed-update model with an asynchrony-correction model, which assumes that the system first evaluates whether the measurement of SOA is asynchronous, thus indicating a need for recalibration (Fig. 3). If it does, it shifts the audiovisual bias by a proportion of the measured SOA. We additionally included an asynchrony-contingent model, which is capable of replicating the nonlinearity of recalibration effects by a heuristic approximation of the causal-inference process.

      Model comparisons indicate that the causal-inference model of temporal recalibration outperforms both alternative models (Fig. 4A). Furthermore, the model predictions demonstrate that the causal-inference model more accurately captures recalibration at large SOAs at both the group level (Fig. 4B) and individual level (Fig. S4).

The authors’ causal inference model strongly predicts that there should be no recalibration for stimuli at a 0.7 s offset, yet only 3/9 participants appear to show this effect. They note that a significant difference between their design and that of others is the inclusion of longer lags, which are unlikely to originate from the same source, but they don’t offer any explanation for this key difference between their data and the predictions of a causal inference model.

      We added further simulations to show that the causal-inference model can predict non-zero recalibration also for longer adapter SOAs, especially in observers with a large common-cause prior (Fig. 5A) and low sensory precision (Fig. 5B). This ability to predict a non-zero recalibration effect even at longer adapter SOAs, such as 0.7 s, is a key feature of the causal-inference model that distinguishes it from the asynchrony-contingent model.

      I’m also not completely convinced that the causal inference model isn’t ‘best’ simply because it has sufficient free parameters to capture the noise in the data. The tested models do not (I think) have equivalent complexity - the causal inference model fits best, but has more parameters with which to fit the data. Moreover, while it fits ‘best’, is it a good model? Figure S6 is useful in this regard but is not completely clear - are the red dots the actual data or the causal inference prediction? This suggests that it does fit the data very well, but is this based on predicting held-out data, or is it just that by having more parameters it can better capture the noise? Similarly, S7 is a potentially useful figure but it's not clear what is data and what are model predictions (what are the differences between each row for each participant; are they two different models or pre-test post-test or data and model prediction?!).

      I'm not an expert on the implementation of such models but my reading of the supplemental methods is that the model is fit using all the data rather than fit and tested on held-out data. This seems problematic.

      We recognize the risk of overfitting with the causal-inference model. We now rely on Bayesian model comparisons, which use model evidence for model selection. This method automatically incorporates a penalty for model complexity through the marginalization over the parameter space (MacKay, 2003).
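The complexity penalty arises automatically from the marginalization. A minimal one-parameter toy illustration (toy models, not ours):

```python
import math

# One-parameter toy illustration of the Occam penalty in model evidence:
# both models are unit-variance Gaussian likelihoods for a datum x with a
# uniform prior on the mean mu; the "flexible" model differs only in
# spreading its prior over a 10x wider range, which dilutes its evidence.
x = 0.3

def log_evidence(mu_lo, mu_hi, n_grid=2001):
    # p(x | M) = integral of p(x | mu) p(mu | M) d mu  (Riemann sum)
    width = mu_hi - mu_lo
    step = width / (n_grid - 1)
    total = 0.0
    for i in range(n_grid):
        mu = mu_lo + i * step
        like = math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
        total += like * (1.0 / width) * step
    return math.log(total)

logZ_simple = log_evidence(-1.0, 1.0)
logZ_flexible = log_evidence(-10.0, 10.0)
log_bayes_factor = logZ_simple - logZ_flexible   # > 0: simple model wins
```

Both models can place their mean at the datum, but the flexible model wastes prior mass on parameter values the data rule out, so its evidence is lower; this is the sense in which model evidence penalizes complexity without held-out data.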

      Our design is not suitable for cross-validation because the model-fitting process is computationally intensive and time-consuming. Each fit of the causal-inference model takes approximately 30 hours, and multiple fits with different initial starting points are required to rule out that the parameter estimates correspond to local minima.

      I would have liked to have seen more individual participant data (which is currently in the supplemental materials, albeit in a not very clear manner as discussed above).

We have revised Supplementary Figures S4-S6 to show additional model predictions of the recalibration effect for individual participants, and participants’ temporal-order judgments are now shown in Supplementary Figure S7. These figures confirm the better performance of the causal-inference model.

The way that S3 is described in the text (line 141) makes it sound like everyone was in the same direction; however, it is clear that 2/9 listeners show the opposite pattern, and 2 have confidence intervals close to zero (albeit on the -ve side).

      We have revised the text to clarify that the asymmetry occurs in both directions and is idiosyncratic (lines 168-171). We summarized the distribution of the individual asymmetries of the recalibration effect across visual-leading and auditory-leading adapter SOAs in Supplementary Figure S2.

      Reviewer #3 (Public Review):

      Summary:

      Li et al. describe an audiovisual temporal recalibration experiment in which participants perform baseline sessions of ternary order judgments about audiovisual stimulus pairs with various stimulus-onset asynchronies (SOAs). These are followed by adaptation at several adapting SOAs (each on a different day), followed by post-adaptation sessions to assess changes in psychometric functions. The key novelty is the formal specification and application/fit of a causal-inference model for the perception of relative timing, providing simulated predictions for the complete set of psychometric functions both pre and post-adaptation.

      Strengths:

      (1) Formal models are preferable to vague theoretical statements about a process, and prior to this work, certain accounts of temporal recalibration (specifically those that do not rely on a population code) had only qualitative theoretical statements to explain how/why the magnitude of recalibration changes non-linearly with the stimulus-onset asynchrony of the adapter.

      (2) The experiment is appropriate, the methods are well described, and the average model prediction is a fairly good match to the average data (Figure 4). Conclusions may be overstated slightly, but seem to be essentially supported by the data and modelling.

      (3) The work should be impactful. There seems a good chance that this will become the go-to modelling framework for those exploring non-population-code accounts of temporal recalibration (or comparing them with population-code accounts).

      (4) A key issue for the generality of the model, specifically in terms of recalibration asymmetries reported by other authors that are inconsistent with those reported here, is properly acknowledged in the discussion.

      Weaknesses:

(1) The evidence for the model comes in two forms. First, two trends in the data (non-linearity and asymmetry) are illustrated, and the model is shown to be capable of delivering patterns like these. Second, the model is compared, via AIC, to three other models. However, the main comparison models are clearly not going to fit the data very well, so the fact that the new model fits better does not seem all that compelling. I would suggest that the authors consider a comparison with the atheoretical model they use to first illustrate the data (in Figure 2). This model fits all sessions but with complete freedom to move the bias around (whereas the new model constrains the way bias changes via a principled account). The atheoretical model will obviously fit better, but will have many more free parameters, so a comparison via AIC/BIC or similar should be informative.

      In the revised manuscript, we switched from AIC to Bayesian model selection, which approximates and compares model evidence. This method incorporates a strong penalty for model complexity through marginalization over the parameter space (MacKay, 2003).

      We have addressed this comment by updating the former competitor model into a more reasonable version that induces recalibration only for some measured SOAs and by including another (asynchrony-contingent) model that is capable of predicting the nonlinearity and asymmetry of recalibration (Fig. 3) while heuristically approximating the causal inference computations. The causal-inference model outperformed the asynchrony-contingent model, as indicated by model evidence (Fig. 4A). Furthermore, model predictions show that the causal-inference model more accurately captures recalibration at large SOAs at both the group (Fig. 4B) and the individual level (Fig. S4).

      (2) It does not appear that some key comparisons have been subjected to appropriate inferential statistical tests. Specifically, lines 196-207 - presumably this is the mean (and SD or SE) change in AIC between models across the group of 9 observers. So are these differences actually significant, for example via t-test?

      We statistically compared the models using Bayes factors (Fig. 4A). The model evidence for each model was approximated using Variational Bayesian Monte Carlo. Bayes factors provided strong evidence in support of the causal-inference model relative to the other models.

      (3) The manuscript tends to gloss over the population-code account of temporal recalibration, which can already provide a quantitative account of how the magnitude of recalibration varies with adapter SOA. This could be better acknowledged, and the features a population code may struggle with (asymmetry?) are considered.

      We simulated a population-code model to examine its prediction of the recalibration effect for different adapter SOAs (lines 380–388, Supplement Section 8). The population-code model can predict the nonlinearity of recalibration, i.e., a decreasing recalibration effect as the adapter SOA increases. However, to capture the asymmetry of recalibration effects across auditory-leading and visual-leading adapter stimuli, we would need to assume that the auditory-leading and visual-leading SOAs are represented by neural populations with unequal tuning curves.
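The simulated account follows the standard channel-based scheme. The tuning width and adaptation strength below are assumed for illustration, not our fitted values:

```python
import math

# Channel-based (population-code) sketch: units tuned to SOA, adaptation
# scales down the gain of units tuned near the adapter SOA, and perceived
# SOA is decoded as the response-weighted mean of preferred SOAs. Tuning
# width and adaptation strength are assumed for illustration.
prefs = [i * 0.05 for i in range(-20, 21)]   # preferred SOAs, -1.0 .. 1.0 s
tuning_w, adapt_w, adapt_strength = 0.2, 0.15, 0.5

def decoded_soa(stim, adapter):
    num = den = 0.0
    for p in prefs:
        gain = 1.0 - adapt_strength * math.exp(-0.5 * ((p - adapter) / adapt_w) ** 2)
        r = gain * math.exp(-0.5 * ((stim - p) / tuning_w) ** 2)
        num += r * p
        den += r
    return num / den

# Decoded value of a physically synchronous test after adaptation: the
# aftereffect shrinks as the adapter SOA moves away from the tuning range.
shift_small = decoded_soa(0.0, adapter=0.1)
shift_large = decoded_soa(0.0, adapter=0.7)
```

With symmetric tuning curves, the decoded aftereffect decays with adapter SOA (the nonlinearity) but is mirror-symmetric across auditory-leading and visual-leading adapters; capturing the observed asymmetry would require unequal tuning for the two signs of SOA, as noted above.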

      (4) The engagement with relevant past literature seems a little thin. Firstly, papers that have applied causal inference modeling to judgments of relative timing are overlooked (see references below). There should be greater clarity regarding how the modelling here builds on or differs from these previous papers (most obviously in terms of additionally modelling the recalibration process, but other details may vary too). Secondly, there is no discussion of previous findings like that in Fujisaki et al.’s seminal work on recalibration, where the spatial overlap of the audio and visual events didn’t seem to matter (although admittedly this was an N = 2 control experiment). This kind of finding would seem relevant to a causal inference account.

      References:

      Magnotti JF, Ma WJ and Beauchamp MS (2013) Causal inference of asynchronous audiovisual speech. Front. Psychol. 4:798. doi: 10.3389/fpsyg.2013.00798

      Sato, Y. (2021). Comparing Bayesian models for simultaneity judgement with different causal assumptions. J. Math. Psychol., 102, 102521.

      We have revised the Introduction and Discussion to better situate our study within the existing literature. Specifically, we have incorporated the suggested references (lines 66–69) and provided clearer distinctions on how our modeling approach builds on or differs from previous work on causal-inference models, particularly in terms of modeling the recalibration process (lines 75–79). Additionally, we have discussed findings that might contradict the assumptions of the causal-inference model (lines 405–424).

      (5) As a minor point, the model relies on simulation, which may limit its take-up/application by others in the field.

      Upon acceptance, we will publicly share the code for all models (simulation and parameter fitting) to enable researchers to adapt and apply these models to their own data.

      (6) There is little in the way of reassurance regarding the model’s identifiability and recoverability. The authors might for example consider some parameter recovery simulations or similar.

      We conducted a model recovery for each of the six models described in the main text and confirmed that the asynchrony-contingent and causal-inference models are identifiable (Supplement Section 11). Simulations of the asynchrony-correction model were sometimes best fit by causal-inference models, because the latter behaves similarly when the prior of a common cause is set to one.

      We also conducted a parameter recovery for the winning model, the causal-inference model with modality-specific precision (Supplement Section 13).

Key parameters, including the audiovisual bias, the auditory latency noise, the visual latency noise, the criterion, and the lapse rate, showed satisfactory recovery performance. The less accurate recovery of one parameter is likely due to a tradeoff with the learning rate.
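The recovery procedure itself is standard: simulate responses from known parameters, refit the model to the simulated data, and compare recovered to generating values. A stripped-down single-parameter stand-in (not our actual model):

```python
import math, random

# Stripped-down parameter recovery: simulate responses from a known
# psychometric function, refit by grid-search maximum likelihood, and
# compare recovered to generating values. A single-parameter stand-in,
# not our actual model.
random.seed(1)
true_pss, sigma = 0.04, 0.08                 # generating parameters (s)
soas = [i * 0.02 for i in range(-10, 11)]

def p_vision_first(soa, pss):
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2))))

# Simulate 200 Bernoulli trials per SOA.
data = [(soa, sum(random.random() < p_vision_first(soa, true_pss)
                  for _ in range(200))) for soa in soas]

def neg_log_lik(pss):
    nll = 0.0
    for soa, k in data:
        p = min(max(p_vision_first(soa, pss), 1e-9), 1 - 1e-9)
        nll -= k * math.log(p) + (200 - k) * math.log(1 - p)
    return nll

recovered_pss = min((0.001 * i for i in range(-100, 101)), key=neg_log_lik)
```

When the fitted value lands close to the generating one across simulations, the parameter is considered recoverable; systematic misses indicate tradeoffs, as we observed between one parameter and the learning rate.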

      (7) I don't recall any statements about open science and the availability of code and data.

      Upon acceptance of the manuscript, all code (simulation and parameter fitting) and data will be made available on OSF and publicly available.

      Recommendations for the authors:

      Reviewing Editor (Recommendations For The Authors):

      In addition to the comments below, we would like to offer the following summary based on the discussion between reviewers:

      The major shortcoming of the work is that there should ideally be a bit more evidence to support the model, over and above a demonstration that it captures important trends and beats an account that was already known to be wrong. We suggest you:

      (1) Revise the figure legends (Figure 5 and Figure 6E).

      We revised all figures and figure legends.

      (2) Additionally report model differences in terms of BIC (which will favour the preferred model less under the current analysis);

      We now base the model comparison on Bayesian model selection, which approximates and compares model evidence. This method incorporates a strong penalty for model complexity through marginalization over the parameter space (MacKay, 2003).

      (3) Move to instead fitting the models multiple times in order to get leave-one-out estimates of best-fitting loglikelihood for each left-out data point (and then sum those for the comparison metric).

      Unfortunately, our design is not suitable for cross-validation methods because the model-fitting process is computationally intensive and time-consuming. Each fit of the causal-inference model takes approximately 30 hours, and multiple fits with different initial starting points are required to rule out local minima.

(4) Offering a comparison with a more convincing model (for example an atheoretical fit with free parameters for all adapters, e.g. as suggested by Reviewer 3).

      We updated the previous competitor model and included an asynchrony-contingent model, which is capable of predicting the nonlinearity of recalibration (Fig. 3). The causal-inference model still outperformed the asynchrony-contingent model (Fig. 4A). Furthermore, model predictions show that only the causal-inference model captures non-zero recalibration effects for long adapter SOAs at both the group level (Fig. 4B) and individual level (Figure S4).

      Reviewer #1 (Recommendations For The Authors):

      A larger sample size would be better.

      This study used a within-subject design, which included 9 sessions, totaling 13.5 hours per participant. This extensive data collection allows us to better constrain the model for each participant. Our conclusions are based on the different models’ ability to fit individual data rather than on group statistics.

      It would be good to better put the study in the context of spatial ventriloquism, where similar model comparisons have been done over the last ten years and there is a large body of work to connect to.

      We now discuss our model in relation to models of cross-modal spatial recalibration in the Introduction (lines 70–78) and Discussion (lines 324–330).

      Reviewer #2 (Recommendations For The Authors):

      Previous authors (e.g. Yarrow et al.,) have described latency shift and criterion change models as providing a good fit of experimental data. Did the authors attempt a criterion shift model in addition to a shift model?

      We have considered criterion-shift variants of our atheoretical recalibration models in Supplement Section 1. To summarize the results, we varied two model assumptions: 1) the use of either a Gaussian or an exponential measurement distribution, and 2) recalibration being implemented either as a shift of bias or a criterion. We fit each model variant separately to the ternary TOJ responses of all sessions. Bayesian model comparisons indicated that the bias-shift model with exponential measurement distributions best captured the data of most participants.

      Figure 4B - I'm not convinced that the modality-independent uncertainty is anything but a straw man. Models not allowed to be asymmetric do not show asymmetry? (the asymmetry index is irrelevant in the fixed update model as I understand it so it is not surprising the model is identical?).

      We included the assumption that temporal uncertainty might be modality-independent for several reasons. First, there is evidence suggesting that a central mechanism governs the precision of temporal-order judgments (Hirsh & Sherrick, 1961), indicating that precision is primarily limited by a central mechanism rather than the sensory channels themselves. Second, from a modeling perspective, it was necessary to test whether an audio-visual temporal bias alone, i.e., assuming modality-independent uncertainty, could introduce asymmetry across adapter SOAs. Additionally, most previous studies implicitly assumed symmetric likelihoods, i.e., modality-independent latency noise, by fitting cumulative Gaussians to the psychometric curves derived from 2AFC-TOJ tasks (Di Luca et al., 2009; Fujisaki et al., 2004; Harrar & Harris, 2005; Keetels & Vroomen, 2007; Navarra et al., 2005; Tanaka et al., 2011; Vatakis et al., 2007, 2008; Vroomen et al., 2004).

      Why does a zero SOA adapter shift the pss towards auditory leading? Is this a consequence of the previous day’s conditioning - it’s not clear from the methods whether all listeners had the same SOA conditioning sequence across days.

      The auditory-leading recalibration effect for an adapter SOA of zero has been consistently reported in previous studies (e.g., Fujisaki et al., 2004; Vroomen et al., 2004). This effect symbolizes the asymmetry in recalibration. This asymmetry can be explained by differences across modalities in the noisiness of the latencies (Figure 5C) in combination with audiovisual temporal bias (Figure S8).

      We added details about the order of testing to the Methods section (lines 456–457).

      Reviewer #3 (Recommendations For The Authors):

      Abstract

      “Our results indicate that human observers employ causal-inference-based percepts to recalibrate cross-modal temporal perception” Your results indicate this is plausible. However, this statement (basically repeated at the end of the intro and again in the discussion) is - in my opinion - too strong.

      We have revised the statement as suggested.

      Intro and later

      Within the wider literature on relative timing perception, the temporal order judgement (TOJ) task refers to a task with just two response options. Tasks with three response options, as employed here, are typically referred to as ternary judgments. I would suggest language consistent with the existing literature (or if not, the contrast to standard usage could be clarified).

      Ref: Ulrich, R. (1987). Threshold models of temporal-order judgments evaluated by a ternary response task. Percept. Psychophys., 42, 224-239.

      We revised the term for the task as suggested throughout the manuscript.

      Results, 2.2.2

      “However, temporal precision might not be due to the variability of arrival latency.” Indeed, although there is some recent evidence that it might be.

      Ref: Yarrow, K., Kohl, C, Segasby, T., Kaur Bansal, R., Rowe, P., & Arnold, D.H. Neural-latency noise places limits on human sensitivity to the timing of events. Cognition, 222, 105012 (2022).

      We included the reference as suggested (lines 245–248).

      Methods, 4.3.

      Should there be some information here about the order of adaptation sessions (e.g. random for each observer)?

      We added details about the order of testing to the Methods section (lines 456–457).

      Supplemental material section 1.

      Here, you test whether the changes resulting from recalibration look more like a shift of the entire psychometric function or an expansion of the psychometric function on one side (most straightforwardly compatible with a change of one decision criterion). Fine, but the way you have done this is odd, because you have introduced a further difference in the models (Gaussian vs. exponential latency noise) so that you cannot actually conclude that the trend towards a win for the bias-shift model is simply down to the bias vs. criterion difference. It could just as easily be down to the different shapes of psychometric functions that the two models can predict (with the exponential noise model permitting asymmetry in slopes). There seems to be no reason that this comparison cannot be made entirely within the exponential noise framework (by a very simple reparameterization that focuses on the two boundaries rather than the midpoint and extent of the decision window). Then, you would be focusing entirely on the question of interest. It would also equate model parameters, removing any reliance on asymptotic assumptions being met for AIC.

      We revised our exploration of atheoretical recalibration models. To summarize the results, we varied two model assumptions: 1) the use of either a Gaussian or an exponential measurement distribution, and 2) recalibration being implemented either as a shift of the cross-modal temporal bias or as a shift of the criterion. We fit each model separately to the ternary TOJ responses of all sessions. Bayesian model comparisons indicated that the bias-shift model with exponential measurement distributions best described the data of most participants.
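To make the two implementations concrete, the following is a minimal sketch of a ternary-TOJ observer with an asymmetric (double-exponential) measurement distribution, as arises when auditory and visual arrival latencies are exponentially distributed. This is our own illustration with placeholder parameter names and values, not the fitted model code; a bias shift moves the center of the measurement distribution, while a criterion shift moves one report boundary.

```python
import numpy as np

def alaplace_cdf(x, tau_a, tau_v):
    """CDF of the difference of two exponential arrival latencies
    (visual minus auditory); asymmetric unless tau_a == tau_v."""
    x = np.asarray(x, dtype=float)
    neg = tau_a / (tau_a + tau_v) * np.exp(np.minimum(x, 0.0) / tau_a)
    pos = 1.0 - tau_v / (tau_a + tau_v) * np.exp(-np.maximum(x, 0.0) / tau_v)
    return np.where(x < 0, neg, pos)

def ternary_probs(soa, bias, c_lo, c_hi, tau_a, tau_v):
    """Response probabilities (A-first, simultaneous, V-first) for one trial.
    A bias shift changes `bias`; a criterion shift changes c_lo or c_hi."""
    F = lambda c: alaplace_cdf(c - (soa + bias), tau_a, tau_v)
    p_a = float(F(c_lo))          # measurement below the lower boundary
    p_v = float(1.0 - F(c_hi))    # measurement above the upper boundary
    return p_a, 1.0 - p_a - p_v, p_v
```

Under this parameterization, the reparameterization the reviewer suggests amounts to fitting `c_lo` and `c_hi` directly rather than a midpoint and window width, which keeps the bias-versus-criterion comparison within a single measurement-distribution family.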

      References

      Di Luca, M., Machulla, T.-K., & Ernst, M. O. (2009). Recalibration of multisensory simultaneity: Cross-modal transfer coincides with a change in perceptual latency. Journal of Vision, 9(12), Article 7.

      Fujisaki, W., Shimojo, S., Kashino, M., & Nishida, S. (2004). Recalibration of audiovisual simultaneity. Nature Neuroscience, 7(7), 773–778.

      Harrar, V., & Harris, L. R. (2005). Simultaneity constancy: Detecting events with touch and vision. Experimental Brain Research, 166(3–4), 465–473.

      Hirsh, I. J., & Sherrick, C. E., Jr. (1961). Perceived order in different sense modalities. Journal of Experimental Psychology, 62(5), 423–432.

      Keetels, M., & Vroomen, J. (2007). No effect of auditory-visual spatial disparity on temporal recalibration. Experimental Brain Research, 182(4), 559–565.

      MacKay, D. J. C. (2003). Information Theory, Inference and Learning Algorithms. Cambridge University Press.

      Navarra, J., Vatakis, A., Zampini, M., Soto-Faraco, S., Humphreys, W., & Spence, C. (2005). Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Brain Research. Cognitive Brain Research, 25(2), 499–507.

      Tanaka, A., Asakawa, K., & Imai, H. (2011). The change in perceptual synchrony between auditory and visual speech after exposure to asynchronous speech. Neuroreport, 22(14), 684–688.

      Vatakis, A., Navarra, J., Soto-Faraco, S., & Spence, C. (2007). Temporal recalibration during asynchronous audiovisual speech perception. Experimental Brain Research, 181(1), 173–181.

      Vatakis, A., Navarra, J., Soto-Faraco, S., & Spence, C. (2008). Audiovisual temporal adaptation of speech: Temporal order versus simultaneity judgments. Experimental Brain Research, 185(3), 521–529.

      Vroomen, J., Keetels, M., de Gelder, B., & Bertelson, P. (2004). Recalibration of temporal order perception by exposure to audio-visual asynchrony. Brain Research. Cognitive Brain Research, 22(1), 32–35.

    1. But if she lift up her drooping head and prosper, among those that have something more then wisht her welfare

      It's interesting that he sees the church as this objectively good being and idea, and that people have done damage to her, though she is still good, just downtrodden. Rather than the people being the church and their actions being representative, thus painting the church itself as bad.

    1. Gene Fowler once said that writing is easy, just a matter of staring at the blank page until your forehead bleeds. And if anything will draw blood from your forehead, it’s creating the climax of the last act—the pinnacle and concentration of all meaning and emotion, the fulfillment for which all else is preparation, the decisive center of audience satisfaction. If this scene fails, the story fails. Until you have created it, you don’t have a story. If you fail to make the poetic leap to a brilliant culminating climax, all previous scenes, characters, dialogue, and description become an elaborate typing exercise.

      I heard somewhere that, when writing a story, you should start with what you're interested in. First, write the scene you are most excited about, then fill in the blanks with other, less exciting scenes. It makes the process easier. It also doesn't mean you can't tweak or scrap things later. Writing is a process.


    1. In his head it’s binary: what draw (of letters) can make a scrabble, what draw can’t”

      I think the fact that the words have no other meaning to him except the sequence of their letters may be the key to his win. He has no intellectual baggage to distract him, he just sees the letters he has and fits them in like a puzzle.

    1. It's not letting me highlight anything inside of the boxes so I assume the browser is reading them as images and not text. Anyone know if there's a way around it? I really just wanted to point out that the mention of the food pyramid heavily ages this text since we haven't used it in decades!


    1. Today’s media consumers still read newspapers, listen to radio, watch television, and get immersed in movies. The difference is that it’s now possible to do all those things through one device—be it a personal computer or a smartphone—and through the medium of the Internet

      Exactly. It doesn't just vanish, it becomes more broad.

    1. Using a balance, place weigh paper on it and zero the scale. Using that, measure a metal rod and record the measurement. Then take a 25 mL graduated cylinder, fill it about half with water, and record the exact volume. Add the metal rod to the graduated cylinder and record the new volume.

       To calculate the density of the metal rod, the mass and volume of the metal rod needed to be found. The volume was calculated by subtracting the initial volume of the water from the volume of the water with the metal rod in the graduated cylinder. Using that calculated volume and the mass found with the balance, use the density equation d = m/v, where v equals the volume of the rod, m equals the mass of the metal rod, and d equals the density of the metal rod.

       Sugar Content Procedure & Calculations

       To begin, use a 125 mL Erlenmeyer flask and fill it with about 50 mL of 2% sugar liquid. Weigh a 50 mL beaker using a balance and record the value. Then, using a 10 mL volumetric pipette with a pipette filler, draw 10.00 mL of the 2% sugar solution and transfer it into the previously weighed 50 mL beaker. Next, measure and record the mass of the beaker using the balance. This process was repeated 2 more times for a total of 3 trials. The same steps were then followed with cranberry juice for a total of 3 trials, and once more with root beer for a total of 3 trials.

       To calculate the sugar content of the cranberry juice and the root beer, first calculate the average density of the calibration standard, 2% sugar. This is found by subtracting the initial mass of the beaker alone from the final mass of the beaker and liquid to find the mass of the 2% sugar. Then use the density equation d = m/v, where d = density, v = volume, and m = mass: divide the mass by the constant 10 mL of 2% sugar. This was repeated 2 more times for a total of 3 trials. Using the 3 densities, find the average by adding all the densities of 2% sugar and dividing by 3. These calculations were completed for both the cranberry juice and the root beer. After calculating each of those, use the class calibration-standard averages for the % sugar to create a plot and a trend line (Figure). Using the trend-line equation, where x represents the % sugar content and y represents the density in g/mL, substitute the average density of the cranberry juice and of the root beer into y, then solve for x, the % sugar content.

       Data

      It's all written in the wrong tense
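Setting the tense issue aside, the arithmetic in the annotated excerpt reduces to two steps: density by water displacement (d = m/v) and inversion of the calibration trend line. A minimal sketch, with made-up numbers standing in for the unreported measurements:

```python
def density(mass_g, volume_mL):
    """d = m / v, in g/mL."""
    return mass_g / volume_mL

# Metal rod: volume by water displacement (hypothetical readings)
rod_mass = 8.54                    # g
v_before, v_after = 12.5, 13.6     # mL of water without / with the rod
rod_density = density(rod_mass, v_after - v_before)

# Sugar content: invert the trend line y = slope * x + intercept,
# where y is density (g/mL) and x is % sugar
def percent_sugar(avg_density, slope, intercept):
    return (avg_density - intercept) / slope
```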

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      The manuscript by Oleh et al. uses in vitro electrophysiology and compartmental modeling (via NEURON) to investigate the expression and function of HCN channels in mouse L2/3 pyramidal neurons. The authors conclude that L2/3 neurons have developmentally regulated HCN channels, the activation of which can be observed when subjected to large hyperpolarizations. They further conclude via blockade experiments that HCN channels in L2/3 neurons influence cellular excitability and pathway-specific EPSP kinetics, which can be neuromodulated. While the authors perform a wide range of slice physiology experiments, concrete evidence that L2/3 cells express functionally relevant HCN channels is limited. There are serious experimental design caveats and confounds that make drawing strong conclusions from the data difficult. Furthermore, the significance of the findings is generally unclear, given modest effect sizes and a lack of any functional relevance, either directly via in vivo experiments or indirectly via strong HCN-mediated changes in known operations/computations/functions of L2/3 neurons.

      Specific points:

      (1) The interpretability and impact of this manuscript are limited due to numerous methodological issues in experimental design, data collection, and analysis. The authors have not followed best practices in the field, and as such, much of the data is ambiguous and/or weak and does not support their interpretations (detailed below). Additionally, the authors fail to appropriately explain their rationale for many of their choices, making it difficult to understand why they did what they did. Furthermore, many important references appear to be missing, both in terms of contextualizing the work and in terms of approach/method. For example, the authors do not cite Kalmbach et al 2018, which performed a directly comparable set of experiments on HCN channels in L2/3 neurons of both humans and mice. This is an unacceptable omission. Additionally, the authors fail to cite prior literature regarding the specificity or lack thereof of Cs+ in blocking HCN. In describing a result, the authors state "In line with previous reports, we found that L2/3 PCs exhibited an unremarkable amount of sag at 'typical' current commands" but they then fail to cite the previous reports.

      We thank the reviewer for the thorough examination of our manuscript; however, we disagree with many of the raised concerns for several reasons, as detailed here:

      To address the lack of certain citations: in the Introduction, we initially focused on the decades-long line of investigation into the HCN channel content of layer 2/3 pyramidal cells (L2/3 PCs), where there has undoubtedly been some controversy as to their functional contribution. We did not explicitly cite papers that claimed to find little or no HCN channel expression or sag (although this would be a significant list of publications from some excellent investigators), as the methods used may have differed from ours, leading to different interpretations. Simply stated, unless one was explicitly looking for HCN in L2/3 PCs, it might go unobserved. We now address this more clearly in the revision.

      To take one example: in the publication mentioned by the reviewer (Kalmbach et al., 2018), the investigators did not carry out voltage-clamp or dynamic-clamp recordings, as we did in our work here. Furthermore, the input resistance values reported in that paper were far above other reports in mice (Routh et al. 2022, Brandalise et al. 2022, Hedrick et al. 2012; which were similar to our findings here), suggesting that the recordings in Kalmbach et al. were carried out at membrane potentials where HCN channels may be less available (Routh, Brager and Johnston 2022).

      Another reason for some mixed findings in the field is undoubtedly the small or nonexistent sag in L2/3 current-clamp recordings (in mice). We also observed a very small sag, which can be explained as follows: the ‘sag’ potential is a biphasic voltage response emerging from a relatively fast passive membrane response and a slower I<sub>h</sub> activation. In L2/3 PCs, hyperpolarization-activated currents are apparently faster than previously described and are located proximally (Figure 2 & Figure 5). Therefore, their recruitment in mouse L2/3 PCs occurs on a similar timescale to the passive membrane response, resulting in a more monophasic response. We now include a fuller set of citations in the updated introduction section to highlight the importance of HCN channels in L2/3 PCs in mice (and other species).

      The justification for using cesium (i.e., ‘best practices’) is detailed below.

      (2) A critical experimental concern in the manuscript is the reliance on cesium, a nonspecific blocker, to evaluate HCN channel function. Cesium blocks HCN channels but also acts at potassium channels (and possibly other channels as well). The authors do not acknowledge this or attempt to justify their use of Cs+ and do not cite prior work on this subject. They do not show control experiments demonstrating that the application of Cs+ in their preparation only affects Ih. Additionally, the authors write 1 mM cesium in the text but appear to use 2 mM in the figures. In later experiments, the authors switch to ZD7288, a more commonly used and generally accepted more specific blocker of HCN channels. However, they use a very high concentration, which is also known to produce off-target effects (see Chevaleyre and Castillo, 2002). To make robust conclusions, the authors should have used both blockers (at accepted/conservative concentrations) for all (or at least most) experiments. Using one blocker for some experiments and then another for different experiments is fraught with potential confounds.

      To address the concerns regarding the usage of cesium to block HCN channels: neither cesium nor ZD-7288 is without off-target effects; however, in our case the potential off-target effects of external cesium were deemed less impactful, especially concerning AP firing output experiments. Extracellular cesium has been widely accepted as a blocker of HCN channels (Lau et al. 2010, Wickenden et al. 2009, Rateau and Ropert 2005, Hemond et al. 2009, Yang et al. 2015, Matt et al. 2010). However, it is well known to act on potassium channels as well at higher concentrations, which has been demonstrated with both intracellular and extracellular application (Puil et al. 1981, Fleidervish et al. 2008, Williams et al. 1991, 2008).

      Although we initially performed ‘internal’ control experiments to ensure the cesium concentration was unlikely to greatly block voltage gated K+ channels during our recordings, we recognize these were not included in the original manuscript. These are detailed as follows: during our recordings cesium had no significant effect on action potential halfwidth, ruling out substantial blocking of potassium channels, nor did it affect any other aspects of suprathreshold activity (now reported in results, page 4 - line 113). Furthermore, we observed similar effects on passive properties (resting membrane potential, input resistance) following ZD-7288 as with cesium, which we now also updated in our figures (Supplementary Figure 1). We did acknowledge that ZD-7288 is a widely accepted blocker of HCN, and for this reason we carried out some of our experiments using this pharmacological agent instead of cesium.

      On the other hand, ZD-7288 suffers from its own side effects, such as potential effects on sodium channels (Wu et al. 2012) and calcium channels (Sánchez-Alonso et al. 2008, Felix et al. 2003). As our aim was to provide functional evidence for the importance of HCN channels, we initially deemed these potential effects unacceptable in experiments where AP firing output (e.g., in cell-attached experiments) was measured. Nonetheless, in new experiments now included here, we found the effects of ZD and cesium on AP output were similar as shown in new Supplemental Figure 1.

      Many experiments were supported by complementary findings using external cesium and ZD-7288. For example, the effect of ZD-7288 on EPSPs was confirmed by similar synaptic stimulation experiments using cesium. This is important, as synaptic inputs of L2/3 PCs are modulated by both dendritic sodium (Ferrarese et al. 2018) and calcium channels (Landau 2022); therefore, the effects of ZD-7288 may have been difficult to interpret in isolation. We thank the reviewer for bringing up this important point.

      (3) A stronger case could be made that HCN is expressed in the somatic compartment of L2/3 cells if the authors had directly measured HCN-isolated currents with outside-out or nucleated patch recording (with appropriate leak subtraction and pharmacology). Whole-cell voltage-clamp in neurons with axons and/or dendrites does not work. It has been shown to produce erroneous results over and over again in the field due to well-known space clamp problems (see Rall, Spruston, Williams, etc.). The authors could have also included negative controls, such as recordings in neurons that do not express HCN or in HCN-knockout animals. Without these experiments, the authors draw a false equivalency between the effects of cesium and HCN channels, when the outcomes they describe could be driven simply by multiple other cesium-sensitive currents. Distortions are common in these preparations when attempting to study channels (see Williams and Womzy, J Neuro, 2011). In Fig 2h, cesium-sensitive currents look too large and fast to be from HCN currents alone given what the authors have shown in their earlier current clamp data. Furthermore, serious errors in leak subtraction appear to be visible in Supplementary Figure 1c. To claim that these conductances are solely from HCN may be misleading.

      We disagree with the argument that “Whole-cell voltage-clamp in neurons with axons and/or dendrites does not work”. Although this method is not without its confounds (i.e., space clamp), it is still a useful initial measure, as demonstrated countless times in the literature. However, the reviewer is correct that the best approach to establish the somatodendritic distribution of ion channels is direct somatic and dendritic outside-out patches. Due to the small diameter of L2/3 PC dendrites, to our knowledge such experiments have not yet been reported for any ion channel in these cells. Mapping this distribution electrophysiologically may be outside the scope of the current manuscript, but it was hard for us to ignore the sheer size of the Cs<sup>+</sup>-sensitive hyperpolarization-activated currents in whole-cell recordings. We therefore opted to report these data.

      Also, we should point out that space-clamp-related errors manifest as overestimation of frequency-dependent features, such as activation kinetics, and underestimation of steady-state current amplitudes. The activation time constants of our measured currents are somewhat faster than previously reported, reducing major concerns regarding space-clamp errors. Furthermore, we simply do not understand what “too large… to be from HCN currents” means: our voltage-clamp-measured currents are similar in magnitude to previously reported HCN currents (Meng et al. 2011, Li 2011, Zhao et al. 2019, Yu et al. 2004, Zhang et al. 2008, Spinelli et al. 2018, Craven et al. 2006, Ying et al. 2012, Biel et al. 2009).

      Furthermore, we should point out that our measured currents activated at hyperpolarized voltages, had the same voltage dependence as HCN currents, did not show inactivation, influenced both input resistance and resting membrane potential, and are blocked by low concentration extracellular cesium. Each of these features would point to HCN.

      (4) The authors present current-clamp traces with some sag, a primary indicator of HCN conductance, in Figure 2. However, they do not show example traces with cesium or ZD7288 blockade. Additionally, the normalization of current injected by cellular capacitance and the lack of reporting of input resistance or estimated cellular size makes it difficult to determine how much current is actually needed to observe the sag, which is important for assessing the functional relevance of these channels. The sag ratio in controls also varies significantly without explanation (Figure 6 vs Figure 7). Could this variability be a result of genetically defined subgroups within L2/3? For example, in humans, HCN expression in L2/3 varies from superficial and deep neurons. The authors do not make an effort to investigate this. Regardless of inconsistencies in either current injection or cell type, the sag ratio appears to be rather modest and similar to what has already been reported previously in other papers.

      We thank the reviewer for pointing out that our explanation for the modest sag ratio might not have been sufficient to properly convey why this measurement cannot be applied to layer 2/3 pyramidal cells. Briefly: the sag potential emerges from a relatively fast (compared to I<sub>h</sub>) passive membrane response and a slower HCN recruitment. The opposing polarity and different timescales of these two mechanisms result in a biphasic response called the “sag” potential. However, if the timescales of the two mechanisms are similar, the voltage response is not predicted to be biphasic. We have shown that hyperpolarization-activated currents in our preparations are fast and proximal; therefore, they are recruited during the passive response (see Figure 2g). This means that although a substantial amount of HCN current is activated during hyperpolarization, its activation will not result in substantial sag. Therefore, the sag ratio measurement is not necessarily applicable for approximating the HCN content of mouse L2/3 PCs. We would like to emphasize that sag ratio measurements are valid in other cell types (i.e., L5 and CA1 PCs), and our aim is not to discredit the method, but rather to show that it cannot be applied similarly in the case of mouse L2/3 PCs.
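This intuition can be checked with a toy simulation: a single compartment with a ~20 ms passive time constant plus a simplified h-type conductance whose activation time constant is varied. The model structure and all parameter values are illustrative only, not fitted to our recordings.

```python
import numpy as np

def h_inf(v):
    """Steady-state h-type activation: increases with hyperpolarization."""
    return 1.0 / (1.0 + np.exp((v + 90e-3) / 6e-3))

def sag_ratio(tau_h, dt=5e-5):
    """Hyperpolarizing current step onto one compartment (passive tau ~20 ms)
    plus an h-type conductance with activation time constant tau_h.
    Returns steady-state deflection / peak deflection (1 = no visible sag)."""
    C, g_L, E_L = 100e-12, 5e-9, -75e-3   # 100 pF, 5 nS leak
    g_h, E_h = 3e-9, -30e-3               # illustrative h-conductance
    v, h = E_L, h_inf(E_L)

    def step(v, h, i_inj):
        dv = (-g_L * (v - E_L) - g_h * h * (v - E_h) + i_inj) / C
        dh = (h_inf(v) - h) / tau_h
        return v + dt * dv, h + dt * dh

    for _ in range(int(1.0 / dt)):        # settle to rest for 1 s
        v, h = step(v, h, 0.0)
    v0, trace = v, []
    for _ in range(int(0.6 / dt)):        # 600 ms step of -100 pA
        v, h = step(v, h, -100e-12)
        trace.append(v)
    trace = np.array(trace)
    return (trace[-1] - v0) / (trace.min() - v0)
```

With slow activation (e.g., tau_h = 200 ms) the step response is clearly biphasic and the sag ratio falls well below 1; with activation on the membrane's own timescale (tau_h = 20 ms) the ratio moves much closer to 1 even though the same conductance is recruited.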

      Our own measurements, similar to others in the literature, show that L2/3 PCs exhibit modest sag ratios; however, this does not mean that HCN is not relevant. I<sub>h</sub> activation in L2/3 PCs does not manifest as a large sag potential but rather as a continuous distortion of steady-state responses (Figure 2b). The reviewer is correct that L2/3 PCs are non-homogeneous; we therefore sampled along the entire L2/3 axis. This yielded some variability in our results (i.e., passive properties), yet we did not observe any cells in which hyperpolarization-activated/Cs<sup>+</sup>-sensitive currents could not be resolved. As the structural variability of L2/3 cells results in variability in cellular capacitance, we compensated for this variability by injecting capacitance-normalized currents. Our measured cellular capacitances were in accordance with previously published values, in the range of 50-120 pF; therefore, the injected currents were not outside frequently used values. Together, whether a substantial sag potential is present or not, initial estimates of the HCN content of each L2/3 PC should be treated with caution.

      (5) In the later experiments with ZD7288, the authors measured EPSP half-width at greater distances from the soma. However, they use minimal stimulation to evoke EPSPs at increasingly far distances from the soma. Without controlling for amplitude, the authors cannot easily distinguish between attenuation and spread from dendritic filtering and additional activation and spread from HCN blockade. At a minimum, the authors should share the variability of EPSP amplitude versus the change in EPSP half-width and/or stimulation amplitudes by distance. In general, this kind of experiment yields much clearer results if a more precise local activation of synapses is used, such as dendritic current injection, glutamate uncaging, sucrose puff, or glutamate iontophoresis. There are recording quality concerns here as well: the cell pictured in Figure 3a does not have visible dendritic spines, and a substantial amount of membrane is visible in the recording pipette. These concerns also apply to the similar developmental experiment in 6f-h, where EPSP amplitude is not controlled, and therefore, attenuation and spread by distance cannot be effectively measured. The outcome, that L2/3 cells have dendritic properties that violate cable theory, seems implausible and is more likely a result of variable amplitude by proximity.

      To resolve this issue, we made a supplementary figure showing the elicited amplitudes, which showed no significant distance dependence and minimal variability (new Supplementary Figure 6). We thank the reviewer for suggesting an amplitude-halfwidth comparison control (also included in new Supplementary Figure 6). Regarding the non-visible spines, we would like to note that these images were acquired at a magnification and laser power too low to resolve them. The presence of dendritic spines was confirmed in every recorded pyramidal cell using 2P microscopy at higher magnification.

      We would like to emphasize that although our recordings “seemingly” violated cable theory, this is only true under a completely passive assumption. As shown in our manuscript, cable theory was not violated, as the presence of NMDA receptor boosting explained the observed ‘non-Rallian’ phenomenon.

      (6) Minimal stimulation used for experiments in Figures 3d-i and Figures 4g-h does not resolve the half-width measurement's sensitivity to dendritic filtering, nor does cesium blockade preclude only HCN channel involvement. Example traces should be shown for all conditions in 3h; the example traces shown here do not appear to even be from the same cell. These experiments should be paired (with and without cesium/ZD). The same problem appears in Figure 4, where it is not clear that the authors performed controls and drug conditions on the same cells. 4g also lacks a scale bar, so readers cannot determine how much these measurements are affected by filtering and evoked amplitude variability. Finally, if we are to believe that minimal stimulation is used to evoke responses of single axons with 50% fail rates, NMDA receptor activation should be minimal to begin with. If the authors wish to make this claim, they need to do more precise activation of NMDA-mediated EPSPs and examine the effects of ZD7288 on these responses in the same cell. As the data is presented, it is not possible to draw the conclusion that HCN boosts NMDA-mediated responses in L2/3 neurons.

      As stated in the figure legends, the control and drug-application traces are from the same cell in both Figure 3 and Figure 4, and the scale bar is not included because the amplitudes were normalized for clarity. We have addressed the effects of dendritic filtering in answer (5) and of cesium blockade in answer (2) above. To reiterate, dendritic filtering alone cannot explain our observations, and cesium is often a better choice for blocking HCN channels than ZD-7288, which blocks sodium channels as well.

      When an excitatory synaptic signal arrives at a pyramidal cell under typical conditions, neurotransmitter-sensitive receptors deliver a synaptic current to the dendritic spine. The spine is electrically isolated by the high resistance of the spine neck, and due to the small membrane surface of the spine, the synaptic current can elicit remarkably large voltage changes. These voltage changes can be large enough to depolarize the spine to near zero millivolts upon even single small inputs (Jayant et al. 2016). Therefore, to state that single inputs arriving at dendritic spines cannot be large enough to recruit NMDA receptor activation is incorrect. This is further exemplified by the substantial literature showing ‘miniature’ NMDA receptor recruitment via stochastic vesicle release alone.
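As a back-of-the-envelope illustration of the point above (all values are assumed, order-of-magnitude numbers, not measurements from this study):

```python
# A unitary synaptic current into a high-impedance spine head can produce
# a large local depolarization (V = I * R; illustrative values only).
i_syn = 50e-12               # 50 pA synaptic current (assumed)
r_spine = 500e6              # 500 MOhm spine-neck + head resistance (assumed)
dv_spine = i_syn * r_spine   # ~25 mV local depolarization at the spine
```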

      (7) The quality of recordings included in the dataset has concerning variability: for example, resting membrane potentials vary by >15-20 mV and the AP threshold varies by 20 mV in controls. This is indicative of either a very wide range of genetically distinct cell types that the authors are ignoring or the inclusion of cells that are either unhealthy or have bad seals.

      Although we are aware of the diversity of L2/3 PCs, resolving further layer-depth differences is outside the scope of our current manuscript. However, as shown in Kalmbach et al., resting membrane potential can vary greatly (>15-20 mV) in L2/3 PCs depending on distance from the pia. We acknowledge that the variance in AP threshold is large and could be due to genetically distinct cell types.

      (8) The authors make no mention of blocking GABAergic signaling, so it must be assumed that it is intact for all experiments. Electrical stimulation can therefore evoke a mixture of excitatory and inhibitory responses, which may well synapse at very different locations, adding to interpretability and variability concerns.

We thank the reviewer for pointing out our lack of detail regarding the GABAergic signaling blocker SR 95531. We did include this drug in our recordings of (50 Hz stim.) signal summation, so GABAergic responses did not contaminate our recordings. We have now included this information in the results section (page 5) and the methods section (page 15).

      (9) The investigation of serotonergic interaction with HCN channels produces modest effect sizes and suffers the same problems as described above.

We do not agree with the reviewer that a 50% drop in neuronal AP firing responses (Figure 7b) is a modest effect size. Thus, we opted to keep these data in the manuscript.

      (10) The computational modeling is not well described and is not biologically plausible. Persistent and transient K channels are missing. Values for other parameters are not listed. The model does not seem to follow cable theory, which, as described above, is not only implausible but is also not supported by the experimental findings.

The model was downloaded from the Cell Type Database from the Allen Institute, with only minor modifications, including the addition of dendritic HCN channels and NMDA receptors, which were varied along a wide parameter space to find a ‘best fit’ to our observations. These additions were necessary to recapitulate our experimental findings. We agree the model likely does not fully recapitulate all aspects of the dendrites, which, as we hope to convey in this manuscript, are not fully resolved in mouse L2/3 PCs. This is a previously published neuronal model and, despite its potential shortcomings, is one among a handful of open-source neuronal models of a fully reconstructed L2/3 PC.

      Reviewer #2 (Public Review):

      Summary:

      This paper by Olah et al. uncovers a previously unknown role of HCN channels in shaping synaptic inputs to L2/3 cortical neurons. The authors demonstrate using slice electrophysiology and computational modeling that, unlike layer 5 pyramidal neurons, L2/3 neurons have an enrichment of HCN channels in the proximal dendrites. This location provides a locus of neuromodulation for inputs onto the proximal dendrites from L4 without an influence on distal inputs from L1. The authors use pharmacology to demonstrate the effect of HCN channels on NMDA-mediated synaptic inputs from L4. The authors further demonstrate the developmental time course of HCN function in L2/3 pyramidal neurons. Taken together, this a well-constructed investigation of HCN channel function and the consequences of these channels on synaptic integration in L2/3 pyramidal neurons.

      Strengths:

      The authors use careful, well-constrained experiments using multiple pharmacological agents to asses HCN channel contributions to synaptic integrations. The authors also use a voltage clamp to directly measure the current through HCN channels across developmental ages. The authors also provide supplemental data showing that their observation is consistent across multiple areas of the cerebral cortex.

      Weaknesses:

      The gradient of the HCN channel function is based almost exclusively on changes in EPSP width measured at the soma. While providing strong evidence for the presence of HCN current in L2/3 neurons, there are space clamp issues related to the use of somatic whole-cell voltage clamps that should be considered in the discussion.

      We thank the reviewer for pointing out our careful and well-constrained experiments and for making suggestions. The potential effects of space clamp errors are detailed in the extended explanations under Reviewer 1, Specific points (3).

      Reviewer #3 (Public Review):

      Summary:

      The authors study the function of HCN channels in L2/3 pyramidal neurons, employing somatic whole-cell recordings in acute slices of visual cortex in adult mice and a bevy of technically challenging techniques. Their primary claim is a non-uniform HCN distribution across the dendritic arbor with a greater density closer to the soma (roughly opposite of the gradient found in L5 PT-type neurons). The second major claim is that multiple sources of long-range excitatory input (cortical and thalamic) are differentially affected by the HCN distribution. They further describe an interesting interplay of NMDAR and HCN, serotonergic modulation of HCN, and compare HCN-related properties at 1, 2 and 6 weeks of age. Several results are supported by biophysical simulations.

      Strengths:

      The authors collected data from both male and female mice, at an age (6-10 weeks) that permits comparison with in vivo studies, in sufficient numbers for each condition, and they collected a good number of data points for almost all figure panels. This is all the more positive, considering the demanding nature of multi-electrode recording configurations and pipette-perfusion. The main strength of the study is the question and focus.

      Weaknesses:

      Unfortunately, in its present form, the main claims are not adequately supported by the experimental evidence: primarily because the evidence is indirect and circumstantial, but also because multiple unusual experimental choices (along with poor presentation of results) undermine the reader's confidence. Additionally, the authors overstate the novelty of certain results and fail to cite important related publications. Some of these weaknesses can be addressed by improved analysis and statistics, resolving inconsistent data across figures, reorganizing/improving figure panels, more complete methods, improved citations, and proofreading. In particular, given the emphasis on EPSPs, the primary data (for example EPSPs, overlaid conditions) should be shown much more.

      However, on the experimental side, addressing the reviewer's concerns would require a very substantial additional effort: direct measurement of HCN density at different points in the dendritic arbor and soma; the internal solution chosen here (K-gluconate) is reported to inhibit HCN; bath-applied cesium at the concentrations used blocks multiple potassium channels, i.e. is not selective for HCN (the fact that the more selective blocker ZD7288 was used in a subset of experiments makes the choice of Cs+ as the primary blocker all the more curious); pathway-specific synaptic stimulation, for example via optogenetic activation of specific long-range inputs, to complement / support / verify the layer-specific electrical stimulation.

We thank the reviewer for their very careful examination of our manuscript and helpful suggestions. We addressed the concerns raised in the review and presented more raw traces in our figures. Although direct dendritic HCN mapping measurements are outside the scope of the current manuscript due to the morphological constraints presented by L2/3 PCs (which explains why no other full dendritic nonlinearity distribution has been described in L2/3 PCs with this method), we nonetheless supplemented our manuscript with several of the suggested additional experiments. For example, we included the excellent suggestion of pathway-specific optogenetic stimulation to further validate the disparate effect of HCN channels for distal and proximal inputs. We agree that ZD-7288 is a widely accepted blocker of HCN channels. However, its off-target effects on sodium channels may have significantly confounded our measurements of AP output using extracellular stimulation. Therefore, we chose low-concentration cesium as the primary blocker for those experiments, but have now validated several other Cs<sup>+</sup>-based results with ZD-7288 as well.

      Recommendations for the authors:

      Reviewer #2 (Recommendations For The Authors):

      I have some issues that need clarification or correction.

      (1) On page 3, line 90, the authors state "We found that bath application of Cs+ (1mM)..." but the methods and Figure 1 state "2mM Cs+". Please check and correct.

      Correct, typo corrected.

      (2) Related to Cs+ application, the methods state that "CsMeSO4 (2mM) was bath applied..." Is this correct? CsMeSO4 is typically used intracellularly while CsCl is used extracellularly. If so, please justify. If not, please correct.

It is correct. The common justification for avoiding CsCl is that introducing chloride ions intracellularly can significantly alter basic biophysical properties unrelated to the cesium effect; however, no similar concern has been reported for CsMeSO4 that would preclude its extracellular use.

      (3) The authors normalize the current injections by cell capacitance (pA/pF). Was this done because there is a significant variance in cell morphology? A bit of justification for why the authors chose to normalize the current injection this way would help. If there is significant variation in cell capacitance across cells (or developmental ages), the authors could also include these data.

Indeed, we chose to normalize current injection to cellular capacitance due to the markedly different morphology of deep and superficial L2/3 PCs. Deeper L2/3 PCs have a pronounced apical branch, closely resembling other pyramidal cell types such as L5 PCs, while superficial L2/3 PCs lack a thick main apical branch and instead are equipped with multiple, thinner apical dendrites. This morphological variation would yield an inherent bias in several of the reported measurements; therefore we corrected for it by normalizing current injection to cellular capacitance, similar to our recent publications (Olah, Goettemoeller et al., 2022, Goettemoeller et al. 2024, Kumar et al. 2024).
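
To illustrate the normalization described above, the sketch below scales per-capacitance step commands (pA/pF) to absolute injected currents (pA). The capacitance values and step sizes are hypothetical examples chosen for illustration, not measurements from the manuscript.

```python
def injection_amplitudes(capacitance_pf, steps_pa_per_pf):
    """Scale capacitance-normalized step commands (pA/pF) to absolute currents (pA)."""
    return [round(s * capacitance_pf, 1) for s in steps_pa_per_pf]

# A deep L2/3 PC with a larger membrane area (higher capacitance) receives
# proportionally more current than a smaller superficial cell for the same
# normalized step, removing the morphology-driven bias.
deep_cell = injection_amplitudes(capacitance_pf=120.0, steps_pa_per_pf=[-6, -2, 2])
superficial_cell = injection_amplitudes(capacitance_pf=80.0, steps_pa_per_pf=[-6, -2, 2])
print(deep_cell)         # [-720.0, -240.0, 240.0]
print(superficial_cell)  # [-480.0, -160.0, 160.0]
```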

      (4) On page 15, line 445, the section heading is "PV cell NEURON modeling". Is this a typo? The models are of L2/3 pyramidal neurons, correct?  

      Correct, typo corrected.

      (5) Figures 3F and 3I are plots of the voltage integral for different inputs before and after Cs+. The y-axis label units are "pA*ms". This should be "mV*ms" for a voltage integral.  

      Correct, typo corrected.

      (6) On page 9, line 273, the text reads "Voltage clamp experiments revealed that the rectification of steady-state voltage responses to hyperpolarizing current injection was amplified with 5-CT (Fig. 7c)". Both the text and Figure 7C describe current clamp, not voltage clamp, recordings. Please check and correct.

      Correct, typo corrected.

      (7) Figure 2i looks to be a normalized conductance vs voltage (i.e. activation) plot. The y-axis shows 0-1 but the units are in nS. Is that a coincidence or an error?

      Correct, typo corrected.

      Reviewer #3 (Recommendations For The Authors):

      This is your paper. My comments are my own opinion, I don't expect you to agree or to respond. But I hope that what I wrote below will help you to understand my perspective.

      Please pardon my directness (and sheer volume) in this section - I have a lot of notes/thoughts and hope you may find some of them helpful. My high-level comments are unfortunately rather critical, and in (small) part that is because I encountered too many errors/typos/ambiguities in figures, legend, and text. I expect many would be caught with good proofreading, but uncorrected caused confusion on my part, or an inability to interpret your figures with confidence, given some ambiguity.

      The paper reads a bit like patchwork - likely a result of many "helpful" reviewers who came before me. Consider starting with and focusing on the synaptic findings, expanding the number of figures and panels dedicated to that, showing example traces for all conditions, and giving yourself the space to portray these complex experiments and results. While I'm not a fan of a large number of supplemental figures, I feel you could move the "extra" results to the supplementals to improve the focus and get right to the meat of it.

      For me, the main concern is that the evidence you present for the non-uniform HCN distribution is rather indirect. Ideally, I'd like to see patch recordings from various dendritic locations (as others have done in rats, at least; I'm not sure if L2/3 mice have had such conductance density measurements made in basal and apical dendrites). Otherwise, perhaps optical mapping, either functional or via staining. I also mention some concerns about the choice of internal and cesium. More generally, I want to see more primary data (traces), in particular for the big synaptic findings (non-uniform, L1-vs-L4 differences, NMDAR).

We thank the reviewer for the helpful suggestions. Indeed, direct patch clamp recording is widely considered to be the best method to identify dendritic ion channel distribution; however, we chose an in silico approach instead, for several reasons. Undoubtedly, one of the main reasons to omit direct dendritic recordings was that, due to the uniquely narrow apical dendrites, this method is extremely challenging, with no previous examples in the literature of isolated dendritic outside-out patch recordings from this cell type. There are theoretical considerations as well. In primates, it has been demonstrated that HCN1 channels are concentrated on dendritic spines (Datta et al., 2023); therefore direct outside-out recordings are not adequate in these circumstances. In future experiments we could directly target L2/3 PC dendrites for outside-out recordings in order to resolve the dendritic nonlinearity distribution, although a cell-attached methodology may be better suited because HCN biophysical properties are closely regulated by intracellular signaling pathways.

      The introduction and Figures 1 and 2 are not so interesting and not entirely accurate: L2/3 do not have "abundant" HCN, nor is there an actual controversy about whether they have HCN. It's been clear (published) for years that they have about the same as all other non-PT neocortical pyramidal neurons (see e.g. Larkum 2007; Sheets 2011). Your own Figure 1A has a logarithmic scale and shows L2/3 as having the lowest expression (?) of all pyramidals and roughly 10x lower than L5 PT, but the text says "comparable", which is misleading.

We thank the reviewer for this comment. Although there are sporadic reports in the literature about the HCN content of L2/3 PCs, most of these publications arrive at the same conclusion from the negligible sag potential (as in the mentioned Larkum et al., 2007 publication); namely, that L2/3 PCs do not contain a significant amount of HCN channels. We have shown with voltage and current clamp recordings that this assumption is false, as sag potential is not a reliable indicator of HCN content in L2/3 PCs. With the term “controversial” we aimed to highlight the different conclusions of functional investigations (e.g. Sheets et al., 2011) and sag potential recordings (e.g. Larkum et al., 2007) regarding the importance of HCN channels in L2/3 PCs.

      Non-uniform HCN with distal lower density has already been published for a (rare) pyramidal neuron in CA1 (Bullis 2007), similar to what you found in L2/3, and different from the main CA1 population.

      We thank the reviewer for this suggestion. We have now included the mentioned citation in the introduction section (page 3).

      Express sag as a ratio or percentage, consistently. Figure out why in Figure 7 the average sag ratio is 0.02 while in Fig. S1 it is 0.07 (for V1) - that is a massive difference.

The calculation of sag ratio is consistent across the manuscript (measured at -6 pA/pF), except for the experiments depicted in Fig. 7, where sag ratio was calculated from -2 pA/pF steps. Explanation below:

      Sag should be measured at a common membrane potential, with each neuron receiving a current pulse appropriate to reach that potential. Your approach of capacitance-based may allow for the same, but it is not clear which responses are used to calculate a single sag value per cell (as in Figure 2d).

Thank you, we have now included this information in the methods section. Sag potential was measured at the -6 pA/pF step peak voltage, except for Fig. 7 as noted above; this discrepancy is now detailed in the methods section (page 14). The recordings in Fig. 7 took significantly longer than any other recording in the manuscript, as it took considerable time to reach a steady-state response after 5-CT application. -6 pA/pF corresponds to a current injection in the range of 400-800 pA, which proved too severe for continued application in cells after more than an hour of recording. Accordingly, we decided to lower the hyperpolarizing current step in these recordings. The absolute value of sag is thus different in Fig. 7, but the 5-CT effect was nonetheless still significant. Notably, we probably would not have noticed the small sag in L2/3 (and thus started the entire study) had we not looked at -6 pA/pF to begin with.
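
As a minimal sketch of one common sag-ratio definition (the relaxation from the hyperpolarizing peak toward steady state, normalized by the total peak deflection), applied to a synthetic trace. The window boundaries and voltage values are illustrative assumptions, not recorded data, and other sag-ratio conventions exist.

```python
import numpy as np

def sag_ratio(trace, t, baseline=(0.0, 0.1), peak=(0.1, 0.3), steady=(0.9, 1.0)):
    """Sag ratio = (V_steady - V_peak) / (V_baseline - V_peak).
    Windows are (start, stop) times in seconds; chosen here for illustration."""
    v_base = trace[(t >= baseline[0]) & (t < baseline[1])].mean()
    v_peak = trace[(t >= peak[0]) & (t < peak[1])].min()   # most hyperpolarized point
    v_ss = trace[(t >= steady[0]) & (t < steady[1])].mean()
    return (v_ss - v_peak) / (v_base - v_peak)

# Synthetic response to a hyperpolarizing step: transient peak at -90 mV,
# then an HCN-mediated "sag" back to a -88 mV steady state.
t = np.linspace(0.0, 1.0, 1000)
v = np.full_like(t, -70.0)                 # resting potential
v[t >= 0.1] = -88.0                        # steady state during the step
v[(t >= 0.1) & (t < 0.15)] = -90.0         # transient hyperpolarizing peak
print(round(sag_ratio(v, t), 3))           # → 0.1
```

A smaller current step (e.g. -2 pA/pF vs. -6 pA/pF) hyperpolarizes the cell less, activates fewer HCN channels, and hence yields a smaller ratio, consistent with the Fig. 7 vs. Fig. S1 discrepancy explained above.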

      In a paper focused on HCN, I would have liked to see resonance curves in the passive characterization.

We thank the reviewer for the suggestion. Resonance curves can indeed provide useful insights into the impact of HCN on a cell’s physiological behavior; however, these experiments are outside the scope of our current manuscript, as in our opinion resonance curves would not contribute to the manuscript without accompanying in vivo recordings.

      How did you identify L2/3? Did you target cells in L2 or L3 or in the middle, or did you sample across the full layer width for each condition? A quantitative diagram showing where you patched (soma) and where you stimulated (L1, L4) with actual measurements, would be helpful (supplemental perhaps). You mention in the text that some L2/3 don't have a tuft, suggesting some variability in morphology - some info on this would be useful, i.e. since you did fill at least some of the neurons (eg 3A), how similar/different are the dendritic arbors?

We sampled the entire L2/3 region during our recordings. It has been published that deep and superficial L2/3 PCs are markedly different in their morphology, and a recent publication (Brandelise et al. 2023) has even separated these two subpopulations into broad-tufted and slender-tufted pyramidal cells, which receive distinct subcortical inputs. Although this differentiation opens exciting avenues for future research, examining potential layer gradients in our dataset would require significantly higher sample numbers and is currently outside the scope of our manuscript.

      Distal vs proximal: this could use more clarification, considering how central it is to your results. What about a synapse on a basal dendrite, but 150 or 200 um from the soma, is that considered proximal? Is the distance to the soma you report measured along the 3D dendrite, along the 2D dendrite, as a straight line to the soma, or just relative to some layers or cortical markers? (I apologize if I missed this).

      We thank the reviewer for pointing out the missing description in the results section. We have amended this oversight (p15).  Furthermore, although deeper L3 PCs have characteristic apical and basal dendritic branches, when recordings were made from more superficial L2 cells, a large portion of their dendrites extended radially, which made their classification ambiguous. Therefore, we did not use “apical” and “basal” terminology in the paper to avoid confusion. Distances were measured along the 3D reconstructed surface of the recovered pyramidal cells. This information is now included in the methods.

      Line 445, "PV cell NEURON modeling" ... hmm. Everyone re-uses methods sections to some degree, but this is not confidence-inspiring, and also not from a proofreading perspective.

      We have corrected the typo.

      It seems that you constructed a new HCN NEURON mechanism when several have been published/reviewed already. Please explain your reasons or at least comment on the differences.

There are slight differences in our model compared to previously published models. Nevertheless, we took a previously published HCN model as a base (Gasparini et al., 2004) and created our own model to fit our whole-cell voltage clamp recordings.

      Bath-applied Cs+ can change synaptic transmission (in the hippocampus; Chevaleyre 2002). But also ZD7288 has some such effects. Also, see (Harris 1995) for a Cs+ and ZD7288 comparison. As well as (Harris 1994) for more Cs+ side-effects (it broadens APs, etc). Bath-applied blockers may affect both long-range and local synapses in your recordings, via K-channels or perhaps presynaptic HCN (though I am aware of your Fig. 1e). Since you can do intracellular perfusion, you could apply ZD7288 postsynaptically (Sheets 2011), an elegant solution.

We thank the reviewer for the suggestion. We were aware of the potential presynaptic effects of cesium (i.e., presynaptic Kv or other channel effects) and did measure PPR after cesium application (Fig. 1h), noting no effect. At the Cs<sup>+</sup> concentrations used here, we now also include new data in the results showing no effect on the somatically recorded AP waveform (a change that would be representative of a Kv channel effect). As stated earlier for reviewer 1, we have now performed additional experiments using either cesium or ZD-7288 for comparison (e.g., see updated Fig. 1; Supplementary Figure 1; Fig. 3b-e). Intracellular ZD-7288 re-perfusion is an elegant solution which we will absolutely consider in future experiments.

      K-Gluconate is reported to inhibit Ih (Velumian 1997), consider at least some control experiments with a different internal for the main synaptic finding - maybe you'll find no big change ...

We thank the reviewer for the suggestion. Although K-gluconate can inhibit HCN current, this intracellular solution is nonetheless commonly used in the literature to measure this current (Huang & Trussell 2014). We chose this intracellular solution to improve recording stability.

      (Biel 2009) is a very comprehensive HCN review, you may find it useful.

      We thank the reviewer for bringing this to our attention, we have now included the citation in the introduction.

      "Hidden" in your title seems too much.

      We changed the title to more accurately describe our findings and removed ‘hidden’.

      While I'm glad you didn't record at room temperature, the choice of 30C seems a bit unfortunate - if you go to the trouble to heat the bath, why not at least 34C, which is reasonably standard as an approximation for physiological temperature?

We thank the reviewer for pointing this out. The choice of 30C was made to approach physiological temperature while preserving the slices for extended amounts of time, which is a standard approach. Future in vivo experiments will be needed to further understand the naturalistic relevance at ~37C.

      Line 506: do you mean "Hz" here? It's not a frequency, is it? I think it's a unitless ratio?

      Correct, we have amended the typo.

      Line 95: you have not shown that HCN is "essential" for "excess" AP firing.

      We have corrected the phrasing, we agree.

      Fig. 2b,c: is this data from a single example neuron, maybe the same neuron as in 2a? Or from all recorded neurons pooled?

      The data is from several recorded cells pooled.

      Fig. 3 (important figure):

      Why did you not use a paired test for panels e and f? You have the same number of neurons for each condition and the expectation is that you record each neuron in control and then in cesium condition, which would be a paired comparison. Or did you record only 1 condition per neuron?

      This figure presents your main finding (in my opinion). You should show examples of the synaptic responses, i.e. raw traces, for each condition and panel, and overlaid in such a way that the reader can immediately see the relevant comparison - it's worth the space it requires.

We thank the reviewer for the suggestions. Traces are only overlaid in the paper when they come from the same cell. For Fig. 3d-i, EPSPs in every neuron were evoked at 2-3 different locations (i.e., 1-2 ‘L4’ locations for Type-I and Type-II synapses, and one ‘L1’ location in each) with the same stimulation pipette and one pharmacological condition per cell. Therefore, two-sample t-tests were used, since the control and cesium conditions came from separate cells (i.e., separate observations). This was necessary, as we can never assume that the stimulating electrode can return to the same synapse after moving it. We were not comfortable with showing overlaid traces from different cells; however, we did show representative traces from the control and Cs<sup>+</sup> conditions in Fig. 3h. Complementary ZD-7288 experiments can be found in panels b and c, where we did perform within-cell pharmacology (and thus used paired t-tests) from one stimulation area per cell. We hope these complementary experiments increase overall confidence, as neither pharmacological approach is entirely free of off-target effects. We have now also included more overlaid traces where appropriate (i.e., Fig. 3b, and in the new Fig. 3k experiments using within-cell pharmacology comparisons). We realize these complementary approaches could cause confusion to the reader, and have done our best to make the slightly different approaches in this figure clearer in the results section.

      Consider repeating at least some of these critical experiments with ZD7288 instead of Cs+ (and not K-gluc), or even with ZD7288 pipette perfusion, if it's technically feasible here.

      We thank the reviewer for the suggestions. Although many of our recordings using Cs<sup>+</sup> already had complementary experiments (such as synaptic experiments Figure 3e vs Figure 3b), we recognize the need to extend the manuscript with more ZD-7288 experiments. We have now extended Figure 1 with three panels (Figure 1 c,d,e), which recapitulates a fundamental finding, the change in overall excitability upon HCN channel blockade, using ZD-7288 as well.

      Fig. 3a, why show a schematic (and weirdly scaled) stimulating electrode? Don't you have a BF photo showing the actual stimulating electrode, which you could trace to scale or overlay? Could you use this panel to indicate what counts as "distal" and what as "proximal", visually?

The stimulating electrode was unfortunately not filled with fluorescent material and was therefore not captured during the z-stack.

      Fig. 3b: is the y-axis labeled correctly? A "100% change" would mean a doubling, but based on the data points here I think y=100% means "no change"?

The scale is labeled correctly: a 100% change corresponds to a doubling.
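
To make the axis convention explicit, a one-line sketch of percent change relative to baseline, under which 0% means no change and 100% means a doubling (the numeric values below are arbitrary examples):

```python
def percent_change(pre, post):
    """Percent change relative to baseline: 0% = no change, 100% = doubling."""
    return (post - pre) / pre * 100.0

print(percent_change(10.0, 10.0))  # 0.0   (no change)
print(percent_change(10.0, 20.0))  # 100.0 (doubling)
```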

      Fig. 3b, c: again, show traces representing distal and proximal, not just one example (without telling us how far it was). And use those traces to illustrate the half-width measurement, which may be non-trivial.

We have extended Figure 3b with an inset showing the effect of ZD-7288 at a proximal stimulation site. The legend now includes additional information indicating a stimulation location 28 µm from the soma in control conditions (black trace) and upon ZD-7288 application (green trace).

      Line 543, 549: it seems you swapped labels "h" and "i"?

      Typo corrected.

      Fig. 4b: to me, MK-801 only *partially* blocks amplification, but in the text L198 you write "abolish".

      We thank the reviewer for pointing this out. Indeed, there are several other subthreshold mechanisms that are still intact after pipette perfusion, which can cause amplification. We have now clarified this in the text (p7).

      Fig. 4e,f: what is the message? Uniform NMDAR? The red asterisk in (e) is at a proximal/distal ratio of roughly 1. I don't understand the meaning of the asterisk (the legend is too basic) and I'm surprised to see a ratio of 1 as the best fit, and also that the red asterisk is at a dendritic distance of 0 um in (f). This could use more explanation (if you feel it's relevant).

We thank the reviewer for pointing this out. We have now included a better explanation in the results and figure legend. We have also updated the figure to make it clearer and added model traces in Fig. 4f, which correspond to example data from slices in Fig. 4g (both green). The graph suggests a nonuniform, proximally abundant NMDA distribution. The color coding corresponds to the proximal EPSP halfwidth divided by the distal EPSP halfwidth. It is true that the dendritic distance ‘center’ was best-fit very close to the soma, but note also that the dispersion (distribution) half-width was >150 µm, so there is quite a significant dendritic spread despite the proximal bias prediction. Based on this model there is likely NMDA spread throughout the entire dendrite, but biased proximally. Naturally, future work will need to map this at the spine level, so this is currently an oversimplification. Nonetheless, a proximal NMDA bias was necessary to recapitulate the findings from Fig. 3, and additional slice recordings in Fig. 4 were consistent with this interpretation.

      Fig. 4g: I feel your choice of which traces to overlay is focusing on the wrong question. As the reader, what I want to see here is an overlay of all 4 conditions for one pathway. If this is a sequential recording in a single cell (Cs, Cs+MK801, wash out Cs, MK801), then the overlay would be ideal and need not be scaled. Otherwise, you can scale it. But the L1/L4 comparison does not seem appropriate to me. I find myself trying to imagine what all the dark lines would look like overlaid, and all the light lines overlaid separately. Also, the time axis is missing from this panel. Consider a subtraction of traces (if appropriate).

In these recordings, all EPSPs were measured using a stimulating electrode that was moved between L1 and L4 (only once, to keep the exact input consistent) to measure the different inputs in a single neuron. In separate sets of experiments, the same method was used but in the presence of Cs<sup>+</sup>, Cs<sup>+</sup> + MK-801, or MK-801 alone. This was the most controlled method in our hands for this type of approach, as drug washouts were either impractical or not possible. Overlaying four traces would have presented a more cluttered image, and the four conditions were not actually recorded in a single cell. As our aim was to resolve the proximal-distal halfwidth relationship, we deemed the within-cell L1 vs. L4 comparison appropriate. We have nonetheless added model traces in Fig. 4f, which correspond to example data from slices in Fig. 4g (both green). The bar graphs should also serve to illustrate the input-specific relationship, i.e., that the only time the L1 and L4 EPSP relationship was inverted was in the presence of Cs<sup>+</sup> (green bars), and that this effect was occluded with simultaneous MK-801 in the pipette (red bars).

      Line 579: should "hyperpolarized" be depolarized?

      Corrected

      Fig. 5a: it looks like the HCN density is high in the most basal dendrites (black curve above), then drops towards the soma, then rises again in the apicals (red curve). Is that indeed how the density was modeled? If so, this is completely at odds with the impression I received from reading your text and experimental data - there, "proximal" seems to mean where the L4 axons are, and "distal" seems to mean where the L1 axons are, in other words, high HCN towards the pia and low HCN towards the white matter. But this diagram suggests a biphasic hill-valley-hill distribution of HCN (meaning there is a second "distal" region below the soma). In that case, would the laterally-distant basal dendrites also be considered distal? How does the model implement the distribution - is it 1D, 2D or 3D? As you can probably tell, this figure raised more questions for me and made me wonder why I don't have a better understanding yet of your definitions.

      We thank the reviewer for pointing this out. We agree our initial cartoon of the parameter fitting procedure was not accurate and should have depicted just a single ‘curve’. We have now simplified it to better demonstrate what the model is testing, and also made the terms more consistent and accurate. There is no ‘second’ region in the model. We hope the figure illustrates this better now. We also edited the legend to be clearer. Because the model description in Fig. 4d suffered from similar shortcomings, we modified it accordingly as well, along with its figure legend.

      Fig. 5b: why is the best fit at a proximal/distal ratio of 1, yet sigma is 50 um?

      The proximal/distal bias in this figure was fitted to 0.985 (prox/distal ratio), as we modeled control conditions with intact NMDA and HCN channels, which closely approximated the control recording comparisons.

      Fig. 6h, Line 662: "vs CsMeSO4 ... for putative LGN events" The panel shows proximal vs distal, not control vs Cs+. What's going on here?

      Typo corrected.

      Fig. 7e: the ctrl sag ratio here averages 0.02, while in Fig. S1 the average (for V1 and others) is about 0.07.

      Please refer to our answer to the previous question regarding sag ratio measurements. Briefly, recordings made with 5-CT application used a less severe, -2 pA/pF current injection to test sag responses. This more modest hyperpolarization activated fewer HCN channels; therefore, the sag ratio is lower compared to previously reported datapoints.

      We have included this explanation in the methods section (page 14)

      Now here you are using a paired test for this pharmacology, but you didn't previously (see my earlier comments/questions).

      Paired t-tests were used for these experiments, as the control and test datapoints came from the same cell. Cells were recorded in control conditions and after drug application.

      Line 137: single-axon activation: but cortical axons make multi-synaptic contacts, at least for certain types of pre- and post-synaptic neurons, and (e.g. in L5-L5 pairs) those contacts can be distributed across the entire dendritic arbor. In other words, it's possible that when you stimulate in L1, you activate local axons, and the signal could then propagate to multiple synaptic contact locations, some being distal and some proximal. Maybe you have reasons to believe you're able to avoid this?

      We thank the reviewer for this question. Cortical axons often make distributed contacts; however, top-down and bottom-up pathways innervating L2/3 PCs are at least somewhat restricted to L1 and L2/3/L4, respectively (Shen et al. 2022, Sermet et al. 2019). Therefore, due to the lack of evidence suggesting a heavily mixed topographical distribution for top-down and bottom-up inputs, we have reason to believe that L1 stimulation will result in mainly distal input recruitment, while L4 stimulation will mainly excite proximal dendritic regions. The resolution of our experiments was also improved by the minimal stimulation and visual guidance (in a subset of experiments) of the stimulation. Furthermore, new optogenetic experiments stimulating LGN and LM axons, which have been anatomically defined previously as biased to deeper layers and L1, respectively, were also performed (Fig. 3j-l), with cesium effects analogous to our local electrical stimulation experiments. Future work using varying optogenetic stimulation parameters will expand on this.

      L140: "previous reports" ==> citation needed.

      We have inserted the citation needed.

      L149: "arriving to layer 1"; but I think earlier you noted that some or many L2/3 neurons lack a dendritic tuft; do they all nevertheless have dendrites in L1? Note that cortico-cortical long-range axons still need to pass through all cortical layers on their way up to L1.

      We thank the reviewer for the question. Although the more superficial L2/3 PCs lack a distinct apical tuft, their dendrites reach the pia similarly to those of deeper L2/3 PCs. All of our recorded and post-hoc recovered cells had dendrites in L1, except in cases where they were clearly cut during the slicing procedure, in which case the cells were excluded from the study.

      When you write "L4 axons" or "L4 inputs", do you specifically mean long-range thalamic axons? Or axons from local L4 neurons? What about axons in L4 that originate from L5 pyramidal neurons?

      In the case of ‘L4’ axons, we cannot disambiguate these inputs a priori, as they are both part of the bottom-up pathway and are possibly experimentally indistinguishable. Even with restricted optogenetic LGN stimulation, disynaptic inputs via L4 PCs cannot be completely ruled out under our conditions. On the other hand, the probability of L5 PC axons terminating on L2/3 PCs is exceedingly low (a single reported connection out of 1145 potential connections; Hage et al. 2022). We did find two clearly different synaptic subpopulations in L4 (Supp. Fig. 3), which it was tempting to classify as one or the other. However, we felt there was not enough evidence in the literature, or in our additional optogenetic experiments, to make a classification of the source of these different L4 inputs. Thus we designated them Type-I and Type-II for now.

      Do you inject more holding current to compensate for the resting membrane potential when Cs+ or ZD7288 is in the bath?

      We thank the reviewer for the question. We did not inject a compensatory current, as we wanted to investigate the dual, physiologically relevant action of HCN channels (George et al. 2009).

      I'd like to see distributions (histograms) of L4 and L1 EPSP amplitudes, under control conditions and ideally also under HCN block.

      We have now extended the manuscript with a supplementary figure (Supplementary Figure 6) to show that EPSP peak amplitude was not distance dependent in control conditions, and that there was no relationship between peak and halfwidth in our dataset.

      Line 186, custom pipette perfusion: why not use this for internal ZD7288, to make it cell-specific?

      We thank the reviewer for the question, this is a good point. In future work we will consider this when applicable. It is certainly a way to control for bath application confounds in many ways.

      L205: "recapitulate our experimental findings" - which findings do you mean? I think a bit of explanation/referencing would help.

      Corrected.

      Line 210: L4-evoked were narrower than L1-evoked: is this not expected based on filtering?

      We thank the reviewer for pointing this out, the word “Intriguingly” has been omitted.

      Line 231 and 235: "in L5 PCs" should be restricted to L5 PT-type PCs.

      We have corrected this throughout the manuscript.

      Neuromodulation, Fig. 7, L263-282: the neuromodulation finding is interesting. However, a bit like the developmental figure, it feels "tacked on" and the transition feels a bit awkward. I think you may want to discuss/cite more of the existing literature on neuromodulatory interactions with HCN (not just L2/3). Most importantly, what I feel is missing is a connection to your main finding, namely L1 and L4 inputs. Does serotonergic neuromodulation put L1 and L4 back on equal footing, or does it exaggerate the differences?

      We thank the reviewer for the question. We agree with the reviewer that Figure 7 does not give a complete picture of how the adult brain can capitalize on this channel distribution; our intention was to show that HCN channels are not a stationary feature of L2/3 PCs, but a feature which can be regulated developmentally and even in the adult brain via neuromodulation. In other words, the subthreshold NMDA boosting we observed can be gated by HCN, depending on the developmental stage and/or neuromodulatory state of the system. We have now added some brief language to better introduce the transition and its relevance to the current study in the results (p8), and discussed the implications in the discussion section of the original manuscript.

      General comment: different types/sources of synapses may have different EPSP kinetics. I feel this is not mentioned/discussed adequately, considering your emphasis on EPSPs/HCN.

      See points above on input-specific synaptic diversity.

      Line 319/320: enriched distal HCN is found in L5 PT-type, not in all L5 PCs.

      Corrected

      L320: CA1 reportedly has a subset of pyramidal neurons that have higher proximal HCN than distal (I gave the citation above). In light of that, I think "unprecedented" is an overstatement.

      Corrected.

      Methods:

      L367: What form of anesthesia was used?

      Amended.

      Which brain areas, and how?

      Amended.

      Why did you first hold slices at 34C, but during recording hold at 30C?

      We held the slices at 34C to accelerate the degradation of superficial damaged parts of the slice, which is in line with currently used acute slice preparation methodologies, regardless of the subsequent recording temperature.

      Pipette resistance/tip size?

      Amended.

      Cell-attached recordings (L385): provide details of recordings. What was the command potential (fixed value, or did you adjust it per neuron by some criteria)?

      Amended.

      What type of stimulating electrode did you use? If glass, what solution is inside, and what tip size?

      We thank the reviewer for pointing these out, the specific points were added to the methods section.

      L392/393: you adjusted the holding (bias) current to sit at -80 mV. What were the range and max values of holding current? Was -80 mV the "raw" potential, or did it account for liquid junction? If you did not account for liquid junction potential, then would -80 in your hands effectively be between -95 and -90 mV? That seems unusually hyperpolarized.

      All cells were held with bias holding currents between -50 pA and 150 pA. To be clear, as mentioned below, we did not change the bias current after any drug applications. We did not correct for liquid junction potential, and cells were ‘held’ at -80 mV with bias current during our recordings because 1) this value was apparently close to the RMP (i.e., little bias current was needed at this voltage on average) (Fig. 2e) and 2) it kept conditions consistent across recordings. The uncorrected -80 mV is in the range of previously reported membrane potential values both in vivo and in vitro (Svoboda et al. 1999, Oswald et al. 2008, Luo et al. 2017), which found the (corrected) RMP to be below -80 mV. Naturally this will not reflect every in vivo condition completely, and further investigation using naturalistic conditions is warranted in the future.

      Did you adjust the bias current during/after pharmacology?

      Bias current was not adjusted in order to resolve the effect on resting membrane potential.

      L398: sag calculation could use better explanation: how did you combine/analyze multiple steps from a single neuron when calculating sag? Did you choose one level (how) or did you average across step sizes or ...?

      Sag ratio was measured at -6 pA/pF current step except for one set of experiments in Fig. 7. Methods section was amended.

      L400, 401: 10 uM Alexa-594 or 30 um Alexa-594, which is correct?

      10 µM is correct, typo was corrected

      L445: "PV cell" seems like a typo?

      Typo is corrected.

      L450: "altered", please describe the algorithm or manual process.

      Alterations were made manually.

      L474: NDMA, typo.

      Typo is fixed.

      L474: "were adjusted", again please describe the process.

      Adjustments were made by a grid-search algorithm.
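For context, a minimal, hypothetical sketch of such a grid search follows. The parameter names and the toy error metric are illustrative, not the authors' actual model code; the target values simply echo the best-fit proximal/distal ratio of 0.985 and sigma of 50 um reported earlier in this response.

```python
import itertools

# Hypothetical grid search over two channel-distribution parameters.
# fit_error is a toy objective standing in for the real model-vs-data error.
def fit_error(prox_distal_ratio, sigma_um, target=(0.985, 50.0)):
    return abs(prox_distal_ratio - target[0]) + abs(sigma_um - target[1]) / 100

ratios = [round(0.8 + 0.005 * i, 3) for i in range(80)]  # 0.800 .. 1.195
sigmas = [10.0 * i for i in range(1, 11)]                # 10 .. 100 um

# Exhaustively evaluate every parameter combination and keep the best.
best = min(itertools.product(ratios, sigmas), key=lambda p: fit_error(*p))
assert best == (0.985, 50.0)
```

A grid search like this is exhaustive but transparent: every candidate in the parameter grid is scored against the data, so the reported optimum is reproducible.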

      Biel M, Wahl-Schott C, Michalakis S, Zong X. Hyperpolarization-activated cation channels: from genes to function. Physiol Rev. 2009;89(3):847-885. https://journals.physiology.org/doi/full/10.1152/physrev.00029.2008 - (very comprehensive review of HCN)

      Bullis JB, Jones TD, Poolos NP. Reversed somatodendritic I(h) gradient in a class of rat hippocampal neurons with pyramidal morphology. J Physiol. 2007 Mar 1;579(Pt 2):431-43. doi: 10.1113/jphysiol.2006.123836. Epub 2006 Dec 21. PMID: 17185334; PMCID: PMC2075407. https://physoc.onlinelibrary.wiley.com/doi/full/10.1113/jphysiol.2006.123836 - (CA1 subset (PLPs) have a reversed HCN gradient; cell-attached patches, NMDAR)

      Velumian AA, Zhang L, Pennefather P, Carlen PL. Reversible inhibition of IK, IAHP, Ih, and ICa currents by internally applied gluconate in rat hippocampal pyramidal neurones. Pflugers Arch. 1997 Jan;433(3):343-50. doi: 10.1007/s004240050286. PMID: 9064651. https://link.springer.com/article/10.1007/s004240050286 - (K-Gluc internal inhibits HCN)

      Sheets PL, Suter BA, Kiritani T, Chan CS, Surmeier DJ, Shepherd GM. Corticospinal-specific HCN expression in mouse motor cortex: Ih-dependent synaptic integration as a candidate microcircuit mechanism involved in motor control. J Neurophysiol. 2011;106(5):2216-2231. https://journals.physiology.org/doi/full/10.1152/jn.00232.2011 - (L2/3 IT have same sag ratio as all other non-PT pyramidals, roughly 5% (vs 20% PT); intracellular ZD7288 used at 10 or 25 uM)

      Harris NC, Constanti A. Mechanism of block by ZD 7288 of the hyperpolarization-activated inward rectifying current in guinea pig substantia nigra neurons in vitro. J Neurophysiol. 1995 Dec;74(6):2366-78. doi: 10.1152/jn.1995.74.6.2366. PMID: 8747199. https://journals.physiology.org/doi/abs/10.1152/jn.1995.74.6.2366 - (comparison Cs+ and ZD7288)

      Harris NC, Libri V, Constanti A. Selective blockade of the hyperpolarization-activated cationic current (Ih) in guinea pig substantia nigra pars compacta neurones by a novel bradycardic agent, Zeneca ZM 227189. Neurosci Lett. 1994;176(2):221-225. https://www.sciencedirect.com/science/article/abs/pii/0304394094900876 - (Cs+ is not HCN-selective; it also broadens APs, reduces the AHP)

      Chevaleyre V, Castillo PE. Assessing the role of Ih channels in synaptic transmission and mossy fiber LTP. Proc Natl Acad Sci U S A. 2002;99(14):9538-9543. https://pnas.org/doi/abs/10.1073/pnas.142213199 - (Cs+ blocks K channels, increases transmitter release; but also ZD7288 affects synaptic transmission)

      Thank you

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      The modeling and experimental work described provide solid evidence that this model is capable of qualitatively predicting alterations to the swing and stance phase durations during locomotion at different speeds on intact or split-belt treadmills, but a revision of the figures to overlay the model predictions with the experimental data would facilitate the assessment of this qualitative agreement. This paper will interest neuroscientists studying vertebrate motor systems, including researchers investigating motor dysfunction after spinal cord injury.

      Figures showing the overlay of the experimental data with the modeling predictions have been included as figure supplements for Figures 5-7. This highlights how accurate the model predictions were.

      Public Reviews:

      Reviewer #1 (Public review):

      We thank the reviewer for the positive evaluation of our paper and emphasizing its strengths in the Summary.

      Weaknesses:

      (1) Could the authors provide a statement in the methods or results to clarify whether there were any changes in synaptic weight or other model parameters of the intact model to ensure locomotor activity in the hemisected model?

      Such a statement has been inserted in Materials and Methods, section “Modeling”. Also, in the 1st paragraph of section “Spinal sensorimotor network architecture and operation after a lateral spinal hemisection”, we stated that no “additional changes or adjustments” were made.

      (2) The authors should remind the reader what the main differences are between state-machine, flexor-driven, and classical half-center regimes (lines 77-79).

      Short explanations/reminders have been inserted (see lines 80-83 of tracked changes document).

      (3) There may be changes in the wiring of spinal locomotor networks after the hemisection. Yet, without applying any sort of plasticity, the model is able to replicate many of the experimental data. Based on what was experimentally replicated or not, what does the model tell us about possible sites of plasticity after hemisection?

      The quantitative correspondence between the changes in locomotor characteristics predicted by the model and those obtained experimentally provides additional validation of the model proposed in the preceding paper and used in this paper. This was our ultimate goal. None of the plastic changes during recovery were modeled because of a lack of precise information on these changes. The absence of possible plastic changes may explain the small discrepancies between our simulations and experimental data (see the Supplemental Figures that have been added). However, the model has only a simplified description of spinal circuits, without motoneurons and without real simulation of leg biomechanics. This limits our analysis or predictions of possible plastic changes within a reasonable degree of speculation. This issue is discussed in the section “Limitations and future directions” in the Discussion. We have also inserted a sentence: “The lack of possible plastic changes in spinal sensorimotor circuits of our model may explain the absence of exact/quantitative correspondences between simulated and experimental data.”

      (4) Why are the durations on the right hemisected (fast) side similar to results in the full spinal transected model (Rybak et al. 2024)? Is it because the left is in slow mode and so there is not much drive from the left side to the right side even though the latter is still receiving supraspinal drive, as opposed to in the full transection model? (lines 202-203).

      This is correct. We have included this explanation in the text (lines 210-211 of tracked changes document).

      (5) There is an error with probability (line 280).

      This typo was corrected.

      Reviewer #2 (Public review):

      This is a nice article that presents interesting findings. One main concern is that I don't think the predictions from the simulation are overlaid on the animal data at any point - I understand the match is qualitative, which is fine, but even that is hard to judge without at least one figure overlaying some of the data.

      We thank the Reviewer for the constructive comments. Figures showing the overlay of the experimental data with the modeling predictions have been included as figure supplements for Figures 5-7. This highlights how accurate the model predictions were.

      Second is that it's not clear how the lateral coupling strengths of the model were trained/set, so it's hard to judge how important this hemi-split-belt paradigm is. The model's predictions match the data qualitatively, which is good; but does the comparison using the hemi-split-belt paradigm not offer any corrections to the model? The discussion points to modeling plasticity after SCI, which could be good, but does that mean the fit here is so good there's no point using the data to refine?

      The model was not trained or retrained; it was used as described in the preceding paper. The quantitative correspondence between the changes in locomotor characteristics predicted by the model and those obtained experimentally provides additional validation of the model proposed in the preceding paper and used in this paper. This was our ultimate goal. None of the plastic changes during recovery were modeled because of a lack of precise information on these changes. The absence of possible plastic changes may explain the small discrepancies between our simulations and experimental data (see the figure supplements that have been added). However, the model has only a simplified description of spinal circuits, without motoneurons and without real simulation of leg biomechanics. This limits our analysis or predictions of possible plastic changes within a reasonable degree of speculation. This issue is discussed in the section “Limitations and future directions” in the Discussion.

      The manuscript is well-written and interesting. The putative neural circuit mechanisms that the model uncovers are great, if they can be tested in an animal somehow.

      We agree and we are considering how we can do this in an animal model.

      Page 2, lines 75-6: Perhaps it belongs in the other paper on the model, but it's surprising that in the section on how the model has been revised to have different regimes of operation as speed increases, there is no reference to a lot of past literature on this idea. Just one example would be Koditschek and Full, 1999 JEB Figure 3, where they talk about exactly this idea, or similarly Holmes et al., 2006 SIAM review Figure 7, but obviously many more have put this forward over the years (Daley and Beiwener, etc). It's neat in this model to have it tied down to a detailed neural model that can be compared with the vast cat literature, but the concept of this has been talked about for at least 25+ years. Maybe a review that discusses it should be cited?

      We have revised the Introduction to include the suggested references.

      Page 2, line 88: While it makes sense to think of the sides as supraspinal vs afferent driven, respectively, what is the added insight from having them coupled laterally in this hemisection model? What does that buy you beyond complete transection (both sides no supra) compared with intact?

      We are trying to build one model that can reproduce multiple experimental findings in quadrupedal locomotion, including genetic manipulations (silencing/removal) of particular neuron types (and commissural interneurons), as pointed out in the section “Model Description” in the Results. These lateral connections are critical for reproducing and explaining other locomotor behaviors demonstrated experimentally. Moreover, even in this study, these lateral interactions are necessary to maintain left-right coordination and equal left-right frequency (step period) during split-belt locomotion and after hemisection.

      I can see how being able to vary cycle frequencies separately of the two limbs is a good "knob" to vary when perturbing the system in order to refine the model. But there isn't a ton of context explaining how the hemi-section with split belt paradigm is important for refining the model, and therefore the science. Is it somehow importantly related to the new "regimes" of operation versus speed idea for the model?  

      We did not refine the model in this paper; we simply used it for new simulations. The accuracy of its predictions further supports the organization and operation of the model we recently proposed.

      Page 5, line 212: For the predictions from the model, a lot depends on how strong the lateral coupling of the model is, which, in turn, depends on the data the model was trained on. Were the model parameters (especially for lateral coupling of the limbs) trained on data in a context where limbs were pushed out of phase and neuronal connectivity was likely required to bring the limbs back into the same phase relationship? Because if the model had no need for lateral coupling, then it's not so surprising that the hemisected limbs behave like separate limbs, one with surpaspinal intact and one without.

      Please see our response above concerning the need for lateral interactions incorporated to the model.

      Page 8, line 360: The discussion of the mechanisms (increased influence of afferents, etc) that the model reveals could be causing the changes is exciting, though I'm not sure if there is an animal model where it can be tested in vivo in a moving animal.

      We agree it may be difficult to test right now but we are considering experimental approaches.

      Page 9, line 395: There are some interesting conclusions that rely on the hemi-split-belt paradigm here.

      We agree with this comment. Thanks.

      Reviewer #2 (Recommendations for the authors):

      Figures: Why aren't there any figures with the simulation results overlaid on the animal data?

      We followed this suggestion. Figures showing the overlay of the experimental data with the modeling predictions have been included as figure supplements.

    1. “I really thought it would feel mostly the same, because my husband and I have been together for almost four years now, and we’ve lived together for a good portion of that,” she says. “Emotionally ... it just feels a little more permanent. He said the other day that it makes him feel both young and old. Young in that it’s a new chapter, and old in that for a lot of people, the question of who you want to spend your life with is a pretty central question for your 20s and 30s, and having settled that does feel really big and momentous.”

      Marriage makes a relationship more permanent, giving Williams Brown a young and old feeling: he has found the person to spend the rest of his life with while merely beginning the journey.

    Annotators

      • Motivation & Purpose of the Talk

        “This talk is called I see what you mean what a tiny language can teach us about gigantic systems… which sort of uh formed the rock on which I built all of my thesis work.”

        • Alvaro introduces a small, “tiny” language (Dedalus) that explores how to effectively build and reason about distributed systems by focusing on semantics rather than purely operational details.
      • Importance of Abstraction & Its Pitfalls

        “Abstraction is a thing… arguably the best tool that we have in computer science… but sometimes it’s harmful.”

        • While abstraction helps manage complexity, it can also hide essential details about distributed behavior and lead to design failures (e.g., RPC “leaking” distributed complexity).
      • Division Between “Infrastructure Programmers” and “Users”

        “We tend to think of abstractions as these fixed boundaries, you know these walls… we put the Geniuses, the infrastructure programmers, the 10x Engineers below the wall… who goes above the wall? well, the despised users.”

        • Alvaro criticizes the mindset that library writers and library users are separate classes of people; instead, we all alternate between these roles.
      • Spark for a Declarative Approach

        “…if you kind of squint your eyes the work that I was doing in those two modes [C code vs. SQL queries]… it wasn’t really that different.”

        • Observing that data wrangling in both imperative and declarative styles shares core similarities prompted an interest in “could you write distributed systems using a logic/query language?”
      • Model-Theoretic Semantics & Queries

        “…model theoretic semantics say that no no the meaning of a program is precisely the structures that make the statements in the program true… data becomes a Common Language…”

        • A logic-based or query-based approach allows mapping “programs” to “outcomes” directly through data, making correctness and debugging potentially clearer than in purely imperative styles.
      • Datalog & Concurrency

        “Datalog is interesting because we see that there's this rich intimate connection between talking about what you don't know and having to commit to particular orders to get deterministic results.”

        • Datalog provides a unifying lens for data, but the addition of recursion, negation, and timing must be carefully managed to keep semantics deterministic in distributed settings.
      • Introducing Dedalus (pronounced ‘Day-Duh-Luss’)

        “So the idea is we want to take that clock and reify it, make the clock a piece of every uh unit of knowledge that we have… time is just a device that was invented to keep everything from happening at once.”

        • Dedalus extends data log with explicit time and asynchronous rules so programmers can represent mutable state, concurrency, and message ordering in a precise logical framework.
      • Three Rule Types in Dedalus

        “…we say you know every record has a time stamp… deductive rules say the conclusion has the same time stamp as the premise… inductive rules say the conclusion has one higher time stamp… asynchronous rules say hey look there's this infinite domain of time, we randomly pick from it…”

        • Dedalus’s key contribution is capturing “now,” “next,” and “eventually” semantics, reflecting real-world distributed behaviors (e.g., immediate local inference vs. future state vs. network delays).
      • State as “Induction in Time”

        “…unlike in databases which with having no time had only state in Dedalus there is no state… state is what you get when you say when you know something then you know it at the next time and by induction you keep knowing it.”

        • Dedalus reframes state changes as an inductive process on discrete time steps, allowing logic-based reasoning about mutation.
      • Confluence & Determinism

        “If we take away that pesky negation… or with very carefully controlled negation… monotonic… we know that negation free or monotonic more broadly Dedalus programs are confluent… they're deterministic without coordination.”

        • By restricting programs to monotonic logic (no negative conditions or well-controlled negation), a system can behave deterministically despite asynchronous execution and failures.
      • Significance for Distributed Systems

        “…there’s this rich intimate connection between… the meaning of programs, the uniqueness of a model… and this really valuable systems property of deterministic outcomes…”

        • Dedalus reveals how purely logical constructs (stable models, minimal models) can correspond directly to reliable, deterministic distributed protocols in practice.
      • Legacy & Extensions

        “…on top of Bloom we built Blazes… that allow programmers… exactly why they aren’t if they aren’t [deterministic]… lineage driven fault injection… we can prove that our programs are fault tolerant…”

        • Dedalus’s ideas led to subsequent systems like Bloom, Blazes, and lineage-driven fault injection that leverage logic-based reasoning to auto-generate or verify coordination strategies.
      • Closing Thoughts & Academic Invitation

        “We don’t do a good enough job respecting our users… If any of you are interested in spending the next five or six years screwing around inventing languages Building Systems with them… I’m looking for PhD students.”

        • Alvaro emphasizes user-focused abstractions, fluid design, and invites new students to further this research in language-driven system development.
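The three Dedalus rule types summarized above ("now", "next", "eventually"), and the idea of state as induction in time, can be sketched with a toy interpreter over timestamped facts. This is an illustrative simulation, not the real Dedalus implementation; the predicates `p`, `q`, `send`, and `msg` are invented for the example.

```python
import random

# Facts are (predicate, value, time) triples.
# - deductive rule:  conclusion at the SAME timestamp ("now")
# - inductive rule:  conclusion at time t+1 ("next") -- this is persistence
# - asynchronous rule: conclusion at a nondeterministic later time ("eventually")
def step(facts, t, rng):
    new = set(facts)
    for (pred, val, time) in facts:
        if time != t:
            continue
        if pred == "p":
            new.add(("q", val, t))          # deductive: q(X)@t :- p(X)@t
            new.add(("p", val, t + 1))      # inductive: p(X)@t+1 :- p(X)@t
        if pred == "send":                  # asynchronous: msg(X)@t' :- send(X)@t
            new.add(("msg", val, t + rng.randint(1, 3)))  # models network delay
    return new

rng = random.Random(0)
facts = {("p", "a", 0), ("send", "hello", 0)}
for t in range(5):
    facts = step(facts, t, rng)

# Persistence ("state is induction in time"): p("a") holds at every step so far.
assert all(("p", "a", t) in facts for t in range(6))
# Deduction: q follows p at the same timestamp.
assert ("q", "a", 0) in facts and ("q", "a", 3) in facts
# The message arrived at SOME later time, chosen nondeterministically.
assert any(("msg", "hello", t) in facts for t in range(1, 5))
```

Note how deletion never happens: a fact "expires" simply by not being re-derived at the next timestamp, which is how Dedalus models mutable state without mutation.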
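The confluence claim (that monotonic, negation-free programs reach the same outcome regardless of execution order) can likewise be illustrated with a toy fixpoint computation. The rules below (transitive closure over `edge`/`path` facts) are a standard Datalog example, not Dedalus itself.

```python
import itertools

# For MONOTONIC rules (facts are only added, never retracted), the least
# fixpoint is the same no matter the order in which rules fire.
def fixpoint(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            derived = rule(facts) - facts
            if derived:
                facts |= derived
                changed = True
    return facts

# Two monotonic rules computing transitive closure:
r1 = lambda f: {("path", a, b) for (p, a, b) in f if p == "edge"}
r2 = lambda f: {("path", a, c)
                for (p, a, b) in f if p == "path"
                for (q, b2, c) in f if q == "path" and b2 == b}

base = {("edge", 1, 2), ("edge", 2, 3), ("edge", 3, 4)}
results = {frozenset(fixpoint(base, list(order)))
           for order in itertools.permutations([r1, r2])}
assert len(results) == 1                     # same outcome for every rule order
assert ("path", 1, 4) in next(iter(results)) # closure fully computed
```

This order-independence is exactly the "deterministic without coordination" property the talk attributes to monotonic Dedalus programs; adding negation would break it, since a negative premise can hold or fail depending on when it is checked.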
    1. Curiosity Killed the Adage, Radiolab. Release date: Dec 20, 2024

      Episode Description

      The early bird gets the worm. What goes around, comes around. It’s always darkest just before dawn. We carry these little nuggets of wisdom—these adages—with us, deep in our psyche. But recently we started wondering: are they true? Like, objectively, scientifically, provably true?

      So we picked a few and set out to fact check them. We talked to psychologists, neuroscientists, runners, a real estate agent, skateboarders, an ornithologist, a sociologist and an astrophysicist, among others, and we learned that these seemingly simple, clear-cut statements about us and our world, contain whole universes of beautiful, vexing complexity and deeper, stranger bits of wisdom than we ever imagined.

    1. Reviewer #2 (Public review):

      Summary:

      The authors aim to provide a comprehensive understanding of the evolutionary history of the Major Histocompatibility Complex (MHC) gene family across primate species. Specifically, they sought to:

      (1) Analyze the evolutionary patterns of MHC genes and pseudogenes across the entire primate order, spanning 60 million years of evolution.

      (2) Build gene and allele trees to compare the evolutionary rates of MHC Class I and Class II genes, with a focus on identifying which genes have evolved rapidly and which have remained stable.

      (3) Investigate the role of often-overlooked pseudogenes in reconstructing evolutionary events, especially within the Class I region.

      (4) Highlight how different primate species use varied MHC genes, haplotypes, and genetic variation to mount successful immune responses, despite the shared function of the MHC across species.

      (5) Fill gaps in the current understanding of MHC evolution by taking a broader, multi-species perspective using (a) phylogenomic analytical computing methods such as Beast2, Geneconv, BLAST, and the much larger computing capacities that have been developed and made available to researchers over the past few decades, (b) literature review for gene content and arrangement, and genomic rearrangements via haplotype comparisons.

      (6) The authors' overall conclusion, based on their analyses and results, is that 'different species employ different genes, haplotypes, and patterns of variation to achieve a successful immune response'.

      Strengths:

      Essentially, much of the information presented in this paper is already well-known in the MHC field of genomic and genetic research, with few new conclusions and with insufficient respect to past studies. Nevertheless, while MHC evolution is a well-studied area, this paper potentially adds some originality through its comprehensive, cross-species evolutionary analysis of primates, focus on pseudogenes and the modern, large-scale methods employed. Its originality lies in its broad evolutionary scope of the primate order among mammals with solid methodological and phylogenetic analyses.

      The main strengths of this study are the use of large publicly available databases for primate MHC sequences, the intensive computing involved, the phylogenetic tool Beast2 to create multigene Bayesian phylogenetic trees using sequences from all genes and species, separated into Class I and Class II groups to provide a backbone of broad relationships to investigate subtrees, and the presentation of various subtrees as species and gene trees in an attempt to elucidate the unique gene duplications within the different species. The study provides some additional insights with summaries of MHC reference genomes and haplotypes in the context of a literature review to identify the gene content and haplotypes known to be present in different primate species. The phylogenetic overlays or ideograms (Figures 6 and 7) in part show the complexity of the evolution and organisation of the primate MHC genes via the orthologous and paralogous gene and species pathways progressively from the poorly-studied NWM, across a few moderately studied ape species, to the better-studied human MHC genes and haplotypes.

      Weaknesses:

      The title 'The Primate Major Histocompatibility Complex: An Illustrative Example of Gene Family Evolution' suggests that the paper will explore how the Major Histocompatibility Complex (MHC) in primates serves as a model for understanding gene family evolution. The term 'Illustrative Example' in the title would be appropriate if the paper aimed to use the primate Major Histocompatibility Complex (MHC) as a clear and representative case to demonstrate broader principles of gene family evolution. That is, the MHC gene family is not just one instance of gene family evolution but serves as a well-studied, insightful example that can highlight key mechanisms and concepts applicable to other gene families. However, this is not the case; this paper covers only specific details of primate MHC evolution without drawing broader lessons for any other gene families. So, the term 'Illustrative Example' is too broad or generalizing. In this case, a term like 'Case Study' or simply 'Example' would be more suitable. Perhaps 'An Example of Gene Family Diversity' would be more precise. Also, an explanation or 'reminder' is suggested that this study is not about the origins of the MHC genes from the earliest jawed vertebrates per se (~600 mya), but is an extension within a subspecies set that has emerged relatively late (~60 mya) in the evolutionarily divergent pathways of the MHC genes, systems, and various vertebrate species.

      Phylogenomics. Particular weaknesses in this study are the limitations and problems associated with providing phylogenetic gene and species trees to try and solve the complex issue of the molecular mechanisms involved with imperfect gene duplications, losses, and rearrangements in a complex genomic region such as the MHC that is involved in various effects on the response and regulation of the immune system. A particular deficiency is drawing conclusions based on a single exon of the genes. Different exons present different trees. Which are the more reliable? Why were introns not included in the analyses? The authors attempt to overcome these limitations by including genomic haplotype analysis, duplication models, and the supporting or contradictory information available in previous publications. They succeed in part with this multidiscipline approach, but much is missed because of biased literature selection. The authors should include a paragraph about the benefits and limitations of the software that they have chosen for their analysis, and perhaps suggest some alternative tools that they might have tried comparatively. How were problems with Bayesian phylogeny such as computational intensity, choosing probabilities, choosing particular exons for analysis, assumptions of evolutionary models, rates of evolution, systemic bias, and absence of structural and functional information addressed and controlled for in this study?

      Gene families as haplotypes. In the Introduction, the MHC is referred to as a 'gene family', and in paragraph 2, it is described as being united by the 'MHC fold', despite exhibiting 'very diverse functions'. However, the MHC region is more accurately described as a multigene region containing diverse, haplotype-specific Conserved Polymorphic Sequences, many of which are likely to be regulatory rather than protein-coding. These regulatory elements are essential for controlling the expression of multiple MHC-related products, such as TNF and complement proteins, a relationship demonstrated over 30 years ago. Non-MHC fold loci such as TNF, complement, POU5F1, lncRNA, TRIM genes, LTA, LTB, NFkBIL1, etc, are present across all MHC haplotypes and play significant roles in regulation. Evolutionary selection must act on genotypes, considering both paternal and maternal haplotypes, rather than on individual genes alone. While it is valuable to compile databases for public use, their utility is diminished if they perpetuate outdated theories like the 'birth-and-death model'. The inclusion of prior information or assumptions used in a statistical or computational model, typically in Bayesian analysis, is commendable, but they should be based on genotypic data rather than older models. A more robust approach would consider the imperfect duplication of segments, the history of their conservation, and the functional differences in inheritance patterns. Additionally, the MHC should be examined as a genomic region, with ancestral haplotypes and sequence changes or rearrangements serving as key indicators of human evolution after the 'Out of Africa' migration, and with disease susceptibility providing a measurable outcome. There are more than 7000 different HLA-B and -C alleles at each locus, which suggests that there are many thousands of human HLA haplotypes to study. In this regard, the studies by Dawkins et al (1999 Immunol Rev 167,275), Shiina et al. 
      (2006 Genetics 173,1555) on human MHC gene diversity and disease hitchhiking (haplotypes), and Sznarkowska et al. (2020 Cancers 12,1155) on the complex regulatory networks governing MHC expression, both in terms of immune transcription factor binding sites and regulatory non-coding RNAs, should be examined in greater detail, particularly in the context of MHC gene allelic diversity and locus organization in humans and other primates.

      Diversifying and/or concerted evolution. Both this and past studies highlight that a diversifying or balancing selection model is the dominant force in MHC evolution. This is primarily because the extreme polymorphism observed in MHC genes is advantageous for populations in terms of pathogen defence. Diversification increases the range of peptides that can be presented to T cells, enhancing the immune response. The peptide-binding regions of MHC genes are highly variable, and this variability is maintained through selection for immune function, especially in the face of rapidly evolving pathogens. In contrast, concerted evolution, which typically involves the homogenization of gene duplicates through processes like gene conversion or unequal crossing-over, seems to play a minimal role in MHC evolution. Although gene duplication events have occurred in the MHC region leading to the expansion of gene families, the resulting paralogs often undergo divergent evolution rather than being kept similar or homozygous by concerted evolution. Therefore, unlike gene families such as ribosomal RNA genes or histone genes, where concerted evolution leads to highly similar copies, MHC genes display much higher levels of allelic and functional diversification. Each MHC gene copy tends to evolve independently after duplication, acquiring unique polymorphisms that enhance the repertoire of antigen presentation, rather than undergoing homogenization through gene conversion. Also, in some populations with high polymorphism or genetic drift, allele frequencies may become similar over time without the influence of gene conversion. This similarity can be mistaken for gene conversion when it is simply due to neutral evolution or drift, particularly in small populations or bottlenecked species. Moreover, gene conversion might contribute to greater diversity by creating hybrids or mosaics between different MHC genes.
      In this regard, can the authors indicate what percentage of the gene numbers in their study have been homogenised by gene conversion compared to those that have been diversified by gene conversion?

      Duplication models. The phylogenetic overlays or ideograms (Figures 6 and 7) show considerable imperfect multigene duplications, losses, and rearrangements, but the paper's Discussion provides no in-depth consideration of the various multigenic models or mechanisms that can be used to explain the occurrence of such events. How do their duplication models compare to those proposed by others? For example, their text simply says on line 292, 'the proposed series of events is not always consistent with phylogenetic data'. How, why, when? Duplication models for the generation and extension of the human MHC class I genes as duplicons (extended gene or segmental genomic structures) by parsimonious imperfect tandem duplications with deletions and rearrangements in the alpha, beta, and kappa blocks were already formulated in the late 1990s and extended to the rhesus macaque in 2004 based on genomic haplotypic sequences. These studies were based on genomic sequences (genes, pseudogenes, retroelements), dot plot matrix comparisons, and phylogenetic analyses of gene and retroelement sequences using computer programs. It already was noted or proposed in these earlier 1999 studies that (1) the ancestor of HLA-P(90)/-T(16)/W(80) represented an old lineage separate from the other HLA class I genes in the alpha block, (2) HLA-U(21) is a duplicated fragment of HLA-A, (3) HLA-F and HLA-V(75) are among the earliest (progenitor) genes or outgroups within the alpha block, (4) distinct Alu and L1 retroelement sequences adjoining HLA-L(30), and HLA-N genomic segments (duplicons) in the kappa block are closely related to those in the HLA-B and HLA-C in the beta block; suggesting an inverted duplication and transposition of the HLA genes and retroelements between the beta and kappa regions. None of these prior human studies were referenced by Fortier and Pritchard in their paper. How does their human MHC class I gene duplication model (Fig. 
      6) such as gene duplication numbers and turnovers differ from those previously proposed and described by Kulski et al (1997 JME 45,599), (1999 JME 49,84), (2000 JME 50,510), Dawkins et al (1999 Immunol Rev 167,275), and Gaudieri et al (1999 GR 9,541)? Is this a case of reinventing the wheel?

      Results. The results are presented as new findings, whereas most if not all of the results' significance and importance already have been discussed in various other publications. Therefore, the authors might do better to combine the results and discussion into a single section with appropriate citations to previously published findings presented among their results for comparison. Do the trees and subsets differ from previous publications, albeit that they might have fewer comparative examples and samples than the present preprint? Alternatively, the results and discussion could be combined and presented as a review of the field, which would make more sense and be more honest than the current format of essentially rehashing old data.

      Minor corrections:

      (1) Abstract, line 19: 'modern methods'. Too general. What modern methods?

      (2) Abstract, line 25: 'look into [primate] MHC evolution.' The analysis is on the primate MHC genes, not on the entire vertebrate MHC evolution with a gene collection from sharks to humans. The non-primate MHC genes are often differently organised and structurally evolved in comparison to primate MHC.

      (3) Introduction, line 113. 'In a companion paper (Fortier and Pritchard, 2024)' This paper appears to be unpublished. If it's unpublished, it should not be referenced.

      (4) Figures 1 and 2. Use the term 'gene symbols' (circle, square, triangle, inverted triangle, diamond) or 'gene markers' instead of 'points'. 'Asterisks "within symbols" indicate new information.'

      (5) Figures. A variety of colours have been applied for visualisation. However, some coloured texts are so light in colour that they are difficult to read against a white background. Could darker colours or black be used for all or most texts?

      (6) Results, line 135. '(Fortier and Pritchard, 2024)' This paper appears to be unpublished. If it's unpublished, it should not be referenced.

      (7) Results, lines 152 to 153, 164, 165, etc. 'Points with an asterisk'. Use the term 'gene symbols' (circle, square, triangle, inverted triangle, diamond) or 'gene markers' instead of 'points'. A point is a small dot such as those used as data points for plotting graphs. The figures are so small that the asterisks in the circles, squares, triangles, etc., look like points (dots), and the points/asterisks terminology that is used is very confusing visually.

      (8) Line 178 (BEA, 2024) is not listed alphabetically in the References.

      (9) Lines 188-190. 'NWM MHC-G does not group with ape/OWM MHC-G, instead falling outside of the clade containing ape/OWM MHC-A, -G, -J and -K.' This is not surprising given that MHC-A, -G, -J, and -K are paralogs of each other and that some of them, especially in NWM, have diverged over time from the paralogs and/or orthologs and might be closer to one paralog than another and not be an actual ortholog of OWM, apes or humans.

      (10) Line 249. Gene conversion: This is recombination between two different genes where portions of the genes are exchanged with one another so that different portions of the gene can group within one or the other of the two gene clades. Alternatively, the gene has been annotated incorrectly if it does not group within either of the two alternative clades. Another possibility is that one or two nucleotide mutations have occurred without a recombination, resulting in a mistaken interpretation or conclusion of a recombination event. How many MHC gene conversion (recombination) events have occurred according to the authors' estimates? What measures are taken to avoid false-positive conclusions?

      (11) Lines 284-286. 'The Class I MHC region is further divided into three polymorphic blocks-alpha, beta, and kappa blocks-that each contains MHC genes but are separated by well-conserved non-MHC genes.' The MHC class I region was first designated into conserved polymorphic duplication blocks, alpha and beta by Dawkins et al (1999 Immunol Rev 167,275), and kappa by Kulski et al (2002 Immunol Rev 190,95), and should be acknowledged (cited) accordingly.

      (12) Lines 285-286. 'The majority of the Class I genes are located in the alpha-block, which in humans includes 12 MHC genes and pseudogenes.' This is not strictly correct for many other species, because the majority of class I genes might be in the beta block of new and old-world monkeys, and the authors haven't provided respective counts of duplication numbers to show otherwise. The alpha block in some non-primate mammalian species such as pigs, rats, and mice has no MHC class I genes or only a few. Most MHC class I genes in non-primate mammalian species are found in other regions. For example, see Ando et al (2005 Immunogenetics 57,864) for the pig alpha, beta, and kappa regions in the MHC class I region. There are no pig MHC genes in the alpha block.

      (13) Line 297 to 299. 'The alpha-block also contains a large number of repetitive elements and gene fragments belonging to other gene families, and their specific repeating pattern in humans led to the conclusion that the region was formed by successive block duplications (Shiina et al., 1999).' There are different models for successive block duplications in the alpha block and some are more parsimonious based on imperfect multigenic segmental duplications (Kulski et al 1999, 2000) than others (Shiina et al., 1999). In this regard, Kulski et al (1999, 2000) also used duplicated repetitive elements neighbouring MHC genes to support their phylogenetic analyses and multigenic segmental duplication models. For comparison, can the authors indicate how many duplications and deletions they have in their models for each species?

      (14) Lines 315-315. 'Ours is the first work to show that MHC-U is actually an MHC-A-related gene fragment.' This sentence should be deleted. Other researchers had already inferred that MHC-U is actually an MHC-A-related gene fragment more than 25 years ago (Kulski et al 1999, 2000) when the MHC-U was originally named MHC-21.

      (15) Lines 361-362. 'Notably, our work has revealed that MHC-V is an old fragment.' This is not a new finding or hypothesis. Previous phylogenetic analysis and gene duplication modelling had already inferred HLA-V (formerly HLA-75) to be an old fragment (Kulski et al 1999, 2000).

      (16) Line 431-433. 'the Class II genes have been largely stable across the mammals, although we do see some lineage-specific expansions and contractions (Figure 2 and Figure 2-gure Supplement 2).' Please provide one or two references to support this statement. Is 'gure' a typo?

      (17) Line 437. 'We discovered far more "specific" events in Class I, while "broad-scale" events were predominant in Class II.' Please define the difference between 'specific' and 'broad-scale'.

      Lines 450-451. 'This shows that classical genes experience more turnover and are more often affected by long-term balancing selection or convergent evolution.' Is balancing selection a form of divergent evolution that is different from convergent evolution? Please explain in more detail how and why balancing selection or convergent evolution affects classical and nonclassical genes differently.

      References. Some references in the supplementary materials such as Alvarez (1997), Daza-Vamenta (2004), Rojo (2005), Aarnink (2014), Kulski (2022), and others are missing from the Reference list. Please check that all the references in the text and the supplementary materials are listed correctly and alphabetically.

  12. inst-fs-iad-prod.inscloudgate.net
    1. Much easier. I'm in geometry, and it's like "Oh, okay. I know how to do that." I have a [private] tutor now, and she's planning to be a math teacher at Berkeley High

      There is such a big difference in how Chantelle and Jennifer respond in the interview. Chantelle struggles with prealgebra but doesn't hire a private tutor. Instead, she relies on retaking the class multiple times until she gets through the material. Jennifer, by contrast, hires a qualified tutor who wants to be a math teacher in the future. This shows how SAT prep works too: lower-income students usually don't have the outside resources to prepare them well enough for the test, while higher-income students do.

    1. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.

      Bots can act like real users on social media, which is interesting but also a bit scary. I feel like it's hard to tell if you're talking to a real person or just a program. Should there be a rule that bots must say they’re bots? It might help people trust social media more, but maybe fewer people would like the posts.

    1. It makes a lot of sense to have this different strategy of being rooted in the real physical world and have digital nomads being as like a guild of knowledge workers that seed their specialized knowledge because localism is necessary and good, but it's also not necessarily very innovative. Most people at the local level just keep repeating stuff. It's good to have people coming in from the outside and innovating.

      for - insight - good for digital nomads to be rooted somewhere in the physical world - they are like a cosmo guild of knowledge workers - localities tend to repeat the same things - digital nomads as outsiders can inject new patterns - SOURCE - Youtube Ma Earth channel interview - Devcon 2024 - Cosmo Local Commoning with Web 3 - Michel Bauwens - 2025, Jan 2

    2. use the commons as a new regulatory mechanism. That would mean not local commons but trans-local commons. What I imagine, I call this the magisteria of the commons, you have a coalition of, let's say, permaculture, a particular way of doing respectful agriculture. Locally, they're weak. It's just a bunch of people. Globally, what if there are 12,000 of them? What if they have a common social power, like common property that can help the nodes individually? I think that would create the premises and the seeds for a new type of institution that can operate at the trans-local level. That's what I call cosmolocalism

      for - cosmolocalism - nice articulation - SOURCE - Youtube Ma Earth channel interview - Devcon 2024 - Cosmo Local Commoning with Web 3 - Michel Bauwens - 2025, Jan 2

    1. first try to analyze the problem you are solving, then generate ideas, then test those ideas with the people who have the problem you are solving.

      I agree with this approach because it emphasizes iteration and testing, which makes the design process more grounded and practical. It’s not just about coming up with cool ideas in isolation; it’s about continuously refining those ideas based on real feedback. I think this helps create designs that are not only innovative but actually useful and relevant to the people who will use them.

    1. In fact, many argue that to truly be just and inclusive, design should not be done by professionals on behalf of the world, but rather done with the world. This need for radical inclusion in design processes comes from designers’ inability, no matter how committed to understanding other people’s perspectives, to accounting for the needs of a community, or the potential unintended consequences of a design on a community.

      This point really interested me because it underlined the inclusive and collaborative aspect of design. I agree that professional designers, with their skills and good intentions, cannot perceive a community's diverse needs unless they involve those directly affected. It's a different look from the usual view of design being one-way; this frames design as a partnership. It made me think critically about even the smallest design choices that could have the effect of unintentionally excluding certain groups, and it really encouraged me to think about how I could include others more actively in whatever design processes I take part in.

    2. In a way, all of these skills are fundamentally about empathy55 Wright, P., & McCarthy, J. (2008). Empathy and experience in HCI. ACM SIGCHI Conference on Human Factors in Computing (CHI). , because they all require a designer to see problems and solutions from other people’s perspectives, whether these people are users, other designers, or people in other roles, such as marketers, engineers, project managers, etc.

      I agree with this, but I also wonder how a designer would be able to put themselves in other people's shoes for various situations. I think it's inevitable to have biases and preferences, and one may subconsciously prioritize those scenarios. There can be problems a user faces that a designer just doesn't think of, which is why it's important that there are people of various backgrounds on these design teams.

    3. In professional contexts, design is often where the power is. Designers determine what companies make, and that determines what people use. But people with the word “design” in their job title don’t necessarily possess this power. For example, in one company, graphic designers may just be responsible for designing icons, whereas in another company, they might envision a whole user experience. In contrast, many people without the word design in their title have immense design power. For example, some CEOs like Steve Jobs exercised considerable design power over products, meaning that other designers were actually beholden to his judgement. In other companies (some parts of Microsoft, for example), design power is often distributed to lower-level designers within the company.

      I agree that having the word "design" in a job title does not always guarantee that one has the ability to influence significant results, as this ability frequently depends on the dynamics of the business and the leadership. It's interesting to observe how certain businesses, like Microsoft, disperse design authority among teams, while others, like Apple, concentrate it through powerful individuals like Steve Jobs. This viewpoint changes my understanding of how designers shape things by serving as a reminder that influence and decision-making power frequently have a greater influence on design impact than titles alone.

    4. When I was an undergraduate, I didn’t have a clue about design. Like most students in technical fields, I thought design was about colors, fonts, layout, and other low-level visual details.

      That's what I was thinking in high school. I thought design was just all about fashion, clothes, shoes, and aesthetics. Then I realized design is about anything you create that can be helpful to this world, whether it's a tool, a system, or even a way of thinking. This perspective shift helped me appreciate how design shapes our interactions with technology, solves complex problems, and impacts society on a deeper level.

    5. designers tend to unconsciously default to imagining users whose experiences are similar to their own.

      I strongly agree with this statement, and I think it's very common in society, not just among designers, to unconsciously create solutions that only help ourselves because of our own experiences and environment. Part of the reason I think this occurs is that it's difficult to obtain the personal information of others and the type of environment they grew up in. Sure, it's fairly easy to get a gist of what people experience through qualitative methods like interviews and surveys, but it's not exactly the most representative. This, I believe, raises the question: how can we make design more accessible?

    1. it is phenomenologically impossible for me to Perspectively know what it is like to be dead, because whenever I try to conjure up a frame (indicates the smallest, central box in the diagram), “Oh, I'm in a dark room! But wait, I'm still there in the dark room. There's the hereness and the nowness… Oh well, then I'm nowhere! Well, then I'm just an empty…!” No matter what I do, I can't get a framing that has within it my own non-existence, perspectively.

      for - example - what's it like to be dead? - phenomenologically impossible for me to perspectively know what it's like to be dead - source - Meaning crisis - episode 33 - The Spirituality of Relevance Realization - Wonder/Awe/Mystery/Sacredness - John Vervaeke

    1. Our relationships with each other are more fraught, less tolerant, and quicker-tempered. Our view of people also seems more fleeting and impressionistic — we experience people less as fully rounded humans, and more as personas, or streams of opinions. Our interactions with each other are also more physically distanced, which seems to have an effect akin to road rage. We react to other people on social media as drivers do to other drivers — we rant and scream in ways we wouldn’t dream of doing if we were talking face to face.

      Not true that our relationships with each other are less tolerant and quicker-tempered - it's again more of this and less of that, quicker this, slower that, wider this, shallower that... It's just a different relationship to reflect a different reality.

  13. inst-fs-iad-prod.inscloudgate.net
    1. The controversies - over matters like school funding, vouchers, bilingual education, high-stakes testing, desegregation, and creationism - seem, at first glance, to be separate problems. In important ways, however, they all reflect contention over the goals of the American dream.

      I actually did not think of this before. I would have seen them all as separate issues without the author pointing it out like that. I think it's interesting that this all goes back to the goal of the American dream. It all comes down to what's best for the individual and the whole. It makes sense to me without a lot of explanation, just because all the things listed are about what's going to be best for the people in a way, and another part is money. The part of this that is more about the people flows into the concept of success and the American dream.

  14. inst-fs-iad-prod.inscloudgate.net
    1. Whether inspired by Mann's plea to elevate the masses to higher moral and financial ground via schooling, or other notions of social justice, even now Europeans refer to publicly funded education as "the social elevator" (Lopez-Fogues, 2011). As Mann originally conceived the function of public education, there was overt recognition that something in society was amiss, and that "something" could be effectively redressed by offering public education to all, not just some. The same "something" that Mann was acutely aware of and deeply troubled by was and is the gross and growing disparities among the social classes. We continue to need methods for shrinking overwhelming and widening class divides. Many of us choose to address the equity gap by struggling to supply universal access to high-quality, free, and appropriate public education. Nearly two centuries later, "the great equalizer" cannot equalize soon enough.

      I feel like this couldn't have been said better. Equity is essential to "level the playing field." The way I think of it is you have two kids. One has a thick cotton blanket; the other doesn't. To make it fair, you give the other one a blanket, yet it's not the same: the other child receives a thin cotton blanket. Yes, they both now have blankets, but the blankets aren't equally effective.

    2. debt

      This is so interesting because it’s become so normalized to take out big loans just to attend school. And if we don't pursue higher education, it’s harder to get "good jobs” that pay a livable wage. When I was researching colleges, I had no idea how expensive they were until I started looking at the tuition costs. It was even more shocking when I saw the prices for graduate school.

    1. Often when students return from breaks I ask them to share with us how ideas that they have learned or worked on in the classroom impacted on their experience outside. This gives them both the opportunity to know that difficult experiences may be common

      Wow, this is so interesting! I never thought about it this way. I used to think teachers just wanted to hear about what students did over the breaks, but I didn’t realize they were trying to show how some experiences can be shared or common among students. It’s a good idea to encourage sharing so that students from different cultures can talk about their experiences and let others learn more about other cultures.

    1. One final note we’d like to make here is that, as we said before, we can use ethics frameworks as tools to help us see into situations. But just because we use an ethics framework to look at a situation doesn’t mean that we will come out with a morally good conclusion. This is perhaps most obvious with something like nihilism, which rejects the very existence of a morally good conclusion. But we can also see this with other frameworks, such as egoism, which we (the authors) believe often gives morally wrong results, or with consequentialist/utilitarianist reasoning, which has been challenged at many points in history (e.g., A Modest Proposal [b102] from 1729, the character Ivan arguing with his brother [b103] in Brothers Karamazov [b104] from 1880, and the two articles Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’ [b105] [archived here] and Effective altruism’s most controversial idea [b106] from 2022). Still, we hope that in using different frameworks (even ones you often disagree with) you are able to understand situations better and with more nuance.

      I found the discussion on Virtue Ethics particularly compelling as it relates to the formation of personal character through consistent virtuous actions. It's intriguing to consider how these principles apply to social media behavior, where one's character is often curated and presented in a manner that may not always reflect one's true self.

    1. But human computers were eventually replaced by electronic computers, and communicating with electronic computers is not simple.

      I don't think electronic computers will be replaced by human computers. A very simple example is ChatGPT, a website commonly used by college students. This website is indeed very intelligent, but you can see at a glance that its answers are AI-synthesized and not words that a normal college student could have written, and that's one way teachers can find out that their students are using it to cheat. So it's just a tool that can assist us in our studies, and electronic computers are made by humans, so if humans ever stop evolving them, they will stagnate. In other words, if the development of electronic computers really becomes that fast and widespread, a large part of the population will lose their jobs and have no financial resources, and then people will also come to think that electronic computers are not as good as imagined, and they will slowly fade away later on.

    1. In a world where religious fundamentalisms flourish alongside scientific rationality, where global capitalism coexists with ethnic rivalries, in a world of reflexive modernization, of the implosion of information, what can be said about music?

      This string of questions seems very prescient when you consider the technological and political changes that exploded in the 20 years since the book was written. Or maybe they are not prescient, but generic? Maybe these conditions are constant in capitalist modernity, and it's the manifestations that change (the particular tech that accelerates communication and access to information; the specific ethnic rivalries and fundamentalisms...). Maybe it's just PSEUDO-prescient, i.e. pretentious.

    1. Walter Kempner is an interesting fellow, kind of a controversial guy from the 1940s and 1950s, maybe the 1960s. He's a physician who did a bunch of studies with diabetics, people that were morbidly obese, and by morbidly obese we're talking hundreds of pounds of obesity; if your team wants to find images, you can see the studies. He had this thing called the rice diet, and in the 1950s and 1960s he put people on a very, very high-carb, very low-fat, very low-protein diet. It was essentially white sugar, so sucrose, and white rice was the majority of the diet. And they got better. They lost weight; some people went from literally being round to being thin, and their diabetes got better. So he fixed diabetes with a very high-carb, very low-fat, very low-protein diet, and it was low protein by necessity because it was all carbohydrates. The problem is that the human brain doesn't want to do this, so he's controversial: in order to get his patients to do this, he had to do some crazy things to cajole them. I'm not condoning his experiments, but the science is interesting, and what it says about human physiology is very compelling to me: you can give someone a diet of pure white sugar and rice and their diabetes gets better. Why does their diabetes get better? And this is not even short term, this is long term; in the span of, I think, four to six months, he could then liberalize these people's diets and their diabetes did not return. So this is really interesting to me, and I think it ties into the seed oil piece.

       I'm not suggesting that this is a reasonable therapy for people, because what we know about human physiology and the human brain is that if you try to push any of the macros too far, our brain really rebels. Humans seem to be able to lose weight by cutting carbohydrates or cutting fat; if you cut both of them together, you have what's called rabbit starvation, and you can lose a lot of weight very quickly, but it's very stressful on the body hormonally. If you cut carbohydrates you have a ketogenic diet; if you cut fat you have a low-fat diet. And if you look at the head-to-head trials of low fat versus low carb, they both produce about the same amount of weight loss. So there's some contention, but it doesn't really look like a ketogenic diet is magical for weight loss, or like a low-fat diet is magical relative to keto; they both work. But when you cut the fat really, really low, that's interesting to me, and this is what happened in the rice diet. The fat was so low that these people were probably becoming fatty acid deficient. There's a fatty acid you can measure in human blood called Mead acid, and it's an indication of essential, quote unquote, fatty acid deficiency. So the hypothesis is that one of the reasons this diet might have worked is that when you restrict fat that much, the cell has to turn over those cell membranes in a different way, and that probably causes a lot of the polyunsaturated fats that are stuck in the cell membranes to become mobilized and turn over. The human body doesn't make polyunsaturated fats, but if you feed someone carbohydrates, the human body can make saturated fats and monounsaturated fats. So this is essentially an accelerated way to get rid of what were potentially excess polyunsaturated fatty acids in these people's cell membranes.

       Again, I don't think this is a good therapy for humans because it's so hard on the brain; humans don't want to do this. We sort of gravitate toward a third fat, a third carbohydrates, and maybe a third protein, depending on how you're looking at it, maybe a little less if you're doing grams or calories, but there's some balance of those things that our body tends toward. If you go too low fat, your body will rebel, and if you go too low carb, your body says, "I want some carbohydrates." So the indication here is that there's something going on in these cell membranes: there's a massive shift that happens in the cell membrane when you get very, very low fat, and I think that has to do with the turnover of these omega-6 fatty acids. There's a couple of ways to do this without going so low fat: you can also just get them out of your diet, extremely intentionally, and then eat more saturated fats in their place. This is the part where it gets a little cumbersome for people to think about, but I think you can get similar results by just having a low linoleic acid diet.

       So let's back up for a moment and talk about linoleic acid. It's omega-6, which means the first double bond is six carbons from the end of the molecule; it's an 18-carbon molecule, and it's polyunsaturated, which means it has multiple double bonds. There's a small amount in ruminant fat, things like cows or goats or bison or lamb, sheep, deer: a small amount, one to two percent. But animals like humans or pigs or chickens that are monogastric accumulate linoleic acid; the more of this fatty acid we eat, the more we store. We don't have a way to get rid of it like cows do; cows can transform it. So where do we find linoleic acid in the human diet? We find it in chicken and pigs that are fed corn and soy, so evolutionarily inappropriate diets, and you find it in nuts and seeds, plant foods. And we have a massive input of this linoleic acid into the human diet now, because we're feeding our animals corn and soy, things they've never eaten historically, and all of our processed food has these seed oils added: things like corn, canola, sunflower, safflower, soybean, grapeseed. These are all seed oils, and they contain between 25 and 65 percent linoleic acid. So what you have, I think, is an evolutionarily inconsistent amount of linoleic acid coming into the human diet, and just like pigs, just like chickens, when we eat corn and soy, when we eat foods, when we eat seed oils that have a lot of this linoleic acid, we store it. I think that over time it accumulates in our cell membranes and in the membranes of our mitochondria, these little powerhouses in the cell, and causes problems; we can get into how it might cause problems at the cellular level if you want, but that's the 15,000-foot perspective. We have this fatty acid in our food supply, but historically, when we were living in a quote naturalistic way, in the forest, in the jungle, there was really very limited access to foods that are high in it. It's very hard to eat the amount of seeds that would give you even three to five tablespoons of seed oils.

       I've done some content about this. Look at corn oil, for instance, or rice bran oil, which is an even better example. Chipotle is very, very popular, and I went to Chipotle and asked what they cook their food in, and they said rice bran oil, the oil extracted from the bran of the rice. They put three to five tablespoons of rice bran oil into a bowl, like a burrito bowl or a burrito, with the rice and the beans and the meat that are cooking in there. To get three to five tablespoons of rice bran oil, you'd have to eat something like three to four pounds of rice, something that humans would never, ever do, right? It's the same with sunflower seeds: sunflower seed oil is in almost everything, soybean oil is very common, and to get three to five tablespoons of corn oil, you'd have to eat somewhere between 60 and 75 ears of corn. So you can see that even if we were eating an occasional sunflower seed from a sunflower plant because we were starving, as humans did historically, or eating a little bit of rice and getting the oil from the bran, or eating some corn in a Native American population, we were never going to get anywhere close to the amount of linoleic acid coming into our bodies in 2023, and really this amount has been increasing massively over the last 100 to 110 years.

      super interesting